# A Particle Filtering Framework for Integrity Risk of GNSS-Camera Sensor
Fusion
Adyasha Mohanty, Shubh Gupta and Grace Xingxin Gao
Stanford University
## BIOGRAPHIES
Adyasha Mohanty is a graduate student in the Department of Aeronautics and
Astronautics at Stanford University. She graduated with a B.S. in Aerospace
Engineering from Georgia Institute of Technology in 2019.
Shubh Gupta is a graduate student in the Department of Electrical Engineering
at Stanford University. He received his B.Tech degree in Electrical
Engineering with a minor in Computer Science from the Indian Institute of
Technology Kanpur in 2018.
Grace Xingxin Gao is an assistant professor in the Department of Aeronautics
and Astronautics at Stanford University. Before joining Stanford University,
she was faculty at University of Illinois at Urbana-Champaign. She obtained
her Ph.D. degree at Stanford University. Her research is on robust and secure
positioning, navigation and timing with applications to manned and unmanned
aerial vehicles, robotics, and power systems.
## ABSTRACT
Adopting a joint approach towards state estimation and integrity monitoring
results in unbiased integrity monitoring unlike traditional approaches. So
far, a joint approach was used in Particle RAIM [1] for GNSS measurements
only. In our work, we extend Particle RAIM to a GNSS-camera fused system for
joint state estimation and integrity monitoring. To account for vision faults,
we derive a probability distribution over position from camera images using
map-matching. We formulate a Kullback-Leibler Divergence [2] metric to assess
the consistency of GNSS and camera measurements and mitigate faults during
sensor fusion. The derived integrity risk upper bounds the probability of
Hazardously Misleading Information (HMI). Experimental validation on a real-world dataset shows that our algorithm produces less than 11 m position error, and the integrity risk overbounds the probability of HMI with a 0.11 failure rate for an 8 m Alert Limit in an urban scenario.
## 1 INTRODUCTION
In urban environments, GNSS signals suffer from lack of continuous satellite
signal availability, non line-of-sight (NLOS) errors and multi-path effects.
Thus, it is important to quantify the integrity or measure of trust in the
correctness of the positioning solution provided by the navigation system.
Traditional integrity monitoring approaches [3] provide point positioning estimates, i.e., the state estimation algorithm is assumed to be correct and then the integrity of the estimated position is assessed. However, addressing state estimation and integrity monitoring separately does not capture the uncertainty in the state estimation algorithm. As a result, the integrity monitoring becomes biased by the acquired state estimate, leading to subsequent faulty state estimation.
Recently, an approach towards joint state estimation and integrity monitoring
for GNSS measurements was proposed in Particle RAIM [1]. Instead of producing
point positioning estimates, Particle RAIM uses a particle filter to form a
multi-modal probability distribution over position, represented as particles.
Traditional RAIM [4] is used to assess the correctness of different ranging
measurements and the particle weights are updated to form the distribution
over the position. From the resulting probability distribution, the integrity
risk is derived using an approximate upper bound to the probability of HMI or
the reference risk. By incorporating the correctness of different measurement
subsets directly into the state estimation, Particle RAIM is able to exclude
multiple faults in GNSS ranging measurements. However, due to large errors
from GNSS measurements, Particle RAIM requires employing conservative measures
such as large Alert Limits to adequately bound the reference risk.
For urban applications, improved positioning accuracy from Particle RAIM is
necessary to provide adequate integrity for smaller Alert Limits. Since
measurements from GNSS are not sufficient to provide the desired accuracy, it
is helpful to augment GNSS with additional sensors that increase redundancy in
measurements. Sensors such as cameras are effective complementary sensors to GNSS. In urban regions, cameras have access to rich environmental features [5] [6] [7] and provide superior sensing to GNSS, which suffers from multi-path and NLOS errors [3] [8] [9] [10].
Thus, with added vision, we need a framework to provide integrity for the
fused GNSS-camera navigation system to account for two categories of faults.
The first category includes data association errors across images, where
repetitive features are found in multiple images creating ambiguity during
feature and image association. This ambiguity is further amplified due to
variations in lighting and environmental conditions. The second category
comprises errors that arise during sensor fusion of GNSS and camera
measurements. Ensuring that faults in either measurement do not dominate the
sensor fusion process is paramount for maximizing the complementary characteristics of GNSS and camera.
Many works provide integrity for GNSS-camera fused systems utilizing a Kalman
Filter [11] framework or an information filter [12]. Vision Aided-RAIM [13]
introduced landmarks as pseudo-satellites and integrated them into a linear
measurement model alongside GPS observations. In [14], the authors implemented
a sequential integrity monitoring approach to isolate single satellite faults.
The integrity monitor uses the innovation sequence output from a single Kalman
filter to derive a recursive expression of the worst case failure mode slopes
and to compute the protection levels (PL) in real-time. An Information Filter
(IF) is used in [15] for data fusion wherein faults are detected based on the
Kullback-Leibler divergence (KL divergence) [2] between the predicted and the
updated distributions. After all detected faulty measurements are removed, the
errors are modeled by a student’s t distribution to compute a PL. A student’s
t distribution is also used in [16] alongside informational sensor fusion for
fault detection and exclusion. The degree of the distribution is adapted in
real-time based on the computed residual from the information filter. A
distributed information filter is proposed in [17] to detect faults in GPS
measurement by checking the consistency through log-likelihood ratio of the
information innovation of each satellite. These approaches model measurement
fault distributions with a Gaussian distribution, although for camera measurements the true distribution may be non-Gaussian, multi-modal, and arbitrary in nature. Using a simplified linear measurement probability
distribution renders these frameworks infeasible and unreliable for safety-critical vision-augmented GNSS applications.
Another line of work builds on Simultaneous Localization and Mapping (SLAM)
based factor graph optimization techniques. Bhamidipati et al. [5] derived PL
by modeling GPS satellites as global landmarks and introducing image pixels
from a fish-eye camera as additional landmarks. The raw image is categorized
into sky and non-sky pixels to further distinguish between LOS and NLOS
satellites. The overall state is estimated using graph optimization along with
an M-estimator. Although this framework is able to exclude multiple faults in
GPS measurements, it is not extendable to measurements from forward or rear
facing cameras that do not capture sky regions. Along similar lines,
measurements from a stereo camera along with GNSS pseudoranges are jointly
optimized in a graph optimization framework in [18]. GNSS satellites are
considered as feature vision points and pose-graph SLAM is applied to achieve
a positioning solution. However, graph optimization approaches also share the
same limitation as Kalman Filter based approaches: they produce point
positioning estimates and do not account for the uncertainty in state
estimation that biases integrity monitoring.
Overall, existing integrity monitoring algorithms for GNSS-camera fusion have
the following limitations:
1. They address state estimation and integrity monitoring separately, similar to traditional RAIM approaches.
2. They accommodate camera measurements within a linear or linearizable framework such as a KF, EKF, or IF and become infeasible when camera measurements are not linearizable without loss of generality.
3. There is no standard way in the literature to quantify the uncertainty in camera measurements directly from raw images.
4. They use outlier rejection techniques to perform fault detection and exclusion after obtaining the positioning solution. There is no framework that accounts for faults both independently in GNSS and camera as well as the faults that arise during sensor fusion.
In our work, we overcome the above limitations by proposing the following
contributions. This paper is based on our recent ION GNSS+ 2020 conference
paper [19].
1. We jointly address state estimation and integrity monitoring for GNSS-camera fusion with a particle filtering framework. We retain the advantages of Particle RAIM while extending it to include camera measurements.
2. We derive a probability distribution over position directly from images leveraging image registration.
3. We develop a metric based on KL divergence [2] to fuse probability distributions obtained from GNSS and camera measurements. By minimizing the KL divergence of the distribution from each camera measurement with respect to the GNSS measurement distribution, we ensure that erroneous camera measurements do not affect the overall probability distribution. Stated otherwise, the divergence metric augments the shared belief over the position from both sensor measurements by minimizing cross-contamination during sensor fusion.
4. We experimentally validate our framework on an urban environment dataset [20] with faults in GNSS and camera measurements.
The rest of the paper is organized as follows. In Section 2, we describe the
overall particle filter framework for probabilistic sensor fusion. In Sections
3 and 4, we infer a distribution over position from GNSS and camera
measurements, respectively. Section 5 elaborates on the probabilistic sensor
fusion of GNSS and camera measurements along with the proposed KL divergence
metric. In Section 6, we describe the integrity risk bounding. Sections 7 and 8 show the experimental setup and the results from experimental validation on
the urban environment dataset, respectively. In Section 9, we conclude our
work.
## 2 PARTICLE FILTER FRAMEWORK FOR PROBABILISTIC SENSOR FUSION
The distribution over the position inferred from GNSS and camera measurements
is multi-modal due to faults in a subset of measurements. Thus, to model such
distributions, we choose a particle filtering approach that further allows us
to keep track of multiple position hypotheses rather than a single position
estimate. Although a particle filtering approach was used in Particle RAIM
[1], the authors only considered GNSS ranging measurements. In our work, we
extend the framework to include measurements from a camera sensor. Figure 1
represents our overall framework. We add the camera and probabilistic sensor
fusion modules to the framework proposed in [1].
Figure 1: Particle filter framework with probabilistic sensor fusion of GNSS
and camera measurements and integrity risk bounding. The highlighted modules
represent our contributions. The GNSS and risk bounding modules are adopted from Particle RAIM [1].
Our framework consists of the following modules:
* Perturbation and propagation: Using noisy inertial odometry from the IMU, we
generate a set of motion samples, each of which perturbs the previous particle
distribution in the propagation step.
* GNSS module: This module from Particle RAIM [1] takes GNSS ranging
measurements from multiple satellites, some of which may be faulty and outputs
a probability distribution over position using a fault-tolerant weighting
scheme described in Section 3. The particles from the GNSS module are
propagated to the camera module to ensure that the distributions from GNSS and
camera share the same domain of candidate positions.
* Camera module and synchronization with motion data: The camera module takes a
camera image and matches it to the images in a map database using image
registration to generate similarity scores. The underlying state of the best
matched image is extracted and propagated forward to the current GNSS time
stamp by interpolating with IMU odometry. This step ensures that the
probability distributions from camera and GNSS measurements are generated at
the same time stamps. Finally, we use a categorical distribution function to
transform the similarity scores into a probability distribution over position
hypotheses as described in Section 4.
* Probabilistic sensor fusion: This module outputs a joint likelihood over
positions from GNSS and camera measurements after fusing them with the
proposed KL divergence metric in Section 5.1. Particles are resampled from the
current distribution with Sequential Importance Resampling [21].
* Risk bounding: We adopt the risk bounding formulation proposed in [1] to
compute the integrity risk from the derived probability distribution over the
position domain. Using generalization bounds from statistical learning theory
[22], the derived risk bound is formally shown to over bound the reference
risk in Section 6.
We elaborate on the various modules of our framework in the following
sections.
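To make the data flow concrete, the following is a minimal Python sketch of one filter epoch, under the assumption that the GNSS and camera modules (sketched in the following sections) return per-particle log-likelihoods over a shared particle set; all names and signatures here are illustrative, not from the paper's codebase:

```python
import numpy as np

def filter_epoch(particles, odometry, gnss_args, cam_args, rng):
    # Perturbation and propagation: apply noisy inertial odometry
    # (filter propagation variance of 3 m^2, as in Table 1).
    particles = particles + odometry + rng.normal(
        scale=np.sqrt(3.0), size=particles.shape)
    # GNSS module (Section 3) and fused camera term (Sections 4-5),
    # both as per-particle log-likelihoods over the same particles.
    log_w = (gnss_log_likelihood(particles, *gnss_args)
             + camera_log_likelihood(particles, *cam_args))
    # Normalize in the log domain for numerical stability.
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    # Sequential Importance Resampling [21].
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]
```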
## 3 GNSS MODULE: PARTICLE RAIM
A likelihood model for the GNSS measurements is derived using the mixture
weighting method proposed in Particle RAIM [1]. Instead of assuming
correctness of all GNSS measurements, the likelihood is modeled as a mixture
of Gaussians to account for faults in some measurements. Individual
measurement likelihoods are modeled as Gaussians with the expected pseudoranges as means and variances based on Dilution of Precision (DOP). The GMM [23] [24] is expressed as:
$L_{t}(m^{t})=\sum_{k=0}^{R}\gamma_{k}\mathcal{N}(m_{k}^{t}|\mu_{X}^{t,k},\sigma_{X}^{t,k});\sum_{k=0}^{R}\gamma_{k}=1,$
(1)
where $L_{t}(m^{t})$ denotes the likelihood of measurement $m$ at time $t$.
$\gamma$ denotes the measurement responsibility or the weights of the
individual measurement components and $R$ refers to the total number of GNSS
ranging measurements. $\mu$ and $\sigma$ represent the mean and the standard
deviation of each Gaussian component inferred from DOP. $X$ refers to the
collection of position hypotheses denoted by particles, and $k$ is the index over the Gaussians in the mixture. The weights are inferred with a single step of the Expectation-Maximization (EM) scheme [25] as shown in Figure 2.
Figure 2: Two steps of the EM scheme used to derive the weight of each
Gaussian likelihood in the GMM. In the expectation step, the local vote for
each particle is computed based on the squared-normal voting on the normalized
residual for a particle obtained with traditional RAIM. The overall confidence
is inferred by normalizing the votes and pooling them using Bayesian maximum a
posteriori (MAP) estimation.
To avoid numerical errors due to finite precision, the likelihood model is implemented in the log domain by extending the input space to include additional copies of the state space variable, one for each GNSS measurement [26]. The new likelihood is written as:
$P\left(m^{t}\middle|X^{t},\chi=k\right)=\gamma_{k}\,\mathcal{N}\left(m_{k}^{t}\middle|\mu_{X}^{t,k},\sigma_{X}^{t,k}\right);\quad\sum_{k=1}^{R}\gamma_{k}=1,$ (2)
where $\chi$ is an index that denotes the associated GNSS measurement with the
particle replica.
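As an illustration, the following is a minimal sketch of evaluating the GMM log-likelihood of Equation (1) for each particle, assuming the shared receiver clock bias has already been removed (cf. Section 7.2) and that the responsibilities $\gamma_{k}$ come from the EM step; the function name and signature are illustrative:

```python
import numpy as np
from scipy.stats import norm

def gnss_log_likelihood(particles, pseudoranges, sat_pos, gamma, sigma):
    """GMM log-likelihood of Eq. (1), per particle.

    particles: (S, 3) position hypotheses; pseudoranges: (R,) measurements;
    sat_pos: (R, 3) satellite positions; gamma: (R,) EM responsibilities;
    sigma: DOP-based standard deviation.
    """
    # Expected pseudorange from each particle to each satellite: (S, R).
    expected = np.linalg.norm(
        particles[:, None, :] - sat_pos[None, :, :], axis=-1)
    # Log of each weighted Gaussian component.
    comp = np.log(gamma)[None, :] + norm.logpdf(
        pseudoranges[None, :], loc=expected, scale=sigma)
    # Log-sum-exp over the mixture components.
    return np.logaddexp.reduce(comp, axis=-1)
```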
## 4 CAMERA MODULE
To quantify the uncertainty from camera images, we use a map-matching
algorithm that matches a camera image directly to an image present in a map
database. Our method is implemented in OpenCV [27] and comprises three steps
shown in Figure 3.
Figure 3: Generating probability distribution over position from camera
images.
Each block is elaborated below.
* Database Creation: We assume prior knowledge of the geographical region where
we are navigating. Based on GPS coordinates, we select images from the known
area using Google Street View Imagery. These images along with their
associated coordinates form the database. Features are extracted from these
images and stored in a key point-descriptor format.
* Image Registration: After receiving a camera test image, we extract features
and descriptors with the ORB [28] algorithm. Although we experimented with
other feature extraction methods such as SIFT [29], SURF [30], and AKAZE [31],
ORB was found most effective for extracting descriptors from highly blurred
images. The descriptor vectors are clustered with a k-means algorithm [32] to
form a vocabulary tree [33]. Each node in the tree corresponds to an inverted
file, i.e., a file containing the ID-numbers of images in which a particular
node is found and the relevance of each feature to that image. The database is
then scored hierarchically based on Term Frequency Inverse Document Frequency
(TF-IDF) scoring [33], which quantifies the relevance of the images in the
database to the camera image. We refer to these scores as the similarity
scores. The image with the highest score is chosen as the best match and the
underlying state is extracted.
* Probability generation after synchronization: After extracting the state from
the best camera image in the database, we propagate the state to the same time
stamp as the GNSS measurement. The raw vehicle odometry is first synchronized
with GNSS measurements using the algorithm in [20]. Using the time difference
between the previous and current GNSS measurements, we linearly interpolate
the extracted state with IMU motion data as shown below.
$x^{t}=x^{t-1}+v^{t-1}dt+0.5a^{t-1}\ dt^{2}$ (3)
where $x^{t}$ refers to the 3D position at epoch $t$, $dt$ refers to the time
difference between successive camera measurements, and $v$ and $a$ are the interpolated IMU velocity and acceleration at epoch $t-1$.
Next, we compute the Euclidean distance between the interpolated state and the
current particle distribution from GNSS measurements to obtain new similarity
scores. This step ensures that the probability distributions computed from
camera and GNSS measurements share the same domain of candidate positions. A
SoftMax function takes the scores and outputs a probability distribution over
position. Normalization of the scores enforces a unit integral for the
distribution.
$Q(n^{t}|X^{t})=\frac{\exp(\omega_{c}^{t})}{\sum_{c^{\prime}}\exp(\omega_{c^{\prime}}^{t})}$ (4)
where $Q$ is the probability distribution associated with camera measurement $n$ at time $t$ over the position domain $X$, $\omega_{c}^{t}$ represents the computed distance score for particle $c$, and $c$ is the index for individual particles.
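A numerically stable sketch of this step is given below, under the assumption that the scores $\omega_{c}^{t}$ are taken as negative Euclidean distances so that particles closer to the interpolated state receive higher probability; the names are illustrative:

```python
import numpy as np

def camera_log_likelihood(particles, matched_state):
    """Log of Eq. (4): SoftMax over distance-based scores."""
    # Distance of each particle to the interpolated best-match state.
    omega = -np.linalg.norm(particles - matched_state, axis=-1)
    # Stable log-SoftMax: subtract the maximum before exponentiating.
    omega = omega - omega.max()
    return omega - np.log(np.exp(omega).sum())
```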
## 5 PROBABILISTIC SENSOR FUSION
After obtaining the probability distributions from GNSS and camera, we need to
form a joint distribution over the position. However, we need to ensure that
faults in camera measurements do not degrade the distribution from GNSS
measurements, one that is coarse but correct since the distribution accounts
for faults in the ranging measurements through the RAIM voting scheme. Thus,
we need a metric to identify and exclude faulty camera measurements leveraging
knowledge of the distribution from GNSS. Additionally, the metric should
assess the consistency of the probability distribution from each camera
measurement with respect to the GNSS distribution and mitigate inconsistent
distributions that result from vision faults. The KL divergence [34]
represents one way to assess the consistency of two probability distributions.
By minimizing the divergence between the distributions inferred from camera
and GNSS, we ensure that both distributions are consistent.
### 5.1 KL Divergence: Metric Formulation
We provide a background on KL divergence prior to explaining our metric.
The KL divergence [34] between two discrete probability distributions, $p$ and
$q$, in the same domain is defined as:
$D_{KL}(p||q)=\sum\nolimits_{z\in\zeta}p_{z}\log\frac{p_{z}}{q_{z}}$ (5)
where $\zeta$ represents the domain of both distributions and $z$ is each
element of the domain. In our work, we ensure that distributions from GNSS and
camera share the same position domain by propagating the particles from the
GNSS distribution to the camera module prior to generating the distribution
from camera measurements. Two important properties of the KL divergence are:
* The KL divergence between two distributions is always non-negative and not symmetrical [34]:
$D_{KL}(p||q)\neq D_{KL}(q||p)$ (6)
where $D_{KL}(q||p)$ is the reverse KL divergence between the distributions
$p$ and $q$.
* $D_{KL}(p||q)$ is convex in the pair $(p,q)$ if both distributions represent probability mass functions (pmf) [34].
Leveraging the above properties, we formulate our metric below.
* Mixture of Experts (MoE): We form a mixture distribution to represent
probability distributions from successive camera measurements, where a non-
Gaussian probability distribution is derived from a single camera image. Each
measurement is assigned a weight to represent its contribution in the mixture.
Instead of setting arbitrary weights, we leverage the GNSS distribution to
infer weights that directly correspond to whether a camera measurement is
correct or faulty. Thus, highly faulty camera measurements are automatically
assigned low weights in the MoE. The mixture distribution is given as:
$Q^{*}(n^{t}|X^{t})=\sum\limits_{j=1}^{K}\alpha_{j}^{*}\
Q^{j}(n_{j}^{t}|X^{t});\sum\limits_{j=1}^{K}\alpha_{j}^{*}=1$ (7)
where $Q^{*}(n^{t}|X^{t})$ represents the mixture distribution formed using
$K$ camera images between two successive GNSS time epochs.
$Q^{j}(n_{j}^{t}|X^{t})$ is the likelihood of a single camera image
$n_{j}^{t}$ recorded at time $t$ with $\alpha_{j}^{*}$ as the normalized
weight. $X^{t}$ are the particles representing position hypotheses and $j$ is the index for the camera images. The weights are normalized below to ensure
that the MoE forms a valid probability distribution:
$\alpha_{j}^{*}=\frac{\alpha_{j}}{\sum\limits_{r=1}^{K}\alpha_{r}}$ (8)
where $\alpha_{j}^{*}$ is the normalized weight, $\alpha_{j}$ is the weight
prior to normalization, $r$ is the index for the number of camera images
between two successive GNSS time epochs, and $K$ is the total number of camera
measurements.
* Set up KL divergence: We set up a divergence minimization metric between the distributions from each camera measurement and all GNSS measurements.
${KL}_{j}\left(\alpha_{j}\,Q^{j}\left(n_{j}^{t}\middle|X^{t}\right)\,\middle\|\,P\left(m_{k}^{t}\middle|X^{t},\chi=k\right)\right)=\sum_{i=1}^{S}\alpha_{j}\,Q^{j}\left(n_{j}^{t}\middle|X^{t}\right)\log\left[\frac{\alpha_{j}\,Q^{j}\left(n_{j}^{t}\middle|X^{t}\right)}{P\left(m_{k}^{t}\middle|X^{t},\chi=k\right)}\right]$ (9)
where $\|$ denotes the divergence between the two probability distributions,
$S$ represents the total number of particles or position hypotheses across
both distributions, and $i$ is the index for the particles. $\ P\
\left(m_{k}^{t}\middle|X^{t},\chi=k\ \right)$ is the probability distribution
at epoch $t$ from GNSS measurements as defined in Equation (2), $\alpha_{j}\ $
is the unnormalized weight, and $j$ is the index for the camera measurement.
* Minimize divergence: Using the convexity of the KL divergence (Property 2), we
minimize each divergence metric with respect to the unknown weight assigned to
the likelihood of each camera measurement. We abbreviate $P\left(m_{k}^{t}\middle|X^{t},\chi=k\right)$ as $P(x_{i})$ and $Q^{j}\left(n_{j}^{t}\middle|X^{t}\right)$ as $Q(x_{i})$ for brevity and expand Equation (9). Since $\alpha_{j}$ is independent of the summation index, we keep it outside the summation and simplify the expansion below.
${KL}_{j}(Q\,\|\,P)=\alpha_{j}\log\alpha_{j}\sum_{i=1}^{S}Q\left(x_{i}\right)+\alpha_{j}\sum_{i=1}^{S}Q\left(x_{i}\right)\log Q\left(x_{i}\right)-\alpha_{j}\sum_{i=1}^{S}Q\left(x_{i}\right)\log P\left(x_{i}\right)$ (10)
Taking the first derivative with respect to $\alpha_{j}$, we obtain
$\frac{\partial}{\partial\alpha_{j}}{KL}_{j}(Q\,\|\,P)=\log\alpha_{j}\sum_{i=1}^{S}Q\left(x_{i}\right)+\sum_{i=1}^{S}Q\left(x_{i}\right)+\sum_{i=1}^{S}Q\left(x_{i}\right)\log Q\left(x_{i}\right)-\sum_{i=1}^{S}Q\left(x_{i}\right)\log P\left(x_{i}\right)$ (11)
Equating the expression on the right to 0 and solving for $\alpha_{j}$ gives
us:
$\alpha_{j}=e^{k};\quad k=\frac{\sum_{i=1}^{S}Q\left(x_{i}\right)\log\frac{P\left(x_{i}\right)}{Q\left(x_{i}\right)}}{\sum_{i=1}^{S}Q\left(x_{i}\right)}-1$ (12)
We also perform a second-derivative test to ensure that the inferred $\alpha_{j}$ minimizes the divergence measure. Since the exponential function with the natural base is always positive, $\alpha_{j}$ is always positive as well, and evaluating the second derivative gives a positive value:
$\frac{1}{\alpha_{j}}\sum_{i=1}^{S}Q(x_{i})>0$ (13)
* Joint probability distribution over position: After obtaining the weights, we
normalize them using Equation (8). We obtain the joint distribution assuming
that the mixture distribution from camera measurements and the GMM from GNSS
measurements are mutually independent. The joint distribution is given as:
$P^{\ast}\left(n^{t},m^{t}\middle|X^{t}\right)=P\left(m_{k}^{t}\middle|X^{t},\chi=k\right)\,Q^{\ast}\left(n^{t}\middle|X^{t}\right)$ (14)
where $P\left(m_{k}^{t}\middle|X^{t},\chi=k\right)$ is the probability distribution from GNSS measurements in Equation (2). We take the log likelihood of the joint distribution to avoid finite-precision errors. A code sketch of the complete fusion step is given below.
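A compact sketch of the fusion step, combining the closed-form weights of Equation (12) with Equations (7), (8), and (14), might look as follows; the clipping constant that guards against zero probabilities is our own assumption:

```python
import numpy as np

def moe_weight(Q, P, eps=1e-12):
    """Closed-form minimizer of Eq. (10): alpha_j = exp(k), Eq. (12)."""
    Q, P = np.clip(Q, eps, None), np.clip(P, eps, None)
    k = np.sum(Q * np.log(P / Q)) / np.sum(Q) - 1.0
    return np.exp(k)

def fuse(P_gnss, Q_cams):
    """Mixture of Experts over camera images, then independent fusion
    with the GNSS distribution; returns the joint log-likelihood."""
    alphas = np.array([moe_weight(Q, P_gnss) for Q in Q_cams])
    alphas = alphas / alphas.sum()                 # Eq. (8)
    Q_star = alphas @ np.array(Q_cams)             # Eq. (7)
    return np.log(P_gnss) + np.log(np.clip(Q_star, 1e-12, None))  # Eq. (14)
```

Note that a camera distribution far from the GNSS distribution has a large divergence $KL(Q||P)$ and hence, by Equation (12), an exponentially small weight in the mixture.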
## 6 INTEGRITY RISK BOUNDING
We upper bound the probability of HMI using the risk bounding framework
introduced in [1]. For a single epoch, the probability of HMI for a given
Alert Limit $r$ is defined as:
$R_{x^{*}}(\pi)=\mathop{\mathbb{E}}_{x\sim\pi}[P(\|x-x^{*}\|\geq r)]$ (15)
where $R_{x^{*}}(\pi)$ is the probability of HMI with reference position $x^{*}$ and $\pi$ is the mean distribution in position space induced by all posterior distributions. The distributions are created by generating samples around the measured
odometry and then perturbing the initial particle distribution. From the PAC-
Bayesian [35] formulation and as shown in [1], the reference risk $R(\pi^{t})$ is upper bounded as:
$R(\pi^{t})\leq R_{M}(\pi^{t})+\mathcal{D}_{Ber}^{-1}\left(R_{M}(\pi^{t}),\epsilon\right)$ (16)
The first and second terms refer to empirical and divergence risk,
respectively. We explain the computation of each term below.
The empirical risk $R_{M}(\pi^{t})$ is computed from a finite set of perturbed samples of size $M$:
$R_{M}(\pi^{t})=\frac{1}{M}\sum_{i=1}^{M}\mathop{\mathbb{E}}_{x\sim\pi^{t}}[l(x,\pi_{u}^{t})],$ (17)
where $l(x,\pi_{u}^{t})$ is the classification loss with respect to a motion sample resulting in the posterior distribution being classified as hazardous, and $\pi^{t}$ refers to the mean posterior distribution at time $t$.
The divergence risk term $\mathcal{D}_{Ber}^{-1}(R_{M}(\pi^{t}),\epsilon)$
accounts for uncertainty due to perturbations that are not sampled. First, we
compute the gap term $\epsilon$ using KL divergence [2] of the current
distribution from the prior and a confidence requirement in the bound
$\delta$.
$\epsilon=\frac{1}{M}\left(KL(\pi^{t}\,||\,\pi^{t-1})+\log\frac{M+1}{\delta}\right)$ (18)
where $\delta$ is the confidence requirement on the bound. The means of the prior and current distributions are taken as $\pi^{t-1}$ and $\pi^{t}$. The prior and current
distributions are approximated as multivariate Gaussian distributions.
The Inverse Bernoulli Divergence [1] $\mathcal{D}_{Ber}^{-1}$ is defined as:
$\mathcal{D}_{Ber}^{-1}(q,\epsilon)=t\;\;s.t.\;\;\mathcal{D}_{Ber}(q||q+t)=\epsilon$
(19)
where $\mathcal{D}_{Ber}(q\,||\,q+t)$ is the KL divergence [2] between $q$ and $q+t$, and $q$ is given by the empirical risk term. Finally, the Inverse Bernoulli Divergence [1] is approximated as:
$\mathcal{D}_{Ber}^{-1}(q,\epsilon)\approx\sqrt{\frac{2\epsilon}{\frac{1}{q}+\frac{1}{1-q}}}$ (20)
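A minimal sketch of the resulting bound computation, combining Equations (16)-(18) and (20), is given below; the input conventions and the clipping of $q$ away from 0 and 1 are our assumptions:

```python
import numpy as np

def integrity_risk_bound(losses, kl_prior, delta):
    """Risk bound of Eq. (16) from M perturbed motion samples.

    losses: (M,) expected classification losses l(x, pi_u) per sample;
    kl_prior: KL divergence of the current mean distribution from the
    prior (both approximated as Gaussians); delta: confidence parameter.
    """
    M = len(losses)
    r_emp = np.mean(losses)                           # Eq. (17)
    eps = (kl_prior + np.log((M + 1) / delta)) / M    # Eq. (18)
    q = np.clip(r_emp, 1e-9, 1.0 - 1e-9)
    t = np.sqrt(2.0 * eps / (1.0 / q + 1.0 / (1.0 - q)))  # Eq. (20)
    return r_emp + t                                  # Eq. (16)
```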
## 7 EXPERIMENTS
### 7.1 Datasets
We test our framework on a 2.3 km long urban driving dataset from Frankfurt
[20]. We use GNSS pseudorange measurements, images from a forward-facing
camera, ground truth from a NovAtel receiver, and odometry from the IMU. The
dataset contains NLOS errors in GNSS measurements and vision faults due to
variations in illumination. In addition to the real-world dataset, we create
emulated datasets by inducing faults in GNSS and vision measurements with
various controlled parameters.
### 7.2 Experimental Setup and Parameters
* Real-world dataset: We use GNSS ranging measurements with NLOS errors. For
simplicity, we estimate the shared clock bias by subtracting the average
residuals with respect to ground truth from all GNSS pseudoranges at one time
epoch.
* Emulated dataset: First, we vary the number of satellites with NLOS errors by
adding back the residuals to randomly selected satellites. This induces clock
errors in some measurements which are perceived as faults. Secondly, we remove
the NLOS errors from all measurements but add Gaussian bias noise to
pseudorange measurements from random satellites at random time instances. The number of faults is varied between 2 and 9 out of 12 available measurements at any given time step. We induce faults in camera measurements by blurring random images with a 21x21 Gaussian kernel and adding occlusions covering 25-50% of the image height and width; a sketch of this fault injection is given below.
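A sketch of this fault injection, assuming OpenCV and a NumPy random generator; rendering the occlusion as a black patch is our assumption, as the paper does not specify the fill value:

```python
import cv2
import numpy as np

def induce_vision_faults(image, rng):
    """Blur with a 21x21 Gaussian kernel and add a random occlusion
    covering 25-50% of the image height and width."""
    faulty = cv2.GaussianBlur(image, (21, 21), 0)
    h, w = faulty.shape[:2]
    oh, ow = (rng.uniform(0.25, 0.5) * np.array([h, w])).astype(int)
    top, left = rng.integers(0, h - oh), rng.integers(0, w - ow)
    faulty[top:top + oh, left:left + ow] = 0  # black occlusion patch
    return faulty
```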
During the experimental simulation, a particle filter tracks the 3D position
(x,y,z) of the car and uses faulty GNSS and camera measurements along with
noisy odometry. Probability distributions are generated independently from
GNSS and camera and fused with the KL divergence metric to form the joint
distribution over positions. At each time epoch, the particle distribution
with the highest total log-likelihood is chosen as the estimated distribution
for that epoch. The integrity risk is computed from 10 posterior distributions
of the initial particle distribution and the reference risk is computed with
ground truth. Our experimental parameters are listed in Table 1.
Table 1: Experimental Parameters for Validation with Real-world and Emulated Datasets

Parameter | Value | Parameter | Value
---|---|---|---
No. of GNSS measurements | 12 | Added Gaussian bias to GNSS measurements | 20-200 m
No. of faults in GNSS measurements | 2-9 | No. of particles | 120
Measurement noise variance | 10 m² | Filter propagation variance | 3 m²
Alert Limit | 8, 16 m | No. of odometry perturbations | 10
### 7.3 Baselines and Metrics
We use Particle RAIM as the baseline to evaluate our algorithm’s performance
for state estimation. The metric for state estimation is the root mean square
error (RMSE) of the estimated position with respect to ground truth for the
entire trajectory. The risk bounding performance is evaluated with metrics derived from a failure event, i.e., when the derived risk bound fails to upper bound the reference risk. The metrics are the following: failure ratio (the fraction of cases where the derived risk bound fails to upper bound the reference risk), failure error (the mean error during all failure events), and bound gap (the average gap between the derived integrity risk and the reference risk).
For evaluating the integrity risk, we specify a performance requirement that
the position should lie within the Alert Limit with at least 90% probability.
A fault occurs if the positioning error exceeds the Alert Limit. The metrics
for integrity risk are reported based on when the system has insufficient
integrity or sufficient integrity [36], which respectively refer to the states
when a fault is declared or not. The false alarm rate is the fraction of epochs in which the system declares insufficient integrity in the absence of a fault. The missed identification rate is the fraction of epochs in which the system declares sufficient integrity even though a fault is present.
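A sketch of how these metrics could be computed over a run, assuming per-epoch arrays of the derived risk bound, reference risk, and positioning error; using all epochs as the denominator of the rates is our assumption:

```python
import numpy as np

def evaluation_metrics(risk_bound, ref_risk, pos_err, alert_limit, req=0.10):
    failures = risk_bound < ref_risk        # bound fails to cover the risk
    failure_ratio = failures.mean()
    failure_error = pos_err[failures].mean() if failures.any() else np.nan
    bound_gap = np.mean(risk_bound - ref_risk)
    fault = pos_err > alert_limit           # true fault indicator
    declared = risk_bound > req             # insufficient integrity declared
    p_fa = np.mean(declared & ~fault)       # false alarm rate
    p_mi = np.mean(~declared & fault)       # missed identification rate
    return failure_ratio, failure_error, bound_gap, p_fa, p_mi
```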
## 8 RESULTS
### 8.1 State Estimation
First, we test our algorithm with NLOS errors in GNSS ranging measurements and
added camera faults. Quantitative results in Table 2 demonstrate that our
algorithm produces 3D positioning estimates with overall RMSE of less than 11
m. Additionally, our algorithm reports lower errors compared to Particle RAIM
for all test cases. Our algorithm is able to compensate for the residual
errors from Particle RAIM by including camera measurements in the framework.
This leads to improved accuracy in the positioning solution.
Table 2: RMSE in 3D Position with NLOS errors and added vision faults

No. of faults out of 12 available GNSS measurements | Particle RAIM-Baseline (m) | Our Algorithm (m)
---|---|---
2 | 18.1 | 6.3
4 | 19.1 | 6.1
6 | 16.9 | 5.9
9 | 26.6 | 10.6
For qualitative comparison, we overlay the trajectories from our algorithm on
ground truth and highlight regions with positioning error of greater than 10 m
in Figures 4 and 5. Trajectories from Particle RAIM show large deviations from
ground truth in certain regions, either due to poor satellite signal
availability or high NLOS errors in the faulty pseudorange measurements.
However, similar deviations are absent from the trajectories from our
algorithm which uses both GNSS and camera measurements. Our KL divergence
metric is able to mitigate the errors from vision and the errors from cross-contamination during sensor fusion, allowing us to produce lower positioning
error.
(a) Particle RAIM (Baseline)
(b) Our Algorithm
Figure 4: State estimation under NLOS errors for 6 faulty GNSS pseudo range
measurements and added vision faults. Regions with positioning error greater
than 10 m are highlighted in red.
(a) Particle RAIM (Baseline)
(b) Our Algorithm
Figure 5: State estimation under NLOS errors for 9 faulty GNSS pseudo range
measurements and added vision faults. Regions with positioning error greater
than 10 m are highlighted in red.
Secondly, we test our algorithm with the emulated datasets. Quantitatively, we plot the RMSE as a function of the number of faulty GNSS ranging measurements in Figure 6 and as a function of the added Gaussian bias value in Figure 7.
For all validation cases, our algorithm produces an overall RMSE less than 10
m. Similar to the results from the real-world dataset, our algorithm reports
lower RMSE values than Particle RAIM. With a fixed number of faults, the
errors generally increase with increasing bias. At a fixed bias value, the
errors decrease with increasing number of faults up to 6 faulty GNSS
measurements since large number of faults are easily excluded by Particle RAIM
producing an improved distribution over the position. The improved
distribution from GNSS further enables the KL divergence metric to exclude
faulty camera measurements and produce a tighter distribution over the
position domain. However, with a higher number of faults, Particle RAIM does
not have enough redundant correct GNSS measurements to exclude the faulty
measurements resulting in higher positioning error. Nevertheless, with added
vision, our algorithm produces better positioning estimates for all test cases
than Particle RAIM.
Figure 6: RMSE from our algorithm and Particle RAIM (baseline) for varying
numbers of faults in GNSS ranging measurements at a fixed added Gaussian bias
value.
Figure 7: RMSE from our algorithm and Particle RAIM (baseline) for various
added Gaussian bias values with fixed number of faulty GNSS measurements.
### 8.2 Integrity Monitoring
We evaluate the integrity risk bounding performance for two Alert Limits, 8 m
and 16 m. For an Alert Limit of 8 m, Table 3 shows that the derived integrity
risk satisfies the performance requirement with very low false alarm and
missed identification rates. The false alarm rates are 0 for all test cases except two, and the missed identification rates are always less than 0.11. Additionally, the integrity risk bound upper bounds the reference
risk with a failure ratio of less than 0.11 and a bound gap of less than 0.4
for all cases. Figures 8 and 9 further support the observation that the
derived risk bound is able to over bound the reference risk with low failure
rate for the same Alert Limit. The few instances when the derived risk bound
fails to upper bound the reference risk occur due to large sudden jumps in the
reference risk that go undetected considering the fixed size of our motion
samples. However, in general, the integrity risk produced from our algorithm
is able to satisfy the desired performance requirement and successfully
overbound the reference risk for an Alert Limit as small as 8 m. This choice
of Alert Limit is allowed because of the low positioning errors that further
enable non-conservative integrity risk bounds.
Table 3: Integrity Risk for Alert Limit of 8 m

Added Bias Value (m) | No. of Faults | $P_{FA}$ | $P_{MI}$ | Failure Ratio | Failure Error (m) | Bound Gap
---|---|---|---|---|---|---
100 | 2 | 0 | 0.03 | 0.07 | 7.5 | 0.26
100 | 4 | 0 | 0.04 | 0.04 | 2.3 | 0.25
100 | 6 | 0 | 0.07 | 0.11 | 2.9 | 0.25
100 | 9 | 0.07 | 0.03 | 0.07 | 4.7 | 0.36
200 | 2 | 0 | 0.07 | 0.07 | 3.5 | 0.20
200 | 4 | 0.11 | 0 | 0.04 | 4.8 | 0.40
200 | 6 | 0 | 0 | 0 | - | 0.38
200 | 9 | 0 | 0.07 | 0.04 | 5.4 | 0.36
Figure 8: Reference risk and integrity risk bound with 8 m Alert Limit for
varying numbers of faults and added bias of 100 m in GNSS measurements. The
derived risk bound over bounds the reference risk with less than 0.11 failure
ratio for all test cases.
Figure 9: Reference risk and integrity risk bound with 8 m Alert Limit for
varying numbers of faults and added bias of 200 m in GNSS measurements. The
derived risk bound over bounds the reference risk with less than 0.07 failure
ratio for all test cases.
For an Alert Limit of 16 m, Table 4 shows that the integrity risk satisfies the integrity performance requirement with zero false alarm rates. Furthermore,
the missed identification rates are always 0 except for the test case with 9
faults and 100 m added bias. Specifying a larger Alert Limit lowers the risk
associated with the distribution over position since almost all particles from
the perturbed distributions lie within the Alert Limit. Thus, the integrity
risk with a 16 m Alert Limit is reported to be much smaller compared to the risk obtained with an 8 m Alert Limit, as shown in Figures 8 and 9. Additionally, the derived risk bound produces an even lower failure ratio of less than 0.07 and a tighter bound gap of less than 0.1. Overall, the derived risk bound overbounds the reference risk for various bias and fault scenarios in Figures 10 and 11.
Table 4: Integrity Risk for Alert Limit of 16 m

Added Bias Value (m) | No. of Faults | $P_{FA}$ | $P_{MI}$ | Failure Ratio | Failure Error (m) | Bound Gap
---|---|---|---|---|---|---
100 | 2 | 0 | 0 | 0 | - | 0.10
100 | 4 | 0 | 0 | 0 | - | 0.08
100 | 6 | 0 | 0 | 0.04 | 5.9 | 0.05
100 | 9 | 0 | 0.04 | 0.07 | 9.7 | 0.08
200 | 2 | 0 | 0 | 0.07 | 5.0 | 0.09
200 | 4 | 0 | 0 | 0.07 | 4.2 | 0.07
200 | 6 | 0 | 0 | 0 | 3.6 | 0.06
200 | 9 | 0 | 0 | 0.04 | 3.8 | 0.01
Figure 10: Reference risk and integrity risk bound with 16 m Alert Limit for
varying numbers of faults and added bias of 100 m in GNSS measurements. The
derived risk bound over bounds the reference risk with less than 0.07 failure
ratio for all test cases.
Figure 11: Reference risk and integrity risk bound with 16 m Alert Limit for
varying numbers of faults and added bias of 200 m in GNSS measurements. The
derived risk bound over bounds the reference risk with less than 0.07 failure
ratio for all test cases.
## 9 CONCLUSION
In this paper, we presented a framework for joint state estimation and
integrity monitoring for a GNSS-camera fused system using a particle filtering
approach. To quantify the uncertainty in camera measurements, we derived a
probability distribution directly from camera images leveraging a data-driven
approach along with image registration. Furthermore, we designed a metric
based on KL divergence to probabilistically fuse measurements from GNSS and
camera in a fault-tolerant manner. The metric accounts for vision faults and
mitigates the errors that arise due to cross-contamination of measurements
during sensor fusion. We experimentally validated our framework on real-world
data under NLOS errors, added Gaussian bias noise to GNSS measurements, and
added vision faults. Our algorithm reported lower positioning error compared
to Particle RAIM which uses only GNSS measurements. The integrity risk from
our algorithm satisfied the integrity performance requirement for Alert Limits
of 8 m and 16 m with low false alarm and missed identification rates.
Additionally, the derived integrity risk successfully provided an upper bound
to the reference risk with a low failure rate for both Alert Limits, making
our algorithm suitable for practical applications in urban environments.
## 10 ACKNOWLEDGMENT
We express our gratitude to Akshay Shetty, Tara Mina and other members of the
Navigation of Autonomous Vehicles Lab for their feedback on early drafts of
the paper.
## References
* [1] S. Gupta and G. X. Gao, “Particle raim for integrity monitoring,” Proceedings of the 32nd International Technical Meeting of the Satellite Division of The Institute of Navigation, ION GNSS+ 2019, 2019.
* [2] H. Zhu, “On information and sufficiency,” 04 1997.
* [3] N. Zhu, J. Marais, D. Bétaille, and M. Berbineau, “Gnss position integrity in urban environments: A review of literature,” IEEE Transactions on Intelligent Transportation Systems, vol. 19, no. 9, pp. 2762–2778, 2018.
* [4] Y. C. Lee, “Analysis of range and position comparison methods as a means to provide gps integrity in the user receiver,” Proceedings of the Annual Meeting of The Institute of Navigation, pp. 1–4, 1986.
* [5] S. Bhamidipati and G. Gao, “Slam-based integrity monitoring using gps and fish-eye camera,” Proceedings of the 32nd International Technical Meeting of the Satellite Division of The Institute of Navigation, ION GNSS+ 2019, pp. 4116–4129, 10 2019.
* [6] Z. Wang, Y. Wu, and Q. Niu, “Multi-sensor fusion in automated driving: A survey,” IEEE Access, vol. 8, pp. 2847–2868, 2020.
* [7] J. Rife, “Collaborative vision-integrated pseudorange error removal: Team-estimated differential gnss corrections with no stationary reference receiver,” IEEE Transactions on Intelligent Transportation Systems, vol. 13, no. 1, pp. 15–24, 2012.
* [8] C. He, J. Guo, X. Lu, and J. Lu, “Multipath performance analysis of gnss navigation signals,” pp. 379–382, 2014.
* [9] S. Miller, X. Zhang, and A. Spanias, Multipath Effects in GPS Receivers: A Primer. 2015.
* [10] K. Ali, X. Chen, F. Dovis, D. De Castro, and A. J. Fernández, “Gnss signal multipath error characterization in urban environments using lidar data aiding,” pp. 1–5, 2012.
* [11] R. E. Kalman, “A new approach to linear filtering and prediction problems,” Transactions of the ASME–Journal of Basic Engineering, vol. 82, no. Series D, pp. 35–45, 1960.
* [12] X. Wang, N. Cui, and J. Guo, “Information filtering and its application to relative navigation,” Aircraft Engineering and Aerospace Technology, vol. 81, pp. 439–444, 09 2009.
* [13] L. Fu, J. Zhang, R. Li, X. Cao, and J. Wang, “Vision-aided raim: A new method for gps integrity monitoring in approach and landing phase,” Sensors (Basel, Switzerland), vol. 15, pp. 22854–73, 09 2015.
* [14] C. Tanil, S. Khanafseh, M. Joerger, and B. Pervan, “Sequential integrity monitoring for kalman filter innovations-based detectors,” Proceedings of the 32nd International Technical Meeting of the Satellite Division of The Institute of Navigation, ION GNSS+ 2018, 10 2018.
* [15] J. Al Hage and M. E. El Najjar, “Improved outdoor localization based on weighted kullback-leibler divergence for measurements diagnosis,” IEEE Intelligent Transportation Systems Magazine, pp. 1–1, 2018.
* [16] J. Al Hage, P. Xu, and P. Bonnifait, “Bounding localization errors with student’s distributions for road vehicles,” International Technical Symposium on Navigation and timing, 11 2018.
* [17] N. A. Tmazirte, M. E. E. Najjar, C. Smaili, and D. Pomorski, “Multi-sensor data fusion based on information theory. application to gnss positionning and integrity monitoring,” 15th International Conference on Information Fusion, pp. 743–749, 2012.
* [18] Z. Gong, P. Liu, Q. Liu, R. Miao, and R. Ying, “Tightly coupled gnss with stereo camera navigation using graph optimization,” Proceedings of the 32nd International Technical Meeting of the Satellite Division of The Institute of Navigation, ION GNSS+ 2018, pp. 3070–3077, 10 2018.
* [19] A. Mohanty, S. Gupta, and G. X. Gao, “A particle filtering framework for integrity risk of gnss-camera sensor fusion,” Proceedings of the 33rd International Technical Meeting of the Satellite Division of The Institute of Navigation, ION GNSS+ 2020, 2020.
* [20] P. Reisdorf, T. Pfeifer, J. Breßler, S. Bauer, P. Weissig, S. Lange, G. Wanielik, and P. Protzel, “The problem of comparable gnss results – an approach for a uniform dataset with low-cost and reference data,” in The Fifth International Conference on Advances in Vehicular Systems, Technologies and Applications (M. Ullmann and K. El-Khatib, eds.), vol. 5, p. 8, nov 2016. ISSN: 2327-2058.
* [21] F. Gustafsson, F. Gunnarsson, N. Bergman, U. Forssell, J. Jansson, R. Karlsson, and P.-J. Nordlund, “Particle filters for positioning, navigation, and tracking,” IEEE Transactions on Signal Processing, vol. 50, no. 2, pp. 425–437, 2002.
* [22] S. B. O. Bousquet and G. Lugosi, “Introduction to statistical learning theory,” Advanced Lectures on Machine Learning, vol. 3176, pp. 169–207, 2004.
* [23] M. Simandl and J. Dunik, “Design of derivative-free smoothers and predictors,” 14th IFAC Symposium on System Identification, Newcastle, Australia, 03 2006.
* [24] H. W. Sorenson and D. L. Alspach, “Recursive bayesian estimation using gaussian sums,” Automatica, 1971.
* [25] J. P. Vila and P. Schniter, “Expectation-maximization gaussian-mixture approximate message passing,” IEEE Transactions on Signal Processing, vol. 61, no. 19, pp. 4658–4672, 2013.
* [26] C. M. Bishop, Pattern Recognition and Machine Learning. Information Science and Statistics, New York, NY: Springer, 2006.
* [27] G. Bradski, “The opencv library,” Dr. Dobb’s Journal of Software Tools, 2000.
* [28] E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, “Orb: An efficient alternative to sift or surf,” International Conference on Computer Vision, pp. 2564–2571, 2011.
* [29] D. Lowe, “Distinctive image features from scale-invariant keypoints,” International Journal of Computer Vision, vol. 60, pp. 91–110, 11 2004.
* [30] H. Bay, T. Tuytelaars, and L. Van Gool, “Surf: Speeded up robust features,” European Conference on Computer Vision, vol. 3951, pp. 404–417, 2006.
* [31] P. F. Alcantarilla, J. Nuevo, and A. Bartoli, “Fast explicit diffusion for accelerated features in nonlinear scale spaces,” British Machine Vision Conference, 2013.
* [32] S. P. Lloyd, “Least squares quantization in pcm,” Information Theory, IEEE Transactions, vol. 28.2, pp. 129–137, 1982.
* [33] D. Nister and H. Stewenius, “Scalable recognition with a vocabulary tree,” IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, pp. 2161–2168, 2006.
* [34] T. Erven and P. Harremoës, “Rényi divergence and kullback-leibler divergence,” Information Theory, IEEE Transactions on, vol. 60, pp. 3797–3820, 2014.
* [35] L. G. Valiant, “A theory of the learnable,” Commun. ACM, vol. 27, p. 1134–1142, Nov. 1984.
* [36] H. Pesonen, “A framework for bayesian receiver autonomous integrity monitoring in urban navigation,” NAVIGATION: Journal of the Institute of Navigation, vol. 58, pp. 229–240, 09 2011.
# Chance constrained sets approximation: A probabilistic scaling approach - EXTENDED VERSION
M. Mammarella, V. Mirasierra, M. Lorenzen, T. Alamo and F. Dabbene
CNR-IEIIT, c/o Politecnico di Torino, C.so Duca degli Abruzzi 24, Torino, Italy. Universidad de Sevilla, Escuela Superior de Ingenieros, Camino de los Descubrimientos s/n, Sevilla, Spain. Systemwissenschaften, TTI GmbH, Nobelstr. 15, 70569 Stuttgart, Germany.
###### Abstract
In this paper, a sample-based procedure for obtaining simple and computable approximations of chance-constrained sets is proposed. The procedure allows one to control the complexity of the approximating set by defining families of simple approximating sets of given complexity. A probabilistic scaling procedure then allows these sets to be rescaled to obtain the desired probabilistic guarantees. The proposed approach is shown to be applicable to several problems in systems and control, such as the design of Stochastic Model Predictive Control schemes or the solution of probabilistic set membership estimation problems.
## 1 Introduction
In real-world applications, the complexity of the phenomena encountered and the random nature of data make dealing with uncertainty essential. In many cases, uncertainty arises in the modeling phase; in others it is intrinsic to both the system and the operating environment, as for instance wind speed and turbulence in aircraft or wind turbine control [1]. Hence, it is crucial to include the underlying stochastic characteristics of the framework and possibly accept a violation of constraints with a certain probability level, in order to improve the coherence between the model and reality. Deriving
areas, including, but not limited to, optimization [2] and robustness analysis
[3]. However, with respect to robust approaches, where the goal is to
determine a feasible solution which is optimal in some sense for all possible uncertainty instances, the goal in the stochastic framework is to find a solution that is feasible for almost all possible uncertainty realizations [4, 5]. In several applications, including engineering and finance, where
uncertainties in price, demand, supply, currency exchange rate, recycle and
feed rate, and demographic condition are common, it is acceptable, up to a
certain safe level, to relax the inherent conservativeness of robust
constraints enforcing probabilistic constraints. More recently, the method has
been used also in unmanned autonomous vehicle navigation [6, 7] as well as
optimal power flow [8, 9].
In the optimization framework, constraints involving stochastic parameters
that are required to be satisfied with a pre-specified probability threshold
are called chance constraints (CC). In general, dealing with CC implies facing
two serious challenges, that of stochasticity and of nonconvexity [10].
Consequently, while being attractive from a modeling viewpoint, problems
involving CC are often computationally intractable, generally shown to be NP-hard, which seriously limits their applicability. However, being able to
efficiently solve CC problems remains an important challenge, especially in
systems and control, where CC often arise, as e.g. in stochastic model
predictive control (SMPC) [11, 12]. The scientific community has devoted significant research effort to devising computationally efficient approaches to deal with chance constraints. We review such techniques in Section 3, where we highlight three mainstream approaches: i) exact techniques; ii) robust approximations; and iii) sample-based approximations. In this paper, we present what we consider an
important step forward in the sample-based approach. We propose a simple and
efficient strategy to obtain a probabilistically guaranteed inner
approximation of a chance constrained set, with given confidence.
In particular, we describe a two-step procedure that involves: i) the preliminary approximation of the chance constrained set by means of a so-called Simple Approximating Set (SAS); ii) a sample-based scaling procedure that properly scales the SAS so as to guarantee the desired probabilistic properties. The proper selection of a low-complexity SAS allows the designer to easily tune the complexity of the approximating set, significantly reducing the sample complexity. We propose several candidate SAS shapes, grouped in two classes: i) sampled-polytopes; and ii) norm-based SAS.
The probabilistic scaling approach was presented in the conference papers [13, 14]. The present work extends these in several directions: first, we perform here a thorough mathematical analysis, providing proofs of all results. Second, the use of norm-based SAS is extended to comprise more general sets. More importantly, we consider here joint chance constraints. This choice is motivated by the fact that enforcing joint chance constraints, which have to be satisfied simultaneously, adheres better to some applications, despite the inherent complexity. Finally, we present here a second application, besides SMPC, related to probabilistic set-membership identification.
The paper is structured as follows. Section 2 provides a general preamble of
the problem formulation and of chance constrained optimization, including two
motivating examples. An extensive overview of methods for approximating chance constrained sets is reported in Section 3, whereas the probabilistic scaling approach is detailed in Section 4. Sections 5 and 6 are dedicated to the definition of the selected candidate SAS, i.e. sampled-polytopes and norm-based SAS, respectively. Last, in Section 7, we validate the proposed approach
with a numerical example applying our method to a probabilistic set membership
estimation problem. Main conclusions and future research directions are
addressed in Section 8.
### 1.1 Notation
Given an integer $N$, $[N]$ denotes the integers from 1 to $N$. Given
$z\in\mathbb{R}^{s}$ and $p\in[1,\infty)$, we denote by $\|z\|_{p}$ the $\ell_{p}$-norm of $z$, and by $\mathbb{B}^{s}_{p}\doteq\{z\in\mathbb{R}^{s}:\|z\|_{p}\leq 1\}$ the $\ell_{p}$-norm ball of radius one. Given integers $k,N$, and parameter $p\in(0,1)$, the Binomial cumulative distribution function is denoted as
$\mathbf{B}(k;N,p)\doteq\sum_{i=0}^{k}\binom{N}{i}p^{i}(1-p)^{N-i}.$ (1)
The following notation is borrowed from the field of order statistics [15].
Given a set of $N$ scalars $\gamma_{i}\in\mathbb{R}$, $i\in[N]$, we denote by $\gamma_{1:N}$ the smallest one, by $\gamma_{2:N}$ the second smallest one, and so on until $\gamma_{N:N}$, which is equal to the largest one. In this way, given $r\geq 0$, $\gamma_{r+1:N}$ satisfies that no more than $r$ elements of $\{\gamma_{1},\gamma_{2},\ldots,\gamma_{N}\}$ are strictly smaller than $\gamma_{r+1:N}$.
The Chebyshev center of a given set $\mathbb{X}$, denoted as
$\mathsf{Cheb}(\mathbb{X})$, is defined as the center of the largest ball
inscribed in $\mathbb{X}$, i.e.
$\mathsf{Cheb}(\mathbb{X})\doteq\arg\min_{\theta_{c}}\max_{\theta\in\mathbb{X}}\|\theta-\theta_{c}\|^{2}.$
Given an $\ell_{p}$-norm $\|\cdot\|_{p}$, its dual norm $\|\cdot\|_{p^{*}}$ is defined as
$\|c\|_{p^{*}}\doteq\sup_{z\in\mathbb{B}^{s}_{p}}c^{\top}z,\;\forall c\in\mathbb{R}^{s}.$
In particular, the pairs $(p,p^{*})$: $(2,2)$, $(1,\infty)$, and $(\infty,1)$ give rise to dual norms.
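For reference, the Binomial distribution function of Equation (1) and the order statistic $\gamma_{r+1:N}$ can be computed as in the following sketch; SciPy's binom.cdf implements exactly the sum in (1):

```python
import numpy as np
from scipy.stats import binom

def B(k, N, p):
    """Binomial cumulative distribution function, Eq. (1)."""
    return binom.cdf(k, N, p)

def order_stat(gammas, r):
    """gamma_{r+1:N}: no more than r of the N values are strictly
    smaller than the returned element."""
    return np.sort(np.asarray(gammas))[r]
```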
## 2 Problem formulation
Consider a robustness problem, in which the controller parameters and
auxiliary variables are parametrized by means of a decision variable vector
$\theta$, which is usually referred to as design parameter and is restricted
to a set $\Theta\subseteq\mathbb{R}^{n_{\theta}}$. Furthermore, the
uncertainty vector $w\in\mathbb{R}^{n_{w}}$ represents one of the admissible
uncertainty realizations of a random vector with given probability
distribution $\mathsf{Pr}_{\mathbb{W}}$ and (possibly unbounded) support
$\mathbb{W}$.
This paper deals with the special case where the design specifications can be encoded as a set of $n_{\ell}$ uncertain linear inequalities
$F(w)\theta\leq g(w),$ (2)
where
$F(w)=\begin{bmatrix}f_{1}^{\top}(w)\\ \vdots\\ f_{n_{\ell}}^{\top}(w)\end{bmatrix}\in\mathbb{R}^{n_{\ell}\times n_{\theta}},\quad g(w)=\begin{bmatrix}g_{1}(w)\\ \vdots\\ g_{n_{\ell}}(w)\end{bmatrix}\in\mathbb{R}^{n_{\ell}},$
are measurable functions of the uncertainty vector $w\in\mathbb{R}^{n_{w}}$.
The inequality in (2) is to be interpreted component-wise, i.e. $f_{\ell}^{\top}(w)\theta\leq g_{\ell}(w)$ for all $\ell\in[n_{\ell}]$.
Furthermore, we notice that each value of $w$ gives rise to a corresponding
set
$\mathbb{X}(w)=\\{\;\theta\in\Theta\;:\;F(w)\theta\leq g(w)\;\\}.$ (3)
Due to the random nature of the uncertainty vector $w$, each realization of
$w$ corresponds to a different set of linear inequalities, and hence to a
different set $\mathbb{X}(w)$.
In every application, one usually accepts some risk of constraint violation.
While this is often done by restricting the set $\mathbb{W}$ appropriately, a
less conservative solution is obtained by letting $\mathbb{W}$ encompass all
possible values and characterizing the region of the design space $\Theta$ in
which the fraction of elements of $\mathbb{W}$ that violate the constraints is
below a specified level. This concept is rigorously formalized by means of the
notion of _probability of violation_.
###### Definition 1 (Probability of violation).
Consider a probability measure $\mathsf{Pr}_{\mathbb{W}}$ over $\mathbb{W}$ and
let $\theta\in\Theta$ be given. The probability of violation of $\theta$
relative to inequality (2) is defined as
$\mathsf{Viol}(\theta)\doteq\mathsf{Pr}_{\mathbb{W}}\,\\{\,F(w)\theta\not\leq
g(w)\,\\}.$
Given a constraint on the probability of violation, i.e.
$\mathsf{Viol}(\theta)\leq\varepsilon$, we denote as (joint) _chance
constrained set_ of probability $\varepsilon$ (shortly, $\varepsilon$-CCS) the
region of the design space for which this probabilistic constraint is
satisfied. This is formally stated in the next definition.
###### Definition 2 ($\varepsilon$-CCS).
Given $\varepsilon\in(0,1)$, we define the chance constrained set of
probability $\varepsilon$ as follows
$\mathbb{X}_{\varepsilon}=\\{\;\theta\in\Theta\;:\;\mathsf{Viol}(\theta)\leq\varepsilon\;\\}.$
(5)
Note that the $\varepsilon$-CCS represents the region of the design space
$\Theta$ for which this probabilistic constraint is satisfied and it is
equivalently defined as
$\mathbb{X}_{\varepsilon}\doteq\Bigl{\\{}\theta\in\Theta\;:\;\mathsf{Pr}_{\mathbb{W}}\left\\{F(w)\theta\leq
g(w)\right\\}\geq 1-\varepsilon\Bigr{\\}}.$ (6)
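Although $\mathsf{Viol}(\theta)$ is hard to compute exactly, it is straightforward to estimate empirically. The following Python sketch, with hypothetical user-supplied callbacks `F_of_w`, `g_of_w` and `sample_w` (our names, not from the original), approximates $\mathsf{Viol}(\theta)$ by Monte Carlo; a point is then declared (empirically) inside $\mathbb{X}_{\varepsilon}$ when the estimate does not exceed $\varepsilon$.

```python
import numpy as np

def estimate_violation(theta, sample_w, F_of_w, g_of_w, n_mc=100_000, seed=0):
    """Empirical estimate of Viol(theta) = Pr_W{ F(w) theta <= g(w) fails }."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_mc):
        w = sample_w(rng)
        # component-wise check of the uncertain inequalities (2)
        if np.any(F_of_w(w) @ theta > g_of_w(w)):
            hits += 1
    return hits / n_mc
```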
###### Remark 1 (Joint vs. individual CCs).
The constraint $\theta\in\mathbb{X}_{\varepsilon}$, with
$\mathbb{X}_{\varepsilon}$ defined in (6), describes a joint chance
constraint. That is, it requires that the joint probability of satisfying the
inequality constraint
$F(w)\theta\leq g(w)$
is guaranteed to be greater than the probabilistic level $1-\varepsilon$. We
remark that this constraint is notably harder to impose than individual CCs,
i.e. constraints of the form
$\theta\in\mathbb{X}_{\varepsilon_{\ell}}^{\ell}\doteq\Bigl{\\{}\theta\in\Theta\,:\,\mathsf{Pr}_{\mathbb{W}}\left\\{f_{\ell}(w)^{\top}\theta\leq
g_{\ell}(w)\right\\}\geq 1-\varepsilon_{\ell}\Bigr{\\}},\quad\ell\in[n_{\ell}],$
with $\varepsilon_{\ell}\in(0,1)$. A discussion on the differences and
implications of joint and individual chance constraints may be found in
several papers, see for instance [10, 16] and references therein.
###### Example 1.
A simple illustrative example of the set $\varepsilon$-CCS is shown in Figure
1. The dotted circle is the region of the design space that satisfies all the
constraints (the so-called robust region); the constraints are tangent to the
dotted circle at uniformly generated points. The outer red circle represents
the chance constrained set $\mathbb{X}_{\varepsilon}$ for the specific value
$\varepsilon=0.15$. That is, the red circle is constructed so that every point
in it has a probability of violating a random constraint no larger than
$0.15$. Note that in this very simple case, the set $\mathbb{X}_{\varepsilon}$
can be computed analytically, and turns out to be a scaled version of the
robust set. We observe that the $\varepsilon$-CCS is significantly larger than
the robust set.
Figure 1: Red circle = $\mathbb{X}_{\varepsilon}$, dotted circle = unit
circle, blue lines = constraint samples.
Hence, while there exist simple examples for which a closed-form computation
of $\mathbb{X}_{\varepsilon}$ is possible, as the one re-proposed here and
first used in [13], we remark that this is not the case in general. Indeed, as
pointed out in [10], typically the computation of the $\varepsilon$-CCS is
extremely difficult, since the evaluation of the probability
$\mathsf{Viol}(\theta)$ amounts to the computation of a multivariate integral,
which is NP-Hard [17].
Moreover, the set $\varepsilon$-CCS is often nonconvex, except for very
special cases. For example, [1, 18] show that the solution set of separable
chance constraints can be written as the union of cones, which is nonconvex in
general.
###### Example 2 (Example of nonconvex $\varepsilon$-CCS).
To illustrate these inherent difficulties, we consider the following three-
dimensional example ($n_{\theta}=3$) with $w=\left\\{w_{1},w_{2}\right\\}$,
where the first uncertainty $w_{1}\in\mathbb{R}^{3}$ is a three-dimensional
normal-distributed random vector with zero mean and covariance matrix
$\Sigma=\left[\begin{array}[]{ccc}4.5&2.26&1.4\\\ 2.26&3.58&1.94\\\
1.4&1.94&2.19\end{array}\right],$
and the second uncertainty $w_{2}\in\mathbb{R}^{3}$ is a three-dimensional
random vector whose elements are uniformly distributed in the interval
$[0,1]$. The set of viable design parameters is given by $n_{\ell}=4$
uncertain linear inequalities of the form
$F(w)\theta\leq\mathbf{1}_{4},\quad
F(w)=\left[\begin{array}[]{cccc}w_{1}&w_{2}&(2w_{1}-w_{2})&w_{1}^{2}\end{array}\right]^{\top}.$
(7)
The square power $w_{1}^{2}$ is to be interpreted element-wise.
In this case, to obtain a graphical representation of the set
$\mathbb{X}_{\varepsilon}$, we resorted to gridding the set $\Theta$ and, for
each point $\theta$ in the grid, approximating the probability of violation
through a Monte Carlo computation. This procedure is clearly unaffordable in
higher-dimensional settings. In Figure 2 we report the plot of the computed
$\varepsilon$-CCS for different values of $\varepsilon$. We observe that the
set is indeed nonconvex.
Figure 2: The $\varepsilon$-CCS set for $\varepsilon=0.15$ (smaller set),
$\varepsilon=0.30$ (intermediate set), and $\varepsilon=0.45$ (larger set). We
observe that all sets are nonconvex, but the nonconvexity is more evident for
larger values of $\varepsilon$, corresponding to larger levels of accepted
violation, while the set $\mathbb{X}_{\varepsilon}$ appears “almost convex”
for small values of $\varepsilon$. This kind of behaviour is in accordance
with a recent result proving convexity of the $\varepsilon$-CCS for values of
$\varepsilon$ going to zero, a phenomenon usually referred to as eventual
convexity [19].
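For concreteness, a rough Python sketch of the gridding procedure used to produce Figure 2 is reported below; the grid resolution and Monte Carlo batch size are illustrative choices of ours, not those of the original experiment, and, as noted above, this brute-force approach does not scale to higher dimensions.

```python
import numpy as np

Sigma = np.array([[4.5, 2.26, 1.4],
                  [2.26, 3.58, 1.94],
                  [1.4, 1.94, 2.19]])
rng = np.random.default_rng(0)

def sample_w():
    w1 = rng.multivariate_normal(np.zeros(3), Sigma)   # w1 ~ N(0, Sigma)
    w2 = rng.uniform(0.0, 1.0, size=3)                 # w2 ~ U[0, 1]^3
    return w1, w2

def F_of_w(w1, w2):
    # the n_ell = 4 uncertain inequalities F(w) theta <= 1 of (7)
    return np.vstack([w1, w2, 2 * w1 - w2, w1**2])

# fixed Monte Carlo batch, reused for every grid point
batch = [F_of_w(*sample_w()) for _ in range(5_000)]

def violation(theta):
    return np.mean([np.any(F @ theta > 1.0) for F in batch])

eps = 0.15
axis = np.linspace(-1.0, 1.0, 11)          # coarse grid over Theta (slow!)
ccs = [(a, b, c) for a in axis for b in axis for c in axis
       if violation(np.array([a, b, c])) <= eps]
```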
### 2.1 Chance constrained optimization
Finding an optimal $\theta\in\mathbb{X}_{\varepsilon}$ for a given cost
function $J:~{}\mathbb{R}^{n_{\theta}}\rightarrow\mathbb{R}$ leads to the
chance constrained optimization (CCO) problem
$\min_{\theta\in\mathbb{X}_{\varepsilon}}J(\theta),$ (8)
where the cost-function $J(\theta)$ is usually assumed to be a convex, often
even a quadratic or linear function.
We remark that the solution of the CCO problem (8) is in general NP-hard, for
the same reasons reported before. We also note that several stochastic
optimization problems arising in different application contexts can be
formulated as a CCO. Typical examples are for instance the reservoir system
design problem proposed in [20], where the problem is to minimize the total
building and penalty costs while satisfying demands for all sites and all
periods with a given probability, or the cash matching problem [21], where one
aims at maximizing the portfolio value at the end of the planning horizon
while covering all scheduled payments with a prescribed probability. CCO
problems also frequently arise in short-term planning problems in power
systems. These optimal power flow (OPF) problems are routinely solved as part
of the real-time operation of the power grid. The aim is to determine minimum-
cost production levels of controllable generators subject to reliably
delivering electricity to customers across a large geographical area, see e.g.
[8] and references therein.
In the next subsections, we report two control-related problems which served
as motivation of our study.
### 2.2 First motivating example: Stochastic MPC
To motivate the proposed approach, we consider the Stochastic MPC framework
proposed in [12, 11]. We are given a discrete-time system
$x_{k+1}=A(\sigma_{k})x_{k}+B(\sigma_{k})u_{k}+a_{\sigma}(\sigma_{k}),$ (9)
subject to generic uncertainty $\sigma_{k}\in\mathbb{R}^{n_{\sigma}}$, with
state $x_{k}\in\mathbb{R}^{n_{x}}$, control input
$u_{k}\in\mathbb{R}^{n_{u}}$, and the vector valued function
$a_{\sigma}(\sigma_{k})$ representing additive disturbance affecting the
system state. The system matrices $A(\sigma_{k})$ and $B(\sigma_{k})$, of
appropriate dimensions, are (possibly nonlinear) functions of the uncertainty
$\sigma_{k}$ at step $k$. For $k=1,2,\ldots$, the disturbances $\sigma_{k}$
are modeled as realizations of a stochastic process. In particular,
$\sigma_{k}$ are assumed to be independent and identically distributed (iid)
realizations of zero-mean random variables with support
$\mathcal{S}\subseteq\mathbb{R}^{n_{\sigma}}$. Note that the presence of both
additive and multiplicative uncertainty, combined with the nonlinear
dependence on the uncertainty, renders the problem particularly arduous.
Furthermore, we remark that the system representation in (9) is very general,
and encompasses, among others, those in [11, 12, 22].
Given the model (9) and a realization of the state $x_{k}$ at time $k$, state
predictions $t$ steps ahead are random variables as well, and are denoted by
$x_{t|k}$ to differentiate them from the realizations $x_{t+k}$. Similarly,
$u_{t|k}$ denotes predicted inputs that are computed based on the realization
of the state $x_{k}$.
Contrary to [11, 12, 22], where the system dynamics were subject to individual
state and input chance constraints, here we take a more challenging route, and
we consider joint state and input chance constraints of the form (the case
where one wants to impose hard input constraints can also be formulated in a
similar framework, see e.g. [11])
$\mathsf{Pr}_{\boldsymbol{\sigma}}\left\\{H_{x}x_{t|k}+H_{u}u_{t|k}\leq\mathbf{1}_{n_{\ell}}|x_{k}\right\\}\geq
1-\varepsilon,$ (10)
with $t\in\\{0,\ldots,T-1\\}$, $\varepsilon\in(0,1)$, and
$H_{x}\in\mathbb{R}^{n_{\ell}\times n_{x}}$,
$H_{u}\in\mathbb{R}^{n_{\ell}\times n_{u}}$.
The probability $\mathsf{Pr}_{\boldsymbol{\sigma}}$ is measured with respect
to the sequence ${\boldsymbol{\sigma}}=\\{\sigma_{t}\\}_{t>k}$. Hence,
equation (10) states that the probability of violating the linear constraint
$H_{x}x+H_{u}u\leq 1$ for any future realization of the disturbance should not
be larger than $\varepsilon$.
The objective is to derive an asymptotically stabilizing control law for the
system (9) such that, in closed loop, the constraint (10) is satisfied.
Following the approach in [12], a stochastic MPC algorithm is considered to
solve the constrained control problem. The approach is based on repeatedly
solving a stochastic optimal control problem over a finite, moving horizon,
but implementing only the first control action. The design parameter $\theta$
is then given by the control sequence
$\mathbf{u}_{k}=(u_{0|k},u_{1|k},...,u_{T-1|k})$ and the prototype optimal
control problem to be solved at each sampling time $k$ is defined by the cost
function
$J_{T}(x_{k},\mathbf{u}_{k})=\mathbb{E}\left\\{\sum_{t=0}^{T-1}\left(x_{t|k}^{\top}Qx_{t|k}+u_{t|k}^{\top}Ru_{t|k}\right)+x_{T|k}^{\top}Px_{T|k}~{}|~{}x_{k}\right\\},$ (11)
with $Q\in\mathbb{R}^{n_{x}\times n_{x}}$, $Q\succeq 0$,
$R\in\mathbb{R}^{n_{u}\times n_{u}}$, $R\succ 0$, and appropriately chosen
$P\succ 0$, subject to the system dynamics (9) and constraints (10).
The online solution of the stochastic MPC problem remains a challenging task,
but several special cases that can be evaluated exactly, as well as methods to
approximate the general solution, have been proposed in the literature. The
approach followed in this work was first proposed in [11, 12], where an
offline sampling scheme was introduced. Therein, with a prestabilizing input
parameterization
$u_{t|k}=Kx_{t|k}+v_{t|k},$ (12)
with suitably chosen control gain $K\in\mathbb{R}^{n_{u}\times n_{x}}$ and new
design parameters $v_{t|k}\in\mathbb{R}^{n_{u}}$, equation (9) is solved
explicitly for the predicted states $x_{1|k},\ldots,x_{T|k}$ and predicted
inputs $u_{0|k},\ldots,u_{T-1|k}$. In this case, the expected value of the
finite-horizon cost (11) can be evaluated offline, leading to a quadratic
cost function of the form
$J_{T}(x_{k},\mathbf{v}_{k})=\begin{bmatrix}x_{k}^{\top}&\textbf{v}_{k}^{\top}&\textbf{1}_{n_{x}}^{\top}\end{bmatrix}\tilde{S}\begin{bmatrix}x_{k}\\\
\textbf{v}_{k}\\\ \textbf{1}_{n_{x}}\\\ \end{bmatrix}$ (13)
in the deterministic variables
$\mathbf{v}_{k}=(v_{0|k},v_{1|k},...,v_{T-1|k})$ and $x_{k}$.
Focusing now on the constraint definition, we notice that by introducing the
uncertainty sequence
$\boldsymbol{\sigma}_{k}=\\{\sigma_{t}\\}_{t=k,...,k+T-1}$, we can rewrite the
joint chance constraint defined by equation (10) as
$\mathbb{X}_{\varepsilon}^{\textsc{smpc}}=\left\\{\begin{bmatrix}x_{k}\\\
\mathbf{v}_{k}\end{bmatrix}\in\mathbb{R}^{n_{x}+n_{u}T}~{}:~{}\mathsf{Pr}_{\boldsymbol{\sigma}_{k}}\left\\{\begin{bmatrix}f_{\ell}^{x}(\boldsymbol{\sigma}_{k})\\\
f_{\ell}^{v}(\boldsymbol{\sigma}_{k})\end{bmatrix}^{\top}\begin{bmatrix}x_{k}\\\
\mathbf{v}_{k}\end{bmatrix}\leq 1,\;\ell\in[n_{\ell}]\right\\}\geq
1-\varepsilon\right\\},$ (14)
with
$f_{\ell}^{x}:\mathbb{R}^{n_{\sigma}}\to\mathbb{R}^{n_{x}},f_{\ell}^{v}:\mathbb{R}^{n_{\sigma}}\to\mathbb{R}^{n_{u}T}$
being known functions of the sequence of random variables
$\boldsymbol{\sigma}_{k}$. We remark that, in the context of this paper,
neither the detailed derivation of the cost matrix $\tilde{S}$ in (13) nor
that of $f_{\ell}^{v},f_{\ell}^{x}$ are relevant for the reader, who can refer
to [12, Appendix A] for details. Note that, by defining
$\theta=[x_{k}^{\top},\mathbf{v}_{k}^{\top}]^{\top}$, (14) is given in the
form of (6).
As discussed in [11], obtaining a good and simple enough approximation of the
set $\mathbb{X}_{\varepsilon}^{\textsc{smpc}}$ is extremely important for
online implementation of SMPC schemes. In particular, if we are able to
replace the set $\mathbb{X}_{\varepsilon}^{\textsc{smpc}}$ by a suitable inner
approximation, we would be able to guarantee probabilistic constraint
satisfaction of the ensuing SMPC scheme. On the other hand, we would like this
inner approximation to be simple enough, so to render the online computations
fast enough.
### 2.3 Second motivating example: probabilistic set membership estimation
Suppose that there exists $\bar{\theta}\in\Theta$ such that
$|y-\bar{\theta}^{T}\varphi(x)|\leq\rho,\;\forall(x,y)\in\mathbb{W}\subseteq\mathbb{R}^{n_{x}}\times\mathbb{R},$
where $\varphi:\mathbb{R}^{n_{x}}\to\mathbb{R}^{n_{\theta}}$ is a (possibly
non-linear) regressor function, and $\rho>0$ accounts for modelling errors.
The (deterministic) set membership estimation problem, see [23], [24],
consists of computing the set of parameters $\theta$ that satisfy the
constraint
$|y-\theta^{T}\varphi(x)|\leq\rho$
for all possible values of $(x,y)\in\mathbb{W}$. In the literature, this set
is usually referred to as the feasible parameter set, that is
${\mathsf{FPS}}\doteq\\{\;\theta\in\Theta\;:\;|y-\theta^{T}\varphi(x)|\leq\rho,\;\forall(x,y)\in\mathbb{W}\;\\}.$
(15)
If, for given $w=(x,y)$, we define the set
$\mathbb{X}(w)=\\{\;\theta\in\Theta\;:\;|y-\theta^{T}\varphi(x)|\leq\rho\;\\},$
then the feasible parameter set ${\mathsf{FPS}}$ can be rewritten as
${\mathsf{FPS}}=\\{\;\theta\in\Theta\;:\;\theta\in\mathbb{X}(w),\;\forall
w\in\mathbb{W}\;\\}.$
The deterministic set membership problem suffers from the following
limitations in real applications: i) due to the possible non-linearity of
$\varphi(\cdot)$, checking whether a given $\theta\in\Theta$ satisfies the
constraint $\theta\in\mathbb{X}(w)$ for every $w\in\mathbb{W}$ is often a
difficult problem; ii) in many situations, only samples of $\mathbb{W}$ are
available: thus, the robust constraint cannot be checked and only outer bounds
of ${\mathsf{FPS}}$ can be computed; and iii) because of outliers and the
possibly non-finite support of $\mathbb{W}$, the set ${\mathsf{FPS}}$ is often
empty (especially for small values of $\rho$).
If a probability distribution is defined on $\mathbb{W}$, the probabilistic
set membership estimation problem is that of characterizing the set of
parameters $\theta$ that satisfy
$\mathsf{Pr}_{\mathbb{W}}\\{|y-\theta^{T}\varphi(x)|\leq\rho\\}\geq
1-\epsilon,$
for a given probability parameter $\epsilon\in(0,1)$. Hence, we can define
${\mathsf{FPS}}_{\epsilon}$ as the set of parameters that satisfy the previous
probabilistic constraint, that is,
${\mathsf{FPS}}_{\epsilon}=\\{\;\theta\in\Theta\;:\;\mathsf{Pr}_{\mathbb{W}}\\{\theta\in\mathbb{X}(w)\\}\geq
1-\epsilon\;\\}.$
It is immediate to notice that this problem fits the formulation proposed
in this section: it suffices to define
$F(w)=\left[\begin{array}[]{c}\varphi^{T}(x)\\\
-\varphi^{T}(x)\end{array}\right],\;g(w)=\left[\begin{array}[]{c}\rho+y\\\
\rho-y\end{array}\right].$
### 2.4 Chance constrained approximations
Motivated by the discussion above, we are ready to formulate the main problem
studied in this paper.
###### Problem 1 ($\varepsilon$-CCS approximation).
Given the set of linear inequalities (2), and a violation parameter
$\varepsilon$, find an inner approximation of the set
$\mathbb{X}_{\varepsilon}$. The approximation should be: i) simple enough, ii)
easily computable.
A solution to this problem is provided in the paper. In particular, regarding
i), we present a solution in which the approximating set is represented by few
linear inequalities. Regarding ii), we propose a computationally efficient
procedure for its construction (see Algorithm 1).
Before presenting our approach, in the next section we provide a brief
literature overview of different methods presented in the literature to
construct approximations of the $\varepsilon$-CCS set.
## 3 Overview on different approaches to $\varepsilon$-CCS approximations
The construction of computationally efficient approximations of the
$\varepsilon$-CCS is a long-standing problem. In particular, the reader is
referred to the recent work [10], which provides a rather complete discussion
of the topic and covers the most recent results. The authors distinguish
three different approaches, which we briefly revisit here.
### 3.1 Exact techniques
In some very special cases, the $\varepsilon$-CCS is convex and hence the CCO
problem admits a unique solution. This is the case, for instance, of
individual chance constraints with $w$ being Gaussian [25]. Other important
examples of convexity of the set $\mathbb{X}_{\varepsilon}$ involve log-
concave distributions [1, 26]. General sufficient conditions on the convexity
of chance constraints may be found in [27, 28, 29, 19]. However, all these
cases are very specific and hardly extend to the joint chance constraints
considered in this work.
### 3.2 Robust techniques
A second class of approaches consists in finding deterministic conditions that
allow one to construct a set $\underline{\mathbb{X}}$, which is a guaranteed
inner convex approximation of the probabilistic set $\mathbb{X}_{\varepsilon}$.
The classical solution consists in the application of Chebyshev-like
inequalities, see e.g. [30, 31]. More recent techniques, which have proved
particularly promising, involve robust optimization [3], such as the convex
approximations introduced in [32]. A particularly interesting convex relaxation
involves the so-called Conditional Value at Risk (CVaR), see [33] and
references therein. Finally, we point out some recent techniques based on
polynomial moment relaxations [34, 35]. Nonetheless, it should be remarked
that these techniques usually suffer from conservatism and computational
complexity issues, especially in the case of joint chance constraints.
### 3.3 Sample-based techniques
In recent years, a novel approach to approximate chance constraints, based on
random sampling of the uncertain parameters, has gained popularity, see e.g.
[4, 5] and references therein. Sampling-based techniques are characterized by
the use of a finite number $N$ of iid samples of the uncertainty
$\left\\{w^{(1)},w^{(2)},\ldots,w^{(N)}\right\\}$ drawn according to a
probability distribution $\mathsf{Pr}_{\mathbb{W}}$. To each sample
$w^{(i)},i\in[N]$, we can associate the following sampled set
$\mathbb{X}(w^{(i)})=\\{\;\theta\in\Theta\;:\;F(w^{(i)})\theta\leq
g(w^{(i)})\;\\},$ (16)
sometimes referred to as a scenario, since it represents an observed instance
of our probabilistic constraint.
Then, the scenario approach considers the CCO problem (8) and approximates its
solution through the following scenario problem
$\displaystyle\theta^{*}_{sc}=\arg\min J(\theta)$ (17)
$\displaystyle\text{subject to }\theta\in\mathbb{X}(w^{(i)}),i\in[N].$
We note that, if the function $J(\theta)$ is convex, problem (17) becomes a
linearly constrained convex program, for which very efficient solution
approaches exist. A fundamental result [36, 37, 38, 39] provides a
probabilistic certification of the constraint satisfaction for the solution to
the scenario problem. In particular, it is shown that, under some mild
assumptions (non-degenerate problem), we have
$\mathsf{Pr}_{\mathbb{W}^{N}}\left\\{\mathsf{Viol}(\theta^{*}_{sc})>\varepsilon\right\\}\leq\mathbf{B}(n_{\theta}-1;N,\varepsilon),$
(18)
where the probability in (18) is measured with respect to the samples
$\\{w^{(1)},w^{(2)},\ldots,w^{(N)}\\}$. Moreover, the bound in (18) is shown to
be tight. Indeed, for the class of so-called fully-supported problems, the
bound holds with equality, i.e. the Binomial distribution
$\mathbf{B}(n_{\theta}-1;N,\varepsilon)$ represents the exact probability
distribution of the violation probability [37].
A few observations are in order regarding the scenario approach and its
relationship with Problem 1. First, if we define the sampled constraints set
as
$\mathbb{X}_{N}\doteq\bigcap_{i=1}^{N}\mathbb{X}(w^{(i)}),$ (19)
we see that the scenario approach consists in approximating the constraint
$\theta\in\mathbb{X}_{\varepsilon}$ in (8) with its sampled version
$\theta\in\mathbb{X}_{N}$. On the other hand, it should be remarked that the
scenario approach cannot be used to derive any guarantee on the relationship
existing between $\mathbb{X}_{N}$ and $\mathbb{X}_{\varepsilon}$. Indeed, the
nice probabilistic property in (18) holds only for the optimum of the scenario
program $\theta^{*}_{sc}$. This is a fundamental point, since the scenario
results build on the so-called support constraints, which are defined for the
optimum point $\theta^{*}_{sc}$ only.
On the contrary, in our case we are interested in establishing a direct
relation (in probabilistic terms) between the set $\mathbb{X}_{N}$ and the
$\varepsilon$-CCS $\mathbb{X}_{\varepsilon}$. This is indeed possible, but
needs to resort to results based on Statistical Learning Theory [40],
summarized in the following lemma.
###### Lemma 1 (Learning Theory bound).
Given probabilistic levels $\delta\in(0,1)$ and $\varepsilon\in(0,0.14)$, if
the number of samples $N$ is chosen so that $N\geq N_{LT}$, with
$N_{LT}\doteq\frac{4.1}{\varepsilon}\Big{(}\ln\frac{21.64}{\delta}+4.39n_{\theta}\,\log_{2}\Big{(}\frac{8en_{\ell}}{\varepsilon}\Big{)}\Big{)},$
(20)
then
$\mathsf{Pr}_{\mathbb{W}^{N}}\left\\{\mathbb{X}_{N}\subseteq\mathbb{X}_{\varepsilon}\right\\}\geq
1-\delta$.
The lemma, whose proof is reported in Appendix A.1, is a direct consequence of
the results on VC-dimension of the so-called $(\alpha,k)$-Boolean Function,
given in [41].
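In Python, the bound (20) can be transcribed directly; as a sanity check, with $\varepsilon=0.05$, $\delta=10^{-6}$, $n_{\theta}=25$ and $n_{\ell}=14$ this sketch returns a value consistent (up to rounding) with the $N_{LT}^{\textsc{smpc}}=114{,}530$ quoted in Remark 2 below.

```python
import math

def n_lt(eps: float, delta: float, n_theta: int, n_ell: int) -> int:
    """Sample-size bound N_LT of Lemma 1; valid for eps in (0, 0.14)."""
    val = (4.1 / eps) * (math.log(21.64 / delta)
                         + 4.39 * n_theta * math.log2(8 * math.e * n_ell / eps))
    return math.ceil(val)
```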
###### Remark 2 (Sample-based SMPC).
The learning theory-based approach discussed in this section has been applied
in [11] to derive an _offline_ probabilistic inner approximation of the chance
constrained set $\mathbb{X}_{\varepsilon}^{\textsc{smpc}}$ defined in (14),
considering individual chance constraints. In particular, the bound (20) is a
direct extension to the case of joint chance constraints of the result proved
in [11]. Note that since we are considering multiple constraints at the same
time (as in (2)), the number of constraints $n_{\ell}$ enters into the
sample size bound. To explain how the SMPC design in [11] extends to the joint
chance constraints framework, we briefly recall it.
First, we extract offline (i.e. when designing the SMPC control) $N$ iid
samples of the uncertainty, $\boldsymbol{\sigma}_{k}^{(i)}$ of
$\boldsymbol{\sigma}_{k}$, and we consider the sampled set
$\mathbb{X}^{\textsc{smpc}}(\boldsymbol{\sigma}_{k}^{(i)})=\Biggl{\\{}\begin{bmatrix}x_{k}\\\
\mathbf{v}_{k}\end{bmatrix}:\begin{bmatrix}f_{\ell}^{x}(\boldsymbol{\sigma}_{k}^{(i)})\\\
f_{\ell}^{v}(\boldsymbol{\sigma}_{k}^{(i)})\end{bmatrix}^{\top}\begin{bmatrix}x_{k}\\\
\mathbf{v}_{k}\end{bmatrix}\leq 1,\;\ell\in[n_{\ell}]\Biggr{\\}},$
and
$\mathbb{X}_{N}^{\textsc{smpc}}\doteq\bigcap_{i=1}^{N}\mathbb{X}^{\textsc{smpc}}(\boldsymbol{\sigma}_{k}^{(i)})$.
Then, applying Lemma 1 with $n_{\theta}=n_{x}+n_{u}T$, we conclude that if we
extract $N\geq N_{LT}^{\textsc{smpc}}$ samples, it is guaranteed that, with
probability at least $1-\delta$, the sample approximation
$\mathbb{X}_{N}^{\textsc{smpc}}$ is a subset of the original chance constraint
$\mathbb{X}_{\varepsilon}^{\textsc{smpc}}$. Exploiting these results, the SMPC
problem can be approximated conservatively by the linearly constrained
quadratic program
$\displaystyle\min_{\mathbf{v}_{k}}~{}J_{T}(x_{k},\mathbf{v}_{k})\textrm{
subject to }(x_{k},\mathbf{v}_{k})\in\mathbb{X}_{N}^{\textsc{smpc}}.$ (21)
Hence the result reduces the original stochastic optimization program to an
efficiently solvable quadratic program. This represents an undisputed
advantage, which has been demonstrated for instance in [12]. On the other
hand, it turns out that the ensuing number of linear constraints, equal to
$n_{\ell}\cdot N_{LT}^{\textsc{smpc}}$, may still be too large. For instance,
even for a moderately sized MPC problem with $n_{x}=5$ states, $n_{u}=2$
inputs, prediction horizon of $T=10$, simple interval constraints on states
and inputs (i.e. $n_{\ell}=2n_{x}+2n_{u}=14$), and for a reasonable choice of
probabilistic parameters, i.e. $\varepsilon=0.05$ and $\delta=10^{-6}$, we get
$N_{LT}^{\textsc{smpc}}=114,530$, which in turn corresponds to more than $1.6$
million linear inequalities. For this reason, in [11] a post-processing step
was proposed to remove redundant constraints. While it is indeed true that all
the cumbersome computations may be performed offline, it is still the case
that, in applications with stringent requirements on the solution time, the
final number of inequalities may easily become unbearable.
Remark 2 motivates the approach presented in the next section, which builds
upon the results presented in [13]. We show how the probabilistic scaling
approach directly leads to approximations of user-chosen complexity, which can
be directly used in applications instead of creating the need for a post-
processing step to reduce the complexity of the sampled set.
## 4 The Probabilistic Scaling Approach
We propose a novel sample-based approach, alternative to the randomized
procedures proposed so far, which maintains the nice probabilistic features of
these techniques while at the same time providing the designer with a way of
tuning the complexity of the approximation.
The main idea behind this approach consists of first obtaining a simple
initial approximation of the shape of the probabilistic set
$\mathbb{X}_{\varepsilon}$ by exploiting scalable simple approximating sets
(Scalable SAS) of the form
${\mathbb{S}}(\gamma)=\theta_{c}\oplus\gamma{\mathbb{S}}.$ (22)
These sets are described by a center point $\theta_{c}$ and a low-complexity
shape set ${\mathbb{S}}$. The center $\theta_{c}$ and the shape ${\mathbb{S}}$
constitute the design parameters of the proposed approach. By appropriately
selecting the shape ${\mathbb{S}}$, the designer can control the complexity of
the approximating set.
Note that we do not ask this initial set to have any guarantee of
probabilistic nature. What we ask is that this set be able to somehow
“capture” the shape of the set $\mathbb{X}_{\varepsilon}$. Recipes for
constructing this initial set are provided in Sections 5 and 6. The set
${\mathbb{S}}$ constitutes the starting point of a scaling procedure, which
allows to derive a probabilistic guaranteed approximation of the
$\varepsilon$-CCS, as detailed in the next section. In particular, we show how
an optimal scaling factor $\gamma$ can be derived so that the set (22) is
guaranteed to be an inner approximation of $\mathbb{X}_{\varepsilon}$ with the
desired confidence level $\delta$. We refer to the set ${\mathbb{S}}(\gamma)$
as Scalable SAS.
### 4.1 Probabilistic Scaling
In this section, we address the problem of how to scale the set
${\mathbb{S}}(\gamma)$ around its center $\theta_{c}$ to guarantee, with
confidence level $\delta\in(0,1)$, the inclusion of the scaled set into
$\mathbb{X}_{\varepsilon}$. Within this sample-based procedure, we assume that
$N_{\gamma}$ iid samples $\\{w^{(1)},\ldots,w^{(N_{\gamma})}\\}$ are obtained
from $\mathsf{Pr}_{\mathbb{W}}$ and, based on these, we show how to obtain a
scalar $\bar{\gamma}>0$ such that
$\mathsf{Pr}_{\mathbb{W}^{N_{\gamma}}}\\{{\mathbb{S}}(\bar{\gamma})\subseteq\mathbb{X}_{\varepsilon}\\}\geq
1-\delta.$
To this end, we first define the scaling factor associated with a given
realization of the uncertainty.
###### Definition 3 (Scaling factor).
Given a Scalable SAS ${\mathbb{S}}(\gamma)$, with given center $\theta_{c}$
and shape ${\mathbb{S}}\subset\Theta$, and a realization $w\in\mathbb{W}$, we
define the scaling factor of ${\mathbb{S}}(\gamma)$ relative to $w$ as
$\gamma(w)\doteq\left\\{\begin{array}[]{cc}0&\,\,\,\mbox{if}\;\theta_{c}\not\in\mathbb{X}(w)\\\
\max\limits_{{\mathbb{S}}(\gamma)\subseteq\mathbb{X}(w)}\gamma&\,\,\,\mbox{otherwise}.\end{array}\right.$
with $\mathbb{X}(w)$ defined as in (16).
That is, $\gamma(w)$ represents the maximal scaling that can be applied to
${\mathbb{S}}(\gamma)=\theta_{c}\oplus\gamma{\mathbb{S}}$ around the center
$\theta_{c}$ so that ${\mathbb{S}}(\gamma)\subseteq\mathbb{X}(w)$. The
following theorem states how to obtain, by means of sampling, a scaling factor
$\bar{\gamma}$ that guarantees, with high probability, that
${\mathbb{S}}(\bar{\gamma})\subseteq\mathbb{X}_{\varepsilon}$.
###### Theorem 1 (Probabilistic scaling).
Given a candidate Scalable SAS ${\mathbb{S}}(\gamma)$, with
$\theta_{c}\in\mathbb{X}_{\varepsilon}$, accuracy parameter
$\varepsilon\in(0,1)$, confidence level $\delta\in(0,1)$, and a discarding
integer parameter $r\geq 0$, let $N_{\gamma}$ be chosen such that
$\mathbf{B}(r;N_{\gamma},\varepsilon)\leq\delta.$ (23)
Draw $N_{\gamma}$ iid samples $\\{w^{(1)},w^{(2)},\ldots,w^{(N_{\gamma})}\\}$
from distribution $\mathsf{Pr}_{\mathbb{W}}$, compute the corresponding
scaling factor
$\gamma_{i}\doteq\gamma(w^{(i)}),$ (24)
for $i\in[N_{\gamma}]$ according to Definition 3, and let
$\bar{\gamma}=\gamma_{1+r:N_{\gamma}}$. Then, with probability no smaller than
$1-\delta$,
${\mathbb{S}}(\bar{\gamma})=\theta_{c}\oplus\bar{\gamma}{\mathbb{S}}\subseteq\mathbb{X}_{\varepsilon}.$
Proof: If $\bar{\gamma}=0$, then we have
${\mathbb{S}}(\bar{\gamma})\equiv\theta_{c}\in\mathbb{X}_{\varepsilon}$.
Hence, consider $\bar{\gamma}>0$. From Property 1 in Appendix A.2, we have
that $\bar{\gamma}$ satisfies, with probability no smaller than $1-\delta$,
$\mathsf{Pr}_{\mathbb{W}}\\{{\mathbb{S}}(\bar{\gamma})\not\subseteq\mathbb{X}(w)\\}\leq\varepsilon.$
Equivalently,
$\mathsf{Pr}_{\mathbb{W}}\\{{\mathbb{S}}(\bar{\gamma})\subseteq\mathbb{X}(w)\\}\geq 1-\varepsilon.$
This can be rewritten as $\mathsf{Pr}_{\mathbb{W}}\\{F(w)\theta\leq
g(w),\;\;\forall\theta\in{\mathbb{S}}(\bar{\gamma})\\}\geq 1-\varepsilon,$
which implies that the probability of violation in
$\theta_{c}\oplus\bar{\gamma}{\mathbb{S}}$ is no larger than $\varepsilon$,
with probability no smaller than $1-\delta$. ∎
In the light of the theorem above, from now on we will assume that the
Scalable SAS is such that $\theta_{c}\in\mathbb{X}_{\varepsilon}$. The above
result leads to the following simple algorithm, in which we summarise the main
steps for constructing the scaled set, and we provide an explicit way of
determining the discarding parameter $r$.
Algorithm 1 Probabilistic SAS Scaling
1:Given a candidate Scalable SAS ${\mathbb{S}}(\gamma)$, and probability
levels $\varepsilon$ and $\delta$, choose
$N_{\gamma}\geq\frac{7.47}{\varepsilon}\ln\frac{1}{\delta}\quad\text{ and
}\quad r=\left\lfloor\frac{\varepsilon N_{\gamma}}{2}\right\rfloor.$ (25)
2:Draw $N_{\gamma}$ samples of the uncertainty
$w^{(1)},\ldots,w^{(N_{\gamma})}$
3:for $i=1$ to $N_{\gamma}$ do
4: Solve the optimization problem $\displaystyle\gamma_{i}\doteq$
$\displaystyle\max_{{\mathbb{S}}(\gamma)\subseteq\mathbb{X}(w^{(i)})}\gamma$
(26)
5:end for
6:Return $\bar{\gamma}=\gamma_{1+r:N_{\gamma}}$, the $(1+r)$-th smallest value
of $\gamma_{i}$.
A few comments are in order regarding the algorithm above. In step 4, for each
uncertainty sample $w^{(i)}$ one has to solve an optimization problem, which
amounts to finding the largest value of $\gamma$ such that
${\mathbb{S}}(\gamma)$ is contained in the set $\mathbb{X}(w^{(i)})$ defined
in (16). If the SAS is chosen accurately, we can show that this problem is
convex and computationally very efficient: this is discussed in Section 5.
Then, in step 6, one has to re-order the set
$\\{\gamma_{1},\gamma_{2},\ldots,\gamma_{N_{\gamma}}\\}$ so that the first
element is the smallest one, the second element is the second smallest one,
and so on and so forth, and then return the $(1+r)$-th element of the
reordered sequence. The following corollary applies to Algorithm 1.
###### Corollary 1.
Given a candidate SAS set in the form
${\mathbb{S}}(\gamma)=\theta_{c}\oplus\gamma{\mathbb{S}}$, assume that
$\theta_{c}\in\mathbb{X}_{\varepsilon}$. Then, Algorithm 1 guarantees that
${\mathbb{S}}(\bar{\gamma})\subseteq\mathbb{X}_{\varepsilon}$ with probability
at least $1-\delta$.
Proof: The result is a direct consequence of Theorem 1, which guarantees that,
for given $r\geq 0$,
${\mathbb{S}}(\bar{\gamma})\subseteq\mathbb{X}_{\varepsilon}$ holds with
probability no smaller than $1-\delta$ whenever the scaling is performed on a
number of samples satisfying (23). From [42, Corollary 1] it follows that, in
order to satisfy (23), it suffices to take $N_{\gamma}$ such that
$N_{\gamma}\geq\frac{1}{\varepsilon}\left(r+\ln\frac{1}{\delta}+\sqrt{2r\ln\frac{1}{\delta}}\right).$
(27)
Since $r=\lfloor\frac{\varepsilon N_{\gamma}}{2}\rfloor$, we have that
$r\leq\frac{\varepsilon N_{\gamma}}{2}$. Thus, inequality (27) is satisfied if
$\displaystyle N_{\gamma}$ $\displaystyle\geq$
$\displaystyle\frac{1}{\varepsilon}\left(\frac{\varepsilon
N_{\gamma}}{2}+\ln\frac{1}{\delta}+\sqrt{\varepsilon
N_{\gamma}\ln\frac{1}{\delta}}\right)$ $\displaystyle=$
$\displaystyle\frac{N_{\gamma}}{2}+\frac{1}{\varepsilon}\ln\frac{1}{\delta}+\sqrt{N_{\gamma}\frac{1}{\varepsilon}\ln\frac{1}{\delta}}.$
Letting $\nu\doteq\sqrt{N_{\gamma}}$ and
$\alpha\doteq\sqrt{\frac{1}{\varepsilon}\ln\frac{1}{\delta}}$ (note that both
quantities under the square roots are positive), the above inequality can be
rewritten as $\nu^{2}-2\alpha\nu-2\alpha^{2}\geq 0,$ which holds for
$\nu\geq(1+\sqrt{3})\alpha$. In turn, this rewrites as
$N_{\gamma}\geq\frac{(1+\sqrt{3})^{2}}{\varepsilon}\ln\frac{1}{\delta}.$
The formula (25) follows by observing that $(1+\sqrt{3})^{2}<7.47$. ∎
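A compact Python transcription of Algorithm 1 is sketched below; `scaling_factor` is an assumed oracle implementing Definition 3 (e.g. the bisection of Example 3 or the closed form of Theorem 2 below), and `sample_w` draws one uncertainty realization. With $\varepsilon=0.05$ and $\delta=10^{-6}$, the sketch gives $N_{\gamma}=2065$ and $r=51$.

```python
import math
import numpy as np

def probabilistic_sas_scaling(scaling_factor, sample_w, eps, delta, seed=0):
    rng = np.random.default_rng(seed)
    N_gamma = math.ceil((7.47 / eps) * math.log(1.0 / delta))  # step 1, eq. (25)
    r = math.floor(eps * N_gamma / 2.0)
    gammas = np.array([scaling_factor(sample_w(rng)) for _ in range(N_gamma)])
    return np.sort(gammas)[r]   # step 6: the (1+r)-th smallest scaling factor
```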
In the next sections, we provide a “library” of possible candidate SAS shapes.
We recall that these sets need to comply with two main requirements: i) having
a simple and low-complexity representation; and ii) being able to capture the
original shape of the $\varepsilon$-CCS. Moreover, in the light of the
discussion after Algorithm 1, we also ask these sets to be convex.
## 5 Candidate SAS: Sampled-polytope
First, we note that the most straightforward way to design a candidate SAS is
again to resort to a sample-based procedure: we draw a fixed number $N_{S}$ of
“design” uncertainty samples $\\{\tilde{w}^{(1)},\ldots,\tilde{w}^{(N_{S})}\\}$
(these samples are denoted with a tilde to distinguish them from the samples
used in the probabilistic scaling procedure), and construct an initial sampled
approximation by introducing the following sampled-polytope SAS
${\mathbb{S}}_{N_{S}}=\bigcap_{j=1}^{N_{S}}\mathbb{X}(\tilde{w}^{(j)}).$ (28)
Note that the sampled polytope ${\mathbb{S}}_{N_{S}}$, by construction, is
given by the intersection of $n_{\ell}N_{S}$ half-spaces. Hence, we observe
that this approach provides very precise control on the final complexity of
the approximation, through the choice of the number of samples $N_{S}$.
However, it is also clear that a choice for which $N_{S}\ll N_{LT}$ implies
that the probabilistic properties of ${\mathbb{S}}_{N_{S}}$ before scaling
will be poor. We emphasize again, though, that this initial geometry neither
has nor requires any probabilistic guarantees, which are instead provided by
the probabilistic scaling discussed in Section 4.1. It should also be remarked
that this is only one possible heuristic. For instance, along this line one
could as well draw many samples and then apply a clustering algorithm to
reduce them to a desired number of samples.
We remark that, in order to apply the scaling procedure, we need to define a
center around which to scale. To this end, we could compute the so-called
Chebyshev center, defined as the center of the largest ball inscribed in
${\mathbb{S}}_{N_{S}}$, i.e. $\theta_{c}=\mathsf{Cheb}({\mathbb{S}}_{N_{S}})$.
We note that computing the Chebyshev center of a given polytope is an easy
convex optimization problem, for which efficient algorithms exist, see e.g.
[43]. A possible alternative would be the analytic center of
${\mathbb{S}}_{N_{S}}$, whose computation is even easier (see [43] for further
details). Once the center $\theta_{c}$ has been determined, the scaling
procedure can be applied to the set
${\mathbb{S}}_{N_{S}}(\gamma)\doteq\theta_{c}\oplus\gamma\\{{\mathbb{S}}_{N_{S}}\ominus\theta_{c}\\}$.
Note that the center needs to lie inside $\mathbb{X}_{\varepsilon}$. Aside
from that, the choice of $\theta_{c}$ only affects the goodness of the shape;
we can never know a priori whether the analytic center is a better choice than
any random center in $\mathbb{X}_{\varepsilon}$.
Figure 3: Probabilistic scaling approximations of the $\varepsilon$-CCS. (a)
Scaling applied to a sampled-polytope with $N_{S}=100$, yielding
$\gamma=0.8954$; (b) with $N_{S}=1,000$, yielding $\gamma=1.2389$. The initial
sets are depicted in red, the scaled ones in green. (c) Approximation obtained
by direct application of Lemma 1, with $N_{LT}=52,044$. Note that, in this
latter case, to plot the set without out-of-memory errors a pruning procedure
[44] of the $52,044$ linear inequalities was necessary.
###### Example 3 (Sample-based approximations).
To illustrate how the proposed scaling procedure works in practice in the case
of sampled-polytope SAS, we revisit Example 2. To this end, a pre-fixed number
$N_{S}$ of uncertainty samples was drawn, and the sampled inequalities
$F(\tilde{w}^{(j)})\theta\leq g(\tilde{w}^{(j)}),\quad j\in[N_{S}],$
with $F(w),g(w)$ defined in (7), were constructed, leading to the candidate
set ${\mathbb{S}}_{N_{S}}$. Then, the corresponding Chebyshev center was
computed, and Algorithm 1 was applied with $\varepsilon=0.05$,
$\delta=10^{-6}$, leading to $N_{\gamma}=2,120$.
We note that, in this case, the solution of the optimization problem in (26)
may be obtained by bisection on $\gamma$. Indeed, for given $\gamma$, checking
if ${\mathbb{S}}_{N_{S}}(\gamma)\subseteq\mathbb{X}(w^{(i)})$ amounts to
solving some simple linear programs.
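A possible implementation of this bisection is sketched below, under the assumption that the sampled polytope is available in half-space form $\\{\theta : A_{S}\theta\leq b_{S}\\}$ and that `(F_w, g_w)` collects the rows of $\mathbb{X}(w^{(i)})$; containment is certified by checking, via one LP per row, that the support function of ${\mathbb{S}}_{N_{S}}(\gamma)$ does not exceed $g_{\ell}$. Names and tolerances are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def contained(gamma, theta_c, A_S, b_S, F_w, g_w):
    """Check S_NS(gamma) = theta_c + gamma*(S_NS - theta_c) subseteq X(w)."""
    rhs = A_S @ theta_c + gamma * (b_S - A_S @ theta_c)  # H-rep of S_NS(gamma)
    n = len(theta_c)
    for f, g in zip(F_w, g_w):
        res = linprog(-f, A_ub=A_S, b_ub=rhs, bounds=[(None, None)] * n)
        if not res.success or -res.fun > g:   # support function exceeds g_ell
            return False
    return True

def scaling_factor(theta_c, A_S, b_S, F_w, g_w, hi=1e3, tol=1e-6):
    """Bisection on gamma, per Definition 3 (0 if theta_c lies outside X(w))."""
    if np.any(F_w @ theta_c > g_w):
        return 0.0
    lo = 0.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if contained(mid, theta_c, A_S, b_S, F_w, g_w):
            lo = mid
        else:
            hi = mid
    return lo
```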
Two different situations were considered: a case where the number of
inequalities is rather small, $N_{S}=100$, and a case where the complexity of
the SAS is higher, i.e. $N_{S}=1,000$. The outcome of the procedure is
illustrated in Figure 3. We can observe that, for a small $N_{S}$ – Fig. 3(a)
– the initial approximation is rather large (although it is contained in
$\mathbb{X}_{\varepsilon}$, we remark that we have no guarantee that this will
happen). In this case, the probabilistic scaling returns $\gamma=0.8954$,
which is less than one. This means that, in order to obtain a set fulfilling
the desired probabilistic guarantees, we need to shrink it around its center.
In the second case, for a larger number of sampled inequalities – Fig. 3(b) –
the initial set (the red one) is much smaller, and the scaling procedure
inflates the set by returning a value of $\gamma$ greater than one, i.e.
$\gamma=1.2389$. Note that choosing a larger number of samples for the
computation of the initial set does not imply that the final set will be a
better approximation of the $\varepsilon$-CCS.
Finally, we compare this approach to the scenario-like ones discussed in
Subsection 3.3. To this end, we also draw the approximation obtained by
directly applying the Learning Theory bound (20). Note that in this case,
since $n_{\theta}=3$ and $n_{\ell}=4$, we need to take $N_{LT}=13,011$
samples, corresponding to $52,044$ linear inequalities. The resulting set is
represented in Fig. 3(c). We point out that, with this approximation, i) the
set is much more complex, since the number of involved inequalities is much
larger, and ii) the set is much smaller, hence providing a much more
conservative approximation of the $\varepsilon$-CCS. Consequently, the ensuing
chance-constrained optimization problem will be computationally harder, and
will lead to a solution with a larger cost, or even to an infeasible problem
in cases where the approximating set is too small.
## 6 Candidate SAS: Norm-based SAS
In this section, we propose a procedure in which the shape of the Scalable SAS
may be selected a priori. This corresponds to situations where the designer
wants full control over the final shape in terms of structure and complexity.
The main idea is to define so-called norm-based SAS of the form
${\mathbb{S}_{p}}(\gamma)\doteq\theta_{c}\oplus\gamma H\mathbb{B}_{p}^{s}$
(29)
where $\mathbb{B}_{p}^{s}$ is the $\ell_{p}$-ball in $\mathbb{R}^{s}$,
$H\in\mathbb{R}^{n_{\theta}\times s}$, with $s\geq n_{\theta}$, is a design
matrix (not necessarily square), and $\gamma$ is the scaling parameter. Note that
when the matrix $H$ is square (i.e. $s=n_{\theta}$) and positive definite
these sets belong to the class of $\ell_{p}$-norm based sets originally
introduced in [45]. In particular, in case of $\ell_{2}$ norm, the sets are
ellipsoids. This particular choice is the one studied in [14]. Here, we extend
this approach to a much more general family of sets, which encompasses for
instance zonotopes, obtained by letting $p=\infty$ and $s\geq n_{\theta}$.
Zonotopes have been widely studied in geometry, and have found several
applications in systems and control, in particular for problems of state
estimation and robust Model Predictive Control, see e.g. [46].
### 6.1 Scaling factor computation for norm-based SAS
We recall that the scaling factor $\gamma(w)$ is defined as $0$ if
$\theta_{c}\not\in\mathbb{X}(w)$ and as the largest value $\gamma$ for which
${\mathbb{S}_{p}}(\gamma)\subseteq\mathbb{X}(w)$ otherwise. The following
theorem, whose proof is reported in Appendix A.3, provides a direct and simple
way to compute in closed form the scaling factor for a given candidate norm-
based SAS.
###### Theorem 2 (Scaling factor for norm-based SAS).
Given a norm-based SAS ${\mathbb{S}}(\gamma)$ as in (29), and a realization
$w\in\mathbb{W}$, the scaling factor $\gamma(w)$ can be computed as
$\gamma(w)=\min_{\ell\in[n_{\ell}]}\;\gamma_{\ell}(w),$
with $\gamma_{\ell}(w)$, $\ell\in[n_{\ell}]$, given by
$\gamma_{\ell}(w)=\left\\{\begin{array}[]{ccl}0&\mbox{if
}&\tau_{\ell}(w)<0,\\\ \infty&\mbox{if}&\tau_{\ell}(w)\geq 0\mbox{ and
}\rho_{\ell}(w)=0,\\\
{\displaystyle{\frac{\tau_{\ell}(w)}{\rho_{\ell}(w)}}}&\mbox{if}&\tau_{\ell}(w)\geq
0\mbox{ and }\rho_{\ell}(w)>0,\end{array}\right.$ (30)
where $\tau_{\ell}(w)\doteq g_{\ell}(w)-f_{\ell}^{T}(w)\theta_{c}$ and
$\rho_{\ell}(w)\doteq\|H^{T}f_{\ell}(w)\|_{p^{*}}$, with $\|\cdot\|_{p^{*}}$
being the dual norm of $\|\cdot\|_{p}$.
Note that $\gamma(w)$ is equal to zero if and only if $\theta_{c}$ is not
included in the interior of $\mathbb{X}(w)$.
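Theorem 2 translates directly into code. The sketch below assumes the dual-norm pairs $(2,2)$, $(1,\infty)$, $(\infty,1)$ recalled in Section 1.1; the function name is ours.

```python
import numpy as np

def norm_sas_scaling_factor(theta_c, H, F_w, g_w, p):
    """Closed-form scaling factor (30) for the norm-based SAS (29)."""
    dual = {1: np.inf, 2: 2, np.inf: 1}[p]     # dual exponent p*
    gamma = np.inf
    for f, g in zip(F_w, g_w):
        tau = g - f @ theta_c                  # tau_ell(w)
        if tau < 0:                            # theta_c outside X(w)
            return 0.0
        rho = np.linalg.norm(H.T @ f, ord=dual)  # rho_ell(w)
        if rho > 0:
            gamma = min(gamma, tau / rho)      # finite branch of (30)
    return gamma                               # inf if no row is ever active
```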
### 6.2 Construction of a candidate norm-based set
Similarly to Section 5, we first draw a fixed number $N_{S}$ of “design”
uncertainty samples $\\{\tilde{w}^{(1)},\ldots,\tilde{w}^{(N_{S})}\\}$, and
construct an initial sampled-polytope SAS ${\mathbb{S}}_{N_{S}}$ as defined in
(28). Again, we consider the Chebyshev center of ${\mathbb{S}}_{N_{S}}$, or
its analytic center, as a possible center $\theta_{c}$ for our approach.
Given ${\mathbb{S}}_{N_{S}}$, $s\geq n_{\theta}$ and $p\in\\{1,2,\infty\\}$,
the objective is to compute the largest set $\theta_{c}\oplus
H\mathbb{B}^{s}_{p}$ included in ${\mathbb{S}}_{N_{S}}$. To this end, we
assume that we have a function $\mathsf{Vol}_{p}(H)$ that provides a measure
of the size of $H\mathbb{B}^{s}_{p}$. That is, larger values of
$\mathsf{Vol}_{p}(H)$ are obtained for increasing sizes of
$H\mathbb{B}^{s}_{p}$.
###### Remark 3 (On the volume function).
The function $\mathsf{Vol}_{p}(H)$ may be seen as a generalization of the
classical concept of Lebesgue volume of the set ${\mathbb{S}}_{N_{S}}$.
Indeed, when $H$ is a square positive definite matrix, some possibilities are
$\mathsf{Vol}_{p}(H)=\log\,\det(H)$ – which is monotonically related to the
classical volume definition – or $\mathsf{Vol}_{p}(H)=\rm{tr}\,H$ – which for
$p=2$ becomes the well-known sum of ellipsoid semiaxes (see [47] and [43,
Chapter 8]). These measures can be easily generalized to non-square matrices:
it suffices to compute the singular value decomposition. If $H=U\Sigma V^{T}$,
we could use the measures $\mathsf{Vol}_{p}(H)=\rm{tr}\,\Sigma$ or
$\mathsf{Vol}_{p}(H)=\log\,\det(\Sigma)$.
For non-square matrices $H$, specific results for particular values of $p$ are
known. For example, recall that if $p=\infty$ and
$H\in\mathbb{R}^{n_{\theta}\times s}$, $s\geq n_{\theta}$, then
$\theta_{c}\oplus H\mathbb{B}^{s}_{\infty}$ is a zonotope. If we denote as
generator each of the columns of $H$, the volume of a zonotope can be computed
by means of a sum of terms, one for each different way of selecting
$n_{\theta}$ generators out of the $s$ generators of $H$; see [48], [49].
Another possible measure of the size of a zonotope $\theta_{c}\oplus
H\mathbb{B}^{s}_{\infty}$ is the Frobenius norm of $H$ [48].
Given an initial design set ${\mathbb{S}}_{N_{S}}$, we elect as our candidate
Scalable SAS the largest “volume” norm-based SAS contained in
${\mathbb{S}}_{N_{S}}$. Formally, this can be written as the following
optimization problem
$\displaystyle\max\limits_{\theta_{c},H}~{}\mathsf{Vol}_{p}(H)$
$\displaystyle\text{subject to }\theta_{c}\oplus
H\mathbb{B}_{p}^{s}\subseteq{\mathbb{S}}_{N_{S}}.$
By the dual-norm characterization of the containment constraints (cf. Theorem
2), this problem is equivalent to
$\displaystyle\min\limits_{\theta_{c},H}$ $\displaystyle-\mathsf{Vol}_{p}(H)$
s.t. $\displaystyle
f_{\ell}^{T}(\tilde{w}^{(j)})\theta_{c}+\|H^{T}f_{\ell}(\tilde{w}^{(j)})\|_{p^{*}}-g_{\ell}(\tilde{w}^{(j)})\leq
0,$ $\displaystyle\qquad\qquad\qquad\ell\in[n_{\ell}],\;j\in[N_{S}],$
where we have replaced the maximization of $\mathsf{Vol}_{p}(H)$ with the
minimization of $-\mathsf{Vol}_{p}(H)$.
We notice that the constraints are convex in the decision variables; the
objective to be minimized is also convex under particular assumptions, for
example when $H$ is assumed to be square and positive definite and
$\mathsf{Vol}_{p}(H)=\log\det(H)$. For non-square matrices, the constraints
remain convex, but the convexity of the objective is often lost. In this case,
local optimization algorithms should be employed to obtain a possibly
sub-optimal solution.
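As an illustration of the convex case, the following cvxpy sketch computes the maximum-volume norm-based SAS for $p=2$ with a square positive semidefinite $H$ and $\mathsf{Vol}_{p}(H)=\log\det(H)$ (the classical maximum-volume inscribed ellipsoid); `F_list` and `g_list` are assumed to collect the sampled pairs $(F(\tilde{w}^{(j)}),g(\tilde{w}^{(j)}))$.

```python
import cvxpy as cp

def max_volume_ellipsoid(F_list, g_list, n_theta):
    H = cp.Variable((n_theta, n_theta), PSD=True)  # symmetric, so H^T f = H f
    theta_c = cp.Variable(n_theta)
    # one convex constraint per sampled row: f^T theta_c + ||H f||_2 <= g
    cons = [f @ theta_c + cp.norm(H @ f, 2) <= g
            for F, gv in zip(F_list, g_list) for f, g in zip(F, gv)]
    cp.Problem(cp.Maximize(cp.log_det(H)), cons).solve()
    return theta_c.value, H.value
```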
Figure 4: Scaling procedure applied to (a) ${\mathbb{S}}_{1}$-SAS with
$N_{S}=100$ ($\gamma=0.9701$), (b) ${\mathbb{S}}_{1}$-SAS with $N_{S}=1,000$
($\gamma=1.5995$), (c) ${\mathbb{S}}_{\infty}$-SAS with $N_{S}=100$
($\gamma=0.9696$), and (d) ${\mathbb{S}}_{\infty}$-SAS with $N_{S}=1,000$
($\gamma=1.5736$). The initial set is depicted in red, the final one in green.
The sampled design polytope ${\mathbb{S}}_{N_{S}}$ is represented in black.
###### Example 4 (Norm-based SAS).
We revisit again Example 2 to show the use of norm-based SAS. We note that, in
this case, the designer can control the approximation outcome by acting upon
the number of design samples $N_{S}$ used for constructing the set
${\mathbb{S}}_{N_{S}}$. In Figure 4 we report two different norm-based SAS,
respectively with $p=1$ and $p=\infty$, and for each of them we consider two
different values of $N_{S}$, respectively $N_{S}=100$ and $N_{S}=1,000$.
Similarly to what was observed for the sampled-polytopes, we see that for
larger $N_{S}$ the ensuing initial set becomes smaller. Consequently, we have
an inflating process for small $N_{S}$ and a shrinking one for large $N_{S}$.
However, we observe that in this case the final number of inequalities is
independent of $N_{S}$, being equal to $3n_{\theta}+1=10$ for
${\mathbb{S}}_{1}$ and $2n_{\theta}$ for ${\mathbb{S}}_{\infty}$.
#### 6.2.1 Relaxed computation
It is worth remarking that the minimization problem of the previous subsection
might be infeasible. In order to guarantee feasibility, a soft-constrained
optimization problem is proposed. With a relaxed formulation, $\theta_{c}$ is
not guaranteed to satisfy all the sampled constraints. However,
$\theta_{c}\in{\mathbb{S}}_{N_{S}}$ is not necessary to obtain an
$\varepsilon$-CCS (in many practical applications, every element of $\Theta$
has a non-zero probability of violation and ${\mathbb{S}}_{N_{S}}$ is empty
with non-zero probability). Moreover, a relaxed formulation is necessary to
address problems in which there is no element of $\Theta$ with probability of
violation equal to zero (or significantly smaller than $\varepsilon$). Not
considering the possibility of violations is an issue especially when $N_{S}$
is large, because the probability of obtaining an empty sampled set
${\mathbb{S}}_{N_{S}}$ grows with the number of samples $N_{S}$.
Given $\xi>0$ the relaxed optimization problem is
$\displaystyle\min\limits_{\theta_{c},H,\tau_{1},\ldots,\tau_{N_{S}}}~{}-\mathsf{Vol}_{p}(H)+\xi\sum\limits_{j=1}^{N_{S}}\max\\{\tau_{j},0\\}$
(31) $\displaystyle\text{s.t.
}\;f_{\ell}^{T}(w^{(j)})\theta_{c}+\|H^{T}f_{\ell}(w^{(j)})\|_{p^{*}}-g_{\ell}(w^{(j)})\leq\tau_{j},$
$\displaystyle\qquad\qquad\qquad\ell\in[n_{\ell}],\;j\in[N_{S}].$
The parameter $\xi$ provides an appropriate trade-off between satisfaction of
the sampled constraints and the size of the obtained region. One possibility
is to choose $\xi$ in such a way that the fraction of violations
$n_{viol}/N_{S}$ (where $n_{viol}$ is the number of elements $\tau_{j}$ larger
than zero) is smaller than $\varepsilon/2$.
## 7 Numerical example: Probabilistic set membership estimation
We now present a numerical example in which the results of the paper are
applied to the probabilistic set membership estimation problem introduced in
Subsection 2.3. We consider the universal approximation functions given by
Gaussian radial basis function networks (RBFN) [50].
Given the nodes $[x_{1},x_{2},\ldots,x_{M}]$ and the variance parameter $c$,
the corresponding Gaussian radial basis function network is defined as
${\rm{RBFN}}(x,\theta)=\theta^{T}\varphi(x),$
where
$\theta=\left[\begin{array}[]{ccc}\theta_{1}&\ldots&\theta_{M}\end{array}\right]^{T}$
represents the weights and
$\varphi(x)=\left[\begin{array}[]{ccc}\exp\left(\frac{-\|x-x_{1}\|^{2}}{c}\right)&\ldots&\exp\left(\frac{-\|x-x_{M}\|^{2}}{c}\right)\end{array}\right]^{T}$
is the regressor function. Given $\delta\in(0,1)$ and $\varepsilon\in(0,1)$,
the objective is to obtain, with probability no smaller than $1-\delta$, an
inner approximation of the probabilistic feasible parameter set
${\mathsf{FPS}}_{\varepsilon}$, which is the set of parameters
$\theta\in\mathbb{R}^{M}$ that satisfies
$\mathsf{Pr}_{\mathbb{W}}\\{|y-\theta^{T}\varphi(x)|\leq\rho\\}\geq
1-\varepsilon,$ (32)
where $x$ is a random scalar with uniform distribution in $[-5,5]$ and
$y=\sin(3x)+\sigma,$
where $\sigma$ is a random scalar with a normal distribution with mean $5$ and
variance 1.
We use the procedure detailed in Sections 4, 5 and 6 to obtain an SAS of
${\mathsf{FPS}}_{\varepsilon}$. We have taken a grid of $M=20$ points in the
interval $[-5,5]$ to serve as nodes for the RBFN, and a variance parameter of
$c=0.15$. We then drew $N_{S}=350$ random samples $w=(x,y)$ to compute the
initial geometry, which has been chosen to be an $\ell_{\infty}$ norm-based
SAS of dimension 20 with a relaxation parameter of $\xi=1$ (see (31)). The
chosen initial geometry is $\theta_{c}\oplus H\mathbb{B}^{20}_{\infty}$, where
$H$ is constrained to be a diagonal matrix.
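A sketch of this experimental setup is reported below (the value of $\rho$ is an illustrative placeholder of ours; the remaining hyper-parameters are those stated above). Each sample $w=(x,y)$ yields the two linear inequalities of Subsection 2.3, which feed the relaxed problem (31) and, subsequently, Algorithm 1.

```python
import numpy as np

M, c, rho = 20, 0.15, 0.1                 # rho: illustrative placeholder
nodes = np.linspace(-5.0, 5.0, M)         # RBFN nodes on [-5, 5]

def phi(x):
    """Gaussian RBF regressor phi(x) in R^M."""
    return np.exp(-(x - nodes) ** 2 / c)

def sample_w(rng):
    x = rng.uniform(-5.0, 5.0)
    y = np.sin(3.0 * x) + rng.normal(5.0, 1.0)   # sigma ~ N(5, 1)
    return x, y

def rows_of_X(w):
    """F(w), g(w) encoding |y - theta^T phi(x)| <= rho, cf. Subsection 2.3."""
    x, y = w
    return np.vstack([phi(x), -phi(x)]), np.array([rho + y, rho - y])
```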
Once the initial geometry is obtained, we scale it around its center by means
of probabilistic scaling with Algorithm 1. The number of samples required for
the scaling phase to achieve $\varepsilon=0.05$ and $\delta=10^{-6}$ is
$N_{\gamma}=2065$, and the resulting scaling factor is $\gamma=0.3803$. The
scaled geometry $\theta_{c}\oplus\gamma H\mathbb{B}^{20}_{\infty}$ is, with a
probability no smaller than $1-\delta$, an inner approximation of
${\mathsf{FPS}}_{\varepsilon}$ which we will refer to as
${\mathsf{FPS}}_{\varepsilon}^{\delta}$. Since it is a transformation of an
$\ell_{\infty}$ norm ball with a diagonal matrix $H$, we can write it as
${\mathsf{FPS}}_{\varepsilon}^{\delta}=\\{\theta:\theta^{-}\leq\theta\leq\theta^{+}\\},$
where the extreme values $\theta^{-},\theta^{+}\in\mathbb{R}^{20}$ are
represented in Figure 5 [51], along with the central value
$\theta_{c}\in\mathbb{R}^{20}$.
Figure 5: Representation of the extreme values $\theta^{+}$ and $\theta^{-}$
and the central value $\theta_{c}$ of the
${\mathsf{FPS}}_{\varepsilon}^{\delta}$.
Once the ${\mathsf{FPS}}_{\varepsilon}^{\delta}$ has been computed, we can use
its center $\theta_{c}$ to obtain the point estimate
$y\approx\theta_{c}^{T}\varphi(x)$. We can also obtain probabilistic upper and
lower bounds on $y$ by means of equation (32). That is, every point in
${\mathsf{FPS}}_{\varepsilon}^{\delta}$ satisfies, with confidence $1-\delta$:
$\displaystyle\mathsf{Pr}_{\mathbb{W}}\\{y\leq\theta^{T}\varphi(x)+\rho\\}\geq
1-\varepsilon,$ (33)
$\displaystyle\mathsf{Pr}_{\mathbb{W}}\\{y\geq\theta^{T}\varphi(x)-\rho\\}\geq
1-\varepsilon.$
We notice that the tightest probabilistic bounds are obtained with
$\theta^{+}$ for the lower bound and $\theta^{-}$ for the upper one. That is,
we finally obtain that, with confidence $1-\delta$:
$\displaystyle\mathsf{Pr}_{\mathbb{W}}\\{y\leq{\theta^{-}}^{T}\varphi(x)+\rho\\}\geq
1-\varepsilon,$ (34)
$\displaystyle\mathsf{Pr}_{\mathbb{W}}\\{y\geq{\theta^{+}}^{T}\varphi(x)-\rho\\}\geq
1-\varepsilon.$
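For completeness, a minimal sketch of the resulting point and interval predictors follows, with `theta_c`, `theta_minus` and `theta_plus` taken from the scaled set computed above and `phi` as in the previous sketch.

```python
def predict(x, theta_c, theta_minus, theta_plus, rho):
    """Central estimate and the tightest bounds of (34)."""
    y_hat = theta_c @ phi(x)
    upper = theta_minus @ phi(x) + rho     # holds w.p. >= 1 - eps
    lower = theta_plus @ phi(x) - rho      # holds w.p. >= 1 - eps
    return lower, y_hat, upper
```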
Figure 6 shows the results of both the point estimation and the probabilistic
interval estimation.
Figure 6: Real values of $y$ vs central estimation (blue) and interval
prediction bounds (red).
## 8 Conclusions, extensions, and future directions
In this paper, we proposed a general approach to construct probabilistically
guaranteed inner approximations of the chance-constraint set
$\mathbb{X}_{\varepsilon}$. The approach is very general and flexible.
First, we remark that the proposed scaling approach is not limited to sets
defined by linear inequalities, but immediately extends to more general sets.
Indeed, we may consider a generic binary performance function
$\phi:\Theta\times\mathbb{W}\to\\{0,\,1\\}$ defined as
$\phi(\theta,w)=\left\\{\begin{array}[]{ll}0&\text{if $\theta$ meets design
specifications for $w$}\\\ 1&\text{otherwise.}\end{array}\right.$ (35)
Clearly, this formulation encompasses the setup discussed so far, which is
recovered by simply setting $\phi(\theta,w)=0$ if $F(w)\theta\leq g(w)$ and
$\phi(\theta,w)=1$ otherwise. In this case, the violation probability may be
written as
$\mathsf{Viol}(\theta)\doteq\mathsf{Pr}_{\mathbb{W}}\,\\{\,\phi(\theta,w)=1\,\\}=\mathbb{E}_{\mathbb{W}}\\{\phi(\theta,w)\\}$,
and we can still define the set $\mathbb{X}_{\varepsilon}$ as in (5). Then,
given an initial SAS candidate, Algorithm 1 still provides a valid
approximation. However, it should be remarked that, even if we choose a “nice”
SAS such as those previously introduced, the nonconvexity of $\phi$ will most
probably render step 4 of the algorithm intractable. To further elaborate on
this point, let us focus on the case when the design specification may be
expressed as a (nonlinear) inequality of the form
$\psi(\theta,w)\leq 0.$
Then, step 4 consists of solving the following nonconvex optimization problem
$\displaystyle\gamma_{i}\doteq\arg\max\gamma\quad\text{s.t.}\quad{\mathbb{S}}(\gamma)\subseteq\mathbb{X}(w^{(i)})=\bigl\{\theta\in\Theta\;|\;\psi(\theta,w^{(i)})\leq 0\bigr\}.$ (36)
We note that this is in general a possibly hard problem. However, there are cases
when this problem is still solvable. For instance, whenever $\psi(\theta,q)$
is a convex function of $\theta$ for fixed $w$ and the set ${\mathbb{S}}$ is
also convex, the above optimization problem may be formulated as a convex
program by application of Finsler's lemma. We remark that, in such situations,
the approach proposed here is still completely viable, since all the
derivations continue to hold.
Second, we remark that the paper opens the way to the design of other families
of Scaling SAS. For instance, we are currently working on using the family of
sets defined in the form of polynomial superlevel sets (PSS) proposed in [52].
## Appendix A Appendix
### A.1 Proof of Lemma 1
To prove the lemma, we first recall the following definition from [41].
###### Definition 4 ($(\alpha,k)$-Boolean Function).
The function $h:\Theta\times\mathbb{W}\to\mathbb{R}$ is an
$(\alpha,k)$-Boolean function if, for fixed $w$, it can be written as an
expression consisting of Boolean operators involving $k$ polynomials
$p_{1}(\theta),p_{2}(\theta),\ldots,p_{k}(\theta)$ in the components
$\theta_{i}$, $i\in[n_{\theta}]$, whose degree with respect to each
$\theta_{i}$ is no larger than $\alpha$.
Let us now define the binary functions
$h_{\ell}(\theta,w)\doteq\left\{\begin{array}{rl}0&\mbox{ if
}f_{\ell}(w)\theta\leq g_{\ell}(w)\\ 1&\mbox{
otherwise}\end{array}\right.,\;\ell\in[n_{\ell}].$
Introducing the function
$h(\theta,w)\doteq\max\limits_{\ell=1,\ldots,n_{\ell}}h_{\ell}(\theta,w),$ we
see that the violation probability can be alternatively written as
$\mathsf{Viol}(\theta)\doteq\mathsf{Pr}_{\mathbb{W}}\,\{\,h(\theta,w)=1\,\}.$
The proof immediately follows by observing that $h(\theta,w)$ is a
$(1,n_{\ell})$-Boolean function, since it can be expressed as a function of
$n_{\ell}$ Boolean functions, each of them involving a polynomial of degree 1.
Indeed, it is proven in [41, Theorem 8] that if
$h:\Theta\times\mathbb{W}\to\mathbb{R}$ is an $(\alpha,k)$-Boolean function
then, for $\varepsilon\in(0,0.14)$, with probability greater than $1-\delta$
we have $\mathsf{Pr}_{\mathbb{W}}\,\{\,h(\theta,w)=1\,\}\leq\varepsilon$ if
$N$ is chosen such that
$N\geq\frac{4.1}{\varepsilon}\Big(\ln\frac{21.64}{\delta}+4.39\,n_{\theta}\,\log_{2}\Big(\frac{8e\alpha k}{\varepsilon}\Big)\Big).$
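The bound is straightforward to evaluate numerically; the sketch below is a direct transcription, with arbitrary parameter values in the example call.

```python
import math

# Sample-complexity bound for an (alpha, k)-Boolean function, valid for
# eps in (0, 0.14); the example parameter values are arbitrary.
def boolean_sample_bound(eps: float, delta: float,
                         n_theta: int, alpha: int, k: int) -> int:
    return math.ceil(4.1 / eps * (math.log(21.64 / delta)
                     + 4.39 * n_theta * math.log2(8 * math.e * alpha * k / eps)))

# Linear case of Lemma 1: alpha = 1, k = n_ell constraints.
N = boolean_sample_bound(eps=0.05, delta=1e-6, n_theta=20, alpha=1, k=100)
```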
### A.2 Property 1
###### Property 1.
Given $\varepsilon\in(0,1)$, $\delta\in(0,1)$, and $0\leq r\leq N$, let $N$ be
such that $\mathbf{B}(r;N,\varepsilon)\leq\delta$. Draw $N$ iid sample-sets
$\{\mathbb{X}^{(1)},\mathbb{X}^{(2)},\ldots,\mathbb{X}^{(N)}\}$ from a
distribution $\mathsf{Pr}_{\mathbb{X}}$. For $i\in[N]$, let
$\gamma_{i}\doteq\gamma(\mathbb{X}^{(i)})$, with $\gamma(\cdot)$ as in
Definition 3, and suppose that $\bar{\gamma}=\gamma_{1+r:N}>0$. Then, with
probability no smaller than $1-\delta$, it holds that
$\mathsf{Pr}_{\mathbb{X}}\{\theta_{c}\oplus\bar{\gamma}{\mathbb{S}}\not\subseteq\mathbb{X}\}\leq\varepsilon$.
Proof: It has been proven in [38, 39] that if one discards no more than $r$
constraints of a convex problem with $N$ random constraints, then the
probability of violating the constraints with the solution obtained from the
random convex problem is no larger than $\varepsilon\in(0,1)$, with
probability no smaller than $1-\delta$, where
$\delta=\binom{d+r-1}{d-1}\sum\limits_{i=0}^{d+r-1}\binom{N}{i}\varepsilon^{i}(1-\varepsilon)^{N-i},$
and $d$ is the number of decision variables. We apply this result to the
following optimization problem
$\max\limits_{\gamma}\gamma\text{ subject to
}\theta_{c}\oplus\gamma{\mathbb{S}}\subseteq\mathbb{X}^{(i)},\;\;i\in[N].$
From Definition 3, we can rewrite this optimization problem as
$\max\limits_{\gamma}\gamma\text{ subject to
}\gamma\leq\gamma(\mathbb{X}^{(i)}),\;i\in[N].$
We first notice that the problem under consideration is convex and has a
unique scalar decision variable $\gamma$; that is, $d=1$. Also, the non-
degeneracy and uniqueness assumptions required for the application of the
results of [38] and [39] are satisfied. Hence, if we allow $r$ violations in
the above maximization problem, we have that, with probability no smaller than
$1-\delta$, where
$\delta=\binom{r}{0}\sum\limits_{i=0}^{r}\binom{N}{i}\varepsilon^{i}(1-\varepsilon)^{N-i}=\mathbf{B}(r;N,\varepsilon),$
the solution $\bar{\gamma}$ of problem (A.2) satisfies
$\mathsf{Pr}_{\mathbb{X}}\{\bar{\gamma}>\gamma(\mathbb{X})\}\leq\varepsilon.$
We conclude from this, and Definition 3, that with probability no smaller than
$1-\delta$,
$\mathsf{Pr}_{\mathbb{X}}\{\theta_{c}\oplus\bar{\gamma}{\mathbb{S}}\not\subseteq\mathbb{X}\}\leq\varepsilon.$
Finally, note that the optimization problem under consideration can be solved
directly by ordering the values $\gamma_{i}=\gamma(\mathbb{X}^{(i)})$. It is
clear that if $r\geq 0$ violations are allowed, then the optimal value for
$\gamma$ is $\bar{\gamma}=\gamma_{r+1:N}$. ∎
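Both the certificate $\mathbf{B}(r;N,\varepsilon)\leq\delta$ and the order-statistic solution translate directly into code; a minimal sketch (the pairing $N=2065$, $r=51$ in the check follows the assumption $r=\lfloor\varepsilon N/2\rfloor$ made earlier):

```python
import numpy as np
from scipy.stats import binom

# B(r; N, eps) is the binomial distribution function at r; the optimal
# scaling with r allowed violations is the (r+1)-th smallest gamma_i.
def B(r: int, N: int, eps: float) -> float:
    return float(binom.cdf(r, N, eps))

def scaled_gamma(gammas: np.ndarray, r: int) -> float:
    return float(np.sort(gammas)[r])     # gamma_{1+r:N} (0-based index r)

# Certificate check for the values used in the numerical example.
assert B(51, 2065, 0.05) <= 1e-6
```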
### A.3 Proof of Theorem 2
Note that, by definition, the condition $\theta_{c}\oplus\gamma
H\mathbb{B}^{s}_{p}\subseteq\mathbb{X}(w)$ is equivalent to
$\max\limits_{z\in\mathbb{B}^{s}_{p}}f_{\ell}^{T}(w)(\theta_{c}+\gamma
Hz)-g_{\ell}(w)\leq 0,\;\ell\in[n_{\ell}].$
Equivalently, from the dual norm definition, we have
$f_{\ell}^{T}(w)\theta_{c}+\gamma\|H^{T}f_{\ell}(w)\|_{p^{*}}-g_{\ell}(w)\leq
0,\;\ell\in[n_{\ell}].$
Denote by $\gamma_{\ell}$ the scaling factor corresponding to the $\ell$-th
constraint
$f_{\ell}^{T}(w)\theta_{c}+\gamma_{\ell}\|H^{T}f_{\ell}(w)\|_{p^{*}}-g_{\ell}(w)\leq
0.$
With the notation introduced in the Lemma, this constraint can be rewritten as
$\gamma_{\ell}\rho_{\ell}(w)\leq\tau_{\ell}(w).$
The result follows by noting that the corresponding scaling factor
$\gamma_{\ell}(w)$ can be computed as
$\gamma_{\ell}(w)=\max_{\gamma_{\ell}\rho_{\ell}(w)\leq\tau_{\ell}(w)}\gamma_{\ell},$
and that the value of $\gamma(w)$ is obtained from the most restrictive one.
∎
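For the $\ell_{\infty}$ SAS used in the numerical example ($p=\infty$, dual norm $p^{*}=1$), this proof reduces the computation of $\gamma(w)$ to a one-line minimum of ratios; a minimal sketch, assuming $\theta_{c}$ is feasible so that $\tau_{\ell}(w)\geq 0$:

```python
import numpy as np

# gamma(w) for S = H * B^s_inf: the most restrictive ratio tau_ell / rho_ell.
# F has rows f_ell^T(w); g collects g_ell(w).
def gamma_of_sample(F: np.ndarray, g: np.ndarray, theta_c: np.ndarray,
                    H: np.ndarray) -> float:
    tau = g - F @ theta_c                            # tau_ell(w) >= 0
    rho = np.linalg.norm(H.T @ F.T, ord=1, axis=0)   # ||H^T f_ell||_1 = rho_ell(w)
    return float(np.min(tau / rho))
```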
## References
* [1] A. Prékopa, _Stochastic Programming_. Springer Science & Business Media, 2013.
* [2] N. V. Sahinidis, “Optimization under uncertainty: state-of-the-art and opportunities,” _Computers & Chemical Engineering_ , vol. 28, no. 6-7, pp. 971–983, 2004.
* [3] A. Ben-Tal and A. Nemirovski, “Robust convex optimization,” _Mathematics of Operations Research_ , vol. 23, pp. 769–805, 1998.
* [4] G. Calafiore, F. Dabbene, and R. Tempo, “Research on probabilistic methods for control system design,” _Automatica_ , vol. 47, pp. 1279–1293, 2011.
* [5] R. Tempo, G. Calafiore, and F. Dabbene, _Randomized Algorithms for Analysis and Control of Uncertain Systems: with Applications_. Springer Science & Business Media, 2012.
* [6] M. Mammarella, E. Capello, F. Dabbene, and G. Guglieri, “Sample-based SMPC for tracking control of fixed-wing UAV,” _IEEE Control Systems Letters_ , vol. 2, no. 4, pp. 611–616, 2018.
* [7] J. Li, W. Zhan, Y. Hu, and M. Tomizuka, “Generic tracking and probabilistic prediction framework and its application in autonomous driving,” _IEEE Transactions on Intelligent Transportation Systems_ , 2019.
* [8] M. Chamanbaz, F. Dabbene, and C. Lagoa, _Algorithms for Optimal AC Power Flow in the Presence of Renewable Sources_. Wiley Encyclopedia of Electrical and Electronics Engineering, 2020, pp. 1–13.
* [9] M. Chamanbaz, F. Dabbene, and C. M. Lagoa, “Probabilistically robust AC optimal power flow,” _IEEE Transactions on Control of Network Systems_ , vol. 6, no. 3, pp. 1135–1147, 2019.
* [10] X. Geng and L. Xie, “Data-driven decision making in power systems with probabilistic guarantees: Theory and applications of chance-constrained optimization,” _Annual Reviews in Control_ , vol. 47, pp. 341–363, 2019.
* [11] M. Lorenzen, F. Dabbene, R. Tempo, and F. Allgöwer, “Stochastic MPC with offline uncertainty sampling,” _Automatica_ , vol. 81, no. 1, pp. 176–183, 2017.
* [12] M. Mammarella, M. Lorenzen, E. Capello, H. Park, F. Dabbene, G. Guglieri, M. Romano, and F. Allgöwer, “An offline-sampling SMPC framework with application to autonomous space maneuvers,” _IEEE Transactions on Control Systems Technology_ , pp. 1–15, 2018.
* [13] T. Alamo, V. Mirasierra, F. Dabbene, and M. Lorenzen, “Safe approximations of chance constrained sets by probabilistic scaling,” in _2019 18th European Control Conference (ECC)_. IEEE, 2019, pp. 1380–1385.
* [14] M. Mammarella, T. Alamo, F. Dabbene, and M. Lorenzen, “Computationally efficient stochastic mpc: a probabilistic scaling approach,” in _Proc. of 4th IEEE Conference on Control Technology and Applications_ , 2020.
* [15] M. Ahsanullah, V. Nevzorov, and M. Shakil, _An introduction to Order Statistics_. Paris: Atlantis Press, 2013.
* [16] B. Miller and H. Wagner, “Chance constrained programming with joint constraints,” _Operations Research_ , vol. 13, pp. 930–945, 1965.
* [17] L. Khachiyan, “The problem of calculating the volume of a polyhedron is enumerably hard,” _Russian Mathematical Surveys_ , 1989.
* [18] A. Shapiro, D. Dentcheva, and A. Ruszczyński, _Lectures on stochastic programming: modeling and theory_. SIAM, 2014.
* [19] W. van Ackooij, “Eventual convexity of chance constrained feasible sets,” _Optimization_ , vol. 64, no. 5, pp. 1263–1284, 2015.
* [20] A. Prékopa, T. Rapcsák, and I. Zsuffa, “Serially linked reservoir system design using stochastic programming,” _Water Resources Research_ , vol. 14, no. 4, 1978.
* [21] D. Dentcheva, B. Lai, and A. Ruszczyński, “Dual methods for probabilistic optimization problems,” _Mathematical Methods of Operations Research_ , vol. 60, no. 2, pp. 331–346, 2004.
* [22] M. Lorenzen, F. Dabbene, R. Tempo, and F. Allgöwer, “Constraint-tightening and stability in stochastic model predictive control,” _IEEE Transactions on Automatic Control_ , vol. 62, no. 7, pp. 3165–3177, 2017.
* [23] A. Vicino and G. Zappa, “Sequential approximation of feasible parameter sets for identification with set membership uncertainty,” _IEEE Transactions on Automatic Control_ , vol. 41, no. 6, pp. 774–785, 1996.
* [24] J. M. Bravo, T. Alamo, and E. F. Camacho, “Bounded error identification of systems with time-varying parameters,” _IEEE Transactions on Automatic Control_ , vol. 51, no. 7, pp. 1144–1150, 2006.
* [25] S. Kataoka, “A stochastic programming model,” _Econometrica: Journal of the Econometric Society_ , pp. 181–196, 1963.
* [26] A. Prékopa, “Logarithmic concave measures with application to stochastic programming,” _Acta Scientiarum Mathematicarum_ , pp. 301–316, 1971.
* [27] C. M. Lagoa, “On the convexity of probabilistically constrained linear programs,” in _Proceedings of the 38th IEEE Conference on Decision and Control_ , vol. 1, 1999, pp. 516–521.
* [28] G. C. Calafiore and L. E. Ghaoui, “On distributionally robust chance-constrained linear programs,” _Journal of Optimization Theory and Applications_ , vol. 130, no. 1, pp. 1–22, 2006.
* [29] R. Henrion and C. Strugarek, “Convexity of chance constraints with independent random variables,” _Computational Optimization and Applications_ , vol. 41, no. 2, pp. 263–276, 2008.
* [30] L. Hewing and M. N. Zeilinger, “Stochastic model predictive control for linear systems using probabilistic reachable sets,” in _2018 IEEE Conference on Decision and Control (CDC)_ , 2018, pp. 5182–5188.
* [31] S. Yan, P. Goulart, and M. Cannon, “Stochastic model predictive control with discounted probabilistic constraints,” in _2018 European Control Conference (ECC)_. IEEE, 2018, pp. 1003–1008.
* [32] A. Nemirovski and A. Shapiro, “Convex approximations of chance constrained programs,” _SIAM Journal on Optimization_ , vol. 17, no. 4, pp. 969–996, 2006.
* [33] W. Chen, M. Sim, J. Sun, and C.-P. Teo, “From CVaR to uncertainty set: Implications in joint chance-constrained optimization,” _Operations Research_ , vol. 58, no. 2, pp. 470–485, 2010.
* [34] A. Jasour, N. S. Aybat, and C. M. Lagoa, “Semidefinite programming for chance constrained optimization over semialgebraic sets,” _SIAM Journal on Optimization_ , vol. 25, no. 3, pp. 1411–1440, 2015.
* [35] J. B. Lasserre, “Representation of chance-constraints with strong asymptotic guarantees,” _IEEE Control Systems Letters_ , vol. 1, no. 1, pp. 50–55, 2017.
* [36] G. Calafiore and M. Campi, “The scenario approach to robust control design,” _IEEE Transactions on Automatic Control_ , vol. 51, no. 5, pp. 742–753, 2006.
* [37] M. Campi and S. Garatti, “The exact feasibility of randomized solutions of robust convex programs,” _SIAM Journal of Optimization_ , vol. 19, pp. 1211–1230, 2008.
* [38] G. Calafiore, “Random convex programs,” _SIAM Journal of Optimization_ , vol. 20, pp. 3427–3464, 2010.
* [39] M. Campi and S. Garatti, “A sampling-and-discarding approach to chance-constrained optimization: feasibility and optimality,” _Journal of Optimization Theory and Applications_ , vol. 148, pp. 257–280, 2011.
* [40] V. Vapnik, _Statistical Learning Theory_. New York: John Wiley and Sons, 1998.
* [41] T. Alamo, R. Tempo, and E. F. Camacho, “Randomized strategies for probabilistic solutions of uncertain feasibility and optimization problems,” _IEEE Transactions on Automatic Control_ , vol. 54, no. 11, pp. 2545–2559, 2009.
* [42] T. Alamo, R. Tempo, A. Luque, and D. Ramirez, “Randomized methods for design of uncertain systems: Sample complexity and sequential algorithms,” _Automatica_ , vol. 52, pp. 160–172, 2015.
* [43] S. Boyd and L. Vandenberghe, _Convex Optimization_. Cambridge University Press, 2004.
* [44] M. Herceg, M. Kvasnica, C. N. Jones, and M. Morari, “Multi-parametric toolbox 3.0,” in _2013 European control conference (ECC)_. IEEE, 2013, pp. 502–510.
* [45] F. Dabbene, C. Lagoa, and P. Shcherbakov, “On the complexity of randomized approximations of nonconvex sets,” in _2010 IEEE International Symposium on Computer-Aided Control System Design_. IEEE, 2010, pp. 1564–1569.
* [46] V. T. H. Le, C. Stoica, T. Alamo, E. F. Camacho, and D. Dumur, _Zonotopes: From Guaranteed State-estimation to Control_. Wiley, 2013.
* [47] F. Dabbene, D. Henrion, C. Lagoa, and P. Shcherbakov, “Randomized approximations of the image set of nonlinear mappings with applications to filtering,” _IFAC-PapersOnLine_ , vol. 48, no. 14, pp. 37–42, 2015.
* [48] T. Alamo, J. M. Bravo, and E. F. Camacho, “Guaranteed state estimation by zonotopes,” _Automatica_ , vol. 41, no. 6, pp. 1035–1043, 2005.
* [49] E. Gover and N. Krikorian, “Determinants and the volumes of parallelotopes and zonotopes,” _Linear Algebra and its Applications_ , vol. 433, no. 1, pp. 28–40, 2010.
* [50] M. D. Buhmann, “Radial basis functions,” _Acta numerica_ , vol. 9, pp. 1–38, 2000.
* [51] J. Lemon, “Plotrix: a package in the red light district of R,” _R-News_ , vol. 6, no. 4, pp. 8–12, 2006.
* [52] F. Dabbene, D. Henrion, and C. M. Lagoa, “Simple approximations of semialgebraic sets and their applications to control,” _Automatica_ , vol. 78, pp. 110 – 118, 2017.
# Structure of Flavor Changing Goldstone Boson Interactions
Jin Sun1<EMAIL_ADDRESS>, Yu Cheng1<EMAIL_ADDRESS>, Xiao-Gang He1,2,3<EMAIL_ADDRESS>
1Tsung-Dao Lee Institute, and School of Physics and Astronomy, Shanghai Jiao
Tong University, Shanghai 200240, China
2Department of Physics, National Taiwan University, Taipei 10617, Taiwan
3Physics Division, National Center for Theoretical Sciences, Hsinchu 30013,
Taiwan
###### Abstract
General flavor changing Goldstone boson (GB) interactions with fermions,
arising from a spontaneously broken global $U(1)_{G}$ symmetry, are discussed.
This GB may be the Axion that solves the strong CP problem, if the quark
$U(1)_{G}$ charge assignments carry a QCD anomaly. Or it may be the Majoron,
which generates seesaw Majorana neutrino masses through lepton number
violation, if the symmetry breaking scale is much higher than the electroweak
scale. In principle, it may even play the roles of the Axion and Majoron
simultaneously, as far as solving the strong CP problem and generating small
Majorana neutrino masses are concerned. Great attention has been focused on
flavor conserving GB interactions. Recently, flavor changing Axion and Majoron
models have been studied in the hope of finding new physics in rare decays at
the intensity frontier. In this work, we provide a systematic model-building
study of flavor changing neutral current (FCNC) GB interactions in the fermion
sectors, separately for the quark, charged lepton and neutrino sectors, and
identify in detail the sources of FCNC interactions in a class of models
beyond the standard model with a spontaneously broken global $U(1)_{G}$
symmetry. We also provide a general proof of the equivalence of using the
physical GB components and the GB broken generators for calculating GB
couplings to two gluons and two photons, and discuss some issues related to
spontaneous CP violation models. Besides, we provide some details for
obtaining FCNC GB interactions in several popular models, such as the Type-I,
-II, -III seesaw and Left-Right symmetric models, and point out some special
features of these models.
## I Introduction
A Goldstone boson (GB), a massless spin-zero particle arising from the
spontaneous breakdown of a global symmetry, is an important result of quantum
field theory Nambu:1960tm ; Goldstone:1961eq . When the broken symmetry is
gauged, the GB is “eaten” by the gauge boson corresponding to the broken
generator, which thereby acquires a longitudinal degree of freedom. The Higgs
mechanism Englert:1964et ; Higgs:1966ev ; Guralnik:1964eu for electroweak
symmetry breaking and mass generation of the standard model (SM) particles is
a good example of this type, and has been verified experimentally by the
discovery of the Higgs boson. If the broken symmetry is a global symmetry, the
GB is a physical massless particle (if there are anomalies at the quantum
level, the corresponding GB may gain a finite mass, as for the QCD Axion
Weinberg:1977ma ; Wilczek:1977pj from Peccei-Quinn symmetry Peccei:1977hh ;
Peccei:1977ur breaking). Going beyond the SM, there are well-motivated
theoretical models with additional broken symmetries leading to physical GB
particles. Interesting examples are the Axion Weinberg:1977ma ; Wilczek:1977pj
from Peccei-Quinn symmetry Peccei:1977hh ; Peccei:1977ur breaking, which
solves the strong CP problem, and the Majoron Chikashige:1980qk from lepton
number (LN) symmetry breaking, connected to neutrino mass generation.
Goldstone bosons have many laboratory, astrophysical and cosmological
implications Cheng:1987gp ; Kim:1986ax ; DiLuzio:2020wdo ; Ballesteros:2016euj
. However, no fundamental GB has been detected experimentally so far, and new
dedicated experiments have been and are being designed to detect the physical
effects of a GB. There have been extensive studies in this area, with great
attention focused on flavor conserving GB interactions Cheng:1987gp ;
Kim:1986ax ; DiLuzio:2020wdo ; Ballesteros:2016euj . Recently, flavor changing
Axion models have received more attention in the hope of finding new physics
in rare decays at the intensity frontier. With several high-luminosity
facilities running in recent years, such as BESIII, LHCb and BELLE-II, looking
for a GB at the intensity frontier has attracted much attention. Flavor
changing neutral currents (FCNC) induced by a GB in rare decays are among the
promising places to look for signs of new physics beyond the SM, including
effects of GB interactions. There are already some stringent constraints from
data Celis:2014iua ; Ema:2016ops ; Heeck:2017wgr ; Calibbi:2016hwq ;
Marciano:2016yhf ; CidVidal:2018blh ; Heeck:2019guh ; Calibbi:2020jvd ;
MartinCamalich:2020dfe ; Zyla:2020zbs ; Cornella:2019uxs ; Cheng:2020rla . It
has recently been shown that, by measuring the polarization of final-state
charged leptons, the chiral structure of FCNC GB interactions can also be
studied Cheng:2020rla .
Some of the well-motivated models containing a GB are the Axion and Majoron
models. Many of the searches depend on how the GB interacts with SM fermions.
GB couplings to fermions include not only flavor conserving interactions, but
also flavor changing ones. This has been known for a long time, together with
some interesting phenomena Schechter:1981cv ; Gelmini:1983ea ; Anselm:1985bp ;
Berezhiani:1989fp ; GonzalezGarcia:1988rw ; Pilaftsis:1993af , and has
attracted much attention recently Celis:2014iua ; Heeck:2017wgr ; Ema:2016ops
; Heeck:2019guh ; MartinCamalich:2020dfe .
The GB interaction with fermions takes a derivative form and is usually
parameterized as follows
$\displaystyle L^{c}_{int}={\partial_{\mu}a\over
2f_{a}}\bar{f}_{j}\gamma^{\mu}(c^{jk}_{V}+c^{jk}_{A}\gamma_{5})f_{k}\;,$ (1)
where $f$ stands for a quark, a charged lepton or a light active neutrino, and
$j,\;k$ are the generation indices. $f_{a}$ is the GB decay constant, which
sets the scale of $U(1)_{G}$ symmetry breaking. $c_{V,A}$ satisfy the
condition $c^{\dagger}_{V/A}=c_{V/A}$ so that the interaction Lagrangian is
hermitian. The sizes of $c_{V,A}$ are model dependent.
If neutrinos are Dirac particles, the GB interactions with neutrinos have the
same form as given above. If neutrinos are Majorana particles, the form is
modified. Moreover, if right-handed neutrinos $\nu_{R}$ are introduced to
facilitate the seesaw mechanism, $\nu_{L}$ and $\nu^{c}_{R}$ have different
masses, which also modifies the form of the interaction. The Lagrangian
$L^{\nu}_{int}$ of the GB interaction with neutrinos in seesaw models has the
following form
$\displaystyle L^{\nu}_{int}={\partial_{\mu}a\over
2f_{a}}\left(\bar{\nu}_{Lj}\gamma^{\mu}c^{jk}_{LL}\nu_{Lk}+\bar{\nu}^{c}_{Rj}\gamma^{\mu}c^{jk}_{RR}\nu^{c}_{Rk}+(\bar{\nu}_{Lj}\gamma^{\mu}c^{jk}_{LR}\nu^{c}_{Rk}+\mbox{H.c.})\right)\;.$
(2)
The flavor changing GB interactions with fermions lead to many interesting
phenomena which can be used to discover a GB. These include rare decays of
particles containing b, c and s quarks, $\tau$ and $\mu$ charged lepton
decays, neutrino decays, B-, D-, K-meson and muonium oscillations, and also
the g-2 of charged leptons Anselm:1985bp ; Berezhiani:1989fp ;
Calibbi:2016hwq ; CidVidal:2018blh ; MartinCamalich:2020dfe ;
GonzalezGarcia:1988rw ; Pilaftsis:1993af ; Heeck:2017wgr ; Heeck:2019guh ;
Calibbi:2020jvd ; Marciano:2016yhf ; Cornella:2019uxs ; Cheng:2020rla . In
this work, we will not rederive the stringent constraints from various data,
but instead investigate some interesting features of FCNC GB interactions
arising from the breakdown of a general global $U(1)_{G}$ symmetry beyond the
SM, and some related issues. This GB can be an Axion, a Majoron, or a mixture
of the two; that is, a GB can play the roles of the Axion and Majoron
simultaneously Mohapatra:1982tc , with some interesting features
Ballesteros:2016euj . In a concrete model there are usually other Higgs
doublets besides the SM one, and in general the additional Higgs bosons may
also mediate FCNC interactions Glashow:1976nt . These new Higgs bosons all
have masses, some of which can be much larger than the electroweak scale. For
a massless GB, the FCNC effects are different, so we concentrate on the FCNC
structure of a GB. Note that FCNC processes can also be generated at loop
level, where the strength is suppressed; in this paper we only consider
tree-level interactions.
The paper is arranged in the following way. In section II, we provide a
systematic model-building study of GB interactions in both the quark and
lepton sectors, with a simple way to identify the GB components and to obtain
the GB-fermion interactions. For the neutrino sector, we take the Type-I
seesaw as the prototype model. In section III, we discuss under what
conditions the general GB can be viewed as the usual Axion or Majoron, and
provide a general proof of the equivalence of using the physical GB components
and the GB broken generators for calculating Axion couplings to two gluons and
two photons. In section IV, we identify in detail the sources of FCNC GB
interactions and discuss how spontaneous CP violation may affect GB-fermion
interactions. In section V, we discuss some interesting features of GB
interactions with fermions in the Type-II, -III seesaw models and Left-Right
symmetric models. In section VI, we provide our conclusions.
## II A general global $U(1)_{G}$ model and its Goldstone-fermion
interactions
In the standard model, with the SM gauge bosons, the standard three
generations of fermions and the Higgs doublet, and with the most general
Yukawa couplings of the Higgs doublet to SM fermions, the model contains
several accidental global symmetries, such as lepton number $L$ and baryon
number $B$. Each of them can be identified with a global $U(1)$ symmetry
(non-perturbative effects, such as instanton effects, break $B+L$
tHooft:1976rip ). When going beyond the minimal SM by adding new particles,
the global lepton and baryon number symmetries can be spontaneously broken to
produce GBs. There are five types of fermions in the SM: the three generations
of left-handed quark doublets $Q^{j}_{L}$, lepton doublets $L^{j}_{L}$, and
the right-handed quark singlets $U^{j}_{R}$, $D^{j}_{R}$ and charged lepton
singlets $E_{R}$. If one switches off the Yukawa couplings, each type of
fermion possesses a $U(3)$ global symmetry, so that the model has a $U(3)^{5}$
global symmetry. If right-handed neutrinos are further introduced, the global
symmetry can be even larger. Starting with such a theory at a high energy
scale and breaking these global symmetries spontaneously down to lower
energies, leaving only the $U(1)$ baryon and $U(1)$ lepton numbers of the
usual SM, results in many GBs associated with the broken generators. Depending
on the structure of the vacuum expectation values (vevs) of the new scalar
particles in the model, the symmetry breaking chains may follow a complicated
route to a phenomenologically acceptable model. Such a complicated analysis
would blur our aim of obtaining a clear picture of the GB itself and is beyond
the scope of this paper. We therefore limit our discussion to the specific
class of models with only one additional global $U(1)_{G}$ symmetry,
spontaneously broken by the vevs of the necessary newly introduced scalar
particles besides the three generations of fermions, so that we can obtain
detailed information about how this GB interacts with fermions to generate
FCNC interactions. This $U(1)_{G}$ can be the Peccei-Quinn symmetry for
solving the strong CP problem, the lepton number (LN) symmetry in connection
with Majoron models, or some other flavor symmetry, depending on how
$U(1)_{G}$ acts on the particles in the model.
In a general form, we assume fermions in the model transform under $U(1)_{G}$
as
$\displaystyle f^{j}_{L}\to e^{iX^{j}_{L}}f^{j}_{L}\;,\;\;\;\;f^{j}_{R}\to
e^{iX^{j}_{R}}f^{j}_{R}\;,$ (3)
$f_{L,R}$ are the fermions in the SM with $SU(3)_{C}\times SU(2)_{L}\times
U(1)_{Y}$ gauge symmetries. For quarks, $f^{j}_{L}$ is
$Q^{j}_{L}:(3,2,1/6)(X_{L}^{qj})$, $f^{j}_{R}$ is one of
$U_{R}^{j}:(3,1,2/3)(X^{uj}_{R})$, or $D_{R}^{j}:(3,1,-1/3)(X^{dj}_{R})$, and
for leptons, $f^{j}_{L}$ is $L^{j}_{L}:(1,2,-1/2)(X^{lj}_{L})$, $f_{R}^{j}$
can be $E_{R}^{j}:(1,1,-1)(X^{ej}_{R})$. Since $Q_{L}^{j}$ and $L_{L}^{j}$
contain $u_{L}^{j},\;d^{j}_{L}$ and $\nu_{L}^{j},\;e^{j}_{L}$, we indicate
the individual $U(1)_{G}$ charges as $X^{uj}_{L}=X^{dj}_{L}=X^{qj}_{L}$ and
$X_{L}^{\nu j}=X_{L}^{ej}=X_{L}^{lj}$ for convenience. If there are right-
handed neutrinos, $f^{j}_{R}$ is $\nu_{R}^{j}:(1,1,0)(X^{\nu j}_{R})$. The
quantum numbers in the brackets correspond to $SU(3)_{C}$, $SU(2)_{L}$,
$U(1)_{Y}$ and $U(1)_{G}$, respectively. The diagonal matrix
diag$(X^{f1}_{L,R},X^{f2}_{L,R},X^{f3}_{L,R})$ in flavor space will be
indicated by a diagonal matrix $X_{L,R}^{f}$. In general there are several
Higgs doublets $H^{u,d,e,\nu}_{jk}$ transforming as
$(1,2,1/2)(X^{q,l\;j}_{L}-X^{u,d,e,\nu\;k}_{R})$ which couple to fermions,
$\displaystyle
L_{Y}=-\bar{Q}_{L}^{j}Y^{jk}_{u}\tilde{H}^{u}_{jk}U_{R}^{k}-\bar{Q}_{L}^{j}Y^{jk}_{d}H^{d}_{jk}D_{R}^{k}-\bar{L}_{L}^{j}Y^{jk}_{e}H^{e}_{jk}E_{R}^{k}-\bar{L}_{L}^{j}Y^{jk}_{\nu}\tilde{H}^{\nu}_{jk}\nu_{R}^{k}+\mbox{H.c.}\;.$
(4)
In the above $j$ and $k$ are summed over generation indices. The superscripts
(subscripts) $u$, $d$, $e$ and $\nu$ on Higgs doublets (Yukawa couplings) are
summed over Higgs doublets in the model. In component form
$\displaystyle H^{a}_{jk}=\left(\begin{array}{c}h^{a+}_{jk}\\
{1\over\sqrt{2}}(v^{a}_{jk}+h^{a}_{jk}+iI_{jk}^{a})\end{array}\right)\;.$ (8)
When the Higgs bosons develop vevs $v^{u,d,e,\nu}_{jk}$, the electroweak
symmetry $SU(2)_{L}\times U(1)_{Y}$ is broken down to the electromagnetic
symmetry $U(1)_{em}$, and at the same time $U(1)_{G}$ is also broken. The
non-zero vevs give masses to the fermions and to the gauge bosons $W$ and $Z$.
If at the same time singlets $S_{jk}$ are introduced with $U(1)_{G}$ charge
$-(X^{\nu j}_{R}+X^{\nu k}_{R})$, one can also have the terms
$-(1/2)\bar{\nu}^{cj}_{R}Y^{jk}_{s}S_{jk}\nu^{k}_{R}$. Here the superscript
$c$ indicates the charge-conjugated field. If there is more than one singlet,
$Y^{jk}_{s}S_{jk}$ implies a summation over singlet contributions.
$S_{jk}=(1/\sqrt{2})(v^{s}_{jk}+R^{s}_{jk}+iI^{s}_{jk})$. When the vevs
$v^{s}_{jk}/\sqrt{2}$ become non-zero and larger than $v^{u,d,e,\nu}_{jk}$,
the Type-I seesaw Minkowski:1977sc ; Yanagida:1980xy ; type1-seesaw ;
Glashow:1979nm ; Mohapatra:1979ia ; Schechter:1981cv mechanism becomes
effective in providing small Majorana masses for the light neutrinos. The
singlets can also play the role of making potentially dangerous GB
interactions invisible, as in the DFSZ invisible Axion model Dine:1981rt ;
Zhitnitsky:1980tq .
One may wonder whether it suffices to consider the case where only one global
$U(1)_{G}$ symmetry, in addition to the SM gauge symmetry, is broken
spontaneously when the model has the above complicated scalar content. This
relies on how the model-dependent new scalars are introduced. Since the
singlets can have arbitrary $U(1)_{G}$ charges, one can choose appropriate
charges for the singlets so that only one global $U(1)_{G}$ symmetry in the
model is broken spontaneously. Several example models of this type with
reasonably complicated Higgs structure have been discussed in Ref. Sun:2020iim
. One may also resort to higher-dimensional operators to appropriately break
unwanted left-over symmetries Ema:2016ops ; Calibbi:2016hwq beyond the
$U(1)_{G}$ at the beginning of the symmetry breaking. Our discussions in the
following apply to this class of renormalizable models.
As mentioned before, the non-zero vevs of the scalars $H^{a}_{jk}$ and
$S_{jk}$ not only break the electroweak symmetry, providing the longitudinal
components of the weak gauge bosons $W$ and $Z$, but also break the global
$U(1)_{G}$ symmetry, resulting in a massless GB. The vector $z$ “eaten” by the
$Z$ boson, in the basis
$\vec{I}=(I^{u}_{jk},I^{d}_{jk},I^{e}_{jk},I^{\nu}_{jk},I^{s}_{jk})$, is given
by
$\displaystyle\vec{z}=(v^{u}_{jk},\;v^{d}_{jk},\;v^{e}_{jk},\;v^{\nu}_{jk},\;0)\;,$
(9)
and the $U(1)_{G}$ broken generator vector $A$ is given by
$\displaystyle\vec{A}=\left(-(X^{uj}_{L}-X^{uk}_{R})v^{u}_{jk},\;(X^{dj}_{L}-X^{dk}_{R})v^{d}_{jk},\;(X^{ej}_{L}-X^{ek}_{R})v^{e}_{jk},\;-(X^{\nu
j}_{L}-X^{\nu k}_{R})v^{\nu}_{jk},\;-(X^{\nu j}_{R}+X^{\nu
k}_{R})v^{s}_{jk}\right)\;.$ (10)
The physical GB in this model is the linear combination
$a=\vec{a}\cdot\vec{I}^{T}$ orthogonal to $z=\vec{z}\cdot\vec{I}^{T}$. The
corresponding vector form is $\vec{a}\propto\alpha\vec{z}+\vec{A}$, and the
requirement $\vec{a}\cdot\vec{z}^{T}=0$ dictates
$\alpha=-\vec{A}\cdot\vec{z}^{T}/\vec{z}\cdot\vec{z}^{T}$. Therefore
$\vec{a}$ is given by Sun:2020iim
$\displaystyle\vec{a}={1\over N_{\alpha}}(\bar{v}^{2}\vec{z}-v^{2}\vec{A})\;,$
(11)
where $N_{\alpha}$ is a normalization constant to ensure
$\vec{a}\cdot\vec{a}^{T}=1$, and
$\displaystyle
v^{2}=\vec{z}\cdot\vec{z}^{T}=(v^{u}_{jk})^{2}+(v^{d}_{jk})^{2}+(v^{e}_{jk})^{2}+(v^{\nu}_{jk})^{2}\;,$
$\displaystyle\bar{v}^{2}=\vec{A}\cdot\vec{z}^{T}=-(X^{uj}_{L}-X^{uk}_{R})(v^{u}_{jk})^{2}+(X^{dj}_{L}-X^{dk}_{R})(v^{d}_{jk})^{2}+(X^{ej}_{L}-X^{ek}_{R})(v^{e}_{jk})^{2}-(X^{\nu
j}_{L}-X^{\nu k}_{R})(v^{\nu}_{jk})^{2}\;.$ (12)
Expressing the physical GB, $a=\vec{a}\cdot\vec{I}^{T}$, in terms of
$I^{a}_{jk}$ , we have
$\displaystyle a={1\over
N_{\alpha}}\left[\left((X^{pl}_{L}-X^{pm}_{R})-(X^{qj}_{L}-X^{qk}_{R})\right)(v^{p}_{lm})^{2}v^{q}_{jk}sign(q)I^{q}_{jk}+(X^{\nu
j}_{R}+X^{\nu k}_{R})(v^{p}_{lm})^{2}v^{s}_{jk}I^{s}_{jk}\right]\;.$ (13)
In the above, $j,\;k$ and $l,\;m$ are summed over the flavor space of each
sector, and $p,\;q$ are summed over $u,\;d,\;e,\;\nu$. Here sign(q) is “$-$”
for $q=u,\;\nu$ and “$+$” for $q=d,\;e$. The GB field is uniquely determined.
We note that if other additional global symmetries were broken spontaneously,
they would be involved in identifying the associated GB fields
Berezhiani:1989fp , and the above results would then no longer apply.
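The construction of $\vec{a}$ is a simple orthogonal projection and can be checked numerically; in the sketch below the vev vectors are illustrative numbers only, not taken from any specific model.

```python
import numpy as np

# Given z (Eq. (9)) and the broken-generator vector A (Eq. (10)), the
# physical GB direction is a ~ vbar^2 z - v^2 A (Eq. (11)), orthogonal
# to the component eaten by the Z boson.
def goldstone_direction(z: np.ndarray, A: np.ndarray) -> np.ndarray:
    v2 = z @ z        # v^2    = z . z^T
    vbar2 = A @ z     # vbar^2 = A . z^T
    a = vbar2 * z - v2 * A
    return a / np.linalg.norm(a)   # the 1/N_alpha normalization

z = np.array([174.0, 24.0, 1.0, 0.5, 0.0])       # illustrative vevs
A = np.array([-174.0, 24.0, 1.0, -0.5, -500.0])  # illustrative X-weighted vevs
a = goldstone_direction(z, A)
assert abs(a @ z) < 1e-9                          # orthogonal to the Z direction
```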
Eq. (13) shows that $I^{q}_{jk}$ and $I^{s}_{jk}$ contain the GB $a$ with
amplitudes
$(1/N_{\alpha})((X^{pl}_{L}-X^{pm}_{R})-(X^{qj}_{L}-X^{qk}_{R}))(v^{p}_{lm})^{2}v^{q}_{jk}sign(q)$
and $(1/N_{\alpha})(X^{\nu j}_{R}+X^{\nu k}_{R})(v^{p}_{lm})^{2}v^{s}_{jk}$,
respectively. The Yukawa couplings of the GB $a$ to fermions, along with the
mass terms, are given by
$\displaystyle L_{Y}$ $\displaystyle=$
$\displaystyle-\bar{U}^{j}_{L}M^{jk}_{u}\left[1+ia{v^{2}\over
N_{\alpha}}\left(-{\bar{v}^{2}\over
v^{2}}-(X^{uj}_{L}-X^{uk}_{R})\right)\right]U^{k}_{R}-\bar{D}^{j}_{L}M^{jk}_{d}\left[1+ia{v^{2}\over
N_{\alpha}}\left({\bar{v}^{2}\over
v^{2}}-(X^{dj}_{L}-X^{dk}_{R})\right)\right]D^{k}_{R}$ (14)
$\displaystyle-\bar{E}^{j}_{L}M^{jk}_{e}\left[1+ia{v^{2}\over
N_{\alpha}}\left({\bar{v}^{2}\over
v^{2}}-(X^{ej}_{L}-X^{ek}_{R})\right)\right]E^{k}_{R}-\bar{\nu}^{j}_{L}M^{jk}_{D}\left[1+ia{v^{2}\over
N_{\alpha}}\left(-{\bar{v}^{2}\over v^{2}}-(X^{\nu j}_{L}-X^{\nu
k}_{R})\right)\right]\nu^{k}_{R}$ $\displaystyle-{1\over
2}\bar{\nu}^{cj}_{R}M^{jk}_{R}\left(1+ia{v^{2}\over N_{\alpha}}(X^{\nu
j}_{R}+X^{\nu k}_{R})\right)\nu^{k}_{R}+\mbox{H.c.}\;,$
where $M^{jk}_{q}$ are mass matrices for up quark $M_{u}$, down quark $M_{d}$,
charged lepton $M_{e}$ and neutrino $M_{\nu}$. They are given by
$\displaystyle M^{jk}_{u}={Y^{jk}_{u}v^{u}_{jk}\over\sqrt{2}}\;,\qquad
M^{jk}_{d}={Y^{jk}_{d}v^{d}_{jk}\over\sqrt{2}}\;,\qquad
M^{jk}_{e}={Y^{jk}_{e}v^{e}_{jk}\over\sqrt{2}}\;,$
$\displaystyle M^{jk}_{\nu}=\left(\begin{array}{ll}0&\;\;M_{D}\\
M^{T}_{D}&\;\;M_{R}\end{array}\right)^{jk}\;,\;\;\mbox{with}\;\;M^{jk}_{D}={Y^{jk}_{\nu}v^{\nu}_{jk}\over\sqrt{2}}\;,\;\;\;\;M^{jk}_{R}={Y^{jk}_{s}v^{s}_{jk}\over\sqrt{2}}\;.$
(17)
The above mass matrices are summed over the contributions from the different
pieces of each vev $v^{q}_{jk}$ for each type “$q$” of fermion. Note that here
$j$ and $k$ are not summed.
From the above Yukawa couplings, we can identify the fermion current
interacting with the derivative of $a$, $L_{Y}\to
L_{af}=\partial_{\mu}a\,j^{\mu}_{af}$, with the help of the equations of
motion, as
$\displaystyle j^{\mu}_{af}=$ $\displaystyle{\bar{v}^{2}\over
2N_{\alpha}}\left((\bar{U}_{L}\gamma^{\mu}U_{L}-\bar{U}_{R}\gamma^{\mu}U_{R})-(\bar{D}_{L}\gamma^{\mu}D_{L}-\bar{D}_{R}\gamma^{\mu}D_{R})-(\bar{E}_{L}\gamma^{\mu}E_{L}-\bar{E}_{R}\gamma^{\mu}E_{R})+2\bar{\nu}_{L}\gamma^{\mu}\nu_{L}\right)$
$\displaystyle+{v^{2}\over
N_{\alpha}}(\bar{U}_{L}X^{u}_{L}\gamma^{\mu}U_{L}+\bar{U}_{R}X^{u}_{R}\gamma^{\mu}U_{R})+{v^{2}\over
N_{\alpha}}(\bar{D}_{L}X^{d}_{L}\gamma^{\mu}D_{L}+\bar{D}_{R}X^{d}_{R}\gamma^{\mu}D_{R})$
$\displaystyle+{v^{2}\over
N_{\alpha}}(\bar{E}_{L}X^{e}_{L}\gamma^{\mu}E_{L}+\bar{E}_{R}X^{e}_{R}\gamma^{\mu}E_{R})+{v^{2}\over
N_{\alpha}}\left(\bar{\nu}_{L}X^{\nu}_{L}\gamma^{\mu}\nu_{L}+\bar{\nu}_{R}X^{\nu}_{R}\gamma^{\mu}\nu_{R}\right)\;.$
We identify $1/f_{a}=v^{2}/N_{\alpha}$.
Note that the following relation holds,
$\displaystyle\bar{\nu}_{L}X^{\nu}_{L}\gamma^{\mu}\nu_{L}+\bar{\nu}_{R}X^{\nu}_{R}\gamma^{\mu}\nu_{R}=(\bar{\nu}_{L},\bar{\nu}^{c}_{R})X^{\nu}\gamma^{\mu}\left(\begin{array}{c}\nu_{L}\\
\nu^{c}_{R}\end{array}\right),$ (21)
where $X^{\nu}$ is a $6\times 6$ diagonal matrix with diagonal entries
$(X^{\nu}_{L},-X^{\nu}_{R})=(X^{\nu 1}_{L},X^{\nu 2}_{L},X^{\nu 3}_{L},-X^{\nu
1}_{R},-X^{\nu 2}_{R},-X^{\nu 3}_{R})$.
The above discussion can be easily extended to models with different scalars
and fermions in different $SU(2)_{L}$ representations, such as the Type-II and
Type-III seesaw models with FCNC GB interactions discussed later. The same
method can be applied to construct the GB components in those models. In the
models we discuss, FCNC GB interactions are generated at tree level. One can
also generate FCNC GB interactions at loop level Heeck:2019guh ; however, this
is not our aim in this paper and will not be discussed further.
Early invisible Axion models Zhitnitsky:1980tq ; Dine:1981rt and many of
their variants that do not address neutrino masses can be obtained by dropping
the last term in Eq. (II). In most of these models, all three generations of
each type of fermion are assigned the same $U(1)_{G}$ charge, so that no FCNC
GB interactions are generated. Recently, more attention has been paid to
models with different $U(1)_{G}$ charges for different generations, of the
type covered by Eq. (II), which generate FCNC GB interactions. Some of these
models are discussed in Refs. Celis:2014iua ; MartinCamalich:2020dfe .
The original Majoron model of Ref. Chikashige:1980qk is obtained by
introducing just one Higgs doublet with zero global lepton number, one
singlet, and leptons all carrying the same lepton number. Soon after, it was
realized that such a model contains FCNC GB interactions, and some more
elaborate models were constructed Schechter:1981cv ; Gelmini:1983ea ;
GonzalezGarcia:1988rw ; Berezhiani:1989fp ; Heeck:2019guh .
A GB may play the roles of both Axion and Majoron Mohapatra:1982tc . In fact,
models that account for neutrino mass generation as well as GB interactions
usually mix the roles of Axion and Majoron. Most such models can be obtained
by assigning different $U(1)_{G}$ charges DiLuzio:2020wdo ; Calibbi:2020jvd
to different generations of fermions, or by adding terms like the last term in
Eq. (II) for neutrino masses, to generate FCNC GB interactions Sun:2020iim ;
Cheng:2020rla ; Chen:2007nx ; He:2010hz ; Pan:2020qqd .
Our discussion so far does not include models with additional gauge
symmetries. These can be easily incorporated by focusing on which symmetries
are broken by the scalar vevs and reading off the GB components using the
method described earlier. As more gauge symmetries are broken by the scalar
vevs, the analysis becomes more complicated Mohapatra:1982tc ; Grimus:1982qu ,
but the way of identifying the GB discussed earlier still applies. To make
these points explicit, we will provide some illustrations for the Left-Right
symmetric model Mohapatra:1974gc ; Senjanovic:1975rk later.
There are also models with non-renormalizable GB couplings Ema:2016ops ;
Calibbi:2016hwq . Our method extends easily to this type of model, since the
identification of the GB components of each scalar boson whose vev breaks the
symmetries proceeds as before. Allowing non-renormalizable terms, however,
provides another source of FCNC GB interactions. An example of this type is
the flaxion model discussed in Refs. Ema:2016ops ; Calibbi:2016hwq . In this
model, besides the SM particles, a singlet $S$ with non-trivial $U(1)_{G}$
charge is introduced, and one adds additional terms of the type
$y_{jk}^{f}\left(S/M\right)^{n_{jk}^{f}}\overline{f_{L}}_{j}Hf_{Rk}$. The
Higgs doublet does not carry a $U(1)_{G}$ charge; the $U(1)_{G}$ charge of $S$
is balanced by the fermion $f_{L,R}$ charges. The vev of the singlet $S$ does
not break the SM symmetry, but provides the only source of $U(1)_{G}$
breaking. The imaginary part of $S$, denoted $a$, is the GB of the model.
Expanding the additional Yukawa couplings around the vacuum, the GB coupling
to fermions becomes proportional to $iM_{jk}n_{jk}$, which in general cannot
be diagonalized simultaneously with the mass matrix, depending on the choice
of $n_{jk}$; therefore FCNC GB interactions can arise. This class of models
has a simple scalar sector at the expense of non-renormalizable interactions.
We consider renormalizable models more attractive and therefore work with that
class of models.
## III Goldstone boson as Axion or Majoron
As mentioned before, the GB may or may not be the usual Axion or Majoron. Here
we make a rough distinction among them based on their primary role in
addressing physics problems. The massless GB becomes massive if the relevant
$U(1)_{G}$ charge assignment has an $SU(3)_{C}$ anomaly; such a model can then
be used to solve the strong CP problem. The GB in such models can be viewed as
an Axion, and the $U(1)_{G}$ can be identified as a variant of $U(1)_{PQ}$.
The condition is to have
$\displaystyle Tr(X^{u}_{R}-X^{u}_{L})+Tr(X^{d}_{R}-X^{d}_{L})\neq 0\;.$ (22)
This can be understood from the possible GB-gluon coupling
$a\tilde{G}^{a\mu\nu}G^{a}_{\mu\nu}$, obtained by calculating the triangle
diagram using the current in Eq. (II). We have Cheng:1987gp ; Kim:1986ax ;
DiLuzio:2020wdo
$\displaystyle L_{ag}=a{g^{2}_{3}\over
16\pi^{2}}N(X)T(q)\tilde{G}^{a\mu\nu}G^{a}_{\mu\nu}={\alpha_{s}\over
8\pi}{a\over f_{a}}\tilde{G}^{a\mu\nu}G^{a}_{\mu\nu}\;,$ (23)
where $g_{3}$ is the $SU(3)_{C}$ gauge coupling constant, and $T(q)$ is the
generator of $SU(3)_{C}$ for color triplet quarks defined by
$Tr(T^{a}T^{b})=T(q)\delta^{ab}=\delta^{ab}/2$. $N(X)=N^{u}(X)+N^{d}(X)$. Here
the superscripts indicate the contributions from up- and down-type quarks
running in the loop of the triangle diagram. They are given by
$\displaystyle N^{u}(X)=N_{G}{\bar{v}^{2}\over N_{\alpha}}+{v^{2}\over
N_{\alpha}}Tr(X^{u}_{R}-X^{u}_{L})\;,\;\;N^{d}(X)=-N_{G}{\bar{v}^{2}\over
N_{\alpha}}+{v^{2}\over N_{\alpha}}Tr(X^{d}_{R}-X^{d}_{L})\;.$ (24)
As long as
$N(X)=(v^{2}/N_{\alpha})Tr(X^{u}_{R}-X^{u}_{L}+X^{d}_{R}-X^{d}_{L})$ is
non-zero, there is a color anomaly. This makes the GB massive, allowing it to
play the role of the usual Axion Mohapatra:1982tc ; Ballesteros:2016euj .
Here we would like to comment on the relation between the $a$ couplings to two
gluons and to two photons. A GB coupling to two photons of the type
$a\tilde{F}^{\mu\nu}F_{\mu\nu}$ is generated by replacing the gluons with
photons in the triangle diagram mentioned above. We have
$\displaystyle L_{a\gamma}=a{e^{2}\over
16\pi^{2}}\tilde{E}(X)\tilde{F}^{\mu\nu}F_{\mu\nu}\;,$ (25)
where $\tilde{E}(X)=E^{u}(X)+E^{d}(X)+E^{e}(X)$. The superscripts indicate the
contributions from quarks and charged leptons running in the loop. They are
given by
$\displaystyle
E^{u}(X)=Q^{2}_{u}N^{q}_{c}N^{u}(X)\;,\;\;E^{d}(X)=Q^{2}_{d}N^{q}_{c}N^{d}(X)\;,$
$\displaystyle E^{e}(X)=Q^{2}_{e}N_{c}^{e}\left(-N_{G}{\bar{v}^{2}\over
N_{\alpha}}+{v^{2}\over N_{\alpha}}Tr(X^{e}_{R}-X^{e}_{L})\right)\;.$ (26)
Here $N^{q}_{c}=3$ and $N^{e}_{c}=1$ are the effective numbers of colors for
quarks and charged leptons, respectively. This method is referred to as the
calculation using the physical GB.
In the literature on Axion models, the GB-two-photon coupling is usually
written as DiLuzio:2020wdo
$\displaystyle L_{a\gamma}=a{e^{2}\over
16\pi^{2}}E(X)\tilde{F}^{\mu\nu}F_{\mu\nu}={1\over
4}ag^{0}_{a\gamma}\tilde{F}^{\mu\nu}F_{\mu\nu}\;,$ (27)
where $g^{0}_{a\gamma}=(\alpha_{em}/2\pi f_{a})E(X)/N(X)$ with
$E(X)={v^{2}\over
N_{\alpha}}Tr((X^{u}_{R}-X^{u}_{L})Q^{2}_{u}N^{q}_{c}+(X^{d}_{R}-X^{d}_{L})Q^{2}_{d}N^{q}_{c}+(X^{e}_{R}-X^{e}_{L})Q^{2}_{e}N^{e}_{c})$.
This method is referred to as the calculation using the broken generators.
The above gives the same result as Eq. (25) if
$(Q^{2}_{u}-Q^{2}_{d})N^{q}_{c}-Q^{2}_{e}N^{e}_{c}=0$, that is, if
$E(X)=\tilde{E}(X)$. This condition is actually one of the gauge anomaly free
conditions Bouchiat:1972iq ; Geng:1988pr
$\displaystyle
I_{3}^{u}Q^{2}_{u}N^{u}_{c}+I^{d}_{3}Q^{2}_{d}N^{d}_{c}+I_{3}^{e}Q^{2}_{e}N^{e}_{c}=0\;.$
(28)
Here $I^{f}_{3}$ is the value of the third weak isospin component of the
“$f$th” fermion. This condition is therefore guaranteed in a gauge anomaly
free theory; it eliminates the term proportional to $\bar{v}^{2}$, related to
the component “eaten” by the Z boson, so that the same result is obtained as
with the broken PQ generator. This provides the general proof discussed in
Ref. Sun:2020iim . The results are completely fixed by the $U(1)_{G}$ charges
$X^{f}_{L,R}$ and by the kinds of colored and charged particles in the model.
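Since the equivalence hinges only on the anomaly-free condition (28), it can be verified directly with exact rational arithmetic; a minimal check with the SM charge assignments:

```python
from fractions import Fraction as F

# Check of the anomaly-free condition (28) with SM charges; this is what
# guarantees E(X) = Etilde(X), i.e. the equivalence of the physical-GB and
# broken-generator computations of the two-photon coupling.
Qu, Qd, Qe = F(2, 3), F(-1, 3), F(-1)        # electric charges
Ncq, Nce = 3, 1                              # color factors
I3u, I3d, I3e = F(1, 2), F(-1, 2), F(-1, 2)  # third weak isospin components
assert I3u * Qu**2 * Ncq + I3d * Qd**2 * Ncq + I3e * Qe**2 * Nce == 0
```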
Note that if there is no color anomaly for $U(1)_{G}$, that is $N(X)=0$ as in
the Majoron models of Refs. Chikashige:1980qk ; Cheng:2020rla , the situation
is different. In this case, to avoid $N(X)$ appearing in the denominator of
$g^{0}_{a\gamma}=(\alpha_{em}/2\pi f_{a})E(X)/N(X)$, it is better to use
$g^{0}_{a\gamma}=(\alpha_{em}/4\pi)E(X)$ directly.
The Majoron is another commonly studied GB, resulting from the spontaneous
breakdown of lepton number, as in the Type-I seesaw model Chikashige:1980qk .
There is therefore no color anomaly for a GB produced by lepton number
breaking.
From our discussion in the previous section, the GB can in general have a
color anomaly and also break lepton number; it can therefore be viewed as an
Axion and a Majoron simultaneously Mohapatra:1982tc . The GB also goes under
other names Ma:2017vdv , such as the Familon Anselm:1985bp ;
Berezhiani:1989fp and the Arion Anselm:1982ip , which can be considered
special cases of the discussion here. Whichever name the GB carries, it
results from a global $U(1)$ symmetry breaking.
## IV Flavor changing Goldstone boson interactions
We now discuss how FCNC GB interactions with fermions emerge. The relevant
information is contained in the GB current in Eq. (II). The flavor changing
nature of the interaction is most easily seen in the mass eigen-state basis.
The fermion mass matrices are diagonalized by bi-unitary transformations,
$\hat{M}_{f}=V^{f}_{L}M_{f}V^{f\dagger}_{R}$. In the mass eigen-state basis,
the GB current $j^{\mu}_{ac}$ for quarks and charged leptons is given by
$\displaystyle j^{\mu}_{ac}=$ $\displaystyle-{\bar{v}^{2}\over
2N_{\alpha}}(\bar{U}^{m}\gamma^{\mu}\gamma_{5}U^{m}-\bar{D}^{m}\gamma^{\mu}\gamma_{5}D^{m}-\bar{E}^{m}\gamma^{\mu}\gamma_{5}E^{m})+{v^{2}\over
N_{\alpha}}(\bar{U}^{m}_{L}V^{u}_{L}X^{u}_{L}V_{L}^{u\dagger}\gamma^{\mu}U^{m}_{L}+\bar{U}^{m}_{R}V^{u}_{R}X^{u}_{R}V^{u\dagger}_{R}\gamma^{\mu}U^{m}_{R})$
(29) $\displaystyle+{v^{2}\over
N_{\alpha}}(\bar{D}^{m}_{L}V^{d}_{L}X^{d}_{L}V^{d\dagger}_{L}\gamma^{\mu}D^{m}_{L}+\bar{D}^{m}_{R}V^{d}_{R}X^{d}_{R}V^{d\dagger}_{R}\gamma^{\mu}D^{m}_{R})+{v^{2}\over
N_{\alpha}}(\bar{E}^{m}_{L}V^{e}_{L}X^{e}_{L}V^{e\dagger}_{L}\gamma^{\mu}E^{m}_{L}+\bar{E}^{m}_{R}V^{e}_{R}X^{e}_{R}V^{e\dagger}_{R}\gamma^{\mu}E^{m}_{R})\;.$
Here $X^{f}_{L,R}$ are diagonal matrices with diagonal entries
$(X^{f1}_{L,R},X^{f2}_{L,R},X^{f3}_{L,R})$, and $f^{m}$ indicates the mass
eigen-states. We will drop the superscript “$m$” to keep the notation simple
unless stated otherwise. It is clear that when $X^{f}_{L,R}$ are not
proportional to the unit matrix, the GB current is not diagonal in the mass
eigen-state basis, and flavor changing interactions emerge.
The GB decay constant $f_{a}$ is identified through the relation
$1/f_{a}=v^{2}/N_{\alpha}$. The off-diagonal elements of $c_{V}$ and $c_{A}$
in Eq. (1) are given by
$(V^{f}_{R}X^{f}_{R}V^{f\dagger}_{R}+V^{f}_{L}X^{f}_{L}V^{f\dagger}_{L})$ and
$(V^{f}_{R}X^{f}_{R}V^{f\dagger}_{R}-V^{f}_{L}X^{f}_{L}V^{f\dagger}_{L})$,
respectively. For the diagonal elements, $\pm(\bar{v}^{2}/v^{2})$ needs to be
added to the diagonal $c^{jj}_{A}$ entries, with “$-$” for up-type quarks and
“$+$” for down-type quarks and charged leptons. If the entries of
$X_{L,R}^{f}$ are of order $O(1)$ and there are no accidental cancellations,
$c_{V,A}$ can be of order $O(1)$.
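As a numerical illustration of this structure (the charges and random mixing matrices below are invented for the example, not taken from any specific model), one can build $c_{V}$ and $c_{A}$ and verify both their hermiticity and the appearance of off-diagonal FCNC entries:

```python
import numpy as np

# c_V, c_A from generation-dependent U(1)_G charges X_L, X_R and bi-unitary
# rotations V_L, V_R, as in Eq. (29) (off-diagonal structure only; the
# diagonal vbar^2/v^2 shift is omitted).
def random_unitary(n: int, rng) -> np.ndarray:
    Q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    return Q

def cv_ca(VL, VR, XL, XR):
    L = VL @ np.diag(XL) @ VL.conj().T
    R = VR @ np.diag(XR) @ VR.conj().T
    return R + L, R - L

rng = np.random.default_rng(0)
cV, cA = cv_ca(random_unitary(3, rng), random_unitary(3, rng),
               XL=[1.0, 1.0, 0.0], XR=[1.0, 1.0, 0.0])  # non-universal charges
assert np.allclose(cV, cV.conj().T)                # hermiticity, as in Eq. (1)
assert np.abs(cV - np.diag(np.diag(cV))).max() > 0  # FCNC entries appear
```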
Similarly, the GB couplings to neutrinos can be worked out with some
modifications; we provide some details here. The neutrino mass matrix
$M_{\nu}$ is diagonalized by a $6\times 6$ unitary matrix,
$\hat{M}_{\nu}=V^{\nu}_{L}M_{\nu}V^{\nu T}_{L}$. Writing $V_{L}^{\nu}$ in
terms of $3\times 3$ blocks,
$\displaystyle
V^{\nu}_{L}=\left(\begin{array}{ll}V^{\nu}_{LL}&V^{\nu}_{LR}\\
V^{\nu}_{RL}&V^{\nu}_{RR}\end{array}\right)\;,$ (33)
we have the current $j^{\mu}_{a\nu}$ for neutrinos given by
$\displaystyle j^{\mu}_{a\nu}=$ $\displaystyle{\bar{v}^{2}\over
N_{\alpha}}(\bar{\nu}_{L}V^{\nu}_{LL}+\bar{\nu}^{c}_{R}V^{\nu}_{RL})\gamma^{\mu}(V^{\nu\dagger}_{LL}\nu_{L}+V^{\nu\dagger}_{RL}\nu^{c}_{R})$
(34) $\displaystyle+{v^{2}\over
N_{\alpha}}\left((\bar{\nu}_{L}V^{\nu}_{LL}+\bar{\nu}^{c}_{R}V^{\nu}_{RL})X^{\nu}_{L}\gamma^{\mu}(V^{\nu\dagger}_{LL}\nu_{L}+V^{\nu\dagger}_{RL}\nu^{c}_{R})+(\bar{\nu}^{c}_{L}V^{\nu*}_{LR}+\bar{\nu}_{R}V^{\nu*}_{RR})X^{\nu}_{R}\gamma^{\mu}(V^{\nu
T}_{LR}\nu_{L}^{c}+V^{\nu T}_{RR}\nu_{R})\right)\;.$
Again, $f_{a}$ is identified through the relation $1/f_{a}=v^{2}/N_{\alpha}$.
Comparing with Eq. (2), we have
$\displaystyle c_{LL}=2({\bar{v}^{2}\over
v^{2}}V^{\nu}_{LL}V^{\nu\dagger}_{LL}+V^{\nu}_{LL}X^{\nu}_{L}V^{\nu\dagger}_{LL}-V^{\nu}_{LR}X^{\nu}_{R}V^{\nu\dagger}_{LR})\;,$
$\displaystyle c_{RR}=2({\bar{v}^{2}\over
v^{2}}V^{\nu}_{RL}V^{\nu\dagger}_{RL}+V^{\nu}_{RL}X^{\nu}_{L}V^{\nu\dagger}_{RL}-V^{\nu}_{RR}X^{\nu}_{R}V^{\nu\dagger}_{RR})\;,$
(35) $\displaystyle c_{LR}=2({\bar{v}^{2}\over
v^{2}}V^{\nu}_{LL}V^{\nu\dagger}_{RL}+V^{\nu}_{LL}X^{\nu}_{L}V^{\nu\dagger}_{RL}-V^{\nu}_{LR}X^{\nu}_{R}V^{\nu\dagger}_{RR})\;.$
From the above, we see that there are more ways in which FCNC interactions can
emerge, due to the seesaw mass matrix diagonalization. For example, since
$V^{\nu}_{LL}$ is in general not unitary, an FCNC interaction exists in the
$a\bar{\nu}_{L}\nu_{L}$ coupling, with amplitude proportional to
$V^{\nu}_{LL}V^{\nu\dagger}_{LL}$. Since $V^{\nu}_{LL}$ should be close to the
unitary $V_{PMNS}$ matrix, this FCNC interaction is naturally small. FCNC
interactions can also occur, as for the quarks and charged leptons, if
$X^{\nu}_{L,R}$ are not proportional to the unit matrix. Even if
$X^{\nu}_{L}$ and $X^{\nu}_{R}$ are separately proportional to the unit
matrix, FCNC interactions can still occur if the $6\times 6$ diagonal matrix
$X^{\nu}$ is not proportional to the $6\times 6$ unit matrix.
One observes that if $X^{f}_{L,R}$ are set to the unit matrix, there exist
FCNC interactions of $a$ only with neutrinos, but not with quarks and charged
leptons, because $V^{\nu}_{LL,RR,LR,RL}$ are separately not unitary. Working
in the basis where $M_{e}$ and $M_{R}$ are diagonal, one can approximate
Abada:2007ux ; He:2009tf $V_{LL}=(1-\epsilon/2)V_{PMNS}$, with
$\epsilon=Y_{D}M_{R}^{-2}Y^{\dagger}_{D}v^{2}/2$. Global fits find that the
matrix elements of $\epsilon$ are ${\cal O}(10^{-3})$ Fernandez-
Martinez:2016lgt . Therefore, the couplings
$V^{\nu}_{LL}{V^{\nu}_{LL}}^{\dagger}$ are allowed at the level of $10^{-3}$.
If different singlets are introduced so that the corresponding right-handed
neutrinos carry different lepton numbers, the Majoron couplings to the light
neutrinos change to $V^{\nu}_{LL}X^{\nu}_{R}{V^{\nu}_{LL}}^{\dagger}$, with
$X^{\nu}_{R}$ a diagonal matrix with different diagonal entries. The
individual off-diagonal couplings can then be much larger than $10^{-3}$. In
general, the off-diagonal entries are arbitrary and should therefore be
constrained by data. There are also constraints from the mixing between heavy
and light neutrinos; however, these can be independent of the light neutrino
mixings and need to be constrained using data He:2009ua .
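A toy numerical check of this non-unitarity can be done by diagonalizing the $6\times 6$ seesaw mass matrix of Eq. (17); all mass scales below are illustrative, not fits to data.

```python
import numpy as np

# The non-unitarity of V_LL controls the FCNC Majoron coupling
# ~ V_LL V_LL^dagger; illustrative mass scales only.
rng = np.random.default_rng(1)
v = 174.0                                  # electroweak scale, GeV
MD = 1e-2 * v * rng.normal(size=(3, 3))    # Dirac block M_D
MR = 1e6 * np.diag([1.0, 2.0, 3.0])        # heavy Majorana block M_R, GeV
M = np.block([[np.zeros((3, 3)), MD], [MD.T, MR]])

# For this real symmetric M, an orthogonal eigendecomposition suffices; the
# three light states (eigenvalues near zero) come first in ascending order,
# and V_LL is their overlap with nu_L (the upper-left 3x3 block).
_, V = np.linalg.eigh(M)
VLL = V[:3, :3]
eps = np.eye(3) - VLL @ VLL.T              # non-unitarity, O(M_D^2 / M_R^2)
print(np.abs(eps).max())                   # tiny for M_R >> M_D
```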
Before closing this section, we comment on theories with spontaneous CP
violation and how to identify the GB in such models. Spontaneous CP violation
requires more than one Higgs doublet, and when a global $U(1)_{G}$ is imposed,
more Higgs bosons may be needed to construct a model consistent with data
Geng:1988ty ; He:1988dm . The model in Ref. Geng:1988ty is based on two Higgs
doublet fields $H_{j}$ and two scalar singlet fields $S_{j}$, incorporating
spontaneous CP violation and the PQ mechanism with an invisible Axion. The
model in Ref. He:1988dm achieves spontaneous CP violation by adding another
doublet and introducing one singlet. In both models, all three generations of
each type of fermion carry the same PQ charge, so there are no FCNC GB
interactions. In this case, however, the vevs are complex, that is,
$v^{q}_{jk}$ becomes $v^{q}_{jk}e^{i\theta^{q}_{jk}}$, which makes identifying
the physical GB more involved.
In this case, in the basis $-i(h_{jk}^{q}+iI^{q}_{jk})$, the vectors $z$ and $A$ become
$\displaystyle\vec{z}=(v^{u}_{jk}e^{i\theta^{u}_{jk}},\;v^{d}_{jk}e^{i\theta^{d}_{jk}},\;v^{e}_{jk}e^{i\theta^{e}_{jk}},\;v^{\nu}_{jk}e^{i\theta^{\nu}_{jk}},\;0)\;,$
$\displaystyle\vec{A}=(-(X^{uj}_{L}-X^{uk}_{R})v^{u}_{jk}e^{i\theta^{u}_{jk}},\;(X^{dj}_{L}-X^{dk}_{R})v^{d}_{jk}e^{i\theta^{d}_{jk}},\;(X^{ej}_{L}-X^{ek}_{R})v^{e}_{jk}e^{i\theta^{e}_{jk}},\;$
$\displaystyle\hskip 19.91684pt\;-(X^{\nu j}_{L}-X^{\nu
k}_{R})v^{\nu}_{jk}e^{i\theta^{\nu}_{jk}},\;-(X^{\nu j}_{R}+X^{\nu
k}_{R})v^{s}_{jk}e^{i\theta^{s}_{jk}})\;.$ (36)
The physical GB field is now
$\displaystyle a={1\over
N_{\alpha}}Im\left[\left((X^{pl}_{L}-X^{pm}_{R})-(X^{qj}_{L}-X^{qk}_{R})\right)(v^{p}_{lm})^{2}v^{q}_{jk}e^{i\theta^{q}_{jk}}sign(q)(h^{q}_{jk}+iI^{q}_{jk})+(v^{p}_{lm})^{2}(X^{\nu
j}_{R}+X^{\nu
k}_{R})v^{s}_{jk}e^{i\theta^{s}_{jk}}(h^{s}_{jk}+iI^{s}_{jk})\right]\;.$
This leads to the same $j_{af}^{\mu}$ as discussed before. We therefore
conclude that no new CP-violating phases arise in the GB interactions with
fermions. Some special cases of this type of model have been discussed in
Refs. Chen:2007nx ; He:2010hz ; Pan:2020qqd : three Higgs doublets $H_{j}$ and
one complex scalar singlet $S$ are introduced, with the charge assignments
$Q_{\mathrm{L}}(0)$, $U_{\mathrm{R}}(\pm 1)$, $D_{\mathrm{R}}(\pm 1)$,
$L_{L}(0)$, $E_{R}(\pm 1)$, $H_{1,2}(+1)$, $H_{3}(-1)$, $S(+2)$. The resulting
$j_{af}$ are special cases of Eq. (II) with the same $U(1)_{G}$ charge for all
three generations of each type of fermion; therefore there are no FCNC GB
interactions in these models.
## V Special features for seesaw and Left-Right symmetric models
We now discuss some interesting features of flavor changing GB interactions
with fermions in some popular models: the Type-II, -III seesaw and Left-Right
symmetric models.
### V.1 Type-II seesaw model
The simplest realization of the Type-II seesaw Schechter:1981cv ; Magg:1980ut
; Cheng:1980qt ; Mohapatra:1980yp ; Lazarides:1980nt is obtained by
introducing a triplet Higgs field $\chi:(1,3,1)(-2)$ that couples to the
neutrinos via the term $\bar{L}^{c}_{L}\chi L_{L}$, giving neutrino masses
when $\chi$ develops a vev $v_{\chi}/\sqrt{2}$. There is no need to introduce
right-handed neutrinos $\nu_{R}$ as in the Type-I seesaw model. To have a GB,
the Majoron in this case, one can impose global lepton number conservation on
the potential Gelmini:1980re ; Georgi:1981pg . Since the $\chi$ field has a
non-zero lepton number, its vev breaks both the electroweak symmetry and the
global lepton number. The Goldstone boson “eaten” by the Z boson is given by
$z=(vI+2v_{\chi}I_{\chi})/\sqrt{v^{2}+4v^{2}_{\chi}}$. The Majoron is the
orthogonal combination
$(2v_{\chi}I-vI_{\chi})/\sqrt{v^{2}+4v^{2}_{\chi}}$, whose coupling to
neutrinos is proportional to the neutrino mass matrix. The mixing induces
Majoron couplings to charged leptons and quarks. Since the vev of $\chi$ is
constrained to be less than a few GeV by the precise measurement of the
$\rho$ parameter Zyla:2020zbs , the couplings of the GB to charged leptons and
quarks are small, and there are no FCNC GB interactions. To remedy the
problems related to light degrees of freedom in the model, one can introduce a
singlet $S$ of the type discussed in the Type-I seesaw model, which couples to
$\chi$ and $H$ through the term $H\chi HS$. But this still does not induce
FCNC GB interactions.
If the $L_{j}$ have different $U(1)_{G}$ charges, as discussed in the general GB model of section II, one needs to introduce several $\chi$ fields with the $U(1)_{G}$ charges $X_{\chi}^{jk}=-(X^{\nu j}_{L}+X^{\nu k}_{L})$ and also to extend $S$ to $S^{jk}$. The term $H\chi HS$ is then replaced by $H_{j}\chi_{lm}H_{k}S^{pq}$, with the indices contracted in all ways allowed by the SM gauge group and $U(1)_{G}$ invariance. In this case, following the procedures of section II, we obtain the GB-neutrino current
$\displaystyle j^{\mu}_{a\nu}={\bar{v}^{2}\over N_{\alpha}}\bar{\nu}_{L}\gamma^{\mu}\nu_{L}+{v^{2}\over N_{\alpha}}\bar{\nu}_{L}X^{\nu}_{L}\gamma^{\mu}\nu_{L}\;.$ (38)
Here $v^{2}=(v^{u}_{jk})^{2}+(v^{d}_{jk})^{2}+(v^{e}_{jk})^{2}+4(v^{\chi}_{jk})^{2}$ and $\bar{v}^{2}=-(X^{uj}_{L}-X^{uk}_{R})(v^{u}_{jk})^{2}+(X^{dj}_{L}-X^{dk}_{R})(v^{d}_{jk})^{2}+(X^{ej}_{L}-X^{ek}_{R})(v^{e}_{jk})^{2}-2(X^{\nu j}_{L}+X^{\nu k}_{L})(v^{\chi}_{jk})^{2}$.
If $X_{L}^{\nu}$ is not proportional to the unit matrix, FCNC interactions emerge. In the neutrino mass eigenstate basis, we have
$\displaystyle j^{\mu}_{a\nu}={\bar{v}^{2}\over N_{\alpha}}\bar{\nu}_{L}\gamma^{\mu}\nu_{L}+{v^{2}\over N_{\alpha}}\bar{\nu}_{L}V_{PMNS}X^{\nu}_{L}\gamma^{\mu}V^{\dagger}_{PMNS}\nu_{L}\;,$ (39)
where $V_{PMNS}$ is the lepton mixing matrix.
At least two triplet fields $\chi$ with different $U(1)_{G}$ charges need to be introduced to have FCNC interactions. If the quark and charged-lepton $U(1)_{G}$ charges are also assigned as in the general model discussed above, their corresponding couplings to the GB will be given by Eq. (29), which in general leads to FCNC GB interactions with fermions.
### V.2 Type-III seesaw model
In the Type-III seesaw model Foot:1988aq , one replaces the right-handed neutrinos $\nu_{R}$ by the $SU(2)_{L}$ triplet $\Sigma_{L}^{c}=\Sigma_{R}$, the charge conjugate of $\Sigma_{L}$, transforming as $(1,3,0)$ under the SM gauge group. It carries a $U(1)_{G}$ charge $X_{R}^{\nu}$ as in the Type-I seesaw model. The component fields are as follows:
$\displaystyle\Sigma_{L}=\left(\begin{array}[]{cc}\Sigma_{L}^{0}/\sqrt{2}&\;\;\Sigma^{+}_{L}\\ \Sigma^{-}_{L}&\;\;-\Sigma^{0}_{L}/\sqrt{2}\end{array}\right)\;,\;\;\;\;\Sigma_{R}=\left(\begin{array}[]{cc}\Sigma_{L}^{0\;c}/\sqrt{2}&\;\;\Sigma^{-\;c}_{L}\\ \Sigma^{+\;c}_{L}&\;\;-\Sigma^{0\;c}_{L}/\sqrt{2}\end{array}\right)\;.$ (44)
We rename the components as $\nu_{R}=\Sigma_{L}^{0\;c}$, $\psi_{L}=\Sigma^{-}_{L}$ and $\psi_{R}=\Sigma^{+\;c}_{L}$.
The Yukawa interaction terms are given by
$\displaystyle L=-\bar{Q}_{L}^{j}Y^{jk}_{u}\tilde{H}^{u}_{jk}U_{R}^{k}-\bar{Q}_{L}^{j}Y^{jk}_{d}H^{d}_{jk}D_{R}^{k}-\bar{L}_{L}^{j}Y^{jk}_{e}H^{e}_{jk}E_{R}^{k}-\bar{L}_{L}^{j}\sqrt{2}Y^{jk}_{\nu}\Sigma_{R}^{k}\tilde{H}^{\nu}_{jk}-{1\over 2}Tr\bar{\Sigma}_{R}^{jc}Y^{jk}_{s}S_{jk}\Sigma^{k}_{R}+\mbox{H.c.}\;.$ (45)
The GB field is in general given by Eq. (13). The GB couplings to up- and down-type quarks, and also to neutrinos, are the same as those in the Type-I seesaw model. The couplings to charged leptons, however, are modified by the existence of $\psi_{L,R}$. The mass and GB interaction terms are
$\displaystyle L=-(\bar{E}_{L},\bar{\psi}_{L})M_{c}\left(\begin{array}[]{c}E_{R}\\ \psi_{R}\end{array}\right)-ia(\bar{E}_{L},\bar{\psi}_{L})\left(\begin{array}[]{cc}M_{e}{\bar{v}^{2}\over N_{\alpha}}-{v^{2}\over N_{\alpha}}(X_{L}^{e}M_{e}-M_{e}X^{e}_{R})&\;\;\sqrt{2}M_{D}{\bar{v}^{2}\over N_{\alpha}}-{v^{2}\over N_{\alpha}}(X^{e}_{L}\sqrt{2}M_{D}-\sqrt{2}M_{D}X^{\nu}_{R})\\ 0&\;\;{v^{2}\over N_{\alpha}}(X^{\nu}_{R}M_{R}+M_{R}X^{\nu}_{R})\end{array}\right)\left(\begin{array}[]{c}E_{R}\\ \psi_{R}\end{array}\right)+\mbox{H.c.}\;,$ (54)
where
$\displaystyle M_{c}=\left(\begin{array}[]{cc}M_{e}&\;\;\sqrt{2}M_{D}\\ 0&\;\;M_{R}\end{array}\right)\;.$ (57)
Using the equations of motion, the GB current $j^{\mu}_{e}$ in the interaction $\partial_{\mu}a\;j^{\mu}_{e}$ can be written as
$\displaystyle j^{\mu}_{e}=-{\bar{v}^{2}\over N_{\alpha}}\bar{E}_{L}\gamma^{\mu}E_{L}+{v^{2}\over N_{\alpha}}(\bar{E}_{L}X^{e}_{L}\gamma^{\mu}E_{L}+\bar{E}_{R}X^{e}_{R}\gamma^{\mu}E_{R})+{v^{2}\over N_{\alpha}}(\bar{\psi}_{R}X^{\nu}_{R}\gamma^{\mu}\psi_{R}-\bar{\psi}_{L}X^{\nu}_{R}\gamma^{\mu}\psi_{L})\;.$ (58)
One can easily see that the GB has FCNC interactions with charged leptons as well.
We would like to mention a special feature noticed recently in Ref. Cheng:2020rla , which can be realized by introducing just one $S$ into the usual Type-III seesaw model and normalizing $f_{a}$ to equal $v_{s}$, as in Ref. Cheng:2020rla , by choosing $X^{e}_{L,R}=X^{\nu}_{R}=1/2$. In this case $\bar{v}^{2}=0$. Using vector current conservation, $\partial_{\mu}(\bar{E}\gamma^{\mu}E+\bar{\psi}\gamma^{\mu}\psi)=0$, we have
$\displaystyle j^{\mu}_{e}=-{v^{2}\over N_{\alpha}}\bar{\psi}_{L}\gamma^{\mu}\psi_{L}\;.$ (59)
The mass matrix $M_{c}$ can be diagonalized in the form $M_{c}={V^{e\,L}}^{\dagger}\hat{M}_{c}V^{e\,R}$, where $V^{e\,L(R)}$ are $6\times 6$ unitary matrices. Writing $V^{e\,L(R)}$ in blocks of $3\times 3$ matrices, we have
$\displaystyle V^{e\;L(R)}=\left(\begin{array}[]{ll}V^{e\;L(R)}_{LL}&V^{e\;L(R)}_{LR}\\ V^{e\;L(R)}_{RL}&V^{e\;L(R)}_{RR}\end{array}\right).$ (63)
We then obtain the Majoron $J$ interactions with the charged leptons $E$ and $\psi$ in the mass basis as
$\displaystyle{\partial_{\mu}J\over 2f_{J}}\left[-2(\bar{E}_{L}\gamma^{\mu}V^{e\;L}_{LR}{V^{e\;L}_{LR}}^{\dagger}E_{L}+\bar{\psi}_{L}\gamma^{\mu}V^{e\;L}_{RR}{V^{e\;L}_{LR}}^{\dagger}E_{L}+\bar{E}_{L}\gamma^{\mu}V^{e\;L}_{LR}{V^{e\;L}_{RR}}^{\dagger}\psi_{L}+\bar{\psi}_{L}\gamma^{\mu}V^{e\;L}_{RR}{V^{e\;L}_{RR}}^{\dagger}\psi_{L})\right].$ (64)
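The block structure in Eq. (63) and the resulting size of the couplings in Eq. (64) can be made concrete with a small numerical sketch. The code below is ours, not part of the original work; the input matrices $M_{e}$, $M_{D}$ and $M_{R}$ are purely illustrative, hypothetical inputs. It performs the bi-unitary diagonalization $M_{c}={V^{e\,L}}^{\dagger}\hat{M}_{c}V^{e\,R}$ via a singular value decomposition and extracts the $3\times 3$ blocks:
```python
import numpy as np

# Illustrative (hypothetical) mass matrices in GeV; not values from this work
Me = np.diag([0.000511, 0.106, 1.777])                # charged leptons
MD = 0.1 * np.random.default_rng(0).random((3, 3))    # Dirac mass block
MR = np.diag([1e3, 2e3, 3e3])                         # heavy lepton masses

# Charged-lepton mass matrix of eq. (57)
Mc = np.block([[Me, np.sqrt(2.0) * MD],
               [np.zeros((3, 3)), MR]])

# Bi-unitary diagonalization M_c = V_L^dag Mhat V_R via SVD (M_c = U S W^dag)
U, S, Wh = np.linalg.svd(Mc)
VL, VR = U.conj().T, Wh
assert np.allclose(VL.conj().T @ np.diag(S) @ VR, Mc)

# 3x3 blocks of V^{eL} as in eq. (63); V_LR V_LR^dag sets the size of the
# light-lepton FCNC Majoron coupling in eq. (64)
VLL, VLR = VL[:3, :3], VL[:3, 3:]
print(np.abs(VLR @ VLR.conj().T))
```
With the illustrative scales above, the off-diagonal entries come out suppressed by powers of $M_{D}/M_{R}$, in line with the estimate that follows.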
The size of the off-diagonal entries can be as large as the level of $10^{-3}/f_{J}$, similar to that in the Type-I seesaw model. If there is more than one singlet with different lepton numbers, and different right-handed neutrinos are assigned different lepton numbers, the Majoron couplings to light neutrinos change to $V^{\nu}_{LL}X^{\nu}_{R}{V^{\nu}_{LL}}^{\dagger}$, with $X^{\nu}_{R}$ a diagonal matrix with distinct diagonal entries. The individual off-diagonal couplings can then be much larger than $10^{-3}/f_{J}$. In this model, the GB is a typical Majoron whose FCNC interactions with fermions can lead to interesting consequences, as shown in Ref. Cheng:2020rla .
We note in passing that, because of the new particle $\psi$ in the theory, the GB-two-photon coupling in the Type-III seesaw model is modified compared with that in the Type-I seesaw model: one needs to add a new term $\frac{v^{2}}{N_{\alpha}}Tr(X^{\nu}_{R})Q^{2}_{\psi}N_{c}^{\psi}$ to $\tilde{E}(X)$ for $aF_{\mu\nu}\tilde{F}^{\mu\nu}$ in Eq. (25).
### V.3 Left-Right symmetric model
In the Left-Right symmetric model, the gauge group is extended from the SM gauge group to $SU(3)_{C}\times SU(2)_{L}\times SU(2)_{R}\times U(1)_{B-L}$ Mohapatra:1980yp ; Mohapatra:1974gc ; Senjanovic:1975rk . The left-handed quarks $Q_{L}$ and leptons $L_{L}$ transform as $(3,2,1,1/6)$ and $(1,2,1,-1/2)$. The right-handed quarks $Q_{R}$ and leptons $L_{R}$ are grouped into doublets of $SU(2)_{R}$ and transform as $(3,1,2,1/6)$ and $(1,1,2,-1/2)$. If a global $U(1)_{G}$ imposed on the model is spontaneously broken, a GB arises. We indicate the $U(1)_{G}$ charges in the same way as in section II.
To have a GB in the Left-Right symmetric model, at least two bi-doublets $\phi_{1,2}$ transforming as $(1,2,2,0)$ with different $U(1)_{G}$ charges need to be introduced in order to obtain phenomenologically acceptable quark mass matrices and mixing. This also implies that some generations of quarks, and likewise of leptons, must have different $U(1)_{G}$ charges. We construct a minimal model which also contains triplets $\Delta_{L}:(1,3,1,1)$ and $\Delta_{R}:(1,1,3,1)$ to realize the seesaw mechanism. It turns out that at least two different sets of triplets are needed to make the resulting $U(1)_{G}$ GB invisible, in the sense of the DFSZ type Grimus:1982qu .
As an example, the $U(1)_{G}$ charges for the various fermions and scalars, including their Left-Right components, can be set as
$\displaystyle Q_{L1}:(0),\;\;Q_{L2,3}:(X),\;\;Q_{R1}:(0),\;\;Q_{R2,3}:(-X),\;\;L_{L1}:(0),\;\;L_{L2,3}:(X),\;\;L_{R1}:(0),\;\;L_{R2,3}:(-X),$
$\displaystyle\phi_{1}:(X),\;\;\phi_{2}:(2X)\;,\;\;\Delta_{L1}:(X),\;\;\Delta_{L2}:(2X),\;\;\Delta_{R1}:(-X),\;\;\Delta_{R2}:(-2X).$ (65)
We take this type of model as an example and work out some details. With different assignments of $U(1)_{G}$ charges for the fermions, the resulting Yukawa textures will differ. This illustrates how to construct a realistic Left-Right symmetric model with an additional, spontaneously broken global $U(1)_{G}$ symmetry.
We write the bi-doublets as $\phi_{1,2}=\left(\tilde{\phi}_{1,2},\bar{\phi}_{1,2}\right)$, where $\tilde{\phi}_{j}=i\sigma_{2}\phi^{*}_{j}$. Both $\phi_{j}$ and $\bar{\phi}_{j}$ are doublets of $SU(2)_{L}$. Writing them in this way enables us to directly use the results obtained before for finding the GB field, since both transform in the same way under $SU(2)_{L}$. The components of these fields are
$\displaystyle\phi_{j}=\left(\begin{array}[]{c}h^{+}_{j}\\ {v_{j}\over\sqrt{2}}(1+{h_{j}\over v_{j}}+i{I_{j}\over v_{j}})\end{array}\right),\;\;\bar{\phi}_{j}=\left(\begin{array}[]{c}\bar{h}^{+}_{j}\\ {\bar{v}_{j}\over\sqrt{2}}(1+{\bar{h}_{j}\over\bar{v}_{j}}+i{\bar{I}_{j}\over\bar{v}_{j}})\end{array}\right),$ (72)
$\displaystyle\Delta_{Lj}=\left(\begin{array}[]{cc}{1\over\sqrt{2}}\delta^{+}_{Lj}&\;\;\delta^{++}_{Lj}\\ {v_{Lj}\over\sqrt{2}}(1+{\delta^{0}_{Lj}\over v_{Lj}}+i{I_{Lj}\over v_{Lj}})&\;\;-{1\over\sqrt{2}}\delta^{+}_{Lj}\end{array}\right),\;\;\Delta_{Rj}=\left(\begin{array}[]{cc}{1\over\sqrt{2}}\delta^{+}_{Rj}&\;\;\delta^{++}_{Rj}\\ {v_{Rj}\over\sqrt{2}}(1+{\delta^{0}_{Rj}\over v_{Rj}}+i{I_{Rj}\over v_{Rj}})&\;\;-{1\over\sqrt{2}}\delta^{+}_{Rj}\end{array}\right).$ (79)
The Yukawa interactions are given by
$\displaystyle L_{Y}=-\bar{Q}_{L}(\kappa^{q}_{1}\phi_{1}+\kappa^{q}_{2}\phi_{2})Q_{R}-\bar{L}_{L}(\kappa^{l}_{1}\phi_{1}+\kappa_{2}^{l}\phi_{2})L_{R}-\bar{L}_{L}^{c}(Y_{L1}\Delta_{L1}+Y_{L2}\Delta_{L2})L_{L}-\bar{L}_{R}^{c}(Y_{R1}\Delta_{R1}+Y_{R2}\Delta_{R2})L_{R}+\mbox{H.c.}\;.$ (80)
If there is just one bi-doublet, only one of the $\kappa$ terms is allowed in the quark and lepton sectors because of the non-zero $U(1)_{G}$ charges. This forces the up- and down-type quark mass matrices to be proportional to each other, which results in unrealistic mass relations and no mixing. This is why more than one bi-doublet is needed. Because of the assigned $U(1)_{G}$ charges, the $\kappa$ and $Y$ matrices have the following forms
$\displaystyle\kappa^{q,l}_{1}=\left(\begin{array}[]{ccc}0&\;\;K^{q,l}_{12}&\;\;K^{q,l}_{13}\\ K^{q,l}_{21}&\;\;0&\;\;0\\ K^{q,l}_{31}&\;\;0&\;\;0\end{array}\right),\;\;\;\;\kappa^{q,l}_{2}=\left(\begin{array}[]{ccc}0&\;\;0&\;\;0\\ 0&\;\;K^{q,l}_{22}&\;\;K^{q,l}_{23}\\ 0&\;\;K^{q,l}_{32}&\;\;K^{q,l}_{33}\end{array}\right),$ (87)
$\displaystyle Y^{L,R}_{1}=\left(\begin{array}[]{ccc}0&\;\;Y^{L,R}_{12}&\;\;Y^{L,R}_{13}\\ Y^{L,R}_{12}&\;\;0&\;\;0\\ Y^{L,R}_{13}&\;\;0&\;\;0\end{array}\right),\;\;Y^{L,R}_{2}=\left(\begin{array}[]{ccc}0&\;\;0&\;\;0\\ 0&\;\;Y^{L,R}_{22}&\;\;Y^{L,R}_{23}\\ 0&\;\;Y^{L,R}_{23}&\;\;Y^{L,R}_{33}\end{array}\right).$ (94)
Assuming $v_{Lj}=0$, the quark mass matrices $M_{u,d}$ and the lepton mass matrices $M_{e}$ and $M_{\nu}$ are given by
$\displaystyle M_{u}={\kappa^{q}_{1}v_{1}\over\sqrt{2}}+{\kappa^{q}_{2}v_{2}\over\sqrt{2}},\;\;M_{d}={\kappa^{q}_{1}\bar{v}_{1}\over\sqrt{2}}+{\kappa^{q}_{2}\bar{v}_{2}\over\sqrt{2}}\;;\;\;M_{e}={\kappa^{l}_{1}\bar{v}_{1}\over\sqrt{2}}+{\kappa^{l}_{2}\bar{v}_{2}\over\sqrt{2}},$
$\displaystyle M_{\nu}=\left(\begin{array}[]{cc}0&\;\;M_{D}\\ M^{T}_{D}&M_{R}\end{array}\right),\;\;\mbox{with}\;\;M_{D}={\kappa^{l}_{1}v_{1}\over\sqrt{2}}+{\kappa^{l}_{2}v_{2}\over\sqrt{2}},\;\;M_{R}={Y_{R1}v_{R1}\over\sqrt{2}}+{Y_{R2}v_{R2}\over\sqrt{2}}.$ (97)
We now work out the GB field following the method used previously. The vevs of $\Delta_{Ri}$ break $SU(2)_{R}$ and $U(1)_{B-L}$, the vevs of $\phi_{1,2}$ break both $SU(2)_{R}$ and $SU(2)_{L}$, and all of them also break $U(1)_{G}$. To identify the physical GB, we write the three broken generators $I^{L}_{3}$, $B-L$ and $A$ of the $I^{L}_{3}$, $B-L$ and $U(1)_{G}$ symmetries as
$\displaystyle I^{L}_{3}:(v_{1},\;\bar{v}_{1},\;v_{2},\;\bar{v}_{2},\;0,\;0),\;\;B-L:(0,\;0,\;0,\;0,\;v_{R1},\;v_{R2}),\;\;A:(-v_{1},\;\bar{v}_{1},\;-v_{2},\;\bar{v}_{2},\;v_{R1},\;2v_{R2})\;.$ (98)
The physical GB is the linear combination orthogonal to $I^{L}_{3}$ and $B-L$. We have
$\displaystyle a:\left(-v^{2}_{R}\bar{v}^{2}I^{L}_{3}+v^{2}\bar{v}^{2}_{R}(B-L)-v^{2}v^{2}_{R}A\right),$ (99)
where $v^{2}=v^{2}_{1}+\bar{v}^{2}_{1}+v^{2}_{2}+\bar{v}^{2}_{2}$, $\bar{v}^{2}=v^{2}_{1}-\bar{v}^{2}_{1}+v^{2}_{2}-\bar{v}^{2}_{2}$, $v_{R}^{2}=v^{2}_{R1}+v^{2}_{R2}$ and $\bar{v}^{2}_{R}=v^{2}_{R1}+2v^{2}_{R2}$. Expressing $a$ in terms of the $I_{j}$ fields of the various scalars, we have
$\displaystyle a={1\over N_{\alpha}}\left[-v^{2}_{R}(\bar{v}^{2}-v^{2})v_{1}I_{1}-v^{2}_{R}(\bar{v}^{2}-v^{2})v_{2}I_{2}-v^{2}_{R}(\bar{v}^{2}+v^{2})\bar{v}_{1}\bar{I}_{1}-v^{2}_{R}(\bar{v}^{2}+v^{2})\bar{v}_{2}\bar{I}_{2}+v^{2}(\bar{v}^{2}_{R}-v^{2}_{R})v_{R1}I_{R1}+v^{2}(\bar{v}^{2}_{R}-2v^{2}_{R})v_{R2}I_{R2}\right].$ (100)
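As a consistency check, the orthogonality of the physical GB direction in Eq. (99) to the eaten $I^{L}_{3}$ and $B-L$ directions can be verified symbolically. The following short sketch is ours and simply transcribes the vectors of Eq. (98) and the combination of Eq. (99):
```python
import sympy as sp

# vevs entering eqs. (98)-(100); names follow the text
v1, v1b, v2, v2b, vR1, vR2 = sp.symbols('v1 v1b v2 v2b vR1 vR2', real=True)

# broken-generator vectors of eq. (98), in the component order
# (I_1, Ibar_1, I_2, Ibar_2, I_R1, I_R2)
I3 = sp.Matrix([v1, v1b, v2, v2b, 0, 0])
BL = sp.Matrix([0, 0, 0, 0, vR1, vR2])
A = sp.Matrix([-v1, v1b, -v2, v2b, vR1, 2*vR2])

v_sq = v1**2 + v1b**2 + v2**2 + v2b**2      # v^2
vbar_sq = v1**2 - v1b**2 + v2**2 - v2b**2   # vbar^2
vR_sq = vR1**2 + vR2**2                     # v_R^2
vRbar_sq = vR1**2 + 2*vR2**2                # vbar_R^2

# physical GB direction of eq. (99)
a = -vR_sq*vbar_sq*I3 + v_sq*vRbar_sq*BL - v_sq*vR_sq*A

# orthogonality to the directions eaten by the gauge bosons
print(sp.simplify(a.dot(I3)))  # -> 0
print(sp.simplify(a.dot(BL)))  # -> 0
```
Expanding the components of $a$ in this basis reproduces Eq. (100) directly.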
Note that if there is only one $\Delta_{Rj}$, or if both $\Delta_{Rj}$ have the same $U(1)_{G}$ charge, there is no $I_{Rj}$ component in $a$; the axion decay constant is then of order $v$, corresponding to a visible axion.
We obtain the GB currents for charged fermions and neutrinos in the form given in Eqs. (29) and (34) with
$\displaystyle X^{u}_{L}=X^{d}_{L}=X^{e}_{L}=X^{\nu}_{L}=\left(\begin{array}[]{ccc}0&\;0&\;0\\ 0&\;-1&\;0\\ 0&\;0&\;-1\end{array}\right),\;\;X^{u}_{R}=X^{d}_{R}=X^{e}_{R}=X^{\nu}_{R}=\left(\begin{array}[]{ccc}0&\;\;0&\;\;0\\ 0&\;\;1&\;\;0\\ 0&\;\;0&\;\;1\end{array}\right),$ (107)
where for $u$, $d$ and $e$ one replaces $\bar{v}^{2}/N_{\alpha}$ and $v^{2}/N_{\alpha}$ by $-v^{2}_{R}\bar{v}^{2}/N_{\alpha}$ and $-v^{2}_{R}v^{2}/N_{\alpha}$, and for the right-handed neutrinos one replaces $(v^{2}/N_{\alpha})X^{\nu}_{R}$ by $v^{2}(\bar{v}^{2}_{R}-v^{2}_{R})X^{\nu}_{R}/N_{\alpha}$.
## VI Discussions and Conclusions
We have carried out a systematic model-building study of FCNC GB interactions in the three generations of fermions, both jointly and separately in the quark, charged-lepton and neutrino sectors. It is based on renormalizable models with an additional $U(1)_{G}$ global symmetry which is spontaneously broken alongside the gauge symmetries of the model. Several popular models have been discussed. To study how FCNC GB interactions emerge, we have developed a method to identify the GB in a model beyond the SM with an additional $U(1)_{G}$ global symmetry broken by an arbitrary number of Higgs bosons. Although our main aim is to study how FCNC GB interactions emerge, we find that our method can easily be used to build a desired model and to provide insight into some general properties of GB interactions in a simple fashion. Many models studied in the literature can be reproduced by simply assigning the appropriate $U(1)_{G}$ charges, as discussed in the previous sections. We also provide a general proof of the equivalence of using the physical GB components and the GB broken generators for calculating GB couplings to two gluons and two photons, although the two have different forms. The final results depend only on the $U(1)_{G}$ charges $X^{f}_{L,R}$ and on the colored and electrically charged particle content of the model. Parameters in the FCNC interactions do not affect GB interactions with two gluons and two photons. We have also shown that in models with spontaneous CP violation, no new CP-violating phases appear in the GB-fermion interactions.
For FCNC GB interactions with fermions, we find two types of sources. One is that different generations of fermions carry different $U(1)_{G}$ charges; the other is the mass splitting between left- and right-handed particles, such as the neutrino masses in the Type-I and Type-III seesaw models. Even if all generations have the same $U(1)_{G}$ charges, there are in general still FCNC GB interactions with neutrinos, which had not been studied carefully before. In the Type-III seesaw model, there are also FCNC GB interactions with charged leptons. In the Type-II seesaw model, at least two triplets are needed to have FCNC GB interactions with fermions. In the Left-Right symmetric model, to make the FCNC GB interactions with fermions invisible, at least two bi-doublets plus more than one triplet scalar need to be introduced, similar to the DFSZ model.
Whether or not fundamental GBs exist is, of course, an experimental question. Several high-luminosity facilities now in operation, such as BESIII, LHCb, and Belle II, will provide more information. We eagerly await more data with which to test models having FCNC GB interactions with fermions.
###### Acknowledgements.
This work was supported in part by NSFC (Grants 11735010, 11975149, 12090064),
by Key Laboratory for Particle Physics, Astrophysics and Cosmology, Ministry
of Education, and Shanghai Key Laboratory for Particle Physics and Cosmology
(Grant No. 15DZ2272100), and in part by the MOST (Grants
No.109-2112-M-002-017-MY3).
## References
* (1) Y. Nambu, Phys. Rev. 117 (1960), 648-663.
* (2) J. Goldstone, Nuovo Cim. 19 (1961), 154-164.
* (3) F. Englert and R. Brout, Phys. Rev. Lett. 13 (1964), 321-323.
* (4) G. S. Guralnik, C. R. Hagen and T. W. B. Kibble, Phys. Rev. Lett. 13 (1964), 585-587.
* (5) P. W. Higgs, Phys. Rev. 145 (1966), 1156-1163.
* (6) S. Weinberg, Phys. Rev. Lett. 40 (1978), 223-226.
* (7) F. Wilczek, Phys. Rev. Lett. 40 (1978), 279-282.
* (8) R. D. Peccei and H. R. Quinn, Phys. Rev. Lett. 38 (1977), 1440-1443.
* (9) R. D. Peccei and H. R. Quinn, Phys. Rev. D 16 (1977), 1791-1797.
* (10) Y. Chikashige, R. N. Mohapatra and R. D. Peccei, Phys. Rev. Lett. 45 (1980), 1926.
* (11) J. E. Kim, Phys. Rept. 150 (1987), 1-177.
* (12) H. Y. Cheng, Phys. Rept. 158 (1988), 1.
* (13) L. Di Luzio, M. Giannotti, E. Nardi and L. Visinelli, Phys. Rept. 870 (2020), 1-117.
* (14) G. Ballesteros, J. Redondo, A. Ringwald and C. Tamarit, Phys. Rev. Lett. 118 (2017) no.7, 071802.
* (15) A. Celis, J. Fuentes-Martin and H. Serodio, Phys. Lett. B 741 (2015), 117-123.
* (16) W. J. Marciano, A. Masiero, P. Paradisi and M. Passera, Phys. Rev. D 94 (2016) no.11, 115033.
* (17) L. Calibbi, F. Goertz, D. Redigolo, R. Ziegler and J. Zupan, Phys. Rev. D 95 (2017) no.9, 095009.
* (18) Y. Ema, K. Hamaguchi, T. Moroi and K. Nakayama, JHEP 01 (2017), 096.
* (19) J. Heeck, [arXiv:1709.07670 [hep-ph]].
* (20) X. Cid Vidal, A. Mariotti, D. Redigolo, F. Sala and K. Tobioka, JHEP 01 (2019), 113 [erratum: JHEP 06 (2020), 141].
* (21) J. Heeck and H. H. Patel, Phys. Rev. D 100 (2019) no.9, 095015.
* (22) C. Cornella, P. Paradisi and O. Sumensari, JHEP 01 (2020), 158.
* (23) J. Martin Camalich, M. Pospelov, P. N. H. Vuong, R. Ziegler and J. Zupan, Phys. Rev. D 102 (2020) no.1, 015023.
* (24) P. A. Zyla et al. [Particle Data Group], PTEP 2020 (2020) no.8, 083C01.
* (25) L. Calibbi, D. Redigolo, R. Ziegler and J. Zupan, [arXiv:2006.04795 [hep-ph]].
* (26) Y. Cheng, C. W. Chiang, X. G. He and J. Sun, [arXiv:2012.15287 [hep-ph]].
* (27) J. Schechter and J. W. F. Valle, Phys. Rev. D 25 (1982), 774.
* (28) G. B. Gelmini and J. W. F. Valle, Phys. Lett. B 142 (1984), 181-187.
* (29) A. A. Anselm, N. G. Uraltsev and M. Y. Khlopov, Sov. J. Nucl. Phys. 41 (1985), 1060 LENINGRAD-85-1034.
* (30) M. C. Gonzalez-Garcia and J. W. F. Valle, Phys. Lett. B 216 (1989), 360-366.
* (31) Z. G. Berezhiani and M. Y. Khlopov, Z. Phys. C 49 (1991), 73-78.
* (32) A. Pilaftsis, Phys. Rev. D 49 (1994), 2398-2404.
* (33) R. N. Mohapatra and G. Senjanovic, Z. Phys. C 17 (1983), 53-56.
* (34) S. L. Glashow and S. Weinberg, Phys. Rev. D 15 (1977), 1958.
* (35) G. ’t Hooft, Phys. Rev. Lett. 37 (1976), 8-11.
* (36) P. Minkowski, Phys. Lett. B 67 (1977), 421-428.
* (37) M. Gell-Mann, P. Ramond, and R. Slansky, in Proceedings of the Workshop on Super- gravity (Stony Brook, New York, 1979), edited by D. Freedman and P. van Nieuwenhuizen (North-Holland, Amsterdam, 1979).
* (38) S. L. Glashow, NATO Sci. Ser. B 61 (1980), 687.
* (39) R. N. Mohapatra and G. Senjanovic, Phys. Rev. Lett. 44 (1980), 912.
* (40) T. Yanagida, Prog. Theor. Phys. 64 (1980), 1103.
* (41) A. R. Zhitnitsky, Sov. J. Nucl. Phys. 31 (1980), 260.
* (42) M. Dine, W. Fischler and M. Srednicki, Phys. Lett. B 104 (1981), 199-202.
* (43) J. Sun and X. G. He, Phys. Lett. B 811 (2020), 135881.
* (44) S. L. Chen, N. G. Deshpande, X. G. He, J. Jiang and L. H. Tsai, Eur. Phys. J. C 53 (2008), 607-614.
* (45) X. G. He and L. H. Tsai, Eur. Phys. J. C 71 (2011), 1598.
* (46) J. Pan, J. Sun, X. D. Ma and X. G. He, Phys. Lett. B 807 (2020), 135573.
* (47) W. Grimus, Z. Phys. C 20 (1983), 141.
* (48) R. N. Mohapatra and J. C. Pati, Phys. Rev. D 11 (1975), 2558.
* (49) G. Senjanovic and R. N. Mohapatra, Phys. Rev. D 12 (1975), 1502.
* (50) C. Bouchiat, J. Iliopoulos and P. Meyer, Phys. Lett. B 38 (1972), 519-523.
* (51) C. Q. Geng and R. E. Marshak, Phys. Rev. D 39 (1989), 693.
* (52) E. Ma, T. Ohata and K. Tsumura, Phys. Rev. D 96 (2017) no.7, 075039.
* (53) A. A. Anselm and N. G. Uraltsev, Phys. Lett. B 116 (1982), 161-164.
* (54) A. Abada, C. Biggio, F. Bonnet, M. B. Gavela and T. Hambye, JHEP 12 (2007) 061.
* (55) X. G. He and S. Oh, JHEP 09 (2009) 027.
* (56) E. Fernandez-Martinez, J. Hernandez-Garcia and J. Lopez-Pavon, JHEP 08 (2016) 033.
* (57) X. G. He, S. Oh, J. Tandean and C. C. Wen, Phys. Rev. D80 (2009) 073012.
* (58) C. Q. Geng, X. D. Jiang and J. N. Ng, Phys. Rev. D 38 (1988), 1628.
* (59) X. G. He and R. R. Volkas, Phys. Lett. B 208 (1988), 261 [erratum: Phys. Lett. B 218 (1989), 508].
* (60) M. Magg and C. Wetterich, Phys. Lett. B 94 (1980), 61-64.
* (61) T. P. Cheng and L. F. Li, Phys. Rev. D 22 (1980), 2860.
* (62) G. Lazarides, Q. Shafi and C. Wetterich, Nucl. Phys. B 181 (1981), 287-300.
* (63) R. N. Mohapatra and G. Senjanovic, Phys. Rev. D 23 (1981), 165.
* (64) G. B. Gelmini and M. Roncadelli, Phys. Lett. B 99 (1981), 411-415.
* (65) H. M. Georgi, S. L. Glashow and S. Nussinov, Nucl. Phys. B 193 (1981), 297-316.
* (66) R. Foot, H. Lew, X. G. He and G. C. Joshi, Z. Phys. C 44 (1989), 441.
# Mean-field approximations of networks of spiking neurons with short-term synaptic plasticity

Richard Gast, Thomas R. Knösche (also at the Institute for Biomedical Engineering and Informatics, TU Ilmenau, Germany), and Helmut Schmidt
Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
###### Abstract
Low-dimensional descriptions of spiking neural network dynamics are an
effective tool for bridging different scales of organization of brain
structure and function. Recent advances in deriving mean-field descriptions
for networks of coupled oscillators have sparked the development of a new
generation of neural mass models. Of notable interest are mean-field
descriptions of all-to-all coupled quadratic integrate-and-fire (QIF) neurons,
which have already seen numerous extensions and applications. These extensions
include different forms of short-term adaptation (STA) considered to play an
important role in generating and sustaining dynamic regimes of interest in the
brain. It is an open question, however, whether the incorporation of pre-
synaptic forms of synaptic plasticity driven by single neuron activity would
still permit the derivation of mean-field equations using the same method.
Here, we discuss this problem using an established model of short-term
synaptic plasticity at the single neuron level, for which we present two
different approaches for the derivation of the mean-field equations. We
compare these models with a recently proposed mean-field approximation that
assumes stochastic spike timings. In general, the latter fails to accurately
reproduce the macroscopic activity in networks of deterministic QIF neurons
with distributed parameters. We show that the mean-field models we propose
provide a more accurate description of the network dynamics, although they are
mathematically more involved. Using bifurcation analysis, we find that QIF
networks with pre-synaptic short-term plasticity can express regimes of
periodic bursting activity as well as bi-stable regimes. Together, we provide
novel insight into the macroscopic effects of short-term synaptic plasticity
in spiking neural networks, as well as two different mean-field descriptions
for future investigations of such networks.
## I Low-Dimensional Manifolds of Spiking Neural Network Activity
The brain can generate a variety of highly complex and chaotic patterns of
neural activity [1]. However, given the vast number of neurons in the brain,
these patterns appear to be less complex than they could be theoretically,
indicating a high level of neuronal redundancy [2, 3]. Electrophysiological
recordings of macroscopic neural activity have revealed highly stereotyped
responses to sensory stimulation as well as strongly synchronized regimes of
neural activity [4, 5, 6, 7]. More recently, multi-unit recordings have
demonstrated that strong redundancies are present at the level of spiking
neurons as well [8, 9]. These findings indicate the existence of low-
dimensional manifolds in the state space of the brain that typically govern
its neural dynamics and its response to extrinsic stimulation. The
identification and description of such low-dimensional manifolds has been a
central topic of neuroscientific research for many years [10, 11, 12, 13, 14,
15]. Different approaches for the derivation of mathematical descriptions of
the temporal evolution of low-dimensional neural activity have been proposed
[16]. Among those are classic neural mass models that use direct,
phenomenological descriptions of macroscopic measures of neural dynamics [17,
18, 19, 20, 21]. For these neural mass models, equivalent spiking neural
networks do not exist in general. Other approaches make use of probabilistic
descriptions of the evolution of the collective behavior inside a neural
population [22, 23, 24], which make it possible to capture the statistics
inside the spiking neural network up to a certain order. However, some of
these approaches are restricted to asynchronous regimes of neural activity
[22, 23], whereas others use approximations of random fluctuations in the
spiking neural network [24]. Hence, neither of these approaches provide a
mathematically exact set of mean-field equations that can describe the
macroscopic dynamics of a spiking neural network in general.
The Ott-Antonsen ansatz has provided a new tool to derive mean-field models of
spiking neural networks [25]. While originally devised for networks of all-to-
all coupled Kuramoto oscillators [26], it has since been applied to networks
of theta neurons [27, 28], and, most relevant to this study, to networks of
all-to-all coupled quadratic integrate-and-fire (QIF) neurons [29]. For future
applications of this method, it is of interest to know how well the derivation
of the mean-field equations generalizes to other descriptions of neural
dynamics than the particular QIF networks considered in [29]. Indeed, various extensions of the QIF model have been proposed that added
biophysical mechanisms or structural details to the model in order to explain
interesting neurodynamic phenomena, such as the onset of synchronized neural
activity [30, 31, 32, 33, 34]. Particularly interesting are extensions that
include dynamic variables which are not driven by the mean-field activity of
the network, but by neuron- or synapse-specific processes instead. In such
cases, it is unclear whether mean-field equations can still be found. In [34],
the QIF network was extended by a spike-frequency adaptation mechanism, where
a neuron-specific adaptation current was elicited by the spiking activity of
the same neuron. Thus, the adaptation variable was not simply driven by the
mean-field activity of the network. To derive the mean-field equations
nonetheless, the authors applied an adiabatic approximation to the adaptation
dynamics. This approximation assumes that the adaptation variable evolves
slowly in comparison to the membrane potential dynamics and permits one to
apply the mean-field derivation on the fast time-scale. Based on this mean-
field model it will be possible to investigate the effects of neuron-specific
currents at meso- and macroscopic scales, such as for example the effects of
calcium-dependent spikes on thalamic dynamics [35] or the effects of spike-
frequency adaptation on cortical microcircuits [36].
In this work, we address the question of whether exact mean-field equations
can be derived for QIF networks with synapse-specific dynamic variables.
Synaptic dynamics are especially interesting for the computational modeling of
macroscopic neurodynamic phenomena. This is because synaptic currents are
thought to trigger the potential changes visible in macroscopic
electrophysiological recordings of brain activity, and different synapse types
come with different dynamic characteristics that are pivotal for our
understanding of brain dynamics. Classic neural mass models, for example,
typically use different synaptic time scales to model rhythm generation in the
brain [18, 20, 21]. The QIF mean-field reduction generalizes to any
convolution of the synaptic input with a synaptic response kernel [29, 30]
and, hence, allows one to derive mean-field descriptions of QIF networks with
standard descriptions of synaptic dynamics such as the alpha kernel
convolution [20, 21]. However, given appropriate stimulation, synaptic
dynamics also undergo short-term plasticity (STP) that changes properties of
the synaptic response. It has been shown that synapses can express short-term
depression and facilitation and that time scales and strengths of these two
STP forms differ between synapse and neuron types. Moreover, synaptic STP has
been linked to various functions and dynamic properties of the brain, such as
working memory [37] or operating in a critical regime [38]. A generalization
of the above discussed mean-field approaches to neural networks with synaptic
STP would thus provide a valuable tool for modeling brain dynamics and
function at the meso- and macroscopic level.
Here, we discuss the descriptions of synaptic STP that are allowed for in the
context of deriving Ott-Antonsen manifolds for heterogeneous QIF networks.
Recent work has demonstrated that mean-field equations can be derived for QIF
networks with synaptic STP if two conditions are satisfied [34]: First, each
time a neuron spikes in the network, it triggers synaptic STP at every other
neuron, which is the case in all-to-all coupled networks. Second, a single
incoming spike triggers synaptic STP at all synapses of a neuron. Under those
conditions, synaptic STP is no longer neuron specific and can simply be
treated as a macroscopic variable driven by the mean-field activity of the
network. This form of synaptic STP could be used to model forms of post-
synaptic receptor desensitization, short-term changes in the number of
available post-synaptic receptors, or resource depletion at the post-synaptic
complex. Importantly, it cannot be considered to represent pre-synaptic forms
of plasticity, such as vesicle depletion. While the first assumption would
still hold for pre-synaptic STP in all-to-all coupled QIF networks, the second
assumption would not. Pre-synaptic resource depletion cannot be assumed to
affect all network connections, but only the efferent connections of a
specific neuron (see Fig. 1).
Figure 1: Pre- vs. Post-Synaptic Forms of Short-Term Plasticity. Nodes
represent neurons in an all-to-all coupled network and edges between the nodes
represent bidirectional synaptic couplings. Red nodes are active, i.e. did
just spike, whereas blue nodes have not spiked for a sufficient period in
time. Edges that are colored in red show adaptation in response to the
activity of the red nodes, whereas grey edges do not. The two equations
describe the membrane potential evolution of a QIF neuron for the cases of
pre- and post-synaptic plasticity. Note that the adaptation variable $A_{i}$
is specific for pre-synaptic source neurons for the former case, and specific
to post-synaptic target neurons for the latter.
A well established model of pre-synaptic STP is the phenomenological model
introduced in [39], which describes the dynamics of pre-synaptic facilitation
and depression. We will discuss the derivation of mean-field equations for QIF
networks with pre-synaptic STP with respect to this model, though we will
discuss the implications of our findings for general descriptions of pre-
synaptic STP dynamics as well. In the following section, we define the
microscopic model under consideration. This will be followed by sections in
which we discuss different approaches to derive equations for the low-
dimensional network dynamics. While we do not find the exact mean-field
equations for QIF networks with pre-synaptic STP, we provide two different
approximations that match well with the QIF network dynamics. We point to the
problems that would have to be solved in future attempts at an exact mean-
field derivation and evaluate the accuracy of our approximate solutions via
numerical simulations and bifurcation analysis.
## II Low-Dimensional Manifolds of QIF Networks with STP
We consider a network of $N$ all-to-all coupled QIF neurons with pre-synaptic
STP
$\displaystyle\tau\dot{V}_{i}$ $\displaystyle=V_{i}^{2}+\eta_{i}+I(t)+\frac{J\tau}{N}\sum_{j=1}^{N}X_{j}^{-}U_{j}^{+}S_{j},$ (1a)
$\displaystyle\tau_{x}\dot{X}_{i}$ $\displaystyle=1-X_{i}-\alpha X_{i}^{-}U_{i}^{+}S_{i}\tau_{x},$ (1b)
$\displaystyle\tau_{u}\dot{U}_{i}$ $\displaystyle=U_{0}-U_{i}+U_{0}(1-U_{i}^{-})S_{i}\tau_{u},$ (1c)
$\displaystyle S_{i}$ $\displaystyle=\sum_{k\backslash t_{i}^{k}<t}\int_{-\infty}^{t}a(t-t^{\prime})\delta(t^{\prime}-t_{i}^{k})dt^{\prime},$ (1d)
where eq. (1d) represents a convolution of the spiking activity of neuron $i$
with a synaptic response kernel $a$, e.g. in the case of exponential synapses
$a(t)=\mbox{e}^{-t/\tau_{s}}/\tau_{s}$ with synaptic time scale $\tau_{s}$. A
neuron $i$ emits its $k^{th}$ spike at time $t_{i}^{k}$ when it reaches a threshold $V_{\theta}$, upon which $V_{i}$ is reset to $V_{r}=-V_{\theta}$. Without loss of generality, we consider the limit $\tau_{s}\rightarrow 0$, such that $S_{i}$ represents the spiking activity of neuron $i$. Eq. (1b) and eq. (1c)
resemble the pre-synaptic STP mechanism described in [39]. We note here that
$\cdot^{-}$ denotes a quantity just before a spike occurs (left limit), and
$\cdot^{+}$ denotes a quantity just after the neuron spiked (right limit).
This discontinuity accounts for the biological fact that a pre-synaptic spike
triggers synaptic facilitation before it can affect the post-synaptic neuron,
by moving vesicles closer to the membrane. Synaptic depression, however,
results from the consumption of vesicles for the synaptic transmission process
and is thus affected slightly later than synaptic facilitation. We assume
neural spiking activity to affect all outgoing synapses of a neuron equally,
hence $X_{i}$ and $U_{i}$ can be considered as neuron- and not synapse-
specific. The adaptation dynamics are controlled by the depression and
facilitation time constants $\tau_{x}$ and $\tau_{u}$, a depression strength
$\alpha$, and a baseline synaptic efficacy $U_{0}$. Eq. (1a) describes the
evolution of the membrane potential $V_{i}$ of neuron $i$, which depends on a
background excitability parameter $\eta_{i}$, an extrinsic forcing term
$I(t)$, the membrane time constant $\tau$, and the coupling with the network
activity. The latter is given by a sum over the output $S_{i}$ of each neuron
in the network, weighted by a global coupling strength $J$, and the neuron-
specific synaptic depression $X_{i}$ and facilitation $U_{i}$.
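The network (1a-1d) can also be simulated directly for comparison with the mean-field models derived below. The following is a minimal Python sketch of ours, not the authors' implementation, assuming a finite peak potential $V_{\mathrm{peak}}=100$ to approximate $V_{\theta}\rightarrow\infty$, taking the limit $\tau_{s}\rightarrow 0$, and adopting the parameter values used in the simulations reported later:
```python
import numpy as np

def simulate_snn_pre(N=10000, T_end=40.0, dt=1e-4, tau=1.0, Delta=2.0,
                     eta_bar=-3.0, J=15.0 * np.sqrt(2.0), U0=0.2, alpha=0.1,
                     tau_x=50.0, tau_u=20.0, V_peak=100.0, seed=0):
    """Explicit-Euler sketch of eqs. (1) with pre-synaptic STP."""
    rng = np.random.default_rng(seed)
    # Lorentzian-distributed background excitabilities (inverse-CDF sampling)
    eta = eta_bar + Delta * np.tan(np.pi * (rng.random(N) - 0.5))
    V = rng.standard_normal(N)
    X, U = np.ones(N), np.full(N, U0)
    r_eff, rates = 0.0, []
    for _ in range(int(T_end / dt)):
        V += dt / tau * (V**2 + eta) + dt * J * r_eff
        spk = V >= V_peak
        V[spk] = -V_peak                  # reset to V_r = -V_theta
        X += dt * (1.0 - X) / tau_x       # relaxation between spikes
        U += dt * (U0 - U) / tau_u
        # spike-triggered updates: U jumps first (U^+), transmission uses
        # X^- U^+, and depression depletes X afterwards, as in eqs. (1b, 1c)
        U[spk] += U0 * (1.0 - U[spk])
        r_eff = (X[spk] * U[spk]).sum() / (N * dt)
        X[spk] *= 1.0 - alpha * U[spk]
        rates.append(spk.sum() / (N * dt))
    return np.array(rates)
```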
In the limit $V_{\theta}\rightarrow\infty$, the membrane potential $V_{i}$ of a QIF neuron can be directly related to its phase via the transform $V_{i}=\tan(\frac{\theta_{i}}{2})$. Under this transformation, (1a-1d)
represents a network of theta neurons [40], which can be considered a network
of globally coupled oscillators. Thus, the network satisfies the conditions
for the existence of the Ott-Antonsen manifold, a low-dimensional manifold
along which the network dynamics are guaranteed to evolve for
$N\rightarrow\infty$ [25, 41]. This manifold can be described for (1a-1d) by
following the Lorentzian ansatz described in [29], i.e. by making the
assumption that the state variables $V_{i}$ are distributed according to a
Lorentzian where the probability density of $V$ for background excitability
$\eta$ at time $t$ is given by
$\rho(V|\eta,t)=\frac{1}{\pi}\frac{z(\eta,t)}{[V-y(\eta,t)]^{2}+z(\eta,t)^{2}}.$
(2)
The center $y(\eta,t)$ and half-width-at-half-maximum (HWHM) $z(\eta,t)$ of
eq. (2) are associated with the mean firing rate $r(\eta,t)$ and the membrane
potential average over all neurons $v(\eta,t)$ via $z(\eta,t)=\pi r(\eta,t)$,
and $y(\eta,t)=v(\eta,t)$, respectively. Due to the conservation of the number
of neurons, the network dynamics obey the following continuity equation:
$\partial_{t}\rho+\partial_{V}\left[\left(\frac{V^{2}+\eta+I}{\tau}+Jr_{\mathrm{eff}}\right)\rho\right]=0,$
(3)
where $r_{\mathrm{eff}}=\frac{1}{N}\sum_{j=1}^{N}X_{j}^{-}U_{j}^{+}S_{j}$ is
the effective mean-field network activity that arrives at each neuron. By
inserting eq. (2) into eq. (3) it can be shown that the dynamics of
$z(\eta,t)$ and $y(\eta,t)$ obey
$\partial_{t}w(\eta,t)=i\left[\frac{-w(\eta,t)^{2}+\eta+I}{\tau}+Jr_{\mathrm{eff}}\right],$
(4)
for any $\eta$, with $w(\eta,t)=z(\eta,t)+iy(\eta,t)$. Without synaptic STP,
i.e. for $U(t)=X(t)=1$, eq. (4) can be solved for certain choices of the
background excitability distribution. The most drastic reduction in the
dimensionality of the system can be achieved by choosing a Lorentzian
distribution with density function
$g(\eta)=\frac{1}{\pi}\frac{\Delta}{(\eta-\bar{\eta})^{2}+\Delta^{2}},$ (5)
where $\bar{\eta}$ and $\Delta$ represent the center and HWHM of the
distribution, respectively. This choice allows one to solve
$\dot{w}=\int_{-\infty}^{\infty}\partial_{t}w(\eta,t)g(\eta)d\eta$ (6)
using the residue theorem of complex analysis, i.e. by evaluating the integral at the poles of $g(\eta)$, located at $\bar{\eta}\pm i\Delta$. Subsequently, eq. (4) can be solved for $r$ and $v$, yielding
$\displaystyle\tau\dot{r}$ $\displaystyle=\frac{\Delta}{\pi\tau}+2rv,$ (7a)
$\displaystyle\tau\dot{v}$ $\displaystyle=v^{2}+\bar{\eta}+I(t)+Jr\tau-(\pi r\tau)^{2},$ (7b)
where we additionally used $r_{\mathrm{eff}}=\frac{1}{N}\sum_{j=1}^{N}S_{j}=r$.
However, for non-constant $X$ and $U$, solving eq. (4) for $r$ and $v$ becomes
a non-trivial problem. In this case,
$r_{\mathrm{eff}}=\frac{1}{N}\sum_{j=1}^{N}X_{j}^{-}U_{j}^{+}S_{j}\neq r$ and,
hence, $r_{\mathrm{eff}}$ must be calculated to arrive at closed-form
equations for $r$ and $v$. Two major problems have to be solved in this
regard: (a) The effective network input $r_{\mathrm{eff}}$ has to be expressed
via mean-field variables such as the average firing rate $r$ and average
depression and facilitation variables $x$ and $u$. If this cannot be done, the
mean-field equations would still contain neuron-specific variables, thus
increasing their dimensionality dramatically. (b) The mean-field equations for
the average depression $x=\frac{1}{N}\sum_{i=1}^{N}X_{i}$ and facilitation
$u=\frac{1}{N}\sum_{i=1}^{N}U_{i}$ have to be solved. However, evaluating these sums requires one to solve the coupled, non-linear differential equations (1b) and (1c), which has so far only been achieved for stationary network input [39]. In the following section, we will address problem (b) and
compare our results with recently proposed mean-field equations for a similar
synaptic STP model [42]. The remainder of this article will address different
attempts to solve problem (a).
## III Analytical solutions for microscopic STP
As argued in the previous section, finding closed-form mean-field equations
for the system given by equations (1) requires one to calculate the average
depression $x=\frac{1}{N}\sum_{i=1}^{N}X_{i}$ and average facilitation
$u=\frac{1}{N}\sum_{i=1}^{N}U_{i}$ across neurons. We start by considering
neuron $i$ that spikes periodically with a period $T_{i}$, thus producing a spike train $S_{i}(t)=\sum_{n=-\infty}^{\infty}\delta(t-nT_{i})$. The inter-spike interval $T_{i}$ corresponds to a firing rate of $1/T_{i}$. In this scenario,
solutions for the microscopic STP variables can be obtained analytically [39].
The evolution equations for synaptic short-term depression $X_{i}$ and short-
term facilitation $U_{i}$ are given by eq. (1b) and eq. (1c), respectively.
For the remainder of this section, we will omit the neuron index $i$ for
brevity. The (relative) strength of a synapse is given by $0<U^{+}X^{-}<1$. We
denote $U$ by $U_{n}^{-}$ just before the corresponding neuron emitted its
$n^{th}$ spike, and by $U_{n}^{+}$ just after the $n^{th}$ spike. Solving the
homogeneous part of the model equation, we obtain
$U_{n+1}^{-}=U_{0}+(U_{n}^{+}-U_{0})\exp(-T/\tau_{u}),$ (8)
and the change of $U$ due to a spike is found to be
$U_{n+1}^{+}=U_{n+1}^{-}+U_{0}(1-U_{n+1}^{-}).$ (9)
These expressions can be reformulated into the following iteration scheme:
$\displaystyle U_{n+1}^{+}$
$\displaystyle=U_{0}+(1-U_{0})(U_{0}+(U_{n}^{+}-U_{0})\mbox{e}^{-T/\tau_{u}}),$
(10a) $\displaystyle U_{n+1}^{-}$
$\displaystyle=U_{0}+(1-U_{0})U_{n}^{-}\mbox{e}^{-T/\tau_{u}}.$ (10b)
For the depression variable $X$, we find the following set of equations:
$\displaystyle X_{n+1}^{+}$ $\displaystyle=1+\left((1-\alpha
U_{n}^{+})X_{n}^{-}-1\right)\mbox{e}^{-T/\tau_{x}},$ (11a) $\displaystyle
X_{n+1}^{-}$ $\displaystyle=(1-\alpha
U_{n+1}^{+})(1+(X_{n}^{+}-1)\mbox{e}^{-T/\tau_{x}}).$ (11b)
In the stationary case, i.e. in the absence of transient dynamics, stationary
solutions $U_{\star}^{+}=U_{n}^{+}$, $U_{\star}^{-}=U_{n}^{-}$ and
$X_{\star}^{-}=X_{n}^{-},\,\forall n$ can be found:
$\displaystyle U_{\star}^{+}$
$\displaystyle=\frac{U_{0}+U_{0}(1-U_{0})(1-\exp(-T/\tau_{u}))}{1-(1-U_{0})\exp(-T/\tau_{u})},$
(12a) $\displaystyle U_{\star}^{-}$
$\displaystyle=\frac{U_{0}}{1-(1-U_{0})\exp(-T/\tau_{u})},$ (12b)
$\displaystyle X_{\star}^{+}$ $\displaystyle=\frac{(1-\alpha
U_{\star}^{+})(1-\exp(-T/\tau_{x}))}{1-(1-\alpha
U_{\star}^{+})\exp(-T/\tau_{x})},$ (12c) $\displaystyle X_{\star}^{-}$
$\displaystyle=\frac{1-\exp(-T/\tau_{x})}{1-(1-\alpha
U_{\star}^{+})\exp(-T/\tau_{x})}.$ (12d)
It is interesting to note that these results differ from the results when the
firing rate is assumed to be a constant, i.e. when $S_{i}=r_{0}=\rm{const.}$
In this case, we set $\dot{U}=\dot{X}=0$, and obtain
$\displaystyle U_{\star}$ $\displaystyle=\displaystyle{\frac{U_{0}+U_{0}\tau_{u}r_{0}}{1+U_{0}\tau_{u}r_{0}}},$ (13a)
$\displaystyle X_{\star}$ $\displaystyle=\displaystyle{\frac{1}{1+\alpha\tau_{x}U_{\star}r_{0}}},$ (13b)
where we have made use of $U^{+}_{\star}=U^{-}_{\star}=U_{\star}$ as well as $X^{+}_{\star}=X^{-}_{\star}=X_{\star}$, since spike times are irrelevant in this case. The spike-based and rate-based descriptions can be compared by equating $r_{0}=1/T$.
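The stationary expressions lend themselves to a direct numerical comparison. The following Python sketch of ours simply transcribes eqs. (12) and (13) and evaluates them at matched rates $r_{0}=1/T$; the parameter values are those used in figure 2:
```python
import numpy as np

def stationary_spiking(T, U0=0.2, alpha=0.1, tau_u=20.0, tau_x=50.0):
    """Stationary values for strictly periodic spiking, eqs. (12)."""
    eu, ex = np.exp(-T / tau_u), np.exp(-T / tau_x)
    Up = (U0 + U0 * (1 - U0) * (1 - eu)) / (1 - (1 - U0) * eu)      # U*^+
    Um = U0 / (1 - (1 - U0) * eu)                                   # U*^-
    Xp = (1 - alpha * Up) * (1 - ex) / (1 - (1 - alpha * Up) * ex)  # X*^+
    Xm = (1 - ex) / (1 - (1 - alpha * Up) * ex)                     # X*^-
    return Up, Um, Xp, Xm

def stationary_rate(r0, U0=0.2, alpha=0.1, tau_u=20.0, tau_x=50.0):
    """Fixed points for a constant firing rate, eqs. (13)."""
    U = (U0 + U0 * tau_u * r0) / (1 + U0 * tau_u * r0)
    X = 1.0 / (1 + alpha * tau_x * U * r0)
    return U, X

T = 10.0
print(stationary_spiking(T))     # (U*^+, U*^-, X*^+, X*^-)
print(stationary_rate(1.0 / T))  # (U*, X*)
```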
In figure 2 we compare these solutions for varying firing rates.
Figure 2: Comparison of the microscopic adaptation variables before and after
spikes for discrete spikes, and for constant firing rates $r_{0}$. The inter-
spike interval $T$ is varied. The constant firing rate is expressed as
$r_{0}=1/T$. Parameters: $\alpha=0.1$, $U_{0}=0.2$, $\tau_{x}=50.0$,
$\tau_{u}=20.0$.
As can be seen, the results for constant firing rates $r_{0}$ are more closely related to the adaptation variables before spikes than after spikes. This shows that, for microscopic STP, it matters whether exact spike timings and the time of evaluation of $U$ and $X$ are taken into account, a finding we expect to hold for non-stationary firing rates $S(t)$ as well.
The expressions derived above can be used to evaluate the mean-field
quantities $x$ and $u$, if the spike times or firing rates of all neurons are
known. Alternatively, they can be used to evaluate $r_{\mathrm{eff}}$
directly. In the following sections, we will address the problem of evaluating
$r_{\mathrm{eff}}$ to derive the mean-field equations for equations (1). We
will derive two different mean-field models, for which the results of this
section will be used to refine the mean-field descriptions of the pre-synaptic
STP dynamics. In this context, we will evaluate how eq. (12) vs. eq. (13)
affect the mean-field dynamics of the QIF network.
## IV Mean-Field Derivation Under a Poissonian Assumption of Neural Dynamics
Recently, an approach for the derivation of a mean-field model for the system defined by eqs. (1) has been presented in [37]. The authors used a mean-field approximation of the macroscopic quantities $x$ and $u$, averaged over all neurons in the network, that was proposed in [42]. In that article, a mean-field approximation of the effective network input
$r_{\mathrm{eff}}(t)=\frac{1}{N}\sum_{j=1}^{N}U_{j}^{-}X_{j}^{-}S_{j},$ (14)
is derived, where $X_{j}^{-}$ and $U_{j}^{-}$ are given by eq. (1b) and eq. (1c), respectively, with the modification that $U_{j}^{+}$ is replaced by
$U_{j}^{-}$. Whereas the original STP model formulation described in [39] uses
$U_{j}^{+}X_{j}^{-}$ as the effective weight of a synapse at the time of an
incoming spike, Schmutz et al. use $U_{j}^{-}X_{j}^{-}$ instead [42]. As shown
in Fig. 2C, these two choices can lead to substantial differences in the synaptic weight for small input rates. Since an effective synaptic weight of
$U_{j}^{-}X_{j}^{-}$ is also used in [37], we will discuss the validity of
their mean-field description for both the spiking neural network given by eq.
(1) and the spiking neural network considered in [37]. Henceforth, we will
refer to the former as $\mathrm{SNN}_{\mathrm{pre}}$ and to the latter as
$\mathrm{SNN}_{\mathrm{pre}}$ II. Under the assumption that all $S_{i}$ follow
independent Poisson processes, the effective network input in
$\mathrm{SNN}_{\mathrm{pre}}$ II is approximated by $r_{\mathrm{eff}}\approx
u(t)x(t)r(t)$, where $r(t)$ is the average firing rate across neurons at time
$t$. As explained in [42], this mean-field approximation rests on two
assumptions: (I) Synapse indices can be randomized, i.e. the spike times
matter, but not the synapses at which those spikes occur. (II) The average
impact of a spike on $X_{i}$ and $U_{i}$, $\forall i$ can be approximated by
sampling from Gaussian distributions around the current values of $x$ and $u$.
A first-order mean-field approximation is then given by
$\displaystyle\tau_{x}\dot{x}$ $\displaystyle=1-x-\alpha\tau_{x}xur,$ (15a)
$\displaystyle\tau_{u}\dot{u}$ $\displaystyle=U_{0}-u+U_{0}\tau_{u}(1-u)r.$
(15b)
As can be seen from these equations, both $x$ and $u$ are driven by the
average firing rate $r=\frac{1}{N}\sum_{j=1}^{N}S_{j}$ of the QIF network.
This allows one to apply the Lorentzian ansatz in the same way as demonstrated for post-synaptic depression in [34]. The dynamics of the complex
variable $w(\eta,t)$ can be expressed as
$\partial_{t}w(\eta,t)=i[\frac{-w(\eta,t)^{2}+\eta+I(t)}{\tau}+Jxur],$ (16)
and by evaluating eq. (16) at $\pi r(t)+iv(t)=w(\bar{\eta}-i\Delta,t)$ one finds that the dynamics of $r$ and $v$ follow
$\displaystyle\tau\dot{r}$ $\displaystyle=\frac{\Delta}{\pi\tau}+2rv,$ (17a)
$\displaystyle\tau\dot{v}$ $\displaystyle=v^{2}+\bar{\eta}+I(t)+Jxur\tau-(\pi r\tau)^{2}.$ (17b)
We will refer to the set of mean-field equations given by (15) and (17) as $\mathrm{FRE}_{\mathrm{Poisson}}$, where $\mathrm{FRE}$ stands for firing rate equations.
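For concreteness, a minimal explicit-Euler integration of $\mathrm{FRE}_{\mathrm{Poisson}}$, i.e. of eqs. (15) and (17), might look as follows. This is our sketch, with the rectangular input and the parameter values taken from the simulations described below, and with the initial conditions assumed:
```python
import numpy as np

def simulate_fre_poisson(T_end=80.0, dt=1e-4, tau=1.0, Delta=2.0,
                         eta_bar=-3.0, J=15.0 * np.sqrt(2.0), U0=0.2,
                         alpha=0.1, tau_x=50.0, tau_u=20.0,
                         I_on=(30.0, 60.0), I_amp=2.0):
    """Explicit-Euler integration of eqs. (15) and (17)."""
    r, v, x, u = 0.01, -2.0, 1.0, U0   # initial conditions (assumed)
    rs = np.empty(int(T_end / dt))
    for i in range(rs.size):
        I = I_amp if I_on[0] <= i * dt < I_on[1] else 0.0
        dr = (Delta / (np.pi * tau) + 2.0 * r * v) / tau              # (17a)
        dv = (v**2 + eta_bar + I + J * x * u * r * tau
              - (np.pi * r * tau)**2) / tau                           # (17b)
        dx = (1.0 - x - alpha * tau_x * x * u * r) / tau_x            # (15a)
        du = (U0 - u + U0 * tau_u * (1.0 - u) * r) / tau_u            # (15b)
        r, v, x, u = r + dt * dr, v + dt * dv, x + dt * dx, u + dt * du
        rs[i] = r
    return rs
```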
It is important to notice, however, that $\mathrm{FRE}_{\mathrm{Poisson}}$
cannot be considered exact. While assumption (I) holds for a network of
independent, homogeneous Poisson neurons (hence called Poissonian assumption),
it does not hold in general [42]. Therefore, the mean-field derivation
essentially approximates a heterogeneous network of deterministic QIF neurons
by a homogeneous network of stochastic Poisson neurons. Furthermore, the
first-order approximation given by eq. (15a) and eq. (15b) ignores the non-
linear interaction between $X_{i}$ and $U_{i}$ in eq. (1b). As shown in [42],
considering second order dynamics can improve the accuracy of the mean-field
approximation, especially in the vicinity of transient inputs to the network.
Adding second-order dynamics would involve sampling from a multivariate
Gaussian distribution over $(x,u)$, however. This means that the mean-field
derivation could not be considered deterministic and, hence, also not exact
anymore.
Still, it has been shown in [37] that $\mathrm{FRE}_{\mathrm{Poisson}}$ can
accurately describe the mean-field dynamics of $\mathrm{SNN}_{\mathrm{pre}}$
II under certain conditions. To test whether this holds in general, we
compared the dynamics of the two models for three different STP
parametrizations, leading to synapses that are either depressing,
facilitating, or depressing and facilitating. We solved the initial value
problem of both sets of equations via an explicit Euler formalism with an
integration step-size of $\mathrm{dt}=0.0001$. This step-size was sufficiently
small to capture the dynamics of the network and was used for all subsequent
numerical integration problems as well. We then applied rectangular input
pulses to the models and observed their dynamic responses around these inputs.
The resulting time series can be observed in Fig. 3.
Figure 3: Evolution of the state variables of a QIF network and a mean-field
approximation thereof for three different types of synaptic short-term
plasticity (A: depression, B: facilitation, combined C: depression and
facilitation). The first two rows show the distribution over the synaptic
state $X_{j}U_{j}$ and the spiking activity of 100 randomly selected neurons,
respectively. The last 4 rows show a comparison between the spiking neural
network (black) and the mean-field approximation (orange) for the average
firing rate $r$, the average membrane potential $v$, the average depression
$x$, and the average facilitation $u$. In the SNN, averages were calculated
across neurons $i$. Grey-shaded areas depict time intervals in which a
rectangular input of $I(t)=2.0$ was applied to the model. Color bars depict
the probability density inside a given bin of the distribution over
$X_{i}U_{i}$. Parameters for A: $U_{0}=1.0$, $\alpha=0.1$. Parameters for B:
$U_{0}=0.2$, $\alpha=0.0$. Parameters for C: $U_{0}=0.2$, $\alpha=0.1$. Other
model parameters: $\tau=1.0$, $\Delta=2.0$, $\bar{\eta}=-3.0$,
$J=15.0\sqrt{\Delta}$, $\tau_{x}=50.0$, $\tau_{u}=20.0$, $N=10000$.
For purely depressing synapses, we find that there is a substantial mismatch
between the mean-field dynamics of $\mathrm{SNN}_{\mathrm{pre}}$ II and
$\mathrm{FRE}_{\mathrm{Poisson}}$. As can be seen in Fig. 3A for the average
depression $x$, there is a considerable offset between the mean-field model
(orange) and the average of $X_{i}$ evaluated across neurons in the QIF
network (black). With respect to purely facilitating synapses, we find that
the mean-field model provides a reasonable approximation of the QIF network.
Even though offsets can be observed between the mean-field model and the QIF
network (see dynamics of $v$ in Fig. 3B), the qualitative behavior of the QIF
network is captured well by the mean-field model. This holds both in the
steady-state regimes and during transient behavior around the on- and offsets
of the input $I(t)$. In the case of synapses with short-term depression and
facilitation, the mean-field model expresses a substantial mismatch to the QIF
network dynamics again. For example, Fig. 3C shows that the dynamics of the
average firing rate $r$ express focus dynamics for
$\mathrm{FRE}_{\mathrm{Poisson}}$ after the onset of the first stimulus,
whereas the average firing inside $\mathrm{SNN}_{\mathrm{pre}}$ II does not
show such behavior. In the upper row of Fig. 3, we show the evolution of the
distribution over the combined synaptic state $X_{i}U_{i}$ in the microscopic
model. We find that this distribution tends to express multi-modalities in
regions with a strong mismatch between mean-field and microscopic model. These
results suggest that the mean-field model can approximate the low-dimensional
dynamics of the QIF network only if $X_{i}$ and $U_{i}$ express uni-modal,
narrow distributions. This finding makes intuitive sense, since the mean-field
approximation of the dynamics of $U_{i}$ and $X_{i}$ given by eqs. (15)
represents a first order approximation. Our results confirm that this
approximation only performs well if the mean over $X_{i}$ and $U_{i}$ contains
much information about the actual underlying distributions. Thus, by providing
these counter examples, we have shown that the mean-field model resulting from
the Poisson assumption does not provide an exact mean-field description of the
QIF network.
Since we are actually interested in the mean-field equations for
$\mathrm{SNN}_{\mathrm{pre}}$ given by eqs. (1), we now examine whether
$\mathrm{FRE}_{\mathrm{Poisson}}$ can nonetheless provide an approximation of
$\mathrm{SNN}_{\mathrm{pre}}$ under some conditions. To gain further insight
into the relationship between the mean-field equations and the QIF network, we
asked whether there exists a QIF network description for which the mean-field
model given by (15a, 15b, 17a, 17b) can be considered exact. Indeed, such a
network exists and is easy to find. Since $x$ and $u$ are only driven by the
mean-field firing rate $r$, we can just introduce microscopic variables
$U_{i}$ and $X_{i}$ that enter the microscopic evolution equation for $V_{i}$ in the same way as they enter the macroscopic evolution equation for $v$, eq. (17b), and that are also driven by the mean-field activity of the QIF network:
$\displaystyle\tau\dot{V}_{i}$ $\displaystyle=V_{i}^{2}+\eta_{i}+I(t)+J\tau U_{i}X_{i}s,$ (18a)
$\displaystyle\tau_{x}\dot{X}_{i}$ $\displaystyle=1-X_{i}-\alpha X_{i}U_{i}s\tau_{x},$ (18b)
$\displaystyle\tau_{u}\dot{U}_{i}$ $\displaystyle=U_{0}-U_{i}+U_{0}(1-U_{i})s\tau_{u},$ (18c)
$\displaystyle s$ $\displaystyle=\frac{1}{N}\sum_{j=1}^{N}\sum_{k\backslash t_{j}^{k}<t}\int_{-\infty}^{t}a(t-t^{\prime})\delta(t^{\prime}-t_{j}^{k})dt^{\prime},$ (18d)
where $s=r$ is the mean firing rate across all neurons in the network (again taking the limit $\tau_{s}\rightarrow 0$). Apart
from the description of the STP dynamics, this network description is
equivalent to the one used in [34] for a QIF network with post-synaptic
depression. Indeed, under a first-order approximation of the dynamics of $x$
and $u$ via the Poissonian assumption, the system given by eqs. (1), a QIF
network with pre-synaptic STP, is essentially approximated by eqs. (18), a QIF
network with post-synaptic STP (see Fig. 1 for a visualization of the
differences between the two). Hence, we will refer to the network given by
eqs. (18) as $\mathrm{SNN}_{\mathrm{post}}$.
Next, we compared the behavior of the two different QIF network descriptions
($\mathrm{SNN}_{\mathrm{pre}}$ and $\mathrm{SNN}_{\mathrm{post}}$) to the
mean-field model dynamics. This was done to verify that
$\mathrm{FRE}_{\mathrm{Poisson}}$ is indeed an exact mean-field model of
$\mathrm{SNN}_{\mathrm{post}}$ and to see under which conditions pre- and
post-synaptic STP have similar or different effects on the QIF network
dynamics. To this end, we used bifurcation analysis to identify phase
transitions in the mean-field model around which we compared the behavior of
the three models. This way, we were able to set up stimulation paradigms that
induce strong changes in the dynamic behavior of the mean-field model and
evaluate whether the QIF networks express qualitatively similar phase
transitions or not. Bifurcation analysis was performed numerically, using the
Python software PyRates [43], which provides an interface to the parameter
continuation software Auto-07p [44]. We initialized the mean-field model with
either purely depressing synapses ($U_{0}=1.0$, $\alpha=0.04$) or purely
facilitating synapses ($U_{0}=0.2$, $\alpha=0.0$). In each case, we performed
a parameter continuation in the background excitability $\bar{\eta}$ for two
different values of $\Delta\in\\{0.01,0.4\\}$. The latter introduces two different
levels of firing rate heterogeneity to the QIF network. We expected this
firing rate heterogeneity to directly affect the broadness of the
distributions over $X_{i}$ and $U_{i}$. If that is indeed the case, the mean-
field model should provide a better description of the
$\mathrm{SNN}_{\mathrm{pre}}$ dynamics for $\Delta=0.01$ than for
$\Delta=0.4$.
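Since the multi-population equations (20) introduced in the next section reduce to $\mathrm{FRE}_{\mathrm{Poisson}}$ for $M=1$, the fold structure can also be probed without continuation software by a simple hysteresis experiment. The sketch below (ours, not part of the PyRates/Auto-07p analysis) integrates the $M=1$ mean-field equations while stepping $\bar{\eta}$ up and then down; a mismatch between the two sweeps indicates the bi-stable regime bounded by the folds. The ramp protocol and initial states are our assumptions.

```python
# Poor man's continuation for FRE_Poisson (eqs. (20) with M = 1):
# sweep eta_bar upwards from a low-activity state and downwards from a
# high-activity state; hysteresis marks the bi-stable regime between folds.
import numpy as np

tau, tau_x, tau_u = 1.0, 50.0, 20.0
J, alpha, U0, Delta = 8.0, 0.0, 0.2, 0.4   # facilitating synapses, Fig. 4A

def rhs(y, eta_bar, I=0.0):
    r, v, x, u = y
    return np.array([
        (Delta / (np.pi * tau) + 2 * r * v) / tau,
        (v**2 + eta_bar + I + J * tau * x * u * r - (np.pi * r * tau)**2) / tau,
        (1 - x) / tau_x - alpha * u * x * r,
        (U0 - u) / tau_u + U0 * (1 - u) * r,
    ])

def sweep(etas, y0, dt=2e-3, t_relax=100.0):
    y, branch = np.array(y0, dtype=float), []
    for eta in etas:
        for _ in range(int(t_relax / dt)):  # relax onto the attractor
            y += dt * rhs(y, eta)
        branch.append(y[0])
    return np.array(branch)

etas = np.linspace(-1.2, 0.0, 40)
up = sweep(etas, [0.01, -1.0, 1.0, U0])         # start low-activity
down = sweep(etas[::-1], [2.0, 0.0, 1.0, U0])   # start high-activity
print("max rate difference between sweeps:", np.max(np.abs(up - down[::-1])))
```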
As can be seen in Fig. 4A and B, we identified fold bifurcations for
facilitating synapses for $\Delta=0.4$ as well as $\Delta=0.01$. These fold
bifurcations mark the outer limits of a bi-stable regime in which a stable
high-activity focus and a stable low-activity node can co-exist, separated by
a saddle-focus.
Figure 4: Comparison between $\mathrm{FRE}_{\mathrm{Poisson}}$ (orange),
$\mathrm{SNN}_{\mathrm{pre}}$ (black), and $\mathrm{SNN}_{\mathrm{post}}$
(purple) for 4 different parameter sets (A-D). The first column shows 1D
bifurcation diagrams in $\bar{\eta}$. Grey triangles represent fold
bifurcations and green circles represent Andronov-Hopf bifurcations. Blue
dashed lines mark the value of $\bar{\eta}$ that was used for the firing rate
and spike raster plots in the second column. Spike raster plots show the
spiking activity of 50 randomly selected neurons of
$\mathrm{SNN}_{\mathrm{pre}}$. Grey shaded areas represent time intervals
during which an extrinsic input $I(t)$ was applied to the models. Remaining
model parameters: $J=8.0$, $\tau_{u}=20.0$, $\tau_{x}=50.0$, $\tau=1.0$,
$N=10000$.
Indeed, we find that the steady-state behavior of the mean-field model and
$\mathrm{SNN}_{\mathrm{post}}$ can be forced towards either of the two stable
equilibria via extrinsic stimulation. As shown for $\Delta=0.4$ and
$\Delta=0.01$ in Fig. 4A and B, respectively, there is always a very good
agreement between those two models. Regarding $\mathrm{SNN}_{\mathrm{pre}}$,
we failed to identify the bi-stable regime for $\Delta=0.4$. In Fig. 4A, it
can be seen that the system behavior is only governed by a high-activity
focus, even though the mean-field model predicts the co-existence of a low-
activity stable node for $\bar{\eta}=-0.6$. Thus, the mean-field model fails
to predict the behavior of the QIF network with pre-synaptic STP in this case.
However, in the case of very low heterogeneity, we identified both stable
states in $\mathrm{SNN}_{\mathrm{pre}}$ and found good agreement with the
mean-field model (see Fig. 4B).
For depressing synapses, we found regimes of synchronized oscillations that
emerge via Andronov-Hopf bifurcations for small as well as for high firing
rate heterogeneity (see Fig. 4C and D). Again, these oscillations could be
induced in $\mathrm{FRE}_{\mathrm{Poisson}}$ as well as in
$\mathrm{SNN}_{\mathrm{post}}$ with a very good match between the two.
Consistent with our findings for facilitating synapses,
$\mathrm{SNN}_{\mathrm{pre}}$ expressed oscillations only for $\Delta=0.01$
(see Fig. 4D). For higher firing rate heterogeneity ($\Delta=0.4$), the
network did not show any tendency to oscillate at all, even though the mean-
field model predicted oscillations to be present at $\bar{\eta}=-0.85$ (see
Fig. 4C).
Thus, our results confirm that $\mathrm{FRE}_{\mathrm{Poisson}}$ is indeed an
exact mean-field equation of $\mathrm{SNN}_{\mathrm{post}}$. Furthermore, they
demonstrate that $\mathrm{SNN}_{\mathrm{pre}}$ and
$\mathrm{SNN}_{\mathrm{post}}$ can behave both very differently and very
similarly, depending on the firing rate heterogeneity inside the network. In
our simulations, we were able to control this heterogeneity successfully via
the parameter $\Delta$. In regimes of low firing rate heterogeneity,
$\mathrm{SNN}_{\mathrm{pre}}$ and $\mathrm{SNN}_{\mathrm{post}}$ expressed
similar behavior, thus allowing for a good approximation of the mean-field
dynamics of $\mathrm{SNN}_{\mathrm{pre}}$ via
$\mathrm{FRE}_{\mathrm{Poisson}}$. In regimes of high firing rate
heterogeneity, the opposite was the case. In the next sections, we investigate
whether more accurate mean-field models of QIF networks with pre-synaptic STP
can be derived and, if so, how they perform near the parameter regimes
described in this section.
## V Multi-population approximation of distributed parameters in the QIF
network
In the previous section, we have found that $\mathrm{FRE}_{\mathrm{Poisson}}$
is in good agreement with the dynamics of $\mathrm{SNN}_{\mathrm{pre}}$, when
the distribution of $\eta_{i}$ is particularly narrow, i.e. when $\Delta\ll
1$. Here, we exploit this fact and approximate the mean field dynamics by
dividing the microscopic network into sub-networks with narrow distributions
in $\eta_{i}$. In other words, the Lorentzian distribution with
$\\{\bar{\eta},\Delta\\}$ is divided into a set of $M$ Lorentzian
distributions with $\\{\bar{\eta}_{m},\Delta_{m}\\}$, $m=1,\ldots,M$, such
that
$\frac{\Delta/\pi}{(\eta-\bar{\eta})^{2}+\Delta^{2}}\approx\frac{1}{M}\sum_{m=1}^{M}\frac{\Delta_{m}/\pi}{(\eta-\bar{\eta}_{m})^{2}+\Delta_{m}^{2}}.$
(19)
The resulting set of equations for the evolution of the mean field variables
is then given by
$\displaystyle\tau\dot{r}_{m}$ $\displaystyle=\frac{\Delta_{m}}{\pi\tau}+2r_{m}v_{m},$ (20a)
$\displaystyle\tau\dot{v}_{m}$ $\displaystyle=v_{m}^{2}+\bar{\eta}_{m}+I(t)+\frac{J\tau}{M}\sum_{n=1}^{M}x_{n}u_{n}r_{n}-(\pi r_{m}\tau)^{2},$ (20b)
$\displaystyle\dot{x}_{m}$ $\displaystyle=\frac{1-x_{m}}{\tau_{x}}-\alpha u_{m}x_{m}r_{m},$ (20c)
$\displaystyle\dot{u}_{m}$ $\displaystyle=\frac{U_{0}-u_{m}}{\tau_{u}}+U_{0}(1-u_{m})r_{m}.$ (20d)
We will refer to this set of mean-field equations as
$\mathrm{FRE}_{\mathrm{MPA}}$, for multi-population approximation. One
assumption we make here is that each sub-network contains the same number of
neurons, which means that the weights for each sub-network are the same, and
the mean field variables can be obtained by computing the mean
$y=(1/M)\sum_{m=1}^{M}y_{m}$, where $y$ represents the mean field variable
under consideration. The parameters $\bar{\eta}_{m}$ and $\Delta_{m}$ are
chosen as follows:
$\displaystyle\bar{\eta}_{m}$ $\displaystyle=\bar{\eta}+\Delta\tan\frac{\pi(2m-M-1)}{2(M+1)},$ (21a)
$\displaystyle\Delta_{m}$ $\displaystyle=\Delta\left(\tan\frac{\pi(2m-M-1/2)}{2(M+1)}-\tan\frac{\pi(2m-M-3/2)}{2(M+1)}\right).$ (21b)
The density of the centers $\bar{\eta}_{m}$ follows the Lorentzian
distribution, and the $\Delta_{m}$ are chosen such that the half-widths
approximately match the distances between the centers of the distributions of
the sub-networks, i.e.
$\bar{\eta}_{m+1}-\bar{\eta}_{m}\approx\Delta_{m+1}+\Delta_{m}$. A numerical
sketch of this construction follows.
The results are shown in Fig. 5A. As can be seen, even at large $M$ the
adaptation variables still show a small discrepancy from those obtained with
the spiking neural network $\mathrm{SNN}_{\mathrm{pre}}$. We hypothesise that
this difference arises because the adaptation variables take different values
when the firing rate is assumed to be constant than when it is assumed to be a
spike train with constant ISI, as shown in Fig. 2. In other words, we expect
that accounting for the fact that $\mathrm{FRE}_{\mathrm{Poisson}}$ was
derived for $\mathrm{SNN}_{\mathrm{pre}}$ II instead of
$\mathrm{SNN}_{\mathrm{pre}}$ will reduce the difference. As the adaptation
variables are in essence time-averaged quantities, they could be posed as
$x=(X^{-}+X^{+})/2$ and $u=(U^{-}+U^{+})/2$. However, with the update rules
$U^{+}=U^{-}+U_{0}(1-U^{-})$ and $X^{+}=X^{-}-\alpha U^{+}X^{-}$, this would
yield out-of-bound values for $X^{-}$ at $x=1$, and $U^{-}$ at $u=0$. The
results shown in Figure 2 suggest that the mean field variables are closest to
$X^{-}$ and $U^{-}$, which is why we set $X^{-}\approx x$, and $U^{-}\approx
u$. The update rule for $U^{+}$ gives the following correction term:
$\displaystyle U^{+}(u)\approx u+U_{0}(1-u).$ (22)
Inserting this term into the mean-field equations of
$\mathrm{FRE}_{\mathrm{MPA}}$ produces a closer match of the mean-field
variables with the results of the microscopic model
$\mathrm{SNN}_{\mathrm{pre}}$ (see Fig. 5B).
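In code, the correction amounts to substituting $U^{+}(u_{m})$ for $u_{m}$ wherever $u_{m}$ multiplies the firing rate. Where exactly the substitution enters eqs. (20) is our reading of the text, so the following fragment should be treated as a sketch:

```python
# Correction term of eq. (22) inside the FRE_MPA right-hand side: replace
# u by its post-spike value U_plus(u) in the rate-driven terms (synaptic
# coupling and depression). The placement is our interpretation.
def U_plus(u, U0):
    return u + U0 * (1.0 - u)

def stp_rhs_corrected(x, u, r, alpha, tau_x, tau_u, U0):
    u_eff = U_plus(u, U0)                  # value of u seen by a spike
    dx = (1.0 - x) / tau_x - alpha * u_eff * x * r
    du = (U0 - u) / tau_u + U0 * (1.0 - u) * r
    return dx, du, u_eff                   # u_eff also enters the x*u*r coupling
```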
Figure 5: Comparison of the mean field variables of the microscopic spiking
neural network, and the mean field model of the spiking neural network divided
into $M$ sub-networks with narrow distribution (multi-population
approximation, MPA). Grey shaded areas indicate time intervals with
$I(t)=3.0$. A: MPA with standard mean field description, B: MPA with
correction term for $U^{+}$. Parameters: $\alpha=0.1$, $\tau=1.0$,
$\Delta=2.0$, $\bar{\eta}=-3.0$, $J=15.0\sqrt{\Delta}$, $\tau_{x}=50.0$,
$\tau_{u}=20.0$, $N=10000$.
As a final test of the predictive accuracy of $\mathrm{FRE}_{\mathrm{MPA}}$,
we examined how well the model can predict the onset of oscillations in the
QIF network. Using bifurcation analysis, we identified the Hopf bifurcation
leading to the oscillations in Fig. 4C and investigated the locus of that Hopf
bifurcation in the 2D parameter space spanned by $\bar{\eta}$ and $\Delta$.
We did this for both $\mathrm{FRE}_{\mathrm{Poisson}}$ and
$\mathrm{FRE}_{\mathrm{MPA}}$ with $M=100$ mean-field populations. As shown in
Fig. 6A, we found that the Hopf curves emerged from a Bogdanov-Takens
bifurcation in both $\mathrm{FRE}$ models. This represents the same
bifurcation structure as has already been identified for QIF networks with SD
(see Fig.2 and 4 in [34] for the corresponding 1D and 2D bifurcation diagrams,
respectively). Furthermore, we have shown the corresponding 1D bifurcation
diagrams for the $\mathrm{FRE}_{\mathrm{Poisson}}$ model for $\Delta=0.4$ and
$\Delta=0.01$ in Fig. 4C and D, respectively. Thus, we expect stable
oscillations to exist in the regions enclosed by the Hopf curves. As shown in
Fig. 6A, the difference between the Hopf curves predicted by
$\mathrm{FRE}_{\mathrm{Poisson}}$ and $\mathrm{FRE}_{\mathrm{MPA}}$ becomes
larger as $\Delta$ increases. For $\Delta=0.4$,
$\mathrm{FRE}_{\mathrm{Poisson}}$ predicts stable oscillations to exist at
$\bar{\eta}=-0.85$, which we already failed to find in the QIF network in
Fig. 4C. $\mathrm{FRE}_{\mathrm{MPA}}$, however, predicts the existence of a
stable node at $\bar{\eta}=-0.85$ and of stable oscillations for
$-0.66<\bar{\eta}<-0.6$. To see whether the oscillations predicted by
$\mathrm{FRE}_{\mathrm{MPA}}$ indeed exist in $\mathrm{SNN}_{\mathrm{pre}}$,
we performed numerical simulations where we initialized the QIF network at
$\bar{\eta}=-0.85$ and then forced it towards $\bar{\eta}=-0.62$ via extrinsic
stimulation. As can be seen in Fig. 6B, the QIF network expressed steady-state
behavior for $\bar{\eta}=-0.85$ and started to oscillate when pushed to
$\bar{\eta}=-0.62$. Hence, $\mathrm{FRE}_{\mathrm{MPA}}$ correctly predicted
the existence of oscillatory bursts in the QIF network for $M=100$, but not
for $M=1$, for which $\mathrm{FRE}_{\mathrm{MPA}}$ reduces to
$\mathrm{FRE}_{\mathrm{Poisson}}$. The bursts have similar properties as the
ones found in QIF networks with post-synaptic plasticity [34] and can be
expected to result from the interaction between synaptic short-term depression
and recurrent excitation via the network. Comparing the firing rate dynamics
of $\mathrm{FRE}_{\mathrm{MPA}}$ and $\mathrm{SNN}_{\mathrm{pre}}$ in Fig. 6
reveals a slight difference between the oscillation periods of the mean-field
model and the QIF network. This difference shows that
$\mathrm{FRE}_{\mathrm{MPA}}$ cannot be considered an exact mean-field model,
even for $M=100$. Still, we find that it captures the phase transitions inside
$\mathrm{SNN}_{\mathrm{pre}}$ well and thus provides a reasonable trade-off
between accuracy and computational complexity.
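The stimulation paradigm of Fig. 6B can be reproduced with a direct integration of eqs. (20). The following sketch (integration scheme and initial state are our choices; it reuses `subpopulations` from the snippet in the previous section) applies the step input $I(t)=0.23$ for $t>250$ to $M=100$ coupled sub-populations initialized at $\bar{\eta}=-0.85$:

```python
# FRE_MPA under the Fig. 6B protocol: steady state at eta_bar = -0.85,
# then a step input I(t) = 0.23 for t > 250 pushes the effective
# excitability to -0.62, where oscillatory bursts are predicted.
import numpy as np

tau, tau_x, tau_u = 1.0, 50.0, 20.0
J, alpha, U0, Delta, eta_bar, M = 8.0, 0.04, 1.0, 0.4, -0.85, 100
eta_m, Delta_m = subpopulations(eta_bar, Delta, M)  # from the earlier snippet

r, v = np.full(M, 0.05), np.full(M, -0.5)
x, u = np.ones(M), np.full(M, U0)
dt, T, r_mean = 1e-3, 500.0, []
for step in range(int(T / dt)):
    I = 0.23 if step * dt > 250.0 else 0.0
    coupling = J * tau * np.mean(x * u * r)   # (J tau / M) sum_n x_n u_n r_n
    dr = (Delta_m / (np.pi * tau) + 2 * r * v) / tau                 # eq. (20a)
    dv = (v**2 + eta_m + I + coupling - (np.pi * r * tau)**2) / tau  # eq. (20b)
    dx = (1 - x) / tau_x - alpha * u * x * r                         # eq. (20c)
    du = (U0 - u) / tau_u + U0 * (1 - u) * r                         # eq. (20d)
    r, v, x, u = r + dt * dr, v + dt * dv, x + dt * dx, u + dt * du
    r_mean.append(r.mean())   # oscillations should appear after t = 250
```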
Figure 6: Phase transitions between steady-state and oscillatory regimes in
$\mathrm{FRE}_{\mathrm{Poisson}}$ and $\mathrm{FRE}_{\mathrm{MPA}}$. A: 2D
bifurcation diagram of the Hopf curve in $\mathrm{FRE}_{\mathrm{Poisson}}$
(orange) and $\mathrm{FRE}_{\mathrm{MPA}}$ (blue). The arrow represents the
phase transition introduced by $I(t)$ in either model. The black square
represents the Bogdanov-Takens bifurcation from which the Hopf bifurcations
emerge. B: The first row shows the simulated firing dynamics of the spiking
neural network and both mean-field models. The second row shows the
corresponding spiking activity of 100 randomly selected neurons of
$\mathrm{SNN}_{\mathrm{pre}}$. Parameters: $\alpha=0.04$, $U_{0}=1.0$,
$\tau=1.0$, $\Delta=0.4$, $\bar{\eta}=-0.85$, $J=8.0$, $\tau_{x}=50.0$,
$\tau_{u}=20.0$, $N=10000$, $M=100$, $I(t)=0.23$ for $t>250$ and $I(t)=0.0$
otherwise.
## VI Adiabatic Approximation of STP Dynamics
For simplicity, we consider synapses with pure short-term depression in this
section, since we showed in section IV that the mismatch between the
mean-field model $\mathrm{FRE}_{\mathrm{Poisson}}$ and the QIF networks
$\mathrm{SNN}_{\mathrm{pre}}$ and $\mathrm{SNN}_{\mathrm{pre}}$ II can be
reproduced in this simpler case as well. We thus consider the microscopic
system given by
$\displaystyle\tau\dot{V}_{i}$ $\displaystyle=V_{i}^{2}+\eta_{i}+I(t)+\frac{J\tau}{N}\sum_{j=1}^{N}X_{j}^{-}S_{j},$ (23a)
$\displaystyle\tau_{x}\dot{X}_{i}$ $\displaystyle=1-X_{i}-\alpha X_{i}^{-}S_{i}\tau_{x},$ (23b)
$\displaystyle S_{i}$ $\displaystyle=\sum_{k\backslash t_{i}^{k}<t}\int_{-\infty}^{t}a(t-t^{\prime})\delta(t^{\prime}-t_{i}^{k})dt^{\prime}.$ (23c)
In this system, we approximate the STP dynamics via a linear differential
operator $L$, i.e. $LX_{i}(t)=S_{i}(t)$. In such a case, a Green’s function
$G(t)$ exists that allows one to express the dynamics of $X_{i}$ via a
convolution of $G(t)$ with the spiking activity of neuron $i$:
$X_{i}(t)=\int_{-\infty}^{t}G(t-t^{\prime})S_{i}(t^{\prime})dt^{\prime}=G*S_{i}.$
(24)
Then, since $S_{i}$ is related to $z(\eta_{i},t)$ via
$S_{i}\pi=z(\eta_{i},t)$, eq. (4) can be written as
$\partial_{t}w(\eta,t)=i[\frac{-w(\eta,t)^{2}+\eta+I(t)}{\tau}+J(G*\frac{\Re[w]}{\pi})\Re[w]].$
(25)
To solve eq. (25) for $r$ and $v$, the effective firing rate
$r_{\mathrm{eff}}=\int_{-\infty}^{\infty}(G*r(\eta))r(\eta)g(\eta)\mbox{d}\eta$
must be determined, which requires one to evaluate the product between the
single cell firing rate and a convolution of itself. This makes it difficult
to find a closed-form solution for $r$ and $v$, since the synaptic depression
kernel $G$ cannot simply be pulled out from the convolution integral. The
simplest approximation of this problem is to replace the convolution integral
by a mean synaptic depression, as is done for the Poissonian assumption.
Alternatively, we assume that the dynamics of $X_{i}$ are slow in comparison
to the dynamics of $v_{i}$. For the relaxation dynamics of $X_{i}$, this
assumption is met if $\tau_{x}\gg\tau$. We note here, however, that the
spiking activity of the neuron also introduces a relatively fast time scale to
eq. (23b), which may violate our assumption. Still, under this assumption, we
can apply an adiabatic approximation to the system and consider the dynamics
of the fast sub-system for effectively constant adaptation (see [45, 34] for a
similar approach):
$\displaystyle\tau\dot{V}_{i}$ $\displaystyle=V_{i}^{2}+\eta_{i}+I(t)+\frac{J\tau}{N}\sum_{j=1}^{N}X_{j}^{-}S_{j},$ (26a)
$\displaystyle S_{i}$ $\displaystyle=\sum_{k\backslash t_{i}^{k}<t}\int_{-\infty}^{t}\delta(t^{\prime}-t_{i}^{k})\mbox{d}t^{\prime},$ (26b)
where $X_{j}$ is approximated as a neuron-specific constant. Due to the
Lorentzian distribution of the background excitabilities $\eta_{i}$ and the
resulting heterogeneity of single cell firing rates in the network, $X_{i}$
cannot be assumed to be homogeneous across neurons. Instead, it must be
considered a distributed quantity, governed by a probability density function
$h(X_{i})$. Then, the main difficulty in developing the mean field description
lies in the fact that $h(X_{i})$ is generally unknown if a mean field variable
is considered. More precisely, if we consider the mean field variable $x$ that
describes the average synaptic depression across the network, little is known
about the distribution of the microscopic variables $X_{i}$, which is required
to determine the effective firing rate $r_{\mathrm{eff}}$. By using the
adiabatic approximation, we argue that an approximation of $r_{\mathrm{eff}}$
can be obtained by estimating the distributions $X(\eta)$ and $r(\eta)$ from
the mean field variables in the stationary case, and solving
$r_{\mathrm{eff}}=\int_{0}^{1}\int_{-\infty}^{\infty}Xr(\eta)h(X|\eta)g(\eta)\mbox{d}\eta\mbox{d}X.$
(27)
Assuming independent Lorentzian density functions for $h$ and $g$, i.e.
$h(X|\eta)g(\eta)=h(X)g(\eta)$, eq. (25) would only need to be evaluated at
the poles in the lower half-planes $\pi
r(t)+iv(t)=w(\bar{\eta}-i\Delta,\bar{X}-i\Delta_{X},t)$, where $\bar{X}$ and
$\Delta_{X}$ would represent the center and HWHM of the Lorentzian
distribution over $X$, respectively. Then, the effect of pre-synaptic STP on
the network dynamics would effectively reduce to a distribution over the
coupling parameter $J$. For the mean-field equations of a QIF network with
distributed coupling parameters see [29]. However, $h$ and $g$ cannot be
assumed to be independent, since $\eta_{i}$ controls the firing rate of neuron
$i$, which in turn controls its synaptic depression $X_{i}$. Furthermore, $X$
is bound between $[0,1]$ and hence a Lorentzian distribution cannot be
assumed. In the upper row of Fig. 3, we show the evolution of the distribution
over $X_{i}U_{i}$ for three different parametrizations, corresponding to a
purely depressing synapse, a purely facilitating synapse, and a synapse with
facilitation and depression acting on different time scales. Importantly, the
evolution of the distribution reveals that it is not always uni-modal. For
purely depressing synapses, it clearly expresses an at least bi-modal
distribution over the whole time course. Thus, finding an appropriate form of
$h$ that holds in general is a highly non-trivial problem that we did not find
a solution for.
To further simplify the problem, we assume that the depression of a neuron’s
efferent synapses $X_{i}$ is merely a function of the firing rate $r_{i}$ of
the same neuron. The stationary firing rate of a QIF neuron in response to an
external input $I_{in}$ is $\sqrt{I_{in}}/\pi$ if $I_{in}>0$, and zero
otherwise. Hence, the distribution of firing rates for a given input is (in
the stationary case) given by
$r(\eta;I_{in})=H(\eta+I_{in})\sqrt{\eta+I_{in}}/\pi,$ (28)
where $H$ is the Heaviside step function. Therefore, for any given mean field
firing rate $r$ one can find a unique constant $I_{r}$ for which
$r=\int_{-\infty}^{\infty}r(\eta;I_{r})g(\eta)\mbox{d}\eta,$ (29)
which allows us to translate the mean field variable $r$ into the distribution
$r(\eta;I_{r})$.
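Numerically, eq. (29) is a one-dimensional root-finding problem. A minimal sketch follows (the bracket choice and parameter values are our assumptions):

```python
# Inversion of eq. (29): find the input I_r whose Lorentzian average of the
# stationary single-cell rates r(eta; I_r) of eq. (28) equals a given mean
# rate r. The integrand vanishes for eta < -I_r, so we integrate from -I_r.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def mean_rate(I, eta_bar=-2.0, Delta=1.0):
    f = lambda eta: (np.sqrt(max(eta + I, 0.0)) / np.pi) \
        * (Delta / np.pi) / ((eta - eta_bar)**2 + Delta**2)
    return quad(f, -I, np.inf)[0]

def I_from_rate(r, eta_bar=-2.0, Delta=1.0):
    g = lambda I: mean_rate(I, eta_bar, Delta) - r
    return brentq(g, 1e-8, 1e3)   # bracket is an assumption

print(I_from_rate(0.3))   # input reproducing a mean-field rate of 0.3
```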
Similarly, we can use the assumption that $X_{i}$ is a function of $r_{i}$ to
translate the mean field variable for synaptic depression, $x$, into the
distribution $X(\eta;I_{x})$. First, we use the rate relationship given by eq.
(13) to approximate
$x(\eta;I_{x})=1/(1+\alpha\tau_{x}r(\eta;I_{x})),$ (30)
for any given input $I_{x}$, and then define
$x_{1}=\int_{-\infty}^{\infty}g(\eta)/(1+\alpha\tau_{x}r(\eta;I_{x}))\mbox{d}\eta.$
(31)
Alternatively, we can use eq. (12) to approximate the distribution $x(\eta)$
in the spiking scenario:
$x(\eta;I_{x})=\frac{1-\exp(-1/\tau_{x}r(\eta;I_{x}))}{1-(1-\alpha)\exp(-1/\tau_{x}r(\eta;I_{x}))},$
(32)
which yields
$x_{2}=\int_{-\infty}^{\infty}\frac{(1-\exp(-1/\tau_{x}r(\eta;I_{x})))g(\eta)}{1-(1-\alpha)\exp(-1/\tau_{x}r(\eta;I_{x}))}\mbox{d}\eta.$
(33)
Having obtained $I_{r}$ and $I_{x}$, we can ultimately compute
$r_{\mathrm{eff}}=\int_{-\infty}^{\infty}r(\eta;I_{r})x(\eta;I_{x})g(\eta)\mbox{d}\eta,$
(34)
where $x(\eta;I_{x})$ is chosen either from the rate scenario (eq. (30)) or
from the spike scenario (eq. (32)). This requires one to solve
$r_{\mathrm{eff}}=\frac{\Delta}{\pi^{2}}\int_{\mathrm{min}(-I_{x},-I_{r})}^{\infty}\frac{1}{1+\alpha\tau_{x}\sqrt{\eta+I_{x}}}\frac{\sqrt{\eta+I_{r}}}{(\eta-\bar{\eta})^{2}+\Delta^{2}}\mathrm{d}\eta,$
(35)
in the rate scenario, and
$r_{\mathrm{eff}}=\frac{\Delta}{\pi^{2}}\int_{\mathrm{min}(-I_{x},-I_{r})}^{\infty}\frac{\exp\left(\frac{\pi}{\tau_{x}\sqrt{\eta+I_{x}}}\right)-1}{\exp\left(\frac{\pi}{\tau_{x}\sqrt{\eta+I_{x}}}\right)-(1-\alpha)}\frac{\sqrt{\eta+I_{r}}}{(\eta-\bar{\eta})^{2}+\Delta^{2}}\mathrm{d}\eta,$
(36)
in the spiking scenario. We refer to this mean-field model as
$\mathrm{FRE}_{\mathrm{aa}}$ for adiabatic approximation, with
$\mathrm{FRE}_{\mathrm{aa1}}$ and $\mathrm{FRE}_{\mathrm{aa2}}$ denoting the
mean-field models for the rate and the spike scenario, respectively.
The integrals involved in this approximation are hard to evaluate
analytically. We therefore solve them numerically for a range of values of
$I_{r}$ and $I_{x}$ and create look-up tables for $I_{r}$, $I_{x}$ and
$r_{\mathrm{eff}}$, which allow us to integrate the resulting model equations
numerically. In Fig. 7 we compare the results of the mean-field
model $\mathrm{FRE}_{\mathrm{aa}}$ with the dynamics of the spiking neural
network $\mathrm{SNN}_{\mathrm{pre}}$, and the mean field model
$\mathrm{FRE}_{\mathrm{Poisson}}$. We find that $\mathrm{FRE}_{\mathrm{aa}}$
is closer to the microscopic dynamics of $\mathrm{SNN}_{\mathrm{pre}}$ than
$\mathrm{FRE}_{\mathrm{Poisson}}$.
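To illustrate the look-up table construction, the sketch below tabulates $r_{\mathrm{eff}}$ in the rate scenario by evaluating eq. (35) on a grid of $(I_{r},I_{x})$ values; eq. (36) can be tabulated analogously. At run time, $I_{r}$ and $I_{x}$ are obtained from the mean-field variables (eqs. (29) and (31)) and $r_{\mathrm{eff}}$ is interpolated from the table. Grid ranges and resolution are our assumptions.

```python
# Look-up table for r_eff in the rate scenario (eq. (35)). The max(., 0)
# guards encode the stationary solutions: no firing for eta < -I_r, and
# no depression (x = 1) for eta < -I_x.
import numpy as np
from scipy.integrate import quad

def r_eff_rate(I_r, I_x, eta_bar=-2.0, Delta=1.0, alpha=0.1, tau_x=50.0):
    f = lambda eta: np.sqrt(max(eta + I_r, 0.0)) \
        / (1.0 + alpha * tau_x * np.sqrt(max(eta + I_x, 0.0))) \
        / ((eta - eta_bar)**2 + Delta**2)
    return Delta / np.pi**2 * quad(f, min(-I_x, -I_r), np.inf, limit=200)[0]

I_grid = np.linspace(0.01, 20.0, 40)
table = np.array([[r_eff_rate(Ir, Ix) for Ix in I_grid] for Ir in I_grid])
# at run time: interpolate, e.g. with
# scipy.interpolate.RegularGridInterpolator((I_grid, I_grid), table)
```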
Figure 7: Comparison of the mean field variables of the microscopic spiking
neural network, the mean field model using the Poissonian assumption, and the
mean field model with approximation of the effective firing rate. Grey shaded
areas indicate time intervals with $I(t)=3.0$. Parameters: $\alpha=0.1$,
$\tau=1.0$, $\Delta=1.0$, $\bar{\eta}=-2.0$, $J=15.0$, $\tau_{x}=50.0$,
$\tau_{u}=20.0$, $N=10000$.
## VII Conclusion
In this work, we examined whether spiking neural networks with pre-synaptic
short-term plasticity allow for the derivation of low-dimensional mean-field
equations via the Lorentzian ansatz described in [29]. To this end, we
considered heterogeneous, all-to-all coupled QIF networks with pre-synaptic
STP dynamics, described by a well-known phenomenological model of synaptic
short-term depression and facilitation [39]. For such QIF networks, other
forms of STP have already been shown to be compatible with the Lorentzian
ansatz [34]. In the case of pre-synaptic STP, we identified the evaluation of
the effective network input $r_{\mathrm{eff}}$ as the central problem for a
mean-field derivation via the Lorentzian ansatz. This effective network input
represents a weighted sum of incoming spikes, where the weights are given by
the pre-synaptic depression and facilitation terms. We presented three
different approaches to express $r_{\mathrm{eff}}$ and thus find the mean-
field equations: First, a mean-field description of the STP dynamics via the
Poissonian assumption used in [37]; second, a multi-population approximation
that approximates distributed parameters inside the QIF network via a set of
coupled sub-populations with different parametrizations; and third, an
adiabatic approximation of the STP time scales.
For the first approach, the effective network input $r_{\mathrm{eff}}$ is approximated
by a modulation of the mean-field firing rate with an average depression and
an average facilitation. Our analysis revealed that this approach essentially
approximates pre-synaptic STP with post-synaptic STP. We compared the behavior
of QIF networks with pre- vs. post-synaptic STP and found that they can
express substantial qualitative differences in their dynamics, especially when
$\mathrm{SNN}_{\mathrm{pre}}$ expresses a high firing rate heterogeneity
across neurons. Near such regimes, $\mathrm{FRE}_{\mathrm{Poisson}}$ follows
the dynamics of $\mathrm{SNN}_{\mathrm{post}}$, and thus fails to capture the
behavior of $\mathrm{SNN}_{\mathrm{pre}}$. It is worth noting that the mean-
field derivation via the Poissonian assumption works well for networks of
homogeneous Poisson neurons with independent noise [42]. In such networks,
single cell firing rates can differ momentarily due to noise, but approach the
same rate when averaged over increasing time intervals. This is a very
different scenario compared to the QIF network considered here, where the
Lorentzian distribution over $\eta_{i}$ causes substantial heterogeneity in
the single cell firing rates. Hence, the Poissonian approximation deteriorates
as the heterogeneity of single cell firing rates inside the QIF network
increases. In [37], where the Poissonian approximation was first applied
to a QIF network with pre-synaptic STP, the authors chose QIF networks with
relatively low firing rate heterogeneity, leading to a good correspondence
with the mean-field model. Here, we clarified that this correspondence does
not generalize to regimes where the QIF network expresses more heterogeneous
firing rates.
Populations of neurons that naturally express heterogeneous firing rates exist
in sub-cortical structures, for example. Single cell firing rates in the
globus pallidus have been shown to differ substantially across neurons [46,
47]. This firing rate heterogeneity has been suggested as an important de-
synchronization mechanism of pallidal activity [48, 49]. Our results suggest
that studying the mean-field dynamics in such a population via
$\mathrm{FRE}_{\mathrm{Poisson}}$ comes at the risk of substantial errors. We
thus developed a mean-field model that addresses the issue of high firing rate
heterogeneities. Since the distribution over $\eta_{i}$ is the source of
heterogeneity in the QIF network, we attempted to improve the mean-field model
by considering a set of coupled sub-networks with distinct, but narrow
distributions over $\eta_{i}$. This way, the neurons inside each sub-
population are parametrized such that they express a considerably lower firing
rate heterogeneity than the overall network. We found that, by increasing the
number of sub-populations, the mean-field model converges to the QIF network
behavior. Of course, this approach leads to mean-field models of relatively
high dimensionality. Still, we found that a mean-field model with 100
sub-populations (i.e. a 400-dimensional model) accurately predicted phase
transitions of the QIF network from steady-state to oscillatory behavior in a
regime where $\mathrm{FRE}_{\mathrm{Poisson}}$ failed to do so. Thus, we argue
that this multi-population approximation provides a flexible mean-field
description, the dimensionality of which can be chosen based on the expected
firing rate heterogeneity in the neural population under investigation.
As an alternative to the Poissonian approximation, we applied an adiabatic
approximation to the QIF network, assuming slow STP dynamics in comparison to
the QIF dynamics. This assumption is supported by experimental results that
suggest depression and facilitation recovery time scales that are at least 10
times slower than typical membrane potential time scales [50, 39, 37].
Previously, this approach has been used successfully for the derivation of
mean-field equations for QIF networks with spike-frequency adaptation [34]. By
approximating the pre-synaptic STP dynamics as slow, they can be considered as
constant, distributed quantities in the fast sub-system. This way, the STP
dynamics do not have to be considered for the evaluation of
$r_{\mathrm{eff}}$. Instead, appropriate distributions over the STP constants
have to be chosen. In our work, we derived analytical solutions of the
microscopic STP dynamics in the stationary case and used these solutions to
approximate the STP distributions. This approach can be considered exact for
the description of steady-state solutions, but not for transient dynamics.
That is, the network must have converged to an equilibrium for our
approximation to be accurate. Still, we find that our adiabatic approximation
captures the mean-field dynamics of the QIF network more accurately than the
Poissonian approximation, even for transient dynamics. A disadvantage of this
method is, however, that we had to
approximate the integrals over the STP distribution numerically and calculate
$r_{\mathrm{eff}}$ via look-up tables. This makes it more difficult to
implement the model equations and perform parameter continuations.
In conclusion, we performed a thorough analysis of the problems that arise
when attempting to derive the mean-field equations for QIF networks with
synaptic short-term plasticity. Though we did not find a set of exact, closed-
form mean-field equations, we provided two different mean-field approximations
that we found to be more accurate than a previously proposed mean-field model.
Both of these mean-field approximations can capture the qualitative dynamics
of the QIF network and can thus be used for future investigations of its
macroscopic dynamics. Finally, our work provides insight into the distinct
effects that pre- vs post-synaptic STP can have on the mean-field dynamics of
spiking neural networks.
## Acknowledgements
R.G. was funded by the Studienstiftung des deutschen Volkes. H.S. was
supported by the German Research Foundation (DFG (KN 588/7-1) awarded to
T.R.K. via Priority Program 2041, “Computational Connectomics”).
## References
* Başar [2012] E. Başar, _Chaos in Brain Function: Containing Original Chapters by E. Basar and T. H. Bullock and Topical Articles Reprinted from the Springer Series in Brain Dynamics_ (Springer Science & Business Media, 2012).
* Chialvo [2010] D. R. Chialvo, Emergent complex neural dynamics, Nature Physics 6, 744 (2010).
* Deco _et al._ [2011] G. Deco, V. K. Jirsa, and A. R. McIntosh, Emerging concepts for the dynamical organization of resting-state activity in the brain, Nature Reviews Neuroscience 12, 43 (2011).
* Engel _et al._ [2001] A. K. Engel, P. Fries, and W. Singer, Dynamic predictions: Oscillations and synchrony in top–down processing, Nature Reviews Neuroscience 2, 704 (2001).
* Knösche _et al._ [2005] T. R. Knösche, C. Neuhaus, J. Haueisen, K. Alter, B. Maess, O. W. Witte, and A. D. Friederici, Perception of phrase structure in music, Human Brain Mapping 24, 259 (2005).
* Kujala _et al._ [2007] T. Kujala, M. Tervaniemi, and E. Schröger, The mismatch negativity in cognitive and clinical neuroscience: Theoretical and methodological considerations, Biological Psychology 74, 1 (2007).
* Jirsa _et al._ [2014] V. K. Jirsa, W. C. Stacey, P. P. Quilichini, A. I. Ivanov, and C. Bernard, On the nature of seizure dynamics, Brain 137, 2210 (2014).
* Sadtler _et al._ [2014] P. T. Sadtler, K. M. Quick, M. D. Golub, S. M. Chase, S. I. Ryu, E. C. Tyler-Kabara, B. M. Yu, and A. P. Batista, Neural constraints on learning, Nature 512, 423 (2014).
* Murray _et al._ [2017] J. D. Murray, A. Bernacchia, N. A. Roy, C. Constantinidis, R. Romo, and X.-J. Wang, Stable population coding for working memory coexists with heterogeneous neural dynamics in prefrontal cortex, Proceedings of the National Academy of Sciences 114, 394 (2017).
* Babloyantz and Destexhe [1986] A. Babloyantz and A. Destexhe, Low-dimensional chaos in an instance of epilepsy, Proceedings of the National Academy of Sciences 83, 3513 (1986).
* Kelso [1995] J. A. S. Kelso, _Dynamic Patterns: The Self-organization of Brain and Behavior_ (MIT Press, 1995).
* Celletti and Villa [1996] A. Celletti and A. E. P. Villa, Low-dimensional chaotic attractors in the rat brain, Biological Cybernetics 74, 387 (1996).
* Bollimunta _et al._ [2008] A. Bollimunta, Y. Chen, C. E. Schroeder, and M. Ding, Neuronal Mechanisms of Cortical Alpha Oscillations in Awake-Behaving Macaques, Journal of Neuroscience 28, 9976 (2008).
* Spiegler _et al._ [2011] A. Spiegler, T. R. Knösche, K. Schwab, J. Haueisen, and F. M. Atay, Modeling Brain Resonance Phenomena Using a Neural Mass Model, PLOS Computational Biology 7, e1002298 (2011).
* Deco and Jirsa [2012] G. Deco and V. K. Jirsa, Ongoing Cortical Activity at Rest: Criticality, Multistability, and Ghost Attractors, Journal of Neuroscience 32, 3366 (2012).
* Deco _et al._ [2008] G. Deco, V. K. Jirsa, P. A. Robinson, M. Breakspear, and K. Friston, The Dynamic Brain: From Spiking Neurons to Neural Masses and Cortical Fields, PLOS Computational Biology 4, e1000092 (2008).
* Wilson and Cowan [1972] H. R. Wilson and J. D. Cowan, Excitatory and Inhibitory Interactions in Localized Populations of Model Neurons, Biophysical Journal 12, 1 (1972).
* Lopes da Silva _et al._ [1974] F. H. Lopes da Silva, A. Hoeks, H. Smits, and L. H. Zetterberg, Model of brain rhythmic activity, Kybernetik 15, 27 (1974).
* Freeman [1978] W. Freeman, Models of the dynamics of neural populations, Electroencephalography and clinical neurophysiology. Supplement , 9 (1978).
* Jansen and Rit [1995] B. H. Jansen and V. G. Rit, Electroencephalogram and visual evoked potential generation in a mathematical model of coupled cortical columns, Biological Cybernetics 73, 357 (1995).
* Robinson _et al._ [1997] P. A. Robinson, C. J. Rennie, and J. J. Wright, Propagation and stability of waves of electrical activity in the cerebral cortex, Physical Review E 56, 826 (1997).
* El Boustani and Destexhe [2009] S. El Boustani and A. Destexhe, A Master Equation Formalism for Macroscopic Modeling of Asynchronous Irregular Activity States, Neural Computation 21, 46 (2009).
* Buice _et al._ [2009] M. A. Buice, J. D. Cowan, and C. C. Chow, Systematic Fluctuation Expansion for Neural Network Activity Equations, Neural Computation 22, 377 (2009).
* Schwalger _et al._ [2017] T. Schwalger, M. Deger, and W. Gerstner, Towards a theory of cortical columns: From spiking neurons to interacting neural populations of finite size, PLOS Computational Biology 13, e1005507 (2017).
* Ott and Antonsen [2008] E. Ott and T. M. Antonsen, Low dimensional behavior of large systems of globally coupled oscillators, Chaos: An Interdisciplinary Journal of Nonlinear Science 18, 037113 (2008).
* Kuramoto [1991] Y. Kuramoto, Collective synchronization of pulse-coupled oscillators and excitable units, Physica D: Nonlinear Phenomena 50, 15 (1991).
* Luke _et al._ [2013] T. B. Luke, E. Barreto, and P. So, Complete Classification of the Macroscopic Behavior of a Heterogeneous Network of Theta Neurons, Neural Computation 25, 3207 (2013).
* Coombes and Byrne [2019] S. Coombes and A. Byrne, Next Generation Neural Mass Models, in _Nonlinear Dynamics in Computational Neuroscience_, PoliTO Springer Series, edited by F. Corinto and A. Torcini (Springer International Publishing, Cham, 2019) pp. 1–16.
* Montbrió _et al._ [2015] E. Montbrió, D. Pazó, and A. Roxin, Macroscopic Description for Networks of Spiking Neurons, Physical Review X 5, 021028 (2015).
* Ratas and Pyragas [2016] I. Ratas and K. Pyragas, Macroscopic self-oscillations and aging transition in a network of synaptically coupled quadratic integrate-and-fire neurons, Physical Review E 94, 032215 (2016).
* Byrne _et al._ [2017] A. Byrne, M. J. Brookes, and S. Coombes, A mean field model for movement induced changes in the beta rhythm, Journal of Computational Neuroscience 43, 143 (2017).
* di Volo and Torcini [2018] M. di Volo and A. Torcini, Transition from Asynchronous to Oscillatory Dynamics in Balanced Spiking Networks with Instantaneous Synapses, Physical Review Letters 121, 128301 (2018).
* Pietras _et al._ [2019] B. Pietras, F. Devalle, A. Roxin, A. Daffertshofer, and E. Montbrió, Exact firing rate model reveals the differential effects of chemical versus electrical synapses in spiking networks, Physical Review E 100, 042412 (2019).
* Gast _et al._ [2020] R. Gast, H. Schmidt, and T. R. Knösche, A Mean-Field Description of Bursting Dynamics in Spiking Neural Networks with Short-Term Adaptation, Neural Computation 32, 1615 (2020).
* Suffczynski _et al._ [2001] P. Suffczynski, S. Kalitzin, G. Pfurtscheller, and F. Lopes da Silva, Computational model of thalamo-cortical networks: dynamical control of alpha rhythms in relation to focal attention, International Journal of Psychophysiology 43, 25 (2001).
* Moran _et al._ [2007] R. Moran, S. Kiebel, K. Stephan, R. Reilly, J. Daunizeau, and K. Friston, A neural mass model of spectral responses in electrophysiology, NeuroImage 37, 706 (2007).
* Taher _et al._ [2020] H. Taher, A. Torcini, and S. Olmi, Exact neural mass model for synaptic-based working memory, PLOS Computational Biology 16, e1008533 (2020).
* Levina _et al._ [2007] A. Levina, J. M. Herrmann, and T. Geisel, Dynamical synapses causing self-organized criticality in neural networks, Nature Physics 3, 857 (2007).
* Tsodyks _et al._ [1998] M. Tsodyks, K. Pawelzik, and H. Markram, Neural Networks with Dynamic Synapses, Neural Computation 10, 821 (1998).
* Ermentrout and Kopell [1986] G. B. Ermentrout and N. Kopell, Parabolic Bursting in an Excitable System Coupled with a Slow Oscillation, SIAM Journal on Applied Mathematics 46, 233 (1986).
* Pietras and Daffertshofer [2016] B. Pietras and A. Daffertshofer, Ott-Antonsen attractiveness for parameter-dependent oscillatory systems, Chaos: An Interdisciplinary Journal of Nonlinear Science 26, 103101 (2016).
* Schmutz _et al._ [2020] V. Schmutz, W. Gerstner, and T. Schwalger, Mesoscopic population equations for spiking neural networks with synaptic short-term plasticity, The Journal of Mathematical Neuroscience 10, 5 (2020).
* Gast _et al._ [2019] R. Gast, D. Rose, C. Salomon, H. E. Möller, N. Weiskopf, and T. R. Knösche, PyRates—A Python framework for rate-based neural simulations, PLOS ONE 14, e0225900 (2019).
* Doedel _et al._ [2007] E. J. Doedel, T. F. Fairgrieve, B. Sandstede, A. R. Champneys, Y. A. Kuznetsov, and X. Wang, _AUTO-07P: Continuation and bifurcation software for ordinary differential equations_ , Tech. Rep. (2007).
* Gigante _et al._ [2007] G. Gigante, M. Mattia, and P. D. Giudice, Diverse Population-Bursting Modes of Adapting Spiking Neurons, Physical Review Letters 98, 148101 (2007).
* Kita _et al._ [2004] H. Kita, A. Nambu, K. Kaneda, Y. Tachibana, and M. Takada, Role of Ionotropic Glutamatergic and GABAergic Inputs on the Firing Activity of Neurons in the External Pallidum in Awake Monkeys, Journal of Neurophysiology 92, 3069 (2004).
* Mercer _et al._ [2007] J. N. Mercer, C. S. Chan, T. Tkatch, J. Held, and D. J. Surmeier, Nav1.6 Sodium Channels Are Critical to Pacemaking and Fast Spiking in Globus Pallidus Neurons, Journal of Neuroscience 27, 13552 (2007).
* Wilson [2013] C. J. Wilson, Active decorrelation in the basal ganglia, Neuroscience 250, 467 (2013).
* Gast _et al._ [2021] R. Gast, R. Gong, H. Schmidt, H. G. E. Meijer, and T. R. Knoesche, On the role of arkypallidal and prototypical neurons for phase transitions in the external pallidum, bioRxiv , 2021.01.06.425526 (2021).
* Tsodyks and Markram [1997] M. V. Tsodyks and H. Markram, The neural code between neocortical pyramidal neurons depends on neurotransmitter release probability, Proceedings of the National Academy of Sciences 94, 719 (1997).
# Best approximations, distance formulas and orthogonality in $C^{*}$-algebras
Priyanka Grover and Sushil Singla Department of Mathematics, Shiv Nadar
University, NH-91, Tehsil Dadri, Gautam Buddha Nagar, U.P. 201314, India.
<EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract.
For a unital $C^{*}$-algebra ${\mathcal{A}}$ and a subspace ${\mathcal{B}}$ of
${\mathcal{A}}$, a characterization for a best approximation to an element of
${\mathcal{A}}$ in ${\mathcal{B}}$ is obtained. As an application, a formula
for the distance of an element of ${\mathcal{A}}$ from ${\mathcal{B}}$ has
been obtained, when a best approximation of that element to ${\mathcal{B}}$
exists. Further, a characterization for Birkhoff-James orthogonality of an
element of a Hilbert $C^{*}$-module to a subspace is obtained.
###### Key words and phrases:
Best approximation, conditional expectation, Birkhoff-James orthogonality,
cyclic representation, state, Hilbert $C^{*}$-module
###### 2010 Mathematics Subject Classification:
Primary 46L05, 46L08, 41A50; Secondary 46B20, 41A52, 47B47
## 1. Introduction
Let ${\mathcal{A}}$ be a unital $C^{*}$-algebra over
${\mathbb{F}}(={\mathbb{R}}$ or ${\mathbb{C}})$ with the identity element
$1_{{\mathcal{A}}}$. The $C^{*}$-subalgebras of ${\mathcal{A}}$ are assumed to
contain $1_{{\mathcal{A}}}$. For $a\in{\mathcal{A}}$ and ${\mathcal{B}}$ a
subspace of ${\mathcal{A}}$, ${\mathop{\rm dist}}(a,{\mathcal{B}})$ denotes
$\inf\\{\|a-b\|:b\in{\mathcal{B}}\\}$. An element $b_{0}\in{\mathcal{B}}$ is
said to be _a best approximation to $a$ in ${\mathcal{B}}$_ if
$\|a-b_{0}\|={\mathop{\rm dist}}(a,{\mathcal{B}})$. It is a well known fact
that $b_{0}$ is a best approximation to $a$ in ${\mathcal{B}}$ if and only if
there exists a functional $\psi\in{\mathcal{A}}^{*}$ such that
$\psi(a-b_{0})={\mathop{\rm dist}}(a,{\mathcal{B}})$ and $\psi(b)=0$ for all
$b\in{\mathcal{B}}$ (see [15, Theorem 1.1]).
Let $({C}(X),\|\cdot\|_{\infty})$ be the $C^{*}$-algebra of real or complex
continuous functions on a compact Hausdorff space $X$, where
$\|f\|_{\infty}=\sup_{x\in X}|f(x)|.$ It was proved in Theorem 1.3 of [15]
that if $f\in{C}(X)$ and ${\mathcal{B}}$ is a subspace of ${C}(X)$, then $g$
is a best approximation to $f$ in ${\mathcal{B}}$ if and only if there exists
a regular Borel probability measure $\mu$ on $X$ such that the support of
$\mu$ is contained in the set $\\{x\in X:|(f-g)(x)|=\|f-g\|_{\infty}\\}$ and
$\int\limits_{X}\overline{(f-g)}h\,d\mu=0\text{ for all }h\in{\mathcal{B}}$.
The condition that the support of $\mu$ is contained in the set $\\{x\in
X:|(f-g)(x)|=\|f-g\|_{\infty}\\}$ is equivalent to
$\int\limits_{X}|f-g|^{2}\,d\mu=\|f-g\|_{\infty}^{2}$.
A _positive linear map_ from ${\mathcal{A}}$ to another $C^{*}$-algebra
${\mathcal{A}}_{0}$ is a linear map that maps positive elements of
${\mathcal{A}}$ to positive elements of ${\mathcal{A}}_{0}$. For
${\mathbb{F}}={\mathbb{C}}$, a _state_ on ${\mathcal{A}}$ is a positive linear
functional $\phi$ on ${\mathcal{A}}$ such that $\phi(1_{{\mathcal{A}}})=1$.
For ${\mathbb{F}}={\mathbb{R}}$, an additional requirement for $\phi$ to be a
state is that $\phi(a^{*})=\phi(a)$ for all $a\in{\mathcal{A}}$. Let
$\mathcal{S}_{{\mathcal{A}}}$ denote the set of states on ${\mathcal{A}}$.
Using the Riesz Representation Theorem, the above characterization for best
approximation in ${C}(X)$ is equivalent to saying that there exists
$\phi\in\mathcal{S}_{{C}(X)}$ such that
(1) $\phi(|f-g|^{2})=\|f-g\|_{\infty}^{2}\text{ and
}\phi(\overline{(f-g)}h)=0\text{ for all }h\in{\mathcal{B}}.$
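As a concrete illustration of (1) (our example, not taken from [15]): let $X=[0,1]$, $f(x)=x$ and ${\mathcal{B}}$ the subspace of constant functions, so that $g\equiv 1/2$ is a best approximation with $\|f-g\|_{\infty}=1/2$. The state induced by $\mu=\frac{1}{2}(\delta_{0}+\delta_{1})$, which is supported on the extremal set $\\{0,1\\}$, satisfies both conditions:
$\phi(|f-g|^{2})=\tfrac{1}{2}\left(\tfrac{1}{4}+\tfrac{1}{4}\right)=\tfrac{1}{4}=\|f-g\|_{\infty}^{2}\quad\text{and}\quad\phi(\overline{(f-g)}\,c)=\tfrac{1}{2}\left(-\tfrac{c}{2}+\tfrac{c}{2}\right)=0\ \text{ for every constant }c.$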
For $a\in{\mathcal{A}}$ and ${\mathcal{B}}$ a subspace of ${\mathcal{A}}$, $a$
is said to be _Birkhoff-James orthogonal_ to ${\mathcal{B}}$ (or
_${\mathcal{B}}$ -minimal_) if $\|a\|\leq\|a+b\|$ for all $b\in{\mathcal{B}}$.
Note that this is equivalent to saying that $0$ is a best approximation to $a$
in ${\mathcal{B}}$. It was proved in Theorem 2 of [16] that $0$ is a best
approximation to an element $a$ of a complex $C^{*}$-algebra ${\mathcal{A}}$
in ${\mathbb{C}}1_{{\mathcal{A}}}$ if and only if there exists
$\phi\in\mathcal{S}_{{\mathcal{A}}}$ such that $\phi(a^{*}a)=\|a\|^{2}$ and
$\phi(a)=0$. Theorem 6.1 in [13] shows that if ${\mathcal{B}}$ is a
$C^{*}$-subalgebra containing $1_{{\mathcal{A}}}$ of a complex $C^{*}$-algebra
${\mathcal{A}}$ and if $0$ is a best approximation to a Hermitian element $a$
of ${\mathcal{A}}$ in ${\mathcal{B}}$, then there exists $\phi\in
S_{{\mathcal{A}}}$ such that $\phi(a^{2})=\|a\|^{2}$ and $\phi(ab+b^{*}a)=0$
for all $b\in{\mathcal{B}}$. In Proposition 4.10 of [4], it was proved that
for any elements $a$ and $b$ of a complex $C^{*}$-algebra ${\mathcal{A}}$, $0$
is a best approximation to $a$ in ${\mathbb{C}}b$ if and only if there exists
$\phi\in\mathcal{S}_{{\mathcal{A}}}$ such that $\phi(a^{*}a)=\|a\|^{2}$ and
$\phi(a^{*}b)=0$. The main result of this article shows the existence of such
a state for any element $a$ and for any subspace ${\mathcal{B}}$ of a
$C^{*}$-algebra over ${\mathbb{F}}$.
###### Theorem 1.1.
Let $a\in{\mathcal{A}}$. Let ${\mathcal{B}}$ be a subspace of ${\mathcal{A}}$.
Then $b_{0}$ is a best approximation to $a$ in ${\mathcal{B}}$ if and only if
there exists $\phi\in\mathcal{S}_{{\mathcal{A}}}$ such that
(2) $\phi((a-b_{0})^{*}(a-b_{0}))=\|a-b_{0}\|^{2}\text{ and
}\phi(a^{*}b)=\phi(b_{0}^{*}b)\text{ for all }b\in{\mathcal{B}}.$
For $\phi\in S_{{\mathcal{A}}}$ and $a_{1},a_{2}\in{\mathcal{A}}$, define
$\langle a_{1}|a_{2}\rangle_{\phi}=\phi(a_{1}^{*}a_{2})$. This is a semi-inner
product on ${\mathcal{A}}$. Let $\|a_{1}\|_{\phi}=\langle
a_{1}|a_{1}\rangle_{\phi}^{1/2}$. In this notation, the above theorem says
that $b_{0}$ is a best approximation to $a$ in ${\mathcal{B}}$ if and only if
there exists $\phi\in\mathcal{S}_{{\mathcal{A}}}$ such that
$\|a-b_{0}\|_{\phi}=\|a-b_{0}\|\text{ and }\langle
a-b_{0}|b\rangle_{\phi}=0\text{ for all }b\in{\mathcal{B}}.$
We note that (2) is a Pythagoras theorem in the semi-inner product space
$({\mathcal{A}},\langle\cdot|\cdot\rangle_{\phi})$. Consider the triangle with
vertices $0,a,b_{0}$ in $({\mathcal{A}},\langle\cdot|\cdot\rangle_{\phi})$. If
$a\notin{\mathcal{B}}$, then (2) gives that
$\|a\|_{\phi}^{2}=\|b_{0}\|_{\phi}^{2}+\|a-b_{0}\|^{2}$ and $\langle
a-b_{0}|b\rangle_{\phi}=0\text{ for all }b\in{\mathcal{B}}$. If
$\|b_{0}\|_{\phi}=0$, then we have $\|a\|_{\phi}=\|a-b_{0}\|$. This means that
the length of the base and the length of the perpendicular are $0$ and
$\|a-b_{0}\|$, respectively. Suppose $\|b_{0}\|_{\phi}\neq 0$. Let
$\theta_{\phi}^{a_{1},a_{2}}=\cos^{-1}\left(\dfrac{\langle
a_{1}|a_{2}\rangle_{\phi}}{\|a_{1}\|_{\phi}\|a_{2}\|_{\phi}}\right)$ be the
angle between the vectors $a_{1}$ and $a_{2}$ in
$({\mathcal{A}},\langle\cdot|\cdot\rangle_{\phi})$, when
$\|a_{1}\|_{\phi},\|a_{2}\|_{\phi}\neq 0$. Then we have
$\|a-b_{0}\|_{\phi}=\|a-b_{0}\|$ and $\theta_{\phi}^{a-b_{0},b}=\pi/2$ for all
$b\in{\mathcal{B}}$. In particular, the above triangle becomes a right angled
triangle and the length of the perpendicular is $\|a-b_{0}\|$.
As a consequence, we obtain a distance formula of an element
$a\in{\mathcal{A}}$ from a subspace ${\mathcal{B}}$ of ${\mathcal{A}}$.
###### Corollary 1.2.
Let $a\in{\mathcal{A}}$. Let ${\mathcal{B}}$ be a subspace of ${\mathcal{A}}$.
If $b_{0}$ is a best approximation to $a$ in ${\mathcal{B}}$, then
(3) ${\mathop{\rm
dist}}(a,{\mathcal{B}})^{2}=\max\\{\phi(a^{*}a)-\phi(b_{0}^{*}b_{0}):\phi\in\mathcal{S}_{{\mathcal{A}}}\text{
and }\phi(a^{*}b)=\phi(b_{0}^{*}b)\text{ for all }b\in{\mathcal{B}}\\}.$
A special case of the above corollary is the following result by Williams
[16], who proved that for $a\in{\mathcal{A}}$,
(4) ${\mathop{\rm
dist}}(a,\mathbb{C}1_{\mathcal{A}})^{2}=\max\\{\phi(a^{*}a)-|\phi(a)|^{2}:\phi\in\mathcal{S}_{{\mathcal{A}}}\\}.$
See [14, Theorem 3.10] for a different proof of (4). For $n\times n$ complex
matrices, a different proof has also been given in [2, Theorem 9].
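For readers who want a finite-dimensional feel for (4), the following numerical sketch (ours, not from [16], [14] or [2]) treats $\mathbb{M}_{n}(\mathbb{C})$, where every state is of the form $\phi(X)={\mathrm{tr}}(\rho X)$ for a density matrix $\rho$. The left side of (4) is computed by minimizing the operator norm over $\lambda$; sampled states never exceed it, in line with the maximum formula:

```python
# Numerical illustration of Williams' formula (4) in M_n(C): states are
# phi(X) = tr(rho X) with rho a density matrix. Random states give lower
# bounds on dist(A, C 1)^2 = max_phi phi(A*A) - |phi(A)|^2.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

res = minimize(lambda p: np.linalg.norm(A - (p[0] + 1j * p[1]) * np.eye(n), 2),
               [0.0, 0.0], method="Nelder-Mead")
dist2 = res.fun**2            # dist(A, C 1)^2 in the operator norm

def random_state(n):
    B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    rho = B @ B.conj().T
    return rho / np.trace(rho).real

def value(rho):
    # phi(A*A) - |phi(A)|^2 for the state phi = tr(rho .)
    return np.trace(rho @ A.conj().T @ A).real - abs(np.trace(rho @ A))**2

vals = [value(random_state(n)) for _ in range(20000)]
print(f"dist^2 = {dist2:.4f},  best sampled state value = {max(vals):.4f}")
```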
As a direct consequence of Theorem 1.1, we get the following characterization
of Birkhoff-James orthogonality to a subspace in a $C^{*}$-algebra.
###### Corollary 1.3.
Let $a\in{\mathcal{A}}$. Let ${\mathcal{B}}$ be a subspace of ${\mathcal{A}}$.
Then $a$ is Birkhoff-James orthogonal to ${\mathcal{B}}$ if and only if there
exists $\phi\in\mathcal{S}_{{\mathcal{A}}}$ such that
$\phi(a^{*}a)=\|a\|^{2}\text{ and }\phi(a^{*}b)=0\text{ for all
}b\in{\mathcal{B}}.$
Geometrically, this says that $a$ is Birkhoff-James orthogonal to
$\mathcal{B}$ if and only if there exists $\phi\in\mathcal{S}_{{\mathcal{A}}}$
and a corresponding semi-inner product $\langle\cdot|\cdot\rangle_{\phi}$ on
${\mathcal{A}}$ such that $\|a\|_{\phi}=\|a\|$ and $a$ is perpendicular to
${\mathcal{B}}$ in $({\mathcal{A}},\langle\cdot|\cdot\rangle_{\phi})$.
In Section 2, we give the proofs of Theorem 1.1 and Corollary 1.2. In Section
3, we give some other applications of Theorem 1.1. In Theorem 3.1, we show
that $0$ is a best approximation to $a$ in ${\mathcal{B}}$ if and only if $0$
is a best approximation to $a^{*}a$ in $a^{*}{\mathcal{B}}$. In Theorem 3.4,
it is shown that for any element $a\in{\mathcal{A}}$ and a subspace
${\mathcal{B}}$ of ${\mathcal{A}}$, there exists a cyclic representation
$({\mathcal{H}},\pi,\xi)$ of ${\mathcal{A}}$ and a unit vector
$\eta\in{\mathcal{H}}$ such that ${\mathop{\rm
dist}}(a,{\mathcal{B}})=\langle\eta|\pi(a)\xi\rangle$ and
$\langle\eta|\pi(b)\xi\rangle=0$ for all $b\in{\mathcal{B}}$. In Theorem 3.5,
a characterization for Birkhoff-James orthogonality of an element of a
_Hilbert $C^{*}$-module_ to a subspace is given. It is proved that an element
$e$ of a Hilbert $C^{*}$-module ${\mathcal{E}}$ over ${\mathcal{A}}$ is
Birkhoff-James orthogonal to a subspace ${\mathcal{B}}$ of ${\mathcal{E}}$ if
and only if there exists $\phi\in S_{{\mathcal{A}}}$ such that
$\phi(\left<e,e\right>)=\|e\|^{2}\mbox{ and }\phi(\left<e,b\right>)=0$ for all
$b\in{\mathcal{B}}$. In [14], it was desired to have the generalization of
distance formula (4) in terms of _conditional expectations_ from
${\mathcal{A}}$ to ${\mathcal{B}}$. In Section 4, we make some remarks on our
progress towards obtaining this. Corollary 1.2, Corollary 1.3, Theorem 3.5 and
Equation (15) are mentioned in the survey article [9]. We provide the complete
details here.
## 2. Proofs
A few notations are in order. Let ${\mathcal{H}}$ be a Hilbert space over
${\mathbb{F}}$. The inner product is assumed to be conjugate linear in the
first coordinate and linear in the second coordinate. Let
$\mathscr{B}({\mathcal{H}})$ be the $C^{*}$-algebra of bounded
${\mathbb{F}}$-linear operators on ${\mathcal{H}}$. The symbol $I$ denotes the
identity in $\mathscr{B}({\mathcal{H}})$. The triple $({\mathcal{H}},\pi,\xi)$
denotes a cyclic representation of ${\mathcal{A}}$, where $\|\xi\|=1$,
$\pi:{\mathcal{A}}\rightarrow\mathscr{B}({\mathcal{H}})$ is a $*$-algebra map
satisfying $\pi(1_{{\mathcal{A}}})=I$, and the closure of
$\\{\pi(a)\xi:a\in{\mathcal{A}}\\}$ is ${\mathcal{H}}$.
Proof of Theorem 1.1. If $\phi$ is a state such that (2) holds, then for every
$b\in{\mathcal{B}}$,
$\|a-b_{0}\|^{2}=\phi((a-b_{0})^{*}(a-b_{0}))\leq\phi((a-b_{0})^{*}(a-b_{0}))+\phi(b^{*}b)=\phi((a-b_{0}-b)^{*}(a-b_{0}-b))\leq\|a-b_{0}-b\|^{2},$
where the second equality uses
$\phi((a-b_{0})^{*}b)=0=\phi(b^{*}(a-b_{0}))$, which follows from (2).
So $b_{0}$ is a best approximation to $a$ in ${\mathcal{B}}$. For the
converse, first assume that ${\mathcal{A}}$ is a complex $C^{*}$-algebra.
By the Hahn-Banach theorem, there exists $\psi\in{\mathcal{A}}^{*}$ such that
$\|\psi\|=1$, $\psi(a-b_{0})={\mathop{\rm dist}}(a,{\mathcal{B}})=\|a-b_{0}\|$
and $\psi(b)=0$ for all $b\in{\mathcal{B}}$. By Lemma 3.3 of [13], there
exists a cyclic representation $({\mathcal{H}},\pi,\xi)$ of ${\mathcal{A}}$
and a unit vector $\eta\in{\mathcal{H}}$ such that
(5) $\psi(c)=\langle\eta|\pi(c)\xi\rangle\text{ for all }c\in{\mathcal{A}}.$
Now $\psi(a-b_{0})=\langle\eta|\pi(a-b_{0})\xi\rangle=\|a-b_{0}\|$. So by
using the condition for equality in the Cauchy-Schwarz inequality, we obtain
$\pi(a-b_{0})\xi=\|a-b_{0}\|\eta$. Equation (5) gives
$\psi(c)=\dfrac{1}{\|a-b_{0}\|}\langle\pi(a-b_{0})\xi|\pi(c)\xi\rangle\text{
for all }c\in{\mathcal{A}}.$
Therefore
(6) $\langle\pi(a-b_{0})\xi|\pi(a-b_{0})\xi\rangle=\|a-b_{0}\|^{2}$
and
(7) $\langle\pi(a-b_{0})\xi|\pi(b)\xi\rangle=0\text{ for all
}b\in{\mathcal{B}}.$
Define $\phi\in{\mathcal{A}}^{*}$ as $\phi(c)=\langle\xi|\pi(c)\xi\rangle$.
Then $\phi\in\mathcal{S}_{{\mathcal{A}}}$ and by (6) and (7), we obtain (2).
Next, let ${\mathcal{A}}$ be a real $C^{*}$-algebra. Let ${\mathcal{A}}_{c}$
be the complexification of $({\mathcal{A}},\|\cdot\|)$ with the unique norm
$\|\cdot\|_{c}$ such that $({\mathcal{A}}_{c},\|\cdot\|_{c})$ is a
$C^{*}$-algebra and the natural embedding of ${\mathcal{A}}$ into
${\mathcal{A}}_{c}$ is an isometry [7, Corollary 15.4]. From the above case,
there exists $\psi\in S_{{\mathcal{A}}_{c}}$ such that
$\psi((a-b_{0})^{*}(a-b_{0}))=\|a-b_{0}\|^{2}$ and
$\psi(a^{*}b)=\psi(b_{0}^{*}b)\text{ for all }b\in{\mathcal{B}}.$ Let
$\phi=\text{Re }\psi|_{{\mathcal{A}}}$. Then $\phi\in S_{{\mathcal{A}}}$,
$\phi((a-b_{0})^{*}(a-b_{0}))=\|a-b_{0}\|^{2}$ and
$\phi(a^{*}b)=\phi(b_{0}^{*}b)\text{ for all }b\in{\mathcal{B}}$. ∎
Another proof of Theorem 1.1, in the case when ${\mathcal{A}}$ is a complex
$C^{*}$-algebra, can be given as follows. The importance of this approach is
that it indicates that proving the theorem when ${\mathcal{B}}$ is a one
dimensional subspace is sufficient. Since $b_{0}$ is a best approximation to
$a$ in ${\mathcal{B}}$, $0$ is a best approximation to $a-b_{0}$ in
${\mathcal{B}}$. So without loss of generality, we assume $b_{0}=0$. For
$b\in{\mathcal{B}}$, we have $\|a\|\leq\|a+\lambda b\|$ for all
$\lambda\in{\mathbb{C}}$. By Proposition 4.1 of [4], there exists
$\phi_{b}\in\mathcal{S}_{{\mathcal{A}}}$ such that
$\phi_{b}(a^{*}a)=\|a\|^{2}$ and $\phi_{b}(a^{*}b)=0$. Let
${\mathcal{N}}=\\{\alpha a^{*}a+\beta
1_{{\mathcal{A}}}+a^{*}b:\alpha,\beta\in{\mathbb{C}}$, $b\in{\mathcal{B}}\\}$,
the subspace generated by $a^{*}a$, $1_{{\mathcal{A}}}$ and
$a^{*}{\mathcal{B}}$. Define $\psi:{\mathcal{N}}\longrightarrow{\mathbb{C}}$
as $\psi(\alpha a^{*}a+\beta 1_{{\mathcal{A}}}+a^{*}b)=\alpha\|a\|^{2}+\beta$
for all $\alpha,\beta\in{\mathbb{C}}$ and $b\in{\mathcal{B}}$. To see that
$\psi$ is well defined, note that for any $b\in{\mathcal{B}}$ we have
$\phi_{b}(\alpha a^{*}a+\beta 1_{{\mathcal{A}}}+a^{*}b)=\alpha\|a\|^{2}+\beta$. Since
$\|\phi_{b}\|=1$, we get
(8) $|\alpha\|a\|^{2}+\beta|\leq\|\alpha a^{*}a+\beta
1_{{\mathcal{A}}}+a^{*}b\|.$
Thus $\alpha a^{*}a+\beta 1_{{\mathcal{A}}}+a^{*}b=0$ implies
$\alpha\|a\|^{2}+\beta=0.$ Clearly $\psi$ is a linear map and equation (8)
shows that $\|\psi\|\leq 1$. Since $\psi(1_{{\mathcal{A}}})=1$, we have
$\|\psi\|=1$. By the Hahn-Banach theorem, there exists a linear functional
$\phi:{\mathcal{A}}\rightarrow{\mathbb{C}}$ such that $\|\phi\|=1$ and
$\phi|_{{\mathcal{N}}}=\psi$. Since $\|\phi\|=1=\phi(1_{{\mathcal{A}}})$,
using Theorem II.6.2.5(ii) of [5], we get that
$\phi\in\mathcal{S}_{{\mathcal{A}}}$. By definition, $\phi$ satisfies the
required conditions.
Proof of Corollary 1.2. Let $\phi\in\mathcal{S}_{{\mathcal{A}}}$ be such that
$\phi(a^{*}b)=\phi(b_{0}^{*}b)$ for all $b\in{\mathcal{B}}$. In particular we
have $\phi(a^{*}b_{0})=\phi(b_{0}^{*}b_{0})$. So
$\phi((a-b_{0})^{*}(a-b_{0}))=\phi(a^{*}a)-\phi(b_{0}^{*}b_{0}).$ Since
$\phi((a-b_{0})^{*}(a-b_{0}))\leq\left\lVert
a-b_{0}\right\rVert^{2}={\mathop{\rm dist}}(a,{\mathcal{B}})^{2}$, we have
$\phi(a^{*}a)-\phi(b_{0}^{*}b_{0})\leq{\mathop{\rm
dist}}(a,{\mathcal{B}})^{2}.$
This gives
$\sup\\{\phi(a^{*}a)-\phi(b_{0}^{*}b_{0}):\phi\in\mathcal{S}_{{\mathcal{A}}},\phi(a^{*}b)=\phi(b_{0}^{*}b)\text{
for all }b\in{\mathcal{B}}\\}\leq{\mathop{\rm dist}}(a,{\mathcal{B}})^{2}.$
By Theorem 1.1, there exists $\phi\in S_{{\mathcal{A}}}$ such that
${\mathop{\rm
dist}}(a,{\mathcal{B}})^{2}=\phi(a^{*}a)-\phi(b_{0}^{*}b_{0})\text{ and
}\phi(a^{*}b)=\phi(b_{0}^{*}b)\text{ for all }b\in{\mathcal{B}}.$
This completes the proof. ∎
## 3. Applications
An interesting fact arises out of Corollary 1.3, which is worth noting
separately.
###### Theorem 3.1.
Let $a\in{\mathcal{A}}$. Let ${\mathcal{B}}$ be a subspace of ${\mathcal{A}}$.
Then $a$ is Birkhoff-James orthogonal to ${\mathcal{B}}$ if and only if
$a^{*}a$ is Birkhoff-James orthogonal to $a^{*}{\mathcal{B}}$.
###### Proof.
First suppose that $a$ is Birkhoff-James orthogonal to ${\mathcal{B}}$. Then by
Corollary 1.3, there exists $\phi\in S_{{\mathcal{A}}}$ such that
$\phi(a^{*}a)=\|a\|^{2}$ and $\phi(a^{*}b)=0$ for all $b\in{\mathcal{B}}$. So
for $b\in{\mathcal{B}}$, $\phi(a^{*}a+a^{*}b)=\|a\|^{2}$. Since $\|\phi\|=1$,
we get $\|a^{*}a\|=\|a\|^{2}\leq\|a^{*}a+a^{*}b\|.$ Conversely, suppose
$a^{*}a$ is Birkhoff-James orthogonal to $a^{*}{\mathcal{B}}$, that is,
$\|a^{*}a\|\leq\|a^{*}a+a^{*}b\|$ for every $b\in{\mathcal{B}}$. This implies
$\|a\|^{2}\leq\|a^{*}\|\|a+b\|$ and thus $\|a\|\leq\|a+b\|$ for all
$b\in{\mathcal{B}}$. ∎
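A quick numerical sanity check of Theorem 3.1 in $\mathbb{M}_{n}(\mathbb{C})$ with a one-dimensional subspace ${\mathcal{B}}={\mathbb{C}}b$ (our illustration, not part of the paper): $a$ is Birkhoff-James orthogonal to ${\mathbb{C}}b$ precisely when $\min_{\lambda}\|a+\lambda b\|=\|a\|$, and the theorem predicts the same minimum property for $a^{*}a$ and $a^{*}b$. The pair below is constructed so that the norming vector $e_{1}$ of $a$ satisfies $\langle ae_{1},be_{1}\rangle=0$:

```python
# Sanity check of Theorem 3.1 for B = C b in M_3(C): both "gaps"
# min_lambda ||x + lambda y|| - ||x|| should vanish together.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def bj_gap(x, y):
    f = lambda p: np.linalg.norm(x + (p[0] + 1j * p[1]) * y, 2)
    return minimize(f, [0.0, 0.0], method="Nelder-Mead").fun \
        - np.linalg.norm(x, 2)

a = np.diag([2.0, 1.0, 0.0]).astype(complex)      # norming vector e_1
b = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
b[0, 0] = 0.0   # ensures <a e_1, b e_1> = 0, so a is BJ-orthogonal to C b
print(bj_gap(a, b))                                # ~ 0
print(bj_gap(a.conj().T @ a, a.conj().T @ b))      # ~ 0, as Theorem 3.1 predicts
```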
We now show that Theorem 1 of [8] can also be proved using Corollary 1.3. We first prove the following lemma, which is of independent interest. The proof of the lemma is along the same lines as a portion of the proof of Theorem 1 of [3]. For $u,v\in{\mathcal{H}}$, $u\bar{\otimes}v$ will denote the rank-one operator on ${\mathcal{H}}$ defined as $u\bar{\otimes}v(w)=\langle v|w\rangle u$ for all $w\in{\mathcal{H}}$.
###### Lemma 3.2.
Let $A\in\mathscr{B}({\mathcal{H}})$. Let $T$ be a positive trace class
operator with $\|T\|_{1}=1$ and ${\mathrm{tr}}(AT)=\|A\|$. Then there is an at
most countable index set $\mathcal{J}$, a set of positive numbers $\{s_{j}:j\in\mathcal{J}\}$ and an orthonormal set $\{u_{j}:j\in\mathcal{J}\}\subseteq\text{Ker}(T)^{\bot}$ such that
(i) $\sum\limits_{j\in\mathcal{J}}s_{j}=1$,
(ii) $Au_{j}=\|A\|u_{j}$ for each $j\in\mathcal{J}$,
(iii) $T=\sum\limits_{j\in\mathcal{J}}s_{j}u_{j}\bar{\otimes}u_{j}$.
###### Proof.
Using Corollary 5.4 of [6, Ch. II], there exist a sequence of real numbers $s_{1},s_{2},\dots$ and an orthonormal basis $\{u_{1},u_{2},\dots\}$ of $\text{Ker}(T)^{\bot}$ such that $T=\sum\limits_{i=1}^{\infty}s_{i}u_{i}\bar{\otimes}u_{i}$. Since $T$ is positive, the $s_{i}$ are non-negative, and $\|T\|_{1}=1$ implies $\sum\limits_{i=1}^{\infty}s_{i}=1$. Now $AT=\sum\limits_{i=1}^{\infty}s_{i}Au_{i}\bar{\otimes}u_{i}$. Let $\mathcal{J}=\{i\in{\mathbb{N}}:s_{i}\neq 0\}$. Then $\sum\limits_{j\in\mathcal{J}}s_{j}=1$ and $AT=\sum\limits_{j\in\mathcal{J}}s_{j}Au_{j}\bar{\otimes}u_{j}$. So
${\mathrm{tr}}(AT)=\sum\limits_{j\in\mathcal{J}}s_{j}{\mathrm{tr}}(Au_{j}\bar{\otimes}u_{j})=\sum\limits_{j\in\mathcal{J}}s_{j}\left\langle u_{j}|Au_{j}\right\rangle.$
Now
$\|A\|={\mathrm{tr}}(AT)=\sum\limits_{j\in\mathcal{J}}s_{j}\langle u_{j}|Au_{j}\rangle=\Big|\sum\limits_{j\in\mathcal{J}}s_{j}\langle u_{j}|Au_{j}\rangle\Big|\leq\sum\limits_{j\in\mathcal{J}}s_{j}\big|\langle u_{j}|Au_{j}\rangle\big|\leq\sum\limits_{j\in\mathcal{J}}s_{j}\|Au_{j}\|\leq\sum\limits_{j\in\mathcal{J}}s_{j}\|A\|=\|A\|.$
So
$\sum\limits_{j\in\mathcal{J}}s_{j}\big|\langle u_{j}|Au_{j}\rangle\big|=\sum\limits_{j\in\mathcal{J}}s_{j}\|Au_{j}\|=\|A\|.$
Therefore
$0=\sum\limits_{j\in\mathcal{J}}s_{j}\left(\|A\|-\big|\langle u_{j}|Au_{j}\rangle\big|\right)=\sum\limits_{j\in\mathcal{J}}s_{j}\left(\|Au_{j}\|-\big|\langle u_{j}|Au_{j}\rangle\big|\right).$
Since $s_{j}>0$ for all $j\in\mathcal{J}$, we get
(9) $\|A\|=\big|\langle u_{j}|Au_{j}\rangle\big|=\|Au_{j}\|\text{ for all }j\in\mathcal{J}.$
By the condition of equality in the Cauchy-Schwarz inequality, for every $j\in\mathcal{J}$ the vectors $Au_{j}$ and $u_{j}$ are linearly dependent, say $Au_{j}=\beta_{j}u_{j}$ for some $\beta_{j}\in{\mathbb{C}}$. Moreover, the equality $\sum_{j\in\mathcal{J}}s_{j}\langle u_{j}|Au_{j}\rangle=\sum_{j\in\mathcal{J}}s_{j}\big|\langle u_{j}|Au_{j}\rangle\big|$ forces $\langle u_{j}|Au_{j}\rangle\geq 0$, so using (9) we get $\beta_{j}=\langle u_{j}|Au_{j}\rangle=\|A\|$, that is, $Au_{j}=\|A\|u_{j}$. This completes the proof. ∎
Let $\mathbb{M}_{n}({\mathbb{F}})$ be the $C^{*}$-algebra of $n\times n$
matrices with entries in ${\mathbb{F}}$. A _density matrix_
$A\in\mathbb{M}_{n}({\mathbb{F}})$ is a positive element in
$\mathbb{M}_{n}({\mathbb{F}})$ with ${\mathrm{tr}}(A)=1$. A different proof of
Theorem 1 in [8] follows.
###### Theorem 3.3.
[8, Theorem 1] Let $A\in\mathbb{M}_{n}({\mathbb{F}})$. Let $m(A)$ be the
multiplicity of the maximum singular value $\|A\|$ of $A$. Let ${\mathcal{B}}$
be a subspace of $\mathbb{M}_{n}({\mathbb{F}})$. Then $A$ is Birkhoff-James
orthogonal to ${\mathcal{B}}$ if and only if there exists a density matrix
$T\in\mathbb{M}_{n}({\mathbb{F}})$ of rank at most $m(A)$ such that
$A^{*}AT=\|A\|^{2}T$ and ${\mathrm{tr}}(B^{*}AT)=0$ for all
$B\in{\mathcal{B}}$.
###### Proof.
By Corollary 1.3, there exists a density matrix $T$ such that ${\mathrm{tr}}(A^{*}AT)=\|A\|^{2}$ and ${\mathrm{tr}}(B^{*}AT)=0$ for all $B\in{\mathcal{B}}$. Using Lemma 3.2 (applied to $A^{*}A$), there exist $s_{1},\ldots,s_{m}>0$ and a set of orthonormal vectors $\{u_{1},\ldots,u_{m}\}$ such that $\sum\limits_{j=1}^{m}s_{j}=1$, $A^{*}Au_{j}=\|A\|^{2}u_{j}$ for every $j=1,\ldots,m$ and $T=\sum\limits_{j=1}^{m}s_{j}u_{j}\bar{\otimes}u_{j}$. Clearly $\text{rank }T\leq m\leq m(A)$ and $A^{*}AT=\|A\|^{2}T$. Conversely, given such a $T$, the state $\phi(X)={\mathrm{tr}}(XT)$ satisfies $\phi(A^{*}A)=\|A\|^{2}$ and $\phi(A^{*}B)={\mathrm{tr}}(A^{*}BT)=\overline{{\mathrm{tr}}(B^{*}AT)}=0$ for all $B\in{\mathcal{B}}$, so $A$ is Birkhoff-James orthogonal to ${\mathcal{B}}$ by Corollary 1.3. ∎
It is worth noting that from the proof of Theorem 3.3, we get that
$A^{*}AT=\|A\|^{2}T$ is equivalent to ${\mathrm{tr}}(A^{*}AT)=\|A\|^{2}$,
where $A,T\in\mathbb{M}_{n}({\mathbb{F}})$ and $T$ is a density matrix. This
supplements Remark 1 of [8].
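For a concrete feel of this equivalence, here is a quick numerical check in $\mathbb{M}_{4}({\mathbb{R}})$ (a sketch; the choice of dimension and of real entries is ours):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))

# Spectral data of A^*A; the top eigenvalue equals ||A||^2.
w, V = np.linalg.eigh(A.T @ A)
norm_sq = w[-1]

# A density matrix supported on the top eigenspace satisfies both conditions.
u_top = V[:, -1:]
T_top = u_top @ u_top.T
print(np.isclose(np.trace(A.T @ A @ T_top), norm_sq))   # True
print(np.allclose(A.T @ A @ T_top, norm_sq * T_top))    # True

# Mixing in a lower eigenvector destroys both conditions simultaneously.
u_low = V[:, :1]
T_mix = 0.5 * T_top + 0.5 * (u_low @ u_low.T)
print(np.trace(A.T @ A @ T_mix) < norm_sq)              # True (the trace drops)
print(np.allclose(A.T @ A @ T_mix, norm_sq * T_mix))    # False
```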
Next we note that the idea of the proof of Theorem 1.1 also proves the
following generalization of Corollary 2.8 in [1].
###### Theorem 3.4.
Let $a\in{\mathcal{A}}$. Let ${\mathcal{B}}$ be a subspace of ${\mathcal{A}}$.
Then there exists a cyclic representation $({\mathcal{H}},\pi,\xi)$ of
${\mathcal{A}}$ and a unit vector $\eta\in{\mathcal{H}}$ such that
${\mathop{\rm dist}}(a,{\mathcal{B}})=\langle\eta|\pi(a)\xi\rangle$ and
$\langle\eta|\pi(b)\xi\rangle=0$ for all $b\in{\mathcal{B}}$.
###### Proof.
By the Hahn-Banach theorem, there exists $\psi\in{\mathcal{A}}^{*}$ such that
$\|\psi\|=1$, $\psi(a)={\mathop{\rm dist}}(a,{\mathcal{B}})$ and $\psi(b)=0$
for all $b\in{\mathcal{B}}$. By Lemma 3.3 of [13], there exists a cyclic
representation $({\mathcal{H}},\pi,\xi)$ of ${\mathcal{A}}$ and a unit vector
$\eta\in{\mathcal{H}}$ such that $\psi(c)=\langle\eta|\pi(c)\xi\rangle\text{
for all }c\in{\mathcal{A}}.$ ∎
It was shown in [3] that for any $A\in\mathbb{M}_{n}({\mathbb{C}})$,
${\mathop{\rm dist}}(A,{\mathbb{C}}1_{{\mathcal{A}}})=\max\{|\langle y|Ax\rangle|:x,y\in{\mathbb{C}}^{n},\|x\|=\|y\|=1\text{ and }x\bot y\}.$
Using Theorem 3.4, we obtain a similar formula for ${\mathop{\rm
dist}}(a,{\mathcal{B}})$, in the general case of a unital $C^{*}$-algebra
${\mathcal{A}}$ and ${\mathbb{C}}1_{{\mathcal{A}}}$ replaced with any subspace
${\mathcal{B}}$. We have
(10) ${\mathop{\rm dist}}(a,{\mathcal{B}})=\max\big\{|\langle\eta|\pi(a)\xi\rangle|:({\mathcal{H}},\pi,\xi)\text{ is a cyclic representation of }{\mathcal{A}},\ \eta\in{\mathcal{H}},\ \|\eta\|=1\text{ and }\langle\eta|\pi(b)\xi\rangle=0\text{ for all }b\in{\mathcal{B}}\big\}.$
Under the restriction that a best approximation to $a$ in ${\mathcal{B}}$ exists, the above formula was obtained in [9, Theorem 4.3]. Another formula
for ${\mathop{\rm dist}}(a,{\mathcal{B}})$ when ${\mathcal{B}}$ is a
$C^{*}$-subalgebra of ${\mathcal{A}}$ was proved in Theorem 3.2 of [13]. For
more distance formulas, see [3] and [8] for a discussion in
$\mathbb{M}_{n}({\mathbb{C}})$, [1] and [12] for $\mathscr{B}({\mathcal{H}})$
and [1] for general complex $C^{*}$-algebras and Hilbert $C^{*}$-modules over
a complex $C^{*}$-algebra.
A Hilbert $C^{*}$-module ${\mathcal{E}}$ over ${\mathcal{A}}$ is a right
${\mathcal{A}}$-module with a function
$\left<\cdot,\cdot\right>:{\mathcal{E}}\times{\mathcal{E}}\rightarrow{\mathcal{A}}$,
known as ${\mathcal{A}}$-valued semi-inner product, with the following
properties for
$\xi,\eta,\zeta\in{\mathcal{E}},a\in{\mathcal{A}},\lambda\in{\mathbb{C}}:$
(1) $\left<\xi,\eta+\zeta\right>=\left<\xi,\eta\right>+\left<\xi,\zeta\right>$ and $\left<\xi,\lambda\eta\right>=\lambda\left<\xi,\eta\right>$,
(2) $\left<\xi,\eta a\right>=\left<\xi,\eta\right>a$,
(3) $\left<\xi,\eta\right>=\left<\eta,\xi\right>^{*}$,
(4) $\left<\xi,\xi\right>$ is a positive element of ${\mathcal{A}}$.
Let ${\mathcal{K}}$ be a Hilbert space and let $\mathscr{B}({\mathcal{H}},{\mathcal{K}})$ denote the space of bounded ${\mathbb{F}}$-linear operators from ${\mathcal{H}}$ to ${\mathcal{K}}$. It is
a Hilbert $C^{*}$-module over $\mathscr{B}({\mathcal{H}})$ with
$\left<A,B\right>=A^{*}B$ for all
$A,B\in\mathscr{B}({\mathcal{H}},{\mathcal{K}})$. The below result extends
Theorem 2.7 of [1] and Theorem 4.4 of [4].
###### Theorem 3.5.
Let $e\in{\mathcal{E}}$. Let ${\mathcal{B}}$ be a subspace of ${\mathcal{E}}$.
Then $e$ is Birkhoff-James orthogonal to ${\mathcal{B}}$ in the Banach space
${\mathcal{E}}$ if and only if there exists $\phi\in S_{{\mathcal{A}}}$ such
that $\phi(\left<e,e\right>)=\|e\|^{2}\mbox{ and }\phi(\left<e,b\right>)=0$
for all $b\in{\mathcal{B}}$.
###### Proof.
We prove the theorem for the special case ${\mathcal{E}}=\mathscr{B}({\mathcal{H}},{\mathcal{K}})$. The general case follows by Lemma 4.3 of [4]. The reverse direction is easy. Now let $e$ be orthogonal to ${\mathcal{B}}$. For any operator $t\in\mathscr{B}({\mathcal{H}},{\mathcal{K}})$ we denote by $\tilde{t}$ the operator on ${\mathcal{H}}\oplus{\mathcal{K}}$ given by
$\tilde{t}=\left[\begin{array}{cc}0&0\\ t&0\end{array}\right].$
Note that $e$ is Birkhoff-James orthogonal to ${\mathcal{B}}$ if and only if $\tilde{e}$ is Birkhoff-James orthogonal to $\tilde{\mathcal{B}}=\{\tilde{b}:b\in{\mathcal{B}}\}$. Now using Corollary 1.3, we get that there exists $\tilde{\phi}\in\mathcal{S}_{\mathscr{B}({\mathcal{H}}\oplus{\mathcal{K}})}$ such that $\tilde{\phi}(\tilde{e}^{*}\tilde{e})=\|\tilde{e}\|^{2}\text{ and }\tilde{\phi}(\tilde{e}^{*}\tilde{b})=0\text{ for all }\tilde{b}\in\tilde{\mathcal{B}}.$ Since $\tilde{e}^{*}\tilde{e}$ and each $\tilde{e}^{*}\tilde{b}$ are supported in the ${\mathcal{H}}$-corner, the functional $\phi$ on $\mathscr{B}({\mathcal{H}})$ defined by $\phi(s)=\tilde{\phi}\left(\left[\begin{smallmatrix}s&0\\ 0&0\end{smallmatrix}\right]\right)$ satisfies $\phi(e^{*}e)=\|e\|^{2}$ and $\phi(e^{*}b)=0$ for all $b\in{\mathcal{B}}$; it is the required state. ∎
Another approach to prove the above theorem has been briefly discussed after
Theorem 3.7 in [9]. We also remark that some related results with restricted
hypotheses for $\mathscr{B}({\mathcal{H}})$ and
$\mathscr{B}({\mathcal{H}},{\mathcal{K}})$ have appeared recently in [11]. The
results in this article are stronger in these spaces.
## 4. Remarks
###### Remark 4.1.
For a complex $C^{*}$-algebra ${\mathcal{A}}$ and a $C^{*}$-subalgebra
${\mathcal{B}}$ of ${\mathcal{A}}$ such that
$1_{{\mathcal{A}}}\in{\mathcal{B}}$, a conditional expectation from
${\mathcal{A}}$ to ${\mathcal{B}}$ is a positive linear map $E$ of norm $1$
such that $E(1_{{\mathcal{A}}})=1_{{\mathcal{A}}}$ and
$E(b_{1}ab_{2})=b_{1}E(a)b_{2}$ for all $b_{1},b_{2}\in{\mathcal{B}}$ and
$a\in{\mathcal{A}}$. For any given conditional expectation $E$ from
${\mathcal{A}}$ to ${\mathcal{B}}$, we can define a ${\mathcal{B}}$-valued
inner product on ${\mathcal{A}}$ given by $\langle
a_{1}|a_{2}\rangle_{E}=E(a_{1}^{*}a_{2})$ (see [14]). So
$\langle a-E(a)|a-E(a)\rangle_{E}=E((a-E(a))^{*}(a-E(a)))=E(a^{*}a)-E(E(a)^{*}a)-E(a^{*}E(a))+E(E(a)^{*}E(a))=E(a^{*}a)-E(a)^{*}E(a)-E(a)^{*}E(a)+E(a)^{*}E(a)E(1_{{\mathcal{A}}})=E(a^{*}a)-E(a)^{*}E(a).$
For $\phi\in\mathcal{S}_{{\mathcal{A}}}$, we have
(11) $\phi(\langle
a-E(a)|a-E(a)\rangle_{E})=\phi(E(a^{*}a))-\phi(E(a)^{*}E(a)).$
Since $a^{*}a\leq\|a\|^{2}1_{{\mathcal{A}}}$ and
$E(1_{{\mathcal{A}}})=1_{{\mathcal{A}}}$, we get
$\phi(E(a^{*}a))\leq\|a\|^{2}$ . So
(12) $\phi(E(a^{*}a))-\phi(E(a)^{*}E(a))\leq\|a\|^{2}.$
By (11) and (12), we obtain
(13) $\phi(\langle a-E(a)|a-E(a)\rangle_{E})\leq\|a\|^{2}.$
Now for $b\in{\mathcal{B}}$,
(14) $\langle a-E(a)|a-E(a)\rangle_{E}=\langle
a-b-E(a-b)|a-b-E(a-b)\rangle_{E}.$
By (13) and (14), we obtain $\phi(\langle
a-E(a)|a-E(a)\rangle_{E})\leq\|a-b\|^{2}$ for all $b\in{\mathcal{B}}$, and so
$\phi\left(\langle a-E(a)|a-E(a)\rangle_{E}\right)\leq{\mathop{\rm
dist}}(a,{\mathcal{B}})^{2}$. Thus we obtain a lower bound for ${\mathop{\rm
dist}}(a,{\mathcal{B}})$ as follows:
(15) ${\mathop{\rm dist}}(a,{\mathcal{B}})^{2}\geq\sup\{\phi(E(a^{*}a)-E(a)^{*}E(a)):\phi\in\mathcal{S}_{{\mathcal{A}}},\ E\text{ is a conditional expectation from }{\mathcal{A}}\text{ to }{\mathcal{B}}\},$
(where $\sup(\emptyset)=-\infty$).
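As a small numerical illustration of (15), consider ${\mathcal{A}}=\mathbb{M}_{4}({\mathbb{R}})$, ${\mathcal{B}}$ the block-diagonal subalgebra with two $2\times 2$ blocks, and $E$ the pinching onto the blocks, which is a conditional expectation (a sketch with these choices of ours; it tests a single fixed $E$, hence only a lower estimate of the supremum in (15)):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n, k = 4, 2                      # M_4 with two 2x2 diagonal blocks
a = rng.standard_normal((n, n))

def pinch(x):
    """E(x): keep the two diagonal 2x2 blocks and zero out the rest."""
    e = np.zeros_like(x)
    e[:k, :k] = x[:k, :k]
    e[k:, k:] = x[k:, k:]
    return e

# sup over states of the positive element E(a*a) - E(a)*E(a) is its top eigenvalue
c = pinch(a.T @ a) - pinch(a).T @ pinch(a)
rhs = np.linalg.eigvalsh(c).max()

# dist(a, B): minimize the spectral norm of a - b over block-diagonal b
def obj(p):
    b = np.zeros((n, n))
    b[:k, :k] = p[:4].reshape(k, k)
    b[k:, k:] = p[4:].reshape(k, k)
    return np.linalg.norm(a - b, 2)

dist = minimize(obj, x0=np.zeros(8), method="Nelder-Mead",
                options={"maxiter": 20000, "fatol": 1e-10}).fun
print(dist ** 2 >= rhs - 1e-6)   # True: the lower bound (15) holds
```

Note that the optimizer can only overestimate the true distance, so the check is indicative rather than tight.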
###### Remark 4.2.
In the case ${\mathcal{B}}={\mathbb{C}}1_{{\mathcal{A}}}$, equality holds in
(15). To see this, let $\langle a,{\mathbb{C}}1_{{\mathcal{A}}}\rangle$ be the
subspace generated by $a$ and $1_{{\mathcal{A}}}$. Let
$\lambda_{0}1_{{\mathcal{A}}}$ be a best approximation to $a$ in
${\mathbb{C}}1_{{\mathcal{A}}}$. We define $\tilde{E}:\langle
a,{\mathbb{C}}1_{{\mathcal{A}}}\rangle\rightarrow{\mathbb{C}}1_{{\mathcal{A}}}$
as $\tilde{E}(a+\lambda
$1_{{\mathcal{A}}})=(\lambda_{0}+\lambda)1_{{\mathcal{A}}}$. For any $c\in{\mathcal{A}}$, the best approximation to $c$ in ${\mathbb{C}}1_{{\mathcal{A}}}$ has norm at most $\|c\|$. Since $(\lambda_{0}+\lambda)1_{{\mathcal{A}}}$ is the best approximation to $a+\lambda 1_{{\mathcal{A}}}$, we get that $\|\tilde{E}\|=1$. By the Hahn-Banach theorem, there exists an extension $E$ of $\tilde{E}$ of norm $1$. By
Corollary II.6.10.3 of [5], $E$ is a conditional expectation. By Theorem 1.1,
there exists $\phi\in\mathcal{S}_{{\mathcal{A}}}$ such that ${\mathop{\rm
dist}}(a,{\mathcal{B}})^{2}=\phi(a^{*}a)-|\lambda_{0}|^{2}$ and
$\phi(a)=\lambda_{0}=\phi(E(a))$. Since $\phi\circ E=\phi$, we get the
required state for which equality in (15) holds.
###### Remark 4.3.
It would be very interesting to find a counterexample to equality in (15) when
${\mathcal{B}}\neq{\mathbb{C}}1_{{\mathcal{A}}}$.
## Acknowledgments
We would like to thank Sneh Lata and Ved Prakash Gupta for many useful
discussions. We would also like to acknowledge several discussions with Amber
Habib, which helped us to understand the geometric ideas behind the theorems.
The research of the first-named author is supported by INSPIRE Faculty Award
IFA14-MA-52 of DST, India, and by Early Career Research Award ECR/2018/001784
of SERB, India.
## References
* [1] L. Arambašić, R. Rajić, The Birkhoff-James orthogonality in Hilbert $C^{*}$-modules, Linear Algebra Appl., 437 (2012), 1913–1929.
* [2] K. M. R. Audenaert, Variance bounds, with an application to norm bounds for commutators, Linear Algebra Appl., 432 (2010), 1126–1143.
* [3] R. Bhatia, P. Šemrl, Orthogonality of matrices and some distance problems, Linear Algebra Appl., 287 (1999), 77–85.
* [4] T. Bhattacharyya, P. Grover, Characterization of Birkhoff-James orthogonality, J. Math. Anal. Appl., 407 (2013), 350–358.
* [5] B. Blackadar, Operator Algebras - Theory of $C^{*}$-Algebras and von Neumann Algebras, Springer-Verlag, Berlin, 2006.
* [6] J. B. Conway, A Course in Functional Analysis, Springer, New York, 1990.
* [7] K. R. Goodearl, Notes on Real and Complex $C^{*}$-Algebras, Shiva Publishing Ltd., Cambridge, 1982.
* [8] P. Grover, Orthogonality to matrix subspaces, and a distance formula, Linear Algebra Appl., 445 (2014), 280–288.
* [9] P. Grover, S. Singla, Birkhoff-James orthogonality and applications: A survey, Operator Theory, Functional Analysis and Applications, Oper. Theory Adv. Appl., 282, to appear.
* [10] D. J. Kečkić, Orthogonality and smooth points in $C(K)$ and $C_{b}(\Omega)$, Eurasian Math. J., 3 (2012), 44–52.
* [11] A. Mal, K. Paul, Birkhoff-James orthogonality to a subspace of operators defined between Banach spaces, J. Operator Theory, to appear.
* [12] K. Paul, Translatable radii of an operator in the direction of another operator II, Math. Slovaca, 60 (2010), 121–128.
* [13] M. A. Rieffel, Leibniz seminorms and best approximation from $C^{*}$-subalgebras, Sci. China Math., 54 (2011), 2259–2274.
* [14] M. A. Rieffel, Standard deviation is a strongly Leibniz seminorm, New York J. Math., 20 (2014), 35–56.
* [15] I. Singer, Best Approximation in Normed Linear Spaces by Elements of Linear Subspaces, Springer-Verlag, Berlin, 1970.
* [16] J. P. Williams, Finite operators, Proc. Amer. Math. Soc., 26 (1970), 129–136.
# Heating up decision boundaries:
isocapacitory saturation, adversarial
scenarios and generalization bounds
Bogdan Georgiev
Fraunhofer IAIS, ML2R
<EMAIL_ADDRESS>
Lukas Franken
Fraunhofer IAIS, ML2R, University of Cologne
<EMAIL_ADDRESS>
Mayukh Mukherjee
IIT Bombay
<EMAIL_ADDRESS>
###### Abstract
In the present work we study classifiers’ decision boundaries via Brownian
motion processes in ambient data space and associated probabilistic
techniques. Intuitively, our ideas correspond to placing a heat source at the
decision boundary and observing how effectively the sample points warm up. We
are largely motivated by the search for a soft measure that sheds further
light on the decision boundary’s geometry. En route, we bridge aspects of
potential theory and geometric analysis (Maz’ya (2011); Grigor’Yan & Saloff-
Coste (2002)) with active fields of ML research such as adversarial examples
and generalization bounds. First, we focus on the geometric behavior of
decision boundaries in the light of adversarial attack/defense mechanisms.
Experimentally, we observe a certain capacitory trend over different
adversarial defense strategies: decision boundaries locally become flatter as
measured by isoperimetric inequalities (Ford et al. (2019)); however, our more
sensitive heat-diffusion metrics extend this analysis and further reveal that
some non-trivial geometry invisible to plain distance-based methods is still
preserved. Intuitively, we provide evidence that the decision boundaries
nevertheless retain many persistent "wiggly and fuzzy" regions on a finer
scale.
Second, we show how Brownian hitting probabilities translate to soft
generalization bounds which are in turn connected to compression and noise
stability (Arora et al. (2018)), and these bounds are significantly stronger
if the decision boundary has controlled geometric features.
## 1 Introduction and background
The endeavor to understand certain geometric aspects of decision problems has
led to intense research in statistical learning. These range from the study
of data manifolds, through landscapes of loss functions to the delicate
analysis of a classifier’s decision boundary. In the present work we focus on
the latter. So far, a wealth of studies has analyzed the geometry of decision
boundaries of deep neural networks (DNN), reaching profound implications in
the fields of adversarial machine learning (adversarial examples), robustness,
margin analysis and generalization. Inspired by recent isoperimetric results
and curvature estimates (Ford et al. (2019); Moosavi-Dezfooli et al. (2019);
Fawzi et al. (2016)), we attempt to provide some new aspects of decision
boundary analysis by introducing and studying a corresponding diffusion-
inspired approach.
In this note the guiding idea is to place a heat source at the classifier’s
decision boundary and estimate its size/shape in terms of the amount of heat
the boundary is able to emit within a given time (Fig. 1). The goal is to
extract geometric information from the behavior of heat transmission. This
technique of heat content seems well-known within capacity/potential theory
and has led to a variety of results in spectral analysis relating heat
diffusion and geometry, Jorgenson & Lang (2001); Grigor’Yan & Saloff-Coste
(2002); Maz’ya (2011). However, working with such heat diffusion directly in
terms of the corresponding differential equations is impractical. To this end,
we note that, due to Feynman-Kac duality, the heat estimates are convertible
to Brownian motion hitting probabilities. Thus we circumvent the need for
solving intractable differential equations and instead are able to employ a
straightforward Monte-Carlo sampling scheme in the ambient data space (Section
3).
#### Background on defense training
We apply the above analysis in the context of adversarial machine learning
(Section 4) where one studies the interaction between an adversary and a ML
system. One of the goals of the subject is to design attack/defense training
strategies improving the robustness of a given ML model - in the present work
we are interested in how adversarial/noise defense training are reflected
geometrically. Many different metrics to estimate robustness have been
proposed: on one hand, there is adversarial robustness (the probability that
error samples lie very near a given data point $x$); on the other hand, there
is corruption robustness (the probability of getting an error sample after
perturbing a given data point $x$ with some specified noise). In our context,
heat diffusion naturally suggests a capacitory robustness metric: this metric
is built upon the probability that Brownian motion started at a given data
point $x$ will hit error samples within a given time window. One can perceive
this metric as a combination of adversarial and noise robustness (Brownian
motion has continuous paths and specified stopping time determined by boundary
impact). In this perspective, our work is aligned with studies of other
robustness metrics and curvature results (cf. Fawzi et al. (2016) for a "semi-
random" projection robustness and relations to curvature). We study the
capacitory metric on the well-known CIFAR10 and MNIST datasets and observe
that defense training techniques may either yield a certain (although not
substantial) decrease (noise training) or fail to have a significant effect on
continuous Brownian attacks overall. Surprisingly, in both cases the studied
capacitory metric does not converge to the corresponding value as in the case
of a flat decision boundary. Due to our comparison statements and curvature
considerations, this means that locally around clean data points the geometry
is in general flattened out but may still retain complexity and substantial
areas of (small) non-vanishing curvature. In other words, from the point of
view of our heat diffusion metrics, decision boundaries locally exhibit non-
flat behaviour.
Figure 1: Heating up a planar decision boundary of a 5-layer MLP over time.
The amounts of radiated heat reflect the geometry of the decision boundary:
size, density, curvature.
#### Background on generalization estimates
Finally, we observe that the collected heat/hitting-probability metrics can
further be used to obtain generalization bounds where, in a nutshell, one
evaluates the performance of a model on unseen data in terms of the
performance over a given sampled data, the model’s expressiveness, dimension,
etc. In this regard, we view decision boundary heat diffusion traits as an
indicator of how noise-stable a given model is - this relates Brownian hitting
bounds with recent compression-based generalization techniques in the spirit
of Arora et al. (2018); Suzuki et al. (2018; 2020). More precisely, we proceed
in two steps: first, we construct a "smaller" compressed model that is almost
equivalent to the initial one in an appropriate heat-theoretic way; second, we
obtain generalization estimates for the smaller model in terms of the decision
boundary hitting probabilities (computed on the empirical dataset).
Furthermore, the bounds are significantly improved under additional geometric
assumptions on the decision boundary of the initial model.
#### Additional related work
The interplay between heat diffusion and geometry lies at the heart of many
topics in geometric analysis and spectral theory (cf. Jorgenson & Lang (2001);
Grigor’Yan (2001) for a far reaching overview). Some direct applications of
heat diffusion techniques to zero sets of eigenfunctions are seen, for
example, in Steinerberger (2014); Georgiev & Mukherjee (2018a; b). The
literature on adversarial ML is vast: to name a few central works in the
field, we refer to Dalvi et al. (2004); Biggio & Roli (2018); Szegedy et al.
(2014). Much effort has been invested in designing and understanding
strategies that will render a model robust to various attacks (e.g. Madry et
al. (2018); Carlini & Wagner (2017)). In particular, the geometry of decision
boundaries has been the focus of many works in the subject leading to
breakthroughs in curvature estimates, boundary flatness and robustness,
schemes for detecting boundary complexity, proposing adversarial
attacks/defenses and diffusion based techniques towards constructing decision
boundary from partially pre-labelled data (e.g. Ford et al. (2019); Fawzi et
al. (2016; 2017; 2018); Dezfooli et al. (2018); Moosavi-Dezfooli et al.
(2019); Karimi et al. (2019); Karimi & Tang (2020); He et al. (2018); Szlam et
al. (2008)). The theory of generalization bounds has formed a classical main
line of ML and statistical inference research (Vapnik (1999)). In this
direction central questions address the generalization properties of heavily
over-parametrized deep neural network models. According to some classical VC-
dimension results such models should overfit the data and generalize poorly.
Extensive research effort has been invested in developing appropriate sharper
techniques to explain generalization of DNN models: on one hand there are the
methods based on norm estimation, whose bounds do not explicitly use the
number of the network’s parameters (see Golowich et al. (2019); Neyshabur et
al. (2015; 2018); Wei & Ma (2019); Bartlett et al. (2017), etc). On the other
hand, recent results based on compression and VC-dimension can lead to sharper
bounds (Arora et al. (2018); Suzuki et al. (2018; 2020)).
## 2 Contributions, context and paper outline
An outline of our essential contributions is given as follows:
1. We analyze decision boundary geometries in terms of novel heat diffusion and Brownian motion techniques with thorough theoretical estimates on curvature and flattening.
2. We show, both theoretically and empirically (in terms of adversarial scenarios on state-of-art DNN models), that the proposed heat diffusion metrics detect the curvature of the boundary; they complement, and in some respects are more sensitive in comparison to, previous methods of boundary analysis - intuitively, our heat driven metrics are sharper on a finer scale and can detect small-scale "wiggles and pockets". As an application, we are thus able to provide evidence that adversarial defenses lead to overall flatter boundaries but, surprisingly, the heat traits do not converge to the corresponding flat case, and hence, finer-scale non-linear characteristics (e.g. "wiggles and pockets") are persistent.
3. Moreover, the preservation of "wiggles and pockets" means that susceptibility to naive Brownian motion attacks is not significantly decreased via adversarial defense mechanisms.
4. Finally, we introduce a novel notion of compression based on heat diffusion and prove that stability of the heat signature translates to compression properties and generalization capabilities.
In terms of context, the present note is well-aligned with works such as Ford
et al. (2019); Dezfooli et al. (2018); Fawzi et al. (2016; 2018). Among other
aspects, these works provide substantial analysis of the interplay between
geometry/curvature and adversarial robustness/defenses - in particular, we use
some of these tools (e.g. isoperimetric saturation) as benchmarks and
sanity checks. However, in contrast, in our work we provide a non-equivalent
technique to address decision boundary geometry for which we provide an
extensive theoretical and empirical evaluation with insights on the
preservation of finer-scale traits. Intuitively, previous distance-based
geometric methods could be considered as a "coarser lens", whereas the present
heat-diffusion tools appear to be much more sensitive. As a large-scale
example, Brownian particles emanating from a point are able to distinguish
between a decision boundary which is a hyperplane at distance $d$ and a
decision boundary which is a cylinder of radius $d$ wrapping around the point.
Our notion of compression is inspired by Arora et al. (2018), and establishes
a connection between the Johnson-Lindenstrauss dimension reduction algorithm
and diffusion techniques. Furthermore, we bridge the proposed heat-theoretic
techniques with generalization bounds in the spirit of Arora et al. (2018);
Suzuki et al. (2020). In particular, this shows that overall lower heat
quantities at sample points imply better generalization traits. A step-wise
road map of the present work is given below:
* (Subsection 3.1) We start by discussing what heat diffusion is and how it is to be evaluated - here we discuss that, via Feynman-Kac duality, one can essentially work with Brownian motion hitting probabilities.
* (Subsections 3.2 and 3.3) We introduce the isocapacitory saturation $\tau$ - a heat-theoretic metric that will be used to estimate boundary flatness. Moreover, here we emphasize the properties of $\tau$ such as relations to curvature (Proposition 3.1) and the novel information obtained from heat theoretic methods in comparison to previous distance-based ones.
* (Subsection 3.4) We compute $\tau$ for certain geometric model cases such as hyperplanes, cones, wedges and "spiky" sets (Lemmas 3.2 and 3.3). This allows us later to evaluate how much a given geometry resembles these model cases.
* (Section 4) Next, we are in a position to evaluate and compare $\tau$ for decision boundaries of DNNs. We experimentally illustrate the effect of adversarial defense mechanisms and noise robustness on $\tau$ (PGD/FGSM on MNIST and CIFAR-10).
* (Section 5) We prove that heat transmission relates to generalization bounds (Propositions 5.1 and 5.2) - in particular, lower levels of heat at sample points yield sharper generalization bounds. Finally, we complete the discussion by informally stating our compression scheme.
* (Appendix) Our methods leverage several tool sets extensively. For this reason our goal in the main text is to only collect and showcase the techniques and results. However, the thorough in-depth analysis is provided in the Appendix where the reader can find all relevant proofs and further background and references.
## 3 Motivation and main ideas
### 3.1 Geometry seen through Brownian motion and Diffusion
#### Notation
Let us consider a dataset $\mathcal{X}:=\{(x_{i},y_{i})\}_{i=1}^{m}$ consisting of feature points $x_{i}\in\mathbb{R}^{n}$ and their corresponding labels $y_{i}\in\{1,\dots,k\}$. Let us suppose that a $k$-label classifier $f:\mathbb{R}^{n}\rightarrow\mathbb{R}^{k}$ labels a point $x\in\mathbb{R}^{n}$ as $\operatorname*{arg\,max}_{i\in[1,k]}f(x)[i]$. The decision boundary of $f$ is given by $\mathcal{N}:=\{x\in\mathbb{R}^{n}\,|\,f(x)\text{ has two or more equal coordinates}\}$ (cf. Fig. 2). Assuming $f$ is sufficiently regular, one
thinks of $\mathcal{N}$ as a collection of hypersurfaces in $\mathbb{R}^{n}$.
Further, for a given target label $y$ we define the target (error) set $E(y)$
as the set of points on which the classifier’s decision is different from $y$,
i.e.
$E(y):=\{x\in\mathbb{R}^{n}\,|\,\operatorname*{arg\,max}_{i\in[1,k]}f(x)[i]\neq y\}$ (here we remark that if $\operatorname*{arg\,max}$ is set-valued at $x$
with several coordinates obtaining the maximum value, then by convention $x$
is contained in $E(y)$). Clearly, if a given data sample
$(x_{0},y_{0})\in\mathcal{X}$ is correctly classified by $f$, then $x_{0}$ is
outside of the error set $E(y_{0})$. Finally, we note that the boundary of
$E(y)$ coincides with $E(y)\cap\mathcal{N}$ and moreover, $\mathcal{N}$ is the
union of the boundaries of $E(y)$ for all labels $y$.
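In code, membership in $E(y)$ reduces to a single comparison on the classifier's output; a minimal sketch (our own helper, with a tolerance argument emulating the tie convention above):

```python
import numpy as np

def in_error_set(logits, y, tol=0.0):
    """x lies in E(y) when some coordinate other than y attains the max of f(x).

    logits = f(x) is the length-k classifier output; ties within tol count
    as errors, matching the arg max convention above.
    """
    top = logits.max()
    winners = np.flatnonzero(logits >= top - tol)
    return not (len(winners) == 1 and winners[0] == y)
```

For a fixed sample $(x_{0},y_{0})$ one then works with the predicate `in_E = lambda p: in_error_set(f(p), y0)`, which the samplers below take as input.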
Figure 2: A planar 2-class dataset that alternates along a circle. (Left) A
depiction of the planar circle-like dataset and the corresponding decision
boundary of a 5-layer MLP. (Center) Brownian paths starting at a data point
$x$ and killed upon impacting the decision boundary/opposite class. (Right)
Set-up of the local Brownian motion analysis with notation on radius $r$,
dimension $n$ and Brownian runtime $t$.
#### Feynman-Kac duality and hitting probabilities
As mentioned in Section 1 we wish to study a heat diffusion process where we
place a heat source at the decision boundary $\mathcal{N}$: formally, this is
given by a heat equation with appropriate initial and boundary conditions
(Appendix, Subsection A.2). Avoiding the impracticality of working with the
differential equations directly, we bring forward the theorem of Feynman-Kac
that relates the solution of the diffusion process to hitting probabilities of
Brownian motion (Appendix, Subsection A.3). By way of notation, for an open
set $U\subseteq\mathbb{R}^{n}$, let $\psi_{U}(x,t)$ denote the probability
that a Brownian particle starting at the point $x$ will enter $U$ within time
$t$. In other words,
$\psi_{U}(x,t):=\operatorname{\mathbb{P}}_{\omega\sim\mathbb{W}}\left[\,\exists\,t_{0}\in[0,t]\ |\ \omega(t_{0})\in U\right],\quad x\in\mathcal{X},$ (1)
where $\omega$ denotes a Brownian motion defined over the interval $[0,t]$
that follows the standard Euclidean Wiener distribution. The amount of heat
that a point $x$ receives from $\mathcal{N}$ within time $t$ is comparable to
the hitting probability that a Brownian particle starting at $x$ will impact
the boundary within time $t$ (cf. Fig. 2). Provided that $x$ is correctly
classified this is equivalent to the probability of impacting the decision
boundary. In general, we evaluate $\psi_{E(y)}(x,t)$ (which we often denote by
$\psi(x,t)$ by minor abuse of notation) through direct sampling; however, in
some model cases, e.g. $E(y)$ being a half-space, a spherical shell or a
conical set, $\psi(x,t)$ has a concise closed form (Subsection 3.4 below) that
can be evaluated analytically. This allows us to easily measure deviations and
compare the heat imprint of $\mathcal{N}$ to particular model cases.
#### Local analysis and set-up
As mentioned above our analysis is local. For each clean data point $x$ we
consider a ball $B(x,r)$ centered at $x$ with radius $r$ and perform all our
computations there. In particular, a free Brownian motion starting at $x$ and
defined over a maximal time interval $[0,t]$ will on average travel a distance
of $\sqrt{nt}$ (Appendix, Subsection A.1). This suggests to couple $r$ and the
maximal Brownian running time $t$ via $r=\sqrt{nt}$ (cf. Fig. 2), so that, if
not stopped by boundary impact, Brownian motion will, on average, reach the
sphere $\partial B(x,r)$ by its maximal stopping time.
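A minimal Monte-Carlo sketch of this set-up follows (our own discretization; the paper's actual numerics are detailed in Appendix, Subsection C.3): simulate Euler-discretized Brownian paths from $x$, kill each path at its first visit to the error set, and average. Time discretization can only miss crossings between steps, so the estimate is slightly biased downwards.

```python
import numpy as np

def hitting_probability(in_E, x, t, n_paths=1000, n_steps=200, rng=None):
    """Monte-Carlo estimate of psi(x, t) from eq. (1).

    in_E is a predicate p -> bool for membership in the error set E(y),
    e.g. built from in_error_set above.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, dt = x.shape[0], t / n_steps
    hits = 0
    for _ in range(n_paths):
        pos = x.copy()
        for _ in range(n_steps):
            pos = pos + np.sqrt(dt) * rng.standard_normal(n)  # dW ~ N(0, dt I_n)
            if in_E(pos):   # kill the path at first boundary impact
                hits += 1
                break
    return hits / n_paths

# the coupling from the text: maximal runtime t = r**2 / n for a ball of radius r
```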
### 3.2 An isoperimetric and isocapacitory perspective
#### Isoperimetric results
Isoperimetric estimates will be the starting baseline (Ford et al. (2019)) to
detect low levels of curvature and boundary flatness. For some background in
isoperimetric results we refer to (Appendix, Subsection A.4). Let us start by
defining the relative error volume
$\mu(x,r):=\frac{\operatorname{\operatorname{Vol}}(E(y)\cap
B(x,r))}{\operatorname{\operatorname{Vol}}(B(x,r))}.$ (2)
We recall the so-called Gaussian isoperimetric inequality Borell (1975); Ford
et al. (2019):
$\tilde{d}\leq-\frac{r\,\Phi^{-1}(\mu)}{\sqrt{n}},\quad\mu\leq 1/2,$ (3)
where $\Phi^{-1}$ denotes the inverse standard normal c.d.f. and where
$\tilde{d}=d(\tilde{x},\mathcal{N}_{f})$ denotes the median distance with
$\tilde{x}$ varying normally and concentrated in the ball $B(x,r)$, and
$\tilde{d}=0$ if $\mu\geq 1/2$. Here the isoperimetric result is rigid in the
sense that equality in (3) occurs only if $E(y)$ is a half-space. In Ford et
al. (2019) the authors demonstrate that defense training mechanisms lead to
decision boundaries that saturate this isoperimetric inequality, i.e. in this
isoperimetric sense, the decision boundary $\mathcal{N}$ becomes locally
closer to being a flat hyperplane. We define the ratio between the LHS and RHS
in eq. (3) as the isoperimetric saturation.
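Both quantities entering (2)-(3) are straightforward to estimate by sampling; the sketch below (our own helpers, reusing the `in_E` predicate from above) draws uniform points in $B(x,r)$ for $\mu$ and evaluates the saturation ratio of (3) via the inverse normal c.d.f.:

```python
import numpy as np
from scipy.stats import norm

def relative_error_volume(in_E, x, r, n_samples=2000, rng=None):
    """Monte-Carlo estimate of mu(x, r) from eq. (2)."""
    rng = np.random.default_rng() if rng is None else rng
    n = x.shape[0]
    # uniform in the ball: uniform direction, radius distributed as r * U^(1/n)
    dirs = rng.standard_normal((n_samples, n))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    radii = r * rng.random(n_samples) ** (1.0 / n)
    pts = x + radii[:, None] * dirs
    return np.mean([in_E(p) for p in pts])

def isoperimetric_saturation(median_dist, mu, r, n):
    """Ratio of the LHS to the RHS of the Gaussian isoperimetric inequality (3).

    median_dist: the median distance d(x_tilde, N), estimated separately.
    """
    if mu >= 0.5:
        return 0.0                        # convention: median distance is zero
    rhs = -r * norm.ppf(mu) / np.sqrt(n)  # norm.ppf = inverse standard normal c.d.f.
    return median_dist / rhs
```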
#### Isocapacitory results
In our context of hitting probabilities (eq. (1)), results in potential theory
allow us to prove isocapacitory bounds which are similar in spirit to
isoperimetric bounds. More precisely one has:
$\mu(x,r)\leq c_{n}\,\psi(x,t)^{\frac{n}{n-2}},$ (4)
where $c_{n}$ is an appropriate constant depending on the dimension $n$, and
$r=\sqrt{nt}$. The proof relies on potential theory tools (capacity) and can
be found in Appendix, Proposition A.3. Motivated by the above isoperimetric
saturation results, one of our main goals is to study how $\mu$ compares to
$\psi(x,t)$. To this end we define the isocapacitory saturation $\tau$ as
$\tau(x,r):=\frac{\psi(x,t)^{\frac{n}{n-2}}}{\mu(x,r)}.$ (5)
The basic guiding heuristic is that high values of $\tau$ indicate that $E(y)$
has a very low volume in comparison to its boundary size and respective heat
emission. This is the case whenever $E(y)$ is a very thin region with a well-
spread boundary of large surface area - e.g. a set that resembles thin spikes
entering the ball $B(x,r)$. In contrast, lower values of $\tau$ should
indicate a saturation of the isocapacitory inequality (4) and imply that
$E(y)$ has a volume that is more comparable to its heat emission - e.g.
thicker sets with tamer boundary. To quantify this intuition, we explicitly
evaluate $\tau$ for some model scenarios (Subsection 3.4).
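Combining the two estimators sketched above gives $\tau$ directly (a sketch; valid for $n\geq 3$, where the exponent in (4)-(5) makes sense, and assuming $\mu>0$, i.e. the ball actually meets the error set):

```python
def isocapacitory_saturation(in_E, x, r, rng=None):
    """tau(x, r) from eq. (5), with the runtime coupled as t = r**2 / n."""
    n = x.shape[0]
    psi = hitting_probability(in_E, x, t=r ** 2 / n, rng=rng)
    mu = relative_error_volume(in_E, x, r, rng=rng)
    return psi ** (n / (n - 2)) / mu
```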
### 3.3 The novel information given by heat diffusion
#### Distances vs. hitting probabilities
As discussed above, several works investigate decision boundaries in terms of
distance-based analysis (Ford et al. (2019); Fawzi et al. (2016); Karimi &
Tang (2020); Karimi et al. (2019)). We remark that our analysis based on
hitting probabilities augments and extends the mentioned distance-based
approaches. Although related, the two concepts are not equivalent. A guiding
example is given by $E(y)$ being a dense collection of "thin needles"
(Appendix, Subsections A.4, A.5); in such a scenario the average distance to
$\mathcal{N}$ is very small, as well as the chance a Brownian particle will
hit $\mathcal{N}$. On the other hand, if $\mathcal{N}$ is a dense collection
of hyperplanes, the average distance to $\mathcal{N}$ is again small, but
Brownian motions almost surely will hit $\mathcal{N}$. In this sense,
evaluating hitting probabilities yields a different perspective than is
available from distance-based analysis and sheds further light on the size and
shape of the decision boundary, particularly with regards to its capacity and
curvature features.
#### Isoperimetric vs. isocapacitory saturation
Another demonstration of the additional information obtained through $\tau$ is
given by almost flat shapes in higher dimensions that saturate isoperimetric
bounds (Appendix, Subsection A.4). In these scenarios small geometric
deformations can have a significant impact on $\tau$, and at the same time
almost preserve isoperimetric bounds. In other words $\tau$ provides an
additional level of geometric sensitivity. We discuss this further in Section
4.
#### The effect of curvature
The interplay between curvature of the decision boundary and robustness has
been well studied recently, e.g. Fawzi et al. (2016); Moosavi-Dezfooli et al.
(2019) where various forms of robustness (adversarial, semi-random and their
ratio) have been estimated in terms of the decision boundary’s curvature.
Intuitively, the differential geometric notion of curvature measures how a
certain shape is bent. The precise definition of curvature involves taking
second-order derivatives which is in most cases impractical. However, in our
context we show that the isocapacitory saturation $\tau$ implies certain
curvature bounds. These statements exploit relations between curvature and
volume and lead to pointwise and integral curvature bounds. As an
illustration, we have:
###### Proposition 3.1 (Informal).
Let $(x,y)\in\mathcal{X}$ be a data sample. Then, provided that the distance
$d(x,\mathcal{N})$ is kept fixed, larger values of $\tau$ locally imply larger
pointwise/integral curvature values.
A deeper analysis with formal statements and additional details is provided in Appendix, Subsection A.6. The advantages that curvature yields for some types of compression schemes and generalization bounds are also investigated in depth in Appendix, Section B.
### 3.4 Model decision boundaries: hyperplanes, wedges, cones and “spiky” sets
Given a certain geometric shape, one is often faced with questions as to how
flat or spherical the given geometry is. To this end, a central technique in
geometric analysis is comparing to certain model cases - e.g. a sphere, plane,
saddle, etc. After having introduced $\tau$ and its basic traits we now
evaluate it for several model cases (flat hyperplanes, wedges, cones, balls
and "spiky" sets). Each of these model cases illustrates a distinguished
$\tau$-behaviour: from "tame" behaviour (hyperplanes, balls) to explosion
(thin cylinders, "needles and spiky" sets). Hence, given a decision boundary, comparison with these model cases lets one quantify how far the surface is from each of them. We start by discussing the flat linear case:
###### Lemma 3.2.
Let $(x,y)$ be a data sample and suppose that $E(y)$ forms a half-space at a
distance $d$ from the given data point $x\in\mathbb{R}^{n}$. Then
$\tau(x,r)=\left[2\,\Phi\left(-\frac{d}{\sqrt{t}}\right)\right]^{\frac{n}{n-2}}\,\frac{\operatorname{Vol}\left(B(x,r)\right)}{V_{n}(d,r)},$
(6)
where $\Phi(s)$ is the c.d.f. for the standard normal distribution, and
$V_{n}(d,r)$ is the volume of the smaller $n$-dimensional solid spherical cap
cut-off at distance $d$ from the center of a ball of radius $r$.
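A numerical evaluation of (6) reads as follows (a sketch; the ratio $V_{n}(d,r)/\operatorname{Vol}(B(x,r))$ is expressed through the regularized incomplete beta function, a standard cap-volume identity that we assume here, with $n\geq 3$ and $0\leq d\leq r$):

```python
import numpy as np
from scipy.stats import norm
from scipy.special import betainc

def tau_flat(n, r, d):
    """Closed-form tau of Lemma 3.2 for a half-space at distance d, with t = r**2 / n."""
    t = r ** 2 / n
    psi = 2.0 * norm.cdf(-d / np.sqrt(t))                                 # hitting prob.
    cap_fraction = 0.5 * betainc((n + 1) / 2.0, 0.5, 1.0 - (d / r) ** 2)  # mu
    return psi ** (n / (n - 2)) / cap_fraction
```

Such closed forms supply the flat-case reference values against which the empirical $\tau$ of a trained network is compared below.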
The computation uses standard reflection principle techniques. Figure 3
depicts an experimental discussion on Lemma 3.2. Another illuminating model is
given by a "spiky" set - e.g. a thin cylinder, which is in some sense the
other extreme. We have
###### Lemma 3.3 (Appendix, Subsection A.5).
Suppose that $E(y)$ is a cylinder of height $h$ and radius $\rho$ that enters
the ball $B(x,r)$. Then $\tau\nearrow\infty$ as $\rho\searrow 0$.
Further comparison results for additional model cases are given in Appendix,
Subsection A.5.
Figure 3: A visual depiction of decision boundaries and saturation $\tau$ for
5-layer MLP models with 20 and 100 hidden units trained over a planar
"circular" dataset (depicted in grey). For each data sample $x$ the ball
$B(x,r)$ is selected so that the relative volume $\mu(x,r)$ is $0.1$.
According to Lemma 3.2 a flat decision boundary would correspond to
$\tau\approx 3.32$. (Left) The saturation $\tau$ exhibits a bi-modal behaviour
with peaks around the values $3$ and $4.3$. These correspond to data points
squeezed between thin elongated regions that locally closely resemble the flat
case, or tinier "pockets" with higher curvature, respectively. (Right) The
saturation $\tau$ is more closely concentrated around $4.3$ and, accordingly,
the decision boundary mainly consists of smaller "pockets" of higher
curvature.
## 4 Adversarial Attacks and Defenses
#### Background and set-up
We now analyze how strategies for improving adversarial and noise shift
robustness affect the decision boundary’s heat diffusion properties. In
particular, we keep track of Brownian hitting probabilities $\psi$ and the
isocapacitory saturation $\tau$. On one hand, we can view $\psi$ as a
capacitory robustness metric against continuous interpolation attacks given by
Brownian noise (see also Section 1). On the other hand, Subsection 3.4
indicates how the behaviour of $\tau$ reveals deviation from the case of a
flat or "spiky" and curvy decision boundary. Our empirical analysis uses the
well-known CIFAR10 and MNIST datasets (details, preprocessing and enhancements
are given in Appendix, Subsection C.5). For CIFAR10, we used the Wide-
ResNet-28-10 (Zagoruyko & Komodakis (2016); Ford et al. (2019)) and ResNets
with 32, 44 and 56 layers (He et al. (2016)). For MNIST, we selected a LeNet-5
and additional CNN architectures. Motivated by previous work (e.g. Ford et al.
(2019)), we perform 3 types of training: ordinary stochastic gradient descent
(ADAM optimization), training with Gaussian noise data augmentation and
training with adversarial defense strategies (FGSM and PGD methods, see also
Appendix, Section C.4 for details and remarks on robustness). A detailed outline of the numerics behind Brownian motion sampling, isoperimetric/isocapacitory saturation and relative volume sampling is given in Appendix, Subsection C.3.
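The radius selection used throughout the experiments (choosing $r$ so that $\mu(x,r)$ hits a target such as $1\%$) can be sketched as a bisection on top of the `relative_error_volume` estimator from Subsection 3.2 (our own sketch; it assumes $\mu$ is increasing in $r$ on the bracket):

```python
def radius_for_volume(in_E, x, target_mu=0.01, r_lo=1e-4, r_hi=10.0,
                      iters=20, rng=None):
    """Bisect for the radius r at which mu(x, r) matches target_mu."""
    for _ in range(iters):
        r_mid = 0.5 * (r_lo + r_hi)
        mu = relative_error_volume(in_E, x, r_mid, rng=rng)
        if mu < target_mu:
            r_lo = r_mid
        else:
            r_hi = r_mid
    return 0.5 * (r_lo + r_hi)
```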
Figure 4: Results for a Wide-ResNet 28-10 and a LeNet-5 trained on CIFAR10 and
MNIST, respectively. Different boxplots correspond to different training
strategies: ordinary, adversarial, with noise or with a Brownian augmentation.
Data is collected over 1000 test data points, where each radius $r$ is
selected so that the relative error volume $\mu$ equals $1\%$. Left-to-right
the columns correspond to the isocapacitory saturation $\tau$, the radius $r$
realizing $\mu=1\%$ and the isoperimetric saturation. Finally, red punctured
horizontal lines indicate the corresponding values for flat decision
boundaries.
#### Analysis of results
Recent results (Ford et al. (2019); Schmidt et al. (2017)) have shown
qualitative differences between the adversarially robust boundaries of MNIST
and CIFAR-10, which also impact the experimental findings in this work. In
short, a robust decision boundary is less spiky for MNIST than for CIFAR. For
more details we refer to Appendix, Subsection C.2. In
Fig. 4 we collect the statistics of the WRN and LeNet models on CIFAR10 and
MNIST, respectively. On one hand, we confirm previous results (Ford et al.
(2019); Fawzi et al. (2016)) implying the "flattening-of-boundary" phenomenon:
noisy and adversarial training appear to improve and saturate isoperimetric
bounds. Furthermore, the ball $B(x,r)$ realizing relative error volume $\mu$
of $1\%$ is on average scaled up for adversarial and, especially, noisy
training. On the other hand, an intriguing behaviour is observed for the
decision boundary’s heat diffusion traits. The isocapacitory saturation $\tau$ does not appear to concentrate around the value corresponding to a flat hyperplane: defense training strategies, both FGSM- and PGD-based, do not force $\tau$ to converge to the flat-boundary value (shown as a horizontal red punctured line). Put differently, the chance that a continuous Brownian perturbation will find an adversarial example (scaled to the appropriate ball $B(x,r)$) is not significantly altered on average (see Appendix, Subsection C.7 for a visual reference). However, it appears that noisy training consistently delivers lower values of $\tau$ - intuitively, this is expected as the decision boundary is adjusted in terms of adding Gaussian "blobs", thus naturally becoming rounder. Geometrically, the sensitivity of $\tau$ to small perturbations in
almost flat surfaces (Subsection 3.2) indicates that locally around clean
(unperturbed) data points an amount of curvature and more complex geometry are
still retained. Of course, this amount is not as large as to violate
saturation of isoperimetric bounds and robustness comparability results in the
sense of Fawzi et al. (2016). For example, in the case of CIFAR10 a simple
geometric model surface that has a similar $\tau$-behaviour (as for the
adversarial and noisy training) is given in (Appendix, Subsections A.4, A.5):
considering a data point $x$, an almost flat decision boundary that is
concavely bent w.r.t. $x$ with approximate curvature of $\approx 1/(12.3r)$.
These observations reveal finer properties concerning decision boundary
flattening due to defense training: in particular, noisy training appears to
flatten decision boundaries and slightly bend them concavely w.r.t. the clean
data points. Further results for ResNet models and CNNs are provided in
(Appendix, Subsection C.7).
#### Spiky sets and control on $\tau$
In Fig. 4 large outlying values of $\tau$ are filtered out. However, values of
$\tau$ larger than $10$ can occupy up to $1.3\%$ for ordinary training and
$2.1\%$ and $2.6\%$ for adversarial and noisy training, respectively. It follows that
the geometry of high-dimensional decision boundaries does not admit too many
high-curvature (see also Proposition 3.1) spiky regions of low volume and high
heat emission (high surface area) in the sense of Subsections 3.2, 3.4.
However, it appears that defense training can increase the number of such
spiky regions: one might explain such behaviour by seeing defense training as
a bundle of additional geometric conditions that sometimes are not able to
agree and thus lead to a more degenerate (singular) geometry. Further, with
respect to the initial analysis of Fig. 4, a natural question is whether one
can control $\tau$ along with the isoperimetric saturation - ultimately, one
hopes to design better decision boundaries (flatter, or appropriately curved, cf. Moosavi-Dezfooli et al. (2019)), eventually leading to more robustness.
However, getting a tight control on $\tau$ could be a difficult task. It is,
indeed, possible to obtain some basic grip on $\tau$: we trained a LeNet-5
architecture on MNIST that exhibited significantly increased $\tau$ values and
preserved isoperimetric saturation (statistics are shown as the rightmost
boxplot in Fig. 4). Similar to many adversarial defenses, the training
consisted in augmenting the dataset with attacks given in this case by
Brownian paths. However, it seems difficult to force $\tau$ to concentrate
around the flat-case value, as well as to obtain competitive robustness of the
model. On one hand, this is explained via the need to control heat diffusion
through Brownian motion - the mentioned naive method is not able to capture
the hitting properties sufficiently well; on the other hand, as discussed
above heat diffusion properties can be far more sensitive than isoperimetric
saturation w.r.t. minor geometric perturbations.
## 5 Generalization bounds in terms of hitting probabilities
#### Compression, noise stability and generalization
Recent advances (Arora et al. (2018); Suzuki et al. (2018; 2020)) indicate
that generalization can be related to compression and noise stability. The
guiding strategy is: (1) a large DNN $f$ that is stable against (layer-wise)
noise injections admits an effective compression to a simpler model
$\tilde{f}$ which is almost equivalent to $f$. Intuitively, the noise
stability absorbs the defects introduced by compression; (2) concentration
results imply generalization bounds for $\tilde{f}$. Admittedly, the
generalization estimate is obtained initially for the smaller model; however,
it is also possible to "transfer" the bound to $f$ (see the discussion at the
end of this Section).
In this context a simple observation is that Brownian motion and its hitting
probabilities can be related, respectively, to noise injection and margins of
classification: small hitting probability of the decision boundary should
indicate "margin-safety" and allow to compress parameters of the model more
aggressively. However, in contrast to injecting normal noise, Brownian motion,
with stopping time given by boundary impacts, is more delicate and requires
further analysis of the decision boundary. In the following we propose a
theoretical framework that, we hope, will augment and produce further insights
into the interplay between noise stability and generalization bounds. The
statements are inspired by the results in Arora et al. (2018); Suzuki et al.
(2020) and we follow the notation therein. First, we propose several options
for goodness of approximation (compression) in the sense of heat diffusion
(Appendix, Subsection B.1). We give the following definition:
###### Definition 1.
Given a positive real number $\eta$, a classifier $g$ is said to be an
$\eta-$compression of $f$ if
$\left|\psi_{E_{g}(y)}(x,\gamma^{2})-\psi_{E_{f}(y)}(x,\gamma^{2})\right|<\eta$
(7)
for all points $x$ in the training sample, labels $y$ and real numbers
$\gamma$.
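Empirically, Definition 1 can only be tested on a finite grid of time scales $\gamma^{2}$; a sketch of such a surrogate check (our own, built on hitting-probability estimators like the one in Subsection 3.1):

```python
def is_eta_compression(psi_f, psi_g, samples, labels, gammas, eta):
    """Test eq. (7) on a finite grid of gammas.

    psi_f(x, y, t) and psi_g(x, y, t) estimate the hitting probabilities of
    the error sets E_f(y) and E_g(y), e.g. via hitting_probability above.
    """
    for x, y in zip(samples, labels):
        for gamma in gammas:
            if abs(psi_g(x, y, gamma ** 2) - psi_f(x, y, gamma ** 2)) >= eta:
                return False
    return True
```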
Now, as mentioned above we have the following generalization bounds for the
compressed model:
###### Proposition 5.1.
Let us suppose that $f$ is approximable by $g$ in the sense of Definition 1.
Here $g\in A$, where $A$ is a family of classifiers
$\mathbb{R}^{n}\rightarrow\mathbb{R}$ parametrized by $q$ parameters assuming
$r$ discrete values. For a classifier $h$, let $C_{h}(x,y,t)$ be the event
that a Brownian path starting at $x$ hits $E_{h}(y)$ within time $t$. Then for
$t_{1}\leq t_{2}\leq T$ we have
$L_{0}(g)\leq\operatorname{\mathbb{P}}_{(x,y)\sim D}\left(C_{g}(x,y,t_{1})\right)\leq\operatorname{\mathbb{P}}_{(x,y)\sim\mathcal{X}}\left(C_{f}(x,y,t_{2})\right)+\eta+O\left(\sqrt{\frac{q\log r}{m}}\right)$ (8)
with probability at least $1-e^{-q\log r}$ and $L_{0}$ denoting the expected
loss over the true data distribution.
Taking $t_{2}\to 0$ in (8), one recovers the empirical loss $\hat{L}_{0}(f)$
on the RHS. In other words, the generalization of the smaller model $g$ is
controlled by hitting probabilities of the initial model $f$ and corrections
related to family capacity. The next natural question is the construction of
$g$. Inspired by Johnson-Lindenstrauss techniques (cf. also Arora et al.
(2018)) we are able to recover the following statement (thorough details are
given in Appendix, Subsections B.5, B.6):
###### Proposition 5.2 (Informal).
Considering a fully connected feed-forward neural network $f$ where some
flatness conditions on the layer decision boundaries are fulfilled, there
exists an $\eta$-compression $g$ in the sense of Def. 1 whose number of
parameters is logarithmically smaller than $f$.
Finally, having the generalization estimates on the smaller model $g$ it is
natural to attempt transferring those to the initial model $f$ \- in Suzuki et
al. (2020) this is achieved via certain local Rademacher complexity and
"peeling" techniques. However, we choose not to pursue these bounds in the
present work and assume the perspective in Arora et al. (2018) that $g$, being
almost equivalent to $f$, provides a reasonable indicator of generalization
capabilities.
## Acknowledgments
We would like to thank our anonymous reviewers whose advice helped improve the
quality of the presentation. We are indebted to Prof. Christian Bauckhage for
his constant encouragement, support and fruitful discussions. We also
sincerely thank Benjamin Wulff for maintaining the outstanding computation
environment at Fraunhofer IAIS - his support and coffee conversations played
an essential role for our empirical analysis. In part, this work was supported
by the Competence Center for Machine Learning Rhine-Ruhr (ML2R) which is
funded by the Federal Ministry of Education and Research of Germany (grant no.
01IS18038B). We gratefully acknowledge this support.
## References
* Arora et al. (2018) Sanjeev Arora, Rong Ge, Behnam Neyshabur, and Yi Zhang. Stronger generalization bounds for deep nets via a compression approach. In _35th International Conference on Machine Learning, ICML 2018_ , 2018. ISBN 9781510867963.
* Bartlett et al. (2017) Peter L. Bartlett, Dylan J. Foster, and Matus Telgarsky. Spectrally-normalized margin bounds for neural networks. In _Advances in Neural Information Processing Systems_ , 2017.
* Biggio & Roli (2018) Battista Biggio and Fabio Roli. Wild patterns: Ten years after the rise of adversarial machine learning. _Pattern Recognition_ , 2018. ISSN 00313203. doi: 10.1016/j.patcog.2018.07.023.
* Borell (1975) Christer Borell. The Brunn-Minkowski inequality in Gauss space. _Inventiones Mathematicae_ , 1975. ISSN 00209910. doi: 10.1007/BF01425510.
* Burago & Zalgaller (1988) Yuriĭ Dmitrievich Burago and Viktor Abramovich Zalgaller. Isoperimetric Inequalities for Various Definitions of Area. In _Geometric Inequalities_. Springer, 1988. doi: 10.1007/978-3-662-07441-1_3.
* Carlini & Wagner (2017) Nicholas Carlini and David Wagner. Towards Evaluating the Robustness of Neural Networks. In _Proceedings - IEEE Symposium on Security and Privacy_ , 2017. ISBN 9781509055326. doi: 10.1109/SP.2017.49.
* Cubuk et al. (2018) Ekin D. Cubuk, Barret Zoph, Dandelion Mane, Vijay Vasudevan, and Quoc V. Le. Autoaugment: Learning augmentation policies from data, 2018.
* Dalvi et al. (2004) Nilesh Dalvi, Pedro Domingos, Mausam, Sumit Sanghai, and Deepak Verma. Adversarial classification. In _KDD-2004 - Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_ , 2004. ISBN 1581138881. doi: 10.1145/1014052.1014066.
* Dezfooli et al. (2018) Seyed Mohsen Moosavi Dezfooli, Alhussein Fawzi, Omar Fawzi, Pascal Frossard, and Stefano Soatto. Robustness of classifiers to universal perturbations: A geometric perspective. In _6th International Conference on Learning Representations, ICLR 2018 - Conference Track Proceedings_ , 2018.
* Fawzi et al. (2016) Alhussein Fawzi, Seyed Mohsen Moosavi-Dezfooli, and Pascal Frossard. Robustness of classifiers: From adversarial to random noise. In _Advances in Neural Information Processing Systems_ , 2016.
* Fawzi et al. (2017) Alhussein Fawzi, Seyed Mohsen Moosavi-Dezfooli, and Pascal Frossard. The Robustness of Deep Networks: A Geometrical Perspective. _IEEE Signal Processing Magazine_ , 2017. ISSN 10535888. doi: 10.1109/MSP.2017.2740965.
* Fawzi et al. (2018) Alhussein Fawzi, Omar Fawzi, and Pascal Frossard. Analysis of classifiers’ robustness to adversarial perturbations. _Machine Learning_ , 2018. ISSN 15730565. doi: 10.1007/s10994-017-5663-3.
* Ford et al. (2019) Nicolas Ford, Justin Gilmer, Nicholas Carlini, and Ekin D. Cubuk. Adversarial examples are a natural consequence of test error in noise. In _36th International Conference on Machine Learning, ICML 2019_ , 2019. ISBN 9781510886988.
* Georgiev & Mukherjee (2018a) Bogdan Georgiev and Mayukh Mukherjee. Nodal geometry, heat diffusion and Brownian motion. _Analysis and PDE_ , 2018a. ISSN 1948206X. doi: 10.2140/apde.2018.11.133.
* Georgiev & Mukherjee (2018b) Bogdan Georgiev and Mayukh Mukherjee. On maximizing the fundamental frequency of the complement of an obstacle. _Comptes Rendus Mathematique_ , 2018b. ISSN 1631073X. doi: 10.1016/j.crma.2018.01.018.
* Golowich et al. (2019) Noah Golowich, Alexander Rakhlin, and Ohad Shamir. Size-independent sample complexity of neural networks. _Information and Inference: A Journal of the IMA_ , 2019. ISSN 2049-8764. doi: 10.1093/imaiai/iaz007.
* Grigor’Yan (2001) Alexander Grigor’Yan. Heat Kernels on Manifolds, Graphs and Fractals. In _European Congress of Mathematics_. 2001. doi: 10.1007/978-3-0348-8268-2_22.
* Grigor’Yan & Saloff-Coste (2002) Alexander Grigor’Yan and Laurent Saloff-Coste. Hitting probabilities for Brownian motion on Riemannian manifolds. _Journal des Mathematiques Pures et Appliquees_ , 2002. ISSN 00217824. doi: 10.1016/S0021-7824(01)01244-2.
* He et al. (2016) K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In _2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_ , pp. 770–778, 2016.
* He et al. (2018) Warren He, Bo Li, and Dawn Song. Decision boundary analysis of adversarial examples. In _6th International Conference on Learning Representations, ICLR 2018 - Conference Track Proceedings_ , 2018.
* Jorgenson & Lang (2001) Jay Jorgenson and Serge Lang. The Ubiquitous Heat Kernel. In _Mathematics Unlimited — 2001 and Beyond_. Springer, 2001. doi: 10.1007/978-3-642-56478-9_34.
* Karimi & Tang (2020) Hamid Karimi and Jiliang Tang. Decision boundary of deep neural networks: Challenges and opportunities. In _WSDM 2020 - Proceedings of the 13th International Conference on Web Search and Data Mining_ , 2020. ISBN 9781450368223. doi: 10.1145/3336191.3372186.
* Karimi et al. (2019) Hamid Karimi, Tyler Derr, and Jiliang Tang. Characterizing the decision boundary of deep neural networks. _ArXiv_ , abs/1912.11460, 2019.
* Kent (1980) John T. Kent. Eigenvalue expansions for diffusion hitting times. _Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete_ , 1980. ISSN 00443719. doi: 10.1007/BF00538895.
* LeCun et al. (1998) Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. _Proceedings of the IEEE_ , 1998. ISSN 00189219. doi: 10.1109/5.726791.
* Madry et al. (2018) Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. In _6th International Conference on Learning Representations, ICLR 2018 - Conference Track Proceedings_ , 2018.
* Maz’ya (2011) Vladimir Maz’ya. _Sobolev spaces with applications to elliptic partial differential equations_. Springer, 2011. ISBN 978-3-642-15564-2.
* Moosavi-Dezfooli et al. (2019) Seyed Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Jonathan Uesato, and Pascal Frossard. Robustness via curvature regularization, and vice versa. In _Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition_ , 2019. ISBN 9781728132938. doi: 10.1109/CVPR.2019.00929.
* Mörters & Peres (2010) Peter Mörters and Yuval Peres. _Brownian motion_. 2010\. ISBN 9780511750489. doi: 10.1017/CBO9780511750489.
* Mu & Gilmer (2019) Norman Mu and Justin Gilmer. Mnist-c: A robustness benchmark for computer vision, 2019.
* Neyshabur et al. (2015) Behnam Neyshabur, Ryota Tomioka, and Nathan Srebro. Norm-based capacity control in neural networks. In _Journal of Machine Learning Research_ , 2015.
* Neyshabur et al. (2018) Behnam Neyshabur, Srinadh Bhojanapalli, and Nathan Srebro. A Pac-Bayesian approach to spectrally-normalized margin bounds for neural networks. In _6th International Conference on Learning Representations, ICLR 2018 - Conference Track Proceedings_ , 2018.
* Schmidt et al. (2017) Ludwig Schmidt, Aleksander Madry, Aleksandar Makelov, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. _arXiv preprint arXiv:1706.06083_ , 2017.
* Steinerberger (2014) Stefan Steinerberger. Lower Bounds on Nodal Sets of Eigenfunctions via the Heat Flow. _Communications in Partial Differential Equations_ , 2014. ISSN 15324133. doi: 10.1080/03605302.2014.942739.
* Suzuki et al. (2018) Taiji Suzuki, Hiroshi Abe, Tomoya Murata, Shingo Horiuchi, Kotaro Ito, Tokuma Wachi, So Hirai, Masatoshi Yukishima, and Tomoaki Nishimura. Spectral-pruning: Compressing deep neural network via spectral analysis. _ArXiv_ , abs/1808.08558, 2018.
* Suzuki et al. (2020) Taiji Suzuki, Hiroshi Abe, and Tomoaki Nishimura. Compression based bound for non-compressed network: unified generalization error analysis of large compressible deep neural network. In _International Conference on Learning Representations_ , 2020. URL https://openreview.net/forum?id=ByeGzlrKwH.
* Szegedy et al. (2014) Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. In _2nd International Conference on Learning Representations, ICLR 2014 - Conference Track Proceedings_ , 2014.
* Szlam et al. (2008) Arthur D. Szlam, Mauro Maggioni, and Ronald R. Coifman. Regularization on graphs with function-adapted diffusion processes. _Journal of Machine Learning Research_ , 9(55):1711–1739, 2008. URL http://jmlr.org/papers/v9/szlam08a.html.
* Taylor (2011) Michael E Taylor. _Partial Differential Equations II: Qualitative Studies of Linear Equations_. Springer, 2011. ISBN 9781441970480.
* Vapnik (1999) Vladimir N. Vapnik. An overview of statistical learning theory, 1999. ISSN 10459227.
* Watson (1944) G.N. Watson. _A Treatise on the Theory of Bessel Functions_. Cambridge University Press, 1944. ISBN 9780521093828.
* Wei & Ma (2019) Colin Wei and Tengyu Ma. Data-dependent sample complexity of deep neural networks via lipschitz augmentation. In _Advances in Neural Information Processing Systems 32_ , pp. 9725–9736. Curran Associates, Inc., 2019.
* Wong et al. (2020) Eric Wong, Leslie Rice, and J. Zico Kolter. Fast is better than free: Revisiting adversarial training. In _International Conference on Learning Representations_ , 2020. URL https://openreview.net/forum?id=BJx040EFvH.
* Zagoruyko & Komodakis (2016) Sergey Zagoruyko and Nikos Komodakis. Wide Residual Networks. In _British Machine Vision Conference 2016, BMVC 2016_ , 2016. doi: 10.5244/C.30.87.
## Appendix A: Hitting estimates, saturation and curvature
### A.1 Brownian motion and Bessel processes
In this Subsection we introduce some basic background on Brownian motion.
###### Definition 2 (Brownian motion).
A real-valued stochastic process $\{\omega(t):t\geq 0\}$ is called a one-dimensional Brownian motion started at $x\in\mathbb{R}$ if the following hold:
* •
$\omega(0)=x$,
* •
the process has independent increments, that is, for $0\leq t_{1}\leq\cdots\leq t_{m}$ the increments $\omega(t_{j})-\omega(t_{j-1})$ for $j=2,\cdots,m$ are independent random variables,
* •
for $t\geq 0,h>0$, the increments $\omega(t+h)-\omega(t)$ are normally
distributed with expectation zero and variance $h$,
* •
almost surely, the function $t\mapsto\omega(t)$ is continuous.
The process $\{\omega(t):t\geq 0\}$ is called a standard Brownian motion if
$x=0$.
Finally, if $\omega_{1},\cdots,\omega_{n}$ are independent one-dimensional
Brownian motions started at $x_{1},\cdots,x_{n}$ then the stochastic process
$\omega(t)=(\omega_{1}(t),\cdots,\omega_{n}(t))$ is called an $n$-dimensional
Brownian motion started at $x=(x_{1},\cdots,x_{n})$.
###### Remark A.1.
The distribution of the standard $1$-dimensional Brownian motion $\omega(t)$
is normal with mean ${\bf 0}$ and variance $t$. It follows that the RMSD (root
mean squared displacement) of the standard $n$-dimensional Brownian motion is
$\sqrt{nt}$.
#### Sampling
Brownian motion simulation is prescribed directly by Definition 2. Given a step size $s$ and a number of steps $k$, we sample a Brownian path as
$\hat{\omega}(k):=\sum_{i=0}^{k}sX_{i},\quad X_{i}\sim N(0,1).$ (9)
By Definition 2, $\mathrm{Var}[\omega(t)]=t$, hence the sampling
$\hat{\omega}$ corresponds to running a Brownian motion for time
$t=ks^{2}.$ (10)
In particular, the mean displacement of $\hat{\omega}$ is $s\sqrt{nk}$. In
accordance with the main text, Subsection 3.1 and Fig. 2, whenever we need to
sample Brownian motion contained within the ball $B(x,r)$ for its lifespan
$[0,t]$, we will fix the number of steps $k$ (usually, we set $k=400$) and
adjust the step size $s$ accordingly, so that $r=s\sqrt{nk}$.
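As a minimal illustration, the following NumPy sketch implements this sampling scheme (the function name and interface are ours, not from an accompanying codebase); the $n$-dimensional path is obtained from independent per-coordinate increments as in Definition 2, with the step size calibrated so that the RMSD over the lifespan equals $r$:

```python
import numpy as np

def sample_brownian_path(n, k=400, r=1.0, rng=None):
    """Sample an n-dimensional Brownian path with k steps, with step size
    calibrated so that the RMSD over the lifespan equals r, i.e. r = s*sqrt(n*k).
    Returns the (k+1) x n array of positions, starting at the origin."""
    rng = np.random.default_rng() if rng is None else rng
    s = r / np.sqrt(n * k)                       # step size from r = s*sqrt(nk)
    steps = s * rng.standard_normal((k, n))      # i.i.d. N(0, s^2) increments
    return np.vstack([np.zeros(n), np.cumsum(steps, axis=0)])
```

By (10), such a path corresponds to running the Brownian motion for time $t=ks^{2}=r^{2}/n$.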
#### Estimating hitting probabilities
A straightforward empirical way to estimate Brownian hitting probability
$\operatorname{\mathbb{P}}_{\omega}\left[\exists
t_{0}\in[0,t]|\omega(t_{0})\in S\right]$ of a target set $S$ is to evaluate
the steps $\hat{\omega}(i),i=0,\dots,k$ and check whether
$\hat{\omega}(i_{0})\in S$ for some $i_{0}$. Of course, the precision of this
computation depends on the number of sampled Brownian paths $\hat{\omega}$, as
well as the step size $s$ and number of steps $k$. Formal statements on
convergence and numerical stability could be obtained by means of
concentration/Monte Carlo results (e.g. Proposition B.12 below); however, in
practice, in our experiments we mostly worked with the regime $k\approx
10^{4}$, which seemed an acceptable choice in terms of numeric stability and
performance.
Explicit closed-form computation of hitting probabilities is a non-trivial
task, though it is possible for some model cases (main text, Lemma 3.2).
Dimension 1 is special, where we have the so-called "reflection principle",
which says that
$\operatorname{\mathbb{P}}\left(\sup_{0\leq s\leq t}\omega(s)\geq
d\right)=2\operatorname{\mathbb{P}}\left(\omega(t)\geq d\right).$ (11)
For a proof of this basic statement we refer to Mörters & Peres (2010).
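Reusing `sample_brownian_path` from the sketch above, the empirical estimator just described can be written in a few lines, and the one-dimensional reflection principle (11) provides a closed-form sanity check (helper names are ours; the empirical value slightly underestimates the closed form, since discrete steps can miss crossings between samples):

```python
import numpy as np
from scipy.stats import norm

def hitting_probability(paths, hits_target):
    """Monte Carlo estimate of P[exists t0 in [0,t] : w(t0) in S] over sampled paths."""
    return float(np.mean([hits_target(p).any() for p in paths]))

# Sanity check against the reflection principle (11):
# P[sup_{0<=s<=t} w(s) >= d] = 2 P[w(t) >= d] = 2 Phi(-d / sqrt(t)).
rng = np.random.default_rng(0)
n, k, r, d = 1, 400, 1.0, 0.5
t = r ** 2 / n                                   # lifespan, from t = k s^2
paths = [sample_brownian_path(n, k, r, rng) for _ in range(20000)]
empirical = hitting_probability(paths, lambda p: p[:, 0] >= d)
print(empirical, 2 * norm.cdf(-d / np.sqrt(t)))  # the two values should be close
```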
However, in higher dimensions, there is no straightforward analog of the
reflection principle, and calculating hitting probabilities of spheres leads
one to the deep theory of Bessel processes. Let us consider a Brownian
particle $\omega(t)$ starting at the origin in $\mathbb{R}^{n}$ and look at
the real-valued random variable $\|\omega(t)\|$ (in the literature, these are
known as Bessel processes). We are interested in the probability of the
particle hitting a sphere $\{x\in\mathbb{R}^{n}:\|x\|=r\}$ of radius $r$
within time $t$. Curiously, it seems that there is no known closed formula for
such a hitting probability. The only formula we know of is in the form of a
convergent series involving zeros of the Bessel function of the first kind,
and appears in Kent (1980). For the reader interested in Kent’s formula, we
also refer to associated asymptotics of zeros of the Bessel function in Watson
(1944).
The following heuristic is implicit in many of our calculations and motivates
several of our definitions: the probability
$\operatorname{\mathbb{P}}\left(\sup_{0\leq s\leq t}\|\omega(s)\|\geq
r\right)$ (12)
of a Brownian particle hitting a sphere of radius $r$ within time $t$ depends only on the ratio $r^{2}/t$. As a consequence, given a small $\eta>0$
and a constant $c$, one can choose the constant $c_{n}$ in $t=c_{n}r^{2}$
small enough (depending on $\eta$) such that
$\operatorname{\mathbb{P}}\left(\sup_{0\leq s\leq c_{n}r^{2}}\|\omega(s)\|\geq
cr\right)<\eta.$ (13)
Roughly what this means is the following: for a Brownian particle, the
probability of hitting even a large and nearby object may be made arbitrarily
small if the motion is not allowed to run sufficiently long.
### A.2 Heat Diffusion and Brownian motion duality
#### Macroscopic vs microscopic
There are roughly two broad viewpoints towards the understanding of diffusion:
the “macroscopic” and the “microscopic”. Macroscopically, the mechanism of
diffusion can be thought of as creating a flux in the direction from greater
to lesser concentration. If $u(x,t)$ measures the intensity of the quantity
undergoing diffusion, and $J$ the flux across the boundary of a region
$\Omega$, then in the simplest model one assumes that (up to a constant)
$J=-\nabla u$. Further, we have the identity
$\partial_{t}\int_{\Omega}u(x,t)\;dx=-\int_{\partial\Omega}\nu\cdot(-\nabla u)\;dS,$
(14)
where $\nu$ is the outward pointing unit normal vector to $\partial\Omega$. By
applying the divergence theorem to (14), one immediately gets the heat
equation $\partial_{t}u=\Delta u$. Here $\Delta$ denotes the Laplace operator
given by the sum of second derivatives:
$\Delta=\sum_{i=1}^{n}\partial^{2}_{ii}$.
Now, many real-life diffusion processes are the result of microscopic
particles jittering around seemingly in a random manner. This motivates the
microscopic viewpoint, i.e., the modelling of heat diffusion via Brownian
motion of particles. We posit that a particle located at $x\in\mathbb{R}^{n}$
at time $t_{0}$ will have the probability $\psi_{U}(x,t)$ of being in an open
set $U\subset\mathbb{R}^{n}$ at time $t_{0}+t$, where
$\psi_{U}(x,t)=\int_{U}p(t,x,y)\;dy,$ (15)
and $p(t,x,y)$ is the fundamental solution of the heat equation, or more
famously, the “heat kernel”. In other words, $p(t,x,y)$ solves the heat
equation
$\begin{cases}\left(\partial_{t}-\Delta\right)u(x,t)=0,\\ u(x,0)=\delta(x-y),\end{cases}$ (16)
with the Dirac delta distribution as the initial condition. Via Fourier
transform, it is easy to establish that $p(t,x,y)$ is given by
$p(t,x,y)=\frac{1}{(4\pi t)^{n/2}}e^{-\frac{|x-y|^{2}}{4t}}.$ (17)
This builds the bridge to pass between analytic statements on the side of the
heat equation and probabilistic statements on the side of Brownian motion (see
Grigor’Yan (2001), Taylor (2011)). The precise formulation of this duality is
given by the celebrated Feynman-Kac theorem discussed in Subsection A.3 below.
#### Heating up the decision boundary
In our context we introduce the following heat diffusion process along the
classifier’s decision boundary $\mathcal{N}$:
$\begin{cases}\left(\partial_{t}-\Delta\right)\psi(x,t)=0,\\ \psi(x,0)=0,\quad\forall x\in\mathbb{R}^{n},\\ \psi(x,t)|_{x\in\mathcal{N}}=1,\quad\forall t>0.\end{cases}$ (18)
In other words, $\psi(x,t)$ gives the heat quantity at the point $x$ at time
$t$ given that at the initial moment $t=0$ all points have a heat quantity $0$
and afterwards a constant heat source of intensity $1$ is applied only at the
decision boundary $\mathcal{N}$. As remarked above this is the macroscopic
picture: the mentioned Feynman-Kac duality implies that $\psi(x,t)$ is also
the hitting probability $\operatorname{\mathbb{P}}_{\omega}\left[\exists
t_{0}\in[0,t]|\omega(t_{0})\in\mathcal{N}\right]$.
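The microscopic side of this duality suggests a direct way to estimate $\psi(x,t)$ for a trained classifier: sample Brownian paths from $x$ and record whether the predicted label changes along the path. The sketch below does this for a binary classifier $f:\mathbb{R}^{n}\rightarrow\mathbb{R}$ with decision boundary $\{f=0\}$; the function name is ours, and discrete steps can miss crossings between samples, so the estimate is a slight underestimate for small $k$:

```python
import numpy as np

def psi_hat(f, x, t, k=400, n_paths=2000, rng=None):
    """Monte Carlo estimate of psi(x, t): the probability that a Brownian
    particle started at x crosses the decision boundary {f = 0} by time t."""
    rng = np.random.default_rng() if rng is None else rng
    n, s = x.shape[0], np.sqrt(t / k)            # step size so that t = k * s^2
    sign0 = np.sign(f(x))
    hits = 0
    for _ in range(n_paths):
        pos = x + np.cumsum(s * rng.standard_normal((k, n)), axis=0)
        if any(np.sign(f(z)) != sign0 for z in pos):
            hits += 1
    return hits / n_paths
```

For instance, `psi_hat(lambda z: z[0] - 0.5, np.zeros(3), t=0.25)` should come out close to $2\Phi(-1)\approx 0.32$, in accordance with Lemma 3.2 of the main text.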
### A.3 The Feynman-Kac theorem
It is well-known that given a reasonable initial condition $u(x,0)=f(x)$, one
can find an analytic solution to the heat equation via convolution with the heat kernel,
$e^{t\Delta}f(x):=p(t,x,\cdot)\ast f(\cdot).$
This just follows from (16) by convolving directly. Now, via the duality of
diffusion explained above, one expects a parallel statement on the Brownian
motion side, one which computes the contribution of all the heat transferred
over all Brownian paths reaching a point at time $t$. It stands to reason that
to accomplish this, one needs an integration theory defined over path spaces,
which leads us to the theory of Wiener measures. We describe the main idea
behind Wiener measure briefly: consider a particle undergoing a random motion
in $\mathbb{R}^{n}$ (given by a continuous path
$\omega:[0,\infty)\to\mathbb{R}^{n}$) in the following manner: given
$t_{2}>t_{1}$ and $\omega(t_{1})=x_{1}$, the probability density for the
location of $\omega(t_{2})$ is
$p(t_{2}-t_{1},x,x_{1})=\frac{1}{\left(4\pi(t_{2}-t_{1})\right)^{n/2}}e^{-\frac{|x-x_{1}|^{2}}{4(t_{2}-t_{1})}}.$
We posit that the motion of a random path for $t_{1}\leq t\leq t_{2}$ is independent of its past history. Thus, given
$0<t_{1}<\cdots<t_{k}$, and Borel sets $E_{j}\subseteq\mathbb{R}^{n}$, the
probability that a path starting at $x=0$ at $t=0$, lies in $E_{j}$ at time
$t_{j}$ is
$\int_{E_{1}}\cdots\int_{E_{k}}p(t_{k}-t_{k-1},x_{k},x_{k-1})\cdots
p(t_{1},x_{1},0)\;dx_{k}\;\cdots dx_{1}.$
The aim is to construct a countably-additive measure on the space of
continuous paths that will capture the above property. The above heuristic was
first put on a rigorous footing by Norbert Wiener.
Using the concept of Wiener measure, one gets the probabilistic (microscopic)
description of heat diffusion, which is the content of the celebrated Feynman-
Kac theorem:
###### Proposition A.2.
Let $\Omega\subseteq\mathbb{R}^{n}$ be a domain, with or without boundary (it
can be the full space $\mathbb{R}^{n}$). In case of a boundary, we will work
with the Laplacian with Dirichlet boundary conditions. Now, let $f\in
L^{2}(\Omega)$. Then for all $x\in\Omega$, $t>0$, we have that
$e^{t\Delta}f(x)=\mathbb{E}_{x}\left(f\left(\omega(t)\right)\phi_{\Omega}(\omega,t)\right),$
(19)
where $\omega(t)$ denotes an element of the probability space of Brownian
paths starting at $x$, $\mathbb{E}_{x}$ is the expectation with regards to the
Wiener measure on that probability space, and
$\phi_{\Omega}(\omega,t)=\begin{cases}1,&\text{if }\omega([0,t])\subset\Omega,\\ 0,&\text{otherwise.}\end{cases}$
For a more detailed discussion, see Georgiev & Mukherjee (2018a).
### A.4 Isoperimetric and Isocapacitory results
#### Isoperimetric bounds
Isoperimetric inequalities relating the volume of a set to the surface area of
its boundary have given rise to a wealth of results Burago & Zalgaller (1988).
Given a set $M$ with boundary $\partial M$, the basic pattern of isoperimetric
inequalities is:
$\operatorname{\operatorname{Vol}}(M)\leq
c_{1}\,\operatorname{\operatorname{Area}}(\partial M)^{\frac{n}{n-1}},$ (20)
where $c_{1}$ is an appropriate positive constant depending on the dimension
$n$. In many cases, equality (or saturation in the sense of almost equality)
in (20) is characterized by rather special geometry. For example, classical isoperimetric results answer the question of which planar set with a given circumference possesses the largest area, the answer being the disk. As
discussed in the main text, isoperimetric considerations have recently led to significant insights about decision boundaries of classifiers subject to
adversarial defense training mechanisms Ford et al. (2019) by revealing
flattening phenomena and relations to robustness.
#### Isocapacitory bounds
As mentioned in the main text, one can prove types of isocapacitory bounds
that resemble the isoperimetric ones: roughly speaking, these replace the area
term with suitable Brownian hitting probabilities. We have the following
result (cf. also Georgiev & Mukherjee (2018a)):
###### Proposition A.3.
Let $B(x,r)\subset\mathbb{R}^{n},n\geq 3$, and let $E\subset B(x,r)$ denote an
“obstacle”, and consider a Brownian particle started from $x$. Then the
relative volume of the obstacle is controlled by the hitting probability of
the obstacle:
$\frac{\operatorname{\operatorname{Vol}}(E)}{\operatorname{\operatorname{Vol}}(B(x,r))}\leq
c_{n}\left(\psi_{E}(x,t)\right)^{\frac{n}{n-2}}.$ (21)
Here, $c_{n}$ is a positive constant whose value is dependent only on $n$
provided the ratio between $r^{2}$ and $t$ is suitably bounded. In particular,
in the regime $r^{2}=nt$, we have that
$c_{n}=\left(\Gamma\left(\frac{n}{2}-1\right)/\Gamma\left(\frac{n}{2}-1,\frac{n}{4}\right)\right)^{\frac{n}{n-2}}$.
Here, $\Gamma(s,x)$ represents the upper incomplete Gamma function
$\Gamma(s,x):=\int_{x}^{\infty}e^{-t}t^{s-1}\;dt.$
###### Proof.
Recall that the capacity (or more formally, the $2$-capacity) of a set $K\subset\mathbb{R}^{n}$ is defined as
$\operatorname{\operatorname{Cap}}(K)=\inf_{\eta|_{K}\equiv 1,\eta\in
C_{c}^{\infty}(\mathbb{R}^{n})}\int_{\mathbb{R}^{n}}|\nabla\eta|^{2}.$ (22)
From Section 2.2.3, Maz’ya (2011), we have the following “isocapacitory
inequality”:
$\operatorname{\operatorname{Cap}}(E)\geq\omega_{n}^{2/n}n^{\frac{n-2}{n}}(n-2)|E|^{\frac{n-2}{n}},$
(23)
where $\omega_{n}=\frac{2\pi^{n/2}}{\Gamma\left(\frac{n}{2}\right)}$ is the
$(n-1)$-dimensional surface area of $S^{n-1}$. Now, we bring in the following
estimate given by Theorem 3.7 of Grigor’Yan & Saloff-Coste (2002):
$\psi_{E}(x,t)\geq\operatorname{\operatorname{Cap}}(E)\int_{0}^{t}\inf_{y\in\partial
E}p(s,x,y)\;ds.$ (24)
Now, we have
$\displaystyle\psi_{E}(x,t)$
$\displaystyle\geq\omega_{n}^{2/n}n^{\frac{n-2}{n}}(n-2)|E|^{\frac{n-2}{n}}\int_{0}^{t}\frac{1}{\left(4\pi
s\right)^{n/2}}\inf_{y\in\partial E}e^{-\frac{|x-y|^{2}}{4s}}\;ds$
$\displaystyle\geq\omega_{n}^{2/n}n^{\frac{n-2}{n}}(n-2)|E|^{\frac{n-2}{n}}\int_{0}^{t}\frac{1}{\left(4\pi
s\right)^{n/2}}e^{-\frac{r^{2}}{4s}}\;ds$
$\displaystyle=\omega_{n}^{2/n}n^{\frac{n-2}{n}}(n-2)|E|^{\frac{n-2}{n}}\frac{1}{4r^{n-2}\pi^{n/2}}\int^{\infty}_{\frac{r^{2}}{4t}}e^{-z}z^{n/2-2}\;dz.$
After rearrangement, the proposed claim follows. ∎
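For concreteness, the constant $c_{n}$ in the regime $r^{2}=nt$ can be evaluated numerically via the regularized upper incomplete Gamma function $Q(s,x)=\Gamma(s,x)/\Gamma(s)$ (available as `scipy.special.gammaincc`), using that $\Gamma(s)/\Gamma(s,x)=Q(s,x)^{-1}$; a small sketch with our naming:

```python
from scipy.special import gammaincc  # regularized upper incomplete gamma Q(s, x)

def c_n(n):
    """Constant of Proposition A.3 in the regime r^2 = n*t, for n >= 3:
    c_n = (Gamma(n/2-1) / Gamma(n/2-1, n/4))^(n/(n-2)) = Q(n/2-1, n/4)^(-n/(n-2))."""
    assert n >= 3
    return gammaincc(n / 2 - 1, n / 4) ** (-n / (n - 2))

print(c_n(3))     # low-dimensional value
print(c_n(3072))  # CIFAR10-like dimension; here Q is close to 1, so c_n is close to 1
```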
Intuitively, it makes sense that if the volume of a set is fixed, one can
increase its hitting probability by “hammering” the set into a large thin
sheet. However, it seems unlikely that after lumping the set together (as in a
ball), one can reduce capacity/hitting probability any further. Moreover,
isocapacitory bounds are saturated by the $n$-ball.
It is also illustrative to compare the seemingly allied concepts of capacity and surface area. A main difference between capacity and surface area is how capacity interacts with hitting probabilities. As an illustrative example, think of a book opened at an angle of $180^{\circ},90^{\circ},45^{\circ}$ respectively. All three configurations have the same surface area, but the probability of a Brownian particle striking the book decreases from the first case to the last. It is rather difficult to make this heuristic precise in terms of capacity (at least directly from the definition). Capacity can be thought of as a soft measure of how "spread out" or "opened-up" a surface is, and it depends strongly on how the surface is embedded in the ambient space.
Figure 5: Examples illustrating the interplay between isoperimetric and isocapacitory saturation in high dimensions. (Left) Slightly bending a flat decision boundary $\mathcal{N}_{0}$ causes significant changes in $\tau$ while the isoperimetric inequality remains very close to optimal: $\mathcal{N}_{+}$ (resp. $\mathcal{N}_{-}$) leads to an increase (resp. decrease) in $\tau$ (cf. also Fig. 6). (Right) Small "pockets" near the data sample $x$ can also cause large Brownian hitting probabilities (hence, large $\tau$ values) with still well-saturated isoperimetric bounds.

Figure 6: A continuation of Fig. 5: isocapacitory and isoperimetric saturation while slightly bending the decision boundary ($\mathcal{N}_{-}$ and $\mathcal{N}_{+}$ in Fig. 5). In this plot the decision boundary $\mathcal{N}_{-},\mathcal{N}_{+}$ is a cap of a larger sphere with radius $R$ (set initially to $15r$) in dimension $3072$ (corresponding to CIFAR10). We interpolate between $\mathcal{N}_{-}$ and $\mathcal{N}_{+}$: first, by increasing the radius $R$, $\mathcal{N}_{-}$ converges to the flat $\mathcal{N}_{0}$; similarly, starting from $\mathcal{N}_{0}$ we decrease $R$ to get to $\mathcal{N}_{+}$. Along this interpolation process, we plot the graphs of the isocapacitory and isoperimetric saturation. In particular, we observe at least $96\%$ saturation of the isoperimetric bound, whereas the isocapacitory bound shows a much more sensitive behaviour on this scale.
#### Isocapacitory vs isoperimetric saturation
A main line of analysis in the present work addresses the interplay between
isocapacitory and isoperimetric saturation. In our particular context of
defense training mechanisms we observe saturation of isoperimetric bounds for
the classifier’s decision boundaries - this implies that decision boundaries
are not far from being flat. However, as mentioned before, it turns out that
isocapacitory saturation does not concentrate around the values corresponding
to hyperplanes (overall, it seems to stay well below that value). In this
sense, isocapacitory saturation acts as a finer, more sensitive measure of deviation from flatness. A simple geometric model scenario exhibiting similar behaviour is illustrated in Fig. 5 and Fig. 6.
### A.5 Model Cases
We first begin with the proof of Lemma 3.2.
###### Proof.
Let us select an orthonormal basis $\{e_{1},\dots,e_{n}\}$ so that $e_{1}$ coincides with the given hyperplane’s normal vector. A standard fact about $n$-dimensional Brownian motion is that the projections on the coordinate axes are again one-dimensional Brownian motions Mörters & Peres (2010). Thus, projecting the $n$-dimensional Brownian motion onto $e_{1}$, the hitting probability of the hyperplane is the same as the probability that a one-dimensional Brownian motion $\omega(t)$ will pass a certain threshold $d$ by time $t$. To compute this probability we use the reflection principle (11) in conjunction with Remark A.1. Consequently, the hitting probability equals $2\Phi(-d/\sqrt{t})$. The computation of $\mu(x,r)$ follows by definition. ∎
Here we note that the dimension $n$ enters only through the spherical cap volume. An impression of how $\tau$ behaves for different choices of $n$ in terms of the distance $d$ is given in Fig. 7. In particular, one observes the well-known concentration of measure phenomenon and Lévy’s lemma: the volume of the spherical cap exhibits a very rapid decay as $n$ becomes large. Moreover, experiments reveal a curious phenomenon: there is a threshold distance $d_{0}$ up to which $\tau\approx 2$, after which $\tau$ explodes.
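This threshold behaviour is easy to reproduce numerically. In the time regime $t=r^{2}/n$ used throughout, Lemma 3.2 gives $\psi=2\Phi(-d/\sqrt{t})$, and the relative volume of the spherical cap admits a closed form via the regularized incomplete beta function. The sketch below computes $\tau$ for the flat error set under the normalization used implicitly in the proof of Proposition A.5, namely $\tau=\psi^{n/(n-2)}$ divided by the relative volume of $E$ - an assumption on our part, as is the function naming:

```python
import numpy as np
from scipy.special import betainc
from scipy.stats import norm

def flat_tau(d, n, r=1.0):
    """tau of a flat error set cut off by a hyperplane at distance d from x,
    computed in B(x, r) in the time regime t = r^2 / n; assumes 0 < d < r."""
    t = r ** 2 / n
    psi = 2.0 * norm.cdf(-d / np.sqrt(t))                        # Lemma 3.2
    cap = 0.5 * betainc((n + 1) / 2.0, 0.5, 1.0 - (d / r) ** 2)  # relative cap volume
    return psi ** (n / (n - 2.0)) / cap

# d -> 0 recovers tau of roughly 2; larger d shows the blow-up of Fig. 7.
print([round(flat_tau(d, n=100), 2) for d in (0.01, 0.1, 0.3)])
```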
In Fig. 8 we plot further interesting model cases where the error set forms a
wedge (the region between two intersecting hyperplanes) or a cone.
Figure 7: The isocapacitory saturation $\tau$ of a flat error set. Given a point $x$, the computation takes place in $B(x,r)$ with $r=1$. The distance to the flat decision hyperplane is given on the x-axis, while the y-axis gives $\tau$. Curve labeling indicates the respective dimension. There appears to be a threshold dividing the regimes $\tau\approx 2$ and $\tau\rightarrow\infty$.

Figure 8: Further model cases and plots of the isocapacitory saturation $\tau$. (Left) Isocapacitory saturation of a cone in terms of the opening angle (radians). (Right) Isocapacitory saturation of a wedge in terms of the opening angle (radians). Curve labels indicate the respective dimension. Again one observes concentration of measure, as the volume of the cone decreases to $0$ exponentially fast in terms of the dimension $n$: this is why we plot opening angles around $\pi$ in this case. Furthermore, cones and wedges with an opening angle of almost $\pi$ behave like hyperplanes in terms of saturation.
#### Spiky sets
As discussed in the main text, one observes a high isocapacitory saturation
$\tau$ for the so-called "spiky" sets - these are sets of relatively small
volume and relatively large/dense boundary. Theoretically, a guiding model
case in this direction is given by Lemma 3.3 in the main text, whose proof we
now record.
###### Proof.
Let $T_{\rho}$ denote the $\rho$-tubular neighborhood of a line segment of length $h$ inside $\mathbb{R}^{n}$. Clearly, $T_{\rho}\cong B(0,\rho)\times[0,h]$, where $B(0,\rho)$ is a $\rho$-ball inside $\mathbb{R}^{n-1}$.
By the well-known process of Steiner symmetrization in $\mathbb{R}^{n}$, it is
clear that the expression for capacity in (22) will be minimized by a function
that is “radially symmetric” around the central axis of the tube $T_{\rho}$,
that is $f(x,y)=f(|x|)$, where $x\in B(0,\rho),y\in[0,h]$. Then, as we scale
$\rho\to\lambda\rho$, where $\lambda\searrow 0$,
$\operatorname{\operatorname{Cap}}\left(T_{\lambda\rho}\right)\sim\lambda^{n-3}\operatorname{\operatorname{Cap}}\left(T_{\rho}\right)$
(which is seen directly from the definition (22)), whereas the volume scales
as $\left|T_{\lambda\rho}\right|=\lambda^{n-1}\left|T_{\rho}\right|$.
Now assume that the cylinder $T_{\rho}$ is inside the closed ball
$\overline{B(x,r)}\subset\mathbb{R}^{n}$, the central axis of $T_{\rho}$ is
pointing towards $x$, and $T_{\rho}$ is touching the boundary of $B(x,r)$. To
pass from capacity to hitting probability of the set $T_{\rho}$, we use the following bound from Grigor’Yan & Saloff-Coste (2002):
$\frac{\operatorname{\operatorname{Cap}}(T_{\rho})r^{2}}{\operatorname{\operatorname{Vol}}(B(x,r))}e^{-C\frac{r^{2}}{t}}\leq\psi_{T_{\rho}}(x,t).$
(25)
Finally, using the definition of $\tau$ and putting the above estimates
together, one sees that in the time regime of $O(r^{2})$, $\tau$ scales like
$\lambda^{-2/(n-2)}$, and hence, $\tau\nearrow\infty$ as $\lambda\searrow 0$.
∎
See also Figure 8 for a visual discussion of the isocapacitory saturation for
the model cases of wedges and cones.
Figure 9: Cylindrical "spike" of height $h$ and radius $\rho$ inside the ball
$B(x,r)$.
### A.6 Curvature estimates in terms of isocapacitory saturation
The geometric concept of curvature has a rich history and plays a central role
in differential geometry and geometric analysis. There are several notions of
curvature in the literature, ranging from intrinsic notions like sectional,
Ricci or scalar curvatures to extrinsic (that is, dependent on the embedding)
notions like principal curvatures and mean curvature, which are encoded in the
second fundamental form. In this note we use a somewhat “soft” definition of
curvature, following previous work Fawzi et al. (2016); Dezfooli et al.
(2018). Suppose the decision boundary $\mathcal{N}_{f}$ is sufficiently
regular ($C^{2}$ is enough for our purpose) and it separates $\mathbb{R}^{n}$
into two components $\mathcal{R}_{1}:=\{f>0\}$ and $\mathcal{R}_{2}:=\{f<0\}$, corresponding to a binary classification (the
construction in the multi-label case is analogous). For a given
$p\in\mathcal{N}_{f}$, let $r_{j}(p)$ denote the radius of the largest sphere
that is tangent to $\mathcal{N}_{f}$ at $p$, and fully contained in
$\mathcal{R}_{j}$. Then, one defines the curvature $\kappa$ at $p$ as
$\kappa(p)=1/\min\left(r_{1}(p),r_{2}(p)\right).$ (26)
See Fig. 10 for a geometric illustration. However, it turns out that most
notions of curvature are quite subtle (see Fawzi et al. (2016)) and at this
point, seemingly more cumbersome and intractable to handle experimentally. We
will take an indirect approach, and attempt to read off the effect of curvature, and the effect on it, via the isocapacitory saturation $\tau$.
Figure 10: “Soft” definition of curvature given by the inverse radius of the
osculating sphere.
Again, we begin with the model cases: we first study the behaviour of
curvature $\kappa$ if $\tau$ achieves its least possible value. We start by
fixing some notation. As before let us consider a ball $B(x,r)$ with an error
set $E\subset B(x,r)$ and boundary $\mathcal{N}=\partial E$ (clearly our main
case of interest is $E=E(y)\cap B(x,r)$). Let us denote the distance $d=d(x,\mathcal{N})$ and suppose the point $y\in\mathcal{N}$ realizes this
distance, i.e. $d(x,y)=d$. To rule out some degenerate cases and ease the
analysis we introduce the following assumption:
Assumption: The hypersurface $\mathcal{N}$ and the point $x$ are on different
sides of the tangent hyperplane $H^{*}:=T_{y}\mathcal{N}$ (cf. Fig. 11).
This assumption is also technically important, as otherwise low values of
$\tau$ will be produced by annuli surrounding $x$. With that in place, we have
the following rigidity result:
###### Proposition A.4.
Let us fix the distance $d=d(x,\mathcal{N})$ and suppose the assumption above
holds. Then the least possible value of $\tau$ is attained only if the
curvature $\kappa$ of the hypersurface $\mathcal{N}$ is $0$.
###### Proof.
As above let $H^{*}$ be the tangent hyperplane at distance $d$ from $x$, and
let $C$ denote the (smaller) spherical cap formed by $H^{*}\cap B(x,r)$. The
proof relies on the following variational argument. If $\mathcal{N}$ is not
the same as $H^{*}$, then $\mathcal{N}\subseteq C$, with $y\in\mathcal{N}\cap
H^{*}$. We wish to argue that one can then perturb $\mathcal{N}$ infinitesimally to decrease the value of $\tau$, so the only minimizer of $\tau$ has to be $H^{*}$. The basic idea is to cut out a small piece $p_{v}$ around $v$ and paste it in the region around $\tilde{v}$ (Fig. 11).
We say that $\mathcal{N}$ has positive curvature at some point $z$ if the ball
defining the curvature at $z$ and the point $x$ lie on different sides of
$\mathcal{N}$. The construction is as follows. Let $S(x,s)$ be the
$(n-1)$-sphere centered at $x$ with radius $s$. We consider two cases:
Case I: Let us suppose that there exist $s_{1}<s_{2}\leq r$ and points
$v,\tilde{v}\in\mathcal{N}$ such that the curvature of $\mathcal{N}$ at
$v\in\mathcal{N}\cap S(x,s_{1})$ is greater than the curvature at
$\tilde{v}\in\mathcal{N}\cap S(x,s_{2})$. Let us, moreover, choose the infimum
among such $s_{1}$ and the supremum among such $s_{2}$.
To define the mentioned piece $p_{v}$, we consider two small balls
$B(v,\varepsilon),B(\tilde{v},\varepsilon)$ (where $\varepsilon\ll
s_{2}-s_{1}$), and cut out a set $p_{v}=E\cap B(v,\varepsilon)$ such that
$\partial\left(E\setminus B(v,\varepsilon)\right)$ is congruent to
$\mathcal{N}\cap B(\tilde{v},\varepsilon)$ (this is possible due to the
curvature assumptions at $v,\tilde{v}$). Then, we define the new error set
$E^{\prime}=E\cup p_{\tilde{v}}\setminus p_{v}$ and the boundary
$\mathcal{N}^{\prime}=\partial E^{\prime}$, where $p_{\tilde{v}}$ represents
the image of $p_{v}$ under a rigid motion, attached inside $B(\tilde{v},\varepsilon)$ (see Fig. 11). It is now clear that
$|E|=|E^{\prime}|$, but $\psi_{E^{\prime}}(x,T)<\psi_{E}(x,T)$ for all $T>0$.
The last inequality follows by evaluating the explicit heat kernel expression for the hitting probability $\psi$ provided by Feynman-Kac duality:
$\displaystyle\psi_{E}(x,T)$ $\displaystyle=\int_{0}^{T}\int_{E}\frac{1}{(4\pi
t)^{n/2}}e^{-\frac{(x-y)^{2}}{4t}}\;dy\;dt$
$\displaystyle>\int_{0}^{T}\int_{E^{\prime}}\frac{1}{(4\pi
t)^{n/2}}e^{-\frac{(x-y)^{2}}{4t}}\;dy\;dt=\psi_{E^{\prime}}(x,T).$
It follows from the definition of $\tau$ that $\tau_{E}\geq\tau_{E^{\prime}}$.
Case II: If Case I is not satisfied, then, similarly, we choose two points
$v,\tilde{v}$, but instead of defining the piece $p_{v}$ by intersection with
a small ball around $v$ we select $p_{v}$ as a “concavo-convex lens shape”
domain, where the curvature on the concave “inner side” of $p_{v}$ of the lens
is greater than that on the convex outer side. As before, we attach a rigid
motion image of $p_{v}$ inside $B(\tilde{v},\varepsilon)$. The rest of the
argument is similar to Case I. ∎
Figure 11: Moving the piece $p_{v}$ near the tip of the obstacle and
reattaching it far away as $p_{\tilde{v}}$
reduces the hitting probability, but preserves volume.
With reference to our previous discussion of spikes, it heuristically makes
sense that a spike must have reasonably high curvature (it can have high
curvature on average, or, if it is flat at most places, have a sharp needle-like end where the curvature is very high). In the same setting as
Proposition A.4 let us, moreover, for simplicity assume that $\mathcal{N}$ is
the graph of a function over the tangent hyperplane $H^{*}$ (Fig. 11).
###### Proposition A.5.
In the above setting let us fix the value of $d$. Then, if the maximum
curvature $\kappa_{\max}$ of $\mathcal{N}$ is sufficiently high (greater than
some universal constant), then it satisfies
$\kappa_{\max}\geq\frac{\tau^{\frac{1}{n}}}{r}\left(\Phi\left(-\frac{d}{\sqrt{t}}\right)\right)^{-\frac{1}{n-2}},$
(27)
where $\Phi$ denotes the c.d.f. of the standard normal distribution. If a
point attaining this maximum curvature is within the half concentric ball
$B(x,r/2)$, then $\kappa_{\max}$ satisfies the stronger estimate
$\kappa_{\max}\geq\frac{\tau^{\frac{1}{n}}(r-d)}{r^{\frac{n}{n-1}}}\left(\Phi\left(-\frac{d}{\sqrt{t}}\right)\right)^{-\frac{n}{(n-1)(n-2)}}.$
(28)
###### Proof.
Recalling the definition of the isocapacitory saturation $\tau$, we will bound
the numerator (resp. denominator) of $\tau$ from above (resp. below). First,
for the numerator $\psi_{E}(x,t)$ we will use a basic monotonicity property of
hitting probabilities stating that for two sets $A\subseteq B$ one has $\psi_{A}(x,t)\leq\psi_{B}(x,t)$ - this follows directly from the definition
of $\psi$. Now, since $E\subseteq C$ where $C$ is the smaller spherical cap of
$B(x,r)\cap H^{*}$, we have $\psi_{E}(x,t)\leq\psi_{C}(x,t)$. However,
recalling the explicit form of $\psi_{C}$ from Lemma 3.2 of the main text, we
have
$\psi_{E}(x,t)\leq\Phi\left(-\frac{d}{\sqrt{t}}\right).$
Second, to bound the denominator of $\tau$ (i.e.
$\operatorname{\operatorname{Vol}}(E)$), we observe that if $\kappa_{\max}$ is
large enough, by definition $E$ contains a ball of radius
$\frac{1}{\kappa_{\max}}$, and
$\operatorname{\operatorname{Vol}}(E)\geq\frac{\omega_{n}}{\kappa_{\max}^{n}}$
where $\omega_{n}$ denotes the volume of unit $n$-dimensional ball. That
finally implies,
$\displaystyle\tau$
$\displaystyle\leq\left(\Phi\left(-\frac{d}{\sqrt{t}}\right)\right)^{\frac{n}{n-2}}\frac{\operatorname{\operatorname{Vol}}(B(x,r))}{\operatorname{\operatorname{Vol}}(E)}$
$\displaystyle\leq\left(\Phi\left(-\frac{d}{\sqrt{t}}\right)\right)^{\frac{n}{n-2}}r^{n}\kappa^{n}_{\max},$
which proves (27).
If a point of maximum curvature is inside a concentric ball of radius $r/2$,
then $E$ contains $\approx\frac{\kappa_{\max}(r-d)}{2}$ balls of radius
$\frac{1}{\kappa_{\max}}$, which implies that
$\operatorname{\operatorname{Vol}}(E)\geq\kappa_{\max}(r-d)\left(\frac{\omega_{n}}{\kappa^{n}_{\max}}\right)$.
The rest of the proof is similar. ∎
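The lower bound (27) is straightforward to evaluate numerically; a small sketch (our naming), defaulting to the time regime $t=r^{2}/n$ used in our experiments:

```python
import numpy as np
from scipy.stats import norm

def kappa_max_lower_bound(tau, d, r, n, t=None):
    """Lower bound (27) on the maximal curvature of the decision boundary,
    valid when kappa_max is sufficiently large (Proposition A.5)."""
    t = r ** 2 / n if t is None else t
    return tau ** (1.0 / n) / r * norm.cdf(-d / np.sqrt(t)) ** (-1.0 / (n - 2))
```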
Now, we give a curvature estimate which works in any regime, without any
restrictions. The tradeoff is a global average bound of the $L^{p}$-type
rather than pointwise estimates.
###### Proposition A.6.
In the setting as above, let us fix the distance $d=d(x,\mathcal{N})$. At each
point of $\mathcal{N}$, let us denote by $\kappa$ the maximal sectional
curvature of $\mathcal{N}$ at that point. The following estimate holds:
$\|\mathcal{K}\|_{L^{1}}\geq
V_{n}(d,r)-\frac{2\omega_{n}r^{n}\Phi\left(-\frac{d}{\sqrt{t}}\right)}{\tau_{H}},$
(29)
where $V_{n}(d,r)$ denotes the volume of the smaller spherical cap at distance
$d$, the constant $\omega_{n}$ denotes the volume of unit ball in
$\mathbb{R}^{n}$, and the function $\mathcal{K}$ is an integral function of
the curvature $\kappa$ over lines (defined in (31) below).
###### Proof.
Again, we suitably bound the numerator and denominator of $\tau$. Starting
with the numerator, as explained in Proposition A.5, we have by monotonicity
$\psi_{E}(x,t)\leq 2\Phi\left(-\frac{d}{\sqrt{t}}\right).$ (30)
To bound the denominator of $\tau$ we proceed as follows. Let $\mathcal{N}$ be
the graph of the function $\tilde{g}(x_{1},\cdots,x_{n-1})$, where the
variables $x_{j}$ are taken from the hyperplane $H^{*}$ (Fig. 11) at distance
$d$ from $x$; the point at which $\mathcal{N}$ touches this hyperplane is
taken as the origin. Let $\varphi_{\epsilon}$ be a smooth cut-off function defined on the hyperplane such that $\varphi_{\epsilon}\equiv 1$ on the set $S$ of all $(x_{1},\cdots,x_{n-1})$ such that $\tilde{g}(x_{1},\cdots,x_{n-1})\in B(x,r)$, and $\varphi_{\epsilon}\equiv 0$ outside the $\epsilon$-tubular neighborhood of $S$. Finally, let $g_{\epsilon}:=\varphi_{\epsilon}\tilde{g}$.
Now we see that, letting $a=(r^{2}-d^{2})^{1/2}$,
$\displaystyle V_{n}(d,r)-\operatorname{\operatorname{Vol}}(E)$
$\displaystyle\leq\int_{\rho=0}^{a}\int_{S^{n-2}}g_{\epsilon}(\rho,\theta)\;\rho^{n-2}\;d\rho\;d\theta.$
Now, if $\eta$ denotes the unit vector in the direction of a fixed
$(\rho,\theta)$, observing that $g_{\epsilon}(0)=0$, we have by the
fundamental theorem of calculus
$g_{\epsilon}(\rho,\theta)=\int_{0}^{1}\partial_{t}g_{\epsilon}(t\rho\eta,\theta)\;dt.$
In turn, applying the fundamental theorem a second time and observing that
$\nabla g_{\epsilon}(0)=0$, we have that
$g_{\epsilon}(\rho,\theta)=\int_{0}^{1}\int_{0}^{1}\partial_{s}\partial_{t}g_{\epsilon}(st\rho\eta,\theta)\;ds\;dt.$
Putting everything together we get,
$V_{n}(d,r)-\operatorname{\operatorname{Vol}}(E)\leq\int_{\rho=0}^{a}\int_{S^{n-2}}\left(\int_{0}^{1}\int_{0}^{1}\partial_{s}\partial_{t}g_{\epsilon}(st\rho\eta,\theta)\;ds\;dt\right)\;\rho^{n-2}\;d\rho\;d\theta.$
Now, we define the following integral quantity:
$\mathcal{K}_{\epsilon}(\rho,\theta)=\int_{0}^{1}\int_{0}^{1}|\kappa_{\epsilon}(st\rho\eta,\theta)|\;ds\;dt.$
(31)
Noting that the maximum sectional curvature bounds the second derivatives,
finally we have that
$V_{n}(d,r)-\operatorname{\operatorname{Vol}}(E)\leq\|\mathcal{K}_{\epsilon}\|_{L^{1}}.$
(32)
To obtain (29) we now put all the above estimates together and let
$\epsilon\searrow 0$. ∎
## Appendix B: Generalization bounds and compression schemes
#### Background
A main line of ML and statistical inference research addresses questions of
generalization. To set the stage we start with some notation. Let us suppose
that the dataset $\mathcal{X}$ is sampled from a probability distribution $D$,
i.e. $(x,y)\sim D$. Following conventions from the literature Arora et al.
(2018) we define the expected margin loss of a classifier $f$ by
$L_{\gamma}(f):=\operatorname{\mathbb{P}}_{(x,y)\sim
D}\left[f(x)[y]\leq\gamma+\max_{j=1,\dots,k;j\neq y}f(x)[j]\right].$ (33)
We use the notation $\hat{L}_{\gamma}$ to denote the empirical margin loss over the given data set $\mathcal{X}$. Finally, the generalization error
is defined as $L_{\gamma}-\hat{L}_{\gamma}$.
Quite roughly speaking, standard generalization results attempt to estimate
the performance of the classifier on unseen samples (i.e. the full data
distribution), thus yielding bounds of the form:
$L_{\gamma_{1}}(f)\leq\hat{L}_{\gamma_{2}}(f)+F(\gamma_{1},\gamma_{2},f,\mathcal{X}),$
(34)
where $F$ is an additional term that usually depends on, e.g., the size of $\mathcal{X}$, the expressiveness of $f$, and further margin information $(\gamma_{1},\gamma_{2})$.
### B.1 Compression in a heat diffusion sense implies generalization bounds
We first state a well-known concentration inequality due to Hoeffding which
will find repeated use in the ensuing sections:
###### Proposition B.1 (Hoeffding’s inequality).
Let $X_{1},\dots,X_{n}$ be independent random variables taking values in the
interval $[0,1]$, and let $\overline{X}=\frac{1}{n}(X_{1}+\dots+X_{n})$ be the
empirical mean of these random variables. Then we have:
$\operatorname{\mathbb{P}}\left({\overline{X}}-\operatorname{\mathbb{E}}\left({\overline{X}}\right)\geq
t\right)\leq e^{-2nt^{2}}.$ (35)
We now provide the proof of Proposition 5.1 of the main text.
###### Proof.
The strategy of proof follows well-known "weak-law-of-large-numbers"
concentration techniques in a spirit similar to Arora et al. (2018).
Step 1. First, we show that for a given $g$ as
$|\mathcal{X}|\rightarrow\infty$,
$\operatorname{\mathbb{P}}_{(x,y)\sim\mathcal{X}}\left(C_{g}(x,y,\gamma^{2})\right)\rightarrow\operatorname{\mathbb{P}}_{(x,y)\sim D}\left(C_{g}(x,y,\gamma^{2})\right),$ (36)
where $C_{g}(x,y,\gamma^{2})$ is the event that a Brownian path starting at
$x$ hits $E_{g}(y)$ within time $\gamma^{2}$. The rate of convergence is
determined through Chernoff concentration bounds.
Choose $\alpha\in A$, and let $g_{\alpha}$ be the corresponding classifier.
Attached to each sample point $x_{j}$, there is a Bernoulli random variable
$X_{j}$ which takes the value $1$ if $C_{g_{\alpha}}(x_{j},y,\gamma^{2})$
happens, and $0$ otherwise. Then $\overline{X}=\frac{1}{m}\sum_{j=1}^{m}X_{j}$ is the average of $m$ i.i.d. Bernoulli random variables, each with expectation $\operatorname{\mathbb{P}}_{(x,y)\sim D}\left(C_{g_{\alpha}}(x,y,\gamma^{2})\right)$.
Furthermore, we note that if a data sample is misclassified, then the Brownian
particle almost surely will hit the error set. Combining this observation with
the concentration estimate (35) above, we obtain
$\displaystyle L_{0}(g_{\alpha})$
$\displaystyle\leq\operatorname{\mathbb{P}}_{(x,y)\sim
D}\left(C_{g_{\alpha}}(x,y,\gamma^{2})\right)$
$\displaystyle\leq\operatorname{\mathbb{P}}_{(x,y)\sim\mathcal{X}}\left(C_{g_{\alpha}}(x,y,\gamma^{2})\right)+\xi,$
(37)
with probability at least $1-e^{-2\xi^{2}m}$. If each classifier $g_{\alpha}$
has $q$ parameters, each of which can take $r$ discrete values, we take
$\xi=\sqrt{\frac{q\log r}{m}}$.
Step 2. The estimate from the previous step should hold for every classifier
$g_{\alpha}$ in the family $A$ with large probability. This is guaranteed by a
union bound together with the Chernoff bounds above. More precisely, there are $r^{q}$ different choices $\alpha\in A$, and hence by taking a union bound over the estimates in (37), one can say that
$\operatorname{\mathbb{P}}_{(x,y)\sim
D}\left(C_{g_{\alpha}}(x,y,\gamma^{2})\right)\leq\operatorname{\mathbb{P}}_{(x,y)\sim\mathcal{X}}\left(C_{g_{\alpha}}(x,y,\gamma^{2})\right)+\sqrt{\frac{q\log
r}{m}}$ (38)
with probability at least $1-e^{-q\log r}$ over all $\alpha\in A$.
Step 3. Finally one uses the fact that $f$ is approximable by at least one
$g=g_{\alpha_{0}}$ for some $\alpha_{0}$ in $A$. Via Definition 1 of the main
text, one sees that
$\displaystyle\operatorname{\mathbb{P}}_{(x,y)\sim\mathcal{X}}\left(C_{g_{\alpha_{0}}}(x,y,\gamma^{2})\right)$
$\displaystyle\leq\operatorname{\mathbb{P}}_{(x,y)\sim\mathcal{X}}\left(C_{f}(x,y,\gamma^{2})\right)+\eta,$
which finally gives that with probability at least $1-e^{-q\log r}$, we have
$L_{0}(g)\leq\operatorname{\mathbb{P}}_{(x,y)\sim\mathcal{X}}\left(C_{f}(x,y,\gamma^{2})\right)+\eta+O\left(\sqrt{\frac{q\log
r}{m}}\right).$ (39)
∎
###### Remark B.2.
As noted, a classifier $f$ classifies a point $x$ wrongly if and only if
$\psi_{E(y)}(x,t)=1$ for all time scales $t$. With this observation, and since
(39) works for all real numbers $\gamma$, letting $\gamma\to 0$, we have that
with probability at least $1-e^{-q\log r}$,
$L_{0}(g)\leq\hat{L}_{0}(f)+\eta+O\left(\sqrt{\frac{q\log r}{m}}\right).$
This recovers a loss estimate which is similar to the estimate in Theorem 2.1 of Arora et al. (2018).
Indeed, one can consider $\operatorname{\mathbb{P}}_{(x,y)\sim\mathcal{X}}\left(C_{f}(x,y,\gamma^{2})\right)$
as a “soft” or probabilistic measure of classification with margin
$\approx\gamma$.
When defining the notion of a compression, instead of taking a pointwise
difference as in Definition 1 of Arora et al. (2018), we would like to capture
the idea that the decision boundary of a good compression should be “close
enough” to the decision boundary of the original classifier. In our context,
this implies that their “heat signatures” at the sample points should be close
enough at all time scales. As noted in the main text, Definition 1 is one natural option to define goodness of compression in a heat-
diffusion sense. Another natural way is to consider the Brownian motion’s
running time and define a good approximation as follows:
###### Definition 3.
Given a positive real number $\eta$, a classifier $g$ is said to be an
$\eta-$compression w.r.t. hitting time of $f$ if
$\psi_{E_{g}(y)}(x,\gamma^{2}-\eta)\leq\psi_{E_{f}(y)}(x,\gamma^{2})\leq\psi_{E_{g}(y)}(x,\gamma^{2}+\eta)$
(40)
for all points $x$ in the training sample, labels $y$ and real numbers
$\gamma^{2}\geq\eta$.
Analogously, we have the following
###### Proposition B.3.
Let us suppose that $f$ is approximable by $g$ in the sense of Definition 3.
Here $g\in A$, where $A$ is a family of classifiers
$\mathbb{R}^{n}\rightarrow\mathbb{R}$ parametrized by $q$ parameters assuming
$r$ discrete values. As before, for a classifier $h$, let $C_{h}(x,y,t)$ be
the event that a Brownian path starting at $x$ hits $E_{h}(y)$ within time
$t$. Then we have
$L_{0}(g)\leq\operatorname{\mathbb{P}}_{(x,y)\sim D}\left(C_{g}(x,y,\gamma^{2}-\eta)\right)\leq\operatorname{\mathbb{P}}_{(x,y)\sim\mathcal{X}}\left(C_{f}(x,y,\gamma^{2})\right)+O\left(\sqrt{\frac{q\log r}{m}}\right)$ (41)
with probability at least $1-e^{-q\log r}$.
The proof proceeds similarly to the one above. Letting $\gamma^{2}\rightarrow\eta$ gives us
$L_{0}(g)\leq\operatorname{\mathbb{P}}_{(x,y)\sim\mathcal{X}}\left(C_{f}(x,y,\eta)\right)+O\left(\sqrt{\frac{q\log
r}{m}}\right).$ (42)
Again, the first term on the RHS can be interpreted as the geometric margin of
classification. In particular, if the classifier $f$ separates points by a distance of $\approx\sqrt{n\eta}$, then, since the Brownian motion travels $\approx\sqrt{n\eta}$ within time $\eta$, hitting the error set will happen only if a misclassification occurred, i.e. we have
$\operatorname{\mathbb{P}}_{(x,y)\sim\mathcal{X}}\left(C_{f}(x,y,\eta)\right)\approx
L_{0}(f).$ (43)
### B.2 A sharp variant of the Johnson-Lindenstrauss algorithm
Several state-of-the-art compression schemes utilize dimensionality reduction in the spirit of Johnson-Lindenstrauss (JL); see Arora et al. (2018). In this
Subsection we discuss a JL compression scheme that will later be coupled with
and tuned by some heat-diffusion estimates. We begin by discussing a variant
of JL (Alg. 1).
Data: Original matrix $A$ of dimension $h_{1}\times h_{2}$; error parameters $\alpha>0$ and $\beta\in(0,1)$.
Result: Stochastic compressed matrix $\hat{A}$ with $O\left(\log(h_{1}h_{2})/\beta\alpha^{2}\right)$ non-zero entries such that
$\operatorname{\mathbb{P}}\left[\|\hat{A}x-Ax\|\geq\alpha\|A\|_{F}\|x\|\right]\leq\beta.$
for _each entry $(i,j)$ with $i\leq h_{1}$, $j\leq h_{2}$_ do
Sample $z_{ij}=1$ with probability $p_{ij}=\frac{2a_{ij}^{2}}{\beta\alpha^{2}\|A\|_{F}^{2}}$, and $z_{ij}=0$ otherwise;
Set $\hat{a}_{ij}=\frac{z_{ij}a_{ij}}{p_{ij}}$.
end for
Return $\hat{A}=(\hat{a}_{ij})$.
Algorithm 1 Compressing a matrix $A\in\mathbb{R}^{h_{1}\times h_{2}}$
###### Proposition B.4.
Let $A$ be a matrix of dimension $h_{1}\times h_{2}$. Then, one can find a
compressed matrix $\hat{A}$ such that
$\|Ax-\hat{A}x\|\leq\alpha\|A\|_{F}\|x\|,$
with probability at least $1-\beta$, where the number of parameters of
$\hat{A}$ is $O\left(\log(h_{1}h_{2})/\beta\alpha^{2}\right)$.
A proof of Proposition B.4 in the spirit of classical JL can be provided -
however, here we introduce a Bernoulli scheme which is a minor modification of
Algorithm 2 of Arora et al. (2018).
###### Proof.
Define the random variables $z_{ij}$ which take the value $1$ with probability
$p_{ij}=\frac{2a_{ij}^{2}}{\beta\alpha^{2}\|A\|_{F}^{2}}$, and the value $0$
otherwise. Define $\hat{a}_{ij}=\frac{z_{ij}a_{ij}}{p_{ij}}$. One can now
calculate that $\mathbb{E}\left(\hat{a}_{ij}\right)=a_{ij}$, and
$\operatorname{\operatorname{Var}}\left(\hat{a}_{ij}\right)\leq\beta\alpha^{2}\|A\|_{F}^{2}$.
Using the above, one can further calculate that $\mathbb{E}(\hat{A}x)=Ax$, and
$\operatorname{\operatorname{Var}}(\hat{A}x)\leq\|x\|^{2}\|A\|^{2}_{F}\beta\alpha^{2}$.
By Chebyshev’s inequality, this gives us that
$\operatorname{\mathbb{P}}\left[\|\hat{A}x-Ax\|\geq\alpha\|A\|_{F}\|x\|\right]\leq\beta.$
Now, the expected number of non-zero entries in $\hat{A}$ is
$\sum_{i,j}p_{ij}=\frac{2}{\beta\alpha^{2}}$. An application of Chernoff
bounds now gives that with high probability the number of non-zero entries is
$O\left(\log(h_{1}h_{2})/\beta\alpha^{2}\right)$. ∎
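A minimal NumPy sketch of this Bernoulli scheme follows (our naming, not the reference implementation of Arora et al. (2018)); the only addition is clipping $p_{ij}$ to $1$, so that entries with $p_{ij}\geq 1$ are kept deterministically, which only reduces the variance:

```python
import numpy as np

def compress_matrix(A, alpha, beta, rng=None):
    """Bernoulli sparsification of A as in Algorithm 1: keep entry a_ij with
    probability p_ij = 2 a_ij^2 / (beta * alpha^2 * ||A||_F^2), clipped to 1,
    rescaled by 1/p_ij so that E[A_hat] = A."""
    rng = np.random.default_rng() if rng is None else rng
    fro2 = np.sum(A ** 2)                                # ||A||_F^2
    p = np.minimum(1.0, 2.0 * A ** 2 / (beta * alpha ** 2 * fro2))
    z = rng.random(A.shape) < p                          # Bernoulli(p_ij) mask
    A_hat = np.zeros_like(A, dtype=float)
    A_hat[z] = A[z] / p[z]                               # unbiased estimator of A
    return A_hat
```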
### B.3 Hitting probability, capacity sensitivity and compression
As discussed in the main text, here we use hitting probabilities associated to the decision boundary to define a notion of “capacity sensitivity” of a neural net layer. The heuristic is: the lower the capacity sensitivity of a layer, the easier it is to compress the layer to one with fewer parameters. This goes in the spirit of current state-of-the-art results on compression and generalization bounds (Arora et al. (2018), Suzuki et al. (2018), Suzuki et al. (2020)). In particular, in Arora et al. (2018) the authors provide the
notions of noise sensitivity and noise cushions motivated by Gaussian noise
injections. Our first proposed definition for "heat-diffusion noise cushions"
and capacity sensitivity goes as follows:
###### Definition 4.
Let $\eta\sim\mathcal{N}$ be distributed according to a noise distribution $\mathcal{N}$ concentrated in the ball $\|\eta\|\leq\eta_{0}$. We define the
capacity sensitivity $S(x,A_{i};t)$ of a layer $A_{i}$ at the point $x$ as
$S(x,A_{i};t):=\operatorname{\mathbb{E}}_{\eta\sim\mathcal{N}}\frac{\left|\psi_{E_{f}}(\phi(A_{i}(x+\|x\|\eta)),t)-\psi_{E_{f}}(\phi(A_{i}x),t)\right|}{\left|\psi_{E_{f}}(\phi(A_{i}x),t)\right|}.$
(44)
We denote the maximum and expected sensitivity respectively as
$S^{m}(A_{i};t):=\max_{x\in\mathcal{X}}S(x,A_{i};t),\quad
S^{e}(A_{i};t):=\operatorname{\mathbb{E}}_{x\sim\mathcal{X}}S(x,A_{i};t).$
(45)
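Since $\mathcal{N}$ is left unspecified, a Monte Carlo estimate of $S(x,A_{i};t)$ requires choosing a concrete noise distribution; the sketch below (our naming and choices) samples $\eta$ at a uniformly random radius in the ball $\|\eta\|\leq\eta_{0}$ and assumes an estimator `psi` of the hitting probability, such as the Brownian simulation from Appendix A.2:

```python
import numpy as np

def capacity_sensitivity(x, layer, phi, psi, t, eta0, n_noise=100, rng=None):
    """Monte Carlo estimate of the capacity sensitivity S(x, A_i; t) of (44).
    `layer` maps z -> A_i z, `phi` is the nonlinearity, and `psi(z, t)` estimates
    the hitting probability; psi at the unperturbed point must be nonzero."""
    rng = np.random.default_rng() if rng is None else rng
    base = psi(phi(layer(x)), t)
    ratios = []
    for _ in range(n_noise):
        eta = rng.standard_normal(x.shape)
        eta *= rng.uniform(0, eta0) / np.linalg.norm(eta)  # noise in ||eta|| <= eta0
        perturbed = psi(phi(layer(x + np.linalg.norm(x) * eta)), t)
        ratios.append(abs(perturbed - base) / abs(base))
    return float(np.mean(ratios))
```

The maximum and expected sensitivities of (45) are then obtained by taking the maximum, respectively the average, of this estimate over the sample points $x\in\mathcal{X}$.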
Now we use Algorithm $1$ to investigate a method for compressing a layer
$A_{i}$ so that the capacity properties are preserved.
###### Proposition B.5.
Let a particular layer $A_{i}$ of the neural net be of dimension $h_{1}\times
h_{2}$. Then, Algorithm $1$ generates an approximation $\hat{A}_{i}$ with
$O\left(\log(h_{1}h_{2})/\beta\alpha^{2}\right)$ parameters for which we guarantee that $\psi_{E_{f}(y)}(\phi(\hat{A}_{i}x),t)$ agrees with $\psi_{E_{f}(y)}(\phi(A_{i}x),t)$ up to a relative error $\epsilon$, except with probability at most $\beta+S^{m}(A_{i};t)/\epsilon$.
###### Proof.
Using the fact that
$\psi_{E_{f}(y)}\left(\phi\left(\hat{A}x\right),t\right)=\psi_{E_{f}(y)}\left(\phi\left(A(x+\|x\|\eta)\right),t\right)$,
let $A_{\delta}$ denote the event that
$\left|\frac{\psi_{E_{f}(y)}\left(\phi(\hat{A}_{i}x),t\right)-\psi_{E_{f}(y)}\left(\phi(A_{i}x),t\right)}{\psi_{E_{f}(y)}(\phi(A_{i}x),t)}\right|=\left|\frac{\psi_{E_{f}(y)}(\phi(A_{i}(x+\|x\|\eta)),t)-\psi_{E_{f}(y)}(\phi(A_{i}x),t)}{\psi_{E_{f}(y)}(\phi(A_{i}x),t)}\right|\geq\delta.$
For every fixed $x\in\mathcal{X}$, combining (44) with Markov’s inequality immediately yields
$\operatorname{\mathbb{P}}\left[A_{\delta}\right]\leq\frac{S(x,A_{i};t)}{\delta}.$
(46)
Since Algorithm $1$ yields controlled distortion, we have that given error
parameters $\alpha,\beta$, one gets $\hat{A}$, a stochastic approximation of
$A$ such that
$\operatorname{\mathbb{P}}\left[\|\hat{A}_{i}(x)-A_{i}(x)\|\geq\alpha\left\|A_{i}\right\|_{F}\|x\|\right]\leq\beta.$
(47)
Here the reduced number of the parameters of $\hat{A}$ is
$O\left(\log(h_{1}h_{2})/\beta\alpha^{2}\right)$. With that, we have
$\displaystyle\operatorname{\mathbb{P}}\left[\hat{A}_{\delta}\right]$
$\displaystyle=\operatorname{\mathbb{P}}\left[\left(\frac{\|\hat{A}_{i}(x)-A_{i}(x)\|}{\alpha\|A_{i}\|_{F}\|x\|}<1\right)\bigcap\hat{A}_{\delta}\right]+\operatorname{\mathbb{P}}\left[\left(\frac{\|\hat{A}_{i}(x)-A_{i}(x)\|}{\alpha\|A_{i}\|_{F}\|x\|}\geq
1\right)\bigcap\hat{A}_{\delta}\right]$ (48)
$\displaystyle\leq\operatorname{\mathbb{P}}\left[A_{\delta}\right]+\operatorname{\mathbb{P}}\left[\frac{\|\hat{A}_{i}(x)-A_{i}(x)\|}{\alpha\|A_{i}\|_{F}\|x\|}\geq
1\right]$ $\displaystyle\leq\frac{S(x,A_{i};t)}{\delta}+\beta.$
This concludes the claim. ∎
The above proposition may seem suboptimal and even somewhat tautological, but we include all the details because one way forward is now evident. In particular, the step in (48) can be improved if we know that whenever the
distance between two vectors $z$ and $w$ is bounded above, then
$\psi_{E_{f}}(z,t)-\psi_{E_{f}}(w,t)$ is bounded above. In plain language, we
would like to say the following: if two points are close, then the respective
probabilities of Brownian particles starting from them and hitting
$\mathcal{N}_{f}$ are also close. This is too much to expect in general, but
can be accomplished when one places, in addition, certain nice assumptions on
the decision boundary.
### B.4 Proof of first part of Proposition 5.2 of the main text
We will break down the proof over three propositions, to illustrate the flow
of ideas. The first is the case of the hyperplane which we discussed to some
extent above in our curvature analysis (see also Lemma 3.2 of the main text).
###### Proposition B.6.
If the decision boundary $\mathcal{N}_{f}$ is a hyperplane, then given
$\beta,\epsilon$, one can find an $\alpha$ for which the compression scheme of
Algorithm $1$ gives a compression of a layer $A_{i}$ of dimension $h_{1}\times
h_{2}$ to $\hat{A}_{i}$ with $O\left(\log(h_{1}h_{2})/\beta\alpha^{2}\right)$
parameters such that
$\operatorname{\mathbb{P}}\left[\|A_{i}(x)-\hat{A}_{i}(x)\|\leq\alpha\|A_{i}\|_{F}\|x\|\right]\geq 1-\beta,$
and
$\left|\psi_{E_{f}}(A_{i}x,t)-\psi_{E_{f}}(\hat{A}_{i}x,t)\right|\leq\epsilon$
with probability at least $1-\beta$. Here
$t=O\left(\text{dist}(A_{i}(x),\mathcal{N}_{f})^{2}\right)$. The choice of
$\alpha$ is made explicit by (50) below.
###### Proof.
Let $w,z\in\mathbb{R}^{n}$ be two points such that $\|w-z\|\leq\delta$, and
without loss of generality assume that $z$ is at distance $d$ from the
hyperplane $\mathcal{N}_{f}$. It is clear that the maximum value of
$\left|\psi_{E_{f}}(w,t)-\psi_{E_{f}}(z,t)\right|$ is bounded by the probability
that a Brownian particle strikes, within time $t$, a
“slab” of thickness $\delta$ lying at distance $d-\delta$ from its starting
point (a slab is a tubular neighborhood of a hyperplane). Then,
$\displaystyle 0\leq\left|\psi_{E_{f}}(w,t)-\psi_{E_{f}}(z,t)\right|$
$\displaystyle\leq
2\left(\Phi\left(-\frac{d-\delta}{\sqrt{t}}\right)-\Phi\left(-\frac{d}{\sqrt{t}}\right)\right),$
which implies that
$\displaystyle\left|\frac{\psi_{E_{f}}(w,t)-\psi_{E_{f}}(z,t)}{\psi_{E_{f}}(z,t)}\right|$
$\displaystyle\leq
2\left(\frac{\Phi\left(-\frac{d-\delta}{\sqrt{t}}\right)}{\Phi\left(-\frac{d}{\sqrt{t}}\right)}-1\right).$
From the above calculation, we get that
$\displaystyle\frac{\|A_{i}(x)-\hat{A}_{i}(x)\|}{\|A_{i}\|_{F}\|x\|}\leq\alpha$
$\displaystyle\implies\frac{\left|\psi_{E_{f}}(A_{i}(x),t)-\psi_{E_{f}}(\hat{A}_{i}(x),t)\right|}{\left|\psi_{E_{f}}(A_{i}(x),t)\right|}\leq
2\left(\frac{\Phi\left(-\frac{d-\delta}{\sqrt{t}}\right)}{\Phi\left(-\frac{d}{\sqrt{t}}\right)}-1\right),$
(49)
where
$\delta=\alpha\|A\|_{F}\|x\|.$
We wish to apply the above estimate in the regime $t=O(d^{2})$. For the sake
of specificity, let $t=c_{n}d^{2}$. Let $A_{\epsilon}$ denote the event that
$\left|\psi_{E_{f}}(A_{i}(x),t)-\psi_{E_{f}}(\hat{A}_{i}(x),t)\right|>\epsilon$.
Now, given $\epsilon$, from (49) one can choose $\alpha$ such that
$\operatorname{\mathbb{P}}\left[\left(\frac{\|\hat{A}_{i}(x)-A_{i}(x)\|}{\alpha\|A_{i}\|_{F}\|x\|}\leq
1\right)\bigcap A_{\epsilon}\right]=0.$
It suffices to choose $\alpha$ such that when $t=c_{n}d^{2}$,
$2\left(\frac{\Phi\left(-\frac{d-\delta}{\sqrt{t}}\right)}{\Phi\left(-\frac{d}{\sqrt{t}}\right)}-1\right)=\epsilon,\text{
where }\delta=\alpha\|A\|_{F}\|x\|.$ (50)
Then $A_{\epsilon}\subseteq\left\{\|\hat{A}_{i}(x)-A_{i}(x)\|>\alpha\|A_{i}\|_{F}\|x\|\right\}$, so that
$\operatorname{\mathbb{P}}[A_{\epsilon}]\leq\beta.$
∎
###### Remark B.7.
In the above calculation, the nonlinearity $\phi$ can be introduced easily.
Clearly, by the compression properties of Algorithm $1$, we have that
$\|\hat{A}_{i}(\phi x)-A_{i}(\phi x)\|\leq\alpha\|A_{i}\|_{F}\|\phi
x\|\leq\alpha\lambda\|A_{i}\|_{F}\|x\|$, where $\lambda$ is the Lipschitz
constant associated to the nonlinearity $\phi$. In particular, if $\phi$ is
the ReLU, then $\lambda=1$. This gives us that if
$\|\hat{A}_{i}(x)-A_{i}(x)\|\leq\alpha\|A\|_{F}\|x\|$,
$\displaystyle\frac{\|A_{i}(\phi x)-\hat{A}_{i}(\phi
x)\|}{\|A_{i}\|_{F}\|x\|}\leq\alpha\lambda$
$\displaystyle\implies\frac{\left|\psi_{E_{f}}(A_{i}(x),t)-\psi_{E_{f}}(\hat{A}_{i}(x),t)\right|}{\left|\psi_{E_{f}}(A_{i}x,t)\right|}\leq
2\left(\frac{\Phi\left(-\frac{d-\delta}{\sqrt{t}}\right)}{\Phi\left(-\frac{d}{\sqrt{t}}\right)}-1\right),$
(51)
where
$\delta=\alpha\lambda\|A\|_{F}\|x\|.$
We mention in passing that the above proposition gives a connection between
our capacity sensitivity $S(x,A;t)$ and the noise sensitivity
$\psi_{\mathcal{N}}$ defined by Arora et al. (2018).
Now consider the case of a curved hypersurface $H$ (thought of as the decision
boundary $\mathcal{N}_{f}$) which is “sandwiched” between two hyperplanes
$H_{1}$ and $H_{3}$. Assume that the hypersurface is at distance $d$ from the
point $z$, and that the distance between $H_{1}$ and $H_{3}$ is $l$.
###### Proposition B.8.
In the above setting, all the conclusions of Proposition B.6 apply to $H$.
###### Proof.
We have that
$\left|\psi_{\mathcal{N}_{f}}(z,t)-\psi_{\mathcal{N}_{f}}(w,t)\right|$ is less
than or equal to the maximum of the quantities
$\left|\Phi\left(-\frac{d}{\sqrt{t}}\right)-\Phi\left(-\frac{d+\delta+l}{\sqrt{t}}\right)\right|$,
$\left|\Phi\left(-\frac{d}{\sqrt{t}}\right)-\Phi\left(-\frac{d-\delta+l}{\sqrt{t}}\right)\right|$,
$\left|\Phi\left(-\frac{d+l}{\sqrt{t}}\right)-\Phi\left(-\frac{d+\delta}{\sqrt{t}}\right)\right|,$
$\left|\Phi\left(-\frac{d+l}{\sqrt{t}}\right)-\Phi\left(-\frac{d-\delta}{\sqrt{t}}\right)\right|$.
Let $M(d,t)$ denote this maximum. As argued before,
$\psi_{\mathcal{N}_{f}}(z,t)\geq\Phi\left(-\frac{d+l}{\sqrt{t}}\right)$. That
gives,
$\frac{\left|\psi_{E_{f}}(z,t)-\psi_{E_{f}}(w,t)\right|}{\left|\psi_{E_{f}}(z,t)\right|}\leq\frac{M(d,t)}{\Phi\left(-\frac{d+l}{\sqrt{t}}\right)}.$
The rest of the argument is similar to the proof of Proposition B.6, and we
skip the details. ∎
Before moving on to the case of controlled curvature, we need a technical
lemma. We state it explicitly because it seems to us that it could potentially
have other applications.
###### Lemma B.9.
Let $p\in\mathbb{R}^{n}$, and consider a cuboid $Q\subset\mathbb{R}^{n}$ with
side lengths $a_{1},\cdots,a_{n}$. Let $q\in Q$ be the unique point which
attains $d=\|p-q\|=\operatorname{dist}(p,Q)$. Lastly, assume
that the line segment $\overline{pq}$ is perpendicular to the side of $Q$ on
which $q$ lies. Then
$\psi_{Q}(p,t)=2^{n}\left(\Phi\left(-\frac{a_{1}}{\sqrt{t}}\right)-\Phi\left(-\frac{a_{1}+d}{\sqrt{t}}\right)\right)\prod_{j=2}^{n}\left(\Phi\left(\frac{a_{j}}{2\sqrt{t}}\right)-\Phi\left(-\frac{a_{j}}{2\sqrt{t}}\right)\right).$
(52)
###### Proof.
The proof follows easily from the fact that in an $n$-dimensional Brownian
motion each coordinate executes a standard $1$-dimensional Brownian
motion independently, together with an application of the reflection principle.
The ideas are very similar to the proof of Lemma 3.2 of the main text. ∎
As an immediate application of Lemma B.9, we now show that the nice properties
of the decision boundaries as mentioned in Propositions B.6 and B.8 above are
also shared by hypersurfaces with controlled curvature.
Figure 12: Covering by cuboids of side length $\delta$.
###### Proposition B.10.
Let $H$ be a hypersurface which is diffeomorphic to a hyperplane, of curvature
$\kappa$ (in the sense of (26)) satisfying $r\leq\kappa\leq R$. Then the
conclusion of Proposition B.6 applies to $H$.
###### Proof.
Let $z$ be a point with $d:=\text{dist}(z,H)$, and let $w$ be another point
such that $\|z-w\|=\delta$. Let $E$ denote the misclassification region defined by
$H$. Then
$\left|\psi_{E}(z,t)-\psi_{E}(w,t)\right|\leq\psi_{A}(z,t),$
where $A$ denotes the region “sandwiched” between $H$ and $H-\delta$. As
before, we will ultimately use $t$ in the regime $O(d^{2})$. Now, given $t$,
start by considering a ball $B(z,\lambda_{t})$, and let
$A_{\lambda_{t}}:=A\cap B(z,\lambda_{t})$. Here, $\lambda_{t}$ has been chosen
so that $\psi_{A_{\lambda_{t}}}(z,t)$ comes arbitrarily close to
$\psi_{A}(z,t)$. We will now cover $A_{\lambda_{t}}$ with $N$ cubes
$Q_{j},j=1,\cdots,N$ such that each cube $Q_{j}$ has side lengths comparable to
$\delta$. Due to the controlled curvature, we know that the cover has
controlled multiplicity and
$N\sim_{r,R,\lambda_{t}}1/\delta^{n-1}.$
Since we know that
$\psi_{A_{\lambda_{t}}}(z,t)\leq\sum_{j=1}^{N}\psi_{Q_{j}}(z,t),$
it suffices to prove that the RHS above is $O(\delta)$. Via Lemma B.9 above,
it suffices to prove the following:
$\int_{-a}^{a}e^{-x^{2}}\;dx=O(a).$
Now, we employ the following known trick: passing to polar coordinates,
$\displaystyle\left(\int_{-a}^{a}e^{-x^{2}}\;dx\right)^{n}\leq\int_{B(0,a\sqrt{n})}e^{-|x|^{2}}\;dx=\omega_{n-1}\int_{0}^{a\sqrt{n}}e^{-r^{2}}r^{n-1}\;dr=\frac{\omega_{n-1}}{2}\int_{0}^{na^{2}}e^{-\rho}\rho^{n/2-1}\;d\rho=\frac{\omega_{n-1}}{2}\gamma(n/2,na^{2}),$
where $\omega_{n-1}$ denotes the surface area of the unit sphere and
$\gamma(s,x)$ denotes the usual lower incomplete Gamma function. From the
well-known asymptotics $\gamma(s,x)\sim x^{s}/s$ as $x\to 0$, the right-hand
side is $O(a^{n})$ for small enough $a$, whence the integral is $O(a)$. ∎
### B.5 Compression parameters: general case
Now we go for the full neural net compression, which is essentially an
iterated version of Proposition B.5. Consider a neural net $A$ consisting of
$m$ layers, and let $\hat{A}_{j}$ denote the neural net $A$ whose first $j$
layers have been compressed using the scheme in Algorithm $1$ at each level.
By way of notation, let $A^{j}$ denote the $j$th layer of the original neural
net (assumed to be of dimension $h^{1}_{j}\times h^{2}_{j}$), and
$\hat{A}^{j}$ the $j$th layer of the compressed neural net. Then, we have the
following
###### Proposition B.11.
Given $\varepsilon>0$ and $m$ parameter pairs $(\alpha_{j},\beta_{j})$, we can
find a compression $\hat{A}_{m}$ with
$\displaystyle{\sum_{j=1}^{m}O\left(\log(h^{1}_{j}h^{2}_{j})/\beta_{j}\alpha_{j}^{2}\right)}$
parameters and associated parameters $\rho_{j}$ (with relative errors
$\delta_{j}$, defined in the proof) such that
$\left|\psi_{E_{f}}(Ax,t)-\psi_{E_{f}}(\hat{A}_{m}x,t)\right|\leq\sum_{j=1}^{m}\rho_{j}<\varepsilon$
with probability at least $\displaystyle{\prod_{j=1}^{m}\tau_{j}}$, where
$\tau_{j}=\prod^{j}_{i=1}(1-\beta_{i})-S(\hat{x}^{j-1},A_{j};t)/\delta_{j}.$
###### Proof.
We see that
$\displaystyle\left|\psi_{E_{f}}(Ax,t)-\psi_{E_{f}}(\hat{A}_{m}x,t)\right|$
$\displaystyle\leq\left|\psi_{E_{f}}(Ax,t)-\psi_{E_{f}}(\hat{A}_{1}x,t)\right|+\left|\psi_{E_{f}}(\hat{A}_{1}x,t)-\psi_{E_{f}}(\hat{A}_{2}x,t)\right|$
$\displaystyle+\left|\psi_{E_{f}}(\hat{A}_{2}x,t)-\psi_{E_{f}}(\hat{A}_{3}x,t)\right|+\cdots+\left|\psi_{E_{f}}(\hat{A}_{m-1}x,t)-\psi_{E_{f}}(\hat{A}_{m}x,t)\right|.$
We will be compressing one individual layer at a time. At the first layer, we
start with the input $x$ taken from the sample set.
Algorithm $1$ gives us a compression $\hat{A}^{1}$ that satisfies, for the given
$\alpha_{1},\beta_{1}$,
$\|A^{1}x-\hat{A}^{1}x\|\leq\alpha_{1}\|A^{1}\|_{F}\|x\|$
with probability at least $1-\beta_{1}$. Here the reduced number of parameters
of $\hat{A}^{1}$ is
$O\left(\log(h^{1}_{1}h^{2}_{1})/\beta_{1}\alpha_{1}^{2}\right)$. As a result,
$\left|\psi_{E_{f}}(Ax,t)-\psi_{E_{f}}(\hat{A}_{1}x,t)\right|\leq\rho_{1},$
where in the general situation (that is, without any additional assumption on
the decision boundary $\mathcal{N}_{f}$),
$\rho_{1}=\psi_{E_{f}}(\phi(A_{1}x),t)\delta_{1}$ with probability at least
$1-S(x,A_{1};t)/\delta_{1}-\beta_{1}$ (this follows from Proposition B.5, by an
application of Markov’s inequality).
Now that the first layer has been compressed, the input at the second
layer is the vector $\phi\hat{A}^{1}x$. Once again, we estimate that with
given parameters $\alpha_{2},\beta_{2}$, Algorithm $1$ generates a compression
$\hat{A}^{2}$ at the second layer which satisfies (with probability at least
$1-\beta_{2}$)
$\displaystyle\|A^{2}(\phi\hat{A}^{1}x)-\hat{A}^{2}(\phi\hat{A}^{1}x)\|$
$\displaystyle\leq\alpha_{2}\|A^{2}\|_{F}\|\phi\hat{A}^{1}x\|$
$\displaystyle\leq\lambda\alpha_{2}\|A^{2}\|_{F}\|\hat{A}^{1}x\|\quad\text{(by the Lipschitz continuity of the nonlinearity).}$
So, with probability at least $(1-\beta_{2})(1-\beta_{1})$, we have that
$\displaystyle\|A^{2}(\phi\hat{A}^{1}x)-\hat{A}^{2}(\phi\hat{A}^{1}x)\|$
$\displaystyle\leq\lambda\alpha_{2}\|A^{2}\|_{F}\left[\|A^{1}x\|+\alpha_{1}\|A^{1}\|_{F}\|x\|\right]$
$\displaystyle\leq\lambda\alpha_{2}\|A^{2}\|_{F}\left[\|A^{1}\|_{F}\|x\|+\alpha_{1}\|A^{1}\|_{F}\|x\|\right]$
$\displaystyle=\lambda\alpha_{2}(1+\alpha_{1})\|A^{2}\|_{F}\|A^{1}\|_{F}\|x\|.$
We have then
$\left|\psi_{E_{f}}(\hat{A}_{1}x,t)-\psi_{E_{f}}(\hat{A}_{2}x,t)\right|\leq\rho_{2},$
where in the general situation,
$\rho_{2}=\psi_{E_{f}}(\phi(A^{2}\hat{x}^{1}),t)\delta_{2}$ with probability
at least $(1-\beta_{1})(1-\beta_{2})-S(\hat{x}^{1},A_{2};t)/\delta_{2}$. Here
$\hat{x}^{j}$ denotes the output at the $j$th layer of the compressed net.
It can be checked via induction that the above process iterated $j$ times
gives that
$\|A^{j}(\phi(\hat{x}^{j-1}))-\hat{A}^{j}(\phi(\hat{x}^{j-1}))\|\leq\lambda^{j-1}\alpha_{j}\prod_{i=1}^{j-1}(1+\alpha_{i})\prod_{i=1}^{j}\|A_{i}\|_{F}\|x\|$
with probability at least $\displaystyle{\prod_{i=1}^{j}(1-\beta_{i})}$. That
implies that
$\left|\psi_{E_{f}}(\hat{A}_{j-1}x,t)-\psi_{E_{f}}(\hat{A}_{j}x,t)\right|\leq\rho_{j},$
where in the general situation,
$\rho_{j}=\psi_{E_{f}}(\phi(A_{j}\hat{x}^{j-1}),t)\delta_{j}$ with probability
at least
$\displaystyle{\tau_{j}=\prod_{i=1}^{j}(1-\beta_{i})-S(\hat{x}^{j-1},A_{j};t)/\delta_{j}}$.
Finally, this implies that
$\displaystyle\left|\psi_{E_{f}}(Ax,t)-\psi_{E_{f}}(\hat{A}_{m}x,t)\right|$
$\displaystyle\leq\sum_{j=1}^{m}\rho_{j},$ (53)
with probability at least
$\prod_{j=1}^{m}\tau_{j},$
and the reduced number of parameters in the compressed net is
$\sum_{j=1}^{m}O\left(\log(h^{1}_{j}h^{2}_{j})/\beta_{j}\alpha_{j}^{2}\right).$
∎
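To make the bookkeeping in the above induction concrete, the following minimal sketch (our own illustrative Python, not part of Algorithm $1$; all function and variable names are ours, and the numeric inputs are made up) evaluates the layer-wise distortion bound and the success probabilities $\tau_{j}$:

```python
import math

def distortion_bound(j, alphas, frob_norms, x_norm, lam=1.0):
    """Bound from the induction step:
    lam^{j-1} * alpha_j * prod_{i<j}(1 + alpha_i) * prod_{i<=j}||A^i||_F * ||x||
    (lam is the Lipschitz constant of the nonlinearity; lam = 1 for the ReLU)."""
    return (lam ** (j - 1) * alphas[j - 1]
            * math.prod(1 + a for a in alphas[:j - 1])
            * math.prod(frob_norms[:j]) * x_norm)

def success_probabilities(betas, sensitivities, deltas):
    """tau_j = prod_{i<=j}(1 - beta_i) - S(x_hat^{j-1}, A_j; t) / delta_j."""
    return [math.prod(1 - b for b in betas[:j]) - sensitivities[j - 1] / deltas[j - 1]
            for j in range(1, len(betas) + 1)]

# Illustrative three-layer example (all numbers invented):
taus = success_probabilities(betas=[0.05] * 3,
                             sensitivities=[0.01] * 3, deltas=[0.1] * 3)
overall = math.prod(taus)  # the probability appearing in Proposition B.11
```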
### B.6 Compression parameters: tame decision boundary
We are left to indicate the proof of the second part of Proposition 5.2 from
the main text. This follows straightforwardly from the proof of Proposition
B.11, using the bounds in Propositions B.6, B.8 and B.10 at every step instead
of the bound from Proposition B.5, as was done in the above proof.
### B.7 Second (alternative) definition of capacity sensitivity
As an alternative working definition of capacity sensitivity, we define the
following:
###### Definition 5.
$S(x,A;t):=\operatorname{\mathbb{E}}_{\gamma\in\mathcal{B},\eta\sim\mathcal{N}}\left|\frac{\psi_{E_{f},\gamma}(\phi(A(x+\|x\|\eta)),t)-\psi_{E_{f}}(\phi(Ax),t)}{\psi_{E_{f}}(\phi(Ax),t)}\right|,$
(54)
where the expectation is over $\eta\sim\mathcal{N}$ and over Brownian paths
$\gamma$ starting at the point $\phi(A(x+\|x\|\eta))$ (the path space starting at
$\phi(A(x+\|x\|\eta))$ being endowed with the Wiener measure). The random
variable $\psi_{E_{f},\gamma}(\phi(A(x+\|x\|\eta)),t)$ is defined as $1$ if
the path $\gamma$ strikes $E_{f}$ within time $t$ and $0$ otherwise.
From the point of view of ML computation, Definition 5 has a slight advantage
over Definition 4: it is computationally more efficient in view of the
following sampling scheme:
###### Proposition B.12.
If $\eta_{1},...,\eta_{m}$ denote $m$ sampled values of $\eta$ and
$\gamma_{j1},\gamma_{j2},...,\gamma_{jk}$ denote $k$ sampled Brownian paths
starting at $\phi(A(x+\|x\|\eta_{j}))$, then
$\overline{X}=\frac{1}{mk}\sum_{j=1}^{m}\sum_{l=1}^{k}X_{jl},$
where
$X_{jl}=\left|\frac{\psi_{E_{f},\gamma_{jl}}(\phi(A(x+\|x\|\eta_{j})),t)-\psi_{E_{f}}(\phi(Ax),t)}{\psi_{E_{f}}(\phi(Ax),t)}\right|$
approximates $S(x,A;t)$ well with high probability.
###### Proof.
Begin by sampling $m$ values $\eta_{1},...,\eta_{m}$ of $\eta$ and $k$
Brownian paths $\gamma_{j1},\gamma_{j2},...,\gamma_{jk}$ starting from each
point $\phi(A(x+\|x\|\eta_{j}))$. Attached to each such selection is an independent
random variable $X_{jl}\psi_{E_{f}}(\phi(Ax),t)$ which takes values in
$[0,1]$. For each $j,l$, we have that
$\operatorname{\mathbb{E}}\left(X_{jl}\psi_{E_{f}}(\phi(Ax),t)\right)=S(x,A;t)\psi_{E_{f}}(\phi(Ax),t)$.
Let $\overline{X}$ denote the mean of all the random variables
$X_{jl},\ j=1,..,m,\ l=1,...,k$. Now, we can bring in Hoeffding’s
version of the Chernoff concentration bounds, which gives us that
$\operatorname{\mathbb{P}}\left(\left|\overline{X}-S(x,A;t)\right|\geq\frac{\tau}{\psi_{E_{f}}(\phi(Ax),t)}\right)\leq
2e^{-2\tau^{2}mk}.$ (55)
∎
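A minimal sketch of this sampling scheme follows (our own illustrative code: `forward`, standing in for $x\mapsto\phi(Ax)$, and `in_E_f`, a membership oracle for the misclassification region, are assumptions replacing the trained network, as is the Gaussian noise model for $\eta$):

```python
import numpy as np

def estimate_S(x, forward, in_E_f, t, m=100, k=100, n_steps=400, rng=None):
    """Monte Carlo estimator of S(x, A; t) following Proposition B.12."""
    rng = rng if rng is not None else np.random.default_rng(0)
    dim = forward(x).shape[0]

    def hit_prob(start, paths):
        # fraction of Brownian paths (step variance t / n_steps) entering
        # E_f within time t -- estimates psi_{E_f}(start, t)
        hits = 0
        for _ in range(paths):
            pos = start.copy()
            for _ in range(n_steps):
                pos += rng.normal(scale=np.sqrt(t / n_steps), size=dim)
                if in_E_f(pos):
                    hits += 1
                    break
        return hits / paths

    psi0 = hit_prob(forward(x), paths=m * k)  # reference hitting probability
    total = 0.0
    for _ in range(m):
        eta = rng.normal(size=x.shape) / np.sqrt(x.size)  # assumed noise model
        p = hit_prob(forward(x + np.linalg.norm(x) * eta), paths=k)
        # average over paths of |1{hit} - psi0| / psi0, per Definition 5
        total += (p * abs(1 - psi0) + (1 - p) * psi0) / psi0
    return total / m
```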
## Appendix C: Datasets, sampling details, training details and further
experiments
### C.1 Technical Setup
The experimental part of this work was conducted mainly on a CUDA 10.2 GPU
rack consisting of four NVIDIA TITAN V units: this includes the model training
as well as the Brownian motion sampling and further statistics. The neural network
framework of choice was PyTorch 1.5. We provide the training as well as the
sampling code for our experiments.
### C.2 Datasets
We worked with the well-known MNIST and CIFAR-10 datasets. MNIST is a
$784$-dimensional dataset consisting of $60000$ images of handwritten
digits of dimensions $(28,28)$; $50000$ images were used for training
and $10000$ for validation. CIFAR-10 is a collection of $60000$ 32-by-32 color
images (i.e. a $3072$-dimensional dataset) corresponding to 10 different
classes: airplanes, cars, birds, cats, deer, dogs, frogs, horses, ships and
trucks; $50000$ images were used for training and $10000$ for validation.
As pointed out in the main text, adversarially robust decision boundaries
exhibit fundamental differences between the MNIST and CIFAR-10 datasets.
MNIST yields particularly simple robust boundaries stemming from its almost
binary nature, as elaborated in Schmidt et al. (2017) and confirmed in Ford et
al. (2019). CIFAR-10, on the other hand, is notoriously vulnerable to attacks,
which is reflected in the quantities we measure. For our experiments this
means that adversarial/noisy training flattens the surrounding boundary, i.e.
saturates the isoperimetric bound, while the boundary nevertheless retains a
spiky structure, as reflected in the measurements of the isocapacitory
bounds. For MNIST, on the other hand, the approximately binary nature of the
examples gives the decision boundary much less ’freedom’, resulting in a less
distinct quantitative representation.
For some exploratory toy examples (cf. Fig. 1, Fig. 2, Fig. 3 in the main
text) we generated a planar dataset that alternates along a circle of radius
$r=5$: for a given ray through the origin we generate several points on the
ray at approximately distance $r$ from the origin and assign them to class
$0$; we then rotate the ray by a small angle counter-clockwise, sample several
points on the rotated ray, again at approximately distance $r$ from the origin,
and this time assign them to class $1$. Repeating this process produces the
mentioned 2-class dataset, which alternates along the circle of radius $r$ and
consists of 1250 points; a minimal generator is sketched below.
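The following sketch generates such a dataset (our own illustrative code; the jitter scale and the split into 125 rays of 10 points each are assumptions consistent with the 1250-point total):

```python
import numpy as np

def circle_dataset(n_rays=125, pts_per_ray=10, r=5.0, jitter=0.15, seed=0):
    """Two-class planar dataset alternating along a circle of radius r:
    points on consecutive rays through the origin get alternating labels."""
    rng = np.random.default_rng(seed)
    xs, ys = [], []
    for i in range(n_rays):
        theta = 2 * np.pi * i / n_rays          # rotate the ray by a small angle
        radii = r + jitter * rng.standard_normal(pts_per_ray)
        pts = np.stack([radii * np.cos(theta), radii * np.sin(theta)], axis=1)
        xs.append(pts)
        ys.append(np.full(pts_per_ray, i % 2))  # alternate class 0 / class 1
    return np.concatenate(xs), np.concatenate(ys)

X, y = circle_dataset()  # 1250 points in two alternating classes
```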
### C.3 Sampling details
An evaluation of the isocapacitory saturation $\psi$ is obtained by sampling
$10000$ Brownian paths with $400$ steps. In light of the curse of
dimensionality, this configuration seems adequate for our purposes:
theoretically, by projecting Brownian motion along the normal directions of
the decision boundary one sees that estimating hitting probabilities is
essentially a lower dimensional problem, e.g. 1-dimensional if the decision
boundary is a hyperplane; practically, our experiments were numerically stable
w.r.t. resampling and sample batch size. A vectorized sketch of this path
sampler is given below.
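The sketch below is our own illustrative code; `is_misclassified` is an assumed batch oracle standing in for the trained classifier (mapping a batch of points to booleans):

```python
import numpy as np

def psi_hat(x, is_misclassified, t, n_paths=10_000, n_steps=400, seed=0):
    """Estimate the hitting probability psi(x, t): the fraction of Brownian
    paths started at x that enter the misclassification region within time t."""
    rng = np.random.default_rng(seed)
    pos = np.tile(x, (n_paths, 1)).astype(float)
    alive = np.ones(n_paths, dtype=bool)       # paths that have not hit yet
    for _ in range(n_steps):
        step = rng.normal(scale=np.sqrt(t / n_steps), size=pos.shape)
        pos[alive] += step[alive]              # advance only surviving paths
        hit = alive & is_misclassified(pos)    # newly collided paths
        alive &= ~hit
    return 1.0 - alive.mean()
```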
Further, for each data point $x$ the relative error volume $\mu(x,r)$ is
computed by sampling $10000$ points uniformly in $B(x,r)$ (sketched below).
To compare with isoperimetric bounds (Subsection 3.2), for each data point $x$
we sample $1000$ points, normally distributed as $N(x,r/\sqrt{n})$ and
concentrated around $x$ in the ball $B(x,r)$, and apply PGD with $400$ steps
to obtain the distance to the decision boundary $\mathcal{N}$ (a setup similar
to Ford et al. (2019)). As above, repeated runs reveal an acceptable numerical
stability, on average of the order of $10^{-4}$.
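A sketch of the error-volume estimator (again our own illustrative code, with the same assumed oracle; the uniform-in-ball sampling uses the standard Gaussian-direction construction):

```python
import numpy as np

def mu_hat(x, is_misclassified, r, n_samples=10_000, seed=0):
    """Estimate the relative error volume mu(x, r): the fraction of the ball
    B(x, r) lying in the misclassification region, via uniform sampling."""
    rng = np.random.default_rng(seed)
    n = x.size
    # uniform sampling in the n-ball: normalized Gaussian directions,
    # radii rescaled by U^{1/n}
    g = rng.normal(size=(n_samples, n))
    directions = g / np.linalg.norm(g, axis=1, keepdims=True)
    radii = r * rng.uniform(size=(n_samples, 1)) ** (1.0 / n)
    return is_misclassified(x + radii * directions).mean()
```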
### C.4 Defense training: FGSM vs PGD
In the present work we are interested in how adversarial/noise defense
training is reflected geometrically. To this end we study the application of
two defense strategies - FGSM and PGD.
Previous work (Ford et al. (2019)) indicates that FGSM-based training already
leads to boundary flattening. However, in general it cannot be guaranteed that
FGSM-based adversarial training provides appropriate levels of robustness
against strong adversaries, e.g. iterative attacks - recently, Wong et al.
(2020) showed that only with certain proper design choices (e.g. a random
start) is FGSM-based training robust. This indicates that, if not set up
carefully, FGSM-based and stronger defense trainings (e.g. PGD-based
adversarial training as in Madry et al. (2018)) can yield very different
geometries of the decision boundary. Therefore, we opt for evaluating the
FGSM-based as well as the PGD-based defense in an attempt to reveal the
relationship between the decision boundaries of a truly robust model and the
isocapacitory saturation values. Details are given in Fig. 4 and the
accompanying analysis. Minimal sketches of both attacks are given below.
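The sketches below are generic PyTorch implementations of the two named attacks, not the exact training code of this work; input clamping to the valid pixel range is omitted for brevity:

```python
import torch

def fgsm(model, x, y, eps):
    """One-step FGSM perturbation (Goodfellow et al.)."""
    x = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

def pgd(model, x, y, eps, alpha, n_steps, random_start=True):
    """Iterated FGSM with projection onto the eps-ball (Madry et al.);
    `random_start` is the design choice highlighted by Wong et al. (2020)."""
    x_adv = x.clone().detach()
    if random_start:
        x_adv = x_adv + torch.empty_like(x_adv).uniform_(-eps, eps)
    for _ in range(n_steps):
        x_adv.requires_grad_(True)
        loss = torch.nn.functional.cross_entropy(model(x_adv), y)
        loss.backward()
        with torch.no_grad():
            x_adv = x_adv + alpha * x_adv.grad.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project onto eps-ball
        x_adv = x_adv.detach()
    return x_adv
```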
### C.5 Training details
#### Training on the CIFAR-10 dataset.
All training procedures used standard data augmentation techniques such as
flips, horizontal shifts and crops, and inputs were normalized with respect to
the data mean and standard deviation. The training of the Wide-ResNets followed
the framework provided by Cubuk et al. (2018) with weight decay 5e-4, batch
size 128 and a decrease of the initial learning rate of $0.1$ by a factor
$0.2$ at epochs 60, 120 and 160. The ResNets were trained with weight decay
1e-4 and a step-wise decrease of the initial learning rate of $0.1$ by a
factor $0.1$ at epochs 100 and 150.
#### Training on the MNIST dataset.
We consider two models trained with various data augmentation techniques. We
trained a LeNet-5 architecture LeCun et al. (1998) over 50 epochs with a
learning rate of 1e-3, weight decay 5e-4 and a batch size of 64, optimizing
the cross entropy loss using root mean square propagation (RMSprop); a sketch
of this configuration is given below. The same procedure was used to train a
basic convolutional neural network consisting of four convolutional and two
subsequent linear layers. While LeNet-5 also uses convolutional layers, it
additionally applies max-pooling after each convolutional layer.
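A minimal sketch of this training configuration (`LeNet5` and `train_loader` are assumed to be defined elsewhere; only the hyperparameters come from the text):

```python
import torch

model = LeNet5()
optimizer = torch.optim.RMSprop(model.parameters(), lr=1e-3, weight_decay=5e-4)
criterion = torch.nn.CrossEntropyLoss()

for epoch in range(50):
    for images, labels in train_loader:   # batches of size 64
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```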
#### Training on the planar toy dataset.
We experimented with several $5$-layer MLP models (each layer containing 20,
40, 70 or 100 hidden units) on the mentioned planar dataset concentrated along
the circle of radius $5$ centered at the origin. Training followed a
straightforward ADAM optimization procedure with a learning rate of 1e-5 and a
batch size of 128.
### C.6 Data manipulations during training
To evaluate how various training methods affect the geometric properties of
the decision boundary, for all models we conduct three major types of
training: training on clean data; training on data with added Gaussian
perturbations with variance $\sigma^{2}=0.4$; and finally, training with
additional adversarial defense methods, where for each training example we add
an adversarially chosen example to the dataset using the fast gradient sign
method (FGSM). For LeNet-5 we also considered the effect of adversarial
training where the additional example is the result of a Brownian random walk
terminated upon collision with the decision boundary (sketched below). See
Fig. 15 for a visual example of perturbations/attacks with the described
methods. The resulting accuracies evaluated on the clean datasets for all
trained models are shown in Tables 1, 2, 3. As an additional benchmark of the
trained models, we evaluated the robustness of the LeNet-5 architectures.
Figure 16 exhibits the resulting accuracies of the trained models on clean
data, PGD attacks with $\epsilon=0.5$ and $\epsilon=1.0$, Gaussian
perturbations, and fog with severity 4 according to the MNIST-C dataset Mu &
Gilmer (2019).
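A sketch of the Brownian random-walk example generation (our own illustrative code; `step_std` and `max_steps` are illustrative choices, not values from the text):

```python
import torch

def brownian_attack(model, x, y, step_std=0.05, max_steps=10_000):
    """Undirected 'attack': a Gaussian random walk from a clean example,
    terminated as soon as it crosses the decision boundary (i.e. the
    predicted class changes)."""
    walker = x.clone()
    with torch.no_grad():
        for _ in range(max_steps):
            walker = walker + step_std * torch.randn_like(walker)
            if model(walker.unsqueeze(0)).argmax(dim=1).item() != y:
                return walker        # first point past the boundary
    return None                      # boundary not reached
```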
### C.7 Isocapacitory and isoperimetric results
Here we summarize the observations indicated by the obtained geometric data.
Besides the results presented in the main text for the models Wide-ResNet 28-10
and LeNet-5 (Fig. 4), we also considered geometric properties for the said
Residual Networks (CIFAR-10) (see Fig. 13) with 32, 44 and 56 layers and a
basic Convolutional Neural Network (MNIST) (see Fig. 14). The results are
consistent with the observations made in the main text.
Figure 13: The statistics obtained from the Residual Networks with 32, 44 and 56 layers on the CIFAR10 dataset. For this experiment we considered the Brownian particles with average displacement equal to the radius of a sphere with relative volume $\mu=0.01$, where $\mu$ is defined according to equation (2) in the main text. The considered quantities are (Left) the probability of a Brownian particle to collide with the decision boundary, (Center Left) the isocapacitory bound, i.e. the ratio of said probability versus relative volume $\mu$, (Center Right) the radius of the obtained sphere equal to the RMSD of the particle and (Right) the saturation of the isoperimetric bound. We observe consistent behavior of the shown quantities for all three models. The trend of isoperimetric saturation (although not so concentrated as in the case of WRN and LeNet-5, Fig. 4) as well as the increase of distances $r$ are present. Again the isocapacitory saturation does not appear to follow a distinguished concentration around the case of a flat decision boundary despite the overall increase in flatness: here both noisy and adversarial training seem to deliver a decrease in $\tau$. In fact, the heat imprint of the ordinarily trained model exhibits a "flatter" behaviour in terms of $\tau$.
Figure 14: Statistics for a convolutional neural network with four convolutional and two linear layers applied to the MNIST dataset. This particular convolutional model shows that not every architecture/training/dataset instance displays the distinguished trend in increasing the isoperimetric saturation – however, even in this scenario the isoperimetric saturation is quite sharp. Similar to other experiments above, the isocapacitory saturation $\tau$ on the other hand does not concentrate to such an extent.
Figure 15: Typical examples of the CIFAR-10 dataset used to train the models. From left to right, the clean image, a PGD adversarial example, a Gaussian perturbation ($\sigma^{2}=0.4$) and the terminal point of a Brownian random walk (undirected attack) immediately after colliding with the decision boundary are shown. The comparison between the PGD adversarial example and the right picture emphasizes the degree to which spikes in the decision boundary deviate from the average distance between boundary and clean example.
Figure 16: Evaluation of the accuracies of the LeNet-5 (MNIST) models under a range of attacks. While for clean data all models exhibit almost similar accuracy, the adversarially trained models exhibit more robustness during various attacks. For all measures we see the worst performance of the models trained on randomly chosen adversarial examples.
Table 1: Summary of validation accuracies for Wide-ResNets 28-10 for various training methods on the CIFAR10 data set. Architecture | Training Type | Accuracy
---|---|---
Wide-ResNet 28-10 | naturally trained | 94.64%
Wide-ResNet 28-10 | trained on noise ($\sigma^{2}=0.1$) | 91.22%
Wide-ResNet 28-10 | trained on noise ($\sigma^{2}=0.4$) | 86.07%
Wide-ResNet 28-10 | adversarially trained (fgsm) | 87.10%
Wide-ResNet 28-10 | adversarially trained (pgd) | 85.05%
Table 2: Summary of validation accuracies for the ResNets with 32, 44 and 56 layers for various training methods on the CIFAR10 data set. Architecture | Training Type | Accuracy
---|---|---
Residual Network 32 layers | naturally trained | 91.81%
Residual Network 32 layers | adversarially trained (fgsm) | 86.13%
Residual Network 32 layers | trained on noise ($\sigma^{2}=0.4$) | 84.36%
Residual Network 44 layers | naturally trained | 92.36%
Residual Network 44 layers | adversarially trained (fgsm) | 88.20%
Residual Network 44 layers | trained on noise ($\sigma^{2}=0.4$) | 84.09%
Residual Network 56 layers | naturally trained | 92.77%
Residual Network 56 layers | adversarially trained (fgsm) | 87.53%
Residual Network 56 layers | trained on noise ($\sigma^{2}=0.4$) | 84.09%
Table 3: Summary of validation accuracies for LeNet-5 and a convolutional neural network with four convolutional and two linear layers for various training methods on the clean MNIST data set. Architecture | Training Type | Accuracy
---|---|---
LeNet-5 | naturally trained | 99.00%
LeNet-5 | adversarially trained (fgsm) | 98.99%
LeNet-5 | adversarially trained (pgd) | 98.55%
LeNet-5 | adversarially trained (Brownian) | 97.17%
LeNet-5 | trained on noise ($\sigma^{2}=0.4$) | 99.02%
CNN | naturally trained | 98.99%
CNN | adversarially trained (fgsm) | 98.65%
CNN | trained on noise ($\sigma^{2}=0.4$) | 98.93%
# Gluon Correlation Functions from Lattice Quantum Chromodynamics
Guilherme Telo Rodrigues Catumba
Supervisors:
Orlando Oliveira
Paulo Silva
(October 2020)
###### Abstract
This dissertation reports on the work developed over the past year by the
author in collaboration with his supervisors, Prof. Dr. Orlando Oliveira and
Dr. Paulo Silva. The main topic of the thesis is the study of the gluon sector
in pure Yang-Mills theories via the computation of two, three and four point
Landau gauge gluon correlation functions, evaluated using the lattice
formalism of QCD. The Monte-Carlo simulations reported herein use the Wilson
gauge action for lattice QCD.
The first goal was to understand and quantify the deviations of lattice
correlation functions relative to their usual continuum description, using
appropriate lattice tensors. To achieve this we rely on different lattice
tensor representations for the gluon propagator in four dimensions to measure
the deviations of the lattice propagator from its continuum form. We also
identified classes of kinematic configurations where these deviations are
minimal and the continuum description of lattice tensors is improved. Other
than testing how faithful our description of the propagator is, these tensor
structures also allow us to study how well the continuum Slavnov-Taylor
identity for the propagator is verified on the lattice for the pure Yang-Mills
theory. We found that the Slavnov-Taylor identity is fulfilled, with good
accuracy, by the lattice data for the two point function.
A second goal was the lattice computation of the three gluon vertex using
large ensembles of configurations. The so-called zero crossing, a property
that is related to ghost dominance at infrared mass scales and puts
restrictions on the behaviour of the three gluon vertex, was investigated. In
addition, we also explored the possible existence of a ghost mass preventing
the infrared divergence of the vertex. In our study of the three gluon
correlation function we used functional forms to model the lattice data and
explore the two different possibilities for the behaviour of the function. For
the first case we provide an estimate of the mass scale associated with the
zero-crossing and search for a possible sign of the divergence. For the second
case, on the other hand, we study the possible occurrence of a sign change and
the finite value of the three gluon vertex at vanishing momentum.
A last topic is the computation of the four gluon vertex. On the lattice this
is a particularly difficult calculation that requires the subtraction of
contributions from lower order correlation functions. A suitable choice of
kinematics allows us to eliminate such unwanted contributions. Furthermore,
large statistical fluctuations hinder the precise computation of this object.
Our investigation is a proof of concept: we show that the lattice computation
of the four gluon correlation function seems to be feasible with reasonable
computational resources. Nonetheless, an increase in statistics is necessary
to provide a clearer and more precise signal for the complete correlation
function and to compute the corresponding one particle irreducible function.
Keywords: Lattice QCD, Gluon propagator, Gluon correlation functions, Lattice
tensor representations, Three gluon vertex, Four gluon vertex
###### Resumo
This dissertation is the result of the work developed over the past year by
the author together with his supervisors, Prof. Dr. Orlando Oliveira and Dr.
Paulo Silva. It consists of a study of the gluon sector in Yang-Mills theories
through the computation of two, three and four gluon correlation functions. To
this end, the lattice QCD formalism was used, with Monte-Carlo simulations
employing the Wilson action in the Landau gauge.
The first topic of study was the analysis of the deviations, relative to the
continuum, introduced by replacing space-time with a four-dimensional lattice.
For this purpose, lattice tensor representations were used to compute the
gluon propagator and compared with the continuum tensor description. With this
analysis, classes of kinematic configurations were identified for which the
deviations from the continuum description are reduced. Besides testing the
integrity of the description of the propagator, it is also possible to
investigate how the Slavnov-Taylor identity for the propagator is validated in
the Monte-Carlo simulations. The results from the different tensor
representations show that the Slavnov-Taylor identity is satisfied on the
lattice.
The three gluon correlation function was also computed, using two ensembles of
lattice configurations. The main goal was the analysis of the infrared
behaviour of the correlation function, namely the possible existence of a sign
change of the function at low momenta. This property is related to the
dominance of the ghost fields at low momentum scales, which induces a possible
sign change as well as a possible divergence. Besides this hypothesis, the
possible existence of a ghost mass preventing the divergence at low momenta
was also studied. To improve the analysis, functional forms were used to model
the three gluon vertex and to study the two possibilities in the infrared. In
particular, through these models, the scale of the sign change was estimated,
as well as the general behaviour of the function at low momenta.
The last goal was the computation of the four gluon vertex, which represents
an added difficulty and had never been evaluated on the lattice. The
difficulty stems from the tensor complexity and from the contributions of
lower order vertices that arise in the computation of the complete four gluon
correlation function. These contributions were eliminated through a suitable
choice of the kinematic configuration. Moreover, the statistical fluctuations
are large and hinder the analysis. The results demonstrated that the
computation of the four gluon vertex is feasible with accessible computational
resources. However, it is essential to increase the precision of the
computation in order to obtain a better defined signal and to compute the
vertex without external propagators.
Keywords: Lattice QCD, Gluon propagator, Gluon correlation functions, Lattice
tensor representations, Three gluon vertex, Four gluon vertex
###### Acknowledgements
‘A spectre is haunting Europe…’
I would like to begin by thanking my supervisors for their exceptional support
over the past year. Both Prof. Dr. Orlando Oliveira and Dr. Paulo Silva were
very patient and receptive towards my questions and their attentive guidance
was certainly very important. I am grateful for their insight and improvements
towards the construction of this dissertation.
Moreover, I would like to thank all my cherished friends whose company
throughout the past years was fundamental to my growth and without whom this
journey would have been much more tedious. A special thanks to all my friends
in BiF for the company, affection and all the shared adventures. Likewise, to
my childhood friends, thank you for being caring and for the company
throughout this journey.
Finally, I wish to express my deepest gratitude to my mother for the strenuous
care and dedication.
This work was granted access to the HPC resources of the PDC Center for High
Performance Computing at the KTH Royal Institute of Technology, Sweden, made
available within the Distributed European Computing Initiative by the
PRACE-2IP, receiving funding from the European Community’s Seventh Framework
Programme (FP7/2007–2013) under grant agreement no. RI-283493. The use of
Lindgren has been provided under DECI-9 project COIMBRALATT. The author
acknowledges that the results of this research have been achieved using the
PRACE-3IP project (FP7 RI312763) resource Sisu based in Finland at CSC. The
use of Sisu has been provided under DECI-12 project COIMBRALATT2.
It is also important to acknowledge the Laboratory for Advanced Computing at
the University of Coimbra for providing HPC resources that have contributed to
the research results reported within this thesis.
This work was supported with funds from Fundação para a Ciência e Tecnologia
under the projects UID/FIS/04564/2019 and UIDB/04564/2020.
###### Contents
1. Introduction
2. 1 Quantum Field Theory
1. 1.1 QCD Lagrangian – Gauge invariance
2. 1.2 Quantization of the theory
3. 1.3 Propagator and vertices
4. 1.4 Complete vertices
5. 1.5 Regularization and Renormalization
3. 2 Lattice quantum chromodynamics
1. 2.1 Euclidean formulation
2. 2.2 Discretization
3. 2.3 Lattice Quantum Chromodynamics
4. 2.4 Gauge fixing
5. 2.5 Correlation functions from the lattice
6. 2.6 Computational aspects
1. 2.6.1 Expectation values on the lattice
2. 2.6.2 Bootstrap method
4. 3 Gluon tensor bases
1. 3.1 Tensor representations on the lattice
1. 3.1.1 Scalars under the hypercubic group
2. 3.1.2 Hypercubic vectors
2. 3.2 Lattice basis – Gluon propagator
3. 3.3 Reconstruction of tensors
4. 3.4 Z4 averaging
5. 3.5 Lattice artifacts and Correction methods
1. 3.5.1 Momentum cuts
2. 3.5.2 H4 method
6. 3.6 Three gluon vertex
7. 3.7 Four gluon vertex
1. 3.7.1 Tensor bases
5. 4 Results
1. 4.1 Gluon propagator – Tensor description
1. 4.1.1 Discretization correction methods
2. 4.1.2 Lattice basis – General kinematics
3. 4.1.3 Lattice basis – Generalized diagonal configurations
4. 4.1.4 Finite volume effects
2. 4.2 Three gluon vertex
1. 4.2.1 Three gluon correlation function
2. 4.2.2 Three gluon one particle irreducible function
3. 4.3 Four gluon vertex
1. 4.3.1 Four gluon correlation function
6. Conclusion
7. A $SU(N)$ generators and identities
8. B Lattice tensors
1. B.1 Construction of the lattice basis
1. B.1.1 Momentum polynomial under a transposition
2. B.1.2 Second order tensors under $H(4)$ symmetry
2. B.2 General construction for projectors
1. B.2.1 Projectors for the lattice bases
9. C Results – Additional figures
1. C.1 Gluon propagator
1. C.1.1 Continuum relations – mixed diagonal configurations
###### List of Figures
1. 1.1 Gluon and ghost propagators.
2. 1.2 Ghost-gluon coupling vertex (top) and three and four gluon vertices with all momenta defined inwards.
3. 1.3 Three and four gluon vertices with external propagators removed.
4. 2.1 Link variables between $n$, $n+a\hat{\mu}$ and $n-a\hat{\mu}$.
5. 2.2 Schematic representation of the minimal planar lattice loop, plaquette in the plane $\mu-\nu$.
6. 3.1 Diagrammatic representation of the connected and disconnected terms contributing to the full four-gluon correlation function.
7. 4.1 Gluon dressing function $d(p^{2})$ from the continuum basis as a function of lattice momentum (top left), and as a function of the improved momentum (top right). The momenta surviving cylindrical and conical cuts are shown for each plot. The comparison between the data in terms of the improved and lattice momenta after complete momentum cuts against the H4 corrected data with lattice momentum is shown in the bottom plot. Results from the $\beta=6.0,\ 80^{4}$ lattice.
8. 4.2 $p^{2}E(p^{2})$, $p^{2}J(p^{2})$, and $p^{2}A(p^{2})$ dressing functions as a function of the lattice momentum after a $p^{[4]}$ extrapolation (left) and as a function of the improved momentum $\hat{p}$ after momentum cuts. The results come from the $\beta=6.0,\ 80^{4}$ lattice and the benchmark continuum dressing function $\hat{p}^{2}D(\hat{p}^{2})$ is plotted as a function of the improved momentum.
9. 4.3 Dimensionless form factors $p^{4}G(p^{2})$ and $p^{4}I(p^{2})$. $G$ is shown only after the correction methods. The original data is shown in the top row for the lattice momentum $p$ (left) and improved momentum $\hat{p}$ (right) for a restricted range of momenta. Below, $p^{4}G(p^{2})$ and $p^{4}I(p^{2})$ after the corrections are applied are presented, namely the H4 extrapolated results and momentum cuts. All data from the $\beta=6.0,\ 80^{4}$ lattice.
10. 4.4 Dressing functions for the different tensor bases as a function of the lattice momentum after a $p^{[4]}$ extrapolation (left) and as a function of the improved momentum $\hat{p}$ after momentum cuts. These come from the $\beta=6.0,\ 80^{4}$ lattice. The improved continuum tensor form factor $D(\hat{p}^{2})$ is also shown.
11. 4.5 $E(p^{2})$, $-p^{2}F(p^{2})$, and $-p^{2}H(p^{2})$ from the improved momentum lattice basis (right) and from the normal momentum lattice basis (left). Data from the $\beta=6.0,\ 80^{4}$ lattice. The standard result for $D(\hat{p}^{2})$ is also shown as a function of the improved momentum.
12. 4.6 Gluon dressing function $d(\hat{p}^{2})$ as a function of the improved momentum for the continuum basis published in [73]. The left plot shows the complete set of data and the curve surviving momentum cuts. Additionally, the right plot shows the averaged data in each bin – description in the text.
13. 4.7 Dressing functions $p^{2}E(p^{2})$, $p^{2}J(p^{2})$, and $p^{2}A(p^{2})$ from the $\beta=6.0,\ 80^{4}$ lattice as a function of the lattice momentum after a $p^{[4]}$ extrapolation (left) and as a function of the improved momentum $\hat{p}$. The data is shown after a binning of $2.5\%$ in momentum was performed. The continuum dressing function $\hat{p}^{2}D(\hat{p}^{2})$ is shown with momentum cuts.
14. 4.8 Form factors for the higher order terms of the extended basis $p^{4}G(p^{2})$ and $p^{4}I(p^{2})$ in terms of the usual momentum after the $p^{[4]}$ extrapolation (left) and as a function of the improved momentum (right) without any correction applied. Both cases are shown after a $2.5\%$ binning is applied on the momentum axis. Data from the $\beta=6.0,\ 80^{4}$ lattice.
15. 4.9 $\beta=6.0,\ 80^{4}$ lattice non-metric dressing functions for three tensor bases as a function of the lattice momentum after a $p^{[4]}$ extrapolation (left) and as a function of the improved momentum $\hat{p}$, both after a $2.5\%$ binning procedure applied to the momentum. The continuum dressing function $\hat{p}^{2}D(\hat{p}^{2})$ is shown with momentum cuts.
16. 4.10 Reconstruction ratio for the normal momentum bases after the H4 extrapolation. Each plot is labelled by the corresponding form factors for each basis. Data from the $\beta=6.0,\ 80^{4}$ lattice.
17. 4.11 Reconstruction ratio $\mathcal{R}$ for various single scale momentum configurations using two lattice bases, eqs. 3.16 and 3.15, and the continuum tensor (1.40) using the improved momentum and lattice momentum. Results from the $\beta=6.0,\ 80^{4}$ ensemble.
18. 4.12 Orthogonality condition, eq. 4.10, shown for the normal momentum basis after H4 extrapolation from the $\beta=6.0,\ 80^{4}$ lattice. The right plot shows the result using the improved basis without corrections and also with momentum cuts in terms of the improved momentum. For all data the $p_{4}$ component was considered.
19. 4.13 Reconstruction ratio for all four generalized diagonal configurations from the $\beta=6.0,\ 80^{4}$ lattice considering the most complete lattice basis (left) and the usual continuum tensor basis (right). Also shown is the reconstruction for the kinematics $(n,1,1,0)$ using the same two bases.
20. 4.14 Form factors from the lattice basis for the diagonal configuration $p=(n,n,n,n)$ (left) and for the on-axis momentum $p=(n,0,0,0)$ (right), both as a function of improved momentum. Results from the $\beta=6.0,\ 80^{4}$ lattice. Shown for comparison is the benchmark result $d(\hat{p}^{2})$.
21. 4.15 Reconstruction ratio for the extended lattice basis and the usual continuum description, both in terms of the improved momentum. These are shown for the two different lattices with $80^{4}$ and $64^{4}$ sites and the same spacing, $1/a=1.943(47)\ \mathrm{GeV}$. Four distinct momentum configurations are shown.
22. 4.16 Reconstruction ratio for all four generalized diagonal configurations considering the most complete lattice basis for the $(6.502\ \mathrm{fm})^{4}$ lattice (left) and the $(8.128\ \mathrm{fm})^{4}$ lattice (right). Both lattices have the same lattice spacing, $1/a=1.943(47)\ \mathrm{GeV}$.
23. 4.17 Three gluon correlation function from the $\beta=6.0,\ 80^{4}$ ensemble contracted with, and as a function of, the improved momentum. All data is shown without correction methods using a partial Z4 averaging with permutations only, and also for the complete Z4 averaging.
24. 4.18 H4 extrapolated data for the gluon propagator dressing function $d(p^{2})$ compared with full diagonal momenta $(n,n,n,n)$ as a function of improved momentum. Data from the $\beta=6.0,\ 80^{4}$ ensemble.
25. 4.19 Original and $p^{[4]}$ extrapolated data for the three gluon correlation function from the $\beta=6.0,\ 80^{4}$ ensemble as a function of the lattice momentum $p$. The H4 correction was applied for the full momentum range. The configuration $(n,n,n,n)$ is shown for comparison.
26. 4.20 $\chi^{2}/d.o.f.$ obtained from the fit of the functional form (4.21) to the $\beta=6.0,\ 80^{4}$ lattice data as a function of the momentum range cut-off, $p>p_{0}\ \mathrm{GeV}$. The left plot shows the result of the fit for the H4 corrected data while the right plot uses diagonal momenta as a function of the improved momentum.
27. 4.21 Three gluon correlation function $G(p^{2})$ after the H4 extrapolation as a function of the lattice momentum (left) and as a function of the improved momentum after cuts for $\hat{p}>1\ \mathrm{GeV}$. The perturbative prediction, eq. 4.21, is also represented after a fit to the extrapolated and diagonal configurations, respectively. All results shown are from the $\beta=6.0,\ 80^{4}$ ensemble.
28. 4.22 Gluon propagator $D(p^{2})$ from the $\beta=6.0,\ 80^{4}$ lattice as a function of the improved momentum after cuts above $1\ \mathrm{GeV}$. The renormalization group improved perturbative result, eq. 4.21, was fitted to the data for $p\in[5,8]\ \mathrm{GeV}$, resulting in a fit with $\chi^{2}/d.o.f.=1.10$.
29. 4.23 Complete set of data from the $\beta=6.0,\ 80^{4}$ lattice for the three-gluon 1PI function $\Gamma(p^{2})$ as a function of the improved momentum. The data surviving momentum cuts above $1\ \mathrm{GeV}$ is also shown.
30. 4.24 $\chi^{2}/d.o.f.$ of the three fits from eqs. 4.22, 4.23 and 4.24 (top left, top right and bottom, respectively) for the varying momentum range $p\in[p_{i},p_{f}]$. Both fits with and without momentum cuts were considered.
31. 4.25 $\Gamma(p^{2})$ from the $\beta=6.0,\ 80^{4}$ ensemble as a function of improved momentum. The data after momentum cuts is also shown. Two fits using eq. 4.22 and $p_{f}=1.7\ \mathrm{GeV}$ were adjusted considering the complete data, and the set after momentum cuts.
32. 4.26 $\Gamma(p^{2})$ from the complete set as a function of improved momentum from the $\beta=6.0,\ 80^{4}$ ensemble. The data after momentum cuts are applied is also shown. The functional form in eq. 4.23 with range $p_{f}=1.7\ \mathrm{GeV}$ was adjusted to the complete and partial data.
33. 4.27 $\Gamma(p^{2})$ for the complete kinematics as a function of improved momentum from the $\beta=6.0,\ 80^{4}$ ensemble. The set of points surviving momentum cuts is also shown. The functional form in eq. 4.24 with $p_{f}=0.85\ \mathrm{GeV}$ was adjusted to the complete and partial data.
34. 4.28 Prediction for the sign change $p_{0}$ from the fits using eq. 4.22 (left) and eq. 4.24 (right) for varying fitting ranges $[0,p_{f}]$.
35. 4.29 $\Gamma(p^{2})$ from the $\beta=6.0,\ 80^{4}$ ensemble compared with the results from [21] using the $\beta=6.0,\ 64^{4}$ lattice with 2000 configurations. Above $1\ \mathrm{GeV}$ only data surviving momentum cuts is shown.
36. 4.30 $\Gamma(p^{2})$ with momentum cuts above $1\ \mathrm{GeV}$ for the $80^{4}$ and $64^{4}$ lattices. The curves result from the fits with eq. 4.22 (top left), eq. 4.23 (top right), and eq. 4.24 (bottom plot) with fitting ranges $p_{f}=1.7\ \mathrm{GeV}$ for the first two, and $p_{f}=0.85\ \mathrm{GeV}$ for the latter.
37. 4.31 Four gluon vertex form factor $V_{\Gamma^{(0)}}(p^{2})$ with external propagators from the $\beta=6.0,\ 64^{4}$ lattice. Only mixed diagonal configurations are considered. The smaller plot shows a restricted range of momentum to better visualize the mid momentum region. All data was rescaled by a factor of 1000.
38. 4.32 Four gluon vertex form factor $V_{G}(p^{2})$ with external propagators from the $\beta=6.0,\ 64^{4}$ lattice. Only mixed diagonal configurations are considered. The smaller plot shows a restricted range of momentum to better visualize the mid momentum region. All data was rescaled by a factor of 1000.
39. 4.33 Four gluon vertex form factors $V_{\Gamma^{(0)}}(p^{2})$ and $V_{G}(p^{2})$ with external propagators from the $\beta=6.0,\ 64^{4}$ lattice. Only mixed diagonal configurations are shown and the lowest momentum points are disregarded due to large fluctuations.
40. 4.34 Four gluon vertex form factor $V_{\Gamma^{(0)}}(p^{2})$ with external propagators from the $\beta=6.0,\ 80^{4}$ (red) and $64^{4}$ (green) ensembles. Only mixed diagonal configurations are considered and the lowest momentum points were disregarded. All data was rescaled by a factor of 1000.
41. 4.35 Four gluon vertex form factor $V_{G}(p^{2})$ with external propagators from the $\beta=6.0,\ 80^{4}$ (red) and $64^{4}$ (green) ensembles. Only mixed diagonal configurations are considered and the lowest momentum points were disregarded. All data was rescaled by a factor of 1000.
42. 4.36 Original data from [31] for the DSE computation of the pure four gluon vertex associated with the tree-level tensor $V^{\prime}_{\Gamma^{(0)}}(p^{2})$. The ‘total’ result in black is the relevant structure for comparison.
43. 4.37 Original data from [31] for the DSE computation of the pure four gluon vertex associated with the tree-level tensor $V^{\prime}_{G}(p^{2})$. The ‘total’ result in black is the relevant structure for comparison.
44. C.1 Form factors from the lattice basis for the mixed configurations $p=(n,n,n,0)$ (left) and for $p=(n,n,0,0)$ (right), both as a function of improved momentum. Shown for comparison is the benchmark result $d(\hat{p}^{2})$.
###### List of Tables
1. 4.1 Lattice setup for both ensembles used in the computation of the gluon correlation functions.
2. 4.2 Fit parameters for the $64^{4}$ and $80^{4}$ lattice using the three models in eqs. 4.22, 4.23 and 4.24.
### Units and Conventions
In this dissertation we use natural units,
$\hbar=c=1,$
where $\hbar$ is the reduced Planck constant and $c$ the speed of light in
vacuum. In these units energy, momentum and mass share the same unit,
expressed in $\mathrm{MeV}$ ($1\ \mathrm{MeV}=1.6022\times 10^{-13}\ \mathrm{J}$).
Length and time also share a common unit, the inverse of energy. To
re-establish units, the following conversion factor is used,
$\hbar c=197.326\ \mathrm{MeV}\,\mathrm{fm}=1,$
and in SI units
$1\ \mathrm{MeV}=1.7827\times 10^{-30}\ \mathrm{kg},\qquad 1\ \mathrm{fm}=3.3356\times 10^{-24}\ \mathrm{s}.$
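As a short worked example of re-establishing units, consider the inverse lattice spacing $1/a=1.943\ \mathrm{GeV}$ of the ensembles discussed later; inserting the conversion factor gives the spacing in physical units,
$a=\frac{\hbar c}{1.943\ \mathrm{GeV}}=\frac{0.197326\ \mathrm{GeV}\,\mathrm{fm}}{1.943\ \mathrm{GeV}}\approx 0.102\ \mathrm{fm}.$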
Greek indices ($\mu,\nu,\rho,$ etc) are associated with space-time indices
going through $(0,1,2,3)$ or $(1,2,3,4)$ for Minkowski and Euclidean space,
respectively. The $g_{\mu\nu}$ symbol is reserved for the Minkowski metric
tensor $g_{\mu\nu}=\text{diag}(1,-1,-1,-1)$ while the Kronecker symbol
$\delta_{\mu\nu}$ is the Euclidean metric tensor. Latin indices ($a,b,$ etc)
are usually reserved for the colour degrees of freedom associated with the
$SU(N)$ algebra.
The Einstein summation convention for repeated indices
$a_{\mu}b^{\mu}\equiv\sum_{\mu}a_{\mu}b^{\mu}$ (1)
is used throughout the work, unless explicitly noted. This convention applies
to both space-time and colour degrees of freedom. The position of the indices
is irrelevant when considering colour, or Euclidean metric.
## Introduction
The modern description of the fundamental interactions in nature considers
four interactions: gravitational, electromagnetic, weak, and strong. Apart
from the gravitational interaction which does not have a proper quantum
formulation, the last three are described by quantum field theories. These
three fundamental interactions define what is called the Standard Model, a
gauge theory associated with the symmetry group $SU(3)\otimes SU(2)\otimes
U(1)$ describing current particle physics.
The $SU(2)\otimes U(1)$ sector of the Standard Model comprises the
electromagnetic and weak interactions (electroweak) [2]. Perturbation theory
accounts for most of the phenomena occurring in this sector. However, when
physical processes involve hadrons (e.g. protons, neutrons, pions) interacting
through the strong force at low energies, perturbation theory fails. Hence,
non-perturbative methods are necessary to study the $SU(3)$ sector, which
accounts for the dynamics of quarks and gluons. Quantum chromodynamics (QCD)
is the current description of the strong interaction.
Lattice field theory is a possible non-perturbative approach to formulating
QCD. The formulation of the theory on a discretized lattice with finite
spacing and volume provides a regularization, which renders the theory finite.
When combined with Euclidean space-time, lattice field theories become
formally equivalent to classical statistical theories. Hence, besides serving
as a regularized formulation of the theory, the lattice also serves as a
computational tool.
In lattice quantum chromodynamics (LQCD), physical quantities are computed
using Monte-Carlo simulations that require large computational power. Current
simulations can reach a satisfying level of precision in the computation of
several quantities such as the strong coupling constant, hadron masses, and
also the study of some properties such as confinement and chiral symmetry (see
[3] for a summary of the current advances and investigations in the field).
All of the work developed in this thesis uses pure Yang-Mills theory,
where the fermion dynamics is not taken into account – the quenched
approximation. This corresponds to disregarding quark loops in the
diagrammatic expansion. Although this approximation may seem radical, the
systematic errors involved are small [4].
A quantum field theory is defined by its correlation functions [5, 6], which
summarize the dynamics and interactions among fields. Despite not being
physical observables, and not being experimentally detectable due to their
gauge dependency, correlation functions are important because they can be
related to various phenomena of the theory. Indeed, in supposedly confining
theories such as QCD, whose quanta (quarks, gluons, and the unphysical ghosts)
do not represent physically observable states, correlation functions should
encode information on this phenomenon [7, 8]. Vertices can also serve to
compute the coupling constant, to define a static potential between colour
charges [9, 10], and to explore properties of bound states [11]. Correlation
functions are also the building blocks of other non-perturbative continuum
approaches such as the Dyson-Schwinger equations (DSE) [12]. These frameworks
usually rely in part on lattice data, and thus a good comprehension of these
objects is important.
This thesis addresses three different topics. Firstly, we investigate the
lattice gluon propagator relying on lattice tensor representations, with the
aim of understanding the deviations of correlation functions relative to the
continuum theory [13, 14]. This has become a relevant topic as modern
computations of the gluon propagator use large statistical ensembles of
configurations.
The second objective is to compute the three gluon vertex and study its
infrared (IR) behaviour. The purpose of this analysis is to search for
evidence of the zero-crossing and shorten its estimated interval,
corresponding to a possible sign change of the three gluon one particle
irreducible (1PI) function at low momentum. This property can be traced back
to the fundamental dynamics of the pure Yang-Mills theory, namely the ghost
dynamics, as predicted by the DSEs [15, 16]. In this framework, the sign
change is necessary for the finiteness of the equations, assuming a tree level
form of the ghost-gluon and four gluon vertices [17]. Various DSE
investigations [18, 17] as well as other methods [19, 20] found the
zero-crossing in the deep IR. Recent lattice $SU(3)$ studies [21, 22, 23] as
well as $SU(2)$ ones [24, 25] predict the zero crossing in the deep infrared
region, around $150-250\ \mathrm{MeV}$. Moreover, the exact momentum of the
crossing seems to depend on the symmetry group and dimensionality, being
generally lower in the four-dimensional case [15]. Additionally, the general
predictions come from pure Yang-Mills theories, and thus unquenching the
theory could spoil this behaviour. However, several DSE based references [19,
26, 17] argue this is a pure gluon phenomenon, and that the presence of light
mesons [27, 28] only shifts the zero-crossing momentum to a lower IR region.
From the point of view of continuum frameworks, this property depends
strongly on the approximations employed, and thus should always be validated
by lattice simulations. The latter usually suffer from large statistical
fluctuations or from difficult access to IR momenta. Furthermore, a recent
analytical investigation of both the gluon and ghost propagators found
evidence for a non-vanishing ghost mass, which could regularize the three
gluon vertex and thus remove the divergence [29]. While the existence of a
dynamical gluon mass is well established by previous investigations [30], the
case of the ghost field remains undetermined. Since a finite dynamical ghost
mass would in principle remove the logarithmic divergence, we also explore
this possibility.
The last objective of this work is to perform a first lattice computation of
the four gluon correlation function. General predictions for the IR structure
of this vertex exist only from continuum formulations [31, 32]. These are
dependent on truncation schemes and other approximations, and again lattice
results are needed to validate the predictions. The four gluon vertex has
four Lorentz indices and four colour indices; its tensor structure is
therefore rather complex, allowing for a large number of possible tensors.
The statistical fluctuations are also increased, since this is a higher order
correlation function involving fields at four distinct lattice sites.
Besides, as a higher order function, its computation requires the removal of
unwanted contributions from lower order correlation functions. These can be
eliminated by a suitable choice of kinematics.
The outline of this dissertation begins with a general introduction to the
necessary tools and theoretical basis to understand the lattice formulation
and results. Chapter 1 begins with a brief description of the formalism for a
general quantum field theory with the QCD theory being introduced and its
properties briefly reviewed. Correlation functions and other objects of the
theory are introduced.
The lattice formulation of QCD is presented in chapter 2. We motivate and
construct the discretization procedure and present the lattice version of
various fundamental objects. This chapter also includes some computational
aspects needed to perform lattice simulations.
In chapter 3 the main work of this dissertation begins with an analysis of the
correct lattice symmetries and the construction of lattice adequate tensor
bases. Additionally, details about discretization effects, possible correction
methods and tensor bases for the three and four gluon correlation functions
are introduced.
Results are shown in chapter 4, which is divided into three main sections,
dedicated to each of the three main objectives of this work. This is followed
by final conclusions and possible extensions of this work.
Finally, the results obtained in this thesis regarding the tensor structure of
the propagator were summarized in [1].
## Chapter 1 Quantum Field Theory
Quantum Chromodynamics is an $SU(3)$ gauge theory. Historically, the colour
quantum number was introduced in order to reconcile Fermi statistics with the
observed ground states of strongly interacting particles: a new quantum
number was needed to guarantee the anti-symmetry of the wave-function [2]. Later,
these new degrees of freedom were found to be associated with a gauge theory.
In this chapter we give a brief overview of QCD and how the theory arises from
the principle of gauge invariance. Some important concepts in a quantum field
theory are also presented. Quantum field theories are well described in [6,
33, 34], and QCD is thoroughly exposed in [35].
### 1.1 QCD Lagrangian – Gauge invariance
The Lagrangian of QCD involves the matter (quark) fields $\psi$ and the gluon
fields $A_{\mu}$. The former transform in the fundamental representation of
the symmetry group $SU(3)$, while the latter are in the adjoint
representation of the group (see appendix A).
The classical QCD Lagrangian arises when we impose gauge invariance to the
Dirac Lagrangian
$\mathcal{L}_{\text{Dirac}}=\bar{\psi}\left(i\gamma^{\mu}\partial_{\mu}-m\right)\psi,$
(1.1)
where $\bar{\psi}=\psi^{\dagger}\gamma^{0}$, with $\gamma^{0}$ the zeroth of
the Dirac matrices $\gamma^{\mu}$. For a general $SU(N)$ theory, the gauge
principle requires the invariance of the Lagrangian under a local group
transformation
$\psi(x)\rightarrow\psi^{\prime}(x)=V(x)\psi(x)$ (1.2)
with $V(x)$ an element of the fundamental representation of the group. When
performing a local transformation, the kinetic term of the Lagrangian breaks
the invariance since it compares fields at different points with distinct
transformation laws
$\psi(y)-\psi(x)\rightarrow V(y)\psi(y)-V(x)\psi(x).$ (1.3)
In order to make comparisons at different points we introduce the group valued
comparator $U(x,y)$ satisfying $U(x,x)=\mathds{1}$ and the gauge
transformation
$U(x,y)\rightarrow V(x)U(x,y)V^{\dagger}(y).$ (1.4)
With this object we may define the covariant derivative through the
difference
$D_{\mu}\psi(x)\equiv\lim\limits_{\varepsilon\rightarrow 0}\frac{1}{\varepsilon}\left[U(x,x+\varepsilon\hat{\mu})\psi(x+\varepsilon\hat{\mu})-\psi(x)\right],$
(1.5)
with $\varepsilon$ an infinitesimal displacement along the unit direction
$\hat{\mu}$. With this definition, the new derivative transforms in the same
way as the fields,
$D_{\mu}\psi(x)\rightarrow V(x)D_{\mu}\psi(x).$ (1.6)
Introducing a new field, the connection $A_{\mu}(x)$, by
$U(x,x+\varepsilon)=\mathds{1}-ig\varepsilon^{\mu}A_{\mu}(x)+\order{\varepsilon^{2}}.$
(1.7)
where $g$ is the bare strong coupling constant, we write the covariant
derivative as
$D_{\mu}\psi(x)=(\partial_{\mu}-igA_{\mu}(x))\psi(x).$ (1.8)
The transformation law for the newly introduced field $A_{\mu}(x)$ is
$A_{\mu}(x)\rightarrow
V(x)A_{\mu}(x)V^{-1}(x)-\frac{i}{g}(\partial_{\mu}V(x))V^{-1}(x).$ (1.9)
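Indeed, with (1.9) one verifies directly that the covariant derivative
transforms as in (1.6):
$D_{\mu}\psi\rightarrow\left(\partial_{\mu}-igA^{\prime}_{\mu}\right)V\psi=V\partial_{\mu}\psi+(\partial_{\mu}V)\psi-igVA_{\mu}\psi-(\partial_{\mu}V)\psi=VD_{\mu}\psi.$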
An arbitrary group element $V(x)$ can be expressed in terms of the Lie
algebra through the exponential map
$V(x)=\exp(i\alpha^{a}(x)t^{a})$ (1.10)
with the algebra generators $t^{a}$ defined in appendix A and $\alpha^{a}(x)$
a set of functions parametrizing the transformation. The connection
$A_{\mu}(x)$ is thus an element of the algebra which can be written in terms
of the fields $A_{\mu}^{a}(x)$
$A_{\mu}(x)=A_{\mu}^{a}(x)t^{a}.$ (1.11)
Hence, to guarantee gauge invariance of the Dirac Lagrangian we replace
ordinary derivatives by covariant ones. Furthermore, we need to introduce a
kinetic term for the new field, which must depend only on the gauge fields
$A_{\mu}$ and their derivatives. The usual construction is the field-strength
tensor
$F_{\mu\nu}=\frac{i}{g}\left[D_{\mu},D_{\nu}\right]=(\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu})-ig\left[A_{\mu},A_{\nu}\right]$
(1.12)
which can be written in terms of its components
$F_{\mu\nu}=F_{\mu\nu}^{a}t^{a}$ using the structure constants of the group
$f^{abc}$,
$F_{\mu\nu}^{a}=(\partial_{\mu}A_{\nu}^{a}-\partial_{\nu}A_{\mu}^{a})+gf^{abc}A_{\mu}^{b}A_{\nu}^{c}.$
(1.13)
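The component form (1.13) follows from (1.12) by inserting
$A_{\mu}=A_{\mu}^{a}t^{a}$ and using the algebra relation
$[t^{b},t^{c}]=if^{bcd}t^{d}$ of appendix A,
$-ig\left[A_{\mu},A_{\nu}\right]=-igA_{\mu}^{b}A_{\nu}^{c}\left[t^{b},t^{c}\right]=gf^{bcd}A_{\mu}^{b}A_{\nu}^{c}t^{d}=gf^{abc}A_{\mu}^{b}A_{\nu}^{c}t^{a},$
where the last equality relabels the summed index and uses the cyclic
symmetry of the structure constants.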
The first equality in 1.12 gives a geometrical interpretation of the tensor,
as it can be seen as the comparison of the field around an infinitesimal
square loop in the $\mu-\nu$ plane, indicating how much it rotates in the
internal space when translated along this path [6]. To obtain a gauge
invariant scalar object from this tensor, we consider the trace operation over
the algebra elements and the following contraction
$\Tr\left[(F_{\mu\nu}^{a}t^{a})^{2}\right]=(F_{\mu\nu}^{a})^{2}/2.$ (1.14)
With these elements we write the classical QCD Lagrangian
$\mathcal{L}_{\text{QCD}}=-\frac{1}{4}F_{\mu\nu}^{a}F^{a\mu\nu}+\bar{\psi}\left(i\gamma^{\mu}(\partial_{\mu}-igA_{\mu}^{a}t^{a})-m\right)\psi$
(1.15)
whose form, namely the gluon-quark interaction, is restricted by gauge
invariance (gauge invariance also forces the gauge fields to be massless,
since a term $A_{\mu}^{a}A^{a\mu}$ would not be gauge invariant). The matter
field $\psi(x)$ is a vector of spinors for each flavour of quark
($f=u,d,s,c,t,b$). Each quark flavour has an additional colour index $a=1,2,3$
in a three dimensional representation of the $SU(3)$ group. $m$ is a diagonal
matrix in flavour space containing the bare quark masses for each flavour. The
eight independent gluon fields associated with the group generators are the
gauge fields $A_{\mu}^{a}(x)$ which also carry a Lorentz index, labelling the
corresponding directions in space-time, $\mu=0,1,2,3$.
For the present work, we are interested in the pure Yang-Mills Lagrangian
involving the gluon dynamics only
$\mathcal{L}_{\text{YM}}=-\frac{1}{4}F_{\mu\nu}^{a}F^{a\mu\nu}.$ (1.16)
### 1.2 Quantization of the theory
In the path integral quantization of a general quantum field theory [6, 36,
5], described by a set of fields $\phi_{a}$ (the index $a$ may label
independent fields, different members of a set of fields related by some
internal symmetry, or the components of a field transforming non-trivially
under Lorentz transformations, e.g. a vector), the theory is defined by the
generating functional
$\mathcal{Z}[J]=\int\mathcal{D}\phi e^{i\int
d^{4}x\left(\mathcal{L}+J_{a}(x)\phi_{a}(x)\right)}$ (1.17)
where $J_{a}(x)$ is an external source, and the condensed notation was
employed
$\mathcal{D}\phi\equiv\prod_{x,a}d\phi_{a}(x).$ (1.18)
A quantum field theory is completely determined by its Green’s functions [5,
6] defined as
$G_{i_{1},...,i_{n}}^{(n)}(x_{1},...,x_{n})=\bra{0}T\left[\hat{\phi}_{i_{1}}(x_{1})...\hat{\phi}_{i_{n}}(x_{n})\right]\ket{0}$
(1.19)
i.e. by a time ordered vacuum expectation value of the product of $n$ field
operators at distinct points. In this quantization procedure, Green’s
functions are computed from the generating functional by functional
differentiation with respect to the sources
$\bra{0}T\left[\hat{\phi}_{i_{1}}(x_{1})...\hat{\phi}_{i_{n}}(x_{n})\right]\ket{0}=\evaluated{\frac{1}{i^{n}\mathcal{Z}[J]}\frac{\delta^{n}\mathcal{Z}[J]}{\delta
J_{i_{1}}(x_{1})...\delta J_{i_{n}}(x_{n})}}_{J=0}.$ (1.20)
This vacuum expectation value can thus be written as
$\expectationvalue{\hat{\phi}_{i_{1}}(x_{1})...\hat{\phi}_{i_{n}}(x_{n})}=\frac{1}{\mathcal{Z}[0]}\int\mathcal{D}\phi\left(\phi_{i_{1}}(x_{1})...\phi_{i_{n}}(x_{n})\right)e^{iS}$
(1.21)
with the notation
$\expectationvalue{\hat{\phi}_{i_{1}}(x_{1})...\hat{\phi}_{i_{n}}(x_{n})}\equiv\bra{0}T\left[\hat{\phi}_{i_{1}}(x_{1})...\hat{\phi}_{i_{n}}(x_{n})\right]\ket{0}$.
Equation 1.21 shows that Green’s functions are accessed by performing a
weighted average over all possible configurations of the system.
The path integral quantization carries some problems when applied to gauge
theories. The generating functional
$\mathcal{Z}=\int\mathcal{D}Ae^{iS[A]}$ (1.22)
involves the integral over the gauge fields $A_{\mu}^{a}(x)$. For any field
configuration $A_{\mu}$ we may define its gauge orbit as the set of all
fields related to it by a gauge transformation $\alpha$. All these
configurations give the same contribution to the functional integral, so that
integrating over an orbit produces an infinite factor.
This over-counting of degrees of freedom needs to be eliminated in order to
have a well defined theory. Faddeev and Popov [37] suggested the use of a
hypersurface to restrict the integration in configuration space. This is
achieved by a gauge fixing condition of the form $F^{a}[A]-C^{a}(x)=0$, where
$F[A]$ is a field dependent term and $C^{a}(x)$ is a set of functions that
also determines the gauge fixing condition; in the Landau gauge
$F[A]=\partial_{\mu}A^{\mu}(x)$ and $C^{a}(x)=0$. This way the contribution
of repeated configurations is isolated by factorizing it as
$\int\mathcal{D}\alpha\int\mathcal{D}A_{\mu}\,e^{iS[A]}$, and is eliminated
by the normalization.
To impose this integration restriction we insert the following expression in
the generating functional,
$1=\int\mathcal{D}\alpha\delta(F^{a}[A^{\alpha}]-C^{a}(x))\det\left(\frac{\delta
F^{a}[A^{\alpha}]}{\delta\alpha}\right)$ (1.23)
where $A^{\alpha}$ represents the gauge transformed field $A$,
$\delta(F[A^{\alpha}])$ is a Dirac $\delta$ over each space-time point, and
the determinant is due to the change of variables. The generating functional
reads
$\mathcal{Z}=\int\mathcal{D}A\int\mathcal{D}\alpha\delta(F^{a}[A^{\alpha}]-C^{a}(x))\det\left(\frac{\delta
F[A^{\alpha}]}{\delta\alpha}\right)e^{iS[A]}.$ (1.24)
Performing a gauge transformation from $A_{\mu}^{\alpha}$ to $A_{\mu}$ we can
eliminate the dependence of the integrand on the gauge transformation. For
this we use the gauge invariance of the action and of the volume element in
group space $\mathcal{D}\alpha$ [38]. Also, a unitary transformation leaves
the measure $\mathcal{D}A$ and the determinant unchanged,
$\mathcal{Z}=\int\mathcal{D}\alpha\int\mathcal{D}A\delta(F^{a}[A]-C^{a}(x))\det\left(\frac{\delta
F[A]}{\delta\alpha}\right)e^{iS[A]}.$ (1.25)
This way we factorized the infinite factor, which is eliminated by
normalization. In addition, we may multiply $\mathcal{Z}$ by a constant factor
$\int\mathcal{D}C\exp\left[-\frac{i}{2\xi}\int d^{4}x{C^{a}}^{2}\right]$
(1.26)
corresponding to a Gaussian weighted integral over the functions $C^{a}$. The
generating functional now reads
$\mathcal{Z}=\int\mathcal{D}A\det\left(\frac{\delta F[A]}{\delta\alpha}\right)\exp\left[iS[A]-\frac{i}{2\xi}\int d^{4}x\,F[A]^{2}\right].$
(1.27)
The Faddeev-Popov determinant is defined as
$\displaystyle\det M=\det\left(\frac{\delta
F([A],x)}{\delta\alpha(y)}\right),$ $\displaystyle
M_{ab}([A],x,y)=\frac{\delta F^{a}([A],x)}{\delta\alpha^{b}(y)}$ (1.28)
Using Grassmann, anti-commuting variables it is possible to define the
Faddeev-Popov determinant as a functional integral over a set of anti-
commuting fields – ghost fields $\bar{\eta},\eta$
$\det M=\int\mathcal{D}\bar{\eta}\mathcal{D}\eta\exp\left(-i\int
d^{4}x\bar{\eta}^{a}M_{ab}\eta^{b}\right).$ (1.29)
With this, we have a final form for the generating functional,
$\mathcal{Z}=\int\mathcal{D}A_{\mu}\mathcal{D}\bar{\eta}\mathcal{D}\eta
e^{i\int d^{4}x\mathcal{L}_{\text{eff}}},$ (1.30)
expressed with an effective Lagrangian
$\mathcal{L}_{\text{eff}}=\mathcal{L}-\frac{F^{2}}{2\xi}-\bar{\eta}M\eta.$
(1.31)
These new anti-commuting fields can be interpreted as new particles
contributing to the dynamics of the system. However, being scalars under
Lorentz transformations while anti-commuting, ghosts do not respect the
spin-statistics theorem [39] and cannot be interpreted as physical particles
– they contribute only to closed loops in Feynman diagrams and never appear
as external fields. They are a mathematical artifact of the gauge fixing
procedure.
### 1.3 Propagator and vertices
The effective Yang-Mills Lagrangian is
$\displaystyle\mathcal{L}$
$\displaystyle=\frac{1}{2}(\partial^{\mu}A^{a\nu}\partial_{\nu}A_{\mu}^{a}-\partial^{\mu}A^{a\nu}\partial_{\mu}A_{\nu}^{a})-\frac{1}{2\xi}(\partial^{\mu}A_{\mu})^{2}$
$\displaystyle-\frac{1}{2}gf^{abc}A^{b\mu}A^{c\nu}(\partial_{\mu}A_{\nu}^{a}-\partial_{\nu}A_{\mu}^{a})$
$\displaystyle-\frac{1}{4}g^{2}f^{abc}f^{ade}A^{b\mu}A^{c\nu}A_{\mu}^{d}A_{\nu}^{e}$
$\displaystyle-\bar{\eta}^{a}\partial^{\mu}\left(\delta^{ab}\partial_{\mu}-gf^{abc}A_{\mu}^{c}\right)\eta^{b}.$
(1.32)
Analytically, the computation of the complete correlation functions (Green’s
functions) is not possible. However, perturbation theory can provide some
information on the form of these functions. For this we need to know the
Feynman rules for the theory, which can be read off from the Lagrangian at
tree level and are summarized in this section. Its derivation can be consulted
in [6, 40].
The gluon propagator is read off from the quadratic terms in the gluon fields
in the Lagrangian. In momentum space, the propagator reads
$D_{\mu\nu}^{ab}(p^{2})=\frac{\delta^{ab}}{p^{2}}\left[g_{\mu\nu}+(\xi-1)\frac{p_{\mu}p_{\nu}}{p^{2}}\right].$
(1.33)
Note that $\xi=0$ in the Landau gauge.
The ghost fields also have associated Feynman rules. In the chosen gauge the
functional derivative (1.28), obtained with the infinitesimal version of
(1.9),
$A^{\prime
a}_{\mu}=A_{\mu}^{a}+f^{abc}A^{b}_{\mu}\alpha^{c}+\partial_{\mu}\alpha^{a},$
(1.34)
is of the form $M_{ab}=\left(\partial^{\mu}D_{\mu}\right)_{ab}$ (note that
$D_{\mu}$ here is written in the adjoint representation, with generators
$(t^{a})_{bc}=-if^{abc}$), resulting in the Lagrangian contribution
$\mathcal{L}_{\text{ghost}}=-\bar{\eta}^{a}\partial_{\mu}\partial^{\mu}\eta^{a}+gf^{abc}\bar{\eta}^{a}\partial^{\mu}(A_{\mu}^{b}\eta^{c}).$
(1.35)
The ghost will have an associated tree-level propagator, fig. 1.1,
$\Delta^{ab}(p^{2})=\frac{\delta^{ab}}{p^{2}}$ (1.36)
and a ghost-gauge field coupling vertex $-gf^{abc}p_{\mu}$ represented in
figure 1.2.
The gluon self-interaction vertices result from the second and third lines of
the Lagrangian. Their form is written taking into account the Bose symmetry
of these objects, which allows the interchange of any two particles
$(p_{i},a_{i},\mu_{i})$ without affecting the expression. The Feynman rule
for the three gluon vertex in momentum space, shown schematically in fig.
1.2, reads
${\Gamma^{(0)}}_{\mu_{1}\mu_{2}\mu_{3}}^{a_{1}a_{2}a_{3}}(p_{1},p_{2},p_{3})=gf^{a_{1}a_{2}a_{3}}[g_{\mu_{1}\mu_{2}}(p_{1}-p_{2})_{\mu_{3}}+g_{\mu_{2}\mu_{3}}(p_{2}-p_{3})_{\mu_{1}}+g_{\mu_{3}\mu_{1}}(p_{3}-p_{1})_{\mu_{2}}]$
(1.37)
whereas for the four gluon vertex the corresponding tree level expression is
given by
$\displaystyle{\Gamma^{(0)}}_{\mu_{1}\mu_{2}\mu_{3}\mu_{4}}^{a_{1}a_{2}a_{3}a_{4}}(p_{1},p_{2},p_{3},p_{4})=-g^{2}\big[$
$\displaystyle f^{a_{1}a_{2}m}f^{a_{3}a_{4}m}(g_{\mu_{1}\mu_{3}}g_{\mu_{2}\mu_{4}}-g_{\mu_{1}\mu_{4}}g_{\mu_{2}\mu_{3}})$
$\displaystyle+f^{a_{1}a_{3}m}f^{a_{2}a_{4}m}(g_{\mu_{1}\mu_{2}}g_{\mu_{3}\mu_{4}}-g_{\mu_{1}\mu_{4}}g_{\mu_{2}\mu_{3}})$
$\displaystyle+f^{a_{1}a_{4}m}f^{a_{2}a_{3}m}(g_{\mu_{1}\mu_{2}}g_{\mu_{3}\mu_{4}}-g_{\mu_{1}\mu_{3}}g_{\mu_{2}\mu_{4}})\big{]}.$
(1.38)
Figure 1.1: Gluon and ghost propagators.
Figure 1.2: Ghost-gluon coupling vertex (top) and three and four gluon
vertices with all momenta defined inwards.
### 1.4 Complete vertices
In a non-perturbative framework, we aim to have access to the complete
correlation functions whose tensor structure ought to be different from the
simple bare vertices obtained at zero order in perturbation theory. Hence, we
must build the most general structure for each correlation function under the
symmetries of the theory.
The tensor structure of the gluon propagator is completely determined by the
Slavnov-Taylor identity (a relation between correlation functions that
follows from the gauge invariance of the theory, expressing the symmetries of
the classical theory through the quantum expectation values; such relations
are also called generalized Ward identities) together with the gauge
condition – see [6, 40]. The Landau gauge Slavnov-Taylor identity for the
gluon propagator reads [41]
$\partial^{\mu}_{x}\partial^{\nu}_{y}\expectationvalue{T\{A_{\mu}^{a}(x)A_{\nu}^{b}(y)\}}=0$
(1.39)
which fixes the orthogonal form of the propagator. Therefore, in the Landau
gauge, this results in
$D_{\mu\nu}^{ab}(p)=\delta^{ab}D(p^{2})\left[g_{\mu\nu}-\frac{p_{\mu}p_{\nu}}{p^{2}}\right]$
(1.40)
with its coefficient differing from the tree-level form by a form factor
$D(p^{2})$.
For higher order correlation functions we distinguish the gluon correlation
functions $G_{\mu_{1}...\mu_{n}}^{a_{1}...a_{n}}$ obtained with (1.20) from
the pure gluon vertex $\Gamma_{\mu_{1}...\mu_{n}}^{a_{1}...a_{n}}$ obtained
with the removal of the external propagators. For the three gluon vertex we
thus define
$\displaystyle\expectationvalue{A_{\mu_{1}}^{a_{1}}(p_{1})A_{\mu_{2}}^{a_{2}}(p_{2})A_{\mu_{3}}^{a_{3}}(p_{3})}=(2\pi)^{4}\delta(p_{1}+p_{2}+p_{3})G_{\mu_{1}\mu_{2}\mu_{3}}^{a_{1}a_{2}a_{3}}(p_{1},p_{2},p_{3})$
(1.41) $\displaystyle
G_{\mu_{1}\mu_{2}\mu_{3}}^{a_{1}a_{2}a_{3}}(p_{1},p_{2},p_{3})=D_{\mu_{1}\nu_{1}}^{a_{1}b_{1}}(p_{1})D_{\mu_{2}\nu_{2}}^{a_{2}b_{2}}(p_{2})D_{\mu_{3}\nu_{3}}^{a_{3}b_{3}}(p_{3})\Gamma_{\nu_{1}\nu_{2}\nu_{3}}^{b_{1}b_{2}b_{3}}(p_{1},p_{2},p_{3}).$
(1.42)
Analogous expressions can be considered for the four gluon vertex.
Figure 1.3: Three and four gluon vertices with external propagators removed.
Notice that the average for the three gluon correlation function is computed
as
$\expectationvalue{A_{\mu_{1}}^{a_{1}}(x_{1})A_{\mu_{2}}^{a_{2}}(x_{2})A_{\mu_{3}}^{a_{3}}(x_{3})}=\frac{\int\mathcal{D}AA_{\mu_{1}}^{a_{1}}(x_{1})A_{\mu_{2}}^{a_{2}}(x_{2})A_{\mu_{3}}^{a_{3}}(x_{3})e^{i\int
d^{4}x\mathcal{L}}}{\int\mathcal{D}Ae^{i\int d^{4}x\mathcal{L}}}.$ (1.43)
To compute these higher order correlation functions we construct their tensor
structures by taking into account the symmetries of the system, namely Bose
symmetry, which allows the free exchange of any pair of indistinguishable
particles together with their associated quantum numbers. Proceeding this way
we construct the most general form for these objects. This construction will
be presented in chapter 3.
It is also important to make a further distinction between the full (gluon)
correlation functions $G$ and the one particle irreducible (1PI) functions
$\Gamma$, which receive no contributions from disconnected diagrams and
cannot be separated into lower order diagrams by cutting a single propagator
– see [6, 40]. The 1PI functions are the objects we are interested in
obtaining from the lattice – further details will be given when considering
the four gluon vertex in section 3.7.
### 1.5 Regularization and Renormalization
In general, quantum field theories involve divergences beyond the ones dealt
with by the Faddeev-Popov method, and these also need to be taken care of.
The theory is first regularized, rendering it finite. In general this is done
by introducing a parameter into the theory that controls the divergences. In
a perturbative approach this could be an ultraviolet momentum cut-off or
dimensional regularization, for example. The introduction of a finite
space-time lattice with spacing $a$ is a common regularization procedure,
with the advantage of allowing numerical simulations.
The theory is then renormalized by rescaling the parameters and fields of the
theory in such a way that the divergences remain removed when the
regularization parameter is eliminated.
The rescaling is performed on a finite number of parameters such as the
fields, and the fundamental constants of the theory. Following [5] a possible
rescaling procedure for QCD would be
$\displaystyle A_{\mu}^{a}\rightarrow Z_{A}^{1/2}A_{\mu}^{a},$ $\displaystyle
m\rightarrow Z_{m}Z_{\psi}^{-1}m,$ (1.44) $\displaystyle\psi\rightarrow
Z_{\psi}^{1/2}\psi,$ $\displaystyle g\rightarrow Z_{g}g,$ (1.45)
$\displaystyle\eta^{a}\rightarrow Z_{\eta}^{1/2}\eta^{a},$
$\displaystyle\xi^{-1}\rightarrow Z_{\xi}Z_{A}^{-1}\xi^{-1}$ (1.46)
where the various $Z_{i}$ are the necessary renormalization constants to
render the theory finite.
Green’s functions have associated rescaling rules constructed from the ones
above. Considering gauge fields only, the renormalization of the Green’s
functions involves $Z_{A}$. For instance, the renormalized gluon propagator
$G^{(2)}_{r}$ relates to the bare object as $G^{(2)}_{r}=Z_{A}G^{(2)}$.
Performing a renormalization procedure involves choosing a point where the
quantities are fixed to given, standard values. The momentum subtraction
(MOM) scheme is a usual choice: it fixes the renormalized Green’s function to
match the tree level value at a given momentum scale $\mu$. Again using the
gluon propagator, the constant $Z_{A}$ is found from
$D(p^{2}=\mu^{2})=Z_{A}D_{L}(\mu^{2})=\frac{1}{\mu^{2}}$ (1.47)
where $D(p^{2})$ is the renormalized form factor and $D_{L}(p^{2})$ the non-
renormalized form factor. See [42] for more details, and [43] for a lattice
dedicated description.
## Chapter 2 Lattice quantum chromodynamics
In this chapter the formulation of quantum chromodynamics on a finite
discretized lattice is presented. Lattice QCD provides both a regularization
of the theory and a formulation that allows the study of its non-perturbative
regime. This framework preserves gauge invariance and serves as an explicit
computational tool.
This chapter begins with the introduction of the lattice formalism,
constructing all objects in the discretized framework. After this, attention
will be given to some computational aspects of this work which are necessary
to compute lattice quantities. Lattice theories, with emphasis on LQCD are
presented in [44, 43, 38].
### 2.1 Euclidean formulation
The Minkowski space-time is not convenient to study functional path integrals
due to the oscillatory behaviour of the exponential of the action. We use
imaginary time, so that space-time becomes Euclidean. This is accomplished by
a Wick rotation, where the real time $t$ is rotated by $\pi/2$ into the
complex plane, $\tau=it$. The exponential becomes similar to the Boltzmann
factor in the partition function of statistical mechanics,
$\int\mathcal{D}\phi e^{iS[\phi]}\rightarrow\int\mathcal{D}\phi e^{-S_{E}[\phi]}.$
The object $S_{E}$ is the Euclidean version of the action, obtained by
performing the change of variables above. This transformation establishes the
formal connection with statistical mechanics, allowing its methods to be
applied to lattice field theories, notably Monte-Carlo methods to obtain
correlation functions. In the forthcoming analysis we consider the Euclidean
formulation of QCD, where the metric is equivalent to $\delta_{\mu\nu}$.
### 2.2 Discretization
In the lattice formulation the continuous space-time is replaced by a
4-dimensional Euclidean lattice $\Lambda$ with spacing $a$ whereby each point
is labelled by four integers, $n=(n_{1},n_{2},n_{3},n_{4})$. We consider
$n_{4}$ to be the imaginary time direction. In this work we consider
hypercubic lattices, each side having the same number of points,
$n_{i}\in[0,N-1]$.
All objects appearing in the continuum theory must be rewritten on the lattice
formulation. For a general quantum field theory with fields $\phi$, the
degrees of freedom are the classical fields $\phi(an)$ in the discrete lattice
sites. The lattice action must be built in a way that preserves as many
properties of the continuum theory as possible. However, the discretization
procedure is not unique, as can be seen from the discrete derivative, which
can take various forms,
$\displaystyle\partial_{\mu}\phi(x)=\frac{1}{a}\left(\phi(x+\hat{\mu}a)-\phi(x)\right)+\order{a}$
(2.1)
$\displaystyle\partial_{\mu}\phi(x)=\frac{1}{2a}\left(\phi(x+\hat{\mu}a)-\phi(x-\hat{\mu}a)\right)+\order{a^{2}}.$
(2.2)
This freedom in obtaining the lattice form can be used to minimize the
appearance of lattice artifacts (this freedom also opens the possibility for
improvement schemes, which modify the action so as to reduce lattice
artifacts [45] – these are not considered in this work).
On the lattice, all possible space translations are restricted to be at least
one lattice unit in size. This results in the discretization of the allowed
momenta. To see this, consider the usual continuum Fourier transform,
$\phi(x)=\int\frac{d^{4}p}{(2\pi)^{4}}\tilde{\phi}(p)e^{ipx}.$
Since $x=an$ is an integer multiple of the spacing $a$ we get
$e^{ip_{\mu}x_{\mu}}=e^{i(p_{\mu}x_{\mu}+2\pi
n_{\mu})}=e^{i(p_{\mu}+2\pi/a)x_{\mu}},$
hence the momentum $p_{\mu}$ is equivalent to $p_{\mu}+2\pi/a$, allowing us to
restrict the momentum integration to the Brillouin zone,
$-\pi/a<p_{\mu}\leq\pi/a$. This removes high frequency modes and regularizes
the theory. Thus, in infinite volume we would write
$\phi(x)=\int_{-\pi/a}^{\pi/a}\frac{d^{4}p}{(2\pi)^{4}}\tilde{\phi}(p)e^{ipx}.$
To perform numerical simulations, however, the volume of the lattice is
finite, where we impose boundary conditions,
$\phi(x+\hat{\mu}N_{\mu}a)=e^{i\theta_{\mu}}\phi(x)$. The finite volume
imposes the additional discretization of momentum. Applying the Fourier
transform to this condition
$\displaystyle\int_{-\pi/a}^{\pi/a}\frac{d^{4}p}{(2\pi)^{4}}\tilde{\phi}(p)e^{ip_{\mu}(x_{\mu}+\hat{\mu}N_{\mu}a)}$
$\displaystyle=\int_{-\pi/a}^{\pi/a}\frac{d^{4}p}{(2\pi)^{4}}\tilde{\phi}(p)e^{ip_{\mu}x_{\mu}+i\theta_{\mu}}$
$\displaystyle\Leftrightarrow e^{ip_{\mu}N_{\mu}}$
$\displaystyle=e^{i\theta_{\mu}}\leavevmode\nobreak\ (\text{no sum})$
where $\hat{\mu}$ is a unit lattice vector in the direction $\mu$. We work
with periodic boundary conditions, thus $\theta_{\mu}=0$, and we get the
discrete momentum values
$p_{\mu}=\frac{2\pi n_{\mu}}{aN_{\mu}},\ n_{\mu}\in\{-N_{\mu}/2+1,...,N_{\mu}/2\}.$ (2.3)
Notice how the use of a finite volume relates to the lowest non-zero momentum
accessible on a given lattice and also to its resolution. Having a finite
number of available momenta, the discrete Fourier transform becomes the sum,
$\phi(x)=\frac{1}{V}\sum_{n\in\Lambda}\tilde{\phi}(p_{n})e^{ip_{n}\cdot x}$
where $V=N^{4}$ is the volume of the space-time grid for the hypercubic
lattice.
Other than the discretized momentum (2.3), in this work we will also consider
the lattice perturbation theory [46] improved momentum defined by
$\hat{p}_{\mu}=\frac{2}{a}\sin\left(\frac{ap_{\mu}}{2}\right)=\frac{2}{a}\sin\left(\frac{\pi
n_{\mu}}{N}\right).$ (2.4)
This form comes from the tree-level propagator of a massless scalar field on
the lattice.
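For concreteness, a minimal Python sketch (illustrative only; not part of the
simulation code used in this work) generates the allowed momenta (2.3) and
the improved momenta (2.4) for one direction of a periodic $N^{4}$ lattice:

```python
import numpy as np

def lattice_momenta(N, a=1.0):
    """Naive momenta, eq. (2.3), and improved momenta, eq. (2.4),
    for one direction of an N^4 periodic hypercubic lattice."""
    n = np.arange(-N // 2 + 1, N // 2 + 1)   # n_mu in {-N/2+1, ..., N/2}
    p = 2.0 * np.pi * n / (a * N)            # eq. (2.3)
    p_hat = (2.0 / a) * np.sin(a * p / 2.0)  # eq. (2.4)
    return n, p, p_hat

n, p, p_hat = lattice_momenta(N=8)
# p_hat ~ p for small momenta; the two differ most near the edge
# of the Brillouin zone, p = pi/a.
```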
The general path integral quantization scheme is built analogously to the
continuum formulation. The partition function is constructed as
$\mathcal{Z}=\int\mathcal{D}\phi e^{-S_{E}[\phi]}$ (2.5)
with the field measure replaced by a finite product
$\mathcal{D}\phi=\prod_{n\in\Lambda}d\phi(n)$ (2.6)
and the expectation value of an observable is computed as
$\expectationvalue{\mathcal{O}}=\frac{1}{\mathcal{Z}}\int\mathcal{D}\phi
e^{-S_{E}(\phi)}\mathcal{O}(\phi).$ (2.7)
### 2.3 Lattice Quantum Chromodynamics
We consider the discretization of the pure Yang-Mills sector of the QCD
Lagrangian. On the lattice, the gluon fields appear in order to preserve
invariance under local gauge transformations, $\psi(n)\rightarrow
V(n)\psi(n)$, where $V(n)$ are $SU(3)$ group elements at the lattice sites.
In the continuum, the covariant derivative ensured the gauge invariance of
the action, and it was built so that the comparison of fields at different
points was properly defined; to this end we used the concept of a comparator.
On the lattice, two fields at neighbouring points have corresponding
transformations $V(n)$ and $V(n+a\hat{\mu})$. We define the link variable
$U_{\mu}(n)$ as a comparator connecting both points. These oriented group
elements live on the links between sites and are the fundamental fields of
this framework. They satisfy a gauge transformation analogous to the
continuum one,
$U_{\mu}(n)\rightarrow V(n)U_{\mu}(n)V^{\dagger}(n+a\hat{\mu}).$ (2.8)
The inverse link from the same lattice point is given by the adjoint operator
$U_{\mu}^{\dagger}(n-a\hat{\mu})$ – see figure 2.1.
Figure 2.1: Link variables between $n$, $n+a\hat{\mu}$ and $n-a\hat{\mu}$.
The simplest lattice action, such that the Yang-Mills form is restored when
the limit $a\rightarrow 0$ is taken, can be built from the product of
comparators in a closed loop. Namely, we consider the plaquette, fig. 2.2,
which is the simplest loop on the lattice
$U_{\mu\nu}(n)=U_{\mu}(n)U_{\nu}(n+a\hat{\mu})U_{\mu}^{\dagger}(n+a\hat{\nu})U_{\nu}^{\dagger}(n).$
(2.9)
The gauge transformation of this product depends on a single lattice point,
$U_{\mu\nu}(n)\rightarrow V(n)U_{\mu\nu}(n)V^{\dagger}(n).$ (2.10)
Hence, applying the trace we obtain a gauge invariant term,
$\Tr U^{\prime}_{\mu\nu}(n)=\Tr\left(V(n)U_{\mu\nu}(n)V^{\dagger}(n)\right)=\Tr U_{\mu\nu}(n).$ (2.11)
Figure 2.2: Schematic representation of the minimal planar lattice loop, the
plaquette in the $\mu-\nu$ plane.
Due to the form of the continuum action we need a relation between the link
variables and the continuum gauge fields $A_{\mu}(x)$. Hence we identify the
lattice comparator with the continuum one,
$U_{\mu}(n)=U(n,n+\hat{\mu})+\order{a}$, and introduce algebra valued lattice
gauge fields $A_{\mu}$ by
$U_{\mu}(n)=e^{iagA_{\mu}(n+a\hat{\mu}/2)}+\order{a}.$ (2.12)
Using the Baker-Campbell-Hausdorff formula for the product of exponentials of
matrices, $e^{A}e^{B}=e^{A+B+\frac{1}{2}[A,B]+...}$, we rewrite eq. 2.9 using
(2.12) to relate the plaquette with $F_{\mu\nu}(n)$,
$\displaystyle U_{\mu\nu}$
$\displaystyle=e^{iga^{2}\left(\partial_{\mu}A_{\nu}(n)-\partial_{\nu}A_{\mu}(n)-ig[A_{\mu}(n),A_{\nu}(n)]\right)+\order{a^{3}}}$
$\displaystyle=e^{iga^{2}F_{\mu\nu}(n)+\order{a^{3}}}.$ (2.13)
Hence, the Wilson gauge action is obtained,
$\displaystyle S_{\text{G}}[U]=$
$\displaystyle\frac{\beta}{2N_{c}}\sum_{n}\sum_{\mu,\nu}\real\Tr(\mathds{1}-U_{\mu\nu}(n))$
(2.14) $\displaystyle=$
$\displaystyle\frac{a^{4}}{2g^{2}}\sum_{n}\sum_{\mu,\nu}\Tr(F_{\mu\nu}^{2}(n))+\order{a^{2}}$
(2.15)
where we defined the inverse bare lattice coupling $\beta=2N_{c}/g^{2}$. This
action was formulated by Wilson in 1974 – see [44].
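As an illustration of eqs. (2.9) and (2.14), the following minimal Python
sketch evaluates the plaquette and the Wilson action; the storage layout
U[t,x,y,z,mu] for the $SU(3)$ link matrices is an assumption of this sketch,
not of the production code:

```python
import numpy as np

def plaquette(U, n, mu, nu):
    """Plaquette U_{mu nu}(n), eq. (2.9), with periodic boundaries;
    U[t, x, y, z, mu] holds 3x3 SU(3) link matrices."""
    N = U.shape[0]
    e = np.eye(4, dtype=int)
    n = np.asarray(n)
    n_mu = tuple((n + e[mu]) % N)   # n + mu_hat
    n_nu = tuple((n + e[nu]) % N)   # n + nu_hat
    return (U[tuple(n)][mu] @ U[n_mu][nu]
            @ U[n_nu][mu].conj().T @ U[tuple(n)][nu].conj().T)

def wilson_action(U, beta, Nc=3):
    """Wilson gauge action, eq. (2.14); the unrestricted sum over (mu, nu)
    double-counts each plaquette, hence the restriction to mu < nu here."""
    N = U.shape[0]
    S = 0.0
    for n in np.ndindex(N, N, N, N):
        for mu in range(4):
            for nu in range(mu + 1, 4):
                S += 1.0 - np.trace(plaquette(U, n, mu, nu)).real / Nc
    return beta * S
```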
In this work we consider only the gauge part of the QCD action. This
approximation, which disregards the quark dynamics, is called the quenched
approximation. Fermions are represented by Grassmann variables, and their
contribution to the generating functional can be written as a fermion
determinant. The quenched approximation consists in replacing the determinant
by a constant, which diagrammatically amounts to neglecting fermion loop
contributions. Typically, quenched lattice calculations of the hadronic
spectrum show differences of around $10$ to $20\%$ relative to experimental
data [4].
### 2.4 Gauge fixing
While physical observables are gauge independent, the computation of
correlation functions requires choosing a gauge. In fact, gauge dependent
quantities can be shown to vanish if no gauge is fixed – Elitzur’s theorem
[47].
In this work we consider the Landau gauge, which in the continuum reads
$\partial_{\mu}A^{\mu}(x)=0$, or equivalently $p_{\mu}A^{\mu}(p)=0$ in
momentum space. On the lattice, it can be shown [38] that this is equivalent
to finding a stationary point of the functional
$F_{U}[V]=\frac{1}{VN_{d}N_{c}}\sum_{n,\mu}\Tr\left[V(n)U_{\mu}(n)V^{\dagger}(n+\hat{\mu})\right],$
(2.16)
where $N_{d}$ and $N_{c}$ are the number of dimensions and colours,
respectively, and $V$ is the volume of the lattice – not to be confused with
the gauge transformation $V(n)$.
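A direct transcription of (2.16) in the same illustrative style (again
assuming the U[t,x,y,z,mu] layout, and taking the real part of the trace)
reads:

```python
import numpy as np

def gauge_functional(U, V):
    """F_U[V] of eq. (2.16) for links U[t,x,y,z,mu] and a gauge
    transformation field V[t,x,y,z] of SU(3) matrices; Landau gauge
    fixing searches for (local) maxima of this functional in V."""
    N, Nd, Nc = U.shape[0], 4, U.shape[-1]
    e = np.eye(Nd, dtype=int)
    F = 0.0
    for n in np.ndindex(N, N, N, N):
        for mu in range(Nd):
            n_mu = tuple((np.asarray(n) + e[mu]) % N)
            F += np.trace(V[n] @ U[n][mu] @ V[n_mu].conj().T).real
    return F / (N**4 * Nd * Nc)   # volume V = N^4 in the notation above
```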
However, in general the functional eq. 2.16 has many extrema – this problem
arises already in the continuum formulation. Ideally, we want the gauge
condition (the hypersurface defined in section 1.2) to intersect each gauge
orbit exactly once, so that a single representative is chosen from each
orbit. However, Gribov [48] found that the Faddeev-Popov procedure alone is
not sufficient, and that there are multiple solutions of the gauge condition
still related by a gauge transformation (Gribov considered non-abelian gauge
theories in the Coulomb gauge $\partial_{i}A_{i}=0$; this was later
generalized to a 4-dimensional hypercubic and periodic lattice for any
$SU(N_{c})$ gauge theory [49]). These multiple solutions, due to the multiple
intersections of the hypersurface with each orbit, are the so-called Gribov
copies.
The presence of the copies implies the existence of various stationary points
of the functional. Gribov suggested additional constraints on the gauge field
configuration space, restricting it to the region of maxima of (2.16).
However, this Gribov region
$\Omega=\{A:\partial_{\mu}A_{\mu}=0,M[A]\geq 0\}$, where $M$ is the
Faddeev-Popov matrix eq. 1.28 and which contains all local maxima of the
functional, is still not free of Gribov copies. Further restrictions define a
subspace containing only the global maxima of $F_{U}$ – the so-called
fundamental modular region. It can be shown that on the lattice this
restriction guarantees the absence of Gribov copies in this region [50].
Numerically, the search is limited to a local maximum – in this work we used
the steepest descent method, described in [51]. The computer code uses both
the Chroma [52] and PFFT [53] libraries.
A review of gauge fixing on the lattice can be found in [54]. It is worth
noting that the effect of Gribov copies on the lattice gluon propagator was
studied in [55, 56], with the conclusion that it is small – less than
$10\%$. In this work we do not consider the effect of the Gribov copies.
### 2.5 Correlation functions from the lattice
We are interested in computing correlation functions involving gauge fields
$A_{\mu}$. On the lattice, the gluon field can be computed from the links eq.
2.12
$agA_{\mu}(x+\hat{\mu}/2)=\frac{1}{2i}\left[U_{\mu}(n)-U_{\mu}^{\dagger}(n)\right]-\frac{1}{6i}\Tr\left[U_{\mu}(n)-U_{\mu}^{\dagger}(n)\right]$
(2.17)
up to $\order{a^{2}}$ corrections. The second term ensures that the field is
traceless, $\Tr A_{\mu}=0$. The momentum space lattice gauge field is obtained
with the discrete Fourier transform defined before,
$A_{\mu}(p)=\sum_{x}e^{-ip\cdot(x+\hat{\mu}/2)}A_{\mu}(x+\hat{\mu}/2)$ (2.18)
with $p=2\pi n/aN$ and $x=an$ where $n_{\mu}\in[-N/2+1,N/2]$.
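In the same sketch form (lattice units $a=g=1$ and the U[t,x,y,z,mu] layout
assumed), eqs. (2.17) and (2.18) amount to:

```python
import numpy as np

def gluon_field(U):
    """A_mu from the links, eq. (2.17): anti-Hermitian part of U divided
    by 2i, with the trace part removed so that Tr A_mu = 0."""
    W = (U - np.conj(np.swapaxes(U, -1, -2))) / 2j
    tr = np.trace(W, axis1=-2, axis2=-1)[..., None, None]
    return W - tr * np.eye(U.shape[-1]) / U.shape[-1]

def gluon_field_momentum(A):
    """Discrete Fourier transform, eq. (2.18); the midpoint x + mu_hat/2
    contributes an extra phase exp(-i p_mu / 2) to the component A_mu."""
    N = A.shape[0]
    Ap = np.fft.fftn(A, axes=(0, 1, 2, 3))
    p = 2.0 * np.pi * np.fft.fftfreq(N)     # momentum components, FFT order
    for mu in range(4):
        shape = [1, 1, 1, 1]
        shape[mu] = N
        phase = np.exp(-0.5j * p).reshape(shape + [1, 1])
        Ap[..., mu, :, :] = Ap[..., mu, :, :] * phase
    return Ap
```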
The gluon two point function is extracted from the average over gauge field
configurations by
$\expectationvalue{A_{\mu_{1}}^{a_{1}}(p_{1})A_{\mu_{2}}^{a_{2}}(p_{2})}=D_{\mu_{1}\mu_{2}}^{a_{1}a_{2}}(p_{1})V\delta(p_{1}+p_{2}).$
(2.19)
In our numerical framework, we have access to algebra valued gauge fields
$A_{\mu}(p)$ from eqs. 2.17 and 2.18. To form a scalar in the colour sector we
consider a trace and a suitable Lorentz contraction for the space-time
indices. Considering the usual continuum tensor description for the gluon
propagator eq. 1.40, the form factor $D(p^{2})$ is obtained by
$D(p^{2})=\frac{2}{(N_{c}^{2}-1)(N_{d}-n)}\sum_{\mu}\expectationvalue{\Tr\left[A_{\mu}(p)A_{\mu}(-p)\right]}$
(2.20)
where $n=0$ if $p=0$, or $1$ otherwise.
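Since $A_{\mu}(-p)=A_{\mu}^{\dagger}(p)$ for the Hermitian field (2.17), the
single-configuration contribution to (2.20) reduces to a sum of squared
matrix elements; a sketch:

```python
import numpy as np

def propagator_form_factor(Ap, Nc=3, Nd=4):
    """Single-configuration contribution to D(p^2), eq. (2.20); the
    Monte-Carlo average over configurations is taken outside.  Uses
    Tr[A_mu(p) A_mu(-p)] = sum_{ij} |(A_mu)_{ij}(p)|^2."""
    s = np.sum(np.abs(Ap) ** 2, axis=(-3, -2, -1))  # sum over mu and colour
    n = np.ones(Ap.shape[:4])
    n[0, 0, 0, 0] = 0.0        # n = 0 at zero momentum, 1 otherwise
    return 2.0 * s / ((Nc**2 - 1) * (Nd - n))
```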
For the gluon propagator, the analysis of the colour indices is simple, since
only $\delta^{ab}$ can be used. For the three and four gluon vertices we again
access the product of gauge fields to which we apply the trace to obtain a
scalar in colour space,
$\displaystyle\expectationvalue{\Tr\left[A_{\mu_{1}}(p_{1})A_{\mu_{2}}(p_{2})A_{\mu_{3}}(p_{3})\right]}=V\delta(\sum_{i}p_{i})G_{\mu_{1}\mu_{2}\mu_{3}}(p_{1},p_{2},p_{3})$
(2.21)
$\displaystyle\expectationvalue{\Tr\left[A_{\mu_{1}}(p_{1})A_{\mu_{2}}(p_{2})A_{\mu_{3}}(p_{3})A_{\mu_{4}}(p_{4})\right]}=V\delta(\sum_{i}p_{i})G_{\mu_{1}\mu_{2}\mu_{3}\mu_{4}}(p_{1},p_{2},p_{3},p_{4}).$
(2.22)
The $G$’s represent the Green’s functions with colour indices absorbed by the
trace operation and whose form depends on the Lorentz tensor basis considered
– these will be properly defined in chapter 3.
### 2.6 Computational aspects
#### 2.6.1 Expectation values on the lattice
In the Euclidean formulation of the theory, the expectation value of some
field dependent operator is given by
$\expectationvalue{\mathcal{O}}=\frac{1}{\mathcal{Z}}\int\mathcal{D}U\mathcal{O}(U)e^{-S_{E}[U]}.$
(2.23)
To obtain numerical results we consider only a finite number of field
configurations. These are drawn by importance sampling according to the
Boltzmann weight of the Euclidean action, and the integrals are estimated by
Monte-Carlo methods [57].
A set of gauge field configurations $\{U_{i}\},\ i=1,...,n$ (by a gauge field
configuration we mean that each site of the lattice is attributed a value of
the field $U$, i.e. a Lorentz vector of $SU(3)$ matrices) is generated
according to the probability distribution
$P(U)=e^{-S_{E}(U)}/\mathcal{Z}.$ (2.24)
The sequence is obtained by a Markov chain which generates the configurations
one after another according to a transition amplitude $P(U_{i}\rightarrow
U_{j})$ (the precise form of the amplitude depends on the chosen method [43])
depending solely on the predecessor configuration. This transition amplitude
should create a sequence distributed according to $P(U)$ in the large $n$
limit.
When the set $\{U_{i}\},\ i=m,...,n$ is distributed according to $P(U)$, it
is said to be thermalized. From the thermalized set we choose $N$
configurations, each separated from the previous one by $k$ Markov steps in
order to reduce correlations among them. The set $\{U_{i}\},\ i=1,...,N$ is
the one used for the computation. The configurations considered in this
thesis [21] were obtained using a combination of the over-relaxation and heat
bath methods according to [38].
Having a finite number of configurations following the
$\exp(-S_{E}(U))/\mathcal{Z}$ probability distribution, the expectation value
(2.23) is estimated by the sample mean
$\bar{\mathcal{O}}=\frac{1}{N}\sum_{i=1}^{N}\mathcal{O}(U_{i}),$ (2.25)
which corresponds to the correct average $\expectationvalue{\mathcal{O}}$ in
the large $N$ limit.
If all configurations in the sample were statistically independent, having no
correlations, the sample average would be normally distributed around the
true expectation value, with the error estimate
$\expectationvalue{\mathcal{O}}=\bar{\mathcal{O}}\pm\sigma/\sqrt{N}$, where
$\sigma$ is the standard deviation of the sample. To estimate the uncertainty
of an average over the configurations without assuming a statistical
distribution inherent to the variables, we use the bootstrap method defined
below.
#### Setting the scale
Lattice quantities are, in general, dimensionless, with values given in units
of the lattice spacing $a$. To obtain physical values we need to set this
scale by choosing a suitable value for $a$, which is not an input parameter
of the formulation.
To do this we match a given dimensionless lattice object, $am_{g}$, with an
experimental value ($m_{g,\text{phys}}$). The lattice spacing is then obtained
by
$a=\frac{am_{g}}{m_{g,\text{phys}}}.$ (2.26)
The lattice spacing of the configuration ensembles used in this work was
computed from the string tension data in [9]. The string tension is defined
from the quark-antiquark potential, which is related to the large $n_{4}$
behaviour of the lattice expectation value of a planar rectangular loop
(analogous to the square loop, eq. 2.9), see [38].
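As a numerical illustration of eq. (2.26) with invented numbers (not the
actual values of the ensembles used here), matching a dimensionless lattice
string tension to its physical value fixes the spacing:

```python
# Hypothetical example: a*sqrt(sigma) measured on the lattice, matched to
# the physical string tension sqrt(sigma) ~ 440 MeV; hbar*c converts to fm.
hbar_c = 197.327                       # MeV fm
a_sqrt_sigma = 0.22                    # dimensionless lattice number
sqrt_sigma_phys = 440.0                # MeV
a = a_sqrt_sigma / sqrt_sigma_phys * hbar_c   # eq. (2.26), in fm
print(f"a = {a:.3f} fm")               # -> a = 0.099 fm
```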
#### 2.6.2 Bootstrap method
In this thesis, all statistical errors from the simulations are estimated
using the bootstrap method. The bootstrap is a distribution independent method
that can be used to estimate the statistical error of any quantity
$\mathcal{S}$. A review of the method can be found in [58].
Consider a given initial sample of $N$ elements $\{U_{i}\},\ i=1,...,N$
obtained from an unknown distribution (in our case the sample is the set of
gauge field configurations). We are interested in obtaining the statistical
error associated with a quantity $\mathcal{S}(U)$, which in this work
corresponds to a mean value of some quantity over the configurations.
The method considers the empirical distribution for the original sample,
assigning the probability $1/N$ to each of the observed elements. A bootstrap
sample is constructed by random sampling with replacement from this
probability distribution. We obtain $N_{b}$ random samples
$U_{b}^{j}=(U^{j}_{1},...,U^{j}_{N})$ from the original, of the same size $N$.
For each sample $j$, the quantity $\mathcal{S}^{j}\equiv\mathcal{S}(U^{j})$
is computed. The idea of the method is that we now have a proper random
variable $\mathcal{S}^{j}$ with a known distribution – the empirical one.
To obtain confidence intervals without assuming the underlying distribution,
the bootstrap method provides asymmetric boundaries around the expectation
value. Having $N_{b}$ values $\mathcal{S}^{j}$, from which we obtain
$\bar{\mathcal{S}}$, the upper and lower errors are estimated using confidence
intervals,
$\displaystyle\sigma_{\text{up}}=\mathcal{S}_{\text{up}}-\bar{\mathcal{S}},$
$\displaystyle\sigma_{\text{down}}=\bar{\mathcal{S}}-\mathcal{S}_{\text{down}}$
(2.27)
where $\mathcal{S}_{\text{up}}$ and $\mathcal{S}_{\text{down}}$ are found
such that
$\displaystyle\frac{\#\{\mathcal{S}^{j}<\mathcal{S}_{\text{up}}\}}{N_{b}}=\frac{1+C}{2},$
$\displaystyle\frac{\#\{\mathcal{S}^{j}<\mathcal{S}_{\text{down}}\}}{N_{b}}=\frac{1-C}{2}$
(2.28)
where $C\in[0,1]$ is the chosen confidence level and $\#\{\cdot\}$ denotes
the cardinality of a set.
In this work $C$ was chosen to be $C=0.675$, representing a $67.5\%$
probability of the true value falling within the interval. The uncertainty
was taken to be the larger of the two errors.
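The procedure of eqs. (2.27)-(2.28) amounts to only a few lines; the
following Python sketch (with the sample mean as the estimator, an arbitrary
choice for illustration) returns the value and the asymmetric errors:

```python
import numpy as np

def bootstrap_error(samples, estimator=np.mean, n_boot=1000, C=0.675, seed=0):
    """Bootstrap error following eqs. (2.27)-(2.28): resample with
    replacement, evaluate the estimator on each resample and read off
    the asymmetric confidence interval at level C."""
    rng = np.random.default_rng(seed)
    samples = np.asarray(samples)
    N = len(samples)
    S = np.array([estimator(samples[rng.integers(0, N, size=N)])
                  for _ in range(n_boot)])
    S_bar = estimator(samples)
    S_up = np.quantile(S, (1.0 + C) / 2.0)     # eq. (2.28), upper bound
    S_down = np.quantile(S, (1.0 - C) / 2.0)   # eq. (2.28), lower bound
    return S_bar, S_up - S_bar, S_bar - S_down

# synthetic "measurements"; the quoted uncertainty is the larger error
x = np.random.default_rng(1).normal(1.0, 0.3, size=100)
value, err_up, err_down = bootstrap_error(x)
```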
## Chapter 3 Gluon tensor bases
In this chapter we describe how the discretization of space-time affects the
tensor representations of the gluon propagator. Although we consider these
structures for the gluon propagator, we will find special kinematic
configurations for which the lattice structures provide results similar to
those obtained using the continuum tensor basis.
Some general aspects of discretization effects and possible corrections
methods will be also introduced. Finally, the three and four gluon vertices
will be discussed, and corresponding tensor bases will be shown.
### 3.1 Tensor representations on the lattice
The $O(4)$ symmetry of the Euclidean continuum theory is replaced by the
$H(4)$ group when space-time is discretized on a hypercubic lattice. This
group consists of multiples of $\pi/2$ rotations around the coordinate axes
and of parity transformations of the whole lattice, i.e. inversions of the
axes (the corresponding operators are shown in appendix B).
The definition of a tensor presupposes an underlying group of
transformations, which for the lattice is $H(4)$. Gluon correlation functions
are tensors with respect to the $H(4)$ group, and therefore identifying the
tensor bases for this group is crucial to achieve a proper description of the
gluon Green’s functions. These tensor structures differ from the continuum
tensors due to the weaker symmetry restrictions.
To see how this affects the construction of tensors we consider an
$N_{d}$-dimensional vector space and a transformation with matrix
representation $M$. A vector $p$ in this space transforms as (the summation
convention over repeated indices is used throughout this chapter)
$\displaystyle p^{\prime}=Mp,$ $\displaystyle p^{\prime}_{\mu}=M_{\mu\nu}p_{\nu},$ (3.1)
with components $p_{\mu}$ defined with respect to a given coordinate basis.
The generalization to higher order vector spaces is given by the definition of
tensors with respect to the given transformation. A $k$-rank tensor is a
quantity described in general by $N_{d}^{k}$ components
$T_{\mu_{1}...\mu_{k}}$ in a given coordinate basis with the following
transformation law
$T^{\prime}_{\mu_{1}...\mu_{k}}=M_{\mu_{1}\nu_{1}}...M_{\mu_{k}\nu_{k}}T_{\nu_{1}...\nu_{k}}.$
(3.2)
This definition includes vectors ($k=1$), as well as scalars ($k=0$) which are
unchanged by the group transformations.
In an $O(N_{d})$ symmetric space, scalar products of vectors are unchanged
under the group transformations, which are implemented by orthogonal
$N_{d}\times N_{d}$ matrices, $M_{\mu\nu}=M_{\nu\mu}^{-1}$. To see how the
definition (3.2) restricts the form of tensors, consider the case of a scalar
quantity $S$ depending on a vector $p$. Being a scalar, it remains unchanged
by the transformation, $S(p^{\prime})=S(p)$. These two conditions restrict
the dependence of $S$ on $p$ to be through the scalar product, $S(p^{2})$,
since $p^{2}$ is an $O(N_{d})$ group invariant.
Consider instead a vector valued function $\vec{V}(p)$ depending on the
vector $p$. Using its transformation law
$V^{\prime}_{\mu}(p^{\prime})=M_{\mu\nu}V_{\nu}(p)$, we conclude that the
most general form for its components is
$V_{\mu}(p)=V(p^{2})p_{\mu}$ (3.3)
where $V(p^{2})$ is a scalar function of the vector $p$ [59].
An important case for this work is that of second rank tensors
$D_{\mu\nu}(p)$ depending on a single vector $p$. From (3.2), the
transformation law is
$D^{\prime}_{\mu\nu}(p^{\prime})=M_{\mu\rho}M_{\nu\sigma}D_{\rho\sigma}(p)$.
The most general form for such a quantity is
$D_{\mu\nu}(p)=A(p^{2})\delta_{\mu\nu}+B(p^{2})p_{\mu}p_{\nu}.$ (3.4)
This tensor will be used in the description of the gluon propagator, to
evaluate how the Landau gauge Slavnov-Taylor identity, eq. 1.39, acts on the
lattice. With these three examples we see that continuum quantities have a
simple structure imposed by the continuum symmetry. We are interested in
performing a similar construction for the lattice symmetry.
The $H(N_{d})$ group is a discrete subgroup of $O(N_{d})$ in an
$N_{d}$-dimensional space. It consists of $\pi/2$ rotations as well as parity
inversions of each of the axes. However, it can be shown [13] that each group
transformation can be written as a composition of permutations and inversions
of the components – signed permutations. This is seen by considering a
2-dimensional example: a clockwise $\pi/2$ rotation of a vector
$c=(c_{1},c_{2})$ to $c^{\prime}=(c_{2},-c_{1})$ can be achieved by the
inversion of the first component followed by a permutation of both
components; the generalization to higher dimensional spaces is
straightforward, since these transformations may be applied independently to
each hyperplane. The reason why it is worth decomposing the $H(N_{d})$ group
into these two smaller sets is that they are disjoint (permutations
correspond to transformations with determinant $+1$, while inversions have
determinant $-1$), and thus can be analysed independently. Hence, to find
objects transforming properly under the $H(N_{d})$ group it is sufficient to
find those which transform properly under both permutations and inversions.
#### 3.1.1 Scalars under the hypercubic group
Proceeding as for the continuum case, we start with the scalar functions on
the lattice depending on a single momentum vector $p$. We inspect the vector
dependence of these objects which must be invariant under permutations and
inversions of components. It can be easily seen that the class of objects
$p^{[2n]}\equiv\sum_{\mu}p_{\mu}^{2n},\ n\in\mathbb{N}$ (3.5)
satisfies this property, and each of them is a hypercubic invariant (the case
$p^{[2]}=p^{2}$ is the only such invariant in the continuum, i.e. for
$O(N_{d})$). One might therefore expect a general momentum dependent scalar
function to depend on all of these objects. It was shown in [60], however,
that only $N_{d}$ invariants are linearly independent, which defines a
minimal set of invariants.
The interesting cases for this work are the scalar functions depending on a
4-dimensional vector $p$ which will generally change to
$S(p^{2})\rightarrow S_{L}(p^{2},p^{[4]},p^{[6]},p^{[8]})$ (3.6)
when passing to the lattice. The choice of the four lowest mass dimension
independent invariants is done for practical reasons, but is nonetheless
arbitrary.
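A short illustrative Python check confirms that the invariants (3.5) cannot
distinguish momenta related by a signed permutation of the components:

```python
import numpy as np

def hypercubic_invariants(p):
    """The four lowest-dimension H(4) invariants of eqs. (3.5)-(3.6):
    p^[2], p^[4], p^[6], p^[8]."""
    p = np.asarray(p, dtype=float)
    return tuple(np.sum(p ** (2 * n)) for n in range(1, 5))

# a signed permutation of the components leaves all invariants unchanged
print(hypercubic_invariants([1.0, 2.0, 0.0, 0.0]))
print(hypercubic_invariants([-2.0, 0.0, 1.0, 0.0]))   # same output
```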
#### 3.1.2 Hypercubic vectors
We now generalize the notion of a vector to the hypercubic symmetric space.
As mentioned, we find its properties by analysing permutations and inversions
independently.
Starting with the permutations, and given that any general transformation of
this kind can be written as a product of exchanges of only two components –
transpositions [59] – we focus on those. Hence, an object transforming as a
vector under arbitrary transpositions will also transform as a vector under a
general permutation. Performing a transposition of components
$\sigma\leftrightarrow\rho$, the transformation for the vector components
$p_{\mu}$ in an $N_{d}$-dimensional space is
$\displaystyle p^{\prime}_{\nu}=p_{\nu},\leavevmode\nobreak\
\nu\neq\sigma,\rho$ $\displaystyle p^{\prime}_{\sigma}=p_{\rho},$
$\displaystyle p^{\prime}_{\rho}=p_{\sigma}.$ (3.7)
This is the fundamental transformation rule for a vector; we are, however,
interested in finding the most general structure satisfying this rule.
Indeed, any power of the components, $(p_{\mu})^{n}$, also transforms as a
vector under transpositions (a brief proof is shown in section B.1.1).
However, to be a proper vector under $H(N_{d})$ it also needs to satisfy the
transformation under inversions. Taking the same $N_{d}$-dimensional vector
$p$ and applying an inversion on its $\sigma$-th component, the transformed
components are
$\displaystyle p^{\prime}_{\mu}=p_{\mu},\leavevmode\nobreak\ \mu\neq\sigma,$
$\displaystyle p^{\prime}_{\sigma}=-p_{\sigma}.$ (3.8)
To be a vector, the polynomial should transform exactly as (3.8)
$\displaystyle(p^{\prime}_{\mu})^{n}=(p_{\mu})^{n},\leavevmode\nobreak\
\mu\neq\sigma,$ $\displaystyle(p^{\prime}_{\sigma})^{n}=-(p_{\sigma})^{n},$
(3.9)
and for this to be true, $n$ is necessarily an odd integer, otherwise an even
integer would spoil the transformation by eliminating the minus sign of the
inversion. Therefore the most general structure satisfying the vector
transformation is
$v_{\nu}^{n}=p_{\nu}^{2n+1},\leavevmode\nobreak\ n\in\mathbb{N}.$ (3.10)
Moreover, any linear combination of these vectors is also a vector (by
linearity), and thus any function whose Taylor expansion includes only odd
powers of a vector component also constitutes a lattice vector. We now see
that the sinusoidal improved momentum
$\hat{p}_{\mu}=\frac{2}{a}\sin\left(\frac{ap_{\mu}}{2}\right)$ (3.11)
arising from lattice perturbation theory, eq. 2.4, is a proper lattice
vector, since it transforms correctly under the $H(4)$ group.
A general lattice vector is then a linear combination of $N_{d}$ vectors
chosen from the infinitely many possible vectors of the form (3.10); choosing
the lowest powers,
$V_{\mu}(p)=\sum_{n=1}^{N_{d}}V_{n}\,p_{\mu}^{2n-1},$ (3.12)
where the $V_{n}$ are lattice scalar functions. The sum is limited by the
dimension of space, since in an $N_{d}$-dimensional space only $N_{d}$
linearly independent basis vectors can be constructed.
### 3.2 Lattice basis – Gluon propagator
We now consider the gluon propagator – a second order tensor depending on a
single vector, the momentum $p$. In colour space the lattice gluon propagator
is a two dimensional tensor having the same form as in the continuum
formulation. Indeed, $\delta^{ab}$ is the only second order $SU(3)$ tensor
available. Thus we focus on the space-time structure of the propagator. Being
a second order tensor depending on a single momentum $D_{\mu\nu}(p)$, the
gluon propagator transforms as
$D^{\prime}_{\mu\nu}(p^{\prime})=M_{\mu\sigma}M_{\nu\rho}D_{\sigma\rho}(p),$ (3.13)
where $M\in H(4)$ is a matrix representation of an arbitrary group element.
Following [13] we consider the splitting of the tensor basis into diagonal
and off-diagonal terms. This is related to the way the hypercubic
transformations act on lattice tensors, which do not mix the two groups of
elements $D_{\mu\mu}$ and $D_{\mu\nu},\ \mu\neq\nu$ (see section B.1.2 for a
proof of this property). Accordingly, the diagonal and off-diagonal tensor
elements will be parametrized differently, i.e. by different form factors.
The most general objects from which to construct the tensor basis are
$\{\delta_{\mu\nu},p_{\mu}^{m}p_{\nu}^{n}\}$. For the second element, since
the transformation rule for the tensor applies independently to each momentum
index, an argument similar to the one used for vectors in section 3.1.2
restricts $m$ and $n$ to be odd integers. Thus, we obtain the set of most
general possible tensor basis elements
$\{\delta_{\mu\nu},p_{\mu}^{2k+1}p_{\nu}^{2s+1}\},\ k,s\in\mathbb{N}.$ (3.14)
For the propagator itself, notice that a symmetric second order tensor has
only $N_{d}(N_{d}+1)/2$ free parameters, i.e. for 4-dimensional space it is
fully described by 10 form factors (in principle, further conditions implied
by the Slavnov-Taylor identity and gauge fixing reduce the number of
independent parameters). However, for reasons that will become evident when
analysing the results, we consider only two reduced bases for the propagator,
with three and five form factors.
Consider the case of approximating the tensor by three form factors. The
possible choices for diagonal and off-diagonal terms are
$\\{\delta_{\mu\mu},p_{\mu}^{2},p_{\mu}^{4},...\\}$, and
$\\{p_{\mu}p_{\nu},p_{\mu}^{3}p_{\nu},...\\}$, respectively. Choosing the
parametrization with the lowest mass dimension terms we obtain the form
$\displaystyle
D_{\mu\mu}(p)=J(p^{2})\delta_{\mu\mu}+K(p^{2})p_{\mu}^{2},\leavevmode\nobreak\
(\text{no sum})$ $\displaystyle
D_{\mu\nu}(p)=L(p^{2})p_{\mu}p_{\nu},\leavevmode\nobreak\ \mu\neq\nu.$ (3.15)
We also consider an extended tensor basis using five form factors. Performing
the same construction as before and considering an explicit symmetrization on
the space indices for the higher order non-diagonal terms, we obtain
$\displaystyle
D_{\mu\mu}(p)=E(p^{2})\delta_{\mu\mu}+F(p^{2})p_{\mu}^{2}+G(p^{2})p_{\mu}^{4},\leavevmode\nobreak\
(\text{no sum})$ $\displaystyle
D_{\mu\nu}(p)=H(p^{2})p_{\mu}p_{\nu}+I(p^{2})p_{\mu}p_{\nu}(p_{\mu}^{2}+p_{\nu}^{2}),\leavevmode\nobreak\
\mu\neq\nu\leavevmode\nobreak\ (\text{no sum}).$ (3.16)
The extraction of the form factors involves the computation of the
corresponding projectors, which are built in section B.2. In chapter 4 these
form factors will be obtained from the lattice, and there we will introduce
continuum relations among them that follow from both the Slavnov-Taylor
identity and the gauge condition on the lattice.
Notice that the tensor basis can be built with the normal momentum $p_{\mu}$
or with the improved momentum $\hat{p}_{\mu}$ of lattice perturbation theory,
which may serve as a further improvement. However, structures mixing both
types of momenta are not considered.
Notice also that the tensor parametrization by these bases is independent of
the chosen gauge; however, the gauge choice entails different relations among
the form factors. We work in the Landau gauge, which implies orthogonality of
the gauge fields in the continuum, $p_{\mu}A_{\mu}(p)=0$.
##### Generalized diagonal kinematics
Having the general form of the lattice basis, it is important to consider
configurations for which the basis reduces to a simpler form, closer to the
continuum tensor basis. We call these generalized diagonal kinematics; their
form is specified by a single scale and possibly vanishing components. To
this group belong the fully diagonal momenta $(n,n,n,n)$, the mixed
configurations $(n,n,n,0)$ and $(n,n,0,0)$, and the on-axis momenta $(n,0,0,0)$.
For these configurations, the inclusion of certain tensor elements is
redundant, for they become linearly dependent, thus reducing the number of
possible independent terms. Namely, for diagonal momenta $(n,n,n,n)$ we get
$p_{\mu}^{2}=n^{2}\delta_{\mu\mu}$ (no sum). Therefore only a reduced number
of form factors can be extracted. Details on the changes of the lattice basis
for these kinematics, and on how the form factors are extracted, are given in
appendix B.
### 3.3 Reconstruction of tensors
To analyse how accurately a tensor basis describes the correlators from the
lattice, we perform a reconstruction procedure [13, 14]. This consists in
extracting a given set of form factors, each associated to the corresponding
basis element, from the lattice correlation function, and rebuilding the
original tensor from these functions. If the rebuilt function differs from
the original, we can infer that the basis is not complete and information was
lost during the projection. To quantify this we consider the quotient
$\mathcal{R}=\frac{\sum_{\mu\nu}|\Gamma^{\text{\tiny orig}}_{\mu\nu}|}{\sum_{\mu\nu}|\Gamma^{\text{\tiny rec}}_{\mu\nu}|}$ (3.17)
given by the sums of absolute values of the original and reconstructed
tensors (the absolute value prevents possible unintentional cancellations
among the tensor components). A value of $\mathcal{R}=1$ indicates that the
basis is complete.
The procedure follows by assuming that the correlator is described by its
basis elements $\tau^{j}$ with corresponding form factors $\gamma^{j}$,
$\Gamma=\sum_{j=1}^{N}\gamma^{j}\tau^{j}.$ (3.18)
One starts by computing each form factor $\gamma^{j}$ using the respective
projector – this is the step where information may be lost if the basis is
not complete, since in that case there are not enough form factors to fully
represent the object. This extraction is performed on the original vertex
$\Gamma^{\text{orig}}$, which in this work comes from the lattice
simulation. Using eq. 3.18 we then reconstruct the vertex and obtain
$\Gamma^{\text{rec}}$.
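As an illustration, the following sketch implements the projection-and-rebuild step for a 4×4 tensor, using a Gram-matrix projection in place of the explicit projectors of section B.2 (the basis elements and numbers are invented for the example):

```python
import numpy as np

def recon_ratio(G, basis):
    """Eq. (3.17): extract form factors for `basis`, rebuild the tensor
    via eq. (3.18), and compare with the original."""
    # Gram-matrix projection: sum_j gamma_j <tau_i, tau_j> = <tau_i, G>
    M = np.array([[np.sum(ti * tj) for tj in basis] for ti in basis])
    b = np.array([np.sum(ti * G) for ti in basis])
    gamma = np.linalg.solve(M, b)                      # form factors gamma^j
    G_rec = sum(g * t for g, t in zip(gamma, basis))   # reconstructed tensor
    return np.sum(np.abs(G)) / np.sum(np.abs(G_rec))

p = np.array([0.3, 0.1, 0.0, 0.2])
T = np.eye(4) - np.outer(p, p) / (p @ p)   # continuum Landau-gauge basis element

# Toy 'lattice' tensor with a small diagonal p^4 piece outside the continuum basis:
G = 1.7 * T + 0.05 * np.diag(p**4)
print(recon_ratio(G, [T]))                                         # != 1: incomplete
print(recon_ratio(G, [np.eye(4), np.outer(p, p), np.diag(p**4)]))  # ~1: complete here
```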
### 3.4 Z4 averaging
In the continuum formulation, rotational invariance means that the form
factors depend only on the magnitude of the momenta, i.e., there exists a
rotational ‘degeneracy’ in the contribution from those points of momentum
space. On the lattice, the continuum symmetry is broken to a discrete
subgroup: Poincaré invariance is reduced to $\pi/2$ rotations, inversions,
and fixed-length translations (considering periodic boundary conditions) [61].
All points connected by these symmetry transformations have the same $H(4)$
invariants, which label the orbits of the group and are invariant under the
transformations. Therefore, these points should give the same contribution
when computing lattice correlation functions (up to statistical fluctuations).
Hence, to help suppress statistical fluctuations we weight equally the
contribution from all points in the subspace defined by all possible group
transformations of a given lattice point. This is accomplished by averaging
all computed quantities over all points in the same orbit, which amounts to
$4!\times 2^{4}=384$ points for a generic momentum configuration in four
dimensions (degenerate configurations have smaller orbits).
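A minimal sketch of this orbit average, assuming integer lattice momenta (the function names are ours):

```python
import itertools
import numpy as np

def h4_orbit(p):
    """All distinct images of a lattice momentum under H(4):
    coordinate permutations combined with sign flips."""
    return {tuple(s * q for s, q in zip(signs, perm))
            for perm in itertools.permutations(p)
            for signs in itertools.product([1, -1], repeat=len(p))}

def orbit_average(f, p):
    """Average a quantity over the H(4) orbit of p."""
    orbit = h4_orbit(p)
    return sum(f(np.array(q)) for q in orbit) / len(orbit)

print(len(h4_orbit((1, 2, 3, 4))))   # 384 = 4! * 2^4 for distinct nonzero components
print(len(h4_orbit((1, 1, 1, 1))))   # 16: degenerate components give smaller orbits
print(orbit_average(lambda q: q[0]**4, (1, 2, 3, 4)))  # 88.5 = (1+16+81+256)/4
```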
### 3.5 Lattice artifacts and correction methods
In order to properly evaluate the form factors that characterize the
correlation functions, it is necessary to account for the artifacts arising
from the discretization of space. These systematic errors become noticeable
when the precision of a computation is high enough that the statistical
errors are small compared with these ‘defects’. Since the gluon propagator is
computed with a good degree of precision, the removal of these artifacts
becomes relevant.
We distinguish two types of artifacts related to the introduction of the
lattice. First, finite size effects due to the use of a finite spacing $a$ as
well as a finite volume $V$; these were studied in [62], where it was found
for the gluon propagator that the interplay between the two effects is far
from trivial. Second, what we call hypercubic artifacts arise from the
breaking of $O(4)$ symmetry and the splitting of each $O(4)$ orbit into
multiple $H(4)$ orbits. We consider the latter in this section.
Since we are interested in extracting scalar form factors, we consider the
behaviour of lattice scalar functions and how they relate to the corresponding
continuum objects. Any scalar function with respect to a given symmetry group
is invariant along the orbit generated by the corresponding group symmetry
applied to a given point. For the $H(4)$ group each orbit is specified by the
four group invariants
$\\{p^{[2]},\ p^{[4]},\ p^{[6]},\ p^{[8]}\\}.$
The simplest example is given by comparison with the continuum symmetry,
where an orbit is labelled by the single invariant $p^{2}$. For instance, the
momenta $p_{1}=(2,0,0,0)$ and $p_{2}=(1,1,1,1)$ have
$p_{1}^{2}=p_{2}^{2}=4$ and belong to the same $O(4)$ orbit. However, they
have different $H(4)$ invariants, ${p_{1}}^{[4]}=16$ and ${p_{2}}^{[4]}=4$,
and thus belong to distinct $H(4)$ orbits, so they should not be averaged
together. We see that the dependence of the scalars on the $p^{[4]}$
invariant spoils the continuum symmetry.
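For concreteness, the invariants of both momenta can be tabulated directly (a trivial sketch, using the standard definition $p^{[n]}=\sum_{\mu}p_{\mu}^{n}$, which is consistent with the values quoted above):

```python
import numpy as np

def invariants(p, orders=(2, 4, 6, 8)):
    """H(4) orbit invariants p^[n] = sum_mu p_mu^n."""
    p = np.asarray(p, dtype=float)
    return {n: np.sum(p**n) for n in orders}

print(invariants([2, 0, 0, 0]))  # {2: 4.0, 4: 16.0, 6: 64.0, 8: 256.0}
print(invariants([1, 1, 1, 1]))  # {2: 4.0, 4: 4.0,  6: 4.0,  8: 4.0}
```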
Clearly, hypercubic artifacts would be eliminated if all higher order
invariants $n>2$ vanished, since we would only have a $p^{2}$ dependence as
in the continuum (note that finite size effects still affect the result after
this correction). Another way to understand why the finiteness of the higher
order invariants relates to hypercubic artifacts is to consider the improved
momentum arising from lattice perturbation theory. Expanding the improved
invariant $\hat{p}^{2}$ in orders of $a$,
$\hat{p}^{2}=\left(2\sin(ap/2)\right)^{2}=p^{2}-\frac{a^{2}}{12}p^{[4]}+\frac{a^{4}}{360}p^{[6]}+...$
(3.19)
we see that it differs from the naively discretized continuum momentum by
terms proportional to the invariants. Therefore, we can suppress the
hypercubic artifacts, which depend on the non-$O(4)$ group invariants, by
minimizing those invariants: by reducing the first higher order invariant
$p^{[4]}$ we are effectively reducing the artifacts. To perform this
correction, two distinct methods are considered.
#### 3.5.1 Momentum cuts
The simplest method consists in applying cuts to the momenta. It arises from
noticing that, for a fixed $O(4)$ invariant $p^{2}$, the further a momentum
is from the diagonal, the higher are its non-$O(4)$ invariants. This was seen
in the example considered before, with the on-axis momentum $(2,0,0,0)$
having the higher $p^{[4]}$.
An empirical way to deal with the higher invariants of these kinematics is to
directly discard such momenta from the data. The usual choice is to keep only
momenta inside a cylinder directed along the diagonal of the lattice, as
defined in [63]. This selects the largest momenta with the smallest
components, i.e. with the lowest $H(4)$ invariants. The radius of the
cylinder is chosen so as to keep a good amount of data while reducing the
artifacts; in general a radius of one momentum unit ($ap=2\pi/N$) is used.
This cut, however, does not remove low momentum on-axis points. To improve
the method we consider further conical cuts, i.e. we keep only momenta
falling inside a conical region around the diagonal of the lattice,
$(1,1,1,1)$. Throughout the work we consider an angle of $20^{\circ}$.
In addition, the cuts may be applied only to momenta above a given threshold,
since in the IR region most of the data lies far from the diagonal and some
information should be kept. The main drawback of this method is that it keeps
only a small fraction of the original data.
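A sketch of the cylindrical and conical cuts, with our own parameter names; the one-momentum-unit radius and the $20^{\circ}$ cone follow the values quoted above, and the optional IR threshold keeps the low-momentum data:

```python
import numpy as np

def passes_cuts(n, N, radius_units=1.0, cone_deg=20.0, ir_threshold=0.0):
    """Cylindrical cut of `radius_units` momentum units (a p = 2*pi/N) around
    the lattice diagonal, plus a conical cut of `cone_deg` degrees; momenta
    with |p| <= ir_threshold are always kept."""
    p = 2 * np.pi * np.asarray(n, dtype=float) / N     # lattice momentum a*p
    if np.linalg.norm(p) <= ir_threshold:
        return True
    diag = np.ones(4) / 2.0                            # unit vector along (1,1,1,1)
    along = p @ diag
    perp = np.linalg.norm(p - along * diag)            # distance to the diagonal
    angle = np.degrees(np.arctan2(perp, abs(along)))
    return perp <= radius_units * 2 * np.pi / N and angle <= cone_deg

print(passes_cuts([3, 3, 3, 3], N=80))   # True: on the diagonal
print(passes_cuts([3, 0, 0, 0], N=80))   # False: on-axis, 60 degrees off-diagonal
```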
#### 3.5.2 H4 method
The H4 method [64, 65] is more involved, as it attempts to eliminate entirely
the contribution of the invariants $p^{[n]}$ with $n>2$ by performing an
extrapolation. In this work we consider only the extrapolation in the first
invariant $p^{[4]}$; the method can, however, be improved with higher order
corrections, given that enough data is available. Examples of applications,
improvements, and general considerations on the method can be found in
[64, 66, 67].
We consider a given scalar function under the lattice symmetry
$\Gamma_{L}(p^{[n]}),\leavevmode\nobreak\ n=2,4,6,8$ obtained by a proper
averaging over the whole group orbit $O(p^{[n]})$,
$\Gamma_{L}\left(p^{2},p^{[4]},p^{[6]},p^{[8]}\right)=\frac{1}{N_{O}}\sum_{p\in
O(p^{[n]})}\Gamma(p)$ (3.20)
where $N_{O}$ corresponds to the cardinality of the orbit. We want to study
how it relates to the continuum counterpart $\Gamma(p^{2})$.
Assuming that the scalar is a smooth function of the invariants, we may
extrapolate to the continuum by
$\Gamma(p^{2})\equiv\lim\limits_{p^{[4]}\rightarrow
0}\Gamma_{L}(p^{2},p^{[4]})$ (3.21)
neglecting higher order invariants which vanish as $\order{a^{4}}$. In fact,
to $\order{a^{4}}$ the same extrapolation is possible for the improved
momentum
$\lim\limits_{p^{[4]}\rightarrow
0}\Gamma_{L}(p^{2},p^{[4]})=\lim\limits_{\hat{p}^{[4]}\rightarrow
0}\Gamma_{L}(\hat{p}^{2},\hat{p}^{[4]})$ (3.22)
although in practice this extrapolation is not easily feasible.
To implement the extrapolation in practice, we assume that the dependence on
the invariants is smooth, and also that the lattice is close to the continuum
limit (small $a$) to use the expansion
$\Gamma_{L}\left(p^{2},p^{[4]},p^{[6]},p^{[8]}\right)=\Gamma_{L}(p^{2},0,0,0)+\frac{\partial\Gamma_{L}}{\partial
p^{[4]}}(p^{2},0,0,0)p^{[4]}+\order{a^{4}}.$ (3.23)
Thus we may identify $\Gamma_{L}(p^{2},0,0,0)$ as the continuum function
$\Gamma(p^{2})$ in finite volume and up to higher order lattice artifacts. In
practice this is applied only when several $H(4)$ orbits exist with the same
$O(4)$ invariant $p^{2}$. The extrapolation is done by a linear regression in
$p^{[4]}$ at fixed $p^{2}$, taking the results as $p^{[4]}\rightarrow 0$.
Since several $H(4)$ orbits must exist, the range of momenta to which the
method is applicable is restricted. Normally, only the mid range of momentum
contains enough data to perform the extrapolation, so the deep infrared and
high ultraviolet are not considered in this correction. The H4 method can be
generalized to cases with more than a single independent momentum; in this
work, both for the propagator and the three gluon vertex, the simplest case
of a single momentum scale is considered.
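A minimal sketch of the extrapolation at fixed $p^{2}$ (synthetic numbers and plain least squares; a real analysis would propagate bootstrap errors through the regression):

```python
import numpy as np

def h4_extrapolate(p4, gamma_L):
    """Linear fit Gamma_L = Gamma(p^2) + c * p^[4] at fixed p^2 and return
    the p^[4] -> 0 intercept, cf. eq. (3.23) truncated at O(a^4)."""
    A = np.vstack([np.ones_like(p4), p4]).T
    (intercept, slope), *_ = np.linalg.lstsq(A, gamma_L, rcond=None)
    return intercept

# Two orbits sharing p^2 = 4, e.g. (2,0,0,0) and (1,1,1,1):
p4 = np.array([16.0, 4.0])
gamma_L = 1.25 + 0.01 * p4           # synthetic data with a pure p^[4] artifact
print(h4_extrapolate(p4, gamma_L))   # 1.25: the artifact-free value
```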
### 3.6 Three gluon vertex
While the gluon propagator in the continuum is described by a single scalar
function, $D(p^{2})$, under the symmetries of the theory, higher order
correlation functions admit an increased number of form factors for a general
kinematic configuration. Thus we must consider the most general form under the
required symmetries.
For the three gluon vertex the colour structure is restricted to be
antisymmetric
$\Gamma_{\mu_{1}\mu_{2}\mu_{3}}^{abc}(p_{1},p_{2},p_{3})=f^{abc}\Gamma_{\mu_{1}\mu_{2}\mu_{3}}(p_{1},p_{2},p_{3})$
(3.24)
due to the charge conjugation invariance of the QCD Lagrangian [68, 69]. This
guarantees the vanishing contribution from the symmetric term $d^{abc}$. We
then require that the complete object obeys Bose symmetry; since the colour
structure is given by the antisymmetric structure constants, this requires
$\Gamma_{\mu_{1}\mu_{2}\mu_{3}}(p_{1},p_{2},p_{3})$ to be antisymmetric under
the interchange of any pair $(p_{i},\mu_{i})$.
For the space-time part of the tensor representing the three gluon vertex we
consider a continuum basis which consists of 14 independent tensors.
Throughout the work we use the basis constructed in [70] which considers a
separation between terms orthogonal to all momenta, and longitudinal terms.
The general tensor is given by the transverse and longitudinal terms
$\Gamma_{\mu_{1}\mu_{2}\mu_{3}}(p_{1},p_{2},p_{3})=\Gamma_{\mu_{1}\mu_{2}\mu_{3}}^{(T)}(p_{1},p_{2},p_{3})+\Gamma_{\mu_{1}\mu_{2}\mu_{3}}^{(L)}(p_{1},p_{2},p_{3}).$
(3.25)
The first consists of four tensors
$\displaystyle\Gamma_{\mu_{1}\mu_{2}\mu_{3}}^{(T)}(p_{1},p_{2},p_{3})=F(p_{1}^{2},p_{2}^{2};p_{3}^{2})\big{[}g_{\mu_{1}\mu_{2}}(p_{1}\cdot
p_{2})-{p_{1}}_{\mu_{2}}{p_{2}}_{\mu_{1}}\big{]}B_{\mu_{3}}^{3}$
$\displaystyle+H(p_{1}^{2},p_{2}^{2},p_{3}^{2})\big{[}-g_{\mu_{1}\mu_{2}}B_{\mu_{3}}^{3}+\frac{1}{3}({p_{1}}_{\mu_{3}}{p_{2}}_{\mu_{1}}{p_{3}}_{\mu_{2}}-{p_{1}}_{\mu_{2}}{p_{2}}_{\mu_{3}}{p_{3}}_{\mu_{1}})\big{]}$
$\displaystyle+\text{cyclic permutations,}$ (3.26)
with the definition,
$B_{\mu_{3}}^{3}={p_{1}}_{\mu_{3}}(p_{2}\cdot
p_{3})-{p_{2}}_{\mu_{3}}(p_{1}\cdot p_{3}).$ (3.27)
The scalar form factor $F(p_{1}^{2},p_{2}^{2};p_{3}^{2})$ is symmetric under
interchange of its first two arguments, as evidenced by the use of the
semicolon, while $H(p_{1}^{2},p_{2}^{2},p_{3}^{2})$ is symmetric under the
interchange of any of its arguments. The remaining 10 longitudinal elements
are of the form
$\displaystyle\Gamma_{\mu_{1}\mu_{2}\mu_{3}}^{(L)}(p_{1},p_{2},p_{3})=$
$\displaystyle
A(p_{1}^{2},p_{2}^{2};p_{3}^{2})g_{\mu_{1}\mu_{2}}(p_{1}-p_{2})_{\mu_{3}}$
$\displaystyle+B(p_{1}^{2},p_{2}^{2};p_{3}^{2})g_{\mu_{1}\mu_{2}}(p_{1}+p_{2})_{\mu_{3}}$
$\displaystyle+C(p_{1}^{2},p_{2}^{2};p_{3}^{2})({p_{1}}_{\mu_{2}}{p_{2}}_{\mu_{1}}-g_{\mu_{1}\mu_{2}}p_{1}\cdot
p_{2})(p_{1}-p_{2})_{\mu_{3}}$
$\displaystyle+\frac{1}{3}S(p_{1}^{2},p_{2}^{2},p_{3}^{2})({p_{1}}_{\mu_{3}}{p_{2}}_{\mu_{1}}{p_{3}}_{\mu_{2}}+{p_{1}}_{\mu_{2}}{p_{2}}_{\mu_{3}}{p_{3}}_{\mu_{1}})$
$\displaystyle+\text{cyclic permutations}$ (3.28)
where both $A(p_{1}^{2},p_{2}^{2};p_{3}^{2})$ and
$C(p_{1}^{2},p_{2}^{2};p_{3}^{2})$ are symmetric in their first two arguments
while $B(p_{1}^{2},p_{2}^{2};p_{3}^{2})$ is anti-symmetric.
$S(p_{1}^{2},p_{2}^{2},p_{3}^{2})$ is completely anti-symmetric.
With this form we have a proper description of the correlation function
extracted from the lattice, with the right hand side of (2.21) being replaced
by
$\displaystyle
G_{\mu_{1}\mu_{2}\mu_{3}}(p_{1},p_{2},p_{3})=\frac{N_{c}(N_{c}^{2}-1)}{4}$
$\displaystyle
D_{\mu_{1}\nu_{1}}(p_{1})D_{\mu_{2}\nu_{2}}(p_{2})D_{\mu_{3}\nu_{3}}(p_{3})\times$
$\displaystyle\times(\Gamma_{\nu_{1}\nu_{2}\nu_{3}}^{(L)}(p_{1},p_{2},p_{3})+\Gamma_{\nu_{1}\nu_{2}\nu_{3}}^{(T)}(p_{1},p_{2},p_{3}))$
(3.29)
where the colour factor comes from the trace operation and $N_{c}=3$. The
extraction of a general form factor is done by suitable projectors built
analogously to those considered for the propagator.
#### Kinematical configuration $(p,0,-p)$
The kinematics used in this work is defined by $(p_{1},p_{2},p_{3})=(p,0,-p)$,
which, having a single scale $p$, gives access only to the longitudinal
terms. This is because contractions with the external propagators eliminate
the transverse part, since
$p_{\mu_{i}}\Gamma_{\mu_{1}\mu_{2}\mu_{3}}^{(T)}(p_{1},p_{2},p_{3})=0$ (3.30)
for any $i=1,2,3$. The explicit expression for eq. 3.29 becomes
$G_{\mu_{1}\mu_{2}\mu_{3}}(p,0,-p)=V\frac{N_{c}(N_{c}^{2}-1)}{4}D(p^{2})^{2}D(0)\Gamma(p^{2})p_{\mu_{2}}\left(\delta_{\mu_{1}\mu_{3}}-\frac{p_{\mu_{1}}p_{\mu_{3}}}{p^{2}}\right)$
(3.31)
with
$\Gamma(p^{2})=2\left(p^{2}C(p^{2},p^{2};0)-A(p^{2},p^{2};0)\right)$ (3.32)
a dimensionless form factor. We see that for this specific configuration, only
a combination of form factors can be extracted. Finally, the 1PI form factor
$\Gamma(p^{2})$ can be projected by the following contraction
$\Gamma(p^{2})p^{2}=\frac{4p_{\mu_{2}}\delta_{\mu_{1}\mu_{3}}G_{\mu_{1}\mu_{2}\mu_{3}}(p,0,-p)}{VN_{c}(N_{c}^{2}-1)D(p^{2})^{2}D(0)(N_{d}-1)}$
(3.33)
for non-vanishing momentum.
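As an illustration of eq. 3.33, the following sketch projects $\Gamma(p^{2})$ from a correlator array and checks it against a correlator built directly from eq. 3.31 (all numbers, names, and the array layout are illustrative):

```python
import numpy as np

def three_gluon_ff(G, p, D_p2, D_0, V, Nc=3, Nd=4):
    """Project Gamma(p^2) from G_{mu1 mu2 mu3}(p,0,-p) following eq. (3.33):
    contract with p_{mu2} delta_{mu1 mu3} and strip the kinematic factors."""
    contraction = np.einsum('aba,b->', G, p)
    denom = V * Nc * (Nc**2 - 1) * D_p2**2 * D_0 * (Nd - 1) * (p @ p)
    return 4.0 * contraction / denom

# Consistency check on a correlator built directly from eq. (3.31):
p = np.array([0.4, 0.2, 0.0, 0.1])
V, D_p2, D_0, Gamma_true = 10.0, 2.0, 3.5, 0.8
T = np.eye(4) - np.outer(p, p) / (p @ p)              # transverse projector
G = V * 3 * (3**2 - 1) / 4 * D_p2**2 * D_0 * Gamma_true * np.einsum('b,ac->abc', p, T)
print(three_gluon_ff(G, p, D_p2, D_0, V))             # 0.8 recovered
```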
### 3.7 Four gluon vertex
The four point correlation function is the most complex elementary
correlation function arising in Yang-Mills theory. Having three independent
momenta and four Lorentz and colour indices, it generates a large number of
possible structures [71]. On the other hand, being a higher order correlation
function, its signal in Monte-Carlo simulations is strongly affected by
noise; this justifies the absence of previous lattice studies of the four
gluon vertex.
A further complication arises for this higher order correlation function. We
are interested in computing the four gluon 1PI function, i.e. the pure four
gluon vertex. While for the three gluon vertex this is simply obtained by the
removal of external propagators from the complete correlation function, the
four gluon correlation function carries additional contributions from lower
order Green’s functions. Namely, disconnected terms and three gluon vertices
enter the computation of the complete correlation function – see fig. 3.1.
Thus the object we have access to on the lattice for a general momentum
configuration reads
$\displaystyle G^{(4)a_{1}a_{2}a_{3}a_{4}}_{\mu_{1}\mu_{2}\mu_{3}\mu_{4}}(p_{1},p_{2},p_{3},p_{4})=$
$\displaystyle D_{\mu_{1}\nu_{1}}(p_{1})D_{\mu_{2}\nu_{2}}(p_{2})D_{\mu_{3}\nu_{3}}(p_{3})D_{\mu_{4}\nu_{4}}(p_{4})\bar{\Gamma}^{(4)a_{1}a_{2}a_{3}a_{4}}_{\nu_{1}\nu_{2}\nu_{3}\nu_{4}}(p_{1},p_{2},p_{3},p_{4})$
$\displaystyle-iD_{\mu_{1}\nu_{1}}(p_{1})D_{\mu_{4}\nu_{4}}(p_{4})\Gamma^{(3)ma_{1}a_{4}}_{\sigma\nu_{1}\nu_{4}}(p_{1}+p_{4},p_{1},p_{4})\times$
$\displaystyle\qquad\times D_{\sigma\rho}(p_{1}+p_{4})\Gamma^{(3)ma_{2}a_{3}}_{\rho\nu_{2}\nu_{3}}(p_{2}+p_{3},p_{2},p_{3})D_{\mu_{2}\nu_{2}}(p_{2})D_{\mu_{3}\nu_{3}}(p_{3})$
$\displaystyle+D^{a_{1}a_{3}}_{\mu_{1}\mu_{3}}(p_{1})D^{a_{2}a_{4}}_{\mu_{2}\mu_{4}}(p_{2})\delta(p_{1}+p_{3})\delta(p_{2}+p_{4})$
$\displaystyle+\text{cyclic permutations.}$ (3.34)
Only the first term, which includes the four gluon 1PI function, is of
interest to us; the remaining terms ought to be removed.
Figure 3.1: Diagrammatic representation of the connected and disconnected
terms contributing to the full four-gluon correlation function.
We wish to remove the lower order contributions without degrading the quality
of the signal. Hence, we do not directly subtract the unwanted contributions
in the simulations, since besides requiring a heavier computation this would
increase the statistical fluctuations. To carry out this extraction we
instead consider a suitable choice of kinematics.
To see how this removes the unwanted contributions we notice that momentum
conservation constrains the possible kinematic configuration for each vertex.
Moreover, the orthogonality of external gluon propagators eliminates terms
when contracted with the corresponding momentum
$p_{\mu}D_{\mu\nu}(p)=0.$ (3.35)
The disconnected terms without interaction (last line in eq. 3.34) are
eliminated by a kinematic configuration that, while allowed by momentum
conservation for the four gluon vertex, is not permitted for the two
propagators. Whereas the cancellation of the disconnected terms is
straightforward, removing the three gluon contributions requires noticing
that the most general rank-3 continuum tensor necessarily involves a momentum
factor: the terms are either linear, $g_{\mu_{1}\mu_{2}}{p_{1}}_{\mu_{3}}$,
or cubic in the momenta, ${p_{1}}_{\mu_{2}}{p_{2}}_{\mu_{3}}{p_{3}}_{\mu_{1}}$
– see section 3.6. Therefore we can eliminate the three gluon contribution by
eliminating each of these terms appearing in $\Gamma^{(3)}$ above. If we
choose a single scale momentum configuration
$(p_{1},p_{2},p_{3},p_{4})=(ap,bp,cp,dp)$ (of the coefficients $a,b,c,d$ only
three are independent, by momentum conservation), each external propagator
will be of the form $D_{\mu\nu}(p)$, thus eliminating each of the three gluon
tensor structures by orthogonality.
We see that a proper choice of kinematic configuration provides access to the
pure four gluon vertex on the lattice,
$\displaystyle G^{(4)a_{1}a_{2}a_{3}a_{4}}_{\mu_{1}\mu_{2}\mu_{3}\mu_{4}}(ap,bp,cp,dp)=D_{\mu_{1}\nu_{1}}(ap)D_{\mu_{2}\nu_{2}}(bp)D_{\mu_{3}\nu_{3}}(cp)D_{\mu_{4}\nu_{4}}(dp)\bar{\Gamma}^{(4)a_{1}a_{2}a_{3}a_{4}}_{\nu_{1}\nu_{2}\nu_{3}\nu_{4}}(ap,bp,cp,dp)$
(3.36)
using the complete correlation function only, i.e. without additional
operations involving lower order functions.
#### 3.7.1 Tensor bases
Having access to the four gluon 1PI function, we need to construct a tensor
basis onto which this function can be projected. This basis involves a large
number of possible structures. At the level of Lorentz tensors, there are
three types of allowed structures, built from the metric tensor and momenta,
containing zero, two, or four powers of momenta:
$\\{g_{\mu_{1}\mu_{2}}g_{\mu_{3}\mu_{4}},\ g_{\mu_{1}\mu_{2}}p_{\mu_{3}}q_{\mu_{4}},\ p_{\mu_{1}}q_{\mu_{2}}r_{\mu_{3}}k_{\mu_{4}}\\},$ (3.37)
which for a general momentum configuration make up 138 possible structures
[72]. However, for practical reasons, in the present work we consider a
reduced basis limited to the elements built from the metric tensor only
(although this approximation discards a large number of possible tensor
structures, previous investigations found that the tree-level tensor seems to
provide the leading contribution in comparison with the other tensor
structures [32]; the same behaviour is found in the three gluon correlation
function [17]). With this choice, only a small number of independent tensors
contributes to the vertex.
For the colour sector we can use the $SU(3)$ antisymmetric structure constants
$f^{abc}$, the symmetric terms $d^{abc}$ as well as $\delta^{ab}$ to construct
all possible structures
$\\{f^{ma_{1}a_{2}}f^{ma_{3}a_{4}},\leavevmode\nobreak\
d^{ma_{1}a_{2}}d^{ma_{3}a_{4}},\leavevmode\nobreak\
d^{ma_{1}a_{2}}f^{ma_{3}a_{4}},\leavevmode\nobreak\
\delta^{a_{1}a_{2}}\delta^{a_{3}a_{4}}\\}.$ (3.38)
However, various group identities reduce the number of possible terms, see
appendix A.
Due to the complexity associated with the tensor basis for a general kinematic
configuration, in the following we restrict the construction to a specific,
single scale configuration.
#### Kinematical configuration $(p,p,p,-3p)$
We work with the configuration $(p,p,p,-3p)$, which was considered in the
continuum investigations [31, 32]. The most complete basis within our
approximation to metric structures consists of three possible Bose symmetric
tensors. These are the tree-level tensor, written again for convenience,
$\displaystyle{\Gamma^{(0)}}_{\mu_{1}\mu_{2}\mu_{3}\mu_{4}}^{a_{1}a_{2}a_{3}a_{4}}=-g^{2}\big{[}$
$\displaystyle f^{a_{1}a_{2}m}f^{a_{3}a_{4}m}(g_{\mu_{1}\mu_{3}}g_{\mu_{2}\mu_{4}}-g_{\mu_{1}\mu_{4}}g_{\mu_{2}\mu_{3}})$
$\displaystyle+f^{a_{1}a_{3}m}f^{a_{2}a_{4}m}(g_{\mu_{1}\mu_{2}}g_{\mu_{3}\mu_{4}}-g_{\mu_{1}\mu_{4}}g_{\mu_{2}\mu_{3}})$
$\displaystyle+f^{a_{1}a_{4}m}f^{a_{2}a_{3}m}(g_{\mu_{1}\mu_{2}}g_{\mu_{3}\mu_{4}}-g_{\mu_{1}\mu_{3}}g_{\mu_{2}\mu_{4}})\big{]},$
(3.39)
a fully symmetric tensor (in both colour and Lorentz sectors)
$G^{a_{1}a_{2}a_{3}a_{4}}_{\mu_{1}\mu_{2}\mu_{3}\mu_{4}}=(\delta^{a_{1}a_{2}}\delta^{a_{3}a_{4}}+\delta^{a_{1}a_{3}}\delta^{a_{2}a_{4}}+\delta^{a_{1}a_{4}}\delta^{a_{2}a_{3}})(g_{\mu_{1}\mu_{2}}g_{\mu_{3}\mu_{4}}+g_{\mu_{1}\mu_{3}}g_{\mu_{2}\mu_{4}}+g_{\mu_{1}\mu_{4}}g_{\mu_{2}\mu_{3}})$
(3.40)
which is orthogonal to $\Gamma^{(0)}$ in both spaces,
$\displaystyle{\Gamma^{(0)}}_{\mu_{1}\mu_{2}\mu_{3}\mu_{4}}^{b_{1}b_{2}b_{3}b_{4}}G^{a_{1}a_{2}a_{3}a_{4}}_{\mu_{1}\mu_{2}\mu_{3}\mu_{4}}=0,$
$\displaystyle{\Gamma^{(0)}}_{\nu_{1}\nu_{2}\nu_{3}\nu_{4}}^{a_{1}a_{2}a_{3}a_{4}}G^{a_{1}a_{2}a_{3}a_{4}}_{\mu_{1}\mu_{2}\mu_{3}\mu_{4}}=0.$
(3.41)
And finally, the third independent tensor is
$\displaystyle X^{a_{1}a_{2}a_{3}a_{4}}_{\mu_{1}\mu_{2}\mu_{3}\mu_{4}}=$
$\displaystyle
g_{\mu_{1}\mu_{2}}g_{\mu_{3}\mu_{4}}\left(\frac{1}{3}\delta^{a_{1}a_{2}}\delta^{a_{3}a_{4}}-d^{ma_{1}a_{2}}d^{ma_{3}a_{4}}\right)$
$\displaystyle+$ $\displaystyle
g_{\mu_{1}\mu_{3}}g_{\mu_{2}\mu_{4}}\left(\frac{1}{3}\delta^{a_{1}a_{3}}\delta^{a_{2}a_{4}}-d^{ma_{1}a_{3}}d^{ma_{2}a_{4}}\right)$
$\displaystyle+$ $\displaystyle
g_{\mu_{1}\mu_{4}}g_{\mu_{2}\mu_{3}}\left(\frac{1}{3}\delta^{a_{1}a_{4}}\delta^{a_{2}a_{3}}-d^{ma_{1}a_{4}}d^{ma_{2}a_{3}}\right).$
(3.42)
With this tensor basis, we construct the general structure with three
symmetric form factors as
$\Gamma^{a_{1}a_{2}a_{3}a_{4}}_{\mu_{1}\mu_{2}\mu_{3}\mu_{4}}=V^{\prime}_{\Gamma^{(0)}}(p^{2}){\Gamma^{(0)}}_{\mu_{1}\mu_{2}\mu_{3}\mu_{4}}^{a_{1}a_{2}a_{3}a_{4}}+V^{\prime}_{G}(p^{2})G^{a_{1}a_{2}a_{3}a_{4}}_{\mu_{1}\mu_{2}\mu_{3}\mu_{4}}+V^{\prime}_{X}(p^{2})X^{a_{1}a_{2}a_{3}a_{4}}_{\mu_{1}\mu_{2}\mu_{3}\mu_{4}},$
(3.43)
with scalar form factors $V^{\prime}_{i}$ depending on the single momentum
scale $p$. This structure is in turn related to the complete correlation
function by contraction with four external propagators. To extract each form
factor from the lattice we again apply the trace operation in colour space.
This operation involves the structures in eq. 3.38, which lead to more
intricate manipulations than those found for the three gluon vertex; for
these, the group identities in appendix A were used. Using the notation
$\Tr\left[G_{\mu_{1}\mu_{2}\mu_{3}\mu_{4}}\right]=D_{\mu_{1}\nu_{1}}(p_{1})D_{\mu_{2}\nu_{2}}(p_{2})D_{\mu_{3}\nu_{3}}(p_{3})D_{\mu_{4}\nu_{4}}(p_{4})\sum_{\begin{subarray}{c}a_{i}\\\
i\in{1,2,3,4}\end{subarray}}\Tr\left(t^{a_{1}}t^{a_{2}}t^{a_{3}}t^{a_{4}}\right)\Gamma^{a_{1}a_{2}a_{3}a_{4}}_{\nu_{1}\nu_{2}\nu_{3}\nu_{4}}$
(3.44)
with the arguments of
$G_{\mu_{1}\mu_{2}\mu_{3}\mu_{4}}(p_{1},p_{2},p_{3},p_{4})$ and
$\Gamma_{\mu_{1}\mu_{2}\mu_{3}\mu_{4}}(p_{1},p_{2},p_{3},p_{4})$ omitted, and
after performing the three non-vanishing Lorentz contractions we obtain
$\displaystyle
g_{\mu_{1}\mu_{2}}g_{\mu_{3}\mu_{4}}\Tr\left[G_{\mu_{1}\mu_{2}\mu_{3}\mu_{4}}\right]=6A_{n}V_{\Gamma^{(0)}}+15G_{n}V_{G}+3(4X_{n}+X^{\prime}_{n})V_{X}$
(3.45) $\displaystyle
g_{\mu_{1}\mu_{3}}g_{\mu_{2}\mu_{4}}\Tr\left[G_{\mu_{1}\mu_{2}\mu_{3}\mu_{4}}\right]=-12A_{n}V_{\Gamma^{(0)}}+15G_{n}V_{G}+3(2X_{n}+3X^{\prime}_{n})V_{X}$
(3.46) $\displaystyle
g_{\mu_{1}\mu_{4}}g_{\mu_{2}\mu_{3}}\Tr\left[G_{\mu_{1}\mu_{2}\mu_{3}\mu_{4}}\right]=6A_{n}V_{\Gamma^{(0)}}+15G_{n}V_{G}+3(4X_{n}+X^{\prime}_{n})V_{X}$
(3.47)
where the $V_{i}$ are related to the pure vertex form factors by
$V_{i}(p^{2})=V^{\prime}_{i}(p^{2})D(p^{2})^{3}D(9p^{2}),$ (3.48)
and the following colour coefficients resulting from the trace and sum
operation are
$\displaystyle A_{n}=\frac{N_{c}^{2}(N_{c}^{2}-1)}{8},$ (3.49) $\displaystyle
G_{n}=\frac{N_{c}^{2}-1}{4N_{c}^{2}}(2N_{c}^{2}-3),$ (3.50) $\displaystyle
X_{n}=\frac{1}{3}\frac{(N_{c}^{2}-1)^{2}}{4N_{c}}-\frac{(N_{c}^{2}-1)(N_{c}^{2}-4)^{2}}{8N_{c}^{2}},$
(3.51) $\displaystyle
X^{\prime}_{n}=-\frac{1}{3}\frac{(N_{c}^{2}-1)^{2}}{4N_{c}}-\frac{(N_{c}^{2}-1)(N_{c}^{2}-4)}{2N_{c}^{2}}.$
(3.52)
Our interest is to obtain each form factor $V$ independently; however,
looking at eqs. 3.45, 3.46 and 3.47 we see that only two contractions are
linearly independent, and thus only two objects can be extracted. Hence,
following [31], the $X$ structure will be disregarded. With this further
approximation the equations simplify to
$\displaystyle
g_{\mu_{1}\mu_{2}}g_{\mu_{3}\mu_{4}}\Tr\left[G_{\mu_{1}\mu_{2}\mu_{3}\mu_{4}}\right]=6A_{n}V_{\Gamma^{(0)}}+15G_{n}V_{G}$
(3.53) $\displaystyle
g_{\mu_{1}\mu_{3}}g_{\mu_{2}\mu_{4}}\Tr\left[G_{\mu_{1}\mu_{2}\mu_{3}\mu_{4}}\right]=-12A_{n}V_{\Gamma^{(0)}}+15G_{n}V_{G}$
(3.54)
and each form factor is obtained by
$\displaystyle V_{\Gamma^{(0)}}=\frac{1}{18A_{n}}\left(g_{\mu_{1}\mu_{2}}g_{\mu_{3}\mu_{4}}\Tr\left[G_{\mu_{1}\mu_{2}\mu_{3}\mu_{4}}\right]-g_{\mu_{1}\mu_{3}}g_{\mu_{2}\mu_{4}}\Tr\left[G_{\mu_{1}\mu_{2}\mu_{3}\mu_{4}}\right]\right),$
(3.55) $\displaystyle V_{G}=\frac{1}{45G_{n}}\left(2g_{\mu_{1}\mu_{2}}g_{\mu_{3}\mu_{4}}\Tr\left[G_{\mu_{1}\mu_{2}\mu_{3}\mu_{4}}\right]+g_{\mu_{1}\mu_{3}}g_{\mu_{2}\mu_{4}}\Tr\left[G_{\mu_{1}\mu_{2}\mu_{3}\mu_{4}}\right]\right).$
(3.56)
These complete form factors are obtained in lattice Monte-Carlo simulations
by computing the corresponding linear combinations of the complete
correlation function $G_{\mu_{1}\mu_{2}\mu_{3}\mu_{4}}$. In section 4.3,
Monte-Carlo results for this kinematic configuration will be presented.
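Schematically, eqs. 3.53–3.56 amount to a 2×2 linear system; a sketch using the coefficients of eqs. 3.49–3.50 (the round trip uses invented form-factor values):

```python
import numpy as np

Nc = 3
A_n = Nc**2 * (Nc**2 - 1) / 8                        # eq. (3.49)
G_n = (Nc**2 - 1) * (2 * Nc**2 - 3) / (4 * Nc**2)    # eq. (3.50)

def four_gluon_ffs(c1, c2):
    """Solve eqs. (3.53)-(3.54) for (V_Gamma0, V_G), given the traced
    contractions c1 = g12 g34 Tr[G] and c2 = g13 g24 Tr[G]."""
    M = np.array([[6 * A_n, 15 * G_n],
                  [-12 * A_n, 15 * G_n]])
    return np.linalg.solve(M, [c1, c2])

# Round trip: build the contractions from chosen values and recover them.
V_true = np.array([0.7, -0.2])
c1, c2 = np.array([[6 * A_n, 15 * G_n], [-12 * A_n, 15 * G_n]]) @ V_true
print(four_gluon_ffs(c1, c2))   # [ 0.7 -0.2]
```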
## Chapter 4 Results
In this chapter we investigate lattice tensor representations of the gluon
propagator by considering the tensor structures introduced in the previous
chapter. In addition, we study the IR behaviour of the three gluon
correlation function and report a first computation of the lattice four gluon
correlation function. All results were obtained in Landau gauge,
4-dimensional pure $SU(3)$ Yang-Mills theory using the Wilson action, eq. 2.15.
$a$ (fm) | $1/a$ (GeV) | $\beta$ | $N$ | $V$ (fm$^{4}$) | #config | $p_{\text{min}}$ (GeV)
---|---|---|---|---|---|---
0.1016(25) | 1.943(47) | 6.0 | 80 | $(8.128)^{4}$ | 550 | 0.153
0.1016(25) | 1.943(47) | 6.0 | 64 | $(6.502)^{4}$ | 2000 | 0.191
Table 4.1: Lattice setup for both ensembles used in the computation of the
gluon correlation functions.
The lattice setup used in this work can be seen in table 4.1. We used two
ensembles with the same lattice spacing but different volumes. The smaller
volume lattice also has a larger number of configurations.
The results shown are either dimensionless or expressed in lattice units;
however, they are plotted as a function of the physical momentum
$p=p_{\text{lat}}\,a^{-1}$, with $a^{-1}=1.943(47)$ GeV. Additionally, all
results represent bare, i.e. non-renormalized, quantities. Renormalized
values would differ only by an overall constant factor, which does not affect
the conclusions.
A complete $H(4)$ group averaging, as defined in section 3.4, is applied to
all quantities: for each gauge field configuration, the quantity is averaged
over all group-equivalent points, and only then is the ensemble average
taken. Also, the reader should be aware that scalar functions on the lattice
have the four $H(4)$ invariants as arguments, although they are represented
herein with $p^{2}$ only; the exception is the extrapolated values, where
this dependence is partially corrected.
The error bars shown correspond to a tenfold bootstrap sampling of the
original set of configurations. For H4 corrected data, the error bars result
from an initial bootstrap followed by propagation through the linear
regression. Regarding the correction methods, we will use the following
convention throughout all results (unless explicitly stated otherwise):
$p^{[4]}$ extrapolated data is always shown as a function of the usual
lattice momentum $p$, while momentum cuts are generally reserved for the
improved momentum data $\hat{p}$.
### 4.1 Gluon propagator – Tensor description
In this section we consider the lattice description of the gluon propagator,
compared with the usual continuum tensor structure. For most of this section
we analyse the $80^{4}$ lattice exclusively; the $64^{4}$ lattice will be
considered at the end, in order to search for possible finite volume effects
on the results.
#### 4.1.1 Discretization correction methods
We begin by illustrating the correction methods defined in the previous
chapter to illustrate its advantages and setbacks. We use the gluon propagator
as a test, but the conclusions should be applicable to other correlation
functions as well as other tensor structures.
All results shown in this analysis are for the continuum tensor, eq. 1.40,
with form factor $D(p^{2})$ and dimensionless dressing function
$d(p^{2})=p^{2}D(p^{2})$, where the form factor is extracted as
$D(p^{2})=\frac{1}{(N_{c}^{2}-1)(N_{d}-1)}\sum_{\mu}D_{\mu\mu}(p).$ (4.1)
Notice that the extraction of $D(p^{2})$ is independent of whether the normal
or the improved momentum is used to build the basis.
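In code, this extraction is a simple trace average; a sketch with a synthetic Landau-gauge propagator (the numbers are invented, and the input is assumed to be the colour-traced Lorentz tensor):

```python
import numpy as np

def gluon_dressing(D_lorentz, p, Nc=3, Nd=4):
    """Eq. (4.1) plus d(p^2) = p^2 D(p^2); `D_lorentz` is assumed to be the
    colour-traced 4x4 Lorentz tensor D_{mu nu}(p)."""
    D = np.trace(D_lorentz) / ((Nc**2 - 1) * (Nd - 1))
    return D, (p @ p) * D

p = np.array([0.5, 0.1, 0.0, 0.2])
T = np.eye(4) - np.outer(p, p) / (p @ p)
D_lat = (3**2 - 1) * 2.4 * T          # synthetic Landau-gauge propagator
print(gluon_dressing(D_lat, p))       # D = 2.4, d = 0.3 * 2.4
```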
Figure 4.1: Gluon dressing function $d(p^{2})$ from the continuum basis as a
function of lattice momentum (top left) and as a function of the improved
momentum (top right); the momenta surviving cylindrical and conical cuts are
shown in each plot. The bottom plot compares the data in terms of improved
and lattice momenta after complete momentum cuts against the H4 corrected
data with lattice momentum. Results from the $\beta=6.0,\ 80^{4}$ lattice.
Fig. 4.1 shows the results for the correction methods – the use of the
improved momentum, momentum cuts, and the H4 extrapolation. In $a)$ and $b)$
the complete data, and the data after momentum cuts, are shown in terms of
lattice and improved momentum, respectively. The complete set of data shows
structures created by the hypercubic artifacts, which are much more
pronounced when using lattice momentum. This is expected since, as introduced
in section 3.5, $\hat{p}$ partially accounts for hypercubic errors up to
$\order{a^{2}}$. The complete momentum cuts (cylindrical and conical) are
also shown, and produce a much smoother curve.
The curves in terms of lattice and improved momentum after cuts do not agree
for momenta above $\sim 2.5$ GeV; this is visible in fig. 4.1 $c)$. In this
plot the $p^{[4]}$ extrapolated data is also shown, and we see that it
matches the data with cuts as a function of improved momentum over a large
range. An advantage of the extrapolation method is that it offers a higher
density of points over a large range when compared with the curve surviving
the cuts. However, besides the loss of information at lower momentum, the
high momentum region is also problematic due to the lack of different $H(4)$
orbits, which makes the extrapolation unreliable. This becomes noticeable for
$p\sim 5$ GeV, where the discrepancy can be related to the decline in quality
of the extrapolation.
#### 4.1.2 Lattice basis – General kinematics
In this section we compare the behaviour of the usual continuum tensor, eq.
1.40, with two lattice descriptions given in eq. 3.16 and eq. 3.15. The most
general continuum basis, eq. 3.4, will also be considered. We disregard, for
now, the generalized diagonal configurations and other kinematics for which
the extraction of all form factors is not possible (details in section B.2).
The dimensionless form factors $p^{2}\Gamma(p^{2})$ will be considered due to
their appearance in the continuum relations, defined below. For the larger
basis these are $p^{2}E(p^{2})$, $p^{4}F(p^{2})$, and $p^{4}H(p^{2})$. The
only exceptions are the terms $p^{4}G(p^{2})$ and $p^{4}I(p^{2})$, which are
expressed in lattice units.
##### Continuum relations
To probe the accuracy of our results we consider a benchmark. We use the data
published in [73] from a precise continuum-basis computation of the
propagator using improved momentum and additional cuts. This result comes
from a partial Z4 averaging procedure, i.e. using momentum permutations only.
This data will always be referred to as $D(\hat{p}^{2})$ or
$d(\hat{p}^{2})=\hat{p}^{2}D(\hat{p}^{2})$ and shown as a function of
improved momentum only.
In addition to this benchmark, we consider continuum relations that relate
the form factors among themselves and also with the continuum tensor basis
result, $D(p^{2})$. These relations are expected to be properly satisfied in
the infrared region, where hypercubic effects are smaller (note that this
does not guarantee that we are extracting proper continuum physics in the IR
region; there are still finite volume and finite spacing effects – see [62]).
The reproduction of the continuum basis, eq. 1.40, by the extended basis, eq.
3.16, for low momentum implies
$\displaystyle E(p^{2})\rightarrow D(p^{2})$ (4.2)
$\displaystyle-p^{2}F(p^{2}),\leavevmode\nobreak\ -p^{2}H(p^{2})\rightarrow
D(p^{2})$ (4.3) $\displaystyle G(p^{2}),\leavevmode\nobreak\
I(p^{2})\rightarrow 0.$ (4.4)
while for the reduced lattice basis, eq. 3.15, the continuum relations are
$\displaystyle J(p^{2})\rightarrow D(p^{2}),$ (4.5)
$\displaystyle-p^{2}K(p^{2}),\ -p^{2}L(p^{2})\rightarrow D(p^{2}).$ (4.6)
In addition, for the most general continuum second order tensor, eq. 3.4, we
obtain
$\displaystyle A(p^{2}),\leavevmode\nobreak\ -p^{2}B(p^{2})\rightarrow
D(p^{2}).$ (4.7)
The reproduction of these relations can be verified in figs. 4.2, 4.3 and 4.4
where the form factors are reported as a function of lattice momentum $p$
after a $p^{[4]}$ extrapolation (left column), and as a function of improved
momentum with momentum cuts (right). In fig. 4.2, we compare only the form
factors associated with the metric tensor $E(p^{2})$, $J(p^{2})$, and
$A(p^{2})$.
Figure 4.2: $p^{2}E(p^{2})$, $p^{2}J(p^{2})$, and $p^{2}A(p^{2})$ dressing
functions as a function of the lattice momentum after a $p^{[4]}$
extrapolation (left) and as a function of the improved momentum $\hat{p}$
after momentum cuts (right). The results come from the $\beta=6.0,\ 80^{4}$
lattice; the benchmark continuum dressing function
$\hat{p}^{2}D(\hat{p}^{2})$ is plotted as a function of the improved momentum.
Figure 4.3: Dimensionless form factors $p^{4}G(p^{2})$ and $p^{4}I(p^{2})$.
$G$ is shown only after the correction methods. The original data is shown in
the top row for the lattice momentum $p$ (left) and improved momentum
$\hat{p}$ (right) for a restricted range of momenta. Below, $p^{4}G(p^{2})$
and $p^{4}I(p^{2})$ after the corrections are applied are presented, namely
the H4 extrapolated results and momentum cuts. All data from the
$\beta=6.0,\ 80^{4}$ lattice.
Figure 4.4: Dressing functions for the different tensor bases as a function
of the lattice momentum after a $p^{[4]}$ extrapolation (left) and as a
function of the improved momentum $\hat{p}$ after momentum cuts (right).
These come from the $\beta=6.0,\ 80^{4}$ lattice. The improved continuum
tensor form factor $D(\hat{p}^{2})$ is also shown.
The functions represented in figs. 4.2 and 4.4 are such that in the continuum
limit they should all become equal, thus satisfying eqs. 4.2, 4.3 and
4.5–4.7. It can be seen in figs. 4.2 and 4.4 that, within one standard
deviation, the continuum relations are satisfied for improved momentum with
additional cuts, although with increased fluctuations when compared with the
H4 corrected data on the left. The latter, however, has a restricted range of
compatibility with the benchmark result. In addition, in fig. 4.4 the two H4
form factors $F$ and $H$ of the extended basis seem to deviate from the
expected behaviour. The same happens for the smaller lattice basis, and this
should be related to the limitations of the extrapolation at low and high
momentum. Despite the fluctuations, the fact that the continuum relations are
satisfied over a large range of momentum indicates that the lattice is fine
and large enough to obtain results close to the continuum.
In fig. 4.3, the form factors $p^{4}G(p^{2})/a^{2}$ and $p^{4}I(p^{2})/a^{2}$
are reported. In the bottom row, results are shown after the correction
methods are applied to both form factors. The larger fluctuations for $G$ and
$I$ are expected, due to their values being closer to zero and to the
increased mixing among a larger number of form factors when extracting each
function. This is also why $I(p^{2})$, which mixes only with $H(p^{2})$,
shows smaller fluctuations than $G(p^{2})$.
For low momentum, both correction methods and functions satisfy the continuum
relations within statistical fluctuations in fig. 4.3. However, for momenta
above $\sim 2$ GeV the H4 extrapolation results deviate from zero. This is
already visible before the extrapolation is applied. To see this, the top row
shows $p^{4}I(p^{2})$ for all available configurations without corrections,
but for a restricted range of momentum ($p^{4}G(p^{2})$ was disregarded due
to its large fluctuations). $p^{4}I(p^{2})$ is much closer to zero for the
improved momentum basis than for lattice momentum before any correction is
applied. This result can be viewed as another improvement in the tensor
description after the change of variables to the momentum $\hat{p}$ when
building the tensor basis.
In fact, the change of variables from $p$ to $\hat{p}$ also provides an
improvement for the remaining form factors $E(p^{2})$, $-p^{2}F(p^{2})$, and
$-p^{2}H(p^{2})$. However, this is concealed in the complete set of data;
specific momentum configurations are therefore helpful in exposing the
effect. In fig. 4.5 these three form factors are shown for two different
kinematics, for both the normal and improved momentum bases (left and right
columns, respectively). The continuum relations are much better satisfied in
the improved momentum case. As for reproducing the expected result,
$D(\hat{p}^{2})$, the form factor $E(p^{2})$ shows the best results for
lattice momentum.
Figure 4.5: $E(p^{2})$, $-p^{2}F(p^{2})$, and $-p^{2}H(p^{2})$ from the
improved momentum lattice basis (right) and from the normal momentum lattice
basis (left), for the kinematics $(20,n,n,0)$ and $p=(n+6,n,n,n-6)$. Data
from the $\beta=6.0,\ 80^{4}$ lattice. The standard result for
$D(\hat{p}^{2})$ is also shown as a function of the improved momentum.
The combination of the results from figs. 4.2, 4.4 and 4.3 means that the
continuum relations are properly reproduced for a large range of momenta. This
can be interpreted as the survival (at least to some extent) of the Slavnov-
Taylor identity and Landau gauge condition on the lattice that fix the form of
the gluon propagator to be orthogonal. This also confirms the improvement
obtained from the change of variables $p\rightarrow\hat{p}$ with respect to
the description of lattice correlation functions.
Besides allowing a check of the continuum relations, figs. 4.2, 4.3 and 4.4
allow a comparison of the three extended tensor bases from the point of view
of the general description of the gluon propagator. With this analysis we
inspect the difference between the reduced and extended lattice bases with
regard to reproducing the gluon propagator – this will be complemented by the
reconstruction analysis below. Turning again to figs. 4.2 and 4.4, all
results portray $\hat{p}^{2}D(\hat{p}^{2})$ within one standard deviation,
although with increased fluctuations as one increases the number of basis
elements (bottom to top in the right columns). Nonetheless, all three sets of
functions seem to define a single curve compatible with the benchmark result
when represented in terms of the improved momentum $\hat{p}$. However, even
with the momentum cuts, large fluctuations appear for the larger tensor
basis, due to the mixing of different elements in the projection of the form
factors. In fact, for $p^{4}F(p^{2})$ in terms of improved momentum in fig.
4.4 the fluctuations are present over a larger range, starting around
$1.5$ GeV.
The same form factors, but in terms of the normal momentum bases (left column
of both figures) and after the $p^{[4]}$ extrapolation, also reproduce the
benchmark result $d(\hat{p}^{2})$, although in a limited range. The H4
extrapolation seems to remove most of the statistical fluctuations when
compared to the data in the right column. For this method there is a clear
distinction between the metric form factors $p^{2}E(p^{2})$,
$p^{2}J(p^{2})$, and $p^{2}A(p^{2})$ in fig. 4.2 and the remaining
non-vanishing form factors in fig. 4.4. The range of agreement with the
benchmark result is larger for the metric form factors, with the deviation
appearing for $p\sim 5$ GeV. On the other hand, the curves in the left column
of fig. 4.4 have a smaller range of agreement (except for the basis
$\\{A,B\\}$), with deviations starting at lower momenta.
The fluctuations appearing for larger tensor bases can be overcome by a
binning procedure, in which points inside each momentum bin are combined
using a weighted average. Although this sums points that are not equivalent
with respect to the group symmetry, the procedure is justified by noting that
the uncertainty in the scale setting (the choice of $a$) is around $2.5\%$;
this uncertainty defines the bins in which the average is performed. For data
in terms of lattice momentum, the averaging is applied only to the H4
corrected values.
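A sketch of such a binning, assuming momenta, values, and errors are given as arrays; multiplicative bins of 2.5% relative width and inverse-variance weights are our reading of ‘weighted average’:

```python
import numpy as np

def bin_momenta(p, vals, errs, width=0.025):
    """Weighted average of data points inside multiplicative momentum bins
    of relative width `width`, here set by the ~2.5% scale uncertainty."""
    order = np.argsort(p)
    p, vals, errs = p[order], vals[order], errs[order]
    out_p, out_v, out_e = [], [], []
    i = 0
    while i < len(p):
        j = i
        while j < len(p) and p[j] <= p[i] * (1 + width):
            j += 1
        w = 1.0 / errs[i:j]**2                   # inverse-variance weights
        out_p.append(np.average(p[i:j], weights=w))
        out_v.append(np.average(vals[i:j], weights=w))
        out_e.append(1.0 / np.sqrt(np.sum(w)))
        i = j
    return np.array(out_p), np.array(out_v), np.array(out_e)

p = np.array([1.00, 1.01, 1.02, 1.50])
v = np.array([2.0, 2.2, 1.8, 3.0])
print(bin_momenta(p, v, np.ones(4)))   # first three points merge into one bin
```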
To assess the reliability of this procedure we start by considering the
effect of binning the benchmark data. In fig. 4.6 the data published in [73]
is shown with the usual momentum cuts, as well as the binned results (right
plot). The binning seems to introduce deviations from the results after cuts
in the range $\hat{p}\sim 2$–$5$ GeV. This deviation can be accounted for in
the following figures, since it should be related to the use of the complete
set of data in terms of improved momentum, which still carries some
hypercubic artifacts.
Figure 4.6: Gluon dressing function $d(\hat{p}^{2})$ as a function of the
improved momentum for the continuum basis, published in [73]. The left plot
shows the complete set of data and the curve surviving momentum cuts; the
right plot additionally shows the averaged data in each bin – description in
the text.
Figure 4.7: Dressing functions $p^{2}E(p^{2})$, $p^{2}J(p^{2})$, and
$p^{2}A(p^{2})$ from the $\beta=6.0,\ 80^{4}$ lattice as a function of the
lattice momentum after a $p^{[4]}$ extrapolation (left) and as a function of
the improved momentum $\hat{p}$ (right). The data is shown after a binning of
$2.5\%$ in momentum was performed. The continuum dressing function
$\hat{p}^{2}D(\hat{p}^{2})$ is shown with momentum cuts.
Figure 4.8: Form factors for the higher order terms of the extended basis,
$p^{4}G(p^{2})$ and $p^{4}I(p^{2})$, in terms of the usual momentum after the
$p^{[4]}$ extrapolation (left) and as a function of the improved momentum
(right) without any correction applied. Both cases are shown after a $2.5\%$
binning is applied on the momentum axis. Data from the
$\beta=6.0,\ 80^{4}$ lattice.
Figure 4.9: Non-metric dressing functions for the three tensor bases from the
$\beta=6.0,\ 80^{4}$ lattice, as a function of the lattice momentum after a
$p^{[4]}$ extrapolation (left) and as a function of the improved momentum
$\hat{p}$ (right), both after a $2.5\%$ binning procedure applied to the
momentum. The continuum dressing function $\hat{p}^{2}D(\hat{p}^{2})$ is
shown with momentum cuts.
The binned versions of figs. 4.2, 4.4 and 4.3 are shown in figs. 4.7, 4.9 and 4.8. The binning of the data defines smoother curves with smaller statistical errors, which allows for a better analysis of the deviations from the benchmark result. For fig. 4.9 some small fluctuations are noticed around $p\sim 1.2\ \mathrm{GeV}$ for the extrapolated data. These should be related to the fluctuations noticeable in the non-binned counterpart, fig. 4.4.
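For concreteness, the binning can be sketched in a few lines. This is a minimal illustration, assuming strictly positive momenta, geometric bins of $2.5\%$ relative width and inverse-variance weighted averages within each bin; the function name and the weighting choice are ours, not necessarily those used to produce the figures:

```python
import numpy as np

def bin_in_momentum(p, f, df, width=0.025):
    """Average a dressing function f(p) (with errors df) over
    geometric momentum bins of relative width `width` (2.5%).
    Assumes p > 0 throughout (a sketch, not the thesis code)."""
    order = np.argsort(p)
    p, f, df = p[order], f[order], df[order]
    p_bin, f_bin, df_bin = [], [], []
    lo = p[0]
    while lo < p[-1]:
        hi = lo * (1.0 + width)
        mask = (p >= lo) & (p < hi)
        if mask.any():
            w = 1.0 / df[mask] ** 2          # inverse-variance weights
            p_bin.append(np.average(p[mask], weights=w))
            f_bin.append(np.average(f[mask], weights=w))
            df_bin.append(np.sqrt(1.0 / w.sum()))
        lo = hi
    return np.array(p_bin), np.array(f_bin), np.array(df_bin)
```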
The data as a function of the improved momentum in the right columns of figs. 4.7 and 4.9 shows a good agreement with $d(\hat{p}^{2})$, while the large statistical fluctuations have been absorbed by the averaging procedure. The visible deviations in the mid range of momentum do not appear in the non-binned results and should be associated with the binning procedure. For the H4 corrected data, the binning results in a reduction of fluctuations and makes it easier to recognize the deviations from the benchmark result. In general, for the extrapolated data, the best agreement with the expected result seems to be obtained by the smaller tensor basis $\{A,B\}$. For the improved momentum bases the situation is not so clear; the best match with $d(\hat{p}^{2})$ seems to be obtained for $p^{2}H(p^{2})$.
For the form factors $G(p^{2})$ and $I(p^{2})$, also shown after a binning procedure in fig. 4.8, the interpretation given for fig. 4.3 is now much clearer. Large fluctuations for low momentum are expected due to the smallness of $\Delta_{1},\ \Delta_{2}$ in the extraction of both terms – section B.2. The improved momentum basis shows a better agreement with the continuum relations, while the normal momentum after the extrapolation shows deviations for higher momenta.
From the above analysis we conclude that the use of larger bases does not improve the description of the gluon propagator. In fact, the use of larger bases introduces fluctuations in the computations. This, together with the fact that the continuum relations are already obtained through the complete range of momentum, restrains us from considering further additions to the lattice basis. The use of a more complete tensor basis would require an increase in the statistics to counteract the fluctuations coming from the mixing with a larger number of terms.
Regarding the results obtained in [13] using a similar approach, there the continuum relations are only satisfied for low momentum (or close to diagonal configurations), while in our case the relations are satisfied through the whole range of momentum, namely when using $\hat{p}$. Note, however, that the referred work uses only 2 and 3-dimensional $SU(2)$ lattices with a larger lattice spacing, and thus the comparison is to be taken with care.
##### Completeness of the tensor bases
The analysis of the form factors alone does not offer the full picture of how the lattice bases affect the description of the tensor (it is important to distinguish the description of the gluon propagator $D(p^{2})$ from the description of the original lattice tensor $D_{\mu\nu}(p)$, which is the focus when exploring the completeness of a basis). Indeed, form factors alone do not allow us to perceive how faithful the tensor description with a given basis is. The most evident case is the continuum description, which returns the exact same form factor using normal or improved momentum, while the latter reproduces the original tensor with greater accuracy. This is analysed below.
We consider the reconstruction introduced in section 3.3 applied to the tensor bases that have been studied, namely the extended and reduced lattice bases, eqs. 3.16 and 3.15, and also the continuum basis with a single form factor $D(p^{2})$. The reconstruction ratio
$\mathcal{R}=\frac{\sum_{\mu\nu}|\Gamma^{\text{orig}}_{\mu\nu}|}{\sum_{\mu\nu}|\Gamma^{\text{rec}}_{\mu\nu}|}$ (4.8)
is computed using the previously shown form factors.
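A sketch of how eq. 4.8 can be evaluated numerically, assuming the original tensor and the tensor structures of the basis are stored as $4\times 4$ arrays at a given momentum (names and data layout are illustrative only):

```python
import numpy as np

def reconstruction_ratio(D_orig, form_factors, tensors):
    """Reconstruction ratio R of eq. (4.8) at a single momentum:
    R = sum_{mu,nu} |D_orig| / sum_{mu,nu} |D_rec|, where the
    reconstructed tensor D_rec = sum_i f_i * T_i is built from the
    extracted form factors f_i and their tensor structures T_i
    (both 4x4 numpy arrays here)."""
    D_rec = sum(f * T for f, T in zip(form_factors, tensors))
    return np.abs(D_orig).sum() / np.abs(D_rec).sum()
```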
We begin by considering the H4 corrected data shown in fig. 4.10. From its analysis we notice an improvement in the reconstruction when adding tensor elements. In fact, the largest basis has the best result when compared to the other three structures, with the results being in general closer to one. The comparison between the two continuum tensors is not very informative since the differences appear to be negligible.
To understand the differences in tensor descriptions from the lattice bases we
consider specific momentum configurations to evaluate the reconstruction ratio
in eq. 4.8. The use of specific momentum configurations also helps to
reinforce the existence of special kinematics for which the continuum
description is approached.
Figure 4.10: Reconstruction ratio $\mathcal{R}$ for the normal momentum bases after the H4 extrapolation. Each plot is labelled by the corresponding form factors for each basis: $\{E,F,G,H,I\}$, $\{J,K,L\}$, $\{A,B\}$ and $\{D\}$. Data from the $\beta=6.0$, $80^{4}$ lattice.
Figure 4.11: Reconstruction ratio $\mathcal{R}$ for various single scale momentum configurations using two lattice bases, eqs. 3.16 and 3.15, and the continuum tensor (1.40) using the improved momentum and lattice momentum. Panels: a) $p=(2n,n,n,0)$, b) $p=(4n,n,n,0)$, c) $p=(n+1,n,n,n-1)$, d) $p=(n+6,n,n,n-6)$, e) $p=(40,n,n,0)$, f) $p=(n,1,1,0)$. Results from the $\beta=6.0$, $80^{4}$ ensemble.
In fig. 4.11 the ratio for six different momentum configurations is shown. The range of momentum was chosen for each plot in order to evidence the essential behaviour of each kinematics. The continuum basis $\{A,B\}$ is not shown since its results exactly match the ones from the single form factor basis. This can be explained by the orthogonality of the propagator on the lattice, which further restricts the $\{A,B\}$ basis, leaving a single effective form factor. In addition, to study the differences between improved and lattice momentum, we consider the usual continuum basis in terms of both momenta. Conversely, both lattice tensors are shown as a function of $\hat{p}$ only.
The general behaviour in fig. 4.11 shows that the most complete lattice basis
is better at portraying the original tensor, having lower ratios across most
of the configurations and for a large range of momentum. There are, however,
special kinematic points for which the remaining tensor bases match the result
from this basis.
Another striking feature comes from the comparison between the two continuum bases using normal and improved momentum. The latter shows better ratios and thus a better description of the original tensor (notice that although the extraction of $D(p^{2})$ is independent of the use of $p$ or $\hat{p}$, the choice of momentum changes the description of the full tensor).
The first row in fig. 4.11 displays two similar kinematics, distinguished only by their distance from the diagonal, with $(4n,n,n,0)$ being farther from it. The same general behaviour is obtained for both kinematics, although with a significant improvement for the left case, whose $\mathcal{R}$ values are closer to 1 for the whole range of momenta.
The second row in fig. 4.11 also represents two similar configurations, again
with the one on the left being closer to the diagonal, thus having an overall
better ratio among all bases. Additionally, there is an effect common to both,
namely the angle from the diagonal is not constant through all momenta.
Instead, it depends on $n$ like $\theta=\arccos\sqrt{1/(1+1/(2n^{2}))}$. This
dependence dictates the behaviour of the ratio, decreasing for increasing $n$.
The bottom row shows two distinct configurations. The case $(40,n,n,0)$ has an expected minimum for large $n$, when approaching the configuration $(40,40,40,0)$ from the left. The one on the right has a constant ratio, but very different descriptions among the bases, with the extended lattice basis having a much lower ratio.
In general, we conclude that with respect to the description of the gluon propagator tensor $D_{\mu\nu}(p)$, the use of more complete bases provides a better result. In addition, the improved momentum is again reinforced as the better momentum vector to use. Note that the purpose of considering larger bases is not only to obtain a better description of the scalar functions characterizing the propagator, but also to properly understand its lattice tensor structure and how it deviates from the continuum form (these deviations should be more evident for coarser lattices, with a larger lattice spacing).
In addition, our analysis provides results differing from those in [13]. Namely, in that work the reconstruction from the three form factor lattice basis (the extended tensor basis with five form factors was not considered there) shows better reconstruction results than in our case. This, however, is related to the use of a lower dimensional lattice, for which the tensor is fully described by fewer form factors (the gluon propagator is described in general by $N_{d}(N_{d}+1)/2$ independent tensor structures, depending on the dimension of the lattice $N_{d}$). This results in the structure $\{J,K,L\}$ being a more complete basis for $N_{d}<4$ than for our 4-dimensional case. Again, comparisons with these results should be considered with care.
##### Orthogonality of the tensor basis
The Landau gauge condition is expressed by the orthogonality of the gluon
field, $p_{\mu}A_{\mu}(p)=0$. This condition, together with the Slavnov-Taylor
condition, constrains the tensor form of the gluon propagator in the
continuum. It is important to study how this condition affects the form of the
two gluon correlation function on the lattice.
It is also relevant to notice that the gauge fixing on the lattice cannot be implemented with infinite precision. In our simulations the condition satisfies $|\partial A|\lesssim 10^{-7}$. It is also worth mentioning that we have explicitly tested the orthogonality of the gluon fields by computing the correlation functions after applying the projection operator
$A_{\mu}^{\text{ort}}=\left(\delta_{\mu\nu}-\frac{p_{\mu}p_{\nu}}{p^{2}}\right)A_{\nu}(p)$ (4.9)
where $A_{\mu}(p)$ are the original gauge fields. The analysis after this projection changes neither the form factors nor the ratios $\mathcal{R}$. This serves as a good test of the orthogonality on the lattice.
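A minimal sketch of the projection in eq. 4.9, assuming the Fourier-transformed gluon field at momentum $p$ is stored with its Lorentz index first (trailing colour indices are carried along unchanged):

```python
import numpy as np

def project_transverse(A, p):
    """Apply the projector of eq. (4.9):
    A_ort_mu = (delta_{mu nu} - p_mu p_nu / p^2) A_nu(p).
    `A` has shape (4, ...) with the Lorentz index first; `p` is the
    corresponding 4-momentum (data layout is an assumption)."""
    p = np.asarray(p, dtype=float)
    proj = np.eye(4) - np.outer(p, p) / np.dot(p, p)
    return np.tensordot(proj, A, axes=([1], [0]))
```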
Also, in lattice simulations for general kinematics the Landau gauge condition is much better realized for the improved momentum than for the normal momentum, $\hat{p}_{\mu}A_{\mu}(p)\ll p_{\mu}A_{\mu}(p)$, with the results differing by several orders of magnitude. The exception occurs for kinematics having a single momentum scale, for which we can establish $\hat{p}_{\mu}A_{\mu}(p)\propto p_{\mu}A_{\mu}(p)$, with the proportionality constant given by $\sin(n)/n$.
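For reference, the two momentum vectors used throughout follow the standard lattice definitions; a small helper, written for an $N^{4}$ lattice with spacing $a$ (conventions can differ by a phase factor, so this is only a sketch):

```python
import numpy as np

def momenta(n, N, a=1.0):
    """Standard definitions for integer modes n_mu:
    naive lattice momentum  p_mu    = 2*pi*n_mu / (a*N)
    improved momentum       hatp_mu = (2/a) * sin(pi*n_mu / N)
    Both are returned as 4-vectors."""
    n = np.asarray(n, dtype=float)
    p = 2.0 * np.pi * n / (a * N)
    p_hat = (2.0 / a) * np.sin(np.pi * n / N)
    return p, p_hat

# e.g. the full diagonal mode n = (5, 5, 5, 5) on an 80^4 lattice:
# p, p_hat = momenta((5, 5, 5, 5), N=80)
```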
In the continuum, the orthogonality of the propagator is ensured by its transverse tensor structure $(\delta_{\mu\nu}-p_{\mu}p_{\nu}/p^{2})$. However, for the extended bases this is not the case, and the orthogonality should manifest in relations among the form factors. For the extended lattice basis, the condition $\sum_{\mu}p_{\mu}D_{\mu\nu}(p)=0$ implies
$E(p^{2})+p_{\nu}^{2}F(p^{2})+p_{\nu}^{4}G(p^{2})+(p^{2}-p_{\nu}^{2})H(p^{2})+\left(p^{[4]}+p^{2}p_{\nu}^{2}-2p_{\nu}^{4}\right)I(p^{2})=0$ (4.10)
for momentum components $p_{\nu}\neq 0$.
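Given a set of extracted form factors, the bracket in eq. 4.10 can be checked component by component; a sketch (the form factors are scalars at the chosen momentum, and the function name is ours):

```python
import numpy as np

def orthogonality_residual(p, E, F, G, H, I):
    """Evaluate the left side of eq. (4.10) for each component nu with
    p_nu != 0; values compatible with zero indicate that the extended
    basis satisfies the Landau gauge condition at this momentum."""
    p = np.asarray(p, dtype=float)
    p2 = np.dot(p, p)
    p4 = np.sum(p ** 4)            # the invariant p^{[4]} = sum_mu p_mu^4
    res = (E + p ** 2 * F + p ** 4 * G
           + (p2 - p ** 2) * H
           + (p4 + p2 * p ** 2 - 2.0 * p ** 4) * I)
    return res[p != 0.0]
```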
Figure 4.12: Orthogonality condition, eq. 4.10, shown for the normal momentum basis after H4 extrapolation from the $\beta=6.0$, $80^{4}$ lattice (left). The right plot shows the result for the improved basis without corrections, and also with momentum cuts, in terms of the improved momentum. For all data the $p_{4}$ component was considered.
We look for deviations from this relation which, following the previous discussion, are expected to be more perceptible for the lattice momentum $p$. In fig. 4.12 the orthogonality condition is shown for the fourth component of momentum, $p_{4}$ (the conclusions from the remaining components are the same). The orthogonality relation, eq. 4.10, is shown for the H4 extrapolated data (left), where we see that the condition is satisfied only for lower momenta, although with increased fluctuations. By contrast, the improved basis (right) shows a much better realization of the orthogonality over the full momentum range. The low momentum region involves higher statistical fluctuations that can be partially eliminated by cutting momenta farther from the diagonal.
Note that this analysis of the orthogonality also serves as a complementary verification of the continuum relations and the completeness of the basis. Indeed, imposing $G,\ I\rightarrow 0$ and $-p^{2}F,-p^{2}H\rightarrow E$, the relation (4.10) is immediately satisfied.
#### 4.1.3 Lattice basis – Generalized diagonal configurations
Throughout the previous analysis we excluded the generalized diagonal kinematics, for which the complete set of lattice form factors cannot be obtained. However, it was hinted that these are special regarding the description by the continuum tensor and the orthogonality condition. In this section these configurations are studied, and some quantitative arguments are laid out to support the previous claims. The generalized diagonal configurations were introduced in section 3.1. These are defined by a single scale, and thus include on-axis momenta with a single non-vanishing component, full diagonal momenta $(n,n,n,n)$, and the mixed configurations $(n,n,0,0)$ and $(n,n,n,0)$.
Figure 4.13: Reconstruction ratio for all four generalized diagonal configurations from the $\beta=6.0$, $80^{4}$ lattice, considering the most complete lattice basis (left) and the usual continuum tensor basis (right). Also shown is the reconstruction for the kinematics $(n,1,1,0)$ using the same two bases.
We start by analysing the reconstruction results for the four generalized diagonal configurations in fig. 4.13. Firstly, there is a clear hierarchy in the faithfulness of the description among the four configurations: the closer to the diagonal, the better the description. This should be related to softer discretization artifacts along the diagonal, as opposed to the ones farther from it. The ratio deviates considerably from unity, reaching differences of about $40\%$ for on-axis momenta.
The other striking feature is the correspondence between both bases. Although neither basis is complete, it would be expected that having more independent terms would result in a better description. This apparent conflict can be explained by the special properties of these kinematics. Although we are using five form factors, the degeneracy of the tensor allows the extraction of only a reduced number (two or three, depending on the configuration – see appendix B), hence reducing the freedom in the tensor description. In addition, the combination of the gauge condition and the Slavnov-Taylor identity on the lattice further restricts the tensor by establishing relations among the form factors. Therefore, for these kinematics, both bases provide the same effective degrees of freedom.
In fig. 4.13 a momentum configuration close to on-axis momentum is also shown. It represents the same configuration as in fig. 4.11 $f)$. It should be noticed that for this kinematic configuration the complete extraction of the 5 form factors is possible. The ratio for $(n,1,1,0)$ is much smaller when using the lattice basis than for the continuum structure, which stays closer to the result from $(n,0,0,0)$; this again shows that the lattice basis is better at describing the original tensor for a general configuration.
##### Continuum relations
In the above analysis we mentioned that the diagonal kinematics are special regarding the reproduction of the continuum relations. To sustain these claims, we verify that the relations are exactly satisfied for these kinematics. We consider the full diagonal momenta $p=(n,n,n,n)$, for which only two objects may be extracted,
$E(p^{2})+n^{2}F(p^{2})+n^{4}G(p^{2})=\frac{1}{N_{d}}\sum_{\mu}D_{\mu\mu}(p)$ (4.11)
$n^{2}H(p^{2})+2n^{4}I(p^{2})=\frac{1}{N_{d}(N_{d}-1)}\sum_{\mu\neq\nu}D_{\mu\nu}(p).$ (4.12)
Since we want to establish relations among the continuum and lattice parametrizations, we consider the right side of eqs. 4.11 and 4.12 expressed by the continuum tensor $D^{c}_{\mu\nu}=D(p^{2})(\delta_{\mu\nu}-p_{\mu}p_{\nu}/p^{2})$. By carrying out this replacement, the expressions reduce to
$4E(p^{2})+p^{2}F(p^{2})+p^{4}G(p^{2})=3D(p^{2})$ (4.13)
$-p^{2}H(p^{2})-\frac{1}{2}p^{4}I(p^{2})=D(p^{2})$ (4.14)
which, by considering $G,\ I\rightarrow 0$, precisely reduce to the continuum relations
$E(p^{2})=-p^{2}F(p^{2})=-p^{2}H(p^{2})=D(p^{2}).$ (4.15)
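The consistency of eqs. 4.13 and 4.14 with the continuum relations of eq. 4.15 can be verified with a few lines of symbolic algebra; a quick check with sympy, imposing $G=I=0$:

```python
import sympy as sp

p2, D = sp.symbols('p2 D', positive=True)
E = D                 # continuum relations: E = -p^2 F = -p^2 H = D
F = -D / p2
H = -D / p2
G = I = 0             # higher order form factors switched off

# eq. (4.13): 4E + p^2 F + p^4 G - 3D should vanish
print(sp.simplify(4*E + p2*F + p2**2*G - 3*D))              # prints 0
# eq. (4.14): -p^2 H - (1/2) p^4 I - D should vanish
print(sp.simplify(-p2*H - sp.Rational(1, 2)*p2**2*I - D))   # prints 0
```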
In fact, this last step was unnecessary since, due to the form of the basis, $p^{2}F(p^{2})+p^{4}G(p^{2})$ could just be replaced by a new form factor $p^{2}F^{\prime}(p^{2})$. In this case it is irrelevant how the form factor is defined, since only the combination of the two can be extracted. An analogous argument can be made for the off-diagonal terms. Thus, for diagonal momenta, the extended lattice basis exactly reduces to the continuum description. In fact, this is the rationale for the argument given above on the decrease in the number of independent form factors for diagonal kinematics.
For on-axis momenta only the diagonal terms can be obtained,
$D_{\mu\mu}(p)=E(p^{2})+p_{\mu}^{2}F^{\prime}(p^{2})$ (4.16)
where we used the simpler notation $F^{\prime}(p^{2})=F(p^{2})+n^{2}G(p^{2})$. For this configuration the continuum parametrization has the following form
$D^{c}_{\mu\mu}=\begin{cases}D(p^{2})&\mu=2,3,4\\ 0&\mu=1.\end{cases}$
Extracting each lattice form factor with eqs. B.47 and B.48 and replacing the tensor elements by the continuum parametrization gives
$E(p^{2})=\frac{1}{3}\sum_{\mu}D^{c}_{\mu\mu}(p)=D(p^{2})$
$p^{2}F(p^{2})=D^{c}_{11}(p)-E(p^{2})=-D(p^{2}),$
thus confirming the continuum relations for this configuration. The treatment for the mixed configurations $(n,n,0,0)$ and $(n,n,n,0)$ is analogous and does not alter the conclusions – it can be seen in section C.1.1.
We confirm that the continuum relations are satisfied for single scale configurations, and thus the description with the lattice or continuum tensor is equivalent. Hence, if we want a proper description of lattice objects, the continuum tensor basis provides a good result as long as one focuses on the diagonal kinematics. This also serves to validate, once again, the conventional approach to the computation of the propagator using momentum cuts.
Figure 4.14: Form factors from the lattice basis for the diagonal configuration $p=(n,n,n,n)$ (left) and for the on-axis momentum $p=(n,0,0,0)$ (right), both as a function of improved momentum. Results from the $\beta=6.0$, $80^{4}$ lattice. Shown for comparison is the benchmark result $d(\hat{p}^{2})$.
We confirm this numerically in fig. 4.14, which shows the previous continuum relations. The three expressions show a very good agreement. The left plot shows the two possible form factors for $(n,n,n,n)$, which, other than satisfying the continuum relations among them, also show a very good agreement with the benchmark result $d(\hat{p}^{2})$. For on-axis momentum the continuum relations are also confirmed among the two lattice form factors and the continuum scalar $D(p^{2})$. However, hypercubic artifacts render this configuration problematic from the perspective of reproducing the expected result (note that the benchmark result consists of data surviving momentum cuts, and on-axis momenta do not survive the cuts; this is why the result deviates quite considerably for momenta above $\sim 0.5\ \mathrm{GeV}$).
Regarding the orthogonality for generalized diagonal configurations, the resulting conditions coincide with the continuum relations. In fact, for the case $(n,n,n,n)$ the orthogonality condition is
$p_{\mu}D_{\mu\nu}(p)=n(E(p^{2})+n^{2}F(p^{2})+n^{4}G(p^{2}))+3n^{3}(H(p^{2})+2n^{2}I(p^{2}))=0$
which is the same as obtained above for the continuum relations. Thus, following the previous conclusions, both orthogonality and continuum relations are guaranteed when studying the generalized diagonal kinematics.
#### 4.1.4 Finite volume effects
We explore possible finite volume effects by analysing results from a $64^{4}$
lattice with the same inverse coupling, $\beta=6.0$. Having a larger ensemble
(2000 configurations) results in lessened statistical fluctuations. On the
other hand, a smaller volume restricts the access to low momenta.
Due to the momentum restriction on the extraction of the five form factors for general kinematics, we cannot reach the lowest momentum points, where the finite volume effects should be most noticeable. For the rest of the momentum range, the continuum relations for the form factors show the same general behaviour as for the $80^{4}$ lattice, figs. 4.2, 4.4 and 4.3, and thus we do not repeat their analysis.
We turn our attention to the reconstruction – the finite volume of the lattice is not taken into account in the basis construction, and thus it could affect the reconstruction of the original tensor. The comparison between the two lattices is shown in fig. 4.15, with the extended and continuum bases shown in terms of the improved momentum. The first thing to notice is that the reconstruction is better for the $80^{4}$ lattice, showing a smaller ratio, except for special points such as diagonal kinematics. This is perceptible in the high momentum region of $a)$, $c)$, and $d)$. In $b)$, both lattices show the same ratio for the extended basis, while the continuum basis shows a slight difference, with the $80^{4}$ ensemble having a higher ratio.

Although both lattices provide similar results for special kinematic points, the remaining configurations differ, and the completeness of the bases seems to be reduced for the smaller volume lattice. In fact, in fig. 4.15 $c)$ and $d)$ even the $80^{4}$ continuum tensor provides a better reconstruction than the $64^{4}$ extended lattice basis.
Figure 4.15: Reconstruction ratio for the extended lattice basis and the usual continuum description, both in terms of the improved momentum, shown for the two lattices with $80^{4}$ and $64^{4}$ sites and the same spacing $1/a=1.943(47)\ \mathrm{GeV}$. Four distinct momentum configurations are shown: a) $p=(32,n,n,0)$, b) $p=(n,1,1,0)$, c) $p=(n+1,n,n,n-1)$, d) $p=(n+6,n,n,n-6)$.
Figure 4.16: Reconstruction ratio for all four generalized diagonal configurations considering the most complete lattice basis for the $(6.502\ \mathrm{fm})^{4}$ lattice (left) and the $(8.128\ \mathrm{fm})^{4}$ lattice (right). Both lattices have the same lattice spacing $1/a=1.943(47)\ \mathrm{GeV}$.
To complete the reconstruction analysis it is worth reproducing fig. 4.13 for the two different lattices, see fig. 4.16. We consider only the largest basis and confirm that the reconstruction for diagonal kinematics is independent of the lattice volume. Therefore, other than having a better description by the continuum form, these kinematics seem also to be insensitive to the volume of the lattice regarding their tensor description. With this analysis we confirm that the momentum cuts, namely choosing momenta close to the diagonal, seem to be an appropriate methodology for the lattice computation of correlation functions.
### 4.2 Three gluon vertex
The focus of this section is the analysis of the three gluon correlation
function. In particular, we look for a possible sign change and subsequent
logarithmic divergence which are expected to occur in the infrared region for
some specific kinematic limits and for some form factors of the three gluon
correlation function. The zero-crossing and IR divergence are related to the
concept of dynamical mass generation [15, 74, 75] whereby the gluon acquires
an effective momentum dependent mass $m(p^{2})$, while the ghost seems to be
transparent to this process thus remaining effectively massless. This property
should also affect different gluon correlation functions, particularly the IR
form of the gluon propagator [12, 76].
This behaviour has been predicted by various DSE analyses employing different truncation schemes and approximations for the three gluon vertex [19, 68, 17, 26]. The basic mechanism for the appearance of the zero-crossing and subsequent logarithmic divergence in the three gluon vertex is reviewed in [15]. It boils down to the appearance of a diverging ghost loop in the Dyson-Schwinger equation for the propagators, which in turn affects the three gluon vertex – see [17] for a thorough analysis. From a qualitative point of view we can justify the divergence by the supposed masslessness of the ghost, whose loop contributes a term of the form $\sim\ln(q^{2})$, which diverges for $p^{2}\rightarrow 0$. On the other hand, the gluon loop is associated with a term $\sim\ln(q^{2}+m^{2})$, remaining IR finite due to the momentum dependent effective gluon mass, $m(0)>0$. (Note that in these schemes the divergence occurs in a theory with a finite gluon propagator $D(0)\geq 0$ and a finite ghost propagator, as is the case of lattice results. Therefore, the origin of the divergence is not related to the inherently divergent 'scaling' solutions appearing in the DSE formalism; these solutions and their properties are discussed in [12].)
Since the DSE formalism requires approximations for the propagators and vertices entering the truncated equations, its results require validation, usually coming from lattice simulations. However, the study of the IR region is constrained by the finite volume of the lattice and also by the large statistical fluctuations associated with the vertices. Although the zero-crossing and the three gluon vertex divergence have been observed for 3-dimensional $SU(2)$ theory, the degree of divergence seems to be lower than the one expected from the DSE framework [25]. Other lattice investigations in both $SU(2)$ and $SU(3)$ and in three and four dimensions [21, 22, 23, 24] suggest the presence of the zero-crossing, albeit failing to observe the divergence. Contrarily, a recent analytical study of the gluon and ghost propagators using lattice data suggests the presence of a mass regularizing the ghost propagator in the deep IR [29]. This could in turn remove the infrared divergence of the three gluon vertex.
The zero-crossing provides a non-trivial constraint on the behaviour of gluon vertices which, due to its logarithmic nature, is difficult to observe (in three dimensions the corresponding effect is a $\sim 1/p$ divergence, favouring its detection in small volume lattices [77, 78]). This effect also strongly depends on the kinematic configuration. In this work we focus on the 'asymmetric' configuration with a vanishing momentum, $(p_{1},p_{2},p_{3})=(p,0,-p)$, for which we extract a single form factor $\Gamma(p^{2})$ that is expected to display the sign change in the IR region. This kinematic was considered in other lattice studies [22, 23, 10] as well as in continuum approaches [16, 17]. In [15] the ratio
$R(p^{2})=\frac{{\Gamma^{(0)}}^{a_{1}a_{2}a_{3}}_{\mu_{1}\mu_{2}\mu_{3}}(p,0,-p)G^{a_{1}a_{2}a_{3}}_{\mu_{1}\mu_{2}\mu_{3}}(p,0,-p)}{{\Gamma^{(0)}}^{a_{1}a_{2}a_{3}}_{\mu_{1}\mu_{2}\mu_{3}}(p,0,-p)D^{a_{1}b_{1}}_{\mu_{1}\nu_{1}}(p)D^{a_{2}b_{2}}_{\mu_{2}\nu_{2}}(0)D^{a_{3}b_{3}}_{\mu_{3}\nu_{3}}(p){\Gamma^{(0)}}^{b_{1}b_{2}b_{3}}_{\nu_{1}\nu_{2}\nu_{3}}(p,0,-p)}=\frac{\Gamma(p^{2})}{2}$
(4.17)
was related to the diverging ghost loop appearing in the DSE for the gluon
propagator (under the chosen truncation scheme).
Other than $(p,0,-p)$, other kinematics are generally considered in the literature, namely the 'symmetric' configuration ($p_{i}^{2}=p^{2}$, $p_{i}\cdot p_{j}=-p^{2}/2$, $i\neq j$) [22, 23], for which the zero-crossing is easier to observe due to smaller fluctuations, thus having a better defined range for the sign change. The asymmetric configuration, on the other hand, is associated with increased statistical fluctuations due to the vanishing momentum component $p_{2}=0$ [23].
Therefore we aim to investigate the possible occurrence of the zero-crossing and to narrow the range of momentum where it is expected to occur under both possible hypotheses for the ghost behaviour, namely the existence or absence of a dynamical ghost mass that regularizes the vertex. In addition, we look for possible signs of the divergence for vanishing momentum. This work follows the investigation from [21], albeit with increased statistics due to the use of a larger configuration ensemble and also due to the use of the full group symmetry – complete Z4 averaging.
For the three gluon vertex we restrict the analysis to the larger lattice, with 550 configurations, see table 4.1. The reason is the need for deep IR momentum points to study the structures introduced before. The larger ensemble has a smaller volume, and thus its smallest momentum is higher than the corresponding one for the $80^{4}$ lattice. This ensemble will be used as a comparison for the general behaviour of the data in the IR. The reader should also be aware that all quantities shown below are not renormalized, which again amounts to a constant factor.
#### 4.2.1 Three gluon correlation function
We start by analysing the complete correlation function, i.e. the vertex with external propagators, extracted with the following contraction
$G(p^{2})\equiv\delta_{\mu_{1}\mu_{3}}p_{\mu_{2}}\left\langle\operatorname{Tr}\left[A_{\mu_{1}}(p)A_{\mu_{2}}(0)A_{\mu_{3}}(-p)\right]\right\rangle=V\frac{N_{c}(N_{c}^{2}-1)}{4}D(p^{2})D(0)D(p^{2})\Gamma(p^{2})p^{2}.$ (4.18)
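A sketch of this contraction for a single gauge configuration is given below. We assume, purely for illustration, that `A[mu, k]` holds the colour matrix of the Fourier-transformed gluon field for Lorentz index `mu` and a one-dimensional momentum index `k` (so that, with FFT ordering, index `-k` corresponds to momentum $-p$); the ensemble average and the normalisation of eq. 4.18 are left out:

```python
import numpy as np

def three_gluon_contraction(A, k, p):
    """Single-configuration estimate of
    delta_{mu1 mu3} p_{mu2} Tr[A_mu1(p) A_mu2(0) A_mu3(-p)]
    from eq. (4.18). `A` has shape (4, N, Nc, Nc); the data layout
    and names are assumptions for this sketch."""
    val = 0.0
    for mu in range(4):          # delta_{mu1 mu3} sets mu1 = mu3 = mu
        for nu in range(4):      # p_{mu2} weights the zero-momentum field
            val += p[nu] * np.trace(A[mu, k] @ A[nu, 0] @ A[mu, -k]).real
    return val
```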
It is important to notice the difference in the statistical accuracy obtained by considering the complete Z4 averaging as opposed to the partial (permutation only) case. A look at fig. 4.17 allows one to perceive the change induced by the use of all $H(4)$ equivalent points in the averaging, which enhances the signal to noise ratio. Statistical fluctuations are lessened over the whole range of momentum for the complete Z4 case, and the data defines a smoother curve, with decreased error bars. Given the reduced statistical precision found in the lattice computation of vertices, when compared with the results for the gluon propagator in the previous section, it is crucial to consider possible ways of increasing the statistics. For this reason, the rest of this section considers the complete Z4 averaged data.
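The difference between the partial and complete averaging amounts to the size of the orbit over which equivalent momenta are averaged; a sketch of the orbit construction (permutations only for the partial case, permutations combined with sign flips for the complete Z4 case):

```python
from itertools import permutations, product

def h4_orbit(n, full=True):
    """H(4)-equivalent integer momenta of n = (n1, n2, n3, n4):
    the 4! permutations, combined with the 2^4 sign flips when
    `full` is True (complete Z4 averaging); permutations only
    otherwise (partial averaging)."""
    signs = list(product((1, -1), repeat=4)) if full else [(1, 1, 1, 1)]
    orbit = set()
    for s in signs:
        for perm in permutations(n):
            orbit.add(tuple(si * ci for si, ci in zip(s, perm)))
    return sorted(orbit)

# len(h4_orbit((1, 2, 3, 4)))        -> 384 equivalent momenta
# len(h4_orbit((1, 2, 3, 4), False)) -> 24
```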
Figure 4.17: Three gluon correlation function from the $\beta=6.0$, $80^{4}$ ensemble, contracted with and shown as a function of the improved momentum. All data is shown without correction methods, using a partial Z4 averaging with permutations only, and also the complete Z4 averaging.
Regarding the $p^{[4]}$ extrapolation, we notice that this procedure can be extended to a higher momentum than the one used for the gluon propagator without loss of integrity of the method. The H4 method uses the $H(4)$ orbits to 'reconstruct' the continuum object – extrapolating data to $p^{[4]}\rightarrow 0$. While for the gluon propagator the structures formed by the orbit points are well defined, with a small associated uncertainty, the three gluon orbit structures are concealed by large fluctuations. Hence, the extrapolated function for the three gluon vertex maintains a momentum dependence close to the original data, but with increased precision. Notice, however, that this is not an advantage of the method for the three gluon vertex, but a consequence of the reduced precision associated with this vertex, which allows us to extend the range within the original uncertainty.
To support these claims on the extension of the method, we compare the effect of extending the extrapolation for both the gluon propagator and the three gluon vertex. In fig. 4.18 the H4 extrapolation for the propagator was extended to all momenta and compared with the diagonal configurations, chosen due to their lessened hypercubic artifacts. The dressing function for $(n,n,n,n)$ momenta is shown as a function of improved momentum, as this was observed in the previous section to produce a better match with the expected behaviour. We see that for momenta above $p\sim 5\ \mathrm{GeV}$ the difference between both results is large, evidencing the inaccuracy of the extrapolation at this momentum scale. In fact, the extrapolation for momenta above $p\sim 6\ \mathrm{GeV}$ becomes unstable, producing a less smooth curve.
Figure 4.18: H4 extrapolated data for the gluon propagator dressing function $d(p^{2})$ compared with full diagonal momenta $(n,n,n,n)$ as a function of improved momentum. Data from the $\beta=6.0$, $80^{4}$ ensemble.
Contrary to this case, if we extend the $p^{[4]}$ extrapolation for the three gluon vertex, disagreement is only obtained for larger momenta. In fig. 4.19 the H4 corrected vertex is again plotted against the diagonal kinematics. We see that the general behaviour of the curve is maintained after the correction (with additional precision), and that it follows the diagonal curve. Therefore, for the three gluon vertex an extension of the extrapolation is possible within the statistical accuracy. Notice however that the extension is not complete, since for momenta above $p\sim 8\ \mathrm{GeV}$ large fluctuations arise and the extrapolation is not reliable. In fact, for the highest momenta the extrapolation is not possible due to the lack of $H(4)$ orbit elements, analogously to the IR region.
Figure 4.19: Original and $p^{[4]}$ extrapolated data for the three gluon correlation function from the $\beta=6.0$, $80^{4}$ ensemble as a function of the lattice momentum $p$. The H4 correction was applied over the full momentum range. The configuration $(n,n,n,n)$ is shown for comparison.
##### Perturbative UV prediction
Although we are interested in the infrared behaviour of the correlation function, we begin by probing how the continuum perturbative predictions match the lattice results at high momenta. To perform this comparison we apply the H4 extrapolation as well as conical cuts with improved momentum. Following [21], to study the ultraviolet region of our results we use the one-loop renormalization group improved result for the propagator
$D(p^{2})=\frac{Z}{p^{2}}\left[\ln\left(\frac{p^{2}}{\mu^{2}}\right)\right]^{-\gamma}$ (4.19)
with $Z$ a global constant, $\mu=0.22\ \mathrm{GeV}$ and $\gamma=13/22$ the gluon anomalous dimension. For the three gluon vertex a similar expression is obtained,
$\Gamma(p^{2})=Z^{\prime}\left[\ln\left(\frac{p^{2}}{\mu^{2}}\right)\right]^{\gamma_{3g}}$
(4.20)
with the anomalous dimension $\gamma_{3g}=17/44$. These two expressions can be
combined to construct the corresponding three gluon correlation function
computed above, eq. 4.18
$G_{\text{UV}}(p^{2})=\frac{Z^{\prime\prime}}{p^{2}}\left[\ln\left(\frac{p^{2}}{\mu^{2}}\right)\right]^{\gamma^{\prime}}$
(4.21)
with $\gamma^{\prime}=\gamma_{3g}-2\gamma=-35/44$ the overall anomalous
dimension. This result is expected to be valid for high momentum.
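For reference, the three perturbative forms can be coded directly; a minimal sketch with $\mu=0.22$ GeV fixed as in the text and the global constants left as free parameters:

```python
import numpy as np

GAMMA, GAMMA_3G = 13.0 / 22.0, 17.0 / 44.0   # anomalous dimensions

def D_uv(p2, Z, mu=0.22):
    """One-loop RG improved gluon propagator, eq. (4.19)."""
    return Z / p2 * np.log(p2 / mu**2) ** (-GAMMA)

def Gamma_uv(p2, Zp, mu=0.22):
    """Three gluon vertex UV prediction, eq. (4.20)."""
    return Zp * np.log(p2 / mu**2) ** GAMMA_3G

def G_uv(p2, Zpp, mu=0.22):
    """Combined correlation function, eq. (4.21), with overall
    anomalous dimension gamma' = gamma_3g - 2*gamma = -35/44."""
    return Zpp / p2 * np.log(p2 / mu**2) ** (GAMMA_3G - 2.0 * GAMMA)
```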
Figure 4.20: $\chi^{2}/d.o.f.$ obtained from the fit of the functional form (4.21) to the $\beta=6.0$, $80^{4}$ lattice data as a function of the momentum range cut-off, $p>p_{0}$. The left plot shows the result of the fit for the H4 corrected data, while the right plot uses diagonal momenta as a function of the improved momentum.
Figure 4.21: Three gluon correlation function $G(p^{2})$ after the H4 extrapolation as a function of the lattice momentum (left) and as a function of the improved momentum after cuts for $\hat{p}>1\ \mathrm{GeV}$ (right). The perturbative prediction, eq. 4.21, is also represented after a fit to the extrapolated and diagonal configurations, respectively. All results shown are from the $\beta=6.0$, $80^{4}$ ensemble.
To better understand the validity of the perturbative prediction, the fits were performed with Gnuplot [79] for various momentum ranges $[p_{0},8]\ \mathrm{GeV}$ with varying $p_{0}$. The upper bound at $8\ \mathrm{GeV}$ is kept also for the H4 corrected data, due to the large errors in the lattice data at higher momenta. The fit was applied to the H4 corrected data as a function of the lattice momentum, and also to the data as a function of improved momentum. To evaluate its quality we compute the reduced $\chi^{2}$, i.e. the $\chi^{2}/d.o.f.$, which measures the deviation of the fitted curve from the data points, taking the uncertainty of the data into account. It is defined through
$\chi^{2}=\sum_{i}\left(\frac{G_{i}-f(p_{i})}{\delta G_{i}}\right)^{2}$
where $G_{i}$ and $\delta G_{i}$ are the data points and corresponding errors, while $f(p_{i})$ is the fitted curve evaluated at the momentum of $G_{i}$. The degrees of freedom ($d.o.f.$) are the number of data points minus the number of adjustable parameters; a good fit is indicated by $\chi^{2}/d.o.f.\sim 1$. The resulting $\chi^{2}/d.o.f.$ for the various values of $p_{0}$ is shown in fig. 4.20.
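Although the fits in the text were done with Gnuplot, the scan over the cut-off $p_{0}$ is easy to reproduce; a sketch using `scipy` and the `G_uv` form sketched above (the data arrays and the starting value are placeholders):

```python
import numpy as np
from scipy.optimize import curve_fit

def model(p, Z):
    return G_uv(p**2, Z)          # one-parameter UV form, eq. (4.21)

def chi2_dof(p, G, dG, Z):
    resid = (G - model(p, Z)) / dG
    return np.sum(resid**2) / (len(p) - 1)   # one adjustable parameter

def scan_cutoff(p, G, dG, p0_grid, p_max=8.0):
    """Fit on [p0, p_max] GeV for each cut-off p0 and collect the
    reduced chi^2, mimicking the scan shown in fig. 4.20."""
    out = []
    for cut in p0_grid:
        m = (p > cut) & (p < p_max)
        (Z,), _ = curve_fit(model, p[m], G[m], sigma=dG[m], p0=[1.0])
        out.append((cut, chi2_dof(p[m], G[m], dG[m], Z)))
    return out
```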
For the H4 corrected data, the best fit is obtained for $p_{0}\sim 6.5\ \mathrm{GeV}$. However, for cut-offs above $p_{0}\sim 2.5\ \mathrm{GeV}$ the fit already shows a stable match with the lattice data; above this scale the fit maintains a $\chi^{2}/d.o.f.$ below $\sim 1.15$. The fit for $p_{0}=2.5\ \mathrm{GeV}$ is shown in the left plot of fig. 4.21, for which $\chi^{2}/d.o.f.=1.14$. The data seems to follow the perturbation theory prediction for $p$ above $\sim 2.5\ \mathrm{GeV}$.
The fits for the data as a function of improved momentum surviving the cuts show similar $\chi^{2}/d.o.f.$ values for most fitting ranges. However, the values oscillate less smoothly, and in fact become high for $p_{0}$ above $6\ \mathrm{GeV}$. In the right plot of fig. 4.21 the fit for $p_{0}=3\ \mathrm{GeV}$ is shown, having $\chi^{2}/d.o.f.=1.09$. This curve also shows a good agreement with the lattice data, thus validating the perturbative prediction for high momenta.
To compute the pure three gluon vertex we need to explicitly remove the contribution of the external propagators by dividing by their form factors $D(p^{2})$, eq. 4.18. Hence, we also compare the lattice computation of $D(p^{2})$ with the perturbative result, eq. 4.19. The increased accuracy for this object only allows a fit at higher momenta; in addition, we do not consider the extrapolated data due to its restriction to high momentum for the propagator. The comparison is shown in fig. 4.22 as a function of the improved momentum. A good match with the lattice data is obtained, with $\chi^{2}/d.o.f.=1.10$ for the range $p>5\ \mathrm{GeV}$. Again, the perturbative result is confirmed for sufficiently high momentum.
Figure 4.22: Gluon propagator $D(p^{2})$ from the $\beta=6.0$, $80^{4}$ lattice as a function of the improved momentum after cuts above $1\ \mathrm{GeV}$. The renormalization group improved perturbative result, eq. 4.19, was fitted to the data for $p\in[5,8]\ \mathrm{GeV}$, resulting in a fit with $\chi^{2}/d.o.f.=1.10$.
#### 4.2.2 Three gluon one particle irreducible function
Although the possible sign change associated with the three gluon vertex should be noticeable in the complete correlation function shown before, the latter carries high statistical fluctuations for momenta below $p\sim 1\ \mathrm{GeV}$, hindering the IR analysis of the curve. In addition, since continuum investigations work with the 1PI function, we need to remove the propagators if we want to properly compare lattice and continuum results. In this way we isolate the pure one particle irreducible function, which for the $(p,0,-p)$ kinematics and the tensor basis considered is described by $\Gamma(p^{2})$, eq. 3.33.
Firstly, we notice that the comparison with the UV perturbative prediction from eq. 4.20 is not possible for $\Gamma(p^{2})$, due to large statistical fluctuations dominating the high momentum region. These arise from the high momentum form of the gluon propagators, which for a general kinematic configuration behave as $D(p^{2})\sim 1/p^{2}$. This induces a $p^{6}$ factor in $\Gamma(p^{2})$ when dividing by $D(p^{2})$ (the poor signal to noise ratio of $\Gamma(p^{2})$ at high momentum is a common complication for lattice computed 1PI functions with more than two external legs; this problem is not completely solved by increasing the number of configurations, since it is inherently associated with the high momentum behaviour of the propagators). In turn, this factor enlarges the uncertainty associated with $\Gamma(p^{2})$ – this can be seen by a simple Gaussian error propagation, see [21]. For the kinematics under consideration the factor is softened to $p^{4}$ due to the vanishing momentum $p_{2}=0$, $D(0)>0$. However, the $p^{4}$ factor combined with large fluctuations in $D(0)$ creates strong fluctuations in the ratio $p_{\mu}G_{\nu\mu\nu}(p,0,-p)/D(p^{2})^{2}D(0)$ for high momenta.
Regarding the detection of the zero-crossing this is not a problem, since $D(p^{2})$ is essentially constant in the deep IR region and thus the signal has a more stable behaviour and higher precision. Additionally, the H4 extrapolation is not useful here, for it disregards points in this region.
Figure 4.23: Complete set of data from the $\beta=6.0$, $80^{4}$ lattice for the three gluon 1PI function $\Gamma(p^{2})$ as a function of the improved momentum. The data surviving momentum cuts above $1\ \mathrm{GeV}$ is also shown.
In fig. 4.23 both the complete set of data for $\Gamma(p^{2})$ and the points surviving momentum cuts above $1\ \mathrm{GeV}$ are shown as a function of improved momentum. This result matches the momentum dependence obtained in other lattice studies; namely, it follows the results from [21], although with an improved signal to noise ratio. As expected, large statistical fluctuations arise for momenta above $\sim 1.5\ \mathrm{GeV}$. The two lowest momentum points are both compatible with zero within one standard deviation. The lowest non on-axis momentum is compatible with zero within the uncertainty, $\Gamma(p=0.216\ \mathrm{GeV})=0.176(182)$, while the lowest on-axis momentum is also compatible with zero although having a larger error, $\Gamma(p=0.152\ \mathrm{GeV})=0.477(479)$. However, these two points do not provide a statistically significant signal of the possible zero-crossing.
In order to improve the analysis of the infrared behaviour of the 1PI function we consider three different functional forms to fit the data in fig. 4.23,
$\Gamma_{1}(p^{2})=a_{1}+z_{1}\ln\left(\frac{p^{2}}{\mu^{2}}\right),\ (a_{1},z_{1})$ (4.22)
$\Gamma_{2}(p^{2})=a_{2}+z_{2}\ln\left(\frac{p^{2}+m^{2}}{\mu^{2}}\right),\ (a_{2},z_{2},m)$ (4.23)
$\Gamma_{3}(p^{2})=1+cp^{-d},\ (c,d);$ (4.24)
the adjustable parameters appear in parentheses. The first functional form, eq. 4.22, comes from a simple Landau gauge, four-dimensional QCD toy model for asymptotically low momentum [15, 23]. The second logarithm, eq. 4.23, has an additional constant $m^{2}$ to account for the possible dynamical ghost mass predicted in [29]. This mass could in principle remove the three gluon divergence by regularizing the ghost loop; nonetheless, a sign change is still possible depending on the values of the parameters. Both constants $a_{1},a_{2}$ serve to partially take into account the non-leading terms, which become relevant for higher momenta.

The third form for $\Gamma(p^{2})$, eq. 4.24, is a power law ansatz [25] which allows us to study the degree of the possible divergence in the IR and also to estimate the position of the zero-crossing. In [22, 15, 23] more appropriate curves, obtained by solving the DSEs for this momentum configuration, are considered and fitted to lattice data.
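The three ansätze, and the zero-crossing implied by the first, are simple to express in code; a sketch ($\mu$ is a reference scale whose choice is absorbed by the constants, and the closed form for $p_{0}$ follows from setting eq. 4.22 to zero):

```python
import numpy as np

def Gamma1(p2, a1, z1, mu=1.0):
    """Massless-ghost logarithm, eq. (4.22); mu is a reference scale."""
    return a1 + z1 * np.log(p2 / mu**2)

def Gamma2(p2, a2, z2, m, mu=1.0):
    """Logarithm with a dynamical ghost mass, eq. (4.23)."""
    return a2 + z2 * np.log((p2 + m**2) / mu**2)

def Gamma3(p2, c, d):
    """Power law ansatz, eq. (4.24): 1 + c * p^{-d} with p = sqrt(p2)."""
    return 1.0 + c * p2 ** (-d / 2.0)

def zero_crossing_Gamma1(a1, z1, mu=1.0):
    """Setting eq. (4.22) to zero gives ln(p0^2/mu^2) = -a1/z1,
    i.e. p0 = mu * exp(-a1 / (2 z1))."""
    return mu * np.exp(-a1 / (2.0 * z1))
```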
To better understand the validity of the functional forms, the fit was tested over ranges $[p_{i},p_{f}]$ with variable $p_{f}$, while $p_{i}$ is the lowest non-zero momentum value. The value of $p_{f}$ was restricted to $2\ \mathrm{GeV}$, above which $\Gamma(p^{2})$ suffers from large fluctuations; in fact these are noticeable already at the upper momenta of fig. 4.23. As a lower bound, we consider $p_{f}$ above $0.5\ \mathrm{GeV}$, since not enough data exists below this threshold.
Since we want to explore the quality of the fit with varying range $p_{f}$, we consider the analysis for the complete set of data in fig. 4.23. In addition, we compare the result of the fits with the data surviving momentum cuts above $1\ \mathrm{GeV}$, to try to overcome the problem of large fluctuations at higher momenta. The quality of the fit was controlled with the $\chi^{2}/d.o.f.$, shown for all functional forms and both sets of data in fig. 4.24.
Figure 4.24: $\chi^{2}/d.o.f.$ of the three fits from eqs. 4.22, 4.23 and 4.24 (top left, top right and bottom, respectively) for the varying momentum range $p\in[p_{i},p_{f}]$. Both fits with and without momentum cuts were considered.
The results for the $\chi^{2}/d.o.f.$ as a function of the fitting range, in fig. 4.24, are similar for both logarithms, $\Gamma_{1}$ and $\Gamma_{2}$. The quality of the fit seems to be highly dependent on the range for $p_{f}$ below $0.8\ \mathrm{GeV}$, with the $\chi^{2}$ rapidly oscillating. Above $1\ \mathrm{GeV}$ the momentum cuts are applied, and thus the results for the two sets of data become different. The reduced $\chi^{2}$ oscillates around $\chi^{2}/d.o.f.=1.3$ for the complete data in the range $p_{f}\sim 1-1.4\ \mathrm{GeV}$. For larger momentum ranges, $p_{f}>1.4\ \mathrm{GeV}$, the fit to the complete data provides reduced $\chi^{2}$ values closer to one, indicating a better match to the data.
Although the quality of the fit has a similar behaviour for both logarithms, the one with an additional mass shows $\chi^{2}/d.o.f.$ values closer to unity. This value remains between $0.9-1.1$ for $p_{f}>1.1\ \mathrm{GeV}$ for the complete data using $\Gamma_{2}$, while for the form $\Gamma_{1}$ the reduced $\chi^{2}$ stabilizes around $1.2$ in this range.
For the data surviving momentum cuts the behaviour is simpler. Both functional forms provide a stable $\chi^{2}$ around $\chi^{2}/d.o.f.=1.3$ for $p_{f}>1\ \mathrm{GeV}$. This should be a good indication of the smoothness of the data produced by the cuts, and also that the curves match the results within the uncertainty. It is also important to notice that although in general the complete data provides a fit of better quality, the data after momentum cuts is associated with lessened lattice artifacts, and thus this prediction should also be considered.
The behaviour of the fit for the third functional form, $\Gamma_{3}$, is different from the one described above. From the bottom panel in fig. 4.24 we see that the best fit is obtained for $p_{f}$ in the range $0.6-0.8\ \mathrm{GeV}$, and that the reduced $\chi^{2}$ grows rapidly for momenta above this region. Since the quality of the fit becomes worse above $0.9\ \mathrm{GeV}$, the momentum cuts were applied for $p>0.7\ \mathrm{GeV}$ instead. Notice that, in addition, the fit was restricted to $p_{f}=1.2\ \mathrm{GeV}$, above which the fits deteriorate. In fact, since this functional form is considered to probe the degree of the possible divergence in $\Gamma(p^{2})$, it should be valid at lower momentum when compared with the first two models (this was thoroughly explored in [25] for both the 3 and 4-dimensional cases, where the power law was found to be compatible with the data only for momenta below $\sim 1\ \mathrm{GeV}$). This is why the quality of the fit rapidly decreases when reaching $p_{f}\sim 1\ \mathrm{GeV}$. The quality for the data with cuts remains practically constant above $0.9\ \mathrm{GeV}$, with a value around $\chi^{2}/d.o.f.=1.5$.
To better understand how each form matches the lattice data we analyse each model independently and show the result of the fits for a specific value of $p_{f}$. We choose $p_{f}$ above $1\ \mathrm{GeV}$ in order to distinguish between the complete data and the data surviving the momentum cuts. For the $\Gamma_{1}$ logarithm, the choice $p_{f}=1.7\ \mathrm{GeV}$ provides fits with $\chi^{2}/d.o.f.=1.14$ and $\chi^{2}/d.o.f.=1.28$ for the complete data and for the data after cuts, respectively. It is worth noting that the parameters of this curve and the corresponding uncertainty do not vary significantly over the range $1.3<p_{f}<2\ \mathrm{GeV}$, which further supports the quality of the fit – more on this below. The resulting curves and the corresponding uncertainty (computed assuming Gaussian propagation of the errors) are shown in fig. 4.25. The fit to the data surviving the momentum cuts seems to provide a better match with the three gluon vertex $\Gamma(p^{2})$ in the lowest momentum range, namely for $p\sim 0.2-0.8\ \mathrm{GeV}$, although the uncertainty in the curve parameters is slightly higher. The use of the complete lattice data seems to shift the position of the possible sign change towards higher momenta, with $p_{0}=0.249(3)\ \mathrm{GeV}$ for the complete data and $p_{0}=0.160(12)\ \mathrm{GeV}$ for the data after momentum cuts.
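The uncertainty bands quoted above follow from linear (Gaussian) propagation of the fit-parameter covariance. A minimal sketch of this computation is given below, assuming the covariance matrix `cov` returned by `scipy.optimize.curve_fit`; the numerical gradient is an illustrative choice rather than the thesis' exact procedure.

```python
# Gaussian propagation of the parameter covariance to a curve band:
# var[f(p2)] = J C J^T, with J the gradient of the model with respect
# to the parameters (computed here by finite differences).
import numpy as np

def curve_band(model, pars, cov, p2, eps=1e-6):
    pars = np.asarray(pars, dtype=float)
    f0 = model(p2, *pars)
    J = np.empty((len(p2), len(pars)))
    for i in range(len(pars)):
        dp = np.zeros_like(pars)
        dp[i] = eps * max(1.0, abs(pars[i]))
        J[:, i] = (model(p2, *(pars + dp)) - f0) / dp[i]
    var = np.einsum('ni,ij,nj->n', J, cov, J)
    return f0, np.sqrt(var)            # central curve and 1-sigma band
```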
Figure 4.25: $\Gamma(p^{2})$ from the $\beta=6.0$, $80^{4}$ ensemble as a function of the improved momentum. The data after momentum cuts is also shown. Two fits using eq. 4.22 with $p_{f}=1.7\ \mathrm{GeV}$ were adjusted, one to the complete data and one to the set after momentum cuts.
For the second logarithmic form, eq. 4.23, a similar reasoning guides the choice of $p_{f}$. The value $p_{f}=1.7\ \mathrm{GeV}$ provides a good fit to the data, with $\chi^{2}/d.o.f.=0.984$ and $\chi^{2}/d.o.f.=1.21$ for the complete set and for the data after cuts, respectively. The corresponding curves are shown in fig. 4.26. Although the quality of the fit indicated by the $\chi^{2}$ seems better for the logarithm with the additional mass, the uncertainty in the parameters is larger. Nonetheless, both curves in fig. 4.26 have a similar form and suggest a good match with the data over the full range of momenta. Regarding the possible sign change, the fit to the complete data suggests a positive IR value for $\Gamma(0)$ and no sign change, within the uncertainty of the curve. The curve using the momentum cuts, on the other hand, allows for a possible sign change. However, although within this model we predict that $p_{0}$ should occur below $0.35\ \mathrm{GeV}$, the existence of a sign change is not guaranteed by the curve, and the substantial uncertainty it carries does not allow further conclusions.
Figure 4.26: $\Gamma(p^{2})$ from the complete set as a function of the improved momentum from the $\beta=6.0$, $80^{4}$ ensemble. The data after momentum cuts is also shown. The functional form in eq. 4.23 with $p_{f}=1.7\ \mathrm{GeV}$ was adjusted to the complete and to the partial data.
Figure 4.27: $\Gamma(p^{2})$ for the complete kinematics as a function of the improved momentum from the $\beta=6.0$, $80^{4}$ ensemble. The set of points surviving the momentum cuts is also shown. The functional form in eq. 4.24 with $p_{f}=0.85\ \mathrm{GeV}$ was adjusted to the complete and to the partial data.
For the power law form, eq. 4.24, a good balance between the quality of the fit and a reasonable uncertainty is obtained for $p_{f}=0.85\ \mathrm{GeV}$, for which the complete data provides the better fit, with $\chi^{2}/d.o.f.=1.12$ as opposed to $\chi^{2}/d.o.f.=1.29$ for the data surviving the momentum cuts. The corresponding curves in fig. 4.27 have a comparable form, barely affected by the change in the data set (this is expected due to the small fitting range above $0.7\ \mathrm{GeV}$, above which the cuts were applied). Both results are compatible with a sign change, with $p_{0}=0.189(31)\ \mathrm{GeV}$ for the curve using the complete data and $p_{0}=0.179(48)\ \mathrm{GeV}$ for the other set.
Since this last functional form is expected to match the data only at low momenta, where the divergence is supposed to occur, the curve fails to match the lattice data for momenta above $\sim 1\ \mathrm{GeV}$. For lower momenta the curve seems to provide a good match with the data, although with decreased precision when compared with the results in fig. 4.25. The exponents $d$ from the fits are $d=0.940(135)$ and $d=1.01(10)$ for the complete and partial sets, respectively. These seem compatible with previous findings from both $SU(2)$ and $SU(3)$ lattice investigations [25, 80]. However, since we find no clear numerical evidence for the divergence, due to the lack of points in the deep IR region, this result is not reliable and should be taken with care.
Figure 4.28: Prediction for the sign change $p_{0}$ from the fits using eq. 4.22 (left) and eq. 4.24 (right) for varying fitting ranges $[0,p_{f}]$.
Both the first and last functional forms, eqs. 4.22 and 4.24, are considered in order to study the possible zero-crossing with a subsequent divergence. Despite not having a clear signal of the divergence, we can study how the estimated position and uncertainty of $p_{0}$ vary with the fitting range (although a sign change can also be observed for the form 4.23, as seen in fig. 4.26, its existence depends strongly on the momentum range of the fit; moreover, the associated uncertainty is much larger, and therefore its explicit computation as a function of $p_{f}$ is not shown). The $p_{0}$ values for $\Gamma_{1}$ and $\Gamma_{3}$ are shown in fig. 4.28 as a function of $p_{f}$ for both the complete and the partial sets of data. From this figure we notice that $p_{0}$ carries the smallest uncertainty when computed with the first form, eq. 4.22, using the complete set of data. In addition, the complete data seems to shift the position of the zero-crossing towards higher momenta when compared to the partial data.
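A sketch of how $p_{0}$ and its uncertainty can be extracted for each fitting range is given below; the bracketing interval and the Monte-Carlo resampling of the parameter covariance are illustrative choices, not the thesis' exact procedure, and `gamma1` is the hypothetical stand-in model introduced above.

```python
# Solve Gamma(p0^2) = 0 numerically and propagate the fit-parameter
# covariance to p0 with a simple Monte-Carlo over parameter samples.
import numpy as np
from scipy.optimize import brentq

def zero_crossing(model, pars, lo=0.05, hi=1.0):
    # Root of the fitted curve inside the bracket [lo, hi] (GeV).
    return brentq(lambda p: model(p**2, *pars), lo, hi)

def p0_with_error(model, pars, cov, n=1000, seed=0):
    rng = np.random.default_rng(seed)
    samples = rng.multivariate_normal(pars, cov, size=n)
    p0s = []
    for s in samples:
        try:
            p0s.append(zero_crossing(model, s))
        except ValueError:             # no sign change inside the bracket
            pass
    if not p0s:
        return np.nan, np.nan
    return np.mean(p0s), np.std(p0s)
```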
For the logarithmic case, $p_{0}$ varies very little over the range $p_{f}<1\ \mathrm{GeV}$, showing values around $0.1-0.15\ \mathrm{GeV}$. Above $1\ \mathrm{GeV}$, the data surviving the momentum cuts maintains a constant value around $p_{0}=0.15\ \mathrm{GeV}$. This prediction lies in a region where the lattice results are in fact compatible with zero within the uncertainty. On the other hand, the prediction from the complete data grows for $p_{f}>1\ \mathrm{GeV}$, reaching a seemingly constant value of $p_{0}=0.25\ \mathrm{GeV}$ above $p_{f}=1.6\ \mathrm{GeV}$. Both sets of data thus seem to approach a constant value for large fitting ranges; the two values, however, are not compatible within one standard deviation.
For the power law, shown in the right plot of fig. 4.28, $p_{0}$ follows the same tendency as for $\Gamma_{1}$, but the uncertainty in this model is much larger. The result from the data surviving the cuts remains constant over the whole range of momenta, while the result from the complete data increases for larger $p_{f}$. In this case, however, the intervals predicted by the two sets are compatible within the uncertainty. The combination of these results indicates a possible value for the zero-crossing position in the interval $0.1-0.25\ \mathrm{GeV}$.
Although a similar analysis is not possible for the form $\Gamma_{2}$, it is worth noting that the fit with eq. 4.23 maintains a stable behaviour, similar to the one found in fig. 4.26, over a large range of $p_{f}$. This is a good indication that the model describes the data. However, increased precision in the results is needed to better assess the possibility of a sign change and the IR finiteness of the three gluon vertex.
#### Finite volume effects
Figure 4.29: $\Gamma(p^{2})$ from the $\beta=6.0$, $80^{4}$ ensemble compared with the results from [21] using the $\beta=6.0$, $64^{4}$ lattice with 2000 configurations. Above $1\ \mathrm{GeV}$ only data surviving the momentum cuts is shown.
To complete the analysis of the three gluon vertex we compare the results obtained from the $80^{4}$ lattice using 550 configurations with those from the $64^{4}$ lattice with 2000 configurations and partial Z4 averaging (the data from the $64^{4}$ lattice was previously computed in [21], with momentum cuts applied above $1\ \mathrm{GeV}$). Since both lattices have the same spacing, this comparison allows us to search for possible finite volume effects in the three gluon vertex.
The dimensionless form factor $\Gamma(p^{2})$ is shown for both lattices in fig. 4.29, where momentum cuts were applied above $1\ \mathrm{GeV}$. Although the $80^{4}$ lattice data is noisier and shows larger error bars, as a result of the difference in ensemble sizes, both sets of data seem to follow the same general behaviour approaching the infrared region. However, the current data suggests a possible shift enhancing $\Gamma(p^{2})$ for the $80^{4}$ lattice in comparison with the $64^{4}$ results. The curve produced by the $80^{4}$ lattice data seems to lie above the $64^{4}$ results for momenta below $1.5\ \mathrm{GeV}$; above this value the fluctuations become larger and the results become compatible within the uncertainty. This enhancement could result from the difference in lattice sizes and suggests a finite volume effect at low momenta.
Finite volume effects for the gluon propagator were studied in [62], where the propagator was found to decrease in the IR as the lattice size increases at fixed spacing $a$. However, the relevant momentum scales for this effect seem to be different for the three gluon vertex, with the enhancement extending to higher momenta than for the propagator. If we consider this effect for the propagator, and disregard a possible independent finite volume effect on the complete three gluon correlation function $G(p^{2})$, the pure vertex $\Gamma(p^{2})$ is enhanced at low momenta when dividing by the product $D(p^{2})^{2}D(0)$. Indeed, the lattice data seems compatible with an increase at low momenta; however, this is a rather rough estimate of the effect, and we should keep in mind that the finite volume can also directly affect the complete correlation function.
| Model | $\chi^{2}/d.o.f.$ ($64^{4}$) | $p_{0}\ (\mathrm{GeV})$ ($64^{4}$) | $\chi^{2}/d.o.f.$ ($80^{4}$) | $p_{0}\ (\mathrm{GeV})$ ($80^{4}$) |
|---|---|---|---|---|
| $\Gamma_{1}$ | 1.09 | 0.180(14) | 1.28 | 0.156(18) |
| $\Gamma_{2}$ | 1.06 | – | 1.19 | – |
| $\Gamma_{3}$ | 1.12 | 0.209(43) | 1.18 | 0.180(43) |

Table 4.2: Fit parameters for the $64^{4}$ and $80^{4}$ lattices using the three models in eqs. 4.22, 4.23 and 4.24.
Figure 4.30: $\Gamma(p^{2})$ with momentum cuts above $1\ \mathrm{GeV}$ for the $80^{4}$ and $64^{4}$ lattices. The curves result from the fits with eq. 4.22 (top left), eq. 4.23 (top right) and eq. 4.24 (bottom) with fitting ranges $p_{f}=1.7\ \mathrm{GeV}$ for the first two, and $p_{f}=0.85\ \mathrm{GeV}$ for the latter.
Regarding the position of a possible sign change, under the previous hypothesis for the finite volume effect the change in the propagator amounts to an overall multiplicative factor, and thus the position of the zero-crossing is untouched. Again, however, the complete effect on the three gluon correlation function may induce further changes and could in fact change this value. Besides, since no statistically relevant signal of the zero-crossing is found for either of the ensembles, we cannot probe how the volume affects this property.
To better understand the possible finite volume effect we reproduce the fits with the three models, eqs. 4.22, 4.23 and 4.24, over the same momentum ranges used in the previous analysis of each corresponding model. The results are shown in fig. 4.30 for the three models, and the fit parameters are summarized in table 4.2. In general, the $\chi^{2}$ is lower for the $64^{4}$ lattice due to the smoothness of the data computed from the larger ensemble. Moreover, the position of the possible zero-crossing for both $\Gamma_{1}$ and $\Gamma_{3}$ seems to be shifted towards slightly higher momenta on the $64^{4}$ lattice; however, both estimates of the sign change are compatible within the uncertainty. The form $\Gamma_{2}$ seems to give a lower $p_{0}$ for the $64^{4}$ lattice, but a large uncertainty is associated with the results for momenta below $\sim 0.3\ \mathrm{GeV}$, which hinders the analysis of a possible sign change.
### 4.3 Four gluon vertex
In this section we report on the four gluon correlation function computed from the two ensembles in table 4.1. As discussed in section 3.7, in a lattice simulation we only have access to the full Green's functions. However, the four point correlation function involves, besides the pure four gluon 1PI function, also the contributions from disconnected terms and those associated with the three gluon irreducible diagrams. All these contributions can be removed by a proper choice of kinematics.
Even after discarding these contributions, a lattice simulation returns the four gluon Green function, which combines the corresponding irreducible diagram with external gluon propagators, eq. 3.36. To measure the four point 1PI function, the gluon propagators must then be removed from the full Green's function. However, this operation enhances the fluctuations, especially at large momenta where the propagator becomes small, and adds a further difficulty to the measurement we aim to perform. Due to the increased fluctuations of the pure vertex, we only show the complete correlation function.
Regarding previous investigations of the IR properties of the four gluon vertex, only continuum studies have been conducted [32, 31], which also establish a possible zero-crossing for some form factors. Some qualitative relations may be established between lattice and continuum results. However, these comparisons should be considered with care due to the weak signal conveyed by the lattice four gluon correlation function.
In general, the fluctuations of higher order functions in a Monte-Carlo simulation are larger, and their computation necessarily calls for large ensembles of configurations. To mitigate the statistical fluctuations, in all cases we perform a Z4 average, as done in the previous sections. Unfortunately, although it increases the quality of the Monte-Carlo signal, the Z4 averaging is not sufficient to produce results with small, or even moderately small, statistical errors for the statistics we are using. Certainly, an increase in the number of gauge configurations would allow us to overcome, at least partially, the problem of the statistical fluctuations.
Additionally, only a restricted class of momentum points will be shown, namely the generalized diagonal kinematics. These allow us to reach lower momentum values and carry reduced hypercubic artifacts. However, of the four types of diagonal momenta, only the mixed cases will be shown. The reason is again the effort to increase the signal to noise ratio. On-axis momenta are disregarded because they involve higher hypercubic artifacts and generally larger error bars due to smaller statistics. Fully diagonal kinematics of the form $(n,n,n,n)$, on the other hand, are disregarded because they have a smaller set of distinct $H(4)$ averaging points. Both $(n,n,n,0)$ and $(n,n,0,0)$ retain a good balance of 'non-equivalent' Z4 averaging points while not being strongly affected by $H(4)$ artifacts when compared with on-axis momenta; a minimal sketch of this averaging is given below.
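As a rough illustration of the averaging just described, the sketch below takes the 'Z4 average' to be an average over the $H(4)$ orbit of a momentum, i.e. over coordinate permutations and sign flips; this identification, and the dictionary `data` mapping integer 4-momenta to measured values, are assumptions made for the example.

```python
# Average correlator data over the H(4) orbit of an integer 4-momentum:
# all coordinate permutations and sign flips are lattice-equivalent.
from itertools import permutations, product
import numpy as np

def h4_orbit(n):
    orbit = set()
    for perm in permutations(n):
        for signs in product((1, -1), repeat=4):
            orbit.add(tuple(s * c for s, c in zip(signs, perm)))
    return orbit

def z4_average(data, n):
    vals = [data[m] for m in h4_orbit(n) if m in data]
    return np.mean(vals), np.std(vals) / np.sqrt(len(vals))
```

Mixed diagonals such as $(n,n,0,0)$ have many distinct orbit members, which is what makes their average effective at improving the signal.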
As a starting point, we are interested only in obtaining a proper signal of the four gluon correlation function; a detailed analysis of the infrared behaviour is precluded by the uncertainty associated with the data. The $64^{4}$ lattice with 2000 configurations provides a much better result and will be the one analysed. The $80^{4}$ lattice with 550 configurations gives access to lower momenta, but substantial fluctuations in the data inhibit its analysis. For the latter, only points above a given momentum will be shown and compared with the results from the larger ensemble.
#### 4.3.1 Four gluon correlation function
Figure 4.31: Four gluon vertex form factor $V_{\Gamma^{(0)}}(p^{2})$ with external propagators from the $\beta=6.0$, $64^{4}$ lattice. Only mixed diagonal configurations are considered. The inset shows a restricted momentum range to better visualize the mid-momentum region. All data was rescaled by a factor of 1000.
Figure 4.32: Four gluon vertex form factor $V_{G}(p^{2})$ with external propagators from the $\beta=6.0$, $64^{4}$ lattice. Only mixed diagonal configurations are considered. The inset shows a restricted momentum range to better visualize the mid-momentum region. All data was rescaled by a factor of 1000.
We now show the results for the four gluon correlation function from the $\beta=6.0$, $64^{4}$ and $80^{4}$ ensembles. As introduced in section 3.7, for the configuration $(p,p,p,-3p)$ only two form factors can be extracted, $V_{\Gamma^{(0)}}(p^{2})$ and $V_{G}(p^{2})$, associated with the tree-level and the $G$ tensor, respectively.
For this particular kinematics, the results for the $64^{4}$ lattice are shown in figs. 4.31 and 4.32, with $V_{\Gamma^{(0)}}(p^{2})$ and $V_{G}(p^{2})$ in the first and second figure, respectively. Only the two mixed diagonal configurations are shown. Notice that these are not the pure, dimensionless form factors, due to the presence of the external propagators, i.e. we are using
$V_{i}(p^{2})=V^{\prime}_{i}(p^{2})\,(D(p^{2}))^{3}D(9p^{2}),$ (4.25)
where $V^{\prime}_{i}(p^{2})$ corresponds to the pure vertex form factor, as defined in section 3.7.
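A minimal sketch of the amputation implied by eq. 4.25 is given below; `D` stands for a hypothetical interpolation of the measured gluon propagator. It also makes explicit why the division enhances fluctuations where $D(p^{2})$ is small.

```python
# Recover the pure form factor from eq. 4.25 by dividing out the
# external legs D(p^2)^3 D(9 p^2) of the (p, p, p, -3p) kinematics.
import numpy as np

def amputate(V, p2, D):
    """V: measured form factor with legs; D: callable gluon propagator."""
    legs = D(p2)**3 * D(9.0 * p2)
    return V / legs     # noisy where D is small, i.e. at large momenta
```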
A smaller inset is shown in each figure with a narrower range to facilitate the analysis of the behaviour of the function in the mid-momentum range. Both sets of data, $(n,n,0,0)$ and $(n,n,n,0)$, seem to follow a similar curve, although with enlarged statistical fluctuations in the IR region. The fact that two sets of non-equivalent kinematics produce similar curves is evidence that this result is a proper signal of the four gluon correlation function.
$V_{\Gamma^{(0)}}(p^{2})$, shown in fig. 4.31, seems to oscillate quite smoothly, reaching a minimum near $1.1\ \mathrm{GeV}$. It subsequently grows at low momenta and seems to approach a finite value near the origin. However, the considerable uncertainty associated with the first two points hinders the interpretation of the IR behaviour.
The values of $V_{G}(p^{2})$ in fig. 4.32 carry a larger uncertainty than $V_{\Gamma^{(0)}}(p^{2})$. Nonetheless, both kinematics seem to follow the same behaviour, suggesting a local maximum at $p\sim 1\ \mathrm{GeV}$ (see the inset), followed by a minimum around $p=0.6\ \mathrm{GeV}$ with a possible growth at low momenta. Notice that the uncertainty involved does not allow us to properly confirm this.
Comparing both form factors in fig. 4.33 for the same momentum configurations, we notice that the contribution from $V_{\Gamma^{(0)}}(p^{2})$ is slightly larger than that from $V_{G}(p^{2})$ in the range $0.5-1.5\ \mathrm{GeV}$. This possible difference in the weight of the contribution from each structure was also explored in [32], with the results following the same pattern. Again, the large uncertainty affecting the lattice results allows only a qualitative and limited comparison.
Figure 4.33: Four gluon vertex form factors $V_{\Gamma^{(0)}}(p^{2})$ and $V_{G}(p^{2})$ with external propagators from the $\beta=6.0$, $64^{4}$ lattice. Only mixed diagonal configurations are shown, and the lowest momentum points are disregarded due to large fluctuations.
Figure 4.34: Four gluon vertex form factor $V_{\Gamma^{(0)}}(p^{2})$ with external propagators from the $\beta=6.0$, $80^{4}$ (red) and $64^{4}$ (green) ensembles. Only mixed diagonal configurations are considered, and the lowest momentum points were disregarded. All data was rescaled by a factor of 1000.
Figure 4.35: Four gluon vertex form factor $V_{G}(p^{2})$ with external propagators from the $\beta=6.0$, $80^{4}$ (red) and $64^{4}$ (green) ensembles. Only mixed diagonal configurations are considered, and the lowest momentum points were disregarded. All data was rescaled by a factor of 1000.
Further evidence that this result is a proper signal of the four gluon correlation function comes from the comparison with the $80^{4}$ lattice. In figs. 4.34 and 4.35, both $V_{\Gamma^{(0)}}(p^{2})$ and $V_{G}(p^{2})$ are shown for the mixed diagonal configurations $(n,n,0,0)$ and $(n,n,n,0)$ and for both lattices. A smaller range of momentum was considered, discarding the two lowest momenta (these show large fluctuations, mainly for the larger lattice).
The form factor $V_{\Gamma^{(0)}}(p^{2})$ is compared for both lattices in fig. 4.34. Looking only at the $80^{4}$ data, we notice a structure possibly similar to that found in fig. 4.31 (see the inset). The $80^{4}$ results suggest a decrease towards negative values and a subsequent growth at lower momenta. However, a discrepant point appears around $p=0.8\ \mathrm{GeV}$, and the errors associated with the data are much larger than those from the $64^{4}$ lattice. In addition, comparing the two lattices in fig. 4.34, we notice a shift in the momentum scales at which these structures appear: the possible minimum occurs at higher momentum on the $64^{4}$ lattice. Although the general structure of the curve seems to display the same oscillation, the shift in the data and the large uncertainty of the $80^{4}$ results could be a sign of inconsistent data and restrain us from making further claims.
The data for $V_{G}(p^{2})$ in fig. 4.35 also suggests an agreement between the results from both lattices. However, although the curves traced by the two sets of data are compatible and share the same general structure within the uncertainty, the error bars associated with the $80^{4}$ lattice are large, and thus this comparison is unreliable. In this case we do not observe a shift in the structure of the curve (notice that the momentum points do not match exactly due to the different lattice sizes $N$; the naive lattice momentum is defined as $ap=2\pi n/N$ – see the sketch below). Both the local maximum, around $p=0.9\ \mathrm{GeV}$, and the minimum near $p=0.6\ \mathrm{GeV}$ seem to occur at the same scales in both ensembles. However, while the minimum for the $64^{4}$ lattice seems to take a negative value, the same cannot be claimed for the larger lattice due to the large error bars.
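For reference, the momentum definitions involved in this comparison can be written compactly as follows; the improved momentum used here, $a\hat{p}=2\sin(\pi n/N)$, is the standard choice and is assumed to coincide with the one used in the text.

```python
# Naive and improved lattice momenta for an integer mode number n on a
# lattice of size N, converted to GeV with the inverse spacing a^{-1}.
import numpy as np

def naive_momentum(n, N, a_inv_gev):
    return 2.0 * np.pi * np.asarray(n) / N * a_inv_gev      # a p = 2 pi n / N

def improved_momentum(n, N, a_inv_gev):
    return 2.0 * np.sin(np.pi * np.asarray(n) / N) * a_inv_gev

def magnitude(p_components):
    # e.g. the magnitude of a mixed-diagonal momentum (n, n, 0, 0)
    return float(np.sqrt(np.sum(np.asarray(p_components)**2)))
```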
Despite the large uncertainty, it is remarkable that two distinct lattices seem to provide the same general behaviour for the form factors, with similar structures for the curves. This is evidence that we are indeed computing a valid (albeit weak) signal of the four gluon correlation function. Nonetheless, a significant increase in the precision of the signal is required to establish reliable conclusions.
##### Comparison with continuum results
Figure 4.36: Original data from [31] for the DSE computation of the pure four gluon vertex form factor associated with the tree-level tensor, $V^{\prime}_{\Gamma^{(0)}}(p^{2})$. The 'total' result in black is the relevant structure for comparison.

Figure 4.37: Original data from [31] for the DSE computation of the pure four gluon vertex form factor associated with the $G$ tensor, $V^{\prime}_{G}(p^{2})$. The 'total' result in black is the relevant structure for comparison.
Despite the large statistical fluctuations, we attempt to compare our results with previous continuum predictions – currently the only available source of comparison. For this we use only the smaller $64^{4}$ lattice, which has higher precision.
The four gluon vertex was studied in a DSE analysis employing the same tensor basis and kinematic configuration [31], where it was argued that only the form factor $V_{G}(p^{2})$ shows a possibly divergent behaviour in the IR, while $V_{\Gamma^{(0)}}(p^{2})$ remains finite. The original data for the pure vertex form factors $V^{\prime}_{i}(p^{2})$ from this investigation is shown in figs. 4.36 and 4.37. We are interested in comparing our results with the black curves, which represent the complete contribution within the truncation scheme (the remaining curves are the individual contributions from one-loop diagrams in the DSE formalism).
Although on the lattice we can only access the complete vertex with reasonable statistical accuracy, we can establish some general comparisons with the continuum results by considering the smooth and practically constant behaviour of the gluon propagators in the IR. In addition to this approximation, both the large uncertainty associated with the lattice results and the approximations involved in the DSE approach call for careful conclusions from the following comparisons.
Comparing the results for the tree-level form factor in figs. 4.31 and 4.36, we notice an overall shift between the functions: the DSE curve settles at unit values for large momenta, while the lattice data seems to approach zero. Notice, however, that this could be an effect of the external propagators. Nonetheless, the general structure of the lattice data seems to follow the behaviour of the continuum prediction within the large uncertainty. Namely, the pattern of oscillations is similar, showing what seems to be a local minimum at $p\sim 1\ \mathrm{GeV}$, followed by a sign change to positive values below $1\ \mathrm{GeV}$. At smaller momenta the data is less reliable due to the larger uncertainty, but it seems to approach a finite IR value, which again follows the continuum prediction.
Another DSE study, using the tree-level tensor only [32], obtained a result similar to that in fig. 4.36. Due to the orthogonality between the tensors $\Gamma^{(0)}$ and $G$, eq. 3.41, the results assuming only the tree-level tensor for the basis should display the same general behaviour as the one found in fig. 4.31. Therefore, the same qualitative structure in $V_{\Gamma^{(0)}}(p^{2})$ serves as a further connection between lattice and continuum results.
The results for $V_{G}(p^{2})$ in figs. 4.32 and 4.37 are also compatible within the large uncertainty of the lattice results, and in this case no shift is observed between continuum and lattice data. The form factor computed on the lattice decreases to negative values at $p\sim 0.6\ \mathrm{GeV}$ in the $64^{4}$ ensemble, which is also noticeable in the DSE result around the same momentum scales. At lower momenta the data suggests a possible sign change and subsequent IR growth, again compatible with previous continuum results. Notice, however, that the error bars at low momenta provide limited confidence in these observations. Also, the finite volume of the lattice does not allow us to reach sufficiently low momenta to better evaluate the IR behaviour.
## Conclusion
In this thesis we computed and analysed three different gluon correlation
functions of the pure Yang-Mills theory in Landau gauge using the lattice
formalism of QCD. Two lattices were considered with the same lattice spacing
and different physical volumes – see table 4.1.
In the first part of the work we investigated the gluon propagator to understand how the use of continuum tensor bases affects the knowledge of lattice-computed tensors in a 4-dimensional theory. To date, only 2- and 3-dimensional studies had been conducted on this topic [13, 14]. To this end we constructed suitable lattice tensor bases respecting the corresponding lattice symmetries.
Continuum relations among lattice and continuum form factors were identified and evaluated for every tensor structure. We found that, within the uncertainty, the continuum relations are satisfied over a large range of momenta, which seems to indicate that the lattice data is compatible with the Slavnov-Taylor identity. Furthermore, to probe the quality of our results we used the data from a precise lattice computation [73] as a comparison. The results obtained with the various bases match this benchmark result, although with increased fluctuations for the larger bases.
The completeness of each tensor basis in describing the lattice tensor $D_{\mu\nu}(p)$ was studied. Specific kinematics were considered independently for a detailed analysis. We found that, in general, the most complete bases (with a larger number of form factors) provide a better reproduction of the original lattice tensor, and that the use of a continuum tensor basis for the propagator leads to a non-negligible loss of information about the lattice correlation function. The orthogonality of the propagator using lattice tensors was also studied; it serves as a complementary analysis of the completeness of each basis.
The analysis of the reconstruction for specific kinematics hinted at the existence of special points for which the continuum basis matches the description from the lattice bases. These are single-scale momenta, which were then investigated exclusively. Although for these points the continuum and lattice bases provide the same quality in the description of the tensor, the results are substantially better for configurations closer to the diagonal of the lattice. Moreover, the continuum relations are exactly satisfied by these kinematics and constrain the number of independent form factors describing the tensor. This is in turn related to the similar completeness of the lattice and continuum bases.
With this work we provide additional validation for the traditional method of computing vertex functions using points near the diagonal of the lattice. We conclude that diagonal data reduces hypercubic artifacts not only in the form factors (lattice scalars) but also in the tensor structures that form the basis. This is noticeable in the good reconstruction results obtained for diagonal configurations. We also confirm that, in general, the use of the improved momentum provides a better description of lattice objects than the naively discretized lattice momentum. In fact, this change of variables also improves the fulfilment of both the continuum and the orthogonality conditions, as well as the match with the benchmark result.
Although we did not consider a fully complete tensor basis to describe the gluon propagator, we found that an increase in the number of degrees of freedom is accompanied by a considerable rise of the statistical fluctuations in the form factors. This restricts the number of independent tensor structures that can be used with limited statistics.
The effect of a finite lattice volume was also explored. We found that the generalized diagonal configurations seem to be insensitive to the finite volume as far as their reconstruction is concerned. For the remaining configurations we observed that, in general, the larger lattice provides lower reconstruction ratios for both continuum and lattice bases.
The finiteness of the space was not taken into account in the construction of the lattice tensors, and the search for proper bases respecting the symmetries as well as the size of the lattice should improve the description of the propagator. Moreover, mixed terms involving both improved and lattice momenta could be considered, as well as terms that vanish in the continuum and depend explicitly on the lattice spacing. Similarly, the behaviour of different tensor bases with varying spacings could be explored. Finally, proper tensor structures respecting lattice symmetries for higher order correlation functions are yet to be constructed, and would allow us to probe how the use of continuum bases affects their description.
In the second part we analysed the three gluon correlation function from the $80^{4}$ lattice. We began by showing that the use of the complete set of group transformations (Z4 average) provides an improved signal to noise ratio, which is crucial for the computation of higher order functions. A comparison with the perturbative prediction at high momenta was performed for both the two and three gluon correlation functions, and the agreement was confirmed by fitting both curves at sufficiently high momenta.
We analysed the IR behaviour of the three gluon 1PI function. Two different hypotheses were considered, namely a possible zero-crossing at low momenta followed by an IR divergence, or an IR-finite vertex. The divergence is interpreted through the concept of dynamical mass generation for the gluon, which acquires a momentum-dependent mass, whereas the ghost is supposed to remain massless, thus inducing a possible divergence. This hypothesis is advocated by various continuum studies, but it is highly dependent on the approximations employed. Conversely, an analytic investigation of the gluon and ghost two point functions suggests a possible dynamical ghost mass, which should regularize the vertex and thus remove the IR divergence [29].
Since the IR data provides no clear evidence of the sign change, let alone of the possible divergence at lower momenta, we analysed the behaviour of the data using three different functional models. The first form contains an IR-unprotected logarithm, eq. 4.22, which besides the zero-crossing also allows for a subsequent divergence. Both the complete set of data and the points surviving momentum cuts above $1\ \mathrm{GeV}$ provide good quality fits. The results of the fits for various ranges indicate a zero-crossing around $0.15-0.25\ \mathrm{GeV}$ for this functional form. Notice, however, that while we attempt to model the zero-crossing, the divergence is not supported by the lattice data, hence predictions for this property are less reliable.
The second functional form, eq. 4.23, represents the case of a non-vanishing dynamical ghost mass, which enters the logarithm and removes the IR divergence while still allowing for a sign change. In this case the complete data provides a good fit over the range of momenta considered; it is consistent with a positive IR value for the vertex and an absent sign change. On the other hand, although the curve also matches the data after momentum cuts, it is associated with a larger uncertainty. In this case a sign change is possible below $0.4\ \mathrm{GeV}$, but it is not guaranteed by the curve.
The last model, eq. 4.24, is a power law ansatz whose purpose is to probe the degree of the possible divergence at low momenta, and thus the functional form is restricted to the low-momentum region. This is visible in the decline of the fit quality for momenta above $1\ \mathrm{GeV}$ and in the poor match between the curve and the lattice data in this region. On the other hand, for momenta below $\sim 1\ \mathrm{GeV}$ the curve shows good agreement with the lattice data. However, since the possible divergence lacks confirmation from the lattice data, no reliable conclusions can be established.
Although we do not yet have sufficiently precise IR data to validate the zero-crossing, we tried to establish a momentum range for the sign change using the analysis of the three models. The possible divergence, however, is currently out of our grasp due to the lack of data in the very deep IR. The search for this property calls for a larger lattice, but obtaining sufficient statistics with a large lattice requires a substantial increase in computational resources. This difficulty could be overcome by using large ensembles of high-volume but coarser lattices, which should be possible given the seemingly negligible effect of the discretization on this vertex in the infrared region, as found in [25].
As for a possible finite IR value of the vertex, the data seems compatible with this model, despite the large uncertainty hindering a more detailed analysis. A better description of the deep infrared region is necessary for a more accurate study.
To conclude the study of the three gluon vertex, a comparison between the $80^{4}$ and $64^{4}$ lattice data was conducted to search for possible finite volume effects. The results from the $80^{4}$ lattice seem to be enhanced relative to those from the $64^{4}$ lattice, creating a shift for momenta below $\sim 1.4\ \mathrm{GeV}$. This can be partially explained by previous investigations of the gluon propagator, which was found to decrease in the IR with increasing volume, thus inducing an enhancement of the three gluon vertex when divided by the propagators. We also compared the predictions of the three models in eqs. 4.22, 4.23 and 4.24 with the results from the $64^{4}$ lattice. While the curves are modified due to the shift in the data, the prediction for the zero-crossing remarkably remains unchanged within the uncertainty. This is compatible with the finite volume effect amounting to a multiplicative factor, such as the one induced by the division by the external propagators. However, in order to properly understand the effect, a detailed analysis of both the complete and the pure three gluon functions is necessary for different lattice volumes. For the second model, eq. 4.23, the fit with the $64^{4}$ data follows the same behaviour but with increased precision. The sign change seems to be predicted at lower momenta; however, this is not unambiguously confirmed within the error bars.
While we explored a single kinematic configuration, additional configurations could be considered to analyse the IR behaviour. The use of different lattice volumes for other kinematics would also improve our knowledge of the possible finite volume effect. Another extension of this work concerns the large statistical fluctuations affecting the high-momentum region of the three gluon 1PI function. However, as discussed in section 4.2, this is not solvable by an increase in the number of gauge-field configurations, and thus other alternatives should be envisioned.
For the final topic we computed the four gluon correlation function. Being a higher order function, it is associated with larger statistical fluctuations, which hinder the attainment of a discernible signal. In fact, the current precision only allows the study of the complete correlation function, while the 1PI function carries large fluctuations. Using a suitable kinematic configuration we isolated the contribution of the pure four gluon 1PI function with external propagators. In addition to the choice of kinematics, an approximation of the Lorentz tensor basis reduced the number of possible structures to three. However, for the kinematics $(p,p,p,-3p)$ and the approximation employed, only two form factors can be extracted.
To improve the signal quality, we analysed the correlation function only for the configurations $(n,n,n,0)$ and $(n,n,0,0)$. The points from both kinematics seem to define a single, smooth curve, except at low momenta due to fluctuations and large error bars. Additionally, the results from the two ensembles trace seemingly matching curves within the uncertainty, with the exception of some momentum points in the $V_{\Gamma^{(0)}}(p^{2})$ form factor, which show some discrepancies for the $80^{4}$ data. Notice, however, that the $80^{4}$ lattice provides reduced statistics, and the comparison is to be taken with care.
To complete the analysis we compared the lattice results against the pure four gluon vertex from previous continuum investigations [31, 32]. This is a very delicate comparison, since the lattice four gluon 1PI function cannot currently be computed. Hence, only a very qualitative connection between the continuum and lattice curves was established. Nonetheless, this should be a good indication of the quality of the signal obtained.
Although the results are evidence that we are indeed glimpsing the four gluon correlation function, the statistical significance of the signal is still very small, and the signal must be improved in order to properly analyse the vertex. From the previous analysis, the main structures observed in the form factors should be visible within a reasonable range of momenta achievable with our current lattices. Thus, an increase in statistics for the current ensembles should help provide a clearer curve. Moreover, the pure 1PI form factors can only be computed accurately with increased precision.
## Bibliography
* [1] Guilherme T.R. Catumba, Orlando Oliveira and Paulo J. Silva “$H(4)$ tensor representations for the lattice Landau gauge gluon propagator and the estimation of lattice artefacts”, 2021 arXiv:2101.04978 [hep-lat]
* [2] F. Halzen, A.D. Martin and John Wiley & Sons “Quarks and Leptons: An Introductory Course in Modern Particle Physics” Wiley, 1984
* [3] “Symposium on Lattice Field Theory - 2019” URL: https://pos.sissa.it/363/
* [4] S. Aoki et al. “Light hadron spectrum and quark masses from quenched lattice QCD” In _Phys. Rev. D_ American Physical Society (APS), 2003 DOI: 10.1103/physrevd.67.034503
* [5] D. Bailin and A. Love “Introduction to Gauge Field Theory Revised Edition” Taylor & Francis, 1993
* [6] Michael E. Peskin and Daniel V. Schroeder “An Introduction to quantum field theory” Addison-Wesley, 1995
* [7] Taichiro Kugo and Izumi Ojima “Local Covariant Operator Formalism of Non-Abelian Gauge Theories and Quark Confinement Problem” In _Progress of Theoretical Physics Supplement_ 66, 1979 DOI: 10.1143/PTPS.66.1
* [8] Daniel Zwanziger “Nonperturbative Faddeev-Popov formula and the infrared limit of QCD” In _Phys. Rev. D_ 69.1 American Physical Society (APS), 2004 DOI: 10.1103/physrevd.69.016002
* [9] Gunnar S. Bali and Klaus Schilling “Running coupling and the Lambda parameter from SU(3) lattice simulations” In _Phys. Rev. D_ , 1993 DOI: 10.1103/PhysRevD.47.661
* [10] C. Parrinello “Exploratory study of the three-gluon vertex on the lattice” In _Phys. Rev. D_ 50 American Physical Society, 1994 DOI: 10.1103/PhysRevD.50.R4247
* [11] Gernot Eichmann et al. “Baryons as relativistic three-quark bound states” In _Progress in Particle and Nuclear Physics_ 91 Elsevier BV, 2016 DOI: 10.1016/j.ppnp.2016.07.001
* [12] Christian S Fischer “Infrared properties of QCD from Dyson–Schwinger equations” In _Journal of Physics G: Nuclear and Particle Physics_ 32.8 IOP Publishing, 2006 DOI: 10.1088/0954-3899/32/8/r02
* [13] Milan Vujinovic “Tensor representations of lattice vertices from hypercubic symmetry”, 2019 eprint: arXiv:1905.00651
* [14] Milan Vujinović and Tereza Mendes “Probing the tensor structure of lattice three-gluon vertex in Landau gauge” In _Phys. Rev. D_ American Physical Society (APS), 2019 DOI: 10.1103/PhysRevD.99.034501
* [15] A. C. Aguilar, D. Binosi, D. Ibañez and J. Papavassiliou “Effects of divergent ghost loops on the Green’s functions of QCD” In _Phys. Rev. D_ American Physical Society (APS), 2014 DOI: 10.1103/physrevd.89.085008
* [16] Daniele Binosi and Joannis Papavassiliou “Coupled dynamics in gluon mass generation and the impact of the three-gluon vertex” In _Phys. Rev. D_ American Physical Society, 2018 DOI: 10.1103/PhysRevD.97.054029
* [17] Gernot Eichmann, Richard Williams, Reinhard Alkofer and Milan Vujinovic “Three-gluon vertex in Landau gauge” In _Phys. Rev. D_ 89 American Physical Society, 2014 DOI: 10.1103/PhysRevD.89.105014
* [18] Adrian Lorenz Blum, Markus Q. Huber, Mario Mitter and Lorenz Smekal “Gluonic three-point correlations in pure Landau gauge QCD” In _Phys. Rev. D_ 89 American Physical Society, 2014 DOI: 10.1103/PhysRevD.89.061703
* [19] Marcela Peláez, Matthieu Tissier and Nicolás Wschebor “Three-point correlation functions in Yang-Mills theory” In _Phys. Rev. D_ 88 American Physical Society, 2013 DOI: 10.1103/PhysRevD.88.125003
* [20] Davide R. Campagnari and Hugo Reinhardt “Non-Gaussian wave functionals in Coulomb gauge Yang-Mills theory” In _Phys. Rev. D_ 82 American Physical Society, 2010 DOI: 10.1103/PhysRevD.82.105021
* [21] Anthony G. Duarte, Orlando Oliveira and Paulo J. Silva “Further evidence for zero crossing on the three gluon vertex” In _Phys. Rev. D_ American Physical Society, 2016 DOI: 10.1103/PhysRevD.94.074502
* [22] A. Athenodorou et al. “On the zero crossing of the three-gluon vertex” In _Physics Letters B_ , 2016 DOI: 10.1016/j.physletb.2016.08.065
* [23] Ph. Boucaud, F. De Soto, J. Rodríguez-Quintero and S. Zafeiropoulos “Refining the detection of the zero crossing for the three-gluon vertex in symmetric and asymmetric momentum subtraction schemes” In _Phys. Rev. D_ 95 American Physical Society, 2017 DOI: 10.1103/PhysRevD.95.114503
* [24] Attilio Cucchieri, Axel Maas and Tereza Mendes “Three-point vertices in Landau-gauge Yang-Mills theory” In _Phys. Rev. D_ American Physical Society (APS), 2008 DOI: 10.1103/physrevd.77.094510
* [25] Axel Maas and Milan Vujinović “More on the three-gluon vertex in SU(2) Yang-Mills theory in three and four dimensions”, 2020 eprint: arXiv:2006.08248
* [26] Anton K. Cyrol et al. “Landau gauge Yang-Mills correlation functions” In _Phys. Rev. D_ 94 American Physical Society, 2016 DOI: 10.1103/PhysRevD.94.054005
* [27] Richard Williams, Christian S. Fischer and Walter Heupel “Light mesons in QCD and unquenching effects from the 3PI effective action” In _Phys. Rev. D_ 93 American Physical Society, 2016 DOI: 10.1103/PhysRevD.93.034026
* [28] A. C. Aguilar et al. “Gluon propagator and three-gluon vertex with dynamical quarks” In _The European Physical Journal C_ Springer Science+Business Media LLC, 2020 DOI: 10.1140/epjc/s10052-020-7741-0
* [29] Alexandre F. Falcão, Orlando Oliveira and Paulo J. Silva “The analytic structure of the lattice Landau gauge gluon and ghost propagators”, 2020 arXiv:2008.02614 [hep-lat]
* [30] Joannis Papavassiliou and David Ibanez “The effective gluon mass and its dynamical equation”, 2013 arXiv:1301.4061 [hep-ph]
* [31] D. Binosi, D. Ibañez and J. Papavassiliou “Nonperturbative study of the four gluon vertex” In _Journal of High Energy Physics_ 2014 Springer Science+Business Media LLC, 2014 DOI: 10.1007/jhep09(2014)059
* [32] Anton K. Cyrol, Markus Q. Huber and Lorenz von Smekal “A Dyson–Schwinger study of the four-gluon vertex” In _The European Physical Journal C_ 75 Springer Science+Business Media LLC, 2015 DOI: 10.1140/epjc/s10052-015-3312-1
* [33] P. Ramond “Field Theory: A Modern Primer” Avalon Publishing, 1997
* [34] M. Srednicki “Quantum field theory” Cambridge University Press, 2007
* [35] T. Muta “Foundations of Quantum Chromodynamics: An Introduction to Perturbative Methods in Gauge Theories”, World Scientific lecture notes in physics World Scientific, 2010
* [36] Matthew D. Schwartz “Quantum Field Theory and the Standard Model” Cambridge University Press, 2014
* [37] L.. Faddeev and V.. Popov “Feynman Diagrams for the Yang-Mills Field” In _Phys. Lett._ 25B, 1967 DOI: 10.1016/0370-2693(67)90067-6
* [38] Christof Gattringer and Christian B. Lang “Quantum chromodynamics on the lattice”, 2010
* [39] Steven Weinberg “The Quantum Theory of Fields” Cambridge University Press, 1995 DOI: 10.1017/CBO9781139644167
* [40] L.H. Ryder “Quantum Field Theory” Cambridge University Press, 1996
* [41] A.A. Slavnov “Ward Identities in Gauge Theories” In _Theor. Math. Phys._ 10, 1972 DOI: 10.1007/BF01090719
* [42] J. Zinn-Justin “Quantum Field Theory and Critical Phenomena”, International series of monographs on physics Clarendon Press, 2002
* [43] Heinz J Rothe “Lattice Gauge Theories” WORLD SCIENTIFIC, 2012
* [44] Kenneth G. Wilson “Confinement of quarks” In _Phys. Rev. D_ American Physical Society, 1974 DOI: 10.1103/PhysRevD.10.2445
* [45] P. Hasenfratz “Prospects for perfect actions” Proceedings of the XVth International Symposium on Lattice Field Theory In _Nuclear Physics B - Proceedings Supplements_ 63.1, 1998 DOI: 10.1016/S0920-5632(97)00696-8
* [46] Stefano Capitani “Lattice perturbation theory” In _Physics Reports_ 382, 2003 DOI: 10.1016/S0370-1573(03)00211-4
* [47] S. Elitzur “Impossibility of spontaneously breaking local symmetries” In _Phys. Rev. D_ American Physical Society, 1975 DOI: 10.1103/PhysRevD.12.3978
* [48] V.N. Gribov “Quantization of Nonabelian Gauge Theories” In _Nucl. Phys. B_ 139, 1978, pp. 1 DOI: 10.1016/0550-3213(78)90175-X
* [49] T.P. Killingback “The Gribov Ambiguity in Gauge Theories on the 4 Torus” In _Phys. Lett. B_ 138, 1984, pp. 87–90 DOI: 10.1016/0370-2693(84)91878-1
* [50] Daniel Zwanziger “Fundamental modular region, Boltzmann factor, and area law in Lattice gauge theory” In _Nuclear Physics B - Proceedings Supplements_ DOI: 10.1016/0920-5632(94)90343-3
* [51] C. T. H. Davies et al. “Fourier acceleration in lattice gauge theories. I. Landau gauge fixing” In _Phys. Rev. D_ 37 American Physical Society, 1988 DOI: 10.1103/PhysRevD.37.1581
* [52] Robert G. Edwards and Bálint Joó “The Chroma Software System for Lattice QCD” LATTICE 2004 In _Nuclear Physics B - Proceedings Supplements_ 140, 2005 DOI: 10.1016/j.nuclphysbps.2004.11.254
* [53] Michael Pippig “PFFT: An Extension of FFTW to Massively Parallel Architectures” In _SIAM Journal on Scientific Computing_ 35.3, 2013 DOI: 10.1137/120885887
* [54] L. GIUSTI et al. “Problems on Lattice gauge fixing” In _International Journal of Modern Physics A_ 16.21 World Scientific Pub Co Pte Lt, 2001 DOI: 10.1142/s0217751x01004281
* [55] P.J. Silva and O. Oliveira “Gribov copies, lattice QCD and the gluon propagator” In _Nuclear Physics B_ 690.1-2 Elsevier BV, 2004 DOI: 10.1016/j.nuclphysb.2004.04.020
* [56] Attilio Cucchieri and Tereza Mendes “1) The influence of Gribov copies on gluon and ghost propagators in Landau gauge and 2) A new implementation of the fourier acceleration method” Proceedings of the XVth International Symposium on Lattice Field Theory In _Nuclear Physics B - Proceedings Supplements_ 63.1, 1998 DOI: 10.1016/S0920-5632(97)00917-1
* [57] Colin Morningstar “The Monte Carlo method in quantum field theory”, 2007 arXiv:hep-lat/0702020 [hep-lat]
* [58] B. Efron and R.J. Tibshirani “An Introduction to the Bootstrap”, Chapman & Hall/CRC Monographs on Statistics & Applied Probability Taylor & Francis, 1994
* [59] M. Hamermesh “Group Theory and Its Application to Physical Problems”, Dover Books on Physics Dover Publications, 2012
* [60] Hermann Weyl “The Classical Groups: Their Invariants and Representations” Princeton University Press, 1966
* [61] Rajan Gupta “Introduction to Lattice QCD”, 1998 eprint: hep-lat/9807028
* [62] Orlando Oliveira and Paulo J. Silva “Lattice Landau gauge gluon propagator: Lattice spacing and volume dependence” In _Phys. Rev. D_ American Physical Society (APS), 2012 DOI: 10.1103/PhysRevD.86.114513
* [63] Derek B. Leinweber, Jon Ivar Skullerud, Anthony G. Williams and Claudio Parrinello “Gluon propagator in the infrared region” In _Phys. Rev. D_ 58 American Physical Society, 1998 DOI: 10.1103/PhysRevD.58.031501
* [64] Feliciano de Soto and Claude Roiesnel “On the reduction of hypercubic lattice artifacts” In _Journal of High Energy Physics_ Springer Science+Business Media LLC, 2007 DOI: 10.1088/1126-6708/2007/09/007
* [65] Damir Becirević et al. “Asymptotic behavior of the gluon propagator from lattice QCD”, 1999 DOI: 10.1103/PhysRevD.60.094509
* [66] Orlando Oliveira, Paulo J. Silva, Jon-Ivar Skullerud and André Sternbeck “Quark propagator with two flavors of O(a)-improved Wilson fermions” In _Phys. Rev. D_ 99.9 American Physical Society (APS), 2019 DOI: 10.1103/physrevd.99.094506
* [67] Ph. Boucaud et al. “Quark propagator and vertex: systematic corrections of hypercubic artifacts from lattice simulations” In _Physics Letters B_ 575.3-4 Elsevier BV, 2003 DOI: 10.1016/j.physletb.2003.08.065
* [68] A.L. Blum, R. Alkofer, M.Q. Huber and A. Windisch “Unquenching the Three-gluon Vertex: A Status Report” In _Acta Physica Polonica B Proceedings Supplement_ 8.2 Jagiellonian University, 2015 DOI: 10.5506/aphyspolbsupp.8.321
* [69] N. V. Smolyakov “Furry theorem for non-abelian gauge Lagrangians” In _Theoretical and Mathematical Physics_ 50, 1982 DOI: 10.1007/BF01016449
* [70] James S. Ball and Ting-Wai Chiu “Analytic properties of the vertex function in gauge theories. II” In _Phys. Rev. D_ 22, 1980 DOI: 10.1103/PhysRevD.22.2550
* [71] Markus Q. Huber “Nonperturbative properties of Yang–Mills theories” In _Physics Reports_ Elsevier BV, 2020 DOI: 10.1016/j.physrep.2020.04.004
* [72] J.. Gracey “Symmetric point quartic gluon vertex and momentum subtraction” In _Phys. Rev. D_ 90.2 American Physical Society (APS), 2014 DOI: 10.1103/physrevd.90.025011
* [73] David Dudal, Orlando Oliveira and Paulo J. Silva “High precision statistical Landau gauge lattice gluon propagator computation vs. the Gribov–Zwanziger approach” In _Annals of Physics_ 397 Elsevier BV, 2018 DOI: 10.1016/j.aop.2018.08.019
* [74] D. Binosi, D. Ibañez and J. Papavassiliou “All-order equation of the effective gluon mass” In _Phys. Rev. D_ 86 American Physical Society (APS), 2012 DOI: 10.1103/physrevd.86.085033
* [75] John M. Cornwall “Dynamical mass generation in continuum quantum chromodynamics” In _Phys. Rev. D_ 26 American Physical Society, 1982 DOI: 10.1103/PhysRevD.26.1453
* [76] Anthony G. Duarte, Orlando Oliveira and Paulo J. Silva “Lattice gluon and ghost propagators and the strong coupling in pure SU(3) Yang-Mills theory: Finite lattice spacing and volume effects” In _Phys. Rev. D_ 94 American Physical Society (APS), 2016 DOI: 10.1103/physrevd.94.014502
* [77] Attilio Cucchieri, Axel Maas and Tereza Mendes “Exploratory study of three-point Green’s functions in Landau-gauge Yang-Mills theory” In _Phys. Rev. D_ 74 American Physical Society, 2006 DOI: 10.1103/PhysRevD.74.014503
* [78] Attilio Cucchieri, Axel Maas and Tereza Mendes “Three-point vertices in Landau-gauge Yang-Mills theory” In _Phys. Rev. D_ 77 American Physical Society, 2008 DOI: 10.1103/PhysRevD.77.094510
* [79] Thomas Williams, Colin Kelley and many others “Gnuplot 5.2: an interactive plotting program”, 2019
* [80] Andre Sternbeck et al. “Triple-gluon and quark-gluon vertex from lattice QCD in Landau gauge” In _PoS_ LATTICE2016, 2017 DOI: 10.22323/1.256.0349
## Appendix A $SU(N)$ generators and identities
$SU(N)$ is the special unitary group of degree $N$ whose elements $U$ are
$N\times N$ unitary matrices, $U^{\dagger}U=\mathds{1}$, satisfying
$\det(U)=1$. It is a Lie group, with its elements being continuously generated
by real parameters $\theta^{a}\in\mathds{R}$. Each element can be written as
$U=e^{i\theta^{a}t^{a}}$ (A.1)
where $t^{a}$ are the $N^{2}-1$ group generators, corresponding to each
parameter $\theta^{a}$. The generators are hermitian and traceless matrices
$(t^{a})^{\dagger}=t^{a},\qquad\Tr(t^{a})=0,$ (A.2)
that span a vector space underlying the corresponding Lie algebra,
$\mathfrak{su}(N)$. The generators obey the commutation relation
$[t^{a},t^{b}]=if^{abc}t^{c}$ (A.3)
where $f^{abc}$ are the antisymmetric structure constants, specific for each
group and non-zero for a non-abelian group. A fundamental property of Lie
groups is the Jacobi identity
$[t^{a},[t^{b},t^{c}]]+[t^{b},[t^{c},t^{a}]]+[t^{c},[t^{a},t^{b}]]=0$ (A.4)
implying
$f^{ade}f^{bcd}+f^{bde}f^{cad}+f^{cde}f^{abd}=0.$ (A.5)
There are two main irreducible representations of the groups $SU(N)$. The
fundamental representation consists of $N$-dimensional complex vectors, with
the group as well as the algebra elements being $N\times N$ matrices. For QCD,
$N=3$, this corresponds to the representation of the 3-spinor quark field. The
usual choice of the normalization of the generators is
$f^{acd}f^{bcd}=N\delta^{ab}$ (A.6)
from which we can derive for the fundamental representation,
$\Tr\left(t^{a}t^{b}\right)=\frac{\delta^{ab}}{2}.$ (A.7)
The structure constants may be written as
$f^{abc}=-2i\Tr([t^{a},t^{b}]t^{c})$ (A.8)
and the product of two generators has the general form
$t^{a}t^{b}=\frac{\delta^{ab}}{2N}+\frac{1}{2}d^{abc}t^{c}+\frac{i}{2}f^{abc}t^{c}$ (A.9)
where the totally symmetric object is defined as $d^{abc}=2\Tr\left(t^{a}\{t^{b},t^{c}\}\right)$, making use of the anti-commutator
$\{t^{a},t^{b}\}=\frac{\delta^{ab}}{N}+d^{abc}t^{c}.$ (A.10)
Additional identities may be obtained:
$\Tr\left(t^{a}t^{b}t^{c}\right)=\frac{1}{4}(d^{abc}+if^{abc})$ (A.11)
$f^{abc}f^{abc}=N(N^{2}-1)$ (A.12)
$f^{abm}f^{cdm}=\frac{2}{N}\left(\delta^{ac}\delta^{bd}-\delta^{ad}\delta^{bc}\right)+d^{acm}d^{dbm}-d^{adm}d^{bcm}$ (A.13)
$f^{abm}d^{cdm}+f^{acm}d^{dbm}+f^{adm}d^{bcm}=0$ (A.14)
with a further relation for $N=3$,
$\delta^{ab}\delta^{cd}+\delta^{ac}\delta^{bd}+\delta^{ad}\delta^{bc}=3\left(d^{abm}d^{cdm}+d^{acm}d^{dbm}+d^{adm}d^{bcm}\right).$ (A.15)
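These identities are straightforward to verify numerically. The following minimal NumPy sketch (an illustration added here, not part of the original derivation) builds the fundamental $SU(3)$ generators from the Gell-Mann matrices and checks the normalization (A.7), the definitions (A.8) and (A.10), and the sum rule (A.12).

```python
import numpy as np

# Gell-Mann matrices lambda^a; the fundamental generators are t^a = lambda^a / 2
gm = [
    np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]]),
    np.array([[0, -1j, 0], [1j, 0, 0], [0, 0, 0]]),
    np.array([[1, 0, 0], [0, -1, 0], [0, 0, 0]]),
    np.array([[0, 0, 1], [0, 0, 0], [1, 0, 0]]),
    np.array([[0, 0, -1j], [0, 0, 0], [1j, 0, 0]]),
    np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0]]),
    np.array([[0, 0, 0], [0, 0, -1j], [0, 1j, 0]]),
    np.diag([1, 1, -2]) / np.sqrt(3),
]
t = [np.asarray(m, dtype=complex) / 2 for m in gm]
N = 3

# Normalization (A.7): Tr(t^a t^b) = delta^{ab} / 2
for a in range(8):
    for b in range(8):
        assert np.isclose(np.trace(t[a] @ t[b]), 0.5 if a == b else 0.0)

# Structure constants from (A.8) and symmetric symbol d^{abc} = 2 Tr(t^a {t^b, t^c})
f = np.zeros((8, 8, 8))
d = np.zeros((8, 8, 8))
for a in range(8):
    for b in range(8):
        comm = t[a] @ t[b] - t[b] @ t[a]
        for c in range(8):
            anti = t[b] @ t[c] + t[c] @ t[b]
            f[a, b, c] = (-2j * np.trace(comm @ t[c])).real
            d[a, b, c] = (2 * np.trace(t[a] @ anti)).real

# Sum rule (A.12): f^{abc} f^{abc} = N (N^2 - 1) = 24 for SU(3)
assert np.isclose((f**2).sum(), N * (N**2 - 1))
```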
The other important representation is the adjoint representation, to which the generators belong; it acts on the vector space spanned by the generators themselves and is an $N^{2}-1$ dimensional representation. In QCD, the 8 gluon fields live in the adjoint representation of the group $SU(3)$ and transform accordingly. The representation matrices of the generators are given by the structure constants
$(t^{b})_{ac}=if^{abc}.$ (A.16)
A useful relation is the trace of four generators in the adjoint
representation
$\Tr(t^{a}t^{b}t^{c}t^{d})=\delta^{ad}\delta^{bc}+\frac{1}{2}\left(\delta^{ab}\delta^{cd}+\delta^{ac}\delta^{bd}\right)+\frac{N}{4}\left(f^{adm}f^{bcm}+d^{adm}d^{bcm}\right).$
(A.17)
In this representation, the covariant derivative
$D_{\mu}\eta(x)=(\partial_{\mu}-igA_{\mu}^{a}t^{a})\eta(x)$ (A.18)
takes the component form
$\displaystyle(D_{\mu}\eta(x))_{a}$
$\displaystyle=\partial_{\mu}\eta_{a}(x)-igA_{\mu}^{b}(t^{b})_{ac}\eta_{c}(x)$
(A.19)
$\displaystyle=\partial_{\mu}\eta_{a}(x)+gf^{abc}A_{\mu}^{b}\eta_{c}(x).$
(A.20)
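As a companion numerical check (again an added sketch; the non-zero $f^{abc}$ below are the standard $SU(3)$ values), one can confirm that the adjoint matrices (A.16) satisfy the commutation relation (A.3):

```python
import numpy as np
from itertools import permutations

# Non-zero structure constants of su(3), indices quoted 1-based
nonzero = {
    (1, 2, 3): 1.0,
    (1, 4, 7): 0.5, (2, 4, 6): 0.5, (2, 5, 7): 0.5, (3, 4, 5): 0.5,
    (1, 5, 6): -0.5, (3, 6, 7): -0.5,
    (4, 5, 8): np.sqrt(3) / 2, (6, 7, 8): np.sqrt(3) / 2,
}

# Build the totally antisymmetric tensor f^{abc}
f = np.zeros((8, 8, 8))
for (a, b, c), val in nonzero.items():
    for perm, sign in zip(permutations((a, b, c)), (1, -1, -1, 1, 1, -1)):
        f[tuple(i - 1 for i in perm)] = sign * val

# Adjoint representation (A.16): (t^b)_{ac} = i f^{abc}
T = [1j * f[:, b, :] for b in range(8)]

# Commutation relation (A.3): [t^a, t^b] = i f^{abc} t^c
for a in range(8):
    for b in range(8):
        comm = T[a] @ T[b] - T[b] @ T[a]
        rhs = sum(1j * f[a, b, c] * T[c] for c in range(8))
        assert np.allclose(comm, rhs)
```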
## Appendix B Lattice tensors
### B.1 Construction of the lattice basis
#### B.1.1 Momentum polynomial under a transposition
Here a brief proof of the transformation of a polynomial of a vector $p$ under a transposition is given. A transposition is defined by an exchange of two components of a vector, $\sigma\leftrightarrow\rho$, under the operation $T^{(\sigma\rho)}$. A matrix form for this operator is
$\begin{cases}T^{(\sigma\rho)}_{\mu\nu}=\delta_{\mu\nu},&\mu\neq\sigma,\rho\\ T^{(\sigma\rho)}_{\sigma\nu}=\delta_{\rho\nu}\\ T^{(\sigma\rho)}_{\rho\nu}=\delta_{\sigma\nu}\end{cases}$ (B.1)
which reproduces the correct transformation on the vector $p$:
$\begin{cases}p^{\prime}_{\nu}=p_{\nu},&\nu\neq\sigma,\rho\\ p^{\prime}_{\sigma}=p_{\rho},\\ p^{\prime}_{\rho}=p_{\sigma}.\end{cases}$ (B.2)
Considering the transformation for an arbitrary order of $p$
$(p^{\prime}_{\mu})^{n}=p^{\prime}_{\mu}...p^{\prime}_{\mu}=T^{(\sigma\rho)}_{\mu\nu_{1}}p_{\nu_{1}}...T^{(\sigma\rho)}_{\mu\nu_{n}}p_{\nu_{n}}$
(B.3)
and considering the case $\mu\neq\sigma,\rho$, the correct transformation is immediate since all components are left unchanged,
$(p^{\prime}_{\mu})^{n}=p_{\mu}...p_{\mu}=(p_{\mu})^{n}=T^{(\sigma\rho)}_{\mu\nu}(p_{\nu})^{n}.$ (B.4)
For $\mu=\sigma,\rho$, the transformation is
$(p^{\prime}_{\sigma})^{n}=T^{(\sigma\rho)}_{\sigma\nu_{1}}p_{\nu_{1}}...T^{(\sigma\rho)}_{\sigma\nu_{n}}p_{\nu_{n}}=\delta_{\rho\nu_{1}}p_{\nu_{1}}...\delta_{\rho\nu_{n}}p_{\nu_{n}}=T^{(\sigma\rho)}_{\sigma\nu}(p_{\nu})^{n}=(p_{\rho})^{n}.$ (B.5)
This is the same transformation as for the vector $p$, and thus the polynomial
transforms accordingly.
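This statement is easy to check numerically; a short sketch (an added illustration, with an arbitrary test vector) is:

```python
import numpy as np

# Transposition T^{(sigma rho)} of eq. B.1, exchanging components sigma=1, rho=3
Nd, sigma, rho = 4, 1, 3
T = np.eye(Nd)
T[[sigma, rho]] = T[[rho, sigma]]   # swap the two rows

p = np.array([0.3, -1.2, 2.0, 0.7])
p_prime = T @ p                      # transformed vector, eq. B.2

# A monomial of order n transforms like the vector itself, eq. B.5
n = 5
assert np.allclose(p_prime**n, T @ p**n)
```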
#### B.1.2 Second order tensors under $H(4)$ symmetry
Here we show that there is no mixing among the diagonal and off-diagonal
elements under a general $H(4)$ transformation, using the fact that these
transformations can be formed by products of transpositions and inversions.
The transposition operator for the exchange of components
$\sigma\leftrightarrow\rho$ was defined in B.1. For the inversion of the
component $\rho$, we define the operator as
$P^{(\rho)}_{\mu\nu}=\delta_{\mu\nu},\quad\mu\neq\rho$ (B.6)
$P^{(\rho)}_{\rho\nu}=-\delta_{\rho\nu}.$ (B.7)
The transformation for a second order tensor under transpositions and
inversions is
$D^{\prime}_{\mu\nu}=T^{(\sigma\rho)}_{\mu\tau}T^{(\sigma\rho)}_{\nu\varepsilon}D_{\tau\varepsilon},$ (B.8)
$D^{\prime}_{\mu\nu}=P^{(\rho)}_{\mu\tau}P^{(\rho)}_{\nu\varepsilon}D_{\tau\varepsilon}.$ (B.9)
Now we consider the transformation of diagonal elements $\mu=\nu$. For
transpositions there are three distinct situations,
$\begin{cases}D^{\prime}_{\sigma\sigma}=\delta_{\rho\tau}\delta_{\rho\varepsilon}D_{\tau\varepsilon}=D_{\rho\rho}\\ D^{\prime}_{\rho\rho}=D_{\sigma\sigma}\\ D^{\prime}_{\mu\mu}=D_{\mu\mu},\quad\mu\neq\rho,\sigma\end{cases}$ (B.10)
and we see that no off-diagonal terms appear.
A similar analysis can be considered for the inversions using B.9,
$\begin{cases}D^{\prime}_{\rho\rho}=(-\delta_{\rho\tau})(-\delta_{\rho\varepsilon})D_{\tau\varepsilon}=D_{\rho\rho}\\ D^{\prime}_{\mu\mu}=D_{\mu\mu},\quad\mu\neq\rho\end{cases}$ (B.11)
and again for this transformation, no off-diagonal terms appear for the
diagonal transformation.
We now consider the off-diagonal transformation, $\mu\neq\nu$. For the
transpositions there are again three distinct cases
$\begin{cases}D^{\prime}_{\sigma\nu}=\delta_{\rho\tau}\delta_{\nu\varepsilon}D_{\tau\varepsilon}=D_{\rho\nu}\\ D^{\prime}_{\rho\nu}=D_{\sigma\nu}\\ D^{\prime}_{\rho\sigma}=D_{\sigma\rho}\end{cases}$ (B.12)
and no diagonal terms are involved. On the other hand for inversions there are
two cases
$\begin{cases}D^{\prime}_{\mu\nu}=-D_{\mu\nu},&\mu=\rho\ \text{or}\ \nu=\rho\\ D^{\prime}_{\mu\nu}=D_{\mu\nu},&\mu\neq\rho\wedge\nu\neq\rho.\end{cases}$ (B.13)
We conclude that a general $H(4)$ transformation does not mix the diagonal and
off-diagonal elements for second order tensors.
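A randomized numerical check of this conclusion (an added sketch; every $H(4)$ element is represented as a signed permutation of the four axes) is:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_h4():
    # Random H(4) element: a signed permutation, i.e. a product of
    # transpositions (B.1) and inversions (B.6, B.7)
    S = np.eye(4)[rng.permutation(4)]
    return S * rng.choice([-1.0, 1.0], size=4)

D_diag = np.diag(rng.normal(size=4))       # purely diagonal tensor
D_off = rng.normal(size=(4, 4))
np.fill_diagonal(D_off, 0.0)               # purely off-diagonal tensor

for _ in range(1000):
    S = random_h4()
    Dp_diag = S @ D_diag @ S.T             # D' = S D S^T, cf. eqs. B.8, B.9
    Dp_off = S @ D_off @ S.T
    assert np.allclose(Dp_diag, np.diag(np.diag(Dp_diag)))  # stays diagonal
    assert np.allclose(np.diag(Dp_off), 0.0)                # stays off-diagonal
```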
### B.2 General construction for projectors
The projectors $\mathcal{P}^{k}$ are necessary to extract form factors
corresponding to each basis element. Here we describe the general procedure for constructing projectors in an arbitrary vector space.
Given a general tensor $\Gamma$, this object will be described by a basis of
$N$ tensor elements $\tau^{j}$,
$\Gamma=\sum_{j=1}^{N}\gamma^{j}\tau^{j}$ (B.14)
where $\gamma^{j}$ are the corresponding dressing functions. Suppose we want
to extract one of the form factors $\gamma^{k}$ by acting on $\Gamma$ with an
operator $\mathcal{P}^{k}$ (this operation involves the necessary index
contractions to build a scalar). The operation is of the form,
$\mathcal{P}^{k}\Gamma=\mathcal{P}^{k}\left(\sum_{j=1}^{N}\gamma^{j}\tau^{j}\right)=\gamma^{k}.$
(B.15)
From this, using the linear independence of the basis elements and the linearity of the operator, we may extract the relation
$\mathcal{P}^{k}\tau^{j}=\delta^{kj}.$ (B.16)
Consider the most general form of the projector $\mathcal{P}^{k}$, constructed from the basis elements,
$\mathcal{P}^{k}=\sum_{i=1}^{N}A_{ki}\tau^{i}$ (B.17)
and substitute it into eq. B.16 to obtain
$\sum_{i=1}^{N}A_{ki}\tau^{i}\tau^{j}=\delta^{kj}\Leftrightarrow A_{ki}=(\tau^{k}\tau^{i})^{-1}.$ (B.18)
This reduces the extraction of the form factors to a matrix inversion problem. We need only build the matrix with elements $(A^{-1})_{ki}=\tau^{k}\tau^{i}$, where the contraction of indices referred to before is assumed, and obtain its inverse $A$, so that
$\mathcal{P}^{k}=\sum_{i=1}^{N}(\tau^{k}\tau^{i})^{-1}\tau^{i}.$ (B.19)
With this mechanism, it is straightforward to understand why it is impossible to build well defined projectors when there are redundant basis elements that can be written as linear combinations of the remaining ones. In this case, not all rows will be linearly independent, and it is known from linear algebra that matrices with this property are singular, i.e. non-invertible, so the projectors cannot be defined.
#### B.2.1 Projectors for the lattice bases
We use the previous mechanism to build the projectors for the tensor bases
considered throughout the work. We begin with the general form for second
order tensors in the continuum
$D_{\mu\nu}(p)=A(p)\delta_{\mu\nu}+B(p)p_{\mu}p_{\nu}$ (B.20)
with the elements $\tau^{1}=\delta_{\mu\nu}$ and $\tau^{2}=p_{\mu}p_{\nu}$.
The matrix $A^{-1}$ for an $N_{d}$ dimensional space is
$A^{-1}=\matrixquantity(N_{d}&p^{2}\\ p^{2}&p^{4}),$ (B.21)
and its inverse is
$A=\frac{1}{p^{4}(N_{d}-1)}\matrixquantity(p^{4}&-p^{2}\\ -p^{2}&N_{d}).$ (B.22)
The projectors are built with eq. B.17
$\displaystyle\mathcal{P}^{1}_{\mu\nu}=\frac{1}{N_{d}-1}\left(\delta_{\mu\nu}-\frac{p_{\mu}p_{\nu}}{p^{2}}\right)$
(B.23)
$\displaystyle\mathcal{P}^{2}_{\mu\nu}=\frac{1}{N_{d}-1}\left(-\frac{\delta_{\mu\nu}}{p^{2}}+N_{d}\frac{p_{\mu}p_{\nu}}{p^{4}}\right),$
(B.24)
and the extraction of the respective form factors follows immediately,
$A(p)=\frac{1}{N_{d}-1}\left(\sum_{\mu}D_{\mu\mu}(p)-\frac{1}{p^{2}}\sum_{\mu\nu}p_{\mu}p_{\nu}D_{\mu\nu}(p)\right)$ (B.25)
$B(p)=\frac{1}{N_{d}-1}\left(-\frac{1}{p^{2}}\sum_{\mu}D_{\mu\mu}(p)+\frac{N_{d}}{p^{4}}\sum_{\mu\nu}p_{\mu}p_{\nu}D_{\mu\nu}(p)\right).$ (B.26)
This procedure can be simplified when considering the tensor form
$D_{\mu\nu}(p)=D(p^{2})\left(\delta_{\mu\nu}-\frac{p_{\mu}p_{\nu}}{p^{2}}\right),$
(B.27)
with the form factor extracted with
$D(p^{2})=\frac{1}{N_{d}-1}\sum_{\mu}D_{\mu\mu}(p).$ (B.28)
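The whole construction is simple to reproduce numerically. The sketch below (an added illustration; the test momentum is arbitrary) builds $A^{-1}$ as in eq. B.21, inverts it, assembles the projectors of eq. B.17, and confirms that $\mathcal{P}^{1}$ coincides with eq. B.23 and that the pair extracts the form factors of eq. B.20.

```python
import numpy as np

Nd = 4
p = np.array([1.0, 2.0, 0.5, -1.5])
p2 = p @ p

# Continuum rank-2 basis (B.20): tau^1 = delta_{mu nu}, tau^2 = p_mu p_nu
tau = [np.eye(Nd), np.outer(p, p)]

# Contraction matrix (A^{-1})_{ki} = tau^k . tau^i, cf. eq. B.21
Ainv = np.array([[np.sum(tau[k] * tau[i]) for i in range(2)] for k in range(2)])
A = np.linalg.inv(Ainv)

# Projectors P^k = sum_i A_{ki} tau^i, eq. B.17
P = [A[k, 0] * tau[0] + A[k, 1] * tau[1] for k in range(2)]

# P^1 agrees with the closed form (B.23)
assert np.allclose(P[0], (np.eye(Nd) - np.outer(p, p) / p2) / (Nd - 1))

# The projectors extract the form factors of D = A(p) delta + B(p) p p
A_true, B_true = 0.7, -0.3
D = A_true * tau[0] + B_true * tau[1]
assert np.isclose(np.sum(P[0] * D), A_true)
assert np.isclose(np.sum(P[1] * D), B_true)
```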
We consider now the lattice basis 3.16. As noted in the construction of the basis, the diagonal elements do not mix with the off-diagonal ones, which allows us to analyse them independently. The reducibility of the group representation splits the five dimensional matrix into two square matrices of sizes two and three. It is thus important to use two different index contractions, one considering only diagonal terms, $\sum_{\mu}\tau_{\mu\mu}^{i}\tau_{\mu\mu}^{j}$, and the second considering only off-diagonal elements, $\sum_{\mu\neq\nu}\tau_{\mu\nu}^{i}\tau_{\mu\nu}^{j}$. We start with the diagonal elements $\tau^{1}=\delta_{\mu\mu}$, $\tau^{2}=p_{\mu}^{2}$ and $\tau^{3}=p_{\mu}^{4}$, and use the notation $p^{[n]}=\sum_{\mu}p_{\mu}^{n}$. The contraction matrix $A^{-1}$ is
$A^{-1}=\matrixquantity(N_{d}&p^{2}&p^{[4]}\\ p^{2}&p^{[4]}&p^{[6]}\\ p^{[4]}&p^{[6]}&p^{[8]}).$ (B.29)
Hence, the diagonal form factors are
$E(p)=\frac{1}{\Delta_{1}}\bigg[\sum_{\mu}D_{\mu\mu}(p)\left(p^{[4]}p^{[8]}-(p^{[6]})^{2}\right)+\sum_{\mu}p_{\mu}^{2}D_{\mu\mu}(p)\left(p^{[4]}p^{[6]}-p^{2}p^{[8]}\right)+\sum_{\mu}p_{\mu}^{4}D_{\mu\mu}(p)\left(p^{2}p^{[6]}-(p^{[4]})^{2}\right)\bigg]$ (B.30)
$F(p)=\frac{1}{\Delta_{1}}\bigg[\sum_{\mu}D_{\mu\mu}(p)\left(p^{[4]}p^{[6]}-p^{2}p^{[8]}\right)+\sum_{\mu}p_{\mu}^{2}D_{\mu\mu}(p)\left(N_{d}p^{[8]}-(p^{[4]})^{2}\right)+\sum_{\mu}p_{\mu}^{4}D_{\mu\mu}(p)\left(p^{2}p^{[4]}-N_{d}p^{[6]}\right)\bigg]$ (B.31)
$G(p)=\frac{1}{\Delta_{1}}\bigg[\sum_{\mu}D_{\mu\mu}(p)\left(p^{2}p^{[6]}-(p^{[4]})^{2}\right)+\sum_{\mu}p_{\mu}^{2}D_{\mu\mu}(p)\left(p^{2}p^{[4]}-N_{d}p^{[6]}\right)+\sum_{\mu}p_{\mu}^{4}D_{\mu\mu}(p)\left(N_{d}p^{[4]}-p^{4}\right)\bigg]$ (B.32)
with
$\Delta_{1}=N_{d}\left(p^{[4]}p^{[8]}-(p^{[6]})^{2}\right)+p^{2}\left(p^{[4]}p^{[6]}-p^{2}p^{[8]}\right)+p^{[4]}\left(p^{2}p^{[6]}-(p^{[4]})^{2}\right).$ (B.33)
Similarly we can repeat the procedure for the two dimensional, off-diagonal case, obtaining both form factors,
$H(p)=\frac{2}{\Delta_{2}}\bigg[\sum_{\mu\neq\nu}p_{\mu}p_{\nu}D_{\mu\nu}(p)\left(p^{[4]}p^{[6]}-p^{[10]}\right)-\sum_{\mu\neq\nu}p_{\mu}^{3}p_{\nu}^{3}D_{\mu\nu}(p)\left(p^{2}p^{[4]}-p^{[6]}\right)\bigg]$ (B.34)
$I(p)=\frac{1}{\Delta_{2}}\bigg[\sum_{\mu\neq\nu}p_{\mu}p_{\nu}D_{\mu\nu}(p)\left(p^{[8]}-(p^{[4]})^{2}\right)+\sum_{\mu\neq\nu}p_{\mu}^{3}p_{\nu}^{3}D_{\mu\nu}(p)\left(p^{4}-p^{[4]}\right)\bigg]$ (B.35)
with
$\Delta_{2}=2\left(p^{2}p^{[4]}-p^{[6]}\right)\left(p^{[8]}-(p^{[4]})^{2}\right)+2\left(p^{4}-p^{[4]}\right)\left(p^{[4]}p^{[6]}-p^{[10]}\right).$ (B.36)
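Instead of coding the closed-form expressions, the diagonal extraction can also be validated by solving the linear system directly. The sketch below (an added illustration with synthetic data) builds the contraction matrix of eq. B.29 and recovers $(E,F,G)$ from a diagonal tensor with known form factors.

```python
import numpy as np

Nd = 4
p = np.array([0.7, 1.3, -0.4, 2.1])

# Diagonal basis elements per component mu: tau^1 = 1, tau^2 = p_mu^2, tau^3 = p_mu^4
tau = np.stack([np.ones(Nd), p**2, p**4])       # shape (3, Nd)

# Synthetic diagonal data with known form factors
E, F, G = 1.2, -0.4, 0.05
D_diag = E * tau[0] + F * tau[1] + G * tau[2]   # D_{mu mu}

# Contraction matrix (B.29) with entries N_d, p^2, p^[4], p^[6], p^[8]
Ainv = tau @ tau.T
b = tau @ D_diag                                # b_k = sum_mu tau^k_mu D_{mu mu}
print(np.linalg.solve(Ainv, b))                 # recovers (E, F, G)
```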
Having all projectors for the lattice basis, we need to consider the case of the generalized diagonal kinematics, where these projectors cannot be obtained. This analysis is done for each individual configuration. Starting with
the diagonal, $(n,n,n,n)$, the gluon propagator is
$D_{\mu\mu}(p)=(E(p)+n^{2}F(p)+n^{4}G(p))\delta_{\mu\mu}$
$D_{\mu\nu}(p)=n^{2}H(p)+2n^{4}I(p),\quad\mu\neq\nu$ (B.37)
and in this case we can only extract two form factors, for the diagonal and
off-diagonal terms. These are extracted with
$\displaystyle
E(p)+n^{2}F(p)+n^{4}G(p)=\frac{1}{N_{d}}\sum_{\mu}D_{\mu\mu}(p),$ (B.38)
$\displaystyle
n^{2}H(p)+2n^{4}I(p)=\frac{1}{N_{d}(N_{d}-1)}\sum_{\mu\neq\nu}D_{\mu\nu}(p).$
(B.39)
The mixed configurations, $(n,n,0,0)$ and $(n,n,n,0)$ have non-diagonal terms
and the gluon propagator reads
$D_{\mu\mu}(p)=E(p)\delta_{\mu\mu}+(F(p)+n^{2}G(p))p_{\mu}^{2}$
$D_{\mu\nu}(p)=(H(p)+2n^{2}I(p))p_{\mu}p_{\nu},\quad\mu\neq\nu.$ (B.40)
For these configurations we consider the parameter $k$ representing the number of non-vanishing components, so that $p^{2}=kn^{2}$ and $p^{[4]}=kn^{4}$. The contractions of the tensor basis elements are summarized by
$A^{-1}=\matrixquantity(N_{d}&kn^{2}\\ kn^{2}&kn^{4}),\qquad\sum_{\mu\neq\nu}p_{\mu}^{2}p_{\nu}^{2}=k(k-1)n^{4}$ (B.41)
with corresponding inverse
$A=\frac{1}{kn^{4}(N_{d}-k)}\matrixquantity(kn^{4}&-kn^{2}\\ -kn^{2}&N_{d}).$ (B.42)
With this, the form factors follow easily
$\displaystyle
E(p^{2})=\frac{1}{kn^{4}(N_{d}-k)}\sum_{\mu}D_{\mu\mu}(p)\left(kn^{4}\delta_{\mu\mu}-kn^{2}p_{\mu}^{2}\right)$
(B.43) $\displaystyle
F(p^{2})+n^{2}G(p^{2})=\frac{1}{kn^{4}(N_{d}-k)}\sum_{\mu}D_{\mu\mu}(p)\left(-kn^{2}\delta_{\mu\mu}+N_{d}p_{\mu}^{2}\right)$
(B.44) $\displaystyle
H(p^{2})+2n^{2}I(p^{2})=\frac{1}{k(k-1)n^{4}}\sum_{\mu\neq\nu}D_{\mu\nu}(p)p_{\mu}p_{\nu}.$
(B.45)
Lastly, for on-axis momenta, $(n,0,0,0)$, only diagonal terms survive
$D_{\mu\mu}(p)=E(p)+(F(p)+n^{2}G(p))p_{\mu}^{2},$ (B.46)
and the form factors are extracted with
$\displaystyle E(p)=\frac{1}{3}\sum_{\mu\neq 1}D_{\mu\mu}(p),$ (B.47)
$\displaystyle n^{2}F(p)+n^{4}G(p)=D_{11}(p)-E(p).$ (B.48)
## Appendix C Results – Additional figures
### C.1 Gluon propagator
#### C.1.1 Continuum relations – mixed diagonal configurations
In this section the continuum relations for the momentum configurations $(n,n,n,0)$ and $(n,n,0,0)$ are computed. The procedure is similar to that for the other two diagonal kinematics. For both cases the lattice gluon propagator reads
$D_{\mu\mu}=E(p^{2})\delta_{\mu\mu}+(F(p^{2})+n^{2}G(p^{2}))p_{\mu}^{2}$
$D_{\mu\nu}=(H(p^{2})+2n^{2}I(p^{2}))p_{\mu}p_{\nu},\quad\mu\neq\nu.$ (C.1)
Using the extraction of the form factors built in section B.2 and also the continuum parametrization
$D_{\mu\nu}^{c}(p)=D(p^{2})\left(\delta_{\mu\nu}-\frac{p_{\mu}p_{\nu}}{p^{2}}\right),$
the proof of the continuum relations follows simply,
$E(p^{2})=\frac{D(p^{2})}{kn^{4}(N_{d}-k)}\sum_{\mu}\left(\delta_{\mu\mu}-\frac{p_{\mu}^{2}}{p^{2}}\right)\left(kn^{4}\delta_{\mu\mu}-kn^{2}p_{\mu}^{2}\right)=\frac{D(p^{2})}{kn^{4}(N_{d}-k)}\sum_{\mu}\left(kn^{4}-kn^{2}p_{\mu}^{2}-\frac{kn^{4}}{p^{2}}p_{\mu}^{2}+\frac{kn^{2}}{p^{2}}p_{\mu}^{4}\right)=D(p^{2})$
$F(p^{2})+n^{2}G(p^{2})=\frac{D(p^{2})}{kn^{4}(N_{d}-k)}\sum_{\mu}\left(\delta_{\mu\mu}-\frac{p_{\mu}^{2}}{p^{2}}\right)\left(-kn^{2}\delta_{\mu\mu}+N_{d}p_{\mu}^{2}\right)=\frac{D(p^{2})}{kn^{4}(N_{d}-k)}\sum_{\mu}\left(-kn^{2}+N_{d}p_{\mu}^{2}+\frac{kn^{2}}{p^{2}}p_{\mu}^{2}-\frac{N_{d}}{p^{2}}p_{\mu}^{4}\right)=-\frac{D(p^{2})}{p^{2}}$
$H(p^{2})+2n^{2}I(p^{2})=\frac{D(p^{2})}{k(k-1)n^{4}}\sum_{\mu\neq\nu}\left(-\frac{p_{\mu}p_{\nu}}{p^{2}}\right)p_{\mu}p_{\nu}=-\frac{D(p^{2})}{p^{2}}.$
Notice that $p^{2}=kn^{2}$ with the parameter $k$ defined in section B.2 and $N_{d}=4$ the dimensionality of the lattice. In addition, this result is independent of the use of lattice or improved momentum.
[Plot: lattice form factors $\hat{p}^{2}E(\hat{p}^{2})$, $\hat{p}^{2}(n^{2}F(\hat{p}^{2})+n^{4}G(\hat{p}^{2}))$ and $-\hat{p}^{4}H(\hat{p}^{2})-\hat{p}^{6}I(\hat{p}^{2})$ versus $\hat{p}$ (GeV), together with the benchmark $d(\hat{p}^{2})$, for $(n,n,n,0)$ (left) and $(n,n,0,0)$ (right).]
Figure C.1: Form factors from the lattice basis for the mixed configurations
$p=(n,n,n,0)$ (left) and for $p=(n,n,0,0)$ (right) both as a function of
improved momentum. Shown for comparison is the benchmark result
$d(\hat{p}^{2})$.
The analysis of the continuum relations for these two configurations is shown in fig. C.1. The continuum relations are exactly satisfied among all three form factors for both configurations. The benchmark result is shown for comparison, and it is noticeable that the further the kinematics lie from the diagonal, the worse the correspondence becomes. The configuration $(n,n,0,0)$ deviates from the gluon propagator dressing function at higher momenta, while the result for $(n,n,n,0)$ remains compatible over the whole range of momenta, similarly to the fully diagonal momenta.
# Frustration, strain and phase co-existence in the mixed valent hexagonal
iridate Ba3NaIr2O9
Charu Garg Department of Physics, Indian Institute of Science Education and
Research, Dr. Homi Bhabha Road, Pune 411008, India Antonio Cervellino Swiss
Light Source, Paul Scherrer Institute, CH-5232 Villigen, Switzerland Sunil
Nair Department of Physics, Indian Institute of Science Education and
Research, Dr. Homi Bhabha Road, Pune 411008, India Centre for Energy Science,
Indian Institute of Science Education and Research, Dr. Homi Bhabha Road, Pune
411008, India
(September 3, 2024)
###### Abstract
Using detailed synchrotron diffraction, magnetization, thermodynamic and
transport measurements, we investigate the relationship between the mixed
valence of Ir, lattice strain and the resultant structural and magnetic ground
states in the geometrically frustrated triple perovskite iridate Ba3NaIr2O9.
We observe a complex interplay between lattice strain and structural phase co-
existence, which is in sharp contrast to what is typically observed in this
family of compounds. The low temperature magnetic ground state is
characterized by the absence of long range order, and points towards the
condensation of a cluster glass state from an extended regime of short range
magnetic correlations.
## I Introduction
Geometrically frustrated magnets, of which triangular lattice antiferromagnets (TLAFs) are considered to be an archetype, remain at the forefront of contemporary condensed matter physics Witczak-Krempa _et al._ (2014); Ramirez (1994); Moessner and Ramirez (2006). Of particular interest in recent years have been a number of Ruthenium and Iridium based perovskite variants which
stabilize in an inherently frustrated environment. In general, the
stabilization of a particular structural ground state depends on the tolerance
limit of the corresponding symmetry which in turn is related to the relative
ionic radii of the constituent elements. For instance, in the perovskite ABO3,
introducing a bigger element at the $A$ and $B$ sites can progressively tune
the lattice from a high symmetry hexagonal to a lower symmetry orthorhombic,
or even a monoclinic one Johnsson and Lemmens (2007). The same is true for the
double (A2BB${}^{{}^{\prime}}$O6) and triple layered perovskites
(A3BB${}^{{}^{\prime}}_{2}O_{9}$) as well, where it has been shown that $B$ site cations with larger ionic radii stabilize the lattice in a lower symmetry Zhao _et al._ (2009); Nag _et al._ (2018); Vasala and Karppinen (2015).
A relatively recent addition to this family of geometrically frustrated
magnets is the Barium based triple perovskite iridates of the form Ba3MIr2O9
(M=alkali metal, alkaline earth metal, 3$d$ transition metal or lanthanides).
The choice of the M-site cation strongly determines the crystallographic
symmetry, which in turn inordinately influences the magnetic ground states.
For example, for M=Zn2+, a close realization of the elusive J=0 state is observed, whereas for M=Mg2+, Sr2+ and Ca2+, deviations from the non-magnetic state are observed in the form of antiferromagnetic exchange interactions, ferromagnetism and weak dimer-like features, respectively Nag _et al._ (2018).
Another addition to this family is the newly reported Ba3CoIr2O9, where Co2+,
being a magnetic ion, strongly influences the exchange paths, leading to weak
ferromagnetism at low temperature and the highest magneto-structural
transition temperature reported in the triple perovskite iridates Garg _et
al._ (2020). On the other hand, Ba3BiIr2O9 has been reported to exhibit a
giant magneto-elastic transition accompanied by the opening of a spin gap
Miiller _et al._ (2012). The structure-property relationships in these
systems are clearly driven by a complex interplay between the relative
strengths of competing spin-orbit coupling (SOC), electronic correlation (U),
and a hybridization interaction controlled by the Ir-O-Ir bond angle. Thus
small perturbations, such as changes in lattice parameters caused by
variations of different M ions, can tip the balance between the competing
energies and ground states.
In all the reported 6H hexagonal triple perovskite iridates, the ionic radius of the $M$ site cation lies in the range of 0.605$\AA$-0.947$\AA$, beyond
which the internal pressure forces the lattice to stabilize in a lower
symmetry. For example, the Ba3CaIr2O9 system has been reported to stabilize in
$C2/c$ monoclinic symmetry Zhao _et al._ (2009) which is in line with the
expected structural ground state based on the tolerance limit. Interestingly,
an exception appears to be the Ba3NaIr2O9 system, which - in spite of the
similar ionic radii of Na (1.02$\AA$) and Ca (1.00$\AA$) - has been reported
to stabilize in the high symmetry hexagonal structure at room temperature. In
this report, we discuss this relatively un-investigated Na based triple
perovskite iridate, where iridium is forced to be in the unconventionally high
charge state of 5.5. We investigate polycrystalline specimens of this system
using a combination of high resolution synchrotron diffraction, magnetization,
resistivity and specific heat measurements. We observe that the lattice
appears to accommodate strain as the temperature is reduced, which in turn
precludes the stabilization of a lower symmetry structural phase. This is in
contrast to what is typically observed in this class of materials. On the
other hand, a very gradual and incomplete transformation to a low symmetry
orthorhombic phase is observed, and the high symmetry hexagonal phase survives
till the lowest measured temperatures. Measurements of the magnetization and
specific heat point towards the existence of an extended cooperative
paramagnetic regime characterized by short range magnetic correlations, which
condenses into a cluster glass like state at low temperatures.
## II Experimental Details
Polycrystalline specimens of Ba3NaIr2O9 were synthesized by using the standard
solid state reaction route. Stoichiometric amounts of high purity BaCO3, Na2CO3
and IrO2 were thoroughly ground and then sintered at 1100∘C under an oxygen atmosphere to maintain the high oxidation state of Iridium. The phase purity
was confirmed by x-ray diffraction using a Bruker D8 Advance diffractometer
with a Cu K$\alpha$ radiation. High resolution synchrotron x-ray diffraction
data was collected using the Materials Science (MS) X04SA beam line
(wavelength $\lambda$ = 0.56526$\AA$) at the Swiss Light Source (SLS, PSI
Switzerland). The finely ground powder was loaded in a glass capillary of
diameter 0.3mm and was spun during the data acquisition at various
temperatures between 5 K and 300 K. The structure was analyzed by Rietveld
refinement using the FULLPROF suite Rodriguez-Carvajal (2001); Rietveld
(1969). The structures shown in the manuscript are drawn using Vesta Momma and
Izumi (2011). The homogeneity and stoichiometry of the compound were also
reconfirmed by energy dispersive x-ray analysis (EDAX) using a ZEISS Ultra Plus.
Magnetization and physical property measurements were performed using a
Quantum Design (MPMS-XL) SQUID magnetometer and a Physical Property
Measurement System (PPMS) respectively.
## III Results and Discussion
Figure 1: Main panel: Fit to the Rietveld refinement of the synchrotron data
at 295 K for Ba3NaIr2O9. The compound crystallizes in a 6H-hexagonal
perovskite with space group P63/mmc (194). The calculated and the observed
diffraction profiles are shown in red and black respectively. The vertical
green lines indicates the Bragg positions and the brown line at the bottom is
the difference between observed and calculated intensities. Inset: Enlarged
view of the higher angle peaks and the corresponding fit. Figure 2: (a) A
schematic representation of the crystal structure of Ba3NaIr2O9 using Vesta.
Here pink and green octahedra represent Iridium and Sodium respectively and
the Barium atoms are represented in Blue. (b) The projection of the structure
along the c-axis is shown. The Iridium octahedra form a hexagonal ring
surrounded by Sodium. (c) Scanning electron micrograph of the compound showing
hexagonal facets.
A Rietveld fit to the synchrotron diffraction data obtained at 300 K is shown
in Fig. 1 where Ba3NaIr2O9 is seen to stabilize in the high symmetry hexagonal
($P6_{3}/mmc$) symmetry, and the lattice parameters are deduced to be a = b =
5.86282(3)$\AA$, c = 14.61922(10)$\AA$ and $\alpha=\beta$ = 90∘; $\gamma$ =
120∘. This is in good agreement with previous reports Lightfoot and Battle
(1990); Rijssenbeek _et al._ (1999); Doi _et al._ (2001); Lufaso and zur
Loye (2005). The room temperature structure is illustrated in Fig. 2(a) where
face sharing octahedra (in pink) form an Ir2O9 dimer and are connected via
corners to NaO6 octahedra (in green). Fig. 2(b) represents the projection
along the crystallographic $c$-axis where IrO6 octahedra forms a hexagonal
ring around the NaO6 octahedra. Since Na is in the +1 oxidation state, Ir is
forced to stabilize in an atypical high oxidation state of +5.5. EDAX
measurements were also used to confirm the stoichiometry. Since it is
difficult to quantify the lighter elements (Na and O) using this technique,
the atomic percentage ratio between heavy elements Ba and Ir was compared. The
Ba:Ir ratio obtained from EDAX was observed to be 1.54 which is very close to
the stoichiometric ratio of 3:2=1.5 expected from the chemical formula. A
scanning electron micrograph image is shown in Fig. 2(c) where hexagonal
facets - a reflection of the underlying crystallographic symmetry - can be
clearly seen.
Figure 3: (a) Temperature evolution of synchrotron peaks at 5 K (black) and
300 K (blue). The lattice strain manifests in the form of broadening of
diffraction peaks as evidenced by the highly anisotropic peak profile at 5 K.
(b,c,d) Attempts to fit the synchrotron diffraction data at 5 K using various
refinement models as indicated.
A comparison of the temperature dependence of a few representative x-ray
diffraction peaks as measured at the extreme temperatures of 5 K and 300 K is
shown in Fig. 3(a). As the temperature is lowered, the diffraction peaks shift
to higher angles and also become anisotropic. The modification of the peak
profile could either signal the presence of strain in the lattice or a
transformation to a lower symmetry phase. The former could be a consequence of
the large ionic radii which Na possesses, whereas the latter has been reported
in a number of triple perovskite iridates earlier. Since there were no
additional peaks visible in the low temperature scan, the data was initially
fit using a hexagonal model alone. These attempts were not successful, as shown in Fig. 3(b). Addition of strain using the broadening model
available in FullProf made the fit better as can be seen in Fig. 3(c). This
method is based on Stephens model Stephens (1999) of anisotropic broadening,
where the refinement of microstrain covariance parameters S400, S004 and S112
corresponds to strain along the (100), (001) and (101) planes. Though strain
does appear to have an impact on the low temperature phase, the fitting was
still not satisfactory enough, which hints at the possible presence of an
additional low symmetry phase at low temperatures.
Figure 4: A schematic representation of the crystal structure of Ba3NaIr2O9
using Vesta for the orthorhombic phase. Here pink and green octahedra
represent Iridium and Sodium respectively. The Barium atoms are not shown for
clarity. The yellow dotted line shows the hexagonal arrangement for Iridium
octahedra.
To identify the possible symmetry of the additional low temperature phase,
existing literature in the ruthenate triple perovskite family was referred to,
where multiple scenarios ranging from monoclinic ($P2/c$, $C2/c$) to
orthorhombic ($Cmcm$), or even different structural models for the same
compounds Kimber _et al._ (2012); Stitzer _et al._ (2002) have been
reported. After exploring all these possible options, the orthorhombic (space
group-Cmcm (63)) phase Stitzer _et al._ (2002) resulted in the best fit, with
Rwp and Rp values of 3.24 and 2.47 respectively. The generated pattern was
seen to match well with the high resolution synchrotron data as shown in Fig.
3(d). The lattice parameters obtained from the fit for the additional
orthorhombic phase at 5K are a= 11.6574(11)$\AA$, b=20.1975(21)$\AA$,
c=14.5773(03)$\AA$ and $\alpha=\beta=\gamma$= 90∘. Fig. 4 depicts this
orthorhombic phase as viewed along the crystallographic $c$-axis. The yellow
dotted line indicates the hexagonal arrangement formed by Ir octahedra. The
high temperature hexagonal structural symmetry allows for only one
crystallographic position (4f) for Iridium. Therefore, given the presence of
mixed valent state Ir5.5, this position is highly disordered. On the other
hand, the low temperature C-centred orthorhombic symmetry is a 2a x 2b
primitive hexagonal pseudo-cell (or an orthohexagonal cell) and allows for
three different crystallographic sites for Ir (8f,8f,16h) making it possible
for the charge to be redistributed at these distinct cation sites. In
addition, Na also now has 3 unique Wyckoff positions (4a, 4b, 8d) allowing for
the movement of Iridium while still maintaining the orthorhombic crystal
framework. This is a complex low symmetry where each element has multiple
unique positions, the details of which are given in Kimber _et al._ (2012).
There have been prior reports of orthorhombic phases with only one
crystallographic position for Ir, but attempts to fit the low temperature
profile of Ba3NaIr2O9 using this symmetry were not successful. Interestingly,
in the ruthenium analogue, this need for multiple Ru positions was attributed
to the presence of a charge ordered state.
Figure 5: (a) The variation of the phase fraction of the hexagonal P63/mmc
with temperature. As the temperature reduces, the hexagonal phase converts slowly to the orthorhombic one, which nucleates at 50 K and reaches 20$\%$ of the total volume fraction at 5 K, the hexagonal phase retaining the remaining 80$\%$. Temperature evolution of the (b) volume, (c)
ratio of lattice parameters c/a for the hexagonal symmetry. A slight variation
in both the parameters are observed marking the onset of the lower symmetry
orthorhombic phase. (d) The temperature dependence of the microstrain
parameters SHKL for three different hkl is depicted. The sharp change in S400
and S004 close to the structural transformation temperature is consistent with
distortions of the lattice with the onset of orthorhombic symmetry.
It is observed that down to 50 K, a single structural hexagonal model with
strain parameters is sufficient for the fitting. As a function of reducing
temperatures, the phase fraction of hexagonal symmetry is invariant till 50 K,
below which the orthorhombic symmetry is seen to stabilize, reaching 20$\%$ of
the total volume fraction at 5 K (Fig. 5(a)). The temperature dependence of
volume and $c/a$ ratio for the primary hexagonal phase are depicted in Fig.
5(b) and Fig. 5(c) respectively. Clearly, below 50 K, the $c/a$ ratio shows a
change in slope associated with onset of the partial structural phase
transformation. The evolution of the secondary orthorhombic phase is also
evident in the temperature dependence of the microstrain covariance parameters
as depicted in Fig. 5(d). The strain parameters S400 and S004 show a sharp change close to the structural transformation temperature and remain almost
constant below it, whereas the parameter S112 increases dramatically. These
changes in the microstrain parameters are indicative of deviations in the
$\alpha$ and $\beta$ angles of the hexagonal lattice framework, and consistent
with a distortion towards an orthorhombic symmetry. It is interesting to note
that the emergence of the secondary orthorhombic phase at low temperatures is
not associated with the observation of a splitting of the hexagonal peaks, as
was observed in an earlier report on the same system zur Loye _et al._
(2009). We believe that this is due to the excess broadening of the
diffraction peaks due to strain. This incipient strain not only masks the peak
splitting expected due to the orthorhombic distortion, but also results in an
incomplete conversion of the high temperature hexagonal phase to the lower
symmetry orthorhombic one.
Fig. 6(a) shows the temperature dependence of the magnetic susceptibility of
Ba3NaIr2O9 as measured at an applied field of 500 Oe. The susceptibility
increases with decrease in temperature with the zero field cooled (zfc) and
field cooled (fc) curves diverging close to 6 K as shown in Fig. 6(b). This is at variance with what has been reported for earlier single crystalline specimens of this system, where features in the magnetization were observed at 75 K and 50 K zur Loye _et al._ (2009); Kim _et al._ (2004). The temperature
dependence of the heat capacity as measured from 2-250 K is depicted in Fig.
6(c). Clearly, the low temperature anomaly observed in magnetization is absent
here which implies that the change in entropy is rather small. Fig. 6(d) shows
the temperature dependence of reciprocal magnetic susceptibility (1/$\chi$).
Interestingly, a linear region was observed well in excess of 200 K, and hence
only the temperature range 260-300 K was chosen to fit the inverse magnetic susceptibility using the Curie-Weiss law. An effective magnetic moment of 3.42(5)$\mu_{B}$ per formula unit and a Weiss temperature ($\theta_{c}$) of
-285.36(1.1) K were obtained, with the latter being indicative of the extent
of frustration in this system, since we only observe a feature in
magnetization at 6 K.
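A minimal sketch of such an extraction (an illustration added here, not the analysis code used for this work; the synthetic Curie constant and Weiss temperature are placeholders chosen to mimic the quoted numbers) is:

```python
import numpy as np

# Synthetic susceptibility (emu/mol Oe) obeying chi = C / (T - theta);
# C_true and theta_true are placeholder values only.
C_true, theta_true = 1.46, -285.4
T = np.linspace(260.0, 300.0, 41)      # fitting window used in the text
chi = C_true / (T - theta_true)

# Curie-Weiss law: 1/chi = (T - theta)/C is linear in T
slope, intercept = np.polyfit(T, 1.0 / chi, 1)
C = 1.0 / slope
theta = -intercept * C

# Effective moment in mu_B per formula unit (CGS convention)
mu_eff = np.sqrt(8.0 * C)
print(theta, mu_eff)                   # ~ -285 K and ~ 3.4 mu_B
```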
Since Iridium is the only magnetic ion in this system, the magnetic moment
arises from the charge balance between Ir(V) (5d4) and Ir(VI) (5d3). Based on
these oxidation states and the octahedral coordination environments, the
theoretical spin-only moment for non-interacting mixed valent Ir5+ (S=1, 2.83
$\mu_{B}$) and Ir6+ (S=3/2, 3.87 $\mu_{B}$) is 6.7$\mu_{B}$ per formula unit.
These calculated moments are significantly larger than the experimentally
determined value 3.42(5)$\mu_{B}$ per formula unit. However, the
experimentally obtained value is close to the reported magnetic moments
3.6$\mu_{B}$ per formula unit for Ba3NaIr2O9 and 3.93$\mu_{B}$ per formula
unit for Ba3LiIr2O9, both having Ir in a similar 5.5 charge state Kim _et
al._ (2004). Such reduction in moment is a peculiar feature seen in iridates
and has been reported for a wide range of iridium based oxides Nag _et al._
(2018); Boseggia _et al._ (2013); Ming _et al._ (2018); Rau _et al._
(2016).
The strong suppression of the magnetic moment here is ascribed to the joint
effect of spin orbit interaction and strong covalency, resulting in the
formation of metal-metal bonds. They act against the intraatomic Hund’s rule
exchange interaction to reduce the total magnetic moment on the Iridium dimer.
This was further confirmed by our synchrotron measurements where an anomalous
shortening of Ir-Ir bond distance in the +5.5 valence state (2.73$\AA$) as
compared to the +5 state (2.75$\AA$) corroborates the formation of the metal-
metal bonds. With Na being non-magnetic, the inter and intra dimer interactions between Ir ions drive the magnetic ground state of the system. In the absence
of a superexchange path for inter dimer interactions (J2 and J3), the extended
superexchange pathways Ir-O-O-Ir could possibly influence the magnetic
exchange interactions. The Ir dimers are separated from each other via non
magnetic Na octahedra (green) as shown in Fig. 7. The next nearest neighbour inter dimer Ir ions (at distances of 5.8619(8)$\AA$ and 5.6911(12)$\AA$) are connected via two oxygens from the Na octahedra as shown by the dotted lines. Thus, in addition to metal-metal
bonds, the presence of super exchange and extended super exchange interaction pathways leads to complex magnetic exchange interactions.
Figure 6: (a) Zero field cooled temperature dependent magnetization measured
at 500 Oe for Ba3NaIr2O9. (b) The FC and ZFC curves show divergence close to 6 K, corresponding to a cluster glass transition. (c) Heat capacity as a
function of temperature measured in zero magnetic field shows no discernible
anomaly in the entire range of measurement. (d) log-log plot of temperature
dependence of the inverse magnetic susceptibility data as measured at 500 Oe.
The solid red line is a guide to the eye to show the deviation from Curie-
Weiss law, which starts close to 175 K. (e) Thermo-remnant magnetization (TRM)
measured at 1 kOe with two systematic jumps corresponding to the onset of the
co-operative paramagnetic regime, and the cluster glass state respectively.
Figure 7: A schematic representation of the crystal structure of Ba3NaIr2O9
using Vesta. The projection of the structure perpendicular to the c-axis is
shown. Here pink and green octahedra represents Iridium and Sodium
respectively and the Barium atoms are not shown for clarity. The Iridium
dimers (pink) are separated by Sodium octahedra (green) along the c-axis where
the extended super exchange between Ir dimers is mediated by oxygens in Na
octahedra as shown by dotted line.
Though heat capacity measurements did not show evidence of any long range
ordered state, a Curie-Weiss fit of the inverse magnetic susceptibility was valid only at temperatures in excess of 260 K, indicating the presence of an
extended regime of short range magnetic correlations. To gain further insight
into the extent of this regime, we performed temperature dependent
measurements of the Thermo-remnant magnetization (TRM), which has proven to be
an effective tool in the investigation of magnetically frustrated systems. A
TRM measurement as performed on the Ba3NaIr2O9 system in a cooling field of 1
kOe is depicted in Fig. 6(e). Two precipitous jumps are clearly observed - one
below 10 K, which corresponds to the low temperature magnetic transition
observed in the ZFC-FC measurements, and one just below 175 K, which roughly
corresponds to the region where the inverse magnetic susceptibility deviates
from the linear Curie-Weiss fit. In the absence of long range order, this
feature at high temperature could be ascribed to the onset of a cooperative
paramagnetic regime. First coined by Villain Villain (1979), cooperative
paramagnetism was used to describe the low temperature dynamics of classical Heisenberg spins on a corner sharing tetrahedral framework, and is a defining
feature of systems with high geometric frustration. Cooperative paramagnetism
is seen in many transition metal oxides which crystallize in magnetic spin
configurations that are geometrically or topologically prone to frustration
due to underlying lattices based upon corner, edge or face sharing triangles
or tetrahedra. A wide range of systems including pyrochlores, spinels, and jarosites are now known to exhibit this phenomenon Lee _et al._ (2010); Ueland
_et al._ (2010); van Duijn _et al._ (2008). This state can also be looked
upon as being analogous to the Griffiths phase Yamamoto _et al._ (2020), with
the notable difference that the low temperature magnetic ground state, instead
of being magnetically ordered, now undergoes a glass-like dynamical phase
transition. We believe that the nucleation of finite size correlated regions
within the antiferromagnetic matrix starts to develop close to 175 K. As the
temperature reduces, magnetic frustration develops due to competing intra
dimer (nearest neighbour J1) and inter dimer (next- nearest neighbour J2 and
J3) interactions. The absence of conventional long range antiferromagnetic
order is due to the interplay between frustration and quenched disorder. As
proposed by Imry and Ma Imry and Ma (1975), a random quenched disorder
inhibits the transition to a long range magnetically ordered state but instead
favours the nucleation of correlated magnetic clusters Pal _et al._ (2019).
Figure 8: Main panel: Resistivity ($\rho$) plotted as a function of
temperature. Inset: Resistivity (ln$\rho$) as a function of temperature
($T^{-1/3}$). The red line is the fit to the Mott variable range hopping (VRH)
model.
Interestingly, the high temperature magnetic feature ($\sim$175 K) which we
observe in the TRM measurements is not easily discernible in other
measurements, and hence has gone unreported in prior single crystal
measurements of Ba3NaIr2O9 as well Kim _et al._ (2004). This is a consequence
of the fact that the magnetic susceptibility of the paramagnetic matrix
($\chi_{PM}$) would be of the same order as (or even larger than) that of the antiferromagnetic clusters ($\chi_{AFM}$), making it difficult to
unambiguously determine the contribution of the antiferromagnetic clusters in
traditional in-field magnetic measurements. On the other hand, since TRM is a
zero-field measurement, the contribution of the paramagnetic magnetic
susceptibility is likely to be suppressed, allowing one to identify more
clearly the temperature regime at which the antiferromagnetic clusters begin
to nucleate. Interestingly, the ruthenate analogue of this triple perovskite
was reported to exhibit the opening of a charge gap at 210 K Kimber _et al._
(2012), though we do not observe any evidence of a similar phenomenon in its Iridium counterpart investigated here. The magnetic transition in this family
of oxides is typically associated with a symmetry breaking lattice distortion
which alters the exchange parameters, thereby neutralizing the frustration. In
the case of Ba3NaIr2O9, the interesting capacity of the system to accommodate
strain impedes a traditional structural transformation. Therefore, rather than
observing a first order transition from hexagonal symmetry to an orthorhombic
one, we observe a slowly evolving strained lattice gradually transforming to a
lower symmetry where the major phase still retains the original high
temperature symmetry. A strained lattice of this nature is probably closer to
the reports of the triple perovskite family when subjected to external
pressure Zhao _et al._ (2009); Senn _et al._ (2013). For instance, on application of pressure, Ba3NaRu2O9 transforms into a new phase,
3C1:2-Ba3NaRu2O9, where the charge gap completely disappears and Pauli
paramagnetism emerges, possibly as a consequence of strong electron-electron
correlations. The Ba3CaRu2O9 system has also been reported to exhibit excess
strain in the lattice, in the form of peak broadening as the temperature was
lowered. Therefore, the ground state in Ba3NaIr2O9 is clearly influenced by
the complex interplay of a mixed valent Ir, frustration, phase coexistence and
strain.
The temperature dependence of the electrical resistivity is shown in Fig.
8. The system is semiconducting in nature, with the magnitude of resistivity
changing by 4 orders of magnitude from its room temperature value. Attempts to
fit using the Arrhenius model and Efros-Shklovskii variable range hopping (ES-
VRH) model were unsuccessful. A better fit was obtained by using the Mott
variable-range hopping (VRH) model which is given by:
$\rho\propto\exp\left[\left(T_{0}/T\right)^{\nu}\right]$
where the best fits were obtained for $\nu$=1/3 indicating variable range
hopping in two dimensions Mott and Davis (2012). The magnetic Ir dimers are
connected via non-magnetic Na octahedra, generating a pseudo 2D structure.
Thus, the crystal structure of this triple perovskite can be expressed by
alternate stacking of two kinds of 2-D layers which consist of the NaO6
octahedra and the Ir2O9 polyhedra. This may account for the observed 2-D
resistivity behaviour. The resistivity of Ba3NaIr2O9, plotted as ln$\rho$ vs $T^{-1/3}$, is shown in the inset of Fig. 8. The localization of states due to
strong Coulomb interactions and slight structural disorder would be consistent
with variable range hopping behaviour. The resistivity of the ruthenate
analogues of the triple perovskites Ba3MRu2O9 (M=Fe,Co,Ni,Cu,In,Ln) all follow the same characteristics Rijssenbeek _et al._ (1998); Hinatsu and Doi
(2003).
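A sketch of this kind of fit (an added illustration; the resistivity data below are synthetic, with placeholder values of $\rho_{0}$ and $T_{0}$) is:

```python
import numpy as np

# Synthetic resistivity obeying 2D Mott VRH: rho = rho0 * exp((T0/T)^(1/3))
rho0, T0 = 1.0e-2, 5.0e4                     # Ohm cm, K (placeholders)
T = np.linspace(100.0, 300.0, 60)
rho = rho0 * np.exp((T0 / T) ** (1.0 / 3.0))

# ln(rho) is linear in T^(-1/3): slope = T0^(1/3), intercept = ln(rho0)
slope, intercept = np.polyfit(T ** (-1.0 / 3.0), np.log(rho), 1)
print(slope**3, np.exp(intercept))           # recovers T0 and rho0
```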
Figure 9: Normalised Isothermal Remanent Magnetization (IRM) at 2 K in cooling fields of 500 Oe and 1 T (inset), fitted using the Kohlrausch-Williams-Watts (KWW) stretched exponential (red) and a power law (blue). Figure 10:
Isothermal magnetization measured at 2 K for Ba3NaIr2O9 after cooling in
different magnetic fields. A systematic shift of the hysteresis loop as a
function of the magnitude of the cooling field can be clearly seen indicating
the presence of mixed magnetic interactions.
In addition to frustration, the presence of mixed valent states of Iridium and
phase co-existence sets the stage for inhomogeneous magnetic exchange
interactions. The observation of an anomaly below 6 K in the M(T) curves
indicates the emergence of glassy dynamics owing to this frustration. However,
the signal was too small for us to clearly identify a frequency dependent peak
in the ac susceptibility measurements. Another method to probe glassy dynamics
is to use the time evolution of Isothermal Remanent Magnetization (IRM). This
involves cooling the sample from 300 K to 2 K in the presence of a magnetic
field, after which the magnetic field is switched off and the decay of
magnetization is measured as a function of time. The time dynamics of the magnetization of glassy systems is commonly described by the Kohlrausch-Williams-Watts (KWW) stretched exponential equation Edwards and Vilgis (1986); Ito _et al._ (1986); Ghara _et al._ (2014), given by:
$m(t)=m_{0}-m_{g}\exp\{-(t/\tau)^{\beta}\}$
where m0 is related to initial remanent magnetization, mg is magnetization of
glassy component, $\tau$ and $\beta$ are the characteristic relaxation time
constant and stretching exponent respectively. Here m(t) is representative of
the sum of many exponential decays weighted by a distribution of individual
relaxation times, with the magnitude of $\beta$ indicating the breadth of that
distribution Sidebottom _et al._ (1995). The value of $\beta$ has been
reported to lie between 0 and 1 for a wide range of disordered systems. The
normalized magnetization m(t) = M(t)/M(t=0) as measured in Ba3NaIr2O9 at 2 K with cooling fields of 500 Oe (main panel) and 1 T (inset) is plotted in Fig. 9. As depicted by the blue curve, the fit to a simple power law was not satisfactory. However, a good fit was obtained for the KWW model and the resultant values of $\beta$ are 0.518(14) and 0.5464(68) for 500 Oe and 1 T respectively. These values are in line with the reported values for the cluster
glass phase in many double perovskites Pal _et al._ (2019); Anand _et al._
(2019), and reinforces our contention that the low temperature magnetic ground
state is one which has magnetically frozen clusters.
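A sketch of such a fit (an added illustration using synthetic data; SciPy is assumed to be available) is:

```python
import numpy as np
from scipy.optimize import curve_fit

def kww(t, m0, mg, tau, beta):
    # KWW stretched exponential: m(t) = m0 - mg * exp(-(t/tau)^beta);
    # with this sign convention a decaying m(t) corresponds to mg < 0
    return m0 - mg * np.exp(-(t / tau) ** beta)

# Synthetic normalized IRM decay; all parameter values are placeholders
t = np.linspace(1.0, 7200.0, 400)            # seconds
rng = np.random.default_rng(1)
m = kww(t, 0.6, -0.4, 900.0, 0.52) + rng.normal(scale=2e-3, size=t.size)

popt, _ = curve_fit(kww, t, m, p0=(0.5, -0.5, 500.0, 0.5),
                    bounds=([-1.0, -2.0, 1.0, 0.05], [2.0, 2.0, 1.0e5, 1.0]))
print(popt)                                  # recovers (m0, mg, tau, beta)
```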
Figure 11: Main panel: The low temperature specific heat CP/T3 as a function
of temperature. The slight upturn in the low temperature range is a strong
indication of disorder in the system. Inset: The red line depicts the fit to the low temperature specific heat data using $C_{p}=\gamma T+\beta T^{3}$.
The magnetic interactions in Ba3NaIr2O9 are predominantly antiferromagnetic,
though, signatures of the presence of mixed magnetic interactions are
suggested by a weak loop opening in magnetization isotherms at 2 K as shown in
Fig. 10. As revealed by our synchrotron studies, the low temperature structure
comprises a highly strained lattice with two unique structural motifs coexisting. This, coupled with the existence of finite size antiferromagnetic clusters, allows for exchange bias, with the antiferro- and ferromagnetic
contributions arising from the magnetic order within ordered clusters and
uncompensated spins at the surface of these clusters respectively (Fig. 10).
The presence of a low temperature glass-like magnetic ground state is also evidenced by a strong upturn in $C_{P}/T^{3}$ vs T (Fig. 11), with a clear deviation from what is expected from Debye's law. This excess entropy arises as a consequence of the glassy dynamics Banerjee _et al._ (2009), and appears to be a signature common to structural and magnetic glasses. It is also evident on plotting the $C/T$ vs $T^{2}$ curve, which indicates the presence of an excess entropy that is released as a consequence of short range ordering. The inset of Fig. 11 shows the fit to the low temperature $C/T$ vs $T^{2}$ curve. The data is fitted
using the expression $C_{p}=\gamma T+\beta T^{3}$ where $\gamma$ and $\beta$
are related to the electronic and vibrational degrees of freedom respectively.
We also calculated the Debye temperature $\theta_{D}$, which is derived from the expression $\theta_{D}=\left(12\pi^{4}pR/5\beta\right)^{1/3}$, where R is the ideal gas constant and p is the number of atoms per formula unit. The obtained values of $\gamma$ and $\beta$ are 77 mJ/mol K2 and 2.19 mJ/mol K4 respectively, and the calculated value of $\theta_{D}$ is 236.84 K. The high value of $\gamma$, unusual for insulating systems, can be attributed to the inherent disorder which affects the spin, charge and orbital degrees of freedom. This has been previously observed in the insulating triple perovskite iridate Ba3ZnIr2O9 (25.9 mJ/mol K2), and in manganites Hardy _et al._ (2003); Nag _et al._ (2016). The high value observed here signifies the excess entropy imparted by the frustration and disorder in this oxide owing to the mixed valence state and stress. Interestingly, on the application of a moderate magnetic field (0.5 T), no change in the heat capacity was observed (not shown here), which argues against the presence of paramagnetic impurity centres Yamashita _et al._ (2011); Schliesser and Woodfield (2015).
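As a quick consistency check of this arithmetic (an added sketch; Ba3NaIr2O9 has p = 15 atoms per formula unit):

```python
import numpy as np

R = 8.314          # J / (mol K), ideal gas constant
p = 15             # atoms per formula unit of Ba3NaIr2O9 (3 + 1 + 2 + 9)
beta = 2.19e-3     # J / (mol K^4), from the C/T vs T^2 fit

theta_D = (12 * np.pi**4 * p * R / (5 * beta)) ** (1.0 / 3.0)
print(theta_D)     # ~ 236.9 K, consistent with the quoted 236.84 K
```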
## IV Summary
In summary, we report on the structure-property relationship in the mixed
valent geometrically frustrated triple perovskite iridate Ba3NaIr2O9. In
contrast to what is expected from purely structural considerations, this
system stabilizes in a high symmetry hexagonal structure at room temperature.
On reducing the temperature, the lattice prefers to be strained rather than
distort to a low symmetry phase, as is the norm in this family of materials.
Though a low symmetry orthorhombic phase is finally nucleated below 50 K, this
conversion is only partial and the high symmetry hexagonal structure remains
the dominant one down to the lowest measured temperatures. Magnetic
measurements indicate an extended co-operative paramagnetic regime, which
finally freezes to a cluster glass-like phase at very low temperatures, as is
also evidenced from magnetization decay and specific heat data. This makes an
interesting addition to the family of triple perovskite iridates which exhibit
material sensitive physical properties.
## V Acknowledgements
S.N. acknowledges DST India for support through the DST Nanomission Thematic
Unit Program, SR/NM/TP-13/2016. C.G. and S.N. thank the Department of Science
and Technology, India (SR/NM/Z-07/2015) for financial support and the
Jawaharlal Nehru Centre for Advanced Scientific Research (JNCASR) for managing
the project.
## References
* Witczak-Krempa _et al._ (2014) W. Witczak-Krempa, G. Chen, Y. B. Kim, and L. Balents, Annual Review of Condensed Matter Physics 5, 57 (2014).
* Ramirez (1994) A. P. Ramirez, Annual Review of Materials Science 24, 453 (1994).
* Moessner and Ramirez (2006) R. Moessner and A. P. Ramirez, Physics Today 59, 24 (2006).
* Johnsson and Lemmens (2007) M. Johnsson and P. Lemmens, “Crystallography and chemistry of perovskites,” in _Handbook of Magnetism and Advanced Magnetic Materials_ (2007).
* Zhao _et al._ (2009) J. Zhao, L. Yang, Y. Yu, F. Li, R. Yu, and C. Jin, Journal of Solid State Chemistry 182, 327 (2009).
* Nag _et al._ (2018) A. Nag, S. Bhowal, F. Bert, A. D. Hillier, M. Itoh, I. Carlomagno, C. Meneghini, T. Sarkar, R. Mathieu, I. Dasgupta, and S. Ray, Phys. Rev. B 97, 064408 (2018).
* Vasala and Karppinen (2015) S. Vasala and M. Karppinen, Progress in Solid State Chemistry 43, 1 (2015).
* Garg _et al._ (2020) C. Garg, D. Roy, M. Lonsky, P. Manuel, A. Cervellino, J. Müller, M. Kabir, and S. Nair, (2020), arXiv:2009.13822 [cond-mat.str-el].
* Miiller _et al._ (2012) W. Miiller, M. Avdeev, Q. Zhou, B. J. Kennedy, N. Sharma, R. Kutteh, G. J. Kearley, S. Schmid, K. S. Knight, P. E. R. Blanchard, and C. D. Ling, J. Am. Chem. Soc. 134, 3265 (2012).
* Rodriguez-Carvajal (2001) J. Rodriguez-Carvajal, (Laboratoire Léon Brillouin, CEA-CNRS, Saclay, France, 2001).
* Rietveld (1969) H. M. Rietveld, Journal of Applied Crystallography 2, 65 (1969).
* Momma and Izumi (2011) K. Momma and F. Izumi, J. Appl. Cryst. 44, 1272 (2011).
* Lightfoot and Battle (1990) P. Lightfoot and P. Battle, Journal of Solid State Chemistry 89, 174 (1990).
* Rijssenbeek _et al._ (1999) J. Rijssenbeek, Q. Huang, R. Erwin, H. Zandbergen, and R. Cava, Journal of Solid State Chemistry 146, 65 (1999).
* Doi _et al._ (2001) Y. Doi, Y. Hinatsu, Y. Shimojo, and Y. Ishii, Journal of Solid State Chemistry 161, 113 (2001).
* Lufaso and zur Loye (2005) M. W. Lufaso and H.-C. zur Loye, Inorganic Chemistry 44, 9143 (2005), PMID: 16323894.
* Stephens (1999) P. W. Stephens, Journal of Applied Crystallography 32, 281 (1999).
* Kimber _et al._ (2012) S. A. J. Kimber, M. S. Senn, S. Fratini, H. Wu, A. H. Hill, P. Manuel, J. P. Attfield, D. N. Argyriou, and P. F. Henry, Phys. Rev. Lett. 108, 217205 (2012).
* Stitzer _et al._ (2002) K. E. Stitzer, M. D. Smith, W. R. Gemmill, and H.-C. zur Loye, Journal of the American Chemical Society 124, 13877 (2002), PMID: 12431119.
* zur Loye _et al._ (2009) H.-C. zur Loye, S.-J. Kim, R. Macquart, M. D. Smith, Y. Lee, and T. Vogt, Solid State Sciences 11, 608 (2009).
* Kim _et al._ (2004) S.-J. Kim, M. D. Smith, J. Darriet, and H.-C. zur Loye, J. Solid State Chem. 177, 1493 (2004).
* Boseggia _et al._ (2013) S. Boseggia, H. C. Walker, J. Vale, R. Springell, Z. Feng, R. S. Perry, M. M. Sala, H. M. Rønnow, S. P. Collins, and D. F. McMorrow, Journal of Physics: Condensed Matter 25, 422202 (2013).
* Ming _et al._ (2018) X. Ming, X. Wan, C. Autieri, J. Wen, and X. Zheng, Phys. Rev. B 98, 245123 (2018).
* Rau _et al._ (2016) J. G. Rau, E. K.-H. Lee, and H.-Y. Kee, Annual Review of Condensed Matter Physics 7, 195 (2016).
* Villain (1979) J. Villain, Zeitschrift für Physik B Condensed Matter 33, 31 (1979).
* Lee _et al._ (2010) S.-H. Lee, H. Takagi, D. Louca, M. Matsuda, S. Ji, H. Ueda, Y. Ueda, T. Katsufuji, J.-H. Chung, S. Park, S.-W. Cheong, and C. Broholm, Journal of the Physical Society of Japan 79, 011004 (2010).
* Ueland _et al._ (2010) B. G. Ueland, J. S. Gardner, A. J. Williams, M. L. Dahlberg, J. G. Kim, Y. Qiu, J. R. D. Copley, P. Schiffer, and R. J. Cava, Phys. Rev. B 81, 060408 (2010).
* van Duijn _et al._ (2008) J. van Duijn, N. Hur, J. W. Taylor, Y. Qiu, Q. Z. Huang, S.-W. Cheong, C. Broholm, and T. G. Perring, Phys. Rev. B 77, 020405 (2008).
* Yamamoto _et al._ (2020) R. Yamamoto, T. Furukawa, K. Miyagawa, T. Sasaki, K. Kanoda, and T. Itou, Phys. Rev. Lett. 124, 046404 (2020).
* Imry and Ma (1975) Y. Imry and S.-k. Ma, Phys. Rev. Lett. 35, 1399 (1975).
* Pal _et al._ (2019) A. Pal, P. Singh, V. K. Gangwar, S. Ghosh, P. Prakash, S. K. Saha, A. Das, M. Kumar, A. K. Ghosh, and S. Chatterjee, Applied Physics Letters 114, 252403 (2019).
* Senn _et al._ (2013) M. S. Senn, A. M. Arevalo-Lopez, T. Saito, Y. Shimakawa, and J. P. Attfield, Journal of Physics: Condensed Matter 25, 496008 (2013).
* Mott and Davis (2012) N. F. Mott and E. A. Davis, _Electronic processes in non-crystalline materials_ (Oxford University Press, 2012).
* Rijssenbeek _et al._ (1998) J. T. Rijssenbeek, P. Matl, B. Batlogg, N. P. Ong, and R. J. Cava, Phys. Rev. B 58, 10315 (1998).
* Hinatsu and Doi (2003) Y. Hinatsu and Y. Doi, Bulletin of the Chemical Society of Japan 76, 1093 (2003).
* Edwards and Vilgis (1986) S. F. Edwards and T. Vilgis, Physica Scripta T13, 7 (1986).
* Ito _et al._ (1986) A. Ito, H. Aruga, E. Torikai, M. Kikuchi, Y. Syono, and H. Takei, Phys. Rev. Lett. 57, 483 (1986).
* Ghara _et al._ (2014) S. Ghara, B.-G. Jeon, K. Yoo, K. H. Kim, and A. Sundaresan, Phys. Rev. B 90, 024413 (2014).
* Sidebottom _et al._ (1995) D. Sidebottom, P. Green, and R. Brow, Journal of Non-Crystalline Solids 183, 151 (1995).
* Anand _et al._ (2019) K. Anand, A. Pal, P. Singh, M. Alam, A. G. Joshi, A. Mohan, and S. Chatterjee, (2019), arXiv:1910.13734 [cond-mat.mtrl-sci] .
* Banerjee _et al._ (2009) A. Banerjee, R. Rawat, K. Mukherjee, and P. Chaddah, Phys. Rev. B 79, 212403 (2009).
* Hardy _et al._ (2003) V. Hardy, A. Maignan, S. Hébert, and C. Martin, Phys. Rev. B 67, 024401 (2003).
* Nag _et al._ (2016) A. Nag, S. Middey, S. Bhowal, S. K. Panda, R. Mathieu, J. C. Orain, F. Bert, P. Mendels, P. G. Freeman, M. Mansson, H. M. Ronnow, M. Telling, P. K. Biswas, D. Sheptyakov, S. D. Kaushik, V. Siruguri, C. Meneghini, D. D. Sarma, I. Dasgupta, and S. Ray, Phys. Rev. Lett. 116, 097205 (2016).
* Yamashita _et al._ (2011) S. Yamashita, T. Yamamoto, Y. Nakazawa, M. Tamura, and R. Kato, Nature communications 2, 275 (2011).
* Schliesser and Woodfield (2015) J. M. Schliesser and B. F. Woodfield, Phys. Rev. B 91, 024109 (2015).
Some remarks on the discovery of 244Md
Fritz Peter Heßberger1,2
1GSI - Helmholtzzentrum für Schwerionenforschung GmbH, Planckstraße 1, 64291
Darmstadt, Germany
2Helmholtz Institut Mainz, Johann-Joachim-Becherweg, 55128 Mainz, Germany
Michael Block1,2,3
1GSI - Helmholtzzentrum für Schwerionenforschung GmbH, Planckstraße 1, 64291
Darmstadt, Germany
2Johannes Gutenberg-Universität Mainz, 55099 Mainz, Germany
3Helmholtz Institut Mainz, Johann-Joachim-Becherweg, 55128 Mainz, Germany
Christoph Düllmann1,2,3
1Johannes Gutenberg-Universität Mainz, 55099 Mainz, Germany
2GSI - Helmholtzzentrum für Schwerionenforschung GmbH, Planckstraße 1, 64291
Darmstadt, Germany
3Helmholtz Institut Mainz, Johann-Joachim-Becherweg, 55128 Mainz, Germany
Alexander Yakushev1
1GSI - Helmholtzzentrum für Schwerionenforschung GmbH, Planckstraße 1, 64291
Darmstadt, Germany
Matti Leino1
1University Jyväskylä, 40014 Jyväskylä, Finland
Juha Uusitalo1
1University Jyväskylä, 40014 Jyväskylä, Finland
Version: December 18, 2020
###### Abstract
In two recent papers by Pore et al. [1] and Khuyagbaatar et al. [2] the
discovery of the new isotope 244Md was reported. The decay data, however, are conflicting.
While Pore et al. [1] report two isomeric states decaying by $\alpha$ emission
with Eα(1) = 8.66(2) MeV, T1/2(1) = 0.4${}^{+0.4}_{-0.1}$s and Eα(2) = 8.31(2)
MeV, T1/2(2)$\approx$6 s, Khuyagbaatar et al. [2] report only a single
transition with a broad energy distribution of Eα = (8.73 - 8.86) MeV and T1/2
= 0.30${}^{+0.19}_{-0.09}$ s. The data published in [1] are very similar to
those published for 245mMd (Eα = 8.64(2), 8.68(2) MeV, T1/2 =
0.35${}^{+0.23}_{-0.16}$ s [3]). Therefore, we compare the data presented for
244Md in [1] with those reported for 245Md in [3] and also in [2]. We conclude
that the data presented in [1] should be attributed to 245Md, with small
contributions (one event each) from 245Fm and probably 246Md.
## 1 Introduction
The discovery of 244Md was first reported by J.L. Pore et al. [1]. They used the
reaction 209Bi(40Ar,5n)244Md at a bombarding energy of $\approx$220 MeV, which
corresponds to an excitation energy of the compound nucleus 249Md of E∗
$\approx$ 46 MeV for production in the center of the target. They observed
four events after the mass spectrometer FIONA at a position where events with
mass number A = 244 were expected, and six $\alpha$ decay chains in the BGS
focal plane detector. The latter were attributed to the decay of two states in
244Md, one with Eα = 8.308$\pm$0.019 MeV, T1/2 $\approx$6 s (1 event), and one
with Eα = 8.663$\pm$0.023 MeV, T1/2 = 0.4${}^{+0.4}_{-0.1}$ s (4 events).
In a publication by J. Khuyagbaatar et al., identification of 244Md was
reported using the reaction 197Au(50Ti, 3n)244Md [2]. The experiment was
performed at two bombarding energies of 239.8 MeV and 231.5 MeV (center of
target), corresponding to excitation energies of E∗ = 32.7 MeV and E∗ = 26.2
MeV. They reported two $\alpha$ activities. One, with an energy range Eα =
(8.7 - 8.8) MeV and a half-life of T1/2 = 0.30${}^{+0.19}_{-0.09}$ s was
observed only at the higher excitation energy (7 events); the second activity
was observed at both energies (three events each with full energy release in
the stop detector) within an energy range of Eα = (8.6 - 8.7) MeV and a half-
life of T1/2 = 0.33${}^{+0.15}_{-0.08}$ s. This activity was attributed to the
previously reported isotope 245Md.
The isotope 245Md was first observed in an experiment performed at the
velocity filter SHIP at GSI, Darmstadt, Germany, using the reaction
209Bi(40Ar,4n)245Md at a bombarding energy of 5.12 AMeV (204.8 MeV)
corresponding to an excitation energy of E∗ = 40 MeV [3]. The authors reported
two $\alpha$ energies of Eα = 8640$\pm$20, 8680 $\pm$20 keV, and a half-life
of T1/2 = 0.35${}^{+0.23}_{-0.18}$ s and also a spontaneous fission activity
of T1/2 = 0.90${}^{+0.23}_{-0.16}$ ms. This fission activity with T1/2 =
0.9${}^{+0.6}_{-0.3}$ ms was also observed by Khuyagbaatar et al. [2]. The
fission activity was attributed to the ground state decay of 245Md, and the
$\alpha$ activity to an isomeric state 245mMd [3]. Previously known data on
245Md were not mentioned in [1]. For completeness it should be noted that, on
the basis of detailed spectroscopic investigations of odd-mass mendelevium
isotopes performed since then [5], the $\alpha$ activity would nowadays rather
be attributed to 245gMd and the fission activity to 245mMd. It was further
shown in [5] that $\alpha$ decay in odd mass mendelevium isotopes populates
predominantly the 7/2-[514] Nilsson level in the einsteinium daughter nuclei;
this level decays into the 7/2+[633] Nilsson level and the 9/2+ member of the
rotational band built up on it. As the 9/2+ level decays by highly converted
M1 transitions into the 7/2+ bandhead, the line at Eα = 8680 $\pm$20 keV
reported in [3] is thus certainly the result of energy summing of $\alpha$
particles and conversion electrons.
## 2 Comparison of the results for 245Md reported by Ninov et al. [3] and
Khuyagbaatar et al. [2] and for 244Md reported by Pore et al. [1].
The data published for 245Md in [3, 2] and 244Md in [1] are presented in fig.
1 and table 1. Data of Pore et al. (P1 - P6) are taken from table 1 in [1].
Data of Khuyagbaatar et al. (K1 - K10) are taken from the supplemental
material of [2]. No list of single events was presented by Ninov et al. [3].
Data shown here (N1 - N8) are taken from a re-inspection of the logbook of the
corresponding SHIP experiment [4]. Only $\alpha$-$\alpha$ correlations with
full energy release of both $\alpha$ particles in the SHIP 'stop detector'
are listed.
Figure 1: Summary of decays attributed to 245Md in [3](squares) as well as in
[2](triangles) together with data reported by Pore et al. [1] (circles: events
attributed to 245Md by the present authors, diamonds: events attributed to
245Fm or (tentatively) to 246Md). The dashed lines are to guide the eyes: the
red lines represent the $\alpha$ energies given for 245Md (8640, 8680 keV) in
[3] and the energy given for 244Md (8663 keV) in [1]; the blue lines represent
the $\alpha$ energy for 241Es (8113 keV) given in [3] and the highest daughter
energy (P5) in [1]; the purple line represents the literature value of the
$\alpha$ energy of 241Cf (7335 keV) [6]. The orange hatched area marks the
range of $\alpha$ energies where the events attributed to 244Md in [2] were
observed.
Table 1: Summary of decays attributed to 245Md in [3, 2] and decays
reported by Pore et al. [1]. Data from Pore et al. are taken from table 1 in
[1]; data from Khuyagbaatar et al. are from the supplemental material [2]. No
individual decay data are reported in [3]; these data are taken from the
experiment analysis logbook [4].
Ref. | evt. no. | Eα(1)/MeV | $\Delta$t(ER-$\alpha$1)/s | Eα(2)/MeV | $\Delta$t($\alpha$1-$\alpha$2)/s
---|---|---|---|---|---
[3] | N1 | 8.652 | 0.0178 | 8.004 | 8.254
[3] | N2∗ | 8.629 | 0.1751 | 7.450 | 88.083
[3] | N3 | 8.692 | 0.00164 | 8.084 | 28.406
[3] | N4 | 8.633 | 0.1565 | 7.360 | 203.876
[3] | N5 | 8.639 | 1.1708 | 8.111 | 7.639
[3] | N6 | 8.663 | 0.0843 | 8.108 | 15.763
[3] | N7 | 8.635 | 0.2831 | 8.119 | 13.573
[3] | N8 | 8.613 | 0.0914 | 7.894 | 335.005
[2] | K1∗∗ | 8.63 | 0.564 | 8.14 | 4.73
[2] | K2∗∗ | 8.67 | 0.454 | (1.1) | 0.24
[2] | K3∗∗ | 8.61 | 0.423 | (1.3) | 2.86
[2] | K4∗∗ | (1.9) | 0.120 | 8.12 | 6.87
[2] | K5∗∗ | (2.2) | 0.508 | 8.12 | 11.5
[2] | K6∗∗ | (0.9) | 0.131 | 8.09 | 15.1
[2] | K7∗∗ | (0.4) | 1.42 | 8.19 | 2.97
[2] | K8∗∗∗ | 8.65 | 0.693 | (0.26) | 5
[2] | K9∗∗∗ | 8.63 | 0.346 | 7.45 | 20
[2] | K10∗∗∗ | 8.69 | 0.129 | miss. | miss.
[1] | P1 | 8.178 | 0.60 | 7.305 | 27.34
[1] | P2 | 8.308 | 9.18 | 7.996 | 14.37
[1] | P3 | 8.635 | 0.88 | 7.330 | 18.95
[1] | P4∗∗∗∗ | 8.653 | 0.13 | 8.128 | 1.20
[1] | P5 | 8.682 | 0.31 | 8.203 | 10.00
[1] | P6 | 8.684 | 1.16 | 8.124 | 7.65
∗ Both events were registered within the beam-on period
∗∗ observed at E∗ = 26.2 MeV
∗∗∗ observed at E∗ = 32.7 MeV
∗∗∗∗ the $\alpha$-$\alpha$ correlation was followed by a third event of Eα
= 7.086$\pm$0.025 MeV after $\Delta$t = 75.97 s.
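As a quick consistency check on these half-lives, the maximum-likelihood estimate of T1/2 from a handful of correlation times is ln 2 times their mean. A minimal sketch (ignoring detection-efficiency and correlation-window corrections) applied to the $\Delta$t(ER-$\alpha$1) values of the chains attributed to 245Md below (P3 - P6):

```python
import math

# Delta-t(ER - alpha1) in seconds for chains P3-P6 of Table 1,
# i.e. the events attributed to 245Md in the present comparison.
dt = [0.88, 0.13, 0.31, 1.16]

tau = sum(dt) / len(dt)          # ML estimator of the mean lifetime
t_half = math.log(2) * tau
print(f"T1/2 ~ {t_half:.2f} s")  # ~0.43 s, consistent with the
                                 # 0.35 s of [3] and the 0.4 s of [1]
```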
Figure 2: Excitation function for 40Ar + 209Bi. The energies refer to
production in the center of the target. The error bars for the energies refer
to the energy loss of the 40Ar ions in the bismuth targets [12]. Systematic errors
in the accelerator energy are typically 0.2$\%$ for the UNILAC accelerator and
are neglected. For the data of Pore et al. [1] an energy loss of $\approx$12.5
MeV in the titanium backing foil [12] is considered. No systematic error for
the accelerator energy is given by Pore et al. Lines are the result of HIVAP
[9] calculations; full lines represent xn-channels, dashed lines represent
pxn-channels. Points are defined in the figure. The arrow marks the energy
reported in [1] for the observation of 247Md.
Evidently the chains P3, P4 and P6 agree with the data reported for 245Md in
[3, 2]. The energy of the daughter in P5 is higher than the values reported
for 241Es in [3], but is in agreement with the daughter energy in K7. This
event was attributed to 245Md in [2] as it was registered at E∗ = 26.2 MeV,
where only decays of 245Md were observed. Concerning the daughter energies P4,
P5 and P6 can be attributed to the decay 245Md ${}^{\alpha}_{\rightarrow}$
241Es ${}^{\alpha}_{\rightarrow}$, while P3 obviously represents the decay
245Md ${}^{\alpha}_{\rightarrow}$ 241Es ${}^{EC}_{\rightarrow}$ 241Cf
${}^{\alpha}_{\rightarrow}$, in accordance with N4 and the known $\alpha$
decay energy of 241Cf (7.340 MeV [6]). P1 fits to the decay sequence 245Fm
${}^{\alpha(8.15MeV)}_{~{}~{}~{}~{}\rightarrow}$ 241Cf
${}^{\alpha(8.34MeV)}_{~{}~{}~{}~{}\rightarrow}$ [6], with 245Fm being the
product of the p3n - channel. The cross-section ratio
$\sigma$(p3n)/$\sigma$(4n) $\approx$0.25 may appear unusually high, but it has
to be considered that one approaches the proton drip-line, and proton binding
energies are already low. The mass evaluation of Wang et al. [8] delivers
values of, e.g., 1540$\pm$210 keV for 247Md and 1360$\pm$320 keV for 246Md,
significantly lower than the neutron binding energies of 8250$\pm$330 keV
(247Md) and 7230$\pm$400 keV (246Md). And indeed HIVAP calculations [9]
deliver even a ratio $\sigma$(p3n)/$\sigma$(4n) $\approx$0.5 (see fig. 2). It
should be recalled that notable cross sections for p-evaporation channels have
recently been reported for the reaction 50Ti + 209Bi [10, 11].
Less clear is chain P2. The decay sequence 246Md ${}^{\alpha}_{\rightarrow}$
242Es ${}^{\alpha}_{\rightarrow}$, for which very broad energy distributions
in the range Eα $\approx$ (8.15-8.75) MeV (246Md) and Eα $\approx$ (7.75-8.05)
MeV (242Es) were observed (see fig. 5 in [7]), is a possible candidate.
P4 is terminated by an $\alpha$ event of Eα = 7.086$\pm$0.025 MeV, which could be
attributed to 237Bk, the so far unknown $\alpha$ daughter of 241Es. From
atomic mass extrapolation [8] one expects an $\alpha$ decay energy of E =
7.376$\pm$0.242 MeV. The lower value could be due to the population of an
excited state in 233Am.
## 3 Excitation functions
The reported cross sections for production of 244-247Md in the reaction
209Bi(40Ar,xn)249-xMd [7, 1, 4] are shown in fig. 2. In [3] no cross sections
are given. The values given for this experiment are taken from [4]. The lines
are the result of HIVAP [9] calculations, using fission barriers modified to
reproduce the 2n (247Md) and 3n (246Md) cross sections. Evidently the 4n
cross section from [4] is reproduced quite well. The excitation energy given
by Pore et al. [1] lies roughly 4 MeV above the expected maximum for the 4n
cross section; the reported cross section is about a factor of six higher than
the calculated 4n value, but more than two orders of magnitude higher than the
value expected for the 5n channel.
A similar situation is evident for the 2n channel. Pore et al. [1] report
the observation of 247Md at a bombarding energy of 200 MeV, which corresponds
to an excitation energy E∗$\approx$30 MeV (arrow in fig. 2). This is about 6
MeV above the expected maximum for the 2n channel, but a notable production
cross section of $\approx$2 nbarn is still expected here.
To conclude: comparison with reported cross sections for xn channels and with
HIVAP calculations indicates that the events attributed to 244Md in [1] may
rather stem from the decay of 245Md.
## 4 Conclusion
The decay data for 244Md presented by Pore et al. [1] are in disagreement with
those published by Khuyagbaatar et al. [2]. A critical inspection of the decay
data of Pore et al. [1] for 244Md and a comparison with reported decay data
for 245Md rather suggest that they have observed 245Md. An additional argument
supporting that interpretation comes from the excitation function for the
production of mendelevium isotopes in the reaction 40Ar + 209Bi. The
excitation energy given for the observation of 244Md is about 10 MeV lower
than the expected maximum for the 5n channel. Bombarding energy and reported
production cross section rather hint at the synthesis of 245Md.
## REFERENCES
* [1] J.L. Pore et al., Phys. Rev. Lett. 124, 252502 (2020).
* [2] J. Khuyagbaatar et al., Phys. Rev. Lett. 125, 142504 (2020).
* [3] V. Ninov et al., Z. Phys. A 356, 11 (1996).
* [4] F.P. Heßberger, Analysis Logbook SHIP experiment R165 (1993).
* [5] F.P. Heßberger et al., Eur. Phys. J. A 26, 233 (2005).
* [6] R.B. Firestone et al., Table of Isotopes, 8th Edition, John Wiley & Sons, New York (1996).
* [7] S. Antalic et al., Eur. Phys. J. A 43, 35 (2010).
* [8] M. Wang et al., Chinese Phys. C 41, 030003 (2017).
* [9] W. Reisdorf, M. Schädel, Z. Phys. A 343, 47 (1992).
* [10] A. Lopez-Martens et al., Phys. Lett. B 795, 271 (2019).
* [11] F.P. Heßberger, Eur. Phys. J. A 55, 208 (2019).
* [12] J.F. Ziegler, J.P. Biersack, M.D. Ziegler, SRIM-2013.00, http://srim.org/ (2013)
# Ergodicity and totality of partitions associated with the RSK correspondence
A. M. Vershik (St. Petersburg Department of Steklov Institute of Mathematics;
St. Petersburg State University; Institute for Information Transmission
Problems) and N. V. Tsilevich (St. Petersburg Department of Steklov Institute
of Mathematics)
###### Abstract
We study asymptotic properties of sequences of partitions ($\sigma$-algebras)
in spaces with Bernoulli measures associated with the Robinson–Schensted–Knuth
correspondence.
Key words: RSK correspondence, youngization, ergodicity of a sequence of
partitions, totality of a sequence of partitions.
## 1 Introduction
We study dynamic properties of the Robinson–Schensted–Knuth correspondence
(RSK) when it is successively applied to growing sequences of symbols. In
particular, we are interested in asymptotic properties of this correspondence.
Apparently, the “dynamic” approach to the RSK correspondence first appeared in
[3], where it was discovered that applying the RSK algorithm to an infinite
sequence of independent symbols from a linearly ordered set defines a
correspondence between Bernoulli measures (i.e., ensembles of independent
sequences of symbols) and central (Markov) measures on the path space of a
certain graph (the Young graph). In other words, the RSK correspondence
defines measure-preserving homomorphisms from Bernoulli spaces to Markov path
spaces. Thus, the question inevitably arises of whether or not this
homomorphism is an isomorphism $\bmod\,0$. The affirmative answer to this
question was obtained in the relatively recent important papers [4, 5]; their
approach relies on Schützenberger's jeu de taquin and is technically rather
involved.
The first author [10] suggested a program for solving these problems for a
certain class of graphs, including an elaboration of the ergodic method [7]
for finding invariant measures and the so-called bernoullization of graphs.
This approach uses techniques of the theory of filtrations (decreasing
sequences of $\sigma$-algebras) [8]; in particular, it has led to a new
problem of characterization of de Finetti-like filtrations, i.e., filtrations
for which every ergodic central measure is a Bernoulli measure. Note in this
regard that this paper (as well as [5]) deals with the properties and
structure of some ergodic central measures, those originating from Bernoulli
measures; the fact that they exhaust all central measures with finitely many
frequencies for the Young graph does by no means follow from these
considerations. However, the methods suggested in [10] allowed the authors to
prove, in a paper in preparation, that every central measure of this type is
associated with a Bernoulli measure in the above sense. This is how a long-
awaited, purely combinatorial proof of Thoma's theorem (see [9]) should appear.
In this paper, we study only a part of the general problem in the simplest
case of a linearly ordered set with finitely many symbols, and prove two facts
in a sense dual to each other: the totality of the coplactic (= dual Knuth)
equivalence and the ergodicity of the so-called Young filtration, i.e., the
tail filtration determined by the $Q$-tableaux in the RSK correspondence. Our
arguments, on the one hand, give another proof of the corresponding results
from [5] and, on the other hand, can be applied in a much more general
situation. We consider separately the case of two letters, because it is
illustrative and serves as the base case for an induction in the general case.
The reader is assumed to be familiar with the RSK correspondence and its basic
properties (see, e.g., [1]); for background on the representation theory of
the infinite symmetric group (the Young graph, central measures, Thoma
parameters, etc.), see, e.g., [2]; for that on the theory of measurable
partitions and filtrations, see, e.g., [8].
## 2 Youngization
Let ${\cal A}=\\{1,2,\ldots,k\\}$ be a finite alphabet. Consider the space
$X={\cal A}^{\infty}$ of infinite words in the alphabet ${\cal A}$ with
Bernoulli measure $m_{p}^{\infty}$, where $p=(p_{1},p_{2},\ldots,p_{k})$,
$p_{i}=\operatorname{Prob}(i)$, and $p_{1}\geq p_{2}\geq\ldots\geq p_{k}>0$.
Let ${\cal T}$ be the set of infinite standard Young tableaux (or, which is
the same, the set of infinite paths in the Young graph ${\mathbb{Y}}$) and
$\mu_{p}$ be the central measure on ${\cal T}$ with Thoma parameters
$(p,0,0)$. Note that the measure $\mu_{p}$ is supported by the subset of
tableaux with at most $k$ rows.
By $\operatorname{RSK}(w)=(P(w),Q(w))$ we denote the result of applying the
RSK algorithm to a finite sequence (word) $w$ in the alphabet ${\cal A}$.
Thus, $P(w),Q(w)$ is a pair of Young tableaux of the same shape (with at most
$k$ rows), which will be denoted by $\operatorname{sh}(w)$; the tableau $P(w)$
is semistandard, while the tableau $Q(w)$ is standard. Given an infinite
sequence $x\in X$, denote by $[x]_{n}=(x_{1},\ldots,x_{n})\in{\cal A}^{n}$ its
initial segment of length $n$, and let
$\\{x\\}_{n+1}:=(x_{n+1},x_{n+2},\ldots)\in{\cal A}^{\infty}$ be its
$(n+1)$-tail. Also, denote by $P_{n}(x)$ and $Q_{n}(x)$ the tableaux obtained
by applying the RSK algorithm to the initial segment of length $n$ of a
sequence $x$: $\operatorname{RSK}([x]_{n})=(P_{n}(x),Q_{n}(x))$.
Following [3], we introduce a map from the space of infinite sequences to the
space of infinite Young tableaux.
###### Definition 1.
Successively apply the RSK algorithm to the initial segments $[x]_{n}$ of a
sequence $x\in X$. It is clear from the construction of the algorithm that
$\lim\limits_{n\to\infty}Q_{n}(x)=:Q(x)$ is an infinite standard Young
tableau; denote it by $\pi(x)$. The resulting map
$\pi:(X,m_{p}^{\infty})\to({\cal T},\mu_{p})$ (1)
is called the _youngization_.
In [3] it is proved that the youngization (1) is a homomorphism of measure
spaces.
## 3 The sequences of Young partitions on a Bernoulli space. The main
theorems
The following measurable partitions are defined in a natural way on the space
$(X,m_{p}^{\infty})$ of infinite Bernoulli sequences:
* •
the cylinder partition $\sigma_{n}$ of level $n$, whose element is a set (of
finite measure) of sequences with a fixed initial segment of length $n$ and
arbitrary tail;
* •
the tail partition $\tau_{n}$ of level $n$, whose element is a (finite) set of
sequences with a fixed $(n+1)$-tail and arbitrary beginning.
We will call them the Bernoulli cylinder and tail partitions. They have the
following properties:
* •
the sequence of partitions $\sigma_{n}$ is monotonically increasing and converges
in the weak topology to the partition $\varepsilon$ into separate points;
* •
the sequence of partitions $\tau_{n}$ is monotonically decreasing and converges
in the weak topology to the trivial partition $\nu$;
* •
for every $n$, the partitions $\tau_{n}$ and $\sigma_{n}$ are independent with
respect to the Bernoulli measure $m_{p}^{\infty}$.
Now consider the following measurable partitions on the space ${\cal T}$ of
infinite Young tableaux:
* •
the cylinder partition $\xi_{n}$ of level $n$, whose element is a set (of
finite measure) of infinite paths in the Young graph (i.e., infinite Young
tableaux) with a fixed initial segment of length $n$ and arbitrary tail.
* •
the tail partition $\eta_{n}$ of level $n$, whose element is a (finite) set
of infinite paths in the Young graph with a fixed $n$-tail and arbitrary
beginning.
###### Definition 2.
The Young cylinder partition and Young tail partition of the space $X$ of
infinite sequences are the partitions $\bar{\xi}_{n}:=\pi^{-1}\xi_{n}$ and
${\bar{\eta}_{n}:=\pi^{-1}\eta_{n}}$, respectively, i.e., the preimages of the
cylinder and tail partitions on Young tableaux under the youngization $\pi$.
The decreasing sequence of partitions $\\{\bar{\eta}_{n}\\}$ will also be
called the Young filtration.
Thus, $x\sim_{\bar{\xi}_{n}}y$ $\iff$ $Q([x]_{n})=Q([y]_{n})$, and
$x\sim_{\bar{\eta}_{n}}y$ $\iff$
$\operatorname{sh}([x]_{N})=\operatorname{sh}([y]_{N})$ for $N\geq n$.
Obviously, $\bar{\xi}_{n}\prec\sigma_{n}$.
To begin with, we describe the structure of Young partitions. Recall (see,
e.g., [1]) that the Knuth equivalence (or plactic) class ${\cal P}_{t}$ and
the dual Knuth equivalence (or coplactic) class ${\cal C}_{t}$ corresponding
to a given Young tableau $t$ of size $n$ is the set of all words $u$ of length
$n$ such that $P(u)=t$ and $Q(u)=t$, respectively.
###### Theorem 1.
The Young partitions on the space $X$ can be described as follows.
* •
The elements of $\bar{\xi}_{n}$ are indexed by the standard Young tableaux $t$
of size $n$ and coincide with the coplactic classes ${\cal C}_{t}$.
* •
The elements of $\bar{\eta}_{n}$ are indexed by the pairs $(t,y)$ where $t$ is
a semistandard Young tableau of size $n$ and $y$ is an infinite word in the
alphabet ${\cal A}$ and have the form
$\\{x\in X:[x]_{n}\in{\cal P}_{t},\,\\{x\\}_{n+1}=y\\};$
in other words, this is the set of all sequences whose initial segment of
length $n$ belongs to a given plactic class and the tail coincides with a
given infinite sequence.
The first assertion of this theorem is obvious, and the second one will be
proved in the next section (see Lemma 3).
Recall that a decreasing sequence of partitions (filtration) in a measure
space is said to be ergodic if it converges in the weak topology to the
trivial partition $\nu$ (into a single nonempty set). In turn, an increasing
sequence of partitions in a measure space is said to be total if it converges
in the weak topology to the partition $\varepsilon$ into separate points.
Thus, the ergodicity of a sequence of partitions means that there is no
nonconstant measurable function that is constant on the elements of all
partitions, while the totality of a sequence of partitions means that for
almost all pairs $x,y$ of different points, $x$ and $y$ will eventually fall
in different elements of the partitions.
Clearly, the sequence of partitions $\bar{\xi}_{n}$ is increasing, while the
sequence of partitions $\bar{\eta}_{n}$ is decreasing. Our purpose is to study
the limiting partitions $\bar{\xi}:=\lim\limits_{n\to\infty}\bar{\xi}_{n}$ and
$\bar{\eta}:=\lim\limits_{n\to\infty}\bar{\eta}_{n}$, namely, to prove the
following theorem.
###### Theorem 2.
The Young partitions on the space $X$ of infinite Bernoulli sequences have the
following properties:
* •
the sequence of partitions $\bar{\xi}_{n}$ is total;
* •
the sequence of partitions $\bar{\eta}_{n}$ is ergodic.
As a corollary, we obtain the result proved (for an arbitrary central measure)
in [5].
###### Corollary 1.
The youngization map (1) is an isomorphism of measure spaces between
$(X,m_{p}^{\infty})$ and $({\cal T},\mu_{p})$.
Note that the space $({\cal T},\mu_{p})$ of infinite paths in the Young graph
with the central measure $\mu_{p}$ can be identified with the space of
trajectories of a Markov random walk on the “Weyl chamber”
${\cal W}_{k}=\\{(x_{1},\ldots,x_{k}):x_{i}\in\mathbb{Z},\,x_{1}\geq
x_{2}\geq\ldots\geq x_{k}\geq 0\\},$
where for a path ${\cal T}\ni t=(\lambda^{(1)},\lambda^{(2)},\ldots)$ we set
$\lambda^{(n)}=(\lambda_{1}^{(n)},\ldots,\lambda_{k}^{(n)})\in{\cal W}_{k}$
(thus, at each step, one of the coordinates is increased by $1$). So, the
youngization map (1) establishes an isomorphism between the space
$(X,m_{p}^{\infty})$ of trajectories of a Bernoulli process and the space of
trajectories of a Markov process. For example, in the case of $k=2$ and the
uniform measure $p=(\frac{1}{2},\frac{1}{2})$, the transition probabilities of
the Markov process are given by the following formula (see [11]): if
$j=\lambda_{1}-\lambda_{2}$ is the difference of the row lengths of a diagram,
then
$\operatorname{Prob}(j,j+1)=\frac{j+2}{2(j+1)},\qquad\operatorname{Prob}(j,j-1)=\frac{j}{2(j+1)}.$
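This Markov projection is easy to probe numerically. For the two-letter alphabet, row insertion appends a $2$ to the first row of $P_{n}$, while a $1$ either bumps the leftmost $2$ (if there is one) into the second row or is appended; the sketch below (an illustrative check, not part of the original argument) tracks $j=\lambda_{1}-\lambda_{2}$ in this way and compares the empirical up-step frequencies with the formula above.

```python
import random
from collections import Counter

def check_transitions(n_steps=500_000, seed=0):
    """Estimate Prob(j -> j+1) for j = lambda_1 - lambda_2 under the
    uniform Bernoulli measure p = (1/2, 1/2) on the alphabet {1, 2}."""
    rng = random.Random(seed)
    b = 0                         # number of 2's in the first row of P_n
    j = 0                         # lambda_1 - lambda_2
    up, tot = Counter(), Counter()
    for _ in range(n_steps):
        x = rng.choice((1, 2))
        tot[j] += 1
        if x == 2 or b == 0:      # a cell is added to the first row
            b += x == 2
            up[j] += 1
            j += 1
        else:                     # the 1 bumps a 2 into the second row
            b -= 1
            j -= 1
    for s in sorted(tot):
        if tot[s] > 5000:         # empirical vs. (j+2)/(2(j+1))
            print(s, round(up[s] / tot[s], 3), round((s + 2) / (2 * (s + 1)), 3))

check_transitions()
```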
We also introduce another family of partitions $\zeta_{n}$. Namely, on the
space $X$ of infinite sequences there is a natural action of the infinite
symmetric group ${\mathfrak{S}}_{\infty}$ by permutations of elements. Denote
by $\zeta_{n}$ the partition of $X$ into the orbits of the finite subgroup
${{\mathfrak{S}}_{n}\subset{\mathfrak{S}}_{\infty}}$. In other words, two
sequences $x,y\in X$ belong to the same element of $\zeta_{n}$ if and only if
$\\{x\\}_{n+1}=\\{y\\}_{n+1}$ and in $[x]_{n},[y]_{n}$ all elements occur with
the same multiplicity. The partitions $\zeta_{n}$ will be called the de
Finetti partitions. Note that $\lim\limits_{n\to\infty}\zeta_{n}=\nu$ by the
Hewitt–Savage zero–one law.
## 4 Proofs of the main theorems
### 4.1 The case $k=2$
In this section, we analyze the case of the two-letter alphabet ${\cal
A}_{2}=\\{1,2\\}$. Note that the space $X_{2}={\cal A}_{2}^{\infty}$ with
Bernoulli measure $m^{\infty}$, where $m=(p_{1},p_{2})$, can be naturally
regarded as the space of trajectories of the random walk on the one-
dimensional lattice with probability $p_{1}$ of moving right and probability
$p_{2}$ of moving left. Recall that we assume that $p_{1}\geq p_{2}>0$.
Consider a sequence from ${\cal A}_{2}^{n}$ as a word $w=x_{1}\ldots x_{n}$.
Bracket every factor $21$ in $w$. The remaining letters constitute a subword
$w_{1}$ in $w$. Bracket every factor $21$ in $w_{1}$. We are left with a word
$w_{2}$. Continue the procedure until we are left with a word of the form
$w_{k}=1^{a}2^{b}=x_{i_{1}}\ldots x_{i_{a+b}}$ with $a,b\geq 0$. The elements
$x_{i_{1}},\ldots,x_{i_{a+b}}$ of the sequence $w$ will be called free, and
all the other elements will be called paired. The number of brackets will be
called the rank of $w$ and denoted by $r(w)$.
Note that it follows from the properties of the random walk on the one-
dimensional lattice with $p_{1}\geq p_{2}$ that a.e. sequence $x\in X_{2}$ has
an initial segment with more $1$’s than $2$’s. This means that $x$ contains
infinitely many free $1$’s and each $2$ gets paired in a sufficiently long
initial segment.
The following lemma is an obvious consequence of the RSK construction.
###### Lemma 1.
Let $\operatorname{sh}([x]_{n})=(\lambda_{1},\lambda_{2})$. Then
$\lambda_{2}=r([x]_{n})$. Namely, the second row of $Q([x]_{n})$ contains the
indices of the paired $1$’s, while its first row contains the indices of all
the other elements.
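The bracketing procedure admits a one-pass implementation: iterated removal of factors $21$ is equivalent to matching each $1$ with the nearest preceding unbracketed $2$. A minimal sketch, which also lets one verify Lemma 1 against the `rsk` sketch above:

```python
def rank_and_pairs(w):
    """Bracket the factors '21' of the word w (entries 1 and 2).
    Returns (r, paired_ones): r = r(w) = lambda_2, and the 1-based
    indices of the paired 1's, i.e. the second row of Q(w)."""
    open_twos, paired_ones = [], []
    for i, a in enumerate(w, start=1):
        if a == 2:
            open_twos.append(i)
        elif open_twos:            # this 1 closes the nearest free 2
            open_twos.pop()
            paired_ones.append(i)
    return len(paired_ones), paired_ones

r, row2 = rank_and_pairs([2, 1, 2, 1, 1])
# r == 2 and row2 == [2, 4], matching Q == [[1, 3, 5], [2, 4]] above.
```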
Now we can obtain an explicit description of the partitions $\bar{\xi}_{n}$
and $\bar{\eta}_{n}$, which, in particular, implies Theorem 1 in the two-
letter case.
###### Proposition 1.
The Young partitions on the set $X_{2}={\cal A}_{2}^{\infty}$ can be described
as follows:
* •
$x\sim_{\bar{\xi}_{n}}y$ $\iff$ all paired coordinates in $[x]_{n},[y]_{n}$
coincide;
* •
$x\sim_{\bar{\eta}_{n}}y$ $\iff$ $[x]_{n},[y]_{n}$ have the same rank and
$1$’s and $2$’s occur in them with the same multiplicity (these two conditions
amount to the condition that $P_{n}(x)=P_{n}(y)$) and
${\\{x\\}_{n+1}=\\{y\\}_{n+1}}$.
###### Proof.
The first assertion is obvious. To prove the second one, we first show that
$\bar{\eta}_{n}\succ\tau_{n}$. Let $x\sim_{\bar{\eta}_{n}}y$. We must prove
that $x\sim_{\tau_{n}}y$, i.e., $x_{N}=y_{N}$ for $N\geq n+1$. Assume the
contrary and let $m$ be the index of the first coordinate that differs in $x$
and $y$. Without loss of generality, $x_{m}=1$, $y_{m}=2$. But $y_{m}=2$ is a
free element in $[y]_{m}$, hence $x_{m}=1$ is a free element in $[x]_{m}$
(otherwise, $\operatorname{sh}([x]_{m})\neq\operatorname{sh}([y]_{m})$). Then
the tail $\\{y\\}_{m+1}$ contains no free $1$’s, which, as we have noted
above, has probability $0$. So, $\bar{\eta}_{n}\succ\tau_{n}$. The coincidence
of ranks is obvious. It remains to show that in $[x]_{n}$ and $[y]_{n}$ the
elements $1$ and $2$ occur with the same multiplicity. Assume to the contrary
that, say, $[y]_{n}$ has more free $2$’s than $[x]_{n}$. Since, almost surely,
each $2$ becomes paired in a sufficiently long initial segment, at the moment
when the “extra” $2$ gets paired, the condition
$\operatorname{sh}([x]_{N})=\operatorname{sh}([y]_{N})$ fails. ∎
###### Corollary 2.
The three (Bernoulli, Young, and de Finetti) tail filtrations on $X_{2}$
satisfy the relation
$\tau_{n}\prec\zeta_{n}\prec\bar{\eta}_{n}.$
###### Proposition 2.
For the two-letter alphabet, Theorem 2 holds, i.e., the sequence of Young
cylinder partitions is total and the Young filtration is ergodic.
###### Proof.
Let $x\sim_{\bar{\xi}}y$ and $x\neq y$. Then there exists $n$ such that
$[x]_{n-1}=[y]_{n-1}$ and $x_{n}\neq y_{n}$. But $x\sim_{\bar{\xi}_{n}}y$,
hence all paired coordinates in $[x]_{n}$ and $[y]_{n}$ coincide, so only free
ones may differ. Without loss of generality, let $x_{n}$ be a free $1$ and
$y_{n}$ be a free $2$. Almost surely, there exists $N>n$ such that this $2$
gets paired in $[y]_{N}$. But the element $x_{n}=1$ remains free in $[x]_{N}$.
Then $x\nsim_{\bar{\xi}_{N}}y$, a contradiction. This proves that
$\bar{\xi}=\varepsilon$.
Now we prove that $\bar{\eta}=\nu$. Consider the de Finetti partitions
$\zeta_{n}$; we will prove that if $x\sim_{\zeta_{n}}y$, then there exists $N$
such that $x\sim_{\bar{\eta}_{N}}y$. Since
$\lim\limits_{n\to\infty}\zeta_{n}=\nu$ by the Hewitt–Savage zero–one law,
this implies that ${\lim\limits_{n\to\infty}\bar{\eta}_{n}=\nu}$, as required.
So, let $x\sim_{\zeta_{n}}y$ but $x\nsim_{\bar{\eta}_{n}}y$, i.e.,
$\\{x\\}_{n+1}=\\{y\\}_{n+1}=:z$, the multiplicities of $1$’s and $2$’s in
$[x]_{n},[y]_{n}$ coincide, but $r([x]_{n})\neq r([y]_{n})$. Consider the
common tail $z$ of $x$ and $y$. As free $1$’s appear in $z$, they get paired
with free $2$’s in $[x]_{n}$ and $[y]_{n}$. Let $N$ be the moment when the
last of the free $2$’s in $[x]_{n},[y]_{n}$ gets paired. It is easy to see
that $\operatorname{sh}([x]_{N})=\operatorname{sh}([y]_{N})$ and,
consequently, $x\sim_{\bar{\eta}_{N}}y$. As discussed above, this completes
the proof. ∎
### 4.2 The general case
In this section, we prove Theorems 1 and 2 in full generality. Recall that
${p_{1}\geq p_{2}\geq\ldots\geq p_{k}>0}$. We need the following lemma, which
shows that, almost surely, each element $a>1$ eventually gets bumped from the
first row of the $P$-tableau.
###### Lemma 2.
Fix $\ell=2,\ldots,k$ and denote by $m_{n}=m_{n}(\ell,x)$ the number of
elements equal to $\ell$ in the first row of the tableau $P_{n}(x)$ for a
random sequence $x\in X$. Then for every $q\in{\mathbb{N}}$, almost surely,
there exists $N\geq q$ such that $m_{N}=0$.
###### Proof.
If $m_{q}=0$, there is nothing to prove. Let $m_{q}\neq 0$. Denote by $a_{n}$
the greatest element less than $\ell$ in the first row of $P_{n}(x)$ (or $1$
if there is no such element). Clearly,
$m_{n+1}=\begin{cases}m_{n}+1&\text{if }x_{n+1}=\ell,\\ m_{n}-1&\text{if }a_{n}\leq x_{n+1}<\ell,\\ m_{n}&\text{if }x_{n+1}>\ell\text{ or }x_{n+1}<a_{n}.\end{cases}$
The first event has probability $p_{\ell}$, while the second one has
probability ${r_{n}:=p_{a_{n}}+\ldots+p_{\ell-1}\geq p_{\ell-1}}$. If
$p_{\ell-1}>p_{\ell}$, then the desired assertion is obvious. Otherwise, let
$p_{\ell-1}=p_{\ell}=p$ and consider the random walk $\\{z_{n}\\}$ on
$\mathbb{Z}$ with transition probabilities
$z_{n+1}=\begin{cases}z_{n}+1&\text{with probability }p,\\ z_{n}-1&\text{with probability }p,\\ z_{n}&\text{with probability }1-2p.\end{cases}$
Now we use the well-known recurrence criterion for a random walk with step $d$
(see, e.g., [6]): it is recurrent if and only if $\lim\limits_{t\nearrow
1}\int_{-\pi}^{\pi}\frac{dx}{1-t\phi(x)}=\infty$, where
$\phi(x)={\mathbb{E}}e^{ixd}$. In our case, $\phi(x)=2p\cos x+1-2p$; it easily
follows that the criterion is satisfied and the random walk is recurrent.
Hence, by the properties of a recurrent random walk, the random walk
$\\{z_{n}\\}$ starting from $m_{q}$ will reach $0$ with probability $1$. Now
we apply coupling. Namely, consider the random process
$\\{z^{\prime}_{n}\\}_{n\geq q}$ on $(X,m_{p}^{\infty})$ defined as follows.
Take a random variable $\varepsilon_{n}$ independent of all the other ones
that is equal to $1$ with probability $\frac{p}{r_{n}}$ and $0$ with
probability $1-\frac{p}{r_{n}}$. Set
$z^{\prime}_{n+1}=\begin{cases}z^{\prime}_{n}+1&\text{if }x_{n+1}=\ell,\\ z^{\prime}_{n}-1&\text{if }a_{n}\leq x_{n+1}<\ell\text{ and }\varepsilon_{n}=1,\\ z^{\prime}_{n}&\text{otherwise}.\end{cases}$
Clearly, on the one hand, $\\{z^{\prime}_{n}\\}$ has the same distribution as
$\\{z_{n}\\}$ and, consequently, reaches $0$ with probability $1$. On the
other hand, for every $n\geq q$ we have $m_{n}\leq z_{n}^{\prime}$. It follows
that the original process $\\{m_{n}\\}$ also reaches $0$ with probability $1$.
∎
The following lemma completes the proof of Theorem 1.
###### Lemma 3.
Two sequences $x,y\in X$ belong to the same element of the Young tail
partition $\bar{\eta}_{n}$ if and only if their initial segments $[x]_{n}$ and
$[y]_{n}$ belong to the same plactic class and the tails $\\{x\\}_{n+1}$ and
$\\{y\\}_{n+1}$ coincide.
###### Proof.
We argue by induction on the number $k$ of letters in the alphabet ${\cal A}$.
The base case $k=2$ is proved in Proposition 1. We now prove the induction
step $k-1\mapsto k$. Consider the subtableaux $P^{\prime}([x]_{i})$ and
$P^{\prime}([y]_{i})$ in $P([x]_{i})$ and $P([y]_{i})$, respectively,
consisting of all rows except the first one (and filled with $2,\ldots,k$).
Then
$\operatorname{sh}(P^{\prime}([x]_{i}))=\operatorname{sh}(P^{\prime}([y]_{i}))$
for $i\geq n$, hence, by the induction hypothesis,
$P^{\prime}([x]_{n})=P^{\prime}([y]_{n})$ and the sequences of elements bumped
into the second row in $\\{x\\}_{n+1}$ and $\\{y\\}_{n+1}$ coincide. We claim
that $m_{n}(k,x)=m_{n}(k,y)$ in the notation of Lemma 2. Assume to the
contrary that, say, $m_{n}(k,x)>m_{n}(k,y)$. Since the shapes of the growing
tableaux coincide, it is clear that the difference $m_{i}(k,x)-m_{i}(k,y)$ can
decrease only if $k$ gets bumped from the first row of $P([x]_{i})$ and a
smaller element gets bumped from the first row of $P([y]_{i})$, which, as
noted above, cannot happen. However, it follows from Lemma 2 that there exists
$j>n$ such that $m_{j}(k,x)=0$, a contradiction. Hence,
$m_{n}(k,x)=m_{n}(k,y)$, and it follows from the above considerations that
elements equal to $k$ occupy the same positions in $\\{x\\}_{n+1}$ and
$\\{y\\}_{n+1}$. Now note that these elements do not affect the growth of the
subtableaux filled with the smaller elements. Denote by $x^{\prime}$ and
$y^{\prime}$ the subsequences in $x$ and $y$, respectively, obtained by
discarding the elements equal to $k$. It follows from what we have proved that
$x^{\prime}\sim_{\bar{\eta}_{n^{\prime}}}y^{\prime}$, where $n^{\prime}$ is
the number of elements less than $k$ in $[x]_{n}$ and $[y]_{n}$. It remains to
apply the induction hypothesis to $x^{\prime}$ and $y^{\prime}$. ∎
###### Corollary 3.
$\bar{\xi}_{n}\prec\sigma_{n},\qquad\bar{\eta}_{n}\succ\zeta_{n}\succ\tau_{n}.$
Now we turn to the proof of Theorem 2.
1. If $x\sim_{\bar{\xi}}y$, then
$\operatorname{sh}([x]_{n})=\operatorname{sh}([y]_{n})$ for all $n$, and it
follows from Lemma 3 with $n=0$ that $x=y$.
2. As in Proposition 2, we want to use the de Finetti partitions and the
Hewitt–Savage law. Namely, the desired result follows by the Hewitt–Savage law
from the following lemma.
###### Lemma 4.
If $x\sim_{\zeta_{n}}y$, then there exists $N\geq n$ such that
$x\sim_{\bar{\eta}_{N}}y$.
###### Proof.
Since $\zeta_{n}$ is the orbit partition for an action of the symmetric group
${\mathfrak{S}}_{n}$, it suffices to prove the assertion in the case where $x$
and $y$ are obtained from each other by the action of a Coxeter generator
$\sigma_{i}=(i,i+1)$, i.e., by a transposition of $x_{i}$ and $x_{i+1}$.
Assume without loss of generality that $x_{i}=u<v=x_{i+1}$. Then $y_{i}=v$,
$y_{i+1}=u$, and $y_{j}=x_{j}$ for $j\neq i,i+1$.
Denote by $R^{(j)}(x)$ and $R^{(j)}(y)$ the first rows of the tableaux
$P_{j}(x)$ and $P_{j}(y)$, respectively (as multisets). We claim that almost
surely there exists $N\geq i+1$ such that $R^{(N)}(x)=R^{(N)}(y)$. If
$R^{(i+1)}(x)=R^{(i+1)}(y)$, there is nothing to prove. Otherwise, $\max
R^{(i)}(x)=u$. Set $v_{i+1}:=v$. Then
$R^{(i+1)}(x)=R^{(i+1)}(y)\cup\\{v_{i+1}\\}$.
Assume that at the $j$th step
$R^{(j)}(x)=R^{(j)}(y)\cup\\{v_{j}\\}.$ (2)
Set $A_{j}:=\\{d\in R^{(j)}(x):d<v_{j}\\}$ and ${B_{j}:=\\{d\in
R^{(j)}(x):d\geq v_{j}\\}\setminus\\{v_{j}\\}}$ (multisets) and denote
$u_{j}:=\max A_{j}$. In particular, $u_{i+1}=u$ and $B_{i+1}=\emptyset$. Look
at the insertion of an element $x_{j+1}$ with $j>i$. Clearly, if
$x_{j+1}<u_{j}$ or $x_{j+1}\geq v_{j}$, then $R^{(j)}(x)$ and $R^{(j)}(y)$
undergo the same changes; in this case, we set $v_{j+1}=v_{j}$, and (2)
remains valid.
If ${u_{j}\leq x_{j+1}<v_{j}}$, then
$R^{(j+1)}(x)=R^{(j)}(x)\setminus\\{v_{j}\\}\cup\\{x_{j+1}\\}$ and two cases
are possible. If $B_{j}\neq\emptyset$, then
${R^{(j+1)}(y)=R^{(j)}(y)\setminus\\{\min B_{j}\\}\cup\\{x_{j+1}\\}}$, and (2)
remains valid with $v_{j+1}=\min B_{j}$. Finally, if ${B_{j}=\emptyset}$, then
$R^{(j+1)}(y)=R^{(j)}(y)\cup\\{x_{j+1}\\}$ and $R^{(j)}(x)=R^{(j)}(y)$.
We claim that this will eventually happen with probability $1$. Assume the
contrary. Note that $v_{j}$ never decreases and can increase only finitely
many times, because the alphabet ${\cal A}$ is finite. Let $v_{j}=v$ for all
sufficiently large $j$. By Lemma 2, almost surely there are infinitely many
$j$ such that $m_{j}(v,y)=0$, i.e., $v\notin B_{j}$. Hence, almost surely one
of them is succeeded by the event $u_{j}\leq x_{j+1}<v$ (which has probability
$\geq p_{v-1}>0$). If at this moment $B_{j}\neq\emptyset$, then $v_{j+1}=\min
B_{j}>v$, a contradiction. Therefore, $B_{j}=\emptyset$ and, as shown earlier,
$R^{(j)}(x)=R^{(j)}(y)$.
So, we have proved that if $x\sim_{\zeta_{n}}y$, then with probability $1$
there exists $N$ such that $P_{N}(x)$ and $P_{N}(y)$ have the same first row.
But then the sequences of elements bumped into the second row in these
tableaux also differ only by a permutation, hence, we obtain by induction that
all rows (there are finitely many of them) eventually become equal. The lemma
is proved. ∎
As we have discussed earlier, the second assertion of Theorem 2 follows from
Lemma 4 by the Hewitt–Savage zero–one law. The theorem is proved.
## References
* [1] W. Fulton, Young Tableaux With Applications to Representation Theory and Geometry, Cambridge University Press, 1997.
* [2] S. V. Kerov, Asymptotic Representation Theory of the Symmetric Group and its Applications in Analysis, Amer. Math. Soc., Providence, RI, 2003.
* [3] S. V. Kerov and A. M. Vershik, The characters of the infinite symmetric group and probability properties of the Robinson–Schensted–Knuth algorithm, SIAM J. Algebraic Discrete Methods 7, No. 1, 116–124 (1986).
* [4] D. Romik and P. Śniady, Jeu de taquin dynamics on infinite Young tableaux and second class particles, Ann. Probab. 43, No. 2, 682–737 (2015).
* [5] P. Śniady, Robinson–Schensted–Knuth algorithm, jeu de taquin, and Kerov–Vershik measures on infinite tableaux, SIAM J. Discrete Math. 28, No. 2, 598–630 (2014).
* [6] F. Spitzer, Principles of Random Walk, Springer-Verlag, New York–Heidelberg, 1976.
* [7] A. M. Vershik, Description of invariant measures for the actions of some infinite-dimensional groups, Sov. Math. Dokl. 15, 1396–1400 (1974).
* [8] A. M. Vershik, The theory of filtrations of subalgebras, standardness, and independence, Russian Math. Surveys 72, No. 2, 257–333 (2017).
* [9] A. M. Vershik, Three theorems on the uniqueness of the Plancherel measure from different viewpoints, Proc. Steklov Inst. Math. 305, 63–77 (2019).
* [10] A. M. Vershik, On the justification of the ergodic method for describing central measures, to appear in Dokl. Akad. Nauk.
* [11] A. M. Vershik and N. V. Tsilevich, Markov measures on Young tableaux and induced representations of an infinite symmetric group, Theory Probab. Appl. 51, No. 1, 211–223 (2006).
†Present address: Department of Physics, IIT Delhi, Hauz Khas, New Delhi-110016, India
# Exploring the sensitivity of hadron colliders to non-universality in heavy
neutral currents
F. A. Conventi1,2, G. D’Ambrosio1, A.M. Iyer3, E. Rossi1,4
1INFN-Sezione di Napoli, Via Cintia, 80126
Napoli, Italy;
2Università degli Studi di Napoli Parthenope, Napoli, Italy;
3Univ. Lyon, Universite Claude Bernard Lyon 1, CNRS/IN2P3, UMR5822 IP2I,
F-69622, Villeurbanne, France;
4 Università degli Studi di Napoli ’Federico II’, Dipartimento di Fisica
“Ettore Pancini”, Via Cintia, 80126 Napoli, Italy.
###### Abstract
We present sensitivity projections for discovering a heavy resonance decaying
to electron and muon pairs and for probing the charged lepton non-universality
in such decays at the HL-LHC and FCC-hh. The analysis takes into account the
expected differences in the reconstruction efficiencies and the dilepton mass
resolutions for dielectron and dimuon final states. We demonstrate how the
analysis at the HL-LHC naturally paves the way for an FCC-hh machine, thereby
underlining its importance.
The Standard Model (SM) of particle physics has withstood the test of
experimental validation to a significant extent. In particular, the
electroweak sector is characterized by a well defined pattern of couplings
which manifests in terms of accurate predictions for several processes. Any
departure from this paradigm implies the presence of New Physics (NP) effects.
One of the most interesting observations in this direction is the hint of
flavour non-universality in the theoretically clean
ratios $R_{K}$ and $R_{K^{*}}$. Recent results obtained by the LHCb
Collaboration are compatible with the standard model at the level of 2.5
standard deviations Aaij _et al._ (2019, 2017), still leaving room for
studies on flavour non-universality. Anyway, independently of these anomalies,
it is instructive to investigate the potential of the direct searches in
measuring deviations from universality.
In a direct search, non-universality between a set of final states would
manifest in the form of correspondingly different yields in the detector. In
this paper we present sensitivity projections for testing charged lepton
flavor non-universality in dilepton decays of a new heavy boson that may be
discovered at current and future $pp$ colliders. The analysis uses a simple
test statistic to estimate the significance of a departure from the
universality case.
Since the leptons are very clean objects in a detector, the developed strategy
will be used to study flavour non-universality using a simplified model with
an additional heavy state. Without loss of generality, we consider a heavy
vector boson ($Z^{\prime}$) decaying into a pair of leptons. These states are
a characteristic of several models beyond the SM: for instance scenarios with
additional $U(1)$ Donini _et al._ (1997), extra-dimensional frameworks with
bulk gauge fields Gherghetta and Pomarol (2000) constitute some of the most
obvious extensions. The scenarios with an additional heavy vector were also
found to be useful in the context of flavour physics Gauld _et al._ (2014);
Glashow _et al._ (2015); Bhattacharya _et al._ (2015); Crivellin _et al._
(2015a, b); Aristizabal Sierra _et al._ (2015); Crivellin _et al._ (2015c);
Celis _et al._ (2015); Bélanger _et al._ (2015); Gripaios _et al._ (2016);
Allanach _et al._ (2016); Fuyuto _et al._ (2016); Chiang _et al._ (2016);
Boucenna _et al._ (2016a, b); Celis _et al._ (2017); Altmannshofer _et al._
(2016); Crivellin _et al._ (2017); Garcia Garcia (2017); Bečirević _et al._
(2016); Bhattacharya _et al._ (2017); Bhatia _et al._ (2017); Cline _et
al._ (2017). The analysis in this paper, however, can be trivially extended to
the $s$-channel decay of any heavy resonance like gravitons, heavy scalars,
etc. The production mechanism is irrelevant for our analysis; therefore,
without loss of generality, we assume that the $Z^{\prime}$ is predominantly
produced by light quarks. From a model point of view, denoting the coupling of
the leptons to the vector boson as $g_{l}$, the goal of this paper can be
restated in terms of extracting the sensitivity of the direct searches to
explore the difference $g_{e}-g_{\mu}$ and its deviations from 0. Similar
analyses exist for the Z-boson couplings from LEP Schael _et al._ (2006).
(The analysis presented in this paper can be easily applied to test
universality of couplings for the SM Z boson as well.)
The paper is organized as follows: we begin with the traditional bump hunt
searches of heavy neutral resonances decaying into a di-lepton (di-muon and
di-electron) final state. In this section we point out the role of the
different reconstruction resolutions between different flavour leptons in the
eventual computation of the discovery significance. This is then followed by
the description of the analysis and the estimation of the sensitivity to non-
universality of the HL-LHC collider. We note that the study at the HL-LHC
naturally paves the way for an FCC-hh machine, which is characterized by
significantly enhanced sensitivities to even smaller deviations from
universality. We conclude the paper with the prospects of including tau as a
part of future analysis to complete the picture.
## I Bump Hunt Searches
The search for a heavy neutral resonance decaying into a di-lepton final state
is one of the most prominent channels being probed at the LHC, and there exist
relatively strong bounds on $\sigma\times\mathcal{B}_{ll}$ Sirunyan _et al._
(2018); Aad _et al._ (2019). A standard search strategy focuses on the
possibility for observing an excess of events over the Standard Model (SM)
prediction, where the SM background is mainly due to the universal coupling of
the $\gamma^{*}/Z$ to leptons.
In this analysis, we consider the production of a heavy $Z^{\prime}$ decaying
into muons and electrons. For the purpose of simulation, we use the Lagrangian
of the Sequential Standard Model (SSM). Using the model file from FEYNRULES
Alloul _et al._ (2014), the matrix element for the process is produced using
MADGRAPH Alwall _et al._ (2014) at a centre of mass energy of 14 TeV.
Showering and hadronization are described using PYTHIA 8 Sjostrand _et al._
(2008). The CMS card of DELPHES 3.4 de Favereau _et al._ (2014) is used for
detector simulation at the LHC. The efficiencies estimated from the simulation
are then used for different values of $\sigma\mathcal{B}$.
Event selection: In order to identify the leptons from the $Z^{\prime}$, the
following selection criteria have been applied:
* •
two isolated leptons (electrons or muons) with $p_{T}\geq 50$ GeV and
$|\eta|<2.5$;
* •
$\not{E}_{T}<10$ GeV.
The main source of background is the process $pp\rightarrow
Z/\gamma^{*}\rightarrow ll$, where $l=e,\mu$.
Independently of the relative sizes of the coupling with the vector boson (SM
or beyond), the leptons are characterized by different detector acceptances
and mass reconstruction resolution. The acceptance efficiency ($\epsilon$) is
mass dependent: for instance, for $m_{Z^{\prime}}=3$ TeV we estimate
$\epsilon_{e}=0.46$ and $\epsilon_{\mu}=0.61$, while for 5 TeV the
corresponding values for electrons and muons are 0.48 and 0.35, respectively.
The mass reconstruction resolution for $m_{ll}>1$ TeV is much better for the
di-electron final state. The mass reconstruction resolution for di-leptons is
shown in Fig. 1 for a 5 TeV narrow resonance with a generated mass width
$\Gamma$ of $50$ GeV. The different mass reconstruction resolution can be
attributed to the fact that the momenta of the electrons and muons are measured
differently: the former due to deposition in the E-cal and the latter due to
the bending in the tracker. Under the assumption of enough statistics (not
necessarily equal) for either lepton, the asymmetry in the reconstruction
between the electrons and muons progressively increases with the resonance
mass. The smearing increases with the $p_{T}$ of the di-muons.
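The qualitative effect can be reproduced with a toy smearing study; the 1% and 8% relative resolutions used below are illustrative assumptions for the two measurement techniques, not values extracted from the DELPHES simulation:

```python
import numpy as np

rng = np.random.default_rng(1)
m0, gamma, n = 5000.0, 50.0, 100_000       # GeV: narrow 5 TeV resonance

# Breit-Wigner (Cauchy) distributed true dilepton masses
m_true = m0 + (gamma / 2) * rng.standard_cauchy(n)

# Assumed relative resolutions at m_ll ~ 5 TeV: E-cal-dominated
# electrons ~1%, tracker-dominated muons ~8% (illustrative numbers).
m_ee   = m_true * rng.normal(1.0, 0.01, n)
m_mumu = m_true * rng.normal(1.0, 0.08, n)

for name, m in (("ee", m_ee), ("mumu", m_mumu)):
    lo, hi = np.percentile(m, [16, 84])
    print(f"{name}: 68% half-width ~ {(hi - lo) / 2:.0f} GeV")
```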
Figure 1: Mass reconstruction resolution of the di-electron (in pink-dashed)
and the di-muon (in red-solid) pairs for $M_{Z^{\prime}}=5$ TeV.
To calculate the expected significance for $Z^{\prime}\rightarrow ee$ and
$Z^{\prime}\rightarrow\mu\mu$ at LHC, we use a binned likelihood fit
$L(\mu_{e},\mu_{\mu})$.
In the case where the background is well known, we can evaluate the expected
significance as the probability of the background-only hypothesis
($\mu_{e}=\mu_{\mu}=0$) using the two-dimensional profiled likelihood ratio
test Cowan _et al._ (2011):
$q_{0}=-2\log\left[\frac{L(0,0)}{L(\hat{\mu}_{e},\hat{\mu}_{\mu})}\right]$ (1)
where $\hat{\mu}$ is the best value of $\mu$ estimated by fitting to the data
for both the electron and the muon. The signal discovery significance Z can be
evaluated as:
$Z_{tot}=\sqrt{q_{0}}.$ (2)
and for sufficiently large background we can use the asymptotic formula:
$Z_{tot}=\sqrt{q_{0}}=\sqrt{\sum\limits_{i=1;j=e,\mu}^{N_{e,\mu}}\left(2(s^{j}_{i}+b^{j}_{i})\log\left[1+\frac{s^{j}_{i}}{b^{j}_{i}}\right]-2s^{j}_{i}\right)}$
(3)
where the sum runs over the bins, $s^{j}_{i}$ and $b^{j}_{i}$ are the expected
numbers for signal and background events in the $i^{th}$ bin for $j=e,\mu$.
Note that the total numbers of bins $N_{e,\mu}$ are in general different for
the electron and the muon. It is important to stress that Eq. 3 gives only the
local significance. We account for the look-elsewhere effect, which modifies
the local p-value corresponding to a given $Z_{tot}$ Gross
and Vitells (2010).
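A minimal numerical sketch of Eq. 3 (with toy bin contents, not the spectra used in this analysis) also illustrates why a better mass resolution, which concentrates the same signal in fewer bins, yields a larger combined significance:

```python
import numpy as np

def z_tot(s_e, b_e, s_mu, b_mu):
    """Asymptotic combined significance of Eq. (3): sum the per-bin
    terms over the dielectron and dimuon spectra, whose binnings
    (and hence lengths) may differ."""
    q0 = 0.0
    for s, b in ((s_e, b_e), (s_mu, b_mu)):
        s, b = np.asarray(s, float), np.asarray(b, float)
        q0 += np.sum(2.0 * (s + b) * np.log1p(s / b) - 2.0 * s)
    return np.sqrt(q0)

# Twelve signal events per channel over unit background per bin:
# the "narrow" channel (2 bins) contributes more than the "wide" one.
print(z_tot([6, 6], [1, 1], [3, 3, 3, 3], [1, 1, 1, 1]))  # ~7.1
```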
Figure 2: Contours in the total di-lepton significance ($Z_{tot}$) for
$m_{Z^{\prime}}=5$ TeV decaying into electrons $(\sigma\mathcal{B})_{e}$ and
muons $(\sigma\mathcal{B})_{\mu}$. The diagonal dotted line corresponds to the
lepton flavour universality case
$\left((\sigma\mathcal{B})_{e}=(\sigma\mathcal{B})_{\mu}\right)$. The
asymmetry in the expected significance is due to different mass reconstruction
resolution (Fig.1).
Fig. 2 gives contours in the total di-lepton significance as a function of
cross-section times the branching fractions of the $Z^{\prime}$ decaying into
electrons $(\sigma\mathcal{B})_{e}$ and muons $(\sigma\mathcal{B})_{\mu}$. The
diagonal dotted line corresponds to the lepton flavour universality case
$\left((\sigma\mathcal{B})_{e}=(\sigma\mathcal{B})_{\mu}\right)$. The points
are scanned such that
$(\sigma\mathcal{B})_{e}+(\sigma\mathcal{B})_{\mu}\leq(\sigma\mathcal{B})_{max}$,
with the outer edge corresponding to $(\sigma\mathcal{B})_{max}$. For the LHC,
we choose $(\sigma\mathcal{B})_{max}$ as the upper bound obtained on
$(\sigma\mathcal{B})_{tot}$ from direct searches in di-lepton final state
Sirunyan _et al._ (2018); Aad _et al._ (2019). Lines parallel to the outer
edge represent contours of some constant
$(\sigma\mathcal{B})_{tot}<(\sigma\mathcal{B})_{max}$, decreasing
progressively as one moves inwards. For any given $(\sigma\mathcal{B})_{tot}$,
the scan over $(\sigma\mathcal{B})_{e}$ and $(\sigma\mathcal{B})_{\mu}$ is
done such that
$(\sigma\mathcal{B})_{e}+(\sigma\mathcal{B})_{\mu}=(\sigma\mathcal{B})_{tot}$.
The asymmetric behaviour of the contour plot is due to the different mass
resolutions shown in Fig. 1. Thus, a larger coupling to the electrons leads to
a larger evaluated value for the total signal sensitivity. These
considerations lead to the following questions:
1. Does the absence of a signal imply no NP or a larger coupling to the muons?
2. What are the prospects for unearthing non-universality at the HL-LHC and future colliders?
## II Non-universality test
In real-life experiments, the statistic $q_{0}$ in Eq. 1 is minimized at the
best fit value of $\sigma\mathcal{B}$ for the leptons. Fig. 3 shows the
distributions of the test statistic $q_{0}$ under two different assumptions:
the left plot corresponds to the universal coupling case where
$(\sigma\mathcal{B})_{e}=(\sigma\mathcal{B})_{\mu}$ while the right plot
illustrates the $(\sigma\mathcal{B})_{e}<(\sigma\mathcal{B})_{\mu}$ case and
hence non-universality. The different widths of the parabola reflect the
differences in the mass reconstruction resolutions between the leptons. The
black line represents the $1\sigma$ measurement uncertainty.
Figure 3: Distribution of test statistic under the assumption of universal
($(\sigma\mathcal{B})_{e}=(\sigma\mathcal{B})_{\mu}$) (left plot) and non-
universal ($(\sigma\mathcal{B})_{e}<(\sigma\mathcal{B})_{\mu}$) (right plot)
couplings. The different widths are a consequence of different mass
reconstruction resolution for the electrons (blue-thick) and muons (orange-
dashed).
The departure from the universality hypothesis can be quantified by the
following asymmetry variable:
$\hat{A}=\frac{(\sigma\mathcal{B})_{\mu}-(\sigma\mathcal{B})_{e}}{(\sigma\mathcal{B})_{\mu}+(\sigma\mathcal{B})_{e}}\in[-1,1].$
(4)
The two extremities $\hat{A}=-1$ and $\hat{A}=1$ correspond to a very large
signal in the electron channel
($(\sigma\mathcal{B})_{e}\gg(\sigma\mathcal{B})_{\mu}$) and in the muon
channel ($(\sigma\mathcal{B})_{\mu}\gg(\sigma\mathcal{B})_{e}$),
respectively. In general, $\hat{A}$ divides the phase space into two specific
regions: $\hat{A}>0$ corresponds to the case where the coupling to muons is
larger and is called the Pro-muon region; $\hat{A}<0$ corresponds to the case
where the coupling to electrons is larger and is called the Pro-electron
region. Thus, a measurement corresponding to $\hat{A}\neq 0$ could be a hint
of non-universality. An estimate of the significance in the measurement of
$\hat{A}$ must also account for the individual uncertainties in the
extraction of $(\sigma\mathcal{B})_{e,\mu}$, which correspond to the widths
in Fig. 3.
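As a small illustration, the sketch below evaluates $\hat{A}$ from Eq. 4
together with a first-order error propagation from the per-channel
uncertainties. The propagation step is our own simplification of the
profiled-likelihood treatment used in the text, and the numerical inputs are
hypothetical.

```python
# Asymmetry variable of Eq. (4) with first-order error propagation.
# The cross-section values and uncertainties below are hypothetical.
import math

def asymmetry(xs_e, xs_mu, sigma_e=0.0, sigma_mu=0.0):
    """Return (A_hat, sigma_A) for measured (sigma*B)_e and (sigma*B)_mu."""
    tot = xs_e + xs_mu
    a_hat = (xs_mu - xs_e) / tot
    # dA/d(xs_mu) = 2*xs_e/tot**2 and dA/d(xs_e) = -2*xs_mu/tot**2
    sigma_a = 2.0 * math.hypot(xs_e * sigma_mu, xs_mu * sigma_e) / tot**2
    return a_hat, sigma_a

# A pro-muon example, with cross-sections in fb.
print(asymmetry(0.003, 0.007, sigma_e=0.001, sigma_mu=0.001))
```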
The significance in the measurement of $\hat{A}$ can be quantified by using a
two dimensional profiled likelihood ratio test similar to Eq. 1 and defined as
$q=-2\log\left[\frac{L(\hat{A}=0)}{L(\hat{A})}\right]$ (5)
treating $(\sigma\mathcal{B})$ as a nuisance parameter. The measured values of
$(\sigma\mathcal{B})$ for the electrons and muons are related to
$(\sigma\mathcal{B})_{tot}$ as:
$(\sigma\mathcal{B})_{e,\mu}=\hat{\mu}^{m}_{e,\mu}(\sigma\mathcal{B})_{tot}$.
Fig. 4 shows the typical behaviour of $q$ for two different values of non-
universality: $\mu^{m}_{e}=0.7;\mu^{m}_{\mu}=0.3$ (orange) and
$\mu^{m}_{e}=0.3;\mu^{m}_{\mu}=0.7$ (blue). We use two different benchmark
values of the total cross-section: $(\sigma\mathcal{B})_{Tot}=0.01$ fb (top
row) and $(\sigma\mathcal{B})_{Tot}=0.025$ fb (bottom row) for
$M_{Z^{\prime}}=3,5$ TeV. The plots quantify the departure from the
universality hypothesis ($\hat{A}=0$ or $\hat{\mu}_{e}=\hat{\mu}_{\mu}=0.5$).
The solid black line corresponds to the $2\sigma$ intercept. Scanning over
different values of $(\sigma\mathcal{B})_{Tot}$, we obtain the
asymmetry-sensitivity plots shown in Fig. 5. Moving along either curve, from
the bottom to the top, corresponds to increasing values of
$(\sigma\mathcal{B})_{Tot}$ and, hence, of $Z_{tot}$. With respect to the
bounds from direct searches Aad _et al._ (2019), we must note that they are
obtained under a lepton flavour universality assumption. Estimating a bound
under the non-universality hypothesis would require a recast of the entire
analysis and is beyond the scope of this paper. However, we naively calculate
$Z_{tot}$ from the upper bound on $(\sigma\mathcal{B})_{tot}$, represented by
the lower boundary of the pink-shaded region in Fig. 5.
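To make the profiling in Eq. 5 concrete, the sketch below scans a toy profile
likelihood with a single Poisson-counting bin per channel, treating
$(\sigma\mathcal{B})_{tot}$ as a nuisance parameter profiled out at each
fixed $\hat{A}$. The luminosity, efficiencies, backgrounds and pseudo-data
are hypothetical stand-ins, not the full binned analysis of the paper.

```python
# Toy profile-likelihood scan of the asymmetry, in the spirit of Eq. (5).
# All numerical inputs here are assumptions for illustration only.
import numpy as np
from scipy.stats import poisson
from scipy.optimize import minimize_scalar

LUMI = 3000.0                    # fb^-1 (HL-LHC benchmark)
EFF = {"e": 0.7, "mu": 0.4}      # assumed acceptance x efficiency
BKG = {"e": 1.0, "mu": 1.5}      # assumed background yields

def nll(a_hat, xs_tot, n_obs):
    """Negative log-likelihood for asymmetry a_hat and total cross-section."""
    frac = {"e": 0.5 * (1.0 - a_hat), "mu": 0.5 * (1.0 + a_hat)}
    out = 0.0
    for ch in ("e", "mu"):
        mu_exp = LUMI * EFF[ch] * frac[ch] * xs_tot + BKG[ch]
        out -= poisson.logpmf(n_obs[ch], mu_exp)
    return out

def profiled_q(n_obs, a_grid):
    """-2 log L_p(a_hat), relative to its minimum, profiling xs_tot [fb]."""
    nll_prof = np.array([
        minimize_scalar(lambda x: nll(a, x, n_obs),
                        bounds=(1e-6, 1.0), method="bounded").fun
        for a in a_grid
    ])
    return 2.0 * (nll_prof - nll_prof.min())

a_grid = np.linspace(-0.99, 0.99, 99)
q = profiled_q({"e": 8, "mu": 14}, a_grid)   # a pro-muon pseudo-dataset
print(f"best-fit A_hat = {a_grid[np.argmin(q)]:+.2f}")
```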
In an ideal scenario, for any given $Z^{\prime}$ mass we can expect to be
sensitive to tiny deviations from universality, i.e. $\hat{A}\rightarrow 0$,
as $(\sigma\mathcal{B})$ becomes very large or $Z_{tot}$ increases. However,
the HL-LHC sensitivity is limited by existing bounds on
$(\sigma\mathcal{B})_{tot}$ as well as by a finite total integrated
luminosity. The ruled-out regions for 3 and 5 TeV masses are illustrated by
the pink band in Fig. 5. Taking this into account, Fig. 6 illustrates the
expected limits on $|\hat{A}|$ as a function of the $Z^{\prime}$ mass for the
full LHC dataset at $\mathcal{L}=3$ ab$^{-1}$. The flat behaviour is due to
the fact that the bounds from direct searches become progressively stronger
in going from 1 to 5 TeV.
Figure 4: Test statistic $q$ in Eq. 5 with a benchmark of
$\sigma\mathcal{B}_{Tot}=0.01$ fb (top row) and $0.025$ fb (bottom row). The
left (right) column corresponds to $M_{Z^{\prime}}=3$ $(5)$ TeV. The orange
and blue curves correspond to two different hypotheses for $\mu^{m}_{e}$ and
$\mu^{m}_{\mu}$ (see text for details).
Having laid out our strategy for extracting non-universality, we find it
relevant to draw attention to Fig. 1 of Greljo and Marzocca (2017), which
uses differential LFU (lepton flavour universality) ratios to extract
non-universality. Similar ratios are also employed by experiments Aad _et
al._ (2020), and a combination with the analysis proposed in this paper could
possibly reveal more information.
Figure 5: 2$\sigma$ (green lines) and 3$\sigma$ (red solid lines) asymmetry
sensitivity plot for the full LHC dataset at $\mathcal{L}=3$ ab$^{-1}$. The
pink-shaded region is the upper bound on $(\sigma\mathcal{B})_{tot}$ from
direct searches at the LHC.
Figure 6: Summary of exclusion in $\hat{A}$ for different masses
corresponding to the rescaled LHC bounds for the electron channel. The red
(green-dashed) lines represent 3 (2)$\sigma$ exclusion for the full LHC
dataset at $\mathcal{L}=3$ ab$^{-1}$.
The current non-universality tests at the LHC, while powerful, are limited in
two respects: 1) reduced sensitivity to heavier masses and 2) reduced
sensitivity to minor deviations from universality. These considerations
naturally lead us to evaluate and study the possible improvements with future
colliders, as discussed below.
## III Future Colliders
The advent of the FCChh is expected to provide continuity from the tail end of
the sensitivity of the HL-LHC. The higher energy and integrated luminosity of
FCChh will allow one to extend the discovery reach toward both higher masses
and smaller couplings, and to increase the sensitivity for charged lepton
flavour non-universality. This makes a future collider all the more relevant,
not only for explorations deeper in the UV regions of phase space but also
for its enhanced sensitivity to minor deviations from universality.
We begin with the discovery prospects of such states at the FCC. For the
purpose of the FCC studies, we use the FCC-hh card reported in de Favereau
_et al._ (2014). One notable difference between the HL-LHC detectors and
those contemplated for the FCC-hh is that the latter are expected to have
relatively similar reconstruction resolutions for electron and muon momenta.
This is particularly true for lower masses as compared to the heavier ones,
as shown in Fig. 7, where the effect of the $Z^{\prime}$ mass resolution on
the expected signal sensitivity for the FCC is reported.
Figure 7: Contours of total signal significance as a function of the
branching fraction into the leptons for 5 TeV (left) and 15 TeV (right)
computed at $\mathcal{L}=10$ ab$^{-1}$ for the FCC.
The philosophy is the same as for the corresponding LHC plot in Fig. 2. All
the points on the outer edge of the plot satisfy
$(\sigma\mathcal{B})_{e}+(\sigma\mathcal{B})_{\mu}=0.1$ fb. The behaviour of
increasing asymmetry between the leptons is similar to that at the LHC,
albeit at much higher masses: at the LHC one expects a symmetric
reconstruction around the scale of the $Z$ boson mass, with the asymmetry
increasing progressively.
Our procedure to evaluate the signal discovery significance for the FCC is
identical to the one shown for the HL-LHC. For continuity, we begin with
$M_{Z^{\prime}}=5$ TeV and compare the FCC results with those at the HL-LHC,
as shown in Fig. 8.
The results are compared at the end of the expected run of the corresponding
machines: 3 ab$^{-1}$ for the HL-LHC and 30 ab$^{-1}$ for the FCC. All the
lines represent the 3$\sigma$ sensitivity to non-universality. The FCC curve
is characterized by two distinct features: A) a more symmetric sensitivity on
either side of $\hat{A}=0$, owing to the symmetric reconstruction for a 5 TeV
$Z^{\prime}$ mass, and B) a higher sensitivity to minor deviations from
universality, corresponding to the regions around $\hat{A}=0$.
Figure 8: 3$\sigma$ asymmetry sensitivity plot with respect to the expected
discovery significance in the electron channel for the FCC (blue-solid) at a
luminosity of 30 ab$^{-1}$ and for the LHC (orange-dashed) at a luminosity of
3 ab$^{-1}$ for a $Z^{\prime}$ mass of 5 TeV.
FCC results for a $Z^{\prime}$ mass of 5 and 10 TeV are shown in Fig. 9. As
noted before, the strength of the FCC at 30 ab$^{-1}$ is demonstrated by its
ability to probe regions very close to $\hat{A}=0$.
Figure 9: 3$\sigma$ (red shaded region) and 2$\sigma$ (green lines) asymmetry
sensitivity plot for the FCC with respect to the expected discovery
significance in the electron channel for a $Z^{\prime}$ mass of 5 and 10 TeV
with a luminosity of 30 ab$^{-1}$. The red shaded region illustrates regions
of non-universality that can be excluded at $3\sigma$ at this luminosity.
Table 1 summarizes the $\hat{A}$ bounds for a $Z^{\prime}$ of 5 TeV and 10
TeV at $Z_{tot}$ equal to 10 and 15, for the HL-LHC and the FCC.
$m_{Z^{\prime}}$ | $Z_{tot}$ | HL-LHC | FCC
---|---|---|---
$5$ TeV | 10 | $(-0.95,0.76)$ | $(-0.53,0.52)$
| 15 | – | $(-0.36,0.35)$
$10$ TeV | 10 | – | $(-0.63,0.55)$
| 15 | – | $(-0.42,0.38)$
Table 1: 3$\sigma$ $\hat{A}$ bounds for a $5$ and $10$ TeV $Z^{\prime}$ at
the $Z_{tot}=10,15$ level for the HL-LHC and the FCC. We do not quote the
sensitivity for 10 TeV at the HL-LHC as it is out of reach for practical
values of $\sigma\mathcal{B}$.
## IV Conclusions
In this work, using a simple test statistic, we present sensitivity
projections for testing charged lepton flavour universality in dilepton decays
of a neutral heavy boson, should it be discovered at the HL-LHC or the
FCC-hh. While powerful, the HL-LHC limits show reduced sensitivity to heavier
$Z^{\prime}$ masses and to minor deviations from lepton universality. This
motivates a detailed analysis at the future FCC-hh machine, where the
differences between the electrons and the muons are ironed out for relatively
lighter masses. Furthermore, this machine is sensitive to minor deviations
from universality. This work also offers a nice complementarity between the
observations in flavour factories and direct searches. The strategy can also
be extended to tau final states, which are mainly identified by their
hadronic decays. Using the techniques introduced in this paper and adopting
improved identification criteria for the tau will enable us to get a complete
picture of (non-)universality in the neutral current sector.
## V Acknowledgements
We are grateful to M. Mangano for his continuous suggestions throughout the
course of the project. F.C. and A.I. wish to thank Antonio Giannini for his
help with the computation of signal sensitivities in the initial stages of
the project. G.D. and A.I. thank Alberto Orso Maria Iorio for useful
discussions. A.I. wishes to thank Michael Winn for useful observations during
the GdR-InF 2019 meeting. We wish to thank Sabyasachi Chakraborty, Seema
Sharma and Tuhin Roy for a careful reading of the manuscript and several
useful comments. A.I. would like to thank CEFIPRA for support under the
project “Composite Models at the Interface of Theory and Phenomenology”
(Project No. 5904-C).
G.D. was supported in part by MIUR under Project No. 2015P5SBHT and by the
INFN research initiative ENP. G.D. thanks “Satish Dhawan Visiting Chair
Professorship” at the Indian Institute of Science.
## References
* Aaij _et al._ (2019) R. Aaij _et al._ (LHCb), Phys. Rev. Lett. 122, 191801 (2019), arXiv:1903.09252 [hep-ex] .
* Aaij _et al._ (2017) R. Aaij _et al._ (LHCb), JHEP 08, 055 (2017), arXiv:1705.05802 [hep-ex] .
* Donini _et al._ (1997) A. Donini, F. Feruglio, J. Matias, and F. Zwirner, Nucl. Phys. B507, 51 (1997), arXiv:hep-ph/9705450 [hep-ph] .
* Gherghetta and Pomarol (2000) T. Gherghetta and A. Pomarol, Nucl. Phys. B586, 141 (2000), arXiv:hep-ph/0003129 [hep-ph] .
* Gauld _et al._ (2014) R. Gauld, F. Goertz, and U. Haisch, Phys. Rev. D89, 015005 (2014), arXiv:1308.1959 [hep-ph] .
* Glashow _et al._ (2015) S. L. Glashow, D. Guadagnoli, and K. Lane, Phys. Rev. Lett. 114, 091801 (2015), arXiv:1411.0565 [hep-ph] .
* Bhattacharya _et al._ (2015) B. Bhattacharya, A. Datta, D. London, and S. Shivashankara, Phys. Lett. B742, 370 (2015), arXiv:1412.7164 [hep-ph] .
* Crivellin _et al._ (2015a) A. Crivellin, G. D’Ambrosio, and J. Heeck, Phys. Rev. Lett. 114, 151801 (2015a), arXiv:1501.00993 [hep-ph] .
* Crivellin _et al._ (2015b) A. Crivellin, G. D’Ambrosio, and J. Heeck, Phys. Rev. D 91, 075006 (2015b), arXiv:1503.03477 [hep-ph] .
* Aristizabal Sierra _et al._ (2015) D. Aristizabal Sierra, F. Staub, and A. Vicente, Phys. Rev. D92, 015001 (2015), arXiv:1503.06077 [hep-ph] .
* Crivellin _et al._ (2015c) A. Crivellin, L. Hofer, J. Matias, U. Nierste, S. Pokorski, and J. Rosiek, Phys. Rev. D92, 054013 (2015c), arXiv:1504.07928 [hep-ph] .
* Celis _et al._ (2015) A. Celis, J. Fuentes-Martin, M. Jung, and H. Serodio, Phys. Rev. D92, 015007 (2015), arXiv:1505.03079 [hep-ph] .
* Bélanger _et al._ (2015) G. Bélanger, C. Delaunay, and S. Westhoff, Phys. Rev. D92, 055021 (2015), arXiv:1507.06660 [hep-ph] .
* Gripaios _et al._ (2016) B. Gripaios, M. Nardecchia, and S. A. Renner, JHEP 06, 083 (2016), arXiv:1509.05020 [hep-ph] .
* Allanach _et al._ (2016) B. Allanach, F. S. Queiroz, A. Strumia, and S. Sun, Phys. Rev. D93, 055045 (2016), [Erratum: Phys. Rev.D95,no.11,119902(2017)], arXiv:1511.07447 [hep-ph] .
* Fuyuto _et al._ (2016) K. Fuyuto, W.-S. Hou, and M. Kohda, Phys. Rev. D93, 054021 (2016), arXiv:1512.09026 [hep-ph] .
* Chiang _et al._ (2016) C.-W. Chiang, X.-G. He, and G. Valencia, Phys. Rev. D93, 074003 (2016), arXiv:1601.07328 [hep-ph] .
* Boucenna _et al._ (2016a) S. M. Boucenna, A. Celis, J. Fuentes-Martin, A. Vicente, and J. Virto, Phys. Lett. B760, 214 (2016a), arXiv:1604.03088 [hep-ph] .
* Boucenna _et al._ (2016b) S. M. Boucenna, A. Celis, J. Fuentes-Martin, A. Vicente, and J. Virto, JHEP 12, 059 (2016b), arXiv:1608.01349 [hep-ph] .
* Celis _et al._ (2017) A. Celis, W.-Z. Feng, and M. Vollmann, Phys. Rev. D95, 035018 (2017), arXiv:1608.03894 [hep-ph] .
* Altmannshofer _et al._ (2016) W. Altmannshofer, S. Gori, S. Profumo, and F. S. Queiroz, JHEP 12, 106 (2016), arXiv:1609.04026 [hep-ph] .
* Crivellin _et al._ (2017) A. Crivellin, J. Fuentes-Martin, A. Greljo, and G. Isidori, Phys. Lett. B766, 77 (2017), arXiv:1611.02703 [hep-ph] .
* Garcia Garcia (2017) I. Garcia Garcia, JHEP 03, 040 (2017), arXiv:1611.03507 [hep-ph] .
* Bečirević _et al._ (2016) D. Bečirević, O. Sumensari, and R. Zukanovich Funchal, Eur. Phys. J. C76, 134 (2016), arXiv:1602.00881 [hep-ph] .
* Bhattacharya _et al._ (2017) B. Bhattacharya, A. Datta, J.-P. Guévin, D. London, and R. Watanabe, JHEP 01, 015 (2017), arXiv:1609.09078 [hep-ph] .
* Bhatia _et al._ (2017) D. Bhatia, S. Chakraborty, and A. Dighe, JHEP 03, 117 (2017), arXiv:1701.05825 [hep-ph] .
* Cline _et al._ (2017) J. M. Cline, J. M. Cornell, D. London, and R. Watanabe, Phys. Rev. D95, 095015 (2017), arXiv:1702.00395 [hep-ph] .
* Schael _et al._ (2006) S. Schael _et al._ (ALEPH, DELPHI, L3, OPAL, SLD, LEP Electroweak Working Group, SLD Electroweak Group, SLD Heavy Flavour Group), Phys. Rept. 427, 257 (2006), arXiv:hep-ex/0509008 .
* Sirunyan _et al._ (2018) A. M. Sirunyan _et al._ (CMS), JHEP 06, 120 (2018), arXiv:1803.06292 [hep-ex] .
* Aad _et al._ (2019) G. Aad _et al._ (ATLAS), Phys. Lett. B796, 68 (2019), arXiv:1903.06248 [hep-ex] .
* Alloul _et al._ (2014) A. Alloul, N. D. Christensen, C. Degrande, C. Duhr, and B. Fuks, Comput. Phys. Commun. 185, 2250 (2014), arXiv:1310.1921 [hep-ph] .
* Alwall _et al._ (2014) J. Alwall, R. Frederix, S. Frixione, V. Hirschi, F. Maltoni, O. Mattelaer, H. S. Shao, T. Stelzer, P. Torrielli, and M. Zaro, JHEP 07, 079 (2014), arXiv:1405.0301 [hep-ph] .
* Sjostrand _et al._ (2008) T. Sjostrand, S. Mrenna, and P. Z. Skands, Comput. Phys. Commun. 178, 852 (2008), arXiv:0710.3820 [hep-ph] .
* de Favereau _et al._ (2014) J. de Favereau, C. Delaere, P. Demin, A. Giammanco, V. Lemaître, A. Mertens, and M. Selvaggi (DELPHES 3), JHEP 02, 057 (2014), arXiv:1307.6346 [hep-ex] .
* Cowan _et al._ (2011) G. Cowan, K. Cranmer, E. Gross, and O. Vitells, Eur. Phys. J. C71, 1554 (2011), [Erratum: Eur. Phys. J.C73,2501(2013)], arXiv:1007.1727 [physics.data-an] .
* E.Gross and Vitells (2010) E. Gross and O. Vitells, Eur. Phys. J. C 70, 525 (2010).
* Greljo and Marzocca (2017) A. Greljo and D. Marzocca, Eur. Phys. J. C 77, 548 (2017), arXiv:1704.09015 [hep-ph] .
* Aad _et al._ (2020) G. Aad _et al._ (ATLAS), (2020), arXiv:2007.14040 [hep-ex] .
# On the Verification and Validation of AI Navigation Algorithms
Ivan Porres1, Sepinoud Azimi1, Sébastien Lafond1, Johan Lilius1, Johanna
Salokannel2 and Mirva Salokorpi2
1Faculty of Science and Engineering, Åbo Akademi University, Turku, Finland
<EMAIL_ADDRESS>
2Novia University of Applied Sciences, Turku, Finland
<EMAIL_ADDRESS>
###### Abstract
This paper explores the state of the art on methods to verify and validate
navigation algorithms for autonomous surface ships. We perform a systematic
mapping study to find research works published in the last 10 years proposing
new algorithms for autonomous navigation and collision avoidance, and we
extract which verification and validation approaches have been applied to
these algorithms. We observe that most research works use simulations to
validate their algorithms. However, these simulations often involve just a
few scenarios designed manually. This raises the question of whether the
algorithms have been validated properly. To remedy this, we propose the use
of a systematic scenario-based testing approach to validate navigation
algorithms extensively.
## I Introduction
Maritime Autonomous Surface Ships (MASS) of the future will exhibit an
increasing range of self-sufficiency. Autonomous capabilities include
relieving the vessel operator from constant supervision by taking over
certain responsibilities, using partial or complete remote operation of the
vessel, or partial or complete unsupervised navigation.
An important motivation for autonomous functions and increased intelligence
in ships is to improve the safety and efficiency of operations and to
decrease the environmental footprint. Despite the advances in technologies
and the constant striving towards improved safety, accidents still happen. In
2017 alone, 3301 accidents were reported by the European Maritime Safety
Agency, and over 53% of all reported accidents were collisions, contacts or
grounding occurrences, all due to navigational error [1]. The development of
autonomous navigational capabilities is seen as a possible solution to
dramatically reduce the number of accidents due to navigational error.
The use of autonomous navigational functions in vessels raises, however, the
question of what may happen if these autonomous functions have design
defects. This question is addressed by Valdez et al. [2], who present a
hazard analysis for the design phase of autonomous vessels. In this study,
the authors identify AI software failure as a hazard that can lead to many of
the identified accidents. Valdez proposes a number of safety controls to
eliminate or reduce the likelihood that software hazards appear, but this
study does not address how to implement these safety controls. If we intend
to use AI software components in navigation algorithms, we must ensure that
they work as expected, and we should be able to analyze and reveal whether
these components may contain faults.
Traditionally, navigation algorithms have been based on path planning and
optimization and have been designed manually. Programming is a notoriously
complex task, and developing defect-free programs requires the application of
correct-by-construction methods or an extensive verification and validation
effort.
An alternative to path planning and optimization algorithms is the use of
machine learning (ML), reinforcement learning (RL) and neural networks (NN).
Machine learning has shown staggering success in autonomous cars. Machine
learning is known to succeed and outperform traditional approaches especially
in vaguely defined problem domains, where it is difficult, if not impossible,
to create a full formal specification of the phenomenon under study. We
consider this to be the case for COLREGs-based navigation, and we conjecture
that an ML-based navigation approach can outperform existing search-based and
optimization algorithms. Still, modern AI software may also contain faults
introduced during the learning process of a neural network. As an example,
Katz [3] has analyzed the deep neural network implementation of the
next-generation airborne collision avoidance system for unmanned aircraft
(ACAS Xu) and found that several logical requirements did not hold for the
system, as well as some adversarial perturbations that could lead to
erroneous collision avoidance actions.
This paper explores the state of the art related to the methods used to verify
and validate surface ship navigation algorithms. For this, we have performed a
systematic mapping study to find research works published in the last 10 years
proposing new algorithms and we have extracted what verification and
validation approaches have been applied on these algorithms. We have observed
that most research works use simulations to validate their algorithms.
However, these simulations involve just a few scenarios, often designed
manually. Therefore, we propose the use of a systematic scenario-based testing
approach to validate navigation algorithms thoroughly.
We proceed as follows. The design of the mapping study is presented in
Section 2, while its main results are presented in Sections 3 and 4. Finally,
Section 5 describes our proposal for a method for the validation of
navigation algorithms using systematic scenario-based testing.
## II Study Design
We have adapted and applied the systematic mapping approach described in [4]
to the autonomous maritime domain. In this study, we first defined the
appropriate research questions and then conducted the search for the relevant
papers. Subsequently, we filtered the obtained papers based on our predefined
inclusion and exclusion criteria. Our study eventually resulted in a
systematic map of the field.
### II-A Research questions
The first step consists of defining the research questions. In this study, we
define three main research questions. In order to structure the answer to the
main questions, we also defined a few sub-questions. Our research questions
(RQs) are as follows.
* RQ1
What approaches for navigation or traffic avoidance in autonomous ships have
been presented in the research literature?
  (a) When and where have they been published?
  (b) What are the overall approaches?
  (c) Do they involve a single ship or a swarm of ships?
* RQ2
What are the requirements for these approaches as presented in the research
literature?
  (a) How is safety defined?
  (b) Are the requirements COLREGs compliant?
* RQ3
How are these approaches verified and validated in the research literature?
### II-B Search Strategy
The primary search is done in the _Web of Knowledge_ database, which includes
the core _Web of Science_ database as well as several regional databases. The
core Web of Science database consists of: _Science Citation Index Expanded
(1945-present)_ , _Social Sciences Citation Index (1956-present)_ , _Arts &
Humanities Citation Index (1975-present)_, _Conference Proceedings Citation
Index- Science (1990-present)_ , _Conference Proceedings Citation Index-
Social Science & Humanities (1990-present)_, and _Emerging Sources Citation
Index (2015-present)_. We considered papers published between 2010 and 2020.
We defined the following criteria for our primary search.
(maritime $\lor$ marine $\lor$ ship $\lor$ vessel) $\wedge$ (autonomous
navigation $\lor$ autonomous traffic avoidance $\lor$ collision avoidance)
$\wedge$ (algorithm $\lor$ AI $\lor$ artificial intelligence $\lor$ machine
learning $\lor$ ML $\lor$ optimization $\lor$ optimisation) $\wedge$
(validation $\lor$ verification $\lor$ testing $\lor$ simulation $\lor$
quality $\lor$ safe $\lor$ safety)
This primary search resulted in the collection of 427 papers.
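For reproducibility, the boolean criteria above can also be assembled
programmatically. The sketch below builds a Web of Science-style topic-search
string from the four term groups; the `TS=` field tag and the exact quoting
conventions are our assumptions about the database syntax, not a record of
the query we actually submitted.

```python
# Hedged reconstruction of the primary search query as an advanced
# search string; field tags and quoting rules are assumptions.
domain = ["maritime", "marine", "ship", "vessel"]
task = ["autonomous navigation", "autonomous traffic avoidance",
        "collision avoidance"]
method = ["algorithm", "AI", "artificial intelligence", "machine learning",
          "ML", "optimization", "optimisation"]
quality = ["validation", "verification", "testing", "simulation",
           "quality", "safe", "safety"]

def clause(terms):
    # Quote multi-word phrases; join alternatives with OR.
    return "(" + " OR ".join(f'"{t}"' if " " in t else t for t in terms) + ")"

query = " AND ".join(clause(g) for g in (domain, task, method, quality))
print(f"TS={query}")  # TS = topic search (title/abstract/keywords)
```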
### II-C Inclusion and Exclusion
At this step, we performed a screening of the papers, considering only
relevant papers based on our inclusion and exclusion criteria. The adopted
inclusion criteria are: (1) only peer-reviewed research papers published in a
journal or a conference proceeding; (2) only papers related to the theme of
surface maritime vessels in their title, abstract or keywords; (3) only
papers that mention a machine learning or optimization algorithm in their
title, abstract or keywords. The exclusion criteria were: (1) papers
mentioning “maritime vessel” in their abstract that cannot be considered as
describing research on autonomy; (2) duplicate papers; (3) papers containing
keywords related to our study but discovered to be false positives (e.g.
review papers, studies on underwater vessels). The full list of papers was
equally divided between the authors to apply the inclusion/exclusion criteria
and filter the relevant papers. The final list of papers after applying the
inclusion/exclusion criteria consisted of 132 papers.
The papers identified in the screening step were then randomly distributed
among the four authors for the full reading step. As such, each paper was
processed by a second author to avoid bias. The full list of processed papers
can be found in the Appendix.
## III Data extraction and classification
For the data extraction we followed the template presented in Table I.
Data Item | Value | RQ
---|---|---
General | |
Study ID | Integer |
Paper Title | Title of the Paper |
Authors’ Names | List of Authors |
Year of Publication | Calendar Year | RQ1
Venue | Publication Venue Name | RQ1
Process | |
Overall Approach | Algorithmic Approach | RQ1
Single or Swarm | Binary | RQ1
Safety | Safety Definition | RQ2
(Non-)compliance with Regulations | Binary | RQ2
Verification & Validation | V&V Approach | RQ3
TABLE I: Data Extraction Form
We used the extracted data to answer our main research questions. Figure 1,
RQ1(a), presents the distribution of the studies between the years 2010 and
2020. As can be seen from the graph, the number of publications in the field
experienced a dramatic boost in 2017. The majority of the studies were
published as journal articles, followed by conference papers and whole books:
87.5%, 9% and 3.5% respectively, see Figure 2, RQ1(a). This is to be expected
as the interest in autonomous vehicles has been piqued over the past few
years. As can be observed from Figure 3, the majority of the papers opted for
optimization as their overall approach, RQ1(b). This indicates that the use
of AI is still in its infancy when it comes to autonomous navigation for
maritime surface vessels. Based on the data analysis results, 82% of the
studies involved only a single target ship, whereas the others focus on a
swarm of ships, RQ1(c).
Figure 1: Publication Year
Figure 2: Publication Type
Figure 3: Overall Approach
The majority of the articles (82%) defined safety based on the values of
either the Time to Closest Point of Approach (TCPA) or the Distance to
Closest Point of Approach (DCPA), RQ2(a). Only 48% of the papers chose to
comply with COLREGs in their study design, RQ2(b), see Figure 4.
Figure 4: COLREG Compliance
The majority of the papers (86 out of 132) identified in this study used
simulation approaches to validate their results, with a small number of
scenarios (ranging from 1 to 12). Three studies used either a real boat or a
model boat for the validation [76, 107, 25], and the rest did not use any
verification and validation approach, RQ3. The distribution of validation
methods is depicted in Figure 5.
Figure 5: Verification & Validation Approaches
## IV Current practice on the Verification and Validation of AI Navigation
Algorithms
The verification and validation of navigation algorithms is an important
issue, since software failures have been identified as a hazard that can lead
to many accidents in vessels with autonomous functions. To avoid such
hazards, Perera proposes a 3-level approach to validate the behaviour of
autonomous vessels [8]. Level 1 in Perera’s classification requires the use
of a software simulation for the motion of all vessels. A level 2 testing
system requires that the own ship is a full-scale or model vessel that
navigates in restricted waters, while the other ships are simulated. In
contrast, a level 3 system requires that all involved vessels navigate in
open seas.
The mapping study shows that most papers use software simulation to validate
the proposed results. In these simulations, the simulation starts from a
given scenario that describes the initial positions and speeds of two or more
vessels. The scenario is then animated in a physics-based simulator and the
performance of the AI agents under test is evaluated. This corresponds to
Level 1 validation in Perera’s classification. However, we have observed that
most of these works simulate just a few scenarios and that these scenarios
are designed manually, often to represent standard situations such as an
overtaking or a crossing. Also, a considerable number of research articles do
not contain any verification or validation of their proposed results.
Existing work on verification and validation in the automotive domain
emphasizes the need to use a large number of specially designed scenarios in
order to find faults in autonomous functions. We consider that the same
criteria should apply to the maritime domain and that there is a need for
domain-specific methods for the systematic verification and validation of
autonomous functions in vessels. Therefore, we propose in the next section
the use of a systematic scenario-based testing approach to validate
navigation algorithms thoroughly.
## V A Proposal for Navigation Algorithm Validation using Systematic
Scenario-Based Testing
The goal of scenario-based testing is to evaluate a large set of scenarios to
find those where the AI agents do not perform as expected. In each scenario,
the position and the velocity vector of each ship may vary, as well as their
destination way-point. An example scenario with two vessels is depicted in
Fig. 6.
Figure 6: A possible scenario
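A minimal sketch of such a scenario parametrization is given below. The field
names, units and the two-vessel example are our illustrative assumptions, not
the encoding used in [13].

```python
# Minimal sketch of a scenario parametrization for scenario-based testing.
# Field names and units are assumptions for illustration. Requires Python 3.9+.
from dataclasses import dataclass

@dataclass
class VesselState:
    x: float        # initial east position [m]
    y: float        # initial north position [m]
    speed: float    # speed over ground [m/s]
    heading: float  # course [deg, clockwise from north]
    wp_x: float     # destination way-point, east [m]
    wp_y: float     # destination way-point, north [m]

@dataclass
class Scenario:
    own_ship: VesselState
    target_ships: list[VesselState]

# A head-on style encounter between two vessels.
scenario = Scenario(
    own_ship=VesselState(0.0, 0.0, 6.0, 0.0, 0.0, 4000.0),
    target_ships=[VesselState(100.0, 4000.0, 6.0, 180.0, 100.0, 0.0)],
)
```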
We are interested in knowing whether there are scenarios where the autonomous
navigation components under study make the wrong decisions. These are
described as _challenging_ scenarios in the literature, and they lead to
undesirable outcomes such as a near miss or a collision.
Testing a single scenario for an autonomous vehicle is computationally
expensive since it requires a physics-based simulation in addition to
executing the autonomous functions. This includes updating the motion of all
the vehicles involved in the scenario as well as simulating the environment
sensed by the autonomous functions. Since there is a limited testing budget
and we want to maximize the chances to find a defect, it is therefore
desirable to select the scenarios that are considered more challenging for the
autonomous function, [9].
Several authors have proposed methods to search for challenging scenarios
efficiently [10, 11]. Abdessalem, Nejati, Briand and Stifter have proposed a
method that uses neural networks as surrogate models for the scenario fitness
functions and then genetic algorithms as a heuristic to search for
challenging scenarios [12]. This is presented as a two-phase process. First,
a set of simulations must be executed in order to create the surrogate models
of the fitness functions. Once these models have been created, the scenario
search is performed.
We have proposed a new approach for scenario-based testing that is specific
to maritime surface vehicles and that avoids the need to train surrogate
models. Our approach, presented in [13], is based on the use of a neural
network to discriminate and select scenarios that may be challenging for the
autonomous system being tested. The selected scenarios are simulated and
evaluated, and their outcomes are used to train the discriminating neural
network. Compared to other works such as [12], we combine the training of the
discriminator network and the scenario selection in one step, with the
intention of reducing the number of necessary simulations. The simulations
are evaluated by risk of collision and compliance with COLREGs.
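The loop below sketches this combined selection-and-training process end to
end. The one-feature scenarios, the threshold-based simulator and the
running-mean discriminator are trivial stand-ins chosen so the sketch runs
as-is; the actual system in [13] uses a physics-based simulator and a neural
network discriminator.

```python
# Schematic select-simulate-train loop for discriminator-guided
# scenario-based testing. All components here are illustrative stand-ins.
import random

def sample_scenario():
    # Scenario reduced to one feature: predicted closest approach [m].
    return {"cpa": random.uniform(0.0, 2000.0)}

def simulate(scenario):
    # Stand-in "expensive" evaluation: fail if the approach is too close.
    return scenario["cpa"] < 300.0  # True => challenging outcome

class Discriminator:
    """Running-mean stand-in for the discriminating neural network."""
    def __init__(self):
        self.fail_cpas = []
    def predict(self, s):
        if not self.fail_cpas:
            return random.random()                # untrained: random ranking
        mean_fail = sum(self.fail_cpas) / len(self.fail_cpas)
        return 1.0 / (1.0 + abs(s["cpa"] - mean_fail))
    def train(self, batch, outcomes):
        self.fail_cpas += [s["cpa"] for s, bad in zip(batch, outcomes) if bad]

model = Discriminator()
challenging = []
for _ in range(10):                               # testing rounds
    pool = sorted((sample_scenario() for _ in range(200)),
                  key=model.predict, reverse=True)
    batch = pool[:20]                             # limited simulation budget
    outcomes = [simulate(s) for s in batch]       # expensive step
    model.train(batch, outcomes)                  # labels: pass/fail
    challenging += [s for s, bad in zip(batch, outcomes) if bad]
print(f"{len(challenging)} challenging scenarios found in 200 simulations")
```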
To evaluate our approach, we have tested a collision avoidance algorithm
based on a neural network trained using reinforcement learning. The
evaluation task was to create 6000 simulation scenarios, each one depicting a
different initial situation. Our experimental results show that the proposed
testing method generates test suites composed mostly of challenging
scenarios. This allows us to validate quickly whether the navigation
algorithm under test can operate safely while abiding by the COLREGs.
## VI Conclusions
This paper explores the state of the art on the methods to verify and validate
navigation algorithms for autonomous surface ships by carrying out a
systematic mapping study. The mapping study reveals that most research works
use simulations to validate their algorithms. Finally, we have proposed the
use of a systematic scenario-based testing approach to validate navigation
algorithms extensively.
## References
* [1] EMSA, “Annual overview of marine casualties and incidents 2018.” European Maritime Safety Agency E.M.S. Agency, 2018.
* [2] O. A. V. Banda, S. Kannos, F. Goerlandt, P. H. A. J. M. van Gelder, M. Bergström, and P. Kujala, “A systemic hazard analysis and management process for the concept design phase of an autonomous vessel,” Reliab. Eng. Syst. Saf., vol. 191, 2019.
* [3] G. Katz, C. W. Barrett, D. L. Dill, K. Julian, and M. J. Kochenderfer, “Reluplex: An efficient SMT solver for verifying deep neural networks,” in Computer Aided Verification - 29th International Conference, CAV 2017, Heidelberg, Germany, July 24-28, 2017, Proceedings, Part I (R. Majumdar and V. Kuncak, eds.), vol. 10426 of Lecture Notes in Computer Science, pp. 97–117, Springer, 2017.
* [4] K. Petersen, S. Vakkalanka, and L. Kuzniarz, “Guidelines for conducting systematic mapping studies in software engineering: An update,” Information and Software Technology, vol. 64, pp. 1–18, 2015.
* [5] H. Shen, H. Hashimoto, A. Matsuda, Y. Taniguchi, D. Terada, and C. Guo, “Automatic collision avoidance of multiple ships based on deep q-learning,” Applied Ocean Research, vol. 86, pp. 268–288, 2019.
* [6] J. Xin, S. Li, J. Sheng, Y. Zhang, and Y. Cui, “Application of improved particle swarm optimization for navigation of unmanned surface vehicles,” Sensors, vol. 19, no. 14, p. 3096, 2019.
* [7] J. Han, Y. Cho, J. Kim, J. Kim, N.-s. Son, and S. Y. Kim, “Autonomous collision detection and avoidance for aragon usv: Development and field tests,” Journal of Field Robotics, 2020.
* [8] L. P. Perera, “Deep Learning Toward Autonomous Ship Navigation and Possible COLREGs Failures,” Journal of Offshore Mechanics and Arctic Engineering, vol. 142, December 2019. Article 031102.
* [9] D. Gagliardi, P. Tkachenko, and L. del Re, “Outcome oriented evaluation of autonomous driving functions,” in 57th IEEE Conference on Decision and Control, CDC 2018, Miami, FL, USA, December 17-19, 2018, pp. 6970–6975, IEEE, 2018.
* [10] R. B. Abdessalem, S. Nejati, L. C. Briand, and T. Stifter, “Testing vision-based control systems using learnable evolutionary algorithms,” in Proceedings of the 40th International Conference on Software Engineering, ICSE 2018, Gothenburg, Sweden, May 27 - June 03, 2018 (M. Chaudron, I. Crnkovic, M. Chechik, and M. Harman, eds.), pp. 1016–1026, ACM, 2018.
* [11] G. E. Mullins, P. G. Stankiewicz, and S. K. Gupta, “Automated generation of diverse and challenging scenarios for test and evaluation of autonomous vehicles,” in 2017 IEEE International Conference on Robotics and Automation, ICRA 2017, Singapore, Singapore, May 29 - June 3, 2017, pp. 1443–1450, IEEE, 2017.
* [12] R. B. Abdessalem, S. Nejati, L. C. Briand, and T. Stifter, “Testing advanced driver assistance systems using multi-objective search and neural networks,” in Proceedings of the 31st IEEE/ACM International Conference on Automated Software Engineering, ASE 2016, Singapore, September 3-7, 2016 (D. Lo, S. Apel, and S. Khurshid, eds.), pp. 63–74, ACM, 2016.
* [13] I. Porres, S. Azimi, and J. Lilius, “Scenario-based testing of a ship collision avoidance system,” in 46th Euromicro Conference on Software Engineering and Advanced Applications (SEAA), IEEE, 2020.
## Articles included in the Mapping Study
* [1] Karl Gunnar Aarsæther and Torgeir Moan. Adding the human element to ship manoeuvring simulations. The Journal of Navigation, 63(4):695, 2010.
* [2] Mohamed Abdelaal and Axel Hahn. Nmpc-based trajectory tracking and collision avoidance of unmanned surface vessels with rule-based colregs confinement. In 2016 IEEE Conference on Systems, Process and Control (ICSPC), pages 23–28. IEEE, 2016.
* [3] Jin-Hyeong Ahn, Key-Pyo Rhee, and Young-Jun You. A study on the collision avoidance of a ship using neural networks and fuzzy logic. Applied Ocean Research, 37:162–173, 2012.
* [4] Azzeddine Bakdi, Ingrid Kristine Glad, Erik Vanem, and Øystein Engelhardtsen. Ais-based multiple vessel collision and grounding risk identification based on adaptive safety domain. Journal of Marine Science and Engineering, 8(1):5, 2020.
* [5] Marco Bibuli, Yogang Singh, Sanjay Sharma, Robert Sutton, Daniel Hatton, and Asiya Khan. A two layered optimal approach towards cooperative motion planning of unmanned surface vehicles in a constrained maritime environment. IFAC-PapersOnLine, 51(29):378–383, 2018.
* [6] Morten Breivik et al. Mpc-based mid-level collision avoidance for asvs using nonlinear programming. In 2017 IEEE Conference on Control Technology and Applications (CCTA), pages 766–772. IEEE, 2017.
* [7] Mauro Candeloro, Anastasios M Lekkas, and Asgeir J Sørensen. A voronoi-diagram-based dynamic path-planning system for underactuated marine vessels. Control Engineering Practice, 61:41–54, 2017.
* [8] Wu Chao, Ma Feng, Wu Qing, and Wang Shuwu. A situation awareness approach for usv based on artificial potential fields. In 2017 4th International Conference on Transportation Information and Safety (ICTIS), pages 232–235. IEEE, 2017.
* [9] Chen Chen, Xian-Qiao Chen, Feng Ma, Xiao-Jun Zeng, and Jin Wang. A knowledge-free path planning approach for smart ships based on reinforcement learning. Ocean Engineering, 189:106299, 2019.
* [10] Siyu Guo, Xiuguo Zhang, Yisong Zheng, and Yiquan Du. An Autonomous Path Planning Model for Unmanned Ships Based on Deep Reinforcement Learning. Sensors, 20(2), JAN 2020.
* [11] Linying Chen, Hans Hopman, and Rudy R Negenborn. Distributed model predictive control for vessel train formations of cooperative multi-vessel systems. Transportation Research Part C: Emerging Technologies, 92:101–118, 2018.
* [12] PF Chen, PHAJM van Gelder, and JM Mou. Integration of elliptical ship domains and velocity obstacles for ship collision candidate detection. TransNav, International Journal on Marine Navigation and Safety od Sea Transportation, 13(4), 2019.
* [13] Jinwoo Choi, Jeonghong Park, Jongdae Jung, Yoongeon Lee, and Hyun-Taek Choi. Development of an autonomous surface vehicle and performance evaluation of autonomous navigation technologies. International Journal of Control, Automation and Systems, 18(3):535–545, 2020.
* [14] Shejun Deng, Yuyi Zhong, Zhijin Wu, Min Wang, Lvkai Zhu, and Hongru Yu. Research on waterborne collision avoidance and early warning algorithm based on location data. Advances in Mechanical Engineering, 12(2):1687814020906079, 2020.
* [15] Zaopeng Dong, Tao Bao, Mao Zheng, Xin Yang, Lifei Song, and Yunsheng Mao. Heading control of unmanned marine vehicles based on an improved robust adaptive fuzzy neural network control algorithm. IEEE Access, 7:9704–9713, 2019.
* [16] Lei Du, Floris Goerlandt, Osiris A Valdez Banda, Yamin Huang, Yuanqiao Wen, and Pentti Kujala. Improving stand-on ship’s situational awareness by estimating the intention of the give-way ship. Ocean Engineering, 201:107110, 2020.
* [17] Armagan Elibol, Nuno Gracias, and Rafael Garcia. Augmented state–extended kalman filter combined framework for topology estimation in large-area underwater mapping. Journal of Field Robotics, 27(5):656–674, 2010.
* [18] Bjørn-Olav H Eriksen, Glenn Bitar, Morten Breivik, and Anastasios M Lekkas. Hybrid collision avoidance for asvs compliant with colregs rules 8 and 13–17. Frontiers in Robotics and AI, 7:11, 2020.
* [19] Bjørn-Olav H Eriksen, Morten Breivik, Erik F Wilthil, Andreas L Flåten, and Edmund F Brekke. The branching-course model predictive control algorithm for maritime collision avoidance. Journal of Field Robotics, 36(7):1222–1249, 2019.
* [20] R Fişkin, H Kişi, and E Nasibov. A research on techniques, models and methods proposed for ship collision avoidance path planning problem. International Journal of Maritime Engineering, 160(A2):187–206, 2018.
* [21] R Fiskin, E Nasibov, and MO Yardimci. Deterministic-based ship anti-collision route optimization with web-based application. International Journal of Maritime Engineering, 161:A345–A356, 2019.
* [22] Ming-yu Fu, Sha-sha Wang, and Yuan-hui Wang. Multi-behavior fusion based potential field method for path planning of unmanned surface vessel. China Ocean Engineering, 33(5):583–592, 2019.
* [23] Xiongfei Geng, Yongcai Wang, Ping Wang, and Baochen Zhang. Motion plan of maritime autonomous surface ships by dynamic programming for collision avoidance and speed optimization. Sensors, 19(2):434, 2019.
* [24] Siyu Guo, Xiuguo Zhang, Yisong Zheng, and Yiquan Du. An autonomous path planning model for unmanned ships based on deep reinforcement learning. Sensors, 20(2):426, 2020.
* [25] Jungwook Han, Yonghoon Cho, Jonghwi Kim, Jinwhan Kim, Nam-sun Son, and Sun Young Kim. Autonomous collision detection and avoidance for aragon usv: Development and field tests. Journal of Field Robotics, 2020.
* [26] Qilong Han, Xiao Yang, Hongtao Song, Shanshan Sui, Hui Zhang, and Zaiqiang Yang. Whale optimization algorithm for ship path optimization in large-scale complex marine environment. IEEE Access, 8:57168–57179, 2020.
* [27] Wei He, Shuo Xie, Xinglong Liu, Tao Lu, Tianjiao Luo, Miguel Angel Sotelo, and Zhixiong Li. A novel image recognition algorithm of target identification for unmanned surface vehicles based on deep learning. Journal of Intelligent & Fuzzy Systems, 37(4):4437–4447, 2019\.
* [28] Ramdane Hedjar and Messaoud Bounkhel. An automatic collision avoidance algorithm for multiple marine surface vehicles. International Journal of Applied Mathematics and Computer Science, 29(4):759–768, 2019.
* [29] MA Hinostroza and C Guedes Soares. Collision avoidance, guidance and control system for autonomous surface vehicles in complex navigation conditions. Progress in Maritime Technology and Engineering, pp. Taylor & Francis Group, London, UK, pages 121–132, 2018.
* [30] Liang Hu, Wasif Naeem, Eshan Rajabally, Graham Watson, Terry Mills, Zakirul Bhuiyan, Craig Raeburn, Ivor Salter, and Claire Pekcan. A multiobjective optimization approach for colregs-compliant path planning of autonomous surface vehicles verified on networked bridge simulators. IEEE Transactions on Intelligent Transportation Systems, 21(3):1167–1179, 2019.
* [31] Liang Hu, Wasif Naeem, Eshan Rajabally, Graham Watson, Terry Mills, Zakirul Bhuiyan, and Ivor Salter. Colregs-compliant path planning for autonomous surface vehicles: A multiobjective optimization approach. IFAC-PapersOnLine, 50(1):13662–13667, 2017.
* [32] Qing Hu, Yi Jiang, Jingbo Zhang, Xiaowen Sun, and Shufang Zhang. Development of an automatic identification system autonomous positioning system. Sensors, 15(11):28574–28591, 2015.
* [33] Yamin Huang, Linying Chen, Pengfei Chen, Rudy R Negenborn, and PHAJM van Gelder. Ship collision avoidance methods: State-of-the-art. Safety science, 121:451–473, 2020.
* [34] Yamin Huang, Linying Chen, and PHAJM van Gelder. Generalized velocity obstacle algorithm for preventing ship collisions at sea. Ocean Engineering, 173:142–156, 2019.
* [35] Yamin Huang, PHAJM van Gelder, and Yuanqiao Wen. Velocity obstacle algorithms for collision prevention at sea. Ocean Engineering, 151:308–321, 2018.
* [36] Timur İnan and Ahmet Fevzi Baba. Particle swarm optimization-based collision avoidance. Turkish Journal of Electrical Engineering and Computer Science, 27(3):2137–2155, 2019.
* [37] Min-Gi Jeong, Eun-Bang Lee, and Moonjin Lee. An adaptive route plan technique with risk contour for autonomous navigation of surface vehicles. In OCEANS 2018 MTS/IEEE Charleston, pages 1–4. IEEE, 2018.
* [38] Min-Gi Jeong, Eun-Bang Lee, Moonjin Lee, and Jung-Yeul Jung. Multi-criteria route planning with risk contour map for smart navigation. Ocean Engineering, 172:72–85, 2019.
* [39] Yu-Tao Kang, Wei-Jiong Chen, Da-Qi Zhu, Jin-Hui Wang, and Qi-Miao Xie. Collision avoidance path planning for ships by particle swarm optimization. Journal of Marine Science and Technology, 26(6):777–786, 2018.
* [40] Joanna Karbowska-Chilinska, Jolanta Koszelew, Krzysztof Ostrowski, Piotr Kuczynski, Eric Kulbiej, and Piotr Wolejsza. Beam search algorithm for ship anti-collision trajectory planning. Sensors, 19(24):5338, 2019.
* [41] Heesu Kim, Sang-Hyun Kim, Maro Jeon, JaeHak Kim, Soonseok Song, and Kwang-Jun Paik. A study on path optimization method of an unmanned surface vehicle under environmental loads using genetic algorithm. Ocean Engineering, 142:616–624, 2017.
* [42] Łukasz Kuczkowski and Roman Śmierzchalski. Comparison of single and multi-population evolutionary algorithm for path planning in navigation situation. In Solid State Phenomena, volume 210, pages 166–177. Trans Tech Publ, 2014.
* [43] Lukasz Kuczkowski and Roman Smierzchalski. Termination functions for evolutionary path planning algorithm. In 2014 19th International Conference on Methods and Models in Automation and Robotics (MMAR), pages 636–640. IEEE, 2014.
* [44] Yoshiaki Kuwata, Michael T Wolf, Dimitri Zarzhitsky, and Terrance L Huntsberger. Safe maritime navigation with colregs using velocity obstacles. In 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 4728–4734. IEEE, 2011.
* [45] A Lazarowska. An efficient graph theory-based algorithm for ship trajectory planning. International Journal of Maritime Engineering, 161:155–161, 2019.
* [46] Agnieszka Lazarowska. Swarm intelligence approach to safe ship control. Polish Maritime Research, 22(4):34–40, 2015.
* [47] Agnieszka Lazarowska. Research on algorithms for autonomous navigation of ships. WMU Journal of Maritime Affairs, 18(2):341–358, 2019.
* [48] Man-Chun Lee, Chung-Yuan Nieh, Hsin-Chuan Kuo, and Juan-Chen Huang. An automatic collision avoidance and route generating algorithm for ships based on field model. Journal of Marine Science and Technology, 27(2):101–113, 2019.
* [49] Jinxin Li, Hongbo Wang, Wei Zhao, and Yuanyuan Xue. Ship’s trajectory planning based on improved multiobjective algorithm for collision avoidance. Journal of Advanced Transportation, 2019, 2019.
* [50] Shijie Li, Jialun Liu, and Rudy R Negenborn. Distributed coordination for collision avoidance of multiple ships considering ship maneuverability. Ocean Engineering, 181:212–226, 2019.
* [51] Shijie Li, Jialun Liu, Rudy R Negenborn, and Feng Ma. Optimizing the joint collision avoidance operations of multiple ships from an overall perspective. Ocean Engineering, 191:106511, 2019.
* [52] Weifeng Li and Wenyao Ma. Simulation on vessel intelligent collision avoidance based on artificial fish swarm algorithm. Polish Maritime Research, 23(s1):138–143, 2016.
* [53] J Lisowski. Multi-criteria optimization of multi-step matrix game in collision avoidance of ships. TransNav: International Journal on Marine Navigation and Safety of Sea Transportation, 13, 2019.
* [54] Józef Lisowski. Computational intelligence methods of a safe ship control. Procedia computer science, 35:634–643, 2014.
* [55] Deli Liu, Dong Xu, Nan Wang, Ziying Zhang, and Pingpeng Tang. Dynamic replanning algorithm of local trajectory for unmanned surface vehicle. In 2016 35th Chinese Control Conference (CCC), pages 7120–7125. IEEE, 2016.
* [56] Hongdan Liu, Sheng Liu, and Lanyong Zhang. Ship collision avoidance path planning strategy based on quantum bacterial foraging algorithm. In 2015 2nd International Conference on Electrical, Computer Engineering and Electronics. Atlantis Press, 2015.
* [57] Hongdan Liu, Rong Sun, and Qi Liu. The tactics of ship collision avoidance based on quantum-behaved wolf pack algorithm. Concurrency and Computation: Practice and Experience, 32(6):e5196, 2020.
* [58] Shuchen Liu, Sylvain Roy, Eloy Pairet-Garcia, Jan-Jöran Gehrt, Friederike Siemer, Christof Büskens, Dirk Abel, and René Zweigel. Case study: Networked control for optimal maneuvering of autonomous vessels. IFAC-PapersOnLine, 52(8):440–445, 2019.
* [59] Xinyu Liu, Yun Li, Jing Zhang, Jian Zheng, and Chunxi Yang. Self-adaptive dynamic obstacle avoidance and path planning for usv under complex maritime environment. IEEE Access, 7:114945–114954, 2019.
* [60] Yuanchang Liu and Richard Bucknall. Efficient multi-task allocation and path planning for unmanned surface vehicle in support of ocean operations. Neurocomputing, 275:1550–1566, 2018.
* [61] Yuanchang Liu, Richard Bucknall, and Xinyu Zhang. The fast marching method based intelligent navigation of an unmanned surface vehicle. Ocean Engineering, 142:363–376, 2017.
* [62] Yuanchang Liu, Rui Song, Richard Bucknall, and Xinyu Zhang. Intelligent multi-task allocation and planning for multiple unmanned surface vehicles (usvs) using self-organising maps and fast marching method. Information Sciences, 496:180–197, 2019.
* [63] Hongguang Lyu and Yong Yin. Fast path planning for autonomous ships in restricted waters. Applied Sciences, 8(12):2592, 2018.
* [64] Hongguang Lyu and Yong Yin. Colregs-constrained real-time path planning for autonomous ships using modified artificial potential fields. The Journal of Navigation, 72(3):588–608, 2019.
* [65] LY Ma, Wei Xie, and HB Huang. Convolutional neural network based obstacle detection for unmanned surface vehicle. Mathematical Biosciences and Engineering: MBE, 17(1):845–861, 2019.
* [66] Yong Ma, Mengqi Hu, and Xinping Yan. Multi-objective path planning for unmanned surface vehicle with currents effects. ISA transactions, 75:137–156, 2018.
* [67] Mostefa Mohamed-Seghir. Methods based on fuzzy sets to solve problems of safe ship control. In Novel Algorithms and Techniques in Telecommunications and Networking, pages 373–377. Springer, 2010.
* [68] Wasif Naeem, Sable C Henrique, and Liang Hu. A reactive colregs-compliant navigation strategy for autonomous maritime navigation. IFAC-PapersOnLine, 49(23):207–213, 2016.
* [69] Wasif Naeem, George W Irwin, and Aolei Yang. Colregs-based collision avoidance strategies for unmanned surface vehicles. Mechatronics, 22(6):669–678, 2012.
* [70] Hanlin Niu, Al Savvaris, and Antonios Tsourdos. Usv geometric collision avoidance algorithm for multiple marine vehicles. In OCEANS 2017-Anchorage, pages 1–10. IEEE, 2017.
* [71] Bartosz Ożoga and Jakub Montewka. Towards a decision support system for maritime navigation on heavily trafficked basins. Ocean Engineering, 159:88–97, 2018.
* [72] Madhusmita Panda, Bikramaditya Das, Bidyadhar Subudhi, and Bibhuti Bhusan Pati. A comprehensive review of path planning algorithms for autonomous underwater vehicles. International Journal of Automation and Computing, pages 1–32, 2020.
* [73] Giulia Pedrielli, Yifan Xing, Jia Hao Peh, Kim Wee Koh, and Szu Hui Ng. A real time simulation optimization framework for vessel collision avoidance and the case of singapore strait. IEEE Transactions on Intelligent Transportation Systems, 21(3):1204–1215, 2019.
* [74] Zihe Qin, Zhuang Lin, Dongmei Yang, and Ping Li. A task-based hierarchical control strategy for autonomous motion of an unmanned surface vehicle swarm. Applied Ocean Research, 65:251–261, 2017.
* [75] Andrzej Rak and Witold Gierusz. Reinforcement learning in discrete and continuous domains applied to ship trajectory generation. Polish Maritime Research, 19(Special):31–36, 2012.
* [76] Haiqing Shen, Hirotada Hashimoto, Akihiko Matsuda, Yuuki Taniguchi, Daisuke Terada, and Chen Guo. Automatic collision avoidance of multiple ships based on deep q-learning. Applied Ocean Research, 86:268–288, 2019.
* [77] Yogang Singh, Sanjay Sharma, Robert Sutton, Daniel Hatton, and Asiya Khan. A constrained a* approach towards optimal path planning for an unmanned surface vehicle in a maritime environment containing dynamic obstacles and ocean currents. Ocean Engineering, 169:187–201, 2018.
* [78] Yogang Singh, Sanjay Sharma, Robert Sutton, Daniel Hatton, and Asiya Khan. Feasibility study of a constrained dijkstra approach for optimal path planning of an unmanned surface vehicle in a dynamic maritime environment. In 2018 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC), pages 117–122. IEEE, 2018.
* [79] Lifei Song, Yiran Su, Zaopeng Dong, Wei Shen, Zuquan Xiang, and Puxiu Mao. A two-level dynamic obstacle avoidance algorithm for unmanned surface vehicles. Ocean Engineering, 170:351–360, 2018.
* [80] Lifei Song, Zhuo Chen, Zaopeng Dong, Zuquan Xiang, Yunsheng Mao, Yiran Su, and Kai Hu. Collision avoidance planning for unmanned surface vehicle based on eccentric expansion. International Journal of Advanced Robotic Systems, 16(3):1729881419851945, 2019.
* [81] Rui Song, Yuanchang Liu, and Richard Bucknall. A multi-layered fast marching method for unmanned surface vehicle path planning in a time-variant maritime environment. Ocean Engineering, 129:301–317, 2017.
* [82] Xiaojie Sun, Guofeng Wang, Yunsheng Fan, Dongdong Mu, and Bingbing Qiu. An automatic navigation system for unmanned surface vehicles in realistic sea environments. Applied Sciences, 8(2):193, 2018.
* [83] Xiaojie Sun, Guofeng Wang, Yunsheng Fan, Dongdong Mu, and Bingbing Qiu. Collision avoidance of podded propulsion unmanned surface vehicle with colregs compliance and its modeling and identification. IEEE Access, 6:55473–55491, 2018.
* [84] Rafal Szlapczynski. Evolutionary sets of safe ship trajectories within traffic separation schemes. The Journal of Navigation, 66(1):65–81, 2013.
* [85] Rafal Szlapczynski. Evolutionary planning of safe ship tracks in restricted visibility. The Journal of Navigation, 68(1):39–51, 2015.
* [86] Rafał Szłapczyński and Hossein Ghaemi. Framework of an evolutionary multi-objective optimisation method for planning a safe trajectory for a marine autonomous surface ship. Polish Maritime Research, 26(4):69–79, 2019.
* [87] Rafał Szłapczyński and Joanna Szłapczyńska. Evolutionary sets of safe ship trajectories: Problem dedicated operators. In International Conference on Computational Collective Intelligence, pages 221–230. Springer, 2011.
* [88] CheeKuang Tam and Richard Bucknall. Path-planning algorithm for ships in close-range encounters. Journal of marine science and technology, 15(4):395–407, 2010.
* [89] Guoge Tan, Jin Zou, Jiayuan Zhuang, Lei Wan, Hanbing Sun, and Zhiyuan Sun. Fast marching square method based intelligent navigation of the unmanned surface vehicle swarm in restricted waters. Applied Ocean Research, 95:102018, 2020.
* [90] Ming-Cheng Tsou, Sheng-Long Kao, and Chien-Min Su. Decision support from genetic algorithms for ship collision avoidance route planning and alerts. The Journal of Navigation, 63(1):167, 2010.
* [91] Sebastián Aldo Villar et al. Navigation system for macábot an autonomous surface vehicles using gps aided strapdown inertial navigation system. IEEE Latin America Transactions, 17(06):1009–1019, 2019.
* [92] C Wang, YS Mao, KJ Du, BQ Hu, and LF Song. Simulation on local obstacle avoidance algorithm for unmanned surface vehicle. International Journal of Simulation Modelling, 15(3):460–472, 2016\.
* [93] Chengbo Wang, Xinyu Zhang, Longze Cong, Junjie Li, and Jiawei Zhang. Research on intelligent collision avoidance decision-making of unmanned ship in unknown environments. Evolving Systems, 10(4):649–658, 2019.
* [94] Chengbo Wang, Xinyu Zhang, Ruijie Li, and Peifang Dong. Path planning of maritime autonomous surface ships in unknown environment with reinforcement learning. In International Conference on Cognitive Systems and Signal Processing, pages 127–137. Springer, 2018.
* [95] Hongjian Wang and Xicheng Ban. Research on autonomous collision avoidance method of unmanned surface vessel in the circumstance of moving obstacles. In 2018 37th Chinese Control Conference (CCC), pages 501–506. IEEE, 2018.
* [96] Ning Wang, Yuncheng Gao, Zhongjiu Zheng, Hong Zhao, and Jianchuan Yin. A hybrid path-planning scheme for an unmanned surface vehicle. In 2018 Eighth International Conference on Information Science and Technology (ICIST), pages 231–236. IEEE, 2018.
* [97] Ning Wang, Xiaozhao Jin, and Meng Joo Er. A multilayer path planner for a usv under complex marine environments. Ocean Engineering, 184:1–10, 2019.
* [98] Ning Wang, Yue Tan, and Shao-Man Liu. Ship domain identification using fast and accurate online self-organizing parsimonious fuzzy neural networks. In Proceedings of the 30th Chinese Control Conference, pages 5271–5276. IEEE, 2011.
* [99] Zhaokun Wei, Kang Zhao, and Ming Wei. Decision-making in ship collision avoidance based on cat-swarm biological algorithm. In 2015 International Conference on Computational Science and Engineering. Atlantis Press, 2015.
* [100] Martin S Wiig, Kristin Y Pettersen, and Thomas R Krogstad. A reactive collision avoidance algorithm for vehicles with underactuated dynamics. In 2017 IEEE 56th Annual Conference on Decision and Control (CDC), pages 1452–1459. IEEE, 2017.
* [101] Martin Syre Wiig, Kristin Ytterstad Pettersen, and Thomas Røbekk Krogstad. Collision avoidance for underactuated marine vehicles using the constant avoidance angle algorithm. IEEE Transactions on Control Systems Technology, 28(3):951–966, 2019.
* [102] Kyle Woerner and Michael Benjamin. Safety and efficiency analysis of autonomous collision avoidance using multi-objective optimization with interval programming. Naval Engineers Journal, 126(4):163–168, 2014.
* [103] Joohyun Woo and Nakwan Kim. Collision avoidance for an unmanned surface vehicle using deep reinforcement learning. Ocean Engineering, 199:107001, 2020.
* [104] R Glenn Wright. Intelligent autonomous ship navigation using multi-sensor modalities. TransNav: International Journal on Marine Navigation and Safety of Sea Transportation, 13(3), 2019.
* [105] Shuo Xie, Xiumin Chu, Mao Zheng, and Chenguang Liu. Ship predictive collision avoidance method based on an improved beetle antennae search algorithm. Ocean Engineering, 192:106542, 2019.
* [106] Shuo Xie, Vittorio Garofano, Xiumin Chu, and Rudy R Negenborn. Model predictive ship collision avoidance based on q-learning beetle swarm antenna search and neural networks. Ocean Engineering, 193:106609, 2019.
* [107] Junfeng Xin, Shixin Li, Jinlu Sheng, Yongbo Zhang, and Ying Cui. Application of improved particle swarm optimization for navigation of unmanned surface vehicles. Sensors, 19(14):3096, 2019.
* [108] Junfeng Xin, Jiabao Zhong, Fengru Yang, Ying Cui, and Jinlu Sheng. An improved genetic algorithm for path-planning of unmanned surface vehicle. Sensors, 19(11):2640, 2019.
* [109] Chengke Xiong, Danfeng Chen, Di Lu, Zheng Zeng, and Lian Lian. Path planning of multiple autonomous marine vehicles for adaptive sampling using voronoi-based ant colony optimization. Robotics and Autonomous Systems, 115:90–103, 2019.
* [110] Haitong Xu, Hao Rong, and C Guedes Soares. Use of ais data for guidance and control of path-following autonomous vessels. Ocean Engineering, 194:106635, 2019.
* [111] Qingyang Xu. Collision avoidance strategy optimization based on danger immune algorithm. Computers & Industrial Engineering, 76:268–279, 2014.
* [112] Yan Zhuo Xue, Yi Wei, and Yue Qiao. The research on ship intelligence navigation in confined waters. In Advanced Materials Research, volume 442, pages 398–401. Trans Tech Publ, 2012.
* [113] Yanzhuo Xue, D Clelland, BS Lee, and Duanfeng Han. Automatic simulation of ship navigation. Ocean Engineering, 38(17-18):2290–2305, 2011.
* [114] Rongjie Yan, Xiangtong Yao, Junjie Yang, and Kai Huang. Formal collision avoidance analysis for rigorous building of autonomous marine vehicles. In National Conference on Embedded System Technology, pages 118–127. Springer, 2017.
* [115] Hui Yang, Jie Qi, Yongchun Miao, Haixin Sun, and Jianghui Li. A new robot navigation algorithm based on a double-layer ant algorithm and trajectory optimization. IEEE Transactions on Industrial Electronics, 66(11):8557–8566, 2018\.
* [116] Rongwu Yang, Jinsong Xu, Xin Wang, and Quan Zhou. Parallel trajectory planning for shipborne autonomous collision avoidance system. Applied Ocean Research, 91:101875, 2019.
* [117] Tingting Yang, Chengzhuo Han, Meng Qin, and Chuan Huang. Learning-aided intelligent cooperative collision avoidance mechanism in dynamic vessel networks. IEEE Transactions on Cognitive Communications and Networking, 6(1):74–82, 2019.
* [118] Raphael Zaccone and Michele Martelli. A collision avoidance algorithm for ship guidance applications. Journal of Marine Engineering & Technology, 19(sup1):62–75, 2020\.
* [119] Raphael Zaccone, Michele Martelli, and Massimo Figari. A colreg-compliant ship collision avoidance algorithm. In 2019 18th European Control Conference (ECC), pages 2530–2535. IEEE, 2019.
* [120] Zheng Zeng, Karl Sammut, Lian Lian, Andrew Lammas, Fangpo He, and Youhong Tang. Rendezvous path planning for multiple autonomous marine vehicles. IEEE Journal of Oceanic Engineering, 43(3):640–664, 2017.
* [121] J Zhang, Q Hu, and B Liao. Ship collision avoidance decision model and simulation based on collision circle. TransNav: International Journal on Marine Navigation and Safety of Sea Transportation, 13(2), 2019.
* [122] RL Zhang and M Furusho. Conversion timing of seafarer’s decision-making for unmanned ship navigation. TransNav: International Journal on Marine Navigation and Safety of Sea Transportation, 11(3), 2017.
* [123] Xinyu Zhang, Chengbo Wang, Yuanchang Liu, and Xiang Chen. Decision-making for the autonomous navigation of maritime autonomous surface ships based on scene division and deep reinforcement learning. Sensors, 19(18):4055, 2019.
* [124] Luman Zhao and Myung-Il Roh. Colregs-compliant multiship collision avoidance based on deep reinforcement learning. Ocean Engineering, 191:106436, 2019.
* [125] Luman Zhao, Myung-Il Roh, and Sung-Jun Lee. Control method for path following and collision avoidance of autonomous ship based on deep reinforcement learning. Journal of Marine Science and Technology, 27(4):293–310, 2019.
* [126] Yujiao Zhao, Xin Qi, Atilla Incecik, Yong Ma, and Zhixiong Li. Broken lines path following algorithm for a water-jet propulsion usv with disturbance uncertainties. Ocean Engineering, 201:107118, 2020.
* [127] Yuxin Zhao, Wang Li, and Peng Shi. A real-time collision avoidance learning system for unmanned surface vessels. Neurocomputing, 182:255–266, 2016.
* [128] Huarong Zheng, Rudy R Negenborn, and Gabriël Lodewijks. Fast admm for distributed model predictive control of cooperative waterborne agvs. IEEE Transactions on Control Systems Technology, 25(4):1406–1413, 2016.
* [129] Kai Zheng, Yabo Chen, Yi Jiang, and Shuanghu Qiao. A svm based ship collision risk assessment algorithm. Ocean Engineering, 202:107062, 2020.
* [130] Xinyuan Zhou, Peng Wu, Haifeng Zhang, Weihong Guo, and Yuanchang Liu. Learn to navigate: cooperative path planning for unmanned surface vehicles using deep reinforcement learning. IEEE Access, 7:165262–165278, 2019.
* [131] Jiayuan Zhuang, Lei Zhang, Zihe Qin, Hanbing Sun, Bo Wang, and Jian Cao. Motion control and collision avoidance algorithms for unmanned surface vehicle swarm in practical maritime environment. Polish Maritime Research, 26(1):107–116, 2019.
* [132] S Zinchenko, P Nosov, V Mateichuk, P Mamenko, I Popovych, and O Grosheva. Automatic collision avoidance system with many targets, including maneuvering ones. Bulletin of The University of Karaganda-Physics, 4(96):69–79, 2019.
|
# Black-box Adversarial Attacks in
Autonomous Vehicle Technology
K. Naveen Kumar1, C. Vishnu1, Reshmi Mitra2, C. Krishna Mohan1
1 Indian Institute of Technology Hyderabad, India
2 Southeast Missouri State University, Cape Girardeau, USA
{cs19m20p000001<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract
Despite the high-quality performance of deep neural networks in real-world
applications, they are susceptible to the minor perturbations of adversarial
attacks, which are mostly undetectable to human vision. The impact of such
attacks has become extremely detrimental in autonomous vehicles with real-time
“safety” concerns. Black-box adversarial attacks cause drastic
misclassification of critical scene elements such as road signs and traffic
lights, leading the autonomous vehicle to crash into other vehicles or
pedestrians. In this paper, we propose a novel query-based attack method
called the Modified Simple black-box attack (M-SimBA) to overcome the need for
a white-box source in transfer-based attack methods. The issue of late
convergence in the Simple black-box attack (SimBA) is also addressed by
minimizing the loss of the most confused class, i.e., the incorrect class
predicted by the model with the highest probability, instead of trying to
maximize the loss of the correct class. We evaluate the performance of the
proposed approach on the German Traffic Sign Recognition Benchmark (GTSRB)
dataset. We show that the proposed model outperforms existing methods such as
transfer-based projected gradient descent (T-PGD) and SimBA in terms of
convergence time, flattening the distribution of confused class probabilities,
and producing adversarial samples with the least confidence in the true class.
###### Index Terms:
adversarial attacks, black-box attacks, deep learning methods, autonomous
vehicles.
## I Introduction
Cybersecurity threats on autonomous vehicles (AV) can cause serious safety and
security issues, as noted in the “Safety First” industry consortium paper [1]
published by twelve industry leaders including Audi, BMW and Volkswagen. AV is
made possible by the control functions of connected vehicles, onboard
diagnostics for maintenance, and cloud backend systems. These capabilities
also make it a rich and vulnerable attack surface for an adversary. With many
such entities operating simultaneously on the road, cyber-attacks on these
systems can have dangerous effects, allowing malicious actors to gain
arbitrary control of the vehicle. Such malicious actions can eventually cause
life-threatening harm to pedestrians and prevent the widespread adoption of
AV.
Cyber attacks often involve data corruption and intentional tampering by an
unexpected source, potentially targeting crucial elements of the training data
for deep neural networks [2]. Although these models are popular for their
accuracy and performance on computer vision tasks (such as classification,
detection, and segmentation), they are known to be extremely vulnerable to
adversarial attacks [3]. In this type of attack, the adversary induces minor
but systematic perturbations in key model layers such as filters and input
datasets, as shown in Fig. 1. Even though this layer of noise is barely
perceptible to human vision, it may cause drastic misclassification of
critical scene elements such as road signs and traffic lights. This may
eventually lead to the AV crashing into other vehicles or pedestrians.
Stickers or paintings on traffic signboards are the most common physical
adversarial attacks and can impair the functionality of the vehicular system.
Figure 1: Example of an adversarial attack: minor perturbations introduced to
the training data cause misclassification of a critical traffic sign, i.e.,
Yield instead of Stop. This incorrect prediction is hardly perceptible to the
human eye and thus has dangerous repercussions for autonomous vehicles.
Adversarial attacks are primarily of two types: (1) white-box, where the
adversary customizes perturbations to a known deep neural network, including
its architecture, training data, and parameter settings, and (2) black-box,
where the adversary has minimal to no knowledge about the network. Although
white-box attacks have been widely studied, they may not be realistic for AV
technology because of the many dynamic elements, primarily related to sensor
data. Our survey of the state of the art shows that there is very limited
research on black-box adversarial attacks in the AV domain.
The seminal articles [3, 4] that first reported adversarial attacks on images
in neural networks observed that adding imperceptible, non-random noise to a
test image can lead to serious mispredictions, calling model robustness into
question. These white-box examples were generated using the box-constrained
Limited-memory Broyden–Fletcher–Goldfarb–Shanno (L-BFGS) algorithm. Such
examples have a remarkable transferability property, demonstrated across tasks
and architectures [5, 6]. The decision outputs of machine learning models for
computer vision sub-tasks, such as classification, detection, and
segmentation, are sensitive to adversarial perturbations in the input, as
discussed in various prior works [7, 8, 9, 10].
Gradient estimation techniques such as Finite Differences (FD) and Natural
Evolutionary Strategies (NES) are used in the black-box setting, because the
true gradients are not directly accessible to the adversary. The other
significant technique uses surrogates [11, 12] to exploit the transferability
of adversarial examples across models. Although several papers have verified
the transferability properties [13], the focus of our work is on the gradient
estimation technique [14] because of the convenience of attack. The
transferability property of adversarial attacks is investigated in [15] for
the dispersion reduction attack, which uses more limited perturbations than
existing attacks and demonstrates its performance over different computer
vision tasks (image classification, object detection, semantic segmentation).
The V-BAD framework [16], the first work to generate adversarial examples for
black-box attacks in video recognition, utilizes tentative perturbations
transferred from image models and partition-based rectifications to obtain
good adversarial gradient estimates. It demonstrates an effective and
efficient attack with a $\sim$90% success rate using fewer queries to the
target model. More recently, the first article on adversarial examples for
sign recognition systems in AV [17] proposed two different attack methods in
black-box settings: out-of-distribution attacks and lenticular printing.
Unlike score-based and transfer-based methods, the TRansferable EMbedding
based Black-box Attack (TREMBA) [18] learns a compact embedding with a
pre-trained model and performs an efficient search over the embedding space
when directed at an unknown target network. The adversarial perturbations
produced by TREMBA have high-level semantics and are effectively transferable.
Further, these perturbations enhance the query efficiency of black-box
adversarial attacks across the architectures of different target networks.
The boundary attack is introduced as a category of decision-based attack [19],
which is relevant for the assessment of model robustness. Such attacks are
used to highlight the security risks of closed-source machine learning systems
such as autonomous cars. Boundary attacks usually require a large number of
model queries to obtain a successful, human-indistinguishable adversarial
example. To improve the efficiency of the boundary attack, it can be combined
with a transfer-based attack. The biased boundary attack [20] significantly
reduces the number of model queries by combining low-frequency random noise
with the gradient from a substitute model. Similar to other transfer-based
attacks, the biased boundary attack depends on the transferability between the
target model and the substitute model. The boundary attack++ [21] is an
algorithmic improvement of the boundary attack, which estimates the gradient
direction with the help of binary information available at the decision
boundary. Another decision-based attack, called qFool [22], uses very few
queries in the computation of adversarial examples. The qFool method can
handle both non-targeted and targeted attacks with a small number of queries.
The Simple Black-box Attack (SimBA) [23] emphasized that optimizing queries in
black-box adversarial attacks continues to be an open problem, despite a
significant body of prior work [16, 18]. The SimBA algorithm repeatedly picks
a random direction from a pre-specified set of directions and uses
continuous-valued confidence scores to perturb the input image by adding or
subtracting the corresponding vector. We extend their work by improving the
efficiency and efficacy of the attack. Instead of maximizing the loss of the
original class, our model searches for gradients in a direction that minimizes
the loss of the “most confused class”.
The main objective of this research is to design black-box adversarial attacks
for AV that expose vulnerabilities in deep learning models. We propose a
“multi-gradient” attack on deep neural network models for traffic scene
perception. Our model has three main advantages: it converges quickly, it
flattens the confused-class probability distribution, and it produces
adversarial samples with the least confidence in the true class. In other
words, the results demonstrate that our model generates successful
mispredictions at a faster rate and with a higher probability of model
failure. Our work in building such models serves two primary communities.
First, it contributes towards the safety and security of the primary users,
i.e., passengers and pedestrians. Second, it helps AI researchers in
developing robust and reliable models.
The main contributions of this work are:
* •
A novel multi-gradient model for designing a black-box adversarial attack on
traffic sign images by minimizing the loss of the most confused class.
* •
Result validation by comparison with transfer-based projected gradient descent
(T-PGD) and the simple black-box attack (SimBA) using the German Traffic Sign
Recognition Benchmark (GTSRB) dataset.
* •
Our model outperforms existing methods on three metrics: iterations to
convergence, class probability distribution, and confidence values on the
input class.
The paper is organized as follows. In Section II, we describe the proposed
architecture of the black-box adversarial attacks. Section III discusses the
performance of the proposed method on the GTSRB dataset along with
quantitative and qualitative analysis. Conclusions and future work are
presented in Section IV.
## II Proposed Method
In this section, we present the proposed method for black-box adversarial
attacks in AV. As shown in Fig. 2, there are three main modules: (a) an input
module to sense/detect the traffic signs through the camera attached to the
autonomous vehicle, (b) a multi-gradient attack module, and (c) an adversarial
sample estimator that implements the target attack. The gradient perturbations
can be generated from one of three methods: transfer-based projected gradient
descent (T-PGD), the Simple Black-box attack (SimBA), and the Modified Simple
black-box attack (M-SimBA). A detailed explanation of this key attack module
is given in the subsequent sections.
Figure 2: Proposed method for black-box adversarial attacks in autonomous
vehicle technology. (a) an input module to sense/detect the traffic signs
through the camera attached to the autonomous vehicle (b) multi gradient
attack module to generate 3 different gradient perturbations from Transfer
based projected gradient descent (T-PGD), Simple Black box attack (SimBA),
Modified Simple black-box attack (M-SimBA), and (c) a classification module
which attacks the target black-box model.
Figure 3: Basic block diagram of the Modified Simple Black-box Attack (M-SimBA).
### II-A Transfer based Projected Gradient Descent (T-PGD)
In this white-box attack, a source CNN architecture is trained for a similar
task. The gradients from this model are used to produce an adversarial sample,
which is then transferred to attack the target. Gradient updates are performed
in the direction that maximizes the classification loss, as per equation (1),
where $x$ and $Adv_{x}$ are the original and adversarial samples,
respectively. The term $\epsilon$ is the step size that decides the magnitude
of the update. The gradient of the loss function is denoted by
$\nabla_{x}\mathit{J}$, the weights of the CNN by $\theta$, and the output
label by $y$.
$Adv_{x}=x+\epsilon\cdot\mathbf{sign}\left(\nabla_{x}\mathit{J}(\theta,x,y)\right).$ (1)
Iterative gradient updates are performed until the loss converges to a high
value. This treatment makes the adversarial image deviate from the original
image while remaining barely perceptible to humans. Although T-PGD shows good
generalization ability, in that samples generated on the white-box source
model transfer well to the black-box model, it is limited by the need for a
white-box source model.
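The following is a minimal sketch of this gradient-sign update in PyTorch; the
`source_model`, step size, and iteration count are illustrative assumptions
rather than the paper's exact configuration.

```python
# A hedged sketch of the Eq. (1) update, iterated as in T-PGD; `source_model`,
# `epsilon`, and `num_steps` are illustrative assumptions, not the paper's code.
import torch
import torch.nn.functional as F

def t_pgd_attack(source_model, x, y, epsilon=0.01, num_steps=10):
    """Craft an adversarial sample on the white-box source model,
    then transfer it to attack the black-box target."""
    adv_x = x.clone().detach()
    for _ in range(num_steps):
        adv_x.requires_grad_(True)
        loss = F.cross_entropy(source_model(adv_x), y)
        grad, = torch.autograd.grad(loss, adv_x)
        # Step in the direction that maximizes the classification loss.
        adv_x = (adv_x + epsilon * grad.sign()).detach()
        adv_x = adv_x.clamp(0.0, 1.0)  # keep pixel values valid
    return adv_x
```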
Figure 4: Flowchart of Modified Simple Black-box Attack (M-SimBA)
### II-B Simple Black-box Attack (SimBA)
This query-based attack does not require any additional white-box model unlike
T-PGD to create the adversarial samples. It has no knowledge of the model and
its architecture. Hence, the model parameters such as weights and biases are
not known to calculate the gradient concerning the input image as done in
previous transfer-based attacks. The SimBA attack uses only the confidence or
output probabilities of a black box CNN model to produce adversarial samples.
It tries to search in various directions so that updating the input pixels in
that direction maximizes the loss of the correct class. This reduces the
overall confidence of the network.
For any given direction $q$ and step size $\epsilon$, one of the gradient term
$(\mathit{x+q\epsilon})$ or $(\mathit{x-q\epsilon})$ is likely to decrease
$P(y|x)$. To minimize the number of queries to the model, $+q\epsilon$ term is
added. In case, this decreases the probability $P(y|x)$, then a step is taken
in this direction. Otherwise, the opposite of $-q\epsilon$ is considered.
Although it is a simple method to be used to attack any unknown architecture,
it requires an extensive gradient search which consumes a large number of
iterations to converge.
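A minimal sketch of this query loop is shown below; the `black_box_probs`
function (returning only the model's output probabilities) and the choice of
pixel-basis directions are illustrative assumptions.

```python
# A hedged sketch of the SimBA query loop; `black_box_probs` and the
# pixel-basis directions are illustrative assumptions.
import numpy as np

def simba_attack(black_box_probs, x, y, epsilon=0.05, max_queries=10000):
    adv_x = x.copy()
    p_y = black_box_probs(adv_x)[y]
    coords = np.random.permutation(adv_x.size)  # pre-specified directions
    for k in range(min(max_queries, coords.size)):
        q = np.zeros(adv_x.size)
        q[coords[k]] = 1.0
        q = q.reshape(adv_x.shape)
        for sign in (+1.0, -1.0):  # try +eps*q first, then -eps*q
            candidate = adv_x + sign * epsilon * q
            p_new = black_box_probs(candidate)[y]
            if p_new < p_y:  # keep any step that lowers P(y|x)
                adv_x, p_y = candidate, p_new
                break
    return adv_x
```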
Figure 5: German Traffic Sign Recognition Benchmark (GTSRB) dataset.
Figure 6: Comparison of the three attacks: iterations vs. success rate.
Figure 7: Comparison of the three attacks: epsilon vs. success rate.
Figure 8: Comparison of the three attacks: samples vs. success rate.
Figure 9: Visual results on GTSRB - 1. The true class of the input image is 0.
The T-PGD method produces the adversarial sample with the highest probability
(red box on the T-PGD plot) compared to the other two attacks. M-SimBA (red
box on the M-SimBA plot) attacks the black-box model such that it outputs very
low confidence in the input class, i.e., 0. Suppressing the confidence of the
original class is desirable behavior for a robust attack method.
### II-C Modified Simple Black-box Attack (M-SimBA)
To avoid the white-box source model required by the T-PGD attack and the
late-convergence problem of the SimBA attack, we propose a novel method,
obtained by modifying the Simple Black-box attack, which we call M-SimBA. This
is shown in Fig. 3. Instead of maximizing the loss of the original class as in
SimBA, we minimize the loss of the most confused class, i.e., the incorrect
class to which the model assigns the highest probability when it
misclassifies. As shown in Fig. 4, the probability of the original class is
first checked before the attack. In the next step, random gradients are
initialized and added to the input sample. Subsequently, the black-box model's
probability for the most confused class is calculated. A positive update is
considered first; if it fails to increase the probability of the most confused
class, a negative gradient update is performed. If both positive and negative
gradient updates fail to increase the probability, a new gradient is randomly
initialized and the process is repeated until convergence.
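The following is a minimal sketch of this loop; as before, `black_box_probs`,
the step size, and the random-gradient generator are illustrative assumptions.

```python
# A hedged sketch of the M-SimBA loop described above.
import numpy as np

def m_simba_attack(black_box_probs, x, true_class, epsilon=0.05, max_queries=10000):
    adv_x = x.copy()
    probs = black_box_probs(adv_x)
    # The most confused class: the incorrect class with the highest probability.
    masked = probs.copy()
    masked[true_class] = -np.inf
    confused = int(np.argmax(masked))
    p_c = probs[confused]
    for _ in range(max_queries):
        q = np.random.randn(*adv_x.shape)  # randomly initialized gradient
        for sign in (+1.0, -1.0):  # positive update first, then negative
            candidate = adv_x + sign * epsilon * q
            p_new = black_box_probs(candidate)[confused]
            if p_new > p_c:  # raising P(confused|x) lowers its loss
                adv_x, p_c = candidate, p_new
                break
        if np.argmax(black_box_probs(adv_x)) != true_class:
            break  # misclassification achieved
    return adv_x
```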
## III Experimental results
In this section, we present details of the dataset, the experimental setup,
and the results.
### III-A Dataset
We evaluate the performance of the proposed method on the German Traffic Sign
Recognition Benchmark (GTSRB) dataset [24]. It consists of 43 traffic sign
classes, with 39,000 training images and 12,000 test images. Each image
contains one traffic sign with a border of 10% around the actual sign (at
least 5 pixels) to allow for edge-based approaches. Image sizes vary from
($15\times 15$) to ($250\times 250$) pixels; sample images are shown in
Fig. 5.
Figure 10: Visual results on GTSRB - 2. The true class of the input image is
9. M-SimBA flattens the distribution of confused class probabilities (red box
on the M-SimBA plot) compared to the other two attacks. This is desirable
behavior, since it means there is a high chance that the black-box model
confuses the input with at least one of the other classes.
### III-B Experimental Setup
In this section, we describe the initial setup for the three models to ensure
their proper functioning without attack. To perform the transfer-based
projected gradient descent (T-PGD) attack, a 2-layer customized white-box CNN
architecture is designed which takes an input image of size (150x150). The
model classifies the original samples with 94% accuracy and serves as the
white-box source to generate adversarial samples in the T-PGD attack. To
perform the SimBA and M-SimBA attack methods, another 2-layer customized
black-box CNN architecture is designed, with a larger number of max-pool and
dropout layers than the white-box CNN. It takes input images of the same size
(150x150) and classifies the original samples with 96% accuracy.
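The following PyTorch sketch illustrates a 2-layer CNN of this kind; the
filter counts, kernel sizes, and dropout rate are illustrative assumptions,
since the paper does not specify them.

```python
# A hedged sketch of a 2-layer CNN for (150x150) traffic-sign images with 43
# classes; layer hyperparameters are illustrative assumptions.
import torch.nn as nn

class BlackBoxCNN(nn.Module):
    def __init__(self, num_classes=43):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                    # 150 -> 75
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2), nn.Dropout(0.25),  # 75 -> 37
        )
        self.classifier = nn.Linear(64 * 37 * 37, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))
```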
### III-C Comparison results
In this section, we compare the three attack methods based on their success
rate, defined as the fraction of generated samples that are successfully
misclassified by the black-box model. As shown in Fig. 6, the success rate
increases with the number of iterations for all three methods. This is an
expected trend: gradient updates for the adversarial sample improve with more
processing time. The success rate of T-PGD does not increase much with
iterations, since it does not rely on random searching and requires only a
fixed number of iterations to generate the sample. One feature of our proposed
M-SimBA attack model is that it converges faster than the other two methods.
In the results shown in Fig. 7, a common trend is observed: as $\epsilon$
increases, the success rate decreases for all three methods. This is expected
behavior because, as we increase the step size, the magnitude of the gradient
update also increases. For large values of $\epsilon$, there is a high
probability of overshooting and missing the optimum value; the method may then
fail to converge, leading to a low success rate. On the other hand, T-PGD
gives very good results for small values of $\epsilon$, but becomes the
poorest of the three methods for larger values. This happens because T-PGD
relies on gradient updates in a fixed direction and quickly reaches the
optimum value at the neighborhood boundary. In addition, SimBA and M-SimBA
tend to outperform T-PGD and converge to the same point at higher values of
$\epsilon$, though SimBA needs a higher number of iterations. Finally, in
Fig. 8, it is observed that M-SimBA shows a higher success rate for the
initial increase in the number of samples and continues to outperform the
other methods because of its early-convergence property.
### III-D Qualitative analysis
There are two main observations in the qualitative analysis of the proposed
black-box adversarial attacks on the GTSRB dataset. First, M-SimBA suppresses
the confidence of the original class, a desirable feature of an attack
technique. As shown in Fig. 9, the true class of the sample is zero. The T-PGD
method leads to minimal distortion of the probability vector, whereas M-SimBA
attacks the black-box model such that it places very low, almost zero,
confidence in the input class. Second, M-SimBA flattens the distribution of
confused class probabilities compared to the other two attacks, as shown in
Fig. 10. This is advantageous from an attack perspective, because it provides
a higher chance that the prediction model confuses the input with some other
class.
## IV Conclusion
Autonomous vehicles powered by deep neural networks for scene perception can
be extremely vulnerable to adversarial attacks. For the safety and security of
pedestrians and passengers, it is crucial to understand such attacks in order
to build robust models. The main objective of our research is to demonstrate
and evaluate a black-box adversarial attack on traffic sign detection for AV.
To reduce the number of queries to the classifier in the iterative attack
process, we minimize the loss of the most confused class. We compared our
model with two other algorithms, SimBA and T-PGD, on the GTSRB dataset, and
demonstrated its efficiency and efficacy using three metrics: iterations to
convergence, class probability distribution, and confidence values on the
input class. In the future, this work can be extended to attacks in the video
context and on different vehicle sensor data. Novel methods can also be
explored to design robust defense techniques against these adversarial
attacks.
## References
* [1] M. Wood, P. Robbel, D. Wittmann _et al._ , “Safety first for automated driving,” 2019. [Online]. Available: https://www.aptiv.com/docs/default-source/white-papers/safety-first-for-automated-driving-aptiv-white-paper.pdf
* [2] Y. Deng, X. Zheng, T. Zhang, C. Chen, G. Lou, and M. Kim, “An analysis of adversarial attacks and defenses on autonomous driving models,” _arXiv preprint arXiv:2002.02175_ , 2020.
* [3] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus, “Intriguing properties of neural networks,” in _International Conference on Learning Representations_ , 2014. [Online]. Available: http://arxiv.org/abs/1312.6199
* [4] I. Goodfellow, J. Shlens, and C. Szegedy, “Explaining and harnessing adversarial examples,” in _International Conference on Learning Representations_ , 2015. [Online]. Available: http://arxiv.org/abs/1412.6572
* [5] Y. Liu, X. Chen, C. Liu, and D. Song, “Delving into transferable adversarial examples and black-box attacks,” _arXiv preprint arXiv:1611.02770_ , 2016.
* [6] C. Xie, J. Wang, Z. Zhang, Y. Zhou, L. Xie, and A. Yuille, “Adversarial examples for semantic segmentation and object detection,” in _Proceedings of the IEEE International Conference on Computer Vision_ , 2017, pp. 1369–1378.
* [7] N. Carlini and D. Wagner, “Towards evaluating the robustness of neural networks,” in _2017 IEEE Symposium on Security and Privacy (SP)_ , 2017, pp. 39–57.
* [8] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus, “Intriguing properties of neural networks,” in _International Conference on Learning Representations_ , 2014. [Online]. Available: http://arxiv.org/abs/1312.6199
* [9] M. Cisse, Y. Adi, N. Neverova, and J. Keshet, “Houdini: Fooling deep structured prediction models,” 2017.
* [10] C. Xie, J. Wang, Z. Zhang, Y. Zhou, L. Xie, and A. Yuille, “Adversarial examples for semantic segmentation and object detection,” in _2017 IEEE International Conference on Computer Vision (ICCV)_ , 2017, pp. 1378–1387.
* [11] N. Papernot, P. McDaniel, and I. Goodfellow, “Transferability in machine learning: from phenomena to black-box attacks using adversarial samples,” _arXiv preprint arXiv:1605.07277_ , 2016.
* [12] S.-M. Moosavi-Dezfooli, A. Fawzi, O. Fawzi, and P. Frossard, “Universal adversarial perturbations,” in _2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_ , 2017.
* [13] A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu, “Towards deep learning models resistant to adversarial attacks,” _arXiv preprint arXiv:1706.06083_ , 2017.
* [14] P.-Y. Chen, H. Zhang, Y. Sharma, J. Yi, and C.-J. Hsieh, “Zoo: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models,” in _Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security_ , 2017, pp. 15–26.
* [15] Y. Lu, Y. Jia, J. Wang, B. Li, W. Chai, L. Carin, and S. Velipasalar, “Enhancing cross-task black-box transferability of adversarial examples with dispersion reduction,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2020, pp. 940–949.
* [16] L. Jiang, X. Ma, S. Chen, J. Bailey, and Y.-G. Jiang, “Black-box adversarial attacks on video recognition models,” in _Proceedings of the 27th ACM International Conference on Multimedia_ , 2019, pp. 864–872.
* [17] C. Sitawarin, A. N. Bhagoji, A. Mosenia, M. Chiang, and P. Mittal, “Darts: Deceiving autonomous cars with toxic signs,” 2018.
* [18] Z. Huang and T. Zhang, “Black-box adversarial attack with transferable model-based embedding,” _arXiv preprint arXiv:1911.07140_ , 2019.
* [19] W. Brendel, J. Rauber, and M. Bethge, “Decision-based adversarial attacks: Reliable attacks against black-box machine learning models,” _arXiv preprint arXiv:1712.04248_ , 2017.
* [20] T. Brunner, F. Diehl, M. T. Le, and A. Knoll, “Guessing smart: Biased sampling for efficient black-box adversarial attacks,” in _2019 IEEE/CVF International Conference on Computer Vision (ICCV)_ , 2019, pp. 4957–4965.
* [21] I. Rosenberg, A. Shabtai, Y. Elovici, and L. Rokach, “Query-efficient black-box attack against sequence-based malware classifiers,” _arXiv preprint arXiv:1804.08778_ , 2018.
* [22] Y. Liu, S. Moosavi-Dezfooli, and P. Frossard, “A geometry-inspired decision-based attack,” in _2019 IEEE/CVF International Conference on Computer Vision (ICCV)_ , 2019, pp. 4889–4897.
* [23] C. Guo, J. Gardner, Y. You, A. G. Wilson, and K. Weinberger, “Simple black-box adversarial attacks,” ser. Proceedings of Machine Learning Research, K. Chaudhuri and R. Salakhutdinov, Eds., vol. 97. Long Beach, California, USA: PMLR, 09–15 Jun 2019, pp. 2484–2493. [Online]. Available: http://proceedings.mlr.press/v97/guo19a.html
* [24] J. Stallkamp, M. Schlipsing, J. Salmen, and C. Igel, “The German Traffic Sign Recognition Benchmark: A multi-class classification competition,” in _IEEE International Joint Conference on Neural Networks_ , 2011, pp. 1453–1460.
# Is Asymptotically Weyl-Invariant Gravity Viable?
Daniel Coumbe _The Niels Bohr Institute, Copenhagen University_
_Blegdamsvej 17, DK-2100 Copenhagen Ø, Denmark_
###### Abstract
We explore the cosmological viability of a theory of gravity defined by the
Lagrangian $f(\mathcal{R})=\mathcal{R}^{n\left(\mathcal{R}\right)}$ in the
Palatini formalism, where $n\left(\mathcal{R}\right)$ is a dimensionless
function of the Palatini scalar curvature $\mathcal{R}$ that interpolates
between general relativity when $n\left(\mathcal{R}\right)=1$ and a locally
scale-invariant and superficially renormalizable theory when
$n\left(\mathcal{R}\right)=2$. We refer to this model as asymptotically Weyl-
invariant gravity (AWIG).
We analyse perhaps the simplest possible implementation of AWIG. A phase space
analysis yields three fixed points with effective equations of state
corresponding to de Sitter, radiation and matter-dominated phases. An analysis
of the deceleration parameter suggests our model is consistent with an early
and late period of accelerated cosmic expansion, with an intermediate period
of decelerated expansion. We show that the model contains no obvious curvature
singularities. Therefore, AWIG appears to be cosmologically viable, at least
for the simple implementation explored.
PACS numbers: 04.60.-m, 04.60.Bc
## 1 Introduction
The general theory of relativity is currently our best description of gravity.
One reason for this is its explanatory power: assuming little more than a
single symmetry principle, general relativity can explain a truly astonishing
range of experimental phenomena [1].
However, it is at best incomplete. It is often said that general relativity
breaks down at high energies _or_ small distances. Yet, it is more accurate to
say high energies _and_ small distances. This is an important distinction
since it highlights the regime in which we must modify general relativity,
namely for large energy densities, or equivalently for large spacetime
curvatures. For example, general relativity predicts its own breakdown at
curvature singularities, where scalar measures of curvature grow without
bound. Furthermore, general relativity is known to become fundamentally
incompatible with quantum field theory at high curvature scales, a failure
known as its non-renormalizability [2, 3]. Theoretical arguments alone are
enough to tell us that general relativity must be modified at high curvature
scales.
Experimental data also indicates that general relativity must either be
augmented or replaced altogether if it is to agree with observation [4]. For
example, general relativity by itself is unable to explain the early phase of
accelerated cosmic expansion, as evidenced by myriad high-precision
measurements [4], and must be supplemented with unobserved exotic energy
sources and scalar fields [5, 6]. Although this top-down approach, as
exemplified by the $\Lambda$CDM model, is currently our best description of
observed cosmological dynamics [4], its _ad hoc_ construction has driven
attempts to replace general relativity from the bottom-up.
Finding a viable replacement of general relativity is challenging. Such a
theory must at the very least be (i) equivalent to general relativity in the
low-curvature limit, (ii) renormalizable in the high-curvature limit, (iii)
unitary, (iv) stable, (v) contain no curvature singularities, (vi) consistent
with observation.
One attempt is that of higher-order gravity, in which the Lagrangian includes
terms quadratic in the curvature tensor. Although this approach is
perturbatively renormalizable, and hence satisfies criterion (ii), such
higher-order theories are not typically unitary or stable, thus failing to
satisfy criteria (iii) and (iv). The only higher-order theories that are
unitary and stable are so-called $f(R)$ theories, in which the Lagrangian is
an arbitrary function of the Ricci scalar only [7].
There are three types of $f(R)$ theory: metric, Palatini and metric-affine
variations [7]. Metric $f(R)$ gravity assumes that the affine connection
uniquely depends on the metric via the Levi-Civita connection, as in standard
general relativity. The Palatini formalism generalises the metric formalism by
relaxing the assumption that the connection must depend on the metric. The
metric-affine formalism is the most general approach since it even drops the
implicit assumption that the matter action is independent of the connection.
Particular metric $f(R)$ models have been shown to conflict with solar system
tests [8], give an incorrect Newtonian limit [9], contradict observed
cosmological dynamics [10, 11], be unable to satisfy big bang nucleosynthesis
constraints [12] and contain fatal Ricci scalar instabilities [13]. Thus,
metric $f(R)$ theories do not typically satisfy criteria (i), (iv) or (vi). As
for the metric-affine formalism, it is not even a metric theory in the usual
sense, meaning diffeomorphism invariance is likely broken [7]. Thus, metric-
affine theories do not even seem to satisfy criterion (i). However, it has
been shown that the Palatini variation is immune to any such Ricci scalar
instability [14]. Palatini formulations also appear to pass solar system tests
and reproduce the correct Newtonian limit [15]. Remarkably, a Palatini action
that is linear in the scalar curvature is identical to regular general
relativity [7]. However, this equivalence does not hold for higher-order
theories [16, 7]. In particular, a Palatini action that is purely quadratic in
the scalar curvature is identical to normal general relativity plus a non-zero
cosmological constant [17].
In Ref. [18] we proposed the theory of asymptotically Weyl-invariant gravity
(AWIG) within the Palatini formalism (see Refs. [19, 20, 21] for the
background to this proposal). By construction AWIG satisfies criteria
(i)-(iv).111In the low-curvature limit AWIG yields
$f\left(\mathcal{R}\right)=\mathcal{R}$, which is identical to general
relativity [7]. AWIG is at least superficially renormalizable because the
coupling constant of the theory becomes dimensionless in the high curvature
limit, as shown in section 2. AWIG is likely to be unitary because states of
negative norm (ghosts) that cause unitarity violations do not appear in $f(R)$
theories [7, 22]. AWIG appears stable since Ostragadsky’s instability is
evaded by any $f(R)$ theory [23], and the Dolgov-Kawasaki instability can not
occur in Palatini $f\left(\mathcal{R}\right)$ gravity [7] (see Ref. [18] for
more details on the construction of AWIG). The present work aims to test
whether this theory also satisfies criteria (v) and (vi), and hence to
determine if it may be a viable replacement of general relativity.
In addition to satisfying criteria (i)-(iv), a major motivation for developing
AWIG was finding a theory with the symmetry of local scale invariance. The
need for local scale invariance can be seen by recognising that all length
measurements are local comparisons. For example, to measure the length of a
rod requires bringing it together with some standard unit of length, say a
metre stick, at the same point in space and time. In this way the local
comparison yields a dimensionless ratio, for example, the rod might be longer
than the metre stick by a factor of two. Repeating this comparison at a
different spacetime point must yield the same result, even if the metric at
this new point were rescaled by an arbitrary factor $\Omega^{2}(x)$. This is
because both the rod and metre stick would be equally rescaled, yielding the
same dimensionless ratio. Such a direct comparison cannot be made for two rods
with a non-zero space-like or time-like separation [24, 25]. Therefore, it has
been argued that the laws of nature must be formulated in such a way as to be
invariant under local rescalings of the metric tensor
$g_{\mu\nu}\rightarrow\Omega^{2}(x)g_{\mu\nu}$, or equivalently under a local
change of units. Moreover, since scale-invariant theories of gravity are gauge
theories [26, 27], unification with the other three fundamental interactions,
which have all been successfully formulated as local gauge theories, becomes
tractable. The theory analysed in this work is invariant with respect to local
changes of scale in the high-curvature limit.
It is important to establish the standard against which we will judge whether
the presented theory is viable. Criterion (v) will be deemed to be satisfied
if at least two different curvature invariants can be shown to be divergence-
free. To satisfy criterion (vi) we make the maximal demand that the theory
reproduces all four observed phases of cosmological evolution in the correct
order [28], namely an early period of accelerated expansion, followed by
radiation and matter-dominated phases, and finally a late period of
accelerated expansion [29].
This paper is organised as follows. In section 2 we define the model of AWIG,
including a detailed exploration of the dimensionless exponent
$n\left(\mathcal{R}\right)$. In section 3 we detail the methodology that will
be used to test the viability of our model. Results are presented in section 4
followed by a concluding discussion in section 5.
## 2 Model
The class of theories to which our model belongs is defined by the action
$\mathcal{S}=\frac{1}{2\kappa}\int f\left(\mathcal{R}\right)\sqrt{-g}d^{4}x,$
(1)
where $\kappa\equiv 8\pi G$ and $G$ is the gravitational coupling.
$f\left(\mathcal{R}\right)$ is an arbitrary function of the Palatini scalar
curvature $\mathcal{R}$ and $g$ is the determinant of the metric tensor.
Varying Eq. (1) with respect to the metric and taking the trace gives the
field equations [7]
$f^{\prime}(\mathcal{R})\mathcal{R}-2f(\mathcal{R})=\kappa T.$ (2)
AWIG is defined by the specific case [18]
$f\left(\mathcal{R}\right)=\mathcal{R}^{n\left(\mathcal{R}\right)},$ (3)
where $n\left(\mathcal{R}\right)$ is a dimensionless function of $\mathcal{R}$
that interpolates between general relativity when
$n\left(\mathcal{R}\right)=1$ and a locally scale-invariant and superficially
renormalizable theory of gravity when $n\left(\mathcal{R}\right)=2$. By
defining $n\left(\mathcal{R}\right)$ in this way the Lagrangian density
$f\left(\mathcal{R}\right)$ is purely a function of scalar curvature, and
hence is guaranteed to be invariant under arbitrary differential coordinate
transformations. In $4$-dimensional spacetime
$\mathcal{R}^{n\left(\mathcal{R}\right)}$ has canonical mass dimension
$2n\left(\mathcal{R}\right)$. Since $\sqrt{-g}$ has mass dimension $-4$,
$\kappa$ must have a mass dimension of $2n\left(\mathcal{R}\right)-4$ if Eq.
(1) is to be dimensionless, which it must be since we are working in units of
$\hbar=c=1$. Thus, in the limit $n\left(\mathcal{R}\right)\to 2$ the
gravitational coupling becomes dimensionless, as demanded by scale-invariance.
Superficially renormalizable field theories are those with dimensionless
coupling constants [30].
To complete the definition of this model we must specify the function
$n\left(\mathcal{R}\right)$. We begin by taking the first derivative of
$f\left(\mathcal{R}\right)$ with respect to $\mathcal{R}$, denoted by
$f^{\prime}\left(\mathcal{R}\right)$, finding
$f^{\prime}\left(\mathcal{R}\right)=\mathcal{R}^{n\left(\mathcal{R}\right)-1}\left(n\left(\mathcal{R}\right)+\mathcal{R}\rm{log}\left(\mathcal{R}\right)n^{\prime}\left(\mathcal{R}\right)\right).$
(4)
Substituting Eqs.(3) and (4) into Eq. (2) and rearranging yields
$n^{\prime}\left(\mathcal{R}\right)=\frac{\kappa
T+\mathcal{R}^{n\left(\mathcal{R}\right)}\left(2-n\left(\mathcal{R}\right)\right)}{\mathcal{R}^{n\left(\mathcal{R}\right)+1}\log{\left(\mathcal{R}\right)}}.$
(5)
We now use the fact that the symmetry of local scale invariance is signalled
by the vanishing of the traced energy tensor [31]. Thus, as
$n\left(\mathcal{R}\right)\to 2$ we must have $T\to 0$. Applying the limits
$n\left(\mathcal{R}\right)\to 2$ and $T\to 0$ to Eq.(5) yields
$n^{\prime}\left(\mathcal{R}\right)=0$. Similarly, as
$n\left(\mathcal{R}\right)\to 1$ we must have $\kappa T\to-\mathcal{R}$, and
so Eq.(5) again yields $n^{\prime}\left(\mathcal{R}\right)=0$.222If
$\mathcal{R}=1$ when $n\left(\mathcal{R}\right)=2$ and $\kappa T=0$ then
$n^{\prime}\left(\mathcal{R}\right)$ is undefined, since the numerator and
denominator of Eq.(5) both equal zero. Likewise, if $\mathcal{R}=0$ when
$n\left(\mathcal{R}\right)=1$ and $\kappa T=-\mathcal{R}$ then
$n^{\prime}\left(\mathcal{R}\right)$ is undefined. However, in the limiting
cases $\mathcal{R}\to 0$ and $\mathcal{R}\to 1$ we have
$n^{\prime}\left(\mathcal{R}\right)=0$. Therefore, the function we seek must
satisfy the condition $n^{\prime}\left(\mathcal{R}\right)=0$ as
$n\left(\mathcal{R}\right)\to 1$ and $n\left(\mathcal{R}\right)\to 2$.
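As a quick consistency check, the differentiation in Eq. (4) underlying this
argument can be verified symbolically; the following sympy snippet is an
illustrative aid, not part of the original derivation.

```python
# Symbolic check of Eq. (4): d/dR [R^n(R)] = R^(n-1) * (n + R*log(R)*n').
import sympy as sp

R = sp.symbols('R', positive=True)
n = sp.Function('n')
lhs = sp.diff(R**n(R), R)
rhs = R**(n(R) - 1) * (n(R) + R * sp.log(R) * sp.Derivative(n(R), R))
print(sp.simplify(lhs - rhs))  # prints 0
```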
Experiment also supports a near-constant exponent $n\left(\mathcal{R}\right)$
at lower curvature scales. This is because general relativity agrees with
experiment over a wide range of energy or curvature scales [32, 1], indicating
that $n\left(\mathcal{R}\right)$ has at most a very weak dependence on
$\mathcal{R}$ within the range of current experimental sensitivity. Similarly,
the fact that in the high-curvature limit the theory becomes locally scale-
invariant implies a constant $n\left(\mathcal{R}\right)$, since in this limit
there can be no scale with respect to which $n\left(\mathcal{R}\right)$ can
vary.
We now proceed by assuming $n\left(\mathcal{R}\right)$ admits a series
expansion in $\mathcal{R}$ of the form
$n\left(\mathcal{R}_{*}\right)=\sum_{m=0}^{\infty}c_{m}\mathcal{R}_{*}^{m},$
(6)
where $c_{m}$ are dimensionless constants and $\mathcal{R_{*}}$ is defined by
the dimensionless ratio $\mathcal{R_{*}}\equiv\mathcal{R}/\mathcal{R}_{0}$,
with $\mathcal{R}_{0}$ a finite constant of mass dimension two that represents
the maximum value $\mathcal{R}$ can take. In this way, $n\left(\mathcal{R}_{*}\right)$
is a purely dimensionless function of the Palatini scalar curvature
$\mathcal{R}$. Truncating to a third-order function we have333It can be shown
that first and second-order functions cannot produce the desired features
[33].
$n\left(\mathcal{R}_{*}\right)=c_{0}+c_{1}\mathcal{R}_{*}+c_{2}\mathcal{R}_{*}^{2}+c_{3}\mathcal{R}_{*}^{3}.$
(7)
Since the low-curvature limit corresponds to $\mathcal{R}_{*}\to 0$, the
constraint $n\left(\mathcal{R}_{*}\to 0\right)=1$ immediately yields
$c_{0}=1$. Similarly, since the high-curvature limit corresponds to
$\mathcal{R}_{*}\to 1$, the constraint $n\left(\mathcal{R}_{*}\to 1\right)=2$
gives $1+c_{1}+c_{2}+c_{3}=2$, or equivalently $c_{1}+c_{2}+c_{3}=1$.
The first derivative of $n\left(\mathcal{R}_{*}\right)$ with respect to
$\mathcal{R}$ is
$n^{\prime}\left(\mathcal{R}_{*}\right)=\frac{c_{1}}{\mathcal{R}_{0}}+2\frac{c_{2}}{\mathcal{R}_{0}^{2}}\mathcal{R}+3\frac{c_{3}}{\mathcal{R}_{0}^{3}}\mathcal{R}^{2}=\frac{c_{1}}{\mathcal{R}}\mathcal{R}_{*}+2\frac{c_{2}}{\mathcal{R}}\mathcal{R}_{*}^{2}+3\frac{c_{3}}{\mathcal{R}}\mathcal{R}_{*}^{3}.$
(8)
Since $\mathcal{R}_{*}\equiv\mathcal{R}/\mathcal{R}_{0}\to 0$ in the low-
curvature limit, Eq. (8) gives
$n^{\prime}\left(\mathcal{R}_{*}\right)=c_{1}/\mathcal{R}_{0}=0$, which
implies $c_{1}=0$ since $\mathcal{R}_{0}$ is assumed to be finite. The high-
curvature limit corresponds to
$\mathcal{R}_{*}\equiv\mathcal{R}/\mathcal{R}_{0}\to 1$, and so Eq. (8) gives
$n^{\prime}\left(\mathcal{R}_{*}\right)=c_{1}/\mathcal{R}_{0}+2c_{2}/\mathcal{R}_{0}+3c_{3}/\mathcal{R}_{0}=0$,
which implies $2c_{2}+3c_{3}=0$ since $c_{1}=0$. The polynomial coefficients
$c_{2}$ and $c_{3}$ can now be determined by solving the system of equations
$2c_{2}+3c_{3}=0$ and $c_{2}+c_{3}=1$, with the result $c_{2}=3,c_{3}=-2$.
Therefore,
$n\left(\mathcal{R}_{*}\right)=1+3\mathcal{R}_{*}^{2}-2\mathcal{R}_{*}^{3}.$
(9)
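As an illustrative cross-check (not part of the original derivation), the
boundary conditions above can be solved with sympy to recover these
coefficients:

```python
# Solving the boundary conditions n(0)=1, n(1)=2, n'(0)=n'(1)=0 for the cubic.
import sympy as sp

c1, c2, c3, Rs = sp.symbols('c1 c2 c3 R_star')
n = 1 + c1*Rs + c2*Rs**2 + c3*Rs**3        # Eq. (7) with c0 = 1
conditions = [
    sp.Eq(n.subs(Rs, 1), 2),               # n(1) = 2
    sp.Eq(sp.diff(n, Rs).subs(Rs, 0), 0),  # n'(0) = 0
    sp.Eq(sp.diff(n, Rs).subs(Rs, 1), 0),  # n'(1) = 0
]
print(sp.solve(conditions, [c1, c2, c3]))  # {c1: 0, c2: 3, c3: -2}
```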
Eq. (9) is the lowest-order polynomial to satisfy our criteria, but there are
potentially an infinite number of higher-order polynomial functions. Let
$n_{i}\left(\mathcal{R}_{*}\right)$ label this set of polynomial functions,
where the order of the polynomial is given by $2i+1$. One can then generalise
Eq. (9) to any higher-order using [33]
$n_{i}\left(\mathcal{R}_{*}\right)=1+\mathcal{R}_{*}^{i+1}\sum_{j=0}^{i}{{i+j}\choose{j}}{{2i+1}\choose{i-j}}\left(-\mathcal{R}_{*}\right)^{j},\qquad
i\in\mathbb{N}.$ (10)
The first thirteen functions generated by Eq. (10) are shown in Fig. 1 (left).
The Lagrangian density in this case is then
$f_{i}\left(\mathcal{R}\right)=\mathcal{R}^{n_{i}\left(\mathcal{R}_{*}\right)}=\left(\mathcal{R}_{0}\mathcal{R}_{*}\right)^{n_{i}\left(\mathcal{R}_{*}\right)}.$
(11)
For simplicity, we choose $\mathcal{R}_{0}$ to have the value of one when
expressed in some particular unit of mass dimension two. For example, one
possibility is $\mathcal{R}_{0}=1m_{P}$, where $m_{P}$ is the Planck mass. The
term $\mathcal{R}_{0}$ then only acts to set the dimensionality of
$f_{i}\left(\mathcal{R}\right)$. The first thirteen functions
$f_{i}\left(\mathcal{R}\right)$ generated by applying Eq. (10) to Eq. (11) are
shown in Fig. 1 (middle), where we set $\mathcal{R}_{0}=1$ in some appropriate
unit. Differentiating Eq. (11) with respect to $\mathcal{R}$ gives the set of
first derivative functions $f^{\prime}_{i}\left(\mathcal{R}\right)$, with the
first 13 shown in Fig. 1 (right).
Figure 1: The first 13 exponents $n_{i}\left(\mathcal{R}_{*}\right)$ (left),
Lagrangian densities $f_{i}\left(\mathcal{R}\right)$ (middle), and first
derivative functions $f^{\prime}_{i}\left(\mathcal{R}\right)$ (right)
generated by Eq. (10) as a function of $\mathcal{R}_{*}$.
An important feature of Fig. 1 (right) is that the thirteenth function
$f^{\prime}_{13}\left(\mathcal{R}\right)$ becomes negative for certain values
of $\mathcal{R}_{*}$. A well-defined conformal transformation of the metric
tensor $\tilde{g}_{\mu\nu}=f^{\prime}\left(\mathcal{R}\right)g_{\mu\nu}$
requires that $f^{\prime}\left(\mathcal{R}\right)>0$ for all $\mathcal{R}$.
This condition is only satisfied if $i\leq 12$. Thus, we can exclude
Lagrangian densities $f_{i}\left(\mathcal{R}\right)$ with $i\geq 13$. In this
work, we shall focus on the simplest permitted Lagrangian density
$f_{1}\left(\mathcal{R}\right)=\mathcal{R}^{1+3\mathcal{R}_{*}^{2}-2\mathcal{R}_{*}^{3}}.$
(12)
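The positivity condition on $f^{\prime}_{i}\left(\mathcal{R}\right)$ noted
above can also be probed numerically; the following sketch implements Eq. (10)
with $\mathcal{R}_{0}=1$ and checks the sign of a numerical derivative of
$f_{i}$ on a grid (an illustrative aid only).

```python
# Numerical probe of f'_i > 0 on (0, 1), per Eq. (10) with R_0 = 1.
import numpy as np
from math import comb

def n_i(i, rs):
    """Exponent n_i(R*) of Eq. (10)."""
    s = sum(comb(i + j, j) * comb(2 * i + 1, i - j) * (-rs) ** j
            for j in range(i + 1))
    return 1 + rs ** (i + 1) * s

rs = np.linspace(1e-4, 1 - 1e-4, 4000)
for i in (1, 12, 13):
    f = rs ** n_i(i, rs)
    f_prime = np.gradient(f, rs)       # numerical derivative of f_i
    print(i, bool(f_prime.min() > 0))  # expected per the text: True, True, False
```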
## 3 Method
In this section we detail the method used to test the cosmological viability
of the model defined by Eq. (12). The methodology presented in this section
follows the work of Refs. [28, 18].
Since cosmological observations by the Planck satellite show that our universe
is consistent with being spatially flat at late times [4], we begin by
assuming a flat Friedmann-Lemaître-Robertson-Walker (FLRW) metric
$ds^{2}=-dt^{2}+a^{2}(t)\left(dx^{2}+dy^{2}+dz^{2}\right),$ (13)
where $a(t)$ is the scale factor of the universe, a function of cosmological
time $t$. The evolution of a spatially homogenous and isotropic universe
filled with a cosmological fluid composed of pressureless dust and radiation
can be described by the modified Friedmann equation [7, 34]
$\left(H+\frac{\dot{f}^{\prime}\left(\mathcal{R}\right)}{2f^{\prime}\left(\mathcal{R}\right)}\right)^{2}=\frac{\kappa\left(\rho_{m}+2\rho_{r}\right)+f\left(\mathcal{R}\right)}{6f^{\prime}\left(\mathcal{R}\right)},$
(14)
where the dot notation signifies a time derivative and $H\equiv\dot{a}/a$ is
the Hubble parameter. $\rho_{m}$ and $\rho_{r}$ are the energy density of
matter and radiation, respectively, which satisfy the conservation conditions
$\dot{\rho}_{m}+3H\rho_{m}=0,\qquad\dot{\rho}_{r}+4H\rho_{r}=0.$ (15)
Since the trace of the energy-momentum tensor for radiation is zero, we simply
have $T=-\rho_{m}$ [28]. By using Eq. (14), combined with the conservation
conditions of Eq. (15), we can express the time derivative of the Palatini
scalar curvature as [34, 28]
$\mathcal{\dot{R}}=-\frac{3H\left(f^{\prime}\left(\mathcal{R}\right)\mathcal{R}-2f\left(\mathcal{R}\right)\right)}{f^{\prime\prime}\left(\mathcal{R}\right)\mathcal{R}-f^{\prime}\left(\mathcal{R}\right)}.$
(16)
Using Eq. (16) we can replace $\mathcal{\dot{R}}$ in Eq. (14) to obtain [34,
7]
$H=\sqrt{\frac{2\kappa\left(\rho_{m}+\rho_{r}\right)+f^{\prime}\left(\mathcal{R}\right)\mathcal{R}-f\left(\mathcal{R}\right)}{6f^{\prime}\left(\mathcal{R}\right)\xi}},$
(17)
where $\xi$ is defined by
$\xi=\left(1-\frac{3}{2}\frac{f^{\prime\prime}\left(\mathcal{R}\right)\left(f^{\prime}\left(\mathcal{R}\right)\mathcal{R}-2f\left(\mathcal{R}\right)\right)}{f^{\prime}\left(\mathcal{R}\right)\left(f^{\prime\prime}\left(\mathcal{R}\right)\mathcal{R}-f^{\prime}\left(\mathcal{R}\right)\right)}\right)^{2}.$
(18)
If $\rho_{r}=0$, it is possible to use
$T=-\rho_{m}=\left(f^{\prime}\left(\mathcal{R}\right)\mathcal{R}-2f\left(\mathcal{R}\right)\right)/\kappa$
to obtain the simpler expression
$H=\sqrt{\frac{3f\left(\mathcal{R}\right)-f^{\prime}\left(\mathcal{R}\right)\mathcal{R}}{6f^{\prime}\left(\mathcal{R}\right)\xi}}.$
(19)
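To make Eqs. (18) and (19) concrete for the model of Eq. (12), one can
evaluate them symbolically with sympy, setting $\mathcal{R}_{0}=1$ and
$\rho_{r}=0$; the specific evaluation point below is an illustrative
assumption.

```python
# Evaluating xi (Eq. (18)) and H (Eq. (19)) for the f of Eq. (12), with R_0 = 1.
import sympy as sp

R = sp.symbols('R', positive=True)
f = R ** (1 + 3 * R**2 - 2 * R**3)              # Eq. (12)
fp, fpp = sp.diff(f, R), sp.diff(f, R, 2)
xi = (1 - sp.Rational(3, 2) * fpp * (fp * R - 2 * f)
      / (fp * (fpp * R - fp))) ** 2             # Eq. (18)
H = sp.sqrt((3 * f - fp * R) / (6 * fp * xi))   # Eq. (19), dust only
print(sp.N(H.subs(R, sp.Rational(1, 2))))       # illustrative point R = 1/2
```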
In this work, we shall perform a detailed analysis of the phase space of the
model defined by Eq. (12). To facilitate this analysis we establish an
autonomous system of equations defined by the pair of dimensionless variables
[28]
$y_{1}=\frac{f^{\prime}\left(\mathcal{R}\right)\mathcal{R}-f\left(\mathcal{R}\right)}{6f^{\prime}\left(\mathcal{R}\right)\xi H^{2}},\qquad y_{2}=\frac{\kappa\rho_{r}}{3f^{\prime}\left(\mathcal{R}\right)\xi H^{2}}.$ (20)
Using Eqs. (20) and (17) it can be shown that $\rho_{r}$ can be expressed in
terms of the variable $y_{2}$ via
$\rho_{r}=\frac{y_{2}}{1-y_{2}}\left(\rho_{m}+\frac{f^{\prime}\left(\mathcal{R}\right)\mathcal{R}}{2\kappa}-\frac{f\left(\mathcal{R}\right)}{2\kappa}\right).$
(21)
The evolution of $y_{1}$ and $y_{2}$ as a function of the cosmic scale factor $a$ is governed by the differential equations
$\frac{dy_{1}}{dN}=y_{1}\left(3-3y_{1}+y_{2}-3\frac{\left(f^{\prime}\left(\mathcal{R}\right)\mathcal{R}-2f\left(\mathcal{R}\right)\right)f^{\prime\prime}\left(\mathcal{R}\right)\mathcal{R}}{\left(f^{\prime}\left(\mathcal{R}\right)\mathcal{R}-f\left(\mathcal{R}\right)\right)\left(f^{\prime\prime}\left(\mathcal{R}\right)\mathcal{R}-f^{\prime}\left(\mathcal{R}\right)\right)}\left(1-y_{1}\right)\right)$
(22)
and
$\frac{dy_{2}}{dN}=y_{2}\left(-1-3y_{1}+y_{2}+3\frac{\left(f^{\prime}\left(\mathcal{R}\right)\mathcal{R}-2f\left(\mathcal{R}\right)\right)f^{\prime\prime}\left(\mathcal{R}\right)\mathcal{R}}{\left(f^{\prime}\left(\mathcal{R}\right)\mathcal{R}-f\left(\mathcal{R}\right)\right)\left(f^{\prime\prime}\left(\mathcal{R}\right)\mathcal{R}-f^{\prime}\left(\mathcal{R}\right)\right)}y_{1}\right),$
(23)
where $N\equiv\rm{ln}(a)$. The fixed points of this system correspond to the
values $\left(y_{1},y_{2}\right)$ that satisfy
$\frac{dy_{1}}{dN}=\frac{dy_{2}}{dN}=0.$ (24)
Note that there is a direct relationship between $\mathcal{R}$ and the
variables $\left(y_{1},y_{2}\right)$ given by [28]
$\frac{f^{\prime}\left(\mathcal{R}\right)\mathcal{R}-2f\left(\mathcal{R}\right)}{f^{\prime}\left(\mathcal{R}\right)\mathcal{R}-f\left(\mathcal{R}\right)}=-\frac{1-y_{1}-y_{2}}{2y_{1}}.$
(25)
By calculating the eigenvalues $\left(\lambda_{1},\lambda_{2}\right)$ of the
Jacobian matrix at each point $\left(y_{1},y_{2}\right)$ the stability of the
fixed points can be determined [28, 35]. The fixed point is stable when both
eigenvalues are real and negative, and unstable when both are real and
positive. The fixed point is a saddle point when both eigenvalues are real and
of opposite sign. The nature of the fixed point for different eigenvalues
$\left(\lambda_{1},\lambda_{2}\right)$ is summarized in Tab. 1.
Eigenvalues | Fixed point
---|---
$\lambda_{1}\neq\lambda_{2}<0$ | Stable
$\lambda_{1}\neq\lambda_{2}>0$ | Unstable
$\lambda_{1}<0<\lambda_{2}$ | Saddle
Table 1: Fixed point type based on eigenvalue pairs
$\left(\lambda_{1},\lambda_{2}\right)$.
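To illustrate this classification, the sketch below (Python with sympy) computes the Jacobian eigenvalues of the system of Eqs. (22)-(23) at the three fixed points reported later in Tab. 2. For brevity, the curvature-dependent ratio appearing in Eqs. (22)-(23) is frozen into a single parameter $C$ at a fixed curvature slice; this freezing, and the limiting values $C\to 3$ and $C\to 0$ read off from Tab. 2, are our assumptions.

```python
import sympy as sp

y1, y2, C = sp.symbols("y1 y2 C", real=True)

# Eqs. (22)-(23), with the ratio (f'R - 2f) f''R / ((f'R - f)(f''R - f'))
# abbreviated by the parameter C, held fixed at a given curvature slice.
dy1 = y1 * (3 - 3*y1 + y2 - 3*C*(1 - y1))
dy2 = y2 * (-1 - 3*y1 + y2 + 3*C*y1)

J = sp.Matrix([dy1, dy2]).jacobian([y1, y2])

# The three roots of Eq. (24), as reported in Tab. 2.
for name, (p1, p2) in {"P1": (1, 0), "P2": (0, 1), "P3": (0, 0)}.items():
    eigs = list(J.subs({y1: p1, y2: p2}).eigenvals())
    print(name, [sp.simplify(e) for e in eigs])

# Output: P1 -> {3C - 3, 3C - 4}, P2 -> {4 - 3C, 1}, P3 -> {3 - 3C, -1}.
# Substituting C -> 3 (low curvature) and C -> 0 (high curvature)
# reproduces the eigenvalue pairs of Tab. 2, e.g. (6, 5) and (-4, -3) for P1.
```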
The values $\left(y_{1},y_{2}\right)$ for each corresponding fixed point are then substituted into the effective equation of state $w_{eff}$ given by [28] (as a cross-check of our methodology and computer code, we verified that we can successfully reproduce the cosmological dynamics found in Ref. [28] for two different models):
$w_{eff}=-y_{1}+\frac{1}{3}y_{2}+\frac{\dot{f}^{\prime}\left(\mathcal{R}\right)}{3Hf^{\prime}\left(\mathcal{R}\right)}+\frac{\dot{\xi}}{3H\xi}-\frac{\dot{f}^{\prime}\left(\mathcal{R}\right)\mathcal{R}}{18f^{\prime}\left(\mathcal{R}\right)\xi H^{3}},$ (26)
where $\dot{\xi}$ is determined by taking the derivative of Eq. (18) with
respect to time and using Eq. (16). $\dot{f}^{\prime}\left(\mathcal{R}\right)$
is given by [28]
$\dot{f}^{\prime}\left(\mathcal{R}\right)=-\frac{3H\left(f^{\prime}\left(\mathcal{R}\right)\mathcal{R}-2f\left(\mathcal{R}\right)\right)f^{\prime\prime}\left(\mathcal{R}\right)}{f^{\prime\prime}\left(\mathcal{R}\right)\mathcal{R}-f^{\prime}\left(\mathcal{R}\right)}=\mathcal{\dot{R}}f^{\prime\prime}\left(\mathcal{R}\right).$
(27)
It will also prove useful to define the deceleration parameter $q$ in terms of
the effective equation of state $w_{eff}$. Since the deceleration parameter is
defined in terms of the Hubble parameter via
$q\equiv-\left(\frac{\dot{H}}{H^{2}}+1\right),$ (28)
and since [28]
$\frac{\dot{H}}{H^{2}}=-\frac{3}{2}\left(1+w_{eff}\right),$ (29)
we then find
$q=\frac{1}{2}\left(1+3w_{eff}\right).$ (30)
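This relation follows directly from Eqs. (28) and (29); the short symbolic check below (Python with sympy) makes the substitution explicit.

```python
import sympy as sp

# Substituting Eq. (29) into the definition of q, Eq. (28).
w_eff = sp.symbols("w_eff", real=True)
Hdot_over_H2 = -sp.Rational(3, 2) * (1 + w_eff)  # Eq. (29)
q = -(Hdot_over_H2 + 1)                          # Eq. (28)
print(sp.expand(q))  # 3*w_eff/2 + 1/2, i.e. q = (1 + 3 w_eff)/2, Eq. (30)
```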
To further evaluate the viability criteria set out in the introduction, we
must also test whether our theory contains scalar curvature singularities
[18]. A local rescaling of the metric tensor by a conformal factor
$\Omega^{2}(x)$ is equivalent to the transformations [36, 37, 38]
$g_{\mu\nu}\rightarrow\tilde{g}_{\mu\nu}=f^{\prime}(\mathcal{R})g_{\mu\nu},\qquad
g^{\mu\nu}\rightarrow\tilde{g}^{\mu\nu}=\left(f^{\prime}(\mathcal{R})\right)^{-1}g^{\mu\nu}.$
(31)
The Ricci scalar $\mathcal{R}$ defines the simplest possible curvature
invariant. Thus, in the Palatini formalism, $\mathcal{R}$ raised to the power
of any positive integer $m$ transforms under (31) via
$\mathcal{R}^{m}\to\frac{\mathcal{R}^{m}}{\left(f^{\prime}\left(\mathcal{R}\right)\right)^{m}}.$
(32)
The next simplest curvature invariant involves the Ricci tensor. Since our
model is defined in the Palatini variation, the connection
$\Gamma^{\nu}_{\mu\sigma}$ is not assumed to depend on the metric
$g_{\mu\nu}$, and so the Ricci tensor
$R_{\mu\nu}=\partial_{\rho}\Gamma^{\rho}_{\nu\mu}-\partial_{\nu}\Gamma^{\rho}_{\rho\mu}+\Gamma^{\rho}_{\rho\lambda}\Gamma^{\lambda}_{\nu\mu}-\Gamma^{\rho}_{\nu\lambda}\Gamma^{\lambda}_{\rho\mu}$
(33)
may remain invariant under the local rescaling transformation of Eq. (31). The
Ricci tensor with upper indices, however, is given by
$R^{\mu\nu}=g^{\mu\rho}g^{\nu\sigma}R_{\rho\sigma}$ and so it does transform
under Eq. (31) according to $R^{\mu\nu}\to R^{\mu\nu}\left(f^{\prime}(\mathcal{R})\right)^{-2}$. Therefore, second order curvature
invariants involving the Ricci tensor, namely $R_{\mu\nu}R^{\mu\nu}$, to any
integer power $m$, will transform under Eq. (31) according to
$\left(R_{\mu\nu}R^{\mu\nu}\right)^{m}\to\frac{\left(R_{\mu\nu}R^{\mu\nu}\right)^{m}}{\left(f^{\prime}(\mathcal{R})\right)^{2m}}.$
(34)
It is unclear whether the Kretschmann scalar is well defined in the Palatini formalism [39], and so we omit it from our analysis.
## 4 Results
We find that the model defined by the exponent of Eq. (12) contains three
fixed points $P_{1}$, $P_{2}$ and $P_{3}$. The eigenvalues and stability of
these fixed points, defined by the roots $\left(y_{1},y_{2}\right)$ of Eqs.
(22) and (23), are displayed in Tab. 2 in the low and high-curvature limits.
Figure 2 displays how the eigenvalues $\left(\lambda_{1},\lambda_{2}\right)$
vary as a function of $\mathcal{R}_{*}$ for $P_{1}$ (left), $P_{2}$ (middle),
and $P_{3}$ (right).
Fixed point | $\left(y_{1},y_{2}\right)$ | $\left(\lambda_{1},\lambda_{2}\right)$ $\left(\mathcal{R}_{*}\to 0\right)$ | $\left(\lambda_{1},\lambda_{2}\right)$ $\left(\mathcal{R}_{*}\to 1\right)$
---|---|---|---
$P_{1}$ | $\left(1,0\right)$ | $\left(6,5\right)$ Unstable | $\left(-4,-3\right)$ Stable
$P_{2}$ | $\left(0,1\right)$ | $\left(1,-5\right)$ Saddle | $\left(1,4\right)$ Unstable
$P_{3}$ | $\left(0,0\right)$ | $\left(-1,-6\right)$ Stable | $\left(-1,3\right)$ Saddle
Table 2: The dimensionless variables $\left(y_{1},y_{2}\right)$, eigenvalues
$\left(\lambda_{1},\lambda_{2}\right)$ in the low $\left(\mathcal{R}_{*}\to
0\right)$ and high-curvature $\left(\mathcal{R}_{*}\to 1\right)$ limits, and
stability of the three fixed points $P_{1}$, $P_{2}$ and $P_{3}$.
Figure 2: The eigenvalues $\left(\lambda_{1},\lambda_{2}\right)$ as a function
of $\mathcal{R}_{*}$ for fixed points $P_{1}$ (left), $P_{2}$ (middle), and
$P_{3}$ (right).
Figure 2 illustrates a potential advantage of AWIG. Unlike most other $f(R)$
theories of gravity, the variable power in the Lagrangian density of AWIG
makes it possible for the eigenvalues and hence stability of each fixed point
to vary with curvature scale, and hence to potentially vary with cosmological
time. So, for example, the stability of the fixed point $P_{1}$ can change
from being stable in the high-curvature limit to being unstable at lower
curvatures, as can be seen in Fig. 2 (left). This feature allows a richer set
of possible cosmological dynamics.
Inserting the obtained coordinate pairs $\left(y_{1},y_{2}\right)$ into Eq.
(26) yields the effective equation of state $w_{eff}$ as a function of the
Palatini scalar curvature. The results are displayed in Fig. 3 for the fixed
points $P_{1}$ and $P_{3}$. Since $y_{2}=1$ for the fixed point $P_{2}$ we can
see from Eq. (21) that $\rho_{r}$ is undefined, and therefore via Eq. (17) $H$
must also be undefined. Consequently, $w_{eff}$ for $P_{2}$ is undefined, as
is evident from Eq. (26). However, we know that as
$n\left(\mathcal{R}_{*}\right)\to 2$ AWIG is equivalent to general relativity
plus a cosmological constant [17]. Thus, $w_{eff}$ for $P_{2}$ can be
determined in the high-curvature limit by an equivalent analysis of the model
$f\left(\mathcal{R}\right)=\mathcal{R}-\Lambda$, where $\Lambda$ is the
cosmological constant. We have repeated the methodology outlined in section 3
for the model $f\left(\mathcal{R}\right)=\mathcal{R}-\Lambda$ finding
eigenvalues $\left(\lambda_{1},\lambda_{2}\right)=\left(1,4\right)$, which
agrees with our result presented in Fig. 2 (middle) in the high-curvature
limit, and an effective equation of state $w_{eff}=1/3$. Identical results are
also found in Ref. [28]. Therefore, $P_{2}$ corresponds to a radiation-like
phase in the high-curvature limit.
Figure 3: The effective equation of state parameter $w_{eff}$ as a function of
$\mathcal{R}_{*}$ for the fixed point $P_{1}$ (left) and $P_{3}$ (right).
The effective equation of state parameter $w_{eff}$ for the fixed points $P_{1}$, $P_{2}$ and $P_{3}$ in the low- and high-curvature limits is summarised in Tab. 3. Thus, we identify $P_{1}$ as a de Sitter-like phase,
$P_{2}$ as a radiation-like phase, and $P_{3}$ as a matter-like phase. Note
that the unknown value of $w_{eff}$ for $P_{2}$ in the limit
$\mathcal{R}_{*}\to 0$ is denoted by $-$. Figures 2 and 3 suggest that if the
matter-dominated phase $P_{3}$ is to transition back to the de Sitter-like
phase $P_{1}$, to account for the currently observed late period of cosmic
acceleration, then this transition must occur at a curvature scale
$\mathcal{R}_{*}\gtrsim 0.28$. This is because if $\mathcal{R}_{*}\lesssim
0.28$ then it is not possible to exit the stable matter-like phase.
Fixed point | $w_{eff}\left(\mathcal{R_{*}}\to 0\right)$ | $w_{eff}\left(\mathcal{R_{*}}\to 1\right)$ | Phase
---|---|---|---
$P_{1}$ | -1 | -1 | De Sitter
$P_{2}$ | - | 1/3 | Radiation
$P_{3}$ | 0 | 0 | Matter
Table 3: The effective equation of state in the low-curvature limit
$w_{eff}\left(\mathcal{R}_{*}\to 0\right)$, high-curvature limit
$w_{eff}\left(\mathcal{R}_{*}\to 1\right)$ and the phase type for the fixed
points $P_{1}$, $P_{2}$ and $P_{3}$.
To further analyse the cosmological evolution of our model we use Eq. (30) to
investigate how the deceleration parameter $q$ varies as a function of
$\mathcal{R}_{*}$ for the de Sitter-like phase. The results are shown in Fig.
4. If $q>0$ then the universe is expanding but decelerating; if $q<0$ then the universe is expanding and accelerating [40]. Figure 4, therefore, indicates that the de Sitter-like phase undergoes two periods of accelerated expansion, one in the high-curvature regime $0.4\lesssim\mathcal{R}_{*}<1$ and one in the low-curvature regime $0\leq\mathcal{R}_{*}\lesssim 0.23$, mediated by a period of decelerated expansion for $0.23\lesssim\mathcal{R}_{*}\lesssim 0.4$ (see Fig. 4).
Assuming curvature on cosmological scales decreases with cosmological time,
this implies an early and late period of accelerated cosmic expansion, with an
intermediate period of decelerated expansion. In this sense, the dynamics
appear consistent with cosmological observations, depending on the exact scale
set by $\mathcal{R}_{0}$.
Figure 4: The deceleration parameter $q$ as a function of scalar curvature
$\mathcal{R}_{*}$ for the fixed point $P_{1}$.
We now analyse the phase space of this model, with the results shown in Fig.
5. The phase space of AWIG is 3-dimensional, with each point in the phase
space uniquely specified by the set of coordinates
$\left(y_{1},y_{2},\mathcal{R}_{*}\right)$. Figure 5 shows the
$\left(y_{1},y_{2}\right)$ plane for three different values of constant
curvature. One possible route the system may take through the 3-dimensional
phase space is depicted in the three plots of Fig. 5, where the system evolves
through the closed sequence of fixed points $P_{1}\to P_{2}\to P_{3}\to P_{1}$
with decreasing curvature scale $\mathcal{R}_{*}$. Thus, the model presented
is consistent with the sequence of an early period of accelerated expansion,
intermediate radiation and matter-dominated eras of decelerated expansion,
followed by the return to a period of accelerated expansion at late times.
Figure 5: Slices of constant curvature through the 3-dimensional phase space
of AWIG at $\mathcal{R}_{*}=0.5036$ (left), $\mathcal{R}_{*}=0.35$ (middle)
and $\mathcal{R}_{*}=0.3$ (right). The red trajectory shows one possible way
the system may evolve through the sequence of fixed points $P_{1}\to P_{2}\to
P_{3}\to P_{1}$.
We now present results for various powers of the Ricci scalar curvature under
the local rescaling of Eq. (31). Using Eqs. (4) and (32) we find
$\mathcal{R}^{m}\to\frac{\mathcal{R}^{m}}{\left(f^{\prime}\left(\mathcal{R}\right)\right)^{m}}\underset{\mathcal{R}_{*}\to 1}{=}\frac{1}{2^{m}}.$ (35)
The first three powers of the Ricci scalar curvature ($m=1,2,3$) are shown in
Fig. 6. As can be seen from Fig. 6, each curvature invariant remains free of divergences and approaches a constant in the limit $\mathcal{R}_{*}\to 1$. Similar results
have been shown in Refs. [41, 42]. Likewise, Eqs. (4) and (34) can be used to
show that the curvature invariant $\left(R_{\mu\nu}R^{\mu\nu}\right)^{m}$
formed from the Ricci tensor asymptotically approaches $1/2^{2m}$ as
$\mathcal{R}_{*}\to 1$. Therefore, the model presented contains no curvature
singularities in $\mathcal{R}$ or $R_{\mu\nu}R^{\mu\nu}$, at any order $m$.
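A quick symbolic check of these limits is sketched below in Python with sympy, under the assumption (stated in the Discussion) that AWIG approaches pure $\mathcal{R}^{2}$ gravity at high curvature, so that $f\left(\mathcal{R}\right)\sim\mathcal{R}^{2}$ and $f^{\prime}\left(\mathcal{R}\right)\sim 2\mathcal{R}$ as $\mathcal{R}_{*}\to 1$.

```python
import sympy as sp

R = sp.symbols("R", positive=True)

# High-curvature limit: f(R) ~ R^2, so f'(R) ~ 2R (our assumption, per the Discussion).
f = R**2
fprime = sp.diff(f, R)

# Transformed scalar invariant of Eq. (35) for the first few powers m.
for m in (1, 2, 3):
    print(m, sp.simplify(R**m / fprime**m))  # 1/2, 1/4, 1/8: finite constants

# Per Eqs. (4) and (34) of the text, the Ricci-tensor invariant
# (R_mn R^mn)^m behaves analogously, approaching 1/2**(2*m) in the same limit.
```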
Figure 6: The first three powers ($m=1,2,3$) of the transformed Palatini
scalar curvature as a function of $\mathcal{R}_{*}$.
## 5 Discussion
In this work we have shown that one of the simplest possible implementations
of asymptotically Weyl-invariant gravity (AWIG) may be viable, as measured
against criteria (i)$-$(vi) set out in the introduction.
However, the model’s viability cannot yet be definitively established for
several reasons. Firstly, AWIG is by construction superficially
renormalizable, but establishing its renormalizability via explicit
calculation remains an open problem. Secondly, the analysis performed in this
work has raised some unanswered questions. For example, the transition from
the matter-dominated phase to the late phase of cosmic expansion must occur at
a curvature scale $\mathcal{R}\gtrsim 0.28\mathcal{R}_{0}$. It is unknown
whether this is consistent with cosmological observations since the
dimensionful scale $\mathcal{R}_{0}$ is presently unknown. Furthermore, the
effective equation of state parameter $w_{eff}$ for the fixed point $P_{3}$ is
negative for $0<\mathcal{R}_{*}\lesssim 0.2$, the meaning of which is unclear.
Finally, one of the three fixed points, $P_{2}$, has an undefined effective equation of state for $0\leq\mathcal{R}_{*}<1$; however, $w_{eff}$ can be determined for $\mathcal{R}_{*}\to 1$.
Nevertheless, the model presented contains several encouraging features, such
as the apparent absence of curvature singularities and three fixed points with
effective equations of state corresponding to de Sitter, radiation and matter-like phases. The model also contains the correct sequence of early and late
periods of accelerated cosmic expansion, with an intermediate period of
decelerated expansion, something that has proven difficult to achieve in other
attempted modifications of general relativity [28]. Moreover, the early
accelerating phase emerges from AWIG without adding a scalar field. This is
because AWIG asymptotically approaches the Palatini formulation of pure
$\mathcal{R}^{2}$ gravity in the high curvature limit, which is equivalent to
general relativity plus a non-zero cosmological constant and no massless
scalar field [17]. Another positive feature of AWIG is that the variable power
in the Lagrangian density seems to permit a richer set of possible
cosmological dynamics, as can be seen from the variable eigenvalues in Fig. 2.
The dimensionless exponent $n\left(\mathcal{R}_{*}\right)$ explored in this
work is among the simplest possible choices, but it is far from the only
consistent choice. Going forward, we aim to narrow down, or perhaps even
uniquely determine, the functional form of $n\left(\mathcal{R}_{*}\right)$ and
the value of $\mathcal{R}_{0}$ to more robustly test the viability of
asymptotically Weyl-invariant gravity.
## 6 Acknowledgements
I wish to thank Roberto Percacci and Subir Sarkar for their comments on the
manuscript, and the anonymous referee for invaluable corrections.
## References
* [1] Clifford M. Will. The Confrontation between General Relativity and Experiment. Living Rev. Rel., 17:4, 2014.
* [2] Gerard ’t Hooft and M.J.G. Veltman. One loop divergencies in the theory of gravitation. Annales Poincare Phys.Theor., A20:69–94, 1974.
* [3] Marc H. Goroff and Augusto Sagnotti. The Ultraviolet Behavior of Einstein Gravity. Nucl.Phys., B266:709, 1986.
* [4] Y. Akrami et al. Planck 2018 results. X. Constraints on inflation. 2018.
* [5] Sean M. Carroll. The Cosmological constant. Living Rev. Rel., 4:1, 2001.
* [6] Edmund J. Copeland, M. Sami, and Shinji Tsujikawa. Dynamics of dark energy. Int. J. Mod. Phys. D, 15:1753–1936, 2006.
* [7] Thomas P. Sotiriou and Valerio Faraoni. f(R) Theories Of Gravity. Rev. Mod. Phys., 82:451–497, 2010.
* [8] Takeshi Chiba. 1/R gravity and scalar - tensor gravity. Phys. Lett. B, 575:1–3, 2003.
* [9] Thomas P. Sotiriou. Unification of inflation and cosmic acceleration in the Palatini formalism. Phys. Rev. D, 73:063515, 2006.
* [10] Luca Amendola, David Polarski, and Shinji Tsujikawa. Are f(R) dark energy models cosmologically viable ? Phys. Rev. Lett., 98:131302, 2007.
* [11] Luca Amendola, Radouane Gannouji, David Polarski, and Shinji Tsujikawa. Conditions for the cosmological viability of f(R) dark energy models. Phys. Rev. D, 75:083504, 2007.
* [12] Anthony W. Brookfield, Carsten van de Bruck, and Lisa M.H. Hall. Viability of f(R) Theories with Additional Powers of Curvature. Phys. Rev. D, 74:064028, 2006.
* [13] A. D. Dolgov and Masahiro Kawasaki. Can modified gravity explain accelerated cosmic expansion? Phys. Lett., B573:1–4, 2003.
* [14] Xin-He Meng and Peng Wang. Gravitational potential in Palatini formulation of modified gravity. Gen. Rel. Grav., 36:1947–1954, 2004.
* [15] Thomas P. Sotiriou. The Nearly Newtonian regime in non-linear theories of gravity. Gen. Rel. Grav., 38:1407–1417, 2006.
* [16] Jose Beltrán Jiménez and Tomi S. Koivisto. Modified Gravity with Vector Distortion and Cosmological Applications. Universe, 3(2):47, 2017.
* [17] Ariel Edery and Yu Nakayama. Palatini formulation of pure $R^{2}$ gravity yields Einstein gravity with no massless scalar. Phys. Rev., D99(12):124018, 2019.
* [18] Daniel Coumbe. Asymptotically Weyl-Invariant Gravity. Int. J. Mod. Phys. A, 34(31):31, 2019.
* [19] D. N. Coumbe. Hypothesis on the Nature of Time. Phys. Rev., D91(12):124040, 2015.
* [20] Daniel Coumbe. Quantum gravity without vacuum dispersion. Int. J. Mod. Phys., D26(10):1750119, 2017.
* [21] D. N. Coumbe. Renormalizing Spacetime. Int. J. Mod. Phys., D27(16):1950008, 2018.
* [22] K. S. Stelle. Classical Gravity with Higher Derivatives. Gen. Rel. Grav., 9:353–371, 1978.
* [23] Richard P. Woodard. Avoiding dark energy with 1/r modifications of gravity. Lect. Notes Phys., 720:403–433, 2007.
* [24] H. Weyl. Gravitation and electricity. Sitzungsber. Preuss. Akad. Wiss. Berlin (Math. Phys.), 1918:465, 1918. [,24(1918)].
* [25] R. H. Dicke. Mach’s principle and invariance under transformation of units. Phys. Rev., 125:2163–2167, Mar 1962.
* [26] P.S. Wesson. Astronomy and Astrophysics, Vol.102:45, 1981.
* [27] Anthony Lasenby and Michael Hobson. Scale-invariant gauge theories of gravity: theoretical foundations. J. Math. Phys., 57(9):092505, 2016.
* [28] Stephane Fay, Reza Tavakol, and Shinji Tsujikawa. f(R) gravity theories in Palatini formalism: Cosmological dynamics and observational constraints. Phys. Rev. D, 75:063509, 2007.
* [29] D.N. Spergel et al. Wilkinson Microwave Anisotropy Probe (WMAP) three year results: implications for cosmology. Astrophys. J. Suppl., 170:377, 2007.
* [30] Hikaru Kawai, Yoshihisa Kitazawa, and Masao Ninomiya. Renormalizability of quantum gravity near two-dimensions. Nucl. Phys. B, 467:313–331, 1996.
* [31] Viatcheslav Mukhanov and Sergei Winitzki. Modern Cosmology. Cambridge University Press, 2007.
* [32] Timothy Clifton and John D. Barrow. The Power of General Relativity. Phys. Rev., D72(10):103005, 2005. [Erratum: Phys. Rev.D90,no.2,029902(2014)].
* [33] D.S. Ebert, F.K. Musgrave, D. Peachey, K. Perlin, J.C. Hart, and S. Worley. Texturing & Modeling: A Procedural Approach. Morgan Kaufmann series in computer graphics and geometric modeling. Morgan Kaufmann, 2003.
* [34] Shu-Lei Cao, Song Li, Hao-Ran Yu, and Tong-Jie Zhang. Statefinder diagnostic and constraints on the Palatini f(R) gravity theories. Res. Astron. Astrophys., 18(3):026, 2018.
* [35] Edmund J. Copeland, Andrew R Liddle, and David Wands. Exponential potentials and cosmological scaling solutions. Phys. Rev. D, 57:4686–4690, 1998.
* [36] Guido Magnano and Leszek M. Sokolowski. On physical equivalence between nonlinear gravity theories and a general relativistic selfgravitating scalar field. Phys. Rev., D50:5039–5059, 1994.
* [37] P. W. Higgs. Quadratic lagrangians and general relativity. Nuovo Cim., 11(6):816–820, 1959.
* [38] John D. Barrow and S. Cotsakis. Inflation and the Conformal Structure of Higher Order Gravity Theories. Phys. Lett., B214:515–518, 1988.
* [39] Cecilia Bejarano, Adria Delhom, Alejandro Jiménez-Cano, Gonzalo J. Olmo, and Diego Rubiera-Garcia. Geometric inequivalence of metric and Palatini formulations of General Relativity. 2019.
* [40] Yu. L. Bolotin, V.A. Cherkaskiy, O.A. Lemets, D.A. Yerokhin, and L.G. Zazunov. Cosmology In Terms Of The Deceleration Parameter. Part I. 2015.
* [41] Cosimo Bambi, Alejandro Cardenas-Avendano, Gonzalo J. Olmo, and D. Rubiera-Garcia. Wormholes and nonsingular spacetimes in Palatini $f(R)$ gravity. Phys. Rev. D, 93(6):064016, 2016.
* [42] Carlos Barragan, Gonzalo J. Olmo, and Helios Sanchis-Alepuz. Bouncing Cosmologies in Palatini f(R) Gravity. Phys. Rev. D, 80:024016, 2009.
# On the relation of the COVID-19 reproduction number to the explosive
timescales: the case of Italy
Dimitris G. Patsatzis School of Chemical Engineering, National Technical
University of Athens, 15780 Athens, Greece. <EMAIL_ADDRESS>
###### Abstract
A central issue in the discussion of an infectious disease is its basic reproduction number $R_{0}$, which provides an estimate of the contagiousness of the disease. When $R_{0}>1$, the disease spread can lead to an outbreak, such as the ongoing COVID-19 pandemic. During the evolution of an outbreak, various non-pharmaceutical interventions are employed, the impact of which is frequently assessed by the reduction that they introduce to the effective reproduction number $R_{t}$; a reduction below 1 is an indication that the disease spread will eventually die out. Motivated by the fact that $R_{0}$ essentially expresses the stability of the disease-free equilibrium, in this work $R_{t}$ is examined through timescale analysis. It is shown that during the evolution of the COVID-19 outbreak in Italy, when various interventions were in place, $R_{t}$ had a clear relation to the explosive timescale characterizing the dynamics of the outbreak. In particular, the existence of an explosive timescale during the progression of the epidemic implies $R_{t}>1$, while its absence implies $R_{t}<1$. In addition, as this timescale converges to/diverges from the immediately slowest one, $R_{t}$ approaches/withdraws from its threshold value 1. These results suggest that timescale analysis can be utilized for the assessment of the impact of various interventions, since it reflects the insight provided by the effective reproduction number without being hindered by the selection of the population model or by the parameter estimation process followed for model calibration.
###### keywords:
COVID-19, reproduction number, timescale analysis, population dynamics
## 1 Introduction
As of March 11, 2020, the novel coronavirus disease (COVID-19) was declared a
pandemic by the World Health Organization (WHO) [1]. By January 15, 2021, the COVID-19 pandemic had spread to more than 219 countries and territories, with more than 93 million infected cases and 2 million deaths reported [2].
A central issue in the discussion of the COVID-19 pandemic is its basic reproduction number, estimates of which were provided in numerous early studies; see the references within [3]. The basic reproduction number, $R_{0}$, is the
average number of secondary infections produced by an infectious individual in
a population where everyone is considered susceptible [4, 5]. Being dependent
on human behavior and the biological characteristics of the pathogen, $R_{0}$
provides an estimation of the contagiousness of the infectious disease [4] and
serves as a threshold parameter; when $R_{0}>1$ the infected increase
exponentially, leading to a disease outbreak, while when $R_{0}<1$ the disease
spread dies out [4, 5].
For the control of the COVID-19 outbreak, various interventions are employed
aiming to “flatten” the curve of the epidemic. Since $R_{0}$ is constant in time, it cannot monitor the effect of the undertaken measures; instead, the time-varying _effective_ reproduction number $R_{t}$ is utilized, which estimates the secondary infections produced by an infectious individual during
the course of an outbreak, thus, in a population where not everyone is
considered susceptible. As a result, during the evolution of the epidemics,
the undertaken control measures affect $R_{t}$, since they influence (i) the
duration of contagiousness, (ii) the likelihood of infection per contact and
(iii) the contact rate of the infection [4, 6]. The impact of various
interventions (case isolation, contact tracing, travel restrictions, etc.) on $R_{t}$ has been assessed in a number of studies to provide guidelines for
decision-making policies [7, 8, 9, 10, 11, 12].
The use of $R_{0}$ as a threshold parameter is related to the stability of the
disease-free equilibrium (DFE) of the epidemiological model under
consideration [5, 13, 4], which is locally assessed by the existence or absence of positive eigenvalues. During the evolution of the system, the local dynamics is characterized by timescales of dissipative/explosive nature - associated with negative/positive eigenvalues - the action of which tends to drive the system towards/away from equilibrium [14, 15]. Timescale analysis has been frequently employed to address the dynamical properties of systems arising from reactive flows [16, 17], systems biology [18, 19], pharmacokinetics [20], etc., but, to my knowledge, it has not been widely applied to population
dynamics.
Motivated by the fact that $R_{0}$ mathematically expresses the stability of the DFE, i.e., whether positive eigenvalues exist at day zero, here the relation of $R_{t}$ to the explosive timescales (positive eigenvalues) during the course of the COVID-19 outbreak in Italy was investigated. It is shown that the existence of an explosive timescale implies $R_{t}>1$, while its absence implies $R_{t}<1$. In addition, as this timescale converges to/diverges from the immediately slowest one, $R_{t}$ was shown to approach/withdraw from its threshold value 1. Finally, by performing the analysis on 4 different
population dynamics models, it is demonstrated that timescale analysis is a
robust methodology to monitor the progression of the epidemics, since it
directly reflects the variations in $R_{t}$, without being hindered by the
complexity of the selected model.
## 2 Materials and Methods
Compartmental modeling is widely used for the analysis of various infectious
diseases [21, 22, 23, 24], including the COVID-19 pandemic [8, 11, 25, 26].
Four population dynamics compartmental models in the framework of the SIR
model [27] were analyzed here. The effective reproduction number $R_{t}$ was
calculated on the basis of these models and conclusions were drawn on its
relation to the timescales characterizing the dynamics of each model.
The four compartmental models are presented in Section 2.1, followed by the
parameter estimation process considered for their calibration against the data
of Italy in Section 2.2. The methodology to calculate the effective
reproduction number $R_{t}$ and the timescales $\tau_{i}$ on the basis of each
model is presented in Sections 2.3 and 2.4, respectively.
### 2.1 The population dynamics models
The SIR model formulates the transmission of an infectious disease among three
population groups, namely the susceptible, infected and recovered individuals
[27]. In this framework, four population dynamics models were considered, the
SIRD, SEIRD, SEInsRD and SIDARTHE models, the governing equations of which can
be written in the ODE form:
$\dfrac{d}{dt}\mathbf{y}=\mathbf{g}(\mathbf{y})$ (1)
where $\mathbf{y}$ is the N-dim. column state vector, which includes the
fraction of each population group over the total population and
$\mathbf{g}(\mathbf{y})$ is the N-dim. column vector field, which incorporates
the transition rates from one population group to another.
The simplest compartmental model to capture the COVID-19 pandemic is the SIRD
model, which essentially is the SIR model with the addition of a compartment
accounting for the dead individuals. Denoting $S$, $I$, $R$ and $D$ the
fraction of susceptible, infected, recovered and dead individuals
respectively, over the total population $N$, the SIRD model is written in the
form of Eq. (1) as:
$\dfrac{d}{dt}\begin{bmatrix}S\\ I\\ R\\ D\end{bmatrix}=\begin{bmatrix}-\beta SI\\ \beta SI-(\gamma+\mu)I\\ \gamma I\\ \mu I\end{bmatrix}$ (2)
where $\beta$ is the transmission ratio, $\gamma$ the recovery ratio, which
also expresses the inverse of the infection period of the disease, and $\mu$
the fatality ratio.
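For concreteness, a minimal numerical integration of Eq. (2) is sketched below in Python with scipy; the parameter values and initial fractions are illustrative placeholders, not values calibrated to any data set.

```python
import numpy as np
from scipy.integrate import solve_ivp

# SIRD model of Eq. (2); parameter values are illustrative placeholders.
beta, gamma, mu = 0.30, 0.10, 0.01  # transmission, recovery, fatality ratios

def sird(t, y):
    S, I, R, D = y
    return [-beta * S * I,
            beta * S * I - (gamma + mu) * I,
            gamma * I,
            mu * I]

y0 = [0.999, 0.001, 0.0, 0.0]  # fractions of the total population
sol = solve_ivp(sird, (0.0, 210.0), y0, max_step=1.0)

print(f"peak infected fraction: {sol.y[1].max():.3f}")
print(f"final dead fraction:    {sol.y[3, -1]:.4f}")
```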
A more realistic assumption for COVID-19 infection is the existence of an
incubation (latency) period, during which an individual is infected but not yet infectious [28, 29]. Such an assumption can be incorporated in the SIRD
model with the addition of a compartment accounting for exposed individuals.
Denoting $E$ their fraction over the total population, the resulting SEIRD
model is written in the form of Eq. (1) as:
$\dfrac{d}{dt}\begin{bmatrix}S\\ E\\ I\\ R\\ D\end{bmatrix}=\begin{bmatrix}-\beta SI\\ \beta SI-\sigma E\\ \sigma E-(\gamma+\mu)I\\ \gamma I\\ \mu I\end{bmatrix}$ (3)
where $\sigma$ is the transition ratio from exposed to infected individuals,
expressing the inverse of the incubation period of the disease.
In addition, it has been shown that the COVID-19 infected individuals have
symptoms of different severity, varying from mild to severe [30]. Since the
severely infected individuals are in need of immediate health care, a more
biologically realistic assumption for COVID-19 infection is the distinction
between normally infected and severely infected individuals. Such an
assumption can be incorporated in the SEIRD model, by dividing the infected
compartment into two sub-compartments. Denoting $IN$ and $IS$ the fraction of
normally and severely infected individuals over the total population, the
resulting SEInsRD model in the form of Eq. (1) reads:
$\dfrac{d}{dt}\begin{bmatrix}S\\ E\\ IN\\ IS\\ R\\ D\end{bmatrix}=\begin{bmatrix}-\beta_{N}S\,IN-\beta_{S}S\,IS-\mu_{TP}S\\ \beta_{N}S\,IN+\beta_{S}S\,IS-\sigma E-\mu_{TP}E\\ (1-ss)\sigma E-\gamma IN-\mu_{N}IN\\ ss\,\sigma E-\gamma IS-\mu_{S}IS\\ \gamma(IN+IS)-\mu_{TP}R\\ \mu_{N}IN+\mu_{S}IS\end{bmatrix}$ (4)
where the subscripts $N$ and $S$ indicate the transmission ratios $\beta$ and fatality ratios $\mu$ of the normally and severely infected individuals, $ss$ denotes the fraction of new infections that are severe and $\mu_{TP}$ is the physiological death ratio.
Finally, a more detailed compartmental model was considered, accounting for
susceptible ($S$), asymptomatic undetected and detected infected ($I$ and $D$), symptomatic undetected and detected infected ($A$ and $R$), severely
symptomatic ($T$), healed ($H$) and extinct ($E$) individuals, namely the
SIDARTHE model [25]. Here, the SIDARTHE model was considered for validation
purposes and thus, only a brief description of the model is provided in A;
details can be found in [25]. Note that the SIDARTHE model is also written in
the form of Eq. (1); see Eq. (13) and Eqs. (1-8) in Methods section in [25].
### 2.2 Model calibration
In this study, only the SEIRD and SEInsRD models in Eqs. (3, 4), respectively, were calibrated, since (i) the relation of $R_{t}$ to the timescales can be derived analytically on the basis of the SIRD model in Eq. (2) and (ii) the
parameter values of the SIDARTHE model are provided in [25].
The SEIRD and SEInsRD models in Eqs. (3, 4) were calibrated to the daily reported data of infected, recovered and dead individuals in Italy, as reported by the Johns Hopkins database [31]. The parameter estimation process was performed on a weekly basis, accounting for the data from February 26 (week 0) to September 30 (week 30). February 26 was selected as the starting day in order to minimize early data distortion, since more than 400 infected individuals were reported on that date. Initially, given the reported fractions of the infected, recovered and dead population groups at week 0 - and the susceptible one, through conservation of the total population - the fractions of the exposed, normally infected and severely infected population groups were estimated. In the following, a parameter estimation process was performed on a weekly basis, given these 3 reported data sets, through a genetic algorithm provided by the open-source COPASI software [32, 33]. The initial conditions at day 0 of each week were the predicted values at day 7 of the previous week, in order to preserve continuity in the solution. The resulting parameter sets are depicted for the SEIRD and SEInsRD models in Fig. 1 of B.
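A schematic sketch of this weekly-window calibration loop is given below in Python; scipy's least-squares optimizer stands in for the COPASI genetic algorithm (our substitution), and the array `data_week` of daily reported (I, R, D) fractions is a placeholder, so this outlines the procedure rather than reproducing it.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

# SEIRD right-hand side, Eq. (3).
def seird(t, y, beta, sigma, gamma, mu):
    S, E, I, R, D = y
    return [-beta * S * I,
            beta * S * I - sigma * E,
            sigma * E - (gamma + mu) * I,
            gamma * I,
            mu * I]

def fit_week(y0, data_week, p0):
    """Fit (beta, sigma, gamma, mu) to one week of daily (I, R, D) data.

    data_week: array of shape (3, 7) with the reported daily fractions.
    Returns the fitted parameters and the day-7 state, which seeds day 0
    of the next week so that the solution remains continuous.
    """
    days = np.arange(1.0, 8.0)

    def residuals(p):
        sol = solve_ivp(seird, (0.0, 7.0), y0, t_eval=days, args=tuple(p))
        return (sol.y[2:5] - data_week).ravel()  # rows I, R, D

    fit = least_squares(residuals, p0, bounds=(0.0, np.inf))
    y_end = solve_ivp(seird, (0.0, 7.0), y0, args=tuple(fit.x)).y[:, -1]
    return fit.x, y_end
```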
Figure 1: The SEIRD model (left), its fitting against the reported data for Italy in circles (middle) and the profiles of all the population groups (right). The parameter estimation process was performed on a weekly basis from the 26th of February (week 0) to the 30th of September (week 30), accounting for the reported data of infected, recovered and dead individuals.
Figure 2: The SEInsRD model (left), its fitting against the reported data for Italy in circles (middle) and the profiles of all the population groups (right). The parameter estimation process was performed on a weekly basis from the 26th of February (week 0) to the 30th of September (week 30), accounting for the reported data of infected, recovered and dead individuals.
A schematic representation of the SEIRD and SEInsRD models is provided in the
left panels of Figs. 1 and 2, respectively. The profiles of infected,
recovered and dead individuals, resulted from the aforementioned parameter
estimation process, are in very good agreement with the reported data, as
shown in the middle panels of Figs. 1 and 2 for the SEIRD and SEInsRD models,
respectively. Note that in the case of SEInsRD model, the sum of the normally
and severely infected individuals $I=IN+IS$ was fitted against the reported
data set of the infected individuals. The very good agreement of the calibrated models with the reported data is also demonstrated by the $R^{2}$ values of both fittings shown in Table 1 of B, combined with the respective p-values, which in all cases satisfy $p\ll 0.05$. Finally, the profiles of all the population groups are displayed in the right panels of Figs. 1 and 2, which show great agreement of the population profiles between the two models.
Finally, the SIDARTHE model parameters were directly adopted from [25], which followed a slightly different approach than the one used here for the SEIRD and SEInsRD models. First, due to the availability of data, here the SEIRD and SEInsRD models were calibrated for Italy from February 26 to September 30, while the SIDARTHE model was calibrated for Italy from February 20 to April 5. Second, here the SEIRD and SEInsRD models were calibrated in a constant, 7-day-long time frame, while the SIDARTHE model was calibrated in varying time frames, depending on the interventions undertaken in Italy [25]. Last but not least, the reported data of infected, recovered and dead individuals were considered in our analysis, while in [25] only those of infected and recovered individuals were used (the latter not fitted to the healed compartment of the model).
### 2.3 Estimation of the reproduction number
The basic reproduction number, $R_{0}$, is a constant biological parameter
that provides an estimation of the contagiousness of the infectious disease.
It also serves as a threshold parameter; when $R_{0}>1$, one infected
individual can trigger an outbreak, while when $R_{0}<1$, the infection will
not spread in the population [5, 4].
When various non-pharmaceutical interventions (NPIs) are in place, the
effective reproduction number $R_{t}$ is utilized, instead of $R_{0}$, to
monitor the reproduction number during the evolution of the outbreak. $R_{t}$
provides an estimation of the contagiousness of the infectious disease, during
the course of an outbreak, where not every individual is considered
susceptible. Considering that all model parameters are time-dependent, we estimated $R_{t}$ for the COVID-19 pandemic in Italy using the Next Generation Matrix (NGM) approach [34, 35, 13], which yields the following expressions for the SIRD, SEIRD, SEInsRD and SIDARTHE models:
$R^{SIRD}_{t}=\dfrac{\beta}{\gamma+\mu}=R^{SEIRD}_{t},\qquad R^{SEInsRD}_{t}=\dfrac{\sigma}{\sigma+\mu_{TP}}\left(\dfrac{(1-ss)\beta_{N}}{\gamma+\mu_{N}}+\dfrac{ss\,\beta_{S}}{\gamma+\mu_{S}}\right),\qquad R^{SIDARTHE}_{t}=\dfrac{\alpha}{r_{1}}+\dfrac{\beta\epsilon}{r_{1}r_{2}}+\dfrac{\gamma\zeta}{r_{1}r_{3}}+\dfrac{\delta\theta\zeta}{r_{1}r_{3}r_{4}}+\dfrac{\delta\epsilon\eta}{r_{1}r_{2}r_{4}}$ (5)
where $\alpha,\beta,\gamma,\delta,\epsilon,\zeta,\eta,\theta$ are model
parameters of the SIDARTHE model and $r_{1}=\epsilon+\lambda+\zeta$,
$r_{2}=\eta+\rho$, $r_{3}=\kappa+\mu+\theta$ and $r_{4}=\nu+\xi$. Note that the expression of $R_{t}$ for the SIDARTHE model estimated here via the NGM approach is the same as the one derived in [25].
A brief discussion on NGM approach is provided in A, along with details on the
calculation of $R_{t}$ on the basis of the four population dynamics models.
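As a small worked example of the NGM computation, the sympy sketch below reproduces $R^{SEIRD}_{t}=\beta/(\gamma+\mu)$ of Eq. (5) from the infected subsystem $(E,I)$ of Eq. (3), following the standard decomposition into a new-infection matrix $F$ and a transition matrix $V$ [13].

```python
import sympy as sp

beta, sigma, gamma, mu, S0 = sp.symbols("beta sigma gamma mu S0", positive=True)

# Infected subsystem (E, I) of the SEIRD model, Eq. (3), linearized about
# the disease-free equilibrium with susceptible fraction S0.
F = sp.Matrix([[0, beta * S0],            # new infections enter E
               [0, 0]])
V = sp.Matrix([[sigma, 0],                # outflow from E
               [-sigma, gamma + mu]])     # inflow to I, outflow from I

# Next generation matrix K = F V^{-1}; R_t is its spectral radius.
K = sp.simplify(F * V.inv())
print(list(K.eigenvals()))  # [0, S0*beta/(gamma + mu)]
# With S0 = 1 this reproduces R_t^SEIRD = beta/(gamma + mu) of Eq. (5).
```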
### 2.4 Calculation of the time scales
Given a system of ODEs in the matrix form of Eq. (1), the timescales are
calculated as the inverse modulus of the eigenvalues of the N$\times$N
Jacobian matrix
$\mathbf{J}(\mathbf{y})=\nabla_{\mathbf{y}}\left(\mathbf{g}(\mathbf{y})\right)$
[14, 15]. The timescales are of dissipative/explosive nature, i.e., the
components of the system that generate them tend to drive the system towards/away from equilibrium, when the respective eigenvalue has negative/positive real part.
When a complex mathematical model in the form of Eq. (1) is encountered, it is
usually impossible to calculate analytic expressions for its eigenvalues and
thus its timescales. This is the case of the SEIRD, SEInsRD and SIDARTHE
models, for which the timescales were calculated numerically. However, in the
case of the SIRD model, the non-zero eigenvalues can be calculated
analytically as:
$\lambda_{1,2}=\dfrac{1}{2}\left(X\pm\sqrt{X^{2}-4Y}\right),\qquad X=-\gamma-\beta I-\mu+\beta S,\qquad Y=\beta I(\gamma+\mu)$ (6)
Therefore, the related timescales are of explosive nature (for either real or complex $\lambda_{1,2}$) if and only if:
$X>0\Rightarrow\beta(S-I)>\gamma+\mu\Rightarrow\dfrac{\beta(S-I)}{\gamma+\mu}>1$
(7)
Equation (7) provides the condition under which the explosive timescales of
the SIRD model arise, a feature that will be associated with $R_{t}$ in the following section.
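The sketch below evaluates these quantities numerically for an illustrative SIRD state (the parameter values are placeholders): it computes the Jacobian eigenvalues of the $(S,I)$ subsystem, the corresponding timescales $\tau_{i}=1/|\lambda_{i}|$, and the condition of Eq. (7), cross-checking against the closed form of Eq. (6).

```python
import numpy as np

# Illustrative placeholder parameters and state for the SIRD model.
beta, gamma, mu = 0.30, 0.10, 0.01
S, I = 0.99, 0.005

# Jacobian of the (S, I) subsystem of Eq. (2); the R and D equations
# decouple and contribute the two zero eigenvalues of the full system.
J = np.array([[-beta * I, -beta * S],
              [ beta * I,  beta * S - (gamma + mu)]])

lam = np.linalg.eigvals(J)
tau = 1.0 / np.abs(lam)  # timescales tau_i = 1/|lambda_i|
print("eigenvalues:", lam)
print("timescales:", tau, "explosive:", lam.real > 0)

# Cross-check with the closed form of Eq. (6); X > 0 is condition (7),
# which implies R_t > 1 via Eq. (8).
X = beta * S - beta * I - gamma - mu
Y = beta * I * (gamma + mu)
print("closed-form:", 0.5 * (X + np.sqrt(X**2 - 4*Y)), 0.5 * (X - np.sqrt(X**2 - 4*Y)))
print("condition (7) holds:", X > 0)
```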
## 3 Results
The impact of the undertaken NPIs on the COVID-19 pandemic is assessed by the effect that they introduce to the reproduction number [7, 8, 9, 10, 11, 12].
Here, we show that the insights provided by the utilization of the effective
reproduction number $R_{t}$ during the progression of the COVID-19 pandemics
can be deduced by timescale analysis. In particular, it is shown that:
1. i)
the existence of an explosive timescale during the progression of COVID-19
epidemics implies $R_{t}>1$, while its absence implies $R_{t}<1$, and
2. ii)
the tendency of this timescale to converge to/diverge from the immediately slowest one implies that $R_{t}$ tends to approach/withdraw from its threshold value 1.
These results are reached on the basis of the four population dynamics models
discussed in Section 2.1, for the case of Italy.
### 3.1 The explosive timescales in relation to the reproduction number
The first indication on the relation of the explosive timescales to the
reproduction number is provided by the analysis of the SIRD model in Eq. (2),
that is the simplest model to describe the progression of COVID-19 epidemics.
In contrast to more complicated models, the timescales of the SIRD model can
be calculated analytically. According to Section 2.4, the evolution of the
SIRD model is characterized by the action of two timescales
$\tau_{1,2}=1/|\lambda_{1,2}|$; the expressions of $\lambda_{1,2}$ were
derived in Eq. (6). Both $\tau_{1,2}$ are of explosive/dissipative nature when
the condition in Eq. (7) holds/is violated. Given the expression of $R_{t}$
for SIRD model in Eq. (5) and that $S-I<S(0)=1$, Eq. (7) yields:
$Re(\lambda_{1,2})>0\Leftrightarrow
X>0\Leftrightarrow\dfrac{\beta(S-I)}{\gamma+\mu}>1\Rightarrow\dfrac{\beta
S(0)}{\gamma+\mu}>1\Rightarrow R_{t}>1$ (8)
Equation (8) shows that the existence of explosive timescales implies
$R_{t}>1$, while their absence implies $R_{t}<1$. Note that this outcome holds true not only for the COVID-19 pandemic, but for any infectious disease, since it was derived by analytical means on the basis of the SIRD model.
Next, the relation of the explosive timescales with $R_{t}$ was examined using
reported data for COVID-19 pandemics in Italy. The SEIRD model in Eq. (3) was
adopted and fitted against the reported data sets of infected, recovered and
dead individuals in Italy from February 26 to September 30. In order to
account for the NPIs undertaken, the SEIRD model was calibrated on a weekly basis following the parameter estimation process described in detail in
Section 2.2. The resulting solution is in great agreement with the reported
data, as shown in Fig. 1.
Figure 3: The timescales (left) and the effective reproduction number $R_{t}$
(right) estimated on the basis of the solution of the SEIRD model shown in
Fig. 1.
The timescales and $R_{t}$, estimated on the basis of the SEIRD model in Eq.
(5), are displayed in Fig. 3 from week 0 (starting on Feb. 26) to week 30 (ending on Sep. 30). As shown in the left panel of Fig. 3, the evolution of
the SEIRD model is characterized by three timescales $\tau_{1,2,3}$, the
fastest of which, $\tau_{1}$, is always dissipative in nature, while
$\tau_{2,3}$ are either dissipative or explosive. In particular, during weeks
0-6 and 20-26, $\tau_{2,3}$ are explosive, as indicated by the shaded
background in Fig. 3. The values of $R_{t}$ are depicted in the right panel of
Fig. 3, in which the red dashed horizontal line indicates the threshold value
$R_{t}=1$. As indicated by the shaded background, the time periods in which the timescales $\tau_{2,3}$ are of explosive nature coincide with the ones in which $R_{t}>1$ (weeks 0-6 and 20-26). In contrast, when $\tau_{2,3}$ are of dissipative nature, $R_{t}<1$ (weeks 7-19, 27-30). Note that the transition from the explosive to the dissipative nature of the timescales $\tau_{2,3}$, and vice versa, is abrupt, since model calibration is performed on a weekly basis.
Comparison of the explosive timescales and $R_{t}$ in Fig. 3 reveals the
following trend: as the gap between $\tau_{2}$ and $\tau_{3}$
decreases/increases, $R_{t}$ approaches/withdraws from its threshold value unity. This is particularly clear during the first wave of the COVID-19 pandemic
in Italy (weeks 0-12). During weeks 0-6, where $\tau_{2}$ and $\tau_{3}$ are
explosive, their gap tends to decrease, so that $R_{t}$ decreases, approaching values close to unity. At week 7, $\tau_{2}$ and $\tau_{3}$ become dissipative
and $R_{t}$ attains values below 1. From this point on and up to week 12, the
gap of $\tau_{2}$ and $\tau_{3}$ increases, so that $R_{t}$ continues to
decrease, this time withdrawing from its threshold 1. This behaviour is
additionally supported by the fact that during weeks 4-8, 17, 19, 21 and 30, in which $R_{t}$ attains values close to 1, the gap between $\tau_{2}$ and $\tau_{3}$ is small, to the point where $\tau_{2}=\tau_{3}$ in week 19, in which $R_{t}=0.96$.
### 3.2 Robustness
In order to demonstrate the robustness of the relation of the explosive
timescales to the reproduction number, a more complicated population dynamics
model was considered, the SEInsRD model. The SEInsRD model in Eq. (4) was
adopted and calibrated to the same reported data sets with SEIRD model in
Section 3.1, corresponding to infected, recovered and dead individuals in
Italy from February 26 to September 30. Similarly to the SEIRD model, the SEInsRD model calibration was performed on a weekly basis following the process
described in Section 2.2 and the resulting solution is in great agreement with
the reported data, as shown in Fig. 2.
Figure 4: The timescales (left) and the reproduction number $R_{t}$ (right)
calculated on the basis of the solution of the SEInsRD model shown in Fig. 2.
The timescales and $R_{t}$, estimated on the basis of the SEInsRD model in Eq.
(5), are displayed in Fig. 4 from week 0 (starting on Feb. 26) to week 30 (ending on Sep. 30). As shown in the left panel of Fig. 4, the evolution of the SEInsRD model is characterized by 5 timescales, three of which are always of dissipative nature, while the remaining two are either explosive or dissipative; we denote by $\tau_{exp,f}$ the fast explosive timescale and by $\tau_{exp,s}$ the slow one. In particular, during weeks 0-6 and 20-26, $\tau_{exp,f}$ and
$\tau_{exp,s}$ are explosive, as indicated by the shaded background in Fig. 4.
The right panel of Fig. 4 displays the values of $R_{t}$ in comparison to the
threshold value $R_{t}=1$ indicated by the red dashed horizontal line.
Similarly to the SEIRD model, it is shown by the shaded background that
$R_{t}>1$ when $\tau_{exp,f}$ and $\tau_{exp,s}$ are explosive (weeks 0-6 and 20-26), while $R_{t}<1$ when they lose this character and become dissipative (weeks 7-19 and 27-30). In addition, the trend of a decreasing/increasing gap between $\tau_{exp,f}$ and $\tau_{exp,s}$ is again reflected in $R_{t}$ approaching/withdrawing from its threshold value 1. In
particular, it is shown that the closer the values of $R_{t}$ are to 1 (weeks
4-8, 17, 19, 21 and 30), the smaller the gap between $\tau_{exp,f}$ and
$\tau_{exp,s}$; to the point where $R_{t}=0.97$ in week 30, in which
$\tau_{exp,f}\approx\tau_{exp,s}$. In summary, the qualitative results on the
relation of the explosive timescales to $R_{t}$ are maintained on the basis of
the SEInsRD model.
### 3.3 Validation
In order to validate the qualitative results, reached on the basis of the
SEIRD and SEInsRD models, regarding the relation of the explosive timescales to $R_{t}$, the more complicated SIDARTHE model was considered [25],
as briefly discussed in Section 2.1.
Figure 5: The timescales and the reproduction number $R_{t}$ calculated on the basis of the SIDARTHE model, which was calibrated for Italy data in [25].
The profiles of the population groups accounted for in the SIDARTHE model were
reproduced, adopting the model parameters in [25]. On the basis of the
SIDARTHE solution, the timescales were calculated and $R_{t}$ was estimated
according to the expression in Eq. (5). The resulting values are displayed in
Fig. 5 starting from day -6 (Feb 20) and ending on day 40 (Apr 5); day 0 was
chosen to be Feb 26 for comparison with Figs. 3 and 4. As shown in the left
panel of Fig. 5, the evolution of the SIDARTHE model is characterized by six timescales, four of which are always dissipative in nature, while the remaining two are either dissipative or explosive; these are denoted $\tau_{exp,f}$
and $\tau_{exp,s}$. In particular, $\tau_{exp,f}$ and $\tau_{exp,s}$ are
explosive from day -6 to day 22, as indicated by the shaded background in Fig.
5. The values of $R_{t}$ are depicted in the right panel of Fig. 5, in which
the red dashed horizontal line indicates the threshold value $R_{t}=1$. As
indicated by the shaded background, the explosive nature of the timescales $\tau_{exp,f}$ and $\tau_{exp,s}$ implies $R_{t}>1$ (days -6 to 22), while their losing this nature and becoming dissipative implies $R_{t}<1$ (days 23-40). In
addition, it is shown that as the gap between $\tau_{exp,f}$ and
$\tau_{exp,s}$ becomes smaller, $R_{t}$ approaches its threshold value unity,
to the point when $R_{t}=0.986$ during days 23-33, in which
$\tau_{exp,f}=\tau_{exp,s}$.
It should be noted here that the evolution of the timescales $\tau_{exp,f}$ and $\tau_{exp,s}$ and of the reproduction number $R_{t}$, calculated on the basis of the SIDARTHE model, differs from that calculated on the basis of the SEIRD and SEInsRD models. In particular, on the basis of the SIDARTHE model, the timescales are explosive in nature and $R_{t}>1$ up to day 22, while on the basis of the SEIRD and SEInsRD models this holds up to day 42. Despite this being a major difference, which originates from differences in model calibration as
discussed in Section 2.2, the relation of the explosive timescales to $R_{t}$,
deduced on the basis of SEIRD and SEInsRD models, is validated by the analysis
with the SIDARTHE model.
## 4 Conclusions
The progression of an infectious disease spread like the COVID-19 pandemic is frequently examined by population dynamics models [21, 22, 23, 24, 8, 11, 25, 26]. Their evolution as dynamical systems is characterized by timescales that are either of dissipative or explosive nature; i.e., their action tends to drive the system either towards or away from equilibrium [14, 15]. The basic reproduction number $R_{0}$ as a threshold parameter provides such an intuition, in the sense that when $R_{0}<1$ the system is driven towards its DFE, so that the infection does not spread in the population, while when $R_{0}>1$ the system is driven away from its DFE, so that the disease spreads exponentially [4, 5]. In the case of an outbreak, such as the COVID-19 pandemic, for which early predictions showed $R_{0}\approx 2-3$ [3], various NPIs are employed during the evolution of the outbreak, aiming to “flatten” the curve
of the epidemic. The influence of the NPIs is frequently assessed by the reduction that they introduce to the effective reproduction number $R_{t}$ [7, 8, 9, 10, 11, 12], ideally making $R_{t}<1$, which indicates that the
disease spread will eventually die out. In this work, the relation of the
effective reproduction number $R_{t}$ with the timescales characterizing the
evolution of the epidemic spread was examined in the case of the COVID-19 pandemic in Italy from February 26 to September 30.
In particular, it was demonstrated analytically on the basis of the SIRD model
and numerically on the basis of the SEIRD model in Section 3.1, that when two
of the timescales characterizing the evolution of the epidemic spread are of
explosive nature, the effective reproduction number is above its threshold
value; i.e., $R_{t}>1$. On the contrary, when all the timescales are of
dissipative nature it is implied that $R_{t}<1$. In addition, the following
trending behaviour was revealed: as the gap between the two explosive
timescales decreases/increases, $R_{t}$ approaches/withdraws from its threshold value 1, as shown in Fig. 3. These outcomes suggest that the
insights provided by the utilization of $R_{t}$ as a threshold parameter can
be also obtained by timescale analysis.
This work additionally suggests that timescale analysis is a robust
methodology to assess the progression of the epidemic spread, since it is not
hindered by the complexity of the selected model or by the calibration process followed to fit the model against the reported data. Following the same model calibration procedure for the SEInsRD model resulted in timescales that are almost equal to the ones of the SEIRD model; see Figs. 3 and 4. Such a result
indicates that the relation of the explosive timescales to $R_{t}$ is not
affected by model selection, as discussed in Section 3.2. In addition, this
relation is not affected by the parameter estimation process either, as
demonstrated through the analysis of the SIDARTHE model, the calibration of
which in [25] had significant differences from the one followed here for the SEIRD and SEInsRD models; see Section 2.2.
In conclusion, timescale analysis is a rigorous mathematical methodology to
assess the progression of an epidemic spread, since it can effectively provide
the insight obtained by the reproduction number. Timescale analysis is not
hindered by model selection, in contrast to the reproduction number, which is highly dependent on the structure of the selected model [4]. In addition, the expression of the reproduction number becomes more complex as the detail of the model increases, as shown in Eq. (5); compare, for example, $R_{t}$ for the SEIRD and SIDARTHE models. In contrast, timescale analysis can be performed in an
algorithmic fashion, utilizing the diagnostic tools of Computational Singular
Perturbation [14, 36] that have been effectively employed to address the
dynamical properties of systems arising from a wide variety of fields [16, 17,
18, 19, 20]. More importantly, the use of timescale analysis for the
assessment of various NPIs is promising, since it can determine via its
algorithmic tools the factors that play the most significant role in the control of the ongoing COVID-19 outbreak.
## 5 Acknowledgements
This publication is based upon work supported by the Khalifa University of
Science and Technology, under Award No. CPRA-2020-Goussis.
## References
* [1] World Health Organization. WHO Director‐General’s opening remarks at the media briefing on COVID‐19, March 11, 2020, https://www.who.int/dg/speeches/detail/who‐director‐general‐s‐opening‐remarks‐at‐the‐media‐briefing‐on‐covid‐19—11‐march‐2020, accessed: 2020-03-11.
* [2] Worldometer. COVID-19 Coronavirus Pandemic, https://www.worldometers.info/coronavirus/, accessed: 2021-1-15.
* [3] Y. Liu, A. A. Gayle, A. Wilder-Smith, J. Rocklöv, The reproductive number of covid-19 is higher compared to sars coronavirus, Journal of travel medicine.
* [4] P. L. Delamater, E. J. Street, T. F. Leslie, Y. T. Yang, K. H. Jacobsen, Complexity of the basic reproduction number (r0), Emerging infectious diseases 25 (1) (2019) 1.
* [5] O. Diekmann, J. A. P. Heesterbeek, J. A. Metz, On the definition and the computation of the basic reproduction ratio r 0 in models for infectious diseases in heterogeneous populations, Journal of mathematical biology 28 (4) (1990) 365–382.
* [6] G. Viceconte, N. Petrosillo, Covid-19 r0: Magic number or conundrum?, Infectious disease reports 12 (1).
* [7] J. Hellewell, S. Abbott, A. Gimma, N. I. Bosse, C. I. Jarvis, T. W. Russell, J. D. Munday, A. J. Kucharski, W. J. Edmunds, F. Sun, et al., Feasibility of controlling covid-19 outbreaks by isolation of cases and contacts, The Lancet Global Health.
* [8] A. J. Kucharski, T. W. Russell, C. Diamond, Y. Liu, J. Edmunds, S. Funk, R. M. Eggo, F. Sun, M. Jit, J. D. Munday, et al., Early dynamics of transmission and control of covid-19: a mathematical modelling study, The lancet infectious diseases.
* [9] A. Pan, L. Liu, C. Wang, H. Guo, X. Hao, Q. Wang, J. Huang, N. He, H. Yu, X. Lin, et al., Association of public health interventions with the epidemiology of the covid-19 outbreak in wuhan, china, Jama 323 (19) (2020) 1915–1923.
* [10] B. J. Cowling, S. T. Ali, T. W. Ng, T. K. Tsang, J. C. Li, M. W. Fong, Q. Liao, M. Y. Kwan, S. L. Lee, S. S. Chiu, et al., Impact assessment of non-pharmaceutical interventions against coronavirus disease 2019 and influenza in hong kong: an observational study, The Lancet Public Health.
* [11] B. Tang, X. Wang, Q. Li, N. L. Bragazzi, S. Tang, Y. Xiao, J. Wu, Estimation of the transmission risk of the 2019-ncov and its implication for public health interventions, Journal of clinical medicine 9 (2) (2020) 462.
* [12] B. Tang, N. L. Bragazzi, Q. Li, S. Tang, Y. Xiao, J. Wu, An updated estimation of the risk of transmission of the novel coronavirus (2019-ncov), Infectious disease modelling 5 (2020) 248–255.
* [13] P. Van den Driessche, J. Watmough, Reproduction numbers and sub-threshold endemic equilibria for compartmental models of disease transmission, Mathematical biosciences 180 (1-2) (2002) 29–48.
* [14] S. Lam, D. Goussis, Understanding complex chemical kinetics with computational singular perturbation, in: Symposium (International) on Combustion, Vol. 22, Elsevier, 1989, pp. 931–941.
* [15] U. Maas, S. B. Pope, Simplifying chemical kinetics: intrinsic low-dimensional manifolds in composition space, Combustion and flame 88 (3-4) (1992) 239–264.
* [16] D. M. Manias, E. A. Tingas, C. E. Frouzakis, K. Boulouchos, D. A. Goussis, The mechanism by which ch2o and h2o2 additives affect the autoignition of ch4/air mixtures, Combustion and Flame 164 (2016) 111–125.
* [17] E. A. Tingas, D. C. Kyritsis, D. A. Goussis, Autoignition dynamics of dme/air and etoh/air homogeneous mixtures, Combustion and Flame 162 (9) (2015) 3263–3276.
* [18] P. D. Kourdis, R. Steuer, D. A. Goussis, Physical understanding of complex multiscale biochemical models via algorithmic simplification: Glycolysis in saccharomyces cerevisiae, Physica D: Nonlinear Phenomena 239 (18) (2010) 1798–1817.
* [19] D. G. Patsatzis, D. A. Goussis, A new michaelis-menten equation valid everywhere multi-scale dynamics prevails, Mathematical biosciences 315 (2019) 108220.
* [20] D. G. Patsatzis, D. T. Maris, D. A. Goussis, Asymptotic analysis of a target-mediated drug disposition model: algorithmic and traditional approaches, Bulletin of mathematical biology 78 (6) (2016) 1121–1161.
* [21] S. Gao, L. Chen, Z. Teng, Pulse vaccination of an seir epidemic model with time delay, Nonlinear Analysis: Real World Applications 9 (2) (2008) 599–607.
* [22] L. Canini, F. Carrat, Population modeling of influenza a/h1n1 virus kinetics and symptom dynamics, Journal of virology 85 (6) (2011) 2764–2770.
* [23] L. Esteva, C. Vargas, Coexistence of different serotypes of dengue virus, Journal of mathematical biology 46 (1) (2003) 31–47.
* [24] A. Stegeman, A. R. Elbers, J. Smak, M. C. de Jong, Quantification of the transmission of classical swine fever virus between herds during the 1997–1998 epidemic in the netherlands, Preventive veterinary medicine 42 (3-4) (1999) 219–234.
* [25] G. Giordano, F. Blanchini, R. Bruno, P. Colaneri, A. Di Filippo, A. Di Matteo, M. Colaneri, Modelling the covid-19 epidemic and implementation of population-wide interventions in italy, Nature Medicine (2020) 1–6.
* [26] L. Russo, C. Anastassopoulou, A. Tsakris, G. N. Bifulco, E. F. Campana, G. Toraldo, C. Siettos, Tracing day-zero and forecasting the fade out of the covid-19 outbreak in lombardy, italy: A compartmental modelling and numerical optimization approach, medRxiv.
* [27] W. O. Kermack, A. G. McKendrick, A contribution to the mathematical theory of epidemics, Proceedings of the royal society of london. Series A, Containing papers of a mathematical and physical character 115 (772) (1927) 700–721.
* [28] S. A. Lauer, K. H. Grantz, Q. Bi, F. K. Jones, Q. Zheng, H. R. Meredith, A. S. Azman, N. G. Reich, J. Lessler, The incubation period of coronavirus disease 2019 (covid-19) from publicly reported confirmed cases: estimation and application, Annals of internal medicine 172 (9) (2020) 577–582.
* [29] Q. Li, X. Guan, P. Wu, X. Wang, L. Zhou, Y. Tong, R. Ren, K. S. Leung, E. H. Lau, J. Y. Wong, et al., Early transmission dynamics in wuhan, china, of novel coronavirus–infected pneumonia, New England Journal of Medicine.
* [30] V. Surveillances, The epidemiological characteristics of an outbreak of 2019 novel coronavirus diseases (covid-19)—china, 2020, China CDC Weekly 2 (8) (2020) 113–122.
* [31] COVID-19 Data Repository by the Center for Systems Science and Engineering (CSSE) at Johns Hopkins University, https://github.com/CSSEGISandData/COVID-19, accessed: 2020-06-26.
* [32] T. P. Runarsson, X. Yao, Stochastic ranking for constrained evolutionary optimization, IEEE Transactions on evolutionary computation 4 (3) (2000) 284–294.
* [33] S. Hoops, S. Sahle, R. Gauges, C. Lee, J. Pahle, N. Simus, M. Singhal, L. Xu, P. Mendes, U. Kummer, Copasi—a complex pathway simulator, Bioinformatics 22 (24) (2006) 3067–3074.
* [34] J. M. Heffernan, R. J. Smith, L. M. Wahl, Perspectives on the basic reproductive ratio, Journal of the Royal Society Interface 2 (4) (2005) 281–293.
* [35] P. van den Driessche, Reproduction numbers of infectious disease models, Infectious Disease Modelling 2 (3) (2017) 288–303.
* [36] S. Lam, D. Goussis, The csp method for simplifying kinetics, International journal of chemical kinetics 26 (4) (1994) 461–486.
## Appendix A Derivation of the effective reproduction number
The Next Generation Matrix (NGM) approach is utilized for the calculation of
the basic reproduction number $R_{0}$ [34, 35, 13]. Given a system of ODEs in
the form of Eq. (1), let $y_{j}$ be the $j=1,\ldots,m$ infected population
groups among all the $y_{i}$ population groups of the $i=1,\ldots,n$
compartments in $\mathbf{y}$. In turn, let $F_{i}(\mathbf{y})$ be the rate of
appearance of new infections in the $i$-th compartment and
$V_{i}(\mathbf{y})=V_{i}^{-}(\mathbf{y})-V_{i}^{+}(\mathbf{y})$ the transition
rates out of ($V^{-}$) and into ($V^{+}$) the $i$-th compartment. By
definition, it is implied that:
$\dfrac{dy_{i}}{dt}=F_{i}(\mathbf{y})-V_{i}(\mathbf{y})=F_{i}(\mathbf{y})+V^{+}_{i}(\mathbf{y})-V^{-}_{i}(\mathbf{y})$
(1)
Let the matrices $\mathbf{F}$ and $\mathbf{V}$ be:
$\mathbf{F}=\left[\dfrac{\partial F_{i}(\mathbf{y^{*}})}{\partial
y_{j}}\right]\qquad\text{and}\qquad\mathbf{V}=\left[\dfrac{\partial
V_{i}(\mathbf{y^{*}})}{\partial y_{j}}\right]$ (2)
where $\mathbf{y^{*}}$ is the disease-free equilibrium and $i,j=1,\ldots,m$.
According to the NGM approach, the basic reproduction number $R_{0}$ is the
spectral radius (largest eigenvalue) of the matrix
$\mathbf{F}\cdot\mathbf{V^{-1}}$; i.e.,
$R_{0}=\rho(\mathbf{F}\cdot\mathbf{V^{-1}})$ [34, 35, 13]. However, since the
model parameters vary in time (different parameter values in each week), the
NGM approach utilization results in the calculation of the effective
reproduction number $R_{t}$. In the following, the analytical expressions of
$R_{t}$ for SIRD, SEIRD, SEInsRD and SIDARTHE models in Eq. (5) are derived.
The SIRD mathematical model in Eq. (2) can be written in the form of Eq. (1)
as:
$\dfrac{d}{dt}\begin{bmatrix}S\\ I\\ R\\ D\end{bmatrix}=\begin{bmatrix}0\\ \beta SI\\ 0\\ 0\end{bmatrix}+\begin{bmatrix}0\\ 0\\ \gamma I\\ \mu I\end{bmatrix}-\begin{bmatrix}\beta SI\\ (\gamma+\mu)I\\ 0\\ 0\end{bmatrix}=F_{i}(\mathbf{y})+V^{+}_{i}(\mathbf{y})-V^{-}_{i}(\mathbf{y})$
(3)
The disease-free equilibrium is $\mathbf{y^{*}}=(S(0),0,0,0)$, so that
substitution in Eq. (2) leads to:
$\mathbf{F}=\beta S(0)\qquad\text{and}\qquad\mathbf{V}=\gamma+\mu$ (4)
Given that $S(0)=1$ as a fraction of the total population, the effective
reproduction number for the SIRD model is:
$R_{t}=\rho(\mathbf{F}\cdot\mathbf{V^{-1}})=\dfrac{\beta}{\gamma+\mu}$ (5)
The SEIRD mathematical model in Eq. (3) can be written in the form of Eq. (1)
as:
$\dfrac{d}{dt}\begin{bmatrix}S\\ E\\ I\\ R\\ D\end{bmatrix}=\begin{bmatrix}0\\ \beta SI\\ 0\\ 0\\ 0\end{bmatrix}+\begin{bmatrix}0\\ 0\\ \sigma E\\ \gamma I\\ \mu I\end{bmatrix}-\begin{bmatrix}\beta SI\\ \sigma E\\ (\gamma+\mu)I\\ 0\\ 0\end{bmatrix}=F_{i}(\mathbf{y})+V^{+}_{i}(\mathbf{y})-V^{-}_{i}(\mathbf{y})$
(6)
The disease-free equilibrium is $\mathbf{y^{*}}=(S(0),0,0,0,0)$, so that
substitution in Eq. (2) leads to:
$\mathbf{F}=\begin{bmatrix}0&\beta S(0)\\ 0&0\end{bmatrix}\qquad\text{and}\qquad\mathbf{V}=\begin{bmatrix}\sigma&0\\ -\sigma&\gamma+\mu\end{bmatrix}$ (7)
Given that $S(0)=1$, the effective reproduction number for the SEIRD model is:
$R_{t}=\rho(\mathbf{F}\cdot\mathbf{V^{-1}})=\dfrac{\beta}{\gamma+\mu}$ (8)
Note that the $R_{t}$ of the SEIRD model is the same as that of the SIRD model
in Eq. (5).
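As a quick sanity check (not part of the original derivation), the NGM
computation for the SEIRD model can be reproduced symbolically, for instance
with Python's sympy:

import sympy as sp

# Rate constants of the SEIRD model as positive symbols
beta, sigma, gamma, mu = sp.symbols("beta sigma gamma mu", positive=True)

# F and V at the disease-free equilibrium with S(0) = 1, as in Eq. (7)
F = sp.Matrix([[0, beta],
               [0, 0]])
V = sp.Matrix([[sigma, 0],
               [-sigma, gamma + mu]])

# The eigenvalues of F V^{-1} are 0 and beta/(gamma + mu),
# so the spectral radius reproduces Eq. (8)
print((F * V.inv()).eigenvals())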
The SEInsRD mathematical model in Eq. (4) can be written in the form of Eq.
(1) as:
$\dfrac{d}{dt}\begin{bmatrix}S\\ E\\ IN\\ IS\\ R\\ D\end{bmatrix}=\begin{bmatrix}0\\ \beta_{N}S\cdot IN+\beta_{S}S\cdot IS\\ 0\\ 0\\ 0\\ 0\end{bmatrix}+\begin{bmatrix}0\\ 0\\ (1-ss)\sigma E\\ ss\sigma E\\ \gamma(IN+IS)\\ \mu_{N}IN+\mu_{S}IS\end{bmatrix}-\begin{bmatrix}\beta_{N}S\cdot IN+\beta_{S}S\cdot IS+\mu_{TP}S\\ \sigma E+\mu_{TP}E\\ \gamma IN+\mu_{N}IN\\ \gamma IS+\mu_{S}IS\\ \mu_{TP}R\\ 0\end{bmatrix}=F_{i}(\mathbf{y})+V^{+}_{i}(\mathbf{y})-V^{-}_{i}(\mathbf{y})$
(9)
The disease-free equilibrium is $\mathbf{y^{*}}=(S(0),0,0,0,0,0)$, so that
substitution in Eq. (2) leads to:
$\mathbf{F}=\begin{bmatrix}0&\beta_{N}S(0)&\beta_{S}S(0)\\ 0&0&0\\ 0&0&0\end{bmatrix}\qquad\text{and}\qquad\mathbf{V}=\begin{bmatrix}\sigma+\mu_{TP}&0&0\\ -(1-ss)\sigma&\gamma+\mu_{N}&0\\ -ss\sigma&0&\gamma+\mu_{S}\end{bmatrix}$
(10)
Given that $S(0)=1$, the effective reproduction number for the SEInsRD model
is:
$R_{t}=\rho(\mathbf{F}\cdot\mathbf{V^{-1}})=\dfrac{\sigma}{\sigma+\mu_{TP}}\left(\dfrac{(1-ss)\beta_{N}}{\gamma+\mu_{N}}+\dfrac{ss\beta_{S}}{\gamma+\mu_{S}}\right)$
(11)
Note that when considering the $\mu_{TP}\ll\sigma$ limit, $R_{t}$ of SEInsRD
model in Eq. (11) is simplified to:
$R_{t}\stackrel{{\scriptstyle\mu_{TP}\ll\sigma}}{{=}}\left(\dfrac{(1-ss)\beta_{N}}{\gamma+\mu_{N}}+\dfrac{ss\beta_{S}}{\gamma+\mu_{S}}\right)$
(12)
which is similar to that of the SIRD and SEIRD models in Eqs. (5, 8) when
setting $ss=0$; i.e., when neglecting the severely infected individuals in the
model.
Finally, the SIDARTHE mathematical model in [25] can be written in the form of
Eq. (1) as:
$\dfrac{d}{dt}\begin{bmatrix}S\\ I\\ D\\ A\\ R\\ T\\ H\\ E\end{bmatrix}=\begin{bmatrix}-S(\alpha I+\beta D+\gamma A+\delta R)\\ S(\alpha I+\beta D+\gamma A+\delta R)-(\epsilon+\zeta+\lambda)I\\ \epsilon I-(\eta+\rho)D\\ \zeta I-(\theta+\mu+\kappa)A\\ \eta D+\theta A-(\nu+\xi)R\\ \mu A+\nu R-(\sigma+\tau)T\\ \lambda I+\rho D+\kappa A+\xi R+\sigma T\\ \tau T\end{bmatrix}=\begin{bmatrix}0\\ S(\alpha I+\beta D+\gamma A+\delta R)\\ 0\\ 0\\ 0\\ 0\\ 0\\ 0\end{bmatrix}+\begin{bmatrix}0\\ 0\\ \epsilon I\\ \zeta I\\ \eta D+\theta A\\ \mu A+\nu R\\ \lambda I+\rho D+\kappa A+\xi R+\sigma T\\ \tau T\end{bmatrix}-\begin{bmatrix}S(\alpha I+\beta D+\gamma A+\delta R)\\ (\epsilon+\zeta+\lambda)I\\ (\eta+\rho)D\\ (\theta+\mu+\kappa)A\\ (\nu+\xi)R\\ (\sigma+\tau)T\\ 0\\ 0\end{bmatrix}=F_{i}(\mathbf{y})+V^{+}_{i}(\mathbf{y})-V^{-}_{i}(\mathbf{y})$
(13)
where the parameter notation is explained in detail in [25]. The disease-free
equilibrium is $\mathbf{y^{*}}=(S(0),0,0,0,0,0,0,0)$, so that substitution in
Eq. (2) leads to:
$\mathbf{F}=\begin{bmatrix}\alpha S(0)&\beta S(0)&\gamma S(0)&\delta S(0)&0\\ 0&0&0&0&0\\ 0&0&0&0&0\\ 0&0&0&0&0\\ 0&0&0&0&0\end{bmatrix}\qquad\text{and}\qquad\mathbf{V}=\begin{bmatrix}\epsilon+\lambda+\zeta&0&0&0&0\\ -\epsilon&\eta+\rho&0&0&0\\ -\zeta&0&\kappa+\mu+\theta&0&0\\ 0&-\eta&-\theta&\nu+\xi&0\\ 0&0&-\mu&-\nu&\sigma+\tau\end{bmatrix}$ (14)
Given that $S(0)=1$, the effective reproduction number for the SIDARTHE model
is:
$R_{t}=\rho(\mathbf{F}\cdot\mathbf{V^{-1}})=\dfrac{\alpha}{r_{1}}+\dfrac{\beta\epsilon}{r_{1}r_{2}}+\dfrac{\gamma\zeta}{r_{1}r_{3}}+\dfrac{\delta\theta\zeta}{r_{1}r_{3}r_{4}}+\dfrac{\delta\epsilon\eta}{r_{1}r_{2}r_{4}}$
(15)
where $r_{1}=\epsilon+\lambda+\zeta$, $r_{2}=\eta+\rho$,
$r_{3}=\kappa+\mu+\theta$ and $r_{4}=\nu+\xi$. Note that the expression in Eq.
(15) derived here in the context of the NGM approach is the same as the one in
Eq. (18) derived in [25] using a different approach.
## Appendix B The SEIRD and SEInsRD model parameters
The parameter estimation process described in Section 2.2, which was followed
to fit the reported data sets of infected, recovered and dead individuals in
Italy from February 26 to September 30 on a weekly basis, resulted in the
model parameters shown in Fig. 1. The left panel shows the distribution of the
SEIRD model parameters $\beta$, $\sigma$, $\gamma$ and $\mu$ and the right
panel shows those of the SEInsRD model: $\beta_{N}$, $\beta_{S}$, $\sigma$,
$\gamma$, $\mu_{N}$, $\mu_{S}$ and $ss$. The values of the parameter
$\mu_{TP}$ of the SEInsRD model are not shown, since they are smaller than
$10^{-5}$.
Figure 1: The parameters estimated for the SEIRD (left) and SEInsRD (right)
models. The shaded regions indicate the weeks for which $R_{t}>1$ and
explosive timescales arise.
Figure 1 indicates that the parameters expressing the transitions from one
population group to another attain similar values in both models: the
transmission rate ($\beta$ and $\beta_{N},\beta_{S}$), incubation period
($1/\sigma$), recovery rate ($\gamma$) and fatality rate ($\mu$ and
$\mu_{N},\mu_{S}$) constants.
As shown in Fig. 1, the following trends in the parameter values are
indicated:
* 1.
the transmission rate constant $\beta$ attains high/low values in the periods
where explosive timescales are present/absent. The values of $\beta$ tend to
decrease during the transition from an explosive to a dissipative region and
to increase in the reverse transition.
* 2.
the rate constant $\sigma$ (the inverse of the incubation period) tends to
increase during the explosive regions.
* 3.
the recovery rates $\gamma$ are almost constant.
* 4.
the fatality rates $\mu$ tend to decrease, regardless of the
explosive/dissipative region transitions; they tend to increase only in the
last few weeks.
* 5.
the normally to severely infected ratio $ss$ is almost constant.
population group | SEIRD | SEInsRD
---|---|---
infected, $I$ | $0.99972$ | $0.99813$
recovered, $R$ | $0.99993$ | $0.99985$
dead, $D$ | $0.99998$ | $0.99969$
Table 1: $R^{2}$ values of the solution acquired on the basis of the SEIRD and
SEInsRD models with the parameter distribution shown in Fig. 1, with reference
to the reported data for infected, recovered and dead individuals in Italy.
# Knowledge Graphs and Natural-Language Processing
Andreas L. Opdahl, University of Bergen, Norway
###### Abstract
Emergency-relevant data comes in many varieties. It can be high volume and
high velocity, and reaction times are critical, calling for efficient and
powerful techniques for data analysis and management. Knowledge graphs
represent data in a rich, flexible, and uniform way that is well matched with
the needs of emergency management. They build on existing standards,
resources, techniques, and tools for semantic data and computing. This chapter
explains the most important semantic technologies and how they support
knowledge graphs. We proceed to discuss their benefits and challenges and give
examples of relevant semantic data sources and vocabularies. Natural-language
texts — in particular those collected from social media such as Twitter — are
a type of data source that poses particular analysis challenges. We therefore
include an overview of techniques for processing natural-language texts.
## 1 What are Knowledge Graphs?
Knowledge graphs originate from Tim Berners-Lee’s vision of a machine-
processable web of data that would augment the original web of human-readable
documents (Berners-Lee et al, 2001; Shadbolt et al, 2006). A central idea is
to represent data as graphs, with nodes that represent concrete objects,
information, or concepts and with edges that represent semantic relations
(Allemang and Hendler, 2011).
The most central standard is the Resource Description Framework (RDF,
https://www.w3.org/TR/rdf11-primer/), which is the standard way of
representing knowledge graphs. An RDF graph consists of triples, each
expressing that a semantic resource (the subject) has a particular semantic
relation (the predicate or property) to either a literal value or another
semantic resource (the object). Resources and properties are identified using
Internationalized Resource Names (IRNs; here, we use IRN for Uniform Resource
Names (URNs) that are extended to the Unicode character set, although it
remains more common to use the initialism URN even when Unicode is allowed),
and literals are typically expressed using XML Schema Definition (XSD)
datatypes. A special rdf:type property can be used to state that one resource
is the type of another, such as in the triple dbpedia:Tim_Berners-Lee rdf:type
foaf:Person (where we have used the standard prefixes dbpedia:, rdf:, and
foaf: to shorten the IRNs). Standard formats are available for exchanging RDF
files, and the new JSON-LD (http://json-ld.org) standard extends JavaScript
Object Notation (JSON) with semantic tags so that RDF data can be easily
exchanged through web APIs.
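As a minimal sketch, the triple above can be created and serialised with the
Python rdflib library (the library choice is ours; the prefixes mirror those
in the text):

from rdflib import Graph, Namespace
from rdflib.namespace import RDF

FOAF = Namespace("http://xmlns.com/foaf/0.1/")
DBPEDIA = Namespace("http://dbpedia.org/resource/")

g = Graph()
g.bind("foaf", FOAF)

# dbpedia:Tim_Berners-Lee rdf:type foaf:Person
g.add((DBPEDIA["Tim_Berners-Lee"], RDF.type, FOAF.Person))

# Serialise the one-triple graph in Turtle format
print(g.serialize(format="turtle"))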
RDF Schema (RDFS, https://www.w3.org/TR/rdf-schema/) extends RDF with terms —
represented as IRNs — that make knowledge graphs richer and more precise. For
example, RDFS defines resource types and properties for expressing that one
resource type is a subtype of another (e.g., that toxic fume is a kind of
pollution), that one property is a subtype of another (e.g., that being a
nurse is a form of being a healthcare worker), and that some property is
always used with subjects and objects of specific types (e.g., that only
living things can be poisoned). The meaning of RDFS terms is defined through
axioms and entailment rules. The Web Ontology Language (OWL,
https://www.w3.org/OWL/) offers even more precise semantics and automated
reasoning on top of RDFS, but computational complexity grows quickly when
datasets become large. Therefore, OWL is most effective for smaller and more
specific semantic datasets, called ontologies. One important use of ontologies
is to precisely define and interrelate the resource types and properties that
are used to organise and give meaning to larger knowledge graphs. Such
ontologies — even when they are expressed less formally in RDFS — are often
called vocabularies (more about that later).
SPARQL (the SPARQL Protocol and RDF Query Language,
https://www.w3.org/TR/sparql11-overview/) lets users and programs extract
information from knowledge graphs. The result can be tables of information,
yes/no answers, or new knowledge graphs. SPARQL Update also lets users and
programs modify knowledge graphs by adding or removing triples. SPARQL is
supported both by native RDF database management systems, called triple
stores, and by wrappers that expose tabular and other data in legacy databases
as knowledge graphs — whether as downloadable RDF files, through online SPARQL
endpoints, or by other means.
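As an illustration, the following sketch queries the public DBpedia SPARQL
endpoint with Python's SPARQLWrapper package (endpoint availability and the
example query are assumptions):

from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
    PREFIX foaf: <http://xmlns.com/foaf/0.1/>
    SELECT ?name WHERE {
        <http://dbpedia.org/resource/Tim_Berners-Lee> foaf:name ?name .
    }
""")
sparql.setReturnFormat(JSON)

# Each binding row maps variable names to values
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["name"]["value"])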
The Linked Open Data (LOD) principles offer further advice for creating and
sharing knowledge graphs (Bizer et al, 2009a). The four central principles
are:
1. 1.
sharing graphs using standard formats and protocols such as RDF, RDFS, OWL,
and SPARQL;
2. 2.
using Internationalized Resource Names (IRNs) to name resources (nodes) and
properties (edges);
3. 3.
making these IRNs into dereferencable Internationalized Resource Identifiers
(IRIs, i.e., Uniform Resource Identifiers (URIs) that are extended to the
Unicode character set; they both name a resource uniquely and specify its
location on the web) that can be accessed on the web to provide further
information about the resource in RDF format; and
4. 4.
using standard IRNs that are defined in vocabularies as types and properties
in graphs.
Today, more than 1200 datasets that adhere to these principles are openly
available in the LOD cloud (McCrae et al, 2018), adding up to almost 150
billion triples. Much-used datasets we will mention later (such as DBpedia,
GeoNames, LinkedGeoData, and Wikidata) act as hubs that tie these linked open
datasets even more tightly together by offering standard names (again IRNs)
for individual people, organisations, places, works, and so on.
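Dereferencing such an IRI works through ordinary HTTP content negotiation; a
small sketch with the Python requests library (the DBpedia IRI is just an
example):

import requests

iri = "http://dbpedia.org/resource/Tim_Berners-Lee"

# Ask the server for an RDF serialisation of the resource
response = requests.get(iri, headers={"Accept": "text/turtle"},
                        allow_redirects=True, timeout=30)

# The body is RDF describing the resource, ready to load into a graph
print(response.text[:500])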
Knowledge graphs can also be stored and processed using property graph
databases and other technologies outside the semantic standard but, even for
such graphs, RDF and SPARQL are commonly used for information exchange.
## 2 Benefits and Challenges
In an emergency situation, diverse data sources must be recombined and used to
support complex querying, processing, and reasoning in unforeseeable ways.
This is exactly the type of situation where knowledge graphs shine, because
they leverage an interoperable set of semantic technologies and tools for
quickly and easily interpreting, combining, analysing, and presenting
potentially related datasets from different sources.
### 2.1 Benefits
Given that the right competencies, tools, and infrastructure are in place,
knowledge graphs building on semantic technologies and tools have the
potential to simplify and speed up all stages of emergency data processing.
Identifying data sources is made easier by semantic search engines and
semantically searchable registries of open data (such as
http://lod-cloud.net). Harvesting semantic data is made easier by standard
data-exchange formats such as Turtle, NT and OWL/XML for downloading files,
JSON-LD for web APIs, and SPARQL for database endpoints. Lifting non-semantic
data to RDF format is supported by tools such as Karma
(http://usc-isi-i2.github.io/karma/), and JSON data from web APIs can be
easily lifted to JSON-LD by adding simple semantic metadata. A wide range of
wrappers, such as D2RQ (http://d2rq.org/), provide SPARQL access to relational
and other DBMSs that do not natively support SPARQL. Identifying vocabularies
to use for lifting is made easier by semantically searchable registries such
as Linked Open Vocabularies (LOV, https://lov.linkeddata.es/dataset/lov;
Vandenbussche et al, 2017) and LODstats (Ermilov et al, 2013). Understanding
data becomes easier for humans when the data attributes are marked up with
semantically precise tags from well-defined vocabularies. Alignment of related
terms from different vocabularies (and other kinds of ontologies) is supported
by techniques and tools that use term and structural similarity as indicators
of term equivalence and of other semantic relations between terms. Recombining
data from different data sets is the most central strength of knowledge
graphs: as soon as their vocabularies have been aligned, knowledge graphs can
be recombined simply by loading them into the same triple store or through
SPARQL, using federated queries that combine partial results from multiple
endpoints. Enriching data means to recombine a dataset with reference data,
for example from the Linked Open Data (LOD) cloud. Contextualising and
validating data is thus simplified further by openly available semantic
datasets that can be used to make data even easier to understand and to
control its validity. Reasoning over data is supported to some extent by the
description logic (DL) subset of OWL, although computational effort may grow
quickly for large ontologies if they are not carefully designed. Rule-based
reasoning is therefore more applicable to large datasets than DL reasoning.
Visualising semantic data, e.g., in dashboards, is also well supported. In all
these processing stages, the strength of knowledge graphs and semantic
technologies lies in the same set of ideas and practices: expressing knowledge
uniformly in a standard format (RDF or OWL) that is annotated semantically
using well-defined terms (IRIs) defined as part of semantically interlinked
vocabularies that are expressed in the same standard formats (RDFS or OWL).
### 2.2 Challenges
A full stack of semantic technologies for knowledge graphs is already
available for simplifying and speeding up information processing in an
emergency situation. The challenge is to have the right combinations of
competencies, capacities, and tools already in place when disaster strikes.
On the competence side, it is critical to recruit and train volunteers with
the right combination of semantic-technology competence and collaboration and
communication skills. To have maximal impact in an emergency, a semantic
technologist must not only be expert in the use of their tools and techniques,
but also be able to communicate well with emergency workers and perhaps
directly with the people affected. Communicating in an emergency situation is
particularly challenging, because the people involved: may be scared,
fatigued, and otherwise working in stressful situations; will have a broad
variety of other competencies and skill levels; may come from different
cultures, use different languages and perhaps operate in different climates
and time zones; may not be knowledgeable and skilled in ICT; and may
experience low-quality transmission and delays due to long distances and
perhaps compromised infrastructures.
On the capacity side, most of the semantic interpretation, lifting, combining,
and analysing can take place in the cloud in a distributed fashion that makes
it highly suitable for volunteer work. Cloud computing platforms such as
Amazon’s EC2 and others make it possible to set up collaborative computing
infrastructures on-demand quickly. The basic tools needed for handling
knowledge graphs can be downloaded and installed quickly, and some cloud
providers even offer pre-configured virtual hosts (such as Amazon’s Machine
Images, AMIs) that can be instantiated on demand. Hence, dedicated emergency
machine images can be defined in advance where important and trusted reference
datasets have already been loaded into a running triple store, along with
ready-to-use tools such as data scrapers and lifters, ontology editors,
programming tools and APIs, visualisers, dashboard generators, and various
types of social emergency software. Training to create, use, and curate such
advance-prepared infrastructures is therefore a useful emergency-preparation
activity, and mastering the management and use of virtual hosts and other
cloud infrastructures is a useful competence.
On the tool side, for all types of non-semantic data, precise semantic lifting
is essential to avoid information loss. We have already mentioned the
computational complexity of OWL reasoning. Indeed, computational complexity is
a challenge for graph-based reasoning and pattern matching in general, and it
is an important consideration both for native RDF programming and when
providing and querying SPARQL endpoints. Although triple-store technologies
have been used to store more than a trillion triples in benchmarks, most
existing technologies do not scale to the biggest data sizes. An important
future challenge is therefore to extend current big-data technologies to also
handle semantic data. Finally, knowledge graphs and semantic technologies need
to become seamlessly integrated with mainstream machine-learning techniques.
A final challenge is textual data, which must be lifted to semantic form
before it can be represented in knowledge graphs. This issue is so central
that we will discuss it in a separate section below.
## 3 Vocabularies for Emergency Response
Semantic technologies, LOD, and knowledge graphs rely heavily on vocabularies,
expressed either in RDFS or more precisely and formally as OWL ontologies.
Vocabularies define terms that can be used to make the meaning of knowledge
graphs explicit, precise, and easier to understand. The terms in a vocabulary
provide standard IRNs for the most important resource types and properties in
a domain. For example, an organisation vocabulary can define resource types
for Person and Project and a currentProject property to relate them. We have
already mentioned Linked Open Vocabularies
(LOV, https://lov.linkeddata.es/dataset/lov), a website that offers a
searchable overview of and entry point into the most used vocabularies.
Precisely defined and interlinked vocabularies also make it easier to combine
knowledge graphs that use different vocabularies.
There is no all-encompassing and widely accepted ontology that covers all of
emergency management. But many data-exchange standards have been proposed for
specific concerns, such as people, organisations, resources, infrastructure,
processes, disaster description, damage assessment, geography, hydrology,
meteorology, and topography. Unfortunately, most standards are defined in
plain XML or proprietary formats, and some of them are not even publicly
available.
Among the vocabularies that are both open and semantic, MOAC (Management of a
Crisis, http://observedchange.com/moac/ns/) combines three types of crisis
information used by: (a) traditional humanitarian agencies, (b) disaster
affected communities, and (c) volunteer and technical committees for
humanitarian data exchange. Accordingly, MOAC is divided into three sections
that offer terms (IRNs) for: emergency types, security incidents, and affected
populations (emergency management); shelters, water, sanitation, food, health,
logistics, and telecommunications (emergency cluster); and
who/what/where/when, needs, and responses (who-what-where). Parts of MOAC are
supported by the Ushahidi web platform (https://www.ushahidi.com) for
emergency management.
HXL (Humanitarian eXchange Language, http://hxlstandard.org/) aims to
improve information sharing during humanitarian crises without adding extra
reporting burdens. It defines hashtags for describing: places, such as
geolocations, populated places and administrative units in countries; people
and households, such as affected populations, their needs and characteristics;
responses and other operations, such as their capacities and operations;
crises, incidents and events, including their causes, impacts and severity;
and general metadata, such as data provenance, approvals, and timestamps. It
offers a broader infrastructure that also comprises training, tools and other
materials, including a semantic version of the vocabulary.
EDXL-RESCUER is an attempt to make the XML-based Emergency Data Exchange
Language (EDXL, http://docs.oasis-open.org/emergency/edxl-de/v2.0/edxl-de-v2.0.html)
standard available as an OWL ontology. EDXL facilitates sharing
of emergency information between government agencies and other involved
organisations. It offers terms for: alerts, information about events, affected
areas, and additional image or audio resources (the common alerting protocol);
requesting, responding to, and committing resources (resource messaging);
field observations, casualty, illness, and management reporting (situation
reporting); hospitals, their statuses, bed capacities, facilities, resources,
and services (hospital availability exchange); emergency patients (emergency
patients tracking); and high-level information modelling (reference
information model).
Other examples of domain ontologies or vocabularies that can be relevant in
emergency situations are: km4city (city data), Linked Datex II (traffic),
Semantic Sensor Network Ontology (sensors), Ordnance Survey Hydrology Ontology
(hydrology), Weather Ontology (meteorology), USGS CEGIS (topography), Ordnance
Survey Building and Places Ontology, E-response Building Pathology Ontology,
and E-response Building Internal Layout Ontology. These vocabularies can be
used alongside general vocabularies for, e.g., time and duration (OWL-Time),
locations (geo, GeoNames, LinkedGeoData), people (FOAF, bio), organisations
(org, InteLLEO), events (the Event Ontology), provenance (PROV-O), and data
rights (CC).
## 4 Semantic datasets for Emergency Management
The chapter on Big Data has already reviewed many data sources that are
relevant for emergency management. Some of them are also available in semantic
formats or, at least, have semantic counterparts.
The LOD Cloud (http://lod-cloud.net; McCrae et al, 2018) is a searchable
portal of more than 1200 interrelated datasets available as knowledge graphs.
It contains both general datasets and sets that are specific to emergency-
related domains such as geography, government, social networking, and user-
generated content. DBpedia (Auer et al, 2007; Bizer et al, 2009b) is an
automated extraction of structured data from Wikipedia (in particular, its
fact boxes) into RDF. It describes more than 14 million resources and is
available in over a hundred languages. It is one of the most central hubs in
the LOD cloud, where it has been standard practice to name people,
organisations, works, and so on using their (dereferencable) DBpedia IRIs.
Wikidata (https://www.wikidata.org/wiki/Wikidata:Introduction) is
Wikipedia’s sister project for crowdsourcing structured factual information.
The idea is that the information in Wikipedia’s fact boxes will be extracted
from and maintained by the Wikidata project. Hence, whereas DBpedia extracts
its data from Wikipedia, Wikidata is a supplier of information to Wikipedia.
It currently contains around 50 million items with unique IRIs, similar to RDF
resources. Although Wikidata’s knowledge graph is not natively stored and
maintained in RDF, the data is available through a SPARQL endpoint and
downloadable as RDF files. GeoNames (http://www.geonames.org/about.html) is
a crowdsourced open repository of more than 10 million geotagged toponyms
(geographical names) categorised using a three-level taxonomy with nine
letter-coded top-level categories and more than 600 sub-categories. The nine
top-level categories are: countries, states, regions… (A); streams, lakes…
(H); parks, areas… (L); cities, villages… (P); roads, railways… (R); spots,
buildings, farms… (S); mountains, hills, rocks… (T); undersea… (U); and
forests, heaths… (V). GeoNames can be browsed online through a map interface.
It is also available as RDF and SPARQL and has a web API. It is common in the
LOD cloud to name places using their (dereferencable) GeoNames IRIs.
LinkedGeoData (Auer et al, 2009; Stadler et al, 2012) is an automated
extraction of structured data from OpenStreetMap, much as DBpedia is an
extraction from Wikipedia. BabelNet (https://babelnet.org/) is a multi-lingual
word net (Miller, 1995). LODstats (http://lodstats.aksw.org/; Ermilov et al,
2013) has been used to index an even larger body of semantic
datasets and endpoints and can be used to search for datasets that use
specific RDF types, properties, vocabularies, etc.
The big internet companies like Google, Facebook, and Amazon also maintain
large internal knowledge graphs, although the information is not in general
open or always represented using standard semantic formats and protocols. In
some cases, commercial data can be sampled or shared in an emergency
situation, either pro bono or paid. Google's Emergency Map service and Person
Finder (http://www.google.org/{crisismap,personfinder}) are examples of such
services, although they are not exposed through semantic interfaces. Google
also supports the GDELT project (https://www.gdeltproject.org/),
which continuously harvests and analyses media in print, broadcast, and web
formats in over 100 languages. The GDELT Event Database represents and
codifies physical events reported in the world news, whereas the GDELT Global
Knowledge Graph represents the reported people, places, organisations, themes,
and emotions. Both databases are open to the public and incremental updates
are available every 15 minutes. Although the graphs are distributed in tabular
form with unique identifiers and well-defined columns, the data are not
represented in standard semantic format with IRNs and XSD-typed literals.
GDELT does not target emergency management specifically, but offers an open-
data firehose about human society that can be used to monitor unstable
situations and escalating crises.
The new JSON-LD (http://json-ld.org) format extends basic JSON in a simple
way with semantic tags taken from standard vocabularies. JSON-LD makes it easy
to lift JSON-based APIs to a semantic format, so the responses can be inserted
directly into knowledge graphs as soon as a suitable vocabulary has been found
or created and interlinked. Data represented in XML-based or other formats,
such as from Google Person Finder, can easily be converted to JSON before
lifting to JSON-LD by adding simple semantic metadata.
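A minimal sketch of such lifting in Python, adding an @context that maps
illustrative JSON keys to FOAF terms (the keys and the mapping are
assumptions):

import json

# A plain JSON object as it might be returned by some web API
record = {"name": "Tim Berners-Lee",
          "homepage": "https://www.w3.org/People/Berners-Lee/"}

# Lift it to JSON-LD by declaring what the keys mean, here via FOAF
record["@context"] = {
    "name": "http://xmlns.com/foaf/0.1/name",
    "homepage": {"@id": "http://xmlns.com/foaf/0.1/homepage",
                 "@type": "@id"},
}

# The result is valid JSON-LD that can be inserted into a knowledge graph
print(json.dumps(record, indent=2))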
Semantic web APIs also make it much easier to connect the rapidly growing
number of more or less smart things available on the internet. Networks of
sensors, actuators and other networked devices on the Internet of Things
(Atzori et al, 2010) can thereby be identified, integrated, and leveraged much
more quickly and easily in an emergency situation, and the information they
provide becomes easier to recombine with semantic data from other sources.
Smart semantic things can describe, gain access to, and reason about their own
context. They can describe themselves and their services semantically in graph
form, making them more self-contained and easier to find, for example using
the new Semantic Sensor Network Ontology.
Regular datasets that are available as spreadsheets or in SQL databases can
also be lifted easily to semantic format. We have already mentioned
Karma (http://usc-isi-i2.github.io/karma/), which is one of several semantic
lifting tools that can generate RDF from structured (tabular or hierarchical)
data, and D2RQ (http://d2rq.org/), which is a much-used wrapper for creating
SPARQL endpoints and RDF interfaces on top of SQL
databases. Automatic semantic annotation of images, video, and audio is an
emerging area. In particular, deep neural convolution networks have made image
analysis much more precise in recent years (Krizhevsky et al, 2012).
Nevertheless, some of the most important information during an emergency will
be available as text, in particular as messages harvested from social media in
real time. The next section therefore discusses natural-language processing
and lifting of texts into semantic form as knowledge graphs.
## 5 Analysing Natural-Language Texts
### 5.1 Pre-processing
Natural-language processing (NLP) uses AI and ML techniques to make the
semantic content of written texts processable by computers. Central challenges
are to identify: which topics and things a text is about; how the topics and
things are related; as well as which attitudes and emotions the text
expresses. Conventionally, NLP has built on a pre-processing pipeline that
combines all or some of the following steps (Castillo, 2016, chapter 3):
1. 1.
Character decoding and tokenisation breaks the text into a list of words, word
pieces, or even single characters, called tokens, that are represented using a
standard character set such as Unicode.
2. 2.
Normalisation standardises use of abbreviations, accents, emoticons,
shorthands, slang, upper- versus lower-case characters, etc.
3. 3.
Stopword removal eliminates words that are too common to convey much meaning,
such as “of”, “the”, and “or”. One much-used stopword list contains around 300
words but, for some types of analyses, aggressively eliminating as much as the
20% most frequent words produces the best results. Removing little-used words
is also common.
4. 4.
Stemming or lemmatisation are two alternative ways of handling words such as
“build”, “builds”, “built”, “builder”, and “building” that are grammatical
forms of the same word (and stem) “build”. The difference is that stemming
uses simple pattern-based string substitutions (typically based on regular
expressions), whereas lemmatisation embeds more lexical and grammatical
knowledge, including exception lists. For example, a hypothetical and very
simple stemmer might treat the word “was” as the plural form of (the non-word)
“wa”, whereas a lemmatiser would look up its exception list and identify “was”
correctly as the past tense of “is”.
5. 5.
Part of Speech (PoS) tagging parses sentences to assign words to classes such
as nouns, verbs, adjectives, and adverbs. Lemmatisation can sometimes benefit
from PoS tags, so the order of steps does not have to be strict. For example,
a grammatically-informed lemmatiser would recognise “building” as a form of
“build” when it is used as a verb, but retain the form “building” when it is
used as a noun.
6. 6.
Dependency parsing detects how the words and phrases in a sentence are
related, for example which noun (phrase) an adjective modifies, which earlier
noun phrase a pronoun refers to, and which noun phrases are the subject and
object of a verb phrase.
While pre-processing has often relied on hand-crafted algorithms and rules,
pre-processing with neural networks and other machine-learning techniques has
become more common.
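A compact sketch of steps 1, 3, 4, and 5 with the Python NLTK library (the
library choice is ours, and the commented data packages must be downloaded
once):

import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer

# nltk.download("punkt"); nltk.download("stopwords")
# nltk.download("averaged_perceptron_tagger")

text = "Wildfires rage in the Arctic Circle as Sweden calls for help"

tokens = nltk.word_tokenize(text)                        # step 1: tokenisation
content = [t for t in tokens
           if t.lower() not in stopwords.words("english")]  # step 3: stopwords
stems = [PorterStemmer().stem(t) for t in content]       # step 4: stemming
tags = nltk.pos_tag(content)                             # step 5: PoS tagging

print(stems)
print(tags)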
### 5.2 Word embeddings
Natural-language processing techniques are developing rapidly. Google's
word2vec has trained a neural network to predict which words occur in which
contexts in a 1.6 billion-word corpus (Mikolov et al, 2013; Goldberg and Levy,
2014). The result is a set of word vectors, each of which represents the
semantics of a word as a few hundred real numbers. GloVe has generated a
similar set of word vectors using statistical techniques instead of a neural
network (Pennington et al, 2014). The vectors generated by word2vec and GloVe
can describe word meanings on a very precise level that opens up for new modes
of analysis and reasoning. For example, when the vector for the word “France”
is subtracted from the vector for “Paris” and the vector for “Germany” is
added, the sum turns out to be close to the vector for “Berlin”. Similar
additive relations exist between different grammatical forms of the same stem,
so that “biggest” – “big” + “small” produces a vector similar to the one for
“smallest” (Mikolov et al, 2013). But word-vector addition and subtraction
does not work equally well for all kinds of relations.
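This vector arithmetic can be reproduced with the Python gensim library, given
a file of pretrained word vectors (the path is a placeholder):

from gensim.models import KeyedVectors

# Load pretrained vectors, e.g. a word2vec binary model (placeholder path)
kv = KeyedVectors.load_word2vec_format("word-vectors.bin", binary=True)

# vector("Paris") - vector("France") + vector("Germany") is close to "Berlin"
print(kv.most_similar(positive=["Paris", "Germany"],
                      negative=["France"], topn=3))

# Grammatical analogy: "biggest" - "big" + "small" is close to "smallest"
print(kv.most_similar(positive=["biggest", "small"],
                      negative=["big"], topn=3))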
Word-embedding techniques have also been used to generate vectors that
approximate the meaning of sentences, paragraphs, and documents (Le and
Mikolov, 2014) and even the nodes (resources) and edges (properties) in
knowledge graphs (Ristoski and Paulheim, 2016), so that the semantic distance
between a word or paragraph and a LOD resource can be approximated by the
distance (Euclidian or other) between their vector representations. Vector
representations of words, sentences, paragraphs, documents, LOD resources, and
other semantic phenomena are paving the way for research that may increase the
quality of NL processing as word embedding becomes better understood and more
widely used.
Word-embedding approaches often skip all but the first step of the
conventional pre-processing pipeline, treating even misspellings and
punctuation signs as meaning-bearing tokens. Skipping stemming or
normalisation can also improve accuracy because grammatical forms carry
semantic information.
### 5.3 Analysis problems
Sentiment analysis, sometimes known as opinion mining, attempts to identify
whether a text (or its parts) expresses a positive or negative attitude (Pak
and Paroubek, 2016; Pang and Lee, 2008). Most sentiment analysers are
implemented using supervised machine-learning algorithms. For example, a
collection of movie reviews where each text is associated with a numerical
ranking can be used to train a regression algorithm (Müller et al, 2016).
Emotion analysis uses similar techniques to identify more specific feelings
such as joy, anger, disgust, sadness, and fear, both for the text as a whole
and for the keywords and phrases it contains.
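A minimal supervised sketch along these lines, fitting a sentiment-score
regressor with scikit-learn (the tiny labelled set is illustrative only):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# Illustrative labelled reviews: text plus a numerical sentiment score
texts = ["a wonderful, moving film", "dull plot and flat acting",
         "great fun from start to finish", "a complete waste of time"]
scores = [0.9, -0.6, 0.8, -0.9]

# Bag-of-words features feed a linear regression model
model = make_pipeline(TfidfVectorizer(), Ridge())
model.fit(texts, scores)

print(model.predict(["surprisingly great fun"]))  # leans positive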
Negation analysis attempts to identify negated parts of a text. Otherwise a
sentence like “I did not find the jokes entertaining.” could easily be scored
as a positive statement: the words “joke” and “entertain” are both positive,
and the rest are neutral or stop words.
Keyword extraction attempts to find the most important words and phrases in a
text. Conventional keyword analysis uses a bag of words that results from pre-
processing steps 1-4. Extraction proceeds by comparing this bag to a large
corpus of other pre-processed texts (for example news articles or Wikipedia
pages). Good keywords are ones that occur many times in the input text, but
are rare elsewhere in the corpus. A suitable measure is term frequency-inverse
document frequency (TF-IDF). Word phrases can be extracted in much the same
way as keywords, but by comparing bags of two- and three-word sequences
(called 2- and 3-grams) instead of single words (Sebastiani, 2002).
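The TF-IDF weighting itself is easy to compute with scikit-learn; a sketch
that scores the words of one input text against a small illustrative corpus:

from sklearn.feature_extraction.text import TfidfVectorizer

corpus = ["storm damage closes roads",           # illustrative reference texts
          "new art gallery opens downtown",
          "wildfire smoke reaches the capital"]
input_text = "wildfire closes mountain roads"

vectorizer = TfidfVectorizer(stop_words="english")
vectorizer.fit(corpus + [input_text])

# High TF-IDF: frequent in the input text, rare elsewhere in the corpus
weights = vectorizer.transform([input_text]).toarray()[0]
terms = vectorizer.get_feature_names_out()
print(sorted(zip(weights, terms), reverse=True)[:5])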
Topic identification is used to identify topics or themes that are related to
a text, but that may not be explicitly mentioned in it. For example, a
newspaper article may be related to the Summer Olympic Games although the text
does not contain that exact phrase nor a synonym. Machine-learning techniques
are much used for this purpose (Müller et al, 2016). Latent Dirichlet
Allocation (LDA) is a statistical technique that identifies groups of words
that tend to occur together in a corpus of texts, under the assumption that
each such word group marks a topic or theme that a text can be about. Word-
embedding techniques are increasingly being used to identify and represent the
topics of sentences, paragraphs, and documents (Le and Mikolov, 2014).
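A minimal LDA sketch with scikit-learn (the mini-corpus and topic count are
illustrative):

from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = ["flood water rises in the city",
        "river flood warning issued",
        "team wins the football final",
        "fans celebrate football victory"]

counts = CountVectorizer(stop_words="english")
X = counts.fit_transform(docs)

# Two latent topics; each topic is a distribution over words
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
terms = counts.get_feature_names_out()
for topic in lda.components_:
    print([terms[i] for i in topic.argsort()[-3:]])  # top words per topic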
Classification is similar to topic identification but, whereas topic
identification is open, text classification relies on a closed taxonomy of
labels. Standard machine-learning approaches are available for single-label or
multi-label classification (Müller et al, 2016), and standard clustering
algorithms can be used to establish the initial taxonomy structure.
Afterwards, other NL techniques can be used to suggest class labels, although
manual curation and labelling is also common.
Named entity recognition (NER) attempts to identify the individuals that are
mentioned in a text, such as people, companies, organisations, cities,
geographic features, etc., usually along with their types. Conventionally,
this has been treated as a three-step task. First, the words or phrases that
name an individual are identified. Common techniques are gazetteer lists (of
known names) and typesetting conventions (such as capital initials) in
combination with PoS analysis that identifies nouns. Next, the identified
names are disambiguated: does the name “Bergen” refer to an American actress,
a college football team, or a city in the Netherlands, New Jersey, or Norway?
Statistical techniques like LDA can be used here, because each meaning of a
name like “Bergen” will tend to co-occur with different groups of words.
Finally, when the meaning of a name is clear, it is represented in some
standard way, preferably linked by an IRN defined in a common Linked Open Data
resource. Examples of LOD sets that can be used to define IRNs are the English
WordNet (its RDF version), the multi-lingual BabelNet, DBpedia, Wikidata,
GeoNames, and LinkedGeoData. Keywords and phrases, concepts, and
categories/labels can also be semantically linked with IRNs using similar
techniques. Recently, neural networks have been applied to all three sub-
problems, both separately and in combination.
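Off-the-shelf libraries handle the first step, and increasingly also typing; a
sketch with spaCy (the small English model must be installed separately, and
the predicted labels are examples):

import spacy

# python -m spacy download en_core_web_sm   (one-time model install)
nlp = spacy.load("en_core_web_sm")

doc = nlp("Wildfires rage in the Arctic Circle as Sweden calls for help")

# Each recognised entity comes with a predicted type label
for ent in doc.ents:
    print(ent.text, ent.label_)  # e.g. "Arctic Circle" LOC, "Sweden" GPE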
Relation extraction is a challenging area that attempts to identify precise
semantic relations between the keywords, phrases, concepts, labels, and named
entities that are extracted from a text (Wong et al, 2012). For example, when
a text mentions a “hurricane” near the name of a town, does it mean that the
hurricane is approaching, hitting, or passing by? Supervised machine learning
has been used to extract specific relations in narrow domains, such as sports
results. But general relation extraction using deeper PoS tagging and
dependency analysis is an open research area. A new generation of neural-
network and word-embedding based joint entity and relation extractors and
linkers is producing increasingly accurate (complete and precise) results,
often surpassing specialised entity recognisers-linkers and specialised
relation extractors-linkers.
Literal extraction is a two-step task: first identifying data that constitutes
a literal such as a phone number, web address, date or time, and then
representing its meaning in a standard way, for example as an IRN or XSD-typed
literal string.
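A small sketch of both steps for dates, combining a regular expression with an
XSD-typed rdflib literal (the date pattern is deliberately simplistic):

import re
from rdflib import Literal
from rdflib.namespace import XSD

text = "The bridge closed on 2020-03-11 after the storm."

# Step 1: spot an ISO-formatted date
match = re.search(r"\d{4}-\d{2}-\d{2}", text)

# Step 2: represent it as an XSD-typed literal for a knowledge graph
if match:
    print(repr(Literal(match.group(), datatype=XSD.date)))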
### 5.4 Discussion
With the advent of statistical NL analysers trained on large text corpora, the
area of natural-language processing is currently progressing rapidly. But not
even advanced machine learning and deep neural networks will be able to handle
the more difficult problems of natural-language understanding anytime soon.
Such problems include irony, sarcasm, and metaphorical speech that presume a
shared pragmatic and social understanding between sender and receiver. Current
narrow NL and ML techniques have not yet dealt with these higher levels of
communication, which approach the so far unsolved problem of general
artificial intelligence. On the other hand, emergencies — in particular when
broken down into particular emergency types (avalanche, derailing, fire,
terrorism) — deal with highly specific domains for which precise NL processors
can be trained specifically. Also, during emergencies, people can be expected
to use simple and straightforward language that makes NLP easier, with limited
use of sarcasm, irony, and metaphor.
In the foreseeable future, general NLP will remain useful but inaccurate. In
situations where lives, health, property, and the environment are at stake, we
cannot fully trust the results of even the most accurate NL analysers on the
single-text level. This applies even more strongly to the kind of short and
context-dependent messages people write on social media. Nevertheless, NLP
techniques will remain useful in emergency situations in at least two ways:
* •
They can provide strategic overviews by aggregating analysis results over
collections of many messages, for example by averaging sentiment and emotion
scores and by eliminating concepts and named entities that are not repeated
across messages. They can offer answers to questions like: “In a disaster
area, how does the sentiment of tweets that mention food change over time in
different locations?” The hope is that aggregation of many messages will
cancel or straighten out single-text analysis errors, but some bias may always
remain.
* •
They can suggest potentially actionable insights by identifying single
messages or groups of messages that may contain important tactical
information, such as a rapidly approaching fire front, a gas leak, or an
entrapment. Semantically categorising a single message as a distress call may
not alone justify directing a rescuer or medical worker to a dangerous spot.
But it can act as a trigger for further information gathering by automatic or
manual means. And it can act as one of several indicators that aid tactical
operation leaders in making the best possible decisions based on the available
information.
## 6 Using a Sentiment Analyser
A wide range of tools support both sentiment analysis and other NLP
techniques. They are available as online services, as downloadable programs,
or as APIs that can be used from programming languages such as Python, Java,
Scala, and R. Most of them bundle several different analysis techniques
together in a single interface.
We will look more closely at the NLP component of IBM's Watson platform (a
free online demo is available at
http://natural-language-understanding-demo.ng.bluemix.net/, but you must
register with IBM Watson to get your own API key). Through a web interface,
the user enters either a plain text or the URL of a web page. In response, the
following features are returned:
* •
Keywords and phrases, ranked by their relevance.
* •
Sentiment of the text as a whole and for the specific keywords and phrases it
contains.
* •
Emotions, such as joy, anger, disgust, sadness, and fear, both for the text as
a whole and for specific keywords and phrases.
* •
Named entities, such as people, companies, organisations, cities, and
geographic features, along with their types, relevance, and occurrence counts.
* •
Concepts that are related to the text, but that may not be explicitly
mentioned in it, ranked by their relevance scores.
* •
Categories selected from a fixed taxonomy and ranked by their relevance
scores: IBM Watson’s taxonomy is up to five levels deep with more than a
thousand leaf nodes and 23 top categories, such as education, finance, news,
science, shopping, and sports.
* •
Semantic roles that break sentences down into their grammatical and semantic
parts.
Overall sentiment is scored in the [-1, 1] range, whereas emotions and
relevance are [0, 1]-scored. The results are returned in a human-readable web
page or as machine-readable JSON. For example, the results of sentiment and
emotion analysis may look like this in JSON format:
{
"sentiment": {
"document": {
"score": 0,
"label": "neutral"
}
},
"emotion": {
"document": {
"emotion": {
"sadness": 0.029943,
"joy": 0.056795,
"fear": 0.025568,
"disgust": 0.034639,
"anger": 0.549087
}
}
}
}
Of course, the analyser can be accessed through API calls as well, e.g., from
a Python program or from a terminal window using the command-line tool curl:
curl -X POST -u "apikey:{your-apikey}" \
"https://{your-api}/analyze?version={your-version}" \
--header "Content-Type: application/json" \
--data '{
"text": "Wildfires rage in Arctic Circle as
Sweden calls for help",
"features": {
"sentiment": {},
"concepts": {},
"entities": {}
}
}'
This command will return JSON results about sentiments, concepts, and entities
found in the given newspaper headline. If possible, it will also return a
DBpedia IRI for each concept and entity. More specific results can be
requested using additional arguments, but a single headline usually contains
too little context information to be accurately lifted.
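The same call can be made from Python with the requests library (all
placeholders as in the curl example above):

import requests

API_URL = "https://{your-api}/analyze"   # placeholder endpoint
APIKEY = "{your-apikey}"                 # placeholder credential

response = requests.post(
    API_URL,
    params={"version": "{your-version}"},
    auth=("apikey", APIKEY),             # same basic auth as curl -u
    json={
        "text": "Wildfires rage in Arctic Circle as Sweden calls for help",
        "features": {"sentiment": {}, "concepts": {}, "entities": {}},
    },
)
print(response.json())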
There is a wide range of similar natural language analysers available,
differing mostly in precision and in the range of analyses, metrics, and
languages they support. For example, DBpedia Spotlight (a three-language demo
is available at https://www.dbpedia-spotlight.org/demo/) returns DBpedia IRIs
for topics and named entities found in texts in 12 major languages (Mendes et
al, 2011). The code is open and can be trained and tailored to other languages
and more specific domains, such as particular types of emergency situations.
The BabelNet analyser (http://live.babelnet.org/)
returns IRIs for topics and named entities in BabelNet, a multi-lingual
version of WordNet. NLP services that leverage next-generation NL analysers
trained on large text corpora are also appearing. It is likely that the
quality of NL analysis tools will continue to improve as word embedding
becomes better understood and more neural-network based text-analysis APIs and
services become available.
## Exercises
1. 1.
What are RDF, RDFS, OWL, and SPARQL?
2. 2.
What is a knowledge graph (RDF graph)?
3. 3.
Outline the following knowledge graph: _Tim Berners-Lee is a person and an
author. He has authored a book with title “Weaving the Web”, published in
2001. Another person, Mark Fischetti, is co-author of this book, which has ISBN
0756752310._
4. 4.
What are the benefits of knowledge graphs in an emergency situation?
5. 5.
And what are the main challenges?
6. 6.
What is LOD? Give examples of LOD resources that can be useful for emergency
management. Where can you go to find more?
7. 7.
What is a vocabulary in connection with RDFS and OWL? Why are vocabularies
important?
8. 8.
Give examples of vocabularies that can be useful for emergency management.
Where can you find more?
9. 9.
What is TF-IDF?
10. 10.
What is LDA?
11. 11.
What are the main steps in natural-language processing?
12. 12.
What is a sentiment analyser? Explain its typical outputs.
## References
* Allemang and Hendler (2011) Allemang D, Hendler J (2011) Semantic web for the working ontologist: effective modeling in RDFS and OWL. Elsevier
* Atzori et al (2010) Atzori L, Iera A, Morabito G (2010) The Internet of Things: A survey. Computer Networks 54(15):2787–2805, DOI 10.1016/j.comnet.2010.05.010, URL http://linkinghub.elsevier.com/retrieve/pii/S1389128610001568
* Auer et al (2007) Auer S, Bizer C, Kobilarov G, Lehmann J, Cyganiak R, Ives Z (2007) Dbpedia: A nucleus for a web of open data. In: The semantic web, Springer, pp 722–735
* Auer et al (2009) Auer S, Lehmann J, Hellmann S (2009) Linkedgeodata: Adding a spatial dimension to the web of data. Springer, pp 731–746
* Berners-Lee et al (2001) Berners-Lee T, Hendler J, Lassila O (2001) The semantic web. Scientific american 284(5):34–43
* Bizer et al (2009a) Bizer C, Heath T, Berners-Lee T (2009a) Linked data-the story so far. International journal on semantic web and information systems 5(3):1–22
* Bizer et al (2009b) Bizer C, Lehmann J, Kobilarov G, Auer S, Becker C, Cyganiak R, Hellmann S (2009b) DBpedia-A crystallization point for the Web of Data. Web Semantics: science, services and agents on the world wide web 7(3):154–165
* Castillo (2016) Castillo C (2016) Big crisis data: Social media in disasters and time-critical situations. Cambridge University Press
* Ermilov et al (2013) Ermilov I, Martin M, Lehmann J, Auer S (2013) Linked open data statistics: Collection and exploitation. In: Proc. International Conference on Knowledge Engineering and the Semantic Web, Springer, pp 242–249
* Goldberg and Levy (2014) Goldberg Y, Levy O (2014) word2vec Explained: deriving Mikolov et al.’s negative-sampling word-embedding method. arXiv:14023722 [cs, stat] URL http://arxiv.org/abs/1402.3722, arXiv: 1402.3722
* Krizhevsky et al (2012) Krizhevsky A, Sutskever I, Hinton GE (2012) Imagenet classification with deep convolutional neural networks. In: Advances in neural information processing systems, pp 1097–1105
* Le and Mikolov (2014) Le Q, Mikolov T (2014) Distributed representations of sentences and documents. In: International conference on machine learning, pp 1188–1196
* McCrae et al (2018) McCrae JP, Cyganiak R, Bizer C (2018) The Linked Open Data Cloud. URL http://lod-cloud.net/
* Mendes et al (2011) Mendes PN, Jakob M, García-Silva A, Bizer C (2011) DBpedia spotlight: shedding light on the web of documents. ACM, pp 1–8
* Mikolov et al (2013) Mikolov T, Chen K, Corrado G, Dean J (2013) Efficient Estimation of Word Representations in Vector Space. arXiv:13013781 [cs] URL http://arxiv.org/abs/1301.3781, arXiv: 1301.3781
* Miller (1995) Miller GA (1995) Wordnet: a lexical database for english. Communications of the ACM 38(11):39–41
* Müller et al (2016) Müller AC, Guido S, et al (2016) Introduction to machine learning with Python: a guide for data scientists. O’Reilly Media, Inc.
* Pak and Paroubek (2016) Pak A, Paroubek P (2016) Twitter as a Corpus for Sentiment Analysis and Opinion Mining. IJARCCE 5(12):320–322, DOI 10.17148/IJARCCE.2016.51274, URL http://ijarcce.com/upload/2016/december-16/IJARCCE%2074.pdf
* Pang and Lee (2008) Pang B, Lee L (2008) Opinion mining and sentiment analysis. Foundations and Trends® in Information Retrieval 2(1–2):1–135
* Pennington et al (2014) Pennington J, Socher R, Manning C (2014) Glove: Global vectors for word representation. In: Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pp 1532–1543
* Ristoski and Paulheim (2016) Ristoski P, Paulheim H (2016) Rdf2vec: Rdf graph embeddings for data mining. In: International Semantic Web Conference, Springer, pp 498–514
* Sebastiani (2002) Sebastiani F (2002) Machine learning in automated text categorization. ACM computing surveys (CSUR) 34(1):1–47
* Shadbolt et al (2006) Shadbolt N, Berners-Lee T, Hall W (2006) The Semantic Web Revisited. IEEE Intelligent Systems 21(3):96–101, DOI 10.1109/MIS.2006.62, URL http://ieeexplore.ieee.org/document/1637364/
* Stadler et al (2012) Stadler C, Lehmann J, Höffner K, Auer S (2012) LinkedGeoData: A core for a web of spatial open data. Semantic Web 3(4):333–354
* Vandenbussche et al (2017) Vandenbussche PY, Atemezing GA, Poveda-Villalón M, Vatant B (2017) Linked Open Vocabularies (LOV): a gateway to reusable semantic vocabularies on the Web. Semantic Web 8(3):437–452
* Wong et al (2012) Wong W, Liu W, Bennamoun M (2012) Ontology learning from text: A look back and into the future. ACM Computing Surveys 44(4):1–36, DOI 10.1145/2333112.2333115, URL http://dl.acm.org/citation.cfm?doid=2333112.2333115
## Acknowledgement
This is a pre-print of the following chapter: Opdahl, A. L., “Knowledge Graphs
and Natural-Language Processing”, published in Big Data in Emergency
Management: Exploitation Techniques for Social and Mobile Data, edited by
Rajendra Akerkar, 2020, Springer International Publishing, reproduced with
permission of Springer International Publishing. The final authenticated
version is available online at: http://dx.doi.org/10.1007/978-3-030-48099-8.
Opdahl, A. L. (2020). Knowledge Graphs and Natural-Language Processing. In Big
Data in Emergency Management: Exploitation Techniques for Social and Mobile
Data (pp. 75-91). Springer, Cham.
# DMRG study of exciton condensation in the extended Falicov-Kimball model
P. Farkašovský
(Received May 15, 2020, in final form August 13, 2020)
###### Abstract
The formation and condensation of excitonic bound states of conduction-band
electrons and valence-band holes surely belongs to one of the most exciting
ideas of contemporary solid state physics. In this short review we present the
latest progress in this field reached by the density-matrix-renormalization-
group (DMRG) calculations within various extensions of the Falicov-Kimball
model. Particular attention is paid to a description of crucial mechanisms
(interactions) that affect the stability of the excitonic phase, namely:
(i) the interband $d$-$f$ Coulomb interaction, (ii) the $f$-electron hopping,
(iii) the nonlocal hybridization with odd and even parity, (iv) combined
effects of the local and nonlocal hybridization, (v) the nearest-neighbor
Coulomb interaction between $d$ and $f$ electrons and (vi) the correlated
hopping. The relevance of numerical results obtained within different
extensions of the Falicov-Kimball model for the description of real $d$-$f$
materials is widely discussed.
Key words: Falicov-Kimball model, quantum condensates, one-dimensional systems
## 1 Introduction
The formation of excitonic quantum condensates is an intensively studied,
long-standing problem in condensed matter physics [1, 2, 3, 4]. Although
theoretically predicted a long time ago [5], the existence of excitonic
condensation has not yet been conclusively proven experimentally. However,
the latest experimental studies of materials with strong electronic
correlations showed that promising candidates for the experimental
verification of the excitonic condensation could be TmSe0.45Te0.55 [6, 7],
$1T$-TiSe2 [8, 9, 10, 11], Ta2NiSe5 [12], or a double bilayer graphene system
[13]. In this regard, the mixed valence compound TmSe0.45Te0.55 was argued to
exhibit a pressure-induced excitonic instability, related to an anomalous
increase in the electrical resistivity [6, 7]. In particular, detailed studies
of the pressure-induced semiconductor-semimetal transition in this material
[based on the Hall effect, electrical and thermal (transport) measurements]
showed that excitons are created in a large quantity and condense below 20 K.
On the other hand, in the layered transition-metal dichalcogenide $1T$-TiSe2,
a BCS-like electron-hole pairing was considered as the driving force for the
periodic lattice distortion [8, 9, 10, 11]. Moreover, quite recently, the
excitonic-insulator state was probed by angle-resolved photoelectron
spectroscopy in the semiconducting Ta2NiSe5 compound [12]. These results have
stimulated further experimental and theoretical studies with regard to the
formation and possible condensation of excitonic bound states of electron and
holes in correlated systems. At present, it is generally accepted that the
minimal theoretical model for a description of excitonic correlations in these
materials could be the Falicov-Kimball model [14] and its extensions which
were successfully used in the past years to test the exciting idea of
electronic ferroelectricity [15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25] that
is directly related with the formation of an excitonic insulator [26, 27, 28,
29, 30, 31, 32, 33, 34, 35]. In its original form, the Falicov-Kimball model
describes a two-band system of localized $f$ electrons and itinerant $d$
electrons with short-ranged $f$-$d$ Coulomb interaction $U$:
$H_{0}=\sum_{ij}t_{ij}d^{+}_{i}d_{j}+U\sum_{i}f^{+}_{i}f_{i}d^{+}_{i}d_{i}+E_{f}\sum_{i}f^{+}_{i}f_{i}\,,$
(1.1)
where $f^{+}_{i}$, $f_{i}$ are the creation and annihilation operators for an
electron in the localized state at lattice site $i$ with the binding energy
$E_{f}$ and $d^{+}_{i}$, $d_{i}$ are the creation and annihilation operators
of the itinerant spinless electrons in the $d$-band Wannier state at site $i$.
The first term of (1.1) is the kinetic energy corresponding to quantum-
mechanical hopping of the itinerant $d$ electrons between sites $i$ and $j$.
These intersite hopping transitions are described by the matrix elements
$t_{ij}$, which are $-t_{d}$ if $i$ and $j$ are the nearest neighbors and zero
otherwise (in what follows all parameters are measured in units of $t_{d}$).
The second term represents the on-site Coulomb interaction between the
$d$-band electrons with density $n_{d}=\frac{1}{L}\sum_{i}d^{+}_{i}d_{i}$ and
the localized $f$ electrons with density
$n_{f}=\frac{1}{L}\sum_{i}f^{+}_{i}f_{i}$, where $L$ is the number of lattice
sites. The third term stands for the localized $f$ electrons whose sharp
energy level is $E_{f}$.
Since, in this simple model, the local occupation number $f^{+}_{i}f_{i}$
commutes with the total Hamiltonian of the system, the local $f$-electron
number is a strictly conserved quantity and thus the $d$-$f$ electron
coherence cannot be established in such a system. If hybridization
$H_{V}=V\sum_{i}(d^{+}_{i}f_{i}+f^{+}_{i}d_{i})$ between both bands is included,
the $f$ charge occupation is no longer a good quantum number, and it is
possible to build coherence between $d$ and $f$ electrons. Hybridization
between the itinerant $d$ and localized $f$ states, however, is not the only
way to develop $d$-$f$ coherence. Theoretical works of Batista et al. [22, 23]
showed that the ground state with a spontaneous electric polarization can also
be induced by the nearest-neighbor $f$-electron hopping
$H_{t_{f}}=-t_{f}\sum_{<i,j>}f^{+}_{i}f_{j}$, but only for dimensions $D>1$.
In the strong-coupling limit, this result was proven by mapping the extended
Falicov-Kimball model onto the $xxz$ spin-1/2 model with a magnetic field
along the $z$-direction, while in the intermediate-coupling regime the
ferroelectric state was identified numerically by the constrained-path Monte
Carlo (CPMC) technique. Based on these results, the authors postulated the following
conditions that favour the formation of the electronically driven
ferroelectric state: (a) The system must be in a mixed-valence regime and the
two bands involved must have different parity. (b) It is best, though not
necessary, if both bands have similar bandwidths. (c) A local Coulomb
repulsion ($U$) between the different orbitals is required.
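Because the local $f$ occupation numbers commute with $H_{0}$, the $d$ sector of (1.1) reduces, for any fixed $f$ configuration $\{w_{i}\}$, to a single-particle problem. The following minimal numpy sketch (our illustration with hypothetical helper names, not code from the reviewed works) diagonalizes that $d$ sector on a small open chain; minimizing the resulting energy over all $f$ configurations then yields the exact ground state of (1.1) on small clusters.
```python
import numpy as np

def d_band_spectrum(f_config, U, t_d=1.0):
    """Single-particle d-electron spectrum of (1.1) for a fixed f-electron
    configuration w_i in {0, 1} on an open chain; valid because the local
    f occupations commute with H_0.  Energies are in units of t_d."""
    L = len(f_config)
    H = np.zeros((L, L))
    for i in range(L - 1):
        H[i, i + 1] = H[i + 1, i] = -t_d   # nearest-neighbour hopping
    H += U * np.diag(f_config)             # on-site d-f repulsion U
    return np.linalg.eigvalsh(H)           # ascending eigenvalues

# Ground-state energy at E_f = 0 with N_d = L/4 d electrons for an
# alternating (CDW-like) f configuration on L = 8 sites.
L, U = 8, 2.0
w = np.array([1, 0] * (L // 2))
eps = d_band_spectrum(w, U)
E0 = eps[: L // 4].sum()                    # fill the lowest N_d levels
print(f"E0 = {E0:.4f}")
```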
Later on, this model was extensively used to describe different phases in the
ground state and special properties of the excitonic phase [26, 27, 28, 29,
30, 31, 32]. It was found that the ground state phase diagram exhibits a very
simple structure consisting of only four phases, and namely, the full $d$ and
$f$ band insulator (BI), the excitonic insulator (EI), the charge-density-wave
(CDW) and the staggered orbital order (SOO). The EI is characterized by a
nonvanishing $\langle d^{+}f\rangle$ average. The CDW is described by a
periodic modulation in the total electron density of both $f$ and $d$
electrons, and the SOO is characterized by a periodic modulation in the
difference between the $f$ and $d$ electron densities.
In this article we focus our attention on the properties of the EI phase
induced by local hybridization $V$ in one dimension. Although it is
generally known that there is no nonvanishing $P_{df}=\langle d^{+}f\rangle$
expectation value in the limit of vanishing hybridization (no spontaneous
hybridization), the studies that we performed in the past years on various
extensions of the original Falicov-Kimball model showed that it is possible to
dramatically enhance excitonic correlations in the limit of small, but finite
$V$ by additional interactions/factors [36, 37, 38, 39]. The effects of the most
important interactions are discussed in this review. In particular, these are:
(i) the interband $d$-$f$ Coulomb interaction, (ii) the $f$-electron hopping,
(iii) the nonlocal hybridization with odd and even parity, (iv) combined
effects of the local and nonlocal hybridization, (v) the nearest-neighbor
Coulomb interaction between $d$ and $f$ electrons and (vi) the correlated
hopping. The main goal of this review is not to examine the possibilities of
spontaneous symmetry breaking (a spontaneous hybridization) in various
extensions of the Falicov-Kimball model, but to show how these extensions
(different interaction terms) influence the properties of the excitonic phase
induced by local hybridization. All presented results were obtained within the
density-matrix-renormalization-group (DMRG) method, where we typically keep up
to 500 states per block, although in the numerically more difficult cases
(where the DMRG results converge more slowly), we keep up to 1000 states.
Truncation errors [40], given by the sum of the density-matrix eigenvalues of
the discarded states, vary from $10^{-6}$ in the worst cases to zero in the
best cases.
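As an aside, the truncation step behind these numbers can be sketched in a few lines of numpy (a toy illustration under assumed naming, not the production DMRG code): the reduced density-matrix eigenvalues are the squared Schmidt coefficients of the target state, and the truncation error is the total weight of the discarded ones.
```python
import numpy as np

def dmrg_truncation(psi, dim_sys, dim_env, m_keep):
    """One truncation step in the spirit of White's DMRG [40]: form the
    reduced density matrix of the system block from the target state psi,
    keep the m_keep dominant eigenstates, and return the truncation error
    (the summed weight of the discarded density-matrix eigenvalues)."""
    M = psi.reshape(dim_sys, dim_env)
    # Schmidt decomposition: squared singular values are the eigenvalues
    # of rho_sys = Tr_env |psi><psi|, already sorted in descending order.
    U, s, _ = np.linalg.svd(M, full_matrices=False)
    rho_eigs = s ** 2
    truncation_error = rho_eigs[m_keep:].sum()
    projector = U[:, :m_keep]          # maps the block onto the kept basis
    return projector, truncation_error

# Toy usage: random normalized state over a 32 x 32 bipartition, keeping
# 10 states (the calculations in the text keep 500-1000 states per block).
rng = np.random.default_rng(1)
psi = rng.normal(size=32 * 32)
psi /= np.linalg.norm(psi)
_, err = dmrg_truncation(psi, 32, 32, 10)
print(f"truncation error: {err:.3e}")
```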
## 2 Results and Discussion
### 2.1 Effects of interband Coulomb interaction
Let us start our review with the discussion of effects of the Coulomb
interactions [36]. In this case, the Hamiltonian consists of two terms: $H_{0}$,
which is given by (1.1), and $H_{V}=V\sum_{i}(d^{+}_{i}f_{i}+f^{+}_{i}d_{i})$.
Our DMRG results obtained for the symmetric case $E_{f}=0$ are summarized in
figure 1 a and in figure 1 b where the $P_{df}=\langle d_{i}^{+}f_{i}\rangle$
expectation value is shown as a function of hybridization for several values
of Coulomb interaction $U$ (figure 1 a) and the ratio
$\Delta=P_{df}(U)/P_{df}(U=0)$ for several values of $V$ (figure 1 b). Figure
1 a clearly demonstrates that there is no nonvanishing $\langle
d^{+}f\rangle$-expectation value in the limit of vanishing hybridization for
all examined values of $U$. At the same time, these data reveal an important
feature of the model and namely that the $P_{df}$ expectation value is
dramatically enhanced with increasing $U$ in comparison to the noninteracting
case. This is explicitly shown in figure 1 b where the ratio of the
interacting $P_{df}(U)$ and non-interacting $P_{df}(U=0)$ excitonic average is
plotted for several selected values of hybridization.
Figure 1: (Colour online) a) The hybridization dependence of the
$d$-$f$-excitonic average $P_{df}=\langle d^{+}_{i}f_{i}\rangle$ in the
extended Falicov-Kimball model calculated for six different values of $U$ and
two different values of $L$. The symmetric case $E_{f}=0$. b) The ratio of the
interacting $P_{df}(U)$ and non-interacting $P_{df}(U=0)$ excitonic average as
a function of $U$ calculated for several selected values of local
hybridization $V$ on the cluster of $L=100$ sites [36].
For all examined values of $V$, the ratio $\Delta=P_{df}(U)/P_{df}(U=0)$ rapidly
increases with increasing interband Coulomb interaction $U$ from its initial
value $\Delta=1$ to its saturated value $\Delta=\Delta_{s}$, which also
dramatically increases with decreasing $V$. Indeed, while $\Delta_{s}\sim 7$
for $V=0.05$, its value increases up to $\sim 200$ for $V=0.002$. This result
is very important from the point of view of real rare-earth materials with $d$
and $f$ electrons. In these materials the local hybridization is usually
forbidden due to the crystal symmetry and thus the $d$-$f$ coherence cannot be
established. However, according to our results, any infinitesimal
hybridization, induced by some additional mechanism, could lead to a robust
excitonic average due to the interband Coulomb interaction. Such an additional
mechanism could be, for example, the electron-phonon interaction $H_{\text{el-
ph}}$ that can be reduced to the phonon-mediated local hybridization
(electron-electron interactions) by the standard canonical transformation of
the form $\mathrm{e}^{S}H\mathrm{e}^{-S}$, where the operator $S$ is
determined so that $H_{\text{el-ph}}=-[S,H_{\text{loc}}]$, with $H_{\text{loc}}$
comprising all local terms corresponding to the $f$ and $d$ electrons and the
phonons [41].
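To spell out the standard step implied here (our notation, not given in the source beyond the defining condition): expanding $\mathrm{e}^{S}H\mathrm{e}^{-S}$ with $H=H_{\text{loc}}+H_{\text{el-ph}}$ and using $[S,H_{\text{loc}}]=-H_{\text{el-ph}}$ gives
$\mathrm{e}^{S}H\mathrm{e}^{-S}=H+[S,H]+\tfrac{1}{2}\,[S,[S,H]]+\dots\approx H_{\text{loc}}+\tfrac{1}{2}\,[S,H_{\text{el-ph}}]\,,$
so the linear electron-phonon term cancels and the residual commutator, second order in the coupling, contains the phonon-mediated local hybridization.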
To examine the nature of the EI state in more detail, we have calculated, in
accordance with [31] and [32], the exciton-exciton correlation function
$\langle b^{+}_{i}b_{j}\rangle$ with $b^{+}_{i}=d^{+}_{i}f_{i}$ and the
excitonic momentum distribution $N(q)=\langle b^{+}_{q}b_{q}\rangle$ with
$b^{+}_{q}=(1/\sqrt{L})\sum_{k}d^{+}_{k+q}f_{k}$. We have found that the
exciton-exciton correlation function $\langle b^{+}_{i}b_{j}\rangle$ exhibits
power-law correlations $|i-j|^{-\alpha}$ (with $\alpha$ between 3 and 4) and
the excitonic momentum distribution $N(q)$ diverges for $q=0$ (see figure 2
a), signaling a Bose-Einstein condensation of preformed excitons. Moreover,
figure 2 b shows that the density of zero-momentum excitons
$n_{0}=\frac{1}{L}N(q=0)$, as well as the total exciton density
$n_{T}=\frac{1}{L}\sum_{q}N(q)$, strongly depend on the value of the Coulomb
interaction $U$, and that already for relatively small values of $U$ ($U\sim
4$) practically all particles are bound into electron-hole pairs, with a
significant fraction of $n_{0}/n_{T}\sim 0.5$ excitons in the zero-momentum
state.
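For reference, $N(q)$ is obtained from the measured real-space correlations by a double Fourier sum, $N(q)=\frac{1}{L}\sum_{ij}\mathrm{e}^{\mathrm{i}q(i-j)}\langle b^{+}_{i}b_{j}\rangle$; a minimal sketch (hypothetical helper name, placeholder correlation matrix) might read:
```python
import numpy as np

def exciton_Nq(C):
    """Excitonic momentum distribution N(q) = <b_q^+ b_q> from the
    real-space correlations C[i, j] = <b_i^+ b_j> on an L-site chain with
    periodic boundary conditions."""
    L = C.shape[0]
    qs = 2.0 * np.pi * np.arange(L) / L
    Nq = np.empty(L)
    for n, q in enumerate(qs):
        phase = np.exp(1j * q * np.arange(L))
        Nq[n] = (phase @ C @ phase.conj()).real / L
    return qs, Nq

# With C measured in DMRG one then obtains the densities used in the text:
# n_0 = N(q=0)/L and n_T = (1/L) sum_q N(q).  Placeholder C for illustration:
L = 20
C = np.fromfunction(lambda i, j: 0.1 ** np.abs(i - j), (L, L))
qs, Nq = exciton_Nq(C)
n0, nT = Nq[0] / L, Nq.sum() / L
```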
Figure 2: (Colour online) a) The excitonic momentum distribution $N(q)$
calculated for different values of $V$ at $U=1,E_{f}=0$ and $L=60$. The inset
shows a divergence of $N(q=0)$ for $L\to\infty$ for three selected values of
$V$. b) The density of zero momentum excitons $n_{0}$ and the total exciton
density $n_{T}$ as functions of $1/L$ calculated for several different values
of $U$ at $V=0.2$ and $E_{f}=0$ [36].
### 2.2 Effects of $f$-electron hopping
With regard to the situation in real materials, where there always exists a
finite overlap of $f$ orbitals on the neighbouring sites, it is interesting to
ask what happens if the $f$-electron hopping
$H_{t_{f}}=-t_{f}\sum_{<i,j>}f^{+}_{i}f_{j}$ is also taken into account [37].
In accordance with some previous theoretical studies, which documented strong
effects of the parity of the $f$ band on the stability of the excitonic phase
[22, 23], we have examined the model for both positive (even-parity) and
negative (odd-parity) values of the $f$-electron hopping integral
$t_{f}$. The results of our non-zero $t_{f}$ DMRG calculations for $n_{0}$ are
displayed in figure 3 and they clearly demonstrate that the zero-momentum
condensate is suppressed in the limit of positive values of $t_{f}$, while it
remains robust for negative values of $t_{f}$.
Figure 3: (Colour online) $n_{0}$ (a) and $n_{T}$ (b) as functions of $t_{f}$
calculated for three different values of $U$ ($E_{f}=0$, $V=0.1$, $L=\infty$)
[37].
This result is intuitively expected since our previous Hartree-Fock (HF)
results [24] showed that only the negative values of $t_{f}$ stabilize the
ferroelectric phase, while the positive values stabilize the antiferroelectric
phase. The effect of $t_{f}$ is especially strong for small $U$ (see figure 3
a), where continuous but very steep changes of $n_{0}$ are observed for
$t_{f}\to 0^{+}$. On the contrary, the total exciton density $n_{T}$ (figure 3
b) exhibits only a weak dependence on the $f$-electron hopping parameter
$t_{f}$ over the whole interval of $t_{f}$ values.
### 2.3 Effects of $f$-level position (pressure)
So far we have presented the results exclusively for $E_{f}=0$. Let us now
briefly discuss the effect of a change of the $f$-level position [37]. This
study is also interesting because, taking into account the parametrization
between the external pressure and the position of the $f$ level ($E_{f}\sim
p$), one can deduce, at least qualitatively, the pressure dependences of the
ground-state characteristics from their $E_{f}$ dependences
[42]. The resultant $E_{f}$ dependences of the density of zero-momentum
excitons $n_{0}$ are shown in figure 4 a for several values of $V$ and
$U=0.5$.
Figure 4: (Colour online) a) $n_{0}$ as a function of $E_{f}$ calculated for
three different values of $V$ ($t_{f}=0,U=0.5,L=\infty$). The inset shows the
density of $d$ electrons $n_{d}$ near $E_{f}=-1.5$. b) $n_{0},n_{T},n_{d}$ and
$n^{\text{un}}_{d}=n_{d}-n_{T}$ as functions of $E_{f}$ calculated for
$t_{f}=0,U=0.5,V=0.1$ and $L=\infty$. The inset shows the behaviour of $n_{0}$
and $n_{T}$ near $E_{f}=-2$ [37].
One can see that the density of zero momentum excitons is nonzero over the
whole interval of $E_{f}$ values. Moreover, we have found that the values of
$n_{0}$ are extremely enhanced in the region near $E_{f}\sim-1.5$, which is
obviously due to a significant enhancement of the $d$ electron population in
the $d$ band (see the inset in figure 4 a).
To describe the process of formation of excitonic bound states with increasing
$E_{f}$ in more detail, we have also plotted in figure 4 b, besides the
density of zero-momentum excitons $n_{0}$, the total exciton density $n_{T}$,
the total $d$-electron density $n_{d}$, and the total density of unbound $d$
electrons $n^{\text{un}}_{d}=n_{d}-n_{T}$. It is seen (see the inset in figure
4 b) that below $E_{f}\sim-1.8$, $n_{0}$ and $n_{T}$ coincide, which means
that the excitonic insulator in this region is practically completely driven
by the condensation of zero-momentum excitons. Above this value, $n_{T}$ starts
to increase sharply, while $n_{0}$ tends to its maximum at $E_{f}\sim-1.3$ and
then gradually decreases to its minimum at $E_{f}=0$. The density of unbound
$d$ electrons $n^{\text{un}}_{d}$ exhibits similar behaviour with increasing
$E_{f}$, though its values are several times larger than $n_{0}$. It is
interesting to note that although the total exciton density $n_{T}$ increases
over the whole interval of $E_{f}$ values, the number of unbound $d$ electrons
remains practically unchanged over a wide range of $E_{f}$ values (from
$E_{f}=-1$ to $E_{f}=1$), since its decrease, due to the formation of excitonic
pairs, is compensated by the increase of $n_{d}(E_{f})$. Thus, we can conclude
that in the pressure-induced case, when
the $f$-level energy shifts up with the applied pressure [42], the model is
capable of describing, at least qualitatively, the increase in the total
density of excitons with external pressure and the increase or decrease
(according to the initial position of $E_{f}$ at ambient pressure) in the
$n_{0}$ and $n^{\text{un}}_{d}$.
### 2.4 Effects of non-local hybridization with inversion symmetry
As already mentioned, from the physics viewpoint, the most interesting case is
that of finite non-local hybridization [37]. The importance of this term is
underlined by the fact that the on-site hybridization $V$ is usually forbidden
in real $d$-$f$ systems for parity reasons. Instead of the on-site
hybridization, one should consider in these materials the non-local
hybridization with inversion symmetry
$V_{i,j}=V_{\text{non}}(\delta_{j,i-1}-\delta_{j,i+1})$ which leads to
$k$-dependent hybridization of the opposite parity that corresponds to the $d$
band [$V_{k}\sim\sin(k)$] [43]. Typical examples of $1/L$ dependence of the
excitonic momentum distribution $N(q=0)$ obtained for three representative
values of the interband Coulomb interaction and two values of $f$-electron
hopping are displayed in figure 5 a and figure 5 b.
Figure 5: (Colour online) $N(0)$ as a function of $1/L$ calculated for three
different values of $U$ and two different values of $t_{f}$: a) $t_{f}=0$, b)
$t_{f}=-0.5$ ($E_{f}=0,V_{\text{non}}=0.1$) [37].
These results clearly demonstrate that there is no sign of divergence in the
$1/L$-dependence of $N(0)$, either for $t_{f}=0$ or for $t_{f}=-0.5$, and
thus no signal of the formation of a Bose-Einstein condensate in the
presence of non-local hybridization with inversion symmetry. Thus, our
results indicate that the class of possible candidates for the appearance of
the Bose-Einstein condensation of excitons in real $d$-$f$ materials is
strongly limited, since the local hybridization is usually forbidden in these
systems for parity reasons and the non-local hybridization with the inversion
symmetry does not support the formation of the Bose-Einstein condensate.
### 2.5 Combined effects of local and non-local hybridization with equal
parity of $d$ and $f$ orbitals
In this situation, the most promising candidates for studying this phenomenon
seem to be the systems with equal parity of $d$ and $f$ orbitals, where the
nonlocal hybridization $H_{\text{n}}$ can be written as [38]:
$H_{\text{n}}=V_{\text{n}}\sum_{\langle i,j\rangle}(d^{+}_{i}f_{j}+H.c.).$
(2.1)
In such systems, the local hybridization $V$ is allowed, and thus one can
examine the combined effects of the local and nonlocal hybridization within
the unified picture. In the weak ($U<1$) and strong ($V\ll U$ and
$V_{\text{n}}\ll U$) coupling limits, the model Hamiltonian
$H_{0}+H_{V}+H_{\text{n}}$ was recently analyzed by Zenker et al. in [44], and
the corresponding mean-field quantum phase diagrams were presented as
functions of the model parameters $U,V,V_{\text{n}}$ and $E_{f}$ for the half-
filled band case $n_{f}+n_{d}=1$ and $D=2$. Moreover, examining the effects of
the local $V$ and nonlocal $V_{\text{n}}$ hybridization, they found that in
the pseudospin space
($c^{+}_{i\uparrow}=d^{+}_{i}$,$c^{+}_{i\downarrow}=f^{+}_{i}$), the nonlocal
hybridization $V_{\text{n}}$ favors the staggered Ising-type ordering along
the $x$ direction, while $V$ favors a uniform polarization along the $x$
direction and the staggered Ising-type ordering along the $y$ direction. In
our paper [38] we have examined the model for arbitrary $V$ and $V_{\text{n}}$
and, unlike the paper of Zenker et al. [44], we have focused our attention
primarily on a description of the process of formation and condensation of
excitonic bound states.
Let us discuss the results obtained for $n_{0}=\frac{1}{L}N(q=0)$,
$n_{\piup}=\frac{1}{L}N(q=\piup)$, $n_{d}$ and $n^{\text{un}}_{d}$ as
functions of the $f$-level position $E_{f}$, which can give us, at least
qualitatively, the answer to the very important question of how
these quantities change with the applied pressure $p$. In figure 6 we present
the resultant behaviours of $n_{0},n_{\piup},n_{d},n^{\text{un}}_{d}$ as
functions of the $f$-level position $E_{f}$ obtained by the DMRG method for
$V=0.2$ and several different values of $V_{\text{n}}$.
Figure 6: (Colour online) The density of zero-momentum excitons $n_{0}$ (a),
the density of $\piup$-momentum excitons $n_{\piup}$ (b), the total
$d$-electron density $n_{d}$ (c), and the total density of unbound $d$
electrons $n^{\text{un}}_{d}=n_{d}-n_{T}$ (d) as functions of $E_{f}$
calculated for $U=4,V=0.2,L=60$ and six different values of $V_{\text{n}}$
[38].
In all examined cases, the density of zero-momentum excitons is the most
significantly enhanced for $d$-electron densities near the half-filled band
case $E_{f}=0$ and $n_{d}=1/2$. The changes of $n_{0}$ are gradual for
$E_{f}<0$ and very steep, but still continuous, for $E_{f}>0$. The density of
$\piup$-momentum excitons $n_{\piup}$ exhibits a fully different behaviour.
Its enhancement with increasing $E_{f}$ is practically negligible for
$E_{f}<0$, but above this value $n_{\piup}$ starts to increase sharply and
tends to its saturation value corresponding to the fully occupied $d$ band,
$n_{d}\sim 1$. The density of unbound $d$ electrons $n^{\text{un}}_{d}$
exhibits a very simple behaviour for $E_{f}<0$. In this limit,
$n^{\text{un}}_{d}$ gradually increases with increasing $E_{f}$ for all
examined values of nonlocal hybridization $V_{\text{n}}$. However, in the
opposite case ($E_{f}>0$), the density of unbound $d$ electrons
$n^{\text{un}}_{d}$ behaves fully differently for $V_{\text{n}}<V^{c}_{n}$ and
$V_{\text{n}}>V^{c}_{n}$, where $V^{c}_{n}\sim 0.2$. For
$V_{\text{n}}<V^{c}_{n}$, the density of unbound $d$ electrons
$n^{\text{un}}_{d}$ gradually decreases with an increasing $E_{f}$ and tends
to zero when $E_{f}$ approaches the upper edge of the noninteracting band
$E_{f}=2$, but in the opposite limit the density of unbound $d$ electrons
$n^{\text{un}}_{d}$ decreases over the interval of $E_{f}$ values from
$E_{f}=0$ to $E^{c}_{f}(V_{\text{n}})$ and starts to increase again for
$E_{f}>E^{c}_{f}(V_{\text{n}})$.
Figure 7: (Colour online) The inverse value of the density of unbound
$d$-electrons $n^{\text{un}}_{d}$ as a function of the $f$-level energy
$E_{f}$ calculated for $U=4,V=0.2,V_{\text{n}}=0.2$ and $L=\infty$ [38]. The
inset shows the resistivity as the function of pressure in TmSe0.45Te0.55 at
4.2 K [6].
Taking into account the above mentioned parametrization between $E_{f}$ and
the external pressure $p$, as well as the fact that the electrical
conductivity is proportional to the density of unbound electrons
$n^{\text{un}}_{d}$ (and the electrical resistivity to $1/n^{\text{un}}_{d}$),
the results discussed above could have very important physical consequences.
Indeed, in figure 7 we have plotted the quantity $1/n^{\text{un}}_{d}$ (on a
logarithmic scale) as a function of $E_{f}$ and compared it with experimental
measurements of the pressure dependence of the electrical resistivity in the
mixed valence compound TmSe0.45Te0.55 (see the inset in figure 7). One can see
that there is a nice qualitative agreement between our theoretical
predictions and the experimental results of Wachter et al. [6]. In spite of the
fact that our model is in many aspects very simplified, the physics that could
lead to the unusual behaviour of the
electrical resistivity in TmSe0.45Te0.55 under the external pressure seems to
be clear. This is a result of the formation and condensation of excitonic
bound states of conduction-band electrons and valence-band holes.
### 2.6 Effects of non-local Coulomb interactions
The above discussed results show that the Falicov-Kimball model has a great
potential to describe some of the anomalous features of real complex materials
such as rare-earth compounds. On the other hand, it should be noted that the
original version of the model, as well as its extensions discussed above,
represent too crude an approximation of real rare-earth compounds, since they
neglect all nonlocal Coulomb interactions, which can change this picture. For a
correct description of these materials one should take into account at least
the following nonlocal Coulomb interaction terms [39]:
$H_{\text{non}}=U_{dd}\sum_{<ij>}n^{d}_{i}n^{d}_{j}+U_{df}\sum_{<ij>}n^{d}_{i}n^{f}_{j}+U_{ff}\sum_{<ij>}n^{f}_{i}n^{f}_{j}+U_{ch}\sum_{<ij>}d^{+}_{i}d_{j}(n^{f}_{i}+n^{f}_{j}),$
(2.2)
which represent the nearest-neighbour Coulomb interaction between two $d$
electrons (the first term), between one $d$ and one $f$ electron (the second
term), between two $f$ electrons (the third term) and the so-called correlated
hopping (the last term).
There are a number of papers where the influence of individual interaction
terms from (2.2) on the ground-state properties of the Falicov-Kimball model
has been studied. However, there are only a few where the combined effects of
two or three terms were considered. Among the papers dealing with the
influence of individual interactions, let us mention the work [45] (and
references therein) where the effects of nonlocal interaction between $d$ and
$f$ electrons are examined and the excellent papers of Shvaika et al. [46, 47,
48] where rigorous results for the influence of the correlated hopping on the
thermodynamical functions were derived within the local approach and then used
for a description of various physical problems. Among the papers dealing with
combined effects of two or three terms, let us mention the works [49, 50] (and
references therein). From this point of view, the model Hamiltonian
$H=H_{0}+H_{V}+H_{\text{non}}$ considered here represents one of the most
complex extensions of the Falicov-Kimball model used for a description of
ground state properties of strongly correlated systems. Here, we focus our
attention exclusively on a discussion of two main problems, and namely, the
process of formation and condensation of excitonic bound states and the
problem of valence transitions in the generalized Falicov-Kimball model. To
simplify numerical calculations, we adopt here the parametrization
$U_{dd}=U_{ff}=U_{df}=U_{nn}$, which allows us to reduce the number of model
parameters and at the same time to keep all nonlocal interaction terms
nonzero. The physically most interesting case corresponds to the situation
where both ($U_{nn}$ as well as $U_{ch}$) interactions are switched on
simultaneously and numerical results for this case are summarized in figure 8.
Figure 8: (Colour online) $n_{0},n_{d},n_{T}$ and
$n^{\text{un}}_{d}=n_{d}-n_{T}$ as functions of $E_{f}$ calculated for four
different values of $U_{ch}$ ($U_{ch}=0,0.2,0.4,0.5$) at
$U_{nn}=U_{ch},U=1,V=0.1,L=100$ and $n_{f}+n_{d}=1$ [39].
One can see that the combined effects of non-local interactions lead to a
number of interesting results: (i) strong suppression of the zero-momentum
condensate in the region of $E_{f}$ where $n_{d}\sim 0.5$, (ii) stabilization
of the intermediate phase with $n_{d}\sim 0.5$ for increasing $U_{nn}=U_{ch}$,
(iii) strong enhancement of the total density of unbound $d$ electrons
$n^{\text{un}}_{d}$ with an increase of $U_{nn}=U_{ch}$, (iv) stabilization of
the zero-momentum condensate for some values of the $f$-level energy $E_{f}$ in
the weak-coupling limit $U_{nn}=U_{ch}\sim 0.2$, (v) appearance of
discontinuous valence transitions for sufficiently large values of
$U_{nn}=U_{ch}\sim 0.4$, and (vi) discontinuous disappearance of the density of
zero-momentum excitons, as well as discontinuous changes in the total density
of excitons $n_{T}$ and the total density of unbound $d$ electrons
$n^{\text{un}}_{d}$ at the valence-transition points.
The appearance of discontinuous changes in some ground-state observables, such
as the density of conduction $d$ (valence $f$) electrons, the density of the
zero-momentum condensate, and the density of unbound electrons, is a very
important result from the point of view of rare-earth compounds. In some of
them, e.g., the mixed valence system SmS, such discontinuous changes are
experimentally observed in the density of valence electrons when external
hydrostatic pressure is applied [51], though they have not been satisfactorily
described so far. Indeed, as mentioned above, the SmS compound is a mixed
valence system,
with fluctuating valence and thus for its description one should take into
account the hybridization between the localized $f$ and conduction $d$
electron states. However, more reliable methods, such as the alloy-analog
approximation [52], the renormalization-group method [53], and the exact-
diagonalization method [54], predict only continuous valence transitions
within the Falicov-Kimball model extended by local hybridization. Here, we
show that, considering the parametrization between the external pressure $p$
and the $f$-level position $E_{f}$, pressure-induced discontinuous valence
transitions can be generated in such a system as well, under the very
realistic assumption that nonlocal interactions are switched on. This
opens up a new route to the understanding of various ground-state anomalies
observed in the rare-earth compounds within the unified picture.
Finally, it should be noted that although all the results presented in this
review were obtained for the one-dimensional case, their validity is probably
much more general. Indeed, a direct comparison of our one-dimensional DMRG and
two-dimensional Hartree-Fock results [37], obtained for the density of zero-
momentum excitons as a function of $t_{f}$ and $E_{f}$, revealed only a weak
dependence of $n_{0}$ on the system dimension, indicating a possible extension
of our one-dimensional DMRG results to real two- and three-dimensional systems.
Moreover, in the two-dimensional case, we can completely switch off the local
hybridization, since in this case the excitonic condensate can be generated by
other terms (the $f$-electron hopping), modelling more realistically the
situation in rare-earth compounds.
## Acknowledgements
This work was supported by projects VEGA 2-0112-18, APVV-17-0020, ITMS
2220120047, ITMS 26230120002 and IMTS 26210120002.
## References
* [1] Blatt J.M., Böer K.W., Brandt W., Phys. Rev., 1962, 126, 1691, doi:10.1103/PhysRev.126.1691.
* [2] Keldysh L.V., Kopaev H.Y.V., Sov. Phys. Solid State, 1965, 6, 2219.
* [3] Moskalenko S.A., Snoke D.W., Bose-Einstein Condensation of Excitons and Biexcitons, Cambridge Univ. Press, Cambridge, 2000.
* [4] Littlewood P.B., Eastham P.R., Keeling J.M.J., Marchetti F.M., Simons D.B., Szymanska M.H., J. Phys.: Condens. Matter, 2004, 16, S3597, doi:10.1088/0953-8984/16/35/003.
* [5] Des Cloizeaux J., J. Phys. Chem. Solids, 1965, 26, 259, doi:10.1016/0022-3697(65)90153-8.
* [6] Neuenschwander J., Wachter P., Phys. Rev. B, 1990, 41, 12693, doi:10.1103/PhysRevB.41.12693.
* [7] Bucher B., Steiner P., Wachter P., Phys. Rev. Lett., 1991, 67, 2717, doi:10.1103/PhysRevLett.67.2717.
* [8] Monney C., Monney G., Aebi P., Beck H., Phys. Rev. B, 2012, 85, 235150, doi:10.1103/PhysRevB.85.235150.
* [9] Zenker B., Fehske H., Beck H., Monney C., Bishop A.R., Phys. Rev. B, 2013, 88, 075138, doi:10.1103/PhysRevB.88.075138.
* [10] Monney G., Monney C., Hildebrand B., Aebi P., Beck H., Phys. Rev. Lett., 2015, 114, 086402, doi:10.1103/PhysRevLett.114.086402.
* [11] Watanabe H., Seki K., Yunoki S., Phys. Rev. B, 2016, 91, 205135, doi:10.1103/PhysRevB.91.205135.
* [12] Wakisaka I., Sudayama T., Takubo K., Mizokawa T., Arita M., Namatame H., Taniguchi M., Katayama N., Nohara M., Takagi H., Phys. Rev. Lett., 2009, 103, 026402, doi:10.1103/PhysRevLett.103.026402.
* [13] Perali A., Neilson D., Hamilton A.R., Phys. Rev. Lett., 2013, 110, 146803, doi:10.1103/PhysRevLett.110.146803.
* [14] Falicov L.M., Kimball J.C., Phys. Rev. Lett., 1969, 22, 997, doi:10.1103/PhysRevLett.22.997.
* [15] Portengen T., Östreich T., Sham L.J., Phys. Rev. Lett., 1996, 76, 3384, doi:10.1103/PhysRevLett.76.3384.
* [16] Portengen T., Östreich T., Sham L.J., Phys. Rev. B, 1996, 54, 17452, doi:10.1103/PhysRevB.54.17452.
* [17] Czycholl G., Phys. Rev. B, 1999, 59, 2642, doi:10.1103/PhysRevB.59.2642.
* [18] Farkašovský P., Phys. Rev. B, 1999, 59, 9707, doi:10.1103/PhysRevB.59.9707.
* [19] Farkašovský P., Phys. Rev. B, 2002, 65, 81102, doi:10.1103/PhysRevB.65.081102.
* [20] Zlatić V., Freericks J.K., Lemanski R., Czycholl G., Philos. Mag. B, 2001, 81, 1443, doi:10.1080/13642810110066470.
* [21] Freericks J.K., Zlatić V., Rev. Mod. Phys., 2003, 75, 1333, doi:10.1103/RevModPhys.75.1333.
* [22] Batista C.D., Phys. Rev. Lett., 2002, 89, 166403, doi:10.1103/PhysRevLett.89.166403.
* [23] Batista C.D., Gubernatis J.E., Bonča J., Lin H.Q., Phys. Rev. Lett., 2004, 92, 187601, doi:10.1103/PhysRevLett.92.187601.
* [24] Farkašovský P., Phys. Rev. B, 2008, 77, 155130, doi:10.1103/PhysRevB.77.155130.
* [25] Schneider C., Czycholl G., Eur. Phys. J. B, 2008, 64, 43, doi:10.1140/epjb/e2008-00273-y.
* [26] Zenker B., Ihle D., Bronold F.X., Fehske H., Phys. Rev. B, 2010, 81, 115122, doi:10.1103/PhysRevB.81.115122.
* [27] Phan V.N., Becker K.W., Fehske H., Phys. Rev. B, 2010, 81, 205117, doi:10.1103/PhysRevB.81.205117.
* [28] Seki K., Eder R., Ohta Y., Phys. Rev. B, 2011, 84, 245106, doi:10.1103/PhysRevB.84.245106.
* [29] Zenker B., Ihle D., Bronold F.X., Fehske H., Phys. Rev. B, 2012, 85, 121102(R), doi:10.1103/PhysRevB.85.121102.
* [30] Kaneko T., Seki K., Ohta Y., Phys. Rev. B, 2012, 85, 165135, doi:10.1103/PhysRevB.85.165135.
* [31] Kaneko T., Ejima S., Fehske H., Ohta Y., Phys. Rev. B, 2013, 88, 035312, doi:10.1103/PhysRevB.88.035312.
* [32] Ejima S., Kaneko T., Ohta Y., Fehske H., Phys. Rev. Lett., 2014, 112, 026401, doi:10.1103/PhysRevLett.112.026401.
* [33] Apinyan V., Kopeć T.K., J. Low Temp. Phys., 2014, 176, 27, doi:10.1007/s10909-014-1165-x.
* [34] Kuneš J., J. Phys.: Condens. Matter, 2015, 27, 333201, doi:10.1088/0953-8984/27/33/333201.
* [35] Golosov D.I., Phys. Rev. B, 2020, 101, 165130, doi:10.1103/PhysRevB.101.165130.
* [36] Farkašovský P., EPL, 2015, 110, 47007, doi:10.1209/0295-5075/110/47007.
* [37] Farkašovský P., Phys. Rev. B, 2017, 95, 045101, doi:10.1103/PhysRevB.95.045101.
* [38] Farkašovský P., Solid State Commun., 2017, 255, 24, doi:10.1016/j.ssc.2017.03.005.
* [39] Farkašovský P., Regeciová L., Eur. Phys. J. B, 2019, 92, 141, doi:10.1140/epjb/e2019-90406-6.
* [40] White S.R., Phys. Rev. Lett., 1992, 69, 2863, doi:10.1103/PhysRevLett.69.2863.
* [41] Brouers F., de Menezes O.L.T., Phys. Status Solidi B, 1981, 104, 541, doi:10.1002/pssb.2221040218.
* [42] Gonçalves da Silva C.E.T., Falicov L.M., Solid State Commun., 1975, 17, 1521, doi:10.1016/0038-1098(75)90986-2.
* [43] Czycholl G., Phys. Rep. B, 1986, 143, 277, doi:10.1016/0370-1573(86)90177-8.
* [44] Zenker B., Fehske H., Batista C.D., Phys. Rev. B, 2010, 82, 165110, doi:10.1103/PhysRevB.82.165110.
* [45] Farkašovský P., Acta Phys. Slovaca, 2010, 60, 497, doi:10.2478/v10155-010-0005-z.
* [46] Shvaika A.M., Phys. Rev. B, 2003, 67, 075101, doi:10.1103/PhysRevB.67.075101.
* [47] Shvaika A.M., Condens. Matter Phys., 2014, 17, 43704, doi:10.5488/CMP.17.43704.
* [48] Dobushovskyi D.A., Shvaika A.M., Zlatić V., Phys. Rev. B, 2017, 95, 125133, doi:10.1103/PhysRevB.95.125133.
* [49] Čenčariková H., Farkašovský P., Žonda M., Acta Phys. Pol. A, 2008, 113, 287, doi:10.12693/APhysPolA.113.287.
* [50] Lemański R., Kapcia K.J., Robaszkiewicz S., Phys. Rev. B, 2017, 96, 205102, doi:10.1103/PhysRevB.96.205102.
* [51] Rohler J., In: Handbook on the Physics and Chemistry of Rare Earths, Vol. 10, Gschneider K.A., Eyring L.R., Huffner S. (Eds.), North-Holland, Amsterdam, 453.
* [52] Czycholl G., Phys. Rep., 1986, 143, 277–345, doi:10.1016/0370-1573(86)90177-8.
* [53] Hanke W., Hirsch J.E., Phys. Rev. B, 1982, 25, 6748, doi:10.1103/PhysRevB.25.6748.
* [54] Farkašovský P., Z. Phys. B, 1997, 104, 553, doi:10.1007/s002570050489.
P. Farkašovský, Institute of Experimental Physics, Slovak Academy of Sciences, Watsonova 47, Košice, Slovakia
# Hyperspectral Image Classification—Traditional to Deep Models: A Survey for Future Prospects
Muhammad Ahmad,
Sidrah Shabbir,
Swalpa Kumar Roy, Student Member, IEEE,
Danfeng Hong, Senior Member, IEEE,
Xin Wu, Member, IEEE,
Jing Yao,
Adil Mehmood Khan,
Manuel Mazzara,
Salvatore Distefano,
and Jocelyn Chanussot Fellow, IEEE
Manuscript received October 24, 2021; revised November 19, 2021; accepted
November 30, 2021. Date of publication December 9, 2021; date of current
version January 20, 2022. This work was supported in part by the National Natural Science Foundation of China under Grant 42030111 and Grant 41722108. This work was supported by the National Natural Science Foundation of China under Grant 62101045 and the China Postdoctoral Science Foundation Funded Project No. 2021M690385. This work was supported by MIAI@Grenoble Alpes (ANR-19-P3IA-0003) and the AXA Research Fund. This research was also financially supported by The Analytical Center for the Government of the Russian Federation (Agreement No. 70-2021-00143 dd. 01.11.2021, IGK 000000D730321P5Q0002) Corresponding author: Danfeng Hong.
M. Ahmad is with the Department of Computer Science, National University of Computer and Emerging Sciences, Islamabad, Chiniot-Faisalabad Campus, Chiniot 35400, Pakistan, and with the Dipartimento di Matematica e Informatica—MIFT, University of Messina, Messina 98121, Italy (e-mail: <EMAIL_ADDRESS>). S. Shabbir is with the Department of Computer Engineering, Khwaja Fareed University of Engineering and Information Technology (KFUEIT), Pakistan (e-mail: <EMAIL_ADDRESS>). S. K. Roy is with the Department of Computer Science and Engineering, Jalpaiguri Government Engineering College, West Bengal 735102, India (e-mail: [email protected]).
D. Hong and J. Yao are with the Key Laboratory of Digital Earth Science, Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China (e-mail: <EMAIL_ADDRESS>; [email protected]).
X. Wu is with the School of Information and Electronics, Beijing Institute of Technology, 100081 Beijing, China, and with the Beijing Key Laboratory of Fractional Signals and Systems, 100081 Beijing, China (e-mail: <EMAIL_ADDRESS>). A. M. Khan is with the Institute of Data Science and Artificial Intelligence, Innopolis University, Innopolis, 420500, Russia (e-mail: <EMAIL_ADDRESS>). M. Mazzara is with the Institute of Software Development and Engineering, Innopolis University, Innopolis, 420500, Russia (e-mail: <EMAIL_ADDRESS>). S. Distefano is with the Dipartimento di Matematica e Informatica—MIFT, University of Messina, Messina 98121, Italy (e-mail: <EMAIL_ADDRESS>). J. Chanussot is with Univ. Grenoble Alpes, CNRS, Grenoble INP, GIPSA-Lab, 38000 Grenoble, France (e-mail: <EMAIL_ADDRESS>). Digital Object Identifier 10.1109/JSTARS.2021.3133021
Hyperspectral Imaging (HSI) has been extensively utilized in many real-life applications because it benefits from the detailed spectral information contained in each pixel. Notably, the complex characteristics of HSI data, i.e., the nonlinear relation between the captured spectral information and the corresponding object, make accurate classification challenging for traditional methods. In the last few years, Deep Learning (DL) has been established as a powerful feature extractor that effectively addresses the nonlinear problems appearing in a number of computer vision tasks. This prompted the deployment of DL for HSI classification (HSIC), which revealed good performance. This survey provides a systematic overview of DL for HSIC and compares state-of-the-art strategies on the topic. First, we summarize the main challenges of traditional machine learning for HSIC, and then we present the advantages of DL in addressing these problems. This survey breaks down the state-of-the-art DL frameworks into spectral-feature, spatial-feature, and joint spectral-spatial-feature frameworks to systematically analyze their achievements (and future research directions) for HSIC. Moreover, we consider the fact that DL requires a large number of labeled training examples, whereas acquiring such a number for HSIC is challenging in terms of time and cost. Therefore, this survey also discusses strategies to improve the generalization performance of DL models, which can provide some future guidelines.
Index Terms: Hyperspectral Imaging (HSI), Hyperspectral Image Classification (HSIC), Deep Learning (DL), Feature Learning, Spectral-Spatial Information.
§ INTRODUCTION
Hyperspectral Imaging (HSI) is concerned with the extraction of meaningful information based on the radiance acquired by the sensor at short or long distances, without substantial contact with the object of interest [1]. HSI provides detailed spectral information by sampling the reflective portion of the electromagnetic spectrum, covering a wide range of $0.4-2.4~\mu m$ (i.e., visible $0.4-0.7~\mu m$ to short-wave infrared $0.7-2.4~\mu m$) in hundreds of narrow and contiguous spectral bands. HSI can also explore the (light) emission properties of objects in the mid- to long-infrared regions [2].
Figure: Various real-world applications of HSI.
Despite the detailed information, it brings several challenges since traditional analysis techniques for monochromatic, RGB, and multispectral images cannot be directly exploited to extract meaningful information from Hyperspectral ones due to several reasons, e.g. HSI exhibits the unique statistical and geometrical properties of high dimensional spectral/spatial data, i.e. the volume of a hypercube and hypersphere concentrates on corners and outside shells respectively.
HSI has been adopted in several real-world applications including but not limited to the atmosphere, environmental, urban, agriculture, geological and mineral exploration, coastal zone, marine, forestry (i.e. track forest health), water quality and surface contamination, inland waters, and wetlands, snow and ice, biological, medical contexts, and food processing [3, 4, 5, 6, 7, 8]. There are also several military applications in camouflage, landmine detection, and littoral zone mapping. Furthermore, HSI has been used in space, air, and underwater vehicles to acquire detailed spectral information for a wide range of uses [9, 10, 11, 12].
In-field collection and spectral-library indexing of ground-truth signatures for any of the said applications are critical for many reasons. For instance, the spectral information of vegetation is affected by a wide range of environmental conditions, which makes it challenging to represent its variability satisfactorily without the collection of site-specific field spectra. However, the real potential of HSI remains largely untapped, since it makes it possible to go deeper than surface features, considering that each feature usually has a distinct spectral signature. HSI, indeed, can capture more than 200 spectral bands, which helps practitioners discriminate objects that could not be distinguished before. A few HSI application examples are shown in Fig. <ref>, but several other domains (e.g. smart city, Industry 4.0, Intelligent Transportation Systems) can greatly benefit from such an approach.
Considering the aforementioned limitations, HSI analysis is categorized into the following main streams: dimensionality reduction [13, 14, 15, 16, 17], spectral unmixing [18, 19, 20, 21, 22, 23, 24, 25, 26], object/change detection [27, 28, 29, 30, 31, 32, 33], classification [34, 35, 36], feature learning for classification [37, 38, 39, 40, 41], restoration and denoising [42, 43], and resolution enhancement [44, 45]. Figure <ref> shows an exponentially growing trend in the literature published per year for HSI analysis-related tasks and applications.
Figure: Various HSI-related articles published per year until September 25, 2021 [Source: Google Scholar, accessed on September 25, 2021; the results (including patents and citations) were sorted by relevance].
In this survey, we specifically focus on HSI data classification (HSIC), which has attracted phenomenal interest from the research community due to its broad applications in the areas of land use and land cover [46, 47, 48, 49, 50], environment monitoring and natural hazard detection [51, 52], vegetation mapping [53, 54], and urban planning. HSIC methodologies exploit machine learning algorithms to perform the classification task [55, 56]. These methods are outlined in various comprehensive reviews published during the last decade [57, 58, 59, 34, 60, 61, 62, 63, 64, 65]. Nevertheless, continuous advancements in the field of machine learning provide improved methods from time to time. Deep learning (DL) is one such revolutionary advancement that has improved HSIC accuracy [66, 67, 68].
This survey aims to give an overview of the widely used DL-based techniques to perform HSIC. Specifically, we will first summarize the main challenges of HSIC which cannot be effectively overcome by traditional machine learning (TML), and later we will enlist the advantages of DL in handling the aforementioned issues. At a later stage, we will provide a framework to categorize the corresponding works into:
* Spectral and spatial feature learning, individually, and
* Spectral-spatial feature learning, to systematically review the achievements in DL-based HSIC, as well as
* Future research directions to improve the generalization performance and robustness of DL models while considering the limited availability of reliable training samples.
The remainder of this paper is structured as follows.
Section <ref> introduces the task of HSI Classification (HSIC) and briefly discusses the HSIC paradigm shift from traditional (conventional) machine learning to Deep Learning (DL) models, describing HSI data characteristics along with the advantages and limitations of DL that arise while working with HSI. In sections <ref> and <ref>, we give an overview of different forms of HSI representations and basic machine learning strategies, respectively. Section <ref> describes a few commonly used types of layers and reviews recent developments (specifically from 2017 onward) of some intensively utilized DL frameworks for HSIC. Sections <ref>, <ref>, <ref>, and <ref> present the state-of-the-art developments of Convolutional Neural Networks (CNNs), Graph CNNs (GCNNs), Autoencoders (AEs), Deep Belief Networks (DBNs), and Recurrent Neural Networks (RNNs), respectively. In section <ref>, we briefly discuss various strategies to overcome the low generalization performance of HSIC due to the limited availability of training data. Section <ref> presents the experimental results and a discussion of the results obtained using different deep learning strategies. Section <ref> concludes the paper with a few future research directions related to the joint exploitation of spectral-spatial features of HSI, limited training data, and computational complexity.
§ HYPERSPECTRAL IMAGE CLASSIFICATION (BACKGROUND AND CHALLENGES)
§.§ Traditional to DL Models
The main task of HSIC is to assign a unique label to each pixel vector of an HSI cube based on its spectral or spectral-spatial properties. Mathematically, an HSI cube can be represented as \(\textbf{X} = [x_1, x_2, x_3, \dots, x_B]^T \in \mathcal{R}^{B \times (N \times M)}\), where \(B\) represents the total number of spectral bands, each consisting of \((N \times M)\) samples, belonging to \(\textbf{Y}\) classes, where \(x_i = [x_{1,i},~x_{2,i},~x_{3,i}, \dots,x_{B,i}]^T\) is the \(i^{th}\) sample in the HSI cube with class label \(y_i \in \mathcal{R}^Y\). The classification problem can be considered an optimization one, in which a mapping function \(f_c(.)\) takes the input data \(\textbf{X}\) and, after applying some transformations to it, obtains the corresponding label \(\textbf{Y}\), reducing the gap between the obtained output and the actual one [69].
\begin{equation}
Y = f_c(X,\theta)
\end{equation}
where $\theta$ denotes the adjustable parameters that may be required to apply transformations to the input data $\textbf{X}$ such that $f_c: X \to Y$.
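To make the mapping $f_c$ concrete, the following minimal sketch (synthetic stand-in data; a kernel SVM, one of the traditional classifiers listed below) reshapes a cube into per-pixel spectral vectors and fits a classifier:
```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for an HSI cube X with B = 50 bands over an
# N x M = 30 x 30 scene and 3 hypothetical classes Y.
N, M, B = 30, 30, 50
rng = np.random.default_rng(0)
cube = rng.normal(size=(N, M, B))
labels = rng.integers(0, 3, size=(N, M))

X = cube.reshape(-1, B)        # one B-dimensional spectral vector per pixel
y = labels.ravel()
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.2, random_state=0)

clf = SVC(kernel="rbf")        # theta: the SVM hyperparameters/coefficients
clf.fit(X_tr, y_tr)            # learn f_c from the training pixels
print("overall accuracy:", clf.score(X_te, y_te))
```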
In the literature, substantial work has been done on HSIC and there is a growing trend in the development of such techniques, as shown in Figure <ref>. Most HSIC frameworks appear to be influenced by the methodologies used in the computer vision domain [70]. Traditional machine learning-based HSIC approaches use hand-crafted features to train the classifier. These methods generally rely on utilizing engineering skills and domain expertise to design several human-engineered features, for instance, shape, texture, color, and spectral and spatial details. All these features are basic characteristics of an image and carry effective information for image classification. Commonly used hand-crafted feature extraction and classification methods include: texture descriptors such as Local Binary Patterns (LBPs) [71], Histogram of Oriented Gradients (HOG) [72], Global Image Scale-invariant Transform / Global Invariant Scalable Transform (GIST) [73], Pyramid Histogram of Oriented Gradients (PHOG), Scale-invariant Feature Transform (SIFT) [74], Random Forests [75], kernel-based Support Vector Machines (SVMs) [76], K-nearest Neighbours (KNN), and Extreme Learning Machines (ELMs).
Figure: Remote sensing/hyperspectral image classification articles published per year until September 25, 2021 [Source: Google Scholar, accessed on September 25, 2021; the results (including patents and citations) were sorted by relevance].
Color histograms are simple and effective hand-crafted features used for image classification tasks. They are easy to compute and invariant to small changes in images, i.e., translation and rotation. The major drawback of a color histogram is that it does not provide spatial contextual information; hence, it becomes difficult to distinguish between objects of the same color but with different spatial distributions. Moreover, color histograms are sensitive to illumination changes. HOG features represent the histogram of edge orientations of spatial sub-regions. They can effectively extract edge and local shape details and have been utilized in various remote sensing-related works [77, 78, 79, 46].
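For instance, a minimal sketch of a joint color-histogram feature (illustrative only, with a synthetic image) is:
```python
import numpy as np

# Toy 8x8 RGB image; a joint color histogram with 4 bins per channel
# yields a 64-dimensional global feature vector.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(8, 8, 3))
hist, _ = np.histogramdd(img.reshape(-1, 3),
                         bins=(4, 4, 4), range=[(0, 256)] * 3)
feature = hist.ravel() / hist.sum()    # normalized color-histogram feature
```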
Scale-invariant Feature Transform (SIFT) is a broadly used robust feature descriptor applied to image classification tasks [80, 81, 82, 83]. The advantage of the SIFT descriptor is that it is invariant to changes in image scale, rotation, illumination, and noise. SIFT extracts local features that describe a specific point in the image. Its disadvantage is its mathematical complexity, which increases computational cost. GIST represents a global description of the important aspects of an image, namely the scales and orientations (gradient information) of its various subregions. GIST builds a spatial envelope in terms of different statistical properties such as roughness, openness, and ruggedness [84]. Texture descriptors such as local binary patterns (LBPs) are used for remote sensing image analysis [71, 85]. LBPs describe the texture around each pixel by choosing pixels from a square neighborhood and thresholding the gray-level values of all neighborhood pixels with respect to the central pixel.
Color histograms, GIST, and texture descriptors are global features that represent certain statistical characteristics of an image such as color, texture [86, 87], and spatial structure [73], while HOG and SIFT are local features that describe geometrical information. The local features are usually used to construct bag-of-visual-words (BoVW) models [52, 88, 48, 89, 90, 91, 92, 83, 93] and HOG feature-based models [46, 94]. Some popular feature encoding or pooling strategies to enhance the performance of BoVW are Fisher vector coding [95, 96, 71], Spatial Pyramid Matching (SPM) [97], and Probabilistic Topic Models (PTMs) [98, 99, 100, 93]. Since a single feature is insufficient to represent the whole image information, combinations of these features are used for image classification [101, 47, 88, 98, 100, 102, 103, 104, 105, 106, 107].
Hand-crafted features can effectively represent various attributes of an image and hence work well with the data for which they were designed. However, such features may prove inadequate on real data, and it is difficult to balance robustness and discriminability because the set of optimal features varies considerably between datasets. Furthermore, human involvement in designing the features considerably affects the classification process, as it requires a high level of domain expertise.
To mitigate the limitations of hand-crafted feature designing, a deep feature learning strategy was proposed by Hinton and Salakhutdinov in $2006$ [108]. Deep learning (DL) based methods can automatically learn the features from data in a hierarchical manner, to construct a model with growing semantic layers until a suitable representation is achieved. Such models have shown great potential for feature representation in remote sensing image classification [109, 110].
DL architectures can learn the behavior of any data without any prior knowledge regarding the statistical distribution of the input data [111] and can extract both linear and non-linear features of input data without any pre-specified information. Such systems are capable of handling HSI data in both spectral and spatial domains individually, and also in a coupled fashion. DL systems possess a flexible architecture in terms of types of layers and their depth and are adaptive to various machine learning strategies like supervised, semi-supervised, and unsupervised techniques.
§.§ Hyperspectral Data Characteristics and DL Challenges
Despite the above-discussed DL potentials, there are still some challenges that need to be considered while applying DL to HSI data. Most of these challenges are related to the characteristics of HSI data, i.e., hundreds of contiguous and narrow spectral channels with very high spectral resolution but low spatial resolution throughout the electromagnetic spectrum, coupled with the limited availability of training data. Although pixels with rich spectral information are useful for classification purposes, the computation over such data takes considerable time and resources.
Furthermore, processing such high-dimensional data is a complex task due to the increased number of parameters. This is known as the curse of dimensionality, which considerably influences classification performance, especially in the case of supervised learning [112]. When the training data are inadequate or unreliable (i.e., the training samples provide no new information to the model or have similar patterns/structures), the classifier cannot be trained properly and the model may overfit. This is known as the Hughes phenomenon [113], which occurs when the amount of labeled training data is significantly smaller than the number of spectral bands present in the data. The lack of labeled HSI data is a major issue in HSIC, as labeling HSI is a time-consuming and expensive task that usually requires human experts or the investigation of real-time scenarios.
In addition to high dimensionality, HSIC suffers from various other artifacts, such as high intra-class variability due to unconfined variations in reflectance values caused by several environmental interferers, and degradation of the data caused by instrumental noise during acquisition [114]. Furthermore, redundant bands added by HSI instruments increase the computational complexity of the model. Spectral mixing is another challenge related to the spatial resolution of HSI. HSI pixels with low to average spatial resolution cover vast spatial regions on the surface of the earth, leading to mixed spectral signatures that result in high inter-class similarity in border regions. As a result, it becomes difficult to identify materials based on their spectral reflectance values alone [115]. The following are the main challenges that arise when DL is applied to HSIC:
* Complex Training Process: Training a Deep Neural Network (DNN) and optimizing it by tuning parameters is an NP-complete problem for which convergence of the optimization process is not guaranteed [116, 117, 118]. Therefore, training DNNs is considered very difficult [111], especially in the case of HSI, where a large number of parameters need to be adjusted/tuned. However, the convergence task has become somewhat easier due to advances in optimization techniques for deep CNNs. Stochastic gradient descent (SGD) [119] and its momentum version (SGDM) [120], RMSProp [121], Adam [122], AdamW [123], diffGrad [124], RAdam [125], gradient centralization (GC) [126], and AngularGrad [127] are successful CNN optimization techniques widely used in classification problems (a minimal configuration sketch follows this list).
* Limited Availability of Training Data: As discussed above, supervised DNNs require a considerably large amount of training data; otherwise, their tendency to overfit increases significantly [128], leading to the Hughes phenomenon. The high-dimensional characteristics of HSI coupled with a small amount of labeled training data make DNNs ineffective for HSIC, as a great deal of adjustment is demanded during the training phase [69].
* Model's Interpretability: The training procedure of DNNs is difficult to interpret and understand. Their black-box nature is considered a potential weakness of DNNs and may affect the design decisions of the optimization process, although a lot of work has been done to interpret the model's internal dynamics.
* High Computational Burden: One of the main challenges of DNNs is dealing with large amounts of data, which involves increased memory bandwidth, high computational cost, and storage consumption [129]. However, advanced processing techniques like parallel and distributed architectures [130, 131] and high-performance computing (HPC) [115] make it possible for DNNs to process large amounts of data.
* Training Accuracy Degradation: It is assumed that deeper networks extract richer features from data [132]; however, not all systems achieve higher accuracy by simply adding more layers, because increasing the network's depth makes the problem of exploding or vanishing gradients more prominent [133] and affects the convergence of the model [132].
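To make the optimizer choices above concrete, the following is a minimal sketch, in PyTorch, of how a few of the cited optimizers can be configured; the placeholder model, learning rates, and weight decay are illustrative assumptions, not values prescribed by the surveyed works.

\begin{verbatim}
import torch
import torch.nn as nn

# Placeholder model; any nn.Module works here.
model = nn.Sequential(nn.Linear(200, 64), nn.ReLU(), nn.Linear(64, 16))

# SGD with momentum (SGDM) -- the classic baseline.
sgdm = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)

# Adam and its decoupled-weight-decay variant AdamW.
adam = torch.optim.Adam(model.parameters(), lr=1e-3)
adamw = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-4)

# RMSProp, often preferred for non-stationary objectives.
rmsprop = torch.optim.RMSprop(model.parameters(), lr=1e-3)
\end{verbatim}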
§ HSI REPRESENTATION
Hyperspectral data is represented in the form of a $3D$ hypercube, $X \in \mathcal{R}^{B \times (N \times M)}$, which contains the $1D$ spectral and $2D$ spatial details of a sample, where $B$ represents the total number of spectral bands and $N$ and $M$ are the spatial components, i.e., width and height, respectively. The HSI cube is shown in Figure <ref>.
Hyperspectral Cube
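As a concrete illustration of this representation, the following sketch reshapes a cube into the pixel-by-band matrix used for pixel-wise classification; the band count and spatial size are illustrative assumptions, and the random array stands in for real data.

\begin{verbatim}
import numpy as np

B, N, M = 200, 145, 145          # illustrative band count and spatial size
cube = np.random.rand(N, M, B)   # stand-in for a real HSI cube

# Flatten the two spatial axes so each row is one pixel's spectral
# vector, i.e. the (N*M) x B matrix form used for pixel-wise HSIC.
pixels = cube.reshape(-1, B)
print(pixels.shape)              # (21025, 200)
\end{verbatim}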
§.§ Spectral Representation
In such representations, each pixel vector is isolated from other pixels and processed based on spectral signatures only, which means the pixel is represented only in spectral space, $x_i \in \mathcal{R}^{B}$, where $B$ can either be the actual number of spectral channels or the number of relevant spectral bands extracted by some dimensionality reduction (DR) method. Usually, instead of using the original spectral bands, a low-dimensional representation of HSI is preferred for data processing in order to avoid redundancy and achieve better class separability without considerable loss of useful information.
Dimensionality reduction (DR) approaches for spectral HSI representation can be either supervised or unsupervised. Unsupervised techniques transform the high-dimensional HSI into a low-dimensional space without using class label information, for example, Principal Component Analysis (PCA) and locally linear embedding [134]. On the other hand, supervised DR methods utilize labeled samples to learn the data distribution, i.e., to keep data points of the same class near each other and separate data points of different classes. Examples include linear discriminant analysis (LDA), local Fisher discriminant analysis (LFDA) [135], local discriminant embedding (LDE) [136], and nonparametric weighted feature extraction (NWFE) [137]. LDA and LFDA provide better class separability by maximizing the inter-class distance of data points and minimizing the intra-class distance. However, due to the spectral mixing effect, in which the same material may appear with different spectra or different materials may have the same spectral signatures, it becomes difficult to differentiate among classes based on the spectral reflectance values alone.
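The following is a minimal sketch of both DR flavors using scikit-learn; the component counts and the randomly generated pixels and labels are illustrative assumptions, not settings recommended by the surveyed papers.

\begin{verbatim}
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

pixels = np.random.rand(145 * 145, 200)          # stand-in pixel-by-band matrix
labels = np.random.randint(0, 16, len(pixels))   # stand-in class labels

# Unsupervised DR: PCA, keeping an assumed 30 components.
pca = PCA(n_components=30)
reduced = pca.fit_transform(pixels)
print(reduced.shape, pca.explained_variance_ratio_.sum())

# Supervised DR: LDA, limited to (number of classes - 1) components.
lda = LinearDiscriminantAnalysis(n_components=15)
reduced_sup = lda.fit_transform(pixels, labels)
print(reduced_sup.shape)
\end{verbatim}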
§.§ Spatial Representation
To deal with the limitations of spectral representation, another approach is to exploit the spatial information of the pixels, in which the pixels of each band are represented in the form of a matrix, $x_i \in \mathcal{R}^{N \times M}$. Due to high spatial correlation, neighboring pixels have a higher probability of belonging to the same class. Therefore, in the case of spatial representation, the neighboring pixels' information is also considered, and the neighborhood of a pixel can be determined using a kernel or pixel-centric window [138]. Some common methods to extract spatial information from the HSI cube are morphological profiles (MPs), texture features (such as Gabor filters, the gray-level co-occurrence matrix (GLCM), and local binary patterns (LBPs)), and DNN-based methods. Morphological profiles are capable of extracting geometrical characteristics. A few extensions of MPs include extended morphological profiles (EMPs) [139], multiple-structure-element morphological profiles [140], and invariant attribute profiles (IAPs) [141].
The texture of the image provides useful spatial contextual information about the HSI. For instance, a Gabor filter, a texture analysis technique, can efficiently obtain textural information at various scales and orientations. Similarly, LBP can provide a rotation-invariant spatial texture representation. The GLCM can effectively determine the spatial variability of HSI by exploiting the relative positions of neighborhood pixels. DNNs can also extract spatial information from HSI by treating the pixel as an image patch instead of representing it as a spectral vector. The spatial information contained in HSI can also be extracted by combining several of the aforementioned methods. For instance, [142] combined Gabor filters and differential morphological profiles [143] to extract local spatial sequential features for a recurrent neural network (RNN)-based HSIC framework.
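A common building block behind these spatial approaches is the pixel-centric window mentioned above; the following sketch extracts such a window with zero padding at the borders, where the window size and cube dimensions are illustrative assumptions.

\begin{verbatim}
import numpy as np

def extract_patch(cube, row, col, window=5):
    # Return the pixel-centric window around (row, col) from a cube of
    # shape (N, M, B); edges are zero-padded so that every pixel,
    # including border pixels, gets a full window.
    pad = window // 2
    padded = np.pad(cube, ((pad, pad), (pad, pad), (0, 0)), mode="constant")
    return padded[row:row + window, col:col + window, :]

cube = np.random.rand(145, 145, 200)   # stand-in HSI cube
patch = extract_patch(cube, 0, 0)
print(patch.shape)                     # (5, 5, 200)
\end{verbatim}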
§.§ Spectral-Spatial Representation
This representation jointly exploits both spectral and spatial information of data. In such approaches, a pixel vector is processed based on spectral features while considering spatial-contextual information. The strategies that simultaneously use both spectral and spatial representations of HSI, either concatenate the spatial details with spectral vector [62, 144] or process the $3D$ HSI cube to preserve the actual structure and contextual information [145].
In the literature, all these HSI representations have been widely exploited for HSIC. Most DNNs for pixel-wise classification utilize the spectral representation of HSIs [146, 147]. However, to mitigate the limitations of spectral representation, many efforts have been made to incorporate spatial information [148, 149]. Recently, the joint exploitation of both spectral and spatial features has gained much popularity and led to improved classification accuracy [150, 151, 152, 153, 67, 154]. These HSI feature exploitation approaches for HSIC are further discussed in the following sections.
§ LEARNING STRATEGIES
Deep learning models can adopt various learning strategies that can be broadly categorized into the following:
§.§ Supervised Learning
In a supervised learning approach, the model is trained on labeled training data, meaning the training data comprise a set of inputs and their corresponding outputs or class labels. During the training phase, the model iteratively updates its parameters in order to predict the desired outputs accurately. In the testing phase, the model is tested against new input/test data in order to validate its ability to predict the correct labels. If trained sufficiently, the model can predict the labels of new input data. However, supervised learning of DNNs requires a lot of labeled training data to fine-tune the model parameters. Therefore, it is best suited to scenarios where plentiful labeled data are available. The details of various supervised learning techniques for DNNs are explained in the respective sections.
§.§ Unsupervised Learning
In contrast to the supervised learning approach, unsupervised learning techniques learn from input data with no explicit labels associated with it. These approaches try to identify the underlying statistical structure of the input representations or patterns in the absence of corresponding labels. As there is no ground truth available for the training data, it can be difficult to measure the accuracy of the trained model. However, such learning strategies are useful when we want to learn the inherent structure of datasets with a scarcity of labeled training data. Principal component analysis (PCA) is an unsupervised learning technique that can be used to learn a low-dimensional representation of the input. Similarly, k-means clustering is another unsupervised learning method that groups the input data into homogeneous clusters.
§.§ Semi-supervised Learning
The semi-supervised learning technique is halfway between the unsupervised and supervised approaches. It learns from partially labeled datasets, that is, a small amount of labeled training data can be utilized to label the rest of the unlabeled data. These techniques effectively utilize all available data instead of just the labeled data; therefore, they have gained much popularity among the research community and are widely used for HSIC [155, 156, 157, 158]. The details of these methods are briefly described in section <ref>.
§ DEVELOPMENT OF DNNS (TYPES OF LAYERS)
In the following, we review recent developments of some widely used DNN frameworks for HSIC. We specifically survey the literature published from 2017 onward. DNNs exhibit a great variety of flexible and configurable models for HSIC that allow the incorporation of several types of layers. A few widely used types of layers are explained in the following.
A layer is the key building block of DNN and the type of layer has a decisive impact in terms of feature processing. A layer takes the weighted input, processes it through linear or non-linear transformation, and outputs these values to the next layer. Generally, a layer is uniform, as it has a single activation function. The first layer of the network is known as the input layer and the last layer as an output layer. All other layers in the network, in between the input and output layers, are known as hidden layers. These layers progressively find different features in the input data by performing various transformations. The choice of layer type depends on the task at hand, as some layers perform better for some tasks than others. The most commonly used layers for HSIC are explained below.
§.§ Fully Connected Layers
A fully connected (FC) layer connects every neuron in the lower layer to every neuron in the upper/next layer. Mostly, FC layers are used as the last few layers of a model, usually after convolution/pooling layers. The FC layer takes the output of the previous layer and assigns weights to predict the probabilities of the class labels. Due to the large number of connections, a large number of parameters need to be adjusted, which significantly increases the computational overhead and also makes the model more prone to overfitting [49]. To mitigate the effect of overfitting, the dropout method was introduced in [159].
§.§ Convolutional Layers
The convolutional (CONV) layer convolves the input data or feature maps from a lower layer with filters (kernels). A filter contains weights whose dot product with a subset of the input data is calculated as the filter moves across the width, height, and depth of the input region. The output of the filter is known as a feature map. The CONV layer provides spatial invariance via a local connectivity approach, in which a neuron in the feature map connects to a subset of the input from the previous layer rather than to every neuron. This reduces the number of parameters that need to be trained. To reduce the parameters further, the CONV layer uses parameter sharing, in which the same weights are used across a particular feature map.
§.§ Activation Layers
Activation layers are considered the feature detector stage of DNNs [160]. FC and CONV layers provide linear representations of the input data; that is, they work similarly to linear regressors, and data transformed by these layers is considered to be at the feature extraction stage [69]. Therefore, to learn non-linear features of the data, an activation layer must be used after the FC and CONV layers. In the activation layer, feature maps from previous layers pass through an activation function to form an activation map. Some commonly used activation functions are sigmoid, hyperbolic tangent (tanh), rectified linear unit (ReLU), LiSHT [161], and softmax. In HSI analysis, softmax and ReLU are the most widely employed activation functions [69]. Figure <ref> presents a graphical representation of a few commonly utilized activation functions.
Graphical representation of various commonly used activation functions
§.§ Pooling or sub-sampling layers
The pooling layer, also known as the sub-sampling or down-sampling layer, reduces each local region of the input volume to a single value, as shown in Figure <ref>. This provides invariance to small distortions in the data. The pooling layer helps the model control overfitting, as both the size of the data and the number of model parameters are reduced, which also decreases the computational time. The commonly used down-sampling operations are max-pooling, average-pooling, and sum-pooling. Recently, a pooling technique called wavelet pooling was introduced in [162], whose performance is comparable to max-pooling and average-pooling. Alternatively, [163] proposed another trend in which the pooling layer is replaced by a CONV layer with increased filter stride.
Max-pooling and average-pooling operations of down-sampling/pooling layer
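To tie the layer types above together, the following is a minimal sketch, in PyTorch, of a CONV-activation-pooling stack with an FC classifier head; the channel counts, patch size, and class count are illustrative assumptions.

\begin{verbatim}
import torch
import torch.nn as nn

# Toy stack: CONV -> activation -> pooling, then an FC classifier head.
net = nn.Sequential(
    nn.Conv2d(in_channels=30, out_channels=64, kernel_size=3, padding=1),
    nn.ReLU(),                     # activation layer (non-linearity)
    nn.MaxPool2d(kernel_size=2),   # pooling / sub-sampling layer
    nn.Flatten(),
    nn.Linear(64 * 2 * 2, 16),     # fully connected layer -> 16 classes
)

x = torch.randn(8, 30, 5, 5)       # batch of 5x5 patches with 30 bands
print(net(x).shape)                # torch.Size([8, 16])
\end{verbatim}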
§ CONVOLUTIONAL NEURAL NETWORK (CNN)
The architecture of the Convolutional Neural Network (CNN) is inspired by the biological visual system presented in [164]. Following the natural visual recognition mechanism proposed by Hubel and Wiesel [164], the Neocognitron [165] is regarded as the first hierarchical, position-invariant model for pattern recognition [166] and can be considered the predecessor of the CNN [167]. The architecture of a CNN can be divided into two main stages: a Feature Extraction (FE) network and a classification stage based on the feature maps extracted in the first stage.
The FE network consists of multiple hierarchically stacked CONV, activation, and pooling layers. The CONV layer extracts features from the input data by convolving a learned kernel with it. In each CONV layer, the kernel is spatially shared across the whole input, which reduces the model's complexity and makes the network easier to train, since fewer parameters need to be fine-tuned. The convolved results are then passed through an activation layer, which adds non-linearity to the network in order to extract non-linear features of the input; this is achieved by applying a non-linear function to the convolved results. Afterward, the resolution of the feature map is reduced by applying a pooling operation to achieve shift invariance. Generally, a pooling layer is added after each CONV layer and its activation function.
The classification stage, consisting of FC layers and a softmax operator, gives the probability that the input pattern belongs to a specific class based on the feature maps extracted at the FE stage. The FC layer connects every single neuron in the previous layer to every neuron in the current layer. In [168] and [169], the authors proposed that the FC layer can be replaced by a global average pooling layer. Softmax is commonly used for classification tasks [170, 171]; however, many works have also utilized SVMs [172, 173] for this purpose.
In the following, we review three types of CNN architectures for HSIC: i) spectral CNN, ii) spatial CNN, and iii) spectral-spatial CNN. Figure <ref> illustrates the general architecture of these three frameworks.
General architecture of Spectral CNN, Spatial CNN and Spectral-spatial CNN frameworks for HSIC.
§.§ Spectral CNN Frameworks for HSIC
Spectral CNN models only consider 1D spectral information $(x_i \in \mathcal{R}^{B})$ as input, where $B$ can either be the original number of spectral bands or an appropriate number of bands extracted by some dimensionality reduction method. In [174], a CNN structure was proposed to mitigate the overfitting problem, achieving better generalization capability by utilizing $1 \times 1$ convolutional kernels and enhanced dropout rates. Moreover, a global average pooling layer is used in place of a fully connected layer in order to reduce the number of network parameters. To reduce the high correlation among HSI bands, [169] proposed a CNN architecture for HSIC that fully utilizes the spectral information by transforming the 1D spectral vector into a 2D feature matrix; by cascading composite layers consisting of $1 \times 1$ and $3 \times 3$ CONV layers, the architecture achieved feature reuse capability. Similar to [174], [169] also utilized a global average pooling layer to lower the number of training parameters and to extract high-dimensional features.
In [175], the authors presented a hybrid model for HSIC in which the first few CONV layers extract position-invariant middle-level features and recurrent layers then extract spectral-contextual details. Similarly, <cit.> used a hybrid architecture for classifying healthy and diseased wheat heads; for the input layer, they transformed the spectral information into a 2D data structure. In [176], CNN proved more effective than SVM and KNN for the spectral-based identification of rice seed varieties. A similar application of CNN was explored in [147], where various varieties of chrysanthemum were identified using the spectral data of the first five PCs of principal component analysis (PCA). PCA is a dimensionality reduction method that is widely used in many DL applications to handle/preprocess high-dimensional data. In [177], PCA was utilized to preprocess medical HSI, and then a fusion of CNN kernels with Gabor kernels using the dot product was used for classification.
The study [178] analyzed another dimensionality reduction technique, Dynamic Mode Decomposition (DMD), which converts 3D HSI data to 2D; this data is then fed to a vectorized CNN (VCNN) for classification. To overcome noise effects in pixel-wise HSIC, an averaged-spectra method is used in [179], where the averaged spectrum of a group of pixels belonging to bacterial colonies is extracted for further analysis.
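A minimal sketch of a spectral (per-pixel) 1D-CNN of the kind surveyed above is given below, in PyTorch; it applies 1D convolutions along the band axis and uses global average pooling in place of FC stacks, with all layer sizes being illustrative assumptions.

\begin{verbatim}
import torch
import torch.nn as nn

class Spectral1DCNN(nn.Module):
    # 1D convolutions over the spectral axis, then global average
    # pooling and a single linear classifier head.
    def __init__(self, bands=200, classes=16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # global average pooling
        )
        self.classifier = nn.Linear(64, classes)

    def forward(self, x):              # x: (batch, bands)
        x = x.unsqueeze(1)             # -> (batch, 1, bands)
        f = self.features(x).squeeze(-1)
        return self.classifier(f)

model = Spectral1DCNN()
print(model(torch.randn(4, 200)).shape)   # torch.Size([4, 16])
\end{verbatim}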
§.§ Spatial CNN frameworks for HSIC
Spatial CNN models consider only spatial information; to extract the spatial information from HSI data, dimensionality reduction (DR) methods are applied in the spectral domain to lower the dimensionality of the original HSI data. For instance, [180] used PCA to extract the first PC with refined spatial information and fed it to a fully convolutional CNN framework for classification. Similarly, [181] trained a spatial 2D-CNN with one PC. In [182], PCA-whitened input data comprising three PCs is fed to a random patches network as a 2D-CNN classification framework. However, limited training samples with highly similar spectral features make DL models prone to overfitting. To overcome this, [183] proposed a probabilistic neighbourhood pooling-based attention network (PNPAN) for HSI classification.
The method proposed in [184] crops patches from 2D input images (i.e., images from the different spectral bands) to train a 2D-CNN architecture that learns data-adaptive kernels by itself. Furthermore, some authors have proposed the utilization of hand-crafted features along with spectral-domain reduction. For example, [185] combined the Gabor filtering technique with a 2D-CNN for HSIC to overcome the overfitting problem caused by limited training samples. The Gabor filtering extracts spatial details, including edges and textures, which effectively reduces the overfitting problem. The work [186] proposed a deformable HSIC network based on the concept of deformable sampling locations, which can adaptively adjust their size and shape in accordance with the HSI's spatial features. Such sampling locations are created by calculating 2D offsets for every pixel in the input image through regular convolutions over three PCs. These offsets can cover the locations of neighboring pixels possessing similar characteristics. The structural information of neighboring pixels is then fused to create deformable feature images, and regular convolution applied to these deformable feature images can extract more effective complex structures.
§.§ Spectral-Spatial CNN frameworks for HSIC
Spectral-spatial pixel-wise HSIC can be achieved by integrating spatial features into spectral information. For instance, [187] presented an improved pixel-pair feature (PPF) approach, called spatial PPF, which differs from traditional PPFs in two main respects: first, only pixels from the immediate neighborhood of the central pixel can be used to form a pair; second, the label of a pixel pair is that of the central pixel. To extract discriminative joint representations, [188] introduced a Supervised Spectral-Spatial Residual Network (SSRN) that uses a series of 3D convolutions in the respective spectral and spatial residual blocks. An efficient deep 3D-CNN framework was proposed in [189] that simultaneously exploits both spectral and spatial information for HSIC.
Similarly, to reflect the variations of spatial context in different hyperspectral patches, [190] implemented an adaptive weight learning technique, instead of assigning fixed weights, to incorporate spatial details. Besides this, to make the convolutional kernel more flexible, [154] explored a new architectural design that can adaptively find an adjustable receptive field, along with an improved spectral-spatial residual network for joint feature extraction. The discriminative power of the extracted features can be further improved by combining both the max and min convolutional features before the ReLU non-linearity, as reported in [191] for the classification task. CNNs fail to exploit rotation equivariance in a natural way; [192] introduced translation-equivariant representations of input features, which provide extra robustness to spatial feature locations for HSIC.
Deeper networks may suffer from overfitting and vanishing-gradient problems due to the small number of available labeled training samples; to overcome this shortcoming, lightweight CNNs have gained considerable attention in the HSIC community. The paper [193] introduced an end-to-end 3D lightweight convolutional neural network to tackle the limited number of training samples for HSI classification. To reduce the large gap between the massive number of trainable parameters and the limited labeled samples, [194] proposed extracting joint spatial-spectral information via spatial-spectral Schroedinger eigenmaps (SSSE) and then further reducing the dimensionality using a compression technique. Approximately 90% of a network's trainable weights are typically used immediately after the flatten operation, i.e., in the fully connected layer, whereas only the remaining 10% are used in the preceding convolutional layers; to overcome this, the paper [195] introduced a lightweight bag-of-features learning paradigm into an end-to-end spectral-spatial squeeze-and-excitation residual network for HSIC.
Morphological operations, i.e., erosion and dilation, are powerful nonlinear feature transformations that are widely used to preserve the essential shape characteristics and structural information of an image. Inspired by these, the paper [196] introduced a new end-to-end morphological convolutional neural network (MorphCNN) for HSIC, which utilizes both spectral and spatial features by concatenating the outputs of spectral and spatial morphological blocks extracted in a dual-path fashion.
The work [190] proposed a two-stage framework for joint spectral-spatial HSIC that directly extracts both spectral and spatial features instead of concatenating independently extracted ones. The first stage of the proposed network comprises a CNN and softmax normalization, which adaptively learns the weights for input patches and extracts joint shallow features. These shallow features are then fed to a Stacked Autoencoder (SAE) network to obtain deep hierarchical features, and the final classification is performed with a Multinomial Logistic Regression (MLR) layer. A 3D-CNN model was introduced in [197] to jointly exploit spectral-spatial features from HSI; to validate its performance, a comparison was made with spectral-based DBN, SAE, and 2D spatial CNN for HSIC. The work [198] introduced a bilinear fusion mechanism over two squeeze-operation branches based on global and max pooling, with the excitation operation performed on the fused output of the squeeze operation.
The work [199] proposed a deep multiscale spectral-spatial feature extraction approach for HSIC which can learn effective discriminant features from the images with high spatial diversity. The framework utilizes the Fully Convolutional Network (FCN) to extract deep spatial information and then, these features are fused with spectral information by using a weighted fusion strategy. Finally, pixel-wise classification is performed on these fused features.
In [200], a dual-channel CNN framework was implemented for spectral-spatial HSIC. In the proposed approach, a 1D-CNN is used to hierarchically extract spectral features and a 2D-CNN to extract hierarchical spatial features. These features are then combined for the final classification task. Furthermore, to overcome the deficiency of training data and to achieve higher classification accuracy, the proposed framework is supported by a data augmentation technique that can increase the number of training samples by a factor of 6. In [201], a multiscale 3D deep CNN was introduced for end-to-end HSIC that can jointly learn both 1D spectral and 2D multiscale spatial features without any pre-processing or post-processing techniques such as PCA. In order to reduce band redundancy and noise in HSI, [202] explored a novel architecture for HSIC by embedding a band attention module in a traditional CNN framework. The study [203] proposed an HSIC architecture in which PCA-transformed images are used to obtain multi-scale cubes for hand-crafted feature extraction, utilizing multi-scale covariance maps that can simultaneously exploit the spectral-spatial details of HSI. These maps are then used to train a traditional CNN model for classification.
The work [204] combined CNN with a metric learning-based HSIC framework, which first utilizes CNN to extract deep spatial information using the first three PCs extracted by PCA. Then, in a metric learning-based framework, spectral and spatial features are fused for spectral-spatial feature learning by embedding a metric learning regularization factor into the classifier's training (SVM). Similarly, [205] combined a multi-scale convolution-based CNN (MS-CNN) with diversified deep metrics based on determinantal point process (DPP) [206] priors for (1D spectral, 2D spectral-spatial, and 3D spectral-spatial) HSIC. Multiscale filters are used in the CNN to obtain multi-scale features, and a DPP-based diversified metric transformation is performed to increase the inter-class variance, decrease the intra-class variance, and achieve better HSI representational ability. Final classification maps are obtained using a softmax classifier.
In recent work [207], an HSIC framework was proposed to extract multi-scale spatial features by constructing a three-channel virtual RGB image from the HSI instead of extracting the first three PCs through PCA. The purpose of using a three-channel RGB image is to utilize existing networks trained on natural images for spatial feature extraction. For multi-scale feature extraction, these images are passed to a fully convolutional network. The resulting multi-scale spatial features are fused and further joined with PCA-extracted spectral features for final classification via SVM.
A two-branch (spectral and spatial) DNN for HSIC was introduced in [208]. The spatial branch consists of a band selection layer and a convolutional/de-convolutional framework with a skip architecture to extract the spatial information of HSI, while in the spectral branch, a contextual DNN is used to extract spectral features. The paper [209] introduced an adaptive band selection-based semi-supervised 3D-CNN to jointly exploit spectral-spatial features, whereas [210] explored a dual-attention-based autoencoder-decoder network for unsupervised hyperspectral band selection followed by joint feature extraction for land cover class prediction. Similarly, in [211], spectral-spatial features are simultaneously exploited in an unsupervised manner using a 3D convolutional autoencoder. Pixel-wise land use and land cover (LULC) classification using traditional CNNs often suffers from the presence of wrong/noisy labels in the training set and can easily overfit to label noise. To overcome this problem, [212] proposed a lightweight heterogeneous kernel convolution (HetConv3D) for HSI classification with noisy labels, which effectively combines both spectral and spatial kernel features to produce discriminative and invariant feature maps for classification.
A hybrid 3D-2D-CNN architecture was presented in [213], in which a 3D-CNN is first used to extract joint spectral-spatial features and a 2D-CNN is then used to obtain more abstract spatial contextual features. The study [214] proposed using an adaptive Markov random field (MRF) for HSIC: a CNN first extracts joint spectral-spatial features, and a smooth MRF prior is then placed on the class labels to further refine the spatial details. Convolutional neural networks are greatly affected by overfitting and vanishing-gradient problems; to overcome this, a separable attention network was introduced in [215], where the input feature maps are divided into several groups and split along the channel dimension, and an attention mask finally encodes global contextual information by combining them. Recently, generalized gradient centralized $3D$ convolution (G2C-Conv3D) was introduced in [216] to combine both the intensity-level semantic information and the gradient-level detailed information extracted from raw HSIs during the convolution operation. To boost the performance of accurate land-cover classification, G2C-Conv3D can be easily plugged into existing HSI feature extraction networks.
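The 3D-then-2D pattern described above can be sketched as follows, in PyTorch; the kernel sizes, channel counts, and patch dimensions are illustrative assumptions rather than the exact configuration of any cited architecture.

\begin{verbatim}
import torch
import torch.nn as nn

class Hybrid3D2DCNN(nn.Module):
    # 3D convolutions extract joint spectral-spatial features; the
    # spectral axis is then folded into channels for a 2D spatial stage.
    def __init__(self, bands=30, classes=16):
        super().__init__()
        self.conv3d = nn.Sequential(
            nn.Conv3d(1, 8, (7, 3, 3), padding=(3, 1, 1)), nn.ReLU(),
            nn.Conv3d(8, 16, (5, 3, 3), padding=(2, 1, 1)), nn.ReLU(),
        )
        self.conv2d = nn.Sequential(
            nn.Conv2d(16 * bands, 64, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(64, classes))

    def forward(self, x):               # x: (batch, 1, bands, H, W)
        f = self.conv3d(x)              # (batch, 16, bands, H, W)
        b, c, d, h, w = f.shape
        f = f.reshape(b, c * d, h, w)   # fold spectral axis into channels
        return self.head(self.conv2d(f))

model = Hybrid3D2DCNN()
print(model(torch.randn(2, 1, 30, 9, 9)).shape)   # torch.Size([2, 16])
\end{verbatim}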
§.§ GCN frameworks for HSIC
Graph Convolutional Networks (GCNs) [217] have been garnering increasing attention from researchers in various application fields, owing to their flexible and diversified network architecture, which is capable of processing non-grid high-dimensional data. Such properties provide new insights and possibilities for processing hyperspectral data more effectively and efficiently. In detail, GCNs enable the modeling of relations between data points (or samples), which naturally motivates using GCNs to capture the spatial relations of spectral signatures in HSIs. Due to the GCNs' limitations in graph construction [218], particularly for large graphs (which require expensive computation), GCNs fail to classify or identify materials in large-scale hyperspectral scenes using normal PCs, which has made them relatively less popular than CNNs for HSIC. For this reason, there has been some tentative research using GCNs for the HSIC task.
For example, a second-order GCN was proposed in [219] that models spatial-spectral relations on manifolds for HSIC while attempting to reduce the computational cost on graphs. The authors of [220] first applied superpixel segmentation techniques to HSIs and fed superpixels instead of pixels into GCNs. This enables GCN training on a large number of HSI pixels, with application to the land cover classification task. Nevertheless, these methods still fail to solve the fundamental problem of GCNs. To this end, Hong et al. [218] proposed a novel miniGCN. As the name suggests, miniGCN trains GCNs in a mini-batch fashion, the same as CNNs. The proposed miniGCN not only reduces the computational cost effectively but also makes quantitative comparison and fusion with CNNs possible, further yielding a FuNet for HSIC.
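For reference, a single graph convolution of the widely used form $H' = \sigma(\hat{D}^{-1/2}(A+I)\hat{D}^{-1/2} H W)$ [217] can be sketched as follows; the node features, adjacency, and dimensions are illustrative assumptions (nodes could be pixels or superpixels, as in the works above).

\begin{verbatim}
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    # One GCN layer: add self-loops, symmetrically normalize the
    # adjacency, then propagate and transform the node features.
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.weight = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, x, adj):
        a_hat = adj + torch.eye(adj.size(0))
        deg_inv_sqrt = a_hat.sum(dim=1).pow(-0.5)
        norm = deg_inv_sqrt.unsqueeze(1) * a_hat * deg_inv_sqrt.unsqueeze(0)
        return torch.relu(norm @ self.weight(x))

# Toy graph: 5 nodes with 200-band spectra, random symmetric adjacency.
x = torch.randn(5, 200)
adj = (torch.rand(5, 5) > 0.5).float()
adj = ((adj + adj.t()) > 0).float()
print(GraphConv(200, 16)(x, adj).shape)   # torch.Size([5, 16])
\end{verbatim}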
§.§ Future directions for CNN-based HSIC
In the preceding section, we reviewed the recent developments of CNNs for HSIC. Although CNN-based HSIC frameworks have achieved great success in classification performance, many aspects still need further investigation. For instance, further work is needed on models that can jointly employ spatial and spectral information for HSIC. Many of the above-surveyed frameworks use dimensionality reduction methods to achieve a better spectral-spatial representation, but such approaches discard useful spectral information of the HSI. Hence, the development of robust HSIC approaches that preserve spectral information is required. However, such approaches increase the computational burden and slow down the training process; therefore, parallel processing of such networks using FPGAs and GPUs is desired in order to achieve computationally fast models, even suitable for mobile platforms, without performance degradation.
Moreover, as CNNs become deeper and deeper, more labeled training data is required for accurate classification, and, as discussed before, there is a lack of labeled training data for HSI. To overcome this issue, more research is required on integrating CNNs with unsupervised or semi-supervised approaches. Furthermore, more attention should be paid to the generalization ability of CNNs, particularly with regard to the input data format (not limiting it to grid data). GCNs might be a good solution to combine with CNNs to develop a more general CNN-based framework. In this way, we expect to further break the performance bottleneck, yielding more efficient HSIC.
§ AUTOENCODERS (AE)
The Autoencoder (AE) is a popular symmetrical neural network for HSIC due to its unsupervised feature learning capability. The AE itself does not perform a classification task; instead, it provides a compressed feature representation of the high-dimensional HSI data. An AE consists of an input layer, one hidden or encoding layer, one reconstruction or decoding layer, and an output layer, as shown in Figure <ref>. The AE is trained to encode the input into a latent representation from which the input can be reconstructed. To learn a compressed feature representation of the input data, the AE tries to reduce the reconstruction error, i.e., to minimize the difference between the input and the output.
A general Autoencoder Architecture
The Stacked Autoencoder (SAE) is built by stacking multiple AE layers such that the output of one layer serves as the input of the subsequent layer. The denoising autoencoder (DAE) is a variant of the AE with a similar structure except for the input data: in a DAE, the input is corrupted by adding noise, while the target output is the original noise-free signal. Therefore, a DAE, unlike an AE, can recover the original input from a noisy input signal.
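The following is a minimal sketch of this idea, in PyTorch, covering both the plain reconstruction objective and the DAE variant; the band count, code size, learning rate, and noise level are illustrative assumptions.

\begin{verbatim}
import torch
import torch.nn as nn

class SpectralAE(nn.Module):
    # Encode a pixel's spectral vector into a small code, then decode
    # it back; training minimizes the reconstruction error.
    def __init__(self, bands=200, code=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(bands, code), nn.ReLU())
        self.decoder = nn.Linear(code, bands)

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = SpectralAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(64, 200)                       # unlabeled spectra

# DAE variant: corrupt the input but reconstruct the clean x.
noisy = x + 0.1 * torch.randn_like(x)
opt.zero_grad()
loss = nn.functional.mse_loss(model(noisy), x)
loss.backward()
opt.step()
print(float(loss))
\end{verbatim}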
To learn high-level representations from data, the work [221] proposed a combination of multi-layer AEs with maximum noise fraction, which reduces the spectral dimensionality of HSI, while a softmax logistic regression classifier is employed for HSIC. The study reported in [222] combined the multi-manifold learning framework proposed by [223] with a contractive autoencoder [224] for improved unsupervised HSIC. The work [225] jointly exploited spectral-spatial features of HSI through an unsupervised feature extraction framework composed of a recursive autoencoder (RAE) network. It extracts features from the neighborhood of the target pixel, and weights are assigned based on the spectral similarity between the target and neighboring pixels. A two-stream DNN with a class-specific fusion scheme was introduced in [226], which learns the fusion weights adaptively: one stream, composed of a stacked denoising autoencoder, extracts spectral features, and the second stream extracts spatial information using a Convolutional Neural Network (CNN), while the final classification is performed by fusing the class prediction scores obtained from both streams.
Another work proposed a hybrid architecture for multi-feature based spectral-spatial HSIC which utilizes PCA for dimensionality reduction, guided filters [227] to obtain spatial information, and sparse AE for high-level feature extraction. The framework proposed in [228] exploited both spectral and spatial information for HSIC by adopting batch-based training of AEs and features are generated by fusing spectral and spatial information via a mean pooling scheme. Another work [229] developed a spectral-spatial HSIC framework by extracting appropriate spatial resolution of HSI and utilization of stacked sparse AE for high-level feature extraction followed by Random Forest (RF) for the final classification task.
Similarly, [230] also used stacked sparse AEs for various types of representations, i.e., spectral-spatial and multi-fractal features, along with other higher-order statistical representations. A combination of SAE and extreme learning machines was proposed in [231] for HSIC, which segments the features of the training set and transforms them via SAE; after transformation, the feature subsets are rearranged according to the original order of the training set and fed to extreme learning machine-based classifiers, while Q-statistics is used for the final classification result. This processing of feature subsets helps to improve the variance among base classifiers [231]. Similarly, a recent work [232] implemented a computationally efficient multi-layer extreme learning machine-based AE that learns the features in three folds, as proposed in [39], for HSIC.
To overcome the issue of high intra-class variability and high inter-class similarity in HSI, [233] developed an SAE-based HSIC method that can learn compact and discriminative features by imposing a local Fisher discriminant regularization. Similarly, in the latest work [234], a k-sparse denoising AE is spliced with spectral-restricted spatial features to overcome the high intra-class variability of spatial features for HSIC. The study [235] proposed an HSIC architecture that first segments the spectrum of the HSI based on a mutual information measure to reduce the computation time during feature extraction via SAE, while spatial information is incorporated using extended morphological profiles (EMPs) and SVM/RF is used for final classification. Recently, [236] used SAE for the classification of an oil slick on the sea surface by jointly exploiting the spectral-spatial features of HSI.
§.§ Future Directions for AE-based HSIC
In the above section, we surveyed the recent developments of AE-based techniques for HSIC. Although such frameworks provide powerful predictive performance and show good generalization capabilities, more sophisticated work is still desired. Many of the discussed approaches do not fully exploit the abundant spatial information, so further techniques need to be developed that can fully employ joint spatial and spectral information for HSIC. Moreover, the issue of high intra-class variability and high inter-class similarity in HSI also hinders classification performance. Many of the above-reviewed works have addressed this issue, but further research to overcome it is required. One direction could be to further explore approaches like pre-training, co-training, and adaptive neural networks for AE-based HSIC frameworks.
§ DEEP BELIEF NETWORK (DBN)
Deep Belief Network (DBN) [237] is a hierarchical deep DNN that learns the features from input in an unsupervised, layer-by-layer approach. The layers in DBN are built using Restricted Boltzmann Machine (RBM) comprised of a two-layer architecture in which visible units are connected to hidden units [238] as shown in Figure <ref>.
Basic architecture of RBM
A detailed overview of RBMs can be found in [238]. To extract more comprehensive features from input data, the hidden units of one RBM can be fed to the visible units of another RBM. This type of layer-by-layer architecture builds a DBN, which is trained greedily and can capture deep features from HSI. The architecture of a three-layer DBN is shown in Figure <ref>.
A three layer DBN architecture
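The greedy layer-wise training mentioned above typically fits each RBM with contrastive divergence; a bare-bones CD-1 step for a binary RBM is sketched below, where the layer sizes, learning rate, and random binary inputs are illustrative assumptions.

\begin{verbatim}
import torch

visible, hidden, lr = 200, 64, 0.01
W = torch.randn(visible, hidden) * 0.01
b_v, b_h = torch.zeros(visible), torch.zeros(hidden)

def sample(p):                 # Bernoulli sampling of unit activations
    return (torch.rand_like(p) < p).float()

v0 = (torch.rand(32, visible) > 0.5).float()   # stand-in binary inputs

# Positive phase: hidden probabilities given the data.
ph0 = torch.sigmoid(v0 @ W + b_h)
h0 = sample(ph0)
# Negative phase: one Gibbs step down to visible units and back up.
pv1 = torch.sigmoid(h0 @ W.t() + b_v)
ph1 = torch.sigmoid(pv1 @ W + b_h)

# CD-1 gradient approximation and parameter update.
W += lr * (v0.t() @ ph0 - pv1.t() @ ph1) / v0.size(0)
b_v += lr * (v0 - pv1).mean(0)
b_h += lr * (ph0 - ph1).mean(0)
\end{verbatim}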
In the literature, several works have implemented DBNs for HSIC. For instance, [239] used a DBN for land cover classification by combining spectral-spatial information and comparing it with other classification approaches. The usual learning process of a DBN involves two steps: unsupervised pre-training with unlabeled samples, followed by supervised fine-tuning with the help of labeled samples. However, this training process may cause two problems: first, multiple hidden units may tend to respond similarly [240] due to co-adaptation [241]; second, issues linked to the sparsity and selectivity of neuron activations, that is, some neurons may be always dead or always responding [242]. To mitigate these two problems, [243] introduced a diversified DBN model that regularizes the pre-training and fine-tuning processes by imposing a diversity prior to enhance the DBN's classification accuracy for HSI.
To extract efficient texture features for HSIC, the work [244] proposed a DBN-based texture feature enhancement framework that combines band grouping and a sample band selection approach with a guided filter to enhance the texture features, which are then learned by a DBN model; final classification results are obtained by a softmax classifier. The work [245] implemented a parallel-layers framework consisting of Gaussian-Bernoulli RBMs, which extracts high-level, locally invariant, and nonlinear features from HSI, with a logistic regression layer used for classification.
To improve classification accuracy, some works have considered jointly exploiting the spectral and spatial information contained in HSI. For instance, [246] introduced a DBN framework with a logistic regression layer and verified that the joint exploitation of spectral-spatial features leads to improved classification accuracy. Similarly, [247] proposed a spectral-spatial graph-based RBM method for HSIC, which constructs the spectral-spatial graph through joint similarity measurement based on spectral and spatial details; an RBM is then trained to extract useful joint spectral-spatial features from the HSI, and finally, these features are passed to a DBN and a logistic regression layer for classification.
§.§ Future directions for DBN-based HSIC
In the preceding section, we have reviewed the latest developments of DBN-based HSIC frameworks. We have observed that relative to other DNNs, very few works have utilized the DBNs for HSIC. Therefore, there is a need to further explore the DBN-based robust techniques that can jointly employ spatial and spectral features for HSIC. In addition, another research direction can be the regularization of the pretraining and fine-tuning processes of DBN to efficiently overcome the issue of dead or potentially over-tolerant (always responding) neurons.
§ RECURRENT NEURAL NETWORK (RNN)
The architecture of the Recurrent Neural Network (RNN), shown in Figure <ref>, comprises loop connections, where the node activation at the next step depends on the previous step [248]. Therefore, RNNs are capable of learning temporal sequences. RNN models process the spectral information of HSI data as a time sequence, treating the spectral bands as time steps [249]. There are three basic RNN models: a) Vanilla, b) Long Short-Term Memory (LSTM), and c) Gated Recurrent Unit (GRU).
RNN architecture
Vanilla is the simplest RNN model and suffers from information degradation while processing high-dimensional data. LSTM models, composed of two states, overcome this issue by controlling the information flow through three gates: the input, forget, and output gates. An LSTM learns the relevant information over time by discarding extraneous information. However, this gate-controlling strategy makes the LSTM considerably complex. The GRU variant of LSTM enjoys the simplicity of the Vanilla model while providing performance similar to the LSTM. The GRU is a simpler version of the LSTM that merges the input and forget gates into an update gate ($z_t$), adds a reset gate ($r_t$), and removes the output gate. A comparison of the internal architectures of LSTM and GRU is presented in Figure <ref>.
Internal architecture of LSTM and GRU
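For reference, the standard GRU update can be written in terms of the update gate $z_t$ and reset gate $r_t$ mentioned above (following the common formulation; bias terms are omitted for brevity):

\begin{equation}
z_t = \sigma(W_z x_t + U_z h_{t-1}), \qquad r_t = \sigma(W_r x_t + U_r h_{t-1})
\end{equation}
\begin{equation}
\tilde{h}_t = \tanh(W_h x_t + U_h (r_t \odot h_{t-1})), \qquad h_t = z_t \odot h_{t-1} + (1 - z_t) \odot \tilde{h}_t
\end{equation}

where $x_t$ is the input at step $t$ (e.g., the $t$-th spectral band), $h_t$ is the hidden state, $\sigma$ is the sigmoid function, and $\odot$ denotes element-wise multiplication.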
The work [70] proposed an RNN-based HSIC framework with a novel activation function (parametric rectified tanh) and GRU, which utilizes the sequential property of HSI to determine the class labels. In [142], a local spatial sequential (LSS) method-based RNN framework was introduced, which first extracts low-level features from HSI using Gabor filters and differential morphological profiles [143] and then fuses these features to obtain the LSS features; these LSS features are further passed to an RNN model to extract high-level features, while a softmax layer is used for final classification.
Keeping in view the usefulness of spatial information for achieving improved classification accuracy, the work [250] proposed a spectral-spatial LSTM-based network that learns the spectral and spatial features of HSI by utilizing two separate LSTMs, each followed by a softmax layer, for classification, while a decision fusion strategy is implemented to obtain joint spectral-spatial classification results. Similarly, [251] proposed a patch-based RNN with LSTM cells that incorporates multi-temporal and multi-spectral information along with spatial characteristics for land cover classification.
In literature, several works proposed CNN-based hybrid RNN architectures (CRNN) for HSIC. For instance, [175] implemented a convolutional RNN in which the first few CONV layers are employed to extract position invariant middle-level features, and then recurrent layers are used to extract spectral-contextual details for HSIC. Similarly, [252] utilized such a model for semi-supervised HSIC by using pseudo labels. The study [253] suggested an HSIC framework in which CNN is used to extract spatial features from HSI, then these features are passed to a GRU-based fusion network that performs feature level and decision level fusion.
Similarly, Luo et al. [254] exploited both the spectral and spatial information contained in HSI by combining a CNN with a parallel GRU-based RNN, which simplifies GRU training and improves performance. A Bidirectional Convolutional LSTM (CLSTM) was proposed in [153] to jointly exploit the spectral-spatial features of HSI for classification. In [255], the authors combined multiscale local spectral-spatial features extracted by a 3D-CNN with a hierarchical RNN that learns the spatial dependencies of local spectral-spatial features at multiple scales. Recurrent 2D-CNN and recurrent 3D-CNN models for HSIC were proposed in [256], along with an interesting comparison of these frameworks with their corresponding 2D- and 3D-CNN models, which validates the superiority of recurrent CNNs. The work [257] integrated a CNN with CLSTM, in which a 3D-CNN model captures low-level spectral-spatial features and the CLSTM recurrently analyzes this low-level spectral-spatial information. Recently, [70] introduced a cascaded RNN for HSIC consisting of two layers of GRU-based RNNs: the first layer reduces the redundant spectral bands and the second learns the features from the HSI; furthermore, a few convolutional layers are employed to incorporate the rich spatial information contained in HSI.
§.§ Future directions for RNN-based HSIC
In the above section, we surveyed the recent developments of RNN-based techniques for HSIC. Although RNN-based HSIC frameworks have attracted considerable attention in the remote sensing community and achieved great classification performance, many aspects still need further investigation, for instance, the construction of sequential input data for RNNs. Most of the surveyed methods treat an HSI pixel as a sequential point, i.e., the pixel values from each spectral band form a data sequence. However, this makes the length of the RNN's input sequence considerably large, which can lead to overfitting.
Moreover, processing such large data sequences increases the computational time and the learning process becomes slower. Therefore, the use of parallel processing tools needs to be further investigated to achieve good generalization performance of RNN-based HSIC. In addition, approaches like a grouping of spectral bands to decrease the data sequence length and utilization of the entire spectral signature to better discriminate between various classes can further be explored to construct the sequential input of the RNN model. Another interesting future direction may involve the implementation of RNN-based HSIC frameworks in a real multi-temporal HSI context.
§ STRATEGIES FOR LIMITED LABELED SAMPLES
Although DNNs have been successfully exploited for the task of HSIC, they require a considerably large amount of labeled training data. As discussed earlier, the collection of labeled HSI is critical and expensive due to numerous factors that demand either human experts or the exploration of real-time scenarios. This limited availability of labeled training data hinders classification performance. To overcome the aforesaid issue, many effective strategies have been proposed in the literature. In this section, we briefly discuss some of these strategies, with a focus on active learning algorithms.
§.§ Data Augmentation
To combat the issue of limited training samples, data augmentation has proven to be an effective tool for HSIC. It generates new samples from the original training samples without introducing additional labeling costs. Data augmentation approaches fall into two main strategies: i) data warping and ii) oversampling [258]. Data warping encodes several invariances (translational, size, viewpoint, and/or illumination) by applying geometric and color-based transformations while preserving the labels, whereas oversampling-based methods inflate the training data by generating synthetic samples based on the original data distribution. Oversampling techniques include mixture-based instance generation, feature-space augmentation [258], and Generative Adversarial Networks (GANs) [259].
Referring to the HSIC literature, several data-augmentation-based frameworks have been employed to improve classification performance by avoiding the overfitting generally caused by limited training data. For instance, [260] enhanced the training data using three augmentation operations (flip, rotate, and translate) and exploited the enlarged set to train a CNN for HSIC (a minimal sketch of such geometric augmentation follows). The work [261] presented a comprehensive comparison of widely used HSI data augmentation techniques and proposed a pixel-block-pair-based augmentation that utilizes both the spectral and spatial information of HSI to synthesize new instances for training a CNN model. The work [262] compared the classification performance of a CNN-AL combination with and without data augmentation and demonstrated that augmentation leads to higher classification accuracies. Similarly, in another comparison [263], a data-augmentation-based CNN exhibited a 10% increase in HSIC accuracy over a PCA-based CNN model.
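As an illustration of the label-preserving geometric operations mentioned above (flip, rotate, translate), the following sketch augments a single HSI patch. It is a generic example, not the exact pipeline of [260]; the translation here is a circular shift for brevity.

```python
import numpy as np

def augment_patch(patch: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Label-preserving geometric augmentation of one HSI patch.

    patch: (height, width, bands). Applies a random horizontal flip, a random
    90-degree rotation, and a small random spatial translation (circular shift).
    """
    if rng.random() < 0.5:                        # horizontal flip
        patch = patch[:, ::-1, :]
    patch = np.rot90(patch, k=rng.integers(4), axes=(0, 1))
    dy, dx = rng.integers(-2, 3, size=2)          # translation in [-2, 2] pixels
    patch = np.roll(patch, shift=(dy, dx), axis=(0, 1))
    return patch.copy()

rng = np.random.default_rng(0)
patch = np.random.rand(9, 9, 103)                 # e.g., a 9x9 PU patch
augmented = [augment_patch(patch, rng) for _ in range(4)]  # 4 new samples
```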
The above-discussed methods use offline augmentation, creating new instances before or during model training. Recently, a novel augmentation framework for HSI was proposed in [264] which, rather than inflating the training data, generates samples at test time; a DNN trained on the original data, together with a voting scheme, produces the final class label. To improve the generalization capability of DNN models, [264] also proposed two fast augmentation techniques for high-quality data synthesis. A similar PCA-based online augmentation strategy is proposed in [265], which also synthesizes new instances during inference instead of training.
§.§ Semi-Supervised/Unsupervised Learning
Semi-Supervised Learning (SSL) approaches learn the data distribution by jointly exploiting both labeled and unlabeled data. These techniques expand the training data by utilizing unlabeled samples alongside labeled ones in order to construct a relationship between the feature space and the class labels. Several SSL-based HSIC frameworks have been proposed in the literature and can mainly be categorized as: i) co-training, ii) self-training, iii) GANs, iv) graph-based SSL models, and v) semi-supervised SVMs. A recent comprehensive survey of these SSL techniques can be found in [266], and another in-depth survey of SSL approaches is presented in [267].
The SSL-based HSIC techniques are briefly summarized in [268], where the authors also make a detailed comparison of these methods. The method presented in [252] used pseudo- or cluster-labeled samples to pre-train a CRNN for HSIC, with a small labeled set used to fine-tune the network. Similarly, [156] proposed a semi-supervised HSIC framework that exploits PCA and extended morphological attribute profiles to extract pseudo-labeled samples, which are fed to a CNN-based deep feature fusion network.
The work [269] proposed a dual-strategy co-training approach based on the spectral and spatial features of HSI. Similarly, [270] separately pre-trained two SAEs, one on spectral and the other on spatial features, and fine-tuned them via co-training. [271] proposed a region-information-based self-training approach to enhance the training data. A graph-based self-training framework was developed in [272], where initial sampling is achieved through subtractive clustering. Recently, [157] improved HSIC performance by pseudo-labeling unlabeled samples through a clustering-based self-training mechanism regulated by spatial constraints; a minimal self-training sketch follows.
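The self-training works above share a common loop: train on the labeled set, pseudo-label the most confident unlabeled samples, and retrain. A minimal sketch follows, using scikit-learn; the classifier choice and the confidence threshold are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_lab, y_lab, X_unlab, rounds=3, threshold=0.95):
    """Minimal self-training loop: pseudo-label high-confidence unlabeled
    pixels and fold them into the training set each round."""
    clf = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        clf.fit(X_lab, y_lab)
        if len(X_unlab) == 0:
            break
        proba = clf.predict_proba(X_unlab)
        confident = proba.max(axis=1) >= threshold
        if not confident.any():
            break
        pseudo = clf.classes_[proba[confident].argmax(axis=1)]
        X_lab = np.vstack([X_lab, X_unlab[confident]])
        y_lab = np.concatenate([y_lab, pseudo])
        X_unlab = X_unlab[~confident]           # shrink the unlabeled pool
    return clf
```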
§.§ Generative Adversarial Network (GAN)
The GAN, proposed by [273], comprises two neural networks, a generator and a discriminator (Figure <ref>). GANs learn to replicate samples by exploiting the details of the data distribution. The work [274] proposed a spectral-feature-based GAN for SSL-based HSIC.
A general architecture of Generative Adversarial Network (GAN)
Similarly, [275] proposed a GAN-based spectral-spatial HSIC framework, and [276] developed CNN-based 1D-GAN and 3D-GAN architectures to enhance classification performance. A customized 1D-GAN generates spectral features in [277], which a CNN then uses for feature extraction before majority voting produces the HSIC result. Very recently, [278] introduced a spatial-spectral multi-class GAN (MSGAN) that utilizes two generators to produce spatial and spectral information with the help of multiple adversarial objectives. To address the data imbalance problem in HSIC, [279] proposed a new semi-supervised model that combines GANs with conditional random fields (CRFs).
Similarly, [280] investigated a Caps-TripleGAN model that generates new samples using a 1D-structure Triple Generative Adversarial Network (TripleGAN) and classifies the generated HSI samples using a capsule network (CapsNet). The work [281] proposed a 3D-CNN-based generator network and a 3D deep residual network-based discriminator network for HSIC. To learn high-level contextual features, a discriminator combining a capsule network with convolutional long short-term memory (ConvLSTM) was proposed in [282] for HSIC.
The work [283] addressed the scarcity of training examples with a GAN model in which an auxiliary classifier further improves the discriminator, producing more structurally coherent virtual training samples. In addition, [284] proposed a generative adversarial minority-oversampling technique to address the long-standing class-imbalance problem in HSIC.
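The following sketch (Python/PyTorch) shows one adversarial training step for generating synthetic pixel spectra, in the spirit of the spectral-feature GANs above. The network sizes, latent dimension, and random stand-in data are illustrative assumptions, not the architecture of any cited work.

```python
import torch
import torch.nn as nn

n_bands, z_dim = 103, 32                         # e.g., PU spectra; latent size
G = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(),
                  nn.Linear(128, n_bands), nn.Sigmoid())      # generator
D = nn.Sequential(nn.Linear(n_bands, 128), nn.LeakyReLU(0.2),
                  nn.Linear(128, 1))                          # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.rand(64, n_bands)                   # stand-in for real HSI spectra
z = torch.randn(64, z_dim)

# Discriminator step: push real spectra toward 1, generated spectra toward 0.
fake = G(z).detach()
loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: fool the discriminator into labeling fakes as real.
loss_g = bce(D(G(z)), torch.ones(64, 1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```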
§.§ Transfer Learning
Transfer learning enhances the performance of a model by using prior knowledge of a relevant primary task to perform a secondary task. In other words, information extracted from a relevant source domain is transferred to the target domain to learn unseen/unlabeled data. Transfer learning can therefore be effectively employed in domains with insufficient or no training data. Based on the availability of labeled training instances, transfer learning frameworks can be categorized as supervised or unsupervised. Generally, the source and target domains are assumed to be related but not exactly similar; they may follow different distributions, as in HSIC where the categories of interest are the same but the data in the two domains vary due to different acquisition circumstances.
In DNN-based HSIC, the model learns features hierarchically: when trained on various images, the lower layers usually extract generic features. The features learned by these layers can therefore be transferred to learn a new classifier for the target dataset. For instance, [285] pre-trained a two-branch spectral-spatial CNN model with an ample amount of training data from other HSIs and then transferred the lower layers of the pre-trained model to the target network for robust classification of the target HSI. To learn target-specific features, the higher layers of the target network are randomly initialized, and the whole network is fine-tuned using the limited labeled instances of the target HSI (a minimal sketch of this pattern follows). Similarly, [286] proposed a suitable method to pre-train and fine-tune a CNN for the classification of new HSIs. The study [287] combined data augmentation and transfer learning to combat the shortage of training data and improve HSIC performance.
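A minimal sketch of this reuse-and-fine-tune pattern is given below, assuming a model object that exposes `features` (lower layers) and `head` (a Linear classifier); these attribute names are hypothetical, chosen only for illustration.

```python
import torch.nn as nn

def prepare_for_target(pretrained: nn.Module, n_target_classes: int,
                       freeze_features: bool = True) -> nn.Module:
    """Reuse pre-trained lower layers; reinitialize the classifier head.

    Assumes the model exposes `features` (lower CONV layers, transferred) and
    `head` (a Linear classifier, replaced) -- illustrative attribute names.
    """
    if freeze_features:
        for p in pretrained.features.parameters():
            p.requires_grad = False              # keep generic source features
    in_dim = pretrained.head.in_features
    pretrained.head = nn.Linear(in_dim, n_target_classes)  # random init
    return pretrained

# Fine-tuning then optimizes only the parameters with requires_grad=True,
# typically with a smaller learning rate than source-domain pre-training.
```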
As discussed before, the data in the source and target domains may vary in many aspects; for instance, the dimensions of two HSIs may differ when they are acquired by different sensors. Handling such cross-domain variations and transferring knowledge between them is known as heterogeneous transfer learning (a detailed survey of such methods can be found in [288]). In the HSIC literature, several works have been proposed to bridge this gap and transfer knowledge between two HSIs with varying dimensions and/or distributions.
For example, [289] proposed an effective heterogeneous-transfer-learning-based HSIC framework that works well with both homogeneous and heterogeneous HSIs, and [290] used an iterative re-weighting-based heterogeneous transfer learning mechanism for HSIC. Similarly, a recent work [291] proposed a band-selection-based transfer learning approach to pre-train a CNN that retains the same number of dimensions across various HSIs. Furthermore, [292] proposed an unsupervised transfer learning technique to classify a completely unknown target HSI, and [293] demonstrated that networks trained on natural images can improve transfer learning for remote sensing data classification compared with networks trained from scratch on smaller HSI data.
§.§ Active Learning
Active Learning (AL) iteratively enhances the predictive performance of a classifier by actively growing the training data from an unlabeled pool of samples. In each iteration, AL selects the most valuable instances from the unlabeled pool, and an oracle (human or machine) assigns true class labels to them. These useful instances are added to the existing training set, and the classifier is retrained on the new training set. The process continues until a stopping criterion is met, which may be the size of the training dataset, the number of iterations, or the desired accuracy score. A general framework of AL is illustrated in Figure <ref>, and a minimal sketch of the loop is given below.
A general overview of Active Learning
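As a concrete illustration of this loop, the following minimal pool-based sketch (Python/scikit-learn) trains a classifier, scores the pool with a pluggable query function, and simulates the oracle with held-back labels. The classifier choice, batch size, and iteration count are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def active_learning_loop(X_lab, y_lab, X_pool, y_oracle, score_fn,
                         batch: int = 10, iterations: int = 5):
    """Generic pool-based AL loop. `score_fn(clf, X_pool)` returns one
    usefulness score per pool sample; `y_oracle` plays the human annotator."""
    clf = SVC(probability=True)
    for _ in range(iterations):
        clf.fit(X_lab, y_lab)
        idx = np.argsort(score_fn(clf, X_pool))[-batch:]   # most useful samples
        X_lab = np.vstack([X_lab, X_pool[idx]])            # oracle labels them
        y_lab = np.concatenate([y_lab, y_oracle[idx]])
        keep = np.ones(len(X_pool), dtype=bool)
        keep[idx] = False                                  # remove from pool
        X_pool, y_oracle = X_pool[keep], y_oracle[keep]
    return clf.fit(X_lab, y_lab)
```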
The most useful samples are selected such that they are both informative and representative of the overall input distribution, in order to improve accuracy. Based on the criterion for adding new instances to the training set, AL frameworks can be designated as stream-based or pool-based. In stream-based selection, one instance at a time is drawn from the actual set of unlabeled samples, and the model decides, based on its usefulness, whether or not to label it. In the pool-based strategy, samples are queried from a pool/subset of unlabeled data based on ranking scores computed from various measures of a sample's usefulness.
The work [294] found that stream-based selection yields poorer learning rates than pool-based selection, as the former tends to query extra instances. In pool-based selection, it is important to incorporate diversity in the pool in order to avoid redundancy. Generally, three aspects are considered when querying the most valuable samples: heterogeneity behavior, the model's performance, and the representativeness of samples. A brief introduction to these sampling approaches is given below:
§.§.§ Heterogeneity-based selection
These approaches select the samples that are most heterogeneous with respect to the already-seen instances, in terms of model diversity, classification uncertainty, and disagreement among a committee of classifiers. Uncertainty sampling, expected model change, and query-by-committee are examples of heterogeneity-based models.
* Uncertainty Sampling: The classifier iteratively queries the labels of the samples whose predicted labels it is most uncertain about. The selection of new instances is based on ranking scores against a specified threshold, and the instances with scores closest to that threshold are queried. A simple example of such a scheme is to apply a probabilistic classifier to a sample in a binary classification setting and query its label if the predicted class probability is close to $0.5$ (a scoring sketch follows this list).
* Query-by-Committee: These heterogeneity-based approaches sample based on disagreement in the predictions of several classifiers trained on the same set of labeled samples. A committee of such classifiers predicts the class labels of the unlabeled samples, and the samples on which the classifiers disagree most are queried. The committee can be built using ensemble learning algorithms like bagging and boosting [295] or by varying the model parameters [296]. Generally, a small number of diverse classifiers is adequate for constructing a committee [297, 295].
* Expected Model Change: This heterogeneity-based approach chooses the instances that would cause the most significant change to the current model in terms of the gradient of the objective function, i.e., it queries the labels of instances that differ considerably from the current model. These sampling techniques only fit models trained with gradient-based optimization.
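As referenced in the uncertainty-sampling item above, the following sketch gives two illustrative heterogeneity scores that can be plugged into the AL loop sketched earlier: least-confidence uncertainty and query-by-committee vote entropy. Both are standard formulations, not taken from any specific cited work.

```python
import numpy as np

def least_confidence(clf, X_pool):
    """Uncertainty sampling: high score = low confidence in the top class."""
    proba = clf.predict_proba(X_pool)
    return 1.0 - proba.max(axis=1)

def vote_entropy(committee, X_pool):
    """Query-by-committee: high score = strong disagreement among members."""
    votes = np.stack([m.predict(X_pool) for m in committee])   # (members, n)
    scores = np.zeros(X_pool.shape[0])
    for i in range(X_pool.shape[0]):
        _, counts = np.unique(votes[:, i], return_counts=True)
        p = counts / counts.sum()
        scores[i] = -(p * np.log(p)).sum()                     # vote entropy
    return scores
```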
§.§.§ Performance-based Selection
Such methods consider the effect of adding the queried samples on the model's performance and try to optimize that performance by reducing variance and error. There are two types of performance-based sampling:
* Expected Error Reduction: This approach is related to uncertainty sampling, but whereas uncertainty measures maximize the label uncertainty of the sample to be queried, expected error reduction minimizes the label uncertainty of the queried sample. Referring to the binary classification example above, expected error reduction would choose the samples with a predicted probability far from $0.5$ in order to reduce the error rate. Such techniques are also known as greatest-certainty models [296].
* Expected Variance Reduction: Reducing the variance of the model is guaranteed to reduce future generalization error [298]. Therefore, expected variance reduction techniques attempt to indirectly reduce the generalization error by minimizing the model variance. Such approaches query the instances that result in the lowest model variance. The Fisher information ratio is a well-known variance minimization framework.
§.§.§ Representativeness-based selection
Heterogeneity-based models are prone to including outliers and controversial samples, whereas performance-based approaches implicitly avoid such samples by estimating future error. Representativeness-based sampling queries instances that are representative of the overall input distribution, thereby avoiding outliers and unrepresentative samples. These approaches weight dense input regions more highly during the querying process. Density-weighted techniques such as information density, which consider the representativeness of samples along with heterogeneity behavior, are also known as hybrid models [296].
Recently, AL has been intensively utilized in HSIC. [299] proposed a feature-driven AL framework to define a well-constructed feature space for HSIC. [300] proposed a Random-Forest-based semi-supervised AL method that exploits spectral-spatial features to define a query function for selecting the most informative samples as training candidates.
Spatial information has been intensively exploited in many AL-based HSIC frameworks. For instance, [301] presented an AL framework that splices together the spectral and spatial features of superpixels. Similarly, [302] considered neighborhood and superpixel information to enhance the uncertainty of the queried samples. In recent work, [303] exploited attribute profiles to incorporate spatial information in an AL-based HSIC framework.
Batch-mode AL frameworks have been widely employed to accelerate the learning process. Such approaches select a batch of samples to be queried in each iteration, so sample diversity is critical to avoid redundancy within a batch. A multi-criteria batch-mode AL method proposed by [304] defines a novel query function based on diversity, uncertainty, and cluster-assumption measures; these criteria exploit the properties of KNN, SVM, and K-means clustering, respectively, and genetic algorithms finally choose the most effective batch of samples. Similarly, [305] proposed a regularized multi-metric batch-mode AL framework for HSIC that exploits various features of HSI.
A multiview AL (MVAL) framework was proposed in [306] that analyzes the object from various views and measures sample informativeness through multiview intensity-based query criteria. Similarly, [307] exploited multiview learning using the Fisher discriminant ratio to generate multiple views. In another work, [308] proposed a novel adaptive MVAL framework for HSIC that jointly exploits the spatial and spectral features in each view. Recently, [309] proposed an MVAL technique that utilizes pixel-level, subpixel-level, and superpixel-level details to generate multiple views for HSIC; it further exploits joint posterior probability estimation and the dissimilarities among views to query representative samples.
Several works in the HSIC literature have combined AL with DNNs. For instance, [310] joined an autoencoder with an AL technique, and [311] proposed a DBN-based AL framework for HSIC. Similarly, [312] coupled a Bayesian CNN with the AL paradigm for spectral-spatial HSIC. Recently, [262] proposed a CNN-based AL framework to better exploit unlabeled samples for HSIC.
Many works have integrated AL with transfer learning for HSIC. For example, [313] proposed an AL-based transfer learning framework that extracts salient samples and exploits high-level features to correlate the source and target domain data. [314] proposed a stacked sparse AE-based active transfer learning technique that jointly utilizes spectral and spatial features for HSIC, and [315] combined domain adaptation and AL methods based on multiple kernels for HSIC.
AL-based HSIC also offers sophisticated frameworks to enhance the generalization capability of models. For instance, [35] proposed a fuzziness-based AL method to improve the generalization performance of discriminative and generative classifiers; it computes each instance's fuzziness and its distance from the estimated class boundary, and the instances with greater fuzziness values and smaller boundary distances become training candidates. Recently, [316] proposed a non-randomized spectral-spatial AL framework for multiclass HSIC that combines a spatial-prior fuzziness approach with multinomial logistic regression via a splitting and augmented Lagrangian classifier, together with a comprehensive comparison against state-of-the-art sample selection methods and diverse classifiers.
§ EXPERIMENTAL EVALUATION
Most research-oriented works in the literature present a comprehensive experimental evaluation to highlight the pros and cons of the proposed methods. However, these works may use different experimental settings: the training, validation, and test sets may contain the same number or percentage of samples, yet the samples themselves differ because they are normally chosen at random. Therefore, a fair comparison among different works requires identical experimental settings.
These settings include using the same samples (the geographical locations must remain the same for all models, not different ones) and the same number of samples for each round of training in the cross-validation process. Since these samples are normally chosen randomly, they are highly likely to differ between models executed at different times.
Another issue in much of the recent literature is overlap between training and test samples: training/validation samples are randomly selected, but the entire dataset is then passed through the testing phase, which leads to a highly biased model (the training samples have already been seen by the model) and inflated accuracy. In this work, the train/test samples are also chosen randomly (all models are executed at the same time), but this point has been taken seriously and the intersection between these sample sets remains empty; a minimal sketch of such a disjoint split follows.
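A minimal sketch of how such a leakage-free split can be drawn and verified is given below, assuming the ground truth is a 2D label map with 0 marking the background. Real protocols additionally stratify per class, which is omitted here for brevity.

```python
import numpy as np

def split_disjoint(labels: np.ndarray, train_fraction: float, seed: int = 0):
    """Randomly split labeled pixel coordinates into disjoint train/test sets.

    labels: 2D ground-truth map (0 = background). Returns two coordinate
    arrays whose intersection is empty by construction.
    """
    rng = np.random.default_rng(seed)
    coords = np.argwhere(labels > 0)                 # (n, 2) pixel locations
    rng.shuffle(coords)
    n_train = int(train_fraction * len(coords))
    train, test = coords[:n_train], coords[n_train:]
    # Sanity check: no geographical location may appear in both sets.
    overlap = set(map(tuple, train)) & set(map(tuple, test))
    assert not overlap, "train/test pixel sets must be disjoint"
    return train, test
```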
§.§ Experimental Datasets
The Indian Pines (IP) dataset was gathered by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) [317] over the Indian Pines test site in north-western Indiana. It contains $224$ spectral bands within the wavelength range $400$ to $2500$ $nm$; the $24$ null and corrupted bands have been removed. The spatial size of the image is $145\times{145}$ pixels, it comprises $16$ mutually exclusive vegetation classes, and the spatial resolution is 20 meters per pixel (MPP). The detailed class description and ground-truth maps are presented in Figure <ref>, and the disjoint training/test sample maps in Figures <ref> and <ref>.
[Figure: ground-truth (GT) map with the spatially disjoint training and test sample maps for the IP dataset.]

| Class | Samples | Class | Samples | Class | Samples |
|---|---|---|---|---|---|
| Background | 10776 | Alfalfa | 46 | Corn notill | 1428 |
| Corn min | 830 | Corn | 237 | Grass/Pasture | 483 |
| Grass/Trees | 730 | Grass/pasture-mowed | 28 | Hay windrowed | 478 |
| Oats | 20 | Soybeans notill | 972 | Soybeans min | 2455 |
| Soybean clean | 593 | Wheat | 205 | Woods | 1265 |
| Bldg Grass Tree Drives | 386 | Stone steel towers | 93 | Total samples | 21025 |

The land-cover classes and number of available samples in the Indian Pines (IP) dataset. Spatially disjoint training and test samples for the IP dataset are also presented.
The Kennedy Space Center (KSC) dataset was gathered in 1996 by AVIRIS [317], with wavelengths ranging from $400$ to $2500$ $nm$. The image has $512\times{614}$ pixels and $176$ spectral bands after the removal of some low signal-to-noise ratio (SNR) bands. The KSC dataset comprises $5211$ labeled samples in $13$ upland and wetland classes. The detailed class description and ground-truth maps are presented in Figure <ref>, and the disjoint training/test sample maps in Figures <ref> and <ref>.
[Figure: ground-truth (GT) map with the spatially disjoint training and test sample maps for the KSC dataset.]

| Class | Samples | Class | Samples | Class | Samples |
|---|---|---|---|---|---|
| Background | 309157 | Scrub | 761 | Willow swamp | 243 |
| CP hammock | 256 | Slash pine | 252 | Oak/Broadleaf | 161 |
| Hardwood | 229 | Swamp | 105 | Graminoid marsh | 431 |
| Spartina marsh | 520 | Cattail marsh | 404 | Salt marsh | 419 |
| Mud flats | 503 | Water | 927 | Total samples | 314368 |

The land-cover classes and number of available samples in the Kennedy Space Center (KSC) dataset. Spatially disjoint training and test samples for the KSC dataset are also presented.
The University of Pavia (UP) dataset was acquired by the Reflective Optics System Imaging Spectrometer (ROSIS) sensor during a flight campaign over the university campus at Pavia, northern Italy [318]. It consists of $610\times{340}$ pixels with $103$ spectral bands in the wavelength range $430$ to $860$ $nm$ and a spatial resolution of 1.3 MPP. It comprises 9 urban land-cover classes. The detailed class description and ground-truth maps are presented in Figure <ref>, and the disjoint training/test sample maps in Figures <ref> and <ref>.
[Figure: ground-truth (GT) map with the spatially disjoint training and test sample maps for the PU dataset.]

| Class | Samples | Class | Samples | Class | Samples |
|---|---|---|---|---|---|
| Background | 164624 | Asphalt | 6631 | Meadows | 18649 |
| Gravel | 2099 | Trees | 3064 | Painted metal sheets | 1345 |
| Bare Soil | 5029 | Bitumen | 1330 | Self Blocking Bricks | 3682 |
| Shadows | 947 | Total samples | 207400 | | |

The land-cover classes and number of available samples in the Pavia University (PU) dataset. Spatially disjoint training and test samples for the PU dataset are also presented.
The University of Houston (UH) dataset, collected by the Compact Airborne Spectrographic Imager (CASI), was published by the IEEE Geoscience and Remote Sensing Society in 2013 as part of its Data Fusion Contest [319]. It is composed of $340\times{1905}$ pixels with 144 spectral bands, a spatial resolution of $2.5$ MPP, and a wavelength range of $0.38$ to $1.05$ $\mu$m. The ground truth comprises 15 land-cover classes. The detailed class description and ground-truth maps are presented in Figure <ref>, and the disjoint training/test sample maps in Figures <ref> and <ref>.
[Figure: ground-truth (GT) map with the spatially disjoint training and test sample maps for the UH dataset.]

| Class | Samples | Class | Samples | Class | Samples |
|---|---|---|---|---|---|
| Background | 1314661 | Grass-healthy | 1251 | Grass-stressed | 1254 |
| Grass-synthetic | 697 | Tree | 1244 | Soil | 1242 |
| Water | 325 | Residential | 1286 | Commercial | 1244 |
| Road | 1252 | Highway | 1227 | Railway | 1235 |
| Parking-lot1 | 1233 | Parking-lot2 | 469 | Tennis-court | 428 |
| Running-track | 660 | Total samples | 1329690 | | |

The land-cover classes and number of available samples in the Houston (UH) dataset.
The University of Trento (UT) dataset was gathered using the AISA Eagle sensor over rural regions south of Trento, Italy. The HSI contains 63 spectral bands within the wavelength range $0.42$ to $0.99$ $\mu{m}$ [320]. The scene has $600\times 166$ pixels and comprises $6$ mutually exclusive vegetation land-cover classes; the spectral resolution is 9.2 $nm$ and the spatial resolution is 1 meter per pixel (MPP). The available samples are divided into disjoint training and test sets over the 6 classes, and Figure <ref> lists the per-class number of samples for the six land covers.
[Figure: ground-truth (GT) map with the spatially disjoint training and test sample maps for the UT dataset.]

| Class | Samples | Class | Samples |
|---|---|---|---|
| Background | 168986 | Apples | 4034 |
| Buildings | 2903 | Ground | 479 |
| Woods | 9123 | Vineyard | 10501 |
| Roads | 3174 | Total samples | 199200 |

The land-cover classes and number of available samples in the University of Trento (UT) dataset.
Table <ref> provides a summary of each dataset used in the following experiments, and Table <ref> lists the numbers of disjoint samples (i.e., train/test samples selected from each class) used for all experimental results. Please note that the number (i.e., percentage) and the geographical locations of the train/test samples remain the same for all competing methods.
Summary of the HSI datasets used for experimental evaluation.
| | IP | PU | KSC | UH | UT |
|---|---|---|---|---|---|
| Year | 1992 | 2001 | 1996 | 2013 | – |
| Source | AVIRIS | ROSIS-03 | AVIRIS | CASI | AISA |
| Spatial | $145\times 145$ | $610\times 340$ | $512\times 614$ | $340\times 1905$ | $600\times 166$ |
| Spectral | 220 | 115 | 176 | 144 | 63 |
| Wavelength | $400$-$2500$ nm | $430$-$860$ nm | $400$-$2500$ nm | $0.38$-$1.05$ $\mu$m | $0.42$-$0.99$ $\mu$m |
| Samples | 21025 | 207400 | 314368 | 1329690 | 199200 |
| Classes | 16 | 9 | 13 | 15 | 6 |
| Sensor | Aerial | Aerial | Aerial | Aerial | Aerial |
| Resolution | $20$ m | $1.3$ m | $18$ m | $2.5$ m | $1$ m |
Number of disjoint train/test samples used for the experimental results, where TrS and TeS stand for disjoint train and test samples, respectively.

| Class | IP TrS | IP TeS | KSC TrS | KSC TeS | PU TrS | PU TeS | UH TrS | UH TeS | UT Land Cover | UT TrS | UT TeS |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 29 | 25 | 114 | 602 | 548 | 6083 | 198 | 1053 | Apples | 129 | 3905 |
| 2 | 762 | 675 | 36 | 207 | 540 | 18109 | 190 | 1064 | Buildings | 125 | 2778 |
| 3 | 435 | 404 | 38 | 218 | 392 | 1707 | 192 | 505 | Ground | 105 | 374 |
| 4 | 146 | 99 | 38 | 214 | 524 | 2540 | 188 | 1056 | Woods | 154 | 8969 |
| 5 | 232 | 274 | 24 | 137 | 265 | 1080 | 186 | 1056 | Vineyard | 184 | 10317 |
| 6 | 394 | 354 | 34 | 195 | 532 | 4497 | 182 | 143 | Roads | 122 | 3052 |
| 7 | 16 | 2 | 16 | 89 | 375 | 955 | 196 | 1072 | | | |
| 8 | 235 | 250 | 65 | 366 | 514 | 3168 | 191 | 1053 | | | |
| 9 | 10 | 10 | 78 | 442 | 231 | 716 | 193 | 1059 | | | |
| 10 | 470 | 503 | 61 | 343 | | | 191 | 1036 | | | |
| 11 | 1424 | 1065 | 63 | 356 | | | 181 | 1054 | | | |
| 12 | 328 | 282 | 75 | 428 | | | 192 | 1041 | | | |
| 13 | 132 | 80 | 139 | 788 | | | 184 | 285 | | | |
| 14 | 728 | 545 | | | | | 181 | 247 | | | |
| 15 | 291 | 99 | | | | | 187 | 473 | | | |
| 16 | 57 | 44 | | | | | | | | | |
Classification results obtained by RF [75], MLR [321], SVM [322], MLP [69], RNN [70], LSTM [323], GRU [324], CNN-1D [218], CNN-2D [325], CNN-3D [326], HybridSN [213], and MorphCNN [196] on the disjoint train-test dataset for the PU scene.
Class RF [75] MLR [321] SVM [322] MLP [69] RNN [70] LSTM [323] GRU [324] CNN-1D [218] CNN-2D [325] CNN-3D [326] HybridSN [213] MorphCNN [196]
1 89.98$\pm$0.15 77.68$\pm$0.0 82.23$\pm$0.0 84.53$\pm$1.89 83.08$\pm$3.3 82.63$\pm$2.39 77.25$\pm$6.92 87.18$\pm$2.11 93.4$\pm$1.89 85.66$\pm$4.0 89.74$\pm$5.19 94.52$\pm$1.9
2 74.39$\pm$0.01 58.79$\pm$0.01 65.81$\pm$0.0 75.13$\pm$2.4 67.9$\pm$2.92 78.74$\pm$1.99 80.1$\pm$5.12 89.64$\pm$2.53 96.84$\pm$1.93 95.88$\pm$1.71 81.78$\pm$3.15 97.12$\pm$8.71
3 38.42$\pm$0.13 67.21$\pm$0.02 66.72$\pm$0.0 68.37$\pm$5.17 65.17$\pm$7.99 60.73$\pm$11.0 54.79$\pm$14.82 71.1$\pm$5.98 65.48$\pm$13.94 68.11$\pm$6.47 82.88$\pm$1.54 85.08$\pm$4.53
4 98.24$\pm$0.05 74.27$\pm$0.05 97.77$\pm$0.0 93.5$\pm$2.32 90.72$\pm$2.56 97.1$\pm$1.22 92.05$\pm$2.31 95.32$\pm$1.49 95.55$\pm$2.14 97.02$\pm$0.83 83.66$\pm$4.58 97.0$\pm$1.03
5 95.98$\pm$0.04 98.88$\pm$0.04 99.37$\pm$0.0 99.37$\pm$0.08 99.23$\pm$0.09 99.28$\pm$0.08 99.51$\pm$0.12 99.48$\pm$0.26 98.03$\pm$0.92 98.9$\pm$0.56 99.94$\pm$0.04 99.25$\pm$0.22
6 51.43$\pm$0.19 93.53$\pm$0.02 91.62$\pm$0.0 89.94$\pm$4.14 85.07$\pm$3.14 65.94$\pm$5.92 74.86$\pm$11.38 88.28$\pm$2.33 80.52$\pm$9.39 68.85$\pm$11.29 72.43$\pm$13.23 93.92$\pm$3.88
7 80.63$\pm$0.36 85.08$\pm$0.05 87.36$\pm$0.0 87.2$\pm$3.05 82.94$\pm$3.79 84.95$\pm$4.02 90.17$\pm$3.9 86.77$\pm$3.38 89.29$\pm$9.48 73.09$\pm$9.53 96.16$\pm$1.88 84.98$\pm$10.74
8 97.64$\pm$0.14 87.58$\pm$0.01 90.46$\pm$0.0 90.37$\pm$1.24 85.85$\pm$4.97 88.89$\pm$7.83 90.42$\pm$4.39 90.43$\pm$3.34 94.5$\pm$5.44 95.21$\pm$1.69 92.80$\pm$0.90 96.62$\pm$2.21
9 94.92$\pm$0.05 99.22$\pm$0.05 93.71$\pm$0.0 98.44$\pm$1.17 94.52$\pm$4.79 98.29$\pm$1.47 93.51$\pm$7.93 97.33$\pm$3.31 95.8$\pm$0.76 93.54$\pm$1.76 94.04$\pm$3.99 97.05$\pm$0.46
OA 77.44$\pm$0.06 72.23$\pm$0.0 77.8$\pm$0.0 82.05$\pm$0.88 77.07$\pm$0.95 80.38$\pm$0.52 80.7$\pm$0.56 89.09$\pm$0.97 92.55$\pm$1.02 89.43$\pm$1.37 84.18$\pm$1.40 95.51$\pm$0.66
AA 80.18$\pm$0.06 82.47$\pm$0.01 86.12$\pm$0.0 87.43$\pm$1.03 83.83$\pm$0.72 84.06$\pm$0.74 83.63$\pm$2.03 89.5$\pm$1.03 89.94$\pm$1.37 86.25$\pm$1.98 88.16$\pm$1.94 93.95$\pm$0.96
k(x100) 70.44$\pm$0.07 65.44$\pm$0.0 72.06$\pm$0.0 76.89$\pm$1.07 70.84$\pm$1.04 74.32$\pm$0.68 74.76$\pm$1.02 85.5$\pm$1.22 89.9$\pm$1.42 85.61$\pm$1.94 79.13$\pm$1.42 93.95$\pm$0.88
[Classification maps for the UP scene: a) 1PC, b) GT, c) MLR, d) SVM, e) MLP, f) RNN, g) LSTM, h) GRU, i) CNN1D, j) CNN2D, k) CNN3D, l) MorphCNN (90.55%).]
Classification Maps obtained by MLR [321], SVM [322], MLP [69], RNN [70], LSTM [323], GRU [324], CNN-1D [218], CNN-2D [325], CNN-3D [326] and MorphCNN [196] on the disjoint train-test dataset for the UP scene.
§.§ Experimental Results on Disjoint Train/Test Samples
To strengthen the ideas highlighted in this survey and validate its claims, representative recent contributions based on MLR, SVM, MLP, RNN, LSTM, GRU, CNN-1D, CNN-2D, CNN-3D, and MorphCNN have been compared experimentally. Representative works for each are as follows: Cloud Implementation of Logistic Regression for HSIC [321, 327, 328] (MLR), Classification of Hyperspectral Remote Sensing Images with SVM [322] (SVM), Deep Recurrent Neural Networks for HSIC [70] (RNN), Long Short-Term Memory [323] (LSTM), On the Properties of Neural Machine Translation: Encoder-Decoder Approaches [324] (GRU), Deep Convolutional Neural Networks for HSIC [218] (CNN1D), Deep Supervised Learning for Hyperspectral Data Classification through Convolutional Neural Networks [325] (CNN2D), 3-D Deep Learning Approach for Remote Sensing Image Classification [326] (CNN3D), Morphological Convolutional Neural Networks for HSIC [196] (MorphCNN), and MLP [69].
To some extent, all the aforesaid works are based on convolutional and recurrent networks and are evaluated on five benchmark HSI datasets, namely IP, PU, KSC, UH, and UT. This survey pays particular attention to the robustness of these models under a small training sample size for joint spatial-spectral HSI classification.
Classification results obtained by RF [75], MLR [321], SVM [322], MLP [69], RNN [70], LSTM [323], GRU [324], CNN-1D [218], CNN-2D [325], CNN-3D [326], HybridSN [213], and MorphCNN [196] on the disjoint train-test dataset for the IP scene.
Class RF [75] MLR [321] SVM [322] MLP [69] RNN [70] LSTM [323] GRU [324] CNN-1D [218] CNN-2D [325] CNN-3D [326] HybridSN [213] MorphCNN [196]
1 85.33$\pm$1.88 80.0$\pm$0.0 88.0$\pm$0.0 73.6$\pm$7.42 58.4$\pm$4.8 89.6$\pm$1.96 77.6$\pm$9.33 80.8$\pm$12.75 73.64$\pm$14.77 48.18$\pm$22.84 82.66$\pm$13.59 92.27$\pm$3.55
2 55.11$\pm$0.32 81.48$\pm$0.0 80.0$\pm$0.0 81.45$\pm$1.07 75.5$\pm$1.48 82.22$\pm$1.26 81.1$\pm$2.77 79.38$\pm$4.16 83.12$\pm$6.12 85.12$\pm$7.88 82.17$\pm$2.64 84.05$\pm$8.03
3 22.77$\pm$0.20 54.11$\pm$0.12 69.55$\pm$0.0 64.55$\pm$2.85 63.37$\pm$1.93 64.16$\pm$5.44 70.35$\pm$1.36 74.26$\pm$6.12 81.98$\pm$3.89 77.22$\pm$13.04 76.73$\pm$4.02 79.34$\pm$3.45
4 13.13$\pm$1.64 38.38$\pm$0.0 48.48$\pm$0.0 47.07$\pm$10.41 29.49$\pm$5.4 55.35$\pm$10.62 53.33$\pm$11.15 31.92$\pm$11.55 45.39$\pm$6.36 50.11$\pm$10.04 33.33$\pm$3.59 52.14$\pm$6.24
5 41.60$\pm$0.78 91.97$\pm$0.0 87.23$\pm$0.0 86.94$\pm$1.07 87.59$\pm$1.5 89.27$\pm$0.68 88.4$\pm$0.85 90.73$\pm$1.07 89.11$\pm$5.55 80.28$\pm$6.52 81.14$\pm$8.89 91.66$\pm$1.69
6 94.06$\pm$0.23 94.63$\pm$0.0 96.33$\pm$0.0 95.93$\pm$0.97 95.31$\pm$0.83 96.39$\pm$0.87 96.38$\pm$1.14 96.39$\pm$0.9 95.02$\pm$5.68 89.81$\pm$4.03 97.36$\pm$2.54 95.74$\pm$2.28
7 0.0$\pm$0.0 0.0$\pm$0.0 50.0$\pm$0.0 10.0$\pm$20.0 0.0$\pm$0.0 0.0$\pm$0.0 0.0$\pm$0.0 0.0$\pm$0.0 0.0$\pm$0.0 0.0$\pm$0.0 0.0$\pm$0.0 0.0$\pm$0.0
8 91.33$\pm$0.18 100.0$\pm$0.0 100.0$\pm$0.0 99.84$\pm$0.2 99.52$\pm$0.3 99.2$\pm$0.91 99.12$\pm$0.64 99.84$\pm$0.32 99.96$\pm$0.13 95.96$\pm$6.78 96.53$\pm$3.78 100.0$\pm$0.0
9 40.0$\pm$0.0 0.0$\pm$0.0 50.0$\pm$0.0 80.0$\pm$15.49 56.0$\pm$10.2 76.0$\pm$8.0 66.0$\pm$4.9 50.0$\pm$8.94 26.66$\pm$15.87 77.78$\pm$21.66 66.66$\pm$24.94 44.44$\pm$19.88
10 26.83$\pm$1.26 66.76$\pm$0.08 76.54$\pm$0.0 75.35$\pm$5.02 71.13$\pm$5.93 81.51$\pm$2.76 78.53$\pm$4.64 81.83$\pm$4.69 77.44$\pm$8.99 77.9$\pm$6.2 74.35$\pm$9.46 80.77$\pm$3.77
11 81.06$\pm$0.49 84.13$\pm$0.0 87.7$\pm$0.0 83.19$\pm$1.51 78.86$\pm$1.45 80.4$\pm$2.43 82.29$\pm$1.82 80.39$\pm$3.65 89.4$\pm$5.47 82.73$\pm$3.81 79.18$\pm$4.92 88.54$\pm$5.03
12 28.95$\pm$0.44 66.31$\pm$0.0 77.3$\pm$0.0 78.58$\pm$2.95 71.91$\pm$5.07 76.31$\pm$1.39 83.19$\pm$1.16 84.75$\pm$7.5 87.72$\pm$3.06 82.64$\pm$14.49 71.04$\pm$4.11 88.46$\pm$4.26
13 86.25$\pm$1.02 95.0$\pm$0.0 97.5$\pm$0.0 98.0$\pm$0.61 97.0$\pm$1.7 97.25$\pm$0.94 97.75$\pm$0.94 97.75$\pm$0.5 95.28$\pm$4.74 89.72$\pm$6.89 96.25$\pm$4.44 87.64$\pm$3.43
14 91.07$\pm$0.87 90.64$\pm$0.0 91.38$\pm$0.0 92.92$\pm$1.48 90.28$\pm$1.09 94.13$\pm$1.18 92.88$\pm$1.79 93.32$\pm$2.34 98.94$\pm$0.55 98.31$\pm$1.41 91.68$\pm$4.33 98.82$\pm$1.01
15 10.10$\pm$0.0 89.9$\pm$0.0 80.81$\pm$0.0 87.88$\pm$3.78 75.56$\pm$6.43 90.71$\pm$2.34 93.54$\pm$1.64 89.9$\pm$4.78 82.02$\pm$14.83 55.17$\pm$27.57 45.45$\pm$21.39 69.44$\pm$15.86
16 71.96$\pm$3.86 97.73$\pm$0.0 97.73$\pm$0.0 87.27$\pm$4.45 88.64$\pm$4.31 94.09$\pm$2.32 95.45$\pm$2.49 96.82$\pm$2.32 82.0$\pm$6.69 82.5$\pm$12.5 84.09$\pm$4.90 84.0$\pm$4.21
OA 60.80$\pm$0.14 80.33$\pm$0.02 84.12$\pm$0.0 82.95$\pm$0.23 79.07$\pm$0.33 83.55$\pm$0.39 84.2$\pm$0.21 84.0$\pm$0.28 87.25$\pm$1.03 83.6$\pm$1.41 80.86$\pm$1.74 87.45$\pm$1.01
AA 52.47$\pm$0.21 70.69$\pm$0.01 79.91$\pm$0.0 77.66$\pm$1.98 71.16$\pm$0.87 79.16$\pm$0.75 78.49$\pm$0.36 76.76$\pm$0.75 75.48$\pm$2.12 73.34$\pm$3.46 72.41$\pm$2.32 77.33$\pm$1.56
k(x100) 54.41$\pm$0.18 77.47$\pm$0.02 81.87$\pm$0.0 80.56$\pm$0.26 76.12$\pm$0.4 81.27$\pm$0.44 82.01$\pm$0.26 81.81$\pm$0.35 85.48$\pm$1.15 81.36$\pm$1.62 78.24$\pm$1.98 85.75$\pm$1.14
[Classification maps for the IP scene: a) 1PC, b) GT, c) MLR, d) SVM, e) MLP, f) RNN, g) LSTM, h) GRU, i) CNN1D, j) CNN2D, k) CNN3D, l) MorphCNN.]
Classification Maps obtained by MLR [321], SVM [322], MLP [69], RNN [70], LSTM [323], GRU [324], CNN-1D [218], CNN-2D [325], CNN-3D [326] and MorphCNN [196] on the disjoint train-test dataset for the IP scene.
Classification results obtained by RF [75], MLR [321], SVM [322], MLP [69], RNN [70], LSTM [323], GRU [324], CNN-1D [218], CNN-2D [325], CNN-3D [326], HybridSN [213], and MorphCNN [196] on the disjoint train-test dataset for the UH scene.
Class RF [75] MLR [321] SVM [322] MLP [69] RNN [70] LSTM [323] GRU [324] CNN-1D [218] CNN-2D [325] CNN-3D [326] HybridSN [213] MorphCNN [196]
1 82.87$\pm$0.04 82.24$\pm$0.06 82.34$\pm$0.0 81.23$\pm$0.28 82.22$\pm$0.28 82.76$\pm$0.36 82.58$\pm$0.35 82.28$\pm$0.98 82.25$\pm$0.65 82.1$\pm$0.39 82.74$\pm$0.29 82.43$\pm$0.33
2 82.51$\pm$0.38 82.5$\pm$0.07 83.36$\pm$0.0 82.29$\pm$0.55 82.87$\pm$0.33 80.19$\pm$1.36 81.64$\pm$0.71 91.78$\pm$6.46 84.15$\pm$0.28 84.14$\pm$0.45 90.91$\pm$6.22 84.42$\pm$0.19
3 64.09$\pm$0.49 99.8$\pm$0.0 99.8$\pm$0.0 99.72$\pm$0.1 99.72$\pm$0.2 99.68$\pm$0.16 99.88$\pm$0.1 99.92$\pm$0.16 90.31$\pm$4.41 77.85$\pm$4.8 98.81$\pm$0.74 97.21$\pm$1.23
4 92.04$\pm$0.0 98.3$\pm$0.0 98.96$\pm$0.0 87.58$\pm$1.28 93.5$\pm$2.02 91.23$\pm$1.29 93.22$\pm$2.84 94.36$\pm$3.12 87.24$\pm$3.21 89.24$\pm$1.1 83.96$\pm$1.24 92.37$\pm$0.33
5 99.81$\pm$0.07 97.44$\pm$0.0 98.77$\pm$0.0 97.35$\pm$0.49 97.76$\pm$0.29 97.65$\pm$0.31 97.37$\pm$0.16 98.77$\pm$0.13 99.51$\pm$0.48 98.97$\pm$0.59 99.46$\pm$0.75 99.77$\pm$0.53
6 96.27$\pm$0.32 94.41$\pm$0.0 97.9$\pm$0.0 94.55$\pm$0.28 95.1$\pm$0.0 97.06$\pm$1.79 98.32$\pm$1.63 95.8$\pm$1.88 96.43$\pm$2.14 98.91$\pm$1.44 98.60$\pm$1.97 99.46$\pm$1.15
7 86.19$\pm$0.34 73.37$\pm$0.07 77.43$\pm$0.0 75.24$\pm$2.27 81.4$\pm$0.43 78.88$\pm$1.0 77.03$\pm$2.18 82.78$\pm$2.23 86.44$\pm$2.18 85.48$\pm$1.98 75.62$\pm$3.89 88.07$\pm$1.78
8 41.69$\pm$0.23 63.82$\pm$0.0 60.3$\pm$0.0 57.0$\pm$6.97 40.06$\pm$1.07 40.11$\pm$1.92 53.62$\pm$2.97 75.5$\pm$6.71 70.03$\pm$3.96 62.06$\pm$3.01 93.16$\pm$0.20 73.09$\pm$3.5
9 86.02$\pm$0.48 70.23$\pm$0.04 76.77$\pm$0.0 75.58$\pm$2.86 76.54$\pm$2.96 81.55$\pm$4.12 79.06$\pm$1.61 81.44$\pm$2.0 79.53$\pm$6.38 80.81$\pm$4.32 81.39$\pm$5.24 84.09$\pm$2.73
10 36.00$\pm$0.0 55.6$\pm$0.0 61.29$\pm$0.0 48.78$\pm$2.27 47.44$\pm$1.44 47.37$\pm$2.29 49.54$\pm$2.61 68.71$\pm$14.55 60.22$\pm$4.2 54.75$\pm$4.63 76.51$\pm$10.40 62.86$\pm$3.08
11 64.67$\pm$0.16 74.21$\pm$0.04 80.55$\pm$0.0 76.25$\pm$0.46 76.24$\pm$0.81 76.38$\pm$1.09 80.82$\pm$0.71 85.24$\pm$2.83 82.93$\pm$7.68 66.78$\pm$3.34 89.21$\pm$5.62 89.15$\pm$6.86
12 67.27$\pm$0.09 70.41$\pm$0.0 79.92$\pm$0.0 75.31$\pm$3.75 76.33$\pm$3.09 79.98$\pm$3.32 84.15$\pm$3.13 89.93$\pm$4.29 92.87$\pm$3.31 93.83$\pm$1.92 96.28$\pm$2.29 93.02$\pm$3.32
13 89.23$\pm$0.43 67.72$\pm$0.0 70.88$\pm$0.0 73.19$\pm$2.15 69.12$\pm$1.61 71.37$\pm$3.54 72.63$\pm$3.68 74.88$\pm$5.14 86.21$\pm$2.65 82.34$\pm$2.49 86.78$\pm$6.67 89.61$\pm$1.34
14 100.0$\pm$0.0 98.79$\pm$0.0 100.0$\pm$0.0 99.84$\pm$0.32 100.0$\pm$0.0 99.11$\pm$0.47 99.92$\pm$0.16 99.68$\pm$0.16 98.92$\pm$1.8 96.31$\pm$3.67 100.0$\pm$0.0 99.19$\pm$1.3
15 90.06$\pm$0.45 95.56$\pm$0.0 96.41$\pm$0.0 97.8$\pm$0.51 97.59$\pm$0.47 98.14$\pm$0.31 98.22$\pm$0.59 98.48$\pm$0.24 77.63$\pm$2.91 75.85$\pm$2.69 100.0$\pm$0.0 97.04$\pm$4.47
OA 75.38$\pm$0.06 78.97$\pm$0.01 81.86$\pm$0.0 78.22$\pm$0.36 77.95$\pm$0.68 78.16$\pm$0.28 80.21$\pm$0.27 86.42$\pm$1.64 83.27$\pm$0.8 80.24$\pm$0.55 88.31$\pm$1.78 86.51$\pm$0.71
AA 78.58$\pm$0.11 81.63$\pm$0.01 84.31$\pm$0.0 81.45$\pm$0.37 81.06$\pm$0.55 81.43$\pm$0.32 83.2$\pm$0.27 87.97$\pm$1.38 84.98$\pm$0.74 81.96$\pm$0.75 90.23$\pm$1.39 88.78$\pm$0.68
k(x100) 73.49$\pm$0.07 77.3$\pm$0.01 80.43$\pm$0.0 76.55$\pm$0.39 76.23$\pm$0.71 76.52$\pm$0.3 78.66$\pm$0.29 85.27$\pm$1.77 81.89$\pm$0.86 78.62$\pm$0.59 87.33$\pm$1.92 85.4$\pm$0.76
[Classification maps for the UH scene: a) 1PC, b) GT, c) MLR, d) SVM, e) MLP, f) RNN, g) LSTM, h) GRU, i) CNN1D, j) CNN2D, k) CNN3D, l) MorphCNN.]
Classification Maps obtained by MLR [321], SVM [322], MLP [69], RNN [70], LSTM [323], GRU [324], CNN-1D [218], CNN-2D [325], CNN-3D [326] and MorphCNN [196] on the disjoint train-test dataset for the UH scene.
Classification results obtained by RF [75], MLR [321], SVM [322], MLP [69], RNN [70], LSTM [323], GRU [324], CNN-1D [218], CNN-2D [325], CNN-3D [326], HybridSN [213], and MorphCNN [196] on the disjoint train-test dataset for the KSC scene.
Class RF [75] MLR [321] SVM [322] MLP [69] RNN [70] LSTM [323] GRU [324] CNN-1D [218] CNN-2D [325] CNN-3D [326] HybridSN [213] MorphCNN [196]
1 99.69$\pm$0.0 100.0$\pm$0.00 94.13$\pm$0.00 99.18$\pm$1.17 87.33$\pm$1.73 92.22$\pm$0.91 89.44$\pm$0.69 99.79$\pm$0.14 85.52$\pm$15.45 97.17$\pm$2.93 100.0$\pm$0.0 97.63$\pm$2.26
2 98.38$\pm$0.22 99.03$\pm$0.001 0.00$\pm$0.00 86.63$\pm$7.33 63.12$\pm$1.49 81.64$\pm$4.17 70.85$\pm$0.91 99.19$\pm$1.14 67.31$\pm$25.41 92.91$\pm$2.62 100.0$\pm$0.0 86.79$\pm$9.64
3 99.23$\pm$0.57 99.54$\pm$0.00 54.59$\pm$0.00 84.25$\pm$2.70 69.72$\pm$2.82 75.38$\pm$0.21 78.89$\pm$7.79 95.11$\pm$3.05 60.09$\pm$16.23 81.04$\pm$12.05 99.69$\pm$0.21 98.31$\pm$2.37
4 88.16$\pm$1.22 99.06$\pm$0.002 17.28$\pm$0.00 78.97$\pm$11.86 47.82$\pm$4.20 58.09$\pm$1.16 44.08$\pm$3.41 77.73$\pm$2.81 45.17$\pm$9.84 44.54$\pm$16.19 99.53$\pm$0.66 88.94$\pm$15.64
5 73.72$\pm$0.0 100.0$\pm$0.00 0.00$\pm$0.00 13.38$\pm$18.92 68.37$\pm$5.63 74.21$\pm$5.20 65.21$\pm$3.28 80.53$\pm$4.85 67.40$\pm$12.84 85.15$\pm$6.98 98.78$\pm$0.34 48.66$\pm$34.29
6 88.88$\pm$0.24 100.0$\pm$0.00 0.00$\pm$0.00 78.12$\pm$8.79 56.24$\pm$1.97 65.12$\pm$6.03 59.82$\pm$2.30 91.97$\pm$1.74 65.47$\pm$29.63 62.74$\pm$15.45 100.0$\pm$0.0 86.32$\pm$16.43
7 100.0$\pm$0.0 89.88$\pm$0.00 0.00$\pm$0.00 78.65$\pm$3.99 83.52$\pm$8.91 90.26$\pm$1.40 89.14$\pm$3.47 95.13$\pm$1.40 77.15$\pm$28.34 80.52$\pm$17.07 97.75$\pm$1.83 97.75$\pm$3.17
8 85.51$\pm$0.22 100.0$\pm$0.00 60.10$\pm$0.00 89.62$\pm$8.25 65.57$\pm$2.32 71.40$\pm$2.99 69.76$\pm$2.53 97.45$\pm$0.51 64.75$\pm$13.87 71.49$\pm$11.96 99.90$\pm$0.12 70.76$\pm$34.44
9 96.68$\pm$0.42 100.0$\pm$0.00 89.37$\pm$0.00 97.59$\pm$1.79 88.39$\pm$3.35 90.72$\pm$2.93 86.72$\pm$1.36 99.92$\pm$0.11 89.22$\pm$8.06 98.94$\pm$1.33 100.0$\pm$0.0 91.93$\pm$7.16
10 99.22$\pm$0.13 100.0$\pm$0.00 98.83$\pm$0.00 96.50$\pm$3.33 92.42$\pm$3.12 88.92$\pm$3.37 88.53$\pm$1.58 99.90$\pm$0.13 73.08$\pm$23.37 90.67$\pm$8.13 100.0$\pm$0.0 100.0$\pm$0.00
11 100.0$\pm$0.0 98.03$\pm$0.001 94.94$\pm$0.00 98.50$\pm$0.86 83.89$\pm$2.29 90.26$\pm$1.08 84.83$\pm$3.64 100.0$\pm$0.0 87.55$\pm$10.06 97.56$\pm$1.17 96.34$\pm$0.79 100.0$\pm$0.0
12 97.89$\pm$0.19 99.29$\pm$0.001 89.25$\pm$0.00 98.52$\pm$0.79 81.31$\pm$4.48 87.46$\pm$2.05 83.57$\pm$4.87 98.36$\pm$1.06 82.48$\pm$19.17 99.30$\pm$0.99 99.06$\pm$0.99 97.89$\pm$2.06
13 100.0$\pm$0.0 100.0$\pm$0.0 100.0$\pm$0.0 100.0$\pm$0.0 99.88$\pm$0.10 100.0$\pm$0.0 99.92$\pm$0.05 100.0$\pm$0.0 99.92$\pm$0.12 100.0$\pm$0.0 100.0$\pm$0.0 100.0$\pm$0.0
OA 96.17$\pm$0.07 99.45$\pm$0.001 72.84$\pm$0.00 91.76$\pm$0.56 81.47$\pm$1.17 86.10$\pm$0.40 82.76$\pm$0.72 97.18$\pm$0.18 79.98$\pm$13.38 89.71$\pm$1.30 99.48$\pm$0.05 92.76$\pm$2.08
AA 94.41$\pm$0.08 98.83$\pm$0.001 53.73$\pm$0.00 84.61$\pm$0.62 75.96$\pm$1.71 81.97$\pm$0.34 77.75$\pm$0.70 95.00$\pm$0.34 74.24$\pm$16.27 84.77$\pm$2.60 99.31$\pm$0.17 89.61$\pm$1.60
k(x100) 95.74$\pm$0.08 99.40$\pm$0.001 69.29$\pm$0.00 90.82$\pm$0.62 79.33$\pm$1.30 84.51$\pm$0.45 80.79$\pm$0.80 96.86$\pm$0.20 77.63$\pm$14.99 88.51$\pm$1.46 99.43$\pm$0.06 91.94$\pm$2.31
[Classification maps for the KSC scene: a) 1PC, b) GT, c) MLR, d) SVM, e) MLP, f) RNN, g) LSTM, h) GRU, i) CNN1D, j) CNN2D, k) CNN3D, l) MorphCNN.]
Classification Maps obtained by MLR [321], SVM [322], MLP [69], RNN [70], LSTM [323], GRU [324], CNN-1D [218], CNN-2D [325], CNN-3D [326] and MorphCNN [196] on the disjoint train-test dataset for the KSC scene.
Here we present the experimental results with a detailed discussion. The accuracies obtained on disjoint training and test samples are shown in Tables <ref>, <ref>, <ref> and <ref> and Figures <ref>, <ref>, <ref>, and <ref>. All results in the tables and figures are obtained using a 10-fold cross-validation process to compute the overall, average, and kappa $(\kappa)$ accuracies for comparison purposes. For instance, in the case of the Pavia University results, the work [196] attains the highest overall, average, and kappa $(\kappa)$ accuracies of 95.51%, 93.95%, and 93.95%, respectively, compared with 92.55%, 89.94%, 89.9% for [325]; 89.43%, 86.25%, 85.61% for [326]; 89.09%, 89.5%, 85.5% for [218]; 82.05%, 87.43%, 76.89% for [69]; 80.7%, 83.63%, 74.76% for [324]; 80.38%, 84.06%, 74.32% for [323]; 77.8%, 86.12%, 72.06% for [322]; 77.07%, 83.83%, 70.84% for [70]; and 72.23%, 82.47%, 65.44% for [321]. Similar observations can be made on the other experimental datasets.
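For reference, the three reported metrics can be computed from a confusion matrix as in the following sketch; the small example matrix is illustrative.

```python
import numpy as np

def oa_aa_kappa(conf: np.ndarray):
    """Overall accuracy, average (per-class) accuracy, and Cohen's kappa
    from a confusion matrix `conf`, where conf[i, j] counts true class i
    predicted as class j."""
    n = conf.sum()
    oa = np.trace(conf) / n                                   # overall accuracy
    aa = np.mean(np.diag(conf) / conf.sum(axis=1))            # average accuracy
    pe = (conf.sum(axis=0) * conf.sum(axis=1)).sum() / n**2   # chance agreement
    kappa = (oa - pe) / (1.0 - pe)
    return oa, aa, kappa

conf = np.array([[50, 2, 1], [3, 40, 5], [0, 4, 45]])
print(oa_aa_kappa(conf))
```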
Classification results obtained by RF [75], MLR [321], SVM [322], MLP [69], RNN [70], LSTM [323], GRU [324], CNN-1D [218], CNN-2D [325], CNN-3D [326], HybridSN [213], and MorphCNN [196] on the disjoint train-test dataset for the University of Trento (UT) scene.
Class RF [75] MLR [321] SVM [322] MLP [69] RNN [70] LSTM [323] GRU [324] CNN-1D [218] CNN-2D [325] CNN-3D [326] HybridSN [213] MorphCNN [196]
1 97.27$\pm$0.33 92.57$\pm$0.00 95.03$\pm$0.00 70.68$\pm$3.69 93.46$\pm$3.67 95.71$\pm$0.32 95.78$\pm$0.97 98.06$\pm$0.72 96.35$\pm$0.22 92.64$\pm$7.45 99.15$\pm$0.16 97.83$\pm$0.37
2 89.50$\pm$0.11 90.92$\pm$0.00 88.66$\pm$0.00 76.22$\pm$2.90 83.09$\pm$1.50 84.73$\pm$0.87 84.40$\pm$1.51 89.16$\pm$1.99 85.87$\pm$1.87 75.70$\pm$7.23 81.65$\pm$2.93 90.41$\pm$1.83
3 75.04$\pm$0.33 90.10$\pm$0.00 91.71$\pm$0.00 83.95$\pm$2.08 69.51$\pm$0.87 59.26$\pm$14.31 65.15$\pm$4.84 70.67$\pm$8.99 66.84$\pm$2.00 60.07$\pm$3.57 74.86$\pm$10.58 83.86$\pm$10.69
4 99.96$\pm$0.01 92.77$\pm$0.00 85.18$\pm$0.00 37.55$\pm$2.74 98.44$\pm$0.97 97.72$\pm$1.41 99.65$\pm$0.18 99.86$\pm$0.11 98.67$\pm$0.61 98.88$\pm$0.80 99.64$\pm$0.28 99.06$\pm$0.57
5 99.97$\pm$0.00 99.14$\pm$0.00 97.76$\pm$0.00 99.98$\pm$.01 98.12$\pm$0.56 96.34$\pm$0.80 98.20$\pm$0.54 99.78$\pm$0.19 99.32$\pm$0.39 98.40$\pm$1.77 98.12$\pm$1.79 99.87$\pm$0.14
6 67.71$\pm$0.31 66.80$\pm$0.00 72.37$\pm$0.00 73.91$\pm$2.88 65.54$\pm$0.90 69.04$\pm$2.29 63.89$\pm$1.02 77.94$\pm$3.83 74.34$\pm$6.48 69.42$\pm$3.43 70.72$\pm$4.74 82.64$\pm$0.88
OA 94.95$\pm$0.07 92.08$\pm$0.00 89.98$\pm$0.00 63.51$\pm$9.43 92.43$\pm$0.34 92.27$\pm$0.72 93.03$\pm$0.28 95.94$\pm$0.23 94.45$\pm$0.13 92.14$\pm$1.05 94.03$\pm$0.21 96.46$\pm$0.27
AA 88.24$\pm$0.07 88.72$\pm$0.00 88.45$\pm$0.00 63.22$\pm$5.90 84.69$\pm$0.51 83.80$\pm$1.91 84.51$\pm$0.85 89.25$\pm$1.34 86.90$\pm$0.53 82.52$\pm$4.27 87.36$\pm$1.83 92.28$\pm$1.60
k(x100) 93.23$\pm$0.09 89.43$\pm$0.00 86.74$\pm$0.00 48.58$\pm$13.70 89.86$\pm$0.48 89.67$\pm$0.95 90.66$\pm$0.37 94.55$\pm$0.31 92.56$\pm$0.18 89.45$\pm$1.43 92.00$\pm$0.27 95.26$\pm$0.36
The comparative methods mostly misclassify samples with similar spatial structures (e.g., the Meadows and Bare Soil classes in the Pavia University dataset), as shown in the corresponding tables and figures. Moreover, the overall accuracy for such spectrally and spatially similar classes is lower than for the other classes due to the reasons mentioned above. In a nutshell, higher accuracy can be achieved by increasing the number of labeled training samples; a larger labeled training set would produce better accuracies for all competing methods.
[Classification maps for the UT scene: a) 1PC, b) GT, c) MLR, d) SVM, e) MLP, f) RNN, g) LSTM, h) GRU, i) CNN1D, j) CNN2D, k) CNN3D, l) MorphCNN.]
Classification Maps obtained by MLR [321], SVM [322], MLP [69], RNN [70], LSTM [323], GRU [324], CNN-1D [218], CNN-2D [325], CNN-3D [326] and MorphCNN [196] on the disjoint train-test dataset for the UT scene.
In general, the works [196, 325] outperformed the other comparative methods (i.e., produced more stable results), especially with fewer labeled training samples, which suggests that they are not sensitive to the number of training samples. Moreover, as the number of training samples increases, their accuracies also increase; other methods, however, benefit comparatively more from a larger number of training samples. A similar trend was observed with a higher number of training samples. Thus, one can conclude that [196] and [325] can, to some extent, mitigate the limited availability of training samples when considering disjoint train/test samples.
Classification results obtained by CNN-2D [325] CNN-3D [326], G2C-Conv2D [216], and G2C-Conv3D [216] on the disjoint train-test for IP, PU, Trento, UH, and KSC datasets. The higher accuracies are emphasised.
Class | for each dataset (IP, PU, UH, Trento, KSC): Conv2D, Conv3D, G2C-2D, G2C-3D
1 46.66$\pm$0.09 56.00$\pm$8.64 53.33$\pm$18.57 97.33$\pm$1.88 81.20$\pm$0.01 87.20$\pm$0.42 86.22$\pm$4.54 91.26$\pm$3.40 80.88$\pm$0.006 82.77$\pm$0.27 81.32$\pm$0.08 82.90$\pm$0.20 98.30$\pm$0.007 98.63$\pm$0.18 98.75$\pm$0.58 96.12$\pm$1.95 95.56$\pm$0.007 98.40$\pm$0.81 98.24$\pm$0.52 99.27$\pm$0.56
2 48.04$\pm$0.03 64.64$\pm$4.75 68.93$\pm$5.41 57.33$\pm$4.91 89.52$\pm$0.01 89.90$\pm$0.35 92.99$\pm$0.90 83.84$\pm$0.49 81.64$\pm$0.005 82.64$\pm$0.85 81.79$\pm$1.44 83.52$\pm$0.31 85.74$\pm$0.029 84.78$\pm$2.61 90.29$\pm$2.68 93.30$\pm$1.41 79.06$\pm$0.03 95.16$\pm$1.18 90.82$\pm$2.04 98.87$\pm$0.60
3 24.33$\pm$0.009 44.96$\pm$8.99 35.97$\pm$6.92 78.21$\pm$2.13 53.53$\pm$0.006 59.98$\pm$2.20 51.33$\pm$2.60 78.08$\pm$2.84 49.37$\pm$0.032 65.80$\pm$5.12 74.32$\pm$7.73 91.48$\pm$1.12 78.43$\pm$0.08 86.63$\pm$9.87 63.81$\pm$7.90 83.51$\pm$12.16 74.92$\pm$0.016 90.36$\pm$5.99 80.58$\pm$3.37 97.09$\pm$0.21
4 28.28$\pm$0.13 41.75$\pm$4.15 18.18$\pm$2.18 42.42$\pm$3.59 96.92$\pm$0.009 97.43$\pm$0.44 96.97$\pm$1.05 94.05$\pm$1.16 86.26$\pm$0.006 95.54$\pm$3.17 91.60$\pm$0.29 90.49$\pm$2.39 93.59$\pm$0.006 98.31$\pm$0.30 97.11$\pm$0.76 98.53$\pm$0.23 50.62$\pm$0.056 74.61$\pm$3.33 69.62$\pm$4.01 77.57$\pm$1.52
5 29.56$\pm$0.05 46.83$\pm$2.99 26.88$\pm$3.96 52.79$\pm$3.88 98.41$\pm$0.007 97.99$\pm$0.95 99.28$\pm$0.38 99.19$\pm$0.38 95.39$\pm$0.018 96.96$\pm$0.27 98.67$\pm$1.00 99.87$\pm$0.08 98.22$\pm$0.011 99.77$\pm$0.18 99.57$\pm$0.22 98.70$\pm$0.60 71.04$\pm$0.03 88.32$\pm$3.31 81.26$\pm$1.37 87.34$\pm$0.91
6 72.31$\pm$0.10 92.56$\pm$3.01 94.35$\pm$2.59 98.39$\pm$1.06 47.65$\pm$0.08 62.30$\pm$2.37 60.68$\pm$4.23 87.04$\pm$2.05 81.11$\pm$0.02 92.07$\pm$1.83 82.51$\pm$4.67 94.40$\pm$0.00 65.09$\pm$0.036 75.74$\pm$3.30 58.87$\pm$2.95 73.78$\pm$2.48 79.82$\pm$0.03 90.25$\pm$1.10 78.63$\pm$4.43 93.84$\pm$1.10
7 0.00$\pm$0.00 0.00$\pm$0.00 0.00$\pm$0.00 0.00$\pm$0.00 61.36$\pm$0.02 76.55$\pm$1.11 65.30$\pm$3.66 88.44$\pm$5.08 79.57$\pm$0.02 84.48$\pm$2.21 82.64$\pm$1.12 85.85$\pm$0.49 88.01$\pm$0.010 98.50$\pm$1.40 95.88$\pm$1.05 99.62$\pm$0.52
8 91.60$\pm$0.05 98.93$\pm$0.67 98.66$\pm$0.82 94.53$\pm$4.64 81.97$\pm$0.02 94.73$\pm$1.51 97.25$\pm$0.93 97.34$\pm$0.92 47.61$\pm$0.08 65.71$\pm$4.44 56.98$\pm$0.73 71.06$\pm$0.98 85.15$\pm$0.062 97.08$\pm$0.34 87.70$\pm$1.77 98.99$\pm$0.71
9 36.66$\pm$0.04 33.33$\pm$12.47 50.00$\pm$8.16 80.00$\pm$8.16 88.38$\pm$0.04 96.98$\pm$0.81 96.31$\pm$1.74 96.85$\pm$1.61 74.15$\pm$0.04 79.85$\pm$1.96 77.49$\pm$1.45 82.37$\pm$1.60 98.11$\pm$0.013 98.64$\pm$0.97 99.62$\pm$0.21 99.77$\pm$0.18
10 50.49$\pm$0.04 62.42$\pm$5.93 52.75$\pm$3.42 55.00$\pm$4.17 41.40$\pm$0.02 45.65$\pm$1.29 52.67$\pm$3.07 49.25$\pm$1.74 96.20$\pm$0.008 99.31$\pm$0.76 99.90$\pm$0.13 100.0$\pm$0.00
11 73.45$\pm$0.01 79.68$\pm$1.91 79.68$\pm$0.66 81.72$\pm$0.19 50.18$\pm$0.01 54.26$\pm$3.88 65.71$\pm$1.04 70.55$\pm$1.47 92.97$\pm$0.007 99.81$\pm$0.26 99.15$\pm$0.45 100.0$\pm$0.00
12 30.61$\pm$0.01 49.29$\pm$4.92 56.38$\pm$5.87 35.93$\pm$7.39 70.22$\pm$0.02 76.46$\pm$3.88 88.98$\pm$4.50 88.05$\pm$4.16 92.83$\pm$0.025 95.63$\pm$0.77 98.90$\pm$0.39 98.13$\pm$0.66
13 95.83$\pm$0.03 97.08$\pm$1.17 94.16$\pm$5.62 95.83$\pm$0.58 85.84$\pm$0.019 91.92$\pm$0.28 85.02$\pm$0.92 82.57$\pm$0.82 100.0$\pm$0.00 100.0$\pm$0.00 100.0$\pm$0.00 100.0$\pm$0.00
14 74.18$\pm$0.001 76.08$\pm$4.46 95.59$\pm$0.98 90.03$\pm$5.77 77.86$\pm$0.06 82.86$\pm$1.63 80.70$\pm$0.68 99.05$\pm$0.76
15 19.52$\pm$0.01 54.54$\pm$16.05 21.21$\pm$8.12 74.74$\pm$2.97 48.34$\pm$0.06 70.68$\pm$3.24 59.83$\pm$6.72 99.92$\pm$0.09
16 71.21$\pm$0.18 78.78$\pm$2.83 62.87$\pm$5.96 78.03$\pm$10.55
OA 57.02$\pm$0.12 69.25$\pm$1.32 68.33$\pm$0.60 72.82$\pm$0.26 81.23$\pm$0.36 85.96$\pm$0.27 86.55$\pm$0.44 87.79$\pm$0.23 69.67$\pm$0.19 76.52$\pm$1.13 77.27$\pm$0.61 82.26$\pm$0.56 91.95$\pm$1.03 95.09$\pm$0.26 93.15$\pm$0.34 95.21$\pm$0.33 89.76$\pm$1.13 96.15$\pm$0.43 94.05$\pm$0.49 97.65$\pm$0.02
AA 49.55$\pm$0.01 61.05$\pm$2.92 56.81$\pm$1.73 69.52$\pm$1.23 77.66$\pm$0.002 84.78$\pm$0.18 82.92$\pm$0.38 90.68$\pm$0.06 69.99$\pm$0.007 77.84$\pm$0.86 77.35$\pm$1.18 84.76$\pm$0.52 86.56$\pm$0.021 90.64$\pm$1.80 84.73$\pm$1.57 90.66$\pm$2.24 84.95$\pm$0.011 94.31$\pm$0.65 90.79$\pm$0.64 96.19$\pm$0.01
k(x100) 50.66$\pm$0.22 64.78$\pm$1.58 63.60$\pm$0.74 68.87$\pm$0.37 74.57$\pm$0.58 81.11$\pm$0.36 81.74$\pm$0.65 83.95$\pm$0.30 67.22$\pm$0.24 74.61$\pm$1.19 75.43$\pm$0.67 80.87$\pm$0.60 89.20$\pm$1.38 93.42$\pm$0.36 90.79$\pm$0.46 93.62$\pm$0.45 88.60$\pm$1.26 95.72$\pm$0.48 93.37$\pm$0.55 97.39$\pm$0.02
Moreover, one can conclude that the AE-based models do not perform as well as the other models: although unsupervised methods do not require labeled samples, without constraints they may learn nothing useful. In addition, the AE's symmetric architecture leads to an explosion of training parameters, which increases the difficulty of training. The works [329] and [330] overcome these issues, whereas [228] does not adopt the greedy layer-wise approach and thus produces the worst results; hence, there is room for improvement in such methods.
In a nutshell, CNN-based classification results are considerably better than those of AE-based methods under limited labeled training samples. Although AEs can learn the internal structure of unlabeled data, the resulting feature representation may lack task-driven characteristics, which may explain why they underperform supervised learning models. Moreover, AL and/or SL benefit from selecting the most important samples for training, enabling the model to focus on indistinguishable samples for HSIC, whereas FSL benefits from exploring the relationships between samples to find a discriminative decision boundary for HSIC. TL makes good use of the similarity among different HSIs to reduce the quantity of data required for training and the number of trainable parameters, while boosting model robustness. When operating on raw data (i.e., processing the HSI without extracting/learning features), DA generates more samples and thus brings sample diversity.
§.§ Experiments with Convolutional Feature Extractors
This section revisits several deep hyperspectral feature extraction processes, i.e., the traditional convolutional process and the gradient-centralized convolutional process. In this context, we have conducted several experiments using state-of-the-art works published in recent years; the experiment is specifically designed to assess the performance of the convolutional process rather than the models themselves. The baseline convolutional feature extractors include the 2D convolutional neural network for HSI classification (Conv2D) introduced by Makantasis et al. [325], the 3D convolutional approach for remote sensing image classification (Conv3D) proposed by Hamida et al. [326] (a traditional 3D convolutional feature extractor), and the recently introduced generalized gradient-centralized 2D convolution (G2C-Conv2D) [216] and generalized gradient-centralized 3D convolution (G2C-Conv3D) [216] by Roy et al., which extract fine-grained spectral-spatial feature representations. The G2C-Conv3D operation is designed as a weighted combination of vanilla and gradient-centralized 3D convolutions (GC-Conv3D) to extract both intensity-level semantic information and gradient-level information from the HSIs.
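As a rough illustration of this weighted combination, the sketch below blends a vanilla 3D convolution with one whose kernels are mean-centered. This is a hedged reading of the G2C-Conv3D idea, not the exact formulation of [216]; the blending weight `lam` is an illustrative parameter.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class G2CConv3d(nn.Module):
    """Sketch of a gradient-centralized 3D convolution: blends a vanilla 3D
    convolution with one whose kernels are mean-centered, weighted by lam.
    An illustrative reading of [216], not its exact formulation."""
    def __init__(self, in_ch, out_ch, k=3, lam=0.5):
        super().__init__()
        self.conv = nn.Conv3d(in_ch, out_ch, k, padding=k // 2)
        self.lam = lam

    def forward(self, x):
        w = self.conv.weight
        # Center each output kernel by subtracting its mean.
        w_centered = w - w.mean(dim=(1, 2, 3, 4), keepdim=True)
        vanilla = F.conv3d(x, w, self.conv.bias, padding=self.conv.padding)
        centered = F.conv3d(x, w_centered, self.conv.bias,
                            padding=self.conv.padding)
        return (1 - self.lam) * vanilla + self.lam * centered

# One spectral-spatial patch batch: (batch, channels, bands, height, width)
y = G2CConv3d(1, 8)(torch.randn(2, 1, 30, 9, 9))
print(y.shape)   # torch.Size([2, 8, 30, 9, 9])
```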
All the aforementioned convolutional feature extractors were evaluated on five different hyperspectral datasets, namely IP, PU, Trento, UH, and KSC. The experimental results are reported in Table <ref>. From these results, one can conclude that the G2C-Conv3D convolutional process outperformed Conv2D and Conv3D, followed by G2C-Conv2D. A similar trend is observed for all datasets except Trento, on which the plain 3D convolutional process performed slightly better than the traditional Conv2D and G2C-Conv2D; even there, its accuracy gap to G2C-Conv3D is smaller than on the other datasets. Most importantly, the G2C-Conv3D convolution operation is simple to implement and can easily be plugged into existing CNNs to boost both robustness and classification performance.
§ CONCLUSION AND FUTURE DIRECTIONS
The rich information contained in HSI data is a captivating factor behind the adoption of HSI technology in real-world applications, and advances in machine learning strengthen the deployment potential of such technologies. In this work, we surveyed recent developments in Hyperspectral Image Classification (HSIC) using state-of-the-art deep neural networks (for instance, Autoencoders (AEs), Deep Belief Networks (DBNs), Recurrent Neural Networks (RNNs), Convolutional Neural Networks (CNNs), Transfer Learning (TL), Few-shot Learning (FSL), Active/Self Learning (AL/SL), and Data Augmentation (DA)) under a variety of learning schemes (specifically, supervised, semi-supervised, and unsupervised learning). In addition, we analyzed strategies for overcoming the limited availability of training data, such as Data Augmentation, Few-shot Learning, Transfer Learning, and Active Learning. Based on the methodologies discussed above, we selected representative works and conducted experiments on benchmark HSI datasets.
Although current HSIC techniques have advanced rapidly and become remarkably sophisticated, further developments are still required to improve their generalization capability. The main issue for deep neural network-based HSIC is the lack of labeled data: HSI is infamous for the limited availability of labeled samples, while deep neural networks demand a sufficiently large amount of labeled training data. Section <ref> discussed some widely used strategies to combat this issue, but significant improvements are still needed to efficiently utilize the limited available training data. One direction is to integrate the various learning strategies discussed in Section <ref> so as to reap their joint benefits; another is to exploit few-shot or K-shot learning approaches that can accurately predict class labels from only a few labeled samples. Moreover, the joint exploitation of spectral-spatial features of HSI deserves further attention, to complement the classification accuracies achieved by the aforementioned HSIC frameworks. Computationally efficient architectures are another future direction for HSIC: the high computational complexity of deep neural networks is of paramount importance, and parallel HSIC architectures are crucial to speed up processing and meet the computational stipulations of time-critical HSI applications. In this direction, high-performance computing platforms and specialized hardware modules such as graphical processing units (GPUs) and field-programmable gate arrays (FPGAs) can be used to implement parallel HSIC frameworks. Hence, a new HSIC framework should assimilate the aforesaid aspects: appropriately utilizing the limited training samples while considering joint spectral-spatial features of HSI and maintaining a low computational burden.
§ ACKNOWLEDGMENT
The authors thank Ganesan Narayanasamy, who leads IBM OpenPOWER/POWER enablement and ecosystem worldwide, for his support in obtaining access to the IBM AC922 system.
§ REFERENCES

[1]
M. Ahmad, A. Khan, A. M. Khan, M. Mazzara, S. Distefano, A. Sohaib, and
O. Nibouche, “Spatial prior fuzziness pool-based interactive classification
of hyperspectral images,” Remote Sensing, vol. 11, no. 9, May. 2019.
[2]
D. Hong, W. He, N. Yokoya, J. Yao, L. Gao, L. Zhang, J. Chanussot, and X. X.
Zhu, “Interpretable hyperspectral artificial intelligence: When non-convex
modeling meets hyperspectral remote sensing,” IEEE Geosci. Remote Sens.
Mag., vol. 9, no. 2, pp. 52–87, 2021.
[3]
H. Ayaz, M. Ahmad, A. Sohaib, M. N. Yasir, M. A. Zaidan, M. Ali, M. H. Khan,
and Z. Saleem, “Myoglobin-based classification of minced meat using
hyperspectral imaging,” Applied Sciences, vol. 10, no. 19, 2020.
[4]
M. H. Khan, Z. Saleem, M. Ahmad, A. Sohaib, H. Ayaz, and M. Mazzara,
“Hyperspectral imaging for color adulteration detection in red chili,” Applied Sciences, vol. 10, no. 17, 2020.
[5]
Z. Saleem, M. H. Khan, M. Ahmad, A. Sohaib, H. Ayaz, and M. Mazzara,
“Prediction of microbial spoilage and shelf-life of bakery products through
hyperspectral imaging,” IEEE Access, vol. 8, pp. 176986–176996, 2020.
[6]
M. Zulfiqar, M. Ahmad, A. Sohaib, M. Mazzara, and S. Distefano, “Hyperspectral
imaging for bloodstain identification,” Sensors, vol. 21, no. 9, 2021.
[7]
H. Ayaz, M. Ahmad, M. Mazzara, and A. Sohaib, “Hyperspectral imaging for
minced meat classification using nonlinear deep features,” Applied
Sciences, vol. 10, no. 21, 2020.
[8]
H. Hussain Khan, Z. Saleem, M. Ahmad, A. Sohaib, H. Ayaz, M. Mazzara, and R. A.
Raza, “Hyperspectral imaging-based unsupervised adulterated red chili
content transformation for classification: Identification of red chili
adulterants,” Neural Computing and Applications, vol. 33, pp. 1–15,
11 2021.
[9]
“10 Important Applications of hyperspectral image.”
Accessed: 2020-03-10.
[10]
F. Xing, H. Yao, Y. Liu, X. Dai, R. L. Brown, and D. Bhatnagar, “Recent
developments and applications of hyperspectral imaging for rapid detection of
mycotoxins and mycotoxigenic fungi in food products,” Critical Reviews
in Food Science and Nutrition, vol. 59, no. 1, pp. 173–180, 2019.
PMID: 28846441.
[11]
M. Ahmad, “Ground truth labeling and samples selection for hyperspectral image
classification,” Optik, vol. 230, p. 166267, 2021.
[12]
“Applications of hyperspectral image.”
Accessed: 2020-03-10.
[13]
M. Ahmad, A. M. Khan, and R. Hussain, “Graph-based spatial spectral feature
learning for hyperspectral image classification,” IET Image
Processing, vol. 11, no. 12, pp. 1310–1316, 2017.
[14]
J. M. Haut, M. E. Paoletti, J. Plaza, and A. Plaza, “Fast dimensionality
reduction and classification of hyperspectral images with extreme learning
machines,” Journal of Real-Time Image Processing, vol. 15, no. 3,
pp. 439–462, 2018.
[15]
D. Hong, N. Yokoya, J. Chanussot, J. Xu, and X. Zhu, “Learning to propagate
labels on graphs: An iterative multitask regression framework for
semi-supervised hyperspectral dimensionality reduction,” ISPRS J.
Photogramm. Remote Sens., vol. 158, pp. 35–49, 2019.
[16]
M. Ahmad, S. Lee, D. Ulhaq, and Q. Mushtaq, “Hyperspectral remote sensing:
Dimensional reduction and end member extraction,” International Journal
of Soft Computing and Engineering (IJSCE), vol. 2, pp. 2231–2307, 05 2012.
[17]
D. Hong, N. Yokoya, J. Chanussot, J. Xu, and X. X. Zhu, “Joint and progressive
subspace analysis (jpsa) with spatial-spectral manifold alignment for
semi-supervised hyperspectral dimensionality reduction,” IEEE Trans.
Cybern., vol. 51, no. 7, pp. 3602–3615, 2021.
[18]
B. Zhang, X. Sun, L. Gao, and L. Yang, “Endmember extraction of hyperspectral
remote sensing images based on the ant colony optimization (aco) algorithm,”
IEEE transactions on geoscience and remote sensing, vol. 49, no. 7,
pp. 2635–2646, 2011.
[19]
J. M. Bioucas-Dias, A. Plaza, N. Dobigeon, M. Parente, Q. Du, P. Gader, and
J. Chanussot, “Hyperspectral unmixing overview: Geometrical, statistical,
and sparse regression-based approaches,” IEEE journal of selected
topics in applied earth observations and remote sensing, vol. 5, no. 2,
pp. 354–379, 2012.
[20]
M. Ahmad, A. Khan, and A. K. Bashir, “Metric similarity regularizer to enhance
pixel similarity performance for hyperspectral unmixing,” Optik -
International Journal for Light and Electron Optics, vol. 140, pp. 86–95,
July 2017.
[21]
Y. Zhong, X. Wang, L. Zhao, R. Feng, L. Zhang, and Y. Xu, “Blind spectral
unmixing based on sparse component analysis for hyperspectral remote sensing
imagery,” ISPRS Journal of Photogrammetry and Remote Sensing,
vol. 119, pp. 49–63, 2016.
[22]
M. Ahmad, D. Ihsan, and D. Ulhaq, “Linear unmixing and target detection of
hyperspectral imagery using osp,” in International Proceedings of
Computer Science and Information Technology, pp. 179–183, 01 2011.
[23]
M. Ahmad, D. Ulhaq, Q. Mushtaq, and M. Sohaib, “A new statistical approach for
band clustering and band selection using k-means clustering,” International Journal of Engineering and Technology, vol. 3, pp. 606–614,
12 2011.
[24]
M. Ahmad, D. Ulhaq, and Q. Mushtaq, “Aik method for band clustering using
statistics of correlation and dispersion matrix,” in International
Conference on Information Communication and Management, pp. 114–118, 01 2011.
[25]
D. Hong, N. Yokoya, J. Chanussot, and X. Zhu, “An augmented linear mixing
model to address spectral variability for hyperspectral unmixing,” IEEE
Trans. Image Process., vol. 28, no. 4, pp. 1923–1938, 2019.
[26]
L. Gao, Z. Han, D. Hong, B. Zhang, and J. Chanussot, “Cycu-net:
Cycle-consistency unmixing network by learning cascaded autoencoders,” IEEE Transactions on Geoscience and Remote Sensing, 2021.
[27]
D. W. Stein, S. G. Beaven, L. E. Hoff, E. M. Winter, A. P. Schaum, and A. D.
Stocker, “Anomaly detection from hyperspectral imagery,” IEEE signal
processing magazine, vol. 19, no. 1, pp. 58–69, 2002.
[28]
S. Liu, Q. Du, X. Tong, A. Samat, L. Bruzzone, and F. Bovolo, “Multiscale
morphological compressed change vector analysis for unsupervised multiple
change detection,” IEEE Journal of Selected Topics in Applied Earth
Observations and Remote Sensing, vol. 10, no. 9, pp. 4124–4137, 2017.
[29]
S. Li, K. Zhang, Q. Hao, P. Duan, and X. Kang, “Hyperspectral anomaly
detection with multiscale attribute and edge-preserving filters,” IEEE
Geoscience and Remote Sensing Letters, vol. 15, no. 10, pp. 1605–1609, 2018.
[30]
X. Wu, D. Hong, J. Tian, J. Chanussot, W. Li, and R. Tao, “Orsim detector: A
novel object detection framework in optical remote sensing imagery using
spatial-frequency channel features,” IEEE Trans. Geosci. Remote Sens.,
vol. 57, no. 7, pp. 5146–5158, 2019.
[31]
S. Liu, D. Marinelli, L. Bruzzone, and F. Bovolo, “A review of change
detection in multitemporal hyperspectral images: Current techniques,
applications, and challenges,” IEEE Geoscience and Remote Sensing
Magazine, vol. 7, no. 2, pp. 140–158, 2019.
[32]
X. Wu, D. Hong, J. Chanussot, Y. Xu, R. Tao, and Y. Wang, “Fourier-based
rotation-invariant feature boosting: An efficient framework for geospatial
object detection,” IEEE Geosci. Remote Sens. Lett., vol. 17, no. 2,
pp. 302–306, 2020.
[33]
P. Chen, D. Hong, Z. Chen, X. Yang, B. Li, and B. Zhang, “Fccdn: Feature
constraint network for vhr image change detection,” arXiv preprint
arXiv:2105.10860, 2021.
[34]
M. Fauvel, Y. Tarabalka, J. A. Benediktsson, J. Chanussot, and J. C. Tilton,
“Advances in spectral-spatial classification of hyperspectral images,” Proceedings of the IEEE, vol. 101, no. 3, pp. 652–675, 2012.
[35]
M. Ahmad, S. Protasov, A. M. Khan, R. Hussain, A. M. Khattak, and W. A. Khan,
“Fuzziness-based active learning framework to enhance hyperspectral image
classification performance for discriminative and generative classifiers,”
PLoS ONE, vol. 13, p. e0188996, January 2018.
[36]
M. Ahmad, S. Shabbir, D. Oliva, M. Mazzara, and S. Distefano, “Spatial-prior
generalized fuzziness extreme learning machine autoencoder-based active
learning for hyperspectral image classification,” Optik-International
Journal for Light and Electron Optics, 2020.
[37]
M. Ahmad, A. M. Khan, R. Hussain, S. Protasov, F. Chow, and A. M. Khattak,
“Unsupervised geometrical feature learning from hyperspectral data,” in
2016 IEEE Symposium Series on Computational Intelligence (IEEE SSCI
2016), pp. 1–6, December 2016.
[38]
M. Ahmad, S. Protasov, and A. M. Khan, “Hyperspectral band selection using
unsupervised non-linear deep auto encoder to train external classifiers,”
CoRR, vol. abs/1705.06920, 2017.
[39]
M. Ahmad, M. A. Alqarni, A. M. Khan, R. Hussain, M. Mazzara, and S. Distefano,
“Segmented and non-segmented stacked denoising autoencoder for hyperspectral
band reduction,” Optik - International Journal for Light and Electron
Optics, vol. 180, pp. 370–378, Oct 2018.
[40]
S. Liu, Y. Zheng, Q. Du, A. Samat, X. Tong, and M. Dalponte, “A novel feature
fusion approach for vhr remote sensing image classification,” IEEE
Journal of Selected Topics in Applied Earth Observations and Remote Sensing,
vol. 14, pp. 464–473, 2020.
[41]
D. Hong, N. Yokoya, J. Chanussot, and X. Zhu, “CoSpace: Common subspace
learning from hyperspectral-multispectral correspondences,” IEEE Trans.
Geos. Remote Sens., vol. 57, no. 7, pp. 4349–4359, 2019.
[42]
L. Gao, Q. Du, B. Zhang, W. Yang, and Y. Wu, “A comparative study on linear
regression-based noise estimation for hyperspectral imagery,” IEEE
Journal of Selected Topics in Applied Earth Observations and Remote Sensing,
vol. 6, no. 2, pp. 488–498, 2013.
[43]
W. Wei, L. Zhang, C. Tian, A. Plaza, and Y. Zhang, “Structured sparse
coding-based hyperspectral imagery denoising with intracluster filtering,”
IEEE Transactions on Geoscience and Remote Sensing, vol. 55, no. 12,
pp. 6860–6876, 2017.
[44]
C. Yi, Y.-Q. Zhao, J. Yang, J. C.-W. Chan, and S. G. Kong, “Joint
hyperspectral superresolution and unmixing with interactive feedback,” IEEE Transactions on Geoscience and Remote Sensing, vol. 55, no. 7,
pp. 3823–3834, 2017.
[45]
C. Yi, Y.-Q. Zhao, and J. C.-W. Chan, “Hyperspectral image super-resolution
based on spatial and spectral correlation fusion,” IEEE Transactions on
Geoscience and Remote Sensing, vol. 56, no. 7, pp. 4165–4177, 2018.
[46]
G. Cheng, J. Han, L. Guo, Z. Liu, S. Bu, and J. Ren, “Effective and efficient
midlevel visual elements-oriented land-use classification using vhr remote
sensing images,” IEEE Transactions on Geoscience and Remote Sensing,
vol. 53, no. 8, pp. 4238–4249, 2015.
[47]
Q. Zhu, Y. Zhong, B. Zhao, G.-S. Xia, and L. Zhang, “Bag-of-visual-words scene
classifier with local and global features for high spatial resolution remote
sensing imagery,” IEEE Geoscience and Remote Sensing Letters, vol. 13,
no. 6, pp. 747–751, 2016.
[48]
H. Wu, B. Liu, W. Su, W. Zhang, and J. Sun, “Hierarchical coding vectors for
scene level land-use classification,” Remote Sensing, vol. 8, no. 5,
p. 436, 2016.
[49]
G. Cheng, J. Han, and X. Lu, “Remote sensing image scene classification:
Benchmark and state of the art,” Proceedings of the IEEE, vol. 105,
no. 10, pp. 1865–1883, 2017.
[50]
D. Hong, N. Yokoya, N. Ge, J. Chanussot, and X. Zhu, “Learnable manifold
alignment (LeMA): A semi-supervised cross-modality learning framework for
land cover and land use classification,” ISPRS J. Photogramm. Remote
Sens., vol. 147, pp. 193–205, 2019.
[51]
T. R. Martha, N. Kerle, C. J. van Westen, V. Jetten, and K. V. Kumar, “Segment
optimization and data-driven thresholding for knowledge-based landslide
detection by object-based image analysis,” IEEE Transactions on
Geoscience and Remote Sensing, vol. 49, no. 12, pp. 4928–4943, 2011.
[52]
G. Cheng, L. Guo, T. Zhao, J. Han, H. Li, and J. Fang, “Automatic landslide
detection from remote-sensing imagery using a scene classification method
based on bovw and plsa,” International Journal of Remote Sensing,
vol. 34, no. 1, pp. 45–59, 2013.
[53]
N. B. Mishra and K. A. Crews, “Mapping vegetation morphology types in a dry
savanna ecosystem: integrating hierarchical object-based image analysis with
random forest,” International Journal of Remote Sensing, vol. 35,
no. 3, pp. 1175–1198, 2014.
[54]
X. Li and G. Shao, “Object-based urban vegetation mapping with high-resolution
aerial photography as a single data source,” International journal of
remote sensing, vol. 34, no. 3, pp. 771–789, 2013.
[55]
S. B. Kotsiantis, I. D. Zaharakis, and P. E. Pintelas, “Machine learning: a
review of classification and combining techniques,” Artificial
Intelligence Review, vol. 26, no. 3, pp. 159–190, 2006.
[56]
S. B. Kotsiantis, I. Zaharakis, and P. Pintelas, “Supervised machine learning:
A review of classification techniques,” Emerging artificial
intelligence applications in computer engineering, vol. 160, pp. 3–24, 2007.
[57]
C. Pan, X. Jia, J. Li, and X. Gao, “Adaptive edge preserving maps in markov
random fields for hyperspectral image classification,” IEEE
Transactions on Geoscience and Remote Sensing, vol. 59, no. 10,
pp. 8568–8583, 2021.
[58]
A. Plaza, J. A. Benediktsson, J. W. Boardman, J. Brazile, L. Bruzzone,
G. Camps-Valls, J. Chanussot, M. Fauvel, P. Gamba, A. Gualtieri, et al., “Recent advances in techniques for hyperspectral image
processing,” Remote sensing of environment, vol. 113, pp. S110–S122, 2009.
[59]
R. Ablin and C. H. Sulochana, “A survey of hyperspectral image classification
in remote sensing,” International Journal of Advanced Research in
Computer and Communication Engineering, vol. 2, no. 8, pp. 2986–3000, 2013.
[60]
G. Camps-Valls, D. Tuia, L. Bruzzone, and J. A. Benediktsson, “Advances in
hyperspectral image classification: Earth monitoring with statistical
learning methods,” IEEE signal processing magazine, vol. 31, no. 1,
pp. 45–54, 2013.
[61]
D. Chutia, D. Bhattacharyya, K. K. Sarma, R. Kalita, and S. Sudhakar,
“Hyperspectral remote sensing classifications: a perspective survey,” Transactions in GIS, vol. 20, no. 4, pp. 463–490, 2016.
[62]
Y. Chen, Z. Lin, X. Zhao, G. Wang, and Y. Gu, “Deep learning-based
classification of hyperspectral data,” IEEE Journal of Selected topics
in applied earth observations and remote sensing, vol. 7, no. 6,
pp. 2094–2107, 2014.
[63]
P. Ghamisi, N. Yokoya, J. Li, W. Liao, S. Liu, J. Plaza, B. Rasti, and
A. Plaza, “Advances in hyperspectral image and signal processing: A
comprehensive overview of the state of the art,” IEEE Geoscience and
Remote Sensing Magazine, vol. 5, no. 4, pp. 37–78, 2017.
[64]
C. Li, Y. Wang, X. Zhang, H. Gao, Y. Yang, and J. Wang, “Deep belief network
for spectral–spatial classification of hyperspectral remote sensor data,”
Sensors, vol. 19, no. 1, p. 204, 2019.
[65]
B. Rasti, D. Hong, R. Hang, P. Ghamisi, X. Kang, J. Chanussot, and
J. Benediktsson, “Feature extraction for hyperspectral imagery: The
evolution from shallow to deep: Overview and toolbox,” IEEE Geosci.
Remote Sens. Mag., vol. 8, no. 4, pp. 60–88, 2020.
[66]
H. Petersson, D. Gustafsson, and D. Bergstrom, “Hyperspectral image analysis
using deep learning—a review,” in 2016 Sixth International Conference
on Image Processing Theory, Tools and Applications (IPTA), pp. 1–6, IEEE, 2016.
[67]
M. Ahmad, A. M. Khan, M. Mazzara, S. Distefano, M. Ali, and M. S. Sarfraz, “A
fast and compact 3-d cnn for hyperspectral image classification,” IEEE
Geoscience and Remote Sensing Letters, pp. 1–5, 2020.
[68]
D. Hong, Z. Han, J. Yao, L. Gao, B. Zhang, A. Plaza, and J. Chanussot,
“Spectralformer: Rethinking hyperspectral image classification with
transformers,” IEEE Trans. Geosci. Remote Sens., 2021.
DOI: 10.1109/TGRS.2021.3130716.
[69]
M. Paoletti, J. Haut, J. Plaza, and A. Plaza, “Deep learning classifiers for
hyperspectral imaging: A review,” ISPRS Journal of Photogrammetry and
Remote Sensing, vol. 158, pp. 279–317, 2019.
[70]
R. Hang, Q. Liu, D. Hong, and P. Ghamisi, “Cascaded recurrent neural networks
for hyperspectral image classification,” IEEE Transactions on
Geoscience and Remote Sensing, vol. 57, no. 8, pp. 5384–5394, 2019.
[71]
L. Huang, C. Chen, W. Li, and Q. Du, “Remote sensing image scene
classification using multi-scale completed local binary patterns and fisher
vectors,” Remote Sensing, vol. 8, no. 6, p. 483, 2016.
[72]
N. Dalal and B. Triggs, “Histograms of oriented gradients for human
detection,” in 2005 IEEE computer society conference on computer vision
and pattern recognition (CVPR'05), vol. 1, pp. 886–893, IEEE, 2005.
[73]
A. Oliva and A. Torralba, “Modeling the shape of the scene: A holistic
representation of the spatial envelope,” International journal of
computer vision, vol. 42, no. 3, pp. 145–175, 2001.
[74]
D. G. Lowe, “Object recognition from local scale-invariant features,” in Proceedings of the seventh IEEE international conference on computer vision,
vol. 2, pp. 1150–1157, IEEE, 1999.
[75]
J. Ham, Y. Chen, M. M. Crawford, and J. Ghosh, “Investigation of the random
forest framework for classification of hyperspectral data,” IEEE
Transactions on Geoscience and Remote Sensing, vol. 43, no. 3, pp. 492–501, 2005.
[76]
G. Camps-Valls and L. Bruzzone, “Kernel-based methods for hyperspectral image
classification,” IEEE Transactions on Geoscience and Remote Sensing,
vol. 43, no. 6, pp. 1351–1362, 2005.
[77]
W. Yang, L. Gao, and D. Chen, “Real-time target detection in hyperspectral
images based on spatial-spectral information extraction,” EURASIP
Journal on Advances in Signal Processing, vol. 2012, 07 2012.
[78]
G. Cheng, P. Zhou, X. Yao, C. Yao, Y. Zhang, and J. Han, “Object detection in
vhr optical remote sensing images via learning rotation-invariant hog
feature,” in 2016 4th International Workshop on Earth Observation and
Remote Sensing Applications (EORSA), pp. 433–436, IEEE, 2016.
[79]
G. Cheng, P. Zhou, J. Han, L. Guo, and J. Han, “Auto-encoder-based shared
mid-level visual dictionary learning for scene classification using very high
resolution remote sensing images,” IET Computer Vision, vol. 9, no. 5,
pp. 639–647, 2015.
[80]
R. Azhar, D. Tuwohingide, D. Kamudi, N. Suciati, et al., “Batik image
classification using sift feature extraction, bag of features and support
vector machine,” Procedia Computer Science, vol. 72, pp. 24–30, 2015.
[81]
O. Zeglazi, A. Amine, and M. Rziza, “Sift descriptors modeling and application
in texture image classification,” in 2016 13th International Conference
on Computer Graphics, Imaging and Visualization (CGiV), pp. 265–268, IEEE, 2016.
[82]
Y. Xu, K. Hu, Y. Tian, and F. Peng, “Classification of hyperspectral imagery
using sift for spectral matching,” in 2008 Congress on Image and Signal
Processing, vol. 2, pp. 704–708, IEEE, 2008.
[83]
Y. Yang and S. Newsam, “Comparing sift descriptors and gabor texture features
for classification of remote sensed imagery,” in 2008 15th IEEE
international conference on image processing, pp. 1852–1855, IEEE, 2008.
[84]
H. T. M. Nhat and V. T. Hoang, “Feature fusion by using lbp, hog, gist
descriptors and canonical correlation analysis for face recognition,” in
2019 26th International Conference on Telecommunications (ICT),
pp. 371–375, IEEE, 2019.
[85]
S. K. Roy, B. Chanda, B. B. Chaudhuri, D. K. Ghosh, and S. R. Dubey, “Local
morphological pattern: A scale space shape descriptor for texture
classification,” Digital Signal Processing, vol. 82, pp. 152–165, 2018.
[86]
R. M. Haralick, K. Shanmugam, and I. H. Dinstein, “Textural features for image
classification,” IEEE Transactions on systems, man, and cybernetics,
no. 6, pp. 610–621, 1973.
[87]
T. Ojala, M. Pietikainen, and T. Maenpaa, “Multiresolution gray-scale and
rotation invariant texture classification with local binary patterns,” IEEE Transactions on pattern analysis and machine intelligence, vol. 24,
no. 7, pp. 971–987, 2002.
[88]
L. Zhao, P. Tang, and L. Huo, “Feature significance-based
multibag-of-visual-words model for remote sensing image scene
classification,” Journal of Applied Remote Sensing, vol. 10, no. 3,
p. 035004, 2016.
[89]
Y. Zhang, X. Sun, H. Wang, and K. Fu, “High-resolution remote-sensing image
classification via an approximate earth mover's distance-based
bag-of-features model,” IEEE Geoscience and Remote Sensing Letters,
vol. 10, no. 5, pp. 1055–1059, 2013.
[90]
Y. Yang and S. Newsam, “Bag-of-visual-words and spatial extensions for
land-use classification,” in Proceedings of the 18th SIGSPATIAL
international conference on advances in geographic information systems,
pp. 270–279, 2010.
[91]
S. Xu, T. Fang, D. Li, and S. Wang, “Object classification of aerial images
with bag-of-visual words,” IEEE Geoscience and Remote Sensing Letters,
vol. 7, no. 2, pp. 366–370, 2009.
[92]
J. Zhang, T. Li, X. Lu, and Z. Cheng, “Semantic classification of
high-resolution remote-sensing images based on mid-level features,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote
Sensing, vol. 9, no. 6, pp. 2343–2353, 2016.
[93]
R. Bahmanyar, S. Cui, and M. Datcu, “A comparative study of bag-of-words and
bag-of-topics models of eo image patches,” IEEE Geoscience and Remote
Sensing Letters, vol. 12, no. 6, pp. 1357–1361, 2015.
[94]
G. Cheng, J. Han, P. Zhou, and L. Guo, “Multi-class geospatial object
detection and geographic image classification based on collection of part
detectors,” ISPRS Journal of Photogrammetry and Remote Sensing,
vol. 98, pp. 119–132, 2014.
[95]
B. Zhao, Y. Zhong, L. Zhang, and B. Huang, “The fisher kernel coding framework
for high spatial resolution scene classification,” Remote Sensing,
vol. 8, no. 2, p. 157, 2016.
[96]
J. Hu, G.-S. Xia, F. Hu, and L. Zhang, “A comparative study of sampling
analysis in the scene classification of optical high-spatial resolution
remote sensing imagery,” Remote Sensing, vol. 7, no. 11,
pp. 14988–15013, 2015.
[97]
S. Lazebnik, C. Schmid, and J. Ponce, “Beyond bags of features: Spatial
pyramid matching for recognizing natural scene categories,” in 2006
IEEE Computer Society Conference on Computer Vision and Pattern Recognition
(CVPR'06), vol. 2, pp. 2169–2178, IEEE, 2006.
[98]
B. Zhao, Y. Zhong, G.-S. Xia, and L. Zhang, “Dirichlet-derived multiple topic
scene classification model for high spatial resolution remote sensing
imagery,” IEEE Transactions on Geoscience and Remote Sensing, vol. 54,
no. 4, pp. 2108–2123, 2015.
[99]
R. Kusumaningrum, H. Wei, R. Manurung, and A. Murni, “Integrated visual
vocabulary in latent dirichlet allocation–based scene classification for
ikonos image,” Journal of Applied Remote Sensing, vol. 8, no. 1,
p. 083690, 2014.
[100]
Y. Zhong, Q. Zhu, and L. Zhang, “Scene classification based on the
multifeature fusion probabilistic topic model for high spatial resolution
remote sensing imagery,” IEEE Transactions on Geoscience and Remote
Sensing, vol. 53, no. 11, pp. 6207–6222, 2015.
[101]
D. Hong, N. Yokoya, G.-S. Xia, J. Chanussot, and X. X. Zhu, “X-modalnet: A
semi-supervised deep cross-modal network for classification of remote sensing
data,” ISPRS J. Photogramm. Remote Sens., vol. 167, pp. 12–23, 2020.
[102]
H. Yu, W. Yang, G.-S. Xia, and G. Liu, “A color-texture-structure descriptor
for high-resolution satellite image classification,” Remote Sensing,
vol. 8, no. 3, p. 259, 2016.
[103]
V. Risojević and Z. Babić, “Fusion of global and local descriptors for
remote sensing image classification,” IEEE Geoscience and Remote
Sensing Letters, vol. 10, no. 4, pp. 836–840, 2012.
[104]
M. L. Mekhalfi, F. Melgani, Y. Bazi, and N. Alajlan, “Land-use classification
with compressive sensing multifeature fusion,” IEEE Geoscience and
Remote Sensing Letters, vol. 12, no. 10, pp. 2155–2159, 2015.
[105]
G. Sheng, W. Yang, T. Xu, and H. Sun, “High-resolution satellite scene
classification using a sparse coding based multiple feature combination,”
International journal of remote sensing, vol. 33, no. 8,
pp. 2395–2412, 2012.
[106]
L. Xie, J. Wang, B. Zhang, and Q. Tian, “Incorporating visual adjectives for
image classification,” Neurocomputing, vol. 182, pp. 48–55, 2016.
[107]
D. Hong, L. Gao, N. Yokoya, J. Yao, J. Chanussot, D. Qian, and B. Zhang, “More
diverse means better: Multimodal deep learning meets remote-sensing imagery
classification,” IEEE Trans. Geosci. Remote Sens., vol. 59, no. 5,
pp. 4340–4354, 2021.
[108]
G. E. Hinton and R. R. Salakhutdinov, “Reducing the dimensionality of data
with neural networks,” Science, vol. 313, no. 5786, pp. 504–507, 2006.
[109]
Q. Zou, L. Ni, T. Zhang, and Q. Wang, “Deep learning based feature selection
for remote sensing scene classification,” IEEE Geoscience and Remote
Sensing Letters, vol. 12, no. 11, pp. 2321–2325, 2015.
[110]
F. Hu, G.-S. Xia, J. Hu, and L. Zhang, “Transferring deep convolutional neural
networks for the scene classification of high-resolution remote sensing
imagery,” Remote Sensing, vol. 7, no. 11, pp. 14680–14707, 2015.
[111]
S. Chen and Y. Wang, “Convolutional neural network and convex optimization,”
Dept. of Elect. and Comput. Eng., Univ. of California at San Diego, San
Diego, CA, USA, Tech. Rep, 2014.
[112]
R. E. Bellman, Adaptive control processes: a guided tour.
Princeton university press, 2015.
[113]
G. Hughes, “On the mean accuracy of statistical pattern recognizers,” IEEE transactions on information theory, vol. 14, no. 1, pp. 55–63, 1968.
[114]
M. Reichstein, G. Camps-Valls, B. Stevens, M. Jung, J. Denzler, N. Carvalhais,
et al., “Deep learning and process understanding for data-driven earth
system science,” Nature, vol. 566, no. 7743, pp. 195–204, 2019.
[115]
J. M. Bioucas-Dias, A. Plaza, G. Camps-Valls, P. Scheunders, N. Nasrabadi, and
J. Chanussot, “Hyperspectral remote sensing data analysis and future
challenges,” IEEE Geoscience and remote sensing magazine, vol. 1,
no. 2, pp. 6–36, 2013.
[116]
Q. Nguyen and M. Hein, “Optimization landscape and expressivity of deep
cnns,” arXiv preprint arXiv:1710.10928, 2017.
[117]
M. Ahmad, S. Shabbir, R. A. Raza, M. Mazzara, S. Distefano, and A. M. Khan,
“Artifacts of different dimension reduction methods on hybrid cnn feature
hierarchy for hyperspectral image classification,” Optik, vol. 246,
p. 167757, 2021.
[118]
M. Ahmad, M. Mazzara, and S. Distefano, “Regularized cnn feature hierarchy for
hyperspectral image classification,” Remote Sensing, vol. 13, no. 12,
p. 2275, 2021.
[119]
L. Bottou, “Stochastic gradient learning in neural networks,” Proceedings of Neuro-Nîmes, vol. 91, no. 8, p. 12, 1991.
[120]
N. Qian, “On the momentum term in gradient descent learning algorithms,” Neural networks, vol. 12, no. 1, pp. 145–151, 1999.
[121]
G. Hinton, N. Srivastava, and K. Swersky, “Neural networks for machine
learning lecture 6a overview of mini-batch gradient descent,” Cited
on, vol. 14, no. 8, 2012.
[122]
D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
[123]
I. Loshchilov and F. Hutter, “Decoupled weight decay regularization,” arXiv preprint arXiv:1711.05101, 2017.
[124]
S. R. Dubey, S. Chakraborty, S. K. Roy, S. Mukherjee, S. K. Singh, and B. B.
Chaudhuri, “diffgrad: An optimization method for convolutional neural
networks,” IEEE transactions on neural networks and learning systems,
vol. 31, no. 11, pp. 4500–4511, 2019.
[125]
L. Liu, H. Jiang, P. He, W. Chen, X. Liu, J. Gao, and J. Han, “On the variance
of the adaptive learning rate and beyond,” arXiv preprint
arXiv:1908.03265, 2019.
[126]
H. Yong, J. Huang, X. Hua, and L. Zhang, “Gradient centralization: A new
optimization technique for deep neural networks,” in European
Conference on Computer Vision, pp. 635–652, Springer, 2020.
[127]
S. Roy, M. Paoletti, J. Haut, S. Dubey, P. Kar, A. Plaza, and B. Chaudhuri,
“Angulargrad: A new optimization technique for angular convergence of
convolutional neural networks,” arXiv preprint arXiv:2105.10190, 2021.
[128]
D. Erhan, Y. Bengio, A. Courville, P.-A. Manzagol, P. Vincent, and S. Bengio,
“Why does unsupervised pre-training help deep learning?,” Journal of
Machine Learning Research, vol. 11, no. Feb, pp. 625–660, 2010.
[129]
M. Z. Alom, T. M. Taha, C. Yakopcic, S. Westberg, P. Sidike, M. S. Nasrin,
M. Hasan, B. C. Van Essen, A. A. Awwal, and V. K. Asari, “A state-of-the-art
survey on deep learning theory and architectures,” Electronics,
vol. 8, no. 3, p. 292, 2019.
[130]
A. Plaza, D. Valencia, and J. Plaza, “An experimental comparison of parallel
algorithms for hyperspectral analysis using heterogeneous and homogeneous
networks of workstations,” Parallel Computing, vol. 34, no. 2,
pp. 92–114, 2008.
[131]
A. Plaza, J. Plaza, A. Paz, and S. Sanchez, “Parallel hyperspectral image and
signal processing [applications corner],” IEEE Signal Processing
Magazine, vol. 28, no. 3, pp. 119–126, 2011.
[132]
K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image
recognition,” in Proceedings of the IEEE conference on computer vision
and pattern recognition, pp. 770–778, 2016.
[133]
Y. Bengio, P. Simard, and P. Frasconi, “Learning long-term dependencies with
gradient descent is difficult,” IEEE transactions on neural networks,
vol. 5, no. 2, pp. 157–166, 1994.
[134]
Y. Fang, H. Li, Y. Ma, K. Liang, Y. Hu, S. Zhang, and H. Wang, “Dimensionality
reduction of hyperspectral images based on robust spatial information using
locally linear embedding,” IEEE Geoscience and Remote Sensing Letters,
vol. 11, no. 10, pp. 1712–1716, 2014.
[135]
M. Sugiyama, “Dimensionality reduction of multimodal labeled data by local
fisher discriminant analysis,” Journal of machine learning research,
vol. 8, no. May, pp. 1027–1061, 2007.
[136]
H.-T. Chen, H.-W. Chang, and T.-L. Liu, “Local discriminant embedding and its
variants,” in 2005 IEEE computer society conference on computer vision
and pattern recognition (CVPR'05), vol. 2, pp. 846–853, IEEE, 2005.
[137]
B.-C. Kuo and D. A. Landgrebe, “Nonparametric weighted feature extraction for
classification,” IEEE Transactions on Geoscience and Remote Sensing,
vol. 42, no. 5, pp. 1096–1105, 2004.
[138]
B. Kumar, O. Dikshit, A. Gupta, and M. K. Singh, “Feature extraction for
hyperspectral image classification: a review,” International Journal of
Remote Sensing, vol. 41, no. 16, pp. 6248–6287, 2020.
[139]
J. A. Benediktsson, J. A. Palmason, and J. R. Sveinsson, “Classification of
hyperspectral data from urban areas based on extended morphological
profiles,” IEEE Transactions on Geoscience and Remote Sensing,
vol. 43, no. 3, pp. 480–491, 2005.
[140]
Y. Gu, T. Liu, X. Jia, J. A. Benediktsson, and J. Chanussot, “Nonlinear
multiple kernel learning with multiple-structure-element extended
morphological profiles for hyperspectral image classification,” IEEE
Transactions on Geoscience and Remote Sensing, vol. 54, no. 6,
pp. 3235–3247, 2016.
[141]
D. Hong, X. Wu, P. Ghamisi, J. Chanussot, N. Yokoya, and X. X. Zhu, “Invariant
attribute profiles: A spatial-frequency joint feature extractor for
hyperspectral image classification,” IEEE Trans. Geosci. Remote Sens.,
vol. 58, no. 6, pp. 3791–3808, 2020.
[142]
X. Zhang, Y. Sun, K. Jiang, C. Li, L. Jiao, and H. Zhou, “Spatial sequential
recurrent neural network for hyperspectral image classification,” IEEE
Journal of Selected Topics in Applied Earth Observations and Remote Sensing,
vol. 11, no. 11, pp. 4141–4155, 2018.
[143]
M. Pesaresi and J. A. Benediktsson, “A new approach for the morphological
segmentation of high-resolution satellite imagery,” IEEE transactions
on Geoscience and Remote Sensing, vol. 39, no. 2, pp. 309–320, 2001.
[144]
Y. Chen, X. Zhao, and X. Jia, “Spectral–spatial classification of
hyperspectral data based on deep belief network,” IEEE Journal of
Selected Topics in Applied Earth Observations and Remote Sensing, vol. 8,
no. 6, pp. 2381–2392, 2015.
[145]
M. E. Paoletti, J. M. Haut, J. Plaza, and A. Plaza, “Deep&dense convolutional
neural network for hyperspectral image classification,” Remote
Sensing, vol. 10, no. 9, p. 1454, 2018.
[146]
X. Jin, L. Jie, S. Wang, H. J. Qi, and S. W. Li, “Classifying wheat
hyperspectral pixels of healthy heads and fusarium head blight disease using
a deep neural network in the wild field,” Remote Sensing, vol. 10,
no. 3, p. 395, 2018.
[147]
N. Wu, C. Zhang, X. Bai, X. Du, and Y. He, “Discrimination of chrysanthemum
varieties using hyperspectral imaging combined with a deep convolutional
neural network,” Molecules, vol. 23, no. 11, p. 2831, 2018.
[148]
Y. Li, W. Xie, and H. Li, “Hyperspectral image reconstruction by deep
convolutional neural network for classification,” Pattern Recognition,
vol. 63, pp. 371–383, 2017.
[149]
Y. Zhan, D. Hu, H. Xing, and X. Yu, “Hyperspectral band selection based on
deep convolutional neural network and distance density,” IEEE
Geoscience and Remote Sensing Letters, vol. 14, no. 12, pp. 2365–2369, 2017.
[150]
M. E. Paoletti, J. M. Haut, R. Fernandez-Beltran, J. Plaza, A. J. Plaza, and
F. Pla, “Deep pyramidal residual networks for spectral–spatial
hyperspectral image classification,” IEEE Transactions on Geoscience
and Remote Sensing, vol. 57, no. 2, pp. 740–754, 2018.
[151]
J. Acquarelli, E. Marchiori, L. Buydens, T. Tran, and T. Van Laarhoven,
“Spectral-spatial classification of hyperspectral images: Three tricks and a
new learning setting,” Remote Sensing, vol. 10, no. 7, p. 1156, 2018.
[152]
M. Ahmad, A. M. Khan, and R. Hussain, “Graph-based spatial–spectral feature
learning for hyperspectral image classification,” IET image
processing, vol. 11, no. 12, pp. 1310–1316, 2017.
[153]
Q. Liu, F. Zhou, R. Hang, and X. Yuan, “Bidirectional-convolutional lstm based
spectral-spatial feature learning for hyperspectral image classification,”
Remote Sensing, vol. 9, no. 12, p. 1330, 2017.
[154]
S. K. Roy, S. Manna, T. Song, and L. Bruzzone, “Attention-based adaptive
spectral-spatial kernel resnet for hyperspectral image classification,” IEEE Transactions on Geoscience and Remote Sensing, vol. 59, no. 9,
pp. 7831–7843, 2021.
[155]
B. Liu, X. Yu, P. Zhang, X. Tan, A. Yu, and Z. Xue, “A semi-supervised
convolutional neural network for hyperspectral image classification,” Remote Sensing Letters, vol. 8, no. 9, pp. 839–848, 2017.
[156]
X. Kang, B. Zhuo, and P. Duan, “Semi-supervised deep learning for
hyperspectral image classification,” Remote Sensing Letters, vol. 10,
no. 4, pp. 353–362, 2019.
[157]
Y. Wu, G. Mu, C. Qin, Q. Miao, W. Ma, and X. Zhang, “Semi-supervised
hyperspectral image classification via spatial-regulated self-training,”
Remote Sensing, vol. 12, no. 1, p. 159, 2020.
[158]
Z. Zhang, “Semi-supervised hyperspectral image classification algorithm based
on graph embedding and discriminative spatial information,” Microprocessors and Microsystems, p. 103070, 2020.
[159]
A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with
deep convolutional neural networks,” in Advances in neural information
processing systems, pp. 1097–1105, 2012.
[160]
I. Goodfellow, Y. Bengio, and A. Courville, Deep learning.
MIT press, 2016.
[161]
S. K. Roy, S. Manna, S. R. Dubey, and B. B. Chaudhuri, “Lisht: Non-parametric
linearly scaled hyperbolic tangent activation function for neural networks,”
arXiv preprint arXiv:1901.05894, 2019.
[162]
T. Williams and R. Li, “Wavelet pooling for convolutional neural networks,”
in International Conference on Learning Representations, 2018.
[163]
J. T. Springenberg, A. Dosovitskiy, T. Brox, and M. Riedmiller, “Striving for
simplicity: The all convolutional net,” arXiv preprint
arXiv:1412.6806, 2014.
[164]
D. H. Hubel and T. N. Wiesel, “Receptive fields, binocular interaction and
functional architecture in the cat's visual cortex,” The Journal of
physiology, vol. 160, no. 1, pp. 106–154, 1962.
[165]
K. Fukushima, “Neocognitron: A self-organizing neural network model for a
mechanism of pattern recognition unaffected by shift in position,” Biological cybernetics, vol. 36, no. 4, pp. 193–202, 1980.
[166]
A. Voulodimos, N. Doulamis, A. Doulamis, and E. Protopapadakis, “Deep learning
for computer vision: A brief review,” Computational intelligence and
neuroscience, vol. 2018, 2018.
[167]
J. Gu, Z. Wang, J. Kuen, L. Ma, A. Shahroudy, B. Shuai, T. Liu, X. Wang,
G. Wang, J. Cai, et al., “Recent advances in convolutional neural
networks,” Pattern Recognition, vol. 77, pp. 354–377, 2018.
[168]
M. Lin, Q. Chen, and S. Yan, “Network in network,” arXiv preprint
arXiv:1312.4400, 2013.
[169]
H. Gao, Y. Yang, C. Li, H. Zhou, and X. Qu, “Joint alternate small convolution
and feature reuse for hyperspectral image classification,” ISPRS
International Journal of Geo-Information, vol. 7, no. 9, p. 349, 2018.
[170]
W. Zhao, L. Jiao, W. Ma, J. Zhao, J. Zhao, H. Liu, X. Cao, and S. Yang,
“Superpixel-based multiple local cnn for panchromatic and multispectral
image classification,” IEEE Transactions on Geoscience and Remote
Sensing, vol. 55, no. 7, pp. 4141–4156, 2017.
[171]
H. Alhichri, N. Alajlan, Y. Bazi, and T. Rabczuk, “Multi-scale convolutional
neural network for remote sensing scene classification,” in 2018 IEEE
International Conference on Electro/Information Technology (EIT), pp. 1–5,
IEEE, 2018.
[172]
M. Noor, S. Salwa, J. Ren, S. Marshall, and K. Michael, “Hyperspectral image
enhancement and mixture deep-learning classification of corneal epithelium
injuries,” Sensors, vol. 17, no. 11, p. 2644, 2017.
[173]
J. Leng, T. Li, G. Bai, Q. Dong, and H. Dong, “Cube-cnn-svm: a novel
hyperspectral image classification method,” in 2016 IEEE 28th
International Conference on Tools with Artificial Intelligence (ICTAI),
pp. 1027–1034, IEEE, 2016.
[174]
S. Yu, S. Jia, and C. Xu, “Convolutional neural networks for hyperspectral
image classification,” Neurocomputing, vol. 219, pp. 88–98, 2017.
[175]
H. Wu and S. Prasad, “Convolutional recurrent neural networks for hyperspectral
data classification,” Remote Sensing, vol. 9, no. 3, p. 298, 2017.
[176]
Z. Qiu, J. Chen, Y. Zhao, S. Zhu, Y. He, and C. Zhang, “Variety identification
of single rice seed using hyperspectral imaging combined with convolutional
neural network,” Applied Sciences, vol. 8, no. 2, p. 212, 2018.
[177]
Q. Huang, W. Li, and X. Xie, “Convolutional neural network for medical
hyperspectral image classification with kernel fusion,” in BIBE 2018;
International Conference on Biological Information and Biomedical
Engineering, pp. 1–4, VDE, 2018.
[178]
K. Charmisha, V. Sowmya, and K. Soman, “Dimensionally reduced features for
hyperspectral image classification using deep learning,” in International Conference on Communications and Cyber Physical Engineering
2018, pp. 171–179, Springer, 2018.
[179]
G. Turra, S. Arrigoni, and A. Signoroni, “Cnn-based identification of
hyperspectral bacterial signatures for digital microbiology,” in International Conference on Image Analysis and Processing, pp. 500–510,
Springer, 2017.
[180]
J. Li, X. Zhao, Y. Li, Q. Du, B. Xi, and J. Hu, “Classification of
hyperspectral imagery using a new fully convolutional neural network,” IEEE Geoscience and Remote Sensing Letters, vol. 15, no. 2, pp. 292–296, 2018.
[181]
J. M. Haut, M. E. Paoletti, J. Plaza, A. Plaza, and J. Li, “Hyperspectral
image classification using random occlusion data augmentation,” IEEE
Geoscience and Remote Sensing Letters, vol. 16, no. 11, pp. 1751–1755, 2019.
[182]
Y. Xu, B. Du, F. Zhang, and L. Zhang, “Hyperspectral image classification via
a random patches network,” ISPRS journal of photogrammetry and remote
sensing, vol. 142, pp. 344–357, 2018.
[183]
Y. Wang, T. Song, Y. Xie, and S. K. Roy, “A probabilistic neighbourhood
pooling-based attention network for hyperspectral image classification,”
Remote Sensing Letters, vol. 13, no. 1, pp. 65–75, 2021.
[184]
C. Ding, Y. Li, Y. Xia, W. Wei, L. Zhang, and Y. Zhang, “Convolutional neural
networks based hyperspectral image classification method with adaptive
kernels,” Remote Sensing, vol. 9, no. 6, p. 618, 2017.
[185]
Y. Chen, L. Zhu, P. Ghamisi, X. Jia, G. Li, and L. Tang, “Hyperspectral images
classification with gabor filtering and convolutional neural network,” IEEE Geoscience and Remote Sensing Letters, vol. 14, no. 12, pp. 2355–2359, 2017.
[186]
J. Zhu, L. Fang, and P. Ghamisi, “Deformable convolutional neural networks for
hyperspectral image classification,” IEEE Geoscience and Remote Sensing
Letters, vol. 15, no. 8, pp. 1254–1258, 2018.
[187]
L. Ran, Y. Zhang, W. Wei, and Q. Zhang, “A hyperspectral image classification
framework with spatial pixel pair features,” Sensors, vol. 17, no. 10,
p. 2421, 2017.
[188]
Z. Zhong, J. Li, Z. Luo, and M. Chapman, “Spectral–spatial residual network
for hyperspectral image classification: A 3-d deep learning framework,” IEEE Transactions on Geoscience and Remote Sensing, vol. 56, no. 2,
pp. 847–858, 2018.
[189]
M. Paoletti, J. Haut, J. Plaza, and A. Plaza, “A new deep convolutional neural
network for fast hyperspectral image classification,” ISPRS journal of
photogrammetry and remote sensing, vol. 145, pp. 120–147, 2018.
[190]
S. Li, X. Zhu, Y. Liu, and J. Bao, “Adaptive spatial-spectral feature learning
for hyperspectral image classification,” IEEE Access, vol. 7,
pp. 61534–61547, 2019.
[191]
S. K. Roy, M. E. Paoletti, J. M. Haut, E. M. T. Hendrix, and A. Plaza, “A new
max-min convolutional network for hyperspectral image classification,” in
2021 11th Workshop on Hyperspectral Imaging and Signal Processing:
Evolution in Remote Sensing (WHISPERS), pp. 1–5, 2021.
[192]
M. E. Paoletti, J. M. Haut, S. K. Roy, and E. M. Hendrix, “Rotation
equivariant convolutional neural networks for hyperspectral image
classification,” IEEE Access, vol. 8, pp. 179575–179591, 2020.
[193]
H. Zhang, Y. Li, Y. Jiang, P. Wang, Q. Shen, and C. Shen, “Hyperspectral
classification based on lightweight 3-d-cnn with transfer learning,” IEEE Transactions on Geoscience and Remote Sensing, vol. 57, no. 8,
pp. 5813–5828, 2019.
[194]
S. Jia, Z. Lin, M. Xu, Q. Huang, J. Zhou, X. Jia, and Q. Li, “A lightweight
convolutional neural network for hyperspectral image classification,” IEEE Transactions on Geoscience and Remote Sensing, vol. 59, no. 5,
pp. 4150–4163, 2020.
[195]
S. K. Roy, S. Chatterjee, S. Bhattacharyya, B. B. Chaudhuri, and
J. Platoš, “Lightweight spectral–spatial squeeze-and-excitation
residual bag-of-features learning for hyperspectral classification,” IEEE Transactions on Geoscience and Remote Sensing, vol. 58, no. 8,
pp. 5277–5290, 2020.
[196]
S. K. Roy, R. Mondal, M. E. Paoletti, J. M. Haut, and A. Plaza, “Morphological
convolutional neural networks for hyperspectral image classification,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote
Sensing, vol. 14, pp. 8689–8702, 2021.
[197]
Y. Li, H. Zhang, and Q. Shen, “Spectral–spatial classification of
hyperspectral imagery with 3d convolutional neural network,” Remote
Sensing, vol. 9, no. 1, p. 67, 2017.
[198]
S. K. Roy, S. R. Dubey, S. Chatterjee, and B. B. Chaudhuri, “Fusenet: fused
squeeze-and-excitation network for spectral-spatial hyperspectral image
classification,” IET Image Processing, vol. 14, no. 8, pp. 1653–1661, 2020.
[199]
L. Jiao, M. Liang, H. Chen, S. Yang, H. Liu, and X. Cao, “Deep fully
convolutional network-based spatial distribution prediction for hyperspectral
image classification,” IEEE Transactions on Geoscience and Remote
Sensing, vol. 55, no. 10, pp. 5585–5599, 2017.
[200]
H. Zhang, Y. Li, Y. Zhang, and Q. Shen, “Spectral-spatial classification of
hyperspectral imagery using a dual-channel convolutional neural network,”
Remote sensing letters, vol. 8, no. 5, pp. 438–447, 2017.
[201]
M. He, B. Li, and H. Chen, “Multi-scale 3d deep convolutional neural network
for hyperspectral image classification,” in 2017 IEEE International
Conference on Image Processing (ICIP), pp. 3904–3908, IEEE, 2017.
[202]
H. Dong, L. Zhang, and B. Zou, “Band attention convolutional networks for
hyperspectral image classification,” arXiv preprint arXiv:1906.04379, 2019.
[203]
N. He, M. E. Paoletti, J. M. Haut, L. Fang, S. Li, A. Plaza, and J. Plaza,
“Feature extraction with multiscale covariance maps for hyperspectral image
classification,” IEEE Transactions on Geoscience and Remote Sensing,
vol. 57, no. 2, pp. 755–769, 2018.
[204]
G. Cheng, Z. Li, J. Han, X. Yao, and L. Guo, “Exploring hierarchical
convolutional features for hyperspectral image classification,” IEEE
Transactions on Geoscience and Remote Sensing, vol. 56, no. 11,
pp. 6712–6722, 2018.
[205]
Z. Gong, P. Zhong, Y. Yu, W. Hu, and S. Li, “A cnn with multiscale convolution
and diversified metric for hyperspectral image classification,” IEEE
Transactions on Geoscience and Remote Sensing, vol. 57, no. 6,
pp. 3599–3618, 2019.
[206]
P. Zhong, N. Peng, and R. Wang, “Learning to diversify patch-based priors for
remote sensing image restoration,” IEEE Journal of Selected Topics in
Applied Earth Observations and Remote Sensing, vol. 8, no. 11,
pp. 5225–5245, 2015.
[207]
L. Liu, Z. Shi, B. Pan, N. Zhang, H. Luo, and X. Lan, “Multiscale deep spatial
feature extraction using virtual rgb image for hyperspectral imagery
classification,” Remote Sensing, vol. 12, no. 2, p. 280, 2020.
[208]
X. Ma, A. Fu, J. Wang, H. Wang, and B. Yin, “Hyperspectral image
classification based on deep deconvolution network with skip architecture,”
IEEE Transactions on Geoscience and Remote Sensing, vol. 56, no. 8,
pp. 4781–4791, 2018.
[209]
A. Sellami, M. Farah, I. R. Farah, and B. Solaiman, “Hyperspectral imagery
classification based on semi-supervised 3-d deep neural network and adaptive
band selection,” Expert Systems with Applications, vol. 129,
pp. 246–259, 2019.
[210]
S. K. Roy, S. Das, T. Song, and B. Chanda, “Darecnet-bs: Unsupervised
dual-attention reconstruction network for hyperspectral band selection,”
IEEE Geoscience and Remote Sensing Letters, 2020.
[211]
S. Mei, J. Ji, Y. Geng, Z. Zhang, X. Li, and Q. Du, “Unsupervised
spatial–spectral feature learning by 3d convolutional autoencoder for
hyperspectral classification,” IEEE Transactions on Geoscience and
Remote Sensing, vol. 57, no. 9, pp. 6808–6820, 2019.
[212]
S. K. Roy, D. Hong, P. Kar, X. Wu, X. Liu, and D. Zhao, “Lightweight
heterogeneous kernel convolution for hyperspectral image classification with
noisy labels,” IEEE Geoscience and Remote Sensing Letters, pp. 1–5, 2021.
[213]
S. K. Roy, G. Krishna, S. R. Dubey, and B. B. Chaudhuri, “HybridSN:
Exploring 3-D-2-D CNN feature hierarchy for hyperspectral image
classification,” IEEE Geoscience and Remote Sensing Letters, vol. 17,
no. 2, pp. 277–281, 2020.
[214]
B. Zhang, S. Li, X. Jia, L. Gao, and M. Peng, “Adaptive markov random field
approach for classification of hyperspectral imagery,” IEEE Geoscience
and Remote Sensing Letters, vol. 8, no. 5, pp. 973–977, 2011.
[215]
M. E. Paoletti, J. M. Haut, T. Alipour-Fard, S. K. Roy, E. M. Hendrix, and
A. Plaza, “Separable attention network in single-and mixed-precision
floating point for land-cover classification of remote sensing images,” IEEE Geoscience and Remote Sensing Letters, 2021.
[216]
S. K. Roy, P. Kar, D. Hong, X. Wu, A. Plaza, and J. Chanussot, “Revisiting
deep hyperspectral feature extraction networks via gradient centralized
convolution,” IEEE Transactions on Geoscience and Remote Sensing,
pp. 1–20, 2021.
[217]
T. N. Kipf and M. Welling, “Semi-supervised classification with graph
convolutional networks,” arXiv preprint arXiv:1609.02907, 2016.
[218]
D. Hong, L. Gao, J. Yao, B. Zhang, A. Plaza, and J. Chanussot, “Graph
convolutional networks for hyperspectral image classification,” IEEE
Trans. Geosci. Remote Sens., vol. 59, no. 7, pp. 5966–5978, 2021.
[219]
A. Qin, Z. Shang, J. Tian, Y. Wang, T. Zhang, and Y. Y. Tang,
“Spectral–spatial graph convolutional networks for semisupervised
hyperspectral image classification,” IEEE Geosci. Remote Sens. Lett.,
vol. 16, no. 2, pp. 241–245, 2018.
[220]
S. Wan, C. Gong, P. Zhong, S. Pan, G. Li, and J. Yang, “Hyperspectral image
classification with context-aware dynamic graph convolutional network,” arXiv preprint arXiv:1909.11953, 2019.
[221]
J. Zhu, L. Wu, H. Hao, X. Song, and Y. Lu, “Auto-encoder based for high
spectral dimensional data classification and visualization,” in 2017
IEEE Second International Conference on Data Science in Cyberspace (DSC),
pp. 350–354, IEEE, 2017.
[222]
A. Hassanzadeh, A. Kaarna, and T. Kauranne, “Unsupervised multi-manifold
classification of hyperspectral remote sensing images with contractive
autoencoder,” in Scandinavian Conference on Image Analysis,
pp. 169–180, Springer, 2017.
[223]
Y. Wang, Y. Jiang, Y. Wu, and Z.-H. Zhou, “Multi-manifold clustering,” in
Pacific Rim International Conference on Artificial Intelligence,
pp. 280–291, Springer, 2010.
[224]
S. Rifai, P. Vincent, X. Muller, X. Glorot, and Y. Bengio, “Contractive
auto-encoders: Explicit invariance during feature extraction,” in Proceedings of the 28th International Conference on International Conference
on Machine Learning, ICML'11, (Madison, WI, USA), pp. 833–840, Omnipress, 2011.
[225]
X. Zhang, Y. Liang, C. Li, N. Huyan, L. Jiao, and H. Zhou, “Recursive
autoencoders-based unsupervised feature learning for hyperspectral image
classification,” IEEE Geoscience and Remote Sensing Letters, vol. 14,
no. 11, pp. 1928–1932, 2017.
[226]
S. Hao, W. Wang, Y. Ye, T. Nie, and L. Bruzzone, “Two-stream deep architecture
for hyperspectral image classification,” IEEE Transactions on
Geoscience and Remote Sensing, vol. 56, no. 4, pp. 2349–2361, 2017.
[227]
K. He, J. Sun, and X. Tang, “Guided image filtering,” IEEE transactions
on pattern analysis and machine intelligence, vol. 35, no. 6,
pp. 1397–1409, 2012.
[228]
X. Sun, F. Zhou, J. Dong, F. Gao, Q. Mu, and X. Wang, “Encoding spectral and
spatial context information for hyperspectral image classification,” IEEE Geoscience and Remote Sensing Letters, vol. 14, no. 12, pp. 2250–2254, 2017.
[229]
C. Zhao, X. Wan, G. Zhao, B. Cui, W. Liu, and B. Qi, “Spectral-spatial
classification of hyperspectral imagery based on stacked sparse autoencoder
and random forest,” European journal of remote sensing, vol. 50,
no. 1, pp. 47–63, 2017.
[230]
X. Wan, C. Zhao, Y. Wang, and W. Liu, “Stacked sparse autoencoder in
hyperspectral data classification using spectral-spatial, higher order
statistics and multifractal spectrum features,” Infrared Physics &
Technology, vol. 86, pp. 77–89, 2017.
[231]
F. Lv, M. Han, and T. Qiu, “Remote sensing image classification based on
ensemble extreme learning machine with stacked autoencoder,” IEEE
Access, vol. 5, pp. 9021–9031, 2017.
[232]
M. Ahmad, A. M. Khan, M. Mazzara, and S. Distefano, “Multi-layer extreme
learning machine-based autoencoder for hyperspectral image classification,”
in Proceedings of the 14th International Conference on Computer Vision
Theory and Applications (VISAPP’19), Prague, Czech Republic, pp. 25–27, 2019.
[233]
P. Zhou, J. Han, G. Cheng, and B. Zhang, “Learning compact and discriminative
stacked autoencoder for hyperspectral image classification,” IEEE
Transactions on Geoscience and Remote Sensing, vol. 57, no. 7,
pp. 4823–4833, 2019.
[234]
R. Lan, Z. Li, Z. Liu, T. Gu, and X. Luo, “Hyperspectral image classification
using k-sparse denoising autoencoder and spectral–restricted spatial
characteristics,” Applied Soft Computing, vol. 74, pp. 693–708, 2019.
[235]
S. Paul and D. N. Kumar, “Spectral-spatial classification of hyperspectral
data with mutual information based segmented stacked autoencoder approach,”
ISPRS journal of photogrammetry and remote sensing, vol. 138,
pp. 265–280, 2018.
[236]
B. Liu, Q. Zhang, L. Ying, W. Chang, and M. Zhou, “Spatial–spectral jointed
stacked auto-encoder-based deep learning for oil slick extraction from
hyperspectral images,” Journal of the Indian Society of Remote
Sensing, vol. 47, no. 12, pp. 1989–1997, 2019.
[237]
G. E. Hinton, S. Osindero, and Y.-W. Teh, “A fast learning algorithm for deep
belief nets,” Neural computation, vol. 18, no. 7, pp. 1527–1554, 2006.
[238]
N. Zhang, S. Ding, J. Zhang, and Y. Xue, “An overview on restricted boltzmann
machines,” Neurocomputing, vol. 275, pp. 1186–1199, 2018.
[239]
B. Ayhan and C. Kwan, “Application of deep belief network to land cover
classification using hyperspectral images,” in Advances in Neural
Networks - ISNN 2017 (F. Cong, A. Leung, and Q. Wei, eds.), (Cham),
pp. 269–276, Springer International Publishing, 2017.
[240]
U. Shaham, X. Cheng, O. Dror, A. Jaffe, B. Nadler, J. Chang, and Y. Kluger, “A
deep learning approach to unsupervised ensemble learning,” in International conference on machine learning, pp. 30–39, 2016.
[241]
G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. R.
Salakhutdinov, “Improving neural networks by preventing co-adaptation of
feature detectors,” arXiv preprint arXiv:1207.0580, 2012.
[242]
H. Xiong, A. J. Rodríguez-Sánchez, S. Szedmak, and J. Piater,
“Diversity priors for learning early visual features,” Frontiers in
computational neuroscience, vol. 9, p. 104, 2015.
[243]
P. Zhong, Z. Gong, S. Li, and C.-B. Schönlieb, “Learning to diversify
deep belief networks for hyperspectral image classification,” IEEE
Transactions on Geoscience and Remote Sensing, vol. 55, no. 6,
pp. 3516–3530, 2017.
[244]
J. Li, B. Xi, Y. Li, Q. Du, and K. Wang, “Hyperspectral classification based
on texture feature enhancement and deep belief networks,” Remote
Sensing, vol. 10, no. 3, p. 396, 2018.
[245]
K. Tan, F. Wu, Q. Du, P. Du, and Y. Chen, “A parallel gaussian–bernoulli
restricted boltzmann machine for mining area classification with
hyperspectral imagery,” IEEE Journal of Selected Topics in Applied
Earth Observations and Remote Sensing, vol. 12, no. 2, pp. 627–636, 2019.
[246]
S. Li, W. Song, L. Fang, Y. Chen, P. Ghamisi, and J. A. Benediktsson, “Deep
learning for hyperspectral image classification: An overview,” IEEE
Transactions on Geoscience and Remote Sensing, vol. 57, no. 9,
pp. 6690–6709, 2019.
[247]
A. Sellami and I. Farah, “Spectra-spatial graph-based deep restricted
boltzmann networks for hyperspectral image classification,” in 2019
PhotonIcs & Electromagnetics Research Symposium-Spring (PIERS-Spring),
pp. 1055–1062, IEEE, 2019.
[248]
R. J. Williams and D. Zipser, “A learning algorithm for continually running
fully recurrent neural networks,” Neural computation, vol. 1, no. 2,
pp. 270–280, 1989.
[249]
M. E. Paoletti, J. M. Haut, J. Plaza, and A. Plaza, “Scalable recurrent neural
network for hyperspectral image classification,” The Journal of
Supercomputing, pp. 1–17, 2020.
[250]
F. Zhou, R. Hang, Q. Liu, and X. Yuan, “Hyperspectral image classification
using spectral-spatial lstms,” Neurocomputing, vol. 328, pp. 39–47,
[251]
A. Sharma, X. Liu, and X. Yang, “Land cover classification from
multi-temporal, multi-spectral remotely sensed imagery using patch-based
recurrent neural networks,” Neural Networks, vol. 105, pp. 346 – 355,
[252]
H. Wu and S. Prasad, “Semi-supervised deep learning using pseudo labels for
hyperspectral image classification,” IEEE Transactions on Image
Processing, vol. 27, no. 3, pp. 1259–1270, 2017.
[253]
F. Zhou, R. Hang, Q. Liu, and X. Yuan, “Integrating convolutional neural
network and gated recurrent unit for hyperspectral image spectral-spatial
classification,” in Chinese Conference on Pattern Recognition and
Computer Vision (PRCV), pp. 409–420, Springer, 2018.
[254]
H. Luo, “Shorten spatial-spectral rnn with parallel-gru for hyperspectral
image classification,” arXiv preprint arXiv:1810.12563, 2018.
[255]
C. Shi and C.-M. Pun, “Multi-scale hierarchical recurrent neural networks for
hyperspectral image classification,” Neurocomputing, vol. 294,
pp. 82–93, 2018.
[256]
X. Yang, Y. Ye, X. Li, R. Y. Lau, X. Zhang, and X. Huang, “Hyperspectral image
classification with deep learning models,” IEEE Transactions on
Geoscience and Remote Sensing, vol. 56, no. 9, pp. 5408–5423, 2018.
[257]
M. Seydgar, A. Alizadeh Naeini, M. Zhang, W. Li, and M. Satari, “3-d
convolution-recurrent networks for spectral-spatial classification of
hyperspectral images,” Remote Sensing, vol. 11, no. 7, p. 883, 2019.
[258]
C. Shorten and T. M. Khoshgoftaar, “A survey on image data augmentation for
deep learning,” Journal of Big Data, vol. 6, no. 1, p. 60, 2019.
[259]
S. Jia, S. Jiang, Z. Lin, N. Li, M. Xu, and S. Yu, “A survey: Deep learning
for hyperspectral image classification with few labeled samples,” Neurocomputing, vol. 448, pp. 179–204, 2021.
[260]
X. Yu, X. Wu, C. Luo, and P. Ren, “Deep learning in remote sensing scene
classification: a data augmentation enhanced convolutional neural network
framework,” GIScience & Remote Sensing, vol. 54, no. 5, pp. 741–758,
[261]
W. Li, C. Chen, M. Zhang, H. Li, and Q. Du, “Data augmentation for
hyperspectral image classification with deep cnn,” IEEE Geoscience and
Remote Sensing Letters, vol. 16, no. 4, pp. 593–597, 2018.
[262]
X. Cao, J. Yao, Z. Xu, and D. Meng, “Hyperspectral image classification with
convolutional neural network and active learning,” IEEE Transactions on
Geoscience and Remote Sensing, 2020.
[263]
J. F. R. Rochac, N. Zhang, L. Thompson, and T. Oladunni, “A data
augmentation-assisted deep learning model for high dimensional and highly
imbalanced hyperspectral imaging data,” in 2019 9th International
Conference on Information Science and Technology (ICIST), pp. 362–367,
IEEE, 2019.
[264]
J. Nalepa, M. Myller, and M. Kawulok, “Training-and test-time data
augmentation for hyperspectral image segmentation,” IEEE Geoscience and
Remote Sensing Letters, 2019.
[265]
J. Nalepa, M. Myller, and M. Kawulok, “Hyperspectral data augmentation,” arXiv preprint arXiv:1903.05580, 2019.
[266]
J. E. Van Engelen and H. H. Hoos, “A survey on semi-supervised learning,”
Machine Learning, vol. 109, no. 2, pp. 373–440, 2020.
[267]
N. N. Pise and P. Kulkarni, “A survey of semi-supervised learning methods,”
in 2008 International Conference on Computational Intelligence and
Security, vol. 2, pp. 30–34, IEEE, 2008.
[268]
S. S. Sawant and M. Prabukumar, “Semi-supervised techniques based
hyper-spectral image classification: a survey,” in 2017 Innovations in
Power and Advanced Computing Technologies (i-PACT), pp. 1–8, IEEE, 2017.
[269]
B. Fang, Y. Li, H. Zhang, and J. C.-W. Chan, “Semi-supervised deep learning
classification for hyperspectral image based on dual-strategy sample
selection,” Remote Sensing, vol. 10, no. 4, p. 574, 2018.
[270]
S. Zhou, Z. Xue, and P. Du, “Semisupervised stacked autoencoder with
cotraining for hyperspectral image classification,” IEEE Transactions
on Geoscience and Remote Sensing, vol. 57, no. 6, pp. 3813–3826, 2019.
[271]
F. Li, D. A. Clausi, L. Xu, and A. Wong, “St-irgs: A region-based
self-training algorithm applied to hyperspectral image classification and
segmentation,” IEEE Transactions on Geoscience and Remote Sensing,
vol. 56, no. 1, pp. 3–16, 2017.
[272]
M. S. Aydemir and G. Bilgin, “Semisupervised hyperspectral image
classification using small sample sizes,” IEEE Geoscience and Remote
Sensing Letters, vol. 14, no. 5, pp. 621–625, 2017.
[273]
I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair,
A. Courville, and Y. Bengio, “Generative adversarial nets,” in Advances in neural information processing systems, pp. 2672–2680, 2014.
[274]
Y. Zhan, D. Hu, Y. Wang, and X. Yu, “Semisupervised hyperspectral image
classification based on generative adversarial networks,” IEEE
Geoscience and Remote Sensing Letters, vol. 15, no. 2, pp. 212–216, 2017.
[275]
Z. He, H. Liu, Y. Wang, and J. Hu, “Generative adversarial networks-based
semi-supervised learning for hyperspectral image classification,” Remote Sensing, vol. 9, no. 10, p. 1042, 2017.
[276]
L. Zhu, Y. Chen, P. Ghamisi, and J. A. Benediktsson, “Generative adversarial
networks for hyperspectral image classification,” IEEE Transactions on
Geoscience and Remote Sensing, vol. 56, no. 9, pp. 5046–5063, 2018.
[277]
Y. Zhan, K. Wu, W. Liu, J. Qin, Z. Yang, Y. Medjadba, G. Wang, and X. Yu,
“Semi-supervised classification of hyperspectral data based on generative
adversarial networks and neighborhood majority voting,” in IGARSS
2018-2018 IEEE International Geoscience and Remote Sensing Symposium,
pp. 5756–5759, IEEE, 2018.
[278]
J. Feng, H. Yu, L. Wang, X. Cao, X. Zhang, and L. Jiao, “Classification of
hyperspectral images based on multiclass spatial–spectral generative
adversarial networks,” IEEE Transactions on Geoscience and Remote
Sensing, vol. 57, no. 8, pp. 5329–5343, 2019.
[279]
Z. Zhong, J. Li, D. A. Clausi, and A. Wong, “Generative adversarial networks
and conditional random fields for hyperspectral image classification,” IEEE transactions on cybernetics, 2019.
[280]
X. Wang, K. Tan, Q. Du, Y. Chen, and P. Du, “Caps-triplegan: Gan-assisted
capsnet for hyperspectral image classification,” IEEE Transactions on
Geoscience and Remote Sensing, vol. 57, no. 9, pp. 7232–7245, 2019.
[281]
Z. Xue, “Semi-supervised convolutional generative adversarial network for
hyperspectral image classification,” IET Image Processing, vol. 14,
no. 4, pp. 709–719, 2019.
[282]
W.-Y. Wang, H.-C. Li, Y.-J. Deng, L.-Y. Shao, X.-Q. Lu, and Q. Du, “Generative
adversarial capsule network with convlstm for hyperspectral image
classification,” IEEE Geoscience and Remote Sensing Letters, 2020.
[283]
T. Alipour-Fard and H. Arefi, “Structure aware generative adversarial networks
for hyperspectral image classification,” IEEE Journal of Selected
Topics in Applied Earth Observations and Remote Sensing, vol. 13,
pp. 5424–5438, 2020.
[284]
S. K. Roy, J. M. Haut, M. E. Paoletti, S. R. Dubey, and A. Plaza, “Generative
adversarial minority oversampling for spectral-spatial hyperspectral image
classification,” IEEE Transactions on Geoscience and Remote Sensing,
[285]
J. Yang, Y.-Q. Zhao, and J. C.-W. Chan, “Learning and transferring deep joint
spectral–spatial features for hyperspectral classification,” IEEE
Transactions on Geoscience and Remote Sensing, vol. 55, no. 8,
pp. 4729–4742, 2017.
[286]
L. Windrim, A. Melkumyan, R. J. Murphy, A. Chlingaryan, and R. Ramakrishnan,
“Pretraining for hyperspectral convolutional neural network
classification,” IEEE Transactions on Geoscience and Remote Sensing,
vol. 56, no. 5, pp. 2798–2810, 2018.
[287]
X. Liu, Q. Sun, Y. Meng, M. Fu, and S. Bourennane, “Hyperspectral image
classification based on parameter-optimized 3d-cnns combined with transfer
learning and virtual samples,” Remote Sensing, vol. 10, no. 9,
p. 1425, 2018.
[288]
O. Day and T. M. Khoshgoftaar, “A survey on heterogeneous transfer learning,”
Journal of Big Data, vol. 4, no. 1, p. 29, 2017.
[289]
J. Lin, R. Ward, and Z. J. Wang, “Deep transfer learning for hyperspectral
image classification,” in 2018 IEEE 20th International Workshop on
Multimedia Signal Processing (MMSP), pp. 1–5, IEEE, 2018.
[290]
X. Li, L. Zhang, B. Du, L. Zhang, and Q. Shi, “Iterative reweighting
heterogeneous transfer learning framework for supervised remote sensing image
classification,” IEEE Journal of Selected Topics in Applied Earth
Observations and Remote Sensing, vol. 10, no. 5, pp. 2022–2035, 2017.
[291]
Y. Liu and C. Xiao, “Transfer learning for hyperspectral image classification
using convolutional neural network,” in MIPPR 2019: Remote Sensing
Image Processing, Geographic Information Systems, and Other Applications,
vol. 11432, p. 114320E, International Society for Optics and Photonics, 2020.
[292]
J. Lin, C. He, Z. J. Wang, and S. Li, “Structure preserving transfer learning
for unsupervised hyperspectral image classification,” IEEE Geoscience
and Remote Sensing Letters, vol. 14, no. 10, pp. 1656–1660, 2017.
[293]
R. Pires de Lima and K. Marfurt, “Convolutional neural network for
remote-sensing scene classification: Transfer learning analysis,” Remote Sensing, vol. 12, no. 1, p. 86, 2020.
[294]
R. Ganti and A. Gray, “Upal: Unbiased pool based active learning,” in Artificial Intelligence and Statistics, pp. 422–431, 2012.
[295]
P. Melville and R. J. Mooney, “Diverse ensembles for active learning,” in
Proceedings of the twenty-first international conference on Machine
learning, p. 74, 2004.
[296]
C. C. Aggarwal, X. Kong, Q. Gu, J. Han, and S. Y. Philip, “Active learning: A
survey,” in Data Classification: Algorithms and Applications,
pp. 571–605, CRC Press, 2014.
[297]
H. S. Seung, M. Opper, and H. Sompolinsky, “Query by committee,” in Proceedings of the fifth annual workshop on Computational learning theory,
pp. 287–294, 1992.
[298]
B. Settles, “Active learning literature survey,” tech. rep., University of
Wisconsin-Madison Department of Computer Sciences, 2009.
[299]
C. Liu, L. He, Z. Li, and J. Li, “Feature-driven active learning for
hyperspectral image classification,” IEEE Transactions on Geoscience
and Remote Sensing, vol. 56, no. 1, pp. 341–354, 2017.
[300]
Y. Zhang, G. Cao, X. Li, B. Wang, and P. Fu, “Active semi-supervised random
forest for hyperspectral image classification,” Remote Sensing,
vol. 11, no. 24, p. 2974, 2019.
[301]
J. Guo, X. Zhou, J. Li, A. Plaza, and S. Prasad, “Superpixel-based active
learning and online feature importance learning for hyperspectral image
analysis,” IEEE Journal of Selected Topics in Applied Earth
Observations and Remote Sensing, vol. 10, no. 1, pp. 347–359, 2016.
[302]
Z. Xue, S. Zhou, and P. Zhao, “Active learning improved by neighborhoods and
superpixels for hyperspectral image classification,” IEEE Geoscience
and Remote Sensing Letters, vol. 15, no. 3, pp. 469–473, 2018.
[303]
K. Bhardwaj, A. Das, and S. Patra, “Spectral–spatial active learning with
attribute profile for hyperspectral image classification,” in International Conference on Intelligent Computing and Smart Communication
2019, pp. 1219–1229, Springer, 2020.
[304]
S. Patra, K. Bhardwaj, and L. Bruzzone, “A spectral-spatial multicriteria
active learning technique for hyperspectral image classification,” IEEE
Journal of Selected Topics in Applied Earth Observations and Remote Sensing,
vol. 10, no. 12, pp. 5213–5227, 2017.
[305]
Z. Zhang and M. M. Crawford, “A batch-mode regularized multimetric active
learning framework for classification of hyperspectral images,” IEEE
Transactions on Geoscience and Remote Sensing, vol. 55, no. 11,
pp. 6594–6609, 2017.
[306]
X. Xu, J. Li, and S. Li, “Multiview intensity-based active learning for
hyperspectral image classification,” IEEE Transactions on Geoscience
and Remote Sensing, vol. 56, no. 2, pp. 669–680, 2017.
[307]
M. K. Pradhan, S. Minz, and V. K. Shrivastava, “Fisher discriminant ratio
based multiview active learning for the classification of remote sensing
images,” in 2018 4th International Conference on Recent Advances in
Information Technology (RAIT), pp. 1–6, IEEE, 2018.
[308]
Z. Zhang, E. Pasolli, and M. M. Crawford, “An adaptive multiview active
learning approach for spectral–spatial classification of hyperspectral
images,” IEEE Transactions on Geoscience and Remote Sensing, vol. 58,
no. 4, pp. 2557–2570, 2019.
[309]
Y. Li, T. Lu, and S. Li, “Subpixel-pixel-superpixel-based multiview active
learning for hyperspectral images classification,” IEEE Transactions on
Geoscience and Remote Sensing, 2020.
[310]
Y. Sun, J. Li, W. Wang, A. Plaza, and Z. Chen, “Active learning based
autoencoder for hyperspectral imagery classification,” in 2016 IEEE
International Geoscience and Remote Sensing Symposium (IGARSS),
pp. 469–472, IEEE, 2016.
[311]
P. Liu, H. Zhang, and K. B. Eom, “Active deep learning for classification of
hyperspectral images,” IEEE Journal of Selected Topics in Applied Earth
Observations and Remote Sensing, vol. 10, no. 2, pp. 712–724, 2016.
[312]
J. M. Haut, M. E. Paoletti, J. Plaza, J. Li, and A. Plaza, “Active learning
with convolutional neural networks for hyperspectral image classification
using a new bayesian approach,” IEEE Transactions on Geoscience and
Remote Sensing, vol. 56, no. 11, pp. 6440–6461, 2018.
[313]
J. Lin, L. Zhao, S. Li, R. Ward, and Z. J. Wang, “Active-learning-incorporated
deep transfer learning for hyperspectral image classification,” IEEE
Journal of Selected Topics in Applied Earth Observations and Remote Sensing,
vol. 11, no. 11, pp. 4048–4062, 2018.
[314]
C. Deng, Y. Xue, X. Liu, C. Li, and D. Tao, “Active transfer learning network:
A unified deep joint spectral–spatial feature learning model for
hyperspectral image classification,” IEEE Transactions on Geoscience
and Remote Sensing, vol. 57, no. 3, pp. 1741–1754, 2018.
[315]
C. Deng, X. Liu, C. Li, and D. Tao, “Active multi-kernel domain adaptation for
hyperspectral image classification,” Pattern Recognition, vol. 77,
pp. 306–315, 2018.
[316]
M. Ahmad, R. A. Raza, and M. Mazzara, “Multiclass non-randomized
spectral–spatial active learning for hyperspectral image classification,”
Applied Sciences, vol. 10, p. 4739, 07 2020.
[317]
R. O. Green, M. L. Eastwood, C. M. Sarture, T. G. Chrien, M. Aronsson, B. J.
Chippendale, J. A. Faust, B. E. Pavri, C. J. Chovit, M. Solis, et al.,
“Imaging spectroscopy and the airborne visible/infrared imaging spectrometer
(aviris),” Remote sensing of environment, vol. 65, no. 3,
pp. 227–248, 1998.
[318]
X. Huang and L. Zhang, “A comparative study of spatial approaches for urban
mapping using hyperspectral rosis images over pavia city, northern italy,”
International Journal of Remote Sensing, vol. 30, no. 12,
pp. 3205–3221, 2009.
[319]
X. Xu, J. Li, and A. Plaza, “Fusion of hyperspectral and lidar data using
morphological component analysis,” in 2016 IEEE International
Geoscience and Remote Sensing Symposium (IGARSS), pp. 3575–3578, IEEE,
[320]
X. Xu, W. Li, Q. Ran, Q. Du, L. Gao, and B. Zhang, “Multisource remote sensing
data classification based on convolutional neural network,” IEEE
Transactions on Geoscience and Remote Sensing, vol. 56, no. 2, pp. 937–949,
[321]
J. Li, J. M. Bioucas-Dias, and A. Plaza, “Semisupervised hyperspectral image
segmentation using multinomial logistic regression with active learning,”
IEEE Transactions on Geoscience and Remote Sensing, vol. 48, no. 11,
pp. 4085–4098, 2010.
[322]
F. Melgani and L. Bruzzone, “Classification of hyperspectral remote sensing
images with support vector machines,” IEEE Transactions on geoscience
and remote sensing, vol. 42, no. 8, pp. 1778–1790, 2004.
[323]
S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural
computation, vol. 9, no. 8, pp. 1735–1780, 1997.
[324]
K. Cho, B. Van Merriënboer, D. Bahdanau, and Y. Bengio, “On the properties
of neural machine translation: Encoder-decoder approaches,” arXiv
preprint arXiv:1409.1259, 2014.
[325]
K. Makantasis, K. Karantzalos, A. Doulamis, and N. Doulamis, “Deep supervised
learning for hyperspectral data classification through convolutional neural
networks,” in Proc. IGARSS, pp. 4959–4962, IEEE, 2015.
[326]
A. Ben Hamida, A. Benoit, P. Lambert, and C. Ben Amar, “3-d deep learning
approach for remote sensing image classification,” IEEE Transactions on
geoscience and remote sensing, vol. 56, no. 8, pp. 4420–4434, 2018.
[327]
J. Li, J. M. Bioucas-Dias, and A. Plaza, “Hyperspectral image segmentation
using a new bayesian approach with active learning,” IEEE Transactions
on Geoscience and Remote Sensing, vol. 49, no. 10, pp. 3947–3960, 2011.
[328]
J. Haut, M. Paoletti, A. Paz-Gallardo, J. Plaza, A. Plaza, and J. Vigo-Aguiar,
“Cloud implementation of logistic regression for hyperspectral image
classification,” in Proc. 17th Int. Conf. Comput. Math. Methods Sci.
Eng.(CMMSE), vol. 3, pp. 1063–2321, Cádiz, Spain: Costa Ballena (Rota),
[329]
L. Windrim, R. Ramakrishnan, A. Melkumyan, R. J. Murphy, and A. Chlingaryan,
“Unsupervised feature-learning for hyperspectral data with autoencoders,”
Remote Sensing, vol. 11, no. 7, 2019.
[330]
X. Chen, M. Li, and Y. Xiaoquan, “Stacked denoise autoencoder based feature
extraction and classification for hyperspectral images,” Journal of
Sensors, vol. 2016, p. 10, 2016.
Muhammad Ahmad received his MS degree in Electronics Engineering from International Islamic University, Islamabad, Pakistan, a Ph.D. degree in Computer Science and Engineering from Innopolis University, Innopolis, Russia, and another Ph.D. degree in Cyber-Physical Systems from the University of Messina, Messina, Italy.
Muhammad is currently working at the National University of Computer & Emerging Sciences (FAST-NUCES). He has also served as an Assistant Professor, Lecturer, Instructor, Research Fellow, Research Associate, and Research Assistant at a number of international/national universities. He has also worked with Ericsson (Mobilink Project) as a Radio Access Network (RAN) Supervisor. He has authored and co-authored over 70 scientific contributions to international journals, conferences, and books. He is supervising/co-supervising several graduate (MS and Ph.D.) students. He has served as a lead/guest editor for several special issues of journals (SCI/E, JCR). He has delivered a number of invited and keynote talks and reviews technology-leading articles for journals. His research interests include Hyperspectral Imaging, Remote Sensing, Machine Learning, Computer Vision, and Wearable Computing.
Sidrah Shabir received her bachelor’s degree in Computer Engineering from COMSATS University Islamabad and a master’s degree in Computer Engineering from Khwaja Fareed University of Engineering and Information Technology. Currently, she is working as a Lab Engineer at the Department of Computer Engineering, Khwaja Fareed University of Engineering and Information Technology. Her research interests include machine learning, hyperspectral imaging, and hardware accelerator design for machine learning.
Swalpa Kumar Roy (S'15) received the bachelor’s degree in Computer Science and Engineering from West Bengal University of Technology, Kolkata, India, in 2012, the master’s degree from the Indian Institute of Engineering Science and Technology, Shibpur (IIEST Shibpur), Howrah, India, in 2015, and the Ph.D. degree in Computer Science and Engineering from the University of Calcutta, Kolkata, in 2021.
From July 2015 to March 2016, he was a Project Linked Person with the Optical Character Recognition (OCR) Laboratory, Computer Vision and Pattern Recognition Unit, Indian Statistical Institute, Kolkata. He is currently working as an Assistant Professor with the Department of Computer Science and Engineering, Jalpaiguri Government Engineering College, West Bengal, India. Dr. Roy was nominated for the Indian National Academy of Engineering (INAE) engineering teachers mentoring fellowship program by INAE Fellows in 2021 and is also a recipient of the Outstanding Paper Award at the second Hyperspectral Sensing Meets Machine Learning and Pattern Analysis (HyperMLPA) session of the Workshop on Hyperspectral Imaging and Signal Processing: Evolution in Remote Sensing (WHISPERS) in 2021. He has served as a reviewer for the IEEE Transactions on Geoscience and Remote Sensing and IEEE Geoscience and Remote Sensing Letters. His research interests include computer vision, deep learning, and remote sensing.
Danfeng Hong (S'16–M'19–SM'21) received the M.Sc. degree (summa cum laude) in computer vision from the College of Information Engineering, Qingdao University, Qingdao, China, in 2015, and the Dr.-Ing. degree (summa cum laude) from Signal Processing in Earth Observation (SiPEO), Technical University of Munich (TUM), Munich, Germany, in 2019.
From 2015 to 2019, he was a Research Associate at the Remote Sensing Technology Institute (IMF), German Aerospace Center (DLR), Oberpfaffenhofen, Germany. Since 2019, he has been a Research Scientist leading the Spectral Vision Working Group at IMF, DLR. Since 2020, he has also been an Adjunct Scientist at GIPSA-lab, Grenoble INP, CNRS, Univ. Grenoble Alpes, Grenoble, France. He is currently with the Key Laboratory of Digital Earth Science, Aerospace Information Research Institute (AIR), Chinese Academy of Sciences (CAS). His research interests include signal/image processing and analysis, hyperspectral remote sensing, machine/deep learning, artificial intelligence, and their applications in Earth Vision.
Dr. Hong is an Editorial Board Member of Remote Sensing and a Topical Associate Editor of the IEEE Transactions on Geoscience and Remote Sensing (TGRS). He was a recipient of the Best Reviewer Award of the IEEE TGRS in 2021 and the Jose Bioucas Dias award recognizing the outstanding paper at the Workshop on Hyperspectral Imaging and Signal Processing: Evolution in Remote Sensing (WHISPERS) in 2021. He is also a Leading Guest Editor of the International Journal of Applied Earth Observation and Geoinformation and the IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing.
Xin Wu (S'19–M'20) received the M.Sc. degree in Computer Science and Technology from the College of Information Engineering, Qingdao University, Qingdao, China, in 2014, and the Ph.D. degree from the School of Information and Electronics, Beijing Institute of Technology (BIT), Beijing, China, in 2020.
In 2018, she was a visiting student at the Photogrammetry and Image Analysis department of the Remote Sensing Technology Institute (IMF), German Aerospace Center (DLR), Oberpfaffenhofen, Germany. She is currently a Postdoctoral Researcher in the School of Information and Electronics, BIT, Beijing, China. Her research interests include signal/image processing, fractional Fourier transform, deep learning and their applications in biometrics and geospatial object detection.
She was a recipient of the Jose Bioucas Dias award recognizing the outstanding paper at the Workshop on Hyperspectral Imaging and Signal Processing: Evolution in Remote Sensing (WHISPERS) in 2021.
Jing Yao received the B.Sc. degree from Northwest University, Xi’an, China, in 2014, and the Ph.D. degree from the School of Mathematics and Statistics, Xi’an Jiaotong University, Xi’an, China, in 2021.
He is currently an Assistant Professor with the Key Laboratory of Digital Earth Science, Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing, China. From 2019 to 2020, he was a visiting student at Signal Processing in Earth Observation (SiPEO), Technical University of Munich (TUM), Munich, Germany, and at the Remote Sensing Technology Institute (IMF), German Aerospace Center (DLR), Oberpfaffenhofen, Germany.
His research interests include low-rank modeling, hyperspectral image analysis and deep learning-based image processing methods.
Adil Mehmood Khan received his B.S. degree in Information Technology from National University of Sciences and Technology (NUST), Pakistan in 2005. He completed his M.Sc. and Ph.D. degrees in Computer Engineering from Kyung Hee University, South Korea in 2011. He is currently a Professor at the Institute of Artificial Intelligence and Data Science, Innopolis University, Russia. His research interests are machine learning and deep learning.
Manuel Mazzara is a professor of Computer Science at Innopolis University (Russia) with a research background in software engineering, service-oriented architectures and programming, concurrency theory, formal methods, software verification, and Artificial Intelligence.
Manuel received a PhD in computing science from the University of Bologna and has cooperated with European and US industry, as well as governmental and inter-governmental organizations such as the United Nations, always working at the boundary between science and software production.
The work conducted by Manuel and his team in recent years focuses on the development of theories, methods, tools and programs covering the two major aspects of Software Engineering and Artificial Intelligence: the process side, describing how we develop software, and the product side, describing the results of this process.
Salvatore Distefano is an Associate Professor at the University of Messina (Italy). He has authored and co-authored more than 250 scientific papers and contributions to international journals, conferences, and books. He has been a visiting scholar and professor at different universities and research centers, such as UMass Dartmouth, UCLA, Duke, Innopolis, and Kazan Federal University, collaborating with top scientists.
He took part in several national and international projects, such as Reservoir, Vision (EU FP7), SMSCOM (EU FP7 ERC Advanced Grant), Beacon, IoT-Open.EU (EU H2020). He is a member of international conference committees and he is on the editorial boards of IEEE Transactions on Dependable and Secure Computing, Journal of Cloud Computing, International Journal of Big Data. His main research interests include non-Markovian modeling; Quality of Service/Experience; Parallel and Distributed Computing, Grid, Cloud, Autonomic, Volunteer, Crowd, Edge, Fog Computing; Internet of Things; Cyber-Physical Social Systems; Smart Cities; Intelligent Transportation Systems; Big Data, Stream Processing; Software-Defined and virtualized ecosystems; Hyper Spectral Imaging; Machine Learning. During his research activity, he contributed to the development of several tools such as WebSPN, ArgoPerformance, GS3 and Stack4Things. He is also one of the co-founders of the SmartMe.io start-up, a spin-off of the University of Messina established in 2017.
Jocelyn Chanussot (M'04–SM'04–F'12) received the M.Sc. degree in electrical engineering from the Grenoble Institute of Technology (Grenoble INP), Grenoble, France, in 1995, and the Ph.D. degree from the Université de Savoie, Annecy, France, in 1998. Since 1999, he has been with Grenoble INP, where he is currently a Professor of signal and image processing. His research interests include image analysis, hyperspectral remote sensing, data fusion, machine learning, and artificial intelligence. He has been a visiting scholar at Stanford University (USA), KTH (Sweden), and NUS (Singapore). Since 2013, he has been an Adjunct Professor of the University of Iceland. From 2015 to 2017, he was a visiting professor at the University of California, Los Angeles (UCLA). He holds the AXA Chair in remote sensing and is an Adjunct Professor at the Chinese Academy of Sciences, Aerospace Information Research Institute, Beijing.
Dr. Chanussot is the founding President of the IEEE Geoscience and Remote Sensing French chapter (2007-2010), which received the 2010 IEEE GRS-S Chapter Excellence Award. He has received multiple outstanding paper awards. He was the Vice-President of the IEEE Geoscience and Remote Sensing Society, in charge of meetings and symposia (2017-2019). He was the General Chair of the first IEEE GRSS Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS). He was the Chair (2009-2011) and Co-Chair (2005-2008) of the GRS Data Fusion Technical Committee. He was a member of the Machine Learning for Signal Processing Technical Committee of the IEEE Signal Processing Society (2006-2008) and the Program Chair of the IEEE International Workshop on Machine Learning for Signal Processing (2009). He is an Associate Editor for the IEEE Transactions on Geoscience and Remote Sensing, the IEEE Transactions on Image Processing, and the Proceedings of the IEEE. He was the Editor-in-Chief of the IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing (2011-2015). In 2014, he served as a Guest Editor for the IEEE Signal Processing Magazine. He is a Fellow of the IEEE, a member of the Institut Universitaire de France (2012-2017), and a Highly Cited Researcher (Clarivate Analytics/Thomson Reuters).
# Effect of Gameplay Uncertainty, Display Type, and Age on Virtual Reality
Exergames
Wenge Xu, Xi’an Jiaotong-Liverpool University, Suzhou, Jiangsu, China, <EMAIL_ADDRESS>; Hai-Ning Liang, Xi’an Jiaotong-Liverpool University, Suzhou, Jiangsu, China, <EMAIL_ADDRESS>; Kangyou Yu, Xi’an Jiaotong-Liverpool University, Suzhou, Jiangsu, China, <EMAIL_ADDRESS>; and Nilufar Baghaei, Massey University, Auckland, New Zealand, <EMAIL_ADDRESS>
(2021)
###### Abstract.
Uncertainty is widely acknowledged as an engaging gameplay element but rarely
used in exergames. In this research, we explore the role of uncertainty in
exergames and introduce three uncertain elements (false-attacks, misses, and
critical hits) to an exergame. We conducted a study under two conditions
(uncertain and certain), with two display types (virtual reality and large
display) and across young and middle-aged adults to measure their effect on
game performance, experience, and exertion. Results show that (1) our designed
uncertain elements are instrumental in increasing exertion levels; (2) when
playing a motion-based first-person perspective exergame, virtual reality can
improve performance, while maintaining the same motion sickness level as a
large display; and (3) exergames for middle-aged adults should be designed
with age-related declines in mind, similar to designing for elderly adults. We
also framed two design guidelines for exergames that have similar features to
the game used in this research.
exergame, uncertainty, virtual reality, young adults, middle-aged adults
Journal year: 2021. Copyright: ACM. Conference: CHI Conference on Human Factors in Computing Systems (CHI ’21), May 8–13, 2021, Yokohama, Japan. Price: 15.00. DOI: 10.1145/3411764.3445801. ISBN: 978-1-4503-8096-6/21/05. CCS Concepts: Software and its engineering → Interactive games; Human-centered computing → Virtual reality; Applied computing → Computer games.
## 1\. Introduction
Motion-based exergames, a combination of “motion-based exercise” and “gaming”, are a promising approach to encourage regular exercise, especially for unmotivated or inactive target groups (Bogost, 2005; Sinclair et al., 2007). Previous literature has shown the benefits of playing motion-based exergames,
which include but are not limited to enhanced postural stability (Sheehan and
Katz, 2013), muscle strength (Soares et al., 2016), and working memory
(Eggenberger et al., 2015). Because of the potential of these exergames in
eliciting health benefits, much work has been conducted with different age
groups (including children (Hernandez et al., 2013), young individuals (Xu et
al., 2020d), and older adults (Gerling et al., 2012)).
Age-related declines are common in older adults (i.e., aged 65 and above) and
middle-aged adults (i.e., aged 45 to 65), as previous studies show that reductions (e.g., in cognitive abilities) could start even before the age of 50 (Ferreira et al., 2015; Verhaeghen and Salthouse, 1997). These age-related declines affect older adults’ game performance and experience and could also affect middle-aged adults in a similar way. Although there have been some attempts to understand whether middle-aged adults could obtain the same health benefits from playing videogames as older adults (Rosney and Horvath, 2018; Xu et al., 2020a), there is very limited research exploring the performance and experience of middle-aged adults.
Designing an enjoyable and effective exergame is challenging. Studies
(Berkovsky et al., 2010; Ioannou et al., 2019; Barathi et al., 2018) have been
conducted to improve the motivation and experience of these games. For
instance, Ioannou et al. (Ioannou et al., 2019) proposed a virtual performance
augmentation method for exergames and found that it increased players’
immersion and motivation. Barathi et al. (Barathi et al., 2018) applied an interactive feedforward method to an exergame and found that it improved
players’ performance.
One factor that has been widely applied in games is uncertainty, which has
long been recognized as a key ingredient of engaging gameplay (Costikyan,
2013; Power et al., 2019; Caillois, 2001; Johnson, 2018). Costikyan
(Costikyan, 2013) argues that games require uncertainty to hold players’
interest and that the struggle to master uncertainty is central to games’
appeal. Most importantly, he suggested that game designers can harness
uncertainty to frame gameplay’s design. Several game designers and researchers
have tried to identify uncertainty sources that can lead to a good gameplay
experience (Malone, 1982; Juul, 2011; Tekinbas and Zimmerman, 2003; DeKoven,
2002). Drawing on many of these sources and practical experience, Costikyan
(Costikyan, 2013) listed an influential categorization of eleven sources of
uncertainty found or can be used in games. Recently, Kumari et al. (Kumari et
al., 2019) presented a grounded theory of uncertainty sources which can
partially map onto existing taxonomies, especially Costikyan’s (Costikyan,
2013), providing converging evidence of the validity of Costikyan’s
categorization of uncertainty sources. Although uncertainty is recognized as a
core component of the gaming experience, there is relatively little research
that has looked specifically into the effect of uncertainty in games,
especially exergames. Based on uncertainty sources identified in (Costikyan,
2013), in this research we propose three uncertain elements for exergames that cover four sources of uncertainty, and evaluate their effect on performance, game experience (and sickness when implemented in virtual reality), and exertion levels.
Given the recent emergence of affordable virtual reality (VR) head-mounted
displays (HMDs), VR exergames have been gaining rapid attention (Barathi et
al., 2018; Ioannou et al., 2019; Xu et al., 2020c). For instance, VR exergames
are useful in promoting physical activity in sedentary and obese children
(Rizzo et al., 2011), especially to increase their motivation to exercise
(Plante et al., 2003; Mestre et al., 2011). Existing literature has outlined
that there are additional benefits of playing motion-based exergames in VR compared to non-VR displays. In VR, players could achieve higher exertion and experience a game more positively in areas like challenge, flow, and immersion, with a lower negative affect (Xu et al., 2020d). However, a major drawback is that VR might
lead to a higher level of simulator sickness, which must be taken into account
during the design process to mitigate its effects.
The aim of our research is to explore the effect of uncertain versus certain
elements and VR versus a typical TV large display (LD) on two main player
groups of exergames regarding their game performance, experience, and
exertion. In this paper, we first introduce GestureFit, the game we developed
for this research. We describe the rules and logic behind it, the game
procedure, and risk control for middle-aged adults. We then present the study
we conducted to investigate the effect of display type and game condition,
focusing on differences between young adults and middle-aged adults. We then
report the results and present a discussion of our findings that are framed
based on existing literature. Two main design guidelines derived from the
results are then proposed, followed by the conclusions.
The contributions of the paper include: (1) an empirical evaluation of the
effects of display type and game condition on exergame performance,
experience, and exertion between young and middle-aged adults; (2) a set of
uncertain elements that can help increase the exertion level for motion-based
exergames; and (3) two recommendations that can help frame the design of
motion-based exergames with uncertain gameplay elements and help motivate middle-aged and older adults to engage with exergames more meaningfully.
## 2\. Related Work
### 2.1. VR and Non-VR Motion-based Exergames
Many motion-based exergames have been developed for non-VR displays since the
introduction of Kinect. A typical motion-based exergame requires players to
move their body or perform certain gestures to interact with the game world.
For instance, in GrabApple (Gao and Mandryk, 2012), users need to jump or duck
to pick up apples; they also need to move around to locate them while avoiding other objects, like bombs. In a game reported in Gerling et al.
(Gerling et al., 2012), users need to perform static and dynamic gestures to
grow plants and flowers and catch birds. In Sternataler (Smeddinck et al.,
2013), players use their hands to collect stars that appear sequentially in
some predefined paths.
Recent advances and the growing popularity of VR HMDs have created a
substantial demand for motion-based exergames. For instance, games like
Virtual Sports (https://www.vrgamerankings.com/virtual-sports) for the HTC VIVE allow a user to play sports with his/her full body in fully immersive virtual environments. In another commercial game, FitXR (https://fitxr.com/),
the users need to jab, weave, and do uppercuts following rhythmic music. In
the research exergame KIMove (Xu et al., 2019), the players need to move their
hands to hit fruits floating in midair and use their feet to step on cubes
moving towards them on the ground. In GestureStar (Xu et al., 2020d), users
need to perform 6 different gestures to eliminate objects, like cubes,
flying towards them.
Previous research has reported inconsistent findings when looking at the
effect of display type on gameplay experience and performance. Xu et al. (Xu
et al., 2020d) suggested that players achieved a higher exertion and
experienced a game more positively in VR than LD. However, they also found
that VR could lead to a higher level of simulator sickness. Results from (Xu
et al., 2019) suggested that there was no effect of display type on gameplay
performance and experience. Therefore, we have included this factor in our
experiment to investigate it further and provide more insights.
### 2.2. User Experience in Exergames
Exergames integrate physical activity to engage players (Mueller et al.,
2011). Because findings from other types of games may not be applicable to
exergames (Monteiro et al., 2018; Xu et al., 2020d), efforts have been focused
on studying user experience in exergames. For instance, it is reported in (Xu
et al., 2019) that task mode (single- and multi-tasking) could affect users’
exergame experience; in particular, multi-tasking could not only make the game
more challenging and cause a higher sickness, but also lead to worse
performance than single-tasking. Koulouris et al. (Koulouris et al., 2020)
investigated the effect of customization and identification in a VR exergame,
and found that customization significantly increased identification, intrinsic
motivation, and performance in the exergame. Further, playing pose (i.e.,
standing and seated) and performance augmentation (i.e., enabling players with
superhuman capabilities in the virtual game) could also affect the gameplay
experience (e.g., sickness) (Ioannou et al., 2019; Xu et al., 2020b). On the
other hand, although uncertainty is a crucial element in gameplay, it is
underexplored in exergames. It is for this reason that we are interested in
studying the effect of uncertainty in exergames for both immersive VR and
large displays.
### 2.3. Design Elements of Exergames
Several design guidelines have been proposed by researchers in HCI and sport
sciences for designing more attractive and effective full-body motion-based
exergames (Marshall et al., 2016; Márquez Segura et al., 2013; Hardy et al.,
2015). According to these, to design a playful exergame experience, designers
should focus on (1) the player’s body (movement concept), (2) the mediating
controller technology (transferring movement input into the virtual world and
providing feedback), and (3) the game scenario (audio-visual and narrative
design and feedback) (Martin-Niedecken et al., 2019).
#### 2.3.1. The Player’s Body
After criticizing existing exertion games and commercial exergames, Marshall
et al. (Marshall et al., 2016) proposed three design strategies based on the
idea of movement, which are (1) the design of exertion trajectories (e.g., to
create a trajectory across individual play sessions for skill-learning that
takes into account players’ cognitive load and the exertion patterns), (2)
design for, with, and around pain (e.g., celebrating positive pain), and (3)
design leveraging the social nature of exertion (e.g., players to be
surrounded by other players like friends and family members or game
enthusiasts).
#### 2.3.2. The Mediating Controller Technology
Studies have suggested that the participation of the body is a crucial
variable not only in the efficacy of exergames in affecting users’ emotional
experience (Vara et al., 2016), but also in improving user experience, energy
expenditure, and intention to repeat the experience (Kim et al., 2014). To
achieve these positive gaming experiences, body-centered controllers should be
designed to serve as an additional physical playground, so that they can be
easily integrated into players’ body scheme (Pasch et al., 2009) and provide a
balance of guided and free movements (Martin-Niedecken et al., 2019).
#### 2.3.3. The Game Scenario
An exergame should reflect its target group’s specific preferences for game mechanics, levels, visuals, audio, and narrative. This requirement makes it essential to involve the target group in the design process from the start
(Martin-Niedecken and Mekler, 2018; Martin-Niedecken, 2018). The literature
offers suggestions for key elements of game scenarios. For instance, games
should include an immediate celebration of movement articulation by providing
direct and constrained amounts of feedback (Mueller and Isbister, 2014). Also,
games should involve achievable short-term challenges to foster long-term
motivation and help players identify rhythm in their movements, for example,
by setting movements that are mapped to specific sounds and visualizing
previous and upcoming movements (Mueller and Isbister, 2014; Mueller et al.,
2016). It is also important to provide a challenge that matches individual
skill levels, for instance, balancing the challenge level by monitoring the
player’s heart rate (Mueller et al., 2012).
### 2.4. Uncertainty in Games
Caillois (Caillois, 2001) says that the outcome of a game should be uncertain
for it to be enjoyable. Similarly, Costikyan (Costikyan, 2013) argues about
the importance of uncertainty in the overall game experience and has developed
an influential categorization of 11 sources of uncertainty within games.
Typical uncertainty sources are (1) Performative uncertainty: uncertainty of
physical performance (e.g., hand-eye coordination); (2) Solver’s uncertainty:
weighting a group of options against potential outcomes; (3) Player
unpredictability: not knowing how the opponents/teammates will act; (4)
Randomness: uncertainty emanating from random game elements. Recently, Kumari
et al. (Kumari et al., 2019) developed an empirically-based grounded taxonomy
of seven sources of uncertainty across the input-output loop that involves the
game, the player, and their interaction in an outcome. This taxonomy partially
maps onto existing taxonomies, especially the one proposed by Costikyan
(Costikyan, 2013). This, in turn, provides further evidence of its validity.
Hence, in this research, we used Costikyan’s sources of uncertainty to guide
the design of the uncertainty elements in our exergame.
To explore the effects of uncertainty in exergames, we applied three uncertain
elements in an exergame we developed: (1) False-Attacks: this concept is
originally from sports (e.g., basketball) and has been applied widely in
sports videogames (e.g., NBA 2K series). (2) Misses: this concept has been
widely used in games (e.g., Dungeon & Fighter) where an attack hits the
opponent but is counted as a miss by the system. (3) Critical Hits: this
concept has also been widely used in games (e.g., Dungeon & Fighter). When a
critical hit happens, the player issuing the hit causes more damages to the
opponent that a normal successful blow.
### 2.5. Game Experience for Different Age Groups
Users from different age groups often perceive gameplay elements
differently—for instance, what is motivating for one group may not be so for
another. Motivations can change with age: fantasy is a powerful motivational
factor in younger children (Greenberg et al., 1999), whereas competition and
challenge-related motives are stronger in older children and adolescents
(Sherry et al., 2006). Young adults are more motivated by rewarding
experiences, while older adults are more inspired by perceived benefits to
their health (Subramanian et al., 2019). Young adults tend to prefer visually
appealing graphics and music that fit the theme and nature of the game, but
older adults pay more attention to the feedback that helps them complete a
game (Subramanian et al., 2019). Furthermore, there is an increased
appreciation for the enjoyment that a game brings, greater satisfaction for
autonomy, and decreased competence as users age, especially after a certain
threshold (Birk et al., 2017). In other words, young adults prefer exergames
that allow them to challenge themselves physically and cognitively, but older
adults preferred exergames that are fun to play and are beneficial to their
health (Subramanian et al., 2019).
Gajadhar et al. (Gajadhar et al., 2008) investigated the social elements of
gameplay for young adults. They found that gameplay is most enjoyable when
gamers are co-located, less satisfying in mediated co-play, and the least
enjoyable in virtual co-play. However, these three social contexts (virtual,
mediated, and co-located co-play) do not positively influence older users like
younger adults (Blocker et al., 2014; Gajadhar et al., 2010). Gerling et al.
(Gerling et al., 2013) explored the effect of sedentary and motion-based
control tasks in games (such as pointing and tracking) for older adults and
younger adults, and found that older adults performed worse than young adults.
There is a large body of work on the experience of children (Andries and
Robertson, 2019; Duh et al., 2010; Eriksson et al., 2019) and young adults (Xu
et al., 2020b; Xu et al., 2019; Xu et al., 2020d), and older adults (De
Schutter, 2011; De Schutter and Vanden Abeele, 2010; Gerling et al., 2012)
with videogames. However, there is only limited attention given to middle-aged
players. Previous research suggested age-related declines could start when
people are in their mid-age; for instance, age-related memory impairment and
executive dysfunction can be found in people before they reach 50 (Ferreira et
al., 2015; Verhaeghen and Salthouse, 1997). Middle-aged adults suffer from
several age-related declines, including but not limited to lower working
memory (Meguro et al., 2000), grip strength (Kozakai et al., 2016), and muscle
mass (Brown and Hasser, 1996). Given this above research, our work involves
two groups, young adults (18-30) and middle-aged adults (45-65), to explore
the effect of age on exergames.
Table 1. Features and requirements for each move by the player (a) and the monster (b).

| Name | Description of the move |
|---|---|
| Kick (a) | An attack move that inflicts 10 HP damage to the opponent in the kicking direction and requires a 3-second cooldown. |
| Punch (a, b) | An attack move that inflicts 10 HP damage to the opponent in the punching direction and requires a 3-second cooldown. |
| Zoom+Kick (a) | A ranged attack move that inflicts 30 HP damage to the opponent within the attack range (1 m) and requires a 5-second cooldown. |
| Squat (b) | A ranged attack move that deals 30 HP damage and requires a 5-second cooldown. |
| Zoom+Squat (a) | A defense move that releases a sphere to protect the user for 2 seconds and heals 20 HP if it successfully defends the player from the monster’s attack. This move requires a 3-second cooldown. |
## 3\. GestureFit: A Gesture-based Game
The game was implemented in Unity3D with the Oculus Integration plugin (https://assetstore.unity.com/packages/tools/integration/oculus-integration-82022) and the Kinect v2 Unity plugin (https://assetstore.unity.com/packages/3d/characters/kinect-v2-examples-with-ms-sdk-and-nuitrack-sdk-18708).
### 3.1. Rules and Logic
The design of our game was inspired by Nintendo Ring Fit Adventure (https://www.nintendo.com/games/detail/ring-fit-adventure-switch/).
The goal of the game is for the player to stay alive and defeat a monster
three times. To do this, the player needs to perform gestures to make attacks
against the monster and defend themselves from being attacked by it. The
player begins with 100 health points (HP) while the monster has 500 HP. The
monster or player dies when their HP reaches 0. Both the monster and the
player have 3 lives. The monster can move leftward or rightward within a 2-meter range from its game starting position. Players’ lateral movement is limited so that they are always within the operational tracking range (Ioannou et al., 2019; Xu et al., 2020d). The game is designed to take this into account so that the gameplay experience is not affected. Both visual and audio feedback are provided to give a fuller range of sensory experience to players.
#### 3.1.1. Selected Gestures and Corresponding Attack/Defense Moves
There are three attack moves and one defense move. All moves can be released
by performing their corresponding gestures. These four moves are (i) Kick:
kicking using any leg, (ii) Punch: single hand punching, (iii) Zoom+Kick:
kicking using any leg and leaning arms forward and stretching them out, and
(iv) Zoom+Squat: performing a squat and leaning arms forward and stretching
them out. The selected gestures were chosen based on design recommendations
from previous studies on young adults (Xu et al., 2020d) and older adults
(Gerling et al., 2012). Table 1 lists pre-defined features and their
requirements.
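To make the parameters in Table 1 concrete, the sketch below encodes the moves as data with a shared cooldown check. This is a minimal illustration under our own naming and structure assumptions, not the authors’ Unity (C#) implementation.

```python
from dataclasses import dataclass

@dataclass
class Move:
    name: str
    damage: int        # HP damage on a successful hit (0 for the defense move)
    cooldown: float    # seconds before the move can be used again
    last_used: float = float("-inf")

    def ready(self, now: float) -> bool:
        """A move is available once its cooldown has elapsed."""
        return now - self.last_used >= self.cooldown

    def trigger(self, now: float) -> None:
        """Record the use of the move so the cooldown restarts."""
        self.last_used = now

# Player moves from Table 1. Zoom+Squat is the defense move: it deals no
# damage but shields for 2 s and heals 20 HP on a successful block.
PLAYER_MOVES = {
    "Kick":       Move("Kick", damage=10, cooldown=3),
    "Punch":      Move("Punch", damage=10, cooldown=3),
    "Zoom+Kick":  Move("Zoom+Kick", damage=30, cooldown=5),
    "Zoom+Squat": Move("Zoom+Squat", damage=0, cooldown=3),
}
```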
#### 3.1.2. The Use of Uncertainty
The uncertain condition includes three uncertain elements, which covers four
uncertainty sources (Costikyan, 2013):
* •
`False-Attacks`: There is a 20% chance that the monster would perform a false-
attack (which lasts around 0.8 seconds) when the system triggers an attack-
related animation to trick the player into performing the defense move. False-
attacks cover the following uncertainty sources: a) Performative uncertainty:
our game challenges eye-body coordination (i.e., would the players be able to
cancel their defense move when they realize the monster is performing a false-
attack?), b) Solver’s uncertainty: weighing whether or not to perform a defense move against the potential outcomes (i.e., wasting a defense move on a false-attack or successfully defending against an actual
attack), and c) Player unpredictability: this is about the uncertainty of the
opponent’s movements (e.g., whether it is a false or real attack).
* •
`Misses`: There is a 10% chance that the player’s or monster’s attack would be
regarded as a miss even if it hits the opponent. Randomness: misses act as a
random element in the game.
* •
`Critical Hits`: There is a 10% chance that the player’s or monster’s attack
could be a critical hit, which would deal 50% more damage than a normal attack
move. Randomness: critical hits act as another random element in the game.
The only difference between the certain and uncertain conditions is that the former does not include the above three uncertain elements.
#### 3.1.3. Monster Attack Design
In both conditions, the monster performs an action every 2 seconds. In the certain condition, if any attack skill is available, there is an 80% chance that the action is an attack (100% for the only available skill, or 50% for each skill when both are available); otherwise, it is a walk. The uncertain condition follows the same attack mechanism; the only difference is that if an attack skill is available, there is an 80% chance that the action is attack-related (i.e., 8/10 a real attack, 2/10 a false attack).
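A minimal sketch of this decision logic, together with the Misses and Critical Hits elements from Section 3.1.2, is given below. The function and variable names are our illustrative assumptions, not the authors’ Unity code; it assumes independent uniform random draws per action and per landed hit.

```python
import random

def choose_monster_action(available_skills, uncertain_condition):
    """Pick the monster's next action (evaluated every 2 seconds)."""
    if not available_skills:           # all attack skills on cooldown
        return ("walk", None)
    if random.random() < 0.8:          # 80%: the action is attack-related
        skill = random.choice(available_skills)  # 100% or 50% per skill
        if uncertain_condition and random.random() < 0.2:
            return ("false_attack", skill)       # 2/10 of attack-related actions
        return ("attack", skill)
    return ("walk", None)

def resolve_hit(base_damage, uncertain_condition):
    """Apply the Misses and Critical Hits elements to an attack that landed."""
    if uncertain_condition:
        if random.random() < 0.10:     # 10%: the landed hit counts as a miss
            return 0
        if random.random() < 0.10:     # 10%: critical hit, +50% damage
            return int(base_damage * 1.5)
    return base_damage
```

In the certain condition both functions reduce to the deterministic baseline, so the two conditions can share a single code path.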
### 3.2. Game Procedure
The game starts with a training (warm-up) phase (see Figure 1a-b), where the
player needs to use attack and defense moves. The order of the moves required
for the player to perform is Kick, Punch, Zoom+Kick, Zoom+Squat. For attack
moves, the player needs to perform the corresponding gesture, and its attack
must damage the monster twice before proceeding to the next move. For the
defense move, the player must successfully defend themselves from the
monster’s attacks twice to finish the training. The player needs to perform a
Zoom gesture between each move training to switch to the next move training.
After the training phase, the player needs to perform another Zoom gesture to
start the gameplay phase.
Figure 1. Screenshots of GestureFit: (a) LD training phase, (b) VR training
phase, (c) LD gameplay phase, and (d) VR gameplay phase. All variables are the
same in all versions except in VR the player information is slightly tilted.
During the gameplay phase (see Figure 1c-d), players need to perform the
gestures to attack and defend themselves. If the player has no HP, they need to perform Zoom+Squat five times to regain a life and perform Zoom once to confirm they are ready to return. If the monster has no HP, the game plays an animation of the monster falling to the ground and being destroyed. After a 5-second wait, the monster uses its second or third life and the game restarts. The game ends when the monster or the player has no lives and no HP left.
### 3.3. Risk Control for Middle-aged Adults
We controlled the risk, if any, to a minimal level. As pointed out in (Martin-
Niedecken, 2018; Martin-Niedecken and Mekler, 2018), having users involved in
the development process is useful. As such, for our game prototype, we had two
middle-aged adults frequently involved during the development process to test
the gestures’ suitability, tune parameters (e.g., cooldown time, shield
protection’s duration) and ensure accurate and meaningful execution of
movements. The selected gestures worked quite well since all middle-aged
participants had no issues performing them during the experimental gaming
sessions (as our results would show; more on this later).
In addition, we minimized any risks by (1) making a first-person viewing
perspective game so that players can see their motions, (2) limiting the
number of monster’s attack skills and having gaps in its attacks, (3)
restricting players’ position, (4) allowing them 5 sec rests after they took a
monster’s life, (5) allowing them to rest as much as they want after they lost
one life, and (6) displaying information (user’s skills, player’s HP, and
monster’s HP) in front of the users without the need for additional head
movement.
## 4\. Experiment
### 4.1. Experiment Design and Outcome Measures
The experiment followed a 2 × 2 within-subjects design with two within-
subjects factors: (1) Display Type (DT: VR and LD) and (2) Game Condition (GC:
certain and uncertain). The order of DT × GC was counterbalanced across
participants.
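One common scheme for counterbalancing four conditions is a balanced Latin square; a minimal sketch is shown below, with illustrative (assumed) condition labels.

```python
CONDITIONS = ["VR_certain", "VR_uncertain", "LD_certain", "LD_uncertain"]

def balanced_latin_square(conditions):
    """Orderings in which each condition appears in each serial position
    equally often; participant i follows row i % len(conditions)."""
    n = len(conditions)
    # standard first-row index pattern: 0, 1, n-1, 2, n-2, ...
    seq, lo, hi = [0, 1], 2, n - 1
    while hi >= lo:
        seq.append(hi); hi -= 1
        if lo <= hi:
            seq.append(lo); lo += 1
    return [[conditions[(r + s) % n] for s in seq] for r in range(n)]
```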
To determine participants’ task performance, we collected (1) the completion
time on each of the monster’s three lives; (2) the success rate of each move;
and (3) the total number of each type of gesture performed.
Participants’ experience was measured with Game Experience Questionnaire (GEQ)
(IJsselsteijn et al., 2008) and Simulator Sickness Questionnaire (SSQ)
(Kennedy et al., 1993). We used the 33-item core module of the GEQ to measure
game experience, which consists of seven components: competence, immersion,
flow, tension, challenge, negative affect, and positive affect. Simulator
sickness was assessed using the 16-item SSQ, which produces 3 measures of
cybersickness (nausea, oculomotor, and disorientation).
Exertion was evaluated by (1) the average heart rate (avgHR%), expressed as a
percentage of a participant’s estimated maximum heart rate
($211-0.64\times\mathrm{age}$) (Nes et al., 2013), (2) calories burned, and (3)
the Borg RPE 6-20 scale (Borg, 1982).
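As a worked example of the first measure, the sketch below computes avgHR% from the age-predicted maximum heart rate of Nes et al. (2013); the function name is our own.

```python
def avg_hr_percent(avg_hr_bpm: float, age_years: float) -> float:
    """avgHR% = average HR / (211 - 0.64 * age), as a percentage."""
    hr_max = 211.0 - 0.64 * age_years
    return 100.0 * avg_hr_bpm / hr_max

# e.g., a 48-year-old averaging 130 bpm: hr_max = 180.28, avgHR% ≈ 72.1
```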
We measured the acceptability of the uncertain elements used in our game with
three questions: “I like the design of the false-attacks”, “I like the design
of attacks that could be missed by chance”, and “I like the design of attacks
that could be a critical hit by chance”. The questions used a 1-7 Likert
scale, with 1 indicating “extremely disagree” and 7 indicating “extremely
agree”.
After completing the above questionnaires, participants took part in a semi-
structured interview with the following open-ended questions: “Overall,
what did you think about the game?”, “What did you like about the game?”,
“What did you not like about the game?”, “Was there anything more difficult
than you expected in the game?”, and “Was there anything more confusing than
you expected in the game?” (Drachen et al., 2018). Answers were recorded,
transcribed, and later analyzed by two of the researchers following an
informal, simplified inductive open coding approach (Sanders and Stappers,
2013). Themes were derived by the two researchers independently and agreed
upon in a post-coding meeting with a third researcher. Details of the themes
can be found in the feedback section (Section 4.5.5). There was no limit on
the length of participants’ responses.
### 4.2. Apparatus and Setup
We used an Oculus Rift CV1 as our VR HMD and a 50-inch 4K TV as our LD. Both
devices were connected to an HP Z workstation with an i7 CPU, 16GB RAM, and a
Nvidia Quadro P5200 GPU. Players’ gestures were detected via a Microsoft
Kinect 2, which was also connected to the HP Z workstation. The heart rate
(HR) was monitored by a Polar OH1 optical HR sensor, which has been proven to
be reliable compared to the gold standard of HR measurement with an
electrocardiography device (Hettiarachchi et al., 2019; Schubert et al.,
2018). Figure 2 shows the experiment setup and devices used in the experiment.
Figure 2. Experiment setup and the devices used in the experiment: (1) the
Oculus Rift CV1; (2) a 50-inch 4K TV; (3) the HP Z backpack; (4) the Microsoft
Kinect 2; and (5) Polar OH1.
The experiment was conducted in an indoor laboratory room that could not be
seen from the outside. The room was well illuminated, and an air conditioner
kept the room temperature at 24℃ during the experiment.
### 4.3. Participants
#### 4.3.1. Inclusion and Exclusion Criteria
Participants were recruited from a local university campus and a local
community center through posters, social media platforms, and a mailing list,
targeting young adults between 18 and 30 years old and middle-aged adults
between 45 and 65 years old. The study included participants who were not
disabled, were not pregnant (because of the physical exertion required to play
the game), and had not consumed any alcohol that day (a blood alcohol level of
approximately 0.07% can reduce symptoms of cybersickness (Iskenderova et al.,
2017), which might have affected our results).
Participants were excluded from the experiment if they (1) answered “yes” to
any of the Physical Activity Readiness Questionnaire questions (Thomas et al.,
1993), (2) had resting blood pressure higher than 140/90 mmHg, or (3) had an
unusually high or low resting heart rate (RestHR) for their age and gender
(i.e., in the top 10% or bottom 10% of the population) (Ostchega et al., 2011).
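The screening rules can be expressed as a simple predicate. The sketch below is an illustrative encoding of the three criteria; the names, and representing the RestHR criterion as a population percentile, are our assumptions.

```python
def is_eligible(par_q_any_yes: bool,
                systolic_mmhg: float, diastolic_mmhg: float,
                rest_hr_percentile: float) -> bool:
    """True unless any exclusion criterion applies."""
    if par_q_any_yes:                                   # (1) any PAR-Q "yes"
        return False
    if systolic_mmhg > 140 or diastolic_mmhg > 90:      # (2) high resting BP
        return False
    if rest_hr_percentile >= 90 or rest_hr_percentile <= 10:  # (3) extreme RestHR
        return False
    return True
```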
#### 4.3.2. Participants Background
Thirty-two (32) participants took part in our study—16 young adults (6
females; mean age = 20.6, SD = 1.31, range 18 to 23; BMI = 20.3, SD = 2.62)
and 16 middle-aged adults (5 females; mean age = 47.7, SD = 2.68, range 45 to
54; BMI = 23.8, SD = 2.04). Among the young adults, 7 had experience with VR
HMDs, but none were regular users; 14 had played videogames before, 6 of them
regularly. None of the middle-aged adults had experience with VR HMDs or
videogames. There were no dropouts in this experiment.
### 4.4. Procedure and Task
Each session lasted about one hour. Before the experiment began, participants
filled out a pre-experiment questionnaire gathering demographic information
(e.g., age, gender, and experience with the VR device) and the Physical
Activity Readiness Questionnaire (Thomas et al., 1993). After a brief
description of the experimental procedure, participants signed the consent
form, and we collected their RestHR and resting blood pressure. They were also
asked to enter their age, gender, height, and weight into the Polar Beat app.
Before each condition started, a researcher helped each participant put on the
required devices (e.g., Polar OH1). Once their HR returned to the RestHR
level, they were led to the experiment stage, beginning with the training
(warm-up) phase and then the gameplay phase (see Figure 1 and Section 3.2).
After each condition, they filled in the post-condition questionnaires (GEQ
(IJsselsteijn et al., 2008), SSQ (Kennedy et al., 1993), and the Borg RPE 6-20
scale (Borg, 1982)). They proceeded to the next condition when they felt
rested and their HR was back at the resting level. After completing all
conditions, they filled out a post-experiment questionnaire and took part in a
semi-structured interview.
### 4.5. Results
#### 4.5.1. Statistical Analysis
We used SPSS version 24 for Windows for data analysis. We employed a three-way
mixed ANOVA with GC (uncertain and certain) and DT (VR and LD) as within-
subjects variables and Age (young adults—YA and middle-aged adults—MA) as the
between-subjects variable, following existing approaches in the literature
(Nacke et al., 2009; Gerling et al., 2013; Wang et al., 2017). Bonferroni
correction was used for pairwise comparisons. Effect sizes ($\eta_{p}^{2}$)
are reported whenever feasible. To preserve the readability of the paper, all
data results are placed in the tables of the appendix after the references.
#### 4.5.2. Performance
Completion Time on Each Life. Figure 3a presents the mean completion time of
each life (i.e., monster’s life1, life2, life3). ANOVA tests yielded a
significant effect of Age on life2 ($F_{1,30}=7.246,p<.05,\eta_{p}^{2}=.195$)
and life3 ($F_{1,30}=9.088,p<.01,\eta_{p}^{2}=.232$). Post-hoc pairwise
comparisons revealed that YA could destroy the monster faster than MA on life2
and life3. No other significant effects were found.
Figure 3. (a) Mean completion time on each monster’s life according to age
group, (b) mean success rate of Kick and Zoom+Squat according to DT, and (c)
mean success rate of Zoom+Squat and Zoom+Kick according to GC and Age. Error
bars indicate ±2 standard errors.
Success Rate. Table 2 shows the ANOVA tests of the success rate for
Zoom+Squat, Kick, and Zoom+Kick. The corresponding success rate data can be
found in Figure 3b,c and Figure 4a. In summary, (1) participants had a higher
defense (i.e., Zoom+Squat) success rate in certain GC than uncertain GC, (2)
YA had a higher defense success rate in VR than LD, (3) participants had a
higher Kick success rate in VR than LD, (4) YA had a higher Zoom+Kick success
rate than MA in VR, (5) YA had a higher Zoom+Kick success rate in VR than LD,
and (6) YA had a higher Zoom+Kick success rate than MA in uncertain GC.
Total Number of Gestures Performed. Table 3 shows the ANOVA tests of the total
number of gestures performed for Zoom+Squat, Punch, and Zoom+Kick. The
corresponding gesture-count data can be found in Figure 4b,c. In summary, (1)
YA and MA both performed more defense moves (i.e., Zoom+Squat) in uncertain GC
than certain GC, (2) MA performed more defense moves than YA in both certain
and uncertain GC, (3) YA performed more Punch than MA in LD, (4) MA performed
more Punch in VR than LD, and (5) participants performed more Zoom+Kick in
uncertain GC than in certain GC.
Table 2. Three-way mixed ANOVA test results for success rate. Significant results where $p<.05$ are shown in light green, $p<.01$ in green, and $p<.001$ in dark green. Punch, Age, DT × GC, and DT × Age × GC have no significant results and are therefore not shown for clarity. No sig indicates no significant results. | Kick | Zoom+Squat | Zoom+Kick
---|---|---|---
DT | $F_{1,30}=4.836,p<.05,\eta_{p}^{2}=.139$ | $F_{1,30}=14.403,p<.001,\eta_{p}^{2}=.324$ | No sig
GC | No sig | $F_{1,30}=21.799,p<.001,\eta_{p}^{2}=.421$ | No sig
DT × Age | No sig | $F_{1,30}=7.942,p<.01,\eta_{p}^{2}=.209$ | $F_{1,30}=5.008,p<.05,\eta_{p}^{2}=.143$
GC × Age | No sig | No sig | $F_{1,30}=6.439,p<.05,\eta_{p}^{2}=.177$
Post-hoc | DT: VR > LD ($p<.05$; see Figure 3b) | GC: uncertain < certain ($p<.001$; see Figure 3c); YA: VR > LD ($p<.001$; see Figure 4a) | VR: YA > MA ($p<.05$; see Figure 4a); YA: VR > LD ($p<.05$; see Figure 4a); Uncertain: YA > MA ($p<.05$; see Figure 3c)
Table 3. Three-way mixed ANOVA test results for the total number of gestures performed. Significant results where $p<.05$ are shown in light green, $p<.01$ in green, and $p<.001$ in dark green. Kick, DT, GC × DT, and Age × GC × DT have no significant results and are therefore not shown for clarity. No sig indicates no significant results. | Punch | Zoom+Squat | Zoom+Kick
---|---|---|---
GC | No sig | $F_{1,30}=129.718,p<.001,\eta_{p}^{2}=.812$ | $F_{1,30}=5.473,p<.05,\eta_{p}^{2}=.154$
Age | $F_{1,30}=5.268,p<.05,\eta_{p}^{2}=.149$ | $F_{1,30}=18.638,p<.001,\eta_{p}^{2}=.383$ | No sig
GC × Age | No sig | $F_{1,30}=9.231,p<.01,\eta_{p}^{2}=.235$ | No sig
DT × Age | $F_{1,30}=4.981,p<.05,\eta_{p}^{2}=.142$ | No sig | No sig
Post-hoc | LD: YA > MA ($p<.01$; see Figure 4b); MA: VR > LD ($p<.01$; see Figure 4b) | YA and MA: uncertain > certain (both $p<.001$; see Figure 4c); Uncertain and certain: MA > YA (both $p<.001$; see Figure 4c) | GC: uncertain > certain ($p<.05$; see Figure 4c)
Figure 4. (a) Mean success rate of Zoom+Kick and Zoom+Squat according to DT
and Age, (b) mean total number of Punch performed according to DT and Age, and
(c) mean total number of Zoom+Kick and Zoom+Squat performed according to GC
and Age. Error bars indicate ±2 standard errors.
#### 4.5.3. Experience
Game Experience. ANOVA tests yielded a significant effect of Age on competence
($F_{1,30}=20.787,p<.001,\eta_{p}^{2}=.409$), immersion
($F_{1,30}=23.010,p<.001,\eta_{p}^{2}=.434$), tension
($F_{1,30}=20.815,p<.001,\eta_{p}^{2}=.410$), negative affect
($F_{1,30}=19.278,p<.001,\eta_{p}^{2}=.391$), positive affect
($F_{1,30}=20.810,p<.001,\eta_{p}^{2}=.410$). Post-hoc pairwise comparisons
showed that YA had higher levels of competence, immersion, tension, negative
affect, and positive affect than MA (see Figure 5a).
Figure 5. (a) Game experience questionnaire rating of subscales according to
Age, (b) mean flow rating according to DT and Age, and (c) mean nausea and
oculomotor rating according to Age. Error bars indicate ±2 standard errors.
There was a significant effect of DT
($F_{1,30}=40.298,p<.001,\eta_{p}^{2}=.573$) on flow, showing that
participants experienced a greater flow in VR than LD. Additionally, ANOVA
tests yielded a significant effect of DT × Age
($F_{1,30}=11.163,p<.01,\eta_{p}^{2}=.271$) on flow. Post-hoc pairwise
comparisons revealed that (1) MA experienced a lower flow than YA in LD
($p<.001$), and (2) VR led to a greater flow experience than LD in both YA
($p<.05$) and MA ($p<.001$). Figure 5b depicts the corresponding flow values.
No other significant effects were found.
Simulator Sickness. ANOVA tests yielded a significant effect of Age on nausea
($F_{1,30}=7.049,p<.05,\eta_{p}^{2}=.190$) and oculomotor
($F_{1,30}=5.242,p<.05,\eta_{p}^{2}=.149$), but not on disorientation
($F_{1,30}=2.490,p=.125,\eta_{p}^{2}=.077$). Post-hoc pairwise comparisons
revealed that (1) YA experienced a higher nausea level than MA (see Figure
5c), and (2) YA experienced a higher oculomotor level than MA (see Figure 5c).
No other significant effects were found.
Uncertain Elements’ Ratings. We employed a two-way mixed ANOVA with Elements
(false-attack, critical hit, miss) as the within-subjects variable and Age as
the between-subjects variable. The ANOVA tests yielded a significant effect of
Elements ($F_{1.607,48.224}=3.547,p<.05,\eta_{p}^{2}=.106$), but not of
Elements × Age ($F_{1.607,48.224}=1.656,p=.200$), on the ratings of the
uncertain elements. There was a significant effect of Age
($F_{1,30}=8.217,p<.001,\eta_{p}^{2}=.215$) on the uncertain elements’
ratings, showing that the uncertain elements were rated higher by YA (M = 5.88,
s.e. = 0.20) than by MA (M = 5.08, s.e. = 0.20). However, post-hoc pairwise
comparisons did not reveal any significant differences between the uncertain
elements.
#### 4.5.4. Exertion
Table 4 shows the ANOVA tests of all exertion measures. In summary, (1) YA had
lower avgHR% than MA in uncertain GC, (2) MA had a higher avgHR% in uncertain
GC than certain GC, (3) participants burned more calories in uncertain GC than
certain GC, (4) MA participants burned more calories than YA participants (see
Figure 6b), (5) Borg RPE for uncertain GC was higher than certain GC among YA
and MA, (6) the Borg RPE for YA was higher than MA in certain GC and uncertain
GC.
Table 4. Three-way mixed ANOVA test results for exertion measurements. Significant results where $p<.05$ are shown in light green, $p<.01$ in green, and $p<.001$ in dark green. DT, GC × DT, Age × DT, and Age × GC × DT have no significant results and are therefore not shown for clarity. No sig indicates no significant results. | avgHR% | Calories Burned | Borg RPE
---|---|---|---
GC | $F_{1,30}=30.560,p<.001,\eta_{p}^{2}=.505$ | $F_{1,30}=45.587,p<.001,\eta_{p}^{2}=.603$ | $F_{1,30}=39.533,p<.001,\eta_{p}^{2}=.569$
Age | $F_{1,30}=7.754,p<.01,\eta_{p}^{2}=.205$ | $F_{1,30}=8.353,p<.01,\eta_{p}^{2}=.218$ | $F_{1,30}=15.488,p<.001,\eta_{p}^{2}=.340$
GC × Age | $F_{1,30}=8.279,p<.01,\eta_{p}^{2}=.248$ | No sig | $F_{1,30}=4.759,p<.05,\eta_{p}^{2}=.137$
Post-hoc | Uncertain: YA < MA ($p<.01$; see Figure 6a); MA: uncertain > certain ($p<.001$; see Figure 6a) | GC: uncertain > certain ($p<.001$; see Figure 6b); Age: MA > YA ($p<.01$; see Figure 6b) | YA: uncertain > certain ($p<.01$; see Figure 6c); MA: uncertain > certain ($p<.001$; see Figure 6c); Certain: YA > MA ($p<.001$; see Figure 6c); Uncertain: YA > MA ($p<.01$; see Figure 6c)
Figure 6. (a) Mean avgHR% according to GC and Age, (b) mean calories burned,
and (c) mean Borg RPE rating according to GC and Age. Error bars indicate ±2
standard errors.
#### 4.5.5. User Rankings and Feedback
The VR uncertain version was rated the best of the four versions by 23
participants (12 YA). Only 5 participants (4 YA) selected VR certain as their
top option, and 4 MA chose the LD uncertain version as their top selection.
Feedback. From the coded transcripts, three main themes emerged (elements of
the game, general gaming experience, and exercising for health). The two
researchers first reviewed the transcripts independently, and the themes were
agreed upon with a third researcher after a second discussion. The thirty-two
participants are labeled P1-P16 (YA group) and P17-P32 (MA group).
Overall, both user groups perceived the game as “enjoyable” (10 YA, 9 MA),
“novel” (9 YA, 8 MA), and “good for their health” (9 YA, 14 MA), and none of
them found anything in the game confusing. Both groups found the false-attacks
more difficult than expected (P3, P13, P20, P22, P24-27), but only MA
participants mentioned that they sometimes could not perform the defense move
in time.
Regarding the elements they liked about the game, the comments from the two
groups came from two different perspectives. Most YA focused on the game
elements (e.g., “the false-attack by the opponents” [P3, P14, P16], “critical
hits” [P5], “misses” [P11], “using gestures to trigger attacks is fun and easy
to understand” [P6, P9, P13]), while only a few mentioned the health benefits
as their preferred elements (P8, P10, P15). The picture is completely
different for MA: 13 MA mentioned they liked the game because it could be a
good exercise activity, while only 6 comments focused on design elements
(e.g., “false-attacks by the monster is a good design” [P23, P27, P30], “it
tricks me into performing defense moves, which is good for my health” [P20,
P24, P25]).
The two generations again focused on different perspectives regarding the
elements they did not like. Most comments from YA were about the graphics and
models used in the game: they should be improved, and more moves could be
added. On the other hand, most MA believed that the uncertain elements were
sometimes overused, which caused them to perform too many defense moves and
made them feel exhausted during the game.
## 5\. Discussion
### 5.1. Effect of Age on Exergames
In general, the performance (i.e., completion times, success rates for both
attack and defense moves) of middle-aged adults was worse than that of young
adults in our motion-based first-person exergame, which is in line with
previous studies of similar games (Gerling et al., 2013). One possible reason
could be age-related declines in mobility; for instance, middle-aged adults
typically require more time to perform gestures (Ferrucci et al., 2016). They
also sometimes could not react to the monster’s attacks or cancel their
defense moves upon realizing that the monster was performing a false-attack;
for example, P20, P22, P24-25, P27-28: “I could not react in time.” Hence, it
is necessary to take into account age-related declines (e.g., working memory
(Meguro et al., 2000), grip strength (Kozakai et al., 2016), and muscle mass
(Brown and Hasser, 1996)) when designing exergames for middle-aged adults.
In addition, the two age groups perceived the game experience differently. We
found that young adults were more immersed (immersion, flow) in the game than
middle-aged adults and reported higher positive affect and competence.
However, young adults also felt more annoyed and experienced more negative
emotions than middle-aged adults, even though they performed better (e.g.,
their successful attack rate was much higher). One possible reason is that
young adults might have expected to perform much better due to their
competitive expectations of themselves and the game, whereas competition was
downplayed among middle-aged adults (Subramanian et al., 2019).
Previous research has suggested that susceptibility to VR sickness may decline
with age (Bardey et al., 2013). Our results support this, as young adults felt
sicker during gameplay than middle-aged adults. Overall, sickness levels for
all participants were either negligible or very low, with no participant
experiencing severe simulator sickness; that is, all participants had no
issues playing the game.
Existing literature in the exercise domain (e.g., tai chi (Lan et al., 2004),
arm training (Groslambert et al., 2006), arm abduction (Pincivero et al.,
2010)) has suggested that age does not affect the exertion level of exercise.
However, our results do not support this: the two groups of participants
produced different levels of exertion (middle-aged adults had a higher avgHR%
in the uncertain condition and burned more calories than young adults, but
gave lower Borg RPE ratings). Further study is required to explain this.
### 5.2. Effect of Display Type on Exergames
Our results suggest that participants performed better in VR (i.e., higher
success rates for attack and defense moves in VR than LD). This is
understandable because the greater flow experience that VR brought to the
players had a positive effect on in-game performance (Admiraal et al., 2011).
A previous study (Xu et al., 2020d) that also compared display types showed
that VR could provide a more positive game experience (e.g., challenge, flow,
immersion) than LD, which was also found in our results (i.e., VR led to a
higher flow rating than LD). Existing literature has also indicated that game
experience (from the GEQ) could be perceived the same in both VR and LD (Xu et
al., 2019). One reason could be that in (Xu et al., 2019) participants only
experienced 4 minutes of gameplay, which is relatively short for developing a
fuller picture of the technologies. Hence, we suggest that future studies
consider a longer game duration, like the 7-8 minutes used in our research and
in (Xu et al., 2020d), to let the players experience a game in each technology
more fully.
In addition, our findings indicate no significant difference in the level of
sickness participants experienced between VR and LD when playing the motion-
based exergame, which is in line with (Xu et al., 2019) but not with (Xu et
al., 2020d), where the researchers reported that playing a motion-based
exergame in VR could lead to higher sickness than LD. One possible explanation
is that the type of game used in the experiments was different. Our game and
the game used in (Xu et al., 2019) involved more interaction with the virtual
world than the game in (Xu et al., 2020d). For instance, players had direct
contact with the virtual objects (either through attacking and defending
against the monster in our game, or by directly using the hands or feet to hit
the objects in the game from (Xu et al., 2019)), which is not the case in (Xu
et al., 2020d), where the gestures performed by the users did not involve
direct contact with the virtual objects (cubes).
### 5.3. Effect of Uncertainty on Exergames
The purpose of the false-attacks, one uncertain element in our exergame, was
to trick players into using defense moves. Our results show that this element
achieved its intended goal: participants performed more defense moves
(Zoom+Squat) in the uncertain condition than in the certain condition. We also
observed during the experiment that this design tricked all players across
both groups.
In addition, the design of misses also forced participants to perform more
attack moves in their attempts to kill the monster. Hence, participants had a
higher exertion level (i.e., avgHR% for MA, calories burned, Borg RPE) in the
uncertain condition. Interestingly, participants did not report a worse
experience with these design features: (1) they did not complain about the
features, and (2) gameplay experience and sickness did not differ
significantly between the two game conditions. Therefore, we believe that
involving uncertain elements (i.e., false-attacks, misses, and critical hits)
in exergames similar to ours could increase players’ energy costs without
incurring negative gameplay experiences in both VR and LD.
### 5.4. Design Guidelines
#### 5.4.1. Applying Uncertainty to Exergames
As our results show, the proposed uncertain elements in our exergame can be
useful for enhancing exertion levels during game sessions. We list examples of
how these uncertain elements can be applied to other exergames.
For sports exergames, false-attacks can be used in several ways. For example,
in the boxing game Creed: Rise to Glory
(https://www.oculus.com/experiences/rift/1872428116153565/), a false-attack
can be applied directly to Creed’s attack strategy to trick players into
making defense moves. False-attacks can be enhanced further by following the
animation of a false-attack with a real attack. In Eleven Table Tennis VR
(https://www.oculus.com/experiences/rift/989106554552337/), a false move can
be added by letting the NPC pretend to move in one direction without actually
moving that way. This type of false move can also be used in designing
basketball and football exergames, where trickery is key to making a defending
player go in one direction so that the player can move the opposite way (e.g.,
Kinect Sports: Soccer, https://marketplace.xbox.com/en-US/Product/Kinect-Sports/66acd000-77fe-1000-9115-d8024d5308c9).
For exergames that involve one-way interaction with the enemy (i.e., player to
NPC), critical hits and misses can be used. For instance, in the tower defense
game Longbowman (Xie et al., 2018), critical hits and misses can be designed
with additional features: a critical hit can deal additional damage and also
slow down the enemy’s movement, whereas a miss does not damage the enemy and
makes it angry and move faster.
For exergames that involve two-way interaction with the enemy (player to NPC
and NPC to player), all three elements can be used. For instance, in Ring Fit
Adventure, a motion-based active game for the Nintendo Switch, all these
elements can be added in a similar way to ours, since our exergame is designed
based on this commercial game.
#### 5.4.2. Highlighting Health Benefits to Middle-aged and Older Adults
Like older adults (Subramanian et al., 2019), middle-aged adults believe that
exergames are helpful to their health. We suggest making the potential health
benefits explicit and clear inside the game and as part of the gameplay
experience. For instance, designers could (1) introduce the benefits of each
gesture before the game, (2) present the energy cost (e.g., calories burned)
during the game as part of any dynamic visual and audio feedback, and (3) give
a summary report of overall performance after the game (e.g., the total number
of each type of gesture the player performed).
### 5.5. Limitations and Future Work
There are some limitations in this research, which can also serve as
directions for future work. One limitation is that we tested three elements of
uncertainty (false-attacks, misses, and critical hits) that cover four
uncertainty sources. Future work could explore more uncertainty sources
(Costikyan, 2013) in motion-based exergames. For example, (1) analytical
complexity could be used by giving the player more skills but requiring them
to kill the monster in a limited time, so that the player needs to carefully
analyze the best strategy to fight their opponent. (2) Hidden information
could be integrated by not showing information about the opponent’s attack
moves. (3) Narrative anticipation could be used by adding a storyline to the
game, where defeating an opponent rewards the player with the corresponding
piece of the storyline; this gives the player the desire to know the next
piece of the storyline (Murnane et al., 2020).
In addition, there are some limitations related to the choice of VR HMD and to
exergaming on current commercial VR HMDs. We used the Oculus Rift CV1; newer
VR HMDs (e.g., the VIVE Pro Eye) with higher resolution could affect simulator
sickness and game experience. We chose the Oculus Rift CV1 for consistency
with prior studies (Xu et al., 2019; Xu et al., 2020d). The Rift CV1, as a
tethered headset, has a limited range of motion because of the attached
cables. While standalone devices like the Oculus Quest do not have this
limitation, they suffer from latency issues when used with external motion
sensors (e.g., a Kinect) to capture motion data. In addition, long gameplay
sessions wearing any current HMD can leave sweat on the lenses; thus, the
length of gameplay should be carefully designed to prevent this issue. Also,
to make MA-friendly exergames, future games should involve simpler gestures
(like Zoom—hands stretched out, or hands-up) to eliminate any risks when
wearing a VR HMD.
Our study involved only a single session. Longer-term studies will be useful
to determine whether the same results hold and to identify additional effects
that may come with long-term exposure. In addition, due to COVID-19, we could
not include older adults (i.e., those 65 years old and above) in the
experiment. Future work could include all three groups of adults (i.e., young,
middle-aged, and older) to assess their relative performance and experience
with exergames.
## 6\. Conclusion
In this research, we have investigated the effect of display type (virtual
reality and large display) with or without elements of uncertainty in motion-
based first-person perspective exergames. We also have explored the impact of
age by comparing game performance, gameplay experience, and level of energy
exertion between young adults and middle-aged adults. Our results suggest
three conclusions: (1) For exergames like ours, virtual reality could improve
game performance while maintaining the same level of sickness as large
displays. (2) Uncertain elements like those used in this research’s
motion-based exergame might not enhance the overall game experience, but they
are instrumental in increasing exertion levels, one of the essential features
of exergames. (3) Exergames for middle-aged adults should be carefully
designed with consideration of age-related declines, as for older adults. We
also proposed two main design guidelines that can pave the way for improving
the acceptability of VR exergames among young and middle-aged adults.
###### Acknowledgements.
The authors would like to thank the anonymous reviewers for their valuable
comments and helpful suggestions and the Committee Member who guided the
revision of our paper. The work is supported in part by Xi’an Jiaotong-
Liverpool University (XJTLU) Key Special Fund (KSF-A-03) and XJTLU Research
Development Fund.
## References
* Admiraal et al. (2011) Wilfried Admiraal, Jantina Huizenga, Sanne Akkerman, and Geert ten Dam. 2011. The concept of flow in collaborative game-based learning. _Group Awareness in CSCL Environments_ 27, 3 (May 2011), 1185–1194. https://doi.org/10.1016/j.chb.2010.12.013
* Andries and Robertson (2019) Valentina Andries and Judy Robertson. 2019. Designing Social Play to Support Young Hospitalised Children. In _Proceedings of the 18th ACM International Conference on Interaction Design and Children_ _(IDC ’19)_. Association for Computing Machinery, New York, NY, USA, 550–555. https://doi.org/10.1145/3311927.3325317 event-place: Boise, ID, USA.
* Barathi et al. (2018) Soumya C. Barathi, Eamonn O’Neill, Christof Lutteroth, Daniel J. Finnegan, Matthew Farrow, Alexander Whaley, Pippa Heath, Jude Buckley, Peter W. Dowrick, Burkhard C. Wuensche, and James L. J. Bilzon. 2018. Interactive Feedforward for Improving Performance and Maintaining Intrinsic Motivation in VR Exergaming. In _Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems - CHI ’18_. ACM Press, Montreal QC, Canada, 1–14. https://doi.org/10.1145/3173574.3173982
* Bardey et al. (2013) Aurore Bardey, Gaëlle Quarck, F. Paolino, Pierre Denise, Michel Paolino, John Golding, and Venera Ghulyan-Bedikian. 2013. Motion sickness susceptibility in healthy subjects and vestibular patients: effects of gender, age and trait-anxiety. _Journal of Vestibular Research_ 23, 4-5 (Nov. 2013), 203–209. https://doi.org/10.3233/VES-130501
* Berkovsky et al. (2010) Shlomo Berkovsky, Mac Coombe, Jill Freyne, Dipak Bhandari, and Nilufar Baghaei. 2010. Physical Activity Motivating Games: Virtual Rewards for Real Activity. In _Proceedings of the SIGCHI Conference on Human Factors in Computing Systems_ (Atlanta, Georgia, USA) _(CHI ’10)_. Association for Computing Machinery, New York, NY, USA, 243–252. https://doi.org/10.1145/1753326.1753362
* Birk et al. (2017) Max V. Birk, Maximilian A. Friehs, and Regan L. Mandryk. 2017\. Age-Based Preferences and Player Experience: A Crowdsourced Cross-Sectional Study. In _Proceedings of the Annual Symposium on Computer-Human Interaction in Play_ _(CHI PLAY ’17)_. Association for Computing Machinery, New York, NY, USA, 157–170. https://doi.org/10.1145/3116595.3116608 event-place: Amsterdam, The Netherlands.
* Blocker et al. (2014) Kenneth Blocker, Timothy Wright, and Walter Boot. 2014\. Gaming Preferences of Aging Generations. _Gerontechnology_ 12, 3 (June 2014), 174–184. https://doi.org/10.4017/gt.2014.12.3.008.00
* Bogost (2005) Ian Bogost. 2005\. The Rhetoric of Exergaming. In _Proceedings of the Digital Arts and Cultures (DAC)_. Copenhagen, Denmark, 10. https://doi.org/10.1.1.500.1480
* Borg (1982) Gunnar Borg. 1982\. Psychophysical bases of perceived exertion. _Medicine & Science in Sports & Exercise_ 14, 5 (1982), 377–381. insights.ovid.com
* Brown and Hasser (1996) Marybeth Brown and Eileen M Hasser. 1996. Complexity of age-related change in skeletal muscle. _The Journals of Gerontology Series A: Biological Sciences and Medical Sciences_ 51, 2 (1996), B117–B123. Publisher: The Gerontological Society of America.
* Caillois (2001) Roger Caillois. 2001\. _Man, Play and Games_ (reprint edition ed.). University of Illinois Press, Urbana.
* Costikyan (2013) Greg Costikyan. 2013\. _Uncertainty in games_. Mit Press.
* De Schutter (2011) Bob De Schutter. 2011\. Never Too Old to Play: The Appeal of Digital Games to an Older Audience. _Games and Culture_ 6, 2 (May 2011), 155–170. https://doi.org/10.1177/1555412010364978 Publisher: SAGE Publications.
* De Schutter and Vanden Abeele (2010) Bob De Schutter and Vero Vanden Abeele. 2010. Designing Meaningful Play within the Psycho-Social Context of Older Adults. In _Proceedings of the 3rd International Conference on Fun and Games_ _(Fun and Games ’10)_. Association for Computing Machinery, New York, NY, USA, 84–93. https://doi.org/10.1145/1823818.1823827 event-place: Leuven, Belgium.
* DeKoven (2002) Bernie DeKoven. 2002\. _The Well-Played Game: A Playful Path to Wholeness_. iUniverse, San Jose.
* Drachen et al. (2018) Anders Drachen, Pejman Mirza-Babaei, and Lennart Nacke (Eds.). 2018\. _Games User Research_. Oxford University Press, Oxford, UK ; New York.
* Duh et al. (2010) Henry Been-Lirn Duh, Sharon Lynn Chu Yew Yee, Yuan Xun Gu, and Vivian Hsueh-Hua Chen. 2010\. A Narrative-Driven Design Approach for Casual Games with Children. In _Proceedings of the 5th ACM SIGGRAPH Symposium on Video Games_ _(Sandbox ’10)_. Association for Computing Machinery, New York, NY, USA, 19–24. https://doi.org/10.1145/1836135.1836138 event-place: Los Angeles, California.
* Eggenberger et al. (2015) Patrick Eggenberger, Vera Schumacher, Marius Angst, Nathan Theill, and Eling D de Bruin. 2015. Does multicomponent physical exercise with simultaneous cognitive training boost cognitive performance in older adults? A 6-month randomized controlled trial with a 1-year follow-up. _Clinical interventions in aging_ 10 (Aug. 2015), 1335–1349. https://doi.org/10.2147/CIA.S87732
* Eriksson et al. (2019) Eva Eriksson, Gökçe Elif Baykal, Staffan Björk, and Olof Torgersson. 2019. Using Gameplay Design Patterns with Children in the Redesign of a Collaborative Co-Located Game. In _Proceedings of the 18th ACM International Conference on Interaction Design and Children_ _(IDC ’19)_. Association for Computing Machinery, New York, NY, USA, 15–25. https://doi.org/10.1145/3311927.3323155 event-place: Boise, ID, USA.
* Ferreira et al. (2015) Daniel Ferreira, Rut Correia, Antonieta Nieto, Alejandra Machado, Yaiza Molina Rodríguez, and José Barroso. 2015. Cognitive decline before the age of 50 can be detected with sensitive cognitive measures. _Psicothema_ 27, 3 (May 2015), 216–222. https://doi.org/10.7334/psicothema2014.192
* Ferrucci et al. (2016) Luigi Ferrucci, Rachel Cooper, Michelle Shardell, Eleanor Simonsick, Jennifer Schrack, and Diana Kuh. 2016. Age-Related Change in Mobility: Perspectives From Life Course Epidemiology and Geroscience. _The Journals of Gerontology Series A: Biological Sciences and Medical Sciences_ 71, 9 (March 2016), 1184–1194. https://doi.org/10.1093/gerona/glw043
* Gajadhar et al. (2008) Brian J. Gajadhar, Yvonne A. W. de Kort, and Wijnand A. IJsselsteijn. 2008\. Shared Fun Is Doubled Fun: Player Enjoyment as a Function of Social Setting. In _Fun and Games_ , Panos Markopoulos, Boris de Ruyter, Wijnand IJsselsteijn, and Duncan Rowland (Eds.). Springer Berlin Heidelberg, Berlin, Heidelberg, 106–117. https://doi.org/10.1007/978-3-540-88322-7_11
* Gajadhar et al. (2010) Brian J. Gajadhar, Henk Herman Nap, Yvonne A. W. de Kort, and Wijnand A. IJsselsteijn. 2010\. Out of Sight, out of Mind: Co-Player Effects on Seniors’ Player Experience. In _Proceedings of the 3rd International Conference on Fun and Games_ _(Fun and Games ’10)_. Association for Computing Machinery, New York, NY, USA, 74–83. https://doi.org/10.1145/1823818.1823826 event-place: Leuven, Belgium.
* Gao and Mandryk (2012) Yue Gao and Regan Mandryk. 2012. The acute cognitive benefits of casual exergame play. In _Proceedings of the 2012 ACM annual conference on Human Factors in Computing Systems - CHI ’12_. ACM Press, Austin, Texas, USA, 1863\. https://doi.org/10.1145/2207676.2208323
* Gerling et al. (2012) Kathrin Gerling, Ian Livingston, Lennart Nacke, and Regan Mandryk. 2012. Full-body motion-based game interaction for older adults. In _Proceedings of the 2012 ACM annual conference on Human Factors in Computing Systems - CHI ’12_. ACM Press, Austin, Texas, USA, 1873\. https://doi.org/10.1145/2207676.2208324
* Gerling et al. (2013) Kathrin M. Gerling, Kristen K. Dergousoff, and Regan L. Mandryk. 2013. Is Movement Better? Comparing Sedentary and Motion-Based Game Controls for Older Adults. In _Proceedings of Graphics Interface 2013_ _(GI ’13)_. Canadian Information Processing Society, CAN, 133–140. event-place: Regina, Sascatchewan, Canada.
* Greenberg et al. (1999) Paul E. Greenberg, Tamar Sisitsky, Ronald C. Kessler, Stan N. Finkelstein, Ernst R. Berndt, Jonathan R. T. Davidson, James C. Ballenger, and Abby J. Fyer. 1999. The economic burden of anxiety disorders in the 1990s. _The Journal of Clinical Psychiatry_ 60, 7 (1999), 427–435. https://doi.org/10.4088/JCP.v60n0702 Place: US Publisher: Physicians Postgraduate Press.
* Groslambert et al. (2006) Alain Groslambert, Céline C Grange, Stéphane Perrey, Jérôme Maire, Nicolas Tordi, and Jean Denis Rouillon. 2006. Effects of Aging on Perceived Exertion and Pain During Arm Cranking in Women 70 to 80 YEARS OLD. _Journal of sports science & medicine_ 5, 2 (June 2006), 208–214. https://pubmed.ncbi.nlm.nih.gov/24259993 Publisher: Asist Group.
* Hardy et al. (2015) Sandro Hardy, Tim Dutz, Josef Wiemeyer, Stefan Göbel, and Ralf Steinmetz. 2015. Framework for personalized and adaptive game-based training programs in health sport. _Multimedia Tools and Applications_ 74, 14 (July 2015), 5289–5311. https://doi.org/10.1007/s11042-014-2009-z
* Hernandez et al. (2013) Hamilton A. Hernandez, Zi Ye, T.C. Nicholas Graham, Darcy Fehlings, and Lauren Switzer. 2013\. Designing action-based exergames for children with cerebral palsy. In _SIGCHI Conference on Human Factors in Computing Systems_. ACM Press, Paris, France, 1261–1270. https://doi.org/10.1145/2470654.2466164
* Hettiarachchi et al. (2019) Imali T. Hettiarachchi, Samer Hanoun, Darius Nahavandi, and Saeid Nahavandi. 2019. Validation of Polar OH1 optical heart rate sensor for moderate and high intensity physical activities. _PLOS ONE_ 14, 5 (May 2019), e0217288. https://doi.org/10.1371/journal.pone.0217288
* IJsselsteijn et al. (2008) W. A. IJsselsteijn, Y. A. W. De Kort, and Karolien Poels. 2008\. The game experience questionnaire. _Eindhoven: Technische Universiteit Eindhoven_ (2008).
* Ioannou et al. (2019) Christos Ioannou, Patrick Archard, Eamonn O’Neill, and Christof Lutteroth. 2019. Virtual Performance Augmentation in an Immersive Jump & Run Exergame. In _Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems - CHI ’19_. ACM Press, Glasgow, Scotland Uk, 1–15. https://doi.org/10.1145/3290605.3300388
* Iskenderova et al. (2017) Aliya Iskenderova, Florian Weidner, and Wolfgang Broll. 2017\. Drunk Virtual Reality Gaming: Exploring the Influence of Alcohol on Cybersickness. In _Proceedings of the Annual Symposium on Computer-Human Interaction in Play_ _(CHI PLAY ’17)_. Association for Computing Machinery, New York, NY, USA, 561–572. https://doi.org/10.1145/3116595.3116618 event-place: Amsterdam, The Netherlands.
* Johnson (2018) Mark R. Johnson. 2018\. _The Unpredictability of Gameplay_. Bloomsbury Academic, New York.
* Juul (2011) Jesper Juul. 2011\. _Half-Real: Video Games between Real Rules and Fictional Worlds_. The MIT Press, Cambridge, Mass.
* Kennedy et al. (1993) Robert S. Kennedy, Norman E. Lane, Kevin S. Berbaum, and Michael G. Lilienthal. 1993. Simulator Sickness Questionnaire: An Enhanced Method for Quantifying Simulator Sickness. _The International Journal of Aviation Psychology_ 3, 3 (July 1993), 203–220. https://doi.org/10.1207/s15327108ijap0303_3
* Kim et al. (2014) Sung Yeun (Su) Kim, Nathan Prestopnik, and Frank A. Biocca. 2014\. Body in the interactive game: How interface embodiment affects physical activity and health behavior change. _Computers in Human Behavior_ 36 (2014), 376 – 384. https://doi.org/10.1016/j.chb.2014.03.067
* Koulouris et al. (2020) Jordan Koulouris, Zoe Jeffery, James Best, Eamonn O’Neill, and Christof Lutteroth. 2020. Me vs. Super(Wo)Man: Effects of Customization and Identification in a VR Exergame. In _Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems_ _(CHI ’20)_. Association for Computing Machinery, New York, NY, USA, 1–17. https://doi.org/10.1145/3313831.3376661 event-place: Honolulu, HI, USA.
* Kozakai et al. (2016) Rumi Kozakai, Fujiko Ando, Heung Kim, Atsumu Yuki, Rei Otsuka, and Hiroshi Shimokata. 2016\. Sex-differences in age-related grip strength decline: A 10-year longitudinal study of community-living middle-aged and older Japanese. _The Journal of Physical Fitness and Sports Medicine_ 5, 1 (March 2016), 87–94. https://doi.org/10.7600/jpfsm.5.87
* Kumari et al. (2019) Shringi Kumari, Sebastian Deterding, and Jonathan Freeman. 2019\. The Role of Uncertainty in Moment-to-Moment Player Motivation: A Grounded Theory. In _Proceedings of the Annual Symposium on Computer-Human Interaction in Play_ (Barcelona, Spain) _(CHI PLAY ’19)_. Association for Computing Machinery, New York, NY, USA, 351–363. https://doi.org/10.1145/3311350.3347148
* Lan et al. (2004) Ching Lan, Ssu-Yuan Chen, and Jin-Shin Lai. 2004. Relative Exercise Intensity of Tai Chi Chuan is Similar in Different Ages and Gender. _The American Journal of Chinese Medicine_ 32, 1 (Jan. 2004), 151–160. https://doi.org/10.1142/S0192415X04001746 Publisher: World Scientific Publishing Co.
* Malone (1982) Thomas W. Malone. 1982\. Heuristics for Designing Enjoyable User Interfaces: Lessons from Computer Games. In _Proceedings of the 1982 Conference on Human Factors in Computing Systems_ (Gaithersburg, Maryland, USA) _(CHI ’82)_. Association for Computing Machinery, New York, NY, USA, 63–68. https://doi.org/10.1145/800049.801756
* Márquez Segura et al. (2013) Elena Márquez Segura, Annika Waern, Jin Moen, and Carolina Johansson. 2013. The Design Space of Body Games: Technological, Physical, and Social Design. In _Proceedings of the SIGCHI Conference on Human Factors in Computing Systems_ (Paris, France) _(CHI ’13)_. Association for Computing Machinery, New York, NY, USA, 3365–3374. https://doi.org/10.1145/2470654.2466461
* Marshall et al. (2016) Joe Marshall, Florian ‘Floyd’ Mueller, Steve Benford, and Sebastiaan Pijnappel. 2016\. Expanding exertion gaming. _International Journal of Human-Computer Studies_ 90 (2016), 1 – 13\. https://doi.org/10.1016/j.ijhcs.2016.02.003
* Martin-Niedecken (2018) Anna Lisa Martin-Niedecken. 2018\. Designing for Bodily Interplay: Engaging with the Adaptive Social Exertion Game ”Plunder Planet”. In _Proceedings of the 17th ACM Conference on Interaction Design and Children_ (Trondheim, Norway) _(IDC ’18)_. Association for Computing Machinery, New York, NY, USA, 19–30. https://doi.org/10.1145/3202185.3202740
* Martin-Niedecken and Mekler (2018) Anna Lisa Martin-Niedecken and Elisa D. Mekler. 2018. The ExerCube: Participatory Design of an Immersive Fitness Game Environment. In _Serious Games_ , Stefan Göbel, Augusto Garcia-Agundez, Thomas Tregel, Minhua Ma, Jannicke Baalsrud Hauge, Manuel Oliveira, Tim Marsh, and Polona Caserman (Eds.). Springer International Publishing, Cham, 263–275.
* Martin-Niedecken et al. (2019) Anna Lisa Martin-Niedecken, Katja Rogers, Laia Turmo Vidal, Elisa D. Mekler, and Elena Márquez Segura. 2019. ExerCube vs. Personal Trainer: Evaluating a Holistic, Immersive, and Adaptive Fitness Game Setup. In _Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems_ (Glasgow, Scotland Uk) _(CHI ’19)_. Association for Computing Machinery, New York, NY, USA, 1–15. https://doi.org/10.1145/3290605.3300318
* Meguro et al. (2000) Yuko Meguro, Toshikatsu Fujii, Atsushi Yamadori, Takashi Tsukiura, Kyoko Suzuki, Jiro Okuda, and Mariko Osaka. 2000. The Nature of Age-Related Decline on the Reading Span Task. _Journal of Clinical and Experimental Neuropsychology_ 22, 3 (June 2000), 391–398. https://doi.org/10.1076/1380-3395(200006)22:3;1-V;FT391 Publisher: Routledge.
* Mestre et al. (2011) Daniel R. Mestre, Virginie Dagonneau, and Charles-Symphorien Mercier. 2011\. Does Virtual Reality Enhance Exercise Performance, Enjoyment, and Dissociation? An Exploratory Study on a Stationary Bike Apparatus. _Presence: Teleoperators and Virtual Environments_ 20, 1 (Feb. 2011), 14. https://doi.org/10.1162/pres_a_00031
* Monteiro et al. (2018) Diego Monteiro, Hai-Ning Liang, Wenge Xu, Marvin Brucker, Vijayakumar Nanjappan, and Yong Yue. 2018\. Evaluating enjoyment, presence, and emulator sickness in VR games based on first- and third- person viewing perspectives. _Computer Animation and Virtual Worlds_ 29, 3 (2018), 12\. https://doi.org/10.1002/cav.1830
* Mueller and Isbister (2014) Florian Mueller and Katherine Isbister. 2014. Movement-Based Game Guidelines. In _Proceedings of the SIGCHI Conference on Human Factors in Computing Systems_ (Toronto, Ontario, Canada) _(CHI ’14)_. Association for Computing Machinery, New York, NY, USA, 2191–2200. https://doi.org/10.1145/2556288.2557163
* Mueller et al. (2016) Florian Mueller, Rohit Ashok Khot, Kathrin Gerling, and Regan Mandryk. 2016. Exertion Games. _Found. Trends Hum.-Comput. Interact._ 10, 1 (Dec. 2016), 1–86. https://doi.org/10.1561/1100000041
* Mueller et al. (2012) Florian Mueller, Frank Vetere, Martin Gibbs, Darren Edge, Stefan Agamanolis, Jennifer Sheridan, and Jeffrey Heer. 2012. Balancing Exertion Experiences. In _Proceedings of the SIGCHI Conference on Human Factors in Computing Systems_ (Austin, Texas, USA) _(CHI ’12)_. Association for Computing Machinery, New York, NY, USA, 1853–1862. https://doi.org/10.1145/2207676.2208322
* Mueller et al. (2011) Florian “Floyd” Mueller, Darren Edge, Frank Vetere, Martin R. Gibbs, Stefan Agamanolis, Bert Bongers, and Jennifer G. Sheridan. 2011. Designing Sports: A Framework for Exertion Games. In _Proceedings of the SIGCHI Conference on Human Factors in Computing Systems_ _(CHI ’11)_. Association for Computing Machinery, New York, NY, USA, 2651–2660. https://doi.org/10.1145/1978942.1979330 event-place: Vancouver, BC, Canada.
* Murnane et al. (2020) Elizabeth L. Murnane, Xin Jiang, Anna Kong, Michelle Park, Weili Shi, Connor Soohoo, Luke Vink, Iris Xia, Xin Yu, John Yang-Sammataro, Grace Young, Jenny Zhi, Paula Moya, and James A. Landay. 2020. Designing Ambient Narrative-Based Interfaces to Reflect and Motivate Physical Activity. In _Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems_ _(CHI ’20)_. Association for Computing Machinery, New York, NY, USA, 1–14. https://doi.org/10.1145/3313831.3376478 event-place: Honolulu, HI, USA.
* Nacke et al. (2009) Lennart E. Nacke, Anne Nacke, and Craig A. Lindley. 2009\. Brain Training for Silver Gamers: Effects of Age and Game Form on Effectiveness, Efficiency, Self-Assessment, and Gameplay Experience. _CyberPsychology & Behavior_ 12, 5 (2009), 493–499. https://doi.org/10.1089/cpb.2009.0013 PMID: 19772440.
* Nes et al. (2013) Bjarne M. Nes, Imre Janszky, Ulrik Wisløff, Asbjørn Støylen, and Trine Karlsen. 2013. Age-predicted maximal heart rate in healthy subjects: The HUNT Fitness Study. _Scandinavian Journal of Medicine & Science in Sports_ 23, 6 (2013), 697–704. https://doi.org/10.1111/j.1600-0838.2012.01445.x
* Ostchega et al. (2011) Yechiam Ostchega, Kathryn Porter, Jeffery Hughes, Charles F Dillon, and Tatiana Nwankwo. 2011\. Resting pulse rate reference data for children, adolescents, and adults: United States, 1999-2008. _National health statistics reports_ 41 (Aug. 2011), 1–16.
* Pasch et al. (2009) Marco Pasch, Nadia Bianchi-Berthouze, Betsy van Dijk, and Anton Nijholt. 2009. Movement-based sports video games: Investigating motivation and gaming experience. _Entertainment Computing_ 1, 2 (2009), 49 – 61. https://doi.org/10.1016/j.entcom.2009.09.004 Intelligent Technologies for Interactive Entertainment.
* Pincivero et al. (2010) Danny Pincivero, Mark Timmons, and D Elsing. 2010. RPE Angle Effects in Young and Middle-Aged Adults. _International journal of sports medicine_ 31, 4 (Feb. 2010), 257–260. https://doi.org/10.1055/s-0030-1247551
* Plante et al. (2003) Thomas G. Plante, Arianna Aldridge, Denise Su, Ryan Bogdan, Martha Belo, and Kamran Kahn. 2003\. Does Virtual Reality Enhance the Management of Stress When Paired With Exercise? An Exploratory Study. _International Journal of Stress Management_ 10, 3 (2003), 203–216. https://doi.org/10.1037/1072-5245.10.3.203
* Power et al. (2019) Christopher Power, Paul Cairns, Alena Denisova, Themis Papaioannou, and Ruth Gultom. 2019\. Lost at the edge of uncertainty: Measuring player uncertainty in digital games. _International Journal of Human–Computer Interaction_ 35, 12 (2019), 1033–1045. https://doi.org/10.1080/10447318.2018.1507161 Publisher: Taylor & Francis.
* Rizzo et al. (2011) Albert Skip Rizzo, Belinda Lange, Evan A. Suma, and Mark Bolas. 2011\. Virtual Reality and Interactive Digital Game Technology: New Tools to Address Obesity and Diabetes. _Journal of Diabetes Science and Technology_ 5, 2 (March 2011), 256–264. https://doi.org/10.1177/193229681100500209
* Rosney and Horvath (2018) Daniel M Rosney and Peter J Horvath. 2018. Exergaming Intervention in Sedentary Middle-Aged Adults Improves Cardiovascular Endurance, Balance and Lower Extremity Functional Fitness. _Health Science Journal_ 12, 6 (2018), 1–10. https://doi.org/10.4172/2165-7904-C1-55 Publisher: Technological Educational Institute of Athens.
* Sanders and Stappers (2013) Liz Sanders and Pieter Jan Stappers. 2013. _Convivial Toolbox: Generative Research for the Front End of Design_ (illustrated edition ed.). Laurence King Publishing, Amsterdam.
* Schubert et al. (2018) Matt Schubert, Amy Clark, and Annie Rosa. 2018. The Polar OH1 Optical Heart Rate Sensor is Valid during Moderate-Vigorous Exercise. _Sports Medicine International Open_ 2, 3 (March 2018), 67–70. https://doi.org/10.1055/a-0631-0920
* Sheehan and Katz (2013) Dwayne P. Sheehan and Larry Katz. 2013. The effects of a daily, 6-week exergaming curriculum on balance in fourth grade children. _Journal of Sport and Health Science_ 2, 3 (Sept. 2013), 131–137. https://doi.org/10.1016/j.jshs.2013.02.002
* Sherry et al. (2006) John L. Sherry, Kristen Lucas, Bradley S. Greenberg, and Ken Lachlan. 2006. Video Game Uses and Gratifications as Predicators of Use and Game Preference. In _Playing video games: Motives, responses, and consequences._ Lawrence Erlbaum Associates, New York, US, 213–224. Place: Mahwah, NJ, US Publisher: Lawrence Erlbaum Associates Publishers.
* Sinclair et al. (2007) Jeff Sinclair, Philip Hingston, and Martin Masek. 2007\. Considerations for the design of exergames. In _Proceedings of the 5th international conference on Computer graphics and interactive techniques in Australia and Southeast Asia - GRAPHITE ’07_. ACM Press, Perth, Australia, 289. https://doi.org/10.1145/1321261.1321313
* Smeddinck et al. (2013) Jan Smeddinck, Sandra Siegel, and Marc Herrlich. 2013\. Adaptive Difficulty in Exergames for Parkinson’s Disease Patients. In _Proceedings of Graphics Interface 2013_ _(GI ’13)_. Canadian Information Processing Society, CAN, 141–148. event-place: Regina, Sascatchewan, Canada.
* Soares et al. (2016) Antônio Soares, Noé Borges Júnior, Marcelo Hounsell, Elessandra Marcelino, Gabriel Rossito, and Yoshimasa Sagawa. 2016. A serious game developed for physical rehabilitation of frail elderly. _European Research in Telemedicine / La Recherche Européenne en Télémédecine_ 5, 2 (June 2016), 45–53. https://doi.org/10.1016/j.eurtel.2016.05.003
* Subramanian et al. (2019) Sruti Subramanian, Yngve Dahl, Nina Skjæret Maroni, Beatrix Vereijken, and Dag Svanæs. 2019\. Assessing Motivational Differences Between Young and Older Adults When Playing an Exergame. _Games for Health Journal_ 9, 1 (Sept. 2019), 24–30. https://doi.org/10.1089/g4h.2019.0082 Publisher: Mary Ann Liebert, Inc., publishers.
* Tekinbas and Zimmerman (2003) Katie Salen Tekinbas and Eric Zimmerman. 2003. _Rules of Play: Game Design Fundamentals_. The MIT Press, Cambridge, Mass.
* Thomas et al. (1993) Scott Thomas, Jeff Reading, and Roy Shephard. 1993\. Revision of the Physical Activity Readiness Questionnaire (PAR-Q). _Canadian journal of sport sciences_ 17, 4 (Jan. 1993), 338–45.
* Vara et al. (2016) Mª Dolores Vara, Rosa M. Baños, Paloma Rasal, Alejandro Rodríguez, Beatriz Rey, Maja Wrzesien, and Mariano Alcañiz. 2016. A game for emotional regulation in adolescents: The (body) interface device matters. _Computers in Human Behavior_ 57 (2016), 267 – 273. https://doi.org/10.1016/j.chb.2015.12.033
* Verhaeghen and Salthouse (1997) Paul Verhaeghen and Timothy Salthouse. 1997. Meta-analyses of age-cognition relations in adulthood: Estimates of linear and nonlinear age effects and structural models. _Psychological bulletin_ 122, 3 (Dec. 1997), 231–249. https://doi.org/10.1037//0033-2909.122.3.231
* Wang et al. (2017) Ping Wang, Xing-Ting Zhu, Han-Hui Liu, Yi-Wen Zhang, Yang Hu, Hui-Jie Li, and Xi-Nian Zuo. 2017. Age-Related Cognitive Effects of Videogame Playing Across the Adult Life span. _Games for Health Journal_ 6, 4 (June 2017), 237–248. https://doi.org/10.1089/g4h.2017.0005 Publisher: Mary Ann Liebert, Inc., publishers.
* Xie et al. (2018) Biao Xie, Yongqi Zhang, Haikun Huang, Elisa Ogawa, Tongjian You, and Lap-Fai Yu. 2018. Exercise Intensity-Driven Level Design. _IEEE Transactions on Visualization and Computer Graphics_ 24, 4 (2018), 1661–1670. https://doi.org/10.1109/TVCG.2018.2793618
* Xu et al. (2020a) Wenge Xu, Hai-Ning Liang, Nilufar Baghaei, Bing Wu Berberich, and Yong Yue. 2020a. Health Benefits of Digital Videogames for the Aging Population: A Systematic Review. _Games for Health Journal_ 9, 6 (2020), 389–404. https://doi.org/10.1089/g4h.2019.0130 PMID: 32589482.
* Xu et al. (2020b) Wenge Xu, Hai-Ning Liang, Qiuyu He, Xiang Li, Kangyou Yu, and Yuzheng Chen. 2020b. Results and Guidelines From a Repeated-Measures Design Experiment Comparing Standing and Seated Full-Body Gesture-Based Immersive Virtual Reality Exergames: Within-Subjects Evaluation. _JMIR Serious Games_ 8, 3 (2020), 15. https://doi.org/10.2196/17972
* Xu et al. (2020c) Wenge Xu, Hai-Ning Liang, Xiaoyue Ma, and Xiang Li. 2020c. VirusBoxing: A HIIT-Based VR Boxing Game. In _Extended Abstracts of the 2020 Annual Symposium on Computer-Human Interaction in Play_ (Virtual Event, Canada) _(CHI PLAY ’20)_. Association for Computing Machinery, New York, NY, USA, 98–102. https://doi.org/10.1145/3383668.3419958
* Xu et al. (2019) Wenge Xu, Hai-Ning Liang, Yifan Yu, Diego Monteiro, Khalad Hasan, and Charles Fleming. 2019\. Assessing the Effects of a Full-body Motion-based Exergame in Virtual Reality. In _Proceedings of the Seventh International Symposium of Chinese CHI on - Chinese CHI 2019_. ACM Press, Xiamen, China. https://doi.org/10.1145/3332169.3333574
* Xu et al. (2020d) Wenge Xu, Hai-Ning Liang, Zeying Zhang, and Nilufar Baghaei. 2020d. Studying the Effect of Display Type and Viewing Perspective on User Experience in Virtual Reality Exergames. _Games for Health Journal_ 9, 4 (Feb. 2020), 10. https://doi.org/10.1089/g4h.2019.0102 Publisher: Mary Ann Liebert, Inc., publishers.
## Appendix A Appendix
### A.1. Data Results
We list all data in Tables 5 and 6. VR_Cer denotes the VR certain game condition (GC), VR_Unc the VR uncertain GC, TV_Cer the TV certain GC, and TV_Unc the TV uncertain GC.
Table 5. Means (SDs) of participants’ performance data regarding the completion time on each of the three lives of the monster, total number of gestures performed, and success rate of each move.
Type | Young Adults: VR_Cer | VR_Unc | TV_Cer | TV_Unc | Middle-aged Adults: VR_Cer | VR_Unc | TV_Cer | TV_Unc
---|---|---|---|---|---|---|---|---
Completion Time on Each of The Three Lives of The Monster
Life1 | 126.83 (34.74) | 126.14 (36.79) | 133.69 (54.42) | 140.05 (51.73) | 149.38 (41.71) | 152.82 (48.91) | 148.16 (60.72) | 144.64 (44.30)
Life2 | 113.80 (22.07) | 114.60 (33.05) | 119.71 (33.22) | 121.71 (37.82) | 133.96 (25.06) | 138.09 (39.38) | 131.36 (29.37) | 142.75 (47.57)
Life3 | 105.45 (19.23) | 109.30 (35.73) | 112.21 (29.10) | 115.39 (23.88) | 134.27 (36.88) | 126.20 (28.26) | 121.08 (19.63) | 139.17 (46.41)
Total Number of Gestures Performed
Kick | 33.19 (7.88) | 35.13 (7.44) | 38.00 (8.33) | 34.75 (10.08) | 36.50 (10.41) | 36.56 (8.60) | 35.19 (9.09) | 35.00 (9.35)
Push | 45.56 (13.77) | 44.25 (8.31) | 47.50 (9.64) | 44.13 (14.06) | 40.94 (10.97) | 41.31 (8.54) | 34.31 (13.80) | 35.88 (11.91)
Zoom+Kick | 35.25 (3.62) | 35.75 (5.01) | 32.81 (3.29) | 35.50 (3.12) | 34.06 (4.55) | 35.75 (5.42) | 35.06 (5.20) | 35.56 (5.67)
Zoom+Squat | 29.19 (7.88) | 33.88 (13.50) | 24.56 (9.24) | 36.63 (13.44) | 33.75 (4.93) | 46.69 (7.43) | 34.44 (5.68) | 50.44 (12.93)
Success Rate of Each Move
Kick | 82.19% (12.31%) | 83.72% (10.28%) | 77.28% (13.50%) | 75.87% (21.27%) | 80.03% (12.01%) | 80.44% (13.10%) | 75.96% (11.01%) | 76.04% (7.98%)
Push | 52.04% (25.06%) | 55.34% (24.32%) | 61.05% (18.84%) | 60.43% (20.57%) | 54.76% (13.12%) | 49.18% (18.52%) | 56.79% (15.61%) | 55.31% (15.83%)
Zoom+Kick | 98.32% (2.14%) | 99.50% (1.08%) | 96.16% (3.46%) | 98.13% (2.17%) | 97.61% (4.08%) | 97.31% (4.03%) | 99.14% (2.20%) | 96.19% (3.81%)
Zoom+Squat | 74.07% (13.17%) | 73.03% (13.36%) | 61.88% (17.56%) | 55.46% (18.25%) | 73.51% (10.20%) | 64.12% (9.21%) | 70.13% (11.33%) | 63.10% (6.78%)
Table 6. Means (SDs) of participants’ experience and exertion data regarding each game experience questionnaire subscale, simulator sickness questionnaire subscale, and exertion measurement.
Type | Young Adults: VR_Cer | VR_Unc | TV_Cer | TV_Unc | Middle-aged Adults: VR_Cer | VR_Unc | TV_Cer | TV_Unc
---|---|---|---|---|---|---|---|---
Game Experience Questionnaire
Competence | 12.00 (3.72) | 9.00 (4.52) | 10.25 (2.70) | 11.44 (3.44) | 6.69 (4.09) | 6.75 (4.60) | 7.38 (4.18) | 6.56 (2.92)
Immersion | 9.88 (5.64) | 10.44 (4.83) | 9.19 (4.86) | 10.06 (5.28) | 3.44 (2.45) | 3.50 (2.76) | 3.50 (2.90) | 3.50 (2.13)
Flow | 10.63 (4.11) | 10.31 (3.79) | 7.69 (3.52) | 8.75 (3.75) | 10.94 (5.63) | 11.81 (5.83) | 3.94 (1.98) | 4.31 (2.09)
Tension | 1.31 (1.45) | 1.88 (2.45) | 1.63 (1.59) | 1.69 (2.02) | 0.19 (0.54) | 0.06 (0.25) | 0.00 (0.00) | 0.06 (0.25)
Challenge | 6.13 (3.88) | 7.25 (2.59) | 6.69 (2.96) | 6.25 (3.04) | 6.06 (2.69) | 6.00 (2.76) | 5.44 (2.68) | 6.44 (2.58)
Negative Affect | 2.00 (2.19) | 2.56 (2.56) | 2.44 (2.76) | 2.81 (3.69) | 0.56 (1.21) | 0.63 (1.15) | 0.31 (0.60) | 0.19 (0.40)
Positive Affect | 10.56 (4.57) | 9.38 (3.90) | 8.88 (4.08) | 9.69 (3.59) | 4.94 (2.21) | 5.19 (2.14) | 5.38 (2.39) | 5.63 (2.09)
Simulator Sickness Questionnaire
Nausea | 11.93 (13.71) | 14.31 (17.42) | 11.93 (13.26) | 13.71 (16.33) | 2.98 (9.68) | 2.98 (8.33) | 1.19 (3.26) | 5.37 (14.77)
Oculomotor | 12.79 (14.57) | 11.84 (16.60) | 14.21 (13.80) | 14.21 (14.87) | 3.32 (7.81) | 6.16 (5.69) | 4.26 (9.17) | 7.11 (11.23)
Disorientation | 11.31 (23.41) | 6.96 (15.25) | 6.96 (21.56) | 7.83 (24.88) | 0.00 (0.00) | 0.00 (0.00) | 3.48 (9.51) | 1.74 (4.75)
Total SSQ | 14.03 (16.64) | 13.32 (17.43) | 13.56 (15.62) | 14.49 (18.16) | 2.81 (7.54) | 4.21 (5.27) | 3.51 (8.25) | 6.08 (12.13)
Exertion
avgHR% | 62.61% (8.87%) | 66.12% (11.72%) | 62.39% (10.15%) | 63.79% (10.14%) | 67.75% (7.22%) | 76.26% (6.12%) | 67.83% (8.73%) | 74.92% (6.90%)
maxHR% | 70.79% (9.76%) | 72.63% (12.60%) | 71.33% (12.60%) | 72.83% (10.65%) | 76.26% (6.60%) | 84.78% (6.84%) | 75.56% (7.51%) | 83.64% (5.93%)
Calories | 41.31 (10.73) | 50.88 (14.68) | 42.19 (14.79) | 46.06 (11.79) | 50.31 (10.26) | 60.25 (14.09) | 46.56 (9.61) | 60.25 (8.89)
Borg RPE 6-20 | 8.88 (2.28) | 10.19 (2.71) | 9.19 (1.83) | 8.88 (1.67) | 7.06 (0.44) | 8.19 (0.75) | 7.06 (0.44) | 8.00 (0.73)
# Spin dynamics from a constrained magnetic Tight-Binding model
Ramon Cardias, SPEC, CEA, CNRS, Université Paris-Saclay, CEA Saclay, F-91191 Gif-sur-Yvette, France
Cyrille Barreteau, SPEC, CEA, CNRS, Université Paris-Saclay, CEA Saclay, F-91191 Gif-sur-Yvette, France
Pascal Thibaudeau, CEA, DAM, Le Ripault, BP 16, F-37260 Monts, France
Chu Chun Fu, Université Paris-Saclay, CEA, Service de Recherches de Métallurgie Physique, F-91191 Gif-sur-Yvette, France
(September 3, 2024)
###### Abstract
The precession dynamics of coupled atomic moments in the tight-binding (TB) approximation is presented. By implementing an angular penalty functional in the energy, the magnetic effective fields are captured self-consistently, and the orientation of the local magnetic moments is observed to evolve faster than the variation of their magnitudes. This allows the computation of effective atomic magnetic fields that are found to be consistent with Heisenberg's exchange interaction, by comparison with classical atomistic spin dynamics on Fe, Co and Ni magnetic clusters.
## I Introduction
Nowadays, the coupling between structural and magnetic properties in 3d-based magnetic materials plays a key role in the manufacture of high performance spintronics devices [1]. It is also central to numerous anomalous evolutions of structural parameters with pressure [2, 3, 4, 5]. For instance, one of its salient consequences is that the bcc phase of $\alpha$-Fe is stabilized by its magnetic properties [6, 7, 8]. Thus, to accurately describe the dynamics of 3d metals and their alloys, a fully coupled spin-lattice dynamics with an ab initio level of precision is highly desirable. Unfortunately, and despite notable progress [9, 10, 11], no such tool is available so far.
However, the theory of magnetism is fundamentally a theory of electronic structure. Antropov et al. first presented a description of the motion of local magnetic moments in magnetic materials [12, 13], in the framework of first-principles methods. Their idea was motivated by the fact that the interatomic exchange energy among atomic magnetic moments is small compared to the intra-atomic exchange and bandwidth energies. This adiabatic spin density approximation thus allows the angles defining the directions of these magnetic moments to be treated as sufficiently slowly varying degrees of freedom, separated from the individual motion of the underlying electrons, exactly like the nuclear coordinates in the Born-Oppenheimer adiabatic approach to molecular dynamics [14]. Moreover, by assuming that the magnetization density in the immediate vicinity of each atom has a uniform orientation, the direction of every magnetic moment can be followed in time according to a precession equation, as is the case in classical atomistic spin dynamics [15]. Consequently, the initial many-electron system is mimicked by a system of classical moments whose directions and amplitudes are determined self-consistently from the requirement of minimizing a given free energy. For each moment, the effective field that enters the precession equation thus depends only on the variation of the spin-dependent free electronic energy as a functional of the magnetization direction. Moreover, since for this type of adiabatic transformation the relevant electronic correlation hole is essentially confined to the inner part of each atomic volume, the longitudinal moment dynamics is nonadiabatic in this approach. It is governed by individual electronic spin flips such as Stoner excitations, which are also fast [16, 17]. Thus, even if the amplitude of each moment cannot be globally constant in time, for a small temporal excursion fast enough to keep the adiabatic approximation valid, the longitudinal dynamics can often be neglected.
The paper is organized as follows. In Sec. II, we review the framework used to derive non-collinear magnetism within the tight-binding (TB) approximation, where angular magnetic constraints are imposed by penalty functionals solved during the self-consistent computation of the electronic structure. In Sec. II.4, we derive an equation of precession for the local magnetic moments that involves the constrained magnetic fields and allows consideration of both transverse and longitudinal damped torques. The dynamics of various magnetic dimers and trimers of Fe, Co and Ni is studied in detail in Secs. III.1 and III.2 to assess the validity of the commonly assumed isotropic Heisenberg exchange approximation. Lastly, in Sec. III.3 we analyse in depth the example of an Fe dimer, exposing the strength of our method against the limitations introduced by describing this system in a global Heisenberg picture.
## II Methodology
When a Hamiltonian $H$ is a functional of the magnetization ${\bm{M}}$, the effective field is nothing other than the functional derivative of $H$ with respect to the magnetization [18]. To calculate such an effective field acting on the atomic magnetic moments, atomistic spin dynamics (ASD) uses a parameterized spin-Hamiltonian, whereas ab initio methods compute it at every self-consistent iteration. One ab initio approach consists in the use of constrained density functional theory (cDFT) [19, 20, 21], whose accomplishments are now reported in many references [22, 23]. The accuracy of the cDFT methods comes at an extremely high computational price that scales quickly with the dimension and size of the studied system. In contrast, spin-Hamiltonian methods rely on spatial distributions of classical magnetic moments and offer an option whose computational cost is set by the desired accuracy and by how the interatomic exchange parameters are treated. We offer a method that lies in between, with a lower computational cost than the full ab initio machinery of cDFT, and without having to rely on a correct description of the parameters of a spin-Hamiltonian for a given system.
### II.1 Magnetic tight-binding model
In this work we have used a magnetic TB model that has been described in a review article [24] and has been extensively benchmarked and validated on many different magnetic systems of various dimensionalities (bulk, surfaces, interfaces, wires, clusters) [25, 26, 27], including complex magnetic structures such as spin density waves [28] and non-collinear configurations [29].
It is based on a parametrized $spd$ description of the electronic structure, where in practice the parameters of the Hamiltonian are determined by a careful fit of the total energy and band structures obtained from ab initio data over a large range of lattice constants and different crystallographic structures. The magnetism is described via a Stoner-like interaction term, the Stoner parameter $I$ of each element also being determined from a comparison to ab initio calculations at several lattice constants. This TB model describes the electronic, magnetic and energetic properties with a precision close to Density Functional Theory but at a much lower computational cost.
To avoid a too lengthy derivation, we present a simplified version of the TB formalism that focuses on the most salient features of the model. Let us consider a non-magnetic TB Hamiltonian $H^{0}$ written in a local basis set $|i\rangle$. The site index $i$ is a composite object that also includes an orbital index, which can be dropped for simplicity. $H^{0}$ is decomposed into onsite energy terms $\varepsilon_{i}^{0}=\langle i|H^{0}|i\rangle$ and hopping integrals $\beta_{ij}=\langle i|H^{0}|j\rangle$. The eigenfunctions of the system are written as combinations of atomic orbitals $|\alpha\rangle=\sum_{i}C_{i}^{\alpha}|i\rangle$, and the density matrix between sites reads $\rho_{ij}=\sum_{\alpha}^{\text{occ}}C_{i}^{\alpha}C_{j}^{\alpha\star}$, where the summation runs over the occupied energy levels $\varepsilon_{\alpha}<E_{F}^{0}$, $E_{F}^{0}$ being the Fermi level, such that $\sum_{i}\rho_{ii}$ equals the total number of electrons $N_{e}$ of the system. The total energy of a non-magnetic system is here reduced to the band energy only [30]
$\displaystyle E_{\text{tot}}^{0}=\sum_{\alpha}^{\text{occ}}\varepsilon_{\alpha}^{0}=\mathrm{Tr}(\rho H^{0})=\sum_{ij}\rho_{ij}H^{0}_{ji}=\sum_{ij}\sum_{\alpha}^{\text{occ}}C_{i}^{\alpha}C_{j}^{\alpha\star}H^{0}_{ji}.$ (1)
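As a minimal illustration of Eq. (1) (a sketch with an illustrative two-site, one-orbital Hamiltonian, not the fitted $spd$ parametrization of Ref. [24]), one can fill the lowest eigenstates of $H^{0}$, build the density matrix from the occupied eigenvectors, and recover the band energy as $\mathrm{Tr}(\rho H^{0})$:

```python
import numpy as np

def band_energy(H0, n_electrons):
    """Band energy of Eq. (1): occupy the lowest eigenstates of H0 and
    evaluate E = Tr(rho H0), with rho built from the occupied eigenvectors.
    Spin degeneracy is ignored for simplicity."""
    eps, C = np.linalg.eigh(H0)           # eigenpairs of the non-magnetic H^0
    C_occ = C[:, :n_electrons]            # occupied states, eps_alpha < E_F^0
    rho = C_occ @ C_occ.conj().T          # rho_ij = sum_occ C_i^a (C_j^a)*
    return np.real(np.trace(rho @ H0))    # equals the sum of occupied eigenvalues

# toy two-site model: onsite energies 0, hopping beta = -1 (illustrative units)
H0 = np.array([[0.0, -1.0],
               [-1.0, 0.0]])
print(band_energy(H0, n_electrons=1))     # -1.0, the bonding eigenvalue
```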
To this non-magnetic framework, both the magnetic interaction and the local charge neutrality can be added through appropriate constraints, such that the total energy can be written in a formalism where the electronic spins are treated as collinear, i.e.

$E_{\text{tot}}=E_{\text{tot}}^{0}+\sum_{i}U_{i}(n_{i}-n_{i}^{0})^{2}-\frac{1}{4}\sum_{i}I_{i}m_{i}^{2},$ (2)

where $n_{i}=\rho_{ii}=n_{i\uparrow}+n_{i\downarrow}$ and $m_{i}=n_{i\uparrow}-n_{i\downarrow}$ are respectively the charge and magnetization of site $i$, whereas $I_{i}$ is the Stoner parameter and $U_{i}$ a large positive quantity. Minimizing Eq. (2) with respect to the normalized coefficients $C_{i}^{\alpha}$, with the condition $\sum_{i}(C_{i}^{\alpha})^{2}=1$, leads to a Schrödinger equation for a renormalized Hamiltonian $H_{\sigma}$ for $\uparrow$ and $\downarrow$ spins separately. This Hamiltonian simply reads

$H_{\sigma}=H^{0}+\sum_{i}|i\rangle\left(U_{i}(n_{i}-n_{i}^{0})-\frac{1}{2}I_{i}m_{i}\sigma\right)\langle i|,$ (3)

where $\sigma=\pm 1$ labels the spin $\uparrow$ or $\downarrow$. In this Stoner picture only the onsite terms $\varepsilon^{0}_{i}\rightarrow\varepsilon^{0}_{i}+(U_{i}(n_{i}-n_{i}^{0})-\frac{1}{2}I_{i}m_{i}\sigma)$ are affected by both the local charge neutrality and the magnetism.
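To make this collinear Stoner picture concrete, here is a minimal self-consistency loop for the renormalized Hamiltonian of Eq. (3). It is only a sketch under simplifying assumptions: one orbital per site, a common Fermi level obtained by filling the lowest spin-orbitals, and illustrative values of $U$, $I$ and the hopping rather than the fitted parameters of Ref. [24].

```python
import numpy as np

def stoner_scf(H0, n0, U, I, n_elec, mix=0.3, tol=1e-8, max_iter=500):
    """Collinear SCF loop for Eq. (3): returns site charges n_i and moments m_i."""
    n = n0.copy()
    m = np.ones_like(n0)                       # ferromagnetic seed moment
    for _ in range(max_iter):
        levels = []
        for sigma in (+1, -1):                 # diagonalize each spin channel
            shift = U * (n - n0) - 0.5 * I * m * sigma
            eps, C = np.linalg.eigh(H0 + np.diag(shift))
            levels += [(e, sigma, C[:, k]) for k, e in enumerate(eps)]
        levels.sort(key=lambda t: t[0])        # common Fermi level
        n_new, m_new = np.zeros_like(n), np.zeros_like(m)
        for _, sigma, c in levels[:n_elec]:    # occupy the n_elec lowest states
            n_new += np.abs(c) ** 2
            m_new += sigma * np.abs(c) ** 2
        if np.abs(n_new - n).max() < tol and np.abs(m_new - m).max() < tol:
            break
        n = (1 - mix) * n + mix * n_new        # linear mixing for stability
        m = (1 - mix) * m + mix * m_new
    return n, m

H0 = np.array([[0.0, -0.5], [-0.5, 0.0]])      # illustrative dimer
n, m = stoner_scf(H0, n0=np.array([1.0, 1.0]), U=5.0, I=2.0, n_elec=2)
print(n, m)  # fully polarized solution here; a small I instead yields m ~ 0
```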
The generalization to non-collinear magnetism is straightforward. First, the previous expressions are extended to spin-orbitals with spin-dependent coefficients $(C_{i\uparrow},C_{i\downarrow})$ on each site. Then an onsite density matrix $\tilde{\rho}_{i}$ is manipulated as a $2\times 2$ matrix with components $\rho_{i}^{\sigma\sigma^{\prime}}=\sum_{\alpha}^{\text{occ}}C_{i\sigma}^{\alpha}C_{i\sigma^{\prime}}^{\alpha\star}$, written more conveniently as $\tilde{\rho}_{i}=\frac{1}{2}n_{i}\sigma_{0}+\frac{1}{2}\bm{m}_{i}\cdot\bm{\sigma}$, where $\sigma_{0}$ is the identity matrix $\equiv\mathbb{I}$, ${\bm{\sigma}}=(\sigma_{x},\sigma_{y},\sigma_{z})$ is the vector of Pauli matrices, and ${\bm{m}}_{i}=\text{Tr}(\tilde{\rho}_{i}{\bm{\sigma}})$. As a consequence, the Hamiltonian $H$ then reads

$H=H_{n}\sigma_{0}+\bm{H}_{m}\cdot\bm{\sigma},$ (4)
where the components of the vector Hamiltonian $\bm{H}=(H_{n},\bm{H}_{m})$ are

$\displaystyle H_{n}=\sum_{i}\left(\epsilon_{i}^{0}+U_{i}(n_{i}-n_{i}^{0})\right)|i\rangle\langle i|+\sum_{ij}\beta_{ij}|i\rangle\langle j|,$ (5)

$\displaystyle\bm{H}_{m}=-\frac{1}{2}\sum_{i}\bm{\Delta}_{i}|i\rangle\langle i|,$ (6)
with $\bm{\Delta}_{i}=I_{i}\bm{m}_{i}$. When the total energy of the system is
written as the sum of the occupied eigenvalues (band energy term) of the
renormalized Hamiltonian, one has to take into account the so-called double
counting terms
$E_{\text{tot}}=\sum_{\alpha}^{\text{occ}}\varepsilon_{\alpha}-\frac{1}{2}\sum_{i}U_{i}((n_{i})^{2}-(n_{i}^{0})^{2})+\frac{1}{4}\sum_{i}I_{i}\left\lVert\bm{m}_{i}\right\rVert^{2},$
(7)
where $\varepsilon_{\alpha}$ are the eigenvalues of the renormalized
Hamiltonian.
### II.2 Magnetic constraints in TB
When dealing with magnetic systems it is often interesting to be able to explore the energetics of various magnetic configurations. This can be achieved by trying several starting magnetic configurations, but that remains a relatively limited strategy since it produces only a few self-consistent solutions to compare. It is much more powerful to consider the situation where magnetic constraints are imposed on any given atom $i$ of the system. Appendix A summarizes the fixed spin moment method, which is limited to collinear magnetism. Among all the practical methods of optimization under constraints [31], the penalty method is a very handy way to proceed. It consists in supplementing the total energy with a penalty term, in a similar way as was done for the local charge neutrality constraint. There exist many possible ways to impose constraints on a magnetic system [32, 21, 33], which have been carefully reviewed in reference [34].
There also exist various types of penalty functionals, depending on the quantity to impose. One can impose a given moment $\bm{m}_{i}^{\text{pen}}$ on a given atomic site $i$, as presented in Appendix B, but it is also possible to constrain only the polar angle $\theta_{i}$ between the atomic moment of atom $i$ and the $z$-axis, using a penalty functional of the form $\lambda(\theta_{i}-\theta_{i}^{\text{pen}})^{2}$. An equivalent expression applies to the azimuthal angle $\phi_{i}$ as well. To constrain both angles simultaneously, we could simply add these two functionals. However, as reported by Ma and Dudarev [33], a combined angular penalty functional can be constructed, based on the dot product of $\bm{m}_{i}$ and $\bm{e}_{i}^{\text{pen}}$, the latter being a unit vector of given spherical angles $(\theta_{i}^{\text{pen}},\phi_{i}^{\text{pen}})$. This penalty function reads $E^{\text{pen}}_{i}=\lambda(\left\lVert\bm{m}_{i}\right\rVert-\bm{e}_{i}^{\text{pen}}\cdot\bm{m}_{i})$, and leaves the norm of the magnetization $\left\lVert\bm{m}_{i}\right\rVert$ free to vary while constraining the direction of the magnetic moment to be that of $\bm{e}_{i}^{\text{pen}}$. Consequently, this introduces a renormalization of the on-site terms of the TB Hamiltonian of the form $-\bm{B}_{i}^{\text{pen}}\cdot\bm{\sigma}$ with $\bm{B}_{i}^{\text{pen}}=-\lambda(\bm{e}_{i}-\bm{e}_{i}^{\text{pen}})$, where $\bm{m}_{i}=\left\lVert\bm{m}_{i}\right\rVert\bm{e}_{i}$. Therefore the on-site term $\bm{\Delta}_{i}$ of the magnetic Hamiltonian $\bm{H}_{m}$ (see Eq. (6)) reads

$\bm{\Delta}_{i}=I_{i}\bm{m}_{i}+2\bm{B}_{i}^{\text{pen}}$ (8)
This is exactly Eq. (1.9) of Ref. 35. The spin splitting field $\bm{\Delta}_{i}$ is the sum of the Stoner-like exchange field $I_{i}\bm{m}_{i}$ and the penalization field. This penalty scheme has several specific properties. For example, by noting that $-\bm{B}_{i}^{\text{pen}}\cdot\bm{m}_{i}=E^{\text{pen}}_{i}$, it can be shown that there are no double counting terms associated with the renormalization; consequently the total energy can be written as in Eq. (18) but without the last term. Moreover, when $\lambda\to\infty$, $\bm{e}_{i}\approx\bm{e}_{i}^{\text{pen}}$ and $\bm{B}_{i}^{\text{pen}}\cdot\bm{m}_{i}=0$, so the penalization field becomes perpendicular to the local magnetization.
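As a small numerical check of these properties (a standalone sketch with arbitrary numbers, not a full SCF cycle), one can verify the identity $-\bm{B}_{i}^{\text{pen}}\cdot\bm{m}_{i}=E^{\text{pen}}_{i}$ directly:

```python
import numpy as np

def penalty_field(m_i, e_pen, lam):
    """Angular penalty of Sec. II.2:
    E_pen = lam * (|m_i| - e_pen . m_i),  B_pen = -lam * (e_i - e_pen)."""
    norm = np.linalg.norm(m_i)
    e_i = m_i / norm
    E_pen = lam * (norm - np.dot(e_pen, m_i))
    B_pen = -lam * (e_i - e_pen)
    return E_pen, B_pen

m = np.array([0.3, 0.0, 2.9])             # a moment slightly tilted from z
e_pen = np.array([0.0, 0.0, 1.0])         # constrained direction
E_pen, B_pen = penalty_field(m, e_pen, lam=10.0)
print(np.isclose(-np.dot(B_pen, m), E_pen))   # True: no double counting term
Delta = 1.0 * m + 2.0 * B_pen             # on-site splitting of Eq. (8), I = 1
```

In a converged SCF cycle with large $\lambda$, $\bm{e}_{i}$ aligns with $\bm{e}_{i}^{\text{pen}}$ and both quantities vanish, leaving a purely transverse penalization field.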
To be more specific, let us now consider the variation of the total energy with respect to the polar and azimuthal angles. By considering a variation of the angle $d\theta$ on site $i$ and by using the Force Theorem, it is straightforward to show that $dE=-\frac{d\bm{B}_{i}^{\text{pen}}}{d\theta}\cdot\bm{m}_{i}d\theta=-\left\lVert\bm{m}_{i}\right\rVert\frac{d\bm{B}_{i}^{\text{pen}}}{d\theta_{i}}\cdot\bm{e}_{i}d\theta_{i}$. Now, by taking the derivative of $\bm{B}_{i}^{\text{pen}}\cdot\bm{e}_{i}=0$, and by noting that $\frac{d\bm{e}}{d\theta}=\bm{e}_{\theta}$, we find a relationship between the polar angle variation of the energy, which is the effective field up to a sign, and the penalty field

$\frac{1}{\left\lVert\bm{m}_{i}\right\rVert}\frac{\partial E}{\partial\theta_{i}}=\bm{B}_{i}^{\text{pen}}\cdot\bm{e}_{i,\theta}=\bm{B}_{i,\theta}^{\text{pen}},$ (9)

and similarly for the azimuthal angle variation of the energy

$\frac{1}{\left\lVert\bm{m}_{i}\right\rVert}\frac{1}{\sin\theta_{i}}\frac{\partial E}{\partial\phi_{i}}=\bm{B}_{i}^{\text{pen}}\cdot\bm{e}_{i,\phi}=\bm{B}_{i,\phi}^{\text{pen}},$ (10)

or, in a more compact formulation,

$\bm{B}_{i}^{\text{pen}}=\frac{\partial E}{\partial\bm{m}_{i}}=\frac{1}{\left\lVert\bm{m}_{i}\right\rVert}\frac{\partial E}{\partial\bm{e}_{i}}.$ (11)
Thanks to these penalty functionals, it becomes possible to target any arbitrary local magnetic configuration and find the corresponding local effective field, an extremely useful technique to explore the magnetic energy landscape. It is also possible to make $\lambda$ a site-dependent parameter, setting it to zero on selected atoms so that they adapt freely during the self-consistency cycles while the others remain constrained.
In the following section we use the penalty formalism to map the TB model onto a Heisenberg Hamiltonian and to derive a spin dynamics equation of motion that directly uses the penalty field derived here.
### II.3 Exchange parameters in TB
In this section we present the general procedure to map the total energy of an electronic structure method onto a classical Heisenberg model, which describes a system of atomic spins characterized by local magnetic moments $\bm{m}_{i}$ at sites $i$ interacting via bare isotropic interactions $J^{0}_{ij}$:

$\displaystyle E_{\textrm{Heis}}$ $\displaystyle=-\frac{1}{2}\sum_{i\neq j}J^{0}_{ij}\bm{m}_{i}\cdot\bm{m}_{j},$ (12)
$\displaystyle=-\frac{1}{2}\sum_{i\neq j}J^{0}_{ij}\left\lVert\bm{m}_{i}\right\rVert\left\lVert\bm{m}_{j}\right\rVert\bm{e}_{i}\cdot\bm{e}_{j},$
$\displaystyle=-\frac{1}{2}\sum_{i\neq j}J_{ij}\bm{e}_{i}\cdot\bm{e}_{j}.$

Within this approach the amplitude of the magnetization $\left\lVert\bm{m}_{i}\right\rVert$ of site $i$ can be incorporated effectively into the bare exchange interaction to produce a dressed exchange interaction $J_{ij}$, once it is assumed that the $\left\lVert\bm{m}_{i}\right\rVert$ are independent of the magnetic configuration. This assumption seems rather drastic, but it holds in many magnetic systems where the magnetic moments depend little on the magnetic configuration, or for small rotations around a given angle, which is the case treated here. Keeping this assumption in mind, we can safely drop the "dressed" qualifier.
However, in systems that globally break rotational symmetry (particularly systems of nanometer size), this fails, and the classical Heisenberg model is only valid in a limited range around a given stable (or metastable) magnetic configuration $\cal C$ that preserves the rotational invariance locally. In such systems the Heisenberg model can only be used to explore the dynamics around the configuration $\cal C$, which does not substantially alter this local invariance, a situation typically found at low temperatures. Consequently, at higher temperatures, or across transitions that reduce the point symmetry, the $J_{ij}$'s usually become very sensitive to structural parameters such as interatomic distances and local environments, preventing their transferability to other atomic structures. This point is well illustrated in Appendix D.
Since numerical implementations of the Heisenberg model are far simpler than electronic structure approaches, it is tempting to extract the desired exchange parameters $J_{ij}$ from electronic structure calculations. Several methods have been reported in the literature to do so. i) The simplest method is based on a fit of the total energy obtained from multiple collinear magnetic configurations, which requires neither a non-collinear numerical implementation nor penalty constraints [36, 37]. ii) Another approach consists in performing finite difference calculations of the total energy between various non-collinear magnetic configurations [38, 39, 40, 41], which enlarges significantly the space of magnetic configurations to span. In addition, by varying the relative angle between the magnetic sites, it is possible to test the range of validity of the Heisenberg picture [42, 43]. iii) Based on this finite difference picture, in a seminal work Liechtenstein et al. derived an explicit expression for the exchange parameters from the second order variation of the band energy, relying on the magnetic Force Theorem and a Green's function formalism [44, 45]. The latter has proven very successful in predicting various magnetic properties, such as magnon excitations and critical temperatures, and has also been used to perform dynamical calculations of magnetic moments [46]. In this work, we have used approach ii): we rotated one magnetic moment by an angle $\theta$ and derived an expression for $E(\theta)$ in each case, i.e. dimers (Sec. III.1) and trimers (Sec. III.2). We have found that the energy curves of the TB model and the Heisenberg model agree quite well, which leads to a good agreement between the spin dynamics of the two methods, shown later in Secs. III.1 and III.2. Details of the derived expressions for both cases and of the fitting of the energies to find the respective exchange coupling parameters $J_{ij}$ are given in Appendix D.
### II.4 Spin-dynamics in TB
The change in direction of each local magnetic moment ${\bm{m}}_{i}=\text{Tr}(\tilde{\rho}_{i}{\bm{\sigma}})$ with time is given by the transverse torque of this moment with the effective field, which is precisely $\bm{B}_{i}^{\text{eff}}\equiv-\bm{B}_{i}^{\text{pen}}=-\frac{\partial E}{\partial\bm{m}_{i}}$:

$\frac{d\bm{m}_{i}}{dt}=\bm{m}_{i}\times\frac{\bm{B}_{i}^{\text{eff}}}{\hbar}=\frac{\bm{B}_{i}^{\text{pen}}}{\hbar}\times\bm{m}_{i}$ (13)

Because $\bm{B}_{i}^{\text{eff}}$ is constructed orthogonal to $\bm{m}_{i}$, it can itself be written as the cross product of a functional of $\bm{m}_{i}$ with $\bm{m}_{i}$. Eq. (13) is nothing other than Larmor's precession equation, which is itself a non-relativistic limit of a more complex motion of spinning particles in a co-moving frame [47].
In practice, TB SCF calculations are first performed without any constraint to identify the stable (or metastable) magnetic states ${\bm{m}^{eq}_{i}}$. Such a magnetic state is not necessarily unique, and the process has to be repeated in frustrated systems that produce degenerate states. However, this search can be systematized with methods for finding minimum energy paths of transitions in magnetic systems [48, 49]. Moreover, if a precession around the equilibrium magnetization is considered, the longitudinal term vanishes because $\bm{B}_{i}^{\text{eff}}$ is constructed orthogonal to ${\bm{m}_{i}}$. A given spin direction $\bm{m}_{i}(0)$ is then chosen in the neighborhood of this equilibrium state, and a constrained SCF calculation is performed with the chosen penalty method described above to get the local effective field. A spin dynamics is thus produced by solving Eq. (13) in time with an explicit solver. In this case, each local moment may have a different starting amplitude, which remains constant over time, and the moments evolve on local spheres according to Rodrigues' rotation formula, presented in Appendix C. The procedure is repeated for each time step of the spin dynamics, as sketched below.
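A minimal sketch of this loop is given below, under stated assumptions: `constrained_scf` is a hypothetical placeholder for the penalized SCF cycle of Sec. II.2, returning the fields $\bm{B}_{i}^{\text{pen}}$ (here assumed in eV) for the current moment directions, and each explicit step is taken as a norm-preserving rotation, consistent with the Rodrigues construction of Appendix C.

```python
import numpy as np

HBAR = 6.582119569e-16            # eV.s, assuming the SCF fields come in eV

def rotate(m, omega, dt):
    """Rotate m about the axis omega for a time dt, preserving |m|;
    this realizes dm/dt = omega x m exactly over the step."""
    w = np.linalg.norm(omega)
    if w == 0.0:
        return m
    k, th = omega / w, w * dt
    return (m * np.cos(th) + np.cross(k, m) * np.sin(th)
            + k * np.dot(k, m) * (1.0 - np.cos(th)))

def tbsd(moments, constrained_scf, dt, n_steps):
    """Explicit integration of Eq. (13): dm_i/dt = (B_pen_i / hbar) x m_i.
    `constrained_scf` stands in for the penalized TB-SCF cycle."""
    traj = [np.array(moments)]
    for _ in range(n_steps):
        B_pen = constrained_scf(moments)       # one SCF cycle per time step
        moments = [rotate(m_i, B_i / HBAR, dt)
                   for m_i, B_i in zip(moments, B_pen)]
        traj.append(np.array(moments))
    return np.array(traj)
```

The cost of each step is dominated by the constrained SCF cycle, which is where the method pays for its ab initio content.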
## III Spin dynamics of magnetic clusters
In this section, we study the dynamics of the magnetic moments in two different frameworks: an "in-house" atomistic spin dynamics (ASD) code as implemented in Ref. [50], based on a Heisenberg Hamiltonian, and the tight-binding spin dynamics (TBSD) method described in Sec. II. Both are applied to the simplest cases, i.e. dimers and equilateral triangle trimers of equivalent atoms, for which the corresponding effective exchange interaction $J$ is obtained from our TB model and then used in the ASD for comparison with the TBSD. Note that since in the ASD code the dynamics is expressed in terms of unit vectors and the effective field is written as $-\frac{\partial E}{\partial\bm{e}_{i}}$ (with no $\left\lVert m_{i}\right\rVert$ factor), we have used in the TBSD an effective field given by $-\left\lVert m_{i}\right\rVert\bm{B}_{i}^{\text{pen}}$.
We would like to highlight that Ref. [51] has explored, in parallel, aspects of the results presented in this paper. Most of their effort went into verifying that the effective field is exactly the negative of the constraining field, which acts as a Lagrange multiplier to stabilize an out-of-equilibrium, non-collinear magnetic configuration, a point raised in Ref. 32. However, the quality of the effective field derived by the constrained method is very sensitive to the numerical limit of the Lagrange multiplier, a point we have carefully monitored. It is noteworthy that our results are complementary and do not overlap in any way, especially in the spin-dynamics aspect of this work.
### III.1 Magnetic dimers
Many studies have already addressed the spin dynamics of both quantum and classical Heisenberg dimers [52, 53, 54, 55], though not always by systematically looking at the temporal dynamics of each of their individual moments. Using the method described in Sec. II.4, we studied the time evolution of the net magnetic moments, here treated as classical three-dimensional vectors, for magnetic dimers of Fe, Co and Ni. First, Eq. (13) is solved and the precession of the magnetic moments is analyzed without damping, starting from an initial configuration tilted by $10^{\circ}$ from the $z$-axis on each atomic site. Then, using the method presented in Appendix D, our findings are compared with an atomistic spin dynamics approach using the exchange coupling $J$ extracted from the angular dependence of the total energy. Our results, depicted in Fig. 1, show that all three dimers behave as Heisenberg systems in the studied limit, i.e. the effective field $B^{eff}_{i}$ can be described by a constant isotropic exchange, Eq. (12), that does not depend on the instantaneous magnetic configuration.
Figure 1: (color online) Magnetization and torque dynamics of individual moments for dimers of Fe (black), Co (red) and Ni (green). TBSD (resp. ASD) results are shown as solid lines (resp. circles). The unit of torque is PHz. Initial conditions are ${\bm{m}}_{1}=g(-\sin(10^{\circ}),0,\cos(10^{\circ}))$, ${\bm{m}}_{2}=g(\sin(10^{\circ}),0,\cos(10^{\circ}))$, where $g$ are the SCF Landé factors for each atom (see Appendix D).
As shown in Appendix D, between $\theta=0^{\circ}$ and $\theta=10^{\circ}$ the fit of the energy calculated from the TB model onto a Heisenberg Hamiltonian works perfectly, but this does not hold for higher angles. It means that a simple bilinear Heisenberg Hamiltonian is not enough to describe the system globally, but only locally with respect to the magnetic configuration. Because the $z$-component of the magnetization is constant in time, the $z$-component of the ASD torque is exactly zero, which is not the case in the TB dynamics. However, this deviation can be consistently reduced by decreasing the timestep used to integrate the precession equation, Eq. (13). We also verify that the precession frequency, as calculated in Appendix C, is well reproduced by the TB calculations.
### III.2 Magnetic trimers
It is known from the literature that in some specific situations, the exchange coupling and Dzyaloshinskii-Moriya interactions calculated from the ferromagnetic (FM) state are not a good fit for predicting magnetic properties, e.g. close to the paramagnetic state [56, 57] or across the transition from the FM to the skyrmion phase [58]. This is mainly because in these scenarios interactions of higher order play an important, and sometimes central, role; one example is the 4-spin interaction needed to stabilize the skyrmion phase in a hexagonal one-atomic-layer Fe film on the Ir(111) surface [59]. These higher order interactions can be seen as if the exchange constants became functions of the magnetization state, a possibility theorized a long time ago [60, 61]. One could argue that only a more specific, higher-order spin-Hamiltonian is needed to describe the problem, but in other cases so-called beyond-Heisenberg interactions can also be present, i.e. interactions that cannot be mapped onto a spin-Hamiltonian [62], or cases where the Heisenberg picture is simply broken [63]. Our goal here is to explore the limits of, and differences between, the spin dynamics obtained from a spin-Hamiltonian and from the TB spin dynamics method presented here.
To do so, the magnetization dynamics of equilateral triangle trimers of Fe, Co and Ni is explored; the geometry is sketched in Fig. 2.
Figure 2: (color online) (a) Schematic representation of the equilateral triangle trimer.
The magnetization dynamics of the Fe, Co and Ni triangle trimers is depicted in Fig. 3, and the torques in Fig. 4.
Figure 3: (color online) Magnetization dynamics of Fe (black), Co (red) and Ni (green) triangle trimers. TBSD (resp. ASD) results are shown as solid lines (resp. circles). Initial conditions are $\bm{m}_{1}(0)=g(-0.17365,0.0,0.98481)$, $\bm{m}_{2}(0)=g(0.08682,-0.15038,0.98481)$, $\bm{m}_{3}(0)=g(0.08682,0.15038,0.98481)$, where $g$ are the SCF Landé factors for each atom (see text).
Figure 4: (color online) Torque dynamics of Fe (black), Co (red) and Ni (green) triangle trimers. TBSD (resp. ASD) results are shown as solid lines (resp. circles). Units of the torques are PHz. Initial conditions are identical to those in Fig. 3.
In order to evaluate the exchange coupling between the magnetic moments in this case, a procedure analogous to that used for the dimer is performed, described more precisely in Appendix D. The parameters obtained by fitting the energy from the TB calculation are reported in Appendix D; note that in this particular case $J_{12}=J_{23}=J_{31}$ due to symmetry. Initially, self-consistent calculations under the angular penalty functional were performed in order to determine the magnetic moments of each atom in the system. With that information, simulations of the magnetization dynamics were run using the spin Hamiltonian, Eq. (12). In parallel, the process described in Sec. II.4 was followed, the magnetization dynamics was calculated, and the comparison between the two methods is shown in Fig. 3. Similarly to the dimer case, the systems presented here behave as Heisenberg systems within the studied limit, e.g. $\theta=10^{\circ}$, when the precession of the magnetic moments around the $z$-axis is calculated.
So far, these tests have served to prove the reliability of our method, not to justify the extra computational cost introduced to reproduce the behavior of an ASD approach. In the next section we exhibit the simplest situation that demonstrates its relevance.
### III.3 Configuration dependence of the exchange coupling parameters $J_{ij}$
The task of finding a reliable Hamiltonian to describe variations of magnetic configurations is not straightforward. Continuous efforts have been made throughout the years to understand the microscopic origin of these exchange parameters and their consequences [64]. Recently, a method to calculate the exchange coupling parameter $J_{ij}$ for any given magnetic configuration via first-principles simulations was developed and applied to study these interactions in bcc Fe [65]. These configuration dependent $J_{ij}$'s significantly improved the comparison of the spin-wave dispersion between theory and experiment. Within the TB approximation, Ref. [66] reports a configuration dependence of the exchange parameters, obtained by comparing effective fields $B_{eff}$ between the Heisenberg model and direct TB calculations. Moreover, it is crucial to understand the relevance of higher order terms in the expansion of the magnetic Hamiltonian, e.g. biquadratic, 3-spin and 4-spin terms, as can be seen in works like Refs. [59, 43] and [67]. Lastly, Ref. [68], as implemented in Ref. [69], offers an attractive solution to the problem of a statistically under-represented magnetic reference state, but at the cost of spanning the entire magnetic configuration space. In principle, this allows the derivation of effective exchange coupling constants that average the effect of more than two independent spin configurations. Unfortunately this statistical method is better suited to the dilute magnetic limit and appears inadequate to capture the magnetic behavior of a single specific dimer or trimer. Moreover, its implementation for alloys is complex.
So far, we have calculated the exchange coupling parameters by fitting the energy from the TB calculations around the ground state, i.e. FM for Fe, Co and Ni. Past studies have revealed the non-Heisenberg behavior of Fe in particular, and to illustrate our argument we pick the Fe dimer as an example. For a dimer, one can express the total TB energy as an expansion on a basis of Legendre polynomials up to a given order $N$, such as

$\displaystyle E(\theta)-E(0)$ $\displaystyle=\sum_{n=1}^{N}J_{12}^{(n)}P_{n}(\cos(\theta)).$ (14)

When this series ends at $N=1$, $J_{12}^{(1)}$ is just the usual intensity of the Heisenberg coupling constant. If the series ends at $N\geq 2$, we can interpret $J_{12}^{(2)}$ as a biquadratic component of the intensity of the magnetic coupling, characterizing a beyond-Heisenberg behavior. In Fig. 5 we show, on the left, the total energy of the Fe dimer as a function of the angle $\theta$ between the magnetic moments of the two Fe atoms and, on the right, the exchange coupling $J_{ij}^{(1)}$ calculated by fitting the Heisenberg model locally around each $\theta$ (in steps of $10^{\circ}$).
Figure 5: (color online) TB total energy as a function of the angle between the two magnetic moments of an Fe dimer (left y-axis) and the $N=1$ exchange coupling parameter derived locally for each angle (right y-axis). In addition, the TB total energy is globally fitted by an expansion in Legendre polynomials of $\cos(\theta)$. Here, $N=1$ is the bilinear Heisenberg Hamiltonian, $N=2$ includes the biquadratic term, and so forth.
It is clear from the total energy calculations that, in this case, the curve cannot be fitted by a simple bilinear Heisenberg model. We then added a biquadratic correction of the form $\cos^{2}(\theta)$, as done in Ref. [70], through the $P_{2}(\cos(\theta))=\frac{1}{2}\left(3\cos^{2}(\theta)-1\right)$ term of the Legendre expansion; the result is reported in Fig. 5 as the $N=2$ curve. One can note that this $N=2$ term improves the model globally, but it does not quite match the TB calculations rigorously, in particular in the range of angles where the FM order is not the preferred magnetic ground state. One needs to go up to 6th order to get a reasonable fit that captures all the energetic features, including the reversal in the sign of the energy variation at intermediate angles. It is noteworthy that the magnetic moment of each Fe atom changes by about 40% throughout the rotation (data not shown), from 3 $\mu_{B}$ (FM configuration) to $\approx 1.8\mu_{B}$ (AFM configuration), a feature that is also not covered by the Heisenberg model.
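The fit of Eq. (14) is an ordinary least-squares problem in $x=\cos\theta$. The sketch below runs it on a synthetic stand-in for the TB curve (the real data being that of Fig. 5) with `numpy`'s Legendre module; the free constant coefficient simply absorbs the $E(0)$ offset.

```python
import numpy as np
from numpy.polynomial import legendre

theta = np.deg2rad(np.arange(0.0, 181.0, 10.0))
x = np.cos(theta)
# toy energy curve with a deliberate biquadratic part (illustrative numbers)
E = 0.6 * (1.0 - x) + 0.15 * (1.0 - x**2)

for N in (1, 2, 6):
    coeffs = legendre.legfit(x, E, deg=N)     # Eq. (14) truncated at order N
    resid = np.max(np.abs(E - legendre.legval(x, coeffs)))
    print(N, np.round(coeffs[1:], 4), "max residual:", resid)
# N = 1 leaves a visible residual, while N >= 2 is exact for this toy curve;
# the text finds that the real Fe dimer requires N = 6.
```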
The parametric derivation for such a simple configuration space gives a sense of the magnitude of the task in much more complex systems, such as alloys and materials with non-collinear magnetic ground states. We argue, however, that properties strongly dependent on small variations around the ground state, such as spin-wave spectra, are well described by a local Heisenberg Hamiltonian, as already anticipated by Holstein and Primakoff [71]; but a more precise description of the electronic structure is needed to compute the correct effective field far from the ground state, which is not necessarily represented by the magnon state of lowest energy. In that scenario the effective field directly derived from the electronic structure produces the correct dynamics in time for any direction of any local magnetic moment, without prior knowledge of any exchange values, and represents, by construction, a direct way to avoid this issue.
## IV Conclusion
In this paper, we have presented a method that offers an alternative between full ab initio and spin-Hamiltonian based spin dynamics. Our approach uses a penalty functional on the magnetic moments of each site in order to calculate self-consistently, at every time step, the corresponding effective magnetic field. We have solved the precession equation on each site, without damping, for dimers and trimers of Fe, Co and Ni, and compared our findings with an ASD approach in which the magnetic effective field is not calculated directly from the electronic structure but from a parameterized spin-Hamiltonian. The exchange coupling interaction $J$, as a parameter, was calculated by fitting the TB total energy with a parameterized spin-Hamiltonian over a range of directions of the atomic magnetic moments. Our results showed that within this limit, these clusters can locally be seen as good Heisenberg systems and the agreement between TB and ASD is fairly good. That is not the case when the same set of magnetic moments connects different magnetic extrema, meaning that different local parametric representations have to be calculated, which breaks the global Heisenberg picture. For those systems, one cannot map the electronic structure globally onto a single Heisenberg model, although these parameters can still predict properties of their local ground states with good accuracy. We have illustrated this situation by studying the dependence of the total energy of an Fe dimer on the angle between the atomic magnetic moments, and shown that it cannot be mapped globally onto a bilinear Heisenberg Hamiltonian alone. In fact, a high-order expansion in powers of the angular directions between the atomic magnetic moments is mandatory to match the landscape of the TB energy adequately. Finally, the TBSD presented here is a satisfying solution, with a reasonable computational cost, to study the spin dynamics of systems that are not dominated by the pairwise Heisenberg interaction only, because the construction of the ab initio effective field is free from such a hypothesis. This technique may also serve to investigate the dynamics of more complex magnetic systems that include spin-orbit mediated interactions in low dimensional symmetries, and appears to be both versatile and general.
## Acknowledgments
We gratefully thank the Programme Transversal de Compétences for financial support through the project DYNAMOL.
## Data availability statement
The data that support the findings of this study are available from the
corresponding author, upon reasonable request.
## Appendix A Fixed spin moment
The fixed spin moment calculation is probably the most straightforward method, but it is limited to the case of collinear magnetism and is independent of the site index. The idea is to impose exactly the total magnetization of the system, and therefore the total numbers of $\uparrow$ and $\downarrow$ electrons. One then needs to define two separate Fermi levels $E_{F}^{\sigma}$. For a homogeneous system where each atom carries the same charge and the same magnetization, the total energy is
$E_{\text{tot}}=\sum_{\alpha}^{|\varepsilon_{\alpha}^{\uparrow}|<E_{F}^{\uparrow}}\varepsilon_{\alpha}^{\uparrow}+\sum_{\alpha}^{|\varepsilon_{\alpha}^{\downarrow}|<E_{F}^{\downarrow}}\varepsilon_{\alpha}^{\downarrow}+\frac{1}{4}Im^{2},$
(15)
where
$\varepsilon_{\alpha}^{\sigma}=\varepsilon_{\alpha}^{0}-\frac{1}{2}Im\sigma$.
Then the total energy can be rewritten as
$E_{\text{tot}}=\sum_{\alpha}^{|\varepsilon_{\alpha}^{0}|<E_{F}^{\uparrow}+\frac{1}{2}Im}\varepsilon_{\alpha}^{0}+\sum_{\alpha}^{|\varepsilon_{\alpha}^{0}|<E_{F}^{\downarrow}-\frac{1}{2}Im}\varepsilon_{\alpha}^{0}-\frac{1}{4}Im^{2}.$
(16)
Consequently, the derivative of the total energy with respect to the magnetization becomes simply proportional to the difference of the Fermi energies

$\frac{dE_{\text{tot}}}{dm}=\frac{(E_{F}^{\uparrow}-E_{F}^{\downarrow})}{2}.$ (17)

An effective field $B^{\text{eff}}=-(E_{F}^{\uparrow}-E_{F}^{\downarrow})/2$, aligned with these moments, can be defined. It turns out that at the extrema of $E_{\text{tot}}$ the two Fermi levels are equal and the effective field vanishes. By looking at the sign of the second derivative of the energy around $m=0$, it is simple to recover the Stoner criterion, as described in reference [72]. Although useful, the fixed spin moment method is limited to rather homogeneous systems, as illustrated by the sketch below.
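For a discrete toy spectrum, Eq. (17) can be evaluated in a few lines; the sketch below (all numbers illustrative) takes $E_{F}^{\sigma}$ as the highest occupied level of each rigidly shifted spin channel and scans the imposed moment for zeros of the effective field.

```python
import numpy as np

def fsm_effective_field(eps0, N_up, N_dn, I):
    """Fixed-spin-moment field of Eq. (17): fill each rigidly shifted
    spin channel with a prescribed electron count and compare Fermi levels."""
    m = N_up - N_dn                           # imposed magnetization
    eps_up = np.sort(eps0 - 0.5 * I * m)      # sigma = +1 channel
    eps_dn = np.sort(eps0 + 0.5 * I * m)      # sigma = -1 channel
    EF_up, EF_dn = eps_up[N_up - 1], eps_dn[N_dn - 1]
    return -(EF_up - EF_dn) / 2.0

eps0 = np.linspace(-2.0, 2.0, 20)             # toy non-magnetic spectrum
for N_up in range(5, 9):                      # 10 electrons, scan the moment
    B = fsm_effective_field(eps0, N_up, 10 - N_up, I=0.9)
    print(N_up - (10 - N_up), B)              # B_eff = 0 flags an energy extremum
```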
## Appendix B Penalty method for atomic spin moment
Let us consider the case where a given magnetization $\bm{m}_{i}^{\text{pen}}$ is imposed on each atom. A quadratic penalty term $E^{\text{pen}}_{i}=\frac{\lambda}{2}\left\lVert\bm{m}_{i}-\bm{m}_{i}^{\text{pen}}\right\rVert^{2}$ can be added to each site, where $\lambda$ is a large positive number. In principle $\lambda$ should go to infinity, but in practice a good compromise is found by increasing its value and checking the convergence of the quantity computed with it. Alternatively, this problem can be circumvented by implementing the Augmented Lagrangian Method, which introduces a quadratic constraint term in the renormalized Hamiltonian such that the $\lambda$ parameter remains finite [73]. This comes at the cost of additional computational complexity, and the penalization approach with a sufficiently large $\lambda$ is preferred here.
This consists in supplementing Eq. (6) with the term $\lambda({\bm{m}}_{i}-{\bm{m}_{i}^{\text{pen}}})|i\rangle\langle i|$. Consequently, the on-site diagonal renormalization term can formally be written $U_{i}\Delta n_{i}\sigma_{0}-(\bm{B}^{\text{Stoner}}_{i}+\bm{B}^{\text{pen}}_{i})\cdot\bm{\sigma}$ with $\bm{B}^{\text{Stoner}}_{i}=\frac{1}{2}I_{i}\bm{m}_{i}$, $\bm{B}^{\text{pen}}_{i}=-\lambda(\bm{m}_{i}-\bm{m}_{i}^{\text{pen}})$ and $\Delta n_{i}=(n_{i}-n_{i}^{0})$. The total energy should be corrected accordingly by the double counting terms and reads

$\displaystyle E_{\text{tot}}[\{\bm{m}_{i}^{\text{pen}}\}]$ $\displaystyle=\sum_{\alpha}^{\text{occ}}\varepsilon_{\alpha}-\frac{1}{2}\sum_{i}U_{i}((n_{i})^{2}-(n_{i}^{0})^{2})$ $\displaystyle+\frac{1}{4}\sum_{i}I_{i}\left\lVert\bm{m}_{i}\right\rVert^{2}-\frac{\lambda}{2}\sum_{i}(\left\lVert\bm{m}_{i}\right\rVert^{2}-\left\lVert\bm{m}_{i}^{\text{pen}}\right\rVert^{2}).$ (18)

In the limit $\lambda\to\infty$, $-\lambda(\bm{m}_{i}-\bm{m}_{i}^{\text{pen}})\approx\bm{B}^{{\text{pen}}\infty}_{i}$ and $\bm{m}_{i}\approx\bm{m}_{i}^{\text{pen}}$. Consequently, the corresponding double counting term $-\frac{\lambda}{2}(\left\lVert\bm{m}_{i}\right\rVert^{2}-\left\lVert\bm{m}_{i}^{\text{pen}}\right\rVert^{2})$ can be rewritten as $\bm{B}^{{\text{pen}}\infty}_{i}\cdot\bm{m}_{i}^{\text{pen}}$.
The fixed spin moment method can be seen as a special case of the penalty method applied to collinear magnetism with only one type of atom. The term $-B^{\text{pen}}\sigma$ in the renormalized Hamiltonian rigidly shifts the eigenvalues by $-B^{\text{pen}}$ for $\uparrow$ spins and $B^{\text{pen}}$ for $\downarrow$ spins, such that $\varepsilon_{\alpha}=\varepsilon_{\alpha}^{0}-\frac{1}{2}Im\sigma-B^{\text{pen}}\sigma$. The total energy of Eq. (16) is recovered once $E_{F}^{\sigma}=E_{F}+\sigma B^{\text{pen}}$ is used. One then gets $B^{\text{pen}}=\frac{1}{2}(E_{F}^{\uparrow}-E_{F}^{\downarrow})=-B^{\text{eff}}$.
## Appendix C Solution of the spin dynamics of ferromagnetic dimers
The motion of the individual moments of a ferromagnetic dimer with the Heisenberg interaction is a two-body problem admitting an exact solution. Let $\Omega_{s}^{0}\equiv J^{0}/\hbar$ be the magnitude of the exchange pulsation and $E=-J^{0}\bm{m}_{1}\cdot\bm{m}_{2}$ its interaction energy, with $J^{0}>0$. The motion of each undamped moment is the solution of a set of two coupled precession equations,

$\displaystyle\frac{d{\bm{m}}_{1}}{dt}$ $\displaystyle=\Omega_{s}^{0}\bm{m}_{2}\times{\bm{m}}_{1},$ (19)
$\displaystyle\frac{d{\bm{m}}_{2}}{dt}$ $\displaystyle=\Omega_{s}^{0}\bm{m}_{1}\times{\bm{m}}_{2},$

with given initial conditions ${\bm{m}}_{1}(0)$ and ${\bm{m}}_{2}(0)$. Equivalently, when using a Heisenberg Hamiltonian with normalized vectors, $E=-J\bm{e}_{1}\cdot\bm{e}_{2}$ with $J=J^{0}m^{2}$ (where $m$ is the amplitude of the magnetization), we get the coupled evolution equations:
$\displaystyle\frac{d{\bm{e}}_{1}}{dt}$
$\displaystyle=\Omega_{s}\bm{e}_{2}\times{\bm{e}}_{1},$ (20)
$\displaystyle\frac{d{\bm{e}}_{2}}{dt}$
$\displaystyle=\Omega_{s}\bm{e}_{1}\times{\bm{e}}_{2},$
with $\Omega_{s}\equiv J/\hbar$. This motion is decoupled in the frame of the total direction ${\bm{e}}\equiv{\bm{e}_{1}}+{\bm{e}_{2}}$. In this frame, combining Eqs. (20) gives $\frac{d{\bm{e}}}{dt}=\bm{0}$, so ${\bm{e}}$ is a constant vector set by the initial conditions, ${\bm{e}}={\bm{e}_{1}}(0)+{\bm{e}_{2}}(0)$. By noting that $\Omega_{s}\bm{e}_{2}\times{\bm{e}}_{1}=\Omega_{s}(\bm{e}_{1}+\bm{e}_{2})\times{\bm{e}}_{1}=\Omega_{s}\bm{e}\times{\bm{e}}_{1}$, Eqs. (20) become fully decoupled:
$\displaystyle\frac{d{\bm{e}}_{1}}{dt}$
$\displaystyle=\Omega_{s}{\bm{e}}\times{\bm{e}}_{1},$ (21)
$\displaystyle\frac{d{\bm{e}}_{2}}{dt}$
$\displaystyle=\Omega_{s}{\bm{e}}\times{\bm{e}}_{2}.$
Then the motion of each of these unit vectors ${\bm{e}}_{i}$ is simply the motion of a vector in a constant field. Its solution is given by Rodrigues' formula [74, 75]
$\displaystyle{\bm{e}}_{i}(t)$
$\displaystyle=\cos(\Omega_{s}t){\bm{e}}_{i}(0)+\sin(\Omega_{s}t)\bm{e}+(1-\cos(\Omega_{s}t))\chi_{i}{\bm{e}}_{i}(0)\times\bm{e},$
(22)
where $\chi_{i}\equiv\bm{e}_{i}(0)\cdot\bm{e}$.
The same reasoning applies to trimers of identical atoms with the same exchange parameter between all pairs of first neighbors. In that very specific case, each atomic spin follows the same precession equation, namely

$\frac{d{\bm{e}}_{i}}{dt}={\Omega}_{s}{\bm{e}}\times{\bm{e}}_{i},$ (23)

with ${\bm{e}}\equiv\sum_{i=1}^{3}{\bm{e}}_{i}(0)$, where ${\bm{e}}$ is found to be a constant of motion. Consequently, for trimers with identical atoms and interactions, the precession frequency, and thus the value of the exchange parameter, can be measured from the motion of any single spin, as depicted in Figs. 3 and 4.
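These conservation laws are easy to verify numerically. The sketch below integrates Eqs. (20) directly with a simple midpoint scheme (rather than using the closed-form solution (22)) and checks that $\bm{e}$ and the amplitudes are preserved.

```python
import numpy as np

Omega = 1.0                                   # exchange pulsation J/hbar (arbitrary units)
t10 = np.deg2rad(10.0)
e1 = np.array([-np.sin(t10), 0.0, np.cos(t10)])
e2 = np.array([ np.sin(t10), 0.0, np.cos(t10)])
e_tot0 = e1 + e2                              # the conserved vector e

dt, n_steps = 1e-3, 20000
for _ in range(n_steps):
    # midpoint step of de1/dt = Omega e2 x e1 and de2/dt = Omega e1 x e2
    e1m = e1 + 0.5 * dt * Omega * np.cross(e2, e1)
    e2m = e2 + 0.5 * dt * Omega * np.cross(e1, e2)
    e1, e2 = (e1 + dt * Omega * np.cross(e2m, e1m),
              e2 + dt * Omega * np.cross(e1m, e2m))

print(np.linalg.norm(e1 + e2 - e_tot0))       # ~0: e is a constant of motion
print(np.linalg.norm(e1), np.linalg.norm(e2)) # ~1: amplitudes preserved
```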
## Appendix D Calculation of the exchange coupling parameters
The macroscopic nature of the exchange coupling parameters and how they are influenced by various circumstances have been widely discussed in the literature. The Bethe-Slater (BS) curve [76, 77, 78] explains in an insightful way, in terms of direct exchange and the distance between nearest-neighbor (NN) atoms, the trends followed by the ferromagnetic (FM) and antiferromagnetic (AFM) ground states of the 3d transition metals from bcc Cr to hcp Co. Recent studies [79] have shown that, even for the bulk case of such elements, the BS curve hides a complicated background behind the macroscopic picture. The NN interactions depend not only on the distance but also on the symmetry of the bonds, i.e. they are influenced by the crystal field. That kind of dependence has also been seen in supported nanoclusters [80, 81, 82], where different values of the exchange coupling parameter can be found at the same distance. In small clusters, like the dimers and trimers studied here, the local density of states of each atom is very localized, which sets the majority band well apart from the minority band. This implies a large band splitting that directly affects the value of the exchange coupling parameter [83, 84]. As the coordination number increases, hybridization broadens these bands and shifts their centers closer to the Fermi energy, thus decreasing the value of the exchange coupling parameter [85, 86]. The results presented here follow this logic, as well as the BS curve trend.
For each magnetic configuration, the total energy is computed with the TB parameters of reference [24]. When only one rotating magnetic moment is considered, the total energy in the Heisenberg model can be written as a function of its angle with the $z$-axis, labelled $\theta$. For the dimer it reads

$E_{\text{dimer}}(\theta)-E_{\text{dimer}}(0)=J_{\text{dimer}}(1-\cos(\theta)),$ (24)

and for the trimer

$E_{\text{trimer}}(\theta)-E_{\text{trimer}}(0)=2J_{\text{trimer}}(1-\cos(\theta)).$ (25)
As seen in Fig. 6, Eqs. (24) and (25) can be fitted to the total energy computed in the TB approximation in order to find the respective exchange coupling parameters $J$. For the dimer, obviously $J_{12}=J_{21}\equiv J_{\text{dimer}}$, and for the trimer, because of the $C_{3}$ symmetry, $J_{12}=J_{23}=J_{31}\equiv J_{\text{trimer}}$ as well. The fact that the fit and the energy curve fall on top of each other means that both $J_{\text{dimer}}$ and $J_{\text{trimer}}$ are constant within the considered range of $\theta$, i.e. the electronic interaction in these systems is dominated by the pairwise Heisenberg interaction (12) in that range. The values computed for an equal interatomic distance $d=2$ Å are reported in Tables 1 and 2.
Finally, another strategy has been tested to evaluate the exchange parameters. Instead of taking the total energy variation $E(\theta)$ as the reference quantity, we have fitted the variation of the effective field ${\bm{B}}^{\text{pen}}$ as a function of the deviation angle $\theta$. Indeed, it is straightforward to show that $\left\lVert\bm{B}^{\text{pen}}\right\rVert\left\lVert\bm{m}\right\rVert$ equals $J\sin\theta$ for the dimer and $2J\sin\theta$ for the trimer. The results are reported in parentheses in Tables 1 and 2. The agreement between the two approaches is good and could be systematically improved by increasing the penalization constant $\lambda$.
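A least-squares version of both fits is sketched below on synthetic stand-ins for the TB data of Fig. 6, using the Fe dimer value $J=0.616$ eV of Table 1 to generate the curves; for a trimer the model functions become $2J(1-\cos\theta)$ and $2J\sin\theta$.

```python
import numpy as np
from scipy.optimize import curve_fit

theta = np.deg2rad(np.arange(0.0, 11.0, 1.0))  # small-angle window of the text
J_true = 0.616                                  # eV, Fe dimer of Table 1
E = J_true * (1.0 - np.cos(theta))              # Eq. (24)
Bm = J_true * np.sin(theta)                     # |B_pen| |m| = J sin(theta)

J_from_E, _ = curve_fit(lambda t, J: J * (1.0 - np.cos(t)), theta, E)
J_from_B, _ = curve_fit(lambda t, J: J * np.sin(t), theta, Bm)
print(J_from_E[0], J_from_B[0])                 # both recover J = 0.616 eV
```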
Table 1: Values of the computed SCF magnetization and exchange parameter for dimers (interatomic distance of 2 Å) calculated in the TB approximation. In parentheses is shown the result obtained from the fit of the effective field.

 | g ($\mu_{B}$) | $J_{\text{dimer}}$ (eV)
---|---|---
Fe | 3 | 0.616 (0.605)
Co | 2 | 0.574 (0.561)
Ni | 1 | 0.341 (0.312)

Table 2: Values of the computed SCF magnetization and exchange parameter for equilateral triangle trimers calculated in the TB approximation. In parentheses is shown the result obtained from the fit of the effective field.

 | g ($\mu_{B}$) | $J_{\text{trimer}}$ (eV)
---|---|---
Fe | 2.6666 | 0.442 (0.463)
Co | 1.6666 | 0.279 (0.273)
Ni | 0.6666 | 0.089 (0.103)
Figure 6: (color online) Calculated total energy (left) and effective field (right) from the tight-binding model for dimers and trimers of Fe, Co and Ni (interatomic distance of 2 Å), in black, red and green, respectively. Dimers on top and trimers on the bottom. The corresponding fits from a Heisenberg Hamiltonian fall exactly on top of the tight-binding curves (and are not shown), suggesting that the Heisenberg pair interaction dominates these systems within that range of $\theta$.
## References
* Lee _et al._ [2010] J. H. Lee, L. Fang, E. Vlahos, X. Ke, Y. W. Jung, L. F. Kourkoutis, J.-W. Kim, P. J. Ryan, T. Heeg, M. Roeckerath, V. Goian, M. Bernhagen, R. Uecker, P. C. Hammel, K. M. Rabe, S. Kamba, J. Schubert, J. W. Freeland, D. A. Muller, C. J. Fennie, P. Schiffer, V. Gopalan, E. Johnston-Halperin, and D. G. Schlom, Nature 466, 954 (2010).
* Grimvall _et al._ [2012] G. Grimvall, B. Magyari-Köpe, V. Ozoliņš, and K. A. Persson, Rev. Mod. Phys. 84, 945 (2012).
* Antonangeli _et al._ [2008] D. Antonangeli, L. R. Benedetti, D. L. Farber, G. Steinle–Neumann, A.-l. Auzende, J. Badro, M. Hanfland, and M. Krisch, Appl. Phys. Lett. 92, 111911 (2008).
* Černý [2007] M. Černý, Materials Science and Engineering: A 462, 432 (2007).
* Söderlind _et al._ [1994] P. Söderlind, R. Ahuja, O. Eriksson, J. M. Wills, and B. Johansson, Phys. Rev. B 50, 5918 (1994).
* Hsueh _et al._ [2002] H. C. Hsueh, J. Crain, G. Y. Guo, H. Y. Chen, C. C. Lee, K. P. Chang, and H. L. Shih, Phys. Rev. B 66, 052420 (2002).
* Mathon _et al._ [2004] O. Mathon, F. Baudelet, J. P. Itié, A. Polian, M. d’Astuto, J. C. Chervin, and S. Pascarelli, Phys. Rev. Lett. 93, 255503 (2004).
* Monza _et al._ [2011] A. Monza, A. Meffre, F. Baudelet, J.-P. Rueff, M. d’Astuto, P. Munsch, S. Huotari, S. Lachaize, B. Chaudret, and A. Shukla, Phys. Rev. Lett. 106, 247201 (2011).
* Halilov _et al._ [1998] S. V. Halilov, H. Eschrig, A. Y. Perlov, and P. M. Oppeneer, Phys. Rev. B 58, 293 (1998).
* Ma _et al._ [2017] P.-W. Ma, S. L. Dudarev, and J. S. Wróbel, Phys. Rev. B 96, 094418 (2017).
* Tranchida _et al._ [2018] J. Tranchida, S. J. Plimpton, P. Thibaudeau, and A. P. Thompson, J. Comp. Phys. 372, 406 (2018).
* Antropov _et al._ [1995] V. P. Antropov, M. I. Katsnelson, M. van Schilfgaarde, and B. N. Harmon, Phys. Rev. Lett. 75, 729 (1995).
* Antropov _et al._ [1996] V. P. Antropov, M. I. Katsnelson, B. N. Harmon, M. van Schilfgaarde, and D. Kusnezov, Phys. Rev. B 54, 1019 (1996).
* Combes _et al._ [1981] J. M. Combes, P. Duclos, and R. Seiler, in _Rigorous Atomic and Molecular Physics_, edited by G. Velo and A. S. Wightman (Springer US, Boston, MA, 1981) pp. 185–213.
* Eriksson _et al._ [2017] O. Eriksson, A. Bergman, L. Bergqvist, and J. Hellsvik, _Atomistic Spin Dynamics: Foundations and Applications_, 1st ed. (Oxford University Press, Oxford, 2017).
* Moriya [2014] T. Moriya, _Spin Fluctuations in Itinerant Electron Magnetism_ (Springer Berlin, Berlin, 2014).
* Melnikov and Reser [2018] N. B. Melnikov and B. I. Reser, _Dynamic Spin-Fluctuation Theory of Metallic Magnetism_ (Springer Nature, Cham, 2018).
* Zinn-Justin [2002] J. Zinn-Justin, _Quantum Field Theory and Critical Phenomena_ , 4th ed., International Series of Monographs on Physics No. 113 (Clarendon Press ; Oxford University Press, Oxford : New York, 2002).
* Stocks _et al._ [1998] G. M. Stocks, B. Ujfalussy, X. Wang, D. M. C. Nicholson, W. A. Shelton, Y. Wang, A. Canning, and B. L. Györffy, Philosophical Magazine B 78, 665 (1998).
* Újfalussy _et al._ [1999] B. Újfalussy, X.-D. Wang, D. M. C. Nicholson, W. A. Shelton, G. M. Stocks, Y. Wang, and B. L. Gyorffy, Journal of Applied Physics 85, 4824 (1999).
* Gebauer and Baroni [2000] R. Gebauer and S. Baroni, Phys. Rev. B 61, R6459 (2000).
* Fähnle _et al._ [2005] M. Fähnle, R. Drautz, R. Singer, D. Steiauf, and D. V. Berkov, Computational Materials Science 32, 118 (2005).
* Újfalussy _et al._ [2004] B. Újfalussy, B. Lazarovits, L. Szunyogh, G. M. Stocks, and P. Weinberger, Phys. Rev. B 70, 100404 (2004).
* Barreteau _et al._ [2016] C. Barreteau, D. Spanjaard, and M.-C. Desjonquères, Comptes Rendus Physique 17, 406 (2016).
* Autès _et al._ [2006] G. Autès, C. Barreteau, D. Spanjaard, and M.-C. Desjonquères, J. Phys.: Condens. Matter 18, 6785 (2006).
* Barreteau and Spanjaard [2012] C. Barreteau and D. Spanjaard, J. Phys.: Condens. Matter 24, 406004 (2012).
* Le Laurent _et al._ [2019] L. Le Laurent, C. Barreteau, and T. Markussen, Phys. Rev. B 100, 174426 (2019).
* Soulairol _et al._ [2010] R. Soulairol, C.-C. Fu, and C. Barreteau, J. Phys.: Condens. Matter 22, 295502 (2010).
* Soulairol _et al._ [2016] R. Soulairol, C. Barreteau, and C.-C. Fu, Phys. Rev. B 94, 024427 (2016).
* Finnis [2003] M. Finnis, _Interatomic Forces in Condensed Matter_ , Oxford Series on Materials Modelling No. 1 (Oxford Univ. Press, Oxford, 2003).
* Fletcher [2008] R. Fletcher, _Practical Methods of Optimization_ , 2nd ed., A Wiley-Interscience Publication (Wiley, Chichester, 2008).
* Dederichs _et al._ [1984] P. H. Dederichs, S. Blügel, R. Zeller, and H. Akai, Phys. Rev. Lett. 53, 2512 (1984).
* Ma and Dudarev [2015] P.-W. Ma and S. L. Dudarev, Phys. Rev. B 91, 054420 (2015).
* Kaduk _et al._ [2012] B. Kaduk, T. Kowalczyk, and T. Van Voorhis, Chem. Rev. 112, 321 (2012).
* Small and Heine [1984] L. M. Small and V. Heine, J. Phys. F: Met. Phys. 14, 3041 (1984).
* Phillips and Peralta [2013] J. J. Phillips and J. E. Peralta, J. Chem. Phys. 138, 174115 (2013).
* Vaclavkova _et al._ [2020] D. Vaclavkova, A. Delhomme, C. Faugeras, M. Potemski, A. Bogucki, J. Suffczyński, P. Kossacki, A. R. Wildes, B. Grémaud, and A. Saúl, 2D Mater. 7, 035030 (2020).
* Sandratskii [1986] L. M. Sandratskii, Physica Status Solidi (b) 136, 167 (1986).
* Sandratskii [1998] L. M. Sandratskii, Advances in Physics 47, 91 (1998).
* Grotheer _et al._ [2000] O. Grotheer, C. Ederer, and M. Fähnle, Phys. Rev. B 62, 5601 (2000).
* Grotheer _et al._ [2001] O. Grotheer, C. Ederer, and M. Fähnle, Phys. Rev. B 63, 100401 (2001).
* Rosengaard and Johansson [1997] N. M. Rosengaard and B. Johansson, Phys. Rev. B 55, 14975 (1997).
* Brinker _et al._ [2019] S. Brinker, M. d. S. Dias, and S. Lounis, New J. Phys. 21, 083015 (2019).
* Liechtenstein _et al._ [1984] A. I. Liechtenstein, M. I. Katsnelson, and V. A. Gubanov, J. Phys. F: Met. Phys. 14, L125 (1984).
* Liechtenstein _et al._ [1987] A. I. Liechtenstein, M. I. Katsnelson, V. P. Antropov, and V. A. Gubanov, Journal of Magnetism and Magnetic Materials 67, 65 (1987).
* Pajda _et al._ [2001] M. Pajda, J. Kudrnovský, I. Turek, V. Drchal, and P. Bruno, Phys. Rev. B 64, 174402 (2001).
* Bargmann _et al._ [1959] V. Bargmann, L. Michel, and V. L. Telegdi, Phys. Rev. Lett. 2, 435 (1959).
* Bessarab _et al._ [2015] P. F. Bessarab, V. M. Uzdin, and H. Jónsson, Computer Physics Communications 196, 335 (2015).
* Ivanov _et al._ [2020] A. V. Ivanov, D. Dagbartsson, J. Tranchida, V. M. Uzdin, and H. Jónsson, J. Phys. : Condens. Matter 32, 345901 (2020).
* Beaujouan _et al._ [2012] D. Beaujouan, P. Thibaudeau, and C. Barreteau, Phys. Rev. B 86, 174409 (2012).
* Streib _et al._ [2020] S. Streib, V. Borisov, M. Pereiro, A. Bergman, E. Sjöqvist, A. Delin, O. Eriksson, and D. Thonig, Phys. Rev. B 102, 214407 (2020).
* Mentrup _et al._ [1999] D. Mentrup, J. Schnack, and M. Luban, Physica A: Statistical Mechanics and its Applications 272, 153 (1999).
* Efremov and Klemm [2002] D. V. Efremov and R. A. Klemm, Phys. Rev. B 66, 174427 (2002).
* Kolezhuk _et al._ [2004] A. K. Kolezhuk, V. N. Glazkov, H. Tanaka, and A. Oosawa, Phys. Rev. B 70, 020403 (2004).
* Cabot _et al._ [2019] A. Cabot, G. L. Giorgi, F. Galve, and R. Zambrini, Phys. Rev. Lett. 123, 023604 (2019).
* Shallcross _et al._ [2005] S. Shallcross, A. E. Kissavos, V. Meded, and A. V. Ruban, Phys. Rev. B 72, 104437 (2005).
* Ruban _et al._ [2007] A. V. Ruban, S. Khmelevskyi, P. Mohn, and B. Johansson, Phys. Rev. B 75, 054402 (2007).
* Mühlbauer _et al._ [2009] S. Mühlbauer, B. Binz, F. Jonietz, C. Pfleiderer, A. Rosch, A. Neubauer, R. Georgii, and P. Böni, Science 323, 915 (2009).
* Heinze _et al._ [2011] S. Heinze, K. von Bergmann, M. Menzel, J. Brede, A. Kubetzka, R. Wiesendanger, G. Bihlmayer, and S. Blügel, Nature Physics 7, 713 (2011).
* Chao _et al._ [1977] K. A. Chao, J. Spalek, and A. M. Oles, J. Phys. C: Solid State Phys. 10, L271 (1977).
* Chao _et al._ [1978] K. A. Chao, J. Spałek, and A. M. Oleś, Phys. Rev. B 18, 3453 (1978).
* Kvashnin _et al._ [2016] Y. O. Kvashnin, R. Cardias, A. Szilva, I. Di Marco, M. I. Katsnelson, A. I. Lichtenstein, L. Nordström, A. B. Klautau, and O. Eriksson, Phys. Rev. Lett. 116, 217202 (2016).
* Mankovsky _et al._ [2020] S. Mankovsky, S. Polesya, and H. Ebert, Phys. Rev. B 101, 174401 (2020).
* Adler and Oitmaa [1979] J. Adler and J. Oitmaa, J. Phys. C: Solid State Phys. 12, 575 (1979).
* Szilva _et al._ [2013] A. Szilva, M. Costa, A. Bergman, L. Szunyogh, L. Nordström, and O. Eriksson, Phys. Rev. Lett. 111, 127204 (2013).
* Streib _et al._ [2021] S. Streib, A. Szilva, V. Borisov, M. Pereiro, A. Bergman, E. Sjöqvist, A. Delin, M. I. Katsnelson, O. Eriksson, and D. Thonig, arXiv:2103.04726 [cond-mat] (2021).
* Dias _et al._ [2021] M. d. S. Dias, S. Brinker, A. Lászlóffy, B. Nyári, S. Blügel, L. Szunyogh, and S. Lounis, arXiv:2101.00463 [cond-mat] (2021).
* Drautz and Fähnle [2004] R. Drautz and M. Fähnle, Phys. Rev. B 69, 104404 (2004).
* Szunyogh _et al._ [2011] L. Szunyogh, L. Udvardi, J. Jackson, U. Nowak, and R. Chantrell, Phys. Rev. B 83, 024401 (2011).
* Costa _et al._ [2005] A. T. Costa, R. B. Muniz, and D. L. Mills, Phys. Rev. Lett. 94, 137203 (2005).
* Holstein and Primakoff [1940] T. Holstein and H. Primakoff, Phys. Rev. 58, 1098 (1940).
* Blundell [2014] S. Blundell, _Magnetism in Condensed Matter_ , reprint ed., Oxford Master Series in Condensed Matter Physics No. 4 (Oxford Univ. Press, Oxford, 2014).
* Li _et al._ [2013] C. Li, W. Yin, H. Jiang, and Y. Zhang, Comput Optim Appl 56, 507 (2013).
* Rodrigues [1840] O. Rodrigues, J. Math. Pures Appl. 5, 380 (1840).
* Thibaudeau and Beaujouan [2011] P. Thibaudeau and D. Beaujouan, Physica A 391, 1963 (2011).
* Slater [1930a] J. C. Slater, Phys. Rev. 36, 57 (1930a).
* Slater [1930b] J. C. Slater, Phys. Rev. 35, 509 (1930b).
* Chikazumi [1997] S. Chikazumi, _Physics of Ferromagnetism_ , International Series of Monographs on Physics (Oxford Science Publications, 1997).
* Cardias _et al._ [2017] R. Cardias, A. Szilva, A. Bergman, I. D. Marco, M. I. Katsnelson, A. I. Lichtenstein, L. Nordström, A. B. Klautau, O. Eriksson, and Y. O. Kvashnin, Scientific Reports 7, 4058 (2017).
* Mavropoulos _et al._ [2010] P. Mavropoulos, S. Lounis, and S. Blügel, physica status solidi (b) 247, 1187 (2010).
* Rodrigues _et al._ [2016] D. C. d. M. Rodrigues, M. Pereiro, A. Bergman, O. Eriksson, and A. B. Klautau, J. Phys.: Condens. Matter 29, 025807 (2016).
* Belabbes _et al._ [2016] A. Belabbes, G. Bihlmayer, F. Bechstedt, S. Blügel, and A. Manchon, Phys. Rev. Lett. 117, 247202 (2016).
* Bergman _et al._ [2006] A. Bergman, L. Nordström, A. Burlamaqui Klautau, S. Frota-Pessôa, and O. Eriksson, Phys. Rev. B 73, 174434 (2006).
* Frota-Pessôa _et al._ [2000] S. Frota-Pessôa, R. B. Muniz, and J. Kudrnovský, Phys. Rev. B 62, 5293 (2000).
* Bezerra-Neto _et al._ [2013] M. M. Bezerra-Neto, M. S. Ribeiro, B. Sanyal, A. Bergman, R. B. Muniz, O. Eriksson, and A. B. Klautau, Scientific Reports 3, 3054 (2013).
* Igarashi _et al._ [2014] R. N. Igarashi, M. M. B. Neto, L. T. F. Eleno, A. Bergman, A. B. Klautau, O. Eriksson, and H. M. Petrilli, J. Phys.: Condens. Matter 26, 206003 (2014).
# From Order to Disorder of Alkanethiol SAMs on Complex Au (211), (221) and
(311) Surfaces: Impact of the Substrate
Dimitrios Stefanakis, Department of Materials Science & Technology, University of Crete, Vassilika Voutes, 700 13 Heraklion, GREECE <EMAIL_ADDRESS>
Vagelis Harmandaris, Department of Mathematics & Applied Mathematics, University of Crete, Vassilika Voutes, 700 13 Heraklion, GREECE; Institute of Applied & Computational Mathematics, Foundation for Research and Technology-Hellas, 711 10 Heraklion, GREECE; Computation-Based Science and Technology Research Center, The Cyprus Institute, Nicosia 2121, CYPRUS <EMAIL_ADDRESS>
Georgios Kopidakis, Department of Materials Science & Technology, University of Crete, Vassilika Voutes, 700 13 Heraklion, GREECE; Institute of Electronic Structure and Laser, Foundation for Research and Technology-Hellas, 711 10 Heraklion, GREECE <EMAIL_ADDRESS>
Ioannis Remediakis, Department of Materials Science & Technology, University of Crete, Vassilika Voutes, 700 13 Heraklion, GREECE; Institute of Electronic Structure and Laser, Foundation for Research and Technology-Hellas, 711 10 Heraklion, GREECE <EMAIL_ADDRESS>
###### Abstract
We investigate the impact of the substrate on the structural properties and
the morphology of alkanethiol self-assembled monolayers (SAMs) on gold, using
first principles calculations and atomistic molecular dynamics simulations. We
consider hexadecanethiols on Au(211), Au(221) and Au(311) surfaces which
contain few-atom wide terraces separated by monoatomic steps similar to the
complex Au surfaces used in experiments. The structure of the SAMs is probed
via several structural properties including tilt angles, mean C atom heights
from the surface, precession angles, gauche defects, gyration tensors and
relative shape anisotropy. Comparing these properties to those of the well-
studied SAMs on Au(111), we observe similarities but also striking
differences. A clear order to disorder transition is observed by changing the
substrate: well-ordered SAMs on (111) and (211) surfaces become mixed ordered-
disordered structures on (311) and fully disordered on (221). The presence of
steps on the Au surfaces also results in preferential tilt orientations with
long-range order. Our results show that in addition to the expected grafting
density dependence, the transition from order to disorder crucially depends on
substrate morphology. The onset of ordering behavior is related to the atomic
structure of the surface. The key parameter that affects long-range order is
the energy cost for changing the Au-S-$\mathrm{C^{(1)}-C^{(2)}}$ dihedral angle of the adsorbed alkanethiol.
Self Assembled Monolayer, Density Functional Theory, Molecular Dynamics,
United Atom Model, Alkanethiols, Complex Au Surfaces, Au(111), Au(211),
Au(221), Au(311), Atomistic Force Field, Atomistic Simulations
## I Introduction
Self-assembled monolayers (SAMs) are systems where (relatively small)
molecules adsorbed on surfaces self-organize into, more or less, large ordered
domains. For the self-assembly process both the molecule/surface and
intermolecular interactions play an important role. Typically, SAMs can be easily formed by spontaneous adsorption from the gas or liquid phase, and this formation process is guided by a covalent linker; for clean metal substrates, S is the most preferable one[1]. Alkanethiol monolayers on noble metal
substrates (particularly Au or Ag) are the most common SAMs because of their continuously growing use in promising applications: molecular biology, surface and materials science, inorganic chemistry, drug delivery and medical therapy[2, 3], surface functionalization[4], catalysis and nanotechnology are some of the scientific fields that can benefit from these systems.
Due to the above reasons, several groups have studied SAMs on planar Au(111) surfaces experimentally[5, 6, 7, 8, 1, 9] or theoretically[10, 11, 12, 13, 14, 15]. On the other hand, very few studies have dealt with the more interesting high-index surfaces[16, 17]. Surfaces with Miller indices higher than one possess a periodic arrangement of terraces separated by infinitely long steps, which are sometimes joined in kinks. For example, the (211) surface of a face-centered cubic (fcc) metal, such as gold, consists of three-atom-wide close-packed terraces and monoatomic steps. While atoms on the terraces have a similar atomic environment to atoms on the flat (111) surface, step-edge atoms offer much stronger binding sites for alkanethiols[18]. As a result, the overall properties of a SAM on Au(211) might be very different from those of a SAM on Au(111). Such a detailed study of the effects of surface structure on the properties of SAMs is missing. Moreover, a theoretical investigation of such systems is of great importance because these complex surfaces are also closely related to the surfaces of gold nanoparticles.
In the present study, we use a detailed and accurate atomistic classical force field whose interaction parameters mainly come from previous works. Some parts of the potential are re-parameterized using first-principles calculations based on Density-Functional Theory (DFT). In particular, we derive a new interaction potential term for the $\mathrm{Au-S-C^{(1)}-C^{(2)}}$ dihedral angle, where Au is the surface Au atom nearest to the S, and $\mathrm{C^{(1)}}$, $\mathrm{C^{(2)}}$ are the first and second C atoms in the chain. This was necessary due to the lack of previous parameterizations for these complex surfaces. Then, we perform long classical MD simulations with the complete force field to predict the structural properties of hexadecanethiols adsorbed on various complex Au surfaces. More specifically, we simulate SAMs on four different Au surfaces: (111), (211), (221) and (311). SAMs on Au(111) can be used as a reference system for our own simulations, as they are extensively documented in the literature.
The Au(211) surface has been selected because it has been shown to almost totally dominate thiolate-protected gold nanoparticles with diameters between ~5 and ~34 nm at the thermodynamic limit[18]. The Au(311) surface has been selected because it supports good coverage of fluorophore-labeled DNA and alkylthiol SAMs on single-crystal Au bead electrodes[16], which is relevant for biosensor construction. Moreover, this system has been studied, among others, by binding COOH-terminated alkanethiol molecules to AuNP surfaces, which is useful for the preparation of probe biomolecules for further biochip studies[17]. Finally, the Au(221) surface was selected because it has a different step type, (111)/(111), compared to the (111)/(100) steps of (311) and (211). For all three complex surfaces, S atoms bind to Au atoms on the step edge without sharing of Au atoms. The distance between adjacent S atoms is therefore $2d$, where $d$ is the Au-Au distance in bulk Au, and the two lattice vectors are orthogonal, being parallel and perpendicular to the step edge.
In addition to stepped surfaces, we perform calculations for ideally flat
close-packed (111) with similar SAM-surface interaction as the stepped
surfaces. Several structures exist for SAMs on Au(111), the most studied one
being the $(\sqrt{3}\times\sqrt{3})R30^{\circ}$ with or without $(4\times 2)$
superstructure[1, 15]. In its simplest form, the unit cell contains 3 surface
Au atoms; S atoms of the SAM form a hexagonal lattice with S-S distance equal
to $\sqrt{3}d$ and SAM grafting density $\frac{1}{3d^{2}}$ where $d$ is the
Au-Au distance in bulk Au[10]. Another known structure is the
$(2\times\sqrt{3})rect$ structure which has 4 surface Au atoms per unit cell;
S atoms of the SAM form an orthogonal lattice with S-S distances equal to
$\sqrt{3}d$ and $2d$ with SAM grafting density $\frac{1}{2\sqrt{3}d^{2}}$[18,
5, 6]. The differences in symmetry and grafting density result in different
SAM properties. For example, the tilt angle is higher for the lower density
SAM[5, 6]. A detailed study of SAMs on Au(111) can be found in several works in the literature and is beyond the scope of the present work. We use the less-common $(2\times\sqrt{3})rect$ structure for (111) in order to have a direct comparison between stepped and flat surfaces. In all four Au structures we consider, the S atoms of the SAM form orthogonal lattices with the same S-S distance, and thus the same linear grafting density along the row of Au atoms that lies directly below the S atoms.
For all of the above systems, we calculate a variety of structural parameters that characterize them. These include the tilt angle ($\theta_{m}$), the mean height of each C atom above the slab according to its ranking along the chain ($h$), the monolayer thickness ($z_{tail}$), the precession angle of the chain ($\chi$) and the percentage of gauche defects along the alkane chain. The definitions of these quantities are shown schematically in the model of an alkanethiol on a gold surface in Figure 1. The presence of gauche defects (cis- instead of trans- conformations, or vice versa) is quite common for these molecules and will also be studied in detail here.
SAMs on planar surfaces are well known to bind through defects, such as adatoms, vacancies and islands. When adatoms are present on a flat gold surface, S is found to bind to a bridge site between the adatom and a surface-layer atom[19]. A similar local atomic arrangement is observed for thiol adsorption on Au clusters, where again S binds to two under-coordinated Au atoms[20, 21]. Instead of introducing defects on a perfect close-packed surface, we consider ideal surfaces with a periodic arrangement of steps. Au atoms along the step edge are under-coordinated, having between five and seven neighbors, while atoms on Au(111) have nine neighbors. In addition, the atoms right next to these under-coordinated atoms have nine neighbors, as they belong to (111) terraces. Therefore, the structures we consider have similar qualitative features to flat surfaces with adatoms while at the same time having perfect periodicity. These high-index surfaces contain a periodic arrangement of steps with various orientations and concentrations, and serve as model structures for defective planar surfaces.
Figure 1: Some structural properties of the studied systems: tilt angle
($\theta_{m}$), C atom distance from the slab ($z_{tail}$), the precession
angle ($\chi$) and the torsion angle of the alkane chain ($\phi_{t}$).
The tilt angle, defined as the angle between the backbone of an alkanethiol and the normal to the substrate ($\theta_{m}$ in Figure 1), is a well-studied property of various alkanethiol systems ($\mathrm{RS(CH_{2})_{n}CH_{3}}$) on metal substrates, both theoretically[14, 13, 15, 22, 8, 12, 20] and experimentally[5, 6]. Most of these studies refer to close-packed arrangements of molecules with the $(\sqrt{3}\times\sqrt{3})R30^{\circ}$ hexagonal periodicity relative to the Au substrate, or to a secondary $c(4\times 2)$ superstructure on it; the tilt angle in such arrangements varies between ~$30^{\circ}$ and ~$35^{\circ}$ at room temperature for various values of $n$. However, a few cases with the less dense $(2\times\sqrt{3})rect$ arrangement have been observed experimentally[5, 6] as well as theoretically[11], where the tilt angle differs considerably from the above, with values around $50^{\circ}$.
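For concreteness, a minimal sketch of how such a tilt angle can be measured from simulation coordinates is given below (Python; we assume the surface normal is the z-axis and approximate the backbone by the S-to-terminal-C end-to-end vector, which is our own simplification rather than a procedure stated in the text):

```python
import numpy as np

def tilt_angle_deg(chain_xyz):
    """Tilt angle (deg) between the chain backbone and the surface
    normal, taken here as the z-axis.  chain_xyz: (N, 3) array of
    S and C positions ordered from the S head to the terminal CH3."""
    v = chain_xyz[-1] - chain_xyz[0]            # end-to-end vector
    return np.degrees(np.arccos(v[2] / np.linalg.norm(v)))
```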
Such formations are observed experimentally as metastable states that quickly transform into $(\sqrt{3}\times\sqrt{3})R30^{\circ}$ arrangements after some disturbance, while in theoretical studies where the distance between the alkanethiol chains was kept fixed, the tilt angle remained unchanged at ~$50^{\circ}$. With more C atoms in the chain, this flexible structure appears to become more stable and can be observed even in simulations where the distances between chains are flexible[11].
We should also note a well-studied structural characteristic of SAMs, the so-called "odd-even" effect[23, 24, 25, 26]. According to this, the properties of SAM structures alternate depending on whether the number of C atoms in the alkane chain is odd or even, as observed for chain lengths between 2 and 18. In particular, for SAMs on Au(111), one important property affected by this effect is the tilt angle of the alkane chain, which tends to be larger for odd numbers of C atoms in the chain than for even numbers. Here we consider SAMs with a constant alkane length of 16 C atoms.
The binding site of S on a flat Au surface can be a bridge, hollow or on-top site, depending on the alkanethiol length and on surface defects[27]. DFT calculations for the same surfaces used in the present study show a very strong preference for adsorption on the bridge site. For example, on the (211) surface, adsorption on the bridge site has an adsorption energy more than 0.5 eV per molecule lower than on the top site[18].
## II Model and Simulation Methodology
### II.1 Sample preparation and construction
Unit cell generation: The construction of our samples was based on the results of Barmparis et al.[18] In that work, the authors considered methanethiolates ($\mathrm{RS-}\ \textrm{with}\ \mathrm{R=CH_{3}}$) adsorbed on various Au($hkl$) surfaces. Using Density Functional Theory (DFT) simulations, they examined every possible adsorption geometry on all Au($hkl$) with indices $h,k,l<4$. Gold surfaces were modeled using slabs of Au with periodic boundary conditions in the directions along the surface plane. We use the minimum-energy structure of methanethiolate for each of the four Au surfaces considered in the present study. Starting from this minimum adsorption energy state of each surface, we developed new structures in sp3 geometry via the following geometric procedure:
1. Substitution of one H atom of the methyl group with one methylene group ($\mathrm{-CH_{2}-}$).
2. On the free bond of this methylene, another methylene was added.
3. The above step was repeated until the desired number of C atoms in the chain was reached. The last added group was a methyl group.
In this way, we constructed the alkanethiol chains we needed, consisting of sixteen C atoms (C16); a geometric sketch of the backbone construction is given below. Bond distances and angles were initially fixed to values known from the literature (C-C bond distance of 1.54 Å; H-C-H and C-C-C angles of $\mathrm{109.47^{o}}$), while the distances between C and H atoms are given by the initial sample. The thickness of the Au slab was at least 8 Å, while the lattice constant for all surfaces was 4.22 Å. This value, which is close to the experimental one (4.08 Å), was used by Barmparis et al.[18] and is preserved in our calculations for compatibility reasons.
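The sketch below illustrates the all-trans geometry used in this construction (Python; only the C backbone is generated with the literature values quoted above, while the H atoms and the anchoring to the relaxed S position are omitted):

```python
import numpy as np

L_CC = 1.54                                  # C-C bond length (Angstrom)
HALF = (np.pi - np.radians(109.47)) / 2.0    # bond angle to the chain axis

def all_trans_backbone(n_carbons):
    """Ideal all-trans C backbone grown along z, zig-zagging in the
    x-z plane; consecutive bonds enclose the tetrahedral angle."""
    dz, dx = L_CC * np.cos(HALF), L_CC * np.sin(HALF)
    pos = [np.zeros(3)]
    for i in range(n_carbons - 1):
        step = np.array([dx if i % 2 == 0 else -dx, 0.0, dz])
        pos.append(pos[-1] + step)
    return np.array(pos)

c16 = all_trans_backbone(16)                 # the C16 backbone used here
```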
Characteristics of the various surfaces are summarized in Table 1 and are
demonstrated in Figure 2. In this figure, we used different shades of gold to
show the distance of Au atoms from the surface. Step-edge atoms, shown with
darkest color in Figure 2, are the ones that are bonded to S atoms of the
alkanethiol. Each S atom is bonded to two step-edge Au atoms, and each step-
edge Au atom is bonded to one S atom. The position of S in the middle of the
bridge site is dictated by the DFT calculations[18] which were used as a
starting point for the present work. In that work, several initial positions
of S were considered, and each structure was fully relaxed to find the lowest-
energy configuration. The middle of the bridge site was the preferred adsorption geometry for S on all three stepped surfaces considered here. The main
features of the different Au surfaces modeled in this work are discussed
below.
Figure 2: Geometrical characteristics for the (111), (211), (221) and (311)
surfaces and the S positions on them: (a) Side view, (b) Top view. The color
shade of the Au atoms indicates proximity to the surface, the darkest ones
being those of the outermost layers.
Table 1: Characteristics of the various surfaces
| Surfaces
---|---
| Au(111) | Au(211) | Au(221) | Au(311)
Surface dimensions of a single cell (nm2) | 0.597 $\times$ 0.517 | 0.597 $\times$ 0.731 | 0.597 $\times$ 0.895 | 0.597 $\times$ 0.990
Surface area of a single cell (nm2) | 0.309 | 0.436 | 0.534 | 0.591
Grafting density (nm-2) | 3.24 | 2.29 | 1.87 | 1.69
Total surface dimensions (nm) | 17.9$\times$15.5 | 17.9$\times$21.9 | 17.9$\times$26.9 | 17.9$\times$29.7
Total slab surface (nm2) | 280.80 | 394.20 | 486.00 | 534.60
Number of Au atoms | 25200 | 19800 | 30600 | 43200
Microfacet notation | … | 3(111)$\times$(100) | 4(111)$\times$(111) | 2(111)$\times$(100)
For the stepped surfaces we consider, the grafting density is dictated by the DFT simulations, which show a strong preference for step-edge binding of S. The grafting density is $\frac{1}{ndL}$, where $d$ is the distance between neighboring Au atoms along the step and $L$ the distance between steps. In our simulations, we place one S atom on every second Au atom along the step, therefore $n=2$. As shown in Table 1, the grafting densities we consider are between 1.7 and 3.2 nm-2. Experimental grafting densities for alkanethiols range from 0.2 nm-2[28] to 4.6 nm-2[10]. Below we give detailed information on the characteristics of each surface.
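These densities follow directly from the unit-cell dimensions of Table 1 (one thiol per cell), as the short check below illustrates:

```python
# One thiol per surface unit cell; cell dimensions (nm) from Table 1.
cells = {"Au(111)": (0.597, 0.517), "Au(211)": (0.597, 0.731),
         "Au(221)": (0.597, 0.895), "Au(311)": (0.597, 0.990)}

for surf, (a, b) in cells.items():
    print(f"{surf}: grafting density = {1.0 / (a * b):.2f} nm^-2")
# -> 3.24, 2.29, 1.87 and 1.69 nm^-2, as listed in Table 1
```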
#### The Au(111) surface:
In Au(111), the S atoms sit on a bridge site between two surface Au atoms and are arranged in a rectangular lattice with dimensions of 0.597 $\times$ 0.517 nm2. This gives a grafting density of 3.24 nm-2. This structure, described as $(2\times\sqrt{3})rect$, although observed experimentally[5], has a somewhat lower density than the most common SAM structure for Au(111), which is $(\sqrt{3}\times\sqrt{3})R30^{\circ}$. We chose $(2\times\sqrt{3})rect$ for the perfectly flat Au(111) in order to have four SAM structures with an identical arrangement of S atoms, both in terms of symmetry (the S atoms form rectangular lattices) and of S-S distance. The S-S distance is twice the nearest-neighbor distance of bulk Au.
#### The Au(211) surface:
This surface consists of three-atom-wide terraces and monoatomic steps. On the terraces, atoms have the same atomic configuration as on the (111) surface, while the step atoms resemble the structure of (100). The microfacet notation[29] for this surface is therefore 3(111)$\times$(100), as shown in Table 1. The S atoms are positioned on a bridge site between two Au atoms on the edge of the steps and are arranged in a rectangular lattice with dimensions of 0.597 $\times$ 0.731 nm2. This gives a grafting density of 2.29 nm-2.
#### The Au(221) surface:
This surface is similar to Au(211), since it consists of four-atom-wide terraces and monoatomic steps. On the terraces, atoms have the (111) configuration, so the microfacet notation is 4(111)$\times$(111). The S atoms are positioned on a bridge site between two Au atoms on the edge of the steps and are arranged in a rectangular lattice with dimensions of 0.597 $\times$ 0.895 nm2. This gives a grafting density of 1.87 nm-2.
#### The Au(311) surface:
The Au(311) surface consists of two-atom-wide terraces and monoatomic steps. The structure of (311) is similar to that of (211), the only difference being the two-atom-wide terraces compared to the three-atom-wide terraces of (211). Thus, the microfacet notation for this surface is 2(111)$\times$(100). The S atoms are positioned on a bridge site between two Au atoms on the edge of every other step and are arranged in a rectangular lattice with dimensions of 0.597 $\times$ 0.990 nm2. This gives a grafting density of 1.69 nm-2.
Final sample construction: The final sample for each kind of surface was formed by repeating the above initial cells 30 times along both the x- and y-axes, providing SAMs with 900 alkanethiols.
### II.2 Atomistic force field (interaction potentials)
The force field used in the present work is based on previous classical force fields for SAMs[10, 12], which we extend as described below.
The interatomic potentials used here are described in Table 2. The majority of these potentials were taken from the literature. Here, we consider immobile alkanethiols, where the S-Au bond stays fixed throughout the simulation. This is a reasonable approximation, given that the binding of S to step-edge atoms is extremely strong and it is unlikely that the S-Au bond breaks at room temperature. Barmparis et al.[18] found that the lowest alkanethiol adsorption energies on the above surfaces are -0.146, -0.81, -0.68 and -0.75 eV for the (111), (211), (221) and (311) surfaces, respectively, far larger in magnitude than the typical kinetic energy of a gas molecule ($\frac{3}{2}kT,\ k:$ Boltzmann's constant), ~$3.9\times 10^{-2}$ eV. The positions of the Au atoms have been kept frozen as well, for the same reason.
For the rest of our particles, the total potential energy ($V_{total}$) of a
particle is
$V_{total}=V_{b}+V_{nb}$ (1)
where $V_{b}$ stands for the intramolecular (bonded) and $V_{nb}$ for the
intermolecular (non-bonded) interactions. The $V_{b}$ and $V_{nb}$ are given
by
$V_{b}=V_{stretch}+V_{bend}+V_{tor},$ (2a)
$V_{nb}=V_{LJ}(r)=4\epsilon_{ij}\biggl{(}\Bigl{(}\frac{\sigma_{ij}}{r_{ij}}\Bigr{)}^{12}-\Bigl{(}\frac{\sigma_{ij}}{r_{ij}}\Bigr{)}^{6}\biggr{)},$ (2b)
respectively, as described in Table 2.
The bond-stretching ($V_{stretch}$), bond-bending ($V_{bend}$) and dihedral angle ($V_{tor}$) interactions of the SAM chains were taken from the literature[10, 12]. On the contrary, due to the lack of a detailed interaction potential for the $\mathrm{Au-S-CH_{2}-CH_{2}}$ dihedral angle, we have developed a new one. For that, we have performed new DFT calculations and parametrized a polynomial function for each surface, as presented in Section II.3, "Calculation of Au-S-C(1)-C(2) dihedral angle potentials".
The non-bonded interactions of $\mathrm{S-CH_{x}}\ (x=2,3)$, $\mathrm{CH_{x}-CH_{y}}\ (x,y=2,3)$ and $\mathrm{Au-CH_{x}}\ (x=2,3)$ were described by the typical 12-6 Lennard-Jones potential of Equation 2b. The values of $\epsilon_{ij}$ and $\sigma_{ij}$ for the LJ cross-interactions were estimated using the Lorentz-Berthelot rules ($\sigma_{ij}=\frac{1}{2}(\sigma_{ii}+\sigma_{jj})$ and $\epsilon_{ij}=\sqrt{\vphantom{b}\epsilon_{ii}\epsilon_{jj}}$), with the values presented in Table 2.
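A minimal sketch of Eq. (2b) with the Lorentz-Berthelot mixing, using the self-interaction parameters listed in Table 2:

```python
import numpy as np

# Self-interaction parameters from Table 2 (eps: kJ/mol, sigma: nm)
EPS   = {"S": 1.6628, "CH2": 0.4937, "CH3": 0.7326, "Au": 0.1632}
SIGMA = {"S": 0.4250, "CH2": 0.3905, "CH3": 0.3905, "Au": 0.2935}

def v_lj(r, a, b):
    """12-6 Lennard-Jones energy (kJ/mol) for sites a, b at distance
    r (nm), with Lorentz-Berthelot combination rules."""
    sig = 0.5 * (SIGMA[a] + SIGMA[b])
    eps = np.sqrt(EPS[a] * EPS[b])
    x6 = (sig / r) ** 6
    return 4.0 * eps * (x6 * x6 - x6)

print(v_lj(0.40, "Au", "CH2"))   # e.g. an Au-CH2 pair at 0.40 nm
```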
Table 2: Interaction parameters of the molecular models used in simulations
Type of interaction and potential function | Type of interacting sites
---|---
Bond-stretching interactions
$V_{stretch}(r)=\frac{1}{2}k_{s}(r-r_{0})^{2}$ | $\mathrm{CH_{2}-CH_{x}}$ (x=2,3) | $\mathrm{S-CH_{2}}$ | $\mathrm{Au-S}$ | $\mathrm{Au-Au}$ (slab)
$r_{0}\ (nm)$ | 0.154 | 0.181 | |
$k_{s}\ (\frac{kJ}{mol}\times nm^{-2})$ | 217568.00[12] | 185769.00[12] | frozen | frozen
Bond-bending interactions
$V_{bend}(\theta)=\frac{1}{2}k_{b}(\theta-\theta_{0})^{2}$ | $\mathrm{CH_{2}-CH_{2}-CH_{x}}$ (x=2,3) | $\mathrm{S-CH_{2}-CH_{2}}$ | $\mathrm{Au-S-CH_{2}}$ | $\mathrm{Au-Au-Au}$ (slab)
$\theta_{0}\ (deg)$ | 109.5 | 114.4[10] | 110.1[18] |
$k_{b}\ (\frac{kJ}{mol}\times rad^{-2})$ | 519.653 | 519.653 | 519.653[10] | frozen
Dihedral angle interactions
$V_{tor}(\phi)=\sum\limits_{i=0}^{5}\alpha_{i}\cos^{i}(\phi)$ | $\mathrm{CH_{2}-CH_{2}-CH_{2}-CH_{x}}$ | $\mathrm{S-CH_{2}-CH_{2}-CH_{2}}$ | $\mathrm{Au-Au-Au-Au}$ (slab)
(Ryckaert-Bellemans function) | (x=2,3) | |
$\alpha_{i}\ (\frac{kJ}{mol})$ | $\alpha_{0}=9.2759\ /\ \alpha_{1}=12.1545\ /\ \alpha_{2}=-13.1168$ | frozen
| $\alpha_{3}=-3.0585\ /\ \alpha_{4}=26.2378\ /\ \alpha_{5}=-31.4929$[10] |
Dihedral interactions for the $\mathrm{Au-S-CH_{2}-CH_{2}}$ angle | Au surfaces
Polynomial coefficients | (111) | (211) | (221) | (311)
$a_{0}$ | $2.063716\times 10^{1}$ | $1.938784\times 10^{1}$ | $14.79002\times 10^{1}$ | $1.856027\times 10^{1}$
$a_{1}$ | $-1.415672\times 10^{-1}$ | $-1.316188\times 10^{-1}$ | $1.697293\times 10^{-1}$ | $2.507222\times 10^{-1}$
$a_{2}$ | $-2.655100\times 10^{-3}$ | $-3.480334\times 10^{-3}$ | $-2.947214\times 10^{-3}$ | $-2.147820\times 10^{-3}$
$a_{3}$ | $-1.259557\times 10^{-05}$ | $-4.817540\times 10^{-6}$ | $-5.552760\times 10^{-6}$ | $-1.823129\times 10^{-5}$
$a_{4}$ | $2.743923\times 10^{-07}$ | $5.119324\times 10^{-7}$ | $5.090070\times 10^{-7}$ | $2.218595\times 10^{-7}$
$a_{5}$ | $3.160848\times 10^{-09}$ | $1.715997\times 10^{-9}$ | $-1.184259\times 10^{-9}$ | $-3.324587\times 10^{-10}$
$a_{6}$ | $-1.457525\times 10^{-11}$ | $-3.462513\times 10^{-11}$ | $-3.632844\times 10^{-11}$ | $-1.365227\times 10^{-11}$
$a_{7}$ | $-1.864453\times 10^{-13}$ | $-7.556656\times 10^{-14}$ | $8.703494\times 10^{-14}$ | $5.466089\times 10^{-14}$
$a_{8}$ | $3.337915\times 10^{-16}$ | $1.006355\times 10^{-15}$ | $1.076216\times 10^{-15}$ | $3.809166\times 10^{-16}$
$a_{9}$ | $4.553404\times 10^{-18}$ | $9.766564\times 10^{-19}$ | $-2.113789\times 10^{-18}$ | $-1.473163\times 10^{-18}$
$a_{10}$ | $-2.656180\times 10^{-21}$ | $-1.045875\times 10^{-20}$ | $-1.137111\times 10^{-20}$ | $-3.823418\times 10^{-21}$
$a_{11}$ | $-4.046784\times 10^{-23}$ | $-5.492738\times 10^{-25}$ | $1.743463\times 10^{-23}$ | $1.269500\times 10^{-23}$
$a_{12}$ | | | $2.807201\times 10^{-27}$ |
Non-bonded interactions
$V_{LJ}(r)=4\epsilon_{ij}\Bigl{(}\bigl{(}\frac{\sigma_{ij}}{r_{ij}}\bigr{)}^{12}-\bigl{(}\frac{\sigma_{ij}}{r_{ij}}\bigr{)}^{6}\Bigr{)}$ | $\mathrm{S}$ | $\mathrm{CH_{2}}$ | $\mathrm{CH_{3}}$ | $\mathrm{Au}$
$\epsilon_{ij}\ (\frac{kJ}{mol})$ | 1.6628 | 0.4937 | 0.7326 | 0.1632
$\sigma_{ij}\ (nm)$ | 0.4250 | 0.3905 | 0.3905 | 0.2935[12]
Another interesting potential function is the one that describes the energy cost related to the Au-S-C angle, with the surface of the substrate treated as a plane. This potential, although not used in the present calculations, is of great importance in the context of the so-called odd-even effects for SAMs. The potential is of harmonic type, $V(\theta)=\frac{1}{2}k_{b}(\theta-\theta_{0})^{2}$, with $k_{b}$ the bond-bending constant given in Table 2. The angle $\theta_{0}$ equals 123.1∘ for (111), 113.3∘ for (211), 108.1∘ for (221) and 111.5∘ for (311), respectively.
### II.3 Calculation of Au-S-C(1)-C(2) dihedral angle potentials
The atomistic force field described above, as well as the majority of the parameter values in Table 2, originated from previous works for flat Au surfaces, where the $\mathrm{Au-S-C^{(1)}-C^{(2)}}$ dihedral does not play as important a role as it does for stepped surfaces. The complexity of our surfaces and the lack of previous calculations for the Au-S-$\mathrm{C^{(1)}-C^{(2)}}$ dihedral potential led us to develop a new calculation for it. Using state-of-the-art electronic structure methods, we calculate the energy of adsorbed ethanethiol at different (fixed) values of the dihedral angle, $\phi$, and fit the results to an analytical function of $\phi$. In this way, we end up with an accurate potential that takes into account variations of the dihedral angle in adsorbed alkanethiols. The idea is to build structures containing ethanethiols on each of the mentioned surfaces, using the method already described in Section II.1, "Sample preparation and construction", and to calculate the potential at a number of angles by rotating about the S-$\mathrm{C^{(1)}}$ bond in steps of 10 degrees, starting from the original position. This process is shown in Figure 3 for the (211) surface; identical processes are used for the rest of the mentioned surfaces. In order to define the Au-S-$\mathrm{C^{(1)}-C^{(2)}}$ chain, we take the Au atom with the smallest distance from the S atom of the ethanethiol; note that the S atom is positioned in the middle of a bridge site between two Au atoms, so the two distances are almost equal; see also the schematic representation in Figure 2.
Figure 3: Calculation process of the potential for the
Au-S-$\mathrm{C^{(1)}-C^{(2)}}$ dihedral on a Au(211) surface. (a), (b), (c)
and (d) show the dihedral planes at the initial (original) position and after
the rotation at 90, 180 and 270 degrees respectively. _Red:_ the Au-S-C plane,
_Green_ : the S-C-C plane, _Blue_ : the new S-$\mathrm{C^{(1)}-C^{(2)}}$ plane
after rotation. Note the slight displacement of the C atoms due to the slab
repulsion when they seem to approach it in (b) and (c) where the
$\mathrm{C^{(1)}}$ atom has been moved slightly up and left in comparison to
its initial position.
The calculations were performed within the Kohn-Sham[30] approach to Hohenberg-Kohn DFT[31], using the GPAW[33] code through ASE[32] in finite-difference mode. The grid points were set with a spacing of 0.2 Å, while the k-points were set to 2, 2 and 1 along the x-, y- and z-axes, respectively. The exchange-correlation functional was the revised Perdew-Burke-Ernzerhof (RPBE)[34]. The systems were relaxed until they reached the lowest energy, which was finally selected. At some angles, where the second C atom tends to penetrate between the atoms of the slab surface, the system was very unstable. This caused large energy variations during the relaxation process, and some displacement of the atoms from their expected positions was observed (Figure 3 b-c). In these situations the systems never fully converged and, as a result, we selected the lowest energy reached during a long relaxation process.
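A minimal ASE/GPAW sketch of this setup is given below (slab only; the adsorbed ethanethiol and the constrained dihedral rotation are omitted, and the slab size and vacuum are placeholders of our own rather than the exact cells used in the paper):

```python
from ase.build import bulk, surface
from gpaw import GPAW

# Au(211) slab with the lattice constant used in this work (4.22 A)
au = bulk('Au', 'fcc', a=4.22, cubic=True)
slab = surface(au, (2, 1, 1), layers=4, vacuum=8.0)

# FD mode, 0.2 A grid spacing, (2, 2, 1) k-points, RPBE functional
slab.calc = GPAW(mode='fd', h=0.2, kpts=(2, 2, 1), xc='RPBE',
                 txt='au211_ethanethiol.txt')
energy = slab.get_potential_energy()   # total energy in eV
```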
Starting from these data, for each surface we fitted a polynomial function of the Au-S-$\mathrm{C^{(1)}-C^{(2)}}$ dihedral angle $\phi_{rot}$ to the difference between the calculated potential and the lowest calculated value, in order to obtain this part of the potential scheme. Due to the periodicity of the potential, we ensured in the fits that the values, as well as the first and second derivatives, at the initial and final angles of the calculation are respectively equal. This was achieved with an accuracy between $10^{-10}$ and $10^{-7}$, depending on the examined surface. The polynomials were found to be of 11th degree (for the (111), (211) and (311) surfaces) and 12th degree (for the (221) surface), and the resulting coefficients are given in Table 2. The dihedral angle was finally fixed so that $\phi_{rot}^{(cis)}=0$, according to the IUPAC/IUB convention.
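The fitted potential can then be evaluated directly from the Table 2 coefficients; the sketch below uses the Au(111) set and assumes the polynomial variable is the dihedral angle in degrees (an inferred convention, which reproduces V ≈ 0 at the trans minimum, φ = 180°, and V ≈ 20.6 kJ/mol at cis):

```python
# Au(111) coefficients a_0..a_11 from Table 2 (kJ/mol)
A111 = [2.063716e+1, -1.415672e-1, -2.655100e-3, -1.259557e-5,
        2.743923e-7, 3.160848e-9, -1.457525e-11, -1.864453e-13,
        3.337915e-16, 4.553404e-18, -2.656180e-21, -4.046784e-23]

def v_dihedral(phi_deg, coeffs=A111):
    """V(phi) = sum_i a_i * phi**i, evaluated with Horner's scheme."""
    v = 0.0
    for a in reversed(coeffs):
        v = v * phi_deg + a
    return v

print(v_dihedral(0.0), v_dihedral(180.0))   # ~20.6 (cis) and ~0 (trans)
```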
Figure 4: Potential vs. the $\mathrm{Au-S-C^{(1)}-C^{(2)}}$ dihedral angle on Au surfaces. The fit is less accurate at angles where the C atoms are strongly repelled by the slab atoms. The images show the positions of the atoms at selected dihedral angles. Angles are given with respect to $\phi_{rot}^{(cis)}=0$, according to the IUPAC/IUB convention.
We plotted the simulation data (obtained from the DFT calculations) together with the fitted values, as shown in Figure 4 for Au(211) and Au(311). The potential on the vertical axis is given in $kJ/mol$ (more specifically, it is the difference between the original value $V$ and the lowest calculated value $V_{0}$), vs. the angle in degrees on the horizontal axis. The diagrams are shifted appropriately so that they are plotted according to the IUPAC/IUB convention ($\phi_{cis}=0$).
Although the potential functions for the (111) and (311) surfaces fit the calculated values very well, the deviations are larger for the (211) and (221) configurations, especially at angles where the second C atom seems to "penetrate" into the slab. The reason, of course, is that these are forbidden regions where the energy of the system is very high. In spite of the deviation from the expected accuracy, we consider that these potential functions serve quite well the purpose they were constructed for.
Comparing the new dihedral angle potentials for the Au-S-C-C angle to other well-known dihedral angle potentials, we observe several similarities and differences. The potential of Ghorai and Glotzer[12] for the S-C-C-C dihedral angle, which was originally parametrized for the C-C-C-C dihedral angle, is a third-degree polynomial in $(1+\cos\phi)$, where $\phi$ is the dihedral angle. Both potentials are smooth periodic functions of $\phi$ with more than one minimum. The maximum energy is near $\phi=0$ and the minimum near $\phi=\pi$. The energy difference between the minimum and maximum energy is 35 kJ/mol for the Ghorai and Glotzer potential and 25 kJ/mol for the present potential. We thus find a softer potential compared to the one for alkanes, which is a result of the presence of a metal atom in our case. The Ghorai and Glotzer potential has two local minima located at angles $\approx\pm 60^{\circ}$. We find only one such local minimum for each surface, as the presence of the Au atoms makes it energetically unfavorable for the second methyl group to be located close to the surface. Our potential has a large plateau of high energy values between zero and 60 degrees; these conformations correspond to CHn groups being very close to Au atoms (see Fig. 4).
### II.4 Atomistic simulations
After identifying the structures and the potential energy functions, we
proceeded to perform the Molecular Dynamics (MD) simulations, using the
GROMACS[35] open source package.
The MD simulations were performed in the NVT (canonical) ensemble, with the Nosé-Hoover thermostat for temperature coupling of the whole system ($\tau$=0.2 ps) at T=300 K[36, 37]. We used a large simulation box in the z-direction, with more than 1 nm of vacuum above the SAMs, and $30\times 30=900$ thiol molecules. Due to the presence of the vacuum region, the system can arrange its structure to reach the equilibrium density. The $x$- and $y$-periodicity cannot be modified due to the presence of the thick Au slab underneath the SAM. Under the conditions of the present study, the lattice constant of Au is not expected to change with pressure or temperature, so there is no need to use the NPT ensemble. We also used the Verlet algorithm (leap-frog variant) for the integration of the equations of motion[38]. Periodic boundary conditions were applied in all three dimensions.
Along the z-axis in particular, the gap between periodic images of the slab was kept above 5 nm in order to prevent spurious vertical interactions between the systems. All structures were gradually annealed at temperatures of 500, 400 and 350 K; the output of the last stage was the configuration that was eventually studied at 300 K. The simulation time was 200 ns for the (111), (211) and (221) surfaces and 400 ns for (311), to ensure proper equilibration and an accurate calculation of the structural properties.
The question of ergodicity (time average equals ensemble average) is always important in simulations of self-assembled systems. To check that our simulations respect this principle, we performed multiple MD runs (3 to 5) for each system in order to ensure that they end up in similar states, especially for the (221) and (311) systems, which show little or no order in their final states. We tried several different initial conditions, and also performed runs at high temperature followed by cooling down to room temperature. In all cases, the key features of the final states were the same and independent of the initial state or the equilibration method.
## III Results
### III.1 Systems equilibrium and tilt angles
In order to estimate the equilibrium state of each examined system, we monitored the time evolution of the tilt angle of the C16S chains on every surface. The times of convergence to an ordered state were 19, 17 and 260 ns for the (111), (211) and (311) systems, respectively. The (221) system converged at 33 ns but never reached an ordered state. Tilt angles and other structural parameters are tabulated in Table 3. These are the average values from the analysis of the accumulated configurations after equilibration was reached.
Figure 5: Time evolution of the non-bonded potential energy for C16S chains on Au surfaces. The energy of the (ordered) SAMs on (111) and (211), and of the (amorphous) one on (221), converges within about 50 ns, whereas the one on (311), which includes both ordered and amorphous domains, converges on a much longer time scale of about 300 ns.
Table 3: Selected results from the MD simulations
| Surfaces
---|---
Properties | (111) | (211) | (221) | (311)
Time of structural properties convergence (ns) | 19 | 17 | 33 | 260
Tilt angle <$\theta_{m}$> (deg) | 52.6$\pm$2.8 | 61.1$\pm$3.1 | 61.3$\pm$16.7 | 69.6$\pm$3.2*
Mean height of last C atom <$z_{tail}$> (nm) | 1.44$\pm$0.06 | 1.12$\pm$0.07 | 0.84$\pm$0.38 | 0.96$\pm$0.22
Precession angle <$\chi$> (deg) | 147.2$\pm$2.7 | 235.4$\pm$4.3 | | 43.9$\pm$2.6*
Gauche defects of the last methyl in chain | 8.8% | 20.6% | | 18.2%
Eigenvalues of S tensor ($\lambda_{x}^{2},\lambda_{y}^{2},\lambda_{z}^{2}$) | 290.17, 2.30, 1.17 | 277.74, 3.54, 2.01 | 96.61, 69.66, 47.82 | 210.52, 44.45, 13.03
Relative shape anisotropy ($\kappa$ factor) | 0.99989 | 0.99968 | 0.37499 | 0.93135
*These values have been calculated in the area around the main peak of the distribution.
Another property used to verify convergence, in addition to the tilt angle, is the time evolution of the non-bonded energy of the C16S chains on every surface. The results are shown in Figure 5. The alkanethiol chains on (111) and (211) relax very fast (below 20 ns), while those on (311) converge rather slowly (above 250 ns) with respect to the first two. The energy of (221) was almost constant, though with significant fluctuations. In all cases, we followed the time evolution of the systems for hundreds of ns in order to be sure that they had reached thermodynamic equilibrium prior to the calculation of any structural properties.
Figure 6: Final configurations of the four systems. Note the order on the (111) and (211) surfaces in comparison with the disorder on the (221) and (311) ones. Nevertheless, partially ordered formations are observed on (311).
Typical snapshots of the final configurations of the four surfaces are shown in Figure 6. As a general observation on the final states of our systems, only two of them, (111) and (211), have reached total order (Figure 6). The (311) system was partially ordered, exhibiting large domains of fixed chains separated by transition zones where the alkanethiols were disordered (this will be discussed later in this work). The (221) system was totally disordered. The convergence to the final state for the (111), (211) and (221) systems was very fast (around 20 ns), whereas that of the (311) system was slower (above 250 ns).
The normalized distribution of the tilt angles after equilibration on the examined surfaces is plotted in Figure 7. From this diagram, we observe the excellent order of the alkanethiols on the (111) and (211) surfaces, with mean values equal to 52.6$\pm$2.8 and 61.1$\pm$3.1 degrees, respectively. The value for the (111) system is analogous to the theoretical[11] and experimental[5, 6] results mentioned before, where tilt angles lie around an average of $50^{\circ}$, thus supporting the correctness of the method. Because of the partial order of the (311) system, its mean tilt angle was calculated in the area around the main peak, as demonstrated in the same plot, and was found to be 69.6$\pm$3.2 degrees. The (221) system gives an average tilt angle of 61.3$\pm$16.7 degrees with a very flat distribution, because of its disordered final state. However, there is a small peak near 90 degrees, which indicates that some chains are almost parallel to the surface. For this system, the percentage deviation from the average is very high (27.27%), which strengthens this view.
The fact that the tilt angle is larger on the (211) surface indicates a stronger interaction between the alkanethiol and this surface in comparison with (111). Through the shift of the distribution maxima to larger tilt-angle values, Figure 7 also indicates an increasing interaction of the chains with the (311) surface.
Figure 7: Normalized distributions of the tilt angles for the C16S chains on the various Au surfaces. While the (111) and (211) surfaces show narrow, well-converged distributions, the lack of such convergence for (221) and (311) is evident.
From Figure 6, we observe ordered SAM structures on the (111), (211) and, partially, on the (311) surfaces, and disorder on (221), as mentioned above. This order and disorder can be explained by two factors: (a) the distances between the S atoms at the various surface positions, which lead to larger distances between the $\mathrm{-CH_{2}-}$ and $\mathrm{-CH_{3}}$ groups belonging to adjacent chains, and (b) the geometry of the surface structure, in particular the different step type. The (221) surface has (111) steps, where one Au atom of the upper terrace is bonded to two atoms of the lower terrace. The (211) and (311) surfaces have (100) steps, where one Au atom of the upper terrace is bonded to one atom of the lower terrace. Similar differences due to surface orientation have been observed in interfaces between diamond and amorphous carbon[39]. Indeed, the order decreases as the grafting density of S atoms on the Au surface gets lower, from the (111) to the (211) and (221) systems, as indicated in Table 1. The partial order of the SAMs on (311) can be explained by the extra step in the Au surface that lies between two adjacent S atoms (see Figure 2 (311)(a)), which modifies the balance between metal-chain and chain-chain forces in the system.
Figure 8: Two-dimensional radial distribution function of the centers of mass of groups belonging to different alkanethiol chains (intermolecular) on different Au surfaces. On (111), (211) and (311), clear peaks indicate an ordered structure. On the contrary, on (221) the peaks vanish, which indicates a disordered configuration.
To further explore the emergence of order or disorder depending on surface orientation, we plot the 2D radial distribution function $g(r)$ of the centers of mass of the alkanethiols in Fig. 8. The peaks of $g(r)$ correspond to distances at which it is most likely to find two centers of mass. Ordered structures show a series of distinct peaks, whereas a random structure has $g(r)=1$. As can be seen from Fig. 8, the (111), (211) and (311) surfaces show clear peaks at specific distances between the centers of mass of the alkane groups, while a much smoother $g(r)$ is found for the (221) surface. These features of the 2D radial distribution function suggest that the alkanethiol groups on the (111), (211) and (311) systems are localized at specific positions in an ordered superstructure. This is not the case for the (221) system, where $g(r)$ has the typical features of an amorphous-like or disordered system.
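A minimal sketch of such a 2D radial distribution function is given below (Python; a rectangular periodic box and the in-plane minimum-image convention are assumed):

```python
import numpy as np

def rdf_2d(xy, box, r_max, n_bins=200):
    """g(r) for in-plane centers of mass.  xy: (N, 2) positions,
    box: (Lx, Ly) periodic box lengths, same units as r_max."""
    xy, box = np.asarray(xy), np.asarray(box)
    n = len(xy)
    d = xy[:, None, :] - xy[None, :, :]
    d -= np.round(d / box) * box                      # minimum image
    r = np.linalg.norm(d, axis=-1)[np.triu_indices(n, k=1)]
    hist, edges = np.histogram(r, bins=n_bins, range=(0.0, r_max))
    r_mid = 0.5 * (edges[1:] + edges[:-1])
    # expected pair count per 2D shell for an ideal (random) system
    rho = n / (box[0] * box[1])
    ideal = rho * 2.0 * np.pi * r_mid * (edges[1] - edges[0]) * n / 2.0
    return r_mid, hist / ideal
```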
### III.2 Mean atom height and the monolayer thickness ($\mathbf{z_{tail}}$)
The mean height of the C atoms of the $\mathrm{C_{16}S}$ chains above the various surfaces examined in this study, plotted against the C atom ranking number, is shown in Figure 9. Previous studies showed a rather linear profile of the mean distance of the alkane chains from the Au surface for ordered configurations[10]; similar trends are also found here. As this height depends strongly on the tilt angle discussed above, a larger tilt angle indicates a chain more inclined with respect to the vertical axis. This holds for the chains on the (111), (211) and (311) surfaces, where order was observed. For the (221) surface, where disorder is observed, no such linearity exists.
The mean height of the last C atom ($z_{tail}$) is given in Table 3. As with the tilt angle, the mean height for the (111), (211) and (311) surfaces is 1.44$\pm$0.06, 1.12$\pm$0.07 and 0.96$\pm$0.22 nm, respectively, while for (221) it is 0.84$\pm$0.38 nm. For the last surface, the profile in the diagram is a curved line rather than the herringbone-like one of the ordered systems, as a result of the observed disorder.
Figure 9: The mean atom height for every C atom with respect to its ranking number in the alkane chain. Linearity is evident for the ordered configurations, while it is absent for the rest.
The normalized distribution of the distance of the last C atom in the chain from the metal slab ($z_{tail}$) is plotted in Figure 10. For the well-ordered (111) and (211) systems, a Gaussian-like profile is observed, with clear peaks around 1.4 and 1.1 nm, respectively, while for the semi-ordered (311) system this peak occurs around 0.84 nm, which represents $z_{tail}$ in the "ordered" areas of the system. However, for the (311) system, an additional broad (slightly declining) height distribution is observed for distances between 1.0 and 1.6 nm, indicating a significant number of chains in the non-ordered areas with a higher $z_{tail}$ than in the ordered ones, but without a clear convergence to a second central value.
Completely different is the height distribution of the last C atom for the disordered (221) system; a very broad curve is found, with a flattened peak around 1.1 nm and a smaller peak around 0.2 nm. The former indicates that there is no global average value of $z_{tail}$ for this system, while the latter shows that some chains are almost parallel to the surface.
Figure 10: The normalized distribution of $z_{tail}$ for the last C atom of
all the alkane chains. Clear peaks indicate the ordered areas. For the Au(311)
system in particular, the clear peak at 0.84 nm corresponds to the ordered
areas, while the region between 1.0 and 1.6 nm corresponds to the non-ordered
areas.
### III.3 Precession angle ($\mathbf{\chi}$)
The precession angles of the $\mathrm{C_{16}S}$ chains are defined as shown in
Figure 11 and are measured counterclockwise from the x-direction. The mean
values for the (111) and (211) systems are 147.2$\pm$2.7 and 235.4$\pm$4.3
degrees respectively (Table 3). For the semi-ordered (311) surface this value
is 43.9$\pm$2.6 degrees and was calculated over the region around the main
peak, as was done previously for the tilt angle. The normalized distributions
of the precession angles for all of the systems are plotted in Figure 12.
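A minimal sketch of this measurement is given below; approximating the chain axis by the vector from the S atom to the terminal C atom is an assumption made for illustration (the axis can equally be defined by a fit through the backbone):

```python
import numpy as np

def precession_angle(s_pos, tail_pos):
    """Precession angle chi (degrees), counterclockwise from +x.

    s_pos, tail_pos : (N, 3) positions of the S atom and the terminal
    C atom of each chain; the chain axis is projected onto the xy-plane.
    """
    v = tail_pos[:, :2] - s_pos[:, :2]        # in-plane projection
    chi = np.degrees(np.arctan2(v[:, 1], v[:, 0]))
    return np.mod(chi, 360.0)                 # map to [0, 360) degrees
```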
Figure 11: Definition of the precession angles for the C16S chains on the
(211) (upper left), (111) (lower left) and (311) (right) surface systems
studied. The angles are measured counterclockwise from the x-direction
(detail from Figure 6). The axes shown correspond to the following
crystallographic orientations: $x=[0\bar{1}1],y=[1\bar{1}\bar{1}]$ for (211);
$x=[\bar{2}11],y=[0\bar{1}1]$ for (111);
$x=[0\bar{1}1],y=[2\bar{3}\bar{3}]$ for (311). Figure 12: Normalized
distributions of the precession angles for the C16S chains.
In both ordered cases, (111) and (211), the chains lie very close to the
diagonal of the quadrilaterals formed by four neighboring S atoms. The
corresponding diagonal angles, measured from the x-direction as shown in
Figure 11 (upper left and lower left), are ~130.9 and ~234.7 degrees for the
(111) and (211) surfaces respectively, which are very close to the calculated
precession angles. Earlier studies on (111) systems with the
$(\sqrt{3}\times\sqrt{3})R30^{\circ}$ close-packed arrangement [10] have shown
that the alkane axis is projected between the nearest-neighbor (NN) and
next-nearest-neighbor (NNN) directions of the S atom that connects the alkane
to the substrate, with a preference for the NNN direction. This is also true
in this study for both the (111) and (211) ordered systems.
The semi-ordered (311) system exhibits a different behavior: the quadrilateral
mentioned above is formed by every second S atom lying on the edge of the
step (x-axis) and each S atom along the y-axis (Figure 11, right), and its
diagonal forms a ~39.7 degree angle with the x-axis according to the displayed
dimensions, very close to the average precession angle. The (221) system, on
the other hand, forms a totally disordered structure and shows no distinct
peak in the distribution.
### III.4 Gauche defects
The degree to which the C chains in the systems we study retain an all-trans
configuration is assessed by calculating the percentage of gauche defects as a
function of the bond ranking along the chain, starting from the
$\mathrm{C^{(1)}-C^{(2)}}$ bond (ranking number 3) up to the end of the chain.
The results are shown in Figure 13.
Figure 13: The gauche defects as they occur for the various surface
formations.
As stated elsewhere [10], most "gauche defects" of ordered SAM alkanethiol
chains are expected to occur in bonds far from the surface, and especially in
the last bond of the chain, due to the larger free volume available at the
chain ends [40]. This is clearly shown for the ordered formations on the
(111), (211) and (311) Au surfaces, where the percentages of gauche defects
are 8.8%, 20.6% and 18.2% respectively (Table 3). In addition, one can observe
a herringbone (alternating) pattern in the percentage of gauche defects. Such
conformations are energetically preferable in these situations because they
minimize overlaps between neighboring molecules. The higher values observed
for the 3rd bond are due to the participation of the $\mathrm{C^{(2)}}$ atom
in the first dihedral angle, Au-S-$\mathrm{C^{(1)}-C^{(2)}}$, which is
governed by a different potential (see Section II.3, "Calculation of
Au-S-$\mathrm{C^{(1)}-C^{(2)}}$ dihedral angle potentials") and hinders the
formation of trans bonds. Figure 13 also indicates that the percentage of
gauche defects increases as the interaction of the chains with the surface
increases: for (111) the defects are very few, while for the (211) and (311)
systems they are markedly more numerous.
For the system on the (221) surface, where no order has been observed, the
features above fade. This is also indicated in Figure 13 by the almost random
percentage values of the gauche defects, whereas in the "semi-ordered" (311)
final state oscillating high values are observed.
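For reference, a sketch of how such per-bond gauche statistics can be accumulated is shown below; the 120-degree cutoff separating trans from gauche states is an illustrative convention, not necessarily the exact criterion used in our analysis:

```python
import numpy as np

def dihedral(p0, p1, p2, p3):
    """Dihedral angle (degrees) of four consecutive atoms, with the
    IUPAC convention (cis = 0, trans = +/-180)."""
    b1, b2, b3 = p1 - p0, p2 - p1, p3 - p2
    n1, n2 = np.cross(b1, b2), np.cross(b2, b3)
    m = np.cross(n1, b2 / np.linalg.norm(b2))
    return np.degrees(np.arctan2(np.dot(m, n2), np.dot(n1, n2)))

def gauche_flags(chain, cutoff=120.0):
    """Per-bond gauche flags for one chain conformation.

    chain : (n_atoms, 3) backbone coordinates (S, C1, C2, ...). A bond
    is flagged gauche when |phi| < cutoff, i.e. when it deviates from
    the trans state by more than 180 - cutoff degrees. Averaging the
    flags over chains and frames, per bond ranking, yields the
    gauche-defect percentages of Figure 13.
    """
    return np.array([abs(dihedral(*chain[i:i + 4])) < cutoff
                     for i in range(len(chain) - 3)])
```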
### III.5 Gyration tensor and relative shape anisotropy
As a final measure of the structure of the C16S/Au systems, we examined the
shape of the alkanethiol chains in their final configuration by calculating
their gyration tensor [41], which is defined by
$S_{mn}\stackrel{\mathrm{def}}{=}\frac{1}{N}\sum_{i=1}^{N}(r_{m}^{(i)}-r_{m}^{(CM)})(r_{n}^{(i)}-r_{n}^{(CM)})$
(3)
where $N$ is the number of particles (here: $\mathrm{CH_{x}}$ united atoms) of
the chain, $r_{m}^{(i)}$ is the $m^{th}$ Cartesian coordinate of the average
position vector $r^{(i)}$ of the $i^{th}$ particle, and $r_{m}^{(CM)}$ is the
$m^{th}$ Cartesian coordinate of the average position of the center of mass of
the chain the particle belongs to. Since $S$ is a symmetric $3\times 3$
matrix, its diagonalization yields a principal-axis system in which
$S=\mathrm{diag}(\lambda_{x}^{2},\lambda_{y}^{2},\lambda_{z}^{2})$ (4)
where $\lambda_{x}^{2},\lambda_{y}^{2},\lambda_{z}^{2}$ are the eigenvalues of
$S$ and $\lambda_{x}^{2}\geq\lambda_{y}^{2}\geq\lambda_{z}^{2}$.
The eigenvalues are given in Table 3. In the three ordered conformations there
is a clear preference for one specific dimension (evidently that of the chain
axis), as the corresponding eigenvalue is much greater than the other two.
This can be shown further by calculating the relative shape anisotropy factor
$\kappa$ as [41]
$\kappa^{2}=\frac{3}{2}\frac{\lambda_{x}^{4}+\lambda_{y}^{4}+\lambda_{z}^{4}}{(\lambda_{x}^{2}+\lambda_{y}^{2}+\lambda_{z}^{2})^{2}}-\frac{1}{2}$
(5)
where $0\leq\kappa\leq 1$: $\kappa=0$ occurs only if the particles are
distributed with spherical symmetry, and $\kappa=1$ only if they all lie on a
line. Indeed, the calculated values of $\kappa$ (see Table 3) show that the
chains on the (111), (211) and (311) surfaces are very close to linearity
($\kappa$ = 0.99989, 0.99968 and 0.93135 respectively), while those on the
(221) surface are far from it ($\kappa$ = 0.37499).
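Equations (3)-(5) translate directly into a few lines of code; the sketch below is illustrative (the input is assumed to be the time-averaged united-atom positions of one chain). A rod-like (all-trans) chain gives $\kappa \to 1$ and a spherically symmetric distribution gives $\kappa \to 0$, consistent with the values in Table 3:

```python
import numpy as np

def shape_anisotropy(chain):
    """Gyration-tensor eigenvalues and relative shape anisotropy kappa
    for a single chain, following Eqs. (3)-(5).

    chain : (N, 3) average positions of the CH_x united atoms (nm).
    """
    r = chain - chain.mean(axis=0)               # relative to the chain CM
    S = r.T @ r / len(chain)                     # gyration tensor, Eq. (3)
    lam = np.sort(np.linalg.eigvalsh(S))[::-1]   # lambda_x^2 >= lambda_y^2 >= lambda_z^2, Eq. (4)
    kappa2 = 1.5 * np.sum(lam**2) / np.sum(lam)**2 - 0.5   # Eq. (5)
    return lam, np.sqrt(kappa2)
```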
## IV Discussion
In this work we investigated the structural properties and ordering of
hexadecanethiol (C16S) SAMs formed on the planar Au(111) and the stepped
Au(211), Au(221) and Au(311) surfaces via long, detailed atomistic MD
simulations. To describe accurately the interaction of the C16S chains with
the Au surfaces, we extended a classical force field reported in the
literature by parametrizing the Au-S-$\mathrm{C^{(1)}-C^{(2)}}$ dihedral angle
interaction potentials (different for each system), where Au is the surface
atom nearest to the ligand S atom of the chain and $\mathrm{C^{(1)}}$,
$\mathrm{C^{(2)}}$ are the first two C atoms. These potentials were obtained
from DFT calculations and described by high-degree polynomials after a fitting
process.
Comparing the morphology of the C16S SAMs on the various Au surfaces, a clear
transition is observed from well-ordered structures on the (111) and (211)
surfaces, to "semi-ordered" on (311), to fully disordered on (221). In
particular, SAMs on the (311) Au surface show regions of ordered chains
separated by non-ordered transition zones, possibly as a result of the
specific surface geometry.
The structure of the C16S SAM systems was quantified by calculating several
different properties. The chain tilt angle is a very important property, since
it reflects the interaction of the alkanethiol chains with the surface and the
way the chains are "ordered" with respect to the planar surface. For the (111)
Au surface, our results are in agreement with previous theoretical and
experimental data for systems with the same coverage. We also observed that on
the complex (211) and (311) surfaces the tilt angle was even larger than that
on (111), indicating a stronger "tilting" of the chains towards the Au surface.
For the disordered (221) system, a rather flat distribution of tilt angles
between 50 and 90 degrees was observed. Consistent results were found by
calculating the mean C atom heights with respect to the ranking number of each
C atom for the different Au surfaces.
The precession angle also shows some interesting features: for the
well-ordered chains on the (111) and (211) surfaces, the chain axis is
projected near the NNN direction, in agreement with results reported in the
literature. Interestingly, the case is different for the semi-ordered (311)
system, where the ordered chain axes prefer to project almost onto the
diagonal of the quadrilateral formed by every second S atom lying on the edge
of the surface step (x-axis) and each S atom along the vertical y-axis. This
may be another effect of the (311) geometry. The disordered C16S/(221)Au
system shows no preferential precession angle.
The overall morphology of the C16S chains was also studied at the
intermolecular level, by calculating the 2D pair radial distribution function
between the centers of mass of different chains. Clear, strong peaks at
various distances, indicating a crystalline-like order, were found for the
systems on the (111) and (211) surfaces, and to a lesser extent for (311),
whereas for (221) the peaks vanish, indicating a disordered morphology.
Finally, the ordered and disordered states of the various examined systems
were related to the overall shape of the C16S chains, by calculating the
gyration tensor and the relative shape anisotropy factor. For the chains on
the (111), (211) and (311) Au surfaces, a value of $\kappa$ very close to 1
was found, indicating extended (almost all-trans) alkanethiol chains, whereas
for the (221) system $\kappa$ is much smaller than one.
The results reported here emphasize the role of the Au substrate in the final
structure and morphology of the alkanethiol SAMs. In general, for SAMs on flat
surfaces, the structure is expected to depend strongly on the grafting
density, resulting from an interplay between the energetic interaction of the
molecules with the surface, which enhances order, and the associated entropy,
which leads to disorder; a transition from amorphous to ordered domains is
expected as the grafting density increases (and the entropy decreases).
Therefore, for systems with high grafting densities, such as those studied
here, ordered structures are to be expected. However, our results make clear
that the geometrical characteristics of the Au substrate can also strongly
affect the self-assembled structures. For Au(111) and the stepped Au(211) the
grafting densities are high (3.24 chains/nm$^{2}$ and 2.29 chains/nm$^{2}$
respectively) and the final structures are well ordered. For the stepped
Au(311) surface (grafting density of 1.69 chains/nm$^{2}$), domains with
ordered and amorphous-like chains are found. On the contrary, for the Au(221)
surface, despite the fact that the grafting density is still relatively high
(1.87 chains/nm$^{2}$), a clearly disordered structure is observed. This can
be attributed to the additional excluded-volume interactions induced by the
specific steps of this surface, which prevent the collective arrangement of
the C16S chains into well-formed structures.
## V Conclusions
The "order to disorder transition" of the C16S chains by changing the type of
the Au surface can offer a direct way to control the morphology of the SAMs by
only changing the crystalline characteristics of the surface, thus providing a
complementary to chemistry way to produce SAMs with the desired morphology.
The above discussion is far from leading to definite conclusions. The current
work is, according to our knowledge, the first systematic
theoretical/simulation study concerning the complex role of the substrate on
the final properties of the SAMs. Without doubt, a lot of work needs to be
done in order to examine whether the order to disorder transition observed
here is seen also for other systems, and more general to clarify the role of
the substrate characteristics (geometry, crystalline structure, defects, etc.)
on the properties of the SAMs systems. For example, detailed studies of the
structure of SAMs as a function of the grafting density on the same surface or
for other metallic surfaces, such as Ag, Pt etc, are necessary to clarify
whether the observations reported here are valid for other systems as well. In
addition, it would be very interesting to investigate the SAMs structure if
the S atoms were not frozen at their initial positions (a movement of chains
might be possible, especially at low grafting densities). All these will be
the subject of future works.
## VI Acknowledgement
The authors thank the supporting teams of the ASE/GPAW and GROMACS open source
codes, who contributed significantly to the successful completion of this
work. They also acknowledge the CyTera, BIBLIOTHECA ALEXANDRINA (project
pro17a111s1) and ARIS high-performance computing facilities for granting
computing time, as well as their staff for valuable help. This work was
supported by computational time granted by the National Infrastructures for
Research and Technology S.A. (GRNET S.A.) in the National HPC facility ARIS,
under project IDs pr007027-NANOGOLD and pa181005-NANOCOMPDESIGN. VH
acknowledges support by the project "SimEA", funded by the European Union's
Horizon 2020 research and innovation programme under grant agreement No
810660. IR, GK and DS acknowledge support from HFRI project MULTIGOLD (no.
1303-KA10480).
## VII References
* Schreiber [2000] F. Schreiber, “Structure and growth of self-assembling monolayers,” Progress in Surface Science 65, 151–257 (2000).
* Bowman _et al._ [2008] M.-C. Bowman, T. E. Ballard, C. J. Ackerson, D. L. Feldheim, D. M. Margolis, and C. Melander, “Inhibition of hiv fusion with multivalent gold nanoparticles,” Journal of the American Chemical Society 130, 6896–6897 (2008), pMID: 18473457, https://doi.org/10.1021/ja710321g .
* Giljohann _et al._ [2010] D. Giljohann, D. Seferos, W. Daniel, M. Massich, P. Patel, and C. Mirkin, “Gold nanoparticles for biology and medicine,” Angewandte Chemie International Edition 49, 3280–3294 (2010), https://onlinelibrary.wiley.com/doi/pdf/10.1002/anie.200904359 .
* Skountzos, von Wrochem, and Mavrantzas [2020] E. N. Skountzos, F. von Wrochem, and V. G. Mavrantzas, “Structure and conformation of a crystalline p3ht film adsorbed on an alkanethiol self-assembled monolayer deposited on gold,” Macromolecular Theory and Simulations 29, 2000010 (2020), https://onlinelibrary.wiley.com/doi/pdf/10.1002/mats.202000010 .
* Barrena, Ocal, and Salmeron [2001] E. Barrena, C. Ocal, and M. Salmeron, “Structure and stability of tilted-chain phases of alkanethiols on au(111),” The Journal of Chemical Physics 114, 4210–4214 (2001), https://doi.org/10.1063/1.1346676 .
* Barrena _et al._ [2004] E. Barrena, E. Palacios-Lidón, C. Munuera, X. Torrelles, S. Ferrer, U. Jonas, M. Salmeron, and C. Ocal, “The role of intermolecular and molecule-substrate interactions in the stability of alkanethiol nonsaturated phases on au(111),” Journal of the American Chemical Society 126, 385–395 (2004), pMID: 14709106, https://doi.org/10.1021/ja036143d .
* Poirier [1999] G. E. Poirier, “Coverage-dependent phases and phase stability of decanethiol on au(111),” Langmuir 15, 1167–1175 (1999), https://doi.org/10.1021/la981374x .
* Vericat, Vela, and Salvarezza [2005] C. Vericat, M. E. Vela, and R. C. Salvarezza, “Self-assembled monolayers of alkanethiols on au(111): surface structures, defects and dynamics,” Phys. Chem. Chem. Phys. 7, 3258–3268 (2005).
* Guo and Li [2014] Q. Guo and F. Li, “Self-assembled alkanethiol monolayers on gold surfaces: resolving the complex structure at the interface by stm,” Phys. Chem. Chem. Phys. 16, 19074–19090 (2014).
* Alexiadis _et al._ [2007] O. Alexiadis, V. A. Harmandaris, V. G. Mavrantzas, and L. D. Site, “Atomistic simulation of alkanethiol self-assembled monolayers on different metal surfaces via a quantum, first-principles parametrization of the sulfur-metal interaction,” The Journal of Physical Chemistry C 111, 6380–6391 (2007), https://doi.org/10.1021/jp067347u .
* Ahn _et al._ [2011] Y. Ahn, J. K. Saha, G. C. Schatz, and J. Jang, “Molecular dynamics study of the formation of a self-assembled monolayer on gold,” The Journal of Physical Chemistry C 115, 10668–10674 (2011), https://doi.org/10.1021/jp200447k .
* Ghorai and Glotzer [2007] P. K. Ghorai and S. C. Glotzer, “Molecular dynamics simulation study of self-assembled monolayers of alkanethiol surfactants on spherical gold nanoparticles,” The Journal of Physical Chemistry C 111, 15857–15862 (2007), https://doi.org/10.1021/jp0746289 .
* Mar and Klein [1994] W. Mar and M. L. Klein, “Molecular dynamics study of the self-assembled monolayer composed of s(ch2)14ch3 molecules using an all-atoms model,” Langmuir 10, 188–196 (1994), https://doi.org/10.1021/la00013a028 .
* Bhatia and Garrison [1997] R. Bhatia and B. J. Garrison, “Structure of c(4x2) superlattice in alkanethiolate self-assembled monolayers,” Langmuir 13, 4038–4043 (1997), https://doi.org/10.1021/la962055d .
* Love _et al._ [2005] J. C. Love, L. A. Estroff, J. K. Kriebel, R. G. Nuzzo, and G. M. Whitesides, “Self-assembled monolayers of thiolates on metals as a form of nanotechnology,” Chemical Reviews 105, 1103–1170 (2005), pMID: 15826011, https://doi.org/10.1021/cr0300789 .
* Leung _et al._ [2018] K. K. Leung, A. D. Gaxiola, H.-Z. Yu, and D. Bizzotto, “Tailoring the dna sam surface density on different surface crystallographic features using potential assisted thiol exchange,” Electrochimica Acta 261, 188–197 (2018).
* Nguyen [2012] C. Nguyen, “Quantitative analysis of cooh-terminated alkanethiol sams on gold nanoparticle surfaces,” Advances in Natural Sciences: Nanoscience and Nanotechnology 3, 045008 (2012).
* Barmparis, Honkala, and Remediakis [2013] G. D. Barmparis, K. Honkala, and I. N. Remediakis, “Thiolate adsorption on au(hkl) and equilibrium shape of large thiolate-covered gold nanoparticles,” The Journal of Chemical Physics 138, 064702 (2013), https://doi.org/10.1063/1.4790368 .
* Maksymovych, Sorescu, and Yates [2006] P. Maksymovych, D. C. Sorescu, and J. T. Yates, “Gold-adatom-mediated bonding in self-assembled short-chain alkanethiolate species on the au(111) surface,” Phys. Rev. Lett. 97, 146103 (2006).
* Häkkinen [2012] H. Häkkinen, “The gold–sulfur interface at the nanoscale,” Nature Chem 4, 443–455 (2012).
* Walter _et al._ [2008] M. Walter, J. Akola, O. Lopez-Acevedo, P. D. Jadzinsky, G. Calero, C. J. Ackerson, R. L. Whetten, H. Grönbeck, and H. Häkkinen, “A unified view of ligand-protected gold clusters as superatom complexes,” Proceedings of the National Academy of Sciences 105, 9157–9162 (2008), https://www.pnas.org/content/105/27/9157.full.pdf .
* Vericat _et al._ [2010] C. Vericat, M. E. Vela, G. Benitez, P. Carro, and R. C. Salvarezza, “Self-assembled monolayers of thiols and dithiols on gold: new challenges for a well-known system,” Chem. Soc. Rev. 39, 1805–1834 (2010).
* Zharnikov _et al._ [2000] M. Zharnikov, S. Frey, H. Rong, Y.-J. Yang, K. Heister, M. Buck, and M. Grunze, “The effect of sulfur–metal bonding on the structure of self-assembled monolayers,” Phys. Chem. Chem. Phys. 2, 3359–3362 (2000).
* Gnatek _et al._ [2015] D. Gnatek, S. Schuster, J. Ossowski, M. Khan, J. Rysz, S. Krakert, A. Terfort, M. Zharnikov, and P. Cyganik, “Odd–even effects in the structure and stability of azobenzene-substituted alkanethiolates on au(111) and ag(111) substrates,” The Journal of Physical Chemistry C 119, 25929–25944 (2015), https://doi.org/10.1021/acs.jpcc.5b07899 .
* Partes _et al._ [2019] C. Partes, E. Sauter, M. Gärtner, M. Kind, A. Asyuda, M. Bolte, M. Zharnikov, and A. Terfort, “Reestablishing odd–even effects in anthracene-derived monolayers by introduction of a pseudo-c2v symmetry,” The Journal of Physical Chemistry C 123, 20362–20372 (2019), https://doi.org/10.1021/acs.jpcc.9b05299 .
* Tao and Bernasek [2007] F. Tao and S. L. Bernasek, “Understanding odd-even effects in organic self-assembled monolayers,” Chemical Reviews 107, 1408–1453 (2007), pMID: 17439290, https://doi.org/10.1021/cr050258d .
* Chesneau _et al._ [2010] F. Chesneau, J. Zhao, C. Shen, M. Buck, and M. Zharnikov, “Adsorption of long-chain alkanethiols on au(111): A look from the substrate by high resolution x-ray photoelectron spectroscopy,” The Journal of Physical Chemistry C 114, 7112–7119 (2010).
* Strong and Whitesides [1988] L. Strong and G. M. Whitesides, “Structures of self-assembled monolayer films of organosulfur compounds adsorbed on gold single crystals: electron diffraction studies,” Langmuir 4, 546–558 (1988).
* Van Hove and Somorjai [1980] M. Van Hove and G. Somorjai, “A new microfacet notation for high-miller-index surfaces of cubic materials with terrace, step and kink structures,” Surface Science 92, 489 – 518 (1980).
* Kohn and Sham [1965] W. Kohn and L. J. Sham, “Self-consistent equations including exchange and correlation effects,” Phys. Rev. 140, A1133–A1138 (1965).
* Hohenberg and Kohn [1964] P. Hohenberg and W. Kohn, “Inhomogeneous electron gas,” Phys. Rev. 136, B864–B871 (1964).
* [32] “Atomic simulation environment,” https://wiki.fysik.dtu.dk/ase/, accessed April 2018.
* [33] “Gpaw: Dft and beyond within the projector-augmented wave method,” https://wiki.fysik.dtu.dk/gpaw/, accessed April 2018.
* Hammer, Hansen, and Nørskov [1999] B. Hammer, L. B. Hansen, and J. K. Nørskov, “Improved adsorption energetics within density-functional theory using revised perdew-burke-ernzerhof functionals,” Phys. Rev. B 59, 7413–7421 (1999).
* [35] “Gromacs: Fast, flexible, free,” http://www.gromacs.org/, accessed April 2018.
* Nosé [1984] S. Nosé, “A molecular dynamics method for simulations in the canonical ensemble,” Molecular Physics 52, 255–268 (1984), https://doi.org/10.1080/00268978400101201 .
* Hoover [1985] W. G. Hoover, “Canonical dynamics: Equilibrium phase-space distributions,” Phys. Rev. A 31, 1695–1697 (1985).
* Swope _et al._ [1982] W. C. Swope, H. C. Andersen, P. H. Berens, and K. R. Wilson, “A computer simulation method for the calculation of equilibrium constants for the formation of physical clusters of molecules: Application to small water clusters,” The Journal of Chemical Physics 76, 637–649 (1982), https://doi.org/10.1063/1.442716 .
* Kopidakis _et al._ [2007] G. Kopidakis, I. Remediakis, M. Fyta, and P. Kelires, “Atomic and electronic structure of crystalline–amorphous carbon interfaces,” Diamond and Related Materials 16, 1875 – 1881 (2007), proceedings of the 6th Specialists Meeting in Amorphous Carbon.
* Harmandaris _et al._ [2002] V. Harmandaris, M. Doxastakis, V. Mavrantzas, and D. N. Theodorou, “Detailed molecular dynamics simulation of the self-diffusion of n-alkane and cis-1,4 polyisoprene oligomer melts,” J. Chem. Phys. 116, 436–446 (2002).
* Theodorou and Suter [1985] D. N. Theodorou and U. W. Suter, “Shape of unperturbed linear polymers: polypropylene,” Macromolecules 18, 1206–1214 (1985), https://doi.org/10.1021/ma00148a028 .
## VIII Supplementary information
### VIII.1 Fitting Data for the calculated potential
The fitting data are presented in Table 4. The table is divided into four
sections, one for each of the studied surfaces, as indicated. Each section is
divided into three columns:
1.
The third column of each section gives the difference, in kJ/mol, between the
calculated potential and the lowest calculated potential. A zero value thus
indicates the lowest-potential configuration for each system.
2.
The first column of each section gives the Au-S-$\mathrm{C^{(1)}-C^{(2)}}$
dihedral angle, starting from our initial configuration (0 deg). This
configuration was retrieved from the data reported by Barmparis et al. [18] as
the configuration of lowest adsorption energy for each surface. The fact that
the initial angle generally does not correspond to the lowest energy in our
systems is because we used ethanethiols instead of the methanethiols used by
Barmparis et al.; this shifted the lowest energy of each system to an adjacent
angle. The only system whose lowest energy occurred at its initial
configuration was Au(211). The first and last angles differ by 360 degrees and
of course correspond to the same energy, due to the periodicity of the
potential. We note that, in the calculation of the polynomials, we ensured
that the values, as well as the first and second derivatives, at the initial
and final angles of the calculation are respectively equal (see the sketch
after this list); this was achieved with an accuracy between $10^{-10}$ and
$10^{-7}$, depending on the examined surface.
3.
The second column of each section gives the dihedral angle translated into the
IUPAC/IUB convention, in which the angle $\phi$ between the two planes of the
dihedral is zero at the _cis_ position ($\phi_{rot}^{(cis)}=0$).
Moreover, all data have been shifted so that $\phi_{rot}^{(cis)}$ appears at
the center of each plot (see Fig. 14).
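As an illustration of the fitting step (the polynomial degree below is an arbitrary placeholder, not the degree actually used), one can fit a column of Table 4 and check the periodic matching of the value and of the first and second derivatives at the endpoints as follows:

```python
import numpy as np

def fit_dihedral(phi_deg, V, degree=12):
    """Fit a high-degree polynomial to a tabulated dihedral potential.

    phi_deg : angles in degrees (IUPAC convention), spanning 360 deg
    V       : potential differences in kJ/mol (one column of Table 4)
    """
    phi = np.radians(phi_deg)
    poly = np.polynomial.Polynomial.fit(phi, V, degree)
    # Periodicity check: value, first and second derivatives should
    # coincide at the first and last angles (which differ by 360 deg).
    a, b = phi[0], phi[-1]
    for order in range(3):
        p = poly.deriv(order) if order else poly
        print(f"order-{order} endpoint mismatch: {abs(p(a) - p(b)):.2e}")
    return poly
```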
Table 4: Data for the dihedral Au-S-$\mathrm{C^{(1)}-C^{(2)}}$ fitting for the
various surfaces
Au(111) | Au(211) | Au(221) | Au(311)
---|---|---|---
(deg)_a_ | (deg)_b_ | (kJ/mol)_c_ | (deg)_a_ | (deg)_b_ | (kJ/mol)_c_ | (deg)_a_ | (deg)_b_ | (kJ/mol)_c_ | (deg)_a_ | (deg)_b_ | (kJ/mol)_c_
40 | $-176.9$ | $0$ | -10 | $-176.5$ | $0.50053$ | 10 | $-181.1$ | $0.22168$ | -100 | $-177.6$ | $0.98058$
50 | $-166.9$ | $1.92209$ | 0 | $-166.5$ | $0$ | 20 | $-171.1$ | $1.41083$ | -90 | $-167.6$ | $2.0242$
60 | $-156.9$ | $4.12734$ | 10 | $-156.5$ | $2.9621$ | 30 | $-161.1$ | $2.81249$ | -80 | $-157.6$ | $3.42303$
70 | $-146.9$ | $7.53486$ | 20 | $-146.5$ | $2.08739$ | 40 | $-151.1$ | $4.06308$ | -70 | $-147.6$ | $4.27523$
80 | $-136.9$ | $10.84462$ | 30 | $-136.5$ | $5.32198$ | 50 | $-141.1$ | $5.44702$ | -60 | $-137.6$ | $4.54192$
90 | $-126.9$ | $13.1409$ | 40 | $-126.5$ | $9.18535$ | 60 | $-131.1$ | $4.75483$ | -50 | $-127.6$ | $4.34841$
100 | $-116.9$ | $14.57073$ | 50 | $-116.5$ | $11.98105$ | 70 | $-121.1$ | $5.02124$ | -40 | $-117.6$ | $3.55137$
110 | $-106.9$ | $17.17039$ | 60 | $-106.5$ | $15.99277$ | 80 | $-111.1$ | $4.59373$ | -30 | $-107.6$ | $2.24303$
120 | $-96.9$ | $20.64851$ | 70 | $-96.5$ | $17.38566$ | 90 | $-101.1$ | $4.17872$ | -20 | $-97.6$ | $1.02032$
130 | $-86.9$ | $22.73943$ | 80 | $-86.5$ | $22.54092$ | 100 | $-91.1$ | $2.9578$ | -10 | $-87.6$ | $0.37624$
140 | $-76.9$ | $22.88312$ | 90 | $-76.5$ | $20.73711$ | 110 | $-81.1$ | $0$ | 0 | $-77.6$ | $0.1412$
150 | $-66.9$ | $23.254$ | 100 | $-66.5$ | $17.31129$ | 120 | $-71.1$ | $0.83306$ | 10 | $-67.6$ | $0.99838$
160 | $-56.9$ | $22.92139$ | 110 | $-56.5$ | $18.7662$ | 130 | $-61.1$ | $1.0445$ | 20 | $-57.6$ | $2.33373$
170 | $-46.9$ | $22.81019$ | 120 | $-46.5$ | $18.76449$ | 140 | $-51.1$ | $1.50495$ | 30 | $-47.6$ | $5.25309$
180 | $-36.9$ | $23.06272$ | 130 | $-36.5$ | $20.18011$ | 150 | $-41.1$ | $5.093$ | 40 | $-37.6$ | $7.38145$
190 | $-26.9$ | $23.22129$ | 140 | $-26.5$ | $19.58138$ | 160 | $-31.1$ | $5.9225$ | 50 | $-27.6$ | $10.69598$
200 | $-16.9$ | $21.75202$ | 150 | $-16.5$ | $20.05389$ | 170 | $-21.1$ | $9.33662$ | 60 | $-17.6$ | $14.22945$
210 | $-6.9$ | $21.50898$ | 160 | $-6.5$ | $20.95312$ | 180 | $-11.1$ | $12.62207$ | 70 | $-7.6$ | $16.7203$
220 | $3.1$ | $20.04005$ | 170 | $3.5$ | $18.82854$ | 190 | $-1.1$ | $15.91302$ | 80 | $2.4$ | $19.02913$
230 | $13.1$ | $18.50947$ | 180 | $13.5$ | $19.6641$ | 200 | $8.9$ | $15.4219$ | 90 | $12.4$ | $21.00888$
240 | $23.1$ | $16.904$ | 190 | $23.5$ | $16.377$ | 210 | $18.9$ | $20.54621$ | 100 | $22.4$ | $22.58012$
250 | $33.1$ | $13.12848$ | 200 | $33.5$ | $9.82463$ | 220 | $28.9$ | $17.22396$ | 110 | $32.4$ | $23.46611$
260 | $43.1$ | $10.1456$ | 210 | $43.5$ | $7.72552$ | 230 | $38.9$ | $15.69446$ | 120 | $42.4$ | $24.34837$
270 | $53.1$ | $5.85983$ | 220 | $53.5$ | $4.39806$ | 240 | $48.9$ | $15.44822$ | 130 | $52.4$ | $24.77496$
280 | $63.1$ | $3.15267$ | 230 | $63.5$ | $4.733$ | 250 | $58.9$ | $16.2159$ | 140 | $62.4$ | $24.10734$
290 | $73.1$ | $1.53332$ | 240 | $73.5$ | $1.15543$ | 260 | $68.9$ | $20.20456$ | 150 | $72.4$ | $22.64752$
300 | $83.1$ | $0.89677$ | 250 | $83.5$ | $1.53741$ | 270 | $78.9$ | $17.38059$ | 160 | $82.4$ | $21.13333$
310 | $93.1$ | $0.56707$ | 260 | $93.5$ | $2.71241$ | 280 | $88.9$ | $16.19414$ | 170 | $92.4$ | $19.28863$
320 | $103.1$ | $1.33393$ | 270 | $103.5$ | $3.21506$ | 290 | $98.9$ | $17.62774$ | 180 | $102.4$ | $16.69052$
330 | $113.1$ | $1.84689$ | 280 | $113.5$ | $4.60673$ | 300 | $108.9$ | $15.50676$ | 190 | $112.4$ | $12.59822$
340 | $123.1$ | $1.83621$ | 290 | $123.5$ | $5.94061$ | 310 | $118.9$ | $10.67734$ | 200 | $122.4$ | $8.59163$
350 | $133.1$ | $1.91146$ | 300 | $133.5$ | $6.52637$ | 320 | $128.9$ | $6.9842$ | 210 | $132.4$ | $5.27355$
0 | $143.1$ | $2.20951$ | 310 | $143.5$ | $6.581$ | 330 | $138.9$ | $5.61106$ | 220 | $142.4$ | $2.88328$
10 | $153.1$ | $1.62349$ | 320 | $153.5$ | $4.94211$ | 340 | $148.9$ | $1.8958$ | 230 | $152.4$ | $1.17682$
20 | $163.1$ | $0.66892$ | 330 | $163.5$ | $3.73802$ | 350 | $158.9$ | $3.37958$ | 240 | $162.4$ | $0.17125$
30 | $173.1$ | $0.04899$ | 340 | $173.5$ | $3.21287$ | 0 | $168.9$ | $0.67759$ | 250 | $172.4$ | $0$
40 | $183.1$ | $0$ | 350 | $183.5$ | $0.50053$ | 10 | $178.9$ | $0.22168$ | 260 | $182.4$ | $0.98058$
_a_ Angle from the original position.
_b_ Angle with respect to $\phi_{rot}^{(cis)}=0$, according to the IUPAC/IUB
convention.
_c_ Potential difference between the calculated value and the lowest
calculated value.
### VIII.2 Fitting plots for calculated potential vs. $\mathrm{Au-
S-C^{(1)}-C^{(2)}}$ dihedral angle
The fitting plots of the four calculated potentials described above are shown
in Fig. 14; the corresponding fitting data are given in Table 4. The fits for
Au(111) and Au(311) follow the calculated data points quite well, while for
the other two surfaces (Au(211) and Au(221)) the fit is poorer at sites near
0 degrees ($\phi_{rot}^{(cis)}=0$), where the second C atom appears to
"penetrate" the surface. The reason is that these are forbidden sites, since
the energy of the system is very high there. However, in spite of this
deviation from the expected accuracy, we consider that these potential
functions serve well the purpose for which they were constructed.
Figure 14: Potential vs. the Au-S-$\mathrm{C^{(1)}-C^{(2)}}$ dihedral angle
on the Au(211) and (311) surfaces. _Red:_ simulation data; _blue:_ fitting
data. The fit is poorer at sites where there is strong repulsion of the C
atoms by the slab atoms. Angles are given with respect to
$\phi_{rot}^{(cis)}=0$, according to the IUPAC/IUB convention.
### VIII.3 2D structure factor, S(q)
We extracted the structure factor, $S(q)$, for all the examined systems from
the radial distribution function data shown in Fig. 8 of the text. Given the
pair correlation function $g(r)$ in 2D, the structure factor is calculated
using the formula
$S(q)=1+2\pi\rho\int{g(r)\frac{\sin{qr}}{qr}r\,dr}$
or by its "discretized" form:
$S(q_{k})=1+2\pi\rho\sum_{i=0}^{n-1}g(r_{i})\frac{\sin{q_{k}r_{i}}}{q_{k}r_{i}}r_{i}\Delta
r$
where $n=900$ is the number of different distances between two chain CMs,
$\rho$ is the areal density of the CMs, $r_{i}$ is the distance between any
two chain CMs, $\Delta r=0.01$ nm is the elementary step of the distance,
$g(r_{i})$ is the value of the radial distribution function at that distance,
and $q_{k}=\frac{2\pi}{r_{k}}$.
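A direct implementation of the discretized sum is sketched below (the wavevector grid in the usage comment is an illustrative choice, not our exact sampling):

```python
import numpy as np

def structure_factor_2d(r, g, rho, q):
    """Discretized 2D structure factor from tabulated g(r), following
    the summation formula above.

    r   : (n,) uniformly spaced CM-CM distances (nm), here n = 900
    g   : (n,) radial distribution function values at r
    rho : areal density of the centers of mass (nm^-2)
    q   : (m,) wavevector magnitudes (nm^-1)
    """
    dr = r[1] - r[0]
    qr = np.outer(q, r)                  # (m, n) grid of q*r products
    kernel = np.sinc(qr / np.pi)         # sin(qr)/(qr), finite at qr = 0
    return 1 + 2 * np.pi * rho * (g * kernel * r).sum(axis=1) * dr

# e.g. q = np.linspace(0.1, 12.0, 500); S = structure_factor_2d(r, g, rho, q)
```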
Figure 15: Two-dimensional structure factor for the SAM systems considered
in the present study.
The structure factors for the four systems are plotted in Figure 15, where we
show $S(q)$ for $q$ up to 12 $\mathrm{nm^{-1}}$. Comparing this plot with
Fig. 8 of the paper, one can observe peaks at $q$ values (1-12
$\mathrm{nm^{-1}}$) corresponding to distances of 0.5 up to 6 nm between two
chain CMs, indicating the order of both the (111) and (211) systems. This also
holds for (311), despite its semi-ordered configuration. On the contrary, in
the disordered (221) system there are no such clear peaks, which is what we
expected given the aperiodicity of the system's CMs.
# Identifying Authorship Style in Malicious Binaries: Techniques, Challenges &
Datasets
Jason Gray, Royal Holloway University of London; Daniele Sgandurra, Royal
Holloway University of London; Lorenzo Cavallaro, King's College London
###### Abstract
Attributing a piece of malware to its creator typically requires threat
intelligence. Binary attribution increases the level of difficulty as it
mostly relies upon the ability to disassemble binaries to identify authorship
style. Our survey explores malicious authorship style and the adversarial
techniques authors use to remain anonymous. We examine the adversarial impact
on state-of-the-art methods, identify key findings and explore the open
research challenges. To mitigate the lack of ground-truth datasets in this
domain, we publish alongside this survey the largest and most diverse
meta-information dataset to date: 15,660 malware samples labeled with 164
threat actor groups.
_Keywords_ adversarial $\cdot$ malware $\cdot$ authorship attribution $\cdot$
advanced persistent threats $\cdot$ datasets
## 1 Introduction
Malicious software (malware) remains one of the biggest threats to
organizations, and there is no sign of this changing in the near future
[114]. Attributing malware to a person, group or country provides analysts
with evidence of the wider goals of threat actors. Furthermore, it provides a
method to counter cyber attacks and disrupt the malware economy through
public indictment [100, 90].
The current and only method for authorship attribution used by analysts
involves prolonged analysis of the threat actor across the different phases of
the kill chain [72]. Part of this process includes gathering features, such as
network analysis and exploitation techniques, referred to as _indicators of
compromise_, as well as relying on known databases of Tactics, Techniques and
Procedures (TTPs).
Sometimes there exists no wider context, especially if the threat actor is
unknown to the victim. In very few cases, analysts discover the malware source
code and use it to determine attribution through source code authorship
attribution [40, 63, 23, 67]. However, released source code leads to copycat
attacks or to the malware no longer being used [41]. This means defenders
often find themselves with only the malware binary as evidence. The faster
defenders analyze the malware and identify a probable threat actor, the faster
they can understand and contextualize an attack (including whether they must
contact an authority, and which one), which leads to a faster response and
attack mitigation.
The specific problem of identifying the author of a piece of malware is known
as Malware Authorship Attribution (MAA). However, using the binary alone
represents a difficult problem due to the complexities of program provenance
[107]. Despite this, the binary still provides interesting artifacts of author
style, e.g., the implementation of encryption, propagation or mutation, or
even the setup of command and control servers within the malware
infrastructure. Even though the demand for malware attribution continues to
increase, we notice few publications detailing methods of malware authorship
attribution.
Recent work [87, 21, 4, 59, 20] informed the wider authorship attribution
field. Neal et al. [87] wrote a survey on the wider topic of stylometry
focusing on de-anonymizing text. The survey by Burrows et al. [21] focuses on
the attribution of source code up until 2011 and highlights the positive use
of Machine Learning techniques in the authorship attribution field. Brennan et
al. [20] introduce the notion of exploring the adversarial approach toward the
stylometry problem and provide novel datasets to aid this research direction.
Kalgutkar et al. [59] provide further insight on the code authorship
attribution problem by exploring the use of features in benign source and
binary code attribution as well as the attribution models and methods. They
also present the challenges in the research field and incorporate the field of
plagiarism detection. Finally, Alrabaee et al. [4] discuss three
state-of-the-art techniques for the single-author binary authorship problem
[22, 3, 106] and provide promising results from applying these systems to
malware.
The current state-of-the-art systems show promising results on attributing
programs where author style remains unaltered apart from compilation
techniques. However, there exist few attempts to extend these systems to
consider author masking techniques, such as those used by some Advanced
Persistent Threat (APT) groups. This limitation opens the current
state-of-the-art systems to attack, and thus there is a need to fully
understand the adversarial challenges of malware authorship attribution.
##### Contributions
Our contributions include a thorough systematization of the malware authorship
attribution problem, focusing on the data modeling techniques, datasets and
features used for attribution, to allow the community to understand how each
paper builds upon the others and where the shortcomings of the current
research lie.
We review eighteen attribution systems. We compare them in terms of
techniques, features, efficacy, functionality and adversarial robustness.
We discover there exist only two publicly available author-labeled malware
datasets, both of which contain significant flaws such as non-unique labels.
Furthermore, we found that the features used for author style remain varied,
with no clear consensus on authorship style (42 of a total of 72 features were
each used by only one research group). The current state-of-the-art systems
remain inapplicable to real-world use cases. The majority of systems fail to
take into account modern malware development methods, e.g., the involvement of
multiple authors. Additionally, researchers use datasets that are not
representative of the real world, which introduces adversarial issues
surrounding open-world assumptions, continuous learning, concept drift and
obfuscation. On top of this, the majority of attribution systems from existing
research lack reproducibility, owing to system unavailability, systems no
longer working, or the literature omitting fundamental details.
Focusing on the dataset problem, we contribute by publishing a labeled
meta-information dataset of 15,660 malware samples. We extensively use
open-source intelligence to build a list of APT groups and then gather hashes
of malware, whose legitimacy we verify against VirusTotal. We use Natural
Language Processing techniques to derive the most likely label for a hash from
various open-source intelligence material. This dataset is the largest
verified APT-labeled malware dataset to date. We searched 896 files made up of
a mixture of PDFs, CSVs, rules, and indicator-of-compromise files. We found
15,660 unique hashes which we have labeled with 164 APT groups. Furthermore,
we identified an additional 7,485 unique hashes. For these unlabeled hashes,
we record the top 5 keywords from the file and the keywords of the metadata.
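To give a flavor of this pipeline, the sketch below shows one way to pull candidate hashes and keywords out of a report with regular expressions and simple frequency counting; the patterns and stopword list are illustrative placeholders, not our exact NLP method:

```python
import re
from collections import Counter

# Candidate MD5 / SHA-1 / SHA-256 hashes, matched as standalone hex tokens.
HASH_RE = re.compile(r"\b[0-9a-fA-F]{32}\b|\b[0-9a-fA-F]{40}\b|\b[0-9a-fA-F]{64}\b")
WORD_RE = re.compile(r"[A-Za-z][A-Za-z0-9_-]{3,}")
STOPWORDS = {"the", "and", "with", "from", "this", "that", "file", "hash"}

def extract(report_text, top_k=5):
    """Return the unique hashes and the top-k keywords of one report."""
    hashes = {h.lower() for h in HASH_RE.findall(report_text)}
    words = [w.lower() for w in WORD_RE.findall(report_text)
             if w.lower() not in STOPWORDS]
    keywords = [w for w, _ in Counter(words).most_common(top_k)]
    return hashes, keywords
```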
Our work complements Kalgutkar et al. [59] and Brennan et al. [20] by
extending the application of authorship attribution to malware, including a
full and detailed analysis of malware author style, features and adversarial
approaches. We expand upon the work by Alrabaee et al. [4] and incorporate the
multiple-author attribution problem into the conversation on MAA. We also note
the survey by Xue et al. [119], which focuses on general Machine Learning
based program analysis, whereas we focus purely on authorship style gained
from program analysis using multiple data modeling techniques.
Section 2 presents the background to threat actors, authorship attribution and
adversarial techniques. Section 3 systematizes the MAA problem, focusing on
the data modeling techniques, authorship style, features and datasets. Section
4 discusses real-world application of the current state-of-the-art, looking at
the challenges and recommendations for future work. Finally, in Section 5, we
present the method we used to create a new APT malware dataset for the
research community.
## 2 Background: Threat Actors, Authorship Attribution, & Adversarial
Techniques
In this section, we set out the background to the malware authorship
attribution (MAA) problem. MAA is the identification of the author of an
unknown malicious file (throughout the paper, we refer to malicious files as
"malware", "malware binaries" or "malicious binaries" interchangeably). In
particular, we define the authors we wish to identify as Threat Actors. We
also explore the wider form of the MAA problem and consider the types of
adversarial attacks which MAA systems are likely to face.
### 2.1 Threat Actors
Although threat actors with greater skill levels tend to use better
adversarial techniques [14], they also tend to possess unique styles when
using custom-made tools. Naturally, if attackers use commercially available or
open source tools, then the author of the tool is not necessarily the threat
actor. As we wish to identify the author style of threat actors, we focus on
where style exists, i.e., within custom tools. Such tools are generally
produced by Advanced Persistent Threats (APTs).
##### Advanced Persistent Threats
APTs represent the most sophisticated attackers. The US National Institute of
Standards and Technology (NIST) provides an in-depth definition of an APT
[86]. For the purpose of this paper, we consider APT groups as state and
state-sponsored threats. For instance, Duqu, Flame and Gauss are examples of
malware used by allegedly state-funded APT groups as part of espionage
campaigns [16]. Additionally, these campaigns, alongside Stuxnet and Red
October, illustrate the difficulty of detecting state and state-sponsored APT
threats [118]. At the moment, there exists only sparse information on APT
groups, and the data remains unstructured and difficult to analyze
automatically. Lemay et al. [69] created a survey which contains several
pieces of information on APT groups retrieved from public sources, such as the
various aliases used for group names and the alleged campaigns conducted.
Table 1: Revised list of the top 10 APT groups. We gathered information from
AT&T Cybersecurity [13], MITRE [84] and CCN-CERT [28] to create this list. The
table also reports the alleged group location and the number of unique and
shared tools linked to each group.
Rank 2020 | Rank 2018 | Group Name | Number of Aliases | Aliases | Suspected Location | Number of Unique Tools | Number of Shared Tools
---|---|---|---|---|---|---|---
1 | 1 | Lazarus Group | 4 | HIDDEN COBRA, Guardians of Peace, ZINC, NICKEL ACADEMY | DPRK | 16 | 2
2 | $\star$ | Gamaredon Group | 0 | N/A | N/A | 1 | 0
3 | 7 | Kimsuky | 1 | Velvet Chollima | DPRK | 0 | 0
4 | 3 | MuddyWater | 2 | TEMP.Zagros, Seedworm | Iran | 2 | 6
5 | $\star$ | TA505 | 1 | Hive0065 | N/A | 5 | 3
6 | 2 | Sofacy | 11 | SNAKEMACKEREL, APT 28, Sednit, Pawn Storm, Group 74, Tsar Team, Fancy Bear, Strontium, Swallowtail, SIG40, Threat Group-4127 | Russia | 20 | 4
7 | $\star$ | PROMETHIUM | 1 | StrongPity | N/A | 2 | 0
8 | 10 | Turla | 5 | Snake, Venomous Bear, Waterbug, WhiteBear, Krypton | Russia | 10 | 10
9 | 4 | Oil Rig | 3 | IRN2, HELIX KITTEN, APT 34 | Iran | 9 | 11
10 | $\star$ | Emissary Panda | 6 | TG-3390, BRONZE UNION, Threat Group-3390, APT27, Iron Tiger, LuckyMouse | China | 3 | 12
$\star$: Not in the 2018 top 10 APT groups.
In addition, there exists a publicly available spreadsheet containing APT
groups and their aliases [113]. Various cyber-experts from several reputable
cyber-threat intelligence sources, such as FireEye, CrowdStrike and MITRE
[84], regularly contribute to the spreadsheet, and it quickly gained
popularity amongst the research community as a source of _ground truth_. There
also exist a few
open-source sharing methods such as STIX [88] and TAXII [89] to help
researchers, but most threat intelligence options require payment [19].
To further highlight the issues surrounding APT groups, we gathered
information from MITRE [84], AT&T Cybersecurity [13] and CCN-CERT [28] to
create a list (Table 1) of the top ten APT groups, along with the alleged
location of each group and the tools linked to it. From the table, we see the
vast number of aliases and the lack of samples linked to each APT group. For
example, there currently exists no known malware linked to the group Kimsuky.
We also observe that the majority of the APTs use both unique malware and open
source/shared tools; e.g., Turla and Oil Rig both use PsExec, yet different
nation states allegedly sponsor them. Therefore, MAA becomes increasingly hard
when groups use identical tools. Furthermore, we remark that no APT group on
the list is allegedly sponsored by a Western or Five Eyes nation (the Five
Eyes consist of the United States, United Kingdom, Canada, Australia and New
Zealand). We believe the source of the data, predominantly American threat
intelligence companies, might introduce some bias to the list, as their focus
aligns with the threat actors targeting Western or Five Eyes nations. However,
threat actors target, and belong to, a variety of countries. Finally, we
remark that from 2018 to 2020 six APT groups remained in the top 10. This
shows the longevity of the groups despite an increase in public attribution.
### 2.2 Binary Similarity and YARA Rules
Currently, malware analysts use YARA rules (YARA is a pattern-matching tool
with a rule-based syntax which allows the discovery of specific signatures
[9]) for recognizing and attributing malware samples. YARA rules tend to
identify shellcode and code reuse for linking samples, not authorship style;
this is akin to the binary similarity problem, i.e., comparing how much shared
code exists between binaries [48] or searching binaries for code clones [37].
Using similarity for attribution is not foolproof and in many cases can lead
to false accusations [14]. It also usually requires analyzing the whole
binary, whereas author style can be identified from smaller code fragments.
Even though an analyst must write a rule based on their research of each
unique sample (meaning YARA rules remain as labor-intensive as most manual
malware analysis methods), they provide a much easier and quicker solution
than the current MAA systems. Research by Bassat and Cohen [15] shows the ease
of using YARA rules in the "wild" for clustering malware similarities between
alleged Russian APTs. However, the same research also shows that YARA rules
rely on unpacked samples to trigger the identified traits, a limitation shared
with current MAA systems. More recently, Raff et al. [96] tackle the
labor-intensity problem and develop a state-of-the-art system to automatically
generate YARA rules from malware. Similar to the research by Bassat and Cohen
[15], Kaspersky developed a threat attribution tool based on APT malware
binary similarity [60].
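To make the workflow concrete, the snippet below compiles and runs a toy rule with the yara-python bindings; the byte pattern and mutex name are fabricated placeholders, not signatures of any real group:

```python
import yara  # pip install yara-python

# Illustrative rule: flags PE samples that reuse a (made-up) code
# fragment and mutex name; real rules come from per-sample analysis.
RULE = r"""
rule suspected_groupX_loader
{
    strings:
        $code  = { 6A 40 68 00 30 00 00 6A 14 8D 91 }
        $mutex = "Global\\gx_loader_mtx" ascii wide
    condition:
        uint16(0) == 0x5A4D and all of them
}
"""

rules = yara.compile(source=RULE)
matches = rules.match(filepath="sample.bin")  # or match(data=raw_bytes)
for m in matches:
    print(m.rule, m.strings)
```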
### 2.3 Binary Authorship Attribution
MAA is a subset of the binary authorship attribution (BAA) problem. BAA
applies to other tasks, such as plagiarism and intellectual property rights.
In these cases, we know all the authors beforehand, e.g., the students in a
programming class. In contrast, malware authors wish to remain undisclosed due
to the illegality and secrecy of the underground market within which they
operate [1]. When we know all the possible authors in advance, we call this
the Closed World Assumption (CWA); otherwise, the Open World Assumption [85].
All of the authorship systems reviewed in this work use the CWA. From a data
modeling perspective, this prevents understanding the real-world context of
authorship attribution. However, the CWA mitigates some of the challenges,
such as quantifying the unknowns (e.g., the total number of authors), which in
turn helps with exploring other authorship objectives.
There exist varying objectives of authorship attribution, set out by Kalgutkar
et al. [59]. These consist of _identification_ (linking a binary to an
author), _clustering_ (grouping stylistic similarities), _evolution_ (tracking
stylistic changes over time), _profiling_ (understanding stylistic
characteristics), and _verification_ (checking for adversarial tampering). We
focus on identification, as the other objectives can be a by-product of
research on authorship identification, and identification forms the basis of
the understanding of authorship style on which the other objectives rely.
Within all authorship attribution objectives, we must consider whether the
goal is single or multiple authorship. The single authorship attribution
problem assumes only one author for each binary. Conversely, the multiple
authorship attribution problem assumes multiple authors created the binary.
Assuming single authorship of binaries which multiple authors created is
likely to make an attribution system learn authorship style incorrectly and
opens the system to the potential attacks which we explore in the next
section.
### 2.4 Adversary Techniques
Most BAA work assumes the authors are unaware of an attribution system being
in place. Few works consider authors using adversarial techniques to influence
the output of the attribution system. Such attacks require the attacker to
first identify which features are easiest to manipulate in order to affect the
output of the attribution system. Some of these attacks are aimed at the
learning phase (e.g., _training set poisoning_). However, most existing binary
modification attacks aim to evade the attribution system at run-time (_evasive
attacks_). Meng et al. [81] describe three evasive attacks: (i) the
confidence-loss attack, (ii) the untargeted attack and (iii) the targeted
attack. The confidence-loss attack defeats an attribution system by removing
any traces of author style, so that the system predicts no author label for
the binary. The untargeted attack attempts to make the attribution system
predict any author other than the true one. The targeted attack tries to
convince the attribution system that the binary belongs to a pre-chosen author
other than the attacker. We deem the confidence-loss attack unsophisticated,
as most malware authors attempt this by default to remain anonymous and
maintain their privacy, whereas we class the other two attacks as
sophisticated, and we believe APT groups are more likely to implement them.
#### 2.4.1 Unsophisticated attacks.
We deem these attacks to be the obfuscation techniques authors use to hide
their identity and fool malware detection systems. Common obfuscation
techniques include _encryption_ and _packing_. Encryption prevents easy
analysis: the adversary encrypts the main function to prevent static analysis
of the malware, and the program calls a function to decrypt itself at runtime.
This function requires a decryption key, which the author either stores at a
remote location (such as a command and control server) or hides in the malware
delivery method (such as a phishing email); storing the key in the malware
file itself would allow malware analysts to decrypt it.
Malware authors use _packing_ to evade analysis and detection systems. The
developer compresses the binary to hide its functionality. A packed binary
contains a small amount of code which enables it to decompress itself at
runtime. A packed version of a binary appears completely different from the
original, which allows adversaries to trick defense systems such as anti-virus
software. The majority of packed binaries require manual unpacking before
static analysis techniques can be applied. However, there exist automatic
tools such as Un{i}packer which unpack common packers such as UPX, ASPack,
PEtite and FSG [75]. Authorship attribution systems either require the samples
to be unpacked to extract author style, or they apply their process to packed
binaries to test whether authorship style survives packing.
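A common heuristic for flagging likely-packed samples, not taken from the works surveyed here, is byte entropy: compressed or encrypted payloads push a file close to the 8 bits/byte maximum. A minimal sketch (with an illustrative threshold) follows:

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte (0 to 8)."""
    if not data:
        return 0.0
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

def looks_packed(path: str, threshold: float = 7.2) -> bool:
    """Crude packing heuristic: high whole-file entropy suggests a
    compressed or encrypted payload; the threshold is illustrative."""
    with open(path, "rb") as f:
        return shannon_entropy(f.read()) > threshold
```

In practice, per-section entropy of the PE file is more informative than whole-file entropy, since an unpacking stub keeps part of the file at normal code entropy.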
#### 2.4.2 Sophisticated attacks.
_False flags_ used by APTs to imitate other groups [14] are the primary
example of sophisticated attacks currently in use. Simko et al. [112]
considered the idea of _imitating programmer style_ for source code authorship
attribution, which led to the definitions of the Forgery and Masking
techniques. Forgery describes the process an adversary employs to create a
program which the attribution system attributes to a different APT group. For
example, we describe a targeted attack by A on B (involving an innocent party
C) when A successfully convinces B that C performed the attack; if A convinces
B that some other, unspecified attacker executed the attack, we class the
attack as untargeted. Masking is when an adversary manages to pass as the
original author of a program it has modified; for example, an attacker adds
malicious code to an open source project without the original authors knowing.
As with Forgery, masking can be either targeted or untargeted.
Matyukhina et al. [76] develop such attacks against five state-of-the-art
source code authorship attribution models by learning authorship style from
data collected from open source repositories. They create three types of
source code transformation attacks based on capturing author style, covering
both targeted and untargeted attacks. Similarly, Quiring et al. [94] construct
a Monte-Carlo tree search to transform source code for both targeted and
untargeted attacks on two state-of-the-art source code authorship attribution
systems. Interestingly, both circumvent the authorship attribution system of
[23] using different approaches. These attacks on source code authorship
attribution systems show that MAA systems are likely to face similar attacks,
and so any system must take them into account.
## 3 Malware Binary Authorship Attribution
We reviewed papers in the subject field over the last decade to identify
relevant systems and research applicable to MAA. Our search criteria looked
for work which addressed the problem of binary and malware authorship
attribution. We omitted papers which performed a binary classification on
malware and contained no significant contribution on the authorship style of
malicious files; e.g., we omit [65], which classifies malware as APT or
non-APT, but include [66], which classifies malware into specific APT groups.
We identified eighteen papers with a significant relationship to MAA, and we
contacted all authors whose systems were not publicly available. We received a
mixture of responses: some systems were under contractual obligations that
prevented sharing, and other authors did not wish to share their system or
said it would be made available in the future. On top of the eighteen papers,
we identified the survey by Alrabaee et al. [4] which evaluates the systems in
[106, 3, 22]. Although this paper provides no new system, it offers added
insight into the evaluated systems in the context of malware. We focus on: (i)
_data modeling techniques_, (ii) datasets, and (iii) features. We chose these
three areas as they represent the key components in building analytical
systems for understanding large data.
In Section 3.1, we first classify the _data modeling techniques_ used in these
works into five categories: (i) classification techniques identify whether a
piece of malware belongs to a known set of groups; (ii) clustering techniques
enable us to group malware into authors based on underlying data trends; (iii)
anomaly detection methods allow us to label malware that does not conform to a
known group or category; (iv) structured prediction methods predict structured
objects; for example, within a binary file we can identify an author's
structure based on the assembly language; (v) non-machine learning methods
include alternative probabilistic or manual methods.
In Section 3.2, we categorize the works based on the datasets used within the
systems. Specifically, we divide the datasets into benign source code, benign
binaries and malware binaries to match the current approach by researchers.
The benign software approach uses compiled source code from known authors and
the malware approach uses predominantly APT malware. In Section 3.3, we
explore malware author style and derive a categorization of author features
which we use to compare the eighteen BAA systems.
Table 2: A list of known data modeling techniques used to tackle the binary authorship problem, published between 2011 and 2019. There exist five categories of techniques: (i) Classification; (ii) Clustering; (iii) Anomaly detection; (iv) Structured prediction; and (v) Non-machine learning methods.

Data Modeling | Algorithm | Attribution System
---|---|---
Classification | Deep/Artificial Neural Networks (DNN/ANN) | [103], [104], [78], [7], [8]
Classification | Tree Bagging (TB) | [54]
Classification | Random Forests (RF) | [50], [22], [54], [44]
Classification | Support Vector Machine (SVM) | [106], [77], [80], [22], [54], [58], [78]
Classification | Bayesian Classifiers (e.g., Naïve Bayes (NB)) | [50], [54]
Classification | Large Margin Nearest Neighbor (LMNN) | [106]
Clustering | K-Means Clustering | [106], [7], [8]
Clustering | Multi-View Fuzzy Clustering | [47]
Anomaly Detection | Isolated Forests (IF) | [66]
Structured Prediction | Conditional Random Fields (CRFs) | [80], [78]
Non-Machine Learning | Dissimilarity Algorithm | [3]
Non-Machine Learning | Manual Analysis | [74]
Non-Machine Learning | Attribution Weighting | [6]
### 3.1 Data Modeling Techniques
We present all the techniques used in the reviewed papers in Table 2. From the
table, we see fifteen of the systems use various Machine Learning (ML)
methods. We also notice the majority of ML methods favor the classification
problem. We believe the reason lies in the relative ease of solving the
closed-world problem using labeled source code data, which we show in Section
3.2 and Section 4.1. Research on source code authorship attribution mirrors
the same pattern [59].

Hong et al. [54] uniquely explore more than two classification algorithms and
conclude that Random Forest (RF) and Support Vector Machine (SVM) are the most
suitable candidates for solving the problem, owing to their enhanced
performance over the other five techniques they tested. This concurs with the
rest of the field [50, 22, 58, 77, 44]; a minimal sketch of this closed-world
comparison follows below. Seven papers consider three alternative ML methods:
clustering, anomaly detection and structured prediction techniques [106, 66,
80, 78, 7, 8, 47]. We explore these further as they show promise towards the
open-world problem.
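To make the closed-world classification setup concrete, the sketch below
cross-validates the two classifiers the field favors on author-labeled feature
vectors. It is a minimal illustration under our own assumptions, not any
paper's pipeline: the feature matrix is random stand-in data, and the
seven-author scale merely echoes the dataset size reported for [54].

```python
# Minimal sketch of closed-world authorship classification with the two
# classifiers favored by the field (RF and SVM); data here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((300, 128))            # one stylometric feature vector per binary
y = rng.integers(0, 7, size=300)      # author labels (seven candidate authors)

for name, clf in [("RF", RandomForestClassifier(n_estimators=100, random_state=0)),
                  ("SVM", SVC(kernel="linear", C=1.0))]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.2f}")
```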
In detail, Rosenblum et al. [106] use an SVM classifier within their
single-author closed-world model and extend this solution to the open-world
problem by using k-means clustering to group binaries based on previously
built author profiles. For this, they change their original classifier to the
Large Margin Nearest Neighbor (LMNN), as it aids the building of author
profiles. Laurenza et al. [66] approach APT triaging by identifying outliers
of APT style within malware using Isolated Forests (IF), in the spirit of the
sketch below.
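The snippet below is a hedged sketch of such per-group outlier triage, not the
implementation of [66]: one Isolation Forest is fitted per APT group profile,
and an unknown sample is scored against each. The group names, feature
dimensionality and synthetic data are our own assumptions.

```python
# Sketch: per-APT-group Isolation Forests flag whether an unknown sample
# conforms to any known group profile (synthetic stand-in data).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
profiles = {f"APT{i}": rng.normal(loc=i, size=(200, 32)) for i in (1, 3, 28)}
forests = {g: IsolationForest(random_state=0).fit(X) for g, X in profiles.items()}

sample = rng.normal(loc=3.0, size=(1, 32))   # feature vector of unknown malware
for group, forest in forests.items():
    verdict = "inlier" if forest.predict(sample)[0] == 1 else "outlier"
    print(f"{group}: {verdict} (score {forest.decision_function(sample)[0]:+.3f})")
```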
Meng et al. [80] and Meng and Miller [78] extend the multiple-author feature
discovery work ([77]) by applying Conditional Random Fields (CRFs) under the
assumption that multiple authors write consecutive basic blocks. In this
scenario, CRFs outperform SVMs. Continuing this assumption, Meng and Miller
[78] explore the use of Deep Neural Networks directly on the binaries' raw
bytes, without any analysis or feature extraction process (sketched below).
Rosenberg et al. [103, 104] also consider the use of Artificial Neural
Networks for classifying binaries to authors. Alrabaee et al. [7, 8] use
convolutional neural networks to cluster author style and then use a
classifier to determine whether a piece of malware belongs to an author
cluster. Finally, Haddadpajouh et al. [47] choose a multi-view fuzzy
clustering model to group malware into APT groups by identifying loosely
defined patterns among binary artifacts.
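As a minimal sketch of the raw-bytes direction explored by [78] and the
convolutional approach of [7, 8], the PyTorch model below maps fixed-length
byte windows to author logits without any hand-crafted features. The
architecture, layer sizes and ten-author output are illustrative assumptions,
not the published networks.

```python
# Sketch: a 1D CNN over raw bytes for author classification (illustrative
# architecture; not the network of [78] or [7, 8]).
import torch
import torch.nn as nn

class ByteCNN(nn.Module):
    def __init__(self, n_authors: int):
        super().__init__()
        self.embed = nn.Embedding(256, 8)           # one vector per byte value
        self.conv = nn.Conv1d(8, 64, kernel_size=16, stride=4)
        self.pool = nn.AdaptiveMaxPool1d(1)         # global max over the stream
        self.fc = nn.Linear(64, n_authors)

    def forward(self, raw):                         # raw: (batch, length) ints in [0, 255]
        x = self.embed(raw).transpose(1, 2)         # -> (batch, channels, length)
        x = torch.relu(self.conv(x))
        return self.fc(self.pool(x).squeeze(-1))    # author logits

model = ByteCNN(n_authors=10)
windows = torch.randint(0, 256, (4, 4096))          # four 4 KiB byte windows
print(model(windows).shape)                         # torch.Size([4, 10])
```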
Alternative, non-ML methods for solving the BAA problem also rely on features
which identify author style. Alrabaee et al. [3] and Alrabaee et al. [6] use
probabilistic methods: a dissimilarity algorithm and a novel attribution
weighting formula, respectively. Marquis-Boire et al. [74] propose a pipeline
driven by manual malware analysis.
### 3.2 Current Datasets
Datasets remain a key part of any analysis process due to the necessity of
identifying binary-specific trends within the data. We summarize the current
sources used within the eighteen systems reviewed. We split the dataset
analysis into two sections: (i) Benign Source Code and Binaries; and (ii)
Malware Binaries. Afterwards, we provide an overall comparison of the
datasets.
#### 3.2.1 Benign Source Code and Binaries
Due to the lack of author-labeled binaries, the majority of the research in
BAA takes source code from student competitions and compiles it with a variety
of compilers to create a ground truth binary dataset. This approach gives
researchers more control over the cleanliness of the dataset and,
specifically, greater certainty in the verification of the ground truth. It
also lets them choose which complexities the toolchain process introduces,
artificially enlarge datasets by using multiple toolchain processes, and link
author styles learned from source code stylometry. On the other hand, this
approach leads to datasets which fail to represent the real world: they tend
to remain static and do not evolve alongside author styles. Additionally,
these datasets require extra time to cover all the different toolchain
combinations accounting for the various compilation methods. Researchers also
restrict the datasets to the C and C++ languages due to their popularity
[116]; however, malware is written in a variety of languages. We describe the
four main sources below.
##### Google Code Jam (GCJ) [45]
This worldwide student competition has run annually since 2008, and the
organizers publish all the problems and solutions for anyone to download.
There are multiple benefits to using the GCJ dataset for authorship
identification. Firstly, all the participants code similar programs, which
allows researchers to focus purely on author style rather than program
functionality. Secondly, the dataset consists of diverse authors from all over
the world. Thirdly, GCJ offers substantial prizes, meaning the organizers must
know each participant's identity; hence, unlike malware creators, participants
have no need to hide their author style. In general, the overall quality of
the submissions varies, as not all the samples compile, meaning researchers
must clean the dataset before using it.
Hendrikse [50] uses the script written by Caliskan et al. [22] to obtain the
GCJ dataset, although the two works use different subsets of the same dataset
for testing and training their attribution systems. Alrabaee et al. [8] use
the GCJ dataset to build synthetic multi-author binaries by combining the
source code of various entries, constructing binaries with between two and
eight authors. However, this method produces binaries in which the author
styles remain distinctly separated. We therefore believe it constructs a poor
dataset for training BAA systems, since such cleanliness allows a system to
distinguish the authors too easily. The dataset nevertheless provides an
opportunity to test systems and evaluate whether they actually perform highly
on such clean data.
##### GitHub [42]
This is a hosting site for software development which uses git, an open source
version control system. GitHub encourages agile development for software
projects and allows multiple authors to edit and contribute to repositories
whilst recording the contribution of each user. Meng et al. [79] created the
tool git-author to attribute GitHub repositories to each contributing author,
enabling them to create a labeled dataset for multiple authorship attribution.
Three works use git-author for the ground truth of their attribution system
[77, 80, 78]. Additionally, the GitHub community ranks each repository out of
five stars, which Caliskan et al. [22] use to judge programmer ability. In
that work, they build their GitHub dataset using only repositories containing
at least two hundred lines of code, and they omit any forked repositories or
any named “Linux”, “kernel”, “OSX”, “LLVM” or “next”. They state this ensures
a sufficient amount of code exists to learn author style and also reduces the
amount of shared code within their GitHub dataset. Alrabaee et al. [8] collect
fifty C/C++ projects with between 50 and 1,500 contributing authors each.
Introducing such a high number of authors potentially saturates author style
boundaries, as natural cross-over between styles makes it even harder to
distinguish the distinct authors.
Several disadvantages arise from using this data source for malware
attribution. Firstly, the majority of repositories are benign projects, and
malware authors are unlikely to use popular open source repositories for
malware development. Secondly, the openness of GitHub allows anyone to clone
the code and, in turn, the author style. Finally, it exposes the code to
attacks in which an adversary imitates the author's style to modify the code
without the repository owner noticing [112].
##### Planet Source Code [93]
This platform hosts source code and claims to hold 4.5 million lines of code,
including approximately 200,000 lines of C/C++. When users upload their code
to the site, they rank their own skill level as unranked, beginner,
intermediate or advanced. Other site members then rank each submission for the
various awards the site offers. The combination of these two ranking methods
gives site users confidence in the coding standard. As with the previous data
sources, the assumption is that any uploaded source code belongs to the user
who uploaded it.
##### Other Benign Sources.
In addition to the three public repositories above, Rosenblum et al. [106] and
Alrabaee et al. [6] use student coursework. Alrabaee et al. [6] assume the
source code author is the student who submitted the coursework, whereas the
dataset used by Rosenblum et al. [106] included submissions where students
worked in pairs; to mitigate this issue, they performed manual analysis to
identify a single author for each program. Alternatively, academics use
plagiarism detectors on coursework submissions to identify where students
cheated, which provides a form of “authorship attribution”. However,
plagiarism checkers fail to check for contributions from unknown third parties
[2, 39]. In comparison, for MAA we must consider methods to identify unknown
programmers/malware authors due to the “underground” behavior exhibited [1].
Rosenblum et al. [106] state the students in their dataset received skeleton
code which potentially influenced the students' programming style, even though
they attempted to remove all the skeleton code from the samples. Unlike
malware authors, the students must identify themselves to receive a score for
their coursework and are therefore unlikely to implement methods to hide their
author style. Unfortunately, data protection policies prevent both Rosenblum
et al. [106] and Alrabaee et al. [6] from sharing their datasets.
Kalgutkar et al. [58] and Gonzalez et al. [44] created a benign Android
application dataset using applications from stores such as Google Play Store,
Appland, Anzhi, Aptoide, Fdroid, MoboMarket, Nduoa, Tincent and Xiaomi, which
they attribute using the signing certificates from the signed APK files.
Additionally, Gonzalez et al. [44] use the store called 3gyu. As well as the
previous application stores, both papers use APK files from GitHub and an
on-line collaborative system called Koodous; we sketch this certificate-based
labeling below.
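The snippet below is a hedged illustration of certificate-based labeling, not
the pipeline of [58] or [44]: it reads the signer certificates from the PKCS#7
block of a v1-signed APK. It assumes the `cryptography` package (version 3.1
or later), and the file path is illustrative.

```python
# Sketch: label an APK by its signer certificate subject(s). APKs are ZIP
# archives; v1 signatures store a PKCS#7 blob under META-INF/.
import zipfile
from cryptography.hazmat.primitives.serialization.pkcs7 import (
    load_der_pkcs7_certificates,
)

def apk_signer_subjects(apk_path: str) -> list:
    subjects = []
    with zipfile.ZipFile(apk_path) as apk:
        for name in apk.namelist():
            if name.startswith("META-INF/") and name.endswith((".RSA", ".DSA", ".EC")):
                for cert in load_der_pkcs7_certificates(apk.read(name)):
                    subjects.append(cert.subject.rfc4514_string())
    return subjects

# Usage (path is illustrative): APKs sharing a subject receive the same label.
# print(apk_signer_subjects("sample.apk"))
```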
##### Toolchain
The common toolchain approach uses multiple compilers and optimization levels;
every combination of compiler and optimization level produces a unique binary
sample from the same source code. This generic approach excludes the use of
varying obfuscation tools and modifications which would create further unique
binaries. Six papers [50, 78, 4, 6, 7, 8] create a dataset using multiple
compilers from both open source and commercial sources, such as Clang [30],
GNU [43], ICC [57], LLVM [71], Microsoft Visual Studio [82] and Xcode [11]. A
sophisticated malware developer might create a customized compiler, yet this
remains unlikely: compiler design is complex, and a custom compiler would
itself act as a unique identifier. The optimization functionality of compilers
decreases a program's runtime but increases the compilation duration, so
programmers weigh this cost-benefit when deciding which level of optimization
to apply. As with using different compilers, varying the optimization level
affects author style. Eight papers consider at least one optimization level
within their research to account for the effect of optimization on author
style [22, 77, 80, 50, 78, 4, 6, 8]. However, the impact of toolchains on
author style still requires further understanding; a sketch of this
dataset-generation step follows below.
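The sketch below illustrates the common dataset-multiplication step under
stated assumptions: the compiler names must be on the host's PATH, and the
source path is a placeholder for one author's submission. Each (compiler,
optimization) pair yields a distinct binary variant of the same author's code.

```python
# Sketch: multiply one author's source sample across a toolchain grid.
import itertools
import subprocess

SOURCE = "solution.cpp"                   # placeholder for one submission
COMPILERS = ["g++", "clang++"]            # assumed to be installed on PATH
OPT_LEVELS = ["-O0", "-O1", "-O2", "-O3"]

for cc, opt in itertools.product(COMPILERS, OPT_LEVELS):
    out = f"solution_{cc}_{opt.lstrip('-')}"
    subprocess.run([cc, opt, "-o", out, SOURCE], check=True)
    print("built", out)
```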
#### 3.2.2 Malware Binaries
Creating an author-labeled malware dataset presents difficulties similar to
those of creating a family-labeled malware dataset [109]. We show particular
interest in APT malware, as APT groups tend to use sophisticated adversarial
techniques. To the best of our knowledge, there exist two attempts to create a
large APT-labeled dataset. Laurenza et al. [66] create a list of APT groups
and use it to scan publicly available reports written by threat intelligence
companies, government departments, anti-virus and security companies for
related malware hashes. They use these hashes to download the samples from
sources such as VirusTotal, and store the resulting dataset on GitHub [64].
For the purpose of their paper, Laurenza et al. [66] use a subset of [64]
consisting of 19 APT groups and over 2,000 malware samples. Due to the
unavailability of the exact dataset used in Laurenza et al. [66], we analyzed
the GitHub dataset [64]. The second attempt to create an APT malware dataset
is by “cyber-research”, who store it on GitHub [33]. The dataset contains
3,594 malware samples related to twelve APT groups allegedly sponsored by five
different nation-states (“cyber-research” also include information on a
further 855 samples which they could not obtain). Similarly to Laurenza et al.
[66], “cyber-research” collect the malware samples using open source threat
intelligence reports from multiple vendors and then download them from
VirusTotal. However, “cyber-research” omit the method they used to label the
malware hashes from the 29 sources, so researchers have no assurances on the
validity of the labels. We note Haddadpajouh et al. [47] use a subset of [33],
focusing on five groups, namely APT1, APT3, APT28, APT33, and APT37. In both
cases, we observed general issues with creating labeled APT malware datasets:
* •
Names used for a single APT group often differ, which leads to multiple
aliases and uncertainty over which common name to use as the label. In some
cases, different groups share the same aliases: either researchers linked
multiple APT groups to the same nation, or multiple APT groups potentially
collaborated. This makes it difficult to create a single list containing a
one-to-one relationship between sample and group. The problem relates to the
one solved by Hurier et al. [56], who produce a distinct naming dataset for
malware family names, as anti-virus vendors use their own naming conventions.
* •
Reports on APT groups often reference multiple groups when the researchers
compare or link groups. Therefore, researchers must take extra care when
automatically extracting labels from the reports; for example, within [64] the
same reports are linked to differing APT groups.
Due to these problems and the limited availability of APT datasets, some
authors use alternative malware datasets or obtain datasets from private
sources. Alrabaee et al. [4] obtain malware from their own security lab (Zeus
and Citadel), from Contagio (Flame and Stuxnet) and from VirusSign (Bunny and
Babar). They omit the method they use to determine the ground truth for this
dataset. Alrabaee et al. [6] appear to use the same dataset and state they
determined the labeling manually. Alrabaee et al. [7] use a similar dataset
but add samples of the Mirai botnet. The Microsoft Malware Classification
Challenge dataset [102] provides an alternative popular malware source, and
three works use subsets of it [3, 7, 8]. The dataset by Ronen et al. [102]
contains nine malware families (Ramnit, Lollipop, Kelihos_ver3, Vundo, Simda,
Tracur, Kelihos_ver1, Obfuscator.ACY and Gatak), and currently there exist no
links between these nine families and APT groups [84].
Four papers omit their malware sources, and all four use cyber security
experts to label their datasets [74, 103, 54, 104]. Only the works by
Rosenberg et al. [103, 104] use datasets with labels representing the nation
states which the APT groups are allegedly from or backed by; in particular,
they use malware allegedly from or backed by two countries, namely Russia and
China. Rosenberg et al. [104] state the dataset consists of four unique
malware families in the training set (Net-Traveler and Winnti/PlugX, both
allegedly China; Cosmic Duke and Sofacy/APT28, both allegedly Russia), with
400 samples from each family, and two unique malware families in the testing
set (Derusbi, allegedly China, and Havex, allegedly Russia), with 500 samples
from each family. Marquis-Boire et al. [74] use the smallest dataset,
containing only three samples (NBOT, Bunny and Babar), which they claim belong
to the APT group named Snowglobe (a group allegedly associated with France).
The alternative approaches to MAA by Kalgutkar et al. [58] and Gonzalez et al.
[44] explore Android malware datasets. These works offer an interesting
approach towards labeling the malware via the signing certificates from the
signed APK files; this approach is unique to Android malware and therefore
fails to generalize. Gonzalez et al. [44] also perform manual analysis, as
they consider much more malware, including APK files from VirusTotal, Hacking
Team and the Drebin dataset.
#### 3.2.3 Comparison of Datasets
Table 3 provides an overview of all the different datasets used within the
current research. We organized Table 3 as follows: we clustered all the
columns relating to _benign source code and binaries_ and incorporated our
discussion on _malware binaries_ under the same titled column; we kept the
_Ground Truth_ column separate to highlight the various methods across both
benign and malware datasets; finally, we recorded the largest number of
authors and binaries considered in each work. We note the work by Alrabaee et
al. [8] appears twice in the table because it uses two distinct datasets for
tackling the single and multiple author problems.

Overall, we observe a lack of systems using malware as the sole dataset. Among
those papers which use malware, researchers collect binaries from various
sources and samples. This variety means there exists little overlap between
the different datasets, preventing true system comparison. In most cases, few
samples exist per author, which makes it extremely hard for an attribution
system to pick up on author style trends.

Limited datasets exist for the multiple authorship problem. Currently,
researchers either use benign source code from GitHub repositories to compile
multiple-author binaries or synthetically create them from single-author
benign source code. Both methods create binaries which represent the extremes
of author style within a binary: the GitHub binaries contain many author
styles distributed across the binary [34], while the synthetic binaries
contain multiple author styles separated into distinct sections within the
binary [8]. Additionally, both datasets lack specific malware author style
traits.
Table 3: A summary of the largest datasets and sources used within the papers
we reviewed published between 2011 and 2019. We include the toolchain process
for the datasets created from source code and the method of author labeling to
determine the “Ground Truth”.
| | Benign Source Code and Binaries | | | |
---|---|---|---|---|---|---
Paper | Year | GCJ | GitHub | Planet | Other^a | Languages | Optimization^b | Compilers^c | Malware Binaries | Ground Truth^d | Authors^e | Binaries^e
Rosenblum et al. [106] | 2011 | ✓ | | | ✓ | C/C++ | | G | | $\star$ | 191 | 1,747
Alrabaee et al. [3] | 2014 | ✓ | | | | C/C++ | | | | $\star$ | 7 | $\square$
Marquis-Boire et al. [74] | 2015 | | | | | | | | ✓ | $\bullet$ | 3 | 3
Meng [77] | 2016 | | ✓ | | | C/C++ | 1 | G | | $\triangleleft$ | 282 | 170
Meng et al. [80] | 2017 | | ✓ | | | C/C++ | 1 | G | | $\triangleleft$ | 284 | 169
Rosenberg et al. [103] | 2017 | | | | | | | | ✓ | $\bullet$ | 2 | 4,200
Hendrikse [50] | 2017 | ✓ | | | | C/C++ | 2 | GLM | | $\star$ | 14 | 1,863
Alrabaee et al. [4] | 2017 | ✓ | ✓ | | | C/C++ | 1 | GIMf X | ✓ | $\star\diamond\triangleright$ | 1,000 | $\square$
Caliskan et al. [22] | 2018 | ✓ | ✓ | | | C/C++ | 3 | G | | $\star$ | 600 | 5,400
Meng and Miller [78] | 2018 | | ✓ | | | C/C++ | 5 | GIM | | $\triangleleft$ | 700 | 1,965
Hong et al. [54] | 2018 | | | | | | | | ✓ | $\bullet$ | 7 | 1,088
Alrabaee et al. [6] | 2018b | ✓ | ✓ | ✓ | ✓ | C/C++ | 2 | GICM | ✓ | $\star\diamond$ | 23,000 | 103,800
Rosenberg et al. [104] | 2018 | | | | ✓ | | | | ✓ | $\bullet$ | 2 | 4,200
Kalgutkar et al. [58] | 2018 | | ✓ | | ✓ | Javag | | | ✓ | $\diamond\bullet$ | 40 | 1,559
Gonzalez et al. [44] | 2018 | | ✓ | | ✓ | Javag | | | ✓ | $\diamond\bullet$ | 30 | 420
Laurenza et al. [66] | 2018 | | | | | | | | ✓ | $\bullet$ | 19 | 2,000+
Alrabaee et al. [7] | 2019a | ✓ | ✓ | | | C/C++ | | GICM | ✓ | $\star\bullet$ | 21,050 | 428,460
Alrabaee et al. [8] | 2019b | ✓ | ✓ | | | C/C++ | 2 | GICM | ✓ | $\star\bullet$ | 1,900 | 31,500
Alrabaee et al. [8] | 2019b | ✓ | ✓ | | | C/C++ | 4 | GICM | | $\star$ | 350 | 50
Haddadpajouh et al. [47] | 2020 | | | | | | | | ✓ | $\bullet$ | 5 | 1200
* a
Other sources for benign datasets, where ✓ means they state the source
* b
Number of optimization levels used (blank means the paper does not state/consider this)
* c
Compilers used: G - GCC/g++; I - ICC; L - LLVM; C - Clang; M - Microsoft Visual Studio; X - Xcode
* d
Ground truth method: $\star$ - Source Code Author; $\diamond$ - Manually Determined; $\triangleleft$ - git-author [79]; $\triangleright$ - Undisclosed; $\bullet$ - Cyber Security Experts/Malware Analysis Reports
* e
Largest dataset stated
* f
Alrabaee et al. [4] state they use Visual Studio in their methodology but include no dataset details.
* g
Android APK Files
* •
$\square$ - Not Disclosed.
In terms of authorship attribution, researchers treat the APT binaries as
single-author, which importantly introduces false author style links. The
three most promising APT datasets, created by Laurenza et al. [66],
cyber-research [33] and Rosenberg et al. [104], all exhibit flaws. The dataset
by Laurenza et al. [66] contains many APT groups but few samples per group,
whereas the dataset by Rosenberg et al. [104] contains fewer groups but more
samples per group. The dataset by cyber-research [33] lacks assurances
surrounding the labeling process.

The issue of verifying the ground truth of malware dataset labels still
requires investigation. Source code authors appear easier to distinguish [59];
in comparison, the majority of malware requires manual analysis or cyber
security experts. In the case of malware from the campaign titled ‘Olympic
destroyer’, the threat actor used _false flags_ to trick analysts into
arriving at multiple attribution hypotheses. The original malware authors
included specific code reuse from previous campaigns by other attackers, and
additionally tried to confuse malware analysts by using different spoken
languages within the comments, user interface and function names. Different
analysts discovered these artifacts at different times, leading to
attributions to groups from Russia, Iran, China and North Korea [14].
Fundamentally, using different datasets means each approach answers a slightly
different research question. Furthermore, this suggests a lack of sharing and
effort across the research field to try to solve the same problems. In Section
5, we hope to change this through the creation of an APT malware dataset which
addresses the limitations and shortcomings we identify, and which we publish
for the community to use in future research.
### 3.3 Author Features
Capturing author style provides the key to identification. The majority of the
state-of-the-art methods determine author style by extracting multiple
features and then completing feature-ranking experiments using their data
modeling techniques (Table 2) on their chosen ground truth dataset (Table 3).
Researchers tend to extract features based on domain expert knowledge or
previous research. In some cases, papers [8, 80] use features extracted
directly from the binary through either vector or image representations for
some experiments; there, it remains unclear which features the model actually
uses for author style, which presents a gap in the research area. Going
forward, we review only those specific features which the papers explicitly
state.
Many of the state-of-the-art BAA systems rely on the area of _code stylometry_
research as a starting point for features related to author style. Code
stylometry features belong to three categories: lexical, syntactic and
semantic. However, these omit any features derived from code execution. To
mitigate this, Kalgutkar et al. [59] propose that researchers capture author
style from behavioral and application-dependent characteristics.
#### 3.3.1 Malware Author Style
Malware authors tend to have unique goals [99], which we can use to help
determine author style by extracting features aimed at capturing the goal of
the malware author. Marquis-Boire et al. [74] remain the only paper to
specifically consider malware features for author style, through their aim of
identifying credible links between APT malware. In particular, they pick up on
the malware programming style of APT groups, such as the use of stealth,
evasion and data-exfiltration techniques. Kaspersky [60] discuss similar
themes from their binary similarity research but widen the search to toolkits,
exploits and targeted victims. We extend the ideas from these works, together
with our previous discussion, to devise five macro-categories, namely
_strings_, _implementation_, _infrastructure_, _assembly language_ and
_decompiler_, which we use to compare the state-of-the-art systems in Section
3.3.3 along with further explanations of each category.
Table 4: A list of the tools used during the feature extraction process
Tool | Type | Extraction Technique | Attribution System
---|---|---|---
angr [111] | ds | | [6]
BinComp [97] | cp | | [7]
BinShape [110] | o | | [6]
bjoern [73] | ds | | [22]
Cuckoo Sandbox [46] | s | | [106], [104], [47]
Custom Android App | u, p | | [58], [44]
DECAF [49] | s | | [50]
Dyninst [92] | ds | | [106]1, [77], [80], [78], [7]1, [8]1
FLOSS [38] | se | | [66]
FOSSIL [5] | o | | [6]
IDA Pro/Hex-Rays [52] | ds, d | | [3], [50], [22], [7], [8]
Jakstab [61] | ds | | [7], [8]
Manually | o | | [74]
Netwide Assembler [35] | ds | | [22]
Nucleus [10] | ds | | [8]
pefile [27] | o | | [66], [7], [8]
radare2 [95] | ds | | [22]
Unknown tool used | o | | [54]
UPX [91] | u | | [7], [8]
* 1
[106], [7] and [8] use ParseAPI, which is now included within Dyninst [92].
* •
Key (Extraction Technique): Static Analysis; Static and Dynamic Analysis; Dynamic Analysis
* •
Key (Type): ds - disassembler; o - other; s - sandbox; u - unpacker; p - parser; d - decompiler; se - string extractor; cp - compiler provenance
#### 3.3.2 Feature Extraction Tools
All these categories require tools able to extract features from varying
aspects of the binary. We collate all the tools used in the eighteen systems
in Table 4, which allows us to assess the popularity of each tool and
understand why some tools are used more than others. We note most of the tools
perform static extraction. The most popular tool is Dyninst [92], closely
followed by IDA Pro/Hex-Rays [52]; Dyninst most likely edges out IDA Pro
because it is open source. Five of the systems use multiple tools to extract
different features [6, 7, 8, 22, 106], and this appears to be the best
approach for extracting features across the five macro-categories we recommend
in Section 3.3.1. Only two unpackers were used: UPX [91] by Alrabaee et al.
[7, 8], and a custom Android app by Kalgutkar et al. [58] and Gonzalez et al.
[44]. This shows the lack of interest in applying the current state-of-the-art
methods to the malware domain. Only two dynamic analysis tools were used in
total: Rosenberg et al. [103, 104] and Haddadpajouh et al. [47] both use
Cuckoo Sandbox [46], and Hendrikse [50] uses DECAF [49]. Unfortunately, the
tool used by Hong et al. [54] is undisclosed. Overall, a total of nineteen
tools were used, and there exists limited knowledge on whether extracting the
same features via different tools affects the ability to capture authorship
style; we sketch one such static extraction with pefile below.
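As a concrete taste of static extraction, the sketch below pulls a handful of
header-level features with pefile (one of the tools in Table 4). The selection
of fields is our own representative assumption; the systems citing pefile
[66, 7, 8] each extract their own feature sets.

```python
# Sketch: static PE header features with pefile (field selection is
# illustrative, not any system's exact feature set).
import pefile

def pe_header_features(path: str) -> dict:
    pe = pefile.PE(path, fast_load=True)   # parse headers and sections only
    return {
        "machine": pe.FILE_HEADER.Machine,
        "timestamp": pe.FILE_HEADER.TimeDateStamp,          # timestamp traits
        "entry_point": pe.OPTIONAL_HEADER.AddressOfEntryPoint,
        "linker": (pe.OPTIONAL_HEADER.MajorLinkerVersion,
                   pe.OPTIONAL_HEADER.MinorLinkerVersion),  # toolchain hint
        "sections": [s.Name.rstrip(b"\x00").decode(errors="replace")
                     for s in pe.sections],
    }

# Usage (path is illustrative):
# print(pe_header_features("sample.exe"))
```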
#### 3.3.3 Feature Comparison
We collated all the features from the eighteen systems and organized them into
the five feature macro-categories related to malware author style, 72 features
in total. We structured the features by cross-referencing them against the
systematization of the data modeling techniques from Section 3.1 and present
the results in Table 5 and Table 6. Where possible, we condensed papers which
used exactly the same features into single columns. We also include the column
“extraction technique” to indicate the program analysis techniques required to
extract each feature. Because all the “assembly” features require only static
extraction techniques, we present the categorization from the multiple author
works [77, 80, 78] alongside the single author works. We include the column
“Authorship Problem” in Table 6 for easier comparison across both the single
and multiple author problems.

From these results, we remark there exists no favored feature set upon which
the research field currently agrees; in fact, we recorded 42 unique features.
In terms of feature extraction, researchers show a clear preference towards
static analysis (36 features), and in terms of macro-category there exists a
clear preference towards “assembly language”. In the following, we provide
additional insight into the five macro-categories in terms of the application
of these features to MAA. We use the macro-categories due to the vast number
of features.
Table 5: State-of-the-art strings, implementation, infrastructure and
decompiler features used in binary and malware authorship attribution
research.
| | Classifying | Clustering | Anomaly Detection | Non ML
---|---|---|---|---|---
String Features | Extraction Technique | [22] | [103] | [54] | [104] | [58] | [7] | [8] | [47] | [66] | [74] | [6]
Artifact naming schemes/Algorithms | | | | | | | | | | | ✓ |
C&C Commands | | | | | | | | | | | ✓ |
Cuckoo Sandbox Report (Treated as Words) | | | ✓ | | ✓ | | | | | | |
Encryption Keys | | | | | | | | | | | ✓ |
Errors | | | | | | | | | | | ✓ | ✓
File Header | | | | | | | | | ✓ | ✓ | | ✓
Function Names | | | | | | | | | | | ✓ | ✓
Grammar Mistakes | | | | | | | | | | | ✓ |
MS-DOS Header | | | | | | | | | | ✓ | |
N-Grams (Words) | | ✓ | | | | ✓ | ✓ | | | | ✓ |
Optional Header | | | | | | | | | | ✓ | |
Operating System | | | | | | | | | | | | ✓
Programming Language Keywords | | ✓ | | | | | | | | | | ✓
Timestamp Formatting | | | | | | | | | | | ✓ |
Implementation Features | | | | | | | | | | | |
Binary Data Directories | | | | | | | | | | ✓ | |
C&C Parsing Implementation | | | | | | | | | | | ✓ |
Code Re-use | | | | | | | | | | | ✓ |
Compiler | | | | | | | | | | | ✓ | ✓
Configuration Techniques | | | | | | | | | | | ✓ |
Constructor Design | | | | | | | | | | | ✓ |
Cyclometric Complexity | | | | | | | | ✓ | | | |
Execution Traces | | | | | | | | ✓ | | | |
File Interactions Traits (Locations, Modified, etc) | | | | ✓ | | | | | | | ✓ | ✓
Function Lengths | | | | | | | | | | ✓ | | ✓
Multithreading Model (Use of Mutexes) | | | | ✓ | | | | | | | ✓ |
Obfuscated String Statistics | | | | | | | | | | ✓ | ✓ |
Obfuscation Functions | | | | | | | | | | | ✓ |
Propagation Mechanisms | | | | | | | | | | | ✓ |
Registry Keys | | | | ✓ | | | | | | | |
System API Calls | | | | ✓ | | | | ✓ | ✓ | | ✓ | ✓
System/OS Version Determination technique | | | | | | | | | | | ✓ |
Software Architecture & Design | | | | | | | | | | | ✓ |
Stealth and Evasion Techniques | | | | | | | | | | | ✓ |
Use of Global Variables | | | | | | | | | | | ✓ |
Infrastructure Features | | | | | | | | | | | |
DNS URLs | | | | ✓ | | | | | | | ✓ |
IP addresses (C&C Servers) | | | | ✓ | | | | | | | ✓ |
Network Communication | | | | | | | | | | | ✓ | ✓
User Agent/Beaconing Style | | | | | | | | | | | ✓ |
Decompiler Features | | | | | | | | | | | |
Abstract Syntax Tree | | ✓ | | | | | | | | | |
* •
Key (Extraction Technique): Static Analysis; Static and Dynamic Analysis; Dynamic Analysis
Table 6: State-of-the-art assembly features used in binary and malware
authorship attribution research. All assembly features are extracted using
static analysis.
| | Classifying | Clustering | Anomaly Detection | Structured Prediction | Non ML
---|---|---|---|---|---|---
Assembly Features | Authorship Problem | [106] | [22] | [77, 80, 78] | [50] | [54] | [44] | [7] | [8] | [106] | [47] | [66] | [80, 78] | [3] | [74] | [6]
Annotated Control Flow Graph | | | | | | | | ✓ | | | | | | | |
Backward Slices of Variables | | | | ✓ | | | | | | | | | ✓ | | |
Block Catches Exceptions | | | | ✓ | | | | | | | | | ✓ | | ✓ |
Block Position Within a Function CFG | | | | ✓ | | | | | | | | | ✓ | | |
Block Throws Exceptions | | | | ✓ | | | | | | | | | ✓ | | ✓ |
Byte Codes | | | | | | | | | | | ✓ | | | | |
Call Graphlets | | ✓ | | | | | | | | ✓ | | | | ✓ | | ✓
CFG Edge Types | | | | ✓ | | | | | | | | | ✓ | | |
Constant Values | | | | ✓ | | | | | | | | | ✓ | | | ✓
Control Flow Graph Edges & Node Unigrams | | | ✓ | | | | | | | | | | | | |
Control Flow Graph Hashes | | | | | ✓ | | | | | | | | | | |
Data Flow Graph | | | | | | | | ✓ | | | | | | | |
Exact Syntax Template Library | | | | | | | | | | | | | | ✓ | |
Function (Opcode Chunks) | | | | | | ✓ | | | | | | | | | | ✓
Function CFG Width & Depth | | | | ✓ | | | | | | | | | ✓ | | |
Graphlets | | ✓ | | | | | | | | ✓ | | | | ✓ | |
Idioms (Instructions) | | ✓ | | ✓ | | | | | | ✓ | | | ✓ | ✓ | |
Imports & Exports (Shared Libraries, Method Names) | | | | | | ✓ | | | | | | ✓ | | | |
Inexact Syntax Template Library | | | | | | | | | | | | | | ✓ | |
Instruction Operand Sizes & Prefixes | | | | ✓ | | | | | | | | | ✓ | | |
Library Calls | | ✓ | | ✓ | | | | | | ✓ | | | ✓ | ✓ | | ✓
Loop Nesting Level | | | | ✓ | | | | | | | | | ✓ | | |
Loop Size | | | | ✓ | | | | | | | | | ✓ | | | ✓
N-Grams (Opcodes) | | ✓ | | ✓ | | | ✓ | | ✓ | ✓ | | | ✓ | ✓ | | ✓
Number of Basic Blocks | | | | | | | | | | | | | | | | ✓
Number of Input/Output/internal registers of a block | | | | ✓ | | | | | | | | | ✓ | | |
Number of Live Registers at Block Entry & Exit | | | | ✓ | | | | | | | | | ✓ | | |
Number of Used & Defined Registers | | | | ✓ | | | | | | | | | ✓ | | |
Opcodes | | | | | | | | | | | ✓ | | | | |
Register Flow Graph | | | | | ✓ | | | | | | | | | ✓ | |
Stack Height Delta of the Block | | | | ✓ | | | | | | | | | ✓ | | | ✓
Stack Memory Accesses | | | | ✓ | | | | | | | | | ✓ | | ✓ | ✓
Super Graphlets | | ✓ | | | | | | | | ✓ | | | | ✓ | |
* •
Key (Authorship Problem): Single Authorship Problem; Multiple Authorship Problem
##### Strings.
These features capture any strings, artifacts and values within the malicious
binary. The author influences any embedded _strings_, so extracting them
yields a wealth of knowledge about the author. For example, strings may reveal
the native language of the author and therefore their potential location.
Naming conventions for both functions and artifacts reflect the author's
personality and choices. Other significant author choices which strings help
infer include the programming language, encryption techniques and
error-handling messages.

Malware authors understand how much information strings can leak and therefore
continue to research methods for either removing author style or changing it
to imitate another author. Freely available tools such as packers, obfuscators
and strippers allow any author to remove their style from strings and
constants, and the simple task of adding false artifacts or function names
will change the author style within strings. We note the special case of the
works by Rosenberg et al. [103, 104], who convert the MAA problem into a
Natural Language Processing (NLP) problem by analyzing Cuckoo Sandbox reports
[46] as text. A minimal sketch of string-based features follows below.
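The sketch below recovers printable ASCII runs from a binary, in the manner of
strings(1), and counts word n-grams over them. The length threshold, n and
file path are illustrative assumptions, not any system's exact configuration.

```python
# Sketch: printable-string extraction and word n-gram counting as one
# slice of a string-based author profile (thresholds are illustrative).
import re
from collections import Counter

def string_ngrams(path: str, n: int = 2, min_len: int = 5) -> Counter:
    with open(path, "rb") as f:
        data = f.read()
    runs = [m.group().decode("ascii")
            for m in re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, data)]
    words = [w for run in runs for w in run.split()]
    return Counter(tuple(words[i:i + n]) for i in range(len(words) - n + 1))

# Usage (path is illustrative):
# print(string_ngrams("sample.bin").most_common(10))
```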
##### Implementation.
Features in this category describe author choices involving both malware
design and execution, for example the approach the author takes to interact
with the victim (e.g., the propagation method). Predominantly, researchers
extract these features during dynamic analysis, which makes them much harder
to obfuscate and mask. Some of these features can be mimicked (e.g., the
toolchain process), which potentially allows authors to imitate other authors.
If dynamic analysis fails, then analysts must rely on much harder and more
manual techniques to identify implementation features. These features also
change depending on both the malware author's development environment and the
victim's system, making them harder to automate across varying types of
malware.
##### Infrastructure.
We use this category to describe any feature which relates to specific
infrastructure choices made by the author, e.g., the choice of IP address for
a command and control server. If a threat actor reuses the same
infrastructure, this may offer an easy attribution decision. However, it might
not be straightforward: authors might compromise other authors' infrastructure
in order to imitate them, or an author may loan out their infrastructure. We
also expect sophisticated authors to change their infrastructure for each
attack or, at the very least, mask identifiers such as IP addresses through
methods such as IP spoofing or proxies.
##### Assembly Language.
We collate here any feature extracted from the assembly language
representation of the binary. Researchers mainly extract these using static
analysis, which makes them amenable to automation. These features focus on
capturing instructions, control flow, data flow, external interactions and
register flow, either at the function level of the binary or, more
fine-grained, through the basic blocks of the program. Capturing both the
program flow and fine-grained features makes it harder for the author to
modify them for adversarial purposes. The assembly language also presents an
opportunity to feed it directly as a raw input into a Deep Neural Network
(DNN) [80].

A malware analyst relies on a state-of-the-art disassembler to preserve author
style; otherwise, the authorship attribution problem morphs into the binary
similarity problem. Furthermore, there exist multiple methods to build _basic
blocks_ and then _control flow graphs_ (CFGs) to understand the flow of the
program. CFGs include important program aspects such as _error handling_,
_functions_ and _library and system interactions_. Even when built, graphs
provide further complications, as subgraph isomorphism remains an NP-complete
problem [31]. Therefore, alternative representations must be sought; however,
these alternatives become approximations through statistical representations,
which increases the likelihood of losing authorship style (the opcode n-gram
sketch below is one such approximation). Finally, choices in the toolchain
process (e.g., CFG flattening) present even more difficulties to overcome when
building flow graphs.
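The sketch below computes the opcode n-gram feature from Table 6 with the
Capstone disassembler. It is a hedged illustration: real systems first recover
functions and basic blocks from the binary, whereas here a tiny hand-written
x86-64 byte sequence stands in for extracted code.

```python
# Sketch: opcode n-grams over a disassembled byte buffer using Capstone.
from collections import Counter
from capstone import Cs, CS_ARCH_X86, CS_MODE_64

def opcode_ngrams(code: bytes, base: int = 0x1000, n: int = 3) -> Counter:
    md = Cs(CS_ARCH_X86, CS_MODE_64)
    ops = [insn.mnemonic for insn in md.disasm(code, base)]
    return Counter(tuple(ops[i:i + n]) for i in range(len(ops) - n + 1))

# push rbp; mov rbp, rsp; sub rsp, 0x10; pop rbp; ret
code = bytes.fromhex("554889e54883ec105dc3")
print(opcode_ngrams(code, n=2).most_common())
```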
##### Decompiler.
This process attempts to recover the source code from the binary and remains
an unsolved problem [101]. State-of-the-art decompilers such as IDA [52]
recover code which closely represents the original source code, especially
when no optimization or other code modifications occurred during the toolchain
process. Source code recovery allows researchers to extract author style
features developed for _source code authorship attribution_, e.g., abstract
syntax trees [22]. However, these features rely heavily on state-of-the-art
decompilers, and any binary modifications tend to severely impact the ability
to recover them [36]. We sketch one such AST-derived feature below.
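To illustrate a decompiler-side feature, the sketch below counts AST node
types over recovered code, in the spirit of the abstract syntax tree features
of [22]. We parse a toy C snippet with pycparser as a stand-in for decompiler
output; real recovered code would first need preprocessing, and the snippet
itself is an assumption.

```python
# Sketch: AST node-type counts over (stand-in) recovered C source.
from collections import Counter
from pycparser import c_parser

def ast_node_counts(source: str) -> Counter:
    ast = c_parser.CParser().parse(source)
    counts = Counter()
    stack = [ast]
    while stack:                        # iterative tree walk
        node = stack.pop()
        counts[type(node).__name__] += 1
        stack.extend(child for _, child in node.children())
    return counts

recovered = "int main(void) { int i; for (i = 0; i < 3; i++) ; return i; }"
print(ast_node_counts(recovered))
```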
## 4 Real-world Application of State-of-the-art Systems
In this section, we provide an evaluation of the eighteen BAA systems,
identifying key findings and open research challenges (Section 4.1) and
research recommendations (Section 4.2). We present our results in Table 7 and
group the systems into the five data modeling techniques from Section 3.1. Our
evaluation reviews the efficacy and functionality of the eighteen systems.
This comparison considers the applicability of the current state-of-the-art
techniques to malware binaries and enables us to set out future research
directions. Here, we define efficacy as the accuracy with which a system
achieves its desired goal. We compare the efficacy of three experiments: (i)
compiled source code, (ii) obfuscation, and (iii) malware, on the largest
datasets systematized in Table 3. For operational capability, we must consider
the overall implementation of the system. Thus, we devise the following five
categories to compare system functionality (based on the availability and
reproducibility of a system):
* $\ominus$
System currently not available. We received no reply from our correspondence
or the authors were unable to share the system.
* $\oslash$
System partially available. We were able to locate part of the system online
but core components were missing.
* $\odot$
System does not compile. We attempted to modify the source code and install
previous dependencies, but ultimately could not build the system.
* $\otimes$
System contains errors at runtime. Where we managed to construct the system,
errors we were unable to patch occurred whilst running the evaluation
experiments.
* $\oplus$
System completes. The system was able to run a malware evaluation test.
In addition to this, we devise various categories for further comparison of
the systems. We indicate the _ground truth_ used to perform the experiment,
namely source code, binaries or Android applications. We compare the systems
by the addressed _authorship problem (single or multiple)_ and the _feature
extraction techniques_ used (static, dynamic or a combination of both). We
also compare whether the researchers implemented _parallelization_ or
_cross-validation_ evaluations, and whether they took into account
_toolchains_ or _shared libraries_. From our author style feature
systematization (Section 3.3), we compare the systems over the five
macro-categories (i.e., _strings_, _implementation_, _infrastructure_,
_assembly_ and _decompiler_). Finally, using the discussion in Section 2.4, we
explore whether any researchers consider _adversarial_ challenges and
_privacy_ implications of their authorship attribution systems, using the
following categories we devised:
Adversarial:
* $\square$
Researchers do not consider any attacks
* $\boxdot$
Researchers consider unsophisticated attacks
* $\blacksquare$
Researchers consider sophisticated attacks
Privacy:
* $\square$
Researchers do not consider privacy implications
* $\boxdot$
Researchers mention privacy implications
* $\blacksquare$
Researchers discuss the privacy implications
### 4.1 Key Findings and Open Research Challenges
From the criteria above, and from the results shown in Table 7, we categorize
our key findings as follows. First, we explore _System Goal and Datasets_,
focusing on the ground truth and the authorship problem solved. Then, we
examine the effect of _Languages, Code Re-use and Toolchains_ and _Attribution
Features and Extraction Methods_ on BAA systems. Next, we examine _System
Functionality_, in particular the importance of training, reproducibility of
results and availability for the state-of-the-art systems. Finally, we explore
the open challenges with regard to _System Efficacy_ and _Adversarial
Considerations_, as well as contextualizing the implications of these systems
for _Privacy and Ethics_.
##### System Goal and Datasets.
We note the majority of the systems focus on the single authorship problem.
All the systems use the _closed-world assumption_ and, in the majority of
cases, classification techniques. This makes them impractical for use in the
“wild”. Rosenblum et al. [106] remains the only paper to partially consider
the open-world problem, by training a model based on the closed-world model.
Moreover, the datasets used to train, test and evaluate the systems show
minimal consistency, especially the datasets containing malware. Although the
use of compiled source code repositories provides a ground truth, it brings
extra complexity through the necessity to consider all toolchain
possibilities. There exists no verifiable, publicly available and sufficiently
large APT dataset which researchers can use to build MAA systems. All of the
current attribution systems use static datasets, leading to a discontinuous
learning model which is likely to experience concept drift. This is a wider
problem within machine learning, deep learning and artificial intelligence
models; Kolosnjaji et al. [62] show concept drift occurs within malware
detection models based on unevolving datasets.
Table 7: Comparison of the Analyzed Systems between 2011 and 2019. Organized
by data modeling technique and cross-referenced against ground truth,
authorship problem, features, system efficacy, system functionality and
adversarial considerations.
| | | | | | | | | | Features | Efficacy^d,e | | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
| Paper | Year | Original Ground Truth^a | Authorship Problem^b | Analysis^c | Parallelization | Cross-Validation | Toolchains | Shared Libraries | Strings | Implementation | Infrastructure | Assembly | Decompiler | Source Code (%) | Obfuscation (%) | Malware (%) | Functionality^f | Adversarial^g | Privacy^h
| [106] | 2011 | S | | | | ✓ | | | | | | ✓ | | 51 | F 58 | ACC 34 | $\oslash$ | $\square$ | $\square$
| [77] | 2016 | S | | | | ✓ | | ✓ | | | | ✓ | | 52 | $-$ | $-$ | $\ominus$ | $\square$ | $\square$
| [80] | 2017 | S | | | ✓ | ✓ | | ✓ | ✓ | | | ✓ | | 58 | $-$ | $-$ | $\ominus$ | $\square$ | $\square$
| [103] | 2017 | M | | | | ✓ | | | ✓ | | | | | $-$ | $-$ | 94.6 | $\ominus$ | $\square$ | $\square$
| [50] | 2017 | S | | | | ✓ | ✓ | ✓ | | | | ✓ | | 95.3 | 94.1 | $-$ | $\ominus$ | $\boxdot$ | $\boxdot$
| [22] | 2018 | S | | | | ✓ | | | ✓ | | | ✓ | ✓ | 83 | 88 | ACC 70 | $\odot$ | $\boxdot$ | $\blacksquare$
| [78] | 2018 | S | | | ✓ | ✓ | ✓ | ✓ | | | | ✓ | | 71 | $-$ | $-$ | $\ominus$ | $\square$ | $\square$
| [54] | 2018 | M | | | | ✓ | | | | ✓ | ✓ | ✓ | | $-$ | $-$ | AF 88.2 | $\ominus$ | $\square$ | $\square$
| [104] | 2018 | M | | | | ✓ | | | ✓ | | | | | $-$ | $-$ | 99.75 | $\ominus$ | $\square$ | $\square$
| [58] | 2018 | A | | | | ✓ | | | ✓ | | | | | 98 | 77 | 96 | $\ominus$ | $\boxdot$ | $\square$
| [44] | 2018 | A | | | | ✓ | | | | | | ✓ | | 86.74 | $-$ | 66.92 | $\ominus$ | $\square$ | $\square$
| [7] | 2019a | S | | | | ✓ | ✓ | ✓ | ✓ | | | ✓ | | F 94 | $-$ | CC 96.9 | $\ominus$ | $\square$ | $\square$
| [8] | 2019b | S | | | | ✓ | ✓ | ✓ | | ✓ | | ✓ | | P 84 | $-$ | P 45 | $\ominus$ | $\square$ | $\boxdot$
Classifying | [8] | 2019b | S | | | | ✓ | ✓ | ✓ | | ✓ | | ✓ | | P 89 | $-$ | $-$ | $\ominus$ | $\square$ | $\boxdot$
| [106] | 2011 | S | | | | ✓ | | | | | | ✓ | | AMI 45.6 | $-$ | $-$ | $\oslash$ | $\square$ | $\square$
Clustering | [47] | 2020 | M | | | | | | | ✓ | ✓ | | ✓ | | $-$ | $-$ | $95$ | $\ominus$ | $\square$ | $\square$
Anomaly Detection | [66] | 2018 | M | | | | ✓ | | | ✓ | ✓ | | ✓ | | $-$ | $-$ | 98 | $\otimes$ | $\square$ | $\square$
| [80] | 2017 | S | | | ✓ | ✓ | | ✓ | ✓ | | | ✓ | | 65 | $-$ | $-$ | $\ominus$ | $\square$ | $\square$
Structured Prediction | [78] | 2018 | S | | | ✓ | ✓ | ✓ | ✓ | | | | ✓ | | $\times$ | $-$ | $-$ | $\ominus$ | $\square$ | $\square$
| [3] | 2014 | S | | | | | | ✓ | | | | ✓ | | 84 | F 25 | ACC 69.75 | $\ominus$ | $\square$ | $\square$
| [74] | 2015 | M | | | | | | | ✓ | ✓ | ✓ | | | _n/a_ | _n/a_ | _n/a_ | _n/a_ | $\square$ | $\square$
Non-ML | [6] | 2018b | S | | | | | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | | P 49 | P 95 | Re 68 | $\otimes$ | $\boxdot$ | $\square$
* a
S - Source Code M - Malware A - Android Applications
* b
Single Authorship Problem Multiple Authorship Problem
* c
Static Analysis Static and Dynamic Analysis Dynamic Analysis
* d
$\times$ Experiment Incomplete $-$ No Experiment Considered _n/a_ Not Applicable
* e
All values are accuracy unless preceded by: F - $F_{1}$ measure [except for Alrabaee et al. [7], who define and use $F_{0.5}$]; AF - Average $F_{1}$ score; AMI - Adjusted Mutual Information; ACC - Average Correctly Clustered; P - Precision; CC - Correctly Clustered
* •
Re - The average accuracy in relation to a malware analysis report
* f
$\ominus$ System Not Available $\oslash$ System Partially Available $\odot$
System Does Not Compile $\otimes$ System Contains Errors At Runtime _n/a_ Not
Applicable
* g
$\square$ Researchers do not consider any attacks $\boxdot$ Researchers
consider unsophisticated attacks $\blacksquare$ Researchers consider
sophisticated attacks
* h
$\square$ Researchers do not consider privacy implications $\boxdot$
Researchers mention privacy implications $\blacksquare$ Researchers discuss
the privacy implications
Malware development follows an agile workflow similar to benign software
development, in which multiple authors collaborate [24], as in the recent
GandCrab ransomware campaign [51]. However, there exists limited research
exploring multiple authorship within MAA; even in BAA, only four of the
eighteen papers consider multiple authorship of a program. From these four
multiple-author papers, research gaps remain, such as applying the techniques
to malware. Many challenges accompany this step, including overcoming both
obfuscation and packing techniques; additionally, it remains unclear whether
the features they use help with clustering multiple malware authors.
##### Languages, Code Re-use and Toolchains.
Code re-use from other software and libraries impacts attribution systems and
can lead to attributing the incorrect author. We observe only eight systems
attempt to account for the effect of shared libraries on authorship style;
these works all attempt to remove standard libraries from the binaries before
extracting author features [77, 80, 78, 3, 6, 7, 8, 50]. They focus only on
removing C/C++ libraries, as their datasets contain binaries compiled from
C/C++ source code. In fact, none of the state-of-the-art systems consider any
other programming languages.

However, authors write malware in multiple languages [25], and thus a
programming language gap exists when it comes to identifying the malware
author. Therefore, we believe using systems trained only on compiled source
code datasets to label unknown malware hinders an MAA system. Further issues
exist if the programmer adheres to language standards where a strict format
must be followed, e.g., the style guide for Python (PEP 8 [117]). Standards
are likely to significantly reduce the amount of author style within a
program, as everyone produces similar-looking code. However, the speed of
malware development must match the speed at which it requires deploying (the
window of deployment depends on the availability of a vulnerability patch),
and this determines the likelihood of a malware author following standards. In
any case, future research should consider features which are robust against
any standards, to prevent standards becoming an attack method against the
attribution systems themselves.
The re-use of code within benign programs is common practice, and malware
development is no different. Within malware, there exists a lot of code re-use
from both open and closed sources, due to the pressure of beating
vulnerability patching or meeting the demands of cyber warfare to complete
mission objectives. Code re-use can be both helpful and unhelpful. On the one
hand, code reuse from other authors contaminates samples and leads to an even
smaller dataset from which to learn author style; for example, if someone
leaks the source code, this quickly leads to multiple copycat attackers. On
the other hand, code reuse by the actual malware author helps identify malware
written by the same author [105]. Only three papers within the current
research consider the effect of toolchains on their attribution system [78,
50, 6], meaning questions remain regarding the impact of compilers on author
style. However, for a malware dataset, the choice of compiler is predefined by
the author, so by default we automatically train upon a dataset which
potentially used various compilers. This may explain why a model trained on
compiled source code provides limited aid when applied to malware.
##### Features, Extraction and Style.
We observe strings and assembly language as the two most popular feature
macro-categories for author style, which correlates with static analysis being
the most popular feature extraction technique. These popular macro-categories
omit key malware-specific features and traits which experts tend to discover
in APT author style. The common extraction process involves static analysis,
likely due to the speed it provides over dynamic analysis. Furthermore,
multiple binary analysis tools achieve similar tasks, which raises further
research questions surrounding the effect of extraction tools on author style.

There exists limited research on identifying malware author style. The goals
of benign software programmers clearly differ from those of malicious software
programmers, and yet most of the research focuses only on the benign
stylometry approach of lexical, syntactic and semantic features over assembly
language, ignoring malware-specific features. In the case of multiple authors,
the state of the art mainly identifies fine-grained features (e.g., basic
block exception handling) [77, 80, 78, 8], which differ from the features
identified by Marquis-Boire et al. [74] for linking malware authors (e.g.,
languages used, command and control server setups and obfuscation techniques).
##### System Functionality.
The majority of systems we tested retrained their models to perform each of
the evaluation tests (unfortunately, continuously retraining a system to
account for new discoveries in the “wild” remains a resource-intensive task).
The system which showed higher performance capability in some cases required a
training time of a week [80], most likely because it used no parallelization
techniques. After spending a considerable amount of time and effort, none of
the eighteen systems we tested fell into the “System completes” category, and
this provides the main reason for the shortage of further research within this
field. Although the published results show promise, the lack of consistency in
evaluation metrics makes it hard to validate the results without further
testing.
##### System Efficacy.
Not all the papers performed source code, obfuscation and malware experiments
(we utilized the results of the survey by [4] to incorporate the obfuscation
and malware experiments for the systems from [106, 22, 3]; however, [4] omit
the accuracy results for these experiments and instead use $F_{0.5}$ for the
F-measure, as they claim the systems in [106, 22, 3] are especially sensitive
to false positives), and in some unique cases the system takes too long to
complete [78] or the method is not applicable [74]. In terms of the presented
results we gathered, only 19 out of 34 used the accuracy metric; we therefore
considered an alternative metric for the remaining 15 results. This highlights
the lack of consistency in evaluation metrics across the field. Let us examine
the three types of experiments:
* Source Code. The results vary considerably, which makes it difficult to
compare the systems. The later systems seem to perform better, which appears
to be due to progress in extraction techniques that allow researchers to
remove some external noise (e.g., system libraries) from the binaries.
* Obfuscation. From the very few experiments, it remains impossible to tell
whether the problem is solved, due to the inconsistency of the results. The
result by Hendrikse [50] appears the most promising. This is due to the
thoroughness of the obfuscation techniques considered; even though they
considered the fewest features, the prominent difference from the other
systems is the inclusion of dynamic features.
* Malware. Comparing the malware experiments is much harder, as the goals of
the systems differ slightly. The datasets used were also considerably smaller,
and the researchers undertook considerable efforts to clean them, which
explains the considerably high accuracy attained. This is not necessarily bad,
as it could help malware analysts examine a small subset of malware which they
believe originates from the same author. Even on much larger and dirtier
datasets, however, the state-of-the-art systems remain unlikely to produce the
same levels of accuracy.
We note that the inconsistency of datasets hampers the comparison of the
systems. A prime example of this inconsistency is Caliskan et al. [22], who
report better efficacy on obfuscation than Alrabaee et al. [4] despite
appearing to use the same method for obfuscation experiments. Even though the
two papers report different metrics, we offer two reasons for the higher
accuracy in the later results. First, we believe Alrabaee et al. [4] used an
older version of the system from [22], as they published their paper first.
Second, and most importantly, the two papers use different datasets.
##### Adversarial Considerations.
From the table, only four of the eighteen systems [22, 50, 58, 6] considered
basic attacks, e.g., obfuscation. (The obfuscation experiments for systems
[106] and [3] were computed in the survey by Alrabaee et al. [4].) This
highlights the lack of adversarial considerations towards any binary
attribution system. Even those researchers who implemented unsophisticated
attacks (e.g., obfuscation) on their systems reported an increase in the
amount of manual assistance needed to de-obfuscate the binaries, meaning the
systems became more semi-automated. Out of all the single authorship methods,
Hendrikse [50] provides the most comprehensive evaluations, using readily
available obfuscation tools that range from very easy to hard techniques.
However, their attribution system uses the fewest features, which leaves it
open to targeted and untargeted attacks: their system uses fewer features than
Caliskan et al. [22], and Meng et al. [81] show the binary attribution system
created by Caliskan et al. [22] is open to both targeted and untargeted
attacks. Meng et al. [81] extend the attacks by Carlini and Wagner [26]
designed for DNNs trained for image labeling, devising a method to modify both
the feature vector and the binary. When they modify the binary they ensure it
still executes, which is a fundamental requirement for a successful binary
modification attack. We predict this method of attack works for all the other
single author systems too. Therefore, the majority of single author
state-of-the-art systems remain open to both unsophisticated and sophisticated
attacks.
##### Privacy and Ethics.
The majority of systems use author style features developed from benign source
code author identification rather than focusing on malicious author styles.
This means these systems and techniques can be used to identify benign
software developers who might create programs to avoid detection in nations
which prevent freedom of speech. Furthermore, these authors may have
previously submitted software to the benign sources used by many of the
systems. The authors may be unaware of researchers using their software. This
not only violates their privacy rights but this raises ethical questions
surrounding the further use of the benign datasets.
### 4.2 Recommendations
##### Real-world Application.
None of the MAA systems we reviewed appear immediately ready for
implementation in the “wild”. There is a lack of sufficient detail to
replicate the systems; anyone wishing to join this research field must start
from scratch and redo the majority of previous work. Furthermore, limited
results on malware exist, so it is unknown whether the current techniques are
effective for real-world use. Additionally, most systems require intense
manual analysis and significant training times, further showing that these
systems are not ready for operational deployment.
##### Privacy.
Although these systems are aimed at detecting malicious authors, they can be
used to detect benign software developers, which raises privacy concerns. This
provides further evidence that future research must focus purely on malicious
author styles. Few of these works consider the privacy and anonymity
implications of the developed tools. Therefore, we believe MAA systems should
also be tested in contexts other than that of malware written by a threat
actor, to measure their efficacy and impact in benign scenarios.
##### Adversarial Approach.
None of the analyzed papers consider sophisticated adversarial testing. We
suggest any MAA system must undergo adversarial testing before deployment. In
particular, it must show robustness to sophisticated attacks like the one
described by Meng et al. [81], which we predict works for all current
single-author binary attribution systems. There also exists no research into
adversarial attacks on multiple authorship attribution systems. In the future,
we predict all APT malware authors shall implement sophisticated attacks to
remain anonymous and avoid law enforcement.
##### Datasets.
In general, the field lacks consistency in both the performance metrics and
the datasets used. Systems trained on source code and then used to identify
malware authors performed worse than systems originally trained on malware.
The datasets played a pivotal role in these systems, and most of them lacked a
variety of programming languages or the ability to cope with the effects of
shared libraries and compilers. We hope the creation of our APT malware
dataset in Section 5 allows a fair comparison among future systems.
##### Multiple Authors.
The single author assumption fundamentally hinders the ability to determine
the authors of binaries developed by multiple authors (_agile software
development_), especially as the commercialized malware industry uses agile
workflow methods to speed up the development process, both to increase profits
and to beat vulnerability discovery time. Being able to see whether authors
work across multiple malware development projects would provide insight into
the malware development industry and introduce a new method of tracking
malicious threats, especially APT groups. To further aid this, we suggest all
future work should adopt our approach of considering features from the five
feature macro-categories of _strings_ , _implementation_ , _infrastructure_ ,
_assembly language_ and _decompiler_. This allows all aspects of malware
author styles to be captured.
## 5 APTClass: Creation of an APT Malware Dataset
From our discussion in Section 3.2 on datasets, we deemed it a high priority
to ensure there exists a sufficiently large and diverse dataset accessible to
researchers for discovering malware authorship style and creating malware
authorship identification systems. In this section, we set out how we created
APTClass, a meta-information dataset consisting of 15,660 labeled malware
samples. Our overall approach follows a similar method to Laurenza and
Lazzeretti [64]: we gather a large amount of open-source intelligence (OSINT)
and then we perform preprocessing on the data before extracting information.
In addition, we propose a novel method for label identification and extraction
to solve the issues discovered in Section 3.2.2; because of our focus on
labeling, we only extract malware hashes. This can be extended to include URLs,
IP Addresses, or Tactics, Techniques and Procedures as shown by previous works
[70, 120]. Our novel label extraction method uses a matching algorithm which
combs the OSINT in a systematic process to match against a list of 1,532 APT
group names. We describe this process in detail in Section 5.1.
### 5.1 Method
APTClass follows five steps: (i) create a list of APT groups and group them by
alleged nation; (ii) gather OSINT, mainly PDF reports of attack campaigns;
(iii) extract hashes and label from the gathered intelligence; (iv) clean the
dataset by removing duplicate malware hashes and use VirusTotal [29] to verify
the legitimacy of the samples gathered in step (iii); and finally (v)
filter for executable binaries. For the purposes of this dataset, APTClass
considers executable files as ELF, Windows 32 EXE, Windows 16 EXE, Windows
Installation file and Windows DLL.
Table 8: List of sources used for creating a consistent list of APT labels.

Source Name | Last Updated
---|---
MISP [83] | October 2020
APT Operation Tracker [113] | October 2020
MITRE ATT&CK [84] | October 2020
sapphirex00 [108] | November 2018
Thailand CERT [115] | October 2020
Council on Foreign Relations [32] | October 2020
#### 5.1.1 Creating a consistent list of APT labels
To overcome the issues of multiple aliases introduced by various analysts,
APTClass treats each name as a unique group. Although this initially inflates
the number of groups and introduces some duplication (e.g., group 123 and
group123 are listed separately), we believe this to be the correct approach as
often analysts cannot reach a consensus regarding groups and may use different
names within OSINT when referring to the same group. APTClass still captures
any nation link for a group. From our experience, analysts tend to have
higher confidence when linking groups to nations. APTClass also records whether
a group is linked to multiple nations to account for mis-attribution. APTClass
extracts the nation and group names from six sources in Table 8 using the
process set out in Algorithm 1. Essentially, APTClass extracts from the
sources a dictionary with _nations_ as keys and _a list of group names_ as
values. APTClass then standardizes the names and removes duplicates over the
six dictionaries. This approach identified 1,532 names. We are aware that
duplication exists among sources; however, this further validates the list of
names and increases the number of aliases captured for each group.
Input: sources_list
Output: final_list
Function Main:
  for source in sources_list do
    dictionary(nation : group_name_list) = extract_nation_and_names(source)
    // returns a dictionary with nations as keys and lists of group names as values
    for each nation do
      group_name_list = standardize(group_name_list)
      // removes punctuation and converts to lowercase
      group_name_list = remove_duplicates(group_name_list)
      // removes any duplicates from the list of group names
      for name in group_name_list do
        if (nation, name) not in final_list then
          final_list.append((nation, name))
        end if
      end for
    end for
  end for
  final_list = group_nations(final_list)
  // joins together groups from the same nations
  return final_list
Algorithm 1: Creating the list of APT names
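To make the procedure concrete, the following is a minimal, runnable Python sketch of Algorithm 1. The input format (a list of nation-to-names dictionaries) and all function names are our own illustration rather than the released APTClass code, and the final group_nations merging step is omitted for brevity.

```python
# Illustrative sketch of Algorithm 1; not the released APTClass code.
import string

def standardize(names):
    # Remove punctuation and convert to lowercase, as in Algorithm 1.
    table = str.maketrans("", "", string.punctuation)
    return [n.translate(table).lower().strip() for n in names]

def create_label_list(sources):
    """sources: list of dicts mapping nation -> list of group names."""
    final_list = []
    for source in sources:
        for nation, group_names in source.items():
            # Standardize, then drop duplicates while preserving order.
            for name in dict.fromkeys(standardize(group_names)):
                if (nation, name) not in final_list:
                    final_list.append((nation, name))
    return final_list

# "APT1" and "apt1" collapse to one entry; "group 123" stays distinct.
sources = [{"china": ["APT1", "Comment Crew"]}, {"china": ["apt1", "Group 123"]}]
print(create_label_list(sources))
# [('china', 'apt1'), ('china', 'comment crew'), ('china', 'group 123')]
```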
#### 5.1.2 Gathering open-source threat intelligence
We performed an extensive search on GitHub for trustworthy repositories
containing any OSINT information. In particular, we focused on repositories
storing (i) reports (typically PDF files), (ii) indicator of compromises
(IoC), and (iii) YARA rules. We chose GitHub because the majority of OSINT is
shared on the platform by other researchers collecting their own repositories
of intelligence. We wanted to collate as many files as possible to ensure we
maximized the number of malware hashes.
#### 5.1.3 Extracting hashes and labels
We are aware of many OSINT parsers; however, these just extract indicators of
compromise [12, 53] or gather tactics and techniques for groups [68] without
extracting the information most important for our purpose: APT labels and
hashes. Thus, we required a new approach to gather a likely label for each
malware hash. APTClass provides a fine-grained approach to extracting the
label. We set out this technique in Figure 1 and describe the process below:
Figure 1: A high-level view of the extraction process for APTClass: (1) text
extraction, (2) hash search, (3) text processing, (4) APT number search, (5)
metadata n-gram search, (6) n-gram keyword search, (7) remaining text.
1. Text Extraction: APTClass extracts the text per page of PDF reports, text
per YARA rule and text per line of IoC files. This allows APTClass to try and
identify the best possible label closest to the hash.
2. Hash Search: We perform a regular expression search on the extracted text
for any MD5, SHA1, SHA256 or SHA512 hashes (a regex sketch follows this list).
3. Text Processing: APTClass removes punctuation, stop words and hashes from
the text. The stop words consist of the stop words from NLTK [17], spaCy [55]
and gensim [98], any cyber words in the dictionary created by Bishop Fox [18],
and words previously determined to be “noise” from running APTClass multiple
times.
4. APT Number Search: APTClass performs an extensive search against the APT
label list looking for a match with either:
   * APT<number>,
   * APT-C-<number>,
   * ATK<number>,
   * SIG<number> or
   * FIN<number>
We do this as these labels tend to be extremely popular among analysts.
APTClass only uses this as a label if there is a clear majority within the
matches. APTClass also designates this match as the label when there is no
further match against the APT label list created in Section 5.1.1, i.e. when
steps (5-7) all fail.
5. Metadata N-gram Search: APTClass performs an n-gram word search on the
metadata. Due to the likelihood of duplication within the OSINT, APTClass also
includes any metadata of the same file. APTClass considers all possible word
n-grams of the metadata. The logic for this is that the metadata is likely to
include the original filename and any keywords attached by the author of the
report. We also include the file path as part of the metadata, as the analyst
is likely to store reports in the most relevant folder, so using previous file
paths increases our chance of matching the right label.
6. N-gram Keyword Search: APTClass extends the n-gram search to additionally
include the extracted text. APTClass performs the match based on all possible
n-grams of the top five keywords extracted. We empirically verified that in
most cases the correct label lies among the top words. APTClass uses the top
five keywords, but this number can be increased until APTClass achieves an
exact match, with a corresponding linear increase in processing time.
7. Remaining Text: If APTClass fails to identify a label for a hash in steps
(4-6), then it stores the text and repeats steps (3-7) using this remaining
text. If it fails again, the label will be a dictionary consisting of the top
five keywords from the full text and the keywords from the metadata.
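As referenced in step (2), the sketch below illustrates the regular-expression hash search together with the label matching of steps (4)-(6): an APT-style numbered-label pattern and a word n-gram match against the label list. The exact patterns, the n-gram bound and the longest-match tie-break are our reading of the description, not the authors' implementation.

```python
# Illustrative patterns for steps (2) and (4)-(6); not the APTClass source.
import re

# MD5 (32), SHA1 (40), SHA256 (64) or SHA512 (128) hex digests.
HASH_RE = re.compile(r"\b(?:[a-fA-F0-9]{32}|[a-fA-F0-9]{40}"
                     r"|[a-fA-F0-9]{64}|[a-fA-F0-9]{128})\b")
# Step (4): popular numbered labels such as APT28, APT-C-36, FIN7.
APT_NUMBER_RE = re.compile(r"\b(?:APT|ATK|SIG|FIN)(?:-C)?-?\d+\b", re.IGNORECASE)

def word_ngrams(words, max_n=3):
    # All word n-grams up to length max_n, e.g. of metadata or keywords.
    return {" ".join(words[i:i + n])
            for n in range(1, max_n + 1)
            for i in range(len(words) - n + 1)}

def match_label(words, apt_labels):
    # Steps (5)/(6): intersect candidate n-grams with the APT label list;
    # preferring the longest match is our own tie-break.
    candidates = word_ngrams(words) & set(apt_labels)
    return max(candidates, key=len) if candidates else None

text = "Campaign linked to apt10; dropper d41d8cd98f00b204e9800998ecf8427e"
print(HASH_RE.findall(text))        # ['d41d8cd98f00b204e9800998ecf8427e']
print(APT_NUMBER_RE.findall(text))  # ['apt10']
print(match_label(["menupass", "apt10"], {"apt10", "menupass"}))  # 'menupass'
```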
Exact Match: (APT1, China, d41d8cd98f00b204e9800998ecf8427e) + (APT1, China, d41d8cd98f00b204e9800998ecf8427e) → (APT1, China, d41d8cd98f00b204e9800998ecf8427e)
Different Labels: (APT1, China, d41d8cd98f00b204e9800998ecf8427e) + (APT10, China, d41d8cd98f00b204e9800998ecf8427e) → (APT1/APT10, China, d41d8cd98f00b204e9800998ecf8427e)
Different Labels and Nations: (APT1, China, d41d8cd98f00b204e9800998ecf8427e) + (APT10, Russia, d41d8cd98f00b204e9800998ecf8427e) → (APT1/APT10, China/Russia, d41d8cd98f00b204e9800998ecf8427e)
Figure 2: Example of the APTClass cleaning process.
#### 5.1.4 Cleaning, verifying and filtering
Before checking the hashes discovered from the extraction process, APTClass
cleans the data by joining identical hashes and collates any information which
suggests mis-attribution. APTClass joins any samples with an exact match
(i.e., the hash, group name and group nation are identical). Next, APTClass
joins any samples where the hash and the nation are identical but the labels
differ; in this case APTClass concatenates the labels. Finally, APTClass joins
any remaining samples with identical hashes but differing nations and labels;
in this case APTClass concatenates both the labels and the nations. We provide
an example of this cleaning process in Figure 2. Once this step is complete,
APTClass submits each MD5, SHA1 and SHA256 sample to VirusTotal to check the
malware legitimacy, the file type and the corresponding hash values. After
this, APTClass repeats the cleaning step above and joins together any labels
for identical hashes. Finally, APTClass filters for executable samples.
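A minimal sketch of these merge rules, assuming samples arrive as (label, nation, hash) tuples; the helper below reconstructs the behavior shown in Figure 2 and is illustrative rather than the released cleaning code.

```python
# Illustrative reconstruction of the Figure 2 merge rules.
def clean(samples):
    """samples: iterable of (label, nation, sha) tuples."""
    merged = {}
    for label, nation, sha in samples:
        if sha not in merged:
            merged[sha] = (label, nation)
            continue
        old_label, old_nation = merged[sha]
        # Concatenate differing labels/nations with "/", dropping duplicates.
        labels = "/".join(dict.fromkeys(old_label.split("/") + label.split("/")))
        nations = "/".join(dict.fromkeys(old_nation.split("/") + nation.split("/")))
        merged[sha] = (labels, nations)
    return [(l, n, sha) for sha, (l, n) in merged.items()]

h = "d41d8cd98f00b204e9800998ecf8427e"
print(clean([("APT1", "China", h), ("APT10", "Russia", h)]))
# [('APT1/APT10', 'China/Russia', 'd41d8cd98f00b204e9800998ecf8427e')]
```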
### 5.2 Results
We run APTClass using the sources listed in Table 8, including 373 report
files, 504 IoCs and 19 YARA rules. The analysis takes approximately 116 hours
(approximately 80% of this time is accounted for by the cleaning, verifying
and filtering process, which is bounded by the rate limit of the VirusTotal
API) on an Ubuntu 16.04 virtual machine equipped with 16 vCPUs and 16 GB RAM.
At the end of this process, APTClass returns a list of 15,660 labeled samples.
The results are shown in Table 9, together with a comparison of existing APT
datasets. As we see from Table 9, APTClass is comfortably larger than both
[64] and [33]. Unfortunately, the OSINT used in [64] and [33] is not
available, so we cannot run APTClass on the same reports for a direct
comparison. However, we believe the issues discussed in Section 3.2.2 and the
slight differences in the goals of the three systems make it very difficult to
compare the datasets at the granularity of Table 10.
Table 9: Comparison of our dataset against both [64] and [33].
| [64] | [33] | APTClass
---|---|---|---
Total labeled Samples | 8,927a | 3,594b | 15,660
Number of groups | 88 | 12 | 164
Number of threat intelligence files processed | 821 | 33 | 896
Total unknown samples | N/A | N/A | 7,485
Number of groups with 50+ samples | N/A | 11 | 37
Number of groups with 25+ samples | N/A | 12 | 54
a: This includes file types other than ELF, Windows 32 EXE, Windows 16 EXE,
Windows Installation file and Windows DLL.
b: cyber-research [33] include information on a further 855 samples which are
not on VirusTotal.
Table 10: The number of SHA256 hashes per nation and APT group.

Nation | Samples | APT Group | Samples | APT Group | Samples
---|---|---|---|---|---
China | 5,548 | apt10 | 548 | icefog | 90
India | 417 | apt17 | 2,462 | infy | 189
Iran | 637 | apt27 | 85 | kimsuky | 77
Israel | 5,000 | apt28 | 500 | lazarus | 1,046
Italy | 6 | apt29 | 93 | mirage | 75
Lebanon | 26 | apt33 | 83 | muddywater | 63
Libyan Arab Jamahiriya | 1 | apt37 | 77 | oceanlotus | 679
DPRK | 1,236 | apt40 | 103 | patchwork | 282
Pakistan | 8 | be2 | 110 | promethium | 89
Russia | 1,658 | black vine | 316 | rtm | 88
Turkey | 89 | blackgear | 270 | scarlet mimic | 61
United States | 74 | blacktech | 333 | sig17 | 4,992
Vietnam | 679 | cleaver | 112 | silence | 65
| | comment crew | 260 | ta505 | 171
| | confucius | 87 | thrip | 105
| | darkhotel | 94 | tick | 70
| | fin7 | 181 | tropic trooper | 59
| | gamaredon group | 159 | turla | 86
| | higaisa | 53 | |
APTClass creates an overall diverse dataset with 164 APT groups from which we
can create a concentrated subset consisting of 37 groups with 50 or more
samples. In Table 10, we provide a breakdown of the results by the 13 nations
(without potential mis-attribution) and the 37 groups with 50 or more samples.
Although there exists a clear disparity among the nations, this reflects the
information sources and publicly known attacks. Similarly, certain APT groups
have considerably more samples linked to them (e.g., APT17 (China) and SIG17
(Israel)), which mirrors the per-nation totals, with China and Israel linked
to the most samples. Overall, Table 10 mirrors the observations made in
Section 2.1 and those seen in Table 1. In fact, this further highlights the
bias towards non-Western nation sponsored APT groups. Interestingly, only two
APT groups (Oil Rig and Emissary Panda) of the 2020 top ten are not included
in Table 1. Additionally, the group kimsuky is linked to 77 samples compared
to zero in Table 1. In general, the numbers of samples vary considerably from
Table 1, most likely because not every threat intelligence company shares its
intelligence with MITRE.
### 5.3 Discussion
Even though we focus purely on extracting malware hashes from OSINT, APTClass
can be enriched by extracting other indicators or relevant information from
OSINT such as Tactics, Techniques and Procedures and malware families to build
further datasets for wider research into malware analysis. APTClass also
allows the user to select the sources used for the creation of the APT label
list (Section 5.1.1) and OSINT collection (Section 5.1.2).
Figure 3: The detection results against up to 73 commercial anti-virus
engines. (a) APTClass; (b) APT groups with 50+ samples.
One additional use of APTClass is in developing new methods for malware
detection: it offers a different perspective on traditional detection methods
as well as a way of testing them against the most sophisticated malicious
techniques. We show this by including the detection results of APTClass
against up to 73 commercial anti-virus engines from VirusTotal in Figure 3. In
Figure 3(a), we see there exists a small proportion of APT malware which no
anti-virus engine detects. Interestingly, this issue is not specific to one
APT group or nation (Figure 3(b)). These graphs highlight an unsolved problem
within malware detection. Furthermore, APTClass offers a unique niche dataset
for testing data modeling techniques used in the malware domain. Specifically,
we can see APTClass being used to further develop and understand sophisticated
adversarial attacks.
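Per-sample detection counts like those behind Figure 3 can be gathered from VirusTotal; the sketch below assumes the public v3 REST endpoint (GET /api/v3/files/{hash}) and the last_analysis_stats field of the file object, both of which should be checked against the current API documentation.

```python
# Hedged sketch of querying VirusTotal v3 for per-engine detection stats.
import requests

def detection_stats(file_hash, api_key):
    resp = requests.get(
        f"https://www.virustotal.com/api/v3/files/{file_hash}",
        headers={"x-apikey": api_key},
        timeout=30,
    )
    resp.raise_for_status()
    stats = resp.json()["data"]["attributes"]["last_analysis_stats"]
    # A sample that no engine flags has stats["malicious"] == 0.
    return stats["malicious"], sum(stats.values())
```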
Due to the cross-domain benefits APTClass provides, we publish the code and
dataset for this joint project at https://s3lab.isg.rhul.ac.uk/aptclass. We
additionally welcome contributions towards evolving APTClass to continually
support the research community.
## 6 Conclusion
We presented a comprehensive survey of the Malware Authorship Attribution
problem, focusing on threat actor style and adversarial techniques against the
current state-of-the-art systems. We specifically examined the current data
modeling techniques, datasets and features used for malware authorship style.
We compared the results of eighteen binary attribution systems and identified
the current limitations of state-of-the-art techniques. Surprisingly, we found
most of these limitations apply to all of the eighteen systems, which shows a
lack of progression. Therefore, we envision our work as a source of
stimulation for future research, especially for new practitioners.
Furthermore, we mitigated the lack of author-labeled malware datasets by
creating a verified dataset containing 15,660 APT samples linked to 164 APT
group names and 13 nations. This is the largest dataset of this type publicly
available, and can be used by researchers and practitioners as a common ground
to test and compare their approaches.
## References
* Afroz et al. [2014] S. Afroz, A. C. Islam, A. Stolerman, R. Greenstadt, and D. McCoy. Doppelgänger finder: Taking stylometry to the underground. In _2014 IEEE Symposium on Security and Privacy_ , pages 212–226, May 2014. doi: 10.1109/SP.2014.21.
* Albluwi [2020] I. Albluwi. Plagiarism in programming assessments: A systematic review. _TOCE_ , 20(1):6:1–6:28, 2020. doi: 10.1145/3371156. URL https://doi.org/10.1145/3371156.
* Alrabaee et al. [2014] S. Alrabaee, N. Saleem, S. Preda, L. Wang, and M. Debbabi. Oba2: An onion approach to binary code authorship attribution. _Digital Investigation_ , 11:S94 – S103, 2014. ISSN 1742-2876. doi: https://doi.org/10.1016/j.diin.2014.03.012. URL http://www.sciencedirect.com/science/article/pii/S1742287614000176. Proceedings of the First Annual DFRWS Europe.
* Alrabaee et al. [2017] S. Alrabaee, P. Shirani, M. Debbabi, and L. Wang. On the feasibility of malware authorship attribution. In F. Cuppens, L. Wang, N. Cuppens-Boulahia, N. Tawbi, and J. Garcia-Alfaro, editors, _Foundations and Practice of Security_ , pages 256–272, Cham, 2017. Springer International Publishing. ISBN 978-3-319-51966-1.
* Alrabaee et al. [2018a] S. Alrabaee, P. Shirani, L. Wang, and M. Debbabi. Fossil: A resilient and efficient system for identifying foss functions in malware binaries. _ACM Trans. Priv. Secur._ , 21(2):8:1–8:34, Jan. 2018a. ISSN 2471-2566. doi: 10.1145/3175492. URL http://doi.acm.org/10.1145/3175492.
* Alrabaee et al. [2018b] S. Alrabaee, P. Shirani, L. Wang, M. Debbabi, and A. Hanna. On leveraging coding habits for effective binary authorship attribution. In J. Lopez, J. Zhou, and M. Soriano, editors, _Computer Security_ , pages 26–47, Cham, 2018b. Springer International Publishing. ISBN 978-3-319-99073-6.
* Alrabaee et al. [2019a] S. Alrabaee, M. Debbabi, and L. Wang. On the feasibility of binary authorship characterization. _Digital Investigation_ , 28(Supplement):S3–S11, 2019a. doi: 10.1016/j.diin.2019.01.028. URL https://doi.org/10.1016/j.diin.2019.01.028.
* Alrabaee et al. [2019b] S. Alrabaee, E. B. Karbab, L. Wang, and M. Debbabi. Bineye: Towards efficient binary authorship characterization using deep learning. In _Computer Security - ESORICS 2019 - 24th European Symposium on Research in Computer Security, Luxembourg, September 23-27, 2019, Proceedings, Part II_ , pages 47–67, 2019b. doi: 10.1007/978-3-030-29962-0_3. URL https://doi.org/10.1007/978-3-030-29962-0_3.
* Alvarez [2020] V. M. Alvarez. YARA, 2020. URL https://virustotal.github.io/yara/.
* Andriesse et al. [2017] D. Andriesse, A. Slowinska, and H. Bos. Compiler-agnostic function detection in binaries. pages 177–189, 04 2017. doi: 10.1109/EuroSP.2017.11.
* Apple [2020] Apple. Xcode, 2020. URL https://developer.apple.com/xcode/.
* armbues [2015] armbues. ioc_parser, 2015. URL https://github.com/armbues/ioc_parser.
* AT&T Cybersecurity [2018] AT&T Cybersecurity. OTX trends 2018 Q1 and Q2, 2018. URL https://cybersecurity.att.com/resource-center/white-papers/2018-open-threat-exchange-trends.
* Bartholomew and Guerrero-Saade [2016] B. Bartholomew and J. A. Guerrero-Saade. Wave your false flags! deception tactics muddying attribution in targeted attacks, 2016. URL https://media.kasperskycontenthub.com/wp-content/uploads/sites/43/2017/10/20114955/Bartholomew-GuerreroSaade-VB2016.pdf.
* Bassat and Cohen [2019] O. B. Bassat and I. Cohen. Mapping the connections inside russia’s apt ecosystem, 2019. URL https://www.intezer.com/blog-russian-apt-ecosystem/.
* Bencsáth et al. [2012] B. Bencsáth, G. Pék, L. Buttyán, and M. Félegyházi. The cousins of stuxnet: Duqu, flame, and gauss. _Future Internet_ , 4(4):971–1003, 2012. ISSN 1999-5903. doi: 10.3390/fi4040971. URL http://www.mdpi.com/1999-5903/4/4/971.
* Bird et al. [2009] S. Bird, E. Klein, and E. Loper. _Natural Language Processing with Python_. O'Reilly Media, 2009.
* Bishop Fox [2019] Bishop Fox. cyber.dic, 2019. URL https://github.com/BishopFox/cyberdic.
* Bouwman et al. [2020] X. Bouwman, H. Griffioen, J. Egbers, C. Doerr, B. Klievink, and M. van Eeten. A different cup of TI? the added value of commercial threat intelligence. In _29th USENIX Security Symposium (USENIX Security 20)_ , pages 433–450. USENIX Association, Aug 2020. ISBN 978-1-939133-17-5. URL https://www.usenix.org/conference/usenixsecurity20/presentation/bouwman.
* Brennan et al. [2012] M. Brennan, S. Afroz, and R. Greenstadt. Adversarial stylometry: Circumventing authorship recognition to preserve privacy and anonymity. _ACM Trans. Inf. Syst. Secur._ , 15(3), nov 2012. ISSN 1094-9224. doi: 10.1145/2382448.2382450. URL https://doi.org/10.1145/2382448.2382450.
* Burrows et al. [2014] S. Burrows, A. L. Uitdenbogerd, and A. Turpin. Comparing techniques for authorship attribution of source code. _Softw., Pract. Exper._ , 44(1):1–32, 2014. doi: 10.1002/spe.2146. URL https://doi.org/10.1002/spe.2146.
* Caliskan et al. [2018] A. Caliskan, F. Yamaguchi, E. Dauber, R. E. Harang, K. Rieck, R. Greenstadt, and A. Narayanan. When coding style survives compilation: De-anonymizing programmers from executable binaries. In _25th Annual Network and Distributed System Security Symposium, NDSS 2018, San Diego, California, USA, February 18-21, 2018_, 2018. URL https://www2.seas.gwu.edu/~aylin/papers/caliskan_when.pdf.
* Caliskan-Islam et al. [2015] A. Caliskan-Islam, R. Harang, A. Liu, A. Narayanan, C. Voss, F. Yamaguchi, and R. Greenstadt. De-anonymizing programmers via code stylometry. In _24th USENIX Security Symposium (USENIX Security 15)_ , pages 255–270, Washington, D.C., 2015. USENIX Association. ISBN 978-1-931971-232. URL https://www.usenix.org/conference/usenixsecurity15/technical-sessions/presentation/caliskan-islam.
* Calleja et al. [2016] A. Calleja, J. Tapiador, and J. Caballero. A look into 30 years of malware development from a software metrics perspective. In F. Monrose, M. Dacier, G. Blanc, and J. Garcia-Alfaro, editors, _Research in Attacks, Intrusions, and Defenses_ , pages 325–345, Cham, 2016\. Springer International Publishing. ISBN 978-3-319-45719-2.
* Calleja et al. [2019] A. Calleja, J. Tapiador, and J. Caballero. The malsource dataset: Quantifying complexity and code reuse in malware development. _IEEE Transactions on Information Forensics and Security_ , 14(12):3175–3190, Dec 2019. ISSN 1556-6021. doi: 10.1109/TIFS.2018.2885512.
* Carlini and Wagner [2017] N. Carlini and D. Wagner. Towards evaluating the robustness of neural networks. In _2017 IEEE Symposium on Security and Privacy (SP)_ , pages 39–57, May 2017. doi: 10.1109/SP.2017.49.
* Carrera [2020] E. Carrera. pefile, 2020. URL https://github.com/erocarrera/pefile.
* [28] C. C. N. (CCN-CERT). Ciberamenazas Y Tendencias, 2020. URL https://www.ccn-cert.cni.es/informes/informes-ccn-cert-publicos/5377-ccn-cert-ia-13-20-ciberamenazas-y-tendencias-edicion-2020/file.html.
* Chronicle [2004] Chronicle. VirusTotal, 2004. URL www.virustotal.com.
* Clang [2020] Clang. Compiler, 2020. URL https://clang.llvm.org/index.html.
* Cook [1971] S. A. Cook. The complexity of theorem-proving procedures. In _Proceedings of the Third Annual ACM Symposium on Theory of Computing_ , STOC ’71, page 151–158, New York, NY, USA, 1971. Association for Computing Machinery. ISBN 9781450374644. doi: 10.1145/800157.805047. URL https://doi.org/10.1145/800157.805047.
* Council on Foreign Relations [2020] Council on Foreign Relations. Cyber operations tracker, 2020. URL https://www.cfr.org/interactive/cyber-operations.
* cyber-research [2019] cyber-research. APTMalware, 2019. URL https://github.com/cyber-research/APTMalware.
* Dauber et al. [2019] E. Dauber, A. Caliskan, R. E. Harang, G. Shearer, M. Weisman, F. Nelson, and R. Greenstadt. Git blame who?: Stylistic authorship attribution of small, incomplete source code fragments. _PoPETs_ , 2019(3):389–408, 2019. doi: 10.2478/popets-2019-0053. URL https://doi.org/10.2478/popets-2019-0053.
* NASM development team [2015] The NASM development team. Netwide Assembler, 2015. URL https://www.nasm.us/.
* Emmerik and Waddington [2004] M. V. Emmerik and T. Waddington. Using a decompiler for real-world source recovery. In _11th Working Conference on Reverse Engineering_ , pages 27–36, Nov 2004. doi: 10.1109/WCRE.2004.42.
* Farhadi et al. [2015] M. R. Farhadi, B. C. Fung, Y. B. Fung, P. Charland, S. Preda, and M. Debbabi. Scalable code clone search for malware analysis. _Digital Investigation_ , 15:46 – 60, 2015. ISSN 1742-2876. doi: https://doi.org/10.1016/j.diin.2015.06.001. URL http://www.sciencedirect.com/science/article/pii/S1742287615000705. Special Issue: Big Data and Intelligent Data Analysis.
* FireEye [2017] FireEye. FLOSS, 2017. URL https://github.com/fireeye/flare-floss.
* Foltýnek et al. [2020] T. Foltýnek, N. Meuschke, and B. Gipp. Academic plagiarism detection: A systematic literature review. _ACM Comput. Surv._ , 52(6):112:1–112:42, 2020\. doi: 10.1145/3345317. URL https://doi.org/10.1145/3345317.
* Frantzeskou et al. [2007] G. Frantzeskou, E. Stamatatos, S. Gritzalis, C. E. Chaski, and B. S. Howald. Identifying authorship by byte-level n-grams: The source code author profile (SCAP) method. _IJDE_ , 6(1), 2007. URL http://www.utica.edu/academic/institutes/ecii/publications/articles/B41158D1-C829-0387-009D214D2170C321.pdf.
* Gamer [2016] N. Gamer. The problem with open source malware, 2016. URL https://blog.trendmicro.com/the-problem-with-open-source-malware/.
* GitHub [2020] GitHub. Github repositories, 2020. URL https://github.com.
* GNU [2020] GNU. Compiler, 2020. URL https://gcc.gnu.org/.
* Gonzalez et al. [2018] H. Gonzalez, N. Stakhanova, and A. A. Ghorbani. Authorship attribution of android apps. In _Proceedings of the Eighth ACM Conference on Data and Application Security and Privacy_ , CODASPY ’18, pages 277–286. ACM, 2018. ISBN 978-1-4503-5632-9. doi: 10.1145/3176258.3176322. URL http://doi.acm.org/10.1145/3176258.3176322.
* Google [2008-2020] Google. Google code jam, 2008-2020. URL https://codingcompetitions.withgoogle.com/codejam/.
* Guarnieri [2019] C. Guarnieri. Cuckoo sandbox, 2019. URL https://cuckoosandbox.org/.
* Haddadpajouh et al. [2020] H. Haddadpajouh, A. Azmoodeh, A. Dehghantanha, and R. M. Parizi. Mvfcc: A multi-view fuzzy consensus clustering model for malware threat attribution. _IEEE Access_ , 8:139188–139198, 2020.
* Haq and Caballero [2019] I. U. Haq and J. Caballero. A survey of binary code similarity. _CoRR_ , abs/1909.11424, 2019. URL http://arxiv.org/abs/1909.11424.
* Henderson et al. [2017] A. Henderson, L. Yan, X. Hu, A. Prakash, H. Yin, and S. McCamant. Decaf: A platform-neutral whole-system dynamic binary analysis platform. _IEEE Transactions on Software Engineering_ , 43(2):164–184, 2 2017. ISSN 0098-5589. doi: 10.1109/TSE.2016.2589242.
* Hendrikse [2017] S. Hendrikse. _The Effect of Code Obfuscation on Authorship Attribution of Binary Computer Files_. PhD thesis, 2017. URL https://nsuworks.nova.edu/gscis_etd/1009.
* Herzog [2018] B. Herzog. The gandcrab ransomware mindset, 2018. URL https://research.checkpoint.com/2018/gandcrab-ransomware-mindset/.
* Hex-Rays [2020] Hex-Rays. Ida, 2020. URL https://www.hex-rays.com/products/ida/.
* Hightower [2017] F. Hightower. Observable finder, 2017. URL https://github.com/fhightower/ioc-finder.
* Hong et al. [2018] J. Hong, S. Park, S.-W. Kim, D. Kim, and W. Kim. Classifying malwares for identification of author groups. _Concurrency and Computation: Practice and Experience_ , 30(3):e4197, 2018. doi: 10.1002/cpe.4197. URL https://onlinelibrary.wiley.com/doi/abs/10.1002/cpe.4197. e4197 cpe.4197.
* Honnibal and Montani [2017] M. Honnibal and I. Montani. spacy 2: Natural language understanding with bloom embeddings, convolutional neural networks and incremental parsing. _To appear_ , 2017.
* Hurier et al. [2017] M. Hurier, G. Suarez-Tangil, S. K. Dash, T. F. Bissyandé, Y. L. Traon, J. Klein, and L. Cavallaro. Euphony: Harmonious unification of cacophonous anti-virus vendor labels for android malware. In _2017 IEEE/ACM 14th International Conference on Mining Software Repositories (MSR)_ , pages 425–435, 2017. doi: 10.1109/MSR.2017.57.
* Intel [2020] Intel. C++ compiler, 2020. URL https://software.intel.com/en-us/c-compilers.
* Kalgutkar et al. [2018] V. Kalgutkar, N. Stakhanova, P. Cook, and A. Matyukhina. Android authorship attribution through string analysis. In S. Doerr, M. Fischer, S. Schrittwieser, and D. Herrmann, editors, _Proceedings of the 13th International Conference on Availability, Reliability and Security, ARES 2018, Hamburg, Germany, August 27-30, 2018_ , pages 4:1–4:10. ACM, 2018. doi: 10.1145/3230833.3230849. URL https://doi.org/10.1145/3230833.3230849.
* Kalgutkar et al. [2019] V. Kalgutkar, R. Kaur, H. Gonzalez, N. Stakhanova, and A. Matyukhina. Code authorship attribution: Methods and challenges. _ACM Comput. Surv._ , 52(1):3:1–3:36, Feb. 2019\. ISSN 0360-0300. doi: 10.1145/3292577. URL http://doi.acm.org/10.1145/3292577.
* Kaspersky [2020] Kaspersky. The power of threat attribution, 2020. URL https://media.kaspersky.com/en/business-security/enterprise/threat-attribution-engine-whitepaper.pdf.
* Kinder [2013] J. Kinder. Jakstab, 2013. URL https://github.com/jkinder/jakstab.
* Kolosnjaji et al. [2018] B. Kolosnjaji, A. Demontis, B. Biggio, D. Maiorca, G. Giacinto, C. Eckert, and F. Roli. Adversarial malware binaries: Evading deep learning for malware detection in executables. In _2018 26th European Signal Processing Conference (EUSIPCO)_ , pages 533–537, 2018. doi: 10.23919/EUSIPCO.2018.8553214.
* Krsul and Spafford [1997] I. Krsul and E. H. Spafford. Authorship analysis: identifying the author of a program. _Comput. Secur._ , 16(3):233–257, 1997. doi: 10.1016/S0167-4048(97)00005-9. URL https://doi.org/10.1016/S0167-4048(97)00005-9.
* Laurenza and Lazzeretti [2020] G. Laurenza and R. Lazzeretti. daptaset: A comprehensive mapping of apt-related data. In A. P. Fournaris, M. Athanatos, K. Lampropoulos, S. Ioannidis, G. Hatzivasilis, E. Damiani, H. Abie, S. Ranise, L. Verderame, A. Siena, and J. Garcia-Alfaro, editors, _Computer Security_ , pages 217–225, Cham, 2020\. Springer International Publishing. ISBN 978-3-030-42051-2.
* Laurenza et al. [2017] G. Laurenza, L. Aniello, R. Lazzeretti, and R. Baldoni. Malware triage based on static features and public apt reports. In S. Dolev and S. Lodha, editors, _Cyber Security Cryptography and Machine Learning_ , pages 288–305, Cham, 2017. Springer International Publishing. ISBN 978-3-319-60080-2.
* Laurenza et al. [2018] G. Laurenza, R. Lazzeretti, and L. Mazzotti. Malware triage for early identification of advanced persistent threat activities. _CoRR_ , abs/1810.07321, 2018. URL http://arxiv.org/abs/1810.07321.
* Layton et al. [2010] R. Layton, P. A. Watters, and R. Dazeley. Automatically determining phishing campaigns using the USCAP methodology. In _2010 eCrime Researchers Summit, eCrime 2010, Dallas, TX, USA, October 18-20, 2010_ , pages 1–8. IEEE, 2010. doi: 10.1109/ecrime.2010.5706698. URL https://doi.org/10.1109/ecrime.2010.5706698.
* Legoy et al. [2020] V. Legoy, M. Caselli, C. Seifert, and A. Peter. Automated retrieval of att&ck tactics and techniques for cyber threat reports, 2020.
* Lemay et al. [2018] A. Lemay, J. Calvet, F. Menet, and J. M. Fernandez. Survey of publicly available reports on advanced persistent threat actors. _Computers and Security_ , 72:26 – 59, 2018. ISSN 0167-4048. doi: https://doi.org/10.1016/j.cose.2017.08.005. URL http://www.sciencedirect.com/science/article/pii/S0167404817301608.
* Liao et al. [2016] X. Liao, K. Yuan, X. Wang, Z. Li, L. Xing, and R. Beyah. Acing the ioc game: Toward automatic discovery and analysis of open-source cyber threat intelligence. In _Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security_ , CCS ’16, page 755–766, New York, NY, USA, 2016\. Association for Computing Machinery. ISBN 9781450341394. doi: 10.1145/2976749.2978315. URL https://doi.org/10.1145/2976749.2978315.
* LLVM [2020] LLVM. Compiler, 2020. URL http://llvm.org/.
* Lockheed-Martin [2015] Lockheed-Martin. Gaining the advantage applying cyber kill chain® methodology to network defense, 2015. URL https://www.lockheedmartin.com/content/dam/lockheed-martin/rms/documents/cyber/Gaining_the_Advantage_Cyber_Kill_Chain.pdf.
* Lottmann and Yamaguchi [2016] M. Lottmann and F. Yamaguchi. bjoern, 2016. URL https://github.com/octopus-platform/bjoern.
* Marquis-Boire et al. [2015] M. Marquis-Boire, M. Marschalek, and C. Guarnieri. Big game hunting: The peculiarities in nation-state malware research, 2015. URL https://www.blackhat.com/docs/us-15/materials/us-15-MarquisBoire-Big-Game-Hunting-The-Peculiarities-Of-Nation-State-Malware-Research.pdf.
* Masrepus et al. [2019] Masrepus, vfsrfs, and garanews. Un{i}packer, 2019. URL https://github.com/unipacker/unipacker.
* Matyukhina et al. [2019] A. Matyukhina, N. Stakhanova, M. Dalla Preda, and C. Perley. Adversarial authorship attribution in open-source projects. In _Proceedings of the Ninth ACM Conference on Data and Application Security and Privacy_ , CODASPY ’19, pages 291–302, New York, NY, USA, 2019. ACM. ISBN 978-1-4503-6099-9. doi: 10.1145/3292006.3300032. URL http://doi.acm.org/10.1145/3292006.3300032.
* Meng [2016] X. Meng. Fine-grained binary code authorship identification. In _Proceedings of the 2016 24th ACM SIGSOFT International Symposium on Foundations of Software Engineering_ , FSE 2016, pages 1097–1099. ACM, 2016. ISBN 978-1-4503-4218-6. doi: 10.1145/2950290.2983962. URL http://doi.acm.org/10.1145/2950290.2983962.
* Meng and Miller [2018] X. Meng and B. P. Miller. Binary code multi-author identification in multi-toolchain scenarios. _Under Submission_ , 2018. URL http://ftp.cs.wisc.edu/paradyn/papers/Meng17MultiToolchain.pdf.
* Meng et al. [2013] X. Meng, B. P. Miller, W. R. Williams, and A. R. Bernat. Mining software repositories for accurate authorship. In _2013 IEEE International Conference on Software Maintenance (ICSM)_ , pages 250–259, Los Alamitos, CA, USA, 2013. IEEE Computer Society. doi: 10.1109/ICSM.2013.36. URL https://doi.ieeecomputersociety.org/10.1109/ICSM.2013.36.
* Meng et al. [2017] X. Meng, B. P. Miller, and K.-S. Jun. Identifying multiple authors in a binary program. In S. N. Foley, D. Gollmann, and E. Snekkenes, editors, _Computer Security – ESORICS 2017_ , pages 286–304, Cham, 2017. Springer International Publishing. ISBN 978-3-319-66399-9.
* Meng et al. [2018] X. Meng, B. P. Miller, and S. Jha. Adversarial binaries for authorship identification. _CoRR_ , abs/1809.08316, 2018. URL http://arxiv.org/abs/1809.08316.
* Microsoft [2019] Microsoft. Visual studio, 2019. URL https://visualstudio.microsoft.com/.
* MISP: Open Source Threat Intelligence Platform [2020] MISP: Open Source Threat Intelligence Platform. List of threat actors, 2020. URL https://raw.githubusercontent.com/MISP/misp-galaxy/main/clusters/threat-actor.json.
* Mitre [2020] Mitre. ATT&CK, 2020. URL https://attack.mitre.org/.
* Moore and Pham [2015] P. Moore and H. V. Pham. On context and the open world assumption. In _2015 IEEE 29th International Conference on Advanced Information Networking and Applications Workshops_ , pages 387–392, 2015. doi: 10.1109/WAINA.2015.7.
* National Institute of Standards and Technology [2011] National Institute of Standards and Technology. Managing information security risk, 2011. URL https://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-39.pdf.
* Neal et al. [2018] T. J. Neal, K. Sundararajan, A. Fatima, Y. Yan, Y. Xiang, and D. L. Woodard. Surveying stylometry techniques and applications. _ACM Comput. Surv._ , 50(6):86:1–86:36, 2018\. doi: 10.1145/3132039. URL https://doi.org/10.1145/3132039.
* OASIS Cyber Threat Intelligence [2020a] OASIS Cyber Threat Intelligence. STIX 2.0, 2020a. URL https://oasis-open.github.io/cti-documentation/stix/intro.
* OASIS Cyber Threat Intelligence [2020b] OASIS Cyber Threat Intelligence. TAXII 2.0, 2020b. URL https://oasis-open.github.io/cti-documentation/taxii/intro.html.
* Office of the Director of National Intelligence [2018] Office of the Director of National Intelligence. A guide to cyber attribution, 2018. URL https://www.dni.gov/files/CTIIC/documents/ODNI_A_Guide_to_Cyber_Attribution.pdf.
* Open Source Software [2020] Open Source Software. UPX (Ultimate Packer for Executables), 2020. URL https://upx.github.io/.
* Paradyn-Project [2019] Paradyn-Project. Dyninst: Putting the performance in high performance computing, 2019. URL http://www.dyninst.org.
* Planet-Source-Code [2020] Planet-Source-Code. Planet source code repositories, 2020. URL https://www.planet-source-code.com.
* Quiring et al. [2019] E. Quiring, A. Maier, and K. Rieck. Misleading authorship attribution of source code using adversarial learning. In _28th USENIX Security Symposium (USENIX Security 19)_ , pages 479–496, Santa Clara, CA, Aug 2019. USENIX Association. ISBN 978-1-939133-06-9. URL https://www.usenix.org/conference/usenixsecurity19/presentation/quiring.
* radare org [2020] radare org. radare2, 2020. URL https://www.radare.org/n/radare2.html.
* Raff et al. [2020] E. Raff, R. Zak, G. L. Munoz, W. Fleming, H. S. Anderson, B. Filar, C. Nicholas, and J. Holt. Automatic Yara Rule Generation Using Biclustering. In _13th ACM Workshop on Artificial Intelligence and Security (AISec’20)_ , 2020. doi: 10.1145/3411508.3421372. URL http://arxiv.org/abs/2009.03779.
* Rahimian et al. [2015] A. Rahimian, P. Shirani, S. Alrbaee, L. Wang, and M. Debbabi. Bincomp: A stratified approach to compiler provenance attribution. _Digital Investigation_ , 14:S146 – S155, 2015. ISSN 1742-2876. doi: https://doi.org/10.1016/j.diin.2015.05.015. URL http://www.sciencedirect.com/science/article/pii/S1742287615000602. The Proceedings of the Fifteenth Annual DFRWS Conference.
* Řehůřek and Sojka [2010] R. Řehůřek and P. Sojka. Software Framework for Topic Modelling with Large Corpora. In _Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks_ , pages 45–50, Valletta, Malta, May 2010. ELRA. http://is.muni.cz/publication/884893/en.
* Reynolds [2020] R. Reynolds. The four biggest malware threats to uk businesses. _Network Security_ , 2020(3):6 – 8, 2020. ISSN 1353-4858. doi: https://doi.org/10.1016/S1353-4858(20)30029-5. URL http://www.sciencedirect.com/science/article/pii/S1353485820300295.
* Rid and Buchanan [2015] T. Rid and B. Buchanan. Attributing cyber attacks. _Journal of Strategic Studies_ , 38(1-2):4–37, 2015. doi: 10.1080/01402390.2014.977382. URL https://doi.org/10.1080/01402390.2014.977382.
* Robbins [2017] E. Robbins. _Solvers for Type Recovery and Decompilation of Binaries_. PhD thesis, University of Kent, January 2017. URL https://kar.kent.ac.uk/61349/.
* Ronen et al. [2018] R. Ronen, M. Radu, C. Feuerstein, E. Yom-Tov, and M. Ahmadi. Microsoft malware classification challenge. _CoRR_ , abs/1802.10135, 2018. URL http://arxiv.org/abs/1802.10135.
* Rosenberg et al. [2017] I. Rosenberg, G. Sicard, and E. O. David. Deepapt: Nation-state apt attribution using end-to-end deep neural networks. In A. Lintas, S. Rovetta, P. F. Verschure, and A. E. Villa, editors, _Artificial Neural Networks and Machine Learning – ICANN 2017_ , pages 91–99, Cham, 2017. Springer International Publishing. ISBN 978-3-319-68612-7.
* Rosenberg et al. [2018] I. Rosenberg, G. Sicard, and E. O. David. End-to-end deep neural networks and transfer learning for automatic analysis of nation-state malware. volume 20, 2018. doi: 10.3390/e20050390. URL http://www.mdpi.com/1099-4300/20/5/390.
* Rosenberg and Beek [2018] J. Rosenberg and C. Beek. Examining code reuse reveals undiscovered links among north korea’s malware families, 2018. URL https://www.mcafee.com/blogs/other-blogs/mcafee-labs/examining-code-reuse-reveals-undiscovered-links-among-north-koreas-malware-families/.
* Rosenblum et al. [2011] N. Rosenblum, X. Zhu, and B. P. Miller. Who wrote this code? identifying the authors of program binaries. In V. Atluri and C. Diaz, editors, _Computer Security – ESORICS 2011_ , pages 172–189, Berlin, Heidelberg, 2011. Springer Berlin Heidelberg. ISBN 978-3-642-23822-2.
* Rosenblum et al. [2010] N. E. Rosenblum, B. P. Miller, and X. Zhu. Extracting compiler provenance from program binaries. In _Proceedings of the 9th ACM SIGPLAN-SIGSOFT Workshop on Program Analysis for Software Tools and Engineering_ , PASTE ’10, pages 21–28. ACM, 2010. ISBN 978-1-4503-0082-7. doi: 10.1145/1806672.1806678. URL http://doi.acm.org/10.1145/1806672.1806678.
* sapphirex00 [2018] sapphirex00. APTs and OPs table guide, 2018. URL https://github.com/sapphirex00/Threat-Hunting/raw/master/apts_and_ops_tableguide.xlsx.
* Sebastián et al. [2016] M. Sebastián, R. Rivera, P. Kotzias, and J. Caballero. Avclass: A tool for massive malware labeling. In _Research in Attacks, Intrusions, and Defenses - 19th International Symposium, RAID 2016, Paris, France, September 19-21, 2016, Proceedings_ , pages 230–253, 2016. doi: 10.1007/978-3-319-45719-2_11. URL https://doi.org/10.1007/978-3-319-45719-2_11.
* Shirani et al. [2017] P. Shirani, L. Wang, and M. Debbabi. Binshape: Scalable and robust binary library function identification using function shape. In M. Polychronakis and M. Meier, editors, _Detection of Intrusions and Malware, and Vulnerability Assessment_ , pages 301–324, Cham, 2017\. Springer International Publishing. ISBN 978-3-319-60876-1.
* Shoshitaishvili et al. [2016] Y. Shoshitaishvili, R. Wang, C. Salls, N. Stephens, M. Polino, A. Dutcher, J. Grosen, S. Feng, C. Hauser, C. Kruegel, and G. Vigna. Sok: (state of) the art of war: Offensive techniques in binary analysis. In _2016 IEEE Symposium on Security and Privacy (SP)_ , pages 138–157, 2016. doi: 10.1109/SP.2016.17.
* Simko et al. [2018] L. Simko, L. Zettlemoyer, and T. Kohno. Recognizing and imitating programmer style: Adversaries in program authorship attribution. _Proceedings on Privacy Enhancing Technologies_ , 2018(1):127 – 144, 2018. URL https://content.sciendo.com/view/journals/popets/2018/1/article-p127.xml.
* Stirparo et al. [2015] P. Stirparo, D. Bizeul, B. Bell, Z. Chang, J. Esler, K. Bleich, M. Moreno, M. K. A, J. Capmany, P. Hutchinson, B. Ivanov, A. Gironda, D. Ackerman, C. Fragoso, E. Sela, and F. Egloff. Apt groups and operations, 2015. URL https://apt.threattracking.com.
* Symantec [2019] Symantec. Internet security threat report 2019, 2019. URL https://www.symantec.com/content/dam/symantec/docs/reports/istr-24-2019-en.pdf.
* Thailand Computer Emergency Response Team [2020] Thailand Computer Emergency Response Team. Threat group cards: A threat actor encyclopedia, 2020. URL https://apt.thaicert.or.th/.
* TIOBE - The Software Quality Company [2018] TIOBE - The Software Quality Company. TIOBE Index, 2018. URL https://www.tiobe.com.
* van Rossum et al. [2001] G. van Rossum, B. Warsaw, and N. Coghlan. Pep 8 style guide for python code, 2001. URL https://www.python.org/dev/peps/pep-0008/.
* Virvilis and Gritzalis [2013] N. Virvilis and D. Gritzalis. The big four - what we did wrong in advanced persistent threat detection? In _2013 International Conference on Availability, Reliability and Security(ARES)_ , volume 00, pages 248–254, 2013. doi: 10.1109/ARES.2013.32. URL doi.ieeecomputersociety.org/10.1109/ARES.2013.32.
* Xue et al. [2019] H. Xue, S. Sun, G. Venkataramani, and T. Lan. Machine learning-based analysis of program binaries: A comprehensive study. _IEEE Access_ , 7:65889–65912, 2019.
* Zhao et al. [2020] J. Zhao, Q. Yan, X. Liu, B. Li, and G. Zuo. Cyber threat intelligence modeling based on heterogeneous graph convolutional network. In _23rd International Symposium on Research in Attacks, Intrusions and Defenses (RAID 2020)_ , pages 241–256, San Sebastian, Oct. 2020\. USENIX Association. ISBN 978-1-939133-18-2. URL https://www.usenix.org/conference/raid2020/presentation/zhao.
# EAGER: Embedding-Assisted Entity Resolution for Knowledge Graphs

Daniel Obraczka, Leipzig University <EMAIL_ADDRESS>
Jonathan Schuchart, Leipzig University <EMAIL_ADDRESS>
Erhard Rahm, Leipzig University <EMAIL_ADDRESS>

This work was supported by the German Federal Ministry of Education and Research (BMBF, 01/S18026A-F) by funding the competence center for Big Data and AI “ScaDS.AI Dresden/Leipzig”. Some computations have been done with resources of Leipzig University Computing Center.
###### Abstract
Entity Resolution (ER) is an essential part of integrating different
knowledge graphs, aiming to identify entities that refer to the same
real-world object. A promising approach is the use of graph embeddings for ER
in order to determine the similarity of entities based on the similarity of
their graph neighborhoods. The similarity computations for such embeddings
translate to calculating distances between them in the embedding space, which
is comparatively simple. However, previous work has shown that the use of
graph embeddings alone is not sufficient to achieve high ER quality.
propose a more comprehensive ER approach for knowledge graphs called EAGER
(Embedding-Assisted Knowledge Graph Entity Resolution) to flexibly utilize
both the similarity of graph embeddings and attribute values within a
supervised machine learning approach. We evaluate our approach on 23 benchmark
datasets with differently sized and structured knowledge graphs and use
hypothesis tests to ensure statistical significance of our results.
Furthermore we compare our approach with state-of-the-art ER solutions, where
our approach yields competitive results for table-oriented ER problems and
shallow knowledge graphs but much better results for deeper knowledge graphs.
###### Index Terms:
Entity Resolution, Knowledge Graphs, Graph Embedding, Entity Alignment
## I Introduction
Knowledge Graphs (KGs) store real-world facts in machine-readable form. This
is done by making statements about entities in triple form
$(entity,property,value)$. For example, the triple (Get_Out, director,
Jordan_Peele) tells us that the director of the movie “Get Out” is “Jordan
Peele”. Such structured information can be used for a variety of tasks such as
recommender systems, question answering and semantic search. For many KG usage
forms including question answering it is beneficial to integrate KGs from
different sources. An integral part of this integration is entity resolution
(ER), where the goal is to find entities which refer to the same real-world
object.
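For illustration only, such triples can be held as plain tuples, with the graph neighborhood of an entity recovered by scanning for matching subjects; this toy sketch demonstrates the representation, not a production triple store.

```python
# Toy (entity, property, value) triple store with a neighborhood lookup.
triples = [
    ("Get_Out", "director", "Jordan_Peele"),
    ("Get_Out", "rdf:type", "Film"),
]

def neighbors(kg, entity):
    # All (property, value) pairs attached to the given entity.
    return [(p, v) for s, p, v in kg if s == entity]

print(neighbors(triples, "Get_Out"))
# [('director', 'Jordan_Peele'), ('rdf:type', 'Film')]
```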
Existing ER systems mostly focus on matching entities of one specific entity
type (e.g. publication, movie, customer etc.) and assume matched schemata for
this entity type. This proves challenging when trying to use these systems for
ER in KGs typically consisting of many entity types with heterogeneous
attribute (property) sets. This is illustrated by the example in Figure 1
showing two simple movie-related KG subgraphs to be matched with each other.
Figure 1: Subgraphs of DBpedia and Wikidata. Green dashed lines show entities
that should be matched. Some URIs are shortened for brevity.
We observe there are entities of different types (film, director, actor) and
different attributes with heterogeneous value representations (e.g., birth
date values “1979-02-21” in DBpedia and “21 February 1979” in Wikidata for two
matching director entities). Moreover, we see that matching entities such as
the movie “Get Out” have different URIs and differently named edges referring
to properties and related entities, e.g. rdf:type vs. wdt:P31. These aspects
make traditional schema (property) matching as a means to simplify ER very
challenging, so entity resolution for KGs should ideally not depend on it.
Given that URIs and property names may not show any similarity it becomes
apparent that the graph structure and related entities should be utilized in
the ER process, e.g., to consider the movie label and director to match
movies.
A promising way to achieve this in a generic manner, applicable to virtually
any entity type, is the use of graph embeddings. By encoding the entities of
the KGs into a low-dimensional space such approaches alleviate the obstacles
posed by the aforementioned KG heterogeneities. Capturing the topological and
semantic relatedness of entities in a geometric embedding space enables the
use of these embeddings as inputs for machine learning (ML) algorithms. The
performance of graph embedding approaches for ER has been recently studied by
Sun et. al [1]. However, as they point out, most approaches focus on refining
the embedding process, while ER mostly consists of finding the nearest
neighbors in the embedding space. Hence, the use of graph embeddings has to be
tailored to the ER task for good effectiveness. We build on the findings of
[1] and investigate the usefulness of learned graph embeddings as input for ML
classifiers for entity resolution. While there are different settings for KG
integration, such as enhancing a given KG or KG fusion, we focus here on the
simple ER setting, i.e., finding matching entities in two data sources. The
resulting match mappings can then be used for applications such as question
answering or as input for KG fusion.
In this paper, we propose and comprehensively evaluate the first (to our
knowledge) graph embedding supported ER system named EAGER: Embedding Assisted
Knowledge Graph Entity Resolution. It uses both knowledge graph embeddings and
attribute similarities as inputs for an ML classifier for entity resolution.
EAGER utilizes different kinds of graph embeddings, specifically the ones that
performed best in [1], as well as different ML classifiers. We comprehensively
evaluate the match effectiveness and runtime efficiency of EAGER using
graph embeddings and attribute similarities either alone or in combination for
23 datasets of varying size and structure. We also compare the different graph
embeddings and classifiers with each other to identify good default
configurations. We further provide a comparison of EAGER with state-of-the-art
ER approaches, namely Magellan [2] and DeepMatcher [3]. All our results are
analyzed using hypothesis tests to ensure statistical significance of our
findings.
We begin by presenting related work, followed by a description of EAGER. In
Section IV the datasets used are presented, including a new benchmark dataset
from the movie domain. Our evaluation is presented in Section V and we end
with conclusions and future work in Section VI.
## II Related Work and Background
Entity resolution has attracted a significant amount of research, sometimes
under different names such as record linkage [4, 5], link discovery [6, 7] or
deduplication [8]. In the following we can only present some relevant ER
approaches. We refer the reader to surveys and books like [9, 10, 11] for a
more thorough overview. Traditional ER approaches rely on learning distance-
or similarity-based measures and then use a threshold or classifier to decide
whether two entities are the same. These classifiers can be unsupervised
[12, 13], supervised [7, 14] or employ active learning [8, 15]. For example,
the Magellan framework [2] provides supervised ML classifiers and extensive
guides for the entire ER process. Recently, deep learning has seen
some success in certain settings. DeepER [16] and DeepMatcher [3] provide a
variety of different architectures and among other aspects, such as attribute
similarities, use word embeddings as inputs for these networks. Both
frameworks have shown that especially for unstructured textual data deep
learning can outperform existing frameworks.
Collective ER approaches try to overcome the limitations of the more
conventional attribute-based methods. This paradigm uses the relationships
between entities as additional information and in some cases even considers
previous matching decisions in the neighborhood. Bhattacharya and Getoor [17]
show that using the neighborhood of potential match candidates in addition to
attribute-based similarity is especially useful for data with many ambiguous
entities. SiGMa [18] uses an iterative graph propagation algorithm relying on
relationship information as well as attribute-based similarity between graphs
to integrate large-scale knowledge bases. Pershina et al. [19] propagate
similarities using Personalized PageRank and are able to align industrial-size
knowledge graphs. Zhu et al. [20] reformulate entity resolution as a multi-type
graph summarization problem and use attribute-based similarity as well as
structural similarity, i.e. connectivity patterns in the graph.
More recently, the use of graph embeddings has shown promise for the
integration of KGs. An overview of currently relevant approaches that solely
rely on embedding techniques can be found in [1]; some of these techniques
are used in this work and are discussed in more detail in Section
III-D.
Knowledge graph embedding (KGE) models typically aim to capture the
relationship structure of each entity in latent vector representations in
order to be used for further downstream applications. For a good overview of
current knowledge graph embedding approaches we refer the reader to a recent
survey by Ali et al. [21]. Widely used basic techniques are translational
models, such as TransE by Bordes et al. [22] and its various proposed
improvements TransH [23], TransR [24] and TransD [25]. Translational models
interpret a relationship as a translation from its head entity to its tail
entity. Note that translational models also embed relationship names and would
therefore benefit from consistent vocabularies (schemata) across knowledge
graphs. Trouillon et al. [26] used complex valued vectors in order to better
capture anti-symmetric relationships, similar to an idea proposed by [27]
which restricts these complex representations to the unit circle. Based on the
influential Graph Convolutional Network (GCN) model by Kipf and Welling [28]
for ordinary graphs, Schlichtkrull et al. [29] used relationship-specific
weight matrices to capture relations as well as the neighborhood structure of
each entity.
EAGER aims to combine the two generally separate ER approaches of
embedding-based techniques and traditional attribute-based methods in KGs. We
show that our approach is viable for both real-world KGs and artificial
shallow KGs that are based on tabular data, as EAGER does not rely on
additional schema matching or any structural assumptions about the entities.
## III An overview of EAGER
Figure 2: Schematic summary of EAGER
In this section we present an overview of the EAGER approach for ER in
knowledge graphs and the specific approaches and configurations we will
evaluate. We start with a formal definition of the ER problem and an overview
of the EAGER workflow. We then discuss problems when calculating attribute
similarities in heterogeneous KGs and our use of graph embeddings. We finish
the section by describing the different variants of EAGER we intend to
evaluate and the machine learning classifiers that we use, and close by
discussing the prediction step.
### III-A Problem statement
As stated in the introduction, KGs are constructed by triples in the form of
$(entity,property,value)$, where $property$ can be either an attribute property
or a relationship and $value$ a literal or another entity, respectively.
Therefore, a KG is a tuple
$\mathcal{KG}=(\mathcal{E},\mathcal{R},\mathcal{A},\mathcal{L},\mathcal{T})$,
where $\mathcal{E}$ is the set of entities, $\mathcal{A}$ the set of attribute
properties, $\mathcal{R}$ the set of relationship properties, $\mathcal{L}$
the set of literals and $\mathcal{T}$ is the set of triples. We distinguish
attribute triples $\mathcal{T}_{A}$ and relationship triples
$\mathcal{T}_{R}$, where
$\mathcal{T}_{A}:\mathcal{E}\times\mathcal{A}\times\mathcal{L}$ are triples
connecting entities and literals, e.g. (dbr:Jordan_Peele, dbo:birthDate,
"1979-02-21") and
$\mathcal{T}_{R}:\mathcal{E}\times\mathcal{R}\times\mathcal{E}$ connect
entities, e.g. (dbr:Get_Out, dbo:director, dbr:Jordan_Peele) as seen in Figure
1. Our goal is to find a mapping between entities of two KGs. More formally,
we aim to find
$\mathcal{M}=\{(e_{1},e_{2})\in\mathcal{E}_{1}\times\mathcal{E}_{2}\,|\,e_{1}\equiv
e_{2}\}$, where $\equiv$ refers to the equivalence relation. Furthermore, we
assume we are provided with a subset of the mapped entities
$\mathcal{M}_{T}\subseteq\mathcal{M}$ as training data, which is also
sometimes referred to as seed alignment in the literature.
### III-B Overview
The remainder of this section illustrates how our approach tackles
entity resolution in heterogeneous KGs. A schematic overview can be found in
Figure 2. Given two KGs $\mathcal{KG}_{1},\mathcal{KG}_{2}$ and a set of
initial matches $\mathcal{M}_{T}$ we create a feature vector $\mathcal{V}$ for
each match $(e_{1},e_{2})\in\mathcal{M}_{T}$ to train a machine learning
classifier. In addition to the positive matches provided in $\mathcal{M}_{T}$,
we sample random pairs
$(e_{1},e_{2})\notin\mathcal{M}_{T}$ as negative examples to create a balanced set of positive and
negative examples. After the training step the classifier then acts as an
oracle to answer specific alignment queries, i.e. entity pairs, in order to
make a prediction. In the following we present our approach in more detail.
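As a minimal sketch of this training-set construction (assuming entities are given as lists of identifiers and $\mathcal{M}_{T}$ as a list of pairs; all names are illustrative):

```python
import random

def training_pairs(matches, entities_1, entities_2, seed=42):
    """Balanced training set from the seed alignment M_T: each positive
    match is complemented by one random non-matching pair."""
    rng = random.Random(seed)
    positives = [(e1, e2, 1) for e1, e2 in matches]
    matched = set(matches)
    negatives = []
    while len(negatives) < len(positives):
        pair = (rng.choice(entities_1), rng.choice(entities_2))
        if pair not in matched:
            negatives.append((*pair, 0))
    return positives + negatives
```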
### III-C Attribute Similarities
Since schemata across different KGs may differ wildly, creating a schema
matching before ER in heterogeneous KGs is difficult and can introduce
additional sources of error. While matching attributes by hand is possible
for datasets with a low number of attributes, this is not feasible for large
KGs, where more sophisticated approaches are necessary. Keeping the focus on
the matching process, we chose to concatenate all attribute values of each
entity into a single string and use three similarity measures for comparisons:
Levenshtein, Generalized Jaccard with an alphanumeric tokenizer, which returns
the longest strings of alphanumeric characters, and trigrams with the Dice
coefficient. This results in three separate features that can be used as input
to a classifier. Note that EAGER is generally not bound to these specific
distance/similarity measures; any other features that can be derived from
two strings can be used.
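A minimal sketch of these three features on the concatenated value strings; the exact implementations are not given in the text, so plain Jaccard over alphanumeric tokens stands in for Generalized Jaccard, and all names are illustrative:

```python
import re

def levenshtein_sim(a: str, b: str) -> float:
    """Normalized Levenshtein similarity in [0, 1]."""
    if not a and not b:
        return 1.0
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return 1.0 - prev[-1] / max(len(a), len(b))

def jaccard_alnum(a: str, b: str) -> float:
    """Jaccard over alphanumeric tokens (stand-in for Generalized Jaccard)."""
    ta = set(re.findall(r"[A-Za-z0-9]+", a.lower()))
    tb = set(re.findall(r"[A-Za-z0-9]+", b.lower()))
    return len(ta & tb) / len(ta | tb) if ta | tb else 1.0

def trigram_dice(a: str, b: str) -> float:
    """Dice coefficient over character trigrams."""
    ga = {a[i:i + 3] for i in range(max(len(a) - 2, 1))}
    gb = {b[i:i + 3] for i in range(max(len(b) - 2, 1))}
    return 2 * len(ga & gb) / (len(ga) + len(gb))

def attribute_features(values_1: list, values_2: list) -> list:
    """Concatenate all attribute values of each entity, then compare."""
    s1, s2 = " ".join(map(str, values_1)), " ".join(map(str, values_2))
    return [levenshtein_sim(s1, s2), jaccard_alnum(s1, s2), trigram_dice(s1, s2)]
```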
### III-D Graph Embeddings
Given that the focus of this study does not lie on the creation of the
embeddings themselves, our approach can take any entity embeddings that are embedded in the
same space. Since most KG embedding frameworks are not specialized for ER, we
use OpenEA (https://github.com/nju-websoft/OpenEA), which was developed by Sun
et al. for their 2020 benchmark study [1]. It offers a variety of embedding
approaches and embeds entities into the same space. Specifically, we chose
three of the best approaches of said study, namely BootEA, MultiKE and RDGCN:
#### III-D1 BootEA
Sun et al. in 2018 [30] based their approach on the TransE model and combined
it with elaborate bootstrapping and negative sampling techniques to improve
performance. TransE aims to find an embedding function $\phi$ that minimizes
$||\phi(e_{h})+\phi(r)-\phi(e_{t})||$ for any
$(e_{h},r,e_{t})\in\mathcal{T}_{R}$. Bootstrapping is done by additionally
sampling likely matching entities (resampled every few epochs based on the
current model) in order to increase the effective seed alignment size.
Additionally, negative relationship tuples are sampled and resampled every few
epochs based on the current model in order to improve the distinction between
otherwise similar entities. Since TransE is an unsupervised model, Sun et al.
proposed a new objective function which incorporates both the original
objective function of TransE and the likelihood of two entities from different
KGs matching, thus making use of the seed alignment.
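A toy illustration of the translational objective that BootEA builds on; the dimensions and variable names are illustrative, not taken from [30]:

```python
import numpy as np

def transe_score(phi_h, phi_r, phi_t, p=2):
    """||phi(e_h) + phi(r) - phi(e_t)||: small for plausible triples."""
    return float(np.linalg.norm(phi_h + phi_r - phi_t, ord=p))

rng = np.random.default_rng(0)
h, r = rng.normal(size=100), rng.normal(size=100)
t_good = h + r + 0.01 * rng.normal(size=100)  # nearly a perfect translation
t_bad = rng.normal(size=100)                  # an unrelated entity
assert transe_score(h, r, t_good) < transe_score(h, r, t_bad)
```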
#### III-D2 MultiKE
In order to also incorporate more than just relational information, Zhang et
al. [31] proposed a flexible model which combines different views on each
entity. Here, the name attribute, relations and all remaining attributes are
embedded separately, using pre-trained word2vec word embeddings [32] for names
and a variation on TransE for relations. Attribute embeddings are obtained by
training a convolutional neural network taking the attribute and attribute
value as input. All three embedding vectors are then combined into a single
unified embedding space. In this approach the two knowledge graphs are treated
as one combined graph where entities from the seed alignment are treated as
equal.
#### III-D3 RDGCN
In contrast to the aforementioned approaches, Wu et al. [33] proposed a new
technique using two constructed conventional graphs and the GCN model by Kipf
and Welling extended with highway gates. Instead of learning embeddings for entities and
relations within one graph, RDGCN constructs a primal entity graph and a dual
relationship graph in order to alternate the optimization process between the
two. That way, the relationship representations from the dual graph are used
to optimize the entity representations from the primal graph and vice versa by
applying a graph attention mechanism. As the actual neighborhood information of
each entity is not fully exploited in this case, Wu et al. showed that feeding
the resulting entity representations into a GCN can help significantly improve
the overall embedding quality.
### III-E Combinations
As the aim of our study is to investigate whether combining entity embeddings
with attribute similarities is superior to using either on its own, we
present three different variants of our approach that only differ in the
construction of their feature vector $\mathcal{V}$:
* •
$\textsc{EAGER}_{A\mathbin{\|}E}$, where
$\mathcal{V}=concat(\mathcal{V}_{A},\mathcal{V}_{E})$
* •
$\textsc{EAGER}_{E}$, where $\mathcal{V}=\mathcal{V}_{E}$
* •
$\textsc{EAGER}_{A}$, where $\mathcal{V}=\mathcal{V}_{A}$
Here, $\mathcal{V}_{A}$ contains the attribute similarities and
$\mathcal{V}_{E}$ the embeddings. The $concat$ operation simply appends one
vector to the other.
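A short sketch of this construction; how exactly $\mathcal{V}_{E}$ is formed from a candidate pair is not spelled out above, so concatenating both entities' embeddings is an assumption here:

```python
import numpy as np

def feature_vector(v_attr, emb_1, emb_2, variant="A||E"):
    """Build V for one candidate pair (names illustrative)."""
    v_a = np.asarray(v_attr)              # the three attribute similarities
    v_e = np.concatenate([emb_1, emb_2])  # assumed pair representation
    if variant == "A":
        return v_a                        # EAGER_A
    if variant == "E":
        return v_e                        # EAGER_E
    return np.concatenate([v_a, v_e])     # EAGER_{A||E}
```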
### III-F Prediction
The trained classifier is presented with alignment queries, i.e. pairs of
entities that it will have to classify as match or non-match. Choosing these
pairs is a non-trivial question since exploring all possible pairs would lead
to a quadratic number of alignment queries relative to the KG size, which is
not scalable to large datasets. Traditionally, blocking strategies are used to
reduce the number of pairs by a linear factor. Due to the heterogeneous nature
of KGs, new strategies for this problem have to be found. An alternative could
be to use the embeddings to find a number of nearest neighbors, which is a
scalable solution since the triangle inequality in metric spaces can be
exploited to reduce the number of comparisons for the neighborhood search (see
the sketch below). Finding a good solution to this problem is, however, out of
scope for our study; in the experiments we therefore use the test data to
create prediction pairs, sampling negative examples randomly as done in the
training step. More on our experimental setup can be found in Section V-A.
## IV Datasets
To evaluate our approach we use multiple datasets that can generally be put
into two categories: rich and shallow graph datasets. While the former are
sampled from popular knowledge graphs and therefore contain a rich graph
structure, i.e. lots of different relationships, the latter are derived from
tabular data and have a very limited number of relationships.
Table I: Shallow graph datasets statistics
Datasets | KGs | $|\mathcal{R}|$ | $|\mathcal{A}|$ | $|\mathcal{T}_{R}|$ | $|\mathcal{T}_{A}|$ | $|\mathcal{E}|$ | $|\mathcal{M}|$
---|---|---|---|---|---|---|---
abt-buy | abt | 3 | 4 | 2753 | 2998 | 1920 | 1097
buy | 4 | 4 | 4654 | 3480 | 2392
amazon-google | amazon | 4 | 4 | 8528 | 5802 | 4443 | 1300
google | 4 | 4 | 16429 | 12971 | 9749
acm-dblp | acm | 4 | 3 | 15007 | 5874 | 9190 | 2224
dblp | 4 | 3 | 16444 | 6041 | 10462
dblp-gs | dblp | 4 | 3 | 16017 | 5832 | 10256 | 5347
gs | 4 | 3 | 390579 | 190336 | 228211
imdb-tmdb | imdb | 3 | 13 | 17532 | 25723 | 5129 | 1978
tmdb | 4 | 493 | 27903 | 24695 | 6056
imdb-tvdb | imdb | 3 | 13 | 17532 | 25723 | 5129 | 2488
tvdb | 3 | 350 | 15455 | 21430 | 7810
tmdb-tvdb | tmdb | 4 | 493 | 27903 | 24695 | 6056 | 2483
tvdb | 3 | 350 | 15455 | 21430 | 7810
Table II: Rich graph datasets statistics, adapted from [1]
Datasets | KGs | V1 | V2
---|---|---|---
$|\mathcal{R}|$ | $|\mathcal{A}|$ | $|\mathcal{T}_{R}|$ | $|\mathcal{T}_{A}|$ | $|\mathcal{M}|$ | $|\mathcal{R}|$ | $|\mathcal{A}|$ | $|\mathcal{T}_{R}|$ | $|\mathcal{T}_{A}|$ | $|\mathcal{M}|$
15K | D-W | DB | 248 | 342 | 38,265 | 68,258 | 15,000 | 167 | 175 | 73,983 | 66,813 | 15,000
WD | 169 | 649 | 42,746 | 138,246 | 121 | 457 | 83,365 | 175,686
D-Y | DB | 165 | 257 | 30,291 | 71,716 | 15,000 | 72 | 90 | 68,063 | 65,100 | 15,000
YG | 28 | 35 | 26,638 | 132,114 | 21 | 20 | 60,970 | 131,151
EN-DE | EN | 215 | 286 | 47,676 | 83,755 | 15,000 | 169 | 171 | 84,867 | 81,988 | 15,000
DE | 131 | 194 | 50,419 | 156,150 | 96 | 116 | 92,632 | 186,335
EN-FR | EN | 267 | 308 | 47,334 | 73,121 | 15,000 | 193 | 189 | 96,318 | 66,899 | 15,000
FR | 210 | 404 | 40,864 | 67,167 | 166 | 221 | 80,112 | 68,779
100K | D-W | DB | 413 | 493 | 293,990 | 451,011 | 100,000 | 318 | 328 | 616,457 | 467,103 | 100,000
WD | 261 | 874 | 251,708 | 687,860 | 239 | 760 | 588,203 | 878,219
D-Y | DB | 287 | 379 | 294,188 | 523,062 | 100,000 | 230 | 277 | 576,547 | 547,026 | 100,000
YG | 32 | 38 | 400,518 | 749,787 | 31 | 36 | 865,265 | 855,161
EN-DE | EN | 381 | 451 | 335,359 | 552,750 | 100,000 | 323 | 326 | 622,588 | 560,247 | 100,000
DE | 196 | 252 | 336,240 | 716,615 | 170 | 189 | 629,395 | 793,710
EN-FR | EN | 400 | 466 | 309,607 | 497,729 | 100,000 | 379 | 364 | 649,902 | 503,922 | 100,000
FR | 300 | 519 | 258,285 | 426,672 | 287 | 468 | 561,391 | 431,379
In this section we present the datasets used for our evaluation, starting with
the shallow graph datasets, followed by the rich graph datasets.
### IV-A Shallow Graph Datasets
To investigate how the interplay of attribute similarities and graph
embeddings fares in settings with less dense KGs, we transformed classical ER
benchmark datasets and created a new benchmark dataset with multiple entity
types. The classical ER datasets are taken from [34] and transformed into
simple KGs. Due to the repurposing of these ER tasks, we only have a gold
standard for one entity type: publication for the benchmarks from the
publication domain, and product for the datasets associated with e-commerce.
To address this shortcoming we created a new benchmark from the movie domain,
where the gold standard was hand-labeled for the five entity types Person,
Movie, TvSeries, Episode, Company. The movie datasets were created from three
sources containing information about movies and tv series:
IMDB (https://www.imdb.com/), TheMovieDB (https://www.themoviedb.org/) and
TheTVDB (https://www.thetvdb.com/). We provide more details about the datasets
in Table I. It is important to note that the repurposed classical ER
benchmark datasets have a very low number of different attributes, while the
movie datasets are richer in this respect. Also note that the number of
entities in the knowledge graphs differs from the published number of
products or articles for abt-buy, amazon-google, dblp-acm and dblp-scholar, as
the knowledge graphs contain additional entity types such as places, events,
authors, brands and prices. We make the movie datasets publicly available for
future research at https://github.com/ScaDS/MovieGraphBenchmark.
### IV-B Rich Graph Datasets
In their benchmark study [1], Sun et al. provided datasets from DBpedia
(DB), Wikidata (WD) and Yago (YG) that were sampled with the intention of
properly emulating the graph structure of real-world KGs. To investigate
several aspects that are relevant in the ER process they provide two versions
of each linking task where V1 has dataset samples that are less dense than V2.
Additionally, there is a small and large integration task with each dataset
consisting of 15K and 100K entities, with the gold standard for each task
containing 15K and 100K matches respectively. It is worth mentioning that two
of the ER tasks have a cross-lingual character with samples from the English
(EN), French (FR) and German (DE) versions of DBpedia. The datasets show a
variety of entity types. For example, the 100K version of D-W (V2) has 91
different values for relationship triples with the property dbo:type in the
DBpedia KG. These entity types have a wide range from movies and persons to
geographical locations and corporations. Due to the sampling done by Sun et
al. the type information is missing for most entities, making the real variety
of entity types much larger. More details about the datasets are provided in
Table II.
## V Evaluation
We discuss our results on the presented datasets, starting with a description
of the experiment setup, followed by the results on the shallow and rich graph
datasets, with a focus on investigating whether the use of attribute
similarities in combination with knowledge graph embeddings is beneficial for
the respective setting. Furthermore, we compare our approach with state-of-the-
art frameworks and report the runtimes of our approach.
### V-A Setup
For the evaluation we use 5-fold cross-validation with a 7-2-1 split in
accordance with [1]: for each dataset pair, the set of reference entity matches
is divided into 70% testing, 20% training and 10% validation. For each split
we sample negative examples to create an equal share of positive and negative
examples. The entire process is repeated 5 times to create 5 different folds.
For the OpenEA datasets the graph embeddings were computed using the
hyperparameters given by the study of [1]. For all other datasets, apart from
dblp-scholar, the *-15K parameter sets were used. For dblp-scholar, the *-100K
parameters were applied as the scholar dataset contains more than 100,000
entities. For the classifiers, a random forest (RF) was used with 500
estimators and a multi-layer perceptron (MLP) with two hidden layers of sizes
200 and 20. Furthermore, the MLP was trained using the Adam [35] optimizer
with $\alpha=10^{-5}$.
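The following sketch shows these configurations; using scikit-learn is an assumption, and mapping the reported $\alpha$ to MLPClassifier's alpha parameter (its L2 regularization term) is likewise an assumption:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

# Random forest with 500 estimators, as stated above
rf = RandomForestClassifier(n_estimators=500)
# MLP with hidden layers of sizes 200 and 20, trained with Adam
mlp = MLPClassifier(hidden_layer_sizes=(200, 20), solver="adam", alpha=1e-5)
```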
### V-B Shallow Graph Datasets
The results for the shallow datasets are displayed in Table III. We display
the average rank of each combination of input variant, embedding approach and
classifier, which is a number between 1 and 14 (since there are 14 possible
combinations); a rank of 1 would mean this combination achieves the best result
for every dataset.
As expected, there is too little information in the shallow datasets to produce
good results with the embeddings alone. We can also see that
$\textsc{EAGER}_{A\mathbin{\|}E}$ and $\textsc{EAGER}_{A}$ perform similarly.
However, there is an apparent difference in performance between the movie
datasets and the others. Out of the three embedding approaches only
$\textsc{EAGER}_{A\mathbin{\|}E}$ with MultiKE performs better than
$\textsc{EAGER}_{A}$ on the movie datasets. For the classical ER benchmarks
using only $\mathcal{V}_{A}$ as input for either RF or MLP gives the best
results overall. While MultiKE performs the second-worst for
$\textsc{EAGER}_{E}$, it gives the best results when used in
$\textsc{EAGER}_{A\mathbin{\|}E}$. Averaged over all shallow test datasets,
$\textsc{EAGER}_{A}$ performs best and the Random Forest (RF) classifier was
the most effective classifier reaching the lowest average ranks.
Table III: Averaged F-measure on test set of shallow graph datasets. The best value in a row is highlighted.
Dataset | $\textsc{EAGER}_{A\mathbin{\|}E}$ | $\textsc{EAGER}_{A}$ | $\textsc{EAGER}_{E}$
---|---|---|---
BootEA | MultiKE | RDGCN | BootEA | MultiKE | RDGCN
MLP | RF | MLP | RF | MLP | RF | MLP | RF | MLP | RF | MLP | RF | MLP | RF
abt-buy | 0.885 | 0.952 | 0.958 | 0.952 | 0.925 | 0.920 | 0.968 | 0.965 | 0.623 | 0.648 | 0.383 | 0.655 | 0.650 | 0.661
amazon-google | 0.751 | 0.798 | 0.789 | 0.760 | 0.784 | 0.768 | 0.808 | 0.817 | 0.631 | 0.646 | 0.571 | 0.645 | 0.638 | 0.665
dblp-acm | 0.995 | 0.997 | 0.997 | 0.997 | 0.995 | 0.997 | 0.997 | 0.997 | 0.579 | 0.614 | 0.617 | 0.688 | 0.559 | 0.598
dblp-scholar | 0.993 | 0.997 | 0.994 | 0.997 | 0.995 | 0.996 | 0.997 | 0.998 | 0.562 | 0.588 | 0.537 | 0.576 | 0.547 | 0.571
imdb-tmdb | 0.967 | 0.977 | 0.988 | 0.984 | 0.969 | 0.975 | 0.979 | 0.980 | 0.874 | 0.859 | 0.911 | 0.913 | 0.874 | 0.873
imdb-tvdb | 0.938 | 0.960 | 0.973 | 0.967 | 0.940 | 0.953 | 0.965 | 0.960 | 0.821 | 0.786 | 0.873 | 0.844 | 0.807 | 0.792
tmdb-tvdb | 0.973 | 0.977 | 0.983 | 0.981 | 0.966 | 0.977 | 0.980 | 0.978 | 0.874 | 0.844 | 0.871 | 0.877 | 0.857 | 0.831
Avg Rank | 7.786 | 4.143 | 2.929 | 3.429 | 6.643 | 5.571 | 2.786 | 2.714 | 11.929 | 11.857 | 11.714 | 9.714 | 12.214 | 11.571
Figure 3: Averaged F-measure, Precision and Recall per Type on Movie Datasets
using $\textsc{EAGER}_{A\mathbin{\|}E}$ with MLP
Looking at the movie datasets in more detail as shown in Figure 3, we can see
that there is a difference in performance depending on the entity type. In
most cases, $\textsc{EAGER}_{A\mathbin{\|}E}$ reaches an F-measure of over 90%
for all entity types, showing that the approach is generic and able to achieve
good match quality for multiple heterogeneous entity types. Still, there are
some differences between the entity types. TVShows and Films generally perform
worse than TVEpisodes and Persons, with the precision for Film standing out
negatively. This is especially pronounced in the IMDB-TMDB and IMDB-TVDB
datasets. This might be attributed to different sets of attributes between
those datasets, e.g., as IMDB does not contain full-length descriptions of
films and tv shows whereas TMDB and TVDB do. Interestingly, Films/TVShows with
very dissimilar titles due to different representations of non-English titles
can be matched using the KGEs. For example, the Soviet drama “Defence Counsel
Sedov” has the romanized title “Zashchitnik Sedov” in IMDB, while TMDB has
either the translated “Defence Counsel Sedov” or the Cyrillic “Защитник
Седов”. These entity pairs are correctly matched in the
$\textsc{EAGER}_{A\mathbin{\|}E}$ variant.
To properly compare the performance across all approaches, we
used the statistical analysis presented by Demšar [36] and more specifically
the Python package Autorank [37], which aims to simplify the use of the
methods proposed by Demšar. The performance measurements for each dataset and
classifier are our paired samples. Given that we compare more than two
approaches, simply using hypothesis tests for all pairs would result in a
multiple testing problem, which means the probability of accidentally
reporting a significant difference would be greatly inflated. We therefore use
the procedure recommended by Demšar: first we test if the average ranks of the
algorithms are significantly different using the Friedman test. If this is the
case, we perform a Nemenyi test to compare all classifiers and input combinations.
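A sketch of this analysis with Autorank [37]; the DataFrame layout (rows = datasets, columns = approach variants) follows the package's convention, and the three example columns are taken from Table III:

```python
import pandas as pd
from autorank import autorank, plot_stats

# F-measures per dataset (rows) for three example variants (columns)
f1 = pd.DataFrame({
    "EAGER_A||E (MultiKE, MLP)": [0.958, 0.789, 0.997, 0.994, 0.988, 0.973, 0.983],
    "EAGER_A (MLP)":             [0.968, 0.808, 0.997, 0.997, 0.979, 0.965, 0.980],
    "EAGER_E (BootEA, RF)":      [0.648, 0.646, 0.614, 0.588, 0.859, 0.786, 0.844],
}, index=["abt-buy", "amazon-google", "dblp-acm", "dblp-scholar",
          "imdb-tmdb", "imdb-tvdb", "tmdb-tvdb"])

result = autorank(f1, alpha=0.05, verbose=False)  # Friedman + post-hoc test
plot_stats(result)  # critical distance diagram as in Figures 4 and 5
```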
The null hypothesis of the Friedman test can be rejected ($p=1.60\times
10^{-13}$). A Nemenyi test is therefore performed and we present the critical
distance diagram in Figure 4.
Figure 4: Critical distance diagram of Nemenyi test for shallow graph
datasets, connected groups are not significantly different (at $p=0.05$)
The axis shows the average rank of each input/embedding combination. Groups
that are connected are not significantly different at the significance level
of 0.05; the level is corrected internally to ensure that all comparisons
together fulfill this. Approaches whose difference in average rank exceeds
the critical distance (CD) are significantly different.
While $\textsc{EAGER}_{A}$ performs the best, there is no significant
difference to $\textsc{EAGER}_{A\mathbin{\|}E}$. What we can see is that
$\textsc{EAGER}_{E}$ is significantly outperformed by all other approaches.
### V-C Rich Graph Datasets
Table IV: Averaged F-measure on test set of rich graph datasets. The best value in a row is highlighted. For average rank the best 3 values of the compared ranks are highlighted.
Dataset | $\textsc{EAGER}_{A\mathbin{\|}E}$ | $\textsc{EAGER}_{A}$ | $\textsc{EAGER}_{E}$
---|---|---|---
BootEA | MultiKE | RDGCN | BootEA | MultiKE | RDGCN
MLP | RF | MLP | RF | MLP | RF | MLP | RF | MLP | RF | MLP | RF | MLP | RF
15K | D-W(V1) | 0.775 | 0.668 | 0.881 | 0.858 | 0.805 | 0.842 | 0.827 | 0.828 | 0.764 | 0.678 | 0.853 | 0.871 | 0.718 | 0.707
D-W(V2) | 0.934 | 0.841 | 0.945 | 0.918 | 0.897 | 0.890 | 0.868 | 0.870 | 0.938 | 0.847 | 0.939 | 0.942 | 0.808 | 0.796
D-Y(V1) | 0.870 | 0.775 | 0.986 | 0.982 | 0.974 | 0.986 | 0.972 | 0.971 | 0.837 | 0.746 | 0.952 | 0.941 | 0.947 | 0.953
D-Y(V2) | 0.983 | 0.908 | 0.995 | 0.993 | 0.977 | 0.991 | 0.978 | 0.978 | 0.975 | 0.888 | 0.973 | 0.971 | 0.947 | 0.960
EN-DE(V1) | 0.923 | 0.852 | 0.986 | 0.984 | 0.966 | 0.976 | 0.947 | 0.945 | 0.891 | 0.798 | 0.957 | 0.950 | 0.937 | 0.955
EN-DE(V2) | 0.970 | 0.918 | 0.992 | 0.990 | 0.968 | 0.978 | 0.956 | 0.955 | 0.946 | 0.875 | 0.961 | 0.958 | 0.934 | 0.956
EN-FR(V1) | 0.868 | 0.736 | 0.978 | 0.973 | 0.950 | 0.963 | 0.922 | 0.920 | 0.806 | 0.709 | 0.952 | 0.942 | 0.907 | 0.935
EN-FR(V2) | 0.965 | 0.876 | 0.991 | 0.989 | 0.963 | 0.977 | 0.937 | 0.936 | 0.942 | 0.875 | 0.977 | 0.978 | 0.921 | 0.948
100K | D-W(V1) | 0.873 | 0.850 | 0.887 | 0.862 | 0.768 | 0.774 | 0.810 | 0.811 | 0.868 | 0.820 | 0.850 | 0.871 | 0.645 | 0.556
D-W(V2) | 0.962 | 0.927 | 0.951 | 0.923 | 0.756 | 0.792 | 0.845 | 0.844 | 0.959 | 0.916 | 0.917 | 0.957 | 0.610 | 0.609
D-Y(V1) | 0.980 | 0.958 | 0.990 | 0.987 | 0.991 | 0.993 | 0.975 | 0.975 | 0.959 | 0.942 | 0.949 | 0.954 | 0.963 | 0.968
D-Y(V2) | 0.993 | 0.965 | 0.995 | 0.990 | 0.983 | 0.989 | 0.976 | 0.975 | 0.979 | 0.958 | 0.953 | 0.978 | 0.921 | 0.968
EN-DE(V1) | 0.943 | 0.907 | 0.989 | 0.982 | 0.954 | 0.961 | 0.944 | 0.943 | 0.901 | 0.859 | 0.956 | 0.947 | 0.872 | 0.891
EN-DE(V2) | 0.965 | 0.933 | 0.993 | 0.988 | 0.926 | 0.932 | 0.943 | 0.941 | 0.934 | 0.890 | 0.970 | 0.969 | 0.779 | 0.847
EN-FR(V1) | 0.925 | 0.867 | 0.981 | 0.969 | 0.947 | 0.938 | 0.920 | 0.919 | 0.866 | 0.819 | 0.948 | 0.943 | 0.866 | 0.894
EN-FR(V2) | 0.968 | 0.899 | 0.989 | 0.979 | 0.897 | 0.901 | 0.925 | 0.923 | 0.925 | 0.877 | 0.959 | 0.968 | 0.742 | 0.806
Avg Rank | 5.938 | 11.094 | 1.344 | 3.000 | 6.812 | 5.375 | 7.688 | 8.281 | 8.625 | 12.625 | 6.125 | 5.656 | 11.969 | 10.469
The experiment results for the rich graph datasets are shown in Table IV. It
is evident that $\textsc{EAGER}_{A\mathbin{\|}E}$ achieves the best results.
Overall, it solves the diverse match tasks, including multi-lingual KGs
and larger KGs, very well, with F-measure values between 96% and 99% in most
cases. As before, MultiKE performs the best out of all graph embedding
approaches, especially in conjunction with the MLP classifier.
Comparing the performances between the datasets, we see that the results on
the variants with richer graph structure (V2) are better than on V1 for the
respective datasets. There is also a difference when contrasting the different
sizes of the datasets. While $\textsc{EAGER}_{A\mathbin{\|}E}$ with BootEA and
MultiKE generally seem to achieve better results on the larger 100K datasets
compared to their 15K counterparts, this is less true for RDGCN.
Again, we use the statistical procedure to make robust statements about
performance. For our rich graph datasets we can reject the null hypothesis
($p=2.80\times 10^{-20}$) of the Friedman test that the average ranks of all
approaches are equal. We therefore proceed and perform a Nemenyi test
to determine which variants performed significantly different. The results are
shown in Figure 5.
Figure 5: Critical distance diagram of Nemenyi test for rich graph datasets,
connected groups are not significantly different (at $p=0.05$)
Generally, using $\textsc{EAGER}_{A\mathbin{\|}E}$ with any embedding approach
performs better than $\textsc{EAGER}_{E}$; however, for BootEA this difference
is not significant. We can see that $\textsc{EAGER}_{A\mathbin{\|}E}$ with
MultiKE is significantly better than all other variants. This is evidence that
the combination of attribute similarities and embeddings is preferable to
using attribute similarities or embeddings on their own for the task of entity
resolution in rich knowledge graphs. This is even true for embedding
techniques that already rely on attribute information such as MultiKE.
### V-D Training Time
Table V: Averaged training times (in seconds) on rich graph datasets of size 100K.
Dataset | $\textsc{EAGER}_{E}$ | $\textsc{EAGER}_{A}$ | $\textsc{EAGER}_{A\mathbin{\|}E}$
---|---|---|---
MLP | RF | MLP | RF | MLP | RF
D-W(V1) | 554.30 | 967.14 | 4,165.16 | 3,428.90 | 3,948.18 | 4,082.59
D-W(V2) | 531.79 | 942.87 | 3,083.77 | 2,603.19 | 3,130.56 | 3,136.22
D-Y(V1) | 380.90 | 809.90 | 938.38 | 242.76 | 570.24 | 699.73
D-Y(V2) | 335.37 | 822.18 | 954.18 | 233.36 | 503.07 | 658.02
EN-DE(V1) | 451.90 | 900.58 | 1,420.74 | 1,053.92 | 1,668.12 | 1,688.45
EN-DE(V2) | 334.95 | 898.91 | 1,064.44 | 775.35 | 1,279.61 | 1,365.34
EN-FR(V1) | 456.43 | 858.63 | 2,183.16 | 1,755.25 | 2,263.96 | 2,360.49
EN-FR(V2) | 377.26 | 819.43 | 1,642.92 | 1,281.80 | 1,870.10 | 1,806.68
Experiments were run on a cluster provided by the Leipzig University Computing
Center, which comprises several nodes with AMD EPYC 32-core processors
and up to 512GB RAM. Each experiment was run on a single node. To illustrate the
relative runtimes of the considered variants we focus on the bigger KGs with
100K entities. Table V shows the running times for training on each 100K
dataset, averaged over all 5 folds. The full training times are mostly
dominated by the pre-processing of the attribute similarities. This pre-
processing is not necessary for $\textsc{EAGER}_{E}$ and hence training time
is up to about 8 times faster for MLP. On average, training times for
$\textsc{EAGER}_{A\mathbin{\|}E}$ are slightly longer than for
$\textsc{EAGER}_{A}$ due to an increase in the dimensionality of the input.
### V-E Comparison with other approaches
We compare our approach to the state-of-the-art ER frameworks Magellan [2] and
DeepMatcher [3]. Magellan is an ER framework that allows the use of ML
classifiers for ER; we present its best-performing classifiers, XGBoost [38]
and Random Forest (RF). DeepMatcher provides several deep learning solutions
for ER; we employ the hybrid variant, which uses a bidirectional recurrent
neural network with a decomposable attention-based attribute summarization
module. To avoid any decrease in performance due to blocking, we provide both
frameworks with the respective training and test entity mappings directly. Because
such a setup is not possible for the approaches discussed in [1], which mostly
use resolution strategies based on nearest neighbors, we cannot fairly compare
our approach with theirs and therefore refrain from this comparison here.
#### V-E1 Shallow graph datasets
We start with the comparison for the shallow datasets. Since both Magellan and
DeepMatcher expect matched schemata, we align the attributes by hand where
necessary. We report F-measure (fm), precision (prec) and recall (rec) averaged
over the 5 folds, with the variance over the folds, in Table VI. For the
comparison with other approaches we use $\textsc{EAGER}_{A\mathbin{\|}E}$ and
for brevity we will refer to it simply as EAGER.
Table VI: Averaged F-measure, precision and recall on test set of shallow
graph datasets. The best F-measure in a row is highlighted
Dataset | EAGER MLP | EAGER RF | DeepMatcher | Magellan XGBoost | Magellan RF
---|---|---|---|---|---
fm | prec | rec | fm | prec | rec | fm | prec | rec | fm | prec | rec | fm | prec | rec
abt-buy | 0.958 | 0.975 | 0.942 | 0.952 | 0.963 | 0.941 | 0.930 | 0.885 | 0.980 | 0.974 | 0.971 | 0.978 | 0.977 | 0.975 | 0.979
amazon-google | 0.789 | 0.794 | 0.787 | 0.760 | 0.804 | 0.722 | 0.743 | 0.673 | 0.836 | 0.724 | 0.737 | 0.712 | 0.727 | 0.766 | 0.693
dblp-acm | 0.997 | 0.999 | 0.994 | 0.997 | 0.999 | 0.995 | 0.990 | 0.980 | 0.999 | 0.998 | 0.999 | 0.997 | 0.999 | 1.000 | 0.998
dblp-scholar | 0.994 | 0.997 | 0.990 | 0.997 | 0.999 | 0.996 | 0.994 | 0.992 | 0.997 | 0.997 | 0.997 | 0.997 | 0.998 | 0.998 | 0.997
imdb-tmdb | 0.988 | 0.985 | 0.992 | 0.984 | 0.978 | 0.990 | 0.984 | 0.971 | 0.997 | 0.995 | 0.998 | 0.993 | 0.997 | 0.997 | 0.996
imdb-tvdb | 0.973 | 0.959 | 0.987 | 0.967 | 0.940 | 0.994 | 0.987 | 0.979 | 0.996 | 0.993 | 0.992 | 0.993 | 0.994 | 0.991 | 0.996
tmdb-tvdb | 0.983 | 0.989 | 0.977 | 0.981 | 0.991 | 0.971 | 0.988 | 0.978 | 0.998 | 0.993 | 0.992 | 0.994 | 0.995 | 0.993 | 0.997
Avg Rank | 3.286 | 3.786 | 4.000 | 2.500 | 1.429
All frameworks perform very well, with almost all F-measure values over $0.95$
except on amazon-google. Magellan achieves higher F-measures than EAGER on all
datasets except amazon-google. The statistical analysis shows a significant
difference ($p=0.012$ using the Friedman test). However, looking at the
critical distance diagram in Figure 6, we can see that only DeepMatcher and
EAGER RF are significantly outperformed by Magellan RF. There is no significant
difference between EAGER MLP and Magellan RF, but EAGER does not depend on the
provision of a schema matching.
Figure 6: Critical distance diagram of Nemenyi test for shallow graph
datasets, connected groups are not significantly different (at $p=0.05$)
#### V-E2 Rich graph datasets
For the rich graph datasets, the heterogeneity of the different KGs was a
problem for Magellan and DeepMatcher, since both expect perfectly matched
schemata. Aligning schemata by hand was manageable for the smaller shallow
datasets but not for the rich KGs. In order to use Magellan and DeepMatcher on
the rich graph datasets, we therefore did the same as for EAGER and
concatenated all entity attributes into a single attribute.
Table VII: Averaged F-measure, Precision and Recall on test set of rich graph
datasets. The best F-measure value in a row is highlighted
Dataset | EAGER MLP | EAGER RF | DeepMatcher | Magellan XGBoost | Magellan RF
---|---|---|---|---|---
fm | prec | rec | fm | prec | rec | fm | prec | rec | fm | prec | rec | fm | prec | rec
15K | D-W(V1) | 0.881 | 0.990 | 0.794 | 0.858 | 0.991 | 0.756 | 0.876 | 0.854 | 0.899 | 0.837 | 0.896 | 0.786 | 0.822 | 0.849 | 0.798
D-W(V2) | 0.945 | 0.993 | 0.903 | 0.918 | 0.992 | 0.854 | 0.904 | 0.895 | 0.914 | 0.863 | 0.913 | 0.818 | 0.848 | 0.867 | 0.830
D-Y(V1) | 0.986 | 1.000 | 0.972 | 0.982 | 1.000 | 0.964 | 0.980 | 0.976 | 0.984 | 0.973 | 0.975 | 0.970 | 0.972 | 0.973 | 0.971
D-Y(V2) | 0.995 | 1.000 | 0.991 | 0.993 | 0.999 | 0.987 | 0.987 | 0.984 | 0.990 | 0.975 | 0.977 | 0.972 | 0.974 | 0.975 | 0.974
EN-DE(V1) | 0.986 | 0.997 | 0.974 | 0.984 | 0.996 | 0.971 | 0.968 | 0.972 | 0.964 | 0.966 | 0.990 | 0.944 | 0.960 | 0.977 | 0.945
EN-DE(V2) | 0.992 | 0.997 | 0.988 | 0.990 | 0.997 | 0.982 | 0.975 | 0.973 | 0.977 | 0.973 | 0.992 | 0.955 | 0.970 | 0.985 | 0.955
EN-FR(V1) | 0.978 | 0.996 | 0.960 | 0.973 | 0.994 | 0.952 | 0.954 | 0.950 | 0.959 | 0.953 | 0.984 | 0.924 | 0.951 | 0.979 | 0.924
EN-FR(V2) | 0.991 | 0.997 | 0.984 | 0.989 | 0.996 | 0.982 | 0.968 | 0.965 | 0.972 | 0.971 | 0.993 | 0.949 | 0.970 | 0.993 | 0.949
100K | D-W(V1) | 0.887 | 0.994 | 0.801 | 0.862 | 0.989 | 0.764 | 0.925 | 0.905 | 0.946 | 0.817 | 0.904 | 0.746 | 0.815 | 0.887 | 0.754
D-W(V2) | 0.951 | 0.991 | 0.915 | 0.923 | 0.988 | 0.866 | 0.929 | 0.912 | 0.947 | 0.834 | 0.922 | 0.761 | 0.830 | 0.892 | 0.775
D-Y(V1) | 0.990 | 1.000 | 0.981 | 0.987 | 1.000 | 0.976 | 0.992 | 0.991 | 0.993 | 0.983 | 0.991 | 0.976 | 0.982 | 0.986 | 0.979
D-Y(V2) | 0.995 | 1.000 | 0.991 | 0.990 | 1.000 | 0.982 | 0.993 | 0.992 | 0.995 | 0.985 | 0.987 | 0.983 | 0.984 | 0.984 | 0.983
EN-DE(V1) | 0.989 | 0.997 | 0.981 | 0.982 | 0.997 | 0.968 | 0.972 | 0.974 | 0.971 | 0.967 | 0.991 | 0.945 | 0.966 | 0.987 | 0.946
EN-DE(V2) | 0.993 | 0.997 | 0.990 | 0.988 | 0.997 | 0.980 | 0.977 | 0.975 | 0.980 | 0.969 | 0.993 | 0.945 | 0.966 | 0.985 | 0.947
EN-FR(V1) | 0.981 | 0.996 | 0.966 | 0.969 | 0.995 | 0.946 | 0.956 | 0.958 | 0.955 | 0.947 | 0.988 | 0.908 | 0.945 | 0.984 | 0.910
EN-FR(V2) | 0.989 | 0.994 | 0.983 | 0.979 | 0.992 | 0.967 | 0.968 | 0.966 | 0.970 | 0.963 | 0.991 | 0.937 | 0.961 | 0.987 | 0.937
Avg Rank | 1.125 | 2.312 | 2.688 | 3.938 | 4.938
We can see in Table VII that EAGER using MLP outperforms all other approaches
except on D-W (V1) and D-Y (V1) for the 100K sizes, where DeepMatcher performs
best. Magellan is outperformed on all datasets by EAGER and DeepMatcher.
In contrast to the smaller datasets, the larger number of training examples
seems especially beneficial for DeepMatcher. Using our statistical analysis,
we can reject the null hypothesis of the Friedman test ($p=2.21\times
10^{-11}$) and therefore show the results of the Nemenyi test in Figure 7.
Figure 7: Critical distance diagram of Nemenyi test for rich graph datasets,
connected groups are not significantly different (at $p=0.05$)
It is apparent that our approach significantly outperforms Magellan, while
EAGER using MLP also significantly outperforms DeepMatcher overall.
## VI Conclusion & Future work
We explored the combination of knowledge graph embeddings and attribute
similarities for entity resolution in knowledge graphs. These approaches are
included in a new learning-based ER system called EAGER. We tested our
approach on a range of different datasets and showed that using a combination
of both graph embeddings and attribute similarities generally yields the best
results compared to just using either one. We showed that our approach yields
competitive results that are on par with or significantly outperform state-of-
the-art approaches. The approach is generic and can deal with arbitrary entity
types without prior schema matching.
In the future we will investigate blocking strategies based on both embeddings
and attribute information to improve runtimes and thus scalability to large
datasets. We will also explore whether new property matching schemes like
LEAPME [39] can be utilized for blocking to reduce the high cost in pre-
processing attribute similarities. As training data in practice is often too
small or hard to obtain at all, using alternative learning strategies such as
unsupervised and active learning in this context should be explored.
## References
* [1] Z. Sun, Q. Zhang, W. Hu, C. Wang, M. Chen, F. Akrami, and C. Li, “A benchmarking study of embedding-based entity alignment for knowledge graphs,” _Proceedings of the VLDB Endowment_ , vol. 13, no. 11, pp. 2326–2340, 2020. [Online]. Available: http://www.vldb.org/pvldb/vol13/p2326-sun.pdf
* [2] P. Konda, S. Das, P. S. G. C., A. Doan, A. Ardalan, J. R. Ballard, H. Li, F. Panahi, H. Zhang, J. F. Naughton, S. Prasad, G. Krishnan, R. Deep, and V. Raghavendra, “Magellan: Toward building entity matching management systems,” _Proc. VLDB Endow._ , vol. 9, no. 12, pp. 1197–1208, 2016. [Online]. Available: http://www.vldb.org/pvldb/vol9/p1197-pkonda.pdf
* [3] S. Mudgal, H. Li, T. Rekatsinas, A. Doan, Y. Park, G. Krishnan, R. Deep, E. Arcaute, and V. Raghavendra, “Deep learning for entity matching: A design space exploration,” in _Proceedings of the 2018 International Conference on Management of Data, SIGMOD Conference 2018_. ACM, 2018, pp. 19–34. [Online]. Available: https://doi.org/10.1145/3183713.3196926
* [4] P. Domingos, “Multi-relational record linkage,” in _In Proceedings of the KDD-2004 Workshop on Multi-Relational Data Mining_ , 2004, pp. 31–48.
* [5] M. G. Elfeky, A. K. Elmagarmid, and V. S. Verykios, “Tailor: A record linkage tool box,” in _ICDE_ , 2002.
* [6] J. Volz, C. Bizer, M. Gaedke, and G. Kobilarov, “Silk - a link discovery framework for the web of data,” in _LDOW_ , 2009.
* [7] M. A. Sherif, A.-C. N. Ngomo, and J. Lehmann, “Wombat - a generalization approach for automatic link discovery,” in _ESWC_ , 2017.
* [8] S. Sarawagi and A. Bhamidipaty, “Interactive deduplication using active learning,” in _KDD_ , 2002.
* [9] A. K. Elmagarmid, P. G. Ipeirotis, and V. S. Verykios, “Duplicate record detection: A survey,” _IEEE Transactions on Knowledge and Data Engineering_ , vol. 19, pp. 1–16, 2007.
* [10] M. Nentwig, M. Hartung, A.-C. N. Ngomo, and E. Rahm, “A survey of current link discovery frameworks,” _Semantic Web_ , vol. 8, pp. 419–436, 2017.
* [11] P. Christen, _Data Matching: Concepts and Techniques for Record Linkage, Entity Resolution, and Duplicate Detection_. Springer Publishing Company, Incorporated, 2012.
* [12] A.-C. N. Ngomo and K. Lyko, “Unsupervised learning of link specifications: deterministic vs. non-deterministic,” in _OM_ , 2013.
* [13] A. Nikolov, M. d’Aquin, and E. Motta, “Unsupervised learning of link discovery configuration,” in _ESWC_ , 2012.
* [14] R. Isele and C. Bizer, “Learning expressive linkage rules using genetic programming,” _PVLDB_ , vol. 5, pp. 1638–1649, 2012.
* [15] A.-C. N. Ngomo and K. Lyko, “Eagle: Efficient active learning of link specifications using genetic programming,” in _ESWC_ , 2012.
* [16] M. Ebraheem, S. Thirumuruganathan, S. R. Joty, M. Ouzzani, and N. Tang, “Deeper - deep entity resolution,” _CoRR_ , vol. abs/1710.00597, 2017. [Online]. Available: http://arxiv.org/abs/1710.00597
* [17] I. Bhattacharya and L. Getoor, “Collective entity resolution in relational data,” _IEEE Data Eng. Bull._ , vol. 29, pp. 4–12, 2006.
* [18] S. Lacoste-Julien, K. Palla, A. Davies, G. Kasneci, T. Graepel, and Z. Ghahramani, “Sigma: simple greedy matching for aligning large knowledge bases,” in _KDD_ , 2013.
* [19] M. Pershina, M. Yakout, and K. Chakrabarti, “Holistic entity matching across knowledge graphs,” _2015 IEEE International Conference on Big Data (Big Data)_ , pp. 1585–1590, 2015.
* [20] L. Zhu, M. Ghasemi-Gol, P. A. Szekely, A. Galstyan, and C. A. Knoblock, “Unsupervised entity resolution on multi-type graphs,” in _ISWC_ , 2016.
* [21] M. Ali, M. Berrendorf, C. T. Hoyt, L. Vermue, M. Galkin, S. Sharifzadeh, A. Fischer, V. Tresp, and J. Lehmann, “Bringing Light Into the Dark: A Large-scale Evaluation of Knowledge Graph Embedding Models Under a Unified Framework,” pp. 1–40, 2020. [Online]. Available: http://arxiv.org/abs/2006.13365
* [22] A. Bordes, N. Usunier, A. Garcia-Durán, J. Weston, and O. Yakhnenko, “Translating embeddings for modeling multi-relational data,” in _Advances in Neural Information Processing Systems_ , 2013.
* [23] Z. Wang, J. Zhang, J. Feng, and Z. Chen, “Knowledge graph embedding by translating on hyperplanes,” 2014. [Online]. Available: https://www.aaai.org/ocs/index.php/AAAI/AAAI14/paper/view/8531
* [24] Y. Lin, Z. Liu, M. Sun, Y. Liu, and X. Zhu, “Learning entity and relation embeddings for knowledge graph completion,” 2015. [Online]. Available: https://www.aaai.org/ocs/index.php/AAAI/AAAI15/paper/view/9571
* [25] G. Ji, S. He, L. Xu, K. Liu, and J. Zhao, “Knowledge graph embedding via dynamic mapping matrix,” in _Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)_. Beijing, China: Association for Computational Linguistics, Jul. 2015, pp. 687–696. [Online]. Available: https://www.aclweb.org/anthology/P15-1067
* [26] T. Trouillon, J. Welbl, S. Riedel, Éric Gaussier, and G. Bouchard, “Complex embeddings for simple link prediction,” 2016.
* [27] Z. Sun, Z.-H. Deng, J.-Y. Nie, and J. Tang, “Rotate: Knowledge graph embedding by relational rotation in complex space,” 2019.
* [28] T. N. Kipf and M. Welling, “Semi-supervised classification with graph convolutional networks,” 2016.
* [29] M. Schlichtkrull, T. N. Kipf, P. Bloem, R. van den Berg, I. Titov, and M. Welling, “Modeling relational data with graph convolutional networks,” in _The Semantic Web_. Cham: Springer International Publishing, 2018, pp. 593–607.
* [30] Z. Sun, W. Hu, Q. Zhang, and Y. Qu, “Bootstrapping entity alignment with knowledge graph embedding,” _IJCAI International Joint Conference on Artificial Intelligence_ , vol. 2018-July, pp. 4396–4402, 2018.
* [31] Q. Zhang, Z. Sun, W. Hu, M. Chen, L. Guo, and Y. Qu, “Multi-view knowledge graph embedding for entity alignment,” in _IJCAI_ , vol. 2019-Augus. IJCAI, jun 2019, pp. 5429–5435. [Online]. Available: http://arxiv.org/abs/1906.02390
* [32] T. Mikolov, K. Chen, G. Corrado, and J. Dean, “Efficient estimation of word representations in vector space,” 2013.
* [33] Y. Wu, X. Liu, Y. Feng, Z. Wang, R. Yan, and D. Zhao, “Relation-aware entity alignment for heterogeneous knowledge graphs,” _Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence_ , Aug 2019. [Online]. Available: http://dx.doi.org/10.24963/ijcai.2019/733
* [34] H. Köpcke, A. Thor, and E. Rahm, “Evaluation of entity resolution approaches on real-world match problems,” _Proceedings of the VLDB Endowment_ , 2010.
* [35] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” 2017.
* [36] J. Demšar, “Statistical comparisons of classifiers over multiple data sets,” _Journal of Machine Learning Research_ , vol. 7, pp. 1–30, 2006.
* [37] S. Herbold, “Autorank: A python package for automated ranking of classifiers,” _Journal of Open Source Software_ , vol. 5, no. 48, p. 2173, 2020. [Online]. Available: https://doi.org/10.21105/joss.02173
* [38] T. Chen and C. Guestrin, “XGBoost: A scalable tree boosting system,” in _Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_ , ser. KDD ’16. ACM, 2016, pp. 785–794. [Online]. Available: http://doi.acm.org/10.1145/2939672.2939785
* [39] D. Ayala, I. Hernández, D. Ruiz, and E. Rahm, “Leapme: Learning-based property matching with embeddings,” _arXiv preprint arXiv:2010.01951_ , 2020.
# Let’s Share VMs: Optimal Placement and Pricing across Base Stations in MEC
Systems
Marie Siew†, Kun Guo†, Desmond Cai§, Lingxiang Li∗, Tony Q.S. Quek†
This work was supported in part by the National Natural Science Foundation of
China under Grants 61901528, 62001254 and 61771263, and in part by the Hunan
Natural Science Foundation under Grant 2020JJ5769. (Corresponding author: Kun
Guo.)
†Information Systems Technology and Design Pillar, Singapore University of
Technology and Design
§Institute of High Performance Computing, Singapore
$*$University of Electronic Science and Technology of China, China
<EMAIL_ADDRESS>, <EMAIL_ADDRESS>, desmond-<EMAIL_ADDRESS>, <EMAIL_ADDRESS>, <EMAIL_ADDRESS>
###### Abstract
In mobile edge computing (MEC) systems, users offload computationally
intensive tasks to edge servers at base stations. However, with unequal demand
across the network, there might be excess demand at some locations and
underutilized resources at other locations. To address such load-unbalanced
problem in MEC systems, in this paper we propose virtual machines (VMs)
sharing across base stations. Specifically, we consider the joint VM placement
and pricing problem across base stations to match demand and supply and
maximize revenue at the network level. To make this problem tractable, we
decompose it into master and slave problems. For the placement master problem,
we propose a Markov approximation algorithm, MAP, based on the design of a
continuous-time Markov chain. As for the pricing slave problem, we propose OPA,
an optimal VM pricing auction under which all users are truthful. Furthermore, given
users’ potential untruthful behaviors, we propose an incentive compatible
auction iCAT along with a partitioning mechanism PUFF, for which we prove
incentive compatibility and revenue guarantees. Finally, we combine MAP and
OPA or PUFF to solve the original problem, and analyze the optimality gap.
Simulation results show that collaboration among base stations increases
revenue by up to 50$\%$.
###### Index Terms:
Edge Computing, Network Economics
## I Introduction
Mobile Edge Computing (MEC) is an enabler of exciting new technologies and
applications like deep learning on devices, virtual and augmented reality, and
smart city data analytics. These exciting new technologies and applications
have high computation requirements. MEC enables them by allowing users to
offload computationally intensive tasks to the network edge (e.g., base
stations in cellular networks and access points in WiFi networks), which are
equipped with computing capability by connecting to the edge servers [1]. With
servers placed at the network edge near the end users, wide-area-network (WAN)
delay is avoided, allowing MEC to meet the stringent latency requirements of
delay-sensitive tasks that cloud computing cannot [2].
Unlike cloud computing, the computational resources at the edge server are
limited. Hence optimizing resource allocation in MEC is an important research
question. In particular, demand for computation is uneven across the network,
leading to excess demand in some coverage areas and underutilized resources
at others. In this load-unbalanced scenario, there are users not being served,
and from the network operator’s perspective, resources are not efficiently
utilized and revenue is not maximized. This prompts a global optimization and
organization of resources over the network, to place resources more
effectively in light of the network’s demand pattern.
Virtual machine (VM) migration is perceived as a promising way to solve the
load-unbalanced scenario [3]. There have been works on VM migration in MEC [4,
5, 6, 7, 8]. These works investigate migration at the level of a single user, in
response to user mobility. In contrast, there has been a lack of work from the
global perspective. To this end, we propose the idea of “Collaborative Base
Stations”, where base stations share their VMs with each other. This involves
the migration of VMs, in accordance with the relative demand across base
stations. In particular, we consider a joint optimization of VM placement and
pricing at base stations to match demand and supply at the network
level. A joint formulation is used because, on one hand, the price at one base
station has an impact on users’ demand, which affects the VM placements. On
the other hand, VM placement determines the resource supply at one base
station. This way, users’ demand will be satisfied as much as possible and the
revenue across the network is maximized.
However, some difficulties arise when solving the formulated joint VM
Migration and Pricing for Profit maximization problem (MPP). Firstly, there is
a sophisticated coupling of the price and VM placement variables, making it
difficult to solve MPP directly. Secondly, MPP is a combinatorial optimization
problem, with the number of VMs deployed at each base station being an integer.
It can become intractable as the number of base stations and the
total number of VMs deployed at the edge increase. Thirdly, the pricing at
one base station is affected by the demand and bid information reported by the
user. Users’ potential untruthful behaviors make pricing at base stations
challenging.
To tackle these difficulties, we first use primal decomposition to decouple
the variables, decomposing MPP into the slave problem NP (the normalized
pricing problem) and the master problem VP (the VM placement problem). Next, we propose an
online Markov approximation enabled algorithm which solves the combinatorial
VP in a distributed manner. This helps to deal with the potential
intractability when the problem size gets large. It does so by modelling the
different VM configurations as states of a Continuous Time Markov Chain
(CTMC). The VM migrations happen according to the transition rate of the CTMC,
which is in turn dependent on the performance level (revenue) of the placement
configurations. How is the revenue of the VM placement configurations
obtained? We solve NP to obtain the optimal revenue for each placement
configuration. Specifically, at each base station we conduct either OPA - the
Optimal Pricing Auction, or iCAT - an incentive CompAtible Truthful auction,
which ensures users are truthful. iCAT guarantees the revenue $R$, when $R$ is
less than or equal to the optimal. To successfully estimate $R$, we further
present a user partitioning mechanism. The results of the auction will be fed
back to the base station and network operator, directly influencing the
transition rates of the CTMC.
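As a toy sketch of this idea (the actual MAP algorithm with its CTMC transition rates is specified in Section IV; this discrete-time, Metropolis-style step only illustrates how the chain's stationary distribution $p(f)\propto\exp(\beta\,U(f))$ concentrates on high-revenue placements, and all names are illustrative):

```python
import math
import random

def markov_approx_step(config, neighbors, revenue, beta=5.0):
    """One discretized jump of the configuration chain: propose a random
    neighboring VM placement and accept it with a probability that favors
    higher revenue, so the chain settles on near-optimal configurations."""
    candidate = random.choice(neighbors(config))
    delta = revenue(candidate) - revenue(config)  # revenue from solving NP
    if delta >= 0 or random.random() < math.exp(beta * delta):
        return candidate
    return config
```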
Our contributions are summarized as follows:
* •
To deal with unequal demand across the MEC coverage areas, we formulate a
joint VM migration and pricing problem across base stations to match demand
and supply at the network level. This works towards ensuring that user demand
is met, resource placement is optimized globally, and the operator’s revenue
is maximized.
* •
Due to 1) the combinatorial nature of the problem, 2) the coupling of price
and placement variables, and 3) users having the incentive to hide their true
valuations, we use primal decomposition to decompose the problem into a master
and slave problem. For the master VM placement problem, we present MAP, a
Markov approximation-enabled algorithm which solves the combinatorial problem
in a distributed manner at individual base stations.
* •
To solve the pricing problem, we present an optimal pricing auction OPA, and
prove that it is optimal. In addition, as users might have an incentive to hide
their true valuations, we present an incentive compatible auction iCAT, prove
that it is dominant strategy incentive compatible and that its revenue is $R$,
when $R$ is less than or equal to the optimal. To estimate the target $R$, we
present a user partitioning algorithm PUFF, and prove that its competitive
ratio is 4.
* •
We present the combined algorithm cMAP which solves our original joint VM
placement and pricing problem with an optimality gap of
$\frac{1}{\beta}\log|\mathbb{V}|$. We then conduct a perturbation analysis and
show that the optimality gap of the stationary distribution caused by
potential perturbations is bounded by $1-\exp(-2\beta\psi_{\text{max}})$,
where $\psi_{\text{max}}$ is the perturbation error.
* •
Finally, we provide simulation results which show that our proposed solution
cMAP (MAP + OPA) converges to optimality, and we analyze the impact of
$\beta$. While the performance of cMAP (MAP + PUFF) is not optimal, it has a
competitive ratio of $4$, as we prove. Results show that our mechanism cMAP
increases revenue by up to $50\%$ compared to the baseline where base stations
do not collaborate and VMs are not migrated.
The rest of this paper is organized as follows. In Section II, we review
related work. The system model and problem formulation are given in Section
III, followed by the optimal VM placement algorithm and the auction pricing
algorithms in Sections IV and V. In Section VI, we give the complete
implementation and analysis. In Section VII we discuss simulation results, and
in Section VIII we conclude.
## II Related Works
There are two mainstream approaches to addressing the load-unbalanced problem
for efficient resource utilization in MEC systems. We organize the related
work accordingly.
The first approach is to optimize users’ task offloading decisions, i.e.
whether or not to offload, and which base station the user offloads to [1, 3].
Here, the computing resources at base stations are fixed and users are handed
over among base stations. For instance, [9, 10, 11] optimized task offloading
to strike a balance between energy consumption and delay from the perspective
of users. [12] studied the static edge server placement problem. [13, 14, 15]
aimed to maximize the network revenue through task offloading.
This paper considers an alternative approach, in which the computing resources
are migrated among base stations to serve the associated users. In particular,
VM migration in MEC has drawn attention in both industry and academia [3, 16,
17]. (Note that while there has been work on VM placement or migration for
revenue maximization in clouds [18, 19], those works are specific to data
center topologies.) Most of the work on VM or service migration in MEC focuses
on improving user experience (e.g. reducing delay) in light of user mobility
[4, 5, 6, 8, 7]. For example, in [4] Plachy et al. proposed a dynamic VM
placement and communication path selection algorithm. In [5], Ksentini et al.
optimized a policy on the service migration decision given the user’s
distance. In [6], Ouyang et al. used Lyapunov optimization to optimize the
placements over different timeslots. Another line of research regarding VM
migration looks at how it can maximize network profit or revenue. In [20], Sun
et al. optimized the tradeoff between maximizing the migration gain and
minimizing the migration cost. In this work, we investigate VM migration in
MEC from a different perspective: at a global level, in light of the network’s
demand patterns, for revenue maximization. We formulate a joint VM migration
and pricing problem because price and migration decisions have a coupled
impact on revenue; to the best of our knowledge, few existing works take this
approach.
Our proposed incentive compatible auctions and their proofs borrow from, but
differ from, the Profit Extractor and Random Sampling Auction in [21, 22]. The
Profit Extractor and Random Sampling Auction cater to fully digital goods with
zero marginal cost of producing the next good, and hence an infinite supply.
In contrast, our network has a limited supply of VMs, which necessitates new
algorithms and proofs.
## III System Model and Problem Formulation
Consider an MEC system with $K$ base stations with heterogeneous computing
capability. Each base station $k$ is equipped with an edge server containing
$v_{k}$ VMs. These are virtualised computing resources to which users can
offload their computationally intensive tasks, at a price of $p_{k}$ per VM.
Since the base stations are controlled by the same network operator, these VMs
can be migrated from one base station to another to optimize the utilization
of resources. This global coordination of resources helps to deal with load-
unbalanced scenarios where there is excess demand in one coverage area and
underutilized resources in another part of the network.
Each base station $k$ has a set of associated users
$U_{k}=\\{1,...,i,...,n_{k}\\}$. Each user $i$ offloads its computationally
intensive tasks to the edge server for auxiliary processing. Different users
require different numbers of VMs, with user $i$ at base station $k$ requiring
$r_{k,i}$ VMs. At base station $k$, different users respond differently to the
price $p_{k}$.
A user $i$ at base station $k$ has willingness to pay $u_{k,i}$, which can be
viewed as the utility the user gets from job computation using the VMs.
Different users have different willingness to pay. For example, a user with a
more urgent job would have a higher willingness to pay than a user whose job
is less urgent, and a user who will execute the job regardless of the price
would have a high willingness to pay (e.g., IoT sensors’ periodic data
analytics). A user decides to execute its job if its payoff
$\pi_{k,i}=u_{k,i}-p_{k}$ is non-negative, i.e. if the utility minus the
payment satisfies $\pi_{k,i}\geq 0$. Therefore, the demand (total number of VM
requests) at base station $k$ is $\sum_{i\in
U_{k}}r_{k,i}\mathds{1}_{\\{u_{k,i}\geq p_{k}\\}}$, where
$\mathds{1}_{\\{u_{k,i}\geq p_{k}\\}}$ is the indicator function representing
whether user $i$’s willingness to pay is at least $p_{k}$.
The demand for VMs at each base station $k$ could be higher or lower than the
supply $v_{k}$. Hence, the network operator would perform a global
optimization of VMs, shifting them to locations with higher demand, to achieve
a higher utilization of resources and to optimize its profit. At the same
time, the network operator sets prices $p_{k}$ differently across coverage
areas, to obtain the highest possible revenue, in light of the varying demand
across the network. The joint Migration and Pricing for Profit maximization
problem (MPP) is as follows:
$\displaystyle\textbf{MPP}:\max_{\textbf{p},\textbf{v}}$
$\displaystyle\quad\sum_{k=1}^{K}p_{k}\min\left\\{\sum_{i\in
U_{k}}r_{k,i}\mathds{1}_{\\{u_{k,i}\geq p_{k}\\}},v_{k}\right\\}$ (1) s.t.
$\displaystyle\quad v_{k}\in\mathbb{Z}_{0}^{+},k=1,...,K$
$\displaystyle\quad\sum_{k=1}^{K}v_{k}=V,$
where $V$ is the total number of VMs placed by the network operator across the
$K$ base stations, and $\mathbb{Z}_{0}^{+}$ denotes the set of non-negative
integers. In MPP, the decision variables are the prices across the various
base stations $\textbf{p}=[p_{1},...,p_{k},...,p_{K}]$, in which each element
is normalized (i.e., $p_{k}\in[0,1]$) without loss of generality, and the VM
placements across the network $\textbf{v}=[v_{1},...,v_{k},...,v_{K}]$. The
objective function is the sum of the revenue obtained across base stations;
each summand is the price multiplied by the number of units of demand that are
met with supply.
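To make the objective concrete, the following minimal Python sketch (ours, not from the paper) evaluates the MPP objective in (1) for given prices p and placement v; the per-user data structure is a hypothetical illustration.

```python
# A minimal sketch (not part of the paper's system) evaluating the MPP
# objective in (1). Users at each base station are hypothetical (r_ki, u_ki)
# pairs: the number of VMs requested and the willingness to pay.

def mpp_revenue(p, v, users):
    """Sum over base stations of price times min(demand, supply)."""
    total = 0.0
    for k in range(len(p)):
        # Demand at BS k: requests of users whose valuation meets the price.
        demand = sum(r for (r, u) in users[k] if u >= p[k])
        total += p[k] * min(demand, v[k])
    return total

# Example: 2 base stations; BS 0 holds 3 VMs, BS 1 holds 1 VM.
users = [[(2, 0.8), (1, 0.3)], [(1, 0.6)]]
print(mpp_revenue(p=[0.5, 0.6], v=[3, 1], users=users))  # 0.5*2 + 0.6*1 = 1.6
```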
Some difficulties arise when solving MPP. Firstly, MPP is a combinatorial
optimization problem, with the $v_{k}$ being integers. It can become
intractable as the number of base stations and the total number of VMs grow;
even if we relax the $v_{k}$ to continuous values, the problem is still
non-convex. Secondly, there is a coupling of p and v in the objective
function, making it difficult to solve MPP directly.
To tackle these difficulties, we first use primal decomposition [23], so that
MPP is decomposed into the slave problem NP (Normalized Pricing) and the
master problem VP (VM Placement). Specifically, fixing v, the slave problem is
as follows:
$\textbf{NP}:\max_{\textbf{p}}\quad\sum_{k=1}^{K}p_{k}\min\left\\{\sum_{i\in
U_{k}}r_{k,i}\mathds{1}_{\\{u_{k,i}\geq p_{k}\\}},v_{k}\right\\}.$ (2)
Given the optimal solution from the slave problem, the master problem updates
the VM migration decisions:
$\displaystyle\textbf{VP}:\max_{\textbf{v}}$
$\displaystyle\quad\Phi^{*}_{\textbf{v}}$ (3) s.t.
$\displaystyle\quad\textbf{v}\in\mathbb{V},$
where $\Phi^{*}_{\textbf{v}}$ is the optimal value of NP for the given v and
$\mathbb{V}=\\{\textbf{v}|\sum_{k=1}^{K}v_{k}=V\bigcap
v_{k}\in\mathbb{Z}_{0}^{+},k=1,...,K\\}$ is the set of all possible VM
placements across the network, with size $|\mathbb{V}|$.
Following this, we propose a distributed Markov approximation implementation
to solve VP. Finally, we propose both optimal and incentive compatible auction
mechanisms to solve NP. We discuss the details in the following sections.
## IV The optimal VM placement algorithm
In this section, we show how to solve the master problem VP. Specifically, we
first reformulate and approximate VP, and then propose a Markov
approximation-enabled algorithm named MAP (Markov Approximation VM Placement).
### IV-A Reformulating and Approximating VP
The master problem VP can be rewritten as
$\displaystyle\textbf{VP-EQ}:\max_{\pi_{\textbf{v}}}$
$\displaystyle\quad\sum_{\textbf{v}\in\mathbb{V}}\pi_{\textbf{v}}\Phi^{*}_{\textbf{v}}$
(4) s.t. $\displaystyle\quad 0\leq\pi_{\textbf{v}}\leq
1,\forall\textbf{v}\in\mathbb{V}$
$\displaystyle\quad\sum_{\textbf{v}\in\mathbb{V}}\pi_{\textbf{v}}=1,$
where $\pi_{\textbf{v}}$ could be seen as the proportion of time spent in
configuration v.
VP is an NP-hard combinatorial optimization problem, and hence challenging to
solve even for a centralized implementation. Even if we relax the $v_{k}$ to
continuous values, the problem is still non-convex. Therefore, we use the log-
sum-exp approximation
$f(\Phi^{*}_{\textbf{v}})=\frac{1}{\beta}\log(\sum_{{\textbf{v}}\in\mathbb{V}}\exp(\beta\Phi^{*}_{\textbf{v}}))$
to approximate VP-EQ. This approximation allows for a distributed
implementation at individual base stations, which is useful when the system
dynamics change - when new users enter, or when users move from one coverage
area to another. The approximation error is upper bounded by
$\frac{1}{\beta}\log|\mathbb{V}|$, as stated in the following proposition (see
[24]):
###### Proposition 1.
For $\beta>0$, we have
$\max_{\emph{{v}}}\Phi^{*}_{\emph{{v}}}\leq\frac{1}{\beta}\log(\sum_{\emph{{v}}\in\mathbb{V}}\exp(\beta\Phi^{*}_{\emph{{v}}}))\leq\max_{\emph{{v}}}\Phi^{*}_{\emph{{v}}}+\frac{1}{\beta}\log|\mathbb{V}|.$
(5)
Therefore,
$\max_{\textbf{v}}\Phi^{*}_{\textbf{v}}=\lim_{\beta\rightarrow\infty}\frac{1}{\beta}\log(\sum_{\textbf{v}\in\mathbb{V}}\exp(\beta\Phi^{*}_{\textbf{v}}))$,
i.e., the approximation tends towards VP-EQ for large $\beta$. As the log-sum-
exp function is a closed and convex function, the conjugate of its conjugate
is itself, and hence we have
$\frac{1}{\beta}\log(\sum_{\emph{{v}}\in\mathbb{V}}\exp(\beta\Phi^{*}_{\emph{{v}}}))=\sum_{\textbf{v}}\pi_{\textbf{v}}\Phi^{*}_{\textbf{v}}-\frac{1}{\beta}\sum_{\textbf{v}}\pi_{\textbf{v}}\log\pi_{\textbf{v}}$,
according to [24, 25]. Therefore the log-sum-exp approximation of VP-EQ is
equivalent to the following problem
$\displaystyle\textbf{VP-approx}:\max_{\pi_{\textbf{v}}}$
$\displaystyle\quad\sum_{\textbf{v}}\pi_{\textbf{v}}\Phi^{*}_{\textbf{v}}-\frac{1}{\beta}\sum_{\textbf{v}}\pi_{\textbf{v}}\log\pi_{\textbf{v}}$
(6) s.t. $\displaystyle\quad 0\leq\pi_{\textbf{v}}\leq
1,\forall\textbf{v}\in\mathbb{V}$
$\displaystyle\quad\sum_{\textbf{v}\in\mathbb{V}}\pi_{\textbf{v}}=1.$
By solving the KKT conditions of VP-approx, the optimal solution is achieved
in Theorem 1.
###### Theorem 1.
The optimal solution to VP-approx is
$\pi_{\emph{{v}}}^{*}=\frac{\exp(\beta\Phi^{*}_{\emph{{v}}})}{\sum_{\emph{{v}}\in\mathbb{V}}\exp(\beta\Phi^{*}_{\emph{{v}}})}.$
(7)
Proof. Let $\lambda$ be the Lagrange multiplier associated with the constraint
$\sum_{\textbf{v}\in\mathbb{V}}\pi_{\textbf{v}}=1$. The Lagrangian of VP-
approx will then be
$L(\pi_{\textbf{v}},\lambda)=\sum_{\textbf{v}\in\mathbb{V}}\pi_{\textbf{v}}\Phi^{*}_{\textbf{v}}-\frac{1}{\beta}\sum_{\textbf{v}\in\mathbb{V}}\pi_{\textbf{v}}\log\pi_{\textbf{v}}-\lambda(\sum_{\textbf{v}\in\mathbb{V}}\pi_{\textbf{v}}-1).$
(8)
Therefore, the KKT conditions will be:
$\displaystyle\Phi_{\textbf{v}}^{*}-\frac{1}{\beta}(\log\pi_{\textbf{v}}^{*}+1)-\lambda=0,\>\forall\textbf{v}\in\mathbb{V},$
(9) $\displaystyle\sum_{\textbf{v}\in\mathbb{V}}\pi_{\textbf{v}}=1,$ (10)
$\displaystyle\lambda\geq 0.$ (11)
Solving the KKT conditions for the primal and dual optimal points
$\pi_{\textbf{v}}^{*}$ and $\lambda^{*}$, we obtain
$\pi_{\textbf{v}}^{*}=\exp(\beta(\Phi_{\textbf{v}}^{*}-\lambda^{*})-1)$. Using
the constraint $\sum_{\textbf{v}\in\mathbb{V}}\pi_{\textbf{v}}=1$, we obtain
$\lambda^{*}=\frac{1}{\beta}\log\sum_{\textbf{v}}\exp(\beta\Phi_{\textbf{v}}^{*}-1)$.
Finally, substituting $\lambda^{*}$ into
$\pi_{\textbf{v}}^{*}=\exp(\beta(\Phi_{\textbf{v}}^{*}-\lambda^{*})-1)$, we
obtain (7). ∎
Therefore, by time-sharing among VM placement configurations according to the
probability distribution $\pi_{\textbf{v}}^{*}$, we are able to solve VP-
approx, and hence VP-EQ, VP, and MPP approximately.
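As an illustration, the following sketch (with assumed revenue values) computes the optimal time-sharing distribution of Theorem 1, which is simply a softmax over the configuration revenues, together with the log-sum-exp value bounded in (5).

```python
# A sketch of Theorem 1's distribution pi_v* = exp(beta*Phi_v*) / Z and of the
# log-sum-exp approximation, computed in a numerically stable way. The revenue
# values phi below are illustrative, not taken from the paper.

import math

def optimal_time_sharing(phi, beta):
    m = max(phi)                                  # shift for numerical stability
    exps = [math.exp(beta * (x - m)) for x in phi]
    z = sum(exps)
    pi_star = [e / z for e in exps]               # softmax over configurations
    lse = m + math.log(z) / beta                  # (1/beta) log sum_v exp(beta*phi_v)
    return pi_star, lse

phi = [1.0, 1.2, 0.7]                             # hypothetical Phi*_v values
for beta in (1, 10, 100):
    pi, lse = optimal_time_sharing(phi, beta)
    print(beta, [round(x, 3) for x in pi], round(lse, 4))
# As beta grows, pi* concentrates on argmax_v Phi*_v and lse approaches
# max(phi), consistent with the 1/beta * log|V| gap of Proposition 1.
```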
### IV-B Solving VP: Algorithm design
The idea consists of designing a Markov chain in which the state space is the
space of possible VM placement configurations $\mathbb{V}$, and the stationary
distribution is $\pi_{\textbf{v}}^{*}$, the optimal solution to VP-approx.
This allows us to solve the joint VM placement and pricing problem MPP with an
optimality gap of $\frac{1}{\beta}\log|\mathbb{V}|$. To help us in the
construction of the Markov chain, we use the following result from [25]:
###### Lemma 1.
For any distribution of the form $\pi_{\textbf{v}}^{*}$ in (7), there exists
at least one continuous-time time-reversible ergodic Markov chain whose
stationary distribution is $\pi_{\textbf{v}}^{*}$.
A continuous-time time-reversible Markov chain (CTMC) is completely defined by
its state space and transition rates. We let the state space be the space of
possible VM placement configurations $\mathbb{V}$. The transition rate
$q_{\textbf{v}\textbf{v}^{\prime}}$ indicates the rate at which the CTMC
shifts from placement configuration v to $\textbf{v}^{\prime}$. According to
[25], for the CTMC to converge to the stationary distribution
$\pi_{\textbf{v}}^{*}$, it needs to satisfy the following two conditions: 1)
Irreducibility, meaning that any two states of the CTMC are reachable from
each other. 2) Satisfaction of the detailed balance equation: for any
$\textbf{v},\textbf{v}^{\prime}\in\mathbb{V}$,
$\pi^{*}_{\textbf{v}}q_{\textbf{v}\textbf{v}^{\prime}}=\pi^{*}_{\textbf{v}^{\prime}}q_{\textbf{v}^{\prime}\textbf{v}}$.
In other words,
$\exp(\beta\Phi^{*}_{\textbf{v}})q_{\textbf{v}\textbf{v}^{\prime}}=\exp(\beta\Phi^{*}_{\textbf{v}^{\prime}})q_{\textbf{v}^{\prime}\textbf{v}}$
based on (7).
Condition 1 is satisfied because any two states (placement configurations)
remain reachable from each other through sequences of single-VM migrations.
For Condition 2, we set $q_{\textbf{v}\textbf{v}^{\prime}}=0$ for any two
states which differ by the migration of more than one VM from one base station
to another; this reduces the computation required, especially when the network
is large. For states which differ by the migration of only one VM, we set
$q_{\textbf{v}\textbf{v}^{\prime}}=\exp(\frac{1}{2}\beta(\Phi^{*}_{\textbf{v}^{\prime}}-\Phi^{*}_{\textbf{v}})).$
(12)
With this choice, the detailed balance equation is satisfied. The transition
rate $q_{\textbf{v}\textbf{v}^{\prime}}$ is exponential in the performance
difference between the target and current VM placement configurations.
Therefore, when the performance (optimal revenue) of the target configuration
is relatively higher than that of the current one, there is a higher
transition rate, and vice versa.
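The symmetry underlying detailed balance can be verified directly: $\pi^{*}_{\textbf{v}}q_{\textbf{v}\textbf{v}^{\prime}}$ is proportional to $\exp(\frac{\beta}{2}(\Phi^{*}_{\textbf{v}}+\Phi^{*}_{\textbf{v}^{\prime}}))$, which is symmetric in v and $\textbf{v}^{\prime}$. A quick numerical sketch with illustrative values:

```python
# Numerical check (illustrative values) that the rates in (12) satisfy detailed
# balance with pi_v* from (7): pi_v * q_vv' is proportional to
# exp(beta/2 * (Phi_v + Phi_v')), which is symmetric in v and v'.

import math

beta, phi_v, phi_vp = 10.0, 0.9, 1.1              # hypothetical revenues
q_fwd = math.exp(0.5 * beta * (phi_vp - phi_v))   # q_{v v'}
q_bwd = math.exp(0.5 * beta * (phi_v - phi_vp))   # q_{v' v}
lhs = math.exp(beta * phi_v) * q_fwd              # prop. to pi_v  * q_{v v'}
rhs = math.exp(beta * phi_vp) * q_bwd             # prop. to pi_v' * q_{v' v}
print(abs(lhs - rhs) < 1e-9)                      # True
```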
The performance of each configuration v is its attained revenue. In the next
section, we show how to obtain the optimal revenue given a VM placement
configuration v; in particular, we propose auction mechanisms to solve the
slave problem NP. We then show how the algorithms solving the master problem
VP and the slave problem NP are combined to solve the original problem MPP.
## V The Auction Pricing Mechanisms
In this section, we show how the slave problem NP can be solved. Specifically,
NP, defined in (2), can be decomposed into individual pricing problems, where
each base station $k$ solves the following problem:
$\textbf{NP-k}:\max_{p_{k}}\quad p_{k}\min\left\\{\sum_{i\in
U_{k}}r_{k,i}\mathds{1}_{\\{u_{k,i}\geq p_{k}\\}},v_{k}\right\\}.$ (13)
NP-k can be solved by an auction. We provide two solutions: firstly OPA
(Optimal Pricing Auction), which assumes the users are truthful, submitting
bids $b_{k,i}$ equal to their true valuations $u_{k,i}$; and then PUFF
(Partitioning Users For truthFulness), which includes an incentive CompAtible
Truthful auction iCAT. Our auction mechanisms are prior free, since they can
be carried out without knowledge of the distribution of users’ valuations
$u_{k,i}$.
### V-A The Optimal Pricing Auction (OPA)
The mechanics behind OPA are as follows: users submit a tuple
$(r_{k,i},b_{k,i})$ to base station $k$, where $r_{k,i}$ is the number of VMs
requested by user $i$ at base station $k$, and $b_{k,i}$ is the bid indicating
the user’s willingness to pay for one VM. Since all users are truthful, the
bid reported by a user equals its valuation (i.e., $b_{k,i}=u_{k,i}$). At
price $p_{k}$, all users with valuation $u_{k,i}\geq p_{k}$ are willing to
participate in the auction. We then prove in Theorem 2 that the optimal price
satisfies $p_{k}^{*}\in\mathbb{B}_{k}=U_{k}$, where $\mathbb{B}_{k}$ and
$U_{k}$ are the sets of bids and valuations of the users at base station $k$,
respectively.
###### Theorem 2.
When all users are truthful, the optimal price of NP-k, denoted $p_{k}^{*}$,
is found in $\mathbb{B}_{k}=U_{k}$.
Proof. When all users are truthful, we have $\mathbb{B}_{k}=U_{k}$. We then
prove that $p_{k}^{*}$ lies in $\mathbb{B}_{k}$.
For the case with $p_{k}>\text{max}_{i\in U_{k}}b_{k,i}=\text{max}_{{i\in
U_{k}}}u_{k,i}$, $\mathds{1}_{\\{b_{k,i}\geq
p_{k}\\}}=\mathds{1}_{\\{u_{k,i}\geq p_{k}\\}}=0$ holds, so that all users
decline to rent the VMs at base station $k$. Therefore, the revenue attained
at base station $k$ is $\textit{Rev}(p_{k})=p_{k}\min\\{\sum_{i\in
U_{k}}r_{k,i}\mathds{1}_{\\{u_{k,i}\geq p_{k}\\}},v_{k}\\}=0$.
Then, we analyse the case with $p_{k}<\text{max}_{i\in U_{k}}b_{k,i}$.
Rearrange $\mathbb{B}_{k}$ in descending order and denote the set of ordered
bids by $\\{b_{1},b_{2},...,b_{n_{k}}\\}$, where $b_{i}$ represents the $i$-th
highest bid. Using the fact that $b_{k,i}=u_{k,i}$, we have
$\displaystyle\textit{Rev}(p_{k}=b_{i}-\epsilon)$
$\displaystyle=(b_{i}-\epsilon)\>\min\left\\{\\!\sum_{i\in
U_{k}}r_{k,i}\mathds{1}_{\\{b_{k,i}\geq(b_{i}-\epsilon)\\}},v_{k}\\!\right\\}$
(14) $\displaystyle<b_{i}\>\min\left\\{\sum_{i\in
U_{k}}r_{k,i}\mathds{1}_{\\{b_{k,i}\geq b_{i}\\}},v_{k}\right\\}$
$\displaystyle=\textit{Rev}(p_{k}=b_{i}),$
where $\epsilon<b_{i}-b_{i+1}$ is small enough that no new users rent the VMs
at base station $k$ when the price changes from $p_{k}=b_{i}$ to
$p_{k}=b_{i}-\epsilon$, that is,
$\min\left\\{\sum_{i\in
U_{k}}r_{k,i}\mathds{1}_{\\{b_{k,i}\geq(b_{i}-\epsilon)\\}},v_{k}\right\\}=\min\left\\{\sum_{i\in
U_{k}}r_{k,i}\mathds{1}_{\\{b_{k,i}\geq b_{i}\\}},v_{k}\right\\}$ holds. Based
on (14), we thus conclude that $p_{k}^{*}$ lies in $\mathbb{B}_{k}$. ∎
Using the insight that the optimal price belongs to the set of bids, the
structure of our proposed OPA is summarized in Algorithm 1. In detail, after
receiving the tuples $(r_{k,i},b_{k,i})$ from all the users, base station $k$
sorts them into descending order with respect to $b_{k,i}$. For each unique
bid $b_{k,i}$, the platform sets $\bar{p}_{k}=b_{k,i}$ and calculates the
revenue $\textit{Rev}(\bar{p}_{k})=\bar{p}_{k}\min\\{\sum_{i\in
U_{k}}r_{k,i}\mathds{1}_{\\{u_{k,i}\geq\bar{p}_{k}\\}},v_{k}\\}$. It then
optimizes over $\bar{p}_{k}$ and obtains
$p_{k}^{*}=\text{argmax}_{\bar{p}_{k}=b_{k,i},\forall i\in
U_{k}}\textit{Rev}(\bar{p}_{k})$.
Algorithm 1 OPA: Optimal Pricing Auction
1:Input: Tuple $(r_{k,i},b_{k,i}),\forall i\in U_{k}$
2:Sort $(r_{k,i},b_{k,i})$ according to descending order with respect to
$b_{k,i}$.
3:for all unique $b_{k,i}$ do
4: Set $\bar{p}_{k}=b_{k,i}$
5: $\textit{Rev}(\bar{p}_{k})\leftarrow\bar{p}_{k}\min\\{\sum_{i\in
U_{k}}r_{k,i}\mathds{1}_{\\{u_{k,i}\geq\bar{p}_{k}\\}},v_{k}\\}$
$\triangleright$ By Eq. (13)
6:end for
7:Output: $p_{k}^{*}\leftarrow\text{argmax}_{\bar{p}_{k}=b_{k,i},\forall i\in
U_{k}}\textit{Rev}(\bar{p}_{k})$
8:end
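A compact Python sketch of Algorithm 1 follows; it is our illustrative rendering, with users represented as hypothetical (request, bid) tuples.

```python
# A sketch of Algorithm 1 (OPA). By Theorem 2, it suffices to try each bid as
# the candidate price. With truthful users, b_ki = u_ki.

def opa(bids, v_k):
    """bids: list of (r_ki, b_ki) tuples. Returns (optimal price, revenue)."""
    best_price, best_rev = 0.0, 0.0
    for _, b in bids:                           # candidate price = each bid
        demand = sum(r for (r, u) in bids if u >= b)
        rev = b * min(demand, v_k)
        if rev > best_rev:
            best_price, best_rev = b, rev
    return best_price, best_rev

bids = [(2, 0.9), (1, 0.6), (3, 0.4)]           # (VMs requested, bid) per user
print(opa(bids, v_k=5))   # price 0.4 serves min(6,5)=5 units: revenue 2.0
```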
### V-B The Incentive CompAtible Truthful Auction (iCAT)
In reality, users may have an incentive to submit bids unequal to their true
valuations (i.e. $b_{k,i}\neq u_{k,i}$), hoping to achieve a higher payoff.
Therefore, we present the incentive compatible auction mechanism iCAT, under
which each user’s dominant strategy is to be truthful.
Given a target revenue $R$, the auction mechanism posts price
$p_{k}=\frac{R}{\min\\{\sum_{i\in U_{k}}r_{k,i},v_{k}\\}}$, where $\sum_{i\in
U_{k}}r_{k,i}$ is the total demand of the users currently in the auction.
Users decide whether or not to accept the offer by checking that their payoff
$u_{k,i}-p_{k}$ is non-negative (individual rationality). If any user $i$
rejects the offer, he is removed from future rounds of the auction, and the
set of users in the auction is updated as $U_{k}\leftarrow
U_{k}\setminus\\{i\\}$. The process repeats: base station $k$ obtains the new
demand $\sum_{i\in U_{k}}r_{k,i}$ of users currently in the auction, and
broadcasts the new price $p_{k}=\frac{R}{\min\\{\sum_{i\in
U_{k}}r_{k,i},v_{k}\\}}$. If all users remaining in the auction accept the
offer, they are the winners, paying the last offered price $p_{k}$. Therefore,
base station $k$ rents
$\min\\{\sum_{i}r_{k,i}\mathds{1}_{\\{u_{k,i}\geq p_{k}\\}},v_{k}\\}$ units of
VMs to the users remaining in $U_{k}$ at price
$p_{k}=\frac{R}{\min\\{\sum_{i\in U_{k}}r_{k,i}\mathds{1}_{\\{u_{k,i}\geq
p_{k}\\}},v_{k}\\}}$.
The complete iCAT is summarized in Algorithm 2. The main idea behind this
mechanism is that it prunes the set of auction users until it obtains a set
$U_{k}$ whose users are all willing to pay
$p_{k}=\frac{R}{\min\\{\sum_{i\in U_{k}}r_{k,i},v_{k}\\}}$, the price at which
the base station obtains revenue $R$ given demand $\sum_{i\in U_{k}}r_{k,i}$.
Note that our auction mechanism does not involve the users submitting any bids
$b_{k,i}$; truthfulness is ensured via the structure of the mechanism, as
proved in Theorem 3. In particular, we prove that iCAT is dominant strategy
incentive compatible, meaning that being truthful gives each user a payoff at
least as high as any other strategy.
Algorithm 2 iCAT: incentive CompAtible Truthful Auction
1:Input: Initialize $U_{k}$, the number of VMs required by user $i$
($r_{k,i}$), and target revenue $R$ at base station $k$.
2:while $U_{k}$ is not empty do
3: Base station $k$ posts price $p_{k}=\frac{R}{\min\\{\sum_{i\in
U_{k}}r_{k,i},v_{k}\\}}$;
4: if $u_{k,i}<p_{k}$ for any user $i\in U_{k}$ then
5: User $i$ rejects the offer and leaves the auction;
6: Base station $k$ updates $U_{k}\leftarrow U_{k}\setminus\\{i\\}$;
7: else
8: All users in $U_{k}$ accept the offer;
9: Exit while loop;
10: end if
11:end while
12:Output: $p_{k}\leftarrow\frac{R}{\min\\{\sum_{i\in
U_{k}}r_{k,i},\>v_{k}\\}}$ and $\textit{Rev}(p_{k})\leftarrow R$ with $U_{k}$
not empty, otherwise, $p_{k}\leftarrow 0$ and $\textit{Rev}(p_{k})\leftarrow
0$.
13:end
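A sketch of Algorithm 2 in Python is given below; the valuations are used only to simulate the users' accept/reject decisions, since the mechanism itself never collects bids. Removing all rejecting users in one round is equivalent to removing them one at a time, because the posted price can only rise as demand shrinks.

```python
# A sketch of Algorithm 2 (iCAT) with target revenue R. users: (r_ki, u_ki).

def icat(users, v_k, R):
    """Returns (price, revenue); (0, 0) if the auction fails."""
    U = list(users)
    while U:
        demand = sum(r for (r, _) in U)
        p = R / min(demand, v_k)                 # posted price this round
        remaining = [x for x in U if x[1] >= p]
        if len(remaining) == len(U):
            return p, R                          # all remaining users accept
        U = remaining                            # rejecting users leave for good
    return 0.0, 0.0

users = [(1, 0.9), (2, 0.5), (1, 0.2)]
print(icat(users, v_k=3, R=1.2))                 # (0.4, 1.2): all accept at 0.4
```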
###### Theorem 3.
Mechanism iCAT is dominant strategy incentive compatible.
Proof. If a user rejects an offer, he is out of the auction and unable to
participate in subsequent rounds, hence getting a payoff of $0$. Therefore
rejecting $p_{k}$ when $p_{k}<u_{k,i}$ is a dominated strategy.
Likewise, accepting $p_{k}>u_{k,i}$ is a dominated strategy: since prices only
rise in subsequent rounds, such a user would end up paying more than its
valuation, receiving a negative payoff. Therefore the dominant strategy for
every user $i$ is to act according to his true value $u_{k,i}$. ∎
The following theorem provides an optimality guarantee for iCAT. It uses the
benchmark $\text{OptRev}^{\geq
2}(U_{k}^{all})=\text{max}_{p_{k}}\>p_{k}\>\min\\{\sum_{i\in
U_{k}^{all}}r_{k,i}\mathds{1}_{\\{u_{k,i}\geq p_{k}\\}},v_{k}\\}$, which
requires at least two users to be in the market; this is not a serious
constraint in light of the number of users at one base station. Here,
$U_{k}^{all}$ denotes the initial $U_{k}$ in iCAT, that is, the full set of
users at base station $k$.
###### Theorem 4.
The mechanism iCAT achieves a revenue of $R$ if $\text{OptRev}^{\geq
2}(U_{k}^{all})\geq R$, and a revenue of $0$ otherwise.
Proof. According to Theorem 2, we have
$\text{OptRev}^{\geq 2}(U_{k}^{all})=u_{k,x}^{*}\>\min\left\\{\sum_{i\in
U_{k,x}^{*}}r_{k,i},v_{k}\right\\},$ (15)
for some $u_{k,x}^{*}$ and $U_{k,x}^{*}=\\{i|u_{k,i}\geq u_{k,x}^{*}\\}$.
If $\text{OptRev}^{\geq 2}(U_{k}^{all})>R$, then some $u_{k,x}$ not equal to
$u_{k,x}^{*}$ can be found that obtains a revenue $\textit{Rev}(u_{k,x})$
equal to $R$. Conversely, if $\text{OptRev}^{\geq 2}(U_{k}^{all})<R$, by (15)
we cannot find any $u_{k,x}$ satisfying
$u_{k,x}\geq\frac{R}{\min\\{\sum_{i\in U_{k}^{all}}r_{k,i},v_{k}\\}}$;
according to line 12 in Algorithm 2, a revenue of 0 is obtained in this case.
Finally, for the case with $\text{OptRev}^{\geq 2}(U_{k}^{all})=R$, the
revenue of $R$ is achieved directly. ∎
Intuitively, the target revenue $R$ plays a key role in iCAT. How should the
base station estimate $R$? For truthfulness, we want $R$ to be estimated
independently of the bidders on whom we run auction iCAT. Hence, we further
propose a partitioning mechanism PUFF (Partitioning Users For truthFulness),
which lets the base station estimate $R$ while preserving truthfulness.
### V-C Partitioning Users For Truthfulness (PUFF)
The operations of PUFF are as follows. We partition the set of all users into
two sets and calculate the optimal revenues $R_{1}$ and $R_{2}$ for each set.
Next, we use each optimal revenue as the estimate of $R$ for the opposing set
and run iCAT in each set. Note that when the total supply is less than the
total demand, we run the separate auctions using $\lfloor v_{k}/2\rfloor$ and
$\lceil v_{k}/2\rceil$ VMs. The complete PUFF is summarized in Algorithm 3.
Algorithm 3 PUFF: Partitioning Users For truthFulness Mechanism
1:Input: Initialize $U_{k}$ and the number of VMs required by user $i$
($r_{k,i}$).
2:Randomly partition $U_{k}$ into two sets $S_{1}$ and $S_{2}$ of equal size.
3:if $\sum_{i\in U_{k}}r_{k,i}>v_{k}$ then
4: Calculate $R_{1}=$ optimal revenue of $S_{1}$ given $\lfloor
v_{k}/2\rfloor$ VMs, and $R_{2}=$ optimal revenue of $S_{2}$ given $\lceil
v_{k}/2\rceil$ VMs;
5: Run auction iCAT($S_{1},\lfloor v_{k}/2\rfloor,R_{2}$) on set $S_{1}$, and
iCAT($S_{2},\lceil v_{k}/2\rceil,R_{1}$) on set $S_{2}$.
6:else
7: Calculate $R_{1}=$ optimal revenue of $S_{1}$ given $v_{k}$ VMs, and
$R_{2}=$ optimal revenue of $S_{2}$ given $v_{k}$ VMs.
8: Run auction iCAT($S_{1},v_{k},R_{2}$) on set $S_{1}$, and
iCAT($S_{2},v_{k},R_{1}$) on set $S_{2}$.
9:end if
10:end
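The sketch below renders Algorithm 3 in Python; it assumes the opa() and icat() sketches from the previous sections are in scope, and is illustrative rather than a definitive implementation.

```python
# A sketch of Algorithm 3 (PUFF). Each half faces a target revenue estimated
# from the *other* half, which is what preserves truthfulness (Theorem 5).
# Assumes opa() and icat() from the earlier sketches are defined.

import random

def puff(users, v_k):
    users = list(users)
    random.shuffle(users)
    s1, s2 = users[::2], users[1::2]            # random partition into two sets
    if sum(r for (r, _) in users) > v_k:        # demand exceeds supply
        v1, v2 = v_k // 2, v_k - v_k // 2       # floor(v_k/2) and ceil(v_k/2)
    else:
        v1 = v2 = v_k
    r1 = opa(s1, v1)[1]                         # optimal revenue of S1
    r2 = opa(s2, v2)[1]                         # optimal revenue of S2
    return icat(s1, v1, r2)[1] + icat(s2, v2, r1)[1]

users = [(1, 0.9), (1, 0.8), (1, 0.5), (1, 0.3)]
print(puff(users, v_k=2))   # at least min(R1, R2), by Lemma 2
```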
In the following theorem, we show that PUFF is truthful.
###### Theorem 5.
Mechanism PUFF is dominant strategy truthful.
Proof. In PUFF, the target revenue used for each set is computed from the
reports of the opposing set, so no user can influence the price it faces.
Auction iCAT is truthful when implemented with an $R$ estimated independently
of the users it is run on (Theorem 3), and hence PUFF is dominant strategy
truthful. ∎
Next, we state a lemma which helps us towards proving lower bounds on the
performance of PUFF.
###### Lemma 2.
The revenue of PUFF is at least $\min(R_{1},R_{2})$.
Proof. One of $R_{1}>R_{2}$, $R_{2}>R_{1}$, or $R_{1}=R_{2}$ holds in PUFF. By
Theorem 4, the auction whose target does not exceed its set’s optimal revenue
succeeds, so at least one of iCAT($S_{1},R_{2}$) and iCAT($S_{2},R_{1}$) gets
a revenue above 0. The total revenue is therefore at least
$\min(R_{1},R_{2})$. ∎
Next, we bound the optimality gap of PUFF, proving that its competitive ratio
is $4$ in the special case where every user requests one VM, i.e.,
$r_{k,i}=1$.
###### Theorem 6.
Assume $r_{k,i}=1$ for all users. Let $Rev$ be the expected revenue of PUFF.
We will have $\frac{\text{Rev}}{\text{OptRev}^{\geq
2}(U_{k}^{all})}\geq\frac{1}{4}$.
Proof. We know from Theorem 2 that $\text{OptRev}^{\geq
2}(U_{k}^{all})=u_{k,x}^{*}\>\min\\{\sum_{i\in U_{k,x}^{*}}r_{k,i},v_{k}\\}$
for some $u_{k,x}^{*}$ and $U_{k,x}^{*}=\\{i|u_{k,i}\geq u_{k,x}^{*}\\}$. Let
$D=\sum_{i\in U_{k}^{all}}r_{k,i}$ and $S=v_{k}$. We first analyse
the case where $D\geq S$. Given this $u_{k,x}^{*}$, we have $R_{1}\geq
u_{k,x}^{*}\>\min\\{\sum_{i\in U_{k,x}^{*}\cap S_{1}}r_{k,i},\lfloor
v_{k}/2\rfloor\\}$ and $R_{2}\geq u_{k,x}^{*}\>\min\\{\sum_{i\in
U_{k,x}^{*}\cap S_{2}}r_{k,i},\lceil v_{k}/2\rceil\\}$. Therefore, we deduce
that
$\displaystyle\frac{Rev}{\text{OptRev}^{\geq
2}(U_{k}^{all})}\stackrel{{\scriptstyle(a)}}{{\geq}}\frac{\mathbb{E}[\min(R_{1},R_{2})]}{u_{k,x}^{*}\>\min\\{\sum_{i\in
U_{k,x}^{*}}r_{k,i},v_{k}\\}}$ (16)
$\displaystyle\stackrel{{\scriptstyle(b)}}{{\geq}}\frac{\mathbb{E}[\min(u_{k,x}^{*}\>\min\\{A,\lfloor
v_{k}/2\rfloor\\},u_{k,x}^{*}\>\min\\{B,\lceil
v_{k}/2\rceil\\})]}{u_{k,x}^{*}\>\min\\{\sum_{i\in
U_{k,x}^{*}}r_{k,i},v_{k}\\}}$
$\displaystyle\stackrel{{\scriptstyle(c)}}{{\geq}}\frac{\min(\lfloor
v_{k}/2\rfloor,\mathbb{E}[\min\\{A,B\\}])}{\min\\{\sum_{i\in
U_{k,x}^{*}}r_{k,i},v_{k}\\}}$
$\displaystyle\stackrel{{\scriptstyle(d)}}{{\geq}}\frac{\min\\{\lfloor
v_{k}/2\rfloor,1/4\sum_{i\in U_{k,x}^{*}}r_{k,i}\\}}{\min\\{\sum_{i\in
U_{k,x}^{*}}r_{k,i},v_{k}\\}}\geq\frac{1}{4}.$
In inequality $(b)$, we have $A=\sum_{i\in U_{k,x}^{*}\cap S_{1}}r_{k,i}$ and
$B=\sum_{i\in U_{k,x}^{*}\cap S_{2}}r_{k,i}$. The transition from inequality
$(c)$ to $(d)$ follows from the fact that if we flip $k\geq 2$ fair coins
(corresponding to randomly partitioning the winners into the 2 sets), then
$\mathbb{E}[\min(H,T)]\geq k/4$, where $H$ and $T$ are the numbers of heads
and tails (see [22], Chapter 13).
Likewise, for the case where $D\leq S$, following the same logic we have
$\frac{Rev}{\text{OptRev}^{\geq
2}(U_{k}^{all})}\geq\frac{\min\\{v_{k},1/4\sum_{i\in
U_{k,x}^{*}}r_{k,i}\\}}{\min\\{\sum_{i\in
U_{k,x}^{*}}r_{k,i},v_{k}\\}}\geq\frac{1}{4}.\qed$ (17)
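Theorem 6 can be checked empirically; the Monte Carlo sketch below (reusing the opa() and puff() sketches above, with synthetic uniform valuations) estimates PUFF's expected revenue against the optimal benchmark.

```python
# An illustrative Monte Carlo check of Theorem 6 with unit demands r_ki = 1.
# Reuses the opa() and puff() sketches defined earlier; data are synthetic.

import random

random.seed(0)
users = [(1, random.random()) for _ in range(40)]   # u_ki ~ U[0,1]
v_k = 20
opt = opa(users, v_k)[1]                             # optimal revenue benchmark
avg = sum(puff(users, v_k) for _ in range(2000)) / 2000
print(round(avg / opt, 3), avg / opt >= 0.25)        # ratio expected above 1/4
```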
We emphasize that iCAT, PUFF and their proofs borrow from, but differ from,
the Profit Extractor and Random Sampling Auction in [21, 22]. The Profit
Extractor and Random Sampling Auction cater to fully digital goods with zero
marginal cost of producing the next good, and hence an infinite supply. In
contrast, our network has a limited supply of VMs, which necessitates new
algorithms and proofs.
## VI Combined Algorithm and Analysis
In this section, we present the combined VM placement and pricing mechanism,
termed cMAP, and describe its implementation. We then analyse its performance
and prove bounds on the optimality gap caused by potential perturbations of
$\Phi^{*}_{\textbf{v}}$.
### VI-A Algorithm Implementation
The distributed and combined Markov Approximation VM Placement and Pricing
Algorithm (cMAP) is summarized in Algorithm 4 and works as follows. In each
round, we randomly select a base station. The selected base station $k$
considers potential configurations $\textbf{v}^{\prime}$ in which it has
gained one VM or sent one VM elsewhere. The network operator obtains the
target revenue $\Phi^{*}_{\textbf{v}^{\prime}}$ using OPA, PUFF, or historical
data. The base station then starts exponential clocks for each of these
configurations, following the transition rate
$q_{\textbf{v}\textbf{v}^{\prime}}\leftarrow\exp(0.5\beta(\Phi^{*}_{\textbf{v}^{\prime}}-\Phi^{*}_{\textbf{v}}))$.
When the performance of the target configuration is relatively higher (or
lower) than that of the current one, there is a higher (or lower) rate of
switching. The process repeats until convergence to the stationary
distribution, the optimal point of VP-approx. This point approximates the
optimal point of MPP with an optimality gap of
$\frac{1}{\beta}\log|\mathbb{V}|$, according to Proposition 1. Note that due
to its distributed nature, our algorithm is able to handle dynamic scenarios
in which new users enter the system or users shift from region to region.
Algorithm 4 cMAP: Combined Markov Approx VM Placement and Pricing Algorithm
1:Input: $V$, the total number of VMs across the network, $\\{U_{k}\\}$, the
set of users across all base stations, and $\\{r_{k,i}\\}$, the number of VMs
required by all users.
2:Initialise a configuration v.
3:Network operator calculates $\Phi^{*}_{\textbf{v}}\leftarrow$
OPA(v,$\\{U_{k}\\}$, $\\{r_{k,i}\\}$) or PUFF(v,$\\{U_{k}\\}$,
$\\{r_{k,i}\\}$);
4:while True do
5: Randomly select a base station $k$.
6: Consider configurations $\textbf{v}^{\prime}$ with $v_{k}\pm 1$ VMs at $k$.
7: for all configurations $\textbf{v}^{\prime}$ do
8: Network operator obtains the target revenue
$\Phi^{*}_{\textbf{v}^{\prime}}\leftarrow$ OPA(v′,$\\{U_{k}\\}$,
$\\{r_{k,i}\\}$) or PUFF(v′,$\\{U_{k}\\}$, $\\{r_{k,i}\\}$);
9: Set clocks with transition rate
$q_{\textbf{v}\textbf{v}^{\prime}}\leftarrow$
$\exp(0.5\beta(\Phi^{*}_{\textbf{v}^{\prime}}-\Phi^{*}_{\textbf{v}}))$;
10: end for
11: The CTMC transits to the next state according to
$q_{\textbf{v}\textbf{v}^{\prime}}$;
12:end while
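The following condensed sketch simulates the cMAP dynamics as a CTMC over placements; revenue(v) is a stand-in for the operator's call to OPA or PUFF (here a toy function), and all parameter values are illustrative.

```python
# A condensed sketch of Algorithm 4 (cMAP). One step: pick a base station,
# enumerate single-VM-migration neighbours, start exponential clocks with the
# rates of (12), and jump to the neighbour whose clock fires first.

import math, random

def cmap_step(v, revenue, beta):
    K = len(v)
    k = random.randrange(K)                      # randomly selected base station
    neighbours = []
    for j in range(K):
        if j == k:
            continue
        if v[j] > 0:                             # BS k gains one VM from BS j
            w = list(v); w[j] -= 1; w[k] += 1; neighbours.append(w)
        if v[k] > 0:                             # BS k sends one VM to BS j
            w = list(v); w[k] -= 1; w[j] += 1; neighbours.append(w)
    base = revenue(v)
    rates = [math.exp(0.5 * beta * (revenue(w) - base)) for w in neighbours]
    clocks = [random.expovariate(q) for q in rates]
    return neighbours[clocks.index(min(clocks))] # first clock to fire wins

revenue = lambda v: -sum((x - 2) ** 2 for x in v)    # toy revenue, peak at (2,2,2)
v = [6, 0, 0]
for _ in range(200):
    v = cmap_step(v, revenue, beta=2.0)
print(v)   # for larger beta, concentrates near the toy revenue's maximiser
```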
### VI-B Algorithm Analysis
Our combined mechanism cMAP attains an optimality gap of
$\frac{1}{\beta}\log|\mathbb{V}|$ for the original problem MPP. In practice,
however, the system may obtain an inaccurate value of $\Phi_{\textbf{v}}^{*}$,
the optimal revenue under configuration v; this may occur when we implement
the incentive compatible auction mechanism PUFF and estimate $R$.
In light of this, we analyse the impact of such perturbations by bounding the
optimality gap they induce on problem VP-approx. To this end, we construct a
new CTMC which takes the perturbations into account, and characterize its
stationary distribution, as follows.
For each state v with optimal revenue $\Phi_{\textbf{v}}^{*}$, let
$\overline{\Phi}_{\textbf{v}}$ be its corresponding perturbed, inaccurate
revenue. The perturbation error
$\epsilon_{\textbf{v}}=\overline{\Phi}_{\textbf{v}}-\Phi_{\textbf{v}}^{*}$
lies in the range $[-\psi_{\textbf{v}},\psi_{\textbf{v}}]$. For each state v,
we quantize the error into $2a_{\textbf{v}}+1$ potential values
$\\{\frac{n}{a_{\textbf{v}}}\psi_{\textbf{v}}:n=0,\pm 1,...,\pm
a_{\textbf{v}}\\}$, where the error equals
$\epsilon_{\textbf{v}}=\frac{n}{a_{\textbf{v}}}\psi_{\textbf{v}}$ with
probability $\rho_{\textbf{v}_{n}}$ and
$\sum_{n=-a_{\textbf{v}}}^{a_{\textbf{v}}}\rho_{\textbf{v}_{n}}=1$. This means
that we have constructed a new CTMC in which each state v of the original CTMC
is expanded into $2a_{\textbf{v}}+1$ states. The transition rates are given by
$q_{\textbf{v}_{n}\textbf{v}_{n^{\prime}}^{{}^{\prime}}}=\exp(0.5\beta(\Phi_{\textbf{v}^{\prime}_{n^{\prime}}}^{*}-\Phi_{\textbf{v}_{n}}^{*}))\rho_{\textbf{v}^{\prime}_{n^{\prime}}}.$
(18)
Based on the detailed balance equation
$\pi_{\textbf{v}_{n}}q_{\textbf{v}_{n}\textbf{v}^{\prime}_{n^{\prime}}}=\pi_{\textbf{v}^{\prime}_{n^{\prime}}}q_{\textbf{v}^{\prime}_{n^{\prime}}\textbf{v}_{n}}$,
we have
$\pi_{\textbf{v}_{n}}\\!\exp(\frac{1}{2}\beta(\Phi_{\textbf{v}^{\prime}_{n^{\prime}}}^{*}-\Phi_{\textbf{v}_{n}}^{*}))\rho_{\textbf{v}^{\prime}_{n^{\prime}}}\\!=\\!\pi_{\textbf{v}^{\prime}_{n^{\prime}}}\\!\exp(\frac{1}{2}\beta(\Phi_{\textbf{v}_{n}}^{*}-\Phi_{\textbf{v}^{\prime}_{n^{\prime}}}^{*}))\rho_{\textbf{v}_{n}},$
(19)
which results in
$\pi_{\textbf{v}_{n}}\exp(\beta\Phi_{\textbf{v}^{\prime}_{n^{\prime}}}^{*})\rho_{\textbf{v}^{\prime}_{n^{\prime}}}=\pi_{\textbf{v}^{\prime}_{n^{\prime}}}\exp(\beta\Phi_{\textbf{v}_{n}}^{*})\rho_{\textbf{v}_{n}}.$
(20)
Because
$\sum_{\textbf{v}^{\prime}\in\mathbb{V}}\sum_{n^{\prime}=-a_{\textbf{v}^{\prime}}}^{a_{\textbf{v}^{\prime}}}\pi_{\textbf{v}^{\prime}_{n^{\prime}}}=1$,
we obtain
$\pi_{\textbf{v}_{n}}=\frac{\exp(\beta\Phi_{\textbf{v}_{n}}^{*})\rho_{\textbf{v}_{n}}}{\sum_{\textbf{v}^{\prime}\in\mathbb{V}}\sum_{n^{\prime}=-a_{\textbf{v}^{\prime}}}^{a_{\textbf{v}^{\prime}}}\exp(\beta\Phi_{\textbf{v}^{\prime}_{n^{\prime}}}^{*})\rho_{\textbf{v}^{\prime}_{n^{\prime}}}}.$
(21)
Letting
$\sigma_{\textbf{v}^{\prime}}=\sum_{n^{\prime}=-a_{\textbf{v}^{\prime}}}^{a_{\textbf{v}^{\prime}}}\rho_{\textbf{v}^{\prime}_{n^{\prime}}}\exp(\beta\frac{n^{\prime}}{a_{\textbf{v}^{\prime}}}\psi_{\textbf{v}^{\prime}})$,
the distribution of the new perturbed CTMC will be
$\overline{\pi}_{\textbf{v}}=\sum_{n=-a_{\textbf{v}}}^{a_{\textbf{v}}}\pi_{\textbf{v}_{n}}=\frac{\sigma_{\textbf{v}}\exp(\beta\Phi^{*}_{\textbf{v}})}{\sum_{\textbf{v}^{\prime}\in\mathbb{V}}\sigma_{\textbf{v}^{\prime}}\exp(\beta\Phi^{*}_{\textbf{v}^{\prime}})}.$
(22)
We use the Total Variation Distance [26, 27] as a metric to quantify the
optimality gap between the stationary distribution of the perturbed CTMC
$\overline{\pi}_{\textbf{v}}$ and $\pi_{\textbf{v}}^{*}$, the optimal solution
of VP-approx, as follows:
$d_{TV}(\pi_{\textbf{v}}^{*},\overline{\pi}_{\textbf{v}})=\frac{1}{2}\sum_{\textbf{v}\in\mathbb{V}}|\pi_{\textbf{v}}^{*}-\overline{\pi}_{\textbf{v}}|.$
(23)
With the stationary distribution of the perturbed CTMC
$\overline{\pi}_{\textbf{v}}$, we use a result in [27], which proved that the
total variation distance is bounded as follows
$d_{TV}(\pi_{\textbf{v}}^{*},\overline{\pi}_{\textbf{v}})\leq
1-\exp(-2\beta\psi_{\text{max}}),$ (24)
where $\psi_{\text{max}}=\text{max}_{\textbf{v}}\psi_{\textbf{v}}$ is the
largest perturbation error among the states v. The revenue gap is hence
bounded as follows:
$|\pi_{\textbf{v}}^{*}\Phi^{*}_{\textbf{v}}-\overline{\pi}_{\textbf{v}}\overline{\Phi}_{\textbf{v}}|\leq
2\Phi_{\text{max}}(1-\exp(-2\beta\psi_{\text{max}})),$ (25)
where $\Phi_{\text{max}}=\text{max}_{\textbf{v}}\Phi^{*}_{\textbf{v}}$.
The upper bounds on both the Total Variation Distance between the two
distributions $d_{TV}(\pi_{\textbf{v}}^{*},\overline{\pi}_{\textbf{v}})$ and
the optimality gap
$|\pi_{\textbf{v}}^{*}\Phi^{*}_{\textbf{v}}-\overline{\pi}_{\textbf{v}}\overline{\Phi}_{\textbf{v}}|$
are independent of $\rho_{\textbf{v}_{n}}$, the distribution of perturbed
revenues, and of $|\mathbb{V}|$, the total number of configurations. This
indicates that the optimality gap does not increase with the network size or
the number of configurations $|\mathbb{V}|$. Moreover, using Markov
approximation enables a distributed implementation for this large
combinatorial problem.
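As a worked numeric illustration of (24), for $\beta=10$ the total variation bound $1-\exp(-2\beta\psi_{\text{max}})$ evaluates as follows (values are illustrative):

```python
# Worked numeric example (illustrative values) of the TV-distance bound (24).

import math

beta = 10.0
for psi_max in (0.001, 0.01, 0.05):
    print(psi_max, round(1 - math.exp(-2 * beta * psi_max), 4))
# 0.001 -> 0.0198, 0.01 -> 0.1813, 0.05 -> 0.6321: the bound tightens as the
# revenue estimates improve, while a larger beta amplifies a given error.
```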
## VII Simulation Results
In this section, we evaluate the performance of our combined mechanism cMAP,
i.e., MAP (which solves the VM placement problem) along with either OPA or
PUFF (which solve the normalized pricing problem), and provide some insights.
### VII-A Convergence, and insights on pricing
Firstly, we consider a network with 5 base stations (BSs) and 10 VMs
distributed among them. The 5 base stations have $(2,0,2,4,0)$ users,
respectively. We set $r_{k,i}$, the number of VMs required by user $i$ at BS
$k$, to be between 1 and 3 VMs, and let $u_{k,i}$, the willingness to pay of
user $i$ at BS $k$, follow the uniform distribution $U[0,1]$.
Figure 1: Convergence of the cMAP.
Under this setup, we run cMAP (the combined Markov Approximation VM Placement
Algorithm) along with auction OPA, for different values of $\beta$. We plot
the running average over a window of $30$ jumps, in comparison with the
optimal value, as seen in Fig. 1. The optimal value is obtained by exhaustive
search, evaluating the solution to MPP over all combinations of v. As seen,
for $\beta=50$, we achieve optimality. For $\beta=10$, the converged
stationary distribution over configurations of v is near optimal.
Under $\beta=10$, the top 5 most common states are $\textbf{v}=(2,0,2,5,1)$,
$(2,0,3,5,0)$, $(2,1,2,5,0)$, $(3,0,2,5,0)$, $(2,0,2,6,0)$, which are best
able to meet the total demand of $(2,0,2,4,0)$. Notice that as $\beta$
increases, performance improves: the running average is closer to the optimal
point, and fluctuations decrease. The fluctuations occur because under our
Markov Approximation-inspired algorithm, we converge not to a specific state
of the CTMC, but to a stationary distribution over the states of the CTMC.
Recall that the converged stationary distribution has an optimality gap of
$\frac{1}{\beta}\log|\mathbb{V}|$ from the optimal point of the original
problem VP. This shows that as $\beta\rightarrow\infty$, the performance
converges to the optimal value of VP. A potential tradeoff of a higher $\beta$
is the following: if $\Phi_{\textbf{v}}^{*}>\Phi_{\textbf{v}^{\prime}}^{*}$,
according to (12) there is a lower rate of switching and a higher probability
of staying in the current state. As $\beta$ increases, the network is more
likely to stay in the current state. This may lead to a longer time spent in
local minima, due to the lack of exploration, and hence a longer convergence
time.
Next, with the current setup we compare the performance of our proposed
mechanisms MAP + OPA and MAP + PUFF to the following baselines:
1) Cooperative BS + Uniform Pricing: Under this scenario, the base stations
are cooperative. They share the VMs with each other, where the VMs are
transferred within the network via our proposed MAP. Unlike our proposed
combined solution, here we use uniform pricing: a common price is set
throughout the network, regardless of the demand pattern. A benefit of uniform
pricing is that it is faster to implement.
2) Non-cooperative BS + Auction: Under this scenario, the base stations are no
longer cooperative - they do not share the VMs with each other. We obtain the
average result under the non-cooperative scenario, by averaging over all the
possible combinations of v. For each configuration v, we use the optimal
auction OPA to obtain $\Phi_{\textbf{v}}^{*}$.
3) Non-cooperative BS + Uniform Pricing: Under this scenario, the base
stations neither share the VMs with each other nor consider the demand
pattern, using a common price throughout the network.
Figure 2: The effect of different uniform prices on revenue.
We plot the revenue obtained under the various methods and show how the
performance varies when different prices are set as the uniform price in Fig.
2. As seen, our proposed algorithm cMAP outperforms the baselines, especially
when OPA is used as the pricing mechanism. While MAP in combination with PUFF
is not near-optimal, we have proved that PUFF has a competitive ratio of $4$.
The baselines involving a uniform price perform best when the price is
“neutral” - neither too low nor too high. If the price is too high, users
(likely having a lower willingness to pay) would not choose to use the VMs. If
the price is too low, the revenue the network operator obtains will be low.
Fig. 2 also shows that resource sharing among base stations increases the
revenue.
### VII-B A larger setup, with insights on willingness to pay and the demand-
supply ratio
Next, we enlarge our setup and compare the performance of our proposed
mechanisms with the different baselines. In this setup, there are 20 VMs
shared among the 5 base stations. The number of users at each base station is
randomized, along with $r_{k,i}$, the number of VM units each user requests.
We let the users’ willingness to pay $u_{k,i}$ follow a uniform distribution
$U[a,b]$.
Figure 3: The impact of willingness to pay on revenue. Figure 4: The impact
of the Demand/Supply ratio on revenue.
In Fig. 3, we show the impact of users’ willingness to pay on the revenue. The
range of $u_{k,i}$ is adjusted from the uniform distribution $U[0,0.4]$ (low
willingness to pay), to $U[0.2,0.6]$, $U[0.4,0.8]$ and $U[0.6,1]$ (high
willingness to pay). Our proposed solution MAP + OPA (with $\beta=10$)
outperforms the baselines, obtaining a near-optimal revenue. Our results show
that on average, base station cooperation increases the revenue by up to
$53\%$. As seen in Fig. 3, when the users have a higher willingness to pay,
the revenue increases. Notice that uniform pricing ($p=0.5$) does not perform
well when the users have low willingness to pay.
Fig. 4 illustrates the impact on revenue when the
$\frac{\text{Demand}}{\text{Supply}}$ ratio is varied. Supply is fixed at $20$
VMs, while demand is increased from $D=7$ (low demand), to $D=21$ (near equal
demand and supply), and to high demand $D=38$. Our solution cMAP outperforms
the baselines, especially when demand increases, as the supply of VMs is
shifted around the network to meet demand more effectively, and an optimal
auction is used to extract the highest revenue possible. Our results show that
on average, base station cooperation increases the revenue by up to $57\%$. As
seen in Fig. 4, as the $\frac{\text{Demand}}{\text{Supply}}$ ratio increases,
revenue increases because more units of demand are met. Once the
$\frac{\text{Demand}}{\text{Supply}}$ ratio exceeds 1, revenue no longer
increases much, due to the lack of global supply in the system.
## VIII Conclusions
In this paper, we have addressed the load-unbalanced problem in MEC systems by
jointly optimizing the VM placement and pricing across base stations.
Specifically, we formulated a revenue maximization problem from the network
operator’s perspective, which was decomposed into a VM placement master
problem and a normalized pricing slave problem, where the objective function
of the master problem is the optimal value of the slave problem. We solved the
master problem by designing a CTMC, and solved the slave problem by proposing
auctions for users’ truthful and untruthful behaviors, respectively. By
combining the algorithms proposed for the master and slave problems, cMAP is
implemented for VM placement and pricing decision making across base stations.
Through theoretical analysis, we gave the optimality gap of cMAP nested with
OPA (the auction mechanism for truthful users) and with PUFF (the auction
mechanism for potentially untruthful users), respectively. Finally, we
demonstrated the convergence and efficiency of cMAP. For future work, we will
analyse the impact of factors such as a heterogeneous cost of VM migration
between base stations.
## References
* [1] Y. Mao, C. You, J. Zhang, K. Huang, and K. B. Letaief, “A survey on mobile edge computing: The communication perspective,” _IEEE Communications Surveys & Tutorials_, vol. 19, no. 4, pp. 2322–2358, 2017.
* [2] Y. C. Hu, M. Patel, D. Sabella, N. Sprecher, and V. Young, “Mobile edge computing—a key technology towards 5G,” _ETSI White Paper_ , vol. 11, 2015.
* [3] P. Mach and Z. Becvar, “Mobile edge computing: A survey on architecture and computation offloading,” _IEEE Communications Surveys & Tutorials_, vol. 19, no. 3, pp. 1628–1656, 2017.
* [4] J. Plachy, Z. Becvar, and E. C. Strinati, “Dynamic resource allocation exploiting mobility prediction in mobile edge computing,” in _2016 IEEE 27th Annual International Symposium on Personal, Indoor, and Mobile Radio Communications (PIMRC)_. IEEE, 2016, pp. 1–6.
* [5] A. Ksentini, T. Taleb, and M. Chen, “A markov decision process-based service migration procedure for follow me cloud,” in _2014 IEEE International Conference on Communications (ICC)_. IEEE, 2014, pp. 1350–1354.
* [6] T. Ouyang, Z. Zhou, and X. Chen, “Follow me at the edge: Mobility-aware dynamic service placement for mobile edge computing,” _IEEE Journal on Selected Areas in Communications_ , vol. 36, no. 10, pp. 2333–2345, 2018.
* [7] H. Ma, Z. Zhou, and X. Chen, “Leveraging the power of prediction: Predictive service placement for latency-sensitive mobile edge computing,” _IEEE Transactions on Wireless Communications_ , vol. 19, no. 10, pp. 6454–6468, 2020.
* [8] S. Wang, Y. Guo, N. Zhang, P. Yang, A. Zhou, and X. S. Shen, “Delay-aware microservice coordination in mobile edge computing: A reinforcement learning approach,” _IEEE Transactions on Mobile Computing_ , 2019.
* [9] Y. Mao, J. Zhang, and K. B. Letaief, “Dynamic computation offloading for mobile-edge computing with energy harvesting devices,” _IEEE Journal on Selected Areas in Communications_ , vol. 34, no. 12, pp. 3590–3605, Dec. 2016.
* [10] X. Chen, L. Jiao, W. Li, and X. Fu, “Efficient multi-user computation offloading for mobile-edge cloud computing,” _IEEE/ACM Transactions on Networking_ , vol. 24, no. 5, pp. 2795–2808, Oct. 2016.
* [11] T. Q. Dinh, J. Tang, Q. D. La, and T. Q. Quek, “Offloading in mobile edge computing: Task allocation and computational frequency scaling,” _IEEE Transactions on Communications_ , vol. 65, no. 8, pp. 3571–3584, 2017.
* [12] Y. Li and S. Wang, “An energy-aware edge server placement algorithm in mobile edge computing,” in _2018 IEEE International Conference on Edge Computing (EDGE)_. IEEE, 2018, pp. 66–73.
* [13] Q. Wang, S. Guo, J. Liu, C. Pan, and L. Yang, “Profit maximization incentive mechanism for resource providers in mobile edge computing,” _IEEE Transactions on Services Computing_ , pp. 1–1, 2019.
* [14] Y. Shih, C. Wang, and A. Pang, “Fog computing service provision using bargaining solutions,” _IEEE Transactions on Services Computing_ , pp. 1–1, 2019.
* [15] A. Kiani and N. Ansari, “Toward hierarchical mobile edge computing: An auction-based profit maximization approach,” _IEEE Internet of Things Journal_ , vol. 4, no. 6, pp. 2082–2091, 2017.
* [16] Z. Tao, Q. Xia, Z. Hao, C. Li, L. Ma, S. Yi, and Q. Li, “A survey of virtual machine management in edge computing,” _Proceedings of the IEEE_ , vol. 107, no. 8, pp. 1482–1499, 2019.
* [17] S. Wang, J. Xu, N. Zhang, and Y. Liu, “A survey on service migration in mobile edge computing,” _IEEE Access_ , vol. 6, pp. 23511–23528, 2018.
* [18] J. W. Jiang, T. Lan, S. Ha, M. Chen, and M. Chiang, “Joint VM placement and routing for data center traffic engineering,” in _2012 Proceedings IEEE INFOCOM_. IEEE, 2012, pp. 2876–2880.
* [19] B. Jennings and R. Stadler, “Resource management in clouds: Survey and research challenges,” _Journal of Network and Systems Management_ , vol. 23, no. 3, pp. 567–619, 2015.
* [20] X. Sun and N. Ansari, “Primal: Profit maximization avatar placement for mobile edge computing,” in _2016 IEEE International Conference on Communications (ICC)_. IEEE, 2016, pp. 1–6.
* [21] A. V. Goldberg and J. D. Hartline, “Competitive auctions for multiple digital goods,” in _European Symposium on Algorithms_. Springer, 2001, pp. 416–427.
* [22] T. Roughgarden, “Algorithmic game theory,” _Communications of the ACM_ , vol. 53, no. 7, pp. 78–86, 2010.
* [23] D. P. Palomar and M. Chiang, “A tutorial on decomposition methods for network utility maximization,” _IEEE Journal on Selected Areas in Communications_ , vol. 24, no. 8, pp. 1439–1451, Aug. 2006.
* [24] S. Boyd, S. P. Boyd, and L. Vandenberghe, _Convex optimization_. Cambridge university press, 2004.
* [25] M. Chen, S. C. Liew, Z. Shao, and C. Kai, “Markov approximation for combinatorial network optimization,” _IEEE transactions on information theory_ , vol. 59, no. 10, pp. 6301–6327, 2013.
* [26] P. Diaconis and D. Stroock, “Geometric bounds for eigenvalues of markov chains,” _The Annals of Applied Probability_ , pp. 36–61, 1991.
* [27] S. Zhang, Z. Shao, M. Chen, and L. Jiang, “Optimal distributed P2P streaming under node degree bounds,” _IEEE/ACM Transactions on Networking_ , vol. 22, no. 3, pp. 717–730, 2013.
11institutetext: A. Zhigljavsky 22institutetext: School of Mathematics,
Cardiff University, Cardiff, CF24 4AG, UK
22email<EMAIL_ADDRESS>33institutetext: J. Noonan
44institutetext: School of Mathematics, Cardiff University, Cardiff, CF24 4AG,
UK
44email<EMAIL_ADDRESS>
# Random and quasi-random designs in group testing
Jack Noonan · Anatoly Zhigljavsky (Corresponding Author)
###### Abstract
For large classes of group testing problems, we derive lower bounds for the
probability that all significant items are uniquely identified using specially
constructed random designs. These bounds allow us to optimize parameters of
the randomization schemes. We also suggest and numerically justify a procedure
of constructing designs with better separability properties than pure random
designs. We illustrate theoretical considerations with a large simulation-
based study. This study indicates, in particular, that in the case of the
common binary group testing, the suggested families of designs have better
separability than the popular designs constructed from disjunct matrices. We
also derive several asymptotic expansions and discuss the situations when the
resulting approximations achieve high accuracy.
## 1 Introduction
Assume that there are $n$ items (units, elements, variables, factors, etc.)
$a_{1},\ldots,a_{n}$ with some of them defective (significant, important,
etc.). The problem of group testing (also known as “pooling” or “factor
screening”) is to determine the defective items by testing a certain number of
test groups $X_{j}$. A design ${D}_{N}=\\{X_{1},\ldots,X_{N}\\}$ is a
collection of $N$ test groups. We assume that all test groups
$X_{j}\in{D}_{N}$ belong to some set ${\cal D}$ containing certain subsets of
the set ${\cal A}=\\{a_{1},\ldots,a_{n}\\}.$ The set ${\cal D}\subseteq
2^{\cal A}$ will be called the design set.
The group testing problems differ in the following aspects:
1. (i)
assumptions concerning the occurrence of defective items;
2. (ii)
assumptions on admissible designs;
3. (iii)
forms of the test function which provides observation results;
4. (iv)
assumptions on the number of allowed wrong answers (lies);
5. (v)
definitions of the problem solution.
The group testing problems considered in this paper are specified by the
following properties.
1. (i)
As the main special case, we assume that there are exactly $d$ defective items
with $0\\!<\\!d\\!\leq\\!n$. Many statements, however, are formulated for the
very general models defined by prior distributions for the number of defective
items, see Section 2.5. Moreover, a few results (e.g. Theorem 3.2 and the
third points of Corollary 2 and Corollary 3) cover the problem of finding
defectives in the so-called binomial sample, where the events “item $a_{i}$ is
defective” are independent and have the same prior probability.
2. (ii)
We only consider non-adaptive designs
${D}_{N}=\\{X_{1},\ldots,X_{N}\\}\subset{\cal D}$. As the principal case, we
consider the design sets ${\cal D}$, which contain the test groups $X$
consisting of exactly $s$ items with suitable $s$, see Section 2.5; such
designs are normally called constant-row-weight designs, see Section 4.3. For
brevity, we will call these designs simply constant-weight designs. The
constant-weight random designs seem to be marginally more efficient than
Bernoulli designs, in which, to build each test group $X_{j}\in{D}_{N}$, every
item is included in $X_{j}$ independently with a given probability; see
Section 3.4 and Section 4.1.
3. (iii)
Let $T\subset{\cal A}$ denote an unknown collection of defective items and
$X\subset{\cal A}$ be a test group. We consider group testing models where the
observation result for given $X$ and $T$ is
$\displaystyle f_{{h}}(X,T){=}\min\\{{h},|X\cap T|\\}\,,$ (1.1)
where $|\cdot|$ stands for the number of elements in a discrete set and ${h}$
is a positive integer. In the most important special case of binary (or
disjunctive) model, ${h}=1$. In this model, by inspecting a group
$X\subset{\cal A}$ we receive 1 if there is at least one defective item in $X$
and 0 otherwise. In the additive (or “adder”, in the terminology of D’yachkov
(2014)) model, $f_{\infty}(X,T)=|X\cap T|$ so that we choose ${h}=\infty$; in
fact, any number between $n$ and $\infty$ can be chosen as ${h}$. (In the
additive model, after inspecting a group $X$ we receive the number of
defectives in $X$.) In the so-called multiaccess channel model, ${h}=2$. The
three special cases are illustrated in the short sketch after this list.
4. (iv)
In the main body of the paper, we assume that the test results are noiseless
(or error-free). In Section 2.3 we show how most of our results can be
extended to the case of noisy testing, where up to $L$ lies (wrong answers,
errors) are allowed. Moreover, in Section 4.7 some specific results are
specialized for the important case of binary group testing with lies.
5. (v)
As a rule, we are not interested in designs that provide a 100% guarantee
that all defective items are correctly identified (in the group testing
literature, this criterion is often referred to as the “zero-error probability
criterion” or “exact recovery”). Instead, we are interested in studying the
probability $1-\gamma$ that all defective items are discovered (for random
designs); the main theoretical contribution of this paper is the derivation of
lower bounds $1-\gamma^{*}$ for this probability. When it suffices to recover
the defective set with high probability, we are considering the small error
probability criterion. Moreover, in Section 4.3 we propose designs that seem
to provide very high values of $1-\gamma$, even in comparison to the designs
constructed from suitable disjunct matrices, see Tables 10 and 11 in Section
4.5.
Group testing is a well established area and has attracted significant
attention of specialists in optimum design, combinatorics, information theory
and discrete search. The origins of group testing can be traced back to the
paper Dorfman (1943) devoted to adaptive procedures of blood testing for
detection of syphilitic men. Since then, the field of group testing has seen
significant developments with extensive literature and numerous books
dedicated to the field. The textbooks Du and Hwang (2000, 2006) and lecture
notes D’yachkov (2014) provide a background on group testing especially for
zero-error non-adaptive problems. An excellent introduction and summary of
recent developments in group testing and its connection to information theory
can be found in Aldridge et al. (2019). The group testing problem in the
binomial sample is especially popular in the group testing literature, see
Aldridge et al. (2019); Sobel and Groll (1959); Torney et al. (1998).
Research in group testing often concentrates around the following important
areas:
(a)
construction of efficient designs (both adaptive and non-adaptive);
(b)
studying properties of different families of designs;
(c)
derivation of upper and lower bounds for the lengths of designs providing
either exact or weak recovery of the defective items;
(d)
extension of results in the noiseless setting for the case of noisy group
testing;
(e)
construction of efficient decoding procedures to locate the defective items
(given a design).
In this paper, we touch upon all the above areas. In particular:
(a)
in Section 4.3 we develop a procedure of construction of a sequence of nested
nearly doubly regular designs $D_{1},D_{2},\ldots$ which, for all $N$, have
large Hamming distances between all pairs $X_{i},X_{j}\in D_{N}$ $(i\neq j)$
and, as a consequence, excellent separability properties (this is confirmed by
a numerical study described in Sections 4.3 and 4.5);
(b)
one of the main purposes of the paper is an extensive study of the probability
of recovery of defective items for constant-weight random designs (both in
non-asymptotic and asymptotic regimes);
(c)
as explained in Remark 1 of Section 2.4, most results on the probability of
recovery of defective items can be reformulated as existence theorems of
deterministic designs providing weak recovery; moreover, in Sections 5.2 and
5.3 we derive asymptotic upper bounds for the lengths of deterministic designs
providing exact recovery;
(d)
in Sections 2.3, 4.7 and 5.5 we show how most important results obtained in
the noiseless setting can be extended for the noisy group testing when up to
$L$ lies are allowed;
(e)
in Section 4.6 we numerically demonstrate that the so-called Combinatorial
Orthogonal Matching Pursuit (COMP) decoding procedure alone could be very
inefficient; see Section 4.5 for the definition of the COMP procedure.
Existence theorems for group testing problems were extensively studied in
Russian literature by M.B. Malutov, A.G. Dyachkov, V.V. Rykov and other
representatives of the Moscow probability school, see e.g. D’yachkov and Rykov
(1983); Tsybakov et al. (1983). The construction of upper bounds for the
length of optimal zero-error designs in the binary group testing model has
attracted significant attention; see Du and Hwang (2000) for a good survey. In
the papers Katona and Srivastava (1983); Macula (1997a); Macula and Reuter
(1998), the construction schemes of group testing designs in important
specific cases, including the case of the binary model with two and, more
generally, $d$ defectives, are studied. Using probabilistic arguments,
existence theorems for designs under the zero-error criterion for the additive
model have been thoroughly studied in Zhigljavsky and Zabalkanskaya (1996).
Motivated by the results of Zhigljavsky and Zabalkanskaya (1996), in
Zhigljavsky (2003) expressions for the binary model were derived under the
zero-error and small-error criteria. The results of Zhigljavsky (2003)
provided the inspiration for this paper. Note that there is a limited number
of results on construction of optimal algorithms for finding one, two or three
defectives in search with lies, see e.g. De Bonis et al. (1997); Hill and
Karim (1992); Macula (1997b). Some asymptotic expansions in existence theorems
for general group testing problems have been derived in Zhigljavsky (2010).
In the majority of papers devoted to construction of designs for the non-
adaptive binary group testing problem, the designs are built from the so-
called disjunct matrices, which are defined in Section 4.5. Moreover, the COMP
decoding procedure (according to COMP, all items in a negative test are
identified as non-defective whereas all remaining items are identified as
potentially defective, see Section 4.5) is often used for identification of
the set of defective items; see e.g. a popular paper Chan et al. (2014) and a
survey on non-adaptive group testing algorithms through the point of view of
decoding of test results Chen and Hwang (2008). Despite common claims, as
explained in Sections 4.5 and 4.6, the designs based on the use of disjunct
matrices are inefficient and the COMP decoding procedure alone leads to poor
decoding.
In the asymptotic considerations, we assume that the number of defective items
is small relative to the total number of items $n$; that is, we consider a
very sparse regime. Many results can be generalized to a sparse regime when
$d$ slowly increases with $n$ but $d/n\to 0$ as $n\to\infty$. There is a big
difference between the asymptotic results in the sparse regime and results in
the case when $d/n\to{\rm const}>0$ as $n\to\infty$. In particular, in view of
Cantor and Mills (1966); Erdős and Rényi (1963); Lindström (1964, 1975), where
the non-adaptive group testing problem for the additive model is considered
with no constraints on either the test groups or the number of defective items,
the minimal length of the non-adaptive strategies that guarantee detection of
all defective items satisfies $N\sim{2n}/{\log_{2}n}$ as $n\rightarrow\infty$. For
fixed $d$, the best known explicit constructions of designs come from number
theory Bose and Chowla (1962); Lindström (1969) and are closely related to the
concept of Bose-Chaudhuri-Hocquenghem codes. For these constructions it is
shown that $N\leq d\log_{2}n\,(1+o(1))$ tests suffice. For $d\geq 3$, the
best currently known construction is with $N\leq
4d\log_{2}n/\log_{2}d(1+o(1))$ and can be obtained from results of D’yachkov
and Rykov (1981); Poltyrev (1987). This construction is based on random coding
and is shown to be order-optimal.
In the very sparse regime with $d$ constant and $n\rightarrow\infty$, the best
known upper bound for the length of zero-error designs in the binary group
testing problem has been derived in Dyachkov et al. (1989), see also Theorem
7.2.15 in Du and Hwang (2000): $N\leq\frac{1}{2}{dc_{d}}(1+o(1))\log_{2}n$,
where
$1/{c_{d}}=\max\limits_{0\leq q\leq 1}\max\limits_{0\leq Q\leq
1}\left\\{-(1-Q)\log_{2}(1-q^{d})+d\left[Q\log_{2}\frac{q}{Q}+(1-Q)\log_{2}\frac{1-q}{1-Q}\right]\right\\}$
and $c_{d}={d\log_{2}e}(1+o(1))\;\mbox{as}\;d\rightarrow\infty.$
Asymptotically, when both $n$ and $d$ are large, this is a marginally better
bound than the asymptotic bound
$\displaystyle N\leq N_{*}(n,d)\sim\frac{e}{2}d^{2}\log
n\,,\;\;n\rightarrow\infty,\;d=d(n)\rightarrow\infty,\;d(n)/n\rightarrow 0\,,$
which has been derived in D’yachkov and Rykov (1983) by the probabilistic
method based on the use of the Bernoulli design. Exactly the same upper bound
can be obtained using random constant-weight designs, see Corollary 5.2 in
Zhigljavsky (2003). Development of existential (upper) bounds for group
testing designs for binary group testing has been complemented by
establishing various lower bounds; for comparison of the lower and upper
bounds, see the well-written Section 7.2 of Du and Hwang (2000).
Primarily for the binary model, notable contributions in recent years are as
follows. In Aldridge et al. (2014), the authors consider the nonadaptive
noiseless group testing problem using Bernoulli designs and
describe a number of algorithms used to locate the defective set after the
design has been constructed; one of these is the COMP procedure which will be
discussed in Section 4.5. For bounds on the number of tests when using
Bernoulli designs, also see Scarlett and Cevher (2016a, b). In Aldridge et al.
(2016), instead of Bernoulli designs the authors consider designs where each
item is placed in a constant number of tests. The tests are chosen uniformly
at random with replacement, so the test matrix has (almost) constant column
weights; these terms will be fully explained in Section 4.3. The authors show
that application of the COMP detection algorithm with these constant-column-weight
designs significantly increases detection of the defective items in all
sparsity regimes. This (almost) constant-column-weight property will be
discussed further in Section 4.3 where it will be combined with a Hamming
distance constraint to improve the probability of separation. In Coja-Oghlan
et al. (2020a), for the randomised design construction discussed in Aldridge
et al. (2016), the authors provide a sharp bound on the number of tests
required to locate the defective items. In Coja-Oghlan et al. (2020b), the
authors consider existence bounds for both a test design and an efficient
algorithm that solve the group testing problem with high probability. In
Mézard and Toninelli (2011), the authors consider the binomial sample group
testing problem where each item is defective with probability $q$. The authors
construct a class of two-stage algorithms that reach the asymptotically
optimal value of $nq|\log(q)|$. The asymptotic bounds for the one-stage
(nonadaptive) setting for the binomial sample problem are studied in Mézard et
al. (2008).
This paper differs from the aforementioned papers in the following aspects:
(a) the majority of known theoretical results require large $n$ and only
numerical evidence is presented when $n$ is small; this paper, however,
provides rigorous results for any $n$ where many asymptotic results do not
apply; (b) the asymptotic expansions in this paper provide constants that have
crucial significance when $n$ is only moderately large (this additional
constant term is not present in many asymptotic results for group testing);
(c) many of the previously cited papers use decoding procedures that do not
guarantee identification of the defective set even if it is possible to locate
it. Procedures like COMP are fast to execute, and as previously mentioned,
with certain design constructions can in a large number of cases locate the
defective set. However, in this paper we will use decoding procedures that
will guarantee the location of the defective set if this is possible given the
design.
By requiring a given design to satisfy the constraint of being able to find
the defective items, we are considering an example of a (random) constraint
satisfaction problem (CSP). Many of the main advances of this paper can be
viewed as the careful counting of satisfying assignments for a CSP, where the
satisfying assignments can correspond to tests that are able to differentiate
between different subsets of ${\cal A}$. The techniques used in this paper are
related to approaches used in the random CSPs literature, see for instance
Zdeborová and Krzakala (2016). However group testing problems are very
specific and cannot simply be considered as a specific application of the
general CSP methodology.
The rest of the paper is organized as follows. In Section 2 we develop a
general methodology for deriving the lower bounds for $1-\gamma$, the
probability that all defective items are uniquely identifiable from test
results taken according to constant-weight random designs and establish
several important auxiliary results. In Section 3 we derive lower bounds for
$1-\gamma$ in a general group testing problem and consider the case of
additive model for discussing examples and numerical results. The more
practically important case of the binary model is treated in Section 4.
Section 5 is devoted to asymptotic existence bounds and construction of
accurate approximations. In Appendix A we provide some proofs and in Appendix
B we formally describe the algorithm of Section 4.3. Let us consider the
content of Sections 2, 3, 4 and 5 in more detail.
In Section 2.1 we discuss general discrete search problems. In Section 2.2 we
develop the general framework for derivation of the upper bounds $\gamma^{*}$
for $\gamma$, the probability that for a random design all defective items
cannot be recovered; the main result is formulated as Theorem 2.1. Theorem 2.2
of Section 2.3 extends Theorem 2.1 to the case when some of $N$ test results
are allowed to be wrong (the case of lies). In Section 2.4 we show how many of
our results can be reformulated in terms of existence bounds in the cases of
weak and exact recovery. In Section 2.5 we consider different assumptions on
the occurrence of defective items and the randomisation schemes used for the
construction of the randomized designs. In Sections 2.6 and 2.7 we formulate
two important combinatorial results, Lemmas 2 and 3.
In Section 3.1 we derive upper bounds $\gamma^{*}$ for $\gamma$ for a general
test function (1.1) in the most important case ${\cal D}={\cal P}_{n}^{s}$;
that is, when all $X_{i}\in{D}_{N}$ have exactly $s$ items (see (2.14) for the
formal definition of ${\cal P}_{n}^{s}$). In Section 3.2 we specialize the
general results of Section 3.1 to a relatively easy case of the additive model
and consider special instances of the information about the defective items
including the binomial sample case, see Corollary 2. In Section
3.3 we provide some results of simulation studies for the additive model. In
Section 3.4 we show how to extend the results established for the case ${\cal
D}={\cal P}_{n}^{s}$ to cover other randomization schemes for choosing the
groups of items $X_{i}$ including the case of Bernoulli designs.
In Section 4.1 we provide a collection of upper bounds $\gamma^{*}$ for
$\gamma$ for different instances of the binary model. All results formulated
in this section follow from general results and specific considerations of
Sections 3.1 and 3.4. In Section 4.2, we illustrate some of the theoretical
results formulated in Section 4.1 by results of simulation studies. In Section
4.3 we develop a procedure for construction of a sequence of nested nearly
doubly regular designs $D_{1},D_{2},\ldots$ which, for all $N$, have large
Hamming distances between all pairs $X_{i},X_{j}\in D_{N}$ $(i\neq j)$. With
the help of numerical studies we also demonstrate excellent separability
properties of the resulting designs. In Section 4.4 we apply the technique of
Section 4.3 and numerically demonstrate that indeed the resulting designs
provide a superior separability relative to random designs. In Section 4.5 we
numerically compare random designs, the improved random designs of Section 4.3 and the very
popular designs constructed from the disjunct matrices. In particular, we find
that improved random designs have a better separability than the designs
constructed from the disjunct matrices, see Tables 10 and 11. In Section 4.6
we discuss the (in)efficiency of the COMP decoding procedure. In Section 4.7
some specific upper bounds are specialized for the binary group testing with
lies; simulation results are provided to illustrate theoretical bounds.
In Section 5.1 we describe the technique used to transform finite-$n$ results
into the asymptotic expansions. A very important feature of the developed
expansions is that in the very-sparse regime we have explicit expressions for
the constant term, additionally to the main term involving $\log n$. Sections
5.2, 5.3 and 5.4 we apply results of Section 5.1 respectively to the cases of
additive model (both exact and weak recoveries), binary model with exact
recovery and the binary model with weak recovery. Results of these sections
clearly demonstrate the following: (a) weak recovery is much simpler than
exact recovery, (b) the constant terms in the asymptotic expansions play an
absolutely crucial role if these expansions are used as approximations, and
(c) the resulting approximations have rather simple form and are very accurate
already for moderate values of $n$. Finally, in Section 5.5 we discuss a
technique of transforming the asymptotic upper bounds for $N$ for noise-free
group testing problems into upper bounds for $N$ in the same model when up to
$L$ lies are allowed.
## 2 General discrete search problem, random designs and the probability of
solving the problem
### 2.1 Problem statement
We consider the group testing problems from the general point of view of
discrete search. Following O’Geran et al. (1991) a discrete search problem can
often be specified as a quadruple $\\{{\cal T},{\cal D},f,{\cal Y}\\}$, where
${\cal T}=\\{T\\}$ is a target set, which is an ordered collection of all
possible targets $T$, ${\cal D}=\\{X\\}$ is a design set, a collection of all
allowed test groups $X$, and $f:{\cal D}\times{\cal T}\rightarrow{\cal Y}$ is
a test function mapping ${\cal D}\times{\cal T}$ to ${\cal Y}$, the set of all
possible outcomes of a single test. In group testing, the targets $T$ are
allowed collections of defective items and a value $f(X,T)$ for fixed
$X\in{\cal D}$ and $T\in{\cal T}$ is a test result at the test group $X$ under
the assumption that the unknown target is $T$. For a pair of targets
$T_{i},T_{j}\in{\cal T}$, we say that $X\in{\cal D}$ separates $T_{i}$ and
$T_{j}$ if $f(X,T_{i})\neq f(X,T_{j})$. We say that a design
${D}_{N}=\\{X_{1},\ldots,X_{N}\\}$ separates $T\in{\cal T}$ if for any
$T^{\prime}\in{\cal T}$, such that $T^{\prime}\neq T$, there exists a test
group $X\in{D}_{N}$ separating the pair $(T,T^{\prime})$. We only consider
solvable search problems where each $T\in{\cal T}$ can be uniquely identified
from test results at all $X\in{\cal D}$.
In this paper, we are interested in studying properties of random designs for
solving group testing problems. Let ${\mathbb{R}}$ and ${\mathbb{Q}}$ be
distributions on ${\cal D}$ and ${\cal T}$ respectively. Let
${D}_{N}=\\{X_{1},\ldots,X_{N}\\}$ be a random $N$-point design with mutually
independent and ${\mathbb{R}}$-distributed test groups
$X_{i}\,\,(i=1,\ldots,N)$ and let $T\in{\cal T}$ be a
${\mathbb{Q}}$-distributed random target. For a random $N$-point design
${D}_{N}$, we are interested in estimating the value of
$\gamma=\gamma({\mathbb{Q}},{\mathbb{R}},N)$ such that
$\displaystyle{{\rm Pr}_{{\mathbb{Q}},{\mathbb{R}}}\\{T\textrm{ is separated
by }{D}_{N}\\}}=1-\gamma\,.$ (2.1)
Unless the problem is very easy (and hence of little practical interest), the
intractable nature of the l.h.s. in (2.1) makes it impossible to compute
$\gamma$ explicitly.
One of the main aims of this paper is the derivation of explicit upper bounds
$\gamma^{*}=\gamma^{*}({\mathbb{Q}},{\mathbb{R}},N)$ for $\gamma$ so that
$\displaystyle{{\rm Pr}_{{\mathbb{Q}},{\mathbb{R}}}\\{T\textrm{ is separated
by }{D}_{N}\\}}\geq 1-\gamma^{*}\,.$ (2.2)
This will allow us to state that a random design ${D}_{N}$ solves the group
testing problem with probability at least $1-\gamma^{*}$.
Another way of interpreting the results of the form (2.2) is as follows. For a
given search problem $\\{{\cal T},{\cal D},f,{\cal Y}\\}$, an algorithm of
generating the test groups $X_{1},X_{2},\ldots$ and $\gamma\in(0,1)$, define
$N_{\gamma}$ to be the smallest integer $N$ such that
$\displaystyle{{\rm Pr}_{{\mathbb{Q}},{\mathbb{R}}}\\{T\textrm{ is separated
by }{D}_{N}\\}}\geq 1-\gamma\,,$ (2.3)
where the probability is taken over randomness in $T$ and $X_{1},X_{2},\ldots$
Computation of the exact value of $\gamma$ is a very difficult problem.
However, as formulated in the following lemma, the ability of computing any
upper bound $\gamma^{*}=\gamma^{*}(N)$ for $\gamma$ in (2.3) implies the
possibility of derivation of the corresponding upper bound for $N_{\gamma}$.
###### Lemma 1
Let $\\{{\cal T},{\cal D},f,{\cal Y}\\}$ be a solvable discrete search problem
with random $T$, $X_{1},X_{2},\ldots$ be a sequence of test groups
$X_{i}\in{\cal D}$ and $\gamma^{*}=\gamma^{*}(N)$ be an upper bound for
$\gamma$ in (2.3) for a design $D_{N}=\\{X_{1},\ldots,X_{N}\\}$. Then for any
$0\\!<\\!\gamma\\!<\\!1$, (2.3) is satisfied for any $N\geq N_{\gamma}$ where
$\displaystyle N_{\gamma}:=\min\Biggl\\{N=1,2,\ldots:\gamma^{*}(N)<\gamma\Biggr\\}\,.$ (2.4)
###### Remark 1
Even if the test groups $X_{1},X_{2},\ldots$ leading to (2.3) are random, from
formula (2.3) with $N=N_{\gamma}$ we deduce that there exists a deterministic
design ${D}_{N}=\\{X_{1},\ldots,X_{N}\\}$ with $N\leq N_{\gamma}$ such that
(2.3) holds, where the probability in (2.3) is taken over ${\mathbb{Q}}$
(random $T$) only. This follows from the discreteness of the space of all
$N$-point designs and from the fact that the expectation, over the random
design, of the indicator of the event “$T\textrm{ is separated by }{D}_{N}$”
equals the l.h.s. in (2.3).
### 2.2 A general technique for derivation of upper bounds
$\gamma^{*}=\gamma^{*}({\mathbb{Q}},{\mathbb{R}},N)$ for $\gamma$
For fixed $T_{i}$ and $T_{j}\in{\cal T}$, let
$\displaystyle p_{ij}=\mbox{Pr}_{{\mathbb{R}}}\\{f(X,T_{i})=f(X,T_{j})\\}\,$
(2.5)
be the probability that the targets $T_{i}$ and $T_{j}$ are not separated by
one random test $X\in{\cal D}$, which is distributed according to
${\mathbb{R}}$. The following theorem is a straightforward application of the
union bound.
###### Theorem 2.1
Let $\\{{\cal T},{\cal D},f,{\cal Y}\\}$ be a solvable discrete search problem
with ${\mathbb{R}}$ and ${\mathbb{Q}}$ being any distributions on ${\cal D}$
and ${\cal T}$ respectively. For a fixed $N\geq 1$, let
${D}_{N}=\\{X_{1},\ldots,X_{N}\\}$ be a random $N$-point design with each
$X_{i}\in{D}_{N}$ chosen independently and ${\mathbb{R}}$-distributed. Then
for $\gamma=\gamma({\mathbb{Q}},{\mathbb{R}},N)$ of (2.1), we have
$\gamma({\mathbb{Q}},{\mathbb{R}},N)\leq\gamma^{*}({\mathbb{Q}},{\mathbb{R}},N)$
with
$\displaystyle\gamma^{*}({\mathbb{Q}},{\mathbb{R}},N)=\sum_{i=1}^{|{\cal
T}|}{\rm Pr}_{\mathbb{Q}}\\{T=T_{i}\\}\sum_{j\neq i}p_{ij}^{N}\,.$ (2.6)
Proof. By applying the union bound, the probability that $T_{i}$ is not
separated from at least one $T_{j}\in{\cal T}$ $(T_{j}\neq T_{i})$ after $N$
random tests is less than or equal to $\sum_{j\neq i}\left(p_{ij}\right)^{N}$
and we thus have $1-\sum_{j\neq i}\left(p_{ij}\right)^{N}$ as a lower bound
for the probability that $T_{i}$ is separated from all other $T_{j}\in{\cal
T}$. Averaging over $T_{i}$ we obtain
$\displaystyle{{\rm Pr}_{{\mathbb{Q}},{\mathbb{R}}}\\{T\textrm{ is separated
by }{D}_{N}\\}}$ $\displaystyle=$ $\displaystyle\sum_{i=1}^{|{\cal T}|}{\rm
Pr}_{{\mathbb{R}}}\\{T_{i}\textrm{ is separated by }{D}_{N}\\}{\rm
Pr}_{\mathbb{Q}}\\{T=T_{i}\\}$ $\displaystyle\geq$ $\displaystyle
1-\sum_{i=1}^{|{\cal T}|}{\rm Pr}_{\mathbb{Q}}\\{T=T_{i}\\}\sum_{j\neq
i}p_{ij}^{N}=1-\gamma^{*}({\mathbb{Q}},{\mathbb{R}},N)\,.$
The statement of the theorem follows. $\Box$
For the very common scenario when ${\mathbb{Q}}$ is uniform on ${\cal T}$,
that is, ${\rm Pr}_{\mathbb{Q}}\\{T=T_{i}\\}=1/|{\cal T}|$ for all
$i=1,\ldots,|{\cal T}|$, the formula (2.6) for
$\gamma^{*}({\mathbb{Q}},{\mathbb{R}},N)$ simplifies to
$\displaystyle\gamma^{*}({\mathbb{Q}},{\mathbb{R}},N)=\frac{2}{|{\cal
T}|}\sum_{i=1}^{|{\cal T}|}\sum_{j=1}^{i-1}p_{ij}^{N}\,.$ (2.7)
Note also that in order to apply the upper bound (2.6), the test function
$f(X,T)$ does not have to be of the form (1.1). Indeed, this bound can be used
for many discrete search problems of different nature from group testing; in
particular, for solving the “Mastermind” game O’Geran et al. (1993).
### 2.3 Extension to the case when several lies (errors) are allowed
Assume the so-called $L$-lie search problem, where up to $L$ test results
$Y(X_{j},T)$ at some $X_{j}\in{D}_{N}=\\{X_{1},\ldots,X_{N}\\}$ may differ
from $f(X_{j},T)$. For a random $N$-point design ${D}_{N}$ we are interested
in bounding the value of $\gamma$, $0<\gamma<1$, such that
$\displaystyle{\rm Pr}_{{\mathbb{Q}},{\mathbb{R}}}\\{T\textrm{ can be uniquely
identified by }{D}_{N}\textrm{ with at most $L$ lies}\\}=1-\gamma\,.$
An important observation is that if a non-adaptive design
${D}_{N}=\\{X_{1},\ldots,X_{N}\\}$ is applied in a general $L$-lie search
problem, then one can guarantee that the target can be uniquely identified if
and only if the two vectors $F_{T}=(f(X_{1},T),\ldots,f(X_{N},T))$ and
$F_{T^{\prime}}=(f(X_{1},T^{\prime}),\ldots,f(X_{N},T^{\prime}))$ differ in at
least $2L+1$ components where $(T,T^{\prime})$ is any pair of different
targets in ${\cal T}$. That is, a target $T\in{\cal T}$ can be uniquely
identified if and only if for all $T^{\prime}\in{\cal T}\setminus\\{T\\}$
$\displaystyle d_{H}(F_{T},F_{T^{\prime}})\geq 2L+1\,,$ (2.8)
where $d_{H}(a,a^{\prime})$ is the Hamming distance between two $n$-vectors
$a$ and $a^{\prime}$; that is, the number of components of $a$ and
$a^{\prime}$ that are different.
The following statement is a generalization of Theorem 2.1 to the case of
$L$-lie search problem.
###### Theorem 2.2
Let $\\{{\cal T},{\cal D},f,{\cal Y}\\}$ be a solvable $L$-lie search problem
with ${\mathbb{R}}$ and ${\mathbb{Q}}$ being any distributions on ${\cal D}$
and ${\cal T}$ respectively. For a fixed $N\geq 1$, let
${D}_{N}=\\{X_{1},\ldots,X_{N}\\}$ be a random $N$-point design with each
$X_{i}\in{D}_{N}$ chosen independently and ${\mathbb{R}}$-distributed. Then
$\displaystyle\gamma^{*}({\mathbb{Q}},{\mathbb{R}},N)=\sum_{i=1}^{|{\cal
T}|}{\rm Pr}_{{\mathbb{Q}}}\\{T=T_{i}\\}\ \sum_{j\neq
i}\sum_{l=0}^{2L}{{N}\choose{l}}\left(p_{ij}\right)^{N-l}\left(1-p_{ij}\right)^{l}\,.$
(2.9)
Proof of Theorem 2.2 can be found in Appendix A. Theorem 2.2 can be seen as a
generalisation of Theorem 9 of Zhigljavsky (2003). Note that in the most
important case when ${\mathbb{Q}}$ is uniform on ${\cal T}$, (2.9) becomes
$\displaystyle\gamma^{*}({\mathbb{Q}},{\mathbb{R}},N)=\frac{2}{|{\cal
T}|}\sum_{i=2}^{|{\cal
T}|}\sum_{j=1}^{i-1}\sum_{l=0}^{2L}{{N}\choose{l}}\left(p_{ij}\right)^{N-l}\left(1-p_{ij}\right)^{l}\,.$
(2.10)
One can consider a version of the $L$-lie search problem where all wrong
answers are the same; that is, the wrong results are equal to some $y\in{\cal
Y}$, and this value $y$ can be obtained by correct answers as well. This
problem is a little simpler than the general $L$-lie problem and in this
problem it is enough to ensure $d_{H}(F_{T},F_{T^{\prime}})\geq L+1$
rather than (2.8), to guarantee the unique identification of the
defective set. For this problem the upper bound is:
$\displaystyle\gamma^{*}({\mathbb{Q}},{\mathbb{R}},N)=\sum_{i=1}^{|{\cal
T}|}{\rm Pr}_{{\mathbb{Q}}}\\{T=T_{i}\\}\ \sum_{j\neq
i}\sum_{l=0}^{L}{{N}\choose{l}}\left(p_{ij}\right)^{N-l}\left(1-p_{ij}\right)^{l}\,.$
(2.11)
For several setups of the group testing problem, we will derive closed-form
expressions for $p_{ij}$; we therefore can easily compute the upper bounds
(2.9) and (2.11) for the corresponding $L$-lie group testing problems as well.
These bounds will be very similar to the ones formulated for problems with no
lies but with an extra summation in the right-hand side.
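As an illustration of how the extra summation enters, a sketch of the bound (2.9) in plain Python is given below (as before, the matrix of pairwise probabilities $p_{ij}$ is assumed to be available); replacing $2L$ by $L$ in the inner range gives the bound (2.11) for the variant where all wrong answers coincide.

```python
from math import comb

def gamma_star_lies(P, prior, N, L):
    """The upper bound (2.9); P[i][j] = p_ij, prior[i] = Pr_Q{T = T_i}."""
    total = 0.0
    for i, row in enumerate(P):
        inner = 0.0
        for j, p in enumerate(row):
            if j == i:
                continue
            # Binomial sum over the number l of tests on which the two
            # response vectors are allowed to differ (up to 2L of them).
            inner += sum(comb(N, l) * p ** (N - l) * (1 - p) ** l
                         for l in range(2 * L + 1))
        total += prior[i] * inner
    return total
```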
### 2.4 Existence bounds in the cases of weak and exact recovery
As was noted in Remark 1, $N_{\gamma}$ of (2.4) has the following
interpretation as an existence bound in the case of weak recovery: for a given
$\gamma\in(0,1)$ and any $N\geq N_{\gamma}$, there exist deterministic designs
$D_{N}$ such that ${{\rm Pr}_{{\mathbb{Q}}}\\{T\textrm{ is separated by
}{D}_{N}\\}}\geq 1-\gamma$. In the most important case when ${\mathbb{Q}}$ is
uniform on ${\cal T}$, in view of (2.7), we can write the existence bound
$N_{\gamma}$ of (2.4) as
$\displaystyle N_{\gamma}=\min\Biggl\\{N=1,2,\ldots:\sum_{i=2}^{|{\cal T}|}\sum_{j=1}^{i-1}p_{ij}^{N}<\frac{\gamma|{\cal T}|}{2}\Biggr\\}\,.$ (2.12)
In case of exact recovery, we need to separate all possible pairs
$(T,T^{\prime})\in{\cal T}\times{\cal T}$. Let, as in Theorem 2.1,
${D}_{N}=\\{X_{1},\ldots,X_{N}\\}$ be a random $N$-point design with
independent ${\mathbb{R}}$-distributed test groups $X_{i}$. By the union
bound, similarly to the proof of Theorem 2.1, the probability that at least
one pair $(T,T^{\prime})\in{\cal T}\times{\cal T}$ is not separated by
$D_{N}$, is not larger than $\sum_{i=2}^{|{\cal
T}|}\sum_{j=1}^{i-1}p_{ij}^{N}$. If this expression is smaller than 1, then,
by the discreteness of ${\cal T}$, there is at least one deterministic design
${D}_{N}=\\{X_{1},\ldots,X_{N}\\}$ separating all $(T,T^{\prime})\in{\cal
T}\times{\cal T}$. The smallest $N$ when this happens is
$\displaystyle N_{0}:=\min\Biggl\\{N=1,2,\ldots:\sum_{i=2}^{|{\cal T}|}\sum_{j=1}^{i-1}p_{ij}^{N}<1\Biggr\\}\,$ (2.13)
and for all $N\geq N_{0}$ there exist deterministic designs $D_{N}$
guaranteeing unique identification of the unknown target $T\in{\cal T}$.
By comparing (2.12) and (2.13) we observe that if we set $\gamma=2/|{\cal T}|$
then $N_{\gamma}$ and $N_{0}$ coincide so we might suggest that $N_{0}$ is the
limit of $N_{\gamma}$ as $\gamma\to 0$. This intuition rarely works, however,
as in typical group testing problems values of $|{\cal T}|$ are astronomically
large but values of $\gamma$ are simply small. As we demonstrate in several
subsections of Section 5, weak recovery is indeed a much simpler problem than
exact recovery, at least in the case of fixed $\gamma>0$.
Assume now that up to $L$ lies are allowed. Similarly to (2.13) and using
(2.10), we deduce that there are deterministic designs $D_{N}$ guaranteeing
unique identification of the unknown target $T\in{\cal T}$ if $N\geq N_{0,L}$
where
$\displaystyle N_{0,L}:=\min\Biggl\\{N=2L,2L+1,\ldots:\sum_{i=2}^{|{\cal T}|}\sum_{j=1}^{i-1}\sum_{l=0}^{2L}{{N}\choose{l}}\left(p_{ij}\right)^{N-l}\left(1-p_{ij}\right)^{l}<1\Biggr\\}\,.$
### 2.5 Typical target and design sets and assumptions on the randomisation
schemes ${\mathbb{Q}}$ and ${\mathbb{R}}$ in group testing
In group testing problems, the target set ${\cal T}$ has, as a rule, a very
particular structure considered below. Denote the collection of all subsets of
${\cal A}=\\{a_{1},\ldots,a_{n}\\}$ of length $k$ by ${\cal P}_{n}^{k}$:
$\displaystyle{\cal P}_{n}^{k}=\left\\{(a_{i_{1}},\dots,a_{i_{k}}),\;1\leq i_{1}<\dots<i_{k}\leq n\right\\}.$ (2.14)
The collection of groups of items containing $k$ items or less will be denoted
by ${\cal P}_{n}^{\leq k}=\bigcup_{j=0}^{k}{\cal P}_{n}^{j},$ where ${\cal
P}_{n}^{0}=\emptyset$. All target sets ${\cal T}$ considered in this paper
will have the form ${\cal T}=\cup_{j\in B}{\cal P}_{n}^{j}\,,$ where $B$ is a
subset of $\\{0,1,\ldots,n\\}.$ The main choices of $B$ are $B=\\{d\\}$ and
$B=\\{0,1,\ldots,d\\}$ for $1\leq d\leq n$; this corresponds to ${\cal
T}={\cal P}_{n}^{d}$ and ${\cal T}={\cal P}_{n}^{\leq d}$ respectively.
The distribution ${\mathbb{Q}}$ for $T\in{\cal T}$ defines the assumptions on
the occurrence of defective items. In a typical group testing setup,
${\mathbb{Q}}$ has the property of exchangeability; that is, symmetry with
respect to re-numeration of the items. We express this as follows. Let
${\mathbb{B}}$ be a probability distribution on $\\{0,1,\ldots,n\\}$ and $\xi$
be a ${\mathbb{B}}$-distributed random variable. Then for a
${\mathbb{Q}}$-distributed random target $T\in{\cal T}$ and any
$j\in\\{0,1,\ldots,n\\}$:
$\displaystyle\mbox{Pr}_{{\mathbb{Q}}}\\{|T|=j\\}=\mbox{Pr}_{\mathbb{B}}\\{\xi=j\\}\;{\rm
and}\;\mbox{Pr}_{{\mathbb{Q}}}\\{T=\textsf{T}\,|\,|T|={j}\\}=\left\\{\begin{array}[]{cc}1/{{n\choose{j}}}&{\rm
if}\;\;\textsf{T}\in{\cal P}_{n}^{j}\\\ 0&{\rm
otherwise}\end{array}\right.\,,$ (2.17)
where the term ${{n\choose{j}}}$ is the number of elements in ${\cal
P}_{n}^{j}.$ In the main two particular cases, when ${\cal T}={\cal
P}_{n}^{d}$ and ${\cal T}={\cal P}_{n}^{\leq d}$, the measure ${\mathbb{B}}$
is concentrated on the one-point set $\\{d\\}$ and on $\\{0,1,\ldots,d\\}$,
respectively. The assumption (2.17) can also be expressed as follows: $\forall
j\text{ and }\forall\textsf{T}\in{\cal P}_{n}^{j}$
$\displaystyle\mbox{Pr}_{{\mathbb{Q}}}\\{T=\textsf{T}\\}={\mbox{Pr}_{\mathbb{B}}\\{\xi=j\\}}/{{n\choose
j}}\,.$
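The description (2.17) translates directly into a two-stage sampling procedure for the random target: first draw $|T|$ from ${\mathbb{B}}$, then draw a uniform subset of that size. A minimal sketch (with items labelled $0,\ldots,n-1$ for convenience) follows.

```python
import random

def sample_target(n, B):
    """Draw T ~ Q of (2.17): |T| = j from B, then a uniform j-subset."""
    j = random.choices(range(n + 1), weights=B, k=1)[0]
    return frozenset(random.sample(range(n), j))

# T = P_n^d corresponds to B concentrated on the one-point set {d}:
n, d = 50, 3
B = [1.0 if j == d else 0.0 for j in range(n + 1)]
T = sample_target(n, B)      # a uniformly chosen 3-subset of {0, ..., 49}
```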
The main objective when choosing the design set ${\cal D}$ (as well as the
randomization scheme ${\mathbb{R}}$) is the efficiency of the resulting group
testing procedure. Bearing this in mind, we mostly use ${\cal D}={\cal
P}_{n}^{s}$ with suitable $s$. As a rule, this choice yields better bounds
than, say, the case ${\cal D}={\cal P}_{n}^{\leq s}$ with optimal $s$, or the
case of Bernoulli designs, where each item is included into a test group with
probability $p$, with optimal $p$; see Table 6.
For the main choice ${{\cal D}}={\cal P}_{n}^{s}$, we choose the distribution
${\mathbb{R}}$ to be the uniform on ${{\cal D}}$ so that
$\mbox{Pr}_{{\mathbb{R}}}\\{X=\textsf{X}\\}={1}/{{n\choose s}}$ for all
$\textsf{X}\in{\cal P}_{n}^{s}$. For this choice of ${\mathbb{R}}$, we can
rewrite the probabilities $p_{ij}$ of (2.5) as $p_{ij}={k_{ij}}/{|{{\cal
D}}|}={k_{ij}}/{{n\choose s}}\,,$ where
$\displaystyle k_{ij}=\left|\\{X\in{{\cal
D}}:\;f(X,T_{i})=f(X,T_{j})\\}\right|\;\;\;\;\;\mbox{\rm for
$\;\;T_{i},T_{j}\in{{\cal T}}$}\,.$
In accordance with O’Geran et al. (1991), these coefficients will be called
Rényi coefficients. As shown below, computation of these coefficients involves
some counting only.
### 2.6 An important auxiliary result
Consider integers $m,l$ and $p$ satisfying the conditions $0\leq p\leq m\leq
l\leq n$ and $p<l$. Denote
$\displaystyle{\cal T}(n,l,m,p)=\\{(T,T^{\prime})\in{\cal P}_{n}^{\leq
n}\times{\cal P}_{n}^{\leq n}:\;|T|=l,\;|T^{\prime}|=m,\;|T\cap
T^{\prime}|=p\\}\subset{\cal P}_{n}^{l}\times{\cal P}_{n}^{m}\,.\;\;\;$ (2.18)
Note that the condition $p<l$ guarantees that $T\neq T^{\prime}$ for all pairs
$(T,T^{\prime})\in{\cal T}(n,l,m,p).$ ${\cal T}(n,l,m,p)$ is simply the
collection of pairs of assignments $(T,T^{\prime})$ of defective items such
that $T$ contains $l$ defective items, $T^{\prime}$ contains $m$ defective
items and they have exactly $p$ defective items in common. Interpretation for
the numbers $l,m$ and $p$ is given on Figure 1 (left).
The following lemma allows computing the number of elements in the sets
(2.18).
###### Lemma 2
The number of different non-ordered pairs in ${\cal T}(n,l,m,p)$ equals
$\displaystyle
Q(n,l,m,p)=\left\\{\begin{array}[]{ll}{{n}\choose{\,p\;m-p\;l-p\;n-l-m+p\,}}&{\rm
if}\;m<l\\\ &\\\ \frac{1}{2}{{n}\choose{\,p\;m-p\;m-p\;n-2m+p\,}}&{\rm
if}\;m=l\,,\end{array}\right.\,$ (2.22)
where
$\displaystyle{{n}\choose{n_{1}\;n_{2}\dots
n_{k}}}=\left\\{\begin{array}[]{cc}\frac{n!}{n_{1}!n_{2}!\dots
n_{k}!}&\mbox{\rm if }n_{r}\geq 0,\;\sum_{r=1}^{k}n_{r}=n\\\ &\\\ 0&\mbox{\rm
if }\,\min\\{n_{1},\ldots,n_{k}\\}<0\\\ \end{array}\right.$
is the multinomial coefficient.
For the proof of (2.22), which only involves simple counting arguments, see
Theorem 4.1 in Zhigljavsky and Zabalkanskaya (1996). Note the coefficient
$\frac{1}{2}$ in (2.22) for the case $l=m$; it is related to the fact that
$Q(n,l,m,p)$ is the number of non-ordered pairs $(T,T^{\prime})$ in ${\cal
T}(n,l,m,p)$.
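Formula (2.22) involves nothing beyond multinomial coefficients and is easy to evaluate exactly; the sketch below does so and verifies, for one small instance, that the counts over $p=0,\ldots,d-1$ add up to the total number of non-ordered pairs of distinct $d$-subsets.

```python
from math import comb

def multinom(n, parts):
    """Multinomial coefficient with the zero convention of Lemma 2."""
    if any(k < 0 for k in parts) or sum(parts) != n:
        return 0
    out, rest = 1, n
    for k in parts:
        out *= comb(rest, k)
        rest -= k
    return out

def Q(n, l, m, p):
    """Number of non-ordered pairs in T(n,l,m,p), formula (2.22)."""
    if m < l:
        return multinom(n, [p, m - p, l - p, n - l - m + p])
    # For m = l the multinomial counts ordered pairs, hence the factor 1/2.
    return multinom(n, [p, m - p, m - p, n - 2 * m + p]) // 2

# Sanity check: distinct pairs of d-subsets, partitioned by |T ∩ T'| = p.
n, d = 10, 3
assert sum(Q(n, d, d, p) for p in range(d)) == comb(comb(n, d), 2)
```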
### 2.7 Balanced design sets
Let the design set ${\cal D}$ be ${\cal D}={\cal P}_{n}^{s}$ and
$(T,T^{\prime})\in{\cal T}(n,l,m,p)\,$ both fixed such that $T\neq T^{\prime}$
and $l,m,p$ satisfy $0\leq p\leq m\leq l\leq n$ and $p<l$. Define the quantity
$\displaystyle\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!R(n,l,m,p,u,v,r)=|\left\\{X\in{\cal
D}:\,|X\cap(T\backslash T^{\prime})|=u,\,|X\cap(T^{\prime}\backslash
T)|=v,\,|X\cap T\cap T^{\prime}|=r\right\\}|\,,$ (2.24)
where $u,v,r$ are some nonnegative integers. $R(n,l,m,p,u,v,r)$ is the number
of tests in ${\cal D}$ that contain $u$ defective items from $T\setminus
T^{\prime}$, $v$ defective items from $T^{\prime}\setminus T$ and $r$
defective items from $T\cap T^{\prime}$. Interpretation for the numbers $u,v$
and $r$ is given on Figure 1 (right).
Observe that the number $R(n,l,m,p,u,v,r)$ is non-zero only if
$\displaystyle 0\leq u\leq l-p,\;0\leq v\leq m-p,\;0\leq r\leq p\,.$
Joining these restrictions on the parameters $u,v,r$ with the restrictions on
$p,m$ and $l$ in the definition of the sets ${\cal T}(n,l,m,p)$, we obtain the
combined parameter restriction
$\displaystyle 0\leq p\leq m\leq l\leq n,\;p<l,\;0\leq u\leq l-p,\;0\leq v\leq
m-p,\;0\leq r\leq p\,.$ (2.25)
Figure 1: Depiction of the sets $T,T^{\prime}$ with $(T,T^{\prime})\in{\cal T}(n,l,m,p)$, $X\in{{\cal P}_{n}^{s}}$ and their intersections.
As discussed and proved in Theorem 3.2 of Zhigljavsky (2003), the design set
${\cal D}={\cal P}_{n}^{s}$ is balanced. This means that the number
$R(n,l,m,p,u,v,r)$ does not depend on the choice of the pair
$(T,T^{\prime})\in{\cal T}(n,l,m,p)$ for any set of integers $u,v,r,p,m,l$
satisfying (2.25). Moreover, as shown in the next lemma, the number
$R(n,l,m,p,u,v,r)$ can be explicitly computed.
###### Lemma 3
The design set ${\cal D}\\!=\\!{\cal P}_{n}^{s}$ is balanced for any $s\leq
n$. For this design set, and for any set of integers $u,v,r,p,m,l$ satisfying
(2.25), we have
$\displaystyle
R(n,l,m,p,u,v,r)={{p}\choose{r}}{{l-p}\choose{u}}{{m-p}\choose{v}}{{n-l-m+p}\choose{s-r-
u-v}}$ (2.26)
where the convention ${{b}\choose{a}}=0$ for $a<0$ or $a>b$ may be used for
certain values of the parameters.
For the proof of Lemma 3, see Theorem 3.2 in Zhigljavsky (2003). Lemma 3
implies, in particular, that the design sets ${\cal D}={\cal P}_{n}^{\leq s}$
are also balanced for all $1\leq s\leq n$: clearly, a union of disjoint
balanced design sets is also a balanced design set.
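A direct implementation of (2.26) is given below, together with a sanity check that the counts over all admissible $(u,v,r)$ sum to $|{\cal P}_{n}^{s}|={n\choose s}$, as they must, since every $X\in{\cal P}_{n}^{s}$ realizes exactly one triple; the particular values of $n,l,m,p,s$ are chosen arbitrarily.

```python
from math import comb

def C(b, a):
    """Binomial coefficient with the convention of Lemma 3: 0 if a < 0 or a > b."""
    return comb(b, a) if 0 <= a <= b else 0

def R(n, l, m, p, u, v, r, s):
    """Formula (2.26): tests X in P_n^s with the prescribed intersections."""
    return C(p, r) * C(l - p, u) * C(m - p, v) * C(n - l - m + p, s - r - u - v)

n, l, m, p, s = 12, 4, 3, 1, 5
assert sum(R(n, l, m, p, u, v, r, s)
           for u in range(l - p + 1)
           for v in range(m - p + 1)
           for r in range(p + 1)) == comb(n, s)
```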
## 3 Derivation of an upper bound for $\gamma$ in a general group testing
problem
### 3.1 General test function (1.1) and ${\cal D}={\cal P}_{n}^{s}$
In this section, we consider test functions
$f(\cdot,\cdot)=f_{h}(\cdot,\cdot)$ of the form (1.1). The following theorem
provides a closed-form expression for the Rényi coefficients in this case and
represents the major input into the non-asymptotic expressions of the upper
bounds in specific cases.
###### Theorem 3.1
Let the test function be defined by (1.1), $0\leq p\leq m\leq l\leq n$, $p<l$,
${\cal D}={\cal P}_{n}^{s}$ and $(T_{i},T_{j})\in{\cal T}(n,l,m,p)$. Then the
value of the Rényi coefficient $k_{ij}$ does not depend on the choice of the
pair $(T_{i},T_{j})\in{\cal T}(n,l,m,p)$ and equals $k_{ij}=K({\cal
P}_{n}^{s},n,l,m,p),$ where
$\displaystyle{K({\cal P}_{n}^{s},n,l,m,p)}\,$ $\displaystyle=$
$\displaystyle\,\sum_{r{=}0}^{p}\sum_{u{=}0}^{m{-}p}R(n,l,m,p,u,u,r)$ (3.1)
$\displaystyle{+}$
$\displaystyle\sum_{r{=}0}^{p}\sum_{u{=}w}^{l{-}p}\sum_{v{=}u{+}1}^{m{-}p}R(n,l,m,p,u,v,r)+\sum_{r{=}0}^{p}\sum_{v{=}w}^{m{-}p}\sum_{u{=}v{+}1}^{l{-}p}R(n,l,m,p,u,v,r)\,.\;\;\;\;\;$
Here $w=\max\\{0,{h}-r\\}$ and the terms $R(n,l,m,p,u,v,r)$ are as in (2.26).
The proof of Theorem 3.1 can be found in Appendix A; it also follows from
Theorem 3.3 in Zhigljavsky (2003). Set
$\displaystyle q_{{\cal D},n,l,m,p}={K({\cal
P}_{n}^{s},n,l^{\prime},m^{\prime},p)}/{{n\choose s}}\;\;{\rm with}\;\
l^{\prime}=\max(l,m),m^{\prime}=\min(l,m)\;{\rm and}\;{\cal D}={\cal
P}_{n}^{s}\,,\;\;\;$ (3.2)
where $K({\cal P}_{n}^{s},n,l,m,p)$ are the Rényi coefficients of (3.1); note
that using the convention of Lemma 3, for all $d=0,\ldots,n$ we have $K({\cal
D},n,d,d,d)=0$ and hence $q_{{\cal D},n,d,d,d}=0$. Then we have the following
theorem.
###### Theorem 3.2
Let ${\cal T}={\cal P}_{n}^{\leq d}$ and ${\cal D}={\cal P}_{n}^{s}$ where
$n\geq 2$, $1\leq d\leq n$, $1\leq s\leq n$. Let ${\mathbb{Q}}$ be a
distribution satisfying (2.17) and let ${\mathbb{R}}$ be the uniform
distribution on ${\cal D}$. For a fixed $N\geq 1$, let
${D}_{N}=\\{X_{1},\ldots,X_{N}\\}$ be a random $N$-point design with each
$X_{i}\in{D}_{N}$ chosen independently and ${\mathbb{R}}$-distributed. Then
$\displaystyle\\!\\!\\!\gamma^{*}({\mathbb{Q}},{\mathbb{R}},N)=\sum_{{b}=0}^{d}{\rm
Pr}_{\mathbb{B}}\\{\xi={b}\\}\min\left\\{1,\frac{1}{{{n}\choose{{b}}}}\sum_{m=0}^{d}\,\sum_{p=0}^{\min\\{{b},m\\}}\\!{\textstyle{{n}\choose{p\;m-p\;{b}-p\;n-{b}-m+p}}}q^{N}_{{\cal
D},n,{b},m,p}\right\\}\,.\;\;\;$ (3.3)
The proof of Theorem 3.2 is included in the Appendix A; it is a generalisation
of Theorem 6.2 in Zhigljavsky (2003). The following corollary follows from
Theorem 3.2 and its proof. More specifically, the only adjustment needed in
the proof of Theorem 3.2 is to set $Q_{N,n,{b}}({\cal D})=\min\\{1,S_{2}\\}$,
where $S_{2}$ is defined in the proof.
###### Corollary 1
Let ${\cal T}={\cal P}_{n}^{d}$ and ${\cal D}={\cal P}_{n}^{s}$, where $n\geq
2$, $1\leq d<n$, $1\leq s<n$. Let ${\mathbb{Q}}$ and ${\mathbb{R}}$ be uniform
distributions on ${\cal T}$ and ${\cal D}$ respectively. For a fixed $N\geq
1$, let ${D}_{N}=\\{X_{1},\ldots,X_{N}\\}$ be a random $N$-point design with
each $X_{i}\in{D}_{N}$ chosen independently and ${\mathbb{R}}$-distributed.
Then
$\displaystyle\gamma^{*}({\mathbb{Q}},{\mathbb{R}},N)=\min\left\\{1,\frac{1}{{{n}\choose{d}}}\,\sum_{p=0}^{d-1}\\!{\textstyle{{n}\choose{p\;d-p\;d-p\;n-2d+p}}}\left({K({\cal
P}_{n}^{s},n,d,d,p)}/{{n\choose s}}\right)^{N}\ \right\\}\,.$ (3.4)
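For moderate $n$, the Rényi coefficients (3.1) and the bound (3.4) can be computed exactly with integer arithmetic, floating point entering only through the final $N$-th powers. The following Python sketch (the routine names are ours) implements Theorem 3.1 and Corollary 1 directly.

```python
from math import comb

def C(b, a):
    return comb(b, a) if 0 <= a <= b else 0

def R(n, l, m, p, u, v, r, s):
    return C(p, r) * C(l - p, u) * C(m - p, v) * C(n - l - m + p, s - r - u - v)

def K(n, l, m, p, s, h):
    """Rényi coefficient (3.1) for the test function f_h of (1.1)."""
    total = 0
    for r in range(p + 1):
        w = max(0, h - r)
        for u in range(m - p + 1):          # first sum: equal counts u = v
            total += R(n, l, m, p, u, u, r)
        for u in range(w, l - p + 1):       # second sum: v > u >= w
            for v in range(u + 1, m - p + 1):
                total += R(n, l, m, p, u, v, r)
        for v in range(w, m - p + 1):       # third sum: u > v >= w
            for u in range(v + 1, l - p + 1):
                total += R(n, l, m, p, u, v, r)
    return total

def multinom(n, parts):
    if any(k < 0 for k in parts) or sum(parts) != n:
        return 0
    out, rest = 1, n
    for k in parts:
        out *= comb(rest, k)
        rest -= k
    return out

def gamma_star(n, d, s, h, N):
    """Corollary 1, formula (3.4): T = P_n^d, D = P_n^s, uniform Q and R."""
    tot = sum(multinom(n, [p, d - p, d - p, n - 2 * d + p])
              * (K(n, d, d, p, s, h) / comb(n, s)) ** N
              for p in range(d))
    return min(1.0, tot / comb(n, d))

print(gamma_star(n=20, d=3, s=10, h=1, N=30))   # binary model instance
```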
### 3.2 Additive model
In this section we specialize general results of Section 3.1 to the case of
additive model, where $f(X,T)=|X\cap T|$ so that we can set ${h}=\infty$ in
(1.1) and (3.1). This removes two terms in (3.1) hence simplifying this
expression. Furthermore, using (2.26) and the Vandermonde convolution formula,
we obtain the following statement.
###### Lemma 4
Let $f(X,T)=|X\cap T|$, ${\cal D}={\cal P}_{n}^{s}$ and $0\leq p\leq m\leq
l\leq n,\;p<l$. Then $k_{ij}=K({\cal P}_{n}^{s},n,l,m,p)$ with
$\displaystyle K({\cal
P}_{n}^{s},n,l,m,p)=\sum_{u=0}^{m-p}{{l-p}\choose{u}}\,{{m-p}\choose{u}}\,{{n-l-m+2p}\choose{s-2u}}\,.$
By considering Lemma 4, Corollary 1 and specialising Theorem 3.2 to some
specific cases, we obtain the following corollary.
###### Corollary 2
Let $f(X,T)=|X\cap T|$ and set $n\geq 2$, $1\leq d<n$, $1\leq s<n$ and ${\cal
D}={\cal P}_{n}^{s}$. For a fixed $N\geq 1$, let
${D}_{N}=\\{X_{1},\ldots,X_{N}\\}$ be a random $N$-point design with each
$X_{i}\in{D}_{N}$ chosen independently and ${\mathbb{R}}$-distributed, where
${\mathbb{R}}$ is the uniform distribution on ${\cal D}$. We consider the
following cases for ${\cal T}={\cal P}_{n}^{d}$ and ${\mathbb{Q}}$:
1. 1.
Let ${\cal T}={\cal P}_{n}^{d}$ and ${\mathbb{Q}}$ be the uniform distribution
on ${\cal T}$. Then $\gamma^{*}({\mathbb{Q}},{\mathbb{R}},N)$ can be obtained
from (3.4) with
$\displaystyle{K({\cal
P}_{n}^{s},n,d,d,p)}=\sum_{u=0}^{d-p}{{d-p}\choose{u}}^{2}\,\,{{n-2d+2p}\choose{s-2u}}\,\,.$
2. 2.
Let ${\cal T}={\cal P}_{n}^{\leq d}$ and ${\mathbb{Q}}$ be a distribution
satisfying (2.17). Then $\gamma^{*}({\mathbb{Q}},{\mathbb{R}},N)$ can be
obtained from (3.3) with
$\displaystyle q_{{\cal D},n,{b},m,p}=\frac{1}{{n\choose
s}}\sum_{u=0}^{m-p}{{{b}-p}\choose{u}}\,{{m-p}\choose{u}}\,{{n-{b}-m+2p}\choose{s-2u}}\
\,.$ (3.5)
3. 3.
Let ${\cal T}={\cal P}_{n}^{\leq n}$, ${\mathbb{Q}}$ satisfy (2.17) and
suppose ${\mathbb{B}}$ is the $Bin(n,q)$ distribution on $\\{0,1,\ldots n\\}$.
Then from Theorem 3.2 we obtain
$\displaystyle\gamma^{*}({\mathbb{Q}},{\mathbb{R}},N)=\sum_{{b}=0}^{n}{n\choose{b}}q^{b}(1-q)^{n-{b}}\min\left\\{1,\frac{1}{{{n}\choose{{b}}}}\sum_{m=0}^{n}\,\sum_{p=0}^{\min\\{{b},m\\}}\\!{\textstyle{{n}\choose{p\;m-p\;{b}-p\;n-{b}-m+p}}}q^{N}_{{\cal
D},n,{b},m,p}\right\\}$
with $q_{{\cal D},n,{b},m,p}$ given in (3.5).
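A sketch of the closed form of Lemma 4 is given below; the commented line indicates how it can be cross-checked against the general routine for (3.1) from the sketch after Corollary 1, using any cap $h\geq n$ (for which the second and third sums of (3.1) contribute nothing).

```python
from math import comb

def C(b, a):
    return comb(b, a) if 0 <= a <= b else 0

def K_additive(n, l, m, p, s):
    """Lemma 4: Rényi coefficient for the additive model (h = infinity)."""
    return sum(C(l - p, u) * C(m - p, u) * C(n - l - m + 2 * p, s - 2 * u)
               for u in range(m - p + 1))

# Cross-check against the general K() of the previous sketch, if it is in scope:
# assert K_additive(12, 4, 3, 1, 6) == K(12, 4, 3, 1, 6, h=12)
```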
### 3.3 Simulation study for the additive model
In Figures 2–3, using red crosses we depict the probability ${{\rm
Pr}_{{\mathbb{Q}},{\mathbb{R}}}\\{T\textrm{ is separated by }{D}_{N}\\}}$ as a
function of $N$. These values have been obtained via Monte Carlo simulations
with $50,000$ repetitions. With the black dots we plot the value of $1-\gamma$
as a function of $N_{\gamma}$. For these figures, we have set ${\cal T}={\cal
P}_{n}^{3}$ and chosen $s=n/2$ based on the asymptotic considerations
discussed in the beginning of Section 5.4.
In Tables 1–2, for a given value of $1-\gamma^{*}$ we tabulate the value of
$1-\gamma$ for the additive group testing model, where ${\cal T}={\cal
P}_{n}^{3}$, ${\cal D}={\cal P}_{n}^{s}$ and ${\mathbb{Q}}$ and ${\mathbb{R}}$
are uniform on ${\cal T}$ and ${\cal D}$ respectively. The values have been
obtained via Monte Carlo simulations. When considering the inverse problem
discussed in (2.3), we also include the explicit upper bounds $N_{\gamma}$ and
the value of $N_{\gamma^{*}}$ obtained via Monte Carlo for different values of
$n,s$ and $\gamma^{*}$. In all Monte Carlo simulations, we have used $50,000$
repetitions. Tables 1–2 and Figures 2–3 demonstrate that when $\gamma^{*}$ is
small, the union bound used in the proof of Theorem 2.1 appears very sharp
since the values of $1-\gamma$ and $1-\gamma^{*}$ almost coincide.
Figure 2: Additive model; $n=20,s=10$.
Figure 3: Additive model; $n=50,s=25$.
| $n=20$ | $n=50$ | $n=100$ | $n=150$
---|---|---|---|---
$\lambda$ | $N_{\gamma^{*}}$ | $N_{\gamma}$ | $1-\gamma$ | $N_{\gamma^{*}}$ | $N_{\gamma}$ | $1-\gamma$ | $N_{\gamma^{*}}$ | $N_{\gamma}$ | $1-\gamma$ | $N_{\gamma^{*}}$ | $N_{\gamma}$ | $1-\gamma$
0.10 | 31 | 34 | 0.96 | 38 | 40 | 0.96 | 42 | 44 | 0.97 | 42 | 46 | 0.96
0.20 | 16 | 17 | 0.96 | 19 | 21 | 0.97 | 21 | 23 | 0.96 | 23 | 24 | 0.96
0.30 | 11 | 12 | 0.97 | 14 | 15 | 0.97 | 14 | 16 | 0.97 | 17 | 18 | 0.97
0.40 | 9 | 11 | 0.98 | 11 | 13 | 0.98 | 13 | 15 | 0.98 | 14 | 16 | 0.98
0.50 | 8 | 11 | 0.98 | 11 | 13 | 0.98 | 12 | 14 | 0.98 | 13 | 15 | 0.98
Table 1: Additive model with $\gamma^{*}=0.05$, $d=3$, $s=\lceil\lambda n\rceil$, various $n$ and $\lambda$.

| $n=20$ | $n=50$ | $n=100$ | $n=150$
---|---|---|---|---
$\lambda$ | $N_{\gamma^{*}}$ | $N_{\gamma}$ | $1-\gamma$ | $N_{\gamma^{*}}$ | $N_{\gamma}$ | $1-\gamma$ | $N_{\gamma^{*}}$ | $N_{\gamma}$ | $1-\gamma$ | $N_{\gamma^{*}}$ | $N_{\gamma}$ | $1-\gamma$
0.10 | 28 | 30 | 0.92 | 34 | 36 | 0.93 | 38 | 40 | 0.93 | 41 | 43 | 0.94
0.20 | 15 | 16 | 0.93 | 17 | 19 | 0.93 | 20 | 21 | 0.93 | 21 | 22 | 0.93
0.30 | 9 | 11 | 0.94 | 12 | 14 | 0.94 | 14 | 15 | 0.95 | 15 | 17 | 0.95
0.40 | 8 | 10 | 0.95 | 10 | 12 | 0.95 | 12 | 14 | 0.95 | 13 | 15 | 0.96
0.50 | 8 | 10 | 0.96 | 10 | 12 | 0.97 | 12 | 14 | 0.96 | 12 | 14 | 0.97
Table 2: Additive model with $\gamma^{*}=0.1$, $d=3$, $s=\lceil\lambda n\rceil$, various $n$ and $\lambda$.
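The Monte Carlo values reported in this section can be reproduced, up to simulation noise, with a transparent (and deliberately unoptimized) procedure: draw a uniform target, draw $N$ uniform $s$-subsets, and test separation by brute force against all other targets. A sketch for the additive model follows; the number of repetitions is kept small here for speed.

```python
import random
from itertools import combinations

def is_separated(tests, T, targets):
    """True if the results |X ∩ T|, X in D_N, distinguish T from all T' != T."""
    sig = tuple(len(X & T) for X in tests)
    return all(tuple(len(X & Tp) for X in tests) != sig
               for Tp in targets if Tp != T)

def mc_success_probability(n, d, s, N, reps=500):
    items = list(range(n))
    targets = [frozenset(c) for c in combinations(items, d)]
    hits = 0
    for _ in range(reps):
        T = random.choice(targets)                   # uniform Q on P_n^d
        D_N = [frozenset(random.sample(items, s))    # uniform R on P_n^s
               for _ in range(N)]
        hits += is_separated(D_N, T, targets)
    return hits / reps

print(mc_success_probability(n=20, d=3, s=10, N=15))   # cf. Figure 2
```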
### 3.4 Extension for ${\cal D}\neq{\cal P}_{n}^{s}$
In this section we demonstrate how the key results of the Sections 3.1 and 3.2
can be easily modified for the case when ${\cal D}=\cup_{s}{\cal P}_{n}^{s}$,
where the union is taken over any subset of $\\{0,1,\ldots,n\\}$, and for a
distribution ${\mathbb{R}}$ that is not necessarily uniform on ${\cal D}$.
Let ${\cal D}={\cal P}_{n}^{\leq n}$, ${\mathbb{S}}$ be a probability
distribution on $\\{0,1,\ldots,n\\}$ and $\zeta$ be a
${\mathbb{S}}$-distributed random variable on $\\{0,1,\ldots,n\\}$. The
distribution ${\mathbb{R}}$ depends on ${\mathbb{S}}$ in the following way:
for a ${\mathbb{R}}$-distributed random test $X\in{\cal D}$ we have
$\displaystyle\mbox{Pr}_{{\mathbb{R}}}\\{|X|={s}\\}=\mbox{Pr}_{\mathbb{S}}\\{\zeta={s}\\},\,\,\
\mbox{Pr}_{{\mathbb{R}}}\\{X=x\,|\,|X|={s}\\}=1/{{n\choose
s}}\,\,\,\,\mbox{$\forall x\in{\cal P}_{n}^{s}$, else 0}\,.$ (3.6)
These two requirements mean that for all $s\in\\{0,1,\ldots,n\\}$ and
$\textsf{X}\in{\cal P}_{n}^{s}$ we have
$\displaystyle\mbox{Pr}_{{\mathbb{R}}}\\{X=\textsf{X}\\}={\mbox{Pr}_{\mathbb{S}}\\{\zeta=s\\}}/{{n\choose
s}}\,.$
Note that in the case of Bernoulli design, when each item is included into a
group of items with probability $p$, ${\mathbb{S}}$ is Bin($n,p$), the
Binomial distribution with parameters $n$ and $p$.
For a general test function $f(X,T)$ we introduce the probability
$p_{ijs}=\mbox{Pr}\\{f(X,T_{i})=f(X,T_{j})\,|\,|X|=s\\}.$
By conditioning on $s$, we obtain
$p_{ij}=\sum_{s=0}^{n}p_{ijs}\mbox{Pr}_{\mathbb{S}}\\{\zeta={s}\\}.$ In view
of the conditional uniformity of ${\mathbb{R}}$, which is the second condition
in (3.6), the probabilities $p_{ijs}$ can be written as
$\displaystyle p_{ijs}={k_{ijs}}/{|{\cal P}_{n}^{s}|}={k_{ijs}}/{{{n\choose
s}}}$
where $k_{ijs}=k(T_{i},T_{j},s)\,$ is the number of $X\in{\cal P}_{n}^{s}$
such that $f(X,T_{i})\,=\,f(X,T_{j});\;$ that is,
$\displaystyle k_{ijs}=\left|\\{X\in{{\cal
P}_{n}^{s}}:\;f(X,T_{i})=f(X,T_{j})\\}\right|\;\;\;\;\;\mbox{\rm for
$\;\;T_{i},T_{j}\in{{\cal T}}$}\,.$
From this, we obtain
$\displaystyle
p_{ij}=\sum_{s=0}^{n}{k_{ijs}}\mbox{Pr}_{\mathbb{S}}\\{\zeta={s}\\}/{{{n\choose
s}}}\,.$
Set
$\displaystyle q_{{\cal D},n,l,m,p;\,{\mathbb{S}}}=\sum_{s=0}^{n}\frac{K({\cal
P}_{n}^{s},n,l^{\prime},m^{\prime},p)}{{n\choose
s}}\mbox{Pr}_{\mathbb{S}}\\{\zeta={s}\\}\;\;{\rm
with}\;\;l^{\prime}=\max(l,m),m^{\prime}=\min(l,m)\,.$
Then all results of the previous sections established for the case ${\cal
D}={\cal P}_{n}^{s}$ can be extended to group testing problems
with ${\cal D}=\cup_{s}{\cal P}_{n}^{s}$ by replacing $q_{{\cal D},n,l,m,p}$
of (3.2) with $q_{{\cal D},n,l,m,p;\,{\mathbb{S}}}$.
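Computationally, this replacement amounts to one extra mixing loop over the test size $s$. A minimal sketch, which assumes a routine `K` computing the Rényi coefficient (3.1) (such as the one in the sketch after Corollary 1), is as follows; for the Bernoulli design the weights are binomial.

```python
from math import comb

def binomial_weights(n, kappa):
    """Pr_S{zeta = s} when S is the Bin(n, kappa) distribution."""
    return [comb(n, s) * kappa ** s * (1 - kappa) ** (n - s)
            for s in range(n + 1)]

def q_mixed(n, l, m, p, h, weights, K):
    """q_{D,n,l,m,p;S}: mixture of K(P_n^s, ...)/C(n, s) over the test size s.
    `K` is a routine computing the Rényi coefficient (3.1)."""
    lp, mp = max(l, m), min(l, m)
    return sum(w * K(n, lp, mp, p, s, h) / comb(n, s)
               for s, w in enumerate(weights) if w > 0)
```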
## 4 Group testing for the binary model
### 4.1 A general result and its specialization to particular cases
In the binary group testing, we have ${h}=1$ in (1.1) and thus the test
function is
$\displaystyle f(X,T)=f_{1}(X,T)=\left\\{\begin{array}[]{ll}0&\;\mbox{ if }\;X\cap T=\emptyset,\\\ 1&\mbox{ otherwise.}\end{array}\right.$ (4.3)
###### Theorem 4.1
Let the test function be (4.3), $0\leq p\leq m\leq l\leq n$, $p<l$, ${\cal
D}={\cal P}_{n}^{s}$ and $(T_{i},T_{j})\in{\cal T}(n,l,m,p)$. Then the value
of the Rényi coefficient $k_{ij}$ does not depend on the choice of the pair
$(T_{i},T_{j})\in{\cal T}(n,l,m,p)$ and equals $k_{ij}=K({\cal
P}_{n}^{s},n,l,m,p),$ where
$\displaystyle K({\cal
P}_{n}^{s},n,l,m,p)={{n}\choose{s}}-{{n-l}\choose{s}}-{{n-m}\choose{s}}+2{{n-l-m+p}\choose{s}}\,.$
(4.4)
The proof of Theorem 4.1 can be obtained from Zhigljavsky (2003) and is
included in Appendix A for completeness.
###### Corollary 3
Let the test function be (4.3) and set $n\geq 2$, $1\leq d<n$, $1\leq s<n$ and
${\cal D}={\cal P}_{n}^{s}$. For a fixed $N\geq 1$, let
${D}_{N}=\\{X_{1},\ldots,X_{N}\\}$ be a random $N$-point design with each
$X_{i}\in{D}_{N}$ chosen independently and ${\mathbb{R}}$-distributed, where
${\mathbb{R}}$ is the uniform distribution on ${\cal D}$. We consider the
following cases for ${\cal T}={\cal P}_{n}^{d}$ and ${\mathbb{Q}}$:
1. 1.
Let ${\cal T}={\cal P}_{n}^{d}$ and ${\mathbb{Q}}$ be the uniform distribution
on ${\cal T}$. For a fixed $N\geq 1$, let ${D}_{N}=\\{X_{1},\ldots,X_{N}\\}$
be a random $N$-point design with each $X_{i}\in{D}_{N}$ independent and
${\mathbb{R}}$-distributed. Then $\gamma^{*}({\mathbb{Q}},{\mathbb{R}},N)$ can
be obtained from (3.4) with
$\displaystyle{K({\cal P}_{n}^{s},n,d,d,p)}={{n}\choose{s}}-2{{n-d}\choose{s}}+2{{n-2d+p}\choose{s}}\,.$
2. 2.
Let ${\cal T}={\cal P}_{n}^{\leq d}$ and ${\mathbb{Q}}$ be a distribution
satisfying (2.17). Then $\gamma^{*}({\mathbb{Q}},{\mathbb{R}},N)$ can be
obtained from (3.3) with
$\displaystyle q_{{\cal
D},n,{b},m,p}=1{-}\left[{\left({n{-}{b}}\atop{s}\right)+\left({n{-}m}\atop{s}\right){-}2\left({n{-}{b}{-}m{+}p}\atop{s}\right)}\right]\big{/}{\left({n}\atop{s}\right)}\,.$
(4.5)
3. 3.
Let ${\cal T}={\cal P}_{n}^{\leq n}$, ${\mathbb{Q}}$ be a distribution
satisfying (2.17) and suppose ${\mathbb{B}}$ is the $Bin(n,q)$ distribution on
$\\{0,1,\ldots n\\}$ for some $q>0$. Then application of Theorem 3.2 provides
$\displaystyle\gamma^{*}({\mathbb{Q}},{\mathbb{R}},N)=\sum_{{b}=0}^{n}{n\choose{b}}q^{b}(1-q)^{n-{b}}\min\left\\{1,\frac{1}{{{n}\choose{{b}}}}\sum_{m=0}^{n}\,\sum_{p=0}^{\min\\{{b},m\\}}\\!{\textstyle{{n}\choose{p\;m-p\;{b}-p\;n-{b}-m+p}}}q^{N}_{{\cal
D},n,{b},m,p}\right\\}$
with $q_{{\cal D},n,{b},m,p}$ obtained from (4.5).
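Part one of this corollary is particularly convenient for computation, since (4.4) reduces to a short alternating sum of binomial coefficients. The following sketch computes $\gamma^{*}$ and the corresponding $N_{\gamma}$ of (2.4); its output can be compared with the entries of Table 3 below.

```python
from math import comb

def C(b, a):
    return comb(b, a) if 0 <= a <= b else 0

def multinom(n, parts):
    if any(k < 0 for k in parts) or sum(parts) != n:
        return 0
    out, rest = 1, n
    for k in parts:
        out *= comb(rest, k)
        rest -= k
    return out

def K_binary(n, l, m, p, s):
    """Theorem 4.1, formula (4.4)."""
    return C(n, s) - C(n - l, s) - C(n - m, s) + 2 * C(n - l - m + p, s)

def gamma_star_binary(n, d, s, N):
    """Part one of Corollary 3 combined with (3.4)."""
    tot = sum(multinom(n, [p, d - p, d - p, n - 2 * d + p])
              * (K_binary(n, d, d, p, s) / comb(n, s)) ** N
              for p in range(d))
    return min(1.0, tot / comb(n, d))

def N_gamma(n, d, s, gamma):
    N = 1
    while gamma_star_binary(n, d, s, N) >= gamma:
        N += 1
    return N

print(N_gamma(n=50, d=3, s=10, gamma=0.01))   # d = 3, s = 0.2n; cf. Table 3
```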
In Table 3, using the results of part one of Corollary 3 we consider the
inverse problem discussed in (2.3) and tabulate the value of $N_{\gamma}$
supposing ${\cal T}={\cal P}_{n}^{3}$ for different values of $s$ and $n$. In
Table 4, using the results of part three of Corollary 3 we tabulate the value
of $N_{\gamma}$ supposing ${\mathbb{B}}$ is the $Bin(n,3/n)$ distribution. In
distribution ${\mathbb{B}}$, the probability of success has been set to $3/n$
so that each target $T\in{\cal T}$ will have three elements on average to
compare with the results of Table 3. We see that the binomial sample problem
requires significantly more tests to locate the defective items with high
probability than the case of exactly $d$ defectives.
$\lambda$ | $\gamma=0.01$, $n=20$ | $n=50$ | $n=100$ | $\gamma=0.05$, $n=20$ | $n=50$ | $n=100$
---|---|---|---|---|---|---
0.10 | 47 | 58 | 64 | 38 | 48 | 54
0.15 | 37 | 47 | 50 | 30 | 39 | 42
0.20 | 33 | 40 | 44 | 27 | 33 | 37
0.25 | 32 | 39 | 43 | 26 | 33 | 36
0.30 | 34 | 40 | 44 | 28 | 34 | 38
Table 3: Values of $N_{\gamma}$ for binary model with $d=3$, $s=\lceil\lambda n\rceil$ for various $n$ and $\lambda$.
$\lambda$ | $\gamma=0.01$, $n=20$ | $n=50$ | $n=100$ | $\gamma=0.05$, $n=20$ | $n=50$ | $n=100$
---|---|---|---|---|---|---
0.10 | 90 | 119 | 142 | 71 | 95 | 113
0.15 | 84 | 117 | 184 | 63 | 91 | 154
0.20 | 105 | 187 | 410 | 70 | 129 | 242
0.25 | 166 | 283 | 731 | 101 | 186 | 380
0.30 | 316 | 547 | 1334 | 170 | 330 | 604
Table 4: Values of $N_{\gamma}$ for binary model with ${\mathbb{B}}$ the $Bin(n,3/n)$ distribution, $s=\lceil\lambda n\rceil$ for various $n$ and $\lambda$.
The results below will address the scenario of Bernoulli designs. In the
following corollaries we set ${\cal D}={\cal P}_{n}^{\leq n}$ and
${\mathbb{S}}$ is the $Bin(n,\kappa)$ distribution for some $0<\kappa<1$. The
discussion of Section 3.4 results in the following.
###### Corollary 4
Let the test function be (4.3) and set $n\geq 2$, $1\leq d<n$, $1\leq s<n$.
Let ${\cal D}={\cal P}_{n}^{\leq n}$, ${\mathbb{R}}$ be a distribution
satisfying the constraints (3.6) and suppose ${\mathbb{S}}$ is the
$Bin(n,\kappa)$ distribution on $\\{0,1,\ldots n\\}$. Let
${D}_{N}=\\{X_{1},\ldots,X_{N}\\}$ be a random design with each
$X_{i}\in{D}_{N}$ chosen independently and ${\mathbb{R}}$-distributed. We
consider the following cases for ${\cal T}={\cal P}_{n}^{d}$ and
${\mathbb{Q}}$:
1. 1.
Let ${\cal T}={\cal P}_{n}^{d}$ and ${\mathbb{Q}}$ be the uniform distribution
on ${\cal T}$. Then $\gamma^{*}({\mathbb{Q}},{\mathbb{R}},N)$ can be obtained
from (3.4) by replacing ${K({\cal P}_{n}^{s},n,d,d,p)}/{{n\choose s}}$ with
$\displaystyle\sum_{s=0}^{n}\frac{K({\cal P}_{n}^{s},n,d,d,p)}{{n\choose s}}{\rm Pr}_{\mathbb{S}}\\{\zeta={s}\\}=1-2\sum\limits_{s=0}^{n}\left({n-d\choose s}-{n-2d+p\choose s}\right)\kappa^{s}(1-\kappa)^{n-s}\,.$
2. 2.
Let ${\cal T}={\cal P}_{n}^{\leq n}$, ${\mathbb{Q}}$ be a distribution
satisfying the constraint (2.17) and suppose ${\mathbb{B}}$ is the $Bin(n,q)$
distribution on $\\{0,1,\ldots n\\}$. Then from (3.3) we obtain
$\displaystyle\\!\\!\gamma^{*}({\mathbb{Q}},{\mathbb{R}},N)\\!=\\!\sum_{{b}=0}^{n}{n\choose{b}}q^{b}(1\\!-\\!q)^{n-{b}}\min\left\\{1,\frac{1}{{{n}\choose{{b}}}}\sum_{m=0}^{n}\,\sum_{p=0}^{\min\\{{b},m\\}}\\!{\textstyle{{n}\choose{p\;m-p\;{b}-p\;n-{b}-m+p}}}q^{N}_{{\cal
D},n,{b},m,p,\kappa}\right\\},\;\;\;\;$
where
$\displaystyle q_{{\cal
D},n,{b},m,p,\kappa}=1{-}\sum\limits_{s=0}^{n}\left({{{n{-}{b}}\choose{s}}+{{n{-}m}\choose{s}}{-}2{{n{-}{b}{-}m{+}p}\choose{s}}}\right)\kappa^{s}(1-\kappa)^{n-s}\,.$
In Table 5, using the results of part one of Corollary 3 we tabulate the value
of $N_{\gamma}$ supposing ${\cal T}={\cal P}_{n}^{d}$ with $d=3$ for different
values of $s$ and $n$. This table considers more choices for $s$ when compared
to Table 3. In Table 6, we tabulate the value of $N_{\gamma}$ obtained via
part one of Corollary 4 supposing ${\mathbb{S}}$ is the $Bin(n,\lceil\lambda
n\rceil/n)$ distribution. The probability parameter has been set to
$\lceil\lambda n\rceil/n$ such that each $X_{i}$ in
${D}_{N}=\\{X_{1},\ldots,X_{N}\\}$ will have $\lceil\lambda n\rceil$ elements
on average to compare with the results of Table 5. The results of these tables
indicate that it is preferable to have a constant-row-weight design rather
than to include each item in a test with some fixed probability (at least for
choices of $s$ of interest).
$\lambda$ | $\gamma=0.01$, $n=10$ | $n=20$ | $n=50$ | $n=100$ | $\gamma=0.05$, $n=10$ | $n=20$ | $n=50$ | $n=100$
---|---|---|---|---|---|---|---|---
0.05 | 35 | 82 | 86 | 112 | 28 | 66 | 72 | 94
0.10 | 35 | 47 | 58 | 64 | 28 | 38 | 48 | 54
0.15 | 25 | 33 | 43 | 48 | 20 | 27 | 36 | 41
0.20 | 25 | 33 | 40 | 44 | 20 | 27 | 33 | 37
0.25 | 27 | 32 | 39 | 43 | 22 | 26 | 33 | 36
0.30 | 27 | 34 | 40 | 44 | 22 | 28 | 34 | 38
0.35 | 37 | 43 | 45 | 48 | 29 | 35 | 38 | 41
0.40 | 37 | 43 | 50 | 54 | 29 | 35 | 42 | 46
0.45 | 62 | 52 | 62 | 64 | 51 | 43 | 52 | 55
0.50 | 62 | 66 | 73 | 79 | 51 | 55 | 63 | 69
Table 5: Values of $N_{\gamma}$ for binary model with $d=3$, $s=\lceil\lambda n\rceil$ for various $n$ and $\lambda$.
$\gamma=0.01$:

$\lambda$ | $n=10$ | $n=20$ | $n=50$ | $n=100$
---|---|---|---|---
0.05 | 49 | 96 | 92 | 115
0.10 | 49 | 55 | 61 | 66
0.15 | 34 | 38 | 46 | 49
0.20 | 34 | 38 | 42 | 45
0.25 | 34 | 37 | 41 | 44
0.30 | 34 | 38 | 42 | 45
0.35 | 41 | 46 | 46 | 49
0.40 | 41 | 46 | 51 | 55
0.45 | 58 | 53 | 62 | 64
0.50 | 58 | 65 | 73 | 79

$\gamma=0.05$:

$\lambda$ | $n=10$ | $n=20$ | $n=50$ | $n=100$
---|---|---|---|---
0.05 | 39 | 78 | 76 | 97
0.10 | 39 | 44 | 51 | 56
0.15 | 27 | 31 | 38 | 42
0.20 | 27 | 31 | 35 | 38
0.25 | 27 | 30 | 34 | 37
0.30 | 27 | 31 | 35 | 38
0.35 | 33 | 37 | 39 | 41
0.40 | 33 | 37 | 43 | 47
0.45 | 47 | 44 | 52 | 55
0.50 | 47 | 54 | 62 | 69

Table 6: Values of $N_{\gamma}$ for binary model with ${\mathbb{S}}$ the $Bin(n,\lceil\lambda n\rceil/n)$ distribution for various $n$ and $\lambda$.
### 4.2 Simulation study
In Tables 7–9, for a given value of $1-\gamma^{*}$ we tabulate the value of $1-\gamma$ for the binary group testing model, where ${\cal T}={\cal P}_{n}^{d}$, ${\cal D}={\cal P}_{n}^{s}$ and ${\mathbb{Q}}$ and ${\mathbb{R}}$ are uniform on ${\cal T}$ and ${\cal D}$ respectively. As in Tables 1–2, we also include the explicit upper bounds $N_{\gamma}$ and the values of $N_{\gamma^{*}}$ obtained via Monte Carlo methods with $50{,}000$ trials for different values of $n$, $s$ and $\gamma^{*}$. We see once again that for small values of $\gamma^{*}$ the union bound used in Theorem 2.1 appears very sharp.
In Figures 4–7, using red crosses we depict the probability ${{\rm Pr}_{{\mathbb{Q}},{\mathbb{R}}}\\{T\textrm{ is separated by }{D}_{N}\\}}$ as a function of $N$, obtained from $50{,}000$ Monte Carlo simulations. With the black dots we plot the value of $1-\gamma^{*}$ as a function of $N_{\gamma}$. For these figures, we have chosen $s=\lfloor(1-2^{-1/d})n\rfloor$ based on the asymptotic considerations discussed at the beginning of Section 5.4.
| $n=20$ | $n=50$ | $n=100$ | $n=200$
---|---|---|---|---
$\lambda$ | $N_{\gamma^{*}}$ | $N_{\gamma}$ | $1-\gamma$ | $N_{\gamma^{*}}$ | $N_{\gamma}$ | $1-\gamma$ | $N_{\gamma^{*}}$ | $N_{\gamma}$ | $1-\gamma$ | $N_{\gamma^{*}}$ | $N_{\gamma}$ | $1-\gamma$
0.10 | 36 | 38 | 0.96 | 44 | 48 | 0.96 | 49 | 54 | 0.96 | 55 | 59 | 0.97
0.20 | 25 | 27 | 0.96 | 30 | 33 | 0.96 | 33 | 37 | 0.96 | 38 | 41 | 0.96
0.30 | 29 | 31 | 0.96 | 33 | 35 | 0.96 | 35 | 38 | 0.97 | 39 | 41 | 0.96
0.40 | 33 | 35 | 0.96 | 37 | 42 | 0.97 | 42 | 46 | 0.97 | 47 | 51 | 0.96
0.50 | 52 | 55 | 0.97 | 57 | 63 | 0.97 | 62 | 69 | 0.98 | 68 | 76 | 0.97
Table 7: Binary model with $\gamma^{*}=0.05$, $d=3$, $s=\lceil\lambda n\rceil$, various $n$ and $\lambda$.
| $n=20$ | $n=50$ | $n=100$ | $n=200$
---|---|---|---|---
$\lambda$ | $N_{\gamma^{*}}$ | $N_{\gamma}$ | $1-\gamma$ | $N_{\gamma^{*}}$ | $N_{\gamma}$ | $1-\gamma$ | $N_{\gamma^{*}}$ | $N_{\gamma}$ | $1-\gamma$ | $N_{\gamma^{*}}$ | $N_{\gamma}$ | $1-\gamma$
0.10 | 32 | 34 | 0.92 | 40 | 44 | 0.93 | 45 | 50 | 0.93 | 51 | 55 | 0.94
0.20 | 22 | 24 | 0.92 | 28 | 31 | 0.93 | 32 | 34 | 0.92 | 36 | 38 | 0.93
0.30 | 26 | 28 | 0.93 | 30 | 32 | 0.93 | 33 | 35 | 0.93 | 36 | 38 | 0.93
0.40 | 29 | 32 | 0.93 | 35 | 39 | 0.94 | 39 | 43 | 0.94 | 44 | 47 | 0.94
0.50 | 46 | 51 | 0.95 | 52 | 59 | 0.96 | 58 | 65 | 0.96 | 64 | 72 | 0.95
Table 8: Binary model with $\gamma^{*}=0.1$, $d=3$, $s=\lceil\lambda n\rceil$, various $n$ and $\lambda$.
| $n=20$ | $n=50$ | $n=100$ | $n=200$
---|---|---|---|---
$\lambda$ | $N_{\gamma^{*}}$ | $N_{\gamma}$ | $1-\gamma$ | $N_{\gamma^{*}}$ | $N_{\gamma}$ | $1-\gamma$ | $N_{\gamma^{*}}$ | $N_{\gamma}$ | $1-\gamma$ | $N_{\gamma^{*}}$ | $N_{\gamma}$ | $1-\gamma$
0.10 | 24 | 29 | 0.84 | 34 | 39 | 0.87 | 38 | 44 | 0.87 | 43 | 49 | 0.88
0.20 | 18 | 21 | 0.84 | 24 | 27 | 0.85 | 26 | 31 | 0.86 | 29 | 34 | 0.86
0.30 | 21 | 24 | 0.85 | 25 | 28 | 0.85 | 27 | 31 | 0.85 | 30 | 35 | 0.87
0.40 | 24 | 28 | 0.86 | 30 | 35 | 0.87 | 34 | 39 | 0.88 | 37 | 43 | 0.88
0.50 | 37 | 45 | 0.90 | 44 | 53 | 0.89 | 50 | 60 | 0.92 | 53 | 67 | 0.95
Table 9: Binary model with $\gamma^{*}=0.25$, $d=3$, $s=\lceil\lambda n\rceil$, various $n$ and $\lambda$.
Figure 4: Binary model: $\gamma$ vs $\gamma^{*}$ for $n=100$ and $d=3$.
Figure 5: Binary model: $\gamma$ vs $\gamma^{*}$ for $n=200$ and $d=3$.
Figure 6: Binary model: $\gamma$ vs $\gamma^{*}$ for $n=100$ and $d=4$.
Figure 7: Binary model: $\gamma$ vs $\gamma^{*}$ for $n=200$ and $d=4$.
From Tables 7–9 and Figures 4–7 we can draw the following conclusions. For small values of $\gamma$, the value of $\gamma^{*}$ is very close to $\gamma$ (equivalently, $N_{\gamma^{*}}$ is very close to $N_{\gamma}$). For larger values of $\gamma$, $\gamma^{*}$ is often very conservative, with the true $\gamma$ being significantly smaller.
We use the following decoding technique for the random designs and the improved random designs of Section 4.3. We start with the COMP procedure described in Section 4.5 to eliminate uniquely defined non-defective items. Then, in the case where the defective items are unknown, we perform several additional individual tests to locate the defective items exactly (such tests are very easy to design). In simulation studies we do not need this, as the group $T=T_{i}$ consisting of defective items is known and we only need to establish whether there is another group $T^{\prime}=T_{j}$ giving exactly the same test results. In one random test, the probability that the results coincide is $p_{ij}$ defined in (2.5). As follows from formula (4.4), this probability is high only if $|T_{i}\setminus T_{j}|=1$; this is used explicitly in the proof of Theorem 5.1 and noted at the beginning of Section 5.4. In $N$ tests, this probability becomes $p_{ij}^{N}$, and if $N$ is not very small, $p_{ij}^{N}$ becomes negligible when $|T_{i}\setminus T_{j}|>1$. The probability $\tilde{p}_{ij}$ that both results are 1 is also small when $|T_{i}\setminus T_{j}|>1$. Therefore, to check whether $T$ is the unique group of items consistent with all the test results, it is enough to check only item groups $T^{\prime}$ with $|T\setminus T^{\prime}|=1$. The same considerations can be used for the additive and other group testing models.
### 4.3 Improving on random designs in group testing problems
Any $N$-point design ${D}_{N}=\\{X_{1},\ldots,X_{N}\\}$ has an equivalent
matrix representation as an $N\times n$-matrix ${\cal X}({D}_{N})$ where
columns relate to items and rows to test groups. Let $a_{i,j}=1$ if item $a_{j}$ $(j=1,\ldots,n)$ is included in the test group $X_{i}$ $(i=1,\ldots,N)$; otherwise $a_{i,j}=0$. Then the test matrix corresponding to the design ${D}_{N}$ is ${\cal X}({D}_{N}):=(a_{i,j})_{i,j=1}^{N,n}\,$. We shall denote the rows of ${\cal X}({D}_{N})$ by ${\cal X}_{i}:=(a_{i,1},\ldots,a_{i,n})$ for $i=1,\ldots,N$. A design is called a constant-column-weight design if all columns of ${\cal X}({D}_{N})$ have the same number of ones, whereas for a constant-row-weight design all rows of ${\cal X}({D}_{N})$ have the same number of ones. The designs which are both constant-row-weight and constant-column-weight designs are referred to as
doubly regular designs, see Section 1.3 in Aldridge et al. (2019). If, for a
given design, one of the constancy assumptions is approximately true, we shall
use the prefix ‘near-constant’.
In the most important case ${\cal D}={\cal P}_{n}^{s}$, all designs (including
random designs and the designs constructed in this section) are automatically
constant-row-weight designs. To improve on the separability properties of
random designs, we will construct near-constant-column weight designs and
hence our designs will be nearly doubly regular designs. Moreover, we will
impose restrictions on the Hamming distance between the tests (equivalently
the rows of ${\cal X}({D}_{N})$). Summarizing, the designs of this section
will have near-constant-column weights, constant-row-weights and have an
additional restriction on the Hamming distance between the rows of ${\cal
X}({D}_{N})$. It has been noted in the group testing literature that keeping large Hamming distances between columns of the test matrix ${\cal X}(\cdot)$ tends to improve the separability properties of the design, see e.g. Aldridge et al. (2016). Moreover, the main idea behind the $d$-disjunct designs of Macula (1996) is the maximization of the minimal Hamming distance between these columns.
Here we describe the algorithm for constructing the nested designs we propose; a formal description of the algorithm as pseudo-code can be found in Appendix B, and a short illustrative sketch is given after this paragraph. We start with a one-element design ${D}_{1}=\\{X_{1}\\}$, where $X_{1}$ is a random group. At the $k$-th step we have a design ${D}_{k-1}=\\{X_{1},\ldots,X_{k-1}\\}$ and we are looking for a new test group $X_{k}$ to be added to the design ${D}_{k-1}$. To do this, we generate 100 candidate test groups $U_{k}=\\{X_{k,1},\ldots,X_{k,100}\\}$ with $X_{k,i}\in{\cal P}_{n}^{s}$ according to the following procedure. For 75 of the candidate tests, we repeat the following. Check the frequency of occurrence of each item in ${D}_{k-1}$ and locate the items with the smallest number of occurrences. If there are at least $s$ such items, return a random sample of size $s$ from them. If there are fewer than $s$, say $s^{\prime}$, such lowest-frequency items, return all $s^{\prime}$ of them and supplement them with $s-s^{\prime}$ items sampled randomly from the remaining items (those whose frequency of occurrence is not the smallest). This describes Algorithm 1 in Appendix B. The remaining 25 candidate tests are simply sampled randomly from ${\cal D}={\cal P}_{n}^{s}$. The 100 candidate tests chosen in this manner encourage nearly equal column weights of the constructed designs ${D}_{k}$ for all $k$. Of the 100 candidates in the set $U_{k}$, we select a single test group as $X_{k}$ by maximizing the smallest Hamming distance to all previous points in the design ${D}_{k-1}$. Specifically, we locate a test group (or groups) $X^{\prime}\in\arg\max_{X\in U_{k}}\min_{1\leq j\leq k-1}d_{H}(X,X_{j})$. If this results in more than one such $X^{\prime}$, we select the group $X^{\prime}$ for which $\sum_{j=1}^{k-1}d_{H}(X^{\prime},X_{j})$ is largest. This whole process is described as Algorithm 2 in Appendix B.
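The following Python sketch illustrates the construction just described (the helper names `candidate_test` and `build_design` are ours, not part of the formal Algorithms 1 and 2 of Appendix B). Tests are represented as sets of items, so the Hamming distance between two rows of ${\cal X}({D}_{N})$ equals the size of the symmetric difference of the corresponding tests.

```python
import random
from collections import Counter

def candidate_test(design, n, s):
    # Algorithm 1 (sketch): favour the items that occur least often in the design so far.
    freq = Counter({i: 0 for i in range(n)})
    for test in design:
        freq.update(test)
    lowest = min(freq.values())
    rarest = [i for i in range(n) if freq[i] == lowest]
    if len(rarest) >= s:
        return frozenset(random.sample(rarest, s))
    rest = [i for i in range(n) if freq[i] > lowest]
    return frozenset(rarest + random.sample(rest, s - len(rarest)))

def build_design(N, n, s, n_cand=100):
    # Algorithm 2 (sketch): greedy selection maximizing the minimal Hamming distance.
    design = [frozenset(random.sample(range(n), s))]
    while len(design) < N:
        cands = [candidate_test(design, n, s) for _ in range(3 * n_cand // 4)]
        cands += [frozenset(random.sample(range(n), s))
                  for _ in range(n_cand - len(cands))]
        # d_H between incidence vectors = size of the symmetric difference of the tests.
        def min_dist(c):
            return min(len(c ^ x) for x in design)
        best = max(min_dist(c) for c in cands)
        ties = [c for c in cands if min_dist(c) == best]
        design.append(max(ties, key=lambda c: sum(len(c ^ x) for x in design)))
    return design

print(build_design(N=10, n=20, s=5))
```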
For the random design ${D}_{N}=\\{X_{1},\ldots,X_{N}\\}$ with each
$X_{i}\in{D}_{N}$ chosen independently and uniformly in ${\cal P}_{n}^{s}$,
the distribution of the Hamming distance between any two rows of ${\cal
X}({D}_{N})$ can be computed. Without loss of generality, we only need to
consider the first and second rows of ${\cal X}({D}_{N})$, that is ${\cal
X}_{1}$ and ${\cal X}_{2}$. The random variable of interest is $d_{H}({\cal
X}_{1},{\cal X}_{2})$. Assume $s\leq n/2$. Then for $x=0,1,\ldots s$ we
clearly have
$\displaystyle{\rm Pr}\\{d_{H}({\cal X}_{1},{\cal X}_{2})=2x\\}={{s\choose
s-x}{n-s\choose x}}/{{n\choose s}}\,.$
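As a sanity check, the following short Python sketch tabulates this distribution and verifies that it sums to one (by the Vandermonde convolution); the parameter values match those of Figure 8.

```python
from math import comb

def inter_row_dist(n, s):
    # Pr{d_H(X_1, X_2) = 2x} = C(s, s - x) C(n - s, x) / C(n, s), x = 0, ..., s (s <= n/2).
    return {2 * x: comb(s, s - x) * comb(n - s, x) / comb(n, s) for x in range(s + 1)}

dist = inter_row_dist(n=50, s=11)
print(sum(dist.values()))                   # 1.0, by the Vandermonde convolution
print(sum(k * p for k, p in dist.items()))  # mean inter-row Hamming distance
```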
In Figures 8–11, we plot the distribution of inter-row distances of ${\cal X}({D}_{N})$ in dotted red and of ${\cal X}({D}_{N}^{\prime})$ in solid green, where ${D}_{N}^{\prime}$ is a design obtained by Algorithm 2. The truncation of the lower tail of the green distribution demonstrates that Algorithm 2 performs very well at preventing small Hamming distances and encouraging large ones.
Figure 8: Distribution of inter-point Hamming distances for random (red) and after the application of Alg. 2 (green); $n=50$ and $s=11$.
Figure 9: Distribution of inter-point Hamming distances for random (red) and after the application of Alg. 2 (green); $n=50$ and $s=25$.
Figure 10: Distribution of inter-point Hamming distances for random (red) and after the application of Alg. 2 (green); $n=100$ and $s=21$.
Figure 11: Distribution of inter-point Hamming distances for random (red) and after the application of Alg. 2 (green); $n=100$ and $s=50$.
### 4.4 Simulation study for quasi-random designs
In Figures 12–15, we demonstrate the effect Algorithm 2 has on the probability of separation for the binary group testing problem. Using red crosses we depict the probability ${\rm Pr}_{{\mathbb{Q}},{\mathbb{R}}}\\{T\,$ is separated by ${D}_{N}\\}$ as a function of $N$. With the black dots we plot the value of $1-\gamma^{*}$ as a function of $N_{\gamma}$. With green plusses we depict the probability of separation when the design ${D}_{N}^{\prime}$ is obtained by Algorithm 2. For these figures we have set $d=3$ and $s=s(n)=\lambda_{d}n$ with $\lambda_{d}$ chosen asymptotically optimally as $\lambda_{d}=1-2^{-1/d}$ (see Section 5.4). From these figures we can see that Algorithm 2 significantly increases the probability of separation for the binary testing problem. This is particularly evident for smaller values of $N$.
Figure 12: Binary model with $n=20,s=5$;
random (red) vs improved random (green).
Figure 13: Binary model with $n=50,s=11$;
random (red) vs improved random (green).
Figure 14: Binary model with $n=100,s=21$;
random (red) vs improved random (green).
Figure 15: Binary model with $n=150,s=31$;
random (red) vs improved random (green).
### 4.5 Comparison with designs constructed from the disjunct matrices
Given a test matrix ${\cal X}({D}_{N}):=(a_{i,j})_{i,j=1}^{N,n}\,$, let ${\cal S}(a_{j}):=\\{i:a_{i,j}=1\\}$ denote the set of tests in which item $a_{j}$ is included. For a subset ${\cal L}\subseteq{\cal A}$, let ${\cal{S}}({\cal L})=\cup_{a_{j}\in{\cal L}}{\cal S}(a_{j})$. Then a test matrix ${\cal X}={\cal X}({D}_{N})$ is called $d$-disjunct if for any subset ${\cal L}\subseteq{\cal A}$ satisfying $|{\cal L}|=d$ and any $a_{j}\notin{\cal L}$, we never have ${\cal S}(a_{j})\subseteq{\cal{S}}({\cal L})$. A $d$-disjunct matrix can be used to uniquely identify $d$ or fewer defective items and admits the following simple decoding procedure for identifying the true defective set: all items appearing in a negative test are identified as non-defective, whereas all remaining items are identified as (potentially) defective. This simple procedure is called the combinatorial orthogonal matching pursuit (COMP) algorithm, see (Aldridge et al., 2019, p. 37). Consider the following construction of $d$-disjunct matrices ${\cal X}$.
Let $[m]:=\\{1,2,\ldots,m\\}$ be a set of integers. Each of the $n$ columns is labeled by a (distinct) $k$-subset of $[m]$; the numbers $m$ and $k$ must satisfy $n\leq{m\choose k}$. Set ${\cal X}$ to have ${m\choose d}$ rows, each labeled by a (distinct) $d$-subset of $[m]$, where $d<k<m$; $a_{i,j}=1$ if and only if the label of row $i$ is contained in the label of column $j$. It was proved in Macula (1996) that this procedure makes ${\cal X}$ $d$-disjunct. The number of rows in ${\cal X}$, and hence the number of tests performed, is $N={m\choose d}$, which can be very large and can make identification of the defective set expensive. To avoid a large number of tests, it was recommended in Macula (1998) to set $d=2$ regardless of the true $d$; we will call such a matrix 2-disjunct. Whilst the 2-disjunct matrix will no longer guarantee identification of the defective set if the true $d>2$, it was claimed in Macula (1998), see also D’yachkov et al. (2005), that with high probability the defective set will be identified. A small sketch of this construction and of the COMP decoder is given below.
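A minimal Python sketch of Macula's construction and of the COMP decoder follows. The sizes $n=50$, $m=8$, $k=3$ are those used below for Table 10; the defective set `true_T` is a hypothetical example.

```python
from itertools import combinations

def macula_matrix(m, k, d, n):
    # Columns are labelled by distinct k-subsets of [m], rows by d-subsets of [m];
    # an entry is 1 iff the row label is contained in the column label (Macula, 1996).
    cols = [frozenset(c) for c in combinations(range(m), k)][:n]
    rows = [frozenset(r) for r in combinations(range(m), d)]
    return [[1 if r <= c else 0 for c in cols] for r in rows]

def comp_decode(matrix, outcomes):
    # COMP: every item occurring in a negative test is non-defective;
    # all remaining items are declared (potentially) defective.
    n = len(matrix[0])
    possibly_defective = set(range(n))
    for row, y in zip(matrix, outcomes):
        if y == 0:
            possibly_defective -= {j for j in range(n) if row[j] == 1}
    return possibly_defective

# n = 50 items, m = 8, k = 3, d = 2: N = C(8, 2) = 28 tests, as in Table 10.
M = macula_matrix(m=8, k=3, d=2, n=50)
true_T = {0, 7, 31}  # hypothetical defective set
outcomes = [int(any(row[j] for j in true_T)) for row in M]
print(sorted(comp_decode(M, outcomes)))  # contains true_T, possibly with extras
```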
In Tables 10 and 11, we investigate the probability that the defective set $T$ is identified when ${\cal T}={\cal P}_{n}^{3}$ and ${\cal T}={\cal P}_{n}^{4}$ for designs constructed by the following four procedures: (a) the design corresponding to the 2-disjunct matrix ${\cal X}$ with the full decoding; (b) the design corresponding to the 2-disjunct matrix ${\cal X}$ with only the COMP procedure used for decoding; (c) ${D}_{N}=\\{X_{1},\ldots,X_{N}\\}$ with each $X_{i}\in{D}_{N}$ chosen independently and ${\mathbb{R}}$-distributed on ${\cal D}={\cal P}_{n}^{s}$, where $s$ is chosen according to its asymptotically optimal value (see Section 5.4); (d) the improved random design constructed by Algorithm 2. For different values of $n$, when constructing the 2-disjunct matrix ${\cal X}$ we have chosen $m$ and $k$ such that $n\leq{m\choose k}$, $2<k<m$ and $N={m\choose 2}$ is as small as possible. For $n=50,100,200$ and $300$, this results in choosing $m=8$ and $k=3$; $m=9$ and $k=4$; $m=10$ and $k=4$; and $m=11$ and $k=4$, respectively. The random and improved random designs in (c) and (d) are then set to have the same value of $N$. In these tables, the letter next to $1-\gamma$ corresponds to the procedure used. The results in Tables 10 and 11 have been obtained from Monte Carlo simulations with $100{,}000$ repetitions.
We can draw the following conclusions from the results presented in Tables 10 and 11: (i) random designs are slightly inferior to the designs obtained from 2-disjunct matrices (note, however, that random designs are nested and can be constructed for any $N$); (ii) the COMP decoding procedure alone is insufficient and makes the pair [design, decoding procedure] perform poorly; and (iii) improved random designs constructed by Algorithm 2 have much better separability than both random designs and the designs obtained from 2-disjunct matrices.
$n$ | $N$ | $1-\gamma$ (a) | $1-\gamma$ (b) | $1-\gamma$ (c) | $1-\gamma$ (d)
---|---|---|---|---|---
50 | 28 | 0.99 | 0.82 | 0.89 | 0.96
100 | 36 | 0.95 | 0.67 | 0.95 | 0.97
200 | 45 | 0.98 | 0.70 | 0.98 | 0.98
300 | 55 | 0.98 | 0.77 | 0.98 | 0.99
Table 10: Separability comparison for 2-disjunct, random and improved random designs: ${\cal T}={\cal P}_{n}^{3}$.
$n$ | $N$ | $1-\gamma$ (a) | $1-\gamma$ (b) | $1-\gamma$ (c) | $1-\gamma$ (d)
---|---|---|---|---|---
50 | 28 | 0.90 | 0.51 | 0.53 | 0.86
100 | 36 | 0.76 | 0.26 | 0.70 | 0.92
200 | 45 | 0.86 | 0.29 | 0.84 | 0.96
300 | 55 | 0.92 | 0.38 | 0.94 | 0.99
Table 11: Separability comparison for 2-disjunct, random and improved random
designs: ${\cal T}={\cal P}_{n}^{4}$.
### 4.6 Efficiency of the COMP decoding procedure for random designs
For a disjunct test matrix ${\cal X}$, the COMP decoding procedure described in Section 4.5 is guaranteed to find the defective set and can do so very efficiently (possibly defective items become definitely defective). When the design is not disjunct, say when ${D}_{N}$ is constructed randomly, there is no guarantee that the COMP procedure will identify the true defective set. Instead, the procedure provides a set containing the true defective set, possibly mixed with some non-defectives. In (Aldridge et al., 2019, p. 37), the set returned by the COMP algorithm is referred to as the largest satisfying set. In situations when the COMP procedure does not return a uniquely defined $T$, further analysis (based on the tests with positive results) must be performed to reduce the number of possible target groups of items $T$ consistent with all available test results. In Figures 16–17, we investigate the efficiency of COMP expressed as the ratio
$\displaystyle{\mbox{Pr}_{{\mathbb{Q}},{\mathbb{R}}}\\{\text{COMP decoding returns exactly $T$ for design ${D}_{N}$}\\}}\,/\,{{{\rm Pr}_{{\mathbb{Q}},{\mathbb{R}}}\\{T\textrm{ is separated by }{D}_{N}\\}}}$
for designs ${D}_{N}$ constructed randomly. The values in these figures have been obtained from Monte Carlo methods with $50{,}000$ repetitions. From these figures we observe that, although the COMP procedure has higher efficiency for larger $N$, this efficiency is still very low. Taking into account also the second conclusion at the end of Section 4.5, we thus conclude that for random designs ${D}_{N}$ the COMP procedure alone will not guarantee identification of the target set frequently enough and must be supplemented by further analysis of the positive results. A Monte Carlo sketch of this efficiency estimate is given below.
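The sketch below estimates the efficiency ratio by brute force (parameter values are illustrative; small $n$ and a modest number of trials keep the enumeration of all ${n\choose d}$ candidate groups feasible).

```python
import random
from itertools import combinations

def outcomes(design, T):
    return tuple(int(bool(set(x) & T)) for x in design)

def comp_efficiency(n, s, d, N, trials=200):
    separated = comp_exact = 0
    for _ in range(trials):
        design = [random.sample(range(n), s) for _ in range(N)]
        T = set(random.sample(range(n), d))
        y = outcomes(design, T)
        # T is separated iff no other d-subset produces the same outcome vector.
        if all(outcomes(design, set(Tp)) != y
               for Tp in combinations(range(n), d) if set(Tp) != T):
            separated += 1
            # COMP output: remove every item that occurs in a negative test.
            poss = set(range(n))
            for x, yi in zip(design, y):
                if yi == 0:
                    poss -= set(x)
            comp_exact += (poss == T)  # COMP exact implies separation
    return comp_exact / separated if separated else float('nan')

print(comp_efficiency(n=20, s=5, d=3, N=30))
```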
Figure 16: Binary model with $n=50,s=11$.
Figure 17: Binary model with $n=100,s=21$.
### 4.7 Binary group testing with lies
As discussed in Section 2.3, the results of this paper can be extended to the case where several lies are allowed by introducing the final sum on the right-hand side of (2.10). As an example, we shall provide a generalisation of part one of Corollary 3.
###### Corollary 5
Let the test function be defined by (4.3). Let ${\cal T}={\cal P}_{n}^{d}$ and
${\cal D}={\cal P}_{n}^{s}$, where $n\geq 2$, $1\leq d<n$, $1\leq s<n$ and
suppose at most $L$ lies are allowed. Let ${\mathbb{Q}}$ and ${\mathbb{R}}$ be
uniform distributions on ${\cal T}$ and ${\cal D}$ respectively. For a fixed
$N\geq 1$, let ${D}_{N}=\\{X_{1},\ldots,X_{N}\\}$ be a random $N$-point design with each $X_{i}\in{D}_{N}$ chosen independently and ${\mathbb{R}}$-distributed. Then $\gamma^{*}({\mathbb{Q}},{\mathbb{R}},N)$ for
the $L$-lie problem can be obtained from (3.4) by replacing
$\displaystyle\frac{K({\cal P}_{n}^{s},n,d,d,p)}{{n\choose
s}}=1-2\cdot\frac{{{n-d}\choose{s}}-{{n-2d+p}\choose{s}}}{{{n}\choose{s}}}\,$
with
$\displaystyle\sum_{l=0}^{2L}{{N}\choose{l}}\left(1-2\cdot\frac{{{n-d}\choose{s}}-{{n-2d+p}\choose{s}}}{{{n}\choose{s}}}\right)^{N-l}\left(2\cdot\frac{{{n-d}\choose{s}}-{{n-2d+p}\choose{s}}}{{{n}\choose{s}}}\right)^{l}\,.$
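In code, the replacement term is straightforward to evaluate; a small Python sketch (with the convention ${a\choose b}=0$ for $a<b$ or $a<0$, and illustrative parameter values) follows.

```python
from math import comb

def C(a, b):
    # Binomial coefficient with C(a, b) = 0 for a < 0 or b > a.
    return comb(a, b) if 0 <= b <= a else 0

def p_same(n, d, s, p):
    # (4.4): 1 - 2 (C(n-d, s) - C(n-2d+p, s)) / C(n, s)
    return 1 - 2 * (C(n - d, s) - C(n - 2 * d + p, s)) / comb(n, s)

def lie_replacement(n, d, s, p, N, L):
    # Replacement term of Corollary 5: Pr{d_H(A_i, A_j) <= 2L} for a fixed pair.
    a = p_same(n, d, s, p)
    return sum(comb(N, l) * a**(N - l) * (1 - a)**l for l in range(2 * L + 1))

print(lie_replacement(n=20, d=3, s=5, p=1, N=60, L=1))
```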
In Tables 12 and 13, we document the values of $N_{\gamma}$ obtained from Corollary 5 for $L=1$ and $L=2$ respectively, for several choices of $s$ and $n$. Comparing these tables with Table 5, we see a significant increase in the number of tests needed when lies are present. In Figures 18–19, using red crosses we depict ${\rm Pr}_{{\mathbb{Q}},{\mathbb{R}}}\\{T\textrm{ can be uniquely identified by }{D}_{N}\textrm{ with at most $1$ lie}\\}$ as a function of $N$. This has been obtained from Monte Carlo methods with $50{,}000$ repetitions. With the black dots we plot the value of $1-\gamma^{*}$ as a function of $N_{\gamma}$ obtained via Corollary 5. In these figures we have set $s=n/4$ on the basis of Table 12. We see once again that for small values of $\gamma$, the value of $\gamma^{*}$ is very close to $\gamma$ (equivalently, $N_{\gamma^{*}}$ is very close to $N_{\gamma}$). For larger values of $\gamma$, $\gamma^{*}$ is very conservative.
$\gamma=0.01$:

$\lambda$ | $n=10$ | $n=20$ | $n=50$ | $n=100$
---|---|---|---|---
0.05 | 56 | 126 | 130 | 166
0.10 | 56 | 73 | 87 | 95
0.15 | 41 | 52 | 66 | 72
0.20 | 41 | 52 | 61 | 66
0.25 | 44 | 51 | 59 | 64
0.30 | 44 | 53 | 61 | 66
0.35 | 59 | 67 | 68 | 71
0.40 | 59 | 67 | 75 | 81
0.45 | 98 | 81 | 92 | 94
0.50 | 98 | 101 | 109 | 115

$\gamma=0.05$:

$\lambda$ | $n=10$ | $n=20$ | $n=50$ | $n=100$
---|---|---|---|---
0.05 | 47 | 108 | 113 | 145
0.10 | 47 | 63 | 76 | 83
0.15 | 34 | 44 | 58 | 63
0.20 | 34 | 44 | 53 | 58
0.25 | 37 | 44 | 52 | 56
0.30 | 37 | 46 | 53 | 58
0.35 | 50 | 58 | 59 | 63
0.40 | 50 | 58 | 66 | 71
0.45 | 83 | 69 | 81 | 83
0.50 | 83 | 87 | 96 | 102

Table 12: Values of $N_{\gamma}$ for binary model with $d=3$, $L=1$, $s=\lceil\lambda n\rceil$, various $n$ and $\lambda$.
$\gamma=0.01$:

$\lambda$ | $n=10$ | $n=20$ | $n=50$ | $n=100$
---|---|---|---|---
0.05 | 73 | 163 | 166 | 210
0.10 | 73 | 94 | 111 | 120
0.15 | 53 | 67 | 84 | 91
0.20 | 53 | 67 | 78 | 84
0.25 | 57 | 66 | 76 | 81
0.30 | 57 | 69 | 79 | 84
0.35 | 77 | 87 | 87 | 91
0.40 | 77 | 87 | 96 | 102
0.45 | 127 | 104 | 118 | 120
0.50 | 127 | 131 | 139 | 146

$\gamma=0.05$:

$\lambda$ | $n=10$ | $n=20$ | $n=50$ | $n=100$
---|---|---|---|---
0.05 | 64 | 143 | 147 | 188
0.10 | 64 | 83 | 99 | 108
0.15 | 46 | 59 | 75 | 82
0.20 | 46 | 59 | 69 | 75
0.25 | 50 | 58 | 68 | 73
0.30 | 50 | 61 | 70 | 75
0.35 | 67 | 77 | 78 | 81
0.40 | 67 | 77 | 86 | 92
0.45 | 111 | 92 | 105 | 107
0.50 | 111 | 115 | 124 | 131

Table 13: Values of $N_{\gamma}$ for binary model with $d=3$, $L=2$, $s=\lceil\lambda n\rceil$, various $n$ and $\lambda$.
Figure 18: Lies; binary model with $n=20,L=1,s=5$.
Figure 19: Lies; binary model with $n=50,L=1,s=13$.
## 5 Asymptotic results
To start this section, let us make a general comment about asymptotic expansions in group testing. In most of the known expansions (usually based on the use of Bernoulli designs), the authors are interested in the main asymptotic term only. We believe this is not enough if the asymptotic expansions are intended to be used as (even rough) approximations; see, for example, the discussion in Section 5.4 on the asymptotic existence bound in the case of weak recovery in the binary model. All our expansions in the very sparse regime (that is, for fixed $d$) are accurate up to the constant term, which we have confirmed by numerous numerical studies. As a result, all our sparse-regime asymptotic expansions can be used as rather accurate approximations already for moderate values of $n$ such as $n=1000$. Typically, this is not so if only the leading term in the expansion is kept. The situation in the sparse regime (when $d\to\infty$ but $d/n\to 0$ as $n\to\infty$) is different and depends on the rate of increase of $d$. If $d$ increases as $\log n$, then once again our expansions are rather accurate up to the constant term. However, if $d=n^{\beta}+o(1)$ as $n\to\infty$ with some $0<\beta<1$, then we can usually guarantee only the leading term in the expansions, and hence the expansions are of little use for deriving approximations. Moreover, our technique completely fails when $d$ grows like ${\rm const}\cdot n$ as $n\to\infty$.
### 5.1 Technical results
The main technical result used for the derivation of asymptotic upper bounds in the error-free environment (no lies), for both exact and weak recovery, is Theorem 5.1 of Zhigljavsky (2003), which we formulate below as Theorem 5.1. This theorem is especially useful in the case when ${{\cal D}}={\cal P}_{n}^{s}$ with $s=s(n)=\lambda n+o(1)$ (here $0<\lambda<1$ and $n\to\infty$) and ${{\cal T}}$ is either ${\cal P}_{n}^{d}$ or ${\cal P}_{n}^{\leq d}$ with $d$ fixed (that is, for the very sparse regime). As we show below, some results can be extended to a sparse regime where $d\to\infty$ but $d/n\to 0$ as $n\to\infty$. However, unless $d$ tends to infinity very slowly (like $\log n$, for example), we lose the very attractive feature of the expansions, which is the correct constant term.
We are not confident that Theorem 5.1 can be applied to the problem of binomial group testing. There are also some extra technical difficulties in applying this theorem to Bernoulli designs. At least, we cannot get the constant term $c$ in (5.4) for Bernoulli designs (for these designs, the main term $C\log n$ is the same as for our main case ${{\cal D}}={\cal P}_{n}^{s}$ with $s=\lambda n+o(1)$ and suitable $\lambda$).
###### Theorem 5.1
Let $I$ be some integer, let $c_{i},r_{i},\alpha_{i}$ $(i=1,\ldots,I)$ be real numbers with $c_{i}>0$, $0<r_{i}<1$ and at least one $\alpha_{i}$ positive, and let $\\{q_{i,n}\\}$, $\\{r_{i,n}\\}$ be families of positive numbers $(i=1,\ldots,I)$ such that $0<r_{i,n}<1$ for all $i$ and
$\displaystyle q_{i,n}=c_{i}n^{\alpha_{i}}(1+o(1)),\quad r_{i,n}=r_{i}+o\left(\frac{1}{\log n}\right)\;\;\;{\rm as}\;\;n\rightarrow\infty\,.$ (5.1)
Define $M(n)$ as the solution (with respect to $M$) of the equation $\sum^{I}_{i=1}q_{i,n}r_{i,n}^{M}=1$ and set
$\displaystyle N(n)=\min\left\\{k=1,2,\ldots\;\mbox{\rm such that }\;\sum^{I}_{i=1}q_{i,n}r_{i,n}^{k}<1\right\\}\,,$ (5.2)
$\displaystyle C=\max_{i=1,\ldots,I}\;\frac{\alpha_{i}}{-\log r_{i}}\,.$ (5.3)
Finally, let $c$ be the solution of the equation $\sum_{j\in{\cal J}}c_{j}r_{j}^{c}=1$, where ${\cal J}$ is the subset of the set $\\{1,\ldots,I\\}$ at which the maximum in (5.3) is attained. Then $N(n)=\lfloor M(n)\rfloor+1$ and
$\displaystyle M(n)=C\log n+c+o(1)\quad{\rm as}\quad n\rightarrow\infty\,.$ (5.4)
Note that $C$ and $c$ in (5.4) are constants in the sense that they do not depend on $n$. Extensive numerical results for exact and weak recovery in the binary, additive and multichannel models show that the resulting asymptotic formula (5.4) (in the cases ${{\cal D}}={\cal P}_{n}^{s}$ and ${{\cal T}}={\cal P}_{n}^{d}$ or ${{\cal T}}={\cal P}_{n}^{\leq d}$) is very accurate even for moderate values of $n$. In fact, in all these cases the difference $N(n)-[C\log n+c]$ tends to zero very fast (as $n\to\infty$) as long as $d$ is not too large (here $N(n)$ is the upper bound in any of the existence theorems and is defined in (5.2)). In the sparse regime, when $d\to\infty$ (but $d/n\to 0$), the approximation $N(n)\simeq C\log n+c$ is still accurate, but $n$ has to be significantly larger for the approximation error to be close to zero. To distinguish the cases of exact recovery ($\gamma=0$) and weak recovery ($\gamma>0$) we shall write $M_{0}(n)$ for the upper bound (5.4) in the case of exact recovery and $M_{\gamma}(n)$ in the case of weak recovery. A small numerical sketch of computing $N(n)$ from (5.2) is given below.
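As a small illustration of how (5.2) is used, the following Python sketch computes $N(n)$ for given families $q_{i,n}$, $r_{i,n}$ and compares it with $C\log n+c$ from (5.4). The single-index example values $c_{1}=0.5$, $\alpha_{1}=2$, $r_{1}=0.25$ are hypothetical placeholders; in applications these quantities come from the existence bound under consideration.

```python
import math

def N_of_n(q, r):
    # (5.2): the smallest k with sum_i q_i * r_i**k < 1  (assumes 0 < r_i < 1).
    k = 1
    while sum(qi * ri**k for qi, ri in zip(q, r)) >= 1:
        k += 1
    return k

c1, a1, r1 = 0.5, 2.0, 0.25  # hypothetical constants with I = 1
for n in (100, 1000, 10000):
    q = [c1 * n**a1]
    C = a1 / (-math.log(r1))             # (5.3), natural logarithms
    c = math.log(c1) / (-math.log(r1))   # solves c1 * r1**c = 1
    # N(n) = floor(M(n)) + 1 with M(n) = C log n + c + o(1), cf. (5.4)
    print(n, N_of_n(q, [r1]), C * math.log(n) + c)
```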
As follows from Theorem 3.2 and Corollary 1 of Section 3.1 for weak recovery (similar considerations hold for exact recovery), in the cases ${{\cal D}}={\cal P}_{n}^{s}$ and either ${{\cal T}}={\cal P}_{n}^{d}$ or ${{\cal T}}={\cal P}_{n}^{\leq d}$, the existence bounds have the form (5.2). Establishing the asymptotic relations (5.1), from which everything else follows, is usually a straightforward application of the following two simple asymptotic formulas (see Lemmas 5.1 and 5.2 in Zhigljavsky (2003)).
* (a)
Let $n\rightarrow\infty$, let $u$ and $w$ be positive integers and $s=\lambda n+O(1)$ as $n\rightarrow\infty$ ($0<\lambda<1$). Then
$\displaystyle{\left({n-w}\atop{s-u}\right)}\big{/}{\left({n}\atop{s}\right)}=\lambda^{u}(1-\lambda)^{w-u}+O\left({1}/{n}\right)\;\;{\rm as}\;\;n\rightarrow\infty\,.$
* (b)
Let $Q(n,l,m,p)$ be as in (2.22), let $p$, $m$, $l$ be fixed and $n\rightarrow\infty$. Then
$\displaystyle Q(n,l,m,p)=c_{l,m,p}\cdot n^{l+m-p}\left(1+O\left({1}/{n}\right)\right),\quad n\rightarrow\infty\,,$
$\displaystyle{\rm with}\;\;\;\;c_{l,m,p}=\left\\{\begin{array}[]{ll}{1}/\left[{p!\,(m-p)!\,(l-p)!}\right]&{\rm if}\;\;m\neq l\,,\\\ {1}/\left[{2\,p!\,((m-p)!)^{2}}\right]&{\rm if}\;\;m=l\,.\end{array}\right.$
The set ${\cal J}$ of Theorem 5.1 determines the set (or sets) ${\cal T}(n,l,m,p)$ (see (2.18)) of pairs of target groups $(T,T^{\prime})$ which are the most difficult to separate by the random design. Theorem 5.1 establishes that by the time the pairs from these sets ${\cal T}(n,l,m,p)$ are separated (in the case of weak recovery, with probability $1-\gamma$), the pairs $(T,T^{\prime})$ from all other sets ${\cal T}(n,l,m,p)$ will automatically be separated with much higher probability, infinitely close to 1. In most cases, the set ${\cal J}$ defined in Theorem 5.1 contains just one index, and hence computation of the constant $c$ in (5.4) is immediate. Even if this is not the case, as in (5.13) below, a very accurate approximation to the exact value of $c$ can easily be found.
### 5.2 Additive model
For the additive model, the case ${\cal T}={\cal P}_{n}^{\leq d}$ is not very interesting (the same applies to binomial testing), as we can make an initial test with all items included in the test group and hence determine the total number of defectives. Therefore, we only consider the case ${\cal D}={\cal P}_{n}^{s}$, ${\cal T}={\cal P}_{n}^{d}$. Assume $n\rightarrow\infty$, $s=s(n)=\lambda n+O(1)$ as $n\rightarrow\infty$, $0<\gamma<1$. The optimal value of $\lambda$ is $1/2$, both for weak and exact recovery. For $\lambda=1/2$, ${\cal J}$ consists of the single index corresponding to $l=m=d$ and $p=0$. This gives, for exact and weak recovery respectively:
$\displaystyle M_{0}(n)=(d+1)\log_{2}n-\log_{2}(d-1)!-1+o(1)\quad{\rm as}\quad n\rightarrow\infty\,,$ (5.6)
$\displaystyle M_{\gamma}(n)=\frac{d\log_{2}n-\log_{2}(d!\,\gamma)}{2d-\log_{2}((2d)!)+2\log_{2}(d!)}+o(1)\quad{\rm as}\quad n\rightarrow\infty\,.$ (5.7)
The asymptotic expressions (5.6) and (5.7) first appeared as (Zhigljavsky and Zabalkanskaya, 1996, Corollary 5.1). Let us make some observations from analyzing formulas (5.6) and (5.7).
First, the denominator $F(d)={2d-\log_{2}((2d)!)+2\log_{2}(d!)}$ in (5.7) is monotonically increasing in $d$ from $F(2)=3-\log_{2}3\simeq 1.415$ to $\infty$. This implies that the problem of exact recovery is much harder than the problem of weak recovery, and the ratio of the leading coefficients in (5.6) and (5.7) tends to infinity as $d$ increases. This also shows the diminishing role of $\gamma$ in (5.7) and the possibility of allowing $\gamma$ to decrease slowly as $d$ increases.
Second, the asymptotic expansion of $F(d)$ at $d=\infty$ is $F(d)=\frac{1}{2}\log_{2}(\pi d)+O\left(1/{d}\right)$, with the respective approximation $F(d)\simeq\frac{1}{2}\log_{2}(\pi d)$ being very accurate for all $d$. Stirling's formula also gives $\log_{2}(d!)=d\log_{2}(d/e)+\frac{1}{2}\log_{2}(2\pi d)+O\left(1/{d}\right)$ as $d\to\infty$. This allows us to write the following asymptotic version of (5.7) in the sparse regime with $d=n^{\beta}+O(1)$ and $0<\beta<1$:
$\displaystyle M_{\gamma}(n)=\frac{n^{\beta}(1+2(1-\beta)\log n)}{\log(\pi n^{\beta})}+O(1)\;\;{\rm as}\;\;n\rightarrow\infty\,.$
The sparse-regime version of (5.6) is very transparent and needs only the expansion $\log_{2}((d-1)!)=d\log_{2}(d/e)+\frac{1}{2}\log_{2}(2\pi/d)+O\left(1/{d}\right)$ as $d\to\infty$. Thus, for $d=\lfloor n^{\beta}\rfloor$ with $0<\beta<1$ we obtain
$\displaystyle M_{0}(n)=(\lfloor n^{\beta}\rfloor+1+\beta/2)\log_{2}n+O(1)\;\;\;{\rm as}\;\;n\rightarrow\infty\,.$
### 5.3 Binary model, exact recovery
Consider first the case of exact recovery in the binary model with ${\cal T}={\cal P}_{n}^{d}$, ${\cal D}={\cal P}_{n}^{s}$ and $s=s(n)=\lambda n+O(1)$. From Corollary 5.2 in Zhigljavsky (2003) we obtain the following: the optimal value of $\lambda$ is $\lambda=1/(d+1)$, for which the set ${\cal J}$ of Theorem 5.1 consists of one index corresponding to $l=m=d$ and $p=d-1$; this gives
$\displaystyle M_{0}(n)=\frac{(d+1)\log_{2}n-\log_{2}(d-1)!-1}{-\log_{2}\left(1-{2d^{d}}/{(d+1)^{d+1}}\right)}\,+o(1)\quad{\rm as}\quad n\rightarrow\infty\,.$ (5.8)
The numerator in (5.8) coincides with the rhs in (5.6). The denominator in the rhs of (5.8), $G(d):=-\log_{2}\left(1-{2d^{d}}/{(d+1)^{d+1}}\right)$, provides the coefficient characterizing the complexity of the binary model relative to the additive one. The function $G(d)$ monotonically decreases from $G(2)\simeq 0.507$ to 0, with $G(d)=2/[de\log 2]+O\left({d}^{-2}\right)$ for large $d$. This gives the following sparse-regime version of (5.8) ($d=\lfloor n^{\beta}\rfloor$, $0<\beta<1/2$):
$\displaystyle M_{0}(n)=\lfloor n^{\beta}\rfloor e\log\sqrt{2}\left[(\lfloor n^{\beta}\rfloor+1+\beta/2)\log_{2}n\right]+O(1)\;\;{\rm as}\;\;n\rightarrow\infty\,.$ (5.9)
Consider now the case of exact recovery in the binary model with ${\cal T}={\cal P}_{n}^{\leq d}$, $d>2$, ${\cal D}={\cal P}_{n}^{s}$ and $s=s(n)=\lambda n+O(1)$. From Corollary 5.3 in Zhigljavsky (2003) we obtain the following: the optimal value of $\lambda$ is $\lambda=1/d$, for which the set ${\cal J}$ of Theorem 5.1 consists of one index corresponding to $l=d$ and $m=p=d-1$; this gives
$\displaystyle M_{0}(n)=\frac{d\log_{2}n-\log_{2}(d-1)!}{-\log_{2}\left(1-{(d-1)^{d-1}}/{d^{d}}\right)}\,+o(1)\quad{\rm as}\quad n\rightarrow\infty\,.$ (5.10)
The denominator $H(d):=-\log_{2}\left(1-{(d-1)^{d-1}}/{d^{d}}\right)$ in the rhs of (5.10) is noticeably smaller than the denominator $G(d)$ in the rhs of (5.8). For large $d$, we have $H(d)=1/[(d-1)e\log 2]+O\left({d}^{-2}\right)$. This gives the following sparse-regime version of (5.10) for ${\cal T}={\cal P}_{n}^{\leq d}$ and $d=\lfloor n^{\beta}\rfloor$ with $0<\beta<1/2$:
$\displaystyle M_{0}(n)=\lfloor n^{\beta}-1\rfloor e\log{2}\left[(\lfloor n^{\beta}\rfloor+\beta/2)\log_{2}n\right]+O(1)\;\;{\rm as}\;\;n\rightarrow\infty\,.$ (5.11)
Comparing (5.9) with (5.11), we conclude that in the sparse regime with $d\to\infty$ the problem of exact recovery in the binary model with ${\cal T}={\cal P}_{n}^{\leq d}$ is approximately twice as hard as in the case ${\cal T}={\cal P}_{n}^{d}$, in the sense that it requires approximately twice as many tests to guarantee exact recovery of all defectives.
### 5.4 Binary model, weak recovery
Consider now the case of weak recovery; the non-asymptotic version is considered in Corollary 3. Assume that ${\cal T}$ is either ${\cal P}_{n}^{d}$ or ${\cal P}_{n}^{\leq d}$, $d\geq 2$, $0<\gamma<1$, ${\cal D}={\cal P}_{n}^{s}$, $s=s(n)=\lambda n+O(1)$ as $n\rightarrow\infty$. Then the optimal value of $\lambda$ is $\lambda=1-2^{-1/d}$; for this value of $\lambda$ the set ${\cal J}$ of Theorem 5.1 consists of $d$ indices corresponding to $l=m=d$ and $p=0,1,\ldots,d-1$;
$\displaystyle M_{\gamma}(n)=d\log_{2}n+c+o(1)\quad{\rm as}\quad n\rightarrow\infty\,,$ (5.12)
where $c=c(\gamma,d)$ is the solution of the equation
$\displaystyle\sum_{p=0}^{d-1}2^{-c(d-p)/d}\,\frac{d!}{p!\,((d-p)!)^{2}}=\gamma\,.$ (5.13)
Numerical results show that the asymptotic expansion (5.12) provides an approximation $N_{\gamma}(n)\simeq d\log_{2}n+c$ which is extremely accurate even for moderate values of $n$ such as $n=10^{3}$.
By comparing (5.12) with (5.8) and (5.10), we conclude that in the binary model weak recovery (for any $0<\gamma<1$) is a much simpler problem than exact recovery.
Since the set ${\cal J}$ of Theorem 5.1 consists of $d$ indices rather than one, the constant $c$ is the solution of an equation containing $d$ summands, see (5.13). Although formally we cannot neglect any of the terms in (5.13), keeping just the term with $p=d-1$ provides an easily computable but rather accurate lower bound for $c$: $c\geq c_{\ast}=d\log_{2}(d/\gamma)\,.$ Table 14 shows that the loss of precision in (5.12) due to substituting $c_{\ast}=d\log_{2}(d/\gamma)$ for $c$ is minimal. As a by-product, Table 14 shows that neglecting the constant term in asymptotic expressions like (5.12) would make such asymptotic formulas totally impractical, as in practice $n$ is rarely astronomically large. A small numerical sketch comparing $c$ and $c_{\ast}$ is given after Table 14.
$d$ | 2 | 3 | 5 | 10 | 20 | 30 | 40 | 50
---|---|---|---|---|---|---|---|---
$c$ | 13.295 | 21.701 | 39.858 | 89.722 | 199.45 | 316.73 | 438.91 | 564.74
$c_{\ast}$ | 13.288 | 21.686 | 39.829 | 89.657 | 199.31 | 316.53 | 438.64 | 564.38
Table 14: Values of $c$ defined as the solution of (5.13) and of $c_{\ast}=d\log_{2}(d/\gamma)$ for $\gamma=0.02$ and different values of $d$.
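The constant $c$ is easy to compute numerically since the left-hand side of (5.13) is decreasing in $c$. The following Python sketch solves (5.13) by bisection and reproduces the first columns of Table 14 for $\gamma=0.02$.

```python
import math

def c_star(d, gamma):
    # Lower bound obtained by keeping only the p = d - 1 term in (5.13).
    return d * math.log2(d / gamma)

def c_exact(d, gamma, lo=0.0, hi=1e4, iters=100):
    # Solve sum_{p=0}^{d-1} 2^{-c(d-p)/d} * d! / (p! ((d-p)!)^2) = gamma by bisection.
    def lhs(c):
        return sum(2 ** (-c * (d - p) / d) * math.factorial(d)
                   / (math.factorial(p) * math.factorial(d - p) ** 2)
                   for p in range(d))
    for _ in range(iters):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if lhs(mid) > gamma else (lo, mid)
    return (lo + hi) / 2

for d in (2, 3, 5, 10):
    print(d, round(c_exact(d, 0.02), 3), round(c_star(d, 0.02), 3))
    # e.g. d = 2 gives c = 13.295 and c_* = 13.288, as in Table 14
```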
As perhaps the main conclusion of this section, we offer the following approximation for $N_{\gamma}$ in the binary model with ${\cal D}={\cal P}_{n}^{s}$, ${\cal T}={\cal P}_{n}^{d}$ or ${\cal T}={\cal P}_{n}^{\leq d}$, and $s$ chosen asymptotically optimally as $s=\lfloor n(1-2^{-1/d})\rfloor$:
$\displaystyle N_{\gamma}(n)\simeq d\log_{2}n+d\log_{2}(d/\gamma)\,.$ (5.14)
If we use this formula and express $\gamma$ through $N_{\gamma}(n)$, then we get the approximation
$\displaystyle\gamma^{*}({\mathbb{Q}},{\mathbb{R}},N)\simeq 2^{-N/d}nd$ (5.15)
for the value $\gamma^{*}({\mathbb{Q}},{\mathbb{R}},N)$ of part one of Corollary 3. Formulas (5.14) and (5.15) connect all major parameters of interest, $n$, $d$, $N$ and $\gamma$, in one simple approximate relation. In particular, this relation clearly shows the allowed rates of increase of $d$ as a function of $n$ that guarantee the same or even decreasing $\gamma$. The approximation (5.14) is extremely accurate already for very moderate $n$ (say, $n\geq 200$) and not very large $d$. Rather surprisingly, the approximation (5.15) also becomes reasonably accurate for moderate $n$, as long as the r.h.s. of (5.15) is small enough. A very simple MAPLE code can provide such a comparison (with almost arbitrary computational precision) for values of $n$ up to $10^{6}$ and $d$ up to 20 or more; a Python sketch in the same spirit is given below. What is really important for formula (5.15) to reach high levels of accuracy is the value of $N$, which has to be large enough; this is consistent with the very high accuracy of (5.14) for large values of $N_{\gamma}(n)$.
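For instance, for $n=100$, $d=3$, $\gamma=0.01$, formula (5.14) gives $N_{\gamma}\approx 44.6$, in good agreement with the value 44 in Table 5 at the asymptotically optimal $\lambda=1-2^{-1/3}\approx 0.21$. A minimal Python sketch of this comparison:

```python
import math

def N_gamma_approx(n, d, gamma):
    # (5.14): N_gamma(n) ~ d log2 n + d log2(d / gamma)
    return d * math.log2(n) + d * math.log2(d / gamma)

def gamma_approx(n, d, N):
    # (5.15): gamma*(Q, R, N) ~ 2^{-N/d} n d
    return 2 ** (-N / d) * n * d

print(N_gamma_approx(100, 3, 0.01))  # ~44.6; compare with Table 5
print(gamma_approx(100, 3, 44))      # ~0.0115; close to gamma = 0.01
```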
### 5.5 Extensions to noisy testing
In Zhigljavsky (2010), a technique is developed for transforming the asymptotic upper bounds (5.4), obtained from the non-asymptotic expression (5.2), into upper bounds for $N$ in the same model when up to $L$ lies are allowed. Theorems 2 and 3 of Zhigljavsky (2010) imply that any asymptotic bound of the form (5.4) can be rewritten in the form
$\displaystyle N(n)=C\log n+c_{1}\log\log n+c_{0}(n)\,,$ (5.16)
where the constant $C$ is exactly the same as in (5.4) and the constant $c_{1}$ is computable from considerations very similar to those indicated in Theorem 5.1. The main difficulty in using the asymptotic expansion (5.16) as an approximation for finite $n$ is the rather complicated structure of the function $c_{0}(n)$, which is bounded (with a computable upper bound) but not monotonic in $n$. The first term in (5.16) dominates the asymptotic behaviour of $N(n)$. However, the constant $c_{1}$ is always larger than $C$ and, depending on the allowed number of lies $L$, can be very large. This makes the second term in (5.16) significantly more influential than the first term for practically relevant $n$ (assuming, for example, $L=5$). Moreover, for small or moderate values of $n$, the value of $c_{0}(n)$ can also be larger than the main asymptotic term $C\log n$.
## Appendix A: Proofs
#### Proof of Theorem 2.2
We are interested in computing the value of $\gamma^{*}$ which satisfies the following:
$\displaystyle{\rm Pr}_{{\mathbb{Q}},{\mathbb{R}}}\\{T\textrm{ can be uniquely identified by }{D}_{N}\textrm{ with at most $L$ lies}\\}=1-\gamma$
$\displaystyle=\sum_{i=1}^{|{\cal T}|}{\rm Pr}_{{\mathbb{R}}}\\{T_{i}\textrm{ can be uniquely identified by }{D}_{N}\textrm{ with at most $L$ lies}\\}\,{\rm Pr}_{{\mathbb{Q}}}\\{T=T_{i}\\}$
$\displaystyle=\sum_{i=1}^{|{\cal T}|}{\rm Pr}_{{\mathbb{R}}}\\{d_{H}(F_{T_{i}},F_{T_{j}})\geq 2L+1\text{ for all }j\neq i\\}\,{\rm Pr}_{{\mathbb{Q}}}\\{T=T_{i}\\}$
$\displaystyle=1-\sum_{i=1}^{|{\cal T}|}{\rm Pr}_{{\mathbb{R}}}\\{d_{H}(F_{T_{i}},F_{T_{j}})\leq 2L\text{ for at least one }j\neq i\\}\,{\rm Pr}_{{\mathbb{Q}}}\\{T=T_{i}\\}$
$\displaystyle\geq 1-\sum_{i=1}^{|{\cal T}|}{\rm Pr}_{{\mathbb{Q}}}\\{T=T_{i}\\}\sum_{j\neq i}{\rm Pr}_{{\mathbb{R}}}\\{d_{H}(F_{T_{i}},F_{T_{j}})\leq 2L\\}=1-\gamma^{*}\,.$
For a given design ${D}_{N}=\\{X_{1},\dots,X_{N}\\}$, consider the matrix
$\|f(X_{i},T_{j})\|_{i,j=1}^{N,|{\cal T}|}\,$ whose rows correspond to the
test sets $X_{i}$ and the columns correspond to the targets $T_{j}$. Denote
the columns of this matrix by $A_{j}$ ($j=1,\ldots,|{\cal T}|$).
Let $(X_{1},X_{2},\dots,X_{N})$ be a random sample from ${\cal D}$. Then for
any fixed pair $(i,j)$ such that $i\neq j$ $\,(i,j=1,\dots,|{\cal T}|)$ and
any integer $l$ $\,(0\leq l\leq N)$ we have
$\displaystyle\Pr\\{d_{H}(A_{i},A_{j})=l\\}={{N}\choose{l}}\left(p_{ij}\right)^{N-l}\left(1-p_{ij}\right)^{l}$
and therefore
$\displaystyle\Pr\\{d_{H}(A_{i},A_{j})\leq 2L\\}=\sum_{l=0}^{2L}{{N}\choose{l}}\left(p_{ij}\right)^{N-l}\left(1-p_{ij}\right)^{l}\,.\qquad\Box$
#### Proof of Theorem 3.1
Let $(T_{i},T_{j})\in{\cal T}(n,l,m,p)$ and $a$ be some integer. Introduce the
sets
$\displaystyle{\cal D}^{a,a}=\\{X\in{\cal D}:\,|X\cap T_{i}|=a,\,|X\cap
T_{j}|=a\\}\,,$ $\displaystyle{\cal D}^{a,>a}=\\{X\in{\cal D}:\,|X\cap
T_{i}|=a,\,|X\cap T_{j}|>a\\}\,,$ $\displaystyle{\cal D}^{>a,a}=\\{X\in{\cal
D}:\,|X\cap T_{i}|>a,\,|X\cap T_{j}|=a\\}\,.$
Recall that $k_{ij}=\left|\\{X\in{\cal D}:\,f(X,T_{i})=f(X,T_{j})\\}\right|$ and $f(X,T){=}\min\\{{h},|X\cap T|\\}$.
We have the equality $f(X,T_{i})=f(X,T_{j})\,$ if and only if one of the three
following cases occurs: (i) $X\in{\cal D}^{a,a}$ for some $a\geq 0$; (ii)
$X\in{\cal D}^{a,>a}$ for some $a\geq{h}$; (iii) $X\in{\cal D}^{>a,a}$ for
some $a\geq{h}$. Therefore,
$\displaystyle k_{ij}=\sum_{a\geq 0}|{\cal D}^{a,a}|+\sum_{a\geq{h}}|{\cal
D}^{a,>a}|+\sum_{a\geq{h}}|{\cal D}^{>a,a}|.$ (5.17)
The integers $n$, $m$, $l$, $p$, $u$, $v$ and $r$ then satisfy the constraints (2.25). Using these constraints and the definition of the coefficients $R(\cdot)$, see (2.24), we can re-express the sums on the right-hand side of (5.17) as follows:
$\sum_{a\geq 0}|{\cal
D}^{a,a}|=\sum_{r{=}0}^{p}\sum_{u{=}0}^{m{-}p}R(n,l,m,p,u,u,r)\,,$
$\sum_{a\geq{h}}|{\cal
D}^{a,>a}|=\sum_{r{=}0}^{p}\sum_{u{=}w}^{l{-}p}\sum_{v{=}u{+}1}^{m{-}p}R(n,l,m,p,u,v,r)\,,$
where $w=\max\\{0,{h}-r\\}$, and analogously
$\sum_{a\geq{h}}|{\cal
D}^{>a,a}|=\sum_{r{=}0}^{p}\sum_{v{=}w}^{m{-}p}\sum_{u{=}v{+}1}^{l{-}p}R(n,l,m,p,u,v,r)\,.$
By substituting this into (5.17) we get (3.1). To finish the proof we just
need to mention that the above calculation does not depend on the choice of
the pair $(T_{i},T_{j})\in{\cal T}(n,l,m,p)$ since ${\cal D}={\cal P}_{n}^{s}$
is balanced. $\Box$
#### Proof of Theorem 3.2
Let ${D}_{N}=\\{X_{1},\ldots,X_{N}\\}$ be an ${\mathbb{R}}$-distributed random
design and let $T$ be ${\mathbb{Q}}$-distributed. For some $0<\gamma<1$, we
have ${\rm Pr}_{{\mathbb{Q}},{\mathbb{R}}}\\{T\textrm{ is separated by
}{D}_{N}\\}=1-\gamma.$
Let ${\cal P}_{N}={\rm Pr}_{{\mathbb{Q}},{\mathbb{R}}}\\{T\mbox{ is not
separated by }{D}_{N}\\}.$ Then ${\rm
Pr}_{{\mathbb{Q}},{\mathbb{R}}}\\{T\textrm{ is separated by
}{D}_{N}\\}=1-{\cal P}_{N}.$ By conditioning on $T\in{\cal P}_{n}^{{b}}$, for
$0\leq{b}\leq d$, and ${\mathbb{B}}$-distributed random variable $\xi$ we have
$\displaystyle{\cal P}_{N}={\rm Pr}_{{\mathbb{Q}},{\mathbb{R}}}\\{T\mbox{ is
not separated by }{D}_{N}\\}=\sum\limits_{{b}=0}^{d}P_{N,n,{b}}({\cal D}){\rm
Pr}_{\mathbb{B}}\\{\xi={b}\\}\,,$
where $P_{N,n,{b}}({\cal D})$ is the probability
$\displaystyle P_{N,n,{b}}({\cal D})={\rm
Pr}_{{\mathbb{Q}},{\mathbb{R}}}\\{T\mbox{ is not separated by
}{D}_{N}|\,|T|={b}\\}\,.$
Since ${\cal D}$ is balanced, the probability $P_{N,n,{b}}({\cal D})$ is
correctly defined; that is, it does not depend on the choice of a particular
$T$ such that $|T|={b}$.
For a pair $(T,T^{\prime})\in{\cal T}\times{\cal T}$ of different targets, set
$P(N,T,T^{\prime})$ to be the probability of the event that $T$ and
$T^{\prime}$ are not separated after $N$ random tests. If $T=T_{i}$ and
$T^{\prime}=T_{j}$ then, in the notation of Section 2.1,
$P(1,T,T^{\prime})=p_{ij}=k_{ij}/{{n\choose s}}$, where $k_{ij}$ are the Rényi
coefficients and $P(N,T,T^{\prime})=(P(1,T,T^{\prime}))^{N}$.
For a fixed $T$ with $|T|={b}$, the probability $P_{N,n,{b}}({\cal D})$ that after $N$ random tests $T$ is not separated from all $T^{\prime}\neq T$ satisfies $P_{N,n,{b}}({\cal D})\leq Q_{N,n,{b}}({\cal D})$, where
$\displaystyle Q_{N,n,{b}}({\cal D})=\min\\{1,\sum_{T^{\prime}\neq
T}P(N,T,T^{\prime})\\}=\min\\{1,S_{1}+S_{2}+S_{3}\\}\,.$
Here
$S_{1}=\sum_{T^{\prime}:|T^{\prime}|<{b}}P(N,T,T^{\prime}),\;\;S_{2}=\sum_{T^{\prime}\neq
T,|T^{\prime}|={b}}P(N,T,T^{\prime}),\;\;S_{3}=\sum_{T^{\prime}:|T^{\prime}|>{b}}P(N,T,T^{\prime})\,.$
One can show that
$S_{1}=\frac{1}{{{n}\choose{{b}}}}\sum_{m=0}^{{b}-1}\;\sum_{p=0}^{m}\\!Q(n,{b},m,p)\left(\frac{K({\cal
P}_{n}^{s},n,{b},m,p)}{{n\choose s}}\right)^{N}\,,$
$S_{2}=\frac{2}{{{n}\choose{{b}}}}\sum_{p=0}^{{b}-1}\\!Q(n,{b},{b},p)\left(\frac{K({\cal
P}_{n}^{s},n,{b},{b},p)}{{n\choose s}}\right)^{N}\,,\,$
and
$S_{3}=\frac{1}{{{n}\choose{{b}}}}\sum_{m={b}+1}^{d}\;\sum_{p=0}^{b}\\!Q(n,{b},m,p)\left(\frac{K({\cal
P}_{n}^{s},n,m,{b},p)}{{n\choose s}}\right)^{N}\,.$
Using the definition of $q_{{\cal D},n,{b},m,p}$ we obtain
$S_{1}+S_{2}+S_{3}=\frac{1}{{{n}\choose{{b}}}}\sum_{m=0}^{d}\;\sum_{p=0}^{\min\\{{b},m\\}}{\textstyle{{n}\choose{p\;m-p\;{b}-p\;n-{b}-m+p}}}q^{N}_{{\cal
D},n,{b},m,p}\,.$
From the inequality
${\cal P}_{N}=\sum_{{b}=0}^{d}{\rm
Pr}_{\mathbb{B}}\\{\xi={b}\\}P_{N,n,{b}}({\cal D})\leq\sum_{{b}=0}^{d}{\rm
Pr}_{\mathbb{B}}\\{\xi={b}\\}Q_{N,n,{b}}({\cal D})\,=\sum_{{b}=0}^{d}{\rm
Pr}_{\mathbb{B}}\\{\xi={b}\\}\min\\{1,S_{1}\\!+\\!S_{2}\\!+\\!S_{3}\\}\,,$
we obtain:
$\displaystyle{\rm Pr}_{{\mathbb{Q}},{\mathbb{R}}}\\{T\textrm{ is separated by }{D}_{N}\\}=1-\gamma\geq 1-\sum_{{b}=0}^{d}{\rm Pr}_{\mathbb{B}}\\{\xi={b}\\}Q_{N,n,{b}}({\cal D})$
$\displaystyle=1-\sum_{{b}=0}^{d}{\rm Pr}_{\mathbb{B}}\\{\xi={b}\\}\min\\{1,S_{1}+S_{2}+S_{3}\\}=1-\gamma^{*}\,.\qquad\Box$
#### Proof of Theorem 4.1
Rewriting (3.1) for ${h}=1$ we obtain
$\displaystyle K({\cal D},n,l,m,p)=\sum_{r=0}^{p}\sum_{u=0}^{m-p}R(n,l,m,p,u,u,r)+\sum_{r=1}^{p}\sum_{u=0}^{l-p}\sum_{v=u+1}^{m-p}R(n,l,m,p,u,v,r)$
$\displaystyle+\sum_{r=1}^{p}\sum_{u=0}^{m-p}\sum_{v=u+1}^{l-p}R(n,l,m,p,v,u,r)+\sum_{u=1}^{l-p}\sum_{v=u+1}^{m-p}R(n,l,m,p,u,v,0)+\sum_{u=1}^{m-p}\sum_{v=u+1}^{l-p}R(n,l,m,p,v,u,0)$
$\displaystyle=\sum_{r=1}^{p}\sum_{u=0}^{l-p}\sum_{v=0}^{m-p}R(n,l,m,p,u,v,r)+\sum_{u=1}^{l-p}\sum_{v=1}^{m-p}R(n,l,m,p,u,v,0)+R(n,l,m,p,0,0,0)\,.$
By Lemma 3.1 in Zhigljavsky (2003), the following identity holds:
$\displaystyle\left({n}\atop{s}\right)=\sum_{r{=}0}^{p}\sum_{u{=}0}^{l{-}p}\sum_{v{=}0}^{m{-}p}R(n,l,m,p,u,v,r)\,,$
which allows us to write
$\displaystyle K({\cal
D},n,l,m,p)=\left({n}\atop{s}\right)-\left(\sum_{u=1}^{l-p}R(n,l,m,p,u,0,0)+\sum_{v=1}^{m-p}R(n,l,m,p,0,v,0)\right)\,.$
By then applying the expression for $R(\cdot)$ given in (2.26), we obtain
$\displaystyle K({\cal
D},n,l,m,p)=\left({n}\atop{s}\right)-\sum_{u=1}^{l-p}\left({l\\!-\\!p}\atop{u}\right)\left({n\\!-\\!l\\!-\\!m\\!+\\!p}\atop{s-u}\right)-\sum_{v=1}^{m-p}\left({m\\!-\\!p}\atop{v}\right)\left({n\\!-\\!l\\!-\\!m\\!+\\!p}\atop{s-v}\right)\,.$
Application of the Vandermonde convolution formula then provides (4.4). $\Box$
## Appendix B: Pseudo-code for Algorithm 1 and Algorithm 2
Input: A design ${D}_{N}$.
Result: One test containing $s$ items, to be used within Algorithm 2.
$Output=\\{\\}$;
For each item $1,\ldots,n$, determine the frequency with which it appears in ${D}_{N}$;
if _there are at least $s$ items with equal smallest frequency of occurrence_ then
Append to $Output$ a random sample of $s$ elements from these items;
else
Append to $Output$ all the items with the smallest frequency of occurrence, say $s^{\prime}$ of them, and sample the remaining $s-s^{\prime}$ items randomly from the items whose frequency of occurrence is not the smallest;
end if
return _Output_
Algorithm 1
Input: $N$ and $N^{\prime}:=$ the number of candidate tests.
Result: A matrix ${\cal X}={\cal X}({D}_{N})$ or, equivalently, a design ${D}_{N}$.
Construct ${\cal X}({D}_{N})$ with ${D}_{N}=\\{X_{1}\\}$, where $X_{1}$ is ${\mathbb{R}}$-distributed on ${\cal D}={\cal P}_{n}^{s}$.
while _number of rows in ${\cal X}({D}_{N})<N$_ do
Create the $N^{\prime}$ candidate tests $C_{N^{\prime}}=\\{X^{\prime}_{1},X^{\prime}_{2},\ldots,X^{\prime}_{N^{\prime}}\\}$ by repeating Algorithm 1 on ${D}_{N}$ a total of $0.75\times N^{\prime}$ times and by sampling randomly without replacement from ${\cal D}={\cal P}_{n}^{s}$ a total of $0.25\times N^{\prime}$ times;
Construct the test matrix ${\cal X}^{\prime}:={\cal X}^{\prime}(C_{N^{\prime}})$;
Determine the row $k$ of ${\cal X}^{\prime}$ (that is, ${\cal X}^{\prime}_{k}$) that satisfies $\min_{1\leq j\leq N}d_{H}({\cal X}^{\prime}_{k},{\cal X}_{j})=\max_{1\leq i\leq N^{\prime}}\min_{1\leq j\leq N}d_{H}({\cal X}^{\prime}_{i},{\cal X}_{j})$; if ties occur, select the row for which $\sum_{j=1}^{N}d_{H}({\cal X}^{\prime}_{k},{\cal X}_{j})$ is highest;
Append ${\cal X}^{\prime}_{k}$ to the rows of ${\cal X}={\cal X}({D}_{N})$.
end while
return _${\cal X}({D}_{N})$_
Algorithm 2
## References
* Aldridge et al. [2014] M. Aldridge, L. Baldassini, and O. Johnson. Group testing algorithms: bounds and simulations. _IEEE Transactions on Information Theory_ , 60(6):3671–3687, 2014.
* Aldridge et al. [2016] M. Aldridge, O. Johnson, and J. Scarlett. Improved group testing rates with constant column weight designs. In _2016 IEEE Internat. Symposium on Inform. Theory (ISIT)_ , pages 1381–1385. IEEE, 2016.
* Aldridge et al. [2019] M. Aldridge, O. Johnson, and J. Scarlett. Group testing: An information theory perspective. _Foundations and Trends in Communications and Information Theory_ , 15(3–4):196–392, 2019.
* Bose and Chowla [1962] R. Bose and S. Chowla. Theorems in the additive theory of numbers. _Commentarii Mathematici Helvetici_ , 37(1):141–147, 1962.
* Cantor and Mills [1966] D. Cantor and W. Mills. Determination of a subset from certain combinatorial properties. _Canadian Journal of Mathematics_ , 18:42–48, 1966.
* Chan et al. [2014] C. Chan, S. Jaggi, V. Saligrama, and S. Agnihotri. Non-adaptive group testing: Explicit bounds and novel algorithms. _IEEE Trans. on Information Theory_ , 60(5):3019–3035, 2014.
* Chen and Hwang [2008] H. Chen and F. Hwang. A survey on nonadaptive group testing algorithms through the angle of decoding. _Journal of Combinatorial Optimization_ , 15(1):49–59, 2008.
* Coja-Oghlan et al. [2020a] A. Coja-Oghlan, O. Gebhard, M. Hahn-Klimroth, and P. Loick. Information-theoretic and algorithmic thresholds for group testing. _IEEE Transactions on Information Theory_ , 66(12):7911–7928, 2020a.
* Coja-Oghlan et al. [2020b] A. Coja-Oghlan, O. Gebhard, M. Hahn-Klimroth, and P. Loick. Optimal group testing. In _Conference on Learning Theory_ , pages 1374–1388. PMLR, 2020b.
* De Bonis et al. [1997] A. De Bonis, L. Gargano, and U. Vaccaro. Group testing with unreliable tests. _Information sciences_ , 96(1-2):1–14, 1997.
* Dorfman [1943] R. Dorfman. The detection of defective members of large populations. _The Annals of Mathematical Statistics_ , 14(4):436–440, 1943.
* Du and Hwang [2000] D. Du and F. Hwang. _Combinatorial group testing and its applications_. World Scientific, Singapore, 2000.
* Du and Hwang [2006] D. Du and F. Hwang. _Pooling designs and nonadaptive group testing: important tools for DNA sequencing_. World Scientific, 2006.
* D’yachkov [2014] A. D’yachkov. Lectures on designing screening experiments. _arXiv:1401.7505_ , 2014.
* D’yachkov and Rykov [1981] A. D’yachkov and V. Rykov. On a coding model for a multiple-access adder channel. _Problemy Peredachi Informatsii_ , 17(2):26–38, 1981.
* D’yachkov and Rykov [1983] A. D’yachkov and V. Rykov. A survey of superimposed code theory. _Problems of Control and Information Theory_ , 12:229–242, 1983.
* Dyachkov et al. [1989] A. Dyachkov, V. Rykov, and A. Rashad. Superimposed distance codes. _Problems of Control and Information_ , 18:237–250, 1989.
* D’yachkov et al. [2005] A. D’yachkov, F. Hwang, A. Macula, P. Vilenkin, and C. Weng. A construction of pooling designs with some happy surprises. _Journal of Computational Biology_ , 12(8):1129–1136, 2005.
* Erdős and Rényi [1963] P. Erdős and A. Rényi. On two problems of information theory. _Magyar Tud. Akad. Mat. Kutató Int. Közl_ , 8:229–243, 1963.
* Hill and Karim [1992] R. Hill and J. Karim. Searching with lies: the Ulam problem. _Discrete mathematics_ , 106:273–283, 1992.
* Katona and Srivastava [1983] G. Katona and J. Srivastava. Minimal 2-covering of a finite affine space based on GF (2). _Journal of statistical planning and inference_ , 8:375–388, 1983.
* Lindström [1964] B. Lindström. On a combinatory detection problem. i. _I. Magyar Tud. Akad. Mat. Kutató Int. Közl_ , 9:195–207, 1964.
* Lindström [1969] B. Lindström. Determination of two vectors from the sum. _Journal of Combinatorial Theory_ , 6(4):402–407, 1969.
* Lindström [1975] B. Lindström. Determining subsets by unramified experiments. _A Survey of Statistical Design and Linear Models_ , 1975.
* Macula [1996] A. Macula. A simple construction of $d$-disjunct matrices with certain constant weights. _Discrete Mathematics_ , 162(1-3):311–312, 1996.
* Macula [1997a] A. Macula. Error-correcting nonadaptive group testing with $d^{e}$-disjunct matrices. _Discrete Applied Mathematics_ , 80:217–222, 1997a.
* Macula [1997b] A. Macula. A nonadaptive version of Ulam’s problem with one lie. _Journal of statistical planning and inference_ , 61:175–180, 1997b.
* Macula [1998] A. Macula. Probabilistic nonadaptive and two-stage group testing with relatively small pools and DNA library screening. _Journal of Combinatorial Optimization_ , 2(4):385–397, 1998.
* Macula and Reuter [1998] A. Macula and G. Reuter. Simplified searching for two defects. _Journal of statistical planning and inference_ , 66(1):77–82, 1998.
* Mézard and Toninelli [2011] M. Mézard and C. Toninelli. Group testing with random pools: optimal two-stage algorithms. _IEEE Transactions on Information Theory_ , 57(3):1736–1745, 2011.
* Mézard et al. [2008] M. Mézard, M. Tarzia, and C. Toninelli. Group testing with random pools: phase transitions and optimal strategy. _Journal of Statistical Physics_ , 131(5):783–801, 2008.
* O’Geran et al. [1991] J. O’Geran, H. Wynn, and A. Zhigljavsky. Search. _Acta Applicandae Mathematicae_ , 25:241–276, 1991.
* O’Geran et al. [1993] J. O’Geran, H. Wynn, and A. Zhiglyavsky. Mastermind as a test-bed for search algorithms. _Chance_ , 6(1):31–37, 1993.
* Poltyrev [1987] G. Poltyrev. Improved upper bound on the probability of decoding error for codes of complex structure. _Problemy Peredachi Informatsii_ , 23(4):5–18, 1987.
* Scarlett and Cevher [2016a] J. Scarlett and V. Cevher. Limits on support recovery with probabilistic models: an information-theoretic framework. _IEEE Trans. on Inform. Theory_ , 63(1):593–620, 2016a.
* Scarlett and Cevher [2016b] J. Scarlett and V. Cevher. Phase transitions in group testing. In _Proceedings of the twenty-seventh annual ACM-SIAM symposium on discrete algorithms_ , pages 40–53. SIAM, 2016b.
* Sobel and Groll [1959] M. Sobel and P. Groll. Group testing to eliminate efficiently all defectives in a binomial sample. _Bell System Technical Journal_ , 38(5):1179–1252, 1959.
* Torney et al. [1998] D. Torney, F. Sun, and W. Bruno. Optimizing nonadaptive group tests for objects with heterogeneous priors. _SIAM Journal on Applied Mathematics_ , 58(4):1043–1059, 1998.
* Tsybakov et al. [1983] B. Tsybakov, V. Mikhailov, and N. Likhanov. Bounds for packet transmission rate in a random-multiple-access system. _Prob. Inform. Transm._ , 19:61–81, 1983.
* Zdeborová and Krzakala [2016] L. Zdeborová and F. Krzakala. Statistical physics of inference: thresholds and algorithms. _Advances in Physics_ , 65(5):453–552, 2016.
* Zhigljavsky [2003] A. Zhigljavsky. Probabilistic existence theorems in group testing. _Journal of statistical planning and inference_ , 115(1):1–43, 2003.
* Zhigljavsky [2010] A. Zhigljavsky. Nonadaptive group testing with lies: Probabilistic existence theorems. _Journal of statistical planning and inference_ , 140(10):2885–2893, 2010.
* Zhigljavsky and Zabalkanskaya [1996] A. Zhigljavsky and L. Zabalkanskaya. Existence theorems for some group testing strategies. _Journal of statistical planning and inference_ , 55(2):151–173, 1996.
|
# A characterization of wreath products where knapsack is decidable

Pascal Bergsträßer (Fachbereich Informatik, Technische Universität Kaiserslautern, Germany; https://orcid.org/0000-0002-4681-2149), Moses Ganardi (Max Planck Institute for Software Systems (MPI-SWS), Kaiserslautern, Germany; https://orcid.org/0000-0002-0775-7781), Georg Zetzsche (Max Planck Institute for Software Systems (MPI-SWS), Kaiserslautern, Germany; https://orcid.org/0000-0002-6421-4388)
###### Abstract
The knapsack problem for groups was introduced by Miasnikov, Nikolaev, and
Ushakov. It is defined for each finitely generated group $G$ and takes as
input group elements $g_{1},\ldots,g_{n},g\in G$ and asks whether there are
$x_{1},\ldots,x_{n}\geq 0$ with $g_{1}^{x_{1}}\cdots g_{n}^{x_{n}}=g$. We
study the knapsack problem for wreath products $G\wr H$ of groups $G$ and $H$.
Our main result is a characterization of those wreath products $G\wr H$ for
which the knapsack problem is decidable. The characterization is in terms of
decidability properties of the individual factors $G$ and $H$. To this end, we
introduce two decision problems, the _intersection knapsack problem_ and its
restriction, the _positive intersection knapsack problem_.
Moreover, we apply our main result to $H_{3}(\mathbb{Z})$, the discrete
Heisenberg group, and to Baumslag-Solitar groups $\mathsf{BS}(1,q)$ for $q\geq
1$. First, we show that the knapsack problem is undecidable for $G\wr
H_{3}(\mathbb{Z})$ for any $G\neq 1$. This implies that for $G\neq 1$ and for
infinite and virtually nilpotent groups $H$, the knapsack problem for $G\wr H$
is decidable if and only if $H$ is virtually abelian and solvability of
systems of exponent equations is decidable for $G$. Second, we show that the
knapsack problem is decidable for $G\wr\mathsf{BS}(1,q)$ if and only if
solvability of systems of exponent equations is decidable for $G$.
###### keywords:
knapsack, wreath products, decision problems in group theory, decidability,
discrete Heisenberg group, Baumslag-Solitar groups
## 1 Introduction
#### The knapsack problem
The knapsack problem is a decision problem for groups that was introduced by
Miasnikov, Nikolaev, and Ushakov [1]. If $G$ is a finitely generated group,
then the knapsack problem for $G$, denoted $\mathsf{KP}(G)$, takes group
elements $g_{1},\ldots,g_{n},g\in G$ as input (as words over the generators)
and it asks whether there are natural numbers $x_{1},\ldots,x_{n}\geq 0$ such
that $g_{1}^{x_{1}}\cdots g_{n}^{x_{n}}=g$. Since its introduction, a
significant amount of attention has been devoted to understanding for which
groups the problem is decidable and what the resulting complexity is [19, 21,
12, 2, 8, 10, 20, 11]. For matrix semigroups, the knapsack problem has been
studied implicitly by Bell, Halava, Harju, Karhumäki, and Potapov [5], Bell,
Potapov, and Semukhin [6], and for commuting matrices by Babai, Beals, Cai,
Ivanyos, and Luks [3].
There are many groups for which knapsack has been shown decidable. For
example, knapsack is decidable for virtually special groups [21, Theorem 3.1],
co-context-free groups [8, Theorem 8.1], hyperbolic groups [1, Theorem 6.1],
the discrete Heisenberg group [8, Theorem 6.8], and Baumslag-Solitar groups
$\mathsf{BS}(p,q)$ for co-prime $p,q>1$ [9, Theorem 2] and for $p=1$ [20,
Theorem 4.1]. Moreover, the class of groups where knapsack is decidable is
closed under free products with amalgamation [26, Theorem 14] and HNN
extensions [26, Theorem 13] over finite identified subgroups. On the other
hand, there are nilpotent groups for which knapsack is undecidable [8, Theorem
6.5].
#### Wreath products
A prominent construction in group theory and semigroup theory is the wreath
product $G\wr H$ of two groups $G$ and $H$. Wreath products are important
algorithmically, because the Magnus embedding theorem [32, Lemma] states that
for any free group $F$ of rank $r$ and a normal subgroup $N$ of $F$, one can
find $F/[N,N]$ as a subgroup of $\mathbb{Z}^{r}\wr(F/N)$, where $[N,N]$ is the
commutator subgroup of $N$. This has been used by several authors to obtain
algorithms for groups of the form $F/[N,N]$, and in particular free solvable
groups. Examples include the word problem (folklore, see [15]), the conjugacy
problem [28, 30, 15, 29], the power problem [15], and the knapsack problem
[11, 12].
For groups $G$ and $H$, their wreath product $G\wr H$ can be roughly described
as follows. An element of $G\wr H$ consists of (i) a labeling, which maps each
element of $H$ to an element of $G$ and (ii) an element of $H$, called the
_cursor_. Here, the labeling has finite support, meaning all but finitely many
elements of $H$ are mapped to the identity of $G$. Moreover, each element of
$G\wr H$ can be written as a product of elements from $G$ and from $H$.
Multiplying an element $g\in G$ will multiply $g$ to the label of the current
cursor position. Multiplying an element $h\in H$ will move the cursor by
multiplying $h$.
Understanding the knapsack problem for wreath products is challenging for two
reasons. First, the path that the expression $g_{1}^{x_{1}}\cdots
g_{n}^{x_{n}}g^{-1}$ takes through the group $H$ can have complicated
interactions with itself: The product can place elements of $G$ at (an _a
priori_ unbounded number of) positions $h\in H$ that are later revisited. At
the end of the path, each position of $H$ must carry the identity of $G$ so as
to obtain $g_{1}^{x_{1}}\cdots g_{n}^{x_{n}}g^{-1}=1$. The second reason is
that the groups $G$ and $H$ play rather different roles: _A priori_ , for each
group $G$ the class of all $H$ with decidable $\mathsf{KP}(G\wr H)$ could be
different, resulting in a plethora of cases.
Decidability of the knapsack problem for wreath products has been studied by
Ganardi, König, Lohrey, and Zetzsche [12]. They focus on the case that $H$ is
knapsack-semilinear, which means that the solution sets of equations
$g_{1}^{x_{1}}\cdots g_{n}^{x_{n}}=g$ are (effectively) semilinear. A set
$S\subseteq\mathbb{N}^{n}$ is semilinear if it is a finite union of linear
sets
$\\{u_{0}+\lambda_{1}u_{1}+\dots+\lambda_{k}u_{k}\mid\lambda_{1},\dots,\lambda_{k}\in\mathbb{N}\\}$
for some vectors $u_{0},\dots,u_{k}\in\mathbb{N}^{n}$. Under this assumption,
they show that $\mathsf{KP}(G\wr H)$ is decidable if and only if solvability
of systems of exponent equations is decidable for $G$ [12, Theorem 5.3]. Here,
an exponent equation is one of the form $g_{1}^{x_{1}}\cdots g_{n}^{x_{n}}=g$,
where variables $x_{i}$ are allowed to repeat. The problem of solvability of
systems of exponent equations is denoted $\mathsf{ExpEq}(G)$. Moreover, it is
shown there that for some number $\ell\in\mathbb{N}$, knapsack is undecidable
for $G\wr(H_{3}(\mathbb{Z})\times\mathbb{Z}^{\ell})$, where
$H_{3}(\mathbb{Z})$ denotes the discrete Heisenberg group and $G$ is any non-
trivial group [12, Theorem 5.2]. Since
$\mathsf{KP}(H_{3}(\mathbb{Z})\times\mathbb{Z}^{\ell})$ is decidable for any
$\ell\geq 0$ [8, Theorem 6.8], this implies that wreath products do not
preserve decidability of knapsack in general. However, apart from the latter
undecidability result, little is known about wreath products $G\wr H$ where
$H$ is not knapsack-semilinear. As notable examples of this, knapsack is
decidable for solvable Baumslag-Solitar groups $\mathsf{BS}(1,q)$ [20, Theorem
4.1] and for the discrete Heisenberg group $H_{3}(\mathbb{Z})$ [8, Theorem
6.8], but it is not known for which $G$ the knapsack problem is decidable for
$G\wr H_{3}(\mathbb{Z})$ or for $G\wr\mathsf{BS}(1,q)$.
The only other paper which studies the knapsack problem over wreath products
is [11]. It is concerned with complexity results (for knapsack-semilinear
groups) whereas in this paper we are concerned with decidability results.
#### Contribution
Our main result is a characterization of the groups $G$ and $H$ for which
$\mathsf{KP}(G\wr H)$ is decidable. Specifically, we introduce two problems,
_intersection knapsack_ $\mathsf{KP}^{\pm}(H)$ and the variant _positive
intersection knapsack_ $\mathsf{KP}^{+}(H)$ and show the following. Let $G$
and $H$ be finitely generated, with $G$ non-trivial and $H$ infinite. Then
knapsack for $G\wr H$ is decidable if and only if $\mathsf{ExpEq}(G)$ is
decidable and either (i) $G$ is abelian and $\mathsf{KP}^{+}(H)$ is decidable
or (ii) $G$ is not abelian and $\mathsf{KP}^{\pm}(H)$ is decidable. Note that
the case of finite $H$ is not interesting: For $|H|=m$, $\mathsf{KP}(G\wr H)$
is equivalent to $\mathsf{KP}(G^{m})$ (see Section 3).
Thus, our result relieves us from considering every pair $(G,H)$ of groups and
allows us to study the factors separately. It is not hard to see that
decidability of $\mathsf{ExpEq}(G)$ is necessary for decidability of
$\mathsf{KP}(G\wr H)$ if $H$ is infinite. It is surprising that the only other
property of $G$ that is relevant for decidability of $\mathsf{KP}(G\wr H)$ is
whether $G$ is abelian or not. This is in contrast to the effect of other
structural properties of $G$ on the complexity of
$\mathsf{KP}(G\wr\mathbb{Z})$: If $G\neq 1$ is a finite nilpotent group, then
$\mathsf{KP}(G\wr\mathbb{Z})$ is $\mathsf{NP}$-complete [11, Theorem 2],
whereas for finite and non-solvable $G$, the problem
$\mathsf{KP}(G\wr\mathbb{Z})$ is $\Sigma_{2}^{p}$-complete [11, Corollary 25].
#### Applications
We also obtain two applications. First, we deduce that $\mathsf{KP}(G\wr
H_{3}(\mathbb{Z}))$ is undecidable for every $G\neq 1$. This implies that if
$G\neq 1$ and $H$ is virtually nilpotent and infinite, then $\mathsf{KP}(G\wr
H)$ is decidable if and only if $H$ is virtually abelian and
$\mathsf{ExpEq}(G)$ is decidable. Moreover, we show that
$\mathsf{KP}(G\wr\mathsf{BS}(1,q))$ is decidable if and only if
$\mathsf{ExpEq}(G)$ is.
#### Ingredients
For the “if” direction of our main result, we reduce $\mathsf{KP}(G\wr H)$ to
$\mathsf{ExpEq}(G)$ and $\mathsf{KP}^{\pm}(H)$ (respectively
$\mathsf{KP}^{+}(H)$) using extensions of techniques used by Figelius,
Ganardi, Lohrey, and Zetzsche [11]. Roughly speaking, the problem
$\mathsf{KP}^{\pm}(H)$ takes as input an expression
$h_{0}g_{1}^{x_{1}}h_{1}\cdots g_{n}^{x_{n}}h_{n}$ and looks for numbers
$x_{1},\ldots,x_{n}\geq 0$ such that the walk defined by the product
$h_{0}g_{1}^{x_{1}}h_{1}\cdots g_{n}^{x_{n}}h_{n}$ meets specified constraints
about self-intersections. Such a constraint can be either (i) a _loop
constraint_ , meaning the walk visits the same point after two specified
factors or (ii) a _disjointness constraint_ saying that the $(x_{i}+1)$-many
points visited when multiplying $g_{i}^{x_{i}}$ do not intersect the
$(x_{j}+1)$-many points visited while multiplying $g_{j}^{x_{j}}$.
The “only if” reductions in our main result involve substantially new ideas.
The challenge is to guarantee that the constructed instances of
$\mathsf{KP}(G\wr H)$ will leave an element $\neq 1$ somewhere, as soon as any
constraint is violated. In particular, the loop constraints have to be checked
independently of the disjointness constraints. Moreover, if several
constraints are violated, the resulting elements $\neq 1$ should not cancel
each other. Furthermore, this has to be achieved despite almost no information
on the structure of $G$ and $H$. This requires an intricate construction that
uses various patterns in the Cayley graph of $H$ for which we show that only
very specific arrangements permit cancellation. To this end, we introduce the
notion of _periodic complexity_ , which measures how many periodic sequences
are needed to cancel out a sequence of elements of a group. Roughly speaking,
for the loop constraints we use patterns of high periodic complexity, whereas
for the disjointness constraints we use patterns with low periodic complexity
but many large gaps. This ensures that the disjointness patterns cannot cancel
the loop patterns or vice versa.
## 2 Preliminaries
#### Knapsack problems
For a group $G$ and a subset $S\subseteq G$ we write $S^{*}$ for the submonoid
generated by $S$, i.e. the set of products of elements from $S$. Let $G$ be a
group with a finite (monoid) generating set $\Sigma\subseteq G$, i.e.
$G=\Sigma^{*}$. Such groups are called finitely generated. An exponent
expression over $G$ is an expression $E=e_{1}\dots e_{n}$ consisting of atoms
$e_{i}$ where each atom $e_{i}$ is either a constant $e_{i}=g_{i}\in G$ or a
power $e_{i}=g_{i}^{x_{i}}$ for some $g_{i}\in G$ and variable $x_{i}$. Here
the group elements $g_{i}$ are given as words over $\Sigma$. We write
$\gamma(e_{i})=g_{i}$ for the constant or the base of the power. Furthermore
let $P_{E}\subseteq[1,n]$ be the set of indices of the powers in $E$ and
$Q_{E}=[1,n]\setminus P_{E}$ be the set of indices of the constants in $E$. If
$\nu\in\mathbb{N}^{X}$ is a valuation of the variables $X$ that occur in $E$,
then for each $i\in[1,n]$, we define $\nu(e_{i})=\gamma(e_{i})^{\nu(x_{i})}$
if $i\in P_{E}$; and $\nu(e_{i})=e_{i}$ if $i\in Q_{E}$. Moreover, we set
$\nu(E):=\nu(e_{1})\cdots\nu(e_{n})$ and define the set of $G$-solutions of $E$ as
$\mathsf{sol}_{G}(E):=\\{\nu\in\mathbb{N}^{X}\mid\nu(E)=1\\}$.
For a group $G$, the problem of _solvability of exponent equations_
$\mathsf{ExpEq}(G)$ is defined as:
Given
a finite list of exponent expressions $E_{1},\dots,E_{k}$ over $G$.
Question
Is $\bigcap_{i=1}^{k}\mathsf{sol}_{G}(E_{i})$ non-empty?
An exponent expression is called a knapsack expression if all variables occur
at most once. The knapsack problem $\mathsf{KP}(G)$ over $G$ is defined as
follows:
Given
a knapsack expression $E$ over $G$.
Question
Is there a valuation $\nu$ such that $\nu(E)=1$?
The definition from [1] asks whether $g_{1}^{x_{1}}\cdots g_{n}^{x_{n}}=g$ has
a solution for given $g_{1},\ldots,g_{n},g\in G$. The two versions are inter-
reducible in polynomial time [8, Proposition 7.1].
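For example, over $\mathbb{Z}=\langle a\rangle$ (so that $a^{k}$ corresponds to the integer $k$), the knapsack expression $E=(a^{2})^{x_{1}}(a^{3})^{x_{2}}a^{-12}$ asks whether $2x_{1}+3x_{2}=12$ for some $x_{1},x_{2}\in\mathbb{N}$; here $\mathsf{sol}_{\mathbb{Z}}(E)=\\{(0,4),(3,2),(6,0)\\}$.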
#### Wreath products
Let $G$ and $H$ be groups. Consider the direct sum $K=\bigoplus_{h\in
H}G_{h}$, where $G_{h}$ is a copy of $G$. We view $K$ as the set $G^{(H)}$ of
all mappings $f\colon H\to G$ such that $\mathsf{supp}(f):=\\{h\in H\mid
f(h)\neq 1\\}$ is finite, together with pointwise multiplication as the group
operation. The set $\mathsf{supp}(f)\subseteq H$ is called the _support_ of
$f$. The group $H$ has a natural left action on $G^{(H)}$ given by
$\tensor*[^{h}]{{f}}{}(a)=f(h^{-1}a)$, where $f\in G^{(H)}$ and $h,a\in H$.
The corresponding semidirect product $G^{(H)}\rtimes H$ is the (restricted)
_wreath product_ $G\wr H$. In other words:
* Elements of $G\wr H$ are pairs $(f,h)$, where $h\in H$ and $f\in G^{(H)}$.
* The multiplication in $G\wr H$ is defined as follows: Let $(f_{1},h_{1}),(f_{2},h_{2})\in G\wr H$. Then $(f_{1},h_{1})(f_{2},h_{2})=(f,h_{1}h_{2})$, where $f(a)=f_{1}(a)f_{2}(h_{1}^{-1}a)$.
There are canonical mappings $\sigma\colon G\wr H\to H$ with $\sigma(f,h)=h$
and $\tau\colon G\wr H\to G^{(H)}$ with $\tau(f,h)=f$ for $f\in G^{(H)}$,
$h\in H$. In other words: $g=(\tau(g),\sigma(g))$ for $g\in G\wr H$. Note that
$\sigma$ is a homomorphism whereas $\tau$ is in general not a homomorphism.
Throughout this paper, the letters $\sigma$ and $\tau$ will have the above
meaning (the groups $G,H$ will be always clear from the context). We also
define $\mathsf{supp}(g)=\mathsf{supp}(\tau(g))$ for all $g\in G\wr H$.
The following intuition might be helpful: An element $(f,h)\in G\wr H$ can be
thought of as a finite multiset of elements of $G\setminus\\{1_{G}\\}$ that
are sitting at certain elements of $H$ (the mapping $f$) together with the
distinguished element $h\in H$, which can be thought of as a _cursor_ moving
in $H$. We can compute the product $(f_{1},h_{1})(f_{2},h_{2})$ as follows:
First, we shift the finite collection of $G$-elements that corresponds to the
mapping $f_{2}$ by $h_{1}$: If the element $g\in G\setminus\\{1_{G}\\}$ is
sitting at $a\in H$ (i.e., $f_{2}(a)=g$), then we remove $g$ from $a$ and put
it to the new location $h_{1}a\in H$. This new collection corresponds to the
mapping $f^{\prime}_{2}\colon a\mapsto f_{2}(h_{1}^{-1}a)$. After this shift,
we multiply the two collections of $G$-elements pointwise: If $g_{1}\in G$ and
$g_{2}\in G$ are sitting at $a\in H$ (i.e., $f_{1}(a)=g_{1}$ and
$f^{\prime}_{2}(a)=g_{2}$), then we put $g_{1}g_{2}$ into the location $a$.
The new distinguished $H$-element (the new cursor position) becomes
$h_{1}h_{2}$.
Clearly, $H$ is a subgroup of $G\wr H$. We also regard $G$ as a subgroup of
$G\wr H$ by identifying $G$ with the set of all $f\in G^{(H)}$ with
$\mathsf{supp}(f)\subseteq\\{1\\}$. This copy of $G$ together with $H$
generates $G\wr H$. In particular, if $G=\langle\Sigma\rangle$ and
$H=\langle\Gamma\rangle$ with $\Sigma\cap\Gamma=\emptyset$ then $G\wr H$ is
generated by $\Sigma\cup\Gamma$. With these embeddings, $GH$ is the set of
$(f,h)\in G\wr H$ with $\mathsf{supp}(f)\subseteq\\{1\\}$ and $h\in H$.
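As a small worked example, take the lamplighter group $(\mathbb{Z}/2\mathbb{Z})\wr\mathbb{Z}$ with $\mathbb{Z}/2\mathbb{Z}=\langle a\rangle$ and $\mathbb{Z}=\langle t\rangle$. The product $atat^{-1}$ places $a$ at the identity, moves the cursor to $t$, places $a$ there, and moves the cursor back: $atat^{-1}=(f,1)$ where $f(1)=f(t)=a$ and $f$ is trivial elsewhere, so $\mathsf{supp}(atat^{-1})=\\{1,t\\}$.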
#### Groups
Our applications will involve two well-known types of groups: the discrete
Heisenberg group $H_{3}(\mathbb{Z})$, which consists of the matrices
$\left(\begin{smallmatrix}1&a&c\\\ 0&1&b\\\ 0&0&1\end{smallmatrix}\right)$
with $a,b,c\in\mathbb{Z}$, and the Baumslag-Solitar groups [4]
$\mathsf{BS}(p,q)$ for $p,q\in\mathbb{N}$, where $\mathsf{BS}(p,q)=\langle
a,t\mid ta^{p}t^{-1}=a^{q}\rangle$.
A subgroup $H$ of $G$ is called _finite-index_ if there are finitely many
cosets $gH$. If $ab=ba$ for every $a,b\in G$, then $G$ is _abelian_. A group
has a property _virtually_ if it has a finite-index subgroup $H$ with that
property. For example, a group is virtually abelian if it has a finite-index
abelian subgroup. For two elements $a,b\in G$, we write $[a,b]=aba^{-1}b^{-1}$
and call this the _commutator_ of $a,b$. If $A,B$ are subgroups of $G$, then
$[A,B]$ is the subgroup generated by all $[a,b]$ with $a\in A$ and $b\in B$.
For $g,h\in G$, we write $\tensor*[^{h}]{{g}}{}=hgh^{-1}$. In particular, if
$g\in G$ and $h\in H$, then $\tensor*[^{h}]{{g}}{}$ is the element $(f,1)\in
G\wr H$ with $f(h)=g$ and $f(h^{\prime})=1$ for $h^{\prime}\neq h$.
## 3 Main results
We first introduce the new (positive) intersection knapsack problem. A
solution to a knapsack expression $E$ describes a walk in the Cayley graph
that starts and ends in the group identity. Whereas the ordinary knapsack
problem only asks for the expression to yield the identity, our extended
version can impose constraints on how this walk intersects itself.
A walk over $G$ is a nonempty sequence $\pi=(g_{1},\dots,g_{n})$ over $G$. Its
support is $\mathsf{supp}(\pi)=\\{g_{1},\dots,g_{n}\\}$. It is a loop if
$g_{1}=g_{n}$. Two walks are disjoint if their supports are disjoint. We
define a partial concatenation on walks: If $\pi=(g_{1},\dots,g_{n})$ and
$\rho=(h_{1},\dots,h_{m})$ with $g_{n}=h_{1}$ then
$\pi\rho=(g_{1},\dots,g_{n},h_{2},\dots,h_{m})$. A progression with period
$h\in G$ over $G$ is a walk of the form $\pi=(g,gh,gh^{2},\dots,gh^{\ell})$
for some $g\in G$ and $\ell\geq 0$. We also call the set $\mathsf{supp}(\pi)$
a progression, whose period may not be unique. If $h\neq 1$ we also call $\pi$
a ray.
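For instance, over $\mathbb{Z}^{2}$ (written additively), $\pi=((0,0),(1,0),(2,0))$ is a progression with period $(1,0)$ and hence a ray, whereas the walk $((0,0),(2,1),(0,0))$ is a loop.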
A factorized walk is a walk $\pi$ equipped with a factorization
$(\pi_{1},\dots,\pi_{n})$, i.e. $\pi=\pi_{1}\dots\pi_{n}$. One also defines
the concatenation of factorized walks in the straightforward fashion. If
$E=e_{1}\dots e_{n}$ is an exponent expression and $\nu$ is a valuation over
$E$ we define the factorized walk $\pi_{\nu,E}=\pi_{1}\dots\pi_{n}$ induced by
$\nu$ on $E$ where
$\pi_{i}=\begin{cases}(\nu(e_{1}\dots e_{i-1})\,g_{i}^{k})_{0\leq
k\leq\nu(x_{i})},&\text{if }e_{i}=g_{i}^{x_{i}}\\\ (\nu(e_{1}\dots
e_{i-1}),\nu(e_{1}\dots e_{i-1})\,g_{i}),&\text{if }e_{i}=g_{i}.\end{cases}$
The intersection knapsack problem $\mathsf{KP}^{\pm}(G)$ over $G$ is defined
as follows:
Given
a knapsack expression $E$ over $G$, a set $L\subseteq[0,n]^{2}$ of loop
constraints, and a set $D\subseteq[1,n]^{2}$ of disjointness constraints.
Question
Is there a valuation $\nu$ such that $\nu(E)=1$ and the factorized walk
$\pi_{\nu,E}=\pi_{1}\dots\pi_{n}$ induced by $\nu$ on $E$ satisfies the
following conditions:
* $\pi_{i+1}\dots\pi_{j}$ is a loop for every $(i,j)\in L$
* $\pi_{i}$ and $\pi_{j}$ are disjoint for every $(i,j)\in D$.
The positive intersection knapsack problem $\mathsf{KP}^{+}(G)$ over $G$ is
the restriction of $\mathsf{KP}^{\pm}(G)$ to instances where $D=\emptyset$. We
denote the set of solutions of a $\mathsf{KP}^{\pm}(G)$-instance (resp.
$\mathsf{KP}^{+}(G)$-instance) $(E,L,D)$ (resp. $(E,L)$) as
$\mathsf{sol}_{G}(E,L,D)$ (resp. $\mathsf{sol}_{G}(E,L)$). Figure 1 shows an
example of the intersection knapsack problem over $\mathbb{Z}^{2}$.
Figure 1: Consider the knapsack equation
$g_{1}^{x_{1}}g_{2}^{x_{2}}g_{3}^{x_{3}}g_{4}^{x_{4}}=1$ over $\mathbb{Z}^{2}$
written multiplicatively, where $g_{1}=(0,2)$, $g_{2}=(1,0)$, $g_{3}=(-2,-2)$
and $g_{4}=(1,0)$ and the disjointness condition $D=\\{(1,3)\\}$. The solid
dot represents the origin $(0,0)$. The knapsack equation is satisfied by
$(x_{1},x_{2},x_{3},x_{4})=(2,2,2,2)$ but it violates $D$, as illustrated on
the left. On the right the solution $(x_{1},x_{2},x_{3},x_{4})=(2,1,2,3)$ is
depicted, which satisfies $D$.
The following is our main result.
###### Theorem 3.1.
Let $G$ and $H$ be f.g. groups such that $G$ is non-trivial and $H$ is
infinite. Then $\mathsf{KP}(G\wr H)$ is decidable if and only if
$\mathsf{ExpEq}(G)$ is decidable and either
1. $G$ is abelian and $\mathsf{KP}^{+}(H)$ is decidable, or
2. $G$ is not abelian and $\mathsf{KP}^{\pm}(H)$ is decidable.
Here, we assume $H$ to be infinite, because the case of finite $H$ is not
interesting: If $|H|=m$, then $G\wr H$ has $G^{m}$ as a finite-index subgroup
[25, Proposition 1], meaning $\mathsf{KP}(G\wr H)$ is decidable if and only if
$\mathsf{KP}(G^{m})$ is [8, Theorem 7.3].
If $H$ is knapsack-semilinear, it is easy to see that both
$\mathsf{KP}^{+}(H)$ and $\mathsf{KP}^{\pm}(H)$ are decidable via an encoding
in Presburger arithmetic. Hence, the main decidability result of [12], saying
that for knapsack-semilinear $H$, $\mathsf{KP}(G\wr H)$ is decidable if and
only if $\mathsf{ExpEq}(G)$ is decidable, is generalized by Theorem 3.1.
#### Logical version of $\mathsf{KP}^{+}$ and $\mathsf{KP}^{\pm}$
For our applications of Theorem 3.1, it is often convenient to use a
formulation of $\mathsf{KP}^{+}(G)$ and $\mathsf{KP}^{\pm}(G)$ in terms of
logics over an extended Cayley graph of $G$. The _Cayley graph of $G$_ is the
logical structure $\mathcal{C}(G)=(G,(\xrightarrow{g})_{g\in G})$, with domain
$G$ and with the relation $\xrightarrow{g}$ for each $g\in G$ (customarily,
one only includes the edge relations $(\xrightarrow{s})_{s\in S}$ for some
finite generating set $S$ of $G$; we choose $S=G$ to make the presentation in
the following cleaner), where $g_{1}\xrightarrow{g}g_{2}$ if and only if
$g_{1}g=g_{2}$. We define the extension
$\mathcal{C}^{+}(G)=(G,(\xrightarrow{g})_{g\in
G},(\xrightarrow{g}\mathrel{\vphantom{\to}{}^{*}})_{g\in G})$ where
$\xrightarrow{g}\mathrel{\vphantom{\to}{}^{*}}$ is the reflexive transitive
closure of $\xrightarrow{g}$. Finally, we define a further extension
$\mathcal{C}^{\pm}(G)=(G,(\xrightarrow{g})_{g\in
G},(\xrightarrow{g}\mathrel{\vphantom{\to}{}^{*}})_{g\in
G},(\bot_{g,h})_{g,h\in G})$ with _disjointness relations_ $\bot_{g,h}$, which
are binary relations on $G^{2}$: For any $g,h\in G$ and
$(g_{1},g_{2}),(h_{1},h_{2})\in G^{2}$ we have that
$(g_{1},g_{2})\bot_{g,h}(h_{1},h_{2})$ if and only if for some
$k,\ell\in\mathbb{N}$, we have $g_{1}g^{k}=g_{2}$, $h_{1}h^{\ell}=h_{2}$, and
the walks $(g_{1},g_{1}g,\dots,g_{1}g^{k})$ and
$(h_{1},h_{1}h,\dots,h_{1}h^{\ell})$ are disjoint. We denote by
$\mathcal{F}^{\pm}$ the set of positive existential first-order formulas over
$\mathcal{C}^{\pm}(G)$, i.e. formulas $\exists y_{1}\dots\exists
y_{m}\varphi(y_{1},\dots,y_{m})$ where $\varphi(y_{1},\dots,y_{m})$ is a
positive Boolean combination of atomic formulas. Then $\mathsf{SAT}^{\pm}(G)$
is the decision problem that asks if a closed formula in $\mathcal{F}^{\pm}$
holds in $\mathcal{C}^{\pm}(G)$. The fragment $\mathcal{F}^{+}$ and the
problem $\mathsf{SAT}^{+}(G)$ are defined similarly. Clearly,
$\mathsf{KP}^{\pm}(G)$ (resp. $\mathsf{KP}^{+}(G)$) reduces to
$\mathsf{SAT}^{\pm}(G)$ (resp. $\mathsf{SAT}^{+}(G)$). In Section A.1, we
show:
###### Theorem 3.2.
For any finitely generated group $G$, the problem $\mathsf{SAT}^{\pm}(G)$
(resp. $\mathsf{SAT}^{+}(G)$) is decidable if and only if
$\mathsf{KP}^{\pm}(G)$ (resp. $\mathsf{KP}^{+}(G)$) is decidable.
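To illustrate the translation (glossing over the treatment of constants), a knapsack equation $g_{1}^{x_{1}}g_{2}^{x_{2}}=g$ corresponds to the closed formula $\exists y\,(1\xrightarrow{g_{1}}\mathrel{\vphantom{\to}{}^{*}}y\wedge y\xrightarrow{g_{2}}\mathrel{\vphantom{\to}{}^{*}}g)$ over $\mathcal{C}^{+}(G)$, and a disjointness constraint between the two powers would additionally require $(1,y)\bot_{g_{1},g_{2}}(y,g)$ in $\mathcal{C}^{\pm}(G)$.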
#### Virtually nilpotent groups
It was shown by Ganardi, König, Lohrey, and Zetzsche that for some number
$\ell\in\mathbb{N}$ and all groups $G\neq 1$,
$\mathsf{KP}(G\wr(H_{3}(\mathbb{Z})\times\mathbb{Z}^{\ell}))$ is undecidable
[12, Theorem 5.2], but essentially nothing is known so far about the groups
$G$ for which the problem $\mathsf{KP}(G\wr H_{3}(\mathbb{Z}))$ is decidable.
Using Theorem 3.1, this can be settled.
###### Theorem 3.3.
For every non-trivial $G$, the problem $\mathsf{KP}(G\wr H_{3}(\mathbb{Z}))$
is undecidable.
This is in contrast to decidability of $\mathsf{KP}(H_{3}(\mathbb{Z}))$ [8,
Theorem 6.8]. We show Theorem 3.3 by proving in Section 6 that
$\mathsf{SAT}^{+}(H_{3}(\mathbb{Z}))$ (and thus
$\mathsf{KP}^{+}(H_{3}(\mathbb{Z}))$) is undecidable.
The interest in the Heisenberg group stems from its special role inside the
class of virtually nilpotent groups. This class, in turn, consists exactly of
the finite extensions of groups of unitriangular integer matrices (see, for
example, [17, Theorem 17.2.5]). Furthermore, a celebrated result of Gromov
[14] states that the f.g. virtually nilpotent groups are precisely the f.g.
groups with polynomial growth. In some sense, the discrete Heisenberg group is
the smallest f.g. virtually nilpotent group that is not virtually abelian.
Therefore, Theorem 3.3 implies the following characterization of all wreath
products $G\wr H$ with decidable $\mathsf{KP}(G\wr H)$ where $H$ is infinite
and virtually nilpotent. See Section A.2 for details.
###### Corollary 3.4.
Let $G,H$ be f.g. non-trivial groups. If $H$ is virtually nilpotent and
infinite, then $\mathsf{KP}(G\wr H)$ is decidable if and only if $H$ is
virtually abelian and $\mathsf{ExpEq}(G)$ is decidable.
By undecidability of $\mathsf{ExpEq}(H_{3}(\mathbb{Z}))$, this implies: If
$G\neq 1$ and $H$ are f.g. virtually nilpotent and $H$ is infinite, then
$\mathsf{KP}(G\wr H)$ is decidable if and only if $G$ and $H$ are virtually
abelian.
#### Solvable Baumslag-Solitar groups
Our second application of Theorem 3.1 concerns wreath products
$G\wr\mathsf{BS}(1,q)$. It is known that knapsack is decidable for
$\mathsf{BS}(1,q)$ [20, Theorem 4.1], but again, essentially nothing is known
about $\mathsf{KP}(G\wr\mathsf{BS}(1,q))$ for any $G$.
###### Theorem 3.5.
For any f.g. group $G$ and $q\geq 1$, the problem
$\mathsf{KP}(G\wr\mathsf{BS}(1,q))$ is decidable if and only if
$\mathsf{ExpEq}(G)$ is decidable.
Extending methods from Lohrey and Zetzsche [20], we show that
$\mathsf{KP}^{\pm}(\mathsf{BS}(1,q))$ is decidable for any $q\geq 1$ and thus
obtain Theorem 3.5 in Section 6.
#### Magnus embedding
Another corollary concerns groups of the form $F/[N,N]$, where $F$ is a f.g.
free group and $N$ is a normal subgroup. Recall that any f.g. group can be
written as $F/N$, where $F$ is a f.g. free group and $N$ is a normal subgroup
of $F$. Dividing by $[N,N]$ instead of $N$ yields $F/[N,N]$, which is subject
to the Magnus embedding [32, Lemma] of $F/[N,N]$ into
$\mathbb{Z}^{r}\wr(F/N)$, where $r$ is the rank of $F$. We show in Section
A.3:
###### Corollary 3.6.
Let $F$ be a finitely generated free group and $N$ be a normal subgroup of
$F$. If $\mathsf{KP}^{+}(F/N)$ is decidable, then so is
$\mathsf{KP}(F/[N,N])$.
#### Knapsack vs. intersection knapsack
Introducing the problems $\mathsf{KP}^{+}$ and $\mathsf{KP}^{\pm}$ raises the
question of whether they are substantially different from the similar problems
$\mathsf{KP}$ and $\mathsf{ExpEq}$: Is $\mathsf{KP}^{+}(G)$ or
$\mathsf{KP}^{\pm}(G)$ perhaps inter-reducible with $\mathsf{KP}(G)$ or
$\mathsf{ExpEq}(G)$? Our applications show that this is not the case. Since
$\mathsf{KP}(H_{3}(\mathbb{Z}))$ is decidable [8, Theorem 6.8], but
$\mathsf{KP}^{+}(H_{3}(\mathbb{Z}))$ is not, neither $\mathsf{KP}^{+}(G)$ nor
$\mathsf{KP}^{\pm}(G)$ can be inter-reducible with $\mathsf{KP}(G)$ in
general. Moreover, one can show that $\mathsf{ExpEq}(\mathsf{BS}(1,2))$ is
undecidable [13] (since there is no published proof available, we include a
proof in Appendix E, with kind permission of Moses Ganardi and Markus Lohrey),
whereas $\mathsf{KP}^{\pm}(\mathsf{BS}(1,q))$ is decidable for any $q\geq 1$.
Hence, neither $\mathsf{KP}^{+}(G)$ nor $\mathsf{KP}^{\pm}(G)$ can be inter-
reducible with $\mathsf{ExpEq}(G)$ in general. However, we leave open whether
there is a f.g. group $G$ for which $\mathsf{KP}^{+}(G)$ is decidable, but
$\mathsf{KP}^{\pm}(G)$ is undecidable (see Section 7).
## 4 From wreath products to intersection knapsack
In this section, we prove the “if” direction of Theorem 3.1 by deciding
$\mathsf{KP}(G\wr H)$ using $\mathsf{ExpEq}(G)$ and either
$\mathsf{KP}^{\pm}(H)$ or $\mathsf{KP}^{+}(H)$ (depending on whether $G$ is
abelian).
#### Normalization
We fix a wreath product $G\wr H$ with $G$ and $H$ finitely generated groups.
Note that we may assume that $\mathsf{KP}(H)$ is decidable. In our reduction,
we will augment the $\mathsf{KP}(G\wr H)$-instance with positive intersection
constraints regarding the cursor in $H$. This results in instances of the
_hybrid intersection knapsack problem_ $\mathsf{HKP}^{\pm}(G\wr H)$ over $G\wr
H$: It is defined as $\mathsf{KP}^{\pm}(G\wr H)$ but the loop and disjointness
constraints consider the $\sigma$-image of elements. Let us make this more
precise. If $E=\alpha_{1}\cdots\alpha_{n}$ is a knapsack expression over $G\wr
H$, then we define for all $i\in[1,n]$ and $\nu\in\mathbb{N}^{X}$ the set
$\mathsf{supp}_{E}^{\nu}(i):=\\{\sigma(\nu(\alpha_{1}\cdots\alpha_{i-1})\gamma(\alpha_{i})^{k})\mid
0\leq k\leq\nu(x_{i})-1\\}$
if $i\in P_{E}$ and
$\mathsf{supp}_{E}^{\nu}(i):=\\{\sigma(\nu(\alpha_{1}\cdots\alpha_{i-1}))\\}$
if $i\in Q_{E}$. For a walk $w=(w_{1},\dots,w_{k})$ over $G\wr H$ we write
$\sigma(w):=(\sigma(w_{1}),\dots,\sigma(w_{k}))$. Then the _hybrid
intersection knapsack problem_ $\mathsf{HKP}^{\pm}(G\wr H)$ over $G\wr H$ is
defined as follows:
Given
a knapsack expression $E$ over $G\wr H$, a set $L\subseteq[0,n]^{2}$ of loop
constraints, and a set $D\subseteq[1,n]^{2}$ of disjointness constraints.
Question
Is there a valuation $\nu\in\mathbb{N}^{X}$ with factorized walk
$\pi_{\nu,E}=\pi_{1}\dots\pi_{n}$ induced by $\nu$ on $E$ such that the
following conditions are fulfilled:
* $\nu(E)=1$
* $\sigma(\pi_{i+1}\dots\pi_{j})$ is a loop for all $(i,j)\in L$
* $\mathsf{supp}_{E}^{\nu}(i)\cap\mathsf{supp}_{E}^{\nu}(j)=\emptyset$ for all $(i,j)\in D$.
Its _positive_ version $\mathsf{HKP}^{+}(G\wr H)$ is again defined by having
no disjointness constraints. The set $\mathsf{sol}_{G\wr H}$ is defined
accordingly. Note that to simplify the constructions in the proofs, the
disjointness constraints in an $\mathsf{HKP}^{\pm}(G\wr H)$-instance disregard
the last point of walks.
In the following, when we write a knapsack expression as
$E=\alpha_{1}\cdots\alpha_{n}\alpha_{n+1}$, we assume w.l.o.g. that
$\alpha_{n+1}$ is a constant. Two elements $g,h\in H$ are called commensurable
if $g^{x}=h^{y}$ for some $x,y\in\mathbb{Z}\setminus\\{0\\}$. It is known that
if $g_{1},g_{2}$ have infinite order and are not commensurable, then there is
at most one solution $(x_{1},x_{2})\in\mathbb{Z}^{2}$ for the equations
$g_{1}^{x_{1}}g_{2}^{x_{2}}=g$ [11, Lemma 9].
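For example, in $\mathbb{Z}$ any two elements $\neq 0$ are commensurable, whereas in $\mathbb{Z}^{2}$ the elements $g_{1}=(1,0)$ and $g_{2}=(0,1)$ are not; accordingly, the equation $g_{1}^{x_{1}}g_{2}^{x_{2}}=(3,5)$ has the unique solution $(x_{1},x_{2})=(3,5)$.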
Let $E=\alpha_{1}\cdots\alpha_{n}\alpha_{n+1}$ be a knapsack expression and
write $g_{i}=\gamma(\alpha_{i})$ for $i\in[1,n+1]$. The expression (resp. the
corresponding $\mathsf{HKP}^{\pm}(G\wr H)$-instance) is c-simplified if for
any $i,j\in P_{E}$ with $g_{i}\notin H$ and $g_{j}\notin H$, we have that
commensurability of $\sigma(g_{i})$ and $\sigma(g_{j})$ implies
$\sigma(g_{i})=\sigma(g_{j})$. We call the expression (resp. the corresponding
$\mathsf{HKP}^{\pm}(G\wr H)$-instance) _normalized_ if it is c-simplified and
each atom $\alpha_{i}$ with $i\in[1,n]$ is of one of the following types: We
either have (a) $i\in Q_{E}$ and $g_{i}\in H$ or (b) $i\in P_{E}$ and
$\sigma(g_{i})=1$ or (c) $i\in P_{E}$, $g_{i}\in GH$ and $\sigma(g_{i})$ has
infinite order. Using generalizations of ideas from [24] and [22], we show:
###### Theorem 4.1.
Given an instance of $\mathsf{KP}(G\wr H)$, one can effectively construct an
equivalent finite set of normalized $\mathsf{HKP}^{+}(G\wr H)$-instances.
Here, a problem instance $I$ is _equivalent_ to a set $\mathcal{I}$ of problem
instances if $I$ has a solution if and only if at least one of the instances
in $\mathcal{I}$ has a solution.
#### Non-abelian case
Note that in a normalized knapsack expression, atoms of type (b) and (c) and
the last atom $\alpha_{n+1}$ may place non-trivial elements of $G$. Our next
step is to transform the input instance further so that only the atoms of type
(c) can place non-trivial elements of $G$, which leads to the notion of
stacking-freeness.
Let $E=\alpha_{1}\cdots\alpha_{n}\alpha_{n+1}$ be a knapsack expression over
$G\wr H$ and let $g_{i}:=\gamma(\alpha_{i})$ for all $i\in[1,n+1]$. We call an
index $i\in[1,n+1]$ stacking if either $i\in P_{E}$ and $\sigma(g_{i})=1$, or
$i=n+1$ and $g_{n+1}\notin H$. We say that $E$ is stacking-free if it has no
stacking indices. Thus, a normalized expression $E$ is stacking-free if each
atom is either of type (c) or a constant in $H$.
###### Lemma 4.2.
Given a normalized $\mathsf{HKP}^{\pm}(G\wr H)$-instance, one can effectively
construct an equivalent finite set of stacking-free, normalized
$\mathsf{HKP}^{\pm}(G\wr H)$-instances.
Let us sketch the proof of Lemma 4.2. We use the notion of an address from
[24]. An address of $E$ is a pair $(i,h)$ with $i\in[1,n+1]$ and $h\in H$ such
that $h\in\mathsf{supp}(\gamma(\alpha_{i}))$. The set of addresses $A_{E}$ of
$E$ is finite and can be computed. Intuitively, an address represents a
position in a knapsack expression where a point in $H$ can be visited.
Intuitively, instead of placing elements of $G$ by atoms of type (b) and by
$\alpha_{n+1}$, we introduce loop and disjointness constraints guaranteeing
that in points visited by these atoms, a solution would have placed elements
that multiply to $1\in G$. To this end, we pick an address $(i,h)\in A$ of a
stacking index $i$ and then guess a set $C\subseteq A$ of addresses such that
the point $h^{\prime}\in H$ visited at $(i,h)$ is visited by exactly the
addresses in $C$. The latter condition is formulated using loop and
disjointness constraints in an $\mathsf{HKP}^{\pm}(G\wr H)$-instance $I_{C}$.
In $I_{C}$, we do not place elements at $C$ anymore; instead, we construct a
set $S_{C}$ of exponent equations over $G$ that express that indeed the point
$h^{\prime}$ carries $1\in G$ in the end. Note that this eliminates one
address with stacking index. We repeat this until we are left with a set of
stacking-free instances of $\mathsf{HKP}^{\pm}(G\wr H)$, each together with an
accumulated set of exponent equations over $G$. We then take the subset
$\mathcal{I}$ of $\mathsf{HKP}^{\pm}(G\wr H)$-instances whose associated
$\mathsf{ExpEq}(G)$-instance has a solution. This will be our set for Lemma
4.2.
The last step of the non-abelian case is to construct
$\mathsf{KP}^{\pm}(H)$-instances.
###### Lemma 4.3.
Given a stacking-free, normalized $\mathsf{HKP}^{\pm}(G\wr H)$-instance, one
can effectively construct an equivalent finite set of
$\mathsf{KP}^{\pm}(H)$-instances.
We are given an instance $(E,L,D)$ with $E=\alpha_{1}\cdots\alpha_{n}$ and
write $g_{i}=\gamma(\alpha_{i})$ for $i\in[1,n]$. As $(E,L,D)$ is normalized
and stacking-free, only atoms of type (c) with $g_{i}\notin H$ can place non-
trivial elements of $G$. Moreover, if $\alpha_{i}$ and $\alpha_{j}$ are such
atoms, then the elements $\sigma(g_{i})$ and $\sigma(g_{j})$ are either non-
commensurable or equal. In the first case, the two rays produced by
$\alpha_{i}$ and $\alpha_{j}$ can intersect in at most one point; in the
second case, they intersect along subrays corresponding to intervals
$I_{i}\subseteq[0,\nu(x_{i})]$ and $I_{j}\subseteq[0,\nu(x_{j})]$.
Thus, the idea is to split up each ray wherever the intersection with another
ray starts or ends: We guess for each ray as above the number $m\leq
2\cdot|A_{E}|-1$ of subrays it will be split into and replace $g_{i}^{x_{i}}$
with $g_{i}^{y_{1}}\cdots g_{i}^{y_{m}}$. After the splitting, subrays are
either equal or disjoint. We guess an equivalence relation on the subrays;
using loop constraints, we ensure that subrays in the same class are equal;
using disjointness constraints, we ensure disjointness of subrays in distinct
classes. Finally, we have to check that for each equivalence class $C$, the
element of $G$ produced by the rays in $C$ does indeed multiply to $1\in G$.
This can be checked because $\mathsf{ExpEq}(G)$ (and thus the word problem for
$G$) is decidable.
#### Abelian case
We now come to the case of abelian $G$: We show that $\mathsf{KP}(G\wr H)$ is
decidable, but only using instances of $\mathsf{KP}^{+}(H)$ instead of
$\mathsf{KP}^{\pm}(H)$. Here, the key insight is that we can use the same
reduction, except that we just do not impose the disjointness constraints. In
the above reduction, we use disjointness constraints to control exactly which
positions in our walk visit the same point in $H$. Then we can check that in
the end, each point in $H$ carries $1\in G$. However, if $G$ is abelian, it
suffices to make sure that the set of positions in our walk decomposes into
subsets, each of which produces $1\in G$: If several of these subsets do visit
the same point in $H$, the end result will still be $1\in G$.
We illustrate this in a slightly simpler setting. Suppose we have a product
$g=\tensor*[^{h_{1}}]{{a}}{{}_{1}}\cdots\tensor*[^{h_{n}}]{{a}}{{}_{n}}$ with
$h_{1},\ldots,h_{n}\in H$ and $a_{1},\ldots,a_{n}\in G$. Then $g$ is obtained
by placing $a_{1}$ at $h_{1}\in H$, then $a_{2}$ at $h_{2}\in H$, etc. For a
subset $S=\\{s_{1},\ldots,s_{k}\\}\subseteq[1,n]$ with $s_{1}<\cdots<s_{k}$,
we define
$g_{S}=\tensor*[^{h_{s_{1}}}]{{a}}{{}_{s_{1}}}\cdots\tensor*[^{h_{s_{k}}}]{{a}}{{}_{{s_{k}}}}$.
Hence, we only multiply those factors from $S$. An equivalence relation
$\equiv$ on $[1,n]$ is called _cancelling_ if $g_{C}=1$ for every class $C$ of
$\equiv$. Moreover, $\equiv$ is called _equilocal_ if $i\equiv j$ if and only
if $h_{i}=h_{j}$. It is called _weakly equilocal_ if $i\equiv j$ implies
$h_{i}=h_{j}$. Now observe that for any $G$, we have $g=1$ if and only if
there is an equilocal cancelling equivalence on $[1,n]$. However, if $G$ is
abelian, then $g=1$ if and only if there is a _weakly_ equilocal equivalence
on $[1,n]$. Since weak equilocality can be expressed using only equalities
(and no disequalities), it suffices to impose loop conditions in our
instances.
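For a concrete instance, take $n=4$, $h_{1}=h_{2}=h_{3}=h_{4}=h$, $a_{1}=a$, $a_{2}=b$, $a_{3}=a^{-1}$, $a_{4}=b^{-1}$. The equivalence with classes $\\{1,3\\}$ and $\\{2,4\\}$ is weakly equilocal and cancelling, yet $g=\tensor*[^{h}]{{(aba^{-1}b^{-1})}}{}$: it certifies $g=1$ precisely when $[a,b]=1$, which illustrates why disjointness constraints are dispensable only for abelian $G$.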
#### Comparison to previous approach in [22]
The reduction from $\mathsf{KP}(G\wr H)$ to $\mathsf{ExpEq}(G)$ and
$\mathsf{KP}^{\pm}(H)$ ($\mathsf{KP}^{+}(H)$ respectively) uses similar ideas
as the proof of [22, Theorem 4], where it is shown $\mathsf{ExpEq}(K)$ is in
$\mathsf{NP}$ if $K$ is an iterated wreath product of $\mathbb{Z}^{r}$ for
some $r\in\mathbb{N}$.
Let us compare our reduction with the proof of [22, Theorem 4]. In [22], one
solves $\mathsf{ExpEq}(K)$ by writing $K=G\wr H$ where $G$ is abelian and $H$
is orderable and knapsack-semilinear. In both proofs, solvability of an
instance (of $\mathsf{ExpEq}(G\wr H)$ in [22] and $\mathsf{KP}(G\wr H)$ here)
is translated into a set of conditions by using similar decomposition
arguments. Then, the two proofs differ in how satisfiability of these
conditions is checked.
In [22], this set of conditions is expressed in Presburger arithmetic, which
is possible due to knapsack-semilinearity of $H$. In our reduction, we have to
translate the conditions in $\mathsf{ExpEq}(G)$ and $\mathsf{KP}^{+}(H)$
($\mathsf{KP}^{\pm}(H)$) instances. Here, we use loop constraints where in
Presburger arithmetic, once can compare variables directly. Moreover, our
reduction uses disjointness constraints to express solvability in the case
that $G$ is non-abelian. This case does not occur in [22, Theorem 4]. Finally,
we have to check whether the elements from $G$ written at the same point of
$H$ multiply to 1. The reduction of [22] can express this directly in
Presburger arithmetic since $G$ is abelian. Here, we use instances of
$\mathsf{ExpEq}(G)$.
## 5 From intersection knapsack to wreath products
In this section, we prove the “only if” direction of Theorem 3.1. Since it is
known that for infinite $H$, decidability of $\mathsf{KP}(G\wr H)$ implies
decidability of $\mathsf{ExpEq}(G)$ [12, Proposition 3.1, Proposition 5.1],
it remains to reduce (i) $\mathsf{KP}^{+}(H)$ to $\mathsf{KP}(G\wr H)$ for any
group $G\neq 1$, and (ii) $\mathsf{KP}^{\pm}(H)$ to $\mathsf{KP}(G\wr H)$ for
any non-abelian group $G$. In the following, let $G$ be a non-trivial group
and $H$ be any group and suppose $\mathsf{KP}(G\wr H)$ is decidable.
First let us illustrate how to reduce $\mathsf{KP}^{+}(H)$ to
$\mathsf{KP}(G\wr H)$. Suppose we want to verify whether a product $h_{1}\dots
h_{m}=1$ over $H$ satisfies a set of loop constraints $L\subseteq[0,m]^{2}$,
i.e. $h_{i+1}\dots h_{j}=1$ for all $(i,j)\in L$. To do so we insert into the
product for each $(i,j)\in L$ a function $f\in G^{(H)}$ after the element
$h_{i}$ and its inverse $f^{-1}$ after the element $h_{j}$. We call these
functions loop words since their supports are contained in a cyclic subgroup
$\langle t\rangle$ of $H$. We can choose the loop words such that this
modified product evaluates to 1 if and only if the loop constraints are
satisfied. For the reduction from $\mathsf{KP}^{\pm}(H)$ we need to make the
construction more robust since we simultaneously need to simulate disjointness
constraints.
If $H$ is a torsion group then $\mathsf{KP}^{+}(H)$ and $\mathsf{KP}^{\pm}(H)$
are decidable if the word problem of $H$ is decidable: For each exponent, we
only have to check finitely many candidates. Since $\mathsf{KP}(G\wr H)$ is
decidable, we know that $\mathsf{KP}(H)$ is decidable and hence also the word
problem. Thus, we assume $H$ not to be a torsion group and may fix an element
$t\in H$ of infinite order.
#### Periodic complexity
Let $K$ be a group. The following definitions will be employed with
$K=\mathbb{Z}$ or $K=H$. For any subset $D\subseteq K$, let $G^{(D)}$ be the
group of all functions $u\colon K\to G$ whose support
$\mathsf{supp}(u)=\\{h\in K\mid u(h)\neq 1\\}$ is finite and contained in $D$.
A function $f\in G^{(K)}$ is basic periodic if there exists a progression $D$
in $K$ and $c\in G$ such that $f(h)=c$ for all $h\in D$ and $f(h)=1$
otherwise. The value of such a function $f$ is the element $c$; a period of
$f$ is a period of its support. We will identify a word $u=c_{1}\dots c_{n}\in
G^{*}$ with the function $u\in G^{(\mathbb{Z})}$ where $u(i)=c_{i}$ for
$i\in[1,n]$ and $u(i)=1$ otherwise. Recall that for $u\in G^{(\mathbb{Z})}$
and $s\in\mathbb{Z}$, we have $\tensor*[^{s}]{{u}}{}(n)=u(n-s)$. We extend
this to $s\in\mathbb{Z}_{\infty}:=\mathbb{Z}\cup\\{\infty\\}$ by setting
$\tensor*[^{\infty}]{{u}}{}(n)=1$ for all $n\in\mathbb{Z}$. The periodic
complexity of $u\in G^{(\mathbb{Z})}$ is the minimal number $\mathsf{pc}(u)=k$
of basic periodic functions $u_{1},\dots,u_{k}$ such that
$u=\prod_{i=1}^{k}u_{i}$. Given a progression $D=\\{p+qn\mid n\in[0,\ell]\\}$
in $\mathbb{Z}$ and a function $u\in G^{(\mathbb{Z})}$ we define
$\pi_{D}(u)(n)=u(p+qn)$ for all $n\in\mathbb{Z}$ and say that $\pi_{D}(u)$ is
a periodic subsequence of $u$. Note that periodic subsequences of basic
periodic functions are again basic periodic. Furthermore, since $\pi_{D}\colon
G^{(\mathbb{Z})}\to G^{(\mathbb{Z})}$ is a homomorphism, taking periodic
subsequences does not increase the periodic complexity.
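For example, if $a,b\in G\setminus\\{1\\}$ are distinct, then the word $u=ab$ has $\mathsf{pc}(u)=2$: it is not itself basic periodic (its two values differ), but it is the product of the basic periodic functions $a$ and $\tensor*[^{1}]{{b}}{}$.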
###### Lemma 5.1.
Given $n,k\in\mathbb{N}$ and $a\in G\setminus\\{1\\}$, one can compute
$u_{1},\dots,u_{n}\in\langle a\rangle^{(\mathbb{N})}$ such that
$\prod_{i=1}^{n}\tensor*[^{p_{i}}]{{u}}{{}_{i}}\tensor*[^{q_{i}}]{{u}}{{}^{-1}_{i}}$
has periodic complexity $\geq k$ for all
$(p_{1},\dots,p_{n})\neq(q_{1},\dots,q_{n})\in\mathbb{Z}_{\infty}^{n}$.
Here is a proof sketch for Lemma 5.1. The case $n=1$ can be shown by taking
any function $v=a_{1}\dots a_{m}\in\langle a\rangle^{(\mathbb{N})}$ with large
periodic complexity and defining $u_{1}=a_{1}(1)^{m-1}a_{2}(1)^{m-1}\dots
a_{m}(1)^{m-1}a_{1}\dots a_{m}$ where $(1)^{m-1}$ is the sequence consisting
of $m-1$ many $1$’s. If $p,q\in\mathbb{Z}_{\infty}$ are distinct then
$\tensor*[^{p}]{{u}}{{}_{1}}\tensor*[^{q}]{{u}}{{}^{-1}_{1}}$ always contains
$v$ or $v^{-1}$ as a periodic subsequence and thus has large periodic
complexity. For $n>1$ we define $u_{i}$ ($i>1$) to be stretched versions of
$u_{1}$ such that the supports of any two functions
$\tensor*[^{p}]{{u}}{{}_{i}}$, $\tensor*[^{q}]{{u}}{{}_{j}}$ where $i\neq j$
intersect in at most one point. This allows to argue that
$\prod_{i=1}^{n}\tensor*[^{p_{i}}]{{u}}{{}_{i}}\tensor*[^{q_{i}}]{{u}}{{}^{-1}_{i}}$
still has large periodic complexity as soon as $p_{i}\neq q_{i}$ for some $i$.
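To make the case $n=1$ concrete, take $m=2$ and $v=a_{1}a_{2}$, so that $u_{1}=a_{1}\,1\,a_{2}\,1\,a_{1}a_{2}$. For the shifts $p=0$ and $q=1$, the product $w=\tensor*[^{0}]{{u}}{{}_{1}}\tensor*[^{1}]{{u}}{{}^{-1}_{1}}$ is given by $w(n)=u_{1}(n)u_{1}(n-1)^{-1}$, so $w(1)=a_{1}$ and $w(3)=a_{2}$: the word $v$ reappears along the progression $\\{1,3\\}$, as promised by the sketch.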
#### Expressing loop constraints
We now show how to use Lemma 5.1 to encode loop constraints over a product
$h_{1}\dots h_{m}$ over $H$ in an instance of $\mathsf{KP}(G\wr H)$.
Recall that a loop constraint $(i,j)$ stipulates that $\sigma(g_{i+1}\dots
g_{j})=1$. If we only want to reduce $\mathsf{KP}^{+}(H)$, it is not hard to
see that it would suffice to guarantee
$\prod_{i=1}^{n}\tensor*[^{p_{i}}]{{u}}{{}_{i}}\tensor*[^{q_{i}}]{{u}}{{}^{-1}_{i}}\neq
1$ in Lemma 5.1. In that case, we could essentially use the functions $u_{i}$
as loop words. However, in order to express disjointness constraints in
$\mathsf{KP}^{\pm}(H)$, we will construct expressions over $G\wr H$ that place
additional “disjointness patterns” in the Cayley graph of $H$. We shall make
sure that the disjointness patterns are tame: Roughly speaking, this means
they are basic periodic and either (i) place elements from a fixed subgroup
$\langle a\rangle$ or (ii) can intersect a loop word at most once. Here, the
high periodic complexity of
$\prod_{i=1}^{n}\tensor*[^{p_{i}}]{{u}}{{}_{i}}\tensor*[^{q_{i}}]{{u}}{{}^{-1}_{i}}$
will allow us to conclude that tame patterns cannot make up for a violated
loop constraint.
Let us make this precise. Recall that two elements $g,h\in H$ are called
commensurable if $g^{x}=h^{y}$ for some $x,y\in\mathbb{Z}\setminus\\{0\\}$.
Let $a\in G\setminus\\{1\\}$. Let $\mathsf{P}_{a,t}(G\wr H)$ be the set of
elements $g\in G\wr H$ such that $\tau(g)$ is basic periodic and either, (i)
its value belongs to $\langle a\rangle$, or (ii) its period is not
commensurable to $t$. In particular, a power $(ch)^{k}$ (where $c\in G$, $h\in
H$, $k\in\mathbb{N}$) belongs to $\mathsf{P}_{a,t}(G\wr H)$ if $c\in\langle
a\rangle$ or $h$ is not commensurable to $t$. Note that since loop words are
always placed along the direction $t$, this guarantees tameness: In case (ii),
the period of $\tau(g)$ being non-commensurable to $t$ implies that the
support of any $h^{\prime}g$, $h^{\prime}\in H$, can intersect the support of
a loop word in $\langle a\rangle^{(\langle t\rangle)}$ at most once. Using
Lemma 5.1, we show the following.
###### Lemma 5.2.
Given $a\in G\setminus\\{1\\}$, $m\in\mathbb{N}$ and $L\subseteq[0,m]^{2}$ we
can compute $f_{0},\dots,f_{m}\in\langle a\rangle^{(t^{*})}$ such that:
1. Let $h_{1},\dots,h_{m}\in H$. Then $h_{1}\dots h_{m}=1$ and $h_{i+1}\dots h_{j}=1$ for all $(i,j)\in L$ if and only if $f_{0}h_{1}f_{1}\dots h_{m}f_{m}=1$.
2. Let $g_{1},\dots,g_{m}\in\mathsf{P}_{a,t}(G\wr H)$ such that $\sigma(g_{i+1}\dots g_{j})\neq 1$ for some $(i,j)\in L$. Then $f_{0}g_{1}f_{1}\dots g_{m}f_{m}\neq 1$.
Observe that the first constraint says that if we only use the loop words
$f_{i}$, then they allow us to express loop constraints. The second constraint
tells us that a violated loop constraint cannot be compensated even with
perturbations $g_{1},\ldots,g_{m}$, provided that they are tame.
#### The abelian case
Lemma 5.2 provides a simple reduction from $\mathsf{KP}^{+}(H)$ to
$\mathsf{KP}(G\wr H)$. Given an instance $(E=e_{1}\dots e_{n},L)$ of
$\mathsf{KP}^{+}(H)$, we compute $f_{0},\dots,f_{n}\in\langle
a\rangle^{(t^{*})}$ using Lemma 5.2. Then $\nu\colon X\to\mathbb{N}$ satisfies
$\nu(E)=1$ and $\nu(e_{i+1}\dots e_{j})=1$ for all $(i,j)\in L$ if and only if
$\nu(f_{0}e_{1}f_{1}\dots e_{n}f_{n})=1$. Hence $(E,L)$ has a solution if and
only if the knapsack expression $f_{0}e_{1}f_{1}\dots e_{n}f_{n}$ has one.
#### The non-abelian case
Now let $G$ be a non-abelian group. In the following we will reduce
$\mathsf{KP}^{\pm}(H)$ to $\mathsf{KP}(G\wr H)$. The first step is to
construct from an $\mathsf{KP}^{\pm}(H)$-instance $I$ an equivalent
$\mathsf{HKP}^{+}(G\wr H)$-instance $\hat{I}$ using a nontrivial commutator
$[a,b]\neq 1$ in $G$. In a second step we apply the “loop words”-construction
from Lemma 5.2 (point 2) to $\hat{I}$, going to a (pure) knapsack instance. It
guarantees that, if a loop constraint is violated, then the knapsack instance
does not evaluate to 1. Furthermore, if a disjointness constraint is violated
then there exists a large number of pairwise distant points in the Cayley
graph of $H$ which are labeled by a nontrivial element. These points cannot be
canceled by the functions $f_{i}$ from Lemma 5.2. Finally, if all loop and
disjointness constraints are satisfied then the induced walk in the Cayley
graph provides enough “empty space” such that the loop words can be shifted to
be disjoint from the original walk induced by $\hat{I}$ (encoding the
disjointness constraints).
#### Normalization
Let $I=(E=e_{1}\dots e_{n},L,D)$ be a $\mathsf{KP}^{\pm}(H)$-instance where
$e_{i}$ is either a constant $e_{i}=h_{i}$ or a power $e_{i}=h_{i}^{x_{i}}$.
We will start by establishing the following useful properties. We call $I$
torsion-free if $h_{i}$ has infinite order for all $i\in P_{E}$. Call $I$
orthogonalized for all $(i,j)\in D\cap P_{E}^{2}$ such that we have $\langle
h_{i}\rangle\cap\langle h_{j}\rangle=\\{1\\}$. If $I$ is torsion-free and
orthogonalized then it is called normalized. The orthogonality will be crucial
for the tameness of the disjointness patterns since at most one of the
elements $h_{i},h_{j}$ for $(i,j)\in D\cap P_{E}^{2}$ is commensurable to $t$.
Furthermore, it guarantees that there is at most one intersection point for
any pair $(i,j)\in D$.
###### Lemma 5.3.
One can compute a finite set $\mathcal{I}$ of normalized instances of
$\mathsf{KP}^{\pm}(H)$ such that $I$ has a solution if and only if there
exists $I^{\prime}\in\mathcal{I}$ which has a solution.
Here, torsion-freeness is easily achieved: If $h_{i}$ has finite order, then
$h_{i}^{x_{i}}$ can only assume finitely many values, so we replace
$h_{i}^{x_{i}}$ by one of finitely many constants. Orthogonality requires an
observation: If $\langle h_{i}\rangle\cap\langle h_{j}\rangle\neq\\{1\\}$,
then any two intersecting progressions $\pi_{i},\pi_{j}$ with periods $h_{i}$
and $h_{j}$, respectively, must intersect periodically, meaning there exists
an intersection point that is close to an endpoint of $\pi_{i}$ or $\pi_{j}$.
This means, in lieu of $(i,j)\in D$, we can require disjointness of one power
with a constant.
#### Expressing disjointness constraints
Hence we can assume that $I$ is normalized. To express disjointness
constraints, we must assume that $G$ is non-abelian. Let $a,b\in G$ with
$aba^{-1}b^{-1}=[a,b]\neq 1$. Our starting point is the following idea. To
express that two progressions $\pi_{i}$ and $\pi_{j}$, induced by a valuation
of $E$, are disjoint, we construct an expression over $G\wr H$ that first
places $a$ at each point in $\pi_{i}$, then $b$ at each point in $\pi_{j}$,
then again $a^{-1}$ at each point in $\pi_{i}$, and finally $b^{-1}$ at each
point in $\pi_{j}$, see (2). Here we need loop constraints that express that
the start and endpoints of the two traversals of $\pi_{i}$ (and $\pi_{j}$)
coincide. Then, if $\pi_{i}$ and $\pi_{j}$ are disjoint, the effect will be
neutral; otherwise any intersection point will carry $aba^{-1}b^{-1}\neq 1$.
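Concretely, if $H=\mathbb{Z}^{2}$ and $\pi_{i}$, $\pi_{j}$ run along the two coordinate axes and meet exactly at the origin, then after the four traversals every point of $\pi_{i}$ other than the origin carries $aa^{-1}=1$, every point of $\pi_{j}$ other than the origin carries $bb^{-1}=1$, and the origin carries $aba^{-1}b^{-1}=[a,b]\neq 1$.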
However, this leads to two problems. First, there might be more than one
disjointness constraint: If $k$ disjointness constraints are violated by the
same point $h^{\prime\prime}\in H$, then $h^{\prime\prime}$ would carry
$[a,b]^{k}$, which can be the identity (for example, $G$ may be finite).
Second, when we also place loop words (which multiply elements from $\langle
a\rangle$), those could also interfere with the commutator (for example,
instead of $aba^{-1}b^{-1}$, we might get $aba^{-1}(a)b^{-1}(a^{-1})=1$).
Instead, we do the following. Let $t\in H$ be the element of infinite order
used for the loop words. Moreover, let
$D=\\{(i_{1},j_{1}),\dots,(i_{d},j_{d})\\}$. For each $(i_{k},j_{k})\in D$,
instead of performing the above “commutator construction” once, we perform it
$n+d$ times, each time shifted by $t^{N_{k}}\in H$ for some large $N_{k}$. The
numbers $N_{0}<N_{1}<\cdots$ are chosen so large that for at least one
commutator, there will be no interference from other commutators or from loop
words.
Let us make this precise. Since $I$ is orthogonalized, we may assume that for
each $(i,j)\in D\cap P_{E}^{2}$, the elements $h_{j}$ and $t$ are not
commensurable; otherwise we swap $i$ and $j$. The resulting
$\mathsf{HKP}^{+}(G\wr H)$-instance $\hat{I}$ will have length
$m=n+4d(n+d)(n+2)$. In preparation, we can compute a number $N$ such that the
functions $f_{0},\dots,f_{m}$ from Lemma 5.2 for any $L\subseteq[0,m]^{2}$
satisfy $\mathsf{supp}(f_{i})\subseteq\\{t^{j}\mid j\in[0,N-1]\\}$. For each
$i\in[1,n]$, $c\in G$, $s\in\mathbb{N}$, we define the knapsack expression
$E_{i,c,s}$ over $G\wr H$ as
$E_{i,c,s}=\begin{cases}e_{1}\dots
e_{i-1}\,(t^{s})\,(c\,t^{-s}h_{i}t^{s})^{x_{i}}(ct^{-s})\,e_{i+1}\dots
e_{n},&\text{if }e_{i}=h_{i}^{x_{i}},\\\ e_{1}\dots
e_{i-1}\,(t^{s})\;(c\,t^{-s}h_{i}t^{s})\;\;(ct^{-s})\,e_{i+1}\dots
e_{n},&\text{if }e_{i}=h_{i}.\end{cases}$ (1)
The parentheses indicate the atoms. We define
$\hat{E}=E\cdot\prod_{k=1}^{d}\prod_{s\in S_{k}}\Big{(}E_{i_{k},a,s}\cdot
E_{j_{k},b,s}\cdot E_{i_{k},a^{-1},s}\cdot E_{j_{k},b^{-1},s}\Big{)}$ (2)
where $S_{k}=\\{j(n+d)^{2k}N\mid j\in[1,n+d]\\}$ for all $k\in[1,d]$, and all
occurrences of expressions of the form $E_{i,c,s}$ use fresh variables. Note
that $E_{i_{k},a,s}\cdot E_{j_{k},b,s}\cdot E_{i_{k},a^{-1},s}\cdot
E_{j_{k},b^{-1},s}$ performs the commutator construction for $(i_{k},j_{k})$,
shifted by $t^{s}$. Let $\hat{E}=\hat{e}_{1}\dots\hat{e}_{m}$ be the resulting
expression. Notice that its length is indeed $m=n+4d(n+d)(n+2)$ as claimed
above.
Finally, in our $\mathsf{HKP}^{+}(G\wr H)$ instance, we also add a set
$J\subseteq[0,m]^{2}$ of loop constraints stating that for each $k\in[1,d]$
and $s\in S_{k}$, the $i_{k}$-th atom in $E_{i_{k},a,s}$ arrives at the same
place in $H$ as the $i_{k}$-th atom in $E$ (and analogously for
$E_{j_{k},b,s}$, $E_{i_{k},a^{-1},s}$, $E_{j_{k},b^{-1},s}$). See Section C.4
for details.
Let $f_{0},\dots,f_{m}\in\langle a\rangle^{(t^{*})}$ be the loop words from
Lemma 5.2 for the set $J\subseteq[0,m]^{2}$. It is now straightforward to
verify that the elements $\hat{e}_{i}$ are all tame as explained above. In
other words, for every valuation $\nu$ and $i\in[1,m]$, we have
$\nu(\hat{e}_{i})\in\mathsf{P}_{a,t}$ (see Lemma C.3).
#### Shifting loop words
By construction, we now know that if the instance
$f_{0}\hat{e}_{1}f_{1}\cdots\hat{e}_{m}f_{m}$ of $\mathsf{KP}(G\wr H)$ has a
solution, then so does our normalized instance $I$ of $\mathsf{KP}^{\pm}(H)$.
However, there is one last obstacle: Even if all loop and disjointness
constraints can be met for $I$, we cannot guarantee that
$f_{0}\hat{e}_{1}f_{1}\cdots\hat{e}_{m}f_{m}$ has a solution: It is possible
that some loop words interfere with some commutator constructions so as to
yield an element $\neq 1$.
The idea is to _shift_ all the loop words $f_{0},\ldots,f_{m}$ in direction
$t$ by replacing $f_{i}$ by
$t^{r}f_{i}t^{-r}={}^{t^{r}}\\!f_{i}$ for some
$r\in\mathbb{N}$. We shall argue that for some $r$ in a bounded interval, this
must result in an interference-free expression, even though the elements
$\hat{e}_{i}$ may modify an unbounded number of points in $H$. To this end, we
use again that the $\hat{e}_{i}$ are tame: Each of them either (i) places
elements from $\langle a\rangle$, or (ii) has a period non-commensurable to
$t$. In case (i), there can be no interference, because the $f_{i}$ also place
elements in $\langle a\rangle$, which is an abelian subgroup. In case (ii),
the support of $\hat{e}_{i}$ can intersect the support of each $f_{j}$ at most
once. Hence, there are at most $m$ points each $f_{j}$ has to avoid after
shifting. The following simple lemma states that one can always shift finite
sets $F_{i}$ in parallel to avoid finite sets $A_{i}$, by a bounded shift.
Notice that the bound does not depend on the size of the elements in the sets
$F_{i}$ and $A_{i}$.
###### Lemma 5.4.
Let $F_{1},\ldots,F_{m}\subseteq\mathbb{Z}$ with $|F_{i}|\leq N$ and
$A_{1},\ldots,A_{m}\subseteq\mathbb{Z}$ with $|A_{i}|\leq\ell$. There exists a
shift $r\in[0,Nm\ell]$ such that $(r+F_{i})\cap A_{i}=\emptyset$ for each
$i\in[1,m]$.
###### Proof 5.5.
For every $a\in\mathbb{Z}$ there are at most $|F_{i}|\leq N$ shifts
$r\in\mathbb{N}$ with $a\in r+F_{i}$. Hence each pair $(F_{i},A_{i})$ rules
out at most $N\ell$ shifts, so in total at most $Nm\ell$ shifts $r$ violate
$(r+F_{i})\cap A_{i}=\emptyset$ for some $i$. Since $[0,Nm\ell]$ contains
$Nm\ell+1$ values, it must contain a shift $r$ with $(r+F_{i})\cap
A_{i}=\emptyset$ for each $i\in[1,m]$.
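As a sanity check (ours, not part of the proof), the counting argument can be tested by brute force on random instances; the parameters below are arbitrary.

```python
# Brute-force check of Lemma 5.4: a shift r in [0, N*m*ell] avoiding all A_i
# always exists, since each pair (F_i, A_i) rules out at most N*ell shifts.
import random

def find_shift(Fs, As, N, m, ell):
    for r in range(N * m * ell + 1):
        if all(not ({r + x for x in F} & A) for F, A in zip(Fs, As)):
            return r
    return None

random.seed(0)
N, m, ell = 3, 4, 2
for _ in range(1000):
    Fs = [set(random.sample(range(-20, 20), random.randint(1, N))) for _ in range(m)]
    As = [set(random.sample(range(-20, 20), random.randint(1, ell))) for _ in range(m)]
    assert find_shift(Fs, As, N, m, ell) is not None
print("a valid shift was found in every trial")
```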
We can thus prove the following lemma, which clearly completes the reduction
from $\mathsf{KP}^{\pm}(H)$ to $\mathsf{KP}(G\wr H)$.
###### Lemma 5.6.
$I=(E,L,D)$ has a solution if and only if
${}^{t^{r}}\\!f_{0}\,\hat{e}_{1}\,{}^{t^{r}}\\!f_{1}\cdots\hat{e}_{m}\,{}^{t^{r}}\\!f_{m}$
has a solution for some $r\in[0,Nm^{2}]$.
## 6 Applications
#### The discrete Heisenberg group
Here, we prove that $\mathsf{SAT}^{+}(H_{3}(\mathbb{Z}))$ is undecidable.
Together with Theorem 3.1 and Theorem 3.2, this directly implies Theorem 3.3.
Define the matrices $A=\begin{pmatrix}1&1&0\\\ 0&1&0\\\ 0&0&1\end{pmatrix}$,
$B=\begin{pmatrix}1&0&0\\\ 0&1&1\\\ 0&0&1\end{pmatrix}$, and
$C=\begin{pmatrix}1&0&1\\\ 0&1&0\\\ 0&0&1\end{pmatrix}$. The group
$H_{3}(\mathbb{Z})$ is generated by $A$ and $B$ and we have $AC=CA$ and
$BC=CB$. It is well-known that (I) $A^{i}C^{j}=A^{i^{\prime}}C^{j^{\prime}}$
iff $i=i^{\prime}$ and $j=j^{\prime}$; and (II)
$B^{i}C^{j}=B^{i^{\prime}}C^{j^{\prime}}$ iff $i=i^{\prime}$ and
$j=j^{\prime}$; and (III) $A^{i}B^{j}A^{-i^{\prime}}B^{-j^{\prime}}=C^{k}$ if
and only if $i=i^{\prime}$, $j=j^{\prime}$, and $k=ij$. For proofs, see
Section D.1.
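These identities are easy to check numerically. The following is a small sanity check (ours, not part of the proof), verifying the forward direction of (III) for small exponents with integer matrices:

```python
# Numeric sanity check of (III) in H_3(Z): A^i B^j A^-i B^-j == C^(i*j).
import numpy as np

A = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 1]])
B = np.array([[1, 0, 0], [0, 1, 1], [0, 0, 1]])
C = np.array([[1, 0, 1], [0, 1, 0], [0, 0, 1]])

def mpow(M, k):
    # integer matrix power; negative exponents via the (unimodular) inverse
    if k >= 0:
        return np.linalg.matrix_power(M, k)
    return np.rint(np.linalg.inv(np.linalg.matrix_power(M, -k))).astype(int)

for i in range(-3, 4):
    for j in range(-3, 4):
        lhs = mpow(A, i) @ mpow(B, j) @ mpow(A, -i) @ mpow(B, -j)
        assert (lhs == mpow(C, i * j)).all()
print("A^i B^j A^-i B^-j == C^(i*j) for all tested exponents")
```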
We show undecidability of $\mathsf{SAT}^{+}(H_{3}(\mathbb{Z}))$ by reducing
from solvability of Diophantine equations over natural numbers. Hence, we are
given a finite system $\bigwedge_{j=1}^{m}E_{j}$ of equations of the form
$x=a$, $z=x+y$, and $z=xy$. It is well-known that solvability of such equation
systems is undecidable [27]. Given such an equation system over a set of
variables $X$ we define a $\mathcal{C}^{+}(H_{3}(\mathbb{Z}))$-formula
containing the variables $\\{g_{x}\mid x\in X\\}\cup\\{g_{0}\\}$ with the
interpretation that $g_{x}=g_{0}C^{x}$. First we state that
$g_{0}\xrightarrow{C}\mathrel{\vphantom{\to}{}^{*}}g_{x}$ for all $x\in X$.
Expressing $x=a$ is done simply with $g_{0}\xrightarrow{C^{a}}g_{x}$. For
$z=x+y$, we use
$C^{x}A^{*}\cap A^{x^{\prime}}C^{*}\cap(AC)^{*}\neq\emptyset\;\;\wedge\;\;A^{x^{\prime}}C^{*}\cap C^{z}A^{*}\cap C^{y}(AC)^{*}\neq\emptyset.$
This can be expressed in $\mathcal{C}^{+}(H_{3}(\mathbb{Z}))$ with a fresh
variable $f_{x^{\prime}}$ for $g_{0}A^{x^{\prime}}$: For example, the first
conjunct holds iff there exists $h\in H_{3}(\mathbb{Z})$ such that
$g_{0}\xrightarrow{A}\mathrel{\vphantom{\to}{}^{*}}f_{x^{\prime}}$,
$g_{x}\xrightarrow{A}\mathrel{\vphantom{\to}{}^{*}}h$,
$f_{x^{\prime}}\xrightarrow{C}\mathrel{\vphantom{\to}{}^{*}}h$,
$g_{0}\xrightarrow{AC}\mathrel{\vphantom{\to}{}^{*}}h$. By (I) and $AC=CA$,
the first conjunct holds iff $x=x^{\prime}$. Similarly, the second conjunct
holds iff $z=x^{\prime}+y$, hence $z=x+y$. For $z=xy$, we use:
$C^{x}A^{*}\cap A^{x^{\prime}}C^{*}\cap(AC)^{*}\neq\emptyset\;\;\wedge\;\;B^{y^{\prime}}C^{*}\cap C^{y}B^{*}\cap(BC)^{*}\neq\emptyset\;\;\wedge\;\;A^{x^{\prime}}B^{*}(A^{-1})^{*}\cap B^{y^{\prime}}C^{*}\cap C^{z}B^{*}\neq\emptyset.$
Like above, the first and second conjunct express $x^{\prime}=x$ and
$y^{\prime}=y$. The third says that
$A^{x^{\prime}}B^{r}(A^{-1})^{s}=B^{y^{\prime}}C^{z}$ for some $r,s\geq 0$, so
by (III), it states $z=x^{\prime}y^{\prime}$, hence $z=xy$.
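As a concrete instance (our example): for $x^{\prime}=2$ and $y^{\prime}=3$, identity (III) gives $A^{2}B^{3}A^{-2}B^{-3}=C^{6}$, i.e., $A^{2}B^{3}(A^{-1})^{2}=B^{3}C^{6}=C^{6}B^{3}$, so the third conjunct is satisfiable (with $r=3$ and $s=2$) precisely when $z=6=x^{\prime}y^{\prime}$.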
#### Solvable Baumslag-Solitar groups
We show that $\mathsf{SAT}^{\pm}(\mathsf{BS}(1,q))$ is decidable for every
$q\geq 1$. By Theorem 3.1 and Theorem 3.2, this proves Theorem 3.5. Our proof
is based on the following observation, which is shown in Section D.2.
###### Proposition 6.1.
The first-order theory of $\mathcal{C}^{+}(\mathsf{BS}(1,q))$ is decidable.
For Proposition 6.1, we show that given any finite subset
$F\subseteq\mathsf{BS}(1,q)$, the structure
$(\mathsf{BS}(1,q),(\xrightarrow{g})_{g\in
F},(\xrightarrow{g}\mathrel{\vphantom{\to}{}^{*}})_{g\in F})$ is effectively
an automatic structure, which implies that its first-order theory is decidable
[18, Corollary 4.2]. This uses a straightforward extension of the methods in
[20]. In [20, proof of Theorem 4.1], it is shown that
$\mathsf{KP}(\mathsf{BS}(1,q))$ can be reduced to the existential fragment of
the structure $(\mathbb{Z},+,V_{q})$, where $V_{q}(n)$ is the largest power of
$q$ that divides $n$ (for instance, $V_{2}(12)=4$). The structure
$(\mathbb{Z},+,V_{q})$ is called _Büchi arithmetic_ and is well-known to be
automatic. Here, we show that
$(\mathsf{BS}(1,q),(\xrightarrow{g})_{g\in
F},(\xrightarrow{g}\mathrel{\vphantom{\to}{}^{*}})_{g\in F})$ can be
interpreted in a slight extension of Büchi arithmetic that is still automatic.
From Proposition 6.1, we can derive a stronger statement, which clearly
implies decidability of $\mathsf{SAT}^{\pm}(\mathsf{BS}(1,q))$:
###### Theorem 6.2.
The first-order theory of $\mathcal{C}^{\pm}(\mathsf{BS}(1,q))$ is decidable.
Indeed, since $\mathsf{BS}(1,q)$ is torsion-free, we can express the predicate
$\bot_{g,h}$ using universal quantification: We have
$(g_{1},g_{2})\bot_{g,h}(h_{1},h_{2})$ if and only if
$g_{1}\xrightarrow{g}\mathrel{\vphantom{\to}{}^{*}}g_{2}$ and
$h_{1}\xrightarrow{h}\mathrel{\vphantom{\to}{}^{*}}h_{2}$ and
$\forall
f,f^{\prime}\in\mathsf{BS}(1,q)\colon\left(g_{1}\xrightarrow{g}\mathrel{\vphantom{\to}{}^{*}}f\wedge
f\xrightarrow{g}\mathrel{\vphantom{\to}{}^{*}}g_{2}\wedge
h_{1}\xrightarrow{h}\mathrel{\vphantom{\to}{}^{*}}f^{\prime}\wedge
f^{\prime}\xrightarrow{h}\mathrel{\vphantom{\to}{}^{*}}h_{2}\right)\to f\neq
f^{\prime}.$
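Here, torsion-freeness guarantees that the elements $f$ with $g_{1}\xrightarrow{g}\mathrel{\vphantom{\to}{}^{*}}f$ and $f\xrightarrow{g}\mathrel{\vphantom{\to}{}^{*}}g_{2}$ are exactly the points visited by the progression from $g_{1}$ to $g_{2}$ with period $g$, so the above formula indeed expresses disjointness of the two progressions.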
## 7 Conclusion
We have shown that for infinite groups $H$, the problem $\mathsf{KP}(G\wr H)$
is decidable if and only if $\mathsf{ExpEq}(G)$ is decidable and either (i)
$G$ is abelian and $\mathsf{KP}^{+}(H)$ is decidable or (ii) $G$ is non-
abelian and $\mathsf{KP}^{\pm}(H)$ is decidable. This reduces the study of
decidability of $\mathsf{KP}(G\wr H)$ to decidability questions about the
factors $G$ and $H$.
However, we leave open whether there is a group $H$ where $\mathsf{KP}^{+}(H)$
is decidable, but $\mathsf{KP}^{\pm}(H)$ is undecidable. It is clear that both
are decidable for all groups in the class of knapsack-semilinear groups. This
class contains a large part of the groups for which knapsack has been studied.
For example, it contains graph groups [21, Theorem 3.11] and hyperbolic groups
[19, Theorem 8.1]. Moreover, knapsack-semilinearity is preserved by a variety
of constructions: This includes wreath products [12, Theorem 5.4], graph
products [23], free products with amalgamation and HNN-extensions over finite
identified subgroups [23], and taking finite-index overgroups [23]. Moreover,
the groups $H_{3}(\mathbb{Z})$ and $\mathsf{BS}(1,q)$ for $q\geq 2$ are also
unable to distinguish $\mathsf{KP}^{+}$ and $\mathsf{KP}^{\pm}$: We have shown
here that $\mathsf{KP}^{+}$ is undecidable in $H_{3}(\mathbb{Z})$ and
$\mathsf{KP}^{\pm}$ is decidable in $\mathsf{BS}(1,q)$. To the best of the
authors’ knowledge, among the groups for which knapsack is known to be
decidable, this only leaves $\mathsf{BS}(p,q)$ for $p,q$ coprime, and
$G\wr\mathsf{BS}(1,q)$ (with decidable $\mathsf{ExpEq}(G)$) as candidates to
distinguish $\mathsf{KP}^{+}$ and $\mathsf{KP}^{\pm}$.
## References
* [1] Alexei Myasnikov, Andrey Nikolaev and Alexander Ushakov “Knapsack Problems in Groups” In _Mathematics of Computation_ 84, 2015, pp. 987–1016
* [2] Alexei Mishchenko and Alexander Treier “Knapsack problem for nilpotent groups” In _Groups Complexity Cryptology_ 9.1, 2017, pp. 87–98
* [3] L. Babai et al. “Multiplicative Equations over Commuting Matrices” In _Proceedings of SODA 1996_ ACM/SIAM, 1996, pp. 498–507
* [4] Gilbert Baumslag and Donald Solitar “Some two-generator one-relator non-Hopfian groups” In _Bulletin of the American Mathematical Society_ 68.3, 1962, pp. 199–201 DOI: 10.1090/S0002-9904-1962-10745-9
* [5] Paul Bell et al. “Matrix Equations and Hilbert’s Tenth Problem” In _International Journal of Algebra and Computation_ 18.8, 2008, pp. 1231–1241 DOI: 10.1142/S0218196708004925
* [6] Paul C. Bell, Igor Potapov and Pavel Semukhin “On the Mortality Problem: From Multiplicative Matrix Equations to Linear Recurrence Sequences and Beyond” In _Proceedings of MFCS 2019_ , 2019, pp. 83:1–83:15 DOI: 10.4230/LIPIcs.MFCS.2019.83
* [7] J. Richard Büchi “Weak Second-Order Arithmetic and Finite Automata” In _Mathematical Logic Quarterly_ 6.1-6, 1960, pp. 66–92
* [8] Daniel König, Markus Lohrey and Georg Zetzsche “Knapsack and subset sum problems in nilpotent, polycyclic, and co-context-free groups” In _Algebra and Computer Science_ 677, Contemporary Mathematics American Mathematical Society, 2016, pp. 138–153
* [9] Fedor Anatolievich Dudkin and Alexander Victorovich Treyer “Knapsack problem for Baumslag–Solitar groups” In _Siberian Journal of Pure and Applied Mathematics_ 18.4 Novosibirsk State University, 2018, pp. 43–55
* [10] Elizaveta Frenkel, Andrey Nikolaev and Alexander Ushakov “Knapsack problems in products of groups” In _Journal of Symbolic Computation_ 74, 2016, pp. 96–108
* [11] Michael Figelius, Moses Ganardi, Markus Lohrey and Georg Zetzsche “The Complexity of Knapsack Problems in Wreath Products” In _Proceedings of ICALP 2020_ , 2020, pp. 126:1–126:18 DOI: 10.4230/LIPIcs.ICALP.2020.126
* [12] Moses Ganardi, Daniel König, Markus Lohrey and Georg Zetzsche “Knapsack Problems for Wreath Products” In _Proceedings of STACS 2018_ 96, LIPIcs Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, 2018, pp. 32:1–32:13
* [13] Moses Ganardi and Markus Lohrey Personal communication, 2020
* [14] Mikhael Gromov “Groups of polynomial growth and expanding maps” In _Publications Mathématiques de l’Institut des Hautes Études Scientifiques_ 53.1 Springer, 1981, pp. 53–78
* [15] Funda Gul, Mahmood Sohrabi and Alexander Ushakov “Magnus embedding and algorithmic properties of groups $F/N^{(d)}$” In _Transactions of the American Mathematical Society_ 369.9, 2017, pp. 6189–6206
* [16] Derek F Holt, Sarah Rees, Claas E Röver and Richard M Thomas “Groups with context-free co-word problem” In _Journal of the London Mathematical Society_ 71.3 Oxford University Press, 2005, pp. 643–657
* [17] M. I. Kargapolov and Ju. I. Merzljakov “Fundamentals of the Theory of Groups” Translated from the second Russian edition New York: Springer-Verlag, 1979
* [18] Bakhadyr Khoussainov and Anil Nerode “Automatic presentations of structures” In _International Workshop on Logic and Computational Complexity_ , 1994, pp. 367–392 Springer
* [19] Markus Lohrey “Knapsack in hyperbolic groups” In _Journal of Algebra_ 545, 2020, pp. 390–415
* [20] Markus Lohrey and Georg Zetzsche “Knapsack and the Power Word Problem in Solvable Baumslag-Solitar Groups” In _Proceedings of MFCS 2020_ , 2020, pp. 67:1–67:15 DOI: 10.4230/LIPIcs.MFCS.2020.67
* [21] Markus Lohrey and Georg Zetzsche “Knapsack in Graph Groups” In _Theory of Computing Systems_ 62.1, 2018, pp. 192–246
* [22] Michael Figelius, Moses Ganardi, Markus Lohrey and Georg Zetzsche “The Complexity of Knapsack Problems in Wreath Products” In _CoRR_ abs/2002.08086, 2020 URL: https://arxiv.org/abs/2002.08086
* [23] M., M. and G. “Closure properties of knapsack semilinear groups” In _CoRR_ abs/1911.12857, 2019 URL: https://arxiv.org/abs/1911.12857
* [24] Moses Ganardi, Daniel König, Markus Lohrey and Georg Zetzsche “Knapsack Problems for Wreath Products” In _CoRR_ abs/1709.09598, 2017 URL: http://arxiv.org/abs/1709.09598
* [25] Markus Lohrey, Benjamin Steinberg and Georg Zetzsche “Rational subsets and submonoids of wreath products” In _Information and Computation_ 243, 2015, pp. 191–204
* [26] Markus Lohrey and Georg Zetzsche “Knapsack in Graph Groups, HNN-Extensions and Amalgamated Products” In _Proceedings of STACS 2016_ 47, Leibniz International Proceedings in Informatics (LIPIcs) Dagstuhl, Germany: Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik, 2016, pp. 50:1–50:14
* [27] Yuri V. Matiyasevich “Hilbert’s Tenth Problem” Cambridge, Massachusetts: MIT Press, 1993
* [28] Jane Matthews “The conjugacy problem in wreath products and free metabelian groups” In _Transactions of the American Mathematical Society_ 121.2 JSTOR, 1966, pp. 329–339
* [29] Alexei Miasnikov, Svetla Vassileva and Armin Weiß “The Conjugacy Problem in Free Solvable Groups and Wreath Products of Abelian Groups is in $\mathrm{TC}^{0}$” In _Theory of Computing Systems_ 63.4, 2019, pp. 809–832
* [30] VN Remeslennikov and VG Sokolov “Some properties of a Magnus embedding” In _Algebra and Logic_ 9.5 Springer, 1970, pp. 342–349
* [31] J. Richard Büchi and Steven Senger “Definability in the Existential Theory of Concatenation and Undecidable Extensions of this Theory” In _Mathematical Logic Quarterly_ 34.4, 1988, pp. 337–342 DOI: 10.1002/malq.19880340410
* [32] Wilhelm Magnus “On a theorem of Marshall Hall” In _Annals of Mathematics. Second Series_ 40, 1939, pp. 764–768
* [33] Wolfgang Woess “Random walks on infinite graphs and groups” Cambridge University Press, 2000
## Appendix A Proofs from Section 3
### A.1 Proof of Theorem 3.2
The goal of this section is to show that $\mathsf{SAT}^{\pm}(G)$ is
effectively equivalent to $\mathsf{KP}^{\pm}(G)$ and $\mathsf{SAT}^{+}(G)$ is
effectively equivalent to $\mathsf{KP}^{+}(G)$ for any finitely generated
group $G$. We begin with the equivalence of the more general problems. The
first direction is shown in the following lemma:
###### Lemma A.1.
For any finitely generated group $G$ it holds that if $\mathsf{SAT}^{\pm}(G)$
is decidable, then $\mathsf{KP}^{\pm}(G)$ is decidable as well.
###### Proof A.2.
Let $(E=\alpha_{1}\cdots\alpha_{n}\alpha_{n+1},L,D)$ be a
$\mathsf{KP}^{\pm}(G)$-instance with $\alpha_{n+1}$ a constant and variables
in $X=\\{x_{1},\dots,x_{n}\\}$. We write $g_{i}:=\gamma(\alpha_{i})$ for all
$i\in[1,n+1]$ and define the following formula in $\mathcal{F}^{\pm}$:
$\begin{split}\varphi:=\exists y_{0},\dots,y_{n}\colon&\bigwedge_{i\in
P_{E}}y_{i-1}\xrightarrow{g_{i}}\mathrel{\vphantom{\to}{}^{*}}y_{i}\wedge\bigwedge_{i\in
Q_{E}\setminus\\{n+1\\}}y_{i-1}\xrightarrow{g_{i}}y_{i}\wedge
y_{n}\xrightarrow{g_{n+1}}y_{0}\wedge\\\ &\bigwedge_{(i,j)\in
L}y_{i}\xrightarrow{1}y_{j}\wedge\bigwedge_{(i,j)\in
D}(y_{i-1},y_{i})\bot_{g_{i},g_{j}}(y_{j-1},y_{j}).\end{split}$
Let $\varphi(y_{0},\dots,y_{n})$ be the part of $\varphi$ without the
existential quantifiers which means that $y_{0},\dots,y_{n}$ are free
variables in $\varphi(y_{0},\dots,y_{n})$. For an assignment $\mu\colon
Y:=\\{y_{0},\dots,y_{n}\\}\to G$ we write
$\mu\models\varphi(y_{0},\dots,y_{n})$ if $\varphi(y_{0},\dots,y_{n})$
evaluates to true when setting $y_{i}$ to $\mu(y_{i})$ for all $i\in[0,n]$.
We claim that $\mathsf{sol}_{G}(E,L,D)\neq\emptyset$ if and only if
$\varphi(y_{0},\dots,y_{n})$ is satisfiable. For the first direction we assume
that $\nu\in\mathsf{sol}_{G}(E,L,D)$ and let
$\pi_{\nu,E}=\pi_{1}\cdots\pi_{n+1}$ be the factorized walk induced by $\nu$
on $E$. We define the assignment $\mu\colon Y\to G$ such that
$\mu(y_{i}):=\nu(\alpha_{1}\cdots\alpha_{i})$ for all $i\in[1,n]$ and
$\mu(y_{0}):=1$. Then $\mu(y_{i-1})g_{i}^{\nu(x_{i})}=\mu(y_{i})$ for all
$i\in P_{E}$ and $\mu(y_{i-1})g_{i}=\mu(y_{i})$ for all $i\in
Q_{E}\setminus\\{n+1\\}$. Moreover, since $\nu(E)=1$, it holds that
$\mu(y_{n})g_{n+1}=\mu(y_{0})$. Since $\nu$ fulfills the loop constraints in
$L$, we have that $\mu(y_{i})=\mu(y_{j})$ for all $(i,j)\in L$. For all
$(i,j)\in D$ we have that $\pi_{i}$ and $\pi_{j}$ are disjoint and therefore
$(\mu(y_{i-1}),\mu(y_{i}))\bot_{g_{i},g_{j}}(\mu(y_{j-1}),\mu(y_{j}))$ is
fulfilled. Thus, $\mu\models\varphi(y_{0},\dots,y_{n})$.
For the other direction we assume that $\mu\colon Y\to G$ such that
$\mu\models\varphi(y_{0},\dots,y_{n})$. Then we define the valuation
$\nu\in\mathbb{N}^{X}$ such that $\mu(y_{i-1})g_{i}^{\nu(x_{i})}=\mu(y_{i})$
and $\nu(x_{i})$ is minimal with this property for all $i\in P_{E}$. This can
be computed by trying all values for $\nu(x_{i})$ iteratively since
$\mu(y_{i-1})\xrightarrow{g_{i}}\mathrel{\vphantom{\to}{}^{*}}\mu(y_{i})$
evaluates to true. As
$\bigwedge_{i\in
P_{E}}\mu(y_{i-1})\xrightarrow{g_{i}}\mathrel{\vphantom{\to}{}^{*}}\mu(y_{i})\wedge\bigwedge_{i\in
Q_{E}\setminus\\{n+1\\}}\mu(y_{i-1})\xrightarrow{g_{i}}\mu(y_{i})\wedge\mu(y_{n})\xrightarrow{g_{n+1}}\mu(y_{0})$
is fulfilled, we have that
$\mu(y_{0})\nu(\alpha_{1})\cdots\nu(\alpha_{n})\nu(\alpha_{n+1})=\mu(y_{0})$
and therefore $\nu(E)=1$. Let $\pi_{\nu,E}=\pi_{1}\cdots\pi_{n+1}$ be the
factorized walk induced by $\nu$ on $E$. Since $\mu(y_{i})=\mu(y_{j})$ for all
$(i,j)\in L$, it follows that
$\mu(y_{0})\nu(\alpha_{1})\cdots\nu(\alpha_{i})=\mu(y_{0})\nu(\alpha_{1})\cdots\nu(\alpha_{j})$,
which means that $\pi_{i+1}\cdots\pi_{j}$ is a loop for all $(i,j)\in L$.
Moreover, since
$(\mu(y_{i-1}),\mu(y_{i}))\bot_{g_{i},g_{j}}(\mu(y_{j-1}),\mu(y_{j}))$ is
fulfilled for all $(i,j)\in D$, the minimality of $\nu(x_{i})$ and
$\nu(x_{j})$ if $i,j\in P_{E}$ implies that the walks
$(\mu(y_{0})\nu(\alpha_{1}\cdots\alpha_{i-1})g_{i}^{k})_{0\leq
k\leq\nu(x_{i})}$ and
$(\mu(y_{0})\nu(\alpha_{1}\cdots\alpha_{j-1})g_{j}^{\ell})_{0\leq\ell\leq\nu(x_{j})}$
are disjoint. These walks are also disjoint if $i\in Q_{E}$ or $j\in Q_{E}$ by
setting $\nu(x_{i}):=1$ or $\nu(x_{j}):=1$. Therefore, $\pi_{i}$ and $\pi_{j}$
are disjoint for all $(i,j)\in D$. Thus, $\nu\in\mathsf{sol}_{G}(E,L,D)$.
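The iterative search for minimal exponents in this proof can be sketched as follows (a minimal sketch, ours: `mul` and `equal` are hypothetical stand-ins for multiplication and equality in $G$; equality is decidable here since the word problem reduces to $\mathsf{SAT}^{\pm}(G)$):

```python
# Sketch: recover the minimal k with  u * g^k == v,  which exists whenever
# u -g->* v holds (guaranteed because mu satisfies the formula).
def minimal_exponent(u, v, g, mul, equal):
    k, cur = 0, u          # invariant: cur == u * g^k
    while not equal(cur, v):
        cur = mul(cur, g)
        k += 1
    return k
```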
The reduction from $\mathsf{SAT}^{\pm}(G)$ to $\mathsf{KP}^{\pm}(G)$ is
established by the next lemma.
###### Lemma A.3.
For any finitely generated group $G$ it holds that if $\mathsf{KP}^{\pm}(G)$
is decidable, then $\mathsf{SAT}^{\pm}(G)$ is decidable as well.
###### Proof A.4.
Let $\varphi:=\exists y_{1},\dots,y_{n}\psi\in\mathcal{F}^{\pm}$ be a formula
in prenex normal form where $\psi$ is quantifier-free with variables
$Y=\\{y_{1},\dots,y_{n}\\}$. If we replace the atoms of $\psi$ by variables
and regard the resulting formula as a formula in propositional logic, we can
compute all satisfying assignments $\mu_{1},\dots,\mu_{m}$ by trying all
combinations of truth assignments of the variables. Then we can write
$\varphi\equiv\bigvee_{i=1}^{m}\exists
y_{1},\dots,y_{n}\bigwedge_{j=1}^{c_{i}}a_{i,j}$
where $a_{i,1},\dots,a_{i,c_{i}}$ are the atoms of $\psi$ that are set to true
in $\mu_{i}$ for all $i\in[1,m]$. We consider each disjunct separately and
write it as
$\exists y_{1},\dots,y_{n}\bigwedge_{j=1}^{c}a_{j}.$
We replace all atoms of the form $a_{j}=(g_{1},g_{2})\bot_{g,h}(h_{1},h_{2})$
by the conjunction
$g_{1}\xrightarrow{g}\mathrel{\vphantom{\to}{}^{*}}g_{2}\wedge
h_{1}\xrightarrow{h}\mathrel{\vphantom{\to}{}^{*}}h_{2}$ and write the
resulting formula as
$\exists y_{1},\dots,y_{n}\bigwedge_{j=1}^{c^{\prime}}b_{j}.$
Furthermore, we define the set
$B:=\\{((g_{1},g_{2}),(h_{1},h_{2}))\mid\exists j\in[1,c]\colon
a_{j}=(g_{1},g_{2})\bot_{g,h}(h_{1},h_{2})\\}.$
Let $b_{j}=s_{j}\xrightarrow{t_{j}}e_{j}$ or
$b_{j}=s_{j}\xrightarrow{t_{j}}\mathrel{\vphantom{\to}{}^{*}}e_{j}$ with
$s_{j},e_{j}\in Y$ and $t_{j}\in G$ for all $j\in[1,c^{\prime}]$. Without loss
of generality we assume that for all $j,k\in[1,c^{\prime}]$ it holds that
$s_{j}\neq s_{k}$ or $e_{j}\neq e_{k}$. We define the graph
$\Gamma:=(Y,\mathcal{E}^{1},\mathcal{E}^{\ast},t)$ with vertices $Y$, two
sorts of edges
$\mathcal{E}^{1}:=\\{(s_{j},e_{j})\mid j\in[1,c^{\prime}]\wedge
b_{j}=s_{j}\xrightarrow{t_{j}}e_{j}\\}$
and
$\mathcal{E}^{\ast}:=\\{(s_{j},e_{j})\mid j\in[1,c^{\prime}]\wedge
b_{j}=s_{j}\xrightarrow{t_{j}}\mathrel{\vphantom{\to}{}^{*}}e_{j}\\}$
and edge labeling
$t\colon\mathcal{E}:=\mathcal{E}^{1}\cup\mathcal{E}^{\ast}\to G$ such that
$t(s_{j},e_{j}):=t_{j}$ for all $j\in[1,c^{\prime}]$. For any subset of edges
$\mathcal{S}\subseteq\mathcal{E}$ we write
$\mathcal{S}^{-1}:=\\{(v,u)\mid(u,v)\in\mathcal{S}\\}$
to denote the set of reverse edges and $\mathcal{S}^{\pm
1}:=\mathcal{S}\cup\mathcal{S}^{-1}$.
Let $C\subseteq Y$ be an undirected connected component of $\Gamma$ and $u\in
C$. We interpret $u$ as initial vertex and represent all other vertices in $C$
by a path starting with $u$. Consider an edge $(v,w)\in\mathcal{E}\cap C^{2}$
that lies in the connected component $C$. We choose an undirected path from
$u$ to $v$ and denote it by a tuple $(p_{1},\dots,p_{\ell})$ with
$p_{k}\in\mathcal{E}^{\pm 1}$ for all $k\in[1,\ell]$. We now define a knapsack
expression that follows the path and the edge $(v,w)$ to reach $w$ and then
goes back to $u$. For all $k\in[1,\ell]$ we define
$\alpha_{k}:=\begin{cases}t(p_{k})^{x_{k}},&\text{if
}p_{k}\in{\mathcal{E}^{\ast}}^{\pm 1}\\\
t(p_{k}),&\text{otherwise}\end{cases}$
where we extend the edge labeling to reverse edges by setting
$t(p_{k}):=\begin{cases}t(p_{k}),&\text{if }p_{k}\in\mathcal{E}\\\
t(p_{k}^{-1})^{-1},&\text{otherwise.}\end{cases}$
To follow the edge $(v,w)$ we let
$\alpha_{\ell+1}:=\begin{cases}t(v,w)^{x_{\ell+1}},&\text{if
}(v,w)\in\mathcal{E}^{\ast}\\\ t(v,w),&\text{otherwise.}\end{cases}$
To walk back to $u$ we define
$\alpha_{\ell+2}:=\begin{cases}(t(v,w)^{-1})^{x_{\ell+2}},&\text{if
}(v,w)\in\mathcal{E}^{\ast}\\\ t(v,w)^{-1},&\text{otherwise}\end{cases}$
and
$\alpha_{\ell+2+k}:=\begin{cases}(t(p_{\ell+1-k})^{-1})^{x_{\ell+2+k}},&\text{if
}p_{\ell+1-k}\in{\mathcal{E}^{\ast}}^{\pm 1}\\\
t(p_{\ell+1-k})^{-1},&\text{otherwise}\end{cases}$
for all $k\in[1,\ell]$. We then define the knapsack expression
$E_{v,w}:=\alpha_{1}\cdots\alpha_{2\ell+2}$ and loop constraint
$L_{v,w}:=\\{(0,2\ell+2)\\}$. If we do this for every edge lying in $C$ we
obtain the knapsack expression
$E_{C}:=\prod_{(v,w)\in\mathcal{E}\cap C^{2}}E_{v,w}$
where we make the indices continuous.
Let $\ell_{v,w}$ be the adjusted index $\ell$ in $E_{v,w}$ for all
$(v,w)\in\mathcal{E}\cap C^{2}$. For every $v\in C$ we define the set of
indices
$I_{v}:=\\{\ell_{v,w}\mid(v,w)\in\mathcal{E}\\}\cup\\{\ell_{w,v}+1\mid(w,v)\in\mathcal{E}\\}.$
We write $I_{v}=\\{\ell_{1},\dots,\ell_{r}\\}$ with $\ell_{1}<\dots<\ell_{r}$
and let $L_{v}:=\\{(\ell_{k},\ell_{k+1})\mid 1\leq k<r\\}$. Intuitively, the
loop constraints in $L_{v}$ ensure that all edges incident to $v$ start or end
at the same point. We can now define the set of loop constraints
$L_{C}:=\bigcup_{(v,w)\in\mathcal{E}\cap C^{2}}L_{v,w}\cup\bigcup_{v\in
C}L_{v}$
where we adjust the indices in $L_{v,w}$ properly.
If we do this for all undirected connected components $C_{1},\dots,C_{s}$ of
$\Gamma$ that have size greater than one, we obtain the
$\mathsf{KP}^{+}(G)$-instance
$(E:=E_{C_{1}}\cdots E_{C_{s}},L:=L_{C_{1}}\cup\dots\cup L_{C_{s}})$
where we adjust the indices and $\ell_{v,w}$ properly. We define the
corresponding disjointness constraints
$D:=\\{(\ell_{g_{1},g_{2}}+1,\ell_{h_{1},h_{2}}+1)\mid((g_{1},g_{2}),(h_{1},h_{2}))\in
B\\}.$
Let $(E_{1},L_{1},D_{1}),\dots,(E_{m},L_{m},D_{m})$ be the resulting
$\mathsf{KP}^{\pm}(G)$-instances for all disjuncts. We claim that $\varphi$ is
satisfiable if and only if
$\bigcup_{i=1}^{m}\mathsf{sol}_{G}(E_{i},L_{i},D_{i})\neq\emptyset$.
For the first direction let
$\varphi_{i}(y_{1},\dots,y_{n}):=\bigwedge_{j=1}^{c_{i}}a_{i,j}$ and assume
that $\mu\colon Y\to G$ is a satisfying assignment of $\varphi_{i}$ for some
$i\in[1,m]$. We write $E_{i}=\alpha_{1}\cdots\alpha_{d}$ and by definition
every power $\alpha_{j}$ with $j\in P_{E_{i}}$ has base $t(y_{k},y_{\ell})$
for some $k,\ell\in[1,n]$ with $(y_{k},y_{\ell})\in{\mathcal{E}^{\ast}}^{\pm
1}$. We define the valuation $\nu\in\mathbb{N}^{X}$ such that for all $j\in
P_{E_{i}}$ where $\alpha_{j}$ has base $t(y_{k},y_{\ell})$ for some
$k,\ell\in[1,n]$ with $(y_{k},y_{\ell})\in{\mathcal{E}^{\ast}}^{\pm 1}$ it
holds that $\mu(y_{k})t(y_{k},y_{\ell})^{\nu(x_{j})}=\mu(y_{\ell})$ and
$\nu(x_{j})$ is minimal with this property. Note that $\nu$ can be computed by
trying all values iteratively since the construction of $E_{i}$ implies that
$\mu(y_{k})\xrightarrow{t(y_{k},y_{\ell})}\mathrel{\vphantom{\to}{}^{*}}\mu(y_{\ell})$
is fulfilled for every power $\alpha_{j}$ with base $t(y_{k},y_{\ell})$ where
$(y_{k},y_{\ell})\in{\mathcal{E}^{\ast}}^{\pm 1}$ as
$\mu\models\varphi_{i}(y_{1},\dots,y_{n})$. It follows that $\nu(E_{i})=1$ and
$\nu$ fulfills the loop constraints in $L_{i}$ since variables of powers with
equal or inverse base are set to the same value. Let
$\pi_{\nu,E_{i}}=\pi_{1}\cdots\pi_{d}$ be the factorized walk induced by $\nu$
on $E_{i}$. Since
$(\mu(g_{1}),\mu(g_{2}))\bot_{t(g_{1},g_{2}),t(h_{1},h_{2})}(\mu(h_{1}),\mu(h_{2}))$
is fulfilled for all $((g_{1},g_{2}),(h_{1},h_{2}))\in B$ and
$\gamma(\alpha_{\ell_{g_{1},g_{2}}+1})=t(g_{1},g_{2})$ and
$\gamma(\alpha_{\ell_{h_{1},h_{2}}+1})=t(h_{1},h_{2})$, the minimality of
$\nu(x_{\ell_{g_{1},g_{2}}+1})$ and $\nu(x_{\ell_{h_{1},h_{2}}+1})$, where we
set $\nu(x_{j}):=1$ if $j\in Q_{E_{i}}$, implies that
$\pi_{\ell_{g_{1},g_{2}}+1}$ and $\pi_{\ell_{h_{1},h_{2}}+1}$ are disjoint.
Thus, $\nu\in\mathsf{sol}_{G}(E_{i},L_{i},D_{i})$.
For the other direction we assume that
$\nu\in\mathsf{sol}_{G}(E_{i},L_{i},D_{i})$ for some $i\in[1,m]$. Let
$E_{i}=\alpha_{1}\cdots\alpha_{d}$ and
$\varphi_{i}(y_{1},\dots,y_{n}):=\bigwedge_{j=1}^{c_{i}}a_{i,j}$. To show that
$\varphi_{i}(y_{1},\dots,y_{n})$ is satisfiable, we define the assignment
$\mu\colon Y\to G$ such that for all $v\in Y$ with $v$ incident to an edge of
$\Gamma$ it holds that
$\mu(v):=\begin{cases}\prod_{k=1}^{\ell_{v,w}}\nu(\alpha_{k}),&\text{if
}(v,w)\in\mathcal{E}\text{ for some }w\in Y\\\
\prod_{k=1}^{\ell_{w,v}+1}\nu(\alpha_{k}),&\text{if }(w,v)\in\mathcal{E}\text{
for some }w\in Y\end{cases}$
where $\ell_{v,w}$ is the adjusted index in $E_{i}$. For every $v\in Y$ that
is not incident to any edge of $\Gamma$ we set $\mu(v)$ to an arbitrary value
of $G$. The loop constraints in $L_{i}$ ensure that the assignment $\mu$ is
well-defined. By definition of $E_{i}$ it follows that $\mu\models b_{j}$ for
all $j\in[1,c^{\prime}]$. Since we add for all $j\in[1,c_{i}]$ with
$a_{i,j}=(g_{1},g_{2})\bot_{g,h}(h_{1},h_{2})$ the atoms
$b_{k}=g_{1}\xrightarrow{g}\mathrel{\vphantom{\to}{}^{*}}g_{2}$ and
$b_{\ell}=h_{1}\xrightarrow{h}\mathrel{\vphantom{\to}{}^{*}}h_{2}$ for some
$k,\ell\in[1,c^{\prime}]$, the disjointness constraints in $D_{i}$ imply that
$\mu\models a_{i,j}$ for all $j\in[1,c_{i}]$. Thus,
$\mu\models\varphi_{i}(y_{1},\dots,y_{n})$.
We show next that $\mathsf{SAT}^{+}(G)$ is effectively equivalent to
$\mathsf{KP}^{+}(G)$ for any finitely generated group $G$. The first direction
is shown in the following lemma:
###### Lemma A.5.
For any finitely generated group $G$ it holds that if $\mathsf{SAT}^{+}(G)$ is
decidable, then $\mathsf{KP}^{+}(G)$ is decidable as well.
###### Proof A.6.
We can copy the proof of Lemma A.1 by setting $D:=\emptyset$.
The reduction from $\mathsf{SAT}^{+}(G)$ to $\mathsf{KP}^{+}(G)$ is
established by the next lemma.
###### Lemma A.7.
For any finitely generated group $G$ it holds that if $\mathsf{KP}^{+}(G)$ is
decidable, then $\mathsf{SAT}^{+}(G)$ is decidable as well.
###### Proof A.8.
We can copy the proof of Lemma A.3 by assuming that
$\varphi\in\mathcal{F}^{+}$.
### A.2 Proof of Corollary 3.4
Although we will not refer to this definition in the proof, we include a
definition of nilpotent groups for completeness. For a group $G$, we define
its _lower central series_ as the subgroups $G_{1},G_{2},\ldots$ with
$G_{1}=G$ and $G_{i+1}=[G_{i},G]$ for $i\geq 1$. Then, $G$ is _nilpotent_ if
there is a number $n\geq 1$ with $G_{n}=\\{1\\}$.
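For example, $H_{3}(\mathbb{Z})$ is nilpotent: with the generators $A,B,C$ from Section 6 we have $[A,B]=C$ and $C$ is central, so its lower central series is $H_{3}(\mathbb{Z})\supseteq\langle C\rangle\supseteq\\{1\\}$.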
###### Proof A.9 (Proof of Corollary 3.4).
Suppose $\mathsf{KP}(G\wr H)$ is decidable. Since $H$ is infinite, we know
that $\mathsf{ExpEq}(G)$ must be decidable [12, Proposition 3.1, Proposition
5.1]. Towards a contradiction, assume that $H$ is not virtually abelian. As a
finitely generated virtually nilpotent group, $H$ contains a finite-index
nilpotent subgroup $K$ that is also torsion-free [17, Theorem 17.2.2]. Since
$H$ is not virtually abelian, $K$ cannot be abelian. Since every non-abelian
torsion-free nilpotent group has $H_{3}(\mathbb{Z})$ as a subgroup (see, for
example, the proof of [16, Theorem 12]), we know that $H_{3}(\mathbb{Z})$ is a
subgroup of $H$. Hence, $\mathsf{KP}(G\wr H)$ is undecidable by Theorem 3.3,
which is a contradiction.
Conversely, suppose $H$ is virtually abelian and $\mathsf{ExpEq}(G)$ is
decidable. Since $H$ is virtually abelian, it is knapsack-semilinear [23,
Theorem 7.1]. Therefore, since $\mathsf{ExpEq}(G)$ is decidable, decidability
of $\mathsf{KP}(G\wr H)$ is shown in [12, Theorem 5.3].
### A.3 Proof of Corollary 3.6
###### Proof A.10 (Proof of Corollary 3.6).
By the Magnus embedding theorem [32, Lemma], the group $F/[N,N]$ embeds in
$\mathbb{Z}^{r}\wr(F/N)$, where $r$ is the rank of $F$. By Theorem 3.1,
decidability of $\mathsf{KP}^{+}(F/N)$ implies decidability of
$\mathsf{KP}(\mathbb{Z}^{r}\wr(F/N))$. Finally, for any $G$,
$\mathsf{KP}^{+}(G)$ is a special case of $\mathsf{ExpEq}(G)$.
## Appendix B Proofs from Section 4
### B.1 The modified intersection knapsack problem
To simplify the constructions in the proofs from Section 4, we use slight
variations of the problems $\mathsf{KP}^{\pm}(H)$ and $\mathsf{KP}^{+}(H)$.
Let $E=\alpha_{1}\cdots\alpha_{n}$ be a knapsack expression over $G$. For
every $i\in[1,n]$ and $\nu\in\mathbb{N}^{X}$ we define
$S_{E}^{\nu}(i):=\\{\nu(\alpha_{1}\cdots\alpha_{i-1})\gamma(\alpha_{i})^{k}\mid
0\leq k\leq\nu(x_{i})-1\\}$
if $i\in P_{E}$ and
$S_{E}^{\nu}(i):=\\{\nu(\alpha_{1}\cdots\alpha_{i-1})\\}$
if $i\in Q_{E}$. Intuitively, $S_{E}^{\nu}(i)$ is the set of points visited by
the ray associated to $\alpha_{i}$ under the valuation $\nu$ where we leave
out the last point.
###### Definition 1.
The modified intersection knapsack problem $\mathsf{MKP}^{\pm}(G)$ over $G$ is
defined as follows:
Given
a knapsack expression $E$ over $G$, a set $L\subseteq[0,n]^{2}$ of loop
constraints, and a set $D\subseteq[1,n]^{2}$ of disjointness constraints.
Question
Is there a valuation $\nu\in\mathbb{N}^{X}$ with factorized walk
$\pi_{\nu,E}=\pi_{1}\dots\pi_{n}$ induced by $\nu$ on $E$ such that the
following conditions are fulfilled:
* •
$\nu(E)=1$
* •
$\pi_{i+1}\dots\pi_{j}$ is a loop for all $(i,j)\in L$
* •
$S_{E}^{\nu}(i)\cap S_{E}^{\nu}(j)=\emptyset$ for all $(i,j)\in D$?
The positive modified intersection knapsack problem $\mathsf{MKP}^{+}(G)$ over
$G$ is the restriction of $\mathsf{MKP}^{\pm}(G)$ to instances where
$D=\emptyset$. As before, let $\mathsf{sol}_{G}(E,L,D)$ (resp.
$\mathsf{sol}_{G}(E,L)$) be the set of solutions of the
$\mathsf{MKP}^{\pm}(G)$-instance $(E,L,D)$ (resp.
$\mathsf{MKP}^{+}(G)$-instance $(E,L)$) over $G$.
Note that the restricted problems $\mathsf{KP}^{+}(G)$ and
$\mathsf{MKP}^{+}(G)$ are identical and the only difference between
$\mathsf{KP}^{\pm}(G)$ and $\mathsf{MKP}^{\pm}(G)$ is that the disjointness
constraints of $\mathsf{MKP}^{\pm}(G)$-instances ignore the last point of
walks. The equivalence of $\mathsf{KP}^{\pm}(G)$ and $\mathsf{MKP}^{\pm}(G)$
is established by the following lemma:
###### Lemma B.1.
For any finitely generated group $G$ we have that $\mathsf{KP}^{\pm}(G)$ and
$\mathsf{MKP}^{\pm}(G)$ are effectively equivalent.
###### Proof B.2.
We first reduce $\mathsf{KP}^{\pm}(G)$ to $\mathsf{MKP}^{\pm}(G)$. Let
$(E=\alpha_{1}\cdots\alpha_{n},L,D)$ be a $\mathsf{KP}^{\pm}(G)$-instance. We
define the knapsack expression
$E^{\prime}:=\beta_{1}\cdots\beta_{2n}:=\alpha_{1}\cdot 1\cdots\alpha_{n}\cdot
1$
with loop constraints $L^{\prime}:=\\{(2i-1,2j-1)\mid(i,j)\in L\\}$ and
disjointness constraints
$D^{\prime}:=\bigcup_{(i,j)\in
D}\\{(2i-1,2j-1),(2i-1,2j),(2i,2j-1),(2i,2j)\\}.$
We regard $(E^{\prime},L^{\prime},D^{\prime})$ as
$\mathsf{MKP}^{\pm}(G)$-instance. Note that with the added 1’s we can ensure
that $D^{\prime}$ considers also the last points of the disjointness
constraints defined in $D$.
We show that
$\mathsf{sol}_{G}(E,L,D)=\mathsf{sol}_{G}(E^{\prime},L^{\prime},D^{\prime})$.
Let $\nu\in\mathbb{N}^{X}$ be a valuation and let
$\pi_{\nu,E}=\pi_{1}\cdots\pi_{n}$ be the factorized walk induced by $\nu$ on
$E$. Clearly, it holds that $\nu(E)=1$ if and only if $\nu(E^{\prime})=1$ and
$\nu$ fulfills the loop constraints in $L$ if and only if it fulfills the loop
constraints in $L^{\prime}$. We now consider the disjointness constraints. Let
$g_{i}:=\gamma(\alpha_{i})$ for all $i\in[1,n]$ and $\nu(x_{i}):=1$ if $i\in
Q_{E}$. For all $(i,j)\in D$ we have that $\pi_{i}$ and $\pi_{j}$ are disjoint
if and only if
$\\{\nu(\alpha_{1}\cdots\alpha_{i-1})g_{i}^{k}\mid 0\leq
k\leq\nu(x_{i})\\}\cap\\{\nu(\alpha_{1}\cdots\alpha_{j-1})g_{j}^{k}\mid 0\leq
k\leq\nu(x_{j})\\}=\emptyset$
which holds if and only if
$\\{\nu(\alpha_{1}\cdots\alpha_{i-1})g_{i}^{k}\mid 0\leq
k\leq\nu(x_{i})-1\\}\text{ and
}\\{\nu(\alpha_{1}\cdots\alpha_{i-1})g_{i}^{\nu(x_{i})}\\}$
are disjoint to
$\\{\nu(\alpha_{1}\cdots\alpha_{j-1})g_{j}^{k}\mid 0\leq
k\leq\nu(x_{j})\\}\text{ and
}\\{\nu(\alpha_{1}\cdots\alpha_{j-1})g_{j}^{\nu(x_{j})}\\}$
which in turn holds if and only if $S_{E^{\prime}}^{\nu}(2i-1)$ and
$S_{E^{\prime}}^{\nu}(2i)$ are disjoint to $S_{E^{\prime}}^{\nu}(2j-1)$ and
$S_{E^{\prime}}^{\nu}(2j)$. Thus, $\nu\in\mathsf{sol}_{G}(E,L,D)$ if and only
if $\nu\in\mathsf{sol}_{G}(E^{\prime},L^{\prime},D^{\prime})$.
We now reduce $\mathsf{MKP}^{\pm}(G)$ to $\mathsf{KP}^{\pm}(G)$. Let
$(E=\alpha_{1}\cdots\alpha_{n},L,D)$ be an $\mathsf{MKP}^{\pm}(G)$-instance
and $g_{i}:=\gamma(\alpha_{i})$ for all $i\in[1,n]$. Let $P\subseteq P_{E}$ be
a set of powers whose variables will be set to 0. For all $j\in[1,n]$ we
replace
$\alpha_{j}\text{ by
}\begin{cases}g_{j}^{y_{i_{j,1}}}g_{j}=:\beta_{i_{j,1}}\beta_{i_{j,2}},&\text{if
}j\in P_{E}\setminus P\\\ 1=:\beta_{i_{j,2}},&\text{if }j\in P\\\ 1\cdot
g_{j}=:\beta_{i_{j,1}}\beta_{i_{j,2}},&\text{if }j\in Q_{E}\end{cases}$
to get the knapsack expression $E_{P}$ and we write
$E_{P}=\beta_{1}\cdots\beta_{r}$ with variables in
$Y:=\\{y_{1},\dots,y_{r}\\}$ by making indices continuous where we adjust
$i_{j,1}$ and $i_{j,2}$ accordingly. We define the loop constraints
$L_{P}:=\\{(i_{j,2},i_{k,2})\mid(j,k)\in L\\}$ and the disjointness
constraints
$D_{P}:=\\{(i_{j,1},i_{k,1})\mid(j,k)\in D\wedge j,k\notin P\\}.$
We interpret $(E_{P},L_{P},D_{P})$ as $\mathsf{KP}^{\pm}(G)$-instance. The
idea is to split progressions at the last point such that the
$\mathsf{KP}^{\pm}(G)$-instance ignores this point. The splitting is not
possible if the variable is set to 0. Thus, we need to guess the the set of
powers $P$ whose variables are set to 0 beforehand. It remains to show that
$\mathsf{sol}_{G}(E,L,D)\neq\emptyset$ if and only if $\bigcup_{P\in
P_{E}}\mathsf{sol}_{G}(E_{P},L_{P},D_{P})\neq\emptyset$.
For the first direction let $\nu\in\mathsf{sol}_{G}(E,L,D)$. We define
$P:=\\{i\in P_{E}\mid\nu(x_{i})=0\\}$ and the valuation
$\nu_{P}\in\mathbb{N}^{Y}$ such that $\nu_{P}(y_{i_{j,1}}):=\nu(x_{j})-1$ for
all $j\in P_{E}\setminus P$. Let $\pi_{\nu_{P},E_{P}}=\pi_{1}\cdots\pi_{r}$ be
the factorized walk induced by $\nu_{P}$ on $E_{P}$. By definition of $E_{P}$
it clearly holds that $\nu_{P}(E_{P})=1$ and $\nu_{P}$ fulfills all loop
constraints in $L_{P}$. We now consider the disjointness constraints. Let
$\nu_{P}(y_{i_{j,1}}):=1$ for all $j\in Q_{E}$ and $h_{i}:=\gamma(\beta_{i})$
for all $i\in[1,r]$. For every $(j,k)\in D$ with $j,k\notin P$ we have that
$S_{E}^{\nu}(j)\cap S_{E}^{\nu}(k)=\emptyset$. Therefore, it holds that
$\\{\nu_{P}(\beta_{1}\cdots\beta_{i_{j,1}-1})h_{i_{j,1}}^{\ell}\mid
0\leq\ell\leq\nu_{P}(y_{i_{j,1}})\\}\cap\\{\nu_{P}(\beta_{1}\cdots\beta_{i_{k,1}-1})h_{i_{k,1}}^{\ell}\mid
0\leq\ell\leq\nu_{P}(y_{i_{k,1}})\\}=\emptyset$
which implies that $\pi_{i_{j,1}}$ and $\pi_{i_{k,1}}$ are disjoint. Thus,
$\nu_{P}\in\mathsf{sol}_{G}(E_{P},L_{P},D_{P})$.
For the other direction let $\nu_{P}\in\mathsf{sol}_{G}(E_{P},L_{P},D_{P})$
for some $P\subseteq P_{E}$. We define the valuation $\nu\in\mathbb{N}^{X}$
such that
$\nu(x_{j}):=\begin{cases}\nu_{P}(y_{i_{j,1}})+1,&\text{if }j\in
P_{E}\setminus P\\\ 0,&\text{if }j\in P\end{cases}$
for all $j\in P_{E}$. Clearly, it holds that $\nu(E)=1$ and $\nu$ fulfills all
loop constraints in $L$. Let $\nu(x_{i}):=1$ for all $i\in Q_{E}$ and
$\pi_{\nu_{P},E_{P}}=\pi_{1}\cdots\pi_{r}$ be the factorized walk induced by
$\nu_{P}$ on $E_{P}$. For every $(j,k)\in D$ with $j,k\notin P$ we have that
$\pi_{i_{j,1}}$ and $\pi_{i_{k,1}}$ are disjoint. Therefore, it holds that
$\\{\nu(\alpha_{1}\cdots\alpha_{j-1})g_{j}^{\ell}\mid
0\leq\ell\leq\nu(x_{j})-1\\}\cap\\{\nu(\alpha_{1}\cdots\alpha_{k-1})g_{k}^{\ell}\mid
0\leq\ell\leq\nu(x_{k})-1\\}=\emptyset$
which implies that $S_{E}^{\nu}(j)\cap S_{E}^{\nu}(k)=\emptyset$. For
$(j,k)\in D$ with $j\in P$ or $k\in P$ it holds that $\nu(x_{j})=0$ or
$\nu(x_{k})=0$ and therefore $S_{E}^{\nu}(j)=\emptyset$ or
$S_{E}^{\nu}(k)=\emptyset$ which implies that $S_{E}^{\nu}(j)\cap
S_{E}^{\nu}(k)=\emptyset$. Thus, $\nu\in\mathsf{sol}_{G}(E,L,D)$.
### B.2 Proof of Theorem 4.1
Let $P$ and $P^{\prime}$ be two potentially equal decision problems defined so
far. Let $S=\\{I_{1},\dots,I_{s}\\}$ be a finite set of instances of $P$ and
$S^{\prime}=\\{I_{1}^{\prime},\dots,I_{t}^{\prime}\\}$ be a finite set of
instances of $P^{\prime}$. We say that $S$ is equivalent to $S^{\prime}$ if
$\bigcup_{i=1}^{s}\mathsf{sol}_{P}(I_{i})\neq\emptyset$ if and only if
$\bigcup_{i=1}^{t}\mathsf{sol}_{P^{\prime}}(I_{i}^{\prime})\neq\emptyset$.
Here, $\mathsf{sol}_{P}$ and $\mathsf{sol}_{P^{\prime}}$ denote the set of
solutions of an instance of the respective problem. We also use this notion of
equivalence directly on instances, viewing them as singleton sets.
We say that a knapsack expression $E=\alpha_{1}\cdots\alpha_{n}$ is torsion-
free if for all $i\in P_{E}$ it holds that $\sigma(\gamma(\alpha_{i}))=1$ or
$\sigma(\gamma(\alpha_{i}))$ has infinite order.
###### Lemma B.3.
For any knapsack expression one can effectively construct an equivalent finite
set of torsion-free knapsack expressions.
###### Proof B.4.
We use the ideas of the proof of Lemma 7.1 from [24]. First note that by
conjugation we can eliminate constants in a knapsack expression $E$ and assume
that $E=g_{1}^{x_{1}}\cdots g_{n}^{x_{n}}g$ where $g_{1},\dots,g_{n},g\in G\wr
H$. Let $i\in\\{1,\dots,n\\}$ such that $\sigma(g_{i})\neq 1$ and
$\mathsf{ord}(\sigma(g_{i}))=q<\infty$. Since $\mathsf{KP}(H)$ is decidable, we
can compute $q$ as follows: We first check whether
$\sigma(g_{i})^{x}\sigma(g_{i})=1$ has a solution; if so, we try every value
for $x$ starting from 0 until we find a solution, which is then $q-1$.
We then construct the expression
$E_{r}^{\prime\prime}=g_{1}^{x_{1}}\cdots
g_{i-1}^{x_{i-1}}(g_{i}^{q})^{x_{i}}g_{i}^{r}g_{i+1}^{x_{i+1}}\cdots
g_{n}^{x_{n}}g$
and from that the knapsack expression
$E_{r}^{\prime}=g_{1}^{x_{1}}\cdots
g_{i-1}^{x_{i-1}}(g_{i}^{q})^{x_{i}}(g_{i}^{r}g_{i+1}g_{i}^{-r})^{x_{i+1}}\cdots(g_{i}^{r}g_{n}g_{i}^{-r})^{x_{n}}g_{i}^{r}g$
for all $r\in[0,q-1]$. The idea is to write each exponent as a multiple of the
order of the base plus a remainder. We then shift the constant factor
corresponding to the remainder to the end of the expression via conjugation. Note that
$E_{r}^{\prime}$ has one non-trivial torsion element less than $E$ since
$\sigma(g_{i}^{q})=1$ and conjugation by $g_{i}^{r}$ does not change the
orders of the elements $g_{i+1},\dots,g_{n}$. Clearly, it holds that
$\mathsf{sol}_{G\wr H}(E_{r}^{\prime\prime})=\mathsf{sol}_{G\wr
H}(E_{r}^{\prime})$ for all $r\in[0,q-1]$.
If $\nu\in\mathbb{N}^{X}$ is a solution of $E$, then for $r:=\nu(x_{i})\text{
mod }q$ we get a solution $\nu^{\prime}\in\mathbb{N}^{X}$ of $E_{r}^{\prime}$
by setting
$\nu^{\prime}(x_{j}):=\begin{cases}s,&\text{if }j=i\\\
\nu(x_{j}),&\text{otherwise}\end{cases}$
for all $j\in[1,n]$ where $\nu(x_{i})=sq+r$. Conversely, if
$\nu^{\prime}\in\mathbb{N}^{X}$ is a solution of $E_{r}^{\prime}$ for some
$r\in[0,q-1]$, then $\nu\in\mathbb{N}^{X}$ with
$\nu(x_{j}):=\begin{cases}q\nu^{\prime}(x_{i})+r,&\text{if }j=i\\\
\nu^{\prime}(x_{j}),&\text{otherwise}\end{cases}$
for all $j\in[1,n]$ is a solution of $E$. Thus, it holds that
$\mathsf{sol}_{G\wr H}(E)\neq\emptyset$ if and only if
$\bigcup_{r=0}^{q-1}\mathsf{sol}_{G\wr H}(E_{r}^{\prime})\neq\emptyset$.
Repeating this process for all $E_{r}^{\prime}$ until we get torsion-free
knapsack expressions $E_{1},\dots,E_{t}$ yields the lemma.
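The order computation used in this proof can be sketched as follows (a minimal sketch, ours: `has_solution` stands in for the $\mathsf{KP}(H)$ oracle deciding solvability of $\sigma(g_{i})^{x}\sigma(g_{i})=1$, and `mul`/`identity` for the group operations; equality tests are decidable since the word problem reduces to $\mathsf{KP}(H)$):

```python
# Sketch: compute q = ord(s) for s = sigma(g_i), given a KP(H) oracle.
def order_of(s, mul, identity, has_solution):
    if not has_solution(s):   # is  s^x * s = 1  solvable? if not: infinite order
        return None
    x, cur = 0, s             # invariant: cur == s^(x+1)
    while cur != identity:
        cur = mul(cur, s)
        x += 1
    return x + 1              # the minimal solution x gives q = x + 1
```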
A knapsack expression $E=\alpha_{1}\cdots\alpha_{n}\alpha_{n+1}$ is in
$GH$-form if for all $i\in P_{E}$ it holds that $\sigma(\gamma(\alpha_{i}))=1$
or $\gamma(\alpha_{i})\in GH$ and for all $i\in Q_{E}\setminus\\{n+1\\}$ it
holds that $\alpha_{i}\in H$.
To do the transformation into $GH$-form, we need an order on the elements in
the support of some atom of $E$. Let $h\in H$ be a torsion-free element. We
define the binary relation $\preceq_{h}$ on $H$ as in [24]. For
$h^{\prime},h^{\prime\prime}\in H$ we write
$h^{\prime}\preceq_{h}h^{\prime\prime}$ if there is a $k\geq 0$ such that
$h^{\prime}=h^{k}h^{\prime\prime}$. Clearly, $\preceq_{h}$ is a partial order
since $h$ is torsion-free. Moreover, since $\mathsf{KP}(H)$ is decidable, we
can decide with a knapsack instance over $H$ whether
$h^{\prime}\preceq_{h}h^{\prime\prime}$.
To multiply elements $a_{i}$ for $i\in I$ in a certain order, we write for a
finite linearly ordered set $(I=\\{i_{1},\dots,i_{m}\\},\leq)$ with
$i_{1}<\dots<i_{m}$ the product $\prod_{j=1}^{m}a_{i_{j}}$ as $\prod_{i\in
I}^{\leq}a_{i}$. The following lemma is shown in [24].
###### Lemma B.5.
Let $g\in G\wr H$ such that $\mathsf{ord}(\sigma(g))=\infty$ and let $h\in H$
and $m\in\mathbb{N}$. Moreover, let
$F=\mathsf{supp}(g)\cap\\{\sigma(g)^{-i}h\mid i\in[0,m-1]\\}$. Then $F$ is
linearly ordered by $\preceq_{\sigma(g)}$ and
$\tau(g^{m})(h)=\prod_{h^{\prime}\in F}^{\preceq_{\sigma(g)}}\tau(g)(h^{\prime}).$
Thus, $\preceq_{\sigma(g)}$ tells us how to evaluate $\tau(g^{m})$ at a
certain element of $H$. We use this to establish the $GH$-form for $E$.
###### Lemma B.6.
For any torsion-free knapsack expression one can effectively construct an
equivalent torsion-free $\mathsf{HKP}^{+}(G\wr H)$-instance in $GH$-form.
###### Proof B.7.
We use the idea of the proof of Lemma 29 from [22]. Let $u\in G\wr H$ with
$\sigma(u)$ torsion-free and for $h\in\mathsf{supp}(u)$ let
$a_{h}:=\tau(u)(h)$. We want to dissect $u^{m}$ such that every element in the
support of $u$ yields a ray. For $h\in\mathsf{supp}(u)$ such a ray visits the
points $\sigma(u)^{k}h$ for all $k\in[0,m-1]$. Note that if
$h_{1},h_{2}\in\mathsf{supp}(u)$ and
$\sigma(u)^{k_{1}}h_{1}=\sigma(u)^{k_{2}}h_{2}$ for some $0\leq k_{1}\leq
k_{2}\leq m-1$, that is, the rays of $h_{1}$ and $h_{2}$ intersect and the ray
of $h_{1}$ visits the intersection points first, then
$h_{1}\preceq_{\sigma(u)}h_{2}$.
We extend the partial order $\preceq_{\sigma(u)}$ to a linear order
$\leq_{\sigma(u)}$ on $\mathsf{supp}(u)$. Then by Lemma B.5 for all
$x\in\mathbb{N}$ it holds that
$u^{x}=\Bigg(\prod_{h\in\mathsf{supp}(u)}^{\leq_{\sigma(u)}}h(a_{h}h^{-1}\sigma(u)h)^{x}h^{-1}\sigma(u)^{-x}\Bigg)\sigma(u)^{x}.$
Note that the part $h(a_{h}h^{-1}\sigma(u)h)^{x}$ writes $a_{h}$ at the points
$\sigma(u)^{k}h$ for $k\in[0,x-1]$ and ends at $\sigma(u)^{x}h$. We then go back with $h^{-1}\sigma(u)^{-x}$
to the beginning which is the starting point for the next element in
$\mathsf{supp}(u)$. Finally, we walk with $\sigma(u)^{x}$ to the end of the
progression since also the last factor of the product walks back to the
beginning. Since we cannot use the variable $x$ multiple times in a knapsack
expression, we need loop constraints to ensure that we walk back and forth by
the same distance.
Let $\mathsf{supp}(u)=\\{h_{1},\dots,h_{\ell}\\}$ such that
$h_{1}\leq_{\sigma(u)}\dots\leq_{\sigma(u)}h_{\ell}$ and $a_{i}:=a_{h_{i}}$.
Then we can construct the following $\mathsf{HKP}^{+}(G\wr H)$-instance:
$\begin{split}&\Bigg{(}\prod_{i=1}^{\ell}h_{i}(a_{i}h_{i}^{-1}\sigma(u)h_{i})^{y_{4i-2}}h_{i}^{-1}(\sigma(u)^{-1})^{y_{4i}}\Bigg{)}\sigma(u)^{y_{4\ell+1}}\\\
=&\Bigg{(}\prod_{i=1}^{\ell}\beta_{4i-3}\beta_{4i-2}\beta_{4i-1}\beta_{4i}\Bigg{)}\beta_{4\ell+1}=:E_{u}\end{split}$
where for all $j\in[1,4\ell+1]$ it holds that $\gamma(\beta_{j})\in GH$ if
$j\in P_{E_{u}}$ and $\beta_{j}\in H$ if $j\in Q_{E_{u}}$. We define the
corresponding loop constraints
$\begin{split}L_{u}:=&\\{(4i-4,4i)\mid 1\leq i\leq\ell\\}\cup\\\
&\\{(4i-1,4(i+1)-1)\mid 1\leq i\leq\ell-1\\}\cup\\\
&\\{(4\ell-1,4\ell+1)\\}.\end{split}$
This means that for any solution $\nu^{\prime}\in\mathsf{sol}_{G\wr
H}(E_{u}g_{u},L_{u})$, for some $g_{u}\in G\wr H$, it must hold that
$\nu^{\prime}(y_{4i-2})=\nu^{\prime}(y_{4i})=\nu^{\prime}(y_{4\ell+1})$ for
all $i\in[1,\ell]$ since $h_{i}^{-1}\sigma(u)h_{i}$ is torsion-free. Thus, for
all $g_{u}\in G\wr H$ we have $\mathsf{sol}_{G\wr
H}(u^{x}g_{u})=\pi_{u^{x}}^{(g_{u})}(\mathsf{sol}_{G\wr H}(E_{u}g_{u},L_{u}))$
where we define the projection $\pi_{u^{x}}^{(g_{u})}$ as
$\pi_{u^{x}}^{(g_{u})}\colon\mathsf{sol}_{G\wr H}(E_{u}g_{u},L_{u})\to\mathsf{sol}_{G\wr H}(u^{x}g_{u}),\qquad\nu^{\prime}\mapsto\big(\nu\colon\\{x\\}\to\mathbb{N},\;x\mapsto\nu^{\prime}(y_{2})\big).$
Moreover, since $\sigma(\gamma(\beta_{4i-2}))=h_{i}^{-1}\sigma(u)h_{i}$ and
$\sigma(u)$ is torsion-free, it follows that $\sigma(\gamma(\beta_{4i-2}))$,
$\sigma(\gamma(\beta_{4i}))$ and $\sigma(\gamma(\beta_{4\ell+1}))$ are
torsion-free as well for all $i\in[1,\ell]$. Therefore, we have that
$\sigma(\gamma(\beta_{j}))$ is torsion-free for all $j\in P_{E_{u}}$. Note
that the factors $\beta_{4i-3}$ and $\beta_{4i-1}$ with $4i-3,4i-1\in
Q_{E_{u}}$ are not torsion-free in general.
We now consider the whole torsion-free knapsack expression
$E=\alpha_{1}\cdots\alpha_{n}\alpha_{n+1}$. By conjugation we can eliminate
the constants in $E$ and assume that $E=g_{1}^{x_{1}}\cdots g_{n}^{x_{n}}g$
with $g_{1},\dots,g_{n},g\in G\wr H$ which is still torsion-free as
conjugation does not change the order of an element. Then we construct the
following $\mathsf{HKP}^{+}(G\wr H)$-instance:
$(E_{g_{1}}\cdots E_{g_{n}}g,L_{g_{1}}\cup\dots\cup L_{g_{n}})$
where we choose continuous indices and variables
$Y=Y_{1}\dot{\cup}\dots\dot{\cup}Y_{n}$ such that $E_{g_{i}}$ has variables
$Y_{i}$. Here we set $E_{g_{i}}:=g_{i}^{x_{i}}$ and $L_{g_{i}}:=\emptyset$ if
$\sigma(g_{i})=1$. Note that since $E$ is torsion-free, we have that all
$\sigma(g_{i})\neq 1$ are torsion-free and therefore $E_{g_{i}}$ is well-
defined. The lemma follows from the following observation:
$\mathsf{sol}_{G\wr H}(E)=\pi(\mathsf{sol}_{G\wr H}(E_{g_{1}}\cdots
E_{g_{n}}g,L_{g_{1}}\cup\dots\cup L_{g_{n}}))$
with the projection $\pi$ defined for $\nu^{\prime}\in\mathsf{sol}_{G\wr
H}(E_{g_{1}}\cdots E_{g_{n}}g,L_{g_{1}}\cup\dots\cup L_{g_{n}})$ as
$\pi(\nu^{\prime})\colon X\to\mathbb{N},\qquad x_{i}\mapsto\begin{cases}\pi_{g_{i}^{x_{i}}}^{(g_{g_{i}})}(\nu^{\prime}|_{Y_{i}})(x_{i}),&\text{if }\sigma(g_{i})\neq 1\\\ \nu^{\prime}(x_{i}),&\text{otherwise}\end{cases}$
where $g_{g_{i}}:=\nu^{\prime}(E_{g_{1}}\cdots
E_{g_{i-1}})^{-1}\nu^{\prime}(E_{g_{i+1}}\cdots E_{g_{n}}g)^{-1}$ and
$\nu^{\prime}|_{Y_{i}}$ denotes the restriction of $\nu^{\prime}$ to $Y_{i}$.
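The dissection can be tested on a toy example. The following is a small sketch (ours, not from the paper): it models $\mathbb{Z}\wr\mathbb{Z}$ as pairs $(f,k)$ of a finitely supported function and a cursor position, and checks the displayed product formula for a $u$ with $\mathsf{supp}(u)=\\{0,1\\}$ and $\sigma(u)=2$; since both factors are abelian here, the conjugates $h^{-1}\sigma(u)h$ collapse to $\sigma(u)$ and the ordering $\leq_{\sigma(u)}$ is immaterial, which keeps the check simple.

```python
# Sketch: Z wr Z as (finitely supported function, cursor); verify that u^x
# equals the product of one ray per support point, followed by sigma(u)^x.
from collections import Counter

def mul(u, v):                          # (f1,k1)*(f2,k2) = (f1 + f2(.-k1), k1+k2)
    f, k = Counter(u[0]), u[1]
    for pos, val in v[0].items():
        f[pos + k] += val
    return (Counter({p: c for p, c in f.items() if c}), k + v[1])

def power(u, x):
    r = (Counter(), 0)
    for _ in range(x):
        r = mul(r, u)
    return r

u = (Counter({0: 1, 1: 5}), 2)          # supp(u) = {0, 1}, sigma(u) = 2
for x in range(6):
    rhs = (Counter(), 0)
    for h in sorted(u[0]):              # rays: h (a_h sigma(u))^x h^-1 sigma(u)^-x
        ray = mul(mul((Counter(), h), power((Counter({0: u[0][h]}), 2), x)),
                  (Counter(), -h - 2 * x))
        rhs = mul(rhs, ray)
    rhs = mul(rhs, (Counter(), 2 * x))  # finally walk to the end
    assert power(u, x) == rhs
print("dissection matches u^x for x = 0..5")
```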
In the next normalization step we deal with commensurable elements. For a
knapsack expression $E=\alpha_{1}\cdots\alpha_{n}$ let us define an
equivalence relation $||$ on the set
$R_{E}=\\{r\in P_{E}\mid\gamma(\alpha_{r})\notin
H\wedge\sigma(\gamma(\alpha_{r}))\neq 1\\}.$
For $r_{1},r_{2}\in R_{E}$ we say $r_{1}||r_{2}$ if
$\sigma(\gamma(\alpha_{r_{1}}))$ and $\sigma(\gamma(\alpha_{r_{2}}))$ are
commensurable. In the following we write $g_{i}:=\gamma(\alpha_{i})$ for all
$i\in[1,n]$.
###### Lemma B.8.
One can compute the $||$-classes for any knapsack expression
$E=\alpha_{1}\cdots\alpha_{n}$.
###### Proof B.9.
First note that $R_{E}$ can be computed since $\mathsf{KP}(H)$ is decidable.
For each pair $(i,j)\in R_{E}^{2}$ check with $\mathsf{KP}(H)$-instances if
$\sigma(g_{i})^{x}\sigma(g_{j})^{y}\sigma(g_{i})=1$ or
$\sigma(g_{i})^{x}(\sigma(g_{j})^{-1})^{y}\sigma(g_{i})=1$ has a solution with
$x,y\in\mathbb{N}$. If so, then there are $a,b\in\mathbb{Z}\setminus\\{0\\}$
such that $\sigma(g_{i})^{a}=\sigma(g_{j})^{b}$ which means that $i$ and $j$
are contained in the same $||$-class. If the instances do not have a solution,
then $i$ and $j$ are in different $||$-classes.
###### Lemma B.10.
For any $||$-class $C$ of a knapsack expression $E=\alpha_{1}\cdots\alpha_{d}$
one can compute natural numbers $e_{c}\neq 0$ for $c\in C$ such that
$\sigma(g_{c_{1}})^{e_{c_{1}}}=\sigma(g_{c_{2}})^{e_{c_{2}}}$ or
$\sigma(g_{c_{1}})^{e_{c_{1}}}=\sigma(g_{c_{2}})^{-e_{c_{2}}}$ for all
$c_{1},c_{2}\in C$.
###### Proof B.11.
Let $C=\\{i_{1},\dots,i_{m}\\}$ be a $||$-class with $i_{1}<\dots<i_{m}$. We
first compute $a_{j},b_{j}\in\mathbb{Z}\setminus\\{0\\}$ with
$\sigma(g_{i_{j}})^{a_{j}}=\sigma(g_{i_{j+1}})^{b_{j}}$ for all $j\in[1,m-1]$.
To this end, we try all values for $x$ and $y$ in $\mathbb{Z}\setminus\\{0\\}$
until we find $a_{j}$ and $b_{j}$. This process terminates since
$\sigma(g_{i_{j}})$ and $\sigma(g_{i_{j+1}})$ are commensurable.
Now we can define integers
$e_{j}:=\prod_{k=1}^{j-1}b_{k}\cdot\prod_{k=j}^{m-1}a_{k}$
for all $j\in[1,m]$. Then for all $j\in[1,m-1]$ it holds that
$\begin{split}\sigma(g_{i_{j}})^{e_{j}}&=\sigma(g_{i_{j}})^{\prod_{k=1}^{j-1}b_{k}\cdot\prod_{k=j}^{m-1}a_{k}}\\\
&=\sigma(g_{i_{j}})^{a_{j}\cdot\prod_{k=1}^{j-1}b_{k}\cdot\prod_{k=j+1}^{m-1}a_{k}}\\\
&=\sigma(g_{i_{j+1}})^{b_{j}\cdot\prod_{k=1}^{j-1}b_{k}\cdot\prod_{k=j+1}^{m-1}a_{k}}\\\
&=\sigma(g_{i_{j+1}})^{e_{j+1}}.\end{split}$
Taking $|e_{j}|$ for all $j\in[1,m]$ yields the lemma.
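The algebra behind these exponents can be checked numerically. Below is a small sketch (ours): commensurable elements are modeled as powers $t^{p_{j}}$ of a common element $t$, so $a_{j},b_{j}$ must satisfy $p_{j}a_{j}=p_{j+1}b_{j}$; the values $p_{j}$ are arbitrary test data.

```python
# Numeric check of Lemma B.10: with sigma(g_{i_j}) modeled as t^(p_j), the
# exponents e_j satisfy p_j * e_j == p_{j+1} * e_{j+1} for all j.
from math import prod, gcd

p = [6, -4, 10, 15]                 # hypothetical exponents over <t>
m = len(p)
a = [p[j + 1] // gcd(p[j], p[j + 1]) for j in range(m - 1)]
b = [p[j] // gcd(p[j], p[j + 1]) for j in range(m - 1)]
e = [prod(b[:j]) * prod(a[j:]) for j in range(m)]   # e_j as defined above
assert all(p[j] * e[j] == p[j + 1] * e[j + 1] for j in range(m - 1))
print([abs(x) for x in e])          # taking |e_j| yields the lemma
```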
###### Lemma B.12.
For any torsion-free $\mathsf{HKP}^{+}(G\wr H)$-instance in $GH$-form one can
effectively construct an equivalent finite set of c-simplified, torsion-free
$\mathsf{HKP}^{+}(G\wr H)$-instances in $GH$-form.
###### Proof B.13.
Let $(E=\alpha_{1}\cdots\alpha_{n}\alpha_{n+1},L)$ be a torsion-free
$\mathsf{HKP}^{+}(G\wr H)$-instance in $GH$-form and write
$g_{i}:=\gamma(\alpha_{i})$ for all $i\in[1,n+1]$. Let $g:=g_{i}$ for some
$i\in P_{E}$ with $g_{i}\notin H$ and $\sigma(g_{i})\neq 1$. This means that
$\sigma(g)$ is torsion-free since $E$ is torsion-free. Let
$e_{g}\in\mathbb{N}\setminus\\{0\\}$ be the exponent from Lemma B.10
corresponding to $g$. Note that $e_{g}$ can be computed since by Lemma B.8 we
can effectively identify the $||$-class of $g$.
We first show that we can establish a slightly weaker property than being
c-simplified: We allow that for two elements $g_{i},g_{j}\notin H$ with
$i,j\in P_{E}$ such that $\sigma(g_{i})$ and $\sigma(g_{j})$ are commensurable
it holds that $\sigma(g_{i})=\sigma(g_{j})$ or
$\sigma(g_{i})=\sigma(g_{j})^{-1}$. To this end, we want to write $g^{x}$ as
$g^{e_{g}y+r}$ for every remainder $r\in[0,e_{g}-1]$ but we have to make sure
that the resulting $\mathsf{HKP}^{+}(G\wr H)$-instances are still in
$GH$-form.
Let us construct the $\mathsf{HKP}^{+}(G\wr H)$-instance
$\begin{split}F^{(g)}:=&(\tau(g)\sigma(g)^{e_{g}})^{y_{1}}\sigma(g)^{-1}\sigma(g)(\sigma(g)^{-e_{g}})^{y_{4}}\sigma(g)\cdots\\\
&(\tau(g)\sigma(g)^{e_{g}})^{y_{5(e_{g}-1)-4}}\sigma(g)^{-1}\sigma(g)(\sigma(g)^{-e_{g}})^{y_{5(e_{g}-1)-1}}\sigma(g)(\tau(g)\sigma(g)^{e_{g}})^{y_{5e_{g}-4}}\sigma(g)^{-1}\sigma(g)\\\
=&\beta_{1}\cdots\beta_{5e_{g}-2}\end{split}$
with loop constraints
$\begin{split}J^{(g)}:=&\\{(5i-5,5i-1)\mid 1\leq i\leq e_{g}-1\\}\cup\\\
&\\{(5i-2,5(i+1)-3)\mid 1\leq i\leq e_{g}-1\\}.\end{split}$
Intuitively, this means that for all valuations $\nu$ we force that
$(\sigma(g)^{e_{g}})^{\nu(y_{5i-4})}\cdot(\sigma(g)^{-e_{g}})^{\nu(y_{5i-1})}=1\qquad\text{and}\qquad(\sigma(g)^{-e_{g}})^{\nu(y_{5i-1})}\cdot\sigma(g)\cdot(\sigma(g)^{e_{g}})^{\nu(y_{5i+1})}\sigma(g)^{-1}=1$
for all $i\in[1,e_{g}-1]$. Since $\sigma(g)$ is torsion-free, this implies
that $\nu(y_{5i-4})=\nu(y_{5i-1})=\nu(y_{5e_{g}-4})$ for all
$i\in[1,e_{g}-1]$. Note that $F^{(g)}$ constitutes the part $g^{e_{g}y}$. The
factor $(\tau(g)\sigma(g)^{e_{g}})^{y_{1}}$ visits powers of $\sigma(g)$ where
the exponents are multiples of $e_{g}$ with offset 0. With
$(\sigma(g)^{-e_{g}})^{y_{4}}$ we walk back to the beginning and set with
$\sigma(g)$ the offset to 1. We then visit powers of $\sigma(g)$ where the
exponents are multiples of $e_{g}$ with offset 1. We do this for every offset
in $[0,e_{g}-1]$ to reach all the points of the progression associated to
$g^{e_{g}y}$. The factors $\sigma(g)^{-1}\sigma(g)$ are only needed to define
the loop constraints.
For the part of the remainder $g^{r}$ we construct the $\mathsf{HKP}^{+}(G\wr
H)$-instance
$\begin{split}G_{r}^{(g)}:=&(\tau(g)\sigma(g)^{e_{g}})^{z_{1}}\sigma(g)^{-e_{g}}\sigma(g)\cdots(\tau(g)\sigma(g)^{e_{g}})^{z_{3r-2}}\sigma(g)^{-e_{g}}\sigma(g)\\\
=&\gamma_{1}\cdots\gamma_{3r}\end{split}$
with loop constraints
$K_{r}^{(g)}:=\\{(3i-3,3i-1)\mid 1\leq i\leq r\\}$
for all $r\in[0,e_{g}-1]$. Again since $\sigma(g)$ is torsion-free, this means
intuitively that for every valuation $\nu$ we force that $\nu(z_{3i-2})=1$ for
all $i\in[1,r]$. The idea of the construction of $G_{r}^{(g)}$ is the same as
for $F^{(g)}$ but we set the exponents $z_{i}$ to 1.
We can now combine the two $\mathsf{HKP}^{+}(G\wr H)$-instances to obtain
$(E_{r}^{(g)}:=F^{(g)}\cdot G_{r}^{(g)},L_{r}^{(g)}:=J^{(g)}\cup K_{r}^{(g)})$
for all $r\in[0,e_{g}-1]$ where we write
$E_{r}^{(g)}=\delta_{1}\cdots\delta_{5e_{g}-2+3r}$ and adjust the loop
constraints in $J^{(g)}$ and $K_{r}^{(g)}$ accordingly.
Let $R_{E}=\\{i_{1},\dots,i_{m}\\}$. If we replace every $g_{i_{j}}$ in $E$ by
$E_{r}^{(g_{i_{j}})}$ and add the loop constraints $L_{r}^{(g_{i_{j}})}$ for
all $j\in[1,m]$, we get the $\mathsf{HKP}^{+}(G\wr H)$-instance
$(E_{r_{1},\dots,r_{m}}:=E_{1}\cdots E_{n}g_{n+1},L_{r_{1},\dots,r_{m}}:=L\cup
L_{r_{1}}^{(g_{i_{1}})}\cup\dots\cup L_{r_{m}}^{(g_{i_{m}})})$
for all $r_{1}\in[0,e_{g_{i_{1}}}-1],\dots,r_{m}\in[0,e_{g_{i_{m}}}-1]$ where
$E_{k}:=\begin{cases}E_{r_{j}}^{(g_{k})},&\text{if }k=i_{j}\text{ for some
}j\in[1,m]\\\ \alpha_{k},&\text{otherwise}\end{cases}$
for all $k\in[1,n]$. To get a well-defined $\mathsf{HKP}^{+}(G\wr
H)$-instance, we write
$E_{r_{1},\dots,r_{m}}=\beta_{1}\cdots\beta_{s}\beta_{s+1}$ with variables in
$Y=\\{y_{1},\dots,y_{s}\\}$ and adjust the loop constraints in
$L_{r_{1},\dots,r_{m}}$ accordingly. By construction any solution $\nu$ of
$(E,L)$ can be transformed into a solution of
$(E_{r_{1},\dots,r_{m}},L_{r_{1},\dots,r_{m}})$ where
$r_{j}:=\nu(x_{i_{j}})\text{ mod }e_{g_{i_{j}}}$ for all $j\in[1,m]$.
Conversely, any solution of $(E_{r_{1},\dots,r_{m}},L_{r_{1},\dots,r_{m}})$
can be transformed into a solution of $(E,L)$. Moreover, note that
$(E_{r_{1},\dots,r_{m}},L_{r_{1},\dots,r_{m}})$ is clearly torsion-free and in
$GH$-form. If we write $u_{i}:=\gamma(\beta_{i})$ for all $i\in[1,s+1]$, then
by the choice of $e_{g}$ for all $i,j\in P_{E_{r_{1},\dots,r_{m}}}$ with
$u_{i},u_{j}\notin H$ and $\sigma(u_{i})$ and $\sigma(u_{j})$ commensurable it
holds that $\sigma(u_{i})=\sigma(u_{j})$ or
$\sigma(u_{i})=\sigma(u_{j})^{-1}$.
Finally, we construct for every
$(E_{r_{1},\dots,r_{m}},L_{r_{1},\dots,r_{m}})$ an $\mathsf{HKP}^{+}(G\wr
H)$-instance that is c-simplified. Let $C=\\{c_{1},\dots,c_{k}\\}$ be a
$||$-class of $E_{r_{1},\dots,r_{m}}$ with $c_{1}<\dots<c_{k}$. Then for all
$i\in[2,k]$ with $\sigma(u_{c_{1}})=\sigma(u_{c_{i}})^{-1}$ we replace
$u_{c_{i}}^{y_{c_{i}}}$ in $E_{r_{1},\dots,r_{m}}$ by an expression of the
form
$\gamma_{1}\cdots\gamma_{7}:=\sigma(u_{c_{i}})^{z_{1}}(\tau(u_{c_{i}})\sigma(u_{c_{1}}))^{z_{2}}\sigma(u_{c_{1}})^{-1}\sigma(u_{c_{1}})\sigma(u_{c_{i}})^{z_{5}}\sigma(u_{c_{i}})^{-1}\sigma(u_{c_{i}})$
and add the corresponding loop constraints $\\{(0,3),(1,6)\\}$ to
$L_{r_{1},\dots,r_{m}}$ by adjusting indices properly. Intuitively, since
$\sigma(u_{c_{1}})=\sigma(u_{c_{i}})^{-1}$ and $\sigma(u_{c_{i}})$ is torsion-free,
for all valuations $\nu$ we force that $\nu(z_{2})=\nu(z_{1})+1$ and
$\nu(z_{5})=\nu(z_{2})+1$. The idea is to walk with
$\sigma(u_{c_{i}})^{z_{1}}$ to the end of the progression associated to
$u_{c_{i}}^{y_{c_{i}}}$ and then place with
$(\tau(u_{c_{i}})\sigma(u_{c_{1}}))^{z_{2}}$ the elements in direction of
$\sigma(u_{c_{1}})$ and walk with $\sigma(u_{c_{i}})^{z_{5}}$ back to the end
of the progression again. After this replacement, the only factors that do not
satisfy the commensurability property are factors in $H$.
Note that all constructed expressions are torsion-free and in $GH$-form. Doing
this for all $||$-classes concludes the proof.
### B.3 Proof of Lemma 4.2
If $(i,h)$ is an address of a knapsack expression
$E=\alpha_{1}\cdots\alpha_{n}$ with $i\in P_{E}$ and
$\sigma(\gamma(\alpha_{i}))\neq 1$ and $\nu\in\mathbb{N}^{X}$ is a valuation,
then
$(\sigma(\nu(\alpha_{1}\cdots\alpha_{i-1}))h(h^{-1}\sigma(\gamma(\alpha_{i}))h)^{j})_{0\leq
j\leq\nu(x_{i})-1}$
is the ray associated to $(i,h)$.
For a non-empty set $S:=\\{E_{1},\dots,E_{m}\\}$ of exponent expressions over
$G$ with variables in $X$ we define the set of solutions by
$\mathsf{sol}_{G}(S):=\bigcap_{i=1}^{m}\mathsf{sol}_{G}(E_{i})$. Since by
assumption $\mathsf{ExpEq}(G)$ is decidable, we have that
$\mathsf{sol}_{G}(S)$ is decidable as well.
Let $(E=\alpha_{1}\cdots\alpha_{n}\alpha_{n+1},L,D)$ be a normalized
$\mathsf{HKP}^{\pm}(G\wr H)$-instance where $\alpha_{i}=g_{i}$ or
$\alpha_{i}=g_{i}^{x_{i}}$ for all $i\in[1,n]$ and $\alpha_{n+1}\in G\wr H$.
Let $I\subseteq[1,n+1]$ be the set of stacking indices. We say that an address
$(i,h)$ is stacking if $i$ is stacking. Let $C\subseteq A_{E}$ be a set which
contains at least one stacking address. We will construct a normalized
$\mathsf{HKP}^{\pm}(G\wr H)$-instance $(E_{C},L_{C},D_{C})$ and a set of
exponent expressions $S_{C}$ over the variable set $\\{x_{i}\mid i\in I\\}$.
Intuitively, $C$ represents an intersection point of rays with the progression
of at least one stacking index. In $(E_{C},L_{C},D_{C})$ the intersection
point is skipped and $S_{C}$ expresses that the elements at this point
multiply to 1.
We will prove that $(E,L,D)$ has a solution if and only if there exists a set
$C\subseteq A_{E}$ which intersects $I\times H$ and a valuation $\nu_{C}$
which satisfies both $(E_{C},L_{C},D_{C})$ and $S_{C}$. Furthermore, we show
that the number of addresses $(i,h)\in I\times H$ decreases from $E$ to
$E_{C}$. Hence, by iterating this procedure we end up with stacking-free
$\mathsf{HKP}^{\pm}(G\wr H)$-instances.
#### Construction
Let $S:=\emptyset$ and let
$C:=\\{(i_{1},h_{1}),\dots,(i_{m},h_{m})\\}\subseteq A_{E}$ with
$i_{1}<\dots<i_{m}$ be a set of addresses which intersects $I\times H$. We
first add for all $i\in I$ with
$\mathsf{supp}(g_{i})=\\{s_{1},\dots,s_{m_{i}}\\}$ the expression
$s_{1}s_{1}^{-1}\cdots
s_{m_{i}}s_{m_{i}}^{-1}=\gamma_{1}\cdots\gamma_{2m_{i}}$
before $\alpha_{i}$, which is needed later to define loop and disjointness
constraints. By adjusting indices we can assume that $E$ is in that form. For all $i\in I$
we define the function $\rho_{i}\colon\mathsf{supp}(g_{i})\to[1,n+1]$ such
that $\rho_{i}(s)$ is the index of the added $s$ before $\alpha_{i}$ for any
$s\in\mathsf{supp}(g_{i})$. We construct the knapsack expression $E_{C}$ from
$E$ by replacing for all $j\in[1,m]$ the atom
$\alpha_{i_{j}}\text{ by
}\begin{cases}g_{i_{j}}^{x_{i_{j,1}}}\sigma(g_{i_{j}})g_{i_{j}}^{x_{i_{j,3}}}=:\beta_{i_{j,1}}\beta_{i_{j,2}}\beta_{i_{j,3}},&\text{if
}i_{j}\notin I,\\\ (f^{\prime},1)^{x_{i_{j,1}}}=:\beta_{i_{j,1}},&\text{if
}{i_{j}}\in I\setminus\\{n+1\\}\text{ and }g_{i_{j}}=(f,1),\\\
(f^{\prime},h)=:\beta_{i_{j,1}},&\text{if ${i_{j}}=n+1$ is stacking and
}g_{i_{j}}=(f,h),\\\ \end{cases}$
where we define
$f^{\prime}(h^{\prime}):=\begin{cases}1,&\text{if }h^{\prime}=h_{j}\\\
f(h^{\prime}),&\text{otherwise}\end{cases}$
for all $h^{\prime}\in H$. We remark that we can easily compute a
representation of the elements $(f^{\prime},1)$ and $(f^{\prime},h)$ as words
over the generators of $G$ and $H$ from the representation of $g_{i}$. By
making indices continuous and adjusting $i_{j,1},i_{j,2},i_{j,3}$ and
$\rho_{i}$ accordingly, we can write
$E_{C}=\beta_{1}\cdots\beta_{r}\beta_{r+1}$
with variables in $Y:=\\{y_{1},\dots,y_{r}\\}$ and $u_{i}:=\gamma(\beta_{i})$
for all $i\in[1,r+1]$.
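To illustrate the replacement, one can model an element $(f,h)$ of $G\wr H$ as a pair of a finitely supported dictionary and an $H$-component; the construction of $f^{\prime}$ then simply deletes one support point. A minimal Python sketch (the names and the dictionary encoding are our own, not part of the reduction):

```python
def erase_point(g, h_j):
    """Given g = (f, h) with f stored as a dict mapping support points
    of H to nontrivial elements of G, return (f', h) where f'(h_j) = 1
    and f' agrees with f elsewhere -- the intersection point h_j is skipped."""
    f, h = g
    f_prime = {p: v for p, v in f.items() if p != h_j}
    return (f_prime, h)

# Example with H-points as strings and G-elements as integers:
g = ({"s1": 2, "s2": 5}, "h0")
assert erase_point(g, "s2") == ({"s1": 2}, "h0")
```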
For $j\in[1,m]$ with $i_{j}$ non-stacking in $E$ we set
$\rho_{i_{j,1}}(1):=i_{j,1}$. Since $E$ is in $GH$-form, we have that
$\mathsf{supp}(g_{i_{j}})=\\{1\\}$ if $i_{j}$ is non-stacking.
We define the loop constraints
$L_{C}:=L\cup\\{(\rho_{i_{j,1}}(h_{j}),\rho_{i_{j+1,1}}(h_{j+1}))\mid
j\in[1,m-1]\\}$
where we adjust the indices in $L$ properly. Intuitively, the loop constraints
ensure that every solution makes every $(i_{j},h_{j})$ reach the intersection
point given by $C$. But after the replacement of $g_{i_{j}}$ these addresses
do not put an element at the intersection point anymore.
Let $\operatorname{id}\colon A_{E}\to[1,n]$ be the map defined by
$\operatorname{id}((i,h)):=i$ for all $(i,h)\in A_{E}$. Let
$\alpha^{\prime}\colon[1,n+1]\setminus\operatorname{id}(C)\to[1,r+1]$ be the
map defined by the adjustment of the indices. Then we define
$\alpha\colon[1,n+1]\to[1,r+1]$ such that
$\alpha(k):=\begin{cases}i_{j,1},&\text{if }k=i_{j}\text{ for some
}j\in[1,m]\\\ \alpha^{\prime}(k),&\text{otherwise}\end{cases}$
for all $k\in[1,n+1]$. To ensure that every address not in $C$ does not reach
the point given by the address $(i_{j},h_{j})\in C\cap(I\times H)$, we define
the disjointness constraints
$\begin{split}D_{C}:=&D^{\prime}\cup\\{(\rho_{\alpha(k)}(h)+1,\rho_{i_{j,1}}(h_{j})+1)\mid(k,h)\in
A_{E}\setminus C\text{ and }k\in I\\}\cup\\\
&\\{(\alpha(k),\rho_{i_{j,1}}(h_{j})+1)\mid(k,h)\in A_{E}\setminus C\text{ and
}k\notin I\\}\end{split}$
where we adjust the indices in $D$ as follows:
$\begin{split}D^{\prime}:=&\\{(i_{j,x},\alpha(\ell))\mid(k,\ell)\in D\wedge
k=i_{j}\text{ for some }j\in[1,m]\text{ with }{i_{j}}\notin I\wedge
x\in[1,3]\\}\cup\\\
&\\{(i_{j,x},i_{j^{\prime},y})\mid(i_{j},i_{j^{\prime}})\in D\text{ for some
}j,j^{\prime}\in[1,m]\text{ with }i_{j},i_{j^{\prime}}\notin I\wedge
x,y\in[1,3]\\}\cup\\\ &\\{(\alpha(k),\alpha(\ell))\mid(k,\ell)\in
D\\}\end{split}$
and assume without loss of generality that if $(k,\ell)\in D$ and either
$k=i_{j}$ or $\ell=i_{j}$ for some $j\in[1,m]$ with ${i_{j}}\notin I$, then we
always have that $k=i_{j}$. Intuitively, if $D$ contains a disjointness
constraint for a ray that is split into parts, then $D^{\prime}$ ensures this
constraint for every such part. Note that $(E_{C},L_{C},D_{C})$ is still
normalized.
We now extend the set of exponent expressions. Let
$a_{j}:=\begin{cases}\tau(g_{i_{j}})(h_{j})^{y_{i_{j,1}}},&\text{if
}\sigma(g_{i_{j}})=1\text{ and }i_{j}\neq n+1\\\
\tau(g_{i_{j}})(h_{j}),&\text{otherwise}\end{cases}$
for all $j\in[1,m]$. Then we define
$S_{C}:=S\cup\Big\\{\prod_{j=1}^{m}a_{j}\Big\\}$
where we replace variables $x_{i}$ in $S$ by $y_{\alpha(i)}$. The additional
exponent expression ensures that the elements written at the point given by
$C$ multiply to 1. Here we only need variables for stacking indices since
elements of non-stacking indices can visit the point at most once.
We repeat this process for $(E_{C},L_{C},D_{C})$ and $S_{C}$ until the
resulting $\mathsf{HKP}^{\pm}(G\wr H)$-instance
$(E^{\prime},L^{\prime},D^{\prime})$ has no stacking addresses left. If for
the corresponding set of exponent expressions $S^{\prime}$ it holds that
$\mathsf{sol}_{G}(S^{\prime})\neq\emptyset$, we construct a stacking-free
$\mathsf{HKP}^{\pm}(G\wr H)$-instance by removing the exponents of powers with
base 1. Let $(E_{1},L_{1},D_{1}),\dots,(E_{t},L_{t},D_{t})$ be the constructed
stacking-free, normalized $\mathsf{HKP}^{\pm}(G\wr H)$-instances for all
possible choices of sets $C$ during the construction. We claim that
$\mathsf{sol}_{G\wr H}(E,L,D)\neq\emptyset$ if and only if
$\bigcup_{i=1}^{t}\mathsf{sol}_{G\wr H}(E_{i},L_{i},D_{i})\neq\emptyset$.
#### Termination
We show that in each step of the construction above the number of stacking
addresses gets strictly smaller. This means that after a finite number of
steps the resulting $\mathsf{HKP}^{\pm}(G\wr H)$-instance has no stacking
addresses left and the construction terminates. For a knapsack expression
$E=\alpha_{1}\cdots\alpha_{n}\alpha_{n+1}$ with $g_{i}:=\gamma(\alpha_{i})$
for all $i\in[1,n+1]$ let
$s(E):=|\\{(i,h)\in A_{E}\mid i\in I\\}|$
be the number of stacking addresses. Let $C\subseteq A_{E}$ be a set of
addresses that contains a stacking address $(i,h)$. During the construction of
$E_{C}$ we replace $g_{i}$ in $E$ by $(f^{\prime},\sigma(g_{i}))$ where
$f^{\prime}(h)=1$. This means that
$|\mathsf{supp}(f^{\prime})|=|\mathsf{supp}(f)|-1$. Thus, it holds that
$s(E_{C})=s(E)-|\\{(i,h)\in C\mid i\in I\\}|.$
Since $C$ contains at least one stacking address, it follows that
$s(E_{C})<s(E)$.
#### Correctness
It remains to show that $(E,L,D)$ has a solution if and only if one of the
constructed $(E_{1},L_{1},D_{1}),\dots,(E_{t},L_{t},D_{t})$ has a solution. We
consider each step of the construction separately. Let
$(E=\alpha_{1}\cdots\alpha_{n}\alpha_{n+1},L,D)$ be a normalized
$\mathsf{HKP}^{\pm}(G\wr H)$-instance with $s(E)\geq 1$ and
$g_{i}:=\gamma(\alpha_{i})$ for all $i\in[1,n+1]$. Let $S$ be a set of
exponent expressions over $G$ with variables in $X$. We assume that $(E,L,D)$
and $S$ are generated during the construction. We show that
$\mathsf{sol}_{G\wr H}(E,L,D)\cap\mathsf{sol}_{G}(S)\neq\emptyset$ if and only
if there exists $C\subseteq A_{E}$ containing a stacking address such that
$\mathsf{sol}_{G\wr
H}(E_{C},L_{C},D_{C})\cap\mathsf{sol}_{G}(S_{C})\neq\emptyset$.
For the first direction assume that $\nu\in\mathsf{sol}_{G\wr
H}(E,L,D)\cap\mathsf{sol}_{G}(S)\neq\emptyset$. As $s(E)\geq 1$, there is an
address $(i,h)\in A_{E}$ of a stacking element $g_{i}$. Let
$h_{C}:=\mathsf{supp}_{E}^{\nu}(\rho_{i}(h)+1)$ be the point visited by
$(i,h)$ under $\nu$ and
$C:=\\{(j,1)\in A_{E}\mid j\notin I\text{ and }h_{C}\in\mathsf{supp}_{E}^{\nu}(j)\\}\cup\\{(j,h^{\prime})\in A_{E}\mid j\in I\text{ and }h_{C}\in\mathsf{supp}_{E}^{\nu}(\rho_{j}(h^{\prime})+1)\\}$
be the set of all addresses reaching $h_{C}$ under $\nu$. We write
$C=\\{(i_{1},h_{1}),\dots,(i_{m},h_{m})\\}$ with $i_{1}<\dots<i_{m}$. Let
$(E_{C},L_{C},D_{C})$ be the $\mathsf{HKP}^{\pm}(G\wr H)$-instance and $S_{C}$
be the set of exponent expressions constructed above from $(E,L,D)$ and $S$
with respect to $C$. We now define a valuation $\nu_{C}\in\mathbb{N}^{Y}$. For
all $j\in[1,m]$ with ${i_{j}}\notin I$ we assign $\nu_{C}(y_{i_{j,1}}):=e$ and
$\nu_{C}(y_{i_{j,3}}):=\nu(x_{i_{j}})-e-1$ where $e\in[0,\nu(x_{i_{j}})-1]$
such that
$\sigma(\nu(\alpha_{1})\cdots\nu(\alpha_{i_{j}-1})g_{i_{j}}^{e})=h_{C}$. For
all $k\in P_{E}$ with $k\in I$ or $k\notin\operatorname{id}(C)$ we assign
$\nu_{C}(y_{\alpha(k)}):=\nu(x_{k})$.
Since $\nu\in\mathsf{sol}_{G\wr H}(E,L,D)$ and the construction only splits up
some rays, we have that $\sigma(\nu_{C}(E_{C}))=1$ and $\nu_{C}$ fulfills all
loop constraints in $L_{C}$ and all disjointness constraints in $D_{C}$ by
definition of $C$. As $\tau(\nu(E))(h^{\prime})=1$ for all $h^{\prime}\in H$
and there is no address of $E_{C}$ visiting the point $h_{C}$ under $\nu_{C}$,
it holds that $\tau(\nu_{C}(E_{C}))(h^{\prime})=1$ for all $h^{\prime}\in H$.
Moreover, from $\prod_{j=1}^{m}a_{j}^{\nu}=1$ with
$a_{j}^{\nu}:=\begin{cases}\tau(g_{i_{j}})(h_{j})^{\nu(x_{i_{j}})},&\text{if
}\sigma(g_{i_{j}})=1\text{ and }i_{j}\neq n+1\\\
\tau(g_{i_{j}})(h_{j}),&\text{otherwise}\end{cases}$
and from $\nu\in\mathsf{sol}_{G}(S)$ it follows that
$\nu_{C}\in\mathsf{sol}_{G}(S_{C})$ since the exponent expressions in $S$ only
contain variables with stacking indices. Thus, it holds that
$\nu_{C}\in\mathsf{sol}_{G\wr
H}(E_{C},L_{C},D_{C})\cap\mathsf{sol}_{G}(S_{C})$.
For the other direction assume that $\nu_{C}\in\mathsf{sol}_{G\wr
H}(E_{C},L_{C},D_{C})\cap\mathsf{sol}_{G}(S_{C})$ for some set of addresses
$C=\\{(i_{1},h_{1}),\dots,(i_{m},h_{m})\\}\subseteq A_{E}$ with
$i_{1}<\dots<i_{m}$ containing a stacking address $(i,h)$. We now define a
valuation $\nu\in\mathbb{N}^{X}$. For all $j\in[1,m]$ with ${i_{j}}\notin I$
we assign $\nu(x_{i_{j}}):=\nu_{C}(y_{i_{j,1}})+\nu_{C}(y_{i_{j,3}})+1$. For
all $k\in P_{E}$ with $k\in I$ or $k\notin\operatorname{id}(C)$ we assign
$\nu(x_{k}):=\nu_{C}(y_{\alpha(k)})$.
Since by construction $S_{C}$ only contains variables with stacking indices
and $\nu_{C}\in\mathsf{sol}_{G}(S_{C})$, we have that
$\nu\in\mathsf{sol}_{G}(S)$ as $S_{C}$ extends $S$ by one expression. The
additional exponent expression in $S_{C}$ ensures that by definition of $\nu$
it holds that $\prod_{j=1}^{m}a_{j}^{\nu}=1$. Therefore, the disjointness
constraints in $D_{C}$ imply that we have $\tau(\nu(E))(h_{C})=1$ where
$h_{C}:=\mathsf{supp}_{E}^{\nu}(\rho_{i}(h)+1)$. By construction of
$(E_{C},L_{C},D_{C})$ it follows that $\tau(\nu(E))(h^{\prime})=1$ for all
$h^{\prime}\in H$. Moreover, since $\sigma(\nu_{C}(E_{C}))=1$, it holds that
$\sigma(\nu(E))=1$ and the definitions of $L_{C}$ and $D_{C}$ imply that $\nu$
fulfills all loop constraints in $L$ and all disjointness constraints in $D$.
Thus, we have that $\nu\in\mathsf{sol}_{G\wr
H}(E,L,D)\cap\mathsf{sol}_{G}(S)$.
This implies that for a normalized $\mathsf{HKP}^{\pm}(G\wr H)$-instance
$(E,L,D)$ and $S=\emptyset$ it holds that $\mathsf{sol}_{G\wr
H}(E,L,D)=\mathsf{sol}_{G\wr H}(E,L,D)\cap\mathsf{sol}_{G}(S)\neq\emptyset$ if
and only if there exist sets of addresses $C_{1},\dots,C_{m}$ such that for
the normalized $\mathsf{HKP}^{\pm}(G\wr H)$-instance
$(E^{\prime},L^{\prime},D^{\prime})$ and the set of exponent expressions
$S^{\prime}$ constructed with respect to $C_{1},\dots,C_{m}$ we have
$s(E^{\prime})=0$ and $\mathsf{sol}_{G\wr
H}(E^{\prime},L^{\prime},D^{\prime})\cap\mathsf{sol}_{G}(S^{\prime})\neq\emptyset$.
Since $s(E^{\prime})=0$ and $S^{\prime}$ only contains variables with stacking
indices, it holds that $\mathsf{sol}_{G\wr
H}(E^{\prime},L^{\prime},D^{\prime})\cap\mathsf{sol}_{G}(S^{\prime})\neq\emptyset$
if and only if $\mathsf{sol}_{G\wr
H}(E^{\prime},L^{\prime},D^{\prime})\neq\emptyset$ and
$\mathsf{sol}_{G}(S^{\prime})\neq\emptyset$. Thus, it suffices to construct
$\mathsf{HKP}^{\pm}(G\wr H)$-instances where the corresponding set of exponent
expressions has a solution. Removing the exponents of powers in $E^{\prime}$
that have base 1 yields a stacking-free, normalized $\mathsf{HKP}^{\pm}(G\wr
H)$-instance that fulfills the claim.
### B.4 Proof of Lemma 4.3
If $E=\alpha_{1}\cdots\alpha_{n}$ is a stacking-free knapsack expression in
$GH$-form, for all addresses $(i,h)\in A_{E}$ it holds that $h=1$. Thus, we
can write $A_{E}=\\{i\in P_{E}\mid\gamma(\alpha_{i})\in GH\setminus H\\}$. In
the following we often view an address $i\in A_{E}$ as the associated ray
$(\sigma(\nu(\alpha_{1}\cdots\alpha_{i-1}))\sigma(\gamma(\alpha_{i}))^{j})_{0\leq
j\leq\nu(x_{i})-1}$
under a valuation $\nu\in\mathbb{N}^{X}$. We say that two rays are parallel if
their periods are commensurable.
For every splitting of rays into subrays and every equivalence relation on
these subrays we first construct an $\mathsf{MKP}^{\pm}(H)$-instance. We then show
that the resulting instances fulfill the claim. Note that by Lemma B.1 the
resulting $\mathsf{MKP}^{\pm}(H)$-instances can be transformed to
$\mathsf{KP}^{\pm}(H)$-instances that prove the lemma.
#### Construction
Let $(E=\alpha_{1}\cdots\alpha_{n}\alpha_{n+1},L,D)$ be a stacking-free,
normalized $\mathsf{HKP}^{\pm}(G\wr H)$-instance with
$g_{i}:=\gamma(\alpha_{i})$ for all $i\in[1,n+1]$. Let
$A_{E}=\\{a_{1},\dots,a_{m}\\}$ be the rays of $E$ with $a_{1}<\dots<a_{m}$.
Note that if we split a ray at the intersection points with other rays, then
every intersection point results in at most two new subrays. As there are
$m-1$ other rays, a ray is split into at most $1+2(m-1)=2m-1$ subrays. Let
$N:=[1,2m-1]^{m}$ and for every $\eta:=(n_{a_{1}},\dots,n_{a_{m}})\in N$ we
define the knapsack expression $E_{\eta}^{\prime}$ by replacing
$g_{a_{i}}^{x_{a_{i}}}$ in $E$ by
$g_{a_{i}}^{y_{1}}\sigma(g_{a_{i}})^{-1}\sigma(g_{a_{i}})\cdots
g_{a_{i}}^{y_{3n_{a_{i}}-2}}\sigma(g_{a_{i}})^{-1}\sigma(g_{a_{i}})=\beta_{1}\cdots\beta_{3n_{a_{i}}}.$
This means we split $g_{a_{i}}^{x_{a_{i}}}$ into $n_{a_{i}}$ parts where the
factors $\sigma(g_{a_{i}})^{-1}\sigma(g_{a_{i}})$ are needed later to define
loop constraints. By making indices continuous, we can write
$E_{\eta}^{\prime}=\beta_{1}\cdots\beta_{r}\beta_{r+1}$
with variables in $Y:=\\{y_{1},\dots,y_{r}\\}$ and $u_{i}:=\gamma(\beta_{i})$
for all $i\in[1,r+1]$. We remark that if $E$ is stacking-free and normalized,
then so is $E_{\eta}^{\prime}$. For every $i\in A_{E}$ and $j\in[1,n_{i}]$ let
$a_{i,j}$ be the index of the $j$-th subray of $i$ in $E_{\eta}^{\prime}$.
Furthermore, let $\alpha\colon[1,n+1]\setminus A_{E}\to[1,r+1]$ be defined by
the adjustment of the indices.
Let $\Theta_{\eta}$ be the set of all equivalence relations on
$A_{E_{\eta}^{\prime}}$. Note that $\Theta_{\eta}$ is finite and can be
computed by dividing the rays of $E_{\eta}^{\prime}$ into equivalence classes.
Then for all $\sim\in\Theta_{\eta}$ we define the loop constraints
$L_{\sim}^{\prime}:=L\cup$ (3)
$\\{(i-1,j-1)\mid i,j\in A_{E_{\eta}^{\prime}}\wedge i<j\wedge i\sim j\\}\cup$ (4)
$\\{(i+1,j+1)\mid i,j\in A_{E_{\eta}^{\prime}}\wedge i<j\wedge i\sim j\\}$ (5)
where we adjust the indices in $L$ properly. For two rays $i$ and $j$ of
$E_{\eta}^{\prime}$ with $i\sim j$ the loop constraint $(i-1,j-1)$ in 4 ensures
that the starting points of $i$ and $j$ are equal. The loop constraint
$(i+1,j+1)$ in 5 ensures that any solution $\nu$ satisfies
$\sigma(\nu(\beta_{1})\cdots\nu(\beta_{i}))\sigma(u_{i})^{-1}=\sigma(\nu(\beta_{1})\cdots\nu(\beta_{j}))\sigma(u_{j})^{-1}$
which means that the endpoints of $i$ and $j$ are equal. Since
$E_{\eta}^{\prime}$ is normalized, this implies that the rays $i$ and $j$ must
be equal. To ensure that the rays in different $\sim$-classes are disjoint, we
define the disjointness constraints
$D_{\sim}^{\prime}:=D^{\prime}\cup\\{(i,j)\mid i,j\in
A_{E_{\eta}^{\prime}}\wedge i<j\wedge i\nsim j\\}$
where we adjust $D$ as follows:
$\begin{split}D^{\prime}:=&\\{(a_{i,j},a_{k,\ell})\mid i,k\in
A_{E}\wedge(i,k)\in D\wedge j\in[1,n_{i}]\wedge\ell\in[1,n_{k}]\\}\cup\\\
&\\{(a_{i,j},\alpha(k))\mid i\in A_{E}\wedge k\in[1,n+1]\setminus
A_{E}\wedge(i,k)\in D\wedge j\in[1,n_{i}]\\}\cup\\\
&\\{(\alpha(i),\alpha(k))\mid i,k\in[1,n+1]\setminus A_{E}\wedge(i,k)\in
D\\}\end{split}$
and assume without loss of generality that if $(i,k)\in D$ with $i\in A_{E}$
or $k\in A_{E}$, then we always have that $i\in A_{E}$. Intuitively, if $D$
contains a disjointness constraint for a ray that is split into parts, then
$D^{\prime}$ ensures this constraint for every such part.
Now by construction it is enough to evaluate $\tau(E_{\eta}^{\prime})$ only
within $\sim$-classes. This means that if there is a $\sim$-class
$C=\\{c_{1},\dots,c_{k}\\}$ with $c_{1}<\dots<c_{k}$ such that
$\prod_{i=1}^{k}\tau(u_{c_{i}})(1)\neq 1$, then we demand that every solution
sets $y_{c_{i}}$ to 0 for all $i\in[1,k]$. To this end, for all such
$\sim$-classes we remove $u_{c_{i}}^{y_{c_{i}}}$ from $E_{\eta}^{\prime}$ for
all $i\in[1,k]$ and adjust $L_{\sim}^{\prime}$ and $D_{\sim}^{\prime}$
properly. Let
$(E_{\eta}=\gamma_{1}\cdots\gamma_{s}\gamma_{s+1},L_{\sim},D_{\sim})$ be the
resulting $\mathsf{HKP}^{\pm}(G\wr H)$-instance with
$v_{i}:=\gamma(\gamma_{i})$ for all $i\in[1,s+1]$. Then
$(\sigma(E_{\eta}),L_{\sim},D_{\sim})$ is an $\mathsf{MKP}^{\pm}(H)$-instance
where we let $\sigma(g^{x}):=\sigma(g)^{x}$ for an atom $g^{x}$. We claim that
$\mathsf{sol}_{G\wr H}(E,L,D)\neq\emptyset\text{ if and only if
}\bigcup_{\eta\in
N\wedge\sim\in\Theta_{\eta}}\mathsf{sol}_{H}(\sigma(E_{\eta}),L_{\sim},D_{\sim})\neq\emptyset.$
#### Correctness
It remains to show that the $\mathsf{HKP}^{\pm}(G\wr H)$-instance $(E,L,D)$
has a solution if and only if there exist $\eta\in N$ and
$\sim\in\Theta_{\eta}$ such that the $\mathsf{MKP}^{\pm}(H)$-instance
$(\sigma(E_{\eta}),L_{\sim},D_{\sim})$ has a solution. For the first direction
we assume that $\nu\in\mathsf{sol}_{G\wr H}(E,L,D)$. For all $i\in P_{E}$ let
$\sigma_{i}\colon[0,\nu(x_{i})-1]\to H$ such that
$\sigma_{i}(e):=\sigma(\nu(\alpha_{1}\cdots\alpha_{i-1})g_{i}^{e})$
for all $e\in[0,\nu(x_{i})-1]$. Furthermore, we define a function $f\colon
H\to\mathcal{P}(A_{E})$ such that
$f(h):=\\{i\in A_{E}\mid\exists e\in[0,\nu(x_{i})-1]\colon\sigma_{i}(e)=h\\}$
for all $h\in H$. This means that $f$ maps a point $h\in H$ to the set of rays
that visit $h$ under the valuation $\nu$.
We now split the rays into subrays to get $\eta\in N$ and
$\sim\in\Theta_{\eta}$ such that $(\sigma(E_{\eta}),L_{\sim},D_{\sim})$ has a
solution. For every $i\in A_{E}$ with $\nu(x_{i})\neq 0$ there is a partition
of $[0,\nu(x_{i})-1]$ into disjoint intervals
$[s_{1}^{(i)},e_{1}^{(i)}],\dots,[s_{n_{i}}^{(i)},e_{n_{i}}^{(i)}]$ such that
for all $j\in[1,n_{i}]$ and $k\in[s_{j}^{(i)},e_{j}^{(i)}]$ it holds that
$f(\sigma_{i}(s_{j}^{(i)}))=f(\sigma_{i}(k))$
and for all $j\in[1,n_{i}-1]$ it holds that
$f(\sigma_{i}(s_{j}^{(i)}))\neq f(\sigma_{i}(s_{j+1}^{(i)})).$
For $i\in A_{E}$ with $\nu(x_{i})=0$ we set $n_{i}:=1$ and
$[s_{1}^{(i)},e_{1}^{(i)}]:=[1,0]=\emptyset$. Intuitively, we split a ray
whenever the intersection with another ray starts or ends.
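This partition can be computed by a single scan that splits whenever the set of visiting rays changes; a minimal Python sketch (our own illustration, assuming the visiting sets $f(\sigma_{i}(k))$ have been precomputed):

```python
def split_ray(visitors):
    """visitors[k] is the set f(sigma_i(k)) of rays visiting the k-th
    point of ray i; returns the maximal intervals [s, e] on which this
    set is constant, i.e. the subrays of i."""
    intervals, start = [], 0
    for k in range(1, len(visitors)):
        if visitors[k] != visitors[start]:
            intervals.append((start, k - 1))
            start = k
    if visitors:
        intervals.append((start, len(visitors) - 1))
    return intervals

# Ray 1 alone, then jointly with ray 2, then alone again: three subrays.
assert split_ray([{1}, {1}, {1, 2}, {1, 2}, {1}]) == [(0, 1), (2, 3), (4, 4)]
```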
We need to show that $n_{i}\leq 2m-1$ for all $i\in A_{E}$. Let $i\in A_{E}$
be a ray and $i\neq j\in A_{E}$ be one of the $m-1$ other rays such that $i$
and $j$ intersect. Let $n_{i,j}$ be the number of disjoint intervals in the
partition of $[0,\nu(x_{i})-1]$ as defined above with respect to the function
defined by
$f_{j}(h):=\\{k\in A_{E}\setminus\\{j\\}\mid\exists
e\in[0,\nu(x_{k})-1]\colon\sigma_{k}(e)=h\\}$
for all $h\in H$. That is, we do not split $i$ at the intersection with $j$.
If $i$ and $j$ are non-parallel, then they intersect in exactly one point
$\sigma_{i}(z)$ for some $z\in[0,\nu(x_{i})-1]$. This implies that $j\in
f(\sigma_{i}(z))$ and $j\notin f(\sigma_{i}(e))$ for all
$e\in[0,\nu(x_{i})-1]\setminus\\{z\\}$. Thus, we have $n_{i}\leq n_{i,j}+2$.
If $i$ and $j$ are parallel, then they have the same period since $E$ is
c-simplified. So the intersection of $i$ and $j$ is a subray of $i$ with
starting point $\sigma_{i}(z_{1})$ and endpoint $\sigma_{i}(z_{2})$ for some
$0\leq z_{1}\leq z_{2}\leq\nu(x_{i})-1$. This implies that $j\in
f(\sigma_{i}(e))$ for all $e\in[z_{1},z_{2}]$ and $j\notin f(\sigma_{i}(e))$
for all $e\in[0,\nu(x_{i})-1]\setminus[z_{1},z_{2}]$. Thus, we have $n_{i}\leq
n_{i,j}+2$. By induction it follows that $n_{i}\leq 1+2(m-1)$. Therefore,
$\eta:=(n_{a_{1}},\dots,n_{a_{m}})\in N$ where we recall that
$A_{E}=\\{a_{1},\dots,a_{m}\\}$ with $a_{1}<\dots<a_{m}$.
Let $E_{\eta}^{\prime}$ be the knapsack expression corresponding to $\eta$ as
constructed above and $a_{i,j}$ for $i\in A_{E}$ and $j\in[1,n_{i}]$ be the
index of the $j$-th subray of $i$ in $E_{\eta}^{\prime}$. Then we define the
equivalence relation $\sim\in\Theta_{\eta}$ such that for all $i,k\in
A_{E},j\in[1,n_{i}]$ and $\ell\in[1,n_{k}]$ it holds that $a_{i,j}\sim
a_{k,\ell}$ if and only if
$|[s_{j}^{(i)},e_{j}^{(i)}]|=e_{j}^{(i)}-s_{j}^{(i)}+1=e_{\ell}^{(k)}-s_{\ell}^{(k)}+1=|[s_{\ell}^{(k)},e_{\ell}^{(k)}]|$
and
$\sigma_{i}(s_{j}^{(i)}+z)=\sigma_{k}(s_{\ell}^{(k)}+z)$
for all $z\in[0,e_{j}^{(i)}-s_{j}^{(i)}]$. This means that $\sim$ relates all
equal subrays.
Now we define the valuation $\nu^{\prime}\in\mathbb{N}^{Y}$ such that
$\nu^{\prime}(y_{k}):=\begin{cases}e_{j}^{(i)}-s_{j}^{(i)}+1,&\text{if
}k=a_{i,j}\text{ for some }i\in A_{E}\text{ and }j\in[1,n_{i}]\\\
\nu(x_{k}),&\text{otherwise}\end{cases}$
for all $k\in P_{E_{\eta}^{\prime}}$. Let $\beta\colon[1,s+1]\to[1,r+1]$ map
the indices of elements of $E_{\eta}$ to the corresponding indices of elements
of $E_{\eta}^{\prime}$ that are not removed. Since $\nu$ is a solution of
$(E,L,D)$, for every $i\in A_{E}$ with $\nu(x_{i})\neq 0$ and $j\in[1,n_{i}]$
it holds that $\prod_{\ell=1}^{k}\tau(u_{c_{\ell}})(1)=1$, where
$C=\\{c_{1},\dots,c_{k}\\}$ with $c_{1}<\dots<c_{k}$ is the $\sim$-class
containing $a_{i,j}$, and therefore $u_{a_{i,j}}^{y_{a_{i,j}}}$ is not removed
from $E_{\eta}^{\prime}$. By construction of $(E_{\eta},L_{\sim},D_{\sim})$
and since $\nu\in\mathsf{sol}_{G\wr H}(E,L,D)$, it follows that for
$\nu^{\prime\prime}\in\mathbb{N}^{Z}$ defined by
$\nu^{\prime\prime}(z_{i}):=\nu^{\prime}(y_{\beta(i)})$ for all $i\in
P_{E_{\eta}}$ we have
$\nu^{\prime\prime}\in\mathsf{sol}_{H}(\sigma(E_{\eta}),L_{\sim},D_{\sim})$.
For the other direction assume that
$\nu^{\prime}\in\mathsf{sol}_{H}(\sigma(E_{\eta}),L_{\sim},D_{\sim})$ for some
$\eta\in N$ and $\sim\in\Theta_{\eta}$. Since after the construction
$g_{i}^{x_{i}}$ is split into $g_{i}^{z_{i_{1}}}h_{1}g_{i}^{z_{i_{2}}}\cdots
h_{m_{i}-1}g_{i}^{z_{i_{m_{i}}}}$ for some $m_{i}\in[0,n_{i}]$ and products
$h_{j}$ of elements of $H$ such that $h_{j}=1$ for all $j\in[1,m_{i}-1]$, we
define the valuation $\nu\in\mathbb{N}^{X}$ by
$\nu(x_{i}):=\nu^{\prime}(z_{i_{1}})+\dots+\nu^{\prime}(z_{i_{m_{i}}})$
for all $i\in P_{E}$. Since $\sigma(\nu^{\prime}(E_{\eta}))=1$, we also have
that $\sigma(\nu(E))=1$. Moreover, as
$\nu^{\prime}\in\mathsf{sol}_{H}(\sigma(E_{\eta}),L_{\sim},D_{\sim})$, the
definitions of $L_{\sim}$ and $D_{\sim}$ imply that all loop constraints in
$L$ and all disjointness constraints in $D$ are fulfilled under $\nu$.
It remains to show that $\tau(\nu(E))(h)=1$ for all $h\in H$. Note that we can
regard $\sim$ also as equivalence relation on $A_{E_{\eta}}$. For $i,j\in
A_{E_{\eta}}$ we say that $i\sim j$ if and only if $\beta(i)\sim\beta(j)$.
Moreover, for $i\in A_{E_{\eta}}$ let
$\mathsf{supp}_{E_{\eta}}^{\nu^{\prime}}(i)$ be the support of the ray $i$
under $\nu^{\prime}$. Since $\nu^{\prime}$ fulfills the loop constraints in
$L_{\sim}$, for any two rays $i,j\in A_{E_{\eta}}$ in the same $\sim$-class
$C$ it holds that
$\mathsf{supp}_{E_{\eta}}^{\nu^{\prime}}(i)=\mathsf{supp}_{E_{\eta}}^{\nu^{\prime}}(j)$
and we define
$\mathsf{supp}_{E_{\eta}}^{\nu^{\prime}}(C):=\mathsf{supp}_{E_{\eta}}^{\nu^{\prime}}(i)$.
By construction of $E_{\eta}$ we have that
$\prod_{i=1}^{k}\tau(v_{c_{i}})(1)=1$ for any $\sim$-class
$C=\\{c_{1},\dots,c_{k}\\}$ with $c_{1}<\dots<c_{k}$. As the disjointness
constraints in $D_{\sim}$ ensure that rays of different $\sim$-classes are
disjoint, for any $\sim$-class $C$ it follows that
$\tau(\nu^{\prime}(E_{\eta}))(h)=1$ for all
$h\in\mathsf{supp}_{E_{\eta}}^{\nu^{\prime}}(C)$. Thus, since any $i\in
A_{E_{\eta}}$ is contained in a $\sim$-class, we have that
$\tau(\nu^{\prime}(E_{\eta}))(h)=1$ for all $h\in H$. By definition of $\nu$
this implies that $\tau(\nu(E))(h)=1$ for all $h\in H$ since the rays of $E$
under $\nu$ are built of the rays of $E_{\eta}$ under $\nu^{\prime}$.
### B.5 Reduction in abelian case
As a consequence of Theorem 4.1 we can start the reduction with a normalized
$\mathsf{HKP}^{+}(G\wr H)$-instance. Again, in the first step we make the
instance stacking-free.
###### Lemma B.14.
For any normalized $\mathsf{HKP}^{+}(G\wr H)$-instance one can effectively
construct an equivalent finite set of stacking-free, normalized
$\mathsf{HKP}^{+}(G\wr H)$-instances.
###### Proof B.15.
We do the same construction as in the proof of Lemma 4.2 but we leave the
disjointness constraints out. This results in stacking-free, normalized
$\mathsf{HKP}^{+}(G\wr H)$-instances $(E_{1},L_{1}),\dots,(E_{t},L_{t})$. For
the correctness we assume that the normalized $\mathsf{HKP}^{+}(G\wr
H)$-instance $(E,L)$ with $s(E)\geq 1$ and the set of exponent expressions $S$
are generated during the construction. We need to show that
$\mathsf{sol}_{G\wr H}(E,L)\cap\mathsf{sol}_{G}(S)\neq\emptyset$ if and only
if there exists $C\subseteq A_{E}$ containing an address of a stacking index
such that $\mathsf{sol}_{G\wr
H}(E_{C},L_{C})\cap\mathsf{sol}_{G}(S_{C})\neq\emptyset$. The first direction
works exactly the same as in the proof of Lemma 4.2. For the other direction
it remains to argue that $\tau(\nu(E))(h_{C})=1$. But since $G$ is abelian,
this follows from the fact that $\prod_{j=1}^{m}a_{j}^{\nu}=1$ and
$\tau(\nu_{C}(E_{C}))(h_{C})=1$.
From now on we assume that $(E,L)$ is a stacking-free, normalized
$\mathsf{HKP}^{+}(G\wr H)$-instance. The next lemma shows how to reduce
$\mathsf{HKP}^{+}(G\wr H)$ for stacking-free, normalized
$\mathsf{HKP}^{+}(G\wr H)$-instances to $\mathsf{MKP}^{+}(H)$.
###### Lemma B.16.
For any stacking-free, normalized $\mathsf{HKP}^{+}(G\wr H)$-instance one can
effectively construct an equivalent finite set of
$\mathsf{KP}^{+}(H)$-instances.
###### Proof B.17.
We can again almost copy the proof of Lemma 4.3 by leaving the disjointness
constraints out. The result of the construction are
$\mathsf{MKP}^{+}(H)$-instances $(E_{1},L_{1}),\dots,(E_{t},L_{t})$. For the
correctness we have to show that the $\mathsf{HKP}^{+}(G\wr H)$-instance
$(E,L)$ has a solution if and only if there exist $\eta\in N$ and
$\sim\in\Theta_{\eta}$ such that the $\mathsf{MKP}^{+}(H)$-instance
$(\sigma(E_{\eta}),L_{\sim})$ has a solution. The first direction is again the
same as in the proof of Lemma 4.3. For the other direction we only use
disjointness constraints to show that $\tau(\nu^{\prime}(E_{\eta}))(h)=1$ for
all $h\in\mathsf{supp}_{E_{\eta}}^{\nu^{\prime}}(C)$ and $\sim$-classes $C$.
But since $G$ is abelian, this also follows from the fact that $\prod_{i\in
C}\tau(v_{i})(1)=1$ for any $\sim$-class $C$.
## Appendix C Proofs from Section 5
### C.1 Proof of Lemma 5.1
###### Lemma C.1.
Given $k\in\mathbb{N}$ one can compute $u\in\langle a\rangle^{(\mathbb{N})}$
with periodic complexity $\geq k$.
###### Proof C.2.
A function $u\in\langle a\rangle^{(\mathbb{N})}$ is $(k,s)$-alternating if
there are intervals $L_{1}=[\ell_{1},r_{1}],\dots,L_{k}=[\ell_{k},r_{k}]$ and
elements $c_{1},\dots,c_{k}\in\langle a\rangle$ such that $|L_{j}|\geq s$,
$\ell_{1}\leq r_{1}<\ell_{2}\leq r_{2}<\dots<\ell_{k}\leq r_{k}$, and
$u(n)=c_{j}$ for all $n\in L_{j}$, $j\in[1,k]$, and $c_{j}\neq c_{j+1}$ for all
$j\in[1,k-1]$. We claim that every $(4k,2^{2^{k}})$-alternating function $u$
has periodic complexity at least $k$. The statement then follows by choosing
the word $u=(a)^{2^{2^{k}}}(1)^{2^{2^{k}}}\dots(a)^{2^{2^{k}}}(1)^{2^{2^{k}}}$
consisting of $4k$ blocks. Here, the notation $(c)^{\ell}$ stands for the word
consisting of $\ell$ many $c$.
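For concreteness, the following Python sketch (the helper names are ours) builds this word and checks the alternating property greedily:

```python
def alternating_word(k):
    """The word (a)^s (1)^s ... (a)^s (1)^s with s = 2^(2^k) and 4k blocks;
    'a' stands for the generator and '1' for the identity of <a>."""
    s = 2 ** (2 ** k)
    return (["a"] * s + ["1"] * s) * (2 * k)

def is_alternating(u, k, s):
    """Greedy sufficient check of the (k, s)-alternating property: scan the
    maximal constant runs, keep a run of length >= s whenever its value
    differs from the last kept one, and test whether k runs were kept."""
    runs, i = [], 0
    while i < len(u):
        j = i
        while j < len(u) and u[j] == u[i]:
            j += 1
        runs.append((u[i], j - i))
        i = j
    kept = []
    for c, length in runs:
        if length >= s and (not kept or kept[-1] != c):
            kept.append(c)
    return len(kept) >= k

k = 2
assert is_alternating(alternating_word(k), 4 * k, 2 ** (2 ** k))
```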
The proof proceeds by induction on $k$. Let $L_{1},\dots,L_{4k}$ be intervals
of size $\geq 2^{2^{k}}$ and $c_{1},\dots,c_{4k}\in\langle a\rangle$ such that
$u$ is constant $c_{j}$ on each interval $L_{j}$ and $c_{j}\neq c_{j+1}$ for
all $j\in[1,4k-1]$. Take any basic periodic function $v\neq 1$ with support
$\mathsf{supp}(v)=\\{p+qn\mid 0\leq n\leq\ell\\}$ for some numbers $p,q,\ell$.
Let $c\in\langle a\rangle$ such that $v(n)=c$ for all $n\in\mathsf{supp}(v)$.
It suffices to show that $\mathsf{pc}(uv^{-1})\geq k-1$. If the period $q$ is
at least $2^{2^{k-1}}+1$ then each set $L_{j}\setminus\mathsf{supp}(v)$
contains an interval of size $2^{2^{k-1}}$. Hence $uv^{-1}$ is
$(4k,2^{2^{k-1}})$-alternating and by induction $\mathsf{pc}(uv^{-1})\geq
k-1$. If $q\leq 2^{2^{k-1}}$ consider the restriction of $uv^{-1}$ to
$D=\\{n\in\mathbb{N}\mid n\equiv p\pmod{q}\\}$. Notice that
$\mathsf{supp}(v)\subseteq D$ and $\mathsf{supp}(v)$ is convex in $D$, i.e. if
$n_{1}<n_{2}<n_{3}\in D$ and $n_{1},n_{3}\in\mathsf{supp}(v)$ then
$n_{2}\in\mathsf{supp}(v)$. Moreover $|L_{j}\cap D|\geq 2^{2^{k-1}}$ for all
$j\in[1,4k]$ since $|L_{j}|\geq 2^{2^{k}}$ and $q\leq 2^{2^{k-1}}$. Let
$J_{+}=\\{j\in[1,4k]\mid L_{j}\cap\mathsf{supp}(v)=L_{j}\cap D\\}$ and
$J_{-}=\\{j\in[1,4k]\mid L_{j}\cap\mathsf{supp}(v)=\emptyset\\}$, which are
disjoint sets because $L_{j}\cap D$ is always nonempty. Define
$c_{j}^{\prime}$ for all $j\in J_{+}\cup J_{-}$ by
$c_{j}^{\prime}=\begin{cases}c_{j},&\text{if }j\in J_{-},\\\
c_{j}c^{-1},&\text{if }j\in J_{+}.\end{cases}$
Notice that $uv^{-1}$ is constant $c_{j}^{\prime}$ on each set $L_{j}\cap D$
for all $j\in J_{+}\cup J_{-}$. Moreover, $J_{+}$ is an interval by convexity
of $\mathsf{supp}(v)$ in $D$. Furthermore, if $j\notin J_{+}\cup J_{-}$ then
$j$ must be adjacent to the interval $J_{+}$; otherwise there would be indices
$j_{1}<j_{2}<j_{3}$ such that $L_{j_{1}}$ and $L_{j_{3}}$ both intersect
$\mathsf{supp}(v)$, and $L_{j_{2}}$ contains a point in
$D\setminus\mathsf{supp}(v)$, which again would contradict the convexity of
$\mathsf{supp}(v)$ in $D$. Therefore $(c_{j}^{\prime})_{j\in J_{+}\cup J_{-}}$
is alternating except in at most two positions. We can pick a subset
$J\subseteq J_{+}\cup J_{-}$ of size $\geq 4k-4$ such that the sequence
$(c_{j}^{\prime})_{j\in J}$ is alternating. Hence, the periodic subsequence of
$uv^{-1}$ induced by $D$ is $(4(k-1),2^{2^{k-1}})$-alternating. By induction
we obtain $\mathsf{pc}(uv^{-1})\geq k-1$, concluding the proof.
First we prove the case $n=1$. Let $v=a_{1}\dots a_{m}$ be any function with
$\mathsf{pc}(v)\geq k$ (Lemma C.1). Then let
$u=a_{1}(1)^{m-1}a_{2}(1)^{m-1}\dots a_{m}(1)^{m-1}a_{1}\dots a_{m}$. Let
$p\neq q\in\mathbb{Z}_{\infty}$. If $p=\infty$ (or $q=\infty$) then
$\tensor*[^{p}]{{u}}{}\tensor*[^{q}]{{u}}{{}^{-1}}$ is
$\tensor*[^{q}]{{u}}{{}^{-1}}$ (or $\tensor*[^{p}]{{u}}{}$, respectively) and
has periodic complexity $\geq k$. Next we can assume that $p,q\in\mathbb{Z}$
and $p<q$ since
$\mathsf{pc}(\tensor*[^{p}]{{u}}{}\tensor*[^{q}]{{u}}{{}^{-1}})=\mathsf{pc}(\tensor*[^{q}]{{u}}{}\tensor*[^{p}]{{u}}{{}^{-1}})$
(because $\tensor*[^{p}]{{u}}{}\tensor*[^{q}]{{u}}{{}^{-1}}$ is the point-wise
inverse of $\tensor*[^{q}]{{u}}{}\tensor*[^{p}]{{u}}{{}^{-1}}$). If $q-p<m$
then $v$ is a periodic subsequence of
$\tensor*[^{p}]{{u}}{}\tensor*[^{q}]{{u}}{{}^{-1}}$. If $q-p\geq m$ then
$v^{-1}$ is a periodic subsequence of
$\tensor*[^{p}]{{u}}{}\tensor*[^{q}]{{u}}{{}^{-1}}$. In any case
$\mathsf{pc}(\tensor*[^{p}]{{u}}{}\tensor*[^{q}]{{u}}{{}^{-1}})\geq\mathsf{pc}(v)\geq
k$.
Now let $n\in\mathbb{N}$ be arbitrary and let $u=a_{1}\dots a_{m}$ be any
function such that
$\mathsf{pc}(\tensor*[^{p}]{{u}}{}\tensor*[^{q}]{{u}}{{}^{-1}})\geq k+4(n-1)$
for all $p\neq q\in\mathbb{Z}_{\infty}$, which can be constructed as described
above. Then, we set $u_{1}=u$ and for all $i\in[2,n]$ we define
$u_{i}=a_{1}1^{|u_{i-1}|}a_{2}1^{|u_{i-1}|}\dots a_{m}.$
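A minimal Python sketch of this padding step (illustrative only; letters of $\langle a\rangle$ are modeled as integers under addition, with 0 as the identity):

```python
def pad(u, gap):
    """u_i is obtained from u = a_1 ... a_m by inserting 'gap' identity
    letters (0) between consecutive letters: a_1 0^gap a_2 0^gap ... a_m."""
    out = []
    for a in u[:-1]:
        out.append(a)
        out.extend([0] * gap)
    out.append(u[-1])
    return out

u1 = [1, 2, 3]
u2 = pad(u1, len(u1))          # |u_1| = 3 identity letters between entries
assert u2 == [1, 0, 0, 0, 2, 0, 0, 0, 3]
assert len(u2) == 3 * (3 + 1) - 3   # m * (gap + 1) - gap
```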
We claim that for any $p\neq q$ and $i\in[1,n]$, there is a progression
$D\subseteq\mathbb{Z}$ with period $|u_{i-1}|+1$ (if $i=1$ the period is 1)
such that
$\pi_{D}(\tensor*[^{p}]{{u}}{{}_{i}}\tensor*[^{q}]{{u}}{{}^{-1}_{i}})$ has
periodic complexity $\geq k+4(n-1)$. In particular, we have
$|\mathsf{supp}(\tensor*[^{r}]{{u}}{{}_{j}})\cap D|\leq 1$ for every
$r\in\mathbb{Z}$ and $j\neq i$.
The claim is obvious for $i=1$. For $i>1$, we distinguish two cases. First,
suppose $p-q$ is divisible by $|u_{i-1}|+1$. Then the support of
$\tensor*[^{p}]{{u}}{{}_{i}}\tensor*[^{q}]{{u}}{{}^{-1}_{i}}$ is included in
some progression $D$ with period $|u_{i-1}|+1$. Moreover, we have
$\pi_{D}(\tensor*[^{p}]{{u}}{{}_{i}}\tensor*[^{q}]{{u}}{{}^{-1}_{i}})=\tensor*[^{p^{\prime}}]{{u}}{}\tensor*[^{q^{\prime}}]{{u}}{{}^{-1}}$
for some $p^{\prime},q^{\prime}$ with $p^{\prime}\neq q^{\prime}$, hence
$\mathsf{pc}(\pi_{D}(\tensor*[^{p}]{{u}}{{}_{i}}\tensor*[^{q}]{{u}}{{}^{-1}_{i}}))=\mathsf{pc}(\tensor*[^{p^{\prime}}]{{u}}{}\tensor*[^{q^{\prime}}]{{u}}{{}^{-1}})\geq
k+4(n-1)$.
Now suppose $p-q$ is not divisible by $|u_{i-1}|+1$. Then there is a
progression $D$ with period $|u_{i-1}|+1$ such that
$\pi_{D}(\tensor*[^{p}]{{u}}{{}_{i}}\tensor*[^{q}]{{u}}{{}^{-1}_{i}})=u$,
hence
$\mathsf{pc}(\pi_{D}(\tensor*[^{p}]{{u}}{{}_{i}}\tensor*[^{q}]{{u}}{{}^{-1}_{i}}))=\mathsf{pc}(u)\geq
k+4(n-1)$.
Now take numbers $p_{1},q_{1},\dots,p_{n},q_{n}\in\mathbb{Z}_{\infty}$ with
$p_{j}\neq q_{j}$ for some $j\in[1,n]$ and consider
$w=\prod_{i=1}^{n}\tensor*[^{p_{i}}]{{u}}{{}_{i}}\tensor*[^{q_{i}}]{{u}}{{}^{-1}_{i}}.$
We can rewrite the equation to
$\tensor*[^{p_{j}}]{{u}}{{}_{j}}\tensor*[^{q_{j}}]{{u}}{{}^{-1}_{j}}=\Big{(}\prod_{i<j}^{n}\tensor*[^{p_{i}}]{{u}}{{}_{i}}\tensor*[^{q_{i}}]{{u}}{{}^{-1}_{i}}\Big{)}^{-1}\cdot
w\cdot\Big{(}\prod_{i>j}^{n}\tensor*[^{p_{i}}]{{u}}{{}_{i}}\tensor*[^{q_{i}}]{{u}}{{}^{-1}_{i}}\Big{)}^{-1}.$
(6)
By (6) and the observation above there exists a progression
$D\subseteq\mathbb{Z}$ such that
$\mathsf{pc}(\pi_{D}(\tensor*[^{p_{j}}]{{u}}{{}_{j}}\tensor*[^{q_{j}}]{{u}}{{}^{-1}_{j}}))\geq
k+4(n-1)$ and the functions
$\tensor*[^{p_{j}}]{{u}}{{}_{j}}\tensor*[^{q_{j}}]{{u}}{{}^{-1}_{j}}$ and $w$
differ in at most $4(n-1)$ positions in $D$. Thus
$\mathsf{pc}(\pi_{D}(\tensor*[^{p_{j}}]{{u}}{{}_{j}}\tensor*[^{q_{j}}]{{u}}{{}^{-1}_{j}}))\leq\mathsf{pc}(\pi_{D}(w))+4(n-1)$
and hence $\mathsf{pc}(w)\geq k$.
### C.2 Proof of Lemma 5.2
Notice that the “if”-direction of statement 1. is a special case of 2. Let
$L=\\{(i_{1},j_{1}),\dots,(i_{\ell},j_{\ell})\\}$. By Lemma 5.1 we can
construct functions $u_{1},\dots,u_{\ell}\in\langle a\rangle^{(\mathbb{N})}$
such that
$\prod_{k=1}^{\ell}\tensor*[^{p_{k}}]{{u}}{{}_{k}}\tensor*[^{q_{k}}]{{u}}{{}^{-1}_{k}}$
has periodic complexity at least $2m+1$ for all
$(p_{1},\dots,p_{\ell})\neq(q_{1},\dots,q_{\ell})\in\mathbb{Z}_{\infty}^{\ell}$.
For $i\in[1,\ell]$ let $\bar{f}_{i}\in\langle a\rangle^{(t^{*})}$ with
$\bar{f}_{i}(h)=u_{i}(j)$ if $h=t^{j}$ for some $j\in\mathbb{Z}$ and
$\bar{f}_{i}(h)=1$ otherwise. Then for all $i\in[0,m]$ we define
$f_{i}=\prod_{k\in[1,\ell],\,i_{k}=i}\bar{f}_{k}\prod_{k\in[1,\ell],\,j_{k}=i}\bar{f}_{k}^{-1}.$
Let $h_{1},\dots,h_{m}\in H$ and define $\sigma_{i}=h_{1}\dots h_{i}$. If
$h_{1}\dots h_{m}=1$ and $\sigma_{i_{k}}=\sigma_{j_{k}}$ for all
$k\in[1,\ell]$ then
$f_{0}h_{1}f_{1}\dots h_{m}f_{m}=\prod_{i=0}^{m}\tensor*[^{\sigma_{i}}]{{f_{i}}}{}=\prod_{k=1}^{\ell}\tensor*[^{\sigma_{i_{k}}}]{{\bar{f}}}{{}_{k}}\tensor*[^{\sigma_{j_{k}}}]{{\bar{f}}}{{}^{-1}_{k}}=1.$
For statement 2. let $g_{1},\dots,g_{m}\in\mathsf{P}_{a,t}(G\wr H)$ and define
$\sigma_{i}=\sigma(g_{1}\dots g_{i})$ for all $i\in[0,m]$. In particular,
$\sigma_{0}=1_{H}$. We have
$\tau(f_{0}g_{1}f_{1}\dots
g_{m}f_{m})=\tensor*[^{\sigma_{0}}]{{f}}{{}_{0}}\prod_{i=1}^{m}w_{i}\tensor*[^{\sigma_{i}}]{{f}}{{}_{i}}$
(7)
where $w_{i}=\tensor*[^{\sigma_{i-1}}]{{\tau(g_{i})}}{}$ for $i\in[1,m]$.
Assume that $\sigma_{i_{s}}\neq\sigma_{j_{s}}$. We will apply on (7) the
homomorphism
$\varphi\colon G^{(H)}\to
G^{(\mathbb{Z})},\quad\varphi(f)(n)=f(\sigma_{i_{s}}t^{n}).$
For each $k\in[1,\ell]$ let $p_{k}\in\mathbb{Z}$ with
$\sigma_{i_{s}}t^{p_{k}}=\sigma_{i_{k}}$ if
$\sigma_{i_{s}}^{-1}\sigma_{i_{k}}\in\langle t\rangle$ and $p_{k}=\infty$
otherwise. It satisfies
$\varphi(\tensor*[^{\sigma_{i_{k}}}]{{\bar{f}}}{{}_{k}})=\tensor*[^{p_{k}}]{{u}}{{}_{k}}$:
If $\sigma_{i_{s}}^{-1}\sigma_{i_{k}}\notin\langle t\rangle$ then
$p_{k}=\infty$ and
$\varphi(\tensor*[^{\sigma_{i_{k}}}]{{\bar{f}}}{{}_{k}})(n)=\bar{f}_{k}(\sigma_{i_{k}}^{-1}\sigma_{i_{s}}t^{n})=1.$
Otherwise, $p_{k}\in\mathbb{Z}$ and
$\varphi(\tensor*[^{\sigma_{i_{k}}}]{{\bar{f}}}{{}_{k}})(n)=\bar{f}_{k}(\sigma_{i_{k}}^{-1}\sigma_{i_{s}}t^{n})=\bar{f}_{k}(t^{-p_{k}}t^{n})=u_{k}(n-p_{k})=\tensor*[^{p_{k}}]{{u}}{{}_{k}}(n).$
Similarly, let $q_{k}\in\mathbb{Z}$ such that
$\sigma_{i_{s}}t^{q_{k}}=\sigma_{j_{k}}$ if
$\sigma_{i_{s}}^{-1}\sigma_{j_{k}}\in\langle t\rangle$ and $q_{k}=\infty$
otherwise; it satisfies
$\varphi(\tensor*[^{\sigma_{j_{k}}}]{{\bar{f}}}{{}^{-1}_{k}})=\tensor*[^{q_{k}}]{{u}}{{}^{-1}_{k}}$.
Since $\sigma_{i_{s}}\neq\sigma_{j_{s}}$ we have $p_{s}\neq q_{s}$. Therefore
$\prod_{k=1}^{\ell}\tensor*[^{p_{k}}]{{u}}{{}_{k}}\tensor*[^{q_{k}}]{{u}}{{}^{-1}_{k}}$
has periodic complexity at least $2m+1$. Furthermore, $\varphi(w_{i})$ is a
basic periodic function for all $i\in[1,m]$. Let $I$ be the set of indices
$i\in[1,m]$ where the value of $\tau(g_{i})$ does not belong to $\langle
a\rangle$. If $i\in I$ then $\tau(g_{i})$ has a period that is not
commensurable to $t$ and hence $|\mathsf{supp}(\varphi(w_{i}))|\leq 1$. Let
$W=\bigcup_{i\in I}\mathsf{supp}(\varphi(w_{i}))$, which has size at most $m$.
If $n\in\mathbb{Z}\setminus W$ then $\varphi(w_{i})(n)\in\langle a\rangle$ for
all $i\in[1,m]$ and therefore
$\varphi(\tau(f_{0}g_{1}f_{1}\dots g_{m}f_{m}))(n)=\varphi(\tensor*[^{\sigma_{0}}]{{f}}{{}_{0}})(n)\prod_{i=1}^{m}\varphi(w_{i})(n)\varphi(\tensor*[^{\sigma_{i}}]{{f}}{{}_{i}})(n)=\prod_{i=0}^{m}\varphi(\tensor*[^{\sigma_{i}}]{{f}}{{}_{i}})(n)\prod_{i=1}^{m}\varphi(w_{i})(n)=\prod_{k=1}^{\ell}\tensor*[^{p_{k}}]{{u}}{{}_{k}}(n)\tensor*[^{q_{k}}]{{u}}{{}^{-1}_{k}}(n)\prod_{i=1}^{m}\varphi(w_{i})(n).$
If $f_{0}g_{1}f_{1}\dots g_{m}f_{m}=1$ then
$\prod_{k=1}^{\ell}\tensor*[^{p_{k}}]{{u}}{{}_{k}}\tensor*[^{q_{k}}]{{u}}{{}^{-1}_{k}}$
and $\prod_{i=1}^{m}\varphi(w_{i})^{-1}$ differ in at most $|W|\leq m$
positions. Since $\prod_{i=1}^{m}\varphi(w_{i})^{-1}$ has periodic complexity
at most $m$, the periodic complexity of
$\prod_{k=1}^{\ell}\tensor*[^{p_{k}}]{{u}}{{}_{k}}\tensor*[^{q_{k}}]{{u}}{{}^{-1}_{k}}$
is bounded by $2m$, which is a contradiction.
### C.3 Proof of Lemma 5.3
We proceed in two steps. Let $E=e_{1}\dots e_{n}$. Suppose that there exists a
power $e_{k}=h_{k}^{x_{k}}$ such that $h_{k}$ has finite order $q\geq 1$. We
claim that, if $I$ has a solution $\nu$ then there exists one where
$\nu(x_{k})$ is bounded by $2q-1$. If $\nu$ is any solution of $I$ we can
define a solution $\nu^{\prime}$ by $\nu^{\prime}(x)=\nu(x)$ for all $x\neq
x_{k}$ and $\nu^{\prime}(x_{k})=\nu(x_{k})-iq$ where $i\in\mathbb{N}$ is
minimal such that $0\leq\nu^{\prime}(x_{k})\leq 2q-1$. Furthermore the induced
factorized walks $\pi_{\nu,E}=\pi_{1}\dots\pi_{n}$ and
$\pi_{\nu^{\prime},E}=\pi_{1}\dots\pi_{k}^{\prime}\dots\pi_{n}$ are identical
up to the $k$-th subwalks $\pi_{k}$, $\pi_{k}^{\prime}$, which have the same
support (and the same endpoints). Therefore $\nu$ and $\nu^{\prime}$ satisfy
the same loop and disjointness constraints. Hence, for $c\in\mathbb{N}$
let us define
$E_{c}=e_{1}\dots e_{k-1}\underbrace{h_{k}\cdots h_{k}}_{\text{$c$ atoms
$h_{k}$}}e_{k+1}\dots e_{n}.$
Furthermore, we need to adapt the sets $L$ and $D$. Every disjointness
constraint in $D$ referring to $e_{k}$ must be replaced by $c$ disjointness
constraints referring to the $c$ atoms $h_{k}$. Formally we set
$L_{c}=\\{(\iota_{c}(i),\iota_{c}(j))\mid(i,j)\in L\\}\quad\text{and}\quad D_{c}=\\{(i,j)\mid(\delta_{c}(i),\delta_{c}(j))\in D\\}$ (8)
where the functions $\iota_{c}$ and $\delta_{c}$ are defined by
$\iota_{c}(i)=\begin{cases}i,&\text{if }i<k,\\\ i+c-1,&\text{if }k\leq
i,\end{cases}\quad\delta_{c}(i)=\begin{cases}i,&\text{if }i<k,\\\ k,&\text{if
}k\leq i<k+c,\\\ i-c+1,&\text{if }k+c\leq i.\end{cases}$ (9)
It is easy to see that $I$ has a solution $\nu$ with $\nu(x_{k})=c$ if and
only if $I[x_{k}=c]=(E_{c},L_{c},D_{c})$ has a solution. We construct the set
$\mathcal{I}=\\{I[x_{k}=c]\mid 0\leq c\leq 2q-1\\}$. This step reduces the
number of powers $h_{i}^{x_{i}}$ where $h_{i}$ has finite order, so we can
repeat this construction until the instances are torsion-free.
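The index adjustments in (8) and (9) are purely mechanical; the following Python sketch (our own, with 1-indexed positions as plain integers and the instance size passed explicitly) implements $\iota_{c}$, $\delta_{c}$ and the induced constraint sets:

```python
def iota(i, k, c):
    """iota_c from (9): positions at or past k shift by c - 1 after the
    power at position k is expanded into c copies of the atom h_k."""
    return i if i < k else i + c - 1

def delta(i, k, c):
    """delta_c from (9): collapse the block of c copies back onto k."""
    if i < k:
        return i
    if i < k + c:
        return k
    return i - c + 1

def constraints_c(L, D, k, c, n):
    """L_c and D_c as in (8), where n is the number of atoms of E_c:
    loop constraints are pushed forward along iota_c, while a disjointness
    pair touching k is duplicated for each copy (preimage of delta_c)."""
    L_c = {(iota(i, k, c), iota(j, k, c)) for (i, j) in L}
    D_c = {(i, j) for i in range(1, n + 1) for j in range(1, n + 1)
           if (delta(i, k, c), delta(j, k, c)) in D}
    return L_c, D_c
```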
Next, to establish orthogonality, suppose that $E$ contains powers
$h_{\ell}^{x_{\ell}}$ and $h_{r}^{x_{r}}$ such that $(\ell,r)\in D$. If
$\ell=r$ then the instance is unsatisfiable and we can return
$\mathcal{I}=\emptyset$. Now assume that $\ell<r$. Since $\langle
h_{\ell}\rangle\cap\langle h_{r}\rangle\neq\\{1\\}$ there exist integers $s>0$
and $t\neq 0$ such that $h_{\ell}^{s}=h_{r}^{t}$. The idea is that, if the
$\ell$-th and the $r$-th subwalk intersect, then they already intersect in a
segment of constant length at the start or the end of one of the rays.
Assume that $t>0$ (the case $t<0$ is similar). For the case that
$\nu(x_{\ell})$ is bounded by $s$ or $\nu(x_{r})$ is bounded by $t$, we can
construct a finite number of instances $I[x_{\ell}=c]$, $I[x_{r}=c]$ as above.
It remains to consider the case that $\nu(x_{\ell})\geq s$ and $\nu(x_{r})\geq
t$. We define the following knapsack expression:
$E^{\prime}=e_{1}\dots
e_{\ell-1}h_{\ell}^{s}h_{\ell}^{y_{\ell}}e_{\ell+1}\dots
e_{r-1}h_{r}^{t}h_{r}^{y_{r}}e_{r+1}\dots e_{n}.$
Similar to (8) and (9), we can define sets $L^{\prime}$, $D^{\prime}$ such
that $I^{\prime}=(E^{\prime},L^{\prime},D^{\prime})$ has a solution if and
only if $I$ has a solution $\nu$ with $\nu(x_{\ell})\geq s$ and
$\nu(x_{r})\geq t$. In particular, the set $D^{\prime}$ relates all $s+1$
atoms in $h_{\ell}^{s}h_{\ell}^{y_{\ell}}$ to all $t+1$ atoms in
$h_{r}^{t}h_{r}^{y_{r}}$. We claim that we can now omit the disjointness
constraint between $h_{\ell}^{y_{\ell}}$ and $h_{r}^{y_{r}}$ in $D^{\prime}$,
i.e. $I^{\prime}$ is equivalent to
$I^{\prime\prime}=(E^{\prime},L^{\prime},D^{\prime\prime})$ where
$D^{\prime\prime}=D^{\prime}\setminus\\{(\ell+s,r+s+t)\\}.$
Clearly, every solution for $I^{\prime}$ is a solution for $I^{\prime\prime}$.
Conversely, assume that $\nu$ is a solution for $I^{\prime\prime}$ and that
the disjointness constraint $(\ell+s,r+s+t)$ is violated, i.e. there exist
$u\in[0,\nu(y_{\ell})-1],v\in[0,\nu(y_{r})-1]$ such that
$\nu(e_{1}\dots e_{\ell-1})h_{\ell}^{s}h_{\ell}^{u}=\nu(e_{1}\dots
e_{r-1})h_{r}^{t}h_{r}^{v}.$
We can choose the pair $(u,v)$ to be minimal with respect to the partial order
$\preceq$ on $\mathbb{Z}^{2}$ defined by $(u,v)\preceq(u^{\prime},v^{\prime})$
if there exists $d\geq 0$ such that
$(u,v)+d\cdot(s,t)=(u^{\prime},v^{\prime})$. Since we have
$\nu(e_{1}\dots e_{\ell-1})h_{\ell}^{s}h_{\ell}^{u-s}=\nu(e_{1}\dots
e_{r-1})h_{r}^{t}h_{r}^{v-t}$
we must have $u<s$ or $v<t$ by minimality of $(u,v)$. This contradicts the
fact that $D^{\prime\prime}$ is satisfied by $\nu$.
In conclusion, we construct the set
$\mathcal{I}=\\{I^{\prime\prime}\\}\cup\\{I[x_{\ell}=c]\mid 0\leq
c<s\\}\cup\\{I[x_{r}=c]\mid 0\leq c<t\\}.$
Notice that the number of disjointness pairs that violate the orthogonality
property has decreased in each of these instances. Furthermore, the
transformation preserves torsion-freeness so that we can repeat this process
until all instances are orthogonal.
### C.4 Proof of Lemma 5.6
We begin with a definition of the set $J\subseteq[0,m]^{2}$ of loop
constraints. We define $J$ on $\hat{E}$ so as to express the following
conditions:
1. all conditions from $L$, which refer to positions in the prefix $E$ of $\hat{E}$,
2. $E=1$,
3. for every subexpression $E_{i,c,s}=\hat{e}_{k+1}\dots\hat{e}_{k+n+2}$ occurring at position $k$ in $\hat{E}$:
   (a) $E_{i,c,s}=1$,
   (b) $e_{1}\dots e_{i-1}=\hat{e}_{k+1}\dots\hat{e}_{k+i-1}$,
   (c) $e_{1}\dots e_{i}=\hat{e}_{k+1}\dots\hat{e}_{k+i+2}$.
Before we go on to the proof of Lemma 5.6, we need a lemma.
###### Lemma C.3.
For all valuations $\mu$ and $i\in[1,m]$ we have
$\mu(\hat{e}_{i})\in\mathsf{P}_{a,t}$.
###### Proof C.4.
We can verify $\gamma(\hat{e}_{i})\in GH$ easily from (1). If $i\in
Q_{\hat{E}}$ then $\tau(\mu(\hat{e}_{i}))$ has a support of size $\leq 1$ and
hence it has 1 as a period. If $i\in P_{\hat{E}}$ and
$\gamma(\hat{e}_{i})\notin\langle a\rangle H$ then
$\sigma(\gamma(\hat{e}_{i}))=t^{-s}h_{j_{k}}t^{s}$ for some $s\in\mathbb{N}$
and $k\in[1,d]$ with $j_{k}\in P_{\hat{E}}$. By assumption $h_{j_{k}}$ is not
commensurable to $t$ and therefore $t^{-s}h_{j_{k}}t^{s}$ is not commensurable
to $t$ either.
We are now ready to prove Lemma 5.6. Let $\nu$ be a valuation such that
$\nu(E)=1$ and $\nu$ satisfies all loop and disjointness constraints in $L$
and $D$. We claim that $(\hat{E},J)$ has a solution $\mu$. Let
$\pi_{\nu,E}=\pi_{1}\dots\pi_{n}$ be the induced factorized walk. We extend
$\nu$ to a valuation $\mu$ over all variables in $\hat{E}$ by assigning to the
copied variables the same values as the original variables. Then for all
$i\in[1,n]$, $c\in G$, $s\in\mathbb{N}$ we have $\sigma(\mu(E_{i,c,s}))=1$ and
$\tau(\mu(E_{i,c,s}))(h)=\begin{cases}c,&\text{if
}ht^{-s}\in\mathsf{supp}(\pi_{i}),\\\ 1,&\text{otherwise}.\end{cases}$
Since all disjointness constraints in $D$ are satisfied we know that
$\mu(E_{i_{k},a,s}\cdot E_{j_{k},b,s}\cdot E_{i_{k},a^{-1},s}\cdot
E_{j_{k},b^{-1},s})=1$
for all $1\leq k\leq d$ and $s\in S_{k}$, and therefore $\mu(\hat{E})=1$.
Furthermore, $\mu$ satisfies all conditions in $J$.
Next we claim
$\tensor*[^{t^{r}\\!\\!}]{{f}}{{}_{0}}\hat{e}_{1}\tensor*[^{t^{r}\\!\\!}]{{f}}{{}_{1}}\dots\hat{e}_{m}\tensor*[^{t^{r}\\!\\!}]{{f}}{{}_{m}}=1$
for some $r\leq Nm^{2}$. Let $g_{i}=\mu(\hat{e}_{i})$ for $i\in[1,m]$. Set
$\sigma_{i}=\sigma(g_{1}\dots g_{i})$ for all $i\in[0,m]$, which satisfy
$\sigma_{i}=\sigma_{j}$ for all $(i,j)\in J$. For all $r\in\mathbb{N}$ we have
$\tensor*[^{t^{r}\\!\\!}]{{f}}{{}_{0}}g_{1}\tensor*[^{t^{r}\\!\\!}]{{f}}{{}_{1}}\dots
g_{m}\tensor*[^{t^{r}\\!\\!}]{{f}}{{}_{m}}=\tensor*[^{\sigma_{0}t^{r}\\!\\!}]{{f}}{{}_{0}}\prod_{i=1}^{m}w_{i}\tensor*[^{\sigma_{i}t^{r}\\!\\!}]{{f}}{{}_{i}}$
(10)
where $w_{i}=\tensor*[^{\sigma_{i-1}}]{{\tau(g_{i})}}{}$ for all $i\in[1,m]$.
It suffices to find a number $r\in[0,Nm^{2}]$ such that each function
$\tensor*[^{\sigma_{i}t^{r}\\!\\!}]{{f}}{{}_{i}}$ commutes with each function
$w_{k}$ since
$\displaystyle\prod_{i=0}^{m}\tensor*[^{\sigma_{i}t^{r}\\!\\!}]{{f}}{{}_{i}}\prod_{i=1}^{m}w_{i}=\tensor*[^{t^{r}\\!\\!}]{{f}}{{}_{0}}\sigma(g_{1})\tensor*[^{t^{r}\\!\\!}]{{f}}{{}_{1}}\dots\sigma(g_{m})\tensor*[^{t^{r}\\!\\!}]{{f}}{{}_{m}}g_{1}\dots
g_{m}=1$
where the last equation uses Lemma 5.2 and $g_{1}\dots g_{m}=\mu(\hat{E})=1$.
Define the set
$K=\\{k\in[1,m]\mid\gamma(\hat{e}_{k})\notin\langle a\rangle H\\}.$
If $k\in[1,m]\setminus K$ then $w_{k}\in\langle a\rangle^{(H)}$ commutes with
all functions $\tensor*[^{\sigma_{i}t^{r}\\!\\!}]{{f}}{{}_{i}}\in\langle
a\rangle^{(H)}$. We call a shift $r\in\mathbb{N}$ _good_ if
$\sigma_{i}t^{r+j}\notin\mathsf{supp}(w_{k})\text{ for all $j\in[0,N-1]$, $i\in[1,m]$, and $k\in K$}.$
In other words, if we set
$F_{i}=[0,N-1],\qquad A_{i}=\\{s\in\mathbb{Z}\mid\text{$\sigma_{i}t^{s}\in\mathsf{supp}(w_{k})$ for some $k\in K$}\\}$
for $i\in[1,m]$, then $r$ is good if and only if $(r+F_{i})\cap
A_{i}=\emptyset$ for every $i\in[1,m]$. By Lemma C.3, all
$h,h^{\prime}\in\mathsf{supp}(w_{k})$ with $h\neq h^{\prime}$ satisfy
$h^{-1}h^{\prime}\notin\langle t\rangle$, which means $|A_{i}|\leq|K|\leq m$.
Thus, Lemma 5.4 tells us that there is a good $r\in[0,Nm^{2}]$.
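The existence argument is effective: a brute-force search over $[0,Nm^{2}]$ must succeed. A minimal Python sketch (our own illustration with explicit finite sets):

```python
def good_shift(F_sets, A_sets, bound):
    """Return some r in [0, bound] with (r + F_i) disjoint from A_i for
    every i; by Lemma 5.4 such an r exists for bound = N * m**2 when
    |A_i| <= m and F_i = [0, N-1]."""
    for r in range(bound + 1):
        if all(not ({r + x for x in F} & A)
               for F, A in zip(F_sets, A_sets)):
            return r
    return None

# m = 2 forbidden-position sets, windows of length N = 3:
N, m = 3, 2
F_sets = [set(range(N))] * m
A_sets = [{0, 4}, {1, 7}]
r = good_shift(F_sets, A_sets, N * m ** 2)
assert r is not None and all((r + f) not in A
                             for F, A in zip(F_sets, A_sets) for f in F)
```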
Assume that $I=(E,L,D)$ has no solution. Let $\mu$ be any valuation over the
variables of $\hat{E}$. Let $g_{i}=\mu(\hat{e}_{i})$ for $i\in[1,m]$ and
$\sigma_{i}=\sigma(g_{1}\dots g_{i})$ for all $i\in[0,m]$. Suppose that $\mu$
does not satisfy the conditions in $J$, say $\sigma_{i}\neq\sigma_{j}$ for
some $(i,j)\in J$. Then by Lemma 5.2 and Lemma C.3 we know that
$f_{0}g_{1}f_{1}\dots g_{m}f_{m}\neq 1$. Furthermore, since
$t^{-r}\sigma_{i}t^{r}\neq t^{-r}\sigma_{j}t^{r}$ we can also apply Lemma 5.2
to the product $(t^{-r}g_{1}t^{r})\dots(t^{-r}g_{m}t^{r})$ and we obtain
$f_{0}(t^{-r}g_{1}t^{r})f_{1}\dots(t^{-r}g_{m}t^{r})f_{m}\neq 1$. Conjugating
with $t^{r}$ yields
$\tensor*[^{t^{r}\\!\\!}]{{f}}{{}_{0}}g_{1}\tensor*[^{t^{r}\\!\\!}]{{f}}{{}_{1}}\dots
g_{m}\tensor*[^{t^{r}\\!\\!}]{{f}}{{}_{m}}\neq 1$.
Now let us assume that $\mu$ satisfies all conditions in $J$, i.e.
$\sigma_{i}=\sigma_{j}$ for all $(i,j)\in J$. Similar to (10) we have
$\tensor*[^{t^{r}\\!\\!}]{{f}}{{}_{0}}g_{1}\tensor*[^{t^{r}\\!\\!}]{{f}}{{}_{1}}\dots
g_{m}\tensor*[^{t^{r}\\!\\!}]{{f}}{{}_{m}}=\tensor*[^{\sigma_{0}t^{r}\\!\\!}]{{f}}{{}_{0}}\prod_{i=1}^{m}\tensor*[^{\sigma_{i-1}}]{{\tau(g_{i})}}{}\tensor*[^{\sigma_{i}t^{r}\\!\\!}]{{f}}{{}_{i}}.$
(11)
By definition of $J$, for each $j\in[0,m]$ which occurs in $J$ there exists
$i\in[0,n-1]$ with $\sigma_{i}=\sigma_{j}$. Therefore
$F=\bigcup_{i=0}^{m}\mathsf{supp}(\tensor*[^{\sigma_{i}t^{r}\\!\\!}]{{f}}{{}_{i}})\subseteq\bigcup_{i=0}^{n-1}\\{\sigma_{i}t^{r+j}\mid
j\in[0,N-1]\\}.$
Consider the following distance function on $H$: define $\|g,h\|=|j|$ if
$g^{-1}h=t^{j}$ for some $j\in\mathbb{Z}$ and otherwise $\|g,h\|=\infty$. We
will prove that there exists a set $U\subseteq H$ such that
$\tau(\mu(\hat{E}))(h)\neq 1$ for all $h\in U$, $|U|\geq n+1$ and $\|g,h\|\geq
N$ for all $g\neq h\in U$. Since there are at most $n$ elements in $F$ with
pairwise distance $\geq N$ there must be an element $h\in U\setminus F$
satisfying
$\tau(\tensor*[^{t^{r}\\!\\!}]{{f}}{{}_{0}}g_{1}\tensor*[^{t^{r}\\!\\!}]{{f}}{{}_{1}}\dots g_{m}\tensor*[^{t^{r}\\!\\!}]{{f}}{{}_{m}})(h)=\prod_{i=1}^{m}\tensor*[^{\sigma_{i-1}}]{{\tau(g_{i})}}{}(h)=\tau(g_{1}\dots g_{m})(h)=\tau(\mu(\hat{E}))(h)\neq 1,$
where the first equality holds by (11).
Let us now construct such a set $U$. Let $\pi_{\mu,E}=\pi_{1}\dots\pi_{n}$ be
the induced factorized walk on $E$. By Condition 2 we know that
$\pi_{1}\dots\pi_{n}$ must be a loop, and therefore $\mu(E)=1$. Furthermore,
$\mu$ satisfies all loop constraints in $L$. Since $I$ has no solution, $\mu$
must violate a disjointness constraint in $D$. Recall that $I$ is
orthogonalized and therefore
$|\mathsf{supp}(\pi_{i})\cap\mathsf{supp}(\pi_{j})|\leq 1$ for all $(i,j)\in
D$. Let $K$ be the set of indices $k\in[1,d]$ where
$\mathsf{supp}(\pi_{i_{k}})\cap\mathsf{supp}(\pi_{j_{k}})\neq\emptyset$ and
let $\mathsf{supp}(\pi_{i_{k}})\cap\mathsf{supp}(\pi_{j_{k}})=\\{p_{k}\\}$.
For all $k\in K$ and $s\in S_{k}$ we have
$\mu(E_{i_{k},a,s}\cdot E_{j_{k},b,s}\cdot E_{i_{k},a^{-1},s}\cdot
E_{j_{k},b^{-1},s})=(\big{[}p_{k}t^{s}\mapsto[a,b]\big{]},1)$
in the semidirect product notation where $[h\mapsto g]$ is the function $H\to
G$ mapping $h$ to $g$ and all other elements in $H$ to $1$. For all $k\notin
K$ and $s\in S_{k}$ we have
$\mu(E_{i_{k},a,s}\cdot E_{j_{k},b,s}\cdot E_{i_{k},a^{-1},s}\cdot
E_{j_{k},b^{-1},s})=1.$
This implies
$\tau(\mu(\hat{E}))=\prod_{k\in K}\prod_{s\in
S_{k}}\big{[}p_{k}t^{s}\mapsto[a,b]\big{]}.$ (12)
Let $T_{k}=\\{p_{k}t^{s}\mid s\in S_{k}\\}$ for all $k\in K$. First notice
that
$(n+d)^{2k}N\leq\|p_{k}t^{s},p_{k}t^{s^{\prime}}\|\leq(n+d)^{2k+1}N$ (13)
for all $s,s^{\prime}\in S_{k}$ with $s\neq s^{\prime}$, by definition of
$S_{k}$. We claim that $|T_{k}\cap T_{k^{\prime}}|\leq 1$ for all
$k,k^{\prime}\in K$ with $k\neq k^{\prime}$. Take $k,k^{\prime}\in K$ with
$k<k^{\prime}$. Any two elements $g,h\in T_{k}\cap T_{k^{\prime}}$ with $g\neq
h$ satisfy
$(n+d)^{2k^{\prime}}N\leq\|g,h\|\leq(n+d)^{2k+1}N,$
which contradicts $k<k^{\prime}$. Therefore we can take an arbitrary $k\in K$
and let $U=T_{k}\setminus\bigcup_{k^{\prime}\in
K\setminus\\{k\\}}T_{k^{\prime}}$. Then $\tau(\mu(\hat{E}))(h)\neq 1$ for all
$h\in U$ by (12) and
$|U|\geq|T_{k}|-|K|+1\geq n+d-|K|+1\geq n+1$
Furthermore, by (13) any two elements in $U$ have distance at least $N$.
## Appendix D Proofs from Section 6
### D.1 The discrete Heisenberg group
Let us show (I), (II), and (III). Recall that
$A=\begin{pmatrix}1&1&0\\\ 0&1&0\\\ 0&0&1\end{pmatrix},\quad B=\begin{pmatrix}1&0&0\\\ 0&1&1\\\ 0&0&1\end{pmatrix},\quad C=\begin{pmatrix}1&0&1\\\ 0&1&0\\\ 0&0&1\end{pmatrix}$
and note that
$\begin{pmatrix}1&a&c\\\ 0&1&b\\\
0&0&1\end{pmatrix}\begin{pmatrix}1&a^{\prime}&c^{\prime}\\\ 0&1&b^{\prime}\\\
0&0&1\end{pmatrix}=\begin{pmatrix}1&a+a^{\prime}&c^{\prime}+ab^{\prime}+c\\\
0&1&b+b^{\prime}\\\ 0&0&1\end{pmatrix}$
for any $a,b,c\in\mathbb{Z}$. It is easy to see that $AC=CA$ and $BC=CB$.
Moreover, one readily checks that the two maps $\alpha,\beta\colon
H_{3}(\mathbb{Z})\to\mathbb{Z}$ where $\alpha$ projects to the top-middle and
$\beta$ to the right-middle entry are homomorphisms. They satisfy
$\alpha(A)=1$, $\alpha(B)=\alpha(C)=0$ and $\beta(B)=1$,
$\beta(A)=\beta(C)=0$. From this, it follows directly that (I) and (II) hold:
Indeed, if $A^{i}C^{j}=A^{i^{\prime}}C^{j^{\prime}}$, then applying $\alpha$
yields $i=i^{\prime}$ and thus $C^{j}=C^{j^{\prime}}$; since $C$ has infinite
order, we obtain $j=j^{\prime}$. A similar proof establishes (II). Let us now
show (III).
###### Lemma D.1.
$A^{i}B^{j}A^{-i^{\prime}}B^{-j^{\prime}}=C^{k}$ is equivalent to
$i=i^{\prime}$, $j=j^{\prime}$, and $k=ij$.
###### Proof D.2.
Note that $A^{i}=\begin{pmatrix}1&i&0\\\ 0&1&0\\\ 0&0&1\end{pmatrix}$,
$B^{j}=\begin{pmatrix}1&0&0\\\ 0&1&j\\\ 0&0&1\end{pmatrix}$,
$A^{-i}=\begin{pmatrix}1&-i&0\\\ 0&1&0\\\ 0&0&1\end{pmatrix}$, and
$B^{-j}=\begin{pmatrix}1&0&0\\\ 0&1&-j\\\ 0&0&1\end{pmatrix}$. Therefore,
$A^{i}B^{j}A^{-i}B^{-j}=\begin{pmatrix}1&i&ij\\\ 0&1&j\\\
0&0&1\end{pmatrix}A^{-i}B^{-j}=\begin{pmatrix}1&0&ij\\\ 0&1&j\\\
0&0&1\end{pmatrix}B^{-j}=\begin{pmatrix}1&0&ij\\\ 0&1&0\\\
0&0&1\end{pmatrix}.$ (14)
Now suppose $A^{i}B^{j}A^{-i^{\prime}}B^{-j^{\prime}}=C^{k}$. Applying
$\alpha$ yields $i=i^{\prime}$ and applying $\beta$ yields $j=j^{\prime}$.
Hence, we have $A^{i}B^{j}A^{-i}B^{-j}=C^{k}$. By Equation 14, we get
$\begin{pmatrix}1&0&ij\\\ 0&1&0\\\
0&0&1\end{pmatrix}=C^{k}=\begin{pmatrix}1&0&k\\\ 0&1&0\\\ 0&0&1\end{pmatrix}$
and thus $ij=k$. Conversely, if $k=ij$, then Equation 14 shows that
$A^{i}B^{j}A^{-i}B^{-j}=C^{k}$.
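As a quick numerical sanity check of Equation 14 (not part of the original argument; the helper functions `inv` and `power` are our own illustration), the following Python sketch verifies $A^{i}B^{j}A^{-i}B^{-j}=C^{ij}$ in exact integer arithmetic for small exponents:

```python
import numpy as np

# Generators of the discrete Heisenberg group H_3(Z), as defined above.
A = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 1]], dtype=object)
B = np.array([[1, 0, 0], [0, 1, 1], [0, 0, 1]], dtype=object)
C = np.array([[1, 0, 1], [0, 1, 0], [0, 0, 1]], dtype=object)
I3 = np.eye(3, dtype=object)

def inv(M):
    # Inverse of an upper unitriangular matrix [[1,a,c],[0,1,b],[0,0,1]],
    # read off from the multiplication rule above: (a,b,c)^{-1} = (-a,-b,ab-c).
    a, b, c = M[0, 1], M[1, 2], M[0, 2]
    return np.array([[1, -a, a * b - c], [0, 1, -b], [0, 0, 1]], dtype=object)

def power(M, k):
    # M**k for k in Z, by repeated multiplication (fine for small |k|).
    base = M if k >= 0 else inv(M)
    out = I3.copy()
    for _ in range(abs(k)):
        out = out @ base
    return out

for i in range(-3, 4):
    for j in range(-3, 4):
        lhs = power(A, i) @ power(B, j) @ power(A, -i) @ power(B, -j)
        assert (lhs == power(C, i * j)).all()
print("A^i B^j A^-i B^-j = C^(ij) holds for all |i|, |j| <= 3")
```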
### D.2 Solvable Baumslag-Solitar groups
Recall that for $p,q\in\mathbb{Z}\setminus\\{0\\}$ the Baumslag-Solitar group
$\mathsf{BS}(p,q)$ is the group presented by
$\mathsf{BS}(p,q):=\langle a,t\mid ta^{p}t^{-1}=a^{q}\rangle.$
In the following we consider Baumslag-Solitar groups of the form
$\mathsf{BS}(1,q)$ for $q\geq 2$. These groups are solvable and linear. It is
well-known (see, for example, [33]) that $\mathsf{BS}(1,q)$ is isomorphic to
the subgroup $T(q)$ of $\mathsf{GL}(2,\mathbb{Q})$ consisting of the upper
triangular matrices
$\begin{pmatrix}q^{k}&u\\\ 0&1\end{pmatrix}$
with $k\in\mathbb{Z}$ and $u\in\mathbb{Z}[\tfrac{1}{q}]$. Here
$\mathbb{Z}[\tfrac{1}{q}]$ denotes the set of all rational numbers with finite
$q$-ary expansion, hence $\mathbb{Z}[\tfrac{1}{q}]=\\{m\cdot q^{n}\mid
m,n\in\mathbb{Z}\\}$. We identify $\mathsf{BS}(1,q)$ with this subgroup, so
that we obtain:
$a=\begin{pmatrix}1&1\\\ 0&1\end{pmatrix},\quad t=\begin{pmatrix}q&0\\\ 0&1\end{pmatrix}.$ (15)
Observe that given two elements $\begin{pmatrix}q^{k}&u\\\
0&1\end{pmatrix},\begin{pmatrix}q^{\ell}&v\\\ 0&1\end{pmatrix}\in T(q)$, their
product is
$\begin{pmatrix}q^{k}&u\\\ 0&1\end{pmatrix}\begin{pmatrix}q^{\ell}&v\\\
0&1\end{pmatrix}=\begin{pmatrix}q^{k+\ell}&u+q^{k}\cdot v\\\
0&1\end{pmatrix}.$
By Lemma 2.1 in [20] the transformation of an element of $\mathsf{BS}(1,q)$ given as a word over the generators $a,t$ into matrix form, and vice versa, can be done in polynomial time (even in $\mathrm{TC}^{0}$). Thus, for algorithmic
purposes, we can represent elements of $\mathsf{BS}(1,q)$ by matrices of
$T(q)$ where the entries are given in $q$-ary encoding.
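As an illustration of the word-to-matrix direction of this conversion (a minimal sketch, not the $\mathrm{TC}^{0}$ procedure of [20]; exact rationals stand in for the $q$-ary encoding, and the letters `'A'`, `'T'` for inverses are our own convention):

```python
from fractions import Fraction

def bs_matrix(word, q):
    """Evaluate a word over {'a', 'A', 't', 'T'} (capitals denote inverses)
    in BS(1,q), representing [[q^k, u], [0, 1]] by the pair (q^k, u)."""
    gens = {
        'a': (Fraction(1), Fraction(1)),     # a    = [[1, 1], [0, 1]]
        'A': (Fraction(1), Fraction(-1)),    # a^-1
        't': (Fraction(q), Fraction(0)),     # t    = [[q, 0], [0, 1]]
        'T': (Fraction(1, q), Fraction(0)),  # t^-1
    }
    qk, u = Fraction(1), Fraction(0)         # identity matrix
    for c in word:
        ql, v = gens[c]
        # Product rule from above: (q^k, u)(q^l, v) = (q^(k+l), u + q^k v).
        qk, u = qk * ql, u + qk * v
    return qk, u

# The defining relation t a t^-1 = a^q, checked for q = 3:
assert bs_matrix('taT', 3) == bs_matrix('aaa', 3)
```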
In this section, we prove Proposition 6.1. To this end, we use an extension of
Büchi arithmetic $(\mathbb{Z},+,V_{q})$ [7]. Our extension will have the set
$\mathbb{Z}[\tfrac{1}{q}]=\\{m\cdot q^{n}\mid m,n\in\mathbb{Z}\\}$ as its
domain. Let $V_{q}\colon\mathbb{Z}[\tfrac{1}{q}]\to\mathbb{Z}[\tfrac{1}{q}]$ be the function such that $V_{q}(x)$ is the largest power of $q$ dividing $x$ for any $x\in\mathbb{Z}[\tfrac{1}{q}]$. Here we say that
$a\in\mathbb{Z}[\tfrac{1}{q}]$ _divides_ $b\in\mathbb{Z}[\tfrac{1}{q}]$ if
there is a $k\in\mathbb{Z}$ such that $ak=b$. Furthermore, for each
$\ell\in\mathbb{Z}$ we define the binary predicate $S_{\ell}$ on
$\mathbb{Z}[\tfrac{1}{q}]$ such that $xS_{\ell}y$ is fulfilled if and only if
there exist $r\in\mathbb{Z}$ and $s\in\mathbb{N}$ such that $x=q^{r}$ and
$y=q^{r+\ell s}$. Then for any $N\in\mathbb{N}$ we define the structure
$\mathcal{B}_{N}:=(\mathbb{Z}[\tfrac{1}{q}],+,\geq,0,1,V_{q},(S_{\ell})_{-N\leq\ell\leq
N}).$
###### Lemma D.3.
For each given $N$, the first-order theory of $\mathcal{B}_{N}$ is decidable.
###### Proof D.4.
We show that $\mathcal{B}_{N}$ is an automatic structure which implies that
$\mathsf{Th}(\mathcal{B}_{N})$ is decidable (see [18]). We can write each
element of $\mathbb{Z}[\tfrac{1}{q}]$ as $\pm\sum_{i=-r}^{r-1}a_{i}q^{i}$
where $r\geq 1$ and $a_{i}\in[0,q-1]$. This representation is unique if we
choose $r$ minimal. We encode such an element with the word
$\pm\begin{pmatrix}a_{-1}\\\ a_{0}\end{pmatrix}\begin{pmatrix}a_{-2}\\\
a_{1}\end{pmatrix}\cdots\begin{pmatrix}a_{-r}\\\ a_{r-1}\end{pmatrix}$
over the alphabet $\\{+,-\\}\cup[0,q-1]^{2}$. Then all the predicates of
$\mathcal{B}_{N}$ are clearly regular for each $N\in\mathbb{N}$.
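To make the encoding concrete, here is a small Python sketch (our own illustration; `encode(m, n, q)` stands for the element $m\cdot q^{n}$, and the output follows the digit-pair representation just described):

```python
def encode(m, n, q):
    """Encode x = m * q**n (m, n integers) as a sign and the word of digit
    pairs (a_{-1}, a_0)(a_{-2}, a_1)...(a_{-r}, a_{r-1}) with
    x = +- sum_{i=-r}^{r-1} a_i q^i, padding with zeros up to a common r."""
    sign = '+' if m >= 0 else '-'
    m = abs(m)
    digits = {}                 # position i -> digit a_i
    pos = n
    while m:
        digits[pos] = m % q     # base-q digits of |m|, shifted by n
        m //= q
        pos += 1
    r = max([1] + [-p for p in digits] + [p + 1 for p in digits])
    word = [(digits.get(-i, 0), digits.get(i - 1, 0)) for i in range(1, r + 1)]
    return sign, word

# 13 * 4**-1 = 1*4^-1 + 3*4^0, so the word is the single pair (1, 3):
print(encode(13, -1, 4))  # ('+', [(1, 3)])
```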
We will also need some preparatory observations. Note that in
$\mathcal{B}_{N}$ we can define the set of integers. It holds that
$x\in\mathbb{Z}$ if and only if $V_{q}(x)\geq 1$. This means that in the
following we can quantify over $\mathbb{Z}$ and therefore also over
$\mathbb{N}$. We will make use of the following extension of Lemma 4.5 in
[20]:
###### Lemma D.5.
Given the $q$-ary representation of a number $r\in\mathbb{Z}[\tfrac{1}{q}]$ we
can effectively construct a formula over $(\mathbb{Z}[\tfrac{1}{q}],+)$ which
expresses $y=r\cdot x$ for $x,y\in\mathbb{Z}[\tfrac{1}{q}]$.
###### Proof D.6.
Let $r=\sum_{-k\leq t\leq\ell}a_{t}q^{t}$ with $k,\ell\geq 0$ and
$a_{t}\in[0,q-1]$. We have that $y=rx$ if and only if $q^{k}y=r^{\prime}x$
where $r^{\prime}:=\sum_{t=0}^{k+\ell}a_{t-k}q^{t}\in\mathbb{Z}$. Since
$q^{k}$ and $r^{\prime}$ are constant integers, we can use iterated addition
to express $q^{k}y$ and $r^{\prime}x$ by formulas over
$(\mathbb{Z}[\tfrac{1}{q}],+)$.
We are now prepared to prove Proposition 6.1.
###### Proof D.7 (Proof of Proposition 6.1).
It remains to show that for each finite subset $F\subseteq\mathsf{BS}(1,q)$,
the structure $(\mathsf{BS}(1,q),(\xrightarrow{g})_{g\in
F},(\xrightarrow{g}\mathrel{\vphantom{\to}{}^{*}})_{g\in F})$ can be
interpreted in $\mathcal{B}_{N}$ for some $N$. We represent each element
$\begin{pmatrix}q^{k}&u\\\ 0&1\end{pmatrix}$ of $\mathsf{BS}(1,q)$ by the pair
$(q^{k},u)$ over $\mathbb{Z}[\tfrac{1}{q}]$. Moreover, we set $N$ to be the
maximal value of $|k|$ for which there is an element
$\begin{pmatrix}q^{k}&u\\\ 0&1\end{pmatrix}$ in $F$ for some
$u\in\mathbb{Z}[\tfrac{1}{q}]$.
We now use the idea of the proof of Theorem 4.1 in [20] to interpret the
structure $(\mathsf{BS}(1,q),(\xrightarrow{g})_{g\in
F},(\xrightarrow{g}\mathrel{\vphantom{\to}{}^{*}})_{g\in F})$ in
$\mathcal{B}_{N}$.
Let us fix an element $g=\begin{pmatrix}q^{\ell}&v\\\ 0&1\end{pmatrix}\in
T(q)$. For all $\begin{pmatrix}q^{k}&u\\\
0&1\end{pmatrix},\begin{pmatrix}q^{m}&w\\\ 0&1\end{pmatrix}\in T(q)$ we have
that
$\begin{pmatrix}q^{k}&u\\\
0&1\end{pmatrix}\xrightarrow{g}\begin{pmatrix}q^{m}&w\\\ 0&1\end{pmatrix}$
is fulfilled if and only if
$q^{m}=q^{k}q^{\ell}\wedge w=u+q^{k}v$
which can be expressed by formulas over $\mathcal{B}_{N}$ for all
$N\in\mathbb{N}$ by Lemma D.5. To express
$\xrightarrow{g}\mathrel{\vphantom{\to}{}^{*}}$, we use the following
observation:
$\begin{split}\begin{pmatrix}q^{k}&u\\\
0&1\end{pmatrix}\begin{pmatrix}q^{\ell}&v\\\
0&1\end{pmatrix}^{s}&=\begin{pmatrix}q^{k}&u\\\
0&1\end{pmatrix}\begin{pmatrix}q^{\ell s}&v+q^{\ell}v+\dots+q^{(s-1)\ell}v\\\
0&1\end{pmatrix}\\\ &=\begin{pmatrix}q^{k}&u\\\
0&1\end{pmatrix}\begin{pmatrix}q^{\ell s}&v\frac{q^{\ell s}-1}{q^{\ell}-1}\\\
0&1\end{pmatrix}=\begin{pmatrix}q^{k+\ell s}&u+v\frac{q^{k+\ell
s}-q^{k}}{q^{\ell}-1}\\\ 0&1\end{pmatrix}\end{split}$
for $\ell\neq 0$ and $s\in\mathbb{N}$. Then for $\ell\neq 0$ and all
$\begin{pmatrix}q^{k}&u\\\ 0&1\end{pmatrix},\begin{pmatrix}q^{m}&w\\\
0&1\end{pmatrix}\in T(q)$ we have that
$\begin{pmatrix}q^{k}&u\\\
0&1\end{pmatrix}\xrightarrow{g}\mathrel{\vphantom{\to}{}^{*}}\begin{pmatrix}q^{m}&w\\\
0&1\end{pmatrix}$
is fulfilled if and only if
$\exists x\in\mathbb{Z}[\tfrac{1}{q}]\colon\exists s\in\mathbb{N}\colon
q^{m}=q^{k+\ell s}\wedge w=u+vx\wedge(q^{\ell}-1)x=q^{m}-q^{k}$
where we can quantify $x$ over $\mathbb{Z}[\tfrac{1}{q}]$ since $\frac{q^{\ell
s}-1}{q^{\ell}-1}$ is an integer and therefore $q^{k}\frac{q^{\ell
s}-1}{q^{\ell}-1}\in\mathbb{Z}[\tfrac{1}{q}]$. By Lemma D.5 we have that
$w=u+vx$ and $(q^{\ell}-1)x=q^{m}-q^{k}$ are expressible by formulas over
$\mathcal{B}_{N}$ for all $N\in\mathbb{N}$. Moreover, we can express $\exists
s\in\mathbb{N}\colon q^{m}=q^{k+\ell s}$ by $q^{k}S_{\ell}q^{m}$ with
$|\ell|\leq N$ and therefore in $\mathcal{B}_{N}$.
If $\ell=0$, it holds that $g^{s}=\begin{pmatrix}1&sv\\\ 0&1\end{pmatrix}$.
Thus, we have that $\begin{pmatrix}q^{k}&u\\\
0&1\end{pmatrix}\xrightarrow{g}\mathrel{\vphantom{\to}{}^{*}}\begin{pmatrix}q^{m}&w\\\
0&1\end{pmatrix}$ is equivalent to
$\exists s\in\mathbb{N}\colon w=u+q^{k}sv\wedge q^{m}=q^{k}$
which holds if and only if
$\exists t\in\mathbb{N}\colon V_{q}(t)\geq q^{k}\wedge w=u+vt\wedge
q^{m}=q^{k}$
since we can set $t=q^{k}s$. Again by Lemma D.5 we can express $w=u+vt$ by a
formula over $\mathcal{B}_{N}$ for all $N\in\mathbb{N}$.
## Appendix E Exponent equations in Baumslag-Solitar groups
The following unpublished proof is due to Moses Ganardi and Markus Lohrey
[13]. With their kind permission, we include the proof for the convenience of
the reader.
###### Theorem E.1.
$\mathsf{ExpEq}(\mathsf{BS}(1,2))$ is undecidable.
###### Proof E.2.
Consider the function $P\colon(x,y)\mapsto x\cdot 2^{y}$ on the natural
numbers. Büchi and Senger [31, Corollary 5] have shown that the existential
fragment of the first-order theory of $(\mathbb{N},+,P)$ is undecidable. We
reduce this fragment to $\mathsf{ExpEq}(\mathsf{BS}(1,2))$. For this, it
suffices to consider an existentially quantified conjunction of formulas of the following form: $x\cdot 2^{y}=z$, $x+y=z$, and $x<y$ (the latter allows us to express inequalities and thus negations). We replace each of these formulas
by an equivalent exponent equation over $\mathsf{BS}(1,2)$. For this we use
the two generators $a$ and $t$ as in Equation 15. The formula $x+y=z$ is
clearly equivalent to $a^{x}a^{y}=a^{z}$, i.e., $a^{x}a^{y}a^{-z}=1$. The
formula $x<y$ is equivalent to $a^{x}a^{z}aa^{-y}=1$ for some fresh variable
$z$. Finally, $x\cdot 2^{y}=z$ is equivalent to $t^{y}a^{x}t^{-y}a^{-z}=1$.
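To spell out the last equivalence in the matrix representation (15) with $q=2$, note that
$t^{y}a^{x}t^{-y}a^{-z}=\begin{pmatrix}2^{y}&0\\\ 0&1\end{pmatrix}\begin{pmatrix}1&x\\\ 0&1\end{pmatrix}\begin{pmatrix}2^{-y}&0\\\ 0&1\end{pmatrix}\begin{pmatrix}1&-z\\\ 0&1\end{pmatrix}=\begin{pmatrix}1&x\cdot 2^{y}-z\\\ 0&1\end{pmatrix},$
which is the identity matrix if and only if $x\cdot 2^{y}=z$.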
# Re-defining the concept of hydration water in water under soft confinement
Fausto Martelli, IBM Research Europe, Hartree Centre, Daresbury, WA4 4AD, United Kingdom; Department of Physics and CNR Institute of Complex Systems, Sapienza University of Rome, P.le Aldo Moro 2, 00185 Roma, Italy

Carles Calero, Secció de Física Estadística i Interdisciplinària–Departament de Física de la Matèria Condensada, Universitat de Barcelona, C. Martí i Franquès 1, 08028 Barcelona, Spain; Institut de Nanociència i Nanotecnologia (IN2UB), Universitat de Barcelona, C. Martí i Franquès 1, 08028 Barcelona, Spain

Giancarlo Franzese, Secció de Física Estadística i Interdisciplinària–Departament de Física de la Matèria Condensada, Universitat de Barcelona, C. Martí i Franquès 1, 08028 Barcelona, Spain; Institut de Nanociència i Nanotecnologia (IN2UB), Universitat de Barcelona, C. Martí i Franquès 1, 08028 Barcelona, Spain
###### Abstract
Water shapes and defines the properties of biological systems. Therefore,
understanding the nature of the mutual interaction between water and
biological systems is of primary importance for a proper assessment of
biological activity and the development of new drugs and vaccines. A handy way
to characterize the interactions between biological systems and water is to
analyze their impact on water density and dynamics in the proximity of the
interfaces. It is well established that water bulk density and dynamical
properties are recovered at distances in the order of $\sim 1$ nm from the
surface of biological systems. Such evidence led to the definition of
_hydration_ water as the thin layer of water covering the surface of
biological systems and affecting-defining their properties and functionality.
Here, we review some of our latest contributions showing that phospholipid
membranes affect the structural properties and the hydrogen bond network of
water at greater distances than the commonly evoked $\sim 1$ nm from the
membrane surface. Our results imply that the concept of hydration water should
be revised or extended, and pave the way to a deeper understanding of the
mutual interactions between water and biological systems.
## I Introduction
Water is a peculiar substance characterized by a plethora of dynamic and thermodynamic anomalies that make it the only liquid capable of sustaining life as we know it WaterandLife ; Chaplin ; Ball:2008aa . For example, its very large heat capacity allows water to absorb and release heat at much slower rates than similar materials such as silica. As a consequence, water acts as a thermostat that regulates the temperature of our bodies and, overall, of our planet, sheltering us from otherwise lethal daily and seasonal temperature variations. Water also has a very low compressibility, which allows blood to be pumped, without crystallizing, down to the most peripheral and narrow vessels that deliver oxygen. Moreover, water stabilizes proteins and DNA, restricting the access to unfolded states, and shapes the basic structure of cell membranes. Cell membranes are very complex systems made of a large number of
components, including proteins, cholesterol, glycolipids and ionic channels
among others, but their framework is provided by phospholipid molecules
forming a bilayer. Being solvated by water, the hydrophilic heads of the
phospholipid molecules are exposed to the surrounding solvent molecules, while
the hydrophobic tails are arranged side by side hiding from water and
extending in the region between two layers of heads. Stacked membranes are
important constituents in several biological structures, including the endoplasmic reticulum and the Golgi apparatus, which process proteins for their use in animal cells, or the thylakoid compartments in chloroplasts and cyanobacteria, involved
in photosynthesis. When in contact with membranes, water modulates their
fluidity and mediates the interaction between different membranes as well as
between membranes and solutes (ions, proteins, DNA, etc.), regulating cell-
membrane tasks such as, e.g., transport and signaling functions hamley . A
thin layer of water, with a thickness of only $\sim 1$ nm corresponding to a
couple of molecular diameters, hydrates biological systems and is therefore
called _biological_ , or _hydration_ water Zhong:2011ab . So far, it has been
thought that hydration water is directly responsible for the proper
functioning of biological systems Chaplin , although many issues are still
open Zhong:2011ab .
Several experimental techniques have been adopted to study the interaction
between hydration water molecules and membrane surfaces. Insights on the
orientation of water molecules and on their order have been obtained from
vibrational sum frequency generation spectroscopy and nuclear magnetic
resonance (NMR) experiments konig_1994 ; chen_2010 . Evidence of enhanced hydrogen bonds (HBs) established between water molecules and the phospholipid heads has been described in experimental investigations based on infrared spectroscopy binder_2003 ; chen_2010 . Furthermore, far-infrared spectroscopy has shown that resonance mechanisms entangle the motion of phospholipid bilayers with their hydration water dangelo_2017 . Such complex interactions between water molecules and hydrophilic heads cause perturbations in the dynamical properties of water. NMR spectroscopy has reported a breakdown of isotropy between the lateral and normal diffusion of water molecules with respect to the surface Volke1994 ; Wassall_BiophysJ1996 , and rotational
dynamics has been the focus of several experimental investigations using
ultrafast vibrational spectroscopy Zhao_Fayer_JACS2008 , terahertz
spectroscopy Tielrooij_BiophysJ2009 and neutron scattering Trapp_JCP2010 .
Atomistic molecular dynamics (MD) simulations have also been widely adopted to
inspect the microscopic details of hydration water (with the obvious drawback
of relying on a particular simulation model). The dynamical slow-down of water
dynamics due to the interaction with phospholipid membranes reported in NMR
experiments Volke1994 ; Wassall_BiophysJ1996 has been confirmed in MD
simulations Berkowitz_chemrev2006 ; Bhide_JCP2005 . MD simulations have also
provided important insights on the molecular ordering and rotation dynamics in
water solvating phospholipid headgroups Berkowitz_chemrev2006 ; pastor_1994 ,
as well as in quantifying –introducing correlation functions– the decay of
water orientational degrees of freedom Zhang_Berkowitz_JPhysChemB2009 ;
Gruenbaum_JChemPhys_2011 ; calero_2016 ; martelli_fop ; 2018arXiv181101911S .
Here we review some of our recent computational investigations on water nanoconfined between stacked phospholipid membranes, reporting evidence that the membrane affects the structural properties of water and its hydrogen bond network at distances much larger than the often invoked $\sim 1$ nm. Our results are the outcome of MD simulations of water nanoconfined in phospholipid membranes. Water is described via a modified TIP3P model tip3p_1 . As a typical model membrane, we have used 1,2-Dimyristoyl-sn-glycero-3-phosphocholine (DMPC) lipids. DMPC is a phospholipid with a hydrophobic tail formed of two myristoyl chains and a hydrophilic head containing a phosphate and a choline, where the N atom interacts mostly with water oxygen atoms and the P atom interacts mostly with the water hydrogen atoms.
Choline-based phospholipids are ubiquitous in cell membranes and commonly used
in drug-targeting liposomes hamley . In Fig. 1 we report a representative
snapshot of the water-DMPC system.
Figure 1: Representative snapshot of a molecular system composed of water molecules (sticks) and DMPC leaflets (blur fields).
As observed in Ref. martelli_fop , at ambient conditions the density profile of water molecules as a function of the distance from the average position of the phosphorus atoms in the DMPC lipids displays no layered
structure. In fact, due to the thermal fluctuations, it forms a smeared out
interface that is $\sim 1$ nm wide, based on the phospholipid head density
martelli_fop . However, the interface forms instantaneous layers that can be
revealed if, following Pandit et al. pandit_algorithm , we consider the
instantaneous local distance $\xi$, defined as the distance of each water
molecule from the closest cell of a Voronoi tessellation centered on the
phosphorous and nitrogen atoms of the phospholipid heads (Fig. 2) calero_2016
.
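As a rough illustration of how such a local distance can be computed, the following Python sketch approximates $\xi$ by the distance from each water oxygen to the nearest P or N headgroup atom under periodic boundaries; this is a simplification of the Voronoi-based construction of Pandit et al. (in particular, it drops the sign distinguishing interior from exterior water), and the coordinates below are synthetic stand-ins.

```python
import numpy as np
from scipy.spatial import cKDTree

def local_distance(water_O, head_PN, box):
    """Approximate instantaneous local distance: for each water oxygen,
    the distance to the nearest P/N headgroup atom in an orthorhombic
    periodic box (unsigned, unlike the full Voronoi-based xi).

    water_O : (Nw, 3) water oxygen coordinates of one frame
    head_PN : (Nh, 3) phosphorus/nitrogen coordinates of the same frame
    box     : (3,)   box edge lengths
    """
    tree = cKDTree(head_PN, boxsize=box)   # periodic k-d tree
    dist, _ = tree.query(water_O)          # nearest-neighbor distances
    return dist

# Synthetic stand-in frame (coordinates in Angstrom):
rng = np.random.default_rng(0)
box = np.array([60.0, 60.0, 80.0])
waters = rng.uniform(0.0, 1.0, (1000, 3)) * box
heads = rng.uniform(0.0, 1.0, (120, 3)) * box
print(local_distance(waters, heads, box)[:5])
```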
Figure 2: Density profile $\rho$ of water molecules as a function of the
instantaneous local distance $\xi$ from the membrane interface at ambient
conditions ($T=303$ K, average pressure 1 atm, corresponding to bulk density
$\rho=1$ g/cm3) and at a hydration level, defined as the number of water molecules per phospholipid, of $\omega=34$. Water at $\xi<0$ belongs to the interior of the membrane, while that at $\xi>5$ Å has the same density as the bulk and can be associated with the exterior of the membrane. The density of
water at $0<\xi<5$Å shows a clear maximum revealing the presence of a
hydration layer calero_2016 . At higher density we observe more than one
hydration layer.
## II Dynamics
Numerical simulations have shown that hydration water suffers a dramatic slow
down not just in stacked phospholipids Rog_ChemPhysLett2002 ;
Lopez_JPhysChemB2004 ; Berkowitz_chemrev2006 ; Bhide_JCP2005 ;
Zhang_Berkowitz_JPhysChemB2009 ; Gruenbaum_JChemPhys_2011 ; pandit_algorithm ;
Yang_JCP2014 ; calero_2016 ; martelli_fop ; calero_membranes_2019 , but also
in proteins and sugars camisasca_2018 ; iorio_2019 ; iorio_2019_2 ;
iorio_2019_3 ; iorio_2020 . Insights on the dynamical slow down can be
obtained by inspecting the translational diffusion ($D_{\parallel}$) and
rotational dynamics of hydration water molecules. The diffusion coefficient
parallel to the surface of the membrane can be obtained from the linear regime reached by the mean squared displacement at sufficiently long times, via the Einstein relation:
$D_{\parallel}\equiv\lim_{t\rightarrow\infty}\frac{\left<\left|\mathbf{r}_{\parallel}(t)-\mathbf{r}_{\parallel}(0)\right|^{2}\right>}{4t}$
(1)
where $\mathbf{r}_{\parallel}(t)$ is the projection of the center of mass of a
water molecule on the plane of the membrane and the angular brackets
$\left<...\right>$ indicate average over all water molecules and time origins.
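For concreteness, here is a minimal sketch of how Eq. (1) can be evaluated from a simulation trajectory (the array shapes, units, fitting window, and the self-check on an ideal random walk are illustrative assumptions, not the protocol of the cited works):

```python
import numpy as np

def lateral_diffusion(r_par, dt, fit_start=0.5, lag_stride=8):
    """Estimate D_parallel via Eq. (1).

    r_par      : (n_frames, n_mol, 2) unwrapped in-plane positions, nm
    dt         : time between frames, ns
    fit_start  : fraction of the lag-time range where the linear
                 (Fickian) regime is assumed to begin
    lag_stride : subsampling of lag times, to keep the loop cheap
    """
    n = r_par.shape[0]
    lags = np.arange(1, n, lag_stride)
    msd = np.empty(lags.size)
    for i, lag in enumerate(lags):
        disp = r_par[lag:] - r_par[:-lag]             # all time origins
        msd[i] = np.mean(np.sum(disp ** 2, axis=-1))  # molecules + origins
    t = lags * dt
    sel = t >= fit_start * t[-1]                      # long-time window
    slope = np.polyfit(t[sel], msd[sel], 1)[0]
    return slope / 4.0                                # <|dr|^2> = 4 D t in 2D

# Self-check on an ideal 2D random walk with D = 0.1 nm^2/ns:
rng = np.random.default_rng(1)
dt, D = 0.01, 0.1
steps = rng.normal(0.0, np.sqrt(2 * D * dt), (2000, 200, 2))
print(lateral_diffusion(np.cumsum(steps, axis=0), dt))  # ~0.1
```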
Using the DMPC as a model phospholipid membrane, Calero et al. calero_2016
have found that water molecules are slowed down by an order of magnitude when
the hydration level $\omega$ is reduced from 34 to 4 (Fig.3).
Figure 3: Dynamics of water molecules between stacked phospholipid bilayers
at different hydration level $\omega$ at ambient conditions: Diffusion
coefficient $D_{\parallel}$ of water molecules projected on the plane of the
membrane (black circles, left vertical axis); Rotational relaxation time
$\tau_{rot}$ of all the water in the system (red squares, right vertical
axis). Lines are guides for the eyes.
This result is in qualitative agreement with experimental and other
computational studies Wassall_BiophysJ1996 ; Zhao_Fayer_JACS2008 ;
Tielrooij_BiophysJ2009 ; Zhang_Berkowitz_JPhysChemB2009 ;
Gruenbaum_JChemPhys_2011 . In particular, in conditions of very low hydration,
the parallel diffusion is as low as $0.13$ nm2/ns because water molecules
interact with both the upper and the lower leaflet, hence remaining trapped.
Increasing the level of hydration $\omega$, Calero et al. calero_2016 have shown that $D_{\parallel}$ increases monotonically. This observation suggests that, as the physical separation between the leaflets increases, the hydration water screens the electrostatic interactions between water and the leaflets.
The decreasing interaction of hydration water with the two leaflets can also be observed by inspecting the rotational dynamics of water molecules via the rotational dipolar correlation function:
$C_{\hat{\mu}}(t)\equiv\left<\hat{\mu}(t)\cdot\hat{\mu}(0)\right>$ (2)
where $\hat{\mu}(t)$ is the direction of the water dipole vector at time $t$
and $\left<...\right>$ denotes the ensemble average over all water molecules
and time origins. Such quantity is related to terahertz dielectric relaxation
measurements used to probe the reorientation dynamics of water
Tielrooij_BiophysJ2009 . From Eq. 2 it is possible to define the relaxation
time
$\tau_{rot}\equiv\int_{0}^{\infty}C_{\hat{\mu}}(t)dt$ (3)
which is independent of the analytical form of the correlation function $C_{\hat{\mu}}(t)$. As for $D_{\parallel}$, the rotational dynamics speeds up
with the degree of hydration (Fig.3), confirming that the interactions between
hydration water and the two leaflets modify the overall water dynamics
calero_2016 ; Zhao_Fayer_JACS2008 ; Tielrooij_BiophysJ2009 ;
Zhang_Berkowitz_JPhysChemB2009 ; Gruenbaum_JChemPhys_2011 .
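A corresponding sketch for Eqs. (2) and (3), estimating $\tau_{rot}$ by numerical integration of the dipole autocorrelation; the synthetic trajectory of randomly kicked unit vectors below is only a stand-in for simulation data:

```python
import numpy as np

def rotational_relaxation(mu, dt):
    """tau_rot from Eqs. (2)-(3): integrate C_mu(t), truncating the
    integral at the trajectory length.

    mu : (n_frames, n_mol, 3) unit dipole vectors
    dt : time between frames
    """
    n = mu.shape[0]
    c = np.ones(n)
    for lag in range(1, n):
        # <mu(t0 + lag) . mu(t0)> over molecules and time origins
        c[lag] = np.mean(np.sum(mu[lag:] * mu[:-lag], axis=-1))
    return dt * (np.sum(c) - 0.5 * (c[0] + c[-1]))  # trapezoidal rule

# Stand-in data: rotational diffusion mimicked by small random kicks.
rng = np.random.default_rng(2)
n_frames, n_mol = 1500, 100
mu = np.empty((n_frames, n_mol, 3))
mu[0] = rng.normal(size=(n_mol, 3))
mu[0] /= np.linalg.norm(mu[0], axis=-1, keepdims=True)
for i in range(1, n_frames):
    v = mu[i - 1] + 0.08 * rng.normal(size=(n_mol, 3))
    mu[i] = v / np.linalg.norm(v, axis=-1, keepdims=True)
print(rotational_relaxation(mu, dt=0.01))
```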
To account for the rapidly relaxing signals associated with the reorientation
of water molecules in experiment Righini_PRL2007 , Tielrooij et al.
Tielrooij_BiophysJ2009 assumed the existence of three water species near a
membrane: (i) bulk-like, with characteristic rotational correlation times of a
few picoseconds; (ii) fast, with rotational correlation times of a fraction of
picosecond; and (iii) irrotational, with characteristic times much larger than
10 ps. Calero et al. calero_2016 show that it is possible to analyze their simulations using this assumption (Fig. 4); however, the resulting fitting parameters for the correlation times do not show any regular behavior as a function of $\omega$, questioning the existence of fast water near a membrane.
This possibility, on the other hand, cannot be ruled out completely, as it
could be related to the presence of heterogeneities, such as those associated
with water molecules with a single hydrogen bond to a lipid at low hydration
Righini_PRL2007 .
Figure 4: Partition of membrane hydration water into fast (squares),
irrotational (triangles) and bulk-like (circles) water molecules, following
the assumption in Ref. Tielrooij_BiophysJ2009 , as a function of the hydration
level $\omega$. As discussed in Ref. calero_2016 , the assumption of the
existence of fast water leads to inconsistencies.
Nevertheless, Calero et al. calero_2016 have shown that a consistent explanation of the changes in the dynamics as a function of $\omega$ is reached by observing that, upon increasing the hydration level, water first completely fills the interior of the membrane and next accumulates in layers in the exterior region. The authors rationalized this observation by noting that the inner-membrane (or interior) water has an extremely slow dynamics as a
consequence of the robustness of water-lipid HBs. Moreover, the water-water
HBs within the first hydration layer of the membrane slow down, with respect
to bulk water, due to the reduction of hydrogen bond-switching at low
hydration. As shown by Samatas et al. 2018arXiv181101911S , these effects are
emphasized when the temperature decreases: water near the membrane has a
glassy-like behavior when $T=288.6$ K, with the rotational correlation time of
vicinal water, within 3 Å from the membrane, comparable to that of bulk water
$\approx 30$ K colder, but with a much smaller stretched exponent, suggesting
a larger heterogeneity of relaxation modes.
Figure 5: Dynamics of water molecules between stacked phospholipid bilayers
as a function of the instantaneous local distance $\xi$ from the membrane
interface at ambient conditions and hydration level $\omega=34$: Diffusion
coefficient $D_{\parallel}$ of water molecules projected on the plane of the
membrane (black circles, left vertical axis); Rotational relaxation time
$\tau_{rot}$ of all the water in the system (red squares, right vertical
axis). Lines are guides for the eyes. Vertical dashed lines at $\xi=0$ and 5 Å
mark the interfaces between the water within the interior of the membrane, the
first hydration layer of water, and the water exterior to the membrane. The
interface at $\xi=5$ Å separates bound water and unbound water.
Both the translational and rotational dynamics of water molecules are strongly
determined by their local distance to the membrane. Calero and Franzese have
recently shown calero_membranes_2019 that the hydration water within the
interior of the membrane is almost immobile, the first hydration layer, with
$\xi\leq 5$ Å, is _bound_ to the membrane, and the exterior water is _unbound_
(Fig. 5). The authors have identified the existence of an interface between
the bound and the unbound hydration water at which the dynamics undergoes an
abrupt change: bound water rotates 63% less than bulk and diffuses 85% less
than bulk, while unbound water only 20% and 17%, respectively.
Figure 6: Average number of HBs $\langle n_{\rm HB}\rangle$ as a function of
the instantaneous local distance $\xi$ from the membrane interface at ambient
conditions and hydration level $\omega=34$. Full circles represent the HBs
formed between water molecules, and empty circles the HBs formed by water
molecules with selected groups of the phospholipid. Vertical dashed lines at
$\xi=0$ and 5 Å mark the interfaces between the interior, the first hydration
layer, and the exterior water of the membrane.
To rationalize the origin of the three dynamically different populations of
water, (i) immobile within the membrane interior, (ii) bound in the first
hydration layer, and (iii) unbound at the exterior of the membrane, Calero and
Franzese have turned their attention to the investigation of the hydrogen
bonds (HBs, Fig. 6). Based on the calculation of the average number of HBs
$\langle n_{\rm HB}\rangle$, they have found that the inner water is an
essential component of the membrane that plays a structural role with HBs
bridging between lipids, consistent with previous results Pasenkiewicz-
Gierula:1997aa ; lopez_2004 . In particular, Calero and Franzese have found
that, in the case of a fully hydrated membrane, $\approx 45\%$ of the water-lipid HBs in the interior of the membrane are bridging between two lipids. The fraction of bridging HBs, with respect to the total number of water-lipid HBs, reduces to approximately 1/4 within the first hydration shell. Hence,
also the bound water has a possible structural function for the membrane and,
in this sense, can be considered as another _constituent_ of the membrane that
regulates its properties and contributes to its stability. Moreover, they
found that unbound hydration water has no water-lipid HBs. However, even at a hydration level as low as $\omega=4$, they find that $\approx 25\%$ of inner water, and $\approx 18\%$ in the first hydration shell, is unbound, i.e. has only water-water HBs. This could explain why the existence of _fast_ water in weakly hydrated phospholipid bilayers was hypothesized in previous works Tielrooij_BiophysJ2009 . Nevertheless, as already discussed, Calero and Franzese clearly showed that unbound water is definitely
In order to further rationalize the interactions between hydration water and
phospholipid heads, we computed martelli_fop the correlation function
$C_{\bm{\delta}}(t)\equiv\left<\bm{\delta}(t)\cdot\bm{\delta}(0)\right>$ (4)
where $\bm{\delta}$ is the N-O vector or the P-HO vector. Interestingly, we
have found that the P-HO vector has a longer lifetime compared to the N-O
vector, indicating that the interactions between P and water hydrogen atoms
are stronger than the interactions between N and O martelli_fop . This
conclusion is consistent with the observation that the P-HO two body pair
correlation function is characterized by a first peak at a distance shorter
than the N-O two body pair correlation function (Fig. 7 upper panel).
Figure 7: Upper panel: Two body pair correlation function computed for the
N-O and the P-HO vectors in black and red, respectively. Middle and lower
panels: _Slow_ and _very slow_ relaxation times $\tau_{1}$ and $\tau_{2}$,
respectively, computed for the $\mu$ (green open circles) and for the OH (blue
open squares) vectors, as a function of the distance from the surface. The
magenta lines define the average position of the water-lipid fluctuating
surfaces.
Starting from the observation that the N-O and the P-HO vectors have different lifetimes, we hypothesized that such a difference can have an effect on the rotational dynamics of hydration water. In particular, we supposed that the rotations around the water dipole moment $\hat{\mu}$ differ from the rotations around the $\overrightarrow{\rm OH}$ vector. In Ref. martelli_fop , we computed $C_{\hat{\mu}}$ and $C_{\overrightarrow{\rm OH}}$ and fit the two correlation functions with a double exponential, with characteristic times $\tau_{1}$ and $\tau_{2}$, which intuitively reveal the effects of the electrostatic interactions on the slow relaxation. We calculated the
relaxation times $\tau_{1}$ and $\tau_{2}$ in bins parallel to the membrane
surface and centered at increasing distances from the membrane (Fig. 7, middle
and lower panels).
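For reference, this kind of double-exponential analysis can be sketched with a standard least-squares fit; the functional form below follows the description in the text, while the data, amplitudes, and time constants are synthetic:

```python
import numpy as np
from scipy.optimize import curve_fit

def double_exp(t, a, tau1, tau2):
    # C(t) = a exp(-t/tau1) + (1 - a) exp(-t/tau2),
    # with tau1 the "slow" and tau2 the "very slow" relaxation time.
    return a * np.exp(-t / tau1) + (1.0 - a) * np.exp(-t / tau2)

t = np.linspace(0.0, 50.0, 500)  # ps, illustrative
rng = np.random.default_rng(3)
c_data = double_exp(t, 0.7, 2.0, 30.0) + 0.005 * rng.normal(size=t.size)
popt, _ = curve_fit(double_exp, t, c_data, p0=(0.5, 1.0, 10.0))
print("a = %.2f, tau1 = %.2f ps, tau2 = %.2f ps" % tuple(popt))
```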
We found that the _slow_ relaxation time, $\tau_{1}$, is orders of magnitude
smaller than the _very slow_ relaxation time, $\tau_{2}$. In particular,
approaching the membrane, the $\overrightarrow{\rm OH}$ vector relaxes more slowly than the $\hat{\mu}$ vector. This is in agreement with the finding that the P-HO interaction is stronger than the N-O interaction. This result can be rationalized by observing that the lipids have different (delocalized) charges on the N-heads and on the P-functional groups and that these charges affect the rotation of water around the two vectors in different ways.
The slowing down of the rotational degrees of freedom (Fig. 7) decreases upon
increasing the distance from the membrane surface. In particular, at distances
of $\sim 1.3$ nm from the membrane the relaxation times for the $\hat{\mu}$
vector and for the $\overrightarrow{\rm OH}$ vector become indistinguishable,
as expected in bulk water.
In view of the very high values of the relaxation times in the proximity of
the membrane, we hypothesized that the electrostatic interactions with
phospholipid heads might cause a slow down in the diffusivity of water
molecules comparable –and hence measurable– with that of water at low
temperatures martelli_fop . To check our hypothesis, we measured the standard
displacement of water molecules in terms of bond units (BU), defined as the
distance traveled by water molecules normalized with respect to the oxygen-
oxygen mean distance (which is a temperature-independent quantity), and we
compared it with the same quantity for water at supercooled conditions. For a
large enough simulated time, a standard displacement of $<1$ BU would
correspond to water molecules rattling in the cage formed by their nearest
neighbors. This case would represent a liquid in which the translational
degrees of freedom are frozen.
We found that, in the proximity of the membrane surface, water molecules
suffer from a dramatic slow down of $\sim 60\%$ with respect to the value of
bulk water at biological thermodynamic conditions. Moreover, upon increasing
the distance from the lipid heads, we found that bulk diffusivity is recovered
at $\sim 1$ nm, the domain of definition of hydration water. Considering that
the diffusivity of water close to the lipid heads is comparable with that of
water at supercooled conditions, we concluded that such a slow-down could be
interpreted effectively as a reduction of the thermal energy of water
martelli_fop .
## III Structure
As presented above, the dynamics of bulk water is recovered approximately at
$\sim 1.3$ nm away from a membrane. However, as we will discuss in the
following, the structure analysis of hydration water martelli_fop shows how
long-range interactions spread at much larger distances, opening a completely
new scenario for the understanding of water-membrane coupling. In particular,
we analyzed martelli_fop how the water intermediate range order (IRO) changes
moving away from a membrane.
Modifications in the connectivity of disordered materials induce effects that
extend beyond the short range. This is, for example, the case for amorphous
silicon and amorphous germanium www . Likewise, at specific thermodynamic
conditions, water acquires structural properties that go beyond the
tetrahedral short range and are comparable to that of amorphous silicon
martelli_hyperuniformity .
In Ref. martelli_fop we adopted a sensitive local order metric (LOM) introduced by Martelli et al. martelli_LOM to characterize local order in the condensed phase. The LOM provides a measure of how far the local neighborhood of a particle $j$ ($j=1,\dots,N$) is from the ground state. For each particle $j$, the LOM maximizes the spatial overlap between the local neighborhood of $j$, made of $M$ neighbours $i$ with coordinates $\mathbf{P}_{i}^{j}$
($i=1,\dots,M$), and a reference structure –the ground state– with coordinates
$\mathbf{R}^{j}$. The LOM is defined as:
$S(j)\equiv\max_{\theta,\phi,\psi;\mathcal{P}}\prod_{i=1}^{M}\exp\left(-\frac{\left|\mathbf{P}_{i_{\mathcal{P}}}^{j}-\mathbf{R}^{j}\right|^{2}}{2\sigma^{2}M}\right)$
(5)
where $(\theta,\phi,\psi)$ are the Euler angles for a given orientation of the
reference structure $\mathbf{R}^{j}$, $i_{\mathcal{P}}$ are the indices of the
neighbours $i$ under the permutation $\mathcal{P}$, $\sigma$ is a parameter
representing the spread of the Gaussian domain. The parameter $\sigma$ is
chosen such that the tails of the Gaussian functions stretch to half of the
O-O distance in the second coordination shell of $j$ in the structure
$\mathbf{R}^{j}$. As reference $\mathbf{R}^{j}$, we choose the ground state
for water at ambient pressure, i.e. cubic ice. The site-average of Eq. (5),
$S_{C}\equiv\frac{1}{N}\sum_{j=1}^{N}S(j),$ (6)
is by definition the _score function_ and gives a global measure of the
symmetry in the system with respect to the reference structure. The LOM and the score function have provided physical insights into a variety of systems martelli_searching ; martelli_unravelling_2019 ; santra_bnnt ; hence they are particularly suitable also to characterize martelli_fop and quantify martelli_acsnano how far the membrane affects the water structural properties.
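A simplified sketch of Eq. (5) is given below. Instead of the exhaustive optimization over Euler angles and permutations of Ref. martelli_LOM , it alternates an optimal assignment (Hungarian algorithm) with a best-fit rotation (Kabsch, via SciPy); this common heuristic is not guaranteed to find the global maximum for strong distortions, and the reference pattern and test neighborhood here are synthetic.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.transform import Rotation

def lom(P, R, sigma, n_iter=5):
    """Simplified local order metric, Eq. (5): overlap of M neighbors
    P (M, 3) with a reference pattern R (M, 3), both centered on the
    particle. Alternates permutation and rotation optimization."""
    M = len(P)
    perm = np.arange(M)
    rot = Rotation.identity()
    for _ in range(n_iter):
        Rrot = rot.apply(R)
        # cost[i, j]: squared distance of rotated reference site i to neighbor j
        cost = ((Rrot[:, None, :] - P[None, :, :]) ** 2).sum(-1)
        perm = linear_sum_assignment(cost)[1]
        rot, _ = Rotation.align_vectors(P[perm], R)  # best R -> P rotation
    d2 = ((P[perm] - rot.apply(R)) ** 2).sum()
    return np.exp(-d2 / (2.0 * sigma ** 2 * M))      # product of Gaussians

# A mildly rotated, shuffled, noisy copy of the reference scores near 1.
rng = np.random.default_rng(4)
R_ref = rng.normal(size=(12, 3))
small = Rotation.from_euler('zyx', [15.0, 10.0, 5.0], degrees=True)
P_test = small.apply(R_ref)[rng.permutation(12)] + 0.02 * rng.normal(size=(12, 3))
print(lom(P_test, R_ref, sigma=0.5))  # close to 1 for this mild distortion
```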
We found martelli_fop that the overall score function, Eq. (6), for water
tends to increase at very short distances from the membrane and is comparable
to bulk at $\gtrsim 1.3$ nm away from the membrane (Fig. 8 upper panel). The IRO enhancement is not dramatic, but cannot be simply discarded.
Hence, both the dynamics and the IRO are affected as far as $\approx 1.3$ nm
away from the membrane. Therefore, in Ref. martelli_fop we proposed that the
dynamical slow-down and the enhancement of the IRO are two effects related to
each other. We suggested that the dynamical slow-down corresponds to an
effective reduction of thermal noise that, ultimately, allows water molecules
to adjust in slightly more ordered spatial configurations in the proximity of
the membrane.
Figure 8: Score function $S_{C}$ for water between DMPC membrane leaflets.
Vertical magenta lines indicate the average positions of the water-lipid
interfaces. The majority of water is, on average, in the range between $z=1.5$
and 7 nm. Upper panel: $S_{C}$ of water molecules belonging to a bin centered
at distance $z$ from the center of the lipid bilayer at 0 and with a bin-width
of 1/10 of the entire system. Vertical dashed orange lines mark the region
where $S_{C}$ approaches the value in bulk water. Lower panel: Water reaches
the $S_{C}$ bulk value only at $\approx 2.8$ nm away from the water-lipid
interfaces, as shown by the difference $\Delta P(S_{C})$ between the
probability density distribution $P(S_{C})$ for bulk water and that at a
specific distance $\delta$ from the membrane. Here we show $\Delta P(S_{C})$
for $\delta z=2.0$ nm (red line), with the bin centered at $z=3.5$ nm, and for
$\delta z=2.8$ nm (green line), with the bin centered at $z=4.3$ nm.
Moving away from the membrane, at distances $\gtrsim 1.3$ nm, $S_{C}$ seems to
reach a plateau, suggesting that a convergence to the bulk value should fall
into the distance domain of hydration water. To check this, we computed the
probability density distribution $P(S_{C})$ of Eq. (6) in the bin centered at
$\delta z=2$ nm away from the surfaces ($z=3.5$ nm), and we compared it with
the distribution of $S_{C}$ computed in a box of bulk water at the same
thermodynamic conditions (Fig. 8 lower panel).
Surprisingly, the two distributions _do not_ overlap. This result indicates
that the membrane perturbs the structure of water at the intermediate range
of, at least, $\sim 1.6$ nm, considering half bin-width. This distance is much
larger than that defining hydration water.
We found martelli_acsnano an overlap between the bulk-water distribution and
that for the confined water only if between the two membrane leaflets there is
enough water to reach distances as far as $\delta z=2.8$ nm from the membrane.
Such a remarkable result indicates that the membrane affects the structural
properties of water at least as far as $\sim 2.4$ nm, accounting for the $\sim
0.4$ nm half bin-width. This distance can be considered twice the domain of
definition of hydration water.
Therefore, the definition of hydration water, as well as its role, should be extended to account for the repercussions of the membrane on the water structure, or revised in order to re-define its concept.
In order to properly frame our observations into a consistent picture, in
addition to our structural analysis of the membrane effects on the water-O
positions, we have next analyzed the topology of the hydrogen bond network (HBN), which provides another measure of the IRO, but from the perspective of
the HBs.
## IV Network topology
The properties of network-forming materials are governed by the underlying
network of bonds martelli_rings . However, the topology of this network is
very rarely investigated because of the difficulty of such analysis.
A possible approach is through the _ring statistics_. It consists of defining, characterizing and counting the closed loops that are made of links (or bonds) between the vertices of the network. The ring statistics allows one to study, in particular, the network topology of amorphous systems leroux_ring ; yuan_efficient , clathrate hydrates chihaia_molecular , and chalcogenide glasses blaineau_vibrational . It is also an essential tool to characterize
continuous random networks www ; wooten_structure ; djordjevic_computer ;
barkema_event ; barkema_high ; hudson_systematic .
After some hesitant debut in the field of water martonak_2004 ; martonak_2005
, ring statistics has been embraced more and more as a tool to study water
properties, starting from its application by Martelli et al. to characterize
the transformations in the bulk water HBN near the liquid-liquid critical
point martelli_nature . Since then, ring statistics has been an essential tool
for investigating the properties of water in its liquid phase santra_2015 ;
martelli_rings ; camisasca_proposal , as well as its amorphous states
martelli_searching ; martelli_rings ; martelli_LOM , and for inspecting the
dynamics of homogeneous nucleation russo_2014 ; leoni_2019 ; fitzner_ice .
Based on the idea that the connectivity in network-forming materials governs their properties, we explored how the topology of the HBN changes when water is confined between phospholipid membranes martelli_acsnano . In fact, the HBN is what differentiates water from "simple" liquids pauling .
In water the HBN is directional. Hence, there are several ways of defining and counting rings. Martelli et al. showed that each of these possibilities carries a different, but complementary, physical meaning martelli_rings .
Here we use a definition for the HB that was initially introduced by Luzar and
Chandler chandler_HB and is common in the field. However, other definitions
are possible, due to our limited understanding of the HBs. Nevertheless, it
has been shown that all these definitions have a satisfactory qualitative
agreement over a wide range of thermodynamic conditions prada_2013 ;
shi_2018_2 .
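For reference, a minimal geometric criterion in the spirit of the Luzar-Chandler definition can be sketched as follows; the cutoffs (O-O distance below 3.5 Å, H-O-O angle below 30°) are the values commonly used with this definition, and periodic-image handling is omitted for brevity:

```python
import numpy as np

def is_hbonded(o_don, h_don, o_acc, r_cut=3.5, angle_cut=30.0):
    """Geometric HB criterion: donor-acceptor O-O distance below r_cut
    (Angstrom) and H-O(donor)...O(acceptor) angle below angle_cut (deg)."""
    d_oo = o_acc - o_don
    r_oo = np.linalg.norm(d_oo)
    if r_oo > r_cut:
        return False
    d_oh = h_don - o_don
    cosang = np.dot(d_oh, d_oo) / (np.linalg.norm(d_oh) * r_oo)
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))) < angle_cut

# Illustrative near-linear HB geometry (coordinates in Angstrom):
o1, h1 = np.array([0.0, 0.0, 0.0]), np.array([0.96, 0.0, 0.0])
o2 = np.array([2.8, 0.3, 0.0])
print(is_hbonded(o1, h1, o2))  # True
```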
Figure 9: Schematic representation of three possible ways of defining the
rings in the water directional network. In each case, we start counting from
the water molecules labeled as 1, with O atoms in solid brown and H atoms in
white, and we follow the directional HBs from H to O (arrows) along the HBN,
until we return to molecule 1 or until we exceeds 12 steps. We consider only
rings that cannot be decomposed into sub-rings. Top: A ring is formed only
when molecule 1 donates HBs (brown arrow). In the example, the shortest ring
is the hexagonal one (blue arrows). Center: A ring is formed when molecule 1
donates or accepts (brown arrows) HBs. In the example, the shortest ring is
the pentagonal ring (arrows). Bottom: Any ring formed by molecule 1 is
considered, starting from any of its HBs (brown arrows), without bond or ring-length constraints. In the example, there are a hexagonal and a
pentagonal ring. Martelli et al. adopted the latter definition in Ref.
martelli_acsnano .
In Fig. 9 we present three possible ways of defining rings in a directional
network, as in the case of water. The first (Fig. 9 Top) explicitly looks for
the shortest ring king starting from the molecule 1, when this molecule
donates one HB, regardless whether other molecules in the ring accept or
donate a bond. This definition emphasizes the intrinsic directional nature of
the HBN. The second definition (Fig. 9 Center) considers only the shortest
ring formed when molecule 1 can only accept a HB. The third definition (Fig. 9
Bottom), adopted by Martelli et al. martelli_rings , ignores both the
donor/acceptor nature of the starting molecule and the shortest-rings
restriction, leading to a higher number of rings. The reader can refer to the
original work martelli_rings for further details about the definitions and
their physical meaning in the case of bulk liquid and glassy water at several
thermodynamic conditions.
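To make the third definition concrete, the sketch below enumerates closed loops of up to 12 molecules on an undirected view of the HB network, counting each ring once by its vertex set; the test for primitive rings (discarding loops decomposable into sub-rings) and the donor/acceptor bookkeeping of the other two definitions are omitted for brevity, and the toy network is hypothetical.

```python
def count_rings(donates, max_len=12):
    """Enumerate closed loops (3 to max_len molecules) in an HB network,
    following HBs regardless of donor/acceptor role (third definition in
    Fig. 9). donates: dict molecule -> list of molecules it donates to."""
    nbrs = {m: set() for m in donates}
    for m, accepted in donates.items():
        for a in accepted:
            nbrs[m].add(a)
            nbrs.setdefault(a, set()).add(m)

    rings = set()

    def dfs(start, node, path):
        if len(path) > max_len:
            return
        for nxt in nbrs[node]:
            if nxt == start and len(path) >= 3:
                rings.add(frozenset(path))   # one entry per vertex set
            elif nxt not in path:
                dfs(start, nxt, path + [nxt])

    for start in nbrs:
        dfs(start, start, [start])
    return sorted(len(r) for r in rings)

# Toy network: a hexagon (1..6) plus a path 6-7-8-1 closing a 4-ring.
hb = {1: [2], 2: [3], 3: [4], 4: [5], 5: [6], 6: [1, 7], 7: [8], 8: [1]}
print(count_rings(hb))  # [4, 6, 8]; the 8-ring is non-primitive
```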
Figure 10: HBN ring statistics at a distance $z$ from the average position of
the fluctuating membrane and in bulk water. In both panels the sets of data
are for bulk water (open orange triangles), and $z=0.4$ nm (black dots), $1.2$
nm (red squares), $2.0$ nm (green diamonds), and $2.8$ nm (blue triangles).
Quantities at a given distance from the membrane are calculated in $0.8$ nm-
wide bins centered at $z$. Upper panel: Probability of having $n$-member
rings, $P(n)$. All $P(n)$ are normalized to unity and, therefore, do not
reflect the total number of rings of a given size. Lower panel: Percentage-
wise decomposition of the HBs per water molecule into acceptor (A) and donor (D) types. The $x$-axis labels $\textit{A}_{x}\textit{D}_{y}$ indicate the
number of acceptor ($\textit{A}_{x}$) and donor ($\textit{D}_{y}$) HBs,
respectively, of the configurations schematically represented in the plot
(with the oxygen of central water molecule in blue). For clarity we omit
combinations with minor contributions, e.g., $\textit{A}_{3}\textit{D}_{1}$,
$\textit{A}_{0}\textit{D}_{y}$, and $\textit{A}_{x}\textit{D}_{0}$.
The authors of Ref. martelli_acsnano computed the probability of having an $n$-membered ring, $P(n)$, as a function of the distance $z$ from the membrane.
They found that near the membrane the $P(n)$ is strikingly different from that
of bulk water (Fig. 10, upper panel). In particular, the distribution is
richer in hexagonal and shorter rings and is poorer in longer rings.
This result points towards two main conclusions: (i) For membrane-hydration
water, at a distance $z\leq 0.8$ nm, the HBN tends to be preferentially ice-
like, i.e., dominated by hexagonal rings. This observation is consistent with
the results, discussed in the previous sections, showing that membrane-vicinal
water is characterized by enhanced IRO and slower dynamics than bulk water.
(ii) The reduced number of longer rings in the hydration water is consistent
with the reduction of the overall dimensionality of the system due to the
interface. The membrane fluctuating surface reduces the available space for
the HBN in the first layer of hydration water.
All the $P(n)$ calculated at larger distances, $z>0.8$ nm, are quite different from that for the hydration water and gradually converge towards the bulk case upon increasing $z$. In particular, the probability of hexagonal rings decreases progressively, while longer rings become more and more frequent.
This sudden change in $P(n)$, between the first and the following bins, is
consistent with the results, discussed in the previous sections, demonstrating
the existence of a drastic change in structure and dynamics between bound
water, in the first hydration layer, and unbound water, away from the membrane
calero_membranes_2019 . Here, the border between the two regions is increased
from $\sim 0.5$ nm calero_membranes_2019 to $\sim 0.8$ nm due to the membrane
fluctuations, that are not filtered out in Ref. martelli_acsnano , and to the
spatial resolution, i.e., the bin-size, of the analysis.
The HBN of bulk water is finally recovered in the bin centered at $z=2.8$ nm
away from the membrane, i.e., for $z\geq 2.4$ nm. Remarkably, this distance corresponds to the one at which water recovers the IRO of bulk water martelli_fop , as discussed in the previous section. This important result
indicates a clear connection between the structural properties of water
molecules and the topology of the HBN, while further pointing toward the
necessity of revising the concept of hydration water.
The quality of the HBN, in terms of broken and intact HBs, is a tool of
fundamental importance to fully cast the topology of the HBN in a consistent
and complete physical framework. As a matter of fact, the presence of
coordination defects affects the fluidity of water and is directly related to
its capability of absorbing long-range density fluctuations
martelli_hyperuniformity . Therefore, the authors in Ref. martelli_acsnano
complemented their investigation of the HBN topology with the analysis of its
quality.
They decomposed the HBs per water molecule into acceptor-(A) and donor-(D)
types (Fig. 10 lower panel). They label as $\textit{A}_{2}\textit{D}_{2}$ a
water molecule with perfect coordination, i.e., donating two bonds and
accepting two bonds, and as $\textit{A}_{x}\textit{D}_{y}$ the others, accepting
$x$ and donating $y$ bonds. They focused their attention on the following
coordination configurations: $\textit{A}_{1}\textit{D}_{1}$,
$\textit{A}_{2}\textit{D}_{1}$, $\textit{A}_{1}\textit{D}_{2}$,
$\textit{A}_{2}\textit{D}_{2}$ and $\textit{A}_{3}\textit{D}_{2}$, as other
configurations do not contribute significantly.
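A sketch of this bookkeeping, given a list of directed water-water HBs as (donor, acceptor) pairs (variable names and the toy input are illustrative; water-lipid HBs are excluded, as in the analysis described here):

```python
from collections import Counter

def axdy_composition(hb_pairs, n_molecules):
    """Percentage-wise A_xD_y decomposition: for each molecule, count
    accepted (A) and donated (D) water-water HBs, then histogram the
    (x, y) configurations.

    hb_pairs : iterable of (donor, acceptor) molecule indices
    """
    acc, don = Counter(), Counter()
    for d, a in hb_pairs:
        don[d] += 1
        acc[a] += 1
    config = Counter((acc[m], don[m]) for m in range(n_molecules))
    return {f"A{x}D{y}": 100.0 * c / n_molecules
            for (x, y), c in sorted(config.items())}

# Toy input: molecule 0 donates twice and accepts twice (A2D2).
pairs = [(0, 1), (0, 2), (1, 0), (2, 0), (1, 2)]
print(axdy_composition(pairs, 3))
```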
First, they checked that in bulk water, at ambient conditions, the predominant
configuration is $\textit{A}_{2}\textit{D}_{2}$. For the TIP3P model of water,
this configuration accounts for $\sim 35\%$ of the total composition. The
second most dominant configuration in bulk is $\textit{A}_{1}\textit{D}_{2}$
with $\sim 20\%$, followed by $\textit{A}_{2}\textit{D}_{1}$ with $\sim 13\%$,
$\textit{A}_{1}\textit{D}_{1}$ with $\sim 12\%$ and, finally,
$\textit{A}_{3}\textit{D}_{2}$ accounting for less than $10\%$ (Fig. 10 lower
panel).
Such a distribution qualitatively reflects the distribution in _ab initio_ liquid water at the same thermodynamic conditions distasio_2014 . Hence, it suggests that classical potentials can carry proper physical information even in very complex systems such as biological interfaces.
In the proximity of the membrane, the network of HBs largely deviates from
that of bulk water, except for the under-coordinated configuration
$A_{\textit{2}}D_{\textit{1}}$. In particular, the coordination defects
$A_{\textit{1}}D_{\textit{1}}$ and $A_{\textit{1}}D_{\textit{2}}$ dominate the
distribution, with $\sim 25\%$ each, followed by the configurations
$A_{\textit{2}}D_{\textit{1}}$ and $A_{\textit{2}}D_{\textit{2}}$, with $\sim
15\%$ each, and a minor percentage of higher coordination defects
$A_{\textit{3}}D_{\textit{2}}$, with $\sim 3\%$.
However, the small percentage of perfectly coordinated configurations,
$A_{\textit{2}}D_{\textit{2}}$, near the membrane seems inconsistent with the
higher local order observed at the same distance martelli_fop ;
martelli_acsnano , and with the enhanced hexagonal ring-statistics of the HBN
martelli_acsnano , already discussed. Such a discrepancy is only apparent for
the following two reasons.
First, both the structural score function, $S_{C}$, and the ring statistics
are a measure of the IRO beyond the short range. On the contrary, the quality
of the HBN, in terms of defects, is a measure only of the short range order.
Second, the defects analysis includes only HBs between water molecules and does
not account for the strong HBs between water molecules and the phospholipid
headgroups. Instead, as discussed in the previous section
calero_membranes_2019 , $\sim 30\%$ of the water molecules in the first
hydration shell are bound to the membrane with at least one HB.
Away from the membrane, upon increasing the distance, Martelli et al.
martelli_acsnano observed a progressive enhancement of perfectly tetra-
coordinated configurations (Fig. 10 lower panel). They found a progressive
depletion of all coordination defects, up to recovering the bulk-water case at
distance $z\geq 2.4$ nm from the membrane, as for the probability distribution
of $S_{C}$ and the HBN topology.
The intriguing evidence that the under-coordinated defect
$A_{\textit{2}}D_{\textit{1}}$ remains almost constant at all distances is,
for the moment, not explained. Indeed, it could be due to a variety of
reasons, ranging from the presence of water-membrane HBs in the first hydration
layer, to the propagation of defects in bulk, and it would require a detailed
study.
## V Conclusions and future research directions
The results summarized in this short review question our common understanding
of hydration water near soft membranes, such as those in biological systems.
This water layer, often called bio-water, is usually considered as $\sim 1$ nm
wide and is regarded as the amount of water that directly shapes and defines the biological activity in proteins, cells, DNA, etc. Such a definition has been
proposed based on results, both from experiments and computations, showing
that the water dynamics and density are affected by the biological interface
within $\sim 1$ nm, while they recover the bulk behavior at larger distances.
In our calculations based on well-established models of water nanoconfined between DMPC membranes, instead, we found new evidence that indicates the need for a revised definition of hydration water. We reached this conclusion by focusing on physical quantities that had not been thoroughly, or not at all, considered before.
In particular, by considering the instantaneous local distance of water from
the membrane, Calero and Franzese were able to unveil the existence of a new
interface between bound and unbound water $\sim 0.5$ nm away from the
membrane-water interface calero_membranes_2019 . Bound water behaves like a
structural component of the membrane and has a translational and rotational
dynamics that is intermediate between water inside and outside the membrane
calero_membranes_2019 . Bound-water dynamics is dominated by the strong HB
with the membrane and is orders of magnitude slower than the unbound water.
The dynamics of bulk water is recovered only $\sim 1.3$ nm away from the
membrane.
However, we showed that the membrane interface has an effect on the structure
of the hydration water at a distance almost twice as large, up to, at least,
$\sim 2.4$ nm martelli_fop . We got such a result by analyzing how the water
structure, and its IRO, changes by moving away from the membrane. To this
goal, we evaluated the score function, a structural observable that quantifies
how close the local structure is to a reference configuration, in our case the
cubic ice. Also in this case, we found that water $\sim 1.3$ nm away from the
membrane has a small but measurable IRO enhancement. Hence, within this range
both the dynamics and the structure of hydration water undergo an effective
reduction of the thermal noise, which we interpret as a consequence of the
interaction with the membrane. Also, we have shown that different chemical
species constituting the lipid heads interact with water molecules with
different strengths, hence providing a rationale for the contributions to the
observed dynamical slow-down in the proximity of the surface martelli_fop .
Furthermore, Martelli et al. [43] analyzed the IRO from the HB perspective by
studying the HBN topology and its ring statistics. They found that water
within $\sim 0.8$ nm from the average position of the fluctuating membrane has
an excess of hexagonal and shorter rings, and a lack of longer rings, with
respect to bulk water. Moreover, the defect analysis of the HBN showed that
water in this $\sim 0.8$ nm-wide layer has a lack of tetra-coordinated and an
excess of bi-coordinated water molecules. This result does not contradict the
enhanced water IRO within the same layer, because the HBN defect analysis
measures only the short-range order and does not account for the
water-membrane HBs.
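As an illustration of the kind of ring counting involved (our own sketch implementing a simple shortest-path, King-type criterion; the cited works use their own ring definitions and normalizations):

```python
import networkx as nx
from collections import Counter

def shortest_ring_statistics(G, max_size=12):
    """For every node and every pair of its HB neighbours, count the size
    of the shortest ring closing through that node. Rings are counted once
    per (node, neighbour-pair), so totals are relative, not absolute."""
    counts = Counter()
    for node in G:
        nbrs = list(G.neighbors(node))
        H = G.subgraph(n for n in G if n != node)
        for i, a in enumerate(nbrs):
            for b in nbrs[i + 1:]:
                try:
                    path_len = nx.shortest_path_length(H, a, b)
                except nx.NetworkXNoPath:
                    continue
                size = path_len + 2  # path edges + the two edges via `node`
                if size <= max_size:
                    counts[size] += 1
    return counts

# Toy usage: a single hexagonal ring yields only size-6 entries.
print(shortest_ring_statistics(nx.cycle_graph(6)))
```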
Martelli et al. [43] also found a sudden change in the HBN around $0.8$ nm,
with ring statistics that approach those of the bulk. This result confirms the
qualitative difference between bound and unbound water [28].
The analysis of the HBN ring statistics and the HBN defects shows that the
membrane interface generates a perturbation of the ring statistics that
extends at least as far as $\sim 2.4$ nm [43]. These observations, therefore,
corroborate that the water structure is affected by the membrane interface up
to a distance at least twice as large as that usually associated with
hydration water.
All these findings should be taken into account when interpreting experimental
results and when developing membrane-water interaction potentials. They can
help in better understanding water in biological processes at large and, in
particular, those phenomena where hydration plays a role. From a more general
perspective, these calculations imply that the concept of hydration should be
revised in order to account for the results presented here.
Our conclusions call for further investigation of the relationship between
diseases, possibly promoted by extracellular matrix variations, e.g., of
hydration or ionic concentration, and the water HBN rearrangements. Examples
of such illnesses are cardiac disease and arterial hardening in healthy men
[68], or atherosclerosis and inflammatory signaling in endothelial cells [69].
Indeed, variations of ionic concentration drastically change the water HBN
structure [70] and dynamics [71], with an effect that is similar to an
increase of pressure [72]. Dehydration, instead, has consequences on the
dynamics and structure of the water near a membrane that resemble those of a
temperature decrease [28].
In particular, we foresee the extension of these calculations to out-of-
equilibrium cases. Indeed, it has recently been shown that the potency of
antimicrobial peptides may not be a purely intrinsic chemical property and,
instead, depends on the mechanical state of the target membrane [73], which
varies under normal physiological conditions.
###### Acknowledgements.
F.M. acknowledges support from the STFC Hartree Centre’s Innovation Return on
Research programme, funded by the Department for Business, Energy and
Industrial Strategy. C.C. and G.F. acknowledge the support of Spanish grant
PGC2018-099277-B-C22 (MCIU/AEI/ERDF), and G.F. the support by ICREA Foundation
(ICREA Academia prize).
## References
* (1) R. Lynden-Bell, S. Morris, J. Barrow, J. Finney, and C. Harper, editors. Water and Life. CRC Press (Boca Raton), 2010.
* (2) M. Chaplin. Do we underestimate the importance of water in cell biology? Nat Rev Mol Cell Biol, 7(11):861–866, 2006.
* (3) P. Ball. Water as a biomolecule. ChemPhysChem, 9(18):2677–2685, 2008.
* (4) W. Hamley. Introduction to Soft Matter. John Wiley and Sons, West Sussex, England, 2007.
* (5) D. Zhong, S. K. Pal, and A. H. Zewail. Biological water: A critique. Chemical Physics Letters, 503(1):1–11, 2011.
* (6) S. König, E. Sackmann, D. Richter, R. Zorn, C. Carlile, and T. Bayerl. Molecular dynamics of water in oriented dppc multilayers studied by quasielastic neutron scattering and deuterium-nuclear magnetic resonance relaxation. J. Chem. Phys., 100:3307–3316, 1994.
* (7) X. Chen, W. Hua, Z. Huang, and H. C. Allen. Interfacial water structure associated with phospholipid membranes studied by phase–sensitive vibrational sum frequency generation spectroscopy. J. Am. Chem. Soc., 132:11336–11342, 2010.
* (8) H. Binder. The molecular architecture of lipid membranes: New insights from hydration-tuning infrared linear dichroism spectroscopy. Appl. Spectrosc. Rev., 30:15–69, 2003.
* (9) G. D'Angelo, V. Conti Nibali, C. Crupi, S. Rifici, U. Wanderlingh, A. Paciaroni, F. Sacchetti, and C. Branca. Probing intermolecular interactions in phospholipid bilayers by far-infrared spectroscopy. J. Phys. Chem. B, 121:1204–1210, 2017.
* (10) F. Volke, S. Eisenblätter, S. Galle, and G. Klose. Dynamic properties of water at phosphatidylcholine lipid-bilayer surfaces as seen by deuterium and pulsed field gradient proton NMR. Chem. Phys. Lipids, 1194:2199–2205, 1994.
* (11) S. R. Wassall. Pulsed field gradient-spin echo nmr studies of water diffusion in a phospholipid model membrane. Biophys. J., 71:2724–2732, 1996.
* (12) W. Zhao, D. E. Moilanen, E. E. Fenn, and M. D. Fayer. Water at the surfaces of aligned phospholipid multibilayer model membranes probed with ultrafast vibrational spectroscopy. J. Am. Chem. Soc., 130(42):13927–13937, 2008.
* (13) K. J. Tielrooij, D. Paparo, L. Piatkowski, H. J. Bakker, and M. Bonn. Dielectric relaxation dynamics of water in model membranes probed by terahertz spectroscopy. Biophys. J., 97:2484–2492, 2009.
* (14) M. Trapp, T. Gutberlet, F. Juranyi, T. Unruh, B. Demé, M. Tehei, and J. Peters. Hydration dependent studies of highly aligned multilayer lipid membranes by neutron scattering. J. Chem. Phys., 133(16):164505, 2010.
* (15) M. L. Berkowitz, D. L. Bostick, and S. Pandit. Aqueous solutions next to phospholipid membrane surfaces: Insights from simulations. Chem. Rev., 106(4):1527–1539, 2006.
* (16) S. Y. Bhide and M. L. Berkowitz. Structure and dynamics of water at the interface with phospholipid bilayers. J. Chem. Phys., 123(22):224702, 2005.
* (17) R. W. Pastor. Molecular dynamics and monte carlo simulations of lipid bilayers. Curr. Opin. Struct. Biol., 4:486–492, 1994.
* (18) Z. Zhang and M. L. Berkowitz. Orientational dynamics of water in phospholipid bilayers with different hydration levels. J. Phys. Chem. B, 113(21):7676–7680, 2009.
* (19) S. M. Gruenbaum and J. L. Skinner. Vibrational spectroscopy of water in hydrated lipid multi–bilayers. i. infrared spectra and ultrafast pump–probe observables. J. Chem. Phys., 135(7):075101, 2011.
* (20) C. Calero, H. E. Stanley, and G. Franzese. Structural interpretation of the large slowdown of water dynamics at stacked phospholipid membranes for decreasing hydration level: All-atom molecular dynamics. Materials, 9(5):319, 2016.
* (21) F. Martelli, H.-Y. Ko, C. C. Borallo, and G. Franzese. Structural properties of water confined by phospholipid membranes. Front. Phys., 13:136801, 2018.
* (22) S. Samatas, C. Calero, F. Martelli, and G. Franzese. Biomembrane Simulations: Computational Studies of Biological Membranes, chapter Water between membranes: Structure and Dynamics, page 69. Number 4 in Series in Computational Biophysics. CRC Press, ISBN 9781498799799, 1st edition, June 2019.
* (23) W. L. Jorgensen, J. Chandrasekhar, and J. D. Madura. Comparison of simple potential functions for simulating liquid water. J. Chem. Phys., 79:926, 1983.
* (24) S. A. Pandit, D. Bostick, and M. L. Berkowitz. An algorithm to describe molecular scale rugged surfaces and its application to the study of a water/lipid bilayer interface. J. Chem. Phys., 119:2199–2205, 2003.
* (25) T. Róg, K. Murzyn, and M. Pasenkiewicz-Gierula. The dynamics of water at the phospholipid bilayer surface: A molecular dynamics simulation study. Chem. Phys. Lett., 352(5):323–327, 2002.
* (26) F. C. Lopez, S. O. Nielsen, M. L. Klein, and P. B. Moore. Hydrogen bonding structure and dynamics of water at the dimyristoylphosphatidylcholine lipid bilayer surface from a molecular dynamics simulation. J. Phys. Chem. B, 108(21):6603–6610, 2004.
* (27) J. Yang, C. Calero, and J. Martí. Diffusion and spectroscopy of water and lipids in fully hydrated dimyristoylphosphatidylcholine bilayer membranes. J. Chem. Phys., 140(10):104901, 2014.
* (28) C. Calero and G. Franzese. Membranes with different hydration levels: The interface between bound and unbound hydration water. J. Mol. Liq., 273:488–496, 2019.
* (29) G. Camisasca, A. Iorio, M. De Marzio, and P. Gallo. Structure and slow dynamics of protein hydration water. J. Mol. Liq., 268:903–910, 2018.
* (30) A. Iorio, G. Camisasca, M. Rovere, and P. Gallo. Characterization of hydration water in supercooled water–trehalose solutions: The role of the hydrogen bonds network. J. Chem. Phys., 151:044507, 2019.
* (31) A. Iorio, G. Camisasca, and P. Gallo. Glassy dynamics of water at interface with biomolecules: A mode coupling theory test. Sci. China Phys. Mech., 62:107011, 2019.
* (32) A. Iorio, G. Camisasca, and P. Gallo. Slow dynamics of hydration water and the trehalose dynamical transition. J. Mol. Liq., 282:617–625, 2019.
* (33) A. Iorio, M. Minozzi, G. Camisasca, M. Rovere, and P. Gallo. Slow dynamics of supercooled hydration water in contact with lysozyme: Examining the cage effect at different length scales. Philos. Mag, 100(20):1–14, 2020.
* (34) V. V. Volkov, D. J. Palmer, and R. Righini. Distinct water species confined at the interface of a phospholipid membrane. Phys. Rev. Lett., 99:078302, Aug 2007.
* (35) M. Pasenkiewicz-Gierula, Y. Takaoka, H. Miyagawa, K. Kitamura, and A. Kusumi. Hydrogen bonding of water to phosphatidylcholine in the membrane as studied by a molecular dynamics simulation: Location, geometry, and lipid–lipid bridging via hydrogen-bonded water. J. Phys. Chem. A, 101(20):3677–3691, 1997.
* (36) C. F. Lopez, S. O. Nielsen, M. L. Klein, and P. B. Moore. Hydrogen bonding structure and dynamics of water at the dimyristoylphosphatidylcholine lipid bilayer surface from a molecular dynamics simulation. J. Phys. Chem. B, 108:6603–6610, 2004.
* (37) F. Wooten, K. Winer, and D. Weaire. Computer generation of structural models of amorphous si and ge. Phys. Rev. Lett., 54:1392, 1985.
* (38) F. Martelli, S. Torquato, N. Giovambattista, and R. Car. Large-scale structure and hyperuniformity of amorphous ices. Phys. Rev. Lett., 119:136002, 2017.
* (39) F. Martelli, H.-Y. Ko, E. C. Oğuz, and R. Car. Local-order metric for condensed-phase environments. Phys. Rev. B, 97:064105, 2018.
* (40) F. Martelli, N. Giovambattista, S. Torquato, and R. Car. Searching for crystal-ice domains in amorphous ices. Phys. Rev. Materials, 2:075601, 2018.
* (41) F. Martelli. Unravelling the contribution of local structures to the anomalies of water: The synergistic action of several factors. J. Chem. Phys., 150:094506, 2019.
* (42) B. Santra, H.-Y. Ko, Y.-W. Yeh, F. Martelli, I. Kaganovich, Y. Raitses, and R. Car. Root-growth of boron nitride nanotubes: Experiments and Ab Initio simulations. Nanoscale, 10:22223, 2018.
* (43) F. Martelli, J. Crain, and G. Franzese. Network topology in water nanoconfined between phospholipid membranes. ACS Nano, 14:8616–8623, 2020.
* (44) M. Formanek and F. Martelli. Probing the network topology in network–forming materials: the case of water. AIP Adv., 10:055205, 2020.
* (45) S. Le Roux and P. Jund. Ring statistics analysis of topological networks: New approach and application to amorphous ges2 and sio2 systems. Comp. Mater. Sci., 49:70–83, 2010.
* (46) X. Yuan and A. N. Cormack. Efficient algorithm for primitive ring statistics in topological networks. Comp. Mater. Sci., 24:343–360, 2002.
* (47) V. Chihaia, S. Adams, and W. F. Kuhs. Molecular dynamics simulations of properties of a (001) methane clathrate hydrate surface. Chem. Phys., 317:208–225, 2005.
* (48) S. Blaineau and P. Jund. Vibrational signature of broken chemical order in a ges2 glass: A molecular dynamics simulation. Phys. Rev. B, 69:064201, 2004.
* (49) F. Wooten. Structure, odd lines and topological entropy of disorder of amorphous silicon. Acta Cryst. A, 58:346–351, 2002.
* (50) B. R. Djordjevic, M. F. Thorpe, and F. Wooten. Computer model of tetrahedral amorphous diamond. Phys. Rev. B, 52:5685, 1995.
* (51) G. T. Barkema and N. Mousseau. Event-based relaxation of continuous disordered systems. Phys. Rev. Lett., 77:4358, 1996.
* (52) G. T. Barkema and N. Mousseau. High-quality continuous random networks. Phys. Rev. B, 62:4985–4990, 2000.
* (53) T. S. Hudson and P. Harrowell. A systematic enumeration of local topological relaxation mechanisms in amorphous networks and their efficiency in network relaxation. J. Chem. Phys., 126:184502, 2007.
* (54) R. Martoňák, D. Donadio, and M. Parrinello. Polyamorphism of ice at low temperatures from constant-pressure simulations. Phys. Rev. Lett., 92:225702, 2004.
* (55) R. Martoňák, D. Donadio, and M. Parrinello. Evolution of the structure of amorphous ice: From low-density amorphous through high-density amorphous to very high-density amorphous ice. J. Chem. Phys., 122:134501, 2005.
* (56) J. C. Palmer, F. Martelli, Y. Liu, R. Car, A. Z. Panagiotopoulos, and P. G. Debenedetti. Metastable liquid–liquid transition in a molecular model of water. Nature, 510:385–388, 2014.
* (57) B. Santra, R. A. DiStasio Jr., F. Martelli, and R. Car. Local structure analysis in ab initio liquid water. Mol. Phys., 113:2829–2841, 2015.
* (58) G. Camisasca, D. Schlesinger, I. Zhovtobriukh, G. Pitsevich, and L. G. M. Pettersson. A proposal for the structure of high– and low–density fluctuations in liquid water. J. Chem. Phys., 151:034508, 2019.
* (59) J. Russo and H. Tanaka. Understanding water’s anomalies with locally favoured structures. Nat. Commun., 5:3556, 2014.
* (60) F. Leoni, R. Shi, H. Tanaka, and J. Russo. Crystalline clusters in mw water: Stability, growth, and grain boundaries. J. Chem. Phys., 151:044505, 2019.
* (61) M. Fitzner, G. C. Sosso, S. J. Cox, and A. Michaelides. Ice is born in low-mobility regions of supercooled liquid water. Proc. Natl. Acad. Sci. USA, 116:2009–2014, 2019.
* (62) L. Pauling. The Nature of the Chemical Bond, and the Structure of Molecules and Crystals. Ithaca, NY: Cornell University Press, 3 edition, 1960.
* (63) A. Luzar and D. Chandler. Hydrogen–bond kinetics in liquid water. Nature, 379:55–57, 1996.
* (64) D. Prada-Gracia, R. Shevchuk, and F. Rao. The quest for self–consistency in hydrogen bond definitions. J. Chem. Phys., 139:084501, 2013.
* (65) R. Shi, J. Russo, and H. Tanaka. Common microscopic structural origin for water’s thermodynamic and dynamic anomalies. J. Chem. Phys., 149:224502, 2018.
* (66) S. V. King. Ring configurations in a random network model of vitreous silica. Nature, 213:1112–1113, 1967.
* (67) R. A. DiStasio Jr., B. Santra, Z. Li, X. Wu, and R. Car. The individual and collective effects of exact exchange and dispersion interactions on the Ab Initio structure of liquid water. J. Chem. Phys., 141:084502, 2014.
* (68) G. Arnaoutis, S. A. Kavouras, N. Stratakis, M. Likka, A. Mitrakou, C. Papamichael, L. S. Sidossis, and K. Stamatelopoulos. The effect of hypohydration on endothelial function in young healthy adults. Eur. J. Nutr., 56(3):1211–1217, 2017.
* (69) N. I. Dmitrieva and B. B. Maurice. Elevated sodium and dehydration stimulate inflammatory signaling in endothelial cells and promote atherosclerosis. PLOS ONE, 10(6):1–22, 06 2015.
* (70) R. Mancinelli, A. Botti, F. Bruni, M. A. Ricci, and A. K. Soper. Perturbation of water structure due to monovalent ions in solution. Phys. Chem. Chem. Phys., 9(23):2959–2967, 2007.
* (71) M. D. Fayer, D. E. Moilanen, D. Wong, D. E. Rosenfeld, E. E. Fenn, and S. Park. Water dynamics in salt solutions studied with ultrafast two-dimensional infrared (2d ir) vibrational echo spectroscopy. Acc. Chem. Res., 42(9):1210–1219, 2009.
* (72) P. Gallo, D. Corradini, and M. Rovere. Do ions affect the structure of water? the case of potassium halides. J. Mol. Liq., 189:52–56, 2014.
* (73) V. Losasso, Y.-W. Hsiao, F. Martelli, M. Winn, and J. Crain. Modulation of antimicrobial peptide potency in stressed lipid bilayers. Phys. Rev. Lett., 122:208103, 2019.
# Quantitative System-Level Security Verification of the IoV Infrastructure
Jan Lauinger, Mudassar Aslam, Mohammad Hamad, Shahid Raza, and Sebastian
Steinhorst

J. Lauinger, M. Hamad, and S. Steinhorst are with the Department of Electrical
and Computer Engineering, Technical University of Munich, Munich, 80333
Germany (E-mail: <EMAIL_ADDRESS>, <EMAIL_ADDRESS>, [email protected]).
M. Aslam and S. Raza are with the Cybersecurity Unit, RISE Research Institutes
of Sweden, Stockholm, 16440 Sweden (E-mail: <EMAIL_ADDRESS>,
[email protected]). This work has received funding from the European Union’s
Horizon 2020 Research and Innovation Programme through the nIoVe project
(https://www.niove.eu/) under grant agreement no. 833742, and the CONCORDIA
project (https://concordia-h2020.eu/) under grant agreement no. 830927. With
the support of the Technische Universität München - Institute for Advanced
Study, funded by the German Excellence Initiative and the European Union
Seventh Framework Programme under grant agreement no. 291763. Manuscript
received July 14, 2020.
###### Abstract
The Internet of Vehicles (IoV) equips vehicles with connectivity to the
Internet and the Internet of Things (IoT) to support modern applications such
as autonomous driving. However, the consolidation of complex computing domains
of vehicles, the Internet, and the IoT limits the applicability of tailored
security solutions. In this paper, we propose a new methodology to
quantitatively verify the security of single or system-level assets of the IoV
infrastructure. In detail, our methodology decomposes assets of the IoV
infrastructure with the help of reference sub-architectures and the 4+1 view
model analysis to map identified assets into data, software, networking, and
hardware categories. This analysis includes a custom threat modeling concept
to perform parameterization of Common Vulnerability Scoring System (CVSS)
scores per view model domain. As a result, our methodology is able to allocate
assets from attack paths to view model domains. This equips assets of attack
paths with our IoV-driven CVSS scores. Our CVSS scores assess the attack
likelihood which we use for Markov Chain transition probabilities. This way,
we quantitatively verify system-level security among a set of IoV assets. Our
results show that our methodology applies to arbitrary IoV attack paths. Based
on our parameterization of CVSS scores and our selection of use cases, remote
attacks are less likely to compromise location data compared to attacks from
close proximity for authorized and unauthorized attackers respectively.
###### Index Terms:
Internet of Vehicles (IoV) Security, Threat Modeling, Risk Assessment, Attack
Vector, Markov Chain, IoV Reference Model, Connected Autonomous Vehicles
(CAVs).
## I Introduction
New connectivity capabilities in the IoV provide vehicles with access to the
infrastructure of the Internet.
Figure 1: High-level illustration, including paper section references, of the
proposed methodology for quantitative system-level security verification.
As a result, upcoming services around connected vehicles access new forms of
data for enhanced driving experiences, safety, and automation, such as
autonomous decision making over maneuvers [1]. Simultaneously, increasing
connectivity increases complexity which, from a security perspective, opens up
a larger attack surface. Attackers who successfully exploit vulnerabilities of
the IoV infrastructure gain new opportunities to remotely interfere with
vehicles. As a direct consequence, the potential for attacks that affect
vehicle safety, whether accidental or deliberate, increases [2]–[3].
Because jeopardized safety-critical systems threaten IoV acceptance, the
investigation of holistic IoV security concepts represents a common interest
of IoV stakeholders [4]. Although new and comprehensive security solutions for
the IoV exist, they remain in an early development stage [5] or face
administrative, legal, or technical difficulties [6]. Thus, the interplay of
different technological domains in the IoV demands tailored, automated,
dynamic, and adaptive security solutions.
To address this challenge and to evaluate new security concepts for assets of
the IoV infrastructure, we propose a new methodology that allows quantitative
verification of system-level security solutions. Our methodology requires the
definition of attack paths to determine the assets for the security
verification. Additionally, it requires an analysis of the IoV reference
architecture to allocate, equip, and assess the identified assets.
In order to analyze complex assets in a structured way, reference models,
layers, or view models provide ways to categorize the structure of an asset by
highlighting different groups of aspects. The 4+1 architectural view model,
used in our work, provides the logical, process, developer, and physical views
to analyze data, communication, libraries and dependencies, and hardware
aspects, respectively [7]. We leverage the separated analysis of the IoV
assets per view to (1) identify assets of the IoV infrastructure and (2)
accurately map attacks as well as defense mechanisms to assets. As a result,
we can label properties of Common Vulnerability Scoring System (CVSS) scores
for IoV assets, respecting each view category individually. This view model-
based attack analysis allows us to identify security measures per asset that
an attacker needs to compromise.
An attack is successful if the attacker exploits vulnerabilities or breaches
security mechanisms [8]. To reach the goal of an attack path, an attacker is
required to perform successive successful attacks. In order to model the
attacker perspective at different stages as well as to quantitatively verify
system security, our work leverages the state transition probabilities of
Markov Chains. In this context, state transitions represent attacker stages of
attack trees. To assess each individual stage of an attack path, we leverage
the vulnerability, risk, and security analysis based on CVSS scores. The
structure of Markov Models enables our quantitative security verification of
IoV assets as well as opportunities to verify the system-level security of
multiple assets that are part of an attack path [9].
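To illustrate the mechanics (our own minimal sketch with made-up transition probabilities, not the scores derived later in Section IV), the following snippet models an attack path as an absorbing Markov chain: forward transitions capture per-asset attack success, while backward and absorbing transitions capture prevention, detection, and reaction.

```python
import numpy as np

# Example attack path as an absorbing Markov chain. States:
# 0 = entry point, 1 = intermediate asset compromised,
# 2 = attack goal reached, 3 = attacker detected/locked out.
# In a real assessment, forward probabilities would be derived from
# per-asset CVSS-based likelihoods; all numbers here are made up.
P = np.array([
    [0.5, 0.4, 0.0, 0.1],
    [0.2, 0.3, 0.4, 0.1],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])

# Transient block Q and absorption block R of the canonical form.
Q, R = P[:2, :2], P[:2, 2:]
# Fundamental matrix N = (I - Q)^-1; N @ R gives the probability of
# ending in each absorbing state when starting from a transient state.
N = np.linalg.inv(np.eye(2) - Q)
absorb = N @ R
print(f"P(goal) = {absorb[0, 0]:.3f}, P(detected) = {absorb[0, 1]:.3f}")
```

Under these toy numbers, the attacker reaches the goal with probability about 0.59 and is detected otherwise; the backward transition from state 1 to state 0 is the hook for the defense concepts discussed in Section III.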
To recap the consecutive steps of our methodology, Figure 1 indicates each
step necessary for the quantitative system-level security verification of the
IoV infrastructure. At the same time, Figure 1 refers to the sections of our
work which apply the respective analysis. With a general focus on the IoV
location service application (see Section III-A), we leverage the
sub-architectures defined in [10] to model IoV system assets. This measure
reduces complexity and facilitates our security analysis. Section III applies
the 4+1 view model analysis of the IoV architecture from a security
perspective. To the best of our knowledge, our work applies the 4+1 view model
in the IoV security context for the first time. Section II-B applies our agile
threat modeling concept to handle the dynamics of the IoV architecture during
the assessment [11]–[12]. Sections III-B to III-E investigate the identified
assets to determine IoV-specific CVSS vulnerability scores. After selecting
IoV attack paths in Section V-A, we take the IoV-driven CVSS scores as
parameters to our Markov Chain model in Section V.
Our 4+1 view model analysis of the IoV infrastructure reveals attacks and
security requirements per asset. The collected assumptions about existing
attacks and security mechanisms enable us to define IoV-driven CVSS scores.
Based on our CVSS scores, it is possible to quantitatively verify the
system-level security of components that are part of different attack trees.
The attack trees depend on our selection of existing IoV attacks that target
location services. Our results show that remote attacks have a lower chance of
compromising the location data of CAVs than close-proximity attacks. Apart
from system-level security verification, our results show that it is possible
to apply our methodology to multiple existing IoV attacks to achieve a
comparable security assessment among components of the IoV infrastructure.
In summary, we contribute the following:
* •
Applying, to the best of our knowledge, the 4+1 view model analysis in the IoV
security context for the first time.
* •
Performing security risk assessment based on IoV-driven CVSS scores.
* •
Proposing an agile, modular, view model-based methodology to design and verify
security concepts for IoV systems.
* •
Applying and evaluating our proposed methodology using existing IoV attacks
targeting location services of CAVs.
## II Background & Related Work
### II-A View Model Frameworks in the Security Context
Figure 2: The 4+1 View Model in the IoV context.
There are multiple view angles from which to analyze an IoV infrastructure
[13]. Considering functional, communication, implementation, enterprise,
usage, information, physical, stakeholder, and user viewpoints all together is
not beneficial for security analysis [10]. Some categories introduce too much
complexity and inconsistency, while others, in essence, do not contribute to
security-related purposes such as attack analysis. Hence, a sufficiently
balanced portfolio of viewpoints increases the applicability of appropriate
security concepts [14].
applicability of security concepts and the complexity of reference
architectures by utilizing the approach of the 4+1 view model which describes
the architecture of a scenario using multiple abstract views [7]. Figure 2
shows the 4+1 view model in the IoV context together with common
characteristics that apply in each of the views. The abstraction levels of the
view model enable the identification of security-relevant system boundaries
and information flows.
Focusing on each view individually, the _logical view_ decomposes the system
by leveraging principles of abstraction, encapsulation, and inheritance to
describe end-user functionality and services. The _process view_ utilizes
requirements such as performance, availability, concurrency, distribution,
integrity, and fault tolerance to map logical view abstractions into a process
model of communicating processes and tasks, which reveals the computing load.
The _development view_ organizes software modules and their import and export
relationships by considering rules such as partitioning, grouping, scope,
decoupling, reuse, portability, communication, and time dependence, and
reveals the allocation of requirements, costs, and planning.
The _physical view_ model determines the physical components of computer
networks, processors and interfaces. Thereby, the physical view model
considers non-functional system requirements such as performance, scalability,
reliability, and availability to drive configuration, deployment and testing
decisions of various nodes. In essence, these properties determine capacity
requirements of the physical architecture of the hardware.
Last, the scenario defines application procedures, sequences, and
interactions, identifies validation, verification and illustration concepts,
and marks the input to all view models. In the context of threat modeling, the
scenario definition is essential, as it reduces the complexity of the attack
surface by prioritizing assets [15]. As such, the scenarios enable a target-
oriented modeling of the system and assets which represents the initial step
of threat modeling and risk assessment. As of today, standardized tool sets
and development frameworks facilitate implementations of each view model
individually.
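As a minimal illustration of this per-view decomposition (our own sketch; the names and fields are hypothetical, not an existing tool's API), assets can be tagged with their view and security domain so that the later attack mapping and CVSS labeling of Section IV attach to the right category:

```python
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    view: str     # logical | process | development | physical
    domain: str   # data | networking | software | hardware

# Example assets of the location service scenario, one per view.
assets = [
    Asset("GPS receiver tracking", view="logical", domain="data"),
    Asset("DGPS correction link", view="process", domain="networking"),
    Asset("sensor fusion module", view="development", domain="software"),
    Asset("telematics ECU", view="physical", domain="hardware"),
]
by_domain = {a.domain: a for a in assets}
print(by_domain["software"].name)
```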
### II-B Threat Modeling
To identify the main characteristics among different threat modeling
approaches, chapter two of the comprehensive work of Shostack [16] introduces
asset-centric, attack-centric, and software-centric strategies of threat
modeling. Iterating over the threat modeling methodologies of the survey of
Hussain et al. [17], the Spoofing, Tampering, Repudiation, Information
Disclosure, Denial of Service, Elevation of Privilege (STRIDE) threat model,
which identifies these six as the main threats, counts as a software-centric
threat model. Likewise, the STRIDE Average Model, the Fuzzy Logic model, and
the Abuser Stories methodology [16] utilize STRIDE.
Graphical threat modeling concepts, such as attack trees, model system assets
or attacks at different attack propagation stages depending on the assignment
of the security expert. Hence, it is not possible to allocate attack trees to
any one of the three threat modeling approaches. Nevertheless, graphical
concepts provide flexibility and extendibility and fit into iterative threat
modeling procedures.
Attack libraries, such as the Common Attack Patterns Enumeration and
Classification (CAPEC) [18] or the Intel Threat Agent Library (TAL) [19],
address the attributes of the attacker and represent attacker-centric models.
The automotive-compatible Threat Agent Risk Assessment (TARA) model marks
another attacker-centric model that is based on different threat-related
libraries [20]. To close the scope of approaches, the asset-centric
Operationally Critical Threat, Asset, and Vulnerability Evaluation (OCTAVE)
[21] and Threat Vulnerability and Risk Analysis (TVRA) [22] models analyze
threats, risks, and vulnerabilities together.
The works [20] and [23] address the versatility and applicability of threat
modeling approaches, where [20] proposes a tailored threat modeling procedure
for the IoV. This procedure adapts the TARA and STRIDE strategies.
Based on the work in [20] and due to overlapping threat modeling strategies,
the threat modeling of this work follows the strategy of Figure 3. The
strategy of the threat model of Figure 3 represents an iterative process of
system modeling, threat, vulnerability, and risk analysis, security
requirement definition, and tailored security design. This concept aligns with
the proposed threat modeling procedures of [16], [20], and [24].
Figure 3: Agile threat modeling approach for the IoV.
To clarify the statement, [16] relies on the four-step framework of system
modeling, threat analysis, threat mitigation, and validation. In [20], the
adapted TARA model lists threat-agent risk analysis, high-level threat-agent
risk evaluation, attack method and exposure analysis, and strategy design to
target exposures. The adapted TAL, Methods and Objectives Library (MOL), and
Common Exposure Library (CEL) drive this approach.
The work of [24] introduces a risk assessment framework for the automotive
domain. The strategy relates to threat modeling approaches and consists of
system definition, threat analysis, risk assessment, and definition of
security requirements. Their threat analysis identifies assets before the
actual threats. Moreover, the risk assessment block comprises threat and
impact level estimation and security level determination. Finally, the work of
Hamad et al. [25] combines all aspects of threat modeling, attack tree
construction, and risk assessment in a comprehensive threat modeling approach
tailored to vehicles.
In contrast to general threat modeling approaches, such as fuzzy,
probabilistic, tree-based, graphical, and legacy threat modeling, our work
follows the agile threat modeling approach. The iterative nature of agile
threat modeling provides the necessary flexibility of security analysis for
the constantly evolving IoV domain. With this approach, updates of security
goals and requirements remain customizable [12]. Software security analysts
have the possibility to iteratively decrease abstraction levels, or change the
quantification of scenarios.
Moreover, our threat modeling approach of Figure 3 includes a final block of
tailored security design. The reasons for this are the work of Xiong et al.
[26], which states the design of a tailored security concept as future work,
and the requirement of defining a mitigation concept in [27].
Furthermore, the structures of the systematic threat modeling approach of [28]
derive from the prominent Road Vehicles Functional Safety (ISO 26262) [29] and
Cybersecurity Guidebook for Cyber-Physical Vehicle Systems (SAE J3061) [30]
standards which define a combination of TARA and STRIDE for risk assessment.
Our work relates in that it analyzes both high-level and in-depth details of
the modules and the implementation of the IoV architecture through the 4+1
view model analysis. Likewise, our work identifies the security
requirements of assets and provides a methodology to validate and verify
security requirement effectiveness.
### II-C System Design for Security Verification
The work of Xiong et al. [26] enhances threat modeling with probabilistic
attack simulations that are based on networking graphs with attack paths.
Their work builds upon the in-vehicle 2014 Jeep Cherokee Controller Area
Network (CAN) network model of [31] and utilizes the software tool securiCAD
for automated attack modeling and risk assessment. The attack simulations
based on the attack path incorporate attack types, vulnerabilities, and
countermeasures at every propagation stage and manage to evaluate Time-to-
Compromise (TTC) behavior. The findings of this work demand more tailored
definitions of meta-models, reference architectures, investigation of
countermeasures, security architectures, validation of the approach through
case studies, and quantitative input to the quantitative and probabilistic
security metric of TTC.
The work of Iqbal et al. [32] describes the transition of the traditional IoV
architecture into a data-driven-intelligent framework. Their framework
translates the architecture into data collection, preprocessing, data
analysis, service, and application layers. Security, privacy, and trust
management affect all layers. Last, the approach of Zou et al. [33] proposes
an architecture that keeps a security monitoring system, threat intelligence,
and networking security modules at the bottom layer. Validation and
verification services build on top of the lowest layer. Defense,
reinforcement, and response systems complete their so-called 360 connected
vehicle safety framework.
To address the outcomes of [26], our work reproduces their threat modeling
concept with the following changes. Regarding the reference architecture, our
work leverages the outcomes of the IoV reference model architecture analysis
of [10]. Based on this model and using a scenario, we apply the 4+1 view model
to break down assets to extract detailed vulnerability properties. Our
abstraction concept differentiates between hardware, software, networking, and
data and enables mappings of attacks and defense mechanisms per system asset.
## III System Decomposition and Agile Threat Modeling based on 4+1 View Model
Analysis
This section introduces the IoV location service application as the base
scenario for aligning assets with reference models. Next, our 4+1 view model
analysis in Sections III-B to III-E identifies all sub-architectures as well
as the security domains of hardware, networking, software, and data. This step
marks one of our contributions and allows fine-grained identification and
mapping of assets for our security verification method.
### III-A Location Services of Connected Autonomous Vehicles
Figure 4: High-level application overview of the CAV location service
scenario, which includes A-GPS [34], DGPS [35], and Cellular Internet [36]
services.
The location service application of CAVs represents the main scenario for the
following reasons. Location accuracy contributes to the safety criticality
level of a vehicle which, in turn, determines the level of driving automation
of CAVs [37]. Autonomous mini-bus shuttles of Original Equipment Manufacturers
(OEMs) aim towards running at Society of Automotive Engineers (SAE) driving
automation level four [38]. Driving automation level four expects automated
steering, acceleration, deceleration, monitoring, handling of dynamic tasks,
and driving modes of the vehicle system. To prevent vehicles from stopping due
to location inaccuracy, which causes a high safety criticality level, vehicles
rely on redundant location services [39].
Several processing services of odometry, Light Detection and Ranging (LIDAR),
and Differential Global Positioning System (DGPS) data, as shown in Figure 4,
establish necessary localization redundancy. Additionally, the comparison of
calculations of local positions of sensor and receiver data with predetermined
Simultaneous Localization and Mapping (SLAM) trajectories enhances location
estimation. All in all, the services of vehicle DGPS communication, sensor
(odometry, LIDAR, camera) data processing and communication, SLAM trajectory
comparison, and Vehicle to Cloud (V2C) communication make up the foundation of
the following view model analysis.
With the scenario defined, it is necessary to determine the reference model
domains of the IoV architecture for the analysis of the attack surface. The
work [10] provides a comprehensive IoV reference architecture which considers
a physical IoV infrastructure consisting of four sub-architectures of CAVs,
devices and peripherals, edge, and cloud. Their reference architecture for
attack surface analysis is based on a functional-communication viewpoint which
creates feasible complexity and manages incorporation of security relevant
details. To further simplify the sub-architecture categorization, we consider
the peripherals and the vehicle as one domain. The reasons of (1) dynamic
connectivity requirements that apply to vehicles and peripherals in the same
way [40] and (2) wired connections of peripherals to the in-vehicle network
[41] justify this assumption.
Figure 5: Logical View of the GPS Receiver Tracking System [42], Sensor Data
Processing [43], and Vehicle Localization Components [44].
### III-B Logical View Analysis (Data Management)
#### III-B1 System Modeling
The works [45] and [46] provide a general overview of the logical software
design. Figure 5 combines three detailed logical views where the top part
consists of a Global Positioning System (GPS) receiver tracking and vision-
based object detection system. The object detection sub-module processes LIDAR
front and bird view data as well as video images. Latest Machine Learning (ML)
frameworks for visual data processing rely on proposal networks for
preprocessing that feed fully connected fusion networks [45]. The GPS receiver
tracking system estimates signal traveling times using code and carrier
synchronization techniques in order to determine first pseudo ranges. In cases
of unreliable GPS signal reception, location services need to rely on
predictive DGPS carrier phase corrections [47].
The lower part of the logical view indicates a multi-sensor fusion system
which processes the results of vision-based tracking, GPS tracking, DGPS
correction, and odometry data to determine high-precision altitude, position,
and velocity values. The combination of LIDAR point cloud and odometry data
allows Kalman filter estimation of particle motion between LIDAR scans. To
determine a high-precision localization offset, it is possible to match point
clouds between the estimated live LIDAR scans and predefined SLAM maps [48].
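To make the fusion step concrete, here is a deliberately simplified one-dimensional Kalman filter sketch (our own illustration, not the implementation of the systems cited above): odometry drives the prediction step and a GNSS position fix drives the correction step.

```python
import numpy as np

x = np.array([0.0, 0.0])           # state: [position, velocity]
P = np.eye(2)                      # state covariance
F = np.array([[1.0, 0.1],          # constant-velocity model, dt = 0.1 s
              [0.0, 1.0]])
Q = 0.01 * np.eye(2)               # process noise (odometry drift)
H = np.array([[1.0, 0.0]])         # GNSS measures position only
R = np.array([[0.5]])              # GNSS measurement noise

for gps_fix in [0.12, 0.25, 0.41, 0.52]:   # synthetic position fixes
    # Predict with the motion model (odometry step).
    x = F @ x
    P = F @ P @ F.T + Q
    # Correct with the GNSS fix (Kalman update).
    y = gps_fix - H @ x                     # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + (K @ y)
    P = (np.eye(2) - K @ H) @ P

print(f"fused position = {x[0]:.2f} m, velocity = {x[1]:.2f} m/s")
```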
#### III-B2 Threat & Vulnerability Analysis
To calculate the CVSS scores of the logical view, it is necessary to gather
the threats on the identified _assets_ of the logical view. Logical software
design, which describes the basic structure of data relationships and,
thereby, application logic, belongs to the field of software system design
engineering [49]. Hence, the vulnerability taxonomies of [50], the Research in
Secured Operating Systems (RISOS) project [51], and the Protection Analysis
(PA) project [52], which describe software Operating System (OS) flaws, apply
to other system design challenges as well. The reason for this is that an OS
requires holistic system design modeling.
All software data flaws of the referenced collection affect the identified
_assets_ of Figure 5. For instance, incomplete or inconsistent parameter
validation, privileges, identification, authentication, authorization,
serialization, or logic errors mark vulnerabilities of the logical view
domain. By exploiting these data vulnerabilities, an attacker may perform one
or multiple Create, Read, Update, and Delete (CRUD) operations. In our use
case, the manipulation of location service application data represents the
ultimate goal of an attacker.
#### III-B3 Risk Analysis & Security Requirements
With the help of the threat and vulnerability analysis of the logical view, we
apply risk analysis by determining the CVSS metric in Table I, where higher
vulnerability scores refer to more severe risks. This table indicates our
decisions on the CVSS parameters that we derive in Section IV-A. Logical
security requirements demand consideration of security-by-design best
practices. Another requirement is the incorporation of incident detection and
reaction procedures. Our methodology can cover existing defense concepts of
this type in the form of backward transition probabilities.
#### III-B4 Security Considerations
Based on the outcomes of the logical view analysis, a tailored security design
from the logical perspective first of all needs to minimize the number of
components and functionality requirements which belong to security by design
concepts [53]. Other preventive measures such as error handling, consistency
of data over time, authentication, validation, modularity, exposure, etc.
require consideration and need incorporation into the logical design of the
application scenario [50]. For detective and reactive measures, the logical
design must detect injections of logic bombs which would intentionally hide,
delete, or start processes that affect application logic.
### III-C Developer View Analysis (Software Management)
#### III-C1 System Modeling
The software implementation of location services builds upon the software
stack of AUTomotive Open System ARchitecture (AUTOSAR) Classic and AUTOSAR
Adaptive which structure libraries, dependencies, and program interactions
[54]. The classic version of AUTOSAR applies to deeply embedded systems that
focus on safety, real-time capability, and process determinism. By contrast,
the adaptive platform targets high-end computing in the form of custom
applications.
The classic AUTOSAR software architecture divides into four main layers. On
top of the microcontroller layer, which groups Electronic Control Unit (ECU)
hardware, the basic software layer as well as the AUTOSAR runtime environment
abstract hardware functionality through software modules. The top-level
application layer utilizes the runtime environment for software and
application module communication [55]. Equal to the classic AUTOSAR
architecture, the adaptive AUTOSAR software architecture builds the adaptive
AUTOSAR foundation on top of a virtual machine/hardware layer. The adaptive
AUTOSAR foundation consists of Application Programming Interfaces (APIs) and
services for the management of the OS, time, execution, communication,
configuration, security, and monitoring. This layer enables the AUTOSAR
Runtime Environment for Adaptive Applications (ARA) to expose these APIs to
applications that run on top of ARA [54].
Regarding the software architecture of the cloud, infrastructure, and edge
domains of the reference model, the works [56] and [57] introduce recent
software networking stacks and cloud software architecture stacks
respectively. These software assets mark potential entry points for an
attacker and we consider this investigation as future work.
#### III-C2 Threat & Vulnerability Analysis
Since the developer view is part of the software context, it is possible to
consider the traditional software threats of the STRIDE model. Additionally,
the software vulnerability taxonomy of [58] lists input validation and
representation, states of APIs, timing, errors, code quality, encapsulation,
and environment as flaws. These software flaws clearly focus on the
implementation and software library modules and do not consider weak spots of
data and system design. All the stated flaws apply to the analysis of modules
of software architecture which the developer view identifies.
#### III-C3 Risk Analysis & Security Requirements
Regarding the software context, the STRIDE security requirements of
authenticity, integrity, non-repudiation, confidentiality, availability, and
authorization apply. With the help of
the asset analysis of the developer view and the software vulnerability
taxonomy, it is possible to determine the CVSS parameters of the software
implementation layer in Table I. This software CVSS score represents the
software risk analysis for the IoV location service scenario.
#### III-C4 Security Considerations
Tailored security design in the software domain of the developer view concerns
safe development, implementation, verification, testing, deployment, and
maintenance of services such as interoperability, dynamic and automated risk
assessment, attack prediction and attribution, threat predictive analytics,
monitoring, and detection intelligence, encrypted traffic analysis, forensic
readiness, intrusion detection and prevention, and penetration testing
[59]-[60]. It is necessary to apply all stated security concepts for location
service networking, sensor fusion algorithms, modules, and dependencies of the
OS in use.
### III-D Process View Analysis (Networking Protocols)
#### III-D1 System Modeling
The process view indicates the interplay of logical components of localization
services in a sequential order. Figure 6 represents the order which starts
with Assisted GPS (A-GPS) utilization for faster satellite localization.
Reception of GPS data from satellites and subsequent merging of DGPS
correction data determines the initial position estimation of the vehicle.
Figure 6: High-level Process View of the Localization Service.
At the same time, Inertial Measurement Unit (IMU) data of vehicle movement
passes through the Kalman filter to feed the estimation between vision-based tracking
scans. When it comes to the matching of scan and map data, the initial GPS
position narrows the area of map matching which optimizes and stabilizes the
position estimation [61]. The last part of the process view is the
transmission of vehicle location data to the cloud services for vehicle
tracking. Services of Advanced Driver Assistance System (ADAS) modules such as
maneuver estimation services [62] and lane hypothesis estimation benefit from
the SLAM map matching as well [63].
#### III-D2 Threat & Vulnerability Analysis
All communication protocols of the mentioned services outside of the vehicle
make up a direct attack surface for the attacker. Even though communication
protocols differ, common attacks such as jamming, spoofing, timing, capturing,
modification, removal, payload, etc. apply to all communication protocols
independent of the software Open Systems Interconnection (OSI) reference
layers [64]. The collection in [50] provides network vulnerability taxonomies
which equally apply in the IoV networking domain.
#### III-D3 Risk Analysis & Security Requirements
The risk analysis of the identified assets, threats, and vulnerabilities of
the networking category provides another CVSS score of Table I. Due to the
exposure of communication messages and interfaces, it is necessary to
emphasize on the reaction patterns of security requirements for the networking
domain. The large scale communication attack mitigation analysis of [65]
identifies sixteen reactive defense mechanism and provides pros and cons of
each mitigation strategy. Packet dropping, replication, isolation,
disconnection, termination, restart, redirection, inspection, filtering, etc.
belong to this collection of concepts. The defense mechanisms, thereby,
counteract the malicious communicator and attacker types of sensor disruptor
of the IoV specific attacker model of [66].
#### III-D4 Security Considerations
Security considerations for the networking domain affect the safe and reliable
connectivity of Vehicle to Infrastructure (V2I) and Peripheral to
Infrastructure (P2I). Since man-in-the-middle (MITM) and other networking
attacks are difficult to prevent, the focus in this domain lies on detection
and, especially, reaction concepts [67]. Another reason for this fact is the
necessary exposure of networking interfaces which enable the localization
services in the first place. Hence, safe routing, redirect adaptivity,
redundant connectivity, etc. point out the direction of tailored communication
security for the location services of CAVs.
### III-E Physical View Analysis (Hardware Management)
#### III-E1 System Modeling
Figure 7 presents a simplified physical view of the in-vehicle architecture.
There exist different architecture designs, such as zone-, domain-, or
central-gateway-based architectures [68]. The reference architecture shown in
Figure 7 follows the domain-based architectural design. The reason for this is
that the modern anatomy of automotive Ethernet has computationally powerful
domain controllers which group network segments such as ADAS, drive-train,
infotainment, and Human-Machine Interface (HMI) [41]. The design enables
isolation, criticality, and bandwidth measures to unload the gateway component
[69].
The automotive Electrical/Electronic-Architecture attaches sensors and
actuators to ECUs, which in turn connect to domain controllers or directly to
the central gateway component, depending on safety-critical functionality
[69]. With the transmission of location data to the cloud, the gateway enables
cloud services to publish vehicle information to smartphone applications [36].
Our physical analysis neglects the infrastructure for SLAM map construction,
as it happens before CAV deployment [70].
#### III-E2 Threat & Vulnerability Analysis
It is unlikely for an attacker to gain physical access to infrastructure units
in the cloud, networking, or satellite domain due to their remote location.
For this reason, we focus on the threats and vulnerabilities of the physical
vehicle architecture. The general attack taxonomy of physical attacks on
Internet of Things (IoT) devices in [71] counts twelve types of hardware
threats.
Figure 7: Physical View of a Simplified In-Vehicle E/E-Architecture.
Here, threats and attacks map to affected security requirements and
countermeasures. Object tampering, outage, object replication, camouflage,
side-channel, hardware trojan, and physical damage attacks are among the
threats listed in [71].
Highlighting in-vehicle attacks specifically, the work in [41] provides a
detailed attack surface. Here, non-CAN attacks, in the form of Tire Pressure
Monitoring System (TPMS) and KeeLoq Cipher, and CAN attacks, in the form of
media player, On-Board Diagnostics (OBD), Bluetooth module, and Telematics
Control Module (TCM) attacks, exploit physical vulnerabilities of the listed
devices. The vulnerability assessment of [72] further identifies boot memory,
debug interfaces, inter-chip communication channels, and side-channel attacks
as susceptible hardware units.
#### III-E3 Risk Analysis & Security Requirements
With the attack surface, threat modeling, and vulnerability analysis, it is
possible to calculate the CVSS scores of physical assets in Table I. Regarding
physical security requirements, the taxonomies in [41] mention monitoring for
intrusion detection as well as authentication as the main requirements. To
further protect against hardware vulnerabilities, [72] and [11] emphasize
stack canaries, the no-execute bit, address space layout randomization,
protection units, management units, privilege separation, and Hardware
Security Module (HSM) mitigation concepts.
#### III-E4 Security Considerations
As opposed to networking components, gaining access to physical components
remains a challenging task due to the location and speed dynamics of vehicles
and the distance to cloud or infrastructure assets. Thus, tailored security
for physical components of the IoV location service focuses on insider attacks
[73]. This means that physical attack surfaces such as OBD assets require
misbehavior detection frameworks and secure aggregation mechanisms.
## IV Vulnerability Scores and Markov Chain-based Security Verification
This section walks through each CVSS vulnerability metric and defines each
metric per view model perspective. All abbreviations used throughout this
section refer to CVSS parameters and can be found in Table I. Our scores mark
the first input to probability calculations for state transition of our Markov
Chain model. Section IV-B describes our quantitative system-level security
verification concept. The second input for our Markov Chain model are attack
vectors that contain assets for the system-level security verification.
Possible attack vectors are presented in the evaluation Section V-A.
### IV-A Labeling of CVSS Parameters
The connectivity of the IoV architecture components enables the label “remote”
(R) for the Access Vector (AV) in every category. The Access Complexity (AC)
has a similar distribution, where every category except networking fulfills
the label “high” (H). The reason for this choice is the safety-critical
application of CAVs, which requires the highest access control standards at
every stage. Networking AC remains “low” (L) in the location service scenario
because attackers have direct access to the networking applications of
redundant location services.
Regarding Authentication (A), software provides data access and authentication
privileges by default. Accessing the IoV cloud and vehicle environments
“requires” (R) authentication, but infrastructure services such as GPS data
reception do “not require” (N) authentication. Every category requires
authentication concepts except the data domain: regular GPS receivers do not
necessarily authenticate satellites; however, the software behind the signal
reception interfaces authenticates correct signals.
Compromising software has the potential to cause “complete” (C)
confidentiality, integrity, and availability loss in the system. Equally,
successful data and networking integrity manipulation could allow malicious
data or network participants to propagate through the system if not correctly
detected in initial checks. Otherwise, the impact on the confidentiality,
integrity, and availability requirements remains “partial” (P).
TABLE I: CVSS Scores per View Model Layer

Parameters | Data | Software | Networking | Hardware
---|---|---|---|---
Access Vector | R | R | R | R
Access Complexity | H | H | L | H
Authentication | N | R | R | R
Confidentiality Impact | P | C | P | P
Integrity Impact | C | C | C | P
Availability Impact | P | C | P | P
Impact Bias | I | A | I | N
Base Score | 6.8 | 4.8 | 5.1 | 3.4
Exploitability | PoC | U | F | PoC
Remediation Level | TF | OF | TF | OF
Report Confidence | UCB | UCF | UCB | UCF
Temporal Score | 5.2 | 3.2 | 4.1 | 2.4
Collateral Damage Potential | H | H | M | M
Target Distribution | M | L | H | L
Environmental Score | 5.7 | 1.6 | 5.3 | 1.2
Total | 17.7 | 9.6 | 14.5 | 7
For the Impact Bias (IB), and with regard to the location service scenario,
the data and networking components weigh “integrity” (I) over the other
requirements, as incorrect location data or communication entities can
potentially destroy the service. Since there is a centralized sensor fusion
software module, the IB of software applies greater weighting to
“availability” (A). With respect to the hardware category, exploiting any of
the listed security requirements leads to a comparable “normal” (N) impact on
the system. Regarding data attacks, existing research on GPS spoofing
provides a “proof of concept” (PoC) for manipulating location data [74]. This
fact justifies assuming the existence of additional “uncorroborated” (UCB)
sources for the Report Confidence (RC). At the same time, and concerning the
Remediation Level (RL), “temporal fix” (TF) solutions exist for the detection
and prevention of such attacks.
The networking category behaves similarly, except that it is possible to
access “functional” (F) exploit code for networking attacks by using OSs
specialized for hacking. With software, we can assume non-disclosed
algorithms implement sensor fusion and localization. This assumption sets the
Exploitability (E) of location service ECU software to “unproven” (U). Due to
the criticality of location service correctness, one must expect “official
fixes” (OF) for newly confirmed vulnerabilities. However, software bugs that
remain undiscovered stay “unconfirmed” (UCF) from the report confidence
perspective.
The Collateral Damage Potential (CDP) of data and software is “high” (H), as
it directly affects system safety. Redundancy and robustness of location
services enable temporal autonomy of a vehicle and reduce the damage
potential of networking and hardware attacks to “medium” (M) [75]. Regarding
the Target Distribution (TD), the multi-sensor fusion software as well as the
physically reachable hardware deserve a “low” (L) value, whereas
communication and location data propagate from infrastructure nodes through
the vehicle to the cloud and require a “high” (H) distribution value.
However, compromised location data in cloud services does not affect the
location service functionality of the vehicle itself; only the redistribution
of malicious location data from cloud services to other vehicles causes
problems. For this reason, the evaluation labels the distribution of highly
critical location data as “medium” (M).
With all parameters specified, it is possible to calculate the overall CVSS
scores: the Base Score (BS), the Temporal Score (TS), and the Environmental
Score (ES). Equations 1, 2, and 3 calculate the main CVSS scores and can be
found in [76], where the values of the Confidentiality Impact Bias (CIB),
Integrity Impact Bias (IIB), and Availability Impact Bias (AIB) depend on the
setting of the IB.
$BS=10\cdot AV\cdot AC\cdot A\cdot((CI\cdot CIB)+(II\cdot IIB)+(AI\cdot AIB))$
(1)
$TS=BS\cdot E\cdot RL\cdot RC$
(2)
$ES=(TS+(10-TS)\cdot CDP)\cdot TD$
(3)
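To make the scoring concrete, the following Python sketch evaluates Equations
1, 2, and 3 for the Data column of Table I. The numeric weights behind the
labels (e.g., remote AV $=1.0$, high AC $=0.8$, required Authentication
$=0.6$, partial impact $=0.7$, weighted Impact Bias $=0.5/0.25/0.25$) are an
assumption taken from the CVSS v1 specification [76]; the paper itself only
lists the labels.

```python
# Sketch: reproduce the Table I scores with Equations 1-3.
# The label-to-weight mappings follow the CVSS v1 specification [76]
# and are an assumption here; Table I only lists the labels.
AV = {"R": 1.0, "L": 0.7}                       # Access Vector
AC = {"H": 0.8, "L": 1.0}                       # Access Complexity
AU = {"R": 0.6, "N": 1.0}                       # Authentication
IMPACT = {"N": 0.0, "P": 0.7, "C": 1.0}         # C/I/A impact
E = {"U": 0.85, "PoC": 0.9, "F": 0.95}          # Exploitability
RL = {"OF": 0.87, "TF": 0.9}                    # Remediation Level
RC = {"UCF": 0.9, "UCB": 0.95}                  # Report Confidence
CDP = {"N": 0.0, "L": 0.1, "M": 0.3, "H": 0.5}  # Collateral Damage Potential
TD = {"N": 0.0, "L": 0.25, "M": 0.75, "H": 1.0} # Target Distribution
# Impact Bias: the weighted requirement gets 0.5, the other two 0.25;
# "normal" (N) weighs all three requirements equally.
BIAS = {"C": (0.5, 0.25, 0.25), "I": (0.25, 0.5, 0.25),
        "A": (0.25, 0.25, 0.5), "N": (0.333, 0.333, 0.333)}

def cvss(av, ac, au, ci, ii, ai, ib, e, rl, rc, cdp, td):
    cib, iib, aib = BIAS[ib]
    bs = 10 * AV[av] * AC[ac] * AU[au] * (
        IMPACT[ci] * cib + IMPACT[ii] * iib + IMPACT[ai] * aib)  # Eq. 1
    ts = bs * E[e] * RL[rl] * RC[rc]                             # Eq. 2
    es = (ts + (10 - ts) * CDP[cdp]) * TD[td]                    # Eq. 3
    return round(bs, 1), round(ts, 1), round(es, 1)

# Data column of Table I: expected BS=6.8, TS=5.2, ES=5.7, total 17.7.
bs, ts, es = cvss("R", "H", "N", "P", "C", "P", "I",
                  "PoC", "TF", "UCB", "H", "M")
print(bs, ts, es, round(bs + ts + es, 1))
```

Under these assumed weights, the Data column reproduces the tabulated 6.8,
5.2, and 5.7 after rounding to one decimal.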
### IV-B Quantitative Security Verification Model
We choose a slightly simplified version of the attack realization metric and
algorithm in [9] to demonstrate the applicability of extended Markov Chain
models to attack propagation graphs that follow the categorization structure
of the 4+1 view model analysis. We do not rely on non-homogenous
continuous-time Markov models, as in [77], because the extended Markov Chain
suffices for modeling the high-level attack stages of our IoV use cases.
The discrete-time finite state Markov Chain represents a time- and
state-discrete stochastic process in which future states at time $t_{i+1}$
depend only on the current states at time $t_{i}$, not on past states at time
$t_{i-1}$. Per definition, the Markov Chain $MC(I,P,A)$ is a 3-tuple
consisting of the system state space $I$, the transition probability matrix
$P$, and a set of possible atomic actions $A$. We assume no empty action
affects the realization metric $E$ and hence set $E=1$. To further simplify,
we remove both sums of the state-to-target probability because we are
interested in the worst-case attack with maximum significance, which keeps
only state transitions that connect starting and target states without
detours. As a result, the following characteristics hold for (1) state, (2)
transition, (3) action, and (4) total state-to-target probability of attack
realization, respectively:
1. $S_{i}\in I$, where the state $S_{i}$ carries one of the labels $HW$, $SW$, $Net$, or $Data$ of the view model perspectives.
2. $\sum_{j=1}^{\infty}p_{ij}=1$ for every state $S_{i}$, i.e., each row of $P$ sums to one.
3. $a_{i},d_{i}\in A$ are the probabilities of successful attacks and defense mechanisms.
4. $W^{n}(S_{i=1})=\sum_{S_{i}\in\text{SUBSEQ}(S_{1})}p_{ij}\cdot W^{n-1}(S_{i})$, where SUBSEQ returns the set of remaining states $S_{i}$.
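Under the detour-free simplification, the recursion in characteristic 4
collapses, in our reading, to a product of the forward transition
probabilities along the single path from starting state to target. A minimal
sketch:

```python
from functools import reduce

def state_to_target_probability(forward_probs):
    """Characteristic 4 under the detour-free simplification: W^n(S_1)
    reduces to the product of the forward transition probabilities p_ij
    along the single starting-to-target path (our reading)."""
    return reduce(lambda w, p: w * p, forward_probs, 1.0)

# Hypothetical forward probabilities of a three-transition attack path.
print(state_to_target_probability([0.5, 0.6, 0.7]))  # 0.21
```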
Figure 8: Markov Chain Transition Probability Graph of Attack and Reaction
Transition Probabilities
The state transition probabilities, shown in Figure 8, include attack $a$ and
defense $d$ actions. The initial state either transitions forward on a
successful attack or remains in the set of initial states. Similarly, the
last state changes only through an effective response action against the
attacker. For all intermediate steps, there is no state transition if a
successful attack faces an immediate countermeasure ($ad$) or if neither an
attack nor a defense action happens ($(1-a)(1-d)$). If an attack action
succeeds and no defense reaction occurs ($a(1-d)$), the attacker moves to the
next state. Vice versa, failing attacks combined with successful reactions
($d(1-a)$) may transition the attacker backwards. Section V-B provides sample
calculations of the probabilities that enable the quantitative security
verification. As a last rule, the outgoing weights of each state sum up to
the value of one.
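To make these rules concrete, the following sketch assembles the
row-stochastic transition matrix $P$ for an $n$-state chain from per-stage
attack probabilities and a constant defense probability, following our
reading of Figure 8:

```python
import numpy as np

def transition_matrix(a, d):
    """Row-stochastic matrix P for the Figure 8 chain (our reading).

    a -- forward attack probabilities a_1..a_{n-1}: a[0] applies at the
         initial state, the rest at intermediate states.
    d -- constant probability of a successful defense reaction.
    """
    n = len(a) + 1
    P = np.zeros((n, n))
    # Initial state: a successful attack moves forward, otherwise stay.
    P[0, 0], P[0, 1] = 1 - a[0], a[0]
    # Intermediate states: forward a(1-d), backward d(1-a),
    # stay with ad + (1-a)(1-d).
    for i in range(1, n - 1):
        P[i, i + 1] = a[i] * (1 - d)
        P[i, i - 1] = d * (1 - a[i])
        P[i, i] = a[i] * d + (1 - a[i]) * (1 - d)
    # Target state: only an effective response moves the attacker back.
    P[n - 1, n - 1], P[n - 1, n - 2] = 1 - d, d
    return P

P = transition_matrix([0.49, 0.54, 0.78], d=0.1)  # hypothetical values
assert np.allclose(P.sum(axis=1), 1.0)            # rows sum to one
```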
$a=\frac{f_{cvss}(v_{\text{domain}})}{42.5};\quad i\in\mathbb{N}.$
(4)

$a_{i}=1-e^{-2i\frac{f_{cvss}(v_{\text{domain}})}{42.5}};\quad i\in\mathbb{N}.$
(5)
We model the parameters of the attack and defense probabilities with the CVSS
vulnerability assessment. Additionally, the stage of the Markov Chain depends
on the view model perspective type (hardware, networking, software, or data)
of the asset under attack. Since the CVSS score has a maximum possible value
of $42.5$ (choosing the largest possible value for every CVSS parameter),
each vulnerability score can be normalized with respect to this value.
Equation 4 shows the resulting attack probability, where $i$ refers to the
attacking stage of the attack vector; the attack stage determines which view
model perspective type to choose. Equation 5 (adapted from the work in [78])
improves this model by describing an increase of the attacking likelihood for
increasing stages $i$. The underlying assumption of Equation 5 is that a
single successful attack opens up opportunities to compromise more
vulnerabilities or combinations of vulnerabilities; more chances for the
attacker to find vulnerabilities increase the likelihood of a successful
attack.
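A short sketch contrasts Equation 4 with Equation 5 for the networking domain
($f_{cvss}(\text{Net})=14.5$ from Table I) and illustrates the
stage-dependent growth:

```python
import math

f_net = 14.5  # networking CVSS total from Table I

# Equation 4: stage-independent attack probability.
a_const = f_net / 42.5
# Equation 5: the attack likelihood increases with the stage i.
a_stage = [1 - math.exp(-2 * i * f_net / 42.5) for i in (1, 2, 3)]
print(round(a_const, 2), [round(a, 2) for a in a_stage])
# 0.34 [0.49, 0.74, 0.87] -- later stages become easier to attack
```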
In reality, the defense mechanism probabilities $d$ depend on the actual
attack actions. We nevertheless model the probabilities of successful
countermeasures independently of the attack, because attack attribution
formulas that would enable attack identification, assessment, and reaction
actions for all attacking stages are missing. This measure simplifies the
quantitative calculations in the Markov Chain model but requires further
investigation in the future.
TABLE II: IoV Cyber Attack Path Propagation

ID | Attacker Type | Model Type | Sample Attack Path Propagation
---|---|---|---
1 | Unauthorized | Cloud | Browser redirect attack & Shell access (C-Net) $\Rightarrow$ Privilege escalation (C-SW) $\Rightarrow$ Access to ECU (V-Net) $\Rightarrow$ CAN bus attack (V-Data) [10]
2 | Unauthorized | Infra & Edge | Road sign attack (I-HW (a) or I-Net (b)) $\Rightarrow$ Road sign distortion (I-Data) $\Rightarrow$ Camera image data modification (V-Data) [25]
3 | Unauthorized | Vehicle & Peripherals | Eavesdropping wireless TPMS (V-Net) $\Rightarrow$ Reverse engineering attack (V-SW) $\Rightarrow$ Packet injection attack (V-Data) [79]
4 | Authorized | Cloud or Infra & Edge | Malicious software update (V-SW) & Driver assistance attack (V-Data) [80]
5 | Authorized | Vehicle & Peripherals | Disabled ECU hardening & CAN replay attack (V-Data) [26] (based on [81])
## V Evaluation of Quantitative System-Level Security Verification
This section performs and evaluates our quantitative system-level security
verification on a selection of attack paths that contain assets of the
location service scenario. To do so, Section V-A introduces chosen attack
vectors. The assets of these attack paths are allocated to 4+1 view
perspectives. Section V-B utilizes the assets of the attack vectors and
applies them together with our CVSS scores to the Markov Chain verification
model. The last section lists the results of the security verification and
evaluates its features and trends.
### V-A Selection of IoV Attack Paths
Attack vectors define the points of an infrastructure where an attacker
enters the system without authorization. The sum of all attack vectors
represents the attack surface, which is what an attacker faces when attacking
a system [82]. There are different methods for an attacker to enumerate,
analyze, exploit, and enter the attack surface [83]. Afterwards, an attacker
follows an arbitrary path until she reaches the target. Regarding attack
propagation characteristics, the attacking capability and scope of the
attacker model remain the dominating properties for determining the depth of
an attack [84].
For the scope of our work, we consider unauthorized as well as authorized
attackers with equal skill level to specify different initial starting points
for attacks. Table II shows our selection of IoV attacks that contain assets
identified during the 4+1 view model analysis in Section III. The attack path
with ID 1 starts in the cloud domain and eventually compromises the vehicle
location service by provoking a lane departure. With the help of our
analysis, the affected assets at the different stages of the attack map to a
networking attack in the cloud (C-Net), a software compromise in the cloud
(C-SW), an in-vehicle network attack (V-Net), and a vehicle data attack
(V-Data). Grouping the propagation stages into hardware, networking,
software, and data attacks serves the asset-to-view-category mapping and
thereby facilitates the application of our Markov Model for security
verification.
The infrastructure attack with ID 2 initially targets road signs to cause
distortions in camera images that are processed by the 3D proposal network
and, thus, the multi-sensor fusion unit. The attack with ID 3 directly
targets the vehicle TPMS with the intention to either stop the vehicle or
compromise vehicle privacy (tracking location data) by indicating wrong tire
pressure values. Analysing this variety of attacks with different attack path
lengths allows us to investigate the behavior of our security methodology as
well as whether it can be applied to arbitrary attacks in the IoV
infrastructure. The assumption for the authorized attack with ID 5 is a
malicious but trusted developer with OBD and ECU authentication credentials.
For this type of attacker, performing CAN replay attacks that eventually
affect vehicle trajectories of the ADAS system should not be possible.
Future replacement of ECU software modules in AUTOSAR adaptive requires the
update and configuration service within ARA to check integrity, authenticity,
and sometimes confidentiality of module binaries [80]. We assume that attack
path with ID 4 requires an authorized attacker with knowledge of security
credentials (symmetric cryptography session keys as well as asymmetric
cryptography key pairs) to pass integrity, authenticity, and confidentiality
checks of the wireless communication service. Such an attack can be performed
by stealing credentials from dedicated communication devices located in the
cloud or IoV infrastructure. For the attack target, we assume an ECU software
module running a location service component (e.g. part of ADAS).
### V-B Evaluation of our Methodology (Perform Security Verification of
Attack Paths)
In order to evaluate the 4+1 view model analysis in the security context, we
apply our methodology to our selection of attacks (see Table II). With the 4+1
view model analysis, it becomes feasible to allocate every asset of IoV attack
paths to one of the domains. At the same time, each view model domain marks a
stage of our Markov Chain transition model to verify security quantitatively.
The following paragraphs demonstrate the process of applying one of the attack
paths to our results gained from the 4+1 view model analysis. Afterwards, we
contrast the results of different attack paths to determine the behavior,
features, and possibilities of our concept. To showcase the application of
the view model security verification concept, we consider the attack path
with ID 1, consisting of a cloud network (C-Net) and software (C-SW) attack
as well as a vehicle networking (V-Net) and data (V-Data) attack.
The values for the calculation of the Markov Chain state transition matrix $P$
depend on the probabilities of successful attacks and patches. Table I shows
the vulnerability scores per asset for each domain of the view model
perspectives. Higher values determine a higher likelihood of attacking an
asset successfully. The domain-specific CVSS score divided by the maximum
possible CVSS score determines the attacking probability $a$ (see Equation 4).
Considering the initial cloud networking attack of attack path with ID 1, the
attacking probability $a$ depends on the networking stage CVSS score
$f_{cvss}(\text{Net})=14.5$ and calculates as shown in Equation 6. To simplify
the evaluation, we leverage a constant value for a successful countermeasure
probability $d$, which can be seen in Equation 7.
$a=\frac{14.5}{42.5}=0.34;\quad(1-a)=\frac{28}{42.5}=0.66$ (6)

$d=\frac{1}{10}=0.1;\quad(1-d)=\frac{9}{10}=0.9$ (7)
For the attack path with ID 1, the attack stages one to four are of type
C-Net, C-SW, V-Net, and V-Data. Considering the forward transition
probabilities of Figure 8 ($a_{1}=a$ at stage $i=1$, $a_{i}=a(1-d)$ at
intermediate stages, and no forward transition out of the target state), the
attack probabilities calculate as follows. At stage $i=1$, the first attack
transition probability calculates as
$a_{1}=1-e^{-2(\frac{14.5}{42.5})}=0.4946$, assessing a cloud networking
attack. Subsequent stages calculate as
$a_{2}=(1-e^{-4(\frac{9.6}{42.5})})\cdot(1-0.1)=0.53541$,
$a_{3}=(1-e^{-6(\frac{14.5}{42.5})})\cdot(1-0.1)=0.78381$, and
$a_{4}=1-e^{-8(\frac{17.7}{42.5})}=0.9643$, assessing a cloud software, a
vehicle network, and a vehicle data attack, respectively. Figure 9 indicates
these numbers with the blue line of the attack path with ID 1. For the total
attack probability (including all forward transition probabilities), the
product of these values results in
$a_{1}\cdot a_{2}\cdot a_{3}\cdot a_{4}=0.2001=20\%$, as indicated in
Table III. The other values of Table III correspond to the remaining attack
paths of Table II; Figure 9 shows the intermediate probability values.
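As a cross-check, the following sketch strings the stage probabilities of
attack path ID 1 together (our reading: Equation 5 per stage, with the
countermeasure factor $(1-d)$ applied at the intermediate stages only) and
reproduces the numbers above up to rounding:

```python
import math

CVSS = {"Data": 17.7, "SW": 9.6, "Net": 14.5, "HW": 7.0}  # Table I totals
path = ["Net", "SW", "Net", "Data"]  # C-Net -> C-SW -> V-Net -> V-Data
d = 0.1                              # constant countermeasure probability

total = 1.0
for i, domain in enumerate(path, start=1):
    a_i = 1 - math.exp(-2 * i * CVSS[domain] / 42.5)  # Equation 5
    if 1 < i < len(path):            # defense only at intermediate stages
        a_i *= 1 - d
    total *= a_i
    print(f"a_{i} = {a_i:.5f}")
print(f"total = {total:.4f}")        # ~0.2001, i.e. about 20 %
```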
Figure 9: Attack probabilities (see Equation 5) of attack paths (Table II) at
different domain stages $i$ with CVSS score $f_{cvss}(i)$.
Our results show that, with the 4+1 view model analysis, arbitrary IoV attack
paths can be mapped to view model domains. This enables comparative and
quantitative system-level security verification of system assets. The attack
realization probabilities of the initial states align with the expected
behavior that longer attack paths yield less probable attacks. The lower
percentages for paths originating from the cloud and infrastructure locations
confirm this claim.
Furthermore, authorized attackers have higher probabilities of successfully
attacking localization services, which aligns with expectations; this can be
seen when comparing the authorized and unauthorized attack probability
results. The outcome makes sense given the size of an in-vehicle attack
propagation path compared to an Internet attack propagation path. Similarly,
direct vehicle or close-proximity attacks are more likely to affect location
data. An explanation could be that the attack set of the local attacker
contains the attacks of the remote attacker as a subset.
It is important to emphasize that changing our assumptions, and with that the
CVSS parameterization, changes the outcomes of the security verification.
Hence, the variance in the security verification results depends on our
IoV-driven parameterization. Additionally, decomposing the views into more
fine-grained categories drastically lowers the attack probabilities (longer
paths) due to the multiplicative aggregation. Here, additional calculations
are required to stabilize the multiplication of numbers smaller than one. In
general, the level of detail should remain similar for security designs of
comparable complexity.
TABLE III: Attack Realization Probabilities of All Initial Attack States

Attacker Type | Cloud | Infra & Edge | Vehicle
---|---|---|---
Authorized | 29.47 % | 29.47 % | 56.52 %
Unauthorized | 20.01 % | 18.80 % (a) 33.13 % (b) | 24.30 %
## VI Conclusion
This paper applies the well-established 4+1 view model in the security context
of the IoV and utilizes agile threat modeling and risk assessment for a
structured identification and security assessment of IoV assets. The view
model analysis separates data, software, networking, and hardware categories
and enables the allocation of attack path assets to these respective domains.
With the mapping of attack path assets to respective 4+1 view model domains,
our Markov Chain model uses state transition probabilities to assess attack
and defense probabilities of individual assets. Attack paths with comparable
size allow system-level security verification of multiple IoV assets. The
results show the applicability of our methodology to arbitrary IoV assets
included in attack paths. Our CVSS parameterization is driven by the IoV
infrastructure analysis and indicates the security-critical parts of the IoV
architecture.
### VI-A Future Work
* •
To support the quantitative security verification results based on the 4+1
view model analysis, hacker teams need to apply comprehensive and
multidisciplinary methodologies such as QuERIES [85].
* •
No research has been conducted with regard to automating the security
analysis approach. To cope with the complex systems of the IoV, automation of
analysis concepts is mandatory for system-wide security coverage [59]. Here,
it is possible to utilize existing automated threat modeling and risk
assessment tools, as in [26], on the separated perspectives.
* •
The CVSS is an older vulnerability scoring system that is not tailored to
IoV-specific properties. At the time of writing, the work in [86] introduced
a new vulnerability scoring system that better covers vulnerabilities of the
IoV. Changing our assumptions and the parameterization of the CVSS scores
changes our security verification results.
## References
* [1] O. Kaiwartya, A. H. Abdullah, Y. Cao, A. Altameem, M. Prasad, C.-T. Lin, and X. Liu, “Internet of vehicles: Motivation, layered architecture, network model, challenges, and future aspects,” _IEEE Access_ , vol. 4, pp. 5356–5373, 2016.
* [2] J. C. Wong, “Uber concealed massive hack that exposed data of 57m users and drivers,” _The Guardian_ , vol. 22, 2017.
* [3] M. H. Eiza and Q. Ni, “Driving with sharks: Rethinking connected vehicles with vehicle cybersecurity,” _IEEE Vehicular Technology Magazine_ , vol. 12, no. 2, pp. 45–51, 2017.
* [4] S. Rizvi, J. Willet, D. Perino, S. Marasco, and C. Condo, “A threat to vehicular cyber security and the urgency for correction,” _Procedia computer science_ , vol. 114, pp. 100–105, 2017.
* [5] C. Schmittner, G. Griessnig, and Z. Ma, “Status of the development of iso/sae 21434,” in _European Conference on Software Process Improvement_. Springer, 2018, pp. 504–513.
* [6] S. Parkinson, P. Ward, K. Wilson, and J. Miller, “Cyber threats facing autonomous and connected vehicles: Future challenges,” _IEEE transactions on intelligent transportation systems_ , vol. 18, no. 11, pp. 2898–2915, 2017.
* [7] P. B. Kruchten, “The 4+ 1 view model of architecture,” _IEEE software_ , vol. 12, no. 6, pp. 42–50, 1995.
* [8] G. Elahi, E. Yu, and N. Zannone, “Security risk management by qualitative vulnerability analysis,” in _2011 Third International Workshop on Security Measurements and Metrics_. IEEE, 2011, pp. 1–10.
* [9] Y. Cheng, Y. Du, J. Xu, C. Yuan, and Z. Xue, “Research on security evaluation of cloud computing based on attack graph,” in _2012 IEEE 2nd International Conference on Cloud Computing and Intelligence Systems_ , vol. 1. IEEE, 2012, pp. 459–465.
* [10] C. Maple, M. Bradbury, A. T. Le, and K. Ghirardello, “A connected and autonomous vehicle reference architecture for attack surface analysis,” _Applied Sciences_ , vol. 9, no. 23, p. 5101, 2019.
* [11] A. Oyler and H. Saiedian, “Security in automotive telematics: a survey of threats and risk mitigation strategies to counter the existing and emerging attack vectors,” _Security and Communication Networks_ , vol. 9, no. 17, pp. 4330–4340, 2016.
* [12] D. S. Cruzes, M. G. Jaatun, K. Bernsmed, and I. A. Tøndel, “Challenges and experiences with applying microsoft threat modeling in agile development projects,” in _2018 25th Australasian Software Engineering Conference (ASWEC)_. IEEE, 2018, pp. 111–120.
* [13] P. Karkhanis, M. G. van den Brand, and S. Rajkarnikar, “Defining the c-its reference architecture,” in _2018 IEEE International Conference on Software Architecture Companion (ICSA-C)_. IEEE, 2018, pp. 148–151.
* [14] J. Hughes and G. Cybenko, “Quantitative metrics and risk assessment: The three tenets model of cybersecurity,” _Technology Innovation Management Review_ , vol. 3, no. 8, 2013.
* [15] M. G. Jaatun, K. Bernsmed, D. S. Cruzes, and I. A. Tøndel, “Threat modeling in agile software development,” in _Exploring Security in Software Architecture and Design_. IGI Global, 2019, pp. 1–14.
* [16] A. Shostack, _Threat modeling: Designing for security_. John Wiley & Sons, 2014.
* [17] S. Hussain, A. Kamal, S. Ahmad, G. Rasool, and S. Iqbal, “Threat modelling methodologies: a survey,” _Sci. Int.(Lahore)_ , vol. 26, no. 4, pp. 1607–1609, 2014.
* [18] S. Barnum, “Common attack pattern enumeration and classification (capec) schema description,” _Cigital Inc, http://capec.mitre.org/documents/documentation/CAPEC_Schema_Description_v1_, vol. 3, 2008.
* [19] T. Casey, “Threat agent library helps identify information security risks,” _Intel White Paper_ , vol. 2, 2007.
* [20] A. Karahasanovic, P. Kleberger, and M. Almgren, “Adapting threat modeling methods for the automotive industry,” in _Proceedings of the 15th ESCAR Conference_ , 2017, pp. 1–10.
* [21] C. Albert and A. J. Dorofee, “Octave criteria, version 2.0,” 2001.
* [22] I. ETSI, “Intelligent transport systems (its); security; threat, vulnerability and risk analysis (tvra),” Technical report, ETSI TR 102 893, European Telecommunications Standards …, Tech. Rep., 2010.
* [23] G. Macher, E. Armengaud, E. Brenner, and C. Kreiner, “A review of threat analysis and risk assessment methods in the automotive context,” in _International Conference on Computer Safety, Reliability, and Security_. Springer, 2016, pp. 130–141.
* [24] M. M. Islam, A. Lautenbach, C. Sandberg, and T. Olovsson, “A risk assessment framework for automotive embedded systems,” in _Proceedings of the 2nd ACM International Workshop on Cyber-Physical System Security_ , 2016, pp. 3–14.
* [25] M. Hamad and V. Prevelakis, “Savta: A hybrid vehicular threat model: Overview and case study,” _Information_ , vol. 11, no. 5, p. 273, 2020.
* [26] W. Xiong, F. Krantz, and R. Lagerström, “Threat modeling and attack simulations of connected vehicles: a research outlook,” in _Proceedings of the 5th International Conference on Information Systems Security and Privacy (ICISSP)_ , 2019.
* [27] J. Pacheco and S. Hariri, “Iot security framework for smart cyber infrastructures,” in _2016 IEEE 1st International Workshops on Foundations and Applications of Self* Systems (FAS* W)_. IEEE, 2016, pp. 242–247.
* [28] Z. Ma and C. Schmittner, “Threat modeling for automotive security analysis,” _Advanced Science and Technology Letters_ , vol. 139, pp. 333–339, 2016.
* [29] I. ISO, “26262: Road vehicles-functional safety-part 6: Product development at the software level,” _International Organization for Standardization (ISO)_ , 2011.
* [30] V. Committee _et al._ , “J3061 cybersecurity guidebook for cyber-physical vehicle systems,” _tech. rep., SAE International_ , 2016.
* [31] C. Miller and C. Valasek, “A survey of remote automotive attack surfaces,” _black hat USA_ , vol. 2014, p. 94, 2014.
* [32] R. Iqbal, T. A. Butt, M. O. Shafique, M. W. A. Talib, and T. Umer, “Context-aware data-driven intelligent framework for fog infrastructures in internet of vehicles,” _IEEE Access_ , vol. 6, pp. 58182–58194, 2018.
* [33] B. Zou, M. Gao, and X. Cui, “Research on information security framework of intelligent connected vehicle,” in _Proceedings of the 2017 International Conference on Cryptography, Security and Privacy_ , 2017, pp. 91–95.
* [34] D. Rubino, “Gps vs. agps: A quick tutorial,” _Available at: http://www.windowscentral.com/gps-vs-agps-quick-tutorial (accessed 30.11.2014)_, 2009.
* [35] M. Pepe, “Cors architecture and evaluation of positioning by low-cost gnss receiver,” _Geodesy and Cartography_ , vol. 44, no. 2, pp. 36–44, 2018.
* [36] S. Lee, G. Tewolde, and J. Kwon, “Design and implementation of vehicle tracking system using gps/gsm/gprs technology and smartphone application,” in _2014 IEEE world forum on internet of things (WF-IoT)_. IEEE, 2014, pp. 353–358.
* [37] M. Bolle, S. Knoop, F. Niewels, and T. Schamm, “Early level 4/5 automation by restriction of the use-case,” in _17. Internationales Stuttgarter Symposium_. Springer, 2017, pp. 531–545.
* [38] A. Stocker and S. Shaheen, “Shared automated mobility: early exploration and potential impacts,” in _Road Vehicle Automation 4_. Springer, 2018, pp. 125–139.
* [39] J. C. Kolb, L. Wech, M. Schwabe, C. Ruzok, and C. Trost, “Technische aspekte des automatisierten fahrens am projekt des autonomen shuttlebusses in bad birnbach,” in _Autonome Shuttlebusse im ÖPNV_. Springer, 2020, pp. 57–91.
* [40] J. Cheng, J. Cheng, M. Zhou, F. Liu, S. Gao, and C. Liu, “Routing in internet of vehicles: A review,” _IEEE Transactions on Intelligent Transportation Systems_ , vol. 16, no. 5, pp. 2339–2352, 2015.
* [41] C. Sharma, S. Moylan, G. Amariucai, and E. Y. Vasserman, “An extended survey on vehicle security,” _arXiv preprint arXiv:1910.04150_ , 2019.
* [42] G. Navstar, “User equipment introduction,” _Department of Defense Document MZ10298_ , vol. 1, 1996.
* [43] X. Chen, H. Ma, J. Wan, B. Li, and T. Xia, “Multi-view 3d object detection network for autonomous driving,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2017, pp. 1907–1915.
* [44] A. R. Vetrella, G. Fasano, D. Accardo, and A. Moccia, “Differential gnss and vision-based tracking to improve navigation performance in cooperative multi-uav systems,” _Sensors_ , vol. 16, no. 12, p. 2164, 2016.
* [45] M. Á. de Miguel, F. M. Moreno, F. García, J. M. Armingol, and R. E. Martin, “Autonomous vehicle architecture for high automation,” in _Computer Aided Systems Theory – EUROCAST 2019_ , R. Moreno-Díaz, F. Pichler, and A. Quesada-Arencibia, Eds. Cham: Springer International Publishing, 2020, pp. 145–152.
* [46] J. K. Suhr, J. Jang, D. Min, and H. G. Jung, “Sensor fusion-based low-cost vehicle localization system for complex urban environments,” _IEEE Transactions on Intelligent Transportation Systems_ , vol. 18, no. 5, pp. 1078–1086, 2016.
* [47] A. Indriyatmoko, T. Kang, Y. J. Lee, G.-I. Jee, Y. B. Cho, and J. Kim, “Artificial neural networks for predicting dgps carrier phase and pseudorange correction,” _Gps Solutions_ , vol. 12, no. 4, pp. 237–247, 2008.
* [48] Y. Xu, V. John, S. Mita, H. Tehrani, K. Ishimaru, and S. Nishino, “3d point cloud map based vehicle localization using stereo camera,” in _2017 IEEE Intelligent Vehicles Symposium (IV)_. IEEE, 2017, pp. 487–492.
* [49] T. J. Teorey, S. S. Lightstone, T. Nadeau, and H. Jagadish, _Database modeling and design: logical design_. Elsevier, 2011.
* [50] V. M. Igure and R. D. Williams, “Taxonomies of attacks and vulnerabilities in computer systems,” _IEEE Communications Surveys & Tutorials_, vol. 10, no. 1, pp. 6–19, 2008.
* [51] R. P. Abbott, J. S. Chin, J. E. Donnelley, W. L. Konigsford, S. Tokubo, and D. A. Webb, “Security analysis and enhancements of computer operating systems,” National Bureau of Standards, Washington DC, Inst. for Computer Sciences and …, Tech. Rep., 1976.
* [52] R. Bisbey and D. Hollingworth, “Protection analysis: Final report,” _ISI/SR-78-13, Information Sciences Inst_ , vol. 3, 1978.
* [53] M. Bishop _et al._ , “A taxonomy of unix system and network vulnerabilities,” Technical Report CSE-95-10, Department of Computer Science, University of …, Tech. Rep., 1995.
* [54] S. Fürst and M. Bechter, “Autosar for connected and autonomous vehicles: The autosar adaptive platform,” in _2016 46th annual IEEE/IFIP international conference on Dependable Systems and Networks Workshop (DSN-W)_. IEEE, 2016, pp. 215–217.
* [55] R. Warschofsky, “Autosar software architecture,” _Hasso-Plattner-Institute für Softwaresystemtechnik: Potsdam, Germany_ , 2009.
* [56] L. Bertaux, S. Medjiah, P. Berthou, S. Abdellatif, A. Hakiri, P. Gelard, F. Planchou, and M. Bruyere, “Software defined networking and virtualization for broadband satellite networks,” _IEEE Communications Magazine_ , vol. 53, no. 3, pp. 54–60, 2015.
* [57] M. Zhang, T. Wo, T. Xie, X. Lin, and Y. Liu, “Carstream: an industrial system of big data processing for internet-of-vehicles,” _Proceedings of the VLDB Endowment_ , vol. 10, no. 12, pp. 1766–1777, 2017.
* [58] K. Tsipenyuk, B. Chess, and G. McGraw, “Seven pernicious kingdoms: A taxonomy of software security errors,” _IEEE Security & Privacy_, vol. 3, no. 6, pp. 81–84, 2005.
* [59] A. Rao, N. Carreón, R. Lysecky, and J. Rozenblit, “Probabilistic threat detection for risk management in cyber-physical medical systems,” _IEEE Software_ , vol. 35, no. 1, pp. 38–43, 2017.
* [60] R. E. Haas, D. P. Möller, P. Bansal, R. Ghosh, and S. S. Bhat, “Intrusion detection in connected cars,” in _2017 IEEE International Conference on Electro Information Technology (EIT)_. IEEE, 2017, pp. 516–519.
* [61] L. Qingqing, J. P. Queralta, T. N. Gia, Z. Zou, and T. Westerlund, “Multi sensor fusion for navigation and mapping in autonomous vehicles: Accurate localization in urban environments,” _The 9th IEEE CIS-RAM_ , 2019.
* [62] W. K. Alhajyaseen, M. Asano, and H. Nakamura, “Estimation of left-turning vehicle maneuvers for the assessment of pedestrian safety at intersections,” _IATSS research_ , vol. 36, no. 1, pp. 66–74, 2012.
* [63] J. Rabe, _Lane-Precise Localization with Production Vehicle Sensors and Application to Augmented Reality Navigation_. KIT Scientific Publishing, 2019, vol. 42.
* [64] J. Cui, L. S. Liew, G. Sabaliauskaite, and F. Zhou, “A review on safety failures, security attacks, and available countermeasures for autonomous vehicles,” _Ad Hoc Networks_ , vol. 90, p. 101823, 2019.
* [65] H. A. Kholidy, A. Erradi, S. Abdelwahed, and F. Baiardi, “A risk mitigation approach for autonomous cloud intrusion response system,” _Computing_ , vol. 98, no. 11, pp. 1111–1135, 2016.
* [66] J.-P. Monteuuis, J. Petit, J. Zhang, H. Labiod, S. Mafrica, and A. Servel, “Attacker model for connected and automated vehicles,” in _ACM Computer Science in Car Symposium_ , 2018.
* [67] A. S. Khader and D. Lai, “Preventing man-in-the-middle attack in diffie-hellman key exchange protocol,” in _2015 22nd International Conference on Telecommunications (ICT)_. IEEE, 2015, pp. 204–208.
* [68] S. Brunner, J. Roder, M. Kucera, and T. Waas, “Automotive e/e-architecture enhancements by usage of ethernet tsn,” in _2017 13th Workshop on Intelligent Solutions in Embedded Systems (WISES)_. IEEE, 2017, pp. 9–13.
* [69] S. Shreejith, P. Mundhenk, A. Ettner, S. A. Fahmy, S. Steinhorst, M. Lukasiewycz, and S. Chakraborty, “Vega: A high performance vehicular ethernet gateway on hybrid fpga,” _IEEE Transactions on Computers_ , vol. 66, no. 10, pp. 1790–1803, 2017.
* [70] L. Li, M. Yang, C. Wang, and B. Wang, “Road dna based localization for autonomous vehicles,” in _2016 IEEE Intelligent Vehicles Symposium (IV)_. IEEE, 2016, pp. 883–888.
* [71] H. A. Abdul-Ghani, D. Konstantas, and M. Mahyoub, “A comprehensive iot attacks survey based on a building-blocked reference model,” _International Journal of Advanced Computer Science and Applications_ , vol. 9, no. 3, 2018.
* [72] M. Salfer and C. Eckert, “Attack surface and vulnerability assessment of automotive electronic control units,” in _2015 12th International Joint Conference on e-Business and Telecommunications (ICETE)_ , vol. 4. IEEE, 2015, pp. 317–326.
* [73] S. Dietzel, R. van der Heijden, J. Petit, and F. Kargl, “Context-adaptive detection of insider attacks in vanet information dissemination schemes,” in _2015 IEEE Vehicular Networking Conference (VNC)_. IEEE, 2015, pp. 287–294.
* [74] J. A. Larcom and H. Liu, “Modeling and characterization of gps spoofing,” in _2013 IEEE International Conference on Technologies for Homeland Security (HST)_. IEEE, 2013, pp. 729–734.
* [75] R. Miucic, _Connected Vehicles: Intelligent Transportation Systems_. Springer, 2018.
* [76] M. Schiffman, A. Wright, D. Ahmad, and G. Eschelbeck, “The common vulnerability scoring system,” _National Infrastructure Advisory Council, Vulnerability Disclosure Working Group, Vulnerability Scoring Subgroup_ , 2004.
* [77] S. M. Abraham, “Estimating mean time to compromise using non-homogenous continuous-time markov models,” in _2016 IEEE 40th Annual Computer Software and Applications Conference (COMPSAC)_ , vol. 2. IEEE, 2016, pp. 467–472.
* [78] M. A. McQueen, W. F. Boyer, M. A. Flynn, and G. A. Beitel, “Time-to-compromise model for cyber risk reduction estimation,” in _Quality of Protection_. Springer, 2006, pp. 49–64.
* [79] I. Rouf, R. D. Miller, H. A. Mustafa, T. Taylor, S. Oh, W. Xu, M. Gruteser, W. Trappe, and I. Seskar, “Security and privacy vulnerabilities of in-car wireless networks: A tire pressure monitoring system case study.” in _USENIX Security Symposium_ , vol. 10, 2010.
* [80] M. Steger, C. Boano, M. Karner, J. Hillebrand, W. Rom, and K. Römer, “Secup: Secure and efficient wireless software updates for vehicles,” in _2016 Euromicro Conference on Digital System Design (DSD)_. IEEE, 2016, pp. 628–636.
* [81] C. Miller and C. Valasek, “Remote exploitation of an unaltered passenger vehicle,” _Black Hat USA_ , vol. 2015, p. 91, 2015.
* [82] Oriyano and R. Blockmon, _CEH v9: Certified Ethical Hacker Version 9 Kit_ , 1st ed. USA: SYBEX Inc., 2016.
* [83] H. Al-Mohannadi, Q. Mirza, A. Namanya, I. Awan, A. Cullen, and J. Disso, “Cyber-attack modeling analysis techniques: An overview,” in _2016 IEEE 4th International Conference on Future Internet of Things and Cloud Workshops (FiCloudW)_. IEEE, 2016, pp. 69–76.
* [84] J. M. De Fuentes, L. González-Manzano, A. I. González-Tablas, and J. Blasco, “Security models in vehicular ad-hoc networks: A survey,” _IETE Technical Review_ , vol. 31, no. 1, pp. 47–64, 2014.
* [85] L. Carin, G. Cybenko, and J. Hughes, “Cybersecurity strategies: The queries methodology,” _Computer_ , vol. 41, no. 8, pp. 20–26, 2008.
* [86] Y. Lee, S. Woo, Y. Song, J. Lee, and D. H. Lee, “Practical vulnerability-information-sharing architecture for automotive security-risk analysis,” _IEEE Access_ , 2020.
*[IoV]: Internet of Vehicles
*[CAVs]: Connected Autonomous Vehicles
*[CVSS]: Common Vulnerability Scoring System
*[STRIDE]: Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, Elevation of Privilege
*[CAPEC]: Common Attack Patterns Enumeration and Classification
*[TAL]: Threat Agent Library
*[TARA]: Threat Agent Risk Assessment
*[OCTAVE]: Operationally Critical Threat, Asset, and Vulnerability Evaluation
*[TVRA]: Threat Vulnerability and Risk Analysis
*[MOL]: Methods and Objectives Library
*[CEL]: Common Exposure Library
*[ISO 26262]: Road Vehicles Functional Safety
*[SAE J3061]: Cybersecurity Guidebook for Cyber-Physical Vehicle Systems
*[CAN]: Controller Area Network
*[TTC]: Time-to-Compromise
*[CAV]: Connected Autonomous Vehicle
*[OEMs]: Original Equipment Manufacturers
*[SAE]: Society of Automotive Engineers
*[LIDAR]: Light Detection and Ranging
*[DGPS]: Differential Global Positioning System
*[SLAM]: Simultaneous Localization and Mapping
*[V2C]: Vehicle to Cloud
*[GPS]: Global Positioning System
*[ML]: Machine Learning
*[RISOS]: Research in Secured Operating Systems
*[PA]: Protection Analysis
*[OS]: Operating System
*[CRUD]: Create, Read, Update, and Delete
*[AUTOSAR]: AUTomotive Open System ARchitecture
*[ECU]: Electronic Control Unit
*[APIs]: Application Programming Interfaces
*[ARA]: AUTOSAR Runtime Environment for Adaptive Applications
*[A-GPS]: Assisted GPS
*[IMU]: Inertial Measurement Unit
*[ADAS]: Advanced Driver Assistance System
*[OSI]: Open Systems Interconnection
*[V2I]: Vehicle to Infrastructure
*[P2I]: Peripheral to Infrastructure
*[MITM]: man-in-the-middle
*[HMI]: Human-Machine Interface
*[IoT]: Internet of Things
*[TPMS]: Tire Pressure Monitoring System
*[OBD]: On-Board Diagnostics
*[TCM]: Telematics Control Module
*[HSM]: Hardware Security Module
*[AV]: Access Vector
*[AC]: Access Complexity
*[IB]: Impact Bias
*[BS]: Base Score
*[TS]: Temporal Score
*[ES]: Environmental Score
*[CIB]: Confidentiality Impact Bias
*[IIB]: Integrity Impact Bias
*[AIB]: Availability Impact Bias
# TrustSECO: An Interview Survey into Software Trust
Universiteit Utrecht
Floris Jansen - 6002919, Dr. R.L. Jansen
(January 2021)
## Abstract
The software ecosystem is a trust-rich part of the world. Collaboratively,
software engineers trust major hubs in the ecosystem, such as package
managers, repository services, and programming language ecosystems. This
trust, however, is often broken by vulnerabilities, ransomware, and abuse from
malignant actors.
But what is trust? In this paper we explore, through twelve in-depth
interviews with software engineers, how they perceive trust in their daily
work. From the interviews we conclude three things. First, software engineers
make a distinction between an adoption factor and a trust factor when
selecting a package. Secondly, while in literature mostly technical factors
are considered as the main trust factors, the software engineers in this study
conclude that organizational factors are more important. Finally, we find that
different kinds of software engineers require different views on trust, and
that it is impossible to create one unified perception of trust.
Keywords: software ecosystem trust, empirical software engineering, TrustSECO,
external software adoption, cross-sectional exploratory interview analysis,
trust perception.
###### Contents
1. 1 Introduction
2. 2 Framework
1. 2.1 Trust
2. 2.2 Bias
3. 3 Research Methods
1. 3.1 Literature study
2. 3.2 Interviews
4. 4 Results
1. 4.1 Literature study
1. 4.1.1 Technical adoption factors
2. 4.1.2 Human/organizational adoption factors
3. 4.1.3 Economic adoption factors
4. 4.1.4 Trust factors
2. 4.2 Interviews
1. 4.2.1 Section 1, adoption factors and selection procedure
2. 4.2.2 Section 2, Trust factors and metrics
3. 4.2.3 Section 3, Personal bias and experience
5. 5 Discussion
6. 6 Conclusion
7. 7 Appendix
1. 7.1 Literature review
1. 7.1.1 Article list SLR
2. 7.2 Interviews
1. 7.2.1 Informed consent
2. 7.2.2 Interview layout
3. 7.2.3 Interview protocol
3. 7.3 Adoption factors
1. 7.3.1 Technical adoption factors
2. 7.3.2 Organizational adoption factors
3. 7.3.3 Economical adoption factors
4. 7.4 Trust factors
1. 7.4.1 Technical trust factors
2. 7.4.2 Organizational trust factors
5. 7.5 Trust metrics
1. 7.5.1 Technical trust metrics
2. 7.5.2 Organizational trust metrics
6. 7.6 Odyssey Momentum
## Chapter 1 Introduction
Software engineers use software packages from different package managers to
create new solutions [Mojica et al., 2014] [Nguyen et al., 2020]. There is a
significant amount of implicit trust in these packages: while a package could
easily be compromised, software engineers assume that because it comes from a
reliable source, it must be trustworthy. This is not always the case [Duan et
al., 2020]. Moreover, there are several attack vectors that can compromise a
package, such as registry exploitation or typosquatting [Hou F, 2020].
Before the factors that constitute trust in software packages and package
repositories are examined, one must look at how software engineers choose
software packages, how trust is gained, and which impact factors influence
this. The TrustSECO project aims to uncover all the factors that influence
the trust that software engineers have in software packages. In order to
uncover these impact factors, a survey will be developed whose main aim is to
uncover how software engineers perceive trust. [Vargas et al., 2020] has done
similar research and found 26 factors that influence the selection process of
software packages. Rather than looking at the whole selection process, this
research will focus only on the trust aspect of that selection process. This
will be done by analyzing existing literature and the results of
cross-sectional interviews with experts. This information will then form the
basis for a large-scale survey. This research thus sets out to find which
trust factors influence the decision to choose software packages.
## Chapter 2 Framework
### 2.1 Trust
One of the most important aspects of collaboration is trust [Bunduchi, 2013].
Moreover, trust creates the basis for decisions about long-term product use
[Cho et al., 2015]. Using external software is just that: collaboration. In
order to find out which factors induce trust in packages, one must take a
look at the term trust. The term is widely used in computer science and has
many different definitions across the spectrum [Artz and Gil, 2007].
There has been a lot of research on this subject, which led to the following
three most general and common definitions of trust [Artz and Gil, 2007].
* •
“Trust is a subjective expectation an agent has about another’s future
behavior based on the history of their encounters.” [Mui et al., 2002]
* •
“Trust is the firm belief in the competence of an entity to act dependably,
securely, and reliably within a specified context.” [Grandison and Sloman,
2000]
* •
“Trust of a party A to a party B for a service X is the measurable belief of A
in that B behaves dependably for a specified period within a specified context
(in relation to service X).” [Olmedilla et al., 2006]
The first definition is reputation-based because it concerns the producer of
the software rather than the software itself. While the producer is highly
relevant to gaining trust in a software package, the producer will not be
more important than the product or service itself. This is because this
research looks at the factors that induce trust in software packages, not at
the factors that induce trust in the entities that develop the software.
The second definition suits this research better, since it concerns the
belief in the competence of the software. The characteristics of the software
therefore have to ensure that it acts dependably, securely, and reliably.
These characteristics will induce a list of factors that ensure the system
acts this way.
The third and final definition of trust concerns a service from one party to
another. This trust also stems from the belief in the product or service
rather than the party that produces it. Therefore, this definition of trust
also suits this research.
The definition of trust that will be leading for this research is:
“Trust of a party A to a party B for a service X is the measurable belief of A
in that B behaves dependably for a specified period within a specified context
(in relation to service X).”
This definition specifies the collaboration of two parties regarding a
specific service or product. Thus, the trust influence of the developer is
not discarded while the focus remains on the product, as is the scope of this
research.
### 2.2 Bias
Bias is a form of error that can affect research [Sica, 2006]. For this
research, it may occur in the form of selection bias. Selection bias may
occur if the participants in the interviews do not represent the general
ideas and thoughts of the average participant [Hernán et al., 2004].
Therefore, it is important that the participants have different perspectives
and roles on the same matter. This leads to participants from different
organizations, across different fields, jobs, and functions.
Another, more intuitive form of bias arises when several trust factors are
discussed in the interviews: personal bias. After all, each participant has
their own personal experiences with packages. Minimizing this bias will be
next to impossible, since the participant might not even be aware of it.
However, this personal bias will be decreased by first letting the
interviewee answer some more technical, impersonal questions before diving
into the personal part.
## Chapter 3 Research Methods
To find what factors influence trust in software packages, one must first
find all the adoption factors that are considered when making such a
decision. After this, a selection can be made of the factors that actually
contribute to trusting a package. In order to explore the impact factors
already present in literature, a literature study will be conducted. These
results will be compared with the results from the cross-sectional interviews
with software engineers to complement the current literature. This process is
shown in Figure 3.1.
Figure 3.1: Research process
### 3.1 Literature study
The literature study will be based on the SLR done by the TrustSECO team.
This SLR will try to uncover the factors already existing in literature.
Various search queries will be entered in the following search engines:
* •
Google scholar
* •
IEEE Xplore
* •
ScienceDirect
* •
Jstor
Figure 3.2 illustrates all the used search queries. By systematically using
all the combinations as search queries, all the relevant literature is found.
This relevant literature will be stored and analyzed for trust factors.
Figure 3.2: Search terms
The combination of these search terms will result in a list of articles. This
list will be narrowed down through exclusion and inclusion rules. These are as
follows:
* •
Literature should be about open source software
* •
Literature should list at least one impact factor for adopting open source
software or one factor for gaining trust in a package
* •
Literature should be public and accessible
The last round of elimination will be done through abstract analysis of the
articles. This will result in a final list of articles to be analyzed. These
articles can be found in appendix 7.1.1. They will be scoured for adoption
and trust factors and will be categorised as follows:
* •
Technical factors
* •
Organizational factors
* •
Economic factors
The categorization is based on [Vargas et al., 2020], which explains why a
certain factor is categorized as such. Technical factors relate to the
release process, code quality attributes, and functionality. Organizational
factors concern the individual perception, the community around the project,
and other aspects of the organization where the package is developed. The
economic factors cover the financial aspects of package selection, such as
licences, total cost of ownership, and risks.
### 3.2 Interviews
The literature review will provide a solid foundation of knowledge to start
creating interview questions. Interviews can be used to get detailed personal
experiences and thought processes in the selection process [Hiller and
DiLuzio, 2004]. These semi-structured interviews shall be conducted with
software engineers, DevOps, architects and other experts (see table 3.1) The
choice for semi-structured is made since this gives the freedom for the
interviewee to really elaborate on personal experiences. In addition, this
allows the interviewer to create a new line of questioning based on those
experiences [Hove and Anda, 2005]. These interviews are held in English or
Dutch. Since the literature provided English factors, a list of translated
terms will be provided to ensure that the interviews in Dutch will not yield
different results because of difference in understanding in certain
terminology across these languages. Prior to the interview, an interview
protocol is created based on [Jacob and Furgerson, 2012].
These semi-structured interviews are a form of exploratory research.
Exploratory research has been described by, among others, [Glaser and
Strauss, 2017] with the discovery of grounded theory. The goal of exploratory
research is to form hypotheses rather than to test them [Kothari, 2004]. This
is the case for this research, since the objective is to define a hypothesis
describing which factors lead to trusting a software package.
Each interview discusses roughly 9 questions regarding software trust. In
order to ensure the participants have the required knowledge to answer the
interview questions, certain standards have to be met, namely:
* •
The participant needs to speak English or Dutch fluently
* •
The participant has to have been involved with the selection process of
software packages
* •
The participant needs at least three years of relevant working experience
Nr | Organisation | Sector | Size | Function/role | Experience
---|---|---|---|---|---
P1 | Triodos Bank | Banking | 1000+ | Software Engineer | 9
P2 | Bol.com | E-commerce | 1500+ | Product Owner | 6
P3 | Universiteit Twente | Education | 3000+ | Tech Product owner | 20
P4 | Keylane | Insurance | 400+ | Software Engineer | 7
P5 | Xebia | IT Consultancy | 300+ | DevSecOps | 26
P6 | BOC Group | IT Consultancy | 200+ | Web developer | 5
P7 | Channable | Marketing | 100+ | DevOps | 5
P8 | Ministry of Defence | Military | 3000+ | Software Engineer | 12
P9 | NOS | News | 600+ | Software Engineer | 7
P10 | Gemboxx | Software dev | 10+ | Software Engineer | 5
P11 | Sogeti | Software dev | 2500+ | Software Architect | 23
P12 | Grasple | Ed-sec | 10+ | Software Engineer | 16
Table 3.1: Participant overview
The responses give insight into why certain packages are chosen and the
factors that contributed to this decision, and moreover answer:
How do software engineers develop trust in a package?
In order to answer this question, the interviews must first provide answers
to the following questions:
* •
What factors are important when selecting external software packages, and
what is the protocol?
* •
Which of the selection factors contribute to gaining trust in a package?
* •
How do personal aspects influence trust in packages?
The first and second sub-questions may seem alike. However, the literature
review has shown that the factors that induce trust are a subset of the
factors that influence the choice of a package. These sub-questions will each
be answered through a section in the interview. Each section contains the
actual interview questions that the interviewee will be answering: section
one will contain Q1…Q4, section two will contain Q5 and Q6, and section three
will contain Q7…Q9. The full interview structure can be seen in appendix
7.2.2. These interviews are recorded and transcribed. After transcript
approval from the participant, a proper analysis will conclude the most
important trust factors. This will create a list of factors that have quotes
to back them up.
## Chapter 4 Results
### 4.1 Literature study
The literature study will first take a look at which factors influence the
adoption of certain software packages; these will be categorized based on the
research of [Vargas et al., 2020]. What follows is an analysis of which of
those factors actually lead to more trust in a software package.
[Sánchez et al., 2018] was found during the SLR. It describes a massive
systematic literature review conducted to find the most important adoption
factors when choosing open source software packages, and it sets the stage
for the first part of this literature review. The factors will be discussed
briefly to provide context and eventually be analyzed to see whether they
also induce trust.
#### 4.1.1 Technical adoption factors
Technical factors are encountered very frequently in literature. These
factors concern the technical aspects of software packages and are found in
49 of the 54 pieces of literature. Impact factors such as compatibility with
existing software, reliability, usability, and customization are the most
common and thus carry the highest importance. A brief description of the
impact factors found, according to [Wheeler, 2011], gives a good
understanding of how important certain factors can be:
* •
Compatibility: This impact factor refers to the degree to which a piece of
software integrates with existing software, and whether additional programs
are required to adopt it.
* •
Reliability: This factor measures whether the software gives the expected
answers. It is comparable to availability; it is not a quantitative property
and is thus hard to measure.
* •
Usability: This describes how intuitive the program is for the user, which
affects how difficult the software is to learn. Highly usable software is
adopted faster.
* •
Customization: Customization is the degree to which a component can be changed
to do something it could not do before, i.e., configured beyond its initial
configuration.
* •
Documentation: This refers to the available qualitative and quantitative
documentation on a software package. This impact factor also falls under the
’support’ impact factor; however, it is heavily discussed in literature since
it describes the technical capabilities of a package.
* •
Re-usability: Re-usability concerns how much of the actual code can be reused
for purposes other than those the particular library was written for. The more
general a piece of software is, the more goals it can serve.
* •
Triability: This factor describes how easily the software can be tried out in
a system. When trying it out is easy, several other factors can be tested
quickly, which benefits the package.
* •
Portability: This is the least mentioned factor and thus carries the least
importance of the technical factors. It concerns the ease of deploying the
package on multiple different systems.
Technical adoption factors | Mentioned number of times
---|---
Compatibility | 34
Reliability | 23
Usability | 17
Customization | 17
Documentation | 12
Maintainability | 12
Re-usability | 8
Triability | 9
Portability | 6
Table 4.1: The 9 technical factors found in literature according to [Sánchez
et al., 2018]
#### 4.1.2 Human/organizational adoption factors
This second category of factors concerns the organizational aspects of the
entity. This category also had a dominant presence in literature. Overall it
was mentioned somewhat less than the technical factors; however, it contains
the single most frequently mentioned impact factor, ”support”, which was named
in 45 of the 54 pieces.
* •
Support: This impact factor also contains ’technicalities’ like documentation
and release frequency. It is a very broad impact factor since it contains a
variety of factors that express the need to have a backup plan when the
organization does not have the technical skills to solve certain problems.
This ranges from documentation to the community.
* •
Training: The presence of sufficient training material ensures that technical
staff can learn to fix problems themselves. It also assures users that the
package is easy to learn and that they do not have to figure out the technical
details themselves.
* •
Top management support: An organization’s strategy can also impact the
decision of which packages to adopt; users select packages based on company
policies, e.g. privacy or information-protection policies
[Vargas et al., 2020].
* •
Attitude towards change: This factor describes how employees view the adoption
of new packages.
* •
Case studies of FLOSS adoption: This factor describes the success of
implementing new software packages. When a package is widely adopted, it gains
reputation, which can influence the decision process.
* •
Time of adoption: This factor concerns the total time it takes to implement a
package. The longer it takes to fully adapt to new software, the less
appealing it is.
* •
Centrality of IT: This factor describes how dependent the organization is on
the new software; the proper implementation of more central systems is far
more urgent than that of less important ones.
* •
Business process re-engineering: This describes ”when an organization is
changing its internal business processes due to any particular circumstance
(e.g. quality improvement, organizational restructuring).” This factor is
mentioned only once throughout all literature, making it the least named and
thus least important.
Organizational adoption factors | Mentioned number of times
---|---
Support | 45
Training | 25
Vendor lock-in | 13
Top management support | 10
Attitude | 6
Centrality IT | 3
Time of adoption | 2
Case studies | 2
Business process re-engineering | 1
Table 4.2: The 9 Organizational factors found in literature according to
[Sánchez et al., 2018]
#### 4.1.3 Economic adoption factors
The last category concerns the economic factors. These factors have the least
dominance in literature, but are still important for software practitioners to
consider.
* •
Total cost of ownership: Total cost of ownership contains all the costs
related to the software package. This includes licensing as well as
operational and support costs.
* •
Licensing cost: This is part of the total cost of ownership and the main cost
practitioners think of when discussing software costs; it concerns the cost of
obtaining a particular license.
* •
Operational cost: This consists of three aspects: the cost of switching
between systems, the maintenance of the solution, and the cost of implementing
the solution.
* •
Support cost: These are the costs related to external support as well as
keeping the system updated. This is referenced only twice in literature and is
thus not a very influential factor.
Economical adoption factors | Mentioned number of times
---|---
Total cost of ownership | 10
License cost | 16
Operational cost | 4
Support cost | 2
Table 4.3: The 4 economical factors found in literature according to [Sánchez
et al., 2018]
#### 4.1.4 Trust factors
The factors that influence the adoption of open source software may differ
from the factors that induce trust. Factors like functionality and reliability
are consistently ranked high for building trust, next to some other technical
and organizational factors [Del Bianco et al., 2011]. However, it seems that
the trustworthiness of a system can supersede other adoption factors
[Bernstein, 2005].
The research done by [Del Bianco et al., 2011] provides a list of factors that
are believed to affect trustworthiness the most according to its interviewees.
The two most important factors are ”Reliability” and ”Degree to which an OSS
product satisfies/covers functional requirements the most”. These are also the
two biggest technical impact factors in this research.
Technical trust factor | Rank
---|---
Reliability | 8
Alignment with software | 8
Interoperability | 7
Maintainability | 6
Standard compliance | 6
Performance | 5
Usability | 5
Security | 5
Portability | 4
Reusability | 4
Modularity | 4
Standard architecture | 4
Human interface | 3
Complexity | 2
Patterns | 2
Self-containedness | 2
Size | 1
Table 4.4: The technical attributes ranked on inducing the most trust
according to [Del Bianco et al., 2011]
The most important adoption factor in this research was support. However, on
the trustworthiness list, ”Short-term support” is number fourteen. This
illustrates that even though support is the most important factor for
adoption, for trusting software the technical factors matter more.
Another remarkable difference between adoption and trustworthiness factors is
that the economic factors are mentioned 12 (licence) and 13 (total cost of
ownership) times out of the 31 articles. Yet among the 37 named trust factors
only one economic factor is present, which is also why there is no table of
economic trust factors.
Conversely, a factor in the second-highest scoring group for trustworthiness
is customer ”Satisfaction”. While it is obvious that this factor is important
for most businesses, it is not explicitly named in literature on adopting open
source software. This could be because open source software is often used to
build a solution for customers rather than being the solution itself.
Organizational trust factor | Rank
---|---
Documentation | 7
Mid-/long existent community | 6
Community experience | 5
Short-term support | 5
Availability of support tools | 4
Environmental issues | 4
Availability of best practices | 3
Programming language uniformity | 3
Training | 2
Benchmarks/ test suites | 2
Organization | 2
Reputation | 2
Distribution channel | 1
Table 4.5: The organizational attributes ranked on inducing the most trust
according to [Del Bianco et al., 2011]
### 4.2 Interviews
The interview protocol is divided into 3 sections. These sections will be
discussed separately, each answering the sub-question set for that section.
#### 4.2.1 Section 1, adoption factors and selection procedure
Section 1 of the interview consists of two separate parts containing four
questions. The answers to these questions aim to answer the following
sub-question: What factors are important when selecting external software
packages, and what is the protocol?
##### Selection procedure
In order to get all the relevant information from the interviewees, the first
section of the interview focused on finding out the procedure for selecting
packages, as well as all the relevant factors. This procedure is highly
relevant because it describes what role a developer has when selecting
packages and the processes they have to go through, which sets the stage for
their perspective on the matter. When asked about the procedure, all
participants mentioned in some way that it depends on the situation: important
and sensitive projects require delicate measures and a more sophisticated
procedure, where less important projects do not. After that, the participants
were asked to describe the typical procedure for selecting packages in their
company. In some cases there was a formal protocol for selecting packages (P1,
P8, P12). These procedures stated the responsibilities of certain actors
within the company and delegated roles, and were put in place because of the
impact a vulnerability could have. P8: ”So if we want to use a package it
first arrives in the ’demilitarized zone’, then we conduct our analysis, both
by hand and with automated systems, and they tell us whether or not it meets
all the set requirements.”
A more common practice amongst the interviewees is an informal procedure (P2,
P3, P5..P7, P9..P11). This is not a clear set of rules and responsibilities;
however, there is a common practice when it comes to selecting packages. The
most important aspect across all informal procedures is that the decision to
adopt a package is always discussed within the development team. P3: ”We do
this by reviewing each other’s work, and review of the library’s code. Each
code change is monitored by the team.” In two cases this sometimes has to be
discussed explicitly with an architect or DevOps engineer (P7, P11). This
social safeguard ensures that all code is reviewed and none is accepted
without being looked at by at least two members of the team.
In addition to discussing these matters within the team, there is one other
aspect that is heavily relied on when selecting packages through an informal
selection process: the common sense and self-awareness of the developer. P6:
”… rules are not written in stone to be honest, and we just do what we think
is right.” While this may seem obvious, it is named very specifically by P3,
P5, P6, P10. The participants that had a formal process did not mention this,
since the risk a development team carries is safeguarded by the formal
procedure. For the informal procedure this implies that there is a lot of
implicit trust amongst team members, which can bring a significant risk to the
project.
Looking at the participants, P4 had neither a formal nor an informal protocol,
because this participant had not operated in a development team. However, as
the line of questioning continued, he revealed that he had a self-made
protocol in mind for selecting packages: over time he had built a mental model
of how to approach these matters.
##### Adoption factors
All the adoption factors that were found during the interviews are divided
into the following categories:
* •
Technical adoption factors
* •
Organizational adoption factors
* •
Economic adoption factors
| P1 | P2 | P3 | P4 | P5 | P6 | P7 | P8 | P9 | P10 | P11 | P12
---|---|---|---|---|---|---|---|---|---|---|---|---
Technical | | | | | | | | | | | |
Compatibility | | • | | | | • | • | | • | • | | •
Documentation | | • | • | • | • | • | • | • | | • | | •
Complexity | • | | • | | • | | | • | | | |
Security | | | | | | | • | • | | • | |
Source code | | • | | | | | | | • | | |
Code quality | | • | | | | | | | • | | | •
Customization | | | • | | | | | | | | |
Usability | | | | | | • | | | | | |
Organizational | | | | | | | | | | | |
Active maintenance | • | | • | • | • | • | | | • | | • | •
Number of contributors | • | | | | | | | | • | | • |
Contributor process | | | | | • | | | • | | | |
Git Issues | • | | | • | • | • | | | • | | | •
Number of users | • | • | • | | | • | | | | • | |
Backing company | | • | | • | | | | | | | • |
Ease of integration | • | • | • | | | • | | | | • | | •
Community | | | | | • | • | • | | | | |
Known vulnerabilities | | | | | | | • | • | | | |
Reputation | | • | | | | | • | • | | | |
Support | | | | • | • | | | | | | • |
Git stars | | | | • | | | | | | | |
Knowledge of team | | • | | | | | | | | | |
Economical | | | | | | | | | | | |
Licence | | | | | | | | • | | | • | •
Total cost of ownership | • | | | | | | | | | • | |
Table 4.6: Adoption factors overview
One may notice that the table holds concrete examples of factors rather than
the factors themselves. Another noticeable aspect is that some of the named
aspects overlap in meaning: e.g. ’Number of users’ and ’Number of
contributors’ are listed separately, while ’Community’ also covers the number
of users and contributors. This is because the majority of participants
described the actual actions and thought processes behind why they would look
at a certain aspect. Therefore it would not be correct to collapse those
specific aspects into a single category, even though they do fall under it.
A brief description of the specific aspects will now follow, along with a list
of quotes to back them up.
##### Technical factors
Appendix 7.3.1 holds the technical aspects found to influence the decision of
which package to choose. After every aspect there is at least one quote from
experts to support it. The following technical aspects influenced the
interviewees:
* •
Compatibility \- The package in question must align with the current framework
or build. This is an aspect that has to hold before any other characteristics
are considered.
* •
Documentation \- Depending on the size and complexity, documentation can play
a big role in the decision for a package. It also paints a picture of how
serious the project is.
* •
Complexity \- A package must not be too complex for the problem it is trying
to solve, and should make coding easier.
* •
Security \- A package should be secure; however, no clear ways to determine
this were mentioned.
* •
Source code openness \- If the source code is available, a global scan can
give an indication of the quality of the code and what it does.
* •
Code quality \- The code should do what it claims to do and nothing more.
Tests can help validate this.
* •
Customization \- If a package does not fully cover your needed
functionalities, it is very convenient if small adjustments can be made to get
full coverage.
* •
Usability \- If a package is easy to use, it will contribute to the selection
of that package.
##### Organizational factors
Appendix 7.3.2 holds the organizational aspects found to influence the
decision of which package to choose. After every aspect there is at least one
quote from experts to support it. The following organizational aspects
influenced the interviewees:
* •
Active maintenance \- If a package is actively maintained, and has been for
some time, this illustrates stability, which is an important factor in the
selection process. Several of the repository signals in this list (maintenance
activity, contributors, issues, stars, known vulnerabilities) can be queried
programmatically; a hedged sketch follows after this list.
* •
Number of contributors \- When many contributors work on a project, it also
increases the stability of the package, because if one contributor quits,
there are enough others to take over the workload.
* •
Contributor process \- This describes the ease of becoming a contributor for a
project. There is no general way of finding this out; however, one can get a
feel for how hard it is to make contributions. If this is very easy, people
with a bad agenda can also contribute and potentially introduce a
vulnerability in an upcoming release.
* •
Git Issues \- Depending on the type of Git issues, this can strongly influence
the choice of a package. Issues can be used for feature requests but also for
bugs or possible conflicts. The way these are handled paints a good picture of
how serious the project is.
* •
Number of users \- The number of users influences the choice of packages in
two ways. Firstly, if many people use a package, it is implicitly tested, so
if there is a bug or vulnerability, it will eventually be found. Secondly, if
a vulnerability were found, many people would experience it, so it would be
patched very quickly.
* •
Backing company \- A backing company can ensure the project has enough
resources to continue and therefore contributes to the stability of the
project. However, if such a backing company gains too much influence, it can
alter aspects of the project in ways that endanger its integrity.
* •
Ease of integration \- When a package is easy to integrate, it can be verified
quickly, which is important when selecting packages.
* •
Community \- A large and active community contributes to the selection of a
package, since the power of open source comes from the diversity of
contributors, users and documentation writers from different backgrounds
contributing to one project.
* •
Known vulnerabilities \- Common Vulnerabilities and Exposures (CVE) is a
database of publicly known security vulnerabilities. It can be queried to see
whether a project has had vulnerabilities; the number of vulnerabilities and
the time to resolve them provide a good indication of how serious the project
is.
* •
Reputation \- Reputation can come from the project itself, if it has made a
name for itself, or from the contributors or developing entity. One tends to
believe that an entity with a good reputation creates good software and that
one with a bad reputation creates bad software. Notably, having no reputation
carries no negative influence.
* •
Support \- Support is not a necessary aspect; however, it does positively
influence the choice of a package when present.
* •
Git stars \- GitHub stars give some indication of the popularity of the
project.
* •
Knowledge of team \- Rather than an aspect of the project, this is an aspect
of the team a developer operates in: looking at the strengths and weaknesses
of the team, and selecting packages with that in mind rather than solely based
on characteristics of the package itself.
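Several of the repository signals above are directly measurable. As a minimal
illustration, the sketch below queries the public GitHub REST API for stars,
open issues, contributor count and last-push date, and the NVD CVE API for
known vulnerabilities. The endpoints are real, but the example repository and
any interpretation of the numbers are illustrative assumptions, not part of
the interview findings.

```python
# Hedged sketch, not the TrustSECO implementation: query a few of the
# repository signals named above via public APIs.
import requests

def github_signals(owner: str, repo: str) -> dict:
    """Fetch stars, open issues, last-push date and a contributor count."""
    base = f"https://api.github.com/repos/{owner}/{repo}"
    meta = requests.get(base, timeout=10).json()
    contributors = requests.get(f"{base}/contributors",
                                params={"per_page": 100}, timeout=10).json()
    return {
        "stars": meta.get("stargazers_count"),         # popularity signal
        "open_issues": meta.get("open_issues_count"),  # issue-handling signal
        "last_push": meta.get("pushed_at"),            # active-maintenance signal
        "contributors": len(contributors),             # lower bound: first page only
    }

def known_cves(keyword: str) -> int:
    """Count CVE entries in the NVD database matching a keyword."""
    resp = requests.get("https://services.nvd.nist.gov/rest/json/cves/2.0",
                        params={"keywordSearch": keyword}, timeout=30)
    return resp.json().get("totalResults", 0)

if __name__ == "__main__":
    print(github_signals("pallets", "flask"))               # example repository
    print("CVEs mentioning 'flask':", known_cves("flask"))  # may over-count
```

Note that unauthenticated GitHub requests are rate-limited and that an NVD
keyword search can over-count CVEs that merely mention the name; both caveats
would matter in a real pipeline.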
##### Economical factors
Appendix 7.3.3 holds the economical aspects found to influence the decision of
which package to choose. After every aspect there is at least one quote from
experts to support it. The following economical aspects influenced the
interviewees:
* •
License \- Licenses may restrict the usage of projects for certain purposes
and may influence one’s own project. A license can often be checked
programmatically; a minimal sketch follows after this list.
* •
Total cost of ownership \- The total costs are considered when selecting a
package. When using open source software this is usually not very relevant;
however, it can occasionally involve hidden complications.
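Since licenses are one of the few adoption factors with a machine-readable
representation, a hedged sketch of an automated check is given below. The
GitHub endpoint and its license field with an SPDX identifier exist as shown;
the set of licenses to flag is an assumed policy, not something prescribed by
the interviews.

```python
# Hedged sketch: flag copyleft licenses before adoption.
import requests

COPYLEFT = {"GPL-2.0", "GPL-3.0", "AGPL-3.0"}  # assumed set; adjust per company policy

def license_check(owner: str, repo: str) -> str:
    meta = requests.get(f"https://api.github.com/repos/{owner}/{repo}",
                        timeout=10).json()
    spdx = (meta.get("license") or {}).get("spdx_id", "NOASSERTION")
    if spdx in COPYLEFT:
        return f"{spdx}: copyleft, review before use in a closed-source product"
    return f"{spdx}: no copyleft flag raised"

print(license_check("torvalds", "linux"))  # example; the kernel is GPL-2.0
```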
#### 4.2.2 Section 2, Trust factors and metrics
The second section of the interview consists of two questions regarding trust
in software. Trust factors as well as trust metrics were discussed in order to
answer the following sub-question: Which factors contribute to gaining trust
in a package?
##### Trust factors
For the first three categories, the interviewees were asked to think of all
the factors that influence which packages are selected. After this, they
created a subset of those factors that actually cause them to trust a certain
package. While these two aspects are closely related, they are definitely not
the same. P1: ”For me the economical factors play a role in deciding what
package, however not in trusting the package.”
As with the adoption factors, some participants named the actual factors where
others named concrete examples of those factors. They are displayed separately
to prevent information loss. The trust-influencing aspects can also be
categorized as technical and organizational. Appendix 7.4.1 holds the
technical trust-influencing aspects and appendix 7.4.2 the organizational
ones; each aspect is supported by at least one quote. Table 4.7 displays all
aspects named during the interviews:
| P1 | P2 | P3 | P4 | P5 | P6 | P7 | P8 | P9 | P10 | P11 | P12
---|---|---|---|---|---|---|---|---|---|---|---|---
Technical | | | | | | | | | | | |
Documentation | • | • | | | | | | | | • | |
Source code openness | • | • | | | | | | | | | | •
Code quality | | • | | | | | • | | • | | |
Organizational | | | | | | | | | | | |
Number of users | • | • | | | | • | | | • | • | | •
Active maintenance | | | | • | | • | | | | | |
Community | | | • | | | • | • | | | | • | •
Contributors | | | | | • | | • | • | • | | |
Git Issues | | | | • | • | | | | | | • |
Git stars | | | | • | | | | | • | | |
Backing company | | • | | | | | | • | • | | • |
Ease of integration | | | | | | • | | | | | |
Stack overflow activity | | | • | | | | | | | | |
Table 4.7: Trust factors overview
Some trust factors were not mentioned among the adoption factors. These
factors are:
* •
Source code openness \- When the source code is available, one can go through
the actual code and decide for oneself whether this piece of code is
trustworthy. However, as P1 pointed out: ”There are so many sneaky ways to get
a backdoor into software…” Availability does not guarantee that the code is
safe, but it does add to the possible ways of finding out.
* •
Stack Overflow activity \- This factor was named specifically by one
participant and is related to the community and the issues, since Stack
Overflow is a question-and-answer platform that displays how active the
community and users are.
##### Trust metrics
Trust factors and trust metrics seem very alike; however, there certainly is a
distinction between them, particularly in the context in which they were asked
about. One could argue that trust metrics are the concrete examples of trust
factors, which is true. However, the distinction for this research comes from
the context in which the two were asked about.
The question regarding the trust factors aims to uncover which aspects are
relevant for an expert to gain trust in a software package; it concerns their
personal perspective on what is important. Sometimes this led to concrete
examples of trust factors: trust metrics. The question regarding trust
metrics, however, was asked after a detailed explanation of the TrustSECO
project and what it aims to accomplish. A situation was described to the
participants in which TrustSECO provided a trust score, and they were asked
which metrics they would value most in a trust score-breakdown. This
difference in context is what matters here.
The remarkable aspect of this contextual difference is that some participants
came up with metrics representing a category that they had not named one
question earlier, when asked which factors are relevant for trusting a
package.
As with the adoption and trust factors, some participants named a category of
metrics where others named concrete examples. For now these are listed
separately to prevent information loss. They can also be categorized into
technical and organizational metrics.
Table 4.8 shows which categories of metrics, or concrete examples of metrics,
were found:
| P1 | P2 | P3 | P4 | P5 | P6 | P7 | P8 | P9 | P10 | P11 | P12
---|---|---|---|---|---|---|---|---|---|---|---|---
Technical | | | | | | | | | | | |
Code quality | | • | | | | | | • | • | | | •
Complexity | | | | • | | | | • | | | |
Tests | | • | | • | | • | | | | | |
Organizational | | | | | | | | | | | |
Active maintenance | | | • | | | • | • | | • | • | |
Number of contributors | | | | | | • | | • | • | | |
Users | | • | • | • | | • | | • | • | • | |
Stability metrics | | | | | • | | | | | | • |
Supporting platforms | | | | | | | • | | | | |
Reputation | | • | | | | | | | | | • |
Git issues | | | | | • | | • | | • | | | •
Known vulnerabilities | | | • | | • | | • | | | | | •
Stack overflow | | | | | | | • | | | | |
Developing entity | | • | | | | | | | | • | |
Table 4.8: Trust metrics overview
One may notice that P1 does not have any trust metrics; this is because the
question was added after the first interview. Appendices 7.5.1 and 7.5.2 list
all found categories of metrics and the metrics themselves, with quotes to
back them up. A sketch of how such a score-breakdown could be computed follows
below.
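To make the idea of a trust score-breakdown concrete, the sketch below
combines a few metrics from Table 4.8 into normalized per-metric scores and a
weighted total. The weights and normalization bounds are purely illustrative
assumptions; the planned survey is what would eventually determine them.

```python
# Hedged sketch: one possible trust score-breakdown. Metric names follow
# Table 4.8; weights and normalization bounds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Metrics:
    contributors: int
    weekly_downloads: int
    days_since_last_commit: int
    known_cves: int

WEIGHTS = {"contributors": 0.2, "users": 0.3,
           "maintenance": 0.3, "vulnerabilities": 0.2}  # assumed, pending survey

def clamp01(x: float) -> float:
    return max(0.0, min(1.0, x))

def trust_breakdown(m: Metrics) -> dict:
    """Normalize each metric to [0, 1], then combine into a weighted total."""
    parts = {
        "contributors": clamp01(m.contributors / 50),       # 50+ counts as plenty
        "users": clamp01(m.weekly_downloads / 100_000),     # assumed saturation point
        "maintenance": clamp01(1 - m.days_since_last_commit / 365),
        "vulnerabilities": clamp01(1 - m.known_cves / 10),  # fewer CVEs, more trust
    }
    parts["total"] = sum(WEIGHTS[k] * v for k, v in parts.items())
    return parts

print(trust_breakdown(Metrics(30, 80_000, 14, 1)))
```

The per-metric breakdown, rather than only the total, matches what the
participants said they would want to see in such an overview.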
#### 4.2.3 Section 3, Personal bias and experience
This section of the interview consists of three questions aimed at answering
the question: How do personal aspects influence trust in a package?
This section was created because the literature mainly concerned
characteristics of the packages and projects, rather than the actual people
who have to make decisions based on these characteristics. The difficult part
is that there is no single correct answer as to how personal aspects influence
this trust, because each participant has a different perspective due to
different past experiences. However, one main conclusion can be drawn: each of
the participants has changed the way they look at selecting software over the
years. This is due to their own experiences, but also the experiences of the
people around them. Some have experienced more severe situations than others,
which results in them being more careful and paying more attention to this
problem.
Another important conclusion from this last section of the interview is that
some participants came up with additional factors or metrics that were not
mentioned before. This illustrates that the subject is not consciously thought
about much, since it takes some deliberation to come up with all the relevant
aspects, which underlines the importance of this research. It also implicitly
means that there could be more relevant factors that were not uncovered during
the interviews and would surface with more or different questions. Some
participants who had taken prior interest in this matter did not experience
this growth during the interview, since they had already given the subject
more in-depth thought at some point in their career.
The interviews lead to the conclusion that there is no one-size-fits-all list
of factors explaining why an individual trusts a certain package, since there
was a lot of variance between participants. However, by looking at enough
individuals, a well-defined list of factors can be compiled that determines
whether a package is trustworthy.
## Chapter 5 Discussion
One of the most prominent differences between the results from the literature
and the interviews is the presence of technical factors for both the adoption
and trust aspects. Technical factors are far more represented in literature
than in the interviews. This is because a factor or category of factors was
only noted if the participant named it before being asked about it explicitly.
This revealed the following: many of the technical factors named in literature
have to hold for interviewees to even consider looking at other factors. So
technical fit is definitely relevant when either adopting or trusting a
package, and has to match the current project for the package to be considered
further.
Another very important difference lies between the trust factors and the trust
metrics. First the participants were asked which factors could influence their
trust in a package; after this they were asked which metrics they would like
to see in an overview for determining trust. The latter question was initially
not intended for this research but rather to get a better idea for the general
TrustSECO project; however, it turned out to be quite relevant after all.
These two questions would sometimes give vastly different answers.
This is due to a complication seen previously in this research: the difference
between naming categories of factors versus naming concrete examples, and the
fact that there is a form of growth in the matter as the interview progresses.
When discussing the trust metrics, participants would sometimes name
completely different aspects, since the aspects had to be somewhat measurable
and they had not thought of certain aspects before.
This then also illustrates that there could be more trust factors than the
participants have named collectively, since a lot of the steps and thought
processes they go through are implicit.
##### TrustSECO
This research set out to create a list of adoption and trust factors for
software packages. The purpose of this list is to serve as guidance for
creating a survey that will sort the factors by importance. That ranking will
then guide the TrustSECO project in weighing the metrics connected to the
prioritized factors in the final product. The next step is therefore to create
a survey that ranks all the found factors, as well as all the metrics; the
metrics will eventually be the most relevant for the TrustSECO project. A
prototype was recently created during Odyssey Momentum, a mass-online
collaboration event in which the whole TrustSECO team invested a weekend to
start creating the first version of the software. Appendix 7.6 holds more
detail regarding my personal experience at this event. A minimal sketch of how
such survey rankings could be aggregated follows below.
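As a sketch of how the planned survey's factor rankings could be aggregated, a
simple Borda count is shown below. The aggregation method and the example
responses are assumptions for illustration, not a decision made by the
TrustSECO project.

```python
# Hedged sketch: aggregate hypothetical survey rankings with a Borda count.
from collections import defaultdict

def borda(rankings: list[list[str]]) -> list[tuple[str, int]]:
    """Each ranking orders factors from most to least important; a factor
    at position p in a ranking of n factors scores n - p points."""
    scores: defaultdict[str, int] = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for pos, factor in enumerate(ranking):
            scores[factor] += n - pos
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

responses = [  # hypothetical respondents ranking three interview factors
    ["Active maintenance", "Number of users", "Documentation"],
    ["Number of users", "Active maintenance", "Documentation"],
    ["Active maintenance", "Documentation", "Number of users"],
]
print(borda(responses))  # Active maintenance wins: 8 vs 6 vs 4 points
```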
## Chapter 6 Conclusion
This research has shown that many different factors play a role when selecting
or trusting software packages. It has also shown that these factors have a
different impact on stakeholders depending on their role and situation.
For this research, a distinction is made between adoption factors and trust
factors, both in literature and in the interviews. The adoption factors in
literature are largely similar to those found in the interviews; there are
some differences, for example the interviews hold concrete examples of the
better-formulated categories in literature, but few aspects really differ from
each other. The trust factors found in literature, however, differ vastly from
the majority of the trust factors found in the interviews: in literature the
technical factors score highest, whereas in the interviews the organizational
factors seemed to be the most important.
Then there is the comparison between all found adoption and trust factors.
Where the adoption factors cover everything that is considered when selecting
a package, the trust factors aim to illustrate what makes people gain trust in
a package. For the latter, this research has shown the difficulty of creating
a one-size-fits-all trust factor list, because each individual has his or her
own past experiences with this matter and thus a different perspective.
However, a good estimate of the factors that make a package trustworthy can be
generated by learning from enough individuals’ experiences.
## Chapter 7 Appendix
### 7.1 Literature review
#### 7.1.1 Article list SLR
Article title | Author | Year
---|---|---
The infeasibility of experimental quantification of life-critical software reliability | RW Butler, GB Finelli | 1991
The infeasibility of quantifying the reliability of life-critical real-time software | RW Butler, GB Finelli | 1993
Software reliability and system reliability | JC Laprie, K Kanoun | 1996
Method and system for determining software reliability | MR Siegel, JI Ferrell | 1996
Predicting software reliability from testing taking into account other knowledge about a program | A Bertolino, L Strigini | 1996
Understanding the sources of variation in software inspections | A Porter, H Siy, A Mockus, L Votta | 1998
Software metrics: successes, failures and new directions | NE Fenton, M Neil | 1999
The paradoxes of free software | SM McJohn | 2000
Open source software projects as virtual organisations: competency rallying for software development | K Crowston, B Scozzi | 2002
Government preferences for promoting open-source software: A solution in search of a problem | B Reddy, DS Evans | 2002
Why hackers do what they do: Understanding motivation and effort in free/open source software projects | KR Lakhani, RG Wolf | 2003
Motivation of software developers in Open Source projects: an Internet-based survey of contributors to the Linux kernel | G Hertel, S Niedner, S Herrmann | 2003
Why open source software can succeed | A Bonaccorsi, C Rossi | 2003
Open-source software development as gift culture: Work and identity formation in an internet community | M Bergquest | 2003
Open source software for the public administration | GL Kovács, S Drozdik, P Zuliani… | 2004
Open source software and open data standards in public administration | GL Kovács, S Drozdik, P Zuliani… | 2004
The Collaborative Integrity of Open-Source Software | GR Vetter | 2004
Resistance as motivation for innovation: Open source software | JF Kavanagh | 2004
Agents of responsibility in software vulnerability processes | A Takanen, P Vuorijärvi, M Laakso, J Röning | 2004
Relationships between open source software companies and communities: Observations from Nordic firms | L Dahlander, MG Magnusson | 2005
Participant satisfaction with open source software | BL Chawner | 2005
Motivation, governance, and the viability of hybrid forms in open source software development | SK Shah | 2006
Assessing the Impact of Project Founder Reputation and Project Structure on Motivation to Participate in Open Source Software Projects | K Ghosh, J Ziegelmayer, A Ammeter | 2006
Location, location, location: How network embeddedness affects project success in open source systems | R Grewal, GL Lilien, G Mallapragada | 2006
Impacts of license choice and organizational sponsorship on user interest and development activity in open source software projects | KJ Stewart, AP Ammeter… | 2006
Software estimation: demystifying the black art | S McConnell | 2006
Bounty programs in free/libre/open source software | S Krishnamurthy, AK Tripathi | 2006
A software component quality model: A preliminary evaluation | A Alvaro, ES De Almeida… | 2006
OSS opportunities in open source software—CRM and OSS standards | G Bruce, P Robson, R Spaven | 2006
New Perspectives on Public Goods Production: Policy Implications of Open Source Software | JA Lee | 2006
Developing an open source software development process model using grounded theory | Y Tian | 2006
A Reputation-Based Mechanism for Software Vulnerability Disclosure | X Zhao | 2007
The governance of free/open source software projects: monolithic, multidimensional, or configurational? | ML Markus | 2007
Intrinsic motivation in open source software development | J Bitzer, W Schrettl, PJH Schröder | 2007
An empirical analysis of the impact of software vulnerability announcements on firm stock price | R Telang, S Wattal | 2007
Reputation in Open Source Software Virtual Communities | LV Casaló, J Cisneros, C Flavián… | 2008
Emergence of new project teams from open source software developer networks: Impact of prior collaboration ties | J Hahn, JY Moon, C Zhang | 2008
Temporal metrics for software vulnerabilities | JA Wang, F Zhang, M Xia | 2008
Method and apparatus for detecting vulnerabilities and bugs in software applications | VC Sreedhar, GF Cretu, JT Dolby | 2008
User and developer mediation in an Open Source Software community: Boundary spanning through cross participation in online discussions | F Barcellini, F Détienne, JM Burkhardt | 2008
An approach for selecting software-as-a-service (SaaS) product | M Godse, S Mulik | 2009
Impact of license choice on open source software development activity | J Colazo, Y Fang | 2009
Research on testing-based software credibility measurement and assessment | Q Hongbing, Z Xiaojie… | 2009
Designers wanted: participation and the user experience in open source software development | PM Bach, R DeLine, JM Carroll | 2009
3.5 Open Source Software Research and Blockchain | J Lindman | 2009
“Constructing the users” in open source software development | N Iivari | 2009
System and method for maximizing software package license utilization | S Varadarajan, G Sridhar, KK Rao | 2010
Software metrics and software metrology | A Abran | 2010
Creating and evolving developer documentation: understanding the decisions of open source contributors | B Dagenais, MP Robillard | 2010
Code forking in open-source software: a requirements perspective | NA Ernst, S Easterbrook, J Mylopoulos | 2010
Trust and reputation for successful software self-organisation | JM Seigneur, P Dondio | 2011
SLA-based resource allocation for software as a service provider (SaaS) in cloud computing environments | L Wu, SK Garg, R Buyya | 2011
A systematic literature review on fault prediction performance in software engineering | T Hall, S Beecham, D Bowes, D Gray… | 2011
Software quality: theory and management | A Gillies | 2011
A risk assessment framework for evaluating Software-as-a-Service (SaaS) cloud services before adoption | L Bernard | 2011
Understanding broadcast based peer review on open source software projects | PC Rigby, MA Storey | 2011
A theory-grounded framework of Open Source Software adoption in SMEs | RD Macredie, K Mijinyawa | 2011
Understanding open source software peer review: Review processes, parameters and statistical models, and underlying behaviours and mechanisms | PC Rigby | 2011
Design and evaluation of a process for identifying architecture patterns in open source software | KJ Stol, P Avgeriou, MA Babar | 2011
Analyzing and Identifying SaaS for Development of a Project by calculating its Reputation | BR Rao | 2012
Carrots and rainbows: Motivation and social practice in open source software development | G Von Krogh, S Haefliger, S Spaeth, MW Wallin | 2012
A model of open source developer foundations | D Riehle, S Berschneider | 2012
Research of trustworthy software system in the network | Y Liu, L Zhang, P Luo, Y Yao | 2012
Study on credibility level of trustworthy software development process based on grey nonlinear cluster | S Liu, J Forrest, Y Yangjie, K Zhang, C Mi, N Xie… | 2012
A non-functional requirements tradeoff model in trustworthy software | MX Zhu, XX Luo, XH Chen, DD Wu | 2012
How peripheral developers contribute to open-source software development | P Setia, B Rajagopalan… | 2012
Research note—Lock-in strategy in software competition: Open-source software vs. proprietary software | KX Zhu, ZZ Zhou | 2012
Why do commercial companies contribute to open source software? | M Andersen-Gott, G Ghinea, B Bygstad | 2012
Do the allocation and quality of intellectual assets affect the reputation of open source software projects? | R Méndez-Durón | 2013
Towards reputation-as-a-service | C Hillebrand, M Coetzee | 2013
Software fault prediction metrics: A systematic literature review | D Radjenović, M Heričko, R Torkar… | 2013
Automatic polymorphic exploit generation for software vulnerabilities | M Wang, P Su, Q Li, L Ying, Y Yang, D Feng | 2013
’Computing’ Requirements in Open Source Software Projects | X Xiao, A Lindberg, S Hansen, K Lyytinen | 2013
From closed to open: Job role changes, individual predispositions, and the adoption of commercial open source software development | O Alexy, J Henkel, MW Wallin | 2013
Learning and best practices for learning in open-source software communities | V Singh, L Holt | 2013
All complaints are not created equal: text analysis of open source software defect reports | U Raja | 2013
How social QnA sites are changing knowledge sharing in open source software communities | B Vasilescu, A Serebrenik, P Devanbu… | 2014
Secured trust and reputation system: analysis of malicious behaviors and optimization | A Bradai | 2014
Measuring the health of open source software ecosystems: Beyond the scope of project health | S Jansen | 2014
Software Reliability: State of the Art Report 14: 2 | A Bendell, P Mellor | 2014
Auditing and maintaining provenance in software packages | Q Pham, T Malik, I Foster | 2014
Estimating development effort in free/open source software projects by mining software repositories: a case study of openstack | G Robles, JM González-Barahona, C Cervigón… | 2014
Transactive memory system, communication quality, and knowledge sharing in distributed teams: An empirical examination in open source software project teams | X Chen | 2014
The Spack package manager: bringing order to HPC software chaos | T Gamblin, M LeGendre, MR Collette… | 2015
Analysis and assessment of software library projects | JW Nicol, BL Roberts, JO Pillgram-Larsen… | 2015
Software applications have on average 24 vulnerabilities inherited from buggy components | L Constantin | 2015
Raising the general public’s awareness and adoption of open source software through social QnA interactions | N Choi, K Yi | 2015
Group Reputation in an Open Source Software Community: Antecedents and Outcomes | Y Cai, D Zhu | 2016
Maintenance effort estimation for open source software: A systematic literature review | H Wu, L Shi, C Chen, Q Wang… | 2016
Modeling library dependencies and updates in large software repository universes | RG Kula, C De Roover, DM German, T Ishio… | 2017
Secure dependency enforcement in package management systems | L Catuogno, C Galdi, G Persiano | 2017
Large-scale Modeling, Analysis, and Preservation of Free and Open Source Software | S Zacchiroli | 2017
Open Source Software Hosting Platforms: A Collaborative Perspective’s Review | G Alamer, S Alyahya | 2017
Software processes analysis with provenance | GCB Costa, HLO Dalpra, EN Teixeira… | 2018
Software Provenance: Track the Reality Not the Virtual Machine | D Wilkinson, L Oliveira, D Mossé… | 2018
Hackers vs. testers: A comparison of software vulnerability discovery processes | D Votipka, R Stevens, E Redmiles, J Hu… | 2018
A business model for commercial open source software: A systematic literature review | S Shahrivar, S Elahi, A Hassanzadeh… | 2018
Collaborative SLA and reputation-based trust management in cloud federations | K Papadakis-Vlachopapadopoulos… | 2019
A systematic examination of knowledge loss in open source software projects | M Rashid, PM Clarke, RV O’Connor | 2019
The takeoff of open source software: A signaling perspective based on community activities | P Setia, BL Bayus, B Rajagopalan | 2020
### 7.2 Interviews
#### 7.2.1 Informed consent
#### 7.2.2 Interview layout
#### 7.2.3 Interview protocol
Utrecht University is researching the factors that influence software package
selection. Such an impact factor can be defined as a characteristic of a
software package that results in trust in that package. The results of these
interviews, together with a literature study, will lead to a survey that, if
done on a large scale, will reveal what these impact factors are. This
interview will take about 45 minutes. This information will be available
through the informed consent form, which needs to be signed before the
interview starts.

State: name, age, nationality, education, function (profession), years of
experience, organization, involvement in choosing packages, selection process
used in the organization, type of organization, industrial sector of
organization, usage of external products within organization.

1\. What is the protocol when choosing new external software products?

2\. What factors related to technical aspects of the packages will you
consider when making a decision on choosing a package?

3\. What factors related to organizational aspects of the packages will you
consider when making a decision on choosing a package?

4\. What factors related to economical aspects of the packages will you
consider when making a decision on choosing a package?

5\. Which of these factors, or any other unmentioned factors, will make you
gain trust in a package, and when do you consider a package trusted?

6\. What metrics would you like to see in an overview for determining trust?

7\. How do you think your personal bias impacts your trust in packages?

8\. When do you think a developer should be made aware that a package contains
a vulnerability?

9\. Do you have personal experience with vulnerabilities in packages? If so,
can you comment on how you looked at them before, and after it happened?
### 7.3 Adoption factors
#### 7.3.1 Technical adoption factors
Compatibility

• ”Well first of all it has to fit within the current application”
• ”So the most important one is that it is compliant with your current framework.”
• ”First we take a look how secure the project is, and how compliant it is with our current build”
• ”I globally scan the code to see if I understand what it does, also to see if it fits my use case”
• ”We take a look at the functionality, and does this fit within the current framework”
• ”I look at the documentation to see what datastructures are used, and if they are compatible with my project”

Documentation

• ”Good comments and a good readme are crucial”
• ”With Python I mainly look at the documentation, what does it look like …”
• ”dependant on the size and complexity, the documentation can be really important or irrelevant…”
• ”If every new function requires 2 days of reverse engineering because the documentation is bad, drop this package!”
• ”if there is no documentation I have to figure it out myself. It makes it hard for us to work with so it is not a matter of trust but then it comes back down to the pillar’s ease of use.”
• ”… and after that you look at things like documentation and community”
• ”Documentation can be a good indication on how mature a project is, and that multiple people are seriously working on this project.”
• ”By looking at the documentation we can see if it is a good or bad package”
• ”I check that by looking at the documentation or the API”

Complexity

• ”If I want to solve a problem, I want to find a dependency that only solves this problem and nothing else”
• ”It should be compact, especially with Python people do this elegantly, is should serve a single purpose and be clear”
• ”A solution should not be too complex for what it is trying to solve, it should make the coding easier”
• ”Oh I forgot to mention, I also check some of the source code of the project. To get a feel on how the comments are, how complex are the functions etc. So if i want to make little adjustments I can easily understand how and where this should be done.”
• ”… if that is the case we might use it, one of the aspects we then look at is the size of the package”
• ”One other aspect to look at is: how difficult is it to replace that component?”

Security

• ”First we take a look how secure the project is, and how compliant it is with our current build”
• ”We take a look at how easy it is to integrate in our system, and whether or not it is secure”
• ”We think security is very important, so we use several static code analysis tools created by the software improvement group”

Source code openness

• ”I globally scan the code to see if I understand what it does, also to see if it fits my use case”
• ”If you can read what the code does exactly, that is very important for me”

Usability

• ”And releases are very important, and also the usability aspect should not be overlooked”

Customization

• ”I want a package that is not too complex for what it should do, so you want it to be customizable enough to ensure it can be used for your purpose.”

Code Quality

• ”If there are tests available, so you can see that some functions really do the things they claim they do”
• ”There are also several code quality tools that grade a package, they are also good indicators to watch”
• ”It should do what you need it to do, and what it says that it does. If that is not the case then I will not even consider it”
#### 7.3.2 Organizational adoption factors
Active maintenance

• ”For me one of the first things I look at it how regularly it is updated”
• ”When was it updated for the last time, first time in 3 years or recently?”
• ”If the last update was in 2016, we will not even consider this package”
• ”You could then use an external library, yet you want to know how active it is and if it is maintained well”
• ”The first step for me is to go to GitHub and check how often there are new commits”
• ”the second step is how regular those commits are…”
• ”An important factor is to get the feeling that the project is actively maintained, so check the list of releases”
• ”They are constantly updating and releasing. And releases are very important,”
• ”The thing is that I want to see that this is updated regularly.”
• ”The first thing I do is go to GitHub and check the amount of contributors and how active it is maintained”
• ”I don’t like it either when nothing has changed over the past 2 years, or only 1 file at the time”
• ”How many commits there are, when the last commit was and how active it still is. also take a look at the git issues and how they are handled”

Amount of contributors

• ”Also the amount of contributors is important”
• ”The first thing I do is go to GitHub and check the amount of contributors and how active it is maintained”
• ”If there are many users, but only 2 contributors I do not like it. Since that comes with big risks”
• ”If there are many contributors, that are also bonus points”

Contributor process

• ”Another aspect is to know what kind of people are involved in the project, have they done project like this in the past or is it their first time, and how easy is it to become involved”
• ”… another important thing is to try to get an idea of how easy someone can become a maintainer, and have there been many over the years or is there a steady team”

Git Issues

• ”If it is on GitHub, it is certainly important how many issues are open and how often they are responded to”
• ”The 3rd step is to look at the git issues, how many are open and what types of issues are there”
• ”A good way to get a feeling about how serious a project is, is to look at the issue list in GitHub. There you can see what the responses are and how they are being handled”
• ”And the community as well. What are the known issues, and how fast are they resolved?”
• ”If there are 10k issues, which do not get a response then this is a no go for me”
• ”How many commits there are, when the last commit was and how active it still is. also take a look at the git issues and how they are handled”

Amount of Users

• ”The amount of users is also a good indication, because if many people use this, it is implicitly tested. Even if this was not the case before it launched. It certainly tested now in practice.”
• ”If many others use it as well, that helps a lot”
• ”An important matter is how many other people use this”
• ”Amount of users! If I see 2 frameworks, 1 with 20 and 1 with 80 people. The choice will go to which is most used.”
• ”The amount of users for example, that is important”

Backing company

• ”Big frameworks like angular, react or vue, angular is backed by google and react by Facebook. So they will go down if the company goes down and that is not going to happen anytime soon”
• ”If there is a backing company that delivers support it really gives bonus points”
• ”For me it is really important to see the maturity of the organisation”
• ”If a company has a certain way of developing, that is important”

Ease of integration

• “It is very dependant on how easy it is to implement.”
• ”It should be easy to implement…”
• ”If it is a small library, I look at the implementation to see if it is easy to integrate”
• ”How easy is it to implement, what is the available documentation for that?”
• ”a trustworthy package is one where I can install it and see if it is working within a couple of minutes.”
• ”We take a look at how easy it is to integrate in our system, and whether or not it is secure”

Community

• ”Another thing for me is the community.”
• ”And the community as well. What are the known issues, and how fast are they resolved?”
• ”… and after that you look at things like documentation and community”
• ”After that I start to take a look at the community, to see if I know some people. I might have met them at a conference maybe.”
• ”Next to reputation and community there are not a lot of things I look at, they are the 2 most important ones”

Known vulnerabilities

• ”I do not even consider the package if there are a lot of cve’s known for it”
• ”You can look at the history of the security vulnerabilities to get a feeling of how serious the project is”

Reputation

• ”Are other people talking about this, if it is popular that helps”
• ”Either the project or the developers can have a reputation that influences the process”
• ”Next to reputation and community there are not a lot of things I look at, they are the 2 most important ones”

Support

• ”as a user of open source software you need to know how serious you are treated, and what kind of support is available to you”
• ”If there is a backing company that delivers support it really gives bonus points”
• ”The frameworks I use, I always want them to have long term support”

Git stars

• ”Oh and I forgot to mention GitHub stars, they are essential!”

Knowledge of team

• ”If there already are people with knowledge of the package in my team, that helps a lot to decide whether or not it will fit in our project”
#### 7.3.3 Economical adoption factors
License

• ”A license can actually be very dangerous for us, there are licenses that forbid military usage, but also others that require that our code needs to be open source too”
• ”We had to take a look at licenses, since if we used certain software with limited licenses, we could not sell the end product”
• ”License is definitely a big factor, you do need to have a clear image on costs or limitations there”

Total costs

• ”For me the economical factors play a role in deciding what package, however not in trusting the package”
• ”We take a look at the pricing, open source is better since its free”
### 7.4 Trust factors
#### 7.4.1 Technical trust factors
Documentation

• ”Good documentation definitely contributes to trusting a package, it shows me I can use the software for what I want to use it for”
• ”A good read me is crucial, has someone put real effort in that. In other words how is the documentation”
• ”By looking at the documentation we can see if it is a good or bad package”

Code quality

• ”especially to validate that it does what it should do tests are very important”
• ”Not just the code quality, but also if there is a general test suite and how much it covers the whole package”

Source code

• ”The good thing about open source is that the source code is often available, so I can often check the code itself. This really helps with trusting the package”
• ”Another aspect is the quality of the code itself, sometimes you cannot read this but you can get a general feeling on the code quality”
• ”There are also several code quality tools that grade a package, they are also good indicators to watch”
#### 7.4.2 Organizational trust factors
Amount of users
• ”A package that is used by 300.000 others, I trust more than a package that
is released last week and has been used by 5 people, my trust in that is way
lower even though it might be better than the bigger one”
• ”If many others use it as well, that helps a lot”
• ”Amount of users! If I see 2 frameworks, 1 with 20 and 1 with 80 people. The
choice will go to which is most used.”
• ”The amount of users for example, that is important”
• ”A good indication on how active the project is used is the weekly downloads
for example”
• ”After that I look at the size of the community and other users… If there
are enough other users, it means there are
enough people who can also experience problems that need fixing”
Community
• ”I think community is very important, it should be actively used”
• ”I would like to add something to the previous question, namely an active
mailing list around a library actually does
gives me more trust in the library or package”
• ”Another thing for me is the community.”
• ”And the community as well. What are the known issues, and how fast are they
resolved?”
• ”Things that really give me a feeling of trust are, are there smart
developers involved, how are the discussions online
and to the developers give reasoning why they do certain things”
• ”How large the community is says a lot”
• ”After that I look at the size of the community and other users…”
Contributors
• ”… and if a project is maintained by one or two people, the chance that the
project stops due to issues in their personal life is quite real. So for me,
community really gives trust if it is a large and stable community.”
• ”If a project has 3000 contributors, however 99.9% of the code is written by
1 person, then the amount of contributors still doesn’t tell me anything, so I
will have less trust for that project”
• ”Things that really give me a feeling of trust are, are there smart
developers involved, how are the discussions online
and to the developers give reasoning why they do certain things”
• ”Try to get an idea of how hard it is to become a contributor, over the last
years several back doors have appeared in
libraries because it was too easy to become a contributor, for example”
Active maintenance
• ”My trust in a package is mainly from GitHub, so the stars, commit history
and the open and closed issues”
• ”They are constantly updating and releasing. And releases are very
important,”
• ”The thing is that I want to see that this is updated regularly.”
Reputation
• ”I work a lot with PHP, and there are some well known companies and people
in the community. If you see a project is guided or written by one of them you
know you are in the clear. That ensures more trust instantly”
• ”If you can read what the code does exactly that is important”
• ”I think that being able to see the source code really creates trust”
• ”Like I previously said, it first has to do what I need it to do before I
look any further”
Ease of integration
• ”a trustworthy package is one where I can install it and see if it is
working within a couple of minutes. ”
Stack overflow activity
• ”Stack overflow activity also adds trust”
Git Issues
• ”My trust in a package is mainly from GitHub, so the stars, commit history
and the open and closed issues”
• ”I have seen several teams that used a project and then stopped using it
after a while because the issues were not resolved,
so I think that is really important for trusting a project”
• ”I look at everything from Git Issues, til meetups and how the community
communicates”
Git Stars
• ”My trust in a package is mainly from GitHub, so the stars, commit history
and the open and closed issues”
• ”One thing I then look at is the stars on GitHub”
Backing company
• ”I also look if there is a commercial interest, with perhaps a company that
supports this project. Red-hat is a nice
example of that”
• ”If a project is backed by a good company, that also creates trust”
• ”For me it is really important to see the maturity of the organisation”
• ”If a company has a certain way of developing , that is important”
### 7.5 Trust metrics
#### 7.5.1 Technical trust metrics
Complexity
• ”A metric to determine this is the cyclomatic complexity, or how many lines
of code in general. I think that would
be a good start.”
• ”Another aspect would be to get an indication how complex it is, and how
many people use the full complexity”
Tests
• ”Would a metric like test coverage be a solid one?”
• ”Is there something that can prove that it is functional, and how is the
test coverage in that?”
• ”What is important is basically, you can see with many package managers I
think these build scripts that are run to test this. And to validate this.”
• ”Yes you can see how many times it has passed, and how many times it has
failed. And I think these are important metrics to calculate trust.”
Code quality
• ”I think it is hard because it is about trust, however I think code quality
is the most important in this one”
• ”If we then have the possibility I would like to see some code quality
metrics, dependent on the ecosystem.”
• ”This does fall under code quality, however it deserves to be named
separately: the documentation.”
• ”We would then be discussing activity and security metrics… For security and
quality I would grasp to some
models I already know from the Software Improvement group.”
#### 7.5.2 Organizational trust metrics
Active maintenance
• ”It is not about the bugs, but how fast they are fixed. This way we can see
that there are active patches coming out.”
• ”So how often is it updated? This is a tricky one since if a certain package
is released and it is a very basic function but it works, then why would you
update it?”
• ”So how would you then define the number of releases. It is a hard metric.”
• ”Yeah so some measurable aspects then, this could be the last release and
amount of commits per year I think.”
• ”In addition, the last release date and perhaps some data on how often there
are new releases”
• ”In addition I would also like to see it actively maintained, unless it is
such a fundamental package that is does not
need maintenance.”
Amount of contributors
• ”I think users, contributors, releases. ”
• ”I think community size, with that I mean 2 aspects: the amount of
contributors and the amount of users that use
that component”
• ”I would like to see the amount of stars, the amount of contributors and
amount of weekly installs.”
Users
• ”I think community size, with that I mean 2 aspects: the amount of
contributors and the amount of users that use
that component”
• ”Who made it and how many people are using it?”
• ”A really cool thing would be the ratio between people who try to use it and
3 months later still do”
• ”I would like to see how many people use the component and for how long they
have used it”
• ”I think users, contributors, releases. ”
• ”I would like to see the amount of stars, the amount of contributors and
amount of weekly installs.”
• ”I would like to see that it is active, more specific actively used”
Stability metrics
• ”So what would be nice to see is something to cover the stability, so how
many incidents have happened with
a certain version.”
• ”If we look at open source projects, I would like to see something on the
stability of the team”
• ”Another aspect is how diverse their revenue streams are, if they are
relying on 1 source this could fall apart more
easily”
• ”You should mention the supported frameworks, platforms and operating
systems”
Git issues
• ”Most of the times there is no response to the issues, so if those are
responded to that already tells a great deal”
• ”If we could get a score on how fast issues on GitHub are closed, that would
score points”
• ”I think that the amount of GitHub stars as well as the amount of issues
that are open”
• ”I think the ratio between the amount of closed and open issues would be a
nice addition”
• ”I would like to see the amount of stars, the amount of contributors and
amount of weekly installs.”
• ”I think they also did some measurements for an issue resolution time, that
would be a great addition”
Known vulnerabilities
• ”At first we want to know that there are few known vulnerabilities”
• ”Another thing to note is the amount of CVE’s that are known”
• ”Some statistics and history on prior security bugs”
• ”Look if there are known vulnerabilities I would like to know, even if it’s
that simple, it’s still something already”
Stackoverflow presence
• ”If we may fantasize about it a little I think that the amount of questions
on stackoverflow would be a great
addition, as well as blog posts or presence on reddit.”
Reputation
• ”What is important for me, is that e.g. someone like Linus Torvalds
expresses faith in the project”
• ”Reputation of the developer is a good metric as well”
Developing entity
• ”I would like to see who is behind it, the person or company. Even though it
then is difficult to find out if you
can trust it but you can at least try.”
• ”It can very well be that you do not get any information out of it but let’s
take react for example, that is created by Facebook. Even though I do not like
Facebook as a company, I do trust that the software they produce is good.”
• ”Reputation of the developer is a good metric as well”
### 7.6 Odyssey Momentum
The massive online collaboration event, or hackathon as it is normally called,
was a three-day event focused on creating projects and sharing knowledge.
There were 13 different tracks that each covered different social or technical
problems. A team could reach out to the organisation with a possible solution,
and this weekend then provided the opportunity to start creating. The weekend
had a very large focus on the collaboration aspect. It is normally a yearly
physical event, but due to the coronavirus it could not be held in person this
year. Instead, the organisers created an online collaboration platform in
which one could roam freely and socialise with other participants of the
event. This way, if a team was stuck on a certain problem, they could fly
around in the Odyssey-world and find another participant who did have the
knowledge to solve the issue. It is a beautifully designed concept that
allowed teams to collaborate remotely whilst working on their own projects.
Since my coding skills are not nearly as good as my team members’, I was part
of the ’marketing team’ for this weekend. This meant keeping the social media
platforms up to date with the team’s progress, as well as communicating with
other teams to find the participants who could help us solve the issues we
had. The marketing team also gave several presentations. Overall this was an
amazing addition to the research project. Even though I did not code much of
the actual prototype, the weekend was a great learning experience in terms of
communicating within a team and being the middle man between our team and the
organisation. The cherry on top is that we won the track in which we competed!
This was an amazing finalisation of the weekend!
# Operationalizing Framing to Support Multiperspective Recommendations of
Opinion Pieces
Mats Mulder, Delft University of Technology, Delft, The Netherlands,
<EMAIL_ADDRESS>; Oana Inel, Delft University of Technology, Delft, The
Netherlands, <EMAIL_ADDRESS>; Jasper Oosterman, Blendle, Utrecht, The
Netherlands, <EMAIL_ADDRESS>; and Nava Tintarev, Maastricht University,
Maastricht, The Netherlands, <EMAIL_ADDRESS>
###### Abstract.
Diversity in personalized news recommender systems is often defined as
dissimilarity, and based on topic diversity (_e.g._ , corona versus farmers
strike). Diversity in news media, however, is understood as multiperspectivity
(_e.g._ , different opinions on corona measures), and arguably a key
responsibility of the press in a democratic society. While viewpoint diversity
is often considered synonymous with source diversity in communication science
domain, in this paper, we take a computational view. We operationalize the
notion of framing, adopted from communication science. We apply this notion to
a re-ranking of topic-relevant recommended lists, to form the basis of a novel
viewpoint diversification method. Our offline evaluation indicates that the
proposed method is capable of enhancing the viewpoint diversity of
recommendation lists according to a diversity metric from literature. In an
online study on Blendle, a Dutch news aggregator platform, with
more than 2000 users, we found that users are willing to consume viewpoint
diverse news recommendations. We also found that presentation characteristics
significantly influence the reading behaviour of diverse recommendations.
These results suggest that future research on presentation aspects of
recommendations can be just as important as novel viewpoint diversification
methods to truly achieve multiperspectivity in online news environments.
recommender systems, viewpoint diversity, framing aspects
## 1\. Introduction
In recent years, traditional news sources are increasingly using online news
platforms to distribute their content. Digital-born news websites and news
aggregators, which combine content from various sources in one service, are
also gaining ground (Newman et al., 2015). In 2015, 23% of survey respondents
reported online media as their primary news source, and 44% considered digital
and traditional sources equally relevant (Newman et al., 2015). This change
also induces a wide adoption of news recommender systems that automatically
provide personalized news recommendations to users.
Communication studies generally acknowledge two important roles of media in a
democratic society (Helberger, 2019). The first role is to inform citizens
about important societal and political issues. The second role is to foster a
diverse public sphere. Both roles are then related to multiple social-cultural
objectives of democracy, such as informed decision-making, cultural pluralism
and citizens welfare (McQuail, 1992; Strömbäck, 2005).
The role of news recommender systems in promoting these democratic values is
under heavy discussion in academic debate. For example, the term filter
bubbles received increasing awareness, suggesting that high levels of
personalisation would lock people up in bubbles of what they already
know or think (Pariser, 2011). According to Helberger, the democratic role of
news recommender systems mainly depends on the democratic theory that is being
followed. In their conceptual framework, this role is being evaluated for the
most common theories: the liberal, the participatory and the deliberative
(Helberger, 2019). In particular, the participatory and deliberative models
motivate the development of viewpoint diversification methods.
However, current diversification methods (Kunaver and Požrl, 2017; Ziegler et
al., 2005) do not address viewpoint diversity, but define diversity as
dissimilarity and operationalize it through topic diversity (_e.g._ , corona
versus farmers strike). Therefore, current diversification methods are not
applicable in the news domain, and novel viewpoint diversification methods are
needed to maintain and assure multiperspectivity in online news environments.
To truly enable _multiperspectivity_ , users should be willing to consume
viewpoint-diverse recommendations. Moreover, their behaviour should be studied
in real, online scenarios. Thus, we investigate the following research
questions:
R1: How is reading behaviour affected by viewpoint diverse news
recommendations?
R2: How is reading behaviour affected by presentation characteristics of
viewpoint diverse news recommendations?
To answer these questions, we propose a re-ranking approach for lists of
recommended articles based on aspects of news frames, a concept taken from
communication studies. In particular, a news frame describes how to identify a
view on an issue in a given article (Entman, 1993). Thus, by bridging aspects
from the social and the computational domains, we aim to overcome the current
gap between the definition of diversity in recommender systems and news media.
During an offline evaluation, the proposed method increased the viewpoint
diversity of recommended lists of news articles on several topics. Further, we
measured the influence of the viewpoint diversification method on the reading
behaviour of more than 2000 users, who are likely to interact with the
recommended articles, in an online study on the Blendle platform, a Dutch news
aggregator platform. We found that reading behaviour of users that received
diverse recommendations was comparable with the reading behaviour of users
that received news articles optimized only for relevance. However, we did find
a positive influence of two presentation characteristics on the click-through
rate of recommendations, _i.e._ , news articles with thumbnails and news
articles with more hearts are more often read.
Therefore, we make the following contributions:
* •
a novel method for viewpoint diversification using re-ranking of news
recommendation lists, based on framing aspects;
* •
an online evaluation with more than 2000 users, on the Blendle platform, to
understand:
1. (a)
how viewpoint-diverse recommendations affect the reading behaviour of users;
and
2. (b)
how article’s presentation characteristics affect the reading behaviour of
users.
## 2\. Related Work
In this section, we first investigate how communication science understands
diversity. Then, we review current approaches for diversity in recommender
systems. These allow us to bridge the gap between the domains of communication
and computer science, by operationalizing framing aspects in a diversification
algorithm.
### 2.1. Diversity in News Media
In news media, diversity refers to multiperspectivity or a diversity of
viewpoints (Gans, 2003). In communication science, diversity is, in general, a
key measure for news quality (Porto, 2007; Choi, 2009; Masini et al., 2018),
thus fostering multiple democratic aspects, such as informed decision-making,
cultural pluralism and citizens welfare (Napoli, 1999; Voakes et al., 1996).
Two main approaches for assessing diversity can be distinguished: source and
content diversity (Napoli, 1999; Baden and Springer, 2017; Benson, 2009), with
most studies focusing on source diversity (Napoli, 1999; Baden and Springer,
2017; Voakes et al., 1996; Baden and Springer, 2014). When measuring source
diversity, most methods follow Bennett (1996)’s indexing theory, which assumes
that including non-official or non-elite sources corresponds to high levels of
diversity (Baden and Springer, 2017). Alternatively, Napoli (1999) approaches
the issue from a policymaker point of view and distinguishes three aspects of
source diversity: content ownership or programming, ownership of media
outlets, and the workforce within individual media outlets.
Critics, however, state that multiple sources can still foster the same point
of view and therefore, source diversity is not a direct measure for viewpoint
diversity (Voakes et al., 1996). Multiple studies also indicate that power
distributions in society, commercial pressure of news media and journalistic
norms and practices, significantly influence which sources gain media access
(Benson, 2009; Baden and Springer, 2017). Therefore, it is often argued that
viewpoint diversity can only be achieved by fostering content diversity
(Masini et al., 2018; Napoli, 1999; Choi, 2009; Gans, 2003; Baker, 2001;
Voakes et al., 1996). Content diversity is defined in (Van Cuilenburg, 1999)
as _“heterogeneity of media content in terms of one or more specified
characteristics”_. Baden and Springer (2017) identified six common approaches
to assess content diversity. The first three methods focus on the tone or
political position represented in the news, _i.e._ , the inclusion of non-
official positions, the diversity of political tone or analysis of political
slant. These methods, however, assume that political disagreement equals
viewpoint diversity (Baden and Springer, 2017). Another approach uses language
diversity to evaluate content diversity. However, this is again not a direct
measure, since different language can describe the same perspective (Baden and
Springer, 2017).
The final two approaches use the concept of frames to assess content
diversity. Framing theory states that every communicative message selectively
emphasizes certain aspects of the complex reality (Baden and Springer, 2017).
Thereby, frames enable different interpretations of the same issue (Scheufele,
1999). Framing has been put forward by many scholars to enhance content
diversity. For example, Porto (2007) states that news environments need to be
evaluated by their ability to provide diverse frames. Baden and Springer
(2017) describe three frames’ aspects that are central to the role of
viewpoint diversity in democratic media. First, frames create different
interpretations of the same issue by selecting some aspects of the complex
reality (Gamson and Modigliani, 1989). Second, frames are not neutral but
suggest specific evaluations and courses of actions that serve some purpose
better than other (Entman, 1993). Third, frames are often strategically
constructed to advocate particular political views and agendas. Framing, thus,
can be a suitable conceptualization of viewpoint diversity.
### 2.2. Diversity in Recommender Systems
Traditionally, research on recommender systems focused on evaluating their
performance in terms of accuracy metrics (Ziegler et al., 2005). Such focus,
however, induced a problem which is known as over-fitting, _e.g._ , a model is
fitted so strongly to a user that it is unable to detect any other interests
(Kunaver and Požrl, 2017). Additionally, there is a need for a more user-
centric evaluation of recommender systems. Thus, diversity has become one of
the most prominent beyond-accuracy metrics for recommender systems (Ziegler et
al., 2005). In this context, diversity is generally defined as the opposite of
similarity (Kunaver and Požrl, 2017), and it is often based on topic diversity
(_e.g._ , corona versus farmers strike). For example, Ziegler et al. (2005)
proposed a topic diversification method based in the intra-list diversity
metric.
Current diversification methods for recommender systems, thus, do not focus on
viewpoint diversity and are not applicable in the news domain. To the best of
our knowledge, only one study for viewpoint diversification has been proposed
so far (Tintarev et al., 2018). Tintarev et al. (2018) propose a new distance
measure for viewpoint diversity based on linguistic representations of news
articles. This diversity measure was then applied in a post-processing re-
ranking algorithm (Carbonell and Goldstein, 1998) to a list of news articles.
These allowed optimizing for the balance between topic relevance and viewpoint
diversity. In a small scale user study (Tintarev et al., 2018), readers
indicated a lower intent to consume diversified content, motivating the need
to study behavioural measures for newsreaders on a larger scale. Thus, we
argue that more research is required to understand the relationship between
the metric and the influence on readers behaviour.
In this work, we aim to bridge the current gap between the notion of framing
in communication science and potential computational measure. Additionally, we
aim to study how viewpoint diversification affects the behaviour of
newsreaders in an applied setting. The next section justifies the
operationalization of _framing_ in the computational domain.
## 3\. Framing for viewpoint diversity
Framing is an extensively researched concept in different domains, including
psychology, communication and sociology, having its roots in the latter
domain. Bateson (1955) state that communication only gets meaning in its
context and by the way the message is constructed. Later, frame theory gained
increasing momentum and was generally understood as follows: every
communicative message selectively emphasizes certain aspects of a complex
reality (Baden and Springer, 2017). Thus, every news article (unintentionally)
comprises some form of framing (Baden and Springer, 2017). Frames are often
deliberately used to construct strategic, often political, views on a topic.
Consequently, frames enable different interpretations of the same issue (Baden
and Springer, 2017). However, every frame inevitably deselects other, equally
plausible and relevant frames (Baden and Springer, 2017).
When considering frames in news articles, multiple definitions exist (Gitlin,
1980; Gamson and Modigliani, 1989; De Vreese, 2005). However, the definition
of Entman (1993) is the most commonly adopted in the literature. It states
that framing includes the selection of _“some aspects of perceived reality and
make them more salient in a communicating text, in such a way as to promote a
particular definition of a problem, causal interpretation, moral evaluation
and treatment recommendation for the item described”_. This definition
describes _four framing functions_ \- for which we also provide a running
example \- namely:
1. (1)
Problem Definition : “what a causal agent is doing with what costs and
benefits”; _e.g., a second Coronavirus wave is approaching_ ;
2. (2)
Causal Attribution : “identifying the forces creating the problem”; _e.g., (it
is due to the) government policy response_ ;
3. (3)
Moral Evaluation : _“evaluate causal agents and their effects”_ ; _e.g._ ,
the response to the approaching second wave came too late (negative evaluation);
4. (4)
Treatment Recommendation : “offer and justify treatments for the problems and
predict their likely effects”; _e.g., there must be predefined measures to be
deployed at a critical threshold of virus spread._
Additionally, Entman (1993) describes how to find frames at different levels
of analysis, including single sentences, paragraphs or articles as a whole.
Also, a frame may not necessarily include all the four functions.
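To make the definition concrete, the output of a framing analysis can be
represented as a simple record in which each function is optional. The sketch
below is illustrative only: the class and field names are ours, not part of
Entman’s definition, and the example values come from the running example
above.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Frame:
    """One frame of a news article; any of the four functions may be absent."""
    problem_definition: Optional[str] = None        # e.g. "a second wave is approaching"
    causal_attribution: Optional[str] = None        # e.g. "government policy response"
    moral_evaluation: Optional[str] = None          # e.g. "the response came too late"
    treatment_recommendation: Optional[str] = None  # e.g. "predefine threshold measures"

# A frame need not include all four functions:
frame = Frame(problem_definition="a second Coronavirus wave is approaching")
```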
Most framing analysis approaches focus on manual analysis of articles (Kroon
et al., 2016; Matthes and Kohring, 2008; Vliegenthart, 2012). Only recently,
some computer-assisted methods gained interest (Burscher et al., 2014; Vu and
Lynn, 2020; Greussing and Boomgaarden, 2017). As a result, the identification
of frames often falls into a methodological black box (Matthes and Kohring,
2008). Thereby, the main issue includes the ambiguity of _“which elements
should be present in an article or news story to signify the existence of a
frame”_ (Matthes and Kohring, 2008). To overcome this problem, some recent
studies (Matthes and Kohring, 2008; Vliegenthart, 2012; Baden and Springer,
2017) propose a novel identification method based on the extraction of the
four aforementioned framing aspects in the definition of Entman (1993).
### 3.1. Focus Group Setup
To guide the operationalization of the framing aspects, we started with a
qualitative analysis. Through a small focus group, we aimed to gain insights
into how the four framing functions of the main frame of an article manifest
in its content and how we can identify them computationally.
#### 3.1.1. Participants
We invited three experts in the field of news article and framing analysis.
All experts had a background in journalism, communication, or news media. They
all had multiple years of relevant work experience.
#### 3.1.2. Materials
As a basis for discussion during the focus group, we used opinion pieces on
the topic of _Dutch farmers protests_. Opinion pieces refer to news articles
that reflect the author’s opinion and thus do not claim to be objective. An
initial discussion with domain experts indicated that this type of news
article is the most suitable to identify framing functions.
#### 3.1.3. Procedure
The focus group procedure consisted of two steps.
_1\. Annotation session:_ First, the participants were asked to perform
framing analysis on an opinion piece, using the four framing functions as
described by Entman (1993). In particular, the participants had to
individually highlight parts of the article, such as word clauses or
sentences, that can be related to one of the four framing functions of the
main frame of the news article.
_2\. Review session:_ Second, the results were discussed, together with some
general questions on news article analysis and framing. For every highlighted
part, we asked the participants to motivate why the highlighted part is
related to one of the four framing functions. Besides, we used the results as
input to a broader discussion on news article analysis and framing, such as:
* •
What is the main heuristic that you used to analyze the article?
* •
What procedure did you follow to analyze the framing functions of the article?
* •
Can you derive any patterns in the way framing functions manifest in opinion
pieces?
### 3.2. Results of Framing Analysis
During the review session, all experts indicated that they used the article
structure as the main heuristic to find the framing functions regarding the
main frame. They also pointed out that opinion pieces are still strongly
shaped by journalistic values on how an article should be structured. We
further analyzed this heuristic according to the four framing functions:
1. (1)
Problem Definition : In opinion pieces, the first part of the article often
presents the main problem that the author addresses and includes the title,
the lead, and the first $x$ paragraphs. Work on manual frame analysis (Kroon et
al., 2016) supports this finding. The number of introductory paragraphs, $x$,
can be different per source, author, or article.
2. (2)
Causal Attribution \+ Moral Evaluation : The body of an article is used to
analyze the main problem and usually contains different factors that
contribute to the problem under investigation and their evaluation. We can
match this with: a) the causal attribution of a frame (forces creating the
problem), and b) the moral judgements (evaluate the causal attribution and
their effect) (Entman, 1993).
3. (3)
Treatment Recommendation : Treatment recommendations can be seen as
suggestions to improve or solve the issue described by the problem definition
of the main frame. They normally appear in the concluding paragraphs,
according to the focus group members.
Note, however, that this structure is only a heuristic and it only applies to
opinion pieces. Other types, such as interviews, are structured differently.
The results of the annotation session also indicate that each framing function
related to the main frame of an article can normally be found within one
paragraph. Additionally, a paragraph can include multiple framing functions,
but words, clauses, and sentences generally represent a single framing
function.
## 4\. Dataset
In this section, we describe the experimental dataset, which consists of
opinion pieces, in Dutch. The choice of article type is motivated by the focus
group session presented in Section 3, in which the structure of this article
type is put forward as the primary heuristic to find framing aspects. We
picked topics that we expected a) to be present on the Blendle platform at the
time when we performed the online user study; b) to contain different
viewpoints addressed in the news; and c) to balance issues that are more
current versus long-standing. The dataset consists of four ongoing topics: _Black
Lives Matter_ , _Coronavirus_ , _U.S. Elections_ \- as more current topics,
and the dominance and privacy issues around _Big Tech_ \- as a long-standing
topic.
Table 1. Queries used (in Dutch) to retrieve news articles for the four topics
in our dataset.

Topic | Search Query | Start Date
---|---|---
Black Lives Matter | (’black lives matter’ OR ’racisme debat’ OR ’blm-demonstraties’ OR ’George Floyd’ OR ’racisme-debat’) AND NOT (’belastingdienst’ OR ’corona’) | June 15, 2020
Coronavirus | ’corona’ OR ’covid-19’ OR ’mondkapjes’ OR ’mondkapje’ OR ’mondmasker’ OR ’mondkapjesplicht’ OR ’coronatest’ OR ’coronatesters’ OR ’rivm’ OR ’virus’ OR ’viroloog’ OR ’golf’ OR ’topviroloog’ OR ’uitbraak’ OR ’uitbraken’ OR ’coronaregels’ OR ’versoeplingen’ OR ’staatssteun’ OR ’vaccin’ | June 1, 2020
U.S. Elections | ’Donald Trump’ AND (’presidentsverkiezingen’ OR ’Verkiezingen’ OR ’campagne’ OR ’verkiezingsstrijd’ OR ’verkiezingscampagne’ OR ’Joe biden’) | June 1, 2020
Big Tech | (’macht’ OR ’machtig’ OR ’privacy’ OR ’data’ OR ’privacyonderzoek’ OR ’privacy-schandaal’) AND (’big tech’ OR ’tech-bedrijven’ OR ’techbedrijven’) | 2018
We collected our dataset from an archive containing more than 5 million Dutch
news articles. The archive is known to undergo checks for articles quality, to
remove undesirable content, such as the weather or short actualities. For each
topic, we used the search terms (queries) and restrictions shown in Table 1.
We provide the list of search terms in their original language, Dutch, because
we do not want to add additional bias through translation. Additionally, since
the proposed method heavily relies on the structure of the article, we set up
a filter for the minimum number of words to 450 and a filter for the minimum
number of paragraphs to 5.
Table 2 provides an overview of the dataset, per topic. While the length of
the articles varies across topics, they are usually far longer than the
450-word limit we chose. Four publishers are present for all topics: De
Volkskrant, De Standaard, Trouw and Het Algemeen Dagblad. Furthermore, De
Volkskrant is the most prominent publisher for all topics, except for the
_U.S. Elections_ topic. The inclusion of other, less frequent, publishers
varies per topic. Overall, our dataset covers a set of 15 unique publishers.
We also present some properties concerning the presentation characteristics of
the articles on the news aggregator website. We observe that the ratio of
articles that contains a thumbnail image depends on the topic. For the _Black
Lives Matter_ and _Coronavirus_ topics, more than half of the articles have a
thumbnail image, while the opposite holds for the other two topics. The number
of custom titles from the editorial team and the average title length also
differ considerable per topic. Only a few articles have an editorial title,
and they usually appear for the _Big Tech_ and _U.S. Elections_ topics.
Table 2. Overview of the experimental dataset, per topic.

Topic | Articles | Publishers | Avg #Words | With thumb. | With ed. title | Avg title length
---|---|---|---|---|---|---
Black Lives Matter | 69 | 10 | 697 | 39 | 1 | 6.3
Coronavirus | 52 | 7 | 608 | 27 | 4 | 5.2
U.S. Elections | 42 | 6 | 744 | 20 | 8 | 9.6
Big Tech | 51 | 10 | 761 | 17 | 10 | 8.1
## 5\. Viewpoint Diversity Methodology
We propose a novel diversification method based on framing aspects, using the
insights from the focus group. First, we describe the extraction pipeline,
which supports the structure heuristic described in the results of the focus
group session (Section 3). The pipeline forms the basis for the generation of
recommendation lists that we use in the offline evaluation (Section 6) and the
online study (Section 7). We implemented the pipeline using methods employed
by the news aggregator platform and off-the-shelf natural language processing
toolkits, such as IBM Watson. We chose state-of-the-art, off-the-shelf methods
already used by the news aggregator platform to ensure output quality.
Then we describe the distance function, which combines the metadata related to
each framing aspect in a measure for viewpoint diversity for news articles.
Finally, we present the re-ranking algorithm based on this viewpoint diversity
measure. Our contribution, therefore, stands in the novelty of the overall
diversification framework, rather than the implementation of specific
components. Figure 1 shows an overview of the end-to-end pipeline.
Figure 1. Viewpoint diversification pipeline
### 5.1. Metadata Extraction
For each framing aspect, as described in the definition of Entman, we
implemented an extraction pipeline:
##### Problem Definition
As described in Section 2, the problem definition can be understood as the
central issue or topic under investigation (Matthes and Kohring, 2008).
Therefore, we decided to use a topic model as the main extraction method for
this framing aspect. The model, provided by the research partner, included a
1000-topic latent Dirichlet allocation (LDA) model trained on 900k Dutch news
articles. Based on the conclusions from the focus group described in Section
3, the title and the first x paragraphs are used to retrieve metadata related
to this framing aspect. We also applied multiple pre-processing steps on the
content, including cleaning, chunking, tokenization, lemmatization and stop-
word removal.
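A minimal sketch of this extraction step is shown below, assuming a pretrained
gensim LDA model and dictionary stand in for the 1000-topic model provided by
the research partner, and spaCy’s Dutch pipeline handles tokenization,
lemmatization and stop-word removal; the file paths are hypothetical.

```python
import spacy
from gensim.corpora import Dictionary
from gensim.models import LdaModel

nlp = spacy.load("nl_core_news_sm")        # assumption: Dutch spaCy model installed
dictionary = Dictionary.load("news.dict")  # hypothetical artifacts standing in for
lda = LdaModel.load("news_lda_1000.model") # the partner's 1000-topic model

def problem_definition_topics(title, paragraphs, x=2):
    """Topic distribution over the title and the first x paragraphs."""
    text = " ".join([title] + paragraphs[:x])
    doc = nlp(text.lower())
    tokens = [t.lemma_ for t in doc if t.is_alpha and not t.is_stop]
    bow = dictionary.doc2bow(tokens)
    # Dense distribution over all topics, for later KL comparison.
    return lda.get_document_topics(bow, minimum_probability=0.0)
```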
##### Causal Attribution \+ Moral Evaluation
According to Entman (1993), the causal attribution of a frame relates to the
forces creating the problem, while the moral judgements evaluate the causal
attribution and their effect. From the discussion of the focus group session,
described in Section 3, we concluded that the body of an article usually
elaborates on these aspects. Additionally, paragraph-level seems to be the
most suitable level of analysis. Therefore, a text-classification algorithm
was applied using the IBM Watson Natural Language Processing API. The service
returns a category for each paragraph according to a predefined five-level
taxonomy, from the most general category (_e.g._ level 1 - technology and
computing), to the most specific one (_e.g._ , level 5 - portable computer).
To extract information related to the evaluation of these attributions, we
also analyze the sentiment of these paragraphs, using the IBM Watson NLP API.
Thereby, it would be able to identify if two articles evaluate the same
aspects of a problem differently. The content of interest for this task
includes all paragraphs except the $x$ introductory and $y$ concluding
paragraphs. We optimize these variables during the offline evaluation.
##### Treatment Recommendation
Following the definition of Entman (1993), a treatment recommendation suggests
remedies for problems and predicts their likely effect. The research domain of
suggestion mining, which involves the task of retrieving sentences that
contain advice, tips, warnings and recommendations from opinionated texts
(Negi, 2019), was found to be highly relevant for this framing aspect (Negi et
al., 2016). However, the state-of-the-art models are topic-specific (Negi et
al., 2016), and cannot easily be applied to our domain. Thus, we applied a
more naive but generally applicable rule-based approach for this study. In a
crowdsourcing task with domain experts, we evaluated and optimized the
generally applicable rules from the literature on the news article content.
Afterwards, we implemented the method to extract
sentences that contain suggestions from the article content. Then, to obtain
comparable information between the suggestions of two articles, the suggestion
sentences of each were classified using the same text-classification algorithm
that was used for the causal attribution framing aspect. Corresponding to the
conclusion of the focus group described in Section 3, the content of interest
for this framing aspect includes the $y$ concluding paragraphs of an article.
We optimize this variable in the offline evaluation.
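A minimal sketch of such a rule-based extractor is given below; the Dutch cue
patterns are illustrative placeholders, as the actual rule set tuned in the
crowdsourcing task is not reproduced here.

```python
import re

# Illustrative Dutch advice cues ("must", "should", "it is time to", ...);
# the real rules were optimized with domain experts and may differ.
SUGGESTION_PATTERNS = [
    r"\bmoet(en)?\b", r"\bzou(den)? moeten\b", r"\bhet is tijd om\b",
    r"\bdient te\b",
]

def extract_suggestions(concluding_paragraphs):
    """Return sentences from the y concluding paragraphs that match a cue."""
    sentences = [s.strip() for p in concluding_paragraphs
                 for s in re.split(r"(?<=[.!?])\s+", p)]
    return [s for s in sentences
            if any(re.search(pat, s, re.IGNORECASE) for pat in SUGGESTION_PATTERNS)]
```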
### 5.2. Distance Functions
Having defined the extraction pipeline for each framing aspect, _i.e._ ,
problem definition, causal attribution, moral evaluation and treatment
recommendation (Entman, 1993), we now define our distance function. We compare
the extracted metadata for every pair of articles. Thus, we implement a
distance function for each framing aspect.
##### Problem Definition
The metadata regarding the problem definition framing aspect involves a
probability distribution over 1000 topics. Thus, we need a statistical
distance measure. We chose the Kullback-Leibler divergence because it is one
of the most commonly used statistical distance measures for LDA models, and it
is used in comparable work (Tintarev et al., 2018) on viewpoint
diversification.
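As a concrete illustration, the divergence between two topic distributions can
be computed as below. Note that the Kullback-Leibler divergence is asymmetric;
whether the scores are symmetrized is not specified, so this sketch shows the
plain directed form with smoothing to avoid zero probabilities.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """D(p || q) between two LDA topic distributions (e.g. 1000-dimensional)."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))
```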
##### Causal Attribution and Moral Evaluation
We compare the five-level taxonomy categories extracted from the pipeline
described in the previous section, to obtain a distance measure for the causal
attribution framing function of the primary frame. Thus, we use the weighted
Jaccard index, which measures the similarity (or diversity) of two sets
(Jaccard, 1901). The index is calculated for each level of detail in the five-
level taxonomy, such that we apply weight factors per taxonomy level. Thereby,
overlap in higher levels of detail can contribute more to the overall
similarity score. In the offline evaluation, we compare different weight
factors per taxonomy level.
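A minimal sketch of this weighted Jaccard computation, assuming each article’s
categories are stored as one set of labels per taxonomy level and the level
weights sum to 1 (equal or ascending, per Table 3):

```python
def weighted_jaccard_similarity(cats_a, cats_b, level_weights):
    """cats_a, cats_b: dicts mapping taxonomy level (1..5) to a set of labels.
    level_weights: dict mapping level to weight; weights should sum to 1."""
    sim = 0.0
    for level, weight in level_weights.items():
        a = cats_a.get(level, set())
        b = cats_b.get(level, set())
        if a | b:  # skip levels without labels on either side
            sim += weight * len(a & b) / len(a | b)
    return sim

# Ascending weights let overlap at more specific levels count more:
ascending = {lvl: lvl / 15 for lvl in range(1, 6)}  # 1+2+3+4+5 = 15
```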
For the moral evaluation framing aspect, we implement the distance function by
multiplying the Jaccard similarity and the absolute sentiment difference between
each paragraph combination of two articles. Thus, paragraphs with no
overlapping categories yield a value of zero, while highly similar paragraphs,
with different sentiment scores, lead to high levels of diversity related to
the moral evaluation framing aspect.
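The paragraph-pair computation could look as follows. Two assumptions here are
ours: the category term acts as a similarity (so that non-overlapping pairs
contribute zero, as stated above), and scores are averaged over all paragraph
pairs, since the aggregation is not spelled out.

```python
from itertools import product

def _jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def moral_evaluation_distance(pars_a, pars_b):
    """pars_*: list of (category_set, sentiment) per paragraph, sentiment in [-1, 1].
    Category overlap times sentiment difference, averaged over paragraph pairs."""
    scores = [_jaccard(cats_a, cats_b) * abs(sent_a - sent_b)
              for (cats_a, sent_a), (cats_b, sent_b) in product(pars_a, pars_b)]
    return sum(scores) / len(scores) if scores else 0.0
```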
##### Treatment Recommendation
For the treatment recommendation we used the five-level taxonomy
classification, _i.e._ , from the most general to the most specific category,
as returned by IBM Watson Natural Language Processing API, and the Jaccard
index.
### 5.3. Re-ranking
We implement the re-ranking of the input list of articles using the Maximal
Marginal Relevance (MMR) algorithm (Carbonell and Goldstein, 1998). In our
case, the re-ranking consists of ranking news articles that are more diverse
higher. First, we normalize the output of the distance functions related to
each framing aspect using a min-max normalization, and then we combine them in
a diversity score through a weighted sum. We optimize the weight factors
during the offline evaluation. We note here that we re-rank news articles that
are known to also be relevant for the given topic.
Where most re-ranking algorithms for recommender systems order lists only on
relevance, the MMR algorithm provides a linear combination between diversity,
in our case viewpoint diversity, and relevance, set by the parameter
$\lambda$. Thus, the re-ranking algorithm is defined as follows:
(1) $MMR\equiv\arg\max_{i\in R\setminus S}\left[\lambda\,Rel(i)-(1-\lambda)\max_{j\in S}\bigl(1-Div(i\|j)\bigr)\right]$
Since this work proposes a measure for viewpoint diversity rather than a
relevance measure, we decided to implement the relevance score using a simple
term frequency-inverse document frequency (TF-IDF) score.
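A sketch of the greedy selection behind Equation (1) is shown below; the
function names and data layout are ours. With $\lambda=1$ it reduces to a pure
TF-IDF ranking (the baseline of Section 6.4), and with $\lambda=0$ it
maximizes viewpoint diversity, matching the two conditions of the online
study.

```python
def mmr_rerank(candidates, rel, div, lam, list_size):
    """Greedy MMR re-ranking per Eq. (1).
    candidates: article ids; rel: id -> relevance (e.g. TF-IDF score);
    div: (i, j) -> Div(i || j); lam: relevance/diversity trade-off."""
    selected, remaining = [], list(candidates)
    while remaining and len(selected) < list_size:
        def mmr_score(i):
            # Penalize similarity (1 - Div) to the most similar selected item.
            penalty = max((1.0 - div[(i, j)] for j in selected), default=0.0)
            return lam * rel[i] - (1.0 - lam) * penalty
        best = max(remaining, key=mmr_score)
        selected.append(best)
        remaining.remove(best)
    return selected
```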
## 6\. Offline Evaluation
In this section, we describe the offline evaluation of our viewpoint
diversity-driven approach for re-ranking lists of news articles.
### 6.1. Materials
For our offline experiment, we used the news dataset introduced in Section 4,
which covers 214 news articles on four topics.
### 6.2. Procedure
The experimental procedure consists of four main steps that we detail as
follows. First, we process and enrich all the news articles in our dataset
according to the four framing aspects as defined by Entman (1993): problem
definition, causal attribution, moral evaluation, and treatment
recommendations (for details see Section 5.1).
Second, we generate the diversity matrix by comparing all combinations of two
articles, based on the enrichment described in Section 5.1. Thus, using the
distance function defined in Section 5.2 we measure the dissimilarity of two
articles based on the framing aspects. Finally, since the MMR algorithm re-
ranks a list of news articles based on a linear combination between diversity
and relevance, we calculate the TF-IDF relevance matrix, including a relevance
score for each two-article combination.
Third, we optimize the model variables and evaluate the performance using
cross-validation. For each article $i$ in the dataset, we calculate a set of
$s$ recommendations by re-ranking the remainder articles in the dataset. To
prevent over-fitting, we use cross-validation. Thus, we split the dataset into
$k$ distinct sets. We experimented with different values of $k={5,10,20}$ and
$s={3,6,9}$. For every set, we take the following steps:
1. (1)
Grid search of model variables on training set: The training set contains the
$k-1$ subsets of articles. We obtain the optimal combination of the model
variables for the training set using a grid search. An overview of the model
variables can be found in Table 3 and in Section 6.2.1.
2. (2)
Evaluation on test set: After the variables are trained on the $k-1$ subsets,
the model is evaluated on the test set for different values of $\lambda$,
between 0 and 1 with a step of 0.1. As described before, for each article in
the test set, a set of $s$ recommendations is calculated by re-ranking the
remaining articles in the dataset.
And finally, we combined the results of all $k$ cross-validations.
#### 6.2.1. Model variables
Table 3 shows the model variables that we optimize during the offline
evaluation. We choose the variation of the weights for each framing aspect
such that no single framing aspect can have the majority. Additionally, a
step-size of 0.1 is assumed to bring enough variation. We consider two
variations for the taxonomy level weights: equal weights for each taxonomy
level or ascending weights. Finally, the number of introductory and concluding
paragraphs can be either $1$ or $2$.
Table 3. Overview of possible values of model variables

Variable | Values
---|---
Weight Framing function - Problem Definition | [0.1, 0.2, 0.3, 0.4]*
Weight Framing function - Causal Attribution | [0.1, 0.2, 0.3, 0.4]*
Weight Framing function - Moral Evaluation | [0.1, 0.2, 0.3, 0.4]*
Weight Framing function - Treatment Recommendation | [0.1, 0.2, 0.3, 0.4]*
Taxonomy level weight | [equal, ascending]
Number of introductory paragraphs | [1, 2]
Number of concluding paragraphs | [1, 2]
$\lambda$ | [0.0, 0.1, …, 0.9]

*Note that all framing function weight factors should sum up to 1
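A sketch of the cross-validated grid search over these variables is given
below; `rank_quality` is a hypothetical stand-in for generating recommendation
lists with one configuration and scoring them with the diversity metric of
Section 6.3.

```python
from itertools import product
from sklearn.model_selection import KFold

# Framing-aspect weight combinations from Table 3 that sum to 1.
frame_weights = [w for w in product((0.1, 0.2, 0.3, 0.4), repeat=4)
                 if abs(sum(w) - 1.0) < 1e-9]
grid = list(product(frame_weights, ("equal", "ascending"), (1, 2), (1, 2)))

def cross_validated_search(articles, rank_quality, k=10):
    """rank_quality(config, indices) -> score is a hypothetical evaluation
    of one variable setting on a subset of articles."""
    results = []
    for train_idx, test_idx in KFold(n_splits=k).split(articles):
        best = max(grid, key=lambda cfg: rank_quality(cfg, train_idx))
        results.append((best, rank_quality(best, test_idx)))
    return results
```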
### 6.3. Evaluation Metrics
We assess the performance of the viewpoint diversification method using a
metric from literature (Tintarev et al., 2018). The metric is based on the
Intra-List Diversity metric (Zhang and Hurley, 2008; Ziegler et al., 2005;
Tintarev et al., 2018; Vargas and Castells, 2011) and it is defined as the
average distance between all pairs of articles $i$ and $j$, such that $i\neq
j$. Thereby, the distance between a pair is defined by the articles’ channels
(predefined taxonomy of 20 high-level topics) and the articles’ LDA topic-
distribution, as derived from the enrichment methods in Section 5:
(2) $Distance(i,j)=0.5\times Distance_{Channels}+0.5\times Distance_{LDA}$
The channel distance is calculated using the cosine distance, whereas the LDA
distance is computed using the Kullback-Leibler divergence.
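A sketch of this metric, assuming each article is represented by a
20-dimensional channel vector and its LDA topic distribution;
`scipy.stats.entropy` with two arguments computes the Kullback-Leibler
divergence.

```python
from itertools import combinations
import numpy as np
from scipy.spatial.distance import cosine  # cosine distance
from scipy.stats import entropy            # entropy(p, q) = KL(p || q)

def intra_list_diversity(items, eps=1e-12):
    """items: list of (channel_vector, lda_distribution) per recommended article.
    Average pairwise distance per Eq. (2)."""
    dists = []
    for (ch_i, lda_i), (ch_j, lda_j) in combinations(items, 2):
        d_ch = cosine(ch_i, ch_j)
        d_lda = entropy(np.asarray(lda_i) + eps, np.asarray(lda_j) + eps)
        dists.append(0.5 * d_ch + 0.5 * d_lda)
    return float(np.mean(dists))
```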
#### 6.3.1. Additional metrics
Besides the viewpoint diversity metric, we also measure the effectiveness of
the diversification model on other properties, as follows:
_Relevance_ : We measure the TF-IDF relevance for the recommendation lists,
such that we can measure the effectiveness of the viewpoint diversification
method.
_Kendall’s $\tau$_: We compute the Kendall’s $\tau$ rank correlation
coefficient (Kendall, 1948) to measure the similarity between two ranks of
recommended items.
_Average number of words_ : We compute the average number of words for the
recommended article lists as a measure of quality (_i.e._ , longer news
articles can be considered to be higher quality).
_Publisher Ratio_ : We measure the publisher ratio for the recommendation
lists because this could potentially provide insights on the effect of the
content diversity on the source diversity.
### 6.4. Baseline
To assess if the proposed diversification method can increase the viewpoint
diversity based on the presented metric, we compare it with a baseline,
consisting of a full relevance MMR, where $\lambda=1$, such that we rank the
recommendations purely on the TF-IDF relevance. We chose this baseline because
it has minimal effects on the recommendations in terms of viewpoint diversity.
Figure 2. Diversity and relevance scores for different values of $\lambda$ per
topic (panels: Black Lives Matter, Coronavirus, U.S. Elections, Big Tech).

Figure 3. Average number of publishers in recommendation lists, normalised by
the input ratio, for all topics (same four panels).
### 6.5. Results
In Figure 2, we show the performance of the model in terms of viewpoint
diversity and relevance for different values of $\lambda$, and the optimal
setting of the model variables. The red bars represent the results of the
viewpoint diversity metric, while the blue bars represent the relevance
scores. Variations of the cross-validation variable $k$ did not yield
significant differences between the results, and thus, we fixed $k=10$. The
list size $s$ did show to influence the number of publishers included in the
recommended list, but the results were not significant. Thus, we fixed the
list size to $s=3$, to better align the offline evaluation set up with the
online evaluation set up, where only 3 recommended news articles can be shown
at a time. Table 4 shows the optimal model variables values, per topic.
Across all topics, the proposed diversification method is capable of
increasing the viewpoint diversity of recommendation lists. According to the
metric, the viewpoint diversity increases on average from 0.55 to 0.79 between
$\lambda=1$ and $\lambda=0$. Additionally, the average relevance score
decreases from 0.58 to 0.27.
Table 4. Overview of model variables used during the offline and online
evaluation for each topic: cross-validation folds ($k$), recommended list size
($s$), number of introductory paragraphs, number of concluding paragraphs,
general weights for the four framing aspects, category weights and $\lambda$.

Topic | $k$ | $s$ | intro. par. | concl. par. | general weight | cat. weight | $\lambda$
---|---|---|---|---|---|---|---
Black Lives Matter | 10 | 3 | 2 | 1 | [0.2, 0.4, 0.1, 0.3] | eq | 0
Coronavirus | 10 | 3 | 2 | 1 | [0.1, 0.4, 0.1, 0.4] | eq | 0
U.S. Elections | 10 | 3 | 1 | 2 | [0.1, 0.4, 0.1, 0.4] | eq | 0
Big Tech | 10 | 3 | 1 | 2 | [0.2, 0.4, 0.1, 0.3] | asc | 0
##### Kendall’s $\tau$
We computed the Kendall’s $\tau$ rank correlation to assess whether the
proposed diversification method is capable of providing different
recommendation lists compared to the baseline. We computed the coefficient
between the baseline ($\lambda=1$) and each other value of
$\lambda=[0.0,0.1,...,0.9]$. Overall, we observed that the re-ranking of the
set of recommendations based on viewpoint diversity results in different
recommendation lists compared to the baseline. The coefficient decreases for
smaller values of $\lambda$, levelling off around $\tau=0$.
##### Average number of words
We observe no consistent pattern in the average number of words for different
values of $\lambda$ across topics. For the _Black Lives Matter_ and _Big Tech_
topics, the average number of words increases for larger values of $\lambda$,
for the _U.S. Elections_ topic the average decreases and for _Coronavirus_ the
average is stable.
##### Publisher ratio
Figure 3 shows the average number of publishers in the recommended lists,
normalized by the input ratio, for each value of $\lambda$. For every topic,
the number of publishers increases for larger values of $\lambda$ and the
number of different publishers for the baseline recommendation list is larger
than the one in the diverse recommendation list. Thus, we observe that the
diversification method influences the publisher ratio. For small values of
$\lambda$, some publishers get amplified, while others are excluded. We see
this effect primarily for the topics of _U.S. Elections_ and _Big Tech_. The
topic of _Coronavirus_ seems to be the only exception.
## 7\. Online Study
We conducted a between-subjects online study on the Blendle platform to
compare the reading behaviour of users who receive news articles optimized
only for relevance, versus news articles that are also diverse on viewpoint.
### 7.1. Materials
In the online study, we used the articles collected in Section 4.
### 7.2. Participants
We selected 2076 active users of the news aggregator platform, as these users
were assumed to be the most likely to see and use the recommendation functionality. We
included only users who clicked at least four times on a recommended article
below any article read, in the last 14 days before the study. Groups for
baseline and diversified recommendations were created by randomly splitting
the users.
### 7.3. Independent Variables
In the between-subjects user study we manipulated the following conditions,
referring to the recommended list of news articles:
* •
baseline recommendation: implemented using an MMR based only on relevance ($\lambda=1.0$)
* •
diversified recommendation: implemented using an MMR that maximized viewpoint diversity ($\lambda=0.0$); a minimal sketch of the MMR re-ranking follows this list
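The sketch below illustrates the greedy MMR re-ranking (Carbonell and Goldstein, 1998) underlying both conditions; the concrete relevance and similarity scoring used in this work (e.g., the framing-aspect viewpoint comparison) is abstracted behind hypothetical callables:

```python
def mmr_rerank(candidates, relevance, similarity, lam, k=3):
    """Greedily build a list of k articles, trading off relevance against
    redundancy with the already-selected articles.

    candidates: list of article IDs
    relevance:  dict mapping article ID -> relevance score in [0, 1]
    similarity: callable (a, b) -> similarity in [0, 1]; in a viewpoint-
                diversification setting this compares framing aspects
    lam:        trade-off; 1.0 = relevance only, 0.0 = diversity only
    """
    selected, remaining = [], list(candidates)
    while remaining and len(selected) < k:
        def mmr_score(d):
            redundancy = max((similarity(d, s) for s in selected), default=0.0)
            return lam * relevance[d] - (1.0 - lam) * redundancy
        best = max(remaining, key=mmr_score)
        selected.append(best)
        remaining.remove(best)
    return selected
```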
### 7.4. Procedure
During the two-week experiment, six days per week, we provided recommendations
for two articles featured on the selected users’ homepage, showing sets of
three recommendations below the content on the reading page of the original
article. Every morning, we chose these two articles manually to match any of
the topics that we selected (_Black Lives Matter_ , _Big Tech_ , _Coronavirus_
, and _U.S. Elections_). Afterwards, both the baseline and diversified
recommendation sets were calculated for both articles and included in the news
aggregator platform.
### 7.5. Dependent Variables
To analyze the reading behaviour of the two different user groups and answer
RQ1, we measure specific events on the news aggregator platform (_i.e._ ,
check whether the user opened the article and if the user finished reading the
article). Based on these available events, we observe multiple implicit
(click-through-rate per news article, click-through-rate per recommendation
set and completion rate of recommendation) and explicit (heart ratio) measures
of the reading behaviour. To answer RQ2, we look into presentation
characteristics of the recommended articles (_i.e._ , presence of editorial
title, presence of thumbnail and counting number of hearts).
_1\. Click-through rate per article:_ The number of clicks on a news article
is divided by the total number of users who finished one of the original news
articles for which that article was recommended. The completion of an original
news article is registered using a scroll-position.
_2\. Click-through rate per recommendation set:_ The total number of clicks on
either of the three news articles in the recommendation set is divided by the
number of users who finished the original news article (using scroll-position)
for which the recommendation set was presented.
_3\. Completion rate of recommendation:_ Is implemented as the number of users
that read the full recommended article (using scroll-position) divided by the
number of users who opened the news article. The completion rate is assumed to
be a measure for the user satisfaction with the recommendations. We can argue
that short news articles are more likely to be completed than long news
articles. Thus, we also analyze the completion rate of a news article in
relation to the number of words in the news article.
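As a small illustration, metrics 1–3 reduce to ratios over the platform's event counts; the function and variable names below are our own, and the counts are assumed to be derived from the click and scroll-position events described above:

```python
def ctr_per_article(clicks_on_article, users_finished_original):
    """Metric 1: clicks on a recommended article, divided by the users
    who finished an original article under which it was recommended."""
    return clicks_on_article / users_finished_original

def ctr_per_set(clicks_on_any_in_set, users_finished_original):
    """Metric 2: clicks on any of the three recommended articles,
    divided by the users who finished the original article."""
    return clicks_on_any_in_set / users_finished_original

def completion_rate(users_finished_recommended, users_opened_recommended):
    """Metric 3: users who read the full recommended article, divided
    by the users who opened it."""
    return users_finished_recommended / users_opened_recommended
```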
_4\. Favourite ratio:_ The news aggregator platform allows users to mark an
article as a favourite, illustrated by an icon of a heart. The users can click
this icon at the end of the article content. We implemented the measure as the
number of users of the user group (baseline or diverse) that clicked on the
icon, divided by the number of users in the same group that completed the
article. The metric is assumed to be a marker of user satisfaction with the
article.
_5\. Presentation characteristics:_ We measured three additional properties of
a recommended article during the experiment, which referred to the
presentation characteristics of recommended news articles. First, the
editorial team can replace the original title of a news article with a custom,
editorial title. In general, these custom titles are longer and more
explanatory than the original ones. Second, articles can be presented with or
without a thumbnail image. Third, the number of users who selected the article
as a favourite is visualised by a counting number of hearts in the left-upper
corner of an article banner. All three properties are assumed to potentially
influence the click-through rate and are, therefore, measured during the
experiment.
_6\. Source diversity:_ Finally, we also measured the influence of the source
diversity of the recommendation set on the click-through rate. As seen in
Section 6, higher levels of viewpoint diversity were shown to influence the
number of times a publisher is included in the recommendation.
### 7.6. Results
The online study ran six days a week for two weeks. Thus, we provided
recommendations below 24 articles. During the experiment, the topic of
_Coronavirus_ became extremely prominent, so we provided recommendations below
18 out of 24 news articles on this topic. In contrast, the _Black Lives
Matter_ topic lost all actuality, resulting in no recommendations for this
topic. For the _U.S. Elections_ topic, we provided recommendations below four
articles, and for the _Big Tech_ topic, below two news articles.
##### Click-through rate per recommended article
The mean click-through rate per recommended article across all topics was 0.11
(stderr. = 0.011) for the baseline recommendations and 0.087 (stderr. =
0.0083) for the diversified recommendations.
Furthermore, according to the Mann-Whitney U test (U=570, p-val$>$0.05), we
did not find a significant difference between the two user groups in terms of
click-through rate per recommended article. The same result holds per topic.
##### Click-through rate per recommended set
The mean click-through rate per recommended set across all topics was 0.31
(stderr. = 0.016) for the baseline recommendations and 0.25 (stderr. = 0.016)
for the diversified recommendations (Figure 4(a)). According to the
Mann-Whitney U test (U=2.9, p-val$<$0.05), we find a
significant difference between the mean click-through rate per recommended
sets for the two user groups. Per topic, we find such difference significant
only for _Coronavirus_ , shown in Figure 4(b), with a click-through rate per
recommended set of 0.32 (stderr. = 0.018) for the baseline recommendations and
0.25 (stderr. = 0.018) for the diversified recommendations (U=80.0,
p-val$<$0.05). For the other topics, we found no significant difference
between the two user groups.
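A minimal sketch of this two-group comparison with SciPy, using dummy click-through values in place of the real per-observation data:

```python
from scipy.stats import mannwhitneyu

# Dummy per-observation click-through rates, for illustration only.
baseline_ctr = [0.35, 0.28, 0.31, 0.30, 0.33, 0.29]
diversified_ctr = [0.27, 0.24, 0.25, 0.22, 0.26, 0.25]

u_stat, p_value = mannwhitneyu(baseline_ctr, diversified_ctr,
                               alternative="two-sided")
print(f"U={u_stat:.1f}, p={p_value:.4f}")
```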
##### Completion rate
We found no significant difference in terms of completion rate for the two
user groups. We also applied the Spearman’s rank correlation to see whether
the completion rate is correlated with the length of the articles. However, we
did not find any correlation in either of the two conditions.
##### Heart ratio
We found no significant difference, for all topics and across topics, in terms
of heart ratio for the two user groups. This suggests that the quality of the
recommendations was comparable between the two conditions.
(a) Click-through rate per recommended set, for the two user groups.
(b) Click-through rate per recommended set and per topic, for the diversified
user group.
(c) Influence of the thumbnail image as presentation characteristic, for the
two user groups.
(d) Influence of the hearts as presentation characteristic, for the two user
groups.
Figure 4. Overview of significant results in the online study
#### 7.6.1. Influence of presentation characteristics
We measured the influence of three factors, namely the presence of an
editorial title, the presence of a thumbnail and the number of users that
chose the article as a favourite on the click-through rate of an article.
##### Editorial title
Regarding the influence of the inclusion of an editorial title on the click-
through rate, no statistically significant effect was found for either user
group.
##### Thumbnail image
We found no significant influence of the inclusion of a thumbnail image on the
click-through rate for baseline users. In contrast, for diverse users,
recommendations with a thumbnail were opened 3.1% more often than
recommendations without one (Figure 4(c)), a difference that is also
statistically significant.
##### Favorite articles
We applied the Spearman’s rank correlation to see whether we find a
correlation between the click-through rate and the number of hearts. Figure
4(d) shows the distribution of click-through rates and the number of hearts.
We only found a moderate positive correlation of 0.57, also statistically
significant (p-val$<<$0.05) for the diversified user group.
#### 7.6.2. Source diversity
As seen in the offline evaluation, higher levels of viewpoint diversity turned
out to have remarkable effects on the publisher ratio. Therefore, we also
evaluated the effect of the source diversity of a recommendation set on the
click-through rate. For each recommendation set, we computed the number of
different publishers and we found recommendation sets in which all articles
are from a different publisher and sets in which two articles are from the
same publisher. Afterwards, the click-through rate was calculated for each
category. The results show no statistically significant difference in the
click-through rate between recommendation sets with two and with three
different publishers, for either the baseline or the diversified users.
## 8\. Discussion
We first discuss the results of the offline and online evaluation and then
provide an overview of the limitations of our approach.
### 8.1. Offline Evaluation
The offline evaluation indicated that the proposed method is capable of
increasing the viewpoint diversity of recommendation sets according to the
metric defined in (Tintarev et al., 2018). The average viewpoint diversity
scores across all topics increased from 0.55 to 0.79 for an increasing level
of diversity in the MMR algorithm. Simultaneously, the average relevance score
decreased from 0.58 to 0.27. Remarkably, the diversity score of 0.41 in
(Tintarev et al., 2018) is considerably smaller than the maximum average value
of 0.79 found in this work. A possible factor could be the fact that in
(Tintarev et al., 2018) the LDA topic model was excluded from the
diversification method to prevent any interference with the evaluation metric,
whereas the diversification method in this work still depends on an LDA topic-
model. Therefore, the difference in viewpoint diversity scores between the
methods can possibly appear due to the interference of metadata between the
viewpoint diversity metric and diversification method in this work.
A remarkable effect of the diversification algorithm that was found in the
offline evaluation includes the decreasing publisher ratio for larger
contributions of diversity in the MMR. After investigating the effect in more
detail, it was found that the maximum frequency an article is included in the
recommendation lists is around 2 to 4 times higher at $\lambda$ = 0, compared
to $\lambda$ = 1. Thus, for larger contributions of diversity, the algorithm
increasingly selects the same article for the recommendation lists. This could
be a possible explanation of the decreasing publisher ratio, suggesting that
some outliers in the dataset get amplified, thereby suppressing the inclusion
of different sources. To be able to study this effect thoroughly, the offline
evaluation could have benefited from a setup in which it was possible to
assess the contribution of individual framing aspects to the global viewpoint
diversity score per article.
We can conduct a broader discussion about the viewpoint diversity metric used.
Although approaches that use source diversity are more popular, scholars
generally agree that viewpoint diversity can only be achieved by fostering
content diversity, because multiple sources can still refer to the same point
of view (Voakes et al., 1996). Based on these findings, this study used a
content-based approach. From the results of the offline evaluation, it became
clear that increasing levels of content diversity exclude multiple publishers
and thus decrease source diversity. Moreover, some specific publishers got
amplified remarkably for high levels of content diversity. Therefore,
viewpoint diversification methods could benefit from considering both content
and source diversity.
### 8.2. Online Evaluation
No major influence of viewpoint diversification on the reading behaviour was
found, except for the click-through rate calculated per recommendation set,
which indicated a statistically significant difference between baseline and
diverse users of 6.5% (in favour of the baseline recommendations). However, the
results of the click-through rate calculated per recommendation indicated no
significant difference between the two user groups. Likewise, the other two
measurements of the reading behaviour, including the completion rate of
recommendations and the ratio of users who selected a recommendation as a
favourite, showed no significant difference between baseline and diverse
users.
In reflection on the motivation of this study, the proposed diversification
for news media is capable of enhancing the viewpoint diversity of news
recommendation, while maintaining comparable measures of the reading behaviour
of users. The results thus suggest that recommender systems are capable of
preserving the quality standards of multiperspectivity in online news
environments. Thereby, situations of extreme low diversity, known as filter
bubbles, could also be mitigated.
These results are in contrast with the most comparable study, Tintarev et al.
(2018), who found a negative effect on intent to read diversified news
articles. The authors proposed a viewpoint diversification method based on the
MMR-algorithm with linguistic features, such as gravity, complexity and
emotional tone. During a user study, 15 participants were asked to make a
forced choice between a recommendation from the diverse set and a
recommendation from the baseline set, after reading an article on the same
topic. It was found that 66% of the participants chose the baseline article,
compared with 33% who chose the diverse article. However, in the current
study, we observed the reading behaviour of both user groups without them
being aware, and we argue that the present setup simulates the situation in a
more realistic way.
Additionally, the results shed light on the importance of how a recommendation
is presented. Multiple presentation properties, such as the inclusion of a
thumbnail image and the number of times an article is marked as favourite,
were shown to have a significant influence on the click-through rate of
recommendations. Future research, thus, should not only address the capability
of a model to enhance viewpoint diversity according to an offline metric but
also evaluate what presentation characteristics could impact the users’
willingness to read multiperspectival news. Related research on viewpoint-
aware interfaces, which aim to explain the recommendation choices to users,
can be seen as very valuable (Tintarev, 2017; Nagulendra and Vassileva, 2014).
### 8.3. Limitations
We further discuss the limitations of our approach.
Choice of participants in the online study. Only users who frequently followed
recommendations below articles were selected for the experiment. Thus, the
click-through rates presented in this study are higher than for average news
readers.
Limited number of topics and articles. For both the online and offline
evaluations, we used only opinion pieces. Furthermore, each evaluation had a
limited number of topics, namely four, as well as a limited number of news
articles. New topics could reveal additional results that hold across topics.
Missing user perceptions. While we were able to study user behavior at a
reasonable scale, a notable omission is users’ qualitative judgement of
viewpoint diversity in the resulting recommendations. We plan to continue
collaborating with the news aggregator platform to refine the proposed
framework, _i.e._ , to improve the viewpoints extraction.
Presentation characteristics. Some presentation characteristics, and in
particular the heart ratio, could also be markers of quality. Further
qualitative analysis is needed to _e.g._ , understand how much of user
behavior is directed by quality. We also saw that for some topics the presence
of thumbnail was more common than for other topics, and it would be relevant
to study whether this also interacted with user perceptions of relevance or
quality.
Relevance metric. The offline study could use a more sophisticated relevance
measure between the recommendation and the original article. The relevance
score was based on a simple TF-IDF score, limited to the terms in a
handcrafted search query.
Influence of $\lambda$. Given limited time for online testing, we only
compared against a maximum viewpoint diversity score.
Influence of publishers. In Figure 3 we see that, although 15 publishers are
represented in the datasets, three publishers are predominant. Due to the
limited number of articles and the imbalance in terms of publishers, the
inclusion of a wide variety of perspectives on a topic may be limited.
## 9\. Conclusions
In this paper, we proposed a novel method for enhancing the diversity of
viewpoints in lists of news recommendations. Inspired by research in
communication science, we identified frames as the most suitable
conceptualization for news content diversity. We operationalized this concept
as a computational measure, and we applied it in a re-ranking of topic
relevant recommended lists, to form the basis of a novel viewpoint
diversification method.
In an offline evaluation, we found that the proposed method improved the
diversity of the recommended items considerably, according to a viewpoint
diversity metric from literature. We also conducted an online study with more
than 2000 users, on the Blendle platform, a Dutch news aggregator. The reading
behaviour of users receiving diversified recommendations was largely
comparable to those in the baseline. Besides, the results suggest that
presentation characteristics (thumbnail image, and the number of hearts) lead
to significant differences in reading behaviour. These results suggest that
research on presentation aspects for recommendations may be just as relevant
as novel viewpoint diversification methods, to achieve multiperspectivity in
automated online news environments.
As future work, we plan to investigate further the presentation
characteristics and how they influence user experience, in addition to
behaviour. In more controlled settings, we will study the relative effects of
actual (e.g., as judged by experts) versus perceived quality (e.g., number of
hearts in the interface) of recommended news items. Future work will also
focus on defining a better metric to measure viewpoint diversity, as opposed
to topic diversity, cf. (Draws et al., [n.d.]). Additionally, we learnt that
contextual information, _i.e._ , general knowledge about a topic (_e.g._ , the
current measures in place to stop the spread of coronavirus) can also be
essential to reveal a specific frame. We hope that this work will encourage
further research on how framing can be defined, conceptualized, and evaluated
in the computational domain.
## References
* Baden and Springer (2014) Christian Baden and Nina Springer. 2014. Com (ple) menting the news on the financial crisis: The contribution of news users’ commentary to the diversity of viewpoints in the public debate. _European journal of communication_ 29, 5 (2014), 529–548.
* Baden and Springer (2017) Christian Baden and Nina Springer. 2017. Conceptualizing viewpoint diversity in news discourse. _Journalism_ 18, 2 (2017), 176–194.
* Baker (2001) C Edwin Baker. 2001\. _Media, markets, and democracy_. Cambridge University Press.
* Bateson (1955) Gregory Bateson. 1955\. A theory of play and fantasy; a report on theoretical aspects of the project of study of the role of the paradoxes of abstraction in communication. _Psychiatric research reports_ 2 (1955), 39–51.
* Bennett (1996) W Lance Bennett. 1996\. An introduction to journalism norms and representations of politics. (1996).
* Benson (2009) Rodney Benson. 2009\. What makes news more multiperspectival? A field analysis. _Poetics_ 37, 5-6 (2009), 402–418.
* Burscher et al. (2014) Björn Burscher, Daan Odijk, Rens Vliegenthart, Maarten De Rijke, and Claes H De Vreese. 2014\. Teaching the computer to code frames in news: Comparing two supervised machine learning approaches to frame analysis. _Communication Methods and Measures_ 8, 3 (2014), 190–206.
* Carbonell and Goldstein (1998) Jaime Carbonell and Jade Goldstein. 1998. The use of MMR, diversity-based reranking for reordering documents and producing summaries. In _Proceedings of the 21st annual international ACM SIGIR conference on Research and development in information retrieval_. 335–336.
* Choi (2009) Jihyang Choi. 2009\. Diversity in foreign news in US newspapers before and after the invasion of Iraq. _International Communication Gazette_ 71, 6 (2009), 525–542.
* De Vreese (2005) Claes H De Vreese. 2005\. News framing: Theory and typology. _Information design journal & document design_ 13, 1 (2005).
* Draws et al. ([n.d.]) Tim Draws, Nava Tintarev, Ujwal Gadiraju, Alessandro Bozzon, and Benjamin Timmermans. [n.d.]. Assessing Viewpoint Diversity in Search Results Using Ranking Fairness Metrics. In _BIAS Workshop in association with ECMLPKDD’2020_.
* Entman (1993) Robert M Entman. 1993\. Framing: Toward clarification of a fractured paradigm. _Journal of communication_ 43, 4 (1993), 51–58.
* Gamson and Modigliani (1989) William A Gamson and Andre Modigliani. 1989. Media discourse and public opinion on nuclear power: A constructionist approach. _American journal of sociology_ 95, 1 (1989), 1–37.
* Gans (2003) Herbert J Gans. 2003\. _Democracy and the News_. Oxford University Press.
* Gitlin (1980) Todd Gitlin. 1980\. _The whole world is watching: Mass media in the making and unmaking of the new left_. McGraw-Hill.
* Greussing and Boomgaarden (2017) Esther Greussing and Hajo G Boomgaarden. 2017. Shifting the refugee narrative? An automated frame analysis of Europe’s 2015 refugee crisis. _Journal of Ethnic and Migration Studies_ 43, 11 (2017), 1749–1774.
* Helberger (2019) Natali Helberger. 2019\. On the democratic role of news recommenders. _Digital Journalism_ 7, 8 (2019), 993–1012.
* Jaccard (1901) Paul Jaccard. 1901\. Étude comparative de la distribution florale dans une portion des Alpes et des Jura. _Bull Soc Vaudoise Sci Nat_ 37 (1901), 547–579.
* Kendall (1948) Maurice George Kendall. 1948\. Rank correlation methods. (1948).
* Kroon et al. (2016) Anne C Kroon, Alena Kluknavska, Rens Vliegenthart, and Hajo G Boomgaarden. 2016. Victims or perpetrators? Explaining media framing of Roma across Europe. _European Journal of Communication_ 31, 4 (2016), 375–392.
* Kunaver and Požrl (2017) Matevž Kunaver and Tomaž Požrl. 2017. Diversity in recommender systems–A survey. _Knowledge-Based Systems_ 123 (2017), 154–162.
* Masini et al. (2018) Andrea Masini, Peter Van Aelst, Thomas Zerback, Carsten Reinemann, Paolo Mancini, Marco Mazzoni, Marco Damiani, and Sharon Coen. 2018\. Measuring and explaining the diversity of voices and viewpoints in the news: A comparative study on the determinants of content diversity of immigration news. _Journalism Studies_ 19, 15 (2018), 2324–2343.
* Matthes and Kohring (2008) Jörg Matthes and Matthias Kohring. 2008. The content analysis of media frames: Toward improving reliability and validity. _Journal of communication_ 58, 2 (2008), 258–279.
* McQuail (1992) Denis McQuail. 1992\. _Media performance: Mass communication and the public interest_. Vol. 144. Sage London.
* Nagulendra and Vassileva (2014) Sayooran Nagulendra and Julita Vassileva. 2014. Understanding and controlling the filter bubble through interactive visualization: a user study. In _Proceedings of the 25th ACM conference on Hypertext and social media_. 107–115.
* Napoli (1999) Philip M Napoli. 1999\. Deconstructing the diversity principle. _Journal of communication_ 49, 4 (1999), 7–34.
* Negi (2019) Sapna Negi. 2019\. _Suggestion mining from text_. Ph.D. Dissertation. NUI Galway.
* Negi et al. (2016) Sapna Negi, Kartik Asooja, Shubham Mehrotra, and Paul Buitelaar. 2016. A study of suggestions in opinionated texts and their automatic detection. In _Proceedings of the Fifth Joint Conference on Lexical and Computational Semantics_. 170–178.
* Newman et al. (2015) Nic Newman, David AL Levy, and Rasmus Kleis Nielsen. 2015\. _Reuters Institute digital news report 2015: Tracking the future of news_. Reuters Institute for the Study of Journalism.
* Pariser (2011) Eli Pariser. 2011\. _The filter bubble: How the new personalized web is changing what we read and how we think_. Penguin.
* Porto (2007) Mauro P Porto. 2007\. Frame diversity and citizen competence: Towards a critical approach to news quality. _Critical Studies in Media Communication_ 24, 4 (2007), 303–321.
* Scheufele (1999) Dietram A Scheufele. 1999\. Framing as a theory of media effects. _Journal of communication_ 49, 1 (1999), 103–122.
* Strömbäck (2005) Jesper Strömbäck. 2005\. In search of a standard: Four models of democracy and their normative implications for journalism. _Journalism studies_ 6, 3 (2005), 331–345.
* Tintarev (2017) Nava Tintarev. 2017\. Presenting diversity aware recommendations: Making challenging news acceptable. (2017).
* Tintarev et al. (2018) Nava Tintarev, Emily Sullivan, Dror Guldin, Sihang Qiu, and Daan Odjik. 2018. Same, same, but different: algorithmic diversification of viewpoints in news. In _Adjunct Publication of the 26th Conference on User Modeling, Adaptation and Personalization_. 7–13.
* Van Cuilenburg (1999) Jan Van Cuilenburg. 1999\. On competition, access and diversity in media, old and new: Some remarks for communications policy in the information age. _New media & society_ 1, 2 (1999), 183–207.
* Vargas and Castells (2011) Saúl Vargas and Pablo Castells. 2011. Rank and relevance in novelty and diversity metrics for recommender systems. In _Proceedings of the fifth ACM conference on Recommender systems_. ACM, 109–116.
* Vliegenthart (2012) Rens Vliegenthart. 2012\. Framing in mass communication research–An overview and assessment. _Sociology Compass_ 6, 12 (2012), 937–948.
* Voakes et al. (1996) Paul S Voakes, Jack Kapfer, David Kurpius, and David Shano-yeon Chern. 1996. Diversity in the news: A conceptual and methodological framework. _Journalism & Mass Communication Quarterly_ 73, 3 (1996), 582–593.
* Vu and Lynn (2020) Hong Tien Vu and Nyan Lynn. 2020. When the news takes sides: Automated framing analysis of news coverage of the Rohingya crisis by the elite press from three countries. _Journalism Studies_ (2020), 1–21.
* Zhang and Hurley (2008) Mi Zhang and Neil Hurley. 2008. Avoiding monotony: improving the diversity of recommendation lists. In _Proceedings of the 2008 ACM conference on Recommender systems_. ACM, 123–130.
* Ziegler et al. (2005) Cai-Nicolas Ziegler, Sean M McNee, Joseph A Konstan, and Georg Lausen. 2005. Improving recommendation lists through topic diversification. In _Proceedings of the 14th international conference on World Wide Web_. ACM, 22–32.
# Ask Me or Tell Me? Enhancing the Effectiveness of Crowdsourced Design
Feedback
Fritz Lekschas (Harvard School of Engineering and Applied Sciences, Cambridge,
MA, USA), Spyridon Ampanavos (Harvard Graduate School of Design, Cambridge,
MA, USA), Pao Siangliulue (B12, New York City, NY, USA), Hanspeter Pfister
(Harvard School of Engineering and Applied Sciences, Cambridge, MA, USA) and
Krzysztof Z. Gajos (Harvard School of Engineering and Applied Sciences,
Cambridge, MA, USA)
(2021)
###### Abstract.
Crowdsourced design feedback systems are emerging resources for getting large
amounts of feedback in a short period of time. Traditionally, the feedback
comes in the form of a declarative statement, which often contains positive or
negative sentiment. Prior research has shown that overly negative or positive
sentiment can strongly influence the perceived usefulness and acceptance of
feedback and, subsequently, lead to ineffective design revisions. To enhance
the effectiveness of crowdsourced design feedback, we investigate a new
approach for mitigating the effects of negative or positive feedback by
combining open-ended and thought-provoking questions with declarative feedback
statements. We conducted two user studies to assess the effects of question-
based feedback on the sentiment and quality of design revisions in the context
of graphic design. We found that crowdsourced question-based feedback contains
more neutral sentiment than statement-based feedback. Moreover, we provide
evidence that presenting feedback as questions followed by statements leads to
better design revisions than question- or statement-based feedback alone.
crowdsourced design feedback, feedback framing, sentiment, questioning
CHI Conference on Human Factors in Computing Systems (CHI ’21), May 8–13,
2021, Yokohama, Japan. DOI: 10.1145/3411764.3445507. ISBN:
978-1-4503-8096-6/21/05. CCS: Human-centered computing, Human computer
interaction (HCI).
## 1\. Introduction
Feedback is a central part of learning and achievement that can help evaluate
one’s work, uncover problems, and promote new ideas for improvement. Yet, its
effectiveness greatly varies by type and how it is framed, and its impact can
be either positive or negative (Hattie and Timperley, 2007). In graphic
design, feedback is a vital part of the iterative design process and is
typically solicited in critique sessions. However, these sessions are time and
resource intensive. Moreover, feedback from alternative sources like peers and
online communities can be scarce (Marlow and Dabbish, 2014; Xu et al., 2014;
Luther et al., 2015), biased (Tohidi et al., 2006; Xu et al., 2014), and
superficial (Willett et al., 2012; Xu and Bailey, 2012). Crowdsourced online
feedback is an emerging mechanism to gather large amounts of feedback quickly
(Luther et al., 2014; Greenberg et al., 2015; Yen et al., 2016). When
structured appropriately, crowdsourced feedback can be as effective as expert
feedback (Yuan et al., 2016) and help designers produce more and better design
revisions than they could have done otherwise (Xu et al., 2014, 2015; Luther
et al., 2015).
For crowdsourced feedback to be effective, it needs to foster productive
reflection on the design to generate useful ideas for design revisions.
Furthermore, the feedback needs to be acceptable to the designer, or else they
will ignore it. However, this is challenging because there is a tension
between the productive value of feedback and acceptability, which is related
to the feedback’s perceived sentiment. For instance, Crain et al. (Crain and
Bailey, 2017) found that feedback with positive sentiment, which we will refer
to as positive feedback, is typically preferred by content creators. However,
positive feedback is less likely to lead to improvements through iteration. On
the other hand, in their study, feedback with negative sentiment encouraged
more design iterations but tended to have lower acceptance. In the worst case,
feedback with negative sentiment, which we will refer to as negative feedback,
influences the recipient’s affective state (Baumeister et al., 2001; Wu and
Bailey, 2018) and can reduce their overall task performance (Cairns et al.,
2014).
Figure 1. Enhanced Design Feedback: Two example feedback items for a flyer
from the first user study (Section 4). Each feedback item consists of an open-
ended question followed by a traditional statement. Although the related
questions and statements target the same aspects of the flyer design, the
questions carry more neutral sentiment than the statements.
To improve the effectiveness of crowdsourced feedback on design revisions, we
contribute a novel approach of enhancing traditional statement-based feedback
with open-ended and thought-provoking questions (Figure 1). We hypothesized
that presenting feedback in the form of a question followed by a statement
would result in higher-quality design revisions compared to statement-based or
question-based feedback alone. Building on prior work from several fields, our
rationale for this hypothesis is twofold. First, we hypothesized that feedback
in the form of open-ended questions carries less sentiment than statements
and, subsequently, improves the acceptance of the feedback. Second, we
hypothesized that the preceding open-ended question promotes productive
reflection even if the statement-based feedback is superficial or unacceptable
to the designer.
In design, reflection is fundamental in evaluating the current state of one’s
work relative to its goals and for generating ideas for improvements (Schön,
1984). It is suggested that combining feedback with reflection is a superior
format (Brandt, 2008) compared to feedback alone. For instance, feedback that
incorporates a reflective task can lead to more extensive revisions and
increased quality (Yen et al., 2017) compared to traditional feedback. An
effective way to promote reflection is facilitative questioning. For example,
in teaching, questioning is known as an effective technique to trigger
reflection and critical thinking among students (Carnine et al., 1982; Tofade
et al., 2013). However, questioning should not be the only type of feedback as
it can otherwise irritate students (Berghmans et al., 2012). Besides
reflection, questions could balance the acceptance of feedback statements,
assuming they contain neutral sentiment. For instance, ordering feedback from
positive to negative has been shown to lead to a more balanced perception of
negative feedback by improving the recipients’ happiness and excitement (Wu
and Bailey, 2017).
We conducted two online user studies in the context of graphic design to study
the effects of enhancing statement-based with question-based feedback. In the
first study, we investigated if feedback in the form of open-ended and
thought-provoking questions can be crowdsourced and if these questions contain
more neutral sentiment compared to corresponding feedback statements. The
results show that 85% of the questions created by the crowd workers are open-
ended and thought-provoking. We also found that the questions derived from
negative or positive statements contained significantly more neutral sentiment
than the corresponding statements, as exemplified in Figure 1. In the second
study, we examined the effectiveness of feedback enhanced with open-ended
questions on the quality of design revisions. We recruited 36 non-professional
designers to design a flyer and revise it based on crowdsourced feedback. To
test our hypothesis, we assessed three ways of presenting the feedback:
statements only, questions only, and questions followed by statements. We
employed an external jury of expert designers to rate the flyers’ design
quality for comparison. We found that participants who were shown questions
followed by statements improved their designs to a significantly greater
degree than participants who saw either statements or questions alone.
We make two contributions to the area of crowdsourced design feedback. First,
we introduce the first method for framing crowdsourced design feedback as
questions and combining them with traditional feedback statements. Second, we
provide empirical evidence that presenting crowdsourced feedback in the form
of open-ended questions followed by statements improves the quality of design
revisions compared to presenting feedback as either statements or questions
alone. Combining statement-based feedback with open-ended questions is
complementary to other strategies for enhancing the effectiveness of design
feedback. Therefore, our approach can easily be integrated into existing
crowdsourced design feedback systems to increase the overall productive value
of the feedback for design revisions.
## 2\. Related Work
### 2.1. Background
Within the inherently iterative design process, feedback is essential to
evaluate the design’s current state and generate revision ideas (Sadler, 1989;
Hattie and Timperley, 2007). Design studios are a fundamental element in
design education, where students receive feedback in various types of critique
sessions (Schön, 1985). These critique sessions consist of a work presentation
by the student followed by an individual critique from the teacher (i.e.,
“desk crit”), multi-layered critique by a jury, or open feedback from other
students (Uluoğlu, 2000). Ideally these sessions result in a dialogue for
finding a common ground between one’s own design intentions and the received
feedback. In the professional practice, designers are seeking such detailed
feedback from peers. Overall, design critiques provide in-depth analyses and
foster a deep understanding of the designer’s work (Dannels et al., 2008;
Connor and Irizarry, 2015). However, while providing rich feedback, critiques
can be infrequent, time-consuming, and resource-intensive. Therefore,
designers may require additional feedback in preparation for the more
structured critique sessions. Peers and online communities can provide such
additional feedback but it can be limited in quantity (Marlow and Dabbish,
2014; Xu et al., 2014; Luther et al., 2015), biased (Tohidi et al., 2006; Xu
et al., 2014), and superficial (Willett et al., 2012; Xu and Bailey, 2012).
Crowdsourcing is an approach to overcome these limitations (Luther et al.,
2014; Greenberg et al., 2015; Yen et al., 2016) and provide almost expert-
quality feedback when elicited and structured effectively (Yuan et al., 2016).
### 2.2. Sentiment and Valence
Prior research on crowdsourced feedback systems found that the sentiment of
feedback impacts its perceived usefulness. For example, Yuan et al. (Yuan et
al., 2016) found that “positively written and emotional critiques received
higher average ratings”. Their findings provide evidence that valence and
arousal are positively correlated with designers’ ratings of feedback.
Similarly, Nguyen et al. (Nguyen et al., 2017) studied feedback on writing
tasks and found that positive tone in critical feedback leads to better work
quality overall. Krause et al. (Krause et al., 2017) systematically
investigated the perceived usefulness of feedback along various dimensions
such as length, specificity, or complexity. They found that the perceived
usefulness peaks for feedback with neutral to very mildly negative sentiment.
Wu et al. (Wu and Bailey, 2017) built upon these findings and studied the
effects of presenting feedback with varying sentiments in different orders.
They present empirical evidence that showing negative feedback at the end
improves the feedback’s perception.
However, in contrast to the perceived usefulness, Crain et al. (Crain and
Bailey, 2017) studied the long-term effects of different types of feedback on
design iterations in a large meta-study on feedback collected from Reddit.
They found that longer and less positive feedback is predictive of a higher
number of design iterations. Although the study could only take publicly
shared iterations into account, it highlights a disparity between the
perceived usefulness and the actual effectiveness of feedback with diverging
sentiment.
Sargeant et al. (Sargeant et al., 2008) studied the impact of positive and
negative feedback on the recipient. They found that negative feedback can
evoke negative feelings, especially when the feedback disagrees with the
recipient’s self-perception. In this case, the recipient perceives the
feedback to be addressed against themselves rather than the task at hand. Wu
et al. (Wu and Bailey, 2018) confirmed these findings and additionally showed
that balancing the valence of feedback can mitigate the impact of negative
feedback on its perceived usefulness.
We hypothesize that framing feedback as a question will alleviate sentiment.
Subsequently, we hypothesize that showing feedback in the form of questions
prior to the traditional statement-based feedback will increase the feedback’s
overall acceptability.
### 2.3. Reflection
The ultimate goal of feedback is to help improve the critiqued work. In order
to achieve this goal, feedback needs to facilitate new productive ideas.
Beyond direct feedback, reflection is another popular tool (Schön, 1984) in
the design community to generate ideas for design revisions. See Baumer et al.
(Baumer et al., 2014) for a review on how reflection can be leveraged in the
design process as a whole. In regard to feedback, Caroline Brandt (Brandt,
2008) showed that feedback alone might not always be sufficient. She suggests
that combining feedback with a reflection task is generally superior.
Yen et al. (Yen et al., 2017) confirmed this hypothesis by showing that
reflection alone can be as beneficial as crowdsourced feedback. They implement
a reflective activity where designers have to respond to three generic
questions about their design. In their study, the combination of reflection
and feedback led to the best design quality overall. Moreover, Sargeant et al.
(Sargeant et al., 2009) found that facilitated reflection can alleviate the
distress caused by negative feedback and enhance feedback acceptance.
In this work, we build upon these findings and hypothesize that feedback in
the form of questions will act as a lightweight reflective activity that
promotes useful ideas for design revisions. Moreover, we extend previous
reflection approaches by preceding a negative feedback statement with an open-
ended question related to the same aspect of the design to help designers to
better cope with potential distress caused by the negative feedback.
### 2.4. Facilitative Questioning
For questions to be effective, they need to facilitate reflection and promote
critical thinking. For instance, in evaluating writing, Knoblauch and Brannon
(Knoblauch and Brannon, 1984) have established an approach called
“Facilitative Response”, which argues that the reviewer should adopt a
“facilitative posture”. Instead of directly telling the writer what to do, the
reviewer should raise open-ended questions to encourage the writer to think
about their ideas and expressions more fully. Facilitative responses do not
need to come in the form of questions, but studies have found questions to be
an effective implementation.
For example, Carnine et al. (Carnine et al., 1982) found positive effects for
facilitative questioning in combination with feedback in teaching children.
Berghmans et al. (Berghmans et al., 2012) studied the benefits of facilitative
questioning against direct teaching approaches for medical students. They
found that facilitative questioning is beneficial for students with less
expertise. Interestingly, they also discovered that questioning alone is not
perceived well as students demand information after facilitative questions
were raised.
In general, questioning has been studied as a tool for teaching. For example,
Alison King developed a technique called “reciprocal questioning” (King, 1992,
1990) in which she provides evidence that thought-provoking questions lead to
a deep discussion about topics and encourage critical thinking (King, 1995).
Ciardiello (Ciardiello, 1998) discusses how to identify and generate
divergent questions to promote literacy. Chambers et al. (Chambers and
Vickers, 2006) compared questioning as a teaching tool for swimmers and found
that deliberately delaying extensive amounts of feedback and replacing it with
insightful questions elicits better reflection and ultimately improves the
swimmers’ technique.
In our approach, we implement facilitative questioning as a tool to promote
reflection and critical thinking.
### 2.5. Framing & Structuring Feedback
Irrespective of the feedback’s sentiment and reflective nature, the way a
system elicits and structures feedback from non-expert crowd workers can
change the feedback’s focus and quality. For example, Hicks et al. (Hicks et
al., 2016) investigate three different ways of framing feedback. They found
that asking for numerical ratings of the design leads to more explanatory
feedback of lower quality.
Sadler describes effective feedback to be specific (following a predefined
concept), goal-oriented (comparing the work’s current to a reference state),
and actionable (promoting actions that close the performance gap) (Sadler,
1989). As elaborated by Connor and Irizarry, these three elements are equally
necessary for design critiques (Connor and Irizarry, 2015). They additionally
argued that the critique’s goal should be an analysis of the performance gap
to drive effective design iterations. In the context of crowdsourcing, several
studies (Luther et al., 2015; Greenberg et al., 2015; Xu et al., 2014; Robb et
al., 2015; Yuan et al., 2016; Ngoon et al., 2018; Kang et al., 2018) have
evaluated the effects of structuring and scaffolding feedback and found that
an appropriate structure elicits more diverse and higher quality feedback. For
example, Voyant (Xu et al., 2014) prompts non-expert feedback providers to
provide smaller feedback on various specific aspects of a design. In CrowdCrit
(Luther et al., 2015), Luther et al. built upon these findings and further
structured the feedback task into problem identification and explanation.
In our method, we utilize these findings by asking the feedback providers to
focus on three different aspects of the design.
## 3\. Approach and Hypotheses
Previous research indicates a design tension (Section 2). Positive feedback is
more acceptable to the recipient, but it is less likely to lead to substantial
revisions compared to negative feedback. On the other hand, negative feedback
can lead to substantial design improvements, but it is a source of
discouragement and it is likely to be dismissed. This is particularly
challenging in the context of crowdsourced design feedback systems, an
otherwise promising source of feedback. How can we enhance crowdsourced design
feedback to be acceptable and substantive to promote useful ideas for design
revisions? And how can we elicit such feedback robustly from non-expert crowd
workers?
Our approach is to structure feedback such that a potentially negative or
positive statement is preceded by an open-ended question related to the same
concern. For instance, in the context of designing an event flyer, “This image
is not relevant to the event” might be preceded by “What made you choose this
image?”, or “How is this image related to the event?”. To ensure that the
question and statement relate to the same concern, the feedback provider is
asked to first provide statement-based feedback and subsequently rephrase the
statement into an open-ended and thought-provoking question. We consider a
question to be open-ended when it requires an elaborating answer beyond “yes”,
“no”, or simple facts. The goal of such a question is to promote critical
thinking and reflection about a specific aspect of the critiqued work without
carrying overly positive or negative sentiment.
In this context, our main hypothesis is the following:
H-Main: Feedback in the form of an open-ended question followed by a statement
improves the overall quality of design revisions compared to statement-based
or question-based feedback alone. Our reasoning is twofold. We hypothesize
that the preceding question increases the acceptance of negative feedback and
that asking a question will act as a lightweight reflective task, which can
promote better design revision, as shown by Yen et al. (Yen et al., 2017).
However, we expect feedback consisting of questions alone to lead to less
effective design revisions as it can irritate the feedback receiver (Berghmans
et al., 2012).
To answer our main hypothesis, we pose the following supporting hypotheses on
the effects of question-based feedback:
H-Support 1: Non-expert crowd workers can ask open-ended and thought-provoking
questions. Given prior work on the effectiveness of structuring feedback
acquisition (Section 2.5), in particular the work by Greenberg et al.
(Greenberg et al., 2015), we hypothesize that providing a clear structure on
how to provide feedback in combination with relevant example questions will
teach the workers how to pose open-ended and thought-provoking questions, just
like Alison King did with her students (King, 1992, 1990).
H-Support 2: Feedback in the form of an open-ended question has more neutral
sentiment than feedback addressing the same concern, but framed as a
statement. Assuming that crowd workers are able to pose such questions, we
hypothesize that open-ended questions carry more neutral sentiment than
statements given the nature of open-ended questions.
H-Support 3: Preceding question-based feedback leads to more balanced
acceptance of subsequent statement-based feedback compared to statement-based
feedback alone. Assuming open-ended questions contain more neutral sentiment
than statements and taking into account the improvement in perception of
negative feedback when preceded by positive feedback (Wu and Bailey, 2017), we
hypothesize that presenting the question-based feedback first will cause the
recipients to focus on the design rather than themselves and perceive
subsequent statement-based feedback more neutrally compared to statement-based
feedback alone.
## 4\. Study 1: Eliciting Open-Ended Feedback Questions From Crowd Workers
In support of H-Main, we investigated if open-ended question-based feedback
can be crowdsourced from non-experts (H-Support 1) and if such question-based
feedback contains more neutral sentiment than statement-based feedback
(H-Support 2). To this end, we asked online crowd workers to provide feedback
for graphic designs in the form of statements and questions.
### 4.1. Experimental Design
In our approach (Section 3), we ask each feedback provider to rephrase their
feedback statement into a question to ensure that the feedback addresses the
same aspect of the design. However, the act of rephrasing might be a
confounding factor that influences the sentiment and open-endedness. To
control for this potential confounding factor, we conducted a within-subjects
experiment with two factors: _framing_ and _rephrasing_. Framing has two
levels, which refer to posing feedback as either declarative statements or
open-ended questions. Rephrasing describes the strategy of eliciting
statement-question pairs and has the following two levels: rephrasing
statements into questions (S→Q) or vice versa (Q→S).
### 4.2. Task
We presented each participant with four diverse designs of a flyer advertising
a local event. We asked each participant to provide three written feedback
items (addressing the theme of the design, the layout of the design, and a
specific visual element in the flyer). For the first two flyers, the
participants had to write a statement first and then rephrase it into a
question (S→Q). For the other two flyers, the participants had to first write
the question and then rephrase it into a statement (Q→S). Following
Greenberg et al. (Greenberg et al., 2015), we provided three diverse examples
to promote creativity (Siangliulue et al., 2015b; Siangliulue et al., 2015a)
and encourage feedback that addresses a variety of aspects. Each example
consisted of a statement and question.
### 4.3. Participants
We recruited 24 participants (16 male and 8 female) on Amazon Mechanical Turk
(AMT) who were located in the US and spoke English natively. Only participants
with an acceptance rate above 97% and more than 500 approved HITs were
accepted. The majority of participants (16) were aged 30–40. Three were aged
20–30, another three were aged 40–50, and two were aged 50–60. On average,
the participants reported to be
somewhat familiar with graphic design principles (M=3.17) and not very
proficient in generating graphic designs (M=2.58). The results were reported
on a 5-point Likert scale from “very unfamiliar” to “very familiar” and “very
unproficient” to “very proficient” respectively. Participants were paid 5 USD
for completing the task.
### 4.4. Procedure
We divided the participants into two groups, where the first group started
with rephrasing statements into questions (S→Q) two times and then switched to
Q→S. The second group started with Q→S and switched to S→Q after the first two
flyers. Supplementary Figures S2–S4 show how the task was implemented. To
avoid mistakes when the participants switched from S→Q to Q→S and vice versa,
we added a dedicated step to inform about the upcoming switch in the
rephrasing strategy. In total, each participant provided 12 feedback items:
three feedback items for each of the four flyer designs. The order of the
flyers was randomized.
### 4.5. Measurements
#### Open-endedness.
We measured the rate of successfully-rephrased statements into open-ended and
thought-provoking questions through coding. The first two authors of this
paper coded all statements as being either successfully rephrased into open-
ended and thought-provoking questions or not. We considered a question to be
open-ended and thought-provoking if it required more than a yes/no answer or a
statement of simple facts. Specifically, we used Alison King’s (King, 1990,
1992, 1995) question stems (e.g., “How did you choose…”, “What is the purpose
of…”, or “Why did you decide on…”) as guidance and we assessed if the question
targeted the rationale behind a design choice.
Prior to the analysis, feedback that did not target the actual design was
removed. Such peripheral feedback questions typically focus on predefined
requirements (e.g., “What made you name it Harvard Open Boathouse if it’s
technically not ”open” to anyone except for Harvard students?”) or facts about
the photographic material (e.g., “Is this one of the actual boats that are
currently being used by the crew?”).
The authors initially coded all questions individually using separate Google
Sheets with questions in randomized order. They achieved high agreement of
Krippendorff’s $\alpha=.81$ (calculated in Python using Grill’s
krippendorff_alpha method (Grill, 2017)). Subsequently, they collaboratively
resolved conflicts to reach complete agreement. Most conflicts were due to two
types of questions: questions that ask for a reason (e.g., “Is there some
reason why you did not decide to go with a more blue color to kind of go along
with boating?”) and questions that ask for an alternative (e.g., “Does the
text at the bottom contrast enough against the water? Is there another color
that might work better?”).
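As a sketch of the agreement computation, here using the `krippendorff` package from PyPI instead of the gist-based implementation cited above (the ratings shown are dummy values):

```python
import numpy as np
import krippendorff  # pip install krippendorff

# One row per coder, one column per question:
# 1 = open-ended and thought-provoking, 0 = not, np.nan = not coded.
ratings = np.array([
    [1, 0, 1, 1, np.nan, 1],
    [1, 0, 1, 0, 1, 1],
])
alpha = krippendorff.alpha(reliability_data=ratings,
                           level_of_measurement="nominal")
print(f"Krippendorff's alpha = {alpha:.2f}")
```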
#### Sentiment.
We analyzed the sentiment of every feedback statement and question using VADER
(Hutto and Gilbert, 2014)—an automated sentiment analysis tool. VADER provides
a polarity score ranging from $-1$ to $1$, where $-1$ refers to negative
sentiment and $1$ to positive sentiment. We consider scores between
$-0.05$ and $0.05$ as neutral sentiment.
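A minimal sketch of this scoring with the vaderSentiment package, applying the same neutral band; the `polarity` helper and the example sentences are illustrative:

```python
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

def polarity(text, neutral_band=0.05):
    """Return VADER's compound polarity score and a coarse label."""
    score = analyzer.polarity_scores(text)["compound"]
    if abs(score) < neutral_band:
        return score, "neutral"
    return score, "positive" if score > 0 else "negative"

# Statement vs. rephrased question (cf. Figure 1):
print(polarity("This image is not relevant to the event."))
print(polarity("How is this image related to the event?"))
```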
### 4.6. Results
Ten out of 288 feedback questions ($3.5\%$) were removed from the analysis as
they did not pertain to the graphical design choices. Of the remaining 278
questions, 236 ($84.9\%$) were found to be open-ended and thought-provoking.
The distribution of sentiment polarity scores for the statement- and question-
based feedback items is shown in Figure 2. As confirmed by a Shapiro-Wilk
test of normality, the polarity scores are not normally distributed (W=.92,
p<.0001). Therefore, we conducted a Wilcoxon signed-rank test to compare the
absolute polarity of statement-based and question-based feedback. We found
that statement-based feedback had significantly higher absolute polarity
(M=.33, SD=.27) than question-based feedback (M=.18, SD=.23; W=5703.5,
p<.0001).
Figure 2. Feedback Polarity: Distribution of polarity scores (x-axes) across
all feedback items (left), items related to negative statements (middle), and
items related to positive statements (right). Questions have more neutral
sentiment on average than the corresponding statements.
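For readers who want to replicate this style of analysis, a short scipy-based sketch is given below; the polarity arrays are synthetic placeholders standing in for the 278 matched statement-question pairs.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# placeholder polarity scores for 278 matched statement-question pairs
statement_polarity = rng.uniform(-1, 1, 278)
question_polarity = 0.5 * statement_polarity + rng.normal(0, 0.1, 278)

# Shapiro-Wilk normality check on the pooled scores
w_norm, p_norm = stats.shapiro(np.concatenate([statement_polarity,
                                               question_polarity]))

# paired, nonparametric comparison of the absolute polarity
w_stat, p_val = stats.wilcoxon(np.abs(statement_polarity),
                               np.abs(question_polarity))
print(f"Shapiro-Wilk W={w_norm:.2f} (p={p_norm:.4g}); "
      f"Wilcoxon W={w_stat:.1f} (p={p_val:.4g})")
```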
To better understand how the question and statement sentiment differed, we separately analyzed the polarity scores of statement-question pairs associated with statements with a polarity smaller than $-.05$ (i.e., negative statements), larger than $.05$ (i.e., positive statements), and polarity in $[-.05, .05]$ (i.e., neutral statements). For negative statements (n=87), we found that statement-based feedback had significantly more negative polarity scores (M=-.34, SD=.20) than the related question-based feedback (M=.07, SD=.28; W=112, p<.0001). Similarly, for positive statements (n=128), statement-based feedback had significantly higher polarity scores (M=.50, SD=.21) than the related question-based feedback (M=.17, SD=.27; W=643.0, p<.0001). For neutral statements (n=63), we did not find any significant difference in the scores for statement-based (M=.00, SD=.01) and question-based feedback (M=.04, SD=.21; W=89.5, p=.14).
To determine the influence of rephrasing (S→Q and Q→S), which might be a
potential confounding factor (Section 4.1), we analyzed its impact on the
questions’ open-endedness and sentiment. Knowing the influence of rephrasing
can also inform future practical uses of our method. A Cochran’s Q test showed
that there was no significant association between rephrasing and open-
endedness of the questions (Q=8.92, p=.63).
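A minimal sketch of such a test with statsmodels is shown below; the binary matrix is hypothetical, with rows standing for participants and columns for their repeated feedback items (1 = successfully rephrased into an open-ended question).

```python
import numpy as np
from statsmodels.stats.contingency_tables import cochrans_q

# hypothetical binary outcomes: 24 participants x 12 feedback items
rng = np.random.default_rng(1)
open_ended = rng.integers(0, 2, size=(24, 12))

res = cochrans_q(open_ended)
print(f"Q={res.statistic:.2f}, p={res.pvalue:.3f}")
```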
Regarding the impact of rephrasing on the sentiment polarity scores, we were additionally interested in testing for potential interaction effects between rephrasing and framing. To enable a nonparametric factorial analysis, we first applied the Aligned Rank Transform (Wobbrock et al., 2011) on the polarity scores. Using the aligned polarity scores, we conducted a repeated-measures analysis of variance (ANOVA) with framing and rephrasing as the two within-subjects factors. As expected, we observed a significant effect of framing on absolute polarity (F(1,552)=65.51, p<.0001) and no significant effect of rephrasing on the absolute polarity (F(1,552)=1.23, p=.27). We also did not find any significant interaction between framing and rephrasing (F(1,552)=1.46, p=.23).
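The Aligned Rank Transform is usually run with Wobbrock et al.’s ARTool; as a rough Python sketch of the core idea, the function below aligns the response for each effect of a two-factor design and then ranks it, using hypothetical column names. A full analysis would additionally feed each ranked column into a repeated-measures ANOVA with the subject as a factor, which is omitted here.

```python
import pandas as pd
from scipy.stats import rankdata

def align_and_rank(df, dv, f1, f2):
    """Aligned Rank Transform (Wobbrock et al., 2011), two-factor sketch.

    Returns a copy of `df` with one aligned-and-ranked column per effect;
    each column is then analyzed with a full factorial ANOVA, interpreting
    only the matching effect.
    """
    mu = df[dv].mean()
    cell = df.groupby([f1, f2])[dv].transform("mean")
    m1 = df.groupby(f1)[dv].transform("mean")
    m2 = df.groupby(f2)[dv].transform("mean")
    resid = df[dv] - cell  # strip all estimated effects
    effects = {
        f"art_{f1}": m1 - mu,                    # main effect of f1
        f"art_{f2}": m2 - mu,                    # main effect of f2
        "art_interaction": cell - m1 - m2 + mu,  # f1 x f2 interaction
    }
    out = df.copy()
    for name, effect in effects.items():
        out[name] = rankdata(resid + effect)
    return out

# hypothetical long-format data: one row per feedback item
data = pd.DataFrame({
    "framing": ["statement", "question"] * 288,
    "rephrasing": ["S2Q", "S2Q", "Q2S", "Q2S"] * 144,
    "abs_polarity": abs(pd.Series(range(576)) % 7 - 3) / 3.0,
})
aligned = align_and_rank(data, "abs_polarity", "framing", "rephrasing")
print(aligned.filter(like="art_").head())
```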
We separately repeated the same analysis for question-statement pairs
associated with negative and positive statements. For negative statements, we
again find a significant effect for framing (F(1,170)=191.98, p¡.0001) and no
significant effect for rephrasing (F(1,170)=.53, p=.47). However, this time we
found a significant interaction between framing and rephrasing (F(1,170)=5.41,
p=.021). Investigating the simple main effects for Q→S and S→Q separately, we
find that questions (M=.09, SD=.3) had a more neutral polarity score (Q→S:
M=.09, SD=.3; S→Q: M=.05, SD=.27) than statements (Q→S: M=-.38, SD=.21; S→Q:
M=-.31, SD=.19) in both cases (Q→S: F(1,86)=110.87, p¡.0001; S→Q:
F(1,84)=81.1, p¡.0001). Similarly, for positive statements, we find a
significant effect for framing (F(1,252)=115.92, p¡.0001), no significant
effect for rephrasing (F(1,252)=.96, p=.33), and a significant interaction
between framing and rephrasing (F(1,252)=6.84, p=.01) We again investigated
the simple main effects for Q→S and S→Q separately and found that questions
had a more neutral polarity score (Q→S: M=.19, SD=.25; S→Q: M=.15, SD=.30)
than statements (Q→S: M=.47, SD=.21; S→Q: M=.52, SD=.22) in both cases (Q→S:
F(1,128)=48.95, p¡.0001; S→Q: F(1,124)=68.67, p¡.0001).
### 4.7. Summary and Discussion
The results of this study demonstrate that non-experts recruited online can
produce open-ended questions with a high degree of success ($84.9\%$), which
supports H-Support 1. Our results also demonstrate that feedback phrased as
questions has weaker polarity than equivalent feedback presented as
declarative statements according to automated sentiment analysis. That is,
questions related to negative feedback express more neutral sentiment than
their corresponding statements, and questions related to positive feedback
also express more neutral sentiment than statements expressing equivalent praise. These findings support H-Support 2. Finally, our results suggest
that the order in which feedback is rephrased does not have a strong effect on
the feedback’s sentiment. While we see an interaction between framing and
rephrasing, the simple main effects indicate that questions carry significantly less sentiment than statements in both rephrasing orders.
One concern is the influence of the payment on the feedback. Prior research
suggests that the principal effect of payment is the increased quantity of
work: Unpaid crowds provide less feedback than paid workers (Xu and Bailey,
2012; Xu et al., 2014). A factor that may be of greater relevance is
anonymity, which can improve the feedback quality by avoiding peer pressure
(Marlow and Dabbish, 2014). Thus, we assume that our results on the quality
and sentiment of feedback will generalize to unpaid settings as long as the
feedback is anonymous. However, more studies are necessary to verify this
assumption.
## 5\. Study 2: The Effects of Combining Statement- With Question-Based
Feedback
In the second user study, we examined our main hypothesis H-Main and the
supporting hypothesis H-Support 3 in the context of a graphic design task. The
study consisted of two sessions. In the first session, participants designed
an event flyer, for which we subsequently crowdsourced feedback. Based on this
feedback, participants revised their initial design in the second session.
Finally, an independent jury of design experts rated the improvements of the
revised designs.
### 5.1. Experimental Design
We conducted a between-subjects experiment in which we compared the following
three conditions: statement-based feedback only (S), question-based feedback
only (Q), and question-based feedback followed by statement-based feedback
(Q+S). While our main hypothesis (H-Main) is that the revision quality in Q+S
will be higher than in S, we included Q to be able to determine whether the
hypothesized improvement is due to the combination or framing of feedback. The
participants were equally and randomly distributed across the three
conditions.
[Four screenshots showing the user interface of the feedback presentation and the thought-provokingness rating on a 5-point Likert scale.]
Figure 3. Feedback Presentation: In the combined condition (Q+S), the
statement was only shown after the thought-provokingness was rated.
### 5.2. Task
The participants were asked to design a flyer for a local sports event. The
event, called “Harvard Open Boathouse” was a fictional open house day of a
university-affiliated rowing club that invites university members to learn
about the sport, facilities, and meet senior club members. We chose this
fictional event to focus on a specific event type that is popular in the local
area.
In the first session, participants designed their initial flyer, which they subsequently revised in the second session. Before revising their flyer design, the
participants were presented with crowdsourced feedback (Figure 3), which we
asked them to address in their revision. See Supplementary Figure S14 for a
full example. During the feedback presentation, participants had to rate how
much each statement or question made them think about their design in new
ways. Since our goal was to capture the immediately-perceived _thought-
provokingness_ of each feedback item, the form fields disappeared after the
corresponding feedback was rated. In the Q+S condition, the participants saw
only the question-based feedback until they rated the thought-provokingness,
but a text label indicated that more information (i.e., the feedback
statement) would appear after rating. In all conditions, participants were not
allowed to proceed and upload their revised design until all feedback items
had been rated. Inspired by Yen et al. (Yen et al., 2017), we wanted the
participants to think about the question-based feedback explicitly to
encourage reflection. Furthermore, in Q+S, we wanted to contrast the reported
thought-provokingness against the final feedback ratings (Section 5.6) to
assess whether preceding questions increase the perceived usefulness of the
feedback.
### 5.3. Participants
#### Designers
We recruited 36 participants (8 male and 28 female) located around Harvard
University (Cambridge, MA) using flyers and mailing lists. The majority of
participants (21) were aged 18–25, while the rest (15) were aged 26–35. We targeted participants who were relatively inexperienced in
graphic design, as prior research (Berghmans et al., 2012; Dow et al., 2011)
has shown that experienced designers have often built high confidence in their
skill sets and rely primarily on their experience rather than feedback. In a
pre-study questionnaire, most participants (25 out of 36) reported that they
had never created a graphic design in a professional capacity. Upon completing both sessions, participants received a 35-USD gift card.
#### Feedback Providers
We recruited 187 participants on AMT to provide feedback on the flyer designs.
As in the first study (Section 4), we only accepted US-based workers with an
acceptance rate above 97% and more than 500 approved HITs. To prevent any
potential learning effects and ensure an equal distribution of independent
feedback providers per design, we used Unique Turker (Ott, 2020), which
stopped feedback providers from completing the user study multiple times. For
statement- (S) and question-only (Q) feedback, we paid 0.85 USD per task. For
the combined feedback (Q+S), we paid 1.25 USD per task.
#### Judges
To evaluate and rate the improvement of the flyer designs, we recruited a jury
of eight design experts (three male and five female). We considered someone to be a design expert if they held an academic degree in a field related to graphic design, had at least two years of work experience as a professional designer, or had taught at least one course related to graphic design. Three experts held a doctoral degree, while the others held a master’s degree in architecture, UI/UX/HCI, or fine arts. Five judges were professors, two were
graduate research assistants with teaching experience, and one was a
professional designer. Each expert received a 50-USD gift card as
compensation.
[Flow chart of the user study procedure.]
Figure 4. User Study Procedure: In the first session, the participants
completed a pre-study questionnaire (1) and created an initial flyer design
(2). Afterward, we crowdsourced feedback from AMT. (See Figure 1 for an
example.) In the second session, the participants read the feedback (3),
revised their design (4), rated the feedback (5), and completed a post-study
questionnaire (4). Finally, a jury of design experts rated the improvement of
the flyer designs (Figure 7).
### 5.4. Main Study Procedure
We conducted the study online to allow participants to work on their designs
anywhere and anytime. Our web application guided the participants through each
step of the user study. See Supplementary Figures S5–S19 for a complete
walkthrough. We split the experiment into two sessions to allow for enough
time to collect feedback. Figure 4 shows an overview of the procedure.
The first session comprised the consent process, pre-study questionnaire,
design brief, and the first design iteration. The participants were free to
use their software of choice for designing the flyer. For participants who did
not have access to any graphics software, we recommended Google Drawings
(Google, 2020) and Gravit Designer (Corel, 2020). After each participant
completed the first session, we acquired, filtered, and randomly selected
crowdsourced feedback. In the second session, the participants were presented
with the feedback, revised their initial design, rated the received feedback,
and completed the post-study questionnaire. Each session took 45–60 minutes.
We started measuring the time before presenting the instructions for designing
and revising the flyer and showed a timer for convenience.
Finally, an independent jury of design experts rated the improvement of the
design revisions and selected the three best designs. We randomized the order
of the flyers for each jury member to avoid interaction effects between the
flyer’s position and rating. The participant with the highest average quality
rating received an additional 100-USD gift card. We included the competition
to increase the participants’ motivation throughout the two sessions.
### 5.5. Acquisition and Selection of Crowdsourced Feedback
For each flyer design, we collected 15 feedback items from five unique crowd-
workers (i.e., three feedback items per worker) using the S→Q feedback
acquisition procedure from Section 4. Anticipating how the S and Q conditions
might be implemented in practice, we asked the feedback providers to only give
statement-based or question-based feedback, respectively. Hence, the
rephrasing step was omitted in S and Q.
After collecting the feedback (Figure 5), the first two authors of this paper
inspected each set of three feedback items to ensure a minimum level of
quality. In 7 out of 180 cases, the crowd worker provided incomprehensible or
nonsensical answers (e.g., “Element is Fine text”). We rejected these
submissions and obtained new feedback. From the pool of 540 feedback items, we
removed four peripheral feedback items that did not target the design itself,
e.g., “Why is the open boathouse restricted to only people with a university
Harvard affiliations?”.
After filtering out invalid feedback, the first two authors of this paper
grouped the feedback items that targeted the very same aspect of the flyer
design and arrived at the same conclusion. For instance, as shown in Figure 5
(bottom), the three statements target the same visual element, but only the
conclusion of the first and second are the same. Therefore, we grouped the
first two but not the third feedback item. For each group, we randomly
selected only one item. We used these groupings to avoid presenting the same
critique multiple times. While the number of identical feedback items can
provide an estimate for the critique’s severity, we opted for diverse feedback
instead. Finally, we randomly selected five feedback items per design from the
selection of unique feedback items, which were then shown to the participant
during the second session. Given the time constraints for the revision task,
we chose to limit the number of feedback items so that the participants did
not have to spend much time on organizing the feedback.
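A compact sketch of this grouping-and-sampling logic might look as follows; the dictionary keys ('aspect', 'conclusion') are hypothetical field names for the manually coded attributes.

```python
import random

def select_feedback(items, k=5, seed=0):
    """Deduplicate feedback for one design and sample k items to present.

    `items` is a list of dicts with hypothetical keys: 'text' plus the
    manually coded 'aspect' (e.g., 'font size') and 'conclusion'
    (e.g., 'too small').
    """
    rng = random.Random(seed)
    groups = {}
    for item in items:
        groups.setdefault((item["aspect"], item["conclusion"]), []).append(item)
    # keep one randomly chosen item per (aspect, conclusion) group
    unique = [rng.choice(group) for group in groups.values()]
    return rng.sample(unique, min(k, len(unique)))
```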
### 5.6. Measurements
We used the results of three survey questions related to the feedback’s
thought-provokingness, usefulness, and tone as measures for the acceptance of
feedback (H-Support 3). See Supplementary Figure S17 for an example.
#### Thought-provokingness.
In the second session, after having read each feedback item, but before
submitting the revised design, we asked the participants: “Does this
[statement/question] make you think about your design in a new way?”. The
participants provided their answers on a 5-point Likert scale ranging from
“no, not at all” (1) to “yes, very much” (5).
#### Usefulness.
After the participants submitted their revised designs, we showed them the
feedback again with the original and revised flyer design. This time, the
participants had to rate each feedback item’s usefulness with regard to the
design revision by answering “Was this feedback useful for revising your
design?” using a 5-point Likert scale ranging from “no, not at all” to “yes,
very much”. Our goal was to find out which feedback was perceived useful for
revising the design as an indicator of the feedback acceptance.
#### Tone.
We also asked the participants to rate the tone of the feedback on a 5-point
Likert scale from “very negative” to “very positive” to get a subjective
rating of the feedback’s sentiment polarity. To distinguish the tone from the feeling, we additionally asked the participants how the
feedback made them feel.
#### Improvement.
To assess the impact of the feedback on the design revision (H-Main), we asked
the jury members to rate the improvement of each flyer design on a diverging
7-point Likert scale ranging from “worsened significantly” (1) to “significant
improvement” (7).
[Flow chart of our feedback selection procedure.]
Figure 5. Feedback Selection: First, we rejected nonsensical submissions and
removed peripheral feedback items. Next, for each flyer design, we grouped the
feedback by the aspect (e.g., font size) and conclusion (e.g., too small) and
randomly selected one feedback item per group. From the remaining feedback
items we randomly sampled five feedback items that were presented to the
participant.
### 5.7. Results
The 36 participants created a total of 72 flyer designs (two designs per
participant). Figure 7 shows a diverse sample of eight flyer designs created
by the participants. The distributions of key measures per condition (S, Q,
and Q+S) are shown in Figure 6.
To assess the overall effect of the feedback conditions on the quality of the
design revisions, we analyzed the experts’ improvement ratings of the revised
flyers. The distribution is shown in Figure 6 (right side). A Kruskal-Wallis
rank sum test with condition (S, Q, and Q+S) as the independent variable and
improvement as the dependent variable shows a significant effect of the
conditions (H=7.34, df=2, p=.0255). A pairwise post-hoc Dunn test with
Benjamini-Hochberg correction was significant for Q+S versus S (p=.0341) and
Q+S versus Q (p=.0479). However, S does not significantly differ from Q
(p=.89). The results show that the mean improvement for Q+S (M=4.77, SD=1.36)
was significantly greater than the mean improvement for S (M=4.41, SD=1.14,
d=.29) and Q (M=4.32, SD=1.26, d=.34). The effect sizes for these analyses
(d=.29 and d=.34) were found to exceed Cohen’s (Cohen, 1988) convention for a
small effect (d=.2).
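A sketch of this analysis pipeline in Python is shown below, using scipy for the omnibus test, the scikit-posthocs package for the Dunn test with Benjamini-Hochberg correction, and a small helper for Cohen’s d; the jury ratings are randomly generated placeholders (12 designs per condition rated by 8 judges).

```python
import numpy as np
import pandas as pd
from scipy import stats
import scikit_posthocs as sp

# hypothetical jury ratings on the diverging 7-point improvement scale:
# 12 designs per condition x 8 judges = 96 ratings per condition
rng = np.random.default_rng(2)
ratings = pd.DataFrame({
    "condition": np.repeat(["S", "Q", "Q+S"], 96),
    "improvement": rng.integers(1, 8, size=288),
})

groups = {c: g["improvement"].to_numpy() for c, g in ratings.groupby("condition")}
h, p = stats.kruskal(*groups.values())
print(f"Kruskal-Wallis H={h:.2f}, p={p:.4f}")

# pairwise post-hoc Dunn test with Benjamini-Hochberg correction
print(sp.posthoc_dunn(ratings, val_col="improvement",
                      group_col="condition", p_adjust="fdr_bh"))

def cohens_d(a, b):
    """Cohen's d with a pooled standard deviation."""
    pooled = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2.0)
    return (a.mean() - b.mean()) / pooled

print(f"d(Q+S vs S) = {cohens_d(groups['Q+S'], groups['S']):.2f}")
```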
A Kruskal-Wallis rank sum test with the condition (S, Q, and Q+S) as the
independent variable and thought-provokingness as the dependent variable shows
a significant effect of the condition (H=10.17, df=2, p=.0061). A pairwise
post-hoc Dunn test with Benjamini-Hochberg correction was significant for S
versus Q (p=.0079) and Q+S versus Q (p=.0232). The results show that the mean
thought-provokingness of S (M=3.80, SD=1.22) and Q+S (M=3.57, SD=1.25) were
significantly higher than Q (M=3.03, SD=1.33). However, Q+S did not
significantly differ from S (p=.56). Apart from that, we found no significant
effect of condition on either usefulness (H=3.62, df=2, p=.16) or tone
(H=1.75, df=2, p=.42).
To determine whether the feedback differed by some other measure, we conducted a Wilcoxon signed-rank test to compare the statement length between S and Q+S and the question length between Q and Q+S. We found that statements in S (M=120.3, SD=52.4) are significantly longer than in Q+S (M=87.8, SD=45.0; W=383.0, p<.0001). In contrast, the question length in Q (M=90.2, SD=47.5) did not differ significantly from Q+S (M=90.9, SD=47.0; W=835.5, p=.88). We also compared the feedback’s absolute polarity using a Wilcoxon signed-rank test but did not find any significant differences in the statements between S (M=.36, SD=.29) and Q+S (M=.33, SD=.35; W=781.5, p=.56) or in the questions between Q (M=.15, SD=.21) and Q+S (M=.2, SD=.27; W=341.5, p=.17).
To verify if the redesigns were based primarily on the feedback obtained
through this study, we asked participants after the study: “Did you collect
feedback or ideas for the revision elsewhere?” (1 = “no, not at all” to 5 =
“yes, very much”). On average, the participants reported that they did not
collect ideas elsewhere (M=1.39, SD=.99), and there was no significant
difference between the conditions with respect to this question.
## 6\. Overall Discussion
#### Enhancing Feedback With Open-Ended Questions.
In terms of the overall effect of S (statements only), Q (questions only), and
Q+S (question-based feedback followed by statement-based feedback) on the
quality of design revisions, we found that Q+S led to significantly better
revisions than either S or Q, which provides evidence in support of our main
hypothesis (Table 1). Even though the statement-based feedback we collected
lacked strong sentiment on average, the effect sizes of Q+S compared to S
(d=.29) and Q+S compared to Q (d=.34) show a clear impact on the overall
effectiveness of design feedback. Such impact was not evident in previous work
on enhancing crowdsourced design feedback (Greenberg et al., 2015; Luther et
al., 2015; Ngoon et al., 2018), which instead focused on improved feedback
perception. The improvement in design iteration that we saw might in part be
due to the reflective nature of question-based feedback. In this regard, our
work extends the findings from Yen et al. (Yen et al., 2017), who demonstrated
that a reflective activity alone can be as effective as feedback for design
iterations. Yet, their results did not show a benefit of combining the
reflective activity with traditional feedback, which was the case for Q+S in
our study. Overall, we assume that the impact of Q+S will be even greater in
contexts where the crowdsourced feedback contains stronger sentiment, such as
in social networks or web forums (Yen et al., 2016).
[Four violin plots of the thought-provokingness, usefulness, tone, and improvement distributions per condition: usefulness and tone are similar across conditions, thought-provokingness is higher for S and Q+S than for Q, and improvement is visibly larger for Q+S.]
Figure 6. Feedback Ratings and Design Improvements: Distribution of the
feedback ratings from the participants and improvement ratings of the jury.
Note, the improvement score is provided on a diverging 7-point Likert scale
where 1 refers to “worsened significantly” and 7 refers to “significant
improvement”.
[Eight flyer design pairs (initial and revised) from the second user study, with decreasing quality (top row) and improvement (bottom row) scores.]
Figure 7. Flyer Designs: Eight flyer designs from study 2. The top row shows
flyers with decreasing average quality scores of the revised design. The
bottom row shows flyers with decreasing average improvement scores. Each pair
of images shows the original design on the left and the revised design on the
right. The first flyer (1) won the best design award.
Furthermore, as expected, we found that feedback in the form of questions only (Q) led to the least-improved design revisions. These results, although the difference between S and Q was not significant, are in line with previous work (Berghmans et al., 2012) and suggest that question-only feedback should not replace statement-based feedback for novices.
In support of our approach, through manually coding questions as either open-
ended and thought-provoking or not, we show that it is indeed possible to
enable online crowd workers to rephrase their statements into open-ended and
thought-provoking questions. In total, 85% of all questions were successfully
rephrased, which we believe is a strong indicator that our AMT task design is
an effective approach to crowdsource question-based feedback. Therefore,
H-Support 1 is supported. To further improve the success rate, future work
could guide the elicitation of question-based feedback with natural language
processing towards open-endedness.
The results of the polarity analysis strongly indicate that questioning is an
effective technique to neutralize sentiment. In particular, the sentiment of
negative statements is resolved entirely, which is essential to avoid
negatively influencing the recipient’s affective state. Interestingly, the
sentiment of positive statements is also reduced, which suggests that
question-based feedback carries less sentiment overall. In conclusion, our
results suggest that H-Support 2 is supported. By presenting question-based
feedback prior to statement-based feedback, our method is an implementation of
Wu et al.’s approach for mitigating unwanted effects of negative sentiment (Wu
and Bailey, 2017).
Hypothesis | Description | Support
---|---|---
H-Main | Feedback presented as questions followed by statements improves design revisions compared to statement-based or question-based feedback alone. | Yes
H-Support 1 | Non-expert crowd workers can ask open-ended and thought-provoking feedback questions. | Yes
H-Support 2 | Question-based feedback has more neutral sentiment than statement-based feedback. | Yes
H-Support 3 | Feedback presented as questions followed by statements leads to more balanced acceptance of subsequent statement-based feedback. | No
Table 1. Key Findings: The results support our main hypothesis and two out of
three supporting hypotheses.
Regarding the effects of questions on the perception of statements with overly
positive or negative sentiment, we did not find any significant differences
between the conditions in the reported usefulness ratings. Therefore, we
cannot confirm H-Support 3. In comparison, related work (Greenberg et al.,
2015; Luther et al., 2015; Ngoon et al., 2018) found that structuring and
scaffolding can improve the feedback’s perceived usefulness. A potential
explanation for why we still saw improved effectiveness of the Q+S feedback
compared to S and Q could be that preposed question-based feedback primarily
changes the recipient’s focus from themselves to the design task. This change
might have mitigated the effects of negative feedback (Sargeant et al., 2008).
Contrary to our expectations, the only significantly different feedback rating
was thought-provokingness, which was the lowest in Q. In hindsight, asking
participants about the magnitude of how much a feedback item made them think
about their design might have been too unspecific. For instance, instructional
feedback could have prompted the participants to think a lot about how to
execute suggestions rather than to think about alternative designs. A more in-
depth analysis of the revised designs could uncover which feedback was indeed
addressed. It might also be necessary to study this question by limiting the
feedback to highly negative and positive statements to emphasize the potential
effect of questions on the perceived usefulness.
#### Generalizability.
Given the breadth of related work, we would expect to see similar effects of question-based feedback in other domains. In particular, question-based feedback should easily be applicable to different areas of creative work due to the similar processes of iteration. Regarding our method for crowdsourcing question-based feedback, there are no technical limitations to expanding this method to other types of work. However, the success of crowdsourcing question-based feedback depends on the accessibility of the work to non-expert crowd workers. While graphic design in general, and flyer-based advertisement in particular, should be accessible to most people, this might not be the case for other types of work.
Beyond crowdsourcing, questions could also be employed as a generic method to
enhance feedback. However, the usefulness of question-based feedback might be
limited by the ability of the feedback providers to ask effective questions.
More work needs to be done to better understand how the effectiveness of questions relates to that of statements when the feedback is obtained in other contexts, for instance, from domain experts.
#### Limitations.
On average, the design revision improvement across all conditions was in line
with previous work on the effectiveness of crowdsourced feedback (Luther et
al., 2015). However, by splitting the second study into two separate sessions,
we might have lowered the participants’ motivation and excitement, as they
were compensated only after completing both sessions. An effort-based
compensation approach might have helped to increase the participants’
motivation.
In this study we focused on the feedback’s effectiveness for design iteration.
In terms of the perceived feedback quality, we did not find any differences
except for the thought-provokingness. And while the statement lengths differed
between S and Q+S, it is unclear how to interpret the comparison given that
Q+S additionally included the questions. One option to generically quantify
the quality could be to ask designers to enumerate revision ideas prior to the
actual redesign, which we leave as an idea for future work.
More fundamentally, assuming that the statements and questions are of the same
quality, questions can reduce the sentiment of feedback statements and
potentially facilitate reflection, but they cannot make the feedback, as a
whole, more substantive.
## 7\. Conclusion and Future Work
In this study, we empirically compared the effectiveness of crowdsourced
design feedback on design revisions when presented as statements, questions,
and a combination of both. Our results show that the combination of question-
and statement-based feedback leads to better design revisions. We believe that
these findings are generalizable to other kinds of creative work beyond
graphic design. Also, we regard presenting feedback as open-ended questions to
be complementary to other approaches for improving crowdsourced feedback.
Therefore, it can be integrated into existing online feedback systems to
improve the overall effectiveness of crowdsourced feedback further.
Future studies may analyze how exactly questions influence the perception of
related statements by exclusively examining feedback that carries strongly
positive and negative sentiment, or explicitly letting the designer elaborate
on their revision to relate changes to specific feedback items. Moreover, it
would be interesting to evaluate what aspects determine the quality of
question-based feedback regarding reflection. We assume that, similar to
statements, the ability of questions to generate productive ideas for design
revisions depends on their specificity. However, more aspects likely come into
play. Also, given that designers with varying expertise make sense of and
provide feedback differently (Foong et al., 2017; Dannels and Martin, 2008),
it would be interesting to determine if question-based feedback is perceived
differently by non-professional and professional designers.
###### Acknowledgements.
We would like to express our gratitude to Humphrey Obuobi for his help with
the pilot study. Also, we thank all the participants who took part in our user
studies. This research was supported in part by a gift from Adobe Research.
The second author is partially funded by an Onassis Scholarship (Scholarship
ID: F ZO 002/1 – 2018/2019).
## References
* Baumeister et al. (2001) Roy F Baumeister, Ellen Bratslavsky, Catrin Finkenauer, and Kathleen D Vohs. 2001. Bad is stronger than good. _Review of general psychology_ 5, 4 (2001), 323–370.
* Baumer et al. (2014) Eric PS Baumer, Vera Khovanskaya, Mark Matthews, Lindsay Reynolds, Victoria Schwanda Sosik, and Geri Gay. 2014. Reviewing reflection: on the use of reflection in interactive system design. In _Proceedings of the 2014 conference on Designing interactive systems_ _(DIS ’14)_. ACM, New York, NY, USA, 93–102.
* Berghmans et al. (2012) Inneke Berghmans, Nathalie Druine, Filip Dochy, and Katrien Struyven. 2012. A facilitative versus directive approach in training clinical skills? Investigating students’ clinical performance and perceptions. _Perspectives on medical education_ 1, 3 (2012), 104–118.
* Brandt (2008) Caroline Brandt. 2008. Integrating feedback and reflection in teacher preparation. _ELT journal_ 62, 1 (2008), 37–46.
* Cairns et al. (2014) Paul Cairns, Pratyush Pandab, and Christopher Power. 2014\. The influence of emotion on number entry errors. In _Proceedings of the SIGCHI Conference on Human Factors in Computing Systems_ _(CHI ’14)_. ACM, New York, NY, USA, 2293–2296.
* Carnine et al. (1982) Douglas Carnine, Candy Stevens, Jean Clements, and Edward J Kameenui. 1982. Effects of Facultative Questions and Practice on Intermediate Students’ Understanding of Character Motives. _Journal of Reading Behavior_ 14, 2 (1982), 179–190.
* Chambers and Vickers (2006) Kristine L Chambers and Joan N Vickers. 2006. Effects of bandwidth feedback and questioning on the performance of competitive swimmers. _The Sport Psychologist_ 20, 2 (2006), 184–197.
* Ciardiello (1998) Angelo V Ciardiello. 1998. Did you ask a good question today? Alternative cognitive and metacognitive strategies. _Journal of Adolescent & Adult Literacy_ 42, 3 (1998), 210–219.
* Cohen (1988) Jacob Cohen. 1988. _Statistical Power Analysis for the Behavioral Sciences_ (3 ed.). Lawrence Erlbaum Associates, Hillsdale, NJ, USA.
* Connor and Irizarry (2015) Adam Connor and Aaron Irizarry. 2015. _Discussing Design: Improving Communication and Collaboration Through Critique_. O’Reilly, Sebastopol, CA, USA.
* Corel (2020) Corel. 2020. Gravit Designer. https://designer.io. Accessed: 2020-02-16.
* Crain and Bailey (2017) Patrick A Crain and Brian P Bailey. 2017. Share Once or Share Often?: Exploring How Designers Approach Iteration in a Large Online Community. In _Proceedings of the 2017 ACM SIGCHI Conference on Creativity and Cognition_ _(C&C ’17)_. ACM, New York, NY, USA, 80–92.
* Dannels et al. (2008) Deanna Dannels, Amy Housley Gaffney, and Kelly Norris Martin. 2008. Beyond Content, Deeper than Delivery: What Critique Feedback Reveals about Communication Expectations in Design Education. _International Journal for the Scholarship of teaching and Learning_ 2, 2 (2008), n2.
* Dannels and Martin (2008) Deanna P Dannels and Kelly Norris Martin. 2008. Critiquing critiques: A genre analysis of feedback across novice to expert design studios. _Journal of Business and Technical Communication_ 22, 2 (2008), 135–159.
* Dow et al. (2011) Steven Dow, Julie Fortuna, Dan Schwartz, Beth Altringer, Daniel Schwartz, and Scott Klemmer. 2011\. Prototyping Dynamics: Sharing Multiple Designs Improves Exploration, Group Rapport, and Results. In _Proceedings of the SIGCHI Conference on Human Factors in Computing Systems_ _(CHI ’11)_. ACM, New York, NY, USA, 2807–2816.
* Foong et al. (2017) Eureka Foong, Darren Gergle, and Elizabeth M Gerber. 2017\. Novice and Expert Sensemaking of Crowdsourced Design Feedback. _Proceedings of the ACM on Human-Computer Interaction_ 1, CSCW (2017), 1–18.
* Google (2020) Google. 2020. Google Drawings. https://docs.google.com/drawings. Accessed: 2020-02-16.
* Greenberg et al. (2015) Michael D Greenberg, Matthew W Easterday, and Elizabeth M Gerber. 2015. Critiki: A scaffolded approach to gathering design feedback from paid crowdworkers. In _Proceedings of the 2015 ACM SIGCHI Conference on Creativity and Cognition_ _(C&C ’15)_. ACM, New York, NY, USA, 235–244.
* Grill (2017) Thomas Grill. 2017. Python implementation of Krippendorff’s alpha. https://github.com/grrrr/krippendorff-alpha/. Accessed: 2020-02-16.
* Hattie and Timperley (2007) John Hattie and Helen Timperley. 2007. The power of feedback. _Review of Educational Research_ 77, 1 (2007), 81–112.
* Hicks et al. (2016) Catherine M Hicks, Vineet Pandey, C Ailie Fraser, and Scott Klemmer. 2016. Framing feedback: Choosing review environment features that support high quality peer assessment. In _Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems_ _(CHI ’16)_. ACM, New York, NY, USA, 458–469.
* Hutto and Gilbert (2014) Clayton J Hutto and Eric Gilbert. 2014. VADER: A Parsimonious Rule-Based Model for Sentiment Analysis of Social Media Text. In _Eighth International AAAI Conference on Weblogs and Social Media_ _(ICWSM ’14)_. AAAI, Menlo Park, CA, USA.
* Kang et al. (2018) Hyeonsu B Kang, Gabriel Amoako, Neil Sengupta, and Steven P Dow. 2018. Paragon: An Online Gallery for Enhancing Design Feedback with Visual Examples. In _Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems_ _(CHI ’18)_. ACM, New York, NY, USA, 1–13.
* King (1990) Alison King. 1990. Enhancing peer interaction and learning in the classroom through reciprocal questioning. _American Educational Research Journal_ 27, 4 (1990), 664–687.
* King (1992) Alison King. 1992. Facilitating elaborative learning through guided student-generated questioning. _Educational psychologist_ 27, 1 (1992), 111–126.
* King (1995) Alison King. 1995. Inquiring minds really do want to know: Using questioning to teach critical thinking. _Teaching of Psychology_ 22, 1 (1995), 13–17.
* Knoblauch and Brannon (1984) Cyril H Knoblauch and Lil Brannon. 1984. _Rhetorical Traditions and the Teaching of Writing_. Boynton/Cook Publishers, Upper Montclair, NJ, USA.
* Krause et al. (2017) Markus Krause, Tom Garncarz, JiaoJiao Song, Elizabeth M Gerber, Brian P Bailey, and Steven P Dow. 2017. Critique Style Guide: Improving Crowdsourced Design Feedback with a Natural Language Model. In _Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems_ _(CHI ’17)_. ACM, New York, NY, USA, 4627–4639.
* Luther et al. (2014) Kurt Luther, Amy Pavel, Wei Wu, Jari-lee Tolentino, Maneesh Agrawala, Björn Hartmann, and Steven P Dow. 2014. CrowdCrit: crowdsourcing and aggregating visual design critique. In _Proceedings of the companion publication of the 17th ACM conference on Computer supported cooperative work & social computing_ _(CSCW ’14)_. ACM, New York, NY, USA, 21–24.
* Luther et al. (2015) Kurt Luther, Jari-Lee Tolentino, Wei Wu, Amy Pavel, Brian P Bailey, Maneesh Agrawala, Björn Hartmann, and Steven P Dow. 2015. Structuring, aggregating, and evaluating crowdsourced design critique. In _Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing_ _(CSCW ’15)_. ACM, New York, NY, USA, 473–485.
* Marlow and Dabbish (2014) Jennifer Marlow and Laura Dabbish. 2014. From rookie to all-star: professional development in a graphic design social networking site. In _Proceedings of the 17th ACM conference on Computer supported cooperative work & social computing_ _(CSCW ’14)_. ACM, New York, NY, USA, 922–933.
* Ngoon et al. (2018) Tricia J Ngoon, C Ailie Fraser, Ariel S Weingarten, Mira Dontcheva, and Scott Klemmer. 2018. Interactive Guidance Techniques for Improving Creative Feedback. In _Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems_ _(CHI ’18)_. ACM, New York, NY, USA, 55.
* Nguyen et al. (2017) Thi Thao Duyen T Nguyen, Thomas Garncarz, Felicia Ng, Laura A Dabbish, and Steven P Dow. 2017. Fruitful Feedback: Positive affective language and source anonymity improve critique reception and work outcomes. In _Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing_ _(CSCW ’17)_. ACM, New York, NY, USA, 1024–1034.
* Ott (2020) Myle Ott. 2020. Unique Turker. https://uniqueturker.myleott.com. Accessed: 2020-02-16.
* Robb et al. (2015) David A Robb, Stefano Padilla, Britta Kalkreuter, and Mike J Chantler. 2015. Crowdsourced feedback with imagery rather than text: Would designers use it?. In _Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems_ _(CHI ’15)_. ACM, New York, NY, USA, 1355–1364.
* Sadler (1989) D Royce Sadler. 1989. Formative assessment and the design of instructional systems. _Instructional Science_ 18, 2 (1989), 119–144.
* Sargeant et al. (2008) Joan Sargeant, Karen Mann, Douglas Sinclair, Cees Van der Vleuten, and Job Metsemakers. 2008. Understanding the influence of emotions and reflection upon multi-source feedback acceptance and use. _Advances in Health Sciences Education_ 13, 3 (2008), 275–288.
* Sargeant et al. (2009) Joan M Sargeant, Karen V Mann, Cees P Van der Vleuten, and Job F Metsemakers. 2009. Reflection: a link between receiving and using assessment feedback. _Advances in Health Sciences Education_ 14, 3 (2009), 399–410.
* Schön (1984) Donald A Schön. 1984. _The Reflective Practitioner: How professionals think in action_. Vol. 5126. Basic Books, New York, NY, USA.
* Schön (1985) Donald A Schön. 1985. _The design studio: An exploration of its traditions and potentials_. RIBA Publications for RIBA Building Industry Trust, London, UK. 99 pages.
* Siangliulue et al. (2015a) Pao Siangliulue, Kenneth C Arnold, Krzysztof Z Gajos, and Steven P Dow. 2015a. Toward collaborative ideation at scale: Leveraging ideas from others to generate more creative and diverse ideas. In _Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing_ _(CSCW ’15)_. ACM, New York, NY, USA, 937–945.
* Siangliulue et al. (2015b) Pao Siangliulue, Joel Chan, Krzysztof Z Gajos, and Steven P Dow. 2015b. Providing timely examples improves the quantity and quality of generated ideas. In _Proceedings of the 2015 ACM SIGCHI Conference on Creativity and Cognition_ _(C&C ’15)_. ACM, New York, NY, USA, 83–92.
* Tofade et al. (2013) Toyin Tofade, Jamie Elsner, and Stuart T Haines. 2013. Best practice strategies for effective use of questions as a teaching tool. _American journal of pharmaceutical education_ 77, 7 (2013), 155.
* Tohidi et al. (2006) Maryam Tohidi, William Buxton, Ronald Baecker, and Abigail Sellen. 2006. Getting the right design and the design right. In _Proceedings of the SIGCHI conference on Human Factors in computing systems_ _(CHI ’06)_. ACM, New York, NY, USA, 1243–1252.
* Uluoğlu (2000) Belkis Uluoğlu. 2000. Design knowledge communicated in studio critiques. _Design Studies_ 21, 1 (2000), 33–58.
* Willett et al. (2012) Wesley Willett, Jeffrey Heer, and Maneesh Agrawala. 2012. Strategies for crowdsourcing social data analysis. In _Proceedings of the SIGCHI Conference on Human Factors in Computing Systems_ _(CHI ’12)_. ACM, New York, NY, USA, 227–236.
* Wobbrock et al. (2011) Jacob O Wobbrock, Leah Findlater, Darren Gergle, and James J Higgins. 2011. The aligned rank transform for nonparametric factorial analyses using only anova procedures. In _Proceedings of the SIGCHI conference on human factors in computing systems_ _(CHI ’11)_. ACM, New York, NY, USA, 143–146.
* Wu and Bailey (2017) Y Wayne Wu and Brian P Bailey. 2017. Bitter Sweet or Sweet Bitter? How Valence Order and Source Identity Influence Feedback Acceptance. In _Proceedings of the 2017 ACM SIGCHI Conference on Creativity and Cognition_ _(C&C ’17)_. ACM, New York, NY, USA, 137–147.
* Wu and Bailey (2018) Y Wayne Wu and Brian P Bailey. 2018. Soften the Pain, Increase the Gain: Enhancing Users’ Resilience to Negative Valence Feedback. _Proceedings of the ACM on Human-Computer Interaction_ 2, CSCW (2018), 1–20.
* Xu and Bailey (2012) Anbang Xu and Brian Bailey. 2012. What do you think? A case study of benefit, expectation, and interaction in a large online critique community. In _Proceedings of the ACM 2012 conference on Computer Supported Cooperative Work_ _(CSCW ’12)_. ACM, New York, NY, USA, 295–304.
* Xu et al. (2014) Anbang Xu, Shih-Wen Huang, and Brian Bailey. 2014. Voyant: generating structured feedback on visual designs using a crowd of non-experts. In _Proceedings of the 17th ACM conference on Computer supported cooperative work & social computing_ _(CSCW ’14)_. ACM, New York, NY, USA, 1433–1444.
* Xu et al. (2015) Anbang Xu, Huaming Rao, Steven P Dow, and Brian P Bailey. 2015. A classroom study of using crowd feedback in the iterative design process. In _Proceedings of the 18th ACM conference on computer supported cooperative work & social computing_ _(CSCW ’15)_. ACM, New York, NY, USA, 1637–1648.
* Yen et al. (2016) Yu-Chun Yen, Steven P Dow, Elizabeth Gerber, and Brian P Bailey. 2016. Social network, web forum, or task market? Comparing different crowd genres for design feedback exchange. In _Proceedings of the 2016 ACM Conference on Designing Interactive Systems_ _(DIS ’16)_. ACM, New York, NY, USA, 773–784.
* Yen et al. (2017) Yu-Chun Grace Yen, Steven P Dow, Elizabeth Gerber, and Brian P Bailey. 2017. Listen to Others, Listen to Yourself: Combining Feedback Review and Reflection to Improve Iterative Design. In _Proceedings of the 2017 ACM SIGCHI Conference on Creativity and Cognition_ _(C&C ’17)_. ACM, New York, NY, USA, 158–170.
* Yuan et al. (2016) Alvin Yuan, Kurt Luther, Markus Krause, Sophie Isabel Vennix, Steven P Dow, and Bjorn Hartmann. 2016. Almost an expert: The effects of rubrics and expertise on perceived value of crowdsourced design critiques. In _Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing_ _(CSCW ’16)_. ACM, New York, NY, USA, 1005–1017.
# Network Topology in Water Nanoconfined between Phospholipid Membranes

Fausto Martelli, IBM Research Europe, Hartree Centre, Daresbury, WA4 4AD, United Kingdom, <EMAIL_ADDRESS>

Jason Crain, IBM Research Europe, Hartree Centre, Daresbury, WA4 4AD, United Kingdom, and Department of Biochemistry, University of Oxford, South Parks Road, Oxford, OX1 3QU, United Kingdom

Giancarlo Franzese, Secció de Física Estadística i Interdisciplinària–Departament de Física de la Matèria Condensada, Universitat de Barcelona, & Institut de Nanociència i Nanotecnologia (IN2UB), Universitat de Barcelona, C. Martí i Franquès 1, 08028 Barcelona, Spain
###### Abstract
Water provides the driving force for the assembly and stability of many
cellular components. Despite its impact on biological functions, a nanoscale
understanding of the relationship between its structure and dynamics under
soft confinement has remained elusive. As expected, water in contact with
biological membranes recovers its bulk density and dynamics at $\sim 1$ nm
from phospholipid headgroups but surprisingly enhances its intermediate-range
order (IRO) over a distance at least twice as large. Here, we explore how
the IRO is related to the water’s hydrogen bond network (HBN) and its
coordination defects. We characterize the increased IRO by an alteration of
the HBN up to more than eight coordination shells of hydration water. The HBN
analysis emphasizes the existence of a bound-unbound water interface at $\sim
0.8$ nm from the membrane. The unbound water has a distribution of defects
intermediate between bound and bulk water, but with density and dynamics
similar to bulk, while bound water has reduced thermal energy and much more
HBN defects than low-temperature water. This observation could be fundamental
for developing nanoscale models of biological interactions and for
understanding how alteration of the water structure and topology, for example,
due to changes in extracellular ions concentration, could affect diseases and
signaling. More generally, it gives us a different perspective to study
nanoconfined water.
###### keywords:
water confined, phospholipid membrane, hydrogen bond, hydrogen bond network,
coordination defects, order parameter
It has been long recognized that the structure and function of biological
membranes are largely determined by the properties of hydration water, i.e.,
of the water in contact with the membrane 1. Indeed, the presence of water
strongly influences membrane stability, fluidity, and phase behavior, thereby
affecting membrane function and properties. Also, hydration water mediates the
interactions of biological membranes with other biomolecules and with ions 2,
3.
Biological membranes are composed of a large number of components, including
proteins, cholesterol, glycolipids, and ion channels, among others, but their
framework is provided by phospholipid molecules that self-assemble into
bilayers driven by the hydrophobic effect 4, 2, 3, 5, 6, 7, 8, 9, 10, 11, 12,
1, 13, 14, 15, 16. To decrease the interfacial free-energy, polar head groups
form contacts with water, while the apolar hydrocarbon tails minimize exposure
to water forming extended bilayers. Water is abundant in the interfacial
region of bilayers (lipid headgroups), establishing strong hydrogen bonds
(HBs) with the membrane. As a result of these strong interactions, the
orientational and translational dynamics of interfacial water are markedly slowed down 7, 8, 9, 14, 15, 16, 6. Such slowing down has also been observed in water in contact with proteins and sugars 17, 18, 19, 20, 21.
Recently we found an increase in the structural order at the intermediate
range when the dynamics of water confined by phospholipid membranes slows
down. This intermediate range order (IRO) propagates as far as (at least)
$\sim 2.4$ nm from the membrane surface 22, a larger distance than previously
calculated using other observables such as density and dynamical properties.
We recovered water’s bulk density and dynamical properties at a distance of
$\sim 1.2$ nm from the membrane surface 22, 23. Nonetheless, water is a
complex network-forming material with a directional HB network (HBN), the topology of which is correlated to anomalous behavior, as we showed recently 24. Therefore, understanding how the HBN of water is affected by the interactions with the membrane is of primary importance for understanding biological properties at the molecular scale, including conditions in which a biological membrane interacts with alien components such as viruses.
In this article, we investigate the properties of water confined by
phospholipid membranes. Specifically, we measure the extent to which
phospholipid membranes affect the structural properties of water as well as
its HBN. As a typical model membrane, we use 1,2-Dimyristoyl-sn-
glycero-3-phosphocholine (DMPC) lipids. The DMPC is a phospholipid with a
choline headgroup and a tailgroup formed of two myristoyl chains (see fig. 1).
Choline-based phospholipids are ubiquitous in cell membranes and commonly used
in drug-targeting liposomes 25.
Figure 1: Chemical structure of the DMPC phospholipid.
In this investigation, we probe the structural properties of water by
inspecting the IRO using a sensitive local metric, recently introduced by
Martelli et al. 26 and already applied in a wide variety of studies 26, 24,
22, 23, 27, 28. Next, we examine the topology and the quality of the HBN and
explore correlations of these observables with the behaviour of the structural
order. We probe the topology of the HBN via ring statistics, a tool widely
adopted in network-forming materials as a measure of closed loops present in
the network. Ring statistics have been previously employed in water to
characterize its different phases 26, 27, 29, 30, 31, as well as to
investigate the origin of water anomalies 24, 32, 33, 34, 35.
## Results
We inspect the IRO by computing the score function (Eq. 2), which provides a
nonlinear measure of the deviation in atomic or molecular arrangements from
those of a reference structure that, usually, corresponds to the medium’s
ground state at $T=0$ K. Therefore, we here compute $S$ using, as a reference,
the position of the oxygen in the second coordination shell in cubic ice,
i.e., a cuboctahedron ($\bar{C}$) that belongs to the class of Archimedean
solids enriched with edge transitivity 36 (inset in fig. 2). Similar results
hold if we use, as a reference, the position of the oxygen in the second
coordination shell in hexagonal ice.
At fixed $T=303$ K and $P=1$ atm, we compare the distributions of
$S_{\bar{C}}$ for bulk water and confined water at a distance of $1.6$ nm
$\leq z\leq 2.4$ nm (bin centered at $2.0$ nm) and $2.4$ nm $\leq z\leq 3.2$
nm (bin centered at $2.8$ nm) from the membrane (fig. 2 a). We emphasize that
the bin width of $0.8$ nm adopted in this work ensures that water molecules
centered in the middle of the bin have a second shell of neighbours falling
inside the same bin.
The distribution at $2.0$ nm does not match that of bulk water. In particular,
we find that water $2.0$ nm away from the membrane is more structured than
bulk water, with a $\sim 8\%$ increase of the $S_{\bar{C}}$ with maximum
probability, and a higher population in the large-$S_{\bar{C}}$ tail of the
probability distribution $P(S_{\bar{C}})$ (fig. 2 b).
On the other hand, the $P(S_{\bar{C}})$ for water at $2.8$ nm from the
membrane overlaps with that of bulk water, within our resolution, showing that
the value of the local order metric (LOM) of water between $2.4$ nm and $3.2$
nm is not affected by the membrane. Hence, our result implies that the effect
of the membrane on the structural properties of water should extend as far as $1.6$ nm plus the second-coordination-shell distance ($\sim 0.45$ nm 37), and at most to about $2.4$ nm minus $0.45$ nm, i.e., up to $(2.0\pm 0.05)$ nm, approximately (fig. 2 b).
Since the properties of water emanate from the underlying network of HBs 24, a
natural question follows: Is there a connection between i) the observed
perturbations on the IRO of confined water, ii) the underlying HBN, and iii)
its quality in terms of broken and intact HBs? We will address this question
in the following discussion.
Figure 2: Panel a): Probability distributions of score function $S_{\bar{C}}$
computed using the reference in the inset: the black line is for bulk water,
the green line is for water in the bin centered at $z=2.0$ nm, the red dashed
line is for water in the bin centered at $z=2.8$ nm. Inset: Reference made of
the oxygen positions (blue spheres) of the second coordination shell in cubic
ice. Blue sticks are a guide to the eyes to emphasize the geometrical
structure. Panel b): Difference $\Delta P$ between $P(S_{\bar{C}})$ for bulk
water and for the bin at $z=2.0$ nm (blue, continuous line), and between bulk
water and the bin at $z=2.8$ nm (orange, dashed line).
Figure 3: Probability
of the HB $n$-member rings, $P(n)$, computed from structures in bulk water
(open orange triangles), and in water in bins centered at different distances
from the membrane surfaces: $0.4$ nm (black dots), $1.2$ nm (red squares),
$2.0$ nm (green diamonds), and $2.8$ nm (blue triangles). All $P(n)$ are
normalized to unity and, therefore, do not reflect the total number of rings
of a given size.
We probe the HBN of water using the ring statistics, and we inspect the
quality of the network quantifying and characterizing coordination defects. We
compare the probability $P(n)$ of having an $n$-member ring,
$n\in\left[3,12\right]$, for the four bins that discretize the simulation box,
and for bulk water at the same thermodynamic conditions, $T=303$ K and $P=1$
atm (fig. 3).
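As a rough illustration of how a ring-size distribution can be extracted from an HB network, the Python sketch below uses networkx’s minimum cycle basis on a toy graph; note this is only a stand-in, since dedicated ring-statistics codes count primitive rings according to specific shortest-path criteria rather than a cycle basis.

```python
import networkx as nx
from collections import Counter

def ring_size_distribution(hb_graph, n_max=12):
    """Approximate P(n): normalized histogram of ring sizes in an HB network.

    Uses a minimum cycle basis as a stand-in for the primitive-ring
    criteria used in the paper; real ring-statistics codes differ in
    which closed loops they count.
    """
    sizes = Counter(len(cycle) for cycle in nx.minimum_cycle_basis(hb_graph)
                    if len(cycle) <= n_max)
    total = sum(sizes.values())
    return {n: sizes[n] / total for n in range(3, n_max + 1)} if total else {}

# toy network: two fused hexagonal rings, as in ice-like patches
g = nx.Graph()
g.add_edges_from([(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0),
                  (3, 6), (6, 7), (7, 8), (8, 9), (9, 4)])
print(ring_size_distribution(g))
```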
For bulk water, as expected in diffusive media, $P(n)$ is broad and accounts
for very large rings. We find that for distances within the bin closest to the
bilayer, i.e., at a distance $z\leq 0.8$ nm from the membrane, $P(n)$ strongly
deviates from the corresponding probability in bulk water. Namely, we observe
a depletion in the number of larger rings and an increase, notably sharp in
the case of $n=6$, of shorter rings. We attribute the depletion of larger
rings to the proximity of the membrane, which represents a reduction of
dimensionality in the connectivity search pathways of water molecules. On the
other hand, we remark that $n=6$ represents the typical connectivity in
crystalline ice. Therefore, the increased number of hexagonal rings at $z=0.8$
nm indicates that, closer to the interface, the HBN seems to acquire a
topology closer to that of an ordered crystalline network. However, as we will
discuss later, this similarity is only apparent.
The increased number of short rings at $z\leq 0.8$ nm from the membrane is in
agreement with the dynamical slowing down 38, and the increment in the IRO 22
reported for similar distances. Hence, 1) the diffusion and rotational slowing
down 38, 2) the increased value of $S_{\bar{C}}$ (fig. 2), and 3) the
increased fraction of hexagonal rings (fig. 3) for water at $z\leq 0.8$ nm
from the membrane, suggest a connection between dynamics and structure as
measured by i) the positions of the oxygen atoms and ii) the topology of the
HBN.
Moving from $z\leq 0.8$ nm to $0.8$ nm $<z\leq 2.4$ nm, we observe a marked
change in the distribution of $n$-rings, with a decrease for $n\leq 6$ and an
increase for $n>6$ (fig. 3). We attribute the larger probability of extended
rings for $z>0.8$ nm to the increased dimensionality of the available space.
In particular, the difference in $P(n)$ and in $S_{\bar{C}}$ (fig. 2) between
bulk water and water within the bin centered at $z=2.0$ nm from the membrane
further points toward a close correlation between structural properties at the
level of the medium range and the topology of the HBN.
The drastic change in the ring probability between the bin centered at $z=0.4$
nm and the bins at larger distances is consistent with the recent discovery of
an interface between bound and unbound hydration water at about $0.5$ nm from
the membrane 38. In Ref. 38 Calero and Franzese identify the interface between
i) the first hydration shell, partially made of water bound to the membrane,
with a structural role and an extremely slow dynamics, and ii) the next
shells, with no water-lipid HBs and a dynamics ten times faster than bound
water, but still one order of magnitude slower than bulk water. Therefore, the
ring probability can mark the structural difference between bound and unbound
water.
Moving to a distance $2.4$ nm $<z\leq 3.2$ nm from the surface, the $P(n)$
overlaps perfectly with the bulk case. Moreover, the $P(S_{\bar{C}})$ computed
within this bin (fig. 2) overlaps with the $P(S_{\bar{C}})$ of bulk water.
Therefore, we conclude that water recovers the structural properties (both IRO
and HBN) of bulk water only at distances larger than $2.4$ nm from the
membrane. This value is twice the $1.2$ nm at which water recovers bulk
density and dynamics 22, 23.
To get further insights into the network topology, we inspect its quality.
When water is in the glass state, we can map its HBN to a nearly-hyperuniform
network, i.e., to a continuous random network characterized by a low fraction
of coordination defects and a suppression of long-range density fluctuations
39. Therefore, the number of broken HBs is a measure of the quality of the
HBN. In particular, it quantifies how far the HBN is from the two extreme
cases: a) the liquid and b) the continuous random network. Furthermore,
coordination defects directly affect the fluidity of liquid water. Therefore,
they can be related to water dynamics 40.
We perform a decomposition of the HBs per water molecule into acceptor- (A)
and donor- (D) types. We label as $\textit{A}_{2}\textit{D}_{2}$ a water
molecule with perfect coordination, i.e., donating two bonds and accepting two
bonds. We evaluate the quality of the HBN by computing the fraction of water
molecules with a different coordination, i.e., not in the
$\textit{A}_{2}\textit{D}_{2}$ configuration. In particular, we focus our
attention on the following coordination configurations:
$\textit{A}_{1}\textit{D}_{1}$, $\textit{A}_{2}\textit{D}_{1}$,
$\textit{A}_{1}\textit{D}_{2}$, $\textit{A}_{2}\textit{D}_{2}$ and
$\textit{A}_{3}\textit{D}_{2}$. Other configurations do not contribute
significantly 41.
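To make the bookkeeping explicit, the following minimal Python sketch (our
illustration with hypothetical names, not the code used for this work) counts
accepted and donated bonds per molecule from a list of directed HB pairs,
obtained with any HB criterion such as the geometric one adopted here, and
assigns the corresponding $A_{x}D_{y}$ label:

```python
from collections import Counter

def classify_coordination(hb_pairs, n_molecules):
    """Assign an A_xD_y label to each water molecule, where x counts
    accepted HBs and y counts donated HBs. `hb_pairs` is an iterable
    of (donor, acceptor) molecule indices."""
    donated, accepted = Counter(), Counter()
    for donor, acceptor in hb_pairs:
        donated[donor] += 1
        accepted[acceptor] += 1
    return [f"A{accepted[m]}D{donated[m]}" for m in range(n_molecules)]

# Example: molecule 0 donates two bonds and accepts one -> 'A1D2'
print(classify_coordination([(0, 1), (0, 2), (1, 0)], 3))
# ['A1D2', 'A1D1', 'A1D0']
```

Histogramming such labels over molecules and frames yields populations like
those shown in fig. 4.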
Figure 4: Percentage-wise decomposition of the intact HBs per water molecule
into acceptor (A) and donor (D) types, for water in bins centered at different
distances from the membrane and for bulk water. Sets are for bins at $0.4$ nm
(black dots), $1.2$ nm (red squares), $2.0$ nm (green diamonds), $2.8$ nm
(blue triangles) and bulk (orange open triangles). The $x$-axis labels
$\textit{A}_{x}\textit{D}_{y}$ indicate the number of acceptor
($\textit{A}_{x}$) and donor ($\textit{D}_{y}$) HBs, respectively, of the
configurations schematically represented at the panel's top (with the oxygen
of the central water molecule in blue). For clarity we omit combinations with
minor contributions, e.g., $\textit{A}_{3}\textit{D}_{1}$,
$\textit{A}_{0}\textit{D}_{y}$, $\textit{A}_{x}\textit{D}_{0}$, etc.
We compare the percentage of intact HBs for bulk and confined water, as a
function of the distance from the membrane (fig. 4). We find that the HBN in
bulk water is dominated by $\textit{A}_{2}\textit{D}_{2}$ ($\sim 37\%$)
perfect coordinations. Water molecules involved in three HBs in the form of
the defect $\textit{A}_{1}\textit{D}_{2}$ comprise the next largest percentage
($\sim 20\%$), followed by the $\textit{A}_{2}\textit{D}_{1}$ and
$\textit{A}_{1}\textit{D}_{1}$ types and, finally, by the
$\textit{A}_{3}\textit{D}_{2}$.
This result, based on TIP3P water, is in agreement with the trend in _ab
initio_ liquid water at ambient conditions examined with different functionals
41. In particular, in _ab initio_ liquid water, the frequency of
$\textit{A}_{1}\textit{D}_{2}$ is almost twice that of
$\textit{A}_{2}\textit{D}_{1}$ at all levels of theory 41.
Close to the surface of the membrane, at $z\leq 0.8$ nm, the network of HBs
largely deviates from that of bulk water. The network is dominated by
$\textit{A}_{1}\textit{D}_{1}$ and $\textit{A}_{1}\textit{D}_{2}$ defects
($\sim 25\%$), followed by $\textit{A}_{2}\textit{D}_{1}$ and
$\textit{A}_{2}\textit{D}_{2}$ configurations ($\sim 15\%$), and a small
percentage of higher coordination defects $\textit{A}_{3}\textit{D}_{2}$
($\sim 3\%$). This composition is consistent with the results found for bound
water at $z\leq 0.5$ nm 38. In particular, we find here the same percentage of
defects with three water-water HBs ($40\%$). Furthermore, we observe numbers
very close to those in Ref. 38 for perfectly coordinated configurations ($\sim
20\%$), and for defects with two ($\sim 30\%$) and five water-water HBs ($\sim
1\%$).
However, close to the membrane, the decrease of perfectly coordinated water
molecules seems to be inconsistent with the higher local order of water 22,
23, and also with the enhanced contribution of six-fold rings (fig. 3). This
discrepancy is only apparent, for two reasons. First, both the IRO and the
ring statistics measure local order beyond the short range, while the quality
of the HBN is strictly a short-range measure. Second, our calculations include
only water-water HBs and do not account for the (strong) HBs between water
molecules and the phospholipid headgroups 42, 43, 38. In fact, $\sim 30\%$ of
the water molecules in the first hydration shell are bound to the membrane
with at least one HB 38. This observation explains why the dynamical slowing
down 22 of bound water can be interpreted as a local reduction of thermal
noise that allows water molecules to organize in space in more ordered
geometrical configurations 38.
Moving away from the surface, at a distance of $0.8$ nm$<z\leq 2.4$ nm, the
most appreciable effect on the quality of the HBN is a marked reduction of
$\textit{A}_{1}\textit{D}_{1}$ defects down to $\sim 18\%$, mostly accounting
for the absence of HBs between water molecules and phospholipid headgroups 38,
and a corresponding drastic increase in the percentage of perfectly
coordinated water molecules ($\textit{A}_{2}\textit{D}_{2}$) up to $\sim
25\%$, confirming the analysis done for unbound water 38.
At these distances bulk density and dynamical properties of water are
recovered almost fully 22, 38. However, the quality (defects) of the HBN
strongly deviates from that of bulk water, accounting for its different
topology (ring probability, fig. 3) and its different structural properties
22.
Upon increasing the distance from the membrane, we find a reduction of most of
the coordination defects and a corresponding increase of perfectly coordinated
water molecules, i.e., an improvement in the quality of the HBN (fig. 4).
Nevertheless, we recover the bulk-like composition only at a distance $z>2.4$
nm from the membrane, in agreement with our analysis of both the IRO (fig. 2)
and the topology of the HBN (fig. 3).
It is interesting to observe that the percentage of the defect type
$\textit{A}_{2}\textit{D}_{1}$ is roughly constant in all bins and, therefore,
independent of the distance from the membrane (fig. 4). We are currently
working on rationalizing this intriguing observation.
## Conclusions
The relation between water dynamics and structure is elusive in bulk 44 and
even more so under nanoconfinement 45, especially when the confining surfaces
are soft 46. On the other hand, the relationships between the hydration
structure and molecular fluidity at membrane/water interfaces are relevant in
many biological processes 47. Here, we study why water recovers its density
and dynamics at $\sim 1.2$ nm from a membrane 22, 38 while it has an
intermediate range order (IRO) 26 higher than bulk up to a distance twice as
large 22. To understand this surprising result, we focus on the hydrogen bond
network (HBN), analyzing its topology (ring statistics) and its quality
(population of perfectly coordinated water molecules and defects). We find
that the increased IRO is characterized by an alteration of the HBN.
In particular, for bound water 38, i.e., water at short distances (here less
than $0.8$ nm) from the membrane, we show that the HBN topology and quality
are very different from those of low-temperature bulk water. Although bound
water has an HBN with a large fraction of hexagonal rings, as in crystalline
water, it has a much higher number of defects than low-temperature water. We
find that $\textit{A}_{1}\textit{D}_{1}$ and $\textit{A}_{1}\textit{D}_{2}$
together account for 50% of all the defects, owing to water's strong HBs with
the membrane. These strong HBs locally reduce the water's thermal energy and
slow down its dynamics.
We show that the HBN analysis is able to mark the existence of the bound-
unbound water interface 38. We find a sudden qualitative change in the ring
statistics for hydration water at a distance $z>0.8$ nm from the membrane. The
defect distribution also clearly shows that water in the range $0.8$
nm$<z\leq 2.4$ nm is neither bound to the membrane nor bulk-like. Indeed, it
has far fewer $\textit{A}_{1}\textit{D}_{1}$ defects than bound water, and far
fewer perfectly coordinated molecules than bulk. Nevertheless, at these
distances, the structural differences between unbound and bulk water are
disguised in water's density and dynamics 38.
The differences in topology and defects smear out at distances larger than
$2.4$ nm. This distance corresponds to more than eight coordination shells of
hydration water. Hence, our results support the evidence of long-range effects
measured in terahertz and dielectric relaxation experiments 48, 9, 49, 50. We
expect our conclusions to hold, and possibly to be even more pronounced, for
water potentials more realistic than TIP3P, which is quite poor in terms of
structural properties beyond the short range.
Our findings should be taken into account when interpreting experimental
results and when developing membrane-water interaction potentials. They can
help in better understanding water in biological processes at large, in
particular those where hydration or structural changes play a role. Variations
of ion concentration drastically change the water HBN 51 and its dynamics 52,
with an effect similar to an increase of pressure 53, or to a decrease of
temperature upon dehydration 38. These variations in the extracellular matrix
can promote, for example, cardiac disease and arterial hardening in healthy
men 54, or atherosclerosis and inflammatory signaling in endothelial cells 55.
Hence, our results call for further investigation of the relationship between
this category of diseases and the water HBN rearrangements due to changes in
hydration or ionic concentrations.
## Methods
### Simulation details
The systems considered here have the same geometry as in our previous
simulations 22 but with a 15% increase in hydration, i.e., they are composed
of 128 DMPC lipids in a bilayer and $8100$ water molecules, with periodic
boundary conditions in such a way that water is confined between the two sides
of two replicas of the same membrane. We perform molecular dynamics (MD)
simulations on IBM POWER8 machines with NVIDIA Kepler K80 GPUs using the
simulation package NAMD 2.9 56 at a temperature of $T=303$ K and an average
pressure of $p=1$ atm. We set the simulation timestep to $2$ fs. We describe
the structure of phospholipids and their mutual interactions by the recently
parameterized force field CHARMM36 57, 58, which is able to reproduce the area
per lipid in excellent agreement with experimental data. The water model
employed in our simulations, consistent with the parametrization of CHARMM36,
is the modified TIP3P 59. We cut off the Van der Waals interactions at $12$ Å
with a smooth switching function starting at $10$ Å. We compute the long-
ranged electrostatic forces with the particle-mesh Ewald method 60, using a
grid spacing of $1$ Å. Our simulation box is anisotropic, with
$L_{z}>L_{x}=L_{y}$. This anisotropy helps avoid artifacts in the calculation
of the long-range electrostatics. During the $NpT$ simulations we always keep
this condition, with $\sim 5\%$ fluctuations in the values of $L_{x}$,
$L_{y}$, and $L_{z}$. After energy minimization, we equilibrate the
hydrated phospholipid bilayers for $10$ ns followed by a production run of $2$
ns in the $NpT$ ensemble at $p=1$ atm. The energy profile is shown in fig. 5.
Figure 5: Energy profile for the system under consideration. The red dashed
arrow defines the limit after which we start the production and data analysis.
In the simulations, we control the temperature with a Langevin thermostat 61
using a damping coefficient of $0.1$ ps-1 and we control the pressure by a
Nosé-Hoover Langevin barostat 62 with a piston oscillation time of $0.2$ ps
and a damping time of $0.1$ ps. We also perform numerical simulations of bulk
TIP3P water (4000 molecules) adopting the same protocol and at the same
thermodynamic conditions. It is worth mentioning, at this point, that the
isotropic $NpT$ ensemble ensures that experimental observables such as the
area per lipid and the NMR order parameters are properly reproduced 63, 64,
65.
In order to investigate the IRO and the HBN, we divide the systems along the
direction perpendicular to the phospholipid bilayer into equally spaced bins
with a thickness of $0.8$ nm each. This thickness is chosen in such a way that
a water molecule in the bin's center has, approximately, its second
coordination shell included in the same bin at $T=303$ K and $P=1$ atm. For
our hydration and bin size, the two membranes are separated by eight bins,
hence we can analyze four different distances between 0 and $3.2$ nm. All the
distances are measured taking as reference, for each side of the membrane, the
position where the phospholipid density distribution, at thermodynamic
equilibrium, has a maximum. We measure the observables of interest for each
water molecule within each bin centered at a distance $z$ from the center of
the bilayer. It is worth mentioning that several more sophisticated ways of
computing distances from a rough membrane surface have been reported in the
literature 38, 66. However, such methods differ from our approach only when
thinner bins are used and only in the proximity of the surface.
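As an illustration of the binning step, a short numpy sketch follows
(hypothetical names; it assumes the two reference planes have already been
located at the maxima of the phospholipid density profile, and distances are
in nm):

```python
import numpy as np

def assign_bins(z_oxygen, z_surf_lower, z_surf_upper, bin_width=0.8):
    """Bin water oxygens by distance from the nearer membrane surface:
    bin 0 is centered at 0.4 nm, bin 1 at 1.2 nm, and so on."""
    z = np.asarray(z_oxygen)
    dist = np.minimum(z - z_surf_lower, z_surf_upper - z)
    return np.floor(dist / bin_width).astype(int)

# Example: surfaces at z = 0 and z = 6.4 nm leave eight 0.8 nm bins,
# i.e., four distinct distances from a surface by symmetry.
print(assign_bins([0.3, 1.0, 2.9, 5.9], 0.0, 6.4))  # -> [0 1 3 0]
```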
### The local order metric
Here we briefly discuss the basic ideas behind the LOM that we have used to
probe the structural properties of water. Details can be found in Ref. 26.
The local environment of a water molecule $j$ in a snapshot defines a local
pattern formed by $M$ neighboring sites. Here, we consider as pattern sites
only the oxygen atoms in the second coordination shell of the oxygen of
molecule $j$.
There are $N$ local patterns, one for each atomic site $j$ in the system.
Indicating by $\mathbf{P}_{i}^{j}(i=1,M)$ the position vectors in the
laboratory frame of the $M$ neighbors of site $j$, their centroid is given by
$\mathbf{P}_{c}^{j}\equiv\frac{1}{M}\sum_{i=1}^{M}\mathbf{P}_{i}^{j}$. In the
following we refer the positions of the sites of the pattern to their
centroid, i.e.
$\mathbf{P}_{i}^{j}-\mathbf{P}_{c}^{j}\rightarrow\mathbf{P}_{i}^{j}$.
The local reference is a set of $M$ sites, labeled by indices $i$ ($i=1,M$),
located at ideal positions $\mathbf{R}_{i}^{j}$ around molecule $j$, typically
as in a lattice of choice. The step of the reference lattice is fixed equal to
the equilibrium O–O distance, $d$, of the water coordination shell at the
thermodynamic conditions of interest.
For each oxygen site $j$ the centroid of the reference is set to coincide with
the centroid of the pattern. The reference orientation is, instead, arbitrary,
forming angles $\theta,\phi,\psi$ with the pattern.
The LOM $S(j)$ at site $j$ is the maximum of the overlap function with respect
to the orientation of the reference and the permutation of the pattern
indices,
$S(j)\equiv\max_{\theta,\phi,\psi;\mathcal{P}}\prod_{i=1}^{M}\exp\left(-\frac{\left|\mathbf{P}_{i_{\mathcal{P}}}^{j}-\mathbf{R}_{i}^{j}\right|^{2}}{2\sigma^{2}M}\right).$
(1)
Here $i_{\mathcal{P}}$ are the permuted indices of the pattern sites
corresponding to a permutation $\mathcal{P}$, and $\sigma=d/4.4$ is a
parameter that controls the spread of the Gaussian functions.
If $L$ is the number of proper point symmetry operations of the reference, the
overlap function (Eq. 1) has $L$ equivalent maxima. Therefore, it is
sufficient to compute $S(j)$ for only a fraction $1/L$ of the Euler angle
domain $\Omega$, which we may call $\Omega/L$, the irreducible domain of the
Euler angles. Inside $\Omega/L$ we pick at random, with uniform probability,
$15$ orientations and we optimize them using a conjugate gradients procedure.
The LOM is an intrinsic property of the local environment at variance with the
overlap function $\mathcal{O}(j)$ that depends on the orientation of the
reference and on the ordering of the sites in the pattern. The LOM satisfies
the inequalities $0\lesssim S(j)\leq 1$. The two limits correspond,
respectively, to a completely disordered local pattern ($S(j)\rightarrow 0$)
and to an ordered local pattern matching perfectly the reference
($S(j)\rightarrow 1$), therefore grading each local environment on an
increasing scale of local order from zero to one.
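A compact numerical sketch of the LOM follows (our own illustration, not the
implementation of Ref. 26). The maximum over permutations can be solved
exactly with the Hungarian algorithm, since maximizing the product of
Gaussians in Eq. (1) is equivalent to minimizing the summed squared
pattern-reference distances over an assignment; the maximum over orientations
is approximated here by random sampling, without the conjugate-gradient
refinement described above:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.transform import Rotation

def lom(pattern, reference, sigma, n_orient=15, seed=0):
    """Approximate the local order metric S(j) of Eq. (1).
    pattern, reference: (M, 3) arrays, referred to their centroids."""
    M = len(pattern)
    P = pattern - pattern.mean(axis=0)
    R0 = reference - reference.mean(axis=0)
    best = 0.0
    for rot in Rotation.random(n_orient, random_state=seed):
        R = rot.apply(R0)
        # cost[i, k] = |P_i - R_k|^2; best pattern permutation by assignment
        cost = ((P[:, None, :] - R[None, :, :]) ** 2).sum(axis=-1)
        rows, cols = linear_sum_assignment(cost)
        best = max(best, float(np.exp(-cost[rows, cols].sum()
                                      / (2.0 * sigma**2 * M))))
    return best
```

With $\sigma=d/4.4$ and, e.g., the cubic-ice second shell of fig. 2 as
reference, this grades each local environment between 0 (disordered) and 1
(reference-like), as described above.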
The order parameter score function $S$ is the site-averaged LOM:
$S\equiv\frac{1}{N}\sum_{j=1}^{N}S(j).$ (2)
### Definition of rings
Several definitions of rings and counting schemes have been reported in the
literature 67, 68, 69, 70, 71, 72, 73. Recently, Formanek and Martelli have
shown that different schemes allow us to access different information 74.
Here, we construct rings as in fig. 6. We adopt the geometric definition of HB
75, which is in qualitative agreement with other definitions over a wide range
of thermodynamic conditions 76, 77. We start from a tagged water molecule and
recursively traverse the HBN until we reach the starting point again or exceed
the maximal ring size considered, 12 water molecules in our case. We consider
only the primitive rings, i.e., rings that cannot be decomposed into smaller
ones 78, 71, 67. As shown in Ref. 74, this definition provides a rich amount
of information about the network.
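The construction can be sketched as follows (a simplified Python illustration
under the definitions above, not the production code; the primitivity test
uses the standard shortcut criterion, comparing graph distances with distances
along the ring):

```python
def rings_through(adj, start, max_size=12):
    """Primitive rings through `start` in a directed HB graph, where
    `adj` maps each molecule to the molecules it donates an HB to."""
    cycles, stack = [], [(start, (start,))]
    while stack:
        node, path = stack.pop()
        for nxt in adj.get(node, ()):
            if nxt == start and len(path) >= 3:
                cycles.append(list(path))
            elif nxt not in path and len(path) < max_size:
                stack.append((nxt, path + (nxt,)))
    return [c for c in cycles if is_primitive(adj, c)]

def is_primitive(adj, ring):
    """A ring is primitive if no two of its members are joined by a
    path shorter than the shorter arc between them along the ring."""
    n = len(ring)
    return all(dist(adj, ring[i], ring[j]) >= min(j - i, n - (j - i))
               for i in range(n) for j in range(i + 1, n))

def dist(adj, src, dst):
    """Breadth-first shortest-path length from src to dst."""
    seen, frontier, d = {src}, [src], 0
    while frontier:
        d, nxt = d + 1, []
        for u in frontier:
            for v in adj.get(u, ()):
                if v == dst:
                    return d
                if v not in seen:
                    seen.add(v)
                    nxt.append(v)
        frontier = nxt
    return float("inf")
```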
Figure 6: Schematic representation of the ring construction for a given HBN
between water molecules. We start from the water molecule labeled as 1 (O
atoms are represented in red, H atoms in white). By following the directional
HBs from H to O (blue arrows), we traverse the HBN until we return to molecule
1 or exceed 12 steps, and we then keep only those rings that cannot be
decomposed into sub-rings. Here, we find a hexagonal ring from molecule 1 to 6
and a pentagonal ring from 1 to 9.
## Acknowledgements
F.M. and J.C. acknowledge support from the STFC Hartree Centre’s Innovation
Return on Research programme, funded by the Department for Business, Energy
and Industrial Strategy. G.F. acknowledges support from the Spanish grant
PGC2018-099277-B-C22 (MCIU/ AEI/ ERDF) and ICREA Foundation (ICREA Academia
prize).
## References
* Berkowitz et al. 2006 Berkowitz, M. L.; Bostick, D. L.; Pandit, S. Aqueous Solutions next to Phospholipid Membrane Surfaces: Insights from Simulations. _Chem. Rev._ 2006, _106_ , 1527–1539
* Yeagle 2011 Yeagle, P. L., Ed. _The Structure of Biological Membranes_ , 3rd ed.; CRC Press: New York, 2011
* Berkowitz 2019 Berkowitz, M. L., Ed. _Biomembrane Simulations: Computational Studies of Biological Membranes_ , 1st ed.; Series in Computational Biophysics; CRC Press: New York, 2019
* Fitter et al. 1999 Fitter, J.; Lechner, R. E.; Dencher, N. A. Interactions of Hydration Water and Biological Membranes Studied by Neutron Scattering. _J. Phys. Chem. B_ 1999, _103_ , 8036–8050
* Trapp et al. 2010 Trapp, M.; Gutberlet, T.; Juranyi, F.; Unruh, T.; Demé, B.; Tehei, M.; Peters, J. Hydration Dependent Studies of Highly Aligned Multilayer Lipid Membranes by Neutron Scattering. _J. Chem. Phys._ 2010, _133_ , 164505
* Wassall 1996 Wassall, S. R. Pulsed Field Gradient-Spin Echo NMR Studies of Water Diffusion in a Phospholipid Model Membrane. _Biophys. J._ 1996, _71_ , 2724–2732
* Volkov et al. 2007 Volkov, V. V.; Palmer, D. J.; Righini, R. Distinct Water Species Confined at the Interface of a Phospholipid Membrane. _Phys. Rev. Lett._ 2007, _99_ , 078302
* Zhao et al. 2008 Zhao, W.; Moilanen, D. E.; Fenn, E. E.; Fayer, M. D. Water at the Surfaces of Aligned Phospholipid Multibilayer Model Membranes Probed with Ultrafast Vibrational Spectroscopy. _J. Am. Chem. Soc._ 2008, _130_ , 13927–13937
* Tielrooij et al. 2009 Tielrooij, K. J.; Paparo, D.; Piatkowski, L.; Bakker, H. J.; Bonn, M. Dielectric Relaxation Dynamics of Water in Model Membranes Probed by Terahertz Spectroscopy. _Biophys. J._ 2009, _97_ , 2484–2492
* Hua et al. 2015 Hua, W.; Verreault, D.; Allen, H. C. Solvation of Calcium–Phosphate Headgroup Complexes at the DPPC/Aqueous Interface. _ChemPhysChem_ 2015, _16_ , 3910–3915
* Róg et al. 2002 Róg, T.; Murzyn, K.; Pasenkiewicz-Gierula, M. The Dynamics of Water at the Phospholipid Bilayer Surface: A Molecular Dynamics Simulation Study. _Chem. Phys. Lett._ 2002, _352_ , 323–327
* Bhide and Berkowitz 2005 Bhide, S. Y.; Berkowitz, M. L. Structure and Dynamics of Water at the Interface with Phospholipid Bilayers. _J. Chem. Phys._ 2005, _123_ , 224702
* von Hansen et al. 2013 von Hansen, Y.; Gekle, S.; Netz, R. R. Anomalous Anisotropic Diffusion Dynamics of Hydration Water at Lipid Membranes. _Phys. Rev. Lett._ 2013, _111_ , 118103
* Zhang and Berkowitz 2009 Zhang, Z.; Berkowitz, M. L. Orientational Dynamics of Water in Phospholipid Bilayers with Different Hydration Levels. _J. Phys. Chem. B_ 2009, _113_ , 7676–7680
* Gruenbaum and Skinner 2011 Gruenbaum, S. M.; Skinner, J. L. Vibrational Spectroscopy of Water in Hydrated Lipid Multi–Bilayers. I. Infrared Spectra and Ultrafast Pump–Probe Observables. _J. Chem. Phys._ 2011, _135_ , 075101
* Borallo et al. 2016 Borallo, C. C.; Stanley, E. H.; Franzese, G. Structural Interpretation of the Large Slowdown of Water Dynamics at Stacked Phospholipid Membranes for Decreasing Hydration Level: All-Atom Molecular Dynamics. _Materials_ 2016, _9_ , 319
* Camisasca et al. 2018 Camisasca, G.; Iorio, A.; Marzio, M. D.; Gallo, P. Structure and Slow Dynamics of Protein Hydration Water. _J. Mol. Liq._ 2018, _268_ , 903–910
* Iorio et al. 2019 Iorio, A.; Camisasca, G.; Rovere, M.; Gallo, P. Characterization of Hydration Water in Supercooled Water–Trehalose Solutions: The Role of the Hydrogen Bonds Network. _J. Chem. Phys._ 2019, _51_ , 044507
* Iorio et al. 2019 Iorio, A.; Camisasca, G.; Gallo, P. Glassy Dynamics of Water at Interface with Biomolecules: A Mode Coupling Theory Test. _Sci. China Phys. Mech._ 2019, _62_ , 107011
* Iorio et al. 2019 Iorio, A.; Camisasca, G.; Gallo, P. Slow Dynamics of Hydration Water and the Trehalose Dynamical Transition. _J. Mol. Liq._ 2019, _282_ , 617–625
* Iorio et al. 2020 Iorio, A.; Minozzi, M.; Camisasca, G.; Rovere, M.; Gallo, P. Slow Dynamics of Supercooled Hydration Water in Contact with Lysozyme: Examining the Cage Effect at Different Length Scales. _Philos. Mag_ 2020, _0_ , 1–14
* Martelli et al. 2018 Martelli, F.; Ko, H.-Y.; Borallo, C. C.; Franzese, G. Structural Properties of Water Confined by Phospholipid Membranes. _Front. Phys._ 2018, _13_ , 136801
* Samatas et al. 2019 Samatas, S.; Calero, C.; Martelli, F.; Franzese, G. In _Biomembrane Simulations Computational Studies of Biological Membranes_ ; Berkowitz, M., Ed.; CRC Press: New York, 2019; p 69
* Martelli 2019 Martelli, F. Unravelling the Contribution of Local Structures to the Anomalies of Water: The Synergistic Action of Several Factors. _J. Chem. Phys._ 2019, _150_ , 094506
* Hamley 2007 Hamley, W. _Introduction to Soft Matter_ ; John Wiley and Sons, West Sussex, England, 2007
* Martelli et al. 2016 Martelli, F.; Ko, H.-Y.; Oğuz, E. C.; Car, R. Local-Order Metric for Condensed Phase Environments. _Phys. Rev. B_ 2016, _97_ , 064105
* Martelli et al. 2018 Martelli, F.; Giovambattista, N.; Torquato, S.; Car, R. Searching for Crystal-Ice Domains in Amorphous Ices. _Phys. Rev. Materials_ 2018, _2_ , 075601
* Santra et al. 2018 Santra, B.; Ko, H.-Y.; Yeh, Y.-W.; Martelli, F.; Kaganovich, I.; Raitses, Y.; Car, R. Root-Growth of Boron Nitride Nanotubes: Experiments and Ab Initio Simulations. _Nanoscale_ 2018, _10_ , 22223
* Marton̆ák et al. 2004 Marton̆ák, R.; Donadio, D.; Parrinello, M. Polyamorphism of Ice at Low Temperatures from Constant-Pressure Simulations. _Phys. Rev. Lett._ 2004, _92_ , 225702
* Marton̆ák et al. 2005 Marton̆ák, R.; Donadio, D.; Parrinello, M. Evolution of the Structure of Amorphous Ice: From Low-Density Amorphous through High-Density Amorphous to Very High-Density Amorphous Ice. _J. Chem. Phys._ 2005, _122_ , 134501
* Camisasca et al. 2019 Camisasca, G.; Schlesinger, D.; Zhovtobriukh, I.; Pitsevich, G.; Pettersson, L. G. M. A Proposal for the Structure of High– and Low–Density Fluctuations in Liquid Water. _J. Chem. Phys._ 2019, _151_ , 034508
* Russo and Tanaka 2014 Russo, J.; Tanaka, H. Understanding Water’s Anomalies with Locally Favoured Structures. _Nat. Commun._ 2014, _5_ , 3556
* Palmer et al. 2014 Palmer, J. C.; Martelli, F.; Liu, Y.; Car, R.; Panagiotopoulos, A. Z.; Debenedetti, P. G. Metastable Liquid–Liquid Transition in a Molecular Model of Water. _Nature_ 2014, _510_ , 385–388
* Santra et al. 2015 Santra, B.; DiStasio Jr., R. A.; Martelli, F.; Car, R. Local Structure Analysis in $AbInitio$ Liquid Water. _Mol. Phys._ 2015, _113_ , 2829–2841
* Ansari et al. 2020 Ansari, N.; Ansari, N.; Onat, B.; Sosso, G. C.; Hassanali, A. Insights into the Emerging Networks of Voids in Simulated Supercooled Water. _J. Phys. Chem. B_ 2020, _124_ , 2180–2190
* Read and Wilson 2016 Read, R. C.; Wilson, J. R. _An Atlas of Graphs_ ; Oxford University Press: Oxford, 2016
* Mark and Nilsson 2001 Mark, P.; Nilsson, L. Structure and Dynamics of the TIP3P, SPC, and SPC/E Water Models at 298 K. _J. Phys. Chem. A_ 2001, _105_ , 9954–9960
* Calero and Franzese 2019 Calero, C.; Franzese, G. Membranes with Different Hydration Levels: The Interface between Bound and Unbound Hydration Water. _J. Mol. Liq._ 2019, _273_ , 488–496
* Martelli et al. 2017 Martelli, F.; Torquato, S.; Giovambattista, N.; Car, R. Large-Scale Structure and Hyperuniformity of Amorphous Ices. _Phys. Rev. Lett._ 2017, _119_ , 136002
* de los Santos and Franzese 2012 de los Santos, F.; Franzese, G. Relations between the Diffusion Anomaly and Cooperative Rearranging Regions in a Hydrophobically Nanoconfined Water Monolayer. _Phys. Rev. E_ 2012, _85_ , 010602–
* DiStasio Jr. et al. 2014 DiStasio Jr., R. A.; Santra, B.; Li, Z.; Wu, X.; Car, R. The Individual and Collective Effects of Exact Exchange and Dispersion Interactions on the Ab Initio Structure of Liquid Water. _J. Chem. Phys._ 2014, _141_ , 084502
* Binder 2003 Binder, H. The Molecular Architecture of Lipid Membranes: New Insights from Hydration–Tuning Infrared Linear Dichroism Spectroscopy. _Appl. Spectrosc. Rev._ 2003, _30_ , 15–69
* Chen et al. 2010 Chen, X.; Hua, W.; Huang, Z.; Allen, H. C. Interfacial Water Structure Associated with Phospholipid Membranes Studied by Phase–Sensitive Vibrational Sum Frequency Generation Spectroscopy. _J. Am. Chem. Soc._ 2010, _132_ , 11336–11342
* Verde et al. 2019 Verde, A. R.; Montes de Oca, J. M.; Accordino, S. R.; Alarcón, L. M.; Appignanesi, G. A. Comparing the Performance of Two Structural Indicators for Different Water Models while Seeking for Connections between Structure and Dynamics in the Glassy Regime. _J. Chem. Phys._ 2019, _150_ , 244504
* Joseph and Aluru 2008 Joseph, S.; Aluru, N. R. Why Are Carbon Nanotubes Fast Transporters of Water? _Nano Letters_ 2008, _8_ , 452–458
* Ruiz Pestana et al. 2018 Ruiz Pestana, L.; Felberg, L. E.; Head-Gordon, T. Coexistence of Multilayered Phases of Confined Water: The Importance of Flexible Confining Surfaces. _ACS Nano_ 2018, _12_ , 448–454
* Asakawa et al. 2012 Asakawa, H.; Yoshioka, S.; Nishimura, K.-i.; Fukuma, T. Spatial Distribution of Lipid Headgroups and Water Molecules at Membrane/Water Interfaces Visualized by Three-Dimensional Scanning Force Microscopy. _ACS Nano_ 2012, _6_ , 9013–9020
* Ebbinghaus et al. 2007 Ebbinghaus, S.; Kim, S. J.; Heyden, M.; Yu, X.; Heugen, U.; Gruebele, M.; Leitner, D. M.; Havenith, M. An Extended Dynamical Hydration Shell around Proteins. _Proc. Natl. Ac. Sci. USA_ 2007, _104_ , 20749–20752
* Zhang et al. 2013 Zhang, C.; Gygi, F.; Galli, G. Strongly Anisotropic Dielectric Relaxation of Water at the Nanoscale. _J. Phys. Chem. Lett._ 2013, _4_ , 2477–2481
* Hishida and Tanaka 2013 Hishida, M.; Tanaka, K. Long–Range Hydration Effect of Lipid Membrane Studied by Terahertz Time-Domain Spectroscopy. _Phys. Rev. Lett._ 2013, _106_ , 158102
* Mancinelli et al. 2007 Mancinelli, R.; Botti, A.; Bruni, F.; Ricci, M. A.; Soper, A. K. Perturbation of Water Structure Due to Monovalent Ions in Solution. _Phys. Chem. Chem. Phys._ 2007, _9_ , 2959–2967
* Fayer et al. 2009 Fayer, M. D.; Moilanen, D. E.; Wong, D.; Rosenfeld, D. E.; Fenn, E. E.; Park, S. Water Dynamics in Salt Solutions Studied with Ultrafast Two-Dimensional Infrared (2D IR) Vibrational Echo Spectroscopy. _Acc. Chem. Res._ 2009, _42_ , 1210–9
* Gallo et al. 2014 Gallo, P.; Corradini, D.; Rovere, M. Do Ions Affect the Structure of Water? The Case of Potassium Halides. _J. Mol. Liq._ 2014, _189_ , 52–56
* Arnaoutis et al. 2017 Arnaoutis, G.; Kavouras, S. A.; Stratakis, N.; Likka, M.; Mitrakou, A.; Papamichael, C.; Sidossis, L. S.; Stamatelopoulos, K. The Effect of Hypohydration on Endothelial Function in Young Healthy Adults. _Eur. J. Nutr._ 2017, _56_ , 1211–1217
* Dmitrieva and Burg 2015 Dmitrieva, N. I.; Burg, M. B. Elevated Sodium and Dehydration Stimulate Inflammatory Signaling in Endothelial Cells and Promote Atherosclerosis. _PLOS ONE_ 2015, _10_ , 1–22
* Phillips et al. 2005 Phillips, J. C.; Braun, R.; Wang, W.; Gumbart, J.; Tajkhorshid, E.; Villa, E.; Chipot, C.; Skeel, R. D.; Kalé, L.; Schulten, K. Scalable Molecular Dynamics with NAMD. _J. Comput. Chem._ 2005, _26_ , 1781–1802
* Klauda et al. 2010 Klauda, J. B.; Venable, R. M.; Freites, J. A.; O’Connor, J. W.; Tobias, D. J.; Mondragon-Ramirez, C.; Vorobyov, I.; MacKerell, A. D.; Pastor, R. W. Update of the CHARMM All-Atom Additive Force Field for Lipids: Validation on Six Lipid Types. _J. Phys. Chem. B_ 2010, _114_ , 7830–7843
* Lim et al. 2012 Lim, J. B.; Rogaski, B.; Klauda, J. B. Update of the Cholesterol Force Field Parameters in CHARMM. _J. Phys. Chem. B_ 2012, _116_ , 203–210
* Jorgensen et al. 1983 Jorgensen, W. L.; Chandrasekhar, J.; Madura, J. D. Comparison of Simple Potential Functions for Simulating Liquid Water. _J. Chem. Phys._ 1983, _79_ , 926
* Essmann et al. 1995 Essmann, U.; Perera, L.; Berkowitz, M. L. A Smooth Particle Mesh Ewald Method. _J. Chem. Phys._ 1995, _103_ , 8577
* Berendsen et al. 1984 Berendsen, H. J. C.; Postma, J. P. M.; van Gunsteren, W. F.; DiNola, A.; Haak, J. R. Molecular Dynamics with Coupling to an External Bath. _J. Phys. Chem._ 1984, _81_ , 3684
* Feller et al. 1995 Feller, S. E.; Zhang, Y.; Pastor, R. W.; Brooks, B. R. Constant Pressure Molecular Dynamics Simulation: The Langevin Piston Method. _J. Phys. Chem._ 1995, _103_ , 4613
* Taylor et al. 2009 Taylor, J.; Whiteford, N. E.; Bradley, G.; Watson, G. W. Validation of All–Atom Phosphatidylcholine Lipid Force Fields in the Tensionless NPT Ensemble. _Biochim. Biophys. Acta_ 2009, _1788_ , 638–649
* Davis et al. 2009 Davis, J. E.; Rahaman, O.; Patel, S. Molecular Dynamics Simulations of a DMPC Bilayer Using Nonadditive Interaction Models. _Biophys. J._ 2009, _96_ , 285–402
* Gapsys et al. 2013 Gapsys, V.; de Groot, B. L.; Briones, R. Computational Analysis of Local Membrane Properties. _J. Comput. Aided Mol. Des._ 2013, _27_ , 845–858
* Pandit et al. 2003 Pandit, S. A.; Bostick, D.; Berkowitz, M. L. An Algorithm to Describe Molecular Scale Rugged Surfaces and its Application to the Study of a Water/Lipid Bilayer interface. _J. Chem. Phys._ 2003, _119_ , 2199–2205
* King 1967 King, S. V. Ring Configurations in a Random Network Model of Vitreous Silica. _Nature_ 1967, _213_ , 1112–1113
* Rahman and Stillinger 1973 Rahman, A.; Stillinger, F. H. Hydrogen-Bond Patterns in Liquid Water. _J. Am. Chem. Soc._ 1973, _95_ , 7943–7948
* Guttman 1990 Guttman, L. Ring Structure of the Crystalline and Amorphous Forms of Silicon Dioxide. _J. Non-Cryst. Solids_ 1990, _116_ , 145–147
* Franzblau 1991 Franzblau, D. S. Computation of Ring Statistics for Network Models of Solids. _Phys. Rev. B_ 1991, _44_ , 4925
* Wooten 2002 Wooten, F. Structure, Odd Lines and Topological Entropy of Disorder of Amorphous Silicon. _Acta Cryst. A_ 2002, _58_ , 346–351
* Yuan and Cormack 2002 Yuan, X.; Cormack, A. N. Efficient Algorithm for Primitive Ring Statistics in Topological Networks. _Comp. Mater. Sci._ 2002, _24_ , 343–360
* Roux and Jund 2010 Roux, S. L.; Jund, P. Ring Statistics Analysis of topological Networks: New Approach and Application to Amorphous GeS2 and SiO2 Systems. _Comp. Mater. Sci._ 2010, _49_ , 70–83
* Formanek and Martelli 2020 Formanek, M.; Martelli, F. Probing the Network Topology in Network–Forming Materials: the Case of Water. _AIP Adv._ 2020, _10_ , 055205
* Luzar and Chandler 1996 Luzar, A.; Chandler, D. Hydrogen–Bond Kinetics in Liquid Water. _Nature_ 1996, _379_ , 55–57
* Prada-Gracia et al. 2013 Prada-Gracia, D.; Shevchuk, R.; Rao, F. The Quest for Self–Consistency in Hydrogen Bond Definitions. _J. Chem. Phys._ 2013, _139_ , 084501
* Shi et al. 2018 Shi, R.; Russo, J.; Tanaka, H. Common Microscopic Structural Origin for Water’s Thermodynamic and Dynamic Anomalies. _J. Chem. Phys._ 2018, _149_ , 224502
* Goetzke and Klein 1991 Goetzke, K.; Klein, H.-J. Properties and Efficient Algorithmic Determination of Different Classes of Rings in Finite and Infinite Polyhedral Networks. _J. Non-Cryst. Solids_ 1991, _127_ , 215–220
# Tunable spin-flop transition in artificial ferrimagnets
N. O. Antropov Institute of Metal Physics, 620180 Ekaterinburg, Russia Ural
Federal University, 620002 Ekaterinburg, Russia E. A. Kravtsov Institute of
Metal Physics, 620180 Ekaterinburg, Russia Ural Federal University, 620002
Ekaterinburg, Russia M. V. Makarova Institute of Metal Physics, 620180
Ekaterinburg, Russia Ural Federal University, 620002 Ekaterinburg, Russia V.
V. Proglyado Institute of Metal Physics, 620180 Ekaterinburg, Russia T.
Keller Max-Planck-Institut für Festkörperforschung, Heisenbergstraße 1,
D-70569 Stuttgart, Germany Max Planck Society Outstation at the Heinz Maier-
Leibnitz Zentrum (MLZ), D-85748 Garching, Germany I. A. Subbotin National
Research Center “Kurchatov Institute”, 123182 Moscow, Russia E. M. Pashaev
National Research Center “Kurchatov Institute”, 123182 Moscow, Russia G. V.
Prutskov National Research Center “Kurchatov Institute”, 123182 Moscow,
Russia A. L. Vasiliev National Research Center “Kurchatov Institute”, 123182
Moscow, Russia Yu. M. Chesnokov National Research Center “Kurchatov
Institute”, 123182 Moscow, Russia N. G. Bebenin Institute of Metal Physics,
620180 Ekaterinburg, Russia V. V. Ustinov Institute of Metal Physics, 620180
Ekaterinburg, Russia B. Keimer Max-Planck-Institut für Festkörperforschung,
Heisenbergstraße 1, D-70569 Stuttgart, Germany Yu. N. Khaydukov Max-Planck-
Institut für Festkörperforschung, Heisenbergstraße 1, D-70569 Stuttgart,
Germany Max Planck Society Outstation at the Heinz Maier-Leibnitz Zentrum
(MLZ), D-85748 Garching, Germany Skobeltsyn Institute of Nuclear Physics,
Moscow State University, Moscow 119991, Russia
###### Abstract
The spin-flop transition (SFT) consists of a jump-like reorientation of
antiferromagnetically ordered magnetic moments into a non-collinear state when
the magnetic field increases above a critical value. Potentially the SFT can
be utilized in many applications of the rapidly developing field of
antiferromagnetic spintronics. However, the difficulty of using it in
conventional antiferromagnets lies in (a) too large switching magnetic fields,
(b) the need for magnetic anisotropy, and (c) the requirement to apply the
magnetic field along the corresponding anisotropy axis. In this work we
propose to use artificial ferrimagnets in which the spin-flop transition
occurs without anisotropy and the transition field can be lowered by adjusting
the exchange coupling in the structure. This is demonstrated experimentally on
artificial Fe-Gd ferrimagnets, where the use of Pd spacers allowed us to
suppress the transition field by two orders of magnitude.
Antiferromagnetic (AF) spintronics is nowadays a rapidly developing area [1,
2, 3, 4, 5]. In addition to the non-volatility of conventional ferromagnetic
spintronics, AF devices can offer immunity to external magnetic disturbances,
absence of cross-talk between small-area devices, and much faster dynamics
(THz vs MHz). Antiferromagnetic systems feature a spin-flop transition (SFT),
i.e., a transition from antiferromagnetic ordering to a noncollinear (NC)
state at magnetic fields exceeding a certain value $H_{SP}$. The creation of a
noncollinear magnetic state and the possibility to switch between the AF and
NC states may have useful applications by utilizing anomalous Hall or Nernst
effects [6, 7, 8, 9, 10, 11]. In addition, the proximity of a noncollinear
magnetic texture to a superconducting layer generates long-range triplet
superconductivity, which may also find diverse applications in superconducting
spintronics [12, 13, 14, 15, 16].
The utilization of the spin-flop effect in AF systems is complicated for at
least two reasons. First, the existence of the SFT in an AF requires uniaxial
anisotropy and an external field applied along the corresponding axis. Second,
typical transition fields $H_{SP}$ in bulk antiferromagnets are tens of Tesla
[17, 18, 19, 20], which is too high for real applications. The need for
anisotropy inside the system can be circumvented by replacing antiferromagnets
with ferrimagnets (FEMs). In FEMs no anisotropy is required, and the SFT takes
place at $H_{SP}=\lambda|m_{1}-m_{2}|$ [21], where $m_{1,2}$ are the magnetic
moments of the first and second sublattices and $\lambda$ is the exchange
parameter. In bulk systems, however, $H_{SP}$ is still too high for
applications and can hardly be tuned.
In contrast, artificial ferrimagnets based on magnetic heterostructures make
it possible to tune the SFT field by varying the parameters of the
ferromagnetic layers and by introducing non-magnetic spacers. Heterostructures
based on 3d transition metals (TM) and heavy 4f rare-earth (RE) metals, like
Fe/Gd, are model ferrimagnetic systems demonstrating a rich magnetic phase
diagram with complex types of magnetic ordering [22, 23, 24, 25, 26, 27].
Coupling between the 4f electrons of Gd and the 3d electrons of Fe leads to
the antiferromagnetic alignment of the TM and RE magnetic moments, which, due
to the difference in the magnetic moments of Fe ($\sim 2\mu_{B}$) and Gd
($\sim 7\mu_{B}$), leads to the emergence of a one-dimensional ferrimagnetic
lattice. The spin-flop transition was found in Gd/Fe systems at a typical
value $H_{SP}\sim$3 kOe [28], which is much smaller than that for bulk FEMs
but still quite high for applications. Further tuning of $H_{SP}$ can be
achieved by suppressing the interlayer exchange coupling, e.g., by spacing Fe
and Gd with a non-magnetic material like Cr [29, 30], Pt [31] or Si [32].
The SFT can be detected by integral magnetic techniques as a kink on the
magnetic hysteresis loop at $H_{SP}$. In the case of artificial FEMs, however,
the magnetic signal from the thin films is heavily polluted by the dia- or
paramagnetic signal of the thick substrates. This makes it difficult, if not
impossible, to use integral magnetometric methods to study SFTs. Neutron
scattering, being a depth-selective magnetometric method, is widely used for
studying AFs and FEMs [33, 34, 35]. Similar to X-rays and light, neutrons
diffract from a periodic lattice with period $D$ according to the well-known
Bragg law $n\lambda=2D\sin\theta$. Here $\lambda$ and $\theta$ are the neutron
wavelength and incident angle, and $n$ is an integer corresponding to the
order of the Bragg peak. The neutron's spin one-half makes neutron scattering
sensitive to the magnetic lattice. In the case of an antiferromagnetic lattice
the magnetic period is doubled compared to the structural one, so that
magnetic Bragg peaks appear at the $n/2$ positions of the structural Bragg
peaks. Applying spin analysis, that is, detecting the neutron spin states
before and after scattering, allows one to obtain additional information about
the magnetic configuration. The non-spin-flip (NSF) channels (++) and (- -)
are sensitive to the sum and difference of the nuclear potential and the part
of the magnetization collinear to the neutron polarization. Here the first and
second signs code the neutron polarization along the external magnetic field
$H$ before and after the scattering process. The presence of a non-collinear
magnetization causes spin-flip (SF) scattering (+-) and (-+). In the Born
approximation the amplitude of the SF scattering is proportional to the
Fourier transform of the spatial profile of the noncollinear magnetization.
Thus SF scattering is a very sensitive channel for detecting SFTs.
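As a quick worked example of the Bragg condition above (our own
back-of-the-envelope check, using the nominal period $D\approx 10.9$ nm of the
superlattices discussed below):

```python
import numpy as np

lam, D = 4.3, 109.0  # neutron wavelength and superlattice period, Angstroms
for n in range(1, 5):
    theta = np.degrees(np.arcsin(n * lam / (2 * D)))
    print(f"Bragg peak n={n}: theta ~ {theta:.2f} deg")
# n=1 gives theta ~ 1.13 deg: grazing incidence, as expected in reflectometry
```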
In our prior work [36] we studied the superlattice
[Fe(3.5nm)/Pd(1.2nm)/Gd(5nm)/Pd(1.2nm)]12. In the neutron experiment we
measured the intensity of SF scattering at the position of the first Bragg
peak, $R^{SF}_{1}$, as a function of external magnetic field at a temperature
of 10 K. Above a magnetic field of $H_{SP}$=1.5 kOe we detected a 20-fold
increase of SF scattering, which is direct evidence for the presence of an SFT
in our system. We note that this $H_{SP}$ field is much smaller than in
spacer-free Fe/Gd systems. Subsequent structural studies by transmission
electron microscopy and synchrotron radiation [37] indicated the presence of
mutual diffusion at the Gd/Pd interface. For thin ($\sim$1 nm) Pd spacers this
interdiffusion leads to an almost complete dissolution of Pd in Gd. As a
result the Curie temperature (and hence the exchange energy) of the (nominal)
Gd layer decreases from 294 K for bulk Gd to $\lesssim$ 100 K. Thus the
ability of Pd and Gd to form an alloy with a controllable suppression of the
exchange energy paves the way for tuning the SFT by varying the thickness of
the Pd spacer. To do this we prepared a series of samples of nominal
composition [Fe(3.5nm)/Pd(t)/Gd(5.0nm)/Pd(t)]12, varying $t$ from 1.0 to 1.6
nm (details can be found in our prior works [36, 37]). In the following we
code the samples as PdYY, where YY is the thickness of the Pd layer in
Angstroms.
Fig. 1a shows the X-ray low-angle diffraction patterns (reflectivities)
measured at a wavelength of $\lambda$=1.54 Å from the samples under study.
More than 10 orders of Bragg reflection are seen in the reflectivities, which
indicates good repeatability of the Fe/Gd unit cell. Fig. 1b shows the energy
dispersive X-ray (EDX) microanalysis from scanning transmission electron
microscopy (STEM) of the Pd12 sample. The EDX analysis shows well-defined Fe
layers depicted in blue and yellow layers of GdPd alloy instead of separate
red Gd layers and green Pd spacers. For the sake of simplicity, we will keep
the name Gd layer, remembering however that in reality the layer is a
GdxPd1-x alloy.
Figure 1: (a) X-ray low-angle diffraction (reflectivity) of samples under
study. Vertical arrows show the position of several Bragg peaks for sample
Pd10. (b) The energy dispersive X-ray (EDX) microanalysis of Pd12 sample.
The polarized neutron reflectometry (PNR) experiment was conducted on the
monochromatic ($\lambda$=4.3 Å) reflectometer NREX of the research reactor
FRM-2 (Garching, Germany). Fig. 2 shows the PNR data measured on sample Pd10
at $T$=10 K in a magnetic field $H$=1 kOe, together with an additional SF
curve at $T$=10 K in a magnetic field $H$=3 kOe (solid line). In the neutron
experiment 4 Bragg peaks were confidently measured. A large splitting of the
(++) and (- -) NSF Bragg peaks indicates the presence of a collinear magnetic
moment in the system. At the same time we observed a much weaker (1-2 orders
of magnitude below the NSF signal) SF scattering at the Bragg peaks. The
origin of this small, though not negligible, SF signal can be associated with
noncollinear inhomogeneities at the Fe/Gd interfaces. The data at $H$=1 kOe
can be quantitatively described by a predominantly collinear AF state with
magnetic moments of Gd $M_{Gd}\approx 5\mu_{B}$ and Fe $M_{Fe}\approx
2\mu_{B}$ aligned parallel and antiparallel to $H$, respectively. By
increasing the magnetic field above $H_{SP}$=2.3 kOe (inset in Fig. 2) we
observed a 20-fold increase of SF scattering at the first Bragg peak
$R^{SF}_{1}$. This SFT is similar to the spin-flop previously observed in the
Pd12 sample, though it takes place at a 1 kOe higher magnetic field.
Figure 2: Polarized neutron reflectivities of sample Pd10 measured at $T=10$
K in a magnetic field $H=1$ kOe (symbols), and the SF curve at $T$=10 K,
$H$=3 kOe (solid line). The inset shows the field dependence of the intensity
of SF scattering at the first Bragg peak, $R^{SF}_{1}(H)$. The vertical arrow
denotes the magnetic field at which the spin-flop transition takes place.
By measuring a family of $R^{SF}_{1}(H)$ scans at different temperatures we
were able to construct the noncollinear magnetic phase diagram for the sample
Pd10 in $H$-$T$ coordinates (Fig. 3a). For this sample we observe a collinear
AF state in the temperature range up to 30 K in magnetic fields not exceeding
2 kOe. Above this field, the collinear AF state is replaced by a NC spin-flop
state. Increasing the temperature to 60 K leads to a gradual shift of the SFT
field towards lower values. Finally, above 60 K, the spin-flip signal
disappears due to the absence of magnetic ordering in the Gd layer. Fig. 3b
and Fig. 3c show similar phase diagrams for the Pd12 and Pd14 samples. One can
see that the transition field $H_{SP}$ decreases with increasing $t$. For the
samples with $t$=1.6 nm (not shown) we did not observe any detectable SF
signal, evidencing the absence of coupling between the Fe and Gd layers.
Figure 3: (a)-(c) Experimental ($H$,$T$) maps of $R^{SF}_{1}$ for samples with
different Pd spacers. (d) Simulated map for the Pd10 sample. (e) Fitted
$J_{1}$ and $J_{2}$ terms vs temperature for the Pd10 sample. (f) Thickness
dependence of the bilinear and biquadratic energies $J_{1}$ and $J_{2}$
obtained at $T$=10 K.
To describe the magnetic state of our systems we applied an extended
Stoner-Wohlfarth model widely used for the description of magnetic multilayers
[38, 8]. The density of magnetic energy of one Fe/Gd unit cell can be written
as
$E(\alpha_{Gd},\alpha_{Fe})=-H[m_{Gd}\cos(\alpha_{Gd})+m_{Fe}\cos(\alpha_{Fe})]+J_{1}\cos(\alpha_{Gd}-\alpha_{Fe})+J_{2}\cos^{2}(\alpha_{Gd}-\alpha_{Fe}).$ (1)
In Eq. (1) $m_{X}=M_{X}d_{X}$ is the product of the magnetization and
thickness (the magnetic moment) of layer $X$ ($X$=Fe, Gd), and $\alpha_{X}$ is
the angle between its magnetization and $H$. The first term in (1) is the
Zeeman coupling, which tends to align the magnetic moments of the layers along
the external field. The second term is the bilinear antiferromagnetic exchange
coupling of the Fe and Gd layers with strength parameter $J_{1}$. The third
term describes the biquadratic coupling tending to align the magnetic moments
non-collinearly. As seen from (1), in the case $J_{2}$=0 the transition field
can be estimated as $H_{SP}\approx J_{1}|m_{Gd}-m_{Fe}|/(m_{Gd}m_{Fe})$.
For every magnetic field $H$ the magnetic configuration of the system as a
function of $J_{1,2}$ can be obtained by minimizing the energy (1) with
respect to the angles $\alpha_{Gd}$ and $\alpha_{Fe}$. The magnetization
amplitudes $M_{Gd,Fe}$ and thicknesses $d_{Gd,Fe}$ were taken from the PNR and
SQUID data and fixed during the calculations. The angles
$\alpha^{\prime}_{Gd}$ and $\alpha^{\prime}_{Fe}$ corresponding to the minimum
of the energy for a given set of $H$ and $J_{1,2}$ are used to construct the
theoretical SF reflectivity at the first Bragg peak in the Born approximation:
$R^{SF}_{1,th}=c\left[m_{Gd,\bot}^{2}+m_{Fe,\bot}^{2}+2m_{Gd,\bot}m_{Fe,\bot}\cos\frac{d_{Fe}}{d_{Fe}+d_{Gd}}\right]+R_{bg},$ (2)
where $m_{Gd(Fe),\bot}=m_{Gd(Fe)}\sin\alpha^{\prime}_{Gd(Fe)}$ is the
non-collinear component of the magnetic moment of the Gd (Fe) layer, $c$ is a
scaling constant and $R_{bg}$ is the background intensity. The latter two
values were adjusted manually before the fit. We then fitted the theoretical
$R^{SF}_{1,th}$ to the experimental $H$-dependencies of $R^{SF}_{1}$ by
varying $J_{1}$ and $J_{2}$. The procedure was repeated for every $T$, so that
for every sample we obtained the temperature dependencies of $J_{1,2}$. Fig.
3d shows the results of such a fit for sample Pd10. It is rather remarkable
that, despite the simplicity of the Stoner-Wohlfarth approach, it reproduces
the experimental features quite well. Fig. 3e shows the fitted $T$-dependence
of the exchange energies $J_{1}$ and $J_{2}$ for the Pd10 sample. It can be
seen that the bilinear term has the predominant contribution, which gradually
decreases with decreasing temperature. Thus our analysis showed that for a
qualitative description of the SFT a bilinear term is sufficient, but
quantitatively the data are described better by including an additional
biquadratic term.
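For completeness, Eq. (2) translates directly into code (again a sketch with
hypothetical names; combined with the minimization above it yields theoretical
$R^{SF}_{1}(H)$ curves to be fitted to the measured ones by varying $J_{1}$
and $J_{2}$):

```python
import numpy as np

def r_sf_1(alpha_gd, alpha_fe, m_gd, m_fe, d_gd, d_fe, c, r_bg):
    """Theoretical SF reflectivity at the first Bragg peak, Eq. (2)."""
    m_gd_perp = m_gd * np.sin(alpha_gd)   # noncollinear Gd moment
    m_fe_perp = m_fe * np.sin(alpha_fe)   # noncollinear Fe moment
    cross = np.cos(d_fe / (d_fe + d_gd))  # cross-term factor of Eq. (2)
    return c * (m_gd_perp**2 + m_fe_perp**2
                + 2.0 * m_gd_perp * m_fe_perp * cross) + r_bg
```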
The data for the other samples were fitted in a similar way. Fig. 3f shows the
dependence of the coupling energies on the thickness of the Pd spacer. As
follows from the figure, the bilinear energy decreases almost linearly from
1.5 erg/cm2 at $t$=1 nm to 0 at $t$=1.6 nm. The biquadratic energy, in turn,
increases with $t$. The obtained values are of the same order as $J_{1}\sim$
0.8 erg/cm2 and $J_{2}\sim$ 0.2 erg/cm2 obtained in Ref. [39] for Gd/Pt/Co
multilayers at $T$=10 K.
The decrease of the bilinear component with increasing $t$ can naturally be
correlated with a decrease in the effective concentration of Gd in the GdPd
layer. At the same time, structural studies carried out earlier [37] indicate
an increase in structural inhomogeneities with increasing $t$. It seems
prudent to correlate this growth with the increase in the biquadratic
component.
In conclusion, using PNR we performed a systematic study of the magnetic
configuration of [Fe(3.5nm)/Pd(t)/Gd(5.0nm)/Pd(t)]12 heterostructures with
t=1.0-1.6 nm. By measuring neutron spin-flip scattering we have detected the
presence of a magnetically non-collinear state at temperatures $T\lesssim$ 50
K in magnetic fields above $H>$500 Oe for the samples with 1 nm$<t<$1.4 nm. By
using an extended Stoner-Wohlfarth model we were able to describe the observed
transition as a competition of the Zeeman energy, a bilinear interaction of
the order of 1 erg/cm2, and a biquadratic contribution of the order of 0.5
erg/cm2. The coupling energies can be tuned by varying the thickness of the
spacer between 1 nm and 1.4 nm, shifting the transition field below the
kilo-Oersted range. Our study opens perspectives for the purposeful design of
artificial FEMs with an adjustable spin-flop transition field. Thus, the FEM
systems with low Curie temperature components studied in this work can be used
in superconducting spintronics for the generation of triplet
superconductivity. An additional advantage here is the good compatibility of
gadolinium with superconducting niobium [40, 41]. For room-temperature
applications one can use well-studied synthetic AFs such as Fe/Cr [34, 33,
35], Fe/V [42, 43] or Co/Cu [44, 45], where subsequent adjustment can be
carried out by tuning the coupling energy and the imbalance of the magnetic
moments of the sub-lattices.
We would like to thank M.A. Milyaev for assistance in the preparation of the
samples, and A.B. Drovosekov and D.I. Kholin for fruitful discussions of the
results. This work is partially based on experiments performed at the NREX
instrument operated by the Max Planck Society at the Heinz Maier-Leibnitz
Zentrum (MLZ), Garching, Germany, and supported by the Deutsche
Forschungsgemeinschaft (Project No. 107745057-TRR80). Research in Ekaterinburg
was performed within the state assignment of Minobrnauki of Russia (theme
“Spin” No. AAAA-A18-118020290104-2) and was partly supported by the Russian
Foundation for Basic Research (Project No. 19-02-00674).
## References
* MacDonald and Tsoi [2011] A. H. MacDonald and M. Tsoi, Antiferromagnetic metal spintronics, Philos. Trans. R. Soc. A 369, 3098 (2011).
# Effective potential of a spinning heavy symmetric top when magnitudes of
conserved angular momenta are not equal
V. Tanrıverdi
###### Abstract
The effective potential of a spinning heavy symmetric top is studied when the magnitudes of the conserved angular momenta are not equal to each other, and the dependence of the effective potential on the conserved angular momenta is analyzed. This study shows that the minimum of the effective potential approaches a constant derived from the conserved angular momenta when one of the conserved angular momenta is greater than the other one, and that it goes to infinity when the other one is greater. It also shows that the strong/weak top separation does not work adequately in all cases.
## 1 Introduction
The motion of a symmetric top can be studied by using either a cubic function or an effective potential. The cubic function is mostly used in works that utilize geometric techniques [1, 2, 3, 4, 5, 6], and the effective potential is mostly used in works considering physical parameters [7, 8, 9, 10, 11]. In some other works, both the cubic function and the effective potential are used [12, 13, 14, 15, 16, 17, 18].
The effective potential shows different characteristics depending on whether one of the conserved angular momenta is greater than, equal to, or smaller than the other one. Different aspects of the effective potential can be found in the literature for the case in which the magnitudes of the conserved angular momenta are equal to each other [7, 19]. However, the case of unequal magnitudes has not been studied, except in Greiner's work, which does not cover the different possibilities related to the conserved angular momenta and the minimum of the effective potential [17]. Studying this topic helps in understanding the motion of a spinning heavy symmetric top, and in this work we study this case together with the relation between the minimum of the effective potential and a constant derived from the parameters of the gyroscope and the conserved angular momenta.
In section 2, we will give a quick overview of the constants of motion and the effective potential. In section 3, we will study the effective potential when the magnitudes of the conserved angular momenta are not equal to each other. Then, we will give a conclusion. In the appendix, we will compare the cubic function with the effective potential.
## 2 Constants of motion and effective potential
For a spinning heavy symmetric top, the Lagrangian is [12]
$L=T-U=\frac{I_{x}}{2}(\dot{\theta}^{2}+\dot{\phi}^{2}\sin^{2}\theta)+\frac{I_{z}}{2}(\dot{\psi}+\dot{\phi}\cos\theta)^{2}-Mgl\cos\theta,$ (1)
where $M$ is the mass of the symmetric top, $l$ is the distance from the
center of mass to the fixed point, $I_{x}=I_{y}$ and $I_{z}$ are moments of
inertia, $g$ is the gravitational acceleration, $\theta$ is the angle between
the stationary $z^{\prime}$-axis and the body $z$-axis, $\dot{\psi}$ is the
spin angular velocity, $\dot{\phi}$ is the precession angular velocity and
$\dot{\theta}$ is the nutation angular velocity. The domain of $\theta$ is
$[0,\pi]$. For a spinning symmetric top on the ground $\theta$ should be
smaller than $\pi/2$, and if $\theta>\pi/2$, then the spinning top is
suspended from the fixed point.
There are two conserved angular momenta which can be obtained from the Lagrangian, and one can define two constants $a$ and $b$ by using these conserved angular momenta as [12]
$a=\frac{I_{z}}{I_{x}}(\dot{\psi}+\dot{\phi}\cos\theta),$ (2)
$b=\dot{\phi}\sin^{2}\theta+a\cos\theta,$ (3)
where $a=L_{z}/I_{x}$ and $b=L_{z^{\prime}}/I_{x}$. Here, $L_{z}$ and
$L_{z^{\prime}}$ are conserved angular momenta in the body $z$ direction and
stationary $z^{\prime}$ direction, respectively.
One can define a constant from the energy as
$E^{\prime}=\frac{I_{x}}{2}\dot{\theta}^{2}+\frac{I_{x}}{2}\dot{\phi}^{2}\sin^{2}\theta+Mgl\cos\theta,$ (4)
and its relation with the energy is $E^{\prime}=E-I_{x}^{2}a^{2}/(2I_{z})$.
By using the change of variable $u=\cos\theta$, one can obtain the cubic function from (4) as [12]
$f(u)=(\alpha-\beta u)(1-u^{2})-(b-au)^{2},$ (5)
which is equal to $\dot{u}^{2}$, where $\alpha=2E^{\prime}/I_{x}$ and $\beta=2Mgl/I_{x}$. This cubic function can be used to find the turning angles.
From $E^{\prime}=I_{x}\dot{\theta}^{2}/2+U_{eff}$ [9], it is possible to define an effective potential
$U_{eff}(\theta)=\frac{I_{x}}{2}\frac{(b-a\cos\theta)^{2}}{\sin^{2}\theta}+Mgl\cos\theta.$ (6)
By using the derivative of $U_{eff}$ with respect to $\theta$,
$\frac{dU_{eff}(\theta)}{d\theta}=\frac{I_{x}}{\sin^{3}\theta}\left[(b-a\cos\theta)(a-b\cos\theta)-\frac{Mgl}{I_{x}}\sin^{4}\theta\right],$ (7)
it is possible to find the minimum of $U_{eff}$. The factor $\sin\theta$ vanishes when $\theta$ is equal to $0$ or $\pi$, and the effective potential goes to infinity at these angles. The root of equation (7) lies between $0$ and $\pi$; it will be designated by $\theta_{r}$, it gives the minimum of the effective potential, and it can be found numerically. The form of the effective potential is therefore like a well. The general structure of $U_{eff}$ together with $E^{\prime}$ can be seen in figure 1.
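Since equation (7) generally has no closed-form root, $\theta_{r}$ is found numerically in practice. The following is a minimal sketch in Python; the parameter values are illustrative assumptions, chosen close to the constants quoted in section 3:

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative parameters (assumed for this sketch)
Mgl, Ix = 0.068, 0.000228   # J, kg m^2
a, b = 30.0, 15.0           # conserved angular momenta divided by Ix, rad/s

def U_eff(theta):
    """Effective potential, equation (6)."""
    return 0.5 * Ix * (b - a * np.cos(theta))**2 / np.sin(theta)**2 \
           + Mgl * np.cos(theta)

def dU_eff(theta):
    """Derivative of the effective potential, equation (7)."""
    s, c = np.sin(theta), np.cos(theta)
    return Ix / s**3 * ((b - a * c) * (a - b * c) - (Mgl / Ix) * s**4)

# For |a| > |b| and Mgl > 0 the root lies in (arccos(b/a), pi); see section 3.1.
theta_r = brentq(dU_eff, np.arccos(b / a) + 1e-9, np.pi - 1e-9)
print(f"theta_r = {theta_r:.4f} rad, U_eff_min = {U_eff(theta_r):.4e} J")
```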
Figure 1: General structure of $U_{eff}(\theta)$ and $E^{\prime}$. $\theta_{min}$ and $\theta_{max}$ show the turning angles, and $\theta_{r}$ represents the angle where the minimum of $U_{eff}$ occurs. The continuous (red) curve shows $U_{eff}$, the dashed (blue) line shows $E^{\prime}$ and the horizontal continuous (black) line shows the minimum of $U_{eff}$.
By using equation (7), one can write [12]
$\dot{\phi}^{2}\cos\theta-\dot{\phi}a+\frac{Mgl}{I_{x}}=0.$ (8)
The root of this equation can also be used to obtain the minimum of $U_{eff}$.
By using the discriminant of this equation, one can define a parameter $\tilde{a}=\sqrt{4Mgl/I_{x}}$ to discriminate between the "strong top" (or fast top), where $a>\tilde{a}$, and the "weak top" (or slow top), where $a<\tilde{a}$ [20, 21].
The position of the minimum and the shape of $U_{eff}$ can be helpful in understanding the motion. If $E^{\prime}$ is equal to the minimum of $U_{eff}$, then regular precession is observed. If $E^{\prime}$ is greater than the minimum of $U_{eff}$, as in figure 1, the intersection points of $E^{\prime}$ and $U_{eff}$ give the turning angles, and the symmetric top nutates between these two angles periodically. There can be different types of motion, and some of them can be determined by using the relations between $E^{\prime}$ and $Mglb/a$ and between $a$ and $b$ when $|a|\neq|b|$ [21].
## 3 Effective potential
The relation between $a$ and $b$ affects the effective potential. There are three possible relations between $a$ and $b$: $|a|>|b|$, $|a|<|b|$ and $|a|=|b|$. We will consider the first two possibilities, $|a|>|b|$ and $|a|<|b|$, since the third one, $|a|=|b|$, has been studied previously [7, 19]. For the examples, the following constants will be used: $Mgl=0.068\,J$, $I_{x}=0.000228\,kg\,m^{2}$ and $I_{z}=0.0000572\,kg\,m^{2}$.
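For these constants, the separation parameter defined in section 2 can be evaluated directly; a short check in Python, using only the constants quoted above:

```python
import math

Mgl, Ix = 0.068, 0.000228          # J, kg m^2, constants given above
a_tilde = math.sqrt(4 * Mgl / Ix)  # separation parameter from equation (8)
print(f"a_tilde = {a_tilde:.1f} rad/s")  # approximately 34.5 rad/s
```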
### 3.1 Effective potential when $|a|>|b|$
In this section, we will study the case when $|a|>|b|$. After factoring, equation (7) can be written as
$\frac{dU_{eff}(\theta)}{d\theta}=\frac{a^{2}I_{x}}{\sin^{3}\theta}\left[\left(\frac{b}{a}-\cos\theta\right)\left(1-\frac{b}{a}\cos\theta\right)-\frac{Mgl}{I_{x}a^{2}}\sin^{4}\theta\right].$ (9)
The angle that makes the expression in the square brackets zero gives the minimum of the effective potential. If $|a|>|b|$, the second term in the brackets is always negative, and the factor $1-(b/a)\cos\theta$ is always positive; hence $b/a-\cos\theta$ should be positive at the root. Therefore, the inclination angle should satisfy $\pi>\theta>\arccos(b/a)$. In the limit where $a$ goes to infinity, $\theta_{r}$ goes to $\arccos(b/a)$. In the limit where $a$ goes to zero, $b$ should also go to zero since $|a|>|b|$; then the first term goes to zero (see equation (7)), and the second term should also go to zero at the root, which is possible only when $\theta_{r}$ goes to $\pi$. If $a$ and $b$ are both negative or both positive, $\theta_{r}$ is between $\pi/2$ and $\pi$ when $|a|$ is close to zero, and it is between $0$ and $\pi/2$ when $|a|$ and $|b|$ are large enough. If only one of them is negative, then $\theta_{r}$ is always greater than $\pi/2$.
When $b=0$, $\theta_{r}$ goes to $\pi/2$ in the limit where $|a|$ goes to infinity, while the $a\to 0$ limit does not change and remains $\pi$.
These show that $\theta_{r}\in(\arccos(b/a),\pi)$. If $b/a$ goes to $1$, then $\arccos(b/a)$ goes to $0$. Therefore, $\theta_{r}$ can take values between $0$ and $\pi$ depending on the signs of $a$ and $b$, the ratio $b/a$ and the magnitudes of $a$ and $b$.
Now we consider the change of $U_{eff_{min}}$ when $|a|>|b|$. We have seen that as $|a|$ goes to zero, $\theta_{r}$ goes to $\pi$; it can then be seen from equation (6) that $U_{eff_{min}}$ goes to $-Mgl$. As $|a|$ goes to infinity, $\theta_{r}$ goes to $\arccos(b/a)$ and $U_{eff_{min}}$ goes to $Mglb/a$ from below. Hence, $Mglb/a$ is always greater than $U_{eff_{min}}$ when $|a|>|b|$.
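These limits can be checked numerically. Below is a short sketch, assuming the constants given above and the fixed ratio $b/a=0.5$ used in figure 2, which sweeps $a$ and reports $\theta_{r}$ and $U_{eff_{min}}$:

```python
import numpy as np
from scipy.optimize import brentq

Mgl, Ix = 0.068, 0.000228  # J, kg m^2

def theta_r_and_min(a, b):
    """Root of equation (7) in (arccos(b/a), pi) and U_eff at that root."""
    dU = lambda th: (b - a * np.cos(th)) * (a - b * np.cos(th)) \
                    - (Mgl / Ix) * np.sin(th)**4   # bracketed part of eq. (7)
    th = brentq(dU, np.arccos(b / a) + 1e-9, np.pi - 1e-9)
    Umin = 0.5 * Ix * (b - a * np.cos(th))**2 / np.sin(th)**2 + Mgl * np.cos(th)
    return th, Umin

for a in (1.0, 10.0, 30.0, 60.0, 300.0):   # rad/s, with b/a = 0.5
    th, Umin = theta_r_and_min(a, 0.5 * a)
    print(f"a={a:6.1f}: theta_r={th:.3f} rad, U_min={Umin:+.4f} J, "
          f"Mgl*b/a={0.5 * Mgl:+.4f} J")
```

As $a$ decreases, the printed $\theta_{r}$ approaches $\pi$ and $U_{eff_{min}}$ approaches $-Mgl$, while for large $a$ the minimum approaches $Mglb/a$ from below.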
Figure 2: $U_{eff}$, change of $\theta_{r}$ with respect to $a$ and change of $U_{eff_{min}}$ with respect to $a$. a) Three different effective potentials: $a=10\,rad\,s^{-1}$ (green dashed-dotted curve), $a=30\,rad\,s^{-1}$ (blue dashed curve) and $a=60\,rad\,s^{-1}$ (red continuous curve), all satisfying $b/a=0.5$. The black line shows $Mglb/a$. b) Change of $\theta_{r}$ with respect to $a$ for the constant ratio $b/a=0.5$ (red curve). The black line shows $\arccos(b/a)=1.05\,rad$, and the vertical dotted line shows the position of $\tilde{a}$. c) Change of $U_{eff_{min}}$ with respect to $a$ for the constant ratio $b/a=0.5$ (red curve). The black line shows $Mglb/a$, and the vertical dotted line shows the position of $\tilde{a}$.
As an example, we consider a constant ratio between $a$ and $b$: $b/a=0.5$. In figure 2(a), three different effective potentials for three different $a$ values are shown together with $Mglb/a$. It can be seen that the form of $U_{eff}$, the magnitude of its minimum and $\theta_{r}$ all change as $a$ changes. In figure 2(b), it can be seen that $\theta_{r}$ takes values very close to $\pi$ for very small values of $a$ and goes to $\arccos 0.5=1.05\,rad$ as $a$ increases. In figure 2(c), it can be seen that the minimum of $U_{eff}$ takes values very close to $-Mgl$ when $a$ is small, and it goes to $Mglb/a$ as $a$ goes to infinity. These observations are consistent with the considerations above.
There appears to be a shift in the behaviour of $\theta_{r}$ and $U_{eff_{min}}$ near $a=\tilde{a}$. However, this shift is not sudden, and one can say that the use of $\tilde{a}$ gives only an approximate separation when $|a|>|b|$.
In some cases, $Mgl$ can be negative, and the effective potential then differs in some respects. When $Mgl$ is negative, the second term in equation (9) becomes positive, and then $\arccos(b/a)>\theta>0$ at the root. In the limit where $a$ goes to infinity, $\theta_{r}$ again goes to $\arccos(b/a)$. In the limit where $a$ goes to zero, $\theta_{r}$ goes to $0$. These show that the interval for the minimum of the effective potential changes from $(\arccos(b/a),\pi)$ to $(0,\arccos(b/a))$ when $Mgl$ changes sign from positive to negative. If $a$ and $b$ are both negative or both positive, $\theta_{r}$ is between $0$ and $\pi/2$. If only one of them is negative, then $\theta_{r}$ can be greater than $\pi/2$ when $|a|$ is large enough. When $Mgl$ is negative, the minimum of $U_{eff}$ goes to $-|Mgl|$ as $a$ goes to $0$, and it goes to $-|Mgl|b/a$ as $a$ goes to infinity.
### 3.2 Effective potential when $|b|>|a|$
In this section, we will study the case when $|b|>|a|$. After factoring in another way, equation (7) can be written as
$\frac{dU_{eff}(\theta)}{d\theta}=\frac{b^{2}I_{x}}{\sin^{3}\theta}\left[\left(1-\frac{a}{b}\cos\theta\right)\left(\frac{a}{b}-\cos\theta\right)-\frac{Mgl}{I_{x}b^{2}}\sin^{4}\theta\right].$ (10)
Similar to the previous case, the factor $1-(a/b)\cos\theta$ is always positive, so $a/b-\cos\theta$ should be positive at the root when $|b|>|a|$, and then $\pi>\theta>\arccos(a/b)$. In the limit where $|b|$ goes to infinity, the second term in the brackets goes to zero, so $\theta_{r}$ goes to $\arccos(a/b)$. In the limit where $b$ goes to zero, $\theta_{r}$ goes to $\pi$, which can be seen from equation (7) as in the previous section. Thus, $\theta_{r}$ goes to $\pi$ as $b$ goes to zero and to $\arccos(a/b)$ as $|b|$ goes to infinity.
When $a$ and $b$ are both positive or both negative, $\theta_{r}$ decreases from $\pi$ to $\arccos(a/b)<\pi/2$ as $|b|$ increases from zero to infinity. If only one of them is positive, then $\theta_{r}$ is always greater than $\pi/2$ and shows a similar decrease. When $a=0$, $\theta_{r}$ goes to $\pi/2$ as $|b|$ goes to infinity and to $\pi$ as $|b|$ goes to $0$. Similar to the previous case, $\theta_{r}$ can take values between $0$ and $\pi$ depending on the signs of $a$ and $b$, the ratio $a/b$ and the magnitudes of $a$ and $b$.
The minimum of $U_{eff}$ also changes with $b$. In the limit where $b$ goes to zero, $U_{eff_{min}}$ goes to $-Mgl$ since $\theta_{r}$ goes to $\pi$. In the limit where $|b|$ goes to infinity, $\theta_{r}$ goes to $\arccos(a/b)$, and the minimum of $U_{eff}$ goes to infinity as $I_{x}b^{2}(1-(a/b)^{2})/2$.
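This growth can be checked numerically in the same sketch style as before, assuming the constants given above and the ratio $a/b=0.5$ of figure 3:

```python
import numpy as np
from scipy.optimize import brentq

Mgl, Ix = 0.068, 0.000228  # J, kg m^2

def theta_r_and_min(a, b):
    """Root of equation (7) in (arccos(a/b), pi) and U_eff at that root."""
    dU = lambda th: (b - a * np.cos(th)) * (a - b * np.cos(th)) \
                    - (Mgl / Ix) * np.sin(th)**4
    th = brentq(dU, np.arccos(a / b) + 1e-9, np.pi - 1e-9)
    Umin = 0.5 * Ix * (b - a * np.cos(th))**2 / np.sin(th)**2 + Mgl * np.cos(th)
    return th, Umin

for b in (10.0, 30.0, 60.0, 300.0):        # rad/s, with a/b = 0.5
    th, Umin = theta_r_and_min(0.5 * b, b)
    asym = 0.5 * Ix * b**2 * (1 - 0.25)    # I_x b^2 (1 - (a/b)^2) / 2
    print(f"b={b:6.1f}: theta_r={th:.3f} rad, U_min={Umin:+.4f} J, "
          f"asymptote={asym:+.4f} J")
```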
Figure 3: $U_{eff}$, change of $\theta_{r}$ with respect to $b$ and change of $U_{eff_{min}}$ with respect to $b$. a) Three different effective potentials: $b=10\,rad\,s^{-1}$ (green dashed-dotted curve), $b=30\,rad\,s^{-1}$ (blue dashed curve) and $b=60\,rad\,s^{-1}$ (red continuous curve), all with $a/b=0.5$. The black line shows $Mglb/a$. b) Change of $\theta_{r}$ with respect to $b$ for the constant ratio $a/b=0.5$ (red curve). The black line shows $\arccos(a/b)=1.05\,rad$, and the vertical dotted line shows the position of $b=2\tilde{a}$. c) Change of $U_{eff_{min}}$ with respect to $b$ for the constant ratio $a/b=0.5$ (red curve). The black line shows $Mglb/a$, the vertical dotted line shows the position of $b=2\tilde{a}$, and the dotted curve shows $I_{x}b^{2}(1-(a/b)^{2})/2$.
For the examples, similar to the previous case, a constant ratio between $a$ and $b$ is considered, this time $a/b=0.5$. In figure 3(a), three different effective potentials for three different $b$ values are shown, analogous to the previous section. This figure shows both similarities to and differences from figure 2(a). One can see that $\theta_{r}$ again differs for different $b$ values. As $b$ takes different values, the form of $U_{eff}$ and the magnitude of its minimum change, similar to the previous case, but the minimum can now be greater than $Mglb/a$, unlike the previous case. In figure 3(b), it can be seen that for very small values of $b$, $\theta_{r}$ is close to $\pi$, and it goes to $\arccos 0.5=1.05\,rad$ as $b$ increases. In figure 3(c), it can be seen that the minimum of $U_{eff}$ is close to $-Mgl$ when $b$ is small, and it goes to infinity as $I_{x}b^{2}(1-(a/b)^{2})/2$ as $b$ goes to infinity. These are the results expected from the explanations given above.
Considering these results, it can be said that $Mglb/a$ is not as important here as in the $|a|>|b|$ case. From figures 3(b) and 3(c), one can see that the shift in the behaviour of $\theta_{r}$ and $U_{eff_{min}}$ does not take place around $a=\tilde{a}$, so the use of $\tilde{a}$ for the separation is not suitable when $|b|>|a|$.
When $Mgl$ is negative, the second term in equation (10) becomes positive, and then $a/b-\cos\theta$ should be negative, which is possible when $\arccos(a/b)>\theta>0$. In the limit where $b$ goes to infinity, $\theta_{r}$ again goes to $\arccos(a/b)$. In the limit where $b$ goes to zero, $\theta_{r}$ goes to $0$. Similar to the previous case, the interval for the minimum of the effective potential changes from $(\arccos(a/b),\pi)$ to $(0,\arccos(a/b))$. If $a$ and $b$ are both negative or both positive, $\theta_{r}$ is between $0$ and $\pi/2$. If only one of them is negative, then $\theta_{r}$ can be greater than $\pi/2$ as $|b|$ goes to infinity, and $\theta_{r}$ goes to $0$ as $b$ goes to zero. When $a=0$, $\theta_{r}$ goes to $\pi/2$ in the limit where $|b|$ goes to infinity, while the $|b|\to 0$ limit does not change and remains $0$. If $Mgl$ is negative, the minimum of $U_{eff}$ goes to $-|Mgl|$ as $b$ goes to $0$, and it goes to infinity as $|b|$ goes to infinity.
## 4 Conclusion
The effective potential can be helpful in understanding the motion of a symmetric top in different ways. $E^{\prime}$ should be equal to or greater than the minimum of $U_{eff}$ for physical motions. By using the limits given in section 3, one can say that regular precession takes place at greater angles when $a$ and $b$ are small and at smaller angles as $a$ and $b$ increase. To observe regular precession at angles smaller than $\pi/2$, $a$ and $b$ should have the same sign and sufficiently large magnitudes. The limiting angle as $|a|$ or $|b|$ goes to infinity is given by the inverse cosine of $b/a$ when $|a|>|b|$ and of $a/b$ when $|b|>|a|$. If $E^{\prime}$ is greater than the minimum of $U_{eff}$, then different types of motion can be seen [21]. These motions take place at angles closer to $\theta_{r}$ when $E^{\prime}$ is close to the minimum of $U_{eff}$, and by considering the signs and magnitudes of $a$ and $b$ one can estimate the angles at which the motion takes place.
If $a$ and/or $b$ are small, then there can be a high asymmetry in the form of $U_{eff}$. From the definitions of $U_{eff}$ and $E^{\prime}$, one can say that $\dot{\theta}^{2}$ is proportional to the difference $E^{\prime}-U_{eff}(\theta)$ for a specific $\theta$ value. Therefore, when $a$ and/or $b$ are small, the change in $\dot{\theta}$ is gradual as $\theta$ increases from $\theta_{min}$ to $\theta_{r}$, and more rapid as $\theta$ increases from $\theta_{r}$ to $\theta_{max}$. As $\theta$ changes from $\theta_{max}$ back to $\theta_{min}$, this change in $\dot{\theta}$ is firstly rapid and then gradual.
If $a$ and $b$ are large enough and the difference $E^{\prime}-U_{eff_{min}}$ is small enough, then the asymmetry in $U_{eff}$ can be ignored. In such cases, one can make an approximation and find an exact solution for it [12, 13]. This approximation works better when the asymmetry in $U_{eff}$ is smallest.
We have seen that the comparison of $|a|$ with $\tilde{a}$ can be used for an approximate separation when $|a|>|b|$, but it is not suitable when $|b|>|a|$. A comparison between $|b|$ and $\tilde{a}$ can be used when $|b|>|a|$; if it is used, however, one should adopt a naming other than "strong top" or "weak top". We note that the comparison of $|a|$ with $\tilde{a}$ is very useful when $|a|=|b|$ [19].
Another thing that should be taken into account is the relation between
$Mglb/a$ and $E^{\prime}$ [21]. This study has shown that the minimum of
$U_{eff}$ is always smaller than $Mglb/a$ when $|a|>|b|$, which shows that one
can always observe all possible motions when $|a|>|b|$. On the other hand,
$Mglb/a$ can be greater than or smaller than the minimum of $U_{eff}$ when
$|b|>|a|$.
These results show that effective potential has different advantages over the
cubic function in understanding the motion of a spinning heavy symmetric top.
However, the cubic function is still important since it is better for proofs.
## 5 Appendix
There is an alternative to the effective potential: the cubic function given in equation (5).
Here we compare the cubic function with the effective potential. The cubic function is equal to $\dot{u}^{2}$, and its roots give the points where $\dot{u}=0$. $\dot{\theta}$ is equal to zero at two of these three points, and the third root is irrelevant to the turning angles. One can therefore use the cubic function to obtain the turning angles. If these two roots coincide, i.e. form a double root, this case gives regular precession. The turning angles can also be obtained from the effective potential by solving $E^{\prime}=U_{eff}(\theta)$, and if $E^{\prime}=U_{eff_{min}}$ then regular precession is observed, as explained above.
On the other hand, there is no correspondence between the minimum of $U_{eff}$ and the maximum of $f(u)$. The reason is the multiplication by $1-u^{2}$ during the change of variable. Hence, $f(u)$ cannot be used for further analyses of the kind given above for $U_{eff}$.
As an example, we consider a case with $E^{\prime}=-0.0150\,J$, i.e. $\alpha=2E^{\prime}/I_{x}=-131.6\,s^{-2}$, $a=10\,rad\,s^{-1}$ and $b=2\,rad\,s^{-1}$. For the symmetric top with the previously given parameters, $\beta$ becomes $596.5\,s^{-2}$. $U_{eff}$ and $f(u)$ can be seen in figure 4. One can see that $\theta_{min}=1.83\,rad$ and $\theta_{max}=2.57\,rad$ are obtained from $\arccos(u_{2})=1.83\,rad$ and $\arccos(u_{1})=2.57\,rad$, respectively. On the other hand, $\theta_{r}=2.28\,rad$ cannot be obtained from $\arccos(u_{m})=2.18\,rad$.
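This comparison can be reproduced numerically. The sketch below assumes only the parameter values quoted above, with $\alpha=2E^{\prime}/I_{x}$ computed from $E^{\prime}=-0.0150\,J$ of figure 4; it finds the roots of $f(u)$ and contrasts $\arccos(u_{m})$ with the root $\theta_{r}$ of equation (7):

```python
import numpy as np
from scipy.optimize import brentq

Mgl, Ix = 0.068, 0.000228                     # J, kg m^2
a, b = 10.0, 2.0                              # rad/s
Eprime = -0.0150                              # J, as in the caption of figure 4
alpha, beta = 2 * Eprime / Ix, 2 * Mgl / Ix   # definitions below equation (5)

# f(u) = (alpha - beta u)(1 - u^2) - (b - a u)^2, expanded in powers of u
c = [beta, -(alpha + a**2), 2 * a * b - beta, alpha - b**2]
u1, u2, u3 = np.sort(np.roots(c).real)        # u1 < u2 are the physical roots
print("turning angles:", np.arccos(u2), np.arccos(u1))   # ~1.83, ~2.57 rad

# local maximum of f(u): the smaller critical point of the cubic
dc = [3 * c[0], 2 * c[1], c[2]]
um = np.min(np.roots(dc).real)
print("arccos(u_m) =", np.arccos(um))                    # ~2.18 rad

# theta_r from equation (7) does not coincide with arccos(u_m)
dU = lambda th: (b - a * np.cos(th)) * (a - b * np.cos(th)) \
                - (Mgl / Ix) * np.sin(th)**4
print("theta_r =", brentq(dU, np.arccos(b / a) + 1e-9, np.pi - 1e-9))  # ~2.28 rad
```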
Figure 4: $U_{eff}$ and $f(u)$ when $\alpha=-131.6\,s^{-2}$, $\beta=596.5\,s^{-2}$, $a=10\,rad\,s^{-1}$ and $b=2\,rad\,s^{-1}$. a) $U_{eff}$, continuous (red) curve, and $E^{\prime}=-0.0150\,J$, dashed (blue) line; $\theta_{min}=1.83\,rad$, $\theta_{max}=2.57\,rad$, $\theta_{r}=2.28\,rad$ and $U_{eff_{min}}=-0.0299\,J$. b) $f(u)$, continuous (red) curve; $u_{1}=-0.841$, $u_{2}=-0.258$, $u_{3}=1.05$, $u_{m}=-0.575$ and $f_{max}=81.6\,s^{-2}$.
These show that $f(u)$ can be used to obtain the turning angles; however, it cannot be used to obtain $\theta_{r}$, where the minimum of $U_{eff}$ occurs.
## References
* [1] Routh E J 1955 Advanced Dynamics of a System of Rigid Bodies (New York: Dover)
* [2] Scarborough J B 1958 The Gyroscope Theory and Applications (London: Interscience Publishers)
* [3] MacMillan W D 1960 Dynamics Of Rigid Bodies (New York: Dover)
* [4] Arnold R N and Maunder L 1961 Gyrodynamics and Its Engineering Applications (New York: Academic Press)
* [5] Groesberg S W 1968 Advanced mechanics (New York: Wiley)
* [6] Jose J V and Saletan E J 1998 Classical dynamics a contemporary approach (New York: Cambridge University Press)
* [7] Symon K R 1971 Mechanics 3rd Ed (Massachusetts: Addison-Wesley)
* [8] McCauley J L 1997 Classical mechanics transformations, flows, integrable and chaotic dynamics (Cambridge: Cambridge University Press)
* [9] Landau L D and Lifshitz E M 2000 Mechanics 3rd Ed (New Delhi: Butterworth-Heinemann)
* [10] Thornton S T and Marion J B 2004 Classical dynamics of particles and systems 5th Ed (Belmont: Thomson Brooks/Cole)
* [11] Taylor J R 2005 Classical Mechanics (Dulles: University Science Books)
* [12] Goldstein H, Poole C and Safko J 2002 Classical Mechanics 3rd Ed (New York: Addison-Wesley)
* [13] Arnold V I 1989 Mathematical Methods of Classical Mechanics 2nd Ed (New York: Springer-Verlag)
* [14] Corinaldesi E 1998 Classical Mechanics for Physics Graduate Students (Singapore: World Scientific)
* [15] Matzner R A and Shepley L C 1991 Classical Mechanics (New Jersey: Prentice Hall)
* [16] Arya A P 1998 Introduction to classical mechanics (New Jersey: Prentice Hall)
* [17] Greiner W 2003 Classical Mechanics, Systems of particles and Hamiltonian dynamics (New York: Springer)
* [18] Fowles G R and Cassiday G L 2005 Analytical mechanics 7th Ed (Belmont: Thomson Brooks/Cole)
* [19] Tanrıverdi V 2020 Motion of the Gyroscope With Equal Conserved Angular Momenta Eur. J. Phys. 41 025004 https://doi.org/10.1088/1361-6404/ab6415
* [20] Klein F and Sommerfeld A 2010 The theory of the Top, Volume II (New York: Birkhauser)
* [21] Tanrıverdi V 2020 Motion of the heavy symmetric top when magnitudes of conserved angular momenta are different https://arxiv.org/abs/2011.09348
# Linear simultaneous measurements of position and momentum with minimum
error-trade-off in each minimum uncertainty state
Kazuya Okamura
Research Origin for Dressed Photon, 3-13-19 Moriya-cho, Kanagawa-ku, Yokohama, Kanagawa 221-0022, Japan
Graduate School of Informatics, Nagoya University, Chikusa-ku, Nagoya 464-8601, Japan
###### Abstract
So-called quantum limits and their achievement are important themes in physics. Heisenberg's uncertainty relations are the most famous of them, but they are not universally valid and are violated in general. In recent years, the reformulation of uncertainty relations has been actively studied, and several universally valid uncertainty relations have been derived. On the other hand, several measuring models, in particular for spin-1/2 measurements, have been constructed and quantitatively examined. However, there are not so many studies on simultaneous measurements of position and momentum despite their importance. Here we show that an error-trade-off relation (ETR), called the Branciard-Ozawa ETR, for simultaneous measurements of position and momentum gives the achievable bound in minimum uncertainty states. We construct linear simultaneous measurements of position and momentum that achieve the bound of the Branciard-Ozawa ETR in each minimum uncertainty state. To check their performance, we then calculate the probability distributions and the families of posterior states, the sets of states after the measurements, when using them. The results of the paper show the possibility of developing a theory of simultaneous measurements of incompatible observables. In the future, it will be widely applied to quantum information processing.
simultaneous measurement of position and momentum, error-trade-off relation,
the Branciard-Ozawa error-trade-off relation, minimum uncertainty states
## I Introduction
In quantum physics, uncertainty relations and construction of measurement
models are important themes since Heisenberg [1] and von Neumann [2]. In the
last forty years, quantum measurement theory has developed. There has been a
great deal of study of quantum measurement focused on applications to quantum
information technology nowadays. Above all, the theory of uncertainty
relations [3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19], the
central topic of the paper, has advanced dramatically in the last two decades.
Experimental tests of uncertainty relations [20, 21, 22, 23, 24, 25, 26, 27,
28, 29] also have been performed due to the rapid improvement of experimental
techniques in recent years. In the paper, we present linear simultaneous
measurements of position and momentum with minimum error-trade-off in each
minimum uncertainty state. The construction of measurements of observables
with minimum uncertainty in some class of states is significant but there are
few examples. In fact, such measurements are given for spin [22, 12] and
position [30]. Therefore, we believe that the results of the paper are an
important contribution.
Here we consider a one-dimensional nonrelativistic single-particle system
$\mathbf{S}$ whose position $Q_{1}$ and momentum $P_{1}$ are defined as self-
adjoint operators on $\mathcal{H}_{\mathbf{S}}=L^{2}(\mathbb{R})$ and satisfy
the canonical commutation relation $[Q_{1},P_{1}]=i\hbar 1$. A unit vector
$\psi$ in $\mathcal{H}_{\mathbf{S}}$ is called a minimum uncertainty state if
it satisfies $\sigma(Q_{1}\|\psi)\sigma(P_{1}\|\psi)=\hbar/2$. Throughout the
paper, we suppose that the state $\psi$ of $\mathbf{S}$ is a minimum
uncertainty state with $\langle Q_{1}\rangle_{\psi}=q_{1}$, $\langle
P_{1}\rangle_{\psi}=p_{1}$ and $\sigma(Q_{1}\|\psi)=\sigma_{1}$, i.e.,
$\psi(x)=\sqrt[4]{\dfrac{1}{(2\pi)\sigma_{1}^{2}}}e^{-\frac{(x-q_{1})^{2}}{4\sigma_{1}^{2}}+i\frac{p_{1}}{\hbar}x}$
(1)
in the coordinate representation. Minimum uncertainty states appear in
Heisenberg’s original paper [1] and are also called Gaussian wave packets.
In order to define linear simultaneous measurements of $Q_{1}$ and $P_{1}$, we
prepare a probe system $\mathbf{P}$ whose positions $Q_{2},Q_{3}$ and momenta
$P_{2},P_{3}$ are described by self-adjoint operators on
$\mathcal{H}_{\mathbf{P}}=L^{2}(\mathbb{R}^{2})$ and satisfy
$[Q_{2},Q_{3}]=[P_{2},P_{3}]=0$ and $[Q_{j},P_{k}]=i\hbar\delta_{jk}1$ for
$j,k=2,3$, and whose states are described by density operators on
$\mathcal{H}_{\mathbf{P}}$. $\mathbf{P}$ is supposed to be a one-dimensional
nonrelativistic two-particle system or a two-dimensional nonrelativistic
single-particle system. $Q_{2}$ and $P_{3}$ are used as the meters to measure
$Q_{1}$ and $P_{1}$, respectively. In considering linear simultaneous
measurements of position and momentum from now on, we ignore the intrinsic
dynamics of $\mathbf{S}$ and $\mathbf{P}$. Here we adopt the following
interaction Hamiltonian, the measurement interaction between $\mathbf{S}$ and
$\mathbf{P}$:
$H_{int}=K[\alpha_{1}Q_{1}P_{2}+\beta_{1}P_{1}Q_{2}+\gamma_{1}(Q_{1}P_{1}-Q_{2}P_{2})+\alpha_{2}Q_{2}P_{3}+\beta_{2}P_{2}Q_{3}+\gamma_{2}(Q_{2}P_{2}-Q_{3}P_{3})+\alpha_{3}Q_{3}P_{1}+\beta_{3}P_{3}Q_{1}+\gamma_{3}(Q_{3}P_{3}-Q_{1}P_{1})],$ (2)
where $K$ is a positive real number, the coupling constant, and $\alpha_{1}$,
$\alpha_{2}$, $\alpha_{3}$, $\beta_{1}$, $\beta_{2}$, $\beta_{3}$,
$\gamma_{1}$, $\gamma_{2}$ and $\gamma_{3}$ are real numbers. This interaction
is a natural extension of linear measurements given by Ozawa [31] to
simultaneous measurements. His model is exactly solvable and contains both the
error-free linear position measurement [32] and von Neumann’s model [2]. In
particular, the former contributed to the resolution of the dispute on the sensitivity limit of the gravitational wave detector (see also [33, 34, 35, 36, 37]).
We treat an error-trade-off relation (ETR) based on the noise-operator based
q-rms error $\varepsilon(A)$ for each observable $A$. This error is considered
standard and is defined later. For every simultaneous measurement of $Q_{1}$
and $P_{1}$, the errors $\varepsilon(Q_{1})$ of $Q_{1}$ and
$\varepsilon(P_{1})$ of $P_{1}$ in $\psi$ then satisfy
$\varepsilon(Q_{1})^{2}\sigma(P_{1})^{2}+\sigma(Q_{1})^{2}\varepsilon(P_{1})^{2}\geq\hbar^{2}/4,$
(3)
which is a special case of the Branciard-Ozawa ETR. We say that a simultaneous
measurement of $Q_{1}$ and $P_{1}$ has the minimum error-trade-off in $\psi$
if it achieves the lower bound of Eq.(3) in $\psi$, that is to say, it
satisfies
$\varepsilon(Q_{1})^{2}\sigma(P_{1})^{2}+\sigma(Q_{1})^{2}\varepsilon(P_{1})^{2}=\hbar^{2}/4$
(4)
in $\psi$. As suggested by the existence of the error-free linear position
measurements, Heisenberg’s ETR, one of his uncertainty relations,
$\varepsilon(Q_{1})\varepsilon(P_{1})\geq\hbar/2$ (5)
is violated in general. Its violation always occurs when we use linear
simultaneous measurements of $Q_{1}$ and $P_{1}$ with the minimum error-trade-
off in each minimum uncertainty state. A famous example of simultaneous
measurement of position and momentum is the Arthurs-Kelly model (see [38] and
Methods). Since their model is motivated by von Neumann’s model and satisfies
Heisenberg’s ETR, it has been considered plausible. On the other hand, our
discussion is based on the general description of measuring processes in
modern quantum measurement theory. The general theory of quantum measurement
tells us that a broader class of simultaneous measurement models besides the
Arthurs-Kelly model is physically valid. We expect that the models introduced in the paper will become a new, good example.
In Sec. II, the measuring process and the noise-operator based q-rms error are defined, and the linear simultaneous measurement of position and momentum is then introduced. In Sec. III, we first present a theorem that gives a necessary and
sufficient condition for a linear simultaneous measurement of position and
momentum to satisfy Eq. (4) in $\psi$. Next, we give four families of linear
simultaneous measurements of position and momentum which satisfy Eq. (4) in
$\psi$. We then investigate probability distributions and states after the
measurement when using such families of linear simultaneous measurements of
position and momentum. In Sec. IV, the results of the paper are examined. In
Sec. V, we prove the theorem and show a systematic construction of linear
simultaneous measurements of position and momentum which satisfy Eq. (4) in
$\psi$.
Conventions. Let $\mathcal{H}$ be a Hilbert space. For every self-adjoint
operator $X$ on $\mathcal{H}$, $E^{X}$ denotes its spectral measure. Let $n$
be a natural number, and $X_{1},\cdots,X_{n}$ mutually commuting self-adjoint
operators on $\mathcal{H}$, and $\phi$ a unit vector in $\mathcal{H}$. The
expectation value and standard deviation of an observable $X$ in a vector
state $\phi$ are denoted by
$\langle X\rangle=\langle X\rangle_{\phi}=\langle\phi|X\phi\rangle=\langle\phi|X|\phi\rangle,$ (6)
$\sigma(X)=\sigma(X\|\phi)=\langle\phi|(X-\langle X\rangle_{\phi})^{2}\phi\rangle^{\frac{1}{2}},$ (7)
respectively. Then the (joint) probability measure
$\mu_{\phi}^{X_{1},\cdots,X_{n}}$ of $X_{1},\cdots,X_{n}$ in $\phi$ is defined
by
$\mu_{\phi}^{X_{1},\cdots,X_{n}}(I_{1}\times\cdots\times
I_{n})=\langle\phi|E^{X_{1}}(I_{1})\cdots E^{X_{n}}(I_{n})\phi\rangle$ (8)
for all intervals (more generally, all Borel sets) $I_{1},\cdots,I_{n}$ of
$\mathbb{R}$. $p_{\phi}^{X_{1},\cdots,X_{n}}(x_{1},\cdots,x_{n})$ denotes the
probability density function of $\mu_{\phi}^{X_{1},\cdots,X_{n}}$ with respect
to the Lebesgue measure on $\mathbb{R}^{n}$ if it exists. For all linear operators $X$ on $\mathcal{H}$ and $Y$ on $\mathcal{K}$, the linear operators $X\otimes Y$, $X\otimes 1$ and $1\otimes Y$ on $\mathcal{H}\otimes\mathcal{K}$ are abbreviated as $XY$, $X$ and $Y$, respectively.
## II Preliminaries
### II.1 Measuring process
First, we shall define a measuring process for $\mathbf{S}$, which is a
quantum mechanical modeling of the probe part $\mathbf{P}_{0}$ of a measuring
apparatus $\mathbf{A}_{0}$. Let $n$ be a natural number. Here an $(n+3)$-tuple $\mathbb{M}_{0}=(\mathcal{K},\zeta,M_{1},\cdots,M_{n},U)$ is called an $n$-meter measuring process for $\mathbf{S}$ (or for
$\mathcal{H}_{\mathbf{S}}$) if it satisfies the following conditions: $(1)$
$\mathcal{K}$ is a Hilbert space. $(2)$ $\zeta$ is a unit vector of
$\mathcal{K}$, the vector state of $\mathbf{P}_{0}$, $(3)$
$M_{1},\cdots,M_{n}$ are mutually commuting self-adjoint operators on
$\mathcal{K}$ as meters, mutually compatible observables of $\mathbf{P}_{0}$,
$(4)$ $U$ is a unitary operator on
$\mathcal{H}_{\mathbf{S}}\otimes\mathcal{K}$, the measuring interaction between $\mathbf{S}$ and $\mathbf{P}_{0}$, which turns on at time $0$ and turns off at time $\tau$. We then adopt the following notation for every linear
operator $Z$ on $\mathcal{H}_{\mathbf{S}}\otimes\mathcal{K}$:
$Z(0)=Z,\hskip 14.22636ptZ(\tau)=U^{\dagger}ZU.$ (9)
A $2$-meter measuring process $(\mathcal{K},\zeta,M_{1},M_{2},U)$ for
$\mathbf{S}$ is called a simultaneous measurement of position $Q_{1}$ and
momentum $P_{1}$ or a simultaneous $(Q_{1},P_{1})$-measurement if $M_{1}$ and
$M_{2}$ are used to measure $Q_{1}$ and $P_{1}$, respectively.
Let $n$ be a natural number. Let $X_{1},\cdots,X_{n}$ be observables of
$\mathbf{S}$, $\phi$ a vector state of $\mathbf{S}$, and
$\mathbb{M}_{0}=(\mathcal{K},\zeta,M_{1},\cdots,M_{n},U)$ a $n$-meter
measuring process for $\mathbf{S}$. We consider that $X_{1},\cdots,X_{n}$ are
measured in terms of
$\mathbb{M}_{0}=(\mathcal{K},\zeta,M_{1},\cdots,M_{n},U)$, and that
$X_{1},\cdots,X_{n}$ are compared with $M_{1},\cdots,M_{n}$, respectively. The
noise-operator based q-rms error
$\varepsilon(X_{j})=\varepsilon(X_{j},\mathbb{M}_{0},\phi)$ of $X_{j}$ is then
defined by
$\varepsilon(X_{j})=\varepsilon(X_{j},\mathbb{M}_{0},\phi)=\langle
N_{j}^{2}\rangle_{\phi\otimes\zeta}^{\frac{1}{2}}$ (10)
for all $j=1,\cdots,n$, where $N_{j}$ is the noise operator defined by
$N_{j}=N(X_{j},\mathbb{M}_{0})=M_{j}(\tau)-X_{j}(0)$ (11)
for all $j=1,\cdots,n$. The error defined here is applicable to the case where
$X_{j}(0)$ and $M_{j}(\tau)$ do not commute, and is considered standard.
For every simultaneous $(Q_{1},P_{1})$-measurement
$\mathbb{M}_{0}=(\mathcal{K},\zeta,M_{1},M_{2},U)$, Eq. (3) holds in $\psi$
for
$\varepsilon(Q_{1})=\varepsilon(Q_{1},\mathbb{M}_{0},\psi)=\langle(M_{1}(\tau)-Q_{1}(0))^{2}\rangle_{\psi\otimes\zeta}^{\frac{1}{2}},$ (12)
$\varepsilon(P_{1})=\varepsilon(P_{1},\mathbb{M}_{0},\psi)=\langle(M_{2}(\tau)-P_{1}(0))^{2}\rangle_{\psi\otimes\zeta}^{\frac{1}{2}}.$ (13)
### II.2 Linear simultaneous measurement of position and momentum
A $2$-meter measuring process
$\mathbb{M}=(\mathcal{H}_{\mathbf{P}},\xi,Q_{2},P_{3},U(\tau))$ for
$\mathbf{S}$ is called a linear simultaneous measurement of position $Q_{1}$
and momentum $P_{1}$ or a linear simultaneous $(Q_{1},P_{1})$-measurement if
$Q_{2}$ and $P_{3}$ are used to measure $Q_{1}$ and $P_{1}$, respectively,
where $\xi$ is a unit vector of
$\mathcal{H}_{\mathbf{P}}=L^{2}(\mathbb{R}^{2})$ satisfying
$\|Q_{2}^{m_{2}}Q_{3}^{m_{3}}P_{2}^{n_{2}}P_{3}^{n_{3}}\xi\|<+\infty$ for all
non-negative integers $m_{2},m_{3},n_{2},n_{3}$, $\tau(>0)$ is the time the
measurement finishes and $U(t)$ is defined by $U(t)=e^{-itH_{int}/\hbar}$ for
all $t\in\mathbb{R}$. Since we ignore the intrinsic dynamics of $\mathbf{S}$
and $\mathbf{P}$, $K$ contributes only to the time scale of the measurement. For simplicity, we assume $K=1$ in the paper. For every observable $Z$
of $\mathbf{S}+\mathbf{P}$ at time $0$ and $t\in\mathbb{R}$, the same
observable $Z(t)$ at time $t$ is given by
$Z(t)=U(t)^{\dagger}ZU(t)$ (14)
for all $t\in\mathbb{R}$. This is consistent with the earlier notation, Eq. (9).
By solving Heisenberg’s equations of motion, we have
$\left(\begin{array}{c}Q_{1}(t)\\ Q_{2}(t)\\ Q_{3}(t)\end{array}\right)=e^{tR}\left(\begin{array}{c}Q_{1}(0)\\ Q_{2}(0)\\ Q_{3}(0)\end{array}\right),$ (21)
$\left(\begin{array}{c}P_{1}(t)\\ P_{2}(t)\\ P_{3}(t)\end{array}\right)=e^{-tR^{T}}\left(\begin{array}{c}P_{1}(0)\\ P_{2}(0)\\ P_{3}(0)\end{array}\right)$ (28)
for all $t\in\mathbb{R}$, where
$R=\left(\begin{array}{ccc}\gamma_{1}-\gamma_{3}&\beta_{1}&\alpha_{3}\\ \alpha_{1}&\gamma_{2}-\gamma_{1}&\beta_{2}\\ \beta_{3}&\alpha_{2}&\gamma_{3}-\gamma_{2}\end{array}\right)$ (29)
and $R^{T}$ denotes the transpose of $R$. We see that $e^{tR},e^{-tR^{T}}\in
SL(3,\mathbb{R})$ for all $t\in\mathbb{R}$. $e^{\tau R}$ and $e^{-\tau R^{T}}$
are denoted by $A=(a_{ij})$ and $B=(b_{ij})$, respectively. When we use a
linear simultaneous $(Q_{1},P_{1})$-measurement, the noise-operator based
q-rms errors $\varepsilon(Q_{1})$ and $\varepsilon(P_{1})$ have the following
representations:
$\varepsilon(Q_{1})^{2}=\varepsilon(Q_{1},\mathbb{M},\psi)^{2}=(a_{21}-1)^{2}\sigma(Q_{1}\|\psi)^{2}+\sigma(a_{22}Q_{2}+a_{23}Q_{3}\|\xi)^{2}+((a_{21}-1)\langle Q_{1}\rangle_{\psi}+a_{22}\langle Q_{2}\rangle_{\xi}+a_{23}\langle Q_{3}\rangle_{\xi})^{2},$ (30)
$\varepsilon(P_{1})^{2}=\varepsilon(P_{1},\mathbb{M},\psi)^{2}=(b_{31}-1)^{2}\sigma(P_{1}\|\psi)^{2}+\sigma(b_{32}P_{2}+b_{33}P_{3}\|\xi)^{2}+((b_{31}-1)\langle P_{1}\rangle_{\psi}+b_{32}\langle P_{2}\rangle_{\xi}+b_{33}\langle P_{3}\rangle_{\xi})^{2}.$ (31)
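These formulas can be evaluated numerically once $R$, $\tau$ and the probe data are specified. A minimal sketch follows; the probe means and covariance matrices are assumed inputs supplied by the user:

```python
import numpy as np
from scipy.linalg import expm

def qrms_errors(R, tau, sQ1, sP1, mQ1, mP1, mQ, mP, covQ, covP):
    """q-rms errors of eqs. (30)-(31) for a linear simultaneous
    (Q1,P1)-measurement.  mQ=(<Q2>,<Q3>), mP=(<P2>,<P3>) are the probe
    means; covQ, covP are the 2x2 covariance matrices of (Q2,Q3) and
    (P2,P3) in the probe state (assumed inputs)."""
    A = expm(tau * R)        # (a_ij), eq. (21)
    B = expm(-tau * R.T)     # (b_ij), eq. (28)
    aq = A[1, 1:]            # (a22, a23): meter row for Q2
    bp = B[2, 1:]            # (b32, b33): meter row for P3
    eQ2 = ((A[1, 0] - 1)**2 * sQ1**2 + aq @ covQ @ aq
           + ((A[1, 0] - 1) * mQ1 + aq @ mQ)**2)
    eP2 = ((B[2, 0] - 1)**2 * sP1**2 + bp @ covP @ bp
           + ((B[2, 0] - 1) * mP1 + bp @ mP)**2)
    return np.sqrt(eQ2), np.sqrt(eP2)
```

Here $\sigma(a_{22}Q_{2}+a_{23}Q_{3}\|\xi)^{2}$ is evaluated as a quadratic form of the probe covariance matrix, which is valid for any probe state with finite second moments.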
## III Results
### III.1 Characterization theorem
The following theorem is the first result of the paper:
###### Theorem.
A linear simultaneous $(Q_{1},P_{1})$-measurement $\mathbb{M}=(\mathcal{H}_{\mathbf{P}},\xi,Q_{2},P_{3},U(\tau))$ satisfies Eq. (4) in $\psi$ if and only if it satisfies the following three conditions:
$(i)$ $(a_{21}-1)\langle Q_{1}\rangle_{\psi}+a_{22}\langle Q_{2}\rangle_{\xi}+a_{23}\langle Q_{3}\rangle_{\xi}=0$ and $(b_{31}-1)\langle P_{1}\rangle_{\psi}+b_{32}\langle P_{2}\rangle_{\xi}+b_{33}\langle P_{3}\rangle_{\xi}=0$.
$(ii)$ $\sigma(a_{22}Q_{2}+a_{23}Q_{3}\|\xi)=|a_{21}b_{31}|^{\frac{1}{2}}\sigma(Q_{1}\|\psi)$ and $\sigma(b_{32}P_{2}+b_{33}P_{3}\|\xi)=|a_{21}b_{31}|^{\frac{1}{2}}\sigma(P_{1}\|\psi)$.
$(iii)$ $a_{21}>0$, $b_{31}>0$ and $a_{21}+b_{31}=1$.
Furthermore, for every $\nu\in(0,1)$, there exists a linear simultaneous $(Q_{1},P_{1})$-measurement such that
$\varepsilon(Q_{1})^{2}=(1-\nu)\sigma(Q_{1})^{2}\quad\text{and}\quad\varepsilon(P_{1})^{2}=\nu\sigma(P_{1})^{2}$ (32)
in $\psi$.
By the above theorem, any linear simultaneous $(Q_{1},P_{1})$-measurement with
the minimum error-trade-off in $\psi$ satisfies
$\varepsilon(Q_{1})<\sigma(Q_{1}),\quad\varepsilon(P_{1})<\sigma(P_{1}),$ (33)
and
$\varepsilon(Q_{1})\varepsilon(P_{1})=\dfrac{\hbar}{2}\sqrt{\dfrac{1}{4}-\left(\nu-\dfrac{1}{2}\right)^{2}}\leq\dfrac{\hbar}{4}<\dfrac{\hbar}{2}.$
(34)
Thus, the range of possible values of the error pairs
$(\varepsilon(Q_{1}),\varepsilon(P_{1}))$ in the state $\psi$ is as shown in
FIG. 1.
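A quick numerical sanity check of this region (a sketch with $\hbar$ and $\sigma(Q_{1})$ set to illustrative values, and $\sigma(P_{1})$ fixed by the minimum uncertainty condition):

```python
import numpy as np

hbar, sigma_Q = 1.0, 1.0              # assumed units
sigma_P = hbar / (2 * sigma_Q)        # minimum uncertainty state

nu = np.linspace(1e-3, 1 - 1e-3, 9)
eQ = np.sqrt(1 - nu) * sigma_Q        # eq. (32)
eP = np.sqrt(nu) * sigma_P

# eq. (3) is saturated, and the product never exceeds hbar/4, cf. eq. (34)
print(np.allclose(eQ**2 * sigma_P**2 + sigma_Q**2 * eP**2, hbar**2 / 4))
print(float((eQ * eP).max()) <= hbar / 4 + 1e-12)
```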
Figure 1: When the state of $\mathbf{S}$ is $\psi$, possible values of the
pair $(\varepsilon(Q_{1}),\varepsilon(P_{1}))$ of the errors are indicated by
the area with a grid of dotted magenta lines and with magenta boundary except
for two points $(\sigma(Q_{1}),0)$ and $(0,\sigma(P_{1}))$. By the theorem,
$\varepsilon(Q_{1})^{2}\sigma(P_{1})^{2}+\sigma(Q_{1})^{2}\varepsilon(P_{1})^{2}=\hbar^{2}/4$
($\varepsilon(Q_{1}),\varepsilon(P_{1})>0$), a part of its boundary, is
achieved by linear simultaneous $(Q_{1},P_{1})$-measurements, and gives the
unbreakable limitation for the pair $(\varepsilon(Q_{1}),\varepsilon(P_{1}))$.
The cyan line is Heisenberg’s bound,
$\varepsilon(Q_{1})\varepsilon(P_{1})=\hbar/2$. On the other hand, the dashed
green line indicates $\varepsilon(Q_{1})\varepsilon(P_{1})=\hbar/4$.
### III.2 Concrete models
The above theorem does not directly tell us how to construct simultaneous
$(Q_{1},P_{1})$-measurements with the minimum error-trade-off in $\psi$.
Notably, in contrast to the exactly solvable linear measurements of [31], $e^{tR}$ no longer has an explicit formula in general. Therefore, we abandon analyzing $e^{tR}$ as it is. We remind the reader that $K=1$ is assumed. We shall give a novel, exactly solvable subclass of linear simultaneous $(Q_{1},P_{1})$-measurements. The following two constraints on $R$ are imposed:
$(\mathrm{C1})$ $\alpha_{2}=\beta_{2}=\gamma_{1}=\gamma_{3}=0$.
$(\mathrm{C2})$ $\alpha_{1}\beta_{1}=\alpha_{3}\beta_{3}$.
Under these constraints, $R$ is denoted by $S$, that is,
$S=\left(\begin{array}{ccc}0&\beta_{1}&\alpha_{3}\\ \alpha_{1}&\gamma_{2}&0\\ \beta_{3}&0&-\gamma_{2}\end{array}\right).$ (35)
Let $\nu\in(0,1)$ and $\kappa\in\mathbb{R}\backslash\{0\}$. We define a state $\xi_{\nu,\kappa}$ of $\mathbf{P}$ which satisfies the following conditions: $(1)$ $\sigma(Q_{2})\sigma(P_{2})=\sigma(Q_{3})\sigma(P_{3})=\hbar/2$ and $\langle Q_{2}Q_{3}\rangle=\langle Q_{2}\rangle\langle Q_{3}\rangle$, $(2)$ $\sigma(Q_{2})=\sqrt{\frac{\nu(1-\nu)}{2\kappa^{2}}}\sigma_{1}$ and $\sigma(Q_{3})=\sqrt{\frac{2\kappa^{2}}{\nu(1-\nu)}}\sigma_{1}$, $(3)$ $\langle Q_{2}\rangle=\frac{1-\nu}{\kappa}q_{1}$, $\langle Q_{3}\rangle=0$, $\langle P_{2}\rangle=0$ and $\langle P_{3}\rangle=\frac{\nu}{\kappa}p_{1}$, i.e.,
$\xi_{\nu,\kappa}(x_{2,3})=\dfrac{1}{\sqrt{(2\pi)\sigma_{1}^{2}}}e^{-\frac{1}{4}\|G^{-\frac{1}{2}}(x_{2,3}-u)\|^{2}+\frac{i}{\hbar}\langle v,x_{2,3}\rangle}$ (36)
for all $x_{2,3}=\left(\begin{array}{c}x_{2}\\ x_{3}\end{array}\right)\in\mathbb{R}^{2}$ in the coordinate representation, where $u=\left(\begin{array}{c}\frac{1-\nu}{\kappa}q_{1}\\ 0\end{array}\right)$, $v=\left(\begin{array}{c}0\\ \frac{\nu}{\kappa}p_{1}\end{array}\right)$ and $G=\sigma_{1}^{2}\left(\begin{array}{cc}\frac{\nu(1-\nu)}{2\kappa^{2}}&0\\ 0&\frac{2\kappa^{2}}{\nu(1-\nu)}\end{array}\right)$.
For every $\nu\in(0,1)$, we present four linear simultaneous
$(Q_{1},P_{1})$-measurements
$(\mathcal{H}_{\mathbf{P}},\xi_{\nu,\kappa},Q_{2},P_{3},U(\tau))$ satisfying
Eq. (32) herein, denoted by $\mathbb{X}_{\nu}$, $\mathbb{Y}_{\nu}^{2}$,
$\mathbb{Y}_{\nu}^{0}$ and $\mathbb{Z}_{\nu}$, respectively. Each model is
specified by the triplet of $\tau$, $S$ and $\kappa$ in the following table,
Table 1.
Table 1:

| model | $\tau$ | $S$ | $\kappa$ | $E$ |
|---|---|---|---|---|
| $\mathbb{X}_{\nu}$ | $\dfrac{\pi}{2}$ | $\left(\begin{array}{ccc}0&-\frac{2}{\nu}&-\frac{1-\nu}{2}\\ \frac{\nu}{2}&1&0\\ \frac{2}{1-\nu}&0&-1\end{array}\right)$ | $2$ | $1$ |
| $\mathbb{Y}_{\nu}^{2}$ | $1$ | $\left(\begin{array}{ccc}0&-\frac{4}{\nu}&\frac{\nu-1}{2}\\ \frac{\nu}{2}&2&0\\ \frac{4}{1-\nu}&0&-2\end{array}\right)$ | $4$ | $0$ |
| $\mathbb{Y}_{\nu}^{0}$ | $1$ | $\left(\begin{array}{ccc}0&0&-(1-\nu)\\ \nu&0&0\\ 0&0&0\end{array}\right)$ | $1$ | $0$ |
| $\mathbb{Z}_{\nu}$ | $\log 2$ | $\left(\begin{array}{ccc}0&0&\nu-1\\ \nu&1&0\\ 0&0&-1\end{array}\right)$ | $2$ | $-1$ |
Here $E$ is a real number defined by
$\alpha_{1}\beta_{1}=\alpha_{3}\beta_{3}=-\dfrac{\gamma_{2}^{2}+E}{2},$ (37)
and is used to explicitly solve $e^{tS}$ for all $t\in\mathbb{R}$ (see Sec.
V.1).
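The entries of Table 1 can be checked against the theorem and Eq. (32) numerically. A sketch for the model $\mathbb{Y}_{\nu}^{0}$, with $\hbar=1$ and illustrative values of $\nu$, $q_{1}$, $p_{1}$, $\sigma_{1}$ assumed; the probe data follow the conditions defining $\xi_{\nu,\kappa}$:

```python
import numpy as np
from scipy.linalg import expm

hbar = 1.0
nu, kappa = 0.3, 1.0                    # model Y^0_nu has kappa = 1
q1, p1, s1 = 0.7, -0.4, 0.9             # data of the state psi (assumed)
s1hat = hbar / (2 * s1)                 # sigma(P1) for minimum uncertainty

S = np.array([[0.0, 0.0, -(1 - nu)],    # Table 1, model Y^0_nu
              [nu, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
tau = 1.0
A, B = expm(tau * S), expm(-tau * S.T)

# probe data of xi_{nu,kappa}: (Q2,Q3), (P2,P3) uncorrelated Gaussian pairs
vQ2 = nu * (1 - nu) / (2 * kappa**2) * s1**2
vQ3 = 2 * kappa**2 / (nu * (1 - nu)) * s1**2
vP2, vP3 = hbar**2 / (4 * vQ2), hbar**2 / (4 * vQ3)
mQ = np.array([(1 - nu) / kappa * q1, 0.0])
mP = np.array([0.0, nu / kappa * p1])

aq, bp = A[1, 1:], B[2, 1:]
eQ2 = ((A[1, 0] - 1)**2 * s1**2 + aq @ np.diag([vQ2, vQ3]) @ aq
       + ((A[1, 0] - 1) * q1 + aq @ mQ)**2)
eP2 = ((B[2, 0] - 1)**2 * s1hat**2 + bp @ np.diag([vP2, vP3]) @ bp
       + ((B[2, 0] - 1) * p1 + bp @ mP)**2)

print(np.isclose(eQ2, (1 - nu) * s1**2),   # eq. (32), position error
      np.isclose(eP2, nu * s1hat**2))      # eq. (32), momentum error
```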
### III.3 Probability distributions and families of posterior states
Our next interest is to give probability distributions and families of
posterior states when using concrete models $\mathbb{X}_{\nu}$,
$\mathbb{Y}_{\nu}^{2}$, $\mathbb{Y}_{\nu}^{0}$ and $\mathbb{Z}_{\nu}$. First,
we show the probability distributions related to $\varepsilon(Q_{1})$ and $\varepsilon(P_{1})$, and check the validity of $\mathbb{X}_{\nu}$, $\mathbb{Y}_{\nu}^{2}$, $\mathbb{Y}_{\nu}^{0}$ and $\mathbb{Z}_{\nu}$. For
every $\nu\in(0,1)$, whether we use $\mathbb{X}_{\nu}$,
$\mathbb{Y}_{\nu}^{2}$, $\mathbb{Y}_{\nu}^{0}$ or $\mathbb{Z}_{\nu}$, we get
the following probability density functions:
$p^{Q_{2}(\tau),P_{3}(\tau)}_{\psi\otimes\xi_{\nu,\kappa}}(z,w)=p_{\nu\sigma_{1}^{2}}(z-q_{1})\,p_{(1-\nu)\hat{\sigma}_{1}^{2}}(w-p_{1}),$ (38)
$p^{Q_{1}(0),Q_{2}(\tau)}_{\psi\otimes\xi_{\nu,\kappa}}(x,z)=p_{(1-\nu)\sigma_{1}^{2}}(x-z)\,p_{\nu\sigma_{1}^{2}}(z-q_{1}),$ (39)
$p^{P_{1}(0),P_{3}(\tau)}_{\psi\otimes\xi_{\nu,\kappa}}(y,w)=p_{\nu\hat{\sigma}_{1}^{2}}(y-w)\,p_{(1-\nu)\hat{\sigma}_{1}^{2}}(w-p_{1}),$ (40)
where $\hat{\sigma}_{1}=\hbar/(2\sigma_{1})$ and $p_{\sigma^{2}}(x)$ denotes
the probability density function of the Gaussian probability measure with mean
$0$ and variance $\sigma^{2}$ (equivalently, standard deviation $\sigma$),
i.e.,
$p_{\sigma^{2}}(x)=\dfrac{1}{\sqrt{(2\pi)\sigma^{2}}}e^{-\frac{1}{2\sigma^{2}}x^{2}}.$
(41)
We see that all of Eqs. (38), (39) and (40) depend on $\psi$ and $0<\nu<1$. Of
the three equations, only Eq. (38) can be directly confirmed by any of
$\mathbb{X}_{\nu}$, $\mathbb{Y}_{\nu}^{2}$, $\mathbb{Y}_{\nu}^{0}$ or
$\mathbb{Z}_{\nu}$. The remaining two equations, Eqs. (39) and (40), are essential
for understanding the performance of $\mathbb{X}_{\nu}$,
$\mathbb{Y}_{\nu}^{2}$, $\mathbb{Y}_{\nu}^{0}$ and $\mathbb{Z}_{\nu}$. From
Eq. (39), the probability density function of the conditional probability
measure of $Q_{1}(0)$ in $\psi\otimes\xi_{\nu,\kappa}$ under the condition
that the value $z$ of $Q_{2}(\tau)$ is given is determined as
$p^{Q_{1}(0)}_{Q_{2}(\tau)=z,\psi\otimes\xi_{\nu,\kappa}}(x)=p_{(1-\nu)\sigma_{1}^{2}}(x-z).$
(42)
Since $\varepsilon(Q_{1})^{2}=(1-\nu)\sigma_{1}^{2}$, Eq. (42) means that,
when the value $z$ of $Q_{2}(\tau)$ is output, $Q_{1}(0)$ obeys the Gaussian
probability measure with mean $z$ and standard deviation $\varepsilon(Q_{1})$.
The same argument can be made for Eq. (40) and
$\varepsilon(P_{1})^{2}=\nu\hat{\sigma}_{1}^{2}$. The noise-operator based
q-rms errors $\varepsilon(Q_{1})$ and $\varepsilon(P_{1})$ are then equal to
Gauss’ errors $\varepsilon_{G}(\mu^{Q_{1}(0),Q_{2}(\tau)}_{\psi\otimes\xi})$
and $\varepsilon_{G}(\mu^{P_{1}(0),P_{3}(\tau)}_{\psi\otimes\xi})$,
respectively, i.e.,
$\varepsilon(Q_{1})=\varepsilon_{G}(\mu^{Q_{1}(0),Q_{2}(\tau)}_{\psi\otimes\xi}),\quad\varepsilon(P_{1})=\varepsilon_{G}(\mu^{P_{1}(0),P_{3}(\tau)}_{\psi\otimes\xi}).$ (43)
Here Gauss’ error $\varepsilon_{G}(\mu)$ for a probability distribution $\mu$
on $\mathbb{R}^{2}$ is defined by
$\varepsilon_{G}(\mu)=\left(\int_{\mathbb{R}^{2}}(x-y)^{2}\;d\mu(x,y)\right)^{\frac{1}{2}}.$
(44)
Following Laplace’s pioneering work, Gauss [39] defined his error in 1821. His
error is now redefined as above and widely used in the setting of measure-
theoretical probability theory.
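For the joint density of Eq. (39), Gauss' error can also be estimated by direct Monte Carlo sampling; a sketch with $\hbar=1$ and illustrative values of $\nu$, $q_{1}$ and $\sigma_{1}$:

```python
import numpy as np

rng = np.random.default_rng(0)
hbar, nu, q1, s1 = 1.0, 0.3, 0.7, 0.9   # assumed illustrative values
n = 200_000

# sample (x, z) from the joint density of eq. (39):
# z ~ N(q1, nu*s1^2), then x | z ~ N(z, (1-nu)*s1^2)
z = rng.normal(q1, np.sqrt(nu) * s1, n)
x = rng.normal(z, np.sqrt(1 - nu) * s1)

eps_G = np.sqrt(np.mean((x - z)**2))    # Gauss' error, eq. (44)
print(eps_G, np.sqrt(1 - nu) * s1)      # should agree with eps(Q1)
```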
Next, we consider a family of posterior states, which is the set of the states after the measurement for each output value of the meters (see [40, 41] for the general theory). It is difficult to find families of posterior states for general linear simultaneous $(Q_{1},P_{1})$-measurements with the minimum error-trade-off in $\psi$. Here we shall give them for $\{\mathbb{Y}^{0}_{\nu}\}_{\nu\in(0,1)}$ and $\{\mathbb{Z}_{\nu}\}_{\nu\in(0,1)}$. For every $\nu\in(0,1)$, the family $\{\psi_{y}\}_{y\in\mathbb{R}^{2}}$ of posterior states for $(\mathbb{Y}^{0}_{\nu},\psi)$ is the set of the minimum uncertainty states $\psi_{y}$ with $\langle Q_{1}\rangle_{\psi_{y}}=\frac{y_{1}-(1-\nu)q_{1}}{\nu}$, $\langle P_{1}\rangle_{\psi_{y}}=\frac{y_{2}-\nu p_{1}}{1-\nu}$ and $\sigma(Q_{1}\|\psi_{y})=\sqrt{\frac{1-\nu}{\nu}}\sigma_{1}$ for all $y=\left(\begin{array}{c}y_{1}\\ y_{2}\end{array}\right)\in\mathbb{R}^{2}$, i.e.,
$\psi_{y}(x)=\frac{e^{-\frac{\nu}{4(1-\nu)\sigma_{1}^{2}}\left(x-\frac{y_{1}-(1-\nu)q_{1}}{\nu}\right)^{2}+i\frac{y_{2}-\nu p_{1}}{(1-\nu)\hbar}x}}{\sqrt[4]{\frac{2\pi(1-\nu)\sigma_{1}^{2}}{\nu}}}$ (45)
for all $y=\left(\begin{array}{c}y_{1}\\ y_{2}\end{array}\right)\in\mathbb{R}^{2}$ in the coordinate representation. For every $\nu\in(0,1)$, the family $\{\psi_{y}\}_{y\in\mathbb{R}^{2}}$ of posterior states for $(\mathbb{Z}_{\nu},\psi)$ is the same as that for $(\mathbb{Y}^{0}_{\nu},\psi)$.
## IV Discussion
### IV.1 The Arthurs-Kelly model
Here we shall mention the differences between this paper and the paper [38] of
Arthurs and Kelly, an important previous study, on the treatment of
simultaneous measurements of position and momentum. They use the $2$-meter
measuring process
$\mathbb{M}_{\mathrm{AK}}=(\mathcal{H}_{\mathbf{P}},\xi,Q_{2},Q_{3},U_{\mathrm{AK}}(K^{-1}))$
for $\mathbf{S}$, where $U_{\mathrm{AK}}(t)=e^{-itH_{\mathrm{AK}}/\hbar}$ is a
one-parameter group on
$\mathcal{H}_{\mathbf{S}}\otimes\mathcal{H}_{\mathbf{P}}=L^{2}(\mathbb{R})\otimes
L^{2}(\mathbb{R}^{2})$ with $H_{\mathrm{AK}}=K(Q_{1}P_{2}+P_{1}P_{3})$, and
use $Q_{2}$ and $Q_{3}$ to measure $Q_{1}$ and $P_{1}$, respectively. Their
interaction Hamiltonian is obtained from that of the linear simultaneous
$(Q_{1},P_{1})$-measurement with $\alpha_{1}=-\alpha_{3}=1$ and
$\alpha_{2}=\beta_{1}=\beta_{2}=\beta_{3}=\gamma_{1}=\gamma_{2}=\gamma_{3}=0$
by replacing $Q_{3}$ and $P_{3}$ by $-P_{3}$ and $Q_{3}$, respectively. Then
we have
$\displaystyle
U_{\mathrm{AK}}(K^{-1})^{\dagger}Q_{2}U_{\mathrm{AK}}(K^{-1})=Q_{1}+Q_{2}+\frac{1}{2}P_{3},$
(46) $\displaystyle
U_{\mathrm{AK}}(K^{-1})^{\dagger}Q_{3}U_{\mathrm{AK}}(K^{-1})=P_{1}-\frac{1}{2}P_{2}+Q_{3},$
(47)
so that the q-rms errors
$\varepsilon(Q_{1})=\varepsilon(Q_{1},\mathbb{M}_{\mathrm{AK}},\psi)$ and
$\varepsilon(P_{1})=\varepsilon(P_{1},\mathbb{M}_{\mathrm{AK}},\psi)$ satisfy
Heisenberg’s ETR, Eq. (5). This result shows that the Arthurs-Kelly model is
not what we desire.
On the other hand, the measuring interaction of Ozawa’s exactly solvable
linear measurements is given by
$H_{O}=K[\alpha Q_{1}P_{2}+\beta P_{1}Q_{2}+\gamma(Q_{1}P_{1}-Q_{2}P_{2})],$
(48)
where $K$ is a positive real number, the coupling constant, and $\alpha$,
$\beta$ and $\gamma$ are real numbers. In [31], Ozawa systematically analyzed
his exactly solvable measuring models using this interaction, and calculated
the noise-operator based q-rms error and the disturbance-operator based q-rms
disturbance. His investigation motivated the author just as von Neumann’s work
inspired Arthurs and Kelly.
### IV.2 The Branciard-Ozawa ETR and the noise-operator based q-rms error
The reformulation of uncertainty relations is an actively developing research
program. Within this program, the significance of the present study lies in
connecting recent knowledge about uncertainty relations with the construction
of measurement models. After Ozawa’s inequality
$\varepsilon(X)\varepsilon(Y)+\varepsilon(X)\sigma(Y)+\sigma(X)\varepsilon(Y)\geq
C_{XY}$ (49)
was proved, the study of uncertainty relations became active, where
$C_{XY}=|\mathrm{Tr}(\rho[X,Y])|/2$ and $\rho$ is a density operator on
$L^{2}(\mathbb{R})$ describing the state of $\mathbf{S}$. Note, however, that
the noise-operator based q-rms error $\varepsilon(X)$ and the standard
deviation $\sigma(Y)$ are defined for $\rho$. The tightest ETR, which is now
known, is the Branciard-Ozawa ETR
$\displaystyle\hskip
8.53581pt\varepsilon(X)^{2}\sigma(Y)^{2}+\sigma(X)^{2}\varepsilon(Y)^{2}$
$\displaystyle+2\varepsilon(X)\varepsilon(Y)\sqrt{\sigma(X)^{2}\sigma(Y)^{2}-D_{XY}^{2}}\geq
D_{XY}^{2},$ (50)
where $D_{XY}=\mathrm{Tr}|\sqrt{\rho}[X,Y]\sqrt{\rho}|/2$ satisfies
$D_{XY}\geq C_{XY}$ (see [12]). This inequality was first proved for pure
(vector) states by Branciard [10], and was extended to mixed states by Ozawa
[12]. Eq. (3) is the case where $X=Q_{1}$, $Y=P_{1}$ and the state of
$\mathbf{S}$ is $\psi$.
There is a claim that the use of the noise-operator based q-rms error is
questionable because it sometimes vanishes for inaccurate measurements of
observables (see [8] for example). In constrast to such a claim, it is shown
in [42] that the q-rms error satisfies satisfactory conditions except for the
completeness. A q-rms error is said to be complete if it never vanishes for
inaccurate measurements of observables in each state [42]. The noise-operator
based q-rms error is regarded as a straightforward generalization of Gauss’
error to quantum measurement. Rather than relying on the noise-operator based
q-rms error alone, improved versions that satisfy completeness are also
proposed in [42]. In statistics and information theory, various quantitative
measures are defined for different purposes. In that sense, it is reasonable to
use the noise-operator based q-rms error as a standard, and to use its improved
versions as alternatives when its use is problematic.
## V Methods
As in standard textbooks of quantum mechanics, $Q_{j}$ and $P_{k}$ satisfy
$\displaystyle(Q_{j}f)(x_{1},x_{2},x_{3})$
$\displaystyle=x_{j}f(x_{1},x_{2},x_{3}),$ (51)
$\displaystyle(P_{k}g)(x_{1},x_{2},x_{3})$
$\displaystyle=\dfrac{\hbar}{i}\dfrac{\partial}{\partial
x_{k}}g(x_{1},x_{2},x_{3}),$ (52)
respectively, in the coordinate representation for every $j,k=1,2,3$, and for
appropriate functions $f$ and $g$ on $\mathbb{R}^{3}$. We do not explicitly
use the above representation in the paper.
### V.1 Proof of Theorem and the construction of models
To begin with, we shall prove Theorem. When the state of $\mathbf{S}$ is
$\psi$ and a linear $(Q_{1},P_{1})$-measurement
$\mathbb{M}=(\mathcal{H}_{\mathbf{P}},\xi,Q_{2},P_{3},U(\tau))$ is used, we
have the following evaluation:
$\displaystyle\hskip
14.22636pt\varepsilon(Q_{1})^{2}\sigma(P_{1})^{2}+\sigma(Q_{1})^{2}\eta(P_{1})^{2}$
$\displaystyle\geq(a_{21}-1)^{2}\sigma(Q_{1})^{2}\sigma(P_{1})^{2}+\sigma(a_{22}Q_{2}+a_{23}Q_{3})^{2}\sigma(P_{1})^{2}$
$\displaystyle\hskip
14.22636pt+(b_{31}-1)^{2}\sigma(Q_{1})^{2}\sigma(P_{1})^{2}+\sigma(Q_{1})^{2}\sigma(b_{32}P_{2}+b_{33}P_{3})^{2}$
$\displaystyle=\dfrac{\hbar^{2}}{4}\\{(a_{21}-1)^{2}+(b_{31}-1)^{2}\\}$
$\displaystyle\hskip
14.22636pt+\sigma(a_{22}Q_{2}+a_{23}Q_{3})^{2}\sigma(P_{1})^{2}+\sigma(Q_{1})^{2}\sigma(b_{32}P_{2}+b_{33}P_{3})^{2}$
$\displaystyle=\dfrac{\hbar^{2}}{4}\\{(a_{21}-1)^{2}+(b_{31}-1)^{2}\\}$
$\displaystyle\hskip
14.22636pt+2\sigma(Q_{1})\sigma(P_{1})\sigma(a_{22}Q_{2}+a_{23}Q_{3})\sigma(b_{32}P_{2}+b_{33}P_{3})$
$\displaystyle\hskip
14.22636pt+(\sigma(a_{22}Q_{2}+a_{23}Q_{3})\sigma(P_{1})-\sigma(Q_{1})\sigma(b_{32}P_{2}+b_{33}P_{3}))^{2}$
$\displaystyle\geq\dfrac{\hbar^{2}}{4}\\{(a_{21}-1)^{2}+(b_{31}-1)^{2}\\}$
$\displaystyle\hskip
14.22636pt+\hbar\sigma(a_{22}Q_{2}+a_{23}Q_{3})\sigma(b_{32}P_{2}+b_{33}P_{3})$
$\displaystyle\geq\hbar^{2}l(a_{21},b_{31}),$ (53)
where $l(a_{21},b_{31})$ is the function on $\mathbb{R}^{2}$ defined by
$l(a_{21},b_{31})=\dfrac{1}{4}\\{(a_{21}-1)^{2}+(b_{31}-1)^{2}\\}+\dfrac{1}{2}|a_{21}b_{31}|,$
(54)
and takes the minimal value $1/4$ when $a_{21},b_{31}\geq 0$ and
$a_{21}+b_{31}=1$; indeed, on this segment
$l=\frac{1}{4}(b_{31}^{2}+a_{21}^{2})+\frac{1}{2}a_{21}b_{31}=\frac{1}{4}(a_{21}+b_{31})^{2}=\frac{1}{4}$.
By $[Q_{2}(\tau),P_{3}(\tau)]=0$, we have
$a_{21}b_{31}+a_{22}b_{32}+a_{23}b_{33}=0$. We see that
$a_{22}Q_{2}+a_{23}Q_{3}$ and $b_{32}P_{2}+b_{33}P_{3}$ satisfy the following
commutation relation
$[a_{22}Q_{2}+a_{23}Q_{3},b_{32}P_{2}+b_{33}P_{3}]=i\hbar(-a_{21}b_{31})1.$
(55)
Therefore, we obtain
$\sigma(a_{22}Q_{2}+a_{23}Q_{3})\sigma(b_{32}P_{2}+b_{33}P_{3})\geq\dfrac{\hbar}{2}|a_{21}b_{31}|.$
(56)
A linear simultaneous $(Q_{1},P_{1})$-measurement
$\mathbb{M}=(\mathcal{H}_{\mathbf{P}},\xi,Q_{2},P_{3},U(\tau))$ satisfies Eq.
(4) in $\psi$ if and only if it satisfies the condition $(i)$ together with the following conditions:
$(ii.1)$
$\displaystyle{\sigma(P_{1})\sigma(a_{22}Q_{2}+a_{23}Q_{3})=\sigma(Q_{1})\sigma(b_{32}P_{2}+b_{33}P_{3})}$.
$(ii.2)$
$\displaystyle{\sigma(a_{22}Q_{2}+a_{23}Q_{3})\sigma(b_{32}P_{2}+b_{33}P_{3})=\dfrac{\hbar}{2}|a_{21}b_{31}|}$.
$(iii\mathrm{-})$ $a_{21}\geq 0$, $b_{31}\geq 0$ and $a_{21}+b_{31}=1$.
From the conditions $(ii.1)$ and $(ii.2)$, we obtain the condition $(ii)$ of
the theorem. If $a_{21}b_{31}=0$, we get
$\sigma(a_{22}Q_{2}+a_{23}Q_{3})=\sigma(b_{32}P_{2}+b_{33}P_{3})=0$. Since at
least one of $a_{22}$, $a_{23}$, $b_{32}$ and $b_{33}$ is non-zero,
$\sigma(a_{22}Q_{2}+a_{23}Q_{3})=\sigma(b_{32}P_{2}+b_{33}P_{3})=0$ never
holds for any unit vector $\xi$ of $L^{2}(\mathbb{R}^{2})$. Therefore,
$a_{21}b_{31}\neq 0$ must be satisfied, so that we have the condition $(iii)$
of the theorem. We then have
$\displaystyle\varepsilon(Q_{1})^{2}$
$\displaystyle=(a_{21}-1)^{2}\sigma_{1}^{2}+|a_{21}b_{31}|\sigma_{1}^{2}=(1-a_{21})\sigma_{1}^{2},$
(57) $\displaystyle\eta(P_{1})^{2}$
$\displaystyle=(b_{31}-1)^{2}\hat{\sigma}_{1}^{2}+|a_{21}b_{31}|\hat{\sigma}_{1}^{2}=a_{21}\hat{\sigma}_{1}^{2}.$
(58)
To complete the proof, for every $\nu\in(0,1)$, we find $S$ and $\tau>0$ such
that $a_{21}=\nu$ and $b_{31}=1-\nu$. $S$ satisfies $S^{3}=(-E)S$, so that we
have
$e^{tS}=\begin{cases}\displaystyle{I+\dfrac{\sin(t\sqrt{E})}{\sqrt{E}}S+\dfrac{1-\cos(t\sqrt{E})}{E}S^{2}},&(E>0)\\ \displaystyle{I+tS+\dfrac{1}{2}t^{2}S^{2}},&(E=0)\\ \displaystyle{I+\dfrac{\sinh(t\sqrt{-E})}{\sqrt{-E}}S+\dfrac{\cosh(t\sqrt{-E})-1}{-E}S^{2}},&(E<0)\end{cases}$
(59)
for all $t\in\mathbb{R}$. Independent of the sign of $E$, $e^{\tau
S}=(a_{ij})$ and $e^{-\tau S^{T}}=(b_{ij})$ satisfy $a_{22}=b_{33}$ and
$a_{23}=b_{32}$. Since $[Q_{2}(\tau),P_{3}(\tau)]=0$, we have
$a_{21}b_{31}+2a_{22}a_{23}=0$. Then, we use $\xi_{a_{21},a_{22}}$ as the
state of $\mathbf{P}$, i.e., $\xi_{\nu,\kappa}$ with $\nu=a_{21}$ and
$\kappa=a_{22}$. $\xi_{a_{21},a_{22}}$ is the product of two Gaussian states
$\xi_{2}$ and $\xi_{3}$: It has the form
$\xi_{a_{21},a_{22}}(x_{2},x_{3})=\xi_{2}(x_{2})\xi_{3}(x_{3})$ in the
coordinate representation, where $\xi_{2}$ and $\xi_{3}$ are given by
$\displaystyle\xi_{2}(x_{2})$
$\displaystyle=\sqrt[4]{\frac{|a_{22}|}{(2\pi)|a_{23}|\sigma_{1}^{2}}}e^{-\frac{|a_{22}|}{4|a_{23}|\sigma_{1}^{2}}\left(x_{2}-\frac{1-a_{21}}{a_{22}}q_{1}\right)^{2}},$
(60) $\displaystyle\xi_{3}(x_{3})$
$\displaystyle=\sqrt[4]{\frac{|a_{23}|}{(2\pi)|a_{22}|\sigma_{1}^{2}}}e^{-\frac{|a_{23}|}{4|a_{22}|\sigma_{1}^{2}}x_{3}^{2}+i\frac{a_{21}p_{1}}{a_{22}\hbar}x_{3}},$
(61)
respectively, in the coordinate representation. By Eq. (59), the cases $E>0$,
$E=0$ and $E<0$ must be handled separately.
[$E>0$] Both $a_{21}=\nu$ and $b_{31}=1-\nu$ are satisfied if and only if it
holds that
$\dfrac{\sin(\tau\sqrt{E})}{\sqrt{E}}+\gamma_{2}\dfrac{1-\cos(\tau\sqrt{E})}{E}=\dfrac{\nu}{\alpha_{1}}=\dfrac{1-\nu}{-\alpha_{3}}.$
(62)
For example, for every $0<\nu<1$, $E>0$, $\gamma_{2}>0$ and
$0<\tau<\dfrac{\pi}{\sqrt{E}}$, there uniquely exist $\alpha_{1}>0$ and
$\alpha_{3}<0$ satisfying Eq. (62), which completes the proof of the theorem.
The family $\{\mathbb{X}_{\nu}\}_{\nu\in(0,1)}$ of linear simultaneous
$(Q_{1},P_{1})$-measurements is contained in this case.
[$E=0$] Both $a_{21}=\nu$ and $b_{31}=1-\nu$ are satisfied if and only if it
holds that
$\tau+\dfrac{1}{2}\gamma_{2}\tau^{2}=\dfrac{\nu}{\alpha_{1}}=\dfrac{1-\nu}{-\alpha_{3}}.$
(63)
For every $0<\nu<1$, $\gamma_{2}\geq 0$ and $\tau>0$, there uniquely exist
$\alpha_{1}>0$ and $\alpha_{3}<0$ satisfying Eq. (63). The families
$\\{\mathbb{Y}_{\nu}^{2}\\}_{\nu\in(0,1)}$ and
$\\{\mathbb{Y}_{\nu}^{0}\\}_{\nu\in(0,1)}$ of linear simultaneous
$(Q_{1},P_{1})$-measurements are contained in this case.
[$E<0$] Both $a_{21}=\nu$ and $b_{31}=1-\nu$ are satisfied if and only if it
holds that
$\dfrac{\sinh(\tau\sqrt{-E})}{\sqrt{-E}}+\gamma_{2}\dfrac{\cosh(\tau\sqrt{-E})-1}{-E}=\dfrac{\nu}{\alpha_{1}}=\dfrac{1-\nu}{-\alpha_{3}}.$
(64)
For every $0<\nu<1$, $E<0$, $\gamma_{2}>0$ and $\tau>0$, there uniquely exist
$\alpha_{1}>0$ and $\alpha_{3}<0$ satisfying Eq. (64). The family
$\{\mathbb{Z}_{\nu}\}_{\nu\in(0,1)}$ of linear simultaneous
$(Q_{1},P_{1})$-measurements is contained in this case.
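Before turning to the explicit matrices, we note that the closed form of Eq. (59) is easy to check numerically. The sketch below (Python with NumPy/SciPy) uses hypothetical $3\times 3$ test matrices satisfying $S^{3}=(-E)S$, one for each sign of $E$; these matrices are illustrative choices only, not the $S$ of the measurement models above:

```python
import numpy as np
from scipy.linalg import expm

def exp_tS(t, S, E):
    """Closed form of Eq. (59) for a 3x3 matrix S with S^3 = -E*S."""
    I, S2 = np.eye(3), S @ S
    if E > 0:
        r = np.sqrt(E)
        return I + np.sin(t * r) / r * S + (1 - np.cos(t * r)) / E * S2
    if E == 0:
        return I + t * S + 0.5 * t**2 * S2
    r = np.sqrt(-E)
    return I + np.sinh(t * r) / r * S + (np.cosh(t * r) - 1) / (-E) * S2

def make_S(a, b):
    # Illustrative matrix with S^3 = -(a*b)*S, i.e. E = a*b.
    return np.array([[0.0, -a, 0.0], [b, 0.0, 0.0], [0.0, 0.0, 0.0]])

for a, b in [(2.0, 3.0), (2.0, -3.0)]:               # E > 0 and E < 0
    S, E = make_S(a, b), a * b
    for t in (0.3, 1.1):
        assert np.allclose(exp_tS(t, S, E), expm(t * S))

S0 = np.array([[0.0, 1, 0], [0, 0, 1], [0, 0, 0]])   # nilpotent: E = 0
assert np.allclose(exp_tS(0.7, S0, 0.0), expm(0.7 * S0))
```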
$e^{\tau S}=A=(a_{ij})$ and $e^{-\tau S^{T}}=B=(b_{ij})$ in each model are
then given as follows:
Table 2: | $e^{\tau S}$ | $e^{-\tau S^{T}}$
---|---|---
$\mathbb{X}_{\nu}$ | $\left(\begin{array}{ccc}-1&-\frac{4}{\nu}&0\\ \nu&2&-\frac{\nu(1-\nu)}{4}\\ 0&-\frac{4}{\nu(1-\nu)}&0\end{array}\right)$ | $\left(\begin{array}{ccc}-1&0&0\\ 0&0&-\frac{4}{\nu(1-\nu)}\\ 1-\nu&-\frac{\nu(1-\nu)}{4}&2\end{array}\right)$
$\mathbb{Y}_{\nu}^{2}$ | $\left(\begin{array}{ccc}-1&-\frac{8}{\nu}&0\\ \nu&4&-\frac{\nu(1-\nu)}{8}\\ 0&-\frac{8}{\nu(1-\nu)}&0\end{array}\right)$ | $\left(\begin{array}{ccc}-1&0&-\frac{8}{1-\nu}\\ 0&0&-\frac{8}{\nu(1-\nu)}\\ 1-\nu&-\frac{\nu(1-\nu)}{8}&4\end{array}\right)$
$\mathbb{Y}_{\nu}^{0}$ | $\left(\begin{array}{ccc}1&0&-(1-\nu)\\ \nu&1&-\frac{\nu(1-\nu)}{2}\\ 0&0&1\end{array}\right)$ | $\left(\begin{array}{ccc}1&-\nu&0\\ 0&1&0\\ 1-\nu&-\frac{\nu(1-\nu)}{2}&1\end{array}\right)$
$\mathbb{Z}_{\nu}$ | $\left(\begin{array}{ccc}1&0&-\frac{1-\nu}{2}\\ \nu&2&-\frac{\nu(1-\nu)}{4}\\ 0&0&\frac{1}{2}\end{array}\right)$ | $\left(\begin{array}{ccc}1&-\frac{\nu}{2}&0\\ 0&\frac{1}{2}&0\\ 1-\nu&-\frac{\nu(1-\nu)}{4}&2\end{array}\right)$
### V.2 Probability distributions and families of posterior states
The characteristic function $\lambda$ of the probability measure $\mu$ on
$\mathbb{R}^{d}$ is defined as the inverse Fourier transform of $\mu$:
$\lambda(k)=\int_{\mathbb{R}^{d}}e^{i\langle x,k\rangle}\;d\mu(x),$ (65)
where $\langle\cdot,\cdot\rangle$ is the inner product of $\mathbb{R}^{d}$.
For any observables $X_{1}$, $X_{2}$ and vector state $\phi$, the
characteristic function of $\mu^{X_{1},X_{2}}_{\phi}$ is denoted by
$\lambda^{X_{1},X_{2}}_{\phi}$. The characteristic function of a Gaussian
measure
$d\mu_{V,m}(x)=\dfrac{1}{\sqrt{(2\pi)^{d}\det(V)}}e^{-\frac{1}{2}\langle
x-m,V^{-1}(x-m)\rangle}\;dx$ (66)
has the following form:
$\lambda_{V,m}(k)=e^{i\langle m,k\rangle-\frac{1}{2}\langle k,Vk\rangle},$
(67)
where $V>0$ is a covariance matrix and $m\in\mathbb{R}^{d}$ is a mean vector.
Conversely, if a characteristic function is given by Eq. (67), then the
corresponding probability measure is a Gaussian measure given by Eq. (66). We
refer the reader to textbooks of probability theory and statistics.
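The correspondence between Eqs. (66) and (67) is easy to verify by Monte Carlo. The following sketch (NumPy, with an arbitrary mean, covariance and test point) compares the empirical characteristic function of Eq. (65) with the closed form of Eq. (67):

```python
import numpy as np

rng = np.random.default_rng(1)
m = np.array([1.0, -0.5])                     # arbitrary mean vector
V = np.array([[2.0, 0.6], [0.6, 1.0]])        # arbitrary covariance matrix
x = rng.multivariate_normal(m, V, size=200_000)

k = np.array([0.4, -0.7])                     # arbitrary test point
empirical = np.mean(np.exp(1j * x @ k))       # Eq. (65), Monte Carlo
exact = np.exp(1j * (m @ k) - 0.5 * k @ V @ k)  # Eq. (67)
print(abs(empirical - exact))                 # small
```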
The characteristic function
$\lambda^{Q_{1}(0),Q_{2}(\tau)}_{\psi\otimes\xi_{a_{21},a_{22}}}$ of
$\mu^{Q_{1}(0),Q_{2}(\tau)}_{\psi\otimes\xi_{a_{21},a_{22}}}$ is given by
$\displaystyle\hskip
14.22636pt\lambda^{Q_{1}(0),Q_{2}(\tau)}_{\psi\otimes\xi_{a_{21},a_{22}}}(k)$
$\displaystyle=\langle
e^{ik_{1}Q_{1}(0)+ik_{2}Q_{2}(\tau)}\rangle_{\psi\otimes\xi_{a_{21},a_{22}}}$
$\displaystyle=\langle
e^{i(k_{1}+a_{21}k_{2})Q_{1}(0)+ia_{22}k_{2}Q_{2}(0)+ia_{23}k_{2}Q_{3}(0)}\rangle_{\psi\otimes\xi_{a_{21},a_{22}}}$
$\displaystyle=\langle\psi|e^{i(k_{1}+a_{21}k_{2})Q_{1}}\psi\rangle\langle\xi_{2}|e^{i(a_{22}k_{2})Q_{2}}\xi_{2}\rangle\langle\xi_{3}|e^{i(a_{23}k_{2})Q_{3}}\xi_{3}\rangle$
$\displaystyle=e^{iq_{1}(k_{1}+a_{21}k_{2})-\frac{1}{2}\sigma_{1}^{2}(k_{1}+a_{21}k_{2})^{2}}$
$\displaystyle\hskip 14.22636pt\times
e^{i\frac{1-a_{21}}{a_{22}}q_{1}(a_{22}k_{2})-\frac{1}{2}\left|\frac{a_{23}}{a_{22}}\right|\sigma_{1}^{2}(a_{22}k_{2})^{2}}e^{-\frac{1}{2}\left|\frac{a_{22}}{a_{23}}\right|\sigma_{1}^{2}(a_{23}k_{2})^{2}}$
$\displaystyle=e^{iq_{1}k_{1}+iq_{1}k_{2}-\frac{1}{2}\sigma_{1}^{2}\left\\{(k_{1}+a_{21}k_{2})^{2}+2|a_{22}a_{23}|k_{2}^{2}\right\\}}$
$\displaystyle=\lambda_{W,q}(k)$ (68)
for all $k=\left(\begin{array}[]{c}k_{1}\\\
k_{2}\end{array}\right)\in\mathbb{R}^{2}$, where
$q=\left(\begin{array}[]{c}q_{1}\\\ q_{1}\end{array}\right)$ and
$W=\sigma_{1}^{2}\left(\begin{array}[]{cc}1&a_{21}\\\
a_{21}&a_{21}\end{array}\right)$. Here we used $0<a_{21}<1$,
$a_{21}(1-a_{21})+2a_{22}a_{23}=0$, which is obtained from the condition
$(iii)$ of the theorem and $a_{21}b_{31}+2a_{22}a_{23}=0$, and the relation
$\langle\psi|e^{i(aQ_{1}+bP_{1})}\psi\rangle=e^{iq_{1}a-\frac{1}{2}\sigma_{1}^{2}a^{2}}e^{ip_{1}b-\frac{1}{2}\hat{\sigma}_{1}^{2}b^{2}}$
(69)
for all $a,b\in\mathbb{R}$. From $\det(W)=\sigma_{1}^{4}a_{21}(1-a_{21})$ and
$W^{-1}=\dfrac{1}{(1-a_{21})\sigma_{1}^{2}}\left(\begin{array}[]{cc}1&-1\\\
-1&1\end{array}\right)+\dfrac{1}{a_{21}\sigma_{1}^{2}}\left(\begin{array}[]{cc}0&0\\\
0&1\end{array}\right),$ (70)
we obtain
$p^{Q_{1}(0),Q_{2}(\tau)}_{\psi\otimes\xi_{a_{21},a_{22}}}(x,z)=p_{(1-a_{21})\sigma_{1}^{2}}(x-z)p_{a_{21}\sigma_{1}^{2}}(z-q_{1}).$
(71)
Eq. (38) is obtained from Eq. (71) for $\mathbb{X}_{\nu}$,
$\mathbb{Y}_{\nu}^{2}$, $\mathbb{Y}_{\nu}^{0}$ and $\mathbb{Z}_{\nu}$.
Similarly, we have
$\displaystyle p^{P_{1}(0),P_{3}(\tau)}_{\psi\otimes\xi_{a_{21},a_{22}}}(y,w)$
$\displaystyle=p_{a_{21}\hat{\sigma}_{1}^{2}}(y-w)p_{(1-a_{21})\hat{\sigma}_{1}^{2}}(w-p_{1}),$
(72) $\displaystyle
p^{Q_{2}(\tau),P_{3}(\tau)}_{\psi\otimes\xi_{a_{21},a_{22}}}(z,w)$
$\displaystyle=p_{a_{21}\sigma_{1}^{2}}(z-q_{1})p_{(1-a_{21})\hat{\sigma}_{1}^{2}}(w-p_{1}).$
(73)
In particular, Eqs. (39) and (40) are derived in the same way.
Next, for every $\nu\in(0,1)$, we find the family of posterior states for
$(\mathbb{Y}_{\nu}^{0},\psi)$. We check the following probability density
functions via their characteristic functions:
$\displaystyle\hskip
14.22636ptp^{Q_{1}(\tau),Q_{2}(\tau),P_{3}(\tau)}_{\psi\otimes\xi_{\nu,1}}(x,z,w)$
$\displaystyle=p_{\frac{(1-\nu)\sigma_{1}^{2}}{\nu}}\left(x-\frac{z-(1-\nu)q_{1}}{\nu}\right)$
$\displaystyle\hskip 56.9055pt\times
p_{\nu\sigma_{1}^{2}}(z-q_{1})p_{(1-\nu)\hat{\sigma}_{1}^{2}}(w-p_{1}),$ (74)
$\displaystyle\hskip
14.22636ptp^{P_{1}(\tau),Q_{2}(\tau),P_{3}(\tau)}_{\psi\otimes\xi_{\nu,1}}(y,z,w)$
$\displaystyle=p_{\frac{\nu\hat{\sigma}_{1}^{2}}{1-\nu}}\left(y-\frac{w-\nu
p_{1}}{1-\nu}\right)p_{\nu\sigma_{1}^{2}}(z-q_{1})p_{(1-\nu)\hat{\sigma}_{1}^{2}}(w-p_{1}).$
(75)
For example, the characteristic function
$\lambda^{Q_{1}(\tau),Q_{2}(\tau),P_{3}(\tau)}_{\psi\otimes\xi_{\nu,1}}$ of
$\mu^{Q_{1}(\tau),Q_{2}(\tau),P_{3}(\tau)}_{\psi\otimes\xi_{\nu,1}}$ is given
by
$\displaystyle\hskip
14.22636pt\lambda^{Q_{1}(\tau),Q_{2}(\tau),P_{3}(\tau)}_{\psi\otimes\xi_{\nu,1}}(k)$
$\displaystyle=\langle
e^{ik_{1}Q_{1}(\tau)+ik_{2}Q_{2}(\tau)+ik_{3}P_{3}(\tau)}\rangle_{\psi\otimes\xi_{\nu,1}}$
$\displaystyle=\langle\psi|e^{i(k_{1}+\nu
k_{2})Q_{1}+i(1-\nu)k_{3}P_{1}}\psi\rangle\langle\xi_{2}|e^{ik_{2}Q_{2}-i\frac{\nu(1-\nu)}{2}k_{3}P_{2}}\xi_{2}\rangle$
$\displaystyle\hskip
14.22636pt\times\langle\xi_{3}|e^{-i(1-\nu)k_{1}Q_{3}-i\frac{\nu(1-\nu)}{2}k_{2}Q_{3}+ik_{3}P_{3}}\xi_{3}\rangle$
$\displaystyle=e^{iq_{1}(k_{1}+\nu k_{2})-\frac{1}{2}\sigma_{1}^{2}(k_{1}+\nu
k_{2})^{2}}e^{ip_{1}(1-\nu)k_{3}-\frac{1}{2}\hat{\sigma}_{1}^{2}(1-\nu)^{2}k_{3}^{2}}$
$\displaystyle\hskip 14.22636pt\times
e^{i(1-\nu)q_{1}k_{2}-\frac{1}{2}\frac{\nu(1-\nu)}{2}\sigma_{1}^{2}k_{2}^{2}}e^{-\frac{1}{2}\frac{2}{\nu(1-\nu)}\hat{\sigma}_{1}^{2}\left(\frac{\nu(1-\nu)}{2}k_{3}\right)^{2}}$
$\displaystyle\hskip 14.22636pt\times
e^{-\frac{1}{2}\frac{2}{\nu(1-\nu)}\sigma_{1}^{2}\left(-(1-\nu)k_{1}-\frac{\nu(1-\nu)}{2}k_{2}\right)^{2}}e^{i\nu
p_{1}k_{3}-\frac{1}{2}\frac{\nu(1-\nu)}{2}\hat{\sigma}_{1}^{2}k_{3}^{2}}$
$\displaystyle=\lambda_{Z,q}(\tilde{k})\lambda_{(1-\nu)\hat{\sigma}_{1}^{2},p_{1}}(k_{3})$
(76)
for all $k=\left(\begin{array}[]{c}k_{1}\\\ k_{2}\\\
k_{3}\end{array}\right)\in\mathbb{R}^{3}$, where
$\tilde{k}=\left(\begin{array}[]{c}k_{1}\\\ k_{2}\end{array}\right)$ and
$Z=\sigma_{1}^{2}\left(\begin{array}[]{cc}\frac{2-\nu}{\nu}&1\\\
1&\nu\end{array}\right)$. From $\det Z=(1-\nu)\sigma_{1}^{4}$ and
$Z^{-1}=\frac{\nu}{(1-\nu)\sigma_{1}^{2}}\left(\begin{array}[]{cc}1&-\frac{1}{\nu}\\\
-\frac{1}{\nu}&\frac{1}{\nu^{2}}\end{array}\right)+\frac{1}{\nu\sigma_{1}^{2}}\left(\begin{array}[]{cc}0&0\\\
0&1\end{array}\right),$ (77)
we obtain Eq. (74). The relation
$\dfrac{(1-\nu)\sigma_{1}^{2}}{\nu}\cdot\dfrac{\nu\hat{\sigma}_{1}^{2}}{1-\nu}=\dfrac{\hbar^{2}}{4}$
implies that the family of posterior states for $(\mathbb{Y}_{\nu}^{0},\psi)$
is given by Eq. (45) and is unique up to phase. For every $\nu\in(0,1)$, the
family of posterior states for $(\mathbb{Z}_{\nu},\psi)$ is derived in the
same way.
For every rectangular (or, more generally, Borel) subset $J$ of $\mathbb{R}^{2}$,
we then obtain the state $\rho_{J}$ after the measurement under the condition
that output values not contained in $J$ are excluded; it is given by
$\mathrm{Tr}[X\rho_{J}]=\dfrac{\langle
U(\tau)(\psi\otimes\xi)|XE(J)U(\tau)(\psi\otimes\xi)\rangle}{\langle
U(\tau)(\psi\otimes\xi)|E(J)U(\tau)(\psi\otimes\xi)\rangle}$ (78)
whenever $\langle U(\psi\otimes\xi)|(1\otimes
E(J))U(\psi\otimes\xi)\rangle\neq 0$. Here $E$ is the spectral measure of
$\mathbb{R}^{2}$ on $L^{2}(\mathbb{R}^{2})$ such that $E(J_{1}\times
J_{2})=E^{Q_{2}}(J_{1})E^{P_{3}}(J_{2})$ for all Borel sets $J_{1},J_{2}$ of
$\mathbb{R}$. For every $\nu\in(0,1)$, the family
$\\{\psi_{y}\\}_{y\in\mathbb{R}^{2}}$ of posterior states for
$(\mathbb{Y}^{0}_{\nu},\psi)$ satisfies
$\rho_{J}=\dfrac{1}{\mu_{V_{\nu},r}(J)}\int_{J}|\psi_{y}\rangle\langle\psi_{y}|\;d\mu_{V_{\nu},r}(y)$
(79)
for every Borel set $J$ of $\mathbb{R}^{2}$, where
$V_{\nu}=\left(\begin{array}[]{cc}\nu\sigma_{1}^{2}&0\\\
0&(1-\nu)\hat{\sigma}_{1}^{2}\end{array}\right)$ and
$r=\left(\begin{array}[]{c}q_{1}\\\ p_{1}\end{array}\right)$.
## VI Summary and Perspectives
We have given a necessary and sufficient condition for a linear simultaneous
$(Q_{1},P_{1})$-measurement to satisfy Eq. (4), and constructed four families
$\\{\mathbb{X}_{\nu}\\}_{(0,1)}$, $\\{\mathbb{Y}_{\nu}^{2}\\}_{(0,1)}$,
$\\{\mathbb{Y}_{\nu}^{0}\\}_{(0,1)}$ and $\\{\mathbb{Z}_{\nu}\\}_{(0,1)}$ of
linear simultaneous $(Q_{1},P_{1})$-measurements. Furthermore, we have derived
the probability distributions that arise when using
$\{\mathbb{X}_{\nu}\}_{(0,1)}$, $\{\mathbb{Y}_{\nu}^{2}\}_{(0,1)}$,
$\{\mathbb{Y}_{\nu}^{0}\}_{(0,1)}$ and $\{\mathbb{Z}_{\nu}\}_{(0,1)}$, as well
as the families of posterior states for $\{\mathbb{Y}_{\nu}^{0}\}_{(0,1)}$ and
$\{\mathbb{Z}_{\nu}\}_{(0,1)}$. We
believe that the results of the paper have important implications for future
research on simultaneous measurements. Despite their importance, there have
been relatively few studies of simultaneous measurements of position and
momentum since Heisenberg’s paper. Indeed, this paper shows that there is still
room for studying simultaneous measurements of position and momentum, and the
same holds for simultaneous measurements of different components of spin. It is
desirable to study simultaneous measurements more actively, in connection with
the recent progress on uncertainty relations; we believe that such quantitative
analysis will contribute significantly to the resolution of various problems in
the field of quantum information. In the future, the theory of simultaneous
measurements of incompatible observables may find wide application in quantum
information processing.
###### Acknowledgements.
The author thanks Prof. Motoichi Ohtsu and Prof. Fumio Hiroshima for their
warm encouragement.
## References
* Heisenberg [1927] W. Heisenberg, Über den anschaulichen inhalt der quantentheoretischen kinematik und mechanik, Z. Phys. 43, 172 (1927).
* von Neumann [2018] J. von Neumann, _Mathematical foundations of quantum mechanics: New edition_ (Princeton UP, Princeton, 2018).
* Ozawa [2003a] M. Ozawa, Universally valid reformulation of the Heisenberg uncertainty principle on noise and disturbance in measurement, Phys. Rev. A 67, 042105 (2003a).
* Ozawa [2003b] M. Ozawa, Physical content of Heisenberg’s uncertainty relation: limitation and reformulation, Phys. Lett. A 318, 21 (2003b).
* Ozawa [2004a] M. Ozawa, Uncertainty relations for joint measurements of noncommuting observables, Phys. Lett. A 320, 367 (2004a).
* Ozawa [2004b] M. Ozawa, Uncertainty relations for noise and disturbance in generalized quantum measurements, Ann. Phys. (N.Y.) 311, 350 (2004b).
* Hall [2004] M. J. Hall, Prior information: How to circumvent the standard joint-measurement uncertainty relation, Phys. Rev. A 69, 052113 (2004).
* Busch _et al._ [2007] P. Busch, T. Heinonen, and P. Lahti, Heisenberg’s uncertainty principle, Phys. Rep. 452, 155 (2007).
* Lund and Wiseman [2010] A. Lund and H. Wiseman, Measuring measurement-disturbance relationships with weak values, New J. Phys. 12, 093011 (2010).
* Branciard [2013] C. Branciard, Error-tradeoff and error-disturbance relations for incompatible quantum measurements, Proc. Nat. Acad. Sci. 110, 6742 (2013).
* Branciard [2014] C. Branciard, Deriving tight error-trade-off relations for approximate joint measurements of incompatible quantum observables, Phys. Rev. A 89, 022124 (2014).
* Ozawa [2014] M. Ozawa, Error-disturbance relations in mixed states (2014), preprint at https://arxiv.org/abs/1404.3388.
* Busch _et al._ [2013] P. Busch, P. Lahti, and R. F. Werner, Proof of Heisenberg’s error-disturbance relation, Phys. Rev. Lett. 111, 160405 (2013).
* Busch _et al._ [2014a] P. Busch, P. Lahti, and R. F. Werner, Colloquium: Quantum root-mean-square error and measurement uncertainty relations, Rev. Mod. Phys. 86, 1261 (2014a).
* Busch _et al._ [2014b] P. Busch, P. Lahti, and R. F. Werner, Measurement uncertainty relations, J. Math. Phys. 55, 042111 (2014b).
* Ozawa [2013] M. Ozawa, Disproving Heisenberg’s error-disturbance relation (2013), arXiv:1308.3540 [quant-ph] .
* Buscemi _et al._ [2014] F. Buscemi, M. J. Hall, M. Ozawa, and M. M. Wilde, Noise and disturbance in quantum measurements: an information-theoretic approach, Phys. Rev. Lett. 112, 050401 (2014).
* Dressel and Nori [2014] J. Dressel and F. Nori, Certainty in Heisenberg’s uncertainty principle: revisiting definitions for estimation errors and disturbance, Phys. Rev. A 89, 022106 (2014).
* Korzekwa _et al._ [2014] K. Korzekwa, D. Jennings, and T. Rudolph, Operational constraints on state-dependent formulations of quantum error-disturbance trade-off relations, Phys. Rev. A 89, 052108 (2014).
* Erhart _et al._ [2012] J. Erhart, S. Sponar, G. Sulyok, G. Badurek, M. Ozawa, and Y. Hasegawa, Experimental demonstration of a universally valid error-disturbance uncertainty relation in spin measurements, Nature Phys. 8, 185 (2012).
* Sulyok _et al._ [2013] G. Sulyok, S. Sponar, J. Erhart, G. Badurek, M. Ozawa, and Y. Hasegawa, Violation of Heisenberg’s error-disturbance uncertainty relation in neutron-spin measurements, Phys. Rev. A 88, 022110 (2013).
* Baek _et al._ [2013] S.-Y. Baek, F. Kaneda, M. Ozawa, and K. Edamatsu, Experimental violation and reformulation of the Heisenberg’s error-disturbance uncertainty relation, Sci. Rep. 3, 2221 (2013).
* Kaneda _et al._ [2014] F. Kaneda, S.-Y. Baek, M. Ozawa, and K. Edamatsu, Experimental test of error-disturbance uncertainty relations by weak measurement, Phys. Rev. Lett. 112, 020402 (2014).
* Ringbauer _et al._ [2014] M. Ringbauer, D. N. Biggerstaff, M. A. Broome, A. Fedrizzi, C. Branciard, and A. G. White, Experimental joint quantum measurements with minimum uncertainty, Phys. Rev. Lett. 112, 020401 (2014).
* Sulyok _et al._ [2015] G. Sulyok, S. Sponar, B. Demirel, F. Buscemi, M. J. Hall, M. Ozawa, and Y. Hasegawa, Experimental test of entropic noise-disturbance uncertainty relations for spin-1/2 measurements, Phys. Rev. Lett. 115, 030401 (2015).
* Demirel _et al._ [2016] B. Demirel, S. Sponar, G. Sulyok, M. Ozawa, and Y. Hasegawa, Experimental test of residual error-disturbance uncertainty relations for mixed spin-$1/2$ states, Phys. Rev. Lett. 117, 140402 (2016).
* Demirel _et al._ [2019] B. Demirel, S. Sponar, A. A. Abbott, C. Branciard, and Y. Hasegawa, Experimental test of an entropic measurement uncertainty relation for arbitrary qubit observables, New J. Phys. 21, 013038 (2019).
* Liu _et al._ [2019a] Y. Liu, Z. Ma, H. Kang, D. Han, M. Wang, Z. Qin, X. Su, and K. Peng, Experimental test of error-tradeoff uncertainty relation using a continuous-variable entangled state, npj Quantum Inf. 5, 68 (2019a).
* Liu _et al._ [2019b] Y. Liu, H. Kang, D. Han, X. Su, and K. Peng, Experimental test of error-disturbance uncertainty relation with continuous variables, Photonics Res. 7, A56 (2019b).
* Okamura [2020] K. Okamura, Linear position measurements with minimum error-disturbance in each minimum uncertainty state (2020), arXiv:2012.12707 [quant-ph] .
* Ozawa [1990] M. Ozawa, Quantum-mechanical models of position measurements, Phys. Rev. A 41, 1735 (1990).
* Ozawa [1988] M. Ozawa, Measurement breaking the standard quantum limit for free-mass position, Phys. Rev. Lett. 60, 385 (1988).
* Caves _et al._ [1980] C. M. Caves, K. S. Thorne, R. W. Drever, V. D. Sandberg, and M. Zimmermann, On the measurement of a weak classical force coupled to a quantum-mechanical oscillator. I. Issues of principle, Rev. Mod. Phys. 52, 341 (1980).
* Yuen [1983] H. P. Yuen, Contractive states and the standard quantum limit for monitoring free-mass positions, Phys. Rev. Lett. 51, 719 (1983).
* Caves [1985] C. M. Caves, Defense of the standard quantum limit for free-mass position, Phys. Rev. Lett. 54, 2465 (1985).
* Ozawa [1989] M. Ozawa, Realization of measurement and the standard quantum limit, in _Squeezed and Nonclassical Light_ (Springer, 1989) pp. 263–286.
* Maddox [1988] J. Maddox, Beating the quantum limits (cont’d), Nature 331, 559 (1988).
* Arthurs and Kelly [1965] E. Arthurs and J. Kelly, On the simultaneous measurement of a pair of conjugate observables, The Bell Sys. Tech. J. 44, 725 (1965).
* Gauss [1821] C. F. Gauss, Theoria combinationis observationum erroribus minimis obnoxiae, pars prior, in _Commentationes Societatis Regiae Scientiarum Gottingensis Recentiores V (Classis Mathematicae)_ (societati regiae exhibita, febr. 15, 1821) English translation: Theory of the Combination of Observations Least Subject to Errors, Part One, Part Two, Supplement, translated by G.W. Stewart (SIAM, Philadelphia, PA, 1995), https://epubs.siam.org/doi/pdf/10.1137/1.9781611971248 .
* Ozawa [1985] M. Ozawa, Conditional probability and a posteriori states in quantum mechanics, Publ. Res. Inst. Math. Sci. 21, 279 (1985).
* Okamura and Ozawa [2016] K. Okamura and M. Ozawa, Measurement theory in local quantum physics, J. Math. Phys. 57, 015209 (2016).
* Ozawa [2019] M. Ozawa, Soundness and completeness of quantum root-mean-square errors, npj Quantum Inf. 5, 1 (2019).
# Probabilistic Inference for Learning from Untrusted Sources
Duc Thien Nguyen, Shiau Hong Lim, Laura Wynter, Desmond Cai
The authors are with IBM Research, Singapore.
###### Abstract
Federated learning brings potential benefits of faster learning, better
solutions, and a greater propensity to transfer when heterogeneous data from
different parties increases diversity. However, because federated learning
tasks tend to be large and complex, and training times non-negligible, it is
important for the aggregation algorithm to be robust to non-IID data and
corrupted parties. This robustness relies on the ability to identify, and
appropriately weight, incompatible parties. Recent work assumes that a
reference dataset is available through which to perform the identification. We
consider settings where no such reference dataset is available; rather, the
quality and suitability of the parties needs to be inferred. We do so by
bringing ideas from crowdsourced predictions and collaborative filtering,
where one must infer an unknown ground truth given proposals from participants
with unknown quality. We propose novel federated learning aggregation
algorithms based on Bayesian inference that adapt to the quality of the
parties. Empirically, we show that the algorithms outperform standard and
robust aggregation in federated learning on both synthetic and real data.
## Introduction
For deep neural networks to address more complex tasks in the future it is
likely that the participation of multiple users, and hence multiple sources of
data, will need become more widespread. This practice has been widely used in
object recognition Li and Deng (2019); Sohn et al. (2011); Li et al. (2016);
Rahimpour et al. (2016); Paris et al. (2015), but less so in domains such as
finance, medicine, prediction markets, internet of things, etc. Federated
learning, as defined by McMahan et al. (2017b) is an answer to the problem of
training complex, heterogeneous tasks. It involves distributing model training
across a number of parties in a centralized manner while taking into account
communication requirements, over potentially remote or mobile devices, privacy
concerns requiring that data remains at the remote location, and the lack of
balanced or IID data across parties.
One challenge in federated learning, as noted by Kairouz et al. (2019), is the
quality and data distribution of the sources being used for the training
tasks. A related challenge is the potential for random failures or adversarial
parties to disrupt the federated training. For these reasons, robust federated
learning has seen a flurry of activity Sattler et al. (2019); Bhagoji et al.
(2019); Mohri, Sivek, and Suresh (2019); Ghosh et al. (2019); Pillutla,
Kakade, and Harchaoui (2019b). Some, like Alistarh, Allen-Zhu, and Li (2018);
Bhagoji et al. (2019); Xie, Koyejo, and Gupta (2018) focus on the adversarial
setting, and others, like Konstantinov and Lampert (2019); Pillutla, Kakade,
and Harchaoui (2019b), focus on the general setting of distributed learning
under different source distributions. In both cases, this requires identifying
the weight with which to include each party in the aggregation.
Konstantinov and Lampert (2019) proposed to give the aggregator a reference
dataset with which to measure the quality of each party update. Like
Konstantinov and Lampert (2019), we explore the question of efficient
federated learning with unequal and possibly untrusted parties. However, the
assumption of access to a reference dataset is, for many real-world problems,
problematic. Consider a federation of medical diagnosis facilities, each with
its own patient population. Not only would it violate privacy concerns to
generate a reference dataset but it would not in fact be feasible. The same
problem arises in virtually any real-world domain for which federated learning
offers an appealing solution.
We propose instead to adapt inference methods from collaborative filtering Cai
et al. (2020) to the problem of heterogeneous federated learning aggregation.
Using a Gaussian model, we model each party’s estimate as a noisy observation
of an unknown ground truth and define new probabilistic inference algorithms
to iteratively estimate the ground truth. We show that the estimated ground
truth is robust to faulty and poor quality data. Specifically, the
contributions of this work are as follows:
* •
We provide a maximum likelihood estimator of the uncertainty level of each
party in a federated learning training task. The estimator gives rise to an
appropriate weighting for each party in each aggregation. When each party’s
data sample is independent, the estimator reduces to the standard averaging
scheme of McMahan et al. (2017a); in the more general case of overlapping
samples, it offers a new maximum likelihood estimator.
* •
We define two new algorithms for federated learning that make use of the MLE:
an inverse variance weighting and an inverse covariance weighting scheme.
* •
As the maximum likelihood estimator can overfit when the available data is
scarce and tends to be computationally expensive for the inverse covariance
scheme, we define a new Variational Bayesian (VB) approach to approximate the
posterior distributions of the ground truth under both independent and latent
noise models.
Both the MLE and VB methods are tested on synthetic and real datasets; the
tests show the superiority of aggregation with probabilistic inference over
standard baselines including the mean and the more robust median-based
approaches: geometric median and coordinate-wise median.
[Figure 1: Linear regression. ICOV and IVAR outperform other methods when there are adversaries. Panels: (a) full participation, full batch (300 samples); (b) full participation, mini batch (32 samples); (c) partial participation, 3 random parties per round, full batch (300 samples).]
[Figure 2: Adversarial MNIST testing performance. ICOV and IVAR outperform other methods with adversaries. Panels: (a) 5 genuine parties, 0 adversaries; (b) 5 genuine parties, 5 adversaries; (c) 5 genuine parties, 10 adversaries.]
[Figure 3: Adversarial Shakespeare testing performance. ICOV and IVAR outperform other methods with adversaries. Panels: (a) 5 genuine parties, 0 adversaries; (b) 5 genuine parties, 5 adversaries; (c) 5 genuine parties, 10 adversaries.]
## Related work
### Robust Federated Learning
Konstantinov and Lampert (2019) propose a method for federated classification
and regression using a reference dataset with which to weight the parties in
the federation, in a manner similar to that of Song et al. (2018) for single-
party, i.e. non-federated, training. They aggregate the parties using either
the geometric median or the component-wise version thereof. Some methods such
as Xie, Koyejo, and Gupta (2018) score the contribution of each party and then
accept only those up to a threshold. Pillutla, Kakade, and Harchaoui (2019b)
propose a stable variant of the geometric median algorithm for model parameter
aggregation. The authors argue that parameter aggregation, as opposed to
gradient aggregation, allows for more computation to occur on the devices and
that assumptions on the distributions of parameters are easier to interpret.
In our work we provide a mechanism to estimate the ground truth values for
each party in a manner that applies to both gradients and model parameters.
A number of works such as Alistarh, Allen-Zhu, and Li (2018); Blanchard et al.
(2017); Yin et al. (2018a); Bhagoji et al. (2019); Chen et al. (2018) study
the byzantine setting with assumptions on the maximum number of adversarial
parties, but do not in general consider the case of unbalanced data. Blanchard
et al. (2017) propose a novel aggregation mechanism based on the distance of a
party’s gradients to other gradients. Li et al. (2019) address the byzantine
setting with non-iid data by penalizing the difference between local and
global parameters, but do not consider unbalanced data. Chen et al. (2018)
offer strong guarantees but under rather strong assumptions on the collusion
of the parties, running contrary to most privacy requirements, and requiring
significant redundancy with each party computing multiple gradients. Portnoy
and Hendler (2020) are concerned with unbalanced data in a byzantine setting
where parties erroneously report the sample size, and so propose to truncate
weights reported by the parties to bound the impact of byzantine parties.
### Collaborative Filtering
One of the earliest efforts in collaborative filtering was that of Dawid and
Skene (1979) who proposed a Bayesian inference algorithm to aggregate
individual worker labels and infer the ground truth in categorical labelling.
Their approach defined the two main components of a collaborative filtering
algorithm: estimating the reliability of each worker, and inferring the true
label of each instance. They applied expectation maximization and estimated
the ground truth in the E-step. Then, using the estimated ground truth, they
compute the maximum likelihood estimates of the confusion matrix in the
M-step. In continuous value labelling, Raykar et al. (2010) modeled each
worker prediction as an independent noisy observation of the ground truth.
Based on this independent noise assumption, Raykar et al. (2010) developed a
counterpart to the Dawid-Skene framework for the continuous domain to infer
both the unknown individual variance and the ground truth. In their M-step,
the variance, which corresponds to the confusion matrix in categorical
labelling, is computed to minimize the mean square error with respect to the
estimated ground truth. Their E-step involves re-estimating the ground truth
with a weighted sum of the individual predictions, where the weights are set
as the inverses of individual variances. Liu, Peng, and Ihler (2012) point to
the risk of convergence to a poor-quality local optimum of the above-mentioned
EM approaches and propose a variational approach for the problem. Welinder et
al. (2010) model each worker as a multi-dimensional quantity including bias
and other factors, and group them as a function of those quantities. In
federated learning, a party may also be considered to have a multidimensional
set of attributes. In collaborative filtering, workers seldom participate in
all of the tasks. This sparsity motivates the application of matrix
factorization techniques. Federated learning also may exhibit this
characteristic: if a party does not participate in all training rounds for
reasons of latency, or suffers a failure, the result would be similar to the
sparsity found in collaborative filtering. In continuous applications parties
may exhibit correlations in their estimates. Li, Rubinstein, and Cohn (2019),
in the context of crowdsourced classification, showed that the incorporation
of cross-worker correlations significantly improves accuracy. That work relies
on an extension of the (independent) Bayesian Classifier Combination model of
Kim and Ghahramani (2012) in which worker correlation is modeled by
representing true classes by mixtures of subtypes and motivates our inverse
covariance scheme.
## Problem Setup and Inference Models
Consider a global loss function
$F(\mathbf{w})=\mathbb{E}_{\mathbf{z}}f(\mathbf{z};\mathbf{w})$
where $\mathbf{w}$ is the parameter of interest and $\mathbb{E}$ denotes the
expectation with respect to $\mathbf{z}\sim\mathcal{P}$ for some unknown
distribution $\mathcal{P}$. In a federated learning setting, each worker party
has access to samples from $\mathcal{P}$ and wishes to jointly minimize
$F(\mathbf{w})$ without revealing the local samples. Beginning with some
initial $\mathbf{w}=\mathbf{w}_{0}$, learning happens over single or multiple
rounds where each worker party submits a local update to a central aggregator.
The local update can be in the form of model parameter $\mathbf{w}$ or
gradient $\nabla_{\mathbf{w}}F(\mathbf{w})$.
Each round of such updates is considered a task; we use $i=1,\ldots,I$ to
index such tasks. Workers are indexed by $j=1,\ldots,J$. We do not assume full
participation in every update round, and use $J_{i}\subset\\{1\ldots J\\}$ to
denote the set of participating workers for task $i$. Similarly, let
$I_{j}\subset\\{1\ldots I\\}$ denote the set of tasks in which worker $j$
participates. Note that the term worker and party are synonymous, as both are
used in the federated learning setting. In task $i$, each worker $j\in J_{i}$
sends an update $\mathbf{x}_{ij}$ to the aggregator. We make the following
assumption regarding $\mathbf{x}_{ij}$:
###### Assumption 1.
The local update $\mathbf{x}_{ij}$ follows a Gaussian distribution
$\mathbf{x}_{ij}\sim\mathcal{N}(\mathbf{y}_{i},\Sigma_{j})$.
We argue that the assumption is well-founded through the following examples.
###### Example 1.
Consider a learning scheme where each update to $\mathbf{w}$ computes an
estimate of the global gradient
$\nabla_{\mathbf{w}}F=\mathbb{E}_{\mathbf{z}}\nabla_{\mathbf{w}}f(\mathbf{z};\mathbf{w})$.
Suppose that each worker $j$ has access to a sample $\mathcal{D}_{j}$ of
independent examples from $\mathcal{P}$ and computes
$\mathbf{x}_{ij}=\frac{1}{|\mathcal{D}_{j}|}\sum_{\mathbf{z}\in\mathcal{D}_{j}}\nabla_{\mathbf{w}}f(\mathbf{z};\mathbf{w})$.
Let $\mathbf{y}_{i}=\mathbb{E}[\nabla_{\mathbf{w}}f(\mathbf{z};\mathbf{w})]$
and $\Sigma=\mathrm{Cov}[\nabla_{\mathbf{w}}f(\mathbf{z};\mathbf{w})]$. By the
central limit theorem, as $|\mathcal{D}_{j}|\to\infty$, $\mathbf{x}_{ij}$
approaches $\mathcal{N}(\mathbf{y}_{i},\Sigma_{j})$ in distribution, with
$\Sigma_{j}=\frac{\Sigma}{|\mathcal{D}_{j}|}$.
###### Example 2.
Suppose that each local update is obtained by finding the maximum likelihood
estimator for a linear model
$\mathbf{z}_{j}=H_{j}\mathbf{y}_{i}+\mathbf{\epsilon}_{j}$ where
$(H_{j},\mathbf{z}_{j})$ contains the observed local data. Assuming that
$H_{j}$ is fixed while $\mathbf{\epsilon}_{j}$ follows a Gaussian distribution
$\mathcal{N}(0,\sigma^{2}\mathbf{I})$, then the least-squares solution, given
by $\mathbf{x}_{ij}=(H_{j}^{\top}H_{j})^{-1}H_{j}^{\top}\mathbf{z}_{j}$ also
follows a Gaussian $\mathcal{N}(\mathbf{y}_{i},\Sigma_{j})$ where
$\Sigma_{j}=\sigma^{2}(H_{j}^{\top}H_{j})^{-1}$.
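A brief numerical check of Example 2 (a NumPy sketch; the design matrix, noise level and ground truth below are arbitrary choices) confirms that the covariance of the least-squares updates matches $\sigma^{2}(H_{j}^{\top}H_{j})^{-1}$:

```python
import numpy as np

rng = np.random.default_rng(2)
H = rng.normal(size=(50, 3))            # fixed local design matrix H_j
y = np.array([1.0, -2.0, 0.5])          # ground truth y_i
sigma = 0.8

# Draw z_j = H y + eps repeatedly and form the least-squares update x_ij.
xs = np.array([
    np.linalg.lstsq(H, H @ y + rng.normal(0, sigma, 50), rcond=None)[0]
    for _ in range(20_000)
])

empirical = np.cov(xs.T)
theory = sigma**2 * np.linalg.inv(H.T @ H)   # Sigma_j from Example 2
print(np.max(np.abs(empirical - theory)))    # small
```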
Under Assumption 1, if we further suppose that each local sample is _independent_,
the maximum likelihood estimator (MLE) for $\mathbf{y}_{i}$ is given by
$\displaystyle\widehat{\mathbf{y}}_{i}$
$\displaystyle=\arg\max_{\mathbf{y}}\sum_{j}-(\mathbf{x}_{ij}-\mathbf{y})^{\top}\Sigma_{j}^{-1}(\mathbf{x}_{ij}-\mathbf{y})$
$\displaystyle=\big{(}\sum_{j}\Sigma_{j}^{-1}\big{)}^{-1}\sum_{j}\Sigma_{j}^{-1}\mathbf{x}_{ij}.$
(1)
In the case of Example 1, where $\Sigma_{j}=\frac{\Sigma}{|\mathcal{D}_{j}|}$,
Equation (1) reduces to
$\widehat{\mathbf{y}}_{i}=\frac{\sum_{j}|\mathcal{D}_{j}|\mathbf{x}_{ij}}{\sum_{j}|\mathcal{D}_{j}|}.$
(2)
This justifies the standard averaging scheme in federated learning (McMahan et
al., 2017a). Note that even under the Gaussian assumption, the standard
averaging scheme is the MLE only when each worker has independent samples.
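The following sketch (NumPy; the function name and values are ours, for illustration) implements the inverse-covariance MLE of Eq. (1) and checks that, with $\Sigma_{j}=\Sigma/|\mathcal{D}_{j}|$ as in Example 1, it coincides with the sample-size weighting of Eq. (2):

```python
import numpy as np

def mle_aggregate(xs, covs):
    """Inverse-covariance weighted MLE of y_i, Eq. (1).

    xs:   list of J local update vectors x_ij
    covs: list of J covariance matrices Sigma_j
    """
    precisions = [np.linalg.inv(S) for S in covs]
    A = sum(precisions)
    b = sum(P @ x for P, x in zip(precisions, xs))
    return np.linalg.solve(A, b)

# With Sigma_j = Sigma / |D_j| (Example 1), Eq. (1) reduces to Eq. (2):
rng = np.random.default_rng(3)
Sigma = np.array([[1.0, 0.3], [0.3, 2.0]])
sizes = [100, 300, 50]
xs = [rng.normal(size=2) for _ in sizes]
covs = [Sigma / n for n in sizes]
fedavg = sum(n * x for n, x in zip(sizes, xs)) / sum(sizes)
assert np.allclose(mle_aggregate(xs, covs), fedavg)
```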
In general, if $\mathbf{x}_{ij}$ and $\mathbf{x}_{ij^{\prime}}$ are not
independent, the MLE for $\mathbf{y}_{i}$ will be more complicated. Consider
the simpler case where each component in $\mathbf{x}_{ij}$, denoted
$x_{ij}^{k}$ for $k=1\ldots K$, is independent across $k$, fixing $i,j$. On
the other hand, they may be correlated among the workers, i.e. across $j$
fixing $i,k$. Assumption 1 specializes to:
###### Assumption 2.
The local update $\mathbf{x}_{ij}$ follows a Gaussian distribution
$\mathbf{x}_{ij}\sim\mathcal{N}(\mathbf{y}_{i},\sigma_{j}^{2}\mathbf{I})$.
Furthermore, let $\Phi$ be a $J\times J$ covariance matrix where
$\Phi_{j,j}=\sigma_{j}^{2}$ and
$\Phi_{j,j^{\prime}}=\mathrm{Cov}(x_{ij}^{k},x_{ij^{\prime}}^{k})$ for all $k$
and $j\neq j^{\prime}$. The vector $\mathbf{x}_{i,:}^{k}=[x_{i1}^{k}\ldots
x_{iJ}^{k}]^{\top}$ follows a Gaussian distribution
$\mathbf{x}_{i,:}^{k}\sim\mathcal{N}(y_{i}^{k}\mathbf{1},\Phi)$.
The MLE for $\mathbf{y}_{i}$ and $\Phi$ under this setting is given by:
###### Proposition 1.
Under Assumption 2, let $X_{i,\mathbf{j}_{i}}$ be the matrix whose columns are
$\mathbf{x}_{ij}$ for participating workers $j\in J_{i}$ and
$\Phi_{\mathbf{j}_{i}}$ the corresponding submatrix of $\Phi$. The MLE for
$\mathbf{y}_{i}$ (fixing $\Phi_{\mathbf{j}_{i}}$) and $\Phi_{\mathbf{j}_{i}}$
(fixing $\mathbf{y}_{i}$) are given, respectively, by
$\widehat{\mathbf{y}}_{i}=\frac{X_{i,\mathbf{j}_{i}}\Phi_{\mathbf{j}_{i}}^{-1}\mathbf{1}}{\mathbf{1}^{\top}\Phi_{\mathbf{j}_{i}}^{-1}\mathbf{1}}$
(3)
and
$\widehat{\Phi}_{\mathbf{j}_{i}}=\frac{1}{K}(X_{i,\mathbf{j}_{i}}-\mathbf{y}_{i}\mathbf{1}^{\top})^{\top}(X_{i,\mathbf{j}_{i}}-\mathbf{y}_{i}\mathbf{1}^{\top}).$
(4)
###### Proof.
Let $\mathbf{x}_{i,\mathbf{j}_{i}}^{k}$ be the (column) vector corresponding to
the $k$-th row of $X_{i,\mathbf{j}_{i}}$. Under Assumption 2, we have that
$\mathbf{x}_{i,\mathbf{j}_{i}}^{k}\sim\mathcal{N}(y_{i}^{k}\mathbf{1},\Phi_{\mathbf{j}_{i}})$.
The log-likelihood for $\mathbf{x}_{i,\mathbf{j}_{i}}^{k}$ is given by
$\displaystyle\log
p(\mathbf{x}_{i,\mathbf{j}_{i}}^{k}|y_{i}^{k},\Phi_{\mathbf{j}_{i}})$
$\displaystyle=$
$\displaystyle\frac{1}{2}\log|\Phi_{\mathbf{j}_{i}}^{-1}|-\frac{1}{2}(\mathbf{x}_{i,\mathbf{j}_{i}}^{k}-y_{i}^{k}\mathbf{1})^{\top}\Phi_{\mathbf{j}_{i}}^{-1}(\mathbf{x}_{i,\mathbf{j}_{i}}^{k}-y_{i}^{k}\mathbf{1})+c$
for $c$ constant. The MLE can be obtained by computing
$\frac{\partial}{\partial y_{i}^{k}}\log
p(\mathbf{x}_{i,\mathbf{j}_{i}}^{k}|y_{i}^{k},\Phi_{\mathbf{j}_{i}})$ and
$\frac{\partial}{\partial(\Phi_{\mathbf{j}_{i}}^{-1})}\log
p(\mathbf{x}_{i,\mathbf{j}_{i}}^{k}|y_{i}^{k},\Phi_{\mathbf{j}_{i}})$
respectively and finding the stationary points. ∎
###### Remark 1.
Note that under Assumption 2, $\Phi$ is shared by all tasks $i=1\ldots I$.
Equation (4) can therefore be extended to use the data across multiple tasks,
resulting in the following update for all $j,j^{\prime}$:
$\widehat{\Phi}_{j,j^{\prime}}=\frac{1}{K|I_{j}\cap I_{j^{\prime}}|}\sum_{i\in
I_{j}\cap
I_{j^{\prime}}}(\mathbf{x}_{ij}-\mathbf{y}_{i})^{\top}(\mathbf{x}_{ij^{\prime}}-\mathbf{y}_{i}).$
Let us go back to Example 1 where each local update $\mathbf{x}_{ij}$ is the
average of independent examples from $\mathcal{D}_{j}$ but for any two workers
$j\neq j^{\prime}$, $\mathcal{D}_{j}$ and $\mathcal{D}_{j^{\prime}}$ can
_overlap_. We have:
###### Proposition 2.
Under Assumption 2, let
$\mathbf{x}_{ij}=\frac{1}{|\mathcal{D}_{j}|}\sum_{\mathbf{g}\in\mathcal{D}_{j}}\mathbf{g}$
where $\mathbf{g}\sim\mathcal{N}(\mathbf{y}_{i},\sigma^{2}\mathbf{I})$. Assume
that for each $j$, all $\mathbf{g}\in\mathcal{D}_{j}$ are independent, but
$\mathcal{D}_{j}\cap\mathcal{D}_{j^{\prime}}$ may be non-empty for any $j\neq
j^{\prime}$. Then
$\Phi_{j,j^{\prime}}=\frac{|\mathcal{D}_{j}\cap\mathcal{D}_{j^{\prime}}|}{|\mathcal{D}_{j}||\mathcal{D}_{j^{\prime}}|}\sigma^{2}.$
(5)
###### Proof.
Fix a component $k$ of $\mathbf{g}$; we have that
$g^{k}\sim\mathcal{N}(y_{i}^{k},\sigma^{2})$. Let $|\mathcal{D}_{j}|=n_{1}+m$,
$|\mathcal{D}_{j^{\prime}}|=n_{2}+m$ and $n=n_{1}+n_{2}+m$. Draw $n$
independent examples $g^{k}_{1}\ldots g^{k}_{n}$ from
$\mathcal{N}(y_{i}^{k},\sigma^{2})$ such that
$\mathcal{D}_{j}^{k}=\\{g^{k}_{1}\ldots
g^{k}_{n_{1}},g^{k}_{n_{1}+n_{2}+1}\ldots g^{k}_{n_{1}+n_{2}+m}\\}$ and
$\mathcal{D}_{j^{\prime}}^{k}=\\{g^{k}_{n_{1}+1}\ldots
g^{k}_{n_{1}+n_{2}},g^{k}_{n_{1}+n_{2}+1}\ldots g^{k}_{n_{1}+n_{2}+m}\\}$.
Note that $m$ is the number of overlapping examples.
Let $\mathbf{x}=[g^{k}_{1}\ldots g^{k}_{n}]^{\top}$ and choose $A$ such that
$A\mathbf{x}=[x_{ij}^{k},x_{ij^{\prime}}^{k}]^{\top}$. We use the fact that
for a constant matrix $A$ and random vector $\mathbf{x}$,
$\mathrm{Cov}(A\mathbf{x})=A\mathrm{Cov}(\mathbf{x})A^{\top}$. Note that
$\mathrm{Cov}(\mathbf{x})=\sigma^{2}\mathbf{I}$. The result then follows by
inspecting the entries in $A\mathrm{Cov}(\mathbf{x})A^{\top}$. ∎
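Eq. (5) is straightforward to verify by simulation. In the sketch below (NumPy, with arbitrary sample sizes and overlap $m$), two workers average private and shared Gaussian examples, and the empirical covariance of their updates is compared with $m\sigma^{2}/(|\mathcal{D}_{j}||\mathcal{D}_{j^{\prime}}|)$:

```python
import numpy as np

rng = np.random.default_rng(4)
sigma, n1, n2, m = 1.5, 40, 60, 25       # m = number of overlapping examples
trials = 50_000

g1 = rng.normal(0, sigma, (trials, n1))  # private to worker j
g2 = rng.normal(0, sigma, (trials, n2))  # private to worker j'
gm = rng.normal(0, sigma, (trials, m))   # shared between the two workers

x_j  = np.concatenate([g1, gm], axis=1).mean(axis=1)   # |D_j|  = n1 + m
x_jp = np.concatenate([g2, gm], axis=1).mean(axis=1)   # |D_j'| = n2 + m

empirical = np.cov(x_j, x_jp)[0, 1]
theory = m / ((n1 + m) * (n2 + m)) * sigma**2          # Eq. (5)
print(empirical, theory)                               # close
```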
With overlapping local samples, one can solve the MLE of $\mathbf{y}_{i}$
using Equation (3) with $\Phi$ from Equation (5). If there is no overlap, then
we again obtain (2). In practice, however, it is unlikely that the aggregator
has access to the sample size as well as the sample overlap between any
workers. Our proposed approach is therefore to jointly estimate both
$\mathbf{y}_{i}$ _and_ the unknown $\Phi$ under Assumption 2. We present in
what follows two new methods for doing so. In the first we suppose that $\Phi$
is diagonal; this results in an Inverse Variance Weighting method, called
IVAR. In the second we estimate the full covariance matrix, $\Phi$, in what we
term Inverse Covariance Weighting, or ICOV.
### Inverse Variance Weighting
Inverse variance weighting has been used in collaborative filtering for
aggregation without a ground truth. It has an
appealing interpretation as the maximum-likelihood estimation under a bias-
variance model, based on the assumption that parties have independent additive
prediction noise Liu, Ihler, and Steyvers (2013); Raykar et al. (2010); Kara
et al. (2015). As such, the Gaussian model of Assumption 2 is a good
approximation.
We adapt this idea to federated learning as follows. Let the ground truth be
$\mathbf{y}_{i}$ for each $i$. Learning the full covariance matrix $\Phi$ can
be expensive if the number of parties $J$ is large. This justifies developing
a method that uses a diagonal matrix with $\Phi_{j,j^{\prime}}=0$ for $j\neq
j^{\prime}$. Then, the maximum likelihood aggregation can be computed as
follows:
###### Proposition 3.
Under Assumption 2, let $\Phi$ be diagonal. The MLE for $\mathbf{y}_{i}$
(fixing $\Phi$) is given by
$\widehat{\mathbf{y}}_{i}=\frac{\sum_{j\in
J_{i}}(1/\sigma^{2}_{j})\mathbf{x}_{ij}}{\sum_{j\in J_{i}}1/\sigma^{2}_{j}}.$
(6)
For each $j$, the MLE for $\sigma_{j}^{2}$ (fixing $\mathbf{y}_{i}$) is given
by
$\widehat{\sigma}_{j}^{2}=\frac{1}{K}\|\mathbf{x}_{ij}-\mathbf{y}_{i}\|^{2}$
(7)
where $\|\cdot\|$ is the Euclidean norm.
###### Proof.
The results follow from Proposition 1. ∎
The MLE for $\mathbf{y}_{i}$ and $\sigma_{j}^{2}$ can be jointly optimized by
iterating on Equations (6) and (7). In particular, beginning with
$\widehat{\mathbf{y}}_{i}^{(0)}$, each update is given by:
$\widehat{\mathbf{y}}_{i}^{(t+1)}=\frac{\sum_{j\in
J_{i}}\big{(}1/\|\mathbf{x}_{ij}-\widehat{\mathbf{y}}_{i}^{(t)}\|^{2}\big{)}\mathbf{x}_{ij}}{\sum_{j\in
J_{i}}\big{(}1/\|\mathbf{x}_{ij}-\widehat{\mathbf{y}}_{i}^{(t)}\|^{2}\big{)}}.$
This bears a resemblance to Weiszfeld’s algorithm to estimate the geometric
median Pillutla, Kakade, and Harchaoui (2019a), where each update is given by:
$\widehat{\mathbf{y}}_{i}^{(t+1)}=\frac{\sum_{j\in
J_{i}}\big{(}1/\|\mathbf{x}_{ij}-\widehat{\mathbf{y}}_{i}^{(t)}\|\big{)}\mathbf{x}_{ij}}{\sum_{j\in
J_{i}}\big{(}1/\|\mathbf{x}_{ij}-\widehat{\mathbf{y}}_{i}^{(t)}\|\big{)}}.$
Input: local updates $\langle\mathbf{x}_{ij}\rangle_{i\in\tilde{I},j\in J_{i}}$ and initial variances $\langle\sigma_{j}\rangle_{j}$
1. for $t=1,\ldots,T$ do
2. $\quad\mathbf{y}^{(t)}_{i}\leftarrow\dfrac{\sum_{j\in J_{i}}(1/\sigma^{2}_{j})\,\mathbf{x}_{ij}}{\sum_{j\in J_{i}}1/\sigma^{2}_{j}}$ for all $i\in\tilde{I}$
3. $\quad\sigma^{2}_{j}\leftarrow\max\left\{\epsilon,\dfrac{1}{K|\tilde{I}_{j}|}\sum_{i\in\tilde{I}_{j}}\|\mathbf{y}^{(t)}_{i}-\mathbf{x}_{ij}\|^{2}_{2}\right\}$ for all $j$
4. end for
Output: $\langle\mathbf{y}^{(T)}_{i}\rangle_{i\in\tilde{I}}$ and $\langle\sigma_{j}\rangle_{j}$
Algorithm 1: Inverse-Variance Weighting Aggregator
The algorithm for inverse variance weight aggregation, IVAR, provided in
Algorithm 1, works as follows: upon receiving the local update for tasks
$\tilde{I}$, the aggregator iteratively computes the “consensus”
$\mathbf{y}^{(t)}_{i}$ for each task $i$ using the variance $\sigma_{j}$ of
each worker $j$. Note that, as $\sigma_{j}$ is assumed invariant over tasks,
it can be estimated by averaging the squared residuals over all tasks in which worker $j$ participates.
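A minimal single-task sketch of Algorithm 1 in Python/NumPy follows (the function and variable names are ours; a multi-task version would simply average the residuals in the $\sigma_{j}^{2}$ update over $\tilde{I}_{j}$):

```python
import numpy as np

def ivar_aggregate(X, n_iters=10, eps=1e-8):
    """IVAR for a single task: X is a (J, K) array of local updates.

    Alternates the MLE updates of Eqs. (6) and (7) and returns the
    consensus estimate y together with the per-worker variances.
    """
    J, K = X.shape
    sigma2 = np.ones(J)
    for _ in range(n_iters):
        w = 1.0 / sigma2
        y = (w[:, None] * X).sum(axis=0) / w.sum()                 # Eq. (6)
        sigma2 = np.maximum(eps, ((X - y) ** 2).sum(axis=1) / K)   # Eq. (7)
    return y, sigma2

# Illustration: four genuine parties and one faulty update.
rng = np.random.default_rng(5)
X = rng.normal(0.0, 0.1, size=(5, 20))
X[4] += 3.0
y, sigma2 = ivar_aggregate(X)
print(np.abs(y).max(), sigma2)   # the faulty party receives a large variance
```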
### Inverse Covariance Weighting
The independence assumption in the bias-variance model can be violated in
federated learning scenarios when parties use similar information and methods.
This gives rise to a collective bias within groups of parties. Ideally one
would then like to estimate the full covariance matrix $\Phi$, e.g., using the
iterative updates from Proposition 1. However, the number of parameters grows
as $J^{2}$, which may yield poor estimates if groups of parties do not jointly
participate in many of the tasks. This motivates the use of a latent feature
model that allows for noise correlation across parties while addressing the
challenge of sparse observations. In particular, consider the following
probabilistic model for each local update. Without loss of generality, let
$K=1$ and omit index $k$:
$x_{ij}\sim\mathcal{N}\left(y_{i}+\mathbf{u}_{i}^{\top}\mathbf{v}_{j},\sigma^{2}\right),$
(8)
where $\mathbf{u}_{i}\in R^{D}$ and $\mathbf{v}_{j}\in R^{D}$ are latent
feature vectors associated with task $i$ and worker $j$, respectively. As
such, all observations are correlated by the unknown latent feature vectors.
Let $X$ be the local updates over multiple tasks with entries $X_{ij}=x_{ij}$.
Consider maximizing the log-likelihood:
$\displaystyle\log
p\left(\mathbf{X}\left|\mathbf{y},\mathbf{U},\mathbf{V},\sigma^{2}\right.\right)$
$\displaystyle=$ $\displaystyle\sum_{(i,j):i\in I_{j}}\log
p\left(x_{ij}\left|y_{i},\mathbf{u}_{i},\mathbf{v}_{j},\sigma^{2}\right.\right),$
where matrices
$\mathbf{U}\coloneqq\left[\mathbf{u}_{1},\dots,\mathbf{u}_{I}\right]^{\top}\in
R^{I\times D}$ and
$\mathbf{V}\coloneqq\left[\mathbf{v}_{1},\dots,\mathbf{v}_{J}\right]^{\top}\in
R^{J\times D}$. In particular, we extend inverse covariance weighting by a
nonlinear matrix factorization technique based on Gaussian processes Lawrence
and Urtasun (2009) to jointly infer the ground truth and the latent feature
vectors. From (8), observe that, by placing independent zero mean Gaussian
priors $\mathcal{N}(\mathbf{0},\sigma_{u}^{2}\mathbf{I})$ on $\mathbf{u}_{i}$,
we recover the probabilistic model of Assumption 2 where
$\mathbf{x}_{i,:}\sim\mathcal{N}(y_{i}\mathbf{1},\Phi)$ with the covariance
matrix:
$\Phi=\sigma_{u}^{2}\mathbf{V}\mathbf{V}^{\top}+\sigma^{2}\mathbf{I}.$
Thus, the problem of covariance estimation has been transformed into the
problem of estimating $\mathbf{V}$, $\sigma_{u}^{2}$, $\sigma^{2}$. The
degrees of freedom are now determined by the size of $\mathbf{V}$ which
contains $J\times D$ values. Since we expect $D\ll J$ in practical
applications, this problem has significantly fewer degrees of freedom than the
original problem of estimating the $J^{2}$ values of the entire covariance
matrix.
Maximizing the log-likelihood involves alternating between the optimization of
$\mathbf{y}$ and $(\mathbf{V},\sigma^{2},\sigma_{u}^{2})$. Specifically,
update $\mathbf{y}$ using equation (3) and perform stochastic gradient descent
on the model parameters as there is no closed-form solution for the latter.
The log-likelihood for round $i$ is:
$E_{i}(\mathbf{V},\sigma^{2},\sigma_{u}^{2})=-\log\left|\Phi_{\mathbf{j}_{i}}\right|-\boldsymbol{\delta}_{i,\mathbf{j}_{i}}^{\top}\Phi_{\mathbf{j}_{i}}^{-1}\boldsymbol{\delta}_{i,\mathbf{j}_{i}}+\text{const.},$
and the gradients with respect to the parameters are:
$\nabla_{\mathbf{V}_{\mathbf{j}_{i},:}}E_{i}(\mathbf{V},\sigma^{2},\sigma_{u}^{2})=2\sigma_{u}^{2}\mathbf{G}_{i}\mathbf{V}_{\mathbf{j}_{i},:},$ (9a)
$\nabla_{\sigma^{2}}E_{i}(\mathbf{V},\sigma^{2},\sigma_{u}^{2})=\mathrm{Tr}\left(\mathbf{G}_{i}\right),$ (9b)
$\nabla_{\sigma_{u}^{2}}E_{i}(\mathbf{V},\sigma^{2},\sigma_{u}^{2})=\mathrm{Tr}\left(\mathbf{G}_{i}\mathbf{V}_{\mathbf{j}_{i},:}\mathbf{V}_{\mathbf{j}_{i},:}^{\top}\right),$ (9c)
where
$\boldsymbol{\delta}_{i,\mathbf{j}_{i}}=(\mathbf{x}_{i,\mathbf{j}_{i}}-y_{i}\mathbf{1})$,
$\mathbf{G}_{i}\coloneqq\Phi_{\mathbf{j}_{i}}^{-1}\boldsymbol{\delta}_{i,\mathbf{j}_{i}}\boldsymbol{\delta}_{i,\mathbf{j}_{i}}^{\top}\Phi_{\mathbf{j}_{i}}^{-1}-\Phi_{\mathbf{j}_{i}}^{-1}$
and $\mathbf{V}_{\mathbf{j}_{i},:}\in R^{|J_{i}|\times D}$ is the submatrix of
$\mathbf{V}$ containing the rows corresponding to the indices in $J_{i}$.
After inferring the covariance matrix, computing the ground truth for new
instances can be done with Eq. (3). One can also model the covariance matrix
with non-linear kernel functions by replacing the inner products
$\mathbf{v}_{j}^{\top}\mathbf{v}_{j^{\prime}}$ in the covariance expression by
a Mercer kernel function $k(\mathbf{v}_{j},\mathbf{v}_{j^{\prime}})$. The
parameters in the kernel representation can be optimized by gradient descent
on the log-likelihood function. We focus, however, on the linear kernel
$k(\mathbf{v}_{j},\mathbf{v}_{j^{\prime}})=\mathbf{v}_{j}^{\top}\mathbf{v}_{j^{\prime}}$.
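A single stochastic step of this alternating scheme can be sketched as follows (our own illustration: it evaluates the gradients (9a)-(9c) for one round, assumes all workers observed that round, and uses an explicit matrix inverse for clarity):

```python
import numpy as np

def mle_ascent_step(x_i, y_i, V, sigma2, sigma_u2, lr=1e-3):
    """x_i: (J,) updates observed in round i; y_i: current consensus (scalar)."""
    J = len(x_i)
    Phi = sigma_u2 * V @ V.T + sigma2 * np.eye(J)   # current covariance estimate
    Phi_inv = np.linalg.inv(Phi)
    delta = x_i - y_i                               # residual vector delta_i
    G = Phi_inv @ np.outer(delta, delta) @ Phi_inv - Phi_inv
    grad_V = 2.0 * sigma_u2 * G @ V                 # (9a)
    grad_sigma2 = np.trace(G)                       # (9b)
    grad_sigma_u2 = np.trace(G @ V @ V.T)           # (9c)
    V = V + lr * grad_V                             # gradient ascent on E_i
    sigma2 = max(sigma2 + lr * grad_sigma2, 1e-8)   # keep variances positive
    sigma_u2 = max(sigma_u2 + lr * grad_sigma_u2, 1e-8)
    return V, sigma2, sigma_u2
```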
### Variational Bayesian (VB) Inference
The maximum-likelihood estimator can lead to overfitting when the available
data is scarce, and gradient updates (9a)-(9c) for inverse covariance
weighting are computationally expensive. For improved robustness and
computational efficiency, we propose a Variational Bayesian approach to
approximate the posterior distributions of the ground truth under both
independent and latent noise models.
#### Independent Noise Model
Under Assumption 2, we place a prior over the ground truth $y_{i}$ for each
$i$. Again, assume $K=1$ without loss of generality. Consider the simplest
prior: a zero-mean Gaussian $y_{i}\sim\mathcal{N}(0,\tau^{2})$ where
$\tau^{2}$ is a hyperparameter, though this can be extended to non-zero-mean
priors. From the observed data $\mathbf{X}$, estimate the full posterior
$p(\mathbf{y}|\mathbf{X})$ instead of a point estimate $\widehat{\mathbf{y}}$.
Variational inference approximates the posterior $p(\mathbf{y}|\mathbf{X})$ by
finding the distribution $q_{y}$ that maximizes the (negative of the)
variational free energy:
$F\left(q_{y}\right)=\mathbb{E}_{q_{y}}\left[\log\frac{p\left(\mathbf{X},\mathbf{y}\right)}{q_{y}(\mathbf{y})}\right],$
where the joint probability is given by:
$p\left(\mathbf{X},\mathbf{y}\right)=\prod_{(i,j):i\in
I_{j}}p\left(x_{ij}\left|y_{i}\right.\right)\prod_{i}p\left(y_{i}\right).$
Setting the derivative of $F$ with respect to $q_{y}$ to zero implies that the
stationary distributions are independent Gaussians:
$q_{y}\left(\mathbf{y}\right)=\prod_{i}\mathcal{N}\left(y_{i}\left|\bar{y}_{i},\lambda_{i}\right.\right),$
where the means and variances satisfy:
$\lambda_{i}=\left(\frac{1}{\tau^{2}}+\sum_{j\in J_{i}}\frac{1}{\sigma_{j}^{2}}\right)^{-1},$ (10)
$\bar{y}_{i}=\lambda_{i}\sum_{j\in J_{i}}\frac{x_{ij}}{\sigma_{j}^{2}}.$ (11)
In this case, Eqs. (10) and (11) provide the exact posterior for the ground
truth $\mathbf{y}$ given $\mathbf{X}$. Updating the hyperparameters by
minimizing the variational free energy results in:
$\tau^{2}=\frac{1}{I}\sum_{i}\left(\lambda_{i}+\bar{y}_{i}^{2}\right),$ (12)
$\sigma_{j}^{2}=\frac{1}{|I_{j}|}\sum_{i\in I_{j}}\left(\lambda_{i}+\left(x_{ij}-\bar{y}_{i}\right)^{2}\right).$ (13)
In summary, the proposed approach performs block coordinate descent by
repeatedly applying Eqs. (10)-(13), and aggregates using the posterior mean
$\bar{y}_{i}$.
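The resulting procedure is compact enough to sketch directly; the illustration below (our own, assuming every worker reports on every task, so all $\lambda_{i}$ coincide) cycles through Eqs. (10)-(13):

```python
import numpy as np

def vb_ivar(X, iters=20, tau2=1.0):
    """X: (I, J) array of local updates. Returns posterior means and variance."""
    I, J = X.shape
    sigma2 = np.ones(J)
    for _ in range(iters):
        lam = 1.0 / (1.0 / tau2 + (1.0 / sigma2).sum())          # (10)
        y_bar = lam * (X / sigma2).sum(axis=1)                   # (11)
        tau2 = (lam + y_bar ** 2).mean()                         # (12)
        sigma2 = (lam + (X - y_bar[:, None]) ** 2).mean(axis=0)  # (13)
    return y_bar, lam
```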
#### Latent Noise Model
One of the key steps in the MLE approach to Inverse Covariance Weighting is
the marginalization of $\mathbf{U}$ conditioned on
$(\mathbf{V},\sigma^{2},\sigma_{u}^{2})$. This can be interpreted as Bayesian
averaging over $\mathbf{U}$. However, full Bayesian averaging over both
$\mathbf{U}$ and $\mathbf{V}$ is challenging, motivating the Variational Bayes
approach. First, place zero mean Gaussian priors on the latent variables:
$p\left(y_{i};\sigma_{y}^{2}\right)=\mathcal{N}\left(y_{i}\,\middle|\,0,\sigma_{y}^{2}\right),\qquad p\left(\mathbf{u}_{i};\sigma_{u}^{2}\right)=\mathcal{N}\left(\mathbf{u}_{i}\,\middle|\,\mathbf{0},\sigma_{u}^{2}\mathbf{I}\right),\qquad p\left(\mathbf{v}_{j};\sigma_{v}^{2}\right)=\mathcal{N}\left(\mathbf{v}_{j}\,\middle|\,\mathbf{0},\sigma_{v}^{2}\mathbf{I}\right),$
where $\sigma_{y}^{2}$, $\sigma_{u}^{2}$, $\sigma_{v}^{2}$ are
hyperparameters. For notational brevity, we omit the dependence of the
distributions on the hyperparameters $\sigma^{2}$, $\sigma_{y}^{2}$,
$\sigma_{u}^{2}$, $\sigma_{v}^{2}$. The variational inference procedure finds
distributions that maximize the (negative of the) variational free energy of
the model from (8), assuming a factored distribution
$q(\mathbf{y},\mathbf{U},\mathbf{V})=q_{y}(\mathbf{y})q_{u}(\mathbf{U})q_{v}(\mathbf{V})$:
$F\left(q_{y},q_{u},q_{v}\right)=\mathbb{E}_{q_{y},q_{u},q_{v}}\left[\log\frac{p\left(\mathbf{X},\mathbf{y},\mathbf{U},\mathbf{V}\right)}{q_{y}(\mathbf{y})q_{u}(\mathbf{U})q_{v}(\mathbf{V})}\right],$
where the joint probability is:
$p\left(\mathbf{X},\mathbf{y},\mathbf{U},\mathbf{V}\right)=\prod_{(i,j):i\in I_{j}}p\left(x_{ij}\,\middle|\,y_{i},\mathbf{u}_{i},\mathbf{v}_{j}\right)\prod_{i}p\left(y_{i}\right)\prod_{i}p\left(\mathbf{u}_{i}\right)\prod_{j}p\left(\mathbf{v}_{j}\right).$
Then, solve for $q_{y}$, $q_{u}$ and $q_{v}$ by performing block coordinate
descent on $F$. The resulting posterior distributions are Gaussians where
$q_{y}(\mathbf{y})=\prod_{i}\mathcal{N}(y_{i}|\bar{y}_{i},\lambda_{i})$,
$q_{u}(\mathbf{U})=\prod_{i}\mathcal{N}(\mathbf{u}_{i}|\bar{\mathbf{u}}_{i},\Phi_{i})$,
and
$q_{v}(\mathbf{V})=\prod_{j}\mathcal{N}(\mathbf{v}_{j}|\bar{\mathbf{v}}_{j},\Psi_{j})$.
The means and covariances are given by:
$\lambda_{i}=\left(\frac{1}{\sigma_{y}^{2}}+\sum_{j\in J_{i}}\frac{1}{\sigma^{2}}\right)^{-1},$ (14)
$\bar{y}_{i}=\lambda_{i}\sum_{j\in J_{i}}\frac{1}{\sigma^{2}}\left(x_{ij}-\bar{\mathbf{u}}_{i}^{\top}\bar{\mathbf{v}}_{j}\right),$ (15)
$\boldsymbol{\Phi}_{i}=\left(\frac{1}{\sigma_{u}^{2}}\mathbf{I}+\sum_{j\in J_{i}}\frac{1}{\sigma^{2}}\left(\boldsymbol{\Psi}_{j}+\bar{\mathbf{v}}_{j}\bar{\mathbf{v}}_{j}^{\top}\right)\right)^{-1},$ (16)
$\bar{\mathbf{u}}_{i}=\boldsymbol{\Phi}_{i}\sum_{j\in J_{i}}\frac{1}{\sigma^{2}}\left(x_{ij}-\bar{y}_{i}\right)\bar{\mathbf{v}}_{j},$ (17)
$\boldsymbol{\Psi}_{j}=\left(\frac{1}{\sigma_{v}^{2}}\mathbf{I}+\sum_{i\in I_{j}}\frac{1}{\sigma^{2}}\left(\boldsymbol{\Phi}_{i}+\bar{\mathbf{u}}_{i}\bar{\mathbf{u}}_{i}^{\top}\right)\right)^{-1},$ (18)
$\bar{\mathbf{v}}_{j}=\boldsymbol{\Psi}_{j}\sum_{i\in I_{j}}\frac{1}{\sigma^{2}}\left(x_{ij}-\bar{y}_{i}\right)\bar{\mathbf{u}}_{i}.$ (19)
The hyperparameter updates are given by:
$\sigma_{y}^{2}=\frac{1}{I}\sum_{i}\left(\lambda_{i}+\bar{y}_{i}^{2}\right),$ (20)
$\sigma_{u}^{2}=\frac{1}{DI}\sum_{i}\mathrm{Tr}\left(\boldsymbol{\Phi}_{i}+\bar{\mathbf{u}}_{i}\bar{\mathbf{u}}_{i}^{\top}\right),$ (21)
$\sigma_{v}^{2}=\frac{1}{DJ}\sum_{j}\mathrm{Tr}\left(\boldsymbol{\Psi}_{j}+\bar{\mathbf{v}}_{j}\bar{\mathbf{v}}_{j}^{\top}\right),$ (22)
$\sigma^{2}=\frac{1}{\sum_{j}|I_{j}|}\sum_{(i,j):i\in I_{j}}\Big[\lambda_{i}+\left(x_{ij}-\bar{y}_{i}\right)^{2}-2\left(x_{ij}-\bar{y}_{i}\right)\bar{\mathbf{u}}_{i}^{\top}\bar{\mathbf{v}}_{j}+\mathrm{Tr}\left(\left(\boldsymbol{\Phi}_{i}+\bar{\mathbf{u}}_{i}\bar{\mathbf{u}}_{i}^{\top}\right)\left(\boldsymbol{\Psi}_{j}+\bar{\mathbf{v}}_{j}\bar{\mathbf{v}}_{j}^{\top}\right)\right)\Big].$ (23)
In summary, the algorithm applies equations (14) to (23) repeatedly until
convergence.
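For concreteness, one round of the updates (14)-(19) can be sketched as follows (our own illustration under a full-participation assumption, which makes the posterior covariances identical across tasks and across workers; the hyperparameter updates (20)-(23) are omitted):

```python
import numpy as np

def vb_latent_round(X, U, V, Psi, sigma_y2, sigma_u2, sigma_v2, sigma2):
    """X: (I, J) updates; U: (I, D), V: (J, D) posterior means; Psi: (D, D)
    shared posterior covariance of the v_j under full participation."""
    I, J = X.shape
    D = U.shape[1]
    lam = 1.0 / (1.0 / sigma_y2 + J / sigma2)                     # (14)
    y = lam / sigma2 * (X - U @ V.T).sum(axis=1)                  # (15)
    R = X - y[:, None]                                            # residuals x_ij - y_i
    Phi = np.linalg.inv(np.eye(D) / sigma_u2
                        + (J * Psi + V.T @ V) / sigma2)           # (16)
    U = (R @ V / sigma2) @ Phi                                    # (17), Phi is symmetric
    Psi = np.linalg.inv(np.eye(D) / sigma_v2
                        + (I * Phi + U.T @ U) / sigma2)           # (18)
    V = (R.T @ U / sigma2) @ Psi                                  # (19)
    return y, lam, U, V, Phi, Psi
```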
Input: $\langle\mathbf{v}_{j},\sigma_{j},\mathbf{x}_{ij}\rangle_{j\in\mathbf{j}_{i}}$
1 for $t=1,\dots,T$ do
2  $\boldsymbol{\Sigma}_{\mathbf{j}_{i}}=\sigma_{u}^{2}\mathbf{V}_{\mathbf{j}_{i}}\mathbf{V}_{\mathbf{j}_{i}}^{\top}+\mathbf{diag}(\sigma^{2}_{\mathbf{j}_{i}})$
3  $y^{(t)}_{i}=\frac{\mathbf{1}^{\top}\boldsymbol{\Sigma}_{\mathbf{j}_{i}}^{-1}\mathbf{x}_{i,\mathbf{j}_{i}}}{\mathbf{1}^{\top}\boldsymbol{\Sigma}_{\mathbf{j}_{i}}^{-1}\mathbf{1}}$
4  Update the model parameters using (14)-(23).
Output: $y^{(T)}_{i},\langle\mathbf{v}_{j},\sigma_{j},\mathbf{x}_{ij}\rangle_{j\in\mathbf{j}_{i}}$
Algorithm 2: Inverse Covariance Weighting Aggregator

Method | Synthetic | MNIST | Shakespeare
---|---|---|---
Uniform avg. | 10.17 | 0.4926 | 0.16
Geom. median | 8.13 | 0.5233 | 0.41
Coord. median | 6.131 | 0.7987 | 0.29
IVAR-VB | 4.62 | 0.8943 | 0.56
IVAR-MLE | 4.66 | 0.9043 | 0.50
ICOV-VB | 2.89 | 0.8932 | 0.52
ICOV-MLE | 8.75 | 0.5253 | N.A.
Table 1: Performance of the baseline federated learning aggregation algorithms
(uniform averaging, geometric median, and coordinate-wise median) against the
proposed IVAR and ICOV methods, in their MLE and VB versions. On the synthetic
linear regression task, with full participation of 5 genuine parties and full
batches, prediction error is shown, so lower is better. On the one-round MNIST
task, with 5 genuine parties and 5 adversaries, prediction accuracy is shown,
so higher is better. On the multi-round stochastic gradient aggregation task
on the Shakespeare dataset, with 5 genuine parties and 5 adversaries, accuracy
is shown, so again higher is better.
## Experiments
We present experimental results on a synthetic dataset and two real datasets:
MNIST and Shakespeare McMahan et al. (2017a). We compare (1) uniform
averaging; (2) the geometric median, computed with the smoothed Weiszfeld
algorithm of Pillutla, Kakade, and Harchaoui (2019a); (3) the coordinate-wise
median, as in Yin et al. (2018b); (4) our proposed IVAR, in both its MLE and
VB formulations; and (5) our proposed ICOV, again in both MLE and VB
formulations, which computes a low-rank estimate of the covariance matrix.
#### Synthetic dataset experiment
We design a synthetic linear regression experiment to create an environment
where each party in the federation has a different noise level, and the local
data of each party is overlapping. The experimental setup is provided in the
Supplementary Materials. Figure 1 shows the algorithm performance for various
levels of participation and batch size. ICOV performs better than IVAR, and
both ICOV and IVAR outperform the other baselines.
#### MNIST
In this adversarial MNIST classification task, a Gaussian adversary submits a
random vector with components drawn from a standard normal distribution,
$\mathcal{N}(0,1)$. We first study one-round parameter estimation using
logistic regression, as in Yin et al. (2018b), with 5 genuine parties and
$R\in[0,10]$ adversaries. The Bayesian inference aggregators IVAR and ICOV
outperform the other algorithms, including the robust estimators
(coordinate-wise median and geometric median), as the number of adversaries
increases. The results also show the training convergence of IVAR, ICOV and
the geometric median: IVAR and the geometric median converge quickly, in fewer
than 5 iterations. ICOV converges more slowly, but with a large number of
adversaries it converges to a better solution than IVAR. The geometric median
is less robust than the coordinate-wise median in one-round estimation.
Details and results for this setting can be found in the Supplementary
Materials.
Next, we solve adversarial MNIST using distributed stochastic gradient descent
(SGD) with the architecture of Baruch, Baruch, and Goldberg (2019). Figure 2
shows that when there is no adversary, uniform aggregation is ideal. However,
with adversaries, both uniform averaging and coordinate-wise median perform
poorly. When adversaries account for more than half of the parties, the
Bayesian methods IVAR and ICOV are superior.
#### Shakespeare
Lastly, we consider an NLP task using the Shakespeare dataset. Results, shown
in Figure 3, illustrate the case where an adversary submits a random vector
generated from a normal distribution in place of its true parameter vector.
The different setting where the adversary performs a random local update can
be found in the Supplementary Materials. Across the board IVAR-VB is shown to
be superior to the other methods.
The results are summarized in Table 1, and further details are provided in the
Supplementary Materials. Note that the synthetic dataset is measured in terms
of error, so that a lower number is better, while the MNIST and Shakespeare
tasks report classification accuracy, so higher is better. Across the board,
the proposed methods are far superior to both standard averaging and robust
aggregation algorithms. It can be noted that the choice of which variant of
the proposed methods is superior depends upon the task. Overall, the MLE
version of ICOV tends to be computationally challenging, but the VB version of
ICOV is very competitive. The IVAR method using both MLE and VB is an ideal
choice when overlap is not extensive, as is the case in the MNIST and
Shakespeare tasks.
## Discussion
We proposed new methods for federated learning aggregation on heterogeneous
data. Since aggregating under data heterogeneity in federated learning is
similar to estimating the ground truth in collaborative filtering, we adapt
techniques that estimate the uncertainty of the party updates so as to
appropriately weight their contributions to the federation. The techniques
comprise both MLE and Variational Bayes estimators and, in the simplest
setting, reduce to the standard averaging aggregation step. In more general
cases, including data overlap, they provide new techniques that prove superior
on the synthetic and real-world datasets examined. We expect that these
methods will help make federated learning applicable to a wider variety of
real-world problems.
## References
* Alistarh, Allen-Zhu, and Li (2018) Alistarh, D.; Allen-Zhu, Z.; and Li, J. 2018. Optimal Byzantine-Resilient Stochastic Gradient Descent. In _NIPS 2018_.
* Baruch, Baruch, and Goldberg (2019) Baruch, G.; Baruch, M.; and Goldberg, Y. 2019. A little is enough: Circumventing defenses for distributed learning. In _Advances in Neural Information Processing Systems_ , 8632–8642.
* Bhagoji et al. (2019) Bhagoji, A. N.; Chakraborty, S.; Mittal, P.; and Calo, S. B. 2019. Analyzing Federated Learning through an Adversarial Lens. In _ICML_.
* Blanchard et al. (2017) Blanchard, P.; Mhamdi, E. M. E.; Guerraoui, R.; and Stainer, J. 2017. Machine Learning with Adversaries: Byzantine Tolerant Gradient Descent. In _NIPS_.
* Cai et al. (2020) Cai, D.; Nguyen, D. T.; Lim, S. H.; and Wynter, L. 2020. Variational Bayesian Inference for Crowdsourcing Predictions. _arXiv preprint_ .
* Chen et al. (2018) Chen, L.; Wang, H.; Charles, Z. B.; and Papailiopoulos, D. S. 2018. DRACO: Byzantine-resilient Distributed Training via Redundant Gradients. In Dy, J. G.; and Krause, A., eds., _Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018_ , volume 80 of _Proceedings of Machine Learning Research_ , 902–911. PMLR. URL http://proceedings.mlr.press/v80/chen18l.html.
* Dawid and Skene (1979) Dawid, A. P.; and Skene, A. M. 1979. Maximum likelihood estimation of observer error-rates using the EM algorithm. _Applied statistics_ 20–28.
* Ghosh et al. (2019) Ghosh, A.; Hong, J.; Yin, D.; and Ramchandran, K. 2019. Robust Federated Learning in a Heterogeneous Environment. _ArXiv_ abs/1906.06629.
* Kairouz et al. (2019) Kairouz, P.; McMahan, H. B.; Avent, B.; Bellet, A.; Bennis, M.; Bhagoji, A. N.; Bonawitz, K.; Charles, Z.; Cormode, G.; Cummings, R.; D’Oliveira, R. G. L.; Rouayheb, S. E.; Evans, D.; Gardner, J.; Garrett, Z. A.; Gascón, A.; Ghazi, B.; Gibbons, P. B.; Gruteser, M.; Harchaoui, Z.; He, C.; He, L.; Huo, Z.; Hutchinson, B.; Hsu, J.; Jaggi, M.; Javidi, T.; Joshi, G.; Khodak, M.; Konecný, J.; Korolova, A.; Koushanfar, F.; Koyejo, O.; Lepoint, T.; Liu, Y.; Mittal, P.; Mohri, M.; Nock, R.; Özgür, A.; Pagh, R.; Raykova, M.; Qi, H.; Ramage, D.; Raskar, R.; Song, D. X.; Song, W.; Stich, S. U.; Sun, Z.; Suresh, A. T.; Tramèr, F.; Vepakomma, P.; Wang, J.; Xiong, L.; Xu, Z.; Yang, Q.; Yu, F. X.; Yu, H.; and Zhao, S. 2019. Advances and Open Problems in Federated Learning. _ArXiv_ abs/1912.04977.
* Kara et al. (2015) Kara, Y. E.; Genc, G.; Aran, O.; and Akarun, L. 2015. Modeling annotator behaviors for crowd labeling. _Neurocomputing_ 160: 141–156.
* Kim and Ghahramani (2012) Kim, H.-C.; and Ghahramani, Z. 2012. Bayesian Classifier Combination. In Lawrence, N. D.; and Girolami, M., eds., _Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics_ , volume 22 of _Proceedings of Machine Learning Research_ , 619–627. La Palma, Canary Islands: PMLR. URL http://proceedings.mlr.press/v22/kim12.html.
* Konstantinov and Lampert (2019) Konstantinov, N.; and Lampert, C. 2019. Robust Learning from Untrusted Sources. In _ICML_.
* Lawrence and Urtasun (2009) Lawrence, N. D.; and Urtasun, R. 2009. Non-linear matrix factorization with Gaussian processes. In _Proceedings of the 26th annual international conference on machine learning_ , 601–608. ACM.
* Li et al. (2016) Li, D.; Salonidis, T.; Desai, N. V.; and Chuah, M. C. 2016. DeepCham: Collaborative Edge-Mediated Adaptive Deep Learning for Mobile Object Recognition. _2016 IEEE/ACM Symposium on Edge Computing (SEC)_ 64–76.
* Li et al. (2019) Li, L.; Xu, W.; Chen, T.; Giannakis, G. B.; and Ling, Q. 2019. RSA: Byzantine-Robust Stochastic Aggregation Methods for Distributed Learning from Heterogeneous Datasets. volume Arxiv/abs/1811.03761.
* Li and Deng (2019) Li, S.; and Deng, W. 2019. Reliable Crowdsourcing and Deep Locality-Preserving Learning for Unconstrained Facial Expression Recognition. _IEEE Transactions on Image Processing_ 28: 356–370.
* Li, Rubinstein, and Cohn (2019) Li, Y.; Rubinstein, B.; and Cohn, T. 2019. Exploiting Worker Correlation for Label Aggregation in Crowdsourcing. In _International Conference on Machine Learning_ , 3886–3895.
* Liu, Ihler, and Steyvers (2013) Liu, Q.; Ihler, A. T.; and Steyvers, M. 2013. Scoring workers in crowdsourcing: How many control questions are enough? In _Advances in Neural Information Processing Systems_ , 1914–1922.
* Liu, Peng, and Ihler (2012) Liu, Q.; Peng, J.; and Ihler, A. T. 2012. Variational Inference for Crowdsourcing. In Pereira, F.; Burges, C. J. C.; Bottou, L.; and Weinberger, K. Q., eds., _Advances in Neural Information Processing Systems 25_ , 692–700. Curran Associates, Inc. URL http://papers.nips.cc/paper/4627-variational-inference-for-crowdsourcing.pdf.
* McMahan et al. (2017a) McMahan, B.; Moore, E.; Ramage, D.; Hampson, S.; and y Arcas, B. A. 2017a. Communication-Efficient Learning of Deep Networks from Decentralized Data. In Singh, A.; and Zhu, X. J., eds., _Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, AISTATS 2017, 20-22 April 2017, Fort Lauderdale, FL, USA_ , volume 54 of _Proceedings of Machine Learning Research_ , 1273–1282. PMLR. URL http://proceedings.mlr.press/v54/mcmahan17a.html.
* McMahan et al. (2017b) McMahan, H. B.; Moore, E.; Ramage, D.; Hampson, S.; and y Arcas, B. A. 2017b. Communication-Efficient Learning of Deep Networks from Decentralized Data. In _AISTATS_.
* Mohri, Sivek, and Suresh (2019) Mohri, M.; Sivek, G.; and Suresh, A. T. 2019. Agnostic Federated Learning. In _ICML_.
* Paris et al. (2015) Paris, S.; Redondi, A. E. C.; Cesana, M.; and Tagliasacchi, M. 2015. Distributed object recognition in Visual Sensor Networks. _2015 IEEE International Conference on Communications (ICC)_ 6701–6706.
* Pillutla, Kakade, and Harchaoui (2019a) Pillutla, K.; Kakade, S. M.; and Harchaoui, Z. 2019a. Robust aggregation for federated learning. _arXiv preprint arXiv:1912.13445_ .
* Pillutla, Kakade, and Harchaoui (2019b) Pillutla, V. K.; Kakade, S. M.; and Harchaoui, Z. 2019b. Robust Aggregation for Federated Learning. _ArXiv_ abs/1912.13445.
* Portnoy and Hendler (2020) Portnoy, A.; and Hendler, D. 2020. Towards Realistic Byzantine-Robust Federated Learning. _ArXiv_ abs/2004.04986.
* Rahimpour et al. (2016) Rahimpour, A.; Taalimi, A.; Luo, J.; and Qi, H. 2016. Distributed object recognition in smart camera networks. _2016 IEEE International Conference on Image Processing (ICIP)_ 669–673.
* Raykar et al. (2010) Raykar, V. C.; Yu, S.; Zhao, L. H.; Valadez, G. H.; Florin, C.; Bogoni, L.; and Moy, L. 2010. Learning from crowds. _Journal of Machine Learning Research_ 11(Apr): 1297–1322.
* Sattler et al. (2019) Sattler, F.; Wiedemann, S.; Müller, K.-R.; and Samek, W. 2019. Robust and Communication-Efficient Federated Learning from Non-IID Data. _IEEE transactions on neural networks and learning systems_ .
* Sohn et al. (2011) Sohn, K.; Jung, D. Y.; Lee, H.; and Hero, A. O. 2011. Efficient learning of sparse, distributed, convolutional feature representations for object recognition. _2011 International Conference on Computer Vision_ 2643–2650.
* Song et al. (2018) Song, C.; He, K.; Wang, L.; and Hopcroft, J. E. 2018. Improving the Generalization of Adversarial Training with Domain Adaptation. _ArXiv_ abs/1810.00740.
* Welinder et al. (2010) Welinder, P.; Branson, S.; Perona, P.; and Belongie, S. J. 2010. The Multidimensional Wisdom of Crowds. In Lafferty, J. D.; Williams, C. K. I.; Shawe-Taylor, J.; Zemel, R. S.; and Culotta, A., eds., _Advances in Neural Information Processing Systems 23_ , 2424–2432. Curran Associates, Inc. URL http://papers.nips.cc/paper/4074-the-multidimensional-wisdom-of-crowds.pdf.
* Xie, Koyejo, and Gupta (2018) Xie, C.; Koyejo, O.; and Gupta, I. 2018. Zeno: Byzantine-suspicious stochastic gradient descent. _ArXiv_ abs/1805.10032.
* Yin et al. (2018a) Yin, D.; Chen, Y.; Ramchandran, K.; and Bartlett, P. L. 2018a. Byzantine-Robust Distributed Learning: Towards Optimal Statistical Rates. In _ICML_ , volume Arxiv/abs/1803.01498.
* Yin et al. (2018b) Yin, D.; Chen, Y.; Ramchandran, K.; and Bartlett, P. L. 2018b. Byzantine-Robust Distributed Learning: Towards Optimal Statistical Rates. In Dy, J. G.; and Krause, A., eds., _Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018_ , volume 80 of _Proceedings of Machine Learning Research_ , 5636–5645. PMLR. URL http://proceedings.mlr.press/v80/yin18a.html.
# Empirical Evaluation of Supervision Signals for Style Transfer Models
Yevgeniy Puzikov1, Stanley Simoes, Iryna Gurevych1, Immanuel Schweizer2
1 Ubiquitous Knowledge Processing Lab (UKP Lab),
Department of Computer Science, Technical University of Darmstadt
2 Merck KGaA, Darmstadt, Germany
https://www.ukp.tu-darmstadt.de
<EMAIL_ADDRESS>
<EMAIL_ADDRESS>
Work done during an internship at UKP Lab.
###### Abstract
Text style transfer has gained increasing attention from the research
community over the recent years. However, the proposed approaches vary in many
ways, which makes it hard to assess the individual contribution of the model
components. In style transfer, the most important component is the
optimization technique used to guide the learning in the absence of parallel
training data. In this work we empirically compare the dominant optimization
paradigms which provide supervision signals during training: backtranslation,
adversarial training and reinforcement learning. We find that backtranslation
has model-specific limitations that inhibit the training of style transfer
models. Reinforcement learning shows the best performance gains, while
adversarial training, despite its popularity, does not offer an advantage over
it. We also experiment with Minimum Risk Training Och
(2003), a popular technique in the machine translation community, which, to
our knowledge, has not been empirically evaluated in the task of style
transfer. We fill this research gap and empirically show its efficacy.
## 1 Introduction
Text style transfer is the task of changing stylistic properties of an input
text, while retaining its style-independent content. Regenerating existing
text to cater to a target audience has diverse use-cases such as rewriting
offensive language on social media dos Santos et al. (2018); making a text
more formal Rao and Tetreault (2018), romantic Li et al. (2018) or
politically-slanted Prabhumoye et al. (2018); changing its tense Ficler and
Goldberg (2017) or sentiment Shen et al. (2017).
While training unsupervised models for _generating_ style-infused texts can be
done using conditional language-modelling techniques, in order to perform
style _transfer_ , one needs to find a source of supervision signal. Parallel
corpora for this task are scarce Xu et al. (2012); Jhamtani et al. (2018); Rao
and Tetreault (2018); Kang et al. (2019), so researchers focused on finding
non-parallel supervision signals.
We analyzed previous work and came to the conclusion that, although many
approaches have been proposed, they all employ similar optimization methods
that fall into a few groups of techniques; one simply combines them to produce
a style-transfer model. The “recipe” uses denoising autoencoding as a
mechanism to teach the model to generate grammatical texts; style infusion
comes from: 1) discriminator-based training; 2) backtranslation; 3) metric
supervision via reinforcement learning (RL). Our work examines the properties
of these methods and identifies which of them contribute to the success or
failure of a style-transfer approach.
Our contributions are three-fold:
* •
We provide a structured overview of the supervision techniques used for
training style transfer models.
* •
We find evidence of the limitations of the existing techniques.
* •
To the best of our knowledge, we are the first ones to use Minimum Risk
Training technique Och (2003) in style transfer. We prove its efficacy in the
subsequent experiments.
In what follows, we first describe the notation used throughout the paper,
then introduce each of the examined model components. After that, we explain
our experimental setup, analyze the results and pinpoint the approaches’
limitations.
## 2 Overview
We assume that our training data consists of text–style pairs ($x,s$), where
$x$ is a text and $s=(s_{1},\dots,s_{m})$ is a set of style values which $x$
has. Each $s_{k}$ is a discrete value in the set $\mathcal{S}_{k}$ of possible
values for attribute $k$.
Our task is to learn a mapping from a pair of an input text $x$ and arbitrary
style $\hat{s}$, to a new text $\hat{x}$ that exhibits styles $\hat{s}$, but
has the content of $x$. Research literature does not define precisely what
content is; usually it is assumed that content is style-independent. However,
whether it is possible to decouple the two is a topic of an ongoing debate
Lample et al. (2019); John et al. (2019). In this work, content is defined as
anything in $x$ which does not depend on the style attributes.
All works we have examined employ some variant of a recurrent neural network
(RNN, Rumelhart et al. (1986)) or Transformer Vaswani et al. (2017) as a text
generator. For simplicity, as a generator network, we implemented a bi-
directional encoder and uni-directional decoder with Gated Recurrent Unit Cho
et al. (2014), attention Bahdanau et al. (2014), and the pooling mechanism of
Lample et al. (2019). The generator model first encodes text $x$ into a latent
representation $z=e(x)$, then decodes $(z,\hat{s})$ into
$\hat{x}=d(z,\hat{s})$, where $e$ and $d$ are encoder and decoder parts of the
model.
What differs between the approaches which we compare in this paper is the
optimization technique used to train the model. These techniques are described
in the following subsections. Hyperparameter values are reported in Section
A.2.
### 2.1 Autoencoding
First, the model is trained with a denoising autoencoding (Dae) objective to
learn to produce grammatical texts from corrupted inputs. An illustration of
this process is shown in Figure 1. Following Lample et al. (2019), we corrupt
a given text $x$ by randomly dropping and shuffling words, which produces
$x_{c}$. The corrupted text serves as input to the encoder; the target
sequence to reconstruct is the original text $x$. Dae training minimizes the
following objective:
$L_{ae}=-\log P\Big{(}x|e(x_{c}),s\Big{)}$ (1)
Figure 1: Schematic view of the Dae training procedure. $x$ is the input text,
$x_{c}$ is its noised version, $x^{*}$ is the reconstruction of $x$. The
dashed line shows the absence of any transformation, _i.e._ , that the output
of the noising procedure becomes the input to the model at the next step.
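As a concrete illustration of the corruption step, the sketch below (our own; the drop probability and shuffle window are illustrative, not the values used in our experiments) drops words at random and then shuffles the survivors within a bounded window, in the spirit of the noise model of Lample et al. (2019):

```python
import random

def corrupt(tokens, p_drop=0.1, k=3):
    """Word drop followed by a bounded local shuffle."""
    kept = [t for t in tokens if random.random() > p_drop] or tokens[:1]
    # add noise in [0, k) to each position and re-sort: a word moves
    # at most k - 1 positions away from its original place
    keys = [i + random.uniform(0, k) for i in range(len(kept))]
    return [tok for _, tok in sorted(zip(keys, kept), key=lambda p: p[0])]
```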
### 2.2 Backtranslation
Backtranslation (Bt) was originally proposed by Sennrich et al. (2016) in the
context of machine translation as a method of creating silver-standard data
and bootstrapping machine translation models. Some researchers successfully
applied it to style transfer, but used it in different ways. Zhang et al.
(2020) employed Bt to obtain additional training data, while Lample et al.
(2019) treated it as a source of indirect supervision, arguing that Bt helps
to prevent the model from doing just reconstruction.
Interestingly enough, Prabhumoye et al. (2018) used Bt to do the opposite. The
authors refer to a study of Rabinovich et al. (2017) who showed that stylistic
properties are obfuscated by both manual and automatic machine translation,
_i.e._ , backtranslation can be used to rephrase a text while reducing its
stylistic properties. It seems that sometimes Bt provides an additional
supervision signal Lample et al. (2019), and sometimes it has a regularization
effect Rabinovich et al. (2017); Prabhumoye et al. (2018).
An illustration of the backtranslation process for style transfer is shown in
Figure 2. Given an input text $x$ and original style $s$, we first perturb $s$
by changing at least one of the attributes to produce $\hat{s}$. Next, the
model takes ($x$,$\hat{s}$) as input and generates text $\hat{x}$. The model
then uses $\hat{x}$ and the original style $s$ to produce $x^{*}$ which,
ideally, is a reconstruction of $x$. Bt training minimizes the following
objective:
$L_{bt}=-\log P\bigg{(}x|e\Big{(}d\big{(}e(x),\hat{s}\big{)}\Big{)},s\bigg{)}$
(2)
Figure 2: Schematic view of the Bt training procedure. $x$ is the input text,
$s$ is the corresponding input style, $\hat{s}$ is the desired style.
$\hat{x}$ and $x^{*}$ are generated outputs. “Model” is the encoder-decoder
generator.
With backtranslation, training alternates between autoencoding and
backtranslation steps. The final optimization function we minimize is a linear
combination of the Dae and Bt losses:
$L_{total}=\lambda_{ae}L_{ae}+\lambda_{bt}L_{bt}$ (3)
The $\lambda$ parameters constitute a trade-off between performing more
content preservation or style transfer. In our experiments we follow Lample et
al. (2019) and anneal the $\lambda_{ae}$ to 0 towards the end of training,
while keeping $\lambda_{bt}$ equal to 1.
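A schematic training step combining the two terms might look as follows (PyTorch-flavoured pseudocode; `model`, `nll` and `corrupt` are illustrative placeholders, and detaching the intermediate generation reflects that sampling is not differentiated through):

```python
import torch

def bt_step(model, nll, x, s, s_hat, lambda_ae, lambda_bt):
    # denoising autoencoding term, Eq. (1)
    loss_ae = nll(model.decode(model.encode(corrupt(x)), s), target=x)
    # backtranslation term, Eq. (2): transfer x to style s_hat, then
    # reconstruct x from the generated text under the original style s
    with torch.no_grad():                  # generation is not differentiated through
        x_hat = model.generate(model.encode(x), s_hat)
    loss_bt = nll(model.decode(model.encode(x_hat), s), target=x)
    return lambda_ae * loss_ae + lambda_bt * loss_bt
```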
### 2.3 Adversarial Training
Adversarial training Goodfellow et al. (2014) provides means for leveraging
training signals from non-parallel corpora for style transfer. One popular
approach in this direction is to disentangle the input text’s content and
style information by employing adversarial networks that operate on the input
text’s latent representation, _i.e._ , the encoder output. This can be done by
separating the latent representation into the content representation and style
representation John et al. (2019), or learning style-invariant latent
representations Fu et al. (2018). Another approach is to use an adversarial
network within the backtranslation framework Logeswaran et al. (2018); Dai et
al. (2019), which is what we employed in our experiments. Using adversarial
discriminators in such a scenario helps match the distributions of style-
specific latent representations of real vs. synthetic texts Shen et al.
(2017).
Figure 3: Schematic view of the Adv training procedure. The inputs and outputs
are the same as for the Bt stage.
An illustration of the adversarial training of the generator model is shown in
Figure 3. We implement the multi-class discriminator of Dai et al. (2019)
using a GRU-based encoder with a classification layer which predicts the style
of $\hat{x}$. Adversarial training involves alternating between training the
generator model to produce style-infused texts, and training the discriminator
to distinguish between real sentences of different styles, on one hand, and
model-generated texts, on the other hand. Training the latter is
straightforward; we follow Dai et al. (2019) and refer the reader to the
original paper for details.
When training both the discriminator and the generator, we minimize the cross
entropy loss, and teach the discriminator to predict style, given a text
(either real one or generated by the model), and the generator to output texts
that look _real_ , _i.e._ , similar to the texts with the desired style in the
training data.
Note that the adversarial component is added on top of the Bt model, because
with Dae and Adv alone it is not possible to force the model to preserve the
content. For this reason, training the generator now consists of three terms:
$L_{adv}=-\log P_{D}(\hat{s}|\hat{x}),\qquad L_{total}=\lambda_{ae}L_{ae}+\lambda_{bt}L_{bt}+\lambda_{adv}L_{adv}$
We reuse the same $\lambda$ parameters as in the Bt approach. $\lambda_{adv}$
is set to 1.0 (this seemed a reasonable value; we did not perform any
hyperparameter tuning).
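The generator-side term can be sketched as a negative log-likelihood under the discriminator (illustrative names; how $\hat{x}$ is made differentiable for the generator, e.g. via soft word distributions as in Dai et al. (2019), is omitted here):

```python
import torch.nn.functional as F

def generator_adv_loss(discriminator, x_hat, s_hat):
    """x_hat: generated (soft) text representation; s_hat: (batch,) target styles."""
    log_probs = discriminator(x_hat)     # (batch, num_styles) log P_D(. | x_hat)
    return F.nll_loss(log_probs, s_hat)  # batch mean of -log P_D(s_hat | x_hat)
```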
Dataset | Sentiment | Gender | Category
---|---|---|---
FYelp | Positive: 1,035,609; Negative: 197,203 | Male: 584,637; Female: 648,175 | American: 338,899; Asian: 208,483; Bar: 372,873; Dessert: 209,949; Mexican: 102,608
RottenTomatoes | Positive: 245,241; Negative: 118,857 | Male: 268,564; Female: 95,535 | Critic: 77,467; Audience: 286,631
SYelp | Positive: 266,041; Negative: 177,218 | - | -
SAmazon | Positive: 277,228; Negative: 277,769 | - | -
Table 1: The number of training instances per attribute for each dataset.
Preprocessing details are given in the appendix.
### 2.4 Minimum Risk Training
Existing works have also explored architectures based on RL techniques for
text style transfer. For example, Gong et al. (2019) use evaluation metrics
for style, content preservation, and naturalness as the training objective
within the RL framework. Wu et al. (2019) use a hierarchical model where the
high-level agent decides where the input text needs to be modified, and the
low-level agent decides on the modification.
Following the success of the Minimum Risk Training method Och (2003) in the
machine translation community, we decided to experiment with it as a
representative of the RL techniques. Since the advent of neural networks, there
have been successful attempts to use Mrt for generation tasks, like neural
machine translation Gao et al. (2014); Shen et al. (2016), but we are unaware
of any work that has explored its utility in the domain of style transfer.
Yet, it has a number of advantages over other RL alternatives. First, it is
very easy to implement and use. Second, unlike other RL algorithms, like
REINFORCE Williams (1992), Mrt uses multiple examples at a time to estimate
risk. This allows for efficient data batching, leading to faster training
speed and diversity in the generated examples.
An illustration of the Mrt training step is shown in Figure 4. Note that it is
performed on top of the Bt procedure, since we want the outputs to be similar
in content with the input text. The main idea is to use evaluation metrics
(possibly non-differentiable) as loss functions and assume that the optimal
set of model parameters should minimize the expected loss on the training
data. Given an input $x$, a model prediction $\hat{y}$, a desired output $y$
and a loss function $\Delta(\hat{y},y)$, Mrt seeks a posterior $P(\hat{y}|x)$
to minimize the expected loss $\mathbf{E}_{\hat{y}\sim
P(\hat{y}|x)}\Delta(\hat{y},y)$.
Figure 4: Schematic view of the Mrt training procedure. The inputs and outputs
are the same as for the Bt stage.
Since we do not have reference outputs, we cannot use reference-based metrics.
However, we can use style intensity classifiers to compute a metric that could
guide the model towards generating better outputs. According to Mir et al.
(2019), when evaluating style intensity, the metric that correlates most with
human judgements, is direction-corrected Earth-Mover’s Distance (EMD) Rubner
et al. (1998). We measure it between the style distributions of the texts
generated during the backtranslation process (see Section 3.2 for details):
$L_{mrt}=\mathbf{E}_{x^{*}\sim P(x^{*}|\hat{x})}\Delta(x^{*},\hat{x}),\qquad L_{total}=\lambda_{ae}L_{ae}+\lambda_{bt}L_{bt}+\lambda_{mrt}L_{mrt}$
We use the same $\lambda$ hyperparameters as in the Bt and Adv cases;
$\lambda_{mrt}$ is set to 1.0.
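Concretely, the sampled-pool risk estimate that Mrt minimizes can be sketched as follows (our own illustration; `alpha` is the usual Mrt smoothing hyperparameter and the candidate sampling itself is omitted):

```python
import torch

def mrt_loss(log_probs, deltas, alpha=0.005):
    """log_probs: (N,) model log-probabilities of N sampled candidates;
    deltas: (N,) per-candidate metric losses, e.g. direction-corrected EMD."""
    q = torch.softmax(alpha * log_probs, dim=0)  # smoothed distribution over the pool
    return (q * deltas).sum()                    # expected risk over the candidate pool
```

Because the metric enters only through the per-candidate weights `deltas`, it does not need to be differentiable.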
## 3 Experimental Setup
### 3.1 Datasets
Following previous work, we used publicly available Yelp restaurant and Amazon
product review datasets which vary in one attribute, the review sentiment Shen
et al. (2017); Li et al. (2018).
We followed Lample et al. (2019) and included a multi-attribute version of the
Yelp restaurant review dataset which contains texts varying in product
categories, gender of the reviewers, and sentiment of the review. We also
added a multi-attribute dataset of Ficler and Goldberg (2017) which contains
movie reviews from the Rotten Tomatoes website. The texts vary in
professionality and sentiment dimensions. We also added gender annotations,
following the same procedure as for the _Fyelp_ dataset.
The lengths of _RottenTomatoes_ and _Fyelp_ texts vary widely; some exceed 1k
tokens. Due to computational limitations, we had to restrict ourselves to
texts no longer than 50 tokens for both datasets; _Syelp_ and _SAmazon_
datasets were not trimmed in any way. The number of training instances per
category for each of the four datasets are shown in Table 1. The details of
the preprocessing steps for all datasets are given in Section A.1.
Model | Acc (%) | EMD | BLEU | sBLEU | WMS | PPL
---|---|---|---|---|---|---
CrossAligned Shen et al. (2017) | 73.8 | 0.68 | 3.3 | 13.2 | 0.66 | 69.1
Style Embedding Fu et al. (2018) | 9.1 | 0.05 | 12.1 | 69.2 | 0.86 | 76.0
MultiDecoder Fu et al. (2018) | 46.5 | 0.42 | 7.4 | 37.8 | 0.72 | 146.7
TemplateBased Li et al. (2018) | 81.1 | 0.74 | 11.1 | 44.2 | 0.70 | 1915.0
RetrieveOnly Li et al. (2018) | 93.8 | 0.84 | 0.4 | 0.7 | 0.52 | 7.9
DeleteOnly Li et al. (2018) | 83.5 | 0.76 | 7.6 | 28.6 | 0.68 | 71.5
DeleteAndRetrieve Li et al. (2018) | 87.2 | 0.79 | 8.5 | 29.1 | 0.67 | 86.0
Dae | 24.5 | 0.20 | 11.7 | 58.1 | 0.86 | 51.8
Dae $+$ Bt | 85.8 | 0.79 | 6.8 | 21.4 | 0.70 | 42.8
Dae $+$ Bt $+$ Adv | 87.2 | 0.80 | 6.9 | 20.7 | 0.70 | 40.6
Dae $+$ Bt $+$ Mrt | 88.1 | 0.81 | 6.9 | 20.1 | 0.70 | 41.0
Input copy | 3.9 | 0.00 | 18.4 | 100.0 | 1.00 | 8.2
Table 2: Automatic metric evaluation results on the _Syelp_ test set (lower-
cased). BLEU scores are computed between the test set human references and
model outputs. For all scores except perplexity (PPL), higher is better. ACC,
BLEU and sBLEU values are in the range $[0,100]$; EMD in $[0,1]$; WMS in
$(0,1]$; PPL in $[0,\infty)$.
### 3.2 Evaluation Metrics
A lot of work has been done in order to make evaluation of style transfer
models more reliable Shen et al. (2017); Fu et al. (2018); Zhao et al. (2018);
Li et al. (2018); Mir et al. (2019). We combine the evaluation setups of
Lample et al. (2019) and Mir et al. (2019) in order to make our results
comparable to the previous work; the details are in Section A.3. We evaluate
the system outputs across three quality dimensions.
Attribute control is assessed by in-domain fasttext Joulin et al. (2016)
classifiers. For each dataset, we use the train portion of the data to train
attribute-specific classifiers. Given a predicted text, a classifier outputs a
probability distribution over possible styles. We use the highest-scoring
class as a classifier prediction and compare it with the gold-standard label
to compute the accuracy. We also compute EMD between the probability
distributions of the predicted text, on one hand, and the original text, on
the other. Finally, all scores are averaged across attributes.
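A sketch of these two computations (our own; it uses scipy's one-dimensional Wasserstein distance over class indices and omits the direction correction of Mir et al. (2019) for brevity):

```python
import numpy as np
from scipy.stats import wasserstein_distance

def attribute_metrics(p_orig, p_pred, gold):
    """p_orig, p_pred: (N, C) classifier distributions for the original and
    transferred texts; gold: (N,) target style labels."""
    acc = float((p_pred.argmax(axis=1) == gold).mean())
    classes = np.arange(p_pred.shape[1])
    emd = float(np.mean([wasserstein_distance(classes, classes, po, pp)
                         for po, pp in zip(p_orig, p_pred)]))
    return acc, emd
```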
Fluency is approximated by the perplexity computed by a 5-gram KenLM model
Heafield (2011) with Kneser–Ney smoothing Ney et al. (1994).
Content preservation is measured by two groups of metrics. First, we use an
embedding-based Word-Mover’s Similarity (WMS), the normalized inverse of the
Word-Mover’s Distance. This is done in order to make it easier for the reader
to compare approaches: the higher the score, the better (similar to the other
metrics). The second group includes BLEU Papineni et al. (2002) and self-BLEU
(or sBLEU). The _Syelp_ and _SAmazon_ test sets have human references, so we
compute BLEU scores between these references and the model outputs. _Fyelp_
and _RottenTomatoes_ do not have human references, and we compute sBLEU scores
between the input texts and the generated outputs.
## 4 Results
Model | Acc (%) | EMD | BLEU | sBLEU | WMS | PPL
---|---|---|---|---|---|---
CrossAligned Shen et al. (2017) | 74.5 | 0.45 | 0.4 | 0.5 | 0.55 | 20.5
Style Embedding Fu et al. (2018) | 39.7 | 0.19 | 10.2 | 29.5 | 0.67 | 81.1
MultiDecoder Fu et al. (2018) | 72.1 | 0.41 | 4.9 | 14.4 | 0.61 | 78.9
TemplateBased Li et al. (2018) | 69.9 | 0.40 | 26.6 | 64.0 | 0.78 | 91.1
RetrieveOnly Li et al. (2018) | 73.5 | 0.43 | 0.9 | 2.1 | 0.54 | 7.7
DeleteOnly Li et al. (2018) | 51.0 | 0.26 | 25.4 | 60.9 | 0.80 | 37.7
DeleteAndRetrieve Li et al. (2018) | 56.4 | 0.30 | 23.3 | 54.3 | 0.77 | 57.4
Dae | 20.2 | 0.03 | 30.2 | 79.6 | 0.94 | 30.1
Dae $+$ Bt | 34.4 | 0.13 | 30.9 | 78.9 | 0.92 | 29.3
Dae $+$ Bt $+$ Adv | 47.3 | 0.23 | 28.5 | 72.0 | 0.89 | 36.9
Dae $+$ Bt $+$ Mrt | 50.4 | 0.25 | 28.1 | 70.9 | 0.88 | 38.3
Input copy | 17.1 | 0.00 | 38.4 | 100.0 | 1.00 | 8.5
Table 3: Automatic metric evaluation results on the _SAmazon_ test set (lower-
cased). BLEU scores are computed between the test set human references and
model outputs.
### 4.1 Single-Attribute (_Syelp_ , _SAmazon_)
We first evaluate the described methods in the single-attribute scenario.
Table 2 and Table 3 show their performance on the test portion of the _Syelp_
and _SAmazon_ datasets, respectively. The results for previous work are
computed based on the outputs released by Li et al. (2018) at
https://github.com/lijuncen/Sentiment-and-Style-Transfer.
The first striking observation is that all models achieve low BLEU scores.
Taking into consideration the high WMS scores of some models, this suggests
that using an n-gram overlap between a human reference and a model output is
inadequate for style transfer: the potential variability of re-generating text
in a different style is too high to be captured by an overlap with a single
reference text. This observation is reinforced by the fact that the models
with the best transfer performance (accuracy and EMD) also exhibit the lowest
BLEU scores. The large gap between sBLEU and WMS indicates that computing an
n-gram overlap between the input and the system output is also a very
superficial way of measuring content preservation, calling for the use of
vector-space models, like WMD.
Interestingly, the performance of the models proposed in the literature is not
consistent across datasets. _SAmazon_ has longer and more diverse sentences
than _Syelp_ , which could explain why template- and retrieval-based
approaches underperform, compared to the data-driven alternatives. However, it
is not clear why both the previously proposed neural models and the approaches
we implemented and experimented with in this paper show such a large gap
between the results on _Syelp_ and _SAmazon_.
It is surprising that Dae by itself can do some amount of style transfer, even
without an additional supervision signal. This is most likely a consequence of
the indiscriminate noising of tokens in the input text, which removes
style-bearing words during the noising step. The work of Shen et al. (2019)
offers a plausible explanation: denoising seems to help autoencoders map
similar texts to similar latent representations and promote sequence
neighborhood preservation.
Among the tested supervision signals, Mrt is slightly preferable. However, in
the single-attribute scenario, the best way to do style transfer seems to be a
simple nearest-neighbour approach (RetrieveOnly): retrieving a semantically
similar text with the desired style from the available corpus.
Manual examination of model predictions revealed that none of the approaches
goes further than replacing several style-bearing words. This happens due to
the limited variation in the data. For example, _Syelp_ texts are at most 15
tokens long, and most reviews have a similar structure, so the models learn to
do minimal edits to perform style transfer. They also fail when more
substantial rewriting is needed. For example, all examined approaches failed
to change the style in the following cases and produced an almost unchanged
input text as the prediction:
* •
_i just walked out , called the manager to complain_
* •
_she does n’t say anything and just walks away_
### 4.2 Multi-Attribute (_Fyelp_ , _RottenTomatoes_)
Table 4 and Table 5 show the performance of the considered approaches on
_Fyelp_ , and _RottenTomatoes_ data, respectively.
Model | Acc (%) | EMD | sBLEU | WMS | PPL
---|---|---|---|---|---
Dae | 13.9 | 0.02 | 38.5 | 0.76 | 67.1
Dae $+$ Bt | 32.2 | 0.24 | 22.9 | 0.69 | 29.6
Dae $+$ Bt $+$ Adv | 42.4 | 0.33 | 22.1 | 0.68 | 31.8
Dae $+$ Bt $+$ Mrt | 46.8 | 0.36 | 21.5 | 0.68 | 33.1
Table 4: Automatic metric evaluation results on the _Fyelp_ test set (lower-cased).

Model | Acc (%) | EMD | sBLEU | WMS | PPL
---|---|---|---|---|---
Dae | 35.1 | 0.015 | 39.9 | 0.78 | 73.79
Dae $+$ Bt | 55.5 | 0.18 | 28.5 | 0.69 | 83.1
Dae $+$ Bt $+$ Adv | 57.6 | 0.20 | 28.2 | 0.69 | 83.2
Dae $+$ Bt $+$ Mrt | 59.6 | 0.22 | 25.6 | 0.68 | 98.5
Table 5: Automatic metric evaluation results on the _RottenTomatoes_ test set
(lower-cased).
The trends from the single-attribute transfer seem to be present here as well.
The sBLEU and WMS scores achieved by the Dae model are the highest, which is
intuitive — the model learns to reconstruct the input.
The correlation between higher EMD and accuracy scores vs. lower WMS and sBLEU
scores supports the hypothesis that there is a trade-off between preserving
input content and performing style transfer. Figure 5 shows how content
preservation (measured by sBLEU) and style intensity (Acc) criteria start
competing during Bt model training.
Figure 5: Style transfer accuracy vs sBLEU score during training phase of the
Bt model. Data: development set of the _RottenTomatoes_ dataset.
This phenomenon was also observed by Lai et al. (2019): the authors note that
a model trained longer was better able to transfer style, but worse at
retaining the input’s content. An evaluation perspective of this issue was
also studied by Mir et al. (2019). We are not sure whether it is possible to
define a performance upper bound for a particular class of models; therefore,
deciding which model is state-of-the-art in the task of style transfer is not
easy, as the aforementioned trade-off complicates the issue. This makes
finding ways to control this trade-off a very interesting direction for future
research.
Quality-wise, Adv and Mrt produce more style-infused instances, but even they
have two flaws. First, they struggle to perform transfer across all styles
simultaneously. The issue is complicated by the difficulty of the chosen style
attributes themselves. For example, transferring _gender_ style proved to be a
challenge even for the authors of the paper.
The second issue is that the models cannot cope with cases where words typical
of one style are used to express the opposite style. For example:
* •
_there are much better places for breakfast ._
* •
_anything they say , ask in writing ._
In such cases all models tend to output the input text as a prediction.
## 5 Results Analysis
We believe that autoencoding is the most important stage of the training
process. As Lample et al. (2019) mention, it is a way to force the model
decoder to leverage the style information: since the noise applied to the
input $x$ may corrupt words conveying the values of the original input style
$s$, the decoder has to learn to use the additional style input in order to
perform a proper, style-infused, reconstruction.
Backtranslation is an easy-to-implement and conceptually appealing approach:
training is straightforward and empirical results show that it performs well
across different datasets and styles. However, we found that the effectiveness
of Bt is not model-agnostic. We experimented with using a more recent
Transformer Vaswani et al. (2017) architecture for the Dae component and found
that the model only manages to do autoencoding, but almost no style transfer.
We hypothesize that this happens when an encoder’s capacity is too high, and
is related to the ability of such models to learn an arbitrary mapping between
sequences and associated latent representation Shen et al. (2019). Prior work
for multi-attribute text style transfer suggests that the encoder is
responsible for encoding the input text into its content representation
Logeswaran et al. (2018); Lample et al. (2019). In fact, the interpolated
reconstruction loss used in the model by Logeswaran et al. (2018) is based on
this assumption. We attempted to verify whether the outputs of a Transformer
encoder indeed behave as a content representation, i.e., whether texts
rewritten in different styles are encoded to the same representation.
During backtranslation, the model generates $\hat{x}$ and $x^{*}$, which would
be the same text written in different styles, if the model were perfect.
Assuming that the encoder outputs represent the content, we can assess how
similar the two encoder outputs are. Since the encoder outputs may have
different sequence lengths, we performed a global pooling over the encoder
output vectors, yielding one vector for each text. Following the single-
attribute model of Tikhonov et al. (2019), we calculate the mean squared error
(MSE) between these two vectors. The results of this experiment on the
_RottenTomatoes_ dataset are shown in Figure 6.
Figure 6: Mean squared error between the pooled encoder outputs of the source
text and the backtranslated text. Development set of the _RottenTomatoes_
dataset, Transformer-based Dae $+$ Bt model.
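The probe itself reduces to a few lines; the following sketch (our own, with illustrative names) mean-pools the two encoder outputs and computes their MSE:

```python
import torch

def pooled_mse(enc_out_a, enc_out_b):
    """enc_out_*: (seq_len, hidden) encoder outputs of x_hat and x_star;
    the sequences may have different lengths, hence the pooling."""
    a = enc_out_a.mean(dim=0)  # global average pooling over time
    b = enc_out_b.mean(dim=0)
    return torch.mean((a - b) ** 2)
```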
The pooled representations become almost the same at the start of training.
Looking back at Figure 2, we can see that it is possible to “game” the
optimization procedure and achieve an optimal loss value without doing much
style transfer. This happens if, during the Dae step, the generator model
learns to reconstruct the input without using the style information. In this
case, $\hat{x}$ and $x^{*}$ in Figure 2 become $x$ and the Bt loss becomes 0.
Our experiments suggest that this happens with Transformer networks but not
with RNN ones. However, the reasons for this phenomenon remain unclear.
Adversarial training showed more consistent results in our experiments,
although the training outcomes exhibit more variation across runs. This is
expected: many researchers have reported instability issues with adversarial
training, _e.g._ , vanishing gradients, convergence difficulties and mode
collapse Goodfellow et al. (2014); Arjovsky and Bottou (2017); Roth et al.
(2017). Nevertheless, the results are generally lower than the ones we
obtained with Mrt models. A plausible explanation could be the findings of
Elazar and Goldberg (2018), who showed that adversarial discriminators exhibit
inferior performance when compared to external supervision signals.
The Minimum Risk Training method showed both stable training behavior and
consistent performance gains over the vanilla Bt training regime. This is
somewhat surprising, given that in the neural machine translation community
(where the method is most popular) it is known to be sensitive to the choice
of hyperparameter values Shen et al. (2016).
The additional benefit of the Mrt method is that, unlike adversarial training,
one is safeguarded against optimization instability issues: the model is first
pretrained with a maximum-likelihood estimation criterion, and the worst-case
outcome is staying at the same performance level. Finally, adversarial
approaches are limited by their use of loss functions that must be
differentiable with respect to the model parameters. Mrt, on the other hand,
can incorporate arbitrary metrics at any level of output granularity. The
biggest weakness of the method is training time: obtaining good parameter
estimates depends on the number of samples in the pool of candidates used to
approximate the full search space, and as this pool grows, the training time
increases.
## 6 Discussion
The approaches we examined perform on par, with a slight preference towards
the Mrt method. However, more experiments are needed to confirm our findings,
_e.g._ to understand the strange behavior of the Transformer model trained
with Bt. We did not perform additional experiments comparing the performance
of Mrt and Adv models, when other generator networks (like Transformer) are
employed. We also did not experiment with hyperparameter values due to time
and computation constraints, but this is needed in order to account for the
randomness in model training.
Apart from additional experiments explaining the limitations of
backtranslation, we consider the data quality and evaluation protocols to be
two prominent directions that need to be improved.
We found three big issues with the employed datasets. Firstly, with the
exception of the data provided by Li et al. (2018), all other datasets have
multiple versions, which makes model comparison hard. Secondly, the datasets
are centered around style dimensions that often conflate the _content_ and
_style_ parts. For example, the multi-attribute Amazon dataset has the review
category as an attribute. However, unlike sentiment transfer, it is not
possible to change the category class of a review without changing its
content. Lastly, some stylistic properties are problematic to model, _e.g._ ,
the gender or age of a reviewer. Apart from ethical concerns, we also found
these attributes to be very hard to capture, even by humans. This means that
human evaluation of the models trained on such data would be problematic.
Evaluation protocols for style transfer models should be improved as well.
Current metric-based evaluation is flawed for various reasons. First, the
usage of some metrics is questionable. For example, BLEU is used for measuring
content preservation, but it penalizes differences between input and output
texts, even when they are intended (you cannot change style without changing
content). Second, the reported scores in different works vary even for the
same models. For example, the scores in Lample et al. (2019) are different
from those originally reported in Li et al. (2018), even though model outputs
are the same. This most likely happens due to the differences between the
options for training classifiers or computing metric scores (_e.g._ , the smoothing
method for BLEU). Finally, it is still not clear what the expected output of a
style transfer model should look like. There is no doubt that a certain trade-
off between content preservation and style transfer intensity is inevitable,
but having some common definition of what constitutes a good model is
definitely needed.
## 7 Conclusion
In this work we empirically compared the three most popular approaches to
providing supervision signals in the absence of parallel data for the task of
style transfer. We successfully applied the Mrt optimization technique to
style transfer and showed that it offers the best performance gains while
staying stable throughout training. We revealed a model-specific limitation of the
backtranslation method, which inhibits training style transfer models. We also
evaluated a popular adversarial training approach and found that, although it
is able to improve upon vanilla backtranslation, it does not offer an
advantage over the Mrt alternative.
## Acknowledgments
This work was supported by the German Federal Ministry of Education and
Research (BMBF) as part of the Software Campus program under the promotional
reference 01IS17050. The first author of the paper is supported by the FAZIT
Foundation scholarship.
We thank Jessica Ficler for providing us with the _RottenTomatoes_ data, Raj
Dabre and Munu Sairamesh for the insightful discussions, and our colleagues
Christopher Klamm, Leonardo Ribeiro and Gözde Gül Şahin who provided
suggestions that greatly assisted our research.
## References
* Arjovsky and Bottou (2017) Martin Arjovsky and Léon Bottou. 2017. Towards principled methods for training generative adversarial networks.
* Bahdanau et al. (2014) Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural Machine Translation by Jointly Learning to Align and Translate. _CoRR_ , abs/1409.0473.
* Bird (2006) Steven Bird. 2006. NLTK: The Natural Language Toolkit. In _Proceedings of the COLING/ACL 2006 Interactive Presentation Sessions_ , pages 69–72, Sydney, Australia. Association for Computational Linguistics.
* Cho et al. (2014) Kyunghyun Cho, Bart van Merrienboer, Çaglar Gülçehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. _CoRR_ , abs/1406.1078.
* Dai et al. (2019) Ning Dai, Jianze Liang, Xipeng Qiu, and Xuanjing Huang. 2019. Style Transformer: Unpaired Text Style Transfer Without Disentangled Latent Representation. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , ACL 2019, pages 5997–6007. Association for Computational Linguistics.
* Elazar and Goldberg (2018) Yanai Elazar and Yoav Goldberg. 2018. Adversarial removal of demographic attributes from text data. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_ , pages 11–21, Brussels, Belgium. Association for Computational Linguistics.
* Ficler and Goldberg (2017) Jessica Ficler and Yoav Goldberg. 2017. Controlling Linguistic Style Aspects in Neural Language Generation. In _Workshop on Stylistic Variation_.
* Fu et al. (2018) Zhenxin Fu, Xiaoye Tan, Nanyun Peng, Dongyan Zhao, and Rui Yan. 2018. Style Transfer in Text: Exploration and Evaluation. In _Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence_ , AAAI-18, pages 663–670. AAAI Conference on Artificial Intelligence.
* Gao et al. (2014) Jianfeng Gao, Xiaodong He, Wen-tau Yih, and Li Deng. 2014. Learning continuous phrase representations for translation modeling. In _Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 699–709, Baltimore, Maryland. Association for Computational Linguistics.
* Gong et al. (2019) Hongyu Gong, Suma Bhat, Lingfei Wu, JinJun Xiong, and Wen-mei Hwu. 2019. Reinforcement learning based text style transfer without parallel training corpus. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 3168–3180, Minneapolis, Minnesota. Association for Computational Linguistics.
* Goodfellow et al. (2014) Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, _Advances in Neural Information Processing Systems 27_ , pages 2672–2680. Curran Associates, Inc.
* Heafield (2011) Kenneth Heafield. 2011. KenLM: Faster and smaller language model queries. In _Proceedings of the Sixth Workshop on Statistical Machine Translation_ , pages 187–197, Edinburgh, Scotland. Association for Computational Linguistics.
* Jhamtani et al. (2018) Harsh Jhamtani, Varun Gangal, Eduard Hovy, Graham Neubig, and Taylor Berg-Kirkpatrick. 2018. Learning to generate move-by-move commentary for chess games from large-scale social forum data. In _Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 1661–1671, Melbourne, Australia. Association for Computational Linguistics.
* John et al. (2019) Vineet John, Lili Mou, Hareesh Bahuleyan, and Olga Vechtomova. 2019. Disentangled representation learning for non-parallel text style transfer. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 424–434, Florence, Italy. Association for Computational Linguistics.
* Joulin et al. (2016) Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2016. Bag of tricks for efficient text classification. _arXiv preprint arXiv:1607.01759_.
* Kang et al. (2019) Dongyeop Kang, Varun Gangal, and Eduard Hovy. 2019. (male, bachelor) and (female, Ph.D) have different connotations: Parallelly annotated stylistic language dataset with multiple personas. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 1696–1706, Hong Kong, China. Association for Computational Linguistics.
* Kingma and Ba (2015) Diederik Kingma and Jimmy Ba. 2015. Adam: a method for stochastic optimization. In _Proceedings of the International Conference on Learning Representations (ICLR)_ , San Diego, USA.
* Lai et al. (2019) Chih-Te Lai, Yi-Te Hong, Hong-You Chen, Chi-Jen Lu, and Shou-De Lin. 2019. Multiple Text Style Transfer by Using Word-Level Conditional Generative Adversarial Network with Two-Phase Training. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing_ , EMNLP-IJCNLP 2019, pages 3577–3582. Association for Computational Linguistics.
* Lample et al. (2019) Guillaume Lample, Sandeep Subramanian, Eric Smith, Ludovic Denoyer, Marc’Aurelio Ranzato, and Y-Lan Boureau. 2019. Multiple-Attribute Text Rewriting. In _Proceedings of the 7th International Conference of Learning Representations_ , ICLR 2019.
* Li et al. (2018) Juncen Li, Robin Jia, He He, and Percy Liang. 2018. Delete, Retrieve, Generate: a Simple Approach to Sentiment and Style Transfer. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)_ , NAACL-HLT 2018, pages 1865–1874. Association for Computational Linguistics.
* Logeswaran et al. (2018) Lajanugen Logeswaran, Honglak Lee, and Samy Bengio. 2018. Content Preserving Text Generation with Attribute Controls. In _Advances in Neural Information Processing Systems 31_ , NeurIPS 2018, pages 5103–5113. Curran Associates, Inc.
* Mikolov et al. (2013) Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. In _Proceedings of the First International Conference on Learning Representations_.
* Mir et al. (2019) Remi Mir, Bjarke Felbo, Nick Obradovich, and Iyad Rahwan. 2019. Evaluating style transfer for text. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 495–504, Minneapolis, Minnesota. Association for Computational Linguistics.
* Ney et al. (1994) Hermann Ney, Ute Essen, and Reinhard Kneser. 1994. On structuring probabilistic dependences in stochastic language modelling. _Computer Speech & Language_, 8(1):1–38.
* Och (2003) Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In _Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics_ , pages 160–167, Sapporo, Japan. Association for Computational Linguistics.
* Papineni et al. (2002) Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In _Proceedings of the 40th annual meeting on association for computational linguistics_ , pages 311–318. Association for Computational Linguistics.
* Paszke et al. (2019) Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. PyTorch: an imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d’Alché Buc, E. Fox, and R. Garnett, editors, _Advances in Neural Information Processing Systems 32_ , pages 8024–8035. Curran Associates, Inc.
* Pele and Werman (2008) Ofir Pele and Michael Werman. 2008. A linear time histogram metric for improved sift matching. In _Computer Vision–ECCV 2008_ , pages 495–508. Springer.
* Pele and Werman (2009) Ofir Pele and Michael Werman. 2009. Fast and robust earth mover’s distances. In _2009 IEEE 12th International Conference on Computer Vision_ , pages 460–467. IEEE.
* Prabhumoye et al. (2018) Shrimai Prabhumoye, Yulia Tsvetkov, Ruslan Salakhutdinov, and Alan W Black. 2018. Style transfer through back-translation. In _Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 866–876, Melbourne, Australia. Association for Computational Linguistics.
* Rabinovich et al. (2017) Ella Rabinovich, Raj Nath Patel, Shachar Mirkin, Lucia Specia, and Shuly Wintner. 2017. Personalized machine translation: Preserving original author traits. In _Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers_ , pages 1074–1084, Valencia, Spain. Association for Computational Linguistics.
* Rao and Tetreault (2018) Sudha Rao and Joel Tetreault. 2018. Dear sir or madam, may I introduce the GYAFC dataset: Corpus, benchmarks and metrics for formality style transfer. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)_ , pages 129–140, New Orleans, Louisiana. Association for Computational Linguistics.
* Řehůřek and Sojka (2010) Radim Řehůřek and Petr Sojka. 2010. Software Framework for Topic Modelling with Large Corpora. In _Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks_ , pages 45–50, Valletta, Malta. ELRA. http://is.muni.cz/publication/884893/en.
* Roth et al. (2017) Kevin Roth, Aurélien Lucchi, Sebastian Nowozin, and Thomas Hofmann. 2017. Stabilizing training of generative adversarial networks through regularization. _CoRR_ , abs/1705.09367.
* Rubner et al. (1998) Yossi Rubner, Carlo Tomasi, and Leonidas J. Guibas. 1998. A metric for distributions with applications to image databases. In _Proceedings of the 6th International Conference on Computer Vision_ , pages 59–66.
* Rumelhart et al. (1986) David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. 1986. Learning representations by back-propagating errors. _Nature_ , 323(6088):533–536.
* dos Santos et al. (2018) Cícero Nogueira dos Santos, Igor Melnyk, and Inkit Padhi. 2018. Fighting Offensive Language on Social Media with Unsupervised Text Style Transfer. In _Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)_ , ACL 2018, pages 189–194. Association for Computational Linguistics.
* Sennrich et al. (2016) Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation models with monolingual data. In _Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 86–96, Berlin, Germany. Association for Computational Linguistics.
* Shen et al. (2016) Shiqi Shen, Yong Cheng, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016\. Minimum risk training for neural machine translation. In _Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 1683–1692, Berlin, Germany. Association for Computational Linguistics.
* Shen et al. (2017) Tianxiao Shen, Tao Lei, Regina Barzilay, and Tommi S. Jaakkola. 2017. Style transfer from non-parallel text by cross-alignment. In _Advances in Neural Information Processing Systems 30_ , NIPS 2017, pages 6830–6841. Curran Associates, Inc.
* Shen et al. (2019) Tianxiao Shen, Jonas Mueller, Regina Barzilay, and Tommi S. Jaakkola. 2019. Latent space secrets of denoising text-autoencoders. _CoRR_ , abs/1905.12777.
* Tikhonov et al. (2019) Alexey Tikhonov, Viacheslav Shibaev, Aleksander Nagaev, Aigul Nugmanova, and Ivan P. Yamshchikov. 2019. Style Transfer for Texts: Retrain, Report Errors, Compare with Rewrites. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing_ , EMNLP 2019. Association for Computational Linguistics.
* Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention Is All You Need. In _Advances in Neural Information Processing Systems 30_ , NIPS 2017, pages 5998–6008. Curran Associates, Inc.
* Williams (1992) Ronald J. Williams. 1992. Simple statistical gradient-following algorithms for connectionist reinforcement learning. _Machine Learning_ , 8(3–4):229–256.
* Wu et al. (2019) Chen Wu, Xuancheng Ren, Fuli Luo, and Xu Sun. 2019. A hierarchical reinforced sequence operation method for unsupervised text style transfer. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 4873–4883, Florence, Italy. Association for Computational Linguistics.
* Xu et al. (2012) Wei Xu, Alan Ritter, Bill Dolan, Ralph Grishman, and Colin Cherry. 2012. Paraphrasing for style. In _Proceedings of COLING 2012_ , pages 2899–2914, Mumbai, India. The COLING 2012 Organizing Committee.
* Zhang et al. (2020) Yi Zhang, Tao Ge, and Xu Sun. 2020. Parallel data augmentation for formality style transfer. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 3221–3228, Online. Association for Computational Linguistics.
* Zhao et al. (2018) Junbo Zhao, Yoon Kim, Kelly Zhang, Alexander Rush, and Yann LeCun. 2018. Adversarially regularized autoencoders. In _Proceedings of the 35th International Conference on Machine Learning_ , volume 80 of _Proceedings of Machine Learning Research_ , pages 5902–5911, Stockholmsmässan, Stockholm Sweden. PMLR.
## Appendix A Supplemental Material
### A.1 Dataset Preparation
The _Syelp_ and _SAmazon_ datasets are publicly available
(https://github.com/lijuncen/Sentiment-and-Style-Transfer). The preprocessing
steps for the _Fyelp_ and _RottenTomatoes_ datasets are described below.
_Fyelp_ was prepared using publicly released code
(https://github.com/facebookresearch/MultipleAttributeTextRewriting).
However, due to computational constraints, we additionally filtered out
texts longer than 50 tokens. Consequently, this makes our results
incomparable to those reported in Lample et al. (2019). We tried using the
same cut-off limit of 100 tokens as in the original paper, but model training
became prohibitively expensive.
The raw _RottenTomatoes_ dataset was shared with us by Ficler and Goldberg
(2017). We discarded empty reviews, reviews containing only non-alphabetic
characters, meta-reviews, and reviews in languages other than English (a
review was considered English only if at least 70% of its tokens were
identified as English).
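The language-identification tool is not named above; a minimal sketch of such a filter, with the NLTK word list standing in for whatever identifier was actually employed, could look as follows:

```python
from nltk.corpus import words  # requires nltk.download("words")

ENGLISH_VOCAB = set(w.lower() for w in words.words())

def is_english(text: str, threshold: float = 0.7) -> bool:
    """Keep a review only if >= 70% of its alphabetic tokens look English."""
    tokens = [t.lower() for t in text.split() if t.isalpha()]
    if not tokens:
        return False
    hits = sum(t in ENGLISH_VOCAB for t in tokens)
    return hits / len(tokens) >= threshold

print(is_english("the movie was great"))        # True
print(is_english("le film était formidable"))   # likely False
```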
Using the available meta-data, we added professionality annotations. We
further followed the instructions of Ficler and Goldberg (2017) to annotate
reviews with their sentiment. As for the gender annotations, we retrieved
them from user names and user ids: we replaced ids with the actual reviewer
names (obtained from the RottenTomatoes website), and followed the
instructions in Lample et al. (2019) to map reviewer names to genders using
lists of male/female names.
During training and evaluation all texts were lower-cased.
### A.2 Training Details
All models were implemented using the PyTorch Paszke et al. (2019) and
PyTorch-Lightning (https://github.com/PytorchLightning/pytorch-lightning)
frameworks.
Our models use the following hyperparameters:
* •
embedding dimension: 512
* •
RNN hidden dimension: 512
* •
encoder pooling kernel size: 5
* •
encoder pooling window size: 5
* •
word shuffle probability: 3
* •
intensity of word shuffling (parameter $k$): 3
The models were trained using the Adam optimizer Kingma and Ba (2015) with
the following hyperparameters:
* •
lr: 0.0001
* •
betas: (0.5, 0.999)
* •
weight decay: 0
The models were trained on a cluster of eight NVIDIA Tesla V100 GPUs (32 GB)
for 30 epochs with a dropout rate of 0.1; the gradient norm was clipped to 5.0.
_SAmazon_ and _Syelp_ models were trained with a batch size of 400;
_RottenTomatoes_ and _Fyelp_ models used a smaller batch size of 200 due to
computational limitations.
We did not restrict the vocabulary size of the models, with the exception of
the _Fyelp_ model, where we followed Lample et al. (2019) and limited the
vocabulary to 60k BPE merge operations.
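For concreteness, the optimizer settings and gradient clipping above translate into the following PyTorch sketch (the linear layer and dummy loss are stand-ins for the actual models and training objective):

```python
import torch

# Stand-in module; the actual models are the seq2seq networks described above
model = torch.nn.Linear(512, 512)

# Adam with the hyperparameters listed in this section
optimizer = torch.optim.Adam(
    model.parameters(), lr=1e-4, betas=(0.5, 0.999), weight_decay=0.0
)

x = torch.randn(8, 512)
loss = model(x).pow(2).mean()  # dummy loss, for illustration only
loss.backward()

# Clip the gradient norm to 5.0 before each update, as stated above
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=5.0)
optimizer.step()
optimizer.zero_grad()
```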
### A.3 Evaluation
All model outputs and references were lower-cased and tokenized on whitespace
before evaluation. Specific details about the metrics used are given below:
BLEU, sBLEU. We used the NLTK Bird (2006) package to compute BLEU scores. No
smoothing was applied.
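A minimal sketch of this computation with NLTK (the example sentences are illustrative only):

```python
from nltk.translate.bleu_score import corpus_bleu

# Outputs and references are lower-cased and space-tokenized beforehand;
# corpus_bleu expects one list of references per hypothesis
references = [[["the", "movie", "was", "great", "and", "fun"]]]
hypotheses = [["the", "movie", "was", "great", "and", "boring"]]

# Default settings: 4-gram BLEU, no smoothing
print(100 * corpus_bleu(references, hypotheses))
```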
Accuracy, EMD. We trained fasttext (https://fasttext.cc/) classifiers to
compute both the accuracy and the probability distribution for EMD. We
computed the latter using the code from Mir et al. (2019). The same codebase
was also used to extract style-specific lexicons.
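A sketch of the accuracy computation with the fasttext Python API (the training file path and example output are placeholders):

```python
import fasttext

# Style classifier trained on lines of the form "__label__<style> <text>";
# "train.txt" is a placeholder path
clf = fasttext.train_supervised(input="train.txt")

# Transfer accuracy: fraction of model outputs classified as the target style
outputs = [("the movie was great", "__label__positive")]
correct = sum(clf.predict(text, k=1)[0][0] == target
              for text, target in outputs)
print("accuracy:", correct / len(outputs))

# For EMD, the full distribution over style labels is needed instead:
labels, probs = clf.predict("the movie was great", k=len(clf.labels))
```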
Perplexity. We used the publicly available KenLM toolkit
(https://kheafield.com/code/kenlm/) to train a 5-gram language model with
Kneser-Ney smoothing. Perplexities were computed on the sentence level and
averaged over the predicted texts.
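A sketch of this evaluation with the KenLM Python bindings (the ARPA file is assumed to have been trained beforehand with KenLM's lmplz, whose estimator uses modified Kneser-Ney smoothing; the path is a placeholder):

```python
import kenlm

# 5-gram ARPA model trained beforehand; placeholder path
lm = kenlm.Model("style_lm.arpa")

predictions = ["the movie was great", "i loved every minute of it"]

# Sentence-level perplexities, averaged over all predicted texts
avg_ppl = sum(lm.perplexity(s) for s in predictions) / len(predictions)
print(f"perplexity: {avg_ppl:.2f}")
```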
WMS. We used the code from Mir et al. (2019) to compute WMD scores Pele and
Werman (2008, 2009), but normalised it in the following way:
$\textsc{WMS}(d_{1},d_{2})=\frac{1}{1+\textsc{WMD}(d_{1},d_{2})}$ (4)
Here, $\textsc{WMD}(d_{1},d_{2})$ denotes Word Mover’s distance between two
documents. We transform WMD in this way so that, as with the other metrics, a
higher score indicates a better model. The metric is computed between Word2Vec
Mikolov et al. (2013) representations. We used the Gensim Python package
Řehůřek and Sojka (2010) and trained Word2Vec vectors from scratch on the
train portions of the datasets.
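A minimal sketch of Eq. (4) on top of Gensim (note that `wmdistance` requires an EMD backend such as POT or pyemd to be installed; the toy corpus is illustrative only):

```python
from gensim.models import Word2Vec

# Word2Vec trained from scratch on the train portion of a dataset
train_sentences = [["the", "movie", "was", "great"],
                   ["the", "film", "was", "awful"]]
w2v = Word2Vec(train_sentences, vector_size=100, min_count=1, seed=0)

def wms(doc1, doc2):
    """Word Mover's similarity as in Eq. (4): 1 / (1 + WMD)."""
    return 1.0 / (1.0 + w2v.wv.wmdistance(doc1, doc2))

print(wms(["the", "movie", "was", "great"],
          ["the", "film", "was", "great"]))
```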
Excluded metrics. We excluded some of the metrics that Mir et al. (2019)
originally used in their study. These metrics are:
* •
masked versions of sBLEU and WMS;
* •
adversarial classifiers for measuring naturalness.
The former were excluded because the authors showed that the masked versions
of the metrics correlate highly with the unmasked ones. The latter were
excluded because the details of training the classifiers were not described
in the respective work.
# Inelastic neutron scattering determination of the spin Hamiltonian for
BaCdVO(PO4)2
V. K. Bhartiya, S. Hayashida, K. Yu. Povarov, and Z. Yan, Laboratory for Solid State Physics, ETH Zürich, 8093 Zürich, Switzerland
Y. Qiu, NIST Center for Neutron Research, National Institute of Standards and Technology, Gaithersburg, Maryland 20899, USA
S. Raymond, Univ. Grenoble Alpes, CEA, IRIG/MEM-MDN, F-38000 Grenoble, France
A. Zheludev (http://www.neutron.ethz.ch/), Laboratory for Solid State Physics, ETH Zürich, 8093 Zürich, Switzerland
###### Abstract
Single crystal inelastic neutron scattering is used to study spin wave
excitations in the fully polarized state of the frustrated quantum ferro-
antiferromagnet BaCdVO(PO4)2. The data analysis is based on a Heisenberg spin
Hamiltonian that includes as many distinct nearest-neighbor and next-nearest
neighbor interactions as allowed by crystal symmetry. All 8 such exchange
constants are obtained in a simultaneous fit to over 150 scans across the
dispersion manifold. This establishes a definitive quantitative model of this
material. It turns out to be substantially different from the one assumed in
numerous previous studies based on powder experiments.
## I Introduction
Despite its apparent simplicity, the square lattice $S=1/2$ Heisenberg model
with ferromagnetic (FM) nearest-neighbor (NN) coupling $J_{1}$ and frustrating
antiferromagnetic (AF) next nearest neighbor (NNN) interaction $J_{2}$ is
among the most important models in magnetism. It is famous for supporting an
exotic spin-nematic phase for sufficiently strong frustration ratios [1, 2,
3] or in applied magnetic fields for moderate frustration [4, 5].
Unfortunately, no perfect experimental realizations of this model have been
discovered to date. The closest approximations are found among layered vanadyl
phosphates with the general formula AA′VO(PO4)2 (A, A′ = Ba, Cd, Pb, Sr, Zn)
[6, 7, 8]. Of these, the most frustrated and the most promising spin nematic
candidate is BaCdVO(PO4)2 [7, 8]. Indeed, recent studies have produced
compelling thermodynamic and neutron diffraction evidence that this material
may have a novel exotic quantum phase in a wide range of applied magnetic
fields below saturation [9, 10].
All initial estimates of the coupling constants and frustration strengths in
AA′VO(PO4)2 materials were based on powder sample experiments analyzed with
the assumption that the underlying model is indeed a perfect $J_{1}$-$J_{2}$
square lattice [6]. However, the latter is incompatible with the crystal
symmetries of any compound in the family. All evidence indicates that the NN
interactions stay FM and the NNN interactions remain AFM, but beyond that the
deviations from simple square lattice symmetry are substantial. For example,
single crystal
experiments on Pb2VO(PO4)2 revealed that it has as many as 5 distinct exchange
constants and a weaker frustration than suggested by previous powder studies
[11, 12]. The situation is even more complicated for BaCdVO(PO4)2, where the
powder/perfect square lattice estimate is $J_{2}/J_{1}=-0.9$ [6]. Already the
room temperature crystal structure [13] allows for 4 distinct exchange
constants. A recently discovered structural transition [10] at 240 K lowers
the symmetry even further, so that as many as 4 nearest-neighbor and 4
next-nearest-neighbor coupling constants are allowed.
The main question is whether the rather complex interactions in BaCdVO(PO4)2
are compatible with the presence of a high-field nematic state. To answer it,
one has to know the exact values of the exchange parameters. A first step in this
direction was made in our preliminary inelastic neutron scattering study of
the spin wave spectrum in the fully saturated phase [10]. Due to the limited
amount of data that could be collected on a unique but small 114Cd enriched
crystalline sample, it was not possible to determine all 8 relevant parameters
unambiguously. Nonetheless, the data were enough to demystify the peculiar
“up-up-down-down” zero-field magnetic structure previously detected in a
powder experiment [14]. As for the field-induced nematic phase, recent theoretical
calculations made use of our preliminary estimates to demonstrate its
robustness [15]. Still, the exact Hamiltonian remains undetermined.
In the present work we report the results of a full-scale continuation of the
preliminary neutron measurement. We utilize the extremely high-efficiency
MACS spectrometer at the National Institute of Standards and Technology
(NIST) to map out the spin wave dispersion in the entire Brillouin zone. We
then analyze the combination
of new and previously collected data in a single global model fit. In doing so
we fully take into account the complex mosaicity of the sample and the energy-
momentum resolution of the spectrometers. The result is a definitive spin
Hamiltonian for BaCdVO(PO4)2.
Figure 1: Structure of vanadyl phosphate layers in BaCdVO(PO4)2 at 300 K in
the $Pbca$ phase (a) and at 120 K in the $Pca2_{1}$ phase (b). Distances
between nearest- and next-nearest magnetic V4+ ions are indicated. The two
inequivalent V4+ sites are shown in different colors. The dotted black
rectangle shows the crystallographic unit cell.
## II Material and Experimental Details
The room temperature crystal structure of BaCdVO(PO4)2 is orthorhombic (space
group $Pbca$ No. 61) with lattice parameters $a=8.84$ Å, $b=8.92$ Å, and
$c=19.37$ Å [13]. Magnetism is due to $S=1/2$ V4+ ions that form layers
parallel to the $(a,b)$ plane. There are 8 magnetic ions per crystallographic
unit cell, four in each layer within a single cell. Intra-layer NN and NNN
interactions are expected to dominate. Further in-plane and inter-layer
coupling are expected to be negligible [8]. In particular, the spin wave
dispersion measured previously in the very similar Pb2VO(PO4)2 system is
perfectly modeled without taking these into account [11]. Already at room
temperature there are two distinct NN and two NNN superexchange pathways, as
illustrated in Fig. 1a, which shows a single magnetic layer. As mentioned
above, the crystal symmetry is further lowered upon cooling through a
structural transition at about 250 K. At $T=120$ K the space group is
$Pca2_{1}$ ($C_{2\nu}^{5}$, No. 29) with lattice parameters $a=8.8621(4)$ Å,
$b=8.8911(4)$ Å, and $c=18.8581(9)$ Å [10]. There are two V4+ symmetry-
inequivalent sites represented now by two different colors in each layer and 8
distinct superexchange paths as shown in Fig. 1(b). Magnetic order sets in at
$T_{N}\simeq 1.05$ K [9]. Its “up-up-down-down” character [14] is enabled by
the alternation of NN interaction strengths along the crystallographic $a$
axis [9]. A spin-flop transition observed at $\sim 0.5$ T for a field applied
along the same direction [9] suggests a tiny easy-axis magnetic anisotropy of
the order of 0.005 meV. As mentioned, the magnetic phase diagram includes an
extensive pre-saturated spin-nematic candidate phase, as discussed in detail
in Refs. [9, 10]. Full saturation is reached at $\mu_{0}H_{\mathrm{sat}}\simeq
6.5$ T.
The present measurement of the spin Hamiltonian is based on the method
pioneered by Coldea et al. in Ref. [16]. Inelastic neutron scattering is used
to measure the spin wave dispersion in the fully saturated phase. It is then
analyzed in the framework of spin wave theory, which for the Heisenberg model
becomes exact above saturation. We made use of the same $\sim$320 mg 98%
114Cd-enriched sample as in the experiments reported in Ref. [10]. In habit it
is green and transparent. The synthesis procedure is as follows. Single-phase
polycrystalline BaCdVO(PO4)2 was prepared by the solid-state reaction method
in two steps. First, stoichiometric amounts of the precursors NH4H2PO4, BaCO3
and 114CdO were sintered in a Pt crucible at 700∘C for 72 hrs to yield
single-phase BaCdP2O7. In a second step, the product was mixed with
stoichiometric amounts of V2O5 and V2O3, then compacted under a hydrostatic
pressure of 70 MPa for 20 minutes. The resulting pellets were sintered at
800∘C for 48 hrs in a glassy graphite crucible sealed in quartz under a
vacuum of $10^{-4}$ Torr. In all cases 99.99$\%$-purity starting materials
were used. The crystal was grown from finely ground powders using the
self-flux Bridgman technique at 0.2 mm/hr at 880∘C in a sealed glassy
graphite crucible with tantalum as an oxygen
scavenger. Powder and single crystal X-ray diffraction experiments in all
cases indicate a single phase with no disorder of any kind. The absence of
disorder is further supported by the total lack of a Curie-like contribution
to the magnetic susceptibility at low temperatures [9].
All neutron measurements were carried out with momentum transfers in the
$(h,k,0)$ reciprocal space plane. The mosaic of the crystal was characterized
by mapping out the distributions of the $(200)$ and $(020)$ Bragg peaks both
within and out of the scattering plane using a series of rocking curves. The
survey revealed 7 distinct crystallites of individual mosaic spreads
$<1^{\circ}$, but distributed over about $12^{\circ}$ in the $(a,b)$ plane and
within $\pm 5^{\circ}$ out of the plane. A tilt-integrated rocking curve of
the $(020)$ Bragg peak is shown in Fig. 2. An analysis of the measured
integrated peak intensities yielded the rotations of individual crystallites
in the $(a,b)$ plane relative to the mean setting ($-6.74^{\circ}$,
$-5.96^{\circ}$, $-5.19^{\circ}$, $-4.18^{\circ}$, $-1.8^{\circ}$,
$0.34^{\circ}$, $4.8^{\circ}$), as well as their relative masses (0.13, 0.3,
0.5, 0.65, 0.7, 0.15, 1) normalized to the largest crystallite, respectively.
Figure 2: Tilt-integrated rocking curve (intensity vs. sample rotation angle
$\phi$) of the $(0,2,0)$ Bragg peak measured in the BaCdVO(PO4)2 sample
studied in this work; error bars represent one standard deviation. The
contributions of the 7 individual crystallites are color-coded. The red dot
indicates the position of the center of mass, relative to which all momentum
transfers were indexed.
New inelastic data were collected with the Multi-Axis Crystal Spectrometer
(MACS) at the National Institute of Standards and Technology (NIST) [17]. All
measurements were done in a 9 T magnetic field applied along the
crystallographic $c$ axis. Due to the high neutron flux on the
neutron-absorbing sample, the stable temperature was $\sim$700 mK in a
3He-4He dilution refrigerator. With its 20 detectors positioned at different
scattering angles but tuned to the same energy, MACS is optimized for
measuring two-dimensional intensity maps at a constant energy transfer, as
was done, for example, in the study of Pb2VO(PO4)2 [11]. For BaCdVO(PO4)2 we
chose a different approach that allows us to better resolve the rather weakly
dispersive bands at the bottom of the dispersion manifold [10]. In our
routine, one particular detector performed energy scans at fixed wave-vector
transfers $\mathbf{q}$ as in a conventional
3-axis experiment: $(0.1,1.5,0)$, $(0.2,1.5,0)$, $(0.3,1.5,0)$, $(0.4,1.5,0)$,
$(0.5,1.5,0)$, $(0.6,1.5,0)$, $(0.7,1.5,0)$, $(0.8,1.5,0)$ and $(1.0,1.5,0)$.
The energy was scanned from 0.55 meV to 3 meV with a fixed final neutron
energy of $E_{f}=2.7$ meV. The energy step was 0.025 meV with a counting time
of $\sim$6 min/point. The measured energy width of the incoherent elastic
line was $\sim$0.15 meV.
Figure 3: Coverage of momentum-energy space in the inelastic neutron
scattering measurements with the MACS (circles) and IN12 (diamonds, Ref.
[10]) instruments. The energy is scanned between 0.55 and 3 meV. Concentric
arcs represent the squared magnetic form factor of V4+.
In the course of these constant-$\mathbf{q}$ scans, the remaining 19 detectors
performed “oblique” scans in both momentum and energy. Data points collected
with scattering angles below $\pm 20^{\circ}$ were discarded to avoid
background from the direct beam. The final data set consisted of 151 scans
comprising 9510 data points, covering a large part of the Brillouin zone, as
shown by the
colored circles in Fig. 3. Representative individual scans are shown in Fig.
4. In Fig. 6 we show several representative two-dimensional energy-momentum
slices of the collected data. The new MACS data supplemented the data
previously collected at the IN12 3-axis spectrometer at ILL in a 10 T $c$-axis
magnetic field [10]. The latter were all taken in conventional
constant-$\mathbf{q}$ scans along high symmetry directions, as indicated by
diamond symbols in Fig. 3.
Figure 4: Left panels: Representative neutron scattering data collected by
individual detectors in the course of energy scans on the MACS spectrometer
(symbols); error bars represent one standard deviation. The solid red line is
the result of a global model fit to the entire collected data set, as
explained in the text. The shaded peaks are individual contributions of each
of the 7 crystallites in the sample. Right panels: reciprocal space
trajectories of the corresponding scans.
## III Data Analysis
The analysis of the measured magnetic scattering intensities was based on the
Heisenberg model for the V4+ spins in each layer. Interactions between layers
were assumed negligible. Unlike the constrained model used in Ref. [10], we
allowed for 8 distinct exchange constants connecting nearest-neighbor and
next-nearest-neighbor spins, as shown in Fig. 5. To avoid
over-parametrization, further in-plane interactions, inter-layer coupling
(which is in any case irrelevant for the in-plane dispersion to first order)
and anisotropy were assumed negligible, as discussed above. The 8-parameter
spin wave dispersion relation
for the fully saturated phase has been worked out in Ref. [15]. It contains
two distinct dispersion branches corresponding to two crystallographically
inequivalent V4+ sites:
$\hbar\omega_{\mathbf{q}}=\frac{A_{\mathbf{q}}+A^{\prime}_{\mathbf{q}}}{2}\pm\sqrt{\left(\frac{A_{\mathbf{q}}-A^{\prime}_{\mathbf{q}}}{2}\right)^{2}+|B_{\mathbf{q}}|^{2}}.$
(1)
Here
$\begin{aligned}
A^{\prime}_{\mathbf{q}}&=\tilde{h}-J_{1}^{\prime a}(1-\cos\mathbf{q}\mathbf{a}),\\
A_{\mathbf{q}}&=\tilde{h}-J_{1}^{a}(1-\cos\mathbf{q}\mathbf{a}),\\
2B_{\mathbf{q}}&=(J_{1}^{b}e^{i\mathbf{q}\mathbf{b}}+J_{1}^{\prime b}e^{-i\mathbf{q}\mathbf{b}})+(J_{2}^{+}e^{-i(\mathbf{q}\mathbf{a}-\mathbf{q}\mathbf{b})}+J_{2}^{\prime+}e^{i(\mathbf{q}\mathbf{a}-\mathbf{q}\mathbf{b})})+(J_{2}^{-}e^{i(\mathbf{q}\mathbf{a}+\mathbf{q}\mathbf{b})}+J_{2}^{\prime-}e^{-i(\mathbf{q}\mathbf{a}+\mathbf{q}\mathbf{b})}),\\
\tilde{h}&=g\mu_{B}\mu_{0}H-\tfrac{1}{2}(J_{1}^{b}+J_{1}^{\prime b}+J_{2}^{+}+J_{2}^{-}+J_{2}^{\prime+}+J_{2}^{\prime-}).
\end{aligned}$
Due to the corrugated character of the V4+ layers, each of these branches
gives rise to three additional “replicas”, similar to what was seen for the
zig-zag spin chains PbNi2V2O8 [18] and BaCu2Si2O7 [19]. Fortunately, for
momentum transfers in the $(h,k,0)$ plane, such as those explored in the
present experiment, only the two principal magnon branches are visible. The
downside is that any permutations of exchange constants that leave the
dispersion relation (1) intact ($J_{1}^{b}\leftrightarrow J_{1}^{\prime b}$,
$J_{2}^{+}\leftrightarrow J_{2}^{\prime+}$, $J_{2}^{-}\leftrightarrow
J_{2}^{\prime-}$ and/or $J_{1}^{a}\leftrightarrow J_{1}^{\prime a}$) cannot
be distinguished in the analysis of in-plane data.
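For reference, Eq. (1) is straightforward to evaluate numerically. The sketch below (numpy, with the fitted constants of Table 1 substituted) assumes a g-factor of 2 for V4+ and that the phases $\mathbf{q}\mathbf{a}$ and $\mathbf{q}\mathbf{b}$ are accumulated between neighboring V4+ sites, i.e. $\mathbf{q}\mathbf{a}=\pi h$ and $\mathbf{q}\mathbf{b}=\pi k$ for $\mathbf{q}=(h,k,0)$ in r.l.u. of the full cell; neither convention is spelled out above, but with these choices the lower-branch minimum falls at $(0,1/2,0)$, as found in Section IV:

```python
import numpy as np

# Fitted exchange constants from Table 1 (meV); a trailing "p" marks a prime
J1a, J1pa = -0.135, -0.614
J1b, J1pb = -0.314, -0.464
J2p, J2pp = 0.384, 0.039   # J2^+ and J2'^+
J2m, J2pm = 0.361, 0.181   # J2^- and J2'^-

MU_B = 5.788e-2  # Bohr magneton in meV/T
G = 2.0          # g-factor, assumed ~2 for V4+ (not quoted above)

def branches(h, k, H=9.0):
    """Two magnon branches of Eq. (1) at q = (h, k, 0) r.l.u., field H in T.

    Assumes q.a = pi*h and q.b = pi*k (phases between neighbouring sites).
    """
    qa, qb = np.pi * h, np.pi * k
    ht = G * MU_B * H - 0.5 * (J1b + J1pb + J2p + J2m + J2pp + J2pm)
    A_prime = ht - J1pa * (1 - np.cos(qa))
    A = ht - J1a * (1 - np.cos(qa))
    B = ((J1b * np.exp(1j * qb) + J1pb * np.exp(-1j * qb))
         + (J2p * np.exp(-1j * (qa - qb)) + J2pp * np.exp(1j * (qa - qb)))
         + (J2m * np.exp(1j * (qa + qb)) + J2pm * np.exp(-1j * (qa + qb)))) / 2
    mid = (A + A_prime) / 2
    gap = np.sqrt(((A - A_prime) / 2) ** 2 + np.abs(B) ** 2)
    return mid - gap, mid + gap

# Locate the lower-branch minimum on a grid over the zone
hh, kk = np.meshgrid(np.linspace(0, 1, 201), np.linspace(0, 1, 201))
lower, _ = branches(hh, kk)
i = np.unravel_index(np.argmin(lower), lower.shape)
print(f"minimum {lower[i]:.3f} meV at q = ({hh[i]:.2f}, {kk[i]:.2f}, 0)")
```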
The new data collected on MACS were analyzed together with the data
previously obtained on IN12 [10] in a single global fit. At every wave
vector, the energies and intensities of the two magnon branches were
numerically computed with the SpinW Matlab library [20] from the 8 exchange
parameters of the model. This was done for each of the 7 crystallites, taking
into account their
orientations and relative masses, resulting in a total of 14 observable modes.
Neutron intensities were modeled by numerically convoluting this result with
the energy-momentum resolution of the spectrometers, the latter calculated in
the Popovici approximation [21]. The computation was performed using the
ResLib software package [22]. For each of the two experiments we used separate
overall intensity prefactors and flat backgrounds. Thus, there are a total of
12 independent parameters in the intensity model.
Figure 5: Exchange parameters of the Heisenberg model used to analyze the
measured magnetic inelastic scattering in BaCdVO(PO4)2. The two colors of V4+
atoms correspond to two symmetry inequivalent sites.
Even in the 114Cd-enriched sample the estimated neutron penetration depth is
only about 15 mm. This results in intensity attenuations between 15% and 35%
due to absorption, depending on the scattering geometry and neutron energy.
An exact correction for absorption was unfeasible due to the irregular shape
of the sample and the unknown spatial distribution of the 7 crystallites.
Instead,
absorption effects were simply ignored. This approximation is acceptable since
the variation of attenuation is estimated to be no more than 10 % between
different data points.
The model was fit to the bulk of the experimental data from MACS and IN12
using a Levenberg-Marquardt least squares procedure. Randomly sampling the initial
parameter values consistently produced the same final fit result with good
convergence. In the best fit we obtain $\chi^{2}=3.05$. Considering the
numerous experimental complications and the global nature of the fit, the
degree of agreement is very good. The fitted exchange constants with 95$\%$
confidence intervals are listed in Table 1. Once again we note that these
values are valid only to within the above-mentioned permutations that leave
the dispersion intact.
The magnon dispersion relation computed from the obtained exchange constants
is represented by the white lines in Fig. 6. Blue lines are the contributions
of each individual crystallite. In Fig. 4 the solid red lines show the
results of the global fit, and the shaded areas are again the contributions
of individual crystallites. Considering the global nature of the fit, the
complex measured scan profiles are very well reproduced. Any remaining
differences may be due to a structured background (e.g., multi-phonon
scattering). The possibility that an additional strongly misaligned
crystallite was overlooked by our survey cannot be excluded either. However,
based on the present fit quality we can surmise that the relative weight of
the latter must be very small.
Figure 6: False color intensity plot of magnetic scattering in BaCdVO(PO4)2
shown for several representative slices of energy-momentum space. In all
cases the integration range along $h$ or $k$ is $\pm$0.1 (r.l.u.). The white
line is the dispersion relation obtained in a global fit to all collected
data, as described in the text. Semi-transparent blue lines are the
contributions of individual crystallites. The solid black lines separate data
from two different parts of the reciprocal space.

Bond | $J$ (meV) | Length (Å)
---|---|---
$J_{1}^{a}$ | $-0.135(3)$ | 4.676
$J_{1}^{\prime a}$ | $-0.614(4)$ | 4.486
$J_{1}^{b}$ | $-0.314(6)$ | 4.584
$J_{1}^{\prime b}$ | $-0.464(3)$ | 4.574
$J_{2}^{+}$ | $0.384(6)$ | 6.279
$J_{2}^{\prime+}$ | $0.039(7)$ | 6.300
$J_{2}^{-}$ | $0.361(5)$ | 6.292
$J_{2}^{\prime-}$ | $0.181(7)$ | 6.286
Table 1: Parameters of the Heisenberg Hamiltonian for BaCdVO(PO4)2 obtained
by fitting a spin wave theory model to the entire collected data set. The
labeling of the exchange parameters is as in Fig. 5, and the corresponding
bond distances are also shown. Within each unprimed/primed pair, the exchange
constants can only be assigned to specific crystallographic bonds modulo a
permutation, as explained in the text.
## IV Discussion and conclusion
As expected, BaCdVO(PO4)2 is not the simple $J_{1}-J_{2}$ square lattice
material that it was initially believed to be. Instead, it has significantly
alternating interactions along the $b$ direction, and also along the
diagonals. Understanding the structural origin of these variations is
challenging. Even for the higher-symmetry room temperature structure, the
effective exchange constants represent complex multi-atom superexchange
pathways involving distorted oxygen-phosphorus complexes [8]. This said, for
nearest-neighbor exchange, we can speculate that a particularly small value of
$J_{1}^{a}$ may be associated with the longest $4.676$ Å bond length (see
Table 1). A careful consideration of the 3-dimensional structure reveals that
this bond also features the strongest out-of-plane buckling. Structural
reasons for a particularly small $J_{2}^{{}^{\prime}+}$ are not as obvious.
Despite the variation, NN and NNN interactions are all ferromagnetic and
antiferromagnetic, respectively. A quantitative correspondence with the square
lattice model can be made by computing the ratio of mean values:
$\frac{\langle J_{2}\rangle}{\langle
J_{1}\rangle}=\frac{J_{2}^{+}+J_{2}^{-}+J_{2}^{{}^{\prime}+}+J_{2}^{{}^{\prime}-}}{J_{1}^{a}+J_{1}^{{}^{\prime}a}+J_{1}^{b}+J_{1}^{{}^{\prime}b}}=-0.63.$
(2)
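As a quick check, this ratio can be evaluated directly from the Table 1 values:

```python
J1 = [-0.135, -0.614, -0.314, -0.464]   # NN couplings (meV), Table 1
J2 = [0.384, 0.039, 0.361, 0.181]       # NNN couplings (meV), Table 1
print(round(sum(J2) / sum(J1), 2))      # -0.63, as in Eq. (2)
```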
The relative strength of the ferromagnetic interactions is actually larger
than suggested by the $J_{2}/J_{1}=-0.9$ estimate from powder studies [6],
meaning the system may be more frustrated than originally thought. It is also
considerably larger
than in the sister compound Pb2VO(PO4)2 where $\langle J_{2}\rangle/\langle
J_{1}\rangle=-2.74$ [11].
The minimum of the magnon dispersion computed using the exchange constants
listed in Table 1 is located at $\mathbf{q}_{\mathrm{min}}=(0,1/2,0)$. This
exactly corresponds to the propagation vector of the zero-field magnetic
structure in BaCdVO(PO4)2, which can thus be seen as a magnon condensate.
Correspondingly, the computed critical field of single-magnon instability is
$\mu_{0}H_{c}=3.92(3)$ T. This is consistent with the experimentally measured
field $\mu_{0}H_{c}=4.08(5)$ T, at which the $\mathbf{q}=(0,1/2,0)$ structure
collapses [10]. We conclude that the previously observed presaturation phase
between $\mu_{0}H_{c}$ and $\mu_{0}H_{\mathrm{sat}}\simeq 6.5$ T is an exotic
state from beyond the single-magnon BEC paradigm. As discussed in detail in
Refs. [3, 15], a spin nematic phase remains a strong candidate. While the
present measurement is carried out entirely outside that phase, we hope that
the newly obtained model Hamiltonian will help further refine calculations
such as those in Ref. [15], confirming this expectation.
###### Acknowledgements.
This work was supported by the Swiss National Science Foundation, Division II.
Access to MACS was provided by the Center for High Resolution Neutron
Scattering, a partnership between the National Institute of Standards and
Technology and the National Science Foundation under Agreement No.
DMR-1508249. V. B. thanks Florian Landolt for the fruitful discussions.
## References
* Shannon _et al._ [2006] N. Shannon, T. Momoi, and P. Sindzingre, Nematic order in square lattice frustrated ferromagnets, Phys. Rev. Lett. 96, 027213 (2006).
* Shindou and Momoi [2009] R. Shindou and T. Momoi, $SU(2)$ slave-boson formulation of spin nematic states in $S=\frac{1}{2}$ frustrated ferromagnets, Phys. Rev. B 80, 064410 (2009).
* Smerald _et al._ [2015] A. Smerald, H. T. Ueda, and N. Shannon, Theory of inelastic neutron scattering in a field-induced spin-nematic state, Phys. Rev. B 91, 174402 (2015).
* Ueda and Momoi [2013] H. T. Ueda and T. Momoi, Nematic phase and phase separation near saturation field in frustrated ferromagnets, Phys. Rev. B 87, 144417 (2013).
* Ueda [2015] H. T. Ueda, Magnetic Phase Diagram Slightly below the Saturation Field in the Stacked $J_{1}-J_{2}$ Model in the Square Lattice with the $J_{C}$ Interlayer Coupling, J. Phys. Soc. Jpn. 84, 023601 (2015).
* Nath _et al._ [2008] R. Nath, A. A. Tsirlin, H. Rosner, and C. Geibel, Magnetic properties of $\text{BaCdVO}{({\text{PO}}_{4})}_{2}$: A strongly frustrated spin-$\frac{1}{2}$ square lattice close to the quantum critical regime, Phys. Rev. B 78, 064422 (2008).
* Tsirlin _et al._ [2009] A. A. Tsirlin, B. Schmidt, Y. Skourski, R. Nath, C. Geibel, and H. Rosner, Exploring the spin-$\frac{1}{2}$ frustrated square lattice model with high-field magnetization studies, Phys. Rev. B 80, 132407 (2009).
* Tsirlin and Rosner [2009] A. A. Tsirlin and H. Rosner, Extension of the spin-$\frac{1}{2}$ frustrated square lattice model: The case of layered vanadium phosphates, Phys. Rev. B 79, 214417 (2009).
* Povarov _et al._ [2019] K. Y. Povarov, V. K. Bhartiya, Z. Yan, and A. Zheludev, Thermodynamics of a frustrated quantum magnet on a square lattice, Phys. Rev. B 99, 024413 (2019).
* Bhartiya _et al._ [2019] V. K. Bhartiya, K. Y. Povarov, D. Blosser, S. Bettler, Z. Yan, S. Gvasaliya, S. Raymond, E. Ressouche, K. Beauvois, J. Xu, F. Yokaichiya, and A. Zheludev, Presaturation phase with no dipolar order in a quantum ferro-antiferromagnet, Phys. Rev. Research 1, 033078 (2019).
* Bettler _et al._ [2019] S. Bettler, F. Landolt, O. M. Aksoy, Z. Yan, S. Gvasaliya, Y. Qiu, E. Ressouche, K. Beauvois, S. Raymond, A. N. Ponomaryov, S. A. Zvyagin, and A. Zheludev, Magnetic structure and spin waves in the frustrated ferro-antiferromagnet ${\mathrm{Pb}}_{2}\mathrm{VO}{({\mathrm{PO}}_{4})}_{2}$, Phys. Rev. B 99, 184437 (2019).
* Landolt _et al._ [2020] F. Landolt, S. Bettler, Z. Yan, S. Gvasaliya, A. Zheludev, S. Mishra, I. Sheikin, S. Krämer, M. Horvatić, A. Gazizulina, and O. Prokhnenko, Presaturation phase in the frustrated ferro-antiferromagnet $\mathrm{Pb}_{2}\mathrm{VO}$$\mathrm{(PO_{4})}_{2}$, Phys. Rev. B 102, 094414 (2020).
* Meyer _et al._ [1997] S. Meyer, B. Mertens, and H. Müller-Buschbaum, SrZnVO(PO4)2 and BaCdVO(PO4)2: Vanadylphosphates Related to but not Isotypic with the BaZnVO(PO4)2 Type., Z. Naturforsch. 52b, 985 (1997).
* Skoulatos _et al._ [2019] M. Skoulatos, F. Rucker, G. J. Nilsen, A. Bertin, E. Pomjakushina, J. Ollivier, A. Schneidewind, R. Georgii, O. Zaharko, L. Keller, C. Rüegg, C. Pfleiderer, B. Schmidt, N. Shannon, A. Kriele, A. Senyshyn, and A. Smerald, Putative spin-nematic phase in $\mathrm{BaCdVO}(\mathrm{PO}_{4})_{2}$, Phys. Rev. B 100, 014405 (2019).
* [15] A. Smerald, Magnon binding in $\mathrm{BaCdVO}(\mathrm{PO}_{4})_{2}$, arXiv:2003.12747 (2020).
* Coldea _et al._ [2002] R. Coldea, D. A. Tennant, K. Habicht, P. Smeibidl, C. Wolters, and Z. Tylczynski, Direct Measurement of the Spin Hamiltonian and Observation of Condensation of Magnons in the 2D Frustrated Quantum Magnet ${\mathrm{Cs}}_{2}{\mathrm{CuCl}}_{4}$, Phys. Rev. Lett. 88, 137203 (2002).
* Rodriguez _et al._ [2008] J. A. Rodriguez, D. M. Adler, P. C. Brand, C. Broholm, J. C. Cook, C. Brocker, R. Hammond, Z. Huang, P. Hundertmark, J. W. Lynn, N. C. Maliszewskyj, J. Moyer, J. Orndorff, D. Pierce, T. D. Pike, G. Scharfstein, S. A. Smee, and R. Vilaseca, MACS—a new high intensity cold neutron spectrometer at NIST, Measurement Science and Technology 19, 034023 (2008).
* Zheludev _et al._ [2000] A. Zheludev, T. Masuda, I. Tsukada, Y. Uchiyama, K. Uchinokura, P. Böni, and S.-H. Lee, Magnetic excitations in coupled haldane spin chains near the quantum critical point, Phys. Rev. B 62, 8921 (2000).
* Zheludev _et al._ [2001] A. Zheludev, M. Kenzelmann, S. Raymond, T. Masuda, K. Uchinokura, and S.-H. Lee, Spin dynamics in the quasi-one-dimensional ${S}=\frac{1}{2}$ antiferromagnet $\mathrm{BaCu}_{2}$$\mathrm{Si}_{2}\mathrm{O}_{7}$, Phys. Rev. B 65, 014402 (2001).
* Toth and Lake [2015] S. Toth and B. Lake, Linear spin wave theory for single-$Q$ incommensurate magnetic structures, J. Phys.: Condens. Matter 27, 166002 (2015).
* Popovici [1975] M. Popovici, On the resolution of slow-neutron spectrometers. IV. The triple-axis spectrometer resolution function, spatial effects included, Acta Crystallographica Section A 31, 507 (1975).
* [22] A. Zheludev, Reslib: Resolution library for 3-axis neutron spectroscopy, https://neutron.ethz.ch/Methods/reslib.html .
# Temporal-Relational CrossTransformers for Few-Shot Action Recognition
Toby Perrett, Alessandro Masullo, Tilo Burghardt, Majid Mirmehdi, and Dima Damen
Department of Computer Science, University of Bristol, UK
###### Abstract
We propose a novel approach to few-shot action recognition, finding
temporally-corresponding frame tuples between the query and videos in the
support set. Distinct from previous few-shot works, we construct class
prototypes using the CrossTransformer attention mechanism to observe relevant
sub-sequences of all support videos, rather than using class averages or
single best matches. Video representations are formed from ordered tuples of
varying numbers of frames, which allows sub-sequences of actions at different
speeds and temporal offsets to be compared. Code is available at
https://github.com/tobyperrett/TRX.
Our proposed Temporal-Relational CrossTransformers (TRX) achieve state-of-the-
art results on few-shot splits of Kinetics, Something-Something V2 (SSv2),
HMDB51 and UCF101. Importantly, our method outperforms prior work on SSv2 by a
wide margin (12%) due to the its ability to model temporal relations. A
detailed ablation showcases the importance of matching to multiple support set
videos and learning higher-order relational CrossTransformers.
Figure 1: For a 3-way 5-shot example, pairs of temporally-ordered frames in
the query (red, green, blue) are compared against all pairs in the support set
(max attention with corresponding colour). Aggregated evidence is used to
construct query-specific class prototypes. We show a query from the SSv2
class “Failing to put something into something because it does not fit”,
correctly recognised by our method.
## 1 Introduction
Few-shot methods aim to learn new classes with only a handful of labelled
examples. Success in few-shot approaches for image classification [11, 19, 8]
and object recognition [26, 15] has triggered recent progress in few-shot
video action recognition [31, 32, 3, 27, 4]. This is of particular interest
for fine-grained actions where collecting enough labelled examples proves
challenging [5, 12, 6].
Recent approaches that achieve state-of-the-art performance [3, 27, 4]
acknowledge the additional challenges in few-shot video recognition, due to
varying action lengths and temporal dependencies. However, these match the
query video (the video to be recognised) to the single best video in the
support set (the few labelled examples per class), e.g. [27], or to the
average across all support set videos belonging to the same class [3, 4].
Inspired by part-based few-shot image classification [8], we consider that,
within a few-shot regime, it is advantageous to compare sub-sequences of the
query video to sub-sequences of all support videos when constructing class
prototypes. This better accumulates evidence, by matching sub-sequences at
various temporal positions and shifts.
We propose a novel approach to few-shot action recognition, which we term
Temporal-Relational CrossTransformers (TRX). A query-specific class prototype
is constructed by using an attention mechanism to match each query sub-
sequence against all sub-sequences in the support set, and aggregating this
evidence. By performing the attention operation over temporally-ordered sub-
sequences rather than individual frames (a concept similar to that in many-
shot action-recognition works, e.g. [29, 10]), we are better able to match
actions performed at different speeds and in different parts of videos,
allowing distinction between fine-grained classes. Fig. 1 shows an example of
how a query video attends to multiple support set videos using temporally-
ordered tuples.
Our key contributions can be summarised as follows:
* •
We introduce a novel method, called the Temporal-Relational CrossTransformer
(TRX), for few-shot action recognition.
* •
We combine multiple TRXs, each operating over a different number of frames, to
exploit higher-ordered temporal relations (pairs, triples and quadruples).
* •
We achieve state-of-the-art results on the few-shot benchmarks for Kinetics
[5], Something-Something V2 (SSv2) [12], HMDB51 [16] and UCF101 [21].
* •
We perform a detailed ablation, demonstrating how TRX utilises multiple videos
from the support set, of different lengths and temporal shifts. Results show
that using tuple representations improves over single-frame ones by 5.8% on
SSv2, where temporal ordering proves critical.
## 2 Related Work
Few-shot classification methods have traditionally fallen into one of three
categories: generative [28, 9], adaptation-based [11, 17] and metric-based
[24, 20]. Generative methods use examples from the target task to generate
additional task-specific training data with which to fine-tune a network.
Adaptation-based methods (e.g. MAML [11]) aim to find a network initialisation
which can be fine-tuned with little data to an unseen target task. Metric-
based methods (e.g. Prototypical [20] or Matching [24] Networks) aim to find a
fixed feature representation in which target tasks can be embedded and
classified.
Recent works which perform well on few-shot image classification have found
that it is preferable to use a combination of metric-based feature
extraction/classification combined with task-specific adaptation [19, 2]. Most
relevant to this paper, the recently introduced CrossTransformer [8] uses an
attention mechanism to align the query and support set using image patch co-
occurrences. This is used to create query-specific class prototypes before
classification within a prototypical network [20]. Whilst this is effective
for few-shot image classification, one potential weakness is that relative
spatial information is not encoded. For example, it would not differentiate
between a bicycle and a unicycle. This distinction is typically not needed in
the datasets [22] tested in [8], where independent part-based matching is
sufficient to distinguish between the classes.
Few-shot video action recognition methods have had success with a wide range
of approaches, including memory networks of key frame representations [31, 32]
and adversarial video-level feature generation [9]. Recent works have
attempted to make use of temporal information. Notably, [3] aligns variable
length query and support videos before calculating the similarity between the
query and support set. [27] combines a variety of techniques, including
spatial and temporal attention to enrich representations, and jigsaws for
self-supervision. [4] achieves state-of-the-art performance by calculating
query to support-set frame similarities. They then enforce temporal
consistency between a pair of videos by monotonic temporal ordering. Their
method can be thought of as a differentiable generalisation of dynamic time
warping. Note that the above works either search for the single support video
[27] or average representation of a support class [3, 4] that the query is
closest to. A concurrent work to ours attempts to resolve this through query-
centred learning [33]. Importantly, all prior works perform attention
operations on a frame level, as they tend to use single-frame representations.
Compared to all prior few-shot action recognition methods, our proposed method
attends to all support set videos, using temporal-relational representations
from ordered tuples of frames, sampled from the video. By attending to sub-
sequences, our method matches actions at different speeds and temporal shifts.
Importantly, we use a combination of different CrossTransformers to match
tuples of different cardinalities, allowing for higher-order temporal
representations. We next describe our method in detail.
## 3 Method
Figure 2: Illustration of the Temporal-Relational CrossTransformer (TRX) on a
2-way 2-shot problem. First, pair and triple representations of the query
and support set videos are constructed. These are concatenated representations
of pairs/triplets of frames (sampled from the video), temporally-ordered. Two
temporal CrossTransformers, $T^{2}$ and $T^{3}$, then construct separate class
prototypes for each representation (pairs and triplets) using separate query
$\Upsilon$, key $\Gamma$ and value $\Lambda$ linear maps for each transformer.
This produces a query-specific class prototype, one per CrossTransformer. The
query representation is also passed through the value linear maps $\Lambda$,
and distances are calculated to the prototypes. The distances are averaged,
and the query is recognised as belonging to the closest class. Details are in
Section 3.
We propose a method for few-shot action recognition that compares an ordered
sub-sequence of frames (referred to as a tuple) to all sub-sequences in the
support set through multiple CrossTransformer attention modules. This allows
the same query video to match to tuples from
several support set videos. After stating the problem definition in Section
3.1, for ease of understanding our TRX method, we start from a simplified
version, building in complexity and generality up to the full method, which is
illustrated in Fig. 2.
In Section 3.2, we consider a single ordered pair of frames, sampled from the
query video. We propose a temporal CrossTransformer to compare this query pair
to ordered pairs of frames from videos in the support set. This allows the
construction of ‘query pair’-specific class prototypes. We then expand to
multiple ordered pairs of frames from the query video. Finally, in Section
3.3, motivated by the need to model more complex temporal relationships, we
generalise from pairs to tuples. We model a separate temporal CrossTransformer
for each tuple cardinality to construct query-cardinality-specific class
prototypes. These are combined to classify the query video, based on the
distances to all class prototypes.
### 3.1 Problem Formulation
In few-shot video classification, inference aims to classify an unlabelled
query video into one of several classes, each represented by a few labelled
examples unseen in training, referred to as the ‘support set’. In this paper,
we focus on $K$-shot where $K>1$, i.e. the support set contains more than one
video. Similar to prior works [24, 11, 4, 27], we follow episodic training,
i.e. random sampling of few-shot tasks from the training set. For each
episode, we consider a $C$-way $K$-shot classification problem. Let
${Q=\\{q_{1},\cdots,q_{F}\\}}$ be a query video with $F$ uniformly sampled
frames. The goal is to classify $Q$ into one of the classes $c\in C$. For the
class $c$, its support set $S^{c}$ contains $K$ videos, where the $k^{th}$
video is denoted $S^{c}_{k}=\\{s^{c}_{k1},\cdots,s^{c}_{kF}\\}$. (We assume
videos are uniformly sampled to be of the same length $F$ for simplicity.
Alternatively, we could set $F$ to be the maximum video length and non-existent
pairs could be masked out in the attention matrix.)
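For illustration, one episode of this setup can be sampled as in the following sketch (names such as `videos_by_class` are ours, not part of the paper):

```python
import random

def sample_episode(videos_by_class, C=5, K=5):
    """Sample one C-way K-shot task: K support videos per class plus a query."""
    classes = random.sample(sorted(videos_by_class), C)
    support = {c: random.sample(videos_by_class[c], K) for c in classes}
    query_class = random.choice(classes)
    # the query must not be one of its own class's support videos
    remaining = [v for v in videos_by_class[query_class]
                 if v not in support[query_class]]
    query = random.choice(remaining)
    return support, query, query_class
```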
### 3.2 Temporal CrossTransformer
We consider the temporal relation of two frames sampled from a video to
represent the action, as actions are typically changes in appearance and are
poorly represented by a single frame. We thus sample a pair of ordered frames
from the query video with indices $p=(p_{1},p_{2})$, where ${1\leq
p_{1}<p_{2}\leq F}$, and define the query representation as:
$Q_{p}=[\Phi(q_{p_{1}})+\text{PE}(p_{1}),\Phi(q_{p_{2}})+\text{PE}(p_{2})]\in\mathbb{R}^{2\times D},$ (1)
where $\Phi:\mathbb{R}^{H\times W\times 3}\mapsto\mathbb{R}^{D}$ is a
convolutional network to obtain a $D$-dimensional embedding of an input frame,
and $\text{PE}(\cdot)$ is a positional encoding given a frame index [23].
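A minimal PyTorch sketch of Eq. 1 (assuming an even embedding dimension $D$ and the standard sinusoidal positional encoding of [23]; `frame_feats` is our name for the $F\times D$ matrix of backbone outputs $\Phi(q_{i})$):

```python
import torch

def positional_encoding(idx, D):
    """Sinusoidal positional encoding for frame index idx (assumes even D)."""
    pe = torch.zeros(D)
    freq = 10000 ** (torch.arange(0, D, 2).float() / D)
    pe[0::2] = torch.sin(idx / freq)
    pe[1::2] = torch.cos(idx / freq)
    return pe

def pair_representation(frame_feats, p1, p2):
    """Eq. 1: PE-augmented embeddings of an ordered frame pair (p1 < p2)."""
    D = frame_feats.shape[1]
    return torch.stack([frame_feats[p1] + positional_encoding(p1, D),
                        frame_feats[p2] + positional_encoding(p2, D)])  # 2 x D
```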
We compare the query representation $Q_{p}$ to all possible pair
representations from the support set videos, allowing it to match actions at
various speeds and locations within the support set. We define the set of all
possible pairs as
$\Pi=\\{(n_{1},n_{2})\in\mathbb{N}^{2}:1\leq n_{1}<n_{2}\leq F\\}.$ (2)
A single frame-pair representation of video $k$ in the support set of class
$c$ with respect to the ordered pair of indices ${m=(m_{1},m_{2})\in\Pi}$ is
$S^{c}_{km}=[\Phi(s^{c}_{km_{1}})+\text{PE}(m_{1}),\Phi(s^{c}_{km_{2}})+\text{PE}(m_{2})]\in\mathbb{R}^{2\times
D}.$ (3)
The set of all pair representations in the support set for class $c$ is
$\mathbf{S}^{c}=\\{S^{c}_{km}:(1\leq k\leq K)\land(m\in\Pi)\\}.$ (4)
We propose a temporal CrossTransformer $T$, based on the spatial
CrossTransformer [8], but adapted from image patches to frame pairs, to
calculate query-specific class prototypes. The CrossTransformer includes query
$\Upsilon$, key $\Gamma$ and value $\Lambda$ linear maps, which are shared
across classes:
$\Upsilon,\Gamma:\mathbb{R}^{2\times D}\mapsto\mathbb{R}^{d_{k}}\quad\text{and}\quad\Lambda:\mathbb{R}^{2\times D}\mapsto\mathbb{R}^{d_{v}}.$ (5)
The correspondence between the query pair and pair $m$ of support video $k$ in
class $c$ is calculated as
$a^{c}_{kmp}=L(\Gamma\cdot S^{c}_{km})\cdot L(\Upsilon\cdot Q_{p}),$ (6)
where $L$ is a standard layer normalisation [1]. We apply the Softmax
operation to acquire the attention map
$\tilde{a}^{c}_{kmp}=\frac{\exp(a^{c}_{kmp})/\sqrt{d_{k}}}{\sum_{l,n}\exp(a^{c}_{lnp})/\sqrt{d_{k}}}.$ (7)
This is then combined with value embeddings of the support set
$\mathbf{v}^{c}_{km}{=}\Lambda\cdot S^{c}_{km}$, in order to compute the
query-specific prototype with respect to the query $Q_{p}$,
$\mathbf{t}^{c}_{p}=\sum_{km}\tilde{a}^{c}_{kmp}\mathbf{v}^{c}_{km}.$ (8)
Now that we have a query-specific class prototype, we calculate the embedding
of the query $Q_{p}$ with the value linear map such that
$\mathbf{u}_{p}{=}\Lambda\cdot Q_{p}$. This ensures that the query and support
representations undergo the same operations. The CrossTransformer $T$ then
computes the distance between the query pair $Q_{p}$ and the support set
$\mathbf{S}^{c}$ as
$T(Q_{p},\mathbf{S}^{c})=\|\mathbf{t}^{c}_{p}-\mathbf{u}_{p}\|.$ (9)
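Equations 5-9 can be condensed into the following illustrative PyTorch module (a sketch under assumed tensor shapes, not the authors' released code; `support_reps` stacks the $K\cdot|\Pi|$ pair representations of $\mathbf{S}^{c}$ for one class, each flattened to dimension $2D$):

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalCrossTransformer(nn.Module):
    def __init__(self, tuple_dim, d_k=1152, d_v=1152):
        super().__init__()
        self.upsilon = nn.Linear(tuple_dim, d_k)  # query map Upsilon
        self.gamma = self.upsilon                 # key map shares weights, following [8]
        self.lam = nn.Linear(tuple_dim, d_v)      # value map Lambda
        self.norm = nn.LayerNorm(d_k)             # L in Eq. 6
        self.d_k = d_k

    def forward(self, query_rep, support_reps):
        # Eq. 6: layer-normalised dot products between query and support pairs
        a = self.norm(self.gamma(support_reps)) @ self.norm(self.upsilon(query_rep))
        att = F.softmax(a / math.sqrt(self.d_k), dim=0)             # Eq. 7 (scaled)
        proto = (att.unsqueeze(1) * self.lam(support_reps)).sum(0)  # Eq. 8
        u = self.lam(query_rep)                                     # value-mapped query
        return torch.norm(proto - u)                                # Eq. 9
```

Averaging this distance over all query pairs $p\in\Pi$ then yields Eq. 10.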
Finding a single pair that best represents the action $Q$ is a difficult
problem. Instead, we consider multiple pairs of frames from the query video,
such that the query representation is defined as
$\mathbf{Q}=\\{Q_{p}:p\in\Pi\\}$. In Section 4.3.6, we compare exhaustive
pairs to random pairs of frames.
To calculate the distance between $\mathbf{Q}$ and $\mathbf{S}^{c}$, we
accumulate the distances from all query pairs, i.e.
$T(\mathbf{Q},\mathbf{S}^{c})=\frac{1}{|\Pi|}\sum_{p\in\Pi}T(Q_{p},\mathbf{S}^{c}).$ (10)
During training, negative query-class distances $T$ are passed as logits to a
cross-entropy loss. During inference, the query $\mathbf{Q}$ is assigned the class
of the closest query-specific prototype, i.e.
$\operatorname*{arg\,min}_{c}T(\mathbf{Q},\mathbf{S^{c}})$. Note that the Softmax
operation in Eq. 7 is performed separately for each query pair $Q_{p}$
(matches are scaled separately for each $p$). Our Temporal CrossTransformer
thus accumulates matches between pair representations of the query video
(hence the term temporal) and all pairs of frames in the support set.
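A sketch of this training/inference rule (here `distances` holds $T(\mathbf{Q},\mathbf{S}^{c})$ for each of the $C$ classes; the function name is ours):

```python
import torch
import torch.nn.functional as F

def loss_and_prediction(distances, target_class):
    """Negative query-class distances act as logits for cross-entropy (training);
    the predicted class is the one with the smallest distance (inference)."""
    logits = -distances.unsqueeze(0)                      # shape (1, C)
    loss = F.cross_entropy(logits, torch.tensor([target_class]))
    prediction = int(torch.argmin(distances))
    return loss, prediction
```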
### 3.3 Temporal-Relational CrossTransformers
A shortcoming of the above method is that an ordered pair of frames might not
be the best representation of an action, particularly when fine-grained
distinctions between the classes are required. Consider two classes: “picking
an object up” vs “moving an object”. To discern between these two actions, a
method would require at least three frames - i.e. whether the object is put
down eventually or remains in hand. Similarly, full-body actions such as
“jumping” vs “tumbling” require more than two frames to distinguish. This
highlights the need to explore higher-order
temporal representations.
We extend the Temporal CrossTransformer to a Temporal-Relational
CrossTransformer (TRX) by considering a sub-sequence of ordered frames of any
length. We use $\omega$ to indicate the length, or cardinality, of a tuple.
For example, $\omega{=}2$ for a pair, $\omega{=}3$ for a triple. We generalise
to all possible tuples for any $\omega$, such that
$\Pi^{\omega}=\\{(n_{1},...,n_{\omega})\in\mathbb{N}^{\omega}:1\leq n_{1}<\cdots<n_{\omega}\leq F\\}.$ (11)
The associated query representation with respect to the tuple with indices
$p=(p_{1},...,p_{\omega})\in{\Pi^{\omega}}$, generalising the pair
representation in Eq. 1, is
$Q_{p}^{\omega}=[\Phi(q_{p_{1}})+\text{PE}(p_{1}),...,\Phi(q_{p_{\omega}})+\text{PE}(p_{\omega})]\in\mathbb{R}^{\omega\times
D}.$ (12)
This is done similarly for the support set representations.
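The tuple index sets $\Pi^{\omega}$ are simply all strictly increasing index combinations; a one-line sketch (for $F{=}8$: 28 pairs, 56 triples, 70 quadruples, matching the tuple counts reported later in Tab. 2):

```python
from itertools import combinations

def ordered_tuples(F, omega):
    """Pi^omega from Eq. 11: ordered omega-tuples of frame indices."""
    return list(combinations(range(F), omega))

# For F = 8 frames: 28 pairs, 56 triples, 70 quadruples.
assert [len(ordered_tuples(8, w)) for w in (2, 3, 4)] == [28, 56, 70]
```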
We define the set of cardinalities as $\Omega$. For example, pairs, triples
and quadruples of frames would be $\Omega{=}\\{2,3,4\\}$. We use one TRX per
cardinality, as parameters can only be defined for a known input
dimensionality (e.g. Eq. 5). Each TRX $T^{\omega}$ includes query, key and
value linear maps corresponding to the dimensionality of $\omega$:
$\Upsilon^{\omega},\Gamma^{\omega}:\mathbb{R}^{\omega\times D}\mapsto\mathbb{R}^{d_{k}}\quad\text{and}\quad\Lambda^{\omega}:\mathbb{R}^{\omega\times D}\mapsto\mathbb{R}^{d_{v}}.$ (13)
Each $T^{\omega}$ outputs the distance between the query and support set with
respect to tuples of cardinality $\omega$. We then accumulate distances from
the various TRXs, such that:
$\mathbf{T}^{\Omega}(Q,\mathbf{S}^{c})=\sum_{\omega\in\Omega}T^{\omega}(\mathbf{Q}^{\omega},\mathbf{S}^{c\omega}).$
(14)
Note that averaging the outputs from each TRX first (as in Eq. 10 for
$\omega{=}2$) balances the varying number of tuples for each $\omega$. As with
a single TRX, during training, the negative distance for each class is passed
as the logit to a cross-entropy loss. During inference, the query is assigned
the class whose support set is closest with respect to $\mathbf{T}^{\Omega}$.
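A sketch of this combination (illustrative only; `query_reps` and `support_reps` are hypothetical dictionaries keyed by cardinality $\omega$, and `transformers` maps $\omega$ to the module $T^{\omega}$):

```python
import torch

def trx_distance(query_reps, support_reps, transformers):
    """Eq. 14: each T^omega averages over its own tuple set (Eq. 10),
    and the per-cardinality distances are summed."""
    total = torch.tensor(0.0)
    for omega, T in transformers.items():
        per_tuple = torch.stack([T(q, support_reps[omega])
                                 for q in query_reps[omega]])
        total = total + per_tuple.mean()
    return total
```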
##### Summary:
TRX in its complete form considers a set of cardinalities $\Omega$. For each
$\omega\in\Omega$, different linear maps of corresponding dimensions are
trained (Eq. 13). These are trained jointly using a single cross-entropy loss,
that uses the summed distances (Eq. 14), where the gradient is backpropagated
for each $\omega$ (Eq. 10). The gradient is accumulated from each TRX, through
the tuple representations, and backpropagated through a convolutional network
to update frame representations. TRX is trained end-to-end with shared
backbone parameters for all $\omega\in\Omega$, and all tuples.
## 4 Experiments
### 4.1 Setup
Datasets. We evaluate our method on four datasets. The first two are Kinetics
[5] and Something-Something V2 (SSv2) [12], which have been frequently used to
evaluate few-shot action recognition in previous works [31, 3, 27, 4]. SSv2,
in particular, has been shown to require temporal reasoning (e.g. [30, 18,
14]). We use the few-shot splits for both datasets proposed by the authors of
[31, 32], which are publicly accessible at https://github.com/ffmpbgrnn/CMN. In
this setup, 100 videos from each of 100 classes are selected, with 64, 12 and 24
classes used for train/val/test. We also provide results for the few-shot
split of SSv2 used by [4] which uses 10x more videos per class in the training
set. Additionally, we evaluate our method on HMDB51 [16] and UCF101 [21],
using splits from [27].
Evaluation. TRX particularly benefits from having multiple videos in the
support set (few-shot rather than one-shot). We thus evaluate our method on the
standard 5-way 5-shot benchmark, and report average results over 10,000 tasks
randomly selected from the test sets. For completeness, we provide an X-shot
ablation, including one-shot results, in the ablations and appendices.
Baselines. We give comparisons against four seminal and recent works [31, 3,
27, 4], which reported state-of-the-art results in few-shot video action recognition.
These have been discussed in Section 2.
Implementation details. We train our TRX model, with all tuple cardinalities
and frame-level backbones, end-to-end. We use a ResNet-50 backbone [13] with
ImageNet pre-trained weights [7], so we are directly comparable to previous
methods [31, 32, 4] ([27] uses Conv-3D features). We initialise TRX
parameters randomly and set $d_{k}{=}d_{v}{=}1152$. The last 2048 dimensional
layer from the ResNet forms the frame-level input to the TRX. These are
concatenated into tuples, depending on the length $\omega$. Following [8], the
query and key linear maps of each transformer share weights, to encourage
similarity matching. Videos are re-scaled to height 256 and $F{=}8$ frames are
sampled uniformly as in [25]. They are augmented with random horizontal
flipping and 224x224 crops. For testing, just a centre crop is used.
We use SGD with a learning rate of 0.001, training for 10,000 tasks, which
takes around 3 hours (apart from the larger SSv2∗ split from [4], which uses
75,000 tasks). These hyperparameters were determined using the validation set.
We train TRX on four NVidia 2080Ti GPUs. Due to the number of backbones (e.g.
48 ResNet-50 backbones when considering a 5-shot support set and a query video,
with 8 frames each), we can only fit a single task in memory. We thus average
gradients and backpropagate once every 16 iterations.
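The gradient-accumulation scheme just described amounts to the following (a self-contained toy sketch with stand-in model and data, not the actual training code):

```python
import torch
import torch.nn.functional as F

model = torch.nn.Linear(8, 5)      # stand-in for the TRX network
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)
accum = 16                         # backpropagate once every 16 tasks

optimizer.zero_grad()
for i in range(160):
    x, y = torch.randn(1, 8), torch.randint(0, 5, (1,))  # stand-in task
    loss = F.cross_entropy(model(x), y) / accum  # scale so gradients average
    loss.backward()                # gradients accumulate in .grad buffers
    if (i + 1) % accum == 0:
        optimizer.step()
        optimizer.zero_grad()
```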
### 4.2 Results
Method | Kinetics | SSv2† | SSv2∗ | HMDB | UCF
---|---|---|---|---|---
CMN [31] | 78.9 | - | - | - | -
CMN-J [32] | 78.9 | 48.8 | - | - | -
TARN [3] | 78.5 | - | - | - | -
ARN [27] | 82.4 | - | - | 60.6 | 83.1
OTAM [4] | 85.8 | - | 52.3 | - | -
TRX (Ours) | 85.9 | 59.1 | 64.6 | 75.6 | 96.1
Table 1: Results on 5-way 5-shot benchmarks of Kinetics (split from [32]),
SSv2 (†: split from [32], ∗: split from [4]), HMDB51 and UCF101 (both splits
from [27]).
(a) SSv2†: Throwing something in the air and letting it fall.
(b) Kinetics: Cutting watermelon.
(c) SSv2† (False Positive): Query GT: Failing to put S into S because S does
not fit, Support Set: putting something upright on the table.
Figure 3: Examples for TRX with $\Omega{=}\\{2,3\\}$. Colour-matching pairs
(top) and triplets (bottom) are shown between the query and support set videos
from one class. Three tuples are highlighted in each subfigure (red, green and
blue). This figure demonstrates maximum attention matches to several videos in
the support set, at different relative and absolute positions.
Table 1 shows our comparative results. TRX outperforms prior work on the four
datasets. On the most challenging dataset (SSv2), TRX outperforms prior work
by a wide margin (12% and 10% on different splits). The large improvement is
found on SSv2 because TRX is particularly beneficial when temporally ordered
tuples can assist the discrimination between classes. It also outperforms
prior work on HMDB51 and UCF101 by 15% and 13% respectively. On Kinetics, it
exceeds the state-of-the-art (by 0.1%). Kinetics is more of an appearance-based
dataset when used as a few-shot benchmark, where ordering is less important and
a single-frame representation can be sufficient. We ablate this in
Section 4.3.2.
Figure 3 shows qualitative results, highlighting tuple matches between the
query and support set for $\Omega{=}\\{2,3\\}$. For each subfigure, we show
query pairs (top) and triplets (bottom) with their corresponding tuples (same
colour) in the support set. For example, the red pair of frames in the first
example (frames 1 and 2) gets the maximum attention when compared to the
second support set video (frames 2 and 3). We select three tuples to highlight
in each case. The figure shows that tuples match to different videos in the
support set, as well as tuples of varying positions and frame differences. A
failure case (Fig. 3c) matches pair/triplet frames from the query “failing to
put something into something because it doesn’t fit”, with pairs/triplets of
the support set class “put something upright on the table”. In each example,
the putting action is correctly matched, but the query is closest to the wrong
prototype.
### 4.3 Ablations
Our motivation in proposing TRX is the importance of representing both the
query and the support set videos by tuples of ordered frames, and that class
prototypes should be constructed from multiple support set videos. We showcase
this motivation experimentally through several ablations. We specifically
evaluate: (4.3.1) the impact of $\Omega$, (4.3.2) the importance of ordered
frames in the tuple, (4.3.3) the importance of multiple videos in the support
set, and (4.3.4) whether tuples at various locations and frame positions are
being matched within TRX. Additionally, (4.3.5) we compare performance and
runtime as the number of sampled frames changes, and (4.3.6) compare
exhaustive to random tuples, showcasing the potential to compress TRX models
without a significant drop in performance. We primarily use the large-scale
datasets Kinetics and SSv2† for these ablations using the splits from [31,
32].
#### 4.3.1 TRX with different $\Omega$ values
Cardinalities | Num tuples | Kinetics | SSv2†
---|---|---|---
$\Omega{=}\\{1\\}$ | - | 85.2 | 53.3
$\Omega{=}\\{2\\}$ | 28 | 85.0 | 57.8
$\Omega{=}\\{3\\}$ | 56 | 85.6 | 58.8
$\Omega{=}\\{4\\}$ | 70 | 84.5 | 58.9
$\Omega{=}\\{2,3\\}$ | 84 | 85.9 | 59.1
$\Omega{=}\\{2,4\\}$ | 98 | 84.4 | 58.4
$\Omega{=}\\{3,4\\}$ | 126 | 85.3 | 59.1
$\Omega{=}\\{2,3,4\\}$ | 154 | 85.3 | 58.9
Table 2: Comparing all values of $\Omega$ for TRX, noting the number of tuples
for each model, given by $\sum_{\omega\in\Omega}|\Pi^{\omega}|$.
In our comparative analysis in Tab. 1, we reported results using
$\Omega{=}\\{2,3\\}$. This is the combined TRX of pair and triplet frames
demonstrated in Fig. 2. We now evaluate each cardinality
$\omega\in\\{1,2,3,4\\}$ independently, as well as all their combinations, on
both datasets.
In Tab. 2, results demonstrate a significant improvement on SSv2 when moving
from single-frame comparisons to pair comparisons (+4.5%). The performance
increases further for triplets (+1.0%) and only marginally again for
quadruples (+0.1%). Combining two CrossTransformers $\Omega{=}\\{2,3\\}$
performs best. Using all cardinalities $\Omega{=}\\{2,3,4\\}$ results in a
slight drop in performance (-0.2%). Comparatively, differences are smaller on
Kinetics, and moving to quadruples drops the performance significantly (-1.4%)
compared to the best TRX combination $\Omega{=}\\{2,3\\}$.
The improvement using TRX with the multiple cardinalities $\Omega{=}\\{2,3\\}$
over frame-based comparisons ($\Omega{=}\\{1\\}$) is demonstrated per-class in
Fig. 4. For SSv2, some classes see little improvement (e.g. “scooping
something up with something”, “opening something”), whereas others see a
greater than 10% improvement (e.g. “pretending to take something from
somewhere”, “putting something next to something”). Aligned with the overall
results on Kinetics, Fig. 4 shows modest improvements per-class, including
a marginal drop in some classes.
Figure 4: Class improvement using tuples ($\Omega{=}\\{2,3\\}$) compared to
single frames ($\Omega{=}\\{1\\}$) for SSv2† and Kinetics.
#### 4.3.2 The impact of ordered tuples
Method | Kinetics | SSv2†
---|---|---
$\Omega{=}\\{2,3\\}$ order reversed | 85.9 | 51.3
$\Omega{=}\\{2,3\\}$ | 85.9 | 59.1
Table 3: Results assess the importance of temporal ordering. When the tuple
orders are reversed for the query video, a large drop is observed for SSv2†,
but not for Kinetics.
Up to this point, we have made the assumption that tuples should be temporally
ordered to best represent actions. We evaluate the extreme scenario, where
frames in the support set are temporally ordered, but frames in the query take
the reverse order during inference only. Table 3 shows a large drop for the
reversed query sets on SSv2 (-7.8%). Supporting our prior observation that it
is more an appearance-based dataset, no drop is observed for Kinetics.
Figure 5: Comparing CMN [32] results to TRX for X-shot 5-way, for $1\leq X\leq
5$ on SSv2†. TRX clearly benefits from increasing the number of videos in
the support set, both for $\Omega{=}\\{1\\}$ and using two CrossTransformers
$\Omega{=}\\{2,3\\}$.
(a) SSv2†, $\Omega{=}\\{2\\}$. (b) SSv2†, $\Omega{=}\\{4\\}$.
(c) Kinetics, $\Omega{=}\\{2\\}$. (d) Kinetics, $\Omega{=}\\{4\\}$.
Figure 6: Percentage of queries that match a given number of support videos
per class, with a max attention value, for SSv2† and Kinetics. True/False
Positive/Negative query percentages are shown for $\Omega{=}\\{2\\}$ and
$\Omega{=}\\{4\\}$.
#### 4.3.3 Matching to multiple support set videos
Our motivation for using CrossTransformers is that query tuples would match to
tuples from multiple support set videos in order to create the query-specific
class prototype. Note that this is not regularised during training - there is
no encouragement to use more than one support set video.
Figure 5 ablates TRX ($\Omega{=}\\{1\\}$ and $\Omega{=}\\{2,3\\}$) for the
number of videos in the support set per class. We increase this from 1-shot to
5-shot reporting the performance for each on SSv2, as well as comparative
results from CMN [32]. Whilst all methods perform similarly for 1-shot, TRX
significantly increases the margin over the CMN baseline as the number of
shots increases. For our proposed model $\Omega{=}\\{2,3\\}$, we report
improvements of +3.9%, +7.3%, +7.9% and +10.3% for 2-, 3-, 4- and 5-shot,
respectively. Note that using a single-frame representation also improves
over CMN, by a smaller but significant margin. This ablation showcases TRX’s
ability to utilise tuples from the support set as the number of videos
increases.
To analyse how many support videos are used, we train TRX with pairs
($\Omega{=}\\{2\\}$) and quadruples ($\Omega{=}\\{4\\}$) on SSv2 and Kinetics.
For each query tuple, we find the support set tuple with the maximum attention
value. We then count the number of support set videos per class which contain
at least one maximal match, and average over all test tasks. Figure 6 presents
the results for true and false, positive and negative, results. The figure
demonstrates that TRX successfully matches the query to tuples from multiple
videos in the support set. Most queries ($>$ 50%) match to 2-3 videos in the
support set. Very few queries match to all 5 videos in the support set,
particularly for higher-cardinality tuples. A similar distribution is seen for
both datasets; however, for SSv2, more true positive queries are matched to a
single video in the support set.
Figure 7: Summed attention over the SSv2† test set, showing how query pairs
(rows) match to support pairs (columns) from the TRX $\Omega{=}\\{2,3\\}$.
Numbers in red show the distance between the frames in the pair.
#### 4.3.4 Visualising tuple matches
In addition to matching multiple support set videos, we visualise the tuple
matches between the queries and the support set. Given a query tuple (row) and
a support set tuple (col), we sum the attention values over the test set, and
then normalise per row. Fig. 7 shows the summed attention values between all
sets of pairs. While query pairs match frequently to corresponding support set
pairs (i.e. same frame positions) in the support set, pairs are also matched
to shifted locations (e.g. [1,2] with [2,4]) as well as significantly
different frame distances (e.g. [6,7] with [1,7]).
#### 4.3.5 Varying the number of frames
Figure 8: SSv2† accuracy (left y-axis) vs runtime analysis (right y-axis in
seconds/task) for TRX $\Omega=\\{2,3\\}$ as the number of sampled frames
varies from 4 to 12 frames.
All previous results sample 8 frames from each video, in the query and support
set. This allows us to directly compare to previous few-shot works that all
consider 8 uniformly sampled frames [32, 3, 4]. To demonstrate TRX is
scalable, we plot $\Omega=\\{2,3\\}$ results on SSv2 for the 5-way 5-shot
task, sampling different numbers of frames (4-12), and compare the accuracy
and runtime in Fig. 8. Accuracy is comparable for $\geq 6$ frames.
Importantly, the method’s runtime scales linearly with the number of frames.
The TRX component contributes only a small fraction of the runtime and memory
requirements of the network, with the ResNet-50 backbone dominating both.
#### 4.3.6 Random tuples in TRX
(a) $\Omega{=}\\{2\\}$. (b) $\Omega{=}\\{3\\}$. (c) $\Omega{=}\\{4\\}$. (d)
$\Omega{=}\\{2,3\\}$.
Figure 9: Effect of retaining % of tuples, selected randomly, on TRX, reported
for SSv2†. Grey dots indicate results of different runs, with averages in red.
All the above experiments have used exhaustive sets of tuples, e.g. every
possible pair $(n_{1},n_{2})$ such that $1\leq n_{1}<n_{2}\leq F$ for
$\omega=2$. To explore the impact of randomly sampling tuples, we experiment
with 20, 40, 60 and 80% of tuples retained for $\Omega{=}\\{2\\}$, $\\{3\\}$
and $\\{4\\}$, as well as a combined $\Omega{=}\\{2,3\\}$. We report four runs
for each percentage, each with a different random selection of tuples.
| % tuples retained
---|---|---|---|---|---
Cardinalities | 20 | 40 | 60 | 80 | 100
$\Omega{=}\\{2\\}$ | 128 | 204 | 256 | 346 | 462
$\Omega{=}\\{3\\}$ | 228 | 390 | 540 | 702 | 844
$\Omega{=}\\{4\\}$ | 294 | 546 | 754 | 922 | 1152
$\Omega{=}\\{2,3\\}$ | 356 | 594 | 796 | 1048 | 1306
Table 4: GPU usage in MiB when randomly dropping tuples, corresponding to
experiments in Fig. 9.
Fig. 9 shows that while retaining all tuples gives the best performance, some
of the runs produce results comparable to exhaustive tuple selections for
$\Omega{=}\\{2,3\\}$ and even outperform these for $\Omega{=}\\{4\\}$. The
performance degrades more quickly for $\Omega{=}\\{2\\}$. The associated Tab. 4
compares the corresponding GPU usage. This shows it is possible to utilise
fewer resources with comparable performance. A method for selecting tuples
that maintains performance is left for future work.
## 5 Conclusion
This paper introduced Temporal-Relational CrossTransformers (TRX) for few-shot
action recognition. TRX constructs query-specific class prototypes by
comparing the query to sub-sequences of all support set videos. To model
temporal relationships, videos are represented by ordered tuples of frames,
which allows sub-sequences of actions at different speeds and temporal offsets
to be compared. TRX achieves state-of-the-art results on the few-shot versions
of four datasets: Kinetics, Something-Something V2, HMDB51 and UCF101. An
extensive set of ablations shows how TRX observes multiple support set videos,
the importance of tuple representations over single-frame comparisons, and the
benefits of exploiting tuples of different cardinalities. As future work, we
aim to explore spatio-temporal versions of TRX.
Acknowledgements Publicly-available datasets were used for this work. This
work was performed under the SPHERE Next Steps EPSRC Project EP/R005273/1.
Damen is supported by EPSRC Fellowship UMPIRE (EP/T004991/1).
## References
* [1] Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer Normalization. arXiv, 2016.
* [2] Peyman Bateni, Raghav Goyal, Vaden Masrani, Frank Wood, and Leonid Sigal. Improved Few-Shot Visual Classification. In Computer Vision and Pattern Recognition, 2020.
* [3] Mina Bishay, Georgios Zoumpourlis, and Ioannis Patras. TARN: Temporal Attentive Relation Network for Few-Shot and Zero-Shot Action Recognition. In British Machine Vision Conference, 2019.
* [4] Kaidi Cao, Jingwei Ji, Zhangjie Cao, Chien-Yi Chang, and Juan Carlos Niebles. Few-Shot Video Classification via Temporal Alignment. In Computer Vision and Pattern Recognition, 2020.
* [5] Joao Carreira and Andrew Zisserman. Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset. Computer Vision and Pattern Recognition, 2017.
* [6] Dima Damen, Hazel Doughty, Giovanni Maria Farinella, Sanja Fidler, Antonino Furnari, Evangelos Kazakos, Davide Moltisanti, Jonathan Munro, Toby Perrett, Will Price, and Michael Wray. Scaling Egocentric Vision: The EPIC-KITCHENS Dataset. In European Conference on Computer Vision, 2018.
* [7] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. In Computer Vision and Pattern Recognition, 2009.
* [8] Carl Doersch, Ankush Gupta, and Andrew Zisserman. CrossTransformers: Spatially-Aware Few-Shot Transfer. In Advances in Neural Information Processing Systems, 2020.
* [9] Sai Kumar Dwivedi, Vikram Gupta, Rahul Mitra, Shuaib Ahmed, and Arjun Jain. ProtoGAN: Towards Few Shot Learning for Action Recognition. In Computer Vision and Pattern Recognition, 2019.
* [10] Christoph Feichtenhofer, Haoqi Fan, Jitendra Malik, and Kaiming He. Slowfast Networks for Video Recognition. In International Conference on Computer Vision, 2019.
* [11] Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks. In International Conference on Machine Learning, 2017.
* [12] Raghav Goyal, Vincent Michalski, Joanna Materzy, Susanne Westphal, Heuna Kim, Valentin Haenel, Peter Yianilos, Moritz Mueller-freitag, Florian Hoppe, Christian Thurau, Ingo Bax, and Roland Memisevic. The “Something Something” Video Database for Learning and Evaluating Visual Common Sense. In International Conference on Computer Vision, 2017.
* [13] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep Residual Learning for Image Recognition. In Computer Vision and Pattern Recognition, 2016.
* [14] Boyuan Jiang, Meng Meng Wang, Weihao Gan, Wei Wu, and Junjie Yan. STM: SpatioTemporal and Motion Encoding for Action Recognition. In International Conference on Computer Vision, 2019.
* [15] Bingyi Kang, Zhuang Liu, Xin Wang, Fisher Yu, Jiashi Feng, and Trevor Darrell. Few-Shot Object Detection via Feature Reweighting. In International Conference on Computer Vision, 2019.
* [16] H. Kuehne, H. Jhuang, E. Garrote, T. Poggio, and T. Serre. HMDB: A large video database for human motion recognition. In International Conference on Computer Vision, 2011.
* [17] Alex Nichol, Joshua Achiam, and John Schulman. On First-Order Meta-Learning Algorithms. arXiv, 2018.
* [18] Will Price and Dima Damen. Retro-Actions : Learning ‘Close’ by Time-Reversing ‘Open’ Videos. In International Conference on Computer Vision Workshops, 2019.
* [19] James Requeima, Jonathan Gordon, John Bronskill, Sebastian Nowozin, and Richard E. Turner. Fast and Flexible Multi-Task Classification Using Conditional Neural Adaptive Processes. In Advances in Neural Information Processing Systems, 2019.
* [20] Jake Snell, Kevin Swersky, and Richard Zemel. Prototypical Networks for Few-Shot Learning. Advances in Neural Information Processing Systems, 2017.
* [21] Khurram Soomro, Amir Roshan Zamir, and Mubarak Shah. UCF101: A Dataset of 101 Human Actions Classes From Videos in The Wild. arXiv, 2012.
* [22] Eleni Triantafillou, Tyler Zhu, Vincent Dumoulin, Pascal Lamblin, Utku Evci, Kelvin Xu, Ross Goroshin, Carles Gelada, Kevin Swersky, Pierre-Antoine Manzagol, and Hugo Larochelle. Meta-Dataset: A Dataset of Datasets for Learning to Learn from Few Examples. In International Conference on Learning Representations, 2019.
* [23] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, 2017.
* [24] Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Koray Kavukcuoglu, and Daan Wierstra. Matching Networks for One Shot Learning. In Advances in Neural Information Processing Systems, 2016.
* [25] Limin Wang, Yuanjun Xiong, Zhe Wang, Yu Qiao, Dahua Lin, Xiaoou Tang, and Luc Van Gool. Temporal Segment Networks: Towards Good Practices for Deep Action Recognition. In European Conference on Computer Vision, 2016.
* [26] Zhongwen Xu, Linchao Zhu, and Yi Yang. Few-Shot Object Recognition from Machine-Labeled Web Images. In Computer Vision and Pattern Recognition, 2017.
* [27] Hongguang Zhang, Li Zhang, Xiaojuan Qi, Hongdong Li, Philip H S Torr, and Piotr Koniusz. Few-shot Action Recognition with Permutation-invariant Attention. In European Conference on Computer Vision, 2020.
* [28] Ruixiang Zhang, Tong Che, Yoshua Bengio, Zoubin Ghahramani, and Yangqiu Song. Metagan: An Adversarial Approach to Few-Shot Learning. Advances in Neural Information Processing Systems, 2018.
* [29] Bolei Zhou, Alex Andonian, Aude Oliva, and Antonio Torralba. Temporal Relational Reasoning in Videos. In European Conference on Computer Vision, 2018.
* [30] Bolei Zhou, Alex Andonian, Aude Oliva, and Antonio Torralba. Temporal Relational Reasoning in Videos. In European Conference on Computer Vision, 2018.
* [31] Linchao Zhu and Yi Yang. Compound Memory Networks for Few-Shot Video Classification. In European Conference on Computer Vision, 2018.
* [32] Linchao Zhu and Yi Yang. Label Independent Memory for Semi-Supervised Few-shot Video Classification. Transactions on Pattern Analysis and Machine Intelligence, 14(8), 2020.
* [33] Xiatian Zhu, Antoine Toisoul, Juan-Manuel Pérez-Rúa, Li Zhang, Brais Martinez, and Tao Xiang. Few-shot Action Recognition with Prototype-centered Attentive Learning. arXiv, 2021.
## Appendix
## Appendix A X-Shot results
| | Shot
---|---|---|---|---|---|---
Dataset | Method | 1 | 2 | 3 | 4 | 5
| CMN [31] | 60.5 | - | - | - | 78.9
| CMN-J [32] | 60.5 | 70.0 | 75.6 | 77.3 | 78.9
| TARN [3] | 64.8 | - | - | - | 78.5
Kinetics | ARN [27] | 63.7 | - | - | - | 82.4
| OTAM [4] | 73.0 | - | - | - | 85.8
| Ours - TRX $\Omega{=}\\{1\\}$ | 63.6 | 75.4 | 80.1 | 82.4 | 85.2
| Ours - TRX $\Omega{=}\\{2,3\\}$ | 63.6 | 76.2 | 81.8 | 83.4 | 85.9
| CMN-J [32] | 36.2 | 42.1 | 44.6 | 47.0 | 48.8
SSv2† | Ours - TRX $\Omega{=}\\{1\\}$ | 34.9 | 43.4 | 47.6 | 50.9 | 53.3
| Ours - TRX $\Omega{=}\\{2,3\\}$ | 36.0 | 46.0 | 51.9 | 54.9 | 59.1
| OTAM [4] | 42.8 | - | - | - | 52.3
SSv2∗ | Ours - TRX $\Omega{=}\\{1\\}$ | 38.8 | 49.7 | 54.4 | 58.0 | 60.6
| Ours - TRX $\Omega{=}\\{2,3\\}$ | 42.0 | 53.1 | 57.6 | 61.1 | 64.6
Table 5: Comparison to few-shot video works on Kinetics (split from [32]) and
Something-Something V2 (SSv2) (†: split from [32] ∗: split from [4]). Results
are reported as the shot, number of support set videos per class, increases
from 1 to 5. -: Results not available in published works.
In the main paper, we introduced Temporal-Relational CrossTransformers (TRX)
for few-shot action recognition. They are designed specifically for $K$-shot
problems where $K>1$, as TRX is able to match sub-sequences from the query
against sub-sequences from multiple support set videos.
Table 1 in the main paper shows results on the standard 5-way 5-shot
benchmarks on Kinetics [5], Something-Something V2 (SSv2) [12], HMDB51 [16]
and UCF101 [21]. For completeness we also provide 1-, 2-, 3-, 4- and 5-shot
results for TRX with $\Omega{=}\\{1\\}$ (frame-to-frame comparisons) and
$\Omega{=}\\{2,3\\}$ (pair and triplet comparisons) on the large-scale
datasets Kinetics and SSv2. These are in Table 5 in this appendix, where we
also list results from all other works which provide these scores.
For 1-shot, in Kinetics, TRX performs similarly to recent few-shot action-
recognition methods [31, 3, 27], but these are all outperformed by OTAM [4].
OTAM works by finding a strict alignment between the query and single support
set video per class. It does not scale as well as TRX when $K>1$, as shown by TRX
performing better on the 5-shot benchmark. This is because TRX is able to
match query sub-sequences against similar sub-sequences in the support set,
and importantly ignore sub-sequences (or whole videos) which are not as
useful. Compared to the strict alignment in OTAM [4], where the full video is
considered in the alignment, TRX can exploit several sub-sequences from the
same video, ignoring any distractors. Despite not being as well suited to
1-shot problems, on SSv2 TRX performs similarly to OTAM. 2-shot TRX even
outperforms 5-shot OTAM. Table 5 again highlights the importance of tuples,
shown in the main paper, where TRX with $\Omega{=}\\{2,3\\}$ consistently
outperforms $\Omega{=}\\{1\\}$.
Figure 5 in the main paper shows how TRX scales on SSv2 compared to CMN [31,
32], which also provides X-shot results ($1\leq X\leq 5$). The equivalent
graph for Kinetics is shown in Fig. 10 here. This confirms TRX scales better
as the shot increases. There is less of a difference between TRX with
$\Omega{=}\\{1\\}$ and $\Omega{=}\\{2,3\\}$, as Kinetics requires less
temporal knowledge to discriminate between the classes than SSv2 (ablated in
Sec. 4.3.1 and 4.3.2 in the main paper).
Figure 10: Comparing CMN [32] results to TRX for X-shot 5-way, for $1\leq
X\leq 5$ on Kinetics. TRX benefits from increasing the number of videos in
the support set, both for $\Omega{=}\\{1\\}$ and $\Omega{=}\\{2,3\\}$.
## Appendix B The impact of positional encoding
Method | Positional Encoding | Kinetics | SSv2†
---|---|---|---
$\Omega{=}\\{1\\}$ | $\times$ | 85.2 | 53.0
$\Omega{=}\\{1\\}$ | ✓ | 85.2 | 53.3
$\Omega{=}\\{2,3\\}$ | $\times$ | 85.5 | 58.5
$\Omega{=}\\{2,3\\}$ | ✓ | 85.9 | 59.1
Table 6: The importance of incorporating positional encoding for single frames
and the proposed model ${\Omega{=}\\{2,3\\}}$.
TRX adds positional encodings to the individual frame representations before
concatenating them into tuples. Table 6 shows that adding positional encodings
improves SSv2 for both single frames and higher-order tuples (by +0.3% and
+0.6% respectively). For Kinetics, performance stays the same for single frames
and improves slightly with tuples (+0.4%) for the proposed model. Overall,
positional encoding improves the results marginally for TRX.
|
# Spoofing Attack Detection in Dynamic Channels with Imperfect CSI
###### Abstract
Recently, channel state information (CSI) at the physical-layer has been
utilized to detect spoofing attacks in wireless communications. However, due
to hardware impairments and communication noise, the CSI cannot be estimated
accurately, which significantly degrades the attack detection performance.
Besides, the reliability of CSI based detection schemes is challenged by time-
varying scenarios. To address these issues, we propose an adaptive Kalman
based detection scheme. By utilizing the knowledge of the predicted channel we
eliminate the channel estimation error, especially the random phase error
which occurs due to the lack of synchronization between transmitter and
receiver. Furthermore, we define a Kalman residual based test statistic for
attack detection. Simulation results show that our proposed scheme makes the
detection more robust at low signal-to-noise ratio (SNR) and in dynamic
scenarios.
Index Terms— Spoofing attack, imperfect CSI, Kalman filter
© 20XX IEEE. Personal use of this material is permitted. Permission from IEEE
must be obtained for all other uses, in any current or future media, including
reprinting/republishing this material for advertising or promotional purposes,
creating new collective works, for resale or redistribution to servers or
lists, or reuse of any copyrighted component of this work in other works.
## 1 Introduction
The open nature of radio propagation makes wireless communication vulnerable
to identity-based spoofing attacks, in which the attacker attempts to deliver
a malicious message by pretending to be the legitimate user. In particular,
when communicating over commodity WiFi networks, the attacker can simply use
the command “ifconfig” to change its media access control (MAC) address and
claim to be the authorized user [1]. Therefore, the receiver must
authenticate the message before proceeding with it. Traditional authentication
mechanisms are based on encryption keys, which do not take into account the
physical layer of the communication protocol. In recent years, unique channel
features extracted from the physical layer are exploited to enhance the
authentication performance [2]. Practical communication protocols, such as the
IEEE 802.11n standard [3], define an orthogonal frequency-division multiplexing
(OFDM) channel estimation mechanism, with which the complex CSI can be obtained
in the discrete Fourier transform (DFT) domain. With the help of CSI, the
aforementioned spoofing attack can be effectively detected.
However, due to the lack of synchronization between transmitter and receiver,
the CSI phase information is largely distorted. In more detail, the time shift
from packet boundary detection results in a packet detection delay (PDD),
leading to a random phase _slope_ error. Further, the carrier frequency offset
(CFO) between the transmitter and receiver leads to a random phase _offset_
error [4, 5]. In many recent physical layer authentication studies,
the problem is avoided by completely ignoring the observed CSI phase and
focusing only on the received signal strength indicator (RSSI) [6] or CSI
magnitude [7]. The conventional approach of CSI based detection schemes is by
comparing the difference between the currently observed and the historical CSI
[8]. In recent years, machine learning (ML) based approaches have been
developed in order to distinguish different transmitters, such as Gaussian
mixture model (GMM) [9, 10] and support vector machine (SVM) [11, 12].
However, the wireless channel is time-varying. The stored historical CSI and
the off-line learned channel features need to be updated over time; otherwise,
severe performance degradation occurs.
To solve these problems, in this paper we propose an adaptive Kalman filter
based attack detection scheme that takes the predicted CSI into account. We
formulate the attack detection process mathematically as a binary hypothesis
testing problem. Unlike most state-of-the-art studies that rely on historical CSI,
we exploit the predicted CSI for attack detection. Furthermore, for attack
detection we define a Kalman residual based test statistic, which follows a
chi-squared distribution. The proposed scheme is evaluated by Monte Carlo
simulations. Simulation results show that our proposed method outperforms most
state-of-the-art attack detection schemes.
The rest of the paper is organized as follows. The channel model is introduced
in Sec. 2. We explain the proposed attack detection scheme in Sec. 3.
Simulation results are given in Sec. 4. The paper is concluded in Sec. 5.
## 2 Channel model
Let $Q$ be the number of pilots used for channel estimation,
$q_{1},q_{2},\ldots,q_{Q}\in\mathcal{Q}$ denote the pilot indices. Due to the
communication noise and the synchronization problem, the CSI cannot be
estimated accurately. We use a $Q\times 1$ vector ${\bm{h}}_{\text{Obs},k}$ to
denote the imperfect CSI estimation at time $k$, which can be expressed as
$\displaystyle{\bm{h}}_{\text{Obs},k}=\bm{E}_{k}\bm{C}{\bm{h}}_{k}+\bm{w}_{k},$
(1)
where $\bm{E}_{k}$ is a diagonal matrix that represents the phase error
$\displaystyle\bm{E}_{k}=e^{j\Omega_{0,k}}{\begin{bmatrix}e^{j\Omega_{d,k}q_{1}}&&\\\ &\ddots&\\\ &&e^{j\Omega_{d,k}q_{Q}}\end{bmatrix}},$ (2)
in which $\Omega_{0,k}$ and $\Omega_{d,k}$ are the random phase distortion
parameters caused by the CFO and PDD, respectively. Let $L$ be the channel
length in the time domain. The $Q\times L$ matrix $\bm{C}$ is a partial DFT
matrix with ${[\bm{C}]}_{m,l}=e^{-j\frac{2\pi}{M}\cdot q_{m}\cdot l}$, where
$M$ denotes the DFT size. $\bm{h}_{k}$ is the channel in the time domain, which
can be expressed as $\bm{h}_{k}={[h_{k}^{1},h_{k}^{2},\dots,h_{k}^{L}]}^{T}$.
We use $\bm{h}_{\text{True},k}=\bm{C}\bm{h}_{k}$ to denote the “true” channel
in the DFT domain at time $k$. $\bm{w}_{k}$ is complex circularly-symmetric
Gaussian noise with covariance $\sigma^{2}_{w}\bm{I}_{Q}$.
For the channel in time domain $\bm{h}_{k}$, we assume a multi-path Rayleigh
fading channel with Jakes doppler spectrum [13]
$\displaystyle{\Gamma}_{h^{(l)}}=\left\\{\begin{matrix}\frac{{\sigma}_{h^{(l)}}^{2}}{\pi f_{d}\sqrt{1-{(\frac{f}{f_{d}})}^{2}}},&\text{if}\quad\left|f\right|<f_{d},\\\ 0,&\text{if}\quad\left|f\right|\geq f_{d},\end{matrix}\right.$ (3)
where $f_{d}$ denotes the doppler frequency,
${\sigma}_{h^{(l)}}^{2}=\mathbb{E}[h_{k}^{l}{h_{k}^{lH}}]$ is the variance of
the $l$-th channel path. In order to approximate the channel variations we
apply here the first-order auto-regressive (AR1)
$\displaystyle\bm{h}_{k}=\alpha\bm{h}_{k-1}+\bm{v}_{k},$ (4)
where $\alpha$ denotes the channel correlation between previous time $k-1$ and
$k$, $\bm{v_{k}}$ is the circular-symmetric complex Gaussian process noise.
According to Jakes spectrum, the transition parameter and the covariance
matrix of $\bm{v}_{k}$ can be obtained by the Yule-Walker equations [14],
which are given by
$\displaystyle\alpha=J_{0}(2\pi f_{d}T_{s}),$ (5)
$\displaystyle\bm{R}_{k}=\mathbb{E}[\bm{v}_{k}\bm{v}_{k}^{H}]=(1-{\alpha}^{2})\,\mathrm{diag}([{\sigma}_{h^{(1)}}^{2},{\sigma}_{h^{(2)}}^{2},\cdots,{\sigma}_{h^{(L)}}^{2}]),$ (6)
where $f_{d}T_{s}$ is the normalized Doppler frequency and $J_{0}(\cdot)$ is
the zero-order Bessel function. Here, $\mathrm{diag}(\bm{x})$ denotes the
diagonal matrix whose main diagonal consists of the elements of $\bm{x}$.
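For concreteness, the AR1 evolution (4) with the Jakes-matched parameters (5)-(6) can be simulated as in the following sketch (function and variable names are ours; `sigma2` is a NumPy vector of per-path variances):

```python
import numpy as np
from scipy.special import j0  # zero-order Bessel function J_0

def simulate_ar1_channel(sigma2, fd_ts, steps, rng=None):
    """Draw h_1, ..., h_steps from the AR1 model (4) with Jakes-matched
    parameters (5)-(6); sigma2 is the length-L vector of path variances."""
    rng = rng or np.random.default_rng(0)
    L = len(sigma2)
    alpha = j0(2 * np.pi * fd_ts)                        # Eq. (5)
    std = np.sqrt(sigma2 / 2)
    h = std * (rng.standard_normal(L) + 1j * rng.standard_normal(L))
    out = []
    for _ in range(steps):
        v_std = np.sqrt((1 - alpha**2) * sigma2 / 2)     # from Eq. (6)
        v = v_std * (rng.standard_normal(L) + 1j * rng.standard_normal(L))
        h = alpha * h + v                                # Eq. (4)
        out.append(h.copy())
    return np.array(out)
```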
## 3 Attack detection scheme
We consider the spoofing attack model in Fig. 1. A legitimate user (Alice)
intends to communicate with Bob over the Alice-to-Bob channel. We use
$\bm{h}_{{\text{True},k}}^{A}$ and ${\bm{h}}_{{\text{Obs},k}}^{A}$ to denote
the true and the imperfect CSI of Alice-to-Bob channel, respectively. The
attacker (Eve) tries to deliver a malicious message to Bob by pretending to be
Alice. We use $\bm{h}_{{\text{True},k}}^{E}$ and
${\bm{h}}_{{\text{Obs},k}}^{E}$ to denote the true and the imperfect CSI of
Eve-to-Bob channel, respectively. Bob has to decide whether the received
message is from the legitimate user Alice or the attacker Eve. All users are
assumed to be located in different positions with the location distance
$d>\lambda$, where $\lambda$ is the radio frequency (RF) wavelength. Due to
the _location-specific_ property of the wireless channel
(${\bm{h}}_{\text{True},k}^{E}\neq{\bm{h}}_{\text{True},k}^{A}$), the CSI can
be used to distinguish different transmitters.
Fig. 1: Overview of the spoofing attack model: Alice and Eve transmit to Bob, who observes ${\bm{h}}_{{\text{Obs},k}}^{A}$ or ${\bm{h}}_{{\text{Obs},k}}^{E}$, respectively.
However, due to lack of synchronization, the phase of the estimated CSI is
largely distorted. Many state-of-the-art studies only focus on the magnitude of
the CSI. In order to estimate the random phase error we have proposed an
adaptive Kalman filter based algorithm in our previous work [15]. In this work
we apply the phase recovery approach for attack detection.
We use $\hat{\bm{h}}_{k|k-1}$ and $\hat{\bm{h}}_{k|k}$ to denote the predicted
and updated channel in the time domain, respectively. Furthermore, we use the
$L\times L$ diagonal matrices $\bm{P}_{k|k-1}$ and $\bm{P}_{k|k}$ to denote
predicted and the updated estimation covariance. Based on the state-space
model defined in (1) and (4) we derive the adaptive Kalman process to jointly
estimate the phase distortion parameters and the true channel in Alg. 1.
Details can be found in [15].
Algorithm 1 Kalman filter based channel estimation
Require: Initialization of $\hat{\bm{h}}_{0|0}$ and $\bm{P}_{0|0}$
repeat
$k\leftarrow k+1$
Predict channel estimate
$\hat{\bm{h}}_{k|k-1}=\alpha\hat{\bm{h}}_{k-1|k-1}$
Predict channel estimation error covariance
$\bm{P}_{k|k-1}={\alpha}^{2}\bm{P}_{k-1|k-1}+\bm{R}_{k}$
Estimate phase slope $\hat{\Omega}_{d,k}$ and phase offset $\hat{\Omega}_{0,k}$:
$\bm{B}_{k}=\bm{E}({\Omega}_{0,k},{\Omega}_{d,k})\bm{C}$
$\bm{\Sigma}^{-1}=\left(\bm{B}_{k}\bm{P}_{k|k-1}\bm{B}^{H}_{k}+\sigma^{2}_{w}\bm{I}_{Q}\right)^{-1}$
$g(\Omega_{d,k},{\Omega}_{0,k})=\bm{h}^{H}_{\text{Obs},k}\bm{\Sigma}^{-1}\bm{h}_{\text{Obs},k}-2\operatorname{Re}\left[\bm{h}^{H}_{\text{Obs},k}\bm{B}_{k}\hat{\bm{h}}_{k|k-1}\right]$
$\left\langle\hat{\Omega}_{d,k},\hat{\Omega}_{0,k}\right\rangle=\underset{{\Omega}_{d,k},{\Omega}_{0,k}}{\arg\min}\,g\left(\Omega_{d,k},{\Omega}_{0,k}\right)$
Kalman gain
$\bm{K}_{k}=\bm{P}_{k|k-1}\bm{B}^{H}_{k}\left(\bm{B}_{k}\bm{P}_{k|k-1}\bm{B}^{H}_{k}+\sigma^{2}_{w}\bm{I}_{Q}\right)^{-1}$
Update channel estimate
$\hat{\bm{h}}_{k|k}=\hat{\bm{h}}_{k|k-1}+\bm{K}_{k}\left(\bm{h}_{\text{Obs},k}-\bm{B}_{k}\hat{\bm{h}}_{k|k-1}\right)$
Update channel estimation error covariance
$\bm{P}_{k|k}=\left(\bm{I}_{L}-\bm{K}_{k}\bm{B}_{k}\right)\bm{P}_{k|k-1}$
until forever
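One predict/update cycle of Alg. 1 translates into the following NumPy sketch (the inner search for the phase parameters is omitted; we assume $\bm{B}_{k}$ has already been formed from the estimated phase errors):

```python
import numpy as np

def kalman_step(h_prev, P_prev, h_obs, B, alpha, R, sigma2_w):
    """Predict and update steps of Alg. 1 for one time step k."""
    h_pred = alpha * h_prev                               # channel prediction
    P_pred = alpha**2 * P_prev + R                        # covariance prediction
    S = B @ P_pred @ B.conj().T + sigma2_w * np.eye(B.shape[0])
    K = P_pred @ B.conj().T @ np.linalg.inv(S)            # Kalman gain
    resid = h_obs - B @ h_pred                            # Kalman residual, cf. (7)
    h_upd = h_pred + K @ resid
    P_upd = (np.eye(P_pred.shape[0]) - K @ B) @ P_pred
    return h_upd, P_upd, resid, S
```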
According to Alg. 1, we define the Kalman residual as
$\displaystyle\bm{\epsilon}_{k}=\bm{h}_{\text{Obs},k}-\bm{B}_{k}\hat{\bm{h}}_{k|k-1},$
(7)
where $\bm{B}_{k}=\bm{E}(\hat{\Omega}_{0,k},\hat{\Omega}_{d,k})\bm{C}$ is
introduced for convenience, in which $(\hat{\Omega}_{0,k}$,
$\hat{\Omega}_{d,k})$ denote the estimated phase distortion parameters. In the
absence of attacks, the Kalman residual $\bm{\epsilon}_{k}$ follows a complex
Gaussian distribution with zero-mean and covariance matrix
$\displaystyle{\bm{\Sigma}}_{k}=\underbrace{\bm{B}_{k}\bm{P}_{k|k-1}\bm{B}^{H}_{k}}_{a}+\underbrace{\sigma^{2}_{w}\bm{I}_{Q}}_{b},$
(8)
in which the terms $a$ and $b$ represent the channel estimation covariance in
the DFT domain and the covariance of the Gaussian noise $\bm{w}_{k}$ according
to the model defined in (1), respectively. For the simplicity of the
analysis, we assume here that the phase distortion parameters are estimated
accurately. Furthermore, we define the test statistic as
$\displaystyle{\lambda}_{k}=2{\bm{\epsilon}_{k}}^{H}{{\bm{\Sigma}}_{k}}^{-1}{\bm{\epsilon}_{k}},$
(9)
which follows a chi-squared distribution with $2Q$ degrees of freedom (DoF) in
the absence of attacks. The chi-squared distribution can be expressed as
$\displaystyle{\lambda}_{k}\sim{{\chi}^{2}_{2Q}}.$ (10)
Thus, the threshold $d$ for attack detection can be evaluated for a given
false alarm rate $P_{FA}$, which is given by
$\displaystyle
d(P_{FA})=F^{-1}[1-P_{FA}|2Q]=\left\\{x:F(x|2Q)=1-P_{FA}\right\\},$ (11)
where $F^{-1}$ denotes the inverse of the cumulative distribution function
(cdf) of ${{\chi}^{2}_{2Q}}$. The cdf $F$ can be expressed as
$\displaystyle
F(x|2Q)=\int_{0}^{x}\frac{t^{(2Q-2)/2}e^{-t/2}}{2^{Q}\Gamma(Q)}dt,$ (12)
in which $\Gamma(\cdot)$ is the Gamma function. Thus, we formulate here the
spoofing attack detection procedure as a binary hypothesis testing, which is
given by
$\displaystyle\mathit{H}_{0}:{\lambda}_{k}\leqslant d(P_{FA}),$ (13)
$\displaystyle\mathit{H}_{1}:{\lambda}_{k}>d(P_{FA}),$ (14)
where the null hypothesis $\mathit{H}_{0}$ denotes that the proposed test
statistic is equal to or smaller than the threshold calculated at a given
$P_{FA}$. This means that the received message at time $k$ is considered to be
from the legitimate user Alice, while the alternative hypothesis
$\mathit{H}_{1}$ denotes that the received message is considered to be from the
attacker Eve. Our proposed scheme is summarized as follows. At time $k$ the
receiver Bob observes an imperfect channel estimate $\bm{h}_{\text{Obs},k}$
from an unknown transmitter. In order to eliminate the random phase errors,
the Kalman prediction and phase estimation will be performed first. After that
the receiver Bob calculates the test statistic ${\lambda}_{k}$ according to
(9). If the test statistic is equal to or smaller than the threshold, Bob will
accept the message and perform a Kalman update. Otherwise, an alarm will be
generated and the currently received message will be rejected.
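The complete decision rule (9)-(14) reduces to a chi-squared test on the Kalman residual; a sketch using SciPy's chi-squared quantile function (the function name is ours):

```python
import numpy as np
from scipy.stats import chi2

def spoofing_test(resid, Sigma, Q, p_fa=0.1):
    """Return True if H1 (attack) is declared for the given false alarm rate."""
    lam = 2 * np.real(resid.conj() @ np.linalg.solve(Sigma, resid))  # Eq. (9)
    threshold = chi2.ppf(1 - p_fa, df=2 * Q)                         # Eq. (11)
    return lam > threshold                                           # Eqs. (13)-(14)
```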
## 4 Simulation results
Fig. 2: Test statistic distribution of ${\lambda}_{k}^{A}$, SNR = 10 dB, $f_{d}T_{s}=10^{-4}$
Fig. 3: ROC of the proposed scheme, $f_{d}T_{s}=10^{-4}$
Fig. 4: Comparison of different detection approaches: (a) Detection rate versus SNR, false alarm rate = 0.1, $f_{d}T_{s}=10^{-4}$; (b) Detection rate versus $f_{d}T_{s}$, false alarm rate = 0.1, SNR = 10 dB
In this section we present the numerical results. A Monte Carlo simulation is
performed to verify the proposed Kalman residual based attack detection. The
results are averaged over $10^{4}$ simulations. According to the IEEE 802.11n
standard [3], we consider an OFDM system with 114 pilots. For all instances of
the simulations, the channel in the time domain is modelled as a multi-path
Rayleigh channel with Jakes doppler spectrum. Meanwhile, the imperfect CSI is
generated with complex Gaussian noise and random phase errors according to
(1). In order to perform the hypothesis testing, we generate 2000 imperfect
CSI realizations of Alice-Bob channel (${\bm{h}}_{{\text{Obs},k}}^{A}$) and
Eve-Bob channel (${\bm{h}}_{{\text{Obs},k}}^{E}$) for each simulation. The
entire Kalman filter based channel state recovery in Alg. 1 is performed only
for ${\bm{h}}_{{\text{Obs},k}}^{A}$ ($k=1,...,2000$). To evaluate the
detection performance, after the prediction step we separately obtain the
phase distortion terms given ${\bm{h}}_{{\text{Obs},k}}^{A}$ and
${\bm{h}}_{{\text{Obs},k}}^{E}$. Then, according to (9), we calculate the proposed
test statistics ${\lambda}_{k}^{A}$ and ${\lambda}_{k}^{E}$ using
${\bm{h}}_{{\text{Obs},k}}^{A}$ and ${\bm{h}}_{{\text{Obs},k}}^{E}$,
respectively.
Theoretically, the proposed test statistic of the Alice-Bob channel,
${\lambda}_{k}^{A}$, should follow the chi-squared distribution. This is
verified in Fig. 2: the empirical distribution of the test statistic matches
the chi-squared distribution closely. The receiver operating
characteristics (ROC) curves of the proposed scheme with different SNR are
illustrated in Fig. 3, in which each data point is a pair of detection rate
and false alarm rate at a deterministic threshold. The threshold is calculated
with a known false alarm rate according to (11). The ROC curve represents the
trade-off between the false alarm rate and the detection rate. From Fig. 3 we
observe that as SNR increases, better detection performance can be achieved.
We compare the proposed scheme to the approaches using GMM in [9], one class
SVM (OC-SVM) in [12] and the magnitude difference between consecutive CSI in
[16]. Note that, except for our proposed scheme, the remaining approaches only
utilize the magnitude of the CSI, because the CSI phase is distorted
severely due to the random errors. In addition, since GMM is a supervised ML
based algorithm, we use the magnitude of ${\bm{h}}_{{\text{Obs},k}}^{A}$
($k=1,..,1000$) and ${\bm{h}}_{{\text{Obs},k}}^{E}$ ($k=1,..,1000$) to train
the Gaussian mixture components, while the magnitude of
${\bm{h}}_{{\text{Obs},k}}^{A}$ ($k=1001,..,2000$) and
${\bm{h}}_{{\text{Obs},k}}^{E}$ ($k=1001,..,2000$) are used for testing. For
the semi-supervised ML based OC-SVM approach, the magnitude of
${\bm{h}}_{{\text{Obs},k}}^{A}$ ($k=1,..,1000$) are used for training, while
the magnitude of ${\bm{h}}_{{\text{Obs},k}}^{A}$ ($k=1001,..,2000$) and
${\bm{h}}_{{\text{Obs},k}}^{E}$ ($k=1001,..,2000$) are used for testing.
Meanwhile we illustrate here the detection rate of the proposed scheme with
$k=1001,..,2000$ for a fair comparison. In Fig. 4 (a), the detection rate is
presented as a function of the SNR. It can be seen that our proposed scheme is
superior to other methods, especially in the case of low SNR. The reason is
that, through the Kalman filter based channel estimation in Alg. 1, we recover
the CSI by the low-dimensional channel impulse response $\hat{\bm{h}}_{k|k}$,
thereby reducing the noise corruption. Additionally, we utilize the complex
valued CSI, while the other approaches only using the magnitude. When we study
the performance of the approaches with different doppler frequency as shown in
Fig. 4 (b), we can see that the magnitude difference based approach performs
similar to the proposed scheme. The detection performance of ML-based
algorithms decreases with higher Doppler frequencies, because the trained
model for attack detection becomes obsolete due to channel variation.
## 5 Conclusion
In this paper, we have proposed a Kalman filter based spoofing attack
detection scheme for dynamic channels. The detection problem is formulated as
a binary hypothesis testing process with the defined test statistic, in which
the predicted channel and the estimated phase errors are utilized. Simulation
results have demonstrated that our proposed scheme outperforms most
state-of-the-art approaches, especially in the case of low SNR and in dynamic scenarios.
## References
* [1] A. Pandey and J. R Saini, “Counter measures to combat misuses of mac address spoofing techniques,” International Journal of Advanced Networking and Applications, vol. 3, no. 5, pp. 1358, 2012.
* [2] E. Jorswieck, S. Tomasin, and A. Sezgin, “Broadcasting into the uncertainty: Authentication and confidentiality by physical-layer processing,” Proceedings of the IEEE, vol. 103, no. 10, pp. 1702–1724, 2015\.
* [3] “IEEE Standard for Information technology—Telecommunications and information exchange between systems Local and metropolitan area networks—Specific requirements - Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications,” IEEE Std 802.11-2016 (Revision of IEEE Std 802.11-2012), pp. 1–3534, 2016.
* [4] Y. Chen, X. Su, Y. Hu, and B. Zeng, “Residual carrier frequency offset estimation and compensation for commodity wifi,” IEEE Transactions on Mobile Computing, pp. 1–1, 2019.
* [5] H. Zhu, Y. Zhuo, Q. Liu, and S. Chang, “$\pi$-Splicer: Perceiving Accurate CSI Phases with Commodity WiFi Devices,” vol. 17, no. 9, pp. 2155–2165, Sep. 2018.
* [6] S. Rumpel, A. Wolf, and E. A. Jorswieck, “Physical layer based authentication without phase detection,” in Asilomar Conference on Signals, Systems and Computers, Nov 2016, pp. 1675–1679.
* [7] H. Liu, Y. Wang, J. Liu, J. Yang, Y. Chen, and H. V. Poor, “Authenticating Users Through Fine-Grained Channel Information,” vol. 17, no. 2, pp. 251–264, Feb 2018.
* [8] L. Xiao, L. J. Greenstein, N. B. Mandayam, and W. Trappe, “Using the physical layer for wireless authentication in time-variant channels,” IEEE Transactions on Wireless Communications, vol. 7, no. 7, pp. 2571–2579, 2008.
* [9] A. Weinand, M. Karrenbauer, J. Lianghai, and H. D. Schotten, “Physical layer authentication for mission critical machine type communication using gaussian mixture model based clustering,” in 2017 IEEE 85th Vehicular Technology Conference (VTC Spring), 2017, pp. 1–5.
* [10] X. Qiu, T. Jiang, S. Wu, and M. Hayes, “Physical layer authentication enhancement using a gaussian mixture model,” IEEE Access, vol. 6, pp. 53583–53592, 2018.
* [11] C. Dai, J. Yang, Y. Qin, and J. Liu, “Physical layer authentication algorithm based on svm,” in 2016 2nd IEEE International Conference on Computer and Communications (ICCC), 2016, pp. 1597–1601.
* [12] C. Pei, N. Zhang, X. S. Shen, and J. W. Mark, “Channel-based physical layer authentication,” in 2014 IEEE Global Communications Conference, 2014, pp. 4114–4119.
* [13] P. Dent, G. E. Bottomley, and T. Croft, “Jakes fading model revisited,” Electronics letters, vol. 29, no. 13, pp. 1162–1163, 1993.
* [14] K. E. Baddour and N. C. Beaulieu, “Autoregressive modeling for fading channel simulation,” IEEE Transactions on Wireless Communications, vol. 4, no. 4, pp. 1650–1662, 2005.
* [15] H. Vogt, C.Li, A. Sezgin, and C. Zenger, “On the precise phase recovery for physical-layer authentication in dynamic channels,” in 2019 IEEE International Workshop on Information Forensics and Security (WIFS). IEEE, 2019, pp. 1–6.
* [16] A. Weinand, A. Ambekar, M. Karrenbauer, and H. D. Schotten, “Providing physical layer security for mission critical machine type communication,” in 2016 IEEE 21st International Conference on Emerging Technologies and Factory Automation (ETFA), 2016, pp. 1–4.
|
# New Approximation Algorithms for Forest Closeness Centrality – for Individual Vertices and Vertex Groups
(This work is partially supported by German Research Foundation (DFG) grant ME 3619/3-2 within Priority Programme 1736 and by DFG grant ME 3619/4-1.)
Alexander van der Grinten, Eugenio Angriman, Maria Predari, Henning Meyerhenke
Dept. of Computer Science, Humboldt-Universität zu Berlin, Unter den Linden 6, D-10099 Berlin. {avdgrinten, angrimae, predarim, <EMAIL_ADDRESS>
###### Abstract
The emergence of massive graph data sets requires fast mining algorithms.
Centrality measures to identify important vertices belong to the most popular
analysis methods in graph mining. A measure that is gaining attention is
forest closeness centrality; it is closely related to electrical measures
using current flow but can also handle disconnected graphs. Recently, [Jin et
al., ICDM’19] proposed an algorithm to approximate this measure
probabilistically. Their algorithm processes small inputs quickly, but does
not scale well beyond hundreds of thousands of vertices.
In this paper, we first propose a different approximation algorithm; it is up
to two orders of magnitude faster and more accurate in practice. Our method
exploits the strong connection between uniform spanning trees and forest
distances by adapting and extending recent approximation algorithms for
related single-vertex problems. This results in a nearly-linear time algorithm
with an absolute probabilistic error guarantee. In addition, we are the first
to consider the problem of finding an optimal _group_ of vertices w. r. t.
forest closeness. We prove that this latter problem is NP-hard; to approximate
it, we adapt a greedy algorithm by [Li et al., WWW’19], which is based on
(partial) matrix inversion. Moreover, our experiments show that on
disconnected graphs, group forest closeness outperforms existing centrality
measures in the context of semi-supervised vertex classification.
Keywords: Forest closeness centrality, group centrality, forest distance,
uniform spanning tree, approximation algorithm
## 1 Introduction
Massive graph data sets with millions of edges (or more) have become abundant.
Today, applications come from many different scientific and commercial fields
[24, 4]. Network analysis algorithms aim to uncover non-trivial relationships
between vertices or groups of vertices in these data. One popular concept used
in network analysis is _centrality_. Centrality measures assign to each vertex
(or edge) a score based on its structural importance; this allows one to rank the
vertices and to identify the important ones [7, 33]. Measures that capture not
only local graph properties are often more meaningful, yet relatively
expensive to compute [32]. Also, different applications may require different
centrality measures; none is universal.
Algebraic measures such as random-walk betweenness, electrical closeness (see
Refs. in [1, 32]), and _forest closeness centrality_ [18] are gaining
increasing attention. Forest closeness is based on forest distance, which was
introduced by Chebotarev and Shamis [13] to account not only for shortest
paths: all paths are taken into account, but shorter ones are more
important. This notion of distance/proximity has many applications in
graph/data mining and beyond [13]. Moreover, it applies to disconnected graphs
as well. In sociology, forest distances have been shown to better capture more
than one sensitive relationship index, such as social proximity and group cohesion
[12]. Consequently, forest closeness centrality has two main advantages over
many other centrality measures [18]: (i) by taking not only shortest paths
into account, it has a high discriminative power and (ii) unlike related
algebraic measures such as the above, it can handle disconnected graphs out of
the box.
Recently, Jin et al. [18] provided an approximation algorithm for forest
closeness centrality with nearly-linear time complexity. Their algorithm uses
the Johnson-Lindenstrauss transform (JLT) and fast linear solvers; it can
handle much larger inputs than what was doable before, but is still time-
consuming. For example, graphs with $\approx$1M vertices and $\approx$2-3M
edges require more than $2.3$ or $4.7$ _hours_ for a reasonably accurate
ranking in their study [18]. Obviously, this hardly scales to massive graphs
with $>50$M edges; corresponding applications would benefit significantly from
faster approximation methods.
To this end, we devise new approximation algorithms for two problems: first,
for the individual forest closeness centrality value of each node – by
adapting uniform spanning tree techniques from recent related work on
electrical closeness centrality [1, 5]. Next, we consider _group_
forest closeness centrality, where one seeks a set of vertices that is jointly
central. To the best of our knowledge, we are the first to address the group
case for this centrality measure. We prove that group forest closeness is
$\mathcal{NP}$-hard and adapt the greedy algorithm by Li et al. [22] to this
problem. Our experiments on common benchmark graphs show that our algorithm
for ranking individual vertices is always substantially faster than Jin et
al.’s [18] – for sufficiently large networks by one (better accuracy) to two
(similar accuracy) orders of magnitude in a sequential setting. Our new
algorithm can now rank all vertices in networks of up to $334$M edges with
reasonable accuracy in less than 20 minutes if executed in an MPI-parallel
setting. Also, experiments on semi-supervised vertex classification
demonstrate that our new group forest closeness measure improves upon existing
measures in the case of disconnected graphs.
## 2 Definitions and Notation
As input we consider finite and simple undirected graphs $G=(V,E,\mathbf{w})$
with $n$ vertices, $m$ edges, and edge weights $\mathbf{w}\in\mathbb{R}_{\geq
0}^{m}$. By $\mathbf{L}$ we denote the Laplacian matrix of $G$, defined as
$\mathbf{L}=\mathbf{diag}(\deg_{G}(1),\ldots,\deg_{G}(n))-\mathbf{A}_{G}$,
where $\mathbf{A}_{G}$ denotes the (weighted) adjacency matrix of $G$ and
$\deg_{G}(v)$ the (weighted) degree of vertex $v$.
#### Closeness centrality.
Let $d(u,v)$ denote the graph distance in $G$. The _farness_ of a vertex $u$
is defined as $f^{d}(u):=\sum_{v\neq u}d(u,v)$, i. e., up to a scaling factor
of $\frac{1}{n}$, the farness of $u$ quantifies the average distance of $u$ to
all other vertices. Given this definition, the _closeness centrality_ of $u$
is defined as $C^{d}(u):=\frac{n}{f^{d}(u)}$. Closeness is a widely used
centrality measure; the higher the numerical value of $C^{d}(u)$, the more
central $u$ is within the graph. It is often criticized for mapping the vertex
scores into a rather narrow interval [24].
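As a minimal illustration of these definitions, the following Python sketch computes $C^{d}(u)$ for an unweighted, connected graph by one BFS per vertex; the adjacency-list representation and toy graph are our own choices, not from the paper.

```python
from collections import deque

def closeness(adj, u):
    """C(u) = n / sum_v d(u, v) for an unweighted, connected graph
    given as an adjacency list (dict: vertex -> list of neighbors)."""
    dist = {u: 0}
    queue = deque([u])
    while queue:                      # BFS computes all graph distances from u
        x = queue.popleft()
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                queue.append(y)
    farness = sum(dist.values())      # f(u) = sum of distances to all others
    return len(adj) / farness

# Path graph 0-1-2-3: the inner vertices are more central.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print([round(closeness(adj, v), 3) for v in adj])  # vertices 1 and 2 score highest
```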
#### Forest Distance / Closeness.
Forest distance generalizes the common graph distance and takes not only
shortest paths into account [13]. It is expressed in terms of the (parametric)
forest matrix of a graph $G$ defined as
$\mathbf{\Omega}:=\mathbf{\Omega}_{\alpha}:=(\alpha\mathbf{L}+\mathbf{I})^{-1}$,
where $\mathbf{I}$ is the identity matrix and $\alpha>0$ controls the
importance of short vs long paths between vertices (some papers prefer the
expression $(\mathbf{L}+\alpha\mathbf{I})^{-1}$, which is equivalent to
$\mathbf{\Omega}_{\alpha}$ up to scaling; non-parametric variants of forest
closeness fix $\alpha$ to $1$ [11]):
###### Definition 1 (Forest distance [13])
The forest distance $\mathbf{\rho}(u,v)$ for a vertex pair $(u,v)$ is defined
as:
(2.1) $\mathbf{\rho}(u,v):=\mathbf{\rho}_{\alpha}(u,v):=(\mathbf{e}_{u}-\mathbf{e}_{v})^{T}\mathbf{\Omega}_{\alpha}(\mathbf{e}_{u}-\mathbf{e}_{v})=\mathbf{\Omega}_{\alpha}[u,u]+\mathbf{\Omega}_{\alpha}[v,v]-2\mathbf{\Omega}_{\alpha}[u,v].$
Chebotarev and Shamis [13] show that forest distance is a metric and list
other desirable properties. The name _forest_ distance stems from the fact
that an entry $\mathbf{\mathbf{\Omega}}[u,v]$ equals the fraction of spanning
rooted forests in $G$ in which $u$ and $v$ belong to the same tree, see [18].
Forest distance closeness centrality, or forest closeness for short, then uses
forest distances instead of the usual graph distance in the sum over all other
vertices:
###### Definition 2 (Forest closeness [13])
The _forest farness_ $\mathbf{\rho}(u)$ of a vertex $u$ is defined as
$\mathbf{\rho}(u):=\sum_{v\in V\setminus\\{u\\}}\mathbf{\rho}(u,v)$. Likewise,
the _forest distance closeness centrality_ of $u$ is defined as:
$\mathbf{f_{\alpha}}(u):=\frac{n}{\mathbf{\rho}(u)}$.
To simplify notation and when clear from the context, we often omit $\alpha$
in the following.
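The following small NumPy sketch makes Definitions 1 and 2 concrete: it builds the forest matrix by explicit inversion (fine at toy scale, with $\alpha=1$), evaluates Eq. (2.1) for all pairs, and shows that the quantities remain well-defined even with an isolated vertex. The toy graph is our own choice.

```python
import numpy as np

A = np.array([[0, 1, 1, 0],       # triangle on vertices 0, 1, 2 ...
              [1, 0, 1, 0],
              [1, 1, 0, 0],
              [0, 0, 0, 0]])      # ... plus isolated vertex 3: still well-defined
L = np.diag(A.sum(axis=1)) - A
alpha = 1.0
Omega = np.linalg.inv(alpha * L + np.eye(4))    # forest matrix

def forest_distance(u, v):
    e = np.zeros(4); e[u], e[v] = 1.0, -1.0
    return e @ Omega @ e                        # Eq. (2.1)

rho = [sum(forest_distance(u, v) for v in range(4) if v != u) for u in range(4)]
print("forest farness  :", np.round(rho, 3))
print("forest closeness:", np.round(4 / np.array(rho), 3))
```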
#### Effective Resistance and Electrical Closeness.
As already realized by Chebotarev and Shamis [13], there is a close connection
between forest distance and effective resistance, a. k. a. resistance distance
(more details on this connection in Section 4.1). Effective resistance is a
pairwise metric on the vertex set of a graph and also plays a central role in
several centrality measures [30, 9]. The notion of effective resistance comes
from viewing $G$ as an electrical circuit in which each edge $e$ is a resistor
with resistance $1/\mathbf{w}(e)$. Following fundamental electrical laws, the
effective resistance $\mathbf{r}(u,v)$ between two vertices $u$ and $v$ (that
may or may not share an edge) is the potential difference between $u$ and $v$
when a unit of current is injected into $G$ at $u$ and extracted at $v$.
Effective resistance is also proportional to hitting times of random walks [8]
and thus has connections to Markov chains. Computing the effective resistance
$\mathbf{r}(u,v)$ of a vertex pair $(u,v)\in V\times V$ can be done by means
of the Laplacian pseudoinverse $\mathbf{L_{G}}^{\dagger}$ as
(2.2) $\mathbf{r}(u,v)=\mathbf{L_{G}}^{\dagger}[u,u]+\mathbf{L_{G}}^{\dagger}[v,v]-2\mathbf{L_{G}}^{\dagger}[u,v]$
(or by solving a Laplacian linear system). Given the definition of
$\mathbf{r}(u,v)$, one obtains the well-known definition of _electrical
closeness_ by replacing $\mathbf{\rho}(u,v)$ by $\mathbf{r}(u,v)$ in Definition
2. Electrical closeness (a. k. a. _current-flow closeness_ or _information
centrality_) has been widely studied (see e. g., [9, 22, 6, 32]), but only in
the context of connected graphs.
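For illustration, Eq. (2.2) can be evaluated directly; a minimal NumPy example on a unit-weight triangle, where each pair sees a 1-ohm edge in parallel with two edges in series, giving $r(u,v)=2/3$:

```python
import numpy as np

# Effective resistance via the Laplacian pseudoinverse, Eq. (2.2).
A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], float)   # unit-weight triangle
Lp = np.linalg.pinv(np.diag(A.sum(1)) - A)

def eff_res(u, v):
    return Lp[u, u] + Lp[v, v] - 2 * Lp[u, v]

print(eff_res(0, 1))   # 0.666... = 1 ohm in parallel with 1 + 1 ohms in series
```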
## 3 Related Work
The most relevant algorithmic work regarding forest closeness was proposed by
Jin et al. [18], who presented an $\varepsilon$-approximation algorithm for
forest distance and forest closeness for all graph nodes. The authors exploit
the Johnson-Lindenstrauss lemma [19] (and thus random projections) and rely on
fast Laplacian solvers [14] to avoid matrix inversions. The algorithm has a
running time of
$\mathcal{O}(m\varepsilon^{-2}\log^{2.5}{n}\log(1/\varepsilon)\operatorname{poly}(\log\log
n))$ and provides a $(1\pm\varepsilon)$-approximation guarantee with high
probability (assuming an exact Laplacian solver). In practice, as mentioned
above, their approach takes $>2$ hours on graphs with $\approx$1M vertices and
$\approx$2-3M edges for a reasonably accurate ranking. Our aim is a better
algorithmic solution for forest centrality by leveraging our recent results on
the approximation of the diagonal entries of $\mathbf{L_{G}}^{\dagger}$ [1].
The latter exploits the connection to effective resistances and electrical
closeness and is stated here for completeness:
###### Proposition 3.1 ([1])
Let $G=(V,E)$ be an undirected and weighted graph with diameter
$\operatorname{diam}(G)$ and volume $\operatorname{vol}(G)$. There is an
algorithm that computes with probability $1-\delta$ an approximation of
$\operatorname{diag}(\mathbf{L_{G}}^{\dagger})$ with absolute error
$\pm\varepsilon$ in expected time
$\mathcal{O}(\operatorname{vol}(G)\cdot\operatorname{ecc}^{3}(u)\cdot\varepsilon^{-2}\cdot\log(\operatorname{vol}(G)/\delta))$, where $\operatorname{ecc}(u)$ is the eccentricity of a selected node $u$.
That algorithm exploits three major insights: (i) to compute the electrical
closeness of a node $u$, one only needs
$\mathbf{\mathbf{L_{G}}^{\dagger}}[u,u]$ and the trace of
$\mathbf{L_{G}}^{\dagger}$; (ii) after obtaining the $u$-th column of
$\mathbf{L_{G}}^{\dagger}$ (by solving one Laplacian linear system) and all
effective resistances $\mathbf{r}(u,v)$ between $u$ and all $v$, the remaining
elements of $\operatorname{diag}(\mathbf{L_{G}}^{\dagger})$ can be calculated
via Eq. (2.2), (iii) effective resistances can be approximated by sampling
uniform spanning trees (USTs), e. g., with Wilson’s algorithm [34], by
exploiting Kirchhoff’s theorem. For our purposes, we can state it as follows:
the effective resistance of an edge $\{u,v\}\in E$ equals the probability that
$\{u,v\}$ is contained in a spanning tree drawn uniformly at random from all
spanning trees of $G$ (comp. [8]).
The algorithm proposed in this paper for approximating individual centrality
scores is based on the above insights, transfers them to a different graph and
provides a new analysis with an improved running time for the case at hand.
Barthelmé et al. [5] proposed an algorithm that uses techniques similar to the
ones in Ref. [1] to estimate inverse traces that arise in regularized
optimization problems. Their algorithm is based on uniform spanning forests,
also sampled with Wilson’s algorithm. Finally, for the group centrality case,
the most relevant algorithm is Li et al.’s [22]; it employs JLT and fast
Laplacian solvers to approximate group electrical closeness centrality in
nearly-linear time.
## 4 Forest Closeness of Individual Vertices
By definition, forest closeness for a vertex $u$ can be computed from all
forest distances $\mathbf{\rho}(u,v)$, $v\in V\setminus\\{u\\}$, e. g., by
matrix inversion. Yet, inversion takes cubic time in practice and is thus
impractical for large graphs. Hence, we exploit a relation between forest
distance and effective resistance to approximate the forest farness more
efficiently than existing approximation algorithms. By adapting our algorithm
for electrical closeness [1], we obtain an algorithm with a (probabilistic)
additive approximation guarantee of $\pm\varepsilon$; it runs in nearly-linear
(in $m$) expected time.
### 4.1 From Forest Farness to Electrical Farness (And Back Again).
As mentioned, we exploit a result that relates forest distances to effective
resistances. This requires the creation of an _augmented_ graph
$G_{\star}:=G_{\star,\alpha}:=(V^{\prime},E^{\prime})$ from the original graph
$G=(V,E)$. To this end, a new _universal vertex_ $u^{\star}$ is added to $G$,
such that $V^{\prime}=V\cup\{u^{\star}\}$ and
$E^{\prime}=E\cup\{\{u^{\star},v\}:v\in V\}$. In particular,
$u^{\star}$ is connected to all other vertices of $G_{\star}$ with edges of
weight one. Furthermore, the weights of all edges in $E^{\prime}$ that belong
to $E$ are multiplied by $\alpha$.
###### Proposition 4.1 (comp. Ref. [13])
For a weighted graph $G=(V,E)$ and any vertex pair $(v_{1},v_{2})\in V\times
V$, the forest distance $\mathbf{\rho}(v_{1},v_{2})$ in $G$ equals the
effective resistance $\mathbf{r}(v_{1},v_{2})$ in the augmented graph
$G_{\star}$.
The full proof of Proposition 4.1 can be found in Ref. [13]. Nevertheless, we
provide here an explanation of why the above proposition holds. Recall that
the effective resistance between any two vertices of $G$ is computed by means
of $\mathbf{L_{G}}^{\dagger}$, while the forest distances of the same pair are
computed by means of the forest matrix of $G$,
$\mathbf{\mathbf{\Omega}}=(\alpha\mathbf{L}+\mathbf{I})^{-1}$. When
calculating the effective resistance in $G_{\star}$, we use its Laplacian
matrix $\mathbf{L}_{\star}$, which consists of a block matrix corresponding to
$(\alpha\mathbf{L}+\mathbf{I})$ and an additional row and column that
corresponds to the universal vertex $u^{\star}$. It turns out that the Moore-
Penrose pseudoinverse of $\mathbf{L}_{\star}$ is the block matrix that
consists of $\mathbf{\Omega}$ with an additional row and column corresponding
to $u^{\star}$ [13]. Thus,
$\mathbf{\mathbf{\Omega}}[u^{\star},u^{\star}]+\mathbf{\mathbf{\Omega}}[v,v]-2\mathbf{\mathbf{\Omega}}[u^{\star},v]$
equals
$\mathbf{\mathbf{L}_{\star}^{\dagger}}[u^{\star},u^{\star}]+\mathbf{\mathbf{L}_{\star}^{\dagger}}[v,v]-2\mathbf{\mathbf{L}_{\star}^{\dagger}}[u^{\star},v]$,
which corresponds to the pairwise effective resistance
$\mathbf{r}(u^{\star},v)$ in $G_{\star}$.
###### Corollary 4.1
Forest closeness in graph $G$ equals electrical closeness in the augmented
graph $G_{\star}$.
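A quick numerical sanity check of Proposition 4.1 (on a toy path graph with $\alpha=0.5$, both of our choosing): build $G_{\star}$, compute the forest matrix of $G$ and the Laplacian pseudoinverse of $G_{\star}$, and compare the two pairwise quantities.

```python
import numpy as np

n, alpha = 4, 0.5
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], float)            # path graph 0-1-2-3
L = np.diag(A.sum(1)) - A
Omega = np.linalg.inv(alpha * L + np.eye(n))   # forest matrix of G

# Build G_star: alpha-scaled original edges plus a universal vertex u* = n
# connected to every vertex with weight-1 edges.
A_star = np.zeros((n + 1, n + 1))
A_star[:n, :n] = alpha * A
A_star[:n, n] = A_star[n, :n] = 1.0
L_star = np.diag(A_star.sum(1)) - A_star
Lp = np.linalg.pinv(L_star)                    # Laplacian pseudoinverse of G_star

u, v = 0, 3
rho = Omega[u, u] + Omega[v, v] - 2 * Omega[u, v]   # forest distance in G
r = Lp[u, u] + Lp[v, v] - 2 * Lp[u, v]              # effective resistance in G_star
print(rho, r)   # the two values agree up to floating-point error
```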
### 4.2 Forest Farness Approximation Algorithm.
As mentioned, our new algorithm for forest closeness exploits previous
algorithmic results for approximating
$\operatorname{diag}(\mathbf{L_{G}}^{\dagger})$ and electrical closeness. To
do so, we rewrite forest farness $\mathbf{\rho}(v)$ following Ref. [23]:
(4.3) $\mathbf{\rho}(v)=n\cdot\mathbf{\Omega}[v,v]+\operatorname{tr}(\mathbf{\Omega})-2\sum_{w\in V}\mathbf{\Omega}[v,w]=n\cdot\mathbf{\Omega}[v,v]+\operatorname{tr}(\mathbf{\Omega})-2,$
where the last equality holds since $\mathbf{\Omega}$ is doubly stochastic
($\sum_{w\in V}\mathbf{\Omega}[v,w]=1$, i. e.,
$\mathbf{\Omega}[v,v]=1-\sum_{w\neq v}\mathbf{\Omega}[v,w]$) [23]. From Eq. (4.3) it is clear that we
only require the diagonal elements of $\mathbf{\Omega}$ to compute
$\mathbf{\rho}(v)$ for any $v\in V$. We approximate the diagonal elements of
$\mathbf{\Omega}$ with Algorithm 1, whose main idea is to sample uniform
spanning trees (USTs) to approximate
$\operatorname{diag}(\mathbf{L}_{\star}^{\dagger})$:
1.
We build the augmented graph $G_{\star}$ (Line 4) and let the universal vertex
$u^{\star}$ of $G_{\star}=(V^{\prime},E^{\prime})$ be the so-called _pivot
vertex_ (Line 5) – due to its optimal eccentricity of $1$. Later, we compute
the column of $\mathbf{\Omega}$ that corresponds to $u^{\star}$,
$\mathbf{\mathbf{\Omega}}[:,u^{\star}]$, by solving the Laplacian linear
system
$\mathbf{L}_{\star}\mathbf{x}=\mathbf{e}_{u^{\star}}-\frac{1}{n+1}\cdot\mathbf{1}$
(Line 11). The solver’s accuracy is controlled by $\eta$, which is set in Line
7 ($\kappa$ is used to trade the accuracy of the solver with the accuracy of
the following sampling step).
2.
We sample $\tau$ USTs in $G_{\star}$ with Wilson’s algorithm [34] (also see
Algorithm 3 in Section B), where the sample size $\tau$ is yet to be
determined. With this sample we approximate the effective resistance
$\mathbf{r}_{G_{\star}}(u^{\star},v)$ for all $v\in V$ (Lines 9-10). More
precisely, if an edge $\\{u^{\star},v\\}$ appears in the sampled tree, we
increase $R[v]$ by $1$ (unweighted case) or by the weight of the current tree
(weighted case) – and later “return” $R[v]/\tau$ (unweighted case) or the
relative total weight of all sampled trees (weighted case) that contain edge
$\\{u^{\star},v\\}$ in Line 13.
3.
We compute the remaining $\mathbf{\mathbf{\Omega}}[v,v]$ for $v\in V$ in Lines
12 and 13 following Eqs. (2.1) and (2.2):
$\mathbf{\Omega}[v,v]=\mathbf{\rho}(u^{\star},v)-\mathbf{\Omega}[u^{\star},u^{\star}]+2\mathbf{\Omega}[v,u^{\star}]=\mathbf{r}_{G_{\star}}(u^{\star},v)-\mathbf{\Omega}[u^{\star},u^{\star}]+2\mathbf{\Omega}[v,u^{\star}],$
where $\mathbf{r}_{G_{\star}}(u^{\star},v)$ is then approximated by
$R[v]/\tau$ (the weighted case is handled as described above).
1: function ApproxDiagForestMatrix($G$, $\alpha$, $\varepsilon$, $\delta$)
2:  Input: undirected graph $G=(V,E)$, control parameter $\alpha$, error bound $0<\varepsilon<1$, probability $0<\delta<1$
3:  Output: $\operatorname{diag}(\widetilde{\mathbf{\Omega}})$, i. e., an $(\varepsilon,\delta)$-approximation of $\operatorname{diag}(\mathbf{\Omega})$
4:  Create augmented graph $G_{\star}=(V^{\prime},E^{\prime})$ as described in Proposition 4.1; compute $\operatorname{vol}(G)$ and $c$ $\triangleright$ $\mathcal{O}(m+n)$
5:  $u^{\star}\leftarrow$ universal vertex of $G_{\star}$; $R[v]\leftarrow 0~\forall v\in V$
6:  Pick constant $\kappa\in(0,1)$ arbitrarily
7:  $\eta\leftarrow\frac{\kappa\varepsilon}{6\sqrt{\alpha(c+2)\operatorname{vol}(G)}}$
8:  $\tau\leftarrow\lceil\log(2m/\delta)/2(1-\kappa)^{2}\varepsilon^{2}\rceil$
9:  for $i\leftarrow 1$ to $\tau$ do $\triangleright$ $\tau$ samples
10:   $R\leftarrow R~+$ SamplingUST($G_{\star}$, $u^{\star}$) $\triangleright$ $\mathcal{O}(\alpha\operatorname{vol}(G)+n)$ per sample
11:  Solve $\mathbf{L}_{\star}\mathbf{x}=\mathbf{e}_{u^{\star}}-\frac{1}{n+1}\cdot\mathbf{1}$ for $\mathbf{x}$ $\triangleright$ accuracy: $\eta$, $\tilde{\mathcal{O}}(m\log^{1/2}n\log(1/\eta))$
12:  for $v\in V$ do $\triangleright$ all iterations: $\mathcal{O}(n)$
13:   $\widetilde{\mathbf{\Omega}}[v,v]\leftarrow R[v]/\tau-\mathbf{x}(u^{\star})+2\mathbf{x}(v)$ $\triangleright$ unweighted case; for weighted see text
14:  return $\operatorname{diag}(\widetilde{\mathbf{\Omega}})$
Algorithm 1 Approximation algorithm for $\operatorname{diag}(\mathbf{\Omega})$
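For intuition, here is a simplified Python sketch of the sampling core of Algorithm 1. It is not a faithful implementation: it omits the linear-solve correction of Lines 11-13 and instead exploits that, for the unit-weight edges at $u^{\star}$, the relative edge frequency $R[v]/\tau$ is already an unbiased estimate of $\mathbf{r}_{G_{\star}}(u^{\star},v)$ (Proposition 4.2), which coincides with $\mathbf{\Omega}[v,v]$ because grounding $\mathbf{L}_{\star}$ at $u^{\star}$ yields exactly $\alpha\mathbf{L}+\mathbf{I}$. The toy graph and sample size are arbitrary.

```python
import numpy as np

def wilson_ust_parents(A, root, rng):
    # Wilson's algorithm: loop-erased random walks, encoded as parent
    # pointers that get overwritten until the walk first hits the tree.
    n = A.shape[0]
    in_tree = np.zeros(n, dtype=bool); in_tree[root] = True
    parent = np.full(n, -1)
    for start in range(n):
        u = start
        while not in_tree[u]:
            nbrs = np.flatnonzero(A[u] > 0)
            parent[u] = rng.choice(nbrs, p=A[u, nbrs] / A[u, nbrs].sum())
            u = parent[u]
        u = start
        while not in_tree[u]:       # retrace: the loop-erased path joins the tree
            in_tree[u] = True
            u = parent[u]
    return parent

def approx_diag_forest_matrix(A, alpha=1.0, tau=5000, seed=1):
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    A_star = np.zeros((n + 1, n + 1))       # augmented graph G_star
    A_star[:n, :n] = alpha * A
    A_star[:n, n] = A_star[n, :n] = 1.0     # unit edges to universal vertex u* = n
    R = np.zeros(n)
    for _ in range(tau):
        parent = wilson_ust_parents(A_star, n, rng)
        R[parent[:n] == n] += 1             # tree contains edge {u*, v}
    return R / tau                          # estimates r(u*, v) = Omega[v, v]

A = np.array([[0, 1, 1, 0], [1, 0, 1, 1], [1, 1, 0, 1], [0, 1, 1, 0]], float)
est = approx_diag_forest_matrix(A)
exact = np.diag(np.linalg.inv(np.diag(A.sum(1)) - A + np.eye(4)))
print(np.round(est, 3))     # close to the exact diagonal below
print(np.round(exact, 3))
```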
By using $G_{\star}$ and thus a universal vertex $u^{\star}$ as pivot, there
are several noteworthy changes compared to the algorithm in Ref. [1]. First,
the graph $G_{\star}$ has constant diameter and the vertex $u^{\star}$ has
constant eccentricity $1$. This will be important for our refined running time
analysis. Second, the approximation of the effective resistances can be
simplified: while Ref. [1] requires an aggregation along shortest paths, we
notice that here $u^{\star}$ and all other vertices are connected by paths of
one edge only; thus, the relative frequency of an edge $\\{u^{\star},v\\}$ in
the UST sample for $G_{\star}$ is sufficient here for our approximation:
###### Proposition 4.2
Let $u^{\star}$ be the universal vertex in $G_{\star}$. Then, for any edge
$\{u^{\star},v\}\in E^{\prime}$, its relative frequency (or weight) in
the UST sample is an unbiased estimator of
$\mathbf{r}_{G_{\star}}(u^{\star},v)$.
The proof of Proposition 4.2 relies on Kirchhoff’s theorem (see [8, Ch. II])
and can be found in Section A.
As we will see in our main algorithmic result (Theorem 4.1), Algorithm 1 is
not only an unbiased estimator, but even provides a probabilistic
approximation guarantee. To bound its running time, we analyze Wilson’s
algorithm for generating a UST first.
###### Proposition 4.3
For an undirected graph $G$ with constant diameter, each call to Wilson’s
algorithm on $G_{\star}$ (in Line 10) takes
$\mathcal{O}(\alpha\operatorname{vol}(G)+n)$ expected time, where
$\operatorname{vol}(G)=\sum_{v\in V}\deg(v)$ is the (possibly weighted) volume
of $G$.
The proof of Proposition 4.3 can be found in Section A. Note that in the case
of unweighted graphs with $\alpha=1$ and $m=\Omega(n)$ (which is not uncommon
in our context, see for example Ref. [18]), we obtain a time complexity of
$\mathcal{O}(m)$ (the volume is $2m$ by the handshake lemma). Taking all the
above into account, we arrive at our main algorithmic result on running time
and approximation bounds of Algorithm 1. The result and its proof are
adaptations of Theorem 3 in Ref. [1]. When considering forest (as opposed to
electrical) closeness centrality, we exploit the constant diameter of
$G_{\star}$ and improve the time by a factor of $(\operatorname{ecc}(u))^{3}$,
where $u$ is a selected pivot node. This expression is
$\mathcal{O}(\log^{3}n)$ for the small-world graphs in the focus of Ref. [1]
(but can be larger for general graphs). In the following,
$\tilde{\mathcal{O}}(\cdot)$ hides polyloglog factors from the linear solver
[14].
###### Theorem 4.1
Let $\frac{n}{\alpha\cdot\operatorname{vol}(G)}$ be bounded from above by a
constant (this condition ensures that the algorithm is not affected by unduly
heavy additional edges to $u^{\star}$; if it is met, the graph edges still play
a reasonable role in the distances and in the UST computations) and let
$0<\varepsilon,\delta<1$. Then, with probability
$1-\delta$, Algorithm 1 computes an approximation of
$\operatorname{diag}(\mathbf{\Omega})$ with absolute error $\pm\varepsilon$ in
(expected) time
$\tilde{\mathcal{O}}((m\log^{1/2}n\log(\sqrt{\alpha\operatorname{vol}(G)}/\varepsilon)))+\mathcal{O}(\log(n/\delta)/\varepsilon^{2}\cdot\alpha\operatorname{vol}(G))$.
Theorem 4.1 is proved in Section A. Let us simplify the result for a common
case:
###### Corollary 4.2
If $G$ is unweighted, $\alpha$ a constant and $\delta:=1/n$ to get high
probability, the (expected) running time of Algorithm 1 becomes
$\tilde{\mathcal{O}}(m(\log^{1/2}n\log(n/\varepsilon)+\varepsilon^{-2}\log
n))$. Assuming $\varepsilon$ is small enough so that $\log n\leq
1/\varepsilon$, we can further simplify this to
$\tilde{\mathcal{O}}(m\varepsilon^{-2}\log^{3/2}n)$.
This is nearly-linear in $m$, which is also true for the JLT-based
approximation (with high probability) of Jin et al. [18]. They state a running
time of $\tilde{\mathcal{O}}(m\varepsilon^{-2}\log^{5/2}n\log(1/\varepsilon))$
for unweighted $G$ and fixed $\alpha=1$. While we save at least a factor of
$\log n$, they achieve a relative approximation guarantee, which is difficult
to compare to ours.
## 5 Group Forest Closeness Centrality
Since their introduction by Everett and Borgatti [15], group centrality
measures have been used in various applications (see [32]). These measures
indicate the importance of whole vertex sets – together as a group. They
usually favor sets that “cover” the graph well. Intuitively, a group variant
of forest closeness should reward vertex sets that are “forest-close” to the
remainder of the graph. More formally, to extend the concept of forest
closeness to groups of vertices, it is enough to define the forest farness
$\mathbf{\rho}(S)$ of a set $S$ of vertices; the forest closeness of $S$ is
then given by $\mathbf{f_{\alpha}}(S):=\frac{1}{\mathbf{\rho}(S)}$. Recall
(from Proposition 4.1) that the forest farness of a single vertex $v$ of $G$
is identical to the electrical farness of $v$ in the augmented graph
$G_{\star}$. We use this fact to generalize the forest farness of a set $S$ of
vertices of $G$. In particular, we define
$\mathbf{\rho}(S):=\operatorname{tr}((\left(\mathbf{L}_{\star}\right)_{-S})^{-1})$,
where $\mathbf{L}_{\star}$ is the Laplacian matrix of the augmented graph
$G_{\star}$ and by $\left(\mathbf{L}_{\star}\right)_{-S}$ we denote the matrix
that is obtained from $\mathbf{L}_{\star}$ by removing all rows and columns
with indices in $S$. This definition is based on a corresponding definition of
electrical farness by Li et al. [22]. For $|S|=1$, it coincides with the
definition of electrical closeness from Section 2 [17]; thus, our definition
of group forest closeness is compatible with the definition of the forest
closeness of individual vertices (i. e., Definition 2).
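A direct NumPy evaluation of this definition on a toy path graph (our own example, $\alpha=1$): removing the rows and columns in $S$ from $\mathbf{L}_{\star}$ leaves a nonsingular matrix, and a smaller $\mathbf{\rho}(S)$ indicates a more central group.

```python
import numpy as np

# Group forest farness rho(S) = tr(((L_star)_{-S})^{-1}) on a path of 5 vertices.
A = np.array([[0, 1, 0, 0, 0],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], float)
n = A.shape[0]
A_star = np.zeros((n + 1, n + 1))           # augmented graph, alpha = 1
A_star[:n, :n] = A
A_star[:n, n] = A_star[n, :n] = 1.0
L_star = np.diag(A_star.sum(1)) - A_star

def rho(S):
    keep = [i for i in range(n + 1) if i not in S]   # u* (index n) is never removed
    return np.trace(np.linalg.inv(L_star[np.ix_(keep, keep)]))

print(rho({2}), rho({1, 3}))   # lower farness = more central group
```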
Given our definition, it is natural to ask for a set $S$ of $k$ vertices that
maximizes $\mathbf{f_{\alpha}}(S)$ over all possible size-$k$ sets $S$;
indeed, this optimization problem has also been considered for many other
group centrality measures [32]. The following theorem settles the complexity
of the problem:
###### Theorem 5.1
Maximizing group forest closeness subject to a cardinality constraint is
$\mathcal{NP}$-hard.
Like Li et al.'s [22] hardness proof for group electrical closeness, our
reduction is from the vertex cover problem on 3-regular graphs. Let $G=(V,E)$
be a 3-regular graph with $n$ vertices. Our proof shows that there is a vertex
cover of size $k$ in $G$ if and only if the maximum group forest closeness
over all sets of size $k$ in $G$ exceeds a certain threshold. We make use of
the following property that is adapted from a similar result by Li et al.:
###### Lemma 5.1
Let $G$ be a connected and unweighted 3-regular graph and let $S\subset V$,
$|S|=k\geq 1$. Then
$\operatorname{tr}((\left(\mathbf{L}\right)_{-S}+\mathbf{I})^{-1})\geq(n-k)/4$
and equality holds if and only if $S$ is a vertex cover of $G$.
Our proof of Theorem 5.1 exploits the fact that we can decompose
$\left(\mathbf{L}_{\star}\right)_{-S}$ into a block that corresponds to the
universal vertex of $G_{\star}$ and into a block that equals
$\left(\mathbf{L}\right)_{-S}+\mathbf{I}$. This allows us to apply the block-
wise inversion and the Sherman-Morrison formula to partially invert
$\left(\mathbf{L}_{\star}\right)_{-S}$. In turn, we can apply Lemma 5.1 to
bound $\operatorname{tr}((\left(\mathbf{L}_{\star}\right)_{-S})^{-1})$. The
proof of Lemma 5.1 and the full proof of Theorem 5.1 can be found in Section
A.
Since an efficient algorithm for maximizing group forest closeness is unlikely
to exist (due to Theorem 5.1), it is desirable to construct an inexact
algorithm for this problem. The next two results enable the construction of
such an algorithm; they follow immediately from respective results on group
electrical closeness on $G_{\star}$ (see Ref. [22, Theorem 5.4 and Theorem
6.1]).
###### Lemma 5.2
$\mathbf{\rho}(.)$ is a non-increasing and supermodular set function.
For the following corollary, we consider a greedy algorithm that constructs a
set $S$ of size $k$. This set is initially empty; while $|S|$ is smaller than
$k$, the algorithm adds the vertex $v$ to $S$ that maximizes the marginal
gain: $v=\operatorname{argmax}_{x\in V\setminus S}\left[\mathbf{\rho}(S)-\mathbf{\rho}(S\cup\{x\})\right]$.
###### Corollary 5.1
The greedy algorithm computes a set $S$ such that:
$\mathbf{\rho}(\\{v_{0}\\})-\mathbf{\rho}(S)\geq\left(1-\frac{k}{e(k-1)}\right)\left(\mathbf{\rho}(v_{0})-\mathbf{\rho}(\widetilde{S})\right),$
where $v_{0}$ is the vertex with highest (individual) forest closeness and
$\widetilde{S}$ is the set of size $k$ that maximizes group forest closeness.
1: Input: undirected graph $G=(V,E)$, group size $k$
2: Output: group $S\subseteq V$ of $k$ vertices
3: $\mathbf{P}\leftarrow\textsc{pseudoInverse}(\mathbf{L}_{\star})$
4: $v\leftarrow\operatorname{argmin}_{v\in V}n\cdot\mathbf{L}_{\star}^{\dagger}[v,v]+\operatorname{tr}(\mathbf{P})$
5: $\mathbf{M}\leftarrow\textsc{inverse}(\left(\mathbf{L}_{\star}\right)_{-\{v\}})$ $\triangleright$ invariant: $\mathbf{M}=\left(\mathbf{L}_{\star}\right)_{-S}^{-1}$ throughout the algorithm
6: $S\leftarrow\{v\}$
7: while $|S|<k$ do
8:  $v\leftarrow\operatorname{argmax}_{v\in V\setminus S}\frac{(\mathbf{M}e_{v})^{T}(\mathbf{M}e_{v})}{e_{v}^{T}\mathbf{M}e_{v}}$
9:  $\mathbf{M}\leftarrow\left(\mathbf{M}-\frac{\mathbf{M}e_{v}e_{v}^{T}\mathbf{M}}{e_{v}^{T}\mathbf{M}e_{v}}\right)_{-\{v\}}$
10:  $S\leftarrow S\cup\{v\}$
Algorithm 2 Greedy algorithm for group forest closeness maximization, adapted from Li et al. [22]
Note that a naïve implementation of the greedy algorithm would invert
$\left(\mathbf{L}_{\star}\right)_{-(S\cup\\{v\\})}$ for each $v$, i. e., it
would require $k\cdot n$ matrix inversions in total. By using the ideas of Li
et al. for group electrical closeness [22] (depicted in Algorithm 2 for the
case of group forest closeness), these inversions can be avoided, such that
only a single matrix inversion is required in total. This makes use of the
fact that whenever a vertex $u$ is added to the set $S$, we can decompose
$\left(\mathbf{L}_{\star}\right)_{-S}$ into a block that consists of
$\left(\mathbf{L}_{\star}\right)_{-(S\cup\\{u\\})}$ and a single row/column
that corresponds to $u$. It is now possible to apply block-wise matrix
inversion to this decomposition to avoid the need to recompute
$(\left(\mathbf{L}_{\star}\right)_{-(S\cup\\{u\\})})^{-1}$ from scratch (in
line 9 of the pseudocode). We remark that the greedy algorithm can be further
accelerated by utilizing the Johnson-Lindenstrauss lemma [22]; however, since
this necessarily results in lower accuracy, we do not consider this extension
in our experiments.
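The following Python sketch condenses this strategy; it is a dense stand-in for Algorithm 2, with a full pseudoinversion up front in place of a Laplacian solver and the blockwise/Sherman-Morrison trick described above as the per-step update. The graph representation and parameter defaults are ours.

```python
import numpy as np

def greedy_group_forest(A, k, alpha=1.0):
    n = A.shape[0]
    A_star = np.zeros((n + 1, n + 1))            # augmented graph G_star
    A_star[:n, :n] = alpha * A
    A_star[:n, n] = A_star[n, :n] = 1.0
    L_star = np.diag(A_star.sum(1)) - A_star
    P = np.linalg.pinv(L_star)
    # first pick: vertex with smallest forest farness (Line 4; tr(P) is constant)
    v = int(np.argmin(np.diag(P)[:n]))
    idx = [i for i in range(n + 1) if i != v]    # rows/cols still present
    M = np.linalg.inv(L_star[np.ix_(idx, idx)])  # invariant: M = ((L_star)_{-S})^{-1}
    S = [v]
    while len(S) < k:
        # marginal trace reduction ||M e_v||^2 / M[v, v] of every candidate (Line 8)
        gains = np.einsum('ij,ij->j', M, M) / np.diag(M)
        gains[idx.index(n)] = -np.inf            # never remove u* itself
        j = int(np.argmax(gains))
        # Sherman-Morrison style update, then drop row/column j (Line 9)
        M = M - np.outer(M[:, j], M[j, :]) / M[j, j]
        M = np.delete(np.delete(M, j, axis=0), j, axis=1)
        S.append(idx.pop(j))
    return S

A = np.array([[0, 1, 0, 0, 0], [1, 0, 1, 0, 0], [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1], [0, 0, 0, 1, 0]], float)   # path graph
print(greedy_group_forest(A, k=2))
```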
Furthermore, we note that by applying a standard reduction by Gremban [16], it
would also be possible to apply our UST-based algorithm (i. e., Algorithm 1)
to the case of group forest closeness. However, if the aforementioned block-
wise matrix inversion is not applied, this would require us to sample USTs for
each of the $k\cdot n$ vertex evaluations. On the other hand, in order to
apply block-wise inversion, the entire inverse of
$\left(\mathbf{L}_{\star}\right)_{-S}$ must be available (and not only the
diagonal). Computing this inverse via UST sampling is prohibitively expensive
so far. Hence, in our experiments, we prefer the algorithmic approach by Li et
al. (adapted for group forest closeness).
## 6 Experiments
We study the empirical performance of our algorithms on real-world graphs and
their impact on graph mining tasks.
#### Settings.
Unless stated otherwise, all algorithms are implemented in C++, using the
NetworKit [29] graph APIs. All experiments are conducted on Intel Xeon Gold
6126 machines with $2\times 12$ cores and 192 GiB of RAM each. Unless stated
otherwise, all experiments run on a single core. To ensure reproducibility,
all experiments are managed by the SimExPal [2] software. For the evaluation,
we use a large collection of undirected graphs of different sizes, coming from
a diverse set of domains. All graphs have been downloaded from the public
repositories KONECT [20], OpenStreetMap (https://www.openstreetmap.org), and
NetworkRepository [26]. We denote our proposed algorithm for forest closeness
by UST and set $\alpha=1$ (as done in Ref. [18]) in all experiments.
#### Competitors.
For the forest closeness of individual vertices, the main competitor is the
JLT-based algorithm by Jin et al. [18], which uses the Laplacian solver from
Ref. [21]. We compare against two implementations of this algorithm: one
provided by the authors, written in Julia v1.0.2, and our own implementation
based on Eigen’s CG algorithm (http://eigen.tuxfamily.org). We denote them
by JLT-Julia and JLT-CPP, respectively. As in Ref. [18], we compute the
number of linear systems for JLT-Julia and JLT-CPP as $\left\lceil\frac{\log
n}{\varepsilon^{2}}\right\rceil$ (which gives an $\varepsilon\cdot c$
approximation for a fixed constant $c>1$).
### 6.1 Performance of UST.
We now measure the performance of UST compared to the state-of-the-art
competitors. Each method is executed with multiple settings of its respective
quality parameter.
Figure 1: $\max_{v}|\mathbf{\Omega}[v,v]-\widetilde{\mathbf{\Omega}}[v,v]|$
over the instances in Table 1.
#### Accuracy and Running Time.
We report the maximum absolute error of the estimated diagonal values (i. e.,
$\max_{v}|\mathbf{\Omega}[v,v]-\widetilde{\mathbf{\Omega}}[v,v]|$) over all
vertices and instances from Table 1. (Note that the top vertices in the
forest closeness ranking are the ones with the _lowest_ $\mathbf{\Omega}[v,v]$,
see Eq. (4.3); hence, we also evaluate the ranking accuracy in a following
experiment.) As ground truth, we take $\mathbf{\Omega}[v,v]$ values that are
computed using Eigen’s CG solver with a tolerance of $10^{-9}$; exact
inversion of $(\mathbf{L}+\mathbf{I})$ would be infeasible for many of the
input graphs. A preliminary comparison against the values of
$\mathbf{\Omega}[v,v]$ computed with the NumPy pinv function demonstrated that
CG provides a sufficiently accurate ground truth.
Figure 1 shows that UST achieves the best results in terms of quality and
running time for both complex and road networks. More precisely, for complex
networks and $\varepsilon=0.4$, UST yields a maximum absolute error of
0.14, which is less than the most accurate result of both
competitors (0.15, achieved by JLT-Julia with $\varepsilon=0.1$),
while being 397.5$\times$ faster. Also, the running time of UST
does not increase substantially for lower values of $\varepsilon$, and its
quality does not deteriorate quickly for higher values of $\varepsilon$. A
similar pattern is observed for road networks as well.
Complex networks:

| Graph | $|V|$ | $|E|$ | UST time (s) | JLT time (s) | UST KT | JLT KT |
|---|---|---|---|---|---|---|
| loc-brightkite_edges | 58K | 214K | 46.4 | 186.4 | 0.98 | 0.95 |
| douban | 154K | 327K | 80.8 | 370.9 | 0.71 | 0.61 |
| soc-Epinions1 | 75K | 405K | 55.5 | 339.6 | 0.95 | 0.90 |
| slashdot-zoo | 79K | 467K | 59.9 | 412.3 | 0.95 | 0.92 |
| petster-cat-household | 105K | 494K | 61.8 | 372.1 | 0.98 | 0.92 |
| wikipedia_link_fy | 65K | 921K | 58.2 | 602.9 | 0.98 | 0.96 |
| loc-gowalla_edges | 196K | 950K | 230.9 | 1,215.5 | 0.99 | 0.97 |
| wikipedia_link_an | 56K | 1.1M | 50.7 | 562.6 | 0.96 | 0.93 |
| wikipedia_link_ga | 55K | 1.2M | 44.8 | 578.6 | 0.98 | 0.97 |
| petster-dog-household | 260K | 2.1M | 359.6 | 2,472.1 | 0.98 | 0.96 |
| livemocha | 104K | 2.2M | 107.4 | 1,429.3 | 0.98 | 0.97 |

Road networks:

| Graph | $|V|$ | $|E|$ | UST time (s) | JLT time (s) | UST KT | JLT KT |
|---|---|---|---|---|---|---|
| mauritania | 102K | 150K | 98.1 | 217.6 | 0.88 | 0.77 |
| turkmenistan | 125K | 165K | 118.5 | 273.6 | 0.92 | 0.85 |
| cyprus | 151K | 189K | 149.4 | 315.8 | 0.89 | 0.80 |
| canary-islands | 169K | 208K | 185.5 | 382.0 | 0.92 | 0.84 |
| albania | 196K | 223K | 192.6 | 430.2 | 0.90 | 0.82 |
| benin | 177K | 234K | 188.1 | 406.8 | 0.92 | 0.83 |
| georgia | 262K | 319K | 322.1 | 605.3 | 0.91 | 0.83 |
| latvia | 275K | 323K | 355.2 | 665.4 | 0.91 | 0.83 |
| somalia | 291K | 409K | 420.1 | 747.5 | 0.92 | 0.84 |
| ethiopia | 443K | 607K | 825.9 | 1,209.7 | 0.91 | 0.83 |
| tunisia | 568K | 766K | 1,200.1 | 1,629.0 | 0.89 | 0.79 |
Table 1: Running time and KT ranking scores of UST and the JLT-based algorithms.
In the JLT columns we report, for each instance, the competitor with the highest
KT score. For equal KT scores (up to the second decimal place) we choose the
fastest competitor.
#### Vertex Ranking.
Moreover, we measure the accuracy in terms of vertex rankings, which is often
more relevant than individual scores [24, 25]. In Table 1 we report the
Kendall’s rank correlation coefficient (KT) of the vertex ranking w. r. t. the
ground truth along with running times for complex and road networks. For each
instance, we pick the best run, i. e., the UST and JLT columns display the run
with highest respective KT value. If the values are the same up to the second
decimal place, we pick the fastest one. UST consistently has the best vertex
ranking scores; at the same time, it is faster than the competitors. In
particular, UST is on average 7.6$\times$ faster than the JLT-based
approaches on complex networks and 1.9$\times$ faster on road
networks.
#### Parallel Scalability.
UST is well-suited for parallel implementations since each UST can be sampled
independently in parallel. Hence, we provide parallel implementations of UST
based on OpenMP (for multi-core parallelism) and MPI (to scale to multiple
compute nodes). The OpenMP implementation on 24 cores exhibits a speedup of
8.7$\times$ on complex networks and 9.2$\times$ on road
networks – more detailed results can be found in Figure 5, Section C. The
results for MPI are depicted in Figure 3, Section C. In this setting, UST
obtains a speedup of 12.2$\times$ on complex and
11.5$\times$ on road networks on up to 16 compute nodes – for this
experiment we set $\varepsilon=0.1$ and we use the instances in Table 2,
Section C. More sophisticated load balancing techniques are likely to increase
the speedups in the MPI setting; they are left for future work. Still, the
MPI-based algorithm can rank complex networks with up to $334$M edges in less
than $20$ minutes. Road networks with $31$M edges take less than $25$ minutes.
### 6.2 Semi-Supervised Vertex Classification.
To demonstrate the relevance of group forest closeness in graph mining
applications, we apply it to semi-supervised vertex classification [31].
Given a graph $G$ with labelled vertices, the goal is to predict the labels of
all vertices of $G$ by training a classifier using a small set of labelled
vertices as training set. The choice of the vertices for the training set can
influence the accuracy of the classifier, especially when the number of
labelled vertices is small compared to $|V|$ [28, 3].
A key aspect in semi-supervised learning problems is the so-called _cluster
assumption_, i. e., vertices that are close or that belong to the same cluster
typically have the same label [35, 10]. Several models label vertices by
propagating information through the graph via diffusion [31]. We expect group
forest closeness to cover the graph more thoroughly than individual forest
closeness. Hence, we conjecture that choosing vertices with high group
centrality improves diffusion and thus the accuracy of propagation-based
models. We test this hypothesis by comparing the classification accuracy of
the label propagation model [31, 35] when the training set is chosen using
different strategies. (While this model is less powerful than state-of-the-art
predictors, our strategy to select the training set could also be applied
to more sophisticated models like graph neural networks.) The main idea of
label propagation is to start from a small number of labelled vertices; each
vertex then iteratively propagates its label to its neighbors until
convergence.
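As an illustration of the model we use (not of our selection strategies), here is a compact NumPy sketch of the normalized-Laplacian label propagation of Zhou et al. [35], iterating $F\leftarrow\alpha\,\mathbf{S}F+(1-\alpha)Y$ with $\mathbf{S}=\mathbf{D}^{-1/2}\mathbf{A}\mathbf{D}^{-1/2}$; the toy graph and seed choice are ours.

```python
import numpy as np

def label_propagation(A, seed_labels, n_classes, alpha=0.85, iters=100):
    """Normalized-Laplacian label propagation (Zhou et al.):
    F <- alpha * S @ F + (1 - alpha) * Y, S = D^{-1/2} A D^{-1/2}."""
    n = A.shape[0]
    d = np.maximum(A.sum(1), 1e-12)          # guard against isolated vertices
    S = A / np.sqrt(np.outer(d, d))
    Y = np.zeros((n, n_classes))
    for v, c in seed_labels.items():         # training set: vertex -> class
        Y[v, c] = 1.0
    F = Y.copy()
    for _ in range(iters):
        F = alpha * (S @ F) + (1 - alpha) * Y
    return F.argmax(axis=1)

# Two triangles joined by one bridge edge; one labelled vertex per triangle.
A = np.zeros((6, 6))
for u, v in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[u, v] = A[v, u] = 1.0
print(label_propagation(A, {0: 0, 5: 1}, n_classes=2))  # [0 0 0 1 1 1]
```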
In our experiments, we use the Normalized Laplacian variant of label
propagation [35]. We set the return probability hyper-parameter to $0.85$, and
we evaluate its accuracy on two well-known disconnected graph datasets: Cora
($|V|=2708$, $|E|=5278$) and Citeseer ($|V|=3264$, $|E|=4536$) [27]. Since this
variant of label propagation cannot handle graphs with isolated vertices (i. e.,
zero-degree vertices), we remove all isolated vertices from these datasets. For a
fixed size $k$ of the training set, we select its vertices either as the group
computed by our greedy algorithm for group forest closeness maximization or as
the top-$k$ vertices with highest estimated forest closeness. We include
several well-known (individual) vertex selection strategies for comparison:
random selection (averaged over 10 trials), the top-$k$ vertices with highest
degree, the top-$k$ vertices with highest betweenness centrality, and the
top-$k$ vertices with highest Personalized PageRank.
Figure 2 shows that on graphs with disconnected components and for a moderate
number of labelled vertices, selecting the training set by group forest
closeness maximization consistently yields higher accuracy than strategies
based on existing centrality measures (including top-$k$ forest closeness). As
expected, the accuracy of existing measures improves if one considers
connected graphs (Figure 6, Section C); yet, group forest closeness is nearly
as accurate as the best competitors on these graphs.
The running time of our greedy algorithm for group forest maximization is
reported in Table 3, Section C.
Figure 2: Accuracy in semi-supervised vertex classification when using
different strategies to create the training set.
## 7 Conclusions
In this paper, we proposed a new algorithm to approximate forest closeness
faster and more accurately than previously possible. We also generalized the
definition of forest closeness to group forest closeness and demonstrated that
for semi-supervised vertex classification in disconnected graphs, group forest
closeness outperforms existing approaches. In future work, we want to consider
extensions of our approaches to directed graphs. Another challenging extension
would involve generalizing an approach based on USTs to group forest closeness
to improve upon the performance of our greedy algorithm.
## References
* [1] E. Angriman, M. Predari, A. van der Grinten, and H. Meyerhenke. Approximation of the diagonal of a laplacian’s pseudoinverse for complex network analysis. In ESA, volume 173 of LIPIcs, pages 6:1–6:24. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2020.
* [2] E. Angriman, A. van der Grinten, M. von Looz, H. Meyerhenke, M. Nöllenburg, M. Predari, and C. Tzovas. Guidelines for experimental algorithmics: A case study in network analysis. Algorithms, 12(7):127, 2019.
* [3] K. Avrachenkov, P. Gonçalves, and M. Sokol. On the choice of kernel and labelled data in semi-supervised learning methods. In WAW, volume 8305 of LNCS, pages 56–67. Springer, 2013\.
* [4] A.-L. Barabási et al. Network science. Cambridge University Press, 2016.
* [5] S. Barthelmé, N. Tremblay, A. Gaudillière, L. Avena, and P.-O. Amblard. Estimating the inverse trace using random forests on graphs, 2019.
* [6] E. Bergamini, M. Wegner, D. Lukarski, and H. Meyerhenke. Estimating current-flow closeness centrality with a multigrid laplacian solver. In CSC, pages 1–12. SIAM, 2016.
* [7] P. Boldi and S. Vigna. Axioms for centrality. Internet Mathematics, 10(3-4):222–262, 2014.
* [8] B. Bollobás. Modern Graph Theory, volume 184 of Graduate Texts in Mathematics. Springer, 2002.
* [9] U. Brandes and D. Fleischer. Centrality measures based on current flow. In STACS, volume 3404 of LNCS, pages 533–544. Springer, 2005.
* [10] O. Chapelle, J. Weston, and B. Schölkopf. Cluster kernels for semi-supervised learning. In NIPS, pages 585–592. MIT Press, 2002.
* [11] P. Chebotarev and E. Shamis. On proximity measures for graph vertices. Automation and Remote Control, 59(10):1443–1459, 1998.
* [12] P. Chebotarev and E. Shamis. The matrix-forest theorem and measuring relations in small social groups, 2006.
* [13] P. Y. Chebotarev and E. Shamis. The forest metrics of a graph and their properties. Automation and Remote Control, 61(8):1364–1373, 2000.
* [14] M. B. Cohen, R. Kyng, G. L. Miller, J. W. Pachocki, R. Peng, A. B. Rao, and S. C. Xu. Solving SDD linear systems in nearly $m\log^{1/2}n$ time. In STOC, pages 343–352. ACM, 2014.
* [15] M. G. Everett and S. P. Borgatti. The centrality of groups and classes. The Journal of Mathematical Sociology, 23(3):181–201, 1999.
* [16] K. D. Gremban. Combinatorial Preconditioners for Sparse, Symmetric, Diagonally Dominant Linear Systems. PhD thesis, Carnegie Mellon University, October 1996. CMU CS Tech Report CMU-CS-96-123.
* [17] N. S. Izmailian, R. Kenna, and F. Y. Wu. The two-point resistance of a resistor network: a new formulation and application to the cobweb network. Journal of Physics A: Mathematical and Theoretical, 47(3):035003, 2013.
* [18] Y. Jin, Q. Bao, and Z. Zhang. Forest distance closeness centrality in disconnected graphs. In ICDM, pages 339–348. IEEE, 2019.
* [19] W. B. Johnson and J. Lindenstrauss. Extensions of lipschitz mappings into a hilbert space. Contemporary mathematics, 26(189-206):1, 1984.
* [20] J. Kunegis. KONECT: the koblenz network collection. In WWW (Companion Volume), pages 1343–1350. Intl. World Wide Web Conf. Steering Committee / ACM, 2013.
* [21] R. Kyng and S. Sachdeva. Approximate gaussian elimination for laplacians - fast, sparse, and simple. In FOCS, pages 573–582. IEEE Computer Society, 2016.
* [22] H. Li, R. Peng, L. Shan, Y. Yi, and Z. Zhang. Current flow group closeness centrality for complex networks? In WWW, pages 961–971. ACM, 2019.
* [23] R. Merris. Doubly stochastic graph matrices, II. Linear and Multilinear Algebra, 45(2-3):275–285, 1998.
* [24] M. Newman. Networks. Oxford university press, 2018.
* [25] K. Okamoto, W. Chen, and X. Li. Ranking of closeness centrality for large-scale social networks. In FAW, volume 5059 of LNCS, pages 186–195. Springer, 2008\.
* [26] R. A. Rossi and N. K. Ahmed. The network data repository with interactive graph analytics and visualization. In AAAI, 2015.
* [27] P. Sen, G. Namata, M. Bilgic, L. Getoor, B. Gallagher, and T. Eliassi-Rad. Collective classification in network data. AI Mag., 29(3):93–106, 2008.
* [28] O. Shchur, M. Mumme, A. Bojchevski, and S. Günnemann. Pitfalls of graph neural network evaluation. CoRR, abs/1811.05868, 2018.
* [29] C. L. Staudt, A. Sazonovs, and H. Meyerhenke. Networkit: A tool suite for large-scale complex network analysis. Network Science, 4(4):508–530, 2016.
* [30] A. S. Teixeira, P. T. Monteiro, J. A. Carriço, M. Ramirez, and A. P. Francisco. Spanning edge betweenness. In MLG, volume 24, pages 27–31, 2013.
* [31] P. Thomas. Semi-supervised learning by olivier chapelle, bernhard schölkopf, and alexander zien (review). IEEE Trans. Neural Networks, 20(3):542, 2009.
* [32] A. van der Grinten, E. Angriman, and H. Meyerhenke. Scaling up network centrality computations – a brief overview. it - Information Technology, 62:189 – 204, 2020.
* [33] S. White and P. Smyth. Algorithms for estimating relative importance in networks. In KDD, pages 266–275. ACM, 2003.
* [34] D. B. Wilson. Generating random spanning trees more quickly than the cover time. In STOC, pages 296–303. ACM, 1996.
* [35] D. Zhou, O. Bousquet, T. N. Lal, J. Weston, and B. Schölkopf. Learning with local and global consistency. In NIPS, pages 321–328. MIT Press, 2003.
## A Technical Proofs
* Proof.
(Proposition 4.2) Since $\\{u^{\star},v\\}\in E^{\prime}$, we have in the
unweighted case that $\mathbf{r}_{G_{\star}}(u^{\star},v)$ is the number of
spanning trees of $G_{\star}$ that contain $\\{u^{\star},v\\}$ divided by the
number of all spanning trees of $G_{\star}$ (follows from Kirchhoff’s theorem,
see [8, Ch. II]). In the weighted case, replace “number” by “total weight”,
respectively (where the weight of a UST is the product of all edge weights).
We focus on the unweighted case in the following for ease of exposition; the
proof for the weighted case works in the same way.
Clearly, $R[v]/\tau$, as used by Algorithm 1, is an estimator for
$\mathbf{r}_{G_{\star}}(u^{\star},v)$. It remains to show that it is unbiased,
i. e., ${\mathbb{E}}[R[v]/\tau]=\mathbf{r}_{G_{\star}}(u^{\star},v)$. To this
end, let $T_{i}$ be the UST sampled in iteration $i$ and $X_{i,v}$ the random
indicator variable with $X_{i,v}=1$ if $\\{u^{\star},v\\}\in T_{i}$ and $0$
otherwise. Then:
$\mathbb{E}[R[v]/\tau]=\frac{1}{\tau}\mathbb{E}[R[v]]=\frac{1}{\tau}\sum_{i=1}^{\tau}\mathbb{E}[X_{i,v}]=\frac{1}{\tau}\sum_{i=1}^{\tau}\mathbb{P}[\{u^{\star},v\}\in T_{i}]=\frac{1}{\tau}\sum_{i=1}^{\tau}\mathbf{r}_{G_{\star}}(u^{\star},v)=\mathbf{r}_{G_{\star}}(u^{\star},v),$
which follows from the definition of expectation and the above correspondence
between (the relative frequency of an edge in) USTs and effective resistances.
* Proof.
(Proposition 4.3) By plugging the augmented graph $G_{\star}$ (with constant
diameter) into the proof of Lemma 10 of Ref. [1], we obtain for the running
time $W(n)$ on a graph with $n$ vertices:
$W(n)=\mathcal{O}(\operatorname{vol}(G_{\star}))=\mathcal{O}(\alpha\operatorname{vol}(G)+n)$
expected time per call in Line 10.
* Proof.
(Theorem 4.1) For the linear system in Line 11, we employ the SDD solver by
Cohen et al. [14]; it takes $\tilde{\mathcal{O}}(m\log^{1/2}n\log(1/\eta))$
time to achieve a relative error bound of
$\|\mathbf{\tilde{x}}-\mathbf{x}\|_{\mathbf{L^{\prime}}}\leq\eta\|\mathbf{x}\|_{\mathbf{L^{\prime}}}$, where
$\mathbf{L^{\prime}}:=\alpha\mathbf{L}+\mathbf{I}$. We can express the
equivalence of this matrix-based norm with the maximum norm by adapting Lemma
12 of Ref. [1] with the norm for $\mathbf{L^{\prime}}$ (instead of
$\mathbf{L}$):
$\sqrt{\mu_{1}}\cdot\|\mathbf{x}\|_{\infty}\leq\|\mathbf{x}\|_{\mathbf{L^{\prime}}}\leq\sqrt{\alpha(c+2)\operatorname{vol}(G)}\|\mathbf{x}\|_{\infty}$,
where $\mu_{1}$ is the smallest eigenvalue of $\mathbf{L^{\prime}}$. In fact,
$\mu_{1}=\alpha\lambda_{1}+1=1$, where $\lambda_{1}=0$ is the smallest
eigenvalue of $\mathbf{L}$, so that we can simplify:
(A.1)
$\|\mathbf{x}\|_{\infty}\leq\|\mathbf{x}\|_{\mathbf{L^{\prime}}}\leq\sqrt{\alpha(c+2)\operatorname{vol}(G)}\|\mathbf{x}\|_{\infty}.$
Let us set $c:=\frac{n}{\alpha\cdot\operatorname{vol}(G)}$; by our assumption
in the theorem, $c$ is a constant. Hence, if we set
$\eta:=\kappa\varepsilon/6\sqrt{\alpha(c+2)\operatorname{vol}(G)}$, the SDD
solver’s accuracy can be bounded by:
$\|\mathbf{\tilde{x}}-\mathbf{x}\|_{\infty}\leq\|\mathbf{\tilde{x}}-\mathbf{x}\|_{\mathbf{L^{\prime}}}\leq\eta\cdot\|\mathbf{x}\|_{\mathbf{L^{\prime}}}\leq\eta\sqrt{\alpha(c+2)\operatorname{vol}(G)}\,\|\mathbf{x}\|_{\infty}=\frac{\kappa\varepsilon}{6}\|\mathbf{x}\|_{\infty}\leq\frac{\kappa\varepsilon}{3}.$
The last inequality follows from the fact that the values in $\mathbf{x}$ are
bounded by the effective resistance, which in turn is bounded by the graph
distance and thus by $2$ (via the edges to/from $u^{\star}$).
of $\kappa\varepsilon/3$ (or better), then Eq. (2.1) is solved with accuracy
$\kappa\varepsilon$ (or better). The resulting running time for the SDD solver
is thus
$\tilde{\mathcal{O}}(m\log^{1/2}n\log(1/\eta))=\tilde{\mathcal{O}}(m\log^{1/2}n\log(\sqrt{\alpha\operatorname{vol}(G)}/\varepsilon))$.
According to Proposition 4.3 and with $n\leq
c\cdot\alpha\cdot\operatorname{vol}(G)$, sampling one UST takes
$\mathcal{O}(\alpha\operatorname{vol}(G))$ expected time. It remains to
identify a suitable sample size $\tau$ for the approximation to hold. To this
end, let $\varepsilon^{\prime}:=(1-\kappa)\varepsilon$ denote the tolerable
absolute error for the UST-based approximation part. Plugging
$\tau:=\lceil\log(2m/\delta)/2(\varepsilon^{\prime})^{2}\rceil$ into the proof
of Theorem 3 of Ref. [1] (and thus essentially Hoeffding’s bound) with the
fact that the eccentricity of $u^{\star}$ is $1$, we obtain the desired result.
* Proof.
(Lemma 5.1) The proof in Li et al. [22, Lemma 4.1] exploits (among others)
that the diagonal is constant. If we replace $3$ by $4$, this argument and all
others (such as positive definiteness) still hold and the result becomes
$(n-k)/4$ instead of $(n-k)/3$.
* Proof.
(Theorem 5.1) Let $G$ be 3-regular and let $S\subset V$, $|S|=k$. We prove
that $\mathbf{\rho}(S)\geq\frac{4}{3n+k}+\left(\frac{1}{4}+\frac{1}{4(3n+k)}\right)(n-k)=:t(n,k)$,
where equality holds if and only if $S$ is a vertex cover of $G$. Let
$\mathbf{A}$ be the $(n-k)\times(n-k)$ submatrix of
$\left(\mathbf{L}_{\star}\right)_{-S}$ that corresponds to all vertices except
the universal vertex, i. e.,
$\mathbf{A}:=\left(\mathbf{L}\right)_{-S}+\mathbf{I}$. Note that $\mathbf{A}$
is symmetric. Since $G$ is 3-regular, all diagonal entries of $\mathbf{A}$ are
4. All non-diagonal entries have value $-1$ and there can be at most three
such entries per row / column of $\mathbf{A}$. In particular, the row and
column sums of $\mathbf{A}$ are all $\geq 1$. An elementary calculation (i.
e., expanding the $ij$-th element of the matrix multiplication $A$ times
$A^{-1}$, and summing over $j$) shows:
(A.2)
$\left(\sum_{\ell}\mathbf{A}_{i\ell}\right)\left(\sum_{\ell}\mathbf{A}^{-1}_{\ell
i}\right)=1,$
hence the row sums and column sums of $\mathbf{A}^{-1}$ are all $\leq 1$. Let
us now decompose $\left(\mathbf{L}_{\star}\right)_{-S}$ into blocks as
follows:
$\left(\mathbf{L}_{\star}\right)_{-S}=\begin{pmatrix}n&-\mathbf{1}^{T}\\ -\mathbf{1}&\mathbf{A}\end{pmatrix}.$
By blockwise inversion we obtain:
$(\left(\mathbf{L}_{\star}\right)_{-S})^{-1}=\begin{pmatrix}\frac{1}{n-\mathbf{1}^{T}\mathbf{A}^{-1}\mathbf{1}}&\cdots\\ \vdots&(\mathbf{A}-\frac{1}{n}\mathbf{J})^{-1}\end{pmatrix},$
where $\mathbf{J}$ is the $(n-k)\times(n-k)$ matrix of all ones. To compute
$(\mathbf{A}-\frac{1}{n}\mathbf{J})^{-1}$, we notice that
$-\frac{1}{n}\mathbf{J}$ can be written as the rank-one term
$\mathbf{1}\cdot(-\frac{1}{n})\cdot\mathbf{1}^{T}$ and apply the Sherman-Morrison
formula. This yields
(A.3)
$(\mathbf{A}-\frac{1}{n}\mathbf{J})^{-1}=\mathbf{A}^{-1}+\frac{1}{n-\mathbf{1}^{T}\mathbf{A}^{-1}\mathbf{1}}\mathbf{A}^{-1}\mathbf{J}\mathbf{A}^{-1}.$
We note that $\mathbf{1}^{T}\mathbf{A}^{-1}\mathbf{1}$ equals the sum of all
entries of $\mathbf{A}^{-1}$, i. e., the sum of all column sums of
$\mathbf{A}^{-1}$; since each of the $n-k$ column sums is at most $1$, we get
$\mathbf{1}^{T}\mathbf{A}^{-1}\mathbf{1}\leq n-k<n$, so the denominator in
Eq. (A.3) is positive. Also, we have
$\operatorname{tr}(\mathbf{A}^{-1}\mathbf{J}\mathbf{A}^{-1})=\sum_{v\in
V\setminus S}(\sum_{j}\mathbf{A}^{-1}_{vj})(\sum_{i}\mathbf{A}^{-1}_{iv})$ and
thus $\operatorname{tr}((\mathbf{A}-\frac{1}{n}\mathbf{J})^{-1})$ only depends
on $\operatorname{tr}(\mathbf{A}^{-1})$ and the row/column sums of
$\mathbf{A}^{-1}$.
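The identity in Eq. (A.3) is easy to validate numerically. The following Python snippet is an illustrative sanity check on a small hypothetical block $\mathbf{A}$ (not one of the instances from our experiments):

import numpy as np

nk, n = 5, 12                      # (n - k) x (n - k) block; n as in L_star
A = 4.0 * np.eye(nk)               # diagonal entries 4, as for 3-regular G
A[0, 1] = A[1, 0] = -1.0           # one off-diagonal pair (an uncovered edge)

J = np.ones((nk, nk))
ones = np.ones(nk)
Ainv = np.linalg.inv(A)
lhs = np.linalg.inv(A - J / n)
rhs = Ainv + (Ainv @ J @ Ainv) / (n - ones @ Ainv @ ones)
assert np.allclose(lhs, rhs)       # Eq. (A.3) holds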
Now consider the case that $S$ is a vertex cover. In this case, $\mathbf{A}$
has no nonzero off-diagonal entries (and all row (or column) sums of
$\mathbf{A}$ are 4). For the entry
$(\left(\mathbf{L}_{\star}\right)_{-S})^{-1}[1][1]$, we then
obtain using Lemma 5.1: $1/(n-(n-k)/4)=4/(3n+k)$. The inverse
$(\mathbf{A}-\frac{1}{n}\mathbf{J})^{-1}$, in turn, resolves to
$\frac{1}{4}(\mathbf{I}+\frac{1}{3n+k}\mathbf{J})$, so that we obtain
$\operatorname{tr}((\left(\mathbf{L}_{\star}\right)_{-S})^{-1})=t(n,k)$.
On the other hand, assume that $S$ is not a vertex cover. In this case,
$\mathbf{A}$ is entry-wise smaller than or equal to the vertex cover case.
Furthermore, at least one element is now strictly smaller, i. e., there exist
rows/columns of $\mathbf{A}$ whose sum is smaller than 4. Due to Eq. (A.2),
this implies that some row/column sums of $\mathbf{A}^{-1}$ are strictly
larger than in the vertex cover case (namely, the rows/columns of $\mathbf{A}$
that sum to less than 4) and all others are equal to the vertex cover case (i.
e., the rows/columns of $\mathbf{A}$ that still sum to 4). Furthermore, by
applying Lemma 5.1, we notice that $\operatorname{tr}(\mathbf{A}^{-1})$ is now
larger compared to the vertex cover case. Since
$\operatorname{tr}((\mathbf{A}-\frac{1}{n}\mathbf{J})^{-1})$ only depends on
$\operatorname{tr}(\mathbf{A}^{-1})$ and the row/column sums of
$\mathbf{A}^{-1}$, the final trace can only be strictly larger than in the
vertex cover case.
## B Algorithmic Details
function SamplingUST($G$, $u^{\star}$)
  Input: graph $G=(V,E)$, universal vertex $u^{\star}\in V$
  Output: $R:=$ estimated effective resistance values
  $R[v]\leftarrow 0$ for all $v\in V$
  $T\leftarrow\{u^{\star}\}$
  Let $v_{1},\ldots,v_{n}$ be a reordering of $V$ according to ascending degree
  for $i\leftarrow 1$ to $n$ do
    $P\leftarrow$ random walk on $G$ from $v_{i}$ to $T$
    $LE(P)\leftarrow$ loop erasure of $P$ in order of appearance
    $T\leftarrow T\cup LE(P)$
    if last vertex of $LE(P)=u^{\star}$ then
      $w\leftarrow$ last visited vertex before $u^{\star}$
      $R[w]\leftarrow R[w]+1$
  return $R$
Algorithm 3 Sampling algorithm for USTs (based on Wilson's algorithm)
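For illustration, the following minimal Python sketch implements Algorithm 3 sequentially using the standard parent-pointer formulation of Wilson's algorithm, in which overwriting the pointer on every revisit performs the loop erasure implicitly; the adjacency-list format and function name are illustrative, and this is not the (parallel) implementation evaluated in our experiments:

import random

def sampling_ust(adj, u_star, num_samples):
    # adj: dict mapping each vertex to a list of its neighbours;
    # u_star is assumed to be a universal vertex of the graph
    counts = {v: 0 for v in adj}
    order = sorted(adj, key=lambda v: len(adj[v]))   # ascending degree
    for _ in range(num_samples):
        in_tree = {u_star}
        parent = {}
        for root in order:
            # random walk from root until the current tree T is hit; keeping
            # only the last exit from each vertex loop-erases the walk
            v = root
            while v not in in_tree:
                parent[v] = random.choice(adj[v])
                v = parent[v]
            # retrace the loop-erased path, add it to T, and count the
            # tree edge incident to u_star (if any)
            v = root
            while v not in in_tree:
                in_tree.add(v)
                if parent[v] == u_star:
                    counts[v] += 1
                v = parent[v]
    return {v: c / num_samples for v, c in counts.items()}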
## C Additional Experimental Results
#### Average Accuracy.
To confirm that UST performs well on average and not only when considering the
maximal error over many instances, we additionally report the _average_ (over
all instances from Table 1) of the absolute error in Figure 4.
Figure 3: Geometric mean of the speedup of UST with $\varepsilon=0.1$ on
multiple compute nodes over a single compute node ($1\times 24$ cores). Data
points are aggregated over the instances in Table 2.
#### Parallel Scalability.
In Figure 5 we report the parallel scalability of UST on multiple cores. We
hypothesize that the moderate speedup is mainly due to memory latencies: while
sampling a UST, our algorithm performs several random accesses to the graph
data structure (i. e., an adjacency array), which are prone to cache misses.
Furthermore, Table 2 reports detailed statistics about the instances used for
experiments in distributed memory along with running times of UST on $16\times
24$ cores with $\varepsilon=0.1$ and $\varepsilon=0.3$.
#### Vertex Classification.
Figure 6 shows the accuracy in semi-supervised vertex classification in
connected graphs when using different strategies to create the training set.
Compared to disconnected graphs, the competitors perform better in this
setting. However, as described in Section 6.2, choosing the training set by
group forest closeness maximization yields nearly the same accuracy as the
best competitors in our datasets.
Figure 4: Arithmetic mean of the absolute errors
$|\max_{v}\mathbf{\Omega}[v,v]-\widetilde{\mathbf{\Omega}}[v,v]|$ over the
instances in Table 1.
Figure 5: Geometric mean of the speedup of UST with $\varepsilon=0.05$ on
multiple cores over a sequential run (shared memory). Data points are
aggregated over the instances in Table 1.
Figure 6: Accuracy in semi-supervised vertex classification on the largest
connected component of the datasets when using different strategies to create
the training set. Cora-lcc: $|V|=2485$, $|E|=5069$; Citeseer-lcc: $|V|=2110$,
$|E|=3668$.
Complex networks
---
Graph | $|V|$ | $|E|$ | Time (s), $\varepsilon=0.1$ | Time (s), $\varepsilon=0.3$
---|---|---|---|---
soc-LiveJournal1 | 4,846,609 | 42,851,237 | 348.9 | 118.5
wikipedia_link_fr | 3,333,397 | 100,461,905 | 205.4 | 90.7
orkut-links | 3,072,441 | 117,184,899 | 293.5 | 92.2
dimacs10-uk-2002 | 18,483,186 | 261,787,258 | 1,101.3 | 365.8
wikipedia_link_en | 13,593,032 | 334,591,525 | 919.3 | 295.4
Road networks
---
Graph | $|V|$ | $|E|$ | Time (s), $\varepsilon=0.1$ | Time (s), $\varepsilon=0.3$
---|---|---|---|---
slovakia | 543,733 | 638,114 | 28.1 | 9.9
netherlands | 1,437,177 | 1,737,377 | 82.9 | 31.1
greece | 1,466,727 | 1,873,857 | 74.5 | 29.8
spain | 4,557,386 | 5,905,365 | 273.0 | 86.2
great-britain | 7,108,301 | 8,358,289 | 419.0 | 136.6
dach | 20,207,259 | 25,398,909 | 1,430.1 | 473.7
africa | 23,975,266 | 31,044,959 | 1,493.4 | 499.3
Table 2: Large networks used for scalability experiments in distributed memory and running time of UST on $16\times 24$ cores.
Graph | Group size | Time (s)
---|---|---
cora | 200 | 1,559.3
cora | 400 | 2,210.6
cora | 600 | 2,663.4
citeseer | 200 | 2,518.6
citeseer | 400 | 3,666.5
citeseer | 600 | 4,642.4
Table 3: Running time of our greedy algorithm for group forest closeness maximization.
Complex networks
---
Graph | Time (s), $\varepsilon=0.05$ | $\varepsilon=0.1$ | $\varepsilon=0.2$ | $\varepsilon=0.3$ | $\varepsilon=0.4$ | $\varepsilon=0.5$
---|---|---|---|---|---|---
loc-brightkite_edges | 46.4 | 11.6 | 3.0 | 1.4 | 0.8 | 0.5
douban | 80.8 | 20.5 | 5.2 | 2.4 | 1.5 | 0.9
soc-Epinions1 | 55.5 | 14.0 | 3.5 | 1.6 | 1.0 | 0.7
slashdot-zoo | 59.9 | 15.6 | 3.8 | 1.8 | 1.1 | 0.7
petster-cat-household | 61.8 | 15.7 | 4.0 | 1.8 | 1.1 | 0.8
wikipedia_link_fy | 58.2 | 15.0 | 3.9 | 1.9 | 1.1 | 0.8
loc-gowalla_edges | 230.9 | 63.0 | 15.7 | 7.1 | 4.4 | 2.8
wikipedia_link_an | 50.7 | 12.1 | 3.1 | 1.5 | 0.9 | 0.7
wikipedia_link_ga | 44.8 | 11.3 | 3.1 | 1.6 | 1.1 | 0.8
petster-dog-household | 359.6 | 87.7 | 22.5 | 10.3 | 6.0 | 4.1
livemocha | 107.4 | 28.6 | 7.3 | 3.5 | 2.1 | 1.5
Road networks
---
Graph | Time (s), $\varepsilon=0.05$ | $\varepsilon=0.1$ | $\varepsilon=0.2$ | $\varepsilon=0.3$ | $\varepsilon=0.4$ | $\varepsilon=0.5$
---|---|---|---|---|---|---
mauritania | 98.1 | 24.4 | 6.9 | 2.8 | 1.6 | 1.0
turkmenistan | 118.5 | 30.2 | 7.7 | 3.4 | 2.1 | 1.3
cyprus | 149.4 | 37.7 | 9.8 | 4.4 | 2.6 | 1.7
canary-islands | 185.5 | 46.7 | 11.4 | 5.2 | 3.0 | 2.0
albania | 192.6 | 52.6 | 13.1 | 6.0 | 3.4 | 2.2
benin | 188.1 | 47.9 | 12.2 | 5.5 | 3.2 | 2.1
georgia | 322.1 | 83.6 | 22.5 | 9.8 | 5.6 | 3.6
latvia | 355.2 | 92.0 | 23.3 | 10.6 | 5.9 | 4.0
somalia | 420.1 | 108.7 | 27.6 | 12.6 | 7.1 | 4.6
ethiopia | 825.9 | 215.7 | 53.9 | 24.4 | 13.9 | 9.0
tunisia | 1,200.1 | 303.1 | 77.7 | 34.6 | 19.7 | 12.9
Table 4: Running time in seconds of UST on the networks in Table 1.
# One-Dimensional Edge Contact to Encapsulated MoS2 with a Superconductor
A. Seredinski School of Sciences and Humanities, Wentworth Institute of
Technology, Boston, MA 02115 Department of Physics, Duke University, Durham,
NC, 27708 E.G. Arnault Department of Physics, Duke University, Durham, NC,
27708 V.Z. Costa Department of Physics and Astronomy, San Francisco State
University, San Francisco, CA 94132 L. Zhao Department of Physics, Duke
University, Durham, NC, 27708 T.F.Q. Larson Department of Physics, Duke
University, Durham, NC, 27708 K. Watanabe National Institute for Materials
Science, Namiki 1-1, Tsukuba, Ibaraki 305-0044, Japan T. Taniguchi National
Institute for Materials Science, Namiki 1-1, Tsukuba, Ibaraki 305-0044, Japan
F. Amet Department of Physics and Astronomy, Appalachian State University,
Boone, NC 28607 A.K.M. Newaz Department of Physics and Astronomy, San
Francisco State University, San Francisco, CA 94132 G. Finkelstein
Department of Physics, Duke University, Durham, NC, 27708
###### Abstract
Establishing ohmic contact to van der Waals semiconductors such as MoS2 is
crucial to unlocking their full potential in next-generation electronic
devices. Encapsulation of few layer MoS2 with hBN preserves the material’s
electronic properties but makes electrical contacts more challenging. Progress
toward high quality edge contact to encapsulated MoS2 has been recently
reported. Here, we evaluate a contact methodology using sputtered MoRe, a Type
II superconductor with a relatively high critical field and temperature
commonly used to induce superconductivity in graphene. We find that the
contact transparency is poor and that the devices do not support a measurable
supercurrent down to 3 Kelvin, which has ramifications for future fabrication
recipes.
Soon after the isolation of monolayer graphene, it was found that mono- and
few-layer crystals could be isolated from transition metal dichalcogenides
(TMDs) Novoselov _et al._ (2005). TMDs host an array of interesting phenomena
including superconductivity, charge density waves, and quantum spin Hall
states Manzeli _et al._ (2017). Among the library of TMDs, molybdenum
disulfide (MoS2) has attracted attention due to its layer-dependent band
structure Mak _et al._ (2010); Lee _et al._ (2010), high mobility
Radisavljevic _et al._ (2011); Kim _et al._ (2012); Baugher _et al._
(2013), large spin-orbit interaction Zhu, Cheng, and Schwingenschlögl (2011);
Xiao _et al._ (2012); Kośmider, González, and Fernández-Rossier (2013), and
gate-induced superconductivity Ye _et al._ (2012); Taniguchi _et al._
(2012); Lu _et al._ (2015); Costanzo _et al._ (2016). Encapsulation of MoS2
with hexagonal boron nitride (hBN) both protects it from atmosphere and
separates it from sources of disorder Lee _et al._ (2015); Cao _et al._
(2015). However, due to Schottky barriers, a readily formed oxide layer, and
the fabrication challenges that come along with encapsulation, ohmic contact
to hBN/MoS2/hBN heterostructures has proven difficult.
Figure 1: (a) Optical image of the first sample. The black outline shows the
location of the encapsulated MoS2. Scale bar 5 $\mu$m. (b) Schematic side view
of the one-dimensional edge contact between the encapsulated MoS2 and the
sputtered MoRe (not to scale).
Low temperature ohmic contact of normal metals to encapsulated MoS2 has been
achieved through work-function engineering Cui _et al._ (2017) as well as
intervening graphene layers Lee _et al._ (2015); Cui _et al._ (2015).
Recently, progress has been made in one-dimensional edge contact to MoS2 with
normal metal through in situ Ar+ sputtering Jain _et al._ (2019); Cheng _et
al._ (2019). It would be highly desirable to develop superconducting edge
contact to MoS2, which could enable the study of the Josephson junction
physics taking advantage of MoS2’s spin-orbit and spin-valley couplings.
In this work we make one-dimensional edge contact to encapsulated MoS2 using
molybdenum-rhenium (MoRe), a Type II superconductor known to form high
transparency contact to MoS2 for a 2D interface Island _et al._ (2016). We
utilize a recipe known to make ohmic edge contacts to hBN-encapsulated
graphene Calado _et al._ (2015); Borzenets _et al._ (2016). Our measurements
show low transparency contact to MoS2 that is improved neither by Ar+
sputtering pre-treatment of the contact interfaces nor by annealing. These
results indicate the probable presence of interfacial tunnel barriers. This
result may prove informative for groups developing hybrid samples made of van
der Waals heterostructures with superconducting contacts.
We study two MoS2 devices encapsulated within hBN. Both samples are contacted
by several MoRe electrodes, which define a series of Josephson junctions of
different lengths. The first device uses bilayer MoS2, while the second device
uses monolayer MoS2. Figure 1 shows an optical image of the first device as
well as a schematic view of the one-dimensional edge contact between the MoS2
and MoRe, created via reactive ion etching and sputtering. The second device
underwent an in situ Ar+ sputtering pre-treatment immediately before MoRe
deposition.
Figure 2: (a) Gate voltage dependence of the $I-V$ characteristics in a 200 nm
long, 5 $\mu$m wide junction on the first device ($J_{1}$). Inset: resistance
at high $V_{SD}$ for each gate voltage. The junction is seen to be highly
resistive across applied gate and bias voltages, and no signs of
superconducting behavior are visible. (b) $I-V$ curves for junctions $J_{1-3}$
of the first sample at $V_{BG}=42$ V. There is no significant difference
between the 200 nm and 500 nm long junctions, indicating that the current is
limited by the contacts. Inset: top-down schematic of the sample with
$J_{1-3}$ labeled.
Both van der Waals heterostructures were assembled from mechanically
exfoliated flakes using a dry transfer technique with a polyethylene
terephthalate stamp. Polymer residue was removed by immersion in
dichloromethane for one hour at 70 °C followed by several hours at room
temperature.
The one-dimensional interface between the MoS2 and the MoRe was prepared via
standard electron-beam lithography techniques, reactive ion etching (RIE), and
sputtering. RIE consisted of three steps, all carried out with a process
pressure of $10^{-1}$ Torr. First, a ten second CHF3 / O2 (10:1 flow rate ratio)
step removed leftover e-beam resist (PMMA) residue from the top surface of the
heterostructure. This was followed by a ten second SF6 process to etch through
the top hBN. Finally, a ten second CF4 step was used to etch the MoS2 in the
contact region. While a CF4 etch is a typical process for MoS2, SF6 may itself
be sufficient Jain _et al._ (2019). In order to limit the device's exposure
to atmosphere, and thus the formation of MoOx along the interface, the device
was not removed from the system for imaging between these steps.
The devices had minimal exposure to air before being transferred to the
sputtering system. The second sample was treated with Ar+ sputtering before
metal deposition to refresh the contact interface. The chamber was pumped to a
pressure of $\sim 10^{-8}$ Torr and 100 nm of MoRe (50-50% by weight) was
sputtered on both devices. To minimize processing, the Josephson junctions
were not shaped with further etching, so the flakes of MoS2 continue beyond
the boundaries of the junctions. This is visible in Figure 1a, which shows an
optical image of the first device.
The samples are cooled in a closed-cycle cryocooler with a base temperature of
3 K. Unless otherwise noted, a voltage $V_{applied}$ is applied to the
junction in series with a protective $R_{S}=$10 M$\Omega$ resistor. The drain
current, $I_{D}$ is measured, and the source-drain voltage is calculated as
$V_{SD}=V_{applied}-R_{S}I_{D}$; as a result the curves in Figures 2 and 3
have different horizontal extent.
Figure 2a shows the effects of electrostatic gating on the $I-V$ curves of a
200 nm long and 5 $\mu$m wide junction made on the first device. The gate
voltage ($V_{BG}$) increases the Fermi level in the MoS2, causing it to
approach the conduction band. We observe that for increasing $V_{BG}$, the
threshold of $V_{SD}$ required to achieve a linear slope decreases. Figure 2b
demonstrates the $I-V$ curves measured for three junctions of different length
at the maximal gate voltage of 42 V. (See the schematic in the inset: $J_{1}$
is 200 nm long, and $J_{2,3}$ are 500 nm long.) It is clear that 1) the curves
show no significant length dependence, indicating that the current is limited
by the contact barriers; and 2) the measurements are consistent between the
three junctions, indicating uniform properties of the contacts. These initial
measurements are consistent with the presence of barriers (likely Schottky
barriers) at the interfaces Jain _et al._ (2019). At the highest gate voltage
(42 V) the resistance is 2.4 M$\Omega$, which, for two 5 $\mu$m wide contacts
in series, corresponds to a contact resistance of $R_{c}\approx$ 6
M$\Omega\cdot\mu$m.
Due to this high contact resistance, we next anneal the sample at 200 °C for 17
hours in a vacuum of $10^{-6}$ mbar. Annealing processes have been shown to
decrease contact resistance in similar devices. This may be due to a host of
phenomena which change the bonding or structure at the interface Jain _et
al._ (2019). In this study, the annealing resulted in higher contact
resistance, with an increase of as much as 40% at high bias and $V_{BG}=42$ V.
This decrease in contact quality may be due to the MoRe reflowing away from
the contact edge, as seen in gold junctions without an additional metal
sticking layer Jain _et al._ (2019).
Figure 3: Temperature dependencies measured in the 200 nm long, 5 $\mu$m wide
junction in the first device. (a) Post-anneal $I-V$ characteristics. (b) Low
bias ($V_{SD}=0.05$ V) resistance $R$, plotted in linear and (inset) log
scale, which shows $R$ decaying with temperature. $V_{BG}=42$ V throughout.
(c) $\ln(I_{D})$ vs $(V_{SD}/\mathrm{Volt})^{1/2}$ plot of the same data
showing an approximately linear relationship in the intermediate temperature
range. This is consistent with thermionic transport across the contact
interfaces.
We study the behavior of the junction as a function of temperature to gain
insight into the poor contact quality. Figure 3a plots the $I-V$
characteristics of the same junction from 3 to 290 K. A clear reduction in
low-bias resistance spanning more than a decade is seen as the temperature
rises (Figure 3b). Such behavior is consistent with thermionic transport
across a barrier. This interpretation is supported by an approximately linear
relation between the log of the current and the square root of the bias
voltage in the device (Figure 3c) as expected, e.g., for a triangular Schottky
barrier Sze and Ng (2006). This relation breaks down for low bias voltages at
higher temperatures.
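The linearity check behind Figure 3c amounts to regressing $\ln(I_{D})$ on $(V_{SD})^{1/2}$. A minimal Python sketch of such a fit (with a synthetic trace standing in for the measured data) is:

import numpy as np

# v_sd (V) and i_d (A) are placeholders for a measured trace at fixed temperature
v_sd = np.linspace(0.5, 5.0, 50)
i_d = 1e-9 * np.exp(1.8 * np.sqrt(v_sd))       # synthetic Schottky-like curve

slope, intercept = np.polyfit(np.sqrt(v_sd), np.log(i_d), 1)
residuals = np.log(i_d) - (slope * np.sqrt(v_sd) + intercept)
print(slope, np.abs(residuals).max())          # small residuals => linear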
Due to the contact characteristics of this device, we study a second device
utilizing Ar+ sputtering immediately prior to the deposition of the MoRe
contacts, focusing on a 500 nm long and 5 $\mu$m wide junction. Despite this
change in deposition parameters and an overnight anneal at 300 °C in $10^{-6}$
mbar, this second device also displays high contact resistances at low
temperature. Utilizing a direct voltage biasing scheme without a 10 M$\Omega$
series resistor, we measure gate sweeps for different $V_{SD}$ (Figure 4).
Even at the highest applied $V_{SD}=5$ V, the currents supported by the junction
are orders of magnitude lower than comparable or longer junctions made with
both top contacts Liu _et al._ (2015); Smithe _et al._ (2016) and high
quality normal metal edge contacts Jain _et al._ (2019); Cheng _et al._
(2019).
Figure 4: Current vs $V_{BG}$ sweeps measured in a 500 nm long by 5 $\mu$m
wide junction in the second device following the annealing, which show the
induced highly resistive behavior. The three curves correspond to $V_{SD}=$
1.5, 3, and 5 V. Inset: the same data in log scale.
In summary, we tested a methodology for making one-dimensional edge contact to
encapsulated MoS2 with MoRe, and found high contact resistances on the order
of M$\Omega\cdot\mu$m. This contact was not improved by annealing at 200-300
°C. In situ Ar+ sputtering of the interface before the deposition of MoRe also
did not improve the contact quality. We conclude that the presence of tunnel
barriers limits the performance of these devices. The lack of length
dependence, consistency between different junctions, insensitivity to Ar+ pre-
cleaning, and the lack of improvement upon annealing all point to the presence
of intrinsic Schottky barriers at the interfaces.
Higher transparency contacts may be achieved in the future by replacing MoRe
with superconductors having a significantly higher or lower work function.
Nevertheless, the current contact recipe could support the use of MoS2 in more
complex superconducting heterostructures. Namely, TMDs, including MoS2 Safeer
_et al._ (2019), are already used to induce the spin-orbit coupling in
graphene Wang _et al._ (2016); Island _et al._ (2019). One can extend these
studies to Josephson junctions by making superconducting contacts that would
selectively contact the graphene but not the TMD layer. In this context, our
work establishes an order of magnitude estimate for the (very small) current
expected to be shunted through an MoS2 layer in such a complex van der Waals
heterostructure.
###### Acknowledgements.
A.S., E.G.A, T.F.L., L.Z., and G.F. acknowledge support by the Office of Basic
Energy Sciences, U.S. Department of Energy, under Award DE-SC0002765. V.Z.C.
and A.K.M.N. acknowledge support from the National Science Foundation Grant
ECCS-1708907 and Department of Defense Award (ID: 72495RTREP). K.W. and
T.T. acknowledge the Elemental Strategy Initiative conducted by the MEXT, Japan
and the CREST (JPMJCR15F3), JST. F.A. acknowledges the ARO under Award
W911NF-16-1-0132. This work was performed in part at the Duke University
Shared Materials Instrumentation Facility (SMIF), a member of the North
Carolina Research Triangle Nanotechnology Network (RTNN), which is supported
by the National Science Foundation (Grant ECCS-1542015) as part of the
National Nanotechnology Coordinated Infrastructure (NNCI).
## References
* Novoselov _et al._ (2005) K. S. Novoselov, D. Jiang, F. Schedin, T. J. Booth, V. V. Khotkevich, S. V. Morozov, and A. K. Geim, PNAS 102, 10451 (2005).
* Manzeli _et al._ (2017) S. Manzeli, D. Ovchinnikov, D. Pasquier, O. V. Yazyev, and A. Kis, Nature Reviews Materials 2, 17033 (2017).
* Mak _et al._ (2010) K. F. Mak, C. Lee, J. Hone, J. Shan, and T. F. Heinz, Physical Review Letters 105, 136805 (2010).
* Lee _et al._ (2010) C. Lee, H. Yan, L. E. Brus, T. F. Heinz, J. Hone, and S. Ryu, ACS Nano 4, 2695 (2010).
* Radisavljevic _et al._ (2011) B. Radisavljevic, A. Radenovic, J. Brivio, V. Giacometti, and A. Kis, Nature Nanotechnology 6, 147 (2011).
* Kim _et al._ (2012) S. Kim, A. Konar, W.-S. Hwang, J. H. Lee, J. Lee, J. Yang, C. Jung, H. Kim, J.-B. Yoo, J.-Y. Choi, Y. W. Jin, S. Y. Lee, D. Jena, W. Choi, and K. Kim, Nature Communications 3, 1011 (2012).
* Baugher _et al._ (2013) B. W. H. Baugher, H. O. H. Churchill, Y. Yang, and P. Jarillo-Herrero, Nano Letters 13, 4212 (2013).
* Zhu, Cheng, and Schwingenschlögl (2011) Z. Y. Zhu, Y. C. Cheng, and U. Schwingenschlögl, Physical Review B 84, 153402 (2011).
* Xiao _et al._ (2012) D. Xiao, G.-B. Liu, W. Feng, X. Xu, and W. Yao, Physical Review Letters 108, 196802 (2012).
* Kośmider, González, and Fernández-Rossier (2013) K. Kośmider, J. W. González, and J. Fernández-Rossier, Physical Review B 88, 245436 (2013).
* Ye _et al._ (2012) J. T. Ye, Y. J. Zhang, R. Akashi, M. S. Bahramy, R. Arita, and Y. Iwasa, Science 338, 1193 (2012).
* Taniguchi _et al._ (2012) K. Taniguchi, A. Matsumoto, H. Shimotani, and H. Takagi, Applied Physics Letters 101, 042603 (2012).
* Lu _et al._ (2015) J. M. Lu, O. Zheliuk, I. Leermakers, N. F. Q. Yuan, U. Zeitler, K. T. Law, and J. T. Ye, Science 350, 1353 (2015).
* Costanzo _et al._ (2016) D. Costanzo, S. Jo, H. Berger, and A. F. Morpurgo, Nature Nanotechnology 11, 339 (2016).
* Lee _et al._ (2015) G.-H. Lee, X. Cui, Y. D. Kim, G. Arefe, X. Zhang, C.-H. Lee, F. Ye, K. Watanabe, T. Taniguchi, P. Kim, and J. Hone, ACS Nano 9, 7019 (2015).
* Cao _et al._ (2015) Y. Cao, A. Mishchenko, G. L. Yu, E. Khestanova, A. P. Rooney, E. Prestat, A. V. Kretinin, P. Blake, M. B. Shalom, C. Woods, J. Chapman, G. Balakrishnan, I. V. Grigorieva, K. S. Novoselov, B. A. Piot, M. Potemski, K. Watanabe, T. Taniguchi, S. J. Haigh, A. K. Geim, and R. V. Gorbachev, Nano Letters 15, 4914 (2015).
* Cui _et al._ (2017) X. Cui, E.-M. Shih, L. A. Jauregui, S. H. Chae, Y. D. Kim, B. Li, D. Seo, K. Pistunova, J. Yin, J.-H. Park, H.-J. Choi, Y. H. Lee, K. Watanabe, T. Taniguchi, P. Kim, C. R. Dean, and J. C. Hone, Nano Letters 17, 4781 (2017).
* Cui _et al._ (2015) X. Cui, G.-H. Lee, Y. D. Kim, G. Arefe, P. Y. Huang, C.-H. Lee, D. A. Chenet, X. Zhang, L. Wang, F. Ye, F. Pizzocchero, B. S. Jessen, K. Watanabe, T. Taniguchi, D. A. Muller, T. Low, P. Kim, and J. Hone, Nature Nanotechnology 10, 534 (2015).
* Jain _et al._ (2019) A. Jain, Á. Szabó, M. Parzefall, E. Bonvin, T. Taniguchi, K. Watanabe, P. Bharadwaj, M. Luisier, and L. Novotny, Nano Letters 19, 6914 (2019).
* Cheng _et al._ (2019) Z. Cheng, Y. Yu, S. Singh, K. Price, S. G. Noyce, Y.-C. Lin, L. Cao, and A. D. Franklin, Nano Letters 19, 5077 (2019).
* Island _et al._ (2016) J. O. Island, G. A. Steele, H. S. J. van der Zant, and A. Castellanos-Gomez, 2D Materials 3, 031002 (2016).
* Calado _et al._ (2015) V. E. Calado, S. Goswami, G. Nanda, M. Diez, A. R. Akhmerov, K. Watanabe, T. Taniguchi, T. M. Klapwijk, and L. M. K. Vandersypen, Nature nanotechnology 10, 761 (2015).
* Borzenets _et al._ (2016) I. V. Borzenets, F. Amet, C. T. Ke, A. W. Draelos, M. T. Wei, A. Seredinski, K. Watanabe, T. Taniguchi, Y. Bomze, M. Yamamoto, S. Tarucha, and G. Finkelstein, Physical Review Letters 117, 237002 (2016).
* Sze and Ng (2006) S. M. Sze and K. K. Ng, _Physics of Semiconductor Devices_ (John Wiley & Sons, Ltd., 2006).
* Liu _et al._ (2015) W. Liu, D. Sarkar, J. Kang, W. Cao, and K. Banerjee, ACS Nano 9, 7904 (2015).
* Smithe _et al._ (2016) K. K. H. Smithe, C. D. English, S. V. Suryavanshi, and E. Pop, 2D Materials 4, 011009 (2016).
* Safeer _et al._ (2019) C. K. Safeer, J. Ingla-Aynés, F. Herling, J. H. Garcia, M. Vila, N. Ontoso, M. R. Calvo, S. Roche, L. E. Hueso, and F. Casanova, Nano Letters 19, 1074 (2019).
* Wang _et al._ (2016) Z. Wang, D.-K. Ki, J. Y. Khoo, D. Mauro, H. Berger, L. S. Levitov, and A. F. Morpurgo, Physical Review X 6, 041020 (2016).
* Island _et al._ (2019) J. O. Island, X. Cui, C. Lewandowski, J. Y. Khoo, E. M. Spanton, H. Zhou, D. Rhodes, J. C. Hone, T. Taniguchi, K. Watanabe, L. S. Levitov, M. P. Zaletel, and A. F. Young, Nature 571, 85 (2019).
# Deciding What to Learn: A Rate-Distortion Approach
Dilip Arumugam Benjamin Van Roy
###### Abstract
Agents that learn to select optimal actions represent a prominent focus of the
sequential decision-making literature. In the face of a complex environment or
constraints on time and resources, however, aiming to synthesize such an
optimal policy can become infeasible. These scenarios give rise to an
important trade-off between the information an agent must acquire to learn and
the sub-optimality of the resulting policy. While an agent designer has a
preference for how this trade-off is resolved, existing approaches further
require that the designer translate these preferences into a fixed learning
target for the agent. In this work, leveraging rate-distortion theory, we
automate this process such that the designer need only express their
preferences via a single hyperparameter and the agent is endowed with the
ability to compute its own learning targets that best achieve the desired
trade-off. We establish a general bound on expected discounted regret for an
agent that decides what to learn in this manner along with computational
experiments that illustrate the expressiveness of designer preferences and
even show improvements over Thompson sampling in identifying an optimal
policy.
Sequential Decision-Making, Information Theory
## 1 Introduction
Learning is a process of acquiring information that reduces an agent’s
uncertainty about its environment. Anything that an agent may endeavor to
learn requires obtaining a precise amount of information about the
environment; naturally, as measured by this requisite information, some things
are easier to learn than others. When interacting with a complex environment,
however, the agent is spoiled for choice as there is too much to learn within
any reasonable time frame, and the agent must prioritize. A simple approach is
to designate a learning target, which can be thought of as a corpus of
information that, while insufficient to fully identify the environment,
suffices to guide effective decisions. Then, the agent can prioritize
gathering of information about this learning target.
One possible learning target, which has dominated the bandit-learning
literature (Bubeck et al., 2012; Lattimore & Szepesvári, 2020), is an action-
selection policy that would be optimal given full information about the
environment. While suitable for simple environments, like multi-armed bandits
with few arms, this concept does not scale well with the size of the action
space. Moreover, in complex environments, there is typically too much to learn
about the optimal policy within any reasonable time frame.
Recent work has highlighted conditions under which it is helpful to target a
near-optimal or satisficing policy (Russo & Van Roy, 2018b). Such a learning
target is not without precedent and has been studied implicitly in a variety
of contexts (Bubeck et al., 2011; Kleinberg et al., 2008; Rusmevichientong &
Tsitsiklis, 2010; Ryzhov et al., 2012; Deshpande & Montanari, 2012; Berry et
al., 1997; Wang et al., 2008; Bonald & Proutiere, 2013). There is an important
tension between information requirements for policy learning and policy
performance; as one is more permissive of increasingly sub-optimal policies,
the requisite amount of information for learning such policies decreases.
Crucially, a satisficing policy can be manually specified by an agent designer
in order to strike the desired balance. To do so, however, it is incumbent
upon the designer to have sufficient knowledge of the problem structure in
order to negotiate the information-performance trade-off.
We consider the design of an agent that selects its own learning target. This
shifts the agent designer's role from specifying a target to endowing the agent
with the ability to designate and to suitably adapt the target as learning
progresses. The designer can specify the general form of this learning target
as part of the scaffold for a learning algorithm. More traditional, fixed-
target learning algorithms can then be repurposed as subroutines an agent may
use to achieve its own goals. We introduce in this paper what is possibly the
first principled approach to address a fundamental question: how should an
agent decide what to learn?
As a first step, this work offers one concrete answer to this question by
introducing an agent that adaptively learns target actions. To endow this
agent with the ability to reason about the information-performance trade-off
autonomously, we employ rate-distortion theory (Shannon, 1959; Berger, 1971),
building on connections to sequential decision-making made by Russo & Van Roy
(2018b). With an appropriately chosen distortion measure, the canonical rate-
distortion function precisely characterizes the trade-off between the
information required for policy learning and policy performance. Rather than
placing the burden on the agent designer to procure the solution to a single
rate-distortion function on behalf of the agent, we instead place the onus upon
the agent to solve a rate-distortion function in each time period and
gradually adapt its self-designated target action. We recognize that
computation of rate-distortion functions is a well-studied problem in the
information theory community for which an elegant solution already exists as
the classic Blahut-Arimoto algorithm (Blahut, 1972; Arimoto, 1972).
Accordingly, we begin by introducing a variant of Thompson sampling which uses
the Blahut-Arimoto algorithm as a subroutine for computing a target action in
each time period that achieves the rate-distortion limit. We then prove a
bound on the expected discounted regret for this algorithm, differing from
previous information-theoretic analyses in its treatment of a learning target
that changes in each time period. Finally, we conclude with a series of
computational experiments that highlight the efficacy of our procedure in
enabling an agent to target desired points along the information-performance
trade-off curve.
The paper proceeds as follows: in Section 2 we briefly discuss background
material before clarifying the connections between our approach and rate-
distortion theory in Section 3. Due to space constraints, we relegate an
overview of prior work to the appendix. We introduce our main algorithm in
Section 4 before finally presenting a corresponding regret analysis and
supporting computational experiments in Sections 5 and 6, respectively.
## 2 Background
In this section, we begin with an overview of several standard quantities in
information theory. For more background on information theory, see Cover &
Thomas (2012). We conclude the section with a brief outline of rate-distortion
theory.
### 2.1 Information Theory
Consider three random variables $X,Y,Z$ defined on a probability space
$(\Omega,\mathbb{F},\mathbb{P})$. We define entropy, conditional entropy,
mutual information, and conditional mutual information as follows:
$\mathbb{H}(X)=-\mathbb{E}[\log(\mathbb{P}(X\in\cdot))]$
$\mathbb{H}(Y|X)=-\mathbb{E}[\log(\mathbb{P}(Y\in\cdot|X))]$
$\mathbb{I}(X;Y)=\mathbb{H}(X)-\mathbb{H}(X|Y)=\mathbb{H}(Y)-\mathbb{H}(Y|X)$
$\mathbb{I}(X;Y|Z)=\mathbb{H}(X|Z)-\mathbb{H}(X|Y,Z)=\mathbb{H}(Y|Z)-\mathbb{H}(Y|X,Z)$
Importantly, the mutual information between a single random variable $X$ and
a sequence of random variables $Z_{1},\ldots,Z_{n}$ decomposes via the chain
rule of mutual information:
$\mathbb{I}(X;Z_{1},\ldots,Z_{n})=\sum\limits_{i=1}^{n}\mathbb{I}(X;Z_{i}|Z_{1},\ldots,Z_{i-1}).$
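For finite random variables, these quantities are straightforward to compute from a joint probability table. A minimal Python sketch (using the equivalent identity $\mathbb{I}(X;Y)=\mathbb{H}(X)+\mathbb{H}(Y)-\mathbb{H}(X,Y)$; the example pmf is arbitrary) is:

import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))             # entropy in bits

def mutual_information(p_xy):
    # I(X;Y) = H(X) + H(Y) - H(X,Y) for a joint pmf given as a 2-D array
    p_x, p_y = p_xy.sum(axis=1), p_xy.sum(axis=0)
    return entropy(p_x) + entropy(p_y) - entropy(p_xy.ravel())

p_xy = np.array([[0.4, 0.1],
                 [0.1, 0.4]])
print(mutual_information(p_xy))                # approximately 0.278 bits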
### 2.2 Rate-Distortion Theory
Rate-distortion theory is a sub-area of information theory concerned with
lossy compression and the achievability of coding schemes that maximally
compress while adhering to a desired upper bound on error or loss of fidelity
(Shannon, 1959; Berger, 1971; Cover & Thomas, 2012). More formally, consider a
random variable $X$ with fixed distribution $p(x)=\mathbb{P}(X=x)$ that
represents an information source along with a random variable $\hat{X}$ that
corresponds to a channel output. Given a distortion measure
$d:\mathcal{X}\times\hat{\mathcal{X}}\mapsto\mathbb{R}_{\geq 0}$ and a desired
upper bound on distortion $D$, the rate-distortion function is defined as:
$\mathcal{R}(D)=\inf\limits_{\hat{X}\in\Lambda}\mathbb{I}(X;\hat{X})$ (1)
quantifying the minimum number of bits (on average) that must be communicated
from $X$ across a channel in order to adhere to the specified expected
distortion threshold $D$. Here, the infimum is taken over
$\Lambda=\{\hat{X}:\mathbb{E}\left[d(X,\hat{X})\right]\leq D\}$, representing
the set of all random variables $\hat{X}:\Omega\mapsto\hat{\mathcal{X}}$ that
satisfy the constraint on expected distortion. Intuitively, a higher
rate corresponds to requiring more bits of information and smaller information
loss between $X$ and $\hat{X}$, enabling higher-fidelity reconstruction (lower
distortion); conversely, lower rates reflect more substantial information
loss, potentially exceeding the tolerance on distortion $D$.
###### Fact 1.
$\mathcal{R}(D)$ is a non-negative, convex, and monotonically-decreasing
function in $D$ (Cover & Thomas, 2012).
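As a standard worked example (Cover & Thomas, 2012), consider a Bernoulli($p$) source under Hamming distortion $d(x,\hat{x})=\mathbb{1}\{x\neq\hat{x}\}$. The rate-distortion function then has the closed form
$\mathcal{R}(D)=H_{b}(p)-H_{b}(D)$ for $0\leq D\leq\min(p,1-p)$, and $\mathcal{R}(D)=0$ otherwise,
where $H_{b}$ denotes the binary entropy function; one can check directly that this is non-negative, convex, and monotonically decreasing in $D$ over $[0,\min(p,1-p)]$.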
Some readers may be more familiar with the related problem of computing
channel capacity; while the rate-distortion function considers a fixed
information source $p(x)$ and optimizes for a channel
$p(\hat{x}|x)=\mathbb{P}(\hat{X}=\hat{x}|X=x)$ that minimizes distortion, the
channel-capacity function considers a fixed channel and optimizes for the
information source that maximizes throughput.
## 3 Sequential Decision-Making & Rate-Distortion Theory
### 3.1 Problem Formulation
We define all random variables with respect to a common probability space
$(\Omega,\mathbb{F},\mathbb{P})$; all events are determined by a random
outcome $\omega\in\Omega$. An agent interacts with an unknown environment
$\mathcal{E}$, which is itself a random variable. The interaction generates a
history $H_{t}=(A_{0},O_{1},A_{1},O_{2},\ldots,O_{t})$ of actions and
observations that take values in finite sets $\mathcal{A}$ and $\mathcal{O}$.
Initial uncertainty about the environment is reflected by probabilities
$\mathbb{P}(\mathcal{E}\in\cdot)$ where $\mathcal{E}$ has support on $\Theta$
and, as the history unfolds, what can be learned is represented by conditional
probabilities $\mathbb{P}(\mathcal{E}\in\cdot|H_{t})$.
Actions are independent of the environment conditioned on history,
$A_{t+1}\perp\mathcal{E}|H_{t}$. This reflects the fact that the agent selects
actions based only on history and, possibly, algorithmic randomness. It may be
helpful to think of the actions as being selected by an admissible policy
$\pi(a|H_{t})=\mathbb{P}(A_{t}=a|H_{t})$, which assigns a probability to each
action $a\in\mathcal{A}$ given the history. By admissible, we mean that action
probabilities are determined by history and do not depend on further
information about the environment.
We assume that observations are independent of history conditioned on the
environment and most recent action, $O_{t+1}\perp H_{t}|(\mathcal{E},A_{t})$.
Note that this precludes delayed consequences, and we will restrict attention
in this paper to such environments. Further, we assume a stationary
environment such that conditional observation probabilities
$\mathbb{P}(O_{t+1}|\mathcal{E},A_{t})$ do not depend on $t$.
Upon each observation, the agent enjoys a reward $R_{t+1}=r(A_{t},O_{t+1})$
where $r:\mathcal{A}\times\mathcal{O}\mapsto\mathbb{R}$ is a deterministic
function. Let $\overline{r}(a)=\mathbb{E}[R_{t+1}|A_{t}=a,\mathcal{E}]$ denote
mean reward and note that $\overline{r}$ is itself a random variable since it
depends on $\mathcal{E}$. Let $A_{\star}$ be an action that maximizes the
expected mean reward $\mathbb{E}[\overline{r}(A_{\star})]$ and let
$R_{\star}=\overline{r}(A_{\star})$. Note that $A_{\star}$ and $R_{\star}$ are
random variables, as they depend on $\mathcal{E}$. It may be helpful to think
of $A_{\star}$ as generated by an optimal policy
$\pi_{\star}(a)=\mathbb{P}(A_{t}=a|\mathcal{E})$, which is inadmissible, in
the sense that it depends on the environment, not just the history.
Traditionally, the performance of an admissible policy $\pi$ at any time
period $\tau=0,1,2,\ldots$ is quantified by its regret:
$\mathbb{E}\left[\sum\limits_{t=\tau}^{\infty}R_{\star}-R_{t+1}\Big{|}H_{\tau}\right].$
While this is a suitable measure of asymptotic performance, we follow suit
with Russo & Van Roy (2018b) and examine expected discounted regret
$\mathbb{E}\left[\sum\limits_{t=\tau}^{\infty}\gamma^{t-\tau}(R_{\star}-R_{t+1})\Big{|}H_{\tau}\right],$
where the discount factor $\gamma\in[0,1)$ helps regulate the agent’s
preference for minimizing near-term versus long-term performance shortfall.
### 3.2 Target Actions
In the course of identifying an optimal policy, we take
$\mathbb{H}(A_{\star})$ to denote the bits of information an agent must
acquire in order to identify $A_{\star}$. Russo & Van Roy (2016) offer a novel
information-theoretic analysis of Thompson sampling (Thompson, 1933) whose
corresponding regret bound depends on $\mathbb{H}(A_{\star})$. Due to the non-
negativity of conditional entropy, $\mathbb{H}(A_{\star}|\mathcal{E})\geq 0$,
it follows that the entropy of $A_{\star}$ upper bounds the mutual information
between $A_{\star}$ and $\mathcal{E}$,
$\mathbb{H}(A_{\star})\geq\mathbb{H}(A_{\star})-\mathbb{H}(A_{\star}|\mathcal{E})=\mathbb{I}(A_{\star};\mathcal{E})$,
which is tight when the optimal action $A_{\star}$ is a deterministic function
of $\mathcal{E}$.
When faced with a complex environment $\mathcal{E}$, acquiring these
$\mathbb{H}(A_{\star})$ bits of information for optimal behavior may be
exceptionally difficult. While Thompson sampling is a simple yet effective
algorithm with widespread empirical success in synthesizing optimal policies
(Chapelle & Li, 2011; Russo et al., 2018), it can fall short in these more
challenging learning settings. Russo & Van Roy (2018b) first drew awareness to
this issue, highlighting several examples where Thompson sampling struggles in
the face of a large, possibly infinite, action set or a time-sensitivity
constraint on learning. In short, the problem stems from the fact that
Thompson sampling will select new, untested actions in each time period,
rapidly becoming inefficient as the number of actions grows.
Russo & Van Roy (2018b) introduce the notion of satisficing actions
$\tilde{A}$, in lieu of optimal actions, as a remedy to the aforementioned
issues. The core premise of this alternative learning target is that a
deliberately sub-optimal action should require the agent to learn fewer bits
of information about the environment in order to identify a corresponding
satisficing policy. Their proposed satisficing Thompson sampling algorithm
makes the natural modification of probability matching with respect to the
agent’s posterior beliefs over $\tilde{A}$, given the current history, such
that $A_{t}\sim\mathbb{P}(\tilde{A}=\cdot|H_{t})$. Crucially, Russo & Van Roy
(2018b) draw an interesting connection between the specification of
satisficing actions and rate-distortion theory. Taking the distortion function
to be the instantaneous expected regret conditioned on a realization of the
environment,
$d(\tilde{a},e)=\mathbb{E}[\overline{r}(A_{\star})-\overline{r}(\tilde{a})|\mathcal{E}=e]$,
they study the corresponding rate-distortion function
$\mathcal{R}(D)=\inf\limits_{\tilde{A}\in\tilde{\mathcal{A}}}\mathbb{I}(\tilde{A};\mathcal{E})$ (2)
where
$\tilde{\mathcal{A}}=\{\tilde{A}:\mathbb{E}\left[d(\tilde{A},\mathcal{E})\right]\leq
D,\tilde{A}\perp H_{t}|\mathcal{E},\forall t\}$ denotes the set of all random
variables $\tilde{A}:\Omega\mapsto\mathcal{A}$ that are conditionally-
independent from all histories given the environment $\mathcal{E}$ and adhere
to the distortion constraint. Applying Fact 1, we immediately recover the
following:
###### Fact 2.
For any $D>0$,
$\mathbb{H}(A_{\star})\geq\mathbb{H}(A_{\star})-\mathbb{H}(A_{\star}|\mathcal{E})=\mathbb{I}(A_{\star};\mathcal{E})=\mathcal{R}(0)\geq\mathcal{R}(D)=\mathbb{I}(\tilde{A};\mathcal{E})$
which confirms a crucial desideratum for satisficing actions; namely, that an
agent must acquire fewer bits of information about $\mathcal{E}$ in order to
learn a satisficing action, relative to learning an optimal action. Moreover,
following an analogue of the information-theoretic analysis of Russo & Van Roy
(2016), Russo & Van Roy (2018b) prove an information-theoretic regret bound
that depends on the value of the rate-distortion function, rather than the
entropy. While this performance guarantee highlights an interesting and useful
link between sequential decision-making and rate-distortion theory, there is
no guarantee that a manually-specified satisficing action $\tilde{A}$ will
achieve the rate-distortion limit as desired. Thus, an agent that can
manufacture its own satisficing actions which achieve the rate-distortion
limit stands to dramatically outperform any hand-crafted $\tilde{A}$. To make
the distinction between the manually-specified satisficing actions of prior
work, we use the term target actions to refer to the agent’s self-designated
learning targets which explicitly differ from satisficing actions in that the
are (1) computed by the agent, (2) adapted over time according the agent’s
current knowledge of the environment $\mathcal{E}$, and (3) achieve the rate-
distortion limit in each time period.
Agents we consider can forgo the aim of learning an optimal action and instead
try to learn a target action. Formally, a target action $\tilde{A}$ is a
random variable that can be thought of as generated by an inadmissible policy
$\tilde{\pi}(a)=\mathbb{P}(\tilde{A}=a|\mathcal{E})$. As with
$A_{\star}$, a target action may depend on the environment, not just the
history. Moreover, a target action is a random variable $\tilde{A}$ that
satisfies $H_{t}\perp\tilde{A}|\mathcal{E}$ for all $t$. In other words,
observations do not provide information about $\tilde{A}$ beyond what the
environment would. As it based upon an inadmissible policy, a target action
can change along with the agent’s beliefs over the environment
$\mathbb{P}(\mathcal{E}\in\cdot|H_{t})$. This represents another key
distinction between target actions that an agent can modify to reflect its
updated knowledge about the environment and manually-specified satisficing
actions that act as a fixed learning objective (much like optimal actions
$A_{\star}$). We use $\tilde{A}_{t}$ to denote the target action computed in
time period $t$ according to the distortion function
$d(a,e|H_{t})=\mathbb{E}[(\overline{r}(A_{\star})-\overline{r}(a))^{2}|\mathcal{E}=e,H_{t}]$.
Consequently, this induces a sequence of rate-distortion functions, one for
each time period, each of which is conditioned on the agent’s history $H_{t}$.
In the next section, we discuss a classic approach for computing a single,
arbitrary rate-distortion function before introducing a variant of Thompson
sampling that applies this method to compute target actions in each time
period.
## 4 Approach
### 4.1 Notation
At various points going forward, it will be necessary to refer to the mutual
information between two random variables conditioned upon a specific
realization of an agent’s history at some time period $t$. For convenience, we
will denote this as
$\mathbb{I}_{t}(X;Y)=\mathbb{I}(X;Y|H_{t}=H_{t}).$
This notation will also apply analogously to the conditional mutual
information
$\mathbb{I}_{t}(X;Y|Z)=\mathbb{I}(X;Y|H_{t}=H_{t},Z).$
Note that their dependence on the realization of random history $H_{t}$ makes
both $\mathbb{I}_{t}(X;Y)$ and $\mathbb{I}_{t}(X;Y|Z)$ random variables
themselves. The traditional notion of conditional mutual information which
uses the random variable $H_{t}$ arises by integrating over this randomness:
$\displaystyle\mathbb{E}\left[\mathbb{I}_{t}(X;Y)\right]$
$\displaystyle=\mathbb{I}(X;Y|H_{t})$
$\displaystyle\mathbb{E}\left[\mathbb{I}_{t}(X;Y|Z)\right]$
$\displaystyle=\mathbb{I}(X;Y|H_{t},Z)$
Additionally, we will also adopt a similar notation to express a conditional
expectation given the random history $H_{t}$:
$\mathbb{E}_{t}\left[X\right]=\mathbb{E}\left[X|H_{t}\right].$
### 4.2 Blahut-Arimoto Satisficing Thompson Sampling
A classic algorithm for carrying out the constrained optimization problem
captured in the rate-distortion function is the Blahut-Arimoto algorithm
(Blahut, 1972; Arimoto, 1972). While the first step in the derivation of the
algorithm is to start with the Lagrangian of the constrained objective, we
will adopt a different notation to recognize the sequence of rate-distortion
functions an agent must solve as its history expands. Namely, consider a loss
function that, given history $H_{t}$, assesses a target action:
$\mathcal{L}_{\beta}(\tilde{A}|H_{t})=\mathbb{I}_{t}(\mathcal{E};\tilde{A})+\beta\mathbb{E}_{t}\left[(\overline{r}(A_{\star})-\overline{r}(\tilde{A}))^{2}\right].$
The first term can be interpreted as the number of bits of information from
the environment required to identify target action, which we refer to as the
information rate of $\tilde{A}$. The second term is a measure of distortion –
the expected squared error between mean rewards generated by the target action
versus an optimal action – scaled by a constant $\beta\in\mathbb{R}_{\geq 0}$
representing a Lagrange multiplier. Hence, this loss function captures a rate-
distortion trade-off. An optimal action minimizes distortion but, via Fact 2,
may require a high rate. An uninformed action has a rate of zero but results
in high distortion. Our goal is for an agent designer to use the $\beta$
hyperparameter to express a preference for the ease of learning versus the
tolerable level of sub-optimality whereas it is the agent’s responsibility to
identify the appropriate target action $\tilde{A}_{t}$ that best reflects
these preferences (Singh et al., 2010).
The Blahut-Arimoto Algorithm can be applied to identify a target action
$\tilde{A}$ that minimizes this loss function. The algorithm is initialized
with environment-dependent target action probabilities $\tilde{p}_{0}$, and
generates a sequence of iterates $\tilde{p}_{1},\tilde{p}_{2},\ldots$,
converging on probabilities $\tilde{p}_{\star}$ such that
$\tilde{p}_{\star}(a|e)=\mathbb{P}(\tilde{A}=a|\mathcal{E}=e)$ for all
$a\in\mathcal{A}$ and $e\in\Theta$. Each iteration carries out two steps. The
first computes marginal probabilities of the target action
$\tilde{q}_{k}(a)=\mathbb{E}_{t}[\tilde{p}_{k}(a|\mathcal{E})]\qquad\forall
a\in\mathcal{A},$
while the second updates environment-dependent target action probabilities,
$\forall a\in\mathcal{A},e\in\Theta,$
$\tilde{p}_{k+1}(a|e)=\tfrac{\tilde{q}_{k}(a)\exp(-\beta\mathbb{E}_{t}[(\overline{r}(A_{\star})-\overline{r}(a))^{2}|\mathcal{E}=e])}{\sum_{a^{\prime}\in\mathcal{A}}\tilde{q}_{k}(a^{\prime})\exp(-\beta\mathbb{E}_{t}[(\overline{r}(A_{\star})-\overline{r}(a^{\prime}))^{2}|\mathcal{E}=e])}.$
A standard choice for the initial channel parameters $\tilde{p}_{0}(a|e)$ is
the uniform distribution. Again, $\beta$ now subsumes the role of $D$ in
Equation 2 for expressing the desired prioritization of minimizing rate (lower
$\mathbb{I}(\tilde{A}_{t};\mathcal{E})$) versus minimizing distortion (lower
$d(a,e|H_{t})=\mathbb{E}_{t}[(\overline{r}(A_{\star})-\overline{r}(a))^{2}|\mathcal{E}=e]$).
Notice that as $\beta\rightarrow\infty$, $\tilde{p}_{k+1}(a|e)$ sharpens to a
max, placing all probability mass on the realization of $\tilde{A}$ that
minimizes distortion; consequently,
$\tilde{p}_{\star}(a|e)=\mathbb{P}(\tilde{A}=a|\mathcal{E}=e)=\mathbb{P}(A_{\star}=a|\mathcal{E}=e)$
and we recover the standard learning target of Thompson sampling.
Just as Thompson sampling selects actions according to the probability of
being optimal $\mathbb{P}(A_{t}=a|H_{t-1})=\mathbb{P}(A_{\star}=a|H_{t-1})$,
our Blahut-Arimoto Satisficing Thompson Sampling (BLASTS) algorithm selects
actions according to their probability of being the target action
$\tilde{A}_{t}$ that achieves the rate-distortion limit. We present the BLASTS
algorithm as Algorithm 1.
## 5 Regret Analysis
Abstracting away the precise details of BLASTS, we can consider a coarsely-
defined algorithm that selects each action $A_{t}$ as follows: (1) identify a
target action $\tilde{A}_{t}$ that minimizes a loss function
$\mathcal{L}_{\beta}(\cdot|H_{t})$ and (2) sample
$A_{t}\sim\mathbb{P}(\tilde{A}_{t}=\cdot|H_{t})$. Recall that the loss
function is defined, for any target action $\tilde{A}$, by
$\mathcal{L}_{\beta}(\tilde{A}|H_{t})=\mathbb{I}_{t}(\mathcal{E};\tilde{A})+\beta\mathbb{E}_{t}\left[(\overline{r}(A_{\star})-\overline{r}(\tilde{A}))^{2}\right].$
Due to space constraints, the proofs associated with all of the following
results can be found in the appendix. The following result helps establish
that the expected loss of any target action decreases as observations
accumulate.
Algorithm 1 Blahut-Arimoto Satisficing Thompson Sampling (BLASTS)
Input: Lagrange multiplier $\beta\in\mathbb{R}_{\geq 0}$, Blahut-Arimoto
iterations $K\in\mathbb{N}$, posterior samples $Z\in\mathbb{N}$
$H_{0}=\{\}$
for $t=0$ to $T-1$ do
  $e_{1},\ldots,e_{Z}\sim\mathbb{P}(\mathcal{E}\in\cdot|H_{t})$
  $d(a,e|H_{t})=\mathbb{E}[(\overline{r}(A_{\star})-\overline{r}(a))^{2}|\mathcal{E}=e,H_{t}]$
  $\tilde{p}_{0}(a|e_{z})=\frac{1}{|\mathcal{A}|},\forall a\in\mathcal{A},z\in[Z]$
  for $k=0$ to $K-1$ do
    $\tilde{q}_{k}(a)=\mathbb{E}_{t}[\tilde{p}_{k}(a|\mathcal{E})],\forall a\in\mathcal{A}$
    $\tilde{p}_{k+1}(a|e_{z})\propto\tilde{q}_{k}(a)\exp\left(-\beta d(a,e_{z}|H_{t})\right),\forall a\in\mathcal{A},z\in[Z]$
  end for
  $\hat{z}\sim\text{Uniform}([Z])$
  $A_{t}\sim\tilde{p}_{K}(\cdot|e_{\hat{z}})$
  $H_{t+1}=H_{t}\cup\{(A_{t},O_{t+1})\}$
  $R_{t+1}=r(A_{t},O_{t+1})$
end for
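A minimal Python sketch of Algorithm 1 (illustrative only; our experiments use the Blahut-Arimoto implementation of James et al. (2018)) is:

import numpy as np

def blahut_arimoto_target(d, beta, num_iters=100):
    # d: (Z x A) distortion matrix over Z posterior samples and A actions
    Z, A = d.shape
    p = np.full((Z, A), 1.0 / A)          # p_0(a | e_z): uniform initialization
    for _ in range(num_iters):
        q = p.mean(axis=0)                # q_k(a) = E_t[p_k(a | E)]
        p = q * np.exp(-beta * d)         # unnormalized p_{k+1}(a | e_z)
        p /= p.sum(axis=1, keepdims=True)
    return p

def blasts_action(d, beta, rng=np.random.default_rng()):
    # sample an environment index uniformly, then sample the action from the channel
    p = blahut_arimoto_target(d, beta)
    z_hat = rng.integers(d.shape[0])
    return rng.choice(d.shape[1], p=p[z_hat])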
###### Lemma 1.
For all $\beta>0$, target actions $\tilde{A}$, and $t=0,1,2,\ldots$,
$\mathbb{E}_{t}[\mathcal{L}_{\beta}(\tilde{A}|H_{t+1})]=\mathcal{L}_{\beta}(\tilde{A}|H_{t})-\mathbb{I}_{t}(\tilde{A};(A_{t},O_{t+1})).$
As a consequence of the above, the following lemma assures that expected loss
decreases as target actions are adapted. It also suggests that there are two
sources of decrease in loss: (1) a possible decrease in shifting from target
$\tilde{A}_{t}$ to $\tilde{A}_{t+1}$ and (2) a decrease of
$\mathbb{I}_{t}(\tilde{A}_{t};(A_{t},O_{t+1}))$ from observing the interaction
$(A_{t},O_{t+1})$. The former reflects the agent’s improved ability to select
a suitable target, and the latter captures information gained about the
previous target. The proof of the lemma follows immediately from Lemma 1 and
the fact that $\tilde{A}_{t+1}$ minimizes
$\mathcal{L}_{\beta}(\tilde{A}_{t+1}|H_{t+1})$, by definition.
###### Lemma 2.
For all $\beta>0$, target actions $\tilde{A}$, and $t=0,1,2,\ldots$,
$\mathbb{E}[\mathcal{L}_{\beta}(\tilde{A}_{t+1}|H_{t+1})|H_{t}]\leq\mathcal{L}_{\beta}(\tilde{A}_{t}|H_{t})-\mathbb{I}_{t}(\tilde{A}_{t};(A_{t},O_{t+1})).$
Note that, for all $t$, loss is non-negative and bounded by mutual information
between the optimal action and the environment (since optimal actions incur a
distortion of 0):
$\mathcal{L}_{\beta}(\tilde{A}_{t}|H_{t})\leq\mathcal{L}_{\beta}(A_{\star}|H_{t})=\mathbb{I}_{t}(\mathcal{E};A_{\star}).$
We therefore have the following corollary.
###### Corollary 1.
For all $\beta>0$ and $\tau=0,1,2,\ldots$,
$\mathbb{E}\left[\sum_{t=\tau}^{\infty}\mathbb{I}_{t}(\tilde{A}_{t};(A_{t},O_{t+1}))\Big{|}H_{\tau}\right]\leq\mathbb{I}_{\tau}(\mathcal{E};A_{\star}).$
The proof of Corollary 1 follows directly by applying the preceding inequality
to the following generalization that applies to any target action.
###### Corollary 2.
For all $\beta>0$, target actions $\tilde{A}$, and $\tau=0,1,2,\ldots$,
$\mathbb{E}_{\tau}\left[\sum_{t=\tau}^{\infty}\mathbb{I}_{t}(\tilde{A}_{t};(A_{t},O_{t+1}))\right]\leq\mathcal{L}_{\beta}(\tilde{A}|H_{\tau}).$
Let $\Gamma$ be a constant such that
$\Gamma\geq\frac{\mathbb{E}_{t}[\overline{r}(\tilde{A})-\overline{r}(A)]^{2}}{\mathbb{I}_{t}(\tilde{A};A,O)},$
for all histories $H_{t}$, target actions $\tilde{A}$, if the executed action
$A$ is an independent sample drawn from the marginal distribution of
$\tilde{A}$, and $O$ is the resulting observation. Thus, $\Gamma$ is an upper
bound on the information ratio (Russo & Van Roy, 2014, 2016, 2018a) for which
existing information-theoretic analyses of worst-case finite-arm bandits and
linear bandits provide explicit values of $\Gamma$ that satisfy this
condition.
We can now establish our main results. We omit the proof of Theorem 1 as it is
a special case of our subsequent result.
###### Theorem 1.
If $\beta=\frac{1-\gamma^{2}}{(1-\gamma)^{2}\Gamma}$ then, for all
$\tau=0,1,2,\ldots$,
$\mathbb{E}_{\tau}\left[\sum_{t=\tau}^{\infty}\gamma^{t-\tau}(\overline{r}(A_{\star})-\overline{r}(A_{t}))\right]\leq
2\sqrt{\frac{\Gamma\mathbb{I}_{\tau}(\mathcal{E};A_{\star})}{1-\gamma^{2}}}.$
In a complex environment with many actions,
$\mathbb{I}(\mathcal{E};A_{\star})$ can be extremely large, rendering the
above result somewhat vacuous under such circumstances. The next result offers
a generalization, establishing a regret bound that can depend on the
information content of any target action, including of course those that are
much simpler than $A_{\star}$.
###### Theorem 2.
If $\beta=\frac{1-\gamma^{2}}{(1-\gamma)^{2}\Gamma}$ then, for all target
actions $\tilde{A}$ and $\tau=0,1,2,\ldots$,
$\displaystyle\mathbb{E}_{\tau}\left[\sum_{t=\tau}^{\infty}\gamma^{t-\tau}(\overline{r}(A_{\star})-\overline{r}(A_{t}))\right]$
$\displaystyle\leq
2\sqrt{\frac{\Gamma\mathbb{I}_{\tau}(\mathcal{E};\tilde{A})}{1-\gamma^{2}}}+\frac{2\epsilon}{1-\gamma},$
where
$\epsilon=\sqrt{\mathbb{E}_{\tau}[(\overline{r}(A_{\star})-\overline{r}(\tilde{A}))^{2}]}$.
For the sake of completeness, we may derive the analogues of Corollary 2 and
Theorem 2 for the more traditional finite-horizon, undiscounted regret
setting.
###### Corollary 3.
For all $\beta>0$, target actions $\tilde{A}$, and $\tau=0,1,2,\ldots$,
$\mathbb{E}_{\tau}\left[\sum_{t=\tau}^{T+\tau}\mathbb{I}_{t}(\tilde{A}_{t};(A_{t},O_{t+1}))\right]\leq\mathcal{L}_{\beta}(\tilde{A}|H_{\tau}).$
###### Theorem 3.
If $\beta=\frac{T}{\Gamma}$ then, for all target actions $\tilde{A}$ and
$\tau=0,1,2,\ldots$,
$\mathbb{E}_{\tau}\left[\sum_{t=\tau}^{T+\tau}\overline{r}(A_{\star})-\overline{r}(A_{t})\right]\leq
2\sqrt{\Gamma T\mathbb{I}_{\tau}(\mathcal{E};\tilde{A})}+2T\epsilon,$
where
$\epsilon=\sqrt{\mathbb{E}[(\overline{r}(A_{\star})-\overline{r}(\tilde{A}))^{2}|H_{\tau}]}$.
Notably, the information-theoretic regret bounds of Theorems 2 and 3 align
with the bound of (Russo & Van Roy, 2018b) as a sum of the difficulty associated
with learning $\tilde{A}$ and the associated performance shortfall between
$\tilde{A}$ and $A_{\star}$.
## 6 Experiments
Figure 1: Bernoulli bandit with independent arms. (a) 50 arms; (b) 250 arms.
In this section, we outline two sets of computational experiments that
evaluate BLASTS against traditional Thompson sampling (TS). The primary goal
of our experiments is to illustrate how BLASTS enables an agent to navigate
the information-performance trade-off through the specification of $\beta$. To
this end, we examine two commonly-studied multi-armed bandit problems and
sweep across several values of $\beta$, benchmarking performance relative to
Thompson sampling. In the course of doing so, we find that both settings offer
a range of $\beta$ values which allow the agent to converge on the optimal
policy with greater efficiency than Thompson sampling.
In all of our experiments, we use linear hypermodels (Dwaracherla et al.,
2020) as a common choice for representing an agent’s epistemic uncertainty
over the environment $\mathcal{E}$. While several prior works have made use of
finite ensembles for representing an agent’s posterior beliefs over
environment parameters (Osband et al., 2016; Lu & Van Roy, 2017), hypermodels
offer a more computationally-tractable approach that demonstrably scales
better with a large number of actions. For an independent multi-armed bandit
problem with $K$ actions, a linear hypermodel takes as input an index sample
$z\sim\mathcal{N}(0,I_{K})$ and computes a single posterior sample as
$f_{\nu}(z)=\mu+\sigma z$ where the parameters
$\nu=(\mu\in\mathbb{R}^{K},\sigma\in\mathbb{R}^{K})$ are incrementally updated
via gradient descent to minimize a bootstrapped loss function. Due to space
constraints, we refer readers to (Dwaracherla et al., 2020) for the precise
details of this loss function and further information about hypermodels. It is
important to note that both Thompson sampling and BLASTS are agnostic to this
modeling choice and are compatible with any approach for representing an
agent’s uncertainty about the environment. We use a noise variance of 0.1, a
prior variance of 1.0, and a batch size of 1024 throughout all experiments
while using Adam (Kingma & Ba, 2014) to optimize hypermodel parameters with a
learning rate of 0.001.
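To make the sampling step concrete, the following is a minimal sketch of drawing one posterior sample from a linear hypermodel of the assumed form $f_{\nu}(z)=\mu+\sigma z$; the parameter values and names are placeholders, not the trained hypermodel of (Dwaracherla et al., 2020):

```python
import numpy as np

# A minimal sketch, assuming the linear hypermodel form f_nu(z) = mu + sigma*z;
# mu and sigma are placeholders for parameters that would be incrementally
# trained against the bootstrapped loss of Dwaracherla et al. (2020).
K = 50                            # number of independent arms
mu = np.zeros(K)                  # mean parameters (placeholder)
sigma = np.ones(K)                # scale parameters (placeholder)

def posterior_sample(rng: np.random.Generator) -> np.ndarray:
    z = rng.standard_normal(K)    # index sample z ~ N(0, I_K)
    return mu + sigma * z         # one posterior sample of mean rewards

sample = posterior_sample(np.random.default_rng(0))
```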
We leverage an existing implementation of the Blahut-Arimoto algorithm for all
experiments (James et al., 2018). The number of posterior samples used was
fixed to 64 and the maximum number of iterations was set to 100, stopping
early if the average distortion between two consecutive iterations fell below
a small threshold. In preliminary experiments, we found better numerical
stability when running the Blahut-Arimoto algorithm in base 2, rather than
base $e$. To benchmark performance, we plot the (undiscounted) cumulative
regret in each time period with shading to represent 95% confidence intervals
computed across 10 random seeds.
### 6.1 Independent Bernoulli & Gaussian Bandits
Our first experiment focuses on a Bernoulli bandit with $K$ independent arms.
In each random trial, the environment is represented as a vector
$\mathcal{E}\in\mathbb{R}^{K}$ where
$\mathcal{E}_{a}\sim\text{Uniform}(0,1),\forall a\in\mathcal{A}$. Accordingly,
the reward observed for taking action $a\in\mathcal{A}$ is sampled as a
$\text{Bernoulli}(\mathcal{E}_{a})$. In our second experiment, we pivot to a
Gaussian bandit where rewards for action $a$ are drawn from
$\mathcal{N}(\mathcal{E}_{a},1)$, again with
$\mathcal{E}_{a}\sim\text{Uniform}(0,1),\forall a\in\mathcal{A}$. Results for
each experiment are shown in Figures 1 and 2, respectively.
Figure 2: Gaussian bandit with independent arms. (a) 50 arms; (b) 250 arms.
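For reference, the two environments can be sketched as follows (variable names are ours):

```python
import numpy as np

# Sketch of the two independent-armed environments described above:
# mean rewards E_a ~ Uniform(0, 1) for each of K arms.
rng = np.random.default_rng(0)
K = 50
env = rng.uniform(0.0, 1.0, size=K)         # environment E in R^K

def bernoulli_reward(a: int) -> float:
    return float(rng.binomial(1, env[a]))   # reward ~ Bernoulli(E_a)

def gaussian_reward(a: int) -> float:
    return float(rng.normal(env[a], 1.0))   # reward ~ N(E_a, 1)
```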
The first notable observation from both sets of experiments is the control
that the $\beta$ parameter exerts over the performance of BLASTS. As expected,
while $\beta\rightarrow 0$, BLASTS approaches the performance of a uniform
random policy. In contrast, as $\beta\rightarrow\infty$, BLASTS gradually
recovers the performance of Thompson sampling. Importantly, when obtaining a
satisficing solution is viable, there is a suitable range of $\beta$ values to
accommodate different degrees of sub-optimality, many of which converge to
such satisficing policies in fewer time periods than what is needed for an
optimal policy. In our experiments, we ran BLASTS for a wider range of $\beta$
values than what is shown and selectively pruned away a subset of values for
readability. In all plots, the smallest value of $\beta$ in our selection that
achieves the optimal policy is shown.
A second key finding of the above experiments is the capacity for BLASTS to
synthesize an optimal policy more efficiently than Thompson sampling. Recall
that the input $D$ to the rate-distortion function $\mathcal{R}(D)$ represents
the desired upper bound on expected distortion. In the context of the Blahut-
Arimoto algorithm, $\beta$ represents the desired slope of the recovered
solution along the rate-distortion curve. By Corollary 5 of (Blahut, 1972), we
know that, given the current history $H_{t}$, the distortion $D$ achieved at
the point on the rate-distortion curve parameterized by $\beta$ is given as
$D(\beta|H_{t})=\mathbb{E}\left[\frac{\tilde{q}_{\star}(A)\exp(-\beta\mathbb{E}[(\overline{r}(A_{\star})-\overline{r}(A))^{2}|\mathcal{E},H_{t}])}{\sum_{a^{\prime}\in\mathcal{A}}\tilde{q}_{\star}(a^{\prime})\exp(-\beta\mathbb{E}[(\overline{r}(A_{\star})-\overline{r}(a^{\prime}))^{2}|\mathcal{E},H_{t}])}\right],$
where $\tilde{q}_{\star}$ achieves the infimum
$\inf\limits_{q}-\mathbb{E}\left[\log\left(\sum\limits_{a\in\mathcal{A}}q(a)\exp\left(-\beta\mathbb{E}_{t}[(\overline{r}(A_{\star})-\overline{r}(a))^{2}|\mathcal{E}]\right)\right)\right].$
Letting $\Delta>0$ denote the action gap between the best and second-best arm
(Farahmand, 2011; Bellemare et al., 2016), it stands to reason that, for any
$\beta$ obtaining the optimal policy,
$\max\limits_{t}D(\beta|H_{t})<\Delta^{2}$. By Fact 2, it follows that the
target actions computed along these same $\beta$ values serve as easier
learning targets (through smaller $\mathbb{I}_{t}(\tilde{A};\mathcal{E})$)
while still converging to the optimal policy.
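As an illustration of this check, the following sketch estimates the achieved distortion from posterior samples and compares it against the squared action gap; the array standing in for the Blahut-Arimoto output is an illustrative placeholder, not an output of our experiments:

```python
import numpy as np

# Sketch: estimate the achieved distortion D(beta | H_t) and compare it with
# the squared action gap Delta^2. "p" stands in for the Blahut-Arimoto output
# p_K(a | e_z); here it is a uniform placeholder, purely for illustration.
rng = np.random.default_rng(0)
Z, A = 64, 10
samples = rng.uniform(0, 1, size=(Z, A))                 # posterior samples
d = (samples.max(axis=1, keepdims=True) - samples) ** 2  # squared regret
p = np.full((Z, A), 1.0 / A)                             # placeholder p_K
D_beta = (p * d).sum(axis=1).mean()                      # achieved distortion
top2 = np.sort(samples.mean(axis=0))[-2:]
Delta = top2[1] - top2[0]                                # action-gap estimate
print(D_beta < Delta ** 2)  # True suggests beta is large enough for optimality
```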
In summary, the results presented here verify that BLASTS is capable of
realizing a broad spectrum of policies. Included in this spectrum are
satisficing policies that accommodate various problem constraints on time and
resources, as well as optimal policies that can be identified with greater
efficiency than Thompson sampling.
### 6.2 Balancing Rate-Distortion with the Information Ratio
The previous experiments clearly illustrate the importance of the $\beta$
hyperparameter in enabling an agent designer to express preferences over
behaviors and allowing an agent to realize those preferences through its
learned target actions. In the context of the rate-distortion function,
$\beta$ encodes a preference for minimizing rate over minimizing distortion.
Some of the $\beta$ values that ultimately recover satisficing policies,
however, do show signs of strong performance in the earlier stages of
learning. Despite this initial potential, the fixed value of $\beta$ is
ultimately too small to prioritize regret minimization. It is natural to
wonder whether allowing $\beta$ to vary with time might more efficiently
synthesize an optimal policy. One crude strategy for
exploring this would be to place $\beta$ on a manually-tuned schedule,
eventually allowing it to increase to a value that emphasizes optimal actions
by the end of learning. As a more principled alternative to such a laborious
strategy, we consider the relationship between $\beta$ and the information
ratio, inspired by the value of
$\beta=\frac{1-\gamma^{2}}{(1-\gamma)^{2}\Gamma}$ derived in our analysis.
Figure 3: BLASTS with adaptive $\beta_{t}=\overline{\Psi}_{t}^{-1}$ for
independent bandits with 50 arms. (a) Bernoulli; (b) Gaussian.
The information ratio (Russo & Van Roy, 2014, 2016, 2018a) is a powerful tool
for expressing the cost (measured in squared units of regret) per bit of
information acquired in each time period. The constant $\Gamma$ in our
analysis acts as a uniform upper bound on the information ratio (for our
setting) that facilitates our information-theoretic regret bounds. For the more
traditional setting of finding optimal policies, the information ratio at time
period $t$ is given by $\Psi_{t}(\pi)=\frac{\Delta_{t}(\pi)^{2}}{g_{t}(\pi)}$,
where $\Delta_{t}(\pi)$ denotes the expected regret with respect to
$A_{\star}$ and $g_{t}(\pi)$ denotes the information gain
$\mathbb{I}_{t}(A_{\star};A_{t},O_{t+1})$. While, in theory, an agent wishes
to compute a policy $\pi=\arg\min\limits_{\pi}\Psi_{t}(\pi)$ that minimizes the
information ratio, practical instantiations of this principle often rely on
the fact that $g_{t}(\pi)\geq\mathbb{E}[v_{t}(A)]$, where
$v_{t}(A)=\mathbb{V}[\mathbb{E}[\overline{r}(A)\mid\mathcal{E}]\mid H_{t}]$ is the variance of
the expected reward for action $A$ conditioned on the agent’s current beliefs
over the environment $\mathcal{E}$ (Russo & Van Roy, 2014, 2018a).
Consequently, in each time period, an agent may aim to compute a policy that
minimizes the upper bound
$\overline{\Psi}_{t}(\pi)=\frac{\Delta_{t}(\pi)^{2}}{\mathbb{E}_{A\sim\pi}[v_{t}(A)]}.$ To see an
initial connection between $\beta$ and the information ratio, recall that
$\beta$ is representative of the desired slope along the rate-distortion curve
(Blahut, 1972), with units of bits per unit of distortion; since BLASTS
operates with a squared-regret distortion, this leaves $\beta$ as a quantity
with units of bits per squared unit of regret. Moreover, once an agent has
resolved most of its uncertainty in the environment, small values of the
information ratio are indicative of optimal policies where BLASTS should,
ideally, take on larger values of $\beta$ to identify such optimal actions. In
light of these connections, we experiment with a version of BLASTS that uses
the minimizer of the variance-based information ratio to compute $\beta$ in
each time period. More specifically, let
$\overline{\Psi}_{t}=\min\limits_{\pi\in\Delta(\mathcal{A})}\overline{\Psi}_{t}(\pi)$
and take $\beta_{t}=\overline{\Psi}_{t}^{-1}$; a small constant is always added
to $\overline{\Psi}_{t}$ to avoid division by zero. Results for this variant
on the independent Bernoulli and Gaussian bandits are shown in Figure 3. While
an adaptive $\beta$ shows marginal gain in the Gaussian bandit, the Bernoulli
bandit results show marked improvement in finding an optimal policy.
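A minimal sketch of computing this adaptive $\beta_{t}$ from posterior samples follows; it assumes the variance-based information ratio and the fact that its minimizer is supported on at most two actions (Russo & Van Roy, 2014), and the grid search over mixing probabilities is our own simplification:

```python
import numpy as np

def adaptive_beta(samples: np.ndarray, eps: float = 1e-8, grid: int = 101) -> float:
    """Sketch: beta_t = 1 / min_pi Psi_bar_t(pi), from posterior samples.

    samples: (Z, A) posterior samples of mean rewards. The minimizer of the
    variance-based information ratio is supported on at most two actions, so
    we search over action pairs and a grid of mixing probabilities.
    """
    mean = samples.mean(axis=0)
    delta = samples.max(axis=1).mean() - mean   # per-arm expected regret
    v = samples.var(axis=0)                     # variance of E[r(a) | E]
    qs = np.linspace(0.0, 1.0, grid)
    best = np.inf
    A = len(mean)
    for i in range(A):
        for j in range(A):
            D = qs * delta[i] + (1.0 - qs) * delta[j]   # Delta_t(pi)
            g = qs * v[i] + (1.0 - qs) * v[j]           # variance proxy of g_t
            best = min(best, float(np.min(D ** 2 / (g + eps))))
    return 1.0 / (best + eps)                   # beta_t = Psi_bar_t^{-1}
```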
These results using an adaptive $\beta_{t}$ can be translated back to the
fixed $\beta$ setting by considering a distortion function
$\hat{d}(\tilde{a},e)=\overline{\Psi}^{-1}d(\tilde{a},e)$. Our choice of
expected squared distortion is supported by our theory; however, the question
of whether more efficient distortion functions exist in practice is an
interesting direction for future work.
## 7 Conclusion
A standard design principle of sequential decision-making is to build agents
that learn optimal actions. Recent work has highlighted scenarios wherein
problem constraints make the pursuit of optimal actions infeasible, forcing
the agent designer to craft a new target for an agent to learn. In this work,
we forge a new direction where agents are designed to fabricate their own
learning targets whose generic form is now the sole responsibility of the
agent designer. We highlight how rate-distortion theory gives rise to a
principled form for these learning targets, allowing practitioners to express
their preference between the ease of learning and the sub-optimality of the
resulting policy. We prove a general regret bound for this setting, contending
with the non-stationarity of learning targets, and empirically verify the
flexibility of our approach in yielding a broad spectrum of policies with
varying degrees of sub-optimality. Importantly, we find that an agent’s
ability to specify target actions that require fewer bits of information can
translate into greater efficiency in finding optimal policies relative to
Thompson sampling. Future work may find it fruitful to couple the Blahut-
Arimoto algorithm with more powerful strategies for information acquisition
(Russo & Van Roy, 2018a).
## Acknowledgements
Financial support from Army Research Office (ARO) grant W911NF2010055 is
gratefully acknowledged.
## References
* Abel et al. (2019) Abel, D., Arumugam, D., Asadi, K., Jinnai, Y., Littman, M. L., and Wong, L. L. State abstraction as compression in apprenticeship learning. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , volume 33, pp. 3134–3142, 2019.
* Agrawal & Goyal (2012) Agrawal, S. and Goyal, N. Analysis of Thompson sampling for the multi-armed bandit problem. In _Conference on learning theory_ , pp. 39–1, 2012.
  * Agrawal & Goyal (2013) Agrawal, S. and Goyal, N. Further optimal regret bounds for Thompson sampling. In _Artificial intelligence and statistics_ , pp. 99–107, 2013.
* Arimoto (1972) Arimoto, S. An algorithm for computing the capacity of arbitrary discrete memoryless channels. _IEEE Transactions on Information Theory_ , 18(1):14–20, 1972.
* Bacon et al. (2017) Bacon, P.-L., Harb, J., and Precup, D. The option-critic architecture. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , 2017.
* Barto & Mahadevan (2003) Barto, A. G. and Mahadevan, S. Recent advances in hierarchical reinforcement learning. _Discrete event dynamic systems_ , 13(1-2):41–77, 2003.
* Bellemare et al. (2016) Bellemare, M. G., Ostrovski, G., Guez, A., Thomas, P., and Munos, R. Increasing the action gap: New operators for reinforcement learning. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , volume 30, 2016.
* Berger (1971) Berger, T. _Rate Distortion Theory: A Mathematical Basis for Data Compression_. Prentice-Hall, 1971.
* Berry et al. (1997) Berry, D. A., Chen, R. W., Zame, A., Heath, D. C., and Shepp, L. A. Bandit problems with infinitely many arms. _Ann. Statist._ , 25(5):2103–2116, 10 1997. doi: 10.1214/aos/1069362389. URL https://doi.org/10.1214/aos/1069362389.
* Blahut (1972) Blahut, R. Computation of channel capacity and rate-distortion functions. _IEEE transactions on Information Theory_ , 18(4):460–473, 1972.
* Bonald & Proutiere (2013) Bonald, T. and Proutiere, A. Two-target algorithms for infinite-armed bandits with bernoulli rewards. In _Advances in Neural Information Processing Systems_ , pp. 2184–2192, 2013.
* Bubeck et al. (2011) Bubeck, S., Munos, R., Stoltz, G., and Szepesvári, C. X-armed bandits. _Journal of Machine Learning Research_ , 12(5), 2011.
* Bubeck et al. (2012) Bubeck, S., Cesa-Bianchi, N., et al. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. _Foundations and Trends® in Machine Learning_ , 5(1):1–122, 2012.
* Chapelle & Li (2011) Chapelle, O. and Li, L. An empirical evaluation of Thompson sampling. In _Advances in neural information processing systems_ , pp. 2249–2257, 2011.
* Cover & Thomas (2012) Cover, T. M. and Thomas, J. A. _Elements of information theory_. John Wiley & Sons, 2012.
* Csiszár (1974) Csiszár, I. On the computation of rate-distortion functions (corresp.). _IEEE Transactions on Information Theory_ , 20(1):122–124, 1974.
  * Csiszár & Tusnády (1984) Csiszár, I. and Tusnády, G. Information geometry and alternating minimization procedures. _Statistics and decisions_ , 1:205–237, 1984.
* Dayan & Hinton (1993) Dayan, P. and Hinton, G. E. Feudal reinforcement learning. In _Advances in Neural Information Processing Systems_ , 1993.
* Deshpande & Montanari (2012) Deshpande, Y. and Montanari, A. Linear bandits in high dimension and recommendation systems. In _2012 50th Annual Allerton Conference on Communication, Control, and Computing (Allerton)_ , pp. 1750–1754. IEEE, 2012.
* Dong & Van Roy (2018) Dong, S. and Van Roy, B. An information-theoretic analysis for Thompson sampling with many actions. In _Advances in Neural Information Processing Systems_ , pp. 4157–4165, 2018.
* Dwaracherla et al. (2020) Dwaracherla, V., Lu, X., Ibrahimi, M., Osband, I., Wen, Z., and Van Roy, B. Hypermodels for exploration. In _International Conference on Learning Representations_ , 2020.
* Farahmand (2011) Farahmand, A.-m. Action-gap phenomenon in reinforcement learning. _Advances in Neural Information Processing Systems_ , 24:172–180, 2011.
* Harb et al. (2018) Harb, J., Bacon, P.-L., Klissarov, M., and Precup, D. When waiting is not an option: Learning options with a deliberation cost. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , volume 32, 2018.
* James et al. (2018) James, R. G., Ellison, C. J., and Crutchfield, J. P. dit: a Python package for discrete information theory. _The Journal of Open Source Software_ , 3(25):738, 2018. doi: https://doi.org/10.21105/joss.00738.
* Jong et al. (2008) Jong, N. K., Hester, T., and Stone, P. The utility of temporal abstraction in reinforcement learning. Citeseer, 2008.
* Kaelbling (1993) Kaelbling, L. P. Hierarchical learning in stochastic domains: preliminary results. In _Proceedings of the Tenth International Conference on International Conference on Machine Learning_ , pp. 167–173, 1993.
* Kingma & Ba (2014) Kingma, D. P. and Ba, J. Adam: A method for stochastic optimization. _arXiv preprint arXiv:1412.6980_ , 2014.
* Kleinberg et al. (2008) Kleinberg, R., Slivkins, A., and Upfal, E. Multi-armed bandits in metric spaces. In _Proceedings of the fortieth annual ACM symposium on Theory of computing_ , pp. 681–690, 2008.
* Lattimore & Szepesvári (2019) Lattimore, T. and Szepesvári, C. An information-theoretic approach to minimax regret in partial monitoring. _arXiv preprint arXiv:1902.00470_ , 2019.
* Lattimore & Szepesvári (2020) Lattimore, T. and Szepesvári, C. _Bandit algorithms_. Cambridge University Press, 2020.
* Lu & Van Roy (2017) Lu, X. and Van Roy, B. Ensemble sampling. In _Advances in neural information processing systems_ , pp. 3258–3266, 2017.
* Matz & Duhamel (2004) Matz, G. and Duhamel, P. Information geometric formulation and interpretation of accelerated Blahut-Arimoto-type algorithms. In _Information theory workshop_ , pp. 66–70. IEEE, 2004.
* Nachum et al. (2018) Nachum, O., Gu, S. S., Lee, H., and Levine, S. Data-efficient hierarchical reinforcement learning. In _Advances in neural information processing systems_ , pp. 3303–3313, 2018.
* Naja et al. (2009) Naja, Z., Alberge, F., and Duhamel, P. Geometrical interpretation and improvements of the Blahut-Arimoto’s algorithm. In _2009 IEEE International Conference on Acoustics, Speech and Signal Processing_ , pp. 2505–2508. IEEE, 2009.
* Osband et al. (2016) Osband, I., Blundell, C., Pritzel, A., and Van Roy, B. Deep exploration via bootstrapped dqn. In _Advances in neural information processing systems_ , pp. 4026–4034, 2016.
* Osband et al. (2019) Osband, I., Van Roy, B., Russo, D. J., and Wen, Z. Deep exploration via randomized value functions. _Journal of Machine Learning Research_ , 20(124):1–62, 2019.
* Rusmevichientong & Tsitsiklis (2010) Rusmevichientong, P. and Tsitsiklis, J. N. Linearly parameterized bandits. _Mathematics of Operations Research_ , 35(2):395–411, 2010.
* Russo & Van Roy (2014) Russo, D. and Van Roy, B. Learning to optimize via information-directed sampling. In _Advances in Neural Information Processing Systems_ , pp. 1583–1591, 2014.
* Russo & Van Roy (2016) Russo, D. and Van Roy, B. An information-theoretic analysis of Thompson sampling. _The Journal of Machine Learning Research_ , 17(1):2442–2471, 2016.
* Russo & Van Roy (2018a) Russo, D. and Van Roy, B. Learning to optimize via information-directed sampling. _Operations Research_ , 66(1):230–252, 2018a.
* Russo & Van Roy (2018b) Russo, D. and Van Roy, B. Satisficing in time-sensitive bandit learning. _arXiv preprint arXiv:1803.02855_ , 2018b.
* Russo et al. (2018) Russo, D. J., Van Roy, B., Kazerouni, A., Osband, I., and Wen, Z. A tutorial on Thompson sampling. _Foundations and Trends® in Machine Learning_ , 11(1):1–96, 2018.
* Ryzhov et al. (2012) Ryzhov, I. O., Powell, W. B., and Frazier, P. I. The knowledge gradient algorithm for a general class of online learning problems. _Operations Research_ , 60(1):180–195, 2012.
* Sayir (2000) Sayir, J. Iterating the Arimoto-Blahut algorithm for faster convergence. In _2000 IEEE International Symposium on Information Theory (Cat. No. 00CH37060)_ , pp. 235. IEEE, 2000.
* Shannon (1959) Shannon, C. E. Coding theorems for a discrete source with a fidelity criterion. _IRE Nat. Conv. Rec., March 1959_ , 4:142–163, 1959.
* Singh et al. (2010) Singh, S., Lewis, R. L., Sorg, J., Barto, A. G., and Helou, A. On separating agent designer goals from agent goals: Breaking the preferences–parameters confound, 2010.
* Sutton et al. (1999) Sutton, R. S., Precup, D., and Singh, S. Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. _Artificial Intelligence_ , 1999.
* Thompson (1933) Thompson, W. R. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. _Biometrika_ , 25(3/4):285–294, 1933.
* Vezhnevets et al. (2017) Vezhnevets, A. S., Osindero, S., Schaul, T., Heess, N., Jaderberg, M., Silver, D., and Kavukcuoglu, K. Feudal networks for hierarchical reinforcement learning. In _Proceedings of the 34th International Conference on Machine Learning-Volume 70_ , pp. 3540–3549, 2017.
* Vontobel et al. (2008) Vontobel, P. O., Kavcic, A., Arnold, D. M., and Loeliger, H.-A. A generalization of the Blahut–Arimoto algorithm to finite-state channels. _IEEE Transactions on Information Theory_ , 54(5):1887–1918, 2008.
* Wang et al. (2008) Wang, Y., Audibert, J., and Munos, R. Algorithms for infinitely many-armed bandits. In _NIPS_ , 2008.
* Wen et al. (2020) Wen, Z., Precup, D., Ibrahimi, M., Barreto, A., Van Roy, B., and Singh, S. On efficiency in hierarchical reinforcement learning. _Advances in Neural Information Processing Systems_ , 33, 2020.
* Yu (2010) Yu, Y. Squeezing the Arimoto–Blahut algorithm for faster convergence. _IEEE Transactions on Information Theory_ , 56(7):3149–3157, 2010.
## Appendix A Related Work
Our work focuses on principled Bayesian exploration wherein an agent maintains
a posterior distribution over its environment (Chapelle & Li, 2011; Agrawal &
Goyal, 2012, 2013; Russo & Van Roy, 2016). As complete knowledge of the
environment (the vector of mean rewards at each arm, for example) would endow
an agent with prescience of optimal actions, efficient exploration amounts to
the resolution of an agent’s epistemic uncertainty about the environment. A
natural approach for resolving such uncertainty is Thompson sampling which
employs probability matching in each time period to sample actions according
to the probability of being optimal (Thompson, 1933; Agrawal & Goyal, 2012,
2013; Russo & Van Roy, 2016; Russo et al., 2018). Chapelle & Li (2011)
kickstarted renewed interest in Thompson sampling through empirical successes
in online advertisement and news recommendation applications. While a
corresponding regret bound was developed in subsequent work (Agrawal & Goyal,
2012, 2013), our paper follows suit with Russo & Van Roy (2016) who introduced
an elegant, information-theoretic analysis of Thompson sampling; their
technique has been subsequently studied and extended to a variety of other
problem settings (Russo & Van Roy, 2018a, b; Dong & Van Roy, 2018) and
applications (Lattimore & Szepesvári, 2019; Osband et al., 2019). In this
work, we also leverage the information-theoretic analysis of Russo & Van Roy
(2016) while additionally incorporating ideas from rate-distortion theory
(Shannon, 1959). Unlike prior work exploring the intersection of sequential
decision-making and rate-distortion theory, we are not concerned with state
abstraction (Abel et al., 2019) nor are we concerned with an agent exclusively
targeting optimal actions through some compressive statistic of the
environment (Dong & Van Roy, 2018).
A core novelty of this paper is leveraging the Blahut-Arimoto algorithm
(Arimoto, 1972; Blahut, 1972) for the efficient computation of rate-distortion
functions. The algorithm was originally developed for the dual problem of
computing the channel-capacity function (Arimoto, 1972) and was soon after
extended to handle computation of the rate-distortion function as well
(Blahut, 1972). An initial study of the algorithm’s global convergence
properties (for discrete random variables) was done by Arimoto (1972) and
further explored by Csiszár (1974); Csiszár & Tusnády (1984). While there have
been many variants of the Blahut-Arimoto algorithm introduced over the years
(Sayir, 2000; Matz & Duhamel, 2004; Vontobel et al., 2008; Naja et al., 2009;
Yu, 2010), we find that the simplicity of the original algorithm is suitable
both in theory and in practice.
The goal of finding target actions with a tolerable degree of sub-optimality
deviates from the more traditional objective of identifying optimal actions.
As previously mentioned, this setting can implicitly arise when faced with a
continuous action space (Bubeck et al., 2011; Kleinberg et al., 2008;
Rusmevichientong & Tsitsiklis, 2010), a fixed time horizon (Ryzhov et al.,
2012; Deshpande & Montanari, 2012), or an infinite-armed bandit problem (Berry
et al., 1997; Wang et al., 2008; Bonald & Proutiere, 2013). Russo & Van Roy
(2018b) attempt to rectify some shortcomings of these works by introducing a
discounted notion of regret that emphasizes initial stages of learning and
measures performance shortfall relative to satisficing actions, instead of
optimal ones. Moreover, the analysis of their satisficing Thompson sampling
algorithm inherits the benefits of flexibility and generality from the
analogous information-theoretic results for Thompson sampling (Russo & Van
Roy, 2016). In this work, we obviate the need for the manual specification of
satisficing actions, instead relying on direct computation of the rate-
distortion function to adaptively compute the distribution over satisficing
actions in each time period that achieves the rate-distortion limit.
The idea of an agent that learns to designate and achieve its own goals bears
close resemblance to hierarchical agents studied in the reinforcement-learning
literature (Kaelbling, 1993; Dayan & Hinton, 1993; Sutton et al., 1999; Barto
& Mahadevan, 2003). In recent years, the two most-popular paradigms for
hierarchical reinforcement learning have been feudal reinforcement learning
(Dayan & Hinton, 1993; Nachum et al., 2018) and options (Sutton et al., 1999;
Jong et al., 2008; Bacon et al., 2017; Wen et al., 2020). Feudal
reinforcement-learning agents consist of an internal managerial
hierarchy wherein the action space of managers represents sub-goals for
workers in the subsequent level of the hierarchy; when workers can be quickly
trained to follow the directed sub-goals of their managers (without regard for
the optimality of doing so), the top-most manager can more efficiently
synthesize an optimal policy. Options provide a coherent abstraction for
expressing various temporally-extended behaviors or skills, typically
replacing or augmenting the original action space of the agent (Jong et al.,
2008). While there is great empirical support for the performance of feudal
learning and options when the goal representation or option set is computed
and fixed a priori, recent work in learning such components online often
relies on laborious tuning and heuristics to achieve success (Vezhnevets et
al., 2017; Bacon et al., 2017; Harb et al., 2018). In contrast, the main
contribution of this work is to build a principled approach for learning such
targets, albeit with a restricted focus to the simpler setting of bandit
learning. We leave the exciting question of how the ideas presented here may
scale up to tackle the challenges of hierarchical reinforcement learning to
future work.
## Appendix B Blahut-Arimoto Satisficing Thompson Sampling
Here we present the full BLASTS algorithm with inline comments for clarity.
Algorithm 2 Blahut-Arimoto Satisficing Thompson Sampling (BLASTS)
Input: Lagrange multiplier $\beta\in\mathbb{R}_{\geq 0}$, Blahut-Arimoto
iterations $K\in\mathbb{N}$, Posterior samples $Z\in\mathbb{N}$
$H_{0}=\\{\\}$
for $t=0$ to $T-1$ do
$e_{1},\ldots,e_{Z}\sim\mathbb{P}(\mathcal{E}\in\cdot|H_{t})$ {Finite sample
from current belief over $\mathcal{E}$}
$d(a,e|H_{t})=\mathbb{E}[(\overline{r}(A_{\star})-\overline{r}(a))^{2}|\mathcal{E}=e,H_{t}]$
{Distortion function for target action $\tilde{A}_{t}$}
$\tilde{p}_{0}(a|e_{z})=\frac{1}{|\mathcal{A}|},\forall
a\in\mathcal{A},z\in[Z]$
for $k=0$ to $K-1$ do
$\tilde{q}_{k}(a)=\mathbb{E}_{t}[\tilde{p}_{k}(a|\mathcal{E})],\forall
a\in\mathcal{A}$ {Run the Blahut-Arimoto algorithm}
$\tilde{p}_{k+1}(a|e_{z})=\frac{\tilde{q}_{k}(a)\exp(-\beta d(a,e_{z}\mid
H_{t}))}{\sum_{a^{\prime}\in\mathcal{A}}\tilde{q}_{k}(a^{\prime})\exp(-\beta
d(a^{\prime},e_{z}\mid H_{t}))},\forall a\in\mathcal{A},z\in[Z]$
end for
$\hat{z}\sim\text{Uniform}(Z)$ {Select posterior sample uniformly at random}
$A_{t}\sim\tilde{p}_{K}(a|e_{\hat{z}})$ {Probability matching}
$H_{t+1}=H_{t}\cup\\{(A_{t},O_{t+1})\\}$
$R_{t+1}=r(A_{t},O_{t+1})$
end for
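A minimal Python sketch of one action-selection step of Algorithm 2 follows; the vectorized layout, hyperparameter names, and the choice of base $e$ (rather than the base-2 arithmetic used in our experiments) are our own simplifications:

```python
import numpy as np

def blasts_step(samples: np.ndarray, beta: float, K: int = 100,
                tol: float = 1e-6, rng=None) -> int:
    """One action-selection step of BLASTS (sketch).

    samples: (Z, A) array of posterior samples of mean rewards, one row per
    sampled environment e_z drawn from the current belief over E.
    """
    rng = np.random.default_rng() if rng is None else rng
    Z, A = samples.shape
    # Squared-regret distortion d(a, e_z | H_t) for every sample and action.
    d = (samples.max(axis=1, keepdims=True) - samples) ** 2
    p = np.full((Z, A), 1.0 / A)              # p_0(a | e_z) uniform
    prev_D = np.inf
    for _ in range(K):                        # Blahut-Arimoto iterations
        q = p.mean(axis=0)                    # q_k(a) = E_t[p_k(a | E)]
        w = q * np.exp(-beta * d)             # unnormalized p_{k+1}(a | e_z)
        p = w / w.sum(axis=1, keepdims=True)
        D = (p * d).sum(axis=1).mean()        # average distortion
        if abs(prev_D - D) < tol:             # early stopping on distortion
            break
        prev_D = D
    z_hat = rng.integers(Z)                   # posterior sample, uniformly
    return int(rng.choice(A, p=p[z_hat]))     # probability matching

# Example usage with stand-in posterior samples over 10 arms.
rng = np.random.default_rng(0)
print(blasts_step(rng.uniform(0, 1, size=(64, 10)), beta=50.0, rng=rng))
```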
## Appendix C Discounted Regret Analysis
Abstracting away the precise details of BLASTS, we can consider a coarsely-
defined algorithm that selects each action $A_{t}$ as follows: (1) identify a
target action $\tilde{A}_{t}$ that minimizes a loss function
$\mathcal{L}_{\beta}(\cdot|H_{t})$ and (2) sample
$A_{t}\sim\mathbb{P}(\tilde{A}_{t}=\cdot|H_{t})$. Recall that the loss
function is defined, for any target action $\tilde{A}$, by
$\mathcal{L}_{\beta}(\tilde{A}|H_{t})=\mathbb{I}_{t}(\mathcal{E};\tilde{A})+\beta\mathbb{E}_{t}\left[(\overline{r}(A_{\star})-\overline{r}(\tilde{A}))^{2}\right].$
The following result helps establish that the expected loss of any target
action decreases as observations accumulate.
###### Lemma 3.
For all $\beta>0$, target actions $\tilde{A}$, and $t=0,1,2,\ldots$,
$\mathbb{E}_{t}[\mathcal{L}_{\beta}(\tilde{A}|H_{t+1})]=\mathcal{L}_{\beta}(\tilde{A}|H_{t})-\mathbb{I}_{t}(\tilde{A};(A_{t},O_{t+1})).$
###### Proof.
Recall that $H_{t+1}=(H_{t},A_{t},O_{t+1})$. By definition of a target action,
we have that $\forall t,H_{t}\perp\tilde{A}|\mathcal{E}$, which implies
$\mathbb{I}_{t}((A_{t},O_{t+1});\tilde{A}|\mathcal{E})=0$. Thus,
$\mathbb{I}_{t}(\mathcal{E};\tilde{A})=\mathbb{I}_{t}(\mathcal{E};\tilde{A})+\mathbb{I}_{t}((A_{t},O_{t+1});\tilde{A}|\mathcal{E})=\mathbb{I}_{t}(\mathcal{E},(A_{t},O_{t+1});\tilde{A})$
by the chain rule of mutual information. Applying the chain rule once again,
we have,
$\mathbb{I}_{t}(\mathcal{E};\tilde{A})=\mathbb{I}_{t}(\mathcal{E},(A_{t},O_{t+1});\tilde{A})=\mathbb{I}_{t}(\mathcal{E};\tilde{A}|A_{t},O_{t+1})+\mathbb{I}_{t}(\tilde{A};(A_{t},O_{t+1})).$
It follows that
$\displaystyle\mathbb{E}_{t}[\mathcal{L}_{\beta}(\tilde{A}|H_{t+1})]=$
$\displaystyle\mathbb{E}[\mathcal{L}_{\beta}(\tilde{A}|H_{t+1})|H_{t}]$
$\displaystyle=$
$\displaystyle\mathbb{E}\left[\mathbb{I}_{t}(\mathcal{E};\tilde{A}|A_{t},O_{t+1})+\beta\mathbb{E}\left[(\overline{r}(A_{\star})-\overline{r}(\tilde{A}))^{2}|H_{t},A_{t},O_{t+1}\right]\Big{|}H_{t}\right]$
$\displaystyle=$
$\displaystyle\mathbb{E}_{t}\left[\mathbb{I}_{t}(\mathcal{E};\tilde{A}|A_{t},O_{t+1})\right]+\beta\mathbb{E}_{t}\left[(\overline{r}(A_{\star})-\overline{r}(\tilde{A}))^{2}\right]$
$\displaystyle=$
$\displaystyle\mathbb{E}_{t}\left[\mathbb{I}_{t}(\mathcal{E};\tilde{A})-\mathbb{I}_{t}(\tilde{A};(A_{t},O_{t+1}))\right]+\beta\mathbb{E}_{t}\left[(\overline{r}(A_{\star})-\overline{r}(\tilde{A}))^{2}\right]$
$\displaystyle=$
$\displaystyle\mathbb{I}_{t}(\mathcal{E};\tilde{A})+\beta\mathbb{E}_{t}\left[(\overline{r}(A_{\star})-\overline{r}(\tilde{A}))^{2}\right]-\mathbb{I}_{t}(\tilde{A};(A_{t},O_{t+1}))$
$\displaystyle=$
$\displaystyle\mathcal{L}_{\beta}(\tilde{A}|H_{t})-\mathbb{I}_{t}(\tilde{A};(A_{t},O_{t+1})).$
∎
As a consequence of the above, the following lemma assures that expected loss
decreases as target actions are adapted. It also suggests that there are two
sources of decrease in loss: (1) a possible decrease in shifting from target
$\tilde{A}_{t}$ to $\tilde{A}_{t+1}$ and (2) a decrease of
$\mathbb{I}_{t}(\tilde{A}_{t};(A_{t},O_{t+1}))$ from observing the interaction
$(A_{t},O_{t+1})$. The former reflects the agent’s improved ability to select
a suitable target, and the latter captures information gained about the
previous target. We omit the proof as the lemma follows immediately from Lemma
1 and the fact that $\tilde{A}_{t+1}$ minimizes
$\mathcal{L}_{\beta}(\tilde{A}_{t+1}|H_{t+1})$, by definition.
###### Lemma 4.
For all $\beta>0$, target actions $\tilde{A}$, and $t=0,1,2,\ldots$,
$\mathbb{E}[\mathcal{L}_{\beta}(\tilde{A}_{t+1}|H_{t+1})|H_{t}]\leq\mathbb{E}[\mathcal{L}_{\beta}(\tilde{A}_{t}|H_{t+1})|H_{t}]=\mathcal{L}_{\beta}(\tilde{A}_{t}|H_{t})-\mathbb{I}_{t}(\tilde{A}_{t};(A_{t},O_{t+1})).$
Note that, for all $t$, loss is non-negative and bounded by mutual information
between the optimal action and the environment (since optimal actions incur a
distortion of 0):
$\mathcal{L}_{\beta}(\tilde{A}_{t}|H_{t})\leq\mathcal{L}_{\beta}(A_{\star}|H_{t})=\mathbb{I}_{t}(\mathcal{E};A_{\star}).$
We therefore have the following corollary.
###### Corollary 4.
For all $\beta>0$ and $\tau=0,1,2,\ldots$,
$\mathbb{E}\left[\sum_{t=\tau}^{\infty}\mathbb{I}_{t}(\tilde{A}_{t};(A_{t},O_{t+1}))\Big{|}H_{\tau}\right]\leq\mathbb{I}_{\tau}(\mathcal{E};A_{\star}).$
We omit the proof of Corollary 1 as it follows directly by applying the
preceding inequality to the following generalization that applies to any
target action.
###### Corollary 5.
For all $\beta>0$, target actions $\tilde{A}$, and $\tau=0,1,2,\ldots$,
$\mathbb{E}_{\tau}\left[\sum_{t=\tau}^{\infty}\mathbb{I}_{t}(\tilde{A}_{t};(A_{t},O_{t+1}))\right]\leq\mathcal{L}_{\beta}(\tilde{A}|H_{\tau}).$
###### Proof.
$\displaystyle\mathbb{E}_{\tau}\left[\sum_{t=\tau}^{\infty}\mathbb{I}_{t}(\tilde{A}_{t};(A_{t},O_{t+1}))\right]$
$\displaystyle\leq\mathbb{E}_{\tau}\left[\sum_{t=\tau}^{\infty}\mathcal{L}_{\beta}(\tilde{A}_{t}|H_{t})-\mathbb{E}_{t}\left[\mathcal{L}_{\beta}(\tilde{A}_{t+1}|H_{t+1})\right]\right]$
$\displaystyle=\sum_{t=\tau}^{\infty}\mathbb{E}_{\tau}\left[\mathcal{L}_{\beta}(\tilde{A}_{t}|H_{t})\right]-\mathbb{E}_{\tau}\left[\mathbb{E}_{t}\left[\mathcal{L}_{\beta}(\tilde{A}_{t+1}|H_{t+1})\right]\right]$
$\displaystyle=\mathbb{E}_{\tau}\left[\mathcal{L}_{\beta}(\tilde{A}_{\tau}|H_{\tau})\right]+\sum_{t=\tau+1}^{\infty}\mathbb{E}_{\tau}\left[\mathcal{L}_{\beta}(\tilde{A}_{t}|H_{t})\right]-\sum_{t=\tau}^{\infty}\mathbb{E}_{\tau}\left[\mathcal{L}_{\beta}(\tilde{A}_{t+1}|H_{t+1})\right]$
$\displaystyle=\mathcal{L}_{\beta}(\tilde{A}_{\tau}|H_{\tau})+\sum_{t=\tau+1}^{\infty}\mathbb{E}_{\tau}\left[\mathcal{L}_{\beta}(\tilde{A}_{t}|H_{t})\right]-\sum_{t=\tau+1}^{\infty}\mathbb{E}_{\tau}\left[\mathcal{L}_{\beta}(\tilde{A}_{t}|H_{t})\right]$
$\displaystyle=\mathcal{L}_{\beta}(\tilde{A}_{\tau}|H_{\tau})\leq\mathcal{L}_{\beta}(\tilde{A}|H_{\tau})$
where the steps follow from Lemma 2, linearity of expectation, the tower
property, and the fact that $\tilde{A}_{\tau}$ is the minimizer of
$\mathcal{L}_{\beta}(\cdot|H_{\tau})$, by definition. ∎
Let $\Gamma$ be a constant such that
$\Gamma\geq\frac{\mathbb{E}_{t}[\overline{r}(\tilde{A})-\overline{r}(A)]^{2}}{\mathbb{I}_{t}(\tilde{A};A,O)},$
for all histories $H_{t}$, target actions $\tilde{A}$, if the executed action
$A$ is an independent sample drawn from the marginal distribution of
$\tilde{A}$, and $O$ is the resulting observation. Thus, $\Gamma$ is an upper
bound on the information ratio (Russo & Van Roy, 2014, 2016, 2018a), for which
existing information-theoretic analyses of worst-case finite-arm and linear
bandits provide explicit values of $\Gamma$ that satisfy this condition.
We can now establish our main results. We omit the proof of Theorem 1 as it is
a special case of our subsequent result.
###### Theorem 4.
If $\beta=\frac{1-\gamma^{2}}{(1-\gamma)^{2}\Gamma}$ then, for all
$\tau=0,1,2,\ldots$,
$\mathbb{E}_{\tau}\left[\sum_{t=\tau}^{\infty}\gamma^{t-\tau}(\overline{r}(A_{\star})-\overline{r}(A_{t}))\right]\leq
2\sqrt{\frac{\Gamma\mathbb{I}_{\tau}(\mathcal{E};A_{\star})}{1-\gamma^{2}}}.$
In a complex environment with many actions,
$\mathbb{I}(\mathcal{E};A_{\star})$ can be extremely large, rendering the
above result somewhat vacuous under such circumstances. The next result offers
a generalization, establishing a regret bound that can depend on the
information content of any target action, including of course those that are
much simpler than $A_{\star}$.
###### Theorem 5.
If $\beta=\frac{1-\gamma^{2}}{(1-\gamma)^{2}\Gamma}$ then, for all target
actions $\tilde{A}$ and $\tau=0,1,2,\ldots$,
$\mathbb{E}_{\tau}\left[\sum_{t=\tau}^{\infty}\gamma^{t-\tau}(\overline{r}(A_{\star})-\overline{r}(A_{t}))\right]\leq
2\sqrt{\frac{\Gamma\mathbb{I}_{\tau}(\mathcal{E};\tilde{A})}{1-\gamma^{2}}}+\frac{2\epsilon}{1-\gamma},$
where
$\epsilon=\sqrt{\mathbb{E}[(\overline{r}(A_{\star})-\overline{r}(\tilde{A}))^{2}|H_{\tau}]}$.
###### Proof.
From the inequalities satisfied by $\Gamma$, the Cauchy-Schwarz inequality,
and Corollary 2, we have
$\displaystyle\mathbb{E}_{\tau}\left[\sum_{t=\tau}^{\infty}\gamma^{t-\tau}(\overline{r}(\tilde{A}_{t})-\overline{r}(A_{t}))\right]\leq$
$\displaystyle\mathbb{E}_{\tau}\left[\sum_{t=\tau}^{\infty}\gamma^{t-\tau}\sqrt{\Gamma\mathbb{I}_{t}(\tilde{A}_{t};(A_{t},O_{t+1}))}\right]$
$\displaystyle\leq$
$\displaystyle\sqrt{\sum_{t=\tau}^{\infty}\gamma^{2(t-\tau)}\Gamma}\sqrt{\sum_{t=\tau}^{\infty}\mathbb{E}_{\tau}\left[\mathbb{I}_{t}(\tilde{A}_{t};(A_{t},O_{t+1}))\right]}$
$\displaystyle\leq$
$\displaystyle\sqrt{\Gamma\mathcal{L}_{\beta}(\tilde{A}|H_{\tau})\sum_{t=0}^{\infty}\gamma^{2t}}$
$\displaystyle=$
$\displaystyle\sqrt{\frac{\Gamma\mathcal{L}_{\beta}(\tilde{A}|H_{\tau})}{1-\gamma^{2}}}.$
Since $\mathbb{I}_{t}(\mathcal{E};\tilde{A}_{t})\geq 0$,
$\sqrt{\mathbb{E}_{t}\left[(\overline{r}(A_{\star})-\overline{r}(\tilde{A}_{t}))^{2}\right]}\leq(1-\gamma)\sqrt{\frac{\Gamma\mathcal{L}_{\beta}(\tilde{A}_{t}|H_{t})}{1-\gamma^{2}}}.$
Further, applying Jensen’s inequality to the left-hand side and using the fact
that $\tilde{A}_{t}$ minimizes $\mathcal{L}_{\beta}(\tilde{A}_{t}|H_{t})$ on
the right-hand side,
$\mathbb{E}_{t}\left[\overline{r}(A_{\star})-\overline{r}(\tilde{A}_{t})\right]\leq(1-\gamma)\sqrt{\frac{\Gamma\mathcal{L}_{\beta}(\tilde{A}|H_{t})}{1-\gamma^{2}}}.$
Lemma 1 implies that
$\mathbb{E}_{\tau}[\mathcal{L}_{\beta}(\tilde{A}|H_{t})]\leq\mathcal{L}_{\beta}(\tilde{A}|H_{\tau}),$
for all $t\geq\tau$, and therefore, by Jensen’s inequality,
$\mathbb{E}_{\tau}\left[\overline{r}(A_{\star})-\overline{r}(\tilde{A}_{t})\right]\leq(1-\gamma)\mathbb{E}_{\tau}\left[\sqrt{\frac{\Gamma\mathcal{L}_{\beta}(\tilde{A}|H_{t})}{1-\gamma^{2}}}\right]\leq(1-\gamma)\sqrt{\frac{\Gamma\mathbb{E}_{\tau}\left[\mathcal{L}_{\beta}(\tilde{A}|H_{t})\right]}{1-\gamma^{2}}}\leq(1-\gamma)\sqrt{\frac{\Gamma\mathcal{L}_{\beta}(\tilde{A}|H_{\tau})}{1-\gamma^{2}}}.$
It follows that
$\displaystyle\mathbb{E}_{\tau}\left[\sum_{t=\tau}^{\infty}\gamma^{t-\tau}(\overline{r}(A_{\star})-\overline{r}(\tilde{A}_{t}))\right]\leq$
$\displaystyle\sqrt{\frac{\Gamma\mathcal{L}_{\beta}(\tilde{A}|H_{\tau})}{1-\gamma^{2}}}$
$\displaystyle\leq$
$\displaystyle\sqrt{\frac{\Gamma(\mathbb{I}_{\tau}(\mathcal{E};\tilde{A})+\beta\epsilon^{2})}{1-\gamma^{2}}}$
$\displaystyle\leq$
$\displaystyle\sqrt{\frac{\Gamma\mathbb{I}_{\tau}(\mathcal{E};\tilde{A})}{1-\gamma^{2}}}+\frac{\epsilon}{1-\gamma}.$
Applying these same steps, we complete the above bound as
$\mathbb{E}_{\tau}\left[\sum_{t=\tau}^{\infty}\gamma^{t-\tau}(\overline{r}(\tilde{A}_{t})-\overline{r}(A_{t}))\right]\leq\sqrt{\frac{\Gamma\mathbb{I}_{\tau}(\mathcal{E};\tilde{A})}{1-\gamma^{2}}}+\frac{\epsilon}{1-\gamma}.$
Putting everything together, we have
$\displaystyle\mathbb{E}_{\tau}\left[\sum_{t=\tau}^{\infty}\gamma^{t-\tau}(\overline{r}(A_{\star})-\overline{r}(A_{t}))\right]$
$\displaystyle=\mathbb{E}_{\tau}\left[\sum_{t=\tau}^{\infty}\gamma^{t-\tau}(\overline{r}(A_{\star})-\overline{r}(\tilde{A}_{t})+\overline{r}(\tilde{A}_{t})-\overline{r}(A_{t}))\right]$
$\displaystyle=\mathbb{E}_{\tau}\left[\sum_{t=\tau}^{\infty}\gamma^{t-\tau}(\overline{r}(A_{\star})-\overline{r}(\tilde{A}_{t}))\right]+\mathbb{E}_{\tau}\left[\sum_{t=\tau}^{\infty}\gamma^{t-\tau}(\overline{r}(\tilde{A}_{t})-\overline{r}(A_{t}))\right]$
$\displaystyle\leq
2\sqrt{\frac{\Gamma\mathbb{I}_{\tau}(\mathcal{E};\tilde{A})}{1-\gamma^{2}}}+\frac{2\epsilon}{1-\gamma}.$
∎
## Appendix D Undiscounted Regret Analysis
In this section, we derive a variant of Theorem 2 where performance shortfall
is measured by the expected cumulative regret across a finite horizon.
Consider a fixed time horizon $T$ and observe the analogous result to
Corollary 2:
###### Corollary 6.
For all $\beta>0$, target actions $\tilde{A}$, and $\tau=0,1,2,\ldots$,
$\mathbb{E}_{\tau}\left[\sum_{t=\tau}^{T+\tau}\mathbb{I}_{t}(\tilde{A}_{t};(A_{t},O_{t+1}))\right]\leq\mathcal{L}_{\beta}(\tilde{A}|H_{\tau}).$
###### Proof.
$\displaystyle\mathbb{E}_{\tau}\left[\sum_{t=\tau}^{T+\tau}\mathbb{I}_{t}(\tilde{A}_{t};(A_{t},O_{t+1}))\right]$
$\displaystyle\leq\mathbb{E}_{\tau}\left[\sum_{t=\tau}^{T+\tau}\mathcal{L}_{\beta}(\tilde{A}_{t}|H_{t})-\mathbb{E}_{t}\left[\mathcal{L}_{\beta}(\tilde{A}_{t+1}|H_{t+1})\right]\right]$
$\displaystyle=\sum_{t=\tau}^{T+\tau}\mathbb{E}_{\tau}\left[\mathcal{L}_{\beta}(\tilde{A}_{t}|H_{t})\right]-\mathbb{E}_{\tau}\left[\mathbb{E}_{t}\left[\mathcal{L}_{\beta}(\tilde{A}_{t+1}|H_{t+1})\right]\right]$
$\displaystyle=\mathbb{E}_{\tau}\left[\mathcal{L}_{\beta}(\tilde{A}_{\tau}|H_{\tau})\right]+\sum_{t=\tau+1}^{T+\tau}\mathbb{E}_{\tau}\left[\mathcal{L}_{\beta}(\tilde{A}_{t}|H_{t})\right]-\sum_{t=\tau}^{T+\tau}\mathbb{E}_{\tau}\left[\mathcal{L}_{\beta}(\tilde{A}_{t+1}|H_{t+1})\right]$
$\displaystyle=\mathcal{L}_{\beta}(\tilde{A}_{\tau}|H_{\tau})+\sum_{t=\tau+1}^{T+\tau}\mathbb{E}_{\tau}\left[\mathcal{L}_{\beta}(\tilde{A}_{t}|H_{t})\right]-\sum_{t=\tau+1}^{T+\tau+1}\mathbb{E}_{\tau}\left[\mathcal{L}_{\beta}(\tilde{A}_{t}|H_{t})\right]$
$\displaystyle=\mathcal{L}_{\beta}(\tilde{A}_{\tau}|H_{\tau})-\mathbb{E}_{\tau}\left[\mathcal{L}_{\beta}(\tilde{A}_{T+\tau+1}|H_{T+\tau+1})\right]$
$\displaystyle\leq\mathcal{L}_{\beta}(\tilde{A}_{\tau}|H_{\tau})\leq\mathcal{L}_{\beta}(\tilde{A}|H_{\tau})$
where the steps follow from Lemma 2, linearity of expectation, the tower
property, the non-negativity $\mathcal{L}_{\beta}(\tilde{A}_{t}|H_{t})\geq
0$, and the fact that $\tilde{A}_{\tau}$ is the minimizer of
$\mathcal{L}_{\beta}(\cdot|H_{\tau})$, by definition. ∎
With Corollary 3, we may introduce the undiscounted analog to Theorem 2:
###### Theorem 6.
If $\beta=\frac{T}{\Gamma}$ then, for all target actions $\tilde{A}$ and
$\tau=0,1,2,\ldots$,
$\mathbb{E}_{\tau}\left[\sum_{t=\tau}^{T+\tau}\overline{r}(A_{\star})-\overline{r}(A_{t})\right]\leq
2\sqrt{\Gamma T\mathbb{I}_{\tau}(\mathcal{E};\tilde{A})}+2T\epsilon,$
where
$\epsilon=\sqrt{\mathbb{E}[(\overline{r}(A_{\star})-\overline{r}(\tilde{A}))^{2}|H_{\tau}]}$.
###### Proof.
From the inequalities satisfied by $\Gamma$, the Cauchy-Schwarz inequality,
and Corollary 3, we have
$\displaystyle\mathbb{E}_{\tau}\left[\sum_{t=\tau}^{T+\tau}\overline{r}(\tilde{A}_{t})-\overline{r}(A_{t})\right]\leq$
$\displaystyle\sqrt{\Gamma}\mathbb{E}_{\tau}\left[\sum_{t=\tau}^{T+\tau}\sqrt{\mathbb{I}_{t}(\tilde{A}_{t};(A_{t},O_{t+1}))}\right]$
$\displaystyle\leq$ $\displaystyle\sqrt{\Gamma
T\sum_{t=\tau}^{T+\tau}\mathbb{E}_{\tau}\left[\mathbb{I}_{t}(\tilde{A}_{t};(A_{t},O_{t+1}))\right]}$
$\displaystyle\leq$ $\displaystyle\sqrt{\Gamma
T\mathcal{L}_{\beta}(\tilde{A}|H_{\tau})}.$
Since $\mathbb{I}_{t}(\mathcal{E};\tilde{A}_{t})\geq 0$,
$\sqrt{\mathbb{E}_{t}\left[(\overline{r}(A_{\star})-\overline{r}(\tilde{A}_{t}))^{2}\right]}\leq
T^{-1}\sqrt{\Gamma T\mathcal{L}_{\beta}(\tilde{A}_{t}|H_{t})}.$
Further, applying Jensen’s inequality to the left-hand side and using the fact
that $\tilde{A}_{t}$ minimizes $\mathcal{L}_{\beta}(\tilde{A}_{t}|H_{t})$ on
the right-hand side,
$\mathbb{E}_{t}\left[\overline{r}(A_{\star})-\overline{r}(\tilde{A}_{t})\right]\leq
T^{-1}\sqrt{\Gamma T\mathcal{L}_{\beta}(\tilde{A}|H_{t})}.$
Lemma 1 implies that
$\mathbb{E}_{\tau}[\mathcal{L}_{\beta}(\tilde{A}|H_{t})]\leq\mathcal{L}_{\beta}(\tilde{A}|H_{\tau}),$
for all $t\geq\tau$, and therefore, by Jensen’s inequality,
$\mathbb{E}_{\tau}\left[\overline{r}(A_{\star})-\overline{r}(\tilde{A}_{t})\right]\leq
T^{-1}\mathbb{E}_{\tau}\left[\sqrt{\Gamma
T\mathcal{L}_{\beta}(\tilde{A}|H_{t})}\right]\leq T^{-1}\sqrt{\Gamma
T\mathbb{E}_{\tau}\left[\mathcal{L}_{\beta}(\tilde{A}|H_{t})\right]}\leq
T^{-1}\sqrt{\Gamma T\mathcal{L}_{\beta}(\tilde{A}|H_{\tau})}.$
It follows that
$\displaystyle\mathbb{E}_{\tau}\left[\sum_{t=\tau}^{T+\tau}\overline{r}(A_{\star})-\overline{r}(\tilde{A}_{t})\right]\leq$
$\displaystyle\sqrt{\Gamma T\mathcal{L}_{\beta}(\tilde{A}|H_{\tau})}$
$\displaystyle\leq$ $\displaystyle\sqrt{\Gamma
T(\mathbb{I}_{\tau}(\mathcal{E};\tilde{A})+\beta\epsilon^{2})}$
$\displaystyle\leq$ $\displaystyle\sqrt{\Gamma
T\mathbb{I}_{\tau}(\mathcal{E};\tilde{A})}+T\epsilon.$
Applying these same steps, we complete the above bound as
$\mathbb{E}_{\tau}\left[\sum_{t=\tau}^{T+\tau}\overline{r}(\tilde{A}_{t})-\overline{r}(A_{t})\right]\leq\sqrt{\Gamma
T\mathbb{I}_{\tau}(\mathcal{E};\tilde{A})}+T\epsilon.$
Putting everything together, we have
$\displaystyle\mathbb{E}_{\tau}\left[\sum_{t=\tau}^{T+\tau}\overline{r}(A_{\star})-\overline{r}(A_{t})\right]$
$\displaystyle=\mathbb{E}_{\tau}\left[\sum_{t=\tau}^{T+\tau}\overline{r}(A_{\star})-\overline{r}(\tilde{A}_{t})+\overline{r}(\tilde{A}_{t})-\overline{r}(A_{t})\right]$
$\displaystyle=\mathbb{E}_{\tau}\left[\sum_{t=\tau}^{T+\tau}\overline{r}(A_{\star})-\overline{r}(\tilde{A}_{t})\right]+\mathbb{E}_{\tau}\left[\sum_{t=\tau}^{T+\tau}\overline{r}(\tilde{A}_{t})-\overline{r}(A_{t})\right]$
$\displaystyle\leq 2\sqrt{\Gamma
T\mathbb{I}_{\tau}(\mathcal{E};\tilde{A})}+2T\epsilon.$
∎
# Bethe strings in the spin dynamical structure factor of the Mott-Hubbard
phase in one-dimensional fermionic Hubbard model
José M. P. Carmelo Center of Physics of University of Minho and University of
Porto, P-4169-007 Oporto, Portugal Department of Physics, University of
Minho, Campus Gualtar, P-4710-057 Braga, Portugal Boston University,
Department of Physics, 590 Commonwealth Avenue, Boston, Massachusetts 02215,
USA Tilen Čadež Center for Theoretical Physics of Complex Systems, Institute
for Basic Science (IBS), Daejeon 34126, Republic of Korea
(6 August 2020; revised 9 October 2020; accepted 24 December 2020; published
15 January 2021)
###### Abstract
Bethe strings are bound states of elementary magnetic excitations that occur
in some integrable one-dimensional spin and electronic models. Their spectra
and their role in the spin dynamical properties have recently been identified
and realized experimentally in several materials. Corresponding
theoretical studies have usually relied on the one-dimensional spin-$1/2$
Heisenberg antiferromagnet in a magnetic field. At the isotropic point, it
describes the large onsite repulsion $U$ limit of the spin degrees of freedom
of the one-dimensional fermionic Hubbard model with one electron per site in a
magnetic field $h$. In this paper we consider the thermodynamic limit and
study the effects of lowering the ratio $u=U/4t$ of the latter quantum problem, where
$t$ is the first-neighbor transfer integral, on the line-shape singularities
in $(k,\omega)$-plane regions at and just above the lower thresholds of the
transverse and longitudinal spin dynamical structure factors. The most
significant spectral weight contribution from Bethe strings leads to a gapped
continuum in the spectrum of the spin dynamical structure factor
$S^{+-}(k,\omega)$. Our study focuses on the line shape singularities at and
just above the gapped lower threshold of that continuum, which have been
identified in experiments. Our results are consistent with the contribution of
Bethe strings to $S^{zz}(k,\omega)$ being small at low spin densities and
becoming negligible upon increasing that density. Our results provide
physically important information about how electron itinerancy affects the
spin dynamics.
###### pacs:
## I Introduction
Recently, there has been a renewed interest in the experimental identification
and realization of bound states of elementary magnetic excitations named Bethe
strings in materials whose magnetic properties are described by the one-
dimensional (1D) spin-$1/2$ Heisenberg antiferromagnet in magnetic fields
Wang_19 ; Bera_20 ; Wang_18 ; Kohno_09 ; Stone_03 . This applies to the
isotropic point of that model in the case of experimental studies of CuCl2$\cdot$2N(C5D5)
and Cu(C4H4N2)(NO3)2 Kohno_09 ; Stone_03 ; Heilmann_78 .
The isotropic spin-$1/2$ Heisenberg $XXX$ chain describes the spin degrees of
freedom of the 1D fermionic Hubbard model’s Mott-Hubbard insulator phase in
the limit of large onsite repulsion $U$. That phase is reached at a density of
one electron per site. Interesting related physical questions are whether
lowering the ratio $u=U/4t$ leads to a description of the spin dynamical
properties suitable to spin-chain compounds and how electron itinerancy
affects the spin dynamics. Here $t$ is the model first-neighbor transfer
integral.
In the case of the 1D fermionic Hubbard model, there are in its exact solution
Lieb ; Lieb-03 ; Martins two types of Bethe strings described by complex
nonreal Bethe-ansatz rapidities. They refer to the model spin and charge
degrees of freedom, respectively, Takahashi ; Carmelo_18 ; Carmelo_18A . Here
we call them charge and spin $n$-strings. The nature of their configurations
becomes clearer in terms of the rotated electrons that are generated from the
electrons by a unitary transformation. It is such that
$\sigma=\uparrow,\downarrow$ rotated-electron single-site occupancy, rotated-
electron double-site occupancy, and rotated-electron no site occupancy are
good quantum numbers for the whole $u>0$ range. (For electrons they are good
quantum numbers only for large $u$.) The corresponding electron - rotated-
electron unitary operator is uniquely defined in Ref. Carmelo_17, by its set
of $4^{L}\times 4^{L}=4^{2L}$ matrix elements between all $4^{L}$ energy
eigenstates that span the model’s Hilbert space. Here $L$ is the number of
sites and lattice length in units of lattice spacing one.
For $n>1$, the spin $n$-strings are bound states of $n$ spin-singlet pairs of
rotated electrons with opposite spin projection that singly occupy sites, and
the charge $n$-strings are bound states of $n$ charge $\eta$-spin singlet
pairs of rotated-electron doubly occupied and unoccupied sites
Carmelo_18 ; Carmelo_18A .
Bethe-ansatz rapidities do not contain $n>1$ charge and spin $n$-strings and
are populated by unbound spin-singlet pairs and unbound charge $\eta$-spin
singlet pairs Carmelo_18 ; Carmelo_18A . Ground states are not populated by
the latter type of pairs.
Previous studies focused on contributions to the spin dynamical structure
factors of the 1D fermionic Hubbard model with one electron per site from
excited energy eigenstates described by real Bethe-anstaz rapidities at zero
magnetic field Benthien_07 ; Bhaseen_05 ; Essler_99 and in a finite magnetic
field Carmelo_16 . There were also studies of structure factors of the 1D
Hubbard model in a magnetic field in the limit of low excitation energy
$\omega$ Carmelo_93A .
Our study addresses the 1D Hubbard model with one electron per site in the
spin subspace spanned by energy eigenstates without charge $\eta$-spin singlet
pairs. Some of these energy eigenstates are described by complex nonreal spin
Bethe-ansatz rapidities and thus are populated by spin $n$-strings.
The general goal of this paper is the study of the contribution from spin
$n$-string states to the spin dynamical structure factors of the 1D Hubbard
model with one electron per site in a magnetic field $h$. Our study relies on
the dynamical theory introduced for the 1D Hubbard model in Ref. Carmelo_05, .
It has been adapted to the 1D Hubbard model with one electron per site in a
spin subspace spanned by energy eigenstates described by real Bethe-ansatz
rapidities in Ref. Carmelo_16, . The studies of this paper use the latter
dynamical theory in an extended spin subspace spanned by two classes of energy
eigenstates, populated and not populated by spin $n$-strings, respectively.
In the case of integrable models, the general dynamical theory of Refs.
Carmelo_16, ; Carmelo_05, ; Carmelo_08, reaches the same finite-energy
dynamical correlation functions expressions as the mobile quantum impurity
model scheme of Refs. Imambekov_09, ; Imambekov_12, . Such expressions apply
at and in the $(k,\omega)$-plane vicinity of the corresponding spectra’s lower
thresholds. That such dynamical correlation function expressions are, for both
the former dynamical theory and the mobile quantum impurity model scheme,
indeed the same for arbitrary finite values of the excitation energy and
account for the same microscopic processes is an issue discussed and confirmed in
Appendix A of Ref. Carmelo_18, and in Ref. Carmelo_16A, for a representative
integrable model and several dynamical correlation functions.
The dynamical theory of Refs. Carmelo_16, ; Carmelo_05, ; Carmelo_08, is a
generalization to the whole $u=U/4t>0$ range of the approach used in the
$u\rightarrow\infty$ limit in Refs. Karlo, ; Karlo_97, . Momentum dependent
exponents in the expressions of spectral functions have also been obtained in
Refs. Sorella_96, ; Sorella_98, .
Beyond the studies of Ref. Carmelo_16, , here the application of the dynamical
theory is extended to the contribution to the spin dynamical structure factors
from excited energy eigenstates populated by spin $n$-strings.
The theory refers to the thermodynamic limit, in which the expression of the
square of the matrix elements of the dynamical structure factors between the
ground state and the excited states behind most spectral weight has the
general form given in Eq. (85). It does not provide the precise values of the
$u$- and $m$-dependent constant $0<B_{s}\leq 1$ and the $u$-dependent constants
$0<f_{l}<1$ where $l=0,2,4$ in that expression. In spite of this limitation,
our results provide important physical information on the dynamical structure
factors under study.
In the case of the related isotropic spin $1/2$ Heisenberg chain in a magnetic
field, it is known Kohno_09 that the only contribution from excited energy
eigenstates populated by spin $n$-strings that leads to a $(k,\omega)$-plane
gapped continuum with a significant amount of spectral weight refers to
$S^{+-}(k,\omega)$.
Based on a relation between the level of negativity of the momentum dependent
exponents that control the spin dynamical structure factors $(k,\omega)$-plane
singularities and the amount of spectral weight existing near them, we confirm
that this result applies to the whole $u>0$ range of the 1D Hubbard model with
one electron per site in a magnetic field. However, the contribution of spin
$n$-string states to $S^{zz}(k,\omega)$ is found to be small at low spin
densities and to become negligible upon increasing the density beyond a spin
density $\tilde{m}$ that decreases upon decreasing $u$, reading $\tilde{m}=0$
for $u\rightarrow 0$ and $\tilde{m}\approx 0.317$ for $u\gg 1$. Finally, the
contribution of these states to $S^{-+}(k,\omega)$ is found to be negligible
at finite magnetic fields.
The main aim of this paper is the study of the line shape singularities of
$S^{+-}(k,\omega)$, $S^{xx}(k,\omega)$, and $S^{zz}(k,\omega)$ at and just
above the $(k,\omega)$-plane gapped lower threshold of the spectra associated
with spin $n$-string states. The corresponding singularity peaks have been
identified in neutron scattering experiments Kohno_09 ; Stone_03 ; Heilmann_78
.
As a side result, we address the more general problem of the line shape of the
transverse and longitudinal spin dynamical structure factors at finite
magnetic field $h$ in the $(k,\omega)$-plane vicinity of singularities at and
above the lower thresholds of the spectra of the excited energy eigenstates of
the 1D Hubbard model with one electron per site that produce a significant
amount of spectral weight. This includes both excited states with and without
spin $n$-strings. The contribution from the latter states leads to the largest
amount of the spin dynamical structure factors' spectral weight Carmelo_16 .
Our secondary goal is to provide an overall physical picture that includes the
relative $(k,\omega)$-plane location of all spectra with a significant amount
of spectral weight and that accounts for the contributions of different types
of states to both the gapped and gapless lower threshold singularities that
emerge in the spin dynamical structure factors.
The paper is organized as follows. The model and the spin dynamical structure
factors are the issues addressed in Sec. II. In Sec. III the
$(k,\omega)$-plane spectra of the excited states that lead to most of the
dynamical structure factors' spectral weight are studied, with emphasis on
those of the spin $n$-string states. The line shape at and above the gapped
lower thresholds of the $n$-string states' dynamical structure factor spectra
is the main subject of Sec. IV. As a side result, in that section the problem
is revisited at and above the lower thresholds of the $(k,\omega)$-plane
continua associated with excited states described by real Bethe-ansatz
rapidities. In Sec. V the limiting behaviors of the spin dynamical structure
factors are addressed. Finally, the discussion and concluding remarks are
presented in Sec. VI.
A set of useful results needed for our studies is presented in five
Appendices. This includes the selection rules and sum rules provided in
Appendix A. In Appendix B the gapless transverse and longitudinal continuum
spectra are revisited. The energy gaps between the gapped lower thresholds of
the spin $n$-string states' spectra and the lower $(k,\omega)$-plane continua
are the issue addressed in Appendix C. In Appendix D the number and current
number deviations and the spectral functionals that control the momentum
dependent exponents in the spin dynamical structure factors' expressions are
given. Some useful quantities also needed for our studies are defined and
provided in Appendix E.
## II The model and the spin dynamical structure factors
Throughout this paper we in general use units in which both the lattice
constant and the Planck constant are one. Our study refers to spin subspaces
spanned by energy eigenstates for which the number of lattice sites $N_{a}$
equals that of electrons $N=N_{\uparrow}+N_{\downarrow}$, of which
$N_{\uparrow}$ and $N_{\downarrow}$ have up and down spin projection,
respectively.
The Hubbard model with one electron per site at vanishing chemical potential
in a magnetic field $h$ under periodic boundary conditions on a 1D lattice of
length $L\rightarrow\infty$ is given by,
${\hat{H}}=t\,\hat{T}+U\,\hat{V}_{D}+2\mu_{B}h\,{\hat{S}}^{z}\,.$ (1)
Here $\mu_{B}$ is the Bohr magneton and for simplicity in $g\mu_{B}$ we have
taken $g=2$. The operators read,
$\hat{T}=-\sum_{\sigma=\uparrow,\downarrow}\sum_{j=1}^{N}\left(c_{j,\sigma}^{{\dagger}}\,c_{j+1,\sigma}+c_{j+1,\sigma}^{{\dagger}}\,c_{j,\sigma}\right)$ and $\hat{V}_{D}=\sum_{j=1}^{N}\hat{\rho}_{j,\uparrow}\hat{\rho}_{j,\downarrow}$ where $\hat{\rho}_{j,\sigma}=c_{j,\sigma}^{{\dagger}}\,c_{j,\sigma}-1/2\,,$ (2)
where $\hat{T}$ is the kinetic-energy operator in units of $t$, $\hat{V}_{D}$
is the electron (or spin $1/2$ atom) on-site repulsion operator in units of
$U$, the operator $c_{j,\sigma}^{\dagger}$ (and $c_{j,\sigma}$) creates (and
annihilates) a spin-projection $\sigma=\uparrow,\downarrow$ electron at
lattice site $j=1,...,N$, and the electron number operators read
${\hat{N}}=\sum_{\sigma=\uparrow,\downarrow}\,\hat{N}_{\sigma}$ and
${\hat{N}}_{\sigma}=\sum_{j=1}^{N}\hat{n}_{j,\sigma}=\sum_{j=1}^{N}c_{j,\sigma}^{{\dagger}}\,c_{j,\sigma}$.
Moreover, ${\hat{S}}^{z}=\sum_{j=1}^{N}\hat{S}^{z}_{j}$ is the diagonal
generator of the global spin $SU(2)$ symmetry algebra. We denote the energy
eigenstate’s spin projection by
$S^{z}=-(N_{\uparrow}-N_{\downarrow})/2\in[-S,S]$ where $S\in[0,N/2]$ denotes
their spin.
Our results refer to magnetic fields $0<h<h_{c}$ and corresponding spin
densities $0<m<1$. Here $m=(N_{\uparrow}-N_{\downarrow})/N_{a}$ and $h_{c}$ is
the critical magnetic field above which there is fully polarized
ferromagnetism. The corresponding spin-density curve that relates $h$ and $m$
is given by,
$h(m)=-{\varepsilon_{s}^{0}(k_{F\downarrow})\over 2\mu_{B}}\Big|_{m=1-2k_{F\downarrow}/\pi}\in[0,h_{c}]$ where $2\mu_{B}\,h_{c}=2\mu_{B}\,h(m)|_{m=1}=\sqrt{(4t)^{2}+U^{2}}-U\,,$ (3)
$\varepsilon_{s}^{0}(q)$ is the $s$ band energy dispersion, Eq. (111), whose
zero-energy level is shifted relative to that in Eq. (98), such that
$\varepsilon_{s}(k_{F\downarrow})=0$, and the magnetic energy scale
$2\mu_{B}\,h_{c}$ is associated with the quantum phase transition from the
Mott-Hubbard insulator phase to fully polarized ferromagnetism. It defines the
corresponding critical magnetic field,
$h_{c}=(\sqrt{(4t)^{2}+U^{2}}-U)/2\mu_{B}$.
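Since $h_{c}$ is given in closed form by Eq. (3), it can be checked numerically. The following minimal Python sketch is ours (it is not part of the original analysis; the function name and the choice $\mu_{B}=1$ are our conventions) and simply evaluates the formula, including its two limits, which follow directly from Eq. (3):

```python
# Minimal numerical check of Eq. (3) (our own sketch, in units with mu_B = 1):
# 2*mu_B*h_c = sqrt((4t)^2 + U^2) - U.
import numpy as np

def critical_field(t: float, U: float) -> float:
    """Critical magnetic field h_c of Eq. (3), above which there is
    fully polarized ferromagnetism (mu_B = 1)."""
    return (np.sqrt((4.0 * t) ** 2 + U ** 2) - U) / 2.0

# U/4t = 0.0, 0.4, 1.0, 15.0 at t = 1; the last three are the u values of Figs. 1-6.
for U in (0.0, 1.6, 4.0, 60.0):
    print(f"U = {U:5.1f}:  h_c = {critical_field(1.0, U):.4f}")
# U -> 0 gives h_c -> 2t, while U >> 4t gives h_c ~ 4t^2/U.
```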
The spin dynamical structure factors studied in this paper in the
$(k,\omega)$-plane vicinity of well defined singularities are quantities of
theoretical interest that can also be directly compared with experimentally
measurable quantities. They can be written as,
$S^{aa}(k,\omega)=\sum_{j=1}^{N}e^{-ikj}\int_{-\infty}^{\infty}dt\,e^{-i\omega t}\langle GS|\hat{S}^{a}_{j}(t)\hat{S}^{a}_{j}(0)|GS\rangle=\sum_{\nu}|\langle\nu|\hat{S}^{a}_{k}|GS\rangle|^{2}\delta(\omega-\omega^{aa}_{\nu}(k))\,.$ (4)
Here $a=x,y,z$, the spectra read $\omega^{aa}_{\nu}(k)=(E_{\nu}^{aa}-E_{GS})$,
$E_{\nu}^{aa}$ refers to the energies of the excited energy eigenstates that
contribute to the $aa=xx,yy,zz$ dynamical structure factors, $E_{GS}$ is the
initial ground state energy, and $\hat{S}^{a}_{k}$ are for $a=x,y,z$ the
Fourier transforms of the usual local $a=x,y,z$ spin operators
$\hat{S}^{a}_{j}$, respectively.
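Equation (4) is a standard Lehmann representation, so for very small systems it can be evaluated by brute-force exact diagonalization. The sketch below is our own illustration, not the dynamical theory used in this paper: it treats the isotropic spin-$1/2$ Heisenberg ring, which describes the spin degrees of freedom of the present model at large $u$, and prints the poles $\omega^{zz}_{\nu}(k)$ and weights $|\langle\nu|\hat{S}^{z}_{k}|GS\rangle|^{2}$ of $S^{zz}(k,\omega)$ at one momentum.

```python
# Illustrative brute-force Lehmann representation of Eq. (4) for a tiny
# spin-1/2 Heisenberg ring (the spin sector of the model at large u). This is
# our own sketch and is limited to very small N: the Hilbert space is 2^N.
import numpy as np

N = 8
sz = np.array([[0.5, 0.0], [0.0, -0.5]])
sp = np.array([[0.0, 1.0], [0.0, 0.0]])  # S^+ on one site

def site_op(op: np.ndarray, j: int) -> np.ndarray:
    """Embed a single-site operator at site j into the 2^N-dimensional space."""
    mats = [np.eye(2)] * N
    mats[j] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

Sz = [site_op(sz, j) for j in range(N)]
Sp = [site_op(sp, j) for j in range(N)]
Sm = [P.T for P in Sp]
# Isotropic Heisenberg Hamiltonian with periodic boundary conditions
H = sum(Sz[j] @ Sz[(j + 1) % N]
        + 0.5 * (Sp[j] @ Sm[(j + 1) % N] + Sm[j] @ Sp[(j + 1) % N])
        for j in range(N))
E, V = np.linalg.eigh(H)
gs = V[:, 0]  # ground state |GS>

k = 2 * np.pi / N  # one momentum of the first Brillouin zone
Szk = sum(np.exp(-1j * k * j) * Sz[j] for j in range(N)) / np.sqrt(N)
amps = V.conj().T @ (Szk @ gs)   # matrix elements <nu|S^z_k|GS> for all nu
weights = np.abs(amps) ** 2      # Lehmann weights of Eq. (4)
for idx in np.argsort(weights)[::-1][:8]:   # eight largest-weight poles
    print(f"omega = {E[idx] - E[0]:.4f},  weight = {weights[idx]:.6f}")
```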
Due to the rotational symmetry in spin space, off-diagonal components of the
spin dynamical structure factor vanish, $S^{aa^{\prime}}(k,\omega)=0$ for
$a\neq a^{\prime}$, and the two transverse components are identical,
$S^{xx}(k,\omega)=S^{yy}(k,\omega)$. At zero and finite magnetic field, one
has that $S^{zz}(k,\omega)=S^{xx}(k,\omega)$ and $S^{zz}(k,\omega)\neq
S^{xx}(k,\omega)$, respectively.
In the transverse case, we often address the problem in terms of the dynamical
structure factors $S^{+-}(k,\omega)$ and $S^{-+}(k,\omega)$ in
$S^{xx}(k,\omega)={1\over 4}\left(S^{+-}(k,\omega)+S^{-+}(k,\omega)\right)$.
We rely on the symmetry that exists for the problems under study between the
spin density intervals $m\in]-1,0]$ and $m\in]0,1[$, such that,
$S^{-+}(k,\omega)|_{m}=S^{+-}(k,\omega)|_{-m}$ and $S^{+-}(k,\omega)|_{m}=S^{-+}(k,\omega)|_{-m}$ for $m\in]0,1[\,.$ (5)
Hence we only consider explicitly the spin density interval $m\in]0,1[$. Since
$S^{aa}(k,\omega)=S^{aa}(-k,\omega)$ and the same applies to
$S^{+-}(k,\omega)$ and $S^{-+}(k,\omega)$, for simplicity the results of this
paper refer to $k>0$ momenta in the first Brillouin zone, $k\in[0,\pi]$.
Some useful selection rules tell us which classes of energy eigenstates have
nonzero matrix elements with the ground state Muller . Such selection rules as
well as some useful sum rules are given in Appendix A.
The selection rules in Eq. (46) reveal that at $h=0$ and thus $m=0$ when
$S^{zz}(k,\omega)=S^{xx}(k,\omega)$, the longitudinal dynamical structure
factor is fully controlled by transitions from the ground state for which
$S^{z}=S=0$ to excited states with spin numbers $S^{z}=0$ and $S=1$. However,
according to such rules, the transverse dynamical structure factors are controlled
by transitions from that ground state to excited states with spin numbers
$S^{z}=\pm 1$ and $S=1$.
This is different from the case for magnetic fields $0<h<h_{c}$ considered in
this paper. According to the selection rules, Eq. (47), the longitudinal
dynamical structure factor $S^{zz}(k,\omega)\neq S^{xx}(k,\omega)$ is
controlled by transitions from the ground state with spin numbers $S^{z}=-S$
to excited states with the same spin numbers $S^{z}=-S$. According to the same
selection rules, the dynamical structure factors $S^{+-}(k,\omega)$ and
$S^{-+}(k,\omega)$ are controlled by transitions from the ground state with
spin numbers $S^{z}=-S$ to excited states with spin numbers $S^{z}=-S\pm 1$.
Figure 1: The two $(k,\omega)$-plane lower and upper continuum regions where,
for spin densities (a-c) $m=0.1$ and (d-f) $m=0.3$ and $u=0.4,1.0,15.0$, most
of the $S^{+-}(k,\omega)$ spectral weight is located in the thermodynamic limit. The
sketch of the $(k,\omega)$-plane distributions represented here and in Figs.
2-6 does not provide information on the relative amount of spectral weight
contained within each spectrum’s gray continuum. [The three reference vertical
lines mark the momenta (a-c) $k=k_{F\uparrow}-k_{F\downarrow}=\pi/10$,
$k=k_{F\downarrow}=9\pi/20$, and $k=2k_{F\downarrow}=9\pi/10$ and (d-f)
$k=k_{F\uparrow}-k_{F\downarrow}=3\pi/10$, $k=k_{F\downarrow}=7\pi/20$, and
$k=2k_{F\downarrow}=7\pi/10$, where $k_{F\downarrow}={\pi\over 2}(1-m)$ and
$k_{F\uparrow}={\pi\over 2}(1+m)$, Eq. (97).] The lower and upper continuum
spectra are associated with excited energy eigenstates without and with spin
$n$-strings, respectively. In the thermodynamic limit, the $(k,\omega)$-plane
region between the upper threshold of the lower continuum and the gapped lower
threshold of the upper $n$-string continuum has nearly no spectral weight. In
the case of the gapped lower threshold of the spin $n$-string continuum, the
analytical expressions given in this paper apply near and just above that
threshold, whose subintervals correspond to branch-line parts represented in
the figure by solid and dashed lines. The latter refer to $k$ intervals where
the momentum dependent exponents plotted in Figs. 7-10 are negative and
positive, respectively. In the former intervals, $S^{+-}(k,\omega)$ displays
singularity peaks, seen also in experimental studies of CuCl2$\cdot$2N(C5D5)
and Cu(C4H4N2)(NO3)2 Kohno_09 ; Stone_03 ; Heilmann_78 .
## III Dynamical structure factors spectra
Our study of the spin dynamical structure factors relies on the representation
of the energy eigenstates suitable to the dynamical theory used in this
paper Carmelo_16 . It involves “quasiparticles” that in this paper we call $sn$
particles. Here $n=1,...,\infty$ is the number of spin-singlet pairs that
describes their internal degrees of freedom.
For $n>1$ a $sn$ particle contains $n$ bound spin-singlet pairs, the integer
$n$ being also the length of the corresponding spin $n$-string. For
simplicity, we denote the $s1$ particles by $s$ particles. Their internal
degrees of freedom correspond to a single singlet pair. Energy eigenstates
that are not populated by $sn$ particles with $n>1$ pairs are described by
real Bethe-ansatz rapidities, whereas those populated by such particles are
described by complex nonreal Bethe-ansatz rapidities.
Figure 2: The same continuum spectra as in Fig. 1 for spin densities (a-c)
$m=0.5$ and (d-f) $m=0.8$ and $u=0.4,1.0,15.0$. [The three reference vertical
lines mark the momenta (a-c) $k=k_{F\downarrow}=\pi/4$ and
$k=k_{F\uparrow}-k_{F\downarrow}=2k_{F\downarrow}=\pi/2$ and (d-f)
$k=k_{F\downarrow}=\pi/10$, $k=2k_{F\downarrow}=\pi/5$, and
$k=k_{F\uparrow}-k_{F\downarrow}=4\pi/5$, where $k_{F\downarrow}={\pi\over
2}(1-m)$ and $k_{F\uparrow}={\pi\over 2}(1+m)$, Eq. (97).]
As mentioned in Sec. I and confirmed in Appendix D, there is a direct relation
between the values of the momentum dependent exponents that within the
dynamical theory used here control the line shape in the $(k,\omega)$-plane
vicinity of the spin dynamical structure factors spectral features and the
amount of spectral weight located near them: Negative exponents imply the
occurrence of singularities associated with a significant amount of spectral
weight in their $(k,\omega)$-plane vicinity.
The use of this criterion reveals that in the present thermodynamic limit and
for magnetic fields $0<h<h_{c}$, the only significant contribution to
$S^{+-}(k,\omega)$ from energy eigenstates populated by $sn$ particles refers
to those populated by $N_{\downarrow}-2$ $s$ particles and one $s2$ particle.
Here $N_{\downarrow}=N_{\downarrow}^{0}+1\in[2,N/2]$ is the excited energy
eigenstate’s number of down-spin electrons in the case of initial ground
states with $N_{\downarrow}^{0}\in[1,N/2-1]$.
There is as well a much weaker contribution at small spin densities from
states populated by $N_{\downarrow}-3$ $s$ particles and one $s3$ particle.
Here $N_{\downarrow}=N_{\downarrow}^{0}+1\in[3,N/2]$ for the excited energy
eigenstate in the case of initial ground states with
$N_{\downarrow}^{0}\in[2,N/2-1]$.
Figure 3: The two $(k,\omega)$-plane lower and upper continuum regions where,
for spin densities (a-c) $m=0.1$ and (d-f) $m=0.3$ and $u=0.4,1.0,15.0$, most
of the $S^{xx}(k,\omega)$ spectral weight is located in the thermodynamic limit. The
notations are the same as in Fig. 1. [The three reference vertical lines mark
the momenta (a-c) $k=k_{F\uparrow}-k_{F\downarrow}=\pi/10$,
$k=k_{F\downarrow}=9\pi/20$, and $k=2k_{F\downarrow}=9\pi/10$ and (d-f)
$k=k_{F\uparrow}-k_{F\downarrow}=3\pi/10$, $k=k_{F\downarrow}=7\pi/20$, and
$k=2k_{F\downarrow}=7\pi/10$, where $k_{F\downarrow}={\pi\over 2}(1-m)$ and
$k_{F\uparrow}={\pi\over 2}(1+m)$, Eq. (97).] The additional part of the lower
continuum relative to that of $S^{+-}(k,\omega)$ in Figs. 1 and 2 stems from
the contributions of $S^{-+}(k,\omega)$. As a result, for some $k$ intervals
the upper spin $n$-string continuum overlaps with the lower continuum.
In the case of $S^{zz}(k,\omega)$, this refers only to energy eigenstates
populated by $N_{\downarrow}-2$ $s$ particles and one $s2$ particle. Here
$N_{\downarrow}=N_{\downarrow}^{0}\in[2,N/2]$ both for the excited energy
eigenstate and initial ground states. The contribution from such states to
$S^{-+}(k,\omega)$ is found to be negligible, since all relevant exponents are
positive and large.
The contribution to $S^{+-}(k,\omega)$ from energy eigenstates populated by
$N_{\downarrow}-3$ $s$ particles and one $s3$ particle that occurs for small
values of the spin density is very weak and is negligible near the
$(k,\omega)$-plane singularities to which the analytical expressions obtained
in our study refer. In addition, the latter very weak contributions occur
in $(k,\omega)$-plane regions above the gapped lower threshold of the spectrum
continuum associated with energy eigenstates populated by $N_{\downarrow}-2$
$s$ particles and one $s2$ particle. [The expression of that spectrum is given
below in Eq. (6).]
Hence, the energy eigenstates described by complex nonreal Bethe ansatz
rapidities considered in our study are populated by $N_{\downarrow}-2$ $s$
particles and one $s2$ particle. Such states thus contain a single spin
$n$-string of length $n=2$. In addition, we account for the contribution from
energy eigenstates populated by $N_{\downarrow}$ $s$ particles that are
described by real Bethe ansatz rapidities.
Figure 4: The same continuum spectra as in Fig. 3 for spin densities (a-c)
$m=0.5$ and (d-f) $m=0.8$ and $u=0.4,1.0,15.0$. For such spin densities, there
is no overlap between the upper spin $n$-string continuum and the lower
continuum. [The three reference vertical lines mark the momenta (a-c)
$k=k_{F\downarrow}=\pi/4$ and
$k=k_{F\uparrow}-k_{F\downarrow}=2k_{F\downarrow}=\pi/2$ and (d-f)
$k=k_{F\downarrow}=\pi/10$, $k=2k_{F\downarrow}=\pi/5$, and
$k=k_{F\uparrow}-k_{F\downarrow}=4\pi/5$, where $k_{F\downarrow}={\pi\over
2}(1-m)$ and $k_{F\uparrow}={\pi\over 2}(1+m)$, Eq. (97).]
The goal of this section is to introduce the spectra associated with
$(k,\omega)$-plane regions that contain most spectral weight of the spin
dynamical structure factors. The $(k,\omega)$-plane distribution of such
spectra is represented for $S^{+-}(k,\omega)$, $S^{xx}(k,\omega)$, and
$S^{zz}(k,\omega)$ in Figs. 1 and 2, 3 and 4, and 5 and 6, respectively. [In
these figures, the spectra of the branch lines studied below are such that the
$s2$ and $s2^{\prime}$ branch lines are represented by blue lines and the
$\bar{s}$ and $\bar{s}^{\prime}$ branch lines by red and green lines,
respectively; The $U=0$ electronic Fermi points $k_{F\downarrow}={\pi\over
2}(1-m)$ and $k_{F\uparrow}={\pi\over 2}(1+m)$ define at $u>0$ the ground-
state $s$ band Fermi points $\pm k_{F\downarrow}$ and the $s$ band limiting
momentum values $\pm k_{F\uparrow}$.] The spectra displayed in Figs. 1, 3, and
5 refer to spin densities (a-c) $m=0.1$ and (d-f) $m=0.3$ and
$u=0.4,1.0,15.0$. In Figs. 2, 4, and 6 they correspond to spin densities (a-c)
$m=0.5$ and (d-f) $m=0.8$ and the same set $u=0.4,1.0,15.0$ of $u$ values.
Figure 5: The $(k,\omega)$-plane continuum region where, for spin densities
(a-c) $m=0.1$ and (d-f) $m=0.3$ and $u=0.4,1.0,15.0$, most of the
$S^{zz}(k,\omega)$ spectral weight is located in the thermodynamic limit. [The three
reference vertical lines mark the momenta (a-c)
$k=k_{F\uparrow}-k_{F\downarrow}=\pi/10$, $k=k_{F\downarrow}=9\pi/20$, and
$k=2k_{F\downarrow}=9\pi/10$ and (d-f)
$k=k_{F\uparrow}-k_{F\downarrow}=3\pi/10$, $k=k_{F\downarrow}=7\pi/20$, and
$k=2k_{F\downarrow}=7\pi/10$ where $k_{F\downarrow}={\pi\over 2}(1-m)$ and
$k_{F\uparrow}={\pi\over 2}(1+m)$, Eq. (97).] Contributions from excited
states containing spin $n$-strings are much smaller than for
$S^{+-}(k,\omega)$ and $S^{xx}(k,\omega)$ and do not lead to an upper
continuum. The gapped lower threshold of such states is nonetheless displayed.
Only for spin densities $0<m<\tilde{m}$, where $\tilde{m}=0$ for $u\rightarrow
0$ and $\tilde{m}\approx 0.317$ for $u\gg 1$, does that threshold coincide
with the $\bar{s}^{\prime}$ branch line; in that case singularities occur near
and just above it.
That line is represented as a solid (green) line. In the remaining parts of
the gapped lower threshold, which for spin densities $\tilde{m}<m<1$ means all
of it, the momentum dependent exponents are positive and there are no
singularities. This reveals there is a negligible amount of spectral weight
near such lines.
In the cases of $S^{+-}(k,\omega)$ and $S^{xx}(k,\omega)$, the figures show
both a lower continuum $(k,\omega)$-plane region whose spectral weight is
associated with excited states without spin $n$-strings and an upper continuum
whose spectral weight stems from excited states populated by spin $n$-strings.
In the case of $S^{zz}(k,\omega)$, the contribution to the spectral weight
from excited states containing spin $n$-strings is much weaker than for
$S^{+-}(k,\omega)$ and $S^{xx}(k,\omega)$ and does not lead to an upper
continuum. The gapped lower threshold of such states' spectrum is represented
in Figs. 5 and 6 by a $(k,\omega)$-plane line.
Since at finite magnetic fields the contribution to the spectral weight from
excited states containing spin $n$-strings is negligible in the case of
$S^{-+}(k,\omega)$ and their lower continuum spectrum was previously studied
Carmelo_16 , its $(k,\omega)$-plane spectrum distribution is not shown here.
Note though that in Figs. 3 and 4 for $S^{xx}(k,\omega)$, the additional part
of the lower continuum relative to that of $S^{+-}(k,\omega)$ represented in
Figs. 1 and 2 stems from contributions of $S^{-+}(k,\omega)$. As a result, for
small spin densities and some $k$ intervals the upper spin $n$-string
continuum of $S^{xx}(k,\omega)$ overlaps with its lower continuum.
Figure 6: The same continuum spectra as in Fig. 5 for spin densities (a-c)
$m=0.5$ and (d-f) $m=0.8$ and $u=0.4,1.0,15.0$. For these spin densities,
there are no singularities near the gapped lower threshold of the spin
$n$-string excited states. For these spin densities the contribution of such
states to $S^{zz}(k,\omega)$ is actually negligible over the whole
$(k,\omega)$ plane. [The three reference vertical lines mark the momenta (a-c)
$k=k_{F\downarrow}=\pi/4$ and
$k=k_{F\uparrow}-k_{F\downarrow}=2k_{F\downarrow}=\pi/2$ and (d-f)
$k=k_{F\downarrow}=\pi/10$, $k=2k_{F\downarrow}=\pi/5$, and
$k=k_{F\uparrow}-k_{F\downarrow}=4\pi/5$, where $k_{F\downarrow}={\pi\over
2}(1-m)$ and $k_{F\uparrow}={\pi\over 2}(1+m)$, Eq. (97).]
In the case of both $S^{+-}(k,\omega)$ and $S^{zz}(k,\omega)$, there is in the
present thermodynamic limit for spin densities $0<m<1$ and thus finite
magnetic fields $0<h<h_{c}$ very little spectral weight between the upper
threshold of the lower continuum associated with spin $n$-string-less excited
states and the gapped lower threshold of the spin $n$-string states' spectra
in Figs. 1 and 2 and Figs. 5 and 6, respectively. The same applies to
$S^{xx}(k,\omega)$ in the $k$ intervals of Figs. 3 and 4 for which there is a
gap between the upper continuum associated with spin $n$-string states and the
lower continuum.
Indeed, in the thermodynamic limit nearly all the small amount of spectral
weight associated with the spin $n$-string-less excited energy eigenstates,
named in the literature four-spinon states, is contained inside the lower
continuum in such figures. This also applies to large finite systems. In the
large $u$ limit, in which the spin degrees of freedom of the present model
with one electron per site are described by the isotropic spin-$1/2$
Heisenberg chain, this is so for the latter model both at the isotropic point
$\Delta=1$ (see Fig. 4 of Ref. Caux_06, ) and for anisotropy $\Delta<1$ (see
Fig. 1 of Ref. Caux_05, ).
Concerning this key issue for our study that the amount of spectral weight in
the $(k,\omega)$-plane gap regions shown in Figs. 1-6 is negligible, let us
consider the more involved case of $S^{+-}(k,\omega)$. Similar conclusions
apply to the simpler problems of the other spin dynamical structure factors.
The behavior of spin operators matrix elements between energy eigenstates in
the selection rules valid for $u>0$ and magnetic fields $0<h<h_{c}$, Eq. (47),
has important physical consequences. It implies that the spectral weight
stemming from excited energy eigenstates described by only real Bethe-ansatz
rapidities that exists in finite systems in a $(k,\omega)$-plane region
corresponding to the momentum interval $k\in[2k_{F\downarrow},\pi]$ and
excitation energy values $\omega$ above the upper threshold of the lower
continuum in Figs. 1 and 2, whose spectrum's expression is given in Eq. (50),
becomes negligible in the thermodynamic limit for a macroscopic system.
Our thermodynamic limit’s study is complementary to and consistent with
results obtained by completely different methods for finite-size systems and
small yet finite $t^{2}/U$ Kohno_09 ; Muller . The spectral weight located in
that $(k,\omega)$-plane region is found to decrease upon increasing the system
size Kohno_09 . This is confirmed by comparing the spectra represented in the
first row frames of Figs. 3 (a) and 3 (b) of Ref. Kohno_09, for two finite-
size systems with $N=320$ and $N=2240$ spins, respectively, in the case under
consideration of the spin dynamical structure factor $S^{+-}(k,\omega)$.
More generally, the selection rules in Eqs. (46) and (47), which are valid for
$u>0$, are the reason why in the thermodynamic limit nearly all spectral
weight generated by transitions to excited energy eigenstates described only
by real Bethe-ansatz rapidities is contained in the $(k,\omega)$-plane lower
continuum shown in Figs. 1 and 2, whose spectrum is given in Eq. (50).
Let us consider the $(k,\omega)$-plane spectral weight distributions shown in
Fig. 18 of Ref. Muller, for $S^{+-}(k,\omega)$, which apply to the half-filled
1D Hubbard model for small yet finite $t^{2}/U$. As reported in that
reference, due to the interplay of the selection rules given in Eqs. (46) and
(47) for $h=0$ and $0<h<h_{c}$, respectively, the spectral weight existing
between the continuous lower boundary $\epsilon_{4L}$ and the upper boundary
$\epsilon_{4U}$ at $h=0$ becomes negligible for finite magnetic fields
$0<h<h_{c}$. In addition, the spectral weight existing between the continuous
lower boundary $\epsilon_{5L}$ and the upper boundary $\epsilon_{5U}$ for
small finite-size systems becomes negligible in the thermodynamic limit for a
macroscopic system. This is indeed due to the selection rules, Eq. (47), as
discussed in that reference, which for the 1D Hubbard model with one fermion
per site are valid for $u>0$. As also reported in Ref. Muller, , only the
$(k,\omega)$-plane region below the continuous lower boundary
$\epsilon_{5L}(q)$, located between the lower boundary $\epsilon_{6L}$ and the
upper boundary $\epsilon_{6U}$, has a significant amount of spectral weight.
This refers to the $(k,\omega)$-plane region where, according to the analysis
of Ref. Muller, , for magnetic fields $0<h<h_{c}$ a macroscopic system has
nearly the whole spectral weight stemming from transitions to excited energy
eigenstates described by only real Bethe-ansatz rapidities. Consistently with
the spectral weight in the present gap region being negligible, the
$(k,\omega)$-plane region between the continuous lower boundary
$\epsilon_{6L}$ and the upper boundary $\epsilon_{6U}$ in Fig. 18 of that
reference corresponds precisely to the lower continuum shown in Figs. 1 and 2,
whose spectrum is provided in Eq. (50).
Besides the $s$ and $s2$ particles, there is in the present spin subspace a
$c$ particle branch of Bethe ansatz quantum numbers associated with the charge
degrees of freedom Carmelo_16 ; Carmelo_05 . However, it refers to a
corresponding full $c$ momentum band that does not contribute to the spin
dynamical properties. Its only contribution to the spin problem studied in
this paper stems from microscopic momentum shifts $-{\pi\over L}$ or
${\pi\over L}$ of all the corresponding $c$ band $N$ discrete momentum values
$q_{j}={2\pi\over L}\,I^{c}_{j}$. Here $I_{j}^{c}=0,\pm 1,\pm 2,...$ for
$N_{s}+N_{s2}$ even and $I_{j}^{c}=\pm 1/2,\pm 3/2,\pm 5/2,...$ for
$N_{s}+N_{s2}$ odd are the Bethe-ansatz $c$ band quantum numbers in Eq. (96).
Those lead to macroscopic momentum variations $-\pi$ or $\pi$, respectively,
upon changes in the value of the numbers of $s$ and $s2$ particles, according
to the boundary conditions given in Eq. (96).
The line shape near the gapped lower threshold of the $S^{+-}(k,\omega)$’s
continuum spectrum represented in Figs. 1 and 2 is controlled by the above
class of excited states that are generated by the occupancy configurations of
both $N_{s}=N_{\downarrow}-2$ $s$ particles over $N_{\uparrow}$ discrete
momentum values $q_{j}={2\pi\over L}\,I_{j}^{s}$ and one $s2$ particle over
$N_{\uparrow}-N_{\downarrow}+1$ discrete momentum values $q_{j}={2\pi\over
L}\,I_{j}^{s2}$. Here (i) $I_{j}^{s}=0,\pm 1,\pm 2,...$ for $N_{\uparrow}$ odd
and $I_{j}^{s}=\pm 1/2,\pm 3/2,\pm 5/2,...$ for $N_{\uparrow}$ even and (ii)
$I_{j}^{s2}=0,\pm 1,\pm 2,...$ for $N_{s2}=1$ are the Bethe-ansatz $s$ and
$s2$ band quantum numbers, respectively, in Eq. (96). However, the line shape
in the vicinity of the lower threshold of the $S^{+-}(k,\omega)$’s lower
continuum spectrum in the same figures is controlled by excited energy
eigenstates described by real Bethe ansatz rapidities. Those are described by
occupancy configurations of $N_{s}=N_{\downarrow}$ $s$ particles over
$N_{\uparrow}$ discrete momentum values $q_{j}={2\pi\over L}\,I_{j}^{s}$.
The Bethe-ansatz equations and quantum numbers whose occupancy configurations
generate the energy eigenstates that span the spin subspaces used in our
studies are given in Eqs. (91) and (92) in functional form, in terms of $s$
and $s2$ bands momentum distributions. Those describe the momentum occupancy
configurations that generate such states.
As further discussed in Appendix D, ground states are for spin densities
$0<m<1$ only populated by the $N_{c}=N$ nondynamical $c$ particles and
$N_{s}=N_{\downarrow}$ $s$ particles that symmetrically or quasi-symmetrically
occupy the $s$ band, which also contains
$N_{s}^{h}=N_{\uparrow}-N_{\downarrow}$ holes.
The gapped upper spectrum in Figs. 1 and 2 associated with the
$(k,\omega)$-plane continuum of $S^{+-}(k,\omega)$ that stems from transitions
from the ground state to excited energy eigenstates populated by
$N_{s}=N_{\downarrow}-2$ $s$ particles and one $s2$ particle is given by,
$\omega^{+-}_{\Delta}(k)=-\varepsilon_{s}(q_{1})+\varepsilon_{s2}(q_{2})$ where $k=\iota k_{F\downarrow}-q_{1}+q_{2}$ and $\iota=\pm 1$, for $q_{1}\in[-k_{F\downarrow},k_{F\downarrow}]$ and $q_{2}\in[0,(k_{F\uparrow}-k_{F\downarrow})]$ for $\iota=1$, $q_{2}\in[-(k_{F\uparrow}-k_{F\downarrow}),0]$ for $\iota=-1\,.$ (6)
This spectrum has two branches corresponding to $\iota=\pm 1$ such that, $k=k_{F\downarrow}-q_{1}+q_{2}\in[0,\pi]$ and $k=-k_{F\downarrow}-q_{1}+q_{2}\in[-\pi,0]\,.$ (7)
In Eq. (6) and other expressions of spin dynamical structure factors' spectra
given below and in Appendices B and C, $\varepsilon_{s}(q)$ and
$\varepsilon_{s2}(q)$ are the $s$ and $s2$ band energy dispersions,
respectively, defined by Eqs. (98), (99), and (101)-(110). Limiting behaviors
of such dispersions and of the corresponding $s$ and $s2$ group velocities
that provide useful information on the momentum, spin density, and interaction
dependences of the corresponding spin dynamical structure factors' spectra are
provided in Eqs. (112)-(128).
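The two-parametric spectrum of Eq. (6) can be visualized by direct scanning once those dispersions are available. The sketch below is ours and uses simple placeholder dispersions, chosen only so that the script runs, to show the bookkeeping of the $\iota=+1$ branch; the actual Bethe-ansatz dispersions of Eqs. (98)-(110) must be substituted for quantitative results.

```python
# Our sketch of how the continuum of Eq. (6) (iota = +1 branch, Eq. (7)) can
# be traced by scanning (q1, q2). eps_s and eps_s2 below are placeholders,
# NOT the paper's Bethe-ansatz dispersions.
import numpy as np

m = 0.3
kFdown, kFup = 0.5 * np.pi * (1 - m), 0.5 * np.pi * (1 + m)

def eps_s(q):   # placeholder s band dispersion with eps_s(kFdown) = 0
    return -np.cos(q) + np.cos(kFdown)

def eps_s2(q):  # placeholder gapped s2 band dispersion
    return 0.8 + 0.3 * np.cos(q)

q1 = np.linspace(-kFdown, kFdown, 401)
q2 = np.linspace(0.0, kFup - kFdown, 401)
Q1, Q2 = np.meshgrid(q1, q2)
K = kFdown - Q1 + Q2                 # k = kFdown - q1 + q2, Eq. (7)
W = -eps_s(Q1) + eps_s2(Q2)          # omega = -eps_s(q1) + eps_s2(q2)

# Lower/upper boundaries of the continuum on a coarse k grid
bins = np.linspace(K.min(), K.max(), 11)
for lo, hi in zip(bins[:-1], bins[1:]):
    sel = (K >= lo) & (K < hi)
    print(f"k ~ {0.5 * (lo + hi):.3f}: omega in "
          f"[{W[sel].min():.3f}, {W[sel].max():.3f}]")
```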
We denote by $\Delta^{ab}(k)$ where $ab=+-,xx,zz$ the spectra of the spin
$n$-string excited states' gapped lower thresholds of $S^{ab}(k,\omega)$.
They play an important role in our study, since for some $k$ intervals there
are singularities at and just above them.
For $S^{+-}(k,\omega)$, $S^{xx}(k,\omega)$, and $S^{zz}(k,\omega)$ such gapped
thresholds have a different form for two spin density intervals
$m\in]0,\tilde{m}]$ and $m\in[\tilde{m},1[$, respectively. Here $\tilde{m}$ is
a $u$ dependent spin density at which the following equality holds,
$W_{s2}|_{m=\tilde{m}}=-\varepsilon_{s}(2k_{F\downarrow}-k_{F\uparrow})|_{m=\tilde{m}}\,.$
(8)
From the use of the $\varepsilon_{s2}(0)$’s expression given in Eq. (114), the
$s2$ energy bandwidth $W_{s2}$ appearing here can be expressed as
$W_{s2}=4\mu_{B}h-\varepsilon_{s2}(0)$. The spin density $\tilde{m}$ is a
continuous increasing function of $u$ that in the $u\rightarrow 0$ and $u\gg
1$ limits reads,
$\lim_{u\rightarrow 0}\tilde{m}=0\hskip 14.22636pt{\rm and}\hskip
14.22636pt\lim_{u\gg 1}\tilde{m}\approx 0.317\,.$ (9)
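Since $\tilde{m}$ is defined implicitly by Eq. (8), in practice it can be located by a one-dimensional root search at fixed $u$. The following is a minimal sketch of ours, assuming that the Bethe-ansatz quantities $W_{s2}(m)$ and $\varepsilon_{s}(q;m)$ are available as callables; the function g below is a hypothetical wrapper for them.

```python
# Our bisection sketch for the crossover spin density m~ of Eq. (8): the root
# of g(m) = W_s2(m) + eps_s(2*kFdown - kFup; m). The callable g must wrap the
# Bethe-ansatz solvers behind Eqs. (98)-(110); it is a hypothetical input here.
from typing import Callable

def find_m_tilde(g: Callable[[float], float],
                 lo: float = 1e-4, hi: float = 1.0 - 1e-4,
                 tol: float = 1e-10) -> float:
    """Bisection on ]0,1[, assuming g changes sign exactly once."""
    glo = g(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) * glo > 0:   # no sign change in [lo, mid]: root lies above
            lo, glo = mid, g(mid)
        else:                  # sign change in [lo, mid]: root lies below
            hi = mid
    return 0.5 * (lo + hi)
```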
Momenta involving a related momentum $\tilde{k}$ separate parts of the gapped
lower threshold spectra of $S^{+-}(k,\omega)$, $S^{xx}(k,\omega)$, and
$S^{zz}(k,\omega)$ that refer to different types of $k$ dependences. At
$k=\tilde{k}$ the following relations that define it hold,
$W_{s2}=\varepsilon_{s}(k_{F\uparrow}-\tilde{k})-\varepsilon_{s}(k_{F\downarrow}-\tilde{k})$ for $\tilde{k}\geq(k_{F\uparrow}-k_{F\downarrow})$ and $m\in[0,\tilde{m}]$, and $W_{s2}=4\mu_{B}\,h-\varepsilon_{s2}(\tilde{k})-\varepsilon_{s}(k_{F\downarrow}-\tilde{k})$ for $\tilde{k}\leq(k_{F\uparrow}-k_{F\downarrow})$ and $m\in[\tilde{m},1[\,.$ (10)
The momentum $\tilde{k}$ is given by
$\tilde{k}=(k_{F\uparrow}-k_{F\downarrow})$ at $m=\tilde{m}$.
The spectra of the transverse gapped lower thresholds are such that,
$\Delta^{xx}(k)=\Delta^{+-}(k)\hskip 5.69046pt{\rm for}\hskip
5.69046ptk\in[0,\pi]\,.$ (11)
(The equality $\Delta^{-+}(k)=\Delta^{+-}(k)$ also holds, yet as reported
above the amount of $S^{-+}(k,\omega)$’s spectral weight produced by excited
$n$-string states is negligible in the thermodynamic limit and finite magnetic
fields.) The spectrum of the longitudinal gapped lower threshold is also
related to $\Delta^{+-}(k)$ as follows,
$\Delta^{zz}(k)=\Delta^{+-}(\pi-k)\hskip 5.69046pt{\rm for}\hskip
5.69046ptk\in[0,\pi]\,.$ (12)
For smaller spin densities $m\in[0,\tilde{m}]$, the spectrum $\Delta^{+-}(k)$
is given by,
$\Delta^{+-}(k)=\varepsilon_{s2}(k)$ for $k\in[0,(k_{F\uparrow}-k_{F\downarrow})]$; $\Delta^{+-}(k)=4\mu_{B}\,h-\varepsilon_{s}(k_{F\uparrow}-k)$ for $k\in[(k_{F\uparrow}-k_{F\downarrow}),\tilde{k}]$; $\Delta^{+-}(k)=4\mu_{B}\,h-W_{s2}-\varepsilon_{s}(k_{F\downarrow}-k)$ for $k\in[\tilde{k},2k_{F\downarrow}]$; $\Delta^{+-}(k)=\varepsilon_{s2}(k-2k_{F\downarrow})$ for $k\in[2k_{F\downarrow},\pi]\,.$ (13)
For larger spin densities $m\in[\tilde{m},1[$, that spectrum is slightly
different and reads,
$\Delta^{+-}(k)=\varepsilon_{s2}(k)$ for $k\in[0,\tilde{k}[$; $\Delta^{+-}(k)=4\mu_{B}\,h-W_{s2}-\varepsilon_{s}(k_{F\downarrow}-k)$ for $k\in]\tilde{k},2k_{F\downarrow}]$; $\Delta^{+-}(k)=\varepsilon_{s2}(k-2k_{F\downarrow})$ for $k\in[2k_{F\downarrow},\pi]\,.$ (14)
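For completeness, the piecewise structure of Eqs. (13) and (14) translates directly into code. The following sketch is ours; all inputs ($\varepsilon_{s}$, $\varepsilon_{s2}$, $W_{s2}$, $\tilde{k}$, $\tilde{m}$, and $4\mu_{B}h$) must be supplied by the Bethe-ansatz solution and are passed in as arguments:

```python
# Our piecewise transcription of Eqs. (13)-(14) for the gapped lower threshold
# Delta^{+-}(k). None of the inputs are supplied by this sketch.
def delta_pm(k, m, m_tilde, k_tilde, kFdown, kFup,
             eps_s, eps_s2, W_s2, four_muB_h):
    if m <= m_tilde:                        # Eq. (13), m in ]0, m~]
        if k <= kFup - kFdown:
            return eps_s2(k)
        if k <= k_tilde:
            return four_muB_h - eps_s(kFup - k)
        if k <= 2 * kFdown:
            return four_muB_h - W_s2 - eps_s(kFdown - k)
        return eps_s2(k - 2 * kFdown)
    if k < k_tilde:                         # Eq. (14), m in [m~, 1[
        return eps_s2(k)
    if k <= 2 * kFdown:
        return four_muB_h - W_s2 - eps_s(kFdown - k)
    return eps_s2(k - 2 * kFdown)
```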
The expressions of the previously studied two-parametric transverse gapless
spectra Carmelo_16 $\omega^{-+}(k)$ and $\omega^{+-}(k)$, whose superposition
gives $\omega^{xx}(k)$, and that of the longitudinal gapless spectrum
$\omega^{zz}(k)$ that [except for $\omega^{-+}(k)$] refer to the lower
continua in Figs. 1-6, are given in Eqs. (49)-(51). The corresponding excited
energy eigenstates are described by real Bethe-ansatz rapidities. The
expressions of the one-parametric spectra of their upper thresholds
$\omega^{-+}_{ut}(k)$, $\omega^{+-}_{ut}(k)$, $\omega^{xx}_{ut}(k)$, and
$\omega^{zz}_{ut}(k)$ and lower thresholds $\omega^{-+}_{lt}(k)$,
$\omega^{+-}_{lt}(k)$, $\omega^{xx}_{lt}(k)$, and $\omega^{zz}_{lt}(k)$ are
also provided in Appendix B.
We consider the following energy gaps, $\Delta_{\rm gap}^{+-}(k)=\Delta^{+-}(k)-\omega^{+-}_{ut}(k)\geq 0$, $\Delta_{\rm gap}^{xx}(k)=\Delta^{xx}(k)-\omega^{xx}_{ut}(k)$, and $\Delta_{\rm gap}^{zz}(k)=\Delta^{zz}(k)-\omega^{zz}_{ut}(k)\geq 0\,,$ (15)
where,
$\Delta_{\rm gap}^{xx}(k)=\Delta^{+-}(k)-\omega^{+-}_{ut}(k)$ for $k\in[0,k^{xx}_{ut}]$ and $\Delta_{\rm gap}^{xx}(k)=\Delta^{+-}(k)-\omega^{-+}_{ut}(k)$ for $k\in[k^{xx}_{ut},\pi]\,,$ (16)
and
$\Delta_{\rm gap}^{zz}(k)=\Delta_{\rm gap}^{+-}(\pi-k)$ for $k\in[0,\pi]\,.$ (17)
The momentum $k^{xx}_{ut}>k_{F\uparrow}-k_{F\downarrow}$ in Eq. (16) is that
at which the equality
$\omega^{-+}_{ut}(k^{xx}_{ut})=\omega^{+-}_{ut}(k^{xx}_{ut})$ holds. In the
thermodynamic limit and for the $k$ intervals for which such energy gaps are
positive, there is a negligible amount of spectral weight in the corresponding
$(k,\omega)$-plane regions. This justifies calling them gaps.
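The gap functions of Eqs. (15)-(17) follow the same pattern as the thresholds above; a short sketch of ours, with the upper-threshold spectra of Eqs. (52)-(55) supplied externally, makes the $k^{xx}_{ut}$ crossover of Eq. (16) explicit:

```python
# Our sketch of Eqs. (16)-(17). omega_ut_pm and omega_ut_mp are the
# upper-threshold spectra omega^{+-}_{ut}(k) and omega^{-+}_{ut}(k) of
# Eqs. (52)-(55); k_xx_ut is the momentum at which they cross. All external.
import math

def gap_xx(k, delta_pm, omega_ut_pm, omega_ut_mp, k_xx_ut):
    ut = omega_ut_pm(k) if k <= k_xx_ut else omega_ut_mp(k)
    return delta_pm(k) - ut      # may be negative in the intervals of Eq. (18)

def gap_zz(k, gap_pm):
    return gap_pm(math.pi - k)   # Eq. (17)
```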
The upper threshold spectra $\omega^{-+}_{ut}(k)$, $\omega^{+-}_{ut}(k)$,
$\omega^{xx}_{ut}(k)$, $\omega^{zz}_{ut}(k)$ in Eqs. (15)-(17) are given in
Eqs. (52)-(55). The spectra $\omega^{+-}_{ut}(k)$, $\omega^{xx}_{ut}(k)$, and
$\omega^{zz}_{ut}(k)$ refer to the upper thresholds of the lower continua in
Figs. 1 and 2, 3 and 4, and 5 and 6, respectively.
As confirmed from analysis of Figs. 1-6, one has that $\Delta_{\rm
gap}^{+-}(k)\geq 0$ and $\Delta_{\rm gap}^{zz}(k)\geq 0$, whereas $\Delta_{\rm
gap}^{xx}(k)$ is negative for some $k$ intervals. Specifically,
$\Delta_{\rm gap}^{xx}(k)\leq 0$ for $k\in[\bar{k}_{0},\pi]$ when $m\in]0,\bar{m}_{0}]$, and for $k\in[\bar{k}_{0},\bar{k}_{1}]$ when $m\in]\bar{m}_{0},\bar{m}]\,.$ (18)
The values of the spin densities $\bar{m}_{0}$ and $\bar{m}>\bar{m}_{0}$
respectively increase and decrease upon increasing $u$, their limiting values
being, $\lim_{u\rightarrow 0}\bar{m}_{0}={2\over\pi}\arcsin\left({1\over 3}\right)\approx 0.216$, $\lim_{u\rightarrow 0}\bar{m}={2\over\pi}\arctan\left({1\over 2}\right)\approx 0.295$, $\lim_{u\gg 1}\bar{m}_{0}\approx 0.239$, and $\lim_{u\gg 1}\bar{m}\approx 0.276\,.$ (19)
The momenta $\bar{k}_{0}$ and $\bar{k}_{1}$ also appearing in Eq. (18) are
such that $k^{xx}_{ut}\leq\bar{k}_{0}\leq\bar{k}_{1}\leq\pi$. The equality
$\bar{k}_{0}=\bar{k}_{1}$ holds at $m=\bar{m}$. At that spin density the
momentum $\bar{k}_{0}=\bar{k}_{1}$ depends very little on $u$. It is given by
$\bar{k}_{0}=\bar{k}_{1}=2k_{F\downarrow}$ in the $u\rightarrow 0$ limit and
for $u\gg 1$ it reaches a value very near and just above $2k_{F\downarrow}$.
For $m\in]0,\bar{m}]$ and the $k$ intervals in Eq. (18), the
$S^{xx}(k,\omega)$ expressions obtained in this paper in the vicinity of that
factor's gapped lower threshold are not valid because $\Delta_{\rm
gap}^{xx}(k)<0$. However, the $S^{+-}(k,\omega)$ and $S^{zz}(k,\omega)$
expressions in the vicinity of their gapped lower thresholds considered in the
following are valid for all $k$ intervals, since the energy gaps $\Delta_{\rm
gap}^{+-}(k)$ and $\Delta_{\rm gap}^{zz}(k)$ are finite and positive for
$0<m<1$ and $u>0$.
In Appendix C, limiting values of the energy gaps considered here and their
values at some specific momenta are provided.
## IV The line shape at and near the spin dynamical structure factors' singularities
The spin dynamical structure factors' singularities studied in this paper
occur at and just above spectral lines that within the dynamical theory of
Refs. Carmelo_16, ; Carmelo_05, are called branch lines. Such lines coincide
with well defined $k$ intervals of the $(k,\omega)$-plane lower thresholds of
the spectra of excited states populated and not populated by spin $n$-strings,
plotted in Figs. 1-6.
In the case of the contribution from spin $n$-string states, the dynamical
theory line shape expressions are valid provided there is no or nearly no
spectral weight just below the corresponding gapped lower thresholds. In the
present thermodynamic limit, the amount of spectral weight just below such
thresholds either vanishes or is extremely small. In the latter case, the very
weak coupling to it leads to a higher order contribution to the line shape
expressions given in the following that can be neglected in that limit.
In the case of the lower $(k,\omega)$-plane spectrum continua in Figs. 1-6 of
excited states not populated by spin $n$-strings and thus described by real
Bethe-ansatz rapidities, there is no spectral weight below the corresponding
lower thresholds. This ensures that the expressions of the spin dynamical
structure factors at and just above such thresholds are exact.
The momentum interval $k\in[0,\pi]$ of the gapped lower thresholds of spectra
of spin $n$-string states is divided into several subintervals that refer to a
set of branch lines called the $s2$, $\bar{s}$, $\bar{s}^{\prime}$, and
$s2^{\prime}$ branch lines. The corresponding excited states are populated by
$N_{\downarrow}-2$ $s$ particles and one $s2$ particle. The lower thresholds
of the spectra associated with excited states populated by $N_{\downarrow}$
$s$ particles either correspond to a single $s$ branch line or to two
sections of such a line.
The $\bar{s}$, $\bar{s}^{\prime}$, and $s$ branch lines refer to $k$ ranges
corresponding to a maximum $s$ band $q$ interval
$q\in[-(k_{F\downarrow}-\delta q_{s}),(k_{F\downarrow}-\delta q_{s})]$ in the
case of $s$ hole creation and to a maximum $s$ band $q$ interval such that
$|q|\in[(k_{F\downarrow}+\delta q_{s}),k_{F\uparrow}]$ in the case of $s$
particle creation. Here $\delta q_{s}$, which satisfies $\delta
q_{s}/k_{F\uparrow}\ll 1$ for $0<m<1$, is for the different branch lines
either very small or vanishes in the thermodynamic limit.
In the very small $k$ intervals corresponding to the $s$ band intervals
$q\in[-k_{F\downarrow},-(k_{F\downarrow}-\delta q_{s})]$ and
$q\in[(k_{F\downarrow}-\delta q_{s}),k_{F\downarrow}]$ the line shape of the
spin dynamical structure factors is different, as given in Ref. Carmelo_16, .
(See Eqs. (128)-(133) of Ref. Carmelo_16, .)
Similarly, in the case of the $(k,\omega)$-plane vicinity of the $s2$ and
$s2^{\prime}$ branch lines, which are part of the gapped lower thresholds, the
line shape expressions obtained in this paper are valid in $k$ ranges
corresponding to $s2$ band maximum intervals
$q\in[-(k_{F\uparrow}-k_{F\downarrow}-\delta q_{s2}),0]$ or
$q\in[0,(k_{F\uparrow}-k_{F\downarrow}-\delta q_{s2})]$. Here $\delta q_{s2}$,
which satisfies $\delta q_{s2}/(k_{F\uparrow}-k_{F\downarrow})\ll 1$, is for
$0<m<1$ very small and may vanish in the thermodynamic limit. (Again, the spin
dynamical structure factors expressions are different and known for
$q\in[-(k_{F\uparrow}-k_{F\downarrow}),-(k_{F\uparrow}-k_{F\downarrow}-\delta
q_{s2})]$ and $q\in[(k_{F\uparrow}-k_{F\downarrow}-\delta
q_{s2}),(k_{F\uparrow}-k_{F\downarrow})]$, yet they are not of interest for
this study.)
In the present thermodynamic limit, the above $s$ band momentum intervals are
thus represented in the following as $q\in]-k_{F\downarrow},k_{F\downarrow}[$
and $|q|\in]k_{F\downarrow},k_{F\uparrow}]$ and the $s2$ band momentum
intervals by $q\in]-(k_{F\uparrow}-k_{F\downarrow}),0]$ or
$q\in[0,(k_{F\uparrow}-k_{F\downarrow})[$.
Around the specific momentum values where along a gapped lower threshold or a
lower threshold two neighboring branch lines or branch line sections cross,
there are small momentum widths where the corresponding lower threshold refers
to a boundary line that connects the two branch lines or branch line sections
under consideration.
In the thermodynamic limit, such momentum intervals are in general negligible
and the corresponding small spectral deviations are not visible in the spectra
plotted in Figs. 1-6. In the cases in which they are small yet more extended,
the two branch lines or branch line sections run very near the lower threshold
and there is very little spectral weight between it and such lines. In this
case, the singularities on the two branch lines or branch line sections remain
the dominant spectral feature.
We again account for such negligible effects by replacing $[$ and $]$ by $]$
and $[$, respectively, at the $k$ limiting values that separate lower
thresholds’s $k$ intervals associated with two neighboring branch lines or
branch line sections.
### IV.1 The line shape near the $s2$, $\bar{s}$, $\bar{s}^{\prime}$, and
$s2^{\prime}$ branch lines (gapped lower thresholds)
Here we study the line shape at and just above the gapped lower thresholds of
the spectra plotted in Figs. 1-6 of the transverse and longitudinal structure
factors. In the case of $S^{xx}(k,\omega)$, this refers to $k$ intervals for
which $\Delta_{\rm gap}^{xx}(k)>0$ and thus different from those given in Eq.
(18). In Appendix D, the number and current number deviations as well as the
spectral functionals that control the expressions of the spin dynamical
structure factors given below are provided.
Figure 7: The momentum dependence of the exponent that in the $k$ intervals
for which it is negative controls the $S^{+-}(k,\omega)$ line shape near and
just above the $s2$ branch line for spin densities $m$ (a) $0.05$, (b) $0.1$,
(c) $0.3$, (d) $0.5$, (e) $0.8$, and (f) $0.99$ and $u=0.4,1.0,15.0$. The $s2$
branch line is part of the gapped lower threshold of the spin $n$-strings
continuum displayed in Figs. 1 and 2. The same exponent, in the $k$ intervals
for which it is negative, also controls the $S^{xx}(k,\omega)$’s line shape
near and just above the $s2$ branch line in the spin $n$-strings continuum
displayed in Figs. 3 and 4.
The line shape near the gapped lower thresholds has the following general
form,
$S^{ab}(k,\omega)=C_{ab}^{\Delta}\left(\omega-\Delta_{\beta}^{ab}(k)\right)^{\zeta_{\beta}^{ab}(k)}$ for $(\omega-\Delta_{\beta}^{ab}(k))\geq 0$, where $\beta=s2,\bar{s},\bar{s}^{\prime},s2^{\prime}$ and $ab=+-,xx,zz$ (valid when $\Delta_{\rm gap}^{ab}>0$). (20)
Here $C_{ab}^{\Delta}$ is a constant that has a fixed value for the $k$ and
$\omega$ ranges associated with small values of the energy deviation
$(\omega-\Delta_{\beta}^{ab}(k))\geq 0$, the gapped lower threshold spectra
$\Delta_{\beta}^{ab}(k)$ are given in Eqs. (11)-(14), and the index
$\beta=s2,\bar{s},\bar{s}^{\prime},s2^{\prime}$ labels the branch lines or
branch line sections that are part of the gapped lower thresholds in the
specific $k$ intervals defined in the following.
Figure 8: The same as in Fig. 7 for the $\bar{s}^{\prime}$ branch line. That
line coincides with the gapped lower threshold of the spin $n$-strings
continuum for small $k$ intervals and only for spin densities $0<m<\tilde{m}$
where $\tilde{m}$ continuously increases from $\tilde{m}=0$ for $u\rightarrow
0$ to $\tilde{m}\approx 0.317$ for $u\gg 1$. The corresponding exponent
plotted here is negative for such $k$ intervals.
The branch-line exponents that appear in Eq. (20) have the following general
form,
$\zeta^{ab}_{\beta}(k)=-1+\sum_{\iota=\pm 1}\Phi_{\iota}^{2}(q)$ for $\beta=s2,\bar{s},\bar{s}^{\prime},s2^{\prime},s\,,$ (21)
where the spectral functionals $\Phi_{\iota}(q)$ suitable to each type of
branch line are given in Eqs. (87)-(90). [This also includes the $s$ branch
lines that define the lower thresholds of the lower continua in Figs. 1-6.
Their exponents are also of the form, Eq. (21), and appear in the spin
dynamical structure factors' general expression provided below in Eq. (33).]
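In other words, once the functional values $\Phi_{\pm 1}(q)$ of Eqs. (87)-(90) are known at the branch-line momentum, the threshold behavior is a pure power law. The following is a minimal sketch of ours of Eqs. (20) and (21); the constant $C$, the threshold $\Delta(k)$, and the $\Phi_{\pm 1}$ values are external inputs.

```python
# Our minimal transcription of the threshold power law, Eqs. (20)-(21).
def branch_exponent(phi_plus: float, phi_minus: float) -> float:
    """zeta = -1 + Phi_{+1}^2 + Phi_{-1}^2, Eq. (21)."""
    return -1.0 + phi_plus ** 2 + phi_minus ** 2

def line_shape(omega: float, delta_k: float, zeta_k: float,
               C: float = 1.0) -> float:
    """Eq. (20): S ~ C (omega - Delta(k))^zeta just above the threshold."""
    return C * (omega - delta_k) ** zeta_k if omega > delta_k else 0.0

# zeta < 0 (i.e. Phi_{+1}^2 + Phi_{-1}^2 < 1) gives a divergent singularity
# at the threshold; zeta > 0 gives an edge at which the factor vanishes.
```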
Figure 9: The same as in Fig. 7 for the $\bar{s}$ branch line, which refers to
subintervals of the gapped lower threshold of the spin $n$-string continuum of
both $S^{+-}(k,\omega)$ and $S^{xx}(k,\omega)$. In the case of
$S^{xx}(k,\omega)$, the momentum dependent exponent plotted here is valid only
for the $k$ intervals of the $\bar{s}$ branch line in Figs. 3 and 4 for which
there is a gap between it and the upper threshold of the lower continuum.
As mentioned above, the amount of spectral weight below the gapped thresholds
either vanishes or is very small. In the latter case, the very weak coupling
to it leads to a higher order contribution to the line shapes given in Eqs.
(20) and (21) that can be neglected in the present thermodynamic limit.
The relation of the excitation momentum $k$ to the $s$ band momentum $q$ or
$s2$ band momentum $q$ that appears in the argument of $\Phi_{\iota}$ in the
general exponent expression, Eq. (21), is branch-line dependent. Hence it is
useful to revisit the expressions of the spectra of the gapped lower
thresholds, Eqs. (11)-(14), for each of their branch lines or branch
line sections, including information on the relation between the physical
excitation momentum $k$ and the $s$ or $s2$ band momenta $q$. The
corresponding expressions are given for the $k$ intervals for which the
dynamical structure factor's expression is of the form, Eq. (20), which
implies replacements of $[$ and $]$ by $]$ and $[$, respectively, in the
limits of such intervals.
In the case of $S^{+-}(k,\omega)$, the gapped lower threshold spectrum
$\Delta^{+-}(k)$ is divided into the following branch-line intervals,
$\Delta_{s2}^{+-}(k)=\varepsilon_{s2}(k)$ and $k=q$, where $k\in]0,(k_{F\uparrow}-k_{F\downarrow})[$ and $q\in]0,(k_{F\uparrow}-k_{F\downarrow})[$ for $m\in]0,\tilde{m}]$, and $k\in]0,{\tilde{k}}[$ and $q\in]0,{\tilde{k}}[$ for $m\in[\tilde{m},1[\,,$ (22)

$\Delta_{\bar{s}^{\prime}}^{+-}(k)=4\mu_{B}\,h-\varepsilon_{s}(k_{F\uparrow}-k)$ and $k=k_{F\uparrow}-q$, where $k\in](k_{F\uparrow}-k_{F\downarrow}),\tilde{k}[$ and $q\in](k_{F\uparrow}-\tilde{k}),k_{F\downarrow}[$ for $m\in]0,\tilde{m}]\,,$ (23)

$\Delta_{\bar{s}}^{+-}(k)=4\mu_{B}\,h-W_{s2}-\varepsilon_{s}(k_{F\downarrow}-k)$ and $k=k_{F\downarrow}-q$, where $k\in[{\tilde{k}},2k_{F\downarrow}[$ and $q\in]-k_{F\downarrow},(k_{F\downarrow}-{\tilde{k}})[$ for $m\in]0,\tilde{m}]$, and $k\in]{\tilde{k}},2k_{F\downarrow}[$ and $q\in]-k_{F\downarrow},(k_{F\downarrow}-{\tilde{k}})[$ for $m\in[\tilde{m},1[\,,$ (24)

and

$\Delta_{s2^{\prime}}^{+-}(k)=\varepsilon_{s2}(k-2k_{F\downarrow})$ and $k=2k_{F\downarrow}+q$, where $k\in]2k_{F\downarrow},\pi[$ and $q\in]0,(k_{F\uparrow}-k_{F\downarrow})[$ for $m\in]0,1[\,.$ (25)
Figure 10: The same as in Fig. 7 for the $s2^{\prime}$ branch line. As in
Fig. 9, in the case of $S^{xx}(k,\omega)$, the momentum dependent exponent
plotted here is valid only for the $k$ intervals of the $s2^{\prime}$ branch
line in Figs. 3 and 4 for which there is a gap between it and the upper
threshold of the lower continuum.
The corresponding $k$ dependent exponents of general form, Eq. (21), that
appear in the expression,
$S^{+-}(k,\omega)=C_{+-}^{\Delta}(\omega-\Delta_{\beta}^{+-}(k))^{\zeta_{\beta}^{+-}(k)}$,
Eq. (20) for $ab=+-$ and $\beta=s2,\bar{s}^{\prime},\bar{s},s2^{\prime}$, are
given by,
$\zeta_{s2}^{+-}(k)=-1+\sum_{\iota=\pm 1}\left(-{\iota\over 2\xi_{s\,s}^{1}}+\Phi_{s,s2}(\iota k_{F\downarrow},q)\right)^{2}$ for $q=k$ and $k\in]0,(k_{F\uparrow}-k_{F\downarrow})[$ for $m\in]0,\tilde{m}]$, $k\in]0,{\tilde{k}}[$ for $m\in[\tilde{m},1[$;

$\zeta_{\bar{s}^{\prime}}^{+-}(k)=-1+\sum_{\iota=\pm 1}\left(-{\xi_{s\,s}^{1}\over 2}-\Phi_{s,s}(\iota k_{F\downarrow},q)\right)^{2}$ for $q=k_{F\uparrow}-k$ and $k\in](k_{F\uparrow}-k_{F\downarrow}),\tilde{k}[$ for $m\in]0,\tilde{m}]$;

$\zeta_{\bar{s}}^{+-}(k)=-1+\sum_{\iota=\pm 1}\left(\iota{\xi_{s\,s2}^{0}\over 2}+{\xi_{s\,s}^{1}\over 2}-\Phi_{s,s}(\iota k_{F\downarrow},q)\right)^{2}$ for $q=k_{F\downarrow}-k$ and $k\in]{\tilde{k}},2k_{F\downarrow}[$ for $m\in]0,\tilde{m}]$ and $k\in]{\tilde{k}},2k_{F\downarrow}[$ for $m\in[\tilde{m},1[$;

$\zeta_{s2^{\prime}}^{+-}(k)=-1+\sum_{\iota=\pm 1}\left(-{\iota\over 2\xi_{s\,s}^{1}}+\xi_{s\,s}^{1}+\Phi_{s,s2}(\iota k_{F\downarrow},q)\right)^{2}$ for $q=k-2k_{F\downarrow}$ and $k\in]2k_{F\downarrow},\pi[\,.$ (26)
The phase shifts in units of $2\pi$, $\Phi_{s,s}\left(\iota
k_{F\downarrow},q\right)$ and $\Phi_{s,s2}\left(\iota
k_{F\downarrow},q\right)$ where $\iota=\pm 1$, appearing in this equation and
in other exponents' expressions provided in the following are defined by Eqs.
(129)-(133). Limiting behaviors of such phase shifts are provided in Eqs.
(134)-(138). The phase-shift related parameters
$\xi^{1}_{s\,s}=1/\xi^{0}_{s\,s}$ and $\xi_{s\,s2}^{0}$ also appearing in the
above exponents' expressions are defined by Eqs. (139)-(143) and (144)-(145),
respectively.
Physically, $\pm 2\pi\Phi_{s,s}\left(\pm k_{F\downarrow},q\right)$ is the
phase shift acquired by a $s$ particle of momentum $\pm k_{F\downarrow}$ upon
creation of one $s$ band hole ($-2\pi\Phi_{s,s}$) or one $s$ particle
($+2\pi\Phi_{s,s}$) at a momentum $q$ in the $s$ band interval
$q\in]-k_{F\downarrow},k_{F\downarrow}[$ and such that
$|q|\in]k_{F\downarrow},k_{F\uparrow}]$, respectively. In turn,
$2\pi\Phi_{s,s2}\left(\pm k_{F\downarrow},q\right)$ is the phase shift
acquired by a $s$ particle of momentum $\pm k_{F\downarrow}$ upon creation of
one $s2$ particle at a momentum $q$ in the $s2$ band subinterval
$q\in[0,(k_{F\uparrow}-k_{F\downarrow})[$ or
$q\in]-(k_{F\uparrow}-k_{F\downarrow}),0]$.
The three functionals $\Phi_{\iota}(q)$ in the general expression, Eq. (21),
specific to the exponents given in Eq. (26) for the $S^{+-}(k,\omega)$’s
$s2,s2^{\prime}$ branch lines, $\bar{s}$ branch line, and $\bar{s}^{\prime}$
branch line are provided in Eqs. (87), (88), and (89), respectively. The
corresponding suitable specific values of the number and current number
deviations used in such functionals are for the present branch lines given in
Table 1.
b. line | $k$ in terms of $q$ | $\delta N_{s}^{F}$ | $\delta J_{s}^{F}$ | $\delta N_{s}^{NF}$ | $\delta J_{s2}$ | $\delta N_{s2}$
---|---|---|---|---|---|---
$s2$ | $k=q$ | $-1$ | $0$ | $0$ | $0$ | $1$
$\bar{s}^{\prime}$ | $k=k_{F\uparrow}-q$ | $0$ | $1/2$ | $-1$ | $1/2$ | $1$
$\bar{s}$ | $k=k_{F\downarrow}-q$ | $0$ | $1/2$ | $-1$ | $0$ | $1$
$s2^{\prime}$ | $k=2k_{F\downarrow}+q$ | $-1$ | $1$ | $0$ | $0$ | $1$
Table 1: The momentum $k>0$ and $s$ and $s2$ bands number and current number
deviations defined in Appendix D for $+-$ transverse spin excitations
populated by one $s2$ particle and thus described by both real and complex
nonreal rapidities in the case of the $s2$ branch line, $\bar{s}^{\prime}$
branch line, $\bar{s}$ branch line, and $s2^{\prime}$ branch line that for the
momentum intervals given in the text are part of the corresponding gapped
lower threshold.
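For readers who wish to evaluate the exponents numerically, the deviations of Table 1 can be encoded directly. The dictionary below is our own transcription (the keys abbreviate the table's column heads, with "s_bar_p" standing for the $\bar{s}^{\prime}$ line) and is meant to be fed into implementations of the functionals of Eqs. (87)-(90):

```python
# Our transcription of Table 1 into a data structure for use with
# implementations of the spectral functionals of Eqs. (87)-(90).
TABLE_1 = {
    "s2":       dict(dN_s_F=-1, dJ_s_F=0.0, dN_s_NF=0,  dJ_s2=0.0, dN_s2=1),
    "s_bar_p":  dict(dN_s_F=0,  dJ_s_F=0.5, dN_s_NF=-1, dJ_s2=0.5, dN_s2=1),
    "s_bar":    dict(dN_s_F=0,  dJ_s_F=0.5, dN_s_NF=-1, dJ_s2=0.0, dN_s2=1),
    "s2_prime": dict(dN_s_F=-1, dJ_s_F=1.0, dN_s_NF=0,  dJ_s2=0.0, dN_s2=1),
}
```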
The $S^{+-}(k,\omega)$’s $s2$, $\bar{s}^{\prime}$, $\bar{s}$, and
$s2^{\prime}$ branch line exponents whose expressions are given in Eq. (26)
are plotted as a function of $k$ in Figs. 7, 8, 9, and 10, respectively. In
the $k$ intervals of the gapped lower threshold of the spin $n$-string
continuum in Figs. 1 and 2 for which they are negative, which are represented
by solid lines in these figures, there are singularities at and just above the
corresponding $\beta=s2,\bar{s}^{\prime},\bar{s},s2^{\prime}$ branch lines in
the expression
$S^{+-}(k,\omega)=C_{+-}^{\Delta}(\omega-\Delta_{\beta}^{+-}(k))^{\zeta_{\beta}^{+-}(k)}$,
Eq. (20) for $ab=+-$.
The related $S^{xx}(k,\omega)$’s expression, Eq. (20) for $ab=xx$, in the
vicinity and just above the gapped lower threshold of the spin $n$-string
continuum in Figs. 3 and 4 is similar to that of $S^{+-}(k,\omega)$ for the
$k$ intervals for which there is no overlap with the lower continuum spectrum
associated with excited states described by real Bethe-ansatz rapidities. This
thus excludes the low-$m$ $k$ intervals considered in Eq. (18).
Returning to the relation between the physical excitation momentum $k$ and the $s$ and $s2$ band momenta $q$, it is useful to provide the $S^{zz}(k,\omega)$’s expressions of the gapped lower threshold spectrum $\Delta^{zz}(k)$, Eqs. (15) and (17), for each of its branch-line parts:
$\Delta_{s2}^{zz}(k)=\varepsilon_{s2}(k-(k_{F\uparrow}-k_{F\downarrow}))$ and $k=(k_{F\uparrow}-k_{F\downarrow})+q$
where $k\in]0,(k_{F\uparrow}-k_{F\downarrow})[$ and $q\in]-(k_{F\uparrow}-k_{F\downarrow}),0[$ for $m\in]0,1[$, (27)

$\Delta_{\bar{s}}^{zz}(k)=4\mu_{B}\,h-W_{s2}-\varepsilon_{s}\left(k_{F\uparrow}-k\right)$ and $k=k_{F\uparrow}-q$
where $k\in](k_{F\uparrow}-k_{F\downarrow}),(\pi-{\tilde{k}})[$ and $q\in]-(k_{F\downarrow}-{\tilde{k}}),k_{F\downarrow}[$ for $m\in]0,\tilde{m}]$
and $k\in](k_{F\uparrow}-k_{F\downarrow}),(\pi-{\tilde{k}})[$ and $q\in]-(k_{F\downarrow}-{\tilde{k}}),k_{F\downarrow}[$ for $m\in[\tilde{m},1[$, (28)

$\Delta_{\bar{s}^{\prime}}^{zz}(k)=4\mu_{B}\,h-\varepsilon_{s}(k_{F\downarrow}-k)$ and $k=k_{F\downarrow}-q$
where $k\in](\pi-{\tilde{k}}),2k_{F\downarrow}[$ and $q\in]-k_{F\downarrow},-(k_{F\uparrow}-\tilde{k})[$ for $m\in]0,\tilde{m}]$, (29)

and

$\Delta_{s2^{\prime}}^{zz}(k)=\varepsilon_{s2}(k-\pi)$ and $k=\pi+q$
where $k\in]2k_{F\downarrow},\pi[$ and $q\in]-(k_{F\uparrow}-k_{F\downarrow}),0[$ for $m\in]0,\tilde{m}]$
and $k\in](\pi-{\tilde{k}}),\pi[$ and $q\in]-{\tilde{k}},0[$ for $m\in[\tilde{m},1[$. (30)
Figure 11: The same as in Fig. 8 for the $\bar{s}^{\prime}$ branch line of $S^{zz}(k,\omega)$. For that dynamical structure factor, this exponent is the only one that is negative; it gives rise to singularities near the corresponding small momentum intervals of the gapped lower threshold of the spin $n$-string continuum in Figs. 5 and 6. Such singularities only emerge in $S^{zz}(k,\omega)$ for spin densities $0<m<\tilde{m}$, where $\tilde{m}=0$ for $u\rightarrow 0$ and $\tilde{m}\approx 0.317$ for $u\gg 1$.
The corresponding $k$ dependent exponents of general form, Eq. (21), that appear in the expression $S^{zz}(k,\omega)=C_{zz}^{\Delta}(\omega-\Delta_{\beta}^{zz}(k))^{\zeta_{\beta}^{zz}(k)}$, Eq. (20) for $ab=zz$ and $\beta=s2,\bar{s},\bar{s}^{\prime},s2^{\prime}$, read,
$\zeta_{s2}^{zz}(k)=-1+\sum_{\iota=\pm 1}\left(-{\iota\over\xi_{s\,s}^{1}}-\xi_{s\,s}^{1}+\Phi_{s,s2}(\iota k_{F\downarrow},q)\right)^{2}$
for $q=k-k_{F\uparrow}+k_{F\downarrow}$ and $k\in]0,(k_{F\uparrow}-k_{F\downarrow})[$

$\zeta_{\bar{s}}^{zz}(k)=-1+\sum_{\iota=\pm 1}\left(-{\iota\over\xi_{s\,s}^{1}}+\iota{\xi_{s\,s2}^{0}\over 2}-{\xi_{s\,s}^{1}\over 2}-\Phi_{s,s}(\iota k_{F\downarrow},q)\right)^{2}$
for $q=k_{F\uparrow}-k$ and $k\in](k_{F\uparrow}-k_{F\downarrow}),(\pi-{\tilde{k}})[$ for $m\in]0,\tilde{m}]$

$\zeta_{\bar{s}^{\prime}}^{zz}(k)=-1+\sum_{\iota=\pm 1}\left({\iota\over 2\xi_{s\,s}^{1}}+{\xi_{s\,s}^{1}\over 2}-\Phi_{s,s}(\iota k_{F\downarrow},q)\right)^{2}$
for $q=k_{F\downarrow}-k$ and $k\in](\pi-{\tilde{k}}),2k_{F\downarrow}[$ for $m\in]0,\tilde{m}]$

$\zeta_{s2^{\prime}}^{zz}(k)=-1+\sum_{\iota=\pm 1}\left(-{\iota\over\xi_{s\,s}^{1}}+\Phi_{s,s2}(\iota k_{F\downarrow},q)\right)^{2}$
for $q=k-\pi$ and $k\in]2k_{F\downarrow},\pi[$ for $m\in]0,\tilde{m}]$ and $k\in](\pi-{\tilde{k}}),\pi[$ for $m\in[\tilde{m},1[$. (31)
Also in the present case of $S^{zz}(k,\omega)$, the three functionals $\Phi_{\iota}(q)$ in the general expression, Eq. (21), specific to the $s2,s2^{\prime}$ branch lines, $\bar{s}$ branch line, and $\bar{s}^{\prime}$ branch line are provided in Eqs. (87), (88), and (89), respectively. The corresponding values of the number and current number deviations used in those functionals are, however, different for the present branch lines; they are given in Table 2.
Branch line | $k$ in terms of $q$ | $\delta N_{s}^{F}$ | $\delta J_{s}^{F}$ | $\delta N_{s}^{NF}$ | $\delta J_{s2}$ | $\delta N_{s2}$
---|---|---|---|---|---|---
$s2$ | $k=k_{F\uparrow}-k_{F\downarrow}+q$ | $-2$ | $-1$ | $0$ | $0$ | $1$
$\bar{s}$ | $k=k_{F\uparrow}-q$ | $-2$ | $-1/2$ | $-1$ | $0$ | $1$
$\bar{s}^{\prime}$ | $k=k_{F\uparrow}-q$ | $1$ | $-1/2$ | $-1$ | $-1/2$ | $1$
$s2^{\prime}$ | $k=\pi+q$ | $-2$ | $0$ | $0$ | $0$ | $1$
Table 2: The momentum $k>0$ and the $s$ and $s2$ band number and current number deviations, defined in Appendix D, for longitudinal spin excitations populated by one $s2$ particle and thus described by both real and complex nonreal rapidities, in the case of the $s2$, $\bar{s}$, $\bar{s}^{\prime}$, and $s2^{\prime}$ branch lines that, for the momentum intervals given in the text, are part of the corresponding gapped lower threshold.
The behaviors of the spin dynamical structure factor $S^{zz}(k,\omega)$ are
actually qualitatively different from those of $S^{+-}(k,\omega)$. Except for
$\zeta_{\bar{s}^{\prime}}^{zz}(k)$, the exponents in Eq. (31) are positive for
all their $k$ intervals. That $\bar{s}^{\prime}$ branch line exponent is
plotted as a function of $k$ in Fig. 11. It is negative for its whole $k$
subinterval, which is part of the $k$ interval of the gapped lower threshold
in Fig. 5. The $\bar{s}^{\prime}$ branch line’s $m$-dependent subinterval is
either small or that line is not part of the $S^{zz}(k,\omega)$’s gapped lower
threshold at all. Its momentum width decreases upon increasing $m$ up to the
spin density $\tilde{m}$. As mentioned above, this spin density decreases upon
decreasing $u$, having the limiting values $\tilde{m}=0$ for $u\rightarrow 0$
and $\tilde{m}\approx 0.317$ for $u\gg 1$. For $\tilde{m}<m<1$, the
$\bar{s}^{\prime}$ branch line is not part of the $S^{zz}(k,\omega)$’s gapped
lower threshold spectrum. This is why for $m=0.5>\tilde{m}$ and
$m=0.8>\tilde{m}$ that line does not appear in the gapped lower threshold
plotted in Fig. 6.
Hence gapped lower threshold singularities only emerge in $S^{zz}(k,\omega)$ for spin densities $0<m<\tilde{m}$, at and just above the $\bar{s}^{\prime}$ branch line, the corresponding line shape reading $S^{zz}(k,\omega)=C_{zz}^{\Delta}(\omega-\Delta_{\bar{s}^{\prime}}^{zz}(k))^{\zeta_{\bar{s}^{\prime}}^{zz}(k)}$. The width of that branch line’s $k$ subinterval, though, decreases strongly upon increasing $m$ up to $\tilde{m}$.
These behaviors are consistent with the $S^{zz}(k,\omega)$ spectral weight stemming from spin $n$-string states decreasing upon increasing the spin density, becoming negligible for $\tilde{m}<m<1$. Consistent with the $u$ dependence of the spin density $\tilde{m}$, this spectral-weight suppression becomes stronger upon decreasing $u$. Hence increasing the spin density $m$ within the interval $m\in]0,\tilde{m}]$ and lowering the $u$ value both tend to suppress the contribution of spin $n$-string states to $S^{zz}(k,\omega)$.
### IV.2 The line shape near the lower thresholds
To provide an overall physical picture that accounts for all gapped lower threshold singularities and lower threshold singularities in the spin dynamical structure factors, here we briefly revisit their line shape behavior at and just above the lower thresholds of the lower continua in Figs. 1-6. The corresponding contributions are from excited states described by real Bethe-ansatz rapidities. Such lower continua contain most of the spectral weight of the corresponding spin dynamical structure factors.
Figure 12: The momentum dependence of the exponent that controls the
$S^{xx}(k,\omega)$’s line shape near and just above the lower threshold of the
lower continuum in Figs. 3 and 4 for spin densities $m$ (a) $0.05$, (b) $0.1$,
(c) $0.3$, (d) $0.5$, (e) $0.8$, and (f) $0.99$ and $u=0.4,1.0,15.0$. For
$k\in]0,(k_{F\uparrow}-k_{F\downarrow})[$ and
$k\in](k_{F\uparrow}-k_{F\downarrow}),\pi[$ that exponent corresponds to that
of $S^{+-}(k,\omega)$ and $S^{-+}(k,\omega)$, respectively.
In the case of the transverse dynamical structure factor, $S^{xx}(k,\omega)={1\over 4}\left(S^{+-}(k,\omega)+S^{-+}(k,\omega)\right)$, we consider the transitions to excited states that determine the line shape in the vicinity of the lower thresholds of both $S^{+-}(k,\omega)$ and $S^{-+}(k,\omega)$. The spectrum of $S^{xx}(k,\omega)$ at and just above its lower threshold refers to a superposition of the lower threshold spectra $\omega^{+-}(k)$ and $\omega^{-+}(k)$, Eqs. (58) and (59)-(60), respectively. The $(k,\omega)$-plane lower continuum that results from such a superposition of spectra is represented in Figs. 3 and 4.
Similarly to Eq. (20), for spin densities $0<m<1$, $u>0$, and $k\in]0,\pi[$
the line shape of the spin dynamical structure factors $S^{ab}(k,\omega)$
where $ab=+-,-+,xx,zz$ near and just above their lower thresholds has the
following general form,
$S^{ab}(k,\omega)=C_{ab}\left(\omega-\omega^{ab}_{lt}(k)\right)^{\zeta_{s}^{ab}(k)}$ for $(\omega-\omega^{ab}_{lt}(k))\geq 0$, where $ab=+-,-+,xx,zz$. (32)
In the case of $S^{xx}(k,\omega)$, this expression can be written as

$S^{xx}(k,\omega)=S^{+-}(k,\omega)$ for $k\in[0,(k_{F\uparrow}-k_{F\downarrow})[$ and $S^{xx}(k,\omega)=S^{-+}(k,\omega)$ for $k\in](k_{F\uparrow}-k_{F\downarrow}),\pi[$. (33)
The lower thresholds under consideration refer to a single $s$ branch line which, except for $S^{-+}(k,\omega)$, has two sections. In Eq. (32), $C_{ab}$ are constants that have a fixed value for the $k$ and $\omega$ ranges corresponding to small values of the energy deviation $(\omega-\omega^{ab}_{lt}(k))\geq 0$. The $ab=+-,-+,zz$ lower threshold spectra $\omega^{+-}(k)$, $\omega^{-+}(k)$, and $\omega^{zz}(k)$ appearing in that deviation are given in Eqs. (58), (59)-(60), and (61)-(62), respectively.
Figure 13: The momentum dependence of the exponent that controls the
$S^{zz}(k,\omega)$ line shape near and just above the lower threshold of the
lower continuum in Figs. 5 and 6 for spin densities $m$ (a) $0.05$, (b) $0.1$,
(c) $0.3$, (d) $0.5$, (e) $0.8$, and (f) $0.99$ and $u=0.4,1.0,15.0$.
The $k$ dependent exponents appearing in the spin dynamical factors’ expression, Eq. (32), are also of general form, Eq. (21). In the present case, they are given by,

$\zeta_{s}^{-+}(k)=-1+\sum_{\iota=\pm 1}\left(-{\xi_{s\,s}^{1}\over 2}-\Phi_{s,s}(\iota k_{F\downarrow},q)\right)^{2}$
for $q=k_{F\uparrow}-k$ and $k\in](k_{F\uparrow}-k_{F\downarrow}),\pi[$

$\zeta_{s}^{+-}(k)=-1+\sum_{\iota=\pm 1}\left(-{\xi_{s\,s}^{1}\over 2}+\Phi_{s,s}(\iota k_{F\downarrow},q)\right)^{2}$
for $q=k-k_{F\uparrow}$ and $k\in]0,(k_{F\uparrow}-k_{F\downarrow})[$

$\zeta_{s}^{+-}(k)=-1+\sum_{\iota=\pm 1}\left({\iota\over\xi_{s\,s}^{1}}-{\xi_{s\,s}^{1}\over 2}-\Phi_{s,s}(\iota k_{F\downarrow},q)\right)^{2}$
for $q=k_{F\uparrow}-k$ and $k\in](k_{F\uparrow}-k_{F\downarrow}),\pi[$

$\zeta_{s}^{zz}(k)=-1+\sum_{\iota=\pm 1}\left({\iota\over 2\xi_{s\,s}^{1}}+{\xi_{s\,s}^{1}\over 2}-\Phi_{s,s}(\iota k_{F\downarrow},q)\right)^{2}$
for $q=k_{F\downarrow}-k$ and $k\in]0,2k_{F\downarrow}[$

$\zeta_{s}^{zz}(k)=-1+\sum_{\iota=\pm 1}\left(-{\iota\over 2\xi_{s\,s}^{1}}+{\xi_{s\,s}^{1}\over 2}+\Phi_{s,s}(\iota k_{F\downarrow},q)\right)^{2}$
for $q=k-k_{F\downarrow}$ and $k\in]2k_{F\downarrow},\pi[$. (34)
The functional $\Phi_{\iota}(q)$ in the general exponent expression, Eq. (21),
is for the present $s$ branch lines given in Eq. (90). The suitable specific
values of the number and current number deviations used in such a functional
to obtain the exponents in Eq. (34) are provided in Table 3.
As confirmed by the form of the expressions given in Eqs. (58) and (60), one
has that $\omega^{+-}_{lt}(k)=\omega^{-+}_{lt}(k)$ for
$k\in](k_{F\uparrow}-k_{F\downarrow}),\pi[$. In that $k$ interval, the line
shape of $S^{xx}(k,\omega)={1\over
4}\left(S^{+-}(k,\omega)+S^{-+}(k,\omega)\right)$ is controlled by the
smallest of the exponents $\zeta_{s}^{-+}(k)$ and $\zeta_{s}^{+-}(k)$ in Eq.
(34), which turns out to be $\zeta_{s}^{-+}(k)$. Hence, the exponent
$\zeta_{s}^{xx}(k)$ is given by,
$\zeta_{s}^{xx}(k)=-1+\sum_{\iota=\pm 1}\left(-{\xi_{s\,s}^{1}\over 2}+\Phi_{s,s}(\iota k_{F\downarrow},q)\right)^{2}$
for $q=k-k_{F\uparrow}$ and $k\in]0,(k_{F\uparrow}-k_{F\downarrow})[$
$\zeta_{s}^{xx}(k)=-1+\sum_{\iota=\pm 1}\left(-{\xi_{s\,s}^{1}\over 2}-\Phi_{s,s}(\iota k_{F\downarrow},q)\right)^{2}$
for $q=k_{F\uparrow}-k$ and $k\in](k_{F\uparrow}-k_{F\downarrow}),\pi[$. (35)
This exponent is plotted as a function of $k$ in Fig. 12. The $s$ branch line exponent $\zeta_{s}^{zz}(k)$, whose expression is given in Eq. (34), is also plotted as a function of momentum in Fig. 13.
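The rule used above, namely that the smallest (most negative) of two exponents sharing a threshold controls the combined line shape, can be checked numerically; the exponent values in this Python sketch are hypothetical and only illustrate that the more negative power law dominates as the energy deviation vanishes.

```python
import numpy as np

zeta_mp, zeta_pm = -0.6, -0.2        # hypothetical exponent values at some k
dev = np.logspace(-1, -6, 6)         # energy deviation omega - omega_lt
# The ratio of the two power-law terms vanishes, so the zeta_mp term,
# i.e. the one with the more negative exponent, controls the line shape.
for d, r in zip(dev, dev**zeta_pm / dev**zeta_mp):
    print(f"omega - omega_lt = {d:.1e}:  term ratio = {r:.3e}")
```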
$ab$ | $k=k(q)$ intervals | $\delta N_{s}^{F}$ | $\delta J_{s}^{F}$ | $\delta N_{s}^{NF}$
---|---|---|---|---
$-+$ | $k=k_{F\uparrow}-q\in](k_{F\uparrow}-k_{F\downarrow}),\pi[$ | $0$ | $-1/2$ | $-1$
$+-$ | $k=k_{F\uparrow}+q\in[0,(k_{F\uparrow}-k_{F\downarrow})[$ | $0$ | $-1/2$ | $1$
$+-$ | $k=k_{F\uparrow}-q\in](k_{F\uparrow}-k_{F\downarrow}),\pi[$ | $2$ | $-1/2$ | $-1$
$zz$ | $k=k_{F\downarrow}-q\in]0,2k_{F\downarrow}[$ | $1$ | $1/2$ | $-1$
$zz$ | $k=k_{F\downarrow}+q\in]2k_{F\downarrow},\pi[$ | $-1$ | $1/2$ | $1$
Table 3: The momentum $k>0$ intervals and the $s$ band number and current number deviations, defined in Appendix D, for the $s$ branch lines that coincide with the lower thresholds of the $-+$, $+-$, and $zz$ dynamical structure factors’ lower continua. In the case of $S^{+-}(k,\omega)$ and $S^{zz}(k,\omega)$, such lower continua appear in Figs. 1 and 2 and in Figs. 5 and 6, respectively. The lower continua of $S^{xx}(k,\omega)$ displayed in Figs. 3 and 4 are a superposition of those of $S^{+-}(k,\omega)$ and $S^{-+}(k,\omega)$.
Both such exponents are negative in the whole momentum interval $k\in]0,\pi[$ for spin densities $0<m<1$ and $u>0$. It follows that there are singularities at and just above the corresponding lower thresholds. (Due to a sign error, the minus sign in the quantity $-\xi_{s\,s}^{1}/2$ appearing in Eq. (35) was missed in Ref. Carmelo_16, where the exponent $\zeta_{s}^{xx}(k)$ is named $\xi^{t}$. Its momentum dependence plotted in Fig. 12 corrects that plotted in Fig. 5 of Ref. Carmelo_16.)
## V Limiting behaviors of the spin dynamical structure factors
Consistent with the relation, Eq. (5), the spin dynamical structure factor $S^{-+}(k,\omega)$ at $m=0$ is that obtained in the $m\rightarrow 0$ limit from $m>0$ values, whereas $S^{+-}(k,\omega)$ at $m=0$ is that obtained in the $m\rightarrow 0$ limit from $m<0$ values. One then confirms that $S^{-+}(k,\omega)=S^{+-}(k,\omega)$ at $m=0$. However, in the $m\rightarrow 0$ limit from $m>0$ values, the $S^{+-}(k,\omega)$ gapped continuum, Eq. (6), becomes a gapless line that coincides with both its $\bar{s}$ and $\bar{s}^{\prime}$ branch lines and the lower threshold of $S^{-+}(k,\omega)=S^{+-}(k,\omega)$ at $m=0$.
In the case of the initial ground state referring to $h=0$ and thus $m=0$, one
has in addition that $S^{zz}(k,\omega)=S^{xx}(k,\omega)$. The selection rules
in Eq. (46) impose that the longitudinal dynamical structure factor is fully
controlled by transitions from the $S=S^{z}=0$ ground state to spin triplet
excited states with spin numbers $S=1$ and $S^{z}=0$. This is different from
the case when the initial ground state refers to $h\neq 0$ and $m\neq 0$. Then
according to the selection rules, Eq. (47), the longitudinal dynamical
structure factor $S^{zz}(k,\omega)\neq S^{xx}(k,\omega)$ is controlled by
transitions from the ground state with spin numbers $S^{z}=S$ or $S^{z}=-S$ to
excited states with the same spin numbers $S^{z}=S$ or $S^{z}=-S$,
respectively.
In the case of the $h=0$ and $m=0$ initial ground state, (i)
$S^{zz}(k,\omega)$ and (ii) $S^{+-}(k,\omega)$ and $S^{-+}(k,\omega)$ are
fully controlled by transitions to spin triplet $S=1$ excited states with (i)
$S^{z}=0$ and (ii) $S^{z}=\pm 1$, respectively. Their $s$ band two-hole
spectrum is obtained in the $m\rightarrow 0$ limit from that of
$S^{+-}(k,\omega)$ for $m<0$ and from that of $S^{-+}(k,\omega)$ for $m>0$ and
thus reads,
$\omega^{xx}(k)=\omega^{zz}(k)=-\varepsilon_{s}(q_{1})-\varepsilon_{s}(q_{2})$ where $k=\iota\pi-q_{1}-q_{2}$ and $\iota=\pm 1$, for $q_{1}\in[-\pi/2,\pi/2]$ and $q_{2}\in[-\pi/2,\pi/2]\,.$ (36)
Consistently, spin $SU(2)$ symmetry implies that the triplet $S=1$ and $S^{z}=0$ excited states that control $S^{zz}(k,\omega)$ have exactly the same spectrum, Eq. (36), as the triplet $S=1$ and $S^{z}=\pm 1$ excited states that control $S^{+-}(k,\omega)$ and $S^{-+}(k,\omega)$.
In spite of the abrupt change in the class of excited states that controls the longitudinal dynamical structure factor for $m=0$ and $m>0$ initial ground states, respectively, one confirms in the following that the same line shape near the spin dynamical structure factors’ lower thresholds is obtained at $m=0$ and in the $m\rightarrow 0$ limit.
### V.1 Behaviors of the spin dynamical structure factors in the
$m\rightarrow 0$ limit
In the $m\rightarrow 0$ limit from $m>0$ values, the transverse spin structure
factor $S^{-+}(k,\omega)$ lower threshold spectrum, Eq. (58), expands to the
whole $k\in[0,\pi]$ interval. The corresponding line shape near the $s$ branch
line is then valid for $k\in]0,\pi[$. Since a similar spectrum is obtained for
the lower threshold of $S^{-+}(k,\omega)$ in the $m\rightarrow 0$ limit from
$m<0$ values, one finds,
$\omega^{xx}_{lt}(k)=-\varepsilon_{s}(k_{F}-k)$ where $k={\pi\over 2}-q\in]0,\pi[$ for $q\in]-\pi/2,\pi/2[\,.$ (37)
As reported above, in the $m\rightarrow 0$ limit from $m>0$ values the
$S^{+-}(k,\omega)$’s gapped continuum associated with the spectrum, Eq. (6),
becomes a gapless line that coincides with both the spectra in Eqs. (23) and
(24) of its $\bar{s}$ and $\bar{s}^{\prime}$ branch lines, respectively, and
the lower threshold of $S^{-+}(k,\omega)=S^{+-}(k,\omega)$ at $m=0$. (In the
$m\rightarrow 0$ limit from $m<0$ values, the $\bar{s}$ and $\bar{s}^{\prime}$
branch lines rather stem from $S^{-+}(k,\omega)$.) Hence the spectra
$\Delta_{\bar{s}^{\prime}}^{+-}(k)=\Delta_{\bar{s}}^{+-}(k)$ read in that
limit,
$\Delta_{\bar{s}^{\prime}}^{+-}(k)=\Delta_{\bar{s}}^{+-}(k)=-\varepsilon_{s}(\pi/2-k)$ where $k={\pi\over 2}-q\in]0,\pi[$ for $q\in]-\pi/2,\pi/2[\,.$ (38)
It then turns out that the corresponding exponents
$\zeta_{\bar{s}^{\prime}}^{+-}(k)$ and $\zeta_{\bar{s}}^{+-}(k)$, Eq. (26),
have in the $m\rightarrow 0$ limit exactly the same value. In addition, that
value is the same as that of $\zeta_{s}^{xx}(k)$, Eq. (35), reached in that
limit. Indeed, by use of the limiting behaviors $\lim_{m\rightarrow
0}\Phi_{s,s}\left(\pm k_{F\downarrow},q\right)=\pm 1/(2\sqrt{2})$ for
$q\neq\pm k_{F\downarrow}$, $\lim_{m\rightarrow 0}\Phi_{s,s2}\left(\pm
k_{F},0\right)=\pm 1/\sqrt{2}$, and $\lim_{m\rightarrow
0}\xi_{s\,s}^{1}=1/\sqrt{2}$ reported in Eqs. (136), (137), and (142), one
finds that,
$\zeta_{s}^{xx}(k)=-1+\sum_{\iota=\pm 1}\left(-{\xi_{s\,s}^{1}\over 2}-\Phi_{s,s}(\iota\pi/2,q)\right)^{2}=-{1\over 2}$
$\zeta_{\bar{s}^{\prime}}^{+-}(k)=-1+\sum_{\iota=\pm 1}\left(-{\xi_{s\,s}^{1}\over 2}-\Phi_{s,s}(\iota\pi/2,q)\right)^{2}=-{1\over 2}$
$\zeta_{\bar{s}}^{+-}(k)=-1+\sum_{\iota=\pm 1}\left(\iota{\xi_{s\,s2}^{0}\over 2}+{\xi_{s\,s}^{1}\over 2}-\Phi_{s,s}(\iota\pi/2,q)\right)^{2}=-{1\over 2}\,.$ (39)
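As a quick arithmetic check, the following Python sketch plugs the quoted $m\rightarrow 0$ limiting values $\xi_{s\,s}^{1}=1/\sqrt{2}$ and $\Phi_{s,s}(\pm\pi/2,q)=\pm 1/(2\sqrt{2})$, Eqs. (142) and (136), into the exponent expressions; both the $xx$/$+-$ exponents of Eq. (39) and the $zz$ exponents of Eq. (42) below come out as $-1/2$.

```python
import numpy as np

xi1 = 1.0 / np.sqrt(2.0)                        # lim xi^1_{ss}, Eq. (142)
phi = lambda iota: iota / (2.0 * np.sqrt(2.0))  # lim Phi_{s,s}, Eq. (136)

# zeta_s^xx and zeta_{bar s'}^{+-}, Eq. (39):
zeta_xx = -1.0 + sum((-xi1 / 2.0 - phi(i)) ** 2 for i in (+1, -1))
# zeta_s^zz and zeta_{bar s'}^{zz}, Eq. (42):
zeta_zz = -1.0 + sum((i / (2.0 * xi1) + xi1 / 2.0 - phi(i)) ** 2 for i in (+1, -1))

print(zeta_xx, zeta_zz)  # -0.5 -0.5
```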
Spin $SU(2)$ symmetry likewise requires that at $m=0$ the results be similar for the transverse and longitudinal spin structure factors. In the $m\rightarrow 0$ limit, the longitudinal spin structure factor’s lower threshold spectrum, Eq. (61), expands to the whole $k\in]0,\pi[$ interval and is indeed similar to that in Eq. (37), as it reads,
$\omega^{zz}_{lt}(k)=\omega_{s}^{zz}(k)=-\varepsilon_{s}(\pi/2-k)$ where $k=k_{F}-q\in]0,\pi[$ for $q\in]-\pi/2,\pi/2[\,.$ (40)
In spite of such a similarity, the longitudinal dynamical structure factor at $m=0$ is fully controlled by transitions from the ground state to excited states with spin numbers $S=1$ and $S^{z}=0$. The line shape obtained from such spin triplet excited states is, though, exactly the same as that obtained in the $m\rightarrow 0$ limit from the $S^{z}=S$ or $S^{z}=-S$ and $S>0$ excited states.
However, in the $m\rightarrow 0$ limit the $S^{zz}(k,\omega)$’s gapped $\bar{s}$ and $\bar{s}^{\prime}$ branch line spectra in Eqs. (28) and (29), respectively, become gapless and coincide both with each other and with the lower threshold of the longitudinal spin structure factor, Eq. (40), for the whole $k\in]0,\pi[$ interval,

$\Delta_{\bar{s}^{\prime}}^{zz}(k)=-\varepsilon_{s}(\pi/2-k)$ and $k={\pi\over 2}-q$ where $k\in]0,\pi[$ for $q\in]-\pi/2,\pi/2[\,.$ (41)
One then finds that in such a limit $\zeta_{\bar{s}^{\prime}}^{zz}(k)<\zeta_{\bar{s}}^{zz}(k)$, where $\zeta_{\bar{s}^{\prime}}^{zz}(k)$ and $\zeta_{\bar{s}}^{zz}(k)$ are the corresponding branch line exponents given in Eq. (31). This inequality implies that the line shape is controlled by the exponents $\zeta_{\bar{s}^{\prime}}^{zz}(k)$ and $\zeta_{s}^{zz}(k)$, which become equal, $\zeta_{\bar{s}^{\prime}}^{zz}(k)=\zeta_{s}^{zz}(k)$, in the $m\rightarrow 0$ limit, as given below. Here $\zeta_{\bar{s}^{\prime}}^{zz}(k)$ is the exponent associated with the spectrum in Eq. (29).
The use of the limiting behaviors reported in Eqs. (136) and (142) confirms that the exponent $\zeta_{\bar{s}^{\prime}}^{zz}(k)$, Eq. (31), equals both the exponent $\zeta_{s}^{zz}(k)$, Eq. (34), and those given in Eq. (39). The former two exponents are found to be given by,

$\zeta_{s}^{zz}(k)=-1+\sum_{\iota=\pm 1}\left({\iota\over 2\xi_{s\,s}^{1}}+{\xi_{s\,s}^{1}\over 2}-\Phi_{s,s}(\iota\pi/2,q)\right)^{2}=-{1\over 2}$
$\zeta_{\bar{s}^{\prime}}^{zz}(k)=-1+\sum_{\iota=\pm 1}\left({\iota\over 2\xi_{s\,s}^{1}}+{\xi_{s\,s}^{1}\over 2}-\Phi_{s,s}(\iota\pi/2,q)\right)^{2}=-{1\over 2}\,.$ (42)
Again, and in spite of such similarities, the two classes of excited states, described by real and by complex nonreal rapidities, respectively, that contribute to the longitudinal dynamical structure factor at $m=0$ have spin numbers $S=1$ and $S^{z}=0$. The line shape associated with such spin triplet excited states is, though, exactly the same as that obtained in the $m\rightarrow 0$ limit from the above excited states.
One then concludes that for $u>0$ and in the $m\rightarrow 0$ limit the line
shape at and just above the lower threshold of the spin structure factor is of
the form,
$S^{aa}(k,\omega)=C\,(\omega-\omega(k))^{-1/2}$ where $\omega(k)=2t\int_{0}^{\infty}dx\,{\cos\left(x\,\Lambda_{s}\left({\pi\over 2}-k\right)\right)\over x\cosh x}\,J_{1}(x)\,,$ (43)

for $k\in]0,\pi[$ and $aa=xx,yy,zz$. Here $C$ is a constant that has a fixed value for the $k$ and $\omega$ ranges corresponding to small values of the energy deviation $(\omega-\omega(k))$, $J_{1}(x)$ is a Bessel function, and the $s$ band rapidity function $\Lambda_{s}(q)$ is defined in terms of its inverse function $q=q_{s}(\Lambda)$ in Eq. (113). The exponent $-1/2$ is indeed that known to control the line shape at and just above the lower threshold $\omega(k)$ Essler_99 .
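For a numerical evaluation of the threshold spectrum in Eq. (43), a SciPy-based sketch is given below; the rapidity function `lam_s` must be obtained by solving Eq. (113) and is not reproduced here, so the placeholder used in the example call is purely illustrative.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j1

def omega_lt(k, lam_s, t=1.0, xmax=50.0):
    """Sketch of Eq. (43): omega(k) = 2t * int_0^inf dx
    cos(x * Lambda_s(pi/2 - k)) * J_1(x) / (x * cosh(x)).
    The integrand decays like exp(-x), so a finite cutoff suffices."""
    lam = lam_s(np.pi / 2.0 - k)
    integrand = lambda x: np.cos(x * lam) * j1(x) / (x * np.cosh(x))
    val, _ = quad(integrand, 0.0, xmax)
    return 2.0 * t * val

# Illustrative call with a placeholder rapidity function (an assumption):
print(omega_lt(np.pi / 2.0, lam_s=lambda q: np.tan(q / 2.0)))
```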
### V.2 Behaviors of the spin dynamical structure factors in the
$m\rightarrow 1$ limit
The sum rules, Eq. (48), imply that $\lim_{m\rightarrow 1}S^{-+}(k,\omega)=0$
and $\lim_{m\rightarrow 1}S^{zz}(k,\omega)=0$. It follows that as
$m\rightarrow 1$ and thus $h\rightarrow h_{c}$, the spin dynamical structure
factor is dominated by $S^{xx}(k,\omega)$. Here $h_{c}$ is the critical field
associated with the spin energy scale $2\mu_{B}\,h_{c}$, Eq. (3), at which
fully polarized ferromagnetism is achieved.
At $h=h_{c}$ the power-law expressions of the present dynamical theory involving $k$ dependent exponents are not valid, being replaced by a $\delta$-function-like distribution,

$S^{xx}(k,\omega)={\pi\over 2}\delta\left(\omega-\omega^{xx}_{lt}(k)\right)$ where $\omega^{xx}_{lt}(k)=4t\left(\sqrt{1+u^{2}}-u\right)-{2t\over\pi}\int_{-\pi}^{\pi}dk^{\prime}\sin k^{\prime}\arctan\left({\sin k^{\prime}-\Lambda_{s}(\pi-k)\over u}\right)\,,$ (44)

for $k\in[0,\pi]$. Here the $s$ band rapidity function $\Lambda_{s}(q)$ is defined in terms of its inverse function $q=q_{s}(\Lambda)$ in Eq. (122).
## VI Discussion and concluding remarks
### VI.1 Discussion of the results
Our results provide important information about how in 1D Mott-Hubbard
insulators electron itinerancy associated in the present model with the
transfer integral $t$ affects the spin dynamics: The main effect of increasing
$t$ at constant $U$ and thus decreasing the ratio $u=U/4t$ is on the energy
bandwidth of the corresponding relevant spectra.
Physically, this is justified by the interplay of kinetic energy and spin
fluctuations. However, the matrix elements that control the spectral weights
and the related momentum-dependent exponents in the dynamical structure
factors’s expressions studied in this paper are little affected by decreasing
the ratio $u=U/4t$.
The internal degrees of freedom of the $s$ and $s2$ particles refer to one
unbound singlet pair of spins $1/2$ and two bound singlet pairs of such spins.
The spins $1/2$ in such pairs refer to rotated electrons that singly occupy
sites. In the $u\rightarrow 0$ limit, the corresponding $s$ and $s2$ energy
dispersion’s bandwidths reach their maximum values, $\lim_{u\rightarrow
0}W_{s}=2t\left(1+\sin\left({\pi\over 2}\,m\right)\right)$ and
$\lim_{u\rightarrow 0}W_{s2}=4t\sin\left({\pi\over 2}\,m\right)$,
respectively, whereas
$\lim_{u\rightarrow\infty}W_{s}=\lim_{u\rightarrow\infty}W_{s2}=0$, as given
in Eq. (115). Indeed, for small, intermediate, and large yet finite $u$ values
the $s$ particles for all spin densities $m$ and the $s2$ particles for $m>0$,
along with the two and four spins $1/2$ within them, respectively, contribute
to the kinetic energy associated with electron itinerancy. However, in the
$u\rightarrow\infty$ limit all spin configurations become degenerate and the
spins $1/2$ within the $s$ and $s2$ particles become localized.
Consistently, the kinetic energy, $E_{\rm
kin}=t\,\partial\langle\hat{H}\rangle/\partial t$, of all Mott-Hubbard
insulator’s states decreases from a maximum value reached in the $u\rightarrow
0$ limit to zero for $u\rightarrow\infty$. Intermediate $u$ values refer to a
crossover between these two limiting behaviors. While this applies to all spin
densities, for further information on the interplay of kinetic energy and spin
fluctuations at $m=0$, see for instance Sec. IV of Ref. Carmelo_88, for
electronic density $n=1$.
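Since $E_{\rm kin}=t\,\partial\langle\hat{H}\rangle/\partial t$ involves only the $t$ dependence of the ground-state energy, it can be evaluated by a central finite difference; the ground-state energy routine `e_ground` in this sketch is a user-supplied assumption (e.g. from a Bethe-ansatz solver), not something provided by the present theory.

```python
def kinetic_energy(e_ground, t, dt=1e-5):
    """Sketch of E_kin = t * d<H>/dt at fixed U, by central difference.
    `e_ground(t)` must return the ground-state energy <H> at transfer
    integral t; it is assumed to be supplied by the caller."""
    return t * (e_ground(t + dt) - e_ground(t - dt)) / (2.0 * dt)
```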
The dynamical theory used in the studies of this paper is a specific case of the general dynamical theory considered in Ref. Carmelo_16. The present theory refers to the Hamiltonian, Eq. (1), acting onto a subspace that includes spin $n$-string states. It has specific values for the spectral parameters that control the momentum dependent exponents in the spin dynamical structure factors’ expressions obtained in this paper for $(k,\omega)$-plane regions at and near well-defined types of spectral features.
As mentioned in Sec. I, the issue of how the branch-line cusp singularities stem from the behavior of matrix elements between the $m>0$ ground states and specific classes of excited states is briefly discussed in Appendix D. The dynamical theory refers to the thermodynamic limit, in which the matrix element squares $|\langle\nu|\hat{S}^{a}_{k}|GS\rangle|^{2}$ in Eq. (4) have the general form given in Eq. (85) of that Appendix, in terms of the relative weights $a(m_{+1},\,m_{-1})$ and lowest peak weights $A^{(0,0)}$ defined there. The theory provides in Eq. (84) the dependence of such weights on the $\iota=\pm 1$ functionals $\Phi_{\iota}^{2}$ that control the cusp singularity exponents.
Unfortunately, it does not provide the precise values of the $u$ and $m$
dependent constant $0<B_{s}\leq 1$ and $u$ dependent constants $0<f_{l}<1$
where $l=0,2,4$ in the $A^{(0,0)}$ expression under consideration. Those
contribute to the coefficients $C_{ab}^{\Delta}$ and $C_{ab}$, respectively,
in the spin dynamical structure factors’s analytical expressions, Eqs. (20)
and (32), which are determined by the lowest peaks spectral weights. In spite
of this limitation, our results provide important physical information on such
factors.
The possible alternative use of form factors of the
$\sigma=\uparrow,\downarrow$ electron creation and annihilation operators
involved in the dynamical structure factors studied in this paper remains an
unsolved problem for the present 1D Hubbard model.
When $\zeta^{ab}_{\beta}(k)=-1+\sum_{\iota=\pm 1}\Phi_{\iota}^{2}<0$, Eq. (21), there are cusp singularities at and just above the corresponding $\beta$ branch lines. The form of the matrix elements expression, Eq. (85), reveals both that the occurrence of cusp singularities is controlled by the matrix elements $\langle\nu|\hat{S}^{a}_{k}|GS\rangle$ and that $|\langle\nu|\hat{S}^{a}_{k}|GS\rangle|^{2}$ diverges for the excited states that generate such singularities. This confirms that there is a direct relation between the negativity of the exponents $\zeta^{ab}_{\beta}(k)$ and the amount of spectral weight at and just above the corresponding $\beta$ branch lines.
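Operationally, the criterion reduces to checking whether $\Phi_{+1}^{2}+\Phi_{-1}^{2}<1$; a minimal Python sketch (with hypothetical functional values) reads:

```python
def branch_line_exponent(phi_plus, phi_minus):
    """General exponent of Eq. (21): zeta = -1 + sum_{iota=+-1} Phi_iota^2.
    A cusp singularity occurs at/just above the branch line when zeta < 0,
    i.e. when Phi_{+1}^2 + Phi_{-1}^2 < 1."""
    zeta = -1.0 + phi_plus**2 + phi_minus**2
    return zeta, zeta < 0.0

print(branch_line_exponent(0.3, -0.4))  # (-0.75, True): singular line
print(branch_line_exponent(1.1, 0.5))   # (0.46, False): no singularity
```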
For simplicity, in this paper we have not provided further details of the dynamical theory that are common to those already given in Ref. Carmelo_16. The form of both the relative weights and the lowest peak weights considered in the studies of Ref. Karlo_97 for the charge degrees of freedom of the 1D Hubbard model for electronic densities $n_{e}\in[0,1]$ at spin density $m=0$ is similar to that of the present relative weights $a(m_{+1},\,m_{-1})$ and lowest peak weights $A^{(0,0)}$ for the spin degrees of freedom of the same model for spin densities $m\in[0,1]$ at electronic density $n_{e}=1$. Such studies consider the $u\rightarrow\infty$ limit, in which the values of the lowest peak weights can be calculated for the dynamical correlation function under consideration. The results of that reference confirm that the cusp singularities correspond to $(k,\omega)$-plane regions with a larger amount of spectral weight.
That the momentum-dependent exponents in Eqs. (20) and (32), and thus the corresponding matrix elements that control the spectral weights, Eq. (85), are little affected by decreasing the ratio $u=U/4t$ reveals that, for the spin dynamical structure factors of the 1D Hubbard model’s Mott-Hubbard insulating phase, the relative spectral-weight contributions of different types of excited energy eigenstates are little $u$ dependent. Concerning that issue, this means that results for the best known limit of small yet finite $t^{2}/U$ and thus large $u$, in which the present quantum problem is equivalent to the spin-$1/2$ $XXX$ chain Kohno_09 ; Muller , also apply to small and intermediate $u$ values. This applies to the analysis presented in Sec. III concerning the spectral weight in the gap regions being negligible in the present thermodynamic limit.
Our results have focused on the contribution from spin $n$-string states. This refers to the line shape at and just above the $(k,\omega)$-plane gapped lower threshold spectra $\Delta_{\beta}^{ab}(k)$, where $ab=+-,xx,zz$ and $\beta$ refers to different branch lines. In well-defined $m$-dependent $k$ subintervals, Eqs. (22)-(25) and (27)-(30), such branch lines coincide with the gapped lower thresholds under consideration. In these physically important $(k,\omega)$-plane regions, the spin dynamical structure factors $S^{ab}(k,\omega)$ have the general analytical expression provided in Eq. (20). In the case of $S^{+-}(k,\omega)$ and $S^{xx}(k,\omega)$, such gapped lower thresholds refer to the $n$-string states’ upper continua shown in the $(k,\omega)$ plane in Figs. 1-2 and 3-4, respectively.
That, as justified in Sec. III, the spectral weight in the gap regions is negligible in the present thermodynamic limit is consistent with the amount of weight existing just below the $(k,\omega)$-plane gapped lower thresholds of the $n$-string states’ spectra shown in Figs. 1-6 being vanishingly small. This is actually behind the validity, at finite magnetic fields $0<h<h_{c}$ and in the thermodynamic limit, of the analytical expressions of the spin dynamical structure factors of general form, Eq. (20), obtained in this paper.
The momentum dependent exponents that control the spin dynamical structure factors’ line shape in such expressions are given in Eq. (26) for $S^{+-}(k,\omega)$ and $S^{xx}(k,\omega)$ and in Eq. (31) for $S^{zz}(k,\omega)$. In the former case, the exponents associated with the $(k,\omega)$-plane vicinity of the $s2$, $\bar{s}^{\prime}$, $\bar{s}$, and $s2^{\prime}$ branch lines are plotted in Figs. 7-10. Such lines refer to different $k$ intervals of the gapped lower threshold of the $n$-string states’ spectra of $S^{+-}(k,\omega)$ and $S^{xx}(k,\omega)$. The solid lines in Figs. 1-2 and 3-4 that belong to that gapped lower threshold correspond to $k$ intervals for which the exponents are negative. In those intervals, singularities occur in the spin dynamical structure factors’ expression, Eq. (20), at and above the gapped lower thresholds.
In the case of $S^{xx}(k,\omega)$, the expression given in that equation does not apply for small spin densities in the ranges and corresponding $k$ intervals given in Eqs. (18) and (19). For these spin-density ranges and momentum intervals, there is overlap between the lower continuum and the upper $n$-string states’ continuum, as shown in Figs. 3 (a)-(c).
However, consistently with the perturbative arguments provided in Appendix D
in terms of the number of elementary processes associated with annihilation of
one $s$ particle, the contribution to $S^{zz}(k,\omega)$ from excited states
populated by $n$-strings is much weaker than for $S^{+-}(k,\omega)$ and
$S^{xx}(k,\omega)$ and is negligible in the case of $S^{-+}(k,\omega)$. In the
case of $S^{zz}(k,\omega)$ it does not lead to a $(k,\omega)$-plane continuum.
The gapped lower threshold of such states is shown in Figs. 5 and 6. There the
$k$ subinterval associated with the $\beta=\bar{s}^{\prime}$ branch line is
the only one at and above which there are singularities. We have found that
out of the four branch-line’s exponents whose expressions are provided in Eq.
(31), only that of the $\beta=\bar{s}^{\prime}$ branch line is indeed
negative. That line is represented in the gapped lower threshold of
$S^{zz}(k,\omega)$ shown in Figs. 5 (a) - 5 (c) by a solid (green) line. The
corresponding exponent is plotted in Fig. 11.
That line’s $k$ subinterval is, though, small. Its momentum width decreases upon decreasing $u$ and/or increasing the spin density within the range $0<m\leq\tilde{m}$. Here $\tilde{m}$ increases from $\tilde{m}=0$ for $u\rightarrow 0$ to $\tilde{m}\approx 0.317$ for large $u$. For spin densities $\tilde{m}\leq m<1$, that line is not part of the gapped lower threshold, so that the contribution to $S^{zz}(k,\omega)$ from excited states populated by $n$-strings becomes negligible. Consistently, that line is absent in Figs. 5 (d)-(f) and 6.
To provide an overall physical picture that includes the relative
$(k,\omega)$-plane location of all spectra with a significant amount of
spectral weight, we also accounted for the contributions from all types of
excited energy eigenstates that lead to gapped and gapless lower threshold
singularities in the spin dynamical structure factors. This includes excited
energy eigenstates described only by real Bethe-ansatz rapidities and thus
without $n$-strings, which are known to lead to most spectral weight of the
sum rules, Eq. (48). Their contribution to $S^{+-}(k,\omega)$,
$S^{xx}(k,\omega)$, and $S^{zz}(k,\omega)$ leads to the $(k,\omega)$-plane
lower continua shown in Figs. 1 and 2, 3 and 4, and 5 and 6, respectively.
### VI.2 Concluding remarks
Spin $n$-strings have been identified in experimental studies of CuCl$_2\cdot$2N(C$_5$D$_5$) and Cu(C$_4$H$_4$N$_2$)(NO$_3$)$_2$ Kohno_09 ; Stone_03 ; Heilmann_78 . In this paper the contribution of spin $n$-strings to the spin dynamical structure factors of the 1D fermionic Hubbard model with one electron per site in a magnetic field has been studied. That model describes a 1D Mott-Hubbard insulator.
1D Mott-Hubbard insulators are a paradigm for the importance of strong
correlations and are known to exhibit a wide variety of unusual physical
phenomena. For instance, while in the 1D Hubbard metallic phase increasing the
onsite repulsion $U$ reduces the lattice distortion, in its Mott-Hubbard
insulating phase Coulomb correlations enhance the lattice dimerization
Baeriswyl . 1D Mott-Hubbard insulators can be studied within condensed matter
by inelastic neutron scattering in spin chains such as for instance chain
cuprates, as well as a number of quasi-1D organic compounds Kohno_09 ;
Stone_03 ; Pollet .
The theoretical description of the spin degrees of freedom of some of such
condensed-matter systems is commonly modeled by the spin-$1/2$ $XXX$
antiferromagnet Kohno_09 ; Stone_03 . As justified in the following, our study
indicates that the 1D Hubbard model with one electron per site can
alternatively be used to describe the spin dynamical properties of such
systems.
Analysis of the spin dynamical structure factor spectra plotted in the $(k,\omega)$ plane in Figs. 1-6 reveals that the only effect of decreasing the ratio $u=U/4t$ is to increase those spectra’s energy bandwidths. (Within the isotropic spin-$1/2$ $XXX$ chain, this can be achieved by increasing the exchange integral $J$.)
It is somewhat surprising that the 1D Hubbard model with one electron per site for $u=15$, which is equivalent to an isotropic spin-$1/2$ $XXX$ chain with $J=4t^{2}/U$, and the same model for $u=0.4$ and $u=1.0$ lead to spin dynamical structure factor spectra that, except for their energy bandwidth, have basically the same form.
However, the momentum dependence of the exponents plotted in Figs. 7-13, which control the $(k,\omega)$-plane line shape of the spin dynamical structure factors in the vicinity of the singularities located at the gapped lower thresholds of the spin $n$-string states’ spectra and at the lower thresholds of the lower continua represented in Figs. 1-6, is not affected by decreasing $u$.
That, as found in this paper, the main effect of increasing $t$ at constant $U$, and thus decreasing the ratio $u=U/4t$, is on the energy bandwidth of the corresponding relevant spectra is an important piece of information about how, in 1D Mott-Hubbard insulators, the electron itinerancy associated in the present model with the transfer integral $t$ affects the spin dynamics.
This seems to confirm that, concerning the spin dynamical properties of spin chain compounds in a magnetic field, both the 1D Hubbard model with one electron per site and the spin-$1/2$ $XXX$ antiferromagnet are suitable model candidates. Consistently, for general Mott-Hubbard insulating materials there is no reason for the on-site repulsion to be much stronger than the electron hopping amplitude $t$. This situation is realized in the Bechgaard salts Pollet .
Since the dynamical theory used in our study for the whole $u>0$ range and the thermodynamic limit only provides the line shape at and near the cusp singularities located at the gapped lower thresholds and lower thresholds, it cannot access other possible peaks, such as those due to the Brillouin-zone folding effect. However, and as discussed in Sec. VI.1, results for the best known limit of small yet finite $t^{2}/U$ and thus large $u$, in which the present quantum problem is equivalent to the spin-$1/2$ $XXX$ chain Kohno_09 , also apply to small and intermediate $u$ values provided that the spectral features’ energy bandwidths are suitably rescaled. Hence one can at least confirm that the cusp singularities located at the gapped lower thresholds and lower thresholds predicted by the half-filled 1D Hubbard model are observable in neutron scattering experiments.
In such experiments, the quantity that is observed is proportional to

$S^{av}(k,\omega)={1\over 6}\left(S^{-+}(k,\omega)+S^{+-}(k,\omega)+4S^{zz}(k,\omega)\right)\,.$ (45)
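Given numerical arrays for the three factors on a common $(k,\omega)$ grid, the observed combination is a simple element-wise average; a minimal sketch follows (the array names are assumptions).

```python
def s_average(s_mp, s_pm, s_zz):
    """Neutron-scattering combination of Eq. (45):
    S^{av} = (1/6) * (S^{-+} + S^{+-} + 4 S^{zz}),
    applied elementwise to arrays sampled over a (k, omega) grid."""
    return (s_mp + s_pm + 4.0 * s_zz) / 6.0
```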
Upon superposing the spectra of the spin dynamical structure factors on the right-hand side of this equation, we have checked that all cusp singularities at and near both the gapped lower thresholds and the lower thresholds found in this paper for the 1D Hubbard model, at any of the $u$ values $u=0.4$, $u=1.0$, and $u=15.0$, correspond to peaks shown in Fig. 4 of Ref. Kohno_09 for CuCl$_2\cdot$2N(C$_5$D$_5$) and in Fig. 5 of that reference for Cu(C$_4$H$_4$N$_2$)(NO$_3$)$_2$, at the finite values of the magnetic field considered in those figures and for suitable transfer integral $t$ values chosen to reach agreement with the corresponding energy bandwidths. This obviously applies to $u=15.0$, at which large $u$ value the spin degrees of freedom of the present model are described by the spin-$1/2$ $XXX$ chain (with exchange integral $J=4t^{2}/U=t/u$) used in the studies of Ref. Kohno_09 to theoretically access the cusp singularities under consideration.
That such a correspondence also applies to $u=0.4$ and $u=1.0$ is justified by the results of this paper, according to which: (i) the dependence on $u$ of the momentum dependence of the negative exponents that control the spin dynamical structure factors’ line shape is rather weak; (ii) the main effect of decreasing $u$ on such factors’ spectra is merely to increase their energy bandwidth.
The dynamical theory used in our study provides analytical expressions of the
spin dynamical structure factors at and just above the $(k,\omega)$-plane
gapped lower thresholds and lower thresholds of their spectra with more
spectral weight. The use of other methods such as the time-dependent density
matrix renormalization group White ; Schollwock ; Moreno to obtain the line
shape of such dynamical functions over other $(k,\omega)$-plane regions would
provide valuable complementary information.
In the case of 1D Mott-Hubbard insulators, the apparent independence of the spin dynamics on the $u$ values found in this paper suggests that the suitable values of the interaction for such systems are rather settled by agreement with experimental results on the charge dynamics and on the one-particle spectral function at energy scales above the Mott-Hubbard gap.
###### Acknowledgements.
J. M. P. C. thanks the Boston University’s Condensed Matter Theory Visitors
Program for support and Boston University for hospitality during the initial
period of this research. He acknowledges the support from FCT through the
Grants No. PTDC/FIS-MAC/29291/2017 and No. SFRH/BSAB/142925/2018. J. M. P. C.
and T. Č. thank Pedro D. Sacramento for illuminating discussions and they
acknowledge the support from FCT through the Grant No. UID/FIS/04650/2013. T.
Č. gratefully acknowledges the support by the Institute for Basic Science in
Korea (Project No. IBS-R024-D1). J. M. P. C. and T. Č. contributed equally to
this work.
## Appendix A Useful selection rules and sum rules
Let $|S,\alpha\rangle$, $|S^{z},\beta\rangle$, and $|S,S^{z},\gamma\rangle$ denote energy eigenstates, where $S\in[0,N/2]$ is their spin, $S^{z}$ their spin projection, and $\alpha$, $\beta$, and $\gamma$ represent all other quantum numbers needed to uniquely specify these states. The selection rules given in the following are derived from the properties of the operators $\hat{S}^{z}_{k}$ and $\hat{S}^{\pm}_{k}$ by straightforward manipulations involving their operator algebra Muller .
At vanishing magnetic field, $h=0$, the following selection rules hold in the
thermodynamic limit,
$\langle S,\alpha|\hat{S}^{a}_{k}|S^{\prime},\alpha^{\prime}\rangle=0$ for $S=S^{\prime}=0$ and $a=z,\pm$
$\langle S,\alpha|\hat{S}^{a}_{k}|S^{\prime},\alpha^{\prime}\rangle=0$ for $|S-S^{\prime}|\neq 0,1$ and $a=z,\pm$
$\langle S^{z},\beta|\hat{S}^{\pm}_{k}|S^{z^{\prime}},\beta^{\prime}\rangle=0$ for $S^{z^{\prime}}\neq S^{z}\pm 1$
$\langle S^{z},\beta|\hat{S}^{z}_{k}|S^{z^{\prime}},\beta^{\prime}\rangle=0$ for $S^{z^{\prime}}\neq S^{z}\,.$ (46)
However, for finite magnetic fields $0<h<h_{c}$ the following selection rules
are valid in that limit,
$\langle S,S,\gamma|\hat{S}^{\pm}_{k}|S^{\prime},S^{z^{\prime}},\gamma^{\prime}\rangle=0$ for $S^{\prime}\neq S\pm 1$ and $S^{z^{\prime}}\neq S\pm 1$
$\langle S,S,\gamma|\hat{S}^{z}_{k}|S^{\prime},S^{z^{\prime}},\gamma^{\prime}\rangle=0$ for $S^{\prime}\neq S$ and $S^{z^{\prime}}\neq S\,.$ (47)
Finally, the dynamical structure factors satisfy the following sum rules,
${1\over 2\pi^{2}}\int_{-\pi}^{\pi}dk\int_{0}^{\infty}d\omega\,S^{+-}(k,\omega)=(1+m)$
${1\over 2\pi^{2}}\int_{-\pi}^{\pi}dk\int_{0}^{\infty}d\omega\,S^{-+}(k,\omega)=(1-m)$
${1\over 2\pi^{2}}\int_{-\pi}^{\pi}dk\int_{0}^{\infty}d\omega\,S^{zz}(k,\omega)={1\over 2}(1-m^{2})\,.$ (48)
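When the factors are available numerically on a uniform $(k,\omega)$ grid, these sum rules provide a convenient consistency check; the 2D arrays and the solver that produced them in this Python sketch are assumptions, not part of the Appendix.

```python
import numpy as np

def check_sum_rules(dk, dw, s_pm, s_mp, s_zz, m):
    """Sketch of a numerical check of the sum rules, Eq. (48), for factors
    sampled on a uniform (k, omega) grid with spacings dk and dw. Each
    integral (1/2pi^2) int dk int domega S^{ab} is compared with its
    closed-form value."""
    norm = dk * dw / (2.0 * np.pi**2)
    computed = {"+-": norm * s_pm.sum(),
                "-+": norm * s_mp.sum(),
                "zz": norm * s_zz.sum()}
    exact = {"+-": 1.0 + m, "-+": 1.0 - m, "zz": 0.5 * (1.0 - m**2)}
    return {ab: (computed[ab], exact[ab]) for ab in exact}
```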
## Appendix B Gapless transverse and longitudinal continuum spectra
Within a $k$ extended zone scheme, the $S^{-+}(k,\omega)$’s spectrum
$\omega^{-+}(k)$ and the $S^{+-}(k,\omega)$’s spectrum $\omega^{+-}(k)$
associated with the lower continuum in Figs. 1 and 2 read,
$\omega^{-+}(k)=-\varepsilon_{s}(q_{1})-\varepsilon_{s}(q_{2})$ where $k=\iota\pi-q_{1}-q_{2}$ and $\iota=\pm 1$, for $q_{1}\in[-k_{F\downarrow},k_{F\downarrow}]$ and $q_{2}\in[-k_{F\downarrow},k_{F\downarrow}]\,,$ (49)

and

$\omega^{+-}(k)=\varepsilon_{s}(q_{1})-\varepsilon_{s}(q_{2})$ where $k=\iota\pi+q_{1}-q_{2}$ and $\iota=\pm 1$, for $|q_{1}|\in[k_{F\downarrow},k_{F\uparrow}]$ and $q_{2}\in[-k_{F\downarrow},k_{F\downarrow}]\,,$ (50)
respectively. Here $\varepsilon_{s}(q)$ is the $s$ band energy dispersion
given in Eq. (98).
The spectrum $\omega^{xx}(k)$ of the transverse dynamical structure factor
$S^{xx}(k,\omega)$ associated with the lower continuum in Figs. 3 and 4
results from combination of the two spectra $\omega^{-+}(k)$ and
$\omega^{+-}(k)$ in Eqs. (49) and (50), respectively.
However, the spectrum $\omega^{zz}(k)$ associated with the lower continuum in
Figs. 5 and 6 is given by,
$\omega^{zz}(k)=\varepsilon_{s}(q_{1})-\varepsilon_{s}(q_{2})$ where $k=q_{1}-q_{2}$ for $|q_{1}|\in[k_{F\downarrow},k_{F\uparrow}]$ and $q_{2}\in[-k_{F\downarrow},k_{F\downarrow}]\,.$ (51)
The upper thresholds of the two-parametric spectra, Eqs. (49) and (50), have
the following one-parametric spectra for spin densities $m\in]0,1[$,
$\omega^{+-}_{ut}(k)=2\mu_{B}\,h-\varepsilon_{s}(k_{F\downarrow}-k)$ where $k=k_{F\downarrow}-q$ for $k\in[0,k_{F\downarrow}]$ and $q\in[0,k_{F\downarrow}]$
$\omega^{+-}_{ut}(k)=\varepsilon_{s}(q_{1})-\varepsilon_{s}(q_{2})$ where $k=\pi+q_{1}-q_{2}$ for $k\in[k_{F\downarrow},\pi]$ and $v_{s}(q_{1})=v_{s}(q_{2})$ with $q_{1}\in[-k_{F\uparrow},-k_{F\downarrow}]$ and $q_{2}\in[-k_{F\downarrow},0]\,,$ (52)

and

$\omega^{-+}_{ut}(k)=-2\varepsilon_{s}\left({\pi-k\over 2}\right)$ where $k=\pi-2q$ for $k\in[(k_{F\uparrow}-k_{F\downarrow}),\pi]$ and $q\in[-k_{F\downarrow},0]\,,$ (53)
respectively.
The upper threshold spectrum $\omega^{xx}_{ut}(k)$ of the combined spectra, Eqs. (49) and (50), is given by,

$\omega^{xx}_{ut}(k)=\omega^{+-}_{ut}(k)$ for $k\in[0,k^{xx}_{ut}]$ and $\omega^{xx}_{ut}(k)=\omega^{-+}_{ut}(k)$ for $k\in[k^{xx}_{ut},\pi]\,,$ (54)

where the momentum $k^{xx}_{ut}$ is such that $\omega^{+-}_{ut}(k^{xx}_{ut})=\omega^{-+}_{ut}(k^{xx}_{ut})$.
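Numerically, $k^{xx}_{ut}$ is the root of the difference of the two upper-threshold functions; a minimal sketch using SciPy's bracketing root finder follows, with the two callables assumed to be built from Eqs. (52)-(53).

```python
from scipy.optimize import brentq

def crossing_momentum(omega_pm_ut, omega_mp_ut, k_lo, k_hi):
    """Sketch: solve omega^{+-}_{ut}(k) = omega^{-+}_{ut}(k), Eq. (54),
    for k^{xx}_{ut}. The callables (assumed built from the s band
    dispersion via Eqs. (52)-(53)) must bracket a sign change of their
    difference on [k_lo, k_hi]."""
    return brentq(lambda k: omega_pm_ut(k) - omega_mp_ut(k), k_lo, k_hi)
```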
However, the one-parametric upper threshold spectrum associated with the two-parametric longitudinal spectrum, Eq. (51), reads for $m\in]0,1[$,

$\omega^{zz}_{ut}(k)=\varepsilon_{s}(q_{1})-\varepsilon_{s}(q_{2})$ where $k=q_{1}-q_{2}$ for $v_{s}(q_{1})=v_{s}(q_{2})$ and $k\in[0,k_{F\uparrow}]$ with $q_{1}\in[k_{F\downarrow},k_{F\uparrow}]$ and $q_{2}\in[0,k_{F\downarrow}]$
$\omega^{zz}_{ut}(k)=2\mu_{B}\,h-\varepsilon_{s}(k_{F\uparrow}-k)$ where $k=k_{F\uparrow}-q$ for $k\in[k_{F\uparrow},\pi]$ and $q\in[-k_{F\downarrow},0]\,.$ (55)
At $k=0,k_{F\downarrow},\pi$ and $k=0,(k_{F\uparrow}-k_{F\downarrow}),\pi$, the upper threshold spectra, Eqs. (52) and (53), respectively, are given by,

$\omega^{+-}_{ut}(0)=W_{s}^{h}=2\mu_{B}\,h$, $\omega^{+-}_{ut}(k_{F\downarrow})=W_{s}=2\mu_{B}\,h+W_{s}^{p}$, $\omega^{+-}_{ut}(\pi)=0$,
$\omega^{-+}_{ut}(k_{F\uparrow}-k_{F\downarrow})=0$, $\omega^{-+}_{ut}(\pi)=2W_{s}^{p}\,.$ (56)
At $k=0,k_{F\uparrow},\pi$ the upper threshold spectrum $\omega^{zz}_{ut}(k)$ reads,

$\omega^{zz}_{ut}(0)=0$, $\omega^{zz}_{ut}(k_{F\uparrow})=W_{s}=2\mu_{B}\,h+W_{s}^{p}$, $\omega^{zz}_{ut}(\pi)=W_{s}^{h}=2\mu_{B}\,h\,.$ (57)
The energy scales $W_{s}=W_{s}^{p}+W_{s}^{h}$, $W_{s}^{p}$, and $W_{s}^{h}$
are in the above equations the $s$ band energy width, the $s$ particle energy
bandwidth, and the $s$ hole energy bandwidth defined by Eqs. (114)-(116).
The dynamical theory used in our study provides the spin dynamical structure factors’ line shape near the lower thresholds of the spectra, Eqs. (49), (50), and (51). In the case of (i) $S^{-+}(k,\omega)$ and (ii) $S^{+-}(k,\omega)$ and $S^{zz}(k,\omega)$, such lower thresholds refer to (i) a single $s$ branch line and (ii) two sections of an $s$ branch line, respectively.
These lower thresholds spectra can be expressed in terms of the excitation
momentum $k$ or of the $s$ band momentum $q$ and are given by,
$\displaystyle\omega^{-+}_{lt}(k)$ $\displaystyle=$
$\displaystyle-\varepsilon_{s}(k_{F\uparrow}-k)\hskip 5.69046pt{\rm and}$
$\displaystyle k$ $\displaystyle=$ $\displaystyle k_{F\uparrow}-q\hskip
5.69046pt{\rm where}$ $\displaystyle k$ $\displaystyle\in$
$\displaystyle](k_{F\uparrow}-k_{F\downarrow}),\pi[\hskip 5.69046pt{\rm and}$
$\displaystyle q$ $\displaystyle\in$
$\displaystyle]-k_{F\downarrow},k_{F\downarrow}[\,,$ (58)
$\displaystyle\omega^{+-}_{lt}(k)$ $\displaystyle=$
$\displaystyle\varepsilon_{s}(k-k_{F\uparrow})\hskip 5.69046pt{\rm and}$
$\displaystyle k$ $\displaystyle=$ $\displaystyle k_{F\uparrow}+q\hskip
5.69046pt{\rm where}$ $\displaystyle k$ $\displaystyle\in$
$\displaystyle]0,(k_{F\uparrow}-k_{F\downarrow})[\hskip 5.69046pt{\rm and}$
$\displaystyle q$ $\displaystyle\in$
$\displaystyle]-k_{F\uparrow},-k_{F\downarrow}[\,,$ (59)
$\displaystyle\omega^{+-}_{lt}(k)$ $\displaystyle=$
$\displaystyle-\varepsilon_{s}(k_{F\uparrow}-k)\hskip 5.69046pt{\rm and}$
$\displaystyle k$ $\displaystyle=$ $\displaystyle k_{F\uparrow}-q\hskip
5.69046pt{\rm where}$ $\displaystyle k$ $\displaystyle\in$
$\displaystyle](k_{F\uparrow}-k_{F\downarrow}),\pi[\hskip 5.69046pt{\rm and}$
$\displaystyle q$ $\displaystyle\in$
$\displaystyle]-k_{F\downarrow},k_{F\downarrow}[\,,$ (60)
$\displaystyle\omega^{zz}_{lt}(k)$ $\displaystyle=$
$\displaystyle-\varepsilon_{s}(k_{F\downarrow}-k)\hskip
5.69046pt{\rm and}$ $\displaystyle k$ $\displaystyle=$ $\displaystyle
k_{F\downarrow}-q\hskip 5.69046pt{\rm where}$ $\displaystyle k$
$\displaystyle\in$ $\displaystyle]0,2k_{F\downarrow}[\hskip 5.69046pt{\rm
and}$ $\displaystyle q$ $\displaystyle\in$
$\displaystyle]-k_{F\downarrow},k_{F\downarrow}[\,,$ (61)
$\displaystyle\omega^{zz}_{lt}(k)$ $\displaystyle=$
$\displaystyle\varepsilon_{s}(k-k_{F\downarrow})\hskip 5.69046pt{\rm and}$
$\displaystyle k$ $\displaystyle=$ $\displaystyle k_{F\downarrow}+q\hskip
5.69046pt{\rm where}$ $\displaystyle k$ $\displaystyle\in$
$\displaystyle]2k_{F\downarrow}),\pi[\hskip 5.69046pt{\rm and}$ $\displaystyle
q$ $\displaystyle\in$ $\displaystyle]k_{F\downarrow},k_{F\uparrow}[\,.$ (62)
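These threshold expressions become fully explicit in the $u\rightarrow 0$ limit, where $\varepsilon_{s}(q)$ has the closed form given in Eq. (117) of Appendix E. The following minimal sketch (the values $t=1$ and $m=0.4$ are hypothetical choices) evaluates $\omega^{-+}_{lt}(k)$ of Eq. (58) on its momentum interval and confirms that it vanishes at both interval ends, where the argument of $\varepsilon_{s}$ reaches $\pm k_{F\downarrow}$ and $\varepsilon_{s}(\pm k_{F\downarrow})=0$.

```python
# u -> 0 evaluation of the lower threshold omega^{-+}_lt(k), Eq. (58),
# using the closed-form dispersion eps_s(q) = 2t sin(pi m/2) - 2t cos q
# of Eq. (117).  Assumptions: t = 1 and m = 0.4 are hypothetical.
import numpy as np

t, m = 1.0, 0.4
kF_dn, kF_up = np.pi / 2 * (1 - m), np.pi / 2 * (1 + m)
eps_s = lambda q: 2 * t * np.sin(np.pi * m / 2) - 2 * t * np.cos(q)

for k in np.linspace(kF_up - kF_dn, np.pi, 5):
    print(f"k/pi = {k/np.pi:.3f}: omega_lt^-+(k) = {-eps_s(kF_up - k):+.5f}")
```

The printed threshold is zero at $k=k_{F\uparrow}-k_{F\downarrow}$ and $k=\pi$ and positive in between, since $\varepsilon_{s}(q)<0$ inside the $s$ band Fermi sea.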
## Appendix C Energy gaps’ expressions and limiting values
In this Appendix, the expressions in terms of the $s$ and $s2$ bands energy
dispersions and limiting values of the energy gaps $\Delta_{\rm gap}^{+-}(k)$,
Eq. (15), $\Delta_{\rm gap}^{xx}(k)$, Eq. (16), and $\Delta_{\rm
gap}^{zz}(k)$, Eq. (17), and their values at some specific momenta are
provided.
For $m\in[0,\tilde{m}]$ the energy gap $\Delta_{\rm gap}^{+-}(k)$ reads,
$\displaystyle\Delta_{\rm gap}^{+-}(k)$ $\displaystyle=$
$\displaystyle-2\mu_{B}\,h+\varepsilon_{s2}(k)+\varepsilon_{s}(k_{F\downarrow}-k)$
$\displaystyle{\rm for}\hskip
5.69046ptk\in]0,(k_{F\uparrow}-k_{F\downarrow})[$ $\displaystyle\Delta_{\rm
gap}^{+-}(k)$ $\displaystyle=$ $\displaystyle
2\mu_{B}\,h-\varepsilon_{s}(k_{F\uparrow}-k)+\varepsilon_{s}(k_{F\downarrow}-k)$
$\displaystyle{\rm for}\hskip
5.69046ptk\in](k_{F\uparrow}-k_{F\downarrow}),{\tilde{k}}[$
$\displaystyle\Delta_{\rm gap}^{+-}(k)$ $\displaystyle=$ $\displaystyle
2\mu_{B}\,h-W_{s2}\hskip 5.69046pt{\rm for}\hskip
5.69046ptk\in]{\tilde{k}},k_{F\downarrow}[$ $\displaystyle\Delta_{\rm
gap}^{+-}(k)$ $\displaystyle=$ $\displaystyle
4\mu_{B}\,h-W_{s2}-\varepsilon_{s}(k_{F\downarrow}-k)$
$\displaystyle+\varepsilon_{s}(q)-\varepsilon_{s}(k+q-\pi)$ $\displaystyle{\rm
for}\hskip 5.69046ptk\in]k_{F\downarrow},2k_{F\downarrow}[\hskip 5.69046pt{\rm
and}$ $\displaystyle q\in]-(k_{\bullet}-k_{F\uparrow}+k_{F\downarrow}),0[$
$\displaystyle q_{1}=k+q-\pi\in]-k_{F\uparrow},-k_{\bullet}[$
$\displaystyle\Delta_{\rm gap}^{+-}(k)$ $\displaystyle=$
$\displaystyle\varepsilon_{s2}(k-2k_{F\downarrow})+\varepsilon_{s}(q)-\varepsilon_{s}(k+q-\pi)$
(63) $\displaystyle{\rm for}\hskip 5.69046ptk\in]2k_{F\downarrow},\pi[\hskip
5.69046pt{\rm and}$ $\displaystyle
q\in]-k_{F\downarrow},-(k_{\bullet}-k_{F\uparrow}+k_{F\downarrow})[$
$\displaystyle q_{1}=k+q-\pi\in]-k_{\bullet},-k_{F\downarrow}[$
$\displaystyle{\rm for}\hskip 5.69046pt{\rm spin}\hskip 5.69046pt{\rm
densities}\hskip 5.69046ptm\in[0,\tilde{m}]\,.$
For spin densities $m\in[\tilde{m},1[$ its expression is,
$\displaystyle\Delta_{\rm gap}^{+-}(k)$ $\displaystyle=$
$\displaystyle-2\mu_{B}\,h+\varepsilon_{s2}(k)+\varepsilon_{s}(k_{F\downarrow}-k)$
$\displaystyle{\rm for}\hskip 5.69046ptk\in]0,{\tilde{k}}[$
$\displaystyle\Delta_{\rm gap}^{+-}(k)$ $\displaystyle=$ $\displaystyle
2\mu_{B}\,h-W_{s2}\hskip 5.69046pt{\rm for}\hskip
5.69046ptk\in]{\tilde{k}},k_{F\downarrow}[$ $\displaystyle\Delta_{\rm
gap}^{+-}(k)$ $\displaystyle=$ $\displaystyle
4\mu_{B}\,h-W_{s2}-\varepsilon_{s}(k_{F\downarrow}-k)$
$\displaystyle+\varepsilon_{s}(q)-\varepsilon_{s}(k+q-\pi)$ $\displaystyle{\rm
for}\hskip 5.69046ptk\in]k_{F\downarrow},2k_{F\downarrow}[\hskip 5.69046pt{\rm
and}$ $\displaystyle q\in]-(k_{\bullet}-k_{F\uparrow}+k_{F\downarrow}),0[$
$\displaystyle q_{1}=k+q-\pi\in]-k_{F\uparrow},-k_{\bullet}[$
$\displaystyle\Delta_{\rm gap}^{+-}(k)$ $\displaystyle=$
$\displaystyle\varepsilon_{s2}(k-2k_{F\downarrow})+\varepsilon_{s}(q)-\varepsilon_{s}(k+q-\pi)$
(64) $\displaystyle{\rm for}\hskip 5.69046ptk\in]2k_{F\downarrow},\pi[\hskip
5.69046pt{\rm and}$ $\displaystyle
q\in]-k_{F\downarrow},-(k_{\bullet}-k_{F\uparrow}+k_{F\downarrow})[$
$\displaystyle q_{1}=k+q-\pi\in]-k_{\bullet},-k_{F\downarrow}[$
$\displaystyle{\rm for}\hskip 5.69046pt{\rm spin}\hskip 5.69046pt{\rm
densities}\hskip 5.69046ptm\in[\tilde{m},1[\,.$
The momentum $k_{\bullet}$ appearing in the above equations satisfies the
following equation, expressed in terms of the $s$ band group velocity defined
in Eq. (100),
$v_{s}(k_{\bullet})=v_{s}(k_{\bullet}-k_{F\uparrow}+k_{F\downarrow})\hskip
5.69046pt{\rm where}\hskip 5.69046ptk_{\bullet}>k_{F\downarrow}\,.$ (65)
(The limiting behaviors of the $s$ band group velocity are given in Eqs.
(119), (120), (125), (126), and (128).)
The energy gap $\Delta_{\rm gap}^{+-}(k)$ is given by $2\mu_{B}\,h-W_{s2}$ for
the following $k$ values and spin densities,
$\displaystyle\Delta_{\rm gap}^{+-}(k)$ $\displaystyle=$ $\displaystyle
2\mu_{B}\,h-W_{s2}$ $\displaystyle k$ $\displaystyle=$ $\displaystyle 0\hskip
5.69046pt{\rm for}\hskip 5.69046ptm\in]0,1[$ $\displaystyle k$
$\displaystyle=$ $\displaystyle k_{F\uparrow}-k_{F\downarrow}\hskip
5.69046pt{\rm for}\hskip 5.69046ptm\in[0,1/3]$ $\displaystyle k$
$\displaystyle\in$ $\displaystyle]{\tilde{k}},k_{F\downarrow}[\hskip
5.69046pt{\rm for}\hskip 5.69046ptm\in[0,\tilde{m}]$ $\displaystyle k$
$\displaystyle\in$ $\displaystyle]{\tilde{k}},k_{F\downarrow}[\hskip
5.69046pt{\rm for}\hskip 5.69046ptm\in[\tilde{m},1[\,.$ (66)
Here $W_{s2}$ is the $s2$ band energy width. Using results given in Appendix E, one finds that the energy scale $2\mu_{B}\,h-W_{s2}\geq 0$ in Eq. (66) has the following limiting values,
$\displaystyle\lim_{u\rightarrow 0}\,(2\mu_{B}\,h-W_{s2})$ $\displaystyle=$
$\displaystyle 0\hskip 5.69046pt{\rm for}\hskip 5.69046ptm\in]0,1[$
$\displaystyle\lim_{m\rightarrow 0}\,(2\mu_{B}\,h-W_{s2})$ $\displaystyle=$
$\displaystyle 0\hskip 5.69046pt{\rm for}\hskip 5.69046ptu>0$
$\displaystyle\lim_{m\rightarrow 1}\,(2\mu_{B}\,h-W_{s2})$ $\displaystyle=$
$\displaystyle U-(\sqrt{(4t)^{2}+(2U)^{2}}$ (67)
$\displaystyle-\sqrt{(4t)^{2}+U^{2}})>0\hskip 5.69046pt{\rm for}\hskip
5.69046ptu>0$ $\displaystyle\approx$ $\displaystyle U\hskip 5.69046pt{\rm
for}\hskip 5.69046ptu\ll 1$ $\displaystyle\approx$ $\displaystyle{t\over
u}={4t^{2}\over U}\hskip 5.69046pt{\rm for}\hskip 5.69046ptu\gg 1\,.$
At $k=\pi$ (which in the spectra expressions means the $k\rightarrow\pi$ limit), the present gap reads,
$\displaystyle\Delta_{\rm gap}^{+-}(\pi)$ $\displaystyle=$ $\displaystyle
4\mu_{B}\,h$ (68) $\displaystyle{\rm for}\hskip 5.69046ptm\in]0,1[\hskip
5.69046pt{\rm and}\hskip 5.69046ptu>0\,.$
This expression has the following limiting values,
$\displaystyle\lim_{u\rightarrow 0}\,\Delta_{\rm gap}^{+-}(\pi)$
$\displaystyle=$ $\displaystyle 8t\sin\left({\pi\over 2}m\right)\hskip
5.69046pt{\rm for}\hskip 5.69046ptm\in]0,1[$ $\displaystyle\lim_{m\rightarrow
0}\,\Delta_{\rm gap}^{+-}(\pi)$ $\displaystyle=$ $\displaystyle 0\hskip
5.69046pt{\rm for}\hskip 5.69046ptu>0$ $\displaystyle\lim_{m\rightarrow
1}\,\Delta_{\rm gap}^{+-}(\pi)$ $\displaystyle=$
$\displaystyle\sqrt{(4t)^{2}+(U)^{2}}-U>0\hskip 5.69046pt{\rm for}\hskip
5.69046ptu>0$ (69) $\displaystyle\approx$ $\displaystyle 4t-U\hskip
5.69046pt{\rm for}\hskip 5.69046ptu\ll 1$ $\displaystyle\approx$
$\displaystyle{2t\over u}={8t^{2}\over U}\hskip 5.69046pt{\rm for}\hskip
5.69046ptu\gg 1\,.$
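Both sets of limiting values are simple arithmetic in $t$ and $U$ and can be checked directly. A minimal numerical sketch follows, assuming $t=1$ (a hypothetical choice) and the convention $u=U/4t$ implied by Eqs. (67) and (76); it evaluates the two $m\rightarrow 1$ energy scales and compares them with the quoted $u\ll 1$ and $u\gg 1$ asymptotics.

```python
# Check of the m -> 1 limiting values in Eqs. (67) and (69) and of
# their u << 1 and u >> 1 asymptotics.  Assumptions: t = 1 is a
# hypothetical choice and u = U/(4t) is the convention of the text.
import numpy as np

t = 1.0
for u in (0.05, 20.0):
    U = 4 * t * u
    e67 = U - (np.sqrt((4 * t) ** 2 + (2 * U) ** 2)
               - np.sqrt((4 * t) ** 2 + U ** 2))      # 2 mu_B h - W_s2
    e69 = np.sqrt((4 * t) ** 2 + U ** 2) - U          # Delta_gap^{+-}(pi)
    print(f"u = {u:5.2f}: 2mu_B h - W_s2 = {e67:8.5f}"
          f"  [U = {U:.2f}, t/u = {t / u:.5f}]")
    print(f"           Delta_gap^+-(pi) = {e69:8.5f}"
          f"  [4t - U = {4 * t - U:.2f}, 2t/u = {2 * t / u:.5f}]")
```

For $u=0.05$ the two scales track $U$ and $4t-U$, while for $u=20$ they approach $t/u$ and $2t/u$, respectively.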
The energy gap $\Delta_{\rm gap}^{xx}(k)$, Eqs. (15) and (16), can be
expressed as,
$\displaystyle\Delta_{\rm gap}^{xx}(k)$ $\displaystyle=$
$\displaystyle\Delta_{\rm gap}^{+-}(k)\hskip 5.69046pt{\rm for}\hskip
5.69046ptk\in]0,k^{xx}_{ut}[$ $\displaystyle\Delta_{\rm gap}^{xx}(k)$
$\displaystyle=$ $\displaystyle\Delta_{\rm gap}^{-+}(k)\hskip 5.69046pt{\rm
for}\hskip 5.69046ptk\in]k^{xx}_{ut},\pi[\,,$ (70)
where $k^{xx}_{ut}>0$ is the $k$ value at which
$\omega^{-+}_{ut}(k^{xx}_{ut})=\omega^{+-}_{ut}(k^{xx}_{ut})$ and,
$\Delta_{\rm gap}^{-+}(k)=\Delta^{-+}(k)-\omega^{-+}_{ut}(k)\,.$ (71)
The gapped lower threshold spectrum $\Delta^{-+}(k)$ in this expression obeys
the equality $\Delta^{-+}(k)=\Delta^{+-}(k)$, where $\Delta^{+-}(k)$ is given
in Eqs. (13)-(14).
For spin densities $m\in[0,\tilde{m}]$, the energy gap $\Delta_{\rm
gap}^{-+}(k)$, Eq. (71), reads,
$\displaystyle\Delta_{\rm gap}^{-+}(k)$ $\displaystyle=$
$\displaystyle\varepsilon_{s2}(k)\hskip 5.69046pt{\rm for}\hskip
5.69046ptk\in]0,(k_{F\uparrow}-k_{F\downarrow})[$ $\displaystyle\Delta_{\rm
gap}^{-+}(k)$ $\displaystyle=$ $\displaystyle
4\mu_{B}\,h-\varepsilon_{s}(k_{F\uparrow}-k)+2\varepsilon_{s}\left({\pi-k\over
2}\right)$ $\displaystyle{\rm for}\hskip
5.69046ptk\in](k_{F\uparrow}-k_{F\downarrow}),{\tilde{k}}[$
$\displaystyle\Delta_{\rm gap}^{-+}(k)$ $\displaystyle=$ $\displaystyle
4\mu_{B}\,h-W_{s2}-\varepsilon_{s}(k_{F\downarrow}-k)+2\varepsilon_{s}\left({\pi-k\over
2}\right)$ $\displaystyle{\rm for}\hskip
5.69046ptk\in]{\tilde{k}},2k_{F\downarrow}[$ $\displaystyle\Delta_{\rm
gap}^{-+}(k)$ $\displaystyle=$
$\displaystyle\varepsilon_{s2}(k-2k_{F\downarrow})+2\varepsilon_{s}\left({\pi-k\over
2}\right)$ (72) $\displaystyle{\rm for}\hskip
5.69046ptk\in]2k_{F\downarrow},\pi[\,,$
whereas for $m\in[\tilde{m},1[$ it is given by,
$\displaystyle\Delta_{\rm gap}^{-+}(k)$ $\displaystyle=$
$\displaystyle\varepsilon_{s2}(k)\hskip 5.69046pt{\rm for}\hskip
5.69046ptk\in[0,{\tilde{k}}-\delta{\tilde{k}}_{1}]$ $\displaystyle\Delta_{\rm
gap}^{-+}(k)$ $\displaystyle=$ $\displaystyle
4\mu_{B}\,h-W_{s2}-\varepsilon_{s}(k_{F\downarrow}-k)$ $\displaystyle{\rm
for}\hskip 5.69046ptk\in]{\tilde{k}},(k_{F\uparrow}-k_{F\downarrow})[$
$\displaystyle\Delta_{\rm gap}^{-+}(k)$ $\displaystyle=$ $\displaystyle
4\mu_{B}\,h-W_{s2}-\varepsilon_{s}(k_{F\downarrow}-k)+2\varepsilon_{s}\left({\pi-k\over
2}\right)$ $\displaystyle{\rm for}\hskip
5.69046ptk\in](k_{F\uparrow}-k_{F\downarrow}),2k_{F\downarrow}[$
$\displaystyle\Delta_{\rm gap}^{-+}(k)$ $\displaystyle=$
$\displaystyle\varepsilon_{s2}(k-2k_{F\downarrow})+2\varepsilon_{s}\left({\pi-k\over
2}\right)$ (73) $\displaystyle{\rm for}\hskip
5.69046ptk\in]2k_{F\downarrow},\pi[\,.$
At $k=0,k_{F\uparrow}-k_{F\downarrow},\pi$ the energy gap $\Delta_{\rm
gap}^{-+}(k)$ is given by,
$\displaystyle\Delta_{\rm gap}^{-+}(0)$ $\displaystyle=$ $\displaystyle
4\mu_{B}\,h-W_{s2}\hskip 5.69046pt{\rm for}\hskip 5.69046ptm\in]0,1[$
$\displaystyle\Delta_{\rm gap}^{-+}(k_{F\uparrow}-k_{F\downarrow})$
$\displaystyle=$ $\displaystyle 4\mu_{B}\,h\hskip 5.69046pt{\rm for}\hskip
5.69046ptm\in[0,\tilde{m}]$ $\displaystyle\Delta_{\rm gap}^{-+}(\pi)$
$\displaystyle=$ $\displaystyle 4\mu_{B}\,h-2W_{s}^{p}$ (74)
$\displaystyle{\rm for}\hskip 5.69046ptm\in]0,1[\,.$
In the $k$ intervals $k\in]\bar{k}_{0},\pi[$ and
$k\in]\bar{k}_{0},\bar{k}_{1}[$, Eq. (18), for spin densities
$m\in]0,\bar{m}_{0}]$ and $m\in]0,\bar{m}]$, respectively, one has that
$\Delta_{\rm gap}^{xx}(k)=\Delta_{\rm gap}^{-+}(k)<0$. For instance, at
$k=\pi$ (and in the $k\rightarrow\pi$ limit in the spectra expressions) and
for spin densities $m\in]0,1[$, the energy gap $\Delta_{\rm
gap}^{xx}(\pi)=\Delta_{\rm gap}^{-+}(\pi)$ is in the $u\rightarrow 0$ limit
and for $u\gg 1$ given by,
$\displaystyle\Delta_{\rm gap}^{xx}(\pi)$ $\displaystyle=$ $\displaystyle
12t\left(\sin\left({\pi\over 2}m\right)-{1\over 3}\right)$ (75)
$\displaystyle=$ $\displaystyle-4t\hskip 5.69046pt{\rm for}\hskip
5.69046ptm\rightarrow 0$ $\displaystyle=$ $\displaystyle 0\hskip 5.69046pt{\rm
for}\hskip 5.69046ptm=\bar{m}_{0}={2\over\pi}\arcsin\left({1\over
3}\right)\approx 0.216$ $\displaystyle=$ $\displaystyle 8t\hskip 5.69046pt{\rm
for}\hskip 5.69046ptm\rightarrow 1\,,$
and
$\displaystyle\Delta_{\rm gap}^{xx}(\pi)$ $\displaystyle=$ $\displaystyle-{\pi
t\over u}=-{4\pi t^{2}\over U}\hskip 5.69046pt{\rm for}\hskip
5.69046ptm\rightarrow 0$ (76) $\displaystyle=$ $\displaystyle 0\hskip
5.69046pt{\rm for}\hskip 5.69046ptm=\bar{m}_{0}\approx 0.239$ $\displaystyle=$
$\displaystyle{4t\over u}={16t^{2}\over U}\hskip 5.69046pt{\rm for}\hskip
5.69046ptm\rightarrow 1\,,$
respectively.
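As a quick consistency check of the $u\rightarrow 0$ expression, Eq. (75), the sketch below (with the hypothetical choice $t=1$) locates the zero of $\Delta_{\rm gap}^{xx}(\pi)$ at $\bar{m}_{0}=(2/\pi)\arcsin(1/3)$ and reproduces the $m\rightarrow 0$ and $m\rightarrow 1$ values $-4t$ and $8t$.

```python
# Check of the u -> 0 gap Delta_gap^xx(pi) = 12 t (sin(pi m/2) - 1/3),
# Eq. (75): zero at m0 = (2/pi) arcsin(1/3), endpoint values -4t, 8t.
# Assumption: t = 1 is a hypothetical choice.
import numpy as np

t = 1.0
gap = lambda m: 12 * t * (np.sin(np.pi * m / 2) - 1 / 3)
m0 = (2 / np.pi) * np.arcsin(1 / 3)
print(f"m0 = {m0:.4f}, gap(m0) = {gap(m0):.1e}")        # ~0.2163, ~0
print(f"gap(m->0) = {gap(0.0):+.1f}, gap(m->1) = {gap(1.0):+.1f}")
```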
Finally, the energy gap $\Delta_{\rm gap}^{zz}(k)$, Eqs. (15) and (17), is for
spin densities $m\in]0,\tilde{m}]$ and $m\in[\tilde{m},1[$ given by,
$\displaystyle\Delta_{\rm gap}^{zz}(k)$ $\displaystyle=$
$\displaystyle\varepsilon_{s2}(k-(k_{F\uparrow}-k_{F\downarrow}))$ (77)
$\displaystyle{\rm for}\hskip
5.69046ptk\in]0,(k_{F\uparrow}-k_{F\downarrow})[$ $\displaystyle=$
$\displaystyle 4\mu_{B}\,h-W_{s2}-\varepsilon_{s}\left(k_{F\uparrow}-k\right)$
$\displaystyle{\rm for}\hskip
5.69046ptk\in](k_{F\uparrow}-k_{F\downarrow}),(\pi-{\tilde{k}})[$
$\displaystyle=$ $\displaystyle
4\mu_{B}\,h-\varepsilon_{s}(k_{F\downarrow}-k)$ $\displaystyle{\rm for}\hskip
5.69046ptk\in](\pi-{\tilde{k}}),2k_{F\downarrow}[$ $\displaystyle=$
$\displaystyle\varepsilon_{s2}(k-\pi)\hskip 5.69046pt{\rm for}\hskip
5.69046ptk\in]2k_{F\downarrow},\pi[$ $\displaystyle{\rm when}\hskip
5.69046ptm\in[0,\tilde{m}]\,,$
and
$\displaystyle\Delta_{\rm gap}^{zz}(k)$ $\displaystyle=$
$\displaystyle\varepsilon_{s2}(k-(k_{F\uparrow}-k_{F\downarrow}))\hskip
5.69046pt{\rm for}\hskip 5.69046ptk\in[0,(k_{F\uparrow}-k_{F\downarrow})]$
(78) $\displaystyle=$ $\displaystyle
4\mu_{B}\,h-W_{s2}-\varepsilon_{s}\left(k_{F\uparrow}-k\right)$
$\displaystyle{\rm for}\hskip
5.69046ptk\in](k_{F\uparrow}-k_{F\downarrow}),(\pi-{\tilde{k}})[$
$\displaystyle=$ $\displaystyle\varepsilon_{s2}(k-\pi)\hskip 5.69046pt{\rm
for}\hskip 5.69046ptk\in](\pi-{\tilde{k}}),\pi[$ $\displaystyle{\rm
when}\hskip 5.69046ptm\in[\tilde{m},1[\,,$
respectively.
## Appendix D Matrix elements functionals and cusp singularities
The ground state at a given spin density $m$ is populated by
$N_{s}=N_{\downarrow}$ $s$ particles that fill a $s$ band Fermi sea
$q\in[-k_{F\downarrow},k_{F\downarrow}]$ where $k_{F\downarrow}$ is given in
Eq. (97) and by a full $c$ band $q\in[-\pi,\pi]$ populated by $N_{c}=N$ $c$
particles that do not participate in the spin dynamical properties. Within the present thermodynamic limit, we have ignored corrections of $1/L$ order to the limiting values of these bands' momenta. There are no $s2$ particles in the ground state.
However, the following number and current number deviations under transitions from the ground state to excited energy eigenstates are associated with $1/L$ momentum-deviation corrections that must be accounted for even in the thermodynamic limit,
$\displaystyle\delta N_{s,\iota}^{F}\hskip 5.69046pt{\rm for}\hskip
5.69046pt\iota=1,-1\hskip 5.69046pt{\rm(right,left)}\hskip 5.69046pts\hskip
2.84544pt{\rm particles}$ $\displaystyle\delta N_{s}^{F}=\sum_{\iota=\pm
1}\delta N_{s,\iota}^{F}\hskip 5.69046pt{\rm and}\hskip 5.69046pt\delta
J_{s}^{F}={1\over 2}\sum_{\iota=\pm 1}\iota\,\delta N_{s,\iota}^{F}$
$\displaystyle\delta J_{s2}={\iota\over 2}\delta
N_{s2}(q)|_{q=\iota\,(k_{F\uparrow}-k_{F\downarrow})}$ $\displaystyle\delta
J_{c}^{F}={1\over 2}\sum_{\iota=\pm 1}\iota\,\delta N_{c,\iota}^{F}\hskip
5.69046pt{\rm where}$ $\displaystyle\delta N_{c,\iota}^{F}=-\delta
N_{c,-\iota}^{F}\,.$ (79)
Under the transitions from the ground state to the excited energy eigenstates
that span the spin subspaces of the quantum problem studied in this paper, the
number of $s$ particles may change. This leads to number deviations $\delta
N_{s}$. The specific number deviations $\delta N_{s,\iota}^{F}$ in Eq. (79)
refer only to changes of the $s$ particles numbers at the left $(\iota=-1)$ or
right $(\iota=1)$ $s$ band Fermi points. The same information is contained in
the two Fermi points number deviations $\delta N_{s,\iota}^{F}$ and in the
corresponding Fermi points number deviations $\delta N_{s}^{F}=\sum_{\iota=\pm
1}\delta N_{s,\iota}^{F}$ and current number deviations $\delta
J_{s}^{F}={1\over 2}\sum_{\iota=\pm 1}\iota\,\delta N_{s,\iota}^{F}$.
The overall $s$ particles number deviation $\delta N_{s}$ can be expressed as,
$\delta N_{s}=\delta N_{s}^{F}+\delta N_{s}^{NF}\,.$ (80)
Here $\delta N_{s}^{NF}$ refers to changes in the number of $s$ particles at
$s$ band momenta other than at the Fermi points.
For the spin subspaces under consideration, the $s2$ band number deviations
only read $\delta N_{s2}=0$ or $\delta N_{s2}=1$. In the latter case, the $s2$
particle can be created at any $s2$ band momentum
$q\in[-(k_{F\uparrow}-k_{F\downarrow}),(k_{F\uparrow}-k_{F\downarrow})]$. Only when the $s2$ particle is created at the $s2$ band limiting values $q=-(k_{F\uparrow}-k_{F\downarrow})$ or $q=(k_{F\uparrow}-k_{F\downarrow})$ does that process lead to a current number deviation $\delta J_{s2}=-1/2$ or $\delta J_{s2}=1/2$, respectively.
Within the dynamical theory used in the studies of this paper, the dynamical structure factors are expressed as a sum of $s$-band spectral function terms $B_{s}(k,\omega)$ (denoted by $B_{Q}(k,\omega)$ in Ref. Carmelo_16, ), each
associated with a reference energy eigenstate whose $s$-band Fermi sea changes
from occupancy one to zero at the $\iota=+1$ right and $\iota=-1$ left Fermi
points $q_{Fs,\iota}=q_{Fs,\iota}^{0}+\pi\delta N_{s,\iota}^{F}/L$. Here
$q_{Fs,\iota}^{0}$ refers to the ground state and the $\iota=\pm 1$ number
deviations $\delta N_{s,\iota}^{F}$ are those in Eq. (79).
In the subspaces of our study, that reference state corresponds to fixed
$\iota=\pm 1$ deviations $\delta N_{s,\iota}^{F}$ and can have no holes within
the $s$-band Fermi sea, one hole at a fixed $s$-band momentum $q$, or two
holes at fixed $s$-band momenta $q$ and $q^{\prime}$, all inside that Fermi sea
and away from the Fermi points. In addition, that state can have no $s$
particles or a single $s$ particle at a fixed $s$-band momentum $q$ outside
the $s$-band Fermi sea and away from its Fermi points. It can also have no
$s2$ particles or one $s2$ particle at a fixed momentum
$q^{\prime\prime}\in[-(k_{F\uparrow}-k_{F\downarrow}),(k_{F\uparrow}-k_{F\downarrow})]$.
Besides the reference state, each term $B_{s}(k,\omega)$ involves sums that
run over $m_{\iota}=1,2,3,...$ elementary particle-hole processes of
$\iota=\pm 1$ momenta $\iota 2\pi/L$ around the corresponding Fermi points
$q_{Fs,\iota}$ that generate a tower of excited states upon that reference
state. It reads Carmelo_16 ,
$\displaystyle B_{s}(k,\omega)$ $\displaystyle=$ $\displaystyle{L\over
2\pi}\sum_{m_{+1};m_{-1}}\,A^{(0,0)}\,a(m_{+1},\,m_{-1})$ (81)
$\displaystyle\times$ $\displaystyle\delta\Bigl{(}\omega-\epsilon-{2\pi\over
L}\,v_{s}\sum_{\iota=\pm 1}(m_{\iota}+\Phi_{\iota}^{2}/4)\Bigr{)}$
$\displaystyle\times$ $\displaystyle\delta\Bigl{(}k-{2\pi\over
L}\,\sum_{\iota=\pm 1}\iota\,(m_{\iota}+\Phi_{\iota}^{2}/4)\Bigr{)}\,.$
Here $v_{s}=v_{s}(k_{F\downarrow})$ where $v_{s}(q)$ is the $s$-band group
velocity, Eq. (100), and the lowest peak weight $A^{(0,0)}$ and the weights
$A^{(0,0)}\,a(m_{+1},\,m_{-1})$ refer to the matrix elements square
$|\langle\nu|\hat{S}^{a}_{k}|GS\rangle|^{2}$ in Eq. (4) between the ground
state and the $m_{\iota}=0$ reference excited state and the corresponding
$m_{\iota}>0$ tower excited states. For the present subspaces, the $\iota=\pm
1$ functionals $\Phi_{\iota}$ and the spectrum $\epsilon$ in Eq. (81) have the
general form,
$\displaystyle\Phi_{\iota}={\iota\,\delta N^{F}_{s}\over
2\xi_{s\,s}^{1}}+\xi_{s\,s}^{1}\,(\delta J^{F}_{s}-2\delta J_{s2})$
$\displaystyle+\,\Phi_{s,s}(\iota k_{F\downarrow},q)\delta
N_{s}(q)+\Phi_{s,s}(\iota k_{F\downarrow},q^{\prime})\delta N_{s}(q^{\prime})$
$\displaystyle+\,(1-\delta_{|q^{\prime\prime}|,(k_{F\uparrow}-k_{F\downarrow})})\,\Phi_{s,s2}(\iota
k_{F\downarrow},q^{\prime\prime})\delta N_{s2}(q^{\prime\prime})$
$\displaystyle\epsilon=\varepsilon_{s}(q)\delta
N_{s}(q)+\varepsilon_{s}(q^{\prime})\delta
N_{s}(q^{\prime})+\varepsilon_{s2}(q^{\prime\prime})\delta
N_{s2}(q^{\prime\prime})$ $\displaystyle{\rm where}$ $\displaystyle\delta
N_{s}(q)=0,\pm 1\,;\hskip 5.69046pt\delta N_{s}(q^{\prime})=0,-1\hskip
5.69046pt{\rm and}$ $\displaystyle\delta N_{s2}(q^{\prime\prime})=0,1\,.$ (82)
Here the deviations $\delta N^{F}_{s}$, $\delta J^{F}_{s}$, and $\delta
J_{s2}$ are given in Eq. (79), the $\iota=\pm 1$ phase shifts
$\Phi_{s,s}\left(\iota k_{F\downarrow},q\right)$ and $\Phi_{s,s2}\left(\iota
k_{F\downarrow},q\right)$ in units of $2\pi$ are defined by Eq. (133), the
phase-shift related parameter $\xi_{s\,s}^{1}$ is defined in Eq. (140), and
the energy dispersions $\varepsilon_{s}(q)$ and $\varepsilon_{s2}(q)$ are
given in Eqs. (98) and (99), respectively.
The relative weights $a(m_{+1},\,m_{-1})$ in Eq. (81) can be expressed in
terms of the gamma function as Carmelo_16 ,
$\displaystyle a(m_{+1},m_{-1})$ $\displaystyle=$
$\displaystyle\prod_{\iota=\pm 1}a_{\iota}(m_{\iota})\hskip 5.69046pt{\rm
where}$ $\displaystyle a_{\iota}(m_{\iota})$ $\displaystyle=$
$\displaystyle\frac{\Gamma(m_{\iota}+\Phi_{\iota}^{2})}{\Gamma(m_{\iota}+1)\,\Gamma(\Phi_{\iota}^{2})}\,.$
(83)
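Since Eq. (83) involves only gamma functions, the relative weights are straightforward to tabulate. A minimal sketch is given below (the value $\Phi_{\iota}^{2}=0.49$ is a hypothetical illustration); it evaluates Eq. (83) through log-gamma functions for numerical stability and compares it with the asymptotic form quoted in Eq. (84) below.

```python
# Relative tower weights a_iota(m_iota), Eq. (83), via log-gamma
# functions, compared with the asymptotic form of Eq. (84).
# Assumption: Phi_iota^2 = 0.49 is a hypothetical illustrative value.
import numpy as np
from scipy.special import gammaln

phi2 = 0.49

def a_exact(m):
    # Gamma(m + phi2) / (Gamma(m + 1) Gamma(phi2))
    return np.exp(gammaln(m + phi2) - gammaln(m + 1) - gammaln(phi2))

def a_asympt(m):
    # (m + phi2/4)^(-1 + phi2) / Gamma(phi2)
    return (m + phi2 / 4) ** (-1 + phi2) / np.exp(gammaln(phi2))

for m in (1, 5, 50, 500):
    print(f"m = {m:4d}: exact = {a_exact(m):.6f}, asymptotic = {a_asympt(m):.6f}")
```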
In the present thermodynamic limit, the matrix elements weights have the
following asymptotic behavior Carmelo_16 ,
$\displaystyle A^{(0,0)}$ $\displaystyle=$ $\displaystyle\left({1\over
L\,B_{s}}\right)^{-1+\sum_{\iota=\pm 1}\Phi_{\iota}^{2}}$
$\displaystyle\times$ $\displaystyle\prod_{\iota=\pm
1}e^{-f_{0}+f_{2}\left(2{\tilde{\Phi}}_{\iota}\right)^{2}-f_{4}\left(2{\tilde{\Phi}}_{\iota}\right)^{4}}$
$\displaystyle a(m_{+1},m_{-1})$ $\displaystyle=$
$\displaystyle\prod_{\iota=\pm
1}{(m_{\iota}+\Phi_{\iota}^{2}/4)^{-1+\Phi_{\iota}^{2}}\over\Gamma(\Phi_{\iota}^{2})}\,.$
(84)
Here ${\tilde{\Phi}}_{\iota}=\Phi_{\iota}-\iota\delta N_{s,\iota}^{F}$; the constant $0<B_{s}\leq 1$ depends on $u$ and $m$, the constants $0<f_{l}<1$ where $l=0,2,4$ depend only on $u$, and all are independent of $L$. Importantly, in that limit the matrix elements square in Eq. (4) then
read,
$\displaystyle|\langle\nu|\hat{S}^{a}_{k}|GS\rangle|^{2}=\left({1\over
L\,B_{s}}\right)^{-1+\sum_{\iota=\pm 1}\Phi_{\iota}^{2}}$
$\displaystyle\times\prod_{\iota=\pm
1}{e^{-f_{0}+f_{2}\left(2{\tilde{\Phi}}_{\iota}\right)^{2}-f_{4}\left(2{\tilde{\Phi}}_{\iota}\right)^{4}}\over\Gamma(\Phi_{\iota}^{2})}\left(m_{\iota}+\Phi_{\iota}^{2}/4\right)^{-1+\Phi_{\iota}^{2}}$
$\displaystyle=\left({1\over L\,B_{s}}\right)^{-1+\sum_{\iota=\pm
1}\Phi_{\iota}^{2}}\prod_{\iota=\pm
1}{e^{-f_{0}+f_{2}\left(2{\tilde{\Phi}}_{\iota}\right)^{2}-f_{4}\left(2{\tilde{\Phi}}_{\iota}\right)^{4}}\over\Gamma(\Phi_{\iota}^{2})}$
$\displaystyle\times\left({L\over
4\pi\,v_{s}}(\omega-\epsilon+\iota\,v_{s}\,k)\right)^{-1+\Phi_{\iota}^{2}}\,.$
(85)
Here the equality $m_{\iota}={L\over
4\pi\,v_{s}}(\omega-\epsilon+\iota\,v_{s}\,k)-\Phi_{\iota}^{2}/4$ imposed by
the $\delta$-functions in Eq. (81) has been used.
In the general case in which the two $\iota=\pm 1$ functionals $\Phi_{\iota}$ are finite, the $s$-particle spectral function $B_{s}(k,\omega)$, Eq. (81), can be written as Carmelo_16 ,
$\displaystyle B_{s}(k,\omega)={1\over 4\pi\,B_{s}\,v_{s}}\,\prod_{\iota=\pm
1}\,\Theta(\omega-\epsilon+\iota\,v_{s}\,k)$
$\displaystyle{e^{-f_{0}+f_{2}\left(2{\tilde{\Phi}}_{\iota}\right)^{2}-f_{4}\left(2{\tilde{\Phi}}_{\iota}\right)^{4}}\over\Gamma(\Phi_{\iota}^{2})}\Bigl{(}{\omega-\epsilon+\iota\,v_{s}\,k\over
4\pi\,B_{s}\,v_{s}}\Bigr{)}^{-1+\Phi_{\iota}^{2}}\,.$ (86)
To reach this expression, which in the thermodynamic limit is exact, Eqs.
(81), (84), and (85) were used.
The summation of the terms $B_{s}(k,\omega)$ that leads to expressions for the dynamical structure factors can be performed and yields several kinds of contributions.
When $\delta N_{s}(q)=\delta N_{s}(q^{\prime})=0$ and $\delta
N_{s2}(q^{\prime\prime})=0$ or $\delta N_{s2}(q^{\prime\prime})=1$ at
$q^{\prime\prime}=0$ in Eq. (82), such summations lead to
$S^{ab}(k,\omega)\propto\Bigl{(}\omega-\omega_{0}\Bigr{)}^{\zeta^{ab}}$ for
$(\omega-\omega_{0})\neq\pm v_{s}\,(k-k_{0})$ where $\omega_{0}=0$ and
$\omega_{0}=4\mu_{B}\,h$ for $\delta N_{s2}(q^{\prime\prime})=0$ and $\delta
N_{s2}(0)=1$, respectively, $k_{0}=2k_{F\downarrow}\,\delta J_{s}^{F}$, and
$\zeta^{ab}=-2+\sum_{\iota=\pm 1}\Phi_{\iota}^{2}$. Moreover, they lead to an
alternative behavior $S^{ab}(k,\omega)\propto\Bigl{(}\omega-\omega_{0}\mp
v_{s}\,(k-k_{0})\Bigr{)}^{\zeta^{ab}_{\pm}}$ for
$(\omega-\omega_{0})\approx\pm v_{s}\,(k-k_{0})$ where
$\zeta^{ab}_{\pm}=-1+\Phi_{\pm}^{2}$. These behaviors are only valid in very
small $(k,\omega)$-plane regions associated with very small values of $\omega$
or $(\omega-4\mu_{B}\,h)$ and of $(k-k_{0})$ and lead to cusp singularities
when $\zeta^{ab}<0$ and/or $\zeta^{ab}_{\pm}<0$ Carmelo_16 .
When only one of the deviations $\delta N_{s}(q)$, $\delta N_{s}(q^{\prime})$,
and $\delta N_{s2}(q^{\prime\prime})$ in Eq. (82) reads $1$ (or $-1$), the
summation of terms $B_{s}(k,\omega)$ gives the line shape of the dynamical
structure factors in the $(k,\omega)$-plane vicinity of branch lines
associated with the lower thresholds, Eqs. (20) and (32). The form of the
exponents $\zeta^{ab}_{\beta}(k)=-1+\sum_{\iota=\pm 1}\Phi_{\iota}^{2}$, Eq.
(21), in these expressions is fully determined by the square matrix elements,
Eq. (85).
When several of the deviations $\delta N_{s}(q)$, $\delta N_{s}(q^{\prime})$,
and $\delta N_{s2}(q^{\prime\prime})$ in Eq. (82) are given by $1$ (or $-1$),
the summation of terms $B_{s}(k,\omega)$ leads to a line shape without cusp
singularities.
The results of this paper focus on the line shape near the branch lines
associated with the lower thresholds, Eqs. (20) and (32). They rely on the
specific form that the functional, Eq. (82), has for the $s2,s2^{\prime}$
branch lines, $\bar{s}$ branch lines, and $\bar{s}^{\prime}$ branch lines that
are part of the gapped lower thresholds.
In the case of the $s2$ and $s2^{\prime}$ branch lines, that spectral
functional’s form is,
$\displaystyle\Phi_{\iota}(q)$ $\displaystyle=$
$\displaystyle\iota\,\xi_{s\,s}^{0}{\delta N^{F}_{s}\over
2}+\xi_{s\,s}^{1}\,\delta J^{F}_{s}+\Phi_{s,s2}(\iota k_{F\downarrow},q)$ (87)
$\displaystyle=$ $\displaystyle{\iota\,\delta N^{F}_{s}\over
2\xi_{s\,s}^{1}}+\xi_{s\,s}^{1}\,\delta J^{F}_{s}+\Phi_{s,s2}(\iota
k_{F\downarrow},q)$ $\displaystyle{\rm for}\hskip 5.69046pts2\hskip
5.69046pt{\rm and}\hskip 5.69046pts2^{\prime}\hskip 5.69046pt{\rm
branch}\hskip 5.69046pt{\rm lines}\,.$
For the excited energy eigenstates that contribute to the singularities at and
above the $s2$ and $s2^{\prime}$ branch lines, the maximum interval of the
$s2$ band momentum $q$ in Eq. (87) is
$q\in[0,(k_{F\uparrow}-k_{F\downarrow})[$ or
$q\in]-(k_{F\uparrow}-k_{F\downarrow}),0]$.
For $\bar{s}$ and $\bar{s}^{\prime}$ branch lines, the spectral functionals
are different and have the form,
$\displaystyle\Phi_{\iota}(q)=$ (88) $\displaystyle=$
$\displaystyle\iota\,\xi_{s\,s}^{0}{\delta N^{F}_{s}\over
2}+{\iota\,\xi_{s\,s2}^{0}\over 2}+\xi_{s\,s}^{1}\,\delta
J^{F}_{s}-\Phi_{s,s}(\iota k_{F\downarrow},q)$ $\displaystyle=$
$\displaystyle{\iota\,\delta N^{F}_{s}\over
2\xi_{s\,s}^{1}}+{\iota\,\xi_{s\,s2}^{0}\over 2}+\xi_{s\,s}^{1}\,\delta
J^{F}_{s}-\Phi_{s,s}(\iota k_{F\downarrow},q)$ $\displaystyle{\rm for}\hskip
5.69046pt\bar{s}\hskip 5.69046pt{\rm branch}\hskip 5.69046pt{\rm lines}\,,$
and
$\displaystyle\Phi_{\iota}(q)=$ (89) $\displaystyle=$
$\displaystyle\iota\,\xi_{s\,s}^{0}{\delta N^{F}_{s}\over
2}+\xi_{s\,s}^{1}\,(\delta J^{F}_{s}-2\delta J_{s2})-\Phi_{s,s}(\iota
k_{F\downarrow},q)$ $\displaystyle=$ $\displaystyle{\iota\,\delta
N^{F}_{s}\over 2\xi_{s\,s}^{1}}+\xi_{s\,s}^{1}\,(\delta J^{F}_{s}-2\delta
J_{s2})-\Phi_{s,s}(\iota k_{F\downarrow},q)$ $\displaystyle{\rm for}\hskip
5.69046pt\bar{s}^{\prime}\hskip 5.69046pt{\rm branch}\hskip 5.69046pt{\rm
lines}\,,$
respectively. Here the maximum interval of the $s$ band momentum is $q\in]-k_{F\downarrow},k_{F\downarrow}[$, the equality $\xi_{s\,s}^{0}=1/\xi_{s\,s}^{1}$ holds at one electron per site, and we have accounted for the phase shift $\Phi_{s,s2}(\iota k_{F\downarrow},\pm(k_{F\uparrow}-k_{F\downarrow}))$ reading $\mp\xi_{s\,s}^{1}$ [see Eq. (143)].
The values of the $s$ and $s2$ bands number and current number deviations used in Eqs. (87)-(89) for the transverse and longitudinal spin excitations are provided in Tables 1 and 2, respectively.
Finally, the momentum dependent exponents that control the line shape near the
$s$ branch lines that refer to parts of the lower thresholds of the combined
spectra, Eqs. (49) and (50), and of the spectrum, Eq. (51), involve spectral
functionals of general form,
$\displaystyle\Phi_{\iota}(q)$ $\displaystyle=$ $\displaystyle{\iota\,\delta
N^{F}_{s}\over 2\xi_{s\,s}^{1}}+\xi_{s\,s}^{1}\,\delta
J^{F}_{s}\mp\Phi_{s,s}(\iota k_{F\downarrow},q)$ $\displaystyle{\rm where}$
$\displaystyle-$ $\displaystyle\rightarrow$ $\displaystyle\hskip 5.69046pt{\rm
maximum}\hskip 5.69046pt{\rm interval}\hskip
5.69046ptq\in]-k_{F\downarrow},k_{F\downarrow}[$ $\displaystyle+$
$\displaystyle\rightarrow$ $\displaystyle\hskip 5.69046pt{\rm maximum}\hskip
5.69046pt{\rm interval}\hskip 5.69046pt|q|\in]k_{F\downarrow},k_{F\uparrow}]$
(90) $\displaystyle{\rm for}\hskip 5.69046pts\hskip 5.69046pt{\rm
branch}\hskip 5.69046pt{\rm lines}\,.$
Here $-$ and $+$ are the phase-shift signs in $\mp\Phi_{s,s}(\iota k_{F\downarrow},q)$ suitable to $s$ branch lines involving $s$ band hole and $s$ particle creation, respectively, at a $q$ belonging to the given maximum intervals.
The values of the $s$ band number and current number deviations that are used
in Eq. (90) are provided in Table 3.
In terms of many-electron processes, the quantum problem studied in this paper is not perturbative. However, it is perturbative in terms of the fractionalized particles that naturally emerge from the separation of the rotated-electron degrees of freedom. (In the subspace of the present quantum problem, rotated-electron operators are expressed in terms of the corresponding fractionalized-particle operators, as given in Eq. (80) of Ref. Carmelo_16, .)
The case of most interest for the studies of this paper refers to the gapped
excited energy eigenstates populated by one $s2$ particle. For the $+-$, $xx$,
and $zz$ spin dynamical structure factors, such states are behind the
$(k,\omega)$-plane spectral weight located above the gapped lower thresholds
shown in Figs. 1-6. For such $+-$, $zz$, and $-+$ factors the $s$-particle
number deviations, $\delta N_{s}=\delta N_{s}^{F}+\delta N_{s}^{NF}$, Eq.
(80), are given by $\delta N_{s}=-1$, $\delta N_{s}=-2$, and $\delta
N_{s}=-3$, respectively. That $\sum_{\iota=\pm 1}\Phi_{\iota}^{2}(q)$
increases upon increasing $|\delta N_{s}|$ is behind both a decreasing amount
of spectral weight above the corresponding gapped lower threshold and an
increase of the momentum-dependent exponents, Eqs. (20) and (32).
## Appendix E Some useful quantities
In this Appendix, a set of quantities needed for our study is defined and their useful limiting behaviors are provided.
The quantum problem described by the 1D Hubbard model with one electron per site in a magnetic field, acting in the spin subspaces considered in this paper, involves a subset of the Bethe ansatz equations.
The equation associated with the $s$ band of the classes of excited energy
eigenstates that span such spin subspaces is given by,
$\displaystyle q_{j}={2\over
L}\sum_{j^{\prime}=1}^{L}\,\arctan\left({\Lambda_{s}(q_{j})-\sin
k(q_{j^{\prime}})\over u}\right)$ $\displaystyle-{2\over
L}\sum_{j^{\prime}=1}^{N_{\uparrow}}\,N_{s}(q_{j^{\prime}})\arctan\left({\Lambda_{s}(q_{j})-\Lambda_{s}(q_{j^{\prime}})\over
2u}\right)$ $\displaystyle-{2\over
L}\sum_{j^{\prime}=1}^{N_{\uparrow}-N_{\downarrow}+N_{s2}}\,N_{s2}(q_{j^{\prime}})\\{\arctan\left({\Lambda_{s}(q_{j})-\Lambda_{s2}(q_{j^{\prime}})\over
u}\right)$
$\displaystyle+\arctan\left({\Lambda_{s}(q_{j})-\Lambda_{s2}(q_{j^{\prime}})\over
3u}\right)\\}$ $\displaystyle{\rm where}\hskip
14.22636ptj=1,...,N_{\uparrow}\,.$ (91)
That associated with the $s2$ band reads,
$\displaystyle q_{j}$ $\displaystyle=$ $\displaystyle{2\over
L}\sum_{j^{\prime}=1}^{L}\,\arctan\left({\Lambda_{s2}(q_{j})-\sin
k(q_{j^{\prime}})\over 2u}\right)$ (92) $\displaystyle-$ $\displaystyle{2\over
L}\sum_{j^{\prime}=1}^{N_{\uparrow}}\,N_{s}(q_{j^{\prime}})\\{\arctan\left({\Lambda_{s2}(q_{j})-\Lambda_{s}(q_{j^{\prime}})\over
u}\right)$ $\displaystyle+$
$\displaystyle\arctan\left({\Lambda_{s2}(q_{j})-\Lambda_{s}(q_{j^{\prime}})\over
3u}\right)\\}$ $\displaystyle{\rm where}\hskip
14.22636ptj=1,...,N_{\uparrow}-N_{\downarrow}+N_{s2}$ $\displaystyle{\rm
and}\hskip 14.22636ptN_{s2}=0,1\,.$
In these equations, $N_{s}(q_{j^{\prime}})=1$ and $N_{s2}(q_{j^{\prime}})=1$
for occupied $q_{j^{\prime}}$ and $N_{s}(q_{j^{\prime}})=0$ and
$N_{s2}(q_{j^{\prime}})=0$ for unoccupied $q_{j^{\prime}}$.
For the spin subspaces spanned by excited states populated by $N_{s}=N_{\downarrow}-2$ $s$ particles and one $s2$ particle, the Bethe-ansatz equation, Eq. (92), does not include a third term that would involve the spin rapidity differences $\Lambda_{s2}(q_{j})-\Lambda_{s2}(q_{j^{\prime}})$. Indeed, such a term vanishes for $q_{j}=q_{j^{\prime}}$.
The $s$ band Bethe ansatz rapidity is real and associated with the rapidity
function $\Lambda_{s}(q_{j})$. The $s2$ band rapidity function
$\Lambda_{s2}(q_{j})$ that appears in Eqs. (91) and (92) is the real part of
the following two Bethe ansatz complex rapidities associated with a spin
$n$-string of length $n=2$,
$\Lambda_{s2}(q_{j})\pm i\,u\,.$ (93)
The rapidity function $k(q_{j})$ that appears in the above equations is
associated with the $c$ band that in the present subspaces is full with a
constant occupancy of $N$ $c$ particles and thus is not dynamically active.
That function is defined by the following equation,
$\displaystyle k(q_{j})=q_{j}-{2\over
L}\sum_{j^{\prime}=1}^{N_{\uparrow}}\,N_{s}(q_{j^{\prime}})\arctan\left({\sin
k(q_{j})-\Lambda_{s}(q_{j^{\prime}})\over u}\right)$ $\displaystyle-{2\over
L}\sum_{j^{\prime}=1}^{N_{\uparrow}-N_{\downarrow}+N_{s2}}\,N_{s2}(q_{j^{\prime}})\arctan\left({\sin
k(q_{j})-\Lambda_{s2}(q_{j^{\prime}})\over 2u}\right)$ $\displaystyle{\rm
where}\hskip 14.22636ptj=1,...,N\,.$ (94)
In the above equations,
$q_{j}={2\pi\over L}\,I^{\beta}_{j}\hskip 5.69046pt{\rm for}\hskip
5.69046pt\beta=c,s,s2\,,$ (95)
where the quantum numbers $I^{\beta}_{j}$ are either integers or half-odd
integers according to the following boundary conditions Takahashi ,
$\displaystyle I_{j}^{c}$ $\displaystyle=$ $\displaystyle 0,\pm 1,\pm
2,...\hskip 14.22636pt{\rm for}\hskip 4.26773ptN_{s}+N_{s2}\hskip
4.26773pt{\rm even}$ $\displaystyle=$ $\displaystyle\pm 1/2,\pm 3/2,\pm
5/2,...\hskip 14.22636pt{\rm for}\hskip 4.26773ptN_{s}+N_{s2}\hskip
4.26773pt{\rm odd}$ $\displaystyle I_{j}^{s}$ $\displaystyle=$ $\displaystyle
0,\pm 1,\pm 2,...\hskip 14.22636pt{\rm for}\hskip 4.26773ptN_{\uparrow}\hskip
4.26773pt{\rm odd}$ $\displaystyle=$ $\displaystyle\pm 1/2,\pm 3/2,\pm
5/2,...\hskip 14.22636pt{\rm for}\hskip 4.26773ptN_{\uparrow}\hskip
4.26773pt{\rm even}$ $\displaystyle I_{j}^{s2}$ $\displaystyle=$
$\displaystyle 0,\pm 1,\pm 2,...\hskip 14.22636pt{\rm for}\hskip
4.26773ptN_{s2}=1\,.$ (96)
In the thermodynamic limit, we often use continuous momentum variables $q$
that replace the discrete $s$ and $s2$ bands momenta $q_{j}$ such that
$q_{j+1}-q_{j}=2\pi/L$. They read $q\in[-k_{F\uparrow},k_{F\uparrow}]$ and
$q\in[-(k_{F\uparrow}-k_{F\downarrow}),(k_{F\uparrow}-k_{F\downarrow})]$,
respectively. In that limit the momenta $k_{F\downarrow}$ and $k_{F\uparrow}$
are given by,
$k_{F\downarrow}={\pi\over 2}(1-m)\,;\hskip 5.69046ptk_{F\uparrow}={\pi\over
2}(1+m)\,;\hskip 5.69046ptk_{F}={\pi\over 2}\,,$ (97)
for the spin-density interval, $m\in]0,1[$ where $k_{F}=\lim_{m\rightarrow
0}k_{F\downarrow}=\lim_{m\rightarrow 0}k_{F\uparrow}$.
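For a finite system these definitions can be enumerated explicitly. The sketch below (the values $L=12$, $N_{\uparrow}=8$, $N_{\downarrow}=4$ are hypothetical, corresponding to one electron per site and $m=1/3$) builds the $s$ band momenta $q_{j}$ from Eqs. (95)-(96) and compares the band edge with $k_{F\uparrow}$ of Eq. (97), making the ignored $1/L$ corrections visible.

```python
# s-band momenta q_j = (2 pi/L) I_j^s, Eqs. (95)-(96), for a small
# system, compared with the thermodynamic Fermi momenta of Eq. (97).
# Assumptions: L = 12, N_up = 8, N_dn = 4 are hypothetical values,
# with one electron per site (N = L).
import numpy as np

L, N_up, N_dn = 12, 8, 4
m = (N_up - N_dn) / L

# Eq. (96): the I_j^s are integers for N_up odd and half-odd integers
# for N_up even; the N_up available values are symmetric about zero.
I_s = np.arange(N_up) - (N_up - 1) / 2
q_s = 2 * np.pi / L * I_s                                  # Eq. (95)

kF_dn, kF_up = np.pi / 2 * (1 - m), np.pi / 2 * (1 + m)    # Eq. (97)
print("q_j / pi:", np.round(q_s / np.pi, 4))
print(f"k_Fup = {kF_up:.4f}; band edge max q_j = {q_s.max():.4f} "
      f"= k_Fup - pi/L = {kF_up - np.pi / L:.4f}")
```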
The energy dispersions $\varepsilon_{s}(q)$ and $\varepsilon_{s2}(q)$ that
appear in the spectra of the spin excitations are defined as follows,
$\displaystyle\varepsilon_{s}(q)$ $\displaystyle=$
$\displaystyle{\bar{\varepsilon}_{s}}(\Lambda_{s}(q))\hskip 5.69046pt{\rm
for}\hskip 5.69046ptq\in[-k_{F\uparrow},k_{F\uparrow}]\hskip 5.69046pt{\rm
where}$ $\displaystyle{\bar{\varepsilon}_{s}}(\Lambda)$ $\displaystyle=$
$\displaystyle\int_{B}^{\Lambda}d\Lambda^{\prime}\,2t\,\eta_{s}(\Lambda^{\prime})\,,$
(98)
and
$\displaystyle\varepsilon_{s2}(q)$ $\displaystyle=$ $\displaystyle
4\mu_{B}\,h+\varepsilon_{s2}^{0}(q)\hskip 5.69046pt{\rm for}$ $\displaystyle
q$ $\displaystyle\in$
$\displaystyle[-(k_{F\uparrow}-k_{F\downarrow}),(k_{F\uparrow}-k_{F\downarrow})]\hskip
5.69046pt{\rm where}$ $\displaystyle\varepsilon_{s2}^{0}(q)$ $\displaystyle=$
$\displaystyle{\bar{\varepsilon}}_{s2}^{0}(\Lambda_{s2}(q))\hskip
5.69046pt{\rm and}$ $\displaystyle{\bar{\varepsilon}}_{s2}^{0}(\Lambda)$
$\displaystyle=$
$\displaystyle\int_{\infty}^{\Lambda}d\Lambda^{\prime}\,2t\,\eta_{s2}(\Lambda^{\prime})\,,$
(99)
respectively.
The corresponding $s$ and $s2$ bands group velocities are given by,
$v_{s}(q)={\partial\varepsilon_{s}(q)\over\partial q}\hskip 5.69046pt{\rm
and}\hskip 5.69046ptv_{s2}(q)={\partial\varepsilon_{s2}(q)\over\partial q}\,.$
(100)
The distribution $2t\,\eta_{s}(\Lambda)$ appearing in Eq. (98) is coupled to a
distribution $2t\,\eta_{c}(k)$ through the following integral equations,
$2t\,\eta_{c}(k)=2t\sin k+\frac{\cos
k}{\pi\,u}\int_{-B}^{B}d\Lambda\,{2t\,\eta_{s}(\Lambda)\over 1+\left({\sin
k-\Lambda\over u}\right)^{2}}\,,$ (101)
and
$\displaystyle 2t\,\eta_{s}(\Lambda)$ $\displaystyle=$
$\displaystyle{1\over\pi\,u}\int_{-\pi}^{\pi}dk\,{2t\,\eta_{c}(k)\over
1+\left({\Lambda-\sin k\over u}\right)^{2}}$ (102) $\displaystyle-$
$\displaystyle\frac{1}{2\pi\,u}\int_{-B}^{B}d\Lambda^{\prime}\,{2t\,\eta_{s}(\Lambda^{\prime})\over
1+\left({\Lambda-\Lambda^{\prime}\over 2u}\right)^{2}}\,.$
The distribution $2t\,\eta_{s2}(\Lambda)$ appearing in Eq. (99) is given by,
$\displaystyle 2t\,\eta_{s2}(\Lambda)$ $\displaystyle=$ $\displaystyle{1\over
2\pi\,u}\int_{-\pi}^{\pi}dk\,{2t\,\eta_{c}(k)\over 1+\left({\Lambda-\sin
k\over 2u}\right)^{2}}$ (103) $\displaystyle-$
$\displaystyle\frac{1}{\pi\,u}\int_{-B}^{B}d\Lambda^{\prime}\,{2t\,\eta_{s}(\Lambda^{\prime})\over
1+\left({\Lambda-\Lambda^{\prime}\over u}\right)^{2}}$ $\displaystyle-$
$\displaystyle\frac{1}{3\pi\,u}\int_{-B}^{B}d\Lambda^{\prime}\,{2t\,\eta_{s}(\Lambda^{\prime})\over
1+\left({\Lambda-\Lambda^{\prime}\over 3u}\right)^{2}}\,,$
where the distributions $2t\,\eta_{c}(k)$ and $2t\,\eta_{s}(\Lambda)$ are the
solutions of Eqs. (101) and (102).
The rapidity distribution function $\Lambda_{s}(q)$ where
$q\in[-k_{F\uparrow},k_{F\uparrow}]$ in the argument of the auxiliary
dispersion ${\bar{\varepsilon}_{s}}$ in Eq. (98) is defined in terms of the
$s$ band inverse function $q=q_{s}(\Lambda)$ where
$\Lambda\in[-\infty,\infty]$. The latter is defined by the equation,
$\displaystyle q=q_{s}(\Lambda)$ $\displaystyle=$
$\displaystyle{1\over\pi}\int_{-\pi}^{\pi}dk\,2\pi\rho(k)\,\arctan\left({\Lambda-\sin
k\over u}\right)$ (104) $\displaystyle-$
$\displaystyle\frac{1}{\pi}\int_{-B}^{B}d\Lambda^{\prime}\,2\pi\sigma(\Lambda^{\prime})\,\arctan\left({\Lambda-\Lambda^{\prime}\over
2u}\right)$ $\displaystyle{\rm for}\hskip
5.69046pt\Lambda\in[-\infty,\infty]\,.$
The rapidity distribution function $\Lambda_{s2}(q)$ where
$q\in[-(k_{F\uparrow}-k_{F\downarrow}),(k_{F\uparrow}-k_{F\downarrow})]$ is
also defined in terms of the $s2$ band inverse function $q=q_{s2}(\Lambda)$
where $\Lambda\in[-\infty,\infty]$ as follows,
$\displaystyle q=q_{s2}(\Lambda)$ $\displaystyle=$
$\displaystyle{1\over\pi}\int_{-\pi}^{\pi}dk\,2\pi\rho(k)\,\arctan\left({\Lambda-\sin
k\over 2u}\right)$ (105) $\displaystyle-$
$\displaystyle\frac{1}{\pi}\int_{-B}^{B}d\Lambda^{\prime}\,2\pi\sigma(\Lambda^{\prime})\arctan\left({\Lambda-\Lambda^{\prime}\over
u}\right)$ $\displaystyle-$
$\displaystyle\frac{1}{\pi}\int_{-B}^{B}d\Lambda^{\prime}\,2\pi\sigma(\Lambda^{\prime})\arctan\left({\Lambda-\Lambda^{\prime}\over
3u}\right)$ $\displaystyle{\rm for}\hskip
5.69046pt\Lambda\in[-\infty,\infty]\,.$
Here the distributions $2\pi\rho(k)$ and $2\pi\sigma(\Lambda)$ are the
solution of the following coupled integral equations,
$2\pi\rho(k)=1+\frac{\cos
k}{\pi\,u}\int_{-B}^{B}d\Lambda\,{2\pi\sigma(\Lambda)\over 1+\left({\sin
k-\Lambda\over u}\right)^{2}}\,,$ (106)
and
$\displaystyle 2\pi\sigma(\Lambda)$ $\displaystyle=$
$\displaystyle{1\over\pi\,u}\int_{-\pi}^{\pi}dk\,{2\pi\rho(k)\over
1+\left({\Lambda-\sin k\over u}\right)^{2}}$ (107) $\displaystyle-$
$\displaystyle\frac{1}{2\pi\,u}\int_{-B}^{B}d\Lambda^{\prime}\,{2\pi\sigma(\Lambda^{\prime})\over
1+\left({\Lambda-\Lambda^{\prime}\over 2u}\right)^{2}}\,.$
Such distributions obey the sum rules,
${1\over\pi}\int_{-\pi}^{\pi}dk\,2\pi\rho(k)=2\hskip 5.69046pt{\rm and}\hskip
5.69046pt\frac{1}{\pi}\int_{-B}^{B}d\Lambda\,2\pi\sigma(\Lambda)=(1-m)\,.$
(108)
The parameter $B=\Lambda_{s}(k_{F\downarrow})$ appearing in the above
equations has the limiting behaviors,
$\displaystyle B$ $\displaystyle=$
$\displaystyle\Lambda_{s}(k_{F\downarrow})\hskip 5.69046pt{\rm with}$
$\displaystyle\lim_{m\rightarrow 0}B$ $\displaystyle=$
$\displaystyle\infty\hskip 5.69046pt{\rm and}\hskip
5.69046pt\lim_{m\rightarrow 1}B=0\,.$ (109)
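Equations (106)-(107) are linear Fredholm equations and can be solved on a grid by simple iteration. The sketch below is a minimal solver, assuming the hypothetical values $u=1$ and $B=1.5$; the sum rules, Eq. (108), then serve as a consistency check and return the spin density $m$ associated with the chosen $B$ (in practice $B$ would be tuned to reach a target $m$).

```python
# Minimal fixed-point solver for the coupled integral equations
# (106)-(107), checked against the sum rules of Eq. (108).
# Assumptions: u = 1.0 and B = 1.5 are hypothetical; uniform grids
# and rectangle-rule quadrature are used for simplicity.
import numpy as np

u, B, Nk, NL = 1.0, 1.5, 401, 401
k = np.linspace(-np.pi, np.pi, Nk)
lam = np.linspace(-B, B, NL)
dk, dl = k[1] - k[0], lam[1] - lam[0]

# The kernels are fixed; only the unknowns are iterated.
K_ks = 1 / (1 + ((np.sin(k)[:, None] - lam[None, :]) / u) ** 2)
K_sk = 1 / (1 + ((lam[:, None] - np.sin(k)[None, :]) / u) ** 2)
K_ss = 1 / (1 + ((lam[:, None] - lam[None, :]) / (2 * u)) ** 2)

rho = np.ones(Nk)          # 2 pi rho(k)
sig = np.ones(NL)          # 2 pi sigma(Lambda)
for _ in range(300):
    rho = 1 + np.cos(k) / (np.pi * u) * (K_ks @ sig) * dl
    sig = (K_sk @ rho) * dk / (np.pi * u) - (K_ss @ sig) * dl / (2 * np.pi * u)

print("(1/pi) int 2pi rho dk =", rho.sum() * dk / np.pi, "(should be 2)")
print("(1/pi) int 2pi sigma dLambda =", sig.sum() * dl / np.pi, "(equals 1 - m)")
```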
Other $\Lambda_{s}(q)$ and $\Lambda_{s2}(q)$ values are,
$\displaystyle\Lambda_{s}(0)$ $\displaystyle=$ $\displaystyle 0\hskip
5.69046pt{\rm and}\hskip 5.69046pt\Lambda_{s}(\pm k_{F\uparrow})=\pm\infty$
$\displaystyle\Lambda_{s2}(0)$ $\displaystyle=$ $\displaystyle 0\hskip
5.69046pt{\rm and}\hskip
5.69046pt\Lambda_{s2}(\pm(k_{F\uparrow}-k_{F\downarrow}))=\pm\infty\,.$ (110)
The $s$ band dispersion,
$\displaystyle\varepsilon_{s}^{0}(q)$ $\displaystyle=$
$\displaystyle{\bar{\varepsilon}_{s}}^{0}(\Lambda_{s}(q))\hskip 5.69046pt{\rm
where}$ $\displaystyle{\bar{\varepsilon}_{s}}^{0}(\Lambda)$ $\displaystyle=$
$\displaystyle\int_{\infty}^{\Lambda}d\Lambda^{\prime}\,2t\,\eta_{s}(\Lambda^{\prime})\,,$
(111)
whose zero-energy level is, for $0<m<1$, shifted relative to that of $\varepsilon_{s}(q)$, defines the spin density curve, as given in Eq. (3).
In the $m\rightarrow 0$ limit, the $s2$ band, which is empty in the ground state, reduces to $q=0$ with $\varepsilon_{s2}(0)=0$ when $N_{s2}=1$. In the same limit, the $s$ band energy dispersion can be written as,
$\displaystyle\varepsilon_{s}(q)$ $\displaystyle=$
$\displaystyle{\bar{\varepsilon}_{s}}(\Lambda_{s}(q))\hskip 5.69046pt{\rm
for}\hskip 5.69046ptq\in\left[-{\pi\over 2},{\pi\over 2}\right]\hskip
5.69046pt{\rm where}$ $\displaystyle{\bar{\varepsilon}_{s}}(\Lambda)$
$\displaystyle=$
$\displaystyle-2t\int_{0}^{\infty}d\omega\,{\cos(\omega\,\Lambda)\over\omega\cosh(\omega\,u)}\,J_{1}(\omega)\,,$
(112)
and the rapidity function $\Lambda_{s}(q)$ is defined in terms of its inverse
function $q=q_{s}(\Lambda)$ where $\Lambda\in[-\infty,\infty]$ as,
$q=q_{s}(\Lambda)=\int_{0}^{\infty}d\omega\,{\sin(\omega\,\Lambda)\over\omega\cosh(\omega\,u)}\,J_{0}(\omega)\,.$
(113)
In these equations $J_{0}(\omega)$ and $J_{1}(\omega)$ are Bessel functions.
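These closed-form integrals are straightforward to evaluate numerically. The sketch below (with the hypothetical choice $t=u=1$) computes the pair $(q_{s}(\Lambda),\varepsilon_{s}(\Lambda))$ for a few rapidities and verifies that $q_{s}\rightarrow\pi/2$ with $\varepsilon_{s}\rightarrow 0$ as $\Lambda$ grows, consistent with $\varepsilon_{s}(\pm k_{F\downarrow})=0$ of Eq. (114) at $k_{F\downarrow}\rightarrow\pi/2$.

```python
# m -> 0 s-band dispersion, Eqs. (112)-(113), parametrized by the
# rapidity Lambda.  Assumptions: t = u = 1 are hypothetical; the omega
# integrals are cut off at W = 40, where 1/cosh(omega u) is negligible.
import numpy as np
from scipy.integrate import quad
from scipy.special import j0, j1

t, u, W = 1.0, 1.0, 40.0

def q_s(lam):                                     # Eq. (113)
    f = lambda w: np.sin(w * lam) * j0(w) / (w * np.cosh(w * u))
    return quad(f, 0, W, limit=500)[0]

def eps_s(lam):                                   # Eq. (112)
    f = lambda w: np.cos(w * lam) * j1(w) / (w * np.cosh(w * u))
    return -2 * t * quad(f, 0, W, limit=500)[0]

for lam in (0.0, 1.0, 3.0, 10.0):
    print(f"Lambda = {lam:5.1f}: q/pi = {q_s(lam) / np.pi:+.4f}, "
          f"eps_s = {eps_s(lam):+.5f}")
```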
The $s$ and $s2$ band energy dispersions $\varepsilon_{s}(q)$ and
$\varepsilon_{s2}(q)$, Eqs. (98) and (99), respectively, have limiting values,
$\displaystyle\varepsilon_{s}(0)=-W_{s}^{p}$ $\displaystyle\varepsilon_{s}(\pm
k_{F\downarrow})=0$ $\displaystyle\varepsilon_{s}(\pm
k_{F\uparrow})=W_{s}^{h}=2\mu_{B}\,h$
$\displaystyle\varepsilon_{s2}(0)=4\mu_{B}\,h-W_{s2}$
$\displaystyle\varepsilon_{s2}(\pm(k_{F\uparrow}-k_{F\downarrow}))=4\mu_{B}\,h\,,$
(114)
where,
$\displaystyle\lim_{u\rightarrow 0}W_{s}^{p}$ $\displaystyle=$ $\displaystyle
2t\left(1-\sin\left({\pi\over 2}\,m\right)\right)$
$\displaystyle\lim_{u\rightarrow 0}W_{s}^{h}$ $\displaystyle=$
$\displaystyle\lim_{u\rightarrow 0}2\mu_{B}\,h=4t\sin\left({\pi\over
2}\,m\right)$ $\displaystyle\lim_{u\rightarrow 0}W_{s}$ $\displaystyle=$
$\displaystyle W_{s}^{p}+W_{s}^{h}=2t\left(1+\sin\left({\pi\over
2}\,m\right)\right)$ $\displaystyle\lim_{u\rightarrow\infty}W_{s}$
$\displaystyle=$ $\displaystyle W_{s}^{p}+W_{s}^{h}=0$
$\displaystyle\lim_{u\rightarrow 0}W_{s2}$ $\displaystyle=$ $\displaystyle
4t\sin\left({\pi\over 2}\,m\right)$
$\displaystyle\lim_{u\rightarrow\infty}W_{s2}$ $\displaystyle=$ $\displaystyle
0\,,$ (115)
for spin densities $m\in]0,1[$ and,
$\displaystyle\lim_{m\rightarrow 1}W_{s}$ $\displaystyle=$ $\displaystyle
W_{s}^{h}=2\mu_{B}\,h_{c}=\sqrt{(4t)^{2}+U^{2}}-U$
$\displaystyle\lim_{m\rightarrow 1}W_{s2}$ $\displaystyle=$
$\displaystyle\sqrt{(4t)^{2}+(2U)^{2}}-2U\,,$ (116)
for all $u>0$ values.
In the $u\rightarrow 0$ limit, the $s$ band energy dispersions have for spin
densities $m\in]0,1[$ the following expressions,
$\displaystyle\varepsilon_{s}(q)$ $\displaystyle=$
$\displaystyle\varepsilon_{s}^{0}(q)-\varepsilon_{s}^{0}(k_{F\downarrow})=-2t\left(\cos
q-\cos k_{F\downarrow}\right)$ $\displaystyle=$ $\displaystyle
2t\sin\left({\pi\over 2}\,m\right)-2t\cos q$
$\displaystyle\varepsilon_{s}^{0}(q)$ $\displaystyle=$
$\displaystyle-2t\left(\cos q-\cos k_{F\uparrow}\right)$ (117)
$\displaystyle=$ $\displaystyle-2t\sin\left({\pi\over 2}\,m\right)-2t\cos q$
$\displaystyle{\rm for}\hskip 5.69046ptq\in[-k_{F\uparrow},k_{F\uparrow}]\,.$
The $s2$ band energy dispersions have for $u\rightarrow 0$ and spin densities
$0<m<1$ the following expressions,
$\displaystyle\varepsilon_{s2}(q)$ $\displaystyle=$ $\displaystyle
4\mu_{B}\,h-2t\left(\cos(|q|+k_{F\downarrow})-\cos k_{F\uparrow}\right)$
$\displaystyle=$ $\displaystyle 8t\sin\left({\pi\over
2}\,m\right)-2t\left(\cos(|q|+k_{F\downarrow})+\sin\left({\pi\over
2}\,m\right)\right)$ $\displaystyle=$ $\displaystyle 6t\sin\left({\pi\over
2}\,m\right)-2t\cos(|q|+k_{F\downarrow})$
$\displaystyle\varepsilon_{s2}^{0}(q)$ $\displaystyle=$
$\displaystyle-2t\left(\cos(|q|+k_{F\downarrow})-\cos k_{F\uparrow}\right)$
$\displaystyle=$ $\displaystyle-2t\sin\left({\pi\over
2}\,m\right)-2t\cos(|q|+k_{F\downarrow})$ $\displaystyle{\rm for}$
$\displaystyle
q\in[-(k_{F\uparrow}-k_{F\downarrow}),(k_{F\uparrow}-k_{F\downarrow})]\,.$
(118)
In the $u\rightarrow 0$ limit, the corresponding group velocities, Eq. (100),
read,
$\displaystyle v_{s}(q)$ $\displaystyle=$ $\displaystyle 2t\sin q\hskip
5.69046pt{\rm for}\hskip 5.69046ptq\in[-k_{F\uparrow},k_{F\uparrow}]$
$\displaystyle v_{s2}(q)$ $\displaystyle=$ $\displaystyle{\rm
sgn}\\{q\\}\,2t\sin(|q|+k_{F\downarrow})\hskip 5.69046pt{\rm for}$ (119)
$\displaystyle
q\in[-(k_{F\uparrow}-k_{F\downarrow}),(k_{F\uparrow}-k_{F\downarrow})]\,,$
respectively, so that,
$v_{s}(k_{F\downarrow})=v_{s2}(k_{F\uparrow}-k_{F\downarrow})=2t\cos\left({\pi\over
2}m\right)\,.$ (120)
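The equality (120) amounts to a one-line arithmetic check: at the respective Fermi momenta of Eq. (97), both velocities reduce to $2t\cos(\pi m/2)$. A minimal sketch (the values $t=1$ and $m=0.4$ are hypothetical):

```python
# u -> 0 group velocities, Eq. (119), at the Fermi momenta of Eq. (97);
# both equal 2 t cos(pi m/2), Eq. (120).  Assumptions: t = 1 and
# m = 0.4 are hypothetical choices.
import numpy as np

t, m = 1.0, 0.4
kF_dn, kF_up = np.pi / 2 * (1 - m), np.pi / 2 * (1 + m)
v_s = 2 * t * np.sin(kF_dn)                       # v_s(k_Fdown)
v_s2 = 2 * t * np.sin((kF_up - kF_dn) + kF_dn)    # v_s2(k_Fup - k_Fdown)
print(v_s, v_s2, 2 * t * np.cos(np.pi * m / 2))   # three equal numbers
```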
In the $m\rightarrow 1$ spin density limit, the $s$ band energy dispersions
are for all $u>0$ values given by the following integrals,
$\displaystyle\varepsilon_{s}(q)$ $\displaystyle=$
$\displaystyle-{2t\over\pi}\int_{-\pi}^{\pi}dk\sin k\arctan\left({\sin
k-\Lambda_{s}(q)\over u}\right)$ $\displaystyle+$
$\displaystyle\sqrt{(4t)^{2}+U^{2}}-U$ $\displaystyle\varepsilon_{s}^{0}(q)$
$\displaystyle=$ $\displaystyle-{2t\over\pi}\int_{-\pi}^{\pi}dk\sin
k\arctan\left({\sin k-\Lambda_{s}(q)\over u}\right)$ $\displaystyle{\rm for}$
$\displaystyle q\in[-\pi,\pi]\,,$ (121)
where the rapidity function $\Lambda_{s}(q)$ is defined by its inverse
function as,
$q=q_{s}(\Lambda)={1\over\pi}\int_{-\pi}^{\pi}dk\arctan\left({\Lambda-\sin
k\over u}\right)\,.$ (122)
In the same $m\rightarrow 1$ limit, the $s2$ band energy dispersions are for
all $u>0$ values given by the integrals,
$\displaystyle\varepsilon_{s2}(q)$ $\displaystyle=$
$\displaystyle-{2t\over\pi}\int_{-\pi}^{\pi}dk\sin k\arctan\left({\sin
k-\Lambda_{s}(q)\over 2u}\right)$ $\displaystyle+$
$\displaystyle\sqrt{(8t)^{2}+(2U)^{2}}-2U$
$\displaystyle\varepsilon_{s2}^{0}(q)$ $\displaystyle=$
$\displaystyle-{2t\over\pi}\int_{-\pi}^{\pi}dk\sin k\arctan\left({\sin
k-\Lambda_{s2}(q)\over 2u}\right)$ $\displaystyle{\rm for}$ $\displaystyle
q\in[-\pi,\pi]\,,$ (123)
where the rapidity function $\Lambda_{s2}(q)$ is again defined by its inverse
function as,
$q=q_{s2}(\Lambda)={1\over\pi}\int_{-\pi}^{\pi}dk\arctan\left({\Lambda-\sin
k\over 2u}\right)\,.$ (124)
For $u\gg 1$, one can derive analytical expressions for the $s$ and $s2$ band
energy dispersions and the corresponding group velocities, Eq. (100), for spin
densities $m$ in the limits $m\rightarrow 0$ and $(1-m)\ll 1$. For $u\gg 1$
and in the $m\rightarrow 0$ limit, the behaviors of the $s$ band energy
dispersions and group velocity are,
$\displaystyle\varepsilon_{s}(q)$ $\displaystyle=$ $\displaystyle-{\pi\,t\over
2u}\cos q\hskip 5.69046pt{\rm and}\hskip
5.69046pt\varepsilon_{s}^{0}(q)=\varepsilon_{s}(q)$ $\displaystyle v_{s}(q)$
$\displaystyle=$ $\displaystyle{\pi\,t\over 2u}\sin q$ (125)
$\displaystyle{\rm for}\hskip 5.69046ptq\in[-\pi/2,\pi/2]\hskip 5.69046pt{\rm
and}\hskip 5.69046ptm\rightarrow 0\,.$
For $u\gg 1$ and $(1-m)\ll 1$, the $s$ band energy dispersions and group
velocity, Eq. (100), behave as,
$\displaystyle\varepsilon_{s}(q)$ $\displaystyle=$ $\displaystyle-{t\over
u}\,(\cos q-1)$ $\displaystyle+{t\over u}\,(1-m)\sin q\,\arctan\left({1\over
2}\tan\left({q\over 2}\right)\right)$ $\displaystyle\varepsilon_{s}^{0}(q)$
$\displaystyle=$ $\displaystyle-{2t\over u}+\varepsilon_{s}(q)$
$\displaystyle=$ $\displaystyle-{t\over u}\,(\cos q+1)$ $\displaystyle+{t\over
u}\,(1-m)\sin q\,\arctan\left({1\over 2}\tan\left({q\over 2}\right)\right)$
$\displaystyle v_{s}(q)$ $\displaystyle=$ $\displaystyle{t\over u}\sin
q+{t\over u}\,(1-m){\sin q\over 1+3\cos^{2}\left({q\over 2}\right)}$ (126)
$\displaystyle+{t\over u}\,(1-m)\cos q\,\arctan\left({1\over
2}\tan\left({q\over 2}\right)\right)$ $\displaystyle{\rm for}\hskip
5.69046ptq\in\left[-{\pi\over 2}(1+m),{\pi\over 2}(1+m)\right]$
$\displaystyle{\rm and}\hskip 5.69046pt(1-m)\ll 1\,.$
For $u\gg 1$ and in the $m\rightarrow 0$ limit, the $s2$ band energy
dispersion and group velocity vanish, consistent with the momentum and energy
widths of the $s2$ band vanishing. For $u\gg 1$ and $(1-m)\ll 1$, they behave
as,
$\displaystyle\varepsilon_{s2}(q)$ $\displaystyle=$ $\displaystyle{4t\over
u}-{t\over 2u}\,(1+\cos q)$ $\displaystyle+{t\over 2u}\,(1-m)\sin
q\\{\arctan\left(2\tan\left({q\over 2}\right)\right)$
$\displaystyle+\arctan\left({2\over 3}\tan\left({q\over 2}\right)\right)\\}$
$\displaystyle\varepsilon_{s2}^{0}(q)$ $\displaystyle=$
$\displaystyle\varepsilon_{s2}(q)-{4t\over u}$ $\displaystyle v_{s2}(q)$
$\displaystyle=$ $\displaystyle{t\over 2u}\sin q+{t\over 2u}\,(1-m)\sin
q\\{{1\over 1+3\sin^{2}\left({q\over 2}\right)}$ (127) $\displaystyle+{3\over
4+5\cos^{2}\left({q\over 2}\right)}\\}$ $\displaystyle+{t\over 2u}\,(1-m)\cos
q\,\\{\arctan\left(2\tan\left({q\over 2}\right)\right)$
$\displaystyle+\arctan\left({2\over 3}\tan\left({q\over
2}\right)\right)\\}\hskip 5.69046pt{\rm for}$ $\displaystyle q\in[-\pi m,\pi
m]\hskip 5.69046pt{\rm and}\hskip 5.69046pt(1-m)\ll 1\,.$
For $u\gg 1$ and $(1-m)\ll 1$, the following equality holds,
$v_{s}(k_{F\downarrow})=v_{s2}(k_{F\uparrow}-k_{F\downarrow})={\pi t\over
2u}(1-m)\,.$ (128)
The phase shifts play an important role in the spin dynamical properties. They
are given by,
$\displaystyle 2\pi\,\Phi_{s,\beta}(q,q^{\prime})$ $\displaystyle=$
$\displaystyle 2\pi\,\bar{\Phi}_{s,\beta}\left(r,r^{\prime}\right)$
$\displaystyle{\rm where}\hskip 5.69046ptr={\Lambda_{s}(q)\over u}$
$\displaystyle{\rm and}\hskip
5.69046ptr^{\prime}={\Lambda_{\beta}(q^{\prime})\over u}\,.$ (129)
In the case of the excited energy eigenstates involved in the studies of this
paper, $\beta=s,s2$. The rapidity phase shifts
$2\pi\bar{\Phi}_{s,\beta}\left(r,r^{\prime}\right)$ on the right-hand side of
the above equality are functions of the rapidity-related variables
$r=\Lambda/u$ of the $s$ and $s2$ branches. They are defined by the following
integral equations,
$\displaystyle\bar{\Phi}_{s,s}\left(r,r^{\prime}\right)$ $\displaystyle=$
$\displaystyle{1\over\pi}\arctan\left({r-r^{\prime}\over 2}\right)$ (130)
$\displaystyle+$
$\displaystyle\int_{-B/u}^{B/u}dr^{\prime\prime}\,G(r,r^{\prime\prime})\,{\bar{\Phi}}_{s,s}(r^{\prime\prime},r^{\prime})\,,$
and
$\displaystyle\bar{\Phi}_{s,s2}\left(r,r^{\prime}\right)$ $\displaystyle=$
$\displaystyle{1\over\pi}\arctan(r-r^{\prime})+{1\over\pi}\arctan\left({r-r^{\prime}\over
3}\right)$ (131) $\displaystyle+$
$\displaystyle\int_{-B/u}^{B/u}dr^{\prime\prime}\,G(r,r^{\prime\prime})\,{\bar{\Phi}}_{s,s2}(r^{\prime\prime},r^{\prime})\,.$
The kernel $G(r,r^{\prime})$ in Eqs. (130) and (131) is for $u>0$ given by,
$G(r,r^{\prime})=-{1\over{2\pi}}\left({1\over{1+((r-r^{\prime})/2)^{2}}}\right)\,.$
(132)
The phase shifts that appear in the expressions of the branch line exponents
read,
$\displaystyle\Phi_{s,s}\left(\iota k_{F\downarrow},q\right)$ $\displaystyle=$
$\displaystyle\bar{\Phi}_{s,s}\left(\iota{B\over u},{\Lambda_{s}(q)\over
u}\right)$ $\displaystyle\Phi_{s,s2}\left(\iota k_{F\downarrow},q\right)$
$\displaystyle=$ $\displaystyle\bar{\Phi}_{s,s2}\left(\iota{B\over
u},{\Lambda_{s2}(q)\over u}\right)$ (133) $\displaystyle{\rm where}\hskip
5.69046pt\iota=\pm 1\,.$
In the $m\rightarrow 0$ limit, the phase shift $\Phi_{s,s}(q,q^{\prime})$ in
units of $2\pi$ can be written as,
$\displaystyle\Phi_{s,s}(q,q^{\prime})$ $\displaystyle=$
$\displaystyle\bar{\Phi}_{s,s}\left(\Lambda_{s}(q),\Lambda_{s}(q^{\prime})\right)\hskip
5.69046pt{\rm where}$
$\displaystyle\bar{\Phi}_{s,s}\left(\Lambda,\Lambda^{\prime}\right)$
$\displaystyle=$
$\displaystyle{1\over\pi}\int_{0}^{\infty}d\omega\,{\sin(\omega\,(\Lambda-\Lambda^{\prime}))\over\omega\left(1+e^{2\omega
u}\right)}\,,$ (134)
and the rapidity function $\Lambda_{s}(q)$ is defined in terms of its inverse
function in Eq. (113). The integral in Eq. (134) can be solved for $u>0$, with
the result,
$\displaystyle\bar{\Phi}_{s,s}(\Lambda,\Lambda^{\prime})$ $\displaystyle=$
$\displaystyle{i\over 2\pi}\,\ln\left({\Gamma\left({1\over
2}+i{(\Lambda-\Lambda^{\prime})\over
4u}\right)\Gamma\left(1-i{(\Lambda-\Lambda^{\prime})\over
4u}\right)\over\Gamma\left({1\over 2}-i{(\Lambda-\Lambda^{\prime})\over
4u}\right)\Gamma\left(1+i{(\Lambda-\Lambda^{\prime})\over 4u}\right)}\right)$
(135) $\displaystyle{\rm for}\hskip 5.69046pt\Lambda\neq\iota\infty\hskip
5.69046pt{\rm where}\hskip 5.69046pt\iota=\pm 1$ $\displaystyle=$
$\displaystyle{\iota\over 2\sqrt{2}}\hskip 5.69046pt{\rm for}\hskip
5.69046pt\Lambda=\iota\infty\hskip 5.69046pt{\rm and}\hskip
5.69046pt\Lambda^{\prime}\neq\iota\infty$ $\displaystyle=$
$\displaystyle\iota\left({3\over 2\sqrt{2}}-1\right)\hskip 5.69046pt{\rm
for}\hskip 5.69046pt\Lambda=\Lambda^{\prime}=\iota\infty\,,$
where $\Gamma(x)$ is the usual gamma function.
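As a consistency check of Eq. (135) against the integral representation, Eq. (134), for finite rapidities, the sketch below evaluates both sides (assuming the hypothetical choice $u=1$; the complex log-gamma is provided by `scipy.special.loggamma`).

```python
# Closed form, Eq. (135), versus direct quadrature of Eq. (134) for
# finite Lambda - Lambda'.  Assumptions: u = 1 is hypothetical; the
# omega integral is cut off where 1/(1 + exp(2 omega u)) is negligible.
import numpy as np
from scipy.integrate import quad
from scipy.special import loggamma

u = 1.0

def phi_gamma(dlam):                              # Eq. (135)
    y = dlam / (4 * u)
    val = (loggamma(0.5 + 1j * y) + loggamma(1 - 1j * y)
           - loggamma(0.5 - 1j * y) - loggamma(1 + 1j * y))
    return (1j / (2 * np.pi) * val).real

def phi_integral(dlam):                           # Eq. (134)
    f = lambda w: np.sin(w * dlam) / (w * (1 + np.exp(2 * w * u)))
    return quad(f, 0, 30, limit=300)[0] / np.pi

for dlam in (0.3, 1.0, 4.0, 10.0):
    print(f"dLambda = {dlam:5.1f}: gamma form = {phi_gamma(dlam):+.6f}, "
          f"integral = {phi_integral(dlam):+.6f}")
```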
The use of Eq. (135) leads to the following expressions for the phase shift
$\Phi_{s,s}\left(\iota k_{F},q\right)=\lim_{m\rightarrow
0}\Phi_{s,s}\left(\iota k_{F\downarrow},q\right)$ in the $m\rightarrow 0$
limit for $u>0$,
$\displaystyle\lim_{m\rightarrow 0}\Phi_{s,s}\left(\iota
k_{F\downarrow},q\right)$ $\displaystyle=$
$\displaystyle\Phi_{s,s}\left(\iota\pi/2,q\right)$ $\displaystyle=$
$\displaystyle{\iota\over 2\sqrt{2}}\hskip 5.69046pt{\rm for}\hskip
5.69046ptq\neq\iota k_{F\downarrow}$ $\displaystyle=$
$\displaystyle\iota\left({3\over 2\sqrt{2}}-1\right)\hskip 5.69046pt{\rm
for}\hskip 5.69046ptq=\iota k_{F\downarrow}$ $\displaystyle{\rm for}$
$\displaystyle u>0\hskip 5.69046pt{\rm where}\hskip 5.69046pt\iota=\pm 1\,.$
(136)
In the $m\rightarrow 0$ limit and for $u>0$, the phase shift
$\Phi_{s,s2}\left(\iota k_{F},0\right)=\lim_{m\rightarrow
0}\Phi_{s,s2}\left(\iota k_{F\downarrow},q\right)$ has in units of $2\pi$ the
following value,
$\Phi_{s,s2}\left(\iota k_{F},0\right)={\iota\over\sqrt{2}}\,.$ (137)
For $u\gg 1$ and in the $m\rightarrow 1$ limit, the phase shifts
$\Phi_{s,s}\left(\iota k_{F\downarrow},q\right)$ and $\Phi_{s,s2}\left(\iota
k_{F\downarrow},q\right)$ behave as,
$\displaystyle\lim_{m\rightarrow 1}\Phi_{s,s}(\iota k_{F\downarrow},q)$
$\displaystyle=$ $\displaystyle\Phi_{s,s}(0,q)$ $\displaystyle=$
$\displaystyle-{1\over\pi}\arctan\left({1\over 2}\tan\left({q\over
2}\right)\right)$ $\displaystyle\lim_{m\rightarrow 1}\Phi_{s,s2}(\iota
k_{F\downarrow},q)$ $\displaystyle=$ $\displaystyle\Phi_{s,s2}(0,q)$ (138)
$\displaystyle=$ $\displaystyle-{1\over\pi}\arctan\left(2\tan\left({q\over
2}\right)\right)$ $\displaystyle-$
$\displaystyle{1\over\pi}\arctan\left({2\over 3}\tan\left({q\over
2}\right)\right)\,.$
The $s$ band Fermi-points phase-shift parameters $\xi^{j}_{s\,s}$ where
$j=0,1$ are given by,
$\xi^{j}_{s\,s}=1+\sum_{\iota=\pm
1}(\iota)^{j}\,\Phi_{s,s}\left(k_{F\downarrow},\iota
k_{F\downarrow}\right)\,.$ (139)
They play an important role in both the spectral and static properties. For
one electron per site, the equality $\xi^{0}_{s\,s}=1/\xi^{1}_{s\,s}$ holds,
so that only one of these two parameters is needed, for instance
$\xi^{1}_{s\,s}$, which is a diagonal entry of the 1D Hubbard model dressed
charge matrix Frahm ; Carmelo_93 .
From manipulations of the phase-shift integral equation, Eq. (130), one finds
that the latter parameter is given by,
$\xi_{s\,s}^{1}=\xi_{s\,s}^{1}(B/u)\,.$ (140)
The function $\xi_{s\,s}^{1}(r)$ on the right-hand side of this equation at
$r=B/u$ is the solution of the integral equation,
$\xi_{s\,s}^{1}(r)=1+\int_{-B/u}^{B/u}dr^{\prime}\,G(r,r^{\prime})\,\xi_{s\,s}^{1}(r^{\prime})\,.$
(141)
The kernel $G(r,r^{\prime})$ appearing here is given in Eq. (132).
For $u>0$, the parameter $\xi^{1}_{s\,s}$ continuously increases from
$\xi^{1}_{s\,s}=1/\sqrt{2}$ as $m\rightarrow 0$ to $\xi^{1}_{s\,s}=1$ for
$m\rightarrow 1$, so that its limiting values are,
$\lim_{m\rightarrow 0}\xi_{s\,s}^{1}={1\over\sqrt{2}}\qquad{\rm and}\qquad\lim_{m\rightarrow 1}\xi_{s\,s}^{1}=1\,.$ (142)
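As a quick consistency check, inserting the $m\rightarrow 0$ values of Eq. (136) into Eq. (139) reproduces the first of these limits:
$\lim_{m\rightarrow 0}\xi^{1}_{s\,s}=1+\left[\left({3\over 2\sqrt{2}}-1\right)-{1\over 2\sqrt{2}}\right]={1\over\sqrt{2}}\,.$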
The parameter $\xi^{1}_{s\,s}$ is also related to the phase shift
$\Phi_{s,s2}(k_{F\downarrow},q)$ in Eq. (133) as follows,
$\xi^{1}_{s\,s}=-\Phi_{s,s2}(\pm k_{F\downarrow},(k_{F\uparrow}-k_{F\downarrow}))=\Phi_{s,s2}(\pm k_{F\downarrow},-(k_{F\uparrow}-k_{F\downarrow}))\,.$ (143)
Finally the parameter $\xi_{s\,s2}^{0}$ that also appears in the momentum
dependent exponents is given by,
$\xi_{s\,s2}^{0}=2\Phi_{s,s2}(k_{F\downarrow},0)\,,$ (144)
where the phase shift $\Phi_{s,s2}(k_{F\downarrow},q)$ is defined in Eq.
(133). At $q=0$ it is such that $\Phi_{s,s2}(\iota
k_{F\downarrow},0)=\iota\,\Phi_{s,s2}(k_{F\downarrow},0)$. This justifies why
$\iota\,\xi_{s\,s2}^{0}=2\Phi_{s,s2}(\iota
k_{F\downarrow},0)=\iota\,2\Phi_{s,s2}(k_{F\downarrow},0)$ for $\iota=\pm 1$.
The parameter $\xi_{s\,s2}^{0}$ continuously decreases from $\xi_{s\,s2}^{0}=\sqrt{2}$ as $m\rightarrow 0$ to $\xi_{s\,s2}^{0}=0$ for $m\rightarrow 1$. Consistently, it follows from Eqs. (137) and (138) that,
$\lim_{m\rightarrow 0}\xi_{s\,s2}^{0}=\sqrt{2}\qquad{\rm and}\qquad\lim_{m\rightarrow 1}\xi_{s\,s2}^{0}=0\,.$ (145)
## References
* (1) Z. Wang, M. Schmidt, A. Loidl, J. Wu, H. Zou, W. Yang, C. Dong, Y. Kohama, K. Kindo, D. I. Gorbunov, S. Niesen, O. Breunig, J. Engelmayer, and T. Lorenz, Phys. Rev. Lett. 123, 067202 (2019).
* (2) A. K. Bera, J. Wu, W. Yang, R. Bewley, M. Boehm, J. Xu, M. Bartkowiak, O. Prokhnenko, B. Klemke, A. T. M. N. Islam, J. M. Law, Z. Wang, and B. Lake, Nature Phys. 16, 625 (2020).
* (3) Z. Wang, J. Wu, W. Yang, A. K. Bera, D. Kamenskyi, A. T. M. N. Islam, S. Xu, J. M. Law, B. Lake, C. Wu, and A. Loidl, Nature 554, 219 (2018).
* (4) M. Kohno, Phys. Rev. Lett. 102, 037203 (2009).
* (5) M. B. Stone, D. H. Reich, C. Broholm, K. Lefmann, C. Rischel, C. P. Landee, and M. M. Turnbull, Phys. Rev. Lett. 91, 037205 (2003).
* (6) I. U. Heilmann, G. Shirane, Y. Endoh, R. J. Birgeneau, and S. L. Holt, Phys. Rev. B 18, 3530 (1978).
* (7) E. H. Lieb and F. Y. Wu, Phys. Rev. Lett. 20, 1445 (1968).
* (8) E. H. Lieb and F. Y. Wu, Physica A 321, 1 (2003).
* (9) M. J. Martins and P. B. Ramos, Nucl. Phys. B 522, 413 (1998).
* (10) M. Takahashi, Prog. Theor. Phys. 47, 69 (1972).
* (11) J. M. P. Carmelo and P. D. Sacramento, Phys. Reports 749, 1 (2018).
* (12) J. M. P. Carmelo, S. Nemati, and T. Prosen, Nucl. Phys. B 930, 418 (2018).
* (13) J. M. P. Carmelo and T. Čadež, Nucl. Phys. B 914, 461 (2017).
* (14) H. Benthien and E. Jeckelmann, Phys. Rev. B 75, 205128 (2007).
* (15) M. J. Bhaseen, F. H. L. Essler, and A. Grage, Phys. Rev. B 71, 020405(R) (2005).
* (16) F. H. L. Essler and V. E. Korepin, Phys. Rev. B 59, 1734 (1999).
* (17) J. M. P. Carmelo and T. Čadež, Nucl. Phys. B 904, 39 (2016); Nucl. Phys. B 961, 115233 (2020), Corrigendum.
* (18) J. M. P. Carmelo, P. Horsch, D. K. Campbell, and A. H. Castro Neto, Phys. Rev. B 48, 4200(R) (1993).
* (19) J. M. P. Carmelo, K. Penc, and D. Bozi, Nucl. Phys. B 725, 421 (2005); 737, 351 (E) (2006).
* (20) J. M. P. Carmelo, D. Bozi, and K. Penc, J. Phys.: Cond. Matter 20, 415103 (2008).
* (21) A. Imambekov and L. I. Glazman, Science 323, 228 (2009).
* (22) A. Imambekov, T. L. Schmidt, and L. I. Glazman, Rev. Mod. Phys. 84, 1253 (2012).
* (23) J. M. P. Carmelo and P. D. Sacramento, Annals of Phys. 369, 102 (2016).
* (24) K. Penc, K. Hallberg, F. Mila, and H. Shiba, Phys. Rev. Lett. 77, 1390 (1996).
* (25) K. Penc, K. Hallberg, F. Mila, and H. Shiba, Phys. Rev. B 55, 15 475 (1997).
* (26) S. Sorella and A. Parola, Phys. Rev. Lett. 76, 4604 (1996).
* (27) S. Sorella and A. Parola, Phys. Rev. B 57, 6444 (1998).
* (28) G. Müller, H. Thomas, H. Beck, and J. C. Bonner, Phys. Rev. B 24, 1429 (1981).
* (29) J.-S. Caux and R. Hagemans, J. Stat. Mech., P12013 (2006).
* (30) J.-S. Caux and J. M. Maillet, Phys. Rev. Lett. 95, 077201 (2005).
* (31) D. Baeriswyl, J. Carmelo, and K. Maki, Synth. Met. 21, 271 (1987).
* (32) M. Raczkowski, F. F. Assaad, and L. Pollet, Phys. Rev. B 91, 045137 (2015).
* (33) J. Carmelo and D. Baeriswyl, Phys. Rev. B 37, 7541 (1988).
* (34) S. R. White and A. E. Feiguin, Phys. Rev. Lett. 93, 076401 (2004).
* (35) U. Schollwöck, Ann. Phys. 326, 96 (2011).
* (36) A. Moreno, A. Muramatsu, and J. M. P. Carmelo, Phys. Rev. B 87, 075101 (2013).
* (37) H. Frahm and V. E. Korepin, Phys. Rev. B 42, 10553 (1990).
* (38) J. M. P. Carmelo and A. H. Castro Neto, Phys. Rev. Lett. 70, 1904 (1993).
# Coulomb corrections to Fermi beta decay in nuclei
Naftali Auerbach, School of Physics and Astronomy, Tel Aviv University, Tel Aviv 69978, Israel <EMAIL_ADDRESS>
Bui Minh Loc (present address: Center for Exotic Nuclear Studies, Institute for Basic Science (IBS), Daejeon 34126, Korea), Division of Nuclear Physics, Advanced Institute of Materials Science, Ton Duc Thang University, Ho Chi Minh City, Vietnam; Faculty of Applied Sciences, Ton Duc Thang University, Ho Chi Minh City, Vietnam <EMAIL_ADDRESS>
###### Abstract
We study the influence of the Coulomb force on Fermi beta decays in nuclei. This work is composed of two main parts. In the first part, we
calculate the Coulomb corrections to super-allowed beta decay. We use the
notion of the isovector monopole state and the self-consistent charge-exchange
Random Phase Approximation to compute the correction. In the second part of
this work, we examine the influence of the anti-analog state on isospin mixing
in the isobaric analog state and the correction to the beta-decay Fermi
transition.
###### keywords:
super-allowed beta decay, isovector monopole state, anti-analog state, isospin mixing
Journal: Nuclear Physics A
## 1 Introduction
In a number of studies, attempts have been made to determine the corrections one has to introduce in the evaluation of beta-decay matrix elements. In particular, super-allowed transitions in $T=1,T_{z}=+1$ (or $T_{z}=-1$) nuclei [1, 2, 3, 4, 5, 6, 7, 8] have been extensively studied both theoretically and experimentally. These corrections are important because, using the measured $ft$ values, one can relate them to the $u$-quark to $d$-quark transition matrix element $V_{ud}$ in the Cabibbo-Kobayashi-Maskawa (CKM) matrix. In the Standard Model this matrix fulfils the unitarity condition: the sum of the squares of the matrix elements in each row (column) is equal to one, for example:
$V_{ud}^{2}+V_{us}^{2}+V_{ub}^{2}=1.$ (1)
In order to determine the $V_{ud}$ term using the experimental $ft$ values in
the super-allowed beta-decay, one must introduce a number of corrections [1].
In this paper, similarly to reference [2], we consider one important aspect of it, namely the Coulomb correction. A number of works have dealt with this problem using different methods [1, 2, 3, 4, 5, 6, 7, 8]. In particular, the authors of [1, 8] have devoted a considerable
amount of work to study the influence of the Coulomb interaction on the $ft$
values. Of course in any of these studies, there are some approximations
involved. One of the main issues is the way the Coulomb force introduces
admixtures of higher excitations into the parent and its analog state. This
was the main topic of reference [2]. The Coulomb force admixes particle-hole
($ph$) states, mostly of $2\hbar\omega$ at unperturbed energy positions. There
is, however, a $ph$ interaction that changes the situation, creating a
collective state. In the case of a one-body Coulomb potential, the excitation
caused by it leads to $J=0^{+},T=1$ $ph$ states. In the isovector channel, the
$ph$ interaction is repulsive, and therefore there is an upward energy shift.
The resulting collective state is the isovector monopole (IVM) giant
resonance. The shift is substantial as many Random Phase Approximation (RPA)
studies indicate [9, 10, 11] about $1\hbar\omega$, and in some other studies
even higher [12, 13], $2\hbar\omega$ shifts. In the RPA, the energy weighted
sum rule is conserved, and therefore the upward energy shift will reduce the
strength. The amount of Coulomb mixing is determined by the strength divided
by the energy squared. Therefore, by a hand-waving argument, this amount will be reduced by a factor of $(2/3)^{3}=8/27$ in the RPA, compared to a calculation in which unperturbed $2\hbar\omega$ $ph$ excitations are used. As we will see in the next sections, the actual calculations confirm this rough estimate.
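To spell out the scaling behind this estimate (denoting the admixture by $\varepsilon^{2}$, with $S$ the Coulomb coupling strength to the collective state and $E$ its excitation energy): the energy-weighted sum rule fixes $S\,E\simeq{\rm const}$, so that
$\varepsilon^{2}\propto\frac{S}{E^{2}}\propto\frac{1}{E^{3}},\qquad\frac{\varepsilon^{2}(3\hbar\omega)}{\varepsilon^{2}(2\hbar\omega)}=\left(\frac{2}{3}\right)^{3}=\frac{8}{27}\approx 0.30\,.$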
There are additional drawbacks in the shell-model approaches [1], as pointed out in reference [14], which were avoided in [2]. We will now briefly outline the main steps of the theory given in [2].
## 2 Coulomb Mixing and the isovector monopole
We write the Fermi beta-decay matrix element between the ground state and its isobaric analog state (IAS) in the form:
$|M_{F}|^{2}=|M_{F}^{0}|^{2}(1-\delta_{C})$ (2)
where $M_{F}$ is the physical Fermi matrix element:
$M_{F}=\langle\Psi_{1}|T_{+}|\Psi_{2}\rangle.$ (3)
$|\Psi_{1}\rangle$ and $|\Psi_{2}\rangle$ are the parent and daughter physical
states. The symbol $M_{F}^{0}$ stands for the Fermi matrix element obtained in
the limit when in the Hamiltonian all the charge-dependent parts are put to
zero, and the wave functions are eigenstates of the charge-independent
Hamiltonian $H_{0}$. The symbol $\delta_{C}$ is the Coulomb correction. The
eigenstates of this Hamiltonian with isospin $T$ and $T_{z}$ will be denoted
as $|T,T_{z}\rangle$ and:
$H_{0}|T,T_{z}\rangle=E_{T}|T,T_{z}\rangle.$ (4)
The $2T+1$ components with different $T_{z}$ values are degenerate; the action of the isospin lowering and raising operators $T_{-}$, $T_{+}$ gives:
$T_{-}|T,T\rangle=\sqrt{2T}|T,T-1\rangle,\qquad T_{+}|T,T-1\rangle=\sqrt{2T}|T,T\rangle.$ (5)
In this case $M_{F}^{0}=\sqrt{2T}$. We now introduce a charge-dependent part $V_{\rm{CD}}$. The dominant part of the charge-dependent interaction is the charge-asymmetric one-body Coulomb potential $V_{C}$. (While the charge-dependent components of the two-body nuclear force might be important in changing the relative spacing of levels in the analog nucleus, their influence on isospin mixing in the ground state or IAS is expected to be small.)
The one-body Coulomb potential will now admix into the ground state and its
IAS the IVM [2, 10]. In perturbation theory the effect of the charge-dependent
part on the wave functions of the two members of the isospin multiplet,
$|T,T\rangle$ and $|T,T-1\rangle$ will be:
$\Psi_{1}=N_{1}^{-1}(|T,T\rangle+\varepsilon_{T}|M_{T,T}\rangle+\varepsilon_{T+1}|M_{T+1,T}\rangle),$ (6)
$\Psi_{2}=N_{2}^{-1}(|T,T-1\rangle+\eta_{T-1}|M_{T-1,T-1}\rangle+\eta_{T}|M_{T,T-1}\rangle+\eta_{T+1}|M_{T+1,T-1}\rangle),$ (7)
where $|M_{T^{\prime},T^{\prime}_{z}}\rangle$, are the
$T^{\prime},T^{\prime}_{z}$ components of the IVM, and where
$N_{1}=\sqrt{1+\varepsilon_{T}^{2}+\varepsilon_{T+1}^{2}},$ (8)
and
$N_{2}=\sqrt{1+\eta_{T-1}^{2}+\eta_{T}^{2}+\eta_{T+1}^{2}},$ (9)
with
$\varepsilon_{i}=\frac{\langle
T,T|V_{C}|M_{T+i,T}\rangle}{E_{M_{T+i,T}}-E_{0}},\quad i=0,1,$ (10)
where $E_{0}$ is the g.s. energy in this nucleus,
$\eta_{i}=\frac{\langle
T,T-1|V_{C}|M_{T+i,T-1}\rangle}{E_{M_{T+i,T-1}}-E_{1}},\quad i=-1,0,1.$ (11)
Here $E_{1}$ is the energy of the analog state. One can write these as:
$\varepsilon_{i}=\langle T,T,1,0|T+i,T\rangle\,\frac{\langle T+i||V_{C}||T\rangle}{E_{M_{T+i,T}}-E_{0}},$ (12)
$\eta_{i}=\langle T,T-1,1,0|T+i,T-1\rangle\,\frac{\langle T+i||V_{C}||T\rangle}{E_{M_{T+i,T-1}}-E_{1}}.$ (13)
Introducing the Clebsch-Gordan coefficients and assuming that the reduced matrix elements are equal, one arrives [2] at the simple expression:
$\langle\Psi_{1}|T_{+}|\Psi_{2}\rangle^{2}=2T\left[1-4(T+1)\frac{V_{1}}{\xi\hbar\omega
A}\varepsilon_{1}^{2}\right]^{2}$ (14)
and
$\delta_{C}=8(T+1)\frac{V_{1}}{\xi\hbar\omega A}\varepsilon_{1}^{2}.$ (15)
Here $\xi\hbar\omega$ is the energy of the IVM in the parent nucleus, $V_{1}$
is the symmetry energy parameter determined from the equation
$E_{M_{T+1,T}}-E_{M_{T,T}}\approx V_{1}\frac{N-Z}{A},$ (16)
and $\varepsilon_{1}^{2}$ is the admixture of the $T+1$ component of the IVM
in the parent nucleus. We should emphasize that the result in eq. (14) depends implicitly on all the various admixtures in eqs. (6), (7) and (12), (13).
The assumption of equal reduced matrix elements in deriving eq. (14) is
approximate. The differences between the reduced matrix elements for different
isospin components increase with the increasing number of excess neutrons. See
[10, 11] and references therein. For nuclei with low neutron excess, in
particular, for super-allowed decays $(N-Z=2)$, this is a very good
approximation. Here we apply eqs. (14) and (15) to super-allowed transitions only.
## 3 Results of the Coulomb corrections to super-allowed beta decay
$\delta_{C}$
In reference [2], the calculations of $\delta_{C}$ were based on values of isospin mixing derived from general sum rules and not on detailed microscopic computations of isospin impurities. One calculation presented there relied on a schematic microscopic model [15] introduced in the 1970s. We return to the subject of Coulomb corrections because new, more advanced methods to calculate isospin mixing in low-lying nuclear states are now available. We mainly rely on the recently published article [11]
of interactions. Using the formalism described in the previous section we
apply equations (14, 15) to compute the values of $\delta_{C}$ for a number of
nuclei through the periodic table. We concentrate on super-allowed beta-decay
transitions. The calculations are performed using the Hartree-Fock (HF) RPA.
For open-shell nuclei, one should take into account the pairing correlations
and so one has to use the Quasi-particle Random Phase Approximation (QRPA).
However in the case of only two nucleons outside the closed shells, one can
limit ourselves to RPA. So for the super-allowed transitions (in $T=1$
nuclei), we proceed with our calculation the following way. We calculate in
the charge-exchange HF-RPA [11], the distribution of the IVM strength in the
$N=Z$ closed-shell nuclei ${}^{40}_{20}$Ca, ${}^{56}_{28}$Ni,
${}^{80}_{40}$Zr, and ${}^{100}_{\phantom{1}50}$Sn. In these cases, the IVM
has only a $T=1$ isospin. We compute the Coulomb mixing of the IVM into the
ground states of these nuclei, (see for details reference [11]) and denote the
amount of isospin admixture as $\bar{\varepsilon}^{2}$. The results are
presented in Table 1.
In the neighboring $T=1$ nuclei ${}^{42}_{20}$Ca, ${}^{58}_{28}$Ni, ${}^{82}_{40}$Zr, and ${}^{102}_{\phantom{1}50}$Sn, the admixture of the $T+1$ components (in this case $T+1=2$) can be approximated by introducing the squared Clebsch-Gordan coefficient $1/(T+1)=1/2$:
$\varepsilon_{1}^{2}\approx\frac{1}{2}\bar{\varepsilon}^{2}.$ (17)
The error here is very small. We now apply eq. (15); we use the above relation and, instead of $\xi\hbar\omega$, the energy of the IVM determined in the RPA calculations, $\bar{E}_{0}$. We note that the value of $\xi$ is between 3 and 4. The value of $V_{1}$ is determined from the results of reference [11] by using eq. (16). For this purpose, we utilize the RPA results [11] for nuclei that have a neutron excess. For example, in the case of
${}^{42}_{20}$Ca we use the ${}^{48}_{20}$Ca results. For illustration
purposes we show in Table 2 the ${}^{48}_{20}$Ca RPA results. In Table 2 we
have included also the value of $\delta_{C}$. Since in 48Ca the isospin is
$T=4$, the assumption about the equality of the reduced matrix elements for
the IVM components with isospins $T+1$, $T$, $T-1$ is not satisfied. Therefore
the value of $\delta_{C}$ is approximate. A rough estimate would assign an
uncertainty of $10-15\%$ for the value of $\delta_{C}$, meaning that for
nuclei with a large neutron excess the values of the Coulomb correction are
smaller than in the case of super-allowed transitions. The isospin mixing of
the $T+1$ states into the ground state is denoted as $\varepsilon_{T+1}^{2}$ (see reference [11]). When averaging the values obtained with different Skyrme
interactions we find the value of $V_{1}$ for the Ca region to be 90 MeV, for
Ni 120 MeV, for Zr 60 MeV, and Sn 90 MeV. Except for Zr, the values of $V_{1}$
are around 100 MeV. This is the value we used in reference [11]. The Zr region
is exceptional, the symmetry energy potential is weaker as noticed a long time
ago [16]. This point will be mentioned later in the article. The Coulomb
potential $V_{C}$ is computed using the HF calculation. Introducing all the quantities mentioned above into eq. (15), we find the total Coulomb corrections $\delta_{C}$ for the super-allowed beta transitions in $T=1$ nuclei (see Table
3).
Table 1: The Coulomb mixing $\bar{\varepsilon}^{2}$ (%) and the IVM energy $\bar{E}_{0}$ (MeV) determined in the RPA calculation, for $N=Z$ nuclei.

| Skyrme | 40Ca $\bar{\varepsilon}^{2}$ (%) | 40Ca $\bar{E}_{0}$ (MeV) | 56Ni $\bar{\varepsilon}^{2}$ (%) | 56Ni $\bar{E}_{0}$ (MeV) |
|---|---|---|---|---|
| SIII | 0.68 | 35.08 | 1.22 | 36.52 |
| SKM* | 0.78 | 32.51 | 1.42 | 34.57 |
| SLy4 | 0.77 | 31.13 | 1.43 | 32.70 |
| BSK17 | 0.70 | 32.79 | 1.23 | 34.74 |
| SAMi0 | 0.74 | 33.66 | 1.36 | 34.57 |

| Skyrme | 80Zr $\bar{\varepsilon}^{2}$ (%) | 80Zr $\bar{E}_{0}$ (MeV) | 100Sn $\bar{\varepsilon}^{2}$ (%) | 100Sn $\bar{E}_{0}$ (MeV) |
|---|---|---|---|---|
| SIII | 3.63 | 32.06 | 4.54 | 34.35 |
| SKM* | 4.07 | 30.18 | 5.34 | 32.45 |
| SLy4 | 3.96 | 28.93 | 5.27 | 30.83 |
| BSK17 | 3.72 | 30.21 | 4.75 | 32.40 |
| SAMi0 | 3.96 | 30.75 | 5.15 | 32.58 |
Table 2: Results for 48Ca ($T=4$).

| Skyrme int. | $\varepsilon_{T+1}^{2}$ (%) | $\bar{E}_{0}$ (MeV) | $V_{1}$ (MeV) | $\delta_{C}$ (%) |
|---|---|---|---|---|
| SIII | 0.10 | 34.79 | 106.14 | 0.26 |
| SKM* | 0.12 | 32.54 | 92.17 | 0.28 |
| SLy4 | 0.12 | 30.57 | 97.80 | 0.32 |
| BSK17 | 0.10 | 32.83 | 108.63 | 0.26 |
| SAMi0 | 0.11 | 32.28 | 77.25 | 0.22 |
Table 3: The Coulomb correction $\delta_{C}$ (%) for $T=1$ nuclei.

| Skyrme | 42Ca $\delta_{C}$ (%) | 58Ni $\delta_{C}$ (%) | 82Zr $\delta_{C}$ (%) | 102Sn $\delta_{C}$ (%) |
|---|---|---|---|---|
| SIII | 0.40 | 0.54 | 0.70 | 0.98 |
| SKM* | 0.42 | 0.60 | 0.78 | 1.06 |
| SLy4 | 0.46 | 0.68 | 0.98 | 1.30 |
| BSK17 | 0.44 | 0.62 | 0.78 | 1.14 |
| SAMi0 | 0.32 | 0.52 | 0.68 | 0.98 |
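As a quick numerical cross-check (a sketch, not code from the paper), eq. (15) with $\bar{E}_{0}$ in place of $\xi\hbar\omega$ reproduces the $\delta_{C}$ column of Table 2 from its other columns, up to the rounding of the tabulated $\varepsilon_{T+1}^{2}$:

```python
# Sketch (not code from the paper): evaluate eq. (15),
#   delta_C = 8 (T+1) V1 / (E0 * A) * eps2,
# for 48Ca (T = 4, A = 48), with E0bar standing in for xi*hbar*omega.
T, A = 4, 48
rows = {  # Skyrme: (eps_{T+1}^2, E0bar [MeV], V1 [MeV]) from Table 2
    "SIII":  (0.0010, 34.79, 106.14),
    "SKM*":  (0.0012, 32.54, 92.17),
    "SLy4":  (0.0012, 30.57, 97.80),
    "BSK17": (0.0010, 32.83, 108.63),
    "SAMi0": (0.0011, 32.28, 77.25),
}
for name, (eps2, e0, v1) in rows.items():
    delta_c = 8 * (T + 1) * v1 / (e0 * A) * eps2  # eq. (15)
    print(f"{name:6s} delta_C = {100 * delta_c:.2f} %")  # ~0.22-0.32 %
```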
It is interesting to mention the case of 80Zr in which isospin mixing was
studied experimentally [17, 18]. The value for isospin mixing obtained in our
work [11] agreed with the experiment.
We should mention that our calculation of $\delta_{C}$ expresses the global features of this quantity over the periodic table and does not attempt to fit its small fluctuations for different nuclei. Our main conclusion is that $\delta_{C}$ is smaller by a factor of $1.5-2$ compared with references [1, 8]. The main reason was explained in the Introduction. In our approach, there is no division of the correction into two parts (overlap corrections and the rest); all is taken into account in the single expression, eq. (15). Our result for the correction $\delta_{C}$ is closer to some other computations in references [3, 5], because in these works some corrections due to the collective nature of the Coulomb strength are taken into consideration.
Table 4: Results for $\delta_{C}$ (%) in various approaches.

| Approach | $A\approx 40$ | $A\approx 66$ | $A\approx 80$ |
|---|---|---|---|
| Hardy-Towner [8] | 0.66 | 1.56 | 1.63 |
| Satuła et al. [4] | 0.77 | 0.9 | 1.5-1.6 |
| Rodin [5] | 0.43 | 0.99 | - |
| Liang et al. [3] | 0.33-0.38 | 0.47-0.56 | 1.1-1.2 |
| Auerbach-Loc | 0.40-0.54 | 0.54-0.66 | 0.72-1.12 |
## 4 Fermi beta transitions, isospin mixing, and the role of the anti-analog
state
So far we have discussed the role of the IVM in inducing isospin impurities
into the low-lying nuclear states. The energy of the IVM is high and is
distant from the level it admixes. The amount of mixing changes smoothly when
going from one nucleus to the next. The IVM involves $2\hbar\omega$ $ph$
excitations and cannot be properly described in a small space shell-model
calculation. When we pass from the parent state $|\pi\rangle$ to the analog nucleus, that is, the one in which one of the neutrons is changed into a proton ($N-1$ neutrons and $Z+1$ protons), states with isospin $T-1$ are allowed.
Several states stand out. These are the “configuration” states [10, 13, 16].
They are composed of the same spatial and spin components as the analog state
(denoted as $|A\rangle$) but are constructed to be orthogonal to the IAS. Of
course, they are not eigenstates of the Hamiltonian but are mixed with other
$T-1$ states. So we will treat these states as doorways. The configuration
states are expected in general to have relatively large matrix elements with
the analog because the Coulomb force produces large monopole contributions
[10, 13, 16]. Among the “configuration” states let us, for the purpose of
simplicity, single out one “configuration” state, the anti-analog. In the case
that the excess neutrons (or excess protons) occupy only two different orbits,
the anti-analog is the only configuration state.
## 5 Coulomb mixing of the anti-analog and analog
Consider a simple parent state in which $n_{1}$ excess neutrons occupy orbit
$j_{1}(n)$ and $n_{2}$ neutrons orbit $j_{2}(n)$. In the parentheses we put $n$ or $p$ for neutrons or protons. (In some light nuclei the role of excess neutrons is interchanged with that of excess protons.) Of course, $n_{1}+n_{2}\equiv N-Z=2T$. The parent state,
$|\pi\rangle=\left|j_{1}^{n_{1}}(n)j_{2}^{n_{2}}(n)\right\rangle,$ (18)
has isospin $T$. The analog,
$|A\rangle=\frac{1}{\sqrt{2T}}\big[\sqrt{n_{1}}\left|j_{1}^{n_{1}-1}(n)j_{1}(p)j_{2}^{n_{2}}(n)\right\rangle+\sqrt{n_{2}}\left|j_{1}^{n_{1}}(n)j_{2}^{n_{2}-1}(n)j_{2}(p)\right\rangle\big],$ (19)
has isospin $T$ as well. The anti-analog $|\bar{A}\rangle$ is then:
$|\bar{A}\rangle=\frac{1}{\sqrt{2T}}\big[\sqrt{n_{2}}\left|j_{1}^{n_{1}-1}(n)j_{1}(p)j_{2}^{n_{2}}(n)\right\rangle-\sqrt{n_{1}}\left|j_{1}^{n_{1}}(n)j_{2}^{n_{2}-1}(n)j_{2}(p)\right\rangle\big].$ (20)
We consider here parent nuclei with simple configurations: for even-even
nuclei, the $n_{1}$ and $n_{2}$ are even and in each orbit the excess nucleons
are coupled to $J=0^{+}$ and in odd-even nuclei $n_{1}$ is odd and $n_{2}$ is
even. The one-body Coulomb matrix element between the analog and anti-analog
is then [10, 13, 16]:
$\langle\bar{A}|V_{C}|A\rangle=\frac{\sqrt{n_{1}n_{2}}}{2T}\left[\langle
j_{1}|V_{C}|j_{1}\rangle-\langle j_{2}|V_{C}|j_{2}\rangle\right],$ (21)
where $V_{C}$ is the Coulomb potential. If the excess neutrons occupy orbits
belonging to different major shells, this matrix element is sizable. The
energy splitting between the analog and anti-analog is often given by the
symmetry potential $V_{1}$:
$E_{\bar{A}}-E_{A}=\frac{V_{1}(N-Z)}{A}.$ (22)
The value of $V_{1}$ here is smaller than in the splitting of the IVM and is about 50 MeV (see reference [16] and the experimental data quoted in that reference).
The coupling between the analog and anti-analog successfully explained [10,
19] isospin-forbidden decays in light nuclei [20, 21]. As one goes to heavier nuclei along the stability line, the number of excess neutrons increases, which leads to a reduction of the matrix element (21) and an increase of the energy splitting (22), causing the mixing of $T-1$ impurities to diminish. The dominant mechanism then becomes the mixing with the IVM state. In heavy unstable
nuclei with a small neutron excess, the anti-analog mixing mechanism may lead
to significant isospin impurities in the analog state. The above example
discusses only two orbits, but this mechanism can be easily generalized to
more than two orbits.
## 6 The anti-analog and the Coulomb correction $\delta_{C}$
We now discuss the contribution of the anti-analog to the Coulomb correction for Fermi beta-decay transitions. Using the definitions in eq. (3), we write:
$|\Psi_{1}\rangle=|\pi\rangle,$ (23)
and
$|\Psi_{2}\rangle=\sqrt{1-\varepsilon^{2}}|A\rangle+\varepsilon|\bar{A}\rangle,$
(24)
with
$\varepsilon=\frac{\langle\bar{A}|V_{C}|A\rangle}{E_{\bar{A}}-E_{A}}.$ (25)
Note that this is the isospin mixing of the anti-analog into the analog. With
these expressions one immediately sees that:
$\delta_{C}=\varepsilon^{2}.$ (26)
## 7 Harmonic oscillator estimate
For a uniform charge distribution with radius $R$ the inner part of the
Coulomb potential is
$V_{C}=\frac{1}{2}\frac{Ze^{2}}{R^{3}}r^{2}.$ (27)
For $R=1.2A^{1/3}$ fm one can write the matrix element in equation (21) as:
$\langle\bar{A}|V_{C}|A\rangle\approx
0.35\frac{\sqrt{n_{1}n_{2}}}{2T}\frac{Z}{A}\left[\langle
j_{1}|r^{2}|j_{1}\rangle-\langle j_{2}|r^{2}|j_{2}\rangle\right].$ (28)
If $j_{1}$ and $j_{2}$ belong to two major shells differing by one node, then the difference of the squared radii in a harmonic-oscillator well becomes:
$\Delta(r^{2})=\frac{\hbar}{m\omega},$ (29)
where $m$ is the mass of the nucleon and $\omega$ is the oscillator frequency. Taking
$\hbar\omega=41A^{-1/3}$ MeV, we obtain:
$\langle\bar{A}|V_{C}|A\rangle=0.35\frac{\sqrt{n_{1}n_{2}}}{2T}\frac{Z}{A^{2/3}}\rm{MeV}.$
(30)
Taking $V_{1}\approx 50$ MeV in eq. (22) and using $N-Z=2T$, we obtain:
$\delta_{C}=5.0\times 10^{-5}Z^{2}A^{2/3}\frac{n_{1}n_{2}}{(2T)^{4}}.$ (31)
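For completeness, the coefficient follows in one line by combining eqs. (30), (22), and (26):
$\delta_{C}=\varepsilon^{2}=\left[\frac{0.35\,\sqrt{n_{1}n_{2}}\,Z\,A^{-2/3}}{(2T)\,V_{1}(N-Z)/A}\right]^{2}=\left(\frac{0.35}{50}\right)^{2}\frac{n_{1}n_{2}\,Z^{2}A^{2/3}}{(2T)^{4}}\approx 5.0\times 10^{-5}\,Z^{2}A^{2/3}\,\frac{n_{1}n_{2}}{(2T)^{4}}\,.$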
## 8 Numerical Estimates for the Anti-Analog mixing
Using the self-consistent HF potential we computed the difference of the
Coulomb matrix elements in eq. (21) for the orbits $2p_{1/2}$ and $1g_{9/2}$
for ${}^{88}_{38}$Sr. We use the Skyrme HF for the five different forces given
previously in the paper [11]. The difference in the Coulomb matrix elements
and $\delta_{C}$ for the above two orbits are shown in Table 5. For the
harmonic oscillator the difference in the matrix elements was 0.250 MeV and
$\delta_{C}=0.14\%$. In nuclei with excess neutrons (protons) occupying two
different orbits $j_{1}$ and $j_{2}$ but both orbits belonging to the same
major harmonic oscillator shell, formula (29) is not applicable. However, it was shown in references [10, 16, 19] that, due to the different binding energies and different angular momenta of the two orbits in a finite-well potential, the difference of the two Coulomb matrix elements in eq. (28) is comparable (within a factor of 2) to the harmonic-oscillator result.
See for example Table 3.2 in reference [19].
Table 5: $\langle\bar{A}|V_{C}|A\rangle$ calculated for ${}^{88}_{38}$Sr. From the harmonic oscillator estimate, $\langle\bar{A}|V_{C}|A\rangle=0.25$ MeV and $\delta_{C}=0.13\%$.

| Skyrme int. | $\langle\bar{A}\vert V_{C}\vert A\rangle$ (MeV) | $\delta_{C}$ (%) |
|---|---|---|
| SIII | 0.293 | 0.18 |
| SKM* | 0.257 | 0.14 |
| SLy4 | 0.281 | 0.17 |
| BSK17 | 0.289 | 0.18 |
| SAMi0 | 0.331 | 0.23 |
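For the record, the $\delta_{C}$ column of Table 5 follows from eqs. (25) and (26) with the energy splitting of eq. (22); the sketch below (not from the paper; it assumes $V_{1}=50$ MeV) reproduces the tabulated values to within rounding:

```python
# Sketch (not code from the paper; assumes V1 = 50 MeV in eq. (22)):
# delta_C = eps^2, eps = <Abar|V_C|A> / (E_Abar - E_A), for 88Sr.
e_split = 50.0 * 12 / 88  # E_Abar - E_A in MeV for N - Z = 12, A = 88
for name, me in [("SIII", 0.293), ("SKM*", 0.257), ("SLy4", 0.281),
                 ("BSK17", 0.289), ("SAMi0", 0.331)]:
    print(f"{name:6s} delta_C = {100 * (me / e_split) ** 2:.2f} %")
```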
Table 1 of reference [16] lists a number of Coulomb energy differences for orbits that are within the same major shell or in different major shells; the values are quite similar. One can get an estimate by comparing the relative shifts of states in mirror nuclei. Comparing the low-lying spectra of 17F and 17O, one finds that the Coulomb energy difference in the brackets of eq. (21) for the $s_{1/2}$ and $d_{3/2}$ orbits is about 400 keV. From the spectra of
41Sc and 41Ca one finds that the difference in the Coulomb energies for the
orbits $p_{3/2}$ and $f_{7/2}$ is about 220 keV and from the spectra of 57Cu
and 57Ni the difference in Coulomb energies for the $p_{3/2}$ and $f_{5/2}$ is
260 keV. These differences are about half of the Coulomb energy differences
found for harmonic oscillator orbits in different major shells.
It is interesting to mention in this respect that large isospin impurities in
the analog have been measured [22, 23] in the $A=32$ isobars. In reference
[22] the experiment involved the Fermi transitions within the $T=1$
isotriplet. The analysis of the experiment indicated a large impurity and a
$\delta_{C}$ correction of 5.3 %. In the same $A=32$ nuclei members of the
$T=2$ multiplet were also measured. (The parent state is ${}^{32}_{18}$Ar$_{14}$ with 4 excess protons.) A large isospin impurity of about 1-2% was found in
the analog state [23]. The shell-model calculation in a restricted space finds the isospin admixture to be 0.43% [24]. It is remarkable that in these
nuclei the primary configuration populated by the excess protons involves two
different orbits, the $s_{1/2}$ and $d_{3/2}$, thus allowing for the formation
of the anti-analog. If we use equation (31) for the isospin quintet in the
$A=32$, we find $\delta_{C}=0.25\%$. The above equation applies to harmonic-oscillator orbits belonging to different major shells with $N$ and $N+1$ nodes; however, as already discussed above, the difference in the Coulomb matrix elements is affected by the binding energies and angular momenta, and sizable matrix elements between the analog and anti-analog are produced [10, 16, 19]. Although the mixing with the anti-analog might contribute to $\delta_{C}$, it will not reach the large percentage found in the experiment.
As already remarked, proceeding along the stability line to heavier nuclei, the number of excess neutrons increases and the isospin admixture caused by the anti-analog decreases. However, at present (and even more so in the future) it is possible to study proton-rich, heavy exotic nuclei with a small neutron (or proton) excess (thus low $T$). In such nuclei the isospin admixtures, as one can see from formula (31), will strongly increase. Choosing nuclei in which the excess protons (neutrons) occupy orbits in different major shells and which have low isospin, we can point out some examples (not necessarily all feasible for experimental studies). For $T=3/2$ nuclei with the three excess nucleons occupying two orbits from different major shells, we select ${}^{17}_{\phantom{1}7}$N$_{10}$ and find from eq. (31) $\delta_{C}=0.04\%$; for ${}^{41}_{22}$Ti$_{19}$, $\delta_{C}=0.7\%$; and for ${}^{79}_{38}$Sr$_{41}$, $\delta_{C}=3.3\%$. For $T=2$ nuclei one can look at the example of ${}^{80}_{38}$Sr$_{42}$, where $\delta_{C}=2.1\%$. In the examples chosen, we selected nuclei in which the excess nucleons occupy two orbits belonging to different harmonic-oscillator shells. The trend of fast-growing $\delta_{C}$ with mass number $A$, for low-isospin states and excess neutrons occupying different major shells, is seen in eq. (31). For $T=2$ and mass $A=40$, $\delta_{C}=0.3\%$; for $A=60$, $\delta_{C}=0.9\%$; and for $A=100$, $\delta_{C}=3.9\%$.
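The quoted numbers can be checked directly from eq. (31); in the sketch below (not from the paper) the splits $(n_{1},n_{2})$ of the excess nucleons are our assumption (2 + 1 for three excess nucleons, 2 + 2 for four):

```python
# Sketch (not code from the paper): evaluate eq. (31) for the quoted examples.
# The splits (n1, n2) of the excess nucleons are our assumption.
cases = [  # (label, Z, A, n1, n2, T)
    ("17N",           7, 17, 2, 1, 1.5),
    ("79Sr",         38, 79, 2, 1, 1.5),
    ("80Sr",         38, 80, 2, 2, 2.0),
    ("32Ar quintet", 18, 32, 2, 2, 2.0),
]
for label, Z, A, n1, n2, T in cases:
    delta_c = 5.0e-5 * Z**2 * A**(2.0 / 3.0) * n1 * n2 / (2 * T)**4
    print(f"{label:13s} delta_C = {100 * delta_c:.2f} %")
# -> about 0.04 %, 3.3 %, 2.1 %, and 0.25 %, in line with the text.
```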
## 9 $\delta_{C}$ and the spreading width and energy shifts of the analog
In the doorway state approximation [10, 13] the spreading width of the IAS is
given by
$\Gamma^{\downarrow}_{A}=\sum_{d}\frac{|\langle
A|V_{C}|d\rangle|^{2}}{|E_{A}-E_{d}|^{2}}\Gamma_{d}^{\downarrow},$ (32)
where $|d\rangle$ denotes doorway states and $\Gamma_{d}^{\downarrow}$ their
spreading width.
For the analog, the important doorways are the anti-analog in lighter nuclei and the IVM in the heavier ones. Here we limit ourselves to the anti-analog $|\bar{A}\rangle$. Eq. (32) can then be written as
$\Gamma^{\downarrow}_{A}=\delta_{C}\Gamma^{\downarrow}_{\bar{A}}.$ (33)
The spreading width of the anti-analog $\Gamma^{\downarrow}_{\bar{A}}$ is due
to the strong interaction and therefore is of the order of a single-particle
spreading width, thus several MeV.
The practicality of the above equation is quite limited. The total width of the analog state is in general composed of the escape width [10, 13, 19] and the spreading width, and it is usually difficult to separate the two. Moreover, the spreading width of the analog gets contributions from both the IVM and the anti-analog, which again are not easy to separate. As we just mentioned above, the spreading width of the anti-analog is not well known. One can get only a rough
idea about the contribution of the anti-analog to the width of the analog. For
example, if we use a 3 MeV spreading width for the anti-analog in 88Sr and the
estimated value for $\delta_{C}$, we conclude that the contribution of the
anti-analog to the spreading width of the analog is only a few keV. However,
it is worth noticing that in some medium mass nuclei, in low-isospin exotic
nuclei, the mixing with the anti-analog can produce relatively large spreading
widths of the analog, of the order of a few tens of keV. For example, in ${}^{79}_{38}$Sr$_{41}$ the contribution of the anti-analog to the spreading width of the analog is of the order of 100 keV. In experiments, one would observe
broadened analog resonances. The mixing discussed here affects also the
energies of the IAS. This mixing might produce shifts of the order:
$\Delta E=\delta_{C}(E_{\bar{A}}-E_{A}).$ (34)
The values of $\Delta E$ are typically of the order of several tens of keV; for example, in ${}^{79}_{38}$Sr$_{41}$ this equals 70 keV. The shifts may vary for different states in the analog nucleus, and the spectrum may be somewhat distorted compared to the parent nucleus. For example, the three excess neutrons may occupy states with $J=5/2,J=1/2$ of the kind $p_{1/2}^{2}f_{5/2}$, $p_{1/2}f^{2}_{5/2}$, while another configuration with quantum number $J=5/2$ will be $f^{3}_{5/2}$. The first two states will have anti-analogs, the third will not, and therefore the first two states will be shifted according to eq. (34) while the third will not.
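A small sketch (not from the paper; it assumes a 3 MeV anti-analog spreading width and $V_{1}=50$ MeV) shows the order of magnitude of eqs. (33) and (34) for the two cases discussed:

```python
# Sketch (not code from the paper; assumes a 3 MeV anti-analog spreading
# width and V1 = 50 MeV in eq. (22)): evaluates eqs. (33) and (34).
def width_and_shift(delta_c, n_minus_z, a, gamma_abar_mev=3.0, v1_mev=50.0):
    e_split_kev = 1e3 * v1_mev * n_minus_z / a    # E_Abar - E_A, eq. (22)
    gamma_a_kev = 1e3 * delta_c * gamma_abar_mev  # spreading width, eq. (33)
    shift_kev = delta_c * e_split_kev             # energy shift, eq. (34)
    return gamma_a_kev, shift_kev

print(width_and_shift(0.0017, 12, 88))  # 88Sr: ~5 keV width, ~12 keV shift
print(width_and_shift(0.033, 3, 79))    # 79Sr: ~100 keV width, ~60 keV shift
```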
A short account of this work was presented at a conference in 2014; see section 4 of reference [25].
## 10 Conclusions and Outlooks
First we summarize the first part, the Coulomb corrections to super-allowed beta decay. We stress here again that the main purpose of the first part of our work is to calculate the total correction $\delta_{C}$ using the Coulomb interaction unchanged and taking into account the collective effects of the particle-hole space in the isovector channel. It is not new that such
collectivity causes the states to shift to higher energies. The giant electric dipole resonance was discovered at an excitation energy of $2\hbar\omega$ instead of the $1\hbar\omega$ expected from non-collective particle-hole states.
The same also holds for the isovector $J=0^{+}$ channel. This would cause a
reduction in isospin mixing in the ground states of even-even nuclei. In turn,
this leads to reduced values of the $\delta_{C}$. The purpose of the present
paper (and the one in reference [2]) was to include this effect and not to
find all relevant corrections to the superallowed beta decays. We did not
intend to calculate the $V_{ud}$ matrix element in the CKM matrix. The other
approaches [1, 3, 4, 5, 6, 7] do not include the effect of the collective
shift of the Coulomb strength. So far none of these works explain how one can
avoid it, and how other aspects of their theory are able to compensate for
this nuclear structure effect. In reference [2] we suggested a simple model to
account of this collective aspect of theory and how it affects the amount of
isospin mixing. In the present paper, we use the results obtained in reference
[11]. The work in [11] is a fully microscopic, detailed method to find the
isospin admixture in the ground states of even-even nuclei. This is probably,
at present, one of the best calculations of isospin mixing in the ground
states of even-even nuclei. The calculation employs several versions of the Skyrme interaction. We were able to separate the three isospin components
($T-1,T$, and $T+1$) of the isovector excitations, and were able to determine
their energy splitting. From there we found the values of the symmetry
potential $V_{1}$, and the excitation energies of the IVM state. Indeed the
energies turn out to be at $3\hbar\omega$ and not at $2\hbar\omega$. The new
values for these quantities were used in the present paper. The values
obtained for the $\delta_{C}$ in the present paper are about a factor of $2$
smaller than the ones found by Hardy and Towner [1, 8] (see Table 4).
Concerning the test of the Conserved Vector Current (CVC) hypothesis: in the present context this means that the $\mathcal{F}t$ values should be constant with $Z$ and $A$. This would be the case if isospin symmetry were fully conserved, and it will hold if all corrections are introduced into the measured values. The $\delta_{C}$ is only one among a few other corrections, and as explained above we only calculate $\delta_{C}$. When Hardy and Towner calculate this correction for the various nuclei, they adjust several parameters in their theory for each nucleus separately. Their model is semi-phenomenological, and the strength of the two-body Coulomb interaction is adjusted to fit the experimental Isobaric Multiplet Mass Equation (IMME) for each nucleus under consideration. Also, a charge-dependent nuclear interaction is incorporated by a 2% increase in all the $T=1$ proton-neutron matrix elements in the work of Hardy and Towner [1]. It could affect isospin mixing in certain cases, when the levels that mix are close in energy and the two-body matrix element may affect the mixing. This could happen for some closely spaced excited levels, or in odd-odd nuclei. In our approach, we do not calculate the Coulomb correction for each nucleus separately and do not adjust the interaction in each case; it is therefore no surprise that we do not consider the CVC theorem.
It is of interest to note the following. Recently, Hardy and Towner published a detailed survey of the superallowed transitions [8]. Their main and ultimate purpose is to use the superallowed beta transitions in order to determine with great precision the value of the $V_{ud}$ matrix element in the CKM matrix and to assess whether unitarity is fulfilled or violated. In our work, the purpose is just to consider one aspect of the theory, namely how the collectivity of the state that causes isospin mixing affects $\delta_{C}$. The aims of our work and that of Hardy and Towner are
quite different and one should not compare the results of these two
approaches. There is extensive work in which other corrections are considered
[1, 26, 27]. However, if we adopt the various values of the corrections listed
in Ref. [8], except $\delta_{C}$ of course, and insert our values of this
parameter we obtain for $A=60$ the $V_{ud}^{2}$ to be smaller by $1\%$ than
what was obtained in Hardy and Towner [8]. This is a very rough estimate but
it indicates that in spite of the large difference in the values of
$\delta_{C}$ in the two approaches the change in $V_{ud}^{2}$ is less than 1%.
So at the level of low precision requirement, the value of $V_{ud}^{2}$ is not
very sensitive to the value of $\delta_{C}$. This is not surprising: the percentage of all Coulomb corrections is very small.
In our approach, the calculation of isospin mixing is detailed and advanced.
It can be applied to any even-even nucleus with any isospin in the ground
state. Applying the results of the new calculations of isospin mixing to
obtain $\delta_{C}$ in nuclei with $T>1$ requires additional improvement. We
plan to improve this by calculating explicitly the reduced matrix elements for
the $T-1$, $T$, and $T+1$ components of the IVM state. The overestimate of the amount of isospin mixing described by non-collective single-particle models was already noted a long time ago (see for example [10, 12, 13, 15, 16]). For example, isospin mixing determines the spreading width of the Isobaric Analog Resonance (see references [10, 13]). When using the single-particle model and a one-particle optical potential, the spreading widths for the Isobaric Analog Resonance turn out to be factors of 5 (or more) larger than the experimental ones. It was noted that by introducing some correlations among the various particle-hole states one can reduce this very large discrepancy [10, 13], but this still remains a problem today.
Now we summarize the second part, the influence of the anti-analog state on
isospin mixing in the isobaric analog state and the correction to the beta-
decay Fermi transition. Recently a paper was published [28] in which a
considerable deviation from isospin symmetry was observed for a $T=3/2$
isospin multiplet in the Sr region. Following that paper, an article was
published [29] in which an attempt was made to explain the results in [28]
using a shell-model approach.
The anti-analog state (like the other configuration states) is not an eigenstate of the full Hamiltonian and is fragmented by the strong force. Here we treated the anti-analog as a doorway, assigning it an average energy position and placing there its unperturbed configuration. This of course is an approximation. The admixtures found in the restricted shell model are small, not exceeding 0.5%, and usually of the order of 0.1% [24]. It is not clear
whether these calculations include fully the anti-analog mixing caused by the
one-body Coulomb field. Our approach here is more transparent and does not
require complicated computations. It would be nice to know whether indeed in
shell-model calculations the large contribution to the isospin impurities
comes from components that make up the anti-analog.
Our calculations depend on the values of several parameters which are not well
determined. The value of the energy separation between the analog and anti-
analog is uncertain. The fragmentation of the anti-analog strength will affect
the outcome, and of course, the structure of the parent state is important.
Even when the basic configuration of the parent state does not involve two
different orbits, configuration mixing will bring in some higher orbits, which
would validate the anti-analog mechanism. Although this is a second-order
effect in the evaluation of the isospin impurity, in some cases configuration
mixing is substantial and the admixtures of configurations with higher orbits
could be large, thus leading to sizeable anti-analog components. One should
also mention the iso-multiplets of excited states. In this case, one can often
find situations in which higher orbits (from different major shells) compose
the excited states. The two-body part of the Coulomb force is neglected in our
approach. A shell model calculation takes into account the two-body part of
the Coulomb interaction.
The isospin mixing in the analog due to the discussed mechanism has a
particular dependence on the mass $A$, the charge $Z$, and the excess neutron
number $(N-Z)$. It is conceivable that in studies of exotic nuclei one can
choose favorable cases where isospin mixing is large and learn more about this
subject.
## Acknowledgements
We thank Chien-Yeah Seng and Dai-Nam Le for the discussions.
## Declaration of competing interest
The authors declare that they have no known competing financial interests or
personal relationships that could have appeared to influence the work reported
in this paper.
## References
* [1] I. S. Towner, J. C. Hardy, Improved calculation of the isospin-symmetry-breaking corrections to superallowed Fermi $\beta$ decay, Phys. Rev. C 77 (2008) 025501. doi:10.1103/PhysRevC.77.025501.
* [2] N. Auerbach, Coulomb corrections to superallowed $\beta$ decay in nuclei, Phys. Rev. C 79 (2009) 035502. doi:10.1103/PhysRevC.79.035502.
* [3] H. Liang, N. V. Giai, J. Meng, Isospin corrections for superallowed Fermi $\beta$ decay in self-consistent relativistic random-phase approximation approaches, Phys. Rev. C 79 (2009) 064316. doi:10.1103/PhysRevC.79.064316.
* [4] W. Satuła, J. Dobaczewski, W. Nazarewicz, M. Rafalski, Microscopic Calculations of Isospin-Breaking Corrections to Superallowed Beta Decay, Phys. Rev. Lett. 106 (2011) 132502. doi:10.1103/PhysRevLett.106.132502.
* [5] V. Rodin, Relation between isospin-symmetry-breaking correction to superallowed $\beta$ decay and the energy of the charge-exchange giant monopole resonance, Phys. Rev. C 88 (2013) 064318. doi:10.1103/PhysRevC.88.064318.
* [6] L. Xayavong, N. A. Smirnova, Radial overlap correction to superallowed ${0}^{+}\rightarrow{0}^{+}$ $\beta$ decay reexamined, Phys. Rev. C 97 (2018) 024324. doi:10.1103/PhysRevC.97.024324.
* [7] W. E. Ormand, B. A. Brown, Isospin-mixing corrections for $fp$-shell Fermi transitions, Phys. Rev. C 52 (1995) 2455–2460. doi:10.1103/PhysRevC.52.2455.
* [8] J. C. Hardy, I. S. Towner, Superallowed ${0}^{+}\rightarrow{0}^{+}$ nuclear $\beta$ decays: 2020 critical survey, with implications for ${V}_{\mathit{ud}}$ and CKM unitarity, Phys. Rev. C 102 (2020) 045501. doi:10.1103/PhysRevC.102.045501.
* (9) N. Auerbach, A. Klein, A microscopic theory of giant electric isovector resonances, Nuclear Physics A 395 (1) (1983) 77–118. doi:10.1016/0375-9474(83)90090-8.
* (10) N. Auerbach, Coulomb effects in nuclear structure, Physics Reports 98 (5) (1983) 273–341. doi:10.1016/0370-1573(83)90008-X.
* [11] B. M. Loc, N. Auerbach, G. Colò, Isospin mixing and Coulomb mixing in ground states of even-even nuclei, Phys. Rev. C 99 (2019) 014311. doi:10.1103/PhysRevC.99.014311.
* [12] A. Bohr, B. R. Mottelson, Nuclear Structure, Volume I: Single-Particle Motion, World Scientific Publishing Company, 1998. doi:10.1142/3530.
* [13] N. Auerbach, J. Hüfner, A. K. Kerman, C. M. Shakin, A Theory of Isobaric Analog Resonances, Rev. Mod. Phys. 44 (1972) 48–125. doi:10.1103/RevModPhys.44.48.
* [14] G. A. Miller, A. Schwenk, Isospin-symmetry-breaking corrections to superallowed fermi $\beta$ decay: Formalism and schematic models, Phys. Rev. C 78 (2008) 035501. doi:10.1103/PhysRevC.78.035501.
* [15] A. Yeverechyahu, PhD thesis, Tel Aviv University (unpublished) (1976).
* [16] G. F. Bertsch, A. Mekjian, Isospin impurities in nuclei, Annual Review of Nuclear Science 22 (1) (1972) 25–64. doi:10.1146/annurev.ns.22.120172.000325.
* [17] A. Corsi, O. Wieland, S. Barlini, A. Bracco, F. Camera, V. L. Kravchuk, G. Baiocco, L. Bardelli, G. Benzoni, M. Bini, N. Blasi, S. Brambilla, M. Bruno, G. Casini, M. Ciemala, M. Cinausero, F. C. L. Crespi, M. D’Agostino, M. Degerlier, A. Giaz, F. Gramegna, M. Kmiecik, S. Leoni, A. Maj, T. Marchi, K. Mazurek, W. Meczynski, B. Million, D. Montanari, L. Morelli, S. Myalski, A. Nannini, R. Nicolini, G. Pasquali, G. Poggi, V. Vandone, G. Vannini, Measurement of isospin mixing at a finite temperature in 80Zr via giant dipole resonance decay, Phys. Rev. C 84 (2011) 041304. doi:10.1103/PhysRevC.84.041304.
* [18] S. Ceruti, F. Camera, A. Bracco, R. Avigo, G. Benzoni, N. Blasi, G. Bocchi, S. Bottoni, S. Brambilla, F. C. L. Crespi, A. Giaz, S. Leoni, A. Mentana, B. Million, A. I. Morales, R. Nicolini, L. Pellegri, A. Pullia, S. Riboldi, O. Wieland, B. Birkenbach, D. Bazzacco, M. Ciemala, P. Désesquelles, J. Eberth, E. Farnea, A. Görgen, A. Gottardo, H. Hess, D. S. Judson, A. Jungclaus, M. Kmiecik, W. Korten, A. Maj, R. Menegazzo, D. Mengoni, C. Michelagnoli, V. Modamio, D. Montanari, S. Myalski, D. Napoli, B. Quintana, P. Reiter, F. Recchia, D. Rosso, E. Sahin, M. D. Salsac, P.-A. Söderström, O. Stezowski, C. Theisen, C. Ur, J. J. Valiente-Dobón, M. Zieblinski, Isospin mixing in ${}^{80}\mathrm{Zr}$: From finite to zero temperature, Phys. Rev. Lett. 115 (2015) 222502. doi:10.1103/PhysRevLett.115.222502.
* (19) N. Auerbach, A. Lev, The role of configuration states in isospin forbidden proton decays of T = 3/2 states, Physics Letters B 34 (1) (1971) 13–16. doi:10.1016/0370-2693(71)90492-8.
* (20) A. McDonald, E. Earle, W. McLatchie, H. Mak, D. Martin, P. Ikossi, Isospin-forbidden particle decays in light nuclei: (IV). Total width of the lowest $T=2$ level of 24Mg, Nuclear Physics A 305 (1) (1978) 151–162. doi:10.1016/0375-9474(78)90169-0.
* [21] P. G. Ikossi, W. J. Thompson, T. B. Clegg, W. W. Jacobs, E. J. Ludwig, Systematics of Isospin Mixing in Proton Elastic Scattering from Light Nuclei, Phys. Rev. Lett. 36 (1976) 1357–1359. doi:10.1103/PhysRevLett.36.1357.
* [22] D. Melconian, S. Triambak, C. Bordeanu, A. García, J. C. Hardy, V. E. Iacob, N. Nica, H. I. Park, G. Tabacaru, L. Trache, I. S. Towner, R. E. Tribble, Y. Zhai, Experimental Validation of the Largest Calculated Isospin-Symmetry-Breaking Effect in a Superallowed Fermi Decay, Phys. Rev. Lett. 107 (2011) 182301. doi:10.1103/PhysRevLett.107.182301.
* [23] M. Bhattacharya, D. Melconian, A. Komives, S. Triambak, A. García, E. G. Adelberger, B. A. Brown, M. W. Cooper, T. Glasmacher, V. Guimaraes, P. F. Mantica, A. M. Oros-Peusquens, J. I. Prisciandaro, M. Steiner, H. E. Swanson, S. L. Tabor, M. Wiedeking, $\mathit{ft}$ value of the ${0}^{+}\rightarrow{0}^{+}$ ${\beta}^{+}$ decay of ${}^{32}\mathrm{Ar}$: A measurement of isospin symmetry breaking in a superallowed decay, Phys. Rev. C 77 (2008) 065503. doi:10.1103/PhysRevC.77.065503.
* [24] A. Signoracci, B. A. Brown, Effects of isospin mixing in the $A=32$ quintet, Phys. Rev. C 84 (2011) 031301. doi:10.1103/PhysRevC.84.031301.
* [25] N. Auerbach, Isospin corrections to super-allowed beta decays in nuclei, J. Phys. Conf. Ser. 533 (2014) 012001. doi:10.1088/1742-6596/533/1/012001.
* (26) C.-Y. Seng, M. Gorchtein, H. H. Patel, M. J. Ramsey-Musolf, Reduced hadronic uncertainty in the determination of ${V}_{ud}$, Phys. Rev. Lett. 121 (2018) 241804. doi:10.1103/PhysRevLett.121.241804.
* [27] C.-Y. Seng, M. Gorchtein, M. J. Ramsey-Musolf, Dispersive evaluation of the inner radiative correction in neutron and nuclear $\beta$ decay, Phys. Rev. D 100 (2019) 013001. doi:10.1103/PhysRevD.100.013001.
* [28] D. E. M. Hoff, A. M. Rogers, S. M. Wang, P. C. Bender, K. Brandenburg, K. Childers, J. A. Clark, A. C. Dombos, E. R. Doucet, S. Jin, R. Lewis, S. N. Liddick, C. J. Lister, Z. Meisel, C. Morse, W. Nazarewicz, H. Schatz, K. Schmidt, D. Soltesz, S. K. Subedi, S. Waniganeththi, Mirror-symmetry violation in bound nuclear ground states, Nature 580 (7801) (2020) 52–55. doi:10.1038/s41586-020-2123-1.
* [29] S. M. Lenzi, A. Poves, A. O. Macchiavelli, Isospin symmetry breaking in the mirror pair 73Sr-73Br, Phys. Rev. C 102 (2020) 031302. doi:10.1103/PhysRevC.102.031302.
# Hypercontractivity of the semigroup
of the fractional Laplacian on the $n$-sphere
Rupert L. Frank, Mathematics 253-37, Caltech, Pasadena, CA 91125, USA, and Mathematisches Institut, Ludwig-Maximilians-Universität München, Theresienstr. 39, 80333 München, Germany <EMAIL_ADDRESS>
Paata Ivanisvili, Department of Mathematics, North Carolina State University, Raleigh, NC 27695 <EMAIL_ADDRESS>
###### Abstract.
For $1<p\leq q$ we show that the Poisson semigroup $e^{-t\sqrt{-\Delta}}$ on
the $n$-sphere is hypercontractive from $L^{p}$ to $L^{q}$ in dimensions
$n\leq 3$ if and only if $e^{-t\sqrt{n}}\leq\sqrt{\frac{p-1}{q-1}}$. We also
show that the equivalence fails in large dimensions.
###### Key words and phrases:
Hypercontractivity, Poisson Semigroup, n-sphere
###### 2010 Mathematics Subject Classification:
39B62, 42B35, 47A30
© 2021 by the author. This paper may be reproduced, in its entirety, for non-commercial purposes.
## 1\. Introduction
### 1.1. Poisson semigroup on the sphere
Let
$\displaystyle\mathbb{S}^{n}=\{x\in\mathbb{R}^{n+1}\,:\,\|x\|=1\}$
be the unit sphere in $\mathbb{R}^{n+1}$, where
$\|x\|=\sqrt{x_{1}^{2}+\ldots+x_{n+1}^{2}}$ for
$x=(x_{1},\ldots,x_{n+1})\in\mathbb{R}^{n+1}$. Let $\Delta$ be the
Laplace–Beltrami operator on $\mathbb{S}^{n}$. We will be working with
spherical polynomials $f:\mathbb{S}^{n}\to\mathbb{C}$, i.e., finite sums
$f(\xi)=\sum_{d\geq 0}H_{d}(\xi),$
where $H_{d}$ satisfies
$\Delta H_{d}=-d(d+n-1)H_{d}.$
The heat semigroup $e^{t\Delta}$ is defined by $e^{t\Delta}f=\sum_{d\geq
0}e^{-d(d+n-1)t}H_{d}$. The hypercontractivity result for the heat semigroup
on $\mathbb{S}^{n}$ states that for any $1\leq p\leq q<\infty$, any integer
$n\geq 1$, and any $t\geq 0$ we have
(1) $\displaystyle\|e^{t\Delta}f\|_{q}\leq\|f\|_{p}\quad\text{for all}\
f\qquad\text{if and only if}\qquad e^{-tn}\leq\sqrt{\frac{p-1}{q-1}},$
where
$\|f\|_{p}^{p}=\|f\|_{L^{p}(\mathbb{S}^{n},d\sigma_{n})}^{p}=\int_{\mathbb{S}^{n}}|f|^{p}d\sigma_{n}$,
and $d\sigma_{n}$ is the normalized surface area measure of $\mathbb{S}^{n}$.
The case $n=1$ was solved independently in [9] and [10], and the general case
$n\geq 2$ was settled in [7]. We remark that the condition
$e^{-tn}\leq\sqrt{\frac{p-1}{q-1}}$ in (1) is different from the classical
hypercontractivity condition $e^{-t}\leq\sqrt{\frac{p-1}{q-1}}$ in Gauss space
due to Nelson [8], and on the hypercube due to Bonami [2]. The appearance of
the extra factor $n$ in (1) can be explained from the fact that the spectral
gap (the smallest nonzero eigenvalue) of $-\Delta$ equals $n$.
In [7] the authors ask what the corresponding hypercontractivity estimates are
for the Poisson semigroup on $\mathbb{S}^{n}$. As pointed out in [7], there
are two natural Poisson semigroups on $\mathbb{S}^{n}$ one can consider: 1)
$e^{-t\sqrt{-\Delta}}f$, and 2) $P_{r}f=\sum r^{d}H_{d}$, $r\in[0,1]$. Notice
that when $n=1$ both of these semigroups coincide (with $r=e^{-t}$). It was
conjectured by E. Stein that
$\|P_{r}f\|_{q}\leq\|f\|_{p}\quad\text{if and only if}\quad
r\leq\sqrt{\frac{p-1}{q-1}}$
holds on $\mathbb{S}^{n}$ for all $n\geq 1$. Besides the case $n=1$ mentioned
above, the case $n=2$ was confirmed in [4], and the general case $n\geq 2$ in
[1].
The question of hypercontractivity for the semigroup $e^{-t\sqrt{-\Delta}}$ on
$\mathbb{S}^{n}$ for $n\geq 2$, however, has remained open. Since the spectral
gap of $\sqrt{-\Delta}$ equals $\sqrt{n}$, it is easy to see that a necessary
condition for the estimate $\|e^{-t\sqrt{-\Delta}}f\|_{q}\leq\|f\|_{p}$ is
$e^{-t\sqrt{n}}\leq\sqrt{\frac{p-1}{q-1}}$; see Section 2.1. One might
conjecture that this necessary condition is also sufficient. Surprisingly, it
turns out the answer is positive in small dimensions and negative in large
dimensions.
###### Theorem 1.1.
Let $1<p<q$, $n\geq 1$, and $t\geq 0$. Then
(2)
$\displaystyle\textup{(i)}\;\;\|e^{-t\sqrt{-\Delta}}f\|_{q}\leq\|f\|_{p}\quad\text{for
all}\
f\qquad\text{implies}\qquad\textup{(ii)}\;\;e^{-t\sqrt{n}}\leq\sqrt{\frac{p-1}{q-1}}.$
Moreover, (ii) implies (i) in dimensions $n\leq 3$. Finally, for any $q>\max\{2,p\}$, there exists $n_{0}=n_{0}(p,q)\geq 4$ such that (ii) does not imply (i) in dimensions $n$ with $n\geq n_{0}$.
It remains an open problem to find a necessary and sufficient condition on
$t>0$ in dimensions $n\geq 4$ for which the semigroup $e^{-t\sqrt{-\Delta}}$
is hypercontractive from $L^{p}(\mathbb{S}^{n})$ to $L^{q}(\mathbb{S}^{n})$.
## 2\. Proof of Theorem 1.1
### 2.1. The necessity part $\textup{(i)}\Rightarrow\textup{(ii)}$
We recall this standard argument for the sake of completeness. Let
$f(\xi)=1+\varepsilon H_{1}(\xi)$ where $H_{1}$ is any (real) spherical
harmonic of degree $1$, i.e., $\Delta H_{1}=-nH_{1}$. Then
$e^{-t\sqrt{-\Delta}}f(\xi)=1+\varepsilon e^{-t\sqrt{n}}H_{1}(\xi)$. As
$\varepsilon\to 0$, we obtain
$\int_{\mathbb{S}^{n}}|1+\varepsilon e^{-t\sqrt{n}}H_{1}(\xi)|^{q}d\sigma_{n}=\int_{\mathbb{S}^{n}}\left(1+q\varepsilon e^{-t\sqrt{n}}H_{1}(\xi)+\frac{q(q-1)}{2}\varepsilon^{2}e^{-2t\sqrt{n}}H^{2}_{1}(\xi)+O(\varepsilon^{3})\right)d\sigma_{n}=1+\frac{q(q-1)}{2}\varepsilon^{2}e^{-2t\sqrt{n}}\|H_{1}\|_{2}^{2}+O(\varepsilon^{3}),$
where we used $\int_{\mathbb{S}^{n}}H_{1}\,d\sigma_{n}=0$.
Thus,
(3)
$\displaystyle\|e^{-t\sqrt{-\Delta}}f\|_{q}=1+\frac{q-1}{2}\varepsilon^{2}e^{-2t\sqrt{n}}\|H_{1}\|_{2}^{2}+O(\varepsilon^{3}).$
Similarly, we have
(4)
$\displaystyle\|f\|_{p}=1+\frac{p-1}{2}\varepsilon^{2}\|H_{1}\|_{2}^{2}+O(\varepsilon^{3}).$
Substituting (3) and (4) into the inequality
$\|e^{-t\sqrt{-\Delta}}f\|_{q}\leq\|f\|_{p}$, and taking $\varepsilon\to 0$ we
obtain the necessary condition $e^{-2t\sqrt{n}}\leq\frac{p-1}{q-1}$, which coincides with (ii) in (2).
### 2.2. The sufficiency part $\textup{(ii)}\Rightarrow\textup{(i)}$ in
dimensions $n=1,2,3$.
Our goal is to show that if $1<p<q$ and if $t\geq 0$ is such that $e^{-2t\sqrt{n}}\leq\frac{p-1}{q-1}$, then
(5) $\displaystyle\|e^{-t\sqrt{-\Delta}}f\|_{q}\leq\|f\|_{p}\quad\text{in
dimensions}\quad n=1,2,3.$
The case $n=1$ was confirmed in [10]. In what follows we assume $n\in\{2,3\}$. First we need the fact that the heat semigroup $e^{t\Delta}$
has a nonnegative kernel. Indeed, for each $t>0$ there exists
$K_{t}:[-1,1]\to[0,\infty)$ such that
$e^{t\Delta}f(\xi)=\int_{\mathbb{S}^{n}}K_{t}(\xi\cdot\eta)f(\eta)d\sigma_{n}(\eta),$
where $\xi\cdot\eta=\sum_{j=1}^{n+1}\xi_{j}\eta_{j}$ for
$\xi=(\xi_{1},\ldots,\xi_{n+1})$ and $\eta=(\eta_{1},\ldots,\eta_{n+1})$, see,
for example, Proposition 4.1 in [7]. Next, we recall the subordination formula
(6) $\displaystyle e^{-x}=\frac{1}{\sqrt{\pi}}\int_{0}^{\infty}e^{-y-x^{2}/(4y)}\frac{dy}{\sqrt{y}}\quad\text{valid for all}\ x\geq 0.$
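As a quick sanity check (ours, not the paper's), (6) can be verified numerically at a few arbitrary test points:

```python
# Numerical check of the subordination formula (6) at arbitrary test points.
import numpy as np
from scipy.integrate import quad

for x in (0.5, 1.0, 3.0):
    val, _ = quad(lambda y: np.exp(-y - x**2/(4*y)) / np.sqrt(y), 0, np.inf)
    print(f"x = {x}: integral/sqrt(pi) = {val/np.sqrt(np.pi):.8f} "
          f"vs e^(-x) = {np.exp(-x):.8f}")
```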
By the functional calculus, we deduce that the Poisson semigroup
$e^{-t\sqrt{-\Delta}}$ has a positive kernel with total mass $1$. The latter
fact together with the convexity of the map $x\mapsto|x|^{p}$ for $p\geq 1$
implies that $\|e^{-t\sqrt{-\Delta}}f\|_{p}\leq\|f\|_{p}$ for all $t\geq 0$.
Thus, it suffices to verify (5) for those $t\geq 0$ for which
$e^{-2t\sqrt{n}}=\frac{p-1}{q-1}$.
Next we claim that it suffices to verify (5) only for the powers $p,q$ such
that $2\leq p\leq q$. Indeed, assume (5) holds for $2\leq p\leq q$. By duality
and the symmetry of the semigroup $e^{-t\sqrt{-\Delta}}$ we obtain
$\|e^{-t\sqrt{-\Delta}}f\|_{p^{\prime}}\leq\|f\|_{q^{\prime}}$ where
$p^{\prime}=\frac{p}{p-1}$, $q^{\prime}=\frac{q}{q-1}$, $1<q^{\prime}\leq
p^{\prime}\leq 2$. Notice that
$\frac{p-1}{q-1}=\frac{q^{\prime}-1}{p^{\prime}-1}$, thus we extend (5) to all
$p,q$ such that $1<p\leq q\leq 2$. It remains to extend (5) for those powers
$p,q$ when $p\leq 2\leq q$. To do so, let $p\leq 2\leq q$, and let $t\geq 0$ be such that $e^{-2t\sqrt{n}}=\frac{p-1}{q-1}$. Choose $t_{1},t_{2}\geq 0$ so that
$t=t_{1}+t_{2}$ and $e^{-2t_{1}\sqrt{n}}=p-1$ and
$e^{-2t_{2}\sqrt{n}}=\frac{1}{q-1}$. Then we have
$\displaystyle\|e^{-t\sqrt{-\Delta}}f\|_{q}=\|e^{-t_{2}\sqrt{-\Delta}}(e^{-t_{1}\sqrt{-\Delta}}f)\|_{q}\leq\|e^{-t_{1}\sqrt{-\Delta}}f\|_{2}\leq\|f\|_{p}.$
In what follows we assume $2\leq p\leq q$. We will use a standard argument to
deduce the validity of the hypercontractivity estimate from a log Sobolev
inequality. Nonnegativity of the kernel for the Poisson semigroup combined
with the triangle inequality implies $|e^{-t\sqrt{-\Delta}}f|\leq
e^{-t\sqrt{-\Delta}}|f|$ for any $f$. Thus by continuity and standard density
arguments we can assume that $f\geq 0$, $f$ is not identically zero, and $f$
is smooth in (5).
The equality $e^{-2t\sqrt{n}}=\frac{p-1}{q-1}$ implies
$q=1+e^{2t\sqrt{n}}(p-1)$. Fix $p\geq 2$ and consider the map
$\displaystyle\varphi(t)=\|e^{-t\sqrt{-\Delta}}f\|_{q(t)}>0,\quad t\geq 0,$
where $q(t)=1+e^{2t\sqrt{n}}(p-1)$. If we show $\varphi^{\prime}(t)\leq 0$,
then we obtain $\varphi(t)\leq\varphi(0)=\|f\|_{p}$, and this proves the
sufficiency part. Let $\psi(t)=\ln\varphi(t)$. We have
$\displaystyle\frac{q^{2}}{q^{\prime}}\psi^{\prime}(t)=-\ln\left(\int_{\mathbb{S}^{n}}(e^{-t\sqrt{-\Delta}}f)^{q}d\sigma_{n}\right)+\frac{\int_{\mathbb{S}^{n}}(e^{-t\sqrt{-\Delta}}f)^{q}\left(\ln(e^{-t\sqrt{-\Delta}}f)^{q}+\frac{q^{2}}{q^{\prime}}\frac{\partial_{t}e^{-t\sqrt{-\Delta}}f}{e^{-t\sqrt{-\Delta}}f}\right)d\sigma_{n}}{\int_{\mathbb{S}^{n}}(e^{-t\sqrt{-\Delta}}f)^{q}d\sigma_{n}}.$
Clearly $\psi^{\prime}\leq 0$ if and only if
$\displaystyle\int_{\mathbb{S}^{n}}(e^{-t\sqrt{-\Delta}}f)^{q}\ln(e^{-t\sqrt{-\Delta}}f)^{q}d\sigma_{n}-\int_{\mathbb{S}^{n}}(e^{-t\sqrt{-\Delta}}f)^{q}d\sigma_{n}\ln\left(\int_{\mathbb{S}^{n}}(e^{-t\sqrt{-\Delta}}f)^{q}d\sigma_{n}\right)$
$\displaystyle\leq\frac{q^{2}}{q^{\prime}}\int_{\mathbb{S}^{n}}(e^{-t\sqrt{-\Delta}}f)^{q-1}\sqrt{-\Delta}(e^{-t\sqrt{-\Delta}}f)d\sigma_{n}.$
Let $g=e^{-t\sqrt{-\Delta}}f\geq 0$. Then we can rewrite the previous
inequality as
(7) $\displaystyle\int_{\mathbb{S}^{n}}g^{q}\ln
g^{q}d\sigma_{n}-\int_{\mathbb{S}^{n}}g^{q}d\sigma_{n}\ln\left(\int_{\mathbb{S}^{n}}g^{q}d\sigma_{n}\right)\leq\frac{q^{2}}{2(q-1)\sqrt{n}}\int_{\mathbb{S}^{n}}g^{q-1}\sqrt{-\Delta}gd\sigma_{n},$
where we used the fact that $q^{\prime}=2(q-1)\sqrt{n}$. Since
$e^{-t\sqrt{-\Delta}}$ is contractive in $L^{\infty}(\mathbb{S}^{n})$ with a
nonnegative, symmetric kernel, it follows that the validity of the estimate
(7) for $q=2$ implies (7) for all $q\in[2,\infty)$; see, e.g., Theorem 4.1 in
[3].
Let $g=\sum_{k\geq 0}H_{k}$ be the decomposition of $g$ into its spherical
harmonics. Then the estimate (7) for $q=2$ takes the form
$\displaystyle\int_{\mathbb{S}^{n}}g^{2}\ln
g^{2}d\sigma_{n}-\int_{\mathbb{S}^{n}}g^{2}d\sigma_{n}\ln\left(\int_{\mathbb{S}^{n}}g^{2}d\sigma_{n}\right)\leq\sum_{k\geq
0}2\sqrt{\frac{k(k+n-1)}{n}}\,\|H_{k}\|_{2}^{2}.$
It follows from Beckner’s conformal log Sobolev inequality [1] (which is a
consequence of Lieb’s sharp Hardy–Littlewood–Sobolev inequality [6]) that for
any smooth nonnegative $g=\sum_{k\geq 0}H_{k}$ we have
$\displaystyle\int_{\mathbb{S}^{n}}g^{2}\ln
g^{2}d\sigma_{n}-\int_{\mathbb{S}^{n}}g^{2}d\sigma_{n}\ln\left(\int_{\mathbb{S}^{n}}g^{2}d\sigma_{n}\right)\leq\sum_{k\geq
0}\Delta_{n}(k)\,\|H_{k}\|_{2}^{2}$
with $\Delta_{n}(k)=2n\sum_{m=0}^{k-1}\frac{1}{2m+n}$. Thus, the estimate (5)
is a consequence of the following lemma.
###### Lemma 2.1.
Let $n\in\\{2,3\\}$. Then for all integers $k\geq 1$ one has
$\displaystyle n\sum_{m=0}^{k-1}\frac{1}{2m+n}\leq\sqrt{\frac{k(k+n-1)}{n}}.$
###### Proof.
We first check the inequality for $k\leq 3$ by direct computation. Indeed, the
case $k=1$ is an equality. The case $k=2$ can be checked as follows,
$\displaystyle 1+\frac{n}{2+n}=\frac{2+2n}{2+n}\leq\sqrt{\frac{2+2n}{n}},$
which is true because $n(2+2n)\leq(2+n)^{2}$ holds for $n=2,3$. The case $k=3$
can be checked similarly:
$\displaystyle\frac{2+2n}{2+n}+\frac{n}{4+n}\leq\sqrt{\frac{6+3n}{n}}$
holds for $n=2,3$ (notice that this inequality fails for $n=4$).
Next, we assume $k\geq 4$. We have
$\displaystyle\sum_{m=0}^{k-1}\frac{1}{m+\frac{n}{2}}=\frac{2}{n}+\sum_{m=1}^{k-1}\frac{1}{m+\frac{n}{2}}\leq\frac{2}{n}+\int_{0}^{k-1}\frac{1}{x+\frac{n}{2}}dx=\frac{2}{n}+\ln\left(\frac{k+\frac{n}{2}-1}{\frac{n}{2}}\right).$
Thus it suffices to show
$\displaystyle\frac{2}{n}+\ln\left(\frac{k+\frac{n}{2}-1}{\frac{n}{2}}\right)-\frac{2}{n}\sqrt{\frac{k(k+n-1)}{n}}\leq
0.$
Notice that the left hand side, call it $h(k)$, is decreasing in $k$. Indeed,
we have
$\displaystyle
h^{\prime}(k)=\frac{1}{\frac{n}{2}+k-1}-\frac{2k+n-1}{n\sqrt{kn(k+n-1)}}\leq\frac{1}{\frac{n}{2}+k-1}-\frac{1}{\sqrt{kn}}\leq\frac{1}{2\sqrt{\frac{n}{2}(k-1)}}-\frac{1}{\sqrt{kn}}\leq
0.$
On the other hand, we have for $n=2,3$,
$\displaystyle
h(4)=\frac{2}{n}+\ln\left(\frac{6+n}{n}\right)-\frac{2}{n}\sqrt{\frac{12+4n}{n}}\leq
0.$
Indeed, if $n=2$, $h(4)=1+2\ln 2-\sqrt{10}<0$, and if $n=3$,
$h(4)=\frac{2+3\ln 3-4\sqrt{2}}{3}<0$. ∎
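The small-$k$ cases and the failure at $n=4$ can also be confirmed by brute force; the following sketch is our own check, with an arbitrary cutoff $k<200$:

```python
# Brute-force check of Lemma 2.1: verify the inequality for n = 2, 3 over a
# range of k, and exhibit the failure for n = 4 noted in the proof.
import numpy as np

def lhs(n, k):
    return n * sum(1.0/(2*m + n) for m in range(k))

def rhs(n, k):
    return np.sqrt(k*(k + n - 1)/n)

for n in (2, 3, 4):
    bad = [k for k in range(1, 200) if lhs(n, k) > rhs(n, k) + 1e-12]
    print(f"n = {n}: violations for k < 200:", bad[:5] or "none")
# Expected: none for n = 2, 3; for n = 4 the inequality already fails at
# small k (the proof notes that the k = 3 case fails for n = 4).
```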
### 2.3. Counterexample to $\textup{(ii)}\Rightarrow\textup{(i)}$ in high
dimensions
Let $\lambda:=\frac{n-1}{2}$, and let $C_{d}^{(\lambda)}(x)$ be the Gegenbauer
polynomial
(8) $\displaystyle
C_{d}^{(\lambda)}(x)=\sum_{j=0}^{\lfloor\frac{d}{2}\rfloor}(-1)^{j}\frac{\Gamma(d-j+\lambda)}{\Gamma(\lambda)j!(d-2j)!}(2x)^{d-2j},$
where $\lfloor\frac{d}{2}\rfloor$ denotes the largest integer $m$ such that
$m\leq\frac{d}{2}$, and $\Gamma(x)$ is the Gamma function. Notice that if we
let $Y_{d}(\xi)=C_{d}^{(\lambda)}(\xi\cdot e_{1})$, where
$e_{1}=(1,0,\ldots,0)\in\mathbb{R}^{n+1}$, then $Y_{d}(\xi)$ is a spherical
harmonic of degree $d$ on $\mathbb{S}^{n}$. In particular, for $t\geq 0$ such
that $e^{-2t\sqrt{n}}=\frac{p-1}{q-1}$, the estimate
$\|e^{-t\sqrt{-\Delta}}f\|_{L^{q}(\mathbb{S}^{n})}\leq\|f\|_{L^{p}(\mathbb{S}^{n})}$
applied to $f=Y_{d}(\xi)$ is equivalent to the estimate
(9) $\displaystyle\frac{\|Y_{d}\|_{q}}{\|Y_{d}\|_{p}}\leq
e^{t\sqrt{d(d+n-1)}}=\left(\frac{q-1}{p-1}\right)^{\frac{1}{2}\sqrt{\frac{d(d+n-1)}{n}}}.$
Next, we need the following lemma.
###### Lemma 2.2.
For any $d\geq 0$ we have
(10)
$\displaystyle\lim_{n\to\infty}\,\frac{\|Y_{d}\|_{L^{q}(\mathbb{S}^{n},d\sigma_{n})}}{\|Y_{d}\|_{L^{p}(\mathbb{S}^{n},d\sigma_{n})}}=\frac{\|h_{d}\|_{L^{q}(\mathbb{R},d\gamma)}}{\|h_{d}\|_{L^{p}(\mathbb{R},d\gamma)}},$
where $d\gamma(y)=\frac{e^{-y^{2}/2}}{\sqrt{2\pi}}dy$ is the standard Gaussian
measure on the real line, and $h_{d}(x)$ is the probabilistic Hermite
polynomial
(11) $\displaystyle
h_{d}(x)=\sum_{j=0}^{\lfloor\frac{d}{2}\rfloor}\frac{(-1)^{j}d!}{j!(d-2j)!}\frac{x^{d-2j}}{2^{j}}.$
###### Proof.
Indeed, notice that
(12)
$\displaystyle\|Y_{d}\|_{p}^{p}=\int_{\mathbb{S}^{n}}|C_{d}^{(\lambda)}(\xi\cdot
e_{1})|^{p}d\sigma_{n}(\xi)=\int_{-1}^{1}|C_{d}^{(\lambda)}(t)|^{p}c_{\lambda}(1-t^{2})^{\lambda-\frac{1}{2}}dt,$
where
$c_{\lambda}=\frac{\Gamma(\lambda+1)}{\Gamma(\frac{1}{2})\Gamma(\lambda+\frac{1}{2})}$.
In particular, after the change of variables $t=\frac{s}{\sqrt{2\lambda}}$ in
(12), and multiplying both sides in (12) by $(d!/(2\lambda)^{d/2})^{p}$ we
obtain
$\displaystyle\left(\frac{d!}{(2\lambda)^{d/2}}\right)^{p}\|Y_{d}\|_{p}^{p}=\int_{\mathbb{R}}\left|\frac{d!}{(2\lambda)^{d/2}}C_{d}^{(\lambda)}\left(\frac{s}{\sqrt{2\lambda}}\right)\right|^{p}\frac{c_{\lambda}}{\sqrt{2\lambda}}\left(1-\frac{s^{2}}{2\lambda}\right)^{\lambda-\frac{1}{2}}\mathbbm{1}_{[-\sqrt{2\lambda},\sqrt{2\lambda}]}(s)ds,$
where $\mathbbm{1}_{[-\sqrt{2\lambda},\sqrt{2\lambda}]}(s)$ denotes the
indicator function of the set $[-\sqrt{2\lambda},\sqrt{2\lambda}]$. Notice
that by Stirling’s formula for any $j\geq 0$, and any $d\geq 0$ we have
(13)
$\displaystyle\lim_{\lambda\to\infty}\,\frac{1}{\lambda^{d-j}}\frac{\Gamma(d-j+\lambda)}{\Gamma(\lambda)}=1.$
Therefore, (11) and (8) together with (13) imply that for all $s\in\mathbb{R}$
we have
$\displaystyle\lim_{\lambda\to\infty}\frac{d!}{(2\lambda)^{d/2}}C_{d}^{(\lambda)}\left(\frac{s}{\sqrt{2\lambda}}\right)=h_{d}(s).$
Invoking Stirling’s formula again we have
$\displaystyle\lim_{\lambda\to\infty}\frac{c_{\lambda}}{\sqrt{2\lambda}}\left(1-\frac{s^{2}}{2\lambda}\right)^{\lambda-\frac{1}{2}}\mathbbm{1}_{[-\sqrt{2\lambda},\sqrt{2\lambda}]}(s)=\frac{e^{-s^{2}/2}}{\sqrt{2\pi}}\quad\text{for
all}\quad s\in\mathbb{R}.$
Finally, to apply Lebesgue’s dominated convergence theorem it suffices to
verify that for all $s\in\mathbb{R}$ and all $\lambda\geq\lambda_{0}$ we have
the following pointwise estimates
$\displaystyle a)\quad\frac{c_{\lambda}}{\sqrt{2\lambda}}\left(1-\frac{s^{2}}{2\lambda}\right)^{\lambda-\frac{1}{2}}\mathbbm{1}_{[-\sqrt{2\lambda},\sqrt{2\lambda}]}(s)\leq Ce^{-s^{2}/2}$
$\displaystyle b)\quad\frac{d!}{(2\lambda)^{d/2}}C_{d}^{(\lambda)}\left(\frac{s}{\sqrt{2\lambda}}\right)\leq c_{1}(d)(1+|s|)^{c_{2}(d)},$
where $\lambda_{0},C,c_{1}(d),c_{2}(d)$ are some positive constants
independent of $\lambda$ and $s$.
To verify a) it suffices to consider the case
$s\in[-\sqrt{2\lambda},\sqrt{2\lambda}]$. Since
$\lim_{\lambda\to\infty}\frac{c_{\lambda}}{\sqrt{2\lambda}}=\frac{1}{\sqrt{2\pi}}$
it follows that $\frac{c_{\lambda}}{\sqrt{2\lambda}}\leq C$ for all
$\lambda\geq\lambda_{0}$, where $\lambda_{0}$ is a sufficiently large number.
Next, the estimate $(1-\frac{s^{2}}{2\lambda})^{\lambda-1/2}\leq
C^{\prime}e^{-s^{2}/2}$ for $s\in[-\sqrt{2\lambda},\sqrt{2\lambda}]$ follows
if we show that $(1-\frac{1}{2\lambda})\ln(1-t)\leq
C^{\prime\prime}/\lambda-t$ for all $t:=\frac{s^{2}}{2\lambda}\in[0,1]$ where
$C^{\prime\prime}$ is a universal positive constant. The latter inequality
follows from $\ln(1-t)\leq-t$ for $t\in[0,1]$.
To verify b) it suffices to show that for all $\lambda\geq\lambda_{0}>0$ and
all integers $j$ such that $d\geq j\geq 0$ one has
$\displaystyle\frac{1}{\lambda^{d-j}}\frac{\Gamma(d-j+\lambda)}{\Gamma(\lambda)}\leq
C(d-j),$
where $C(d-j)$ depends only on $d-j$. The latter inequality follows from (13)
provided that $\lambda\geq\lambda_{0}$ where $\lambda_{0}$ is a sufficiently
large number.
Thus, it follows from Lebesgue's dominated convergence theorem that
$\lim_{n\to\infty}\frac{d!}{(n-1)^{d/2}}\|Y_{d}\|_{L^{p}(\mathbb{S}^{n},d\sigma_{n})}=\|h_{d}\|_{L^{p}(\mathbb{R},d\gamma)}.$
The lemma is proved. ∎
Now we fix $q>\max\\{p,2\\}$ and, in order to prove the failure of
$\textup{(ii)}\Rightarrow\textup{(i)}$ for all sufficiently large $n$, we
argue by contradiction and assume that there is a sequence of dimensions
$\\{n_{j}\\}_{j\geq 1}$ going to infinity such that
$\textup{(ii)}\Rightarrow\textup{(i)}$ in Theorem 1.1 does hold. Then, by
combining (9) and (10) we have
(14)
$\displaystyle\frac{\|h_{d}\|_{L^{q}(\mathbb{R},d\gamma)}}{\|h_{d}\|_{L^{p}(\mathbb{R},d\gamma)}}\leq\left(\frac{q-1}{p-1}\right)^{\frac{\sqrt{d}}{2}}.$
On the other hand, a consequence of the main result in [5] and the assumption
$q>\max\\{p,2\\}$ is that
$\displaystyle\lim_{d\to\infty}\left(\frac{\|h_{d}\|_{L^{q}(\mathbb{R},d\gamma)}}{\|h_{d}\|_{L^{p}(\mathbb{R},d\gamma)}}\right)^{1/d}=\left(\frac{q-1}{\max\\{p,2\\}-1}\right)^{\frac{1}{2}},$
which is in contradiction with (14).
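To make the contradiction concrete, here is an illustrative computation of ours for $p=2$, $q=4$ and a few arbitrary degrees, comparing the Gaussian norm ratio against the bound in (14):

```python
# By Larsson-Cohn [5], ||h_d||_4 / ||h_d||_2 grows like 3^{d/2} up to
# lower-order factors, while the right-hand side of (14) is only 3^{sqrt(d)/2}.
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_hermitenorm, factorial

def gauss_norm(d, r):
    # L^r norm of the probabilist's Hermite polynomial h_d w.r.t. d(gamma);
    # tails beyond |x| = 20 are negligible for the degrees used here
    val, _ = quad(lambda x: abs(eval_hermitenorm(d, x))**r
                  * np.exp(-x**2/2) / np.sqrt(2*np.pi), -20, 20)
    return val**(1/r)

p, q = 2, 4
for d in (2, 4, 8, 12):
    ratio = gauss_norm(d, q) / np.sqrt(factorial(d))  # ||h_d||_2 = sqrt(d!)
    bound = ((q - 1)/(p - 1))**(np.sqrt(d)/2)         # right-hand side of (14)
    print(f"d = {d:2d}: norm ratio = {ratio:9.2f}, bound in (14) = {bound:6.2f}")
# The ratio overtakes the bound at moderate d, so (14) cannot hold for all d.
```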
###### Remark 2.1.
Let $B(x,y)$ be the Beta function. The estimate (9) for $p=2$ and $q=4$ takes
the form
(15)
$\displaystyle\int_{-1}^{1}|C_{d}^{(\frac{n-1}{2})}(t)|^{4}(1-t^{2})^{\frac{n-2}{2}}dt\leq
9^{\sqrt{\frac{d(d+n-1)}{n}}}\frac{(n-1)^{2}B(1/2,n/2)}{d^{2}(2d+n-1)^{2}B^{2}(n-1,d)},$
where we used the fact that
$\|Y_{d}\|^{2}_{L^{2}(\mathbb{S}^{n})}=\frac{n-1}{d(2d+n-1)B(n-1,d)}$. The
numerical computations show that the inequality (15) already fails for $d=7$
and $n=13$.
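This check is straightforward to reproduce; the following sketch (ours) evaluates both sides of (15) with standard special-function routines:

```python
# Reproducing the numerical check in Remark 2.1 for d = 7, n = 13.
import numpy as np
from scipy.integrate import quad
from scipy.special import gegenbauer, beta

d, n = 7, 13
lam = (n - 1)/2
C = gegenbauer(d, lam)     # Gegenbauer polynomial C_d^{(lambda)}

lhs, _ = quad(lambda t: C(t)**4 * (1 - t**2)**((n - 2)/2), -1, 1)
rhs = (9**np.sqrt(d*(d + n - 1)/n) * (n - 1)**2 * beta(0.5, n/2)
       / (d**2 * (2*d + n - 1)**2 * beta(n - 1, d)**2))
print(f"LHS = {lhs:.4e}, RHS = {rhs:.4e}, (15) holds: {lhs <= rhs}")
# Expected: LHS > RHS, i.e. (15) fails for d = 7 and n = 13, as stated above.
```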
### Acknowledgements
Partial support through US National Science Foundation grants DMS-1363432 and
DMS-1954995 (R.L.F.) as well as DMS-2052645, DMS-1856486, and CAREER-
DMS-2052865, CAREER-DMS-1945102 (P.I.) is acknowledged.
## References
* [1] W. Beckner, Sobolev inequalities, the Poisson semigroups, and analysis on the sphere $\mathbb{S}^{n}$. Proc. Natl. Acad. Sci. USA 89 (1992), 4816–4819.
* [2] A. Bonami, Étude des coefficients de Fourier des fonctions de $L^{p}(G)$. Ann. Inst. Fourier 20 (1970), 335–420.
* [3] L. Gross, Logarithmic Sobolev inequalities and contractivity properties of semigroups. In: Dell’Antonio G., Mosco U. (eds) Dirichlet Forms. Lecture Notes in Mathematics, vol 1563. Springer, Berlin, Heidelberg.
* [4] S. Janson, On hypercontractivity for multipliers on orthogonal polynomials. Ark. Mat. 21 (1983), 97–110.
* [5] L. Larsson-Cohn, $L_{p}$-norms of Hermite polynomials and an extremal problem on Wiener chaos. Ark. Mat. 40 (2002), 133–144.
* [6] E. H. Lieb, Sharp constants in the Hardy–Littlewood–Sobolev and related inequalities. Ann. of Math. 118 (1983), no. 2, 349–374.
* [7] C. Mueller, F. Weissler, Hypercontractivity for the Heat Semigroup for Ultraspherical Polynomials and on the $n$-Sphere. Journal of Functional Analysis 48 (1982), 252–282.
* [8] E. Nelson, The free Markoff field. Journal of Functional Analysis 12 (1973), 211–227.
* [9] O. Rothaus, Logarithmic Sobolev inequalities and the spectrum of Sturm–Liouville operators. Journal of Functional Analysis 39 (1980), 42–56.
* [10] F. Weissler, Logarithmic Sobolev inequalities and hypercontractivity estimates on the circle. Journal of Functional Analysis 37 (1980), 218–234.
# Cryptoasset Competition and Market Concentration in the Presence of Network
Effects
Konstantinos Stylianou (University of Leeds, Woodhouse Lane, Leeds, United Kingdom), Leonhard Spiegelberg (Brown University, 115 Waterman St, Providence, Rhode Island, USA), Maurice Herlihy (Brown University, 115 Waterman St, Providence, Rhode Island, USA) and Nic Carter (Coin Metrics, Boston, Massachusetts, USA)
(2020)
###### Abstract.
When network products and services become more valuable as their userbase
grows (network effects), this tendency can become a major determinant of how
they compete with each other in the market and how the market is structured.
Network effects are traditionally linked to high market concentration, early-
mover advantages, and entry barriers, and in the cryptoasset market they have also been used as a valuation tool. The recent resurgence of Bitcoin has likewise been partly attributed to network effects. We study the existence of network
effects in six cryptoassets from their inception to obtain a high-level
overview of the application of network effects in the cryptoasset market. We
show that contrary to the usual implications of network effects, they do not
serve to concentrate the cryptoasset market, nor do they accord any one
cryptoasset a definitive competitive advantage, nor are they consistent enough
to be reliable valuation tools. Therefore, while network effects do occur in
cryptoasset networks, they are not a defining feature of the cryptoasset
market as a whole.
network effects, Metcalfe, Metcalfe’s Law, concentration, monopolization,
monopoly
## 1\. Introduction
The rapid appreciation and popularization of cryptoassets over the past few
years has incited a large body of scholarship on understanding their behavior
and their positioning in the market, particularly financial markets. As
cryptoassets gradually became a household investment and transaction medium,
they began to invite greater regulatory and investor scrutiny, which created
the need to better understand their function as a market of their own and as
market that forms part of the greater economy. While early analyses focused on
simple economic illustrations of the functioning of cryptoasset networks in
isolation, later work started exploring market-wide phenomena, including the
dominance patterns of some cryptoassets over others.
Since cryptoassets are based on blockchain networks and are therefore network
markets, one important parameter that reflects and determines their behaviour
is the relationship between their userbase and their value. This relationship
has a long history in network markets under the theory of network effects.
Network effects theory states that the value of a product or service $V$ is
co-determined by its userbase $u$. Then, for products or services that obey
network effects, one can derive the value of the network, and therefore their
relative value to each other too, for a given userbase assuming that the
relationship between $V$ and $u$ is known, for example $V\propto u\log(u)$,
$V\propto u^{2}$, $V\propto 2^{u}$ etc.
Initially, this insight attracted attention because of its predictive
potential of cryptoasset valuation. Indeed a number of studies attempted to
develop valuation models based on network effects that could be used by
investors to predict the future value of their assets and the value of the
market as a whole. However, the implications of network effects go far beyond
valuation and, understood properly, they inform also the structure and
competitiveness of the market making them a key input into policy-making and
regulatory decisions. Most notably, markets that are characterized by network
effects are commonly thought to be winner-take-all markets, where first mover
advantage is key, entry barriers are high, networks hit tipping points of no
return, and contestable monopolies or high concentration can be the natural
state of the market. This is for two reasons: firstly, the value of joining a network increases with the number of existing adopters, because the bigger the existing userbase, the greater the utility every new adopter derives from it (pure network effects); and secondly, every new adopter who joins the network also benefits existing adopters (network externalities). In both cases bigger equals better
(everything else equal), creating an incentive for users to join the network
where the value will grow larger both for new and for existing users, which
creates a snowball effect. This kind of power concentration in networks that
exhibit network effects usually makes regulators uneasy, and therefore, if
cryptoassets exhibit network effects, they would (and should) attract higher
regulatory and investor scrutiny.
Extant literature on network effects in cryptoassets is limited and has
focused almost exclusively on confirming or rejecting, usually for Bitcoin
only, a specific application of network effects, namely Metcalfe’s law, which
states that the value of a network is proportional to the square of its users
($V\propto u^{2}$), and, if confirmed, it would be a useful valuation tool.
However, this line of literature presents only a binary distinction between
the existence or not of a specific type of network effects, focuses only on
valuation, uses sub-optimal data, and has also been temporally limited to the
period before the recent resurgence in mid 2019, or excludes periods,
therefore missing key parts in the cryptoasset market evolution.
By contrast, our analysis takes a more comprehensive view of network effects
in cryptoassets, and, while it confirms that network effects occur in
cryptoassets, it shows that they do not have the usual implications associated
with them in terms of according competitive advantages, resulting in market
concentration, or serving as a reliable valuation tool. Firstly, we define
network effects to occur when the value of a cryptoasset network changes
supra- or infra-proportionately to changes in its userbase, thereby showing
both positive and reverse network effects, while not being constrained by a
specific version of network effects. We also use two proxies for value and
userbase each to better capture what users perceive as the value of the
network and how the network size (userbase) should be measured, and we base
our results on cleaner vetted data. Moreover, we examine multiple cryptoassets
to get a broader view of the industry, as opposed to previous works which
focused on Bitcoin. Lastly, our analysis covers the entire history of the
studied cryptoassets, which includes the valuation spikes and subsequent
declines in 2014, 2017 and 2019. The spike in 2019 and the preceding decline
from the heights of 2017 are particularly valuable because they help us show
that the results obtained in previous studies which sampled only up to early
2018 do not hold based on more recent history.
Figure 1. Price and userbase development since 2016.
## 2\. Background, Motivation and Implications
Network effects were first studied in the 1970s to more accurately capture the
value and growth of telecommunications networks (Rohlfs, 1974). The intuition
was that when the nature of a product or service is such that it relies on
linking users together, the value of the product $V$ is co-determined by its
userbase $u$. More specifically, for every user added to the userbase of a
product, value is created not just for the joining user but for existing users
as well. As a result, each new user derives value from joining a network that
is relative to the size of the network (pure network effects) and creates an
externality in the form of value that is captured by the network of existing
users (network externality). Conversely, for every exiting user, value is lost
both for the exiting user and for existing users. This type of network effects
was called direct network effects to distinguish it from later extensions to
the theory, which accounted for the effects changes in the network’s userbase
have on complementary products and services developed for that network (Church
et al., 2008). This latter type was called indirect network effects, and it is
not the kind that will concern us here.
The powerful implication of (direct) network effects is the increasing returns
to the userbase and ultimately to the product exhibiting network effects.
Because for products that exhibit network effects every new adopter makes the
product more valuable relative to existing size of its network, it creates
incentives for other adopters to adopt the product with the bigger network
over its competitors. Consequently, the more the userbase grows the more it
invites further growth rendering the product increasingly more valuable and
competitive. The exact relationship between value and userbase can vary; While
one can say that in the most basic version of network effects the value of a
product grows linearly with the number of users added to its userbase
($V\propto u$) (Swann, 2002), most commonly network effects are used to
describe relationships that are logarithmic ($V\propto u\log(u)$) (Briscoe et
al., 2006), quadratic ($V\propto u^{2}$) (Metcalfe, 2013) or other (e.g.
$V\propto 2^{u}$).
Network effects have found application in numerous industries and business
models ranging from telecommunications (Birke and Swann, 2004; Gallaugher and
Wang, 2002), to web servers, PC software (Gandal, 1995), airline reservation
systems, ATMs (Economides and Salop, 1992), and platform systems (Church and
Gandal, 2005). Indeed, the intuition and implications of network effects have
been so pervasive that they have been invoked in any industry where the
consumption or use of a product by consumers makes the product more valuable
for others (for a collection of relevant literature see (Garcia-Swartz and
Garcia-Vicente, 2015). It is no surprise that cryptoassets have also been
hypothesized to exhibit network effects. The combination of the inherent
network nature, the meteoric rise in popularity (read: userbase), and the
substantial price volatility (read: value) has suggested a strong, if elusive, relationship.
The particular motivation behind the study of network effects in cryptoassets
has so far been to discover a valuation formula: if we know the function
between userbase and value, then with informed guesses on the network’s growth
we can predict future prices (Peterson, 2018; Van Vliet, 2018; Shanaev et al.,
2019). But valuation formulas reduce network effects down to a binary
distinction represented by a single function. While useful as prediction tools
and high-level descriptors of cryptoasset trends, valuation formulas provide
little granularity.
Our motivation and goal is, instead, to provide a more high-level view of how
network effects influence the cryptoasset market as a whole, and particularly
what they say about the potential for concentration in the market and about
competitive (dis)advantages of one cryptoasset over others. These are the most
impactful implications of network effects, and they are desirable for those
networks that can exploit them, but undesirable for their competitors or for
regulators who have to deal with concentrated markets. We work with numerous
cryptoassets so that we can obtain a market-wide overview (limited by how big
and representative our sample is), and we study them from their inception
until early 2020 which allows us to capture all historically important phases,
including the resurgence in 2019, which extant literature has not had a chance
to consider. This type of approach allows us to draw insights about the
structure and competitive dynamics of the cryptoasset market. It goes back to
the early wave of ”Bitcoin maximalism”, which stood for the idea that the
optimal number of currencies as alternatives to the mainstream financial
system is one, and altcoins will eventually be rendered obsolete as more and
more users gravitate toward the biggest, most stable, most widely accepted
cryptocurrency, namely Bitcoin. At the time, Bitcoin maximalism was rejected
by Vitalik Buterin, the creator of Ethereum, correctly pointing out that the
cryptoasset universe is not a homogeneous thing, and that therefore there is
no one single ”network” around which network effects would form (Buterin,
2014). We expand on that thinking.
Looking at network effects to study the competitive dynamics of the
cryptoasset market and its potential to concentrate around one or a small
number of cryptoassets can provide useful insights for industrial policy.
Normally, a showing that cryptoassets exhibit network effects would suggest
that early cryptoassets have a first-mover advantage and may lock the market
in (Economides, 1996; Katz and Shapiro, 1985; Gandal and Halaburda, 2016),
even if they are intrinsically inferior to other comparable cryptoassets
(Farrell and Saloner, 1985; Hagiu and Rothman, 2016; Briscoe et al., 2006).
While the market seems to have moved away from that danger, network effects
theory also suggests that, assuming homogeneity, once a cryptoasset hits a
tipping point, it may fully prevail because new users will always prefer the
cryptoasset with the larger userbase (the so called ”winner-take-all” markets,
which Bitcoin maximalism relied on) (Economides, 1996; Katz and Shapiro,
1985). Homogeneity is, of course, a matter of degree, and it is still likely
that, if a cryptoasset exhibits stronger network effects than its peers, it
can prevail at least within a sub-segment of the market. The flip side of
network effects can also be observed, whereby the loss of a user results in a
supra-proportionate loss of value (i.e. more value than the user intrinsically
contributed individually), which incites further losses and so on. This means
that rapid depreciation is more likely in cryptoassets characterized by
network effects. The rapid appreciation and depreciation cycles coupled with
the winner-take-all characteristic can in turn result in cryptoasset markets
that are successively dominated by a new winner in every era (successive
contestable monopolies). Then, if this is the natural state of the market,
artificially forcing more competition may not be optimal.
These insights are well-applicable in financial markets. For instance, the
influential ”Cruickshank report”, an independent report on banking services in
the United Kingdom prepared for the UK Treasury, which has in turn influenced
regulatory and legal decisions (noa, 2007, 2011), warned about the far
reaching implications of network effects: ”Network effects also have profound
implications for competition, efficiency and innovation in markets where they
arise. Establishing critical mass is the first hurdle, as the benefits to
customers and businesses of a network arise only gradually with increasing
use. It is possible to imagine a world in which electronic cash is widely held
and used, for example, but much harder to see how to get there. Once a network
is well established, it can be extremely difficult to create a new network in
direct competition. … Where network effects are strong, the number of
competing networks is likely to be small and the entry barriers facing new
networks will be high” (Cruickshank, 2000). As the fintech industry is heating
up, network effects have also been cited there as a strong factor in
entrenching existing market power of financial services (see e.g. the recent
proposed acquisition of Plaid by Visa (Cyphers, 2020)), and such risks have
also been highlighted in the cryptoasset market, with models showing that
certain conditions can allow cryptoasset markets to become oligopolies and
market players entrench their position in the market (Arnosti and Weinberg,
2018; Cong et al., 2020).
## 3\. Prior Literature and Contribution
A number of papers have investigated aspects of the application of network
effects in cryptoasset networks. The focus has been to determine whether the
value of cryptoassets (and mainly Bitcoin) complies with network effects, and
in particular on whether it follows Metcalfe’s law, which is the most popular
iteration of network effects and stipulates that the value of a network grows
at a rate proportional to the square of the number of users ($V\propto
u^{2}$).
The early influential analysis by Peterson (Peterson, 2018) remains the point
of reference. Peterson developed a valuation model for Bitcoin’s price based
on Metcalfe’s law for the period 2009-2017, using wallets as a proxy for
users, Bitcoin prices as the proxy for value, and a Gompertz function to
account for growth. He found that the price of Bitcoin follows Metcalfe’s law
with R-square of 85 percent. In a revised version of the original paper that
extends through 2019, Peterson re-confirms the application of Metcalfe’s law
to Bitcoin (Peterson, 2019). However, he excludes significant periods of time
on the grounds of price manipulation, during which the value of the Bitcoin
network, as measured by the USD price of Bitcoin, lies well outside of
Peterson’s model predictions. Van Vliet (Van Vliet, 2018) enhanced Peterson’s
model by incorporating Rogers’ diffusion of innovation models to better
capture population parameters and growth rates. By doing so, van Vliet raised
R-squared to 99 percent. Shanaev et al. (Shanaev et al., 2019) acknowledge the
utility of Peterson’s and van Vliet’s analyses but depart from them in that
their model does not rely on historical data for the estimation of the
coefficient of proportionality, which raises an endogeneity problem. They
still use Metcalfe’s law but only as one of the building blocks of
their model. Civitarese (Civitarese, 2018) rejects the applicability of
Metcalfe’s law to the value of the Bitcoin network by running a cointegration
test between price and an adjusted number of wallets’ connections.
Gandal and Halaburda (Gandal and Halaburda, 2016) use a completely different
approach to examine the existence of network effects in cryptoasset networks.
They define network effects as the reinforcement effects the price of a
cryptoasset has on the price of another cryptoasset. With Bitcoin as the base
cryptoasset, the idea is that, if network effects are in place, as Bitcoin
becomes more popular (price increase), more people will believe that it will
win the winner-take-all race against other cryptoassets resulting in further
demand and higher prices. Therefore, network effects would manifest themselves
as an inverse (negative) correlation between the prices of the sampled
cryptoassets. For the period May 2013 - July 2014, their results showed signs
of network effects after April 2014.
Our analysis complements and differs from prior literature in several ways.
Firstly, we do not focus on a specific network effects formula; we rather look
at when, to what degree, in which cryptoassets, and for what proxies of value
and userbase network effects are observable (defined as supra-proportional
change in value relative to userbase) regardless of which particular
curve/function they follow. Secondly, we go beyond Bitcoin to examine six
cryptoassets that we have selected as representative of different features and
characteristics to better be able to observe potential industry-wide trends.
This helps us notice whether one cryptoasset has the potential to dominate the
market or multiple cryptoassets benefit from the same network effect forces.
Thirdly, we use different parameters as proxies for value and userbase to more
fully capture the functionality and usage of cryptoassets in the market.
Importantly, we do not rely on the total number of users as a proxy for
userbase like extant literature, because many of those addresses are dormant
or permanently inaccessible and therefore economically irrelevant. Fourthly,
we study the full history of cryptoassets from their inception to today which
allows us to observe their different phases, including the price collapse in
2018 and the resurgence in mid-2019, which dramatically change the picture of
network effects and which have been missed by previous studies. Lastly, we
work with data sets that have been meticulously cleaned to filter out spurious
or manipulative activity, which improves the accuracy of our results compared
to data-sets that are pulled unfiltered from the network. Our analysis
confirms the existence of network effects, but also that they do not have the
results usually associated with them on the market.
## 4\. Methodology and Development
We study the application of network effects in Bitcoin (BTC), Dogecoin (DOGE),
Ethereum (ETH), Litecoin (LTC), XRP and Tezos (XTZ). The selection of these
cryptoassets was made on the basis of diversity and feasibility. We aimed to
study cryptoassets that exhibited different attributes in terms of age, market
capitalization and any special features that make them stand out from other
competing cryptoassets in order to build a representative sample of the
crypto-economy (Irresberger et al., 2020). We also limited the study to
cryptoassets for which we could get reliable, standardized time-series data
from the cryptoassets’ initial release to the time of the study (Metrics,
2020). The unreliability of the prices reported by exchanges in the early days
of the industry led us to consider Bitcoin from July 2010, Litecoin from March
2013, and XRP from August 2014; the rest are considered from their beginning. Table 1
summarizes the attributes of each chosen cryptoasset.
Cryptoasset | Age | Market cap (2020) | Features
---|---|---|---
Bitcoin (BTC) | Old (2009) | V. Large ($170B) | Popularity, first cryptocurrency, UTXO based
Dogecoin (DOGE) | Old (2013) | V. Small ($0.3B) | ”Joke cryptocurrency”, early BTC contender, UTXO based
Ethereum (ETH) | Medium (2015) | Medium ($25B) | Turing complete, programmable, account based
Litecoin (LTC) | Old (2011) | Small ($2.6B) | First major BTC fork, UTXO based
XRP | Old (2012) | Small ($8B) | Consensus, fintech-orientated, account based
Tezos (XTZ) | New (2018) | Small ($1.7B) | Centralized PoS, on chain governance, account based
Table 1. List of studied cryptoassets, chosen to cover different
characteristics.
We first define network effects. Network effects occur where the value of the
network $V$ grows supra-proportionately to the number of users $n$ that
participate in the network. Reverse network effects occur where the value $V$
drops supra-proportionately to the number of users $n$ that leave the network.
Unless there is a reason to distinguish between positive and reverse network
effects, we collectively refer to them as network effects. Therefore, we
define network effects to occur in cryptoassets when a positive value change
$\Delta V>0$ is larger than a positive userbase change $\Delta u>0$, or when a
negative value change $\Delta V<0$ is smaller than a negative userbase change
$\Delta u<0$. Notice that we do not consider that network effects apply when
value and userbase move in different directions, e.g. when the value increases
while the userbase decreases, regardless of which increases or decreases more.
Thus, network effects occur if
$\Delta V>\Delta u\geq 0\ \lor\Delta V<\Delta u\leq 0$
In our analysis we define change at time $t$ similar to log returns, i.e.
(1) $\displaystyle\Delta V:=\ln\frac{V_{t+1}}{V_{t}}$
(2) $\displaystyle\Delta u:=\ln\frac{u_{t+1}}{u_{t}}$
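As an illustration of this detection rule, the following is a minimal sketch of ours (not the authors' code); the toy series are placeholders standing in for daily value and userbase series (e.g. token price and addresses with non-zero balance):

```python
# Flag positive and reverse network effect (NFX) days per the rule above.
import numpy as np
import pandas as pd

def nfx_masks(value: pd.Series, users: pd.Series):
    """Daily log changes and NFX flags per the definition above."""
    dV = np.log(value / value.shift(1)).dropna()   # Delta V
    du = np.log(users / users.shift(1)).dropna()   # Delta u
    pos = (dV > du) & (du >= 0)   # positive NFX: dV > du >= 0
    rev = (dV < du) & (du <= 0)   # reverse  NFX: dV < du <= 0
    return dV, du, pos, rev

# Toy usage with made-up numbers (placeholders, not real data)
idx = pd.date_range("2020-01-01", periods=5)
price = pd.Series([100.0, 110.0, 105.0, 104.0, 120.0], index=idx)
users = pd.Series([1000.0, 1020.0, 1015.0, 1010.0, 1030.0], index=idx)
dV, du, pos, rev = nfx_masks(price, users)
print(pd.DataFrame({"dV": dV, "du": du, "pos_nfx": pos, "rev_nfx": rev}))
```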
Then, we identify appropriate proxies to represent value $V$ and userbase $u$.
To represent $V$ we use two proxies: (a) token price and (b) transaction
value. The two proxies represent different aspects of the value users assign
to cryptoassets. In theory, even one proxy applied to one cryptoasset would be
enough to demonstrate (or not) network effects (as has, for example, been done
in previous literature that relied only on token price), assuming the proxy
and cryptoasset are representative. However, because cryptoassets are
differentiated resulting in diversified usage patterns, and because the chosen
proxies express different ways by which users perceive the value of the
network, a multitude of cryptoassets and proxies was used in an effort to
better represent the industry.
Token Price (PriceUSD): The first parameter we use is token price, which is
the fixed closing price of the asset as of 00:00 UTC the following day (i.e.,
midnight UTC of the current day) denominated in USD (for a detailed
explanation of Coin Metric’s methodology on toke price see (Metrics, 2020)).
Token price expresses value in terms of market forces, namely the point at
which supply meets demand. It is the value that users as market participants
collectively assign to a given cryptoasset by deciding to buy and sell at that
price level. We assume that the studied cryptoassets trade under normal market
conditions; any acknowledgement of price manipulation that may have occurred
at times has been accounted for in the cleaning of data by Coin Metrics
(Metrics, 2020).
Transaction Value (TxTfrValAdjUSD): The second proxy of choice is transaction
value, which expresses the USD value of the sum of native units transferred
between distinct addresses per day removing noise and certain artifacts to
better reflect the real economically relevant value circulating in the
network. The assumption is that as the network becomes more valuable to users,
they will use it more frequently and/or to transfer greater value among them.
Therefore, transaction value as a proxy sees cryptoassets as means of
transaction. We considered and rejected transaction count as an appropriate
proxy, because on some networks a large number of recorded transactions are
unrelated to value transfer, but rather to the operation of the network, e.g.
consensus formation on Tezos (Perez et al., 2020). One could retort that even
these non-value-carrying transactions reflect engagement with the network and
that therefore are an indication of the value of the network to users. Even
so, lumping together value-carrying and operational transactions would taint
the comparison across cryptoassets, since on some cryptoassets the majority of
transactions are operational (e.g. Tezos, see (Perez et al., 2020)), while on
others value-carrying (e.g. Bitcoin).
Next, to represent $u$ we select the following proxies: (a) addresses with non-zero balance and (b) trailing 6-month active addresses. Using different proxies to represent the userbase more fully captures the relationship between value and userbase. We considered and rejected counting userbase based on total number
of addresses (like all previous literature), because of the large number of
inactive addresses. Contrary to other industries where network effects have
been studied and where inactive users are eventually purged from the network
(e.g. mobile phone subscriptions, social networks), so that total user count
may still be a good approximation of the economically meaningful userbase,
this is not the case with cryptoassets. Instead we opted for two variants of
addresses with non-zero balance, as defined below.
Addresses with Non-Zero Balance (AdrBalCnt): This proxy represents the sum
count of unique addresses holding any amount of native units as of the end of
that day. Only native units are considered (e.g., a 0 ETH balance address with
ERC-20 tokens would not be considered). The utility of this proxy lies in that
it excludes all non-economically active addresses, the assumption being that
addresses with zero balance are dormant (similar to bank accounts with zero
balance). This choice responds to criticism that has been raised with regard
to extant literature that tended to use all addresses or wallets as a proxy
for users. Despite our choice of improved metric, it still remains a fact that
there is no one-to-one mapping between addresses and actual users, which is a
common problem to any network or service, e.g. the same person may have
multiple bank accounts. While there are methods to de-cluster actual users
from wallets and addresses, these are not sufficiently precise and are
unavailable or inapplicable across cryptoassets (Harrigan and Fretter, 2016).
We also acknowledge that on networks with lower transaction fees it is easier
to generate and/or maintain addresses with balance, and to counter that we
could raise the amount of native units the counted addresses should have, but
this would introduce a subjectivity question without even fully eradicating
the initial problem of spurious addresses.
Trailing 6-Month Active Addresses (6MAdrActCnt): This proxy counts all unique
addresses that have been active at least once over the trailing 6-month period
from the time of measurement. Repeat activity is not double-counted.
Traditionally, most userbase measurements are taken in time frames that range
from one month to one year. Given that cryptoassets are of relatively young
age, which may suggest that their userbase is expected to interact with them
less frequently, and that part of their utility involves simply owning them,
which does not generate any activity, we decided that a 6-month time frame
sufficiently captures active userbase.
Before we derive network effects, we first calculate the Pearson correlation
between value $V$ and users $u$ which is informative in terms of their overall
relationship. Next, we obtain relevant measurements of network effects. We
rely predominantly on the PriceUSD-AdrBalCnt pair of proxies for value and
userbase, but additional measurements are in the Appendix. To see how
prevalent network effects are in the studied cryptoassets, we calculate, for each cryptoasset, the ratio of days on which network effects were observed to total days (separately for positive and reverse). To see how strong network effects are, we calculate, for each cryptoasset, the sum of the network effect observations over the days on which they occurred (separately for positive and reverse). To compare the strength of network effects across cryptoassets, we normalize this sum to a 100-day period. The results are presented in Part 5 and the analysis of the results in Part 6.
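Continuing the sketch above, the summary metrics can be computed as follows; we assume, as our own guess since the text does not pin it down, that the magnitude of a network effect observation on a given day is the excess $|\Delta V-\Delta u|$:

```python
# Prevalence, summed strength, and strength normalized to a 100-day period.
import numpy as np
import pandas as pd

def nfx_summary(dV: pd.Series, du: pd.Series, mask: pd.Series) -> dict:
    total_days = len(dV)
    nfx_days = int(mask.sum())
    strength = float((dV - du).abs()[mask].sum())  # assumed magnitude |dV-du|
    return {
        "nfx_days": nfx_days,
        "prevalence": nfx_days / total_days if total_days else np.nan,
        "sum_strength": strength,
        "per_100_days": 100 * strength / nfx_days if nfx_days else np.nan,
    }

# e.g. nfx_summary(dV, du, pos) and nfx_summary(dV, du, rev), reusing the
# masks from the previous snippet
```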
Metric abbr | Metric meaning
---|---
PriceUSD | Token price
TxTfrValAdjUSD | Transaction value
AdrBalCnt | Addresses with non-zero balance
6MAdrActCnt | Trailing 6-month active addresses
NFX | Network effects
Table 2. Legend of metrics in use.
## 5\. Results
We are looking for network effects in the relationship between value $V$ and
users $u$ of various cryptoassets as represented by the proxies defined
previously. Four pairs (2x2 proxies) are possible:
* •
`Token Price - Addresses with Non-Zero Balance`: This pair demonstrates
network effects expressed as the change of monetary value of a cryptoasset
relative to the users that hold any amount of that cryptoasset. By counting
only accounts with non-zero balance, we filter out economically dormant users.
* •
`Token Price - Trailing 6-month Active Addresses`: This pair demonstrates
network effects expressed as the change of monetary value of a cryptoasset
relative to the users that have been active at least once in the trailing
6-month period on that cryptoasset’s network. Counting all active users over a
recent time segment (usually 1, 6 or 12 months) is a common measurement of
network or platform userbase and less conservative than daily active users.
* •
`Transaction Value - Addresses with Non-Zero Balance`: This pair demonstrates
network effects expressed as the change of transaction value of a cryptoasset
relative to the users that hold any amount of that cryptoasset.
* •
`Transaction Value - Trailing 6-month Active Addresses`: This pair
demonstrates network effects expressed as the change of transaction value of a
cryptoasset relative to the users that have been active at least once in the
trailing 6-month period on that cryptoasset’s network.
Before we derive network effects, we calculate, based on the above pairs, the
Pearson correlation between value $V$ and users $u$ which tells us whether, as
a general matter, cryptoasset value and userbase are moving in the same
direction. This already provides an indication of whether cryptoassets become
more valuable as their adoption increases.
Cryptoasset | Value proxy | AdrBalCnt | 6MAdrActCnt
---|---|---|---
BTC | PriceUSD | 0.878760 | 0.800890
BTC | TxTfrValAdjUSD | 0.771601 | 0.734617
DOGE | PriceUSD | 0.532856 | 0.255025
DOGE | TxTfrValAdjUSD | 0.258791 | 0.141790
ETH | PriceUSD | 0.256837 | 0.475199
ETH | TxTfrValAdjUSD | 0.048093 | 0.214427
LTC | PriceUSD | 0.646814 | 0.844012
LTC | TxTfrValAdjUSD | 0.258648 | 0.431706
XRP | PriceUSD | 0.551157 | 0.803027
XRP | TxTfrValAdjUSD | 0.189622 | 0.278429
XTZ | PriceUSD | -0.477943 | -0.681394
XTZ | TxTfrValAdjUSD | -0.169407 | -0.240346
Table 3. Pearson correlation between value and user proxies
It is evident that only BTC shows a strong correlation between value and
userbase, at least when userbase is measured by our main proxy of total
addresses with non-zero balance (AdrBalCnt), with LTC showing the next highest
correlation, which is, however, average and only holds when value is measured
as value in fiat currency (PriceUSD). Correlations when userbase is measured
as addresses that have been active in the trailing 6-month period
(6MAdrActCnt) tend to be higher although still not consistently so. Higher
correlation using 6MAdrActCnt might be explained on the grounds that user
activity picks up during phases of large price movements. Overall, the
mediocre and inconsistent correlations between value and userbase provide a
first indication that a blanket conclusion that the cryptoasset market is
characterized or not by network effects is unwarranted.
Next, we obtain relevant measurements based on the PriceUSD-AdrBalCnt pair of
proxies for value and userbase as presented in Table 4 (additional
measurements for other pairs are in the Appendix). As explained in the
methodology, we believe these are the most appropriate proxies. The ratio column of Table 4 shows the prevalence of network effects for each cryptoasset, calculated as the ratio of days on which network effects were observed to total days (separately for positive and reverse). The sum column of Table 4 shows the strength of network effects, calculated as the sum of the network effect observations over the days on which they occurred (separately for positive and reverse). The last column of Table 4 shows the relative strength of network effects across cryptoassets, obtained by normalizing the sum to a 100-day period. This allows us to compare how strong network effects are across cryptoassets regardless of how prevalent they are.
Crypto | Total days | Days of NFX (pos / rev) | Sum (strength) of NFX (pos / rev) | Ratio of NFX days to total days (pos / rev) | Relative strength of NFX (pos / rev)
---|---|---|---|---|---
Bitcoin | 3461 | 1434 / 243 | 47.1 / 9.2 | 0.400 / 0.070 | 3.28 / 3.78
Doge | 2175 | 695 / 295 | 30.7 / 11.1 | 0.310 / 0.130 | 4.40 / 3.70
Ethereum | 1614 | 707 / 12 | 33.5 / 0.2 | 0.430 / 0.007 | 4.70 / 1.66
Litecoin | 2473 | 722 / 354 | 34.5 / 11.2 | 0.290 / 0.140 | 4.77 / 3.16
XRP | 1973 | 901 / - | 41 / - | 0.450 / - | 4.55 / -
Tezos | 558 | 244 / - | 10.8 / - | 0.430 / - | 4.40 / -
Table 4. Network effects measurements based on the Token price - Addresses with non-zero balance proxy pair
Figure 2. Network effect observations and distribution for (a) BTC, (b) DOGE, (c) ETH, (d) LTC, (e) XRP, (f) XTZ (blue: positive NFX, red: reverse NFX, white: no NFX); userbase measured by total addresses with non-zero balance, value measured by USD token price.
## 6\. Analysis
Our results are useful in reaching a number of conclusions on how network
effects inform the structure and evolution of the cryptoasset market.
### (1) Network effects do not provide precise valuation predictions:
The most common application of network effects theory has been to draw
insights into future cryptoasset pricing based on the evolution of their
userbase. Our results indicate that network effect observations in
cryptoassets are frequent but inconsistent and therefore they cannot be relied
on, generally, as a valuation tool as previous literature suggests (Figures 2
and 3). They are most frequent in XRP (45 percent of time in the pair Token
Price-Addresses with Non-Zero Balance) and least frequent in LTC (29 percent
of time in the same pair). While they appear more consistent in ETH and XRP,
their results can be somewhat misleading at first glance: ETH’s and XRP’s
userbase (AdrBalCnt) was constantly increasing and so any supra-proportionate
increase in price registered as a (positive) network effect observation (blue
lines in (c) and (e) in Figure 2). However, the positive network effect
observations are frequently punctuated by days/periods of no network effect
observations during which the price either does not rise supra-proportionately
to userbase or drops. In cryptoassets such as BTC and LTC, where userbase
fluctuates, it is easier to notice the changes in network effects trends (blue
and red lines in (a) and (d) in Figures 2 and 3), even though network effect
frequency is comparable to ETH and XRP. Therefore, it is hard to conclude that
in any cryptoasset network effects exhibit constant patterns that, if extended
into the future, can hold predictive value. This does not mean that we do not
acknowledge the exponential long-term price increase of some cryptoassets
(Figure 1), but we note that this is not linked consistently to their userbase
growth, which is what network effects theory suggests. One explanation of why
our results do not support the conclusions of previous studies can relate to
the different time frames. Most previous studies’ datasets end around the
valuation peak of January 2018, missing the precipitous fall in 2018 and the
subsequent rise in 2019, which upend the relatively smoother network effect
curves of valuations up until the end of 2017. Another explanation relates to
methodology. For example, Peterson’s revised study, which covers up to 2020
and confirms the finding of the paper’s previous popular version that
Bitcoin’s valuation follows Metcalfe’s law, excludes certain sizeable time
periods, which, if accounted for, show a poor(er) fit (Peterson, 2019). A
third explanation relates to the proxies used. Some previous studies rely on
wallets (total addresses) as the proxy for userbase, which is a more crude
measurement than our preferred addresses with non-zero balance, as the latter
show only economically active users and are therefore a better approximation
of relevant userbase.
### (2) Reverse network effects are also noticeable meaning that cryptoassets
are vulnerable to rapid decline, not just conducive to rapid growth:
While network effects have mostly been used to describe growth patterns, they
are equally applicable in describing decline. Reverse network effects reflect
situations where a decrease in users is linked to a larger decrease in value.
Such observations are important, because they show that each user loss incurs
a greater loss of value and therefore expose the potential for a rapid decline
of the network once user exodus begins. Reverse network effects therefore
highlight the precariousness of success (as measured by proxies of value).
Most cryptoassets exhibited at least one prolonged period where reverse
network effects were dominant, during which phases their value contracted
disproportionately to the contraction of their userbase ending up less
valuable than their userbase size would otherwise suggest or mandate during
that period. This is noticeable both when userbase is measured by addresses
with non zero balance, but it is even more pronounced when userbase is
measured as trailing 6-month active addresses (Figure 3). This makes sense
since the users active in the trailing 6-month period are more likely to be
responsive to price fluctuations compared to users who simply hold some
balance on their account. From Figure 3 it is also evident that user
disengagement is almost consistently observed after every price crash (as
manifested through the reverse network effects that begin 6 months after many
of the crashes), and the fact that price continues to decrease supra-
proportionately to userbase, as measured by active users in the trailing
6-month period, 6 months after the crash, may be indicative of the lasting
effects user exodus has on the value of cryptoasset networks. Generally,
however, while reverse network effects serve as a cautionary note that rapid
decline of value can be triggered by user exit, they are weaker in magnitude
than positive network effects (Table 4). So, overall, positive network effects
(albeit inconsistent) still seem to characterize cryptoasset networks.
### (3) Cryptoassets do not seem to be a winner-take-all market:
A common corollary of network effects is that they eventually cause the market
to gravitate toward oligopolistic structure, since, everything else equal,
users prefer to join the network where the value from their joining will be
maximized. This causes a ”rich-get-richer” effect where the most valuable
network continues to become even more valuable as users prefer to join that
over others. Such markets tend to become oligopolistic, with the usual
downsides of such industry structure (higher prices, reduced output, entry
barriers; lower variety and innovation), and can therefore be a cause for
concern. For this to be more likely to happen the various networks
(=cryptoassets) must be undifferentiated and switching among and multi-homing
across networks must be rare or costly (Schmalensee, 2011). These features do
not seem to characterize the cryptoasset market, which accordingly appears
less susceptible to a winner-take-all trend, at least on account of network
effects. Indeed, of the thousands of available cryptoassets many serve
different purposes, and users can own multiple cryptoassets at the same time
and enter and exit their networks without friction. As evidenced by our
results, the fact that the various cryptoassets we studied exhibit network
effects of comparable relative strength (Column 7 in Table 4), and that they
retain their userbase and valuation cycles (Figure 1) seems to suggest that
the underlying market features, including network effects, do not lead it
toward an oligopolistic structure.
### (4) Network effects strength across cryptoassets is comparable and
therefore network effects do not accord a single cryptoasset a strong
comparative advantage over its peers, undermining fears of concentration:
Besides frequency and duration, i.e. what period of a cryptoasset's lifetime
is dominated by network effects, another useful parameter of network effect
observations in cryptoassets is their strength, i.e. the magnitude of the
impact of a userbase change on value change (Shankar and Bayus, 2003). Strong
network effects can indicate higher homogeneity or cohesion within the
network, where each new user (e.g. investor) affects the existing users of
that closely-knit network more than they would a looser network, which is in
turn reflected in the value of the network. Alternatively, strong network
effects may indicate stronger reputational effects, where the addition of each
new user signals major changes for the network, which are then reflected in
its value. Our results show that the comparative strength of network effects
across the studied cryptoassets is similar (Table 4). This leads us to believe
that no single cryptoasset benefits from network effects significantly more
than its peers and therefore that no cryptoasset enjoys an overwhelming
competitive advantage over its peers on account of network effects. A
necessary corollary observation is that network effects accrue at similar
levels across the studied cryptoassets, which means that network effects, as a
phenomenon, characterize the cryptoasset industry as a whole (at least based
on our sample), not just Bitcoin, which has been the main subject of many
extant studies in the area. This is not a surprising finding, but it is worth
highlighting that it lends support to the previous point: the structure of the
cryptoasset market does not seem to be one where network effects lead to
concentration around a small number of cryptoassets or help individual
cryptoassets overtake their peers. This is most likely because cryptoassets
are differentiated and multi-homing and switching are pervasive.
### (5) Network effects are not consistently observed during the early days of
cryptoassets and therefore it is doubtful that they can be relied on as a tool
to bootstrap a new cryptoasset:
A common business model when launching new products or services in digital
markets is to exploit network effects to quickly establish a growing foothold.
Particularly if the product or service is also the first of its kind to hit
the market, network effects can dramatically augment the first mover
advantage, everything else equal. Our results indicate that network effects
are not consistently observed in the studied cryptoassets during their early
days (the first year of data); in particular, DOGE, XTZ and LTC exhibit
consistent positive network effects by neither token price (PriceUSD) nor
transaction value (TxTfrValAdjUSD) as proxies for value (Figures 2 and 5). The
lack of consistency is even more pronounced when userbase is measured by
active addresses in the trailing 6-month period, an instructive measure here
because it tracks recent user activity, the driver of early adoption. In
Figure 3, only BTC and ETH have a claim to positive early network effects, and
in ETH they are sparser. This suggests that new cryptoassets cannot
necessarily hope that network effects will assist their initial uptake. It is
useful to dispel this hypothesis, because investors look for patterns in
events that may trigger valuation changes (e.g. the hypothesis that
cryptoasset value, as measured in monetary terms, increases once the
cryptoasset is listed on a major crypto-exchange).
### (6) Comparison between network effects on price and transaction value
reveals sensitivity to price, which can be a competitive disadvantage:
Extant literature has relied exclusively on token price as the proxy for
network value. Using transaction value too helps us draw useful comparisons.
For this, it is most instructive to rely on trailing 6-month active addresses
as the proxy for userbase, because this proxy is more responsive to value
fluctuations. Then, a comparison between the strength of network effects
measured by token price (PriceUSD) and by transaction value (TxTfrValAdjUSD)
reveals that some cryptoassets experience greater fluctuations in their
transaction value relative to their token price. During upturns, network
effects tell us that token price and transaction value increase more than the
userbase increases, and during downturns, reverse network effects show the
opposite. By comparing, for each cryptoasset, the ratio of the sum of network
effects when value is measured by token price to the sum of network effects
when value is measured by transaction value, one can observe differences in
how transaction value is affected across cryptoassets. Specifically, the
ratios for BTC, DOGE, ETH and LTC are similar, ranging from 0.12 to 0.14 for
positive network effects and 0.07 to 0.09 for reverse network effects, whereas
the ratio is 0.07 for XRP and 0.06 for XTZ for positive network effects, and
0.04 and 0.03 respectively for reverse network effects (compare the sum ratios
in Figures 3 and 4). This means that during periods of positive network
effects, XRP's and XTZ's transaction value grows more than their token price
grows relative to their userbase, and that during periods of reverse network
effects, XRP's and XTZ's transaction value drops more than their token price
drops relative to their userbase. This kind of increased volatility may be
generally undesirable, but it is particularly problematic during downturns
(reverse network effects), because it shows that activity on the XRP and XTZ
networks is more drastically affected, making them more sensitive and less
resilient, which is a competitive disadvantage. Our results also hold when we
look exclusively at 2017 and 2018, the years with the most sustained price
increase and decrease, respectively.
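A hypothetical numerical illustration of the ratio comparison above may help:
for each asset, divide the sum of network-effect strengths measured by token
price by the sum measured by transaction value. The numbers below are made up;
the paper's actual sums underlie Figures 3 and 4.

```python
# Hypothetical illustration of the sum-ratio comparison; values are invented.
sums = {  # asset: (sum of NFX by PriceUSD, sum of NFX by TxTfrValAdjUSD)
    "BTC": (10.4, 80.0),
    "XRP": (5.6, 80.0),
}
for asset, (by_price, by_txval) in sums.items():
    # a smaller ratio means transaction value reacts more strongly than price
    print(asset, round(by_price / by_txval, 2))  # e.g. 0.13 vs 0.07
```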
## 7. Conclusion
Network effects can be among the most common and influential factors shaping
market dynamics in industries where products and services are built around
networks. It is no wonder that they have been cited as a determinant in how
cryptoassets grow in value and compete. Our analysis shows that while network
effects do characterize cryptoassets, they do not entail the concentration and
competitive-advantage implications usually associated with them. Our work also
invites further research to determine the exact scope and
conditions under which network effects apply. More precise proxies for
userbase and value and accounting for exogenous effects are steps in the right
direction.
## References
* noa (2007) 2007. Morgan Stanley / Visa International and Visa Europe. https://ec.europa.eu/competition/antitrust/cases/dec_docs/37860/37860_629_1.pdf
* noa (2011) 2011. _Competition and Choice in Retail Banking - Ninth Report of Session 2010–11_. Technical Report HC 612–I. House of Commons. https://publications.parliament.uk/pa/cm201011/cmselect/cmtreasy/612/612i.pdf
* Arnosti and Weinberg (2018) Nick Arnosti and S. Matthew Weinberg. 2018. Bitcoin: A Natural Oligopoly. In _10th Innovations in Theoretical Computer Science Conference_, Avrim Blum (Ed.). Vol. 124. Schloss Dagstuhl, 5.
* Birke and Swann (2004) Daniel Birke and G. M. Peter Swann. 2004. Network effects in mobile telecommunications: An empirical analysis. _Journal of Evolutionary Economics_ 16, 1-2 (2004), 65–84.
* Briscoe et al. (2006) Bob Briscoe, Andrew Odlyzko, and Benjamin Tilly. 2006. Metcalfe’s Law Is Wrong - Communications Networks Increase in Value as They Add Members But by How Much? _IEEE Spectrum_ 43, 7 (July 2006), 34–39. https://doi.org/10.1109/MSPEC.2006.1653003
* Buterin (2014) Vitalik Buterin. 2014. On Bitcoin Maximalism, and Currency and Platform Network Effects. https://blog.ethereum.org/2014/11/20/bitcoin-maximalism-currency-platform-network-effects/
* Church and Gandal (2005) Jeffrey Church and Neil Gandal. 2005. Platform Competition in Telecommunications. In _The Handbook of Telecommunications Economics (Volume 2)_, Martin Cave, Sumit Kumar Majumdar, and Ingo Vogelsang (Eds.). North-Holland, 119–155.
* Church et al. (2008) Jeffrey Church, Neil Gandal, and David Krause. 2008. Indirect network effects and adoption externalities. _Review of Network Economics_ 7, 3 (2008).
* Civitarese (2018) Jamil Civitarese. 2018. Does Metcalfe’s Law Explain Bitcoin Prices? A Time Series Analysis. _SSRN Electronic Journal_ (2018). https://doi.org/10.2139/ssrn.3107895
* Cong et al. (2020) Lin William Cong, Ye Li, and Neng Wang. 2020. _Tokenomics: Dynamic Adoption and Valuation_. Technical Report 27222. National Bureau of Economic Research.
* Cruickshank (2000) Don Cruickshank. 2000. _Competition in UK banking – A Report to the Chancellor of the Exchequer_. Technical Report. The Stationery Office. 1206–1282 pages. http://www.hm-treasury.gov.uk/documents/financial_services/banking/bankreview/fin_bank_reviewfinal.cfm
* Cyphers (2020) Bennett Cyphers. 2020. Visa Wants to Buy Plaid, and With It, Transaction Data for Millions of People. https://www.eff.org/deeplinks/2020/11/visa-wants-buy-plaid-and-it-transaction-data-millions-people
* Economides (1996) Nicholas Economides. 1996. The Economics of Networks. _International Journal of Industrial Organization_ 14, 6 (Oct. 1996), 673–699. https://doi.org/10.1016/0167-7187(96)01015-6
* Economides and Salop (1992) Nicholas Economides and Steven C. Salop. 1992. Competition and Integration Among Complements, and Network Market Structure. _The Journal of Industrial Economics_ 40, 1 (March 1992), 105–123. https://doi.org/10.2307/2950629
* Farrell and Saloner (1985) Joseph Farrell and Garth Saloner. 1985. Standardization, Compatibility, and Innovation. _The RAND Journal of Economics_ 16, 1 (1985), 70. https://doi.org/10.2307/2555589
* Gallaugher and Wang (2002) John M. Gallaugher and Yu-Ming Wang. 2002. Understanding Network Effects in Software Markets: Evidence from Web Server Pricing. _MIS Quarterly_ 26, 4 (Dec. 2002), 303. https://doi.org/10.2307/4132311
* Gandal (1995) Neil Gandal. 1995. Competing Compatibility Standards and Network Externalities in the PC Software Market. _The Review of Economics and Statistics_ 77, 4 (1995), 599–608. https://doi.org/10.2307/2109809
* Gandal and Halaburda (2016) Neil Gandal and Hanna Halaburda. 2016. Can We Predict the Winner in a Market with Network Effects? Competition in Cryptocurrency Market. _Games_ 7, 3 (2016), 16. https://doi.org/10.3390/g7030016
* Garcia-Swartz and Garcia-Vicente (2015) Daniel D. Garcia-Swartz and Florencia Garcia-Vicente. 2015. Network effects on the iPhone platform: An empirical examination. _Telecommunications Policy_ 39, 10 (Nov. 2015), 877–895. https://doi.org/10.1016/j.telpol.2015.07.011
* Hagiu and Rothman (2016) Andrei Hagiu and Simon Rothman. 2016. Network Effects Aren’t Enough. _Harvard Business Review_ (April 2016), 64–71.
* Harrigan and Fretter (2016) M. Harrigan and C. Fretter. 2016. The Unreasonable Effectiveness of Address Clustering. In _2016 Intl IEEE Conferences on Ubiquitous Intelligence Computing, Advanced and Trusted Computing, Scalable Computing and Communications, Cloud and Big Data Computing, Internet of People, and Smart World Congress (UIC/ATC/ScalCom/CBDCom/IoP/SmartWorld)_. 368–373. https://doi.org/10.1109/UIC-ATC-ScalCom-CBDCom-IoP-SmartWorld.2016.0071
* Irresberger et al. (2020) Felix Irresberger, Kose John, and Fahad Saleh. 2020. The Public Blockchain Ecosystem: An Empirical Analysis. _SSRN Electronic Journal_ (2020). https://doi.org/10.2139/ssrn.3592849
* Katz and Shapiro (1985) Michael L. Katz and Carl Shapiro. 1985. Network Externalities, Competition, and Compatibility. _The American Economic Review_ 75, 3 (1985), 424–440. https://www.jstor.org/stable/1814809
* Metcalfe (2013) Robert Metcalfe. 2013. Metcalfe’s Law After 40 Years of Ethernet. _Computer_ 46, 12 (Dec. 2013), 26–31. https://doi.org/10.1109/MC.2013.374
* Metrics (2020) Coin Metrics. 2020. Coin Metrics Hourly Reference Rates Methodology (version 2.5).
* Perez et al. (2020) Daniel Perez, Jiahua Xu, and Benjamin Livshits. 2020. Revisiting Transactional Statistics of High-scalability Blockchains. 535–550.
* Peterson (2018) Timothy Peterson. 2018. Metcalfe’s Law as a Model for Bitcoin’s Value. _Alternative Investment Analyst Review_ 7, 2 (2018), 9–18. https://doi.org/10.2139/ssrn.3078248
* Peterson (2019) Timothy Peterson. 2019. Bitcoin Spreads Like a Virus. _SSRN Electronic Journal_ (2019). https://doi.org/10.2139/ssrn.3356098
* Rohlfs (1974) Jeffrey Rohlfs. 1974. A Theory of Interdependent Demand for a Communications Service. _The Bell Journal of Economics and Management Science_ 5, 1 (1974), 16–37. https://doi.org/10.2307/3003090
* Schmalensee (2011) Richard Schmalensee. 2011. Jeffrey Rohlfs’ 1974 Model of Facebook: An Introduction. http://ssrn.com/abstract=1802053 (2011).
* Shanaev et al. (2019) Savva Shanaev, Satish Sharma, Arina Shuraeva, and Binam Ghimire. 2019. The Marginal Cost of Mining, Metcalfe’s Law and Cryptocurrency Value Formation: Causal Inferences from the Instrumental Variable Approach. _SSRN Electronic Journal_ (2019). https://doi.org/10.2139/ssrn.3432431
* Shankar and Bayus (2003) Venkatesh Shankar and Barry L. Bayus. 2003. Network effects and competition: an empirical analysis of the home video game industry. _Strategic Management Journal_ 24, 4 (April 2003), 375–384. https://doi.org/10.1002/smj.296
* Swann (2002) G. M. Peter Swann. 2002. The Functional Form of Network Effects. _Information Economics and Policy_ 14, 3 (Sept. 2002), 417–429. https://doi.org/10.1016/S0167-6245(02)00051-3
* Van Vliet (2018) Ben Van Vliet. 2018. An Alternative Model of Metcalfe’s Law for Valuing Bitcoin. _Economics Letters_ 165 (2018), 70–72. https://doi.org/10.1016/j.econlet.2018.02.007
## 8. Appendix
(Figures 3–5 each contain panels (a)–(f) for BTC, DOGE, ETH, LTC, XRP, and XTZ.)
Figure 3. Network effect observations and distribution (blue: positive NFX, red: reverse NFX, white: no NFX); userbase measured by trailing 6-month addresses, value measured by USD token price.
Figure 4. Network effect observations and distribution (blue: positive NFX, red: reverse NFX, white: no NFX); userbase measured by trailing 6-month addresses, value measured by transaction value.
Figure 5. Network effect observations (blue: positive NFX, red: reverse NFX, white: no NFX); userbase measured by total addresses with non-zero balance, value measured by transaction value.
# Heavy quarkonia spectroscopy at zero and finite temperature in bottom-up
AdS/QCD
Miguel Angel Martin Contreras, Instituto de Física y Astronomía, Universidad de Valparaíso, A. Gran Bretaña 1111, Valparaíso, Chile
Alfredo Vega, Instituto de Física y Astronomía, Universidad de Valparaíso, A. Gran Bretaña 1111, Valparaíso, Chile
Saulo Diles, Campus Salinópolis, Universidade Federal do Pará, 68721-000, Salinópolis, Pará, Brazil
###### Abstract
S-wave states of charmonium and bottomonium are described using bottom-up
AdS/QCD. We propose a holographic model that unifies the description of masses
and decay constants, leading to a precise match with experimental data on
heavy quarkonia. Finite temperature effects are considered by calculating the
current-current spectral functions of heavy vector mesons. We identify
quasi-particle states as Breit-Wigner resonances in the holographic spectral
function and develop a prescription to subtract background contributions from
the spectral function in order to isolate the Breit-Wigner peak. The thermal
evolution of the holographic quasi-particles is described, allowing us to
estimate the melting temperature of vector charmonia and bottomonia. Our
holographic model predicts that $J/\Psi$ melts at $415$ MeV $(\sim
2.92\,T_{c})$ and $\Upsilon$ melts at $465$ MeV $(\sim 3.27\,T_{c})$.
## I Introduction
Heavy quarkonia work as a probe of quark-gluon plasma formation in heavy-ion
collisions, where charmonium suppression seemed to play the fundamental role
Matsui and Satz (1986). However, the $J/\Psi$ track is hard to reconstruct due
to physical effects such as nuclear absorption and recombination Chaudhuri
(2002); Liu et al. (2011); Abreu et al. (2018). This difficulty in tracking
back charmonium trajectories makes $J/\Psi$ less favorable as a precise probe
of the QGP. On the other hand, bottomonium production by recombination and
regeneration effects is small Song et al. (2012); Emerick et al. (2012); Reed
(2011). Bottomonium then emerges as a promising candidate to probe QGP
properties, without invalidating the importance of charmonium in this context.
See Krouppa et al. (2019); Yao and Müller (2019).
Charmonium and bottomonium mesons were discovered experimentally later than
their light cousins ($\rho,\phi$), due to the considerable threshold energies
imposed by the heavy $c,b$ quark masses. Curiously, current experimental data
on the spectrum of radial excitations is more extensive and complete for heavy
vector mesons than for light ones. The decay constants of the excited S-wave
states are entirely determined from experiments for heavy vector quarkonia
Tanabashi et al. (2018). The decay constants of charmonium and bottomonium are
observed to decrease with excitation level. For the $\phi$ meson, the decay
constants of excited states are estimated from experimental data, and these
estimations predict that they also decrease with excitation level Pang (2019);
Badalian and Bakker (2019).
Meson spectroscopy is a static low-energy phenomenon. In this regime, the
color interaction is strongly coupled and a non-perturbative approach to
strong interactions is required Gross and Wilczek (1973); Politzer (1973); van
Ritbergen et al. (1997). One important non-perturbative approach is the
holographic dual of QCD, referred to as AdS/QCD Polchinski and Strassler
(2002); Boschi-Filho and Braga (2003); Erlich et al. (2005); Brodsky and de
Teramond (2008). AdS/QCD models are inspired by the exact duality between the
conformal and supersymmetric field theory $\mathcal{N}=4$ SYM in four
space-time dimensions and type IIB string theory in $AdS_{5}\times S^{5}$
Maldacena (1999); Aharony et al. (2000). In top-down AdS/QCD models, the
energy scales are fixed by probe branes located in the bulk geometry. The
presence of these probe branes in the AdS bulk breaks conformal symmetry and
sets the energy scales in the boundary theory Karch and Katz (2002); Sakai and
Sugimoto (2005a, b). On the other hand, bottom-up AdS/QCD models implement
deformations of the bulk geometry directly associated with observed phenomena
in hadronic physics. The most popular bottom-up AdS/QCD models are the hard
wall Polchinski and Strassler (2002); Boschi-Filho and Braga (2004, 2003) and
the soft wall Karch et al. (2006). The soft wall model is particularly
interesting for investigating the radial excitations of mesons since it
predicts linear Regge trajectories for the hadron masses. Bottom-up AdS/QCD
models have been systematically applied in the description of the spectrum of
mesons Karch et al. (2006); Grigoryan and Radyushkin (2007); Erdmenger et al.
(2008); Colangelo et al. (2008); Ballon Bayona et al. (2010); Cotrone et al.
(2011), and in particular for heavy quarkonia Kim et al. (2007); Grigoryan et
al. (2010); Li et al. (2016); Braga et al. (2016a). Heavy quark potentials
have been analyzed for different bottom-up AdS/QCD models, finding in all
cases linear behaviour at large separation Boschi-Filho and Braga (2005);
Boschi-Filho et al. (2006); Andreev and Zakharov (2006, 2007); Colangelo et
al. (2011); Bruni et al. (2019); Diles (2020).
The observed decay constants of quarkonia S-wave states increase the
difficulty of obtaining an accurate description of their spectrum. The
challenge comes from the fact that decay constants decrease in a monotonic and
non-linear way with excitation level. The hard-wall model predicts decay
constants that increase with excitation level, while the soft-wall model
(quadratic dilaton) predicts completely degenerate decay constants. This poor
description of decay constants at zero temperature leads to bad results at
finite temperature, such as the disappearance of the spectral peaks of the
fundamental state at low temperatures Fujita et al. (2009, 2010); Mamani et
al. (2014). A good description of decay constants in the vacuum is needed to
get a consistent spectral analysis at finite temperature, since the decay
constants define the strength of the resonances, fixing the zero-temperature
limit of the spectral function.
Ref. Grigoryan et al. (2010) proposes a holographic description of $c\bar{c}$
considering modifications of the holographic potential. These modifications
improve the description of the masses and decay constants of
$J/\Psi,\Psi^{\prime}$. However, the holographic potential of Grigoryan et al.
(2010) does not capture the decrease in decay constants. An alternative
proposal is to set an ultraviolet scale by calculating correlation functions
in an AdS slice at finite $z_{uv}$ Evans and Tedder (2006); Afonin (2011,
2012); Braga et al. (2016b). This ultraviolet cut-off results in decay
constants that decrease with excitation level. However, the model predicts a
slower decrease with excitation level than the fast decrease shown by
experimental data. So, it captures the decrease in decay constants but not the
correct slope. The problem of the slope of the decay constants was
circumvented in a different holographic model proposed in Ref. Braga et al.
(2017) and refined in Ref. Braga and Ferreira (2018). The holographic model of
Ref. Braga and Ferreira (2018) captures the observed spectrum of decay
constants of both charmonium and bottomonium with good precision. This success
in describing the decay constants, however, does not extend to the mass
spectrum. An accurate description of the radial excitations of heavy quarkonia
involves both the masses and the decay constants. Here we propose a
holographic model that simultaneously describes the masses and decay constants
of the radial excitations of charmonium and bottomonium. The predictions of
our model agree with experimental data within an RMS error near $6\%$ for
charmonium and $7.2\%$ for bottomonium, providing a precise description of
quarkonia spectroscopy at zero temperature. We consider the effects of the hot
plasma on quarkonia states and use our model to compute in-medium spectral
functions. We propose a prescription for background subtraction, isolating the
contribution of the quasi-particle states in the spectral function from the
medium effects. The melting temperatures of
$J/\Psi,\Psi^{\prime},\Upsilon,\Upsilon^{\prime}$ are estimated and their
thermal masses analyzed.
The paper is organized as follows: in Section II, we motivate and present the
dilaton that defines our holographic model. In Section III, we describe the
spectrum of masses and decay constants of charmonium and bottomonium. In
Section IV we consider our model at finite temperature: we discuss the
confinement/deconfinement phase transition, compute finite-temperature
spectral functions of $c\bar{c}$ and $b\bar{b}$, and analyse the
quasi-particle states associated with the resonance peaks. In Section V we
perform a Breit-Wigner analysis of the holographic spectral densities
calculated for heavy quarkonia. Finally, we present in Section VI the main
conclusions of this work.
## II Holographic Model
In the context of AdS/QCD bottom-up models, heavy vector quarkonium is
described as an abelian massless bulk gauge field. This statement follows from
the standard field/operator duality Aharony et al. (2000). Recall that the
scaling dimension of the source operator creating mesons at the conformal
boundary defines the dual bulk-field mass, according to the relation
$M_{5}^{2}\,R^{2}=(\Delta-S)(\Delta+S-4),$ (1)
where $S$ is the meson spin and $R$ is the AdS radius. This relation defines a
primitive notion of _hadronic identity_, since the corresponding bulk mass
categorizes the dual hadronic states defined by the boundary source operator.
Any $q\,\bar{q}$ vector meson state is generated by operators with
$\Delta=3$, implying $M_{5}^{2}\,R^{2}=0$. Thus, the action in the bulk space
is given by
$I_{\text{Vector
$Q\bar{Q}$}}=-\frac{1}{4\,g_{5}^{2}}\,\int{d^{5}x\,\sqrt{-g}\,e^{-\Phi(z)}}\,g^{mp}\,g^{nr}\,F_{mn}\,F_{pr},$
(2)
where $g_{5}$ is a constant that fixes units in the action given above and
$F_{mn}$ is the field strength. This coupling is calculated from the large
$q^{2}$ behavior of the holographic vector two-point functions Erlich et al.
(2005). The geometrical background is either AdS$_{5}$ or an AdS$_{5}$ black
hole, depending on whether we are at zero or finite temperature; we postpone
this discussion to the next section. Independent of the geometry, the
equations of motion for the bulk gauge fields are
$\frac{1}{\sqrt{-g}}\,\partial_{n}\left[\sqrt{-g}\,e^{-\Phi(z)}\,g^{np}\,g^{mr}\,F_{pr}\right]=0.$
(3)
Confinement in this model is induced _via_ the static dilaton field $\Phi(z)$.
In the standard AdS/QCD softwall model, characterized by a static quadratic
dilaton, the large-$z$ behavior guarantees the emergence of linear radial
Regge trajectories. However, it does not properly describe the meson decay
constants, since they are expected to decrease with the radial excitation
number $n$; the softwall model calculation yields decay constants that are
degenerate in $n$.
A lesson learned from Martin Contreras and Vega (2020a) was that decay
constants depend on the low-$z$ behavior of the AdS/QCD model at hand. We can
modify this behavior in two possible ways: by deforming the background Braga
et al. (2016b, a) or by introducing terms in the dilaton that become relevant
at low $z$ Braga et al. (2017); Braga and Ferreira (2018). The resulting Regge
trajectories are still linear, and the decay constant behavior is corrected.
On the experimental side, these sorts of linear Regge trajectories describe
the light sector better. In the heavy sector, the linear approximation does
not seem to fit the available experimental data. Looking closely at the heavy
quarkonium radial trajectories, we observe linearity in the highly excited
states, while the lower states deviate from the linear spectrum due to the
effect of the heavy constituent quark mass in the meson. This picture can be
seen from the angular quantization of the string Afonin and Pusenkov (2014) or
from the Bethe-Salpeter analysis Chen (2018) by writing the radial trajectory
as
$(M_{n}-m_{Q_{1}}-m_{Q_{2}})^{2}=a(n+b),$ (4)
where $a$ is a universal slope and $b$ is related to the mesonic quantum
numbers. Therefore, nonlinearities emerge when the constituent quark mass
comes into play. The nonlinear trajectories can be written in general as
$M^{2}=a(n+b)^{\nu}.$ (5)
In a recent work Martin Contreras and Vega (2020b), these nonlinear Regge
trajectories were described in the context of bottom-up holographic QCD. The
main idea behind this model is that the inclusion of quark constituent masses
implies deviation from the quadratic behavior imposed on the static dilaton.
This model successfully described vector mesons in the light unflavored,
strange, heavy-light, and heavy sectors.
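As a quick illustration of Eq. (5), one can fit the nonlinear trajectory
directly to the measured charmonium S-wave masses. The following Python sketch
is our own (the PDG central values are those quoted later in Table 1); it
recovers parameters close to the holographic trajectory reported there.

```python
# Illustrative fit of Eq. (5) to the charmonium S-wave masses (PDG values, GeV).
import numpy as np
from scipy.optimize import curve_fit

n = np.array([1.0, 2.0, 3.0, 4.0])
M2 = np.array([3.0969, 3.6861, 4.039, 4.421]) ** 2  # squared masses in GeV^2

def trajectory(n, a, b, nu):
    return a * (n + b) ** nu

(a, b, nu), _ = curve_fit(trajectory, n, M2, p0=[8.0, 0.4, 0.6])
print(a, b, nu)  # compare with the holographic trajectory in Table 1: 8.097, 0.39, 0.58
```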
This nonquadratic dilaton and the softwall dilaton share the same low-$z$
behavior. Therefore, in the nonquadratic context the decay constants do not
follow the phenomenological constraints. To circumvent this drawback, we add
the proper low-$z$ behavior that captures the expected decay-constant
phenomenology. We therefore propose the following nonquadratic dilaton
$\Phi(z)=\left(\kappa\,z\right)^{2-\alpha}+M\,z\,+\text{tanh}\left[\frac{1}{M\,z}-\frac{\kappa}{\sqrt{\Gamma}}\right],$
(6)
where the low-$z$ contributions written above were read from Braga and
Ferreira (2018). The parameters $\kappa$, $M$ and $\sqrt{\Gamma}$ are energy
scales controlling the slope and the intercept, whereas $\alpha$ is
dimensionless and measures the effect of the constituent quark mass in the
heavy meson, as introduced in Martin Contreras and Vega (2020b).
In the later sections, we will discuss the application of this dilaton for
charmonium and bottomonium systems at zero and finite temperature.
## III Zero temperature
In the case of zero temperature, the geometrical background is given by the
Poincaré patch
$dS^{2}=g_{mn}\,dx^{m}\,dx^{n}=\frac{R^{2}}{z^{2}}\left[dz^{2}+\eta_{\mu\,\nu}\,dx^{\mu}\,dx^{\nu}\right],$
(7)
with the signature $\eta_{\mu\nu}=\text{diag}(-1,1,1,1)$ and $z\in(0,\infty)$.
Following the AdS/CFT methodology, we will write the Fourier transformed bulk
vector field in terms of the bulk modes $\psi(z,q)$ and the boundary sources
as
$A_{\mu}(z,q)=A_{\mu}(q)\,\psi(z,q),$ (8)
where we have implicitly imposed the gauge fixing $A_{z}=0$. We use the $z$
component of the equations of motion, $\partial_{z}(\partial_{\mu}A^{\mu})=0$,
and the Lorentz gauge in the boundary to set $\partial_{\mu}A^{\mu}=0$
everywhere. These definitions yield the following equations for the eigenmodes
$\partial_{z}\left[e^{-B(z)}\,\partial_{z}\,\psi_{n}(z,q)\right]+(-q^{2})\,e^{-B(z)}\,\psi_{n}(z,q)=0.$
(9)
where we have defined the background function $B(z)$ as
$B(z)=\Phi(z)-\text{log}\left(\frac{R}{z}\right).$ (10)
Confinement emerges in this model through the effect of the dilaton field,
which induces a holographic confining potential. We apply the Bogoliubov
transformation $\psi(z)=e^{B(z)/2}\,\phi(z)$ to expression (9), obtaining the
Schrödinger-like equation
$-\phi_{n}^{\prime\prime}(z)+U(z)\,\phi_{n}(z)=M_{n}^{2}\,\phi_{n}(z),$ (11)
where $M_{n}^{2}=-q^{2}$ defines the meson spectrum, and the holographic
potential is constructed in terms of the derivatives of the $\Phi(z)$ dilaton
field in eqn. (6) as follows
$U(z)=\frac{3}{4\,z^{2}}+\frac{\Phi^{\prime}(z)}{2\,z}+\frac{1}{4}\Phi^{\prime}(z)^{2}-\frac{1}{2}\Phi^{\prime\prime}(z).$
(12)
By solving the Schrödinger-like equation numerically, we obtain the associated
bulk modes and the holographic mass spectrum. The results for charmonium and
bottomonium, with the corresponding parameter choices, are summarized in
Tables 1 and 2.
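A minimal numerical sketch of this procedure, assuming our own grid, cutoff
and finite-difference choices (Dirichlet conditions on a truncated $z$
interval), is the following; the dilaton parameters are the charmonium values
of Table 1.

```python
# Sketch: mass spectrum from the Schrodinger-like equation (11) with the
# potential (12) built from the dilaton (6). Grid choices are ours.
import numpy as np
from scipy.linalg import eigh_tridiagonal

kappa, M, sqrtGamma, alpha = 1.8, 1.7, 0.53, 0.54  # charmonium parameters, Table 1 (GeV)

def dilaton(z):
    # Eq. (6)
    return (kappa * z) ** (2 - alpha) + M * z + np.tanh(1.0 / (M * z) - kappa / sqrtGamma)

def potential(z, h=1e-4):
    # Eq. (12); dilaton derivatives via central finite differences
    dP = (dilaton(z + h) - dilaton(z - h)) / (2 * h)
    d2P = (dilaton(z + h) - 2 * dilaton(z) + dilaton(z - h)) / h ** 2
    return 3.0 / (4 * z ** 2) + dP / (2 * z) + 0.25 * dP ** 2 - 0.5 * d2P

# finite-difference Hamiltonian -d^2/dz^2 + U(z) on (0, z_max]
N, z_max = 6000, 15.0
z = np.linspace(z_max / N, z_max, N)
dz = z[1] - z[0]
M2, _ = eigh_tridiagonal(2.0 / dz ** 2 + potential(z),
                         -np.ones(N - 1) / dz ** 2,
                         select='i', select_range=(0, 3))
print("M_n (GeV):", np.sqrt(M2))  # lowest S-wave masses, cf. the M_Th column of Table 1
```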
Charmonium States $I^{G}(J^{PC})=0^{+}(1^{--})$
Parameters: $\kappa=1.8$ GeV, $M=1.7$ GeV, $\sqrt{\Gamma}=0.53$ GeV and $\alpha=0.54$
| $n$ | State | $M_{\text{Exp}}$ (MeV) | $M_{\text{Th}}$ (MeV) | %$M$ | $f_{\text{Exp}}$ (MeV) | $f_{\text{Th}}$ (MeV) | %$f$ |
| $1$ | $J/\psi$ | $3096.916\pm 0.011$ | $3140.1$ | $1.42$ | $416.16\pm 5.25$ | $412.4$ | $1.4$ |
| $2$ | $\psi(2S)$ | $3686.109\pm 0.012$ | $3656.9$ | $0.9$ | $296.08\pm 2.51$ | $272.7$ | $8.0$ |
| $3$ | $\psi(4040)$ | $4039\pm 1$ | $4055.7$ | $0.4$ | $187.13\pm 7.61$ | $201.8$ | $7.8$ |
| $4$ | $\psi(4415)$ | $4421\pm 4$ | $4376$ | $0.9$ | $160.78\pm 9.70$ | $164.1$ | $2.0$ |
Nonlinear Regge Trajectory: $M_{n}^{2}=8.097(0.39+n)^{0.58}$ GeV$^{2}$ with $R^{2}=0.999$
Table 1: Summary of results for charmonium states. $M_{\text{Th}}$ and $f_{\text{Th}}$ are calculated with the parameters listed in the header, and the corresponding errors appear in columns %$M$ and %$f$. Experimental results are taken from the PDG Tanabashi et al. (2018), and the total error is $\delta_{\text{RMS}}=6.0\,\%$. The Regge trajectory is also shown.

Bottomonium States $I^{G}(J^{PC})=0^{+}(1^{--})$
Parameters: $\kappa=9.9$ GeV, $M=2.74$ GeV, $\sqrt{\Gamma}=1.92$ GeV and $\alpha=0.863$
| $n$ | State | $M_{\text{Exp}}$ (MeV) | $M_{\text{Th}}$ (MeV) | %$M$ | $f_{\text{Exp}}$ (MeV) | $f_{\text{Th}}$ (MeV) | %$f$ |
| $1$ | $\Upsilon(1S)$ | $9460.3\pm 0.26$ | $9506.5$ | $0.5$ | $714.99\pm 2.40$ | $718.8$ | $0.5$ |
| $2$ | $\Upsilon(2S)$ | $10023.26\pm 0.32$ | $9892.9$ | $1.0$ | $497.37\pm 2.23$ | $575.7$ | $16$ |
| $3$ | $\Upsilon(3S)$ | $10355.2\pm 0.5$ | $10227.2$ | $1.2$ | $430.11\pm 1.94$ | $413.0$ | $4.0$ |
| $4$ | $\Upsilon(4S)$ | $10579.4\pm 1.2$ | $10497.5$ | $0.8$ | $340.65\pm 9.08$ | $324.3$ | $4.8$ |
| $5$ | $\Upsilon(10860)$ | $10889.9^{+3.2}_{-2.6}$ | $10721.5$ | $1.5$ | – | – | – |
| $6$ | $\Upsilon(11020)$ | $10992.9^{+10.0}_{-3.1}$ | $10912.7$ | $0.7$ | – | – | – |
Nonlinear Regge Trajectory: $M_{n}^{2}=7.376(1.31+n)^{0.24}$ GeV$^{2}$ with $R^{2}=0.999$
Table 2: Summary of results for bottomonium states. $M_{\text{Th}}$ and $f_{\text{Th}}$ are calculated with the parameters listed in the header, and the corresponding errors appear in columns %$M$ and %$f$. Experimental results are taken from the PDG Tanabashi et al. (2018), and the total error is $\delta_{\text{RMS}}=7.2\,\%$. The Regge trajectory is also shown.
The electromagnetic decay constants $f_{n}$ arise as the residues of the
expansion in poles $-q^{2}=M_{n}^{2}$ of the two-point function, defined from
the correlator of two electromagnetic currents:
$\Pi_{\mu\nu}(q^{2})=i\,\int{d^{4}x\,e^{i\,q\cdot x}\,\langle 0\left|\mathcal{T}\left\{j_{\mu}(x)\,j_{\nu}(0)\right\}\right|0\rangle}=\left(q_{\mu}\,q_{\nu}-q^{2}\,\eta_{\mu\nu}\right)\,\Pi(-q^{2})=\left(q_{\mu}\,q_{\nu}-q^{2}\,\eta_{\mu\nu}\right)\,\sum_{n}{\frac{f_{n}^{2}}{-q^{2}-M_{n}^{2}+i\,\varepsilon}}.$ (13)
The tensor structure in parentheses is nothing other than the transverse
projector, which follows from the Ward-Takahashi identities.
The importance of the two-point function lies in the description of the
intermediate hadronic states that appear in scattering processes involving
hadrons. Decay constants measure the probability of finding such states among
the final products of a collision.
In the case of heavy quarks, the electromagnetic quark currents
$e\,\bar{c}\,\gamma_{\mu}\,c$ and $e\,\bar{b}\,\gamma_{\mu}\,b$ create the
$J/\psi$ and $\Upsilon$ mesons, respectively. At the physical level, these
vector states appear as observed resonances in the $e^{+}\,e^{-}$ annihilation
process when the center-of-mass energy is around the mass of the corresponding
state. Therefore, these states are also expected to appear as poles of the
two-point function.
Experimentally, decay constants are measured from the vector meson decay
process $V\to e^{+}\,e^{-}$, according to the expression
$f_{n}^{2}=\frac{3\,M_{n}\,\Gamma_{V\to
e^{+}e^{-}}}{4\,\pi\,\alpha^{2}_{\text{em}}\,C_{V}^{2}},$ (14)
where $\Gamma_{V\to e^{+}\,e^{-}}$ is the heavy vector decay width and
$C_{V}$ stands for the heavy quark electromagnetic charge contribution to the
meson, i.e., $C_{J/\psi}=2/3$ and $C_{\Upsilon}=1/3$.
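Eq. (14) is a one-line computation; the sketch below, using the approximate
PDG value of the $J/\psi\to e^{+}e^{-}$ width as input, reproduces the
experimental decay constant quoted in Table 1.

```python
# Sketch of Eq. (14): decay constant from the measured leptonic width.
import math

def decay_constant(M_GeV, Gamma_ee_GeV, C_V, alpha_em=1.0 / 137.036):
    # f_n^2 = 3 M_n Gamma_ee / (4 pi alpha_em^2 C_V^2)
    f2 = 3.0 * M_GeV * Gamma_ee_GeV / (4.0 * math.pi * alpha_em ** 2 * C_V ** 2)
    return math.sqrt(f2)

# J/psi: M = 3.0969 GeV, Gamma_ee ~ 5.55 keV (approximate PDG value), C_V = 2/3
print(decay_constant(3.0969, 5.55e-6, 2.0 / 3.0))  # ~0.416 GeV, cf. f_Exp in Table 1
```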
The holographic dual of the two-point function is determined from the on-shell
boundary action Karch et al. (2006). Following the field/operator duality, the
holographic two-point function is written as
$\Pi\left(-q^{2}\right)=-\left.\frac{e^{-B\left(z\right)}}{g_{5}^{2}\,(-q^{2})}\,\partial_{z}\,V\left(z,q\right)\right|_{z\to
0},$ (15)
where $V(z,q)$ is the bulk-to-boundary propagator. It is straightforward to
prove that this object can be written in terms of the normalizable modes
$\psi(z,q)$ by using the Green's function associated with the equations of
motion (9). In Martin Contreras and Vega (2020a), the authors follow this
path, deriving a general expression for the decay constants of any AdS/QCD
model that depends only on the value of the quotient $\psi(z,q)/z^{2}$ and the
dilaton at the conformal boundary:
$f_{n}^{2}=\frac{1}{M_{n}^{2}\,g_{5}^{2}}\,\lim_{z\to
0}{\,e^{-2\,\Phi(z)}\,\left|\frac{2\,\psi_{n}(z,q)}{z^{2}}\right|^{2}}.$ (16)
Let us pause and see how the decay constants are calculated in the soft wall
model, i.e., with a static quadratic dilaton. Following Karch et al. (2006),
the mass spectrum has the linear structure $M_{n}^{2}=4\,k^{2}(n+1)$, with $k$
the dilaton slope. The eigenfunctions are defined in terms of associated
Laguerre polynomials,
$\psi_{n}(z)=\sqrt{\frac{2\,k^{4}\,n!}{(n+1)!}}\,z^{2}\,L_{n}^{1}(k^{2}\,z^{2}),$
(17)
and the decay constants then follow from eqn. (16), yielding
$f_{n}^{2}=\frac{F_{n}^{2}}{M_{n}^{2}}=\frac{1}{4\,g_{5}^{2}\,k^{2}\,(n+1)}\times\,\frac{8\,k^{4}\,(n+1)!}{n!}=\frac{2\,k^{2}}{g_{5}^{2}},$
(18)
where we have used the asymptotic form of the associated Laguerre polynomials
as $z\to 0$. We conclude that the decay constants are degenerate in the
softwall model. Similar computations in the hard wall model Boschi-Filho and
Braga (2003) lead to decay constants $f_{n}$ that increase with the excitation
number $n$. This drawback can be avoided by deforming the low-$z$ limit of the
static dilaton, as first noticed by Braga et al. Braga and Ferreira (2016). We
extend this idea to the context of non-quadratic dilatons.
The numerical results for the charmonium and bottomonium decay constants,
calculated in the deformed non-quadratic dilaton context, are summarized in
Tables 1 and 2. The deviations quoted in the captions of Tables 1 and 2
represent the difference between the theoretical prediction and the most
probable value of a given experimental measurement. The total deviation
$\delta_{\text{RMS}}$ is defined as
$\delta_{\text{RMS}}=\sqrt{\frac{1}{N-N_{p}}\sum_{i}^{N}\left(\frac{\delta\,O_{i}}{O_{i}}\right)^{2}},$
(19)
where $O_{i}$ is a given experimental measurement, $\delta\,O_{i}$ is the
deviation of the theoretical value from the experimental one, $N_{p}$ is the
number of model parameters, and $N$ is the total number of available
observables.
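A worked check of Eq. (19), using the charmonium percentage deviations listed
in Table 1 with $N=8$ observables and $N_{p}=4$ model parameters:

```python
# Worked check of Eq. (19) with the charmonium deviations from Table 1.
import math

percent_errors = [1.42, 0.9, 0.4, 0.9,   # %M for n = 1..4
                  1.4, 8.0, 7.8, 2.0]    # %f for n = 1..4
N, N_p = len(percent_errors), 4
delta_rms = math.sqrt(sum(e ** 2 for e in percent_errors) / (N - N_p))
print(f"delta_RMS = {delta_rms:.1f} %")  # ~5.8%, close to the quoted 6.0%
```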
## IV Finite temperature
For the finite-temperature extension, we consider the heavy quarkonium system
living in a thermal bath, modeled as a colored plasma. Holographically, we
deal with a vector bulk field living in an AdS-Schwarzschild black hole
background, described by the metric
$dS_{\text{AdS-
Schw}}^{2}=\frac{R^{2}}{z^{2}}\left[\frac{dz^{2}}{f(z)}-f(z)\,dt^{2}+d\vec{x}\cdot
d\vec{x}\right],$ (20)
with the blackening factor defined as
$f(z)=1-\frac{z^{4}}{z_{h}^{4}},$ (21)
where $z_{h}$ is the locus of the event horizon.
The description of heavy quarkonium at finite temperature in the context of
the softwall model was developed in Fujita et al. (2010). However, as
discussed in Vega and Cabrera (2016); Vega and Ibañez (2017); Vega and Martin
Contreras (2019), where the holographic potential is analyzed in the context
of Bogoliubov transformations and tortoise coordinates, the resulting mesonic
melting temperatures are too low compared with those expected from lattice
QCD. This bad behavior is traced to the holographic description of the decay
constants in the softwall model, where they are degenerate. This statement is
supported by the thermal analysis of the hadronic part of the two-point
function Dominguez et al. (2010, 2013). For instance, the hadronic spectral
density calculated from thermal sum rules,
$\left.\frac{1}{\pi}\mathbb{I}\text{m}\,\Pi(s,T)\right|_{\text{hadron}}=\frac{f_{n}^{2}\,M_{n}(T)^{3}\,\Gamma_{n}(T)}{[s-M_{n}^{2}(T)]^{2}+M_{n}^{2}(T)\,\Gamma_{n}(T)^{2}},$
(22)
establishes the formal dependence of the melting process on the decay
constant.
This softwall model issue was circumvented by introducing low-$z$
modifications into the model, as done in Braga et al. (2016c). It is therefore
natural to expect that the present hybrid dilaton exhibits the raised melting
temperatures required by phenomenology.
Let us now review the holographic description of heavy quarkonium at finite
temperature. Our starting point is the calculation of the hadronic spectral
density. To do so, we follow the Minkowskian prescription of Son and Starinets
(2002). We perform the change of variable $z=z_{h}\,u$ in the metric (20) in
order to fix the horizon locus at $u=1$, and we fix $-q^{2}=\omega^{2}$ in our
analysis.
### IV.1 Confinement/Deconfinement phase transition
In the boundary gauge theory, the formation of a deconfined plasma is
holographically described via the Hawking-Page phase transition in the dual
geometry Hawking and Page (1983); Herzog (2007). On the gauge theory side,
above the critical temperature $T_{c}$, the fundamental quarks and gluons
inside colorless matter are allowed to walk away from their partners, forming
a plasma of deconfined colored particles. It is usually considered that the
light vector meson dominates the deconfinement transition; that is, the medium
is formed when the light quarks can escape from the hadrons. Consequently, we
use the light meson spectrum to fix the energy scales governing the
confinement/deconfinement transition.
The observed spectrum of radial excitations of the $\rho$ meson includes the
masses of the first five radial excitations and the decay constant of the
ground state Tanabashi et al. (2018). It is important to mention that
additional scales in the model encode heavy quarkonia properties and bring no
particular advantage in describing the light meson spectrum. In particular,
for light mesons the parameter $\alpha$ in eq. (6) is set to vanish. The
observed spectrum of the radial excitations of the $\rho$ meson is reasonably
fitted using the model parameters $\kappa=0.6$ GeV, $M=0.06$ GeV,
$\sqrt{\Gamma}=0.02$ GeV. Using these parameters to fix the dilaton profile,
we compute the gravitational on-shell action of the AdS-Schwarzschild black
hole geometry and of the thermal AdS geometry. The normalized difference is
then obtained as
$\Delta
S=\int_{\epsilon}^{z_{h}}dz\frac{e^{-\Phi(z)}}{z^{5}}-\sqrt{f(\epsilon)}\int_{\epsilon}^{\infty}dz\frac{e^{-\Phi(z)}}{z^{5}}.$
(23)
We show in Figure 1 the difference in action as a function of temperature. In
the region where $\Delta S$ is positive, the thermal AdS is stable; in the
region where $\Delta S$ is negative, the black hole is stable. The condition
$\Delta S=0$ defines the critical temperature, and we obtain
$T_{c}=142\ \textrm{MeV.}$ (24)
There are two important comments to make at this point. First, using the
$\rho$ meson spectrum to fix the model parameters is a particular choice. As
recently pointed out in Afonin and Katanaeva (2020), the definition of $T_{c}$
through a Hawking-Page transition is model dependent. The same authors
performed a similar calculation considering the gluon condensate, obtaining a
critical temperature of $156$ MeV Afonin (2020). Second, the phase transition
associated with QGP formation in heavy-ion collisions is more likely a
continuous crossover than an abrupt transition Aoki et al. (2006). However,
the present computation of $T_{c}$ has no intention of dealing with these
subtleties. The critical temperature we obtain ($T_{c}=142$ MeV) is consistent
with the present holographic model and will be adopted from now on.
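A minimal sketch of this estimate follows. We rewrite Eq. (23) as $\Delta
S=(1-\sqrt{f(\epsilon)})\int_{\epsilon}^{\infty}-\int_{z_{h}}^{\infty}$ to
avoid the numerical cancellation between the two divergent integrals; the
regulator $\epsilon$ and the root bracket are our own choices, and the
temperature follows from $T=1/(\pi z_{h})$.

```python
# Sketch of the Hawking-Page estimate, Eq. (23), with the light-sector
# parameters of Section IV.1 (alpha = 0 for light mesons).
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

kappa, M, sqrtGamma = 0.6, 0.06, 0.02  # GeV

def Phi(z):
    return (kappa * z) ** 2 + M * z + np.tanh(1.0 / (M * z) - kappa / sqrtGamma)

def integrand(z):
    return np.exp(-Phi(z)) / z ** 5

eps = 1e-2  # UV regulator (our choice)
I_full = quad(integrand, eps, 1.0)[0] + quad(integrand, 1.0, np.inf)[0]

def delta_S(zh):
    pref = 1.0 - np.sqrt(1.0 - (eps / zh) ** 4)  # 1 - sqrt(f(eps))
    return pref * I_full - quad(integrand, zh, np.inf)[0]

zh_c = brentq(delta_S, 0.5, 5.0)  # Delta S changes sign inside this bracket
print("T_c ~", 1.0 / (np.pi * zh_c), "GeV")  # the paper quotes T_c = 142 MeV
```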
Figure 1: The difference between the on-shell gravitational action of AdS-
Schwarzschild and Thermal AdS geometries is depicted as a function of
temperature in GeV. The intersection with the horizontal axis gives the
critical temperature of the deconfinement transition.
### IV.2 Spectral density
The holographic spectral density comes from the thermal Green's function. We
define the bulk-to-boundary propagator in momentum space as
$V_{\mu}(z,q)=V(z,q)V^{0}_{\mu}(q)$, where $V^{0}_{\mu}(q)$ is the source at
the boundary. According to the Minkowskian prescription, the retarded
correlator is written in terms of the derivatives of the bulk-to-boundary
propagator $V(z,q)$ as
$G_{R}(\omega)=-\left.\frac{2}{z_{h}\,\mathcal{N}}\,\,e^{-B(u)}\,f(u)\,V(u,-\omega)\,\partial_{u}\,V(u,\omega)\right|_{u=0}.$ (25)
The spectral density, according to the Kubo relations, is written as the
imaginary part of the Green’s function
$\rho(\omega)=-\mathbb{I}\text{m}\,G_{R}(\omega).$ (26)
The bulk-to-boundary propagator obeys the bulk spatial vector equation of
motion
$\partial_{u}\left[e^{-B(u)}\,f(u)\,\partial_{u}\,V(u,\omega)\right]+\frac{z_{h}^{2}\,\omega^{2}}{f(u)}e^{-B(u)}\,V(u,\omega)=0.$
(27)
Although we are at finite temperature, the bulk-to-boundary propagator still
preserves its properties at the conformal boundary; if this were not
guaranteed, the field/operator duality would no longer hold. Recall that at
the conformal boundary we require $V(u\to 0)\to 1$. On the other side,
$V(u,\omega)$ must obey the out-going boundary condition $\phi_{-}(u)$,
defined as
$\phi_{-}(u)=\left(1-u\right)^{-\,i\frac{\omega\,z_{h}}{4}}.$ (28)
These conditions define the procedure to compute the spectral density. We
follow the method depicted in Teaney (2006); Fujita et al. (2010); Miranda et
al. (2009); Fujita et al. (2009). Our starting point is to write a general
solution $v(u)$ of Eqn. (27) in terms of the normalizable solution
$\psi_{0}(u)$ and the non-normalizable solution $\psi_{1}(u)$, which form a
basis:
$v(u)=A\,\left[\psi_{1}(u)+\frac{B}{A}\,\psi_{0}(u)\right],$ (29)
such that the bulk-to-boundary propagator is $V(\omega,u)=A^{-1}\,v(u)$, with
the asymptotic solutions near the conformal boundary
$\psi_{0}(u,\omega)=\frac{2}{\omega\,z_{h}}\,u\,J_{1}(\omega\,z_{h}\,u),$ (30)
$\psi_{1}(u,\omega)=-\frac{\pi\,\omega\,z_{h}}{2}\,u\,Y_{1}(\omega\,z_{h}\,u).$ (31)
After replacing this solution into the definition of the Green's function we
obtain
$G_{R}(\omega)=-\left.\frac{2\,R}{z_{h}\,\mathcal{N}}\left[\frac{B}{A}-\frac{\omega^{2}\,z_{h}^{2}}{2}\,\text{log}\,\left(\frac{e^{\gamma_{e}}\,\varepsilon\,\omega\,z_{h}}{2}\right)\,\varepsilon^{2}\right]\right|_{\varepsilon\to
0}.$ (32)
Finally, the spectral density is the imaginary part of the Green's function:
$\rho(\omega)=-\mathbb{I}\text{m}\,G_{R}(\omega)=\frac{2\,R}{z_{h}\,\mathcal{N}}\,\mathbb{I}\text{m}\frac{B}{A}.$ (33)
Numerical results for the spectral density of the charmonium and bottomonium
systems are shown in Fig. 2.
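The following sketch outlines one way to produce such curves: integrate Eq.
(27) with the infalling condition (28) from near the horizon toward the
boundary and read $\rho(\omega)$ from the conserved flux
$\mathbb{I}\text{m}[e^{-B}f\,\bar{v}\,v^{\prime}]$, which, once divided by
$|v|^{2}$ at the boundary (where $v\to A$), equals
$\mathbb{I}\text{m}(B/A)$ up to normalization. Grid, tolerances and the
omitted overall factor $2R/(z_{h}\mathcal{N})$ are our own choices; the
parameters are the charmonium values of Table 1 with $R=1$.

```python
# Sketch: in-medium spectral function from Eq. (27) via the conserved flux.
import numpy as np
from scipy.integrate import solve_ivp

kappa, M, sqrtGamma, alpha = 1.8, 1.7, 0.53, 0.54  # charmonium, GeV

def B(u, zh):
    z = zh * u
    Phi = (kappa * z) ** (2 - alpha) + M * z + np.tanh(1.0 / (M * z) - kappa / sqrtGamma)
    return Phi + np.log(z)  # B = Phi - log(R/z) with R = 1

def rhs(u, y, w, zh):
    v, dv = y
    f, df = 1.0 - u ** 4, -4.0 * u ** 3
    h = 1e-5
    dB = (B(u + h, zh) - B(u - h, zh)) / (2 * h)
    # Eq. (27) rewritten as v'' = (B' - f'/f) v' - (zh w)^2 v / f^2
    return [dv, (dB - df / f) * dv - (zh * w) ** 2 * v / f ** 2]

def rho(w, zh, u0=0.999, eps=1e-3):
    nu = -1j * w * zh / 4.0                        # infalling exponent, Eq. (28)
    v0, dv0 = (1 - u0) ** nu, -nu * (1 - u0) ** (nu - 1.0)
    sol = solve_ivp(rhs, (u0, eps), [v0, dv0], args=(w, zh), rtol=1e-8, atol=1e-10)
    v, dv = sol.y[0, -1], sol.y[1, -1]
    flux = np.imag(np.exp(-B(eps, zh)) * (1 - eps ** 4) * np.conj(v) * dv)
    return flux / abs(v) ** 2  # overall 2R/(z_h N) normalization omitted

# e.g. scan omega at T = 200 MeV: [rho(w, 1/(np.pi*0.2)) for w in np.linspace(2, 5, 60)]
```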
Figure 2: Spectral density for charmonium (left panel) and bottomonium (right
panel), calculated using Eqn. (33) and depicting the melting process. Dashed
lines correspond to the melting temperature in each case.
### IV.3 Thermal holographic potential
Another essential quantity carrying valuable information about the thermal
behavior of heavy quarkonium is the thermal potential. At zero temperature,
the potential translates the dilaton effect into holographic confinement, and
holographic mesonic states appear as its eigenfunctions.
The thermal dissociation of mesons is connected with the holographic
potential. In Vega and Martin Contreras (2019), this idea was discussed in the
context of softwall-like dilatons that vanish at the conformal boundary. In
that proposal, melting is characterized by the disappearance of the potential
well: at zero temperature the dilaton vanishes near the boundary and the
holographic potential displays a single global minimum, and the disappearance
of this global minimum encodes the meson dissociation.
In this work, we consider a dilaton that does not vanish near the boundary.
This dilaton field, given in Eqn. (6), interpolates between linear and
deformed quadratic behavior, which induces a nonlinear spectrum. It also
changes the global structure of the potential by introducing local minima near
the UV at zero temperature. As argued in Grigoryan et al. (2010); Martin
Contreras and Vega (2020a), this UV deformation is needed in order to describe
the proper phenomenological behavior of the decay constants of the excited
quarkonia states.
It is expected that, at finite temperature, the holographic potential also
carries information about the melting process. To approach this phenomenology
formally, we apply the Liouville (tortoise) transformation, which turns the
equations of motion into a Schrödinger-like equation in terms of a Liouville
(tortoise) coordinate $r^{*}$. The potential exhibits a barrier that decreases
with temperature, mimicking how confinement starts to cease as the temperature
rises. Following Vega and Martin Contreras (2019), one expects the barrier to
disappear when all of the quarkonia states melt into the thermal medium.
However, the appearance of a local minimum near $z=0$ can sustain the state
after the disappearance of the barrier.
The Liouville transformation lies at the core of the Liouville theory of
second-order differential equations. Given a differential equation, we can
associate with it a diagonalizable differential operator, which acquires a
spectrum of eigenvalues and eigenfunctions. In the holographic case at hand,
the associated potential is defined _via_ the transformation
$r^{*}(u)=z_{h}\,\int_{0}^{u}{\frac{d\,\xi}{1-\xi^{4}}}=\frac{z_{h}}{2}\left(\text{tan}^{-1}\,u+\text{tanh}^{-1}\,u\right).$
(34)
The equations of motion (27) transform into the following Schrödinger-like
equation
$-\frac{d^{2}\,\phi(r^{*})}{d\,r^{*2}}+U(r^{*})\,\phi(r^{*})=\omega^{2}\,z_{h}^{2}\,\phi(r^{*}),$
(35)
with the following definitions
$U(r^{*})=f(u)^{2}\left[\frac{3}{4\,u^{2}}+\frac{\Phi^{\prime}(u)}{2\,u}+\frac{\Phi^{\prime}(u)^{2}}{4}-\frac{\Phi^{\prime\prime}(u)}{2}-\frac{f^{\prime}(u)}{2\,u\,f(u)}-\frac{f^{\prime}(u)\,\Phi^{\prime}(u)}{2\,f(u)}\right],$ (36)
$\phi(r^{*})=\psi(u)\,e^{\frac{1}{2}\,B(u)},$ (37)
$u=u(r^{*}),$ (38)
where $u=u(r^{*})$ is obtained by inverting the Liouville coordinate defined
in Eqn. (34).
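A short sketch of how the thermal potential of Eq. (36) and the map (34) can
be evaluated in practice (the grid and finite-difference choices are our own;
the charmonium parameters are those of Table 1, and the temperature is the
quoted $J/\Psi$ melting point, 415 MeV, with $T=1/(\pi z_{h})$):

```python
# Sketch of Eqs. (34)-(36): thermal potential on a u-grid, mapped to r*.
import numpy as np

kappa, M, sqrtGamma, alpha = 1.8, 1.7, 0.53, 0.54  # GeV

def Phi(u, zh):
    z = zh * u
    return (kappa * z) ** (2 - alpha) + M * z + np.tanh(1.0 / (M * z) - kappa / sqrtGamma)

def U(u, zh, h=1e-4):
    f, df = 1.0 - u ** 4, -4.0 * u ** 3
    dP = (Phi(u + h, zh) - Phi(u - h, zh)) / (2 * h)
    d2P = (Phi(u + h, zh) - 2 * Phi(u, zh) + Phi(u - h, zh)) / h ** 2
    return f ** 2 * (3.0 / (4 * u ** 2) + dP / (2 * u) + dP ** 2 / 4 - d2P / 2
                     - df / (2 * u * f) - df * dP / (2 * f))

zh = 1.0 / (np.pi * 0.415)                          # z_h at T = 415 MeV
u = np.linspace(1e-3, 0.998, 2000)
rstar = 0.5 * zh * (np.arctan(u) + np.arctanh(u))   # Eq. (34)
barrier = U(u, zh).max()   # barrier height, to be compared with omega^2 z_h^2
```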
In Figure 3, we depict the melting process through the Liouville potential for
heavy quarkonia. In the zero-temperature case, the potential reduces to the
holographic potential of Eqn. (12).
Figure 3: Holographic Liouville potential for charmonium (left panel) and
bottomonium (right panel). We also plot the first three masses calculated at
zero temperature to illustrate the melting process. When the barrier decreases
below the mass, we can consider that such a state has undergone melting.
The melting process in the present case is a two-step process involving two
different energy scales. The first step is the disappearance of the infra-red
barrier when the temperature is increased above $T_{c}$, allowing the bulk
modes to be absorbed by the event horizon. At this step all the excited states
melt into the thermal medium, but this is not sufficient to establish the
melting of the ground state. The appearance of a deep, narrow and persistent
well near $z=0$ produces a barrier greater than the mass of the ground state.
The well is separated from the event horizon by a barrier which narrows as the
temperature rises. At the melting temperature the barrier becomes too narrow
to hold the bulk wave packet, which escapes from the well and is absorbed by
the event horizon. A quantitative description of the tunneling process is not
performed here; the melting temperatures depicted in Figure 3 are obtained
from the Breit-Wigner analysis performed in the next section.
## V Breit-Wigner analysis
Once the spectral functions are calculated, we perform a Breit-Wigner analysis
to discuss the thermal properties captured by the holographic model described
above. This analysis allows us to extract information about the meson melting
process, such as the melting temperature and the thermal mass shift. Recall
that when a meson starts to melt, the resonance begins to broaden (the width
becomes large) and the peak height, which is proportional to the decay
constant, decreases. In other words, the mesonic couplings tend to zero as the
temperature rises, implying that these states cease to be formed in the
colored medium.
Therefore, comparing the peak height and the width is a natural way to define
the meson melting temperature: the temperature at which the width overcomes
the peak height is where the meson starts to melt. This phenomenological
landscape also arises in the context of pNRQCD at thermal equilibrium.
The next thing to consider is the background. The background effects observed
in the spectral function come from the continuum contribution, and they should
be subtracted in order to isolate the Breit-Wigner behavior. The background
subtraction methodology is not unique and is, in general, model dependent.
Most authors define interpolating polynomials in terms of powers of
$\omega^{2}$; see, for example, Colangelo et al. (2009); Cui et al. (2016) in
the light scalar sector and Fujita et al. (2010) for heavy vector quarkonium.
In these references, the authors worked with quadratic-like dilatons.
In our particular case, we follow a different path: we consider the large
$\omega^{2}$ behavior to deduce a background subtraction mechanism. As Ref.
Grigoryan et al. (2010) pointed out, in a conformal theory at short distances
we expect that
$\lim_{\omega^{2}\to\infty}\frac{\rho(\omega^{2})}{\omega^{2}}=\frac{\pi}{2\,g_{5}^{2}}\,\,\,\text{
i.e., a dimensionless constant},$ (39)
for the case of quadratic-like dilatons. The OPE expansion of the two-point
function dictates this behavior, allowing the match between the bulk and
boundary theories. In a purely phenomenological sense, the existence of this
dimensionless constant is a signature of asymptotic freedom. Thus, the
spectral function for these quadratic-like dilatons can be rescaled as
$\bar{\rho}(\omega^{2})=\frac{\rho}{\omega^{2}},$ (40)
in order to test the asymptotic-freedom signature in the model. If the
rescaled spectral function does not match this criterion, the model does not
have the proper large-$\omega^{2}$ limit of QCD. The softwall model with
quadratic dilaton perfectly matches this condition.
What happens, then, when the model does not have a quadratic dilaton? To
answer this question, we can go further by imposing the same asymptotic
condition. However, changing the quadratic structure of the dilaton implies
that the asymptotic behavior of the spectral function is different: it is
still linear in $\omega^{2}$, but with a shifted value of the coupling
$g_{5}$, defined at zero temperature from the holographic two-point function.
Thus, we suggest the following rescaling:
$\bar{\rho}(\omega^{2})=\frac{\rho(\omega^{2})}{\delta\,\omega^{2}},$ (41)
where $\delta$ is determined from the large-$\omega^{2}$ behavior observed in
the spectral function $\rho(\omega^{2})$. From this rescaled spectral
function, we subtract the background effects and construct the Breit-Wigner
analysis. For our practical purposes, we write the Breit-Wigner distribution
as
$\bar{\rho}(\omega^{2})=\frac{1}{2}\frac{A_{0}\,\omega^{2}_{0}\,\Gamma_{0}\,\omega^{a_{0}}}{(\omega^{2}-\omega^{2}_{0})^{2}+\frac{\omega^{2}_{0}\,\Gamma^{2}_{0}}{4}},$
(42)
where $A_{0}$, $a_{0}$ are fitting parameters, $\omega_{0}$ is the mesonic
peak position and $\Gamma_{0}$ is the decay width, proportional to the inverse
of the meson lifetime.
### V.1 Background subtraction
In the thermal approach to heavy quarkonium, the colored medium is vital since
it strongly modifies the vacuum phenomenology. In particular, following the
Feynman-Hellmann theorem analysis, bound-state energies are expected to
decrease when the constituent mass is increased at zero temperature Quigg and
Rosner (1979). Consequently, zero-temperature spectral peaks are shifted in
position, color-singlet excitations transform into other singlet states by
thermal fluctuations, or these singlet excitations transform into color
octets. All of this intricate phenomenology is encoded in the medium.
Therefore, in order to isolate the thermal information regarding the melting
of heavy quarkonium states, a proper subtraction scheme is needed. In our
case, we consider an interpolating polynomial in $\omega^{2}$ that is
subtracted from the spectral density, allowing us to obtain a Breit-Wigner
distribution associated with the heavy quark state only. In Figure 4, we
depict the subtraction process for the melting of $J/\psi$, which in our model
occurs at 415 MeV (2.92 $T_{c}$).
Figure 4: Subtraction procedure for $J/\psi$ at 400 MeV and 415 MeV,
$\psi^{\prime}$ at 85 MeV and 90 MeV, and $\Upsilon$ at 465 MeV. The
background polynomial appears as the orange curve in each case. The subtracted
spectral density, fitted with the Breit-Wigner distribution (42), is plotted
in the top right part of each figure. In the lower panels, we plot the
bottomonium case at the same temperature, 465 MeV, with two different
interpolating polynomials; in both situations, changing the polynomial does
not affect the melting criterion. Recall that, unlike other non-holographic
effective models, the in-medium effects are encoded in the metric tensor.
Thus, any characteristic behavior, such as heavy quarkonium regeneration or
gluon radiation, is indistinguishable.
At this step, an important remark should be made. The interpolating polynomial
is not defined uniquely. We can, however, fix a criterion that these
polynomials should obey. Since we do not have a proper phenomenological tool
at hand to separate the behavior of the medium from the hadronic state, we
will ask for a _smooth subtraction_: the region where the interpolating
polynomial departs from the spectral function should not display an abrupt
change. Since infinitely many functions could match this condition, we can
only provide a temperature interval where the meson starts to melt. However,
choosing similar polynomials leads to the same melting interval; see the lower
panels of figure 4.
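A minimal sketch of such a smooth subtraction, assuming the spectral density is available on a grid in $\omega^{2}$, is the following; the peak window and polynomial degree are analyst choices, and all names are illustrative rather than taken from our code.

```python
import numpy as np

def subtract_background(w2, rho, peak_window, degree=4):
    """Fit a polynomial in omega^2 through the off-peak region and subtract it,
    leaving (approximately) the resonance contribution only."""
    lo, hi = peak_window
    off_peak = (w2 < lo) | (w2 > hi)            # exclude the resonance region
    coeffs = np.polyfit(w2[off_peak], rho[off_peak], degree)
    background = np.polyval(coeffs, w2)
    return rho - background

# Example call (commented: it assumes the arrays from the previous sketch),
# with an illustrative window around a peak near omega_0^2 ~ 9.6:
# rho_sub = subtract_background(w2, rho_bar, peak_window=(7.0, 13.0))
```

Varying `degree` or `peak_window` plays the role of choosing a different interpolating polynomial; as noted above, similar choices lead to the same melting interval.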
### V.2 Melting Temperature Criterion
As we observe in figure 2, mesonic states disappear progressively with
increasing temperature. In the holographic potential case, the melting
temperature is not connected with the disappearance of the confining barrier.
Since the potential has a deep well in the UV region, thermal stability is
instead associated with tunneling through this barrier.
In the holographic situation, the generated dual object is a colored medium at
thermal equilibrium, where the heavy quarkonium exists. In such a static
situation, mesonic states either exist or have melted down. Thus, the only
relevant information at the holographic level we have is the spectral function
and the background subtraction.
In order to find the interval where heavy mesons start to melt, we follow the
standard criterion connecting the Breit-Wigner maximum with its graphical
width, defined as the product of the meson mass and the thermal width:
$\frac{\bar{\rho}(\omega_{0}^{2})}{\omega_{0}\,\frac{\Gamma}{2}}<1.$ (43)
Notice that this definition is an alternative to the criterion used in
effective potential models and lattice QCD, where melting occurs when the
in-medium binding energy equals the thermal decay
width Rothkopf (2020). In the holographic case, melting temperatures are
intrinsically connected to decay constants, proportional to the two-point
function residues at zero temperature. Recall the decay constants carry
information about how the mesonic states decay electromagnetically into
leptons. Thus, indirectly they measure the mesonic stability affected by
thermal changes: excited states with lower binding energy than the ground one
melt first. This connection with meson stability is supported by the
experimental fact that decay constants decrease with the excitation number.
Another possible form to explore the connection between the mesonic melting
process and stability is done in the context of configurational entropy,
discussed in refs. Braga et al. (2018); Braga and Ferreira (2018); Braga and
da Mata (2020); Braga and Junqueira (2020).
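Numerically, once $\omega_{0}$ and $\Gamma_{0}$ have been extracted from the Breit-Wigner fit, eq. (43) reduces to a one-line test. The sketch below is purely illustrative; the numerical values are invented for demonstration and are not fitted results.

```python
def is_melted(rho_peak, omega0, Gamma):
    """Eq. (43): True once the rescaled peak height no longer stands out
    over the graphical width omega_0 * Gamma / 2."""
    return rho_peak / (omega0 * Gamma / 2.0) < 1.0

# Illustrative values only: a tall narrow peak survives, a shallow wide one melts.
print(is_melted(rho_peak=48.0, omega0=3.1, Gamma=0.4))   # False (bound state)
print(is_melted(rho_peak=0.5, omega0=3.1, Gamma=1.5))    # True (melted)
```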
In the case of charmonium, the $\psi^{\prime}$ state melts near 90 MeV or
$0.63\,T_{c}$. The ground state, the $J/\psi$ meson, melts near 415 MeV or
$2.92\,T_{c}$. If we compare with the pNRQCD results Burnier et al. (2015), we
obtain a lower temperature for the $2\,S$ charmonium state (lattice result:
$0.95\,T_{c}$) but higher for the ground state (lattice result:
$1.37\,T_{c}$). The main difference in both results is that in our holographic
case we are considering heavy quarkonium at rest, i.e., $|\vec{p}|=0$.
A similar situation is observed in the bottomonium case: the $\Upsilon(2\,S)$
melts near $115$ MeV (or $0.81\,T_{c}$), compared with the pNRQCD result of
$1.25\,T_{c}$. For the ground state we have $465$ MeV (or $3.27\,T_{c}$),
compared with the lattice result of $2.66\,T_{c}$.
If we compare with holographic stringy models Andreev (2019), where the
melting temperature is estimated from the string tension in an AdS-deformed
target space, we find higher melting temperatures for heavy quarkonium: that
approach predicts $1.05\,T_{c}$ and $2.52\,T_{c}$ for charmonium and
bottomonium, respectively.
### V.3 Thermal Mass
Figure 5: Resonance location as a function of the temperature. The shaded
region in each panel describes the increase of the thermal width until the
meson melting occurs. The left panels correspond to ground states, and the
right panels to the first excited states. Upper panels correspond to
charmonium, and lower panels to bottomonium.
Other important quantities to discuss are the masses and widths of the
different hadronic states, since these parameters carry information about the
interaction with the colored medium. Figure 5 summarizes the thermal behavior
of the masses of the first two charmonium and bottomonium states. Compared
with other holographic models (see Fujita et al. (2010, 2009) for heavy
mesons; Colangelo et al. (2009, 2009) and Cui et al. (2016) for light
mesons), the ground-state mass in our case tends to increase with temperature
until the meson melting takes place, as the upper ($J/\psi$) and lower
($\Upsilon$) left panels of figure 5 display. The same behavior is observed
for the charmonium first excited state, depicted in the upper right panel of
figure 5. However, this behavior is not observed for the first excited state
of bottomonium: in the $\Upsilon(2S)$ case, the hadronic resonance location
decreases with temperature.
The observed behavior of the thermal mass in our case is quite different from
the one depicted in Fujita et al. (2009). In their case, the thermal mass
increases towards a maximum, where the authors claim the melting process
starts, and then decreases until the last charmonium meson has melted. In our
case, such a concavity change occurs at low temperatures compared with
$T_{c}$, far from the melting temperatures of around three times $T_{c}$. The
monotonicity of the thermal mass appears to be more consistent with lattice
calculations Burnier et al. (2016); Rothkopf (2020). In those approaches, the
NRQCD heavy quark potential is written at the soft (kinematical) scale. At
hard scales, near the constituent quark masses, other approaches are
necessary.
In the context of QCD sum rules Dominguez et al. (2010), following the Hilbert
moment mechanism, the thermal mass in the case of heavy quarks does not change
with the temperature until the system reaches the critical temperature, where
it drops. As an interesting observation, in this model, the decay constants go
to zero as the temperature comes closer to the critical one, indicating that
the melting has occurred.
## VI Conclusions
By deforming the non-quadratic dilaton defined in Martin Contreras and Vega
(2020b) using the proposal of Braga et al. (2017), it was possible to fit, for
vector charmonium and bottomonium, both the mass spectra as non-linear Regge
trajectories and the decreasing decay constants. The precise holographic
description of the heavy vector meson excited states is reached by
considering the lessons learned over the last decade of bottom-up AdS/QCD.
The precision of the fit is measured by $\delta_{RMS}$, defined in eq. (19),
being $6\%$ for charmonium and $7.2\%$ for bottomonium. The dilaton
deformations are necessary for a precise description of the spectrum of masses
and decay constants. If we use the original quadratic dilaton to describe the
charmonium spectrum by fixing $k=1.55$ GeV, we find $\delta_{RMS}=74\%$. So,
the new parameters introduced in the dilaton do allow an accurate description
of the spectrum. Notice that the model has predictive power even though we are
using four parameters to fit each heavy quarkonium family. As a matter of
fact, for the non-linear trajectory $M^{2}=a(n+b)^{\nu}$ we need three
parameters. If we assume that decay constants are functions of the excitation
number $n$ only, we can write them as $f(n)=c(-n+d)$, supposing linearity as a
first guess. The minus sign in the parametrization emphasizes the decreasing
behavior of the decay constants with $n$. Thus, if we count the maximum number
of parameters needed for both decays and masses, we obtain five parameters. If
we assume non-linear behavior for the decays, we have one extra parameter,
implying six instead of five parameters per family. In our case we have four,
so the model retains predictive power. Such precision is essential to set the
correct zero-temperature behavior of the spectral functions. If we think of
increasing temperature as an analog of time evolution, zero-temperature
properties play the role of initial conditions.
Spectral functions have been numerically computed for several representative
values of the temperature. As expected, pronounced resonance peaks around the
zero-temperature masses of charmonium and bottomonium are observed near
$T_{c}$. To discuss the fate of the particle states with increasing
temperature, it is necessary to subtract background contributions from the
spectral functions. We provide a detailed discussion of this subject and
propose a numerical scheme to perform such a subtraction. The Breit-Wigner
peaks are then analyzed. We obtain the melting temperatures of $J/\Psi$ and
$\Upsilon$ to be, respectively, $T_{J/\Psi}=415$ MeV and $T_{\Upsilon}=465$
MeV. These high melting temperatures are directly connected to the correct
description of the decay constants of the corresponding fundamental states of
$c\bar{c}$ and $b\bar{b}$. The excited states $\Psi^{\prime},\Upsilon^{\prime}$
melt at temperatures smaller than $T_{c}$. So, we consider smaller
temperatures, around $50-60$ MeV, where we can see the pronounced peaks
associated with these states. Within the range of temperatures of around
$50-470$ MeV, we consider the thermal mass shift of $J/\Psi,\Psi^{\prime}$ and
$\Upsilon,\Upsilon^{\prime}$. We observe a small and monotonic increase in the
masses of the ground states with temperature.
The specific form of the dilaton leads to a holographic potential that differs
from the one obtained in quadratic dilaton models. In the present case, there
is a narrow well in the ultraviolet region. The melting of the fundamental
state is no longer entirely governed by the disappearance of the infrared
barrier. For this shape of holographic potential, the criterion for defining
the melting of the states established in Vega and Martin Contreras (2019) does
not apply. It is a task for future work to understand the melting process from
the thermal evolution of this class of holographic potentials.
###### Acknowledgements.
We wish to acknowledge the financial support provided by FONDECYT (Chile)
under Grants No. 1180753 (A. V.) and No. 3180592 (M. A. M. C.). Saulo Diles
thanks the Campus Salinopolis of the Universidade Federal do Pará for the
release of work hours for research.
## References
* Matsui and Satz (1986) T. Matsui and H. Satz, Phys. Lett. B 178, 416 (1986).
* Chaudhuri (2002) A. Chaudhuri, Phys. Rev. C 66, 021902 (2002), eprint nucl-th/0203045.
* Liu et al. (2011) Y. Liu, B. Chen, N. Xu, and P. Zhuang, Phys. Lett. B 697, 32 (2011), eprint 1009.2585.
* Abreu et al. (2018) L. Abreu, K. Khemchandani, A. Martínez Torres, F. Navarra, and M. Nielsen, Phys. Rev. C 97, 044902 (2018), eprint 1712.06019.
* Song et al. (2012) T. Song, K. C. Han, and C. M. Ko, Phys. Rev. C 85, 014902 (2012), eprint 1109.6691.
* Emerick et al. (2012) A. Emerick, X. Zhao, and R. Rapp, Eur. Phys. J. A 48, 72 (2012), eprint 1111.6537.
* Reed (2011) R. Reed (STAR), J. Phys. G 38, 124185 (2011), eprint 1109.3891.
* Krouppa et al. (2019) B. Krouppa, A. Rothkopf, and M. Strickland, Nucl. Phys. A 982, 727 (2019), eprint 1807.07452.
* Yao and Müller (2019) X. Yao and B. Müller, Phys. Rev. D 100, 014008 (2019), eprint 1811.09644.
* Tanabashi et al. (2018) M. Tanabashi et al. (Particle Data Group), Phys. Rev. D98, 030001 (2018).
* Pang (2019) C.-Q. Pang, Phys. Rev. D 99, 074015 (2019), eprint 1902.02206.
* Badalian and Bakker (2019) A. Badalian and B. Bakker, Few Body Syst. 60, 58 (2019), eprint 1903.11504.
* Gross and Wilczek (1973) D. J. Gross and F. Wilczek, Phys. Rev. Lett. 30, 1343 (1973).
* Politzer (1973) H. Politzer, Phys. Rev. Lett. 30, 1346 (1973).
* van Ritbergen et al. (1997) T. van Ritbergen, J. Vermaseren, and S. Larin, Phys. Lett. B 400, 379 (1997), eprint hep-ph/9701390.
* Polchinski and Strassler (2002) J. Polchinski and M. J. Strassler, Phys. Rev. Lett. 88, 031601 (2002), eprint hep-th/0109174.
* Boschi-Filho and Braga (2003) H. Boschi-Filho and N. R. Braga, JHEP 05, 009 (2003), eprint hep-th/0212207.
* Erlich et al. (2005) J. Erlich, E. Katz, D. T. Son, and M. A. Stephanov, Phys. Rev. Lett. 95, 261602 (2005), eprint hep-ph/0501128.
* Brodsky and de Teramond (2008) S. J. Brodsky and G. F. de Teramond, Phys. Rev. D 77, 056007 (2008), eprint 0707.3859.
* Maldacena (1999) J. M. Maldacena, Int. J. Theor. Phys. 38, 1113 (1999), eprint hep-th/9711200.
* Aharony et al. (2000) O. Aharony, S. S. Gubser, J. M. Maldacena, H. Ooguri, and Y. Oz, Phys. Rept. 323, 183 (2000), eprint hep-th/9905111.
* Karch and Katz (2002) A. Karch and E. Katz, JHEP 06, 043 (2002), eprint hep-th/0205236.
* Sakai and Sugimoto (2005a) T. Sakai and S. Sugimoto, Prog. Theor. Phys. 113, 843 (2005a), eprint hep-th/0412141.
* Sakai and Sugimoto (2005b) T. Sakai and S. Sugimoto, Prog. Theor. Phys. 114, 1083 (2005b), eprint hep-th/0507073.
* Boschi-Filho and Braga (2004) H. Boschi-Filho and N. R. Braga, Eur. Phys. J. C 32, 529 (2004), eprint hep-th/0209080.
* Karch et al. (2006) A. Karch, E. Katz, D. T. Son, and M. A. Stephanov, Phys. Rev. D 74, 015005 (2006), eprint hep-ph/0602229.
* Grigoryan and Radyushkin (2007) H. Grigoryan and A. Radyushkin, Phys. Rev. D 76, 095007 (2007), eprint 0706.1543.
* Erdmenger et al. (2008) J. Erdmenger, N. Evans, I. Kirsch, and E. Threlfall, Eur. Phys. J. A 35, 81 (2008), eprint 0711.4467.
* Colangelo et al. (2008) P. Colangelo, F. De Fazio, F. Giannuzzi, F. Jugeau, and S. Nicotri, Phys. Rev. D 78, 055009 (2008), eprint 0807.1054.
* Ballon Bayona et al. (2010) C. Ballon Bayona, H. Boschi-Filho, N. R. Braga, and M. A. Torres, JHEP 01, 052 (2010), eprint 0911.0023.
* Cotrone et al. (2011) A. L. Cotrone, A. Dymarsky, and S. Kuperstein, JHEP 03, 005 (2011), eprint 1010.1017.
* Kim et al. (2007) Y. Kim, J.-P. Lee, and S. H. Lee, Phys. Rev. D 75, 114008 (2007), eprint hep-ph/0703172.
* Grigoryan et al. (2010) H. R. Grigoryan, P. M. Hohler, and M. A. Stephanov, Phys. Rev. D 82, 026005 (2010), eprint 1003.1138.
* Li et al. (2016) Y. Li, P. Maris, X. Zhao, and J. P. Vary, Phys. Lett. B 758, 118 (2016), eprint 1509.07212.
* Braga et al. (2016a) N. R. F. Braga, M. A. Martin Contreras, and S. Diles, EPL 115, 31002 (2016a), eprint 1511.06373.
* Boschi-Filho and Braga (2005) H. Boschi-Filho and N. R. Braga, JHEP 03, 051 (2005), eprint hep-th/0411135.
* Boschi-Filho et al. (2006) H. Boschi-Filho, N. R. Braga, and C. N. Ferreira, Phys. Rev. D 74, 086001 (2006), eprint hep-th/0607038.
* Andreev and Zakharov (2006) O. Andreev and V. I. Zakharov, Phys. Rev. D 74, 025023 (2006), eprint hep-ph/0604204.
* Andreev and Zakharov (2007) O. Andreev and V. I. Zakharov, JHEP 04, 100 (2007), eprint hep-ph/0611304.
* Colangelo et al. (2011) P. Colangelo, F. Giannuzzi, and S. Nicotri, Phys. Rev. D 83, 035015 (2011), eprint 1008.3116.
* Bruni et al. (2019) R. C. Bruni, E. Folco Capossoli, and H. Boschi-Filho, Adv. High Energy Phys. 2019, 1901659 (2019), eprint 1806.05720.
* Diles (2020) S. Diles, EPL 130, 51001 (2020), eprint 1811.03141.
* Fujita et al. (2009) M. Fujita, K. Fukushima, T. Misumi, and M. Murata, Phys. Rev. D 80, 035001 (2009), eprint 0903.2316.
* Fujita et al. (2010) M. Fujita, T. Kikuchi, K. Fukushima, T. Misumi, and M. Murata, Phys. Rev. D 81, 065024 (2010), eprint 0911.2298.
* Mamani et al. (2014) L. A. Mamani, A. S. Miranda, H. Boschi-Filho, and N. R. F. Braga, JHEP 03, 058 (2014), eprint 1312.3815.
* Evans and Tedder (2006) N. Evans and A. Tedder, Phys. Lett. B 642, 546 (2006), eprint hep-ph/0609112.
* Afonin (2011) S. Afonin, Phys. Rev. C 83, 048202 (2011), eprint 1102.0156.
* Afonin (2012) S. Afonin, Int. J. Mod. Phys. A 27, 1250171 (2012), eprint 1207.2644.
* Braga et al. (2016b) N. R. Braga, M. A. Martin Contreras, and S. Diles, Phys. Lett. B 763, 203 (2016b), eprint 1507.04708.
* Braga et al. (2017) N. R. F. Braga, L. F. Ferreira, and A. Vega, Phys. Lett. B 774, 476 (2017), eprint 1709.05326.
* Braga and Ferreira (2018) N. R. Braga and L. F. Ferreira, Phys. Lett. B 783, 186 (2018), eprint 1802.02084.
* Martin Contreras and Vega (2020a) M. A. Martin Contreras and A. Vega, Phys. Rev. D 101, 046009 (2020a), eprint 1910.10922.
* Afonin and Pusenkov (2014) S. S. Afonin and I. V. Pusenkov, Phys. Rev. D90, 094020 (2014), eprint 1411.2390.
* Chen (2018) J.-K. Chen, Eur. Phys. J. C78, 648 (2018).
* Martin Contreras and Vega (2020b) M. A. Martin Contreras and A. Vega, Phys. Rev. D 102, 046007 (2020b), URL https://link.aps.org/doi/10.1103/PhysRevD.102.046007.
* Braga and Ferreira (2016) N. R. F. Braga and L. F. Ferreira, Phys. Rev. D 94, 094019 (2016), eprint 1606.09535.
* Vega and Cabrera (2016) A. Vega and P. Cabrera, Phys. Rev. D 93, 114026 (2016), eprint 1601.05999.
* Vega and Ibañez (2017) A. Vega and A. Ibañez, Eur. Phys. J. A 53, 217 (2017), eprint 1706.01994.
* Vega and Martin Contreras (2019) A. Vega and M. Martin Contreras, Nucl. Phys. B 942, 410 (2019), eprint 1808.09096.
* Dominguez et al. (2010) C. A. Dominguez, M. Loewe, J. Rojas, and Y. Zhang, Phys. Rev. D 81, 014007 (2010), eprint 0908.2709.
* Dominguez et al. (2013) C. Dominguez, M. Loewe, and Y. Zhang, Phys. Rev. D 88, 054015 (2013), eprint 1307.5766.
* Braga et al. (2016c) N. R. F. Braga, M. A. Martin Contreras, and S. Diles, Eur. Phys. J. C 76, 598 (2016c), eprint 1604.08296.
* Son and Starinets (2002) D. T. Son and A. O. Starinets, JHEP 09, 042 (2002), eprint hep-th/0205051.
* Hawking and Page (1983) S. Hawking and D. N. Page, Commun. Math. Phys. 87, 577 (1983).
* Herzog (2007) C. P. Herzog, Phys. Rev. Lett. 98, 091601 (2007), eprint hep-th/0608151.
* Afonin and Katanaeva (2020) S. Afonin and A. Katanaeva, in _18th International Conference on Hadron Spectroscopy and Structure_ (2020), pp. 718–721, eprint 2009.05375.
* Afonin (2020) S. Afonin (2020), eprint 2005.01550.
* Aoki et al. (2006) Y. Aoki, G. Endrodi, Z. Fodor, S. Katz, and K. Szabo, Nature 443, 675 (2006), eprint hep-lat/0611014.
* Teaney (2006) D. Teaney, Phys. Rev. D 74, 045025 (2006), eprint hep-ph/0602044.
* Miranda et al. (2009) A. S. Miranda, C. Ballon Bayona, H. Boschi-Filho, and N. R. Braga, JHEP 11, 119 (2009), eprint 0909.1790.
* Colangelo et al. (2009) P. Colangelo, F. Giannuzzi, and S. Nicotri, Phys. Rev. D 80, 094019 (2009), eprint 0909.1534.
* Cui et al. (2016) L.-X. Cui, Z. Fang, and Y.-L. Wu, Chinese Physics C 40, 063101 (2016), URL https://doi.org/10.1088/1674-1137/40/6/063101.
* Quigg and Rosner (1979) C. Quigg and J. L. Rosner, Phys. Rept. 56, 167 (1979).
* Rothkopf (2020) A. Rothkopf, in _28th International Conference on Ultrarelativistic Nucleus-Nucleus Collisions_ (2020), eprint 2002.04938.
* Braga et al. (2018) N. R. Braga, L. F. Ferreira, and R. Da Rocha, Phys. Lett. B 787, 16 (2018), eprint 1808.10499.
* Braga and da Mata (2020) N. R. Braga and R. da Mata, Phys. Rev. D 101, 105016 (2020), eprint 2002.09413.
* Braga and Junqueira (2020) N. R. Braga and O. Junqueira (2020), eprint 2010.00714.
* Burnier et al. (2015) Y. Burnier, O. Kaczmarek, and A. Rothkopf, JHEP 12, 101 (2015), eprint 1509.07366.
* Andreev (2019) O. Andreev, Phys. Rev. D 100, 026013 (2019), eprint 1902.10458.
* Burnier et al. (2016) Y. Burnier, O. Kaczmarek, and A. Rothkopf, JHEP 10, 032 (2016), eprint 1606.06211.
# SuperWASP Variable Stars: Classifying Light Curves Using Citizen Science
Heidi B. Thiemann,$^{1,2}$ Andrew J. Norton,$^{1}$ Hugh J. Dickinson,$^{1}$ Adam McMaster,$^{1,2}$ and Ulrich C. Kolb$^{1}$
$^{1}$School of Physical Sciences, The Open University, Milton Keynes, MK7 6AA, UK
$^{2}$DISCnet Centre for Doctoral Training, The Open University, Walton Hall, Milton Keynes, MK7 6AA, UK
E-mail: <EMAIL_ADDRESS> (HBT)
(Accepted 2021 January 14. Received 2021 January 14; in original form 2020
December 10)
###### Abstract
We present the first analysis of results from the SuperWASP Variable Stars
Zooniverse project, which is aiming to classify 1.6 million phase-folded light
curves of candidate stellar variables observed by the SuperWASP all sky survey
with periods detected in the SuperWASP periodicity catalogue. The resultant
data set currently contains $>$1 million classifications corresponding to
$>$500,000 object-period combinations, provided by citizen scientist
volunteers. Volunteer-classified light curves have $\sim$89 per cent accuracy
for detached and semi-detached eclipsing binaries, but only $\sim$9 per cent
accuracy for rotationally modulated variables, based on known objects. We
demonstrate that this Zooniverse project will be valuable for both population
studies of individual variable types and the identification of stellar
variables for follow up. We present preliminary findings on various unique and
extreme variables in this analysis, including long period contact binaries and
binaries near the short-period cutoff, and we identify 301 previously unknown
binaries and pulsators. We are now in the process of developing a web portal
to enable other researchers to access the outputs of the SuperWASP Variable
Stars project.
###### keywords:
stars: variables – stars: binaries – surveys – catalogues
## 1 Introduction
Variable stars are key to investigating and testing stellar astrophysics, and
the dynamics and structure of stellar systems. The detection, classification,
and study of classes of variable stars is therefore an important pursuit.
Typically, variable stars are detected through amplitude and period variations
in their photometric light curve. Classifications of periodic variables based
on their light curve are not always conclusive, but instead give a strong
indication of variable type, and can be used to identify candidates for
spectroscopic and photometric follow-up.
The full SuperWASP photometric archive contains $>$30 million light curves of
relatively bright stars (V$\leq$15), observed with a high cadence (as short as
30 seconds) and long baseline ($\sim$11 years). A previous period search using
the first few years of the SuperWASP archive enabled a significant amount of
research in the field of stellar variability, including: the identification of
140 short-period eclipsing binaries close to the period cut-off (Lohr et al.,
2013); the identification of period change in post common-envelope eclipsing
binary systems to search for circumbinary planets (Lohr et al., 2014); the
discovery of a doubly eclipsing quintuple system (Lohr et al., 2015b); the
identification of period change in $\sim$1400 eclipsing binaries (Lohr et al.,
2015a); the discovery of a $\delta$ Sct star in an eclipsing binary (Norton et
al., 2016); the study of $\sim$5000 RR Lyrae stars and identification of
$\sim$800 Blazhko effect systems (Greer et al., 2017); and the study of
rotationally modulated variables (Thiemann et al., 2020). A more recent re-
analysis of this archive detected $\sim$8 million potential periods in $\sim$3
million unique objects (Norton, 2018).
There have been previous attempts at using machine learning algorithms and
Artificial Neural Networks (ANNs), often called Neural Networks (NN), to
automate the classification of SuperWASP variable stars from the raw data,
including Payne (2013), who made use of three NNs to process a range of
parameters which defined the shape of the phase folded light curve. They
processed over 4.3 million periods, giving $\sim$1.1 million preliminary
classifications. However these NNs found only partial success, identifying 75
per cent of light curves correctly. As an alternative to machine learning, the
SuperWASP Variable Stars (SVS)
Zooniverse111www.zooniverse.org/projects/ajnorton/superwasp-variable-stars
project is instead using citizen science to classify the 1.6 million folded
light curves referred to above. In this paper, we present the first analysis
of SVS, containing over 1 million classifications, corresponding to over
500,000 unique object-period combinations.
Figure 1: Histogram of the identified periods in all objects in the SuperWASP
Periodicity Catalogue. There are significant numbers of excess periods close
to integer multiples or fractions of a sidereal day or lunar month, indicated
by coloured vertical lines (red lines correspond to fractions of a day; light
blue corresponds to multiples of a day; dark blue corresponds to the monthly
and longer cycles). All such periods are flagged and may be discarded. The
upper panel shows the cumulative period histogram while the lower one, whose
vertical axis is truncated, shows the regular histogram.
Figure 2: Histogram of all un-flagged periods corresponding to objects in the
SuperWASP Periodicity Catalogue. The coloured vertical lines indicate where
flagged periods have been removed (red lines correspond to fractions of a
day; light blue corresponds to multiples of a day; dark blue corresponds to
the monthly and longer cycles). The upper panel shows the cumulative period
histogram while the lower one shows the regular histogram.
The SVS project was launched on 5th Sep 2018 and had engaged $\sim$4,500
volunteers at the time of this analysis. This analysis acts as a preliminary
look at the Zooniverse classifications, demonstrating that SVS can be used for
both population studies and for identifying rare and unique variables. This
analysis will guide how we develop the project as it gains more volunteer and
machine learning classifications. In Section 2 we describe the SuperWASP data;
in Section 3 we describe the Zooniverse project; in Section 4 we summarise our
results including the identification of new and unique stellar variables; in
Section 6 we draw our conclusions.
## 2 SuperWASP Periodicity Catalogue
SuperWASP (Pollacco et al., 2006) surveyed almost the entire night sky using
two identical observatories in La Palma, Canary Islands, and Sutherland, South
Africa. Each robotic observatory consisted of 8 cameras each with a 14 cm
aperture and a 7.8 $\times$ 7.8 square degree field of view, allowing for a
total sky coverage of $\sim$500 square degrees per exposure. The survey
excludes the Galactic Plane where the large pixel scale of 16.7 arcsecond per
pixel prevents separation of signals from individual stars in this dense
stellar region. SuperWASP observations were reduced using the pipeline
described in Pollacco et al. (2006). Over the course of $\sim$2800 nights
between 2004 - 2013, SuperWASP accumulated $\sim$16 million images containing
$\sim$580 billion data points corresponding to $\sim$31 million unique stars
(Norton, 2018). The SuperWASP data set therefore provides a high cadence and
long baseline of observations for more than 30 million stars with magnitudes
between $V=8-15$.
For SuperWASP observations, 1 count s$^{-1}$ after background subtraction is
roughly equivalent to V$\sim$15. Therefore the mean SuperWASP magnitude is
defined as $V=-2.5\log_{10}(F/10^{6})$, where $F$ is the mean SuperWASP
flux; this pseudo-V magnitude is comparable to the Tycho V magnitude. A
typical object in the SuperWASP archive will have $\sim$20,000 observations in
its light curve. While the SuperWASP data can contain a significant level of
noise, the long baseline of observations can often compensate for this in
phase folded light curves.
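As a quick numerical check of this calibration, the following sketch evaluates the pseudo-V magnitude; the function name is ours and not part of the SuperWASP pipeline.

```python
import numpy as np

def pseudo_v_magnitude(mean_flux):
    """SuperWASP pseudo-V magnitude from the mean flux in counts per second."""
    return -2.5 * np.log10(mean_flux / 1e6)

# 1 count/s after background subtraction corresponds to V ~ 15, as quoted above:
print(pseudo_v_magnitude(1.0))   # 15.0
```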
SuperWASP photometry is carried out by placing apertures on the images at pre-
defined positions identified using the USNO catalogue as an input. However,
the large pixel size of the individual cameras means that it is possible that
a single star can be associated with two or more different identifiers in the
SuperWASP archive, and that light from multiple stars can appear within the
same photometric aperture. Typically there is only a single (or dominant) star
in the aperture, so association with a specific object is possible, but that
is not always the case. Hence, in each case confirmatory photometry with a
small PSF is necessary to confirm exactly which object is variable.
Figure 3: Upper panel: Volunteers are first tasked with classifying each light
curve as a generic variable type. This example shows an EW folded at half the
correct period. Lower panel: If a volunteer chooses a classification of EA/EB,
EW, or pulsator, they are asked to choose whether the period is correct or
not.
Norton (2018) recently performed a re-analysis of the entire SuperWASP archive
with the aim of detecting all periodic variables. The re-analysis comprised a
one-dimensional CLEAN power spectrum analysis (based on the technique outlined
by Roberts et al. (1987)) as well as a phase dispersion minimisation and
folding analysis (following the method of Davies (1990)). Only periods that
were significantly detected using both methods were considered to be
plausible. For each light curve, all periods that passed these criteria were
recorded, with a significance value recorded from both the folding analysis
and the Fourier analysis. The periods identified have an average uncertainty
of $\sim\pm 0.1$ per cent.
This re-analysis detected $\sim$8 million candidate periods of stellar
variables in $\sim$3 million unique objects, shown in Figure 1. A significant
number of period detections result from systematic effects in the SuperWASP
photometric data, resulting in the detection of periods close to integer
fractions or multiples of a sidereal day or lunar month (i.e. 1 day, 1/2 day,
1/4 day, etc.). Periods flagged as affected by one of these effects were
removed from the data set, leaving 1,569,061 candidate periods in 767,199
unique objects, shown in Figure 2. Clearly some genuine periods will have been
rejected by this method, but if we extrapolate across the gaps, the rejected
genuine periods should amount to no more than 5 per cent of the total. The
SuperWASP periodicity catalogue is available on the Warwick SuperWASP
archive222http://wasp.warwick.ac.uk/archive/docs/index.shtml as the table
period_ajn5.
To generate subjects for SVS (Norton, 2018), light curve data for objects with
one or more potentially genuine periods listed in the SuperWASP Periodicity
Catalogue were used. The data for each selected object were folded at each of
its potential periods and then rendered to produce a set of one or more phase-
folded light curve images. Each image displays the raw photometric data
points, overlaid with the mean profile in 100 phase bins, an example of which
is shown in Figure 3.
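The folding and binning step can be sketched in a few lines; the routine below is an illustrative reconstruction under the stated conventions (fold at the candidate period, mean profile in 100 phase bins), not the actual subject-generation code, and the synthetic light curve is only a stand-in.

```python
import numpy as np

def phase_fold(times, fluxes, period, n_bins=100):
    """Fold at `period`; return per-point phases, bin centres, and mean profile."""
    phases = (times % period) / period
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(phases, edges) - 1, 0, n_bins - 1)
    mean_profile = np.array([fluxes[idx == k].mean() if np.any(idx == k)
                             else np.nan for k in range(n_bins)])
    return phases, 0.5 * (edges[:-1] + edges[1:]), mean_profile

# Synthetic sinusoidal variable observed at random times (illustrative only).
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 100.0, 5000))
f = 1.0 + 0.1 * np.sin(2 * np.pi * t / 0.75) + 0.02 * rng.normal(size=t.size)
phases, centres, profile = phase_fold(t, f, period=0.75)
```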
## 3 Citizen Science
The Zooniverse333www.zooniverse.org (Lintott et al., 2008) is the world’s most
popular platform for "people-powered research", where a community of
volunteers, or "citizen scientists", can participate in real scientific
research through simple tasks such as analysing and categorising large data
sets. This approach, using the "wisdom of the crowd", can be used to greatly
improve the accuracy and speed with which data can be analysed and classified.
Despite minimal training and subject matter expertise, Zooniverse volunteers
have proven time and time again that non-experts can achieve a good level of
accuracy, and can identify unusual objects that automated algorithms will
often miss.
SVS launched on 5th September 2018, with the aim of classifying the output of
the SuperWASP Periodicity Catalogue (Norton, 2018). The aim of SVS is
threefold: to identify rare variable stars; to identify populations of
variable stars in order to probe the extremes and trends of each population;
and to facilitate the future development of a web portal in order to give
researchers and the public access to the output of this project.
We constructed the SVS project using the Zooniverse project builder
platform444www.zooniverse.org/lab, creating a classification task, tutorial,
and "Field Guide" which provides example light curves and guidance for
classification. There is also an option for volunteers to report their
findings in the "Talk" section, where they can discuss individual light
curves, highlight unusual and rare ones, and identify which objects have
already been detected in other databases.
The classification of variable stars can be difficult, with 211 variable star
types and sub-types listed in the International Variable Star
Index555www.aavso.org/vsx/index.php (VSX) (Watson et al., 2020). The noise
level of the SuperWASP light curves often makes it difficult to distinguish
between similar types of variables. However, to be successful, Zooniverse
projects must be accessible to non-subject-matter experts. We therefore ask
volunteers to classify light curves into the following generic and overarching
variable types:
* •
Pulsators: stars which display periodic changes in brightness due to changes
in the star’s size and luminosity as its outer layers expand and contract in a
regular manner. This category includes RR Lyrae, $\delta$ Scuti, Cepheid
variables, and Mira variables. Light curves are often asymmetric with a
steeper rise and shallower fall in brightness.
* •
EA/EB: detached and semi-detached eclipsing binary systems which display
periodic changes in brightness. This category includes Algol (EA) and Beta
Lyrae (EB) eclipsing binaries. Two eclipses per cycle may be distinguished,
often of different depth, with clear boundaries to the eclipses.
* •
EW: contact-eclipsing and near-contact eclipsing binary systems which display
periodic changes in brightness. This category includes W Ursae Majoris (EW)
type eclipsing binaries. Brightness variation is continuous and the eclipses
are often of similar depth, resulting in half the orbital period often being
identified instead of the true period.
* •
Rotators: stars which display rotational modulation in their light curve. This
category includes single stars with significant star spots and stars with
ellipsoidal modulation from close binaries that do not eclipse but instead are
distorted into non-spherical (ellipsoidal) shapes by gravity due to their
proximity. Brightness variations are typically quasi-sinusoidal.
* •
Unknown: stars displaying some degree of periodicity but which do not fall
into any previous category. This category might include semi-regular stars and
long period variables.
* •
Junk: light curves which display no genuine periodicity, or apparent
periodicity which is due only to data dropouts or remaining systematic
artefacts.
Volunteers are presented with a phase-folded light curve and tasked with
classifying it into one of the following options: pulsator, EA/EB, EW,
rotator, unknown, or junk, shown in Figure 3. If the volunteer chooses either
EA/EB, EW, or pulsator, they are presented with a second question which asks
them to choose whether the folding period is: correct period, half period, or
wrong period. The classification task itself is essentially a pattern matching
task.
We collect multiple classifications of each phase-folded light curve, allowing
us to take the most common classification as the true classification and
"retire" it from the live project. Between 5th September 2018 – 23rd September
2019, each light curve required 7 classifications from separate volunteers to
"retire" it, meaning that if a light curve received 4 or more of the same
classification, the light curve would be assigned to the corresponding
category. On 24th September 2019, a variable retirement rate was implemented
using the Caesar666https://caesar.zooniverse.org advanced retirement engine
provided by the Zooniverse platform. As a result, a light curve is now retired
if either the classification count reaches 7, the subject receives 4 of the
same classification, or if the subject receives 3 junk classifications, since
junk light curves are typically easier to identify. Following the introduction
of the variable retirement rate with Caesar, junk classified subjects are
retired more quickly, so we would expect to see a higher relative frequency of
junk in the output, with the number of junk classifications eventually
plateauing as they are retired from the live project.
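The post-Caesar retirement rule can be summarised in a short sketch; `should_retire` and the vote-count mapping are illustrative names rather than the actual Caesar configuration.

```python
def should_retire(counts):
    """Retire a subject at 7 total votes, 4 agreeing votes, or 3 junk votes."""
    total = sum(counts.values())
    return (total >= 7
            or max(counts.values(), default=0) >= 4
            or counts.get("junk", 0) >= 3)

# Examples: 3 junk votes retire a subject early; a split vote does not.
print(should_retire({"junk": 3}))            # True
print(should_retire({"EW": 2, "EA/EB": 1}))  # False
```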
In the period immediately following the project launch, the subject images
presented to volunteers were selected randomly from the full pool of 1.6
million light curves. Even if all 4,500 volunteers that had so far engaged
with the project classified one subject per minute, the expected time for any
particular subject to accrue 7 classifications is almost 40 hours. In reality,
the initial retirement rate was $\sim$3,000 subjects per month on average.
Accordingly, a subject batching strategy was adopted which reduced the
available subject pool size to 288,000 light curves at any one time. Following
this change, the retirement rate increased to $\sim$17,000 subjects per month,
peaking at $\sim$43,711 retirements in October 2019.
During peak times of activity (when SVS is promoted as a "featured project" on
the Zooniverse front page), there is an average of $\sim$4,300 classifications
per day, peaking at 11,442; outside of these intervals, there is an average of
$\sim$1,100 classifications per day and a retirement rate of $\sim$5,000 per
month. At this lower classification rate, it is estimated that it will take
$\sim$4–5 years to complete each "live" set of 288,000 objects, or $\sim$25
years to complete the full set ($\sim$15 years at a higher classification
rate). By comparison, one of the authors classified $\sim$5,000 light curves
in a day without working on other research activities. Considering these
timescales, machine learning will be vital to complete the classification of
all 1.6 million phase-folded light curves within a reasonable time-frame.
We use the Gini coefficient to give a quantitative measure of the engagement
of volunteers. The Gini coefficient ranges from 0 to 1, where 0 indicates that
each volunteer contributes an equal number of classifications, and 1 indicates
that there is an extreme difference in number of classifications from each
volunteer. We find that the mean Gini coefficient for SVS is 0.92. By
comparison, Spiers et al. (2019) finds that the mean Gini coefficient for
astronomy projects on Zooniverse is 0.82, and Eisner et al. (2020) finds a
similarly high Gini coefficient for Planet Hunters TESS of 0.94. Whilst a
higher Gini coefficient does not necessarily indicate project "success", it
does indicate that SVS has a large number of prolific classifiers, which is
often desirable for citizen science projects. Loyal classifiers spend more
time engaging with the project, and hence are likely to have a strong
understanding of the project aims and classification methods and make fewer
mistakes.
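For reference, the Gini coefficient quoted above can be computed from per-volunteer classification counts using the standard sorted-index formula, as in the following sketch; the example counts are invented for illustration.

```python
import numpy as np

def gini(counts):
    """Gini coefficient of per-volunteer classification counts."""
    x = np.sort(np.asarray(counts, dtype=float))
    n = x.size
    # Standard formula: G = (2 * sum_i i * x_(i)) / (n * sum x) - (n + 1) / n
    return (2.0 * np.sum(np.arange(1, n + 1) * x)) / (n * x.sum()) - (n + 1.0) / n

print(gini([10, 10, 10]))       # 0.0 (equal contributions)
print(gini([1, 1, 1, 1, 996]))  # ~0.8 (one dominant volunteer)
```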
For the project age, SVS has fewer total volunteers than other general
astronomy projects on the Zooniverse, but a comparable number of total
volunteers to other non-astronomy projects and variable star astronomy
projects. A direct comparison is Variable Star
Zoo777https://www.zooniverse.org/projects/ilacerna/variable-star-zoo
(classifying $\sim$60,000 light curves), a project which aims to classify
variable stars in the VVV survey. Variable Star Zoo launched in July 2018 and
has engaged with 5,305 volunteers to date, similar to SVS. Two upcoming
variable star Zooniverse projects are Zwicky Stellar
Sleuths888https://www.zooniverse.org/projects/adamamiller/zwickys-stellar-
sleuths, and a new project by ASAS-SN, Citizen ASAS-
SN999https://www.zooniverse.org/projects/tharinduj/citizen-asas-sn. SVS will
complement these projects, and the increase in variable star Zooniverse
projects may increase volunteer interest in this branch of astronomy.
Figure 4: The number of classifications (black) and retirements (red) over the
first 2 years of the project. The shallow initial increase shows pre-launch
classifications from experts and beta testers. SVS was officially launched on
2nd September 2018, and has since had a fairly consistent classification
rate. Peaks of activity (such as being a "featured project") cause sudden
rises in classifications. The change to a variable retirement limit and
batching is clear in early 2019.
### 3.1 Data Cleaning
The classifications used in this analysis were downloaded on 2nd September
2020, giving almost 2 years of classification data. Although there have been
1,071,345 classifications corresponding to over 568,739 unique object-period
combinations, the majority of light curves have not yet received a sufficient
number of classifications for retirement.
Classifications from SVS are exported as a CSV file from the Zooniverse site.
Before data cleaning, the SVS classification export is stripped of non-
essential data, including time of classification and username of Zooniverse
volunteers. In addition to the primary science analysis, an in-depth
assessment of classification reliability, including detection of "spam"
classifications was performed. For this secondary analysis, the full SVS
classification export was used as is.
The likely classification for each subject is decided by a custom written
script. This script looks at all the classifications of the same Subject ID
(or same SuperWASP ID and Period ID) and finds the most popular (or only)
classification. If two (or more) classifications are equally popular, then we
allocate the classification as the first given classification from the
following list: junk, pulsator, rotator, EW, EA/EB, unknown (ordered from most
common to least common). The unfiltered SVS export has 1,071,345 rows
corresponding to all classifications made up to that time. After processing
and removing duplicated rows, 1,025,750 light curve classifications remain.
After finding the top classification for each subject, the output had 568,739
rows corresponding to unique object-period combinations. Figure 5 shows a
histogram of the number of classifications per object.
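A minimal sketch of this consensus rule, with the tie-break priority ordered from most to least common category as described above, is the following; the function name is illustrative, not the custom script itself.

```python
from collections import Counter

TIE_ORDER = ["junk", "pulsator", "rotator", "EW", "EA/EB", "unknown"]

def consensus(labels):
    """Most popular classification; ties broken by the fixed priority order."""
    counts = Counter(labels)
    best = max(counts.values())
    tied = [c for c, n in counts.items() if n == best]
    return min(tied, key=TIE_ORDER.index)

print(consensus(["EW", "EW", "junk"]))  # EW (clear majority)
print(consensus(["EW", "junk"]))        # junk (tie broken by priority)
```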
Figure 5: There are 5 objects with 9 classifications, 27 objects with 8
classifications, 1,934 objects with 7 classifications, 3,510 objects with 6
classifications, 11,085 with 5 classifications, 35,298 with 4
classifications, 84,180 with 3 classifications, 109,034 with 2
classifications, and 323,666 with 1 classification. At this stage, only 9 per
cent of objects (7 per cent of non-junk objects) have received enough
classifications for retirement.
Additional catalogues are cross-matched with the output to identify additional
parameters such as distance, colour, and previous classifications. This
includes a 10 arcsecond spatial cross-match with Gaia-DR2 and the Gaia-DR2
Bailer-Jones distance catalogue (Gaia Collaboration 2018; Bailer-Jones et al.
2018), a 10 arcsecond cross-match with NOMAD (Zacharias et al., 2004), and a 2
arcminute cross-match to VSX (Watson et al., 2020).
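A positional cross-match of this kind can be sketched with astropy's nearest-neighbour matching; the coordinate arrays below are invented stand-ins for the SVS and Gaia-DR2 source lists.

```python
import astropy.units as u
from astropy.coordinates import SkyCoord

# Illustrative coordinates only; the real match uses the full catalogues.
svs = SkyCoord(ra=[10.01, 150.2] * u.deg, dec=[-75.95, 41.3] * u.deg)
gaia = SkyCoord(ra=[10.011, 150.2001] * u.deg, dec=[-75.9501, 41.2999] * u.deg)

# Nearest neighbour in `gaia` for each SVS source, kept if within 10 arcsec.
idx, sep2d, _ = svs.match_to_catalog_sky(gaia)
matched = sep2d < 10 * u.arcsec
print(idx, matched)
```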
Light curves with fewer than 4 classifications are removed, and any remaining
duplicates (both spatial and WASP ID) are retained, since these are plausibly
multi-periodic or multi-classification objects. We complete an initial visual
assessment of unrealistic periods, but at this stage, objects with such
periods are not removed since these are plausibly extreme period objects which
may be of interest. Table 1 shows a breakdown of the cleaned data set.
Table 1: Breakdown of the first 1 million classifications corresponding to
568,739 unique object-period combinations, and the results of positional
cross-matches to the Gaia-DR2 and Bailer-Jones et al. (2018) catalogue, VSX,
and SuperWASP catalogues of binaries (Payne, 2013) and pulsators (Greer et
al., 2017).

 | Full output | EA/EB | EW | Pulsator | Rotator | Unknown | Junk
---|---|---|---|---|---|---|---
Classifications | 568,739 | 29,882 | 36,328 | 25,730 | 56,582 | 41,541 | 378,671
$N_{class}\geq 4$ | 13,390 | 2,425 | 3,187 | 1,777 | 4,402 | 1,599 | N/A
$N_{class}\geq 4$ and correct period | 11,322 | 1,629 | 2,672 | 1,020 | 4,402 | 1,599 | N/A
In Gaia-DR2 | 10,213 | 792 | 2,599 | 1,000 | 4,275 | 1,547 | N/A
In VSX | 5,283 | 665 | 1,528 | 579 | 1,939 | 572 | N/A
In Payne and/or Greer | 314 | 259 | 44 | 11 | N/A | N/A | N/A
### 3.2 Classification Reliability
A total of 7,478 volunteers made 1,071,345 classifications. SVS has
$\sim$4,500 registered volunteers, indicating that $\sim$3,000 volunteers
engaged with the project but did not register on the Zooniverse platform.
Registered volunteers made 93.9 per cent of classifications, and 6.1 per cent
of classifications (65,398) were made by unregistered or anonymous
volunteers, making $\sim$20 classifications each on average. Figure 6 shows
the distribution of classifications made per volunteer. Just over half (52.6
per cent) of volunteers made 10 or fewer classifications, 36.0 per cent made
11–100, 9.6 per cent made 101–1,000, and 1.6 per cent made over 1,000. 18
(0.2 per cent) "super-classifiers" made more than 10,000 classifications.
Figure 6: The number of classifications per volunteer. Any classifications
made by an anonymous volunteer over different days will be counted as multiple
volunteers’ inputs.
To estimate the classification reliability, SVS classifications are compared
with existing variable classifications, such as VSX classifications or
Gaia-DR2 variable types. Figure 7 shows the confusion matrix for volunteer
classifications compared to the closest stellar variable within the VSX
catalogue. While the SVS classification accuracy is high for binaries and
pulsators, with $\sim$89 per cent of EA/EBs, $\sim$71 per cent of EWs, and
$\sim$78 per cent of pulsators being correctly classified, rotators are a
more challenging variable type, with only $\sim$9 per cent of rotator
classifications being "correct". The category of unknown is not easily
assessed, but separating SVS-classified unknown objects into their
corresponding classes from the VSX catalogue gives $\sim$24 per cent
semi-regular variables, $\sim$23 per cent miscellaneous variables, and
$\sim$15 per cent long period variables. Overall, we find a classification
accuracy of 60 per cent for all variable types, excluding junk.
Figure 7: The confusion matrix for volunteer classifications compared to VSX
classifications. The category of unknown for VSX contains semi-regular stars
and stars classified as miscellaneous. We find an overall classification
accuracy of 60 per cent.
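The comparison above amounts to building a confusion matrix over the cross-matched labels. A minimal sketch, assuming scikit-learn and invented label lists in place of the real cross-matched sample, is the following.

```python
from sklearn.metrics import confusion_matrix, accuracy_score

categories = ["pulsator", "EA/EB", "EW", "rotator", "unknown"]
# Illustrative labels only: true VSX classes vs. volunteer consensus classes.
vsx_labels = ["EA/EB", "EW", "EW", "rotator", "pulsator"]
svs_labels = ["EA/EB", "EW", "EA/EB", "unknown", "pulsator"]

cm = confusion_matrix(vsx_labels, svs_labels, labels=categories)
print(cm)
print(accuracy_score(vsx_labels, svs_labels))   # overall fraction correct
```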
Too few SVS variables appear in the Gaia-DR2 variability results catalogue,
which contains 363,369 classifications of pulsators from Cepheids to Mira
variables, to undertake a similar full assessment. Only 1 EA/EB and 8 EW type
SVS variables are classified as pulsating stars in Gaia-DR2. Of the 273
pulsators (27 per cent of the 1,020 identified) in the Gaia-DR2 variability
results, 9 are classified as Type I or II Cepheids, 9 are Mira variables, 17
are $\delta$ Scuti stars, and 238 are RR Lyrae stars. 81 rotators and 47
unknown variables are classified as pulsators in the Gaia-DR2 variability
results.
This assessment gives a rudimentary estimate on the probability that different
classes of variables are classified correctly. When combined with the
SuperWASP periodicity catalogue likelihood statistics, we can use this to give
us a good idea of the correct period and variability type. It is most likely
that incorrect classifications arise from two causes. Some variable types,
especially EA/EB, can appear to be another variable type when folded at the
wrong period. It is therefore important that we have a robust method of
identifying the true period of an object which may have multiple detected
periods, see Section 4.3. The other dominant cause of incorrect
classifications will mostly likely be human error, and non-specialists may
miss some of the nuances of a light curve that indicate a certain variability
type. But a cohort of non-specialist volunteers is by no means a bad thing,
since the combination of people-power and multiple classifications means that
an accurate consensus is usually reached. Feedback from citizen scientist
volunteers also suggests that confusion can arise from the overlaid binned red
line, especially in instances where the binned line appears to show a
different variable type from the actual data, due to data drop-outs or spikes.
At this stage of the project, it is not possible to remove or edit this binned
line, but it is something to be aware of in the analysis of the resultant
classifications, and use of labelled data in machine learning. Other issues
may arise if volunteers skip the training available to them through the
Zooniverse interface, forget the training, or find the training is not written
in their first language.
While highly unlikely, it is also possible that bots, spamming, or deliberate
sabotage can influence the results. There are no in-built protections against
this on the Zooniverse platform, so the only way of identifying "spam"
classifications is by checking for a high number of classifications by the
same user within an unrealistically short time-frame. All classifications were
checked for a single user making multiple classifications per second and none
were found. It is not possible to check this for users who are not logged in,
so unexpected spikes in classifications ($>$100 classifications in $<$1
minute) were searched for. Only one spike in activity matching these
parameters was detected by a single user, and their classifications were
visually assessed by the authors and verified as non-spam.
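The rate check described above can be sketched with pandas; the column names mirror a typical Zooniverse classification export but should be treated as assumptions.

```python
import pandas as pd

def spam_spikes(df, max_per_minute=100):
    """Return (user, minute) pairs exceeding the classification rate limit."""
    minutes = df["created_at"].dt.floor("min")
    rate = df.groupby(["user_name", minutes]).size()
    return rate[rate > max_per_minute]

# Usage (assumed file and column names):
# df = pd.read_csv("svs-classifications.csv", parse_dates=["created_at"])
# print(spam_spikes(df))
```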
Volunteer weightings have not yet been implemented in the classification
pipeline, but will be an important part of the CNN, and will be used to
improve classification reliability. We trialled two simple methods of
calculating weightings: identifying overlap of classifications with "expert"
or author classifications, and overlap with VSX classifications. With 6
possible variable types, a suitable number of classifications is needed for
each variable type to calculate weightings. Unfortunately the overlap with
"expert" classifications is too low to provide a conclusive weighting.
Assessing against VSX, we take only those volunteers who have made $>$100
classifications of each variable type, of which only 15 have an overlap of
$>$100 with VSX, which also yields an inconclusive weighting system.
Alternative methods will be
explored in future work, for example through the use of individual volunteer
confusion matrices, see Section 5.1.1.
## 4 Results
### 4.1 Overview
Volunteer classifications indicate that this first analysis consists primarily
of junk classifications (66.6 per cent of all classifications), which are
discarded. The remainder of the classifications are made up of EA/EB (5.3 per
cent), EW (6.4 per cent), pulsators (4.5 per cent), rotators (9.9 per cent),
and unknown (7.3 per cent). As previously identified, the classification
accuracy of rotators is low so the true proportion will be lower than this
figure indicates. Figure 8 shows the distribution of V band magnitudes,
ranging from approximately $8\leq V\leq 15$, with a number of fainter sources. Genuine
faint sources can be detected by the longest SuperWASP exposures, but
contamination by nearby stars can sometimes mimic faint sources, resulting in
spurious detections. Figure 9 shows the distribution of distances of these
typically near-by stellar variables. Each variable type has a similar
distribution, with the exception of pulsators, showing a peak in distance at
$\sim$4800 pc, with a fainter average V magnitude of $\sim$13.8, likely due to
a greater number of more distant stellar variables of this type.
Figure 8: The distribution of NOMAD V magnitude of SVS stars with a variable
type classification and correct period classification, ranging between
$8\leq V\leq 18$.
Figure 9: The distance (pc) distribution of SVS stars with a variable type
classification and correct period classification. The full data set is shown
by the solid line, while the pulsators are shown by the dashed line.
Pulsators appear to have a different distribution to other variables.
The spatial distribution of the 568,739 unique object-period combinations is
shown as a sky density plot in Figure 10. The classifications are not evenly
distributed, since typically only a few degrees of sky are available for
classification at any one time, and SuperWASP could not resolve objects in the
dense regions of the Galactic Plane.
Figure 10: Map of SVS classifications. Red points indicate objects which have
been retired from the live queue, grey points indicate objects which have
received too few classifications for retirement. Classifications are not
evenly distributed since only a few degrees of the sky are available to
volunteers at any one time. As each data set is complete, more of the sky map
will be filled.
We have not yet accounted for the effects of interstellar extinction and
reddening on magnitudes, colours, and variable classification. Jayasinghe et
al. (2018) make use of the reddening-free Wesenheit magnitudes (e.g. Madore
1982; Lebzelter et al. 2018), with Gaia DR2 and 2MASS passbands to improve
variability classification for ASAS-SN, but do not account for the effects of
extinction in colours. We aim to complete an analysis of the effect of both in
future analyses of SVS classifications, making use of either the reddening-
free Wesenheit magnitudes, the calculation of stellar extinction using the
Binary and Stellar Evolution and Population Synthesis (BiSEPS) (Willems &
Kolb, 2002) implementation of extinction given by Drimmel et al. (2003), or
Gaia-DR2 reddening values and distances. Unlike ASAS-SN, magnitude and
passband data do not feed into an automated classification pipeline, and our
initial machine learning classification algorithm will not incorporate these
data (Section 5.1.1). We expect that reddening would not be the cause of
reclassification of the overarching variable types, however, for specific
subsets of variable types (e.g. RR Lyrae stars), extinction correction may be
necessary.
### 4.2 New Variable Objects
Table 2: Previously unidentified stellar variables by variable type. There
are significantly more variables classified as rotator or unknown. Stars
classified as rotators are unlikely to be true rotators and may be binaries
and pulsators folded at the wrong period, and unknown variables are likely to
be junk, semi-regular, or long period variables.

Type | EA/EB | EW | Pulsator | Rotator | Unknown
---|---|---|---|---|---
Number | 192 | 40 | 69 | 1,365 | 894

WASP ID | Type | Period (days)
---|---|---
1SWASPJ000005.14-755731.3 | EA/EB | 4.30
1SWASPJ000026.84+393855.6 | EA/EB | 3.59
1SWASPJ000028.05+041248.4 | EA/EB | 4.69
1SWASPJ000039.60-191306.0 | EA/EB | 6.76
1SWASPJ000047.05+353443.1 | EW | 1.22
1SWASPJ000054.70+544425.6 | EA/EB | 3.19
1SWASPJ000057.42-544520.1 | EA/EB | 0.75
1SWASPJ000059.84+094404.5 | EA/EB | 0.65
1SWASPJ000105.41-622920.6 | EA/EB | 1.48
1SWASPJ000132.23-051917.6 | Pulsator | 1.62
1SWASPJ000132.66-091513.7 | EA/EB | 4.19
1SWASPJ000145.10+501843.4 | EA/EB | 1.69
1SWASPJ000149.26+061830.8 | EA/EB | 0.32
1SWASPJ000149.45-363918.1 | Pulsator | 0.64
1SWASPJ000203.48-214746.0 | EA/EB | 0.86
1SWASPJ000315.40+495750.8 | EA/EB | 3.65
1SWASPJ000323.81+325049.7 | EA/EB | 8.25
1SWASPJ000343.16+465244.0 | Pulsator | 1.31
1SWASPJ000353.60+043503.0 | EW | 0.28
1SWASPJ000410.77-525122.4 | EW | 0.24
Table 3: Sample from 301 previously unidentified stellar variables and related
characteristics, not including rotators and unknown variables. The periods of
each object have been assessed by the authors to correct for mis-
classifications; whilst they have been corrected as much as possible, some
periods remain best guesses. All periods have an uncertainty of $\pm$0.1 per
cent. The full table, including rotators and unknown variables, can be found
at 10.5281/zenodo.4439383.
We expect SVS to classify many known stellar variables, and identify several
previously unknown stellar variables. Previously known variables are
identified by a 2 arcminute cross-match with the VSX catalogue (retrieved on
20 October 2020), which contains classifications of 2,105,377 variable stars from surveys including OGLE (Udalski, 2003), ASAS (Rucinski, 2006), ASAS-SN (Shappee et al. 2014; Kochanek et al. 2017; Jayasinghe et al. 2018), ROTSE (Akerlof et al., 2003), NSVS (Wozniak et al., 2003), and ZTF (Bellm et al., 2019).
A secondary cross-match is performed with catalogues from Payne (2013)
containing 12,884 EAs, 5,226 EBs, and 2,875 EWs, and Greer et al. (2017)
containing 4,963 RR Lyrae stars.
To select potentially new variable stars, objects with a known classification
and period are removed; objects that are flagged as variable, but which have
no classification or period, are not removed. All new stellar variables were
assessed by eye by the authors to verify the classification type and
correctness of the period. Duplicated objects were removed and objects were
reclassified as required. We caution that the subset of remaining rotator and
unknown objects may still contain binaries and pulsators at the incorrect
period, despite the best efforts of the authors to identify them. Through this
process, we are left with 2,560 unique candidate new variables, shown in Table
2.
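A minimal sketch of this selection step is shown below, assuming Astropy is used for the positional cross-match (the actual SVS tooling is not specified here); the coordinates are synthetic stand-ins for the SuperWASP and VSX tables.

```python
# Minimal sketch of the 2-arcminute positional cross-match against VSX used to
# flag previously known variables. Coordinates here are synthetic; the real
# VSX positions would be loaded from the retrieved catalogue.
import astropy.units as u
from astropy.coordinates import SkyCoord

wasp = SkyCoord(ra=[0.0214, 0.1118] * u.deg, dec=[-75.9587, 39.6488] * u.deg)
vsx = SkyCoord(ra=[0.0215, 120.0] * u.deg, dec=[-75.9589, 10.0] * u.deg)

idx, sep2d, _ = wasp.match_to_catalog_sky(vsx)
known = sep2d < 2 * u.arcmin  # True -> has a VSX counterpart within 2 arcmin

# Objects with no counterpart inside the match radius remain new-variable
# candidates and go forward to by-eye verification.
print(known)  # e.g. [ True False ]
```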
Using this approach, we have identified 301 previously unknown variable stars,
not including rotators and unknown variables, a selection of which are shown
in Table 3, with a period distribution shown in Figure 11. Of particular
interest are a short period cutoff eclipsing binary (with two SuperWASP IDs:
1SWASPJ004003.56+501501.9 and 1SWASPJ004008.54+501455.6), new $\delta$ Scuti
stars (Section 4.4), and binaries displaying the O’Connell effect. Based on
the low classification accuracy of rotators, we caution that new variables
classified as rotators or unknown may not have the correct classification.
Figure 11: The distribution of period of newly identified stellar variables
(EA/EB, EW, and pulsator) by variable type. EA/EBs are shown by the dashed
line; EWs by the dotted line; pulsators by the solid line.
Excluding rotators and unknown variables, these new variables are typically bright (V$\sim$13) stars. It is likely that these objects have not been detected because other surveys either do not yet have enough epochs to provide a variability classification (e.g. ASAS-SN), focus on the Galactic Plane or specific fields (e.g. Kepler, OGLE), or can only observe one hemisphere (ZTF). Assuming that 66 per cent of the 1.6 million light curves in SVS are junk, we estimate that on completion of SVS, $\sim$5,000 new EA/EB, EW, and pulsating stellar variables could be identified.
### 4.3 Multiple Periods and Multiple Classifications
Stars displaying two or more real periodic modulations in their light curve
are of great interest, and multiply periodic systems can act as stellar
laboratories. Targets of interest are pulsating stars in eclipsing binary
systems. Only $\sim$100 $\delta$ Scuti stars have been detected in eclipsing binaries (Kahraman Aliçavus et al., 2017), very few RR Lyrae stars are known in eclipsing binaries, and no Galactic Cepheids are known in eclipsing binaries with orbital periods of less than 1 year (Evans et al., 2011).
A search identified 1,202 multi-periodic systems, including 229 EA/EBs, 362
EWs, 100 pulsators, 441 rotators, and 70 unknowns. A visual inspection by the
authors revealed that none are convincing multi-periodic systems, but instead
are objects with aliases of the true period. Initially,
1SWASPJ004859.70+172328.1 appeared to have multiple correct EA/EB
classifications. Further investigation found this object has a true period of
3.11 d, discounting the alias periods. However, this object has previously
been identified as an eclipseless rotator (with a period of 3.11 d), but the
SuperWASP light curves show a clear primary eclipse and shallow secondary
eclipse, shown in Figure 12. While the primary eclipse depth remains constant,
the out of eclipse light curve changes significantly over the 8 years of
observation, possibly due to a tidally locked star spot on one of the stellar
components.
Figure 12: 1SWASPJ004859.70+172328.1, an object with multiple EA/EB
classifications, with a true period of 3.11 d. The midpoint of each frame is
as follows: field 1 (August 2004), field 2 (August 2006), field 8 (October
2011), field 9 (December 2012).
We are also interested in multi-classification systems. To identify such systems, we searched the SVS data set for subjects that share the same WASP ID but have multiple different, yet by consensus "correct", period classifications. This search found 1,563 systems with 2 or more
classifications, shown in Table 4. The classifications with the greatest
overlap appear to be EA/EB and EW, and rotators with other classifications.
Based on the low classification accuracy of rotators, we make the assumption
that any multi-classification object in which one classification is rotator or
unknown can be discounted as a true multiple classification.
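A minimal sketch of this search, assuming the aggregated classifications sit in a pandas DataFrame with hypothetical column names:

```python
# Minimal sketch of the multi-classification search: find WASP IDs that carry
# two or more distinct consensus classification types, then discard pairs
# involving rotator or unknown. The DataFrame layout is a hypothetical
# stand-in for the SVS aggregated-classification table.
import pandas as pd

df = pd.DataFrame({
    "wasp_id": ["J000220.66-292933.8", "J000220.66-292933.8",
                "J000047.05+353443.1"],
    "classification": ["EW", "pulsator", "EW"],
    "period_d": [3.15, 1.46, 1.22],
})

n_types = df.groupby("wasp_id")["classification"].nunique()
multi = df[df["wasp_id"].isin(n_types[n_types >= 2].index)]
multi = multi[~multi["classification"].isin(["rotator", "unknown"])]
print(multi)  # candidate multi-classification systems for by-eye checks
```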
Each of our candidate multi-classification systems was verified by eye (excluding rotators and unknown variables), ultimately yielding only one apparently real multi-classification system, 1SWASPJ000220.66-292933.8,
shown in Figure 13. This object has both an EW and pulsator classification and
SuperWASP periods of 3.15 d and 1.46 d respectively. On inspection, the EW
classified light curve appears to be that of a RS Canum Venaticorum (RS CVn)
binary. This object has a candidate RS CVn classification, with a period of
6.29 d or an eclipseless RS CVn classification with a period of 3.14 d in VSX.
This object appears to have experienced significant surface spot coverage
evolution over the 7 years of observations, and even hints at an eclipse in
field 2.
Figure 13: The light curve of 1SWASPJ000220.66-292933.8, classified by
volunteers both as an EW with a period of 3.15 d and a pulsator with a period
of 1.46 d. It has previously been classified as an eclipseless RS CVn and a
non-periodic rotator. The midpoint of each frame is as follows: field 1
(September 2006), field 2 (September 2007), field 4 (August 2012), field 5
(September 2013).
Another object of particular interest was one which appeared to be a $\delta$ Scuti star in an eclipsing binary (1SWASPJ004811.15+473719.1); however, this was found to be two separate systems, a binary (1SWASPJ004810.36+473747.7) and a $\delta$ Scuti star (1SWASPJ004811.15+473719.1), spatially separated by 30 arcseconds, shown in Figure 14.
| EA/EB | EW | Rotator | Pulsator | Unknown
---|---|---|---|---|---
EA/EB | - | 246 | 128 | 5 | 75
EW | 246 | - | 716 | 16 | 46
Rotator | 128 | 716 | - | 99 | 202
Pulsator | 5 | 16 | 99 | - | 30
Unknown | 75 | 46 | 202 | 30 | -
Table 4: The number of light curves with multiple classifications per classification type. Rotators have the greatest overlap with other variable classifications, likely due to the low classification accuracy of rotators, and the high number of alias period light curves per rotator object.

Figure 14: Upper: The $\delta$ Scuti star 1SWASPJ004811.15+473719.1 with a period of 1.9 hours. Lower: The EW-type eclipsing binary 1SWASPJ004810.36+473747.7 with a period of 18.7 hours (0.78 d). These objects were classified as the singular object 1SWASPJ004811.15+473719.1, with both an EW-type binary and a $\delta$ Scuti star in the same photometric aperture.
### 4.4 Extreme Variables
A valuable aspect of large catalogues of variable stars can be the
identification of extremes of each class, i.e. those with extremely long or
short periods, or extremely high or low amplitudes. SVS has the opportunity to
increase the sample size of short period contact binaries, as well as
identifying, for example, unusually long period contact binaries. For the full
SVS data set, there are two peaks, at $\sim$0.3 days where we might expect to
find short period binaries and aliases of binaries, and short period
pulsators, and $\sim$30 days where we might expect to find semi-regular stars,
currently classified as unknown.
We explore extremes of each variable type using the following criteria as standard definitions of the period ranges, and visually inspect light curves at the extremes of each range (see the selection sketch after this list):
* EA/EB: 0.3 d$\leq P\leq$10 d (e.g. Stepien 1995)
* EW: 0.22 d$\leq P\leq$1 d (e.g. Rucinski 1992)
* Pulsator: 0.3 d$\leq P\leq$8 d (e.g. Leavitt & Pickering 1912; Breger 1979; Matsunaga et al. 2006; Drake et al. 2014)
* Rotator: P$\geq$0.5 d (periods range from hours to months; e.g. Nielsen et al. 2013)
* Unknown: N/A (semi-regular P$\geq$10 d; e.g. Soszyński et al. 2009)
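The sketch below illustrates how such a selection might be implemented; the ranges simply mirror the criteria above, and the function and variable names are our own.

```python
# Minimal sketch of the extreme-period selection: flag light curves whose
# period falls outside the standard range for their class, for later visual
# inspection. Ranges mirror the list above; None means unbounded.
STANDARD_RANGES = {
    "EA/EB":    (0.3, 10.0),
    "EW":       (0.22, 1.0),
    "pulsator": (0.3, 8.0),
    "rotator":  (0.5, None),
}

def is_extreme(var_type, period_d):
    """True if the period lies outside the standard range for its class."""
    lo, hi = STANDARD_RANGES.get(var_type, (None, None))
    if lo is not None and period_d < lo:
        return True
    return hi is not None and period_d > hi

print(is_extreme("EW", 0.23))      # False: near, but inside, the cutoff
print(is_extreme("EA/EB", 41.62))  # True: the long-period candidate regime
```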
The class of pulsators has the widest range of possible periods, including $\delta$ Scuti ($\lesssim$0.3 d), RR Lyrae (0.44-0.82 d), Cepheids (with periods of weeks to months), Mira (P$\geq$100 d), and W Virginis (0.8 d$\leq P\leq$35 d) stars. We chose a lower limit of P$\leq$0.3 d to allow us to identify candidate $\delta$ Scuti and High Amplitude $\delta$ Scuti stars (HADS).
We have identified objects that appear to be long-period examples of near-
contact eclipsing binary stars, with orbital periods of up to a month or more.
To be in contact, or near contact, at such long periods requires the stellar
components to be giants. Such objects have been proposed as the progenitors of
red novae, but none have been conclusively identified pre-nova. The outbursts
are believed to be due to stellar mergers, but only one progenitor of such an
event has ever been studied, V1309 Sco, and that was only recognised
retrospectively, after the merger occurred (Tylenda et al., 2011). SVS volunteers have identified $\sim$10 candidates; an example of one of these systems is shown in Figure 15. These candidate near-contact red giant eclipsing binaries are the subject of an ongoing follow-up campaign and an upcoming paper.
Figure 15: The first classification of a candidate near contact red giant
eclipsing binary, 1SWASPJ000927.89+014542.1, with a period of 41.62 d,
significantly longer than typical contact eclipsing binary periods.
We have also identified a new eclipsing binary
(1SWASPJ004003.56+501501.9/1SWASPJ004008.54+501455.6) with a period of
$\sim$0.23 days near the short-period cutoff of $\sim$0.22 days, shown in
Figure 16. Such stars are of importance in the study of the evolution and
structure of close binary systems.
Figure 16: A newly identified EW type binary (both 1SWASPJ004003.56+501501.9
and 1SWASPJ004008.54+501455.6) with a period of 0.23 d, close to the short-
period cutoff.
## 5 Discussion
In the full table of volunteer-classified light curves, we provide the
SuperWASP ID, period (from the SuperWASP periodicity catalogue), and best-
guess variable type. We do not provide RA, Declination, or B, V, R magnitudes
for any classified object. In most cases, there is only a single bright star in the photometric aperture, which will usually be the source of the variability, so associations with other data are still possible most of the time. However, the large SuperWASP pixel size and the possibility of contamination mean that we cannot confirm the association of a light curve with a specific stellar object without further follow up. We caution that anyone using this catalogue may need to confirm the variability type with their own follow up.
Although it is disappointing not to find many new multi-periodic or multi-
classification systems at this stage, this analysis method can be applied to
future analyses, especially for the identification of variables with evolving
star spots. With a greater number of classifications, we expect to identify a
significant number of extremely short and long period pulsators, including
$\delta$ Scuti stars and Mira variables. Individual pulsator sub-types are not
identified by citizen scientist volunteers, so would require the authors to
visually inspect each pulsator light curve after making cuts using additional
period, colour, and luminosity data. We also expect to identify more extreme
binaries, including near-contact red giant eclipsing binaries, and binaries
near the short-period cutoff. It is evident that if some form of machine
learning is implemented, there may still be the need for some level of human
interaction with multi-periodic and multi-classification systems to identify
false positives.
We currently cannot estimate whether volunteer classifications have been
biased. There is no identifying data on the image of each light curve, in an
attempt to keep the classification task to a pattern matching exercise only.
However, following the project launch, it was realised that some metadata for
each light curve was visible to volunteers in the form of the SuperWASP ID.
For volunteers who notice this, the ID gives information on the RA and
Declination of each SuperWASP object, and hence the closest corresponding star
from other catalogues. Subsequently, some users have used this ID to cross-
match the light curve to existing classifications and surveys, using this
knowledge to make a decision on the classification type. We do not have a way
of identifying who has made use of this method and whether it can bias the
results. Volunteer feedback has indicated that use of cross-matching has
improved their knowledge of stellar variables and classification accuracy, and
they value being able to investigate the light curves in more depth.
To that end, as of November 2020, we have added links to external catalogues (CERiT, ASAS-SN, and Simbad) to the metadata, which is visible only after a classification has been completed. This is not intended to be a tool to
influence classifications, but it has been developed in order to allow
interested volunteers to engage with the project further.
### 5.1 The Future of SuperWASP Variable Stars
To successfully complete all classifications in SVS and make the results
public, we are now working on implementing machine learning techniques and
building a platform through which the results can be accessed.
#### 5.1.1 The Need for Machine Learning
We estimate that at the current classification rate it will take at least 15
years to classify all 1.6 million light curves in SVS. To this extent, we are
developing a novel method for classifying these phase-folded light curves to
speed up the classification process, which is the subject of an upcoming
paper. In this new method we will train a Convolutional Neural Network (CNN)
on the same images of phase-folded light curves as those presented to SVS
volunteers. We will make use of the $>$1 million volunteer-generated
classifications, or labels, to train the CNNs. We will run an initial CNN
using volunteer-generated labels, then use expert classified light curves to
calculate further volunteer confusion matrices, deriving fuzzy labels and
weighting classifications to improve reliability. We will then use a custom
Zooniverse project to allow for expert bulk classification of CNN predictions,
and retrain the CNN using expert classifications.
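A minimal sketch of the fuzzy-label step is given below; we assume a per-class volunteer confusion matrix estimated from an expert-classified gold set, and the matrix values are illustrative rather than measured SVS statistics (only the $\sim$89 per cent EA/EB and $\sim$9 per cent rotator accuracies quoted in Section 6 inform the diagonal).

```python
# Minimal sketch: derive "fuzzy" (soft) training labels for a light curve from
# its volunteer votes via a volunteer confusion matrix. Matrix values are
# illustrative, not measured SVS statistics.
import numpy as np

classes = ["EA/EB", "EW", "pulsator", "rotator", "unknown"]

# Row i: probability that volunteers label an expert-classified class-i curve
# as each class (rows sum to 1), estimated from a gold set.
confusion = np.array([
    [0.89, 0.05, 0.01, 0.03, 0.02],
    [0.06, 0.80, 0.02, 0.08, 0.04],
    [0.02, 0.03, 0.75, 0.12, 0.08],
    [0.05, 0.20, 0.15, 0.09, 0.51],
    [0.03, 0.05, 0.07, 0.15, 0.70],
])

def fuzzy_label(volunteer_votes):
    """Posterior over true classes given votes, with a flat prior:
    p(class | votes) is proportional to prod_v p(vote v | class)."""
    post = np.ones(len(classes))
    for v in volunteer_votes:
        post *= confusion[:, classes.index(v)]
    return post / post.sum()

print(fuzzy_label(["rotator", "rotator", "EW"]).round(3))
```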
There is also the scope to use volunteer comments from the "Talk" forum
section of SVS. It is possible for a volunteer to create a discussion page for
each light curve, where they might "tag" or comment on it, giving a further
classification type (i.e. while the SVS classification might be pulsator, a
volunteer might comment "RR Lyrae" which indicates that the light curve is a
pulsator sub-type). This forum potentially holds another significant source of
labelled data which may be explored in future work.
#### 5.1.2 A New User Interface
One of the key aims of SVS is to make the classified SuperWASP periodicity
catalogue light curves publicly available and to create the first catalogue of
variable stars in the SuperWASP archive. We have begun work on a new user interface (UI), similar to WASP-DR1 (https://wasp.cerit-sc.cz/form) and the ASAS-SN Catalogue of Variable Stars (https://asas-sn.osu.edu/variables). This UI will take the form of a web portal, which will allow a user to easily and quickly search the classified light curves using a number of different parameters, including RA and Declination with a search radius, magnitude or flux, period, and variable type. A search of this UI will not
only provide SuperWASP data and classifications, but also an automated cross-
match to other catalogues, for example: SIMBAD, ASAS-SN, and VSX. Having
selected an object, the user will be able to dynamically work with the data or
download a FITS or CSV file. The dynamic interface will allow the user to fold the light curve at a different period, re-scale the plot, convert between magnitude and flux, and more. This new UI will be updated with new SVS
classifications or reclassifications every 6 months following its launch.
## 6 Conclusions
We present the preliminary results of the first analysis of the SuperWASP
Variable Stars Zooniverse project, which consists of 1,025,750 classifications
corresponding to 568,739 unique object-period combinations. Over 4,500 registered volunteers engaged with the project between September 2018 and September 2020.
Each SuperWASP light curve has been classified by between 4 and 7 volunteers, each of whom classified it as a broad type of stellar variable. We find that the majority
(66.6 per cent) of classifications are junk and are therefore discarded, but the remaining 33.4 per cent of classifications, corresponding to EA/EB, EW, pulsator, rotator, and unknown, are valuable for population studies and studies of unique stellar variables. We identified that variables with a
rotational modulation are the most inconsistently classified by volunteers,
with only $\sim$9 per cent of rotators being correctly classified, compared to
$\sim$89 per cent of EA/EB type binaries. We caution that the classification
of rotator should not be relied upon until there is a more reliable method of
classification for this variable type.
As a result of SVS, 301 new variable stars have been identified. Extrapolating
to the wider data set, we would expect that $\sim$5,000 new variable stars
could be identified on completion of this project. We have identified extreme period variables, including long period contact binaries, eclipsing contact binaries near the short-period cutoff, and $\delta$ Scuti stars. This
project has the potential to expand the catalogue of $\delta$ Scuti stars in
eclipsing binaries, and discover the first Cepheids in eclipsing binaries (if
they exist), as well as to identify multi-periodic Cepheids and RR Lyrae
stars. The high number of false-positive multiply periodic and multi-
classification light curves identified by volunteers indicates that an expert
must complete the final stage of classification by eye for the most extreme
and unusual light curves.
This analysis is not conclusive, but it demonstrates that SVS is successful in
its aims of identifying unique and extreme variables, and identifying
populations of stellar variables for further study. This analysis and these methods will guide the project in future analyses of volunteer and machine learning classifications. We are now working on using citizen-scientist-classified data to train CNNs to speed up the classification process; however, humans remain skilled at picking out the rare and unique objects, and at generating labelled data. Both volunteer-classified light curves and CNN-classified light curves
will feed into a new public user interface which is currently under
development.
Data Availability: The full catalogue of 301 new variables discovered in SVS is available via Zenodo (doi:10.5281/zenodo.4439383).
## Acknowledgements
We would like to recognise and thank the thousands of Zooniverse volunteers
for their contribution to the SuperWASP Variable Stars project. We would also
like to thank the Zooniverse team for their help in developing and maintaining
the Zooniverse platform. The SuperWASP Variable Stars project was developed
with the help of the ASTERICS Horizon2020 project. This publication uses data
generated via the Zooniverse.org platform, development of which is funded by
generous support, including a Global Impact Award from Google, and by a grant
from the Alfred P. Sloan Foundation. This work was supported by the Science
and Technology Facilities Council [grant number ST/P006760/1] through the
DISCnet Centre for Doctoral Training. The SuperWASP project is currently
funded and operated by Warwick University and Keele University, and was
originally set up by Queen’s University Belfast, the Universities of Keele,
St. Andrews and Leicester, the Open University, the ING, the IAC, SAAO and
STFC. This research has made use of the International Variable Star Index
(VSX) database, operated at AAVSO, Cambridge, Massachusetts, USA. This
research has made use of the TOPCAT and STILTS software packages (written by
Mark Taylor, University of Bristol). This research made use of the cross-match
service provided by CDS, Strasbourg. This research has made use of the VizieR
catalogue access tool, CDS, Strasbourg, France.
## References
* Akerlof et al. (2003) Akerlof C. W., et al., 2003, PASP, 115, 132
* Bailer-Jones et al. (2018) Bailer-Jones C. A. L., Rybizki J., Fouesneau M., Mantelet G., Andrae R., 2018, AJ, 156, 58
* Bellm et al. (2019) Bellm E. C., et al., 2019, PASP, 131, 018002
* Breger (1979) Breger M., 1979, PASP, 91, 5
* Davies (1990) Davies S. R., 1990, MNRAS, 244, 93
* Drake et al. (2014) Drake A. J., et al., 2014, ApJS, 213, 9
* Drimmel et al. (2003) Drimmel R., Cabrera-Lavers A., López-Corredoira M., 2003, A&A, 409, 205
* Eisner et al. (2020) Eisner N. L., et al., 2020, arXiv e-prints, p. arXiv:2011.13944
* Evans et al. (2011) Evans N. R., Berdnikov L., Gorynya N., Rastorguev A., Eaton J., 2011, AJ, 142, 87
* Gaia Collaboration (2018) Gaia Collaboration 2018, A&A, 616, A1
* Greer et al. (2017) Greer P. A., Payne S. G., Norton A. J., Maxted P. F. L., Smalley B., West R. G., Wheatley P. J., Kolb U. C., 2017, A&A, 607, A11
* Jayasinghe et al. (2018) Jayasinghe T., et al., 2018, MNRAS, 477, 3145
* Kahraman Aliçavus et al. (2017) Kahraman Aliçavus F., Soydugan E., Smalley B., Kubát J., 2017, MNRAS, 470, 915
* Kochanek et al. (2017) Kochanek C. S., et al., 2017, PASP, 129, 104502
* Leavitt & Pickering (1912) Leavitt H. S., Pickering E. C., 1912, Harvard College Observatory Circular, 173, 1
* Lebzelter et al. (2018) Lebzelter T., Mowlavi N., Marigo P., Pastorelli G., Trabucchi M., Wood P. R., Lecoeur-Taïbi I., 2018, A&A, 616, L13
* Lintott et al. (2008) Lintott C. J., et al., 2008, MNRAS, 389, 1179
* Lohr et al. (2013) Lohr M. E., Norton A. J., Kolb U. C., Maxted P. F. L., Todd I., West R. G., 2013, A&A, 549, A86
* Lohr et al. (2014) Lohr M. E., et al., 2014, A&A, 566, A128
* Lohr et al. (2015a) Lohr M. E., et al., 2015a, A&A, 578, A103
* Lohr et al. (2015b) Lohr M. E., Norton A. J., Payne S. G., West R. G., Wheatley P. J., 2015b, A&A, 578, A136
* Madore (1982) Madore B. F., 1982, ApJ, 253, 575
* Matsunaga et al. (2006) Matsunaga N., et al., 2006, MNRAS, 370, 1979
* Nielsen et al. (2013) Nielsen M. B., Gizon L., Schunker H., Karoff C., 2013, A&A, 557, L10
* Norton (2018) Norton A. J., 2018, Research Notes of the American Astronomical Society, 2, 216
* Norton et al. (2016) Norton A. J., Lohr M. E., Smalley B., Wheatley P. J., West R. G., 2016, A&A, 587, A54
* Payne (2013) Payne S. G., 2013, The identification and classification of variability in stellar sources observed with SuperWASP, The Open University, http://oro.open.ac.uk/54829/
* Pollacco et al. (2006) Pollacco D., et al., 2006, Ap&SS, 304, 253
* Roberts et al. (1987) Roberts D. H., Lehar J., Dreher J. W., 1987, AJ, 93, 968
* Rucinski (1992) Rucinski S. M., 1992, AJ, 103, 960
* Rucinski (2006) Rucinski S. M., 2006, MNRAS, 368, 1319
* Shappee et al. (2014) Shappee B. J., et al., 2014, ApJ, 788, 48
* Soszyński et al. (2009) Soszyński I., et al., 2009, Acta Astron., 59, 239
* Spiers et al. (2019) Spiers H., Swanson A., Fortson L., Simmons B., Trouille L., Blickhan S., Lintott C., 2019, Journal of Science Communication, 18
* Stepien (1995) Stepien K., 1995, MNRAS, 274, 1019
* Thiemann et al. (2020) Thiemann H. B., Norton A. J., Kolb U. C., 2020, Publ. Astron. Soc. Australia, 37, e042
* Tylenda et al. (2011) Tylenda R., et al., 2011, A&A, 528, A114
* Udalski (2003) Udalski A., 2003, Acta Astron., 53, 291
* Watson et al. (2020) Watson C., Henden A. A., Price A., 2020, VizieR Online Data Catalog, p. B/vsx
* Willems & Kolb (2002) Willems B., Kolb U., 2002, MNRAS, 337, 1004
* Wozniak et al. (2003) Wozniak P. R., Vestrand W. T., McGowan K. E., Kinemuchi K., ROTSE Collaboration 2003, in American Astronomical Society Meeting Abstracts. p. 57.03
* Zacharias et al. (2004) Zacharias N., Monet D. G., Levine S. E., Urban S. E., Gaume R., Wycoff G. L., 2004, in American Astronomical Society Meeting Abstracts. p. 48.15
# APEX-Net: Automatic Plot Extractor Network
###### Abstract
Automatic extraction of raw data from 2D line plot images is a problem of
great importance having many real-world applications. Several algorithms have
been proposed for solving this problem. However, these algorithms involve a
significant amount of human intervention. To minimize this intervention, we
propose APEX-Net, a deep learning based framework with novel loss functions
for solving the plot extraction problem. We introduce APEX-1M, a new large
scale dataset which contains both the plot images and the raw data. We
demonstrate the performance of APEX-Net on the APEX-1M test set and show that
it obtains impressive accuracy. We also show visual results of our network on
unseen plot images and demonstrate that it extracts the shape of the plots to
a great extent. Finally, we develop a GUI based software for plot extraction
that can benefit the community at large. For dataset and more information
visit https://sites.google.com/view/apexnetpaper/.
Index Terms— Deep Learning, Convolutional Neural Networks, Plot Digitization,
Plot Extraction
## 1 Introduction
Fig. 1: Network architecture of APEX-Net. The input plot image is passed
through several convolutional layers to obtain the predicted plots
$\mathcal{Y}$ along with their confidence score $\mathcal{S}$.
Imagine a scenario where we are reading an analytical business report or a scientific research paper. Let us say we stumble upon an image of a 2D line plot that depicts the dependence of an entity $y$ on another entity $x$. Suppose that we want to use the underlying raw data of that plot, where raw data refers to the sequence of (x,y) point coordinates used to draw the plot. In a typical situation, the associated raw data is generally not reported and is inaccessible, either because it is confidential or because it is irrelevant in the context of the report. However, since the data is important to us, we manually start extracting the pixel location of each curve point, which ends up being a laborious process. Such a scenario highlights the significance of being able to automatically extract the raw data solely from the plot image. This kind of scenario occurs very frequently, and hence a significant amount of research effort has been devoted towards automating this process.
In the recent past, several algorithms have been developed for automated
extraction of plots, such as WebPlotDigitizer [1], Grabit [2], DigitizeIt [3],
GetData Graph Digitizer [4], Plot Digitizer [5], Engauge Digitizer [6],
EasyNData [7], Quintessa Graph Grabber [8]. A detailed comparison of various
plot extractors is available in [9]. Extracting raw data in the presence of a
single curve in the plot has been addressed by several image processing
algorithms. However, when there are multiple curves present in a plot image,
the task becomes more challenging. Although, most of the existing plot
extractors can automatically extract the raw data, they still require the
following additional information from the user: (a) pixel location of four
points, two on the x-axis $(P_{1}$ and $P_{2})$ and two on the y-axis $(Q_{1}$
and $Q_{2})$, (b) raw x values of $P_{1},P_{2}$ and raw y values of
$Q_{1},Q_{2}$, (c) the RGB color value of the desired curve, and (d) a
rectangular bounding box containing the curve or a thick brush stroke that
approximately traces the curve. Even though these algorithms have reduced the
human intervention significantly, they are not automatic in the true sense. An
ideal plot extractor should be able to extract the raw data for all the curves
present in the image without any human intervention.
In the past decade, deep learning has enjoyed a great success in computer
vision and has helped solve various complex problems [10, 11, 12, 13]. Based
on this success, we believe that deep learning techniques can help in
designing an automated plot extraction algorithm free of any human
intervention. However, to the best of our knowledge, this problem has not been
addressed using deep learning. The primary reason is due to the unavailability
of a large scale dataset of annotated plots. To alleviate this issue, we
introduce APEX-1M, a plot dataset with rich variability, as described in
Section 2.2. We further propose APEX-Net, a deep learning framework trained on
APEX-1M dataset. The proposed framework helps in eliminating the need for
steps (a), (c), and (d) mentioned previously. Eliminating step (b) is more
challenging as it involves text detection along with logical reasoning and
hence, this aspect is not addressed in our work.
Upon deeper inspection, we find plot extraction to be analogous to the task of
object detection [14, 15]. In object detection, the first objective is to
generate the bounding boxes around the objects and the second is to recognize
the class label of those objects, which is a classification task. Analogously, in automatic plot extraction, the first objective is to detect the different curves present in the image and the second objective is to extract the raw data for each of those curves; the latter, however, is a regression task. Further, there is no concept of bounding boxes in plot extraction.
Drawing inspiration from the object detection algorithms and acknowledging the
differences, we have developed a deep learning framework called APEX-Net, that
solves the problem of automatic plot extraction. To the best of our knowledge,
this is the first work that addresses this problem in a deep learning
framework. Our major contributions are as follows: (a) we introduce APEX-1M, a
large scale plot dataset capturing large variations in the nature of plot
images, (b) we propose APEX-Net, a deep learning framework that extracts the
raw data from plot images which significantly reduces human intervention, and
(c) we design novel loss functions specifically tailored for the plot
extraction task.
Fig. 2: Result of APEX-Net on an example from APEX-1M test dataset (shown in
(a)), and result on unseen examples ( shown in (b) and (c)). In (a), (b), and
(c) the large image on the left is the input image, and the smaller images on
the right are the visualization of the predicted plot data. (d) depicts the
home screen of our GUI tool and (e) depicts the GUI in action.
## 2 Proposed Approach
### 2.1 Problem Statement
Assume that we are given $\mathcal{I}\in[0,1]^{m\times n\times 3}$, which is
an RGB image of size $m\times n$, containing multiple 2D line plots and let
$K$ denote the total number of plots contained in $\mathcal{I}$. Let the
combined plot data for all the $K$ plots be represented as
$\mathcal{D}=\Big{\\{}\big{\\{}(x_{i}^{j},y_{i}^{j})\big{\\}}_{i=1}^{N_{j}}\Big{\\}}_{j=1}^{K}$,
where $x_{i}^{j}$ and $y_{i}^{j}$ denote the value of the independent variable
and the dependent variable, respectively, for the $i^{th}$ sample in the
$j^{th}$ plot. Here, $N_{j}$ denotes the number of sampled points used in the
construction of the $j^{th}$ plot. Given $\mathcal{I}$, our objective is to
extract the plot data $\mathcal{D}$.
We assume that the image $\mathcal{I}$ was generated by a source user
$\mathcal{U}$. Let us imagine that $\mathcal{U}$ wants to visualize the
dependence of an entity $y$ on another entity $x$, where $x$ and $y$ are real
valued. Let the underlying relationship between $x$ and $y$ be denoted as
$y=f(x)$, where $f$ is a real valued function unknown to $\mathcal{U}$. In
order to acquire an approximation of $f$, $\mathcal{U}$ measures the value of
$y$ on finite discrete instances of $x$, obtaining the finite collection $\left\{(x_{i},y_{i})\right\}_{i=1}^{N}$. Here, $N$ is the number of discrete instances of $x$. Using an interpolation scheme, $\mathcal{U}$
obtains an approximation $\hat{f}$ and then renders the plot image depicting
$\hat{f}$. In a general case, $\mathcal{U}$ wants to simultaneously visualize
$K$ different functions. Using the above mentioned sampling and interpolation
process, $\mathcal{U}$ generates the data $\mathcal{D}$ and then renders all
the $K$ plots in a single image $\mathcal{I}$. Given $\mathcal{I}$, obtaining
$\mathcal{D}$ exactly is not possible in general, because $\mathcal{U}$ may or
may not have used markers while rendering the plot. However, our true goal is
not to extract $\mathcal{D}$, but to extract the functions obtained by
interpolating the sampled points contained in $\mathcal{D}$. Next, we
summarize the strategy employed by us for solving this problem.
Let $\mathcal{B}=(x_{\min},x_{\max},y_{\min},y_{\max})$ denote the rectangular bounding box on the 2D plane containing all the plots, where $x_{\min}=\min(\{x_{i}^{j}\})$, $x_{\max}=\max(\{x_{i}^{j}\})$, $y_{\min}=\min(\{y_{i}^{j}\})$, and $y_{\max}=\max(\{y_{i}^{j}\})$. Upon visual inspection, a human can easily extract $\mathcal{B}$ from the image $\mathcal{I}$. However, due to the high variability in the nature of the plot image $\mathcal{I}$, it is difficult for a computer to address this task. Thus, we invoke a human intervention for obtaining $\mathcal{B}$. For obtaining the plot data, we assume that the plots lie inside a unit square box $\mathcal{B}_{S}=(0,1,0,1)$. This gives us the normalized plot data. After that, we just have to unnormalize the data from $\mathcal{B}_{S}$ to fit inside $\mathcal{B}$ through the standard transformation $\hat{x}=x\times x_{\max}+(1-x)\times x_{\min}$ and $\hat{y}=y\times y_{\max}+(1-y)\times y_{\min}$. Here $(x,y)$ denotes the normalized output obtained from our network and $(\hat{x},\hat{y})$ denotes the coordinates obtained after performing unnormalization. Our method assumes that the raw data was plotted on a linear scale. If the scale of the x-axis or the y-axis is non-linear, then an appropriate transformation needs to be applied to the output data. For instance, if the scale of the x-axis is logarithmic, then we need to apply the transformation $\hat{x}=x_{\min}\times\left(\frac{x_{\max}}{x_{\min}}\right)^{x}$. Now, in
order to extract the plot data, we assume that each plot contains $N$ sample
points. The x-coordinates of these $N$ points are pre-decided and only the
y-coordinates are predicted by the proposed network. We choose $N$ equally
spaced points between $0$ and $1$. Let the x-coordinates be denoted as
$X=(x_{1},x_{2},\cdots,x_{N})$, in which, $x_{i}=\frac{i-1}{N-1}$, where $i$
is an integer varying from $1$ to $N$. Let the corresponding y-coordinates
predicted by the network be denoted as $Y=(y_{1},y_{2},\cdots,y_{N})$. In our
approach we choose $N=1024$.
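A minimal sketch of this unnormalization step, implementing the stated linear and logarithmic transformations (variable names are our own):

```python
# Minimal sketch of mapping normalized network outputs on the unit square back
# into the user-supplied bounding box B = (x_min, x_max, y_min, y_max).
import numpy as np

N = 1024
x = np.arange(N) / (N - 1)   # pre-decided grid x_i = (i-1)/(N-1)
y = np.random.rand(N)        # stand-in for one predicted row of Y

def unnormalize(x, y, box, log_x=False):
    x_min, x_max, y_min, y_max = box
    if log_x:                              # logarithmic x-axis variant
        x_hat = x_min * (x_max / x_min) ** x
    else:                                  # linear scale: convex combination
        x_hat = x * x_max + (1 - x) * x_min
    y_hat = y * y_max + (1 - y) * y_min
    return x_hat, y_hat

x_hat, y_hat = unnormalize(x, y, box=(1.0, 100.0, -5.0, 5.0), log_x=True)
```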
### 2.2 Dataset Generation
There is a high variability inherent to real world plot images mainly due to
varying shape of the curves in a plot. Moreover, the appearance of the plot
image varies a lot depending on the size, style, and color of the line and
marker. Some other aspects that contribute to this variability are the
background style, aspect ratio, padding, margin, and location of the legend.
To train a deep learning architecture, we require a large scale curated
dataset of plot images that contain the ground-truth information about the
curves used in the plot. However, such a dataset is not publicly available.
Hence, we create a synthetic dataset for this purpose, which we refer to as
the APEX-1M dataset. For the network to be able to generalize well, our
synthetic dataset should be close enough to the real world plot distribution.
To attain this, we randomize the following parameters associated with the plot
image: (a) Number of plots in the image ($K$) - we choose $K$ between 1 and
10; (b) Shape of each plot (plot data) - we randomly choose a function
$f:[0,1]\to[0,1]$ using the following mechanism. First, we choose a positive integer $c$ between $4$ and $32$. Then, we generate $X_{c}=(\frac{i-1}{c-1})_{i=1}^{c}$, a list of equally spaced points on the
x-axis between $0$ and $1$. For each x value in $X_{c}$, we randomly assign a
y value between $0$ and $1$ to obtain $Y_{c}$. Combining $X_{c}$ and $Y_{c}$,
we get a list of $c$ points in the 2D plane, to which, we apply cubic spline
interpolation to obtain the function $f$. We further sample $N$ points from
$f$, corresponding to $x$ values in $X$, to obtain $Y^{gt}$, where $N=1024$
and $X$ is the same as mentioned in section 2.1. This process gives us a
single plot data. Applying this $K$ times gives us the ground truth data
$\mathcal{Y}^{gt}=(Y^{gt}_{1},Y^{gt}_{2},\cdots,Y^{gt}_{K})$; (c) Color - we choose colors randomly for the plot lines and marker faces used in each plot; (d) Style - we randomly choose the line style and the marker shape from a predefined list; (e) Size - the width of the line and the size of the marker face are varied; (f) Title - random sequences of characters are generated for the main title and for the labels of the x and y axes. Moreover, the location of the title and the font size and font style of the text are also varied; (g) Axis ticks - the size of the ticks used for representing values on the axes and the orientation of the values are varied; (h) Legend - the location and size of the legend, along with the text label of each plot, are randomized; (i) Background - the background style is varied using predefined templates and grid-lines are displayed with probability one half; (j) Spacing and image properties - we give variable padding and margin to the plot image. We also vary the resolution and aspect ratio of the image so that the network can handle low- as well as high-quality images. We use the Matplotlib library [16] for generating the APEX-1M dataset with one million examples and split it into two parts: train $(80\%)$ and test $(20\%)$.
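A minimal sketch of generating one such training example, assuming SciPy's cubic spline and Matplotlib; styling randomization is reduced to a few parameters here, and clipping spline overshoot back into $[0,1]$ is our own assumption.

```python
# Minimal sketch of one APEX-1M-style example: random control points, cubic
# spline interpolation to get f, sampling at the N grid points, and a render.
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(0)
N = 1024
X = np.arange(N) / (N - 1)

def random_plot_data():
    c = rng.integers(4, 33)             # number of control points, 4..32
    Xc = np.arange(c) / (c - 1)         # equally spaced on [0, 1]
    Yc = rng.random(c)                  # random y values in [0, 1]
    f = CubicSpline(Xc, Yc)
    return np.clip(f(X), 0.0, 1.0)      # one ground-truth row Y^gt (clipped)

K = rng.integers(1, 11)                 # 1 to 10 plots per image
Y_gt = np.stack([random_plot_data() for _ in range(K)])

fig, ax = plt.subplots()
for Y in Y_gt:                          # randomized line width as an example
    ax.plot(X, Y, linewidth=rng.uniform(0.5, 3.0))
fig.savefig("example.png", dpi=int(rng.integers(50, 150)))
```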
### 2.3 Network Architecture
Given $\mathcal{I}$, we have two goals to accomplish: predicting the number of
plots contained in the image and estimating $Y$ for each of these plots. We
accomplish both of these goals simultaneously using a unified framework -
APEX-Net. We first make an assumption about the maximum number of plots that
can be contained in the image and denote it by $\hat{K}$. We choose
$\hat{K}=10$, since most real-world multi-plot images contain fewer than $10$ plots. However, this is just a design parameter chosen for our network and is not a limitation of our framework. In order to accommodate images with a higher number of plots, $\hat{K}$ can be increased. In
our unified framework, given an image $\mathcal{I}$, our network produces two
outputs $\mathcal{Y}$ and $\mathcal{S}$, where,
$\mathcal{Y}=(Y_{1},Y_{2},\cdots,Y_{\hat{K}})$ and
$\mathcal{S}=(s_{1},s_{2},\cdots,s_{\hat{K}})$. Here, $Y_{i}$ and $s_{i}$
denote the estimated y-coordinates and the confidence score of the $i^{th}$
predicted plot, respectively. The confidence score $s_{i}$ is a real value
between $0$ and $1$, which denotes the probability of the $i^{th}$ predicted
plot actually being present in the image. During inference, we only select
those plots whose score is greater than $0.5$ and discard the rest.
Given an input image $\mathcal{I}$ of size $m\times n$, we first resize the
image to a fixed size of $512\times 512$. We then pass the image through a
sequence of blocks as depicted in Figure 1. Each block consists of a
convolution layer, a batch normalization layer, and an activation function.
The last block uses the sigmoid activation function to scale the values
between $0$ and $1$. Apart from that, all the other blocks use ReLU (Rectified
Linear Unit) as the activation function. Most of the blocks contain a max-
pooling layer, which helps in progressively reducing the size of the feature
maps. The network outputs $\mathcal{Y}$ and $\mathcal{S}$, which are tensors
of size $10\times 1024$ and $10\times 1$, respectively.
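A minimal PyTorch sketch of this block structure follows. The paper fixes the input size ($512\times 512$) and the output shapes ($10\times 1024$ and $10\times 1$), but not the channel counts or the number of blocks; those, and the use of flattening followed by linear heads, are assumptions of this sketch.

```python
# Minimal PyTorch sketch of the APEX-Net block structure (Figure 1). Channel
# counts, block count, and the linear heads are assumptions.
import torch
import torch.nn as nn

def block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1),
        nn.BatchNorm2d(c_out),
        nn.ReLU(),
        nn.MaxPool2d(2),                    # halves the feature-map size
    )

class ApexNetSketch(nn.Module):
    def __init__(self, k_hat=10, n=1024):
        super().__init__()
        chans = [3, 32, 64, 128, 256, 256]  # five blocks: 512 -> 16
        self.backbone = nn.Sequential(
            *[block(a, b) for a, b in zip(chans, chans[1:])])
        feat = 256 * 16 * 16
        self.head_y = nn.Linear(feat, k_hat * n)  # plot y-coordinates
        self.head_s = nn.Linear(feat, k_hat)      # confidence scores
        self.k_hat, self.n = k_hat, n

    def forward(self, img):                 # img: (B, 3, 512, 512)
        z = self.backbone(img).flatten(1)
        Y = torch.sigmoid(self.head_y(z)).view(-1, self.k_hat, self.n)
        S = torch.sigmoid(self.head_s(z))   # (B, k_hat), values in (0, 1)
        return Y, S
```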
### 2.4 Loss Function
Let $(\mathcal{I},\mathcal{Y}^{gt})$ be an example from the training dataset,
where $\mathcal{Y}^{gt}=(Y^{gt}_{1},Y^{gt}_{2},\cdots,Y^{gt}_{K})$ is a tensor
of size $K\times N$ denoting the y-coordinates of the ground-truth plot data.
$K$ denotes the number of plots contained in $\mathcal{I}$ and $N=1024$. Let
$\mathcal{Y}=(Y_{1},Y_{2},\cdots,Y_{\hat{K}})$ and
$\mathcal{S}=(s_{1},s_{2},\cdots,s_{\hat{K}})$ be the output obtained after
passing $\mathcal{I}$ through the network. The network is trained using two
loss functions $\mathcal{L}_{plot}$ and $\mathcal{L}_{score}$ jointly, defined
in Equation 1 and 2, respectively, where, $\left\lVert\cdot\right\rVert_{2}$
denotes the $\ell_{2}$ norm and $\chi_{A}$ is the characteristic function of
$A$, where $A$ is given by Equation 3
$\mathcal{L}_{plot}=\sum_{i=1}^{K}\min_{1\leq j\leq\hat{K}}\left\lVert
Y^{gt}_{i}-Y_{j}\right\rVert_{2}$ (1)
$\mathcal{L}_{score}=-\sum_{j=1}^{\hat{K}}\Big{(}\chi_{A}(j)\log(s_{j})+\big{(}1-\chi_{A}(j)\big{)}\log(1-s_{j})\Big{)}$
(2) $A=\\{\mathop{\mathrm{\textit{arg}\,min}}_{1\leq j\leq\hat{K}}\left\lVert
Y^{gt}_{i}-Y_{j}\right\rVert_{2}\nonscript\>|\allowbreak\nonscript\>\mathopen{}1\leq
i\leq K\\}$ (3) $\mathcal{L}_{total}=\mathcal{L}_{plot}+\mathcal{L}_{score}$
(4)
The intuition behind using these loss functions is as follows: to each of the $K$ ground-truth plots, we assign the closest amongst the $\hat{K}$ predicted plots. To facilitate the extraction of accurate raw plot data, we minimize the distance between the obtained closest pairs. Further, if a predicted plot gets assigned to a ground-truth plot, we would prefer its score to be close to $1$, and close to $0$ otherwise.
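A minimal PyTorch sketch of Equations 1-4 for a single training example; tensor shapes and the numerical-stability epsilon are our own choices.

```python
# Minimal sketch of the joint loss: match each ground-truth plot to its
# nearest prediction (Equation 1), build the matched index set A (Equation 3),
# and apply the score loss (Equation 2); the total is their sum (Equation 4).
import torch

def apex_loss(Y_pred, S_pred, Y_gt, eps=1e-7):
    """Y_pred: (K_hat, N), S_pred: (K_hat,), Y_gt: (K, N)."""
    dists = torch.cdist(Y_gt, Y_pred)       # pairwise l2 distances, (K, K_hat)
    min_dists, argmins = dists.min(dim=1)

    L_plot = min_dists.sum()                # Equation 1

    chi_A = torch.zeros_like(S_pred)        # characteristic function of A
    chi_A[argmins] = 1.0                    # Equation 3
    L_score = -(chi_A * torch.log(S_pred + eps)
                + (1 - chi_A) * torch.log(1 - S_pred + eps)).sum()  # Eq. 2

    return L_plot + L_score                 # Equation 4
```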
### 2.5 Results
Absence of deep learning methods for plot extraction prevents us from
performing a detailed metric comparison. However, we mention the metric scores
that our framework attains, which would serve as a baseline for other future
works in this direction. Table 1 demonstrates the performance of our network
on the test set of APEX-1M dataset. $\mathcal{E}_{plot}$ represents the plot
loss $\mathcal{L}_{plot}$ (described in Equation 1) averaged over the entire
test set. $\mathcal{E}_{count}$ denotes the relative count error averaged over the entire test set, where the relative count error for a single example is given by $\frac{|K-\hat{K}|}{K}$ (here $\hat{K}$ denotes the number of predicted plots retained after thresholding the confidence scores at $0.5$, rather than the fixed design parameter of Section 2.3). Visual results of our network on an example from the test set are shown in Figure 2(a). Results on unseen data, which are not a part of the APEX-1M dataset, are shown in Figures 2(b) and 2(c). We develop a GUI tool to provide the community with an easy-to-use plot extractor. Snippets of the tool are shown in Figures 2(d) and 2(e).
Dataset | Dataset size | $\mathcal{E}_{plot}$ | $\mathcal{E}_{count}$
---|---|---|---
APEX-1M Test | $2\times 10^{5}$ | $6.82$ | $0.15$
Table 1: Performance of APEX-Net on APEX-1M Test
## 3 Conclusion and Future Work
We propose APEX-1M dataset - a large scale dataset of annotated plots that
enables us to train APEX-Net - a deep learning framework for automatic plot
extraction. We show that APEX-Net achieves remarkable performance on the
APEX-1M dataset. Visual demonstration shows that our network performs well
even on unseen data. To the best of our knowledge, this work is the first
attempt to solve plot extraction problem in a deep learning setup. As our main
objective, we have been able to reduce the human intervention to a great
extent. We believe that future works in this direction will help in completely eliminating the need for a human in the loop, making the process truly automated. One limitation of APEX-Net is that it assumes the plot axes to be aligned with the image boundary; consequently, our approach might fail in the presence of an affine or projective distortion. These limitations will be addressed in future work.
## References
* [1] Ankit Rohatgi, “Webplotdigitizer: Version 4.3,” 2020, Available at https://automeris.io/WebPlotDigitizer, Accessed on October 20, 2020.
* [2] Jiro Doke, “Grabit,” 2020, Available at https://www.mathworks.com/matlabcentral/fileexchange/7173-grabit, Accessed on October 20, 2020.
* [3] “Digitizeit,” Available at https://www.digitizeit.de/, Accessed on October 20, 2020.
* [4] “Getdatagraphdigitizer,” Available at http://getdata-graph-digitizer.com/index.php, Accessed on October 20, 2020.
* [5] “Plotdigitizer,” Available at http://plotdigitizer.sourceforge.net/, Accessed on October 20, 2020.
* [6] Mark Mitchell, Baurzhan Muftakhidinov, Tobias Winchen, et al., “Engauge digitizer software,” Webpage: http://markummitchell.github.io/engauge-digitizer, Accessed on October 20, 2020, vol. 11, 2017.
* [7] Peter Uwer, “Easyndata: A simple tool to extract numerical values from published plots,” arXiv preprint arXiv:0710.2896, 2007, Accessed on October 20, 2020.
* [8] “Quintessa graph grabber,” Available at https://www.quintessa.org/software/downloads-and-demos/graph-grabber-2.0.2, Accessed on October 20, 2020.
* [9] “Graph digitizer comparison – 16 ways to digitize your data,” Available at http://www.ifsc.usp.br/~lavfis/images/dados/digitalizarGrafico.pdf, Accessed on October 20, 2020.
* [10] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton, “Imagenet classification with deep convolutional neural networks,” in Advances in neural information processing systems, 2012, pp. 1097–1105.
* [11] Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik, “Rich feature hierarchies for accurate object detection and semantic segmentation,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2014, pp. 580–587.
* [12] Jonathan Long, Evan Shelhamer, and Trevor Darrell, “Fully convolutional networks for semantic segmentation,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2015, pp. 3431–3440.
* [13] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio, “Generative adversarial nets,” in Advances in neural information processing systems, 2014, pp. 2672–2680.
* [14] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun, “Faster r-cnn: Towards real-time object detection with region proposal networks,” in Advances in neural information processing systems, 2015, pp. 91–99.
* [15] Alexey Bochkovskiy, Chien-Yao Wang, and Hong-Yuan Mark Liao, “Yolov4: Optimal speed and accuracy of object detection,” arXiv preprint arXiv:2004.10934, 2020.
* [16] John D Hunter, “Matplotlib: A 2d graphics environment,” Computing in science & engineering, vol. 9, no. 3, pp. 90–95, 2007.
# Sparse Conic Reformulation of Structured QCQPs based on Copositive
Optimization with Applications in Stochastic Optimization
Markus Gabl IOR, Karlsruhe Institute of Technology, Germany.
<EMAIL_ADDRESS>
###### Abstract
Recently, Bomze et al. introduced a sparse conic relaxation of the scenario problem of a two-stage stochastic version of the standard quadratic optimization problem. When compared numerically to Burer’s classical reformulation, the authors showed that there seems to be almost no difference in terms of solution quality, whereas the solution time can differ by orders of magnitude. While the authors did find a very limited special case for which Burer’s reformulation and their relaxation are equivalent, no satisfying explanation for the high quality of their bound was given. This article aims at shedding more light on this phenomenon and gives a more thorough theoretical account of its inner workings. We argue that the quality of the outer approximation cannot be explained by traditional results on sparse conic relaxations based on positive semidefinite or completely positive matrix completion, which require certain sparsity patterns characterized by chordal and block-clique graphs respectively, and which put certain restrictions on the type of conic constraint they seek to sparsify. In an effort to develop an alternative approach, we will provide a new type of convex reformulation of a large class of stochastic quadratically constrained quadratic optimization problems that is similar to Burer’s reformulation, but lifts the variables into a comparatively lower-dimensional space. The reformulation rests on a generalization of the set-completely positive matrix cone. This cone can then be approximated via inner and outer approximations in order to obtain upper and lower bounds, which potentially close the optimality gap and hence can give a certificate of exactness for these sparse reformulations outside of traditional, known sufficient conditions. Finally, we provide some numerical experiments, in which we assess the quality of the inner and outer approximations, thereby showing that the approximations may indeed close the optimality gap in interesting cases.
Keywords: Quadratic Optimization $\cdot$ Copositive Optimization $\cdot$ Matrix Completion $\cdot$ Conic Optimization
## 1 Introduction
Recently, in [3], the authors considered the scenario problem of a two-stage
stochastic version of the standard quadratic optimization problem given by
$\displaystyle\min_{\mathbf{x}\in\mathbb{R}^{n_{1}},\,\mathbf{y}_{i}\in\mathbb{R}^{n_{2}}}\left\{\mathbf{x}^{\mathsf{T}}{\mathsf{A}}\mathbf{x}+\sum_{i=1}^{S}p_{i}\left(\mathbf{x}^{\mathsf{T}}{\mathsf{B}}_{i}\mathbf{y}_{i}+\mathbf{y}_{i}^{\mathsf{T}}{\mathsf{C}}_{i}\mathbf{y}_{i}\right)\colon(\mathbf{x},\mathbf{y}_{i})\in\Delta,\ i\in[1\!:\!S]\right\},$ (2St3QP)
where $\Delta\subset\mathbb{R}^{n_{1}+n_{2}}$ is the unit simplex and $p_{i},\ i\in[1\!:\!S]$, are the probabilities of the scenarios occurring.
This optimization problem can be exactly reformulated into a copositive optimization problem based on Burer’s reformulation presented in [4]. The reformulation forces a lifting of the space of variables into a space of dimension $O((n_{1}+Sn_{2})^{2})$, which makes it entirely impractical for the purposes of stochastic optimization, since the number of scenarios $S$ is typically very high and the copositive optimization problem has to be approximated with semidefinite optimization problems, which are known to scale poorly. In an effort to circumvent this issue, the authors introduced a copositive relaxation that merely requires $O(S(n_{1}+n_{2})^{2})$ variables and showed empirically that the approximating SDPs are practical even if the number of scenarios is high. Somewhat surprisingly, they observed that the quality of the solutions they found did not substantially differ from the bound obtained by employing the traditional copositive reformulation. In fact, they state that the difference was small enough to possibly be an artifact of numerical inaccuracies of the SDP solver. Aside from an exactness result for a niche case of (2St3QP), no theoretical explanation for this phenomenon was provided. The present article is chiefly motivated by the question: why does the cheap relaxation perform so well? While we were not able to fully answer this question, we are still able to provide valuable theoretical insights that amount to a novel, practical approach to sparse conic reformulations. In short, we introduce a generalization of the set-completely positive matrix cones that yields conic relaxations which are sparse to begin with, and which can, much like the traditional set-completely positive matrix cones, be approximated in order to generate lower and upper bounds that may certify optimality in case the gap between them is zero.
To set up our exposition, we will now introduce a more general quadratic optimization problem and discuss some important context, specifically copositive optimization and sparse conic reformulations based on matrix completion. To begin with, the class of optimization problems in question is given by:
$\displaystyle\begin{split}\min_{\mathbf{x},\mathbf{y}_{i}}\ \mathbf{x}^{\mathsf{T}}{\mathsf{A}}\mathbf{x}+\mathbf{a}^{\mathsf{T}}\mathbf{x}&+\sum_{i=1}^{S}\mathbf{x}^{\mathsf{T}}{\mathsf{B}}_{i}\mathbf{y}_{i}+\mathbf{y}_{i}^{\mathsf{T}}{\mathsf{C}}_{i}\mathbf{y}_{i}+\mathbf{c}_{i}^{\mathsf{T}}\mathbf{y}_{i}\\ \mathrm{s.t.:}\ {\mathsf{F}}_{i}\mathbf{x}+{\mathsf{G}}_{i}\mathbf{y}_{i}&={\mathbf{r}}_{i},\quad i\in[1\!:\!S],\\ Q_{j}(\mathbf{x},\mathbf{y}_{1},\dots,\mathbf{y}_{S})&=0,\quad j\in[1\!:\!K],\\ \mathbf{x}&\in{\mathcal{K}}_{0},\\ \mathbf{y}_{i}&\in{\mathcal{K}}_{i},\quad i\in[1\!:\!S],\end{split}$ (1)
where ${\mathcal{K}}_{0}\subseteq\mathbb{R}^{n_{1}}$ and ${\mathcal{K}}_{i}\subseteq\mathbb{R}^{n_{2}},\ i\in[1\!:\!S]$, are closed, convex cones, ${\mathsf{A}}\in\SS^{n_{1}}$ (i.e. the symmetric matrices of order $n_{1}$), $\mathbf{a}\in\mathbb{R}^{n_{1}}$, ${\mathsf{B}}_{i}\in\mathbb{R}^{n_{1}\times n_{2}}$, ${\mathsf{C}}_{i}\in\SS^{n_{2}}$, $\mathbf{c}_{i}\in\mathbb{R}^{n_{2}}$, $i\in[1\!:\!S]$, and ${\mathsf{F}}_{i}\in\mathbb{R}^{m_{i}\times n_{1}}$, ${\mathsf{G}}_{i}\in\mathbb{R}^{m_{i}\times n_{2}}$, ${\mathbf{r}}_{i}\in\mathbb{R}^{m_{i}}$, $i\in[1\!:\!S]$. Further, $Q_{j}(\cdot)\colon\mathbb{R}^{n_{1}+Sn_{2}}\rightarrow\mathbb{R},\ j\in[1\!:\!K]$, are quadratic functions that do not involve bilinear terms between $\mathbf{y}_{i}$ and $\mathbf{y}_{j}$ for $i\neq j$. The special structure in place here is that $\mathbf{y}_{i}$ does not interact with $\mathbf{y}_{j}$ in a bilinear fashion in either the constraints or the objective, and this statement stays true even if the linear constraints are squared.
This setup encompasses not only (2St3QP), but also general two-stage stochastic conic QCQPs over finitely supported distributions, which are important since
they are used to approximate two-stage stochastic conic QCQPs with infinite
support. In the context of two-stage stochastic optimization, $S$ would be the
number of scenarios and $\mathbf{y}_{i}$ would be variables specific to
scenario $i$. Hence, the special structure in (1) is native to all two-stage
stochastic QCQPs regardless of the structure of the nominal QCQP.
Under some well-known regularity conditions on the functions $Q_{j}(\cdot)$ (see [5, 11, 7]), our structured QCQP can be reformulated into a conic optimization problem with ${\mathcal{O}}\left((n_{1}+Sn_{2})^{2}\right)$ variables. This reformulation takes the form:
$\displaystyle\begin{split}\min_{{\mathsf{X}},{\mathsf{Y}}_{i},{\mathsf{Z}}_{i},\mathbf{x},\mathbf{y}_{i}}\ \mathrm{Tr}({\mathsf{A}}{\mathsf{X}})+\mathbf{a}^{\mathsf{T}}\mathbf{x}&+\sum_{i=1}^{S}\mathrm{Tr}({\mathsf{B}}_{i}{\mathsf{Z}}_{i})+\mathrm{Tr}({\mathsf{C}}_{i}{\mathsf{Y}}_{i,i})+\mathbf{c}_{i}^{\mathsf{T}}\mathbf{y}_{i}\\ \mathrm{s.t.:}\ {\mathsf{F}}_{i}\mathbf{x}+{\mathsf{G}}_{i}\mathbf{y}_{i}&={\mathbf{r}}_{i},\quad i\in[1\!:\!S],\\ \operatorname{diag}\left(\begin{pmatrix}{\mathsf{F}}_{i}&{\mathsf{G}}_{i}\end{pmatrix}\begin{pmatrix}{\mathsf{X}}&{\mathsf{Z}}_{i}^{\mathsf{T}}\\ {\mathsf{Z}}_{i}&{\mathsf{Y}}_{i,i}\end{pmatrix}\begin{pmatrix}{\mathsf{F}}_{i}^{\mathsf{T}}\\ {\mathsf{G}}_{i}^{\mathsf{T}}\end{pmatrix}\right)&={\mathbf{r}}_{i}\circ{\mathbf{r}}_{i},\quad i\in[1\!:\!S],\\ \hat{Q}_{j}(\mathbf{x},{\mathsf{X}},\mathbf{y}_{1},{\mathsf{Z}}_{1},{\mathsf{Y}}_{1,1},\dots,\mathbf{y}_{S},{\mathsf{Z}}_{S},{\mathsf{Y}}_{S,S})&=0,\quad j\in[1\!:\!K],\\ \begin{pmatrix}1&\mathbf{x}^{\mathsf{T}}&\mathbf{y}_{1}^{\mathsf{T}}&\dots&\mathbf{y}_{S}^{\mathsf{T}}\\ \mathbf{x}&{\mathsf{X}}&{\mathsf{Z}}_{1}^{\mathsf{T}}&\dots&{\mathsf{Z}}_{S}^{\mathsf{T}}\\ \mathbf{y}_{1}&{\mathsf{Z}}_{1}&{\mathsf{Y}}_{1,1}&\dots&{\mathsf{Y}}_{1,S}\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ \mathbf{y}_{S}&{\mathsf{Z}}_{S}&{\mathsf{Y}}_{S,1}&\dots&{\mathsf{Y}}_{S,S}\end{pmatrix}&\in\mathcal{CPP}\left(\mathbb{R}_{+}\times_{i=0}^{S}{\mathcal{K}}_{i}\right)\end{split}$ (2)
for appropriate linear functions $\hat{Q}_{j}$ satisfying $\hat{Q}_{j}(\mathbf{x},\mathbf{x}\mathbf{x}^{\mathsf{T}},\mathbf{y}_{1},\mathbf{y}_{1}\mathbf{x}^{\mathsf{T}},\mathbf{y}_{1}\mathbf{y}_{1}^{\mathsf{T}},\dots,\mathbf{y}_{S},\mathbf{y}_{S}\mathbf{x}^{\mathsf{T}},\mathbf{y}_{S}\mathbf{y}_{S}^{\mathsf{T}})=Q_{j}(\mathbf{x},\mathbf{y}_{1},\dots,\mathbf{y}_{S})$, with $\circ$ denoting the elementwise multiplication of vectors, and where the set-completely positive matrix cone is defined as
$\displaystyle\mathcal{CPP}_{n}({\mathcal{K}})\coloneqq\left\{\sum_{i=1}^{k}\mathbf{x}_{i}\mathbf{x}_{i}^{\mathsf{T}}\colon\mathbf{x}_{i}\in{\mathcal{K}},\ i\in[1\!:\!k]\right\}=\mathrm{clconv}\left\{\mathbf{x}\mathbf{x}^{\mathsf{T}}\colon\mathbf{x}\in{\mathcal{K}}\right\}=\left\{{\mathsf{X}}{\mathsf{X}}^{\mathsf{T}}\colon{\mathsf{X}}=(\mathbf{x}_{1},\dots,\mathbf{x}_{k})\in\mathbb{R}^{n\times k},\ \mathbf{x}_{i}\in{\mathcal{K}},\ i\in[1\!:\!k]\right\},$
for a closed, convex cone ${\mathcal{K}}\subseteq\mathbb{R}^{n}$. For example,
$\mathcal{CPP}(\mathbb{R}^{n})$ is the positive semidefinite matrix cone,
denoted by $\SS^{n}_{+}$, and $\mathcal{CPP}(\mathbb{R}^{n}_{+})$ is the
classical completely positive matrix cone, extensively discussed in [1]. In
the literature, optimization over the set-completely positive cone and its
dual, the set-copositive matrix cone is colloquially referred to as copositive
optimization. In general, set-completely positive matrix cones are intractable
and have to be approximated. For example, it is well known that
$\displaystyle\mathcal{CPP}(\mathbb{R}^{n}_{+})\subseteq\mathcal{DNN}^{n}\coloneqq\SS^{n}_{+}\cap{\mathcal{N}}^{n},$
where ${\mathcal{N}}^{n}$ is the cone of nonnegative $n\times n$ matrices and
$\mathcal{DNN}^{n}$ is called the doubly nonnegative matrix cone.
While many tractable approximations do exist, be they based on positive semidefinite, second-order cone or linear programming constraints, they all have in common that their complexity increases exponentially with the quality of the approximation. Even simple approximations, such as $\mathcal{DNN}^{n}$, typically involve semidefinite constraints of the same order as the set-completely positive constraint. As a result, the above reformulation is often impractical. Especially in the context of stochastic optimization, where the number of scenarios $S$ is typically very large, the size of the psd-constraints, which is of the order ${\mathcal{O}}\left((n_{1}+Sn_{2})^{2}\right)$, becomes prohibitive.
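To make the size argument concrete, the following minimal sketch (hypothetical sizes; numpy/cvxpy are assumed as the toolchain, not prescribed by this paper) contrasts the variable counts of the exact reformulation (2) with the scenario-wise blocks of (3) introduced below, and shows how a $\mathcal{DNN}^{n}$ constraint is typically imposed in practice:

```python
import cvxpy as cp

n1, n2, S = 10, 10, 100                 # hypothetical stage sizes and scenario count
full_order = 1 + n1 + S * n2            # order of the matrix variable in (2)
print(full_order ** 2)                  # ~ 1e6 entries: prohibitive
print(S * (1 + n1 + n2) ** 2)           # ~ 4.4e4 entries for the blocks in (3)

# A DNN constraint on a matrix variable M: psd plus entrywise nonnegative.
n = 1 + n1 + n2
M = cp.Variable((n, n), PSD=True)       # PSD=True makes M symmetric psd
dnn_constraints = [M >= 0]              # elementwise nonnegativity
```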
Following the basic idea of the authors in [3], we can obtain a lower dimensional relaxation by replacing the conic constraint by $S$ smaller conic constraints given by
$\displaystyle\begin{pmatrix}1&\mathbf{x}^{\mathsf{T}}&\mathbf{y}_{i}^{\mathsf{T}}\\ \mathbf{x}&{\mathsf{X}}&{\mathsf{Z}}_{i}^{\mathsf{T}}\\ \mathbf{y}_{i}&{\mathsf{Z}}_{i}&{\mathsf{Y}}_{i,i}\end{pmatrix}\in\mathcal{CPP}\left(\mathbb{R}_{+}\times{\mathcal{K}}_{0}\times{\mathcal{K}}_{i}\right),\ i\in[1\!:\!S],$ (3)
so that the number of variables is now ${\mathcal{O}}\left(S(n_{1}+n_{2})^{2}\right)$, hence linear in $S$. The price we have to pay is that the resulting optimization problem is not necessarily equivalent to (2), and hence to (1), as the conic constraints are clearly a relaxation of the conic constraint of the exact reformulation. It is, however, this relaxation which performed so inexplicably well when it was applied in [3].
In searching for an explanation of this performance, one may turn to the literature on sparse reformulations of conic optimization problems. Results in this field are typically based on theorems on matrix completion. The central question in that area is when a given matrix with unspecified entries, a so-called partial matrix, can be completed to a matrix in $\mathcal{CPP}({\mathcal{K}})$. This is useful in the context of solving a conic optimization problem: if the problem data is sparse, so that some of the entries of the matrix variable only appear in the conic constraint, one can check whether the removal of those entries leaves a partial matrix for which one can give sufficient conditions so that it is completable to a matrix that fulfills the original conic constraint. If such conditions are available, they replace the conic constraint, and the spurious entries of the matrix can be dropped entirely. The result is what we will call a sparse reformulation of the original conic problem. These reformulations can reduce the number of variables substantially, which eases the computational burden so that otherwise unmanageable problems become viable.
The classical text on such an approach is [15], where the conic constraint to be massaged is an sdp-constraint. Their approach utilizes the fact that a partial matrix whose unspecified entries exhibit a so-called chordal sparsity pattern can always be completed to a psd-matrix, provided all fully specified principal submatrices are positive semidefinite. The framework was applied in various contexts such as robust optimization [13] or optimal power flow [8, 14]. A similar approach was recently put forward by [10], who applied classical $\mathcal{CPP}(\mathbb{R}^{n}_{+})$-completion results derived in [6] in the context of copositive optimization. Their approach necessitates the presence of so-called block-clique sparsity patterns in the problem data, owing to the fact that partial matrices with a block-clique specification pattern can be completed to matrices in $\mathcal{CPP}(\mathbb{R}^{n}_{+})$ whenever the fully specified principal submatrices are completely positive.
Unfortunately, none of these results are able to explain the phenomenon we seek to investigate, and we will spend a full section discussing their shortcomings and what we can still learn from them about our object of interest. We will argue that, unless ${\mathcal{K}}=\mathbb{R}^{n}$, the required sparsity patterns are, outside of some limited special cases, not the ones present in (2), where the sparsity pattern takes the form of an arrowhead. Also, in cases where ${\mathcal{K}}$ is neither the positive orthant nor the full space, completion results are, to the best of our knowledge, entirely absent from the literature.
#### Contribution
In an effort to remedy these shortcomings, we propose a new approach to sparse conic reformulations. Rather than treating completability of a matrix as an abstract concept, we identify a cone that is isomorphic to the cone of completable partial matrices with arrowhead sparsity pattern, denoted $\mathcal{CMP}$, as a generalization of the set-completely positive matrix cone. We show that the geometry of this cone can be used to derive a lower dimensional alternative to the exact reformulation (2). Much the same way one uses inner and outer approximations in order to solve copositive optimization problems, we derive inner and outer approximations of $\mathcal{CMP}$ in order to obtain upper and lower bounds on this new conic optimization problem. Numerical experiments show that in practice these approximations exhibit interesting beneficial properties.
#### Outline
The rest of the article is organized as follows: In Section 2 we give a short discussion of existing approaches to sparse conic optimization and discuss the limitations that ultimately make these techniques unfit to tackle sparse reformulations of (2). Hence, we develop an alternative approach in Section 3, based on the aforementioned convex cone $\mathcal{CMP}$. This new type of convex reformulation motivates a strategy for sparse optimization that is analogous to classical copositive optimization techniques, where difficult conic constraints are approximated via inner and outer approximations. In Section 4 we present many such approximations and discuss their limitations. Finally, we assess the efficacy of our approach in extensive numerical experiments.
### Notation
Throughout the paper matrices are denoted by sans-serif capital letters (e.g. ${\mathsf{O}}$ will denote the zero matrix, where the size will be clear from the context), vectors by boldface lower case letters (e.g. $\mathbf{o}$ will denote the zero vector, $\mathbf{e}_{i}$ will denote a vector of zeros with a one at the $i$-th coordinate) and scalars (real numbers) by simple lower case letters. Sets will be denoted using calligraphic letters, e.g., cones will often be denoted by ${\mathcal{K}}$. We use $\SS^{n}$ to indicate the set of symmetric matrices and $\SS^{n}_{+}$/$\SS^{n}_{-}$ for the sets of positive-/negative-semidefinite symmetric matrices, respectively. Moreover, we use ${\mathcal{N}}^{n}$ to denote the set of entrywise nonnegative, symmetric matrices. We also use the shorthand notation $[l\!:\!k]\coloneqq\left\{l,l+1,\dots,k-1,k\right\}\subseteq\mathbb{N}$. For a given set ${\mathcal{A}}$ we denote its convex hull by $\mathrm{conv}({\mathcal{A}})$. For a convex set ${\mathcal{C}}$, the set of generators of its extreme rays and points is given by $\mathrm{ext}({\mathcal{C}})$. We also make use of the Frobenius product of two appropriately sized matrices ${\mathsf{A}}$ and ${\mathsf{B}}$, defined as ${\mathsf{A}}\bullet{\mathsf{B}}\coloneqq\mathrm{trace}({\mathsf{A}}^{\mathsf{T}}{\mathsf{B}})$, which can be interpreted as the sum of the inner products of the columns of ${\mathsf{A}}$ and ${\mathsf{B}}$.
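As a quick sanity check of this interpretation, a minimal numpy verification (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
A, B = rng.random((3, 4)), rng.random((3, 4))
frob = np.trace(A.T @ B)                                            # A • B
assert np.isclose(frob, sum(A[:, j] @ B[:, j] for j in range(4)))   # columnwise inner products
assert np.isclose(frob, (A * B).sum())                              # equivalently, the entrywise sum
```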
## 2 Classical approaches to sparse conic optimization and why they fail
As stated in the introduction, there are already many approaches for utilizing
sparsity patterns in conic optimization problems. At the core of these results
lie matrix completion theorems, which we will discuss shortly. But in order to
state them we must introduce some essential terms first.
A graph $G=({\mathcal{V}},{\mathcal{E}})$ is given by its set of vertices ${\mathcal{V}}=\left\{v_{1},\dots,v_{n}\right\}$ and its set of edges ${\mathcal{E}}\subseteq\left\{\left\{v,u\right\}\colon v,u\in{\mathcal{V}}\right\}$, both of which are finite. A subgraph $T=({\mathcal{V}}_{T},{\mathcal{E}}_{T})$ of a graph $G$ is a graph such that ${\mathcal{V}}_{T}\subseteq{\mathcal{V}}$ and ${\mathcal{E}}_{T}\subseteq{\mathcal{E}}$. Vertex $v_{i}$ is adjacent to $v_{j}$ and vice versa if $\left\{v_{i},v_{j}\right\}\in{\mathcal{E}}$. If $e=\{v_{i},v_{j}\}\in{\mathcal{E}}$ then $v_{i}$ and $v_{j}$ are incident on $e$. A graph where all vertices are adjacent to one another is called a complete graph. A path that connects vertex $v_{i}$ with $v_{j}$ is given by a sequence of edges $\{v_{i},v_{k_{1}}\},\dots,\{v_{k_{p}},v_{j}\}$, where the $v_{k_{l}}$ are distinct and $p>1$ is the length of that path. A graph is connected if any two vertices have a connecting path. A graph that is not connected is disconnected. A cycle is a path that connects a vertex $v$ to itself. A chord of a cycle with length greater than 3 is an edge that connects two vertices that are incident on two different edges of the cycle. A graph is chordal if every cycle with length greater than 3 has a chord. A subgraph of $G$ that is complete is called a clique. A block $B$ of a graph is a subgraph that is connected, cannot be disconnected by removing a single vertex and its adjacent edges (i.e. it has no cut vertex), and is not contained in any other subgraph with these two properties. A block-clique graph is a graph whose blocks are cliques.
A partial matrix of order $n$ is a matrix whose entries in the $i$-th row and the $j$-th column are determined if and only if $(i,j)\in{\mathcal{I}}\subseteq[1\!:\!n]^{2}$ and are undetermined otherwise. A partial matrix is said to be partial positive semidefinite/completely positive/doubly nonnegative if and only if every fully determined principal submatrix is positive semidefinite/completely positive/doubly nonnegative. A partial matrix is positive semidefinite/completely positive/doubly nonnegative completable if we can specify the undetermined entries so that the fully specified matrix is positive semidefinite/completely positive/doubly nonnegative.
The specification graph of a partial matrix ${\mathsf{A}}$ of order $n$ is a graph $G({\mathsf{A}})$ with vertices ${\mathcal{V}}=\left\{v_{i}\colon i\in[1\!:\!n]\right\}$ and edges ${\mathcal{E}}$ such that $\left\{v_{i},v_{j}\right\}\in{\mathcal{E}}$ if and only if the entry $a_{ij}$ is specified. A partial symmetric matrix ${\mathsf{A}}$ with $G({\mathsf{A}})=G$ is called a symmetric matrix realization of $G$.
The following three theorems give the key results on matrix completion as far
as this text is concerned:
###### Theorem 1.
All partial positive semidefinite symmetric matrix realizations of a graph $G$
are positive semidefinite completable if and only if $G$ is chordal.
###### Proof.
See [1, Theorem 1.39]. ∎
###### Theorem 2.
Every partial completely positive matrix realization of a graph $G$ is completely positive completable if and only if $G$ is a block-clique graph.
###### Proof.
See [1, Theorem 2.33]. ∎
###### Theorem 3.
Every partial doubly nonnegative matrix realization of a graph $G$ is doubly nonnegative completable if and only if $G$ is a block-clique graph.
###### Proof.
See [6]. ∎
These theorems can be used to establish that a constraint on a high-dimensional matrix, say ${\mathsf{X}}$, can be replaced by a number of constraints on certain principal submatrices of ${\mathsf{X}}$ without increasing the feasible set. This is achieved by showing that values for the submatrices of ${\mathsf{X}}$ that fulfill the latter constraints can be completed to a full evaluation of ${\mathsf{X}}$ that fulfills the original larger constraint. For the sake of illustration we present the following toy example.
###### Example 1.
Consider the optimization problem
$\displaystyle\min_{{\mathsf{X}}\in\SS^{n}_{+}}\left\\{{\mathsf{Q}}\bullet{\mathsf{X}}\colon{\mathsf{B}}\bullet{\mathsf{X}}=1\right\\},$
(4)
where $({\mathsf{Q}})_{ij}=({\mathsf{B}})_{ij}=0$ if $|i-j|>1$, the remaining entries of ${\mathsf{B}}$ equal one and those of ${\mathsf{Q}}$ are arbitrary. The entries of ${\mathsf{X}}$ outside the inner band of width 1 appear in neither the equality constraint nor the objective. Consider the relaxation of (4) where the psd-constraint is replaced by
$\displaystyle\begin{pmatrix}X_{ii}&X_{ij}\\ X_{ji}&X_{jj}\end{pmatrix}\in\SS^{2}_{+}\quad\forall(i,j)\colon|i-j|=1,\ i<j,$ (5)
and the entries of ${\mathsf{X}}$ outside of the inner band are dropped from the problem. Clearly, we obtain a relaxation of the original problem, since the new condition is necessary for ${\mathsf{X}}$ to be positive semidefinite. Also, the fact that we dropped entries of ${\mathsf{X}}$ can be thought of as a replacement of the matrix ${\mathsf{X}}$ by a partial matrix, say ${\mathsf{X}}_{*}$, whose entries outside the inner band are not specified. In this case the specification graph of ${\mathsf{X}}_{*}$ is easily checked to be chordal, as it does not contain any cycles at all. Hence, if all fully specified submatrices of ${\mathsf{X}}_{*}$ are positive semidefinite, i.e. (5) holds, then it can be completed to a positive semidefinite matrix by Theorem 1. The resulting matrix is feasible for (4) with the same objective function value, so that the relaxation turns out to be lossless.
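The losslessness claim is easy to check numerically. The following sketch (a hypothetical instance; numpy/cvxpy are assumed as the toolchain, with a positive definite tridiagonal ${\mathsf{Q}}$ so the problem is bounded) solves the full SDP (4) and the banded relaxation (5):

```python
import cvxpy as cp
import numpy as np

n = 4
# Banded data as in Example 1: Q symmetric tridiagonal (positive definite here),
# B with ones on the band and zeros elsewhere.
Q = 2.0 * np.eye(n) + np.diag(-np.ones(n - 1), 1) + np.diag(-np.ones(n - 1), -1)
B = np.eye(n) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)

# Full SDP (4).
X = cp.Variable((n, n), symmetric=True)
full = cp.Problem(cp.Minimize(cp.trace(Q @ X)), [cp.trace(B @ X) == 1, X >> 0])
full.solve()

# Banded relaxation (5): keep only the diagonal d and first off-diagonal o of X,
# and require every overlapping 2x2 principal submatrix to be psd.
d, o = cp.Variable(n), cp.Variable(n - 1)
blocks = [cp.Variable((2, 2), PSD=True) for _ in range(n - 1)]
cons = [cp.sum(d) + 2 * cp.sum(o) == 1]          # Tr(BX) = 1 restricted to the band
for i, Bl in enumerate(blocks):
    cons += [Bl[0, 0] == d[i], Bl[1, 1] == d[i + 1], Bl[0, 1] == o[i]]
obj = np.diag(Q) @ d + 2 * (np.diag(Q, 1) @ o)   # Tr(QX) restricted to the band
relax = cp.Problem(cp.Minimize(obj), cons)
relax.solve()

print(full.value, relax.value)  # expected to coincide: the relaxation is lossless
```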
One may attempt to similarly derive a sparse reformulation of (2) by invoking the completion results we discussed above. This would necessitate showing that a partial matrix of the form
$\displaystyle\begin{pmatrix}{\mathsf{X}}&{\mathsf{Z}}_{1}^{\mathsf{T}}&{\mathsf{Z}}_{2}^{\mathsf{T}}&\dots&{\mathsf{Z}}_{S-1}^{\mathsf{T}}&{\mathsf{Z}}_{S}^{\mathsf{T}}\\ {\mathsf{Z}}_{1}&{\mathsf{Y}}_{1,1}&\mathbf{*}&\dots&\mathbf{*}&\mathbf{*}\\ {\mathsf{Z}}_{2}&\mathbf{*}&{\mathsf{Y}}_{2,2}&\dots&\mathbf{*}&\mathbf{*}\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ {\mathsf{Z}}_{S-1}&\mathbf{*}&\mathbf{*}&\dots&{\mathsf{Y}}_{S-1,S-1}&\mathbf{*}\\ {\mathsf{Z}}_{S}&\mathbf{*}&\mathbf{*}&\dots&\mathbf{*}&{\mathsf{Y}}_{S,S}\end{pmatrix}$
can be completed to a set-completely positive matrix whenever the submatrices fulfill
$\displaystyle\begin{pmatrix}{\mathsf{X}}&{\mathsf{Z}}_{i}^{\mathsf{T}}\\ {\mathsf{Z}}_{i}&{\mathsf{Y}}_{i,i}\end{pmatrix}\in\mathcal{CPP}\left({\mathcal{K}}_{0}\times{\mathcal{K}}_{i}\right),\ i\in[1\!:\!S].$
Note that this would coincide with the model in [3], which we discussed in the introduction, so that matrix completion theory is a promising contender for the desired explanation of the effectiveness of the model. The strategy appears feasible at first, at least for the case where the ${\mathcal{K}}_{i}$ are nonnegative orthants, given that in this case completion results are readily available. Unfortunately, it is futile, since the arrowhead structure is not block-clique outside of narrow special cases, as we will now show.
###### Lemma 4.
Let $S>1$ and consider a partial matrix where the specified entries exhibit an arrowhead structure, i.e.
$\displaystyle\begin{pmatrix}{\mathsf{X}}&{\mathsf{Z}}_{1}^{\mathsf{T}}&{\mathsf{Z}}_{2}^{\mathsf{T}}&\dots&{\mathsf{Z}}_{S-1}^{\mathsf{T}}&{\mathsf{Z}}_{S}^{\mathsf{T}}\\\
{\mathsf{Z}}_{1}&{\mathsf{Y}}_{1,1}&\mathbf{*}&\dots&\mathbf{*}&\mathbf{*}\\\
{\mathsf{Z}}_{2}&\mathbf{*}&{\mathsf{Y}}_{2,2}&\dots&\mathbf{*}&\mathbf{*}\\\
\vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\\
{\mathsf{Z}}_{S-1}&\mathbf{*}&\mathbf{*}&\dots&{\mathsf{Y}}_{S-1,S-1}&\mathbf{*}\\\
{\mathsf{Z}}_{S}&\mathbf{*}&\mathbf{*}&\dots&\mathbf{*}&{\mathsf{Y}}_{S,S}\end{pmatrix},\
$
where ${\mathsf{X}}\in\SS^{n_{1}},\ {\mathsf{Y}}_{i,i}\in\SS^{n_{2}},\
{\mathsf{Z}}_{i}\in\mathbb{R}^{n_{2}\times n_{1}},\ i\in[1\\!:\\!S],$ and let
$G_{spec}$ be its specification graph. Then $G_{spec}$ is chordal. If
$n_{1}\in\left\\{0,1\right\\}$ then $G_{spec}$ is also a block-clique graph,
which is not the case otherwise.
###### Proof.
We start out by showing that $G_{spec}$ is chordal in general. We group the nodes of the specification graph into $S+1$ groups, where the first group $g_{0}=\left\{1,\dots,n_{1}\right\}$ consists of the nodes that correspond to the first $n_{1}$ rows of the matrix and whose internal edges are specified by the north-west entries ${\mathsf{X}}$. The second group $g_{1}=\left\{n_{1}+1,\dots,n_{1}+n_{2}\right\}$ corresponds to the rows $n_{1}+1$ to $n_{1}+n_{2}$, whose internal edges are specified by the block ${\mathsf{Y}}_{1,1}$ and whose external edges, connecting to neighbors outside of $g_{1}$, are specified by ${\mathsf{Z}}_{1}$. The construction of the remaining groups proceeds accordingly. We will now show that any cycle of length greater than 3 must have a chord. Note that all the groups are cliques, since the blocks ${\mathsf{Y}}_{i,i}$ are fully specified. Thus, a cycle of length greater than 3 must have a chord if it is entirely contained in one of the groups. We therefore only need to consider cycles that are not entirely contained in one group. Also, any member of $g_{0}$ is a neighbor to any other node in the graph, since the blocks ${\mathsf{Z}}_{i},\ i\in[1\!:\!S]$, are fully specified. Thus, if a vertex $v$ of $g_{0}$ is visited by a cycle, then the edge to any other node in the cycle that is not adjacent to $v$ on the cycle gives a chord. A cycle that visits more than one group needs to visit $g_{0}$, since the other groups are not connected to one another, and thus has a chord.
If $n_{1}=1$ then $g_{0}$ is a singleton. A block cannot contain just vertices from multiple $g_{i},\ i\in[1\!:\!S]$, since these groups are pairwise disconnected. A connection can only be established by adding $g_{0}$, but then the single node in $g_{0}$ is a cut vertex, i.e. the subgraph can become disconnected by deleting a single node and its adjacent edges. Hence, a block of $G_{spec}$ must be a subgraph formed from the union of $g_{0}$ and one $g_{i},\ i\in[1\!:\!S]$, and the respective edges. A subgraph formed from all the nodes of such a union, say $T$, cannot be contained in any other block, since the construction of such a block would require adding nodes from a third group. Thus, $T$ is a block, but it is also a clique, since $g_{i}$ is a clique and the node in $g_{0}$ is adjacent to all the members of $g_{i}$.
If $n_{1}=0$ then $G_{spec}$ consists of $S$ subgraphs that are cliques and pairwise disconnected, hence they are blocks.
Otherwise, the entire graph is its only block, since it cannot become disconnected by deleting a single node and its adjacent edges, but this block is not a clique, since the $g_{i},\ i\in[1\!:\!S]$, have no inter-group edges. ∎
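These structural claims are also easy to verify computationally. The following sketch (illustrative only; it assumes networkx as a dependency and uses hypothetical sizes) builds the specification graph of the arrowhead pattern and tests chordality and the block-clique property:

```python
import itertools
import networkx as nx

def arrowhead_spec_graph(n1: int, n2: int, S: int) -> nx.Graph:
    """Specification graph of the arrowhead partial matrix: a clique g0 of n1
    nodes (the X block), S cliques of n2 nodes (the Y_{i,i} blocks), and all
    edges between g0 and each group (the fully specified Z_i blocks)."""
    G = nx.Graph()
    g0 = range(n1)
    G.add_nodes_from(g0)
    G.add_edges_from(itertools.combinations(g0, 2))
    for i in range(S):
        gi = range(n1 + i * n2, n1 + (i + 1) * n2)
        G.add_edges_from(itertools.combinations(gi, 2))
        G.add_edges_from((u, v) for u in g0 for v in gi)
    return G

def is_block_clique(G: nx.Graph) -> bool:
    """Every block (biconnected component) must induce a complete subgraph."""
    return all(
        G.subgraph(b).number_of_edges() == len(b) * (len(b) - 1) // 2
        for b in nx.biconnected_components(G)
    )

G = arrowhead_spec_graph(n1=3, n2=2, S=4)
print(nx.is_chordal(G))     # True: arrowhead patterns are chordal
print(is_block_clique(G))   # False for n1 > 1, in line with Lemma 4
```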
As a consequence of the lemma, the traditional route to sparse conic reformulations provides little insight: If the ${\mathcal{K}}_{i}$ are positive orthants, the completion theorems are not applicable, since (2) lacks the proper sparsity pattern. Also in that case, we cannot compare the $\mathcal{DNN}$ relaxations of (2) and its sparse relaxation based on (3), since the same sparsity pattern would be required. If the ${\mathcal{K}}_{i}$ are neither the positive orthant nor the full space, we do not even have any completion results to begin with.
Still, the present methodology allows for at least some insight into the benefits of working with (3), namely in the form of the following performance guarantee.
###### Theorem 5.
Let $\mathrm{val}(SDP)$ be the optimal value of problem (2) after $\mathcal{CPP}(\mathbb{R}_{+}\times_{i=0}^{S}{\mathcal{K}}_{i})$ is replaced by $\SS^{n_{1}+Sn_{2}+1}_{+}$, and let $\mathrm{val}(R)$ be the optimal value after replacing the full conic constraint with the conic constraints in (3). We have $\mathrm{val}(SDP)\leq\mathrm{val}(R)$, and the statement also holds if we replace the cones $\mathcal{CPP}(\mathbb{R}_{+}\times{\mathcal{K}}_{0}\times{\mathcal{K}}_{i}),\ i\in[1\!:\!S]$, in (3) by any other subsets of $\SS^{n_{1}+n_{2}+1}_{+}$.
###### Proof.
Clearly, the two problems have the same objective function, so we only need to compare the feasible sets. Let $\left({\mathsf{X}},{\mathsf{Y}}_{1},\dots,{\mathsf{Y}}_{S},{\mathsf{Z}}_{1},\dots,{\mathsf{Z}}_{S},\mathbf{x},\mathbf{y}_{1},\dots,\mathbf{y}_{S}\right)$ be such that
$\displaystyle\begin{pmatrix}1&\mathbf{x}^{\mathsf{T}}&\mathbf{y}_{i}^{\mathsf{T}}\\ \mathbf{x}&{\mathsf{X}}&{\mathsf{Z}}_{i}^{\mathsf{T}}\\ \mathbf{y}_{i}&{\mathsf{Z}}_{i}&{\mathsf{Y}}_{i}\end{pmatrix}\in\SS^{n_{1}+n_{2}+1}_{+},\ i\in[1\!:\!S],$ (6)
and the linear constraints in (2) are fulfilled, i.e. we have a feasible solution of the optimization problem defining $\mathrm{val}(R)$. All we need to show is that, after setting ${\mathsf{Y}}_{i,i}={\mathsf{Y}}_{i},\ i\in[1\!:\!S]$, we can find ${\mathsf{Y}}_{i,j},\ i\neq j$, such that we can construct a positive semidefinite matrix. By Theorem 1 it suffices to show that the specification graph of the partial matrix in which the ${\mathsf{Y}}_{i,j},\ i\neq j$, are not specified is chordal, and that all fully specified principal submatrices are positive semidefinite. So consider the partial matrix
$\displaystyle\begin{pmatrix}1&\mathbf{x}^{\mathsf{T}}&\mathbf{y}_{1}^{\mathsf{T}}&\mathbf{y}_{2}^{\mathsf{T}}&\dots&\mathbf{y}_{S-1}^{\mathsf{T}}&\mathbf{y}_{S}^{\mathsf{T}}\\ \mathbf{x}&{\mathsf{X}}&{\mathsf{Z}}_{1}^{\mathsf{T}}&{\mathsf{Z}}_{2}^{\mathsf{T}}&\dots&{\mathsf{Z}}_{S-1}^{\mathsf{T}}&{\mathsf{Z}}_{S}^{\mathsf{T}}\\ \mathbf{y}_{1}&{\mathsf{Z}}_{1}&{\mathsf{Y}}_{1,1}&\mathbf{*}&\dots&\mathbf{*}&\mathbf{*}\\ \mathbf{y}_{2}&{\mathsf{Z}}_{2}&\mathbf{*}&{\mathsf{Y}}_{2,2}&\dots&\mathbf{*}&\mathbf{*}\\ \vdots&\vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ \mathbf{y}_{S-1}&{\mathsf{Z}}_{S-1}&\mathbf{*}&\mathbf{*}&\dots&{\mathsf{Y}}_{S-1,S-1}&\mathbf{*}\\ \mathbf{y}_{S}&{\mathsf{Z}}_{S}&\mathbf{*}&\mathbf{*}&\dots&\mathbf{*}&{\mathsf{Y}}_{S,S}\end{pmatrix}.$
Since all but the first two (block) columns of the above representation have unspecified blocks, one can only obtain fully specified principal submatrices by deleting all but one of the partially specified columns and all but the respective rows. The blocks so obtained are precisely the blocks in (6) and are thus positive semidefinite. The chordality of the specification graph follows from Lemma 4. This completes the proof. ∎
The theorem states that our sparse, hence lower dimensional, relaxation is at least as strong as the full-dimensional SDP-relaxation, and thus gives a theoretical performance guarantee. It also applies to relaxations of (3) such as the $\mathcal{DNN}$-relaxation, since $\mathcal{DNN}^{n}\subseteq\SS^{n}_{+}$.
###### Remark 1.
We could have arrived at Lemma 4 by using the results in [9], who describe a chordality-detection procedure for SDPs with chordal sparsity patterns. However, this procedure seemed more complicated than proving chordality of arrowhead matrices directly, as we did here. It is nonetheless important to note that the above result is not the first of its kind, but can be obtained directly from known results in the literature. Still, to the best of our knowledge, the context in which we use this technique is original.
###### Remark 2.
At this point we would also like to highlight a specific shortcoming of the above completion theorems. An inattentive reading of their claims might give the false impression that, for example, a partial psd-matrix needs to have a chordal specification graph in order to be completable. This assessment is incorrect. A partial psd-matrix ${\mathsf{M}}$ may have a specification graph $G({\mathsf{M}})$ that is not chordal, while still being psd-completable. All the theorem says is that, in this case, not all partial psd-matrices with specification graph $G({\mathsf{M}})$ are psd-completable. But that does not exclude the possibility that some still can be completed. This is significant, since for a sparse relaxation to be exact it suffices that its optimal set contains just one appropriately completable matrix. To additionally require that all other feasible matrices, or more so, all matrices with the same sparsity pattern, are completable is needlessly restrictive, which explains part of the inflexibility of the classical machinery.
## 3 An alternative approach to sparse reformulations
We have seen that the classical approach to sparse reformulations is limited in several capacities. It is restrictive with respect to the cones ${\mathcal{K}}_{i}$ and it is inflexible with respect to the sparsity structure, such that it is ultimately ill-equipped to tackle sparse reformulations of (2). We therefore propose an alternative strategy, where we provide a convex-conic reformulation of (1) based on a generalization of $\mathcal{CPP}$ that rests on a lifting of the space of variables into a space of lower dimension than required for the classical $\mathcal{CPP}$-reformulation (2). Hence, the reformulation is already sparse, which comes at the price of having to optimize over a new, complicated cone. This, however, is just a new guise of an old problem in copositive optimization, and we will meet it in a familiar fashion: by providing inner and outer approximations that yield upper and lower bounds on the problem, whose gap is hopefully small or even zero. In order to achieve this, we first introduce some necessary concepts that allow us to state and prove our main reformulation result. After that, we close this section with a detailed description of our approach.
### 3.1 The space of connected components $\SS_{n}^{S,k}$ and the cone of
completable, completely positive, connected components $\mathcal{CMP}$
We define
$\displaystyle\SS_{n}^{S,k}$
$\displaystyle\coloneqq\left\\{\left[\begin{pmatrix}{\mathsf{X}}&{\mathsf{Z}}_{1}^{\mathsf{T}}\\\
{\mathsf{Z}}_{1}&{\mathsf{Y}}_{1}\end{pmatrix},\dots,\begin{pmatrix}{\mathsf{X}}&{\mathsf{Z}}_{S}^{\mathsf{T}}\\\
{\mathsf{Z}}_{S}&{\mathsf{Y}}_{S}\end{pmatrix}\right]\colon\begin{pmatrix}{\mathsf{X}}&{\mathsf{Z}}_{i}^{\mathsf{T}}\\\
{\mathsf{Z}}_{i}&{\mathsf{Y}}_{i}\end{pmatrix}\in\SS^{n},\ i\in[1\\!:\\!S],\
{\mathsf{X}}\in\SS^{k}\right\\},$
i.e. the set of vectors of $S$ symmetric matrices of order $n$ connected by a component of order $k$, which we call the space of connected components. In order to distinguish elements of $\SS_{n}^{S,k}$ from ordinary matrices we use sans-serif letters enclosed in square brackets, for example $\left[{\mathsf{A}}\right]$. Note that $\SS_{n}^{S,k}$ is isomorphic to the space of arrowhead matrices by the isomorphism
$\displaystyle\Gamma\colon\SS^{S,k}_{n}\rightarrow\SS^{k+S(n-k)},\ \left[{\mathsf{A}}\right]\mapsto\Gamma\left(\left[{\mathsf{A}}\right]\right)\coloneqq\begin{pmatrix}{\mathsf{X}}&{\mathsf{Z}}_{1}^{\mathsf{T}}&\dots&{\mathsf{Z}}_{S}^{\mathsf{T}}\\ {\mathsf{Z}}_{1}&{\mathsf{Y}}_{1}&\dots&{\mathsf{O}}\\ \vdots&\vdots&\ddots&\vdots\\ {\mathsf{Z}}_{S}&{\mathsf{O}}&\dots&{\mathsf{Y}}_{S}\end{pmatrix},$
where for the inverse we have
$\displaystyle\Gamma^{-1}\begin{pmatrix}{\mathsf{X}}&{\mathsf{Z}}_{1}^{\mathsf{T}}&\dots&{\mathsf{Z}}_{S}^{\mathsf{T}}\\ {\mathsf{Z}}_{1}&{\mathsf{Y}}_{1}&\dots&{\mathsf{O}}\\ \vdots&\vdots&\ddots&\vdots\\ {\mathsf{Z}}_{S}&{\mathsf{O}}&\dots&{\mathsf{Y}}_{S}\end{pmatrix}=\left[\begin{pmatrix}{\mathsf{X}}&{\mathsf{Z}}_{1}^{\mathsf{T}}\\ {\mathsf{Z}}_{1}&{\mathsf{Y}}_{1}\end{pmatrix},\dots,\begin{pmatrix}{\mathsf{X}}&{\mathsf{Z}}_{S}^{\mathsf{T}}\\ {\mathsf{Z}}_{S}&{\mathsf{Y}}_{S}\end{pmatrix}\right]\in\SS^{S,k}_{n}.$
Thus, $\SS_{n}^{S,k}$ is a vector space with a natural inner product $\left[{\mathsf{A}}\right]\odot\left[{\mathsf{B}}\right]\coloneqq\Gamma\left(\left[{\mathsf{A}}\right]\right)\bullet\Gamma\left(\left[{\mathsf{B}}\right]\right)$, sum $\left[{\mathsf{A}}\right]\oplus\left[{\mathsf{B}}\right]\coloneqq\Gamma^{-1}\left(\Gamma\left(\left[{\mathsf{A}}\right]\right)+\Gamma\left(\left[{\mathsf{B}}\right]\right)\right)$ and scalar multiplication $\lambda\left[{\mathsf{A}}\right]\coloneqq\Gamma^{-1}\left(\lambda\Gamma\left(\left[{\mathsf{A}}\right]\right)\right)$.
For notational convenience we will expand the meaning of the inverse
$\Gamma^{-1}$ so that it is applicable to non-arrowhead matrices as well,
where the nonzero off-diagonal blocks are treated as though they were blocks
of zeros as in the definition above. Also, we define a second, analogous
isomorphism $\Gamma_{*}(\cdot)$ that maps into the space of partial matrices
where the blocks of zeros in the definition of $\Gamma(\cdot)$ are not
specified. We also will use the shorthand notation
$\displaystyle\left[\begin{pmatrix}{\mathsf{X}}&{\mathsf{Z}}_{1}^{\mathsf{T}}\\\
{\mathsf{Z}}_{1}&{\mathsf{Y}}_{1}\end{pmatrix},\dots,\begin{pmatrix}{\mathsf{X}}&{\mathsf{Z}}_{S}^{\mathsf{T}}\\\
{\mathsf{Z}}_{S}&{\mathsf{Y}}_{S}\end{pmatrix}\right]=\left[\begin{pmatrix}{\mathsf{X}}&{\mathsf{Z}}_{i}^{\mathsf{T}}\\\
{\mathsf{Z}}_{i}&{\mathsf{Y}}_{i}\end{pmatrix}\right]_{i\in[1\\!:\\!S]}.$
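As a concrete illustration of the isomorphism, here is a minimal numpy sketch (illustrative only; block sizes are hypothetical) of $\Gamma$ and of $\Gamma^{-1}$ restricted to the specified blocks:

```python
import numpy as np

def gamma(X, Zs, Ys):
    """Assemble the arrowhead matrix Gamma([A]) from the connecting block X,
    the border blocks Z_i and the diagonal blocks Y_i."""
    k = X.shape[0]
    m = sum(Y.shape[0] for Y in Ys)
    M = np.zeros((k + m, k + m))
    M[:k, :k] = X
    row = k
    for Z, Y in zip(Zs, Ys):
        r = Y.shape[0]
        M[row:row + r, :k] = Z
        M[:k, row:row + r] = Z.T
        M[row:row + r, row:row + r] = Y
        row += r
    return M

def gamma_inv(M, k, r, S):
    """Disassemble an arrowhead matrix of order k + S*r into the connected
    component [(X, Z_i, Y_i)]_i, ignoring any off-diagonal Y-blocks."""
    X = M[:k, :k]
    return [(X, M[k + i*r:k + (i+1)*r, :k], M[k + i*r:k + (i+1)*r, k + i*r:k + (i+1)*r])
            for i in range(S)]

# Round trip on random blocks (k = 2, r = 3, S = 4):
rng = np.random.default_rng(0)
X = rng.random((2, 2)); X = X + X.T
Zs = [rng.random((3, 2)) for _ in range(4)]
Ys = [np.eye(3) for _ in range(4)]
blocks = gamma_inv(gamma(X, Zs, Ys), k=2, r=3, S=4)
assert all(np.allclose(Zb, Z) for (_, Zb, _), Z in zip(blocks, Zs))
```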
The central object we are interested in is the following subset of
$\SS_{n}^{S,k}$:
$\displaystyle\mathcal{CMP}\left({\mathcal{K}}_{0},\dots,{\mathcal{K}}_{S}\right)$
$\displaystyle\coloneqq\mathrm{conv}\left\\{\left[\begin{pmatrix}\mathbf{x}\\\
\mathbf{y}_{i}\end{pmatrix}\begin{pmatrix}\mathbf{x}\\\
\mathbf{y}_{i}\end{pmatrix}^{\mathsf{T}}\right]_{i\in[1\\!:\\!S]}\colon\begin{pmatrix}\mathbf{x}\\\
\mathbf{y}_{i}\end{pmatrix}\in{\mathcal{K}}_{0}\times{\mathcal{K}}_{i},\
i\in[1\\!:\\!S]\right\\},$
where ${\mathcal{K}}_{0}\subseteq\mathbb{R}^{k},\ {\mathcal{K}}_{i}\subseteq\mathbb{R}^{n-k},\ i\in[1\!:\!S]$, are convex cones, which we refer to as ground cones. We often use $\mathcal{CMP}$ without its arguments as a colloquial term, in case the respective ground cones are not important to, or clear from, the context at hand. The same is true for all abbreviations of its inner and outer approximations that will be discussed later in the text.
We call $\mathcal{CMP}$ the cone of completable, completely positive, connected components, and we will justify that name in a later section. Further, we define $\mathrm{gen}\mathcal{CMP}$ to be the set of its generators, i.e. the set we obtain by omitting the $\operatorname{conv}$-operator in the definition of $\mathcal{CMP}$.
### 3.2 Main result: a new type of convex reformulation, with reduced
dimensionality
The derivation of our main result relies heavily on the very general framework from [11] for achieving convex reformulations of a large array of problems. In the following paragraphs we give a brief, simplified account of their results in order to make the derivation of our main result as transparent as possible. The two theorems we discuss shortly are specializations of theorems in [11], which we prove here for the reader's convenience. To distinguish this more abstract discussion from the rest of the paper, and to highlight the special role of the sets we are about to introduce, we diverge from the convention of denoting sets via calligraphic capital letters and use blackboard bold capital letters instead.
We start out by investigating a more general question. Let $\mathbb{V}$ be a vector space of dimension $n$, let $\mathbb{K}\subseteq\mathbb{V}$ be a (possibly nonconvex) cone, let ${\mathsf{Q}},{\mathsf{H}}_{0}\in\mathbb{V}$ be vectors, and let $\mathbb{J}\subseteq\mathrm{conv}(\mathbb{K})$ be a convex set. We want to know when we have the equality:
$\displaystyle\min_{{\mathsf{X}}\in\mathbb{V}}\left\\{\langle{\mathsf{Q}},{\mathsf{X}}\rangle\colon{\mathsf{X}}\in\mathbb{K}\cap\mathbb{J},\
\langle{\mathsf{H}}_{0},{\mathsf{X}}\rangle=1\right\\}=\min_{{\mathsf{X}}\in\mathbb{V}}\left\\{\langle{\mathsf{Q}},{\mathsf{X}}\rangle\colon{\mathsf{X}}\in\mathbb{J},\
\langle{\mathsf{H}}_{0},{\mathsf{X}}\rangle=1\right\\}?$
Defining
$\mathbb{H}\coloneqq\left\\{{\mathsf{X}}\colon\langle{\mathsf{H}}_{0},{\mathsf{X}}\rangle=1\right\\}$,
we can equivalently ask for conditions for the equality
$\displaystyle\mathrm{conv}(\mathbb{H}\cap\mathbb{K}\cap\mathbb{J})=\mathbb{H}\cap\mathbb{J}.$
The following theorem gives an answer based on convex geometry.
###### Theorem 6.
For $\mathbb{H},\mathbb{K},\mathbb{J}$ as above, assume that
$\mathbb{H}\cap\mathbb{J}\neq\emptyset$ is bounded and that $\mathbb{J}$ is a
face of $\mathrm{conv}(\mathbb{K})$. Then
$\mathrm{conv}(\mathbb{H}\cap\mathbb{K}\cap\mathbb{J})=\mathbb{H}\cap\mathbb{J}$.
###### Proof.
For the "$\subseteq$"-inclusion, since
$\mathbb{H}\cap\mathbb{K}\cap\mathbb{J}\subseteq\mathbb{H}\cap\mathbb{J}$ and
the latter set is convex, there is nothing left to show. For the converse, let
${\mathsf{X}}\in\mathbb{H}\cap\mathbb{J}$. Then
${\mathsf{X}}\in\mathrm{conv}(\mathbb{K})$ since
$\mathbb{J}\subseteq\mathrm{conv}(\mathbb{K})$, so that
${\mathsf{X}}=\sum_{i=1}^{n}{\mathsf{X}}_{i}$ with
${\mathsf{X}}_{i}\in\mathbb{K}\setminus\left\\{{\mathsf{O}}\right\\}$ but also
${\mathsf{X}}_{i}\in\mathbb{J}$ since $\mathbb{J}$ is a face of
$\mathrm{conv}(\mathbb{K})$ so that
${\mathsf{X}}_{i}\in\mathbb{K}\cap\mathbb{J}$. Now,
$\langle{\mathsf{H}}_{0},{\mathsf{X}}_{i}\rangle>0$ since
$\mathbb{H}\cap\mathbb{J}\neq\emptyset$ is bounded. Define
$\lambda_{i}=\langle{\mathsf{H}}_{0},{\mathsf{X}}_{i}\rangle.$ We have
$\langle{\mathsf{H}}_{0},{\mathsf{X}}\rangle=\sum_{i}\langle{\mathsf{H}}_{0},{\mathsf{X}}_{i}\rangle=\sum_{i}\lambda_{i}=1$
and
$\lambda_{i}^{-1}{\mathsf{X}}_{i}\eqqcolon\bar{\mathsf{X}}_{i}\in\mathbb{K}\cap\mathbb{J}$
and thus
${\mathsf{X}}=\sum_{i}\lambda_{i}\bar{\mathsf{X}}_{i}\in\mathrm{conv}(\mathbb{H}\cap\mathbb{K}\cap\mathbb{J})$.
∎
This theorem motivates the search for a condition that lets us identify faces
of convex cones, which are provided in the following theorem.
###### Theorem 7.
Assume that
$\mathbb{J}=\left\\{{\mathsf{X}}\in\mathrm{conv}(\mathbb{K})\colon\langle{\mathsf{A}}_{i},{\mathsf{X}}\rangle=0,\
i\in[1\\!:\\!m]\right\\}$ and define
$\mathbb{J}_{p}\coloneqq\left\\{{\mathsf{X}}\in\mathrm{conv}(\mathbb{K})\colon\langle{\mathsf{A}}_{i},{\mathsf{X}}\rangle=0,\
i\in[1\\!:\\!p]\right\\}$ so that $\mathbb{J}_{m}=\mathbb{J}$ and
$\mathbb{J}_{0}=\mathrm{conv}(\mathbb{K})$. If ${\mathsf{A}}_{p}\in\mathbb{J}_{p-1}^{*},\ p\in[1\!:\!m]$, then $\mathbb{J}$ is a face of $\mathrm{conv}(\mathbb{K})$.
###### Proof.
Since a face of a face of a convex set is itself a face of that set, the claim follows by induction if we can show that
${\mathsf{A}}_{p}\in\mathbb{J}_{p-1}^{*}\implies\mathbb{J}_{p}\mbox{ is a face of }\mathbb{J}_{p-1}.$
So let $\mathbb{J}_{p}\ni{\mathsf{X}}={\mathsf{X}}_{1}+{\mathsf{X}}_{2}$ with ${\mathsf{X}}_{i}\in\mathbb{J}_{p-1},\ i\in\left\{1,2\right\}$. We have $\langle{\mathsf{A}}_{p},{\mathsf{X}}_{i}\rangle\geq 0$ since ${\mathsf{A}}_{p}\in\mathbb{J}_{p-1}^{*}$, so that $0=\langle{\mathsf{A}}_{p},{\mathsf{X}}\rangle=\langle{\mathsf{A}}_{p},{\mathsf{X}}_{1}\rangle+\langle{\mathsf{A}}_{p},{\mathsf{X}}_{2}\rangle$ implies that actually $\langle{\mathsf{A}}_{p},{\mathsf{X}}_{i}\rangle=0$, and we indeed have ${\mathsf{X}}_{i}\in\mathbb{J}_{p},\ i\in\left\{1,2\right\}$.
∎
Based on the above theorems, it is quite straightforward to prove the classical result from [4], at least for the case where the linear portion of the feasible set is bounded, with $\mathbb{K}=\mathrm{ext}\,\mathcal{CPP}(\mathbb{R}_{+}\times{\mathcal{K}})$ and $\mathbb{J}$ equal to the feasible set of the conic reformulation (we omit laying out the details here, but the steps required are equivalent to the ones laid out in the proof of Theorem 8). A natural question is whether we can execute a similar strategy for proving the exactness of a conic reformulation of reduced dimension, by replacing the cone of extreme rays of $\mathcal{CPP}({\mathcal{K}})$ with another appropriately structured object as our choice for $\mathbb{K}$.
In the following theorem we show that, by choosing $\mathbb{K}=\mathrm{gen}\mathcal{CMP}\left(\left(\mathbb{R}_{+}\times{\mathcal{K}}_{0}\right),{\mathcal{K}}_{1},\dots,{\mathcal{K}}_{S}\right)$ and $\mathbb{J}$ and $\mathbb{H}$ appropriately, we can use Theorem 6 in order to obtain an exact conic reformulation of (1).
###### Theorem 8.
Considering (1), assume
${\mathcal{F}}_{i}~{}\coloneqq~{}\left\\{\left(\mathbf{x}^{\mathsf{T}},\mathbf{y}_{i}^{\mathsf{T}}\right)\in{\mathcal{K}}_{0}\times{\mathcal{K}}_{i}\colon{\mathsf{F}}_{i}\mathbf{x}+{\mathsf{G}}_{i}\mathbf{y}_{i}={\mathbf{r}}_{i}\right\\}$
are nonempty bounded sets. Further, assume that
$\displaystyle\begin{pmatrix}\mathbf{x},\mathbf{y}_{i}\end{pmatrix}\in{\mathcal{F}}_{i},\
i\in[1\\!:\\!S]\implies
Q_{j}(\mathbf{x},\mathbf{y}_{1},\dots,\mathbf{y}_{S})\geq 0,\
j\in[1\\!:\\!K].$ (7)
Then (1) is equivalent to the following conic optimization problem:
$\displaystyle\begin{split}\min_{[{\mathsf{X}}]\in\SS_{n_{1}+n_{2}+1}^{S,n_{1}+1}}[{\mathsf{C}}]\odot[{\mathsf{X}}]&\\\
\mathrm{s.t.:}\ [{\mathsf{H}}_{0}]\odot[{\mathsf{X}}]&=1,\\\
[{\mathsf{F}}_{i}]\odot[{\mathsf{X}}]&=0,\ i\in[1\\!:\\!S],\\\
\hat{Q}_{j}\left([{\mathsf{X}}]\right)&=0,\ j\in[1\\!:\\!K],\\\
[{\mathsf{X}}]&\in\mathcal{CMP}\left(\left(\mathbb{R}_{+}\times{\mathcal{K}}_{0}\right),{\mathcal{K}}_{1},\dots,{\mathcal{K}}_{S}\right),\end{split}$
(8)
where
$[{\mathsf{C}}],[{\mathsf{H}}_{0}],[{\mathsf{F}}_{i}]\in\SS_{n_{1}+n_{2}+1}^{S,n_{1}+1},\
i\in[1\\!:\\!S]$ are defined as
$\displaystyle[{\mathsf{C}}]\coloneqq\Gamma^{-1}\begin{pmatrix}0&\tfrac{1}{2}\mathbf{a}^{\mathsf{T}}&\tfrac{1}{2}\mathbf{c}_{1}^{\mathsf{T}}&\dots&\tfrac{1}{2}\mathbf{c}_{S}^{\mathsf{T}}\\ \tfrac{1}{2}\mathbf{a}&{\mathsf{A}}&\tfrac{1}{2}{\mathsf{B}}_{1}&\dots&\tfrac{1}{2}{\mathsf{B}}_{S}\\ \tfrac{1}{2}\mathbf{c}_{1}&\tfrac{1}{2}{\mathsf{B}}_{1}^{\mathsf{T}}&{\mathsf{C}}_{1}&\dots&{\mathsf{O}}\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ \tfrac{1}{2}\mathbf{c}_{S}&\tfrac{1}{2}{\mathsf{B}}_{S}^{\mathsf{T}}&{\mathsf{O}}&\dots&{\mathsf{C}}_{S}\end{pmatrix},\quad[{\mathsf{H}}_{0}]=\Gamma^{-1}\left(\mathbf{e}_{1}\mathbf{e}_{1}^{\mathsf{T}}\right),$
$\displaystyle[{\mathsf{F}}_{i}]\coloneqq\Gamma^{-1}\left(\left(-{\mathbf{r}}_{i},{\mathsf{F}}_{i},{\mathsf{O}},\dots,{\mathsf{G}}_{i},\dots,{\mathsf{O}}\right)^{\mathsf{T}}\left(-{\mathbf{r}}_{i},{\mathsf{F}}_{i},{\mathsf{O}},\dots,{\mathsf{G}}_{i},\dots,{\mathsf{O}}\right)\right),$
and $\hat{Q}_{j}(\cdot)\colon\SS^{S,n_{1}+1}_{n_{1}+n_{2}+1}\rightarrow\mathbb{R}$ are linear functions such that
$\displaystyle\hat{Q}_{j}\left(\Gamma^{-1}\left(\begin{pmatrix}x_{0}\\ \mathbf{x}\\ \mathbf{y}_{1}\\ \vdots\\ \mathbf{y}_{S}\end{pmatrix}\begin{pmatrix}x_{0}\\ \mathbf{x}\\ \mathbf{y}_{1}\\ \vdots\\ \mathbf{y}_{S}\end{pmatrix}^{\mathsf{T}}\right)\right)=Q_{j}(\mathbf{x},\mathbf{y}_{1},\dots,\mathbf{y}_{S}),\ j\in[1\!:\!K].$
###### Proof.
Consider the following equivalences:
$\displaystyle{\mathsf{F}}_{i}\mathbf{x}+{\mathsf{G}}_{i}\mathbf{y}_{i}={\mathbf{r}}_{i},\ \mathbf{y}_{i}\in{\mathcal{K}}_{i},\ i\in[1\!:\!S],\ \mathbf{x}\in{\mathcal{K}}_{0},\qquad Q_{j}(\mathbf{x},\mathbf{y}_{1},\dots,\mathbf{y}_{S})=0,\ j\in[1\!:\!K],$
$\displaystyle\Updownarrow$
$\displaystyle\begin{Vmatrix}\left(-{\mathbf{r}}_{i},{\mathsf{F}}_{i},{\mathsf{O}},\dots,{\mathsf{G}}_{i},\dots,{\mathsf{O}}\right)\begin{pmatrix}x_{0}\\ \mathbf{x}\\ \mathbf{y}_{1}\\ \vdots\\ \mathbf{y}_{S}\end{pmatrix}\end{Vmatrix}^{2}=0,\ \mathbf{y}_{i}\in{\mathcal{K}}_{i},\ i\in[1\!:\!S],\ \mathbf{x}\in{\mathcal{K}}_{0},\ x_{0}\geq 0,\ x_{0}^{2}=1,\qquad Q_{j}(\mathbf{x},\mathbf{y}_{1},\dots,\mathbf{y}_{S})=0,\ j\in[1\!:\!K],$
$\displaystyle\Updownarrow$
$\displaystyle[{\mathsf{X}}]=\Gamma^{-1}\left(\begin{pmatrix}x_{0}\\ \mathbf{x}\\ \mathbf{y}_{1}\\ \vdots\\ \mathbf{y}_{S}\end{pmatrix}\begin{pmatrix}x_{0}\\ \mathbf{x}\\ \mathbf{y}_{1}\\ \vdots\\ \mathbf{y}_{S}\end{pmatrix}^{\mathsf{T}}\right),\ [{\mathsf{F}}_{i}]\odot[{\mathsf{X}}]=0,\ \mathbf{y}_{i}\in{\mathcal{K}}_{i},\ i\in[1\!:\!S],\ \mathbf{x}\in{\mathcal{K}}_{0},\ x_{0}\geq 0,\ x_{0}^{2}=1,\qquad\hat{Q}_{j}([{\mathsf{X}}])=0,\ j\in[1\!:\!K],$
$\displaystyle\Updownarrow$
$\displaystyle[{\mathsf{H}}_{0}]\odot[{\mathsf{X}}]=1,\qquad[{\mathsf{F}}_{i}]\odot[{\mathsf{X}}]=0,\ i\in[1\!:\!S],\qquad\hat{Q}_{j}([{\mathsf{X}}])=0,\ j\in[1\!:\!K],\qquad[{\mathsf{X}}]\in\mathrm{gen}\mathcal{CMP}\left(\left(\mathbb{R}_{+}\times{\mathcal{K}}_{0}\right),{\mathcal{K}}_{1},\dots,{\mathcal{K}}_{S}\right).$
Invoking Theorem 6, we specify
$\displaystyle\mathbb{K}=\mathrm{gen}\mathcal{CMP}\left(\left(\mathbb{R}_{+}\times{\mathcal{K}}_{0}\right),{\mathcal{K}}_{1},\dots,{\mathcal{K}}_{S}\right),\qquad\mathbb{H}=\left\{[{\mathsf{X}}]\in\SS_{n_{1}+n_{2}+1}^{S,n_{1}+1}\colon[{\mathsf{H}}_{0}]\odot[{\mathsf{X}}]=1\right\},$
and we need to show that
$\displaystyle\mathbb{J}=\left\{[{\mathsf{X}}]\in\mathcal{CMP}\left(\left(\mathbb{R}_{+}\times{\mathcal{K}}_{0}\right),{\mathcal{K}}_{1},\dots,{\mathcal{K}}_{S}\right)\colon\begin{array}{l}[{\mathsf{F}}_{i}]\odot[{\mathsf{X}}]=0,\ i\in[1\!:\!S],\\ \hat{Q}_{j}\left([{\mathsf{X}}]\right)=0,\ j\in[1\!:\!K]\end{array}\right\}$
is a face of $\mathcal{CMP}\left(\left(\mathbb{R}_{+}\times{\mathcal{K}}_{0}\right),{\mathcal{K}}_{1},\dots,{\mathcal{K}}_{S}\right)$. By Theorem 7, this will follow if we can show that $[{\mathsf{F}}_{i}]\odot[{\mathsf{X}}]\geq 0$ for all $[{\mathsf{X}}]\in\mathcal{CMP}\left(\left(\mathbb{R}_{+}\times{\mathcal{K}}_{0}\right),{\mathcal{K}}_{1},\dots,{\mathcal{K}}_{S}\right),\ i\in[1\!:\!S]$, and that $\hat{Q}_{j}([{\mathsf{X}}])\geq 0,\ j\in[1\!:\!K]$, whenever $[{\mathsf{X}}]$ fulfills the homogeneous and conic constraints in the description of the feasible set of the conic optimization problem. We first show that the statement of the theorem would hold if the quadratic constraints were omitted. Indeed, for any of the $[{\mathsf{F}}_{i}]$ and any $[{\mathsf{X}}]\in\mathrm{gen}\mathcal{CMP}\left(\left(\mathbb{R}_{+}\times{\mathcal{K}}_{0}\right),{\mathcal{K}}_{1},\dots,{\mathcal{K}}_{S}\right)$ we have
$\displaystyle\begin{split}[{\mathsf{F}}_{i}]\odot[{\mathsf{X}}]=&\left(\left(-{\mathbf{r}}_{i},{\mathsf{F}}_{i},\dots,{\mathsf{G}}_{i},\dots\right)^{\mathsf{T}}\left(-{\mathbf{r}}_{i},{\mathsf{F}}_{i},\dots,{\mathsf{G}}_{i},\dots\right)\right)\bullet\begin{pmatrix}\mathbf{x}\mathbf{x}^{\mathsf{T}}&\mathbf{x}\mathbf{y}_{1}^{\mathsf{T}}&\dots&\mathbf{x}\mathbf{y}_{S}^{\mathsf{T}}\\\
\mathbf{y}_{1}\mathbf{x}^{\mathsf{T}}&\mathbf{y}_{1}\mathbf{y}_{1}^{\mathsf{T}}&\dots&{\mathsf{O}}\\\
\vdots&\vdots&\ddots&\vdots\\\
\mathbf{y}_{S}\mathbf{x}^{\mathsf{T}}&{\mathsf{O}}&\dots&\mathbf{y}_{S}\mathbf{y}_{S}^{\mathsf{T}}\end{pmatrix}\\\
=&\begin{pmatrix}-{\mathbf{r}}_{i},{\mathsf{F}}_{i},{\mathsf{G}}_{i}\end{pmatrix}^{\mathsf{T}}\begin{pmatrix}-{\mathbf{r}}_{i},{\mathsf{F}}_{i},{\mathsf{G}}_{i}\end{pmatrix}\bullet\begin{pmatrix}\mathbf{x}\mathbf{x}^{\mathsf{T}}&\mathbf{x}\mathbf{y}_{i}^{\mathsf{T}}\\\
\mathbf{y}_{i}\mathbf{x}^{\mathsf{T}}&\mathbf{y}_{i}\mathbf{y}_{i}^{\mathsf{T}}\end{pmatrix}\\\
=&\begin{Vmatrix}\begin{pmatrix}-{\mathbf{r}}_{i},{\mathsf{F}}_{i},{\mathsf{G}}_{i}\end{pmatrix}\begin{pmatrix}\mathbf{x}\\\
\mathbf{y}_{i}\end{pmatrix}\end{Vmatrix}^{2}\geq 0.\end{split}$
To complete the first part of the argument we need to show that the feasible set is bounded. To this end we consider its recession cone, which by Corollary 8.3.3 in Rockafellar's Convex Analysis is given by
$\displaystyle 0^{+}{\mathcal{F}}\coloneqq\left\{[{\mathsf{X}}]\in\mathcal{CMP}\left(\left(\mathbb{R}_{+}\times{\mathcal{K}}_{0}\right),{\mathcal{K}}_{1},\dots,{\mathcal{K}}_{S}\right)\colon[{\mathsf{H}}_{0}]\odot[{\mathsf{X}}]=0,\ [{\mathsf{F}}_{i}]\odot[{\mathsf{X}}]=0,\ i\in[1\!:\!S]\right\}.$
Take an arbitrary $\left[{\mathsf{X}}\right]\in 0^{+}{\mathcal{F}}$ and write it as a convex-conic combination of generators with weights $\lambda_{l}>0,\ l\in[1\!:\!k]$; then
$\displaystyle\left[{\mathsf{F}}_{i}\right]\odot\left[{\mathsf{X}}\right]=\sum_{l=1}^{k}\lambda_{l}\begin{Vmatrix}\begin{pmatrix}-{\mathbf{r}}_{i},{\mathsf{F}}_{i},{\mathsf{G}}_{i}\end{pmatrix}\begin{pmatrix}x^{0}_{l}\\ \mathbf{x}_{l}\\ \mathbf{y}^{i}_{l}\end{pmatrix}\end{Vmatrix}^{2}=0,\ i\in[1\!:\!S],\mbox{ and }$
$\displaystyle\left[{\mathsf{H}}_{0}\right]\odot\left[{\mathsf{X}}\right]=\sum_{l=1}^{k}\lambda_{l}\left(x^{0}_{l}\right)^{2}=0,\mbox{ implying that }x^{0}_{l}=0,\ l\in[1\!:\!k].$
Thus, for any $i\in[1\!:\!S]$ and $l\in[1\!:\!k]$ we have ${\mathsf{F}}_{i}\mathbf{x}_{l}+{\mathsf{G}}_{i}\mathbf{y}^{i}_{l}=\mathbf{o}$ and $\left(0,\mathbf{x}_{l},\mathbf{y}^{i}_{l}\right)\in\mathbb{R}_{+}\times{\mathcal{K}}_{0}\times{\mathcal{K}}_{i}$, so that we have an element of the recession cone of ${\mathcal{F}}_{i}$, which contains only the origin by the boundedness assumption, so that $\left[{\mathsf{X}}\right]=\left[{\mathsf{O}}\right]$. So far our arguments imply that
$\displaystyle\hat{\mathbb{J}}\coloneqq\left\{[{\mathsf{X}}]\in\mathcal{CMP}\left(\left(\mathbb{R}_{+}\times{\mathcal{K}}_{0}\right),{\mathcal{K}}_{1},\dots,{\mathcal{K}}_{S}\right)\colon[{\mathsf{F}}_{i}]\odot[{\mathsf{X}}]=0,\ i\in[1\!:\!S]\right\}$
is a face of $\mathrm{conv}(\mathbb{K})=\mathcal{CMP}\left(\left(\mathbb{R}_{+}\times{\mathcal{K}}_{0}\right),{\mathcal{K}}_{1},\dots,{\mathcal{K}}_{S}\right)$; hence its extreme rays are generated by elements of $\mathbb{K}$, that is, of $\mathrm{gen}\mathcal{CMP}\left(\left(\mathbb{R}_{+}\times{\mathcal{K}}_{0}\right),{\mathcal{K}}_{1},\dots,{\mathcal{K}}_{S}\right)$. But then (7) implies that $\hat{Q}_{j}([{\mathsf{X}}])\geq 0,\ j\in[1\!:\!K]$, whenever $[{\mathsf{X}}]\in\hat{\mathbb{J}}$, so that by Theorem 7 the set $\mathbb{J}$ is a face of $\mathrm{conv}(\mathbb{K})$ and our theorem follows from Theorem 6. ∎
While the above representation of the conic problem is convenient for the application of Theorems 6 and 7 and for the statement of the proof, we can use [5, Proposition 3] in order to present it in a more familiar form:
$\displaystyle\begin{split}\min_{{\mathsf{X}},{\mathsf{Y}}_{i},{\mathsf{Z}}_{i},\mathbf{x},\mathbf{y}_{i}}\ {\mathsf{A}}\bullet{\mathsf{X}}+\mathbf{a}^{\mathsf{T}}\mathbf{x}&+\sum_{i=1}^{S}\left({\mathsf{B}}_{i}\bullet{\mathsf{Z}}_{i}+{\mathsf{C}}_{i}\bullet{\mathsf{Y}}_{i}+\mathbf{c}_{i}^{\mathsf{T}}\mathbf{y}_{i}\right)\\
\mathrm{s.t.:}\ {\mathsf{F}}_{i}\mathbf{x}+{\mathsf{G}}_{i}\mathbf{y}_{i}&={\mathbf{r}}_{i},\quad i\in[1\!:\!S],\\
\operatorname{diag}\left(\begin{pmatrix}{\mathsf{F}}_{i}&{\mathsf{G}}_{i}\end{pmatrix}\begin{pmatrix}{\mathsf{X}}&{\mathsf{Z}}_{i}^{\mathsf{T}}\\ {\mathsf{Z}}_{i}&{\mathsf{Y}}_{i}\end{pmatrix}\begin{pmatrix}{\mathsf{F}}_{i}^{\mathsf{T}}\\ {\mathsf{G}}_{i}^{\mathsf{T}}\end{pmatrix}\right)&={\mathbf{r}}_{i}\circ{\mathbf{r}}_{i},\quad i\in[1\!:\!S],\\
\hat{Q}_{j}(\mathbf{x},{\mathsf{X}},\mathbf{y}_{1},{\mathsf{Z}}_{1},{\mathsf{Y}}_{1},\dots,\mathbf{y}_{S},{\mathsf{Z}}_{S},{\mathsf{Y}}_{S})&=0,\quad j\in[1\!:\!K],\\
\left[\begin{pmatrix}1&\mathbf{x}^{\mathsf{T}}&\mathbf{y}_{i}^{\mathsf{T}}\\ \mathbf{x}&{\mathsf{X}}&{\mathsf{Z}}_{i}^{\mathsf{T}}\\ \mathbf{y}_{i}&{\mathsf{Z}}_{i}&{\mathsf{Y}}_{i}\end{pmatrix}\right]_{i\in[1\!:\!S]}&\in\mathcal{CMP}\left(\left(\mathbb{R}_{+}\times{\mathcal{K}}_{0}\right),{\mathcal{K}}_{1},\dots,{\mathcal{K}}_{S}\right).\end{split}$
(9)
Before discussing this new type of conic reformulation, we want to point out that there is another way to prove Theorem 8. First, we make the following observation:
###### Theorem 9.
The partial matrix
$\displaystyle{\mathsf{M}}_{*}\coloneqq\begin{pmatrix}{\mathsf{X}}&{\mathsf{Z}}_{1}^{\mathsf{T}}&\dots&{\mathsf{Z}}_{S}^{\mathsf{T}}\\ {\mathsf{Z}}_{1}&{\mathsf{Y}}_{1}&\dots&*\\ \vdots&\vdots&\ddots&\vdots\\ {\mathsf{Z}}_{S}&*&\dots&{\mathsf{Y}}_{S}\end{pmatrix}$
is completable to a matrix in $\mathcal{CPP}({\mathcal{K}}_{0}\times_{i=1}^{S}{\mathcal{K}}_{i})$ if and only if there are decompositions
$\displaystyle\begin{pmatrix}{\mathsf{X}}&{\mathsf{Z}}_{i}^{\mathsf{T}}\\ {\mathsf{Z}}_{i}&{\mathsf{Y}}_{i}\end{pmatrix}=\begin{pmatrix}\bar{{\mathsf{X}}}\bar{{\mathsf{X}}}^{\mathsf{T}}&\bar{{\mathsf{X}}}\bar{{\mathsf{Y}}}_{i}^{\mathsf{T}}\\ \bar{{\mathsf{Y}}}_{i}\bar{{\mathsf{X}}}^{\mathsf{T}}&\bar{{\mathsf{Y}}}_{i}\bar{{\mathsf{Y}}}_{i}^{\mathsf{T}}\end{pmatrix},\mbox{ with }\begin{pmatrix}\bar{{\mathsf{X}}}\\ \bar{{\mathsf{Y}}}_{i}\end{pmatrix}\in\left({\mathcal{K}}_{0}\times{\mathcal{K}}_{i}\right)^{r},\ i\in[1\!:\!S],\ r\in\mathbb{N},$
hence, if and only if
$\displaystyle\left[\begin{pmatrix}{\mathsf{X}}&{\mathsf{Z}}_{i}^{\mathsf{T}}\\ {\mathsf{Z}}_{i}&{\mathsf{Y}}_{i}\end{pmatrix}\right]_{i\in[1\!:\!S]}\in\mathcal{CMP}\left({\mathcal{K}}_{0},{\mathcal{K}}_{1},\dots,{\mathcal{K}}_{S}\right).$
###### Proof.
Given said decompositions, we can form the matrix
$\displaystyle\begin{pmatrix}\bar{{\mathsf{X}}}\\ \bar{{\mathsf{Y}}}_{1}\\ \vdots\\ \bar{{\mathsf{Y}}}_{S}\end{pmatrix}\begin{pmatrix}\bar{{\mathsf{X}}}\\ \bar{{\mathsf{Y}}}_{1}\\ \vdots\\ \bar{{\mathsf{Y}}}_{S}\end{pmatrix}^{\mathsf{T}}=\begin{pmatrix}\bar{{\mathsf{X}}}\bar{{\mathsf{X}}}^{\mathsf{T}}&\bar{{\mathsf{X}}}\bar{{\mathsf{Y}}}_{1}^{\mathsf{T}}&\dots&\bar{{\mathsf{X}}}\bar{{\mathsf{Y}}}_{S}^{\mathsf{T}}\\ \bar{{\mathsf{Y}}}_{1}\bar{{\mathsf{X}}}^{\mathsf{T}}&\bar{{\mathsf{Y}}}_{1}\bar{{\mathsf{Y}}}_{1}^{\mathsf{T}}&\dots&\bar{{\mathsf{Y}}}_{1}\bar{{\mathsf{Y}}}_{S}^{\mathsf{T}}\\ \vdots&\vdots&\ddots&\vdots\\ \bar{{\mathsf{Y}}}_{S}\bar{{\mathsf{X}}}^{\mathsf{T}}&\bar{{\mathsf{Y}}}_{S}\bar{{\mathsf{Y}}}_{1}^{\mathsf{T}}&\dots&\bar{{\mathsf{Y}}}_{S}\bar{{\mathsf{Y}}}_{S}^{\mathsf{T}}\end{pmatrix}\in\mathcal{CPP}(\times_{i=0}^{S}{\mathcal{K}}_{i}),$
which is the desired completion of ${\mathsf{M}}_{*}$. Conversely, if ${\mathsf{M}}_{*}$ has a completion ${\mathsf{M}}\in\mathcal{CPP}(\times_{i=0}^{S}{\mathcal{K}}_{i})$, then by definition of the latter cone we have
$\displaystyle{\mathsf{M}}=\begin{pmatrix}\bar{{\mathsf{X}}}\bar{{\mathsf{X}}}^{\mathsf{T}}&\bar{{\mathsf{X}}}\bar{{\mathsf{Y}}}_{1}^{\mathsf{T}}&\dots&\bar{{\mathsf{X}}}\bar{{\mathsf{Y}}}_{S}^{\mathsf{T}}\\ \bar{{\mathsf{Y}}}_{1}\bar{{\mathsf{X}}}^{\mathsf{T}}&\bar{{\mathsf{Y}}}_{1}\bar{{\mathsf{Y}}}_{1}^{\mathsf{T}}&\dots&\bar{{\mathsf{Y}}}_{1}\bar{{\mathsf{Y}}}_{S}^{\mathsf{T}}\\ \vdots&\vdots&\ddots&\vdots\\ \bar{{\mathsf{Y}}}_{S}\bar{{\mathsf{X}}}^{\mathsf{T}}&\bar{{\mathsf{Y}}}_{S}\bar{{\mathsf{Y}}}_{1}^{\mathsf{T}}&\dots&\bar{{\mathsf{Y}}}_{S}\bar{{\mathsf{Y}}}_{S}^{\mathsf{T}}\end{pmatrix},$
so that
$\displaystyle\begin{pmatrix}{\mathsf{X}}&{\mathsf{Z}}_{i}^{\mathsf{T}}\\ {\mathsf{Z}}_{i}&{\mathsf{Y}}_{i}\end{pmatrix}=\begin{pmatrix}\bar{{\mathsf{X}}}\bar{{\mathsf{X}}}^{\mathsf{T}}&\bar{{\mathsf{X}}}\bar{{\mathsf{Y}}}_{i}^{\mathsf{T}}\\ \bar{{\mathsf{Y}}}_{i}\bar{{\mathsf{X}}}^{\mathsf{T}}&\bar{{\mathsf{Y}}}_{i}\bar{{\mathsf{Y}}}_{i}^{\mathsf{T}}\end{pmatrix},\mbox{ with }\begin{pmatrix}\bar{{\mathsf{X}}}\\ \bar{{\mathsf{Y}}}_{i}\end{pmatrix}\in\left({\mathcal{K}}_{0}\times{\mathcal{K}}_{i}\right)^{r},\ i\in[1\!:\!S],\ r\in\mathbb{N}.$
∎
###### Remark 3.
The theorem is easily derived, but it highlights the key difficulty in the construction of a completion of the arrowhead arrangement of a set of matrix blocks connected by a common submatrix ${\mathsf{X}}$. If all of the blocks have representations as convex-conic combinations (i.e. nonnegative linear combinations) where the parts of the representations that form the connecting ${\mathsf{X}}$-component are identical for all blocks, obtaining the completion is simply a matter of concatenating the individual factors of the decompositions. However, there is no guarantee that decompositions that are coordinated in this manner exist.
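The mechanics of the concatenation argument can be illustrated in a few lines of numpy (illustrative only; the ground cones are taken to be nonnegative orthants, so nonnegative factors certify complete positivity, and all sizes are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
k, n2, S, r = 2, 3, 4, 5   # X-block order, Y-block order, # scenarios, # factors

# Coordinated nonnegative factorizations: the SAME Xbar appears in every block.
Xbar = rng.random((k, r))
Ybar = [rng.random((n2, r)) for _ in range(S)]

# Concatenating the factors yields the completion from Theorem 9 ...
F = np.vstack([Xbar] + Ybar)
M = F @ F.T   # completely positive by construction (nonnegative factors)

# ... whose specified arrowhead blocks are exactly the given ones:
for i, Yb in enumerate(Ybar):
    rows = slice(k + i * n2, k + (i + 1) * n2)
    assert np.allclose(M[:k, :k], Xbar @ Xbar.T)    # X block
    assert np.allclose(M[rows, :k], Yb @ Xbar.T)    # Z_i block
    assert np.allclose(M[rows, rows], Yb @ Yb.T)    # Y_i block
print("completion consistent with the specified blocks")
```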
Now, it is clear that the following optimization problem is equivalent to (2):
$\displaystyle\begin{split}\min_{{\mathsf{X}},{\mathsf{Y}}_{i},{\mathsf{Z}}_{i},\mathbf{x},\mathbf{y}_{i}}\ &{\mathsf{A}}\bullet{\mathsf{X}}+\mathbf{a}^{\mathsf{T}}\mathbf{x}+\sum_{i=1}^{S}\left({\mathsf{B}}_{i}\bullet{\mathsf{Z}}_{i}+{\mathsf{C}}_{i}\bullet{\mathsf{Y}}_{i,i}+\mathbf{c}_{i}^{\mathsf{T}}\mathbf{y}_{i}\right)\\
\mathrm{s.t.:}\ &\mbox{the linear constraints of (2) hold and}\\
&\begin{pmatrix}1&\mathbf{x}^{\mathsf{T}}&\mathbf{y}_{1}^{\mathsf{T}}&\dots&\mathbf{y}_{S}^{\mathsf{T}}\\ \mathbf{x}&{\mathsf{X}}&{\mathsf{Z}}_{1}^{\mathsf{T}}&\dots&{\mathsf{Z}}_{S}^{\mathsf{T}}\\ \mathbf{y}_{1}&{\mathsf{Z}}_{1}&{\mathsf{Y}}_{1,1}&\dots&*\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ \mathbf{y}_{S}&{\mathsf{Z}}_{S}&*&\dots&{\mathsf{Y}}_{S,S}\end{pmatrix}\mbox{ can be completed to a matrix in }\mathcal{CPP}(\mathbb{R}_{+}\times_{i=0}^{S}{\mathcal{K}}_{i}),\end{split}$
(10)
but the latter constraint holds whenever the conic constraint in (9) holds.
Thus, we can close the relaxation gap between (9) and (1) by appealing to Burer's reformulation and Theorem 9. However, we believe it is valuable to have a direct proof that is solely based on the geometry of $\mathcal{CMP}$ and does not explicitly reference matrix completion. Firstly, we avoid referencing something abstract, namely completability, by invoking something relatively concrete, i.e. the geometry of the respective convex cone. Secondly, the proof shows that the homogenized feasible set of (9) is a face of the respective instance of $\mathcal{CMP}$, which may be a useful insight for future investigations of this object. Finally, the proof is a somewhat unexpected application of the theory laid out in [11], which may inspire similar approaches to convex reformulations where a desired property, in our case completability, is inscribed in the structure of the cone $\mathbb{K}$.
To summarize, the reformulation we obtained is similar to the one obtainable from [4] in that it is a linear-conic optimization problem over an appropriately structured convex cone. The advantage of our reformulation is that the number of variables is $S(n_{1}+n_{2})(n_{1}+n_{2}+1)/2$, while for the traditional approach this number would be $(n_{1}+Sn_{2})(n_{1}+Sn_{2}+1)/2$, which is larger if $S$ is big enough. However, similarly to $\mathcal{CPP}$, we cannot directly optimize over $\mathcal{CMP}$, since no workable description of this novel object is known yet. We therefore propose the following strategy.
### 3.3 A new strategy for sparse conic reformulations
As stated before, optimizing over $\mathcal{CMP}$ necessitates the application of appropriate inner and outer approximations of that cone. On the one hand, we look for necessary conditions that $[{\mathsf{M}}]\in\SS^{S,k}_{n}$ has to meet lest completing $\Gamma_{*}\left([{\mathsf{M}}]\right)$ to a matrix in the respective $\mathcal{CPP}$-cone be impossible, and we denote the subset of connected components that meet these conditions by ${\mathcal{C}}_{nes}\supseteq\mathcal{CMP}$. On the other hand, we look for subsets ${\mathcal{C}}_{suf}\subseteq\mathcal{CMP}$; in other words, we look for sufficient conditions on a connected component $[{\mathsf{M}}]\in\SS^{S,k}_{n}$ so that $\Gamma_{*}\left([{\mathsf{M}}]\right)$ is in fact completable.
As we will show in the next section such necessary and/or sufficient
conditions can be formulated in terms of $\mathcal{CPP}$ constraints. Such
constraints are again intractable in general so we need an additional step in
order to take advantage of these approximations. Set-compeltely positive
matrix cones are very well studied objects and strong inner and out
approximations feature prominently in the existing literature. These
approximations can thus be used to find tractable approximations of
${\mathcal{C}}_{suf}$ and ${\mathcal{C}}_{nes}$. More precisely, whenever we
describe ${\mathcal{C}}_{nes}$ via set-completely postive constraints, we can
loosen these constraints via tractable outer approximation of $\mathcal{CPP}$
as to obtain a new set ${\mathcal{C}}_{outer}\supseteq{\mathcal{C}}_{nes}$.
Conversely, replacing $\mathcal{CPP}$ in the description ${\mathcal{C}}_{suf}$
with a tractable inner approximation we obtain an inner approximation
${\mathcal{C}}_{inner}\subseteq{\mathcal{C}}_{suf}$. In total we get:
${\mathcal{C}}_{inner}\ \subseteq\ {\mathcal{C}}_{suf}\ \subseteq\
\mathcal{CMP}\ \subseteq{\mathcal{C}}_{nes}\ \subseteq\
{\mathcal{C}}_{outer},$
hence, tractable inner and outer approximations of $\mathcal{CMP}$.
The two-step nature of our proposed approximation procedure stems from the
fact that there are two sources of difficulty that necessitate resorting to
approximations. The first one is the requirement of completability, which is
addressed by the inner two of the above inclusions. The second one is the
requirement of set-complete positivity, addressed by the outer two of the
above inclusions.
Hence, whenever we approximately solve (9) by replacing $\mathcal{CMP}$ by its
tractable inner and outer approximations, we incur a relaxation gap that
consists of two components. The portion of the gap that results from a failure
to meet the completability requirement we henceforth refer to as the
completability gap, while the portion that stems from the
approximation error caused by the relaxation of the $\mathcal{CPP}$
constraints will be referred to as the complete positivity gap.
In the next section we will mostly be concerned with narrowing the
completability gap by providing promising examples for ${\mathcal{C}}_{nes}$
and ${\mathcal{C}}_{suf}$. Also, most of the discussion in the rest of the
article will focus on the size of this gap. We will, however, also provide
some references to approximations of $\mathcal{CPP}$, in order to give some
orientation on how to narrow the complete positivity gap as well. In the
section on our numerical experiments we will also show some strategies on how
to bypass this gap entirely, albeit in limited cases.
## 4 Inner and outer approximation of $\mathcal{CMP}$ based on set-completely
positive matrix cones
Our goal in this section is to identify conditions on an element
$[{\mathsf{M}}]\in\SS^{S,k}_{n}$ that are either sufficient or necessary for
$\Gamma_{*}([{\mathsf{M}}])$ to have a set-completely positive completion. In
the following discussion we will show that many such conditions can be given
in terms of set-completely positive constraints.
### 4.1 An outer approximation via necessary conditions
For a vector of ground cones
$\bar{{\mathcal{K}}}\coloneqq\left({\mathcal{K}}_{0},\dots,{\mathcal{K}}_{S}\right)$,
we define yet another generalization of the set-completely positive matrix
cone
$\displaystyle\mathcal{CPI}\left(\bar{{\mathcal{K}}}\right)$
$\displaystyle\coloneqq\left\\{\left[\begin{pmatrix}{\mathsf{X}}&{\mathsf{Z}}_{1}^{\mathsf{T}}\\\
{\mathsf{Z}}_{1}&{\mathsf{Y}}_{1}\end{pmatrix},\dots,\begin{pmatrix}{\mathsf{X}}&{\mathsf{Z}}_{S}^{\mathsf{T}}\\\
{\mathsf{Z}}_{S}&{\mathsf{Y}}_{S}\end{pmatrix}\right]\colon\begin{pmatrix}{\mathsf{X}}&{\mathsf{Z}}_{i}^{\mathsf{T}}\\\
{\mathsf{Z}}_{i}&{\mathsf{Y}}_{i}\end{pmatrix}\in\mathcal{CPP}({\mathcal{K}}_{0}\times{\mathcal{K}}_{i}),\
i\in[1\\!:\\!S]\right\\},$
where $k<n$.
###### Theorem 10.
We have that
$\mathcal{CPI}\left(\bar{{\mathcal{K}}}\right)\supseteq\mathcal{CMP}\left(\bar{{\mathcal{K}}}\right)$.
###### Proof.
By setting ${\mathsf{X}}=\mathbf{x}\mathbf{x}^{\mathsf{T}},\
{\mathsf{Z}}_{i}=\mathbf{y}_{i}\mathbf{x}^{\mathsf{T}},\
{\mathsf{Y}}_{i}=\mathbf{y}_{i}\mathbf{y}_{i}^{\mathsf{T}}$ we see that the
generators of $\mathcal{CMP}$ are contained in $\mathcal{CPI}$, and by
convexity $\mathcal{CMP}$ itself is contained. ∎
We thus have an outer approximation of $\mathcal{CMP}$ in terms of set-
completely positive matrix blocks, which is convenient for approximately
optimizing over $\mathcal{CMP}$, since set-completely positive optimization is
a well-researched field.
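For illustration, the following is a minimal cvxpy sketch of our own (not code from the paper; all names and sizes are hypothetical) of such an approximate $\mathcal{CPI}$ problem in the standard case where all ground cones are nonnegative orthants: the $S$ blocks share their ${\mathsf{X}}$ component, and each $\mathcal{CPP}$ block constraint is loosened to the doubly nonnegative cone, i.e. positive semidefinite plus entrywise nonnegative.

```python
import cvxpy as cp
import numpy as np

n1, n2, S = 2, 3, 4                                 # illustrative dimensions
rng = np.random.default_rng(0)
Cs = []
for _ in range(S):
    G = rng.standard_normal((n1 + n2, n1 + n2))
    Cs.append((G + G.T) / 2)                        # symmetric cost blocks

# one (n1+n2) x (n1+n2) block per scenario, PSD by construction
M = [cp.Variable((n1 + n2, n1 + n2), PSD=True) for _ in range(S)]
cons = [Mi >= 0 for Mi in M]                        # entrywise nonnegativity -> DNN
cons += [M[i][:n1, :n1] == M[0][:n1, :n1]           # all blocks share the X part
         for i in range(1, S)]
cons += [cp.trace(M[0]) == 1]                       # a normalizing linear constraint
prob = cp.Problem(cp.Minimize(sum(cp.trace(Cs[i] @ M[i]) for i in range(S))), cons)
prob.solve()
```

Since the doubly nonnegative cone contains the respective $\mathcal{CPP}$ cone, the value obtained this way lower-bounds the corresponding $\mathcal{CMP}$-constrained minimization.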
### 4.2 Inner approximations via sufficient conditions
We define
$\displaystyle\mathcal{CPS}\left(\bar{{\mathcal{K}}}\right)$
$\displaystyle\coloneqq\left\\{\left[\begin{pmatrix}{\mathsf{X}}&{\mathsf{Z}}_{i}^{\mathsf{T}}\\\
{\mathsf{Z}}_{i}&{\mathsf{Y}}_{i}\end{pmatrix}\right]_{i\in[1\\!:\\!S]}\colon\begin{pmatrix}{\mathsf{W}}_{i}&{\mathsf{Z}}_{i}^{\mathsf{T}}\\\
{\mathsf{Z}}_{i}&{\mathsf{Y}}_{i}\end{pmatrix}\in\mathcal{CPP}({\mathcal{K}}_{0}\times{\mathcal{K}}_{i}),\
i\in[1\\!:\\!S],\ \sum_{i=1}^{S}{\mathsf{W}}_{i}={\mathsf{X}}\right\\},$
where $k<n$. While it is not immediately obvious, the above cone is in fact a
subset of $\mathcal{CMP}$; indeed, the generators of $\mathcal{CPS}$ are a
subset of the generators of $\mathcal{CMP}$, as we will now show.
###### Theorem 11.
$\mathcal{CPS}\left(\bar{{\mathcal{K}}}\right)\subseteq\mathcal{CMP}\left(\bar{{\mathcal{K}}}\right)$
if ${\mathcal{K}}_{i},\ i\in[1\\!:\\!S]$ contain the origin.
###### Proof.
Let $[\bar{{\mathsf{X}}}]\in\mathcal{CPS}$; we need to show that
$\displaystyle\begin{pmatrix}{\mathsf{X}}&{\mathsf{Z}}_{i}^{\mathsf{T}}\\\
{\mathsf{Z}}_{i}&{\mathsf{Y}}_{i}\end{pmatrix}=\sum_{k=1}^{r}\begin{pmatrix}\mathbf{x}^{k}\\\
\mathbf{y}_{i}^{k}\end{pmatrix}\begin{pmatrix}\mathbf{x}^{k}\\\
\mathbf{y}_{i}^{k}\end{pmatrix}^{\mathsf{T}},\ \mbox{with
}\begin{pmatrix}\mathbf{x}^{k}\\\
\mathbf{y}_{i}^{k}\end{pmatrix}\in{\mathcal{K}}_{0}\times{\mathcal{K}}_{i},\
k\in[1\\!:\\!r],\ i\in[1\\!:\\!S].$
for some fixed $r\in\mathbb{N}$. The important aspect is that the
decomposition of the ${\mathsf{X}}$-component does not change across
$i\in[1\\!:\\!S]$. We have
$\displaystyle\begin{pmatrix}{\mathsf{W}}_{i}&{\mathsf{Z}}_{i}^{\mathsf{T}}\\\
{\mathsf{Z}}_{i}&{\mathsf{Y}}_{i}\end{pmatrix}=\sum_{k=1}^{r_{i}}\begin{pmatrix}\mathbf{w}_{i}^{k}\\\
\mathbf{y}_{i}^{k}\end{pmatrix}\begin{pmatrix}\mathbf{w}_{i}^{k}\\\
\mathbf{y}_{i}^{k}\end{pmatrix}^{\mathsf{T}}\mbox{ with
}\begin{pmatrix}\mathbf{w}^{k}_{i}\\\
\mathbf{y}_{i}^{k}\end{pmatrix}\in{\mathcal{K}}_{0}\times{\mathcal{K}}_{i},\
k\in[1\\!:\\!r_{i}],\ i\in[1\\!:\\!S].$
We can set $r=\sum_{i=1}^{S}r_{i}$ and we have
${\mathsf{X}}=\sum_{i=1}^{S}{\mathsf{W}}_{i}=\sum_{i=1}^{S}\sum_{k=1}^{r_{i}}\mathbf{w}_{i}^{k}(\mathbf{w}_{i}^{k})^{\mathsf{T}}$
so that
$\displaystyle\begin{pmatrix}{\mathsf{X}}&{\mathsf{Z}}_{i}^{\mathsf{T}}\\\
{\mathsf{Z}}_{i}&{\mathsf{Y}}_{i}\end{pmatrix}=\sum_{k=1}^{r_{i}}\begin{pmatrix}\mathbf{w}_{i}^{k}\\\
\mathbf{y}_{i}^{k}\end{pmatrix}\begin{pmatrix}\mathbf{w}_{i}^{k}\\\
\mathbf{y}_{i}^{k}\end{pmatrix}^{\mathsf{T}}+\sum_{j\in[1\\!:\\!S]\setminus\left\\{i\right\\}}\sum_{k=1}^{r_{j}}\begin{pmatrix}\mathbf{w}_{j}^{k}\\\
\mathbf{o}\end{pmatrix}\begin{pmatrix}\mathbf{w}_{j}^{k}\\\
\mathbf{o}\end{pmatrix}^{\mathsf{T}}.$
with
$\displaystyle\begin{pmatrix}\mathbf{w}_{i}^{k}\\\
\mathbf{y}_{i}^{k}\end{pmatrix}\in{\mathcal{K}}_{0}\times{\mathcal{K}}_{i},\
k\in[1\\!:\\!r_{i}],\ \begin{pmatrix}\mathbf{w}_{j}^{k}\\\
\mathbf{o}\end{pmatrix}\in{\mathcal{K}}_{0}\times{\mathcal{K}}_{i},\
k\in[1\\!:\\!r_{j}],\ j\in[1\\!:\\!S]\setminus\left\\{i\right\\},$
where the last inclusion holds since the cones ${\mathcal{K}}_{i}$ contain the origin.
∎
To obtain a second inner approximation, applicable whenever
${\mathcal{K}}_{0}\in\left\\{\mathbb{R}^{n_{1}}_{+},\mathbb{R}^{n_{1}}\right\\}$
and usable in conjunction with $\mathcal{CPS}$, we can employ a slight
generalization of known results on matrix completion. We define
$\displaystyle\mathcal{CBC}_{k}\left(\bar{{\mathcal{K}}}\right)$
$\displaystyle\coloneqq\left\\{\left[\begin{pmatrix}{\mathsf{X}}&{\mathsf{Z}}_{i}^{\mathsf{T}}\\\
{\mathsf{Z}}_{i}&{\mathsf{Y}}_{i}\end{pmatrix}\right]_{i\in[1\\!:\\!S]}\colon\begin{pmatrix}x&\mathbf{z}_{i}^{\mathsf{T}}\\\
\mathbf{z}_{i}&{\mathsf{Y}}_{i}\end{pmatrix}\in\mathcal{CPP}(\mathbb{R}_{+}\times{\mathcal{K}}_{i}),\begin{array}[]{l}{\mathsf{Z}}_{i}=\mathbf{z}_{i}\mathbf{e}_{k}^{\mathsf{T}}\\\
{\mathsf{X}}=x\mathbf{e}_{k}\mathbf{e}_{k}^{\mathsf{T}}\end{array},\
i\in[1\\!:\\!S]\ \right\\},$
and
$\displaystyle\mathcal{CBC}\left(\bar{{\mathcal{K}}}\right)\coloneqq\sum_{k=1}^{n_{1}}\mathcal{CBC}_{k}\left(\bar{{\mathcal{K}}}\right).$
Then we can prove the following containment.
###### Theorem 12.
$\mathcal{CBC}\left(\bar{{\mathcal{K}}}\right)\subseteq\mathcal{CMP}\left(\bar{{\mathcal{K}}}\right)$
for
${\mathcal{K}}_{0}\in\left\\{\mathbb{R}^{n_{1}}_{+},\mathbb{R}^{n_{1}}\right\\}$.
###### Proof.
Since convex cones are closed under addition the statement will follow if we
show that
$\mathcal{CBC}_{k}\left(\bar{{\mathcal{K}}}\right)\subseteq\mathcal{CMP}\left(\bar{{\mathcal{K}}}\right)$
for any $k\in[1\\!:\\!n_{1}]$. We will discuss the case, where
${\mathcal{K}}_{0}=\mathbb{R}^{n_{1}}_{+}$ first. For an element of
$\mathcal{CBC}_{k}$ consider the partial matrix
$\displaystyle{\mathsf{M}}\coloneqq\begin{pmatrix}x&\mathbf{z}_{1}^{\mathsf{T}}&\dots&\mathbf{z}_{S}^{\mathsf{T}}\\\
\mathbf{z}_{1}&{\mathsf{Y}}_{1}&\dots&*\\\ \vdots&\vdots&\ddots&\vdots\\\
\mathbf{z}_{S}&*&\dots&{\mathsf{Y}}_{S}\end{pmatrix},\mbox{for which by
construction }\begin{pmatrix}x&\mathbf{z}_{i}^{\mathsf{T}}\\\
\mathbf{z}_{i}&{\mathsf{Y}}_{i}\end{pmatrix}\in\mathcal{CPP}\left(\mathbb{R}_{+}\times{\mathcal{K}}_{i}\right)\mbox{
holds.}$
W.l.o.g. we can assume $x=1$. We will prove that ${\mathsf{M}}$ can be
completed to a member of
$\mathcal{CPP}(\mathbb{R}_{+}\times_{i=1}^{S}{\mathcal{K}}_{i})$. The desired
inclusion then follows since zero rows and columns can be added in order to
obtain a member of
$\mathcal{CPP}(\mathbb{R}_{+}^{n_{1}}\times_{i=1}^{S}{\mathcal{K}}_{i})$. Our
proof involves merely a slight adaptation of the argument used for the
completion of partial completely positive matrices given in [6], where the
case that the ${\mathcal{K}}_{i}$ are all positive orthants was considered. We
show that such an assumption is unnecessary. We proceed by induction and start
by showing that the first $(2n_{2}+1)\times(2n_{2}+1)$ principal submatrix of
${\mathsf{M}}$ can be completed to a matrix in
$\mathcal{CPP}\left(\mathbb{R}_{+}\times{\mathcal{K}}_{1}\times{\mathcal{K}}_{2}\right)$.
After a permutation, this matrix can be written as
$\displaystyle\bar{{\mathsf{M}}}\coloneqq\begin{pmatrix}{\mathsf{Y}}_{1}&\mathbf{z}_{1}&{\mathsf{X}}^{\mathsf{T}}\\\
\mathbf{z}_{1}^{\mathsf{T}}&1&\mathbf{z}_{2}^{\mathsf{T}}\\\
{\mathsf{X}}&\mathbf{z}_{2}&{\mathsf{Y}}_{2}\end{pmatrix}$
where we replaced the unspecified entries by ${\mathsf{X}}$. Observe that the
submatrices
$\displaystyle{\mathsf{M}}_{1}\coloneqq\begin{pmatrix}{\mathsf{Y}}_{1}&\mathbf{z}_{1}\\\
\mathbf{z}_{1}^{\mathsf{T}}&1\end{pmatrix}\in\mathcal{CPP}\left({\mathcal{K}}_{1}\times\mathbb{R}_{+}\right),\
{\mathsf{M}}_{2}\coloneqq\begin{pmatrix}1&\mathbf{z}_{2}^{\mathsf{T}}\\\
\mathbf{z}_{2}&{\mathsf{Y}}_{2}\end{pmatrix}\in\mathcal{CPP}\left(\mathbb{R}_{+}\times{\mathcal{K}}_{2}\right),$
so that
$\displaystyle{\mathsf{M}}_{1}=\sum_{l=1}^{m_{1}}\begin{pmatrix}\mathbf{f}_{l}\\\
f^{0}_{l}\end{pmatrix}\begin{pmatrix}\mathbf{f}_{l}\\\
f^{0}_{l}\end{pmatrix}^{\mathsf{T}}\mbox{ with
}\begin{pmatrix}\mathbf{f}_{l}\\\
f^{0}_{l}\end{pmatrix}\in{\mathcal{K}}_{1}\times\mathbb{R}_{+},
$\displaystyle{\mathsf{M}}_{2}=\sum_{k=1}^{m_{2}}\begin{pmatrix}g^{0}_{k}\\\
\mathbf{g}_{k}\end{pmatrix}\begin{pmatrix}g^{0}_{k}\\\
\mathbf{g}_{k}\end{pmatrix}^{\mathsf{T}}\mbox{ with
}\begin{pmatrix}g^{0}_{k}\\\
\mathbf{g}_{k}\end{pmatrix}\in\mathbb{R}_{+}\times{\mathcal{K}}_{2}.$
Define $m_{1}m_{2}$ vectors
$\displaystyle\mathbf{v}_{lk}\coloneqq\begin{pmatrix}g^{0}_{k}\mathbf{f}_{l}\\\
f^{0}_{l}g^{0}_{k}\\\
f^{0}_{l}\mathbf{g}_{k}\end{pmatrix}\in{\mathcal{K}}_{1}\times\mathbb{R}_{+}\times{\mathcal{K}}_{2},\
l\in[1\\!:\\!m_{1}],\ k\in[1\\!:\\!m_{2}],$ (11)
then the matrix $\sum_{k,l}\mathbf{v}_{lk}\mathbf{v}_{lk}^{\mathsf{T}}$ is the
matrix $\bar{{\mathsf{M}}}$ with
${\mathsf{X}}=\mathbf{z}_{2}\mathbf{z}_{1}^{\mathsf{T}}$. Hence, after undoing
the perturbation, we generate the desired completion. For the $j$-th induction
step we can repeat the argument with ${\mathcal{K}}_{1}$ replaced by
$\times_{i=1}^{j-1}{\mathcal{K}}_{1}$ and ${\mathcal{K}}_{2}$ replaced by
${\mathcal{K}}_{j}$. For the case, where
${\mathcal{K}}_{0}=\mathbb{R}^{n_{1}}$ the proof proceeds analogously with
$\mathbb{R}_{+}$ replaced by $\mathbb{R}$. ∎
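To make the bookkeeping in the completion step explicit: the rank-one decompositions of ${\mathsf{M}}_{1}$ and ${\mathsf{M}}_{2}$ give $\sum_{l}(f^{0}_{l})^{2}=1=\sum_{k}(g^{0}_{k})^{2}$, so the mixed sums in $\sum_{l,k}\mathbf{v}_{lk}\mathbf{v}_{lk}^{\mathsf{T}}$ factor. For instance, the north-west block and the formerly unspecified block evaluate to
$\displaystyle\sum_{l,k}(g^{0}_{k})^{2}\mathbf{f}_{l}\mathbf{f}_{l}^{\mathsf{T}}={\mathsf{Y}}_{1},\qquad\sum_{l,k}\left(f^{0}_{l}\mathbf{g}_{k}\right)\left(g^{0}_{k}\mathbf{f}_{l}\right)^{\mathsf{T}}=\left(\sum_{k}g^{0}_{k}\mathbf{g}_{k}\right)\left(\sum_{l}f^{0}_{l}\mathbf{f}_{l}\right)^{\mathsf{T}}=\mathbf{z}_{2}\mathbf{z}_{1}^{\mathsf{T}},$
and the remaining blocks are recovered analogously.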
Finally, we present a simple, yet, as we will see in the numerical
experiments, very effective inner approximation, which is applicable whenever
${\mathcal{K}}_{i}\in\left\\{\mathbb{R}^{n_{2}}_{+},\
\mathbb{R}^{n_{2}}\right\\},\ i\in[1\\!:\\!S]$. Again, we express it as the
sum of simpler cones given by
$\displaystyle\mathcal{DDC}_{k,s}\left(\bar{{\mathcal{K}}}\right)$
$\displaystyle\coloneqq\left\\{\left[\begin{pmatrix}{\mathsf{X}}&{\mathsf{Z}}_{i}^{\mathsf{T}}\\\
{\mathsf{Z}}_{i}&{\mathsf{Y}}_{i}\end{pmatrix}\right]_{i\in[1\\!:\\!S]}\colon\begin{pmatrix}{\mathsf{X}}&\mathbf{z}_{s}^{\mathsf{T}}\\\
\mathbf{z}_{s}&y_{s}\end{pmatrix}\in\mathcal{CPP}({\mathcal{K}}_{0}\times\mathbb{R}_{+}),\begin{array}[]{l}{\mathsf{Z}}_{s}=\mathbf{z}_{s}\mathbf{e}_{k}^{\mathsf{T}}\\\
{\mathsf{Y}}_{s}=y_{s}\mathbf{e}_{k}\mathbf{e}_{k}^{\mathsf{T}}\\\
{\mathsf{Y}}_{i}={\mathsf{O}},\ i\in[1\\!:\\!S]\setminus\left\\{s\right\\}\\\
{\mathsf{Z}}_{i}={\mathsf{O}},\
i\in[1\\!:\\!S]\setminus\left\\{s\right\\}\end{array}\right\\}.$
We can then define
$\displaystyle\mathcal{DDC}\left(\bar{{\mathcal{K}}}\right)\coloneqq\sum_{s=1}^{S}\sum_{k=1}^{n_{2}}\mathcal{DDC}_{k,s}\left(\bar{{\mathcal{K}}}\right),$
about which the following statement is easily proved.
###### Theorem 13.
We have
$\mathcal{DDC}\left(\bar{{\mathcal{K}}}\right)\subseteq\mathcal{CMP}\left(\bar{{\mathcal{K}}}\right)$
if ${\mathcal{K}}_{i}\in\left\\{\mathbb{R}^{n_{2}}_{+},\
\mathbb{R}^{n_{2}}\right\\},\ i\in[1\\!:\\!S]$.
###### Proof.
Since $\mathcal{CMP}$ is convex, it is enough to prove that
$\mathcal{DDC}_{k,s}\left(\bar{{\mathcal{K}}}\right)\subseteq\mathcal{CMP}\left(\bar{{\mathcal{K}}}\right)$
for any $k\in[1\\!:\\!n_{2}],\ s\in[1\\!:\\!S]$. For any
$[{\mathsf{M}}]\in\mathcal{DDC}_{k,s}\left(\bar{{\mathcal{K}}}\right)$ the
required completion of $\Gamma_{*}([{\mathsf{M}}])$ is easily obtained by
filling out the unspecified entries with zeros. ∎
Note that the statement remains true if we merely work with a selection of
the $\mathcal{DDC}_{k,s}$ in order to alleviate some of the numerical burden.
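Spelled out in the notation above, an element of $\mathcal{DDC}\left(\bar{{\mathcal{K}}}\right)$ decomposes blockwise as
$\displaystyle{\mathsf{X}}=\sum_{s=1}^{S}\sum_{k=1}^{n_{2}}{\mathsf{X}}^{s,k},\qquad{\mathsf{Z}}_{s}=\sum_{k=1}^{n_{2}}\mathbf{z}^{s,k}\mathbf{e}_{k}^{\mathsf{T}},\qquad{\mathsf{Y}}_{s}=\sum_{k=1}^{n_{2}}y^{s,k}\mathbf{e}_{k}\mathbf{e}_{k}^{\mathsf{T}},$
where each $\begin{pmatrix}{\mathsf{X}}^{s,k}&(\mathbf{z}^{s,k})^{\mathsf{T}}\\\ \mathbf{z}^{s,k}&y^{s,k}\end{pmatrix}\in\mathcal{CPP}({\mathcal{K}}_{0}\times\mathbb{R}_{+})$; in particular, the ${\mathsf{Y}}_{s}$ components of any element of $\mathcal{DDC}$ are diagonal.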
All the inner and outer approximations discussed so far represent an effort
to tackle the completability gap. But, as we laid out at the beginning of this
section, they have in common that they are constructed using set-
completely positive matrix cones, over which we cannot optimize directly. In
this text we will discuss some instances where the complete positivity gap
can be bypassed conveniently, so that we can focus on assessing the extent of
the completability gap.
#### 4.2.1 Limitations of the inner approximations of $\mathcal{CMP}$
We will now critically assess the strength of the inner approximations
discussed above. Of course, an obvious limitation of $\mathcal{CBC}$ and
$\mathcal{DDC}$ is that the ${\mathsf{X}}$ and the ${\mathsf{Y}}_{i}$
components, respectively, can only be diagonal matrices. In the case of
$\mathcal{CBC}$, this has some undesirable consequences when approximating an
exact reformulation of (1) based on Theorem 8.
Obviously, if $\mathcal{CMP}$ is replaced by $\mathcal{CBC}$, then
$\mathbf{x}=\mathbf{o}$, since these values reside in the off-diagonal entries
of the north-west blocks. But then Theorem 8 implies that $\mathbf{x}$ is the
convex combination of some $\mathbf{x}_{j},\ j\in[1\\!:\\!k]$, that are part
of a feasible solution to (1). Since the feasible set is bounded we get
$\mathbf{x}_{j}=\mathbf{o}$ as well, which eventually yields
${\mathsf{Z}}_{i}=\sum_{j=1}^{k}\lambda_{j}\mathbf{y}^{i}_{j}\mathbf{x}_{j}^{\mathsf{T}}={\mathsf{O}}$,
so that the approximation eliminates these components entirely.
A similar deficiency can be identified for $\mathcal{CPS}$. To see this, note
that by the proof of Theorem 11 all extreme rays of $\mathcal{CPS}$ are
in fact rank one, in the sense that all matrix components are rank-one
matrices. In other words, the generators of $\mathcal{CPS}$ are a subset of the
generators of $\mathcal{CMP}$. Hence, if $\mathcal{CMP}$ is replaced by
$\mathcal{CPS}$ in (8), the extreme points of the feasible set can be shown to be
rank one as well, by invoking a similar argument as in Theorem 8. If
${\mathsf{Y}}_{i},\mathbf{y}_{i},{\mathsf{W}}_{i},\ i\in[1\\!:\\!S]$, are part
of a feasible extremal solution of the respective approximation, we get
$\displaystyle\begin{pmatrix}w_{0}^{i}&\mathbf{y}_{i}^{\mathsf{T}}\\\
\mathbf{y}_{i}&{\mathsf{Y}}_{i}\end{pmatrix}\in\SS^{n_{2}+1}_{+},\
i\in[1\\!:\\!S],\quad\sum_{i=1}^{S}w^{i}_{0}=1,$
where $w_{0}^{i}$ are the north-west entries of ${\mathsf{W}}_{i},\
i\in[1\\!:\\!S]$. By Schur complementation we get $\SS^{n_{2}}_{+}\ni
w_{0}^{i}{\mathsf{Y}}_{i}-\mathbf{y}_{i}\mathbf{y}_{i}^{\mathsf{T}}=w_{0}^{i}\mathbf{y}_{i}\mathbf{y}_{i}^{\mathsf{T}}-\mathbf{y}_{i}\mathbf{y}_{i}^{\mathsf{T}}$,
for any fixed $i\in[1\\!:\\!S]$, which implies that either $w_{0}^{i}=1$ or
$\mathbf{y}_{i}=\mathbf{o}$. Thus, the approximation based on $\mathcal{CPS}$
eliminates all but one of the ${\mathsf{Y}}_{i}$ components and as a
consequence all but one of the ${\mathsf{Z}}_{i}$ components. Depending on the
model at hand, this can be an advantage as we will see in the numerical
experiments in Section 5. However, in case $({\mathsf{F}}_{i},\
{\mathsf{G}}_{i},{\mathbf{r}}_{i})$ is identical across $i\in[1\\!:\\!S]$, the
approximation actually eliminates all ${\mathsf{Y}}_{i}$ and
${\mathsf{Z}}_{i}$ components. To see this, consider that in said case
$\operatorname{diag}({\mathsf{G}}_{i}{\mathsf{Y}}_{i}{\mathsf{G}}_{i}^{\mathsf{T}})=\mathbf{o}$
for one $i$ forces the same for all $i$, which by boundedness implies
${\mathsf{Y}}_{i}={\mathsf{O}}$, entailing ${\mathsf{Z}}_{i}={\mathsf{O}}$ for
all $i\in[1\\!:\\!S]$.
Despite these limitations we have found instances of (1) where the inner
approximations yield favorable results. We will discuss these instances in the
next section, where we conduct numerical experiments assessing the efficacy of
the inner and outer approximations.
## 5 Numerical experiments
As discussed in the introduction, the authors of [3] tried to solve (2St3QP)
using copositive reformulations, where they compared the traditional model
akin to (2) with what we can now conceptualize as the $\mathcal{CPI}$
relaxation of the $\mathcal{CMP}$ reformulation of (2St3QP). For both models,
they used the respective $\mathcal{DNN}$ relaxations in order to produce
solutions. For the purpose of certifying optimality, they exploited the fact
that either relaxation also produces feasible solutions, hence upper bounds,
since both leave the original space of variables intact. For many instances,
these bounds alone closed the optimality gap, but for some instances gaps
persisted, even though they were narrowed by extensive polishing procedures,
which we will not detail here. What we set out to do in this section
is to revisit these instances and new variants of them, in order to see if the
bounds we introduced in this article can further narrow the optimality gap.
In what follows we use Mosek as a conic optimization solver and Gurobi
as a global optimization solver, interfacing with both via the YALMIP
environment in Matlab. All experiments were run on an Intel Core i5-9300H CPU
at 2.40GHz with 16GB of RAM.
In our experiments we consider the following problem
$\displaystyle
v({\mathcal{F}})\coloneqq\min_{\mathbf{x}\in\mathbb{R}^{n_{1}},\mathbf{y}_{i}\in\mathbb{R}^{n_{2}}}\left\\{\mathbf{x}^{\mathsf{T}}{\mathsf{A}}\mathbf{x}+\sum_{i=1}^{S}p_{i}\left(\mathbf{x}^{\mathsf{T}}{\mathsf{B}}_{i}\mathbf{y}_{i}+\mathbf{y}_{i}^{\mathsf{T}}{\mathsf{C}}_{i}\mathbf{y}_{i}\right)\colon(\mathbf{x},\bar{\mathbf{y}})\in{\mathcal{F}}\right\\},$
(12)
with the following specifications for ${\mathcal{F}}$:
$\displaystyle{\mathcal{F}}_{1}$
$\displaystyle\coloneqq\left\\{\begin{pmatrix}\mathbf{x}\\\
\bar{\mathbf{y}}\end{pmatrix}\in\mathbb{R}^{n_{1}+Sn_{2}}_{+}\colon\mathbf{e}^{\mathsf{T}}\mathbf{x}+\mathbf{e}^{\mathsf{T}}\mathbf{y}_{i}=1,\
i\in[1\\!:\\!S]\right\\},$ $\displaystyle{\mathcal{F}}_{2}$
$\displaystyle\coloneqq\left\\{\begin{pmatrix}\mathbf{x}\\\
\bar{\mathbf{y}}\end{pmatrix}\in\left\\{0,1\right\\}^{S}\times\mathbb{R}^{Sn_{2}}\colon\mathbf{e}^{\mathsf{T}}\mathbf{x}=(S-1),\
\sum_{i=1}^{S}\mathbf{y}_{i}^{\mathsf{T}}\mathbf{y}_{i}=1,\
\mathbf{y}_{i}x_{i}=\mathbf{o},\ i\in[1\\!:\\!S]\right\\},$
$\displaystyle{\mathcal{F}}_{3}$
$\displaystyle\coloneqq\left\\{\begin{pmatrix}\mathbf{x}\\\
\bar{\mathbf{y}}\end{pmatrix}\in\mathbb{R}^{n_{1}}\times\mathbb{R}^{Sn_{2}}_{+}\colon\mathbf{x}^{\mathsf{T}}\mathbf{x}=1,\
\sum_{i=1}^{S}\mathbf{y}_{i}^{\mathsf{T}}\mathbf{y}_{i}=1\right\\}.$
where
$\bar{\mathbf{y}}\coloneqq\left(\mathbf{y}_{1},\dots,\mathbf{y}_{S}\right)$.
The data for the objective function coefficients were generated using the
same two approaches as in [3]. Besides setting $p_{i}=1/S,\ i\in[1\\!:\\!S]$,
the following two schemes for generating the problem data have been
implemented:
* Scheme 1:
For the first one, we sample $n_{1}+n_{2}$ points from the unit square. The
first $n_{1}$ points are fixed and their mutual distances are used to populate
the entries of ${\mathsf{A}}$. The other $n_{2}$ points are only known to lie
in a square with side length $2\varepsilon$, within which their position
follows a uniform distribution. For these points $S$ samples are generated,
and for the $s$-th sample the mutual distances among the sampled points and
their distances to the first $n_{1}$ points populate the entries of
${\mathsf{C}}_{s}$ and ${\mathsf{B}}_{s}$, respectively.
* Scheme 2:
For the second one, we choose $A_{ij}\sim\mathcal{U}_{\left\\{0,1\right\\}}$,
$B_{ij}\sim\mathcal{U}_{[0,10]}$, $C_{ij}\sim\mathcal{U}_{[0,0.1]}$,
independently of each other, where $\mathcal{U}_{\mathcal{M}}$ denotes the
uniform distribution with support $\mathcal{M}$; a code sketch of both schemes
follows this list.
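The following is a minimal numpy sketch of both schemes; the function names, the seeding, and the symmetrization of ${\mathsf{A}}$ and ${\mathsf{C}}_{s}$ are our own assumptions rather than details taken from [3].

```python
import numpy as np

def scheme1(n1, n2, S, eps=0.1, rng=None):
    # distances between points in the unit square; the n2 uncertain points
    # are resampled S times from squares of side length 2*eps
    rng = rng or np.random.default_rng(0)
    P = rng.uniform(0.0, 1.0, size=(n1, 2))          # fixed points
    Q0 = rng.uniform(0.0, 1.0, size=(n2, 2))         # centers of the uncertain points
    A = np.linalg.norm(P[:, None] - P[None, :], axis=2)
    Bs, Cs = [], []
    for _ in range(S):
        Q = Q0 + rng.uniform(-eps, eps, size=(n2, 2))
        Bs.append(np.linalg.norm(P[:, None] - Q[None, :], axis=2))   # n1 x n2
        Cs.append(np.linalg.norm(Q[:, None] - Q[None, :], axis=2))   # n2 x n2
    return A, Bs, Cs, np.full(S, 1.0 / S)

def scheme2(n1, n2, S, rng=None):
    # A_ij ~ U_{0,1}, B_ij ~ U_[0,10], C_ij ~ U_[0,0.1], all independent
    rng = rng or np.random.default_rng(0)
    A = rng.integers(0, 2, size=(n1, n1)).astype(float)
    A = (A + A.T) / 2                                 # symmetrized (our assumption)
    Bs = [rng.uniform(0.0, 10.0, size=(n1, n2)) for _ in range(S)]
    Cs = []
    for _ in range(S):
        C = rng.uniform(0.0, 0.1, size=(n2, n2))
        Cs.append((C + C.T) / 2)                      # symmetrized (our assumption)
    return A, Bs, Cs, np.full(S, 1.0 / S)
```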
We will now proceed with a discussion of the different instances of
${\mathcal{F}}_{i},\ i\in[1\\!:\\!3]$, where we present the respective conic
reformulation/relaxation and the inner and outer approximations of its sparse
counterpart. Regarding the inner approximations, note that if one combines
them as suggested above, there will only ever be one non-redundant
approximation. The reason is that linear functions attain their optimum at
an extreme point of the feasible set, and the extreme rays of a sum of cones
are a subset of the extreme rays of the individual cones. Thus, for every
${\mathcal{F}}_{i},\ i\in[1\\!:\\!3]$, we will discuss the merits of only one
specific inner approximation at a time. The upper bounds will be obtained from
$\mathcal{CPI}$ by default. The focus of the discussion will be the
quality of the bounds obtained. Specifically, we are interested in assessing
the completability gap, which necessitates guaranteeing a complete positivity
gap of zero. We will discuss how the latter was achieved case by case.
### 5.1 Using $\mathcal{DDC}$ under ${\mathcal{F}}_{1}$
By choosing ${\mathcal{F}}={\mathcal{F}}_{1}$ we recover the scenario
problem for the two-stage stochastic standard quadratic optimization problem
introduced in [3]. In the experiments conducted there, a conic lower bound was
used that is equivalent to the outer approximation of (8) based on
$\mathcal{CPI}$. Since the original space of variables is preserved, the conic
relaxation also yielded an upper bound that conveniently closed the optimality
gap for all instances generated by scheme 1. However, for the instances
generated by scheme 2, the gaps produced by the $\mathcal{CPI}$ approximation
were typically large. In this section we will test whether the gap can also be
narrowed by using the inner approximations introduced here.
Due to the limitations discussed in Section 4.2.1, the only inner
approximation that is meaningfully applicable here is the one based on
$\mathcal{DDC}$. Thus, the approximation will involve $Sn_{2}$ constraints
involving $\mathcal{CPP}(\mathbb{R}^{n_{1}+1}_{+}\times\mathbb{R}_{+})$. In
case $n_{1}+2\leq 4$ these constraints can be represented via semidefinite
constraints, since
$\mathcal{CPP}(\mathbb{R}^{n}_{+})=\SS^{n}_{+}\cap{\mathcal{N}}^{n}\eqqcolon\mathcal{DNN}^{n}$
whenever $n\leq 4$, so that the complete positivity gap can be conveniently
bypassed. If $n_{1}+2>4$, the relaxation based on $\mathcal{DNN}$ is an outer
approximation of $\mathcal{DDC}$, which itself is an inner approximation of
$\mathcal{CMP}$, so that we can qualify the resulting approximation as neither
outer nor inner. However, the original space of variables stays intact
regardless, so that in cases where $n_{1}+2>4$ we can still obtain another
upper bound that potentially narrows the optimality gap.
### 5.2 Using $\mathcal{CPS}$ under ${\mathcal{F}}_{2}$
The model encodes selecting one out of $S$ groups of variables to be nonzero
and optimizing the objective using just these variables. While there are more
straightforward ways of encoding this process, the one presented here is the
one for which the conic bounds behaved most favorably.
In order to obtain a $\mathcal{CMP}$ reformulation for computing
$v({\mathcal{F}}_{2})$ via Theorem 8, we would have to do some prior adaptation
of the problem. First of all, $x_{i}\in\\{0,1\\},\ i\in[1\\!:\\!S]$, can be
reformulated as quadratic constraints $x^{2}_{i}-x_{i}=0$, so that, in order
for the assumptions of the theorem to hold, we would have to introduce
redundant constraints and additional variables given by $x_{i}+s_{i}=1,\
s_{i}\geq 0,\ i\in[1\\!:\\!S]$. Secondly, in order for
$\mathbf{y}_{i}x_{i}=\mathbf{o}$ to fulfill said assumptions, we would have to
split each $\mathbf{y}_{i}$ into a positive and a negative component and
enforce the constraints for both components. Lastly, the quadratic constraints
would need to be absorbed into a second-order cone constraint. Due to the
introduction of this many variables, we would have no chance of bypassing the
complete positivity gap. Thus, we will merely work with the following
$\mathcal{CMP}$-based relaxation:
$\displaystyle\begin{split}\min_{{\mathsf{X}},{\mathsf{Y}}_{i},{\mathsf{Z}}_{i},\mathbf{x},\mathbf{y}_{i}}{\mathsf{A}}\bullet{\mathsf{X}}+\mathbf{a}^{\mathsf{T}}\mathbf{x}&+\sum_{i=1}^{S}{\mathsf{B}}_{i}\bullet{\mathsf{Z}}_{i}+{\mathsf{C}}_{i}\bullet{\mathsf{Y}}_{i}+\mathbf{c}_{i}^{\mathsf{T}}\mathbf{y}_{i}\\\
\mathrm{s.t.:}\ \mathbf{e}^{\mathsf{T}}\mathbf{x}&=(S-1),\\\
\mathbf{e}\mathbf{e}^{\mathsf{T}}\bullet{\mathsf{X}}&=(S-1)^{2},\\\
\sum_{i=1}^{S}{\mathsf{I}}\bullet{\mathsf{Y}}_{i}&=1,\\\ {\mathsf{Z}}_{i}\mathbf{e}_{i}&=\mathbf{o},\
i\in[1\\!:\\!S],\\\
\left[\begin{pmatrix}1&\mathbf{x}^{\mathsf{T}}&\mathbf{y}_{i}^{\mathsf{T}}\\\
\mathbf{x}&{\mathsf{X}}&{\mathsf{Z}}_{i}^{\mathsf{T}}\\\
\mathbf{y}_{i}&{\mathsf{Z}}_{i}&{\mathsf{Y}}_{i}\end{pmatrix}\right]_{i\in[1\\!:\\!S]}&\in\mathcal{CMP}\left(\mathbb{R}_{+}^{n_{1}+1},\mathbb{R}^{n_{2}},\dots,\mathbb{R}^{n_{2}}\right).\end{split}$
(13)
When working with the $\mathcal{CPS}$-based lower bound, we obtain a problem
with $S$ conic constraints involving
$\mathcal{CPP}(\mathbb{R}_{+}^{n_{1}+1}\times\mathbb{R}^{n_{2}})$. Here we can
use a result from [12, Theorem 1], which states that
$\displaystyle\mathcal{CPP}(\mathbb{R}_{+}^{n_{1}+1}\times\mathbb{R}^{n_{2}})=\left\\{\begin{pmatrix}{\mathsf{M}}_{1}&{\mathsf{M}}_{2}^{\mathsf{T}}\\\
{\mathsf{M}}_{2}&{\mathsf{M}}_{3}\end{pmatrix}\in\SS^{n_{1}+n_{2}+1}_{+}\colon{\mathsf{M}}_{1}\in\mathcal{CPP}(\mathbb{R}^{n_{1}+1}_{+})\right\\}.$
This allows us to bypass the complete positivity gap whenever $n_{1}+1\leq 4$.
However, since the above conic problem is already a lower bound, the bounds we
obtain from further approximating it can merely be used to assess how well
that lower bound has been approximated. We will see, however, that the results
we obtain indicate that (13) is at the very least a very tight bound that is
well approximated via $\mathcal{CPI}$ and $\mathcal{CPS}$.
### 5.3 Using $\mathcal{CBC}$ under ${\mathcal{F}}_{3}$
In order to bypass the weakness of $\mathcal{CBC}$ outlined in Section 4.2.1
we will work with a simplified, sparse, conic reformulation given by
$\displaystyle\begin{split}\min_{{\mathsf{X}},{\mathsf{Y}}_{i},{\mathsf{Z}}_{i}}{\mathsf{A}}\bullet{\mathsf{X}}+\sum_{i=1}^{S}{\mathsf{B}}_{i}\bullet{\mathsf{Z}}_{i}&+{\mathsf{C}}_{i}\bullet{\mathsf{Y}}_{i}\\\
\mathrm{s.t.:}\
{\mathsf{I}}\bullet{\mathsf{X}}+\sum_{i=1}^{S}{\mathsf{I}}\bullet{\mathsf{Y}}_{i}&=1,\\\
\left[\begin{pmatrix}{\mathsf{X}}&{\mathsf{Z}}_{i}^{\mathsf{T}}\\\
{\mathsf{Z}}_{i}&{\mathsf{Y}}_{i}\end{pmatrix}\right]_{i\in[1\\!:\\!S]}&\in\mathcal{CMP}\left(\mathbb{R}^{n_{1}}_{+},\mathbb{R}^{n_{2}},\dots,\mathbb{R}^{n_{2}}\right).\end{split}$
(14)
The fact that this is indeed a valid reformulation and not just a lower bound
can be deduced from Theorem 6 by choosing $\mathbb{H}$ to be the hyperplane
corresponding to the one linear constraint that is present in (14), and
$\mathbb{J}$ to be all of $\mathcal{CMP}$. The boundedness of
$\mathbb{J}\cap\mathbb{H}$ follows from the fact that the identity matrix
${\mathsf{I}}$ is positive definite. In this reformulation $\mathbf{x}$ is
absent, so that the problem outlined in Section 4.2.1 is moot. Of course, this
comes at the cost of having merely a single upper bound, given by the optimal
solution of the $\mathcal{CBC}$ approximation. However, since
${\mathcal{K}}_{i}=\mathbb{R}^{n_{2}},\ i\in[1\\!:\\!S]$, we can use the fact
that $\mathcal{CPP}(\mathbb{R}_{+}\times\mathbb{R}^{n})=\SS^{n+1}_{+}$ (see [2,
Section 2]) in order to close the complete positivity gap regardless of the
dimension of the problem. We also note that there always is a feasible
solution with objective value equal to zero, which leads to optimality gaps
equal to infinity whenever the lower bound is exactly zero. In order to avoid
this inconvenience we added 1 as a constant to the objective function.
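For the case ${\mathcal{K}}_{0}=\mathbb{R}^{n_{1}}_{+},\ {\mathcal{K}}_{i}=\mathbb{R}^{n_{2}}$, this makes the $\mathcal{CBC}$ bound a plain semidefinite program. The following cvxpy sketch of our own (function and variable names are hypothetical; the data is assumed to come from the scheme sketches above) illustrates the resulting upper bound for (14):

```python
import cvxpy as cp
import numpy as np

def cbc_bound(A, Bs, Cs):
    n1, S, n2 = A.shape[0], len(Bs), Bs[0].shape[1]
    # one PSD block per (i, k): [[x_k, z_ik^T], [z_ik, Y_ik]] in S^{n2+1}_+,
    # which coincides with CPP(R_+ x R^{n2})
    blocks = [[cp.Variable((n2 + 1, n2 + 1), PSD=True) for k in range(n1)]
              for i in range(S)]
    cons = []
    for k in range(n1):
        # the north-west scalar x_k is shared across the scenarios i
        cons += [blocks[i][k][0, 0] == blocks[0][k][0, 0] for i in range(1, S)]
    x = cp.hstack([blocks[0][k][0, 0] for k in range(n1)])      # X = diag(x)
    Ys = [sum(blocks[i][k][1:, 1:] for k in range(n1)) for i in range(S)]
    cons.append(cp.sum(x) + sum(cp.trace(Y) for Y in Ys) == 1)  # I.X + sum I.Y_i = 1
    obj = cp.sum(cp.multiply(np.diag(A), x))                    # A . X for diagonal X
    for i in range(S):
        for k in range(n1):
            obj += Bs[i][k, :] @ blocks[i][k][1:, 0]            # trace(B_i Z_i)
        obj += cp.trace(Cs[i] @ Ys[i])                          # C_i . Y_i
    return cp.Problem(cp.Minimize(obj + 1), cons).solve()       # +1 as described above
```

Because $\mathcal{CBC}\subseteq\mathcal{CMP}$, the value returned this way upper-bounds the optimal value of (14), shifted by the constant 1.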
### 5.4 Design of the experiments and results
For each of the models we conduct two types of experiments. For the first one,
we choose the dimensions of the problem such that the complete positivity gap
can be bypassed, and for the second one such that this is not the case. For
the latter instances, we worked with outer approximations of the respective
set-completely positive constraints. Hence, the $\mathcal{CPI}$ relaxation was
further relaxed, so that the resulting problem can be qualified as a valid
lower bound. The relaxation of the inner approximation does not allow for such
a qualification, since we obtain a lower bound to an upper bound. However, the
relaxation yields valid upper bounds as a byproduct, since the original space
of variables stays intact for all but the $\mathcal{CBC}$ relaxation of
${\mathcal{F}}_{3}$. But for the latter, the complete positivity gap can be
bypassed regardless of the dimension of the problem data. For every choice of
$(n_{1},n_{2},S)$ and ${\mathcal{F}}_{i},\ i=1,2,3$, we generated 10 instances
from schemes 1 and 2 respectively. For every instance we calculated the
$\mathcal{CPI}$ lower bound and the bounds and approximations based on the
respective inner approximations, and in addition we used upper and lower
bounds achieved by Gurobi within a 5 minute time limit.
The results are summarized in Table 1. The "instance types" are indicated by a
quadruple of the form $n_{1}\\_n_{2}\\_S\\_s$, where
$s\in\left\\{1,2\right\\}$ indicates the scheme by which we constructed the
instances. In the multi-column "Conic Gaps" we report the average gap between
the $\mathcal{CPI}$ lower bound and the feasible solution generated from the
$\mathcal{CPI}$ bound (UB), the optimal value of the inner approximations (I),
and the feasible solution generated from the latter approximations (IUB). For
"Gurobi Gaps" we calculate these gaps with respect to the lower bound found by
Gurobi instead of the one obtained from $\mathcal{CPI}$. In addition we
present the gap between the $\mathcal{CPI}$-based lower bound and the
upper bound generated by Gurobi (O), and we also report the optimality gap
obtained by Gurobi itself within the 5 minute time limit (G). All the gaps are
reported in percentages relative to the respective lower bound. Finally, in
the last two multi-columns, we count the number of times the conic and the
Gurobi gaps were smaller than $0.01\%$, at which point we consider the
instance solved.
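Concretely, the tabulated percentages follow the convention below (a trivial helper of our own, mirroring the description above):

```python
def pct_gap(bound, lower_bound):
    # gap of a bound relative to the respective lower bound, in percent
    return 100.0 * (bound - lower_bound) / abs(lower_bound)

# an instance counts as solved once the gap drops below 0.01 (percent)
print(pct_gap(1.00005, 1.0) < 0.01)  # True
```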
For the experiments on ${\mathcal{F}}_{1}$ we see that the phenomenon already
documented in [3] persists: instances from scheme 1 are regularly solved via
$\mathcal{CPI}$ alone, while that is not the case for the scheme 2 instances.
However, for these instances the feasible solutions from the $\mathcal{DDC}$
approximation yield excellent bounds, revealing that both approximations are
very good, albeit not quite good enough to solve the instances at the $0.01\%$
tolerance threshold. We also note that $\mathcal{CPI}$ yields a much better
lower bound than Gurobi does within the time limit, sometimes even certifying
optimality of Gurobi’s feasible solution. The upper bound provided by
$\mathcal{DDC}$ performs similarly to Gurobi’s upper bound when measured
relative to Gurobi’s lower bound.
Regarding ${\mathcal{F}}_{2}$, we see that the conic gaps were narrowed quite
substantially by the inner approximation and regularly closed. Gurobi on its
own performed similarly, except for the largest instance types, for which it
was outperformed quite substantially. What is remarkable is the fact that the
upper bounds of the approximations of the $\mathcal{CMP}$ relaxation seem to
also upper bound Gurobi’s lower bounds. Conversely, Gurobi’s upper bounds seem
to lie close to the $\mathcal{CPI}$-based lower bounds on average. This might
indicate that the $\mathcal{CMP}$ relaxation may in fact be tight despite the
fact that Theorem 8 is not applicable. We hope to address this phenomenon in
future research. Again, instances from scheme 2 posed a greater
challenge. Note that the smallest gaps are regularly produced by the feasible
solution of the inner approximation.
Finally, we can see that for ${\mathcal{F}}_{3}$ the upper bound based on
$\mathcal{CBC}$ unfortunately performed quite poorly, which is surprising
given that the derivation of $\mathcal{CBC}$ is the one closest to
classical results in matrix completion. On the brighter side, we see that
$\mathcal{CPI}$ produced good lower bounds that narrow the gap to Gurobi’s
feasible solution better than Gurobi itself.
We also recorded the average time spent on the different approaches in Table
2. We decompose these running times into the time the respective solver used
to produce the bounds (solver time), the internal model-building time reported
by Yalmip (yalmip time), and the time our implementation used for building the
model that is passed to Yalmip (model time). We would like to point out a
couple of things. Firstly, for the majority of the instance types Gurobi ran
into the time limit on average. Also, on average the $\mathcal{DDC}$
approximations are more demanding for Mosek than the other approximations.
Still, the models were solved quite quickly, certainly in less than 5 minutes.
Hence, our methods can produce good bounds with reasonable effort. Finally, we
would like to point out that there are also spikes in the model time for some
of the inner approximations. This, however, is an artifact of our
implementation that can potentially be avoided with better programming.
## Conclusion
In this text we presented a new approach to sparse conic optimization based on
a generalization of the set-completely positive matrix cone, which was
motivated by the study of the two-stage stochastic standard quadratic
optimization problem. Using inner and outer approximations of said cone
allows for certificates of exactness of a sparsification outside of
traditional matrix completion approaches. We demonstrated in numerical
experiments that this approach can close, or at least narrow, the optimality
gap in interesting cases. We think that this provides a proof of concept that
may motivate future research. Interesting questions remain, for example, about
the quality of the inner approximations and whether they can be proven to be
exact in special cases.
#### Data availability statement
The datasets generated during and/or analysed during the current study are
available from the corresponding author on reasonable request.
| | Conic Gaps | Gurobi Gaps | # Gurobi Gaps | # Conic Gaps
---|---|---|---|---|---
| instance-types | UB | I | IUB | G | UB | I | IUB | O | UB | I | IUB | O | UB | I | IUB
${\mathcal{F}}_{1}$ | 10_10_10_1 | 0,00 | (2,57) | 0,02 | 27,20 | 25,59 | (28,93) | 28,44 | 1,27 | 0 | (0) | 0 | 0 | 10 | (0) | 4
20_20_20_1 | 0,00 | (3,42) | 0,03 | 164,45 | 95,96 | 102,58 | (102,19) | 35,32 | 0 | (0) | 0 | 0 | 10 | (0) | 1
2_10_10_1 | 0,00 | 18,36 | 0,18 | 23,16 | 21,63 | 44,12 | 43,31 | 1,24 | 0 | 0 | 0 | 0 | 10 | 0 | 1
2_5_5_1 | 0,00 | 16,97 | 0,16 | 6,68 | 6,58 | 24,76 | 23,97 | 0,09 | 0 | 0 | 0 | 1 | 10 | 0 | 0
10_10_10_2 | 39,57 | (0,29) | 0,37 | 40,36 | 94,88 | (40,36) | 91,35 | 0,30 | 0 | (0) | 0 | 0 | 0 | (0) | 1
20_20_20_2 | 67,37 | (0,32) | 0,46 | 117,23 | 216,92 | (89,96) | 177,13 | 14,72 | 0 | (0) | 0 | 0 | 0 | (0) | 1
2_10_10_2 | 1,66 | 0,25 | 0,23 | 0,48 | 2,15 | 0,73 | 24,04 | 0,00 | 0 | 0 | 0 | 10 | 1 | 0 | 4
2_5_5_2 | 2,53 | 0,18 | 0,19 | 0,08 | 2,61 | 0,25 | 18,77 | 0,00 | 0 | 0 | 1 | 10 | 1 | 0 | 4
${\mathcal{F}}_{2}$ | 10_10_10_1 | 0,06 | (0,01) | 0,00 | 0,01 | 0,06 | (0,01) | 0,03 | 0,01 | 8 | (1) | 1 | 8 | 8 | (8) | 10
20_20_20_1 | 0,01 | (0,00) | 0,00 | 4,53 | 4,36 | (4,35) | 4,38 | 0,17 | 0 | (0) | 0 | 0 | 7 | (10) | 10
3_10_3_1 | 0,00 | 0,00 | 0,00 | 0,01 | 0,01 | 0,01 | 0,03 | 0,00 | 7 | 7 | 0 | 10 | 10 | 10 | 10
3_5_3_1 | 0,19 | 0,00 | 0,00 | 0,00 | 0,20 | 0,01 | 0,14 | 0,00 | 9 | 9 | 7 | 10 | 9 | 9 | 9
10_10_10_2 | 3,73 | (1,48) | 0,02 | 0,01 | 2,21 | (0,00) | 0,06 | 1,48 | 2 | (7) | 3 | 2 | 2 | (2) | 4
20_20_20_2 | 1,22 | (0,48) | 0,01 | 11,84 | 11,49 | (10,67) | 10,78 | 1,54 | 0 | (0) | 0 | 0 | 2 | (3) | 8
3_10_3_2 | 15,09 | 2,66 | 0,03 | 0,01 | 11,90 | 0,01 | 0,07 | 2,66 | 2 | 10 | 2 | 3 | 3 | 3 | 5
3_5_3_2 | 7,73 | 1,21 | 0,01 | 0,00 | 6,31 | 0,00 | 0,01 | 1,21 | 6 | 10 | 10 | 6 | 6 | 6 | 7
${\mathcal{F}}_{3}$ | 10_10_10_1 | - | 404,20 | - | 1,97 | - | 412,79 | - | 0,27 | - | 0 | - | 0 | - | 0 | -
20_20_20_1 | - | 484,57 | - | 1302,01 | - | 954,89 | - | 675,72 | - | 0 | - | 0 | - | 0 | -
10_3_10_1 | - | 907,85 | - | 0,62 | - | 914,15 | - | 0,00 | - | 0 | - | 10 | - | 0 | -
5_3_5_1 | - | 257,07 | - | 0,21 | - | 257,83 | - | 0,00 | - | 0 | - | 10 | - | 0 | -
10_10_10_2 | - | 149,54 | - | 1,79 | - | 153,92 | - | 0,04 | - | 0 | - | 0 | - | 0 | -
10_3_10_2 | - | 164,45 | - | 0,62 | - | 166,06 | - | 0,01 | - | 0 | - | 5 | - | 0 | -
20_20_20_2 | - | 278,50 | - | 1909,44 | - | 304,09 | - | 1781,02 | - | 0 | - | 0 | - | 0 | -
5_3_5_2 | - | 60,58 | - | 0,22 | - | 60,93 | - | 0,00 | - | 0 | - | 10 | - | 0 | -
Table 1: Results on the quality of bounds

| | Solver time | Yalmip time | Model time | Total
---|---|---|---|---|---
| instance-types | Exact | Inner | Outer | Exact | Inner | Outer | Exact | Inner | Outer | Exact | Inner | Outer
${\mathcal{F}}_{1}$ | 10_10_10_1 | 300,088 | 1,269 | 0,149 | 0,126 | 0,280 | 0,132 | 0,229 | 1,419 | 0,176 | 300,443 | 2,969 | 0,457
20_20_20_1 | 300,076 | 54,426 | 4,898 | 0,191 | 1,880 | 0,262 | 2,550 | 27,699 | 0,968 | 302,817 | 84,004 | 6,128
2_10_10_1 | 300,339 | 0,102 | 0,036 | 0,105 | 0,176 | 0,108 | 0,097 | 0,638 | 0,116 | 300,540 | 0,916 | 0,259
2_5_5_1 | 300,630 | 0,026 | 0,010 | 0,084 | 0,100 | 0,088 | 0,029 | 0,152 | 0,050 | 300,742 | 0,278 | 0,147
10_10_10_2 | 300,052 | 1,093 | 0,159 | 0,098 | 0,228 | 0,106 | 0,176 | 1,072 | 0,148 | 300,326 | 2,394 | 0,412
20_20_20_2 | 300,169 | 72,279 | 4,340 | 0,035 | 1,685 | 0,199 | 2,265 | 25,130 | 0,806 | 302,469 | 99,094 | 5,346
2_10_10_2 | 300,767 | 0,103 | 0,043 | 0,102 | 0,203 | 0,111 | 0,118 | 0,710 | 0,130 | 300,987 | 1,016 | 0,283
2_5_5_2 | 260,024 | 0,022 | 0,011 | 0,100 | 0,116 | 0,101 | 0,046 | 0,173 | 0,055 | 260,170 | 0,311 | 0,167
${\mathcal{F}}_{2}$ | 10_10_10_1 | 102,145 | 0,137 | 0,132 | 0,141 | 0,108 | 0,087 | 0,432 | 0,146 | 0,088 | 102,718 | 0,391 | 0,308
20_20_20_1 | 311,688 | 4,922 | 4,596 | 0,882 | 0,358 | 0,117 | 9,657 | 0,600 | 0,291 | 322,228 | 5,879 | 5,004
3_10_3_1 | 31,109 | 0,013 | 0,014 | 0,103 | 0,084 | 0,085 | 0,079 | 0,046 | 0,030 | 31,291 | 0,143 | 0,128
3_5_3_1 | 0,298 | 0,006 | 0,008 | 0,086 | 0,081 | 0,080 | 0,042 | 0,043 | 0,026 | 0,427 | 0,129 | 0,113
20_20_20_2 | 302,046 | 4,701 | 8,235 | 0,844 | 0,350 | 0,114 | 9,204 | 0,550 | 0,244 | 312,094 | 5,600 | 8,593
10_10_10_2 | 168,211 | 0,148 | 0,216 | 0,146 | 0,112 | 0,089 | 0,459 | 0,149 | 0,087 | 168,816 | 0,409 | 0,392
3_10_3_2 | 12,291 | 0,016 | 0,045 | 0,090 | 0,083 | 0,083 | 0,056 | 0,044 | 0,027 | 12,437 | 0,142 | 0,155
3_5_3_2 | 0,130 | 0,007 | 0,015 | 0,088 | 0,080 | 0,082 | 0,035 | 0,044 | 0,026 | 0,252 | 0,131 | 0,122
${\mathcal{F}}_{3}$ | 10_10_10_1 | 300,266 | 0,247 | 0,116 | 0,102 | 0,216 | 0,097 | 0,221 | 1,294 | 0,056 | 300,589 | 1,758 | 0,269
20_20_20_1 | 300,395 | 6,348 | 4,149 | 0,190 | 1,554 | 0,178 | 3,291 | 31,444 | 0,256 | 303,876 | 39,346 | 4,583
10_3_10_1 | 300,270 | 0,062 | 0,027 | 0,099 | 0,193 | 0,092 | 0,134 | 0,940 | 0,050 | 300,503 | 1,194 | 0,169
5_3_5_1 | 300,432 | 0,014 | 0,008 | 0,100 | 0,121 | 0,092 | 0,032 | 0,188 | 0,027 | 300,564 | 0,324 | 0,127
10_10_10_2 | 300,194 | 0,279 | 0,125 | 0,098 | 0,224 | 0,104 | 0,221 | 1,416 | 0,056 | 300,513 | 1,919 | 0,284
20_20_20_2 | 300,096 | 4,593 | 2,919 | 0,169 | 1,373 | 0,161 | 2,822 | 22,726 | 0,173 | 303,087 | 28,692 | 3,253
10_3_10_2 | 300,422 | 0,074 | 0,032 | 0,103 | 0,207 | 0,100 | 0,118 | 1,015 | 0,057 | 300,643 | 1,295 | 0,189
5_3_5_2 | 300,451 | 0,024 | 0,014 | 0,134 | 0,154 | 0,124 | 0,038 | 0,255 | 0,036 | 300,624 | 0,433 | 0,173
Table 2: Running times of the models
## References
* [1] A. Berman and N. Shaked-Monderer. Completely positive matrices. World Scientific, 2003.
* [2] I. Bomze and M. Gabl. Interplay of non-convex quadratically constrained problems with adjustable robust optimization. Mathematical Methods of Operations Research, 93:115–151, 2021.
* [3] I. M. Bomze, M. Gabl, F. Maggioni, and G. C. Pflug. Two-stage Stochastic Standard Quadratic Optimization. European Journal of Operational Research, 299(1):21–34, 2022.
* [4] S. Burer. On the copositive representation of binary and continuous nonconvex quadratic programs. Mathematical Programming, 120(2):479–495, 2009.
* [5] S. Burer. Copositive programming. In Handbook on semidefinite, conic and polynomial optimization, pages 201–218. Springer, 2012.
* [6] J. H. Drew and C. R. Johnson. The completely positive and doubly nonnegative completion problems. Linear and Multilinear Algebra, 44(1):85–92, 1998.
* [7] G. Eichfelder and J. Povh. On the set-semidefinite representation of nonconvex quadratic programs over arbitrary feasible sets. Optimization Letters, 7(6):1373–1386, 2013.
* [8] L. Fan, H. G. Aghamolki, Z. Miao, and B. Zeng. Achieving SDP Tightness Through SOCP Relaxation with Cycle-Based SDP Feasibility Constraints for AC OPF. arXiv preprint arXiv:1804.05128, 2018.
* [9] M. Fukuda, M. Kojima, K. Murota, and K. Nakata. Exploiting sparsity in semidefinite programming via matrix completion I: General framework. SIAM Journal on Optimization, 11(3):647–674, 2001.
* [10] S. Kim, M. Kojima, and K.-C. Toh. Doubly nonnegative relaxations are equivalent to completely positive reformulations of quadratic optimization problems with block-clique graph structures. Journal of Global Optimization, 77:513–541, 2020.
* [11] S. Kim, M. Kojima, and K.-C. Toh. A geometrical analysis on convex conic reformulations of quadratic and polynomial optimization problems. SIAM Journal on Optimization, 30(2):1251–1273, 2020.
* [12] K. Natarajan and C.-P. Teo. On reduced semidefinite programs for second order moment bounds with applications. Mathematical Programming, 161(1):487–518, 2017.
* [13] D. Padmanabhan, K. Natarajan, and K. Murthy. Exploiting partial correlations in distributionally robust optimization. Mathematical Programming, 186(1):209–255, 2021.
* [14] J. Sliwak, M. Anjos, L. Létocart, J. Maeght, and E. Traversi. Improving clique decompositions of semidefinite relaxations for optimal power flow problems. arXiv preprint arXiv:1912.09232, 2019.
* [15] L. Vandenberghe and M. S. Andersen. Chordal graphs and semidefinite optimization. Foundations and Trends in Optimization, 1(4):241–433, 2015.
# Player-AI Interaction: What Neural Network Games Reveal About AI as Play
Jichen Zhu <EMAIL_ADDRESS> Drexel University, Philadelphia, PA, USA;
Jennifer Villareale <EMAIL_ADDRESS> Drexel University, Philadelphia, PA, USA;
Nithesh Javvaji <EMAIL_ADDRESS> Northeastern University, Boston, MA, USA;
Sebastian Risi <EMAIL_ADDRESS> IT University Copenhagen, Copenhagen, Denmark;
Mathias Löwe <EMAIL_ADDRESS> IT University Copenhagen, Copenhagen, Denmark;
Rush Weigelt <EMAIL_ADDRESS> Drexel University, Philadelphia, PA, USA;
and Casper Harteveld <EMAIL_ADDRESS> Northeastern University, Boston, MA, USA
(2021)
###### Abstract.
The advent of artificial intelligence (AI) and machine learning (ML) brings
human-AI interaction to the forefront of HCI research. This paper argues that
games are an ideal domain for studying and experimenting with how humans
interact with AI. Through a systematic survey of neural network games (n =
38), we identified the dominant interaction metaphors and AI interaction
patterns in these games. In addition, we applied existing human-AI interaction
guidelines to further shed light on player-AI interaction in the context of
AI-infused systems. Our core finding is that AI as play can expand current
notions of human-AI interaction, which are predominantly productivity-based.
In particular, our work suggests that game and UX designers should consider
flow to structure the learning curve of human-AI interaction, incorporate
discovery-based learning to play around with the AI and observe the
consequences, and offer users an invitation to play to explore new forms of
human-AI interaction.
Human-AI Interaction; Neural Networks; User Experience; Game Design
(CHI ’21, May 8–13, 2021, Yokohama, Japan; doi: 10.1145/3411764.3445307)
Figure 1. Selection of neural network (NN) games. (From left to right. Top: How to Train Your Snake, Idle Machine Learning Game, Evolution, EvoCommander, Machine Learning Arena, Hey Robot. Middle: Quick, Draw!, Semantris, Dr. Derk’s Mutant Battlegrounds, Forza Car Racing, Democracy 3, Darwin’s Avatars, AudioinSpace. Bottom: NERO, Black & White, Creatures, MotoGP19, Supreme Commander 2, Galactic Arms Race, Petalz)
## 1\. Introduction
With the recent boom in artificial intelligence (AI) technology (unless
otherwise specified, we use the term AI broadly to include a wide range of
artificial intelligence and machine learning techniques), people are
interacting with a growing number of AI-infused products in many aspects of
everyday life. Already, these AI systems influence our decisions (e.g.,
everyday life. Already, these AI systems influence our decisions (e.g.,
recommendation systems (Smith and Linden, 2017)), inhabit our households
(e.g., robotic appliances (Sung et al., 2007)), and accompany us in our
playful experiences (Mateas and Stern, 2003; Zhu and Ontanón, 2010; Zhu and
Ontañón, 2013) and educational games (Valls-Vargas et al., 2015; Zhu et al.,
2019).
The technological development has precipitated renewed interest in the Human
Computer Interaction (HCI) community. In addition to improving the usability
of individual products (Myers et al., 2018), HCI researchers synthesized
design guidelines for human-AI interaction from the past decades (Amershi et
al., 2019). There is a growing recognition that, compared to traditional
interactive systems, AI-infused products impose additional challenges (e.g.,
technical barrier, low interpretability) to the current user experience (UX)
design process (Dove et al., 2017). Furthermore, new interdisciplinary
research areas have emerged around topics such as explainable AI (Tintarev and
Masthoff, 2007; Zhu et al., 2018; Binns et al., 2018; Rader et al., 2018),
ethics & fairness (Bryson, 2010; Dove et al., 2017; Holmquist, 2017), and
machine learning (ML) as a design material for UX (Yang et al., 2018b; Yang et
al., 2018a).
Among the fast-growing body of literature on human-AI interaction in the CHI
community, one overlooked area is the context of play. With few exceptions,
most recent literature focuses on productivity-related domains such as
e-commerce, navigation and autocomplete (Amershi et al., 2019). While these
are important domains for human-AI interaction, the history of AI and human-AI
interaction has long been associated with play. For instance, ELIZA, one of
the first AI programs designed to interact with lay users, was a playful
satire of a certain school of psychotherapy (Weizenbaum, 1966). Games such as
Chess, Poker, Go, and StarCraft have continued to serve as benchmarks that
propelled the development of AI since the beginning of the field. Since games
naturally focus on end-user experience, game AI research has accumulated
valuable knowledge related to human-AI interaction (Mateas, 1999; Mateas and
Stern, 2003; Young et al., 2004; Zhu and Harrell, 2008; Risi and Togelius,
2015; Valls-Vargas et al., 2017).
In this paper, we propose the new construct of player-AI interaction to
highlight how people interact with AI in the context of play, especially
through computer games. To provide an overview of existing work in this area,
we conducted the first systematic review of player-AI interaction in the scope
of Neural Network games — computer games in which players interact with an NN
as part of the core gameplay. A neural network (NN) is a computational model
that includes nodes (i.e., neurons) and connections between these nodes that
transmit information (Stanley, 2007). The strengths of these connections
(i.e., weights) are typically adjusted through some learning process. For our
paper, this definition covers both NNs that control agents / non-player
characters (NPCs) in games (Jallov et al., 2016; Vinyals et al., 2019) and
generative NN models that produce game content (Hastings et al., 2009; Risi
and Togelius, 2015).
We chose NN games for two key reasons. First, given the wide adoption of AI in
games, we had to constrain our systematic (qualitative) review. Second, and
more importantly, NN games provide insights into some of the most pressing
open problems in human-AI interaction. For example, NNs are notoriously
difficult for UX designers to work with because of their low interpretability
of the underlying process and the frequent unpredictability of their outcomes.
Studying NN games
can thus provide valuable information on how game designers have to work with
these challenges.
We collected 38 NN games and applied a two-phased qualitative analysis to
examine them. In the first phase, we use close reading and grounded theory to
identify the overarching interaction metaphors and patterns of how NNs are
represented in the game user interface (UI). In the second phase, we apply
current human-AI interaction design guidelines (Amershi et al., 2019),
compiled from a wide range of productivity-based domains, to our dataset. From
these analyses, we derive design lessons for where games do well and identify
open areas that can expand our current notion of human-AI interaction. A key
design insight is that reframing AI as play offers a useful approach for
considering human-AI interaction in games and beyond.
The core argument of this paper is that games are a rich and currently
overlooked domain for advancing human-AI interaction. The design space
afforded by structuring AI as play, as game designers have been exploring, can
point out new opportunities for AI-infused products in general. At the same
time, insights of the generalized guidelines from other domains can be adapted
to improve player-AI interaction. The key contributions of this paper are as
follows:
* •
We propose the new research area of player-AI interaction. Through the first
systematic review on player-AI interaction in the context of NN games, we
showcase how player-AI interaction can expand the current productivity-based
discussions around human-AI interaction.
* •
We adapted existing design guidelines for human-AI interaction to the context
of games. Currently, there are no synthesized metrics to evaluate player-AI
interaction.
* •
We provide several insights from NN games (e.g., flow, exploration) to improve
current challenges in human-AI interaction (e.g., learnability of AI).
## 2\. Related Work
In this section, we summarize related work in human-AI interaction and AI-
based games research. We aim to bridge these two disconnected areas.
### 2.1. Human-AI Interaction
The HCI community has developed a body of work on how to design user
interactions to improve productivity through AI-based applications (Herlocker
et al., 2000; Horvitz, 1999; Höök, 2000; Steinfeld et al., 2006; Winograd,
2006). Thanks to increasingly sophisticated big data and deep neural networks
(i.e., deep learning), AI-infused products have started to enter the consumer
market, prompting a new surge of interest in human-AI interaction (Amershi et
al., 2019; Bansal et al., 2019; Oulasvirta et al., 2020) and its societal
impact in topics such as explainability and transparency (Tintarev and
Masthoff, 2007; Zhu et al., 2018; Binns et al., 2018; Rader et al., 2018).
An important research area is user-centered design for human-AI interaction.
For instance, in conversational agents, a widely adopted type of AI-infused
products, researchers have reported the wide gap between user expectations and
the user experience (UX) of these systems (Luger and Sellen, 2016) and how
users develop their own strategies to work around the obstacles (Myers et al.,
2018). Recently, researchers synthesized guidelines, principles, and theories
into coherent design frameworks for human-AI interaction (Amershi et al.,
2019; Sukis, 2019; Wang et al., 2019). While user-centered (or player-
centered) design is key in game development (Sweetser and Johnson, 2004), few
works have looked at games as a domain for human-AI interaction, despite the
long history of games and AI. Our work is the first to do so in a systematic
and empirical way. More specifically, we leverage the existing meta-review of
human-AI interaction design principles (Amershi et al., 2019) to investigate
NN games.
There is a growing understanding in recent literature that designing AI and ML
products is especially challenging for UX designers. Dove et al. (Dove et al.,
2017) acknowledged that, despite the regular use of ML in UX products, there
has been little design innovation. Echoing the challenges of using ML as
design material, Yang et al. (Yang et al., 2020) argued that capability
uncertainty and output complexity of AI systems are the two root causes of why
human-AI interaction is uniquely tricky to design. The work presented in this
paper aims to identify player-AI interaction, which is currently separated
from the mainstream human-AI interaction literature, as a rich domain for
further study and experimentation. Lessons from the game research community on
how to structure human-AI interaction in the context of play can help to
expand the current body of work in human-AI interaction.
### 2.2. AI-based Game Design and Player Experience
In AI research, there is an extended history of using games as a rich domain
to motivate algorithmic advancements. Salient examples include Chess in the
era of “Good Old Fashioned AI” (GOFAI) (Campbell et al., 2002), Go (Silver et
al., 2016), classic Atari video games (Mnih et al., 2013), or even the popular
AAA game StarCraft (Ontanón et al., 2013; Vinyals et al., 2019) in the age of
deep learning. The advances in game AI in turn opened new design spaces of
player experience in research (Mateas, 1999; Mateas and Stern, 2003; Young et
al., 2004; Zhu and Harrell, 2008; Valls-Vargas et al., 2017) as well as
commercially released games and game engines (e.g., Versu (Evans and Short,
2013), Left4Dead (Valve, 2008) and Civilization VI (Games, 2016)).
However, with few exceptions, games have only recently started to be used as a
serious domain for human-AI interaction research. For instance, Gomme and
Bartle (Gomme and Bartle, 2020) used strategy games to study players’
expectations for what they consider to be a worthy AI-controlled opponent.
Along those lines, several researchers proposed using games and playful
experiences to help designers and users learn AI (Myers et al., 2020; Fulton
et al., 2020; Pemberton et al., 2019).
Most existing work has focused on high-level metaphors (often referred to as
“design patterns”) of how players and designers can interact with AI. For
instance, Treanor et al. (Treanor et al., 2015) derived nine patterns based on
what players do: AI as role-model, trainee, editable, co-creator, adversary,
villain, or spectacle, and whether AI is visible or guided. Cook et al. (Cook
et al., 2016) further examined design patterns in procedural content
generation (PCG)-based games and derived different AI design patterns. In the
context of assisting the game development process, Riedl and Zook (2013)
proposed that AI plays the role of actor, designer, and producer. While the
above work provides a critical starting point for our work, they are “meant to
be a tool for thinking about creating AI-based games, rather than serve as a
comprehensive taxonomy of methods” (Treanor et al., 2015). Finally, Guzdial et
al. (Guzdial et al., 2019) used the taxonomy of friend, collaborator, student,
and manager to describe the different interaction metaphors for how game
designers interact with an AI-based game level editor. Our work builds on this
tradition of using human-human interaction as metaphors to structure the
interaction between humans and AI. In addition, we extend this literature by
conducting an in-depth empirical analysis through grounded theory instead of
relying on researchers’ domain expertise, as is the case for the above-
mentioned existing work.
Finally, there is a significant body of work in games research to understand
player experience (Nacke et al., 2009; Lucero et al., 2013; Desurvire and
Wiberg, 2009; Denisova and Cairns, 2015; Abeele et al., 2020). For example,
the game engagement questionnaire (GEQ) (Brockmyer et al., 2009) is a widely
used instrument for measuring player engagement, although recently it has been
approached with increasing criticism (Law et al., 2018). Other notable
frameworks include game involvement (Calleja, 2007), game usability (Desurvire
and Wiberg, 2009), and design heuristics (Lucero et al., 2013). While these
frameworks are useful to improve the general player experience, they do not
have sufficient focus on the interaction between players and AI to guide the
human-AI interaction design of games. Thus, our work proposes the first set of
guidelines to design and evaluate player-AI interaction.
## 3. Dataset: Neural Network Games
This section describes our systematic search process and the resulting dataset
of 38 NN Games. Table 1 provides an overview of this dataset, including the
characteristics described in this section.
### 3.1. Search Strategy and Data Collection
Figure 2. Data collection process.
We searched two popular web gaming portals—Steam and itch.io—and a widely used
game AI book, Artificial Intelligence and Games (Yannakakis and Togelius,
2018). We chose these three sources because they collectively cover a wide
variety of games of different production modes. Steam is the largest digital
distribution platform for PC gaming, offering over 34,000 games with over 95
million monthly active users in 2019 (ste, 2020). itch.io is one of the
largest platforms for indie games, containing nearly 100,000 games with
various metadata. Artificial Intelligence and Games is the most cited book on
game AI that includes examples of games with notable AI innovations. The book
complements the previous two sources for its coverage on AAA commercial games
and research games.
Our inclusion criteria were computer games wherein players can interact with
an NN as part of the core gameplay (i.e., gameplay loop). We use the
definition of core gameplay as “the set of actions the player iterates on the
most while playing the game [and which] should directly influence the outcomes
of the game” (Guardiola, 2016). We further excluded work with no clear win
condition and no clear feedback on how player interaction with the AI impacts
the game, as these games lack the basic elements for meaningful player-AI
interaction. Notice that sandbox games with clear feedback to player
interaction are included (blackandwhite; aidungeon; corral; Grand et al.,
1997; Hastings et al., 2009). For the same reason, we excluded games where the
NN did not interact with players (e.g., ML agents that can automatically play
games (Snodgrass and Ontañón, 2014)). Finally, we excluded digitized versions
of traditional board/card games (e.g., Chess, Go, Poker) to focus on computer
games. Future research is needed for investigating player-AI interaction in
traditional games.
Our search process is summarized in Figure 2. Similar to other systematic
reviews on large game repositories (Alharthi et al., 2018), we used pre-
existing game tags in these systems. On itch.io, we used its tags “neural-network,” “machine-learning,” and “AI.” On Steam, we searched with the terms “neural network” and “machine learning,” and also used Steam’s own tag “artificial intelligence.” We acknowledge that not all NN games identify
themselves as such, and this is a limitation of our study. However, it is
possible that the games that advertise their use of NNs are more likely to pay
extra attention to player-AI interaction. In the Artificial Intelligence and
Games book (Yannakakis and Togelius, 2018), we went through the chapter “A
Brief History of Artificial Intelligence and Games” to collect the relevant
games. In order to include as many influential examples as possible, we also
asked on social media in the games and game AI communities for additional
work. The suggested games are included in the category of “additional games”
along with the games the authors were aware of.
After screening the 125 games resulting from the above process for
eligibility, we found 38 games that met the inclusion criteria (Table 1). The
most common reasons for games to be excluded were that 1) they used content
(e.g., music) generated by an external NN, but the NN was not part of the
gameplay loop, and 2) they did not have full human-AI interaction due to the
lack of feedback for player actions. For example, Bird by Example (bird) is a
single-player RPG where players navigate a forest environment and interact
with their bird offspring that is controlled by an NN. This game was excluded
because the player’s actions (i.e., walk, jump, punch) produced no visible
change in the NN’s behavior. As a result, it was unclear how the player was
intended to interact with the NN and to what end the NN impacted gameplay.
Note that our goal was not to develop a comprehensive list, but rather to
capture a representative sample of NN games to analyze current trends in
player-AI interaction.
### 3.2. Dataset
For each game we included, two researchers collected the following to form our
dataset: 1) screen recording of one researcher playing at least one hour of
the game, 2) game developers’ description of the game and their design intent
(i.e., via the game’s website, developer blog, and academic publications), and
3) technical features of the NN (online vs. offline learning, and the type of output produced by the NN). We used findings from 1) and 2) to determine 3). If it was
not apparent, the two researchers consulted our NN expert coauthors for
resolution. This section summarizes the key characteristics of our dataset. It
should be noted that seven games did not have playable versions publicly
available (marked with * in Table 1). For those games, the researchers used
existing gameplay footage available on the Internet for our Phase 1 analysis.
However, for Phase 2, directly playing the games is necessary. Thus, we
excluded the unplayable games for our guideline analysis (see Figure 2).
#### 3.2.1. Characteristics as Games
As a collection of games, our dataset is diverse in multiple aspects. Indie
and research games make up most of the dataset (74%), while only 26% are AAA
games. Using Heintz and Law’s taxonomy (Heintz and Law, 2015), our games cover
all the genres: 15 simulation games (39%), 7 puzzle games (18%), 5 strategy
games (13%), 4 role-playing games (11%), 3 action games (8%), 3 sports games
(8%) and 1 adventure game (3%). In addition, 16 games are multiplayer (42%),
and the remaining 22 games are single player (58%). This suggests that our
dataset has a good representation of different types of games.
#### 3.2.2. Characteristics of NN and Game AI
From the technical point of view, our dataset also covered a wide range of approaches. There is a relatively even split between NN games with online
learning (58%) and offline learning methods (42%). This technical feature is
associated with different gameplay characteristics. In online learning games,
the network is (further) trained as the player interacts with it. Therefore,
these games can adapt to individual players’ actions in real-time. Offline
learning games, on the other hand, are shipped with fixed NNs and are not
adaptive in the same way. However, offline learning games have the advantage
of handling more complex user input, such as natural language
(aidungeon; semantris; guesstheword; hey) and images (villareale2020innk; quick). The
output of the NN can be divided into behavior (89%) and content (11%).
Behavior output consists of the actions and decisions by NN-controlled
characters. Content output typically takes the form of in-game assets such as flowers (risi2015petalz) or weapons (hastings2009evolving).
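To make the online/offline distinction concrete, the following minimal Python sketch contrasts the two modes. It is our own illustration with invented names, not code from any surveyed game:

```python
import random

class TinyNet:
    """A one-neuron 'network': just enough to illustrate the two learning modes."""
    def __init__(self, n_inputs, weights=None):
        self.weights = weights or [random.uniform(-1, 1) for _ in range(n_inputs)]

    def forward(self, inputs):
        return sum(w * x for w, x in zip(self.weights, inputs))

    def update(self, inputs, error, lr=0.1):
        # One gradient-like step: nudge the weights to reduce the error.
        self.weights = [w + lr * error * x for w, x in zip(self.weights, inputs)]

class CreatureAgent:
    def __init__(self, net, online):
        self.net = net
        self.online = online  # online games keep training during play

    def act(self, observation):
        return self.net.forward(observation)

    def observe_feedback(self, observation, reward):
        if self.online:
            # Online learning: the player's feedback changes the NN in real time.
            self.net.update(observation, reward)
        # Offline learning: the shipped network stays fixed; feedback is ignored.

# An online game adapts to the individual player...
online_agent = CreatureAgent(TinyNet(3), online=True)
# ...while an offline game ships a pre-trained, frozen network.
offline_agent = CreatureAgent(TinyNet(3, weights=[0.5, -0.2, 0.8]), online=False)
```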
A key challenge to our analysis is the black-box nature of AI and NNs,
especially from the players’ point of view. Similar to other AI-infused
products, games often contain complex interactions supported by different
algorithms. It can be difficult to attribute gameplay features to specific
algorithms without access to source code. The authors, including two game AI
researchers, made our best effort to determine whether there were multiple AIs
in a game (e.g., Black and White (blackandwhite)) and what the NN was
responsible for. We did so by using game developers’ descriptions (e.g.,
conference talks and online articles) and our technical expertise to analyze
the gameplay. Still, many such technical details remain unknown for commercial
games. We acknowledge this as a limitation of our study. However, what makes
NNs uniquely demanding in their human-AI interaction design is their reduced
predictability and low interpretability. We argue that an NN, whether as the
entire game AI or as a component, will introduce these characteristics to the
games they are part of. As a result, NN games can be studied as a whole,
regardless of their technical differences.
Table 1. Overview of the 38 NN games and the results of Phase 1 analysis. (*
Games without available playable versions and excluded for Phase 2.)
Column groups — Game Characteristics: Publisher, Genre; NN Characteristics: Multiple AIs?, NN Responsibilities, NN Output, Learning; Player-NN Framework Characteristics: Interaction Metaphor, UI.

| Game Title [Ludography] | Publisher | Genre | Multiple AIs? | NN Responsibilities | NN Output | Learning | Interaction Metaphor | UI |
|---|---|---|---|---|---|---|---|---|
| 2D Walk Evolution [2dwalkevolution] | Indie | Simulation | No | controls creature movement | Behaviors | Offline | Teammate | NN-Specific |
| AI Dungeon [aidungeon] | Indie | Adventure | No | creates natural language responses | Behaviors | Offline | Designer | NN-Limited |
| AIvolution [aivolution] | Indie | Simulation | Unknown | controls creature movement | Behaviors | Online | Teammate | NN-Limited |
| AudioinSpace* [hoover2015audioinspace] (Hoover et al., 2015) | Research | Action | Yes | creates weapon visuals and audio | Content | Online | Designer | NN-Agnostic |
| Black & White [blackandwhite] (Wexler, 2002) | AAA | Role-play | Yes | creates creature desires | Behaviors | Online | Apprentice | NN-Agnostic |
| Blitzkrieg 3 [blitzkrieg] | AAA | Strategy | Unknown | controls “Boris” battle behavior | Behaviors | Offline | Competitor | NN-Limited |
| BrainCrafter* [braincrafter] | Research | Puzzle | No | controls robot movement | Behaviors | Online | Apprentice | NN-Specific |
| Colin McRae Rally 2.0* [colin] | AAA | Sports | Unknown | controls the car’s driving performance | Behaviors | Offline | Competitor | NN-Agnostic |
| Competitive Snake [competitivesnake] | Indie | Puzzle | No | controls enemy snake behavior | Behaviors | Offline | Competitor | NN-Agnostic |
| Corral [corral] | Indie | Simulation | Unknown | controls chicken movement and preservation skills | Behaviors | Online | Apprentice | NN-Agnostic |
| Creatures [grand1997creatures] (Grand et al., 1997) | AAA | Role-play | Yes | controls the creature’s sensor-motor coordination | Behaviors | Online | Apprentice | NN-Agnostic |
| Darwin’s Avatar* [lessin2015darwin] (Lessin and Risi, 2015) | Research | Action | No | controls creature movement | Content | Offline | Designer | NN-Limited |
| Democracy 3 [democracy] | AAA | Role-play | Unknown | creates motivations and desires of the public | Behaviors | Offline | Designer | NN-Agnostic |
| Dr. Derk’s Mutant Battlegrounds [derks] | Indie | Simulation | Unknown | controls creature movement and behavior | Behaviors | Online | Apprentice | NN-Limited |
| EvoCommander* [jallov2016evocommander] (Jallov et al., 2016) | Research | Simulation | No | controls tank movement and shooting behavior | Behaviors | Online | Apprentice | NN-Specific |
| Evolution [evolution] | Indie | Simulation | Unknown | controls creature movement | Behaviors | Online | Teammate | NN-Specific |
| evolution for beginners [evolutionbeg] | Indie | Simulation | Unknown | controls creature movement and sensory input | Behaviors | Online | Apprentice | NN-Limited |
| Football Evo [football] | Indie | Simulation | No | controls player movement and behavior | Behaviors | Online | Apprentice | NN-Limited |
| Forza Car Racing [forza] (Takahashi, 2018) | AAA | Sports | Unknown | controls the car’s driving performance | Behaviors | Online | Competitor | NN-Limited |
| GAR [hastings2009evolving] (Hastings et al., 2009) | Research | Action | Yes | creates particle weapons | Content | Online | Designer | NN-Limited |
| Gridworld [grid] | Indie | Simulation | Unknown | controls creature behavior | Behaviors | Online | Designer | NN-Limited |
| Guess the Word [guesstheword] (Gero et al., 2020) | Research | Puzzle | Yes | creates natural language responses | Behaviors | Offline | Teammate | NN-Limited |
| Hey Robot [hey] | Indie | Puzzle | No | controls language processing | Behaviors | Offline | Teammate | NN-Agnostic |
| How to Train your Snake [snake] | Indie | Simulation | No | controls snake movement | Behaviors | Online | Apprentice | NN-Specific |
| Idle Machine Learning Game [idle] | Indie | Simulation | No | controls performance of the vehicle’s movement | Behaviors | Online | Apprentice | NN-Specific |
| iNNk [villareale2020innk] | Research | Puzzle | No | identifies sketches drawn by the player | Behaviors | Offline | Competitor | NN-Specific |
| Machine Learning Arena* [ferguson2019machine] | Research | Simulation | Unknown | controls robot behavior | Behaviors | Online | Teammate | NN-Specific |
| MotoGP19 [moto] | AAA | Sports | Unknown | controls the car’s driving performance | Behaviors | Offline | Competitor | NN-Agnostic |
| Neat Race [neat] | Indie | Simulation | No | controls car movement | Behaviors | Online | Apprentice | NN-Specific |
| NERO [stanley2005evolving] (Stanley et al., 2005) | Research | Simulation | No | controls robot movement and shooting behavior | Behaviors | Online | Apprentice | NN-Specific |
| Oui Chef!! [oui] (Cimolino et al., 2019) | Research | Role-play | No | controls chef behavior | Behaviors | Online | Apprentice | NN-Agnostic |
| Petalz* [risi2015petalz] (Risi et al., 2015) | Research | Simulation | No | creates flowers | Content | Offline | Designer | NN-Agnostic |
| Quick, Draw! [quick] | Research | Puzzle | No | identifies sketches drawn by the player | Behaviors | Offline | Teammate | NN-Limited |
| Race for the Galaxy [raceforthegalaxy] | AAA | Strategy | Unknown | controls opponent behavior | Behaviors | Offline | Competitor | NN-Limited |
| Roll for the Galaxy [rollforthegalaxy] | AAA | Strategy | Unknown | controls opponent behavior | Behaviors | Offline | Competitor | NN-Limited |
| Semantris [semantris] | Research | Puzzle | No | controls the classification of words | Behaviors | Offline | Teammate | NN-Limited |
| Supreme Commander 2 [rabin2015game] (Rabin, 2015) | AAA | Strategy | Yes | controls enemy unit flight and fight behavior | Behaviors | Offline | Competitor | NN-Agnostic |
| The Abbattoir Intergrade [abbattoir] | Indie | Strategy | Unknown | controls enemy unit offense behavior | Behaviors | Online | Competitor | NN-Agnostic |
## 4. Phase 1: Analyzing Player-AI Interaction in NN Games
The first broad question we attempt to answer is how existing games use neural
networks (NN), especially in terms of human-AI interaction. In particular, we
focused on two subsidiary research questions:
* RQ 1.a: How do NN games structure player-AI interaction?
* RQ 1.b: How visible are NNs in the UI of the core gameplay?
### 4.1. Methods
The overall analysis procedure involved a close reading of the gameplay data
in our dataset of 38 games. We used grounded theory to iteratively develop a
framework for how players interact with the NN (i.e., the interaction
metaphors) and how much the existence of NN is foregrounded in the core
gameplay (i.e., levels of visibility).
After initial observations of the games, two researchers conducted a close
reading of the dataset based on the following questions: 1) What role does the
NN play in the overall game system? 2) How does the player interact with the
NN in the game? 3) Where does the interaction with the NN occur in the
gameplay experience? and 4) How, if at all, are the NNs presented in the UI?
Next, the two researchers conducted a preliminary open coding to label notable
characteristics of the observations. During this step of the analysis, both
researchers first went through each game individually and noted initial labels
(e.g., player input via parameters, NN outputs behavior, NN is represented as
a creature) into a shared document. Then, they discussed this document to
iterate on the labels to form concepts (e.g., player directs NN toward a
desired goal). While constructing the concepts, they re-observed some of the
games and reviewed related literature to refine the classifications.
Once a common concept list was achieved and agreed upon, both researchers
separately re-analyzed the shared document to develop preliminary categories
that fit into a framework. During this step of the analysis, the researchers
presented each other’s framework to one another and then collectively iterated
on the categories to finalize the framework. The result of this phase is a set
of categories that make up the player-NN framework (i.e., the interaction
metaphors discussed in Section 4.2.1 and levels of visibility discussed in
Section 4.2.2). Using the framework, each researcher independently coded the
same 7 games (20%), one from each genre. After a complete agreement on the
codes, they then coded the rest of the games independently. When the codes
were complete, both researchers reviewed each other’s work to ensure there
were no discrepancies. If there was a discrepancy, the game would be discussed
and reviewed again by both researchers.
We opted for this consensual qualitative approach (Hill, 2012) instead of
inter-rater reliability (IRR) because analyzing NN games and their compliance
with guidelines (next section) is complex. This is due to the incredibly
varying contexts and nuances that need to be considered in this emerging but
under-studied area. Consensus coding is generally more suited for small
samples and for considering multiple viewpoints, which fits our study better
than IRR (McDonald et al., 2019). Specifically, to obtain such multiple viewpoints, the two researchers discussed above were a game researcher and an AI engineer, a pairing that yielded a much richer and deeper understanding.
### 4.2. Results
This section presents the results of our Phase 1 analysis. All classifications
of each game according to our player-NN framework are presented in Table 1.
#### 4.2.1. Interaction Metaphors
The HCI literature shows that interface metaphors (e.g., “Desktop” and “Search
Engine”) are “useful ways to provide familiar entities that enable people
readily to understand the underlying conceptual model [of a system] and know
what to do at the interface” (Sharp et al., 2019, p.78). Critical AI studies
revealed the importance of metaphors to AI (Agre, 1997; Mateas, 2003; Zhu,
2009). Our analysis found four interaction metaphors that provide familiar
structures for players to interact with the AI: NN as Apprentice, Competitor,
Designer, and Teammate. This finding is consistent with recent work in the
game design literature. Based on their expert knowledge and intuition, game
developers discuss how interaction metaphors (often referred to as “design
patterns”) have been used in game design (Treanor et al., 2015; Cook et al.,
2016) and in game production (Riedl and Zook, 2013). Our analysis extends the
existing literature by conducting the first empirical work that uses deep
qualitative analysis to analyze the interaction metaphors.
The largest portion of NN games (34%) adopted what we call Neural Network as
Apprentice. In these games, the player interacts with the NN as its mentor,
and the focus of the gameplay is how the player changes the NN over time. The
player’s mentoring of the NN can be achieved by providing direct feedback to
the NN’s behaviors (blackandwhite; grand1997creatures; evolutionbeg). For example,
in Creatures, the player provides positive feedback (petting) when the NN-
controlled creature displays desirable behavior (e.g., eat when hungry) and
punishes it (slapping) for the opposite. A second way the player can mentor
the NN is by configuring the right training setting for it
(football; hey; braincrafter; stanley2005evolving). The gameplay afforded by this
interaction metaphor focuses on getting the player to train the NN. As shown
in Figure 4, all games in this category use online learning.
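The mentoring loop behind this metaphor can be sketched in a few lines. The snippet below is a deliberately simplified stand-in we wrote for illustration (Creatures’ actual creature brain is a far richer neural architecture):

```python
import random

class ApprenticeNN:
    """Illustrative preference table standing in for the creature's NN."""
    def __init__(self, actions):
        self.scores = {a: 0.0 for a in actions}

    def choose(self):
        # Softly prefer higher-scored behaviors, with a little exploration noise.
        return max(self.scores, key=lambda a: self.scores[a] + random.random())

    def reinforce(self, action, reward, lr=0.5):
        # Petting (reward=+1) strengthens a behavior; slapping (-1) weakens it.
        self.scores[action] += lr * reward

creature = ApprenticeNN(["eat", "sleep", "throw_rock"])
behavior = creature.choose()
player_feedback = +1 if behavior == "eat" else -1   # pet or slap
creature.reinforce(behavior, player_feedback)
```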
Another interaction metaphor our NN games use is Neural Network as Competitor.
The key characteristics of this group, consisting of 26% of the games, is that
player-AI interaction is adversarial. For example, in Supreme Commander 2
(rabin2015game), the player fights an NN through their respective army platoons.
As the player customizes their army, the NN weighs the player’s unit
composition against its own and makes tactical battle decisions, such as how
its army will respond, which enemy to target first, or when to retreat. The NN
can exploit players that are over-reliant on a single strategy and counter the
player to create an evolving challenge
(forza; moto; rabin2015game; abbattoir; blitzkrieg; raceforthegalaxy; rollforthegalaxy).
In these games, the NN counters the player during gameplay, thus encouraging
them to adapt and try new strategies. A key distinction in this category is
that the NN learns the player’s actions to create a more difficult challenge for the player to overcome. As discussed further in the next section, only one game in this category (villareale2020innk) explicitly highlights the existence of the NN in its core UI.
For 21% of the games we identified Neural Network as Teammate, in which the interaction between the player and the NN is structured like that between colleagues. In these games, the player and the NN work together toward a shared goal. For example, in Evolution (evolution), players and the NN create
a stick-figure-like creature together. Players assemble the creature by
placing bones, muscles, and joints in different ways. The NN takes the
player’s creation and improves it through evolving it over many iterations.
This interaction creates a collaborative cycle between the player and the NN.
A unique characteristic of this interaction metaphor is that the player and
the NN have complementary skills. Both are needed to complete the game
objective.
The final 19% of the games used the Neural Network as Designer metaphor. In
these games, the NN acts as a creator and the player as its client. The NN
generates new content (lessin2015darwin; aidungeon) or customizes content based on the preferences of the player (risi2015petalz; hastings2009evolving), usually
determined passively through players frequently interacting with a particular
game element. For example, in Petalz (risi2015petalz), players arrange and
nurture a balcony of flowers, which are generated by an NN. The NN generates
each flower (shape and color) based on the player’s selection of flowers to
breed or cross-pollinate. The NN extends the game’s playability by creating
flowers that match the preferences of the player. Notice that compared to NN
as Teammate, the player here generally has less well-defined goals to
accomplish with the NN.
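A common pattern underlying these NN as Designer games is interactive evolution, where the player’s selections act as the fitness signal. The sketch below is our simplification for illustration; Petalz itself evolves CPPN networks with NEAT rather than flat parameter vectors:

```python
import random

def mutate(genome, rate=0.1):
    """Create a child genome by perturbing the parent's parameters."""
    return [g + random.gauss(0, rate) for g in genome]

def breed(parent_a, parent_b):
    """Cross-pollinate: average the parents, then mutate the offspring."""
    return mutate([(a + b) / 2 for a, b in zip(parent_a, parent_b)])

# A genome stands in for NN parameters that render petal shape and color.
flower = [random.uniform(0, 1) for _ in range(4)]
balcony = [mutate(flower) for _ in range(5)]

# The player's selections, not an explicit rating, drive the search:
favorite = balcony[2]                         # the flower the player chose to breed
next_generation = [mutate(favorite) for _ in range(5)]
```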
Figure 3. From left to right: _Neat Race_ (neat), categorized as _NN-Specific_; iNNk (villareale2020innk), categorized as _NN-Specific_; and Blitzkrieg 3 (blitzkrieg), categorized as _NN-Limited_.
#### 4.2.2. Visibility of NN in Core UI
For the second research question of “how visible are NNs in the UI of the core
gameplay,” we found 3 levels in which the NN is called into the player’s
attention in the UI: NN-Specific, NN-Limited, and NN-Agnostic.
A significant number of games (26%) foregrounded the existence of its NN
through what we call NN-Specific UI. These UIs highlight the presence of the
NN during core gameplay through linguistic features (e.g., using the term
“neural network” [snake; grid; idle; villareale2020innk]). For instance, How to Train your Snake describes each NN-controlled snake as “…hooked up to a Neural Network” (snake). Some games use visual features (e.g., visualizing the underlying NN [snake; idle; neat; football; villareale2020innk]). In iNNk (Middle,
Figure 3), the word “neural network” is prominently featured in the core game
UI along with the NN’s confidence meter. More interestingly, some games
visualize the parameters of the NN training algorithm to make the training
process playable. For example, in Neat Race (neat) (Left, Figure 3), the game
visualizes the NN’s internal structure (bottom left of the screenshot) and
displays its parameters as sliders (top right).
The largest share of our games (40%) used NN-Limited UI. These games acknowledge the presence of the NN, but only through non-essential UI, such as using technical terminology in tutorials (aidungeon; grand1997creatures; aivolution), menus outside the core gameplay loop (lessin2015darwin; forza; stanley2005evolving; grid), or explicit references to the NN only in title screens (semantris; aidungeon). For instance, Blitzkrieg 3 (blitzkrieg) is a WWII strategy game where players build and command a variety
of units to defeat the opposing NN-controlled enemy. The game’s opening screen
(Right, Figure 3) personifies the NN as an evil-looking person with the text
“Meet Boris, a neural-network AI you can fight against…”
Finally, 34% of the games used NN-Agnostic UI, which does not reference the
NN. By masking the NN, these games maintain the narrative immersion of the
game worlds without revealing the algorithms used to build them.
Figure 4. Distribution of the NN games (n = 38) categorized by interaction
metaphor, online/offline learning, and UI visibility. Each black dot
represents one NN game.
Figure 5. Distribution of the NN games by publishing date.
### 4.3. Discussion
#### 4.3.1. NN Is Driving the Experimentation of Novel Gameplay Experiences
We observed a surge of NN games in recent years (Figure 5). NNs have been
adopted in a wide variety of game genres and gameplay experiences. The games
in our collection covered all the common game genres, showing vibrant efforts
in trying different ways of incorporating NN in games. The interaction
metaphor with the longest history is NN as Apprentice, whereas NN as Teammate
has only been published since 2014. Our hypothesis is that the metaphor of NN
as Teammate has the highest technical requirement for the NN because it needs
to sufficiently understand and anticipate player interaction in order to
accomplish a common goal. Thus, it is the most recent interaction metaphor.
Most of the games are experimental, as the game design community has not
formed established ways to use NNs. This is also echoed by the fact that indie
developers and researchers developed the majority of our games (74%).
We noticed that some of the NN games are providing novel gameplay experiences
that would not have been possible without NNs. For example, thanks to
advancements in deep learning, AI Dungeon (aidungeon) offered a text-based,
human-AI collaborative storytelling experience with few constraints for what
the player could type into the game. This is a drastic improvement over
traditional text-based adventure games, which are infamous for their
intolerance for player input that is slightly different from what the designer
programmed for (Montfort, 2005). More significant is that no matter what story
elements the player enters, AI Dungeon can respond in reasonable ways.
The NN games in our dataset have improved established gameplay experiences
that had been supported by other AI techniques. Compared with traditional AI,
NNs are more adaptive and flexible. In NN as competitor games, we observed
that these games use established game mechanics (e.g., racing [forza; moto]), but
the NN offers a more challenging competitive experience through more capable
NPCs than other AI techniques.
The most unique gameplay is found in making the training of the NN itself a
playable mechanic. Representing the NN and displaying features of its internal processes may lead to new player experiences, such as players using gameplay to build the understanding of the system they need to succeed in the game. For example, we observed this in games such as How to Train your Snake (snake) and Idle Machine Learning Game (idle), which made the NN
explicit in the UI and displayed parameters for the players to use when
steering the NN’s output. The gameplay became a puzzle about how to best
configure the NN to be successful in the game. To do so requires players to
have a basic understanding of the capabilities and limitations of the system,
which are discovered over time through play.
#### 4.3.2. Metaphors We Play By
In our analysis, we saw that interaction metaphors based on human
relationships played a powerful role in structuring player-AI interaction. We
did not notice that any games break or even complicate (e.g., a teammate that
back-stabs) the interaction metaphor they use. Our empirical analysis
validates what prior researchers proposed based on intuitions and domain
knowledge (Treanor et al., 2015; Cook et al., 2016; Riedl and Zook, 2013;
Guzdial et al., 2019), all of which use human relationships. The role of
metaphors has been extensively studied in human cognition (Lakoff and Johnson,
2008; Lakoff and Turner, 2009) and in UI design (Sharp et al., 2019; Lubart,
2005), as it has been more recently in human-AI interaction as well (Dove and
Fayard, 2020).
We believe this full, uncomplicated adoption of interaction metaphors reflects
the early stage of human-AI interaction. While we did notice some laudable
innovations such as those mentioned in the previous section, by and large,
game designers and developers went with familiar concepts and metaphors. This
is consistent with Bolter’s notion of how new technology remediates familiar
forms before taking on its distinctive forms (Bolter, 2016). Compared to ML-
based UX (Dove et al., 2017), an advantage of the games community is that game
AI developers are often game designers themselves or work closely with the
latter. This cross-pollination between algorithm and design makes games a
vibrant domain for new experimentation.
It is also notable that the metaphors are connected with the algorithmic
characteristics of different NNs. As shown in Figure 4, all games that adopted
NN as Apprentice use online learning for player agency, whereas most games
with NN as Competitor use offline learning for opponent competency.
#### 4.3.3. The Struggle with Transparency and Interpretability
Similar to most NN-based UX applications (see further discussions in Section
5), NN games struggle with how and what to communicate to the player regarding
the use of the NN. In our initial coding stage, even our NN researcher
coauthors could not always figure out the NN-specific questions by simply
looking at the game. Because the use of NNs may not be apparent in the
gameplay, and since only 26% of our games reference NNs (including simply using the word “neural network”) in their core UI, the games require the player to take it on faith that they are interacting with an NN.
Even when the players are directly playing with the parameters of the NN, it
is not always clear what these features do. For example, unless the player has
a background in evolutionary algorithms, terms such as “population” and
“retraining” are not necessarily understandable.
Like other AI-infused systems, games also struggle with the lack of
interpretability of NNs. We see a full spectrum from completely blackbox NNs
(grand1997creatures; blackandwhite; rabin2015game; democracy), often in NN as Competitor games, to attempts to visualize the underlying NN (idle; football; snake; braincrafter; neat; stanley2005evolving). Most notably, NERO (stanley2005evolving) gives insights about the NN and its training in two ways.
First, by visualizing the NN training parameters, players can steer the NN
behavior by tweaking the reward structure for what is preferred behavior
(e.g., approach enemy, attack enemy). Second, players are able to see graphs
of the NN’s internal structure and fitness values across generations for all
robots across various combat stats (e.g., enemy hits).
The strongest designs for making NNs more interpretable come from simulation
games where the player can tweak different training parameters. In these
cases, even though the names of the parameters are sometimes too technical for
players without an AI background, the NN’s behavioral change feedback through
different iterations of trial-and-error gameplay helps the player develop an
intuition. In other words, most games in our dataset manage to reframe the
difficulties of interacting with an NN as a puzzle and thus make it more
engaging.
## 5. Phase 2: Analyzing NN Games with General Human-AI Interaction
Guidelines
The second broad research question is to explore what neural network (NN)
games can tell us about designing human-AI interaction. Here we focus on the
following subsidiary research questions:
* RQ 2.a: To what extent do NN games comply with contemporary design guidelines for human-AI interaction?
* RQ 2.b: Using the design guidelines for human-AI interaction, how can NN games be differentiated according to their characteristics (see Table 1) and in comparison with other AI-infused products?
### 5.1. Methods
For inferring what NN games can tell us about human-AI interaction, we used
the human-AI design guidelines proposed by Amershi et al. (Amershi et al.,
2019). It is the most recent and comprehensive documentation of human-AI interaction design by the HCI community thus far. As discussed
above (Section 2.2), currently no equivalent guidelines exist specifically for
games. There are 18 guidelines in (Amershi et al., 2019) in total. They are
grouped according to when the user is interacting with the AI: 1) initially,
2) during, 3) when wrong, and 4) overtime. The analysis procedure in adapting
these guidelines to the NN games involved a three-step process: Step 1, defining guiding questions for NN games; Step 2, establishing codes for analyzing games; and Step 3, analyzing the games with the outcomes of Steps 1 and 2. Two
researchers performed all steps in close coordination and checked the outcomes
of each step with the other authors for verification and to reach consensus
(Hill et al., 2005; Richards and Hemphill, 2018).
#### 5.1.1. Step 1: Guiding Questions for NN games.
Two researchers completed a detailed reading of each guideline to understand
the guideline in the context of the original AI application examples (e.g.,
recommender systems, activity devices, etc.). Then, both researchers explored
how the guidelines may be applied in the context of games. This process led to
the definition of a question for each guideline to help orient the researchers
when observing the games in Step 3. For example, for Guideline 15 “encourage
granular feedback,” we defined the question, “how do players indicate their
feedback such as preference to the NN during gameplay?” Table 2 shows all the
original guidelines and our associated “Guiding questions for NN games.”
From this process, we agreed that guidelines in the “when wrong” category did
not apply in the context of games. Games handle failure differently than other
AI products, where failure is expected and, in fact, part of the main
interaction and resulting experience (Juul, 2013; Anderson et al., 2018).
Additionally, for AI-infused products, humans are consumers of the AI. By
contrast, players in many of our games actively control how the AIs are
trained. This close relationship significantly complicates the notion of
failure in games. As a result, fully unpacking what failure means in games is
out of the scope of this paper. We hence excluded this category and focused
our analysis on the initially, during, and overtime categories. We do offer
some observations in the discussion, but further research is needed in this
important area of player-AI interaction.
#### 5.1.2. Step 2: Codes for Analyzing NN Games
Two researchers took an iterative approach to arrive at a set of codes to
analyze the games in Step 3. Amershi et al. (Amershi et al., 2019) do not specify or recommend how the guidelines should be evaluated, but
in their user study with UX designers (n = 49), they applied a 5-point
semantic differential scale from “clearly violated” to “clearly applied.” We
combined their approach for heuristic evaluation with our guiding questions to
create a 3-point coding scheme for each guideline. In a nutshell, this coding
scheme is used to decide whether a game (A) clearly applies, (B) partially
applies, or (C) violates the guideline. For example, for Guideline 1 “Make
clear what the system can do” we added the guiding question “How does the game
make clear what the NN can do?” and the following three codes: (A) Makes the
NN’s capabilities known; (B) Makes part of NN’s capabilities known; and (C)
Does not make the capabilities known at all. With this coding scheme, the researchers were able to label each game clearly in the context of a specific guideline. Additionally, the 3-point scale is much more suitable for qualitative/consensus coding. We aimed to describe rather than only rate or score the games to extract meaningful insights, which is why we defined the
3-point coding scheme for each guideline in accordance with the associated
guiding question. Table 2 shows the resulting codes.
#### 5.1.3. Step 3: Analyzing NN Games
Two researchers applied the codes from Step 2 for the analysis of the 31
playable games from our corpus, the results of which are presented in Section
5.2. When analyzing each game, both researchers reviewed the data collected
from Phase 1 (i.e., gameplay footage and written observations) and played the
game. After reviewing this material, they independently assigned codes for
each guideline per game. Disagreements were resolved through discussion.
After the coding results were agreed upon, scores were assigned for cross-
comparison with the characteristics found in Section 4 (see Table 1). We
assigned a score of 2 for the clearly applies codes (A), 1 for the partially
applies codes (B), and 0 for the violation codes (C). For the comparison of
the guidelines (i.e., comparing G1 with G2, etc.), we took the sum of the resulting scores per guideline and divided it by the maximum possible score for that guideline to calculate the % per guideline. For the cross-
comparisons of the characteristics (on the interaction metaphors, visibility,
developer, etc.), we normalized the sum of scores as we have a different
number of games per characteristic and then took the average of the normalized
sum of scores to calculate the % per characteristic.
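As a concrete illustration of this scoring arithmetic, the sketch below re-implements it on hypothetical codes; the game names, groups, and code assignments are invented for illustration and are not our actual coding results:

```python
SCORE = {"A": 2, "B": 1, "C": 0}  # clearly applies / partially applies / violates

# Hypothetical coding results: codes[game][guideline] -> "A" | "B" | "C"
codes = {
    "game_1": {"G1": "A", "G3": "B"},
    "game_2": {"G1": "C", "G3": "A"},
    "game_3": {"G1": "B", "G3": "B"},
}

def pct_per_guideline(guideline):
    """Sum the scores for one guideline and divide by the maximum possible."""
    scores = [SCORE[g[guideline]] for g in codes.values()]
    return 100 * sum(scores) / (2 * len(scores))  # 2 = max score per game

def pct_per_characteristic(games, guidelines):
    """Normalize by group size, since each characteristic covers a different
    number of games, before comparing groups against each other."""
    total = sum(SCORE[codes[g][q]] for g in games for q in guidelines)
    return 100 * total / (2 * len(games) * len(guidelines))

print(pct_per_guideline("G1"))                                      # 50.0
print(pct_per_characteristic(["game_1", "game_2"], ["G1", "G3"]))   # 62.5
```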
In this section, we do not report the results on the characteristics of NN
input and NN output as they do not provide any meaningful insights. We further
omitted categories with a low number of cases (e.g., there is only one
adventure game). Finally, for the comparison of our outcomes with other AI-
infused products reported by Amershi et al. (Amershi et al., 2019), we
considered the major patterns in the aggregate data for both (i.e., NN games
vs. all other AI-infused products).
Table 2. Summary of the guideline analysis codes.
### 5.2. Results
Figure 6 shows the results of applying the adapted guidelines in the context
of NN games. Below, we discuss per guideline category (i.e., initially,
during, and overtime) the results in more detail. Note that the reported %
here are based on the % of games that were coded as A = clearly applied, B =
partially applied, and C = violation. Following this, we compare the
application of the guidelines across the characteristics discussed in Section
4, and with other AI-infused products. The reported results here are based on
the % derived from the normalized scores.
Figure 6. The distribution of compliance codes (A, B, and C) per guidelines in
count (n = 31) and %.
#### 5.2.1. Initially
The initially guidelines (G1–G2) refer to player interaction prior to
gameplay. During the initial interaction of these games, Guideline 1 “makes
clear what the system can do” has the most reported applications: 23 games
(74%) made either full or part of the NN’s capabilities known to the player
prior to gameplay. These games helped the player understand the capabilities
in a variety of ways, such as tutorials, intro screens, or developer notes
prior to gameplay. Some were not as direct and did not provide such textual
content, but still made the capabilities known by immediately providing the
player with an output to observe. For example, in _How to Train your Snake_
(snake), players start the game to find the NN already training the snakes to
move and find food, thus, showcasing an immediate result.
While these games are doing well in most cases by communicating the
capabilities of the NN during the initial interaction, they are doing poorly
regarding communicating the limitations of NNs to the players. Guideline 2
“makes clear how well the system can do what it can do” had 27 games (85%) not
making the limitations of the NNs known to the player at all — the second-
highest number of violations among the guidelines. This may be justified when players compete against the NN (i.e., NN as Competitor), where exposing its limitations would let players exploit the system to win the game, or when the NN is the focus of gameplay (i.e., NN as Apprentice), where limitations become part of the puzzle to be understood through play.
#### 5.2.2. During
The during guidelines (G3–G6) refer to player interaction at any given
gameplay loop. Guideline 3 “time services based on context” and 5 “match
relevant social norms” had the most reported applications: in 21 games (68%) the NN provided timely service, and 25 games (81%) were labeled as offering an expected
interaction with the NN. The clear application of G3 and G5 suggests how
designers carefully considered how the NN can help assist with the flow of the
player experience by suggesting interactions and providing immediate
consequences that are consistent with player expectations.
Games also performed well in complying with Guideline 4 “show contextually
relevant information,” as none of the games violated it. In the majority of these games, the focus of the
gameplay centers on changing or affecting the output of the NN. We observed
that the games provide information in regards to how the NN was responding to
or utilizing player actions. A common approach is the use of UI elements
(e.g., NN performance stats increasing) accompanied by a continuous animation
or visual change to enable players to observe and then inform their next
gameplay action. For example, in iNNk (villareale2020innk), players can observe
the exposed confidence meter that displays as a percentage under the NPC
character in relation to their drawing, thus building a better mental model of
how to subvert the NN in future drawings.
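Surfacing a classifier’s confidence, as iNNk’s meter does, can be as simple as exposing the network’s output distribution in the UI. The sketch below is ours, with invented labels and scores; iNNk’s actual classifier and interface are more elaborate:

```python
import math

def softmax(logits):
    """Convert raw classifier scores into a probability distribution."""
    exps = [math.exp(z - max(logits)) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores from a sketch classifier for three labels.
labels = ["cat", "house", "tree"]
logits = [2.1, 0.3, -0.5]

probs = softmax(logits)
best = max(range(len(labels)), key=lambda i: probs[i])
# Surfacing the NN's confidence lets players reason about (and subvert) it.
print(f"NN guess: {labels[best]} ({probs[best]:.0%} confident)")
```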
Other games provide additional visual information during this animation, such
as an overlay of the entire NN population. Providing this extra information
allows players to assess the NN’s progress and observe both successful and
failed attempts. For example, in Evolution (evolution), players are able to see
an overlay of all the NN attempts at training the same creature
simultaneously. This additional animation shows the highest and the lowest-
performing creatures training at the same time, which enables players to
better understand the progress as a whole and determine if the creature needs
to be tweaked further.
While games are doing well to display relevant information regarding gameplay,
Guideline 6 “mitigate social biases” had 17 games (55%) with a violation.
Examination of these instances revealed that it was unclear if the NN
mitigates undesirable social stereotypes and biases. Additionally, we reported
half of such games as a violation because such biases may emerge directly from
players and are not mitigated. For example, games that made the NN the focus
of gameplay (i.e., training or evolving using an NN) provide a new NN to play
with. Therefore, players may steer the NN with their own personal preferences.
Stereotypes and biases can emerge through the player’s direction and
reinforcement.
#### 5.2.3. Overtime
The overtime guidelines (G12–G18) refer to player interaction with the NN over
a longer period of time. Guideline 13 “learn from user behavior” and 14
“update and adapt cautiously” are performing well: in 17 games (55%) the NN personalizes the experience based on the player’s behavior, and 26 games (84%) do not disrupt the gameplay experience when the NN changes its behavior.
Guideline 15 “encourage granular user feedback” is doing well with 14 games
(45%) that allow players to directly indicate their preferences to the NN
during gameplay. Games are performing moderately with regard to Guideline 17 “provide global controls,” with 15 games (48%) providing full or partial global control to adjust how the NN behaves.
A common approach to provide players more agency is the ability to adjust the
NN through parameters or the environment it interacts in. We observed these in
setting menus, or in other cases, directly in the core GUI. For example, in
_NERO_ , the game allows players to edit the reward structure of the NNs
during a training session. Further, players are able to edit the training
environment, such as adding barriers and placing particular enemies to
directly influence the NN’s training.
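The general idea of such a player-editable reward structure can be sketched as reward shaping over weighted event counts. The snippet below is our illustration; the slider names and events are hypothetical and do not reflect NERO’s actual interface:

```python
# Player-facing reward sliders; each weight scores one kind of in-game event.
reward_sliders = {"approach_enemy": 0.8, "attack_enemy": 1.0, "avoid_fire": -0.2}

def shaped_reward(events):
    """Score one robot's behavior under the player's current reward structure."""
    return sum(reward_sliders[name] * count for name, count in events.items())

# The player drags a slider: retreating is now rewarded instead of punished.
reward_sliders["avoid_fire"] = 0.5

episode_events = {"approach_enemy": 3, "attack_enemy": 1, "avoid_fire": 2}
fitness = shaped_reward(episode_events)   # used to select the next generation
```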
Guideline G16 “convey the consequences of user actions related to NN” and G18
“clear notifications of changes in NN capability” had the most violations in
this category: 22 games (71%) did not provide any feedback conveying how
players’ actions will impact the NN, and 28 games (90%) did not notify of any
changes or updates to the NN’s capabilities. Further, Guideline G12 “remember
recent transactions” was another difficult guideline to apply in games. In
these cases, 10 games (32%) leveraged the history of the player’s actions to generate content tailored to the player or a more challenging experience but
did not allow the users to access that memory. Only 2 games (6%) make this
history useful to the player as other AI products do (e.g., navigation
products, search engines) and allowed the users to reference that history.
Figure 7. A comparison of guidelines across different characteristics: (a) developer categories, (b) UI visibility categories, and (c) interaction metaphor categories.
#### 5.2.4. Comparison
Consistent with the analysis in Section 4, NN-Specific games (60%) outperform
NN-Limited (50%) and NN-Agnostic (40%) as shown in Figure 7(b). G1 “make clear
what the system can do” is understandably what separates NN-Agnostic from the
other two UI categories and G17 “provide global controls” is the most
differentiating guideline for visibility. Consistent with the analysis in
Section 4, we find that the simulation genre complies better (63%) compared to
all others (32–58%), due to G4, G15, and G17. Also, not unexpectedly, NN games
with online learning score higher (57%) than those with offline learning
(39%), specifically in the guidelines in the overtime category.
With the interaction metaphors (see Figure 7(c)), we see that NN as Apprentice
scores the highest (60%) with here too G17 as the most differentiating
guideline, followed by G13 “learn from user behavior.” Most apparent is that
NN as Competitor (35%) scores the lowest, which is not unexpected given that
there are gameplay reasons to not abide by the guidelines. Of note is that NN
as Teammate games do reasonably well on G12 “remember recent transactions,” which
makes sense given that players need to build trust with the NN to work with
them.
We further find that the AAA games score lower (38%) compared to the indie
(56%) and research (49%) games (see Figure 7(a)). This is mostly because 6
out of 9 AAA games are classified as NN as Competitor games. It is interesting
that research games score high on G3 and G14, while indie games score high on
G13, G15, and G17. It indicates that the research games are more focused on
integrating the NN into the game flow, while indie games experiment more with how players can interact with the NN.
Aside from contrasting the outcomes with the game characteristics, we compared
our guideline outcomes with the reported outcomes of AI-infused products by
Amershi et al. (Amershi et al., 2019) to see what key similarities and
differences exist. We find that making the limitations known (G2) is poorly
addressed in both NN games as other AI-infused products. AI-infused products
do tend to perform better on G12, which is not surprising given that this a
feature that is critical for many recommender types of products. NN games,
however, generally perform better on G15 and G17, suggesting that NN games
facilitate granular user feedback through direct interaction and provide more
controls to their users, respectively.
### 5.3. Discussion
By leveraging the design guidelines for human-AI interaction by Amershi et al.
(Amershi et al., 2019), our aim was to explore what NN games tell us about
designing human-AI interaction. In this process, we also learned more about NN
games themselves and were able to verify and confirm the findings reported in
Section 4. In fact, after applying the guidelines, we see more specifically
what NN games do and how they differ from other AI-infused products. Here we
describe the main takeaways from analyzing NN games with general human-AI
interaction guidelines, especially in terms of how the notion of AI as play
can be used to inform human-AI interaction.
#### 5.3.1. Learning AI and Its Limitations Through Play
Clearly communicating the affordances and limitations of a system is a long-
established design principle (Norman, 2013) and confirmed in human-AI
interaction (Amershi et al., 2019; De Graaf et al., 2017; Furqan et al.,
2017). While the NN games do overall relatively well in communicating what the
NN does (G1), they do not in communicating what it does not (G2). Other AI-
infused products performed equally poorly on G2 (Amershi et al., 2019).
Through our close analysis of the games as well as developers’ notes, we
noticed that NN games purposely ignore G2 because the point is to learn the
limitations through play.
The developers of three games (aidungeon; quick; semantris) all explicitly describe their
games as experiences to explore or test the limits of the NN. For example,
Semantris asked the players to “play around ” and “see what references the AI
understands best.” In such games, the NN cannot directly communicate the
limitations of the system upfront without jeopardizing the gameplay — because
discovering the system’s limitations is the gameplay. By framing the discovery
of NN limitations as play, many NN games are able to foster a sense of
curiosity, discovery, and accomplishment in players.
We also saw creative ways of making AI’s limitations part of the rule of play.
Most notably, we saw a number of online-learning projects adopting the
idle game genre, which has a built-in play-wait-play cycle (Alharthi et al.,
2018). This game feature makes the technical requirement of waiting for the NN
to finish the training part of the expected experience and uses the idle
game’s reward system to incentivize players to return to the game after
waiting.
#### 5.3.2. Highlighting Failure as Part of Play
As argued above, failures in games are more complex than in other AI-infused
systems. While we removed the “when wrong” category of guidelines in our
analysis, we noticed many interesting uses of failure in the player-AI
interactions in the NN games. Overall, failures in games are used
productively. In many NN games, failure is used to motivate the player to
continue improving their AI. This is consistent with the use of failure in
game design (Juul, 2013).
Most notably, we noticed that failure is highlighted, instead of minimized,
from the beginning. When the player first starts the game, the snake they
control dies in How to Train your Snake (snake), or robots run to the edges of the arena instead of approaching the enemy in NERO (stanley2005evolving). This
is a design pattern not commonly observed in non-AI games. We believe that the
game designers used this device to reframe players from being “problem makers” for the AI into “problem fixers,” making controlling the NN a less
intimidating task. We noticed that this design strategy was used particularly
in the NN as Apprentice games.
#### 5.3.3. Playing with Different Forms of Human-AI Interaction
Our analysis highlights the differences between various types of games, most
notably in games using NN as Competitor. Compared to the other interaction
metaphors, they violate many more guidelines. Through close examination,
however, many of the violations are motivated and intentionally designed to be
so. As argued above, when players compete with the AI, common design
assumptions such as transparency and explainability do not apply directly and
need to be re-examined and adapted for the context.
Literature on human-AI interaction primarily focuses on the paradigm of AI as
a tool/augmentation to the user. However, an increasing number of AI-infused
products fall outside this assumption. For example, AI in cybersecurity
applications explores AI as an adversary, and AI products with high privacy
concerns (e.g., in healthcare) require a different way to think about
transparency. In these cases, we believe NN games can offer many design
insights and cases for inspiration. We also need to expand current human-AI
interaction guidelines so that they can encompass elements essential to play,
such as engagement, flow, and fun.
## 6. The Future of Player-AI Interaction
Through the specific case of NN games, we have demonstrated the richness of
player-AI interaction as a research topic. In this section, we intend to
situate player-AI interaction in the broader context of related research
areas. We then distill the design implications of our study for games and for
UX/HCI.
### 6.1. Establishing Player-AI Interaction
The study of players, game design, and game technology (here we focus on AI)
are three main pillars in games research (Figure 8). At the intersection of
Games and AI, there is a well-established research community of game AI. In
fact, game AI has often been driving the world of AI research, in which the
most advanced forms of AI algorithms are often first developed and tested in
games (Risi and Preuss, 2020). Results in domains such as StarCraft 2 and
Quake III now frequently appear in the most prestigious journals (Vinyals et
al., 2019; Jaderberg et al., 2019). Combining the study of the player with
game design, there is the research area of player experience. However, the
intersection between players and AI has so far been relatively under-explored.
Until recently, real players were typically only brought in at the end of game
AI research as the means to evaluate the effectiveness of the algorithms.
Carving out the topic of player-AI interaction will fill in this gap.
Figure 8. Research areas of games.
In this paper, we present the first empirical study on this gap that we call
player-AI interaction. In addition to establishing games as a rich domain for
human-AI interaction, our analyses contribute insights into how we can
classify AI-based games based on interaction metaphors (i.e., apprentice,
designer, teammate, and competitor) and the visibility of the AI system as
part of the core UI (i.e., specific, limited, and agnostic). We further
adapted design guidelines for human-AI interaction to the context of games,
which can be useful for others when designing their AI-based games. However, we encourage the community to further scrutinize these design guidelines, as our work indicates that AI-based games differ from AI-infused products, for example with regard to the role of failure, and to work towards more formalized and evaluated design guidelines for player-AI interaction.
### 6.2. Encouraging Playing with AI
For game designers, UX designers, and HCI researchers interested in human-AI
interaction, one of our key takeaways is that reframing AI as play offers a useful design approach, complementary to the current instrument-based views on AI. AI as play can open a new human-AI interaction design space where users can tinker, explore, and experiment with AI. We propose the following design considerations:
Use flow to structure the learning curve of human-AI interaction. For many
users, interaction with AI can be overwhelming, especially when they encounter
unexpected output from the algorithm. One important lesson from our study is
that the concept of flow (Csikszentmihalyi, 1990), widely used to balance game
difficulty and player engagement over time, can be useful to design human-AI
interaction. In Section 5.3, we discussed that NN games should better support players in “Learning AI and Its Limitations” and make experimentation with AI more acceptable by “Highlighting Failure as Part of Play.” The use of flow can
be useful to structure how to gradually expose users to different AI features
(see also (Cruz and Uresti, 2017)).
Incorporate enhanced discovery-based learning. Many games in our analysis,
especially simulation games, offer discovery-based learning (Alfieri et al.,
2011) with mixed success. Since players come with different background
knowledge and needs, explicit instruction for AI is challenging to design.
Discovery-based learning offers players the opportunity to play around with
the NN at their own pace and observe the consequences of their actions on the
NN and the game world. However, most NN games in our dataset offered very
little scaffolding, making it difficult for players without a technical
background to succeed. We suggest that UX designers use enhanced discovery-
based learning and provide feedback, worked examples, scaffolding, and
elicited explanations to further assist their users.
Extend the invitation to play. Finally, for researchers and designers
interested in exploring new forms of human-AI interaction, we believe offering
users an invitation to play can unleash their imagination and empower them to
explore new ways to interact with even the same technology. As we can see from
Hey Robot!, the magic circle of play turns the smart speaker user from the
seeker of information to the provider. The voice assistant’s inability to understand the user’s commands and intent is transformed from a failure to perform into a source of fun.
## 7\. Limitations
We recognize that several limitations impact the scope of our work. First, our
study considered only NN games with specific tags on popular game platforms
and textbooks. Our goal was to find a representative sample of salient NN
games, not a comprehensive list. However, we acknowledge that we may have
missed some relevant games. Second, we omitted the failure-related human-AI
interaction guidelines. While we offered some related observations, further
research is needed to study failure in games, separate from failure outside
the context of play, and how it relates to player-AI interaction. Third, we
used 3-point codes in our qualitative analysis. While this was appropriate for the purpose of our analysis, future research can adopt a more fine-grained analysis that better distinguishes between “violation” and “does not apply.” Finally, we lack sufficient technical information about how NNs are used in many commercial games. The blackbox nature of game AI limited our ability to conduct in-depth analyses of specific features of games (see “Multiple AI?” in Table 1).
## 8\. Conclusion
We introduced the term player-AI interaction to study how human players
interact with AI in the context of games. While we intend to situate it in the
broader context of human-AI interaction, we also highlight the unique
opportunities and challenges presented by re-framing AI as play. Through a
systematic search of existing neural network games, we conducted two deep
qualitative analyses. In the first, we analyzed the common metaphors that structure player-AI interaction and the extent to which the NNs are foregrounded in the core UI. In the second analysis, we adapted the current human-AI
interaction guidelines to player-AI interaction and applied them to identify
the strengths and weaknesses of NN games. Based on our findings, we proposed
that the notion of AI as play, which is an alternative to the current paradigm
of performance-centric human-AI interaction, can contribute to both game
design and HCI communities.
###### Acknowledgements.
This work is partially supported by the National Science Foundation (NSF)
under Grant Number IIS-1816470 and a DFF-Danish ERC-programme grant
(9145-00003B). The authors would like to thank all past and current members of
the project, especially Evan Freed and Anna Acosta for assistance in
collecting initial data. We want to thank those who suggested additional games
on Twitter. Finally, we thank Robert C. Gray for assistance in editing this
paper.
## References
* ste (2020) 2020\. Steam (service) — Wikipedia, The Free Encyclopedia. https://en.wikipedia.org/w/index.php?title=Steam_(service)&oldid=978516977 [Online].
* Abeele et al. (2020) Vero Vanden Abeele, Katta Spiel, Lennart Nacke, Daniel Johnson, and Kathrin Gerling. 2020. Development and validation of the player experience inventory: A scale to measure player experiences at the level of functional and psychosocial consequences. _International Journal of Human-Computer Studies_ 135 (2020), 102370\.
* Agre (1997) Philip E Agre. 1997\. _Computation and human experience_. Cambridge University Press.
* Alfieri et al. (2011) Louis Alfieri, Patricia J Brooks, Naomi J Aldrich, and Harriet R Tenenbaum. 2011. Does discovery-based instruction enhance learning? _Journal of educational psychology_ 103, 1 (2011), 1\.
* Alharthi et al. (2018) Sultan A Alharthi, Olaa Alsaedi, Zachary O Toups, Joshua Tanenbaum, and Jessica Hammer. 2018\. Playing to wait: A taxonomy of idle games. In _Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems_. 1–15.
* Amershi et al. (2019) Saleema Amershi, Dan Weld, Mihaela Vorvoreanu, Adam Fourney, Besmira Nushi, Penny Collisson, Jina Suh, Shamsi Iqbal, Paul N Bennett, Kori Inkpen, et al. 2019\. Guidelines for human-ai interaction. In _Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems_. 1–13.
* Anderson et al. (2018) Craig G Anderson, Jen Dalsen, Vishesh Kumar, Matthew Berland, and Constance Steinkuehler. 2018\. Failing up: How failure in a game environment promotes learning through discourse. _Thinking Skills and Creativity_ 30 (2018), 135–144.
* Bansal et al. (2019) Gagan Bansal, Besmira Nushi, Ece Kamar, Walter S Lasecki, Daniel S Weld, and Eric Horvitz. 2019\. Beyond accuracy: The role of mental models in human-AI team performance. In _Proceedings of the AAAI Conference on Human Computation and Crowdsourcing_ , Vol. 7. 2–11.
* Binns et al. (2018) Reuben Binns, Max Van Kleek, Michael Veale, Ulrik Lyngs, Jun Zhao, and Nigel Shadbolt. 2018\. ’It’s Reducing a Human Being to a Percentage’ Perceptions of Justice in Algorithmic Decisions. In _Proceedings of the 2018 Chi conference on human factors in computing systems_. 1–14.
* Bolter (2016) Jay David Bolter. 2016\. Remediation. _The international encyclopedia of communication theory and philosophy_ (2016), 1–11.
* Brockmyer et al. (2009) Jeanne H Brockmyer, Christine M Fox, Kathleen A Curtiss, Evan McBroom, Kimberly M Burkhart, and Jacquelyn N Pidruzny. 2009. The development of the Game Engagement Questionnaire: A measure of engagement in video game-playing. _Journal of Experimental Social Psychology_ 45, 4 (2009), 624–634.
* Bryson (2010) Joanna J Bryson. 2010\. Robots should be slaves. _Close Engagements with Artificial Companions: Key social, psychological, ethical and design issues_ (2010), 63–74.
* Calleja (2007) Gordon Calleja. 2007\. Digital game involvement: A conceptual model. _Games and culture_ 2, 3 (2007), 236–260.
* Campbell et al. (2002) Murray Campbell, A Joseph Hoane Jr, and Feng-hsiung Hsu. 2002\. Deep blue. _Artificial intelligence_ 134, 1-2 (2002), 57–83.
* Cimolino et al. (2019) Gabriele Cimolino, Sam Lee, Quentin Petraroia, and TC Nicholas Graham. 2019. Oui, Chef!!: Supervised Learning for Novel Gameplay with Believable AI. In _Extended Abstracts of the Annual Symposium on Computer-Human Interaction in Play Companion Extended Abstracts_. 241–246.
* Cook et al. (2016) Michael Cook, Mirjam Eladhari, Andy Nealen, Mike Treanor, Eddy Boxerman, Alex Jaffe, Paul Sottosanti, and Steve Swink. 2016\. PCG-Based Game Design Patterns. _arXiv preprint arXiv:1610.03138_ (2016).
* Cruz and Uresti (2017) Christian Arzate Cruz and Jorge Adolfo Ramirez Uresti. 2017\. Player-centered game AI from a flow perspective: Towards a better understanding of past trends and future directions. _Entertainment Computing_ 20 (2017), 11–24.
* Csikszentmihalyi (1990) Mihaly Csikszentmihalyi. 1990\. _Flow: The psychology of optimal experience_. Vol. 1990. Harper & Row New York.
* De Graaf et al. (2017) Maartje De Graaf, Somaya Ben Allouch, and Jan Van Diik. 2017\. Why do they refuse to use my robot?: Reasons for non-use derived from a long-term home study. In _2017 12th ACM/IEEE International Conference on Human-Robot Interaction (HRI_. IEEE, 224–233.
* Denisova and Cairns (2015) Alena Denisova and Paul Cairns. 2015. The Placebo Effect in Digital Games: Phantom Perception of Adaptive Artificial Intelligence. In _Proceedings of the 2015 Annual Symposium on Computer-Human Interaction in Play_ (London, United Kingdom) _(CHI PLAY ’15)_. Association for Computing Machinery, New York, NY, USA, 23–33. https://doi.org/10.1145/2793107.2793109
* Desurvire and Wiberg (2009) Heather Desurvire and Charlotte Wiberg. 2009. Game usability heuristics (PLAY) for evaluating and designing better games: The next iteration. In _International conference on online communities and social computing_. Springer, 557–566.
* Dove and Fayard (2020) Graham Dove and Anne-Laure Fayard. 2020. Monsters, Metaphors, and Machine Learning. In _Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems_. 1–17.
* Dove et al. (2017) Graham Dove, Kim Halskov, Jodi Forlizzi, and John Zimmerman. 2017\. UX design innovation: Challenges for working with machine learning as a design material. In _Proceedings of the 2017 chi conference on human factors in computing systems_. 278–288.
* Evans and Short (2013) Richard Evans and Emily Short. 2013. Versu—a simulationist storytelling system. _IEEE Transactions on Computational Intelligence and AI in Games_ 6, 2 (2013), 113–130.
* Fulton et al. (2020) Laura Beth Fulton, Ja Young Lee, Qian Wang, Zhendong Yuan, Jessica Hammer, and Adam Perer. 2020. Getting Playful with Explainable AI: Games with a Purpose to Improve Human Understanding of AI. In _Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems_. 1–8.
* Furqan et al. (2017) Anushay Furqan, Chelsea Myers, and Jichen Zhu. 2017. Learnability through adaptive discovery tools in voice user interfaces. In _Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems_. 1617–1623.
* Games (2016) Firaxis Games. 2016\. Sid Meier’s Civilization® VI. https://civilization.com/
* Gero et al. (2020) Katy Ilonka Gero, Zahra Ashktorab, Casey Dugan, Qian Pan, James Johnson, Werner Geyer, Maria Ruiz, Sarah Miller, David R Millen, Murray Campbell, et al. 2020\. Mental Models of AI Agents in a Cooperative Game Setting. In _Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems_. 1–12.
* Gomme and Bartle (2020) Daniel Gomme and Richard Bartle. 2020. Strategy Games : The Components of A Worthy Opponent. In _Proceedings of 2020 Foundation of Digital Games_.
* Grand et al. (1997) Stephen Grand, Dave Cliff, and Anil Malhotra. 1997\. Creatures: Artificial life autonomous software agents for home entertainment. In _Proceedings of the first international conference on Autonomous agents_. 22–29.
* Guardiola (2016) Emmanuel Guardiola. 2016\. The gameplay loop: a player activity model for game design and analysis. In _Proceedings of the 13th International Conference on Advances in Computer Entertainment Technology_. 1–7.
* Guzdial et al. (2019) Matthew Guzdial, Nicholas Liao, Jonathan Chen, Shao-Yu Chen, Shukan Shah, Vishwa Shah, Joshua Reno, Gillian Smith, and Mark O Riedl. 2019. Friend, collaborator, student, manager: How design of an ai-driven game level editor affects creators. In _Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems_. 1–13.
* Hastings et al. (2009) Erin J Hastings, Ratan K Guha, and Kenneth O Stanley. 2009\. Evolving content in the galactic arms race video game. In _2009 IEEE Symposium on Computational Intelligence and Games_. IEEE, 241–248.
* Heintz and Law (2015) Stephanie Heintz and Effie Lai-Chong Law. 2015. The game genre map: a revised game classification. In _Proceedings of the 2015 Annual Symposium on Computer-Human Interaction in Play_. 175–184.
* Herlocker et al. (2000) Jonathan L Herlocker, Joseph A Konstan, and John Riedl. 2000\. Explaining collaborative filtering recommendations. In _Proceedings of the 2000 ACM conference on Computer supported cooperative work_. 241–250.
* Hill (2012) Clara E Hill. 2012\. _Consensual qualitative research: A practical resource for investigating social science phenomena._ American Psychological Association.
* Hill et al. (2005) Clara E Hill, Sarah Knox, Barbara J Thompson, Elizabeth Nutt Williams, Shirley A Hess, and Nicholas Ladany. 2005. Consensual qualitative research: An update. _Journal of counseling psychology_ 52, 2 (2005), 196\.
* Holmquist (2017) Lars Erik Holmquist. 2017\. Intelligence on tap: artificial intelligence as a new design material. _interactions_ 24, 4 (2017), 28–33.
* Höök (2000) Kristina Höök. 2000\. Steps to take before intelligent user interfaces become real. _Interacting with computers_ 12, 4 (2000), 409–426.
* Hoover et al. (2015) Amy K Hoover, William Cachia, Antonios Liapis, and Georgios N Yannakakis. 2015. Audioinspace: Exploring the creative fusion of generative audio, visuals and gameplay. In _International Conference on Evolutionary and Biologically Inspired Music and Art_. Springer, 101–112.
* Horvitz (1999) Eric Horvitz. 1999\. Principles of mixed-initiative user interfaces. In _Proceedings of the SIGCHI conference on Human Factors in Computing Systems_. 159–166.
* Jaderberg et al. (2019) Max Jaderberg, Wojciech M Czarnecki, Iain Dunning, Luke Marris, Guy Lever, Antonio Garcia Castaneda, Charles Beattie, Neil C Rabinowitz, Ari S Morcos, Avraham Ruderman, et al. 2019\. Human-level performance in 3D multiplayer games with population-based reinforcement learning. _Science_ 364, 6443 (2019), 859–865.
* Jallov et al. (2016) Daniel Jallov, Sebastian Risi, and Julian Togelius. 2016\. EvoCommander: A novel game based on evolving and switching between artificial brains. _IEEE Transactions on Computational Intelligence and AI in Games_ 9, 2 (2016), 181–191.
* Juul (2013) Jesper Juul. 2013\. _The art of failure: An essay on the pain of playing video games_. MIT press.
* Lakoff and Johnson (2008) George Lakoff and Mark Johnson. 2008. _Metaphors we live by_. University of Chicago press.
* Lakoff and Turner (2009) George Lakoff and Mark Turner. 2009. _More than cool reason: A field guide to poetic metaphor_. University of Chicago press.
* Law et al. (2018) Effie L-C Law, Florian Brühlmann, and Elisa D Mekler. 2018\. Systematic review and validation of the game experience questionnaire (geq)-implications for citation and reporting practice. In _Proceedings of the 2018 Annual Symposium on Computer-Human Interaction in Play_. 257–270.
* Lessin and Risi (2015) Dan Lessin and Sebastian Risi. 2015. Darwin’s Avatars: A Novel Combination of Gameplay and Procedural Content Generation. In _Proceedings of the 2015 Annual Conference on Genetic and Evolutionary Computation_. 329–336.
* Lubart (2005) Todd Lubart. 2005\. How can computers be partners in the creative process: classification and commentary on the special issue. _International Journal of Human-Computer Studies_ 63, 4-5 (2005), 365–369.
* Lucero et al. (2013) Andrés Lucero, Jussi Holopainen, Elina Ollila, Riku Suomela, and Evangelos Karapanos. 2013\. The playful experiences (PLEX) framework as a guide for expert evaluation. In _Proceedings of the 6th International Conference on Designing Pleasurable Products and Interfaces_. 221–230.
* Luger and Sellen (2016) Ewa Luger and Abigail Sellen. 2016. ”Like Having a Really Bad PA”: The Gulf between User Expectation and Experience of Conversational Agents. In _Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems_ (San Jose, California, USA) _(CHI ’16)_. Association for Computing Machinery, New York, NY, USA, 5286–5297. https://doi.org/10.1145/2858036.2858288
* Mateas (1999) Michael Mateas. 1999\. An Oz-centric review of interactive drama and believable agents. In _Artificial intelligence today_. Springer, 297–328.
* Mateas (2003) Michael Mateas. 2003\. Expressive AI: A semiotic analysis of machinic affordances. In _3rd Conference on Computational Semiotics for Games and New Media_. 58.
* Mateas and Stern (2003) Michael Mateas and Andrew Stern. 2003. Façade: An experiment in building a fully-realized interactive drama. In _Game developers conference_ , Vol. 2. 4–8.
* McDonald et al. (2019) Nora McDonald, Sarita Schoenebeck, and Andrea Forte. 2019\. Reliability and inter-rater reliability in qualitative research: Norms and guidelines for CSCW and HCI practice. _Proceedings of the ACM on Human-Computer Interaction_ 3, CSCW (2019), 1–23.
* Mnih et al. (2013) Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. 2013. Playing atari with deep reinforcement learning. _arXiv preprint arXiv:1312.5602_ (2013).
* Montfort (2005) Nick Montfort. 2005\. _Twisty Little Passages: an approach to interactive fiction_. Mit Press.
* Myers et al. (2018) Chelsea Myers, Anushay Furqan, Jessica Nebolsky, Karina Caro, and Jichen Zhu. 2018. Patterns for how users overcome obstacles in voice user interfaces. In _Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems_. 1–7.
* Myers et al. (2020) Chelsea M. Myers, Jiachi Xie, and Jichen Zhu. 2020. A Game-Based Approach for Helping Designers Learn Machine Learning Concepts. arXiv:arXiv:2009.05605
* Nacke et al. (2009) Lennart Nacke, Anders Drachen, Kai Kuikkaniemi, Joerg Niesenhaus, Hannu J Korhonen, Wouter M Hoogen, Karolien Poels, Wijnand A IJsselsteijn, and Yvonne AW De Kort. 2009\. Playability and player experience research. In _Proceedings of digra 2009: Breaking new ground: Innovation in games, play, practice and theory_. DiGRA.
* Norman (2013) Don Norman. 2013\. _The design of everyday things: Revised and expanded edition_. Basic books.
* Ontanón et al. (2013) Santiago Ontanón, Gabriel Synnaeve, Alberto Uriarte, Florian Richoux, David Churchill, and Mike Preuss. 2013. A survey of real-time strategy game AI research and competition in StarCraft. _IEEE Transactions on Computational Intelligence and AI in games_ 5, 4 (2013), 293–311.
* Oulasvirta et al. (2020) A. Oulasvirta, N. R. Dayama, M. Shiripour, M. John, and A. Karrenbauer. 2020. Combinatorial Optimization of Graphical User Interface Designs. _Proc. IEEE_ 108, 3 (2020), 434–464.
* Pemberton et al. (2019) Derrick Pemberton, Zhiguo Lai, Lotus Li, Shitong Shen, Jue Wang, and Jessica Hammer. 2019\. AI or Nay-I? Making Moral Complexity More Accessible. In _Extended Abstracts of the Annual Symposium on Computer-Human Interaction in Play Companion Extended Abstracts_ (Barcelona, Spain) _(CHI PLAY ’19 Extended Abstracts)_. Association for Computing Machinery, New York, NY, USA, 281–286. https://doi.org/10.1145/3341215.3358248
* Rabin (2015) Steven Rabin. 2015\. _Game AI pro 2: collected wisdom of game AI professionals_. AK Peters/CRC Press.
* Rader et al. (2018) Emilee Rader, Kelley Cotter, and Janghee Cho. 2018\. Explanations as mechanisms for supporting algorithmic transparency. In _Proceedings of the 2018 CHI conference on human factors in computing systems_. 1–13.
* Richards and Hemphill (2018) K Andrew R Richards and Michael A Hemphill. 2018. A practical guide to collaborative qualitative data analysis. _Journal of Teaching in Physical Education_ 37, 2 (2018), 225–231.
* Riedl and Zook (2013) Mark Owen Riedl and Alexander Zook. 2013. AI for game production. In _2013 IEEE Conference on Computational Inteligence in Games (CIG)_. IEEE, 1–8.
* Risi et al. (2015) Sebastian Risi, Joel Lehman, David B D’Ambrosio, Ryan Hall, and Kenneth O Stanley. 2015. Petalz: Search-based procedural content generation for the casual gamer. _IEEE Transactions on Computational Intelligence and AI in Games_ 8, 3 (2015), 244–255.
* Risi and Preuss (2020) Sebastian Risi and Mike Preuss. 2020. From Chess and Atari to StarCraft and Beyond: How Game AI is Driving the World of AI. _KI-Künstliche Intelligenz_ 34, 1 (2020), 7–17.
* Risi and Togelius (2015) Sebastian Risi and Julian Togelius. 2015. Neuroevolution in games: State of the art and open challenges. _IEEE Transactions on Computational Intelligence and AI in Games_ 9, 1 (2015), 25–41.
* Sharp et al. (2019) Helen Sharp, Yvonne Rogers, and Jennifer Preece. 2019\. _Interaction Design: beyond human-computer interaction_ (5th ed.). John Wiley & Sons.
* Silver et al. (2016) David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. 2016\. Mastering the game of Go with deep neural networks and tree search. _nature_ 529, 7587 (2016), 484–489.
* Smith and Linden (2017) Brent Smith and Greg Linden. 2017. Two decades of recommender systems at Amazon. com. _Ieee internet computing_ 21, 3 (2017), 12–18.
* Snodgrass and Ontañón (2014) Sam Snodgrass and Santiago Ontañón. 2014. Experiments in map generation using Markov chains.. In _FDG_.
* Stanley (2007) Kenneth O Stanley. 2007\. Compositional pattern producing networks: A novel abstraction of development. _Genetic programming and evolvable machines_ 8, 2 (2007), 131–162.
* Stanley et al. (2005) Kenneth O Stanley, Bobby D Bryant, and Risto Miikkulainen. 2005\. Evolving neural network agents in the NERO video game. _Proc. IEEE_ (2005), 182–189.
* Steinfeld et al. (2006) Aaron Steinfeld, Terrence Fong, David Kaber, Michael Lewis, Jean Scholtz, Alan Schultz, and Michael Goodrich. 2006. Common metrics for human-robot interaction. In _Proceedings of the 1st ACM SIGCHI/SIGART conference on Human-robot interaction_. 33–40.
* Sukis (2019) Jennifer Sukis. 2019\. Ai design & practices guidelines (a review). https://medium.com/design-ibm/ai-design-guidelines-e06f7e92d864
* Sung et al. (2007) Ja-Young Sung, Lan Guo, Rebecca E Grinter, and Henrik I Christensen. 2007. “My Roomba is Rambo”: intimate home appliances. In _International conference on ubiquitous computing_. Springer, 145–162.
* Sweetser and Johnson (2004) Penelope Sweetser and Daniel Johnson. 2004. Player-centered game environments: Assessing player opinions, experiences, and issues. In _International Conference on Entertainment Computing_. Springer, 321–332.
* Takahashi (2018) Dean Takahashi. 2018\. How Microsoft’s Turn 10 fashioned the A.I. for cars in Forza Motorsport 5 (interview). https://venturebeat.com/2013/11/06/how-microsofts-turn-10-fashioned-the-ai-for-cars-in-forza-motorsport-5-interview/
* Tintarev and Masthoff (2007) Nava Tintarev and Judith Masthoff. 2007. A survey of explanations in recommender systems. In _2007 IEEE 23rd international conference on data engineering workshop_. IEEE, 801–810.
* Treanor et al. (2015) Mike Treanor, Alexander Zook, Mirjam P Eladhari, Julian Togelius, Gillian Smith, Michael Cook, Tommy Thompson, Brian Magerko, John Levine, and Adam Smith. 2015\. AI-based game design patterns. (2015).
* Valls-Vargas et al. (2015) Josep Valls-Vargas, Santiago Ontanón, and Jichen Zhu. 2015\. Exploring player trace segmentation for dynamic play style prediction. In _Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment_ , Vol. 11.
* Valls-Vargas et al. (2017) Josep Valls-Vargas, Jichen Zhu, and Santiago Ontañón. 2017\. Graph grammar-based controllable generation of puzzles for a learning game about parallel programming. In _Proceedings of the 12th International Conference on the Foundations of Digital Games_. 1–10.
* Valve (2008) Valve. 2008. Left 4 Dead. https://www.l4d.com
* Vinyals et al. (2019) Oriol Vinyals, Igor Babuschkin, Wojciech M Czarnecki, Michaël Mathieu, Andrew Dudzik, Junyoung Chung, David H Choi, Richard Powell, Timo Ewalds, Petko Georgiev, et al. 2019\. Grandmaster level in StarCraft II using multi-agent reinforcement learning. _Nature_ 575, 7782 (2019), 350–354.
* Wang et al. (2019) Danding Wang, Qian Yang, Ashraf Abdul, and Brian Y Lim. 2019\. Designing theory-driven user-centric explainable AI. In _Proceedings of the 2019 CHI conference on human factors in computing systems_. 1–15.
* Weizenbaum (1966) Joseph Weizenbaum. 1966\. ELIZA—a computer program for the study of natural language communication between man and machine. _Commun. ACM_ 9, 1 (1966), 36–45.
* Wexler (2002) James Wexler. 2002\. Artificial Intelligence in Games. _Rochester: University of Rochester_ (2002).
* Winograd (2006) Terry Winograd. 2006\. Shifting viewpoints: Artificial intelligence and human–computer interaction. _Artificial intelligence_ 170, 18 (2006), 1256–1258.
* Yang et al. (2018a) Qian Yang, Nikola Banovic, and John Zimmerman. 2018a. Mapping machine learning advances from hci research to reveal starting places for design innovation. In _Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems_. 1–11.
* Yang et al. (2018b) Qian Yang, Alex Scuito, John Zimmerman, Jodi Forlizzi, and Aaron Steinfeld. 2018b. Investigating how experienced UX designers effectively work with machine learning. In _Proceedings of the 2018 Designing Interactive Systems Conference_. 585–596.
* Yang et al. (2020) Qian Yang, Aaron Steinfeld, Carolyn Rosé, and John Zimmerman. 2020. Re-Examining Whether, Why, and How Human-AI Interaction Is Uniquely Difficult to Design. In _Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems_ (Honolulu, HI, USA) _(CHI ’20)_. Association for Computing Machinery, New York, NY, USA, 1–13. https://doi.org/10.1145/3313831.3376301
* Yannakakis and Togelius (2018) Georgios N Yannakakis and Julian Togelius. 2018. _Artificial intelligence and games_. Vol. 2. Springer.
* Young et al. (2004) R Michael Young, Mark O Riedl, Mark Branly, Arnav Jhala, RJ Martin, and CJ Saretto. 2004\. An architecture for integrating plan-based behavior generation with interactive game environments. _J. Game Dev._ 1, 1 (2004), 1–29.
* Zhu (2009) Jichen Zhu. 2009\. _Intentional systems and the artificial intelligence (ai) hermeneutic network: Agency and intentionality in expressive computational systems_. Ph.D. Dissertation. Georgia Institute of Technology.
* Zhu et al. (2019) Jichen Zhu, Katelyn Alderfer, Anushay Furqan, Jessica Nebolsky, Bruce Char, Brian Smith, Jennifer Villareale, and Santiago Ontañón. 2019. Programming in game space: how to represent parallel programming concepts in an educational game. In _Proceedings of the 14th International Conference on the Foundations of Digital Games_. 1–10.
* Zhu and Harrell (2008) Jichen Zhu and D Fox Harrell. 2008. Daydreaming with Intention: Scalable Blending-Based Imagining and Agency in Generative Interactive Narrative.. In _AAAI Spring Symposium: Creative Intelligent Systems_ , Vol. 156.
* Zhu et al. (2018) Jichen Zhu, Antonios Liapis, Sebastian Risi, Rafael Bidarra, and G Michael Youngblood. 2018\. Explainable AI for designers: A human-centered perspective on mixed-initiative co-creation. In _2018 IEEE Conference on Computational Intelligence and Games (CIG)_. IEEE, 1–8.
* Zhu and Ontanón (2010) Jichen Zhu and Santiago Ontanón. 2010. Towards Analogy-Based Story Generation.. In _ICCC_. 75–84.
* Zhu and Ontañón (2013) Jichen Zhu and Santiago Ontañón. 2013. Shall I compare thee to another story?—An empirical study of analogy-based story generation. _IEEE Transactions on Computational Intelligence and AI in Games_ 6, 2 (2013), 216–227.
# LIME: Learning Inductive Bias for Primitives of Mathematical Reasoning
###### Abstract
While designing inductive bias in neural architectures has been widely
studied, we hypothesize that transformer networks are flexible enough to
_learn_ inductive bias from suitable generic tasks. Here, we replace
architecture engineering by encoding inductive bias in the form of datasets.
Inspired by Peirce’s view that deduction, induction, and abduction are the
primitives of reasoning, we design three synthetic tasks that are intended to
require the model to have these three abilities. We specifically design these
tasks to be synthetic and devoid of mathematical knowledge to ensure that only
the fundamental reasoning biases can be learned from these tasks. This defines
a new pre-training methodology called “LIME” (Learning Inductive bias for
Mathematical rEasoning). Models trained with LIME significantly outperform
vanilla transformers on four very different large mathematical reasoning
benchmarks. Unlike traditional pre-training approaches, which dominate the overall computation cost, LIME requires only a small fraction of the computation cost of the
typical downstream task. The code for generating LIME tasks is available at
https://github.com/tonywu95/LIME.
## 1 Introduction
Inductive bias is essential for successful neural network learning. Many of
the breakthroughs in machine learning are accompanied by new neural
architectures with better inductive biases, such as locality bias in
convolutional neural networks (lecun1999cnn), recurrence and memory in LSTMs
(hochreiter1997lstm), and structural bias in graph neural networks
(scarselli2008graph). However, explicitly encoding inductive biases as new
neural architectures can be difficult for abstract concepts such as
_mathematical reasoning_. Attempts to design elaborate architectures for
reasoning often fall short of the performance of the more generic transformer
architecture. In this work, we aim to avoid the search for new architectures
and investigate whether one can _learn useful inductive bias for mathematical
reasoning through pretraining_.
Large-scale unsupervised pretraining of language models revolutionized the
field of natural language processing (NLP), improving the state-of-the-art in
question answering, named entity recognition, text classification, and other
domains, e.g. (radford2019gpt; devlin2019bert; YangDYCSL19; roberta;
raffel2019exploring; gpt3). As a result, pretraining has become a common
practice for modern neural network based NLP. A popular explanation for the
benefit of pretraining is that the model can learn world knowledge by
memorizing the contents of the natural language corpus, which can be useful in
downstream tasks, such as question answering and text classification. However,
there is another potential advantage of pretraining—it may distill inductive
biases into the model that are helpful for training on downstream tasks (gpt3;
Warstadt2020CanNN). We focus on the latter and design pretraining tasks that
are intentionally devoid of world knowledge and only allow the model to learn
inductive bias for reasoning.
Inspired by the logician Charles Peirce (peirce1992reasoning), we consider the
following three reasoning primitives:
1. Deduction: the ability to deduce new truths from given facts and inference rules.
2. Induction: the ability to induce general inference rules from a set of known facts.
3. Abduction: the ability to explain the relationship between evidence and inference rules.
To endow the models with an inductive bias for mathematical reasoning, we
design a synthetic task for each of the three reasoning primitives. We
hypothesize that the transformer networks are flexible enough to learn strong
inductive bias from the three synthetic reasoning tasks, which helps to
improve the performance on downstream tasks. Although such inductive bias may
be useful in general reasoning tasks (e.g., NLP tasks), in this work, we focus
on mathematical reasoning benchmarks, for which we expect to observe the
largest gains. We call training on these tasks LIME – an acronym for “Learning
Inductive Bias for Mathematical rEasoning”. Note that there is only a limited
amount of pretraining data available for formal mathematical benchmarks; therefore, the study of generic pre-training techniques is particularly
important for the success of machine learning in mathematical reasoning.
We demonstrate that LIME pretrained models provide significant gains across
four large mathematical reasoning benchmarks: IsarStep (li2020modelling),
HOList Skip-tree (rabe2020mathematical), MetaMathStep (polu2020generative),
and LeanStep (MouraKADR15). Notably, LIME improved the top-1 accuracy from
$20.4\%$ to $26.9\%$ on IsarStep, and from $15.5\%$ to $29.8\%$ on LeanStep.
Compared to traditional pretraining tasks, LIME has two major differences.
First, LIME requires only a fraction of the computational cost of downstream
tasks. With only about two hours of training on a single modern GPU, one
already obtains all the benefits, in contrast to days of training on a large
natural language corpus with hundreds of GPUs/TPUs. Secondly, LIME does not
load the input embeddings or the weights in the output layer for finetuning on
downstream tasks. This allows one to use the same pretrained model for a
variety of downstream tasks, which can have vastly different vocabularies due
to language or tokenization differences.
Our method can also be regarded as a form of curriculum learning, in which the model is taught basic and extremely generic reasoning skills before being trained on the specific problem domain.
To summarize, the contributions of the paper are:
1. Providing the first method to design inductive biases in the form of datasets for mathematical reasoning.
2. Demonstrating significant improvements in the reasoning performance of transformer models on three large mathematical reasoning benchmarks with negligible extra computation cost.
3. Disentangling the study of the working mechanism of pretraining by showing that it brings benefits beyond learning content knowledge.
## 2 Related Work
#### Learning Models Applied to Mathematics
There has been increasing interest in applying deep learning methods to
Interactive Theorem Provers (ITP) (bansal2019holist; bansal2020learning;
gauthier2018learning; huang2018gamepad; yang2019learning; wu2020int;
li2020modelling; polu2020generative). The work that is most related to ours is
GPT-$f$ (polu2020generative). The authors performed pretraining on several
natural language corpora and showed significant improvements for an ITP system
– MetaMath. Different from ours, they used GPT-style large-scale language
modeling pretraining, which dominates the computation cost compared to the
downstream task. We, on the other hand, propose pretraining on a few
lightweight synthetic tasks costing only a minor fraction of the computation
spent on the downstream task.
lample2020deep have demonstrated that transformer models can be used for
symbolic mathematics by successfully predicting the integrals of formulas from
a randomly generated dataset. Similar observations have been made for logical problems relevant to verification: transformer networks can learn the
semantics of logics (hahn2020transformers). rabe2020mathematical have shown
that mathematical reasoning can emerge from self-supervised training alone.
li2020modelling show that language models can learn to synthesize missing
high-level intermediate propositions given a local context.
piotrowski2020guiding used RNNs in automated theorem provers for first-order
logic. Wang_2020 explored the use of machine translation to translate between
synthetically generated natural language descriptions of proofs and formally
represented proofs. urban2020neural present initial experiments on generating
mathematical conjectures with a Transformer model.
saxton2019analysing suggest a dataset for the analysis of mathematical
reasoning skills. In contrast to the datasets considered here, their dataset
is synthetic, focuses on calculation with concrete numbers, and only contains
relatively few symbolic tasks.
#### Language Model Pretraining
The advent of the transformer architecture (transformer17) and the BERT style
pretraining (devlin2019bert) represented a huge improvement in the quality of
language modeling. Since then, an explosion of research activity in the area
pushed the quality of language models through better pretraining tasks. Where
BERT (devlin2019bert) masks out a fraction of the input tokens, later works
demonstrated the advantages of masking out subsequences (song2019mass;
dong2019unified; joshi2020spanbert; raffel2019exploring; conneau2019cross) and
whole sentences (zhang2019pegasus).
Besides the choice of pretraining tasks, the scale of language models is also
an important factor. Language models improve in quality and develop new
abilities as they grow larger while trained on the same data (radford2019gpt;
raffel2019exploring; gpt3).
#### Inductive Biases in General
There have been works studying learning inductive biases in other contexts. In
particular, McCoy2020UniversalLI studied whether one can learn linguistic
inductive biases on synthetic datasets via meta-learning. papadimitriou-jurafsky-2020-learning show that inductive biases learned from music data can be useful for natural language; they further designed several synthetic tasks and showed similar kinds of improvements for natural language tasks. From a more
theoretical point of view, xu2020WhatCanNeuralNetworksReasonAbout formalize an
aspect of inductive (architectural) bias in the context of GNNs, with a notion called _architectural alignment_. An architecture is aligned when it can perfectly simulate the ground truth solution. However, their work is limited to showing alignment in combinatorial problems whose ground truth solutions are known. In contrast, our work tries to learn architectural
bias by relying on the flexible Transformer architecture and training on
synthetic datasets.
#### Inductive Biases for Mathematics
Previous work studying inductive biases for logical reasoning has focused on
encoding bias in the neural architecture. Initial works focused on encoding
the tree structure of expressions using TreeRNNs (evans2018can). Graph neural
networks are shown to provide a much stronger performance than tree models in
premise selection (wang2017premise) and theorem proving (paliwal2019graph).
GNNs also scale to larger formulas in SAT (neuro-sat; selsam2019neurocore;
han2020enhancing), QBF (lederman2020qbf), and #SAT (neurosharp).
crouse2019improving have shown that pooling mechanisms can have an impact on
the performance of GNNs on logical formulas as well. Closely related,
Hellendoorn2020GREAT have shown that it can be helpful to hard-code the tree
structure of programs in the attention mask of transformers. Schlag2019
developed an architecture for encoding relational information using tensor
product representation for mathematical reasoning.
## 3 Methods
In this section, we first discuss the primitives of reasoning, inspired by
Peirce’s views, and design one synthetic task for each reasoning primitive.
### 3.1 Reasoning Primitives
In Peirce’s view, there are exactly three kinds of reasoning: deduction,
abduction, and induction. Deduction is known as the workhorse of mathematics. It is the process of deriving new facts by applying logical inference rules to known facts or premises. On the other hand, abduction and induction can be thought of as the inverses of deduction. If we call the premise used in deduction the _Case_, its logical rule the _Rule_, and its conclusion the _Result_, then abduction is the inference of a Case from a Rule and a Result, while induction is the inference of a Rule from a Case and a Result. We summarize the three reasoning primitives in the
following table:
Reasoning Primitives | Inference Map
---|---
Deduction | Rule, Case $\to$ Result
Abduction | Rule, Result $\to$ Case
Induction | Case, Result $\to$ Rule
To give an example, we let Rule be “All the beans in this bag are white”, Case
be “These beans are from this bag”, and Result be “These beans are white”.
Deduction is to derive the fact that these beans are white (Re) from knowing
all the beans from this bag are white (R) and these beans are from this bag
(C). Abduction explains why the beans are white (Re) from knowing that all the
beans in the bag are white (R) – because these beans must be from the bag (C).
Lastly, induction aims to provide a general principle for the observations that the beans are white (Re) and that they come from this bag (C), namely that all the beans in the bag must be white (R). We refer to peirce1992reasoning
and peirce for more elaborate discussions on the primitives of reasoning.
Mathematical reasoning exhibits nontrivial uses of these reasoning primitives.
Deduction happens when one needs to derive new valid statements from the given
premise (Case) and theorems in the library (Rule). Abduction is used to
postulate conjectures from the known facts and theorems, allowing one to
decompose the challenging theorem into subgoals for proof. Induction, the
ability to extract general principles from known facts and theorems, is also
one of the major activities of mathematical reasoning. It is used when one
derives theorems from special cases and proposes new definitions and general
frameworks to encapsulate existing knowledge.
### 3.2 LIME Synthetic Tasks For Reasoning Primitives
We design three synthetic tasks inspired by the three reasoning primitives. As
discussed in the previous section, all of the reasoning primitives consist of
three essential elements: Rule, Case, and Result. Inspired by this, we first
design a method to generate those elements. Once they are generated, we can
construct tasks that predict one element from the other two. In the following,
we describe one simple way to generate those three elements, though we
acknowledge that there are many other possible approaches.
We require two types of symbols: 1. _math symbols_, 2. _rule symbols_. In general, these symbols can take any form (e.g., integer representations). For ease of discussion, we will think of math symbols as the union of
those operators used in mathematics (e.g., “$+-*=()\&$”) and lower case
letters (e.g., $a$, $b$, $c$ …), and rule symbols as upper case letters (e.g.,
$A$, $B$, $C$ …). We now construct Rule, Case, and Result in order:
1. Rule is a randomly sampled string that consists of i) rule symbols and ii) math symbols. The length of the string is randomly sampled from a range. For instance, a randomly sampled rule can be $A*A+B=C$, with rule symbols $A$, $B$, and $C$.
2. Case is a dictionary that represents substitutions. For each rule symbol used in the Rule string, we sample a random string of random length that consists of math symbols. This forms a dictionary whose keys are the rule symbols and whose values are the corresponding sampled strings. To illustrate, following the previous example, for each of $A$, $B$, and $C$ we sample a random string, forming a dictionary such as $\\{A:a,~{}B:b,~{}C:d+e\\}$.
3. Result is the outcome of the substitution. We replace each rule symbol in the Rule string with the corresponding value stored in the Case dictionary, which gives rise to the Result string. As per the previous example, we substitute $A$ with $a$, $B$ with $b$, and $C$ with $d+e$ in the Rule string, generating the Result string $a*a+b=d+e$.
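To make this generation procedure concrete, here is a minimal Python sketch of sampling one (Rule, Case, Result) triple. The symbol pools, function name, and defaults are illustrative assumptions for exposition only; the complete pseudo-code we actually use is given in Appendix Algorithm 1.

```python
import random
import string

# Illustrative symbol pools (the real implementation uses integer ids).
MATH_SYMBOLS = list("+-*=()&") + list(string.ascii_lowercase)
RULE_SYMBOLS = list(string.ascii_uppercase)

def generate_triple(rule_len=(5, 20), sub_len=(2, 8)):
    """Sample one (Rule, Case, Result) triple as described above."""
    # Rule: a random string mixing rule symbols and math symbols.
    rule = "".join(random.choice(RULE_SYMBOLS + MATH_SYMBOLS)
                   for _ in range(random.randint(*rule_len)))
    # Case: map each rule symbol occurring in Rule to a random
    # math-symbol string of random length.
    case = {s: "".join(random.choice(MATH_SYMBOLS)
                       for _ in range(random.randint(*sub_len)))
            for s in sorted(set(rule) & set(RULE_SYMBOLS))}
    # Result: the Rule string with every rule symbol substituted.
    result = "".join(case.get(c, c) for c in rule)
    return rule, case, result
```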
After Rule, Case, and Result are generated, we can construct three tasks for
deduction, abduction, and induction respectively. We define the three
synthetic tasks as follows:
* Deduct: Source: Rule string and Case dictionary. Target: Result string.
* Abduct: Source: Rule string and Result string. Target: Case dictionary.
* Induct: Source: Case dictionary and Result string. Target: Rule string.
We also consider a task called Mix, which is a uniform mixture of the three tasks: during generation, we randomly select one of the tasks and sample an example from it. To formulate these as sequence-to-sequence tasks, we represent
the Case dictionary also as a string, e.g., “$\\{A:a,~{}B:b,~{}C:d+e\\}$”. An
example of Abduct using the examples of Rule, Case, and Result above is to
predict the target $\\{A:a,~{}B:b,~{}C:d+e\\}$ from the source $A*A+B=C$ <s>
$a*a+b=d+e$.
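Continuing the sketch above, each task simply withholds one of the three elements; the separator token and the dictionary serialization below are illustrative choices, not necessarily those of the released generator.

```python
def serialize_case(case):
    """Render the Case dictionary as a flat string."""
    return "{" + ", ".join(f"{k}: {v}" for k, v in case.items()) + "}"

def make_pair(rule, case, result, task):
    """Build one (source, target) sequence pair for the given task."""
    if task == "Mix":  # Mix: uniformly pick one of the three tasks.
        task = random.choice(["Deduct", "Abduct", "Induct"])
    case_str = serialize_case(case)
    if task == "Deduct":   # Rule, Case -> Result
        return rule + " <s> " + case_str, result
    if task == "Abduct":   # Rule, Result -> Case
        return rule + " <s> " + result, case_str
    if task == "Induct":   # Case, Result -> Rule
        return case_str + " <s> " + result, rule
    raise ValueError(f"unknown task: {task}")
```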
Pre-training on our synthetic tasks can be seen as a form of skip-component
learning. There are three essential components: Rule, Case and Result, and we
skip one of them and use the remaining two elements to reconstruct the missing
one. Past work has shown that learning to predict missing words
(devlin2019bert), subsequences (song2019mass; raffel2019exploring), or
subtrees (rabe2020mathematical) are strong pre-training tasks.
### 3.3 Symbol-Agnostic Representation
In order to solve the synthetic tasks, the model needs to distinguish which
set of symbols can be substituted (rule symbols). As a result, the model may
memorize information about the symbols that is irrelevant to the inductive
biases encoded in the task. To prevent such memorization, we propose a way to
make the synthetic tasks agnostic to the choice of symbols.
We first note that the choice of symbols is irrelevant to our synthetic tasks.
To avoid symbol-specific memorization, for each training and evaluation
example, we randomly sample two sets of symbols to be used in Rules and in the
rest of the example. But for the Abduct task, the model needs to know which
symbols are replaced by the Rule part of the example and which symbols are in
the Result language. We simply list the split of the symbols used in the
example at the beginning of the input string, marked by two special symbols,
<Rule> and <Math>. They are followed by the original source string. The target
string remains unchanged. For example, the previous example in the Abduct task
becomes,
Source: <Rule> $A$ $B$ $C$ <Math> $*$ $+$ $=$ $a$ $b$ $d$ $e$ <s> $A*A+B=C$
<s> $a*a+b=d+e$
Target: $\\{A:a,~{}B:b,~{}C:d+e\\}$
In our implementation, we use integers to represent symbols. Specifically, for
each example, we sample two disjoint sets of integers from the set
$\\{1,\dots,S\\}$ to represent the math symbols and the rule symbols, where
$S$ is the size of the vocabulary. In our experiments, we sample 44 math
symbols and 24 rule symbols for each problem. The complete pseudo-code of
generating the symbols, Rule, Case, and Result for one task example is
provided in Appendix Algorithm 1.
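A minimal sketch of this symbol-agnostic encoding, assuming the integer-id representation described above (the helper names are hypothetical, and the `random` import from the earlier sketch is reused):

```python
def sample_symbol_ids(vocab_size=100, n_math=44, n_rule=24):
    """Sample two disjoint integer id sets for math and rule symbols."""
    ids = random.sample(range(1, vocab_size + 1), n_math + n_rule)
    return ids[:n_math], ids[n_math:]  # (math ids, rule ids)

def prepend_symbol_split(source, rule_ids, math_ids):
    """List the per-example symbol split before the original source."""
    header = (["<Rule>"] + [str(i) for i in rule_ids]
              + ["<Math>"] + [str(i) for i in math_ids])
    return " ".join(header) + " <s> " + source
```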
## 4 Experiments
In this section, we present results on four large mathematical reasoning tasks
that are especially useful in the context of automated theorem proving. Our
results show significant gains in learning inductive biases from synthetic
tasks. We selected four tasks to cover various styles of interactive theorem provers: the HOL-Light (skip-tree) corpus was created from very high-level tactic-based proofs, but it is less interpretable than IsarStep’s declarative-style corpus. We also evaluate the model’s ability to conjecture unseen lemma strings with the Lean theorem prover, which hosts some of the most sophisticated formalized mathematics. Lastly, we evaluate the next proof-step prediction task on the set.mm library of MetaMath, which consists of very granular, basic proof steps; the individual proof steps are more predictable, but proofs are correspondingly much longer on average.
### 4.1 Experiment Details
#### LIME Pretraining
We generate datasets of our synthetic tasks for pretraining: Deduct, Abduct,
Induct, Mix. For pretraining of IsarStep, we used a vocabulary size $S$ of
$1000$. For the other two downstream tasks, we used a vocabulary size of
$100$. The reason we used different vocabulary sizes is that we found (cf. appendix) that a discrepancy in vocabulary size affects the performance of a downstream task with a very large vocabulary (IsarStep’s is $28$K).
We use $44$ math symbols and $24$ rule symbols. The length of the Rule string
is sampled from $5$ to $20$, the length of the string for each substitution
(the values of Case dictionary) is sampled from 2 to 8. We used word-level
tokenization for all the tasks. We pretrained the model for $20$K updates. For
tasks with the larger vocabulary size (i.e., $1000$), we found that learning became more difficult. Hence we used a curriculum learning scheme: we first trained the model for $10$K steps on the same task with a vocabulary size of $100$, then continued training for another $10$K steps with a vocabulary size of $1000$.
The pretraining was done on a single Nvidia Tesla T4 GPU with $4$ CPU cores
for $2$ hours. We set the maximum number of tokens in a batch to $4096$, and
accumulated four batches of gradients for one parameter update. We used the
Adam optimizer (kingma2014adam) with learning rate $3\cdot 10^{-4}$. We used a
dropout rate of $0.1$ and label smoothing (szegedy2016rethinking) with a
coefficient $0.1$.
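For reference, here is one common formulation of the label-smoothed cross-entropy in a minimal PyTorch sketch; the experiments themselves rely on fairseq’s built-in criterion, so this re-implementation is purely illustrative.

```python
import torch.nn.functional as F

def label_smoothed_loss(logits, targets, eps=0.1):
    """Cross entropy with label smoothing coefficient eps.

    logits: (batch, vocab) unnormalized scores; targets: (batch,) ids.
    """
    log_probs = F.log_softmax(logits, dim=-1)
    # Negative log-likelihood of the gold label for each example.
    nll = -log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    # Loss against a uniform distribution over the vocabulary.
    smooth = -log_probs.mean(dim=-1)
    return ((1.0 - eps) * nll + eps * smooth).mean()
```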
#### Fine-tuning
For all the downstream tasks in this section, when loading the pretrained
models for fine-tuning, we load neither the vocabulary embeddings nor the output layer weights. For the downstream tasks IsarStep and MetaMathStep, we used four Nvidia Tesla T4 GPUs with $16$ CPU cores for training. We set the
maximum number of tokens in a batch to $4096$, and accumulated four batches of
gradients for one parameter update. We trained the model for $200$K updates.
We used the Adam optimizer, and we searched over the learning rates $\\{3\cdot
10^{-4}$, $7\cdot 10^{-4}\\}$, and warmup steps $\\{4000,8000\\}$. We used a
dropout rate of $0.1$ and label smoothing with a coefficient $0.1$. For the
HOList skip-tree task, we used TPUs for running the experiments. We used a
batch size of 256 sequences and trained the model for 1 million updates.
#### Architecture
All experiments used the transformer base model from (transformer17), i.e. 512
hidden size, 2048 filter size, 8 attention heads. For the IsarStep and
MetaMathStep task, we used 6 layers for both the encoder and decoder,
implemented using fairseq (ott2019fairseq). For the HOList skip-tree
experiment, we used a somewhat modified transformer architecture with 8 encoder and 4 decoder layers of the same size as above, in which the self-attention and the attention over the encoder output were merged.
#### Evaluation
During training, we kept track of the best validation tokenized BLEU score (as computed in fairseq: https://github.com/pytorch/fairseq/blob/master/fairseq/tasks/translation.py#L396), and we used the model with the best validation BLEU for evaluation on the test set. We report top-1 and top-10 accuracies. We consider an output sequence correct if it matches the target sequence exactly. We performed beam search with width 10. The top-1 accuracy is the percentage of examples for which the best output sequence is correct; the top-$n$ accuracy is the percentage of examples for which the target sequence appears among the top $n$ generated sequences.
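Since correctness is exact sequence match, both metrics reduce to membership tests over the beam candidates; a small illustrative helper (hypothetical name, assuming best-first candidate lists):

```python
def top_n_accuracy(beam_candidates, targets, n=10):
    """Exact-match top-n accuracy over beam-search outputs.

    beam_candidates: per example, a best-first list of decoded sequences.
    targets: the corresponding reference sequences.
    """
    hits = sum(target in candidates[:n]
               for candidates, target in zip(beam_candidates, targets))
    return hits / len(targets)

# Top-1 accuracy is then top_n_accuracy(beam_candidates, targets, n=1).
```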
Table 1: Test top-1, top-10 ($\%$) accuracy on the IsarStep task.

Model | Top-1 Acc. | Top-10 Acc.
---|---|---
No pretrain (li2020modelling) | 20.4 | 33.1
HAT (li2020modelling) | 22.8 | 35.2
LIME Deduct | 24.7 | 37.7
LIME Abduct | 26.7 | 41.0
LIME Induct | 23.9 | 38.8
LIME Mix | 26.9 | 40.4
Table 2: Test top-8 accuracy ($\%$) on Skip-Tree HOList.

Model | Equation completion | Hard type inference | Missing assumptions | Easy type inference
---|---|---|---|---
No pretrain (rabe2020mathematical) | 46.3 | 95.0 | 41.8 | 95.9
LIME Deduct | 50.3 | 94.8 | 47.9 | 97.0
LIME Abduct | 48.4 | 94.8 | 46.1 | 96.3
LIME Induct | 44.8 | 94.9 | 42.6 | 96.4
LIME Mix | 51.7 | 95.6 | 46.1 | 97.6
Figure 1: Validation BLEU over the course of training on the IsarStep task.
### 4.2 IsarStep
The IsarStep task is taken from (li2020modelling). IsarStep is a task of
predicting the missing intermediate propositions given surrounding
propositions to bridge the gap between the goal and the current state of the
proof. The dataset was mined from the public repository of formal proofs of
the Isabelle proof assistant (Paulson, 1994). Unlike HOList and MetaMath,
IsarStep contains mostly declarative proofs, a proof style close to humans’
prose proofs. The dataset has a broad coverage of undergraduate and research-
level mathematics and computer science theorems. There are 820K, 5000, 5000
sequence pairs for the training, validation, and test sets with a maximum of
800 tokens in source sequences and 200 tokens in the target sequences.
Following (li2020modelling), during training we used 512 as the maximum length for both the source and the target, and truncated sequences that exceeded it. For reporting, we evaluate all 5000 test examples regardless of their
lengths.
The results on the IsarStep task for the four pretrained models and the baseline transformer model without pretraining are shown in Table 1. We also include
another baseline, HAT transformer introduced in (li2020modelling), which is a
specially designed hierarchical transformer architecture tailored to this
task. We see that the pretrained models achieved substantial improvements over the model trained from scratch as well as over HAT. Notably, the model pretrained on Abduct improved the top-10 accuracy from $33.1\%$ to $41.0\%$, an almost $8\%$ absolute improvement. The model pretrained on Mix performed best on top-1 accuracy, improving over the baseline by $6.5\%$. We also show the validation BLEU scores over the course of training in Figure 1. The pretrained models learned much faster than the model trained from scratch: after around $50$K update steps, the pretrained models already obtained better BLEU scores than the best score achieved by the non-pretrained model. Moreover, since the downstream task requires $200$K steps of training
with $4$ GPUs, the amount of computation spent on pretraining is only $2.5\%$
of the downstream task, strongly demonstrating the efficiency of the proposed
pretraining method.
Table 3: Test top-1, top-10 ($\%$) accuracy on the MetaMathStep task.

Model | Top-1 Acc. | Top-10 Acc.
---|---|---
No pretrain | 67.7 | 76.5
LIME Deduct | 68.8 | 77.4
LIME Abduct | 68.8 | 76.1
LIME Induct | 69.9 | 78.0
LIME Mix | 69.1 | 77.9
### 4.3 HOList Skip-Tree
As the second mathematical reasoning benchmark, we consider the HOList skip-
tree evaluation tasks by rabe2020mathematical. These tasks include two
variants of type inference, predicting under which assumptions theorems hold,
and completing equalities. All source expressions for these tasks are taken
from the validation set of the theorem database of the HOList proof logs
(bansal2019holist). The evaluations are done on a random sample of 1000
instances from the full evaluation sets. We initialized the model parameters
with the pretrained weights and then repeated the experiments of rabe2020mathematical. That is, we trained the models for up to 1M parameter updates on the training set with batch size 256 and repeated the evaluation
every 100K steps. In Table 2 we present the best result from these 10
evaluation runs. We see a significant improvement in these reasoning tasks
when the models are initialized with the pretrained weights. Notably, on the equation completion and missing assumptions tasks, we improved the beam search (width $8$) exact-match rate from $46.3\%$ to $51.7\%$ and from $41.8\%$ to $47.9\%$, respectively. Note that this is despite the pretraining compute cost being negligible: it takes less than 1 percent of the cost of the downstream task training. Pretraining used $1/20$ of the update steps ($50$K vs. $1$M) with $8\times$ (and $4\times$) smaller batches (pretraining also uses much shorter sequence lengths: $128$ vs. $1024$ and $512$, respectively).
Table 4: Test top-1, top-10 ($\%$) accuracy on the LeanStep unseen lemma prediction task.

Model | Top-1 Acc. | Top-10 Acc.
---|---|---
No pretrain | 15.8 | 27.4
LIME Deduct | 25.8 | 38.0
LIME Abduct | 26.0 | 38.6
LIME Induct | 25.0 | 38.2
LIME Mix | 29.8 | 41.8
### 4.4 MetaMathStep
Compared to other ITPs, MetaMath is a low-level proving system: each proof
step makes only a small step towards the goal. As such, each proof contains
many more proof steps than in other ITPs: with $37,000$ theorems in the human-
written theorem library, there are around 3 million proof steps. We extract
these proof steps and use them to construct a sequence-to-sequence task
following polu2020generative (their proof step training objective).
In this task, the model is asked to generate PROOFSTEPS given a GOAL; namely,
the GOAL string is the source input and PROOFSTEPS is the target output. We
follow (polu2020generative) and use their string representation for the GOAL
and the PROOFSTEPS. Instead of the subword tokenization of polu2020generative,
we use a character-level representation for our task. Following
polu2020generative, we split theorems into train/valid/test sets of sizes
$35$K, $1$K, and $1$K, and associate all proof steps of a theorem with its
split. For each dataset, we filter out examples longer than 1024 characters.
This reduces the total number of proof steps to $1.4$ million. For the
validation and test sets, we randomly sample $3000$ examples out of $40$K
(after filtering) and perform validation and test evaluations on them. In
Table 3 we present the impact of LIME on MetaMathStep. We again observe gains
from LIME on this dataset, with the model pretrained on the Induct task
achieving a $2.2\%$ top-$1$ and $1.5\%$ top-$10$ test accuracy improvement. As
for the IsarStep task, the computation spent on pretraining is only $2.5\%$ of
that of the downstream task.
### 4.5 LeanStep: Unseen Next Lemma Prediction Task
Lastly, we look at a mathematical reasoning benchmark based on the Lean 3
theorem prover. Lean has an extremely active community and hosts some of the
most sophisticated formalized mathematics in the world, including scheme
theory (buzzard2019schemes), forcing (DBLP:conf/cpp/HanD20), perfectoid spaces
(DBLP:conf/cpp/BuzzardCM20), and condensed mathematics (lean-liquid). We
extracted a dataset from Lean in a similar style to MetaMathStep: we predict
the next lemma to apply given the current goal state (commonly known as the
tactic state in Lean). Unlike MetaMathStep, we focus on predicting lemmas that
have not been seen at training time; that is, this task evaluates the model's
capability to conjecture a novel lemma string given a goal. Specifically, we
extracted $498,624$ (goal, next lemma) pairs from the Lean mathlib library
(DBLP:conf/cpp/X20; pact). We found $34,867$ lemmas that appear only once in
the entire dataset. We then randomly sampled $8$K lemmas from this set and
used the corresponding goal-lemma pairs for the validation and test sets
($4$K each). As such, during validation and testing, the model must predict
lemmas that were never seen during training. We present the results for LIME
and the baseline in Table 4. We observed a large gain with LIME pretraining.
Remarkably, LIME Mix doubled the top-1 accuracy compared to the baseline
un-pretrained model, improving the accuracy from $15.8\%$ to $29.8\%$.
## 5 Ablation Studies
In this section, we perform ablation studies. Additional ablation studies can
be found in Appendix C.
Table 5: Comparisons to other pretraining tasks on the IsarStep task. Model | Top-1 Acc. | Top-10 Acc.
---|---|---
No pretrain (li2020modelling) | 20.4 | 33.1
LIME Mix | 26.9 | 40.4
Pretrain on MetaMathStep | 23.1 | 35.7
Pretrain on WMT En-De | 17.2 | 30.3
### 5.1 Pretraining on Formal Reasoning and Natural Language Tasks
Here we investigate how LIME compares to pretraining on natural language or
on existing formal reasoning datasets. In this set of experiments, we
pretrained three models: on Mix, on MetaMathStep, and on the WMT 2016
English-to-German (WMT En-De) translation task, and then fine-tuned and
evaluated these models on the IsarStep task. We pretrained the models on
MetaMathStep and WMT En-De for $200$K steps with 4 GPUs, which is $40$ times
more computation than is spent on LIME. Due to the mismatch between the
vocabularies of the pretraining and downstream tasks, we do not load the
vocabulary embeddings or output layer weights. The results in Table 5 show
that pretraining on MetaMathStep did provide gains, though significantly
smaller than the gains provided by LIME Mix, despite the $40$ times higher
computational cost. Moreover, pretraining on WMT translation even had a
negative effect on performance. We also conducted the analogous experiment
with evaluation on MetaMathStep; the result is shown in Table 6. In contrast
to MetaMath helping IsarStep, we see that pretraining on IsarStep did not help
the downstream MetaMathStep task. We hypothesize that this is because the
MetaMathStep task is closer to the LIME tasks than IsarStep is, and hence
pretraining on it provides more gains in that direction than in the opposite
one. We leave further investigation to future work.
Table 6: Pretraining on IsarStep for the MetaMathStep task. Model | Top-1 Acc. | Top-10 Acc.
---|---|---
No pretrain | 67.7 | 76.5
LIME Mix | 69.1 | 77.9
Pretrain on IsarStep | 67.0 | 76.1
### 5.2 Do we need vocabulary embeddings for fine-tuning?
As mentioned earlier, we did not load the vocabulary embeddings from the
pretrained models when we switched to fine-tuning on downstream tasks. Even
without loading the vocabulary embeddings, the pretrained models still
improved performance. In this ablation study, we investigate how much this
decision affected the results and whether loading vocabulary embeddings can
improve performance even further. We performed the comparison on IsarStep,
whose token vocabulary has size 28336. We generated new synthetic tasks with
the same vocabulary size, so that we could load the vocabulary embeddings and
output layers when initializing the model for IsarStep. Table 7 shows that
this led to similar performance. This aligns with our expectation that the
model should not learn content-specific knowledge of the kind potentially
stored in the vocabulary embeddings. These weights turn out to be
non-essential for the final performance, supporting the view that the
transformer learns inductive biases from the pretraining task.
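For concreteness, a PyTorch-style sketch of this partial initialization is
given below. This is our own illustration, not the actual training code: the
checkpoint format and the parameter-name prefixes ("embed", "generator") are
assumptions.

```python
import torch

def load_pretrained_except_vocab(model, ckpt_path):
    """Initialize a model from pretrained weights, skipping the vocabulary
    embedding and output-projection layers (hypothetical parameter names)."""
    state = torch.load(ckpt_path, map_location="cpu")
    filtered = {k: v for k, v in state.items()
                if not k.startswith(("embed", "generator"))}
    # strict=False leaves the skipped weights at their fresh initialization
    missing, unexpected = model.load_state_dict(filtered, strict=False)
    return model
```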
Table 7: Whether one needs to load vocabulary embeddings and output layer weights on the IsarStep task. Model | Top-1 Acc. | Top-10 Acc.
---|---|---
No pretrain (li2020modelling) | 20.4 | 33.1
LIME Mix | 26.9 | 40.4
LIME Mix + Loading All Weights | 26.7 | 40.6
### 5.3 Does LIME help LSTMs?
In this section, we investigate whether LIME also helps architectures other
than transformers. In particular, we applied LIME to two LSTM-based
architectures: (1) a vanilla LSTM and (2) an LSTM with an attention mechanism.
The vanilla LSTM is a stacked LSTM with 4 layers, each with 1000 cells, and
1000-dimensional embeddings. The LSTM-with-attention architecture is taken
from (LuongPM15), also with 4 layers, 1000 cells, and 1000-dimensional
embeddings. We evaluated on the IsarStep task, comparing a model trained from
scratch with a model pretrained on the LIME Abduct task, using the same
training protocol as described in Section 4.1. The results are shown in Table
8, along with the results for the transformer. We observe that LIME improved
the LSTM as well as the LSTM with attention, but the improvements were small
compared to the transformer's. Comparing top-1 accuracy, LIME improved the
LSTM from $5.5\%$ to $6.9\%$, the LSTM with attention from $12.3\%$ to
$13.4\%$, and the transformer from $20.4\%$ to $26.7\%$. This observation is
aligned with our hypothesis that the transformer is a malleable architecture,
capable of learning architectural inductive biases from data, which we mainly
attribute to the dynamic attention graphs learned in the self-attention
layers. We note that this warrants further investigation, as the performance
of these architectures is not at the same level, which may also lead to
different improvements.
Table 8: Comparing LIME's benefits on LSTMs on the IsarStep task. Model | Top-1 Acc. | Top-10 Acc.
---|---|---
LSTM | 5.5 | 11.3
LSTM + LIME Abduct | 6.9 | 14.3
LSTM + attention | 12.3 | 22.7
LSTM + attention + LIME Abduct | 13.4 | 26.3
Transformer | 20.4 | 33.1
Transformer + LIME Abduct | 26.7 | 41.0
## 6 Does LIME Encode Induction, Deduction and Abduction?
Although LIME has been shown to achieve substantial improvements across
various benchmarks, it is not entirely clear that the specific synthetic tasks
necessarily enforce the reasoning abilities of induction, deduction, and
abduction. We note that deduction, induction, and abduction are high-level,
philosophical concepts, and serve only as inspiration for designing the
synthetic tasks; we do not expect the model to necessarily learn exactly these
three capabilities. After all, we have chosen one particular implementation of
"Case", "Rule" and "Result". Furthermore, we also designed tasks that mimic
proof steps in formal theorem proving (see the rewrite task in Appendix B.1),
which also achieved excellent results. Nevertheless, we believe LIME is a
first step towards building reasoning inductive biases, and it suggests many
directions for future work.
## 7 Conclusion
In this work, we encoded inductive biases for mathematical reasoning in the
form of datasets. We created three synthetic tasks inspired by three reasoning
primitives: deduction, induction, and abduction. We demonstrated that
pretraining on these tasks (LIME) significantly improves performance across
four mathematical reasoning benchmarks. Notably, LIME requires negligible
computation compared to the downstream task, unlike previous pretraining
methods in which pretraining is the dominant cost. Our work naturally poses
many future research questions. Could the primitive tasks provide similar
gains for NLP tasks? Are there similar primitive tasks for natural language
reasoning? We also look forward to disentangling the effects of pretraining on
learning content knowledge versus inductive bias for downstream tasks, to
better understand pretraining.
## Appendix A Synthetic Task Generation Pseudocode
Algorithm 1
1:function generate_Tuple( Vocabulary size $S$)
2: Vocabulary $\mathcal{V}$ $\leftarrow$ $\\{1,2,\dots,S\\}$. $\triangleright$
Use an integer representation of symbols.
3: Math symbol set $\mathcal{M}$ $\leftarrow$ sample($\mathcal{V}$, $n$=44,
replacement=False). $\triangleright$ Sample 44 distinct symbols.
4: Rule symbol set $\mathcal{R}$ $\leftarrow$
sample($\mathcal{V}\backslash\mathcal{M}$, $n$=20, replacement=False).
$\triangleright$ Sample 20 distinct symbols.
5: Rule $R$ $\leftarrow$ sample($\mathcal{M}\bigcup\mathcal{R}$,
$n$=Random(5,20), replacement=False). $\triangleright$ Sample a sequence of
symbols of length between 5 and 20.
6: Case dictionary $C$ $\leftarrow$ $\\{\\}$.
7: for $s$ in $\mathcal{R}$ do
8: Case dictionary $C[s]$ $\leftarrow$ sample($\mathcal{M}$, $n$=Random(2,8),
replacement=True). $\triangleright$ Sample a sequence of symbols for each rule
symbol, of length between 2 and 8.
9: end for
10: Result $R^{\prime}$ $\leftarrow$ Rule $R$. $\triangleright$ Set result
string $R^{\prime}$ to be the same as rule string $R$.
11: for $s$ in $\mathcal{R}$ do
12: substitute($R^{\prime}$, $s$, $C[s]$). $\triangleright$ Substitute every
rule symbol $s$ in result string $R^{\prime}$ with previously randomly sampled
string $C[s]$.
13: end for
14: return Math symbol set $\mathcal{M}$, Rule symbol set $\mathcal{R}$, Rule
$R$, Case $C$, Result $R^{\prime}$.
15:end function
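For readers who prefer running code, below is a minimal Python sketch of
Algorithm 1; the function name and data structures are ours, but the sampling
steps follow the pseudocode above.

```python
import random

def generate_tuple(vocab_size):
    """Sample a (math symbols, rule symbols, Rule, Case, Result) tuple,
    following Algorithm 1 above (a sketch, not the original code)."""
    vocab = list(range(1, vocab_size + 1))       # integer representation of symbols
    math_symbols = random.sample(vocab, 44)      # 44 distinct math symbols
    remaining = [s for s in vocab if s not in set(math_symbols)]
    rule_symbols = random.sample(remaining, 20)  # 20 distinct rule symbols

    # Rule: a sequence of 5-20 distinct symbols drawn from both sets.
    rule = random.sample(math_symbols + rule_symbols, random.randint(5, 20))

    # Case: map each rule symbol to a string of 2-8 math symbols,
    # sampled with replacement.
    case = {s: [random.choice(math_symbols)
                for _ in range(random.randint(2, 8))]
            for s in rule_symbols}

    # Result: substitute every rule symbol in the Rule with its Case string.
    result = []
    for s in rule:
        result.extend(case[s] if s in case else [s])

    return math_symbols, rule_symbols, rule, case, result
```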
## Appendix B Other synthetic tasks
In this section, we describe the other variants of the synthetic tasks we
considered beyond the ones introduced in the main paper.
### Appendix B.1 Rewrite and Rewrite_multistep
We propose a rewrite task, inspired by the rewrite tactic used in interactive
theorem provers. The Rewrite task requires the model to rewrite a string
according to a rule transformation. One example of the task is:
Source: $a+b-c$ <s> $A+B=B+A$
Target: $b+a-c$
“$A+B=B+A$” is the rule transformation, which is applied to the LHS string
“$a+b-c$”. The model needs to predict the RHS string resulting from the rule
application, i.e., $b+a-c$. Besides rule symbols and math symbols, we also
require a third set of symbols, called "string symbols". For ease of
discussion, we will think of math symbols as the operators used in mathematics
(e.g., “$+-*=()\&$”), rule symbols as upper case letters (e.g., $A$, $B$, $C$,
…), and string symbols as lower case letters (e.g., $a$, $b$, $c$, …). We
first sample a random LHS string consisting of math symbols and string symbols
(e.g., $a+b-c$). We sample a sub-string of the LHS string and replace the
string symbols in the sub-string with rule symbols. For example, we sample the
substring $a+b$ from $a+b-c$, and replace $a$, $b$ with rule symbols $A$, $B$.
This forms the LHS of the rule transformation, $A+B$, with the substitution
dictionary $\\{A:a,B:b\\}$. We then sample the RHS of the rule transformation
from the union of the rule symbols $A$ and $B$ and all math symbols, e.g.,
$B+A$. This gives the rule transformation $A+B=B+A$. We substitute the value
in the substitution dictionary for each rule symbol in the RHS of the rule,
and then splice the result back into the original LHS string to obtain
$b+a-c$. The task example is constructed by using the LHS string and the rule
transformation as the source input, and the result of the rule application as
the target.
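A simplified Python sketch of this construction is given below. The symbol
sets, string lengths, and the fixed substring choice are our own illustrative
simplifications of the procedure described above.

```python
import random

STRING_SYMS = list("abcdefgh")  # lower-case "string symbols" (illustrative)
RULE_SYMS = list("ABCDEFGH")    # upper-case "rule symbols" (illustrative)

def make_rewrite_example():
    """Build one (source, target) pair for the Rewrite task."""
    # Sample a random LHS string such as "a+b-c": string symbols joined by operators.
    n = random.randint(3, 6)
    syms = random.sample(STRING_SYMS, n)
    lhs = [syms[0]]
    for s in syms[1:]:
        lhs += [random.choice("+-*"), s]

    # Take a substring (here simply the first three tokens, e.g. "a+b") and
    # replace its string symbols with rule symbols: "A+B" with {A: a, B: b}.
    sub, rest = lhs[:3], lhs[3:]
    subst, rule_lhs = {}, []
    for tok in sub:
        if tok in STRING_SYMS:
            r = RULE_SYMS[len(subst)]
            subst[r] = tok
            rule_lhs.append(r)
        else:
            rule_lhs.append(tok)

    # Sample a rule RHS over the used rule symbols and a math symbol, e.g. "B+A".
    rs = list(subst)
    random.shuffle(rs)
    rhs = [rs[0]] if len(rs) == 1 else [rs[0], random.choice("+-*")] + rs[1:]

    # Apply the rule: substitute the dictionary values into the RHS, then
    # splice the result back into the original LHS string.
    applied = [subst.get(t, t) for t in rhs]
    rule = "".join(rule_lhs) + "=" + "".join(rhs)
    source = "".join(lhs) + " <s> " + rule
    target = "".join(applied + rest)
    return source, target
```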
We further introduce a multi-step version of the rewrite task:
Rewrite_multistep. In this task, the source may contain more than one rewrite
rule, and the target is the result of applying all the rewrite rules in a
sequence. This task is motivated from the need to perform multi-step planning
in mathematical reasoning tasks. During pre-training, for each training
example, we uniformly sample the number of rewrite steps from 1 to 5.
### Appendix B.2 Other variants of Induct Task
We introduce three other variants of the Induct task.
1. Induct_v2: We move the Case dictionary from the source input to the target
output. This makes the task significantly harder, requiring the model to
synthesize both a rule and a possible explanation (Case) for the Result.
2. Induct_v3: Instead of providing the Case dictionary, we provide two Result
strings coming from the same Rule. Namely, we sample two Case dictionaries and
apply each to the Rule string to obtain two Result strings. Both Result
strings are used as the source, and the target is the Rule string.
3. Induct_rewrite: We also create an "induction" version of the Rewrite task.
In this task, the source is the LHS string concatenated with the RHS string
that results from the rewrite. The target is the rewrite rule used to perform
the rewrite.
### Appendix B.3 A full comparison of all synthetic tasks
In this section we present a full comparison of all synthetic tasks. We
followed the training protocol in Section 4.1 and evaluated each task on
IsarStep. The results are reported in Table 9. We can see that
Rewrite_multistep achieved the best performance across all synthetic tasks,
surpassing the baseline by $8.2\%$ in top-1 accuracy and $10.8\%$ in top-10
accuracy. This indicates that the inductive bias for long-horizon reasoning
encoded in Rewrite_multistep is very useful for the reasoning task.
Table 9: Test top-1, top-10 ($\%$) accuracy on the IsarStep task. Model | Top-1 Acc. | Top-10 Acc.
---|---|---
No pretrain (li2020modelling) | 20.4 | 33.1
HAT (li2020modelling) | 22.8 | 35.2
LIME Deduct | 24.7 | 37.7
LIME Abduct | 26.7 | 41.0
LIME Induct | 23.9 | 38.8
LIME Mix | 26.9 | 40.4
LIME Rewrite | 26.0 | 38.6
LIME Rewrite_multistep | 28.6 | 43.9
LIME Induct_v2 | 25.6 | 39.8
LIME Induct_v3 | 25.0 | 38.8
LIME Induct_rewrite | 25.8 | 39.5
Table 10: Effect of vocabulary size on the IsarStep task. Model | Top-1 Acc. | Top-10 Acc.
---|---|---
No pretrain | 20.4 | 33.1
LIME on Rewrite, $S=100$ | 24.1 | 37.5
LIME on Rewrite, $S=512$ | 25.4 | 38.8
LIME on Rewrite, $S=1000$ | 26.0 | 38.6
LIME on Rewrite, $S=5000$ | 25.8 | 38.5
LIME on Rewrite, $S=25000$ | 27.4 | 40.9
## Appendix C More Ablation Studies
### Appendix C.1 Does the vocabulary size matter?
In this section, we investigate whether the vocabulary size $S$ in the
synthetic task generation algorithm affects performance. We used the Rewrite
task for this experiment and generated datasets with vocabulary sizes $100$,
$512$, $1000$, $5000$, and $25000$. We used the same curriculum for
pretraining as described in Section 4.1 for the larger vocabulary sizes: first
training on the Rewrite task with vocabulary size $100$ for $10$K steps, then
training on each individual dataset for another $10$K steps. We compare
performance on the downstream task IsarStep; the results are presented in
Table 10. We see that when the vocabulary size is equal to or larger than
$512$, performance is similar. The smallest vocabulary size, $100$, obtained
the worst performance, and the other four models achieved similar BLEU scores;
the model trained on the largest vocabulary achieved the best top-1 and top-10
accuracy. The results show a non-trivial effect of the synthetic task's
vocabulary size on downstream performance. We used a vocabulary size of
$1000$ for all experiments in the main paper and leave investigation of the
causes to future work.
# Quantifying the Long-Range Structure of Foams and Other Cellular Patterns
with Hyperuniformity Disorder Length Spectroscopy
A. T. Chieco & D. J. Durian
Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, PA 19104-6396, USA
###### Abstract
We investigate the local and long-range structure of four different space-
filling cellular patterns: bubbles in a quasi-2d foam, plus Voronoi
constructions made around points that are uncorrelated (Poisson patterns), low
discrepancy (Halton patterns), and displaced from a lattice by Gaussian noise
(Einstein patterns). We study distributions of local quantities including cell
areas and topological features; the former is widest for bubbles in a foam,
making them locally the most disordered, but the latter show no significant
differences between the cellular patterns. Long-range structure is probed by
the spectral density and also by converting the real-space spectrum of number
density or volume fraction fluctuations for windows of diameter $D$ to the
effective distance $h(D)$ from the window boundary where these fluctuations
occur. This real-space hyperuniformity disorder length spectroscopy is
performed on various point patterns determined by the centroids of the bubbles
in the foam, by the point patterns around which the Voronoi cells are created,
and by the centroids of the Voronoi cells. These patterns are either
unweighted or weighted by the area of the cell they occupy. The unweighted
bubble centroids have $h(D)$ that collapses for the different ages of the
foam, with random Poissonian fluctuations at long distances. All patterns of
area-weighted points have constant $h(D)=h_{e}$ for large $D$;
$h_{e}=0.084\sqrt{\left<a\right>}$ for the bubble centroids is the smallest
value, meaning they are the most uniform, and all the weighted Voronoi-cell
centroids collapse to this same constant. A similar analysis is performed on
the edges of the cells, where the $h(D)$ spectra for the foam edges show
$h(D)\sim D^{1-\epsilon}$ with $\epsilon=0.3$.
## I Introduction
There are many well-established ways to quantify the local structure of foams
and other cellular systems using cell size, shape, topology, and neighbor
correlations Weaire and Hutzler (2001); Glazier and Weaire (1992); Stavans
(1993a); Flyvbjerg (1993). Distributions of such local measures are used to
describe the entire foam packing and under proper normalization they remain
the same as the foam coarsens; i.e. they exhibit statistical self-similarity
Stavans (1990, 1993b); Stavans and Glazier (1989); de Icaza _et al._ (1994);
Glazier _et al._ (1990); Herdtle and Aref (1992); Segel _et al._ (1993);
Rutenberg and McCurdy (2006); Neubert and Schreckenberg (1997). By contrast,
quantifying the long range structure of foams and other cellular systems
remains an open question.
Recently the concept of hyperuniformity was introduced to characterize the
structure of disordered materials at long distances Torquato and Stillinger
(2003); Zachary and Torquato (2009); Torquato (2018). Materials are called
“hyperuniform” if long range density fluctuations are suppressed to the same
extent as in crystals. Work on hard-particle packings of bidisperse disks,
ellipses, and superballs at the jamming transition found that they are
hyperuniform, and the researchers posit that hyperuniformity exists in all
systems at the jamming transition regardless of particle shape or
polydispersity Zachary _et al._ (2011a, b, c). Analyses of a wide assortment
of other disordered materials at or slightly below the jamming transition find
signatures of hyperuniformity Kurita and Weeks (2011); Berthier _et al._
(2011); Jiao _et al._ (2014); Dreyfus _et al._ (2015); Weijs _et al._ (2015);
Atkinson _et al._ (2016). However, hyperuniformity is not a signature of all
jammed systems, and work on simulated packings of bidisperse soft disks
demonstrates that it does not exist above the jamming transition Wu _et al._
(2015); Ikeda and Berthier (2015); Chieco _et al._ (2018). Additionally, Ref.
Chieco _et al._ (2018) shows in 2 dimensions, and Ref. Ikeda and Berthier
(2015) in 3 dimensions, that not only is hyperuniformity absent above jamming
but the overall uniformity of the packing decreases as the distance above
jamming increases. This is where foams pose an interesting problem. Foam is
far above the jamming transition, where hyperuniformity has not been observed,
yet it is completely space-filling and the total absence of density
fluctuations makes it trivially hyperuniform. Nevertheless the bubble packing
structure and the distribution of liquid films are visually disordered,
possessing large spatial fluctuations that could impact behavior and need to
be quantified.
While foams are a naturally occurring cellular solid, the same problem exists
for any disordered system with global packing fraction $\phi=1$. To examine
this problem in detail we generate space-filling cellular packings by
partitioning space with Voronoi constructions around point patterns of varying
disorder. Such patterns were analyzed in recent studies with regard to their
long range uniformity Klatt _et al._ (2019); Kim and Torquato (2019). In Ref.
Kim and Torquato (2019) the authors, using a standard method for diagnosing
hyperuniformity, find that for small-$q$ wave vectors the spectral density
scales like $\chi(q)\sim q^{4}$, where the exponent is exact based on the
conditions of their simulation. They do not present experimental data, so we
perform the same kind of Fourier analysis as they do for our foam systems as
well as our Voronoi constructions. Since all of the packings closely mirror
the conditions analyzed in Ref. Kim and Torquato (2019), we are interested in
whether analyzing our systems recovers the same spectral density scaling
exponent. This also allows us to test the extent to which foams are
hyperuniform and, more generally, to compare the uniformity of their long
range structure to that of the other space-filling cellular patterns.
In real space, the test for hyperuniformity is to randomly place a series of
local observation windows throughout a sample, measure the area fraction
covered by the particles that land within each window, and calculate the
variance of the set of measured area fractions; this is repeated for growing
observation windows, and if at large length scales the variance is suppressed
to the same extent as in crystals then the system is said to be hyperuniform.
There are two ways to define the area fraction within a measuring window. One
method calculates the area covered by a particular phase of the medium that
lands inside the window. If a cellular packing has global packing fraction
$\phi=1$ then the local area fraction is $\phi_{w}=1$ for every measuring
window and no meaningful signature of hyperuniformity can be found. The other
method, which we employ here, represents each cell by a point weighted by the
area of the cell; if that point lands inside the local observation window then
the entire area of the cell is counted, but if it lands outside the
observation window then none of the area is counted.
Measuring the asymptotic scaling behavior answers whether these systems are
hyperuniform but does not provide additional insight into the actual structure
of the underlying pattern. Such insight can be gained in principle using the
same tools used to diagnose hyperuniformity, but a necessary step is
converting the fluctuations observed in a local measurement window into a
length scale. This length scale is called the hyperuniformity disorder length
$h$: it is the distance from the boundary of a local measurement window within
which particles set the number density fluctuations Chieco _et al._ (2017);
Durian (2017). The value of $h$ therefore provides a length scale for disorder
that probes the nature of long-range structure as well as the structure at
smaller distances. This technique is called “hyperuniformity disorder length
spectroscopy” (HUDLS) and has succeeded in identifying long range structure in
other soft systems Chieco _et al._ (2018). Here we use it to uncover and
compare the extent of potential hidden order in various structural features of
foams and other cellular patterns. We are also able to determine whether the
local structure informs the long range structure.
## II Methods
### II.1 Hyperuniformity: Scaling and Definitions
In this section we review established methods for diagnosing hyperuniformity
in ways that quantify long-range structure. This may be done either by the
asymptotic scaling of the spectral density $\chi(q)$ or by the variance
$\sigma_{\phi}^{2}(D)$ of the set of local volume fractions measured in
randomly-placed windows of diameter $D$. If the spectral density has small
wave vector behavior like $\chi(q)\sim q^{\epsilon}$ with $\epsilon>0$, or
more generally if $\chi(0^{+})=0$, then a system is said to be hyperuniform.
Scaling with $0<\epsilon\leq 1$ corresponds to the long length scale scaling
$\sigma_{\phi}^{2}(D)\sim 1/D^{d+\epsilon}$ where $d$ is dimensionality; for
$\epsilon\geq 1$, $\chi(q)\sim q^{\epsilon}$ corresponds to
$\sigma_{\phi}^{2}(D)\sim 1/D^{d+1}$ and we say the system is strongly
hyperuniform. By contrast, ordinary systems exhibit Poissonian fluctuations,
with $\epsilon=0$. In reciprocal space the spectral density then approaches
$\chi(0^{+})=C$ where $C>0$ is a constant, and the volume fraction variance
scales like $\sigma_{\phi}^{2}(D)\sim 1/D^{d}$ according to the
dimensionality. The power $\epsilon$ can be used as a proxy for order, but its
actual value does not have an intuitive physical interpretation.
It is important to have meaningful interpretations of order from the actual
data. For the spectral density, this is achieved by choosing a proper
normalization of $\chi(q)$ and comparing the data to that for a “Poisson
pattern”, where particles are placed totally at random. If a cellular pattern
in 2 dimensions is represented by central points, with the entire area $a_{j}$
of particle $j$ placed at the location ${\bf r}_{j}$ of its center, then a
suitable definition of the spectral density is
$\chi(q)\equiv{\left(\sum a_{j}e^{i{\bf q}\cdot{\bf r}_{j}}~{}\sum
a_{k}e^{-i{\bf q}\cdot{\bf r}_{k}}\right)}/{\sum a_{j}^{2}}$ (1)
where $q=|{\bf q}|$ for isotropic packings and the sums are over all
particles. With this normalization, Poisson patterns have $\chi(q)=1$, which
becomes a nominal upper bound, and insight into structure at a given $q$ is
extracted from how far $\chi(q)$ lies below this value. Another benefit of
this normalization is that for systems of monodisperse particles, or for point
patterns, the spectral density reduces to the structure factor
$S\left(q\right)$.
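For illustration, Eq. (1) can be evaluated directly with a few lines of NumPy.
This is our own sketch rather than the analysis code used for this paper
(which is discussed in the appendix of Chieco _et al._ (2018)):

```python
import numpy as np

def spectral_density(points, areas, qvecs):
    """Evaluate Eq. (1): chi(q) = |sum_j a_j exp(i q.r_j)|^2 / sum_j a_j^2.

    points: (N, 2) array of centroid positions r_j
    areas:  (N,) array of weights a_j (set areas to ones to recover S(q))
    qvecs:  (M, 2) array of wave vectors q
    """
    phases = np.exp(1j * (qvecs @ points.T))   # (M, N) matrix of e^{i q.r_j}
    amplitude = phases @ areas                 # sum_j a_j e^{i q.r_j} for each q
    return np.abs(amplitude) ** 2 / np.sum(areas ** 2)
```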
In real space, order is determined from the spectrum of hyperuniformity
disorder lengths $h(D)$. Determining $h(D)$ for 2-dimensional systems first
requires finding the area fraction variance $\sigma_{\phi}^{2}(D)$. This
variance is measured from a set of local area fractions $\sum
N_{i}a_{i}/A_{\Omega}$, where $N_{i}$ is the number of particles of species
$i$ whose centers lie inside a randomly placed window of area
$A_{\Omega}=\pi(D/2)^{2}$ and the sum is over species. This is the real-space
definition of the central point representation. With these definitions, a
completely random arrangement of particles has
$\sigma_{\phi}^{2}\left(D\right)=\phi\left<a\right>/A_{\Omega}$ where $\langle
a\rangle=\sum\phi_{i}{a_{i}}/\sum\phi_{i}$ is the area fraction weighted
average particle area, $\phi_{i}$ is the area fraction covered by particles
with area $a_{i}$, and $\phi=\sum\phi_{i}$ is the area fraction of all the
particles in the system.
The measured area fractions fluctuate depending on where the measuring window
lands within the system; hyperuniform configurations have fluctuations that
are understood to be due to particles at the surface of the measuring windows
Torquato and Stillinger (2003). Since particle centers do not actually lie on
the window surface, it is more appropriate to picture fluctuations as
determined by the average number of particles whose centers lie within some
distance $h$ of the surface. For circular windows with area
$A_{\Omega}=\pi(D/2)^{2}$ we can thus define $h$ from the number variance via
$\sigma_{N_{i}}^{2}=(\phi_{i}/a_{i})\pi[(D/2)^{2}-(D/2-h)^{2}]$, which is
shown pictorially in Fig. 1. The number variance is converted to an area
fraction variance
$\sigma_{\phi}^{2}=\sum\sigma_{N_{i}}^{2}a_{i}^{2}/A_{\Omega}^{2}$ which leads
to the following explicit definition of $h(D)$ in terms of the measured
variance:
$\frac{\sigma_{\phi}^{2}(D)}{\phi}\equiv\frac{\langle a\rangle}{\pi\left(D/2\right)^{2}}\left\\{1-\left[1-\frac{h(D)}{D/2}\right]^{2}\right\\}$ (2)
$\approx\frac{2\langle a\rangle h(D)}{\pi\left(D/2\right)^{3}}\ {\rm for}\ D\gg h(D).$ (3)
Accordingly, smaller $h(D)$ means more uniformity, larger $h(D)$ means more
disorder, and $\sigma_{\phi}^{2}(D)\sim 1/D^{d+\epsilon}$ corresponds to
$h(D)\sim D^{1-\epsilon}$. Poissonian fluctuations have $\epsilon=0$ and
correspond to $h(D)\sim D$; the upper bound is $h(D)=D/2$ for a Poisson
pattern. Strong hyperuniformity where $\epsilon\geq 1$ corresponds to a
large-$D$ asymptote that is constant: $h(D)=h_{e}$. For this case,
$\sigma_{\phi}^{2}(D)\sim\langle a\rangle h_{e}/D^{3}$ is made dimensionally
correct by the existence of $h_{e}$ as an emergent length rooted in the
intuitive notion of what it means to be hyperuniform. Thus $h_{e}$ is the
desired measure of structure that is independent of $D$ when the system is
hyperuniform, and Eq. (2) generalizes this to systems with any degree of
uniformity.
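As a concrete illustration of this definition, the following Python sketch
(our own; the window count, box geometry, and periodic wrapping are
illustrative choices) measures $\sigma_{\phi}^{2}(D)$ with randomly placed
circular windows and inverts Eq. (2) for $h(D)$:

```python
import numpy as np

def hudl(points, areas, D, box=1.0, n_windows=10000, rng=None):
    """Estimate h(D) by inverting Eq. (2) from the measured variance.

    points: (N, 2) positions in a periodic box of side `box`
    areas:  (N,) cell areas assigned to each point
    """
    rng = rng or np.random.default_rng()
    A_window = np.pi * (D / 2) ** 2
    phi = areas.sum() / box**2                 # global packing fraction
    a_avg = np.sum(areas**2) / np.sum(areas)   # phi-weighted average area

    local_phi = np.empty(n_windows)
    for k in range(n_windows):
        center = rng.uniform(0, box, size=2)
        d = points - center
        d -= box * np.round(d / box)           # periodic minimum-image distance
        inside = np.hypot(d[:, 0], d[:, 1]) < D / 2
        local_phi[k] = areas[inside].sum() / A_window

    var_phi = local_phi.var()
    # invert Eq. (2): var/phi = (a_avg/A_window) * {1 - [1 - h/(D/2)]^2}
    x = 1.0 - var_phi * A_window / (phi * a_avg)
    return (D / 2) * (1.0 - np.sqrt(max(x, 0.0)))
```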
The definitions for the hyperuniformity-disorder length are discussed in much
more detail in Chieco _et al._ (2017); Durian (2017). These references also
go through the calculations of the upper bound $h(D)=D/2$ as well as a lower
bound for the “separated-particle limit” where the size of the measuring
window is smaller than the average distance between two particles. Additional
discussion about our methods calculating the spectral density can be found in
the appendix of Chieco _et al._ (2018).
Figure 1: Image of a quasi 2-dimensional foam with the bubble centroids marked
by dots. The total area of bubbles enclosed in a circular window is taken as
the sum of areas for bubbles with enclosed centroids (red). The area fraction
variance is controlled by the number of particles in the shaded region of
thickness $h(D)$, averaged over window placements. As depicted here for
$D=8\sqrt{\left<a\right>}$, the hyperuniformity disorder length is
$h(D)=\sqrt{\left<a\right>}$ where $\langle a\rangle$ is the area-fraction
weighted average bubble area. The value of $h(D)$ is inflated by over
$10\times$ its actual value for illustrative purposes.
We also note that hyperuniformity is fundamentally a measure of number
fluctuations; because points are given a weight equal to their area, the above
discussion is in the context of long wavelength area fraction fluctuations.
However, using the central point representation, fluctuations in any order
parameter can be determined by assigning an appropriate weight to each point;
e.g., for fluctuations in coordination number each point is given a weight
equal to its number of contacts Hexner _et al._ (2018). Assigning equal weight
to each point makes the system monodisperse and it is treated simply as a
point pattern; the signature of hyperuniformity for such systems is number
fluctuations that grow more slowly than the volume of the window. This
treatment changes the definitions and bounds from above: the spectral density
reduces to the structure factor $S(q)$ and all particle “areas” are $a_{j}=1$;
the random expectation for the number variance is
$\sigma_{N}^{2}\left(D\right)=\rho A_{\Omega}$ where $\rho$ is the number
density. The definition of $h(D)$ also changes and is obtained by rearranging
$\sigma_{N}^{2}=\rho\pi[(D/2)^{2}-(D/2-h)^{2}]$. Whether defined from the
number variance or the area fraction variance, $h(D)$ is calculated from the
ratio of the measured variance to the expected variance for a totally random
system, and our intuition for what it measures remains the same. Because foams
have not been studied in the context of hyperuniformity, we explore our
systems both as monodisperse point patterns using the centroids of the bubbles
and as polydisperse systems where the centroids are weighted by their bubble
area.
Figure 2: Disordered points patterns: (a) shows the locations of the centroids
of the bubbles in a quasi 2-dimensional foam; (b) the green dots are an
Einstein pattern where points are randomly displaced from a square lattice
with RMS displacement $\delta/b=0.26$ where $b$ is the lattice spacing; (c)
the blue dots are a Halton set which is a low discrepancy pattern where points
are determined algorithmically; (d) the dark red dots are a Poisson pattern
where points are placed totally at random. Parts (e-h) show the cellular
patterns used to partition space around the corresponding point pattern: Part
(e) displays the bubbles of a quasi-2d foam as well as the bubble centroids;
parts (f-h) show the cells of a Voronoi construction which are created around
the points that occupy each cell. For analysis of area fraction fluctuations
all points are given a value equal to the area of the cell they occupy.
### II.2 Foam and Voronoi Data
We study foam made from a solution that is 75% deionized water, 20% glycerin
and 5% Dawn Ultra Concentrated Dish Detergent. It is generated inside a sample
cell made from two 1.91 cm-thick acrylic plates separated by a spacing of 0.3
cm and sealed with two concentric o-rings, the inner of which has a 23 cm
diameter; this is the same apparatus used in Roth _et al._ (2013) for foam
coarsening experiments. Foams are produced as follows. First the trough is
filled with the desired amount of liquid, then flushed with Nitrogen and
sealed. The entire sample cell is vigorously shaken for several minutes until
the gas is uniformly dispersed as fine bubbles that are small compared to the
gap between plates. The foam is thus initially very wet, opaque, and three-
dimensional. The cell is immediately placed above a Vista Point A light box
and below a Nikon D90 camera with a Nikkor AF-S 300mm 1:2.8D lens. After a few
hours, the bubbles become large compared to the gap and the foam has coarsened
into a quasi two dimensional state; once the foam is quasi-2d, images of it
are taken every 2 minutes for 24 hours.
To gather relevant data for bubbles, such as their locations and areas, we
first have to reconstruct the foam microstructure and film network. The
reconstruction methods are described more thoroughly in the supplemental
materials of Chieco and Durian (2020). More briefly, the first step is to
locate the vertices via a convolution method of an example vertex structuring
element and the foam image. After the vertex locations are identified they are
connected to their neighbors by exploiting Plateau’s laws. Plateau’s laws in
2-dimensions say that vertices are the junction of three films which meet at
$120^{\circ}$ and that pairs of vertices are connected by films that are arcs
of circles. Therefore we know where to look for neighboring vertices and once
neighbors are identified we find equations for the circular arcs that connect
them. Finally bubbles are identified by making closed loops of vertices.
Analysis for hyperuniformity is ultimately performed on point patterns that
represent the bubbles, and the bubble centroids $(x_{c},y_{c})$ are a logical
pattern to choose. These points are defined as
$x_{c}=\sum{\left(x_{i}+x_{i+1}\right)\left(x_{i}y_{i+1}-x_{i+1}y_{i}\right)}/\left(6\alpha\right),$ (4)
$y_{c}=\sum{\left(y_{i}+y_{i+1}\right)\left(x_{i}y_{i+1}-x_{i+1}y_{i}\right)}/\left(6\alpha\right),$ (5)
$\alpha=\sum{\left(x_{i}y_{i+1}-x_{i+1}y_{i}\right)}/2$ (6)
where the sums are between all neighboring pairs of vertices on a bubble. An
example of the large scale point pattern and a zoomed in version showing the
points inside the bubbles they represent are shown in Fig. 2(a) and (e),
respectively.
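Eqs. (4)-(6) are the standard shoelace-type formulas for the signed area and
centroid of a polygon; a minimal NumPy sketch (our own) is:

```python
import numpy as np

def polygon_centroid(verts):
    """Centroid and area of a polygon via Eqs. (4)-(6).

    verts: (n, 2) array of vertex coordinates ordered around the cell.
    """
    x, y = verts[:, 0], verts[:, 1]
    xn, yn = np.roll(x, -1), np.roll(y, -1)   # neighboring vertices x_{i+1}, y_{i+1}
    cross = x * yn - xn * y                   # shoelace terms
    alpha = cross.sum() / 2.0                 # signed polygonal area, Eq. (6)
    xc = np.sum((x + xn) * cross) / (6.0 * alpha)
    yc = np.sum((y + yn) * cross) / (6.0 * alpha)
    return xc, yc, abs(alpha)
```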
To understand the nature of the disorder in the locations of bubble centroids,
we compare with different disordered point patterns. Three types of patterns
with varying degrees of uniformity are analyzed. The first type is an
“Einstein pattern”; these consist of points initially placed on a square
lattice and then randomly displaced by kicks drawn from a Gaussian
distribution. Varying the root mean square (RMS) displacements of the
particles tunes the disorder in the patterns Chieco _et al._ (2017). For the
purposes of this study the Gaussian kicks come from a distribution whose
standard deviation is $\delta=0.26b$, where $b$ is the lattice spacing. We
choose this value to make the number variance for the Einstein patterns the
same as that for the “Halton patterns”, which are the second type of pattern
analyzed. Halton patterns use points from a low discrepancy sequence Halton
(1964). They are of interest because, although they are non-crystalline, they
fill space quite evenly; these properties make them and other low discrepancy
patterns favorable for use in, e.g., Monte Carlo integration Halton (1960);
Niederreiter (1992); Kocis and Whiten (1997). Making a Halton pattern in two
dimensions is done by choosing two coprime integers $\\{j_{1},j_{2}\\}$; each
is an independent seeding element for a list of numbers, and our patterns have
$j_{1}=2$ and $j_{2}=3$. The ${n^{th}}$ number in each sequence is determined
by writing $n$ in base $j_{k}$, reversing its digits after a decimal point,
and converting this fraction back into base-10 representation. This is done
for both seeding elements, and the pair of numbers creates one point in the
Halton pattern. The third type is a “Poisson” pattern, where uncorrelated
points are laid down by drawing coordinates from a random number
generator. Fig. 2(b-d) shows sample point patterns.
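For reference, the three comparison patterns can be generated along the
following lines; this Python sketch uses our own conventions (unit box, unit
lattice spacing) rather than the exact code used for the paper.

```python
import numpy as np

def einstein_pattern(n_side, delta=0.26, rng=None):
    """Square lattice (spacing b=1) with Gaussian kicks of std delta*b per coordinate."""
    rng = rng or np.random.default_rng()
    gx, gy = np.meshgrid(np.arange(n_side), np.arange(n_side))
    pts = np.column_stack([gx.ravel(), gy.ravel()]).astype(float)
    return pts + rng.normal(0.0, delta, size=pts.shape)

def radical_inverse(n, base):
    """Reverse the base-`base` digits of n behind the decimal point."""
    inv, denom = 0.0, 1.0
    while n > 0:
        n, digit = divmod(n, base)
        denom *= base
        inv += digit / denom
    return inv

def halton_pattern(n_points, j1=2, j2=3):
    """2-d Halton set seeded by coprime bases (j1, j2)."""
    return np.array([[radical_inverse(n, j1), radical_inverse(n, j2)]
                     for n in range(1, n_points + 1)])

def poisson_pattern(n_points, rng=None):
    """Totally random (uncorrelated) points in the unit box."""
    rng = rng or np.random.default_rng()
    return rng.uniform(0.0, 1.0, size=(n_points, 2))
```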
However, bubbles are not simply points but are highly polydisperse cells of a
larger space-filling pattern. Therefore we also study how the areas of bubbles
are distributed throughout space. For this analysis, to keep with the
definitions for $\chi(q)$ and $h(D)$ from Eqs. (1) and (2), the bubble
centroids are given a weight equal to the area of the bubble they occupy. To
find the areas, each bubble is first treated as a polygon and the polygonal
area is calculated using Eq. (6). The curved edges of the bubbles are not
accounted for in this initial calculation; accounting for them makes the final
bubble area the polygonal area plus or minus the area under each circular arc,
according to whether the arc bends away from or towards the centroid of the
bubble. The foams are space filling and have a packing fraction of $\phi=1$.
Similar to the point pattern analysis, we want to compare data from bubbles to
data from other cellular structures. In simulation we are free to partition
space however we choose as long as we maintain a packing fraction $\phi=1$;
for this study we create cellular patterns from Voronoi constructions around
the three types of simulated point patterns described earlier in this section.
A Voronoi construction tiles space by separating points into cells whose edges
are lines equidistant from the two points that share that edge. Voronoi
patterns are generated using a built-in MATLAB function, which also identifies
the locations of the vertices for each cell; all cells are polygons, so Eq.
(6) is used once again to calculate the cell areas. Voronoi constructions,
especially those made around Poisson patterns, have been studied extensively,
but much of that work is beyond the scope of this paper Okabe _et al._ (2009);
here they are used to study the structure of cellular patterns around point
patterns of known disorder and to compare that to the structure of quasi-2d
foams, which are cellular patterns around point patterns of unknown disorder.
Recent work on cellular patterns in the context of hyperuniformity,
partitioning space by several methods including Voronoi constructions, has
been published Torquato and Chen (2018); Kim and Torquato (2019); it does not
include experimental data, nor does it consider the hyperuniformity disorder
length.
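An equivalent construction in Python, sketched below with SciPy in place of
the MATLAB function used here, returns the cell vertices from which Eq. (6)
gives the areas; skipping unbounded boundary cells is our own simplification.

```python
import numpy as np
from scipy.spatial import Voronoi

def voronoi_cells_and_areas(points):
    """Voronoi cells around a 2-d point pattern, with polygonal cell areas."""
    vor = Voronoi(points)
    areas = {}
    for i, region_idx in enumerate(vor.point_region):
        region = vor.regions[region_idx]
        if not region or -1 in region:     # skip open, unbounded boundary cells
            continue
        verts = vor.vertices[region]       # ordered cell vertices
        x, y = verts[:, 0], verts[:, 1]
        xn, yn = np.roll(x, -1), np.roll(y, -1)
        areas[i] = abs(np.sum(x * yn - xn * y)) / 2.0   # shoelace area, Eq. (6)
    return vor, areas
```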
## III Results
Using the methods described above we reconstruct three snapshots of the same
foam as it coarsens. They are taken $\\{6,10,18\\}$ hours after its initial
preparation and have $N=\\{2767,1842,1099\\}$ bubbles, respectively. The total
number of bubbles decreases as the foam ages because foam coarsening involves
small bubbles shrinking and large bubbles growing due to differences in
Laplace pressure until eventually the small bubbles disappear. There is not
only an overall decrease in the total number of bubbles but also an overall
increase in both the mean bubble area $\overline{a}$ and the $\phi$-weighted
average bubble area $\left<a\right>$. Individual foam data sets are referred
to by their value of $\left<a\right>=\\{10,15,25\\}~{}\text{mm}^{2}$; for
polydisperse systems like the bubbles and Voronoi cells the $\left<a\right>$
is calculated by $\left<a\right>=\sum{{a_{i}}^{2}}/\sum{{a_{i}}}$ where the
$a_{i}$ are the bubble or cell areas and the sum is over all particles. For
the simulated data the Voronoi constructions are made in a square box bounded
by $\left(0,1\right)$ with $N\geq 4.97\times 10^{5}$ cells each. Only one
Voronoi construction is generated for each type of point pattern.
### III.1 Local Properties
Though it’s not our main interest, for orientation and completeness we start
by investigating several usual local structural features, beginning with the
distribution of bubble areas. Fig. 3(a) shows the cumulative distribution
function for the bubble areas normalized by the mean bubble area for the three
snapshots of the coarsening foam. Amazingly, the data collapse regardless of
the foam age. This is in fact a phenomena for foams aging referred to as a
self-similar scaling state where distributions of local quantities are
unchanged under proper normalization regardless of the age of the foam.
Statistically, older foams are the same as taking a smaller subsection of a
younger foam. Self-similarity is well documented and has been observed in
experiment Stavans (1990, 1993b); Stavans and Glazier (1989); de Icaza _et
al._ (1994) and simulation Glazier _et al._ (1990); Herdtle and Aref (1992);
Segel _et al._ (1993); Rutenberg and McCurdy (2006); Neubert and
Schreckenberg (1997). It is once again found here and the data are fit well to
a compressed exponential consistent with previous work Glazier and Weaire
(1992); Stavans (1993a); Roth _et al._ (2013).
Figure 3: Cumulative distribution function data for (a) bubble areas for foam
as it coarsens and (b) areas of Voronoi cells constructed around point
patterns as labeled. In part (a) the bubble areas collapse after normalizing
by the mean area $\overline{a}$. In part (b) all the foam data are collected
into one distribution and plotted as the black curve. In both parts the red
dashed line shows an exponential area distribution and the gold dotted curve
is a compressed exponential.
In addition to providing insight into the local structure of the foam, the
collapse of these distributions serves two more purposes. First, it shows our
methods for calculating the bubble areas are correct, which is very important
for our hyperuniformity analysis. Second, because the foam is in a scaling
state, the data from the three images can be collected together to make one
distribution with better statistics. This is done for the normalized bubble
areas and the result is plotted in Fig. 3(b) as a black curve. Comparing the
cumulative distributions of cell areas for the Voronoi constructions to the
bubble area distribution shows the latter is the widest, demonstrating that
the local structure of the foam is the most disordered. The distributions of
cell areas for the Voronoi constructions show that the cells generated around
the Einstein and Halton patterns have the most local order, with nearly
identical distributions, while the cells generated around the Poisson patterns
have a degree of local order between Einstein/Halton and the foam. The local
disorder is thus quantified by the width of the area distributions; one way to
do this is by taking the mean squared cell area
$\overline{a^{2}}=\sum{{a_{i}}^{2}}/N$ and dividing it by the mean cell area
squared $\overline{a}^{2}=\left(\sum{{a_{i}}}/N\right)^{2}$. We find
$\overline{a^{2}}/\overline{a}^{2}$ for each distribution in Fig. 3(b) and
present the values in Table 1.
Figure 4: Distributions of the elongation shape parameter for the various
cellular patterns as labeled. The foam data is collected from the combined
data from the three different times during the aging process. The
distributions have statistical uncertainties described in Ref. Roth _et al._
(2013) but the error bars are smaller than the symbol.
The area is made dimensionless by dividing out the average area at time $t$,
but we can also quantify other dimensionless shape parameters. One such
parameter is the “elongation” $E=P/\sqrt{4\pi A}$, the ratio of the bubble or
cell perimeter to the square root of its area, defined such that $E=1$ for
circles. Ref. Roth _et al._ (2013) finds elongation to be one of two
dimensionless shape parameters important in the physics of foam coarsening,
the other being “circularity”; circularity is defined to be 0 for polygons, so
we cannot compare that quantity between the bubbles in a foam and the cells in
a Voronoi construction. Calculating the elongation for all bubbles and
collecting the values into one distribution, we compare the data to the
elongation of the Voronoi cells. The distributions are shown in Fig. 4, with
an inset zooming in on the small-$E$ data. The distribution for the foam is
not the widest, as it was for the areas; instead it is narrower than that of
the Poisson pattern but extends further than those of both Halton and
Einstein. We show the average elongation and the average squared elongation in
Table 1, and in both cases these values are ordered from low to high as foam,
Einstein, Halton, and Poisson. Foam has the smallest average values because
its data plunge away from $E=1$ the fastest, as seen clearly in the inset of
Fig. 4.
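The elongation is computed directly from the measured perimeter and area of
each cell; a one-line check (our own) confirms the normalization:

```python
import numpy as np

def elongation(perimeter, area):
    """Elongation E = P / sqrt(4*pi*A); equals 1 for a circle."""
    return perimeter / np.sqrt(4.0 * np.pi * area)

print(elongation(2 * np.pi, np.pi))  # unit circle: 1.0
print(elongation(4.0, 1.0))          # unit square: ~1.128
```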
Figure 5: Distributions of the elongation shape parameter for different
packings as labeled. The actual distribution is plotted as a black line and
data for $n$-sided cells are colored according to the number of sides. The
foam data is collected from the combined data from the three different times
during the aging process.
It is generally true for foams that bubbles with fewer sides have smaller
areas and, similarly, we can ask how the number of sides affects the
elongation. This is plotted in Fig. 5, where each panel shows the elongation
distribution for the entire packing along with the individual contributions
for $n$-sided cells. Fig. 5(a) shows the data for the foam, where data with
small $E$ have large $n$ and bubbles with fewer sides have larger $E$ values.
Interestingly, the foam has regions with little to no overlap between
different $n$-sided bubbles; this is exhibited by the peaks of the individual
$n$-sided distributions nearly matching the entire distribution, especially
for bubbles with fewer than 7 sides. For the Voronoi packings these regions of
little overlap do not exist and the peaks of the distributions are not
separated. Only the foams have well-separated elongation distributions for
different $n$-sided cells.
Pattern | $\overline{a^{2}}/\overline{a}^{2}$ | $\overline{n}$ | $\sigma_{n}$ | $\overline{E}$ | $\overline{E^{2}}$ | $\overline{s^{2}}/\overline{s}^{2}$
---|---|---|---|---|---|---
Einstein | 1.05 $\pm$ 0.002 | 5.99 $\pm$ 0.01 | 0.993 $\pm$ 0.002 | 1.127 $\pm$ 0.0001 | 1.271 $\pm$ 0.002 | 1.29 $\pm$ 0.03
Halton | 1.06 $\pm$ 0.002 | 5.99 $\pm$ 0.01 | 1.092 $\pm$ 0.002 | 1.142 $\pm$ 0.0001 | 1.308 $\pm$ 0.002 | 1.33 $\pm$ 0.04
Poisson | 1.28 $\pm$ 0.006 | 5.99 $\pm$ 0.01 | 1.332 $\pm$ 0.002 | 1.181 $\pm$ 0.0002 | 1.403 $\pm$ 0.003 | 1.42 $\pm$ 0.04
Foam | 1.82 $\pm$ 0.07 | 5.98 $\pm$ 0.08 | 1.17 $\pm$ 0.02 | 1.10 $\pm$ 0.02 | 1.22 $\pm$ 0.03 | 1.19 $\pm$ 0.01
Table 1: Quantities characterizing distributions for the cellular patterns.
The columns are the cellular pattern type, the average squared area divided by
the average area squared, the average number of sides of a cell, the standard
deviation for the side number distribution, the average elongation, the
average squared elongation, and the average squared edge length divided by the
average edge length squared. Data for all bubbles at the three times are
collected into one distribution because the foam is in a self-similar state.
Figure 6: (a) Side-number distributions, and (b) area-weighted side-number
distributions, for the various cellular patterns as labeled. The foam data is
collected from the combined data from the three different times during the
aging process. The distributions have statistical uncertainties described in
Ref. Roth _et al._ (2013) but the error bars are smaller than the symbol.
Other standard distributions we study include the side-number distribution
$p(n)$, which gives the probability of finding a bubble or cell with $n$
sides, and the area-weighted side-number distribution $F(n)$, which details
how much area is covered by $n$-sided bubbles or cells. The distributions
$p\left(n\right)$ and $F\left(n\right)$ are plotted in Fig. 6 parts (a) and
(b), respectively. The $p\left(n\right)$ distributions are remarkably similar,
which is expected given that both the bubbles and Voronoi cells are convex
polygons whose vertices are junctions of three edges; this microstructure also
forces the average number of sides per cell to be $\overline{n}=6$, and Table
1 shows this is almost exactly achieved for all packings. Part (b) shows the
$F\left(n\right)$ distribution for the foam is skewed more towards cells with
large $n$ when compared to the other distributions, particularly for bubbles
with $n=\left[7,8\right]$ sides. This is understood because bubbles with a
larger number of sides also have larger areas. These distributions allow us to
understand the local structure of the cellular patterns; next we investigate
whether they provide any insight into the long range structure.
### III.2 Spatial Fluctuations of Number Density
This and the two remaining subsections contain our main results, which concern
the nature of long-range fluctuations in space-filling cellular structures. We
begin with number density fluctuations for all of the point patterns. For this
analysis the points are all given an equal weight $w=1$ and distances are
normalized by the square root of the $\phi$-weighted average area
$\sqrt{\left<a\right>}$.
The two ways we diagnose hyperuniformity in our point patterns are with the
small-$q$ scaling of the structure factor $S(q)$ and with the large-$D$
behavior of the hyperuniformity disorder length. In Figs. 7(a,b) the structure
factor and hyperuniformity disorder length are plotted, respectively. For both
quantities it is clear that data for the Poisson point patterns follow the
totally random expectation. This is important for two reasons. First, it
confirms our analysis tools are working correctly for both the structure
factor and the hyperuniformity disorder length, because the points in the
Poisson pattern are totally uncorrelated and should follow Poisson statistics,
which they do. Second, the values of both $S(q)$ and $h(D)$ are made
meaningful because order is determined by how much smaller the data are than
the upper bound set by the Poisson limit.
Poisson patterns are an example of completely random systems. Conversely,
Einstein patterns are examples of systems we know are hyperuniform, and
previous work shows their uniformity is linked to $\delta$, the RMS
displacement of the particles away from their lattice sites Chieco _et al._
(2017). Hyperuniformity in the Einstein patterns is evident from the
asymptotic behavior $S(q)\sim q^{2}$ for small $q$ and
$h(D)=h_{e}=0.15\sqrt{\left<a\right>}$ for large $D$. Recall these Einstein
patterns have $\delta=0.26b$ where $b$ is the lattice spacing, so
$h_{e}=0.15\sqrt{\left<a\right>}\approx 0.55\delta$; this is consistent with
previous work, as is the decay exponent for the structure factor Chieco _et
al._ (2017, 2018).
We had no a priori knowledge about the long range order in the Halton pattern,
but the data show it too is hyperuniform. Its data behave nearly identically
to the Einstein patterns, and this similarity is no accident. Recall from Sec.
II.2 that the kick size $\delta=0.26b$ was chosen such that the measured
number variance for the Einstein patterns and Halton patterns are the same. In
actuality, the value of $\delta$ was determined by varying it until the value
of $h_{e}$ for the Einstein pattern matched the value of $h_{e}$ of the Halton
pattern. What is rather remarkable here is that we matched the long range
disorder of the point patterns and, from that, the distributions of Voronoi
cell areas and topology came out nearly identical. We have a direct
observation, at least for our example systems, of how the microscopics
(locations of points around which Voronoi cells are constructed) affect the
macroscopics (distributions of cell areas and topology).
For the foams, each snapshot is analyzed separately, and in Fig. 7 we plot the
individual data sets for $\left<a\right>=\\{10,15,25\\}~{}\text{mm}^{2}$ as
the curves that go from dark to light gray. The structure factor for the
bubble centroids is interpreted as follows: at large $q$ the bubbles appear
random; there is short range order at approximately the average bubble
separation, indicated by a decay away from $S\left(q\right)=1$; and for small
$q$ there are Poissonian fluctuations, indicated by a leveling off to a
constant. This behavior is mirrored in the hyperuniformity disorder length
spectra. The data initially follow the separated-particle limit for small $D$
because the bubble centroids have a minimum separation set by the average
bubble size. The $h(D)$ spectra follow this expectation until
$D\approx\sqrt{\left<a\right>}$, where the data reach a local minimum but very
quickly rise with $h(D)\sim D$, indicating Poissonian number density
fluctuations. Rather remarkably, the data collapse for the three ages of foam
in both real and $q$-space; we interpret this to mean the arrangement of the
bubble centroids, while uncorrelated at long lengths, does not change on
average as the foam coarsens. This is likely an additional signature of the
self-similar state of foams, but now one that is observed at long distances.
The foam point patterns are unique among the types of point patterns we study because, in Fig. 7(a-b), they are the only pattern where the points lie at the centroids of their cells. Thus far we have only analyzed our three types of point patterns. However, we used those patterns to seed Voronoi constructions, so each point exists within a Voronoi cell; to make more direct comparisons to the foam data we construct three new point patterns from the centroids of the Voronoi cells. The exact same foam data are plotted in Fig. 7(c),(d) as in parts (a),(b), and we compare them to the data for these “centroid patterns”; for simplicity we will continue to refer to the data of these centroid patterns by the Voronoi construction seeding patterns. This naming convention is justified in Fig. 7(c) because the small-$q$ data for the centroid patterns are the same as the data for the initial point patterns. The only difference in the $S(q)$ data for the centroid patterns is an initial dip at intermediate $q$, similar to the bubble data, from a length scale imposed by the average size of the Voronoi cells. However, the fact that the data are nearly identical at small $q$ indicates the centroid patterns retain some memory of the initial patterns used to seed the Voronoi cells.
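A minimal sketch of this one-step construction with scipy; cells that extend to infinity at the boundary are simply left untouched here, an assumption appropriate only deep in the bulk.

```python
import numpy as np
from scipy.spatial import Voronoi

def polygon_centroid(verts):
    """Shoelace centroid; vertices are sorted by angle first, which is safe
    because Voronoi cells are convex."""
    c0 = verts.mean(axis=0)
    v = verts[np.argsort(np.arctan2(verts[:, 1] - c0[1], verts[:, 0] - c0[0]))]
    x, y = v[:, 0], v[:, 1]
    xn, yn = np.roll(x, -1), np.roll(y, -1)
    cross = x * yn - xn * y
    a = cross.sum() / 2.0
    return np.array([((x + xn) * cross).sum(), ((y + yn) * cross).sum()]) / (6.0 * a)

def centroid_pattern(points):
    """One Lloyd-like step: move each point with a finite Voronoi cell to the
    centroid of that cell."""
    vor = Voronoi(points)
    out = []
    for p, ridx in zip(points, vor.point_region):
        region = vor.regions[ridx]
        if -1 in region or len(region) == 0:     # unbounded boundary cell: keep point
            out.append(p)
        else:
            out.append(polygon_centroid(vor.vertices[region]))
    return np.asarray(out)
```

Iterating this map is Lloyd's algorithm; the annealing of Klatt _et al._ (2019) discussed below corresponds to many such updates, whereas only a single step is taken here.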
There is a similar effect for the hyperuniformity disorder lengths shown in Fig. 7(d). Here the induced short range order is exhibited because the $h(D)$ spectra are qualitatively the same as the separated particle limit; the curves do not match exactly because the expectation was plotted using the number density of the foam data. At large $D$ the Poisson centroid data return to the random expectation. This “memory” effect arises because the relative displacements from the seeding points to the centroids of their Voronoi cells create a short range repulsion set by the average size of the cells, and the points cannot overlap. However, points at long distances are still totally uncorrelated. Therefore fluctuations occur throughout the entire window for window sizes that are large compared to the distance between the seeding points and the centroids of their Voronoi cells.
No such memory exists for the hyperuniform patterns because the average particle displacement is relatively large compared to $h_{e}$. For the Einstein patterns, points have an RMS displacement of about $0.26$ times the lattice spacing, so the Voronoi cell constructed around a point almost certainly contains its initial lattice site. Creating the pattern of Voronoi cell centroids simply constructs a different Einstein pattern with a different kick size; the centroids have $h_{e}=0.11\sqrt{\left<a\right>}$, and extracting a kick size from $h_{e}=0.55\delta$ gives a new, smaller displacement of $\delta=0.2b$. This tells us that moving the points to the centroids of their Voronoi cells simply moves the seeding points closer, on average, to their original lattice sites. Interestingly, the Halton centroid patterns continue to have the same long range number density fluctuations and the same value of $h_{e}$ as the Einstein centroid patterns, even after every point in both patterns is individually displaced. The fact that these centroid patterns are more ordered than the point patterns used to generate them, yet remain statistically identical to one another, is only seen by comparing the spectra of hyperuniformity disorder lengths; the structure factors have small-$q$ data that are the same for both the point and centroid patterns. For hyperuniform systems with small values of $h_{e}$, meaningful physical insight into the spatial distribution of the points is gained even from very small perturbations to their initial positions.
Figure 7: Structure factor and associated real-space hyperuniformity disorder length spectra for various point and centroid patterns as labeled. The foam data, for systems with $\left<a\right>=\{10,15,25\}~\text{mm}^{2}$ as the curves go from dark to light gray, are the same in parts (a)/(c) and (b)/(d) and have long range Poissonian fluctuations. The data for the simulated point patterns are as follows: the Poisson data lie along the random expectation (red dashed line); the Halton/Einstein data are hyperuniform, as indicated by $h(D)=h_{e}$ (magenta dot-dashed line) and by the power law decay $S(q)\sim q^{2}$ as $q\to 0^{+}$ (purple double dotted-dashed line). The centroid data are the same as the point data at long distances, but there is an induced short range order at short distances; this additional order extends to long distances for the Einstein/Halton centroids, as indicated by a smaller value of $h_{e}$.
Finding number density fluctuations in point and centroid patterns opens up interesting new avenues of research, in particular for scaling state foams and for the general similarity between the Einstein and Halton patterns. However, in the context of hyperuniformity the proper metrics to study are the spectral density and the fluctuations in area fraction, for any pattern where particles have an area. This is also the analysis that is perhaps most directly informed by the local distributions of bubble and cell sizes.
### III.3 Spatial Fluctuations of Area Fraction
In order to measure the spectral density and the hyperuniformity disorder length with regard to area fraction fluctuations, each point is given a weight equal to the area of the cell it occupies, in accordance with the central point representation. We start with the real space analysis. Because the patterns are space filling, every observation window is entirely covered in its interior, and only the cells along the boundary produce deviations of the packing fraction from $\phi=1$; this is essentially the definition of a hyperuniform system, so we might expect all $\phi=1$ configurations with a reasonable size distribution of cells to be trivially hyperuniform. This is borne out in Fig. 8(b), where we convert the real space area fraction variance to spectra of hyperuniformity disorder lengths for the various cellular patterns, and all of the spectra have $h(D)=h_{e}$ for large measuring windows.
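A minimal sketch, under one common convention, of the weighted analogue of the structure factor calculation above, where each point carries its cell area as a weight; the normalization by the system area is illustrative and not necessarily the one used in Fig. 8.

```python
import numpy as np

def spectral_density(points, areas, box=1.0, nq=32):
    """Radially ordered chi(q) = |sum_j a_j exp(-i q.r_j)|^2 / box^2 for the
    central point representation (normalization is an illustrative choice)."""
    qs = 2.0 * np.pi * np.arange(1, nq + 1) / box
    qx, qy = np.meshgrid(qs, qs)
    qvec = np.stack([qx.ravel(), qy.ravel()], axis=1)
    chi = np.array([np.abs((areas * np.exp(-1j * points @ q)).sum())**2
                    for q in qvec]) / box**2
    qmag = np.linalg.norm(qvec, axis=1)
    order = np.argsort(qmag)
    return qmag[order], chi[order]
```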
Unique to the spectra of hyperuniformity disorder lengths is that they not only determine whether a system is hyperuniform but also provide a meaningful length scale for the disorder. The value of $h_{e}$ indicates the distance from an observation window boundary within which particles set the area fraction fluctuations; smaller $h_{e}$ means more order, and we find that the Poisson, Halton/Einstein, and foam patterns are the least to most ordered, respectively. This lines up with our basic intuition for the Voronoi patterns: the Poisson point pattern is the most disordered and has the largest $h_{e}$; the unweighted data for the Einstein and Halton point patterns have matching long range order, and so too do the area-weighted data. This one to one correspondence between the unweighted and weighted point pattern data breaks down for the bubble centroids. The weighted bubble centroid data have area fraction fluctuations that are more suppressed at long lengths than the fluctuations for any of the area-weighted Voronoi point patterns. This is made even more surprising by the fact that the bubbles are locally the most disordered of the various cellular patterns. Nevertheless, the actual spatial arrangement of the bubbles is the most ordered, as dictated by the foams having the smallest value of $h_{e}$.
This arrangement of the bubble locations corresponds to the points being at the centroid of each cell, which is not the case for the area-weighted Voronoi point patterns. When HUDLS is performed on the area-weighted Voronoi centroid patterns, the data in Fig. 8(d) show the values of $h_{e}$ nearly collapse to the same value as $h_{e}$ for the foam centroids. Recent studies have found that other metrics also collapse as Voronoi constructions are annealed with thousands of updates moving the point inside each Voronoi cell to the centroid of the cell Klatt _et al._ (2019); here $h(D)$ shows a collapse after just one step, and it would be interesting to see whether repeated annealing collapses the data even further towards the $h_{e}$ value of the foams.
We compare the $h(D)$ data to the spectral density in Figs. 8(a,c) and observe similar trends. For all curves nominal hyperuniformity is observed because the spectral density data decay like $\chi(q)\sim q^{\epsilon}$ with $\epsilon>1$. In part (a), the data for the weighted Voronoi points for the Einstein and Halton patterns initially decay with an exponent close to $\epsilon=4$ but have a final asymptotic scaling closer to $\epsilon=3$. The crossover from the initial to the final scaling finishes with less than one decade of data left, so the actual value of $\epsilon$ is unreliable. No such crossover exists for the weighted Poisson points, and for nearly all values of $q$ where the data decay they are fit well by $\chi(q)\sim q^{3.5}$. From the definition of $\chi(q)$ it follows that the weighted Halton and Einstein point patterns have more uniformity than the weighted Poisson points, because the smaller the value of the spectral density the greater the order. The data for the weighted Voronoi centroid patterns show a total collapse of the values of $\chi(q)$. Though the data do not fit well to a power law over more than one decade, the final decay has $\epsilon\approx 3$.
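For completeness, the exponents quoted here come from straight-line fits on log-log axes; a minimal sketch is below, and the fitting window endpoints are the analyst's choice, which is precisely why less than a decade of clean scaling makes $\epsilon$ unreliable.

```python
import numpy as np

def decay_exponent(q, chi, q_lo, q_hi):
    """Least-squares slope of log(chi) vs log(q) over [q_lo, q_hi], i.e. the
    estimate of epsilon in chi(q) ~ q^epsilon over that window."""
    sel = (q >= q_lo) & (q <= q_hi)
    return np.polyfit(np.log(q[sel]), np.log(chi[sel]), 1)[0]
```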
Good estimates for these decay exponents are required not only because they act as a proxy for order but also because recent theoretical work provides an expectation for their value. In Ref. Kim and Torquato (2019) the authors find that if a fundamental cubic cell with periodic boundary conditions is tessellated into $N$ disjoint cells $\{C_{1},\ldots,C_{j},C_{j+1},\ldots,C_{N}\}$, then the tessellations are all hyperuniform under certain conditions. For two dimensional Voronoi constructions the conditions are as follows: every $C_{j}$ has a maximum side length much smaller than the total side length of the system; each $C_{j}$ is represented by a point or hard particle entirely within the Voronoi cell; and each point or particle within $C_{j}$ has an assigned area $\psi|C_{j}|$, where $0<\psi<1$ and $|C_{j}|$ is the area of the $j^{th}$ Voronoi cell. Spectral density analysis of these tessellations shows they are all hyperuniform, with small wave vector scaling like $\chi(q)\sim q^{2}$ if the cells are represented by points away from their centroids or $\chi(q)\sim q^{4}$ if the cells are represented by points at their centroids. None of our Voronoi patterns, whether the area weighted Voronoi points or the area weighted Voronoi centroids, have spectral densities with exponents that match the expectation of Ref. Kim and Torquato (2019). Instead the area weighted points all have $\chi(q)$ decay faster than expected and the area weighted centroids all have $\chi(q)$ decay slower than expected. The discrepancies may arise for two reasons: the first is that we do not use periodic boundary conditions for our Voronoi constructions; the second is that we use $\psi=1$ in our analysis. These conditions may affect the decay exponents for the Voronoi constructions, but remarkably the analysis of the foam patterns shows excellent agreement with the expectations.
Only one decade of $\chi(q)$ decay is available for the foams, and the data only reach a relatively large $q$ compared to the Voronoi data. However, where the foam data decay they follow $\epsilon=4.2$. This is very nearly the value of $\epsilon=4$ expected for cellular patterns where the area weighting is assigned to the centroid of the cell from Ref. Kim and Torquato (2019); this is the first experimental work that we are aware of to confirm this expectation. Furthermore, it is interesting to note that only the foams, which are naturally occurring, actually attain this value, while the simulated systems, which are larger and more locally ordered, do not match the expectation. Additionally, the foams exhibit the fastest decay of all the weighted point or centroid patterns, indicating the highest long range order. Improving order in the Voronoi constructions likely involves repeatedly moving the points to the centroids of their Voronoi cells and then making new Voronoi constructions. Upon many repetitions of this process the data will become more ordered, and the exponent should fall more in line with the expectation. However, in 2d the points will then start to crystallize. It would be interesting to devise a method to create non-crystalline patterns that can evolve such that their data match the foam data.
In this section the results have been discussed through the lens of hyperuniformity. We make this choice because the space filling materials show suppressed density fluctuations in both real and Fourier space, with the same asymptotic behavior that is normally associated with hyperuniform materials. However, it should be noted that while this behavior is nominally hyperuniform, it may not truly be representative of the phenomenon. That is, the patterns made by the cellular points and centroids are likely not endowed with the special properties sometimes seen in hyperuniform materials, such as complete photonic bandgaps Florescu _et al._ (2009); Man _et al._ (2013). Instead this behavior is due to the fact that $\phi=1$, for the reasons described earlier in this section for the real space measurements or in Kim and Torquato (2019) for the Fourier space measurements. Whether these systems are actually considered hyperuniform or not, the power of HUDLS as an analysis tool is still evident, because it is able to identify subtle differences in the underlying structure of the patterns generated within and by the cellular structures. If the cells are not truly hyperuniform because they are space filling, then it is worth investigating whether hyperuniformity presents itself in the other, much less dense phase of these patterns.
Figure 8: Spectral density and associated hyperuniformity disorder length spectra for various cellular patterns as labeled. The foam data are for systems with $\left<a\right>=\{10,15,25\}~\text{mm}^{2}$ as the curves go from dark to light gray, and are the same in parts (a)/(c) and (b)/(d). In part (b), the spectra of $h(D)$ for all patterns become constant at intermediate and long lengths. The values of $h_{e}$ differ depending on the disorder, and $h_{e}$ for the foam centroids is marked in both (b),(d) as a magenta dashed-dotted line. The spectral density decays like $q^{4.2}$ for the foam data for all $q\sqrt{\left<a\right>}/\left(2\pi\right)<1$; for the Voronoi point and centroid data the $\chi\left(q\right)$ decay more slowly than the foam data but still show signatures of hyperuniformity. Only the foam data have a decay exponent near the $\epsilon=4$ expectation determined in Kim and Torquato (2019). In part (d), the $h(D)$ nearly collapse, following the separated particle limit (gold dotted curve) at small $D$, and have very similar values of $h_{e}$.
### III.4 Spatial Fluctuations of Cell Boundaries
Until now all of the analysis has focused on the locations and areas of the bubbles and cells. For the foam this analyzes the location of the gas phase of the material. However, foam is a two-phase medium consisting of both gas and liquid, and the liquid is contained in the surface Plateau borders, vertices, and films of the foam. For the purposes of this study all of the liquid containing elements of the foam are referred to as the “film network”. This film network is also what constitutes the structure of the foam and makes the faces that separate bubbles; similarly, the walls of the Voronoi constructions allow us to differentiate between cells. For simplicity we will use the term “edges” to refer to both the foam films and the Voronoi cell walls.
To quantify the spectrum of spatial fluctuations in the distribution of the liquid phase in foams, and to test for hyperuniformity, we need to define both the locations of the edges and their lengths. The foam films are arcs of circles that connect two vertices, and the equation of the circle that defines each film is determined in the reconstructions; for a film with arc length $s=r\theta$, its location is defined as the point on the arc that bisects the angle $\theta$. The Voronoi cell walls connect two vertices at $(x_{i},y_{i})$ and $(x_{j},y_{j})$, with $s$ equal to the distance between the vertices and a location defined by the midpoint. The term area is used for the edges only for simplicity and is calculated as $a=ts$. It is clear that the length $s$ for both the films and walls is important, but the thickness $t$ is arbitrary. Films in a foam do in fact have some thickness, but its value is constant. Additionally, the decoration theorem instructs that all of the liquid in a foam in the dry limit can be concentrated at the vertices with no effect on its structure Bolton and Weaire (1991). The Voronoi cell walls should not have a thickness at all, as they are lines. In both cases, assigning a constant value of $t$ to all the edges means that $t$ drops out completely from Eq. (1), and while it is less obvious mathematically, the same holds for the $h(D)$ calculations. We performed auxiliary measurements varying $t$, and it does not affect our results; here we set $t$ equal to the film thickness, $t=\ell$, so that it has appropriate units.
The only important feature, then, is the edge length $s$. We measure values of $s$ for both the foam and Voronoi edges and plot them normalized by the mean edge length $\overline{s}$ in Fig. 9. Immediately evident is that the foams have the narrowest distribution of edge lengths; this is juxtaposed with the fact that they have the broadest distribution of cell areas. This makes sense physically because Plateau’s laws and coarsening both act to reduce the surface tension energy of the film network, whereas no surface tension forces or size effects constrain the lengths of the Voronoi cell walls. The Poisson distribution is the widest, which should be expected, as there are large voids in these patterns, which lead to long edges. The Einstein and Halton patterns have very similar distributions, showing another macroscopic measure influenced by the microscopic point pattern. On a log-linear scale like Fig. 9(a) it is clear that the edges are relatively longer for the Voronoi networks than for the foams. The edge length distributions are characterized similarly to the area distributions; here we use the mean squared edge length $\overline{s^{2}}=\sum{{s_{i}}^{2}}/N_{s}$ divided by the mean edge length squared $\overline{s}^{2}=\left(\sum{{s_{i}}}/N_{s}\right)^{2}$, where $N_{s}$ is the total number of edges, and the results are displayed in Table 1.
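A minimal sketch of how the Voronoi wall lengths, midpoints, and the moment ratio reported in Table 1 can be extracted with scipy; only finite ridges are kept, an assumption appropriate away from the system boundary.

```python
import numpy as np
from scipy.spatial import Voronoi

def voronoi_edges(points):
    """Lengths s and midpoint locations of all finite Voronoi cell walls."""
    vor = Voronoi(points)
    lengths, midpoints = [], []
    for i, j in vor.ridge_vertices:
        if i == -1 or j == -1:                  # ridge extends to infinity
            continue
        vi, vj = vor.vertices[i], vor.vertices[j]
        lengths.append(np.linalg.norm(vj - vi))
        midpoints.append(0.5 * (vi + vj))
    return np.array(lengths), np.array(midpoints)

rng = np.random.default_rng(1)
s, mid = voronoi_edges(rng.uniform(0.0, 1.0, size=(5000, 2)))
print((s**2).mean() / s.mean()**2)   # moment ratio, as in Table 1
```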
Not seen in Fig. 9(a) is the fascinating behavior at small $s$. This is evident in Fig. 9(b), which shows a log-log plot of just the CDF; these distributions have a power law scaling $\mathcal{N}_{CDF}\sim s/\overline{s}$ for all three types of Voronoi constructions. No such scaling exists for the foam films. This power law scaling of the Voronoi edge lengths is rather remarkable, and it is a result of the vertices of these Voronoi constructions being overdispersed compared to a Poisson pattern.
Figure 9: The cumulative distribution function of edge lengths normalized by the mean edge length $\overline{s}$ for the types of systems as labeled. The edges for the foams (Voronoi cells) are the films (walls) that connect two neighboring vertices on a bubble (Voronoi cell), and they are circular arcs (straight line segments). In part (b) we show only the CDF, on a log-log scale, because the power law scaling $\mathcal{N}_{CDF}\sim s/\overline{s}$ for the very small Voronoi wall lengths is lost when the CDF is subtracted from 1.
We know from the previous section that local distributions do not always predict long range uniformity. In Fig. 10(a) and (b) we plot the spectral density and the hyperuniformity disorder length with regard to area fraction fluctuations. To normalize the lengths in this figure we use the length scale $\left<s\right>=\sum{s^{2}}/\sum{s}$; this is akin to our $\phi$-weighted average area.
The spectral densities for the length-weighted Voronoi edge patterns show that none of them are hyperuniform, because each spectrum has some minimum before turning up towards the random limit. In real space, the $h(D)$ for the length-weighted patterns confirm the Poisson edges are random because $h(D)\sim D$ at large $D$. However, there is ambiguity in the spectra of hyperuniformity disorder lengths for the Einstein and Halton edges. For these latter two cases there is less than a decade of data to which a power law could be fit, while the spectral density shows without a doubt that these systems are not hyperuniform. For the yes or no question of hyperuniformity in this case we defer to $\chi(q)$, and note that $h(D)$ does not equal a constant for either of these weighted Voronoi edge spectra. The values of $h(D)$ for the Halton data do lie below those for the Einstein data, a rare instance, though not unique (see Fig. 8(a)), where the two data sets are not practically the same. It is possible that the local structure informs this difference in uniformity, and Fig. 9 shows why. The data show the Halton edges appear to have some minimum cutoff length scale, which is not the case for the Einstein edges. These very short edges in the Einstein patterns could form denser clusters of edges, which in turn makes them less uniform with regard to area fraction fluctuations; this merits further study, and if true it would be quite remarkable that such a small fraction of the edges can destroy overall uniformity.
Unlike for the Voronoi edges, the spectral density for the length-weighted edges of the foam does not exhibit any minimum. The data are noisy, and it is unclear whether they are decaying like a power law (hyperuniform) or leveling off to a constant (Poissonian). A clearer signal comes from the hyperuniformity disorder length. The foam data are well fit by a power law over one decade of window sizes; the exponent $\epsilon=0.3$ shows long range uniformity in both real and Fourier space and follows the expectation that $h(D)\sim D^{1-\epsilon}$ and $\chi(q)\sim q^{\epsilon}$. This is a weaker variant of hyperuniformity than if $h(D)$ were constant, but fluctuations are nonetheless suppressed at long length scales. The data for the weighted foam edges are not the most uniform in an absolute sense, because for this to be true the spectra of $h(D)$ and $\chi(q)$ would have to have the smallest values, as they do for the weighted point data. However, they are the only edge pattern with a legitimate signature of hyperuniformity.
Figure 10: Spectral density and associated hyperuniformity disorder length spectra for weighted edge patterns as labeled. In both parts the Poisson patterns show long range random fluctuations but do not lie exactly on the random expectation (red dashed line). The Einstein and Halton patterns are not hyperuniform; this is more evident in the spectral density data, where the data have clearly defined minima, but part (b) shows that neither the Einstein nor the Halton data have $h(D)=h_{e}$ at large window sizes, as is evident by comparing the data to a fiduciary constant (double-dot dashed line). The power law growth (magenta dot-dashed line) of $h(D)\sim D^{1-\epsilon}$ and $\chi(q)\sim q^{\epsilon}$ with $\epsilon=0.3$ is consistent with a class of hyperuniform materials.
## IV Conclusion
We have presented the use of a recently defined emergent length scale, the hyperuniformity disorder length, as a method to describe the structure of various cellular patterns at all length scales. We have shown that the usual local descriptors of cellular patterns, like the cell area or side number distributions, do little to differentiate the underlying disorder in the cellular structure. Similarly, the asymptotic scaling of the spectral density fails to differentiate order because the exponents are not always obvious, even for very large systems. However, one important result from the Fourier space analysis is that the foams have a spectral density that decays at all small $q$ like $q^{4.2}$. This is an unambiguous result, and the decay is faster for the foam data than for the Voronoi point and centroid data. Furthermore, the foams, which are the only naturally occurring patterns we study, are also the only structures whose spectral density decays with an exponent near the $\epsilon=4$ expectation set forth in Ref. Kim and Torquato (2019). To more clearly compare order between the packings we use hyperuniformity disorder length spectroscopy; the spectra of $h(D)$ provide a physically significant meaning to both number and area fraction fluctuations and have helped to discover big differences in uniformity arising from subtle differences in structure.
Some of these findings are most apparent when comparing the data of the Einstein and Halton point patterns. We saw that by tuning the underlying microscopic disorder in the Einstein pattern to match the disorder in the Halton pattern, we can construct nearly identical macroscopic patterns in terms of particle area and topology. Both patterns are hyperuniform with the same values of $h_{e}$. Additionally, while a massive amount of particle rearrangement occurs when we shift these two point patterns to their centroid patterns, both the Einstein and Halton patterns show an overall increase in order that is the same for both. This is only understood because the values of $h_{e}$ dropped, but to the same value, for both the Einstein and Halton patterns. Being able to differentiate such small structural differences after a lot of particle motion may be useful in studying the reversible to irreversible transition; in those experiments particles can be tracked near the critical amplitude for the transition, and the small differences in structure may be evident using HUDLS.
Turning to foams, we found them to be the most ordered of the area-weighted point patterns even though they are the most locally disordered. This is likely due to their points being located at the centroids of the bubbles, as opposed to the point patterns weighted by the Voronoi cell areas, which are located at the points that seeded the Voronoi construction. When we compare all the centroid patterns, the values of $h_{e}$ collapse nearly to the same value as the foams. This $h_{e}$ also happens to be the same value that soft disks above jamming have before an onset of Poissonian fluctuations Chieco _et al._ (2018). This value is potentially some universal minimum $h_{e}$ for disordered patterns; it is possible that if we continue to update the centroid patterns the $h_{e}$ will either converge to the value for the foams, or the systems will crystallize and the $h(D)$ will show increasing oscillations. It is an open question whether 2d configurations can be constructed by updating Voronoi patterns while avoiding crystallization. Foams are also the only system we study whose edges have any signature of hyperuniformity. This hyperuniformity is defined by the scaling exponent $\epsilon=0.3$, with $h(D)\sim D^{1-\epsilon}$ and $\chi(q)\sim q^{\epsilon}$. Also for foams, we have found a long range signature of the scaling state with regard to number density fluctuations, because both the hyperuniformity disorder length spectra and the structure factor are unchanged as the foam ages. It would be interesting to push this to system sizes on the order of $N=10^{5}$, as we did for the Voronoi constructions, but this would require simulation.
Besides foams, we can use the hyperuniformity disorder length to determine long range disorder in other naturally occurring cellular patterns. This analysis can be used in two dimensions on networks made from cracks in dried mud, from peaks and valleys in crumpled paper, or from biological cells. In three dimensions one could study biological networks of trabecular bone or other types of porous materials. A natural extension of our work is to perform this analysis for three-dimensional foams. Recent experiments on 3d foams have found that they, like 2d foams, enter a self-similar scaling state Lambert _et al._ (2010). Applying HUDLS to any of these systems offers a general and intuitive real-space method to characterize the spectrum of structural features, which is a fundamental step in understanding material properties.
###### Acknowledgements.
We thank Nigel Goldenfeld for introducing us to the concept of low discrepancy
sequences. This work was supported primarily by NASA grant 80NSSC19K0599 and
also by NSF grant MRSEC/DMR-1720530.
## References
* Weaire and Hutzler (2001) D. Weaire and S. Hutzler, _The Physics of Foams_ (Oxford University Press, New York, 2001).
* Glazier and Weaire (1992) J.A. Glazier and D. Weaire, “The kinetics of cellular patterns,” Journal of Physics: Condensed Matter 4, 1867 (1992).
* Stavans (1993a) J. Stavans, “The evolution of cellular structures,” Reports on Progress in Physics 56, 733 (1993a).
* Flyvbjerg (1993) Henrik Flyvbjerg, “Model for coarsening froths and foams,” Phys. Rev. E 47, 4037 (1993).
* Stavans (1990) J. Stavans, “Temporal evolution of two-dimensional drained soap froths,” Phys. Rev. A 42, 5049 (1990).
* Stavans (1993b) J. Stavans, “Evolution of two-dimensional cellular structures: The soap froth,” Physica A: Statistical Mechanics and its Applications 194, 307 (1993b).
* Stavans and Glazier (1989) J. Stavans and J. A. Glazier, “Soap froth revisited: Dynamic scaling in the two-dimensional froth,” Phys. Rev. Lett. 62, 1318 (1989).
* de Icaza _et al._ (1994) M. de Icaza, A. Jiménez-Ceniceros, and V.M. Castaño, “Statistical distribution functions in 2d foams,” Journal of Applied Physics 76, 7317 (1994).
* Glazier _et al._ (1990) J.A. Glazier, M.P. Anderson, and G.S. Grest, “Coarsening in the two-dimensional soap froth and the large-q potts model: A detailed comparison,” Philosophical Magazine B 62, 615 (1990).
* Herdtle and Aref (1992) T. Herdtle and H. Aref, “Numerical experiments on two-dimensional foam,” Journal of Fluid Mechanics 241, 233 (1992).
* Segel _et al._ (1993) D. Segel, D. Mukamel, O. Krichevsky, and J. Stavans, “Selection mechanism and area distribution in two-dimensional cellular structures,” Phys. Rev. E 47, 812 (1993).
* Rutenberg and McCurdy (2006) A. D. Rutenberg and M. B. McCurdy, “Scaling state of dry two-dimensional froths: Universal angle-deviations and structure,” Phys. Rev. E 73, 011403 (2006).
* Neubert and Schreckenberg (1997) L. Neubert and M. Schreckenberg, “Numerical simulation of two-dimensional soap froth,” Physica A: Statistical Mechanics and its Applications 240, 491 (1997).
* Torquato and Stillinger (2003) S. Torquato and F. H. Stillinger, “Local density fluctuations, hyperuniformity, and order metrics,” Phys. Rev. E 68, 041113 (2003).
* Zachary and Torquato (2009) C. E. Zachary and S. Torquato, “Hyperuniformity in point patterns and two-phase random heterogeneous media,” J. Stat. Mech.: Theory and Experiment , P12015 (2009).
* Torquato (2018) S. Torquato, “Hyperuniform states of matter,” Physics Reports 745, 1 (2018).
* Zachary _et al._ (2011a) C. E. Zachary, Y. Jiao, and S. Torquato, “Hyperuniform long-range correlations are a signature of disordered jammed hard-particle packings,” Phys. Rev. Lett. 106, 178001 (2011a).
* Zachary _et al._ (2011b) C. E. Zachary, Y. Jiao, and S. Torquato, “Hyperuniformity, quasi-long-range correlations, and void-space constraints in maximally random jammed particle packings. i. polydisperse spheres,” Phys. Rev. E 83, 051308 (2011b).
* Zachary _et al._ (2011c) C. E. Zachary, Y. Jiao, and S. Torquato, “Hyperuniformity, quasi-long-range correlations, and void-space constraints in maximally random jammed particle packings. ii. anisotropy in particle shape,” Phys. Rev. E 83, 051309 (2011c).
* Kurita and Weeks (2011) R. Kurita and E. R. Weeks, “Incompressibility of polydisperse random-close-packed colloidal particles,” Phys. Rev. E 84, 030401 (2011).
* Berthier _et al._ (2011) L. Berthier, P. Chaudhuri, C. Coulais, O. Dauchot, and P. Sollich, “Suppressed compressibility at large scale in jammed packings of size-disperse spheres,” Phys. Rev. Lett. 106, 120601 (2011).
* Jiao _et al._ (2014) Y. Jiao, T. Lau, H. Hatzikirou, M. Meyer-Hermann, J.C. Corbo, and S. Torquato, “Avian photoreceptor patterns represent a disordered hyperuniform solution to a multiscale packing problem,” Phys. Rev. E 89, 022721 (2014).
* Dreyfus _et al._ (2015) R. Dreyfus, Y. Xu, T. Still, L. A. Hough, A. G. Yodh, and S. Torquato, “Diagnosing hyperuniformity in two-dimensional, disordered, jammed packings of soft spheres,” Phys. Rev. E 91, 012302 (2015).
* Weijs _et al._ (2015) J. H. Weijs, R. Jeanneret, R. Dreyfus, and D. Bartolo, “Emergent hyperuniformity in periodically driven emulsions,” Phys. Rev. Lett. 115, 108301 (2015).
* Atkinson _et al._ (2016) S. Atkinson, G. Zhang, A. B. Hopkins, and S. Torquato, “Critical slowing down and hyperuniformity on approach to jamming,” Phys. Rev. E 94, 012902 (2016).
* Wu _et al._ (2015) Y. Wu, P. Olsson, and S. Teitel, “Search for hyperuniformity in mechanically stable packings of frictionless disks above jamming,” Phys. Rev. E 92, 052206 (2015).
* Ikeda and Berthier (2015) A. Ikeda and L. Berthier, “Thermal fluctuations, mechanical response, and hyperuniformity in jammed solids,” Phys. Rev. E 92, 012309 (2015).
* Chieco _et al._ (2018) A. T. Chieco, M. Zu, A. J. Liu, N. Xu, and D. J. Durian, “The spectrum of structure for jammed and unjammed soft disks,” Phys. Rev. E 98, 042606 (2018).
* Klatt _et al._ (2019) M. A Klatt, J. Lovrić, D. Chen, S. C. Kapfer, F. M. Schaller, P. W. A. Schönhöfer, B. S. Gardiner, A. Smith, G. E. Schröder-Turk, and S. Torquato, “Universal hidden order in amorphous cellular geometries,” Nature Communications 10, 811 (2019).
* Kim and Torquato (2019) J. Kim and S. Torquato, “Methodology to construct large realizations of perfectly hyperuniform disordered packings,” Phys. Rev. E 99, 052141 (2019).
* Chieco _et al._ (2017) A. T. Chieco, R. Dreyfus, and D. J. Durian, “Characterizing pixel and point patterns with a hyperuniformity disorder length,” Phys. Rev. E 96, 032909 (2017).
* Durian (2017) D. J. Durian, “Hyperuniformity disorder length spectroscopy for extended particles,” Phys. Rev. E 96, 032910 (2017).
* Hexner _et al._ (2018) D. Hexner, A. J. Liu, and S. R. Nagel, “Two diverging length scales in the structure of jammed packings,” Phys. Rev. Lett. 121, 115501 (2018).
* Roth _et al._ (2013) A. E. Roth, C. D. Jones, and D. J. Durian, “Bubble statistics and coarsening dynamics for quasi-two-dimensional foams with increasing liquid content,” Phys. Rev. E 87, 042304 (2013).
* Chieco and Durian (2020) A. T. Chieco and D. J. Durian, “Experimentally testing a generalized coarsening model for individual bubbles in quasi-two-dimensional wet foams,” (2020), arXiv:2010.06664 [cond-mat.soft] .
* Halton (1964) J. H. Halton, “Algorithm 247: Radical-inverse quasi-random point sequence,” Commun. ACM 7, 701 (1964).
* Halton (1960) J. H. Halton, “On the efficiency of certain quasi-random sequences of points in evaluating multi-dimensional integrals,” Numerische Mathematik 2, 84 (1960).
* Niederreiter (1992) H. Niederreiter, _Random Number Generation and quasi-Monte Carlo Methods_ (Society for Industrial and Applied Mathematics, Philadelphia, 1992).
* Kocis and Whiten (1997) L. Kocis and W. J. Whiten, “Computational investigations of low-discrepancy sequences,” ACM Trans. Math. Softw. 23, 266 (1997).
* Okabe _et al._ (2009) A. Okabe, B. Boots, K. Sugihara, and S. N. Chiu, _Spatial tessellations: concepts and applications of Voronoi diagrams_ , Vol. 501 (John Wiley & Sons, 2009).
* Torquato and Chen (2018) S. Torquato and D. Chen, “Multifunctional hyperuniform cellular networks: optimality, anisotropy and disorder,” Multifunctional Materials 1, 015001 (2018).
* Florescu _et al._ (2009) M. Florescu, S. Torquato, and P. J. Steinhardt, “Designer disordered materials with large, complete photonic band gaps,” Proc. Nat. Acad. Sci. 106, 20658 (2009).
* Man _et al._ (2013) W. Man, M. Florescu, E. P. Williamson, Y. He, S. R. Hashemizad, B. Y. C. Leung, D. R. Liner, S. Torquato, P. M. Chaikin, and P. J. Steinhardt, “Isotropic band gaps and freeform waveguides observed in hyperuniform disordered photonic solids,” Proc. Nat. Acad. Sci. 110, 15886 (2013).
* Bolton and Weaire (1991) F. Bolton and D. Weaire, “The effects of plateau borders in the two-dimensional soap froth i. decoration lemma and diffusion theorem,” Philosophical Magazine B 63, 795–809 (1991).
* Lambert _et al._ (2010) J. Lambert, R. Mokso, I. Cantat, P. Cloetens, J. A. Glazier, F. Graner, and R. Delannay, “Coarsening foams robustly reach a self-similar growth regime,” Phys. Rev. Lett. 104, 248304 (2010).
# Modelling Universal Order Book Dynamics in Bitcoin Market
Fabin Shi CAS Key Laboratory of Network Data Science and Technology,
Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
University of Chinese Academy of Sciences, Beijing 100049, China Nathan Aden
Department of Physics, University of Miami, Coral Gables, Florida 33142, USA
Shengda Huang Department of Physics, University of Miami, Coral Gables,
Florida 33142, USA Neil Johnson Physics Department, George Washington
University, Washington D.C. 20052 Xiaoqian Sun CAS Key Laboratory of Network
Data Science and Technology, Institute of Computing Technology, Chinese
Academy of Sciences, Beijing, China Jinhua Gao CAS Key Laboratory of Network
Data Science and Technology, Institute of Computing Technology, Chinese
Academy of Sciences, Beijing, China Li Xu CAS Key Laboratory of Network Data
Science and Technology, Institute of Computing Technology, Chinese Academy of
Sciences, Beijing, China Huawei Shen CAS Key Laboratory of Network Data
Science and Technology, Institute of Computing Technology, Chinese Academy of
Sciences, Beijing, China University of Chinese Academy of Sciences, Beijing
100049, China Xueqi Cheng CAS Key Laboratory of Network Data Science and
Technology, Institute of Computing Technology, Chinese Academy of Sciences,
Beijing, China University of Chinese Academy of Sciences, Beijing 100049,
China Chaoming Song Department of Physics, University of Miami, Coral Gables,
Florida 33142, USA
###### Abstract
Understanding the emergence of universal features such as the stylized facts in markets is a long-standing challenge that has drawn much attention from economists and physicists. Most existing models, such as stochastic volatility models, focus mainly on price changes, neglecting the complex trading dynamics. Recently, there have been increasing studies of order books, thanks to the availability of large-scale trading datasets, aiming to understand the underlying mechanisms governing market dynamics. In this paper, we collect order-book datasets from Bitcoin platforms across three countries, covering millions of users and billions of daily turnovers. We find that a 1+1D field theory, governed by a set of KPZ-like stochastic equations, precisely predicts the order book dynamics observed in empirical data. Despite the microscopic differences between markets, we argue the proposed effective field theory captures the correct universality class of market dynamics. We also show that the model agrees with existing stochastic volatility models in the long-wavelength limit.
Understanding universal emergent properties in different markets is a long-standing challenge for both economists and physicists. As early as the 1960s, Mandelbrot 1 pointed out that the distribution of logarithmic price returns was heavy-tailed in the cotton market, which soon after was found to hold true in numerous other markets 2; 3; 4; 5. Since then, many stylized facts have been observed to be common across a wide range of instruments, markets, and time periods 6; 7; 8; 9; 10; 11. This raises a fundamental question: what are the general mechanisms in a financial market that lead to these phenomena?
Existing approaches that model price evolution as a stochastic process to capture the volatility of a market, such as stochastic volatility (SV) models 12; 13; 14; 15; 16, have been met with success when attempting to tease out numerous stylized facts such as volatility clustering and heavy-tailed price return distributions. However, since these models do not include aspects of the actual trading process, connections between these facts and human behavior remain outside their scope. A natural extension is then to include the actions of traders as the mechanism that creates the price, by incorporating all limit orders into bid/ask order books at a given time $t$ and price $x$, with the matching price at the position $x=0$ at which buyers and sellers agree to trade. Thanks to technological advances during the past decade there are an increasing number of datasets available about order-book dynamics, which provide the microscopic details of trading dynamics 17. These details have been used to construct several models that attempt to bridge the gap between human behavior and market dynamics 18; 19; 20; 21; 22; 23; 24; 25. Bak et al. considered orders as particles and modeled the movement of each particle along the price lattice using a random walk 26. Further work also took into account fixed limit orders and market orders that trigger transactions 18. More recently, orders were modelled as Brownian colloidal particles bumbling along in a price fluid 25. However, the common approach in these models is the discretization of price, which has the potential to obfuscate the behavior/market connection with details of how orders are transacted in the specific market analyzed. In our model we ignore some of these details by smoothing the limit order price axis into a continuous spatial dimension. Along with a continuous time axis, we propose a 1+1D field theory to explain some of the stylized facts as resulting directly from the tendencies of traders in a limit order based market.
## I Analyzing and modeling the order-book dynamics
Despite there being several studies based on order-book datasets for various securities, these datasets are often limited in quantity, time span, and accessibility. The novelty of Bitcoin, however, lies in the decentralized nature of how transactions are executed. Trades involving BTC are only recognized as valid once they have been communally mined into the publicly available ledger known as the Blockchain. This intrinsic availability of market data has led to extensive study since Bitcoin's inception in 2009. The first Bitcoin exchanges emerged in 2010, providing a uniquely public look into the mechanics of exchange trading, including order-book dynamics. Some early analyses of these data focused primarily on standard financial methods to compare Bitcoin to normal currencies 27. Later works explored price prediction and stability analysis 28; 29; 30. More recently, Bitcoin entered the public discourse by exploding in value throughout 2017 and then bursting soon after in early 2018, thereafter continuing to rise and fall in diminishing motions, seemingly approaching a stable value. This long and varied public economic history makes it an ideal candidate on which to test our model.
We use three datasets collected through different online Bitcoin trading platforms: (i) OKCoin was the largest Bitcoin exchange platform in China, consisting of millions of users and billions of turnovers per day, until being shut down in 2017 due to government policy. We collected order-book data from OKCoin from Nov. 3rd, 2016 to Jul. 28th, 2017 (with an unfortunate gap from Jan. 4th, 2017 to Mar. 1st, 2017 due to machine failure). Since OKCoin introduced an additional transaction fee on each order after Jan. 24, 2017, we split the data in two: Nov. 3rd, 2016 to Jan. 4th, 2017 (OKCoin1) and Mar. 1st, 2017 to Jul. 28th, 2017 (OKCoin2). (ii) We also collected data from BTC-e, one of the largest Bitcoin trading platforms, headquartered in Russia, from May 3rd, 2017 to Jul. 26th, 2017. (iii) Lastly, we collected data from Coinbase, a US-based Bitcoin trading platform, from Jan. 23rd, 2018 to Apr. 18th, 2018. The order-book datasets collected for each of these three domains record the profiles of the bid (limit buy) and ask (limit sell) orders every few seconds during the stated observation period. We are unable to track the instantaneous change of each order. Nevertheless, for OKCoin1 we also collected market order transaction data per second by recording the total number of market orders which are higher/lower than the best price (bid/ask) and immediately match to one or more active orders upon arrival.
Fig. 1: Analysis and modelling of the three types of order-book operations in different Bitcoin markets. a) A typical ask/bid order-book profile. b) A schematic description of the three order-book operations. c–f) The conditional distribution $P(\Delta n|n)$ for c) OKCoin1, d) OKCoin2, e) BTC-e and f) Coinbase. Dots denote measurements from data and lines are measurements from simulation. g–j) The change of order volume $\Delta n$ versus order volume $n$ for g) OKCoin1, h) OKCoin2, i) BTC-e and j) Coinbase. Dots denote measurements from data and lines are measurements from simulation. k–n) The correlation $\langle\Delta n_{x},\Delta n_{y}\rangle$ (the correlation between the change of order volume at different positions) versus position $x$ for k) OKCoin1, l) OKCoin2, m) BTC-e and n) Coinbase. Dots denote measurements from data and lines are measurements from simulation.
We introduce a $1+1D$ continuous field (CF) model to explain the dynamics found in the bid/ask order volumes, $n_{+}(x,t)$ and $n_{-}(x,t)$. The spatial dimension $x\equiv\pm\ln p(t)\mp\ln p_{x}\geq 0$ is the logarithmic distance between an order price $p_{x}$ and the trading price $p(t)$, with the two signs corresponding to the bid/ask axes respectively, a notational convenience that keeps $x$ positive. Fig. 1a demonstrates a typical bid/ask order-book profile over time. Figure 1c–f plots the distribution of the order volume change among bids, $\Delta n_{+}(x,t)\equiv n_{+}(x,t)-n_{+}(x,t-\Delta t)$, for fixed $x$ and $n$ and various values of $\Delta t$, revealing a fat-tailed nature for both positive and negative tails. Similar results are observed on the ask side for $\Delta n_{-}$. Any change in the volume of orders away from the $x=0$ boundary must come from one of three possible order-book operations: i) order placement (OP), ii) order cancellation (OC), and iii) order modification (OM), as illustrated in Fig. 1b. We model these three operations as follows:
(1) Order Placement: Traders place a new order on top of previous orders at some price $x\neq 0$. In the continuous case we can model the change in order volume due to order placement, denoted $dn_{\pm}^{OP}(x,t)$, as
$dn_{\pm}^{OP}(x,t)=\sigma_{\pm}^{in}(x)\xi_{\pm}(x,t)dt,$ (1)
where $\xi_{\pm}(x,t)$ is a continuous set of random variables satisfying a one-sided stable distribution. We find that we must allow the scale parameter $\sigma_{\pm}^{in}$ to depend on the position $x$. This general ingredient of order-book dynamics has been found in both the Paris Stock Exchange 20 and the London Stock Exchange 31.
(2) Order Cancellation: Traders cancel orders which they have placed previously. In Fig. 1g–j we plot the time averaged change of order volume at fixed $x$, $\langle\Delta n_{+}(x,t)\rangle_{t}$, against the current order volume. Unlike order placement (1), where changes are independent of $n$, we see a linear dependence, consistent with an existing study 32, from which we infer the form of the order cancellation term to be
$dn_{\pm}^{OC}(x,t)=-\sigma_{\pm}^{out}(x)n_{\pm}(x,t)\zeta_{\pm}(x,t)dt.$ (2)
The scale parameter $\sigma_{\pm}^{out}$, like $\sigma_{\pm}^{in}$, depends on the position $x$, and again $\zeta_{\pm}(x,t)$ is a random variable satisfying the same stable distribution as above.
(3) Order Modification: Traders change the price of orders that they own. Empirically there exists a negative correlation between $\Delta n_{+}(x,t)$ at different positions (Fig. 1k–n), suggesting that the order modification operation can be modeled as a diffusion process along the order books. Therefore, the order modification term is
$dn_{\pm}^{OM}(x,t)=\frac{\partial^{2}}{\partial x^{2}}D_{\pm}(x)n_{\pm}(x,t)dt,$ (3)
where the diffusion rate $D_{\pm}(x)$ in general depends on the position. It is possible that the negative correlation we observe is due to a combination of order modification and correlated behaviors of adding/removing orders, perhaps by different users. In an effective field model such microscopic differences are effectively the same and are all captured by the diffusion term (see Supplementary Section S2 for a direct validation of (3) using an additional dataset).
Directly from the chain rule we obtain
$\frac{dn_{\pm}(x,t)}{dt}=\frac{\partial n_{\pm}(x,t)}{\partial t}\pm\frac{\partial n_{\pm}(x,t)}{\partial x}v(t),$ (4)
where $v(t)\equiv d\ln p(t)/dt$ is the velocity of the logarithmic price. The total derivative is then simply the sum of the effects of the order-book operations determined above, (1)–(3), leading to our first stochastic differential equation
$\frac{\partial n_{\pm}(x,t)}{\partial t}=\frac{\partial^{2}D_{\pm}(x)n_{\pm}(x,t)}{\partial x^{2}}\mp v(t)\frac{\partial n_{\pm}(x,t)}{\partial x}+\sigma_{\pm}^{in}(x)\xi_{\pm}(x,t)-\sigma_{\pm}^{out}(x)n_{\pm}(x,t)\zeta_{\pm}(x,t).$ (5)
Unlike limit orders, market orders execute immediately at the trading price when placed – even ahead of limit orders momentarily existing at the $x=0$ boundary. Therefore the discrepancy in these orders placed in a short period of time, denoted $J(v)\equiv\Delta n^{MO}_{+}(v,t)-\Delta n^{MO}_{-}(v,t)$, controls the flow of orders through the $x=0$ boundary: a positive excess indicates more buyers than sellers, so the discrepancy begins depleting the reservoir of ask limit orders, and vice-versa. Applying the continuity equation gives the rate of change of the total volume in $n_{\pm}(x,t)$ as $\frac{\partial}{\partial x}\left(D_{\pm}n_{\pm}\right)(0,t)\mp v(t)n_{\pm}(0,t)$, which must be conserved by the market orders, leading to
$v(t)=\frac{1}{n_{0}(t)}\left[J(v,t)+\frac{\partial D_{-}n_{-}}{\partial x}(0,t)-\frac{\partial D_{+}n_{+}}{\partial x}(0,t)\right],$ (6)
where $n_{0}(t)=n_{+}(0,t)+n_{-}(0,t)$. Equations (5)–(6) give a complete description of our CF model, which exhibits the relationship between order placement, order cancellation, order modification, and price change.
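The following is a minimal numerical sketch of equation (5) on a fixed grid, with the advection and boundary-flux pieces of equation (6) switched off ($v=0$, $J=0$) so that only the placement, cancellation, and diffusion operations act; the grid size, the constant rates, and the Pareto draws standing in for the one-sided stable noise are all illustrative assumptions, not the simulation of Supplementary Section S1.

```python
import numpy as np

rng = np.random.default_rng(0)
nx, dt, steps = 200, 0.01, 5000
sigma_in, sigma_out, D = 1.0, 0.5, 0.1   # hypothetical constant rates
n = np.ones(nx)                          # bid-side volume profile n_+(x)

def heavy_tailed(size):
    # Pareto draws as a positive fat-tailed stand-in for the one-sided
    # stable noises xi and zeta of Eqs. (1)-(2).
    return rng.pareto(1.5, size=size)

for _ in range(steps):
    lap = np.roll(n, 1) - 2.0 * n + np.roll(n, -1)   # discrete Laplacian, unit grid
    dn = (sigma_in * heavy_tailed(nx)                # order placement, Eq. (1)
          - sigma_out * n * heavy_tailed(nx)         # order cancellation, Eq. (2)
          + D * lap) * dt                            # order modification, Eq. (3)
    n = np.maximum(n + dn, 0.0)                      # order volume stays non-negative

# The time-averaged <dn> at fixed x is linear in n, as in Fig. 1g-j.
print(n.mean(), n.std())
```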
From here we describe two important aspects of traders’ reactions to the velocity of the price, $J(v,t)$. The first is the influence of trend-following. The intuition is that traders will try to follow the changing price, _e.g._ traders prefer placing bid orders as the price is increasing and ask orders as it is decreasing. In Fig. 2a we observe exactly this: $J(v,t)$ is a linear response to $v$ at small velocity but saturates at high speeds. The work of Kanazawa 24 suggests that this curve approximately follows a hyperbolic tangent. Thus we set $J\propto\tanh({v}/{v_{0}})$, which fits the empirical data well. The other is the influence of market activity. When the market is moving at high speed in either direction, it seems to cause more activity among the traders. In Fig. 2b, the total change in market order volume over a small time-step, $\Delta n_{+}^{MO}+\Delta n_{-}^{MO}$, is observed to increase as the magnitude of the velocity grows, verifying the existence of this influence. We chose a natural fit to these data using $\Delta n_{+}^{MO}+\Delta n_{-}^{MO}\propto 1-\sech({v}/{v_{0}})$. These two effects combine to describe the behavior of market orders (Fig. 2c),
$\Delta n^{MO}_{\pm}=[\pm k_{0}\tanh({v}/{v_{0}})+k_{\infty}-k_{1}\sech{({v}/{v_{0}})}]v_{0}.$ (7)
We also analyzed the rms change in the total number of limit orders over a short period of time and found that it too approximately follows equation (7) (Fig. 2d). It is then reasonable to believe that the traders’ reaction to the movement of the trading price at any $x$ should mirror in form the reaction seen in market order activity. We propose that the limit order placement activity function is of the form
$\sigma^{in}(x,v)=[k_{0}^{in}(x)\tanh({v}/{v_{0}^{in}(x)})+k_{\infty}^{in}(x)-k_{1}^{in}(x)\sech{({v}/{v_{0}^{in}(x)})}]v_{0}^{in}(x),$
where to avoid cluttering the notation we have left off the $\pm$ subscripts.
Fig. 2: The traders’ reaction to velocity. a) The discrepancy of market orders $\Delta n_{+}^{MO}-\Delta n_{-}^{MO}$ versus velocity $v$ in OKCoin1. Dots denote measurements from data, whereas the curve is a guide to the eye, following $\Delta n_{+}^{MO}-\Delta n_{-}^{MO}\propto k_{0}\tanh(v/v_{0})$. b) The market order volume $\Delta n_{+}^{MO}+\Delta n_{-}^{MO}$ versus velocity $v$ in OKCoin1. Dots denote measurements from data, whereas the curve is a guide to the eye, following $\Delta n_{+}^{MO}+\Delta n_{-}^{MO}\propto k_{\infty}-k_{\infty}\sech(v/v_{0})$. c) The market order $\Delta n_{\pm}^{MO}$ versus velocity $v$ in OKCoin1. Dots denote measurements from data, whereas the curve is a guide to the eye, following $\Delta n_{\pm}^{MO}\propto[\pm k_{0}\tanh(v/v_{0})+k_{\infty}-k_{1}\sech{(v/v_{0})}]v_{0}$. d) The root mean square of $\Delta n$ versus normalized $v$ in OKCoin1. Dots denote measurements from data, whereas the line is a guide to the eye, following $\langle\Delta n^{2}\rangle^{1/2}\propto[\pm k_{0}'(x)\tanh(v/v_{0}'(x))+k_{\infty}'(x)-k_{1}'(x)\sech{(v/v_{0}'(x))}]v_{0}'(x)$.
## II Model Predictions
To test the validity of our model, we conduct simulations of the order-book dynamics (Supplementary Section S1) and compare the simulation results with empirical data from the OKCoin1, OKCoin2, BTC-e, and Coinbase datasets. We first provide indirect evidence supporting the form of the three trader operations included in the model. Considering the diffusion-less ($D_{\pm}(x)=0$) and point process ($J=0$) limits of our model, consisting only of traders placing and canceling orders at random, leads to a linear relationship between $\langle\Delta n_{\pm}(x,t)\rangle_{t}$ and $\langle n_{\pm}(x,t)\rangle_{t}$, which is verified in Fig. 1g–j, since empirically the contributions of the diffusion term are small on time scales where the velocity does not change much. We also see justification for the heavy tails of $\xi_{\pm}(x,t)$ and $\zeta_{\pm}(x,t)$ in the heavy tail observed in the distribution of $\Delta n_{+}$ (Fig. 1c–f). The final row of figures (Fig. 1k–n) shows the classic signs of the negative rebounds on either side of the self-correlating spike common to diffusion processes, with a more detailed analysis given in the supplementary materials (Supplementary Section S2).
Fig. 3: The distribution of the absolute value of the normalized price return in different Bitcoin markets. The probability distribution of the absolute value of the normalized price return for a) OKCoin1, b) OKCoin2, c) BTC-e and d) Coinbase. Dots denote measurements from data, lines are measurements from simulation. The dot-dashed line, shown as a guide to the eye, represents a power-law decay with exponent $\alpha=-4$.
Moreover, we measure the distribution of the absolute value of the instantaneous price return, which is important for understanding the market, quantifying risk, and optimizing portfolios 33; 34. Because it is defined as the logarithmic ratio of the price before and after the smallest discernible unit of time in the market (the tic, $\tau$), the price return is equivalent to the velocity times the tic, $|v(t)\tau|$. In Fig. 3a–d we plot the pdf of the price return, normalized to absorb the effect of tic size, which is of course irrelevant in our continuous model. We show that the heavy tail of this distribution decays with an exponent of $\alpha\approx-4$, in agreement with our theory, which reproduces the well known cubic (quartic) law of returns found in the ccdf (pdf) of many different financial markets 2; 5; 10; 11. As will be shown, the method by which our model predicts this exponent is very general, suggesting that this mechanism is a sufficient explanation for this universality class, independent of Bitcoin-specific market details.
Simulations reveal that the diffusion terms in equation 6 have a negligible
influence on the price movement, allowing
$v\quantity(t+\tau)\approx{J\quantity(v\quantity(t))}/{n_{0}\quantity(t)}$,
where care has been taken in writing the correct time dependence. Thus we
construct the infinitesimal for the velocity as
$dv\approx\frac{J\quantity(v)}{n_{0}}-v$, where every quantity is now
evaluated at the same time. We can construct the Fokker-Planck equation for
the distribution of the returns by writing the Itô SDE
$dv=\mu\quantity(v)dt+\sigma\quantity(v)dW_{t}$. We measure the drift and
diffusion coefficients from the data: $\expectationvalue{\Delta v}=-v$, so
that
$\mu\quantity(v)=\derivative{t}\expectationvalue{v}\approx\frac{1}{\tau}\expectationvalue{\Delta
v}=-\frac{v}{\tau}$, and $\expectationvalue{\text{Var}\quantity(v)}\approx 0$
when $v=0$, so that $k_{1}\approx k_{\infty}$ and
$\sigma^{2}\quantity(v)=\frac{v^{2}_{0}}{n^{2}_{0}\tau^{2}}\quantity[k_{0}^{2}\tanh[2](\frac{v}{v_{0}})+\quantity(k_{\infty}-k_{1}\sech[2](\frac{v}{v_{0}}))^{2}]$
(Supplementary Section S3). We then use the Fokker-Planck equation,
$\frac{\partial}{\partial t}p(v)=-\frac{\partial}{\partial
v}\quantity[\mu(v)p(v)]+\frac{\partial^{2}}{\partial
v^{2}}\quantity[\frac{\sigma^{2}(v)}{2}p(v)],$ (8)
to solve for the stationary solution
$p(v)\propto\frac{2}{\sigma^{2}(v)}e^{2\int{\frac{\mu(v)}{\sigma^{2}(v)}dv}},$ (9)
which is the general form of the price return distribution. A summary of the
solutions to this equation in the different velocity regimes is given below:
$\displaystyle p\quantity(v)\propto\exp\quantity(-\frac{n_{0}^{2}v^{2}}{v_{0}^{2}\quantity(k_{\infty}-k_{1})^{2}}),\qquad \absolutevalue{v}\to 0$
$\displaystyle p\quantity(v)\propto v^{-2-2k_{0}^{-2}n_{0}^{2}},\qquad 0\ll\absolutevalue{v}\ll v_{0}$
$\displaystyle p\quantity(v)\propto\exp\quantity(-\frac{n_{0}^{2}v^{2}}{v_{0}^{2}\quantity(k_{\infty}+k_{1})^{2}}),\qquad v_{0}\ll\absolutevalue{v}$
In the regime where the power law dominates, we find that $n_{0}\approx k_{0}$
in the OKCoin1 dataset (Supplementary Section S3), which gives a power of $-4$.
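As a consistency check on this exponent, the stationary solution (9) can be evaluated numerically. The sketch below is our own illustration, not code from this work: the constants $\tau=n_{0}=k_{0}=v_{0}=1$ and $k_{1}=k_{\infty}$ (the near-equality observed in the data) are placeholders, not values fitted to any dataset.

```python
import numpy as np

# Illustrative placeholder constants, not fit to data: tau = n0 = k0 = v0 = 1.
tau, n0, k0, v0 = 1.0, 1.0, 1.0, 1.0
k1 = k_inf = 1.0  # the empirically observed near-equality k1 ~ k_inf

def mu(v):
    # measured drift coefficient, mu(v) = -v / tau
    return -v / tau

def sigma2(v):
    # measured diffusion coefficient (overall constants cancel in the slope)
    return (v0**2 / (n0**2 * tau**2)) * (
        k0**2 * np.tanh(v / v0)**2
        + (k_inf - k1 / np.cosh(v / v0)**2)**2
    )

# Unnormalized stationary solution, Eq. (9), evaluated on a logarithmic grid.
v = np.logspace(-3, 1, 4000)
integrand = 2.0 * mu(v) / sigma2(v)
cumint = np.concatenate(
    [[0.0], np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(v))]
)
log_p = np.log(2.0 / sigma2(v)) + cumint

# Log-log slope in the intermediate regime 0 << v << v0.
window = (v > 1e-2) & (v < 1e-1)
slope = np.polyfit(np.log(v[window]), log_p[window], 1)[0]
print(f"intermediate-regime exponent ~ {slope:.2f}")  # ~ -4 = -2 - 2*n0**2/k0**2
```

For these constants the fitted slope approaches $-2-2k_{0}^{-2}n_{0}^{2}=-4$, matching the analytic intermediate regime above.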
Fig. 4: Properties of the order-book dynamics in OKCoin1. a) The probability
distribution of the absolute value of the normalized price return. b) The
variance of the velocity $\langle v^{2}\rangle$ versus order volume $n_{0}$.
c) The correlation $\langle v,\Delta n\rangle$ between the change of order
volume $\Delta n$ and the velocity $v$, versus position $x$. d) The root mean
square of the change of order volume $\langle\Delta n^{2}\rangle^{1/2}$
versus velocity $v$. In all panels, dots denote measurements from data and
lines are measurements from the different models.
We also compare our model to two existing models, CS 32 and KSTT 24. The CS
model deals with specific trader behavior in allowing for the placement and
cancellation of orders, whereas the KSTT model focuses on the traders'
reaction to changing price. However, neither produces a price return
distribution with the appropriate universal exponent (Fig. 4a), since in both
models the variance of the change in velocity is independent of the velocity,
which, according to equation (9), implies that the distribution of the price
return is Gaussian.
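To make the final step explicit: with a velocity-independent variance, $\sigma^{2}\quantity(v)=\sigma^{2}$, and the measured drift $\mu\quantity(v)=-v/\tau$, equation (9) reduces to
$$p\quantity(v)\propto\frac{2}{\sigma^{2}}\exp\quantity(-\frac{2}{\sigma^{2}}\int\frac{v}{\tau}\,dv)=\frac{2}{\sigma^{2}}\exp\quantity(-\frac{v^{2}}{\tau\sigma^{2}}),$$
a Gaussian, in contrast to the heavy tails of the empirical return distribution.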
In addition to the price return, we also verify some other useful quantities
from the model against our data. The relation between the second moment of
the velocity $\langle v^{2}\rangle$ and the market order volume $n_{0}$ is
calculated with an expectation with respect to the conditional distribution
on $n_{0}$, so we have
$\langle v^{2}\rangle=\int{p(v|n_{0})v^{2}dv}.$ (10)
In our model, the theoretical value of $\langle v^{2}\rangle$ fits the
empirical data well (Fig. 4b). $\langle v^{2}\rangle$ is composed of an
exponential decay and/or a power-law decay with power-law exponent $-2$ for
different limits on $n_{0}$, as summarized in the supplementary materials
(Supplementary Section S4). We again note that the predictions from the CS
and KSTT models are insufficient to fully explain this observation: in the CS
model, $\sigma^{2}(\Delta v)\propto n_{0}^{-2}$ as in our model, but the fit
is poor, while in the KSTT model the conditional probability $p(v|n_{0})$ is
independent of $n_{0}$, giving an approximately constant result.
Another point of distinction for our model is the correlation between the
velocity and the total change of order volume. Fig. 4c shows that $\langle
v,\Delta n\rangle$ decreases from positive to negative for bid orders,
increases from negative to positive for ask orders, and in both cases goes to
$0$ as $x\to\infty$, in agreement with the empirical data. The previous works
capture the properties of order-book dynamics only in certain regimes. The
KSTT model assumes all of the investors are high-frequency traders, which
enlarges the influence of trend-following in the region far away from the
price, leading to a correlation that does not taper to zero far away from the
trading price. In contrast, the CS model completely ignores traders' reaction
to the changing price velocity, leading to a deviation from the empirical
value near the price. Both of the previous works also neglect the influence
of market activity; therefore, their $\langle\Delta n^{2}\rangle^{1/2}$ is
approximately constant across velocities, while the curve our model produces
agrees with the empirical data (Fig. 4d).
To conclude, the above simulation results and analysis indicate that our
model can precisely capture, and potentially explain, the power-law decay of
the price return distribution found to be universal across a wide range of
markets. We also report the success of our model in demonstrating some of the
key features of order-book dynamics, as an improvement over previous work.
However, one obvious limitation of our model is the lack of temporal
correlation in the traders' reactions. It is reported that the time series
constructed by assigning the value $+1$ to incoming buy orders and $-1$ to
incoming sell orders exhibits long memory on the Paris Bourse 35. Since the
order placement $\Delta n^{OP}$ in our model follows a stable distribution,
which leads to the absence of long memory in the order flow, we cannot
reproduce these results.
Finally, we point out that our model can be used for price prediction,
provided quality data are available. Current methods of predicting price,
such as ARMA 36 and GARCH 12, are based on the price series; while order-book
data can be over-valued in some financial analyses, our model would
constitute the basis for a complementary approach to these more conventional
methods. Since our model is concerned only with the mesoscopic details of
order-book trading, many of the complications of using order-book data for
price return prediction are not an issue, such as so-called iceberg orders,
wherein a single market maker tries to sell a large amount of a security
secretly by not listing it all at once. As long as the orders follow
one-sided stable distributions and general trader reaction trends, our model
is applicable.
## Acknowledgements
The authors thank Hao Zhou for helpful comments. X.S. was supported by the
National Natural Science Foundation of China under award number 61802370.
X.C. was supported by the National Natural Science Foundation of China under
award number 60873245. H.S. was supported by the K.C. Wong Education
Foundation.
## Author contributions
C. Song conducted the project. F.S. collected and curated the datasets and
performed the numerical simulations. C.S., F.S., N.A., and S.H. developed the
model and calculated the analytical results. C.S., F.S., N.A., S.H., X.S.,
J.G., L.X., H.S., X.C., and N.J. analyzed the data and contributed to the
writing of the manuscript.
## Competing Interests
The authors declare no competing interests.
## References
* Mandelbrot (1963) B. Mandelbrot, Journal of Business 36, 394 (1963).
* Gopikrishnan _et al._ (1998) P. Gopikrishnan, M. Meyer, L. N. Amaral, and H. E. Stanley, The European Physical Journal B-Condensed Matter and Complex Systems 3, 139 (1998).
* Plerou and Stanley (2008) V. Plerou and H. E. Stanley, Physical Review E 77, 037101 (2008).
* Gu _et al._ (2008) G.-F. Gu, W. Chen, and W.-X. Zhou, Physica A: Statistical Mechanics and its Applications 387, 495 (2008).
* Gopikrishnan _et al._ (1999) P. Gopikrishnan, V. Plerou, L. A. N. Amaral, M. Meyer, and H. E. Stanley, Physical Review E 60, 5305 (1999).
* Cont (2001) R. Cont, Quantitative Finance 1, 223 (2001).
* Cont (2005) R. Cont, in _Fractals in Engineering_ (Springer, 2005) pp. 159–179.
* Stanley _et al._ (2008) H. E. Stanley, V. Plerou, and X. Gabaix, Physica A: Statistical Mechanics and its Applications 387, 3967 (2008).
* Campbell _et al._ (1997) J. Y. Campbell, A. W. Lo, A. C. MacKinlay, _et al._ , _The econometrics of financial markets_ , Vol. 2 (Princeton University Press, 1997).
* Chakraborti _et al._ (2011) A. Chakraborti, I. M. Toke, M. Patriarca, and F. Abergel, Quantitative Finance 11, 991 (2011).
* Gould _et al._ (2013) M. D. Gould, M. A. Porter, S. Williams, M. McDonald, D. J. Fenn, and S. D. Howison, Quantitative Finance 13, 1709 (2013).
* Bollerslev (1986) T. Bollerslev, Journal of Econometrics 31, 307 (1986).
* Heston (1993) S. L. Heston, The Review of Financial Studies 6, 327 (1993).
* Beckers (1980) S. Beckers, The Journal of Finance 35, 661 (1980).
* Hagan _et al._ (2002) P. S. Hagan, D. Kumar, A. S. Lesniewski, and D. E. Woodward, The Best of Wilmott 1, 249 (2002).
* Ahn and Gao (1999) D.-H. Ahn and B. Gao, The Review of Financial Studies 12, 721 (1999).
* Gould _et al._ (2013) M. D. Gould, M. A. Porter, S. Williams, M. McDonald, D. J. Fenn, and S. D. Howison, Quantitative Finance (2013).
* Maslov (2000) S. Maslov, Physica A: Statistical Mechanics and its Applications 278, 571 (2000).
* Cont _et al._ (2010) R. Cont, S. Stoikov, and R. Talreja, Operations Research 58, 549 (2010).
* Bouchaud _et al._ (2002) J.-P. Bouchaud, M. Mézard, M. Potters, _et al._ , Quantitative Finance 2, 251 (2002).
* Mike and Farmer (2008) S. Mike and J. D. Farmer, Journal of Economic Dynamics and Control 32, 200 (2008).
* Daniels _et al._ (2003) M. G. Daniels, J. D. Farmer, L. Gillemot, G. Iori, and E. Smith, Physical Review Letters 90, 108102 (2003).
* Smith _et al._ (2003) E. Smith, J. D. Farmer, L. Gillemot, and S. Krishnamurthy, Quantitative Finance 3, 481 (2003).
* Kanazawa _et al._ (2018) K. Kanazawa, T. Sueshige, H. Takayasu, and M. Takayasu, Physical Review Letters 120, 138301 (2018).
* Yura _et al._ (2014) Y. Yura, H. Takayasu, D. Sornette, and M. Takayasu, Physical Review Letters 112, 098703 (2014).
* Bak _et al._ (1997) P. Bak, M. Paczuski, and M. Shubik, Physica A: Statistical Mechanics and its Applications 246, 430 (1997).
* Raventós and Anadón Rosinach (2012) H. Raventós and M. Anadón Rosinach, (2012).
* Shah and Zhang (2014) D. Shah and K. Zhang, in _2014 52nd annual Allerton conference on communication, control, and computing (Allerton)_ (IEEE, 2014) pp. 409–414.
* Donier and Bouchaud (2015) J. Donier and J.-P. Bouchaud, PloS one 10, e0139356 (2015).
* Donier and Bonart (2015) J. Donier and J. Bonart, Market Microstructure and Liquidity 1, 1550008 (2015).
* Zovko _et al._ (2002) I. Zovko, J. D. Farmer, _et al._ , Quantitative Finance 2, 387 (2002).
* Challet and Stinchcombe (2001) D. Challet and R. Stinchcombe, Physica A: Statistical Mechanics and its Applications 300, 285 (2001).
* Bouchaud and Potters (2003) J.-P. Bouchaud and M. Potters, _Theory of financial risk and derivative pricing: from statistical physics to risk management_ (Cambridge University Press, 2003).
* Johnson _et al._ (2003) N. F. Johnson, P. Jefferies, P. M. Hui, _et al._ , OUP Catalogue (2003).
* Bouchaud _et al._ (2004) J.-P. Bouchaud, Y. Gefen, M. Potters, and M. Wyart, Quantitative Finance 4, 176 (2004).
* Box _et al._ (2015) G. E. Box, G. M. Jenkins, G. C. Reinsel, and G. M. Ljung, _Time series analysis: forecasting and control_ (John Wiley & Sons, 2015).
Fully Bayesian Estimation under Dependent and Informative Cluster Sampling
Bayes Estimation/Informative Cluster Sampling
Luis G. León-Novelo
Terrance D. Savitsky
León-Novelo & Savitsky
Assistant Professor.
University of Texas Health Science Center at Houston-School of Public Health, 1200 Pressler St. Suite E805, Houston, TX, 77030, USA
Senior Research Mathematical Statistician.
Office of Survey Methods Research, U.S. Bureau of Labor Statistics,
Washington, DC, 20212-0001, USA
Survey data are often collected under multistage sampling designs where units are binned to clusters that are sampled in a first stage. The unit-indexed population variables of interest are typically dependent within cluster. We propose a Fully Bayesian method that constructs an exact likelihood for the observed sample to incorporate unit-level marginal sampling weights for performing unbiased inference for population parameters while simultaneously accounting for the dependence induced by sampling clusters of units to produce correct uncertainty quantification. Our approach parameterizes cluster-indexed random effects in both a marginal model for the response and a conditional model for published, unit-level sampling weights. We compare our method to plug-in Bayesian and frequentist alternatives in a simulation study and demonstrate that our method most closely achieves correct uncertainty quantification for model parameters, including the generating variances for cluster-indexed random effects. We demonstrate our method in an application with NHANES data.
Statement of Significance
We propose a fully Bayesian framework for parameter estimation of a population model from survey data obtained via a multi-stage sampling design. Inference incorporates sampling weights. Our framework delivers estimates that achieve asymptotically correct uncertainty quantification unlike popular Bayesian and frequentist alternatives. In particular, our method provides asymptotically unbiased point and variance estimates under the sampling of clusters of units. This type of sampling design is common in national and large surveys.
Keywords: inclusion probabilities; mixed effects linear model; primary stage sampling unit; sampling weights; survey sampling.
§ INTRODUCTION
Inference with data from a complex sampling scheme, such as that collected by the National Health and Nutrition Examination Survey (NHANES), requires consideration of the sampling design. A common multistage sampling scheme in public survey datasets is formulated as:
* (i) Divide the survey population into $H$ strata.
* (ii) Each stratum is assigned $N_h$ clusters of individuals, called primary stage sampling units (PSUs), from which $J_h$ PSUs are selected. PSU $hj$ is selected with probability $\pi_{1hj}$. By design, at least one PSU is selected in each stratum, $J_h\geq 1, \forall h$.
* (iii) Within each selected PSU, $n_{hj}$ individuals are sampled out of the total $N_{hj}$ population units nested in the PSU. Each individual or last stage unit is sampled with probability $\pi_{i\mid hj}$, $i=1,\dots,N_{hj}$.
The indices $i,j,h$ index individual, PSU and stratum, respectively.
The marginal probability of including an individual in the sample is then
$\pi_{ihj}^\prime=\pi_{i\mid hj}\pi_{1hj}$.
In addition to sampling clusters of dependent individuals, both clusters and individuals-within-clusters are typically selected with unequal sampling inclusion probabilities in order to improve estimation power for a population subgroup or to reduce the variance of a global estimator. The sample inclusion probabilities are constructed to be correlated with, or "informative" about, the response variable of interest in order to reduce the variance of the estimator. On the one hand, stratification reduces the standard error (SE) of estimates while, on the other hand, clustering, which is used for convenience and to reduce cost, tends to increase the standard error since it induces dependence. Utilizing unequal inclusion probabilities can reduce the variance of the estimator where a subset of units is highly influential for the estimator of interest, as is the case where larger-sized employers drive the estimation of total employment for the Current Employment Statistics survey administered by the U.S. Bureau of Labor Statistics; more often, the use of unequal inclusion probabilities tends to increase the variance of the estimator due to the variation in the information about the population reflected in observed samples. Ignoring the PSUs and the unequal sampling inclusion probabilities underestimates the SE because of the dependence among individuals within a PSU and the variation in information about the population reflected in informative samples drawn from it.
The statistical analyst receives variables of interest for each survey participant
along with the stratum and PSU identifiers
to which s/he belongs, as well as sampling weights, $w_{ihj}\propto 1/\pi_{ihj}$. The inclusion probability, $\pi_{ihj}$,
is proportional to $\pi_{ihj}^\prime$ after adjusting for oversampling of subpopulations and nonresponse.
In the context of NHANES, a stratum is defined by the intersection of geography with concentrations of minority populations and a
PSU is constructed as a county or a group of geographically contiguous counties. Secondary and tertiary stage sampling units include segments (contiguous census blocks) and households. The final unit is an eligible participant in the selected household. NHANES releases masked stratum and PSU information to protect participants' privacy.
Every 2-year NHANES-data cycle <cit.> releases information
obtained from $H=15$ strata with $J_h=2$ PSUs per stratum.
In this paper, we focus on a two-stage sampling design that excludes strata for both our simulation study and our application, without loss of generality, since the inclusion of strata would be expected to improve estimation by reducing variability across realized samples. Our two-stage sampling design of focus is characterized by a first stage that samples PSUs and a subsequent second stage that samples units. We employ a fully Bayesian estimation approach that co-models the response variable of interest and the marginal inclusion probabilities, as introduced in <cit.>, hereafter referred to as LS2019. We extend their approach by constructing PSU-indexed random effects specified in both the marginal model for the response variable and the conditional model for the sampling inclusion probabilities.
Since our sampling design does not utilize strata, we drop the subindex $h$, which indexes strata in the discussion above.
Sampled individual $ij$ denotes individual $i \in \{1,\ldots,n_{j}\}$ in cluster $j \in \{1,\ldots,J_{pop}\}$ included in the sample, where $J_{pop}$ denotes the total number of PSUs in the population.
Let $J \leq J_{pop}$ denote the number of PSUs actually sampled.
We assume that the sampling weight, $w_{ij}$, is proportional to
the inverse marginal inclusion probability, $\pi_{ij}$, of individual $ij$ being
included in the sample; or $\pi_{ij}\propto 1/w_{ij}$. We denote the vector of predictors associated with individual $ij$ as $\bx_{ij}$.
The data analyst aims to estimate the parameters, $\bth$, of a population model, $p(y\mid\bth,\bx)$,
that they specify from these data.
Relabeling PSU indices in the sample so they run from $1,\ldots,J$,
the analyst observes a sample of size $n=\sum_{j=1}^J n_j$ and the associated variables,
$\{\is{y}_{ij},\sampled{\bx}_{ij},\is{\pi}_{ij}\propto 1/\is{w}_{ij},j\}_{i=1,\dots,n_j,j=1,\dots,J}$
with $n_j$ the number of participants from PSU $j$; the superindex $(s)$ denotes in the sample.
By contrast, $y_{ij}$ without superindex $(s)$ denotes a
response of an individual in the survey population but not, necessarily, a survey participant included in the observed sample.
The probability of inclusion of each PSU (denoted as $\pi_{1j}$ in point (ii), above) is unknown to the analyst because it is not typically published for the observed sample, though the PSU inclusion probabilities are used to construct the published unit marginal inclusion probabilities, so that inclusion probabilities within the same PSU tend to be more similar or correlated. The dependence of units in the same PSU, however, may not be fully accounted for by the dependence of their inclusion probabilities.
A sampling design is informative for inference about individuals within a group when $y_{ij}\not\perp \pi_{ij}\mid \bx_{ij}$.
A sampling design will also be informative for PSUs in the case that
$\bar{y}_{\cdot j} - \bar{y} = (1/N_{j})\mathop{\sum}_{i=1}^{N_{j}} y_{ij} - \bar{y} \not\perp \pi_{1j}\mid \bar{\bx}_{j}$ with
$\bar{y}$ the population mean response and
$\bar{\bx}_{j}=(1/N_j) \sum_{i=1}^{N_j} \bx_{ij} $. Even if a sampling design is not informative for individuals and/or groups, however, there are typically scale effects induced by within group dependence that must be accounted for to produce correct uncertainty quantification.
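A simple empirical diagnostic for unit-level informativeness, sketched below with hypothetical array names of our own (this is not a procedure prescribed in the paper), regresses the log of the observed inclusion probabilities on the observed response and covariates; a coefficient on the response distinguishable from zero signals $y_{ij}\not\perp \pi_{ij}\mid \bx_{ij}$ and anticipates the role the parameter $\kappa_y$ plays in the models below.

```python
import numpy as np

def informativeness_check(y, x, pi):
    """OLS of log(pi) on (y, x): a y-coefficient with a large t-ratio
    suggests an informative design (y not independent of pi given x).
    y, x, pi are hypothetical unit-level sample arrays."""
    X = np.column_stack([np.ones_like(y), y, x])
    coef, *_ = np.linalg.lstsq(X, np.log(pi), rcond=None)
    resid = np.log(pi) - X @ coef
    s2 = resid @ resid / (len(y) - X.shape[1])     # residual variance
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    return coef[1], coef[1] / se                   # slope on y and its t-ratio
```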
LS2019 propose a model-based Bayesian approach, appropriate under informative sampling,
that incorporates the sampling weights into the model
by modelling both the response given the parameters of interest and the inclusion probability given the response, $\pi_{ij}\mid y_{ij}$. The main advantages of this approach are that it yields
(1) consistent point estimates [LS2019] (point estimates converge in probability to the true values),
(2) credible intervals that achieve nominal (frequentist) coverage, and
(3) inference that is robust against mis-specification of $\pi_{ij}\mid y_{ij}$.
LS2019 focus on fixed effects models and ignore
the dependence induced by the sampling design; that is, both the
association among responses within the same PSU (which we label dep-$y$) and the
possible association among inclusion probabilities within the same PSU (which we label dep-$\pi$).
This paper extends the approach of LS2019 to account for these associations via mixed effects models. More specifically,
we include PSU-specific random effects (PSU-REs)
in both the model for the
responses and the model for the inclusion probabilities.
<cit.> propose, as we do, Bayesian inference
under a
two-stage sampling design.
In particular, they consider
the case where clusters/PSUs are selected
with probability, $\pi_{1j}$, proportional to a measure of PSU size
(commonly the number of individuals in the PSU). They require $\pi_{1j}$ to be available and published to the data analyst for the sampled PSUs, and they assume that individuals nested in PSUs are drawn under simple random sampling (SRS) in a second stage.
Their estimation focus is on the population mean or proportion.
By contrast, we focus on estimation of model parameters and assume that
the analyst does not know $\pi_{1j}$ (because it is not published), but instead only knows $\pi_{ij}$ (up to a multiplying constant) for the sampled individuals, as is the case for NHANES data. We do not assume that individuals within the sampled PSUs are selected under SRS, but allow for informativeness.
We introduce our general approach in Section <ref>, though in the rest of the paper
we focus on the linear regression setting for ease of illustration. Competing methods are also summarized in that section.
In Section <ref>,
we show via simulation that our approach yields
credible intervals with nominal (frequentist) coverage,
while the competing methods do not in some simulation scenarios.
In Section <ref> we demonstrate our approach by applying it to an
NHANES dataset
to estimate the daily kilocalorie consumption of persons in different demographic groups in the U.S. population.
Inference under our Fully Bayes approach is compared against inference under competing plug-in Bayesian and frequentist methods.
We provide a final discussion section and an Appendix
containing details referred to, but not discussed, in the main paper.
§ REVIEW OF LS2019
LS2019 introduce the inclusion probabilities into the Bayesian paradigm by assuming them to be random.
In this section we review their approach before extending it to include PSU information in the next section.
The superpopulation approach in LS2019 assumes that the finite population of size $N$,
$(y_1,\pi_1,\bx_1),\dots (y_N,\pi_N,\bx_N)$ is a realization of
\begin{equation}\label{eq:population}
(y_i,\pi_i)\mid \bx_i,\bth,\bka \sim p(y_i,\pi_i\mid \bx_i,\bth,\bka)=
p(\pi_i\mid y_i,\bx_i,\bka)\ p(y_i\mid \bx_i,\bth), \quad i=1,\dots,N.
\end{equation}
Here, $y_i$ is the response for individual $i$ with vector of covariates $\bx_i$ and
$\pi_i\in [0,1]$ is a proper survey sampling inclusion probability for individual $i$ being sampled.
It is assumed that
$(y_i,\pi_i)\perp (y_{i^\prime},\pi_{i^\prime})\mid \bx_i,\bx_{i^\prime},\bth,\bka$,
for $i\neq i^\prime$, and $\bx_i$ is assumed known; that is, the unit responses and inclusion probabilities are conditionally (on the model parameters) independent.
Note also that (<ref>) above presumes that
$\pi_i\perp \bth \mid y_i,\bx_i,\bka$ and
$y_i\perp \bka \mid \bx_i,\bth$; that is, the parameters for the models for the response and weights are a priori independent.
The population parameter $\bth$ determines the relationship between $\bx_i$ and $y_i$, and is of main interest. The parameter $\bka$ is a nuisance parameter that allows modeling the association between $\pi_i$ and $y_i$, though we later see in our simulation study section that it provides insight into the informativeness of the sampling design for a particular response variable of interest.
The informative sample of size $n$ is drawn so that
$P[\hbox{individual $i$ in sample}]=\pi_i$, a proper sampling inclusion probability.
Bayes' theorem implies
\begin{align}\label{eq:samplingdist}
p(y_{i},\pi_{i}\vert \bx_{i}, \bth, \bka,&\hbox{individual $i$ in sample})\nonumber\\
&=\frac{\mbox{Pr}(\hbox{individual $i$ in sample} \vert y_{i},\pi_{i},\bx_{i}, \bth, \bka )\times p(y_i,\pi_i\vert \bx_i, \bth, \bka)}{\mbox{Pr}(\hbox{individual $i$ in sample} \vert \bx_{i}, \bth, \bka)}
\end{align}
By the way the informative sample is drawn, and by the population model in (<ref>), the numerator in (<ref>) equals
\begin{equation}\label{eq:numerator}
\pi_i \times
p(\pi_i\mid y_i,\bx_i,\bka)\ p(y_i\mid \bx_i,\bth).
\end{equation}
The denominator is obtained by integrating out $(y_i,\pi_i)$ in the numerator,
\begin{equation}\label{eq:denominator}
\int \pi_i^\star
p(\pi_i^\star \mid y_i^\star,\bx_i,\bka)\ p(y_i^\star\mid \bx_i,\bth)\, d\pi_i^\star dy_i^\star=
E_{y_i^\star\mid \bx_i,\bth}\left[E\left(\pi_i^\star\mid y_i^\star,\bx_i,\bka\right)\right]
\end{equation}
The superindex $\star$ is used to distinguish the quantities integrated out from the ones in the numerator.
Plugging (<ref>) and (<ref>) into (<ref>), we obtain Equation (5) in LS2019
(see also Equation (7.1) in <cit.>),
given by
\begin{equation}\label{eq:IScorrection}
p_s(y_{i},\pi_{i}\vert \bx_{i}, \bth, \bka)=
\left\{\frac
{\pi_i\, p(\pi_i\vert y_i,\bx_i,\bka) }
{E_{y_i^\star\vert \bx_i,\bth}\left[E(\pi_i^\star\vert y_i^\star, \bx_i, \bka) \right]}\right\}
\times p( y_i\vert \bx_i, \bth)
\end{equation}
where the LHS, $p_s(\cdots\mid \cdots)$, denotes the joint distribution of $(y_i,\pi_i)$
conditioned on individual $i$ being in the sample; i.e., the LHS of (<ref>) is equal to
$p(\dots\mid \cdots,\hbox{individual $i$ in sample})$. Inference is based on this exact likelihood for the observed sample, with
\begin{equation*}\label{eq:likelihood}
\ell(\bth,\bka;\sampled{y},\sampled{\pi},\sampled{\bx})=\prod_{i=1}^n\left[p_s(\is{y_i},\is{\pi}_i\mid \sampled{x_i},\bth,\bka) \right]
\end{equation*}
where the superindex $(s)$ is used to emphasize that these are the values observed in the sample. We also relabel the index $i$ running from $1,\dots,N$ in the population so it runs from $1,\dots,n$ in the sample.
A Bayesian inference model is completed by
assigning priors to $\bth$ and $\bka$.
Note that under noninformative sampling, when $y_i\perp \pi_i\mid \bx_i$,
the quantity between curly brackets in (<ref>) does not depend on $y_i$, and therefore inference on $\bth$ does not depend on the inclusion probabilities $\pi_i$. In other words, inference using (<ref>) is the same as if the sample were treated as an SRS from the model $y_i\sim p(y_i\mid \bx_i,\bth)$.
For the informative sampling case,
in theory, we can assume any distribution for $y_i\mid \bx_i,\bth$ and $\pi_i\mid y_i,\bx_i,\bka$.
In practice, the calculation of $E_{y_i^\star\vert \bx_i,\bth}[\cdots]$ in (<ref>)
is a computational bottleneck. Theorem 1 in LS2019, stated below, provides
conditions under which this expected value has a closed form.
Let $\bxp_i$ and $\bxy_i$ be subvectors of $\bx_i$, the covariates used to specify the conditional distribution of $\pi_i\mid y_i,\bx_i,\bka$
and $y_i\mid \bx_i,\bth$, respectively; that is,
$\pi_i\mid y_i,\bx_i,\bka \sim \pi_i \mid y_i,\bxp_i,\bka$ and
$y_i\mid \bx_i,\bth\sim y_i\mid \bxy_i,\bth$. Note that we allow for $\bxp_i$ and $\bxy_i$ to have common covariates.
Let $\hbox{normal}(x\mid\mu,s^2)$ denote the normal distribution pdf with mean $\mu$ and variance $s^2$ evaluated at $x$, and $\hbox{lognormal}(\cdot\mid\mu,s^2)$ denote the lognormal pdf, so that
$X\sim \hbox{lognormal}(\mu,s^2)$ is equivalent to $\log X\sim\hbox{normal}(\mu,s^2)$.
(Theorem 1 in LS2019) Suppose that
$p(\pi_i\mid y_i,\bxp_i,\bka) =\emph{lognormal}(\pi_i\mid h(y_i,\bxp_i,\bka),\sigma_{\pi}^2)$,
with the function $h(y_i,\bxp_i,\bka)$ of the form $h(y_i,\bxp_i,\bka)=\hy(y_i,\bxp_i,\bka)+\hmy(\bxp_i,\bka)$, and that
$\sigma_{\pi}^2=\sigma_{\pi}^2(\bka,\bxp_i)$, possibly a function of $(\bka,\bxp_i)$. Then
$$p_s(y_i,\pi_i\mid \bxy_i,\bxp_i,\bth,\bka)=
\frac{\emph{normal}\left(\log \pi_i\mid \hy(y_i,\bxp_i,\bka)+\hmy(\bxp_i,\bka),\sigma_\pi^2\right)}
{\exp\left\{\hmy(\bxp_i,\bka)+\sigma^2_\pi/2\right\} \times M_y(\bka;\bxy_i,\bxp_i,\bth) }
\times p(y_i\mid \bxy_i,\bth)$$
with $M_y(\bka;\bxy_i,\bxp_i,\bth):=E_{y^\star_i\mid \bxy_i,\bth}\left[\exp\left\{\hy(y^\star_i,\bxp_i,\bka)\right\}\right]$.
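Though the proof appears in LS2019, the key step is short: the mean of a lognormal$(m,s^2)$ variable is $\exp(m+s^2/2)$, so the inner expectation of the denominator of (<ref>), combined with the additive form of $h$, gives
$$E_{y_i^\star\mid \bxy_i,\bth}\left[E\left(\pi_i^\star\mid y_i^\star,\bxp_i,\bka\right)\right]
 = \exp\left\{\hmy(\bxp_i,\bka)+\sigma_\pi^2/2\right\}\,
   E_{y_i^\star\mid \bxy_i,\bth}\left[\exp\left\{\hy(y_i^\star,\bxp_i,\bka)\right\}\right]
 = \exp\left\{\hmy(\bxp_i,\bka)+\sigma_\pi^2/2\right\} M_y(\bka;\bxy_i,\bxp_i,\bth),$$
which is exactly the denominator in the displayed density.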
If both $M_y$ and $p(y_i\mid\cdots)$ admit closed form expressions, then $p_s(y_i,\pi_i\mid\cdots)$ has a closed form as well;
for example, when
$\hy(y_i,\bxp_i,\bka)=\kappa_y y_i$,
with $\kappa_y$ an element of the parameter vector, $\bka$, and
$\kappa_y \in \mathbb{R}$,
then $M_y(\bka;\bxy_i,\bxp_i,\bth)$ is the moment generating function (MGF) of $y_i\mid \bth$ evaluated at $\kappa_y$, which may have a closed form defined on $\mathbb{R}$. This implies a closed form for $p_s$.
Analogously, we may consider an interaction between $y$ and $\bxp$, using
$\hy(y_i,\bxp_i,\bka)=(\kappa_y+\bxp_i^t \bka_\bxp) y_i\equiv r y_i$ with
$\bka=(\kappa_y,\bka_\bxp,\sigma_\pi^2)$. In this case, we achieve $M_y(r;\cdots)$, the MGF evaluated at $r$.
As mentioned in LS2019, the assumption of a lognormal distribution for $\pi_{i}$ is mathematically appealing. The inclusion probability, $\propto \pi_i$, for individual $i$ is composed of the product of inclusion probabilities of selection across the
stages of the multistage survey design.
If each of these stagewise probabilities is lognormal, then their product, $\propto\pi_i$, is lognormal as well. This is particularly helpful in the setting that includes PSUs, discussed in the next section.
For implementation, we observe sampled $\{(\sampled{\pi}_i,\sampled{y}_i)\}_{i=1,\dots,n}$ and we estimate the exact posterior distributions for the population model parameters on the observed sample.
Under our lognormal conditional model for the $\pi_i$s, there is no restriction imposed on $\sum_{i=1}^n \sampled{\pi}_i$, such that we may normalize the $\sampled{\pi}_i$s to any positive constant, $\sum_{i=1}^n \sampled{\pi}_i = c$,
as long as $h(y_i,\bxp_i,\bka)=\kappa_0+\dots$ includes an intercept parameter that we label $\kappa_0$.
Since $\pi_i\sim\hbox{lognormal}(\kappa_0+\dots,\dots)$ is equivalent to
$\pi_i/c\sim\hbox{lognormal}(\kappa_0-\log c+\dots,\dots)$, the estimated intercept is either
$\kappa_0$ or a shifted version, $\kappa_0-\log c$, and inference is unaffected.
§ INCLUSION OF PSU INFORMATION INTO FULLY BAYESIAN APPROACH
In Subsection <ref> we extend the approach of LS2019, reviewed in
Section <ref>,
which co-models the response variable and the sampling weights, modifying their notation by adding cluster-indexed parameters
in preparation for incorporating PSU information into the analysis in
Subsection <ref>, where we capture within-PSU dependence in the response and in the sample inclusion probabilities. In Subsection <ref> we introduce the fully Bayes joint population model for the response and the sample inclusion probabilities
in the linear regression case. In Subsections <ref> and <ref> we briefly review competing approaches for analyzing informative samples, which we compare in a simulation study.
§.§ Extending the Joint Population Model to Incorporate PSU-indexed Parameters
We assume a population with a total of $J_{pop}$ PSUs and
size $N=\sum_{j=1}^{J_{pop}} N_j$, with $N_j$ the number of population individuals or units in PSU $j$.
More specifically, the population consists of
$$\begin{array}{rl}
\underbrace{(y_{1,1},\pi_{1\mid1},\bx_{1,1}),\dots,(y_{N_1,1},\pi_{N_1\mid1},\bx_{N_1,1} )}_{\hbox{PSU }1},&\dots,
\underbrace{(y_{1,j},\pi_{1\mid j},\bx_{1,j}),\dots,(y_{N_j,j},\pi_{N_j\mid j},\bx_{N_j,j})}_{\hbox{PSU }j},\dots\\
\multicolumn{2}{c}{
\underbrace{(y_{1,J_{pop}},\pi_{1\mid J_{pop}},\bx_{1,J_{pop}}),
\dots,(y_{N_{J_{pop}},J_{pop}},\pi_{N_{J_{pop}}\mid J_{pop}},\bx_{N_{J_{pop}},J_{pop}} )}_{\hbox{PSU }J_{pop}}}
\end{array}$$
and also of the PSU-level probabilities
$$\pi_{11},\pi_{12},\dots,\pi_{1j},\dots ,\pi_{1J_{pop}}>0,$$
with $\pi_{i\mid j}\in (0,1],~ \forall i,j$.
The sample of size $n=\sum_{j=1}^J n_j$
(with $n_j$ and $J$ specified by the survey sampler)
is drawn in two steps:
* Step 1: PSU sampling.
Sample $J$ distinct PSUs $j_1,\dots,j_J\in\{1,\dots,J_{pop}\}$ so that
\begin{equation*}
Pr[\hbox{PSU } j \hbox{ is in the sample}] = \pi_{1j}.
\end{equation*}
* Step 2: Sampling of individuals. Within each PSU in the observed sample, $j\in\{j_1,\dots,j_J\}$,
sample $n_j$ distinct individuals
so that individual $i$ (in the sampled PSU $j$) is in the sample with probability
$$P[\hbox{Individual } {ij} \hbox{ is in the sample}\mid \hbox{PSU } j\hbox { is in the sample}]= \pi_{i\mid j},$$
and therefore the marginal inclusion probability is proportional to $\pi_{ij}:=\pi_{1j}\pi_{i\mid j}$.
The superpopulation approach assumes
that the population is a realization of
the joint distribution for values of the response variable and inclusion probabilities,
\begin{align}\label{eq:jointyandpi}
(y_{ij},\pi_{ij})\mid \bx_{ij},\bth,\REypop_j,\bka,\pi_{1j} \sim&\
p(y_{ij},\pi_{ij}\vert \bx_{ij}, \bth,\REypop_j,\bka,\pi_{1j})\\
=&\ p(\pi_{i j}\vert y_{ij},\bx_{ij},\bka,\pi_{1j})\, p(y_{ij}\vert \bx_{ij},\bth,\REypop_j)
\nonumber
\end{align}
We model $\pi_{ij}\mid y_{ij}$ with
$p(\pi_{ij}\vert y_{ij},\bx_{ij},\bka,\pi_{1j})$.
The (population) parameter of interest is $\bth$.
This construction allows for an informative sampling design by modeling $\pi_{ij}$ conditioned on $y_{ij}$. While $(\pi_{ij},y_{ij})$ are assumed to be conditionally independent over PSUs $j$ and units $i$, they are unconditionally (on model parameters) dependent under our construction. We have augmented the parameters used in LS2019, given in (<ref>), to incorporate
$\REypop_j$ and $\pi_{1j}$, which are shared by all observations in PSU $j$. The parameters
$\REypop_j$ induce a correlation in the response for individuals in the same PSU (dep-$y$),
while $\pi_{1j}$ induces association among the marginal inclusion probabilities (dep-$\pi$) of respondents nested in the same PSU. We will later construct priors on these parameters to define PSU-indexed random effects.
$\bka$ is a nuisance parameter used to model the inclusion probabilities.
After relabeling the sampled PSU indices
$j_1,\dots,j_J$ to $1,\dots,J$, and the indices $i$ in the sample to run from $i=1,\dots,n_j$,
the sample of size $n=\sum_{j=1}^J n_j$ consists of
$$\hbox{\emph{data} }:=
\{\is{y}_{ij},\sampled{\bx}_{ij},\is{\pi}_{ij},j\}_{i=1,\dots,n_j;\,j=1,\dots,J},$$
with $j$ indicating from which PSU the individual was sampled,
$n_j$ the number of participants from PSU $j$, and
$J$ the total number of sampled PSUs.
Recall, the superindex $(s)$ denotes in the sample.
The equality in (<ref>) assumes that
$y_{ij}\perp (\bka,\pi_{1j})\mid \bx_{ij},\bth,\REypop_j$ and
$\pi_{ij}\perp (\bth,\REypop_j) \mid y_{ij},\bx_{ij}, \bka,\pi_{1j}$.
Examples of noninformative samples are
* SRS: equivalent to $J_{pop}=J=1$ and $\pi_{i\mid 1}=1$ for $i=1,\dots,N$.
* SRS within PSUs with PSU sampling probability $\pi_{1j}$ independent of the response;
equivalent to $\pi_{1j}\perp y_{ij}\mid \bx_{ij}$ $\forall i$ and $\pi_{i\mid j}=1$.
We extend (<ref>), which captures the joint probability model for the sample, by replacing $\bth$ and $\bka$ with
$(\bth,\REypop_j)$ and $(\bka,\pi_{1j})$, respectively, to achieve
\begin{equation}\label{eq:IScorrectionPSU}
p_s(y_{ij},\pi_{ij}\vert \bx_{ij}, \bth, \REypop_j,\bka,\pi_{1j})=\frac
{\pi_{ij}\, p(\pi_{ij}\vert y_{ij},\bx_{ij},\bka,\pi_{1j}) }
{E_{y_{ij}^\star\vert \bx_{ij},\bth,\REypop_j}\left[E(\pi_{ij}^\star\vert y_{ij}^\star, \bx_{ij}, \bka,\pi_{1j}) \right]}
\times p( y_{ij}\vert \bx_{ij}, \bth,\REypop_j)
\end{equation}
The subindex $s$ on the joint distribution $p_s$ on the LHS denotes that we condition on individual $ij$ being in the sample; that is,
$p_s(y_{ij},\pi_{ij}\mid \dots)=p(y_{ij},\pi_{ij}\mid \hbox{individual } ij \hbox{ is in the sample},\dots)$.
In contrast, the distributions on the RHS are population distributions.
Inference on $\bth$ utilizes the joint likelihood for the observed sample,
\begin{equation*}
\ell(\bth,\boldsymbol{\REypop},\bka,\boldsymbol{\pi}_1;
\hbox{\emph{data}})=
\prod_{j=1}^J \prod_{i=1}^{n_j}
\left[p_s(\is{y_{ij}},\is{\pi}_{ij}\mid \sampled{x}_{ij},\bth,\REypop_j,\bka,\pi_{1j}) \right]
\end{equation*}
with $\boldsymbol{\REypop}:=(\REypop_1,\dots,\REypop_J)$ and $\boldsymbol{\pi}_1:=(\pi_{11},\dots,\pi_{1J})$.
Inference for $\bth$ is achieved via the posterior distribution of the model parameters:
\begin{align*}
p_s\left(\bth,\boldsymbol{\REypop},\bka,\boldsymbol{\pi}_1 \mid
\hbox{\emph{data}}
\right)\propto& \
\ell\left(\bth,\boldsymbol{\REypop},\bka,\boldsymbol{\pi}_1;
\hbox{\emph{data}}
\right)
\times \hbox{Prior}(\bth) \times \hbox{Prior}(\boldsymbol{\REypop})
\times \hbox{Prior}(\boldsymbol{\pi}_1) \times \hbox{Prior}(\bka).
\end{align*}
To obtain a closed form for the likelihood we need, in turn, a closed form for the expected value
in the denominator of (<ref>).
Theorem <ref> (Theorem 1 in LS2019) is here extended under our PSU-indexed parameterization, $(\bth,\REypop_j)$ and $(\bka,\pi_{1j})$, to provide conditions
that allow a closed form expression for this expected value.
Similar to the set-up for Theorem <ref>,
let $\bxp$ and $\bxy$ be subvectors of $\bx$, the covariates used to specify the conditional distributions of $\pi_{ij}\mid y,\bx,\bka,\pi_{1j}$
and $y\mid \bx,\bth,\REypop$, respectively; that is,
$\pi_{ij}\mid y_{ij},\bx_{ij},\bka,\pi_{1j} \sim \pi_{ij} \mid y_{ij},\bxp_{ij},\bka,\pi_{1j}$ and
$y_{ij}\mid \bx_{ij},\bth,\REypop\sim y_{ij}\mid \bxy_{ij},\bth,\REypop$.
Suppose that
$p(\pi_{ij}\mid y_{ij},\bxp_{ij},\bka,\pi_{1j}) =\emph{lognormal}(\pi_{ij}\mid h(y_{ij},\bxp_{ij},\bka,\pi_{1j}),\sigma_{\pi}^2)$,
with the function $h(y_{ij},\bxp_{ij},\bka,\pi_{1j})$ of the form $h(y_{ij},\bxp_{ij},\bka,\pi_{1j})=
\hy(y_{ij},\bxp_{ij},\bka)+
\hmy(\bxp_{ij},\bka,\pi_{1j})$, and that $\sigma_{\pi}^2$ is
possibly a function of $(\bxp_{ij},\bka,\pi_{1j})$. Then
\begin{align*}
p_s(y_{ij},\pi_{ij}\mid \bxy_{ij},\bxp_{ij},\bth,\REypop_j,\bka,\pi_{1j})=&
\frac{\emph{normal}\left(\log \pi_{ij}\mid \hy(y_{ij},\bxp_{ij},\bka)+\hmy(\bxp_{ij},\bka,\pi_{1j}),\sigma_\pi^2\right)}
{\exp\left\{\hmy(\bxp_{ij},\bka,\pi_{1j})+\sigma^2_\pi/2\right\}
\times M_y(\bka;\bxy_{ij},\bxp_{ij},\bth,\REypop_j) }\\
&\times p(y_{ij}\mid \bxy_{ij},\bth,\REypop_j)
\end{align*}
with $M_y(\bka;\bxy_{ij},\bxp_{ij},\bth,\REypop_j):=
E_{y_{ij}^\star\mid \bxy_{ij},\bth,\REypop_j}\left[\exp\left\{\hy(y_{ij}^\star,\bxp_{ij},\bka)\right\}\right]$.
So, analogously to the discussion after Theorem <ref>,
if $\hy(y,\bxp,\bka)=\kappa_y y$,
with $\kappa_y$ depending on
$\bka$ and, perhaps, on $\bxp$, then
$$E_{y_{ij}^\star\mid \bxy_{ij},\bth,\REypop_j}\left[\exp\left(\kappa_y y_{ij}^\star\right)\right]$$
is the moment generating function of $y$ evaluated at $\kappa_y$. So when
the population distribution of $y$, $p(y\mid \bth,\REypop,\bxy)$,
has a closed form and
the moment generating function has a closed form over the real line,
then the likelihood, $p_s$, has a closed form as well.
§.§ Inclusion of PSU Information into Conditional Population Model for Weights
The marginal inclusion probability
of individual $ij$, $\propto\pi_{ij}$,
is the product of the probability of selecting PSU $j$, $\propto\pi_{1j}$,
and the probability of selecting individual $i$ conditioned on the PSU being in the sample, $\propto\pi_{i\mid j}$, such that
$\pi_{ij}=\pi_{1j} \pi_{i\mid j}$. Taking logarithms,
$$\log \pi_{ij}=\log \pi_{i\mid j}+\log \pi_{1j}.$$
We assume $\log \pi_{1j}\sim \hbox{normal}(\mu_j,\sigma_{\REpipop}^2)$, where
$\mu_j$ could depend on PSU covariates
(e.g., county population)
but, for simplicity, we assume that it does not and set $\mu_j=0$.
Choosing a normal distribution for
$\log \pi_{i\mid j}\sim \hbox{normal}(
\hy(y,\bxp,\bka)+\hmy^\prime(\bxp,\bka),\sigma_{\pi}^2)$ yields
\begin{equation}\label{eq:logpiij}
\log \pi_{ij}\mid y_{ij},\bxp_{ij},\bka,\REpipop_j
\sim \hbox{normal}\left(\hy(y_{ij},\bxp_{ij},\bka)+
\hmy^\prime(\bxp_{ij},\bka)+\REpipop_{j},\sigma_{\pi}^2\right)
\end{equation}
with $\REpipop_j:=\log \pi_{1j}\iid \hbox{normal}(0,\sigma_{\REpipop}^2)$ the PSU-specific random effects.
So defining
$\hmy(\bxp,\bka,\pi_{1j}):=$ $\hmy^\prime(\bxp,\bka)+\log \pi_{1j}=\hmy^\prime(\bxp,\bka)+\REpipop_j$,
the distribution of $\pi_{ij}$ satisfies the conditions of Theorem <ref>.
This set-up is coherent with our assumption that the data analyst does not have information about the PSU-indexed sampling weights for either the population or sampled units because they are not typically published by survey administrators. Nevertheless, our derivation of (<ref>) by factoring the marginal inclusion probabilities, $\pi_{ij}$, demonstrates how we may capture within PSU dependence among $(\pi_{ij})$ by inclusion of random effects, $\REpipop_j$.
Notice that, as before,
since we include an intercept parameter
in the model for $\pi_{ij}$ in (<ref>),
we do not impose any restriction on
$\sum_{i=1}^{n_j} \pi_{ij}$ or $\sum_{j=1}^J \sum_{i=1}^{n_j} \pi_{ij}$.
§.§ Linear Regression Joint Population Model
We construct a linear regression model for the population with,
\begin{equation}\label{eq:SLR_likelihood}
{y_{ij}\mid \bxy_{ij},\bth,\REypop_j}\sim\text{normal}\left(\bxy_{ij}^t\bbe+\REypop_j ,\sigma_y^2 \right) ,
\quad\hbox{with }\bth=(\bbe,\sigma_y^2)
\end{equation}
with the PSU-specific random effect $\REypop_j$ in (<ref>)
playing the role of $\REypop_j$ in (<ref>).
The conditional population model for inclusion probabilities is specified as in (<ref>),
\begin{equation}\label{eq:lonnormalpriorforpi}
{\pi_{ij}\mid y_{ij},\bxp_{ij},\bka,\REpipop_j}\sim
\text{lognormal}\Big(\kappa_y y_{ij}+\bxp_{ij}^t \bka_\bxp+\REpipop_j, \sigma_\pi^2\Big),\quad\hbox{with } \bka=(\kappa_y,\bka_\bxp,\sigma_\pi^2)
\end{equation}
This construction results from setting $\hy(y_{ij},\bxp_{ij},\bka)=\kappa_y y_{ij}$,
$\hmy(\bxp_{ij},\bka,\pi_{1j})=\bxp_{ij}^t \bka_\bxp+\REpipop_j$ (recall, $\REpipop_j=\log \pi_{1j}$), and
$\sigma_\pi^2(\bka,\bxp_{ij},\pi_{1j})=\sigma_\pi^2$ in (<ref>).
Here $\bbe$ and $\bka_\bxp$ are vectors of regression coefficients
that include an intercept, so the first entry of both $\bxy_{ij}$ and $\bxp_{ij}$ equals 1.
We select prior distributions,
\begin{equation}\label{eq:priors}
\begin{array}{c}
\bbe \sim \hbox{MVN}(\mathbf{0},100 \mathbf{I}), \quad
\bka \sim \hbox{MVN}(\mathbf{0},100 \mathbf{I}), \quad
\REypop_1,\dots,\REypop_J\iid \hbox{normal}(0,\sigma_{\REypop}^2), \\
\REpipop_1,\dots,\REpipop_J\iid \hbox{normal}(0,\sigma_{\REpipop}^2), \quad \hbox{and}
\quad
\sigma_y,\sigma_\pi,\sigma_{\REypop},\sigma_{\REpipop} \iid \hbox{normal}^+(0,1)
\end{array}
\end{equation}
with $\hbox{normal}^+(m,s^2)$
denoting a normal distribution with mean $m$ and variance $s^2$ restricted to the positive real line;
$\hbox{MVN}(\mathbf{m},\bm{\Sigma})$ the multivariate normal distribution with
mean vector $\mathbf{m}$ and variance-covariance matrix $\bm{\Sigma}$; and
$\mathbf{I}$ the identity matrix. Since
$y\sim\hbox{normal}(m,s^2)$ admits a closed form expression for its
moment generating function,
$M_y(t)=\exp(tm+t^2 s^2/2)$,
we apply Theorem <ref> to obtain,
\begin{align}\label{eq:LRp_s}
p_s\left(y_{ij},\pi_{ij}\mid \bxy_{ij},\bxp_{ij},\bth,\REypop_j,\bka,\REpipop_j\right)
=&\frac{\hbox{normal}\left(\log \pi_{ij}\mid \kappa_y y_{ij}+\bxp_{ij}^t\bka_\bxp+\REpipop_j,\sigma_\pi^2\right)}
{\exp\left\{\bxp^t_{ij} \bka_\bxp+\REpipop_j +\sigma^2_\pi/2+
\kappa_y (\bxy^t_{ij}\bbe+\REypop_j)+\kappa_y^2\sigma_y^2/2 \right\}}\nonumber \\
&\times \hbox{normal}\left(y_{ij}\mid \bxy^t_{ij}\bbe+\REypop_j ,\sigma^2_y\right)
\end{align}
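For concreteness, a minimal sketch (our own illustration, not the authors' released code; names and shapes are ours) of the unit-level log of the exact sampled likelihood above, as one might hand to a generic MCMC or HMC routine:

```python
import numpy as np
from scipy.stats import norm

def log_ps_unit(y, log_pi, x_y, x_pi, beta, eta_y_j, kappa_y, kappa_xpi,
                eta_pi_j, sigma_y, sigma_pi):
    """Log of the exact sampled likelihood p_s for one unit ij in the
    linear-regression case; eta_y_j and eta_pi_j are the PSU-REs of this
    unit's PSU. All names are illustrative."""
    mu_y = x_y @ beta + eta_y_j                        # E[y_ij | x, theta, RE]
    mu_pi = kappa_y * y + x_pi @ kappa_xpi + eta_pi_j  # E[log pi_ij | y_ij, ...]
    log_num = (norm.logpdf(log_pi, mu_pi, sigma_pi)
               + norm.logpdf(y, mu_y, sigma_y))
    # log denominator: exp{x_pi'k + eta_pi + s_pi^2/2} times the normal MGF at kappa_y
    log_den = (x_pi @ kappa_xpi + eta_pi_j + 0.5 * sigma_pi**2
               + kappa_y * mu_y + 0.5 * kappa_y**2 * sigma_y**2)
    return log_num - log_den
```

Summing this quantity over all sampled units, plus the log priors in (<ref>), gives the log pseudo-target that a black-box sampler explores.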
The implementation of a Gibbs sampler is not straightforward in this case due to non-conjugacy under the exact likelihood in (<ref>).
To obtain a posterior sample of the model parameters,
we rely on the "black box" solver Stan <cit.>, which implements an
efficiently mixing Hamiltonian Monte Carlo sampling algorithm that
readily accommodates non-conjugate model specifications.
§.§ Pseudolikelihood Approach
<cit.> propose an approach
to incorporate sampling weights using a plug-in observed data pseudolikelihood:
\begin{equation}\label{eq:fullpseudo}
\left[\prod_{j=1}^J \prod_{i=1}^{n_j} p(\sampled{y}_{ij}\mid \theta)^ {\sampled{w}_{ij}}\right] \times \prod_{j=1}^J p(\REypop_{j}\mid \sigma^{2}_{\REypop})
\end{equation}
where we start with a pseudolikelihood that exponentiates each observed data likelihood contribution by its marginal unit sampling weight, $\sampled{w}_{ij}$, to re-balance the information in the observed sample to approximate that in the population. So the observed data pseudolikelihood (in square brackets) is not an exact likelihood, but an approximation for the unobserved population. We apply the pseudolikelihood approach by augmenting it with the prior for the unobserved, linked random effect to form an augmented pseudolikelihood. Inference on $\bth$ utilizes the pseudoposterior.
In Section <ref> we label this approach "Pseudo".
<cit.> standardize the marginal individual sampling weights so that
$\sum_{j=1}^J\sum_{i=1}^{n_j} \sampled{w}_{ij}=n$ to approximately reflect the amount of posterior uncertainty in the sample. Nevertheless, as in LS2019,
they account for neither dep-$y$ nor dep-$\pi$, so the resulting credibility intervals are overly optimistic (short). In a related work, <cit.> propose a separate post-processing step, applied to the pseudoposterior samples, that produces posterior draws with the sandwich variance estimator (which depends on $(y_{ij},w_{ij})$) of the pseudo MLE to account for the dependence induced by clustering. The added post-processing step to correct the posterior variance to the sandwich form that characterizes the frequentist construction is required because the pseudolikelihood treats the weights as fixed. In our fully Bayesian approach,
by contrast, the frequentist sandwich form collapses to the Bayesian estimator for the asymptotic covariance matrix under joint modeling of the response variable and sampling weights under the assumption of a joint population generating model <cit.>.
Related frequentist approaches of <cit.> and <cit.>
also employ an augmented likelihood similar to that of
(<ref>), but they additionally weight the prior for the random effect with the marginal group (PSU)-level weight, $\sampled{w}_{1j}$. They proceed to integrate out the random effect,
$\eta^{y}_{j}$, to perform estimation.
They also focus on consistent estimation of parameters,
rather than correct uncertainty quantification.
We will see in the sequel that, because our approach uses a joint model for $(y_{ij},\pi_{ij}\mid \bx_{ij})$, the asymptotic covariance matrix of the joint posterior, $H^{-1}$, is the same as that of the MLE, such that we achieve correct uncertainty quantification.
In the context of the linear regression in
(<ref>), the quantity between square brackets in
(<ref>) matches the likelihood of the weighted regression model
${y_{ij}\mid \bxy_{ij},\bth,\REypop_j}\sim\text{normal}\left(\bxy_{ij}^t\bbe+\REypop_j ,\sigma_y^2/w_{ij}\right)$
(See Appendix <ref> for details.)
So (<ref>) becomes
\begin{equation*}
\left[\prod_{j=1}^J
\prod_{i=1}^{n_j} \text{normal}\left(\sampled{y}_{ij}\mid \bxy_{ij}^t\bbe+\REypop_j ,\sigma_y^2/\sampled{w}_{ij} \right)
\right] \times \prod_{j=1}^J p(\eta^{y}_{j}\mid \sigma^{2}_{\eta^{y}})
\end{equation*}
This becomes useful for estimation in Stan, where one can specify the weighted linear regression model and add the log of $p(\REypop\mid \cdots)$ to the log of the full conditional for joint sampling of the model parameters.
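For contrast with the exact likelihood sketched earlier, a minimal sketch (again with illustrative names of our own) of the augmented pseudolikelihood objective in (<ref>), with the weights pre-standardized to sum to $n$:

```python
import numpy as np
from scipy.stats import norm

def log_pseudo(y, w, x_y, beta, eta_y, psu, sigma_y, sigma_eta):
    """Augmented pseudolikelihood: weight-exponentiated observed-data
    likelihood plus the (unweighted) prior for the PSU random effects.
    psu[i] gives the PSU index of unit i; all names are illustrative."""
    mu = x_y @ beta + eta_y[psu]
    data_term = np.sum(w * norm.logpdf(y, mu, sigma_y))   # sum_ij w_ij log p(y_ij | ...)
    re_term = np.sum(norm.logpdf(eta_y, 0.0, sigma_eta))  # sum_j log p(eta_j | sigma_eta)
    return data_term + re_term
```

Note that the weights enter only as exponents on the data term; they do not propagate into the random-effects prior, which is one source of the overly short credible intervals discussed above.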
§.§ Frequentist Approach
Frequentist estimation approaches are design-based, assuming the population is fixed.
The formulation that we highlight employs the pseudolikelihood construction, but without PSU-REs.
The point estimate of $\bth$, called $\tilde{\bth}_{freq}$, maximizes
$p_{pseudo}(\bth; \sampled{y},\sampled{\pi},\sampled{x})=
\prod_{j=1}^J \prod_{i=1}^{n_j} \left[p(\sampled{y}_{ij}\vert \sampled{\bx}_{ij}, \bth)\right]^{\sampled{w}_{ij}}$
with $\sampled{w}_{ij}\propto 1/\sampled{\pi}_{ij}$ standardized so that
$\sum_{ij} \sampled{w}_{ij}=n$.
The PSU indices, together with $p_{pseudo}$,
are used to
estimate the standard error of
$\tilde{\bth}_{freq}$ via resampling methods
(e.g., balanced repeated replication, jackknife, bootstrap)
or Taylor series linearization.
The R function svyglm in the R package
survey <cit.>
uses the latter (by default) to fit
common generalized regression models such as
linear, Poisson, and logistic regression.
For multiple linear regression with $\bth=(\bbe,\sigma_y^2)$,
inference, in particular the construction of confidence regions,
for the $(p+1)$-dimensional vector of regression coefficients $\bbe$
(that includes an intercept)
is based on the asymptotic result,
$\tilde{\Sigma}^{1/2} (\tilde{\bbe}_{freq}-\bbe)\sim (p+1)\hbox{-variate Student-t}$ with
degrees of freedom equal to
$df=\# PSUs-\#Strata$, that represents the design-based degrees of freedom;
$\tilde{\Sigma}^{1/2}$ is a lower triangular scale matrix such that $\tilde{\Sigma}^{1/2}\left(\tilde{\Sigma}^{1/2}\right)^t=\tilde{\Sigma}$, with
$\tilde{\Sigma}$ the estimate of the variance-covariance matrix of $\tilde{\bbe}_{freq}$.
No stratification is equivalent to
having one stratum, and the degrees of freedom reduce to
$df=J-1$ (recall, $J:=\#PSUs$).
This frequentist approach for uncertainty quantification is
similar to the post processing correction of <cit.> in that the analysis model for the population does not employ a PSU-indexed random effects term; rather, the resampling of clusters captures the dependence within clusters. Both methods perform nearly identically, in practice, so that we focus on comparing our Fully Bayes approach to this frequentist resampling method in the simulation study that follows.
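In code, the resulting interval for a single coefficient reduces to a Student-t interval with $df=J-1$; a minimal sketch of our own (the point and standard-error estimates are hypothetical inputs, e.g. from a pseudo-MLE fit, not the survey package's internals):

```python
from scipy import stats

def design_based_ci(beta_hat, se_hat, J, level=0.95):
    """Design-based CI for one regression coefficient under a
    single-stratum, J-PSU design: df = J - 1. Inputs are hypothetical
    point and standard-error estimates."""
    t = stats.t.ppf(0.5 + level / 2.0, df=J - 1)
    return beta_hat - t * se_hat, beta_hat + t * se_hat
```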
§ SIMULATION
We perform a Monte Carlo simulation study to compare the performance of our fully Bayes method of (<ref>) that employs PSU-indexed random effects in both the models for the response and inclusion probabilities to the pseudoposterior and frequentist methods, presented in Sections <ref> and <ref>, respectively. In each Monte Carlo iteration, we generate a population of $J_{pop}$ clusters and $N_{j}$ individuals per cluster.
The response variable is generated proportionally to size-based group and marginal inclusion probabilities to induce informativeness (dependence between the response variable and inclusion probabilities). We next take a sample of groups and, subsequently, individuals within group. A clustered simple random sample (cSRS) is also generated from the same population. The cSRS is included to serve as a gold standard for point estimation and uncertainty quantification (under the population model) and is compared to our model alternatives designed for estimation on the informative sample taken from the same population. For each population and sample we utilize the Fully Bayes method and associated comparative methods. We assess the bias, MSE and coverage properties under each model formulation.
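The data-generation and informative-sampling steps enumerated in the next subsection can be previewed in a compact sketch. This is our own illustration of the Scenario 1 settings described below; all function and variable names are ours, and the PPS draws are only approximated by np.random's weighted sampling without replacement.

```python
import numpy as np

rng = np.random.default_rng(1)
J_pop, N_j, J, n_j = 1000, 20, 30, 5
a_pi, b_pi = 2.0, 2.0

# Steps 1-3: unit-level gammas; PSU probabilities as normalized within-PSU sums.
pi_cond = rng.gamma(a_pi, 1.0 / b_pi, size=(J_pop, N_j))  # pi_{i|j}
pi_1 = pi_cond.sum(axis=1)
pi_1 /= pi_1.sum()                                        # Dirichlet weights

# Step 4: PSU random effects and a uniform predictor.
eta_y = rng.normal(0.0, 0.1, size=J_pop)
x = rng.uniform(size=(J_pop, N_j))

# Step 5 (Scenario 1, informative PSU-RE): beta_pi1 = J_pop, beta_pi2 = 1, beta_eta = 0.
y = (0.0 + 1.0 * x + J_pop * pi_1[:, None] + 1.0 * pi_cond
     + 0.0 * eta_y[:, None] + rng.normal(0.0, 0.1, size=(J_pop, N_j)))

# Step 7: informative two-stage sample (weighted draws without replacement
# as an approximation to exact PPS sampling).
psus = rng.choice(J_pop, size=J, replace=False, p=pi_1)
sample = []
for j in psus:
    p = pi_cond[j] / pi_cond[j].sum()
    units = rng.choice(N_j, size=n_j, replace=False, p=p)
    sample += [(y[j, i], x[j, i], pi_1[j] * pi_cond[j, i], j) for i in units]
```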
§.§ Monte Carlo Simulation Scheme
Steps 1–5 describe how the synthetic population dataset is generated, steps 6–7 how the samples are drawn, and steps 8–10 how they are analyzed. We use the superindex `DG' to refer to the data generating (population) model as opposed to the analysis model. In the sequel, gamma$(a,b)$ denotes the gamma distribution with shape and rate parameters $a$ and $b$ (i.e., mean $a/b$).
* Generate $\pi_{i\mid j}\iid \hbox{gamma}(a_\pi=2,b_\pi=2)$ for $i=1,\dots, (N_j=20)$ individuals nested in PSU, $j=1,\dots,(J_{pop}=10^3)$ total PSUs. The total population size is $J_{pop}\times N_j = 20,000$.
* Define PSU $j$ inclusion probability $\pi_{1j}^{tem}:=\sum_{i=1}^{N_j}\pi_{i\mid j}$ (therefore $\pi_{1j}^{tem}\iid$ $\hbox{gamma}(N_j a_\pi,b_\pi)$).
* Standardize $\pi_{1j}:=\pi_{1j}^{tem}/\sum_{j^\prime=1}^{J_{pop}} \pi_{1j^\prime}^{tem}$, so
$$(\pi_{1,1},\pi_{1,2},\dots,\pi_{1,J_{pop}})\sim\hbox{Dirichlet}\left( a_\pi \times(N_1,N_2,\dots,N_{J_{pop}})\right).$$
(Thus, $b_\pi$ does not play a role in the distribution of $\pi_{1j}$.)
* Generate PSU-specific random effects $\REyDG_j\iid \hbox{normal}(0,\sigma_{\REyDG}^2=0.1^2)$
and the predictor
$x_{ij}\iid \hbox{Uniform}(0,1)$.
* Generate the response.
We consider three simulation scenarios that use different settings for the coefficients in the following generating expression,
$$y_{ij}=\beta_0^{DG}+\beta_1^{DG} x_{ij}+
\beta_{\pi,1} {\pi}_{1j}+
\beta_{\pi,2} {\pi_{i\mid j}}+
\beta_{\REyDG} \REyDG_j+
\epsilon_{ij}^{DG},$$
with $\epsilon^{DG}_{ij}\iid \hbox{normal}(0,(\sigma_y^{DG})^2)$. The three scenarios each set the last three regression coefficients as follows:
* Scenario 1 (Informative PSU-RE): $\beta_{\pi,1}=J_{pop}$, $\beta_{\pi,2}=1$, $\beta_{\REyDG}=0$.
* Scenario 2 (Non-informative PSU-RE): $\beta_{\pi,1}=0$, $\beta_{\pi,2}=1$, $\beta_{\REyDG}=1$.
* Scenario 3 (No stage is informative): $\beta_{\pi,1}=0$, $\beta_{\pi,2}=0$, $\beta_{\REyDG}=1$.
In all scenarios, $(\sigma_y^{DG})^{2}=0.1^2$, $\beta_0^{DG}=0$, and $\beta_1^{DG}=1$. Note that, in Scenario 1,
$\beta_{\pi,1}=1/E(\pi_{1j})=J_{pop}$, so that $\beta_{\pi,1} E(\pi_{1j})=1$. Informative random effects are instantiated in Scenario 1 by generating $y_{ij}$ from $\pi_{1j}$, where $\pi_{1j}$, the inclusion probability for PSU $j$, is equivalent to a PSU-indexed random effect.
We set the regression coefficient for $\pi_{1j}$ equal to $0$ and that for $\REyDG_j$ equal to $1$ in Scenario 2, where we generate the random effects as non-informative (uncorrelated with the selection probabilities).
* Take a clustered simple random sample (cSRS) from the population:
From each population dataset we draw two samples,
one informative
and the other under two-stage cluster SRS. Both samples contain $J=30$ PSUs and
$n_j=5$ individuals per PSU, which produces a total sample size of
to the results under comparative methods designed to analyze informative samples. To implement the clustered random sample we,
* Draw an SRS (without replacement) of size $J$ of PSUs indices from $\{1,\dots,J_{pop}\}$.
* Within each drawn PSU $j$, obtain an SRS (without replacement) of size $n_j$.
* Relabel PSU indices to run from $1$ to $J$ and individual indices to
run from $1$ to $n_j$.
* The cSRS consists of $\{(y_{ij},x_{ij},j)\}_{i=1,\dots,n_j;j=1,\dots,J}$
* Take an informative sample:
* Draw, without replacement, $J$ PSU indices $j_1,\dots,j_J\in\{1,\dots,J_{pop}\}$ with $Pr(j\in\hbox{sample})=\pi_{1j}$.
* For each $j\in\{j_1,\dots,j_J\}$ drawn, sample, without replacement,
$n_j$ individual indices
$i\in\{1,\dots, N_{j}\}$ with probability $Pr(i\in \hbox{ sample from PSU }j)=\pi_{i\mid j}/\sum_{i^\prime=1}^{N_j} \pi_{i^\prime\mid j}$.
Define $\pi_{ij}=\pi_{1j}\pi_{i\mid j}$ and
relabel the PSU and individual indices so they run from $1$ to $J$ and from $1$ to $n_j$,
respectively, and add superindex “$(s)$” to denote sampled quantities.
* The informative sample consists of
$\{(\is{y}_{ij},\sampled{\bx}_{ij},\is{\pi}_{ij},j)\}_{i=1,\dots,n_j;\, j=1,\dots,J}$.
* Analyze the realized informative sample by estimating parameters under the following modeling approaches:
* FULL.both: Denotes the approach enumerated in Subsection <ref> that employs PSU-REs in models for both response and inclusion probability; the model for the response includes PSU-indexed random effects with,
\begin{equation}\label{eq:AnalysisModel}
y_{ij}=\beta_{0}^{Ana}+\beta_{1}^{Ana} x_{ij}+\REyAna_j+\epsilon^{Ana}_{ij}
\quad\hbox{with }\epsilon^{Ana}_{ij}\iid \hbox{normal}(0,(\sigma_y^{Ana})^2)
\end{equation}
where the superscript, “$Ana$" denotes the model for analysis or estimation as contrasted with the $DG$ model used for population data generation.
We are interested in estimating $\beta_0^{Ana}$ and the standard deviation of the
PSU-REs, $\sigma_{\REyAna}$ where $\REyAna_j\iid \hbox{normal}(0,(\sigma_{\REyAna})^2)$.
(Note that the estimation of $\beta_1^{Ana}$ is unbiased regardless of the sampling scheme.)
We subsequently use the conditional estimation model for the marginal inclusion probabilities of (<ref>) to include a PSU-indexed random effects term,
\begin{equation*}
\log \pi_{ij}\mid y_{ij},\REpipop_j\sim
\hbox{normal}(\kappa_0+\kappa_y y_{ij}+\kappa_x x_{ij}+\REpipop_j,\sigma_\pi^2)
\end{equation*}
These two formulations describe the joint population model that employs random effects in both the marginal model for the response and the conditional model for the inclusion probabilities. We leverage the Bayes rule approach of Section <ref> under the linear regression population model to produce (<ref>), which adjusts the population model to condition on the observed sample. We use this equation to estimate the fully Bayes population model on the observed sample.
It bears noting that FULL.both assumes that the data analyst does not have access to PSU-indexed sampling weights ($\propto 1/\pi_{1j}$). Yet, we show in the simulation study results that FULL.both is able to adjust for informative sampling of PSUs for estimation of population model parameters (i.e., the intercept and PSU random effects variance). This relatively good result owes to the inclusion of PSU-indexed random effects in the conditional model for the inclusion probabilities, $\pi_{ij}$, because it captures the within-PSU dependence among them.
Note that the FULL.both analysis assumes that $\log \pi_{ij}\mid y_{ij},\dots$ is normally distributed, but this does not hold under any simulation scenario, which allows us to assess the robustness of FULL.both to model misspecification.
* FULL.y: This alternative is a variation of FULL.both that uses the same population estimation model for the response stated in (<ref>). In this option, however, PSU-REs are excluded from the conditional model for the marginal inclusion probabilities; i.e.,
$\log \pi_{ij}\mid y_{ij}\sim
\hbox{normal}(\kappa_0+\kappa_y y_{ij}+\kappa_x x_{ij},\sigma_\pi^2)$
does not include PSU-REs.
* Pseudo: Denotes the pseudolikelihood approach that exponentiates the likelihood contributions by the marginal sampling weights,
as described in Subsection <ref>.
* Freq: Denotes the frequentist, design-based inference under a simple linear regression model as described in Subsection <ref>.
Note that this analysis model does not include PSU-REs because we employ a step that resamples the PSUs in order to estimate confidence intervals. To fit the model, we use the R function svyglm in library survey <cit.> (a minimal usage sketch appears after this list).
* Pop: Ignore the informative sampling and fit the model in (<ref>)
(as if the sample were a cSRS).
The inclusion probabilities do not play a role in the inference, though the model for the response includes PSU-REs. This is equivalent to Pseudo with
sampling weights set equal to 1.
* cSRS: Analyze the clustered simple random sample; that is, fit the model in (<ref>) to the sample taken under the cSRS design (generated in step 6). This is the same as Pop but applied to the cSRS sample.
* Save parameter estimates to compute bias, MSE, coverage probability of central 95% credible intervals and their expected length. The parameters of inferential interest are the point estimate of $\beta_0^{Ana,TRUE}$:
$\tilde{\beta}_0^{Ana}:=E(\beta_0^{Ana}\mid data)$
(or $\tilde{\beta}_{0,freq}^{Ana}$ for Freq), and its central 95% credible (or confidence, for Freq) interval lower and upper limits.
We also produce point and interval estimates for $\sigma_{\REyAna}^{TRUE}$ for those methods that include PSU-REs in the marginal response model
(i.e., excluding Freq). The computation of
$\beta_0^{Ana,TRUE}$ and $\sigma_{\REyAna}^{TRUE}$ is discussed in Subsection <ref>.
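As referenced in the Freq item above, the following is a minimal R sketch of the design-based fit; the data frame name samp and its columns (y, x, psu, w) are hypothetical.

library(survey)
# samp: informative sample with columns y, x, psu (relabeled PSU index)
# and w = 1/pi_ij (the marginal sampling weight)
des <- svydesign(ids = ~psu, weights = ~w, data = samp)
fit <- svyglm(y ~ x, design = des)   # design-based SLR; no PSU-REs in the model
summary(fit)
confint(fit)                         # 95% confidence intervals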
Once we have run steps 1-10 $1000$ times,
we use the quantities stored in step 10 to estimate the bias and MSE of the point estimate
of $\beta_0^{Ana,TRUE}$ (defined below in Subsection <ref>)
of each method as the average of $\tilde{\beta}^{Ana}_0-\beta^{Ana,TRUE}_0$ and the average of
$(\tilde{\beta}^{Ana}_0-\beta^{Ana,TRUE}_0)^2$, respectively.
The coverage and expected length of the 95% credible (or confidence) intervals are estimated as the
proportion of times that the intervals contain $\beta^{Ana,TRUE}_0$ and their
average length. We do the same for $\sigma_{\REyAna}^{TRUE}$, also defined in Subsection <ref>.
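To make the generation and sampling steps above concrete, here is a minimal R sketch under the Informative PSU-RE scenario. The distribution used for $\pi_{i\mid j}$ (a mean-one gamma), the values of $J_{pop}$, $N_j$ and $a_\pi$, and the object names are illustrative assumptions; also, sample() with a prob argument only approximates the without-replacement probability-proportional draw.

set.seed(1)
J_pop <- 100; N_j <- rep(50, J_pop)         # assumed population sizes
a_pi  <- 1                                  # assumed Dirichlet concentration
g     <- rgamma(J_pop, shape = a_pi * N_j)  # Dirichlet via normalized gammas
pi_1  <- g / sum(g)                         # PSU inclusion probabilities
u     <- rnorm(J_pop, 0, 0.1)               # PSU random effects
pop <- do.call(rbind, lapply(seq_len(J_pop), function(j) {
  x       <- runif(N_j[j])
  pi_cond <- rgamma(N_j[j], shape = 2, rate = 2)  # assumed, mean 1
  # Informative PSU-RE scenario: beta_pi1 = J_pop, beta_pi2 = 1, beta_u = 0
  y <- 0 + 1 * x + J_pop * pi_1[j] + 1 * pi_cond + 0 * u[j] +
       rnorm(N_j[j], 0, 0.1)
  data.frame(psu = j, x = x, y = y, picond = pi_cond)
}))
J <- 30; n_j <- 5
psu_s <- sample(J_pop, J, prob = pi_1)      # informative PSU draw
samp  <- do.call(rbind, lapply(psu_s, function(j) {
  rows <- which(pop$psu == j)
  pop[sample(rows, n_j, prob = pop$picond[rows]), ]  # informative within-PSU draw
}))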
§.§ True Model Parameters under Analysis Model
In this section we compute the true values of the intercept and random effects variance parameters for the analysis ($Ana$) models that are obtained from associating parameters of the analysis model to the data generating ($DG$) model. Having true values for the intercept and random effect variance under our analysis models allows our assessment of bias, MSE and coverage. We use the superindex “$TRUE$" to refer to the true parameter values for the $Ana$ model implied by the simulation true parameter values in the $DG$ model. The true value of the intercept parameter under the analysis model is achieved by integration,
$$\beta_0^{Ana,TRUE}=E(y_{ij}\mid x_{ij}=0)=
\beta_0^{DG}+\beta_{\pi,1} E(\pi_{1j})+\beta_{\pi,2} E(\pi_{i\mid j}),$$
yielding $\beta_0^{Ana,TRUE}=2,1$ and $0$ under the Informative PSU-RE, Non-informative PSU-RE and No-stage-informative simulation scenarios, respectively. The true value for the population random effect is,
$${\REyAna_j}^{TRUE}=\beta_{\pi,1} \left[\pi_{1j}-E(\pi_{1j})\right]+
\beta_{\REyDG} \REyDG_j,$$
and the true random errors under the analysis model are
$\epsilon_{ij}^{Ana,TRUE}=\beta_{\pi,2} \pi_{i\mid j}+\epsilon_{ij}^{DG}$.
Since the $\pi_{i\mid j}$s are not normally distributed, the $\epsilon_{ij}^{Ana,TRUE}$s are also not normally distributed.
Since the normality assumption of the errors of the simple regression model is violated, the variance of the random effects in the marginal population model for the response,
$$\Var({\REyAna_j}^{TRUE})=\beta_{\pi,1}^2 \Var(\pi_{1j})+
\beta_{\REyDG}^2 \underbrace{\Var(\REyDG_j)}_{\sigma_{\REyDG}^2},$$
is different from $(\sigma_{\REyAna}^{TRUE})^2$.
Nevertheless, we may compute $\sigma_{\REyAna}^{TRUE}$ by fitting the
linear mixed effect population model in (<ref>), which corresponds to the analysis model for the response, directly to the population dataset via the lmer R function:
lmer($y\sim x + (1\mid$ PSU index), data = population). In practice, the PSU inclusion probabilities, $(\pi_{1j})$, are not available to the data analyst
for either sampled or non-sampled individuals from the population.
Under the Informative PSU-RE scenario, $\sigma_{\REyAna}^{TRUE}\approx 0.27$, and
under the other two scenarios $\sigma_{\REyAna}^{TRUE}\approx 0.1$.
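As a sketch of this computation, assuming the population data frame pop from the earlier sketch and using lme4 for the lmer call quoted above:

library(lme4)
fit_pop <- lmer(y ~ x + (1 | psu), data = pop)
fixef(fit_pop)["(Intercept)"]           # beta_0^{Ana,TRUE}, approx. 2 here
attr(VarCorr(fit_pop)$psu, "stddev")    # sigma_RE^{TRUE}, approx. 0.27 (or 0.1)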
§.§ Simulation Results
Tables <ref>, <ref> and <ref>
show the simulation results under all scenarios.
As expected, in all scenarios the cSRS credible intervals have coverage close
to the nominal level (0.95) and the lowest MSE.
In the two informative scenarios,
(i) Pop performs poorly, showing
the consequences of ignoring the informative sampling scheme;
(ii) all methods designed to analyze informative samples yield point estimators of similar quality (similar MSE); and
(iii) FULL.both and FULL.y credible intervals maintain nominal coverage while
those of Pseudo and Freq do not.
Under the non-informative scenario,
all methods to analyze informative samples
yield results similar to Pop (now the correctly specified model).
Both Pseudo and Freq under-estimate uncertainty such that they both under-cover in the informative sampling case. Interestingly, only Freq pays the price of the noise introduced by the non-informative sampling weights,
producing considerably wider confidence intervals for $\beta_0^{Ana,TRUE}$ than all other methods.
Overall, Tables <ref>-<ref>
show that FULL.both and FULL.y are the best methods
to analyze informative samples, particularly in terms of uncertainty quantification.
But, so far, the results have not shown an advantage of FULL.both over FULL.y.
To explore this, under the Informative PSU-RE scenario, we increase the level of informativeness of the PSUs by increasing the value of
$\beta_{\pi,1}$ (see step 5 in Subsection <ref>) from
$J_{pop}$ to $2J_{pop}$ and $3J_{pop}$.
As shown in Table <ref>,
coverage of FULL.y credible intervals deteriorates as informativeness increases
while FULL.both, in contrast to all other methods,
maintains coverage
similar to cSRS (at nominal level).
The strength of FULL.both over all other
considered approaches is that it accounts for the association among the inclusion probabilities within the same PSU.
Table <ref> shows that
FULL.both is the only approach that performs well under the Informative PSU-RE scenario
when the level of informativeness of $\pi_{ij}$ (or correlation between $y_{ij}$ and $\pi_{1j}$) increases.
FULL.both is
the only method whose inference quality is not affected when
$\pi_{1j}\not\perp y_{ij}\mid \bx_{ij}\ \forall i,j$.
Since the population simulation true
distribution of $\pi_{ij}$, given in
point 7 (3.) in
Subsection <ref>,
is not lognormal, the simulation shows that FULL.both is robust to
misspecification of the distribution of
$\pi_{ij}\mid y_{ij},\cdots$.
FULL.both FULL.y Pseudo Freq Pop cSRS
Results for $\beta_0^{Ana,TRUE}$:
Bias 0.035 0.046 0.067 0.015 0.518 0.005
MSE 0.028 0.028 0.028 0.030 0.294 0.016
95% CI Coverage 0.949 0.948 0.902 0.905 0.088 0.957
95% CI Length 0.670 0.668 0.538 0.617 0.620 0.520
Results for $\sigma_{\REyAna}^{TRUE}$:
Bias 0.012 0.012 0.043 NA 0.014 -0.012
MSE 0.010 0.010 0.010 NA 0.010 0.008
95% CI Coverage 0.964 0.967 0.902 NA 0.961 0.951
95% CI Length 0.424 0.422 0.360 NA 0.416 0.357
Simulation scenario: Informative PSU-RE.
cSRS analyzes the cSRS sample while all other approaches analyze the informative sample.
CI denotes central credible interval, except for Freq where it denotes confidence interval.
NA stands for not applicable; Freq does not include PSU-REs.
FULL.both FULL.y Pseudo Freq Pop cSRS
Results for $\beta_0^{Ana,TRUE}$:
Bias 0.008 0.009 0.055 0.011 0.494 0.001
MSE 0.020 0.020 0.022 0.025 0.263 0.014
95% CI Coverage 0.958 0.962 0.908 0.926 0.064 0.962
95% CI Length 0.623 0.624 0.496 0.565 0.574 0.479
Results for $\sigma_{\REyAna}^{TRUE}$:
Bias 0.063 0.065 0.107 NA 0.065 0.049
MSE 0.008 0.008 0.017 NA 0.008 0.006
95% CI Coverage 0.967 0.971 0.752 NA 0.966 0.956
95% CI Length 0.332 0.331 0.312 NA 0.329 0.285
Simulation scenario: Non-informative PSU-REs.
Same as Table <ref> but under the Non-informative PSU-RE scenario.
FULL.both FULL.y Pseudo Freq Pop cSRS
Results for $\beta_0^{Ana,TRUE}$:
Bias 0.000 0.000 -0.000 0.000 -0.000 0.000
MSE 0.001 0.001 0.001 0.001 0.001 0.001
95% CI Coverage 0.957 0.964 0.944 0.947 0.956 0.951
95% CI Length 0.103 0.103 0.104 0.134 0.102 0.103
Results for $\sigma_{\REyAna}^{TRUE}$:
Bias 0.002 0.002 0.006 NA 0.002 0.003
MSE 0.000 0.000 0.000 NA 0.000 0.000
95% CI Coverage 0.935 0.936 0.933 NA 0.938 0.955
95% CI Length 0.067 0.067 0.068 NA 0.067 0.068
Simulation scenario: No stage informative.
Same as Table <ref> but under the No-stage-informative scenario, where Pop is correctly specified.
$\beta_{\pi,1}$ FULL.both FULL.y Pseudo Freq cSRS
$J_{pop}$ .949,.964 .948,.967 .902,.902 .905,NA .957,.951
$2J_{pop}$ .949,.929 .914,.927 .852,.929 .908,NA .961,.927
$3J_{pop}$ .949,.946 .85,.954 .808,.914 .910,NA .957,.949
Coverage of central 95% credible (confidence for Freq) intervals for $\beta_0^{Ana,TRUE}$ and $\sigma_{\REyAna}^{TRUE}$ (reported as coverage pairs)
under the Informative PSU-RE scenario while
increasing the level of informativeness of the PSUs (by increasing $\beta_{\pi,1}$).
NA stands for not applicable; Freq does not include PSU-REs.
§ APPLICATION
The National Health and Nutrition Examination Survey (NHANES) is designed
to assess the health and nutritional status of the non-institutionalized civilian population living in one of the 50 U.S. states and Washington D.C.
Although nationally representative, NHANES is designed to oversample specific subpopulations
(e.g., persons 60 and older, African Americans, Asians, and Hispanics)
and follows a complex sampling design <cit.>.
The NHANES sampling design is constructed as multi-stage with stages that include sampling strata and nested primary sampling units (PSUs) that further nest respondents. A PSU is a cluster or grouping of spatially contiguous counties, while a stratum is a region nesting multiple PSUs.
NHANES publishes respondent-level marginal sampling weights based on the respondents' marginal inclusion probabilities after accounting for clustering.
The sampling weights measure the number of people in the population represented by that sampled individual,
reflecting unequal probability of selection, nonresponse adjustment, and adjustment to independent population controls.
The survey consists of both interviews and physical examinations.
The NHANES interview includes demographic, socioeconomic, dietary, and health-related questions. The examination component consists of medical, dental, and physiological measurements, as well as laboratory tests.
Data obtained from $J=30$ PSUs, corresponding to
15 strata with two PSUs per stratum, are released in two-year cycles.
Our examples consider the examination and dietary data.
The analyses consider PSU information and sampling weights as provided by NHANES but do not incorporate strata information.
Priors are assigned in (<ref>), and
posterior inference under non-frequentist methods is based on a posterior sample
of the model parameters of size 10,000.
The MCMC sampler was run for 10,000 iterations, after a warm-up (burn-in) period of another 10,000 iterations, in Stan.
§.§ Proportion of Body Fat and BMI
In <cit.>, hereafter referred to as H2012,
the authors model the relationship between the
percentage of body fat (PBF) and body mass index (BMI in $kg/m^2$)
using the simple linear regression (SLR) model $\hbox{PBF}=\beta_0+\beta_1 (1/\hbox{BMI})$.
H2012 combine data from three NHANES biannual cycles: 1999-2000, 2001-2002 and 2003-2004. They fit an SLR model for each combination of sex (men and women), 3 age groups (18–29, 30–49, and 50–84 years of age), and 3 race-ethnicity groups (non-Hispanic Whites, non-Hispanic Blacks, and Mexican Americans).
Table 3 of H2012 reports the estimated values and standard errors of $\beta_0$ and $\beta_1$. Their Table 4 reports the
predicted PBF, $\hat{\beta}_0+\hat{\beta}_1/\hbox{BMI}$, for individuals with BMI levels of $18.5, 25, 30, 35$ and $40$, which represent the BMI cutoffs separating underweight, normal, overweight, and obesity classes I, II, and III, respectively.
The PBF variable has a high rate of missing values.
NHANES releases two datasets (per cycle) with five sets of imputed PBF values;
the first dataset
includes participants with observed PBF or with imputed PBF values with low variability, while the second dataset includes participants with high variability in their imputed values.
The H2012 analysis accounts for the sampling design and the multiple imputation of missing values.
In this section we mimic their analysis but with the 2005-2006 NHANES dataset.
We use a multiple linear regression model where we
control for the stratification variables used in H2012.
Since PBF in the $2005-2006$ cycle is reported only for $18-69$ year old participants,
we categorize age into $3$ groups: $18-29, 30-49,$ and $50-69$ years of age,
and exclude participants not in these age ranges.
We also include two more race/ethnicity groups: “Other Hispanic” and
“Other Race - Including Multi-Racial”.
As in H2012, we exclude participants with missing BMI,
women who tested positive on a pregnancy test or who claimed to be pregnant
(for whom, by design, PBF is not measured), or
with PBF imputed values with high variability. Our final sample size is
The analysis model for the non-frequentist methods is a
mixed effect linear regression with PBF as the response variable and the following predictors:
(1/BMI), gender, age group and race/ethnicity,
with male, the 18-29 age group and non-Hispanic White as the reference groups, along with PSU-REs.
The frequentist analysis model is the same (now fixed effects) model but without PSU-REs.
We recall that $\bxy$ denotes the predictors in the marginal model for $y$ in (<ref>), and construct,
\begin{equation}\label{eq:bxyinfirstapplication}
\begin{array}{rl}
\bxy^t=\big(&1,1/\hbox{BMI},1(gender=\hbox{Female}),
1(Age\in [30,49]),1(Age\in [50,69]),\\
&1(Race/Eth=\hbox{MX-AME}),1(Race/Eth=\hbox{Other-Hisp}),\\
&1(Race/Eth=\hbox{NonHisp black}),
1(Race/Eth=\hbox{Other or Multiracial})\big)
\end{array}
\end{equation}
with dimension $p+1=9$, where $1(A)$ denotes the indicator function of the individual being in the set $A$.
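As an illustration, the following R sketch builds this design matrix with model.matrix; the data frame d and its column names are hypothetical, with factor levels ordered so that male, the 18-29 age group and non-Hispanic White are the reference levels.

d$invBMI <- 1 / d$BMI                       # the 1/BMI predictor
X <- model.matrix(~ invBMI + gender + age_grp + raceeth, data = d)
dim(X)                                      # n x 9, matching p + 1 = 9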
We analyze the dataset with the first set of PBF imputed values under the comparator models used in the simulation study, plus Pseudo.w:
Full.both, Full.y, Pseudo.w, Pseudo, Freq and Pop.
We recall that we jointly model the response and sampling inclusion probabilities under Full.both and Full.y, with the response denoted as $y=\hbox{PBF}$, and we use the same predictors in the conditional model for the inclusion probabilities and the marginal model for the response such that
$\bxp=\bxy$ in (<ref>).
In the implementation of Pseudo.w we use sampling weights, $\sampled{w}_{\cdot j}$ of
(<ref>), that sum the individual sampling weights for the units nested in each PSU, in order to exponentiate the random effects prior distribution.
Our analyses consider PSU information and sampling weights as provided by NHANES but not strata information.
We add the comparator method Freq.strata that does consider strata information, which would be expected to produce more optimistic confidence intervals, fitted using the R package survey <cit.>.
We recall that the NHANES design employs 15 strata with 2 PSUs per stratum such that frequentist inferences will be based on the Student-$t$ distribution with
$df=\# PSU-\#strata-p$ degrees of freedom, equal to $df=21$ and $df=7$ under Freq and Freq.strata, respectively.
The left panel in Figure <ref>
compares violin plots and central 95% credible (or confidence) intervals for the expected PBF value for a person in the reference group with “normal" BMI, or $\hbox{BMI}=18.5$ (where $\hbox{BMI}<18.5$ is labeled as underweight); these represent the uncertainty intervals for $\beta_0+\beta_1/18.5$.
All point estimates are close to the value of 14.5% reported in Table 4 of H2012 for this group, with FULL.both and FULL.y, at $14.7\%$, being closest to H2012.
The right panel of Figure <ref> depicts the same point estimates and uncertainty intervals but now for a non-Hispanic White woman with $\hbox{BMI}=18.5$, computed as the uncertainty intervals for
$\beta_0+\beta_1/18.5+\beta_2$. Here, again, all CIs contain the PBF of 26.9% estimated in H2012 for this group.
In both panels, the frequentist CIs (i.e., Freq and Freq.strata) are much wider than those of the other methods, which indicates an inefficiency of Freq.strata given that its consideration of strata should produce smaller uncertainty intervals. By contrast, inference under FULL.both and FULL.y is similar to Pop, indicating the possibility of a non-informative design. This is confirmed by the central 95% CI for $\kappa_y$, $(-0.479,0.397)$, in FULL.both and FULL.y, which contains 0, indicating a non-informative sample; more formally, $y_{ij}\perp \pi_{ij}\mid \bx_{ij}$. The posterior mean estimates for the plug-in Pseudo.w and Pseudo express slightly more bias (relative to the $14.5\%$ of H2012) because of the use of noisy weights, which are not necessary since the NHANES design is non-informative for PBF. The fully Bayesian methods (FULL.both, FULL.y), by contrast, perform weight smoothing to mitigate the bias induced by noisy weights.
Figure <ref> displays violin plots
of the posterior distribution of the standard deviation of the PSU-REs, $\sigma_{\REypop}$ (in the marginal model for $y$),
for the non-frequentist methods (as we recall that the frequentist comparator methods do not include PSU-REs).
As before, the inferences under FULL.both and
FULL.y are similar to Pop due to the non-informativeness of the sampling design for PBF.
We now discuss inference under FULL.both.
The estimated correlation between the PBF of
individuals in the same cluster (after controlling for BMI and other predictors) is
$E[\sigma^2_{\REypop}/(\sigma^2_{\REypop}+\sigma_y^2) \mid data]\approx 0.01553$.
Table <ref> shows the point estimates of the inference under Full.both when using the first set of PBF imputed values.
The estimated correlation between the log inclusion probabilities within the same cluster is greater than zero ($\approx 10\%$), as expected by the way the inclusion probabilities are constructed
(i.e., $cor(\log \pi_{ij},\log \pi_{i^\prime j})=E[\sigma^2_{\REpipop}/(\sigma^2_{\REpipop}+\sigma_\pi^2)\mid data]\approx 0.0991$).
This fact, however, has little impact on the inference since the sample is non-informative.
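A minimal sketch of these intracluster correlation estimates, assuming vectors of posterior draws of the variance components (hypothetical names) extracted from the Stan fit:

# sig2_u, sig2_y: draws of the PSU-RE and error variances in the model for y;
# sig2_up, sig2_pi: the analogous draws in the model for pi | y
mean(sig2_u / (sig2_u + sig2_y))      # approx. 0.0155 (within-PSU corr. of PBF)
mean(sig2_up / (sig2_up + sig2_pi))   # approx. 0.0991 (within-PSU corr. of log pi)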
Interpreting the coefficients in the model for $y$ (top rows in Table <ref>), after controlling for BMI,
women have, on average, 12.3% higher PBF than men; PBF increases with age; and
Other or Multiracial and Mexican American people have the highest PBF, followed by White and Other Hispanic people, while non-Hispanic Black people have the lowest PBF.
Since we are using just one set of imputed values, we are underestimating the SEs in the discussion above.
We adjust our estimates and standard errors following <cit.>
<cit.>. In short,
assume we have $M$ completed datasets from a missing-data imputation algorithm, and
let $\tilde{\theta}_m$ and $var(\tilde{\theta}_m)$ be the point estimate of the generic parameter $\theta$ and
its variance using completed dataset $m$. The point estimate of
$\theta$ is $\bar{\theta}:=(1/M)\sum_{m=1}^M \tilde{\theta}_m$ with $var(\bar{\theta})=U+[(M+1)/M]B$, where
$U:=(1/M)\sum_{m=1}^M var(\tilde{\theta}_m)$ is the within-imputation variance and $B:=[1/(M-1)] \sum_{m=1}^M (\tilde{\theta}_m-\bar{\theta})^2$ the between-imputation variance.
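These combining rules amount to the short R function below (a sketch; the numeric inputs in the usage line are placeholders, not our estimates).

rubin <- function(est, v) {            # est, v: length-M estimates and variances
  M <- length(est)
  theta_bar <- mean(est)               # combined point estimate
  U <- mean(v)                         # within-imputation variance
  B <- sum((est - theta_bar)^2) / (M - 1)  # between-imputation variance
  c(estimate = theta_bar, SE = sqrt(U + (M + 1) / M * B))
}
rubin(est = c(0.520, 0.519, 0.521, 0.520, 0.519), v = rep(0.0035^2, 5))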
In our example $m=1,\dots,5$, corresponding to the five sets of imputed PBF values;
results are shown in Table <ref>. The point estimates under all models are similar.
The confidence intervals under the frequentist approaches tend to be wider than the intervals under all other approaches.
This phenomenon was observed under the non-informative simulation scenario
(see Table <ref>);
when the sample is non-informative, the frequentist confidence intervals are wider than the credible intervals under all other methods considered here.
When implementing our comparator frequentist approaches, <cit.>
<cit.> recommend
fitting the model with and without weights.
If the point and standard error estimates differ between the two,
they recommend that the analyst explain the difference based on the construction of the sampling weights, e.g., oversampling of certain minority or age groups.
In our example, inference under Freq and Pop generates similar point estimates, but Freq yields, in general, greater standard errors.
The analyst needs to decide, and justify, which model to use for inference under frequentist modeling;
though under NHANES guidelines <cit.> the weights should be used for all NHANES 2005-2006 analyses.
By contrast, the fully Bayesian approaches do not require the data analyst to make this choice about whether to use the weighted or unweighted estimates.
Inference for the model parameter $\kappa_y$, under
Full.both or Full.y,
informs the analyst whether the design is informative;
$\kappa_y=0$ implies a non-informative design, and the magnitude of $\kappa_y$ is a measure of informativeness (on the scale of $y$). Full.both and Full.y
correct inference when the design is informative, but also mitigate the bias induced by noisy weights when the sampling design is non-informative (through weight smoothing),
as in this particular application,
and produce results that are similar to the Pop method that ignores the sampling weights.
Full.both is the only method that,
as shown in Subsection <ref>,
also provides appropriate model uncertainty estimation
when the PSUs are informative (not the case in this application); i.e., when $y_{ij}\not\perp j\mid \bx_{ij}$ for some $i$ and $j$.
Parameter mean sd 2.5% 97.5%
Parameters for $y_{ij}\mid \cdots$
intercept 0.518 0.003 0.512 0.524
$1/\hbox{BMI}$ -6.862 0.071 -6.999 -6.721
gender: Female 0.123 0.001 0.121 0.125
Age: 30-49 0.005 0.001 0.002 0.007
Age: 50-69 0.022 0.001 0.019 0.025
Race/eth: MX-AME 0.004 0.002 0.001 0.007
Race/eth: Other-Hisp -0.001 0.003 -0.007 0.005
Race/eth: Non Hisp Black -0.018 0.002 -0.021 -0.015
Race/eth: Other-Multiracial 0.007 0.003 0.002 0.013
$\sigma_{\REypop}$ 0.004 0.001 0.003 0.006
$\sigma_y$ 0.035 0.000 0.034 0.036
Parameters for $\pi_{ij}\mid y_{ij},\cdots$
$\hbox{PBF}$ -0.042 0.223 -0.479 0.397
Intercept -0.429 0.128 -0.682 -0.177
$1/\hbox{BMI}$ 2.240 1.850 -1.375 5.812
gender: Female -0.025 0.031 -0.086 0.038
Age: 30-49 -0.549 0.020 -0.587 -0.510
Age: 50-69 -0.207 0.021 -0.249 -0.167
Race/eth: MX-AME 1.589 0.025 1.539 1.637
Race/eth: Other-Hisp 0.454 0.045 0.366 0.542
Race/eth: Non Hisp Black 1.277 0.023 1.231 1.323
Race/eth: Other-Multiracial 0.367 0.040 0.290 0.446
$\sigma_{\REpipop}$ 0.166 0.025 0.124 0.221
$\sigma_{\pi}$ 0.500 0.006 0.490 0.512
Inference under Full.both using the first set of NHANES imputed values of PBF.
Column headers: mean, sd, 2.5% and 97.5% denote the posterior expected value, standard deviation, and 0.025 and 0.975 quantiles.
Model: columns 2-4 for $y_{ij}\mid \cdots$; columns 5-7 for $\pi_{ij}\mid y_{ij},\cdots$
Parameter mean 2.5% 97.5% mean 2.5% 97.5%
$\hbox{PBF}$ $-\ \ $ $-\ \ $ $-\ \ $ -0.042 -0.479 0.397
intercept 0.518 0.512 0.524 -0.429 -0.682 -0.177
$1/\hbox{BMI}$ -6.862 -6.999 -6.721 2.240 -1.375 5.812
gender: Female 0.123 0.121 0.125 -0.025 -0.086 0.038
Age: 30-49 0.005 0.002 0.007 -0.549 -0.587 -0.510
Age: 50-69 0.022 0.019 0.025 -0.207 -0.249 -0.167
Race/eth: MX-AME 0.004 0.001 0.007 1.589 1.539 1.637
Race/eth: Other-Hisp -0.001 -0.007 0.005 0.454 0.366 0.542
Race/eth: Non Hisp Black -0.018 -0.021 -0.015 1.277 1.231 1.323
Race/eth: Other-Multiracial 0.007 0.002 0.013 0.367 0.290 0.446
PSU-RE SD 0.004 0.003 0.006 0.166 0.124 0.221
Error SD 0.035 0.034 0.036 0.500 0.490 0.512
Inference under Full.both using the first set of NHANES imputed values of PBF.
Column headers: mean, 2.5% and 97.5% denote the posterior expected value, and 0.025 and 0.975 quantiles.
PSU-RE SD (Error SD) represents the standard deviation of the PSU-specific random effect (of the error), i.e.,
$\sigma_{\REypop}$ (and $\sigma_y$) in the model for
$y_{ij}\mid \dots$ and $\sigma_{\REpipop}$ (and $\sigma_{\pi}$) in the model for $\pi_{ij}\mid y_{ij},\dots$.
Estimate Std. Error t value Pr($>|t|$)
(Intercept) 0.5174 0.0039 132.39 0.0000
InvBMI -6.8402 0.1029 -66.44 0.0000
gender_Fem 0.1219 0.0010 124.86 0.0000
X29.48 0.0040 0.0019 2.11 0.0474
X49.Inf 0.0209 0.0018 11.41 0.0000
MX.AME 0.0040 0.0023 1.74 0.0972
Other.Hisp 0.0011 0.0049 0.22 0.8267
NONHisp.Black -0.0153 0.0020 -7.65 0.0000
Other.Multiracial 0.0102 0.0034 3.02 0.0066
Freq fitted values using the first set of imputed values of PBF.
Parameter FULL.both FULL.y Pseudo.w Pseudo Freq FreqwStrata Pop
$\beta_0$ 0.52(0.0035) 0.52(0.0035) 0.519(0.0036) 0.519(0.0036) 0.519(0.004) 0.519(0.0036) 0.52(0.0035)
$\beta_1$ -6.888(0.074) -6.887(0.0745) -6.872(0.0806) -6.869(0.079) -6.882(0.1021) -6.882(0.0939) -6.886(0.074)
$\beta_0+\beta_1/18.5$ 0.147(0.0019) 0.147(0.0019) 0.148(0.0021) 0.148(0.002) 0.147(0.0024) 0.147(0.0024) 0.147(0.0019)
$\sigma_y$ 0.035(5e-04) 0.035(4e-04) 0.035(5e-04) 0.035(5e-04) 0.001(NA) 0.001(NA) 0.035(5e-04)
$\sigma_{\REypop}$ 0.004(9e-04) 0.004(9e-04) 0.005(0.001) 0.005(0.001) NA(NA) NA(NA) 0.004(9e-04)
Point estimate (SE) after adjusting for multiple imputation.
Left: Violin plots along with mean (dot) and central 95% credible or confidence interval (horizontal line) for
the expected PBF for subjects in the reference group
(non-Hispanic White man in age group 18-29) with $\hbox{BMI}=18.5$, $\beta_0+\beta_1/18.5$, under all considered methods. Right: the same but
for a non-Hispanic White woman in age group 18-29, $\beta_0+\beta_1/18.5+\beta_2$.
Individuals with $\hbox{BMI}<18.5$ are labeled as underweight.
Violin plots of posterior samples of the PSU-RE standard deviation, $\sigma_{\REypop}$,
along with central 95% credible interval (black vertical line) and mean (dot) for non frequentist methods.
§.§ Daily Kilocalories Consumption
In this application, we estimate the average kilocalories (kcal) consumed in each of the gender,
age and race/ethnicity groupings.
We use dietary data from the 2015-2016 NHANES cycle.
Each participant answers a 24-hour dietary recall interview on two days: Day 1 and Day 2.
The Day 1 recall interview takes place when the participant visits the Mobile Exam Center (MEC) unit where other NHANES measurements are taken. The Day 2 recall interview is collected by telephone and is scheduled for 3 to 10 days later (see <cit.> for more details).
Based on these interviews, NHANES provides datasets with estimates
of the kilocalories (and many nutrients) ingested by the participant in the 24 hours before the interview, along with their dietary sampling weight. In this application, we consider the Day 1 dataset and the sampling weights that accompany it.
There are 8,506 participants who completed the Day 1 dietary recall, of which this analysis
considers the $n=8,327$ with positive sampling weights or, equivalently, with recall status labeled “complete and reliable” by NHANES <cit.>.
The underlying analysis model for the non-frequentist methods
(FULL.both, FULL.y, Pseudo and Pop) is a
mixed effect linear regression with response
$y=\log(\hbox{kcal}+1)$, where kcal is the NHANES estimate of kilocalorie consumption based on the Day 1 recall interview; predictors: gender, age group and race/ethnicity; and PSU-REs.
The frequentist analysis model, Freq, is the same (now fixed effects) model but without PSU-REs.
Age is categorized into 5 groups: $[0,8],[9,17],[18,29],[30,49]$ and $[50,80]$ years old, while the race/ethnicity categories are
non-Hispanic White,
Mexican American,
other Hispanic,
non-Hispanic Black, and
other or multiracial.
Male, $[0,8]$ age group and non-Hispanic White are the reference groups.
We recall that $\bxy$ denotes predictors in the marginal model for $y$ in (<ref>), and construct,
\begin{equation}%\label{eq:bxyinfirstapplication}
\begin{array}{rl}
\bxy^t=\big(&1,1(gender=\hbox{Female}),\\
&1(Age\in [9,17]),1(Age\in [18,29]),
1(Age\in [30,49]),1(Age\in [50,80]),\\
&1(Race/Eth=\hbox{Mexican American}),1(Race/Eth=\hbox{other Hispanic}),\\
&1(Race/Eth=\hbox{non-Hispanic Black}),
1(Race/Eth=\hbox{other or multiracial})\big)
\end{array}
\end{equation}
with dimension $p+1=10$, where $1(A)$ denotes the indicator function of the individual in the set $A$.
In this application, we set $\bxp=\bxy$
in (<ref>).
For the non-frequentist methods,
priors are assigned in (<ref>), and
posterior inference
is based on a posterior sample
of the model parameters of size 10,000.
The MCMC sampler was run for 10,000 iterations, after a warm-up (burn-in) period of another 10,000 iterations, in Stan. Relatively fewer draws are required when using Stan's Hamiltonian Monte Carlo (HMC) than a Gibbs sampler because the Stan draws are less correlated.
Figure <ref> depicts violin plots of the estimated mean daily kcal consumption
for White males in age groups [0,8] (left) and [30,49] (right).
More specifically, the left panel depicts
violin plots
of the posterior distribution of $\exp(\beta_0)-1$ for the set of non-frequentist methods.
For the frequentist method (Freq), it depicts
the distribution of $\exp(\beta_0)-1$ with $\beta_0$ drawn from
$\hat{\beta}_0+t\times SE(\hat{\beta}_0)$, where $t\sim \hbox{Student-}t$ with
$J-1=30-1=29$ degrees of freedom.
The right panel depicts the violin plots of the
posterior distributions of
$\exp(\beta_0+\beta_4)-1$ for the non-frequentist methods. It also depicts
the violin plot of $\exp(\beta_0+\beta_4)-1$ for Freq, though with
$(\beta_0,\beta_4)$ drawn from the distribution
$(\hat{\beta}_0,\hat{\beta}_4)^t+ \hat{\Sigma}_{0,4}^{1/2} \mathbf{t}$, where the
random vector
$\mathbf{t}=(t_0,t_4)^t$ has entries $t_0,t_4 \iid \hbox{Student-}t$
with 29 degrees of freedom,
$\hat{\Sigma}_{0,4}^{1/2} \hat{\Sigma}_{0,4}^{1/2} =\hat{\Sigma}_{0,4}$,
and $\hat{\Sigma}_{0,4}$ the estimated variance-covariance matrix of $(\hat{\beta}_0,\hat{\beta}_4)^t$.
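A sketch of this Freq draw for the right panel, assuming bhat holds $(\hat{\beta}_0,\hat{\beta}_4)$ and Shat the estimated $2\times 2$ covariance matrix (hypothetical object names):

L  <- chol(Shat)                                 # upper-triangular square root of Shat
tt <- matrix(rt(2 * 10000, df = 29), ncol = 2)   # iid Student-t entries
draws <- sweep(tt %*% L, 2, bhat, "+")           # bhat + Sigma^{1/2} t
kcal  <- exp(draws[, 1] + draws[, 2]) - 1        # exp(beta0 + beta4) - 1
quantile(kcal, c(0.025, 0.975))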
Figure <ref> shows that, for these groups, the inferences under FULL.both and FULL.y are similar to one another but different from Pop.
The FULL.both central 95% credible interval
(and also the FULL.y one, not shown) for $\kappa_y$,
$(-0.110,-0.048)$, does not contain zero, indicating that the sampling design is informative. FULL.both
and FULL.y correct for this.
The point estimates under
Pseudo and Freq are close to one another, but differ from those of FULL.both and FULL.y, indicating that the weight smoothing provided by the fully Bayesian methods is more robust to noise present in the weights that may degrade point estimation.
Table <ref> displays inference for the model parameters under FULL.both.
Table <ref> confirms the expected pattern of kcal
consumption; it
increases with age when young, plateaus at middle age and decreases in the oldest age group. Table <ref> also shows that,
on average, White people consume more kcal than each of the other race/ethnicity groups.
In contrast, Freq.strata
(see Table <ref> in appendix Subsection <ref>)
concludes that the only group with statistically significantly
lower kcal consumption than
the White group is the non-Hispanic Black group.
Figure <ref> depicts the violin plots for the standard deviation of the PSU-RE, $\sigma_{\REypop}$, under the non-frequentist methods
(Freq does not model PSU-REs). Inference for this parameter under Pseudo differs from that under the two fully Bayesian methods.
Figure <ref> shows that the posterior distribution of individual PSU random effects in the marginal response model (PSU-RE) also differs. The figure focuses on two particular random effects, $\REypop_{15}$ and $\REypop_{27}$, that are coherent with the general
pattern we see over the random effects; in particular, the fully Bayes methods express less estimation uncertainty than does Pseudo, indicating a greater estimation efficiency by jointly modeling the response and inclusion probabilities.
Violin plots, under all methods, for average kcal consumption for people in the reference group,
White males 8 years old or younger (left), and White males in the age group [30,49] (right),
along with point estimate (dot) and central 95% credible, or confidence, interval (horizontal line within violin plot).
Violin plots of the estimate of the standard deviation of the PSU-RE, $\sigma_{\REypop}$,
under all non-frequentist methods.
Violin plots, under all non-frequentist methods,
for PSU-REs $\REypop_{15}$ and $\REypop_{27}$. PSUs 15 and 27 here correspond to
PSUs 1 in strata 8 and 14, respectively, in the 2015-2016 demographic NHANES dataset.
Parameter mean sd 2.5% 97.5%
Parameters for $y_{ij}\mid \cdots$
intercept 7.242 0.016 7.212 7.273
gender:Female 0.009 0.010 -0.011 0.030
age 9-17 0.295 0.017 0.261 0.329
age 18-29 0.394 0.019 0.358 0.431
age 30-49 0.401 0.016 0.369 0.433
age 50-80 0.269 0.015 0.239 0.299
MX.AME -0.031 0.015 -0.060 -0.002
Other.Hisp -0.049 0.017 -0.083 -0.016
NONHisp.Black -0.053 0.014 -0.082 -0.025
Other.Multiracial -0.054 0.017 -0.087 -0.022
$\sigma_{\REypop}$ 0.012 0.007 0.001 0.028
$\sigma_y$ 0.473 0.004 0.466 0.481
Parameters for $\pi_{ij}\mid y_{ij},\cdots$
log(kcal+1) -0.079 0.016 -0.110 -0.048
Intercept 0.213 0.117 -0.015 0.443
gender:Female 0.005 0.015 -0.024 0.034
age 9-17 -0.249 0.025 -0.299 -0.199
age 18-29 -0.769 0.028 -0.823 -0.715
age 30-49 -0.710 0.025 -0.759 -0.660
age 50-80 -0.350 0.022 -0.394 -0.307
MX.AME 1.103 0.022 1.061 1.146
Other.Hisp 1.215 0.024 1.167 1.262
NONHisp.Black 1.128 0.021 1.087 1.168
Other.Multiracial 0.991 0.023 0.945 1.037
$\sigma_{\REpipop}$ 0.026 0.013 0.003 0.052
$\sigma_\pi$ 0.678 0.005 0.668 0.689
Parameter estimates for the regression model with response
$\log(kcal+1)$ and predictors
gender, race/ethnicity and age group using FULL.both.
§ DISCUSSION
We have extended our work in <cit.> to include PSU information in a model-based, fully Bayesian analysis of informative samples.
The extension consists of replacing the fixed effect model by a mixed effect model that includes
PSU/cluster-indexed random effects in the marginal model for $y$ and the conditional model for $\pi\mid y$ to capture dependence induced by the clustering structure.
We have shown via simulation that our fully Bayesian approach yields correct uncertainty
quantification, or equivalently CIs with coverage close to their nominal level, including for the random effects variances. Competing methods fail to do so in at least one simulation scenario. In particular, FULL.both is the only appropriate method, of all considered here,
when the sample design is informative for the selection of PSUs. The results in the non-informative simulation scenario revealed that the method is also robust to noise in the weights.
Our fully Bayesian methods proposed here are mixed effect linear models that not only take into account the possible association
of individuals within the same cluster but also, in contrast to the design-based frequentist methods, quantify this association; that is, the within PSU correlation can be estimated. We demonstrated our method with an NHANES dietary dataset whose sampling design includes stratification.
The next natural step of the method is to include strata information into the analysis.
This application only analyzed data from Day 1 dietary questionnaire.
To analyze Day 1 and Day 2 data with one model we need to adapt our approach to repeated measures. This is another current line of research.
To implement our Bayesian method, we derived an exact likelihood for the observed sample.
In principle, this likelihood can also be used for maximum likelihood estimation, opening the door to model-based frequentist inference.
Our approach requires the modeler to specify a distribution for $\pi_i\mid y_i,\cdots$.
Estimation requires the computation of an expected value, the denominator in (<ref>). We assume a lognormal conditional likelihood for the marginal inclusion probability, given the response, with a linear relationship between the location parameter and the response, both of which facilitate the use of Theorem <ref> to obtain a closed form for this expected value. Our simulation study showed that the Bayesian method is robust against misspecification of these assumptions. Future work is needed to ease the conditions in Theorem <ref>.
To sum up, we have presented the first model-based Bayesian estimation approach that accounts for both informative sampling of the individuals within the same PSU
and informative selection of the PSUs themselves, producing correct uncertainty quantification.
§.§ Application Details for PBF and BMI analysis
§.§.§ Data used
Three publicly available datasets were downloaded from the NHANES website.
* Demographic Variables & Sample Weights: DEMO_D.XPT (<cit.>, <https://wwwn.cdc.gov/nchs/nhanes/search/datapage.aspx?Component=Demographics CycleBeginYear=2005>).
Used columns:
* sdmvstra: Stratum to which the participant belongs
* sdmvpsu: PSU indicator
* wtmec2yr: Full Sample 2 Year MEC Exam Weight
* riagendr: Sex
* ridageyr: age in years
* ridreth1: Race/ethnicity
* Body Measures:
(<https://wwwn.cdc.gov/nchs/nhanes/search/datapage.aspx?Component=Examination CycleBeginYear=2005>).
Used columns:
* bmxbmi: BMI ($kg/m^2$)
* Dual-Energy X-ray Absorptiometry - Whole Body:
dxx_d.XPT (<https://wwwn.cdc.gov/Nchs/Nhanes/Dxa/Dxa.aspx>)
* dxdtopf: Total percent body fat (%)
* _MULT_: Imputation version in $1,\dots,5$. For Figures
<ref> and <ref> and
Table <ref>
we use the first set of imputations, that is, rows with _MULT_=1.
§.§ Application Details for Daily Kilocalories Analysis
§.§.§ Dataset
Two publicly available datasets were downloaded from the NHANES website.
* Demographic Variables and Sample Weights: DEMO_I.XPT
(<https://wwwn.cdc.gov/nchs/nhanes/search/datapage.aspx?Component=Demographics CycleBeginYear=2015>).
Columns used
* sdmvstra: Stratum to which the participant belongs
* sdmvpsu: PSU indicator
* riagendr: Sex
* ridageyr: age in years
* ridreth1: Race/ethnicity
* Dietary Interview - Total Nutrient Intakes, First Day: DR1TOT_I.XPT
(<https://wwwn.cdc.gov/nchs/nhanes/search/datapage.aspx?Component=Dietary CycleBeginYear=2015>).
Columns used
* dr1tkcal: Energy (kcal)
* wtdrd1: Dietary day one sample weight
Notice that,
in this example,
following the NHANES guidelines for the analysis of dietary data,
we are not using the Full Sample 2 Year MEC Exam Weights “wtmec2yr” included in the demographic dataset,
but instead the dietary day one sample weights “wtdrd1”, constructed based on the
“MEC sample weights and further adjusting for
(a) the additional non-response and
(b) the differential allocation by weekdays (Monday through Thursday), Fridays, Saturdays and Sundays
for the dietary intake data collection” <cit.>.
§ QUANTITY BETWEEN BRACKETS IN AUGMENTED PSEUDOLIKELIHOOD IN (<ref>)
MATCHES LIKELIHOOD UNDER WEIGHTED LINEAR REGRESSION
The weighted linear regression model is
${y_{ij}\mid \bxy_{ij},\bth,\REypop_j}\sim\text{normal}\left(\bxy_{ij}^t\bbe+\REypop_j ,\sigma_y^2/w_{ij}\right)$ with $w_{ij}>0$ known.
Using the fact that
$$\left[\text{normal}(y\mid \mu,\sigma^2)\right]^w= \frac{1}{w^{1/2}(2\pi)^{(w-1)/2}} \frac{1}{(\sigma^2)^{(w-1)/2}} \times \text{normal}(y\mid \mu,\sigma^2/w),$$
we obtain that the expression between brackets in (<ref>),
$\prod_{j=1}^J \prod_{i=1}^{n_j} p(\sampled{y}_{ij}\mid \theta)^ {\sampled{w}_{ij}}$, equals
\begin{align*}
\prod_{j=1}^J
\prod_{i=1}^{n_j}
\left[\text{normal}\left(\sampled{y}_{ij}\mid \bxy_{ij}^t\bbe+\REypop_j ,\sigma_y^2 \right)
\right]^{\sampled{w}_{ij}} \propto&
\frac{1}{(\sigma_y^2)^{\left(\sum_{j,i}[\sampled{w}_{ij}-1]\right)/2}}\\
\left[\prod_{j=1}^J
\prod_{i=1}^{n_j} \text{normal}\left(\sampled{y}_{ij}\mid \bxy_{ij}^t\bbe+\REypop_j ,\sigma_y^2/\sampled{w}_{ij} \right)
\right]
\end{align*}
but, by construction, $\sum_{j=1}^J\sum_{i=1}^{n_j} \sampled{w}_{ij}=n$ and therefore the exponent of $\sigma_y^2$ in the denominator of the expression above is zero,
and the quantity between brackets is the likelihood of the weighted linear regression model.
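The identity above is easy to verify numerically; a one-off R check with arbitrary values:

y <- 1.3; mu <- 0.4; s2 <- 2.1; w <- 1.7
lhs <- dnorm(y, mu, sqrt(s2))^w
cst <- 1 / (sqrt(w) * (2 * pi)^((w - 1) / 2) * s2^((w - 1) / 2))
all.equal(lhs, cst * dnorm(y, mu, sqrt(s2 / w)))   # TRUE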
Estimate Std. Error t value Pr($>|t|$)
(Intercept) 7.2605 0.0159 456.42 0.0000
gender_Fem 0.0089 0.0140 0.63 0.5512
age.9.17 0.2626 0.0163 16.08 0.0000
age.18.29 0.3620 0.0267 13.55 0.0000
age.30.49 0.3631 0.0268 13.52 0.0000
age.50.Inf 0.2436 0.0245 9.95 0.0001
MX.AME -0.0233 0.0139 -1.67 0.1455
Other.Hisp -0.0325 0.0175 -1.85 0.1131
NONHisp.Black -0.0548 0.0125 -4.37 0.0047
Other.Multiracial -0.0376 0.0180 -2.08 0.0826
Inference under model Freq.strata of the multiple linear regression model
with response $\log (kcal+1)$ and predictors gender, age group and race/ethnicity.
|
# Blind Image Deblurring based on Kernel Mixture††thanks: This work has been
submitted to the IEEE for possible publication. Copyright may be transferred
without notice, after which this version may no longer be accessible.
Sajjad Amrollahi Biyouki
Department of Industrial and Systems Engineering
The University of Tennessee
Knoxville, TN 37996
<EMAIL_ADDRESS>
&Hoon Hwangbo
Department of Industrial and Systems Engineering
The University of Tennessee
Knoxville, TN 37996
<EMAIL_ADDRESS>
###### Abstract
Blind image deblurring tries to estimate blurriness and a latent image out of
a blurred image. This estimation, being an ill-posed problem, requires
imposing restrictions on the latent image or a blur kernel that represents
blurriness. Different from recent studies that impose some priors on the
latent image, this paper regulates the structure of the blur kernel. We
propose a kernel mixture structure while using the Gaussian kernel as a base
kernel. By combining multiple Gaussian kernels structurally enhanced in terms
of scales and centers, the kernel mixture becomes capable of modeling a nearly
non-parametric shape of blurriness. A data-driven decision for the number of
base kernels to combine makes the structure even more flexible. We apply this
approach to a remote sensing problem to recover latent images from blurry
satellite images. This case study shows the superiority of the proposed method,
which regulates the blur kernel, in comparison with state-of-the-art methods that
regulate the latent image.
Keywords: Blind deconvolution $\cdot$ Gaussian kernel $\cdot$ Mixture of
kernels $\cdot$ Remote Sensing $\cdot$ Image Restoration
## 1 Introduction
Image restoration is widely used when images do not provide the desired quality in
terms of clarity and contrast of target objects as a consequence of a noisy
disturbance, so-called blurriness. Such poor-quality images are often observed
in remote sensing [23],[30], underwater object detection [12],[19], and
healthcare applications [9],[28]. Due to the great need in many real-world
applications, a wide spectrum of methods has been developed to restore a
“best-quality” latent image from a blurred image [8],[7],[22].
In general, a blur process is modeled as the convolution of a latent image and
blurriness represented by a kernel. The deconvolution method, the inverse
process of the convolution, is the process of extracting the latent image by
deconvolving a blurred image into the latent image and the blur kernel. If
some prior knowledge is given for the blur kernel, so that a specific kernel can
be assumed a priori, the deconvolution process becomes straightforward as it
only requires estimating the latent image. This type of approach, which uses a
pre-defined kernel, is called non-blind deconvolution [21],[5]. The kernels
are, however, unknown or only partially known in real-world problems. With the
rapid increase in the usage of images for the analysis of various systems,
assuming a pre-defined kernel is often too restrictive. In this regard, blind
deconvolution [1],[11] (also referred to as blind image deblurring), which
estimates both the latent image and the blur kernel, has attracted great
attention in the last two decades.
The blind deconvolution process requires solving an ill-posed problem. This is
because, for a given, single blurred image, there can be an infinite number of
solutions for the latent image and the blur kernel that satisfy the system of
equations defined for the blur process model:
$\mathbf{B}=\mathbf{I}*\mathbf{K}+\mathbf{n}$ (1)
where $*$ is the convolution operator, $\mathbf{B}$ is a degraded (blurred)
image, $\mathbf{I}$ is a recovered (latent) image, $\mathbf{K}$ is a blur
kernel, and $\mathbf{n}$ is an additive noise. Therefore, to derive a unique
solution for the blind deconvolution problem, additional restrictions need to
be imposed. For this purpose, some priors and regularization terms have been
used to maintain and intensify image edges (restricting latent images) or to
diminish harmful degradation and noise (restricting blur kernels). Low-rank
prior [20], dark channel prior [18], and graph-based prior [4] have been
developed recently to restrict the latent image, and total variation (TV)
regularization [6] and gradient prior [26] have been applied to either the
blur kernel or the latent image.
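For concreteness, here is a minimal R sketch of the blur process model in (1) on a toy image, using a direct "same"-size 2D convolution with zero padding (the kernel used is symmetric, so correlation and convolution coincide); all sizes and values are illustrative assumptions.

conv2d <- function(I, K) {                 # "same" 2D convolution, zero-padded
  kr <- (nrow(K) - 1) / 2; kc <- (ncol(K) - 1) / 2
  P <- matrix(0, nrow(I) + 2 * kr, ncol(I) + 2 * kc)
  P[(kr + 1):(kr + nrow(I)), (kc + 1):(kc + ncol(I))] <- I
  B <- matrix(0, nrow(I), ncol(I))
  for (r in seq_len(nrow(I))) for (c in seq_len(ncol(I)))
    B[r, c] <- sum(P[r:(r + 2 * kr), c:(c + 2 * kc)] * K)
  B
}
I <- matrix(runif(64 * 64), 64, 64)                    # toy latent image I
K <- outer(dnorm(-2:2), dnorm(-2:2)); K <- K / sum(K)  # 5x5 Gaussian blur kernel K
B <- conv2d(I, K) + matrix(rnorm(64^2, sd = 0.01), 64, 64)  # B = I * K + n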
Recently, the approaches restricting the latent image, e.g., through
statistical priors [8],[17] or image related priors [20],[18],[4], have been a
mainstream in blind image deblurring research. However, image characteristics
can vary by different images and applications, and there is no guarantee that
the restrictions justified for a specific latent image work well for other
blind deconvolution problems. On the other hand, blurriness is typically
originated from common sources, including atmospheric turbulence (mostly for
remote sensing), out-of-focus, camera shake or object motion [27],[10]. This
suggests that restricting the blur kernel rather than the latent image can be
more effective and generalized more easily to broad applications of the blind
deconvolution. In this paper, we propose a novel kernel structure to model the
blur kernel, which can be leveraged for any blurry image no matter what image
prior (on the latent image) is appropriate.
A Gaussian kernel has been used to model the blur kernel, especially to
estimate atmospheric turbulence blurs [27]. In general, a circular shape
modeled by a simple Gaussian blur kernel is not sufficient to represent the
underlying blurriness, and there could be multiple sources of blurriness
making the overall shape complicated. To address this problem, we propose
using a mixture of several Gaussian kernels as a blur kernel while allowing
different scales, centers, and rotations of each individual kernel. This
kernel mixture is capable of modeling a complex shape of blurriness from
symmetric to asymmetric and from circular to linear without significant
limitation. Consequently, the kernel mixture shows flexible behaviors as if it
were estimated nonparametrically even with its parameterized structure. All
the decisions to define the kernel structure, including how many base kernels
to combine, are data-driven, so the proposed kernel mixture is adaptive and
can be applied to any blind deconvolution problem.
The main contributions of this paper can be summarized as:
* •
This paper develops a novel kernel structure, a kernel mixture, that can be
applied to a broad class of blind image deblurring problems, independent of
the characteristics of latent images.
* •
The parametric structure of the proposed kernel, induced by Gaussian base
kernels, restricts and characterizes the blur kernel effectively producing a
good solution for the ill-posed blind deconvolution problem.
* •
The proposed kernel is flexible in modeling blurriness; with different scales,
centers, and rotations, the Gaussian kernels become capable of modeling
various shapes of blurriness.
* •
The proposed kernel is adaptive to given images since the determination of its
structure is data-driven.
The rest of the paper is organized as follows. Section 2 reviews the related
works in the blind image deblurring domain. Section 3 describes the proposed
method in detail elaborating the development of the kernel mixture, the
optimization of associated parameters, and the overall process of blind
deconvolution based on the kernel mixture. Section 4 presents a case study of
deblurring noisy satellite images, discusses the dataset and experimental
settings, and compares the proposed method with other state-of-the-art
benchmark methods. Section 5 discusses future research directions and
concludes the paper.
## 2 Related Works
Image deconvolution methods can be grouped into two general categories: the
non-blind deconvolution where the kernel information is known and the blind
deconvolution where the kernel is also unknown and subject to estimation. In
the non-blind deconvolution domain, Wiener filter [25] and Richardson-Lucy
algorithm [21] are the most well-known methods among earlier works, and they
are still in use for the image restoration problems. The major shortcoming of
these methods is in their noise sensitivity that leads to ringing artifacts in
the recovered image. In addition, these methods require assuming a specific
kernel, but it is hard to find a proper kernel that works well for different
images/applications. Albeit more difficult to implement, the blind
deconvolution has been used more broadly with better capability of image
recovery in general. Earlier studies mostly focused on removing motion blur
[7],[8],[22] caused by dynamic movement of an object while an image is taken.
Recent papers also considered other types of blurriness stemmed from various
sources such as atmospheric turbulence and camera shake [4],[27]. To solve a
blind deconvolution problem, some Bayesian approaches, specifically Maximum a
Posteriori (MAP) estimation, and other optimization techniques have been used.
A few decades ago, Likas and Galatsanos [13] proposed using a hierarchical
Bayesian modeling for blind deblurring. They used Gaussian probability
distribution for modeling image prior, blur kernel, and hyperparameters of the
priors. They employed the variational Expectation Maximization (EM) to obtain
the MAP estimates from their Bayesian model. Fergus et al. [8] used Miskin’s
ensemble learning [15] for a variational Bayesian approach while assuming a
Gaussian mixture prior for the latent image. Inspired by this study, other
researchers proposed more efficient approaches by considering various priors
[2],[3],[16]. Babacan et al. [2] employed the variational Bayesian approach
while assuming sparse priors for the latent image (super-Gaussian priors).
Babacan et al. [3] imposed total variation prior on the latent image and
assumed a Gaussian blur kernel. Molina et al. [16] proposed simultaneous
autoregressions as priors for both the latent image and blur kernel and used
gamma distributions to model the hyperparameters of the priors. The major
weakness of this type of approach based on MAP estimation is its
strong dependency on the choice of priors and the resulting lack of
generality. It has been shown that blind deconvolution tends to estimate
a trivial unblurred image when an MAP approach is applied [2],[11]. In addition,
when sparse priors are used, the computational performance of an MAP
estimation deteriorates as the objective function for the estimation becomes
non-convex [26].
Others have used optimization techniques to solve a blind deconvolution
problem. The main idea is to solve an individual optimization problem for each
of the latent image and blur kernel while keeping one constant in the
estimation of the other and iteratively updating the estimates [29], [6]. You
and Kaveh [29] introduced such an alternating optimization problem in which
they regularized both the latent image and blur kernel by using the Laplace
operator. Chan and Wong [6] took advantage of the alternating optimization
structure and proposed the usage of total variation regularization, which can
improve recovering the edges of an image. Since then, the alternating
optimization-based approaches have evolved in two different ways: i)
introducing novel image priors and ii) developing new blur kernel structures.
Most recent works have developed more sophisticated image priors, including
$l_{0}$-norm prior [17], low-rank prior [20], dark channel prior [18], and
most recently, reweighed graph-based prior [4]. Pan et al. [17] proposed using
the $l_{0}$-norm that regulates the number of nonzero pixels as it can
distinguish a clear image from a blurred image based on their opposite
behaviors in terms of nonzero intensities. Regulating the $l_{0}$-norm prior,
however, makes the objective function for estimating the latent image and blur
kernel non-convex, so the estimation problem becomes hard to solve [4]. Ren et
al. [20] enforced image prior to be low-rank approximation of a degraded image
by using a weighted nuclear norm minimization and combined the image prior
with the gradient map of the image. However, this requires solving pixel-based
singular value decomposition that has $O(N^{3})$ complexity [4]. Pan et al.
[18] observed that the dark channel of blurred images should be less sparse
than that of clear images due to the nature of a typical blur process. Based
on this observation, they proposed regulating the dark channel of the latent
image and making it sparse. This process, though, involves computing the
nonlinear dark channel and its $l_{0}$-norm, so implementing the approach is
computationally intensive [4]. Most recently, Bai et al. [4] incorporated a
graph structure to model a blurred image in the nodal domain and used the
graph structure as the image prior. This was the first effort to map pixels to
a domain other than the frequency and real domains. All the discussed methods with
novel image priors have improved the quality of image recovery. However, they
may not be effective in extracting a latent image if the characteristics
assumed by the priors are not pronounced in a given image. In other words,
their quality can vary substantially from image to image.
In the domain of blur kernel regularization, Xu and Jia [26] utilized
influential edges of an image (image gradients) to create an initial kernel
and refined the kernel by using an iterative support detection. Although this
approach regularizes the blur kernel, the kernel estimation relies strongly on
the construction of edges, whose characteristics vary across images. More
recently, other studies have imposed specific structures on the blur kernel by
combining multiple kernels of known structures. Mai and Liu [14] fused blur
kernels estimated from other methods by using Gaussian Conditional Random
Fields. Their approach outperformed the other methods from which the
individual kernels were extracted. However, this requires implementing all the
other methods, and the entire estimation process becomes computationally
expensive. Furthermore, its performance depends on the quality of individual
kernels. If none of the individual kernels is capable of capturing specific
properties of the true blurriness, the kernel fusion will also fail to model
such a property. Later, Lee and Hwang [10] modeled a blur kernel as a linear
combination of basic two-dimensional patterns. To construct the patterns, they
used a “one-dimensional” Gaussian density with a different scale parameter for
each pattern. Since this blur kernel relies on simple Gaussian densities, it
cannot represent various shapes of blurriness. In this paper, we develop a
kernel mixture that combines structure-enhanced Gaussian kernels, which is
capable of modeling almost any shape of blurriness.
## 3 Methodology
In this section, we develop the kernel mixture of structure-enhanced Gaussian
kernels and present its capability of modeling general shapes of blurriness.
We also propose a blind image deblurring formulation that properly regulates
the latent image and blur kernel in the context of the kernel mixture. Then,
we describe the alternating optimization problem used to estimate the blur
kernel and latent image.
### 3.1 Mixture of Structure-Enhanced Gaussian Kernels
The major objective of this section is to develop a blur kernel that is
flexible in modeling blurriness while maintaining a certain parametric
structure. We use Gaussian kernels as base kernels to impose a parametric
structure and enhance them in terms of scales, centers, and rotations so that
various shape characteristics can emerge when they are combined.
The general model of our proposed blur kernel $\mathbf{K}$ is defined as a
mixture of $N$ Gaussian kernels as
$\mathbf{K}=\sum_{t=1}^{N}K_{G,t}$ (2)
where $K_{G,t}$ for $t=1,\ldots,N$ is a base Gaussian kernel. The base kernels
may have different shape complexities, determined by the underlying blurriness
of a given image. Below we present possible base kernel structures, from the
simplest to the most sophisticated, together with the shapes of their mixtures
and their distinguishing features.
The simplest structure of a two-dimensional base kernel is an isotropic zero-
mean Gaussian kernel with a scale parameter $\sigma$:
$K_{G}^{\text{simple}}(x,y;\sigma^{2})\propto\exp{(-\frac{x^{2}+y^{2}}{2\sigma^{2}})}$
(3)
where $x$ and $y$ are the locations of a pixel being evaluated, relative to a
target pixel location in the horizontal and vertical axes of an image,
respectively. For example, suppose pixel A in Fig. 1 is a target pixel. When
evaluating blurriness propagating from the target pixel to others, the
location of B relative to A is calculated by $(x,y)=(-1,1)$ as it moves to the
left and to the top each by one pixel. Similarly, the locations of C and D are
$(1,0)$ and $(0,-1)$, respectively.
Figure 1: Relative locations of pixels
In Eq. (3), the kernel $K_{G}^{\text{simple}}$ is determined by the single
scale parameter $\sigma$. When a single parameter describes the scale in all
directions, the propagation of blurriness is modeled as a circular shape. Even
a mixture of $K_{G}^{\text{simple}}$ kernels remains a set of concentric
circles. As such, this kernel is too simple to represent general blurriness.
(a) Blurriness shape with individual scales
(b) Scale-enhanced base kernel
(c) Mixture of $K_{G}^{\text{scale}}$
Figure 2: Kernel mixture with scale enhancement
To improve the simple structure, we allow a different scale parameter for each
axis and construct a covariance matrix to account for the varying scales of
different axes. A scale-enhanced Gaussian kernel is expressed in matrix form
as
$K_{G}^{\text{scale}}(\mathbf{p};\bm{\Sigma})\propto\exp{(-\frac{1}{2}\mathbf{p}^{T}\bm{\Sigma}^{-1}\mathbf{p})}$
(4)
where $\mathbf{p}=\begin{bmatrix}x&y\end{bmatrix}^{T}$,
$\bm{\Sigma}=\text{diag}(\sigma_{x}^{2},\sigma_{y}^{2})$, and $\sigma_{x}^{2}$
and $\sigma_{y}^{2}$ are the variances associated with each axis. By allowing
different scales on each axis, this kernel can be represented as an elliptic
shape as shown in Fig. 2(a). In an extreme condition, as one of
$\sigma_{x}^{2}$ or $\sigma_{y}^{2}$ gets close to zero, the elliptic shape
can converge to a linear shape since the kernel is defined on the discretized
grid formed by the pixels. A combination of $K_{G}^{\text{scale}}$ kernels
produces a centralized multi-layer cross-line shape as shown in Fig. 2(c).
(a) Blurriness shape with a non-zero center
(b) Center-enhanced base kernel
(c) Mixture of $K_{G}^{\text{center}}$
Figure 3: Kernel mixture with center enhancement
(a) Blurriness shape with rotation
(b) Rotation-enhanced base kernel
(c) Mixture of $K_{G}^{\text{rotation}}$
Figure 4: Kernel mixture with rotation enhancement
Still, the blur kernel constructed by the mixture of $K_{G}^{\text{scale}}$
cannot model asymmetric blurriness or multi-source blurriness because every
base kernel is symmetric and centered at zero. To overcome this weakness, we
propose using a non-zero center for each base kernel which is formulated as
$\displaystyle
K_{G}^{\text{center}}(\mathbf{p};\bm{\Sigma},\bm{\mu})\propto\exp{\left(-\frac{1}{2}(\mathbf{p}-\bm{\mu})^{T}\bm{\Sigma}^{-1}(\mathbf{p}-\bm{\mu})\right)}$
(5)
where $\bm{\mu}=\begin{bmatrix}\mu_{x}&\mu_{y}\end{bmatrix}^{T}$ and $\mu_{x}$
and $\mu_{y}$ are the relative locations of the center on each axis. Fig. 3
displays the structure of a Gaussian kernel with a non-zero center as well as
a mixture of the same kind. With a non-zero center, the
$K_{G}^{\text{center}}$ can be located anywhere within the range where the
kernel is defined. By combining multiple of them, the blur kernel as a whole
can present an asymmetric shape of blurriness as shown in Fig. 3(c). The
mixture is also capable of modeling a sparse blur kernel by combining multiple
base kernels with small scales. With this flexibility, the ultimate shape of
the blur kernel is determined by the underlying shape of blurriness inherent
in a blurred image. As such, the blur kernel exhibits almost nonparametric
behavior.
One limitation of the mixture of $K_{G}^{\text{center}}$ is that the shapes of
the base kernels should be parallel to horizontal and vertical axes. We relax
this condition and further consider rotations of the base kernels by using a
rotation matrix $\mathbf{R}$ and a rotation angle $\theta$:
$K_{G}^{\text{rotation}}(\mathbf{p};\bm{\Sigma},\bm{\mu},\theta)\propto\exp{\left(-\frac{1}{2}(\mathbf{Rp}-\bm{\mu})^{T}\bm{\Sigma}^{-1}(\mathbf{Rp}-\bm{\mu})\right)},\quad\mathbf{R}=\begin{bmatrix}\cos\theta&-\sin\theta\\ \sin\theta&\cos\theta\end{bmatrix}.$ (6)
This formulation provides the most enhanced kernel structure, which leverages
all the aforementioned structural features. When $\theta=0$,
$K_{G}^{\text{rotation}}$ is equivalent to $K_{G}^{\text{center}}$. If
$\bm{\mu}$ is also a zero vector, the kernel reduces to $K_{G}^{\text{scale}}$.
In fact, the shape complexity of each base kernel is determined by a given
image if we use the most advanced structure as a base kernel. The mixture of
$K_{G}^{\text{rotation}}$ can model nearly any shape of blurriness thanks to
its high flexibility as shown in Fig. 4(c).
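To make the construction concrete, the following is a minimal numpy sketch
(not the authors' implementation; all function and variable names are
illustrative) of Eqs. (2)-(6): each base kernel is an anisotropic Gaussian
with its own scales, center, and rotation angle, evaluated on the discretized
pixel grid, and the blur kernel is their normalized sum.

```python
# Minimal sketch of the kernel mixture of Eqs. (2)-(6); names are illustrative.
import numpy as np

def base_kernel(h, sigma_x, sigma_y, mu_x, mu_y, theta=0.0):
    """Rotation-enhanced Gaussian base kernel K_G^rotation (Eq. (6))."""
    half = h // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]  # relative pixel locations p
    # Rotate the grid: R p, with R the 2x2 rotation matrix of angle theta.
    xr = np.cos(theta) * x - np.sin(theta) * y
    yr = np.sin(theta) * x + np.cos(theta) * y
    # Quadratic form (Rp - mu)^T Sigma^{-1} (Rp - mu) with diagonal Sigma.
    q = ((xr - mu_x) / sigma_x) ** 2 + ((yr - mu_y) / sigma_y) ** 2
    return np.exp(-0.5 * q)

def kernel_mixture(h, params):
    """Sum of N base kernels (Eq. (2)), normalized so the entries sum to 1."""
    K = sum(base_kernel(h, *p) for p in params)
    return K / K.sum()

# Example: N = 3 base kernels -> an asymmetric, multi-source blur shape.
params = [(2.0, 0.5, 3.0, -2.0, 0.6),
          (0.8, 0.8, -4.0, 1.0, 0.0),
          (1.5, 3.0, 0.0, 5.0, 1.2)]   # (sigma_x, sigma_y, mu_x, mu_y, theta)
K = kernel_mixture(31, params)
```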
### 3.2 Blind Deblurring Formulation
For blind image deblurring, both the latent image and blur kernel need to be
estimated. In a Bayesian framework, the MAP estimates of $\mathbf{K}$ and
$\mathbf{I}$ are determined by maximizing the joint posterior distribution,
$p(\mathbf{I,K|B})$, as
$\mathbf{I}^{*},\mathbf{K}^{*}=\operatorname*{arg\,max}_{\mathbf{I,K}}\;p(\mathbf{I,K|B})=\operatorname*{arg\,max}_{\mathbf{I,K}}\;p(\mathbf{B|I,K})p(\mathbf{K})p(\mathbf{I})$ (7)
while assuming independence between $\mathbf{K}$ and $\mathbf{I}$. This MAP
estimation requires specifying $p(\mathbf{K})$ and $p(\mathbf{I})$, the prior
distributions of the blur kernel and latent image, respectively.
Instead of assuming particular distributions for priors, we apply the
alternating optimization technique. To construct an optimization problem, we
modify the objective function of the MAP estimation in Eq. (7). By taking the
negative logarithm, Eq. (7) becomes
$\mathbf{I}^{*},\mathbf{K}^{*}=\operatorname*{arg\,min}_{\mathbf{I,K}}\;-\log(p(\mathbf{B|I,K}))-\log(p(\mathbf{K}))-\log(p(\mathbf{I})).$ (8)
A more general optimization problem can be formulated by replacing the log
probabilities in Eq. (8) with loss functions:
$\mathbf{I}^{*},\mathbf{K}^{*}=\operatorname*{arg\,min}_{\mathbf{I,K}}\;\ell(\mathbf{I}*\mathbf{K},\mathbf{B})+\lambda_{1}\ell_{\mathbf{K}}(\mathbf{K})+\lambda_{2}\ell_{\mathbf{I}}(\mathbf{I})$ (9)
where the first term measures the loss incurred by estimating $\mathbf{B}$ as
the convolution of the estimates of $\mathbf{I}$ and $\mathbf{K}$, and
$\ell_{\mathbf{K}}$ and $\ell_{\mathbf{I}}$ are prior terms measuring losses
associated with the estimates of $\mathbf{K}$ and $\mathbf{I}$, respectively.
$\lambda_{1}$ and $\lambda_{2}$ regularize the prior terms while expressing
their importance relative to the first term, the estimation loss.
Typically, the kernel prior, $\ell_{\mathbf{K}}(\mathbf{K})$ is used to
stabilize the blur kernel, and the image prior,
$\ell_{\mathbf{I}}(\mathbf{I})$, is used to recover the latent image with
sharp edges. These priors play a significant role in blind deblurring. While
this paper focuses on developing a blur kernel with a flexible structure, a
proper prior is still needed to control the level of flexibility in the blur
kernel. To achieve that, we solve the following minimization problem:
$\mathbf{I}^{*},\mathbf{K}^{*}=\operatorname*{arg\,min}_{\mathbf{I},\mathbf{K}}\;||\mathbf{I}*\mathbf{K}-\mathbf{B}||_{2}^{2}+\lambda_{1}||\mathbf{K}||_{2}^{2}+\lambda_{2}||\bm{\sigma}^{2}||_{2}^{2}+\lambda_{3}(||\nabla\mathbf{I}||_{2}^{2}+||\mathbf{I}||_{2}^{2})$ (10)
where
$\bm{\sigma}^{2}=(\sigma_{x,1}^{2},\ldots,\sigma_{x,N}^{2},\sigma_{y,1}^{2},\ldots,\sigma_{y,N}^{2})^{T}$
is a vector including the diagonal elements
$(\sigma_{x,t}^{2},\sigma_{y,t}^{2})$ of the covariance matrix for each base
kernel $K_{G,t}$ for $t=1,\ldots,N$, $\nabla\mathbf{I}$ is the image gradient, and
$||\cdot||_{2}$ is the $l_{2}$-norm. We use $||\mathbf{K}||_{2}^{2}$ and
$||\bm{\sigma}^{2}||_{2}^{2}$ as the kernel prior and
$||\nabla\mathbf{I}||_{2}^{2}+||\mathbf{I}||_{2}^{2}$ as the image prior.
Regulating $||\mathbf{K}||_{2}^{2}$ and $||\mathbf{I}||_{2}^{2}$ induces the
sparsity of $\mathbf{K}$ and $\mathbf{I}$, respectively. The inclusion of
$||\nabla\mathbf{I}||_{2}^{2}$ restricts the estimate of $\mathbf{I}$ by
eliminating tiny gradient segments while keeping only large ones. In addition
to these common priors, we propose to include $||\bm{\sigma}^{2}||_{2}^{2}$,
which we call a covariance prior, to further regulate the blur kernel.
Input: Degraded image $\mathbf{B}$, number of base kernels $N$, kernel size
$h$
Let $i\leftarrow 0$;
Initialize the latent image, $\mathbf{I}^{0}\leftarrow\mathbf{B}$;
Generate random numbers to initialize the model parameters,
$\bm{\Sigma}_{t}^{0}$ and $\bm{\mu}_{t}^{0}$ for $t=1,\ldots,N$;
Use the model parameters to initialize base kernels $K_{G,t}^{0}$ for
$t=1,\ldots,N$;
Initialize the blur kernel $\mathbf{K}^{0}$ by combining $K_{G,t}^{0}$ for
$t=1,\ldots,N$ according to Eq. (2);
repeat
Update the iteration counter, $i\leftarrow i+1$;
Blur kernel estimation steps:
* Given $\mathbf{I}^{i-1}$, estimate $\mathbf{K}^{i}$ by optimizing Eq. (11) with respect to $\bm{\Sigma}_{t}^{i}$ and $\bm{\mu}_{t}^{i}$ for $t=1,\ldots,N$;
* Normalize $\mathbf{K}^{i}$;
Latent image restoration steps:
* Given $\mathbf{K}^{i}$, recover an intermediate latent image $\mathbf{I}^{i}$ by using Eq. (13);
* Update a tuning parameter, $\lambda_{3}\leftarrow\lambda_{3}/1.1$;
until _
$\frac{||\mathbf{K}^{i}-\mathbf{K}^{i-1}||_{2}}{||\mathbf{K}^{i-1}||_{2}}<\epsilon\quad\text{and}\quad\frac{||\mathbf{I}^{i}-\mathbf{I}^{i-1}||_{2}}{||\mathbf{I}^{i-1}||_{2}}<\epsilon$
_;
Output: Latent image $\mathbf{I}^{*}\leftarrow\mathbf{I}^{i}$
Algorithm 1 Alternating optimization for blind deblurring with the kernel
mixture
As done in typical coefficient shrinkage, e.g., ridge regression, the
covariance prior enforces insignificant $\sigma_{x,t}^{2}$ and
$\sigma_{y,t}^{2}$ to be close to zero and makes the corresponding base kernel
$K_{G,t}$ almost negligible in modeling blurriness. This property is used to
determine the number of base kernels for the kernel mixture. Instead of
predetermining the exact number of base kernels, we include a large enough
number of base kernels in the model. Then, some of them with little impact
will become negligible due to the covariance prior.
### 3.3 Estimation Procedure
To solve Eq. (10), we use alternating optimization as described in Alg. 1. The
ultimate purpose of Alg. 1 is to recover the latent image $\mathbf{I}$ by
modeling a proper blur kernel $\mathbf{K}$, given the degraded image
$\mathbf{B}$. To construct $\mathbf{K}$ as a kernel mixture, the information
about the number of base kernels, $N$, and the kernel size, $h$, is also
needed. The kernel size does not need to match the size of $\mathbf{B}$; it is
sufficient for the blur kernel $\mathbf{K}$ to be large enough to model the
blurriness in the degraded image $\mathbf{B}$. Once
the inputs are ready, we use $\mathbf{B}$ to initialize the latent image,
$\mathbf{I}^{0}$. Then, we generate random numbers to set the initial values
of model parameters, initialize the base kernels, and combine them to form the
initial blur kernel, $\mathbf{K}^{0}$. After the initialization, we keep
updating the blur kernel and latent image in an iterative loop. Within each
iteration, we optimize $\mathbf{K}$ given the latent image at hand,
$\mathbf{I}^{i-1}$, and use the resulting optimal solution to update
$\mathbf{K}^{i}$. The updated blur kernel, $\mathbf{K}^{i}$, is then used to
obtain a new estimate of the latent image, $\mathbf{I}^{i}$. This procedure
will be repeated until the estimates of the latent image and blur kernel
converge. The procedure in the loop consists of two sub-problems, one for blur
kernel estimation and another for latent image restoration, which are
elaborated in the following sections.
#### 3.3.1 Blur kernel
Under the alternating optimization framework, a latent image estimate is given
for the blur kernel estimation. Then, the latent image is assumed constant, so
the objective function in Eq. (10) reduces to
$E(\mathbf{K})=||\mathbf{I}*\mathbf{K}-\mathbf{B}||_{2}^{2}+\lambda_{1}||\mathbf{K}||_{2}^{2}+\lambda_{2}||\bm{\sigma}^{2}||_{2}^{2}.$
(11)
For the blur kernel estimation, we minimize the energy function,
$E(\mathbf{K})$, with respect to the model parameters that compose
$\mathbf{K}$, by using the Conjugate Gradient (CG) method. Once a new estimate
of $\mathbf{K}$ is obtained, it is normalized to ensure
$\sum_{i}\sum_{j}\mathbf{K}_{ij}=1$.
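As an illustration, the following is a minimal sketch of this sub-problem,
assuming center-enhanced base kernels (as used in Sec. 4.2) and scipy's
Conjugate Gradient optimizer with numerical gradients; the parameter packing
and function names are our own, not the authors' code.

```python
# Minimal sketch of Eq. (11): minimize E(K) over the mixture parameters by CG.
import numpy as np
from scipy.optimize import minimize
from scipy.signal import fftconvolve

def mixture_from_params(theta, h, N):
    """theta packs (sigma_x, sigma_y, mu_x, mu_y) for each of the N base kernels."""
    half = h // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    K = np.zeros((h, h))
    for sx, sy, mx, my in theta.reshape(N, 4):
        # Scales should stay positive; optimizing log-scales is advisable.
        K += np.exp(-0.5 * (((x - mx) / sx) ** 2 + ((y - my) / sy) ** 2))
    return K / K.sum()                           # normalization step of Alg. 1

def energy(theta, I_est, B, h, N, lam1, lam2):
    """E(K) of Eq. (11) for the current latent-image estimate I_est."""
    K = mixture_from_params(theta, h, N)
    sig2 = (theta.reshape(N, 4)[:, :2] ** 2).ravel()   # variances sigma^2
    fit = np.sum((fftconvolve(I_est, K, mode='same') - B) ** 2)
    return fit + lam1 * np.sum(K ** 2) + lam2 * np.sum(sig2 ** 2)

def estimate_kernel(theta0, I_est, B, h=31, N=9, lam1=1e-4, lam2=1e-2):
    res = minimize(energy, theta0, args=(I_est, B, h, N, lam1, lam2),
                   method='CG')                  # numerical gradients by default
    return mixture_from_params(res.x, h, N), res.x
```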
#### 3.3.2 Latent image
The goal of this sub-problem is to recover the latent image given a blur
kernel estimate. Based on Eq. (10), the energy function is formulated as
$E(\mathbf{I})=||\mathbf{I}*\mathbf{K}-\mathbf{B}||_{2}^{2}+\lambda_{3}(||\nabla\mathbf{I}||_{2}^{2}+||\mathbf{I}||_{2}^{2}).$
(12)
For the minimization of Eq. (12), the closed-form solution exists. By using
the Fast Fourier Transform (FFT), the solution is formulated [26] as
$\mathbf{I}^{*}=\mathcal{F}^{-1}(\frac{\overline{\mathcal{F}(\mathbf{K})}\mathcal{F}(\mathbf{B})}{\overline{\mathcal{F}(\mathbf{K})}\mathcal{F}(\mathbf{K})+\lambda_{3}[\overline{\mathcal{F}(\bm{\partial_{x}})}\mathcal{F}(\bm{\partial_{x}})+\overline{\mathcal{F}(\bm{\partial_{y}})}\mathcal{F}(\bm{\partial_{y}})]+\lambda_{3}})$
(13)
where $\mathcal{F}(\cdot)$ and $\mathcal{F}^{-1}(\cdot)$ are the FFT and
inverse FFT operators, respectively, and $\overline{\mathcal{F}(\cdot)}$
denotes the complex conjugate of the FFT. $\bm{\partial_{x}}$ and
$\bm{\partial_{y}}$ are the horizontal and vertical partial differential
operators, respectively.
To achieve a better estimate of $\mathbf{I}$, we keep updating $\lambda_{3}$
at each iteration as has been done in [4],[17].
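For reference, the update in Eq. (13) can be sketched in a few lines of numpy,
assuming circular boundary conditions and first-difference filters for
$\bm{\partial_{x}}$ and $\bm{\partial_{y}}$; the padding helper and names are
illustrative, not the authors' code.

```python
# Minimal sketch of the closed-form latent-image update in Eq. (13).
import numpy as np

def psf_to_otf(k, shape):
    """Zero-pad the PSF to image size and circularly shift its center to (0,0)."""
    out = np.zeros(shape)
    out[:k.shape[0], :k.shape[1]] = k
    out = np.roll(out, (-(k.shape[0] // 2), -(k.shape[1] // 2)), axis=(0, 1))
    return np.fft.fft2(out)

def restore_latent(B, K, lam3):
    FK = psf_to_otf(K, B.shape)
    Fdx = psf_to_otf(np.array([[1.0, -1.0]]), B.shape)    # horizontal difference
    Fdy = psf_to_otf(np.array([[1.0], [-1.0]]), B.shape)  # vertical difference
    num = np.conj(FK) * np.fft.fft2(B)
    # The trailing +1.0 inside the bracket carries the lam3*||I||^2 term.
    den = np.abs(FK) ** 2 + lam3 * (np.abs(Fdx) ** 2 + np.abs(Fdy) ** 2 + 1.0)
    return np.real(np.fft.ifft2(num / den))
```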
## 4 Comparative Experiments
In this section, we use satellite image data and compare the performance of
our proposed method with other state-of-the-art methods. We describe the
dataset and experimental settings used for the implementation of the methods
and discuss the comparison results.
### 4.1 Dataset
The dataset used in this study includes a simulated satellite image that is
convolved with unknown kernels and various noises to generate a wide spectrum
of blurred noisy images. These synthetic images are distorted by a two-layer
wind model whose turbulence-strength parameters create different types of
noisy images, characterized as $Dr=10$ (Dr10) and $Dr=20$ (Dr20) [24]. The
dataset consists of 200 images in total, including 100 images for each
distortion type of Dr10 or Dr20. The images are in RGB (three-channel) format
and their patch size is 365$\times$365. The authors of Swindle et al. [24]
generously provided the 200 blurred images and the original (simulated)
satellite image for the analysis of this study. Consequently, we do not know
what kernels were used for the convolution or how the final blurry images
were created.
Fig. 5(a) and 5(b) present samples of Dr10 and Dr20 images, respectively. Dr20
images are noisier and more degraded; it is even hard to distinguish the
satellite object. Fig. 5(c) shows the simulated satellite image without any
degradation, which will be referred to as the real image. The images
demonstrate the distortion severity of the dataset, especially for Dr20.
(a) Blurred image at Dr10
(b) Blurred image at Dr20
(c) Simulated (real) Image
Figure 5: Sample images and the original image without degradation
### 4.2 Experimental Settings
Our proposed method is evaluated in comparison with other state-of-the-art
methods in both quantitative and qualitative fashion. The benchmark methods
include Xu and Jia [26], Ren et al. [20], Pan et al. [18], and Bai et al. [4].
These papers are not only recent but have also demonstrated superior
performance. In addition, their methods are publicly available online as
software or Matlab source code, so their implementation can be accurate and
simple. Xu and Jia [26] can be implemented via software, but a specific kernel
size needs to be supplied, with three options: small, medium, and large. The
medium kernel provided the best quality of deblurred images when we applied Xu
and Jia [26] to our data. For a fair comparison, we consider the same kernel
size for all the methods including ours, i.e., $h=31\times 31$ (the value
for the medium size kernel). Our method is implemented in Python on an i7-8700
CPU system.
To fully specify our model and estimation method, we need to determine a set
of parameters other than the model parameters. This includes the number of
base kernels, $N$, and the regularization parameters of
$\lambda_{1},\lambda_{2}$, and $\lambda_{3}$.
Fig. 3(c) and 4(c) imply that the number of kernels is one of the most
critical factors defining the overall shape of the blur kernel. Because its
impact is so crucial, we do not fix the exact number of kernels from prior
knowledge; instead, we make the selection adaptive to the given image and the
underlying blurriness structure therein. This is achieved by modeling the
covariance prior in Eq. (10) while leaving the value of $N$ as a sufficiently
large number. From our preliminary study, we found that $N=9$ could provide
enough level of flexibility for the kernel mixture. For the regularization
parameters, we sampled a few images and selected $\lambda_{1}=10^{-4}$ and
$\lambda_{2}=10^{-2}$ based on the quantitative metrics described in Section
4.3 and visual inspection of recovered images. On the other hand, as described
in Alg. 1, $\lambda_{3}$ is updated at each iteration of the estimation
procedure, starting from an initial value of $10^{-2}$.
Note that, for the base kernels, we use $K_{G}^{\text{center}}$ instead of
$K_{G}^{\text{rotation}}$ to form the kernel mixture, $\mathbf{K}$. Although
$K_{G}^{\text{rotation}}$ is the most flexible and capable of modeling the
most complicated shapes of blurriness, it adds $N$ additional rotation
parameters to estimate, one for each base kernel. The total number of
parameters to estimate then becomes $5N$, which is 45 when $N=9$. From the
perspective of the nonlinear optimization in Eq. (11), these additional
parameters require significantly more computation. At least for the images
used in this study, the additional parameters do not add much benefit in terms
of image recovery (also see the similarity between Fig. 3(c) and 4(c)) but
increase the variance of the final estimate, producing a recovered image of
poorer quality.
### 4.3 Performance Measures
We use two performance measures to quantify and compare the quality of various
blind deblurring methods. The first measure is root mean square error (RMSE)
that is widely used to evaluate the estimation (or prediction) accuracy of a
model not only in the blind deblurring domain but also in general machine
learning applications. The second measure is peak signal-to-noise ratio (PSNR)
that is often used in computer vision and image processing applications to
quantify the quality of the reconstructed images.
The RMSE and PSNR can be calculated as follows:
$\displaystyle\text{RMSE}(\mathbf{I},\hat{\mathbf{I}})=\sqrt{\frac{1}{n_{h}n_{v}}\sum_{i=1}^{n_{h}}\sum_{j=1}^{n_{v}}(\mathbf{I}_{ij}-\hat{\mathbf{I}}_{ij})^{2}}$
(14)
$\displaystyle\text{PSNR}(\mathbf{I},\hat{\mathbf{I}})=10\log_{10}\frac{(\max_{i,j}\hat{\mathbf{I}}_{ij}-\min_{i,j}\hat{\mathbf{I}}_{ij})^{2}}{\sum_{i=1}^{n_{h}}\sum_{j=1}^{n_{v}}(\mathbf{I}_{ij}-\hat{\mathbf{I}}_{ij})^{2}/(n_{h}n_{v})}$
(15)
where $\mathbf{I}_{ij}$ and $\hat{\mathbf{I}}_{ij}$ for $i=1,\ldots,n_{h}$ and
$j=1,\ldots,n_{v}$ are the values of $i$th horizontal and $j$th vertical pixel
from the real and recovered image, respectively. Since the size of the images
used in this study is $365\times 365$, $n_{h}=n_{v}=365$. The RMSE evaluates
how different the real and recovered images are, whereas the PSNR quantifies
the peak signal range of the recovered image relative to the average residual
error, on a logarithmic (decibel) scale. As such, the smaller the RMSE and the
larger the PSNR, the better the quality of a recovered image. Because the two
measures capture different aspects of quality, we use both to compare the
proposed method with the others.
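Both measures reduce to a few lines of numpy; the sketch below follows the
definitions above, with the PSNR expressed in decibels.

```python
# Minimal sketch of the two performance measures, Eqs. (14)-(15).
import numpy as np

def rmse(I, I_hat):
    return np.sqrt(np.mean((I - I_hat) ** 2))

def psnr(I, I_hat):
    mse = np.mean((I - I_hat) ** 2)
    peak = I_hat.max() - I_hat.min()   # intensity range of the recovered image
    return 10.0 * np.log10(peak ** 2 / mse)
```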
### 4.4 Comparison Results
We randomly sample 10 images from each of the Dr10 and Dr20 datasets to
compare the performance.
#### 4.4.1 Dr10 Results
Table 1 shows the RMSE and PSNR values for the ten recovered Dr10 images. In
all cases, the proposed method outperforms Xu and Jia [26] and Ren et al.
[20]. In addition, our method performs better in most cases than the most
sophisticated method, the graph-based blind deblurring [4]. Still, the
graph-based method consistently shows decent performance and produces the best
recovered image in one case. In a couple of other cases, the dark channel
prior method [18] performs best, but its performance is worse than the
graph-based method on average. Even though the dark channel method can provide
excellent quality for certain images, it can also suffer from considerably
poor performance; see the results for Images 3, 9, and 10, where its RMSE and
PSNR deviate significantly from the best values. In contrast, our proposed
method not only achieves the best result most often but also remains close to
the best performance whenever it loses first place. All these observations
hold for both the RMSE and PSNR measures. Overall, the proposed method has the
lowest RMSE and the highest PSNR on average. Fig. 6 visualizes the relative
performance of all methods, demonstrating the superiority of our method.
Table 1: Performance measures for randomly sampled Dr10 images (columns 2-6:
RMSE; columns 7-11: PSNR); the boldface highlights the best value for each
image.

| Image | Xu et al. [26] | Ren et al. [20] | Pan et al. [18] | Bai et al. [4] | Ours | Xu et al. [26] | Ren et al. [20] | Pan et al. [18] | Bai et al. [4] | Ours |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 16.89 | 19.47 | **14.26** | 14.55 | 14.90 | 24.98 | 23.75 | **26.46** | 26.29 | 26.08 |
| 2 | 23.51 | 23.25 | 20.31 | 20.18 | **19.20** | 22.12 | 22.21 | 23.39 | 23.44 | **23.87** |
| 3 | 22.38 | 18.12 | 16.81 | 14.11 | **13.27** | 22.54 | 24.38 | 25.03 | 26.55 | **27.08** |
| 4 | 19.43 | 18.44 | 14.62 | 14.95 | **14.61** | 23.77 | 24.22 | 26.24 | 26.05 | **26.25** |
| 5 | 17.33 | 20.54 | **15.56** | 17.19 | 16.20 | 24.76 | 23.29 | **25.70** | 24.84 | 25.35 |
| 6 | 16.00 | 14.72 | **13.77** | 14.96 | 14.84 | 25.45 | 26.18 | **26.76** | 26.04 | 26.11 |
| 7 | 18.29 | 22.33 | 19.40 | 19.27 | **17.73** | 24.30 | 22.56 | 23.78 | 23.84 | **24.57** |
| 8 | 18.78 | 20.29 | **14.68** | 15.34 | 15.71 | 24.10 | 23.40 | **26.21** | 25.83 | 25.62 |
| 9 | 18.90 | 20.99 | 15.27 | 14.50 | **11.96** | 24.01 | 23.10 | 25.86 | 26.31 | **27.99** |
| 10 | 19.41 | 13.50 | 15.88 | **12.49** | 13.19 | 23.78 | 24.72 | 25.52 | **27.61** | 27.14 |
| Mean | 19.10 | 19.16 | 16.05 | 15.75 | **15.16** | 23.98 | 23.78 | 25.50 | 25.68 | **26.00** |
(a) RMSE results
(b) PSNR results
Figure 6: Visualization of relative performance - Dr10
Figure 7: Dr10 results (Image 3): (a) blurred input image, (b) Xu and Jia
[26], (c) Ren et al. [20], (d) Pan et al. [18], (e) Bai et al. [4], (f)
proposed method
Figure 8: Dr10 results (Image 9): (a) blurred input image, (b) Xu and Jia
[26], (c) Ren et al. [20], (d) Pan et al. [18], (e) Bai et al. [4], (f)
proposed method
Fig. 7 and 8 illustrate deblurring outcomes from all methods as well as the
blurred images used as input (Images 3 and 9, respectively). As expected, the
visual outcomes are in accordance with the results derived from the
quantitative measures. Taking a closer look at the deblurred images, we
observe that the dark channel method (Fig. 7(d)) and
the graph-based method (Fig. 7(e)) create non-smooth edges of the object,
especially around the wings of the satellite. On the contrary, the recovered
image from our method (Fig. 7(f)) shows fine edges around the object. The
difference in the continuity (or smoothness) of nearby pixel values is
reflected in the RMSE and PSNR calculation where our method attains better
values than the other two. Fig. 7(b) and 7(c) show crude reconstruction of the
images without much details on the surface of the satellite, which causes
higher RMSE and lower PSNR for the corresponding methods.
Fig. 7(d) shows a higher contrast than Fig. 7(e) and 7(f) though its contrast
level is still lower than that of Fig. 7(b) and 7(c). This is likely because
the dark channel method regularizes the dark channel of the image, producing
more dark pixels. By having a relatively high contrast, the image in Fig.
7(d) looks clearer at distance. However, the high contrast, when it is not
appropriate, can result in poorer performance in the image recovery (see Table
1 for Image 3 and 9).
A similar argument can be made for Fig. 8. The recovered images from Xu and
Jia [26] and Ren et al. [20] lack detail. The dark channel method and the
graph-based method generate even more corrupted edges of the object. The image
recovered by the dark channel method has a higher contrast than the images
from the graph-based method and our method. All the results, in terms of RMSE,
PSNR, and visual inspection, demonstrate that the proposed method is
competitive with, and often superior to, other state-of-the-art methods in
recovering the latent images.
Table 2: Performance measures for randomly sampled Dr20 images (columns 2-6:
RMSE; columns 7-11: PSNR); the boldface highlights the best value for each
image.

| Image | Xu et al. [26] | Ren et al. [20] | Pan et al. [18] | Bai et al. [4] | Ours | Xu et al. [26] | Ren et al. [20] | Pan et al. [18] | Bai et al. [4] | Ours |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 23.42 | 23.80 | 21.67 | 22.64 | **21.08** | 22.15 | 22.01 | 22.82 | 22.44 | **23.06** |
| 2 | 24.80 | 26.35 | 26.17 | 24.23 | **23.34** | 21.65 | 21.13 | 21.18 | 21.85 | **22.18** |
| 3 | 24.02 | 25.49 | 23.45 | 20.57 | **20.37** | 21.93 | 21.42 | 22.14 | 23.28 | **23.36** |
| 4 | 23.28 | 26.19 | 24.18 | 22.78 | **21.18** | 22.20 | 21.18 | 21.87 | 22.39 | **23.02** |
| 5 | 21.06 | 24.32 | 22.56 | 22.14 | **18.84** | 23.07 | 21.82 | 22.47 | 22.64 | **24.04** |
| 6 | 23.85 | 21.93 | 23.69 | 23.29 | **21.26** | 21.99 | 22.72 | 22.05 | 22.20 | **22.99** |
| 7 | 23.42 | 25.05 | 24.56 | 24.09 | **22.61** | 22.15 | 21.57 | 21.74 | 21.90 | **22.45** |
| 8 | 22.60 | 21.80 | 19.97 | 21.24 | **19.90** | 22.46 | 22.77 | 23.53 | 23.00 | **23.56** |
| 9 | **19.75** | 22.59 | 20.28 | 19.77 | 19.77 | **23.63** | 22.46 | 23.40 | 23.62 | 23.62 |
| 10 | 20.47 | 21.92 | 19.39 | **17.96** | 18.17 | 23.32 | 22.73 | 23.79 | **24.45** | 24.35 |
| Mean | 22.67 | 23.94 | 22.59 | 21.87 | **20.65** | 22.45 | 21.98 | 22.49 | 22.77 | **23.26** |
(a) RMSE results
(b) PSNR results
Figure 9: Visualization of relative performance - Dr20
#### 4.4.2 Dr20 Results
As discussed earlier, Dr20 images involve more blurriness, so it is natural to
observe worse performance compared to the Dr10 results. Table 2 shows the RMSE
and PSNR calculations for ten randomly chosen Dr20 images. For these images
with a higher level of noise, the results show the overwhelming superiority of
the proposed method, which takes advantage of explicitly modeling the
structure of the blur kernel. Our method attains the lowest RMSE and the
highest PSNR in all but one case. Even in the case where the graph-based
method achieves the best quality, our method is the only one producing a
result competitive with the best value.
## 5 Concluding Remarks
In this paper, we propose a novel blind image deblurring method that imposes a
specific structure on the blur kernel and achieves great flexibility in
modeling blurriness. To this end, we develop structure-enhanced Gaussian
kernels and form a mixture of the kernels to model the blur kernel. While the
behavior of the resulting blur kernel is regulated within a parametric
structure, it can represent various shapes of blurriness. Still, the modeling
capability and flexibility of the blur kernel depend on how many kernels to
incorporate in the kernel mixture. To address this issue, we reformulate the
optimization framework for kernel estimation and let the optimization process
itself decide the number of kernels through a covariance prior of the blur
kernel.
Our experimental results on satellite image data show that the proposed method
outperforms decent state-of-the-art methods that employ complex image priors,
both quantitatively and qualitatively. Our method attains the lowest RMSE and
the highest PSNR, and this superiority becomes more apparent as the noise
level increases. From visual inspection, while other methods suffer from
discontinuities in nearby pixel values, the proposed method recovers images
with smooth, uncorrupted edges. Yet the purpose of this method, which
regularizes the blur kernel, is not to replace image prior-based methods but
to advance general blind deconvolution practice through a proper combination
with them.
In this paper, we combined several kernels additively, which is the simplest
form of kernel integration. Applying a more advanced method for kernel fusion,
such as Gaussian Conditional Random Fields, would be an interesting research
topic. In addition, by improving the optimization process of the kernel
estimation, the proposed method could be applied to a much broader class of
important deblurring problems. Therefore, finding better heuristics and
deriving closed-form solutions under proper assumptions would be valuable
research tasks.
## Acknowledgment
The authors would like to thank Ryan Swindle, Douglas Hope, Michael Hart, and
Stuart Jefferies for providing the dataset used in this study.
## References
* Almeida and Almeida [Aug. 2009] Mariana SC Almeida and Luís B Almeida. Blind and semi-blind deblurring of natural images. _IEEE Trans. Image Process._ , 19(1):36–52, Aug. 2009.
* Babacan et al. [2012] S Derin Babacan, Rafael Molina, Minh N Do, and Aggelos K Katsaggelos. Bayesian blind deconvolution with general sparse image priors. In _Proc. Eur. Conf. Comput. Vis._ , pages 341–355. Springer, 2012\.
* Babacan et al. [Nov. 2008] S Derin Babacan, Rafael Molina, and Aggelos K Katsaggelos. Variational bayesian blind deconvolution using a total variation prior. _IEEE Trans. Image Process._ , 18(1):12–26, Nov. 2008.
* Bai et al. [Oct. 2018] Yuanchao Bai, Gene Cheung, Xianming Liu, and Wen Gao. Graph-based blind image deblurring from a single photograph. _IEEE Trans. Image Process._ , 28(3):1404–1418, Oct. 2018.
* Bar et al. [Dec. 2006] Leah Bar, Nahum Kiryati, and Nir Sochen. Image deblurring in the presence of impulsive noise. _Int. J. Comput. Vis_ , 70(3):279–298, Dec. 2006\.
* Chan and Wong [Mar. 1998] Tony F Chan and Chiu-Kwong Wong. Total variation blind deconvolution. _IEEE Trans. Image Process._ , 7(3):370–375, Mar. 1998.
* Cho and Lee [2009] Sunghyun Cho and Seungyong Lee. Fast motion deblurring. _ACM Trans. Graph._ , 28(5):1–8, 2009.
* Fergus et al. [2006] Rob Fergus, Barun Singh, Aaron Hertzmann, Sam T Roweis, and William T Freeman. Removing camera shake from a single photograph. _ACM Trans. Graph._ , 25(3):787–794, 2006.
* Jiang et al. [Jul. 2003] Ming Jiang, Ge Wang, Margaret W Skinner, Jay T Rubinstein, and Michael W Vannier. Blind deblurring of spiral ct images. _IEEE Trans. Med. Imag._ , 22(7):837–845, Jul. 2003.
* Lee and Hwang [May. 2017] Chia-Chen Lee and Wen-Liang Hwang. Mixture of gaussian blur kernel representation for blind image restoration. _IEEE Trans. Comput. Imag._ , 3(4):783–797, May. 2017.
* Levin et al. [Jul. 2011] Anat Levin, Yair Weiss, Fredo Durand, and William T Freeman. Understanding blind deconvolution algorithms. _IEEE Trans. Pattern Anal. Mach. Intell._ , 33(12):2354–2367, Jul. 2011.
* Li et al. [Sep. 2016] Chong-Yi Li, Ji-Chang Guo, Run-Min Cong, Yan-Wei Pang, and Bo Wang. Underwater image enhancement by dehazing with minimum information loss and histogram distribution prior. _IEEE Trans. Image Process._ , 25(12):5664–5677, Sep. 2016.
* Likas and Galatsanos [Jul. 2004] Aristidis C Likas and Nikolas P Galatsanos. A variational approach for bayesian blind image deconvolution. _IEEE Trans. Signal Process._ , 52(8):2222–2233, Jul. 2004.
* Mai and Liu [2015] Long Mai and Feng Liu. Kernel fusion for better image deblurring. In _Proc. IEEE Conf. Comput. Vis. Pattern Recognit._ , pages 371–380, June 2015.
* Miskin and MacKay [2000] James Miskin and David JC MacKay. Ensemble learning for blind image separation and deconvolution. In _Advances in Independent Component Analysis_ , pages 123–141. Springer, 2000.
* Molina et al. [Nov. 2006] Rafael Molina, Javier Mateos, and Aggelos K Katsaggelos. Blind deconvolution using a variational approach to parameter, image, and blur estimation. _IEEE Trans. Image Process._ , 15(12):3715–3727, Nov. 2006.
* Pan et al. [Apr. 2016] Jinshan Pan, Zhe Hu, Zhixun Su, and Ming-Hsuan Yang. $l_{0}$-regularized intensity and gradient prior for deblurring text images and beyond. _IEEE Trans. Pattern Anal. Mach. Intell._ , 39(2):342–355, Apr. 2016.
* Pan et al. [Jun. 2016] Jinshan Pan, Deqing Sun, Hanspeter Pfister, and Ming-Hsuan Yang. Blind image deblurring using dark channel prior. In _Proc. IEEE Conf. Comput. Vis. Pattern Recognit._ , pages 1628–1636, Jun. 2016.
* Peng and Cosman [Feb. 2017] Yan-Tsung Peng and Pamela C Cosman. Underwater image restoration based on image blurriness and light absorption. _IEEE Trans. Image Process._ , 26(4):1579–1594, Feb. 2017.
* Ren et al. [May. 2016] Wenqi Ren, Xiaochun Cao, Jinshan Pan, Xiaojie Guo, Wangmeng Zuo, and Ming-Hsuan Yang. Image deblurring via enhanced low-rank prior. _IEEE Trans. Image Process._ , 25(7):3426–3437, May. 2016.
* Richardson [1972] William Hadley Richardson. Bayesian-based iterative method of image restoration. _JoSA_ , 62(1):55–59, 1972.
* Shan et al. [2008] Qi Shan, Jiaya Jia, and Aseem Agarwala. High-quality motion deblurring from a single image. _ACM Trans. Graph._ , 27(3):1–10, 2008.
* Shen et al. [May. 2012] Huanfeng Shen, Lijun Du, Liangpei Zhang, and Wei Gong. A blind restoration method for remote sensing images. _IEEE Geosci. Remote Sens. Lett._ , 9(6):1137–1141, May. 2012.
* Swindle et al. [Sep. 2018] Ryan Swindle, Douglas Hope, Michael Hart, and Stuart Jefferies. High-resolution space situational awareness imaging using carbon fiber telescopes. _J. Appl. Remote Sens_ , 12(4):042406, Sep. 2018\.
* Wiener [1964] Norbert Wiener. _Extrapolation, Interpolation, and Smoothing of Stationary Time Series_. The MIT Press, 1964. ISBN 0262730057.
* Xu and Jia [2010] Li Xu and Jiaya Jia. Two-phase kernel estimation for robust motion deblurring. In _Proc. Eur. Conf. Comput. Vis._ , pages 157–170. Springer, 2010\.
* Yan and Shao [Feb. 2016] Ruomei Yan and Ling Shao. Blind image blur estimation via deep learning. _IEEE Trans. Image Process._ , 25(4):1910–1921, Feb. 2016.
* You et al. [Jun. 2019] Chenyu You, Guang Li, Yi Zhang, Xiaoliu Zhang, Hongming Shan, Mengzhou Li, Shenghong Ju, Zhen Zhao, Zhuiyang Zhang, Wenxiang Cong, et al. Ct super-resolution gan constrained by the identical, residual, and cycle learning ensemble (gan-circle). _IEEE Trans. Med. Imag._ , 39(1):188–203, Jun. 2019.
* You and Kaveh [Mar. 1996] Yu-Li You and Mostafa Kaveh. A regularization approach to joint blur identification and image restoration. _IEEE Trans. Image Process._ , 5(3):416–428, Mar. 1996.
* Zhang et al. [Jan. 2019] Shuo Zhang, Guanghui He, Hai-Bao Chen, Naifeng Jing, and Qin Wang. Scale adaptive proposal network for object detection in remote sensing images. _IEEE Geosci. Remote Sens. Lett._ , 16(6):864–868, Jan. 2019.
# Measurement-induced criticality as a data-structure transition
Xhek Turkeshi JEIP, USR 3573 CNRS, Collège de France, PSL Research
University, 11 Place Marcelin Berthelot, 75321 Paris Cedex 05, France
###### Abstract
We employ unsupervised learning tools to identify the dynamical phases and
their measurement-induced transitions in quantum systems subject to the
combined action of unitary evolution and stochastic local measurements.
Specifically, we show that the principal component analysis and the intrinsic
dimension estimation provide order parameters that directly locate the
transition and the critical exponents in the classical encoding data space.
Finally, we test our approach on stabilizer circuits as proof of principle,
finding robust agreement with previous studies.
## I Introduction
The advances in noisy intermediate scale quantum devices Preskill (2018); Roch
_et al._ (2014); Koh _et al._ have motivated a renewed interest in monitored
quantum systems Wiseman and Milburn (2009) – systems where the unitary
dynamics is interspersed by local measurements. The resulting non-unitary
evolution is described by stochastic quantum trajectories stemming from the
intrinsic randomness of the quantum measurement operations, that in the many-
body framework leads to measurement-induced transitions between unconventional
dynamical phases Nahum _et al._ (2021); Potter and Vasseur ; Lunt _et al._ ;
Sierant _et al._ (2022); Kalsi _et al._ (2022). These critical phenomena are
controlled by the competition between the entangling power of unitary
dynamics, which drives the system toward thermalization, and the disentangling
effect of local measurements, that collapse the system wave-function in
restricted manifolds of the Hilbert space Li _et al._ (2018, 2019); Skinner
_et al._ (2019); Szyniszewski _et al._ (2019, 2020); Fan _et al._ (2021);
Biella and Schiró (2021); Kumar _et al._ (2020); Iaconis and Chen (2021). In
the simplest setup of random quantum circuits, these measurement-induced
transitions separate a quantum error correcting phase at low measurement rate
from a quantum Zeno phase at high measurement rate Choi _et al._ (2020); Bao
_et al._ (2020); Gullans and Huse (2020a, b), as shown by extensive numerical
investigations Zabalo _et al._ (2020, 2022); Sierant and Turkeshi (2022);
Agrawal _et al._ ; Block _et al._ (2022); Sharma _et al._ (2022); Lunt _et
al._ (2021); Turkeshi _et al._ (2020); Ippoliti _et al._ (2021) and
analytical arguments Nahum _et al._ (2021); Jian _et al._ (2020); Lopez-
Piqueres _et al._ (2020); Jian _et al._ ; Lang and Büchler (2020); Zabalo
_et al._ ; Li _et al._ ; Vasseur _et al._ (2019); Ippoliti and Khemani
(2021); Lu and Grover (2021) on the entanglement properties of the system.
In this work, we propose an alternative viewpoint by analyzing the classical
encoding configurations of the system state and show that the measurement-
induced criticality manifests as a geometric transition in the data space (cf.
Fig. 1). To this end, we consider the principal component analysis (PCA) and
the intrinsic dimension estimation, which, as unsupervised learning
techniques, provide an ideal framework to seek a pattern in unlabelled raw
data Mehta _et al._ (2019); Mendes-Santos _et al._ (2021a, b). PCA aims to
detect the most relevant directions in data space and to compress (project)
the data set toward the significant and restricted manifold. Being a linear
method, PCA is particularly effective on linear problems but generally fails
when dealing with non-linear structures and complex data space topology Wold
_et al._ (1987). On the other hand, the intrinsic dimension estimation
extrapolates the effective dimension of the subspace of the data space where
the data lie and may be applied to non-linear geometries as well Mendes-Santos
_et al._ (2021a, b).
Using stabilizer circuits as a benchmark framework, we argue that the first
principal component and the intrinsic dimension are natural order parameters
for the measurement-induced transition, in the same fashion as they are for
classical and quantum criticality in equilibrium systems Wang (2016); Wetzel
(2017); Hu _et al._ (2017); Ch’ng _et al._ (2018); Costa _et al._ (2017);
Wang and Zhai (2017); Khatami _et al._ (2020); Mendes-Santos _et al._
(2021a); Beach _et al._ (2018); Lidiak and Gong (2020); Torlai _et al._ ;
Martiniani _et al._ (2019); Bagrov _et al._ (2020). (See also Ref. Mehta
_et al._ (2019); Carleo _et al._ (2019); Carrasquilla (2020) for general
reviews on machine learning methods in quantum physics). Furthermore, we find
that at the critical point the system develops a minimum intrinsic dimension,
which reflects the parametric simplicity required to describe the system
around the transition by virtue of universality. Our numerical results
perfectly agree with previously reported values of the critical point and the
correlation length critical exponent and provide a viable alternative to
studying measurement-induced criticality in more general setups.
Figure 1: Pictorial representation of the phase transition. The Hilbert space
$\mathcal{H}$ explored by the quantum trajectories exhibits a structural
transition with the measurement rate $p$, which is reflected in a geometric
transition in the encoding data space $\mathcal{G}$.
The remainder of the paper is structured as follows. In Sec. II we discuss the
unsupervised learning methods and how they can be applied to the data space of
quantum trajectories. In Sec. III we introduce the stabilizer circuits used to
benchmark our methods, review how they can be encoded and simulated with
polynomial resources, and recall the relevant results for (1+1)-dimensional
systems which we will use for comparison with our analysis. Sec. IV discusses
our main numerical findings on the principal component analysis and the
intrinsic dimension estimation. Finally, our concluding remarks and outlooks
are presented in Sec. V.
## II Data space of quantum trajectories
In this section, we introduce the principal component analysis and the
intrinsic dimension estimation, and discuss their effectiveness and
limitations when applied to the encoding data set of quantum trajectories.
For any given quantum trajectory $|\Psi(\alpha,\xi)\rangle$, with $\alpha$
some control parameters, and $\xi$ a registry identifying the trajectory, we
define a feature $G_{i}$ with $i\equiv(\alpha,\xi)$ as the $d$-dimensional
classical encoding of a state. The space of the features is denoted by
$\mathcal{G}$ and is called data space. A few examples are the following,
where we consider states defined on a qubit lattice with $R$ sites. (i) The
computational basis representation of a quantum state: the feature is the
vector whose components are the amplitude with respect to the computational
basis, and hence the dimension of the feature is $d=2^{R}$ Schmitt and
Lenarčič . (ii) A Gaussian quantum state with correlation matrix $C$: the
feature is the one-dimensional reshaping of the correlation matrix, with
$d\propto R^{2}$. (iii) The matrix product state (MPS) representation of a
quantum state with uniform bond dimension $D$: the feature is the one-
dimensional reshaping of the MPS with $d\propto RD^{2}$. (iv) A stabilizer
state: the feature is the one-dimensional reshaping of the tableau
representation and $d=R(2R+1)$ (See Sec. III and Ref. Nielsen and Chuang
(2010); Aaronson and Gottesman (2004)).
A data set is a rectangular matrix $G({\tilde{\alpha},\tilde{\xi}})$ of
dimension $N\times d$, where each row is a feature $G_{i}$ and $N$ is the
total number of features, which can include different values of $\alpha$ and
of $\xi$. We denote $\tilde{\alpha}$ ($\tilde{\xi}$) the common parameters
(post-selected trajectories) of the data set. Although from these data sets
one can, in principle, compute the physical properties of the system (e.g. the
entanglement entropy and the correlation functions), here we argue that the
measurement-induced criticality emerges as a geometric transition in the data
space $\mathcal{G}$, i.e. in the $d$-dimensional space of all the features
(cf. Fig. 1).
### II.1 Principal component analysis
Principal component analysis (PCA) is a projective method based on a linear
transformation of the data space basis Wold _et al._ (1987); Mehta _et al._
(2019); Wang (2016). Following Ref. Wetzel (2017); Hu _et al._ (2017), we
consider as data set a collection of $N_{\xi}$ quantum trajectory snapshots
for each of the $N_{\alpha}$ values of the parameter $\alpha$. (In this case,
there are no shared parameters $\tilde{\alpha}$ or $\tilde{\xi}$ among the
features). These $N=N_{\alpha}N_{\xi}$ features are identified as vectors in a
$d$-dimensional space. The PCA rotates the frame of reference in such a way
that the variance of the data is largest along the first transformed
direction, second largest along the second direction, _etc._
The method consists of three steps. (i) Define the centered data set $X$,
whose elements are $X_{i,j}=G_{i,j}-(1/N)\sum_{i}G_{i,j}$ and compute the
matrix $\Sigma=X^{T}X/(N-1)$. The centering preprocess guarantees that this is
the covariance matrix of the data set, whose elements are the cross-
correlations $\Sigma_{i,j}$ among features. (ii) Compute the
eigendecomposition $\Sigma=V^{T}KV$, where
$K=\mathrm{diag}(k_{1},\dots,k_{d})$ is the diagonal matrix of the eigenvalues
ordered in descending order, and $V=(v_{1},\dots,v_{d})$ is the rotation whose
columns $v_{j}$ identify the $j$-th relevant directions. In the new reference
frame defined by $V$, the transformed features have no cross-correlations, and
the variance of the data along the $j$-th direction is given by $k_{j}$. (iii)
Rotate the original data set to $W=GV$. The vectors $w_{j}$ along the
direction $v_{j}$ are termed $j$-th principal component.
A normalized, relative weight of the relevance of the principal components is
given by the explained variance ratios $\lambda_{j}\equiv k_{j}/(\sum_{i}k_{i})$
Mehta _et al._ (2019). By definition $\sum_{n}\lambda_{n}=1$; hence
$\lambda_{n}$ represents the fraction of encoded information along the
direction $v_{n}$.
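The three steps above amount to a few lines of numpy; the following sketch
(names illustrative) returns the principal components and the explained
variance ratios for a data set $G$ of shape $N\times d$.

```python
# Minimal sketch of the three PCA steps on a data set G of shape (N, d),
# where each row encodes one quantum-trajectory snapshot.
import numpy as np

def pca(G):
    X = G - G.mean(axis=0)                 # (i) center the features
    Sigma = X.T @ X / (G.shape[0] - 1)     #     covariance matrix of the data
    k, V = np.linalg.eigh(Sigma)           # (ii) eigendecomposition
    order = np.argsort(k)[::-1]            #     sort eigenvalues descending
    k, V = k[order], V[:, order]
    W = G @ V                              # (iii) rotate; column j of W is w_j
    lam = k / k.sum()                      # explained variance ratios
    return W, lam

# W[:, 0] is the first principal component, used below as an order parameter.
```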
Interestingly, in many-body physics at equilibrium, the first principal
component acts as an order parameter Wang (2016); Wetzel (2017); Hu _et al._
(2017); Ch’ng _et al._ (2018); Costa _et al._ (2017); Wang and Zhai (2017);
Khatami _et al._ (2020); Mendes-Santos _et al._ (2021a); Beach _et al._
(2018); Lidiak and Gong (2020); Torlai _et al._ ; Martiniani _et al._
(2019); Bagrov _et al._ (2020). In the following, we argue that the first
principal component plays the role of an order parameter also in monitored
quantum systems.
### II.2 Intrinsic dimension
The main limitation of the principal component analysis is rooted in the
linear nature of the transformation. Hence, when the data space is non-linear
and has a complex geometry, the PCA needs non-trivial preprocessing (e.g.
kernel methods Mehta _et al._ (2019)) to give meaningful information about the
system.
We overcome this limitation by considering the intrinsic dimension estimation
Goldt _et al._ (2020); Facco _et al._ (2017), which aims to estimate the
effective dimension $I_{d}(\alpha)$ of the subspace of the data space where
the data lie at varying values of the control parameter (e.g. measurement
rate) $\alpha$. The data sets are given by $G(\alpha)$ with $N$ quantum
trajectory snapshots sharing a fixed value of $\alpha$. For monitored quantum
systems, we expect that sparse measurements reflect in a large intrinsic
dimension, as the system state will explore arbitrary large regions of the
Hilbert space (cf. Fig. 1). On the other hand, frequent measurements collapse
the dynamics to a restricted manifold with a lower intrinsic dimension, as the
wave-function will be strongly localized around the measurement dark states
Sierant and Turkeshi (2022).
We estimate the intrinsic dimension in a density-independent fashion using the
two nearest-neighboring technique (2NN) Mendes-Santos _et al._ (2021a, b).
For completeness, here we present the general ideas and the limitation of the
method and refer to Ref. Facco _et al._ (2017) for an in-depth discussion.
The method relies on the assumption of _locally_ uniform data manifolds. Here,
the locality is related to the scale at which we look at the data: the larger
the data set, the more resolved the distance between points. (Empirically, a
finer scale is inversely proportional to the data set size $N^{-1}$.). We
assume a notion of distance in the data space (e.g. the Hamming distance or
the Euclidean distance Mehta _et al._ (2019)). Under these hypotheses, we can
locally represent neighboring features as a uniform hypersphere, and using
simple geometric arguments we can identify the intrinsic dimension as detailed
below.
For a given feature $G_{i}$, we compute the first and second nearest-
neighboring distances $r_{1}(G_{i})$ and $r_{2}(G_{i})$ in data space, and the
ratio $\mu(G_{i})=r_{2}(G_{i})/r_{1}(G_{i})$. The hypersphere distribution of
neighboring data induces the distribution of the ratios $\mu$ given by
$f(\mu)=I_{d}\mu^{-I_{d}-1}.$ (1)
From the cumulative distribution
$P(\mu)=\int_{0}^{\mu}d\mu^{\prime}f(\mu^{\prime})$ we obtain
$I_{d}=-\frac{\ln(1-P(\mu))}{\ln\mu}.$ (2)
In practice, the cumulative distribution is numerically estimated, and $I_{d}$
is obtained through a linear fit.
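A minimal numpy sketch of this estimator reads as follows (brute-force
$O(N^{2})$ distances, assuming no duplicate features; the fraction of largest
ratios to discard before fitting is an empirical choice).

```python
# Minimal sketch of the 2NN intrinsic-dimension estimator, Eqs. (1)-(2).
import numpy as np

def intrinsic_dimension_2nn(G, discard=0.1):
    # Pairwise Euclidean distances between the N features (rows of G).
    diff = G[:, None, :] - G[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    np.fill_diagonal(dist, np.inf)
    d_sorted = np.sort(dist, axis=1)
    mu = np.sort(d_sorted[:, 1] / d_sorted[:, 0])   # r_2 / r_1 per feature
    N = len(mu)
    P = np.arange(1, N + 1) / N                     # empirical CDF of mu
    # Drop the largest ratios, where the empirical CDF is noisy (P -> 1).
    keep = slice(0, int(N * (1 - discard)))
    x, y = np.log(mu[keep]), -np.log(1.0 - P[keep])
    return (x * y).sum() / (x * x).sum()            # slope of fit through origin
```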
The 2NN intrinsic dimension estimation is not predictive when the local
uniformity of the data set fails. This is the case when the number of features
is too small, but, for discrete data sets, also when the number of features is
too large. The latter is understood based on the relationship between $N$ and
the resolution of the data manifold: When the typical resolution is finer than
the typical distance between data points, the discrete structure of a data set
emerges and the local uniformity assumption breaks down. Thus, the optimal
choice for the number of features $N$ lies in a coarse-grained regime that is,
in practice, estimated empirically.
The intrinsic dimension has been studied in many-body physics in Ref. Mendes-
Santos _et al._ (2021a, b) where it was found to display a local minimum at
criticality, which is approached with a critical finite-size collapse. This
minimum has an intuitive explanation: at criticality, physics is universal and
controlled by a few relevant fields. In the following, we show that the
intrinsic dimension provides a robust order parameter also for monitored
quantum systems.
For self-consistency and completeness, in the next section, we review the
monitored quantum system of interest and recall the numerical estimates in the
literature which will serve as benchmarks for our analysis.
## III Stabilizer circuits
We consider a one-dimensional qubit lattice of size $L$ which evolves through
the architecture represented in Fig. 2. We assume periodic boundary conditions
and even $L$. At each time step, the state evolves according to
$|\Psi_{t+1}\rangle=U_{t}M^{m_{t}}_{t}|\Psi_{t}\rangle,$ (3)
where $U_{t}$, $m_{t}$ and $M_{t}^{m_{t}}$ denote respectively the unitary
layer, the measurement outcomes, and the layer of projective measurements at
time $t$. We choose $U_{t}$ to be a layer of two-body unitary gates given by
$U_{t}=\prod_{i=\mathrm{mod(t,2)}}^{L/2}U_{2i-1,2i,t}$ (4)
with $U_{x,y,t}$ independent random Clifford two-body gates. (A Clifford gate
is a unitary gate that map a Pauli string into a _single_ Pauli string). The
measurement layer is a composition of local measurement operations, which are
stochastically picked with probability (measurement rate) $p$. If a local
measurement is performed, the resulting qubit is projected onto the
measurement result through the Born rule. In summary
$\displaystyle M^{m_{t}}_{t}|\Psi\rangle$
$\displaystyle=\frac{P^{m_{t}}_{t}|\tilde{\Psi}\rangle}{\|P^{m_{t}}_{t}|\tilde{\Psi}\rangle\|},\;P^{m_{t}}_{t}|\tilde{\Psi}\rangle=\left(\prod_{i=1}^{L}P_{i,t}^{m_{t}^{i}}\right)|\Psi\rangle$
$\displaystyle P_{i,t}^{m_{t}^{i}}$
$\displaystyle=\begin{cases}\openone&m_{t}^{i}=0,\\ \frac{1\pm Z_{i}}{2}&m_{t}^{i}=\pm 1\end{cases}.$ (5)
In a compact fashion, using the time-ordering $\mathcal{T}$ operator, we can
write the whole evolution in terms of
$K_{\mathbf{m}}=\mathcal{T}\prod_{t=0}^{T}(U_{t}P^{m_{t}}_{t})$ as
$|\Psi_{T}\rangle=\frac{K_{\mathbf{m}}|\Psi_{0}\rangle}{\|K_{\mathbf{m}}|\Psi_{0}\rangle\|},$
(6)
where $\mathbf{m}$ is a shorthand for the measurement results and the chosen
unitary gates. The late-time regime does not depend on the initial condition;
hence, without loss of generality, we fix the initial state
$|\Psi_{0}\rangle=|0\dots 0\rangle$.
Figure 2: Cartoon of the hybrid quantum evolution. The brick-wall unitary is
designed to let the qubits propagate correlations, while the measurement gates
are randomly picked with probability $p$. The measurement outcome $\pm 1$
determines the collapse operator $P_{i}^{\pm}$ through the Born rule.
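As an illustration, one time step of Eqs. (3)-(5) can be sketched with the
stim stabilizer-simulator package, assuming its TableauSimulator interface
(stim.Tableau.random, do_tableau, measure); the pairing follows the brick-wall
layer of Eq. (4) with periodic boundaries, and all names below are ours.

```python
# Minimal sketch of one time step of the monitored Clifford circuit.
import numpy as np
import stim

def time_step(sim, L, p, t, rng):
    # Brick-wall layer of independent random two-qubit Clifford gates.
    offset = t % 2
    for i in range(0, L, 2):
        a, b = (i + offset) % L, (i + 1 + offset) % L
        sim.do_tableau(stim.Tableau.random(2), [a, b])
    # Layer of local Z measurements, each performed with probability p;
    # the simulator collapses the state according to the Born rule.
    for q in range(L):
        if rng.random() < p:
            sim.measure(q)

rng = np.random.default_rng(0)
sim = stim.TableauSimulator()   # qubits start in |0...0>
L, p = 16, 0.1
for t in range(2 * L):          # evolve to the late-time regime
    time_step(sim, L, p, t, rng)
```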
The rate of measurement $p$ controls the dynamical phases of the system Li
_et al._ (2018). When the local measurements are suppressed $p\to 0$, the
dynamics is governed by the unitary part, which leads the system to explore
large manifolds of the Hilbert space at long times. In this regime,
measurements are not able to resolve the state of the system, which hence
results in a quantum error-correcting phase. In contrast, frequent
measurements $p\to 1$ prevent ergodic behavior as the system is incessantly
projected in a reduced manifold (quantum Zeno phase) Facchi and Pascazio
(2002); Burgarth _et al._ (2014).
Figure 3: (a) Results for the principal components $w_{1}$ and $w_{2}$ at
$L=32$. The data are organized in separate regions for different measurement
rates. (b) Explained variance ratios $\lambda_{n}$ for the most relevant
components. (c) The relative relevance of the directions does not change upon
increasing the number of components $N_{\mathrm{PCA}}=2\div 128$.
With the above specifications, the model is a stabilizer circuit, i.e. a
random quantum circuit whose state is a stabilizer state at every time step.
Stabilizer states on $L$ qubits are states for which there exists a subgroup
of Pauli strings
$g=e^{i\pi\phi}X_{1}^{n_{1}}Z_{1}^{m_{1}}X_{2}^{n_{2}}Z_{2}^{m_{2}}\dots
X_{L}^{n_{L}}Z_{L}^{m_{L}},$ (7)
with $\phi,n_{j},m_{j}\in\{0,1\}$ such that $g|\Psi\rangle=|\Psi\rangle$.
(We denote by $X$, $Y$, $Z$ the Pauli matrices.) This group, denoted throughout
this paper $Q$, is abelian, and if it is generated by $L$ independent Pauli
strings $\hat{g}_{j}$, it uniquely specifies the system state as
$|\Psi\rangle\langle\Psi|=\prod_{j=1}^{L}\left(\frac{1+\hat{g}_{j}}{2}\right)=\frac{1}{2^{L}}\sum_{g\in Q}g.$ (8)
Since a stabilizer state is encoded in the generating Pauli strings
$\hat{g}_{j}$ (cf. Eq. (8)), a random Clifford gate $U$ maps a stabilizer
state into a stabilizer state, fixed by the new generators
$U\hat{g}_{j}U^{\dagger}$.
In a similar fashion, projective measurements of a Pauli string map a
stabilizer state into a stabilizer state. To see this, consider the
measurement of the Pauli string $g_{s}$. If $[g_{s},\hat{g}_{j}]=0$ for all
the generators $\hat{g}_{j}$ of $Q$, the state of the system is unaffected by
the measurement, and the measurement result is deterministic (see Note 1).
If this is not the case, there exists a set
$\{g_{r_{1}},\dots,g_{r_{l}}\}$ of generators that do not commute (but anticommute) with
$g_{s}$. The measurement result is random with probability $1/2$, and the
projection onto the measurement result $\pm g_{s}$ is added to the generators.
The commuting generators are left untouched, while the anticommuting set is
reduced to $\{g_{r_{1}}\cdot g_{r_{2}},g_{r_{2}}\cdot
g_{r_{3}},\dots,g_{r_{l-1}}\cdot g_{r_{l}}\}$ (this ensures that all the
generators commute, as they should). The above observations constitute the
Gottesman-Knill theorem Aaronson and Gottesman (2004); Nielsen and Chuang
(2010).
An important consequence is that stabilizer circuits can be encoded and simulated
with polynomial resources. In particular, a stabilizer state is fixed by the
$L\times(2L+1)$ matrix
$\hat{G}=\begin{pmatrix}\vec{\phi}&M_{X}&M_{Z}\end{pmatrix}$ (9)
where $\vec{\phi}=[\phi^{j}]$ is the vector of phases, $M_{X}=[n_{i}^{j}]$ is the
matrix defining the $X$ operators, and $M_{Z}=[m^{j}_{i}]$ the matrix of $Z$
operators of the generators $\hat{g}_{j}$. In a similar fashion, random
Clifford gates and projective measurements act as maps on the matrix $\hat{G}$
over the field $\mathbb{F}_{2}$.
We note that the tableau representation is not unique. A particular instance
of $\hat{G}$ corresponds to fixing a basis on the stabilizer group $Q$ for the
state $|\Psi\rangle$, but any other choice of independent generators
$\hat{G}^{\prime}$ for the stabilizer group $Q$ corresponds to the same state
$|\Psi\rangle$. This redundancy is denoted as gauge freedom of the tableau
representation. With $|\Psi_{0}\rangle=|0\dots 0\rangle$, we shall fix the
gauge by setting the initial tableau to $M_{Z}=\openone$, $M_{X}=0$ and
$\vec{\phi}=0$, and by updating the stabilizer group according to the measurement
prescription discussed in this section. However, in discussing physical
results, we shall compare our findings with randomized choices of the basis
for $Q$.
The stabilizer circuit in Fig. 2 exhibits a measurement-induced phase
transition at $p_{c}=0.1599(1)$ with correlation length critical exponent
$\nu=1.27(1)$ Gullans and Huse (2020b); Sierant _et al._ , between a quantum
error correcting phase at $p<p_{c}$ and a quantum Zeno phase at $p>p_{c}$. We
shall use this model in the next section to benchmark the methods discussed in
Sec. II.
Figure 4: (a) Quantified principal component $\bar{w}_{1}$ for different
system sizes $L$, and (b,c) its data collapses. The results show the order
parameter nature of $\bar{w}_{1}$. (b) The estimated $p_{c}=0.159(4)$,
$\nu=1.35(5)$, and $\zeta=0.51(3)$ are in agreement with the results in the
literature. (c) Also the estimated $p_{c}=0.165(6)$ and $\nu=1.30(7)$ are in
agreement with the results in the literature. In the insets, we magnify the
data collapses close to the critical point.
## IV Numerical benchmarks
We implement the stabilizer circuit of Sec. III using the efficient library
STIM Gidney (2021), which is based on the Aaronson-Gottesman algorithm
Aaronson and Gottesman (2004) (see Note 2).
We evolve the state to late times $t\geq 8L$, and store the encoding tableau every
$\Delta t=L/2$ time-steps. From the $L\times(2L+1)$ tableau representation
$\hat{G}_{i}$ we obtain the feature $G_{i}$ through reshaping to a $d=L(2L+1)$
binary vector (cf. Sec. II).
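For concreteness, a minimal sketch of this pipeline is given below; it assumes the stim Python API (`stim.TableauSimulator` with `set_num_qubits`, `do_tableau`, `measure`, and `canonical_stabilizers`, together with `stim.Tableau.random`), and the layout of the feature vector follows Eq. (9).

```python
import numpy as np
import stim

def trajectory_feature(L: int, p: float, T: int, rng: np.random.Generator) -> np.ndarray:
    """One trajectory of the hybrid circuit of Sec. III, returned as a
    d = L(2L+1) binary feature vector (phases, M_X, M_Z), cf. Eq. (9)."""
    sim = stim.TableauSimulator()
    sim.set_num_qubits(L)  # initial state |0...0>, i.e. M_Z = identity
    for t in range(T):
        # brick-wall layer of independent random two-qubit Clifford gates
        for i in range(t % 2, L, 2):
            sim.do_tableau(stim.Tableau.random(2), [i, (i + 1) % L])
        # measurement layer: each qubit is measured in Z with probability p
        for q in range(L):
            if rng.random() < p:
                sim.measure(q)
    feat = np.zeros((L, 2 * L + 1), dtype=np.uint8)
    for j, g in enumerate(sim.canonical_stabilizers()):
        feat[j, 0] = 0 if g.sign == 1 else 1       # phase bit phi^j
        for k in range(L):
            v = g[k]                               # 0=I, 1=X, 2=Y, 3=Z
            feat[j, 1 + k] = int(v in (1, 2))      # X bit n_k^j
            feat[j, 1 + L + k] = int(v in (2, 3))  # Z bit m_k^j
    return feat.reshape(-1)
```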
For any system size $L$, we construct a data set of $N=N_{p}N_{s}$ features
for the principal component analysis, with $N_{p}$ the number of values
$p\in[0,1]$ considered, and $N_{s}$ the number of snapshots for each value of
$p$ (see Note 3). For the intrinsic dimension estimation, we have
$N_{p}$ separate data sets, each with $N_{s}$ features obtained at a fixed
$p\in[0,1]$. Both the PCA and the intrinsic dimension estimation are
implemented using the library sklearn Pedregosa _et al._ (2011).
### IV.1 Principal component analysis
We begin by discussing the results of the principal component analysis. We
truncate the PCA to $N_{\mathrm{PCA}}$ principal components for efficiency:
from the centered data set $X$ (cf. Sec. II) we can obtain the principal
directions and weights via a truncated singular value decomposition, which
reduces the computational cost of the problem.
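In practice the truncated PCA amounts to a few lines with sklearn; a minimal sketch (variable names are ours):

```python
import numpy as np
from sklearn.decomposition import PCA

def run_pca(features: np.ndarray, n_pca: int = 2):
    """features: (N, d) binary data set with d = L(2L+1). PCA centers the
    columns internally, so the raw 0/1 features can be passed directly."""
    pca = PCA(n_components=n_pca, svd_solver="randomized")
    w = pca.fit_transform(features)          # components w_n(i)
    return w, pca.explained_variance_ratio_  # weights lambda_n
```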
As an illustrative example, we present the results of the PCA for $L=32$ in
Fig. 3, varying the maximum number of components $N_{\mathrm{PCA}}$. We see
that the first principal direction alone captures around $16\%$ of the variance
of the data set, and within the first $4$ components the cumulative explained
variance reaches $20\%$. (This is a large portion, considering that the
dimension of the feature space is $d=L(2L+1)$.) This fact is unaffected by
varying the number of directions required by the algorithm, as the explained
variance ratios remain qualitatively unchanged: the $\lambda_{n}$ collapse onto
the same curve over the whole range of considered principal directions
$N_{\textup{PCA}}$. We stress that the data set considered in each case is
different, and the small fluctuations are related to the specific realizations.
Finally, we note that the discrete binary nature of the data does not allow for
a neat clustering of the data points for $p<p_{c}$ and $p>p_{c}$ (for some
critical rate $p_{c}$). The same would occur with various kernel methods, and
stems from the equivalence between different metrics for discrete binary data,
including the Euclidean and Hamming distances. This phenomenon should be
contrasted with, e.g., Ref. Long _et al._ (2020), where different phases
clearly separate through a diffusion map algorithm. Finding suitable clustering
algorithms for discrete data is an open problem, which we leave for future
investigation.
Although the principal components contain all the relevant information of the
data set, it is convenient to extract a single meaningful number for each
value of the measurement rate $p$. We consider the quantified principal
components, defined as the conditional averages
$\bar{w}_{j}=\frac{1}{N_{s}}\sum_{i(p)}w_{j}(i).$ (10)
Here the mean is over the $N_{s}$ configurations with the same measurement
rate $p$. We present the numerical data in Fig. 4 (a) for various $L$ and $p$,
which suggest the presence of a finite-size scaling.
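A minimal sketch of Eq. (10), given the component matrix `w` returned by the PCA sketch above and the measurement rate `rates[i]` of each snapshot (names are ours):

```python
import numpy as np

def quantified_components(w: np.ndarray, rates: np.ndarray):
    """Conditional averages of the components over the N_s snapshots
    sharing the same measurement rate p, cf. Eq. (10)."""
    p_values = np.unique(rates)
    bar_w = np.array([w[rates == p].mean(axis=0) for p in p_values])
    return p_values, bar_w  # bar_w[k, j] = bar_w_j at rate p_values[k]
```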
We consider two finite-size scaling hypotheses. First, we consider the generic
finite-size scaling hypothesis
$\bar{w}_{1}(p,L)=L^{\zeta/\nu}f_{1}((p-p_{c})L^{1/\nu}),$ (11)
in the spirit of statistical mechanics order parameters. This ansatz is a
starting point for models where we do not have _ab-initio_ knowledge.
Furthermore, we also consider an _a fortiori_ finite-size scaling hypothesis.
It is motivated by the logarithmic corrections present for the entanglement
entropy for the measurement-induced criticality of (1+1)D stabilizer circuits
Li _et al._ (2019)
$|\bar{w}_{1}(p,L)-\bar{w}_{1}(p_{c},L)|=\tilde{f}_{1}((p-p_{c})L^{1/\nu}).$
(12)
We neglect the smallest system sizes and consider $L\geq 64$. Performing the
finite-size scaling with standard techniques Zabalo _et al._ (2020), we find
an excellent data collapse for both hypotheses, as demonstrated in Fig. 4.
For Eq. (11) our estimates for the critical point and exponents are
$p_{c}=0.159(4)$, $\nu=1.35(5)$ and $\zeta=0.51(3)$. Instead, for Eq. (12) we
have $p_{c}=0.165(6)$ and $\nu=1.30(7)$. Given our numerical data, we cannot
differentiate which scaling is the correct one, as the estimates for $p_{c}$
and $\nu$ are compatible. Nevertheless, the analysis demonstrates that
$\bar{w}_{1}$ is an effective order parameter for the measurement-induced
phase transition.
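For concreteness, a minimal sketch of such a collapse fit under the ansatz of Eq. (11); the implementation choices are ours (the master curve $f_{1}$ is approximated by a polynomial, and the collapse quality is the mean squared deviation from it):

```python
import numpy as np
from scipy.optimize import minimize

def collapse_cost(params, curves, deg=5):
    """curves: dict mapping L -> (p, bar_w1) arrays. Returns the collapse
    quality for trial values of (p_c, nu, zeta) in Eq. (11)."""
    pc, nu, zeta = params
    x = np.concatenate([(p - pc) * L ** (1 / nu) for L, (p, w) in curves.items()])
    y = np.concatenate([w * L ** (-zeta / nu) for L, (p, w) in curves.items()])
    coeffs = np.polyfit(x, y, deg)  # polynomial proxy for the master curve f_1
    return np.mean((y - np.polyval(coeffs, x)) ** 2)

# e.g.: minimize(collapse_cost, x0=[0.16, 1.3, 0.5], args=(curves,),
#                method="Nelder-Mead")
```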
Importantly, $\bar{w}_{1}$ does not have a straightforward physical
interpretation. In general it is a non-local order parameter, as it depends
non-trivially on the full correlation pattern in the data space. The advantage
compared to physically motivated observables (e.g. correlation functions) is
that it can be successfully applied also in problems which lack a local order
parameter, such as the Berezinskii-Kosterlitz-Thouless transitions or lattice
gauge theories Haldar _et al._ ; Wetzel and Scherzer (2017).
Next, we consider the subsequent (less relevant) components, and compute the
quantified principal components $\bar{w}_{k}$ with $k\geq 2$. We find that these
exhibit a non-monotonic behavior with the measurement rate $p$, with
oscillations appearing in the error-correcting phase ($p<p_{c}$), while
saturating at an $O(1)$ value in the quantum Zeno phase ($p>p_{c}$) (see Fig.
5). These oscillations are due to the choice of gauge fixing of the tableau
representation we have considered in Sec. III.
To test the gauge dependence of our results, we consider a random choice of
generators for the stabilizer group $Q$ fixing the state. This is obtained
through random rank-preserving linear combinations of the rows of
$\hat{G}_{i}$ over the field $\mathbb{F}_{2}$.
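A minimal sketch of this randomization is given below (our implementation; for simplicity the phase bookkeeping arising from Pauli multiplication is omitted, so only the $M_{X}$ and $M_{Z}$ blocks of the new tableau are exact):

```python
import numpy as np

def random_gauge(G: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Replace the L generators (rows of the binary tableau G) by random
    invertible linear combinations over F_2, which fixes the same group Q."""
    L = G.shape[0]
    while True:
        A = rng.integers(0, 2, size=(L, L), dtype=np.uint8)
        if rank_f2(A.copy()) == L:  # keep only rank-preserving (invertible) A
            break
    return (A.astype(np.int64) @ G.astype(np.int64)) % 2

def rank_f2(M: np.ndarray) -> int:
    """Rank over F_2 by in-place Gaussian elimination."""
    rank = 0
    for col in range(M.shape[1]):
        pivots = np.nonzero(M[rank:, col])[0]
        if pivots.size == 0:
            continue
        M[[rank, rank + pivots[0]]] = M[[rank + pivots[0], rank]]
        for r in range(M.shape[0]):
            if r != rank and M[r, col]:
                M[r] ^= M[rank]
        rank += 1
        if rank == M.shape[0]:
            break
    return rank
```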
As anticipated, the secondary quantified principal components exhibit a
qualitative change of behavior at low measurement rates, with an $O(L)$ non-
monotonic value in the quantum error-correcting phase, while at high
measurement rates the quantified principal components $\bar{w}_{k\geq 2}$
saturate to a constant value. (See $\bar{w}_{2}$ in Fig. 6 (Right); similar
features are present for the subsequent principal components.)
Figure 5: Secondary quantified principal component for different system sizes
$L$. The oscillatory behavior is due to the choice of gauge fixing for the
stabilizer tableau representation.
On the other hand, the first quantified principal component exhibits the same
qualitative behavior as in Fig. 4 (cf. Fig. 6 (Left)). Performing the finite-
size scaling under the hypothesis Eq. (11), we find $p_{c}=0.159(7)$,
$\nu=1.35(8)$ and $\zeta=0.52(4)$, in agreement with the estimates in Fig. 4.
As a result, the first principal component accesses the universal content of
the monitored quantum system within the classical encoding space without prior
knowledge or choice of the specific observable.
Figure 6: First (Left) and second (Right) quantified principal component
obtained through a random choice of tableau. (Inset) Data collapse
$\bar{w}_{1}=L^{\zeta/\nu}f((p-p_{c})L^{1/\nu})$ with $p_{c}=0.159(7)$,
$\nu=1.35(8)$ and $\zeta=0.52(4)$. These results are compatible with the
analysis of $\bar{w}_{1}$ for the specific choice of gauge induced by the
algorithm in Sec. III.
### IV.2 Intrinsic dimension
We next consider how the intrinsic dimension, a density-independent
quantity applicable to non-linear data spaces, can locate the measurement-
induced criticality. Given the binary nature of our data points, we consider
the Hamming distance, defined for two $N$-dimensional vectors $x$ and $y$ as
the number of entries in which they differ,
$d(x,y)=\sum_{i=1}^{N}\left(1-\delta_{x_{i},y_{i}}\right).$ (13)
With this metric, we perform the 2NN algorithm on the stabilizer
configurations. For each data point we compute the nearest and next-nearest
neighbor distances $r_{1}(G_{i})$ and $r_{2}(G_{i})$ by computing and sorting
$d(G_{i},G_{j})$ for $j\neq i$.
To obtain a robust estimate of the intrinsic dimension, we collect
$N_{\mathrm{data}}=30$ data sets of $N_{s}=5000$ features for each value of $L$
and $p$ considered, and compute the intrinsic dimension over each data set.
Averaging over the $N_{\mathrm{data}}$ data sets, we obtain the final estimate
$I_{d}$ (see Note 4).
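A minimal sketch of the estimator follows (the maximum-likelihood form of the 2NN estimate, $I_{d}=N/\sum_{i}\log\mu_{i}$ with $\mu_{i}=r_{2}/r_{1}$; for binary data the pairwise Hamming distances reduce to two matrix products):

```python
import numpy as np

def two_nn(data: np.ndarray) -> float:
    """2NN intrinsic dimension of an (N_s, d) binary data set under the
    Hamming metric of Eq. (13)."""
    X = data.astype(np.float32)            # exact for counts below 2**24
    D = X @ (1.0 - X).T + (1.0 - X) @ X.T  # pairwise Hamming distances
    np.fill_diagonal(D, np.inf)            # exclude the point itself
    D.sort(axis=1)
    r1, r2 = D[:, 0], D[:, 1]
    keep = r1 > 0                          # drop coincident points
    mu = r2[keep] / r1[keep]
    return keep.sum() / np.log(mu).sum()   # I_d = N / sum_i log(mu_i)
```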
The results are plotted in Fig. 7. We find a linear growth of the intrinsic
dimension for $p\lesssim 0.16$, and a logarithmic one for $p\gtrsim 0.16$. The physical
interpretation of these results is based on the dimensionality of the Hilbert
space. Since the quantum state $\rho$ is obtained by summing over all the
stabilizer Pauli strings $Q$ (cf. Eq. (8)), we have
$\mathrm{dim}\mathcal{H}\propto e^{\gamma I_{d}}$ for some constant $\gamma$.
When $I_{d}$ scales linearly with system size, the Hilbert space explored is
exponentially large and the stationary state is a random stabilizer state.
Conversely, deep in the Zeno phase, the Hilbert space explored is polynomial
in system size. In particular, in the thermodynamic limit, the system is
localized in a zero-measure manifold. As remarked before, these considerations
are consistent with the results obtained using the entanglement measures Li
_et al._ (2018). Let us stress an important difference: while the entanglement
entropy in the Zeno phase saturates, the intrinsic dimension scales
logarithmically. This is because the intrinsic dimension is not a measure of
entanglement, but includes also the classical correlations of the encoding data
set.
The intrinsic dimension develops a non-monotonic universal behavior close to
criticality. We identify the transition using the data-collapse under the
finite-size scaling ansatz
$I_{d}=L^{\alpha/\nu}h((p-p_{c})L^{1/\nu}),$ (14)
adapting the analysis to values of $p\in[p^{\mathrm{est}}_{c}-\delta
p,p^{\mathrm{est}}_{c}+\delta p]$ close to the empirically estimated critical
point $p_{c}^{\mathrm{est}}=0.17$, $\delta p=0.15$. We obtain $p_{c}=0.16(2)$,
$\nu=1.3(1)$ and $\alpha=0.3(1)$, compatible with the literature and with the PCA
analysis in Sec. IV.1 (see Fig. 7). In turn, the critical point corresponds to
the thermodynamic limit $L\to\infty$ of the local minimum position
$p^{*}(L)\equiv\arg\min_{p}I_{d}(L)$. At finite size, this minimum is
estimated by fitting a cubic function around $p_{c}^{\mathrm{est}}$ and
finding the local minimum. The phase transition is encoded in a diverging
correlation length that, in turn, translates to Mendes-Santos _et al._
(2021a)
$p^{*}(L)-p_{c}\propto\frac{1}{L^{1/\nu}}.$ (15)
Therefore, we expect $p_{c}=\lim_{L\to\infty}p^{*}(L)$, that we obtain by
performing a linear fit of $p^{*}(L)$ against $1/L^{1/\nu}$. Our results are
given in Fig. 7, and our estimated critical point is $p_{c}=0.16(2)$, in
agreement with the previous analysis.
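A minimal sketch of this two-step estimate (a cubic fit for $p^{*}(L)$, followed by a linear fit in $L^{-1/\nu}$, cf. Eq. (15); names are ours):

```python
import numpy as np

def extrapolate_pc(id_curves: dict, p_est=0.17, dp=0.15, nu=1.3) -> float:
    """id_curves: dict mapping L -> (p, I_d) arrays.
    Returns p_c = lim_{L -> infinity} p*(L)."""
    sizes, p_stars = [], []
    for L, (p, i_d) in id_curves.items():
        sel = (p > p_est - dp) & (p < p_est + dp)
        c = np.polyfit(p[sel], i_d[sel], 3)  # cubic fit around p_c^est
        stat = np.roots(np.polyder(c))
        stat = stat[np.isreal(stat)].real    # real stationary points
        p_stars.append(stat[np.argmin(np.polyval(c, stat))])  # the minimum
        sizes.append(L)
    slope, intercept = np.polyfit(np.asarray(sizes) ** (-1.0 / nu), p_stars, 1)
    return intercept                         # value at 1/L^{1/nu} -> 0
```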
This local minimum can be understood by virtue of universality. The critical
point is parametrically simpler to describe compared to its vicinity, as
irrelevant fields are negligible in the renormalization group sense. However,
they play an important role in the off-critical region, which increases the
number of parameters needed close to (but away from) the transition. This
picture is _a fortiori_ confirmed in the present setup by the presence of a
conformal field theory Li _et al._ (2019); Yang _et al._ (2022), but holds on
general grounds (i.e. for non-conformal critical points Mendes-Santos _et al._
(2021a, b)).
The critical change of the intrinsic dimension is the hallmark of a geometric
transition in the data space. It relates the change in the dimensionality of
the Hilbert manifold describing the late time state $|\Psi_{T}\rangle$ to a
change in the classical encoding space.
Lastly, we stress that the intrinsic dimension captures the gauge-independent
content of the system. We have performed (though do not show here, for
readability) the intrinsic dimension estimation for random gauge fixings of the
tableau representation $\hat{G}$, finding qualitatively the same results, with
the same critical value $p_{c}$ and exponents $\nu$ and $\alpha$.
Figure 7: (a) Intrinsic dimension for different system sizes $L$. Notice the
non-monotonic behavior, with a minimum close to criticality. (b) Scaling of
the intrinsic dimension with the system size for various values of the
measurement rate. We distinguish a linear region for $p<p_{c}$, and a
logarithmic one for $p>p_{c}$. (c) Data collapse after a finite-size scaling
analysis. The estimated $\nu=1.3(1)$, $p_{c}=0.16(2)$, and $\alpha=0.3(1)$,
are in agreement with the previous analysis (Fig. 4 and Fig. 6). (d)
Estimation of the critical point through the minimum of the intrinsic
dimension. The points are obtained by fitting a third-order polynomial and
locating the minimum. The dashed line is the optimal linear fit in
$1/L^{1/\nu}$, where we excluded small system sizes. The intersection
$p_{c}(L\to\infty)=0.16(2)$ is in agreement with the data collapse.
## V Conclusion and outlooks
In this paper, we employed principal component analysis and intrinsic
dimension estimation to characterize the measurement-induced phase transition
in monitored quantum systems as a geometric transition in the classical
encoding data space.
In full analogy to equilibrium classical physics Wang (2016); Wetzel (2017);
Hu _et al._ (2017), the principal component analysis captures the critical
behavior and the structural change of the phase for stabilizer circuits. This
is exemplified by the first quantified principal component $\bar{w}_{1}$,
which develops a critical finite size scaling around the measurement-induced
transition.
The structural transition is also manifest in the change of the intrinsic
dimension, which behaves linearly in the quantum error-correcting phase and
logarithmically in the quantum Zeno phase. At criticality, the intrinsic
dimension develops a local minimum, which reflects the parametrical simplicity
of the underlying conformal field theory. Overall, our results show full
compatibility with the numerical investigations present in the literature,
while giving a complementary viewpoint on the nature of the measurement-
induced transition.
The considered methods are unsupervised and require no a priori knowledge of
the phase space, which makes them attractive tools in the
investigation of monitored quantum systems. In this paper, we have focused for
simplicity on stabilizer circuits, but the toolbox can be easily adapted to
other monitored frameworks, such as Gaussian systems Ladewig _et al._ (2022);
Minoguchi _et al._ (2022); Müller _et al._ (2022); Buchhold _et al._
(2021); Alberton _et al._ (2021); Boorman _et al._ (2022); Turkeshi _et
al._ (2022a); Turkeshi and Schiró ; Chen _et al._ (2020); Le Gal _et al._ ;
Turkeshi _et al._ (2022b, 2021); Minato _et al._ (2022); Zhang _et al._
(2022); Zhou and Chen (2021); Tang _et al._ (2021); Zhang _et al._ (2021),
many-body interacting models Fuji and Ashida (2020); Tang and Zhu (2020);
Altland _et al._ (2022); Jian _et al._ (2021); Bentsen _et al._ (2021), or
topological and symmetry-protected topological models Fleckenstein _et al._ ;
Klocke and Buchhold ; Kells _et al._ ; Sang and Hsieh (2021); Lavasani _et
al._ (2021a, b).
Furthermore, principal component analysis can be used to preprocess large data
sets in reinforcement and supervised learning methods. We note that such
supervised techniques have been recently shown to identify measurement-induced
phase transition as a learnability problem Barratt _et al._ ; Dehghani _et
al._ , and may be suitably adapted to experimental frameworks Koh _et al._ ;
Roch _et al._ (2014); Noel _et al._ (2022); Czischek _et al._ (2021);
Sierant _et al._ (2022); Sierant and Turkeshi (2022). Similarly, it would be
interesting to extend the unsupervised toolbox for measurement-induced
criticality to variational autoencoders Schmitt and Lenarčič , which provide
an unsupervised neural network method that does not require prior knowledge of
the phase diagram.
###### Acknowledgements.
The author is indebted to M. Dalmonte, R. Fazio, A. Rodriguez, and T. Santos-
Mendes for the collaboration on related topics, and their enlightening
comments on the manuscript. The author is also grateful to S. Pappalardi, M.
Schiró, and I. Macocco for discussions. The author acknowledges support from
the ANR grant "NonEQuMat" (ANR-19-CE47-0001).
## References
* Preskill (2018) J. Preskill, Quantum 2, 79 (2018).
* Roch _et al._ (2014) N. Roch, M. E. Schwartz, F. Motzoi, C. Macklin, R. Vijay, A. W. Eddins, A. N. Korotkov, K. B. Whaley, M. Sarovar, and I. Siddiqi, Phys. Rev. Lett. 112, 170501 (2014).
* (3) J. M. Koh, S.-N. Sun, M. Motta, and A. J. Minnich, 2203.04338 [quant-ph] .
* Wiseman and Milburn (2009) H. M. Wiseman and G. J. Milburn, _Quantum Measurement and Control_ (Cambridge University Press, Cambridge, 2009).
* Nahum _et al._ (2021) A. Nahum, S. Roy, B. Skinner, and J. Ruhman, PRX Quantum 2, 010352 (2021).
* (6) A. C. Potter and R. Vasseur, 2111.08018 [quant-ph] .
* (7) O. Lunt, J. Richter, and A. Pal, 2112.06682 [quant-ph] .
* Sierant _et al._ (2022) P. Sierant, G. Chiriacò, F. M. Surace, S. Sharma, X. Turkeshi, M. Dalmonte, R. Fazio, and G. Pagano, Quantum 6, 638 (2022).
* Kalsi _et al._ (2022) T. Kalsi, A. Romito, and H. Schomerus, J. Phys. A 55, 264009 (2022).
* Li _et al._ (2018) Y. Li, X. Chen, and M. P. A. Fisher, Phys. Rev. B 98, 205136 (2018).
* Li _et al._ (2019) Y. Li, X. Chen, and M. P. A. Fisher, Phys. Rev. B 100, 134306 (2019).
* Skinner _et al._ (2019) B. Skinner, J. Ruhman, and A. Nahum, Phys. Rev. X 9, 031009 (2019).
* Szyniszewski _et al._ (2019) M. Szyniszewski, A. Romito, and H. Schomerus, Phys. Rev. B 100, 064204 (2019).
* Szyniszewski _et al._ (2020) M. Szyniszewski, A. Romito, and H. Schomerus, Phys. Rev. Lett. 125, 210602 (2020).
* Fan _et al._ (2021) R. Fan, S. Vijay, A. Vishwanath, and Y.-Z. You, Phys. Rev. B 103, 174309 (2021).
* Biella and Schiró (2021) A. Biella and M. Schiró, Quantum 5, 528 (2021).
* Kumar _et al._ (2020) P. Kumar, A. Romito, and K. Snizhko, Phys. Rev. Research 2, 043420 (2020).
* Iaconis and Chen (2021) J. Iaconis and X. Chen, Phys. Rev. B 104, 214307 (2021).
* Choi _et al._ (2020) S. Choi, Y. Bao, X.-L. Qi, and E. Altman, Phys. Rev. Lett. 125, 030505 (2020).
* Bao _et al._ (2020) Y. Bao, S. Choi, and E. Altman, Phys. Rev. B 101, 104301 (2020).
* Gullans and Huse (2020a) M. J. Gullans and D. A. Huse, Phys. Rev. X 10, 041020 (2020a).
* Gullans and Huse (2020b) M. J. Gullans and D. A. Huse, Phys. Rev. Lett. 125, 070606 (2020b).
* Zabalo _et al._ (2020) A. Zabalo, M. J. Gullans, J. H. Wilson, S. Gopalakrishnan, D. A. Huse, and J. H. Pixley, Phys. Rev. B 101, 060301 (2020).
* Zabalo _et al._ (2022) A. Zabalo, M. J. Gullans, J. H. Wilson, R. Vasseur, A. W. W. Ludwig, S. Gopalakrishnan, D. A. Huse, and J. H. Pixley, Phys. Rev. Lett. 128, 050602 (2022).
* Sierant and Turkeshi (2022) P. Sierant and X. Turkeshi, Phys. Rev. Lett. 128, 130605 (2022).
* (26) U. Agrawal, A. Zabalo, K. Chen, J. H. Wilson, A. C. Potter, J. H. Pixley, S. Gopalakrishnan, and R. Vasseur, 2107.10279 [cond-mat.dis-nn] .
* Block _et al._ (2022) M. Block, Y. Bao, S. Choi, E. Altman, and N. Y. Yao, Phys. Rev. Lett. 128, 010604 (2022).
* Sharma _et al._ (2022) S. Sharma, X. Turkeshi, R. Fazio, and M. Dalmonte, SciPost Phys. Core 5, 23 (2022).
* Lunt _et al._ (2021) O. Lunt, M. Szyniszewski, and A. Pal, Phys. Rev. B 104, 155111 (2021).
* Turkeshi _et al._ (2020) X. Turkeshi, R. Fazio, and M. Dalmonte, Phys. Rev. B 102, 014315 (2020).
* Ippoliti _et al._ (2021) M. Ippoliti, M. J. Gullans, S. Gopalakrishnan, D. A. Huse, and V. Khemani, Phys. Rev. X 11, 011030 (2021).
* Jian _et al._ (2020) C.-M. Jian, Y.-Z. You, R. Vasseur, and A. W. W. Ludwig, Phys. Rev. B 101, 104302 (2020).
* Lopez-Piqueres _et al._ (2020) J. Lopez-Piqueres, B. Ware, and R. Vasseur, Phys. Rev. B 102, 064202 (2020).
* (34) C.-M. Jian, B. Bauer, A. Keselman, and A. W. W. Ludwig, 2012.04666 [cond-mat.stat-mech] .
* Lang and Büchler (2020) N. Lang and H. P. Büchler, Phys. Rev. B 102, 094204 (2020).
* (36) A. Zabalo, J. H. Wilson, M. J. Gullans, R. Vasseur, S. Gopalakrishnan, D. A. Huse, and J. H. Pixley, 2205.14002 [cond-mat.dis-nn] .
* (37) Y. Li, R. Vasseur, M. P. A. Fisher, and A. W. W. Ludwig, 2110.02988 [cond-mat.stat-mech] .
* Vasseur _et al._ (2019) R. Vasseur, A. C. Potter, Y.-Z. You, and A. W. W. Ludwig, Phys. Rev. B 100, 134203 (2019).
* Ippoliti and Khemani (2021) M. Ippoliti and V. Khemani, Phys. Rev. Lett. 126, 060501 (2021).
* Lu and Grover (2021) T.-C. Lu and T. Grover, PRX Quantum 2, 040319 (2021).
* Mehta _et al._ (2019) P. Mehta, M. Bukov, C.-H. Wang, A. G. Day, C. Richardson, C. K. Fisher, and D. J. Schwab, Phys. Rep. 810, 1 (2019).
* Mendes-Santos _et al._ (2021a) T. Mendes-Santos, X. Turkeshi, M. Dalmonte, and A. Rodriguez, Phys. Rev. X 11, 011040 (2021a).
* Mendes-Santos _et al._ (2021b) T. Mendes-Santos, A. Angelone, A. Rodriguez, R. Fazio, and M. Dalmonte, PRX Quantum 2, 030332 (2021b).
* Wold _et al._ (1987) S. Wold, K. Esbensen, and P. Geladi, Chemom. Intell. Lab. Syst. 2, 37 (1987).
* Wang (2016) L. Wang, Phys. Rev. B 94, 195105 (2016).
* Wetzel (2017) S. J. Wetzel, Phys. Rev. E 96, 022140 (2017).
* Hu _et al._ (2017) W. Hu, R. R. P. Singh, and R. T. Scalettar, Phys. Rev. E 95, 062122 (2017).
* Ch’ng _et al._ (2018) K. Ch’ng, N. Vazquez, and E. Khatami, Phys. Rev. E 97, 013306 (2018).
* Costa _et al._ (2017) N. C. Costa, W. Hu, Z. J. Bai, R. T. Scalettar, and R. R. P. Singh, Phys. Rev. B 96, 195138 (2017).
* Wang and Zhai (2017) C. Wang and H. Zhai, Phys. Rev. B 96, 144432 (2017).
* Khatami _et al._ (2020) E. Khatami, E. Guardado-Sanchez, B. M. Spar, J. F. Carrasquilla, W. S. Bakr, and R. T. Scalettar, Phys. Rev. A 102, 033326 (2020).
* Beach _et al._ (2018) M. J. S. Beach, A. Golubeva, and R. G. Melko, Phys. Rev. B 97, 045207 (2018).
* Lidiak and Gong (2020) A. Lidiak and Z. Gong, Phys. Rev. Lett. 125, 225701 (2020).
* (54) G. Torlai, C. J. Wood, A. Acharya, G. Carleo, J. Carrasquilla, and L. Aolita, 2006.02424 [quant-ph] .
* Martiniani _et al._ (2019) S. Martiniani, P. M. Chaikin, and D. Levine, Phys. Rev. X 9, 011031 (2019).
* Bagrov _et al._ (2020) A. A. Bagrov, I. A. Iakovlev, A. A. Iliasov, M. I. Katsnelson, and V. V. Mazurenko, PNAS 117, 30241 (2020).
* Carleo _et al._ (2019) G. Carleo, I. Cirac, K. Cranmer, L. Daudet, M. Schuld, N. Tishby, L. Vogt-Maranto, and L. Zdeborová, Rev. Mod. Phys. 91, 045002 (2019).
* Carrasquilla (2020) J. Carrasquilla, Adv. Phys.: X 5, 1797528 (2020).
* (59) M. Schmitt and Z. Lenarčič, 2102.11328 [quant-ph] .
* Nielsen and Chuang (2010) M. A. Nielsen and I. L. Chuang, _Quantum Computation and Quantum Information: 10th Anniversary Edition_ (Cambridge University Press, Cambridge, 2010).
* Aaronson and Gottesman (2004) S. Aaronson and D. Gottesman, Phys. Rev. A 70, 052328 (2004).
* Goldt _et al._ (2020) S. Goldt, M. Mézard, F. Krzakala, and L. Zdeborová, Phys. Rev. X 10, 041044 (2020).
* Facco _et al._ (2017) E. Facco, M. d’Errico, A. Rodriguez, and A. Laio, Sci. Rep. 7 (2017).
* Facchi and Pascazio (2002) P. Facchi and S. Pascazio, Phys. Rev. Lett. 89, 080401 (2002).
* Burgarth _et al._ (2014) D. Burgarth, P. Facchi, V. Giovannetti, H. Nakazato, S. Pascazio, and K. Yuasa, Nat. Comm. 5, 5173 (2014).
* Note (1) Determining the measurement result requires the inversion of linear systems over the field $\mathbb{F}_{2}$.
* (67) P. Sierant, M. Schirò, M. Lewenstein, and X. Turkeshi, 2210.11957 .
* Gidney (2021) C. Gidney, Quantum 5, 497 (2021).
* Note (2) The measurement layer described in Sec. III would require $O(L^{3})$ computational resources since, for deterministic measurements, revealing the measurement result $|0\rangle$ or $|1\rangle$ would need inverting a matrix in $\mathbb{F}_{2}$. In Ref. Aaronson and Gottesman (2004) the authors optimize the measurement layers from $O(L^{3})$ to $O(L^{2})$ by considering an additional $L\times(2L+1)$ tableau (of destabilizing generators). We refer to Ref. Aaronson and Gottesman (2004) for a detailed explanation of the algorithm, and here only mention that the destabilizers are numerical tools: they are not stored as features and are neglected in the learning algorithms.
* Note (3) We fix $N_{p}=43$, with $p\in[0.0,0.01,\dots,0.29]\cup[0.35,0.4,0.45,\dots,0.95]$, and vary $N_{s}=200,400,800,1600$. We present data only for $N_{s}=400$, as we find no qualitative dependence of the results on $N_{s}$. The system sizes considered for the PCA range between $L=16$ and $L=320$.
* Pedregosa _et al._ (2011) F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay, Journal of Machine Learning Research 12, 2825 (2011).
* Long _et al._ (2020) Y. Long, J. Ren, and H. Chen, Phys. Rev. Lett. 124, 185501 (2020).
* (73) S. Haldar, S. S. Rahaman, and M. Kumar, 2205.15151 [cond-mat.str-el] .
* Wetzel and Scherzer (2017) S. J. Wetzel and M. Scherzer, Phys. Rev. B 96, 184410 (2017).
* Note (4) We consider $p\in[0.0,0.01,\dots,0.98,0.99]$, and vary the system size between $L=16$ and $L=128$.
* Yang _et al._ (2022) Z.-C. Yang, Y. Li, M. P. A. Fisher, and X. Chen, Phys. Rev. B 105, 104306 (2022).
* Ladewig _et al._ (2022) B. Ladewig, S. Diehl, and M. Buchhold, Phys. Rev. Research 4, 033001 (2022).
* Minoguchi _et al._ (2022) Y. Minoguchi, P. Rabl, and M. Buchhold, SciPost Phys. 12, 9 (2022).
* Müller _et al._ (2022) T. Müller, S. Diehl, and M. Buchhold, Phys. Rev. Lett. 128, 010605 (2022).
* Buchhold _et al._ (2021) M. Buchhold, Y. Minoguchi, A. Altland, and S. Diehl, Phys. Rev. X 11, 041004 (2021).
* Alberton _et al._ (2021) O. Alberton, M. Buchhold, and S. Diehl, Phys. Rev. Lett. 126, 170602 (2021).
* Boorman _et al._ (2022) T. Boorman, M. Szyniszewski, H. Schomerus, and A. Romito, Phys. Rev. B 105, 144202 (2022).
* Turkeshi _et al._ (2022a) X. Turkeshi, L. Piroli, and M. Schiró, Phys. Rev. B 106, 024304 (2022a).
* (84) X. Turkeshi and M. Schiró, 2201.09895 [cond-mat.stat-mech] .
* Chen _et al._ (2020) X. Chen, Y. Li, M. P. A. Fisher, and A. Lucas, Phys. Rev. Research 2, 033017 (2020).
* (86) Y. Le Gal, X. Turkeshi, and M. Schirò, 2210.11937 .
* Turkeshi _et al._ (2022b) X. Turkeshi, M. Dalmonte, R. Fazio, and M. Schirò, Phys. Rev. B 105, L241114 (2022b).
* Turkeshi _et al._ (2021) X. Turkeshi, A. Biella, R. Fazio, M. Dalmonte, and M. Schiró, Phys. Rev. B 103, 224210 (2021).
* Minato _et al._ (2022) T. Minato, K. Sugimoto, T. Kuwahara, and K. Saito, Phys. Rev. Lett. 128, 010603 (2022).
* Zhang _et al._ (2022) P. Zhang, C. Liu, S.-K. Jian, and X. Chen, Quantum 6, 723 (2022).
* Zhou and Chen (2021) T. Zhou and X. Chen, Phys. Rev. B 104, L180301 (2021).
* Tang _et al._ (2021) Q. Tang, X. Chen, and W. Zhu, Phys. Rev. B 103, 174303 (2021).
* Zhang _et al._ (2021) P. Zhang, S.-K. Jian, C. Liu, and X. Chen, Quantum 5, 579 (2021).
* Fuji and Ashida (2020) Y. Fuji and Y. Ashida, Phys. Rev. B 102, 054302 (2020).
* Tang and Zhu (2020) Q. Tang and W. Zhu, Phys. Rev. Research 2, 013022 (2020).
* Altland _et al._ (2022) A. Altland, M. Buchhold, S. Diehl, and T. Micklitz, Phys. Rev. Research 4, L022066 (2022).
* Jian _et al._ (2021) S.-K. Jian, C. Liu, X. Chen, B. Swingle, and P. Zhang, Phys. Rev. Lett. 127, 140601 (2021).
* Bentsen _et al._ (2021) G. S. Bentsen, S. Sahu, and B. Swingle, Phys. Rev. B 104, 094304 (2021).
* (99) C. Fleckenstein, A. Zorzato, D. Varjas, E. J. Bergholtz, J. H. Bardarson, and A. Tiwari, 2201.05341 [cond-mat.mes-hall] .
* (100) K. Klocke and M. Buchhold, 2204.08489 [cond-mat.stat-mech] .
* (101) G. Kells, D. Meidan, and A. Romito, 2112.09787 [quant-ph] .
* Sang and Hsieh (2021) S. Sang and T. H. Hsieh, Phys. Rev. Research 3, 023200 (2021).
* Lavasani _et al._ (2021a) A. Lavasani, Y. Alavirad, and M. Barkeshli, Nature Phys. 17, 342 (2021a).
* Lavasani _et al._ (2021b) A. Lavasani, Y. Alavirad, and M. Barkeshli, Phys. Rev. Lett. 127, 235701 (2021b).
* (105) F. Barratt, U. Agrawal, S. Gopalakrishnan, D. A. Huse, R. Vasseur, and A. C. Potter, 2111.09336 [quant-ph] .
* (106) H. Dehghani, A. Lavasani, M. Hafezi, and M. J. Gullans, 2204.10904 [quant-ph] .
* Noel _et al._ (2022) C. Noel, P. Niroula, D. Zhu, A. Risinger, L. Egan, D. Biswas, M. Cetina, A. V. Gorshkov, M. J. Gullans, D. A. Huse, and C. Monroe, Nature Phys. 18, 760 (2022).
* Czischek _et al._ (2021) S. Czischek, G. Torlai, S. Ray, R. Islam, and R. G. Melko, Phys. Rev. A 104, 062405 (2021).
# Local Navigation and Docking of an Autonomous Robot Mower using
Reinforcement Learning and Computer Vision
Ali Taghibakhshi
Department of Mechanical Science and Engineering
The University of Illinois at Urbana-Champaign
Urbana, Illinois, USA

Nathan Ogden
John Deere Technology Innovation Center
Champaign, Illinois, USA

Matthew West
Department of Mechanical Science and Engineering
The University of Illinois at Urbana-Champaign
Urbana, Illinois, USA
###### Abstract
We demonstrate a successful navigation and docking control system for the John
Deere Tango autonomous mower, using only a single camera as the input. This
vision-only system is of interest because it is inexpensive, simple for
production, and requires no external sensing. This is in contrast to existing
systems that rely on integrated position sensors and global positioning system
(GPS) technologies. To produce our system we combined a state-of-the-art
object detection architecture, You Only Look Once (YOLO), with a reinforcement
learning (RL) architecture, Double Deep Q-Networks (Double DQN). The object
detection network identifies features on the mower and passes its output to
the RL network, providing it with a low-dimensional representation that
enables rapid and robust training. Finally, the RL network learns how to
navigate the machine to the desired spot in a custom simulation environment.
When tested on mower hardware, the system is able to dock with centimeter-
level accuracy from arbitrary initial locations and orientations.
###### Index Terms:
Reinforcement Learning, Deep Q-Learning, Object Detection, YOLO, Mower
## I Introduction
The ever-growing field of autonomous vehicles is a research hotbed for
applying Artificial Intelligence (AI) and Machine Learning (ML). In the past
few years, there has been a wide variety of improvements in autonomous
vehicles. These improvements include, but are not limited to, single-agent
tasks such as path planning, lane changing, and self-driving, and multi-agent
tasks such as collision avoidance and lane management. For instance, in [1],
the authors investigated an optimal lane change decision model for autonomous
vehicles on high-capacity urban roads. In recent studies [2, 3], researchers
introduced a novel lane management framework for heterogeneous traffic with
autonomous vehicles.
Recently, many companies have invested efforts in developing ML-aided driving
for increased comfort and safety of their vehicles. However, many autonomous
driving and navigation systems still rely on sensing modalities such as Laser
Range Finder (LRF), Light Detection and Ranging (LIDAR), and GPS, to name but
a few. These systems can be expensive and may introduce further complications
into the design of an autonomous vehicle. It is thus desirable to produce
systems which use only vision as the control input.
Over the past decade, reinforcement learning techniques and algorithms have
been able to solve complicated decision making problems, both in single-agent
[4, 5] and multi-agent frameworks [6, 7, 8]. With the rise of deep
reinforcement learning, there have been multiple studies on implementing
value-based learning methods, mostly deep Q-learning [5], in the field of
autonomous driving. Using a visual encoding, authors in [9] utilized a low
dimensional representation of the driving environment as the observation input
for model-free RL agents, whose aim is to achieve urban autonomous driving.
[10] has proposed a deep RL framework for autonomous driving, where the RL
agents observe raw sensor outputs and take driving actions.
This paper focuses on the docking procedure for the John Deere Tango robot
mower, which is not reliable in its current form. This mower was designed to
dock using a guide loop wire buried underground, which leads the mower toward
the charging station via an electric current induced in the wire, which is
then sensed by a detection system in the mower. Docking failures can occur
when the wire is misplaced under the ground or there is a bump in the mower's
path. This motivated us to investigate a system that is more robust to
variable initial positions and environmental conditions.
In this paper, we use a combination of supervised and reinforcement learning
algorithms to locally navigate the robot mower and orient it toward the
docking station. First, using transfer learning, we train a YOLO network to
detect two pre-existing markers on the robot mower, which provide positioning
and orientation information. Second, using the output of the object detection
network, we train a DQN agent to learn how to move the robot mower to the
desired position, employing a curriculum training technique. The real-world
scenario is shown in Fig. 1.
Figure 1: Real-world docking scenario. The mower is shown approaching the
charging station under the direction of the RL network, which receives inputs
from the YOLO network which localizes the markers in the camera feed. Marker 1
is the stop button and marker 2 is the John Deere logo.
## II Background
### II-A YOLO
Real-time object detection has been a significant achievement of deep
convolutional neural networks (CNNs) in the field of computer vision. Using
residual connections, deep CNNs can extract complex features from the observed
image and are highly accurate in localizing different objects [11]. Generally,
these networks are trained on multi-object training sets. Given an image,
they draw a bounding box, labeled with the object's tag, around each detected
object. The R-CNN algorithm [12] is arguably the pioneering object localization
algorithm and, ever since its introduction, many other algorithms have been
proposed for the purpose of object detection, including modified versions of
R-CNN such as Fast R-CNN and Faster R-CNN [13, 14]. One of the fastest and most
accurate object detection algorithms is You Only Look Once (YOLO), which
achieves object detection using a fixed-grid regression [15].
### II-B Deep Q-Learning
Value learning is a way of approaching reinforcement learning problems. Deep
Q-Learning (DQN) is a value-learning algorithm that has demonstrated success
on a wide range of tasks, including achieving human-level control in Atari
games [5]. The algorithm combines the traditional Q-learning update with
neural network function approximation. Similarly to many RL algorithms, the
problem is modeled as a discrete-time Markov Decision Process (MDP). At any
time, the agent is in a certain state of the environment's state space,
$S=\{s_{1},s_{2},...,s_{n}\}$, and has some corresponding actions available
from the environment's action space, $A=\{a_{1},a_{2},...,a_{m}\}$, which
influence the transition to the next state. Transitioning from one state to
another provides the agent with a reward $r$, and the goal of agent is to
maximize the sum of its discounted rewards,
$\sum_{t=0}^{\infty}\gamma^{t}r_{t}$, where $0<\gamma\leq 1$ is the discount
factor. The state-action value function is denoted by $Q(s,a)$, and maps a
pair of a state and an action to a real number;
$Q:S{\times}A\rightarrow\mathbb{R}$. The Q-learning update [16] is
$Q(s_{t},a_{t})\leftarrow Q(s_{t},a_{t})+\alpha\Big(r_{t}+\gamma\max_{a^{\prime}}Q(s_{t+1},a^{\prime})-Q(s_{t},a_{t})\Big)$
where $\alpha$ is the learning rate. Mnih _et al._ [5] used the Q-learning
algorithm in conjunction with deep neural networks and a replay buffer to
produce the DQN (Deep Q-Network) algorithm. This uses two networks, the
primary and the target. The primary network is the one that is updated using
Stochastic Gradient Descent (SGD) at every iteration. The target network is a
periodically refreshed copy of the primary network, used for evaluating the
action values; it is updated once every $N\in\mathbb{N}$ iterations so that
action values are evaluated with a recent version of the primary network.
However, the max operator in the DQN algorithm is prone to overestimating the
state-action value function, since it both selects and evaluates the maximizing
action with the same Q network. To mitigate this function approximation
overestimation, one needs to decouple the selection and evaluation tasks. This
leads to the Double DQN algorithm [17], which is used in this research:
Algorithm 1 Double DQN
1:initialize the primary network $Q_{\theta}$, the target network
$Q_{\theta^{\prime}}$, the replay buffer $D$, and $\tau\ll 1$
2:for each iteration do
3: for each environment step do
4: observe state $s_{t}$ and select $a_{t}\sim\pi(s_{t},a_{t})$
5: execute $a_{t}$ and observe the next state $s_{t+1}$
6: and reward $r_{t}=R(s_{t},a_{t})$
7: store $(s_{t},a_{t},r_{t},s_{t+1})$ in the replay buffer $D$
8: end for
9: for each update step do
10: sample $e_{t}=(s_{t},a_{t},r_{t},s_{t+1})\sim D$
11: compute target $Q$ value:
12: $Q^{*}(s_{t},a_{t})\approx r_{t}+\gamma
Q_{\theta^{\prime}}(s_{t+1},\displaystyle\operatorname*{argmax}_{a^{\prime}}Q_{{\theta}}(s_{t+1},a^{\prime}))$
13: perform gradient descent step on
14: $(Q^{*}(s_{t},a_{t})-Q_{\theta}(s_{t},a_{t}))^{2}$
15: update the target network parameters:
$\theta^{\prime}\leftarrow\tau\theta+(1-\tau)\theta^{\prime}$
16: end for
17:end for
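For concreteness, a minimal PyTorch sketch of the core of Algorithm 1 (lines 10–15); the network interfaces and batch format are our assumptions, for illustration only:

```python
import torch
import torch.nn.functional as F

def double_dqn_loss(primary, target, batch, gamma=0.99):
    """batch: tensors (s, a, r, s_next) sampled from the replay buffer D."""
    s, a, r, s_next = batch
    with torch.no_grad():
        a_star = primary(s_next).argmax(dim=1, keepdim=True)  # selection
        q_next = target(s_next).gather(1, a_star).squeeze(1)  # evaluation
        q_star = r + gamma * q_next                           # line 12
    q = primary(s).gather(1, a.unsqueeze(1)).squeeze(1)
    return F.mse_loss(q, q_star)                              # line 14

def soft_update(primary, target, tau=0.005):
    """Line 15: theta' <- tau * theta + (1 - tau) * theta'."""
    for p, p_t in zip(primary.parameters(), target.parameters()):
        p_t.data.mul_(1.0 - tau).add_(tau * p.data)
```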
## III Simulation and Training
### III-A Motivation for Object Detection with RL
The charging station of the mower is viewed by a Logitech c270 webcam, and the
aim is to navigate the mower toward the docking station and either stop it at
a desired position or help it dock, using only the vision input. Therefore,
the environment of the problem we are trying to solve, the video feed of the
camera, is not only very high dimensional, but it is also dependent on where
the setup is located, and it varies from yard to yard. Hence, we lower the
dimensionality of the environment and extract key features of the video feed
to both improve system robustness and accelerate RL training.
We use the YOLO algorithm to locate bounding boxes around two markers on the
mower, one at the front and the other at the back of its top surface. The
output of the YOLO network is then passed to the RL network as input.
Accordingly, the RL agent's observation of the world is low dimensional, and
the agent is easier to train in this space. Because the two markers are located
at opposite ends of the mower, the RL agent can sense the angle that the mower
makes with the straight line toward the docking station: both bounding boxes
lie in the center of the image exactly when the mower is oriented toward the
docking station. Therefore, in the setup designed in this paper, the centers of
the bounding boxes in the image are the information passed to the RL network,
which outputs linear velocity and steering rate as actions.
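A minimal sketch of this observation construction (the resolution constants, dictionary format, and class name are ours, for illustration):

```python
import numpy as np
from collections import deque

WIDTH, HEIGHT = 640, 480  # assumed camera resolution

class ObservationBuffer:
    """Stacks the normalized marker-box centers over the last three
    time-steps into a 12-dimensional RL observation."""
    def __init__(self, history=3):
        self.frames = deque(maxlen=history)

    def push(self, boxes):
        # boxes: {"marker1": (x, y, w, h), "marker2": (x, y, w, h)},
        # top-left corner plus size in pixels (assumed detector output)
        centers = []
        for name in ("marker1", "marker2"):
            x, y, w, h = boxes[name]
            centers.append(2.0 * (x + w / 2) / WIDTH - 1.0)   # u in (-1, 1)
            centers.append(2.0 * (y + h / 2) / HEIGHT - 1.0)  # v in (-1, 1)
        self.frames.append(centers)

    def observation(self):
        while len(self.frames) < self.frames.maxlen:
            self.frames.appendleft(self.frames[0])  # pad at episode start
        return np.asarray(self.frames, dtype=np.float32).reshape(-1)
```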
### III-B RL Simulation Environment
We have designed a simple simulation environment for training the RL agent.
The kinematics of the agent in the simulation environment are ODEs driven by
the linear velocity and angular steering rate. The simulation environment
simulates the motion and computes the view of the markers from the point of
view of the camera. The markers are drawn as quadrilaterals in the simulation,
one in red and the other in black, and the rest of the simulated image is just
a white background, representing the remainder of the environment. The mower
initially starts at a uniformly distributed random position, $(X,Y)\sim
U(-0.2,0.2)\times U(-0.2,0.2)$ in meters, and orientation, $\theta\sim
U(-30,30)$ in degrees. Throughout the paper, $X$ and $Y$ coordinates
correspond to the position of the rear axle of the mower. The environment is
shown in Fig. 2. We stop training the agent when the average reward reaches a
certain threshold, which was set experimentally. During training, the episode
ends (and the mower is manually reset) when the $Y$ component of its position
exceeds 1 m, and the goal is to minimize the $X$ and $\theta$ offsets from
zero at that moment. This target goal was
chosen because it enables the two metallic rods on the front of the physical
mower to connect to the metal pads in the charging station.
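A minimal sketch of such kinematics (we assume a unicycle model with $\theta$ measured from the straight line toward the station, integrated with forward Euler at the 5 Hz control period; the model choice is ours):

```python
import numpy as np

def step(state, v, omega, dt=0.2):
    """state = (X, Y, theta); v = linear velocity, omega = steering rate.
    theta = 0 means the mower points straight at the docking station."""
    X, Y, theta = state
    X += v * np.sin(theta) * dt   # lateral drift
    Y += v * np.cos(theta) * dt   # progress toward the station
    theta += omega * dt
    return np.array([X, Y, theta])
```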
Figure 2: Simulation environment. Left: the view from above of the mower
approaching the charging station. Right: the simulated camera view of the two
markers.
The DQN agent takes the center positions of the bounding boxes at the last
three time-steps as its input. The reason for using the data at the last three
time-steps is to provide the agent with some information about the past to
allow the estimation of velocity and acceleration. The agent outputs the
desired linear velocity and angular rate based on its observation. The DQN
network is shown in Fig. 3, where the state component has two fully connected
layers with 16 and 32 neurons and the action component has a single layer with
32 neurons.
For training the agent, we used curriculum training with four phases, numbered
$i=1,\ldots,4$. In the first phase the agent is rewarded to go forward, with a
small reward for arriving near the target docking location. The subsequent
phases increasingly reward more accurate docking. The four phases are
distinguished by different reward functions $r^{i}_{t}$ defined by
$r^{i}_{t}=\begin{cases}R^{i}&\text{if }Y\geq 1{\rm\ m}\text{ and }|u^{1}|,|u^{2}|<c^{i}_{1},\\ 0&\text{if }Y\geq 1{\rm\ m}\text{ and }|u^{1}|,|u^{2}|<2c^{i}_{1},\\ -\big(10|v^{1}(t)-v^{1}_{0}|+10|v^{2}(t)-v^{2}_{0}|+c^{i}_{2}|u^{1}|+c^{i}_{2}|u^{2}|\big)&\text{otherwise,}\end{cases}$
where $(c^{i}_{1},c^{i}_{2})$ are constants with values $(0.05,2)$,
$(0.05,5)$, $(0.02,5)$, $(0.02,10)$ for $i=1,2,3,4$, respectively;
$R^{i}=0$ for $i=1$, and $R^{i}=150$ for $i=2,3,4$;
$u^{1},v^{1},u^{2},v^{2}\in(-1,1)$ are the normalized coordinates of the
centers of the two markers; and $v^{1}_{0},v^{2}_{0}$ are the $y$ components of
the markers when the mower is at $Y=1{\rm\ m}$, $X=0{\rm\ m}$, $\theta=0^{\circ}$.
The training data is shown in Fig. 4.
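A minimal sketch of this reward schedule, as a direct transcription of $r^{i}_{t}$ (names are ours):

```python
# (c1, c2, R) for the curriculum phases i = 1..4
PHASES = {1: (0.05, 2, 0), 2: (0.05, 5, 150), 3: (0.02, 5, 150), 4: (0.02, 10, 150)}

def reward(phase, Y, u1, u2, v1, v2, v1_0, v2_0):
    """u, v: normalized marker-center coordinates; v*_0: goal values."""
    c1, c2, R = PHASES[phase]
    if Y >= 1.0 and abs(u1) < c1 and abs(u2) < c1:
        return R
    if Y >= 1.0 and abs(u1) < 2 * c1 and abs(u2) < 2 * c1:
        return 0.0
    return -(10 * abs(v1 - v1_0) + 10 * abs(v2 - v2_0)
             + c2 * abs(u1) + c2 * abs(u2))
```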
Figure 3: The RL network architecture. The observation head receives labeled
markers in the last three time-steps of the environment and feeds it through
the network. The action head also passes each action through a single layer.
The last layers of each head are added together and passed through the final
layer to output the $Q$ values.
Figure 4: Training graphs of the agent during the four curriculum phases, with
panels from top left, top right, bottom left, and bottom right corresponding
to reward functions $r^{1}$ through $r^{4}$.
Figure 5: The two YOLO networks. Top: YOLO network trained on real-world mower
data. Bottom: YOLO network trained on simulation images. In both cases the
image is sent to the corresponding YOLO network and the outputs are the
bounding boxes. Finally, the normalized positions of the center of the
bounding boxes are extracted as a 4 dimensional vector.
### III-C Object Detection Network
There are two YOLO object detection networks used in this study: one for
detecting the markers in the simulation and another for detecting the real-
world markers in the actual camera images. For each, a pre-trained YOLO network
was further trained to detect the markers. The training data for the networks
is shown in Fig. 5. Each network was trained on about 3000 labelled images of
the markers. The pre-trained feature extractor used ResNet50 [11], and the
layers after the 20th ReLU activation were re-trained.
## IV Hardware Experiments
### IV-A Experiment Setup
A Mosquitto MQTT broker was used to send the agent’s actions to the mower via
an Ethernet cable, and the connection between the computer and the mower was
controlled by a Raspberry Pi. As in the simulation, the mower started from an
initial position $(X,Y)$ and orientation $\theta$. The actions were sent to
the mower at a frequency of $5$ Hz. The only input to the controlling RL agent
was from the camera.
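For illustration, a minimal sketch of the action streaming using the paho-mqtt client; the topic name, payload schema, broker address, and helper functions are ours, not the actual deployment code:

```python
import json
import time
import paho.mqtt.client as mqtt

client = mqtt.Client()
client.connect("raspberrypi.local", 1883)  # Mosquitto broker on the Pi

def stream_actions(agent, camera, rate_hz=5):
    while not agent.done():                    # hypothetical stop condition
        v, omega = agent.act(camera.frame())   # hypothetical agent/camera API
        client.publish("mower/cmd", json.dumps({"v": v, "omega": omega}))
        time.sleep(1.0 / rate_hz)
```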
### IV-B Experiment Results
A total of 90 tests were performed with the initial position and orientation
of the mower $(X,Y,\theta)$ taken as all combinations of $X=-0.2,0,0.2$ m,
$Y=-0.2,0,0.2$ m, and $\theta=-30,-15,0,15,30^{\circ}$, with each initial
condition being tested twice. The performance of the agent was measured based
the $X$, $Y$, and $\theta$ offsets of the mower when it stopped (when its
observed $Y$ component has exceeded 1 m). The experiment data is shown in Fig.
6 and summary results are given in Tab. I, giving maximum absolute error, mean
absolute error (MAE), and root mean squared error (RMSE). The mower had a
maximum final position error of less than $4$ cm in both $X$ and $Y$
directions and a maximum final orientation error of less than $7^{\circ}$,
representing successful positioning of the mower in all cases. The mean
absolute error was less than 1 cm in both $X$ and $Y$ directions and less than
$2^{\circ}$ in orientation.
TABLE I: Experiment results.

Error Measure | $X$ Offset (cm) | $Y$ Offset (cm) | $\theta$ Offset (deg)
---|---|---|---
Max Abs. Error | 3.800 | 3.642 | 6.200
Mean Abs. Error | 0.822 | 0.934 | 1.533
RMSE | 0.896 | 1.182 | 1.661
Figure 6: Experiment configurations. The mower started from one of the
positions and orientations shown in green, and from each initial configuration
two tests were performed. The desired final position is marked by a red cross.
The final positions of the mower are magnified and the final $X$, $Y$, and
$\theta$ values are shown.
## V Conclusions
We demonstrated a cheap and effective control system for autonomous docking of
a robotic lawn mower (the John Deere Tango mower), using only vision from a
single camera as the sensor. This system was shown to be robust in hardware
tests, achieving centimeter-level docking precision.
The controller was a neural network trained using reinforcement learning
(Double DQN) in a simple simulated environment. To avoid the need to simulate
realistic vision inputs, we trained an object detection network (YOLO) to
isolate two markers on the mower. The location of these markers was the only
input to the controller agent, making it easy to produce similar inputs in
simulation.
In addition to the ease of training, the use of an initial object detection
network made the final system robust to different backgrounds and other
environment variations. Our choice of markers was opportunistic (we used
existing features on the mower) and it is likely that custom markers may be
even better. We believe that this paradigm of an object detection network
coupled with an RL agent could be an effective strategy for other robot motion
control tasks.
## Acknowledgment
This research was supported by the John Deere Technology Innovation Center.
## References
* [1] P. Cao, Y. Hu, T. Miwa, Y. Wakita, T. Morikawa, and X. Liu, “An optimal mandatory lane change decision model for autonomous vehicles in urban arterials,” _Journal of Intelligent Transportation Systems_ , vol. 21, no. 4, pp. 271–284, 2017.
* [2] P. Karimi Shahri, S. Chintamani Shindgikar, B. HomChaudhuri, and A. H. Ghasemi, “Optimal lane management in heterogeneous traffic network,” in _ASME 2019 Dynamic Systems and Control Conference_. American Society of Mechanical Engineers Digital Collection, 2019.
* [3] P. K. Shahri, A. H. Ghasemi, and V. Izadi, “Optimal lane management in heterogeneous traffic network using extremum seeking approach,” SAE Technical Paper, Tech. Rep., 2020.
* [4] D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot _et al._ , “Mastering the game of go with deep neural networks and tree search,” _nature_ , vol. 529, no. 7587, pp. 484–489, 2016.
* [5] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski _et al._ , “Human-level control through deep reinforcement learning,” _nature_ , vol. 518, no. 7540, pp. 529–533, 2015.
* [6] O. Vinyals, I. Babuschkin, W. M. Czarnecki, M. Mathieu, A. Dudzik, J. Chung, D. H. Choi, R. Powell, T. Ewalds, P. Georgiev _et al._ , “Grandmaster level in starcraft ii using multi-agent reinforcement learning,” _Nature_ , vol. 575, no. 7782, pp. 350–354, 2019.
* [7] A. Shojaeighadikolaei, A. Ghasemi, K. R. Jones, A. G. Bardas, M. Hashemi, and R. Ahmadi, “Demand responsive dynamic pricing framework for prosumer dominated microgrids using multiagent reinforcement learning,” _arXiv preprint arXiv:2009.10890_ , 2020.
* [8] A. Ghasemi, A. Shojaeighadikolaei, K. Jones, M. Hashemi, A. G. Bardas, and R. Ahmadi, “A multi-agent deep reinforcement learning approach for a distributed energy marketplace in smart grids,” in _2020 IEEE International Conference on Communications, Control, and Computing Technologies for Smart Grids (SmartGridComm)_ , 2020, pp. 1–6.
* [9] J. Chen, B. Yuan, and M. Tomizuka, “Model-free deep reinforcement learning for urban autonomous driving,” in _2019 IEEE Intelligent Transportation Systems Conference (ITSC)_. IEEE, 2019, pp. 2765–2771.
* [10] A. E. Sallab, M. Abdou, E. Perot, and S. Yogamani, “Deep reinforcement learning framework for autonomous driving,” _Electronic Imaging_ , vol. 2017, no. 19, pp. 70–76, 2017.
* [11] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2016, pp. 770–778.
* [12] R. Girshick, J. Donahue, T. Darrell, and J. Malik, “Rich feature hierarchies for accurate object detection and semantic segmentation,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2014, pp. 580–587.
* [13] R. Girshick, “Fast r-cnn,” in _Proceedings of the IEEE international conference on computer vision_ , 2015, pp. 1440–1448.
* [14] S. Ren, K. He, R. Girshick, and J. Sun, “Faster r-cnn: towards real-time object detection with region proposal networks,” _IEEE transactions on pattern analysis and machine intelligence_ , vol. 39, no. 6, pp. 1137–1149, 2016.
* [15] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You only look once: Unified, real-time object detection,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2016, pp. 779–788.
* [16] R. S. Sutton and A. G. Barto, _Reinforcement learning: An introduction_. MIT press, 2018.
* [17] H. Van Hasselt, A. Guez, and D. Silver, “Deep reinforcement learning with double q-learning,” in _Proceedings of the AAAI Conference on Artificial Intelligence_ , vol. 30, no. 1, 2016.
# Manifestly Phased Communication via Shared Session Types
Chuta Sano, Stephanie Balzer, and Frank Pfenning
Department of Computer Science, Carnegie Mellon University, Pittsburgh, USA
###### Abstract.
Session types denote message protocols between concurrent processes, allowing a type-safe expression of inter-process communication. Although previous work demonstrates a well-defined notion of subtyping where processes have different perceptions of the protocol, these formulations were limited to linear session types, where each channel of communication has a unique provider and client. In this paper, we extend subtyping to shared session types, where channels can have multiple clients instead of a single client. We demonstrate that this
generalization can statically capture protocol requirements that span multiple
phases of interactions of a client with a shared service provider, something
not possible in prior proposals. Moreover, the phases are manifest in the type
of the client.
###### Key words and phrases:
session types, subtyping, sharing
Note: This is a revised and extended version of a paper presented at COORDINATION 2021 [SBP21]. The main changes in this version include an additional example demonstrating phasing in Section 5.1 and a formalization of a system implementing our work in Section 6, with proofs of relevant metatheorems in the Appendix.
## 1. Introduction
Session types prescribe bidirectional communication protocols between
concurrent processes [Hon93, HVK98]. Variations of this type system were later
given logical correspondences with _intuitionistic_ [CP10] and _classical_
[Wad12] linear logic where proofs correspond to programs and cut reduction to
communication. This correspondence mainly provides an interpretation of
_linear session types_ , which denote sessions with exactly one client and one
provider. _Shared session types_ , which encode communication between multiple
clients and one provider, were proposed with a _sharing semantics_
interpretation in prior work [BP17]. Clients communicating along a shared
channel follow an _acquire-release_ discipline where they must first _acquire_
exclusive access to the provider, communicate linearly, and then finally
_release_ the exclusive access, allowing other clients to acquire.
However, not all protocols that follow this acquire-release paradigm are safe: if a client that successfully acquires some shared channel of type $A$ releases it at an unrelated type $B$, other clients that are blocked while trying to acquire will still see the channel as type $A$, while the provider will see it as type $B$. To resolve this, we require the additional constraint that clients release a channel at the same type at which they acquired it. This is formally expressed in [BP17] as the _equi-synchronizing_ constraint, which statically verifies that a session type never releases at a type different from the one at which it was acquired. Although shared session types play an important role in making session-typed process calculi applicable to practical scenarios, they cannot express _phases_, that is, protocols spanning successive acquire-release cycles, because the equi-synchronizing constraint is too restrictive (see Section 5) [San19].
We demonstrate that subtyping, first formalized in the session-typed process calculi setting by Gay and Hole [GH05], and its behavior across the linear and shared modalities provide the groundwork for an elegant relaxation of the equi-synchronizing constraint, allowing phases to be _manifest_ in the session type. In message-passing concurrency, subtyping allows a client and provider to safely maintain their own local views of the session type (or protocol) associated with a particular channel. Although previous work [GH05, AP16] investigates subtyping in the purely linear session type setting, we found that extending these results to the combined linear and shared setting of [BP17] yields powerful results of both practical and theoretical significance.
In this paper, we propose $SILL_{S{\leq}}$, an extension of $SILL_{S}$ [BP17] with subtyping, and show that metatheorems such as progress and preservation that hold in $SILL_{S}$ continue to hold in $SILL_{S{\leq}}$. In particular, we introduce the _subsynchronizing_ constraint, a relaxation of the equi-synchronizing constraint, which denotes under what conditions clients and providers can safely disagree on the protocol in shared communication.
The main contributions of this paper include:
* •
A full formalization of a subtyping relation for shared session types and
their metatheory.
* •
The introduction of the subsynchronizing constraint, a relaxation of the equi-
synchronizing constraint.
* •
Demonstration of $SILL_{S{\leq}}$, a message passing concurrency system with
shared subtyping, along with proofs of the progress and preservation theorems.
* •
Illustrations of practical examples in this richer type system, further
bridging the gap between session-typed process calculi and practical
programming languages.
The rest of the paper proceeds as follows: Section 2 provides a brief
introduction to linear and shared session-typed message-passing concurrency.
Section 3 demonstrates the inability of prior systems to express phasing and
motivates our approach. Section 4 provides an introduction to linear subtyping
along with an attempt to extend the relation to the shared setting. Section 5
introduces the notion of phasing and the subsynchronizing judgment. Section 6
presents a message passing concurrent system using our type system and the
corresponding progress and preservation statements. Section 7 discusses
related work. Section 8 concludes the paper with some points of discussion and
future work. Finally, the Appendix contains detailed proofs of metatheorems
and lemmas that we introduce in the paper.
## 2. Background
### 2.1. Linear Session Types
Based on the correspondence established between intuitionistic linear logic and the session-typed $\pi$-calculus [CP10, Ton15], we can interpret an intuitionistic _linear_ sequent
$A_{1},A_{2},\ldots,A_{n}\vdash B$
as the typing judgment for a process $P$ by annotating the linear propositions
with channel names:
$\underbrace{a_{1}:A_{1},a_{2}:A_{2},\ldots,a_{n}:A_{n}}_{\Delta}\vdash
P::(b:B)$
Interpreted as a typing judgment, we say that process $P$ _provides_ a session
of type $B$ along channel $b$ while _using_ channels $a_{1},\ldots,a_{n}$ with
session types $A_{1},\ldots,A_{n},$ respectively. Interpreted as a sequent, we
say that $P$ is a proof of some proposition $B$ with hypotheses
$A_{1},\ldots,A_{n}$. Following linear logic, the context $\Delta$ is restricted and rejects contraction and weakening. Programmatically, this means that linear channels can be neither aliased nor freely deleted; they must be consumed exactly once.
Since the session type associated with a channel denotes a bidirectional
protocol, each connective has two operational interpretations – one from the
perspective of the provider and one from the client. This operationally dual
interpretation results in a schema where for any connective, either the client
or provider will send while the other will receive as summarized in Table 1.
For example, a channel of type $A\otimes 1$ requires that the provider sends a
channel of type $A$ and proceeds as type $1$ while the client receives a
channel of type $A$ and proceeds as $1$. The multiplicative unit $1$ denotes
the end of the protocol – the provider must terminate and close its channel
while a client must wait for the channel to be closed. A channel of type
$\oplus\\{{\overline{l:A}}\\}$ ($n$-nary internal choice) requires the
provider to choose and send a label $i$ in $\overline{l}$ and proceed as
$A_{i}$ while the client must receive and branch on some label $i$ and proceed
as $A_{i}$. Similarly, a channel of type $\&\\{{\overline{l:A}}\\}$ requires
the client to choose and send a label and the provider to receive and branch
on a label. The _continuation type_ of some session type refers to the type
after a message exchange; for example, $B$ would be the continuation type of
$A\otimes B$ and similarly $A_{i}$ of $\oplus\\{{\overline{l:A}}\\}$ for some
$i$ in $\overline{l}$. The unit $1$ does not have a continuation type since it
marks the end of communication.
Type | Interpretation from provider | Interpretation from client | Continuation
---|---|---|---
$1$ | Close channel (terminate) | Wait for channel to close | -
$A\otimes B$ | Send channel of type $A$ | Receive channel of type $A$ | $B$
$A\multimap B$ | Receive channel of type $A$ | Send channel of type $A$ | $B$
$\oplus\\{{\overline{l:A}}\\}$ | Send a label $i\in\overline{l}$ | Receive and branch on $i\in\overline{l}$ | $A_{i}$
$\&\\{{\overline{l:A}}\\}$ | Receive and branch on $i\in\overline{l}$ | Send a label $i\in\overline{l}$ | $A_{i}$
Table 1. A summary of the linear connectives and their operational
interpretations
We consider a session type denoting the interaction with a provider of a queue
of integers, which we will develop throughout the paper:
$\displaystyle\textbf{queue}=\&\\{\mathit{enqueue}:$
$\displaystyle\text{int}\supset\textbf{queue},$
$\displaystyle\mathit{dequeue}:$
$\displaystyle\oplus\\{{\mathit{some}:\text{int}\land\textbf{queue},\mathit{none}:\textbf{queue}}\\}\\}$
where we informally adopt value input and output $\supset$ and $\land$ [Ton15]
as value analogues to channel input and output $\multimap$ and $\otimes$,
respectively, which are orthogonal to the advancements in this work. Following
this protocol, a client must send a label $\mathit{enqueue}$ or
$\mathit{dequeue}$. If it chooses $\mathit{enqueue}$, it must send an int and
then recur, and on the other hand, if it chooses $\mathit{dequeue}$, it will
receive either some int as indicated by the $\mathit{some}$ branch of the
internal choice or nothing as indicated by the $\mathit{none}$ branch. In
either case, we let the queue recur (we do not consider termination, to more easily align with later examples). Dually, a server must first receive a label
$\mathit{enqueue}$ or $\mathit{dequeue}$ from the client. If it receives an
$\mathit{enqueue}$, it will receive an int and then recur. If it receives a
$\mathit{dequeue}$ instead, it must either send a $\mathit{some}$ label
followed by the appropriate int and then recur or send a $\mathit{none}$ label
and then recur.
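To make the two operational readings concrete, the following Rust sketch (our illustration, not $SILL_{S}$ syntax; all names are ours) encodes the messages of the queue protocol: the external choice $\&\{\mathit{enqueue},\mathit{dequeue}\}$ becomes an enum of requests the provider receives, and the internal choice $\oplus\{\mathit{some},\mathit{none}\}$ an enum of responses it sends.

```rust
// Messages of the queue protocol; names mirror the session type's labels.
enum QueueReq {
    Enqueue(i32), // client supplies an int, then the protocol recurs
    Dequeue,      // provider must answer with a DequeueResp, then recur
}

enum DequeueResp {
    Some(i32), // the "some" branch of the internal choice carries an int
    None,      // the "none" branch carries nothing
}

// One round of a provider obeying the protocol against a Vec-backed store.
fn step(store: &mut Vec<i32>, req: QueueReq) -> Option<DequeueResp> {
    match req {
        QueueReq::Enqueue(n) => {
            store.push(n);
            None // nothing to send back; the session simply recurs
        }
        QueueReq::Dequeue => Some(if store.is_empty() {
            DequeueResp::None
        } else {
            DequeueResp::Some(store.remove(0))
        }),
    }
}
```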
We adopt an _equi-recursive_ [CHP99] interpretation which requires that
recursive session types be _contractive_ [GH05], guaranteeing that there are
no messages associated with the unfolding of a recursive type. This in
particular requires that we reason about session types _coinductively_.
We now attempt to encode a protocol representing an auction based on [DBH+21].
An auction transitions between the bidding phase where clients are allowed to
place bids and the collecting phase where a winner is given the item while all
the losers are refunded their respective bids.
$\displaystyle\textbf{bidding}=\&\\{\mathit{bid}:$
$\displaystyle\oplus\\{\mathit{ok}:\text{id}\supset\text{money}\supset\textbf{bidding},$
$\displaystyle\mathit{collecting}:\textbf{collecting}\\}\\}$
$\displaystyle\textbf{collecting}=\&\\{\mathit{collect}:\text{id}\supset$
$\displaystyle\oplus\\{\mathit{prize}:\text{item}\land\textbf{bidding},$
$\displaystyle\quad\mathit{refund}:\text{money}\land\textbf{bidding},$
$\displaystyle\quad\mathit{bidding}:\textbf{bidding}\\}\\}$
In this example, we make the bidding phase and collecting phase explicit by
separating the protocol into bidding and collecting. Beginning with bidding, a
client must send a $\mathit{bid}$ label (the currently unnecessary unary choice will be useful later). The provider will either respond with an
$\mathit{ok}$, allowing the client to make a bid by sending its id, money, and
then recursing back to bidding, or a $\mathit{collecting}$, indicating that
the auction is in the collecting phase and thereby making the client
transition to collecting.
For collecting, the client must send a $\mathit{collect}$ label. For ease of
presentation, we require the client to also send its id immediately, giving
enough information to the provider to know if the client should receive a
$\mathit{prize}$ or a $\mathit{refund},$ along with $\mathit{bidding}$ if the
client is in the wrong phase. The $\mathit{prize}$ branch covers the case
where the client won the previous bid, the $\mathit{refund}$ branch covers the
case where the client lost the bid, and the $\mathit{bidding}$ branch informs
the client that the auction is currently in the bidding phase.
Because linear channels have exactly one provider and one client, what we have
described so far only encodes a single participant auction. One can assert
that the provider is actually a broker to an auction of multiple participants,
but that does not solve the fundamental problem, that is, encoding shared
communication with multiple clients.
### 2.2. Shared Session Types
Although linear session types and their corresponding process calculi give a system with strong guarantees such as _session fidelity_ (preservation) and _deadlock freedom_ (progress), they are not expressive enough to model systems with shared resources, as we saw in the previous section while attempting to encode an auction. Since multiple clients cannot simultaneously
communicate with a single provider in an unrestricted manner, we adopt an
_acquire-release_ paradigm. The only action a client can perform on a shared
channel is to send an acquire request, which the provider must accept. After
successfully acquiring, the client is guaranteed to have exclusive access to
the provider and therefore can communicate linearly until the client releases
its exclusive access.
Instead of treating the acquire and release operations as mere operational
primitives, prior work [BP17] extends the type system such that the acquire
and release points are manifest in the type by stratifying session types into
shared and linear types. Unlike linear channels, shared channels are
unrestricted in that they can be freely aliased or deleted. In the remaining
sections, we will make the distinction between linear and shared explicit by
marking channel names and session type meta-variables with subscripts $L$ and
$S$ respectively where appropriate. For example, a linear channel is marked
$a_{\scriptscriptstyle L}$, while a shared channel is marked
$b_{\scriptscriptstyle S}$.
Since shared channels represent unrestricted channels that must first be acquired, they are constructed by the modal upshift operator: ${\uparrow_{L}^{S}}A_{\scriptscriptstyle L}$ for some $A_{\scriptscriptstyle L}$ requires clients to acquire and then proceed linearly as prescribed by $A_{\scriptscriptstyle L}$. Similarly, the modal downshift operator ${\downarrow_{L}^{S}}B_{\scriptscriptstyle S}$ for some $B_{\scriptscriptstyle S}$ requires clients to release and then proceed as prescribed by the shared type $B_{\scriptscriptstyle S}$. Type-theoretically, these modal shifts mark transitions from shared to linear and vice versa. In summary, we have:
$$\begin{aligned}
\text{(Shared Layer)}\quad & A_{\scriptscriptstyle S}\;::=\;{\uparrow_{L}^{S}}A_{\scriptscriptstyle L}\\
\text{(Linear Layer)}\quad & A_{\scriptscriptstyle L},B_{\scriptscriptstyle L}\;::=\;{\downarrow_{L}^{S}}A_{\scriptscriptstyle S}\;|\;1\;|\;A_{\scriptscriptstyle L}\otimes B_{\scriptscriptstyle L}\;|\;A_{\scriptscriptstyle L}\multimap B_{\scriptscriptstyle L}\;|\;\&\{\overline{l{:}A_{\scriptscriptstyle L}}\}\;|\;\oplus\{\overline{l{:}A_{\scriptscriptstyle L}}\}
\end{aligned}$$
where we emphasize that the previously defined (linear) type operators such as
$\otimes$ remain only at the linear layer – a shared session type can only be
constructed by a modal upshift ${\uparrow_{L}^{S}}$ of some linear session
type $A_{\scriptscriptstyle L}$.
As initially introduced, clients of shared channels follow an _acquire-
release_ pattern – they must first acquire exclusive access to the channel,
proceed linearly, and then finally release the exclusive access that they had,
allowing other clients of the same shared channel to potentially acquire
exclusive access. The middle linear section can also be viewed as a _critical
region_ since the client is guaranteed unique access to a shared provider
process. Therefore, this system naturally supports atomic operations on shared
resources.
Using shared channels, we can encode a shared queue, where there can be
multiple clients interacting with the same data:
$$\begin{aligned}
\textbf{shared\_queue}={\uparrow_{L}^{S}}\&\{\mathit{enqueue}:\;&\text{int}\supset{\downarrow_{L}^{S}}\textbf{shared\_queue},\\
\mathit{dequeue}:\;&\oplus\{\mathit{some}:\text{int}\land{\downarrow_{L}^{S}}\textbf{shared\_queue},\\
&\quad\;\;\mathit{none}:{\downarrow_{L}^{S}}\textbf{shared\_queue}\}\}
\end{aligned}$$
A client of such a channel must first send an acquire message, being blocked
until the acquisition is successful. Upon acquisition, the client must then
proceed linearly as in the previously defined linear queue. The only
difference is that before recursing, the client must release its exclusive
access, allowing other blocked clients to successfully acquire.
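Operationally, this discipline behaves much like lock-based mutual exclusion. The following Rust sketch is only an analogy, with a `Mutex`-guarded vector standing in for the shared provider: taking the lock corresponds to acquiring, the guarded section to the linear portion of the session, and dropping the guard to releasing.

```rust
use std::sync::{Arc, Mutex};

fn client(shared: Arc<Mutex<Vec<i32>>>) {
    let mut queue = shared.lock().unwrap(); // acquire: exclusive access begins
    queue.push(42);                         // linear phase: enqueue 42
    drop(queue);                            // release: other clients may now acquire
}
```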
## 3. Equi-synchronizing Rules Out Phasing
We can also attempt to salvage the previous iteration of encoding (multi-
participant) auctions by “wrapping” the previous purely linear protocol
between ${\uparrow_{L}^{S}}$ and ${\downarrow_{L}^{S}}$.
$$\begin{aligned}
\textbf{bidding}={\uparrow_{L}^{S}}\&\{\mathit{bid}:\;&\oplus\{\mathit{ok}:\text{id}\supset\text{money}\supset{\downarrow_{L}^{S}}\textbf{bidding},\\
&\quad\;\;\mathit{collecting}:{\downarrow_{L}^{S}}\textbf{collecting}\}\}\\
\textbf{collecting}={\uparrow_{L}^{S}}\&\{\mathit{collect}:\text{id}\supset\;&\oplus\{\mathit{prize}:\text{item}\land{\downarrow_{L}^{S}}\textbf{bidding},\\
&\quad\;\;\mathit{refund}:\text{money}\land{\downarrow_{L}^{S}}\textbf{bidding},\\
&\quad\;\;\mathit{bidding}:{\downarrow_{L}^{S}}\textbf{bidding}\}\}
\end{aligned}$$
A client to bidding must first acquire exclusive access as indicated by
${\uparrow_{L}^{S}}$, proceed linearly, and then eventually release at either
bidding (in the $\mathit{ok}$ branch) or collecting (in the
$\mathit{collecting}$ branch). Similarly, a client to collecting must first
acquire exclusive access, proceed linearly, and then eventually release at
bidding since all branches lead to bidding.
Unfortunately, as formulated so far, this protocol is not sound. For example,
consider two auction participants $P$ and $Q$ that are both in the collecting
phase and blocked trying to acquire. Suppose $P$ successfully acquires, in
which case it follows the protocol linearly and eventually releases at
bidding. Then, if $Q$ successfully acquires, we have a situation where $Q$
rightfully believes that it acquired at collecting but since $P$ previously
released at type bidding, the auctioneer believes that it accepted a connection at bidding. The subsequent label sent by the client, $\mathit{collect}$, is not an available option for the provider; session fidelity has been violated.
Previous work [BP17] addresses this problem by introducing an additional
requirement that if a channel was acquired at some type $A_{\scriptscriptstyle
S}$, all possible future releases (by looking at the continuation types) must
release at $A_{\scriptscriptstyle S}$. This is formulated as the _equi-
synchronizing_ constraint, defined coinductively on the structure of session
types. In particular, neither bidding nor collecting is equi-synchronizing because they do not always release at the same type at which they were acquired.
For bidding, the $\mathit{collecting}$ branch causes a release at a different
type, and for collecting, all branches lead to a release at a different type.
A solution to the auction scenario is to unify the two phases into one:
$$\begin{aligned}
\textbf{auction}={\uparrow_{L}^{S}}\&\{\mathit{bid}:\;&\oplus\{\mathit{ok}:\text{id}\supset\text{money}\supset{\downarrow_{L}^{S}}\textbf{auction},\\
&\quad\;\;\mathit{collecting}:{\downarrow_{L}^{S}}\textbf{auction}\},\\
\mathit{collect}:\text{id}\supset\;&\oplus\{\mathit{prize}:\text{item}\land{\downarrow_{L}^{S}}\textbf{auction},\\
&\quad\;\;\mathit{refund}:\text{money}\land{\downarrow_{L}^{S}}\textbf{auction},\\
&\quad\;\;\mathit{bidding}:{\downarrow_{L}^{S}}\textbf{auction}\}\}
\end{aligned}$$
The type auction is indeed equi-synchronizing because all possible release
points are at auction.
This presentation of the auction, however, loses the explicit denotation of the two phases; although the earlier linear, single-participant version of the auction protocol makes the bidding and collecting phases explicit in the session type, the equi-synchronizing requirement forces the two phases to merge into one in the shared setting. In general, the requirement that all release points be equivalent prevents shared session types from encoding protocols across multiple acquire-release cycles, since information is necessarily "lost" after each acquire-release cycle.
## 4. Subtyping
So far, there is an implicit requirement that given a particular channel, both
its provider and clients agree on its protocol or type. A relaxation of this
requirement in the context of linear session types has been investigated by
Gay and Hole [GH05], and in this section, we present subtyping in the context
of both linear session types and shared session types.
If $A\leq B$, then a provider viewing its offering channel as type $A$ can
safely communicate with a client viewing the same channel as type $B$. This
perspective reveals a notion of _substitutability_ , where a process providing
a channel of type $A$ can be replaced by a process providing $A^{\prime}$ such
that $A^{\prime}\leq A$ and dually, a client to some channel of type $B$ can
be replaced by another process using the same channel as some type
$B^{\prime}$ such that $B\leq B^{\prime}$. The following subtyping rules,
interpreted coinductively, formalize the subtyping relation between session
types:
$$\dfrac{}{1\leq 1}
\qquad
\dfrac{A_{\scriptscriptstyle L}\leq A^{\prime}_{\scriptscriptstyle L}\quad B_{\scriptscriptstyle L}\leq B^{\prime}_{\scriptscriptstyle L}}{A_{\scriptscriptstyle L}\otimes B_{\scriptscriptstyle L}\leq A^{\prime}_{\scriptscriptstyle L}\otimes B^{\prime}_{\scriptscriptstyle L}}
\qquad
\dfrac{A^{\prime}_{\scriptscriptstyle L}\leq A_{\scriptscriptstyle L}\quad B_{\scriptscriptstyle L}\leq B^{\prime}_{\scriptscriptstyle L}}{A_{\scriptscriptstyle L}\multimap B_{\scriptscriptstyle L}\leq A^{\prime}_{\scriptscriptstyle L}\multimap B^{\prime}_{\scriptscriptstyle L}}$$

$$\dfrac{\forall i\in\overline{l}\quad {A_{i}}_{\scriptscriptstyle L}\leq{A^{\prime}_{i}}_{\scriptscriptstyle L}}{\oplus\{\overline{l{:}A_{\scriptscriptstyle L}}\}\leq\oplus\{\overline{l{:}A^{\prime}_{\scriptscriptstyle L}},\overline{m{:}B_{\scriptscriptstyle L}}\}}
\qquad
\dfrac{\forall i\in\overline{l}\quad {A_{i}}_{\scriptscriptstyle L}\leq{A^{\prime}_{i}}_{\scriptscriptstyle L}}{\&\{\overline{l{:}A_{\scriptscriptstyle L}},\overline{m{:}B_{\scriptscriptstyle L}}\}\leq\&\{\overline{l{:}A^{\prime}_{\scriptscriptstyle L}}\}}$$
One of the notable consequences of adopting subtyping is that internal and external choices allow one side to have more labels or branches. For internal choice, since the provider sends some label, there is no harm in a client being prepared to handle additional labels that it will never receive, and vice versa for external choice. Another observation is that subtyping of session types is covariant in the continuations; following this paradigm, we can immediately define subtyping for the new type connectives ${\uparrow_{L}^{S}}$ and ${\downarrow_{L}^{S}}$:
$$\dfrac{A_{\scriptscriptstyle L}\leq B_{\scriptscriptstyle L}}{{\uparrow_{L}^{S}}A_{\scriptscriptstyle L}\leq{\uparrow_{L}^{S}}B_{\scriptscriptstyle L}}
\qquad
\dfrac{A_{\scriptscriptstyle S}\leq B_{\scriptscriptstyle S}}{{\downarrow_{L}^{S}}A_{\scriptscriptstyle S}\leq{\downarrow_{L}^{S}}B_{\scriptscriptstyle S}}$$
###### Remark 1.
The subtyping relation $\leq$ is a partial order.
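Because the rules are interpreted coinductively over equi-recursive types, a checker cannot simply recurse structurally; it must assume the goal while checking its premises. The following Rust sketch (our illustration; the constructor names, the `Recv`/`Send` encoding of value input $\supset$ and output $\land$, and the elision of the linear/shared stratification are all simplifications of ours) implements this with a set of assumed pairs:

```rust
use std::collections::{HashMap, HashSet};

#[derive(Clone, PartialEq, Eq, Hash, Debug)]
enum Ty {
    One,
    Plus(Vec<(String, Ty)>), // internal choice ⊕{l: A}
    With(Vec<(String, Ty)>), // external choice &{l: A}
    UpS(Box<Ty>),            // modal upshift   ↑LS A
    DownS(Box<Ty>),          // modal downshift ↓LS A
    Recv(String, Box<Ty>),   // value input  v ⊃ A (continuation covariant)
    Send(String, Box<Ty>),   // value output v ∧ A (continuation covariant)
    Var(String),             // named recursive type, bound in `defs`
}

type Defs = HashMap<String, Ty>;

/// Coinductive check of a ≤ b: assume the goal, then check the premises.
fn sub(defs: &Defs, seen: &mut HashSet<(Ty, Ty)>, a: &Ty, b: &Ty) -> bool {
    if !seen.insert((a.clone(), b.clone())) {
        return true; // already assumed: the coinductive hypothesis applies
    }
    match (a, b) {
        (Ty::Var(x), _) => { let a = defs[x].clone(); sub(defs, seen, &a, b) }
        (_, Ty::Var(y)) => { let b = defs[y].clone(); sub(defs, seen, a, &b) }
        (Ty::One, Ty::One) => true,
        // ⊕: the supertype may carry extra labels; every label of the subtype
        // must appear in the supertype with a covariant continuation.
        (Ty::Plus(l1), Ty::Plus(l2)) => l1.iter().all(|(l, a1)| {
            l2.iter().any(|(m, b1)| l == m && sub(defs, seen, a1, b1))
        }),
        // &: dually, the subtype may carry extra labels.
        (Ty::With(l1), Ty::With(l2)) => l2.iter().all(|(m, b1)| {
            l1.iter().any(|(l, a1)| l == m && sub(defs, seen, a1, b1))
        }),
        (Ty::UpS(a1), Ty::UpS(b1)) | (Ty::DownS(a1), Ty::DownS(b1)) => {
            sub(defs, seen, a1, b1)
        }
        (Ty::Recv(v, a1), Ty::Recv(w, b1)) | (Ty::Send(v, a1), Ty::Send(w, b1)) => {
            v == w && sub(defs, seen, a1, b1)
        }
        _ => false,
    }
}
```

Since the definitions in `defs` are regular (finite), only finitely many pairs can ever be inserted into `seen`, so the check terminates.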
A key principle governing subtyping of session types is that _ignorance is
bliss_ ; neither the client nor the provider need to know the precise protocol
that the other party is following.
Let us revisit the shared queue example:
$\displaystyle\textbf{shared\\_queue}={\uparrow_{L}^{S}}\&\\{\mathit{enqueue}:$
$\displaystyle\text{int}\supset{\downarrow_{L}^{S}}\textbf{shared\\_queue},$
$\displaystyle\mathit{dequeue}:$
$\displaystyle\oplus\\{\mathit{some}:\text{int}\land{\downarrow_{L}^{S}}\textbf{shared\\_queue},$
$\displaystyle\quad\;\;\;\mathit{none}:{\downarrow_{L}^{S}}\textbf{shared\\_queue}\\}\\}$
Instead of allowing all clients to freely enqueue and dequeue, suppose we only allow certain clients to enqueue and certain clients to dequeue. With subtyping, we first fix the provider’s type to be shared_queue. Next, we restrict writer clients by removing the $\mathit{dequeue}$ label and similarly restrict reader clients by removing the $\mathit{enqueue}$ label:
producer
$\displaystyle={\uparrow_{L}^{S}}\&\\{\mathit{enqueue}:\text{int}\supset{\downarrow_{L}^{S}}\textbf{producer}\\}$
consumer
$\displaystyle={\uparrow_{L}^{S}}\&\\{\mathit{dequeue}:\oplus\\{{\mathit{some}:\text{int}\land{\downarrow_{L}^{S}}\textbf{consumer},\mathit{none}:{\downarrow_{L}^{S}}\textbf{consumer}}\\}\\}$
where it is indeed the case that
$\textbf{shared\\_queue}\leq\textbf{producer}$ and
$\textbf{shared\\_queue}\leq\textbf{consumer}$, justifying both the writer and
reader clients’ views on the type of the channel.
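Continuing the checker sketch from above (with hypothetical names `sq` and `prod` for the recursive definitions), this relationship can be verified mechanically:

```rust
fn main() {
    // shared_queue = ↑LS &{enqueue: int ⊃ ↓LS shared_queue,
    //                      dequeue: ⊕{some: int ∧ ↓LS shared_queue,
    //                                 none: ↓LS shared_queue}}
    let tail = || Box::new(Ty::DownS(Box::new(Ty::Var("sq".into()))));
    let sq = Ty::UpS(Box::new(Ty::With(vec![
        ("enqueue".into(), Ty::Recv("int".into(), tail())),
        ("dequeue".into(), Ty::Plus(vec![
            ("some".into(), Ty::Send("int".into(), tail())),
            ("none".into(), Ty::DownS(Box::new(Ty::Var("sq".into())))),
        ])),
    ])));
    // producer = ↑LS &{enqueue: int ⊃ ↓LS producer}
    let producer = Ty::UpS(Box::new(Ty::With(vec![(
        "enqueue".into(),
        Ty::Recv("int".into(), Box::new(Ty::DownS(Box::new(Ty::Var("prod".into()))))),
    )])));

    let mut defs = Defs::new();
    defs.insert("sq".into(), sq.clone());
    defs.insert("prod".into(), producer.clone());
    assert!(sub(&defs, &mut HashSet::new(), &sq, &producer)); // shared_queue ≤ producer
}
```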
We defer a detailed discussion of the subtle interactions between the equi-synchronizing constraint and subtyping to Section 5.2. For this example, however, the fact that all three types shared_queue, producer, and consumer are independently equi-synchronizing is a strong justification of its soundness.
## 5. Phasing
One of the most common patterns when encoding data structures and protocols
via session types is to begin the linear type with an external choice. When
these types recur, we are met with another external choice. A notion of
_phasing_ emerges from this pattern, where a single phase spans from the
initial external choice to the recursion.
We introduced varying versions of an auction protocol, which in its linear
form (Section 2.1) can make explicit the two distinct phases, yet in its
shared form (Section 3) cannot due to the equi-synchronizing constraint. With
subtyping however, this seems to no longer be a problem; the auctioneer can
view the protocol as auction whereas the clients can independently view the
protocol as bidding or collecting depending on their current phase since
$\textbf{auction}\leq\textbf{bidding}$ and
$\textbf{auction}\leq\textbf{collecting}$.
$$\text{provider}\;\begin{cases}\begin{aligned}
\textbf{auction}={\uparrow_{L}^{S}}\&\{\mathit{bid}:\;&\oplus\{\mathit{ok}:\text{id}\supset\text{money}\supset{\downarrow_{L}^{S}}\textbf{auction},\\
&\quad\;\;\mathit{collecting}:{\downarrow_{L}^{S}}\textbf{auction}\},\\
\mathit{collect}:\text{id}\supset\;&\oplus\{\mathit{prize}:\text{item}\land{\downarrow_{L}^{S}}\textbf{auction},\\
&\quad\;\;\mathit{refund}:\text{money}\land{\downarrow_{L}^{S}}\textbf{auction},\\
&\quad\;\;\mathit{bidding}:{\downarrow_{L}^{S}}\textbf{auction}\}\}
\end{aligned}\end{cases}$$

$$\text{clients}\;\begin{cases}\begin{aligned}
\textbf{bidding}={\uparrow_{L}^{S}}\&\{\mathit{bid}:\;&\oplus\{\mathit{ok}:\text{id}\supset\text{money}\supset{\downarrow_{L}^{S}}\textbf{bidding},\\
&\quad\;\;\mathit{collecting}:{\downarrow_{L}^{S}}\textbf{collecting}\}\}\\
\textbf{collecting}={\uparrow_{L}^{S}}\&\{\mathit{collect}:\text{id}\supset\;&\oplus\{\mathit{prize}:\text{item}\land{\downarrow_{L}^{S}}\textbf{bidding},\\
&\quad\;\;\mathit{refund}:\text{money}\land{\downarrow_{L}^{S}}\textbf{bidding},\\
&\quad\;\;\mathit{bidding}:{\downarrow_{L}^{S}}\textbf{bidding}\}\}
\end{aligned}\end{cases}$$
Unfortunately, there is a critical issue with this solution. Since shared
channels can be aliased, a client in the collecting phase can alias the
channel, follow the protocol, and then ignore the released type (bidding
phase) – it can then use the previously aliased channel to communicate as if
in the collecting phase. In general, the strategy of encoding phases in shared
communication through a shared supertype allows malicious clients to re-enter
previously encountered phases since they may internally store aliases. Thus,
what we require is a subtyping relation across shared and linear modes since
linear channels are restricted and in particular cannot be aliased.
We first add two new linear connectives ${\uparrow_{L}^{L}}$ and
${\downarrow_{L}^{L}}$ that, like ${\uparrow_{L}^{S}}$ and
${\downarrow_{L}^{S}}$, have operationally an acquire-release semantics but
enforce a linear treatment of the associated channels. Prior work [Gri15] has
already explored such intra-layer shifts, albeit for the purpose of enforcing
synchronization in an asynchronous message-passing system. Thus, for example,
the protocol denoted by ${\uparrow_{L}^{L}}A_{\scriptscriptstyle L}$ requires
the client to “acquire” as in the shared case. If the provider happens to
provide a linear channel ${\uparrow_{L}^{L}}A_{\scriptscriptstyle L}$, then
this merely adds a synchronization point in the communication. The more
interesting case is when the provider is actually providing a shared channel,
some ${\uparrow_{L}^{S}}A_{\scriptscriptstyle L}$; a client should be able to
view the session type as ${\uparrow_{L}^{L}}A_{\scriptscriptstyle L}$ without
any trouble. We formalize this idea with the following additional subtyping rules:
$$\dfrac{A_{\scriptscriptstyle L}\leq B_{\scriptscriptstyle L}}{{\uparrow_{L}^{S}}A_{\scriptscriptstyle L}\leq{\uparrow_{L}^{L}}B_{\scriptscriptstyle L}}
\qquad
\dfrac{A_{\scriptscriptstyle S}\leq B_{\scriptscriptstyle L}}{{\downarrow_{L}^{S}}A_{\scriptscriptstyle S}\leq{\downarrow_{L}^{L}}B_{\scriptscriptstyle L}}
\qquad
\dfrac{A_{\scriptscriptstyle L}\leq B_{\scriptscriptstyle L}}{{\uparrow_{L}^{L}}A_{\scriptscriptstyle L}\leq{\uparrow_{L}^{L}}B_{\scriptscriptstyle L}}
\qquad
\dfrac{A_{\scriptscriptstyle L}\leq B_{\scriptscriptstyle L}}{{\downarrow_{L}^{L}}A_{\scriptscriptstyle L}\leq{\downarrow_{L}^{L}}B_{\scriptscriptstyle L}}$$
Using the new connectives, we can complete the auction protocol where the two
phases are manifest in the session type; a client must actually view the
auction protocol linearly!
$\displaystyle\textbf{bidding}={\uparrow_{L}^{L}}\&\\{\mathit{bid}:$
$\displaystyle\oplus\\{\mathit{ok}:\text{id}\supset\text{money}\supset{\downarrow_{L}^{L}}\textbf{bidding},$
$\displaystyle\quad\;\;\mathit{collecting}:{\downarrow_{L}^{L}}\textbf{collecting}\\}\\}$
$\displaystyle\textbf{collecting}={\uparrow_{L}^{L}}\&\\{\mathit{collect}:\text{id}\supset$
$\displaystyle\oplus\\{\mathit{prize}:\text{item}\land{\downarrow_{L}^{L}}\textbf{bidding},$
$\displaystyle\quad\;\;\mathit{refund}:\text{money}\land{\downarrow_{L}^{L}}\textbf{bidding},$
$\displaystyle\quad\;\;\mathit{bidding}:{\downarrow_{L}^{L}}\textbf{bidding}\\}\\}$
where $\textbf{auction}\leq\textbf{bidding}$ and
$\textbf{auction}\leq\textbf{collecting}$. Compared to the initially presented
linear auction protocol, this version inserts the purely linear shifts
${\uparrow_{L}^{L}}$ and ${\downarrow_{L}^{L}}$ where appropriate such that
the protocol is compatible with the shared auction protocol that the
auctioneer provides. Therefore, the addition of ${\uparrow_{L}^{L}}$ and
${\downarrow_{L}^{L}}$ to our system allows a natural subtyping relation
between shared session types and linear session types, where they serve as a
means to safely bridge between shared and linear modalities.
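The guarantee provided by the linear shifts can be imitated in a mainstream language with a typestate discipline. In the following Rust sketch (our analogy, not a translation of $SILL_{S{\leq}}$; all names are hypothetical and the bodies are stubs), the handle is consumed by every call, mirroring linearity, so a client cannot retain an alias to an earlier phase, and `collect` is simply not callable in the bidding phase:

```rust
use std::marker::PhantomData;

struct Bidding;    // phase marker types
struct Collecting;

/// A handle to the auction channel whose phase is tracked in the type.
struct Auction<Phase> {
    _phase: PhantomData<Phase>,
}

enum BidOutcome {
    Ok(Auction<Bidding>),            // bid accepted; still in the bidding phase
    Collecting(Auction<Collecting>), // auction has moved to the collecting phase
}

enum CollectOutcome {
    Prize(Auction<Bidding>),   // won the item; back to bidding
    Refund(Auction<Bidding>),  // lost the bid; back to bidding
    Bidding(Auction<Bidding>), // wrong phase; back to bidding
}

impl Auction<Bidding> {
    /// Consumes the handle (modelling linearity) and returns the next phase.
    fn bid(self, _id: u32, _money: u64) -> BidOutcome {
        // Stub: a real implementation would perform the acquire, the message
        // exchange of the linear section, and the release.
        BidOutcome::Ok(Auction { _phase: PhantomData })
    }
}

impl Auction<Collecting> {
    fn collect(self, _id: u32) -> CollectOutcome {
        CollectOutcome::Bidding(Auction { _phase: PhantomData })
    }
}
```

Calling `collect` on an `Auction<Bidding>` handle is then a compile-time error, mirroring how the ${\uparrow_{L}^{L}}$/${\downarrow_{L}^{L}}$ shifts make the phases manifest in the type.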
### 5.1. Deadlock Detection
Another instance where phasing naturally occurs is the centralized form of Mitchell and Merritt’s distributed deadlock detection algorithm [MM84]. The algorithm assumes a distributed system with shared resources and linear nodes, where the intended behavior is that the linear nodes, encoded as linear processes, acquire particular resources, encoded as shared processes, perform appropriate computations, and then release the resources they no longer need, as in typical distributed systems. Nodes and resources are identified by unique identifiers of type pid (process id) and rid (resource id), respectively, which, as in previous examples, we take as primitives. In this system, a deadlock in the usual sense is detected when there is a cycle in the dependency graph generated by the algorithm. The centralized deadlock detection algorithm consists of a shared process that acts as a monitor that all nodes report to.
The type of this global deadlock detection monitor is given as
$\displaystyle\textbf{dd}={\uparrow_{L}^{S}}\&\\{\mathit{tryacq}:$
$\displaystyle\text{pid}\supset\text{rid}\supset{\downarrow_{L}^{S}}\textbf{dd},$
$\displaystyle\mathit{didacq}:$
$\displaystyle\text{pid}\supset\text{rid}\supset{\downarrow_{L}^{S}}\textbf{dd},$
$\displaystyle\mathit{willrel}:$
$\displaystyle\text{pid}\supset\text{rid}\supset{\downarrow_{L}^{S}}\textbf{dd}\\}$
where the intention is that clients are expected to inform the monitor before
attempting to acquire a resource (tryacq), after successfully acquiring a
resource (didacq), and before releasing a resource (willrel).
As discussed in previous work [San19], the protocol has two phases across successive acquire-release cycles. Using subtyping, we can represent this constraint statically:
$\displaystyle\textbf{dd\\_start}={\uparrow_{L}^{L}}\&\\{\mathit{tryacq}:$
$\displaystyle\text{pid}\supset\text{rid}\supset{\downarrow_{L}^{L}}\textbf{dd\\_acq},$
$\displaystyle\mathit{willrel}:$
$\displaystyle\text{pid}\supset\text{rid}\supset{\downarrow_{L}^{L}}\textbf{dd\\_start}\\}$
$\displaystyle\textbf{dd\\_acq}={\uparrow_{L}^{L}}\&\\{\mathit{didacq}:$
$\displaystyle\text{pid}\supset\text{rid}\supset{\downarrow_{L}^{L}}\textbf{dd\\_start}\\}$
where $\textbf{dd}\leq\textbf{dd\\_start}$. This session type enforces that
the message following tryacq must be didacq and that didacq cannot be sent
without a tryacq on the previous acquire-release cycle. It is important to
note that we are not enforcing other desirable constraints such as whether the
resource id sent by the client matches in a sequence of tryacq followed by
didacq (it is nonsensical for a client to attempt to acquire resource $r$ and
after claim that it successfully acquired a different resource $r^{\prime}$).
We believe that those additional constraints can be naturally expressed by
extending refinement types [DP20] to be compatible with this system.
A linear node is a process that uses a channel of type dd_start; since we allow subtyping across modalities, we can spawn such a node by passing it a reference to the global monitor offering a shared channel of type dd, which the node can safely view as dd_start since $\textbf{dd}\leq\textbf{dd\_start}$.
###### Remark 2.
A protocol spanning multiple phases can also be interpreted as a deterministic finite automaton (DFA) whose nodes represent the phase, or state, of the protocol and whose edges represent choice branches. The previous auction protocol can be encoded as a two-state DFA as shown in Figure 1, and the deadlock monitor protocol can similarly be encoded as shown in Figure 2.
(Figure 1: states bidding (start) and collecting; the edge $\mathit{bid}\rightarrow\mathit{ok}$ loops on bidding, $\mathit{bid}\rightarrow\mathit{collecting}$ moves to collecting, and $\mathit{collect}\rightarrow\{\mathit{prize},\mathit{refund},\mathit{bidding}\}$ returns to bidding.)
Figure 1. A DFA representation of the two phases in the auction protocol,
where non-branching messages are omitted for presentation purposes since they
do not contribute to different protocol paths. Multiple labels enclosed in
brackets as in $\\{\mathit{prize},\mathit{refund},\mathit{bidding}\\}$ mean
that any of those labels can be selected.
(Figure 2: states start (start) and acq; the edge $\mathit{willrel}$ loops on start, $\mathit{tryacq}$ moves to acq, and $\mathit{didacq}$ returns to start.)

Figure 2. A DFA representation of the two phases in the deadlock monitor protocol. Non-branching messages are omitted for presentation purposes, as in Figure 1.
### 5.2. Subsynchronizing Constraint
We note in Section 2.2 that previous work [BP17] requires session types to be equi-synchronizing, that is, that processes following the protocol be released at the exact type at which they were acquired. This constraint guarantees that clients do not acquire at a type that they do not expect. With the introduction of subtyping, however, we propose two major relaxations of this constraint.
##### Releasing at a subtype
A client $P$ using some channel as some type $a_{\scriptscriptstyle
S}{:}A_{\scriptscriptstyle S}$ can safely communicate with any (shared)
process offering a channel of type $a_{\scriptscriptstyle
S}{:}A^{\prime}_{\scriptscriptstyle S}$ such that
$A^{\prime}_{\scriptscriptstyle S}\leq A_{\scriptscriptstyle S}$ due to
subtyping. If another client acquires $a_{\scriptscriptstyle S}$ and releases
it at some $A^{\prime\prime}_{\scriptscriptstyle S}$ such that
$A^{\prime\prime}_{\scriptscriptstyle S}\leq A^{\prime}_{\scriptscriptstyle
S}$, then $P$ can still safely communicate along $a_{\scriptscriptstyle S}$
since $A^{\prime\prime}_{\scriptscriptstyle S}\leq A_{\scriptscriptstyle S}$
by transitivity. Thus, one reasonable relaxation of the equi-synchronizing constraint is that processes need not be released at the exact same type but may instead be released at a subtype.
##### Branches that never occur
A major consequence of subtyping is that providers and clients can wait on
some branches in the internal and external choices which in fact never will be
sent by the other party. For example, suppose a provider $P$ provides a
channel of type $A_{\scriptscriptstyle
S}={{\uparrow_{L}^{S}}\&\\{{a:{\downarrow_{L}^{S}}A_{\scriptscriptstyle
S},b:{\downarrow_{L}^{S}}B_{\scriptscriptstyle S}}\\}}$. Assuming some
unrelated $B_{\scriptscriptstyle S}$, we can see that $A_{\scriptscriptstyle
S}$ is not equi-synchronizing because the $b$ branch can lead to releasing at
a different type. However, suppose some client $C$ views the channel as
${{\uparrow_{L}^{S}}\&\\{{a:{\downarrow_{L}^{S}}A_{\scriptscriptstyle S}}\\}}$
– in this case, $P$ can only receive $a$, and the $b$ branch can safely be
ignored since $C$ will never send the $b$ label. This points to the necessity
of using both the provider and client types to more finely verify the
synchronizing constraint. Of course, if there is another client $D$ that views
the channel in a way that the $b$ branch can be taken, then the entire setup
is not synchronizing. Thus, we must verify the synchronization constraint for
all pairs of providers and clients.
Following previous work [BP17], we formulate constraints by extending the
shared types: ${\hat{A}\;::=\;\bot\;|\;A_{\scriptscriptstyle S}\;|\;\top}$
where $\bot\leq A_{\scriptscriptstyle S}\leq\top$ for any
$A_{\scriptscriptstyle S}$. Intuitively, $\top$ indicates a channel that has
not been acquired yet (no constraints on a future release),
$A_{\scriptscriptstyle S}$ indicates the previous presentation of shared
channels, and $\bot$ indicates a channel that will never be available (hence,
any client attempting to acquire from this channel will never succeed and be
blocked).
We are now ready to present the _subsynchronizing_ judgment, interpreted
coinductively, which is of the form $\vdash(A,B,\hat{D})\;\text{ssync}$ for
some $A$ and $B$ such that $A\leq B$. It asserts that a provider providing a
channel of type $A$ and a client using that channel with type $B$ is
subsynchronizing with respect to some constraint $\hat{D}$. To verify a pair
of types $A$ and $B$ to be subsynchronizing, we take $\top$ as its initial
constraint (recall that $\top$ represents no constraint), that is, we say that
$A$ and $B$ are subsynchronizing if $\vdash(A,B,\top)\;\text{ssync}$.
$$\dfrac{}{\vdash(1,1,\hat{D})\;\text{ssync}}
\qquad
\dfrac{\vdash(B_{\scriptscriptstyle L},B^{\prime}_{\scriptscriptstyle L},\hat{D})\;\text{ssync}}{\vdash(A_{\scriptscriptstyle L}\otimes B_{\scriptscriptstyle L},\,A^{\prime}_{\scriptscriptstyle L}\otimes B^{\prime}_{\scriptscriptstyle L},\,\hat{D})\;\text{ssync}}
\qquad
\dfrac{\vdash(B_{\scriptscriptstyle L},B^{\prime}_{\scriptscriptstyle L},\hat{D})\;\text{ssync}}{\vdash(A_{\scriptscriptstyle L}\multimap B_{\scriptscriptstyle L},\,A^{\prime}_{\scriptscriptstyle L}\multimap B^{\prime}_{\scriptscriptstyle L},\,\hat{D})\;\text{ssync}}$$

$$\dfrac{\forall i\in\overline{l}\quad\vdash({A_{i}}_{\scriptscriptstyle L},{A^{\prime}_{i}}_{\scriptscriptstyle L},\hat{D})\;\text{ssync}}{\vdash(\oplus\{\overline{l{:}A_{\scriptscriptstyle L}}\},\,\oplus\{\overline{l{:}A^{\prime}_{\scriptscriptstyle L}},\overline{m{:}B_{\scriptscriptstyle L}}\},\,\hat{D})\;\text{ssync}}
\qquad
\dfrac{\forall i\in\overline{l}\quad\vdash({A_{i}}_{\scriptscriptstyle L},{A^{\prime}_{i}}_{\scriptscriptstyle L},\hat{D})\;\text{ssync}}{\vdash(\&\{\overline{l{:}A_{\scriptscriptstyle L}},\overline{m{:}B_{\scriptscriptstyle L}}\},\,\&\{\overline{l{:}A^{\prime}_{\scriptscriptstyle L}}\},\,\hat{D})\;\text{ssync}}$$

$$\dfrac{\vdash(A_{\scriptscriptstyle L},A^{\prime}_{\scriptscriptstyle L},\hat{D})\;\text{ssync}}{\vdash({\uparrow_{L}^{L}}A_{\scriptscriptstyle L},\,{\uparrow_{L}^{L}}A^{\prime}_{\scriptscriptstyle L},\,\hat{D})\;\text{ssync}}
\qquad
\dfrac{\vdash(A_{\scriptscriptstyle L},A^{\prime}_{\scriptscriptstyle L},\hat{D})\;\text{ssync}}{\vdash({\downarrow_{L}^{L}}A_{\scriptscriptstyle L},\,{\downarrow_{L}^{L}}A^{\prime}_{\scriptscriptstyle L},\,\hat{D})\;\text{ssync}}$$

$$\dfrac{\vdash(A_{\scriptscriptstyle L},A^{\prime}_{\scriptscriptstyle L},{\uparrow_{L}^{S}}A_{\scriptscriptstyle L})\;\text{ssync}}{\vdash({\uparrow_{L}^{S}}A_{\scriptscriptstyle L},\,{\uparrow_{L}^{S}}A^{\prime}_{\scriptscriptstyle L},\,\top)\;\text{ssync}}\;S{\uparrow_{L}^{S}}
\qquad
\dfrac{\vdash(A_{\scriptscriptstyle S},A^{\prime}_{\scriptscriptstyle S},\top)\;\text{ssync}\quad A_{\scriptscriptstyle S}\leq\hat{D}}{\vdash({\downarrow_{L}^{S}}A_{\scriptscriptstyle S},\,{\downarrow_{L}^{S}}A^{\prime}_{\scriptscriptstyle S},\,\hat{D})\;\text{ssync}}$$

$$\dfrac{\vdash(A_{\scriptscriptstyle L},A^{\prime}_{\scriptscriptstyle L},{\uparrow_{L}^{S}}A_{\scriptscriptstyle L})\;\text{ssync}}{\vdash({\uparrow_{L}^{S}}A_{\scriptscriptstyle L},\,{\uparrow_{L}^{L}}A^{\prime}_{\scriptscriptstyle L},\,\top)\;\text{ssync}}\;S{\uparrow_{L}^{S}}{\uparrow_{L}^{L}}
\qquad
\dfrac{\vdash(A_{\scriptscriptstyle S},A^{\prime}_{\scriptscriptstyle L},\top)\;\text{ssync}\quad A_{\scriptscriptstyle S}\leq\hat{D}}{\vdash({\downarrow_{L}^{S}}A_{\scriptscriptstyle S},\,{\downarrow_{L}^{L}}A^{\prime}_{\scriptscriptstyle L},\,\hat{D})\;\text{ssync}}$$
The general progression of a derivation verifying that two types are subsynchronizing is to first look for an upshift ${\uparrow_{L}^{S}}$ on the provider's type, via either the $S{\uparrow_{L}^{S}}$ or the $S{\uparrow_{L}^{S}}{\uparrow_{L}^{L}}$ rule. Upon encountering a ${\uparrow_{L}^{S}}$, the derivation "records" the provider's type as the constraint and continues with the continuations of the two types. When encountering internal and external choices, it only requires the continuations of the common branches to be subsynchronizing. When it encounters a downshift ${\downarrow_{L}^{S}}$ on the provider's side, it checks that the release point, as denoted by the continuation of ${\downarrow_{L}^{S}}$, is a subtype of the recorded constraint, in which case the derivation continues under the $\top$ constraint.
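As a sketch of how this procedure might be mechanized, we can extend the subtyping checker from Section 4 (again our illustration, under our reading of the rules: the constraint is either $\top$ or the shared type recorded at the last acquire, a release compares its continuation against that constraint, and the ${\uparrow_{L}^{L}}$/${\downarrow_{L}^{L}}$ cases, which thread the constraint through unchanged, are elided):

```rust
#[derive(Clone, PartialEq, Eq, Hash)]
enum Cons {
    Top,    // ⊤: no pending constraint
    At(Ty), // the provider's shared type recorded at the last acquire
}

/// Subsynchronizing check for provider type `a` and client type `b`
/// (assuming a ≤ b already holds), coinductive via the `seen` set.
fn ssync(defs: &Defs, seen: &mut HashSet<(Ty, Ty, Cons)>, a: &Ty, b: &Ty, d: &Cons) -> bool {
    if !seen.insert((a.clone(), b.clone(), d.clone())) {
        return true; // the coinductive hypothesis applies
    }
    match (a, b) {
        (Ty::Var(x), _) => { let a = defs[x].clone(); ssync(defs, seen, &a, b, d) }
        (_, Ty::Var(y)) => { let b = defs[y].clone(); ssync(defs, seen, a, &b, d) }
        (Ty::One, Ty::One) => true,
        // Only branches common to both parties must be subsynchronizing.
        (Ty::Plus(l1), Ty::Plus(l2)) | (Ty::With(l1), Ty::With(l2)) => {
            l1.iter().all(|(l, a1)| match l2.iter().find(|(m, _)| m == l) {
                Some((_, b1)) => ssync(defs, seen, a1, b1, d),
                None => true, // a branch the other party never takes
            })
        }
        (Ty::Recv(_, a1), Ty::Recv(_, b1)) | (Ty::Send(_, a1), Ty::Send(_, b1)) => {
            ssync(defs, seen, a1, b1, d)
        }
        // Acquire: record the provider's shared type as the new constraint.
        (Ty::UpS(a1), Ty::UpS(b1)) if *d == Cons::Top => {
            ssync(defs, seen, a1, b1, &Cons::At(a.clone()))
        }
        // Release: the continuation must be a subtype of the recorded
        // constraint, after which checking resumes under ⊤.
        (Ty::DownS(a1), Ty::DownS(b1)) => {
            let release_ok = match d {
                Cons::Top => true,
                Cons::At(c) => sub(defs, &mut HashSet::new(), a1, c),
            };
            release_ok && ssync(defs, seen, a1, b1, &Cons::Top)
        }
        _ => false,
    }
}
```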
###### Remark 3.
The subsynchronizing constraint is a generalization of the equi-synchronizing constraint. In particular, $A$ is equi-synchronizing if and only if the pair $A,A$ is subsynchronizing.
## 6. Metatheory
In this section, we present $SILL_{S{\leq}}$, a message-passing concurrency system implementing the subtyping that we propose, along with progress and preservation theorems.
### 6.1. Process Typing
We take the typing judgment presented in Section 2.1 and extend it with shared
channels as introduced in Section 2.2:
$\displaystyle{\Gamma\vdash P::(a_{\scriptscriptstyle
S}{:}A_{\scriptscriptstyle S})}$ $\displaystyle{\Gamma;\Delta\vdash
Q::(a_{\scriptscriptstyle L}{:}A_{\scriptscriptstyle L})}$
where $\Gamma={a_{1}}_{\scriptscriptstyle
S}{:}\hat{A_{1}},\ldots,{a_{n}}_{\scriptscriptstyle S}{:}\hat{A_{n}}$ is a
structural context of shared channels and constraints ($\bot$ and $\top$)
which can appear at runtime.
The first judgment asserts that a process term $P$ provides a shared channel
$a_{\scriptscriptstyle S}{:}A_{\scriptscriptstyle S}$ while using shared
channels in $\Gamma$; the lack of dependence on any linear channels $\Delta$
is due to the _independence principle_ presented in [BP17]. The second
judgment asserts that $Q$ provides a linear channel $a_{\scriptscriptstyle
L}{:}A_{\scriptscriptstyle L}$ while using shared channels in $\Gamma$ and
linear channels in $\Delta$.
##### Global signature
In the following sections, we will implicitly assume a global signature $\Sigma$, which is a set of process definitions that can be thought of as the process calculi analogue of a signature consisting of function definitions. A
process definition consists of the offering channel name and its type, the
client channel names and their types, and the process term:
$\displaystyle\Sigma\;::=\;$
$\displaystyle\cdot\;|\;\Sigma,x_{\scriptscriptstyle
L}{:}A_{\scriptscriptstyle L}\leftarrow X_{\scriptscriptstyle
L}\leftarrow\overline{y_{\scriptscriptstyle L}{:}B_{\scriptscriptstyle
L}},\overline{w_{\scriptscriptstyle S}{:}E_{\scriptscriptstyle S}}=P$
$\displaystyle\;|\;$ $\displaystyle\Sigma,z_{\scriptscriptstyle
S}{:}C_{\scriptscriptstyle S}\leftarrow Z_{\scriptscriptstyle
S}\leftarrow\overline{v_{\scriptscriptstyle S}{:}D_{\scriptscriptstyle S}}=Q$
Leaving aside the $\cdot$ which denotes an empty signature, the former denotes
a linear process definition of a process named $X_{\scriptscriptstyle L}$ that
offers a channel $x_{\scriptscriptstyle L}{:}A_{\scriptscriptstyle L}$ while
using linear channels ${y_{1}}_{\scriptscriptstyle
L}{:}{B_{1}}_{\scriptscriptstyle L},\ldots,{y_{n}}_{\scriptscriptstyle
L}{:}{B_{n}}_{\scriptscriptstyle L}$ and shared channels
${w_{1}}_{\scriptscriptstyle S}{:}{E_{1}}_{\scriptscriptstyle
S},\ldots,{w_{m}}_{\scriptscriptstyle S}{:}{E_{m}}_{\scriptscriptstyle S}$ for
some $n$ and $m$, where $P$ consists of its implementation. Similarly, the
latter denotes a shared process definition of a process named
$Z_{\scriptscriptstyle S}$ that offers a channel $z_{\scriptscriptstyle
S}{:}C_{\scriptscriptstyle S}$ while using shared channels
${v_{1}}_{\scriptscriptstyle S}{:}{D_{1}}_{\scriptscriptstyle
S},\ldots,{v_{n}}_{\scriptscriptstyle S}{:}{D_{n}}_{\scriptscriptstyle S}$ for
some $n$, where $Q$ consists of its implementation. Again, it is important
that shared process definitions do not depend on linear channels due to the
independence principle.
#### 6.1.1. Identity Rules
_Forwarding_ is a fundamental operation that allows a process to identify its
offering channel with a channel it uses if the types are compatible.
$$\dfrac{B_{\scriptscriptstyle L}\leq A_{\scriptscriptstyle L}}{\Gamma;y_{\scriptscriptstyle L}{:}B_{\scriptscriptstyle L}\vdash\text{fwd}\;x_{\scriptscriptstyle L}\ y_{\scriptscriptstyle L}::(x_{\scriptscriptstyle L}{:}A_{\scriptscriptstyle L})}\;ID_{\scriptscriptstyle L}
\qquad
\dfrac{\hat{B}\leq A_{\scriptscriptstyle S}}{\Gamma,y_{\scriptscriptstyle S}{:}\hat{B}\vdash\text{fwd}\;x_{\scriptscriptstyle S}\ y_{\scriptscriptstyle S}::(x_{\scriptscriptstyle S}{:}A_{\scriptscriptstyle S})}\;ID_{\scriptscriptstyle S}
\qquad
\dfrac{\hat{B}\leq A_{\scriptscriptstyle L}}{\Gamma,y_{\scriptscriptstyle S}{:}\hat{B};\cdot\vdash\text{fwd}\;x_{\scriptscriptstyle L}\ y_{\scriptscriptstyle S}::(x_{\scriptscriptstyle L}{:}A_{\scriptscriptstyle L})}\;ID_{\scriptscriptstyle{LS}}$$
The rules $ID_{\scriptscriptstyle L}$ and $ID_{\scriptscriptstyle S}$ require
the offering channel to be a supertype of the channel it is being identified
with. Since we syntactically distinguish shared channels and linear channels,
we require an additional rule $ID_{\scriptscriptstyle{LS}}$ that allows linear
channels to be forwarded with a shared channel if the subtyping relation
holds.
When a linear process _spawns_ another linear process, it can transfer
channels that it currently communicates with to the new process. In
$SILL_{S}$, this resulted in linear to linear and shared to shared channel
substitutions, but with subtyping, the rule must now divide the channel
substitutions into three parts: linear to linear substitutions, shared to
linear substitutions, and shared to shared substitutions. The shared to linear
substitution in particular occurs when a process definition expects a linear
channel (or some type ${\uparrow_{L}^{L}}\ldots$) and is instead given a
smaller shared channel, and is in fact the key to the expressiveness of our
system.
$$\dfrac{\begin{array}{c}\left(x^{\prime}_{\scriptscriptstyle L}{:}A_{\scriptscriptstyle L}\leftarrow X_{\scriptscriptstyle L}\leftarrow\overline{y^{\prime}_{\scriptscriptstyle L}{:}B^{\prime}_{\scriptscriptstyle L}},\overline{v^{\prime}_{\scriptscriptstyle L}{:}D^{\prime}_{\scriptscriptstyle L}},\overline{w^{\prime}_{\scriptscriptstyle S}{:}E^{\prime}_{\scriptscriptstyle S}}=P\right)\in\Sigma\\[2pt]
\overline{v_{\scriptscriptstyle S}{:}\hat{D}}\in\Gamma\quad\overline{w_{\scriptscriptstyle S}{:}\hat{E}}\in\Gamma\quad\overline{B_{\scriptscriptstyle L}}\leq\overline{B^{\prime}_{\scriptscriptstyle L}}\quad\overline{\hat{D}}\leq\overline{D^{\prime}_{\scriptscriptstyle L}}\quad\overline{\hat{E}}\leq\overline{E^{\prime}_{\scriptscriptstyle S}}\\[2pt]
\Gamma;\Delta,x_{\scriptscriptstyle L}{:}A_{\scriptscriptstyle L}\vdash Q::(z_{\scriptscriptstyle L}{:}C_{\scriptscriptstyle L})\end{array}}{\Gamma;\Delta,\overline{y_{\scriptscriptstyle L}{:}B_{\scriptscriptstyle L}}\vdash x_{\scriptscriptstyle L}\leftarrow X_{\scriptscriptstyle L}\leftarrow\overline{y_{\scriptscriptstyle L}},\overline{v_{\scriptscriptstyle S}},\overline{w_{\scriptscriptstyle S}};Q::(z_{\scriptscriptstyle L}{:}C_{\scriptscriptstyle L})}$$
Similar to forwarding, there are two additional spawn rules (linear-to-shared and shared-to-shared) because the two modalities are syntactically distinguished:
$$\dfrac{\overline{y_{\scriptscriptstyle S}{:}\hat{B}}\in\Gamma\quad\overline{\hat{B}}\leq\overline{B^{\prime}_{\scriptscriptstyle S}}\quad\left(x^{\prime}_{\scriptscriptstyle S}{:}A_{\scriptscriptstyle S}\leftarrow X_{\scriptscriptstyle S}\leftarrow\overline{y^{\prime}_{\scriptscriptstyle S}{:}B^{\prime}_{\scriptscriptstyle S}}=P\right)\in\Sigma\quad\Gamma,x_{\scriptscriptstyle S}{:}A_{\scriptscriptstyle S};\Delta\vdash Q::(z_{\scriptscriptstyle L}{:}C_{\scriptscriptstyle L})}{\Gamma;\Delta\vdash x_{\scriptscriptstyle S}\leftarrow X_{\scriptscriptstyle S}\leftarrow\overline{y_{\scriptscriptstyle S}};Q::(z_{\scriptscriptstyle L}{:}C_{\scriptscriptstyle L})}$$

$$\dfrac{\overline{y_{\scriptscriptstyle S}{:}\hat{B}}\in\Gamma\quad\overline{\hat{B}}\leq\overline{B^{\prime}_{\scriptscriptstyle S}}\quad\left(x^{\prime}_{\scriptscriptstyle S}{:}A_{\scriptscriptstyle S}\leftarrow X_{\scriptscriptstyle S}\leftarrow\overline{y^{\prime}_{\scriptscriptstyle S}{:}B^{\prime}_{\scriptscriptstyle S}}=P\right)\in\Sigma\quad\Gamma,x_{\scriptscriptstyle S}{:}A_{\scriptscriptstyle S}\vdash Q::(z_{\scriptscriptstyle S}{:}C_{\scriptscriptstyle S})}{\Gamma\vdash x_{\scriptscriptstyle S}\leftarrow X_{\scriptscriptstyle S}\leftarrow\overline{y_{\scriptscriptstyle S}};Q::(z_{\scriptscriptstyle S}{:}C_{\scriptscriptstyle S})}$$
#### 6.1.2. Logical Rules
As in standard sequent calculus presentations, typing judgments involving
connectives are presented through left and right rules. The multiplicative
unit $1$ denotes termination; providers must _close_ their offering channel
while clients must _wait_ for the channel to close:
$$\dfrac{\Gamma;\Delta\vdash P::(z_{\scriptscriptstyle L}{:}C_{\scriptscriptstyle L})}{\Gamma;\Delta,x_{\scriptscriptstyle L}{:}1\vdash\text{wait}\;x_{\scriptscriptstyle L};P::(z_{\scriptscriptstyle L}{:}C_{\scriptscriptstyle L})}
\qquad
\dfrac{}{\Gamma;\cdot\vdash\text{close}\;x_{\scriptscriptstyle L}::(x_{\scriptscriptstyle L}{:}1)}$$
For tensor ($A_{\scriptscriptstyle L}\otimes B_{\scriptscriptstyle L}$),
providers (right rule) must _send_ a channel of some type $C_{*}$ such that
$C_{*}\leq A_{\scriptscriptstyle L}$ (note that $C_{*}$ can be either shared
or linear, meaning there must be a rule covering each case separately). On the
other hand, clients (left rule) must _receive_ a channel of type
$A_{\scriptscriptstyle L}$ (which due to subtyping could be smaller in
actuality).
$$\dfrac{\Gamma;\Delta,x_{\scriptscriptstyle L}{:}B_{\scriptscriptstyle L},y_{\scriptscriptstyle L}{:}A_{\scriptscriptstyle L}\vdash P::(z_{\scriptscriptstyle L}{:}C_{\scriptscriptstyle L})}{\Gamma;\Delta,x_{\scriptscriptstyle L}{:}A_{\scriptscriptstyle L}\otimes B_{\scriptscriptstyle L}\vdash y_{\scriptscriptstyle L}\leftarrow\text{recv}\;x_{\scriptscriptstyle L};P::(z_{\scriptscriptstyle L}{:}C_{\scriptscriptstyle L})}
\qquad
\dfrac{A^{\prime}_{\scriptscriptstyle L}\leq A_{\scriptscriptstyle L}\quad\Gamma;\Delta\vdash P::(x_{\scriptscriptstyle L}{:}B_{\scriptscriptstyle L})}{\Gamma;\Delta,y_{\scriptscriptstyle L}{:}A^{\prime}_{\scriptscriptstyle L}\vdash\text{send}\;x_{\scriptscriptstyle L}\ y_{\scriptscriptstyle L};P::(x_{\scriptscriptstyle L}{:}A_{\scriptscriptstyle L}\otimes B_{\scriptscriptstyle L})}$$

$$\dfrac{\hat{A}\leq A_{\scriptscriptstyle L}\quad\Gamma,y_{\scriptscriptstyle S}{:}\hat{A};\Delta\vdash P::(x_{\scriptscriptstyle L}{:}B_{\scriptscriptstyle L})}{\Gamma,y_{\scriptscriptstyle S}{:}\hat{A};\Delta\vdash\text{send}\;x_{\scriptscriptstyle L}\ y_{\scriptscriptstyle S};P::(x_{\scriptscriptstyle L}{:}A_{\scriptscriptstyle L}\otimes B_{\scriptscriptstyle L})}$$
Dually for linear implication ($A_{\scriptscriptstyle L}\multimap
B_{\scriptscriptstyle L}$), clients must _send_ a channel of some subtype of
$A_{\scriptscriptstyle L}$ while providers must _receive_ a channel of type
$A_{\scriptscriptstyle L}$:
$$\dfrac{A^{\prime}_{\scriptscriptstyle L}\leq A_{\scriptscriptstyle L}\quad\Gamma;\Delta,x_{\scriptscriptstyle L}{:}B_{\scriptscriptstyle L}\vdash P::(z_{\scriptscriptstyle L}{:}C_{\scriptscriptstyle L})}{\Gamma;\Delta,x_{\scriptscriptstyle L}{:}A_{\scriptscriptstyle L}\multimap B_{\scriptscriptstyle L},y_{\scriptscriptstyle L}{:}A^{\prime}_{\scriptscriptstyle L}\vdash\text{send}\;x_{\scriptscriptstyle L}\ y_{\scriptscriptstyle L};P::(z_{\scriptscriptstyle L}{:}C_{\scriptscriptstyle L})}
\qquad
\dfrac{\Gamma;\Delta,y_{\scriptscriptstyle L}{:}A_{\scriptscriptstyle L}\vdash P::(x_{\scriptscriptstyle L}{:}B_{\scriptscriptstyle L})}{\Gamma;\Delta\vdash y_{\scriptscriptstyle L}\leftarrow\text{recv}\;x_{\scriptscriptstyle L};P::(x_{\scriptscriptstyle L}{:}A_{\scriptscriptstyle L}\multimap B_{\scriptscriptstyle L})}$$

$$\dfrac{\hat{A}\leq A_{\scriptscriptstyle L}\quad\Gamma,y_{\scriptscriptstyle S}{:}\hat{A};\Delta,x_{\scriptscriptstyle L}{:}B_{\scriptscriptstyle L}\vdash P::(z_{\scriptscriptstyle L}{:}C_{\scriptscriptstyle L})}{\Gamma,y_{\scriptscriptstyle S}{:}\hat{A};\Delta,x_{\scriptscriptstyle L}{:}A_{\scriptscriptstyle L}\multimap B_{\scriptscriptstyle L}\vdash\text{send}\;x_{\scriptscriptstyle L}\ y_{\scriptscriptstyle S};P::(z_{\scriptscriptstyle L}{:}C_{\scriptscriptstyle L})}$$
In this system, binary internal and external choices, $A_{\scriptscriptstyle L}\oplus B_{\scriptscriptstyle L}$ and $C_{\scriptscriptstyle L}\&D_{\scriptscriptstyle L}$, are generalized to their $n$-ary versions, $\oplus\{\overline{l{:}A_{\scriptscriptstyle L}}\}$ and $\&\{\overline{m{:}B_{\scriptscriptstyle L}}\}$, where each continuation type $A_{i}$ or $B_{i}$ has a corresponding (unique) label $l_{i}$ or $m_{i}$. For internal choice, providers must _send_ a label $l_{i}$ and then continue as $A_{i}$, whereas clients must _receive_ a label and continue as the type corresponding to the label they received.
$\dfrac{\forall i\in\overline{l}\quad\Gamma;\Delta,x_{\scriptscriptstyle L}{:}{A_{i}}_{\scriptscriptstyle L}\vdash P_{i}::(c_{\scriptscriptstyle L}{:}Z_{\scriptscriptstyle L})}{\Gamma;\Delta,x_{\scriptscriptstyle L}{:}\oplus\{\overline{l{:}A_{\scriptscriptstyle L}}\}\vdash\text{case}\;x_{\scriptscriptstyle L}\;\text{of}\;\{\overline{l\Rightarrow P}\}::(c_{\scriptscriptstyle L}{:}Z_{\scriptscriptstyle L})}\;({\oplus}L)\qquad\dfrac{i\in\overline{l}\quad\Gamma;\Delta\vdash P::(x_{\scriptscriptstyle L}{:}{A_{i}}_{\scriptscriptstyle L})}{\Gamma;\Delta\vdash x.i;P::(x_{\scriptscriptstyle L}{:}\oplus\{\overline{l{:}A_{\scriptscriptstyle L}}\})}\;({\oplus}R)$
Dually for external choice, clients _send_ a label whereas providers _receive_
and branch on the input label:
$\dfrac{i\in\overline{l}\quad\Gamma;\Delta,x_{\scriptscriptstyle L}{:}{A_{i}}_{\scriptscriptstyle L}\vdash P::(z_{\scriptscriptstyle L}{:}C_{\scriptscriptstyle L})}{\Gamma;\Delta,x_{\scriptscriptstyle L}{:}\&\{\overline{l{:}A_{\scriptscriptstyle L}}\}\vdash x.i;P::(z_{\scriptscriptstyle L}{:}C_{\scriptscriptstyle L})}\;({\&}L)\qquad\dfrac{\forall i\in\overline{l}\quad\Gamma;\Delta\vdash P_{i}::(x_{\scriptscriptstyle L}{:}{A_{i}}_{\scriptscriptstyle L})}{\Gamma;\Delta\vdash\text{case}\;x_{\scriptscriptstyle L}\;\text{of}\;\{\overline{l\Rightarrow P}\}::(x_{\scriptscriptstyle L}{:}\&\{\overline{l{:}A_{\scriptscriptstyle L}}\})}\;({\&}R)$
Next, ${\uparrow_{L}^{S}}A_{\scriptscriptstyle L}$ signifies a synchronization
point where clients must _acquire_ while (shared) providers must _accept_ a
client, both proceeding with $A_{\scriptscriptstyle L}$ as the continuation.
$\dfrac{\hat{A}\leq{\uparrow_{L}^{S}}A_{\scriptscriptstyle L}\quad\Gamma,x_{\scriptscriptstyle S}{:}\hat{A};\Delta,x_{\scriptscriptstyle L}{:}A_{\scriptscriptstyle L}\vdash P::(z_{\scriptscriptstyle L}{:}C_{\scriptscriptstyle L})}{\Gamma,x_{\scriptscriptstyle S}{:}\hat{A};\Delta\vdash x_{\scriptscriptstyle L}\leftarrow\text{acq}_{\scriptscriptstyle S}\;x_{\scriptscriptstyle S};P::(z_{\scriptscriptstyle L}{:}C_{\scriptscriptstyle L})}\;({\uparrow_{L}^{S}}L)\qquad\dfrac{\Gamma;\cdot\vdash P::(x_{\scriptscriptstyle L}{:}A_{\scriptscriptstyle L})}{\Gamma\vdash x_{\scriptscriptstyle L}\leftarrow\text{acc}_{\scriptscriptstyle S}\;x_{\scriptscriptstyle S};P::(x_{\scriptscriptstyle S}{:}{\uparrow_{L}^{S}}A_{\scriptscriptstyle L})}\;({\uparrow_{L}^{S}}R)$
${\downarrow_{L}^{S}}A_{\scriptscriptstyle S}$ signifies a point where clients
must _release_ while providers _detach_ from a linear session, returning to a
shared state ready to _accept_ another client.
$\dfrac{\Gamma,x_{\scriptscriptstyle S}{:}A_{\scriptscriptstyle S};\Delta\vdash P::(z_{\scriptscriptstyle L}{:}C_{\scriptscriptstyle L})}{\Gamma;\Delta,x_{\scriptscriptstyle L}{:}{\downarrow_{L}^{S}}A_{\scriptscriptstyle S}\vdash x_{\scriptscriptstyle S}\leftarrow\text{rel}_{\scriptscriptstyle S}\;x_{\scriptscriptstyle S};P::(z_{\scriptscriptstyle L}{:}C_{\scriptscriptstyle L})}\;({\downarrow_{L}^{S}}L)\qquad\dfrac{\Gamma\vdash P::(x_{\scriptscriptstyle S}{:}A_{\scriptscriptstyle S})}{\Gamma;\cdot\vdash x_{\scriptscriptstyle S}\leftarrow\text{det}_{\scriptscriptstyle S}\;x_{\scriptscriptstyle S};P::(x_{\scriptscriptstyle L}{:}{\downarrow_{L}^{S}}A_{\scriptscriptstyle S})}\;({\downarrow_{L}^{S}}R)$
Finally, we require the linear variants of the up and downshifts, which by
themselves can be interpreted as synchronization points in a linear protocol
[PG15]. However, in this paper their purpose is to act safely as supertypes
of the corresponding shared up- and downshifts, which allows linearity to be
enforced on clients in shared protocols.
$\dfrac{\Gamma;\Delta,y_{\scriptscriptstyle L}{:}A_{\scriptscriptstyle L}\vdash P::(z_{\scriptscriptstyle L}{:}C_{\scriptscriptstyle L})}{\Gamma;\Delta,x_{\scriptscriptstyle L}{:}{\uparrow_{L}^{L}}A_{\scriptscriptstyle L}\vdash y_{\scriptscriptstyle L}\leftarrow\text{acq}_{\scriptscriptstyle L}\;x_{\scriptscriptstyle L};P::(z_{\scriptscriptstyle L}{:}C_{\scriptscriptstyle L})}\;({\uparrow_{L}^{L}}L)\qquad\dfrac{\Gamma;\Delta\vdash P::(x_{\scriptscriptstyle L}{:}A_{\scriptscriptstyle L})}{\Gamma;\Delta\vdash y_{\scriptscriptstyle L}\leftarrow\text{acc}_{\scriptscriptstyle L}\;x_{\scriptscriptstyle L};P::(x_{\scriptscriptstyle L}{:}{\uparrow_{L}^{L}}A_{\scriptscriptstyle L})}\;({\uparrow_{L}^{L}}R)$

$\dfrac{\Gamma;\Delta,y_{\scriptscriptstyle L}{:}A_{\scriptscriptstyle L}\vdash P::(z_{\scriptscriptstyle L}{:}C_{\scriptscriptstyle L})}{\Gamma;\Delta,x_{\scriptscriptstyle L}{:}{\downarrow_{L}^{L}}A_{\scriptscriptstyle L}\vdash y_{\scriptscriptstyle L}\leftarrow\text{rel}_{\scriptscriptstyle L}\;x_{\scriptscriptstyle L};P::(z_{\scriptscriptstyle L}{:}C_{\scriptscriptstyle L})}\;({\downarrow_{L}^{L}}L)\qquad\dfrac{\Gamma;\Delta\vdash P::(y_{\scriptscriptstyle L}{:}A_{\scriptscriptstyle L})}{\Gamma;\Delta\vdash y_{\scriptscriptstyle L}\leftarrow\text{det}_{\scriptscriptstyle L}\;x_{\scriptscriptstyle L};P::(x_{\scriptscriptstyle L}{:}{\downarrow_{L}^{L}}A_{\scriptscriptstyle L})}\;({\downarrow_{L}^{L}}R)$
One important observation is that typing judgments remain local in the
presence of subtyping; the channels in $\Gamma$ and $\Delta$ may be provided
by processes at some subtype (maintained in the configuration; see Section
6.4) and need not match. We therefore do not adopt a general subsumption rule
that allows arbitrary substitutions that preserve subtyping and instead
precisely manage where subtyping occurs in the system.
#### 6.1.3. Structural Rules
Structural rules are kept implicit in the system, but informally, the linear
context $\Delta$ only allows exchange whereas the shared context $\Gamma$
allows all structural rules.
### 6.2. Dynamics
The operational semantics of the system is formulated through _multiset
rewriting rules_ [CS09], which are of the form ${S_{1},\ldots,S_{n}\to
T_{1},\ldots,T_{m}}$, where each $S_{i}$ and $T_{j}$ is a _process
predicate_ capturing the state of a particular process:
$S\;::=\;\text{proc}(a_{\scriptscriptstyle
S},P)\;|\;\text{unavail}(b_{\scriptscriptstyle
S})\;|\;\text{proc}(c_{\scriptscriptstyle
L},Q)\;|\;\text{connect}(d_{\scriptscriptstyle L},e_{\scriptscriptstyle
S})\;|\;\text{!def}(A)$
where $P$ and $Q$ are process terms as formulated in Section 6.1. The
predicates $\text{proc}(a_{\scriptscriptstyle S},P)$ and
$\text{proc}(c_{\scriptscriptstyle L},Q)$ denote shared and linear processes
that offer channels along $a_{\scriptscriptstyle S}$ and
$c_{\scriptscriptstyle L}$ while executing process terms $P$ and $Q$,
respectively. The predicate $\text{unavail}(b_{\scriptscriptstyle S})$ denotes
a shared process that is currently unavailable, for example because it has
been acquired by another client, and the predicate
$\text{connect}(d_{\scriptscriptstyle L},e_{\scriptscriptstyle S})$ explicitly
connects a shared channel with a linear channel, which is needed to
dynamically express shared to linear subtyping. Finally,
$\text{!def}(A)$ is a (persistent) linear or shared process definition as
declared in $\Sigma$. We adopt $\Psi_{a}$ as a metavariable for some
linear process predicate offering $a_{\scriptscriptstyle L}$; that is,
$\Psi_{a}$ is either $\text{proc}(a_{\scriptscriptstyle L},P)$ for some $P$ or
$\text{connect}(a_{\scriptscriptstyle L},b_{\scriptscriptstyle S})$ for some
$b_{\scriptscriptstyle S}$.
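To fix intuitions, the following Haskell sketch models the predicate grammar as a datatype. All identifiers are ours and purely illustrative: `Term` is a stand-in for the process-term syntax of Section 6.1, and a configuration is modeled as a plain list standing in for a multiset.

```haskell
-- A minimal sketch (our own encoding, not the paper's implementation)
-- of the five predicate forms; channel names are plain strings.
type Chan = String

data Term = Term  -- placeholder for process terms P, Q

data Predicate
  = ProcS   Chan Term   -- proc(a_S, P): shared process offering a_S
  | Unavail Chan        -- unavail(b_S): shared process currently unavailable
  | ProcL   Chan Term   -- proc(c_L, Q): linear process offering c_L
  | Connect Chan Chan   -- connect(d_L, e_S): linear channel linked to a shared one
  | Def     Term        -- !def(A): persistent process definition

-- A state of the system is a multiset of predicates; a list suffices
-- here, with each rewriting rule replacing a sublist by another.
type Config = [Predicate]
```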
Each multiset rule captures a local transition in the system; for example,
there are three rules for forwarding, each corresponding to one of the
forwarding typing judgments:
$\displaystyle\text{proc}(a_{\scriptscriptstyle
L},\text{fwd}\;a_{\scriptscriptstyle L}\ b_{\scriptscriptstyle
L})\to\cdot\quad(b_{\scriptscriptstyle L}:=a_{\scriptscriptstyle
L},b_{\scriptscriptstyle S}:=a_{\scriptscriptstyle S})$ (D-FWDLL)
$\displaystyle\text{proc}(a_{\scriptscriptstyle
S},\text{fwd}\;a_{\scriptscriptstyle S}\ b_{\scriptscriptstyle
S})\to\text{unavail}(a_{\scriptscriptstyle S})\quad(b_{\scriptscriptstyle
S}:=a_{\scriptscriptstyle S})$ (D-FWDSS)
$\displaystyle\text{proc}(a_{\scriptscriptstyle
L},\text{fwd}\;a_{\scriptscriptstyle L}\ b_{\scriptscriptstyle
S})\to\text{connect}(a_{\scriptscriptstyle L},b_{\scriptscriptstyle S})$
(D-FWDLS)
The rules D-FWDLL and D-FWDSS are two exceptions to the local transformations;
they require the two channels to be “globally” identified. The rule D-FWDLS
says that a linear process that forwards with a shared channel must transition
to a connect predicate, which serves as a placeholder to denote shared to
linear subtyping.
A linear to linear spawn creates a process $P$ offering a fresh channel
$c_{\scriptscriptstyle L}$. One important point is that fresh linear channels
$\overline{d^{\prime}_{\scriptscriptstyle L}}$ are allocated alongside
corresponding connect predicates due to the possibility of shared channels
$\overline{d_{\scriptscriptstyle S}}$ being “passed” to the new process as
linear channels.
$\displaystyle\begin{subarray}{c}\text{proc}(a_{\scriptscriptstyle
L},x_{\scriptscriptstyle L}\leftarrow X_{\scriptscriptstyle
L}\leftarrow\overline{b_{\scriptscriptstyle
L}},\overline{d_{\scriptscriptstyle S}},\overline{e_{\scriptscriptstyle
S}};Q)\\\ \text{!def}((x^{\prime}_{\scriptscriptstyle
L}{:}A_{\scriptscriptstyle L}\leftarrow
X_{L}\leftarrow\overline{y^{\prime}_{\scriptscriptstyle
L}{:}B^{\prime}_{\scriptscriptstyle
L}},\overline{v^{\prime}_{\scriptscriptstyle
L}{:}D^{\prime}_{\scriptscriptstyle
L}},\overline{w^{\prime}_{\scriptscriptstyle
S}{:}E^{\prime}_{\scriptscriptstyle
S}})=P)\end{subarray}\to\begin{subarray}{c}\text{proc}(a_{\scriptscriptstyle
L},[c_{\scriptscriptstyle L}/x_{\scriptscriptstyle
L}]Q),\text{proc}(c_{\scriptscriptstyle L},[c_{\scriptscriptstyle
L}/x^{\prime}_{\scriptscriptstyle L},\overline{b_{\scriptscriptstyle
L}}/\overline{y^{\prime}_{\scriptscriptstyle
L}},\overline{d^{\prime}_{\scriptscriptstyle
L}}/\overline{v^{\prime}_{\scriptscriptstyle
L}},\overline{e_{\scriptscriptstyle
S}}/\overline{w^{\prime}_{\scriptscriptstyle S}}]P)\\\
\overline{\text{connect}(d^{\prime}_{\scriptscriptstyle
L},d_{\scriptscriptstyle S}),\text{unavail}(d^{\prime}_{\scriptscriptstyle
S})}\quad(\overline{d^{\prime}},c\;\;\text{fresh})\end{subarray}$ (D-SPAWNLL)
Note that corresponding $\text{unavail}(d^{\prime}_{\scriptscriptstyle S})$
predicates are also spawned; this is solely to make later proofs easier, and
they can essentially be ignored for now.
The two other spawn cases are similar, except that since linear channels
cannot be passed to shared processes, the verbose allocation of connect
predicates is not necessary.
$\displaystyle\begin{subarray}{c}\text{proc}(a_{\scriptscriptstyle
L},x_{\scriptscriptstyle S}\leftarrow X_{\scriptscriptstyle
S}\leftarrow\overline{b_{\scriptscriptstyle S}};Q)\\\
\text{!def}((x^{\prime}_{\scriptscriptstyle S}{:}A_{\scriptscriptstyle
S}\leftarrow X_{\scriptscriptstyle
S}\leftarrow\overline{y^{\prime}_{\scriptscriptstyle
S}{:}B^{\prime}_{\scriptscriptstyle
S}})=P)\end{subarray}\to\text{proc}(a_{\scriptscriptstyle
L},[c_{\scriptscriptstyle S}/x_{\scriptscriptstyle
S}]Q),\text{proc}(c_{\scriptscriptstyle S},[c_{\scriptscriptstyle
S}/x^{\prime}_{\scriptscriptstyle S},\overline{b_{\scriptscriptstyle
S}}/\overline{y^{\prime}_{\scriptscriptstyle S}}]P)\quad(c\;\;\text{fresh})$
(D-SPAWNLS) $\displaystyle\begin{subarray}{c}\text{proc}(a_{\scriptscriptstyle
S},x_{\scriptscriptstyle S}\leftarrow X_{\scriptscriptstyle
S}\leftarrow\overline{b_{\scriptscriptstyle S}};Q)\\\
\text{!def}((x^{\prime}_{\scriptscriptstyle S}{:}A_{\scriptscriptstyle
S}\leftarrow X_{\scriptscriptstyle
S}\leftarrow\overline{y^{\prime}_{\scriptscriptstyle
S}{:}B^{\prime}_{\scriptscriptstyle
S}})=P)\end{subarray}\to\text{proc}(a_{\scriptscriptstyle
S},[c_{\scriptscriptstyle S}/x_{\scriptscriptstyle
S}]Q),\text{proc}(c_{\scriptscriptstyle S},[c_{\scriptscriptstyle
S}/x^{\prime}_{\scriptscriptstyle S},\overline{b_{\scriptscriptstyle
S}}/\overline{y^{\prime}_{\scriptscriptstyle S}}]P)\quad(c\;\;\text{fresh})$
(D-SPAWNSS)
For the unit $1$, a client _waiting_ for a channel to _close_ can proceed when
the corresponding provider _closes_ its channel.
$\displaystyle\text{proc}(a_{\scriptscriptstyle
L},\text{wait}\;b_{\scriptscriptstyle L};P),\text{proc}(b_{\scriptscriptstyle
L},\text{close}\;b_{\scriptscriptstyle L})\to\text{proc}(a_{\scriptscriptstyle
L},P)$ (D-$1$)
The left-hand sides of the dynamics follow a pattern in which one process
receives while another process sends. Starting with $\otimes$ and $\multimap$:
$\displaystyle\text{proc}(a_{\scriptscriptstyle L},y_{\scriptscriptstyle
L}\leftarrow\text{recv}\;b_{\scriptscriptstyle
L};P),\text{proc}(b_{\scriptscriptstyle L},\text{send}\;b_{\scriptscriptstyle
L}\ c_{\scriptscriptstyle L};Q),\Psi_{c}\to\text{proc}(a_{\scriptscriptstyle
L},[c_{\scriptscriptstyle L}/y_{\scriptscriptstyle
L}]P),\text{proc}(b_{\scriptscriptstyle L},Q),\Psi_{c}$ (D-$\otimes$)
$\displaystyle\text{proc}(a_{\scriptscriptstyle
L},\text{send}\;b_{\scriptscriptstyle L}\ c_{\scriptscriptstyle
L};P),\text{proc}(b_{\scriptscriptstyle L},y_{\scriptscriptstyle
L}\leftarrow\text{recv}\;b_{\scriptscriptstyle
L};Q),\Psi_{c}\to\text{proc}(a_{\scriptscriptstyle
L},P),\text{proc}(b_{\scriptscriptstyle L},[c_{\scriptscriptstyle
L}/y_{\scriptscriptstyle L}]Q),\Psi_{c}$ (D-$\multimap$)
When shared channels are sent instead, a fresh channel $d$ is allocated and a
connect predicate connects the shared channel:
$\displaystyle\text{proc}(a_{\scriptscriptstyle L},y_{\scriptscriptstyle
L}\leftarrow\text{recv}\;b_{\scriptscriptstyle
L};P),\text{proc}(b_{\scriptscriptstyle L},\text{send}\;b_{\scriptscriptstyle
L}\ c_{\scriptscriptstyle S};Q)$ (D-$\otimes$2) $\displaystyle\to\quad$
$\displaystyle\text{proc}(a_{\scriptscriptstyle L},[d_{\scriptscriptstyle
L}/y_{\scriptscriptstyle L}]P),\text{proc}(b_{\scriptscriptstyle
L},Q),\text{connect}(d_{\scriptscriptstyle L},c_{\scriptscriptstyle
S}),\text{unavail}(d_{\scriptscriptstyle S})\quad(d\;\;\text{fresh})$
$\displaystyle\text{proc}(a_{\scriptscriptstyle
L},\text{send}\;b_{\scriptscriptstyle L}\ c_{\scriptscriptstyle
S};P),\text{proc}(b_{\scriptscriptstyle L},y_{\scriptscriptstyle
L}\leftarrow\text{recv}\;b_{\scriptscriptstyle L};Q)$ (D-$\multimap$2)
$\displaystyle\to\quad$ $\displaystyle\text{proc}(a_{\scriptscriptstyle
L},P),\text{proc}(b_{\scriptscriptstyle L},[d_{\scriptscriptstyle
L}/y_{\scriptscriptstyle L}]Q),\text{connect}(d_{\scriptscriptstyle
L},c_{\scriptscriptstyle S}),\text{unavail}(d_{\scriptscriptstyle
S})\quad(d\;\;\text{fresh})$
For $\oplus$ and $\&$, the pattern of one side sending (a label) and the other
receiving is maintained:
$\displaystyle\text{proc}(a_{\scriptscriptstyle
L},\text{case}\;b_{\scriptscriptstyle L}\;\text{of}\;\{\overline{l\Rightarrow
P},\overline{m\Rightarrow P}\}),\text{proc}(b_{\scriptscriptstyle
L},b.i;Q)\to\text{proc}(a_{\scriptscriptstyle
L},P_{i}),\text{proc}(b_{\scriptscriptstyle L},Q)\quad(i\in\overline{l})$
(D-$\oplus$) $\displaystyle\text{proc}(a_{\scriptscriptstyle
L},b.i;P),\text{proc}(b_{\scriptscriptstyle
L},\text{case}\;b_{\scriptscriptstyle L}\;\text{of}\;\{\overline{l\Rightarrow
Q},\overline{m\Rightarrow Q}\})\to\text{proc}(a_{\scriptscriptstyle
L},P),\text{proc}(b_{\scriptscriptstyle L},Q_{i})\quad(i\in\overline{l})$
(D-$\&$)
An important point is that, due to subtyping, the process receiving a label
may accept a superset of the labels that the sending process can send. This is
syntactically expressed by having the recipient case over the labels
$\overline{l},\overline{m}$ while the sender picks a label in $\overline{l}$,
as the sketch below illustrates.
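The following minimal Haskell sketch (all names ours) captures this dispatch step: the recipient's branches form a finite map whose domain may strictly include the labels available to the sender.

```haskell
import qualified Data.Map as Map

-- A sketch of the label-dispatch step shared by D-oplus and D-with: the
-- recipient's case arms form a map from labels to continuation terms;
-- 'term' stays abstract, standing in for process terms.
dispatch :: Map.Map String term -> String -> Maybe term
dispatch branches label = Map.lookup label branches
```

For well-typed configurations the lookup never fails, since subtyping only ever widens the recipient's branch set relative to the sender's label set.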
Now for the modal connectives, the idea is similar to the previous logical
connectives; for ${\uparrow_{L}^{S}}$, a client must _acquire_ a shared
channel and the corresponding shared provider must _accept_. Similarly for
${\downarrow_{L}^{S}}$, a client must _release_ while the provider must
_detach_, returning to a shared process:
$\displaystyle\text{proc}(a_{\scriptscriptstyle L},x_{\scriptscriptstyle
L}\leftarrow\text{acq}_{\scriptscriptstyle S}\;b_{\scriptscriptstyle
S};P),\text{proc}(b_{\scriptscriptstyle S},x_{\scriptscriptstyle
L}\leftarrow\text{acc}_{\scriptscriptstyle S}\;b_{\scriptscriptstyle
S};Q)\to\begin{subarray}{c}\text{proc}(a_{\scriptscriptstyle
L},[b_{\scriptscriptstyle L}/x_{\scriptscriptstyle
L}]P),\text{proc}(b_{\scriptscriptstyle L},[b_{\scriptscriptstyle
L}/x_{\scriptscriptstyle L}]Q),\\\ \text{unavail}(b_{\scriptscriptstyle
S})\end{subarray}$ (D-${\uparrow_{L}^{S}}$)
$\displaystyle\begin{subarray}{c}\text{proc}(a_{\scriptscriptstyle
L},x_{\scriptscriptstyle S}\leftarrow\text{rel}_{\scriptscriptstyle
S}\;b_{\scriptscriptstyle S};P),\text{proc}(b_{\scriptscriptstyle
L},x_{\scriptscriptstyle S}\leftarrow\text{det}_{\scriptscriptstyle
S}\;b_{\scriptscriptstyle S};Q),\\\ \text{unavail}(b_{\scriptscriptstyle
S})\end{subarray}\to\text{proc}(a_{\scriptscriptstyle
L},[b_{\scriptscriptstyle S}/x_{\scriptscriptstyle
S}]P),\text{proc}(b_{\scriptscriptstyle S},[b_{\scriptscriptstyle
S}/x_{\scriptscriptstyle S}]Q)$ (D-${\downarrow_{L}^{S}}$)
The linear variants have a similar semantics:
$\displaystyle\text{proc}(a_{\scriptscriptstyle L},x_{\scriptscriptstyle
L}\leftarrow\text{acq}_{\scriptscriptstyle L}\;b_{\scriptscriptstyle
L};P),\text{proc}(b_{\scriptscriptstyle L},x_{\scriptscriptstyle
L}\leftarrow\text{acc}_{\scriptscriptstyle L}\;b_{\scriptscriptstyle
L};Q)\to\text{proc}(a_{\scriptscriptstyle L},[b_{\scriptscriptstyle
L}/x_{\scriptscriptstyle L}]P),\text{proc}(b_{\scriptscriptstyle
L},[b_{\scriptscriptstyle L}/x_{\scriptscriptstyle L}]Q)$
(D-${\uparrow_{L}^{L}}$) $\displaystyle\text{proc}(a_{\scriptscriptstyle
L},x_{\scriptscriptstyle L}\leftarrow\text{rel}_{\scriptscriptstyle
L}\;b_{\scriptscriptstyle L};P),\text{proc}(b_{\scriptscriptstyle
L},x_{\scriptscriptstyle L}\leftarrow\text{det}_{\scriptscriptstyle
L}\;b_{\scriptscriptstyle L};Q)\to\text{proc}(a_{\scriptscriptstyle
L},[b_{\scriptscriptstyle L}/x_{\scriptscriptstyle
L}]P),\text{proc}(b_{\scriptscriptstyle L},[b_{\scriptscriptstyle
L}/x_{\scriptscriptstyle L}]Q)$ (D-${\downarrow_{L}^{L}}$)
Finally, when a client linearly _acquires_ what happens to be a shared
process, it must go through the connect predicate, and similarly when a client
linearly _releases_ a provider that is _detaching_ to a shared state, a
connect predicate is allocated:
$\displaystyle\begin{subarray}{c}\text{proc}(a_{\scriptscriptstyle
L},x_{\scriptscriptstyle L}\leftarrow\text{acq}_{\scriptscriptstyle
L}\;b_{\scriptscriptstyle L};P),\text{connect}(b_{\scriptscriptstyle
L},c_{\scriptscriptstyle S})\\\ \text{proc}(c_{\scriptscriptstyle
S},x_{\scriptscriptstyle L}\leftarrow\text{acc}_{\scriptscriptstyle
S}\;c_{\scriptscriptstyle
S};Q)\end{subarray}\to\begin{subarray}{c}\text{proc}(a_{\scriptscriptstyle
L},[c_{\scriptscriptstyle L}/x_{\scriptscriptstyle
L}]P),\text{proc}(c_{\scriptscriptstyle L},[c_{\scriptscriptstyle
L}/x_{\scriptscriptstyle L}]Q)\\\ \text{unavail}(c_{\scriptscriptstyle
S})\end{subarray}$ (D-${\uparrow_{L}^{S}}$2)
$\displaystyle\begin{subarray}{c}\text{proc}(a_{\scriptscriptstyle
L},x_{\scriptscriptstyle L}\leftarrow\text{rel}_{\scriptscriptstyle
L}\;c_{\scriptscriptstyle L};P),\text{proc}(c_{\scriptscriptstyle
L},x_{\scriptscriptstyle S}\leftarrow\text{det}_{\scriptscriptstyle
S}\;c_{\scriptscriptstyle S};Q),\\\ \text{unavail}(c_{\scriptscriptstyle
S})\end{subarray}\to\begin{subarray}{c}\text{proc}(a_{\scriptscriptstyle
L},[b_{\scriptscriptstyle L}/x_{\scriptscriptstyle
L}]P),\text{proc}(c_{\scriptscriptstyle S},[c_{\scriptscriptstyle
S}/x_{\scriptscriptstyle S}]Q),\\\
\text{connect}(b_{\scriptscriptstyle L},c_{\scriptscriptstyle
S}),\text{unavail}(b_{\scriptscriptstyle S})\quad(b\;\;\text{fresh})\end{subarray}$
(D-${\downarrow_{L}^{S}}$2)
### 6.3. Processes and Configuration
A configuration consists of a list of shared process predicates $\Lambda$ and
a list of linear process predicates $\Theta$. The shared processes carry no
meaningful order, but the linear processes can be seen to form a tree
structure: a linear process may use channels offered by processes to its
right, and by linearity, if it uses a channel, it is the unique process doing
so.
$\displaystyle\Omega$ $\displaystyle\;::=\;\Lambda;\Theta$
$\displaystyle\Lambda$
$\displaystyle\;::=\;\cdot\;|\;\Lambda_{1},\Lambda_{2}\;|\;\text{proc}(a_{\scriptscriptstyle
S},P)\;|\;\text{unavail}(a_{\scriptscriptstyle S})$ $\displaystyle\Theta$
$\displaystyle\;::=\;\cdot\;|\;\text{proc}(a_{\scriptscriptstyle
L},P),\Theta^{\prime}\;|\;\text{connect}(a_{\scriptscriptstyle
L},b_{\scriptscriptstyle S}),\Theta^{\prime}$
##### Well-formedness
$\Lambda$ is well-formed if for any channel name $a$,
${\text{proc}(a_{\scriptscriptstyle S},P),\text{unavail}(a_{\scriptscriptstyle
S})\notin\Lambda}$. Similarly, $\Theta$ is well-formed if for any $a$,
$\Psi_{a},\Psi_{a}^{\prime}\notin\Theta$ where
$\Psi_{a}\neq\Psi_{a}^{\prime}$. The configuration $\Lambda;\Theta$ is well-
formed if both its fragments are well-formed and
${\Psi_{a}\in\Theta\to\text{unavail}(a_{\scriptscriptstyle S})\in\Lambda}$.
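The three conditions admit a direct mechanical check; the following self-contained Haskell sketch (our own encoding of the predicate forms, not tied to the formal syntax) spells them out.

```haskell
import Data.List (nub)

type Chan = String
data SharedPred = SProc Chan | SUnavail Chan
data LinearPred = LProc Chan | LConnect Chan Chan  -- offered channel first

sharedChan :: SharedPred -> Chan
sharedChan (SProc a)    = a
sharedChan (SUnavail a) = a

offeredChan :: LinearPred -> Chan
offeredChan (LProc a)      = a
offeredChan (LConnect a _) = a

-- We check the slightly stronger property that each channel name owns at
-- most one shared predicate, which implies that proc(a_S, P) and
-- unavail(a_S) never coexist.
wfShared :: [SharedPred] -> Bool
wfShared lam = cs == nub cs where cs = map sharedChan lam

-- Each linear channel is offered by at most one predicate Psi_a.
wfLinear :: [LinearPred] -> Bool
wfLinear th = cs == nub cs where cs = map offeredChan th

-- Every Psi_a in Theta must have a matching unavail(a_S) in Lambda.
wfConfig :: [SharedPred] -> [LinearPred] -> Bool
wfConfig lam th =
  wfShared lam && wfLinear th &&
  all (\p -> offeredChan p `elem` unavails) th
  where unavails = [a | SUnavail a <- lam]
```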
### 6.4. Configuration Typing
A well-formed configuration $\Lambda;\Theta$ is typed by its shared and linear
fragments.
$\dfrac{{\Gamma\models\Lambda::(\Gamma)}\quad{\Gamma\models\Theta::(\Delta)}}{\Gamma\models\Lambda;\Theta::(\Gamma;\Delta)}\;(\Omega)$

The shared fragment is typed predicate by predicate:

${\Gamma\models\cdot::(\cdot)}\qquad\dfrac{{\Gamma\models\Lambda_{1}::(\Gamma_{1})}\quad{\Gamma\models\Lambda_{2}::(\Gamma_{2})}}{\Gamma\models\Lambda_{1},\Lambda_{2}::(\Gamma_{1},\Gamma_{2})}$

$\dfrac{\vdash(A^{\prime}_{\scriptscriptstyle S},A_{\scriptscriptstyle S},\top)\;\text{ssync}\quad{\Gamma\vdash P::(a_{\scriptscriptstyle S}{:}A^{\prime}_{\scriptscriptstyle S})}}{\Gamma\models\text{proc}(a_{\scriptscriptstyle S},P)::(a_{\scriptscriptstyle S}{:}A_{\scriptscriptstyle S})}\qquad{\Gamma\models\text{unavail}(a_{\scriptscriptstyle S})::(a_{\scriptscriptstyle S}{:}\hat{A})}$

The linear fragment is typed by the following rules, which we refer to as $\Theta 1$ through $\Theta 3$:

${\Gamma\models\cdot::(\cdot)}\;(\Theta 1)\qquad\dfrac{b_{\scriptscriptstyle S}{:}\hat{B}\in\Gamma\quad\hat{B}\leq A_{\scriptscriptstyle L}\quad{\Gamma\models\Theta^{\prime}::(\Delta^{\prime})}}{\Gamma\models\text{connect}(a_{\scriptscriptstyle L},b_{\scriptscriptstyle S}),\Theta^{\prime}::(a_{\scriptscriptstyle L}{:}A_{\scriptscriptstyle L},\Delta^{\prime})}\;(\Theta 2)$

$\dfrac{a_{\scriptscriptstyle S}{:}\hat{A}\in\Gamma\quad\vdash(A^{\prime}_{\scriptscriptstyle L},A_{\scriptscriptstyle L},\hat{A})\;\text{ssync}\quad{\Gamma;\Delta_{a}\vdash P::(a_{\scriptscriptstyle L}{:}A^{\prime}_{\scriptscriptstyle L})}\quad{\Gamma\models\Theta^{\prime}::(\Delta_{a},\Delta^{\prime})}}{\Gamma\models\text{proc}(a_{\scriptscriptstyle L},P),\Theta^{\prime}::(a_{\scriptscriptstyle L}{:}A_{\scriptscriptstyle L},\Delta^{\prime})}\;(\Theta 3)$
### 6.5. Lemmas
In this section we present lemmas of interest to be used in the progress and
preservation proofs. The proofs of each lemma are in Appendix B.
#### 6.5.1. Lemmas involving the Configuration
Lemma 4 allows the tail of a linear configuration to be peeled off; Lemma 5
asserts that an active shared process precludes an active linear process of
the same channel name; Lemma 6 allows individual process predicates in a
linear configuration to be moved around as long as the overall invariant that
linear processes only depend on processes to their right is maintained; Lemma
7 allows the substitution of subconfigurations in a linear configuration if
their signatures match; and finally, Lemma 8 allows the offering channels of
linear processes to be viewed at supertypes.
###### Lemma 4.
If ${\Gamma\models\Psi,\Theta::(\Delta)}$, then
${\Gamma\models\Theta::(\Delta^{\prime})}$ for some $\Delta^{\prime}$.
More generally, if ${\Gamma\models\Theta_{1},\Theta_{2}::(\Delta)}$, then
${\Gamma\models\Theta_{2}::(\Delta^{\prime})}$ for some $\Delta^{\prime}$.
###### Lemma 5.
Given a well-formed $\Lambda;\Theta$,
$\forall\text{proc}(a_{\scriptscriptstyle
S},-)\in\Lambda,\Psi_{a}\notin\Theta$
###### Lemma 6.
If
${\Gamma\models\Psi_{a},\Theta_{1},\Psi_{b},\Theta_{2}::(a_{\scriptscriptstyle
L}{:}A_{\scriptscriptstyle L},\Delta)}$ and $\Psi_{a}$ uses
$b_{\scriptscriptstyle L}$, then
${\Gamma\models\Psi_{a},\Psi_{b},\Theta_{1},\Theta_{2}::(a_{\scriptscriptstyle
L}{:}A_{\scriptscriptstyle L},\Delta)}$
###### Lemma 7.
If ${\Gamma\models\Psi,\Theta::(\Delta)}$,
${\Gamma\models\Theta::(\Delta_{p})}$, and
${\Gamma\models\Theta^{\prime}::(\Delta_{p})}$, then
${\Gamma\models\Psi,\Theta^{\prime}::(\Delta)}$
More generally, if ${\Gamma\models\Theta_{1},\Theta_{2}::(\Delta)}$,
${\Gamma\models\Theta_{2}::(\Delta_{p})}$, and
${\Gamma\models\Theta_{2}^{\prime}::(\Delta_{p})}$, then
${\Gamma\models\Theta_{1},\Theta_{2}^{\prime}::(\Delta)}$
###### Lemma 8.
If ${\Gamma\models\Psi_{a},\Theta^{\prime}::(a_{\scriptscriptstyle
L}{:}A^{\prime}_{\scriptscriptstyle L},\Delta^{\prime})}$, then for any
$B_{\scriptscriptstyle L}$ such that $A^{\prime}_{\scriptscriptstyle L}\leq
B_{\scriptscriptstyle L}$,
${\Gamma\models\Psi_{a},\Theta^{\prime}::(a_{\scriptscriptstyle
L}{:}B_{\scriptscriptstyle L},\Delta^{\prime})}$.
#### 6.5.2. Ordering of Contexts
A linear context is smaller than another if it shares the same variables with
their associated types respecting subtyping. Similarly, a shared context is
smaller than another if it contains at least the same variables (could contain
additional as shown in $\Gamma_{\preceq\cdot}$) with their associated types
respecting subtyping.
$\cdot\leq\cdot\qquad\dfrac{\Delta\leq\Delta^{\prime}\quad A_{\scriptscriptstyle L}\leq A^{\prime}_{\scriptscriptstyle L}}{\Delta,x_{\scriptscriptstyle L}{:}A_{\scriptscriptstyle L}\leq\Delta^{\prime},x_{\scriptscriptstyle L}{:}A^{\prime}_{\scriptscriptstyle L}}$

$\Gamma\preceq\cdot\qquad\dfrac{\Gamma\preceq\Gamma^{\prime}\quad\hat{A}\leq\hat{A^{\prime}}}{\Gamma,x_{\scriptscriptstyle S}{:}\hat{A}\preceq\Gamma^{\prime},x_{\scriptscriptstyle S}{:}\hat{A^{\prime}}}$
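As a sanity check, both orderings can be phrased as the Haskell sketch below, parameterized by the type-level subtyping check from Section 5 (written `leq`); the representation of contexts as finite maps is our own.

```haskell
import qualified Data.Map as Map
import qualified Data.Set as Set

-- Contexts map channel names to types; 'ty' stays abstract.
type Ctx ty = Map.Map String ty

-- Delta <= Delta': identical domains, pointwise subtyping.
linCtxLeq :: (ty -> ty -> Bool) -> Ctx ty -> Ctx ty -> Bool
linCtxLeq leq d d' =
  Map.keysSet d == Map.keysSet d'
    && and (Map.intersectionWith leq d d')

-- Gamma ⪯ Gamma': Gamma may contain extra channels, and must be
-- pointwise smaller on the domain of Gamma'.
shCtxLeq :: (ty -> ty -> Bool) -> Ctx ty -> Ctx ty -> Bool
shCtxLeq leq g g' =
  Map.keysSet g' `Set.isSubsetOf` Map.keysSet g
    && and (Map.intersectionWith leq g g')
```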
The following two lemmas allow the substitution of smaller shared contexts in
both the configuration typing and process typing judgments.
###### Lemma 9.
Let $\Gamma^{\prime}\preceq\Gamma$ and ${\Gamma;\Delta\vdash
P::(z_{\scriptscriptstyle L}{:}C_{\scriptscriptstyle L})}$, then
${\Gamma^{\prime};\Delta\vdash P::(z_{\scriptscriptstyle
L}{:}C_{\scriptscriptstyle L})}$.
###### Lemma 10.
Let $\Gamma^{\prime}\preceq\Gamma$ then
1. (1)
If ${\Gamma\models\Theta::(\Delta)}$ for some $\Theta,\Delta$, then
${\Gamma^{\prime}\models\Theta::(\Delta)}$
2. (2)
If ${\Gamma\models\Lambda::(\Gamma^{\prime\prime})}$ for some
$\Lambda,\Gamma^{\prime\prime}$, then
${\Gamma^{\prime}\models\Theta::(\Gamma^{\prime\prime})}$
#### 6.5.3. Subsynchronizing Judgment
The following lemmas apply to the subsynchronizing judgment defined in Section
5.2. Lemma 11 allows the client type (second argument) to become bigger; Lemma
12 allows the provider type (first argument) to become smaller under a
specific circumstance; Lemma 13 allows the constraint (third argument) to
become smaller if both provider and client are linear; and finally, Lemma 14
allows the construction of a smaller constraint given two subsynchronizing
judgments with the same provider and client types.
###### Lemma 11.
If $A\leq B\leq C$ with all same modalities (that is, $A,B,C$ are either all
linear or all shared) and $\vdash(A,B,\hat{D})\;\text{ssync}$, then
$\vdash(A,C,\hat{D})\;\text{ssync}$ for some $\hat{D}$.
###### Lemma 12.
If $A\leq B\leq C$ with all same modalities,
$\vdash(B,C,\hat{D})\;\text{ssync}$, and
$\vdash(A,C,\hat{E})\;\text{ssync}$, then $\vdash(A,C,\hat{D})\;\text{ssync}$
for some $\hat{D}$ and $\hat{E}$.
###### Lemma 13.
If $\vdash(A_{\scriptscriptstyle L},B_{\scriptscriptstyle
L},\hat{C})\;\text{ssync}$ and $\hat{D}\leq\hat{C}$, then
$\vdash(A_{\scriptscriptstyle L},B_{\scriptscriptstyle
L},\hat{D})\;\text{ssync}$ for some $A_{\scriptscriptstyle
L},B_{\scriptscriptstyle L},\hat{C},$ and $\hat{D}$.
###### Lemma 14.
If $\vdash(A_{\scriptscriptstyle L},B_{\scriptscriptstyle
L},\hat{C})\;\text{ssync}$ and $\vdash(A_{\scriptscriptstyle
L},B_{\scriptscriptstyle L},\hat{D})\;\text{ssync}$, then
${\vdash(A_{\scriptscriptstyle L},B_{\scriptscriptstyle
L},\hat{C}\land\hat{D})\;\text{ssync}}$ for some $A_{\scriptscriptstyle
L},B_{\scriptscriptstyle L},\hat{C},$ and $\hat{D}$.
Note that the meet $\hat{C}\land\hat{D}$ of two constraints is defined in
Appendix A (see Lemma 18).
### 6.6. Theorems
The preservation theorem, or session fidelity, guarantees that well-typed
configurations remain well-typed. In particular, this means that processes
will always adhere to the protocol denoted by the session type.
###### Theorem 15 (Preservation).
If ${\Gamma\models\Lambda;\Theta::(\Gamma;\Delta)}$ for some
$\Lambda,\Theta,\Gamma,$ and $\Delta$, and
$\Lambda;\Theta\rightarrow\Lambda^{\prime};\Theta^{\prime}$ for some
$\Lambda^{\prime};\Theta^{\prime}$, then
${\Gamma^{\prime}\models\Lambda^{\prime};\Theta^{\prime}::(\Gamma^{\prime};\Delta)}$
where $\Gamma^{\prime}\preceq\Gamma$.
Here, $\Gamma^{\prime}\preceq\Gamma$ captures the idea that the configuration
can gain additional shared processes and that the types of shared channels can
become smaller. For example, if a process spawns an additional shared process,
then the configuration gains an additional channel in $\Gamma$, and if a
shared channel is released at a smaller type, the type of that channel in
$\Gamma$ can become smaller. Note that although linear processes can also be
spawned, they never appear in $\Delta$: the linear channel that a newly
spawned process offers must be consumed by the spawning process, so $\Delta$
is unchanged.
###### Proof 6.1.
By induction on the dynamics and constructing a well-typed (and therefore
well-formed) configuration for each case. We present a simple case below; a
complete proof is presented in Appendix C.
###### Case 1.
D-FWDLS
$\text{proc}(a_{\scriptscriptstyle L},\text{fwd}\;a_{\scriptscriptstyle L}\
b_{\scriptscriptstyle S})\to\text{connect}(a_{\scriptscriptstyle
L},b_{\scriptscriptstyle S})$
Let $\Psi_{a}=\text{proc}(a_{\scriptscriptstyle
L},\text{fwd}\;a_{\scriptscriptstyle L}\ b_{\scriptscriptstyle S})$ and
$\Psi_{a}^{\prime}=\text{connect}(a_{\scriptscriptstyle
L},b_{\scriptscriptstyle S})$. Let $\Theta=\Theta_{1},\Psi_{a},\Theta_{2}$.
Then by well-formedness, $\Lambda=\text{unavail}(a_{\scriptscriptstyle
S}),\Lambda_{1}$.
$\displaystyle{\Gamma\models\Lambda;\Theta_{1},\Psi_{a},\Theta_{2}::(\Gamma;\Delta)}$
(assumption)
$\displaystyle{\Gamma\models\Lambda::(\Gamma)}\quad{\Gamma\models\Theta_{1},\Psi_{a},\Theta_{2}::(\Delta)}$
(by inversion on $\Omega$)
$\displaystyle{\Gamma\models\text{proc}(a_{\scriptscriptstyle
L},\text{fwd}\;a_{\scriptscriptstyle L}\ b_{\scriptscriptstyle
S}),\Theta_{2}::(a_{\scriptscriptstyle L}{:}A_{\scriptscriptstyle
L},\Delta_{p})}$ (by Lemma 4 and expanding $\Psi_{a}$)
$\displaystyle{\Gamma\models\Theta_{2}::(\Delta_{p})}\quad{\Gamma;\cdot\vdash\text{fwd}\;a_{\scriptscriptstyle
L}\ b_{\scriptscriptstyle S}::(a_{\scriptscriptstyle
L}{:}A^{\prime}_{\scriptscriptstyle L})}$ (by inversion on $\Theta 3$)
$\displaystyle b_{\scriptscriptstyle S}{:}\hat{B}\in\Gamma\quad\hat{B}\leq
A^{\prime}_{\scriptscriptstyle L}$ (by inversion on
$ID_{\scriptscriptstyle{LS}}$) $\displaystyle\hat{B}\leq A_{\scriptscriptstyle
L}$ (by transitivity of $\leq$)
$\displaystyle{\Gamma\models\text{connect}(a_{\scriptscriptstyle
L},b_{\scriptscriptstyle S}),\Theta_{2}::(a_{\scriptscriptstyle
L}{:}A_{\scriptscriptstyle L},\Delta_{p})}$ (by $\Theta 2$)
$\displaystyle{\Gamma\models\Theta_{1},\Psi_{a}^{\prime},\Theta_{2}::(\Delta)}$
(by Lemma 7)
$\displaystyle{\Gamma\models\Lambda;\Theta_{1},\Psi_{a}^{\prime},\Theta_{2}::(\Gamma;\Delta)}$
(by $\Omega$)
The well-formedness conditions are maintained because only $\Psi_{a}\in\Theta$
was replaced by $\Psi_{a}^{\prime}$.
Many of the dynamics involving the standard logical connectives
$({\otimes},{\multimap},{\oplus},$ and ${\&})$ follow a similar pattern and
are fairly simple. However, cases involving the shift connectives
$({\uparrow_{L}^{S}},{\downarrow_{L}^{S}},{\uparrow_{L}^{L}},{\downarrow_{L}^{L}})$
and linear to linear forwarding cause more complexities and require further
subcase analysis. These cases are presented in detail in Appendix D.
The progress theorem is as in [BP17]: we only allow configurations to be
stuck due to the failure of some client to acquire, for example because of a
deadlock. A shared or linear process term $\text{proc}(a,P)$ is _poised_ if
$P$ is currently communicating along its providing channel $a$. The poised
process terms of $SILL_{S{\leq}}$ are shown in the table below:
Receiving | Sending
---|---
 | $\text{proc}(a_{\scriptscriptstyle L},\text{close}\;a_{\scriptscriptstyle L})$
$\text{proc}(a_{\scriptscriptstyle L},x_{\scriptscriptstyle L}\leftarrow\text{recv}\;a_{\scriptscriptstyle L};P)$ | $\text{proc}(a_{\scriptscriptstyle L},\text{send}\;a_{\scriptscriptstyle L}\ c_{\scriptscriptstyle L};P)$
$\text{proc}(a_{\scriptscriptstyle L},x_{\scriptscriptstyle S}\leftarrow\text{recv}\;a_{\scriptscriptstyle L};P)$ | $\text{proc}(a_{\scriptscriptstyle L},\text{send}\;a_{\scriptscriptstyle L}\ c_{\scriptscriptstyle S};P)$
$\text{proc}(a_{\scriptscriptstyle L},\text{case}\;a_{\scriptscriptstyle L}\;\text{of}\;\{\overline{l\Rightarrow P}\})$ | $\text{proc}(a_{\scriptscriptstyle L},a.i;P)$
$\text{proc}(a_{\scriptscriptstyle L},x_{\scriptscriptstyle L}\leftarrow\text{acc}_{\scriptscriptstyle L}\;a_{\scriptscriptstyle L};P)$ | $\text{proc}(a_{\scriptscriptstyle L},x_{\scriptscriptstyle L}\leftarrow\text{det}_{\scriptscriptstyle L}\;a_{\scriptscriptstyle L};P)$
$\text{proc}(a_{\scriptscriptstyle S},x_{\scriptscriptstyle L}\leftarrow\text{acc}_{\scriptscriptstyle S}\;a_{\scriptscriptstyle S};P)$ | $\text{proc}(a_{\scriptscriptstyle S},x_{\scriptscriptstyle L}\leftarrow\text{det}_{\scriptscriptstyle S}\;a_{\scriptscriptstyle S};P)$
In particular, we say that a configuration is poised if all of its
$\text{proc}(-,-)$ members are poised.
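The table admits a direct executable reading; the sketch below (our own encoding of the relevant process terms, not the paper's concrete syntax) checks poisedness of a single process predicate.

```haskell
-- A process term is poised when its next action is along the channel it
-- offers; the constructors below abbreviate the terms in the table.
type Chan = String

data Term
  = Close Chan               -- close a
  | RecvL Chan | RecvS Chan  -- x_L <- recv a;  x_S <- recv a
  | SendL Chan | SendS Chan  -- send a c_L;  send a c_S
  | Case Chan | Label Chan   -- case a of ...;  a.i
  | AccL Chan | DetL Chan    -- x_L <- acc_L a;  x_L <- det_L a
  | AccS Chan | DetS Chan    -- x_L <- acc_S a;  x_L <- det_S a
  | Other                    -- any other term (spawns, waits, ...)

poised :: Chan -> Term -> Bool
poised a t = case t of
  Close b -> a == b
  RecvL b -> a == b
  RecvS b -> a == b
  SendL b -> a == b
  SendS b -> a == b
  Case  b -> a == b
  Label b -> a == b
  AccL  b -> a == b
  AccS  b -> a == b
  DetL  b -> a == b
  DetS  b -> a == b
  Other   -> False

-- A configuration is poised when 'poised' holds for every proc(a, P).
```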
###### Theorem 16 (Progress).
If ${\Gamma\models\Lambda;\Theta::(\Gamma;\Delta)}$ then either:
1. (1)
$\Lambda;\Theta\rightarrow\Lambda^{\prime};\Theta$ for some $\Lambda^{\prime}$
or
2. (2)
$\Lambda$ is poised and one of:
1. (a)
$\Lambda;\Theta\rightarrow\Lambda^{\prime};\Theta^{\prime}$ or
2. (b)
$\Theta$ is poised or
3. (c)
a linear process in $\Theta$ is stuck and therefore unable to acquire
###### Proof 6.2.
For details, see Appendix D. We first show that either the shared
configuration $\Lambda$ steps $(\Lambda\to\Lambda^{\prime}$ for some
$\Lambda^{\prime})$ or that $\Lambda$ is poised by induction on the derivation
of ${\Gamma\models\Lambda::(\Gamma)}$. If $\Lambda$ is poised, then we proceed
by induction on the derivation of ${\Gamma\models\Theta::(\Delta)}$ to show
one of:
1. (a)
$\Lambda;\Theta\to\Lambda^{\prime};\Theta^{\prime}$ for some
$\Lambda^{\prime}$ and $\Theta^{\prime}$
2. (b)
$\Theta$ poised
3. (c)
some $\Psi\in\Theta$ is stuck
###### Remark 17.
Another paper [BTP19] introduces additional static restrictions to allow a
stronger and more common notion of progress, which are orthogonal to our
results. We expect that adopting this extension to our work would give the
usual notion of progress with deadlock freedom.
## 7\. Related Work
Our paper serves as an extension of the manifest sharing system defined in
[BP17]: we introduce a notion of subtyping to the system, which allows us to
statically relax the equi-synchronizing constraint. Early glimpses of
subtyping can be seen in the previous system with the introduction of $\bot$
and $\top$ as the minimal and maximal constraints, which happen to be
compatible with our subtyping relation.
Subtyping for session types was first proposed by Gay and Hole [GH05] in the
classical setting, covering the linear connectives but not
${\uparrow_{L}^{L}}$ and ${\downarrow_{L}^{L}}$. Subtyping in the
intuitionistic setting that we work in was formalized by [AP16], again for
the linear connectives except ${\uparrow_{L}^{L}}$ and
${\downarrow_{L}^{L}}$. That paper also introduces subtyping for intersection
and union types, which are orthogonal to and thus compatible with the
subtyping in our system. Neither of these papers investigates modalities or
sharing, which are two of our contributions to the understanding of subtyping.
We believe that, given a well-defined translation of the modal shifts and the
sharing semantics to the classical setting, the subtyping on the shifts could
be defined in the classical setting as well.
There have also been many recent developments in subtyping in the context of
multiparty session types [CDCY14, CDCSY17, GJP+19, GPP+20], which are a
different class of type systems that describe protocols between an arbitrary
number of participants from a neutral global point of view. These systems
interpret subtyping quite differently: the subtyping we work with is at the
channel level, where two communicating processes can safely disagree on the
protocol, which yields a fairly simple definition in which subtyping is
tightly coupled with the individual connectives. However, since global types
in multiparty session types can be projected to a binary setting, there may be
non-obvious connections to be drawn. Understanding the relation of our
subtyping system to these systems is thus a challenge and an interesting item
for future work.
## 8\. Conclusion
We proposed a subtyping extension to the message-passing concurrent
programming language introduced in previous work [BP17] and showed examples
highlighting the expressiveness that this new system provides. Throughout the
paper, we followed two important principles, _substitutability_ and
_ignorance is bliss_ , which together give a rich type system that, in
particular, allows _phases_ (in a shared setting) to be manifest in the type.
One immediate application of shared subtyping is that, combined with
refinement types [DP20, DBH+21], it can encode finer specifications of
protocols. For example, in the auction scenario, we can statically show that
each client that does not win a bid is refunded precisely the amount of money
it bid. Without shared to linear subtyping, such specifications of shared
communication across multiple acquire-release cycles were not possible.
A more theoretical direction for future work is to extend the setting to
adjoint logic [PP19], which provides a more general framework for reasoning
about modal shifts in a message-passing system. In particular, we found that
affine session types, where contraction (aliasing) is rejected, have immediate
applications.
##### Acknowledgements
We would like to thank the anonymous reviewers for feedback on the initially
submitted version of this paper in COORDINATION 2021. Supported by NSF Grant
No. CCF-1718267 “Enriching Session Types for Practical Concurrent
Programming”.
## References
* [AP16] Coşku Acay and Frank Pfenning. Intersections and unions of session types. In N. Kobayashi, editor, 8th Workshop on Intersection Types and Related Systems (ITRS’16), pages 4–19, Porto, Portugal, June 2016. EPTCS 242.
* [BP17] Stephanie Balzer and Frank Pfenning. Manifest sharing with session types. In International Conference on Functional Programming (ICFP), pages 37:1–37:29. ACM, September 2017. Extended version available as Technical Report CMU-CS-17-106R, June 2017.
* [BTP19] Stephanie Balzer, Bernardo Toninho, and Frank Pfenning. Manifest deadlock-freedom for shared session types. In L. Caires, editor, 28th European Symposium on Programming (ESOP 2019), pages 611–639, Prague, Czech Republic, April 2019. Springer LNCS 11423.
* [CDCSY17] Tzu-chun Chen, Mariangiola Dezani-Ciancaglini, Alceste Scalas, and Nobuko Yoshida. On the Preciseness of Subtyping in Session Types. Logical Methods in Computer Science, Volume 13, Issue 2, June 2017.
* [CDCY14] Tzu-Chun Chen, Mariangiola Dezani-Ciancaglini, and Nobuko Yoshida. On the preciseness of subtyping in session types. In Proceedings of the Conference on Principles and Practice of Declarative Programming (PPDP’14), Canterbury, UK, September 2014. ACM.
* [CHP99] Karl Crary, Robert Harper, and Sidd Puri. What is a recursive module? In SIGPLAN Conference on Programming Language Design and Implementation, pages 50–63. ACM Press, 1999.
* [CP10] Luís Caires and Frank Pfenning. Session types as intuitionistic linear propositions. In Proceedings of the 21st International Conference on Concurrency Theory (CONCUR 2010), pages 222–236, Paris, France, August 2010. Springer LNCS 6269.
* [CS09] Iliano Cervesato and Andre Scedrov. Relating state-based and process-based concurrency through linear logic. Information and Computation, 207(10):1044–1077, October 2009.
* [DBH+21] Ankush Das, Stephanie Balzer, Jan Hoffmann, Frank Pfenning, and Ishani Santurkar. Resource-aware session types for digital contracts. In R. Küsters and D. Naumann, editors, 34th Computer Security Foundations Symposium (CSF 2021), Dubrovnik, Croatia, June 2021. IEEE. To appear.
* [DP20] Ankush Das and Frank Pfenning. Session types with arithmetic refinements. In I. Konnov and L. Kovács, editors, 31st International Conference on Concurrency Theory (CONCUR 2020), pages 13:1–13:18, Vienna, Austria, September 2020. LIPIcs 171.
* [GH05] Simon J. Gay and Malcolm Hole. Subtyping for session types in the $\pi$-calculus. Acta Informatica, 42(2–3):191–225, 2005.
* [GJP+19] Silvia Ghilezan, Svetlana Jakšić, Jovanka Pantović, Alceste Scalas, and Nobuko Yoshida. Precise subtyping for synchronous multiparty sessions. Journal of Logical and Algebraic Methods in Programming, 104:127 – 173, 2019.
* [GPP+20] Silvia Ghilezan, Jovanka Pantović, Ivan Prokić, Alceste Scalas, and Nobuko Yoshida. Precise subtyping for asynchronous multiparty sessions, 2020.
* [Gri15] Dennis Griffith. Polarized Substructural Session Types. PhD thesis, University of Illinois at Urbana-Champaign, 2015. In preparation.
* [Hon93] Kohei Honda. Types for dyadic interaction. In E. Best, editor, 4th International Conference on Concurrency Theory (CONCUR 1993), pages 509–523. Springer LNCS 715, 1993.
* [HVK98] Kohei Honda, Vasco T. Vasconcelos, and Makoto Kubo. Language primitives and type discipline for structured communication-based programming. In C. Hankin, editor, 7th European Symposium on Programming Languages and Systems (ESOP 1998), pages 122–138. Springer LNCS 1381, 1998.
* [MM84] Don P. Mitchell and Michael Merritt. A distributed algorithm for deadlock detection and resolution. In Symposium on Principles of Distributed Computing (PODC 1984), pages 282–284, Vancouver, British Columbia, August 1984. ACM.
* [PG15] Frank Pfenning and Dennis Griffith. Polarized substructural session types. In A. Pitts, editor, Proceedings of the 18th International Conference on Foundations of Software Science and Computation Structures (FoSSaCS 2015), pages 3–22, London, England, April 2015. Springer LNCS 9034. Invited talk.
* [PP19] Klaas Pruiksma and Frank Pfenning. A message-passing interpretation of adjoint logic. In F. Martins and D. Orchard, editors, Workshop on Programming Language Approaches to Concurrency and Communication-Centric Software (PLACES), pages 60–79, Prague, Czech Republic, April 2019. EPTCS 291.
* [San19] Chuta Sano. On session typed contracts for imperative languages. Master’s thesis, Carnegie Mellon University, December 2019. Available as Technical Report CMU-CS-19-133, December 2019.
* [SBP21] Chuta Sano, Stephanie Balzer, and Frank Pfenning. Manifestly phased communication via shared session types. In Ferruccio Damiani and Ornela Dardha, editors, Coordination Models and Languages, pages 23–40, Valletta, Malta, 2021. Springer LNCS 12717.
* [Ton15] Bernardo Toninho. A Logical Foundation for Session-based Concurrent Computation. PhD thesis, Carnegie Mellon University and Universidade Nova de Lisboa, May 2015. Available as Technical Report CMU-CS-15-109.
* [Wad12] Philip Wadler. Propositions as sessions. In Proceedings of the 17th International Conference on Functional Programming (ICFP 2012), pages 273–286, Copenhagen, Denmark, September 2012. ACM Press.
## Appendix A Meet Operator
$\hat{A}\land\hat{B}$ is defined coinductively on the structure of its
arguments. Note that there are many cases where these rules do not apply; in
that case, the result of the meet is $\bot$.
$\displaystyle 1\land 1\to 1$ $\displaystyle A_{\scriptscriptstyle L}\otimes
A^{\prime}_{\scriptscriptstyle L}\land B_{\scriptscriptstyle L}\otimes
B^{\prime}_{\scriptscriptstyle L}\to(A_{\scriptscriptstyle L}\land
B_{\scriptscriptstyle L})\otimes(A^{\prime}_{\scriptscriptstyle L}\land
B^{\prime}_{\scriptscriptstyle L})$ $\displaystyle A_{\scriptscriptstyle
L}\multimap A^{\prime}_{\scriptscriptstyle L}\land B_{\scriptscriptstyle
L}\multimap B^{\prime}_{\scriptscriptstyle L}\to(A_{\scriptscriptstyle L}\land
B_{\scriptscriptstyle L})\multimap(A^{\prime}_{\scriptscriptstyle L}\land
B^{\prime}_{\scriptscriptstyle L})$
$\displaystyle\&\{\overline{l{:}A_{\scriptscriptstyle
L}},\overline{m{:}B_{\scriptscriptstyle
L}}\}\land\&\{\overline{l{:}A^{\prime}_{\scriptscriptstyle
L}},\overline{n{:}C_{\scriptscriptstyle
L}}\}\to\&\{\overline{l{:}(A_{\scriptscriptstyle L}\land
A^{\prime}_{\scriptscriptstyle L})},\overline{m{:}B_{\scriptscriptstyle
L}},\overline{n{:}C_{\scriptscriptstyle L}}\}$
$\displaystyle\oplus\{\overline{l{:}A_{\scriptscriptstyle
L}},\overline{m{:}B_{\scriptscriptstyle
L}}\}\land\oplus\{\overline{l{:}A^{\prime}_{\scriptscriptstyle
L}},\overline{n{:}C_{\scriptscriptstyle
L}}\}\to\oplus\{\overline{l{:}(A_{\scriptscriptstyle L}\land
A^{\prime}_{\scriptscriptstyle L})}\}$ ($\overline{l}$ not empty)
$\displaystyle{\uparrow_{L}^{S}}A_{\scriptscriptstyle
L}\land{\uparrow_{L}^{S}}B_{\scriptscriptstyle
L}\to{\uparrow_{L}^{S}}{(A_{\scriptscriptstyle L}\land B_{\scriptscriptstyle
L})}$ $\displaystyle{\uparrow_{L}^{S}}A_{\scriptscriptstyle
L}\land{\uparrow_{L}^{L}}B_{\scriptscriptstyle
L}\to{\uparrow_{L}^{S}}{(A_{\scriptscriptstyle L}\land B_{\scriptscriptstyle
L})}$ $\displaystyle{\uparrow_{L}^{L}}A_{\scriptscriptstyle
L}\land{\uparrow_{L}^{S}}B_{\scriptscriptstyle
L}\to{\uparrow_{L}^{S}}{(A_{\scriptscriptstyle L}\land B_{\scriptscriptstyle
L})}$ $\displaystyle{\uparrow_{L}^{L}}A_{\scriptscriptstyle
L}\land{\uparrow_{L}^{L}}B_{\scriptscriptstyle
L}\to{\uparrow_{L}^{L}}{(A_{\scriptscriptstyle L}\land B_{\scriptscriptstyle
L})}$ $\displaystyle{\downarrow_{L}^{S}}A_{\scriptscriptstyle
S}\land{\downarrow_{L}^{S}}B_{\scriptscriptstyle
S}\to{\downarrow_{L}^{S}}{(A_{\scriptscriptstyle S}\land B_{\scriptscriptstyle
S})}$ $\displaystyle{\downarrow_{L}^{S}}A_{\scriptscriptstyle
S}\land{\downarrow_{L}^{L}}B_{\scriptscriptstyle
L}\to{\downarrow_{L}^{S}}{(A_{\scriptscriptstyle S}\land B_{\scriptscriptstyle
L})}$ $\displaystyle{\downarrow_{L}^{L}}A_{\scriptscriptstyle
L}\land{\downarrow_{L}^{S}}B_{\scriptscriptstyle
S}\to{\downarrow_{L}^{S}}{(A_{\scriptscriptstyle L}\land B_{\scriptscriptstyle
S})}$ $\displaystyle{\downarrow_{L}^{L}}A_{\scriptscriptstyle
L}\land{\downarrow_{L}^{L}}B_{\scriptscriptstyle
L}\to{\downarrow_{L}^{L}}{(A_{\scriptscriptstyle L}\land B_{\scriptscriptstyle
L})}$
Intuitively, the idea behind this construction is that on external choices we
take the union of the labels on both sides, whereas on internal choices we
take the intersection. Since we do not allow the nullary internal choice
$\oplus\{\}$ in the language, we require the meet of two internal choices to
be non-empty; that is, they must share at least one label. Otherwise, the meet
construction produces $\bot$.
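The following Haskell sketch mirrors these rules on finite, non-recursive types only; the actual definition is coinductive, so this is an illustration rather than the general algorithm, and all names are ours.

```haskell
import qualified Data.Map as Map

-- Session types with n-ary choices as label maps; Bot marks an
-- undefined meet.
data Ty
  = One
  | Tensor Ty Ty | Lolli Ty Ty
  | Plus (Map.Map String Ty)   -- internal choice
  | With (Map.Map String Ty)   -- external choice
  | UpS Ty | UpL Ty            -- shared / linear upshift
  | DownS Ty | DownL Ty        -- shared / linear downshift
  | Bot
  deriving (Eq, Show)

meet :: Ty -> Ty -> Ty
meet One One = One
meet (Tensor a a') (Tensor b b') = Tensor (meet a b) (meet a' b')
meet (Lolli  a a') (Lolli  b b') = Lolli  (meet a b) (meet a' b')
-- external choice: union of labels, meet on the shared ones
meet (With as) (With bs) = With (Map.unionWith meet as bs)
-- internal choice: intersection of labels, required non-empty
meet (Plus as) (Plus bs) =
  let common = Map.intersectionWith meet as bs
  in if Map.null common then Bot else Plus common
-- shifts: a shared shift absorbs a linear one
meet (UpS a)   (UpS b)   = UpS   (meet a b)
meet (UpS a)   (UpL b)   = UpS   (meet a b)
meet (UpL a)   (UpS b)   = UpS   (meet a b)
meet (UpL a)   (UpL b)   = UpL   (meet a b)
meet (DownS a) (DownS b) = DownS (meet a b)
meet (DownS a) (DownL b) = DownS (meet a b)
meet (DownL a) (DownS b) = DownS (meet a b)
meet (DownL a) (DownL b) = DownL (meet a b)
meet _ _ = Bot  -- no rule applies
```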
###### Lemma 18.
$\hat{A}\land\hat{B}$ is the greatest lower bound between $\hat{A}$ and
$\hat{B}$ with respect to subtyping.
###### Proof A.1.
By coinduction on the construction rules. The interesting part is on the
external and internal choices; the construction tightly matches the
appropriate direction of subtyping in the sense that the set of labels grows
on external choices and shrinks on internal choices.
## Appendix B Proofs of Lemmas
###### Lemma 19.
If ${\Gamma\models\Psi,\Theta::(\Delta)}$, then
${\Gamma\models\Theta::(\Delta^{\prime})}$ for some $\Delta^{\prime}$.
More generally, if ${\Gamma\models\Theta_{1},\Theta_{2}::(\Delta)}$, then
${\Gamma\models\Theta_{2}::(\Delta^{\prime})}$ for some $\Delta^{\prime}$.
###### Proof B.1.
For the first part, by case analysis on the derivation of
${\Gamma\models\Psi,\Theta::(\Delta)}$. In both cases ($\Theta 2$ and $\Theta
3$), we directly see that ${\Gamma\models\Theta::(\Delta^{\prime})}$ for some
$\Delta^{\prime}$.
For the second part, we can repeatedly apply the first part sequentially for
every $\Psi\in\Theta_{1}$.
###### Lemma 20.
Given a well-formed $\Lambda;\Theta$,
$\forall\text{proc}(a_{\scriptscriptstyle
S},-)\in\Lambda,\Psi_{a}\notin\Theta$
###### Proof B.2.
By well-formedness of $\Lambda$, $\text{proc}(a_{\scriptscriptstyle
S},-)\in\Lambda$ means that $\text{unavail}(a_{\scriptscriptstyle
S})\notin\Lambda$. By the contrapositive of well-formedness of
$\Lambda;\Theta$, $\text{unavail}(a_{\scriptscriptstyle
S})\notin\Lambda\implies\Psi_{a}\notin\Theta$
###### Lemma 21.
If
${\Gamma\models\Psi_{a},\Theta_{1},\Psi_{b},\Theta_{2}::(a_{\scriptscriptstyle
L}{:}A_{\scriptscriptstyle L},\Delta)}$ and $\Psi_{a}$ uses
$b_{\scriptscriptstyle L}$, then
${\Gamma\models\Psi_{a},\Psi_{b},\Theta_{1},\Theta_{2}::(a_{\scriptscriptstyle
L}{:}A_{\scriptscriptstyle L},\Delta)}$
###### Proof B.3.
By well-formedness, $\Psi_{b}$ is the only process in the configuration
offering $b_{\scriptscriptstyle L}$. Furthermore, by linearity, there can only
be one process that uses $b_{\scriptscriptstyle L}$, which by assumption is
$\Psi_{a}$, so $b_{\scriptscriptstyle L}$ will not be consumed by any process
in $\Theta_{1}$. Therefore, we can repeatedly move $\Psi_{b}$ to the left in
the configuration until it is immediately to the right of $\Psi_{a}$, the
unique process using $b_{\scriptscriptstyle L}$.
###### Lemma 22.
If ${\Gamma\models\Psi,\Theta::(\Delta)}$,
${\Gamma\models\Theta::(\Delta_{p})}$, and
${\Gamma\models\Theta^{\prime}::(\Delta_{p})}$, then
${\Gamma\models\Psi,\Theta^{\prime}::(\Delta)}$
More generally, if ${\Gamma\models\Theta_{1},\Theta_{2}::(\Delta)}$,
${\Gamma\models\Theta_{2}::(\Delta_{p})}$, and
${\Gamma\models\Theta_{2}^{\prime}::(\Delta_{p})}$, then
${\Gamma\models\Theta_{1},\Theta_{2}^{\prime}::(\Delta)}$
###### Proof B.4.
For the first part, by case analysis on the derivation of
${\Gamma\models\Psi,\Theta::(\Delta)}$. In both cases ($\Theta 2$ and $\Theta
3$), we can directly substitute $\Theta^{\prime}$ for $\Theta$ where it
appears in the configuration judgment.
For the second part, we can repeatedly apply the first part sequentially for
every $\Psi\in\Theta_{1}$.
###### Lemma 23.
If ${\Gamma\models\Psi_{a},\Theta^{\prime}::(a_{\scriptscriptstyle
L}{:}A^{\prime}_{\scriptscriptstyle L},\Delta^{\prime})}$, then for any
$B_{\scriptscriptstyle L}$ such that $A^{\prime}_{\scriptscriptstyle L}\leq
B_{\scriptscriptstyle L}$,
${\Gamma\models\Psi_{a},\Theta^{\prime}::(a_{\scriptscriptstyle
L}{:}B_{\scriptscriptstyle L},\Delta^{\prime})}$.
###### Proof B.5.
By inversion on the derivation of
${\Gamma\models\Psi_{a},\Theta^{\prime}::(a_{\scriptscriptstyle
L}{:}A^{\prime}_{\scriptscriptstyle L},\Delta^{\prime})}$.
###### Case 1.
$\dfrac{b_{\scriptscriptstyle S}{:}\hat{B}\in\Gamma\quad\hat{B}\leq A^{\prime}_{\scriptscriptstyle L}\quad\Gamma\models\Theta^{\prime}::(\Delta^{\prime})}{\Gamma\models\text{connect}(a_{\scriptscriptstyle L},b_{\scriptscriptstyle S}),\Theta^{\prime}::(a_{\scriptscriptstyle L}{:}A^{\prime}_{\scriptscriptstyle L},\Delta^{\prime})}$
By transitivity, $\hat{B}\leq B_{\scriptscriptstyle L}$, and therefore
${\Gamma\models\text{connect}(a_{\scriptscriptstyle L},b_{\scriptscriptstyle
S}),\Theta^{\prime}::(a_{\scriptscriptstyle L}{:}B_{\scriptscriptstyle
L},\Delta^{\prime})}$ by $\Theta 2$.
###### Case 2.
$\dfrac{a_{\scriptscriptstyle S}{:}\hat{A}\in\Gamma\quad\vdash(A^{\prime\prime}_{\scriptscriptstyle L},A^{\prime}_{\scriptscriptstyle L},\hat{A})\;\text{ssync}\quad\Gamma;\Delta_{a}\vdash P::(a_{\scriptscriptstyle L}{:}A^{\prime\prime}_{\scriptscriptstyle L})\quad\Gamma\models\Theta^{\prime}::(\Delta_{a},\Delta^{\prime})}{\Gamma\models\text{proc}(a_{\scriptscriptstyle L},P),\Theta^{\prime}::(a_{\scriptscriptstyle L}{:}A^{\prime}_{\scriptscriptstyle L},\Delta^{\prime})}$
Since $A^{\prime}_{\scriptscriptstyle L}\leq B_{\scriptscriptstyle L}$, Lemma
11 gives $\vdash(A^{\prime\prime}_{\scriptscriptstyle L},B_{\scriptscriptstyle
L},\hat{A})\;\text{ssync}$. Therefore,
${\Gamma\models\text{proc}(a_{\scriptscriptstyle
L},P),\Theta^{\prime}::(a_{\scriptscriptstyle L}{:}B_{\scriptscriptstyle
L},\Delta^{\prime})}$ by $\Theta 3$.
###### Lemma 24.
Let $\Gamma^{\prime}\preceq\Gamma$ and ${\Gamma;\Delta\vdash
P::(z_{\scriptscriptstyle L}{:}C_{\scriptscriptstyle L})}$, then
${\Gamma^{\prime};\Delta\vdash P::(z_{\scriptscriptstyle
L}{:}C_{\scriptscriptstyle L})}$.
###### Proof B.6.
We first prove the admissibility of the substitution of a shared channel by a
smaller type in a typing judgment. In particular, we will begin by showing
that if
${\Gamma,x_{\scriptscriptstyle S}{:}\hat{A};\Delta\vdash
P::(z_{\scriptscriptstyle L}{:}C_{\scriptscriptstyle L})}$
then
${\Gamma,x_{\scriptscriptstyle S}{:}\hat{B};\Delta\vdash
P::(z_{\scriptscriptstyle L}{:}C_{\scriptscriptstyle L})}$
for some $\hat{B}\leq\hat{A}$ by induction on the derivation of
${\Gamma,x_{\scriptscriptstyle S}{:}\hat{A};\Delta\vdash
P::(z_{\scriptscriptstyle L}{:}C_{\scriptscriptstyle L})}$.
First, we point out that the rules that do not use $x_{\scriptscriptstyle S}$
(most of them) are trivial, since we can simply appeal to the induction
hypothesis (IH) on the premise(s) of the appropriate derivation. The rules
that can use $x_{\scriptscriptstyle S}$ are $ID_{\scriptscriptstyle
S},ID_{\scriptscriptstyle{LS}},SP_{\scriptscriptstyle{LL}},SP_{\scriptscriptstyle{LS}},SP_{\scriptscriptstyle{SS}},{\uparrow_{L}^{S}}L,{\multimap}L_{\scriptscriptstyle
S},$ and ${\otimes}R_{\scriptscriptstyle S}$. For these cases, we can confirm
that the substitution is valid by using the IH and the transitivity of
$\leq$. We present one such case:
###### Case 1.
$\dfrac{\hat{A}\leq A_{\scriptscriptstyle L}\quad\Gamma,x_{\scriptscriptstyle S}{:}\hat{A};\Delta\vdash P::(y_{\scriptscriptstyle L}{:}B_{\scriptscriptstyle L})}{\Gamma,x_{\scriptscriptstyle S}{:}\hat{A};\Delta\vdash\text{send}\;y_{\scriptscriptstyle L}\ x_{\scriptscriptstyle S};P::(y_{\scriptscriptstyle L}{:}A_{\scriptscriptstyle L}\otimes B_{\scriptscriptstyle L})}$
Then by the IH, ${\Gamma,x_{\scriptscriptstyle S}{:}\hat{B};\Delta\vdash
P::(y_{\scriptscriptstyle L}{:}B_{\scriptscriptstyle L})}$. Furthermore, by
transitivity, $\hat{B}\leq A_{\scriptscriptstyle L}$. Therefore, by
${\otimes}R_{\scriptscriptstyle S}$, ${\Gamma,x_{\scriptscriptstyle
S}{:}\hat{B};\Delta\vdash\text{send}\;y_{\scriptscriptstyle L}\
x_{\scriptscriptstyle S};P::(y_{\scriptscriptstyle L}{:}A_{\scriptscriptstyle
L}\otimes B_{\scriptscriptstyle L})}$.
Having shown that substitution at a smaller type in the shared context
$\Gamma$ is admissible, it remains to note that $\Gamma^{\prime}$ either
lowers the types of channels already in $\Gamma$, for which we repeat the
argument above, or contains channel names not in $\Gamma$, which we resolve
via weakening.
###### Lemma 25.
Let $\Gamma^{\prime}\preceq\Gamma$. Then:
1. (1)
If ${\Gamma\models\Theta::(\Delta)}$ for some $\Theta,\Delta$, then
${\Gamma^{\prime}\models\Theta::(\Delta)}$
2. (2)
If ${\Gamma\models\Lambda::(\Gamma^{\prime\prime})}$ for some
$\Lambda,\Gamma^{\prime\prime}$, then
${\Gamma^{\prime}\models\Lambda::(\Gamma^{\prime\prime})}$
###### Proof B.7.
For the first part, by induction on the derivation of
${\Gamma\models\Theta::(\Delta)}$.
###### Case 1.
${\Gamma\models\cdot::(\cdot)}$
Any $\Gamma$ applies, so in particular any $\Gamma^{\prime}\preceq\Gamma$ will
as well.
###### Case 2.
$\dfrac{b_{\scriptscriptstyle S}{:}\hat{B}\in\Gamma\quad\hat{B}\leq A_{\scriptscriptstyle L}\quad\Gamma\models\Theta^{\prime}::(\Delta^{\prime})}{\Gamma\models\text{connect}(a_{\scriptscriptstyle L},b_{\scriptscriptstyle S}),\Theta^{\prime}::(a_{\scriptscriptstyle L}{:}A_{\scriptscriptstyle L},\Delta^{\prime})}$
By exchange, we can assume without loss of generality that
$\Gamma=b_{\scriptscriptstyle S}{:}\hat{B},\Gamma_{r}$. Similarly, we can
assume without loss of generality that $\Gamma^{\prime}=b_{\scriptscriptstyle
S}{:}\hat{B^{\prime}},\Gamma_{r}^{\prime}$ where $\hat{B^{\prime}}\leq\hat{B}$
and $\Gamma_{r}^{\prime}\preceq\Gamma_{r}$.
$\hat{B^{\prime}}\leq A_{\scriptscriptstyle L}$ follows by transitivity of
$\leq$, and ${\Gamma^{\prime}\models\Theta^{\prime}::(\Delta^{\prime})}$
follows from the IH. Therefore,
${\Gamma^{\prime}\models\text{connect}(a_{\scriptscriptstyle
L},b_{\scriptscriptstyle S}),\Theta^{\prime}::(a_{\scriptscriptstyle
L}{:}A_{\scriptscriptstyle L},\Delta^{\prime})}$.
###### Case 3.
$\dfrac{a_{\scriptscriptstyle S}{:}\hat{A}\in\Gamma\qquad\vdash(A^{\prime}_{\scriptscriptstyle L},A_{\scriptscriptstyle L},\hat{A})\;\text{ssync}\qquad{\Gamma;\Delta_{a}\vdash P::(a_{\scriptscriptstyle L}{:}A^{\prime}_{\scriptscriptstyle L})}\qquad{\Gamma\models\Theta^{\prime}::(\Delta_{a},\Delta^{\prime})}}{{\Gamma\models\text{proc}(a_{\scriptscriptstyle L},P),\Theta^{\prime}::(a:A_{\scriptscriptstyle L},\Delta^{\prime})}}$
By exchange, we can assume without loss of generality that
$\Gamma=a_{\scriptscriptstyle S}{:}\hat{A},\Gamma_{r}$. Similarly, we can
assume without loss of generality that $\Gamma^{\prime}=a_{\scriptscriptstyle
S}{:}\hat{A^{\prime}},\Gamma_{r}^{\prime}$ where $\hat{A^{\prime}}\leq\hat{A}$
and $\Gamma_{r}^{\prime}\preceq\Gamma_{r}$.
$\vdash(A^{\prime}_{\scriptscriptstyle L},A_{\scriptscriptstyle
L},\hat{A^{\prime}})\;\text{ssync}$ follows from Lemma 13,
${\Gamma^{\prime};\Delta_{a}\vdash P::(a_{\scriptscriptstyle
L}{:}A^{\prime}_{\scriptscriptstyle L})}$ follows from Lemma 9, and
${\Gamma^{\prime}\models\Theta^{\prime}::(\Delta_{a},\Delta^{\prime})}$
follows from the IH. Therefore,
${\Gamma^{\prime}\models\text{proc}(a_{\scriptscriptstyle
L},P),\Theta^{\prime}::(a:A_{\scriptscriptstyle L},\Delta^{\prime})}$
For the second part, by induction on the derivation of
${\Gamma\models\Lambda::(\Delta^{\prime})}$
###### Case 1.
${\Gamma\models\cdot::(\cdot)}$
Any $\Gamma$ applies, so in particular any $\Gamma^{\prime}\preceq\Gamma$ will
as well.
###### Case 2.
$\dfrac{{\Gamma\models\Lambda_{1}::(\Gamma_{1})}\qquad{\Gamma\models\Lambda_{2}::(\Gamma_{2})}}{{\Gamma\models\Lambda_{1},\Lambda_{2}::(\Gamma_{1},\Gamma_{2})}}$
Both ${\Gamma^{\prime}\models\Lambda_{1}::(\Gamma_{1})}$ and
${\Gamma^{\prime}\models\Lambda_{2}::(\Gamma_{2})}$ follow from the IH.
Therefore,
${\Gamma^{\prime}\models\Lambda_{1},\Lambda_{2}::(\Gamma_{1},\Gamma_{2})}$
###### Case 3.
$\dfrac{\vdash(A^{\prime}_{\scriptscriptstyle S},A_{\scriptscriptstyle S},\top)\;\text{ssync}\qquad{\Gamma\vdash P::(a_{\scriptscriptstyle S}{:}A^{\prime}_{\scriptscriptstyle S})}}{{\Gamma\models\text{proc}(a_{\scriptscriptstyle S},P)::(a_{\scriptscriptstyle S}{:}A_{\scriptscriptstyle S})}}$
${\Gamma^{\prime}\vdash P::(a_{\scriptscriptstyle
S}{:}A^{\prime}_{\scriptscriptstyle S})}$ follows from Lemma 9, and the ssync
premise does not mention $\Gamma$. Therefore,
${\Gamma^{\prime}\models\text{proc}(a_{\scriptscriptstyle S},P)::(a_{\scriptscriptstyle
S}{:}A_{\scriptscriptstyle S})}$
###### Case 4.
${\Gamma\models\text{unavail}(a_{\scriptscriptstyle
S})::(a_{\scriptscriptstyle S}{:}\hat{A})}$
Any $\Gamma$ applies, so in particular any $\Gamma^{\prime}\preceq\Gamma$ will
as well.
To prove the following lemmas, we switch to a set-based formulation of safe
synchronization; $\vdash(A,B,\hat{D})\;\text{ssync}$ is written as
$(A,B,\hat{D})\in\text{ssync}$. We also define a monotone map $F$ from the
coinductive definition of ssync, giving us $\text{ssync}\subseteq F(\text{ssync})$;
that is, ssync is $F$-consistent.
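As a brief reminder of the proof pattern used repeatedly below (a minimal sketch, assuming ssync is the greatest fixed point of the monotone map $F$, as the coinductive definition suggests): any $F$-consistent set is contained in ssync,
$\dfrac{X\subseteq F(X)}{X\subseteq\text{ssync}}\;\text{(coinduction)}$
so once we show that an extension $\text{ssync}^{\prime}\supseteq\text{ssync}$ is $F$-consistent, every member of $\text{ssync}^{\prime}$ is a member of ssync, which is exactly the closure property each of the following lemmas claims.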
###### Lemma 26.
If $A\leq B\leq C$ with all same modalities (that is, $A,B,C$ are either all
linear or all shared) and $\vdash(A,B,\hat{D})\;\text{ssync}$, then
$\vdash(A,C,\hat{D})\;\text{ssync}$ for some $\hat{D}$.
###### Proof B.8.
We want to show that
$\text{ssync}^{\prime}\;::=\;\text{ssync}\cup\text{ssync}_{\Uparrow}$
is $F$-consistent where
$\displaystyle\text{ssync}_{\Uparrow}\;::=\;\\{(A,C,\hat{D})\;|\;\exists
B.B\leq C\land(A,B,\hat{D})\in\text{ssync}\\}$
As before, $A,B,C$ must all be of the same modality.
We will prove $F$-consistency of $\text{ssync}^{\prime}$, that is,
$\text{ssync}^{\prime}\in F(\text{ssync}^{\prime})$ by showing that each of
the two sets ssync and $\text{ssync}_{\Uparrow}$ are subsets of
$F(\text{ssync}^{\prime})$.
First, $\text{ssync}\subseteq F(\text{ssync}^{\prime})$ immediately follows
because $\text{ssync}\subseteq F(\text{ssync})$ and $F(\text{ssync})\subseteq
F(\text{ssync}^{\prime})$ by monotonicity of $F$ given
$\text{ssync}\subseteq\text{ssync}^{\prime}$. We will now consider
$\text{ssync}_{\Uparrow}\subseteq F(\text{ssync}^{\prime})$ by case analysis on the
structure of $A$. For most cases, we can uniquely infer the structure of $B$ and
$C$ from the structure of $A$ by inversion on the appropriate subtyping rule.
###### Case 1.
$A={\uparrow_{L}^{L}}A^{\prime}_{\scriptscriptstyle L}$; then
$B={\uparrow_{L}^{L}}B^{\prime}_{\scriptscriptstyle L}$ and
$C={\uparrow_{L}^{L}}C^{\prime}_{\scriptscriptstyle L}$ with
$A^{\prime}_{\scriptscriptstyle L}\leq B^{\prime}_{\scriptscriptstyle L}\leq
C^{\prime}_{\scriptscriptstyle L}$.
$\displaystyle({\uparrow_{L}^{L}}A^{\prime}_{\scriptscriptstyle
L},{\uparrow_{L}^{L}}C^{\prime}_{\scriptscriptstyle L},\hat{D})$
$\displaystyle\in\text{ssync}_{\Uparrow},({\uparrow_{L}^{L}}A^{\prime}_{\scriptscriptstyle
L},{\uparrow_{L}^{L}}B^{\prime}_{\scriptscriptstyle
L},\hat{D})\in\text{ssync}$ (this case)
$\displaystyle(A^{\prime}_{\scriptscriptstyle
L},B^{\prime}_{\scriptscriptstyle L},\hat{D})$ $\displaystyle\in\text{ssync}$
(by inversion on $D{\uparrow_{L}^{L}}$)
$\displaystyle(A^{\prime}_{\scriptscriptstyle
L},C^{\prime}_{\scriptscriptstyle L},\hat{D})$
$\displaystyle\in\text{ssync}_{\Uparrow}$ (by definition of
$\text{ssync}_{\Uparrow}$ with $B^{\prime}_{\scriptscriptstyle L}\leq
C^{\prime}_{\scriptscriptstyle L}$)
$\displaystyle(A^{\prime}_{\scriptscriptstyle
L},C^{\prime}_{\scriptscriptstyle L},\hat{D})$
$\displaystyle\in\text{ssync}^{\prime}$ (since
$\text{ssync}_{\Uparrow}\subseteq\text{ssync}^{\prime}$)
$\displaystyle({\uparrow_{L}^{L}}A^{\prime}_{\scriptscriptstyle
L},{\uparrow_{L}^{L}}C^{\prime}_{\scriptscriptstyle L},\hat{D})$
$\displaystyle\in F(\text{ssync}^{\prime})$ (by $D{\uparrow_{L}^{L}}$)
${\downarrow_{L}^{L}},\otimes,$ and $\multimap$ follow a similar pattern of
appealing to the covariance of subtyping on the continuation types.
###### Case 2.
$A=\oplus\\{{\overline{l{:}A_{\scriptscriptstyle L}}}\\}$; then
$B=\oplus\\{{\overline{l{:}B_{\scriptscriptstyle
L}},\overline{m{:}B_{\scriptscriptstyle L}}}\\}$ and
$C=\oplus\\{{\overline{l{:}C_{\scriptscriptstyle
L}},\overline{m{:}C_{\scriptscriptstyle
L}},\overline{n{:}C_{\scriptscriptstyle L}}}\\}$ with
${A_{i}}_{\scriptscriptstyle L}\leq{B_{i}}_{\scriptscriptstyle
L}\leq{C_{i}}_{\scriptscriptstyle L}\;\forall i\in\overline{l}$ and
${B_{i}}_{\scriptscriptstyle L}\leq{C_{i}}_{\scriptscriptstyle L}\;\forall
i\in\overline{m}$.
$\displaystyle(\oplus\\{{\overline{l{:}A_{\scriptscriptstyle
L}}}\\},\oplus\\{{\overline{l{:}C_{\scriptscriptstyle
L}},\overline{m{:}C_{\scriptscriptstyle
L}},\overline{n{:}C_{\scriptscriptstyle L}}}\\},\hat{D})$
$\displaystyle\in\text{ssync}_{\Uparrow}$
$\displaystyle(\oplus\\{{\overline{l{:}A_{\scriptscriptstyle
L}}}\\},\oplus\\{{\overline{l{:}B_{\scriptscriptstyle
L}},\overline{m{:}B_{\scriptscriptstyle L}}}\\},\hat{D})$
$\displaystyle\in\text{ssync}$ (this case) $\displaystyle(\forall
i\in\overline{l})\;({A_{i}}_{\scriptscriptstyle L},{B_{i}}_{\scriptscriptstyle
L},\hat{D})$ $\displaystyle\in\text{ssync}$ (by inversion on $D{\oplus}$)
$\displaystyle(\forall i\in\overline{l})\;({A_{i}}_{\scriptscriptstyle
L},{C_{i}}_{\scriptscriptstyle L},\hat{D})$
$\displaystyle\in\text{ssync}_{\Uparrow}$ $\displaystyle(\forall
i\in\overline{l})\;({A_{i}}_{\scriptscriptstyle L},{C_{i}}_{\scriptscriptstyle
L},\hat{D})$ $\displaystyle\in\text{ssync}^{\prime}$ (since
$\text{ssync}_{\Uparrow}\subseteq\text{ssync}^{\prime}$)
$\displaystyle(\oplus\\{{\overline{l{:}A_{\scriptscriptstyle
L}}}\\},\oplus\\{{\overline{l{:}C_{\scriptscriptstyle
L}},\overline{m{:}C_{\scriptscriptstyle
L}},\overline{n{:}C_{\scriptscriptstyle L}}}\\},\hat{D})$ $\displaystyle\in
F(\text{ssync}^{\prime})$ (by $D{\oplus}$)
$D{\&}$ follows a similar pattern.
###### Case 3.
$A={\downarrow_{L}^{S}}A_{\scriptscriptstyle S}$; then there are three
possible assignments to $B$ and $C$ that satisfy the subtyping constraints,
so we will continue by subcasing on the structure of $B$ and $C$.
###### Subcase 1.
$B={\downarrow_{L}^{S}}B_{\scriptscriptstyle S}$ and
$C={\downarrow_{L}^{S}}C_{\scriptscriptstyle S}$ with $A_{\scriptscriptstyle
S}\leq B_{\scriptscriptstyle S}\leq C_{\scriptscriptstyle S}$.
$\displaystyle({\downarrow_{L}^{S}}A_{\scriptscriptstyle
S},{\downarrow_{L}^{S}}C_{\scriptscriptstyle S},\hat{D})$
$\displaystyle\in\text{ssync}_{\Uparrow},({\downarrow_{L}^{S}}A_{\scriptscriptstyle
S},{\downarrow_{L}^{S}}B_{\scriptscriptstyle S},\hat{D})\in\text{ssync}$ (this
case) $\displaystyle(A_{\scriptscriptstyle S},B_{\scriptscriptstyle S},\top)$
$\displaystyle\in\text{ssync},A_{\scriptscriptstyle S}\leq\hat{D}$ (by
inversion on $D{\downarrow_{L}^{S}}$) $\displaystyle(A_{\scriptscriptstyle
S},C_{\scriptscriptstyle S},\top)$ $\displaystyle\in\text{ssync}_{\Uparrow}$
(by definition of $\text{ssync}_{\Uparrow}$ with $B_{\scriptscriptstyle S}\leq
C_{\scriptscriptstyle S}$) $\displaystyle(A_{\scriptscriptstyle
S},C_{\scriptscriptstyle S},\top)$ $\displaystyle\in\text{ssync}^{\prime}$
(since $\text{ssync}_{\Uparrow}\subseteq\text{ssync}^{\prime}$)
$\displaystyle({\downarrow_{L}^{S}}A_{\scriptscriptstyle
S},{\downarrow_{L}^{S}}C_{\scriptscriptstyle S},\hat{D})$ $\displaystyle\in
F(\text{ssync}^{\prime})$ (by $D{\downarrow_{L}^{S}}$)
###### Subcase 2.
$B={\downarrow_{L}^{S}}B_{\scriptscriptstyle S}$ and
$C={\downarrow_{L}^{L}}C_{\scriptscriptstyle L}$ with $A_{\scriptscriptstyle
S}\leq B_{\scriptscriptstyle S}\leq C_{\scriptscriptstyle L}$.
$\displaystyle({\downarrow_{L}^{S}}A_{\scriptscriptstyle
S},{\downarrow_{L}^{L}}C_{\scriptscriptstyle L},\hat{D})$
$\displaystyle\in\text{ssync}_{\Uparrow},({\downarrow_{L}^{S}}A_{\scriptscriptstyle
S},{\downarrow_{L}^{S}}B_{\scriptscriptstyle S},\hat{D})\in\text{ssync}$ (this
case) $\displaystyle(A_{\scriptscriptstyle S},B_{\scriptscriptstyle S},\top)$
$\displaystyle\in\text{ssync},A_{\scriptscriptstyle S}\leq\hat{D}$ (by
inversion on $D{\downarrow_{L}^{S}}$) $\displaystyle(A_{\scriptscriptstyle
S},C_{\scriptscriptstyle L},\top)$ $\displaystyle\in\text{ssync}_{\Uparrow}$
(by definition of $\text{ssync}_{\Uparrow}$ with $B_{\scriptscriptstyle S}\leq
C_{\scriptscriptstyle L}$) $\displaystyle(A_{\scriptscriptstyle
S},C_{\scriptscriptstyle L},\top)$ $\displaystyle\in\text{ssync}^{\prime}$
(since $\text{ssync}_{\Uparrow}\subseteq\text{ssync}^{\prime}$)
$\displaystyle({\downarrow_{L}^{S}}A_{\scriptscriptstyle
S},{\downarrow_{L}^{L}}C_{\scriptscriptstyle L},\hat{D})$ $\displaystyle\in
F(\text{ssync}^{\prime})$ (by $D{\downarrow_{L}^{S}}$)
###### Subcase 3.
$B={\downarrow_{L}^{L}}B_{\scriptscriptstyle L}$ and
$C={\downarrow_{L}^{L}}C_{\scriptscriptstyle L}$ with $A_{\scriptscriptstyle
S}\leq B_{\scriptscriptstyle L}\leq C_{\scriptscriptstyle L}$.
$\displaystyle({\downarrow_{L}^{S}}A_{\scriptscriptstyle
S},{\downarrow_{L}^{L}}C_{\scriptscriptstyle L},\hat{D})$
$\displaystyle\in\text{ssync}_{\Uparrow},({\downarrow_{L}^{S}}A_{\scriptscriptstyle
S},{\downarrow_{L}^{L}}B_{\scriptscriptstyle L},\hat{D})\in\text{ssync}$ (this
case) $\displaystyle(A_{\scriptscriptstyle S},B_{\scriptscriptstyle L},\top)$
$\displaystyle\in\text{ssync},A_{\scriptscriptstyle S}\leq\hat{D}$ (by
inversion on $D{\downarrow_{L}^{S}}{\downarrow_{L}^{L}}$)
$\displaystyle(A_{\scriptscriptstyle S},C_{\scriptscriptstyle L},\top)$
$\displaystyle\in\text{ssync}_{\Uparrow}$ (by definition of
$\text{ssync}_{\Uparrow}$ with $B_{\scriptscriptstyle L}\leq
C_{\scriptscriptstyle L}$) $\displaystyle(A_{\scriptscriptstyle
S},C_{\scriptscriptstyle L},\top)$ $\displaystyle\in\text{ssync}^{\prime}$
(since $\text{ssync}_{\Uparrow}\subseteq\text{ssync}^{\prime}$)
$\displaystyle({\downarrow_{L}^{S}}A_{\scriptscriptstyle
S},{\downarrow_{L}^{L}}C_{\scriptscriptstyle L},\hat{D})$ $\displaystyle\in
F(\text{ssync}^{\prime})$ (by $D{\downarrow_{L}^{S}}$)
###### Case 4.
$A={\uparrow_{L}^{S}}A_{\scriptscriptstyle L}$; then there are three possible
assignments to $B$ and $C$ that satisfy the subtyping constraints, so we
will continue by subcasing on the structure of $B$ and $C$.
###### Subcase 1.
$B={\uparrow_{L}^{S}}B_{\scriptscriptstyle L}$ and
$C={\uparrow_{L}^{S}}C_{\scriptscriptstyle L}$ with $A_{\scriptscriptstyle
L}\leq B_{\scriptscriptstyle L}\leq C_{\scriptscriptstyle L}$.
$\displaystyle({\uparrow_{L}^{S}}A_{\scriptscriptstyle
L},{\uparrow_{L}^{S}}C_{\scriptscriptstyle L},\top)$
$\displaystyle\in\text{ssync}_{\Uparrow},({\uparrow_{L}^{S}}A_{\scriptscriptstyle
L},{\uparrow_{L}^{S}}B_{\scriptscriptstyle L},\top)\in\text{ssync}$ (this
case) $\displaystyle(A_{\scriptscriptstyle L},B_{\scriptscriptstyle
L},{\uparrow_{L}^{S}}A_{\scriptscriptstyle L})$ $\displaystyle\in\text{ssync}$
(by inversion on $D{\uparrow_{L}^{S}}$) $\displaystyle(A_{\scriptscriptstyle
L},C_{\scriptscriptstyle L},{\uparrow_{L}^{S}}A_{\scriptscriptstyle L})$
$\displaystyle\in\text{ssync}_{\Uparrow}$ (by definition of
$\text{ssync}_{\Uparrow}$ with $B_{\scriptscriptstyle L}\leq
C_{\scriptscriptstyle L}$) $\displaystyle(A_{\scriptscriptstyle
L},C_{\scriptscriptstyle L},{\uparrow_{L}^{S}}A_{\scriptscriptstyle L})$
$\displaystyle\in\text{ssync}^{\prime}$ (since
$\text{ssync}_{\Uparrow}\subseteq\text{ssync}^{\prime}$)
$\displaystyle({\uparrow_{L}^{S}}A_{\scriptscriptstyle
L},{\uparrow_{L}^{S}}C_{\scriptscriptstyle L},\top)$ $\displaystyle\in
F(\text{ssync}^{\prime})$ (by $D{\uparrow_{L}^{S}}$)
###### Subcase 2.
$B={\uparrow_{L}^{S}}B_{\scriptscriptstyle L}$ and
$C={\uparrow_{L}^{L}}C_{\scriptscriptstyle L}$ with $A_{\scriptscriptstyle
L}\leq B_{\scriptscriptstyle L}\leq C_{\scriptscriptstyle L}$.
$\displaystyle({\uparrow_{L}^{S}}A_{\scriptscriptstyle
L},{\uparrow_{L}^{L}}C_{\scriptscriptstyle L},\top)$
$\displaystyle\in\text{ssync}_{\Uparrow},({\uparrow_{L}^{S}}A_{\scriptscriptstyle
L},{\uparrow_{L}^{S}}B_{\scriptscriptstyle L},\top)\in\text{ssync}$ (this
case) $\displaystyle(A_{\scriptscriptstyle L},B_{\scriptscriptstyle
L},{\uparrow_{L}^{S}}A_{\scriptscriptstyle L})$ $\displaystyle\in\text{ssync}$
(by inversion on $D{\uparrow_{L}^{S}}$) $\displaystyle(A_{\scriptscriptstyle
L},C_{\scriptscriptstyle L},{\uparrow_{L}^{S}}A_{\scriptscriptstyle L})$
$\displaystyle\in\text{ssync}_{\Uparrow}$ (by definition of
$\text{ssync}_{\Uparrow}$ with $B_{\scriptscriptstyle L}\leq
C_{\scriptscriptstyle L}$) $\displaystyle(A_{\scriptscriptstyle
L},C_{\scriptscriptstyle L},{\uparrow_{L}^{S}}A_{\scriptscriptstyle L})$
$\displaystyle\in\text{ssync}^{\prime}$ (since
$\text{ssync}_{\Uparrow}\subseteq\text{ssync}^{\prime}$)
$\displaystyle({\uparrow_{L}^{S}}A_{\scriptscriptstyle
L},{\uparrow_{L}^{L}}C_{\scriptscriptstyle L},\top)$ $\displaystyle\in
F(\text{ssync}^{\prime})$ (by $D{\uparrow_{L}^{S}}$)
###### Subcase 3.
$B={\uparrow_{L}^{L}}B_{\scriptscriptstyle L}$ and
$C={\uparrow_{L}^{L}}C_{\scriptscriptstyle L}$ with $A_{\scriptscriptstyle
L}\leq B_{\scriptscriptstyle L}\leq C_{\scriptscriptstyle L}$.
$\displaystyle({\uparrow_{L}^{S}}A_{\scriptscriptstyle
L},{\uparrow_{L}^{L}}C_{\scriptscriptstyle L},\top)$
$\displaystyle\in\text{ssync}_{\Uparrow},({\uparrow_{L}^{S}}A_{\scriptscriptstyle
L},{\uparrow_{L}^{L}}B_{\scriptscriptstyle L},\top)\in\text{ssync}$ (this
case) $\displaystyle(A_{\scriptscriptstyle L},B_{\scriptscriptstyle
L},{\uparrow_{L}^{S}}A_{\scriptscriptstyle L})$ $\displaystyle\in\text{ssync}$
(by inversion on $D{\uparrow_{L}^{S}}{\uparrow_{L}^{L}}$)
$\displaystyle(A_{\scriptscriptstyle L},C_{\scriptscriptstyle
L},{\uparrow_{L}^{S}}A_{\scriptscriptstyle L})$
$\displaystyle\in\text{ssync}_{\Uparrow}$ (by definition of
$\text{ssync}_{\Uparrow}$ with $B_{\scriptscriptstyle L}\leq
C_{\scriptscriptstyle L}$) $\displaystyle(A_{\scriptscriptstyle
L},C_{\scriptscriptstyle L},{\uparrow_{L}^{S}}A_{\scriptscriptstyle L})$
$\displaystyle\in\text{ssync}^{\prime}$ (since
$\text{ssync}_{\Uparrow}\subseteq\text{ssync}^{\prime}$)
$\displaystyle({\uparrow_{L}^{S}}A_{\scriptscriptstyle
L},{\uparrow_{L}^{L}}C_{\scriptscriptstyle L},\top)$ $\displaystyle\in
F(\text{ssync}^{\prime})$ (by $D{\uparrow_{L}^{S}}$)
One case remains, $A=B=C=1$; it is trivial since
$\text{ssync}_{\Uparrow}$ does not add any new members to the set.
###### Lemma 27.
If $A\leq B\leq C$ with all same modalities,
$\vdash(B,C,\hat{D})\;\text{ssync}$, and
$\vdash(A,C,\hat{E})\;\text{ssync}$, then $\vdash(A,C,\hat{D})\;\text{ssync}$
for some $\hat{D}$ and $\hat{E}$.
###### Proof B.9.
We want to show that
$\text{ssync}^{\prime}\;::=\;\text{ssync}\cup\text{ssync}_{\Downarrow}$
is $F$-consistent with
$\displaystyle\text{ssync}_{\Downarrow}$
$\displaystyle\;::=\;\\{(A,C,\hat{D})\;|\;\exists B.A\leq
B\land(B,C,\hat{D})\in\text{ssync}\land\exists\hat{E}.(A,C,\hat{E})\in\text{ssync}\\}$
The proof is very similar in style to the previous lemma, but there is one
additional assumption that $(A,C,\hat{E})\in\text{ssync}$ for some constraint
$\hat{E}$. This assumption is only necessary for the ${\uparrow_{L}^{S}}$
case.
In any case, we will prove $F$-consistency of $\text{ssync}^{\prime}$, that
is, $\text{ssync}^{\prime}\in F(\text{ssync}^{\prime})$ by showing that each
of the two sets ssync and $\text{ssync}_{\Downarrow}$ are subsets of
$F(\text{ssync}^{\prime})$.
First, $\text{ssync}\subseteq F(\text{ssync}^{\prime})$ immediately follows
from the same argument as in the previous proof.
We will now consider $\text{ssync}_{\Downarrow}\subseteq F(\text{ssync}^{\prime})$
by case analysis on the structure of $A$. For most cases, since all of $A,B,C$
have the same modality, we can uniquely infer the structure of $B$ and $C$ from
the structure of $A$ by inversion on the appropriate subtyping rule.
###### Case 1.
$A={\uparrow_{L}^{L}}A^{\prime}_{\scriptscriptstyle L}$; then
$B={\uparrow_{L}^{L}}B^{\prime}_{\scriptscriptstyle L}$ and
$C={\uparrow_{L}^{L}}C^{\prime}_{\scriptscriptstyle L}$ with
$A^{\prime}_{\scriptscriptstyle L}\leq B^{\prime}_{\scriptscriptstyle L}\leq
C^{\prime}_{\scriptscriptstyle L}$.
$\displaystyle({\uparrow_{L}^{L}}A^{\prime}_{\scriptscriptstyle
L},{\uparrow_{L}^{L}}C^{\prime}_{\scriptscriptstyle L},\hat{D})$
$\displaystyle\in\text{ssync}_{\Downarrow},({\uparrow_{L}^{L}}B^{\prime}_{\scriptscriptstyle
L},{\uparrow_{L}^{L}}C^{\prime}_{\scriptscriptstyle
L},\hat{D})\in\text{ssync}$ (this case)
$\displaystyle(B^{\prime}_{\scriptscriptstyle
L},C^{\prime}_{\scriptscriptstyle L},\hat{D})$ $\displaystyle\in\text{ssync}$
(by inversion on $D{\uparrow_{L}^{L}}$)
$\displaystyle(A^{\prime}_{\scriptscriptstyle
L},C^{\prime}_{\scriptscriptstyle L},\hat{D})$
$\displaystyle\in\text{ssync}_{\Downarrow}$ (by definition of
$\text{ssync}_{\Downarrow}$ with $A^{\prime}_{\scriptscriptstyle L}\leq
B^{\prime}_{\scriptscriptstyle L}$)
$\displaystyle(A^{\prime}_{\scriptscriptstyle
L},C^{\prime}_{\scriptscriptstyle L},\hat{D})$
$\displaystyle\in\text{ssync}^{\prime}$ (since
$\text{ssync}_{\Downarrow}\subseteq\text{ssync}^{\prime}$)
$\displaystyle({\uparrow_{L}^{L}}A^{\prime}_{\scriptscriptstyle
L},{\uparrow_{L}^{L}}C^{\prime}_{\scriptscriptstyle L},\hat{D})$
$\displaystyle\in F(\text{ssync}^{\prime})$ (by $D{\uparrow_{L}^{L}}$)
${\downarrow_{L}^{L}},\otimes,$ and $\multimap$ follow a similar pattern of
appealing to the covariance of subtyping on the continuation types.
###### Case 2.
$A=\oplus\\{{\overline{l{:}A_{\scriptscriptstyle L}}}\\}$; then
$B=\oplus\\{{\overline{l{:}B_{\scriptscriptstyle
L}},\overline{m{:}B_{\scriptscriptstyle L}}}\\}$ and
$C=\oplus\\{{\overline{l{:}C_{\scriptscriptstyle
L}},\overline{m{:}C_{\scriptscriptstyle
L}},\overline{n{:}C_{\scriptscriptstyle L}}}\\}$ with
${A_{i}}_{\scriptscriptstyle L}\leq{B_{i}}_{\scriptscriptstyle
L}\leq{C_{i}}_{\scriptscriptstyle L}\;\forall i\in\overline{l}$ and
${B_{i}}_{\scriptscriptstyle L}\leq{C_{i}}_{\scriptscriptstyle L}\;\forall
i\in\overline{m}$.
$\displaystyle(\oplus\\{{\overline{l{:}A_{\scriptscriptstyle
L}}}\\},\oplus\\{{\overline{l{:}C_{\scriptscriptstyle
L}},\overline{m{:}C_{\scriptscriptstyle
L}},\overline{n{:}C_{\scriptscriptstyle L}}}\\},\hat{D})$
$\displaystyle\in\text{ssync}_{\Downarrow}$
$\displaystyle(\oplus\\{{\overline{l{:}B_{\scriptscriptstyle
L}},\overline{m{:}B_{\scriptscriptstyle
L}}}\\},\oplus\\{{\overline{l{:}C_{\scriptscriptstyle
L}},\overline{m{:}C_{\scriptscriptstyle
L}},\overline{n{:}C_{\scriptscriptstyle L}}}\\},\hat{D})$
$\displaystyle\in\text{ssync}$ (this case) $\displaystyle(\forall
i\in\overline{l},\overline{m})\;({B_{i}}_{\scriptscriptstyle
L},{C_{i}}_{\scriptscriptstyle L},\hat{D})$ $\displaystyle\in\text{ssync}$ (by
inversion on $D{\oplus}$) $\displaystyle(\forall
i\in\overline{l})\;({A_{i}}_{\scriptscriptstyle L},{C_{i}}_{\scriptscriptstyle
L},\hat{D})$ $\displaystyle\in\text{ssync}_{\Downarrow}$
$\displaystyle(\forall i\in\overline{l})\;({A_{i}}_{\scriptscriptstyle
L},{C_{i}}_{\scriptscriptstyle L},\hat{D})$
$\displaystyle\in\text{ssync}^{\prime}$ (since
$\text{ssync}_{\Downarrow}\subseteq\text{ssync}^{\prime}$)
$\displaystyle(\oplus\\{{\overline{l{:}A_{\scriptscriptstyle
L}}}\\},\oplus\\{{\overline{l{:}C_{\scriptscriptstyle
L}},\overline{m{:}C_{\scriptscriptstyle
L}},\overline{n{:}C_{\scriptscriptstyle L}}}\\},\hat{D})$ $\displaystyle\in
F(\text{ssync}^{\prime})$ (by $D{\oplus}$)
$D{\&}$ follows a similar pattern.
###### Case 3.
$A={\downarrow_{L}^{S}}A_{\scriptscriptstyle S}$; similar to the proof of
Lemma 11, there are three possible assignments for $B$ and $C$. We will
present one of those subcases: let
$B={\downarrow_{L}^{S}}B_{\scriptscriptstyle S}$ and
$C={\downarrow_{L}^{S}}C_{\scriptscriptstyle S}$ with $A_{\scriptscriptstyle
S}\leq B_{\scriptscriptstyle S}\leq C_{\scriptscriptstyle S}$. The other two
cases are similar.
$\displaystyle({\downarrow_{L}^{S}}A_{\scriptscriptstyle
S},{\downarrow_{L}^{S}}C_{\scriptscriptstyle S},\hat{D})$
$\displaystyle\in\text{ssync}_{\Downarrow},({\downarrow_{L}^{S}}B_{\scriptscriptstyle
S},{\downarrow_{L}^{S}}C_{\scriptscriptstyle S},\hat{D})\in\text{ssync}$ (this
case) $\displaystyle(B_{\scriptscriptstyle S},C_{\scriptscriptstyle S},\top)$
$\displaystyle\in\text{ssync},B_{\scriptscriptstyle S}\leq\hat{D}$ (by
inversion on $D{\downarrow_{L}^{S}}$) $\displaystyle(A_{\scriptscriptstyle
S},C_{\scriptscriptstyle S},\top)$ $\displaystyle\in\text{ssync}_{\Downarrow}$
(by definition of $\text{ssync}_{\Downarrow}$ with $A_{\scriptscriptstyle
S}\leq B_{\scriptscriptstyle S}$) $\displaystyle(A_{\scriptscriptstyle
S},C_{\scriptscriptstyle S},\top)$ $\displaystyle\in\text{ssync}^{\prime}$
(since $\text{ssync}_{\Downarrow}\subseteq\text{ssync}^{\prime}$)
$\displaystyle A_{\scriptscriptstyle S}\leq\hat{D}$ (because
$A_{\scriptscriptstyle S}\leq B_{\scriptscriptstyle S}\leq\hat{D}$)
$\displaystyle({\downarrow_{L}^{S}}A_{\scriptscriptstyle
S},{\downarrow_{L}^{S}}C_{\scriptscriptstyle S},\hat{D})$ $\displaystyle\in
F(\text{ssync}^{\prime})$ (by $D{\downarrow_{L}^{S}}$)
###### Case 4.
$A={\uparrow_{L}^{S}}A_{\scriptscriptstyle L}$; again, there are three
possible assignments for $B$ and $C$, and we will take the subcase when
$B={\uparrow_{L}^{S}}B_{\scriptscriptstyle L}$ and
$C={\uparrow_{L}^{S}}C_{\scriptscriptstyle L}$ with $A_{\scriptscriptstyle
L}\leq B_{\scriptscriptstyle L}\leq C_{\scriptscriptstyle L}$. The other two
cases are similar. This case finally uses our assumption that
$(A,C,\hat{E})\in\text{ssync}$; here $\hat{E}$ must be $\top$ because
$A={\uparrow_{L}^{S}}A_{\scriptscriptstyle L}$.
$\displaystyle({\uparrow_{L}^{S}}A_{\scriptscriptstyle
L},{\uparrow_{L}^{S}}C_{\scriptscriptstyle L},\top)$
$\displaystyle\in\text{ssync}_{\Downarrow},({\uparrow_{L}^{S}}B_{\scriptscriptstyle
L},{\uparrow_{L}^{S}}C_{\scriptscriptstyle L},\top)\in\text{ssync}$ (this
case) $\displaystyle({\uparrow_{L}^{S}}A_{\scriptscriptstyle
L},{\uparrow_{L}^{S}}C_{\scriptscriptstyle L},\top)$
$\displaystyle\in\text{ssync}$ (by assumption with $\hat{E}=\top$)
$\displaystyle(A_{\scriptscriptstyle L},C_{\scriptscriptstyle
L},{\uparrow_{L}^{S}}A_{\scriptscriptstyle L})$ $\displaystyle\in\text{ssync}$
(by inversion on $D{\uparrow_{L}^{S}}$) $\displaystyle(A_{\scriptscriptstyle
L},C_{\scriptscriptstyle L},{\uparrow_{L}^{S}}A_{\scriptscriptstyle L})$
$\displaystyle\in\text{ssync}^{\prime}$ (since
$\text{ssync}\subseteq\text{ssync}^{\prime}$)
$\displaystyle({\uparrow_{L}^{S}}A_{\scriptscriptstyle
L},{\uparrow_{L}^{S}}C_{\scriptscriptstyle L},\top)$ $\displaystyle\in
F(\text{ssync}^{\prime})$ (by $D{\uparrow_{L}^{S}}$)
One case remains, $A=B=C=1$; it is trivial since
$\text{ssync}_{\Downarrow}$ does not add any new members to the set.
###### Lemma 28.
If $\vdash(A_{\scriptscriptstyle L},B_{\scriptscriptstyle
L},\hat{C})\;\text{ssync}$ and $\hat{D}\leq\hat{C}$, then
$\vdash(A_{\scriptscriptstyle L},B_{\scriptscriptstyle
L},\hat{D})\;\text{ssync}$ for some $A_{\scriptscriptstyle
L},B_{\scriptscriptstyle L},\hat{C},$ and $\hat{D}$.
###### Proof B.10.
We want to show that
$\text{ssync}^{\prime}\;::=\;\text{ssync}\cup\text{ssync}_{\Downarrow}$
is $F$-consistent with
$\displaystyle\text{ssync}_{\Downarrow}$
$\displaystyle\;::=\;\\{(A,C,\hat{D})\;|\;\exists B.A\leq
B\land(B,C,\hat{D})\in\text{ssync}\land\exists\hat{E}.(A,C,\hat{E})\in\text{ssync}\\}$
The proof is very similar in style to the previous lemma, but there is one
additional assumption that $(A,C,\hat{E})\in\text{ssync}$ for some constraint
$\hat{E}$. This assumption is only necessary for the ${\uparrow_{L}^{S}}$
case.
In any case, we will prove $F$-consistency of $\text{ssync}^{\prime}$, that
is, $\text{ssync}^{\prime}\in F(\text{ssync}^{\prime})$ by showing that each
of the two sets ssync and $\text{ssync}_{\Downarrow}$ are subsets of
$F(\text{ssync}^{\prime})$.
First, $\text{ssync}\subseteq F(\text{ssync}^{\prime})$ immediately follows
from the same argument as in the previous proof.
We will now consider $\text{ssync}_{\Downarrow}\subseteq F(\text{ssync}^{\prime})$
by case analysis on the structure of $A$. For most cases, since all of $A,B,C$
have the same modality, we can uniquely infer the structure of $B$ and $C$ from
the structure of $A$ by inversion on the appropriate subtyping rule.
###### Case 1.
$A={\uparrow_{L}^{L}}A^{\prime}_{\scriptscriptstyle L}$; then
$B={\uparrow_{L}^{L}}B^{\prime}_{\scriptscriptstyle L}$ and
$C={\uparrow_{L}^{L}}C^{\prime}_{\scriptscriptstyle L}$ with
$A^{\prime}_{\scriptscriptstyle L}\leq B^{\prime}_{\scriptscriptstyle L}\leq
C^{\prime}_{\scriptscriptstyle L}$.
$\displaystyle({\uparrow_{L}^{L}}A^{\prime}_{\scriptscriptstyle
L},{\uparrow_{L}^{L}}C^{\prime}_{\scriptscriptstyle L},\hat{D})$
$\displaystyle\in\text{ssync}_{\Downarrow},({\uparrow_{L}^{L}}B^{\prime}_{\scriptscriptstyle
L},{\uparrow_{L}^{L}}C^{\prime}_{\scriptscriptstyle
L},\hat{D})\in\text{ssync}$ (this case)
$\displaystyle(B^{\prime}_{\scriptscriptstyle
L},C^{\prime}_{\scriptscriptstyle L},\hat{D})$ $\displaystyle\in\text{ssync}$
(by inversion on $D{\uparrow_{L}^{L}}$)
$\displaystyle(A^{\prime}_{\scriptscriptstyle
L},C^{\prime}_{\scriptscriptstyle L},\hat{D})$
$\displaystyle\in\text{ssync}_{\Downarrow}$ (by definition of
$\text{ssync}_{\Downarrow}$ with $A^{\prime}_{\scriptscriptstyle L}\leq
B^{\prime}_{\scriptscriptstyle L}$)
$\displaystyle(A^{\prime}_{\scriptscriptstyle
L},C^{\prime}_{\scriptscriptstyle L},\hat{D})$
$\displaystyle\in\text{ssync}^{\prime}$ (since
$\text{ssync}_{\Downarrow}\subseteq\text{ssync}^{\prime}$)
$\displaystyle({\uparrow_{L}^{L}}A^{\prime}_{\scriptscriptstyle
L},{\uparrow_{L}^{L}}C^{\prime}_{\scriptscriptstyle L},\hat{D})$
$\displaystyle\in F(\text{ssync}^{\prime})$ (by $D{\uparrow_{L}^{L}}$)
${\downarrow_{L}^{L}},\otimes,$ and $\multimap$ follow a similar pattern of
appealing to the covariance of subtyping on the continuation types.
###### Case 2.
$A=\oplus\\{{\overline{l{:}A_{\scriptscriptstyle L}}}\\}$; then
$B=\oplus\\{{\overline{l{:}B_{\scriptscriptstyle
L}},\overline{m{:}B_{\scriptscriptstyle L}}}\\}$ and
$C=\oplus\\{{\overline{l{:}C_{\scriptscriptstyle
L}},\overline{m{:}C_{\scriptscriptstyle
L}},\overline{n{:}C_{\scriptscriptstyle L}}}\\}$ with
${A_{i}}_{\scriptscriptstyle L}\leq{B_{i}}_{\scriptscriptstyle
L}\leq{C_{i}}_{\scriptscriptstyle L}\;\forall i\in\overline{l}$ and
${B_{i}}_{\scriptscriptstyle L}\leq{C_{i}}_{\scriptscriptstyle L}\;\forall
i\in\overline{m}$.
$\displaystyle(\oplus\\{{\overline{l{:}A_{\scriptscriptstyle
L}}}\\},\oplus\\{{\overline{l{:}C_{\scriptscriptstyle
L}},\overline{m{:}C_{\scriptscriptstyle
L}},\overline{n{:}C_{\scriptscriptstyle L}}}\\},\hat{D})$
$\displaystyle\in\text{ssync}_{\Downarrow}$
$\displaystyle(\oplus\\{{\overline{l{:}B_{\scriptscriptstyle
L}},\overline{m{:}B_{\scriptscriptstyle
L}}}\\},\oplus\\{{\overline{l{:}C_{\scriptscriptstyle
L}},\overline{m{:}C_{\scriptscriptstyle
L}},\overline{n{:}C_{\scriptscriptstyle L}}}\\},\hat{D})$
$\displaystyle\in\text{ssync}$ (this case) $\displaystyle(\forall
i\in\overline{l},\overline{m})\;({B_{i}}_{\scriptscriptstyle
L},{C_{i}}_{\scriptscriptstyle L},\hat{D})$ $\displaystyle\in\text{ssync}$ (by
inversion on $D{\oplus}$) $\displaystyle(\forall
i\in\overline{l})\;({A_{i}}_{\scriptscriptstyle L},{C_{i}}_{\scriptscriptstyle
L},\hat{D})$ $\displaystyle\in\text{ssync}_{\Downarrow}$
$\displaystyle(\forall i\in\overline{l})\;({A_{i}}_{\scriptscriptstyle
L},{C_{i}}_{\scriptscriptstyle L},\hat{D})$
$\displaystyle\in\text{ssync}^{\prime}$ (since
$\text{ssync}_{\Downarrow}\subseteq\text{ssync}^{\prime}$)
$\displaystyle(\oplus\\{{\overline{l{:}A_{\scriptscriptstyle
L}}}\\},\oplus\\{{\overline{l{:}C_{\scriptscriptstyle
L}},\overline{m{:}C_{\scriptscriptstyle
L}},\overline{n{:}C_{\scriptscriptstyle L}}}\\},\hat{D})$ $\displaystyle\in
F(\text{ssync}^{\prime})$ (by $D{\oplus}$)
$D{\&}$ follows a similar pattern.
###### Case 3.
$A={\downarrow_{L}^{S}}A_{\scriptscriptstyle S}$; similar to the proof of
Lemma 11, there are three possible assignments for $B$ and $C$. We will
present one of those subcases: let
$B={\downarrow_{L}^{S}}B_{\scriptscriptstyle S}$ and
$C={\downarrow_{L}^{S}}C_{\scriptscriptstyle S}$ with $A_{\scriptscriptstyle
S}\leq B_{\scriptscriptstyle S}\leq C_{\scriptscriptstyle S}$. The other two
cases are similar.
$\displaystyle({\downarrow_{L}^{S}}A_{\scriptscriptstyle
S},{\downarrow_{L}^{S}}C_{\scriptscriptstyle S},\hat{D})$
$\displaystyle\in\text{ssync}_{\Downarrow},({\downarrow_{L}^{S}}B_{\scriptscriptstyle
S},{\downarrow_{L}^{S}}C_{\scriptscriptstyle S},\hat{D})\in\text{ssync}$ (this
case) $\displaystyle(B_{\scriptscriptstyle S},C_{\scriptscriptstyle S},\top)$
$\displaystyle\in\text{ssync},B_{\scriptscriptstyle S}\leq\hat{D}$ (by
inversion on $D{\downarrow_{L}^{S}}$) $\displaystyle(A_{\scriptscriptstyle
S},C_{\scriptscriptstyle S},\top)$ $\displaystyle\in\text{ssync}_{\Downarrow}$
(by definition of $\text{ssync}_{\Downarrow}$ with $A_{\scriptscriptstyle
S}\leq B_{\scriptscriptstyle S}$) $\displaystyle(A_{\scriptscriptstyle
S},C_{\scriptscriptstyle S},\top)$ $\displaystyle\in\text{ssync}^{\prime}$
(since $\text{ssync}_{\Downarrow}\subseteq\text{ssync}^{\prime}$)
$\displaystyle A_{\scriptscriptstyle S}\leq\hat{D}$ (because
$A_{\scriptscriptstyle S}\leq B_{\scriptscriptstyle S}\leq\hat{D}$)
$\displaystyle({\downarrow_{L}^{S}}A_{\scriptscriptstyle
S},{\downarrow_{L}^{S}}C_{\scriptscriptstyle S},\hat{D})$ $\displaystyle\in
F(\text{ssync}^{\prime})$ (by $D{\downarrow_{L}^{S}}$)
###### Case 4.
$A={\uparrow_{L}^{S}}A_{\scriptscriptstyle L}$; again, there are three
possible assignments for $B$ and $C$, and we will take the subcase when
$B={\uparrow_{L}^{S}}B_{\scriptscriptstyle L}$ and
$C={\uparrow_{L}^{S}}C_{\scriptscriptstyle L}$ with $A_{\scriptscriptstyle
L}\leq B_{\scriptscriptstyle L}\leq C_{\scriptscriptstyle L}$. The other two
cases are similar. This case finally uses our assumption that
$(A,C,\hat{E})\in\text{ssync}$; here $\hat{E}$ must be $\top$ because
$A={\uparrow_{L}^{S}}A_{\scriptscriptstyle L}$.
$\displaystyle({\uparrow_{L}^{S}}A_{\scriptscriptstyle
L},{\uparrow_{L}^{S}}C_{\scriptscriptstyle L},\top)$
$\displaystyle\in\text{ssync}_{\Downarrow},({\uparrow_{L}^{S}}B_{\scriptscriptstyle
L},{\uparrow_{L}^{S}}C_{\scriptscriptstyle L},\top)\in\text{ssync}$ (this
case) $\displaystyle({\uparrow_{L}^{S}}A_{\scriptscriptstyle
L},{\uparrow_{L}^{S}}C_{\scriptscriptstyle L},\top)$
$\displaystyle\in\text{ssync}$ (by assumption with $\hat{E}=\top$)
$\displaystyle(A_{\scriptscriptstyle L},C_{\scriptscriptstyle
L},{\uparrow_{L}^{S}}A_{\scriptscriptstyle L})$ $\displaystyle\in\text{ssync}$
(by inversion on $D{\uparrow_{L}^{S}}$) $\displaystyle(A_{\scriptscriptstyle
L},C_{\scriptscriptstyle L},{\uparrow_{L}^{S}}A_{\scriptscriptstyle L})$
$\displaystyle\in\text{ssync}^{\prime}$ (since
$\text{ssync}\subseteq\text{ssync}^{\prime}$)
$\displaystyle({\uparrow_{L}^{S}}A_{\scriptscriptstyle
L},{\uparrow_{L}^{S}}C_{\scriptscriptstyle L},\top)$ $\displaystyle\in
F(\text{ssync}^{\prime})$ (by $D{\uparrow_{L}^{S}}$)
One case remains, $A=B=C=1$; it is trivial since
$\text{ssync}_{\Downarrow}$ does not add any new members to the set.
###### Lemma 29.
If $\vdash(A_{\scriptscriptstyle L},B_{\scriptscriptstyle
L},\hat{C})\;\text{ssync}$ and $\vdash(A_{\scriptscriptstyle
L},B_{\scriptscriptstyle L},\hat{D})\;\text{ssync}$, then
${\vdash(A_{\scriptscriptstyle L},B_{\scriptscriptstyle
L},\hat{C}\land\hat{D})\;\text{ssync}}$ for some $A_{\scriptscriptstyle
L},B_{\scriptscriptstyle L},\hat{C},$ and $\hat{D}$.
###### Proof B.11.
First, recall that $\vdash(A_{\scriptscriptstyle L},B_{\scriptscriptstyle
L},-)\;\text{ssync}$ requires that $A_{\scriptscriptstyle L}\leq
B_{\scriptscriptstyle L}$. We want to show that
$\text{ssync}^{\prime}\;::=\;\text{ssync}\cup\text{ssync}_{\land}$
is $F$-consistent with
$\displaystyle\text{ssync}_{\land}\;::=\;\\{(A_{\scriptscriptstyle
L},B_{\scriptscriptstyle L},\hat{C}\land\hat{D})\;|\;(A_{\scriptscriptstyle
L},B_{\scriptscriptstyle L},\hat{C})\in\text{ssync}\land(A_{\scriptscriptstyle
L},B_{\scriptscriptstyle L},\hat{D})\in\text{ssync}\\}$
As usual, we will prove $F$-consistency of $\text{ssync}^{\prime}$, that
is, $\text{ssync}^{\prime}\in F(\text{ssync}^{\prime})$ by showing that each
of the two sets ssync and $\text{ssync}_{\land}$ are subsets of
$F(\text{ssync}^{\prime})$.
$\text{ssync}\subseteq F(\text{ssync}^{\prime})$ immediately follows from the
same argument as in previous lemmas.
We will now consider $\text{ssync}_{\land}\subseteq F(\text{ssync}^{\prime})$ by
case analysis on the structure of $A_{\scriptscriptstyle L}$. We can infer the
structure of $B_{\scriptscriptstyle L}$ by inversion on the appropriate
subtyping rule. For ease of presentation, let $\hat{E}=\hat{C}\land\hat{D}$;
we will expand $\hat{E}$ whenever necessary.
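For intuition about the meet (an illustrative aside, under the assumption suggested by the subcases below that constraints are ordered by $\leq$ with top $\top$ and bottom $\bot$, and that $\land$ denotes the greatest lower bound):
$\top\land\hat{D}=\hat{D},\qquad\bot\land\hat{D}=\bot,\qquad\hat{D}\land\hat{D}=\hat{D}.$
In particular, two constraints with no common shared lower bound meet at $\bot$, which is why the $\bot$ subcase must be addressed in Case 3 below.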
###### Case 1.
$A_{\scriptscriptstyle L}={\uparrow_{L}^{L}}A^{\prime}_{\scriptscriptstyle
L}$; then $B_{\scriptscriptstyle
L}={\uparrow_{L}^{L}}B^{\prime}_{\scriptscriptstyle L}$ with
$A^{\prime}_{\scriptscriptstyle L}\leq B^{\prime}_{\scriptscriptstyle L}$.
$\displaystyle({\uparrow_{L}^{L}}A^{\prime}_{\scriptscriptstyle
L},{\uparrow_{L}^{L}}B^{\prime}_{\scriptscriptstyle L},\hat{E})$
$\displaystyle\in\text{ssync}_{\land},({\uparrow_{L}^{L}}A^{\prime}_{\scriptscriptstyle
L},{\uparrow_{L}^{L}}B^{\prime}_{\scriptscriptstyle
L},\hat{C})\in\text{ssync},({\uparrow_{L}^{L}}A^{\prime}_{\scriptscriptstyle
L},{\uparrow_{L}^{L}}B^{\prime}_{\scriptscriptstyle
L},\hat{D})\in\text{ssync}$ (this case)
$\displaystyle(A^{\prime}_{\scriptscriptstyle
L},B^{\prime}_{\scriptscriptstyle L},\hat{C})$
$\displaystyle\in\text{ssync},(A^{\prime}_{\scriptscriptstyle
L},B^{\prime}_{\scriptscriptstyle L},\hat{D})\in\text{ssync}$ (by inversion on
$D{\uparrow_{L}^{L}}$) $\displaystyle(A^{\prime}_{\scriptscriptstyle
L},B^{\prime}_{\scriptscriptstyle L},\hat{E})$
$\displaystyle\in\text{ssync}_{\land}$ (by definition of
$\text{ssync}_{\land}$) $\displaystyle(A^{\prime}_{\scriptscriptstyle
L},B^{\prime}_{\scriptscriptstyle L},\hat{E})$
$\displaystyle\in\text{ssync}^{\prime}$ (since
$\text{ssync}_{\land}\subseteq\text{ssync}^{\prime}$)
$\displaystyle({\uparrow_{L}^{L}}A^{\prime}_{\scriptscriptstyle
L},{\uparrow_{L}^{L}}B^{\prime}_{\scriptscriptstyle L},\hat{E})$
$\displaystyle\in F(\text{ssync}^{\prime})$ (by $D{\uparrow_{L}^{L}}$)
${\downarrow_{L}^{L}},\otimes,$ and $\multimap$ follow a similar pattern of
appealing to the covariance of subtyping on the continuation types.
###### Case 2.
$A_{\scriptscriptstyle L}=\oplus\\{{\overline{l{:}A_{\scriptscriptstyle
L}}}\\}$; then $B_{\scriptscriptstyle
L}=\oplus\\{{\overline{l{:}B_{\scriptscriptstyle
L}},\overline{m{:}B_{\scriptscriptstyle L}}}\\}$ with
${A_{i}}_{\scriptscriptstyle L}\leq{B_{i}}_{\scriptscriptstyle L}\;\forall
i\in\overline{l}$.
$\displaystyle(\oplus\\{{\overline{l{:}A_{\scriptscriptstyle
L}}}\\},\oplus\\{{\overline{l{:}B_{\scriptscriptstyle
L}},\overline{m{:}B_{\scriptscriptstyle L}}}\\},\hat{E})$
$\displaystyle\in\text{ssync}_{\land}$
$\displaystyle(\oplus\\{{\overline{l{:}A_{\scriptscriptstyle
L}}}\\},\oplus\\{{\overline{l{:}B_{\scriptscriptstyle
L}},\overline{m{:}B_{\scriptscriptstyle L}}}\\},\hat{C})$
$\displaystyle\in\text{ssync},(\oplus\\{{\overline{l{:}A_{\scriptscriptstyle
L}}}\\},\oplus\\{{\overline{l{:}B_{\scriptscriptstyle
L}},\overline{m{:}B_{\scriptscriptstyle L}}}\\},\hat{D})\in\text{ssync}$ (this
case) $\displaystyle(\forall i\in\overline{l})\;({A_{i}}_{\scriptscriptstyle
L},{B_{i}}_{\scriptscriptstyle L},\hat{C})$
$\displaystyle\in\text{ssync},({A_{i}}_{\scriptscriptstyle
L},{B_{i}}_{\scriptscriptstyle L},\hat{D})\in\text{ssync},$ (by inversion on
$D{\oplus}$) $\displaystyle(\forall
i\in\overline{l})\;({A_{i}}_{\scriptscriptstyle L},{B_{i}}_{\scriptscriptstyle
L},\hat{E})$ $\displaystyle\in\text{ssync}_{\land}$ (by definition of
$\text{ssync}_{\land}$) $\displaystyle(\forall
i\in\overline{l})\;({A_{i}}_{\scriptscriptstyle L},{B_{i}}_{\scriptscriptstyle
L},\hat{E})$ $\displaystyle\in\text{ssync}^{\prime}$ (since
$\text{ssync}_{\land}\subseteq\text{ssync}^{\prime}$)
$\displaystyle(\oplus\\{{\overline{l{:}A_{\scriptscriptstyle
L}}}\\},\oplus\\{{\overline{l{:}B_{\scriptscriptstyle
L}},\overline{m{:}B_{\scriptscriptstyle L}}}\\},\hat{E})$ $\displaystyle\in
F(\text{ssync}^{\prime})$ (by $D{\oplus}$)
$D{\&}$ follows a similar pattern.
###### Case 3.
$A_{\scriptscriptstyle L}={\downarrow_{L}^{S}}A_{\scriptscriptstyle S}$; then
there are two subcases for the structure of $B_{\scriptscriptstyle L}$. We
shall take the case when $B_{\scriptscriptstyle
L}={\downarrow_{L}^{S}}B_{\scriptscriptstyle S}$ with $A_{\scriptscriptstyle
S}\leq B_{\scriptscriptstyle S}$, but the other case, when
$B_{\scriptscriptstyle L}={\downarrow_{L}^{L}}B^{\prime}_{\scriptscriptstyle
L}$ follows a similar pattern.
At this point we consider what $\hat{E}$ has to be: either $\hat{E}=\bot$, in
which case we derive a contradiction (the $\bot$
constraint requires that there be no releases), or
$\hat{E}=E_{\scriptscriptstyle S}$, meaning $\hat{E}$ is a non-trivial meet.
###### Subcase 1.
$\hat{E}=\bot$.
$\displaystyle({\downarrow_{L}^{S}}A_{\scriptscriptstyle
S},{\downarrow_{L}^{S}}B_{\scriptscriptstyle S},\bot)$
$\displaystyle\in\text{ssync}_{\land},({\downarrow_{L}^{S}}A_{\scriptscriptstyle
S},{\downarrow_{L}^{S}}B_{\scriptscriptstyle
S},\hat{C})\in\text{ssync},({\downarrow_{L}^{S}}A_{\scriptscriptstyle
S},{\downarrow_{L}^{S}}B_{\scriptscriptstyle S},\hat{D})\in\text{ssync}$ (this
case) $\displaystyle(A_{\scriptscriptstyle S},B_{\scriptscriptstyle S},\top)$
$\displaystyle\in\text{ssync},A_{\scriptscriptstyle S}\leq\hat{C}$ (by
inversion on $D{\downarrow_{L}^{S}}$) $\displaystyle(A_{\scriptscriptstyle
S},B_{\scriptscriptstyle S},\top)$
$\displaystyle\in\text{ssync},A_{\scriptscriptstyle S}\leq\hat{D}$ (by
inversion on $D{\downarrow_{L}^{S}}$) Contradiction (since
$A_{\scriptscriptstyle S}$ is a lower bound of $\hat{C}\land\hat{D}$ but
$A_{\scriptscriptstyle S}$ is strictly greater than $\bot$)
###### Subcase 2.
$\hat{E}=E_{\scriptscriptstyle S}$ for some $E_{\scriptscriptstyle S}$.
$\displaystyle({\downarrow_{L}^{S}}A_{\scriptscriptstyle
S},{\downarrow_{L}^{S}}B_{\scriptscriptstyle S},E_{\scriptscriptstyle S})$
$\displaystyle\in\text{ssync}_{\land},({\downarrow_{L}^{S}}A_{\scriptscriptstyle
S},{\downarrow_{L}^{S}}B_{\scriptscriptstyle
S},\hat{C})\in\text{ssync},({\downarrow_{L}^{S}}A_{\scriptscriptstyle
S},{\downarrow_{L}^{S}}B_{\scriptscriptstyle S},\hat{D})\in\text{ssync}$ (this
case) $\displaystyle(A_{\scriptscriptstyle S},B_{\scriptscriptstyle S},\top)$
$\displaystyle\in\text{ssync},A_{\scriptscriptstyle
S}\leq\hat{C},A_{\scriptscriptstyle S}\leq\hat{D}$ (by inversion on
$D{\downarrow_{L}^{S}}$) $\displaystyle(A_{\scriptscriptstyle
S},B_{\scriptscriptstyle S},\top)$ $\displaystyle\in\text{ssync}^{\prime}$
(since $\text{ssync}\subseteq\text{ssync}^{\prime}$)
$\displaystyle({\downarrow_{L}^{S}}A_{\scriptscriptstyle
S},{\downarrow_{L}^{S}}B_{\scriptscriptstyle S},E_{\scriptscriptstyle S})$
$\displaystyle\in F(\text{ssync}^{\prime})$ (by $D{\downarrow_{L}^{S}}$ with
$A_{\scriptscriptstyle S}\leq E_{\scriptscriptstyle S}$ because
$A_{\scriptscriptstyle S}$ is a lower bound of $\hat{C}$ and $\hat{D}$ and
$E_{\scriptscriptstyle S}$ is the greatest lower bound)
Unlike the previous lemmas, we require $A_{\scriptscriptstyle L}$ to be
linear, so we do not need to consider ${\uparrow_{L}^{S}}$. The case when
$A=B=1$ is trivial.
## Appendix C Preservation Theorem
###### Theorem 30 (Preservation).
If ${\Gamma\models\Lambda;\Theta::(\Gamma;\Delta)}$ for some
$\Lambda,\Theta,\Gamma,$ and $\Delta$, and
$\Lambda;\Theta\rightarrow\Lambda^{\prime};\Theta^{\prime}$ for some
$\Lambda^{\prime};\Theta^{\prime}$, then
${\Gamma^{\prime}\models\Lambda^{\prime};\Theta^{\prime}::(\Gamma^{\prime};\Delta)}$
where $\Gamma^{\prime}\preceq\Gamma$.
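Note that the shared context may evolve across a step: $\Gamma^{\prime}$ need not equal $\Gamma$. As a concrete instance taken from case D-$\otimes$2 below, spawning a fresh channel $d$ extends the shared context, and (assuming $\preceq$ permits extension by fresh channels, as in the weakening discussion preceding Lemma 25) we have
$\Gamma^{\prime}=\Gamma,d_{\scriptscriptstyle S}{:}\bot\preceq\Gamma.$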
###### Proof C.1.
By induction on the dynamics, constructing a well-formed and well-typed
configuration starting from ${\Gamma\models\Lambda;\Theta::(\Gamma;\Delta)}$.
##### Notation
Many of the proof cases involve transitions between linear process terms
(either proc or connect). When reasoning about these transitions, we adopt the
notation $\Psi_{a}\to\Psi_{a}^{\prime}$; that is, $\Psi_{a}$ represents
the process term offering $a$ before the transition and $\Psi_{a}^{\prime}$
represents the process term offering $a$ after the transition.
###### Case 1.
D-FWDLS
$\text{proc}(a_{\scriptscriptstyle L},\text{fwd}\;a_{\scriptscriptstyle L}\
b_{\scriptscriptstyle S})\to\text{connect}(a_{\scriptscriptstyle
L},b_{\scriptscriptstyle S})$
where $\Psi_{a}=\text{proc}(a_{\scriptscriptstyle
L},\text{fwd}\;a_{\scriptscriptstyle L}\ b_{\scriptscriptstyle S})$ and
$\Psi_{a}^{\prime}=\text{connect}(a_{\scriptscriptstyle
L},b_{\scriptscriptstyle S})$ (for the remaining cases, these metavariable
assignments are implicit). Let $\Theta=\Theta_{1},\Psi_{a},\Theta_{2}$. Then
by well-formedness, $\Lambda=\text{unavail}(a_{\scriptscriptstyle
S}),\Lambda_{1}$.
$\displaystyle{\Gamma\models\Lambda;\Theta_{1},\Psi_{a},\Theta_{2}::(\Gamma;\Delta)}$
(assumption)
$\displaystyle{\Gamma\models\Lambda::(\Gamma)}\quad{\Gamma\models\Theta_{1},\Psi_{a},\Theta_{2}::(\Delta)}$
(by inversion on $\Omega$)
$\displaystyle{\Gamma\models\text{proc}(a_{\scriptscriptstyle
L},\text{fwd}\;a_{\scriptscriptstyle L}\ b_{\scriptscriptstyle
S}),\Theta_{2}::(a_{\scriptscriptstyle L}{:}A_{\scriptscriptstyle
L},\Delta_{p})}$ (by Lemma 4 and expanding $\Psi_{a}$)
$\displaystyle{\Gamma\models\Theta_{2}::(\Delta_{p})}\quad{\Gamma;\cdot\vdash\text{fwd}\;a_{\scriptscriptstyle
L}\ b_{\scriptscriptstyle S}::(a_{\scriptscriptstyle
L}{:}A^{\prime}_{\scriptscriptstyle L})}$ (by inversion on $\Theta 3$)
$\displaystyle b_{\scriptscriptstyle S}{:}\hat{B}\in\Gamma\quad\hat{B}\leq
A^{\prime}_{\scriptscriptstyle L}$ (by inversion on
$ID_{\scriptscriptstyle{LS}}$) $\displaystyle\hat{B}\leq A_{\scriptscriptstyle
L}$ (by transitivity of $\leq$)
$\displaystyle{\Gamma\models\text{connect}(a_{\scriptscriptstyle
L},b_{\scriptscriptstyle S}),\Theta_{2}::(a:A_{\scriptscriptstyle
L},\Delta_{p})}$ (by $\Theta 2$)
$\displaystyle{\Gamma\models\Theta_{1},\Psi_{a}^{\prime},\Theta_{2}::(\Delta)}$
(by Lemma 7)
$\displaystyle{\Gamma\models\Lambda;\Theta_{1},\Psi_{a}^{\prime},\Theta_{2}::(\Gamma;\Delta)}$
(by $\Omega$)
The well-formedness conditions are maintained because only $\Psi_{a}\in\Theta$
was replaced by $\Psi_{a}^{\prime}$.
###### Case 2.
D-$\&$
$\text{proc}(a_{\scriptscriptstyle L},b.i;P),\text{proc}(b_{\scriptscriptstyle
L},\text{case}\;b_{\scriptscriptstyle L}\;\text{of}\;\\{\overline{l\Rightarrow
Q},\overline{m\Rightarrow Q}\\})\to\text{proc}(a_{\scriptscriptstyle
L},P),\text{proc}(b_{\scriptscriptstyle L},Q_{i})\quad(i\in\overline{l})$
Then $\Theta=\Theta_{1},\Psi_{a},\Theta_{2},\Psi_{b},\Theta_{3}$
$\displaystyle{\Gamma\models\Lambda;\Theta_{1},\Psi_{a},\Theta_{2},\Psi_{b},\Theta_{3}::(\Gamma;\Delta)}$
(assumption)
$\displaystyle{\Gamma\models\Lambda::(\Gamma)}\quad{\Gamma\models\Theta_{1},\Psi_{a},\Theta_{2},\Psi_{b},\Theta_{3}::(\Delta)}$
(by inversion on $\Omega$)
$\displaystyle{\Gamma\models\Psi_{a},\Theta_{2},\Psi_{b},\Theta_{3}::(a_{\scriptscriptstyle
L}{:}A_{\scriptscriptstyle L},\Delta_{r})}$ (by Lemma 4)
$\displaystyle{\Gamma\models\Psi_{a},\Psi_{b},\Theta_{r}::(a_{\scriptscriptstyle
L}{:}A_{\scriptscriptstyle L},\Delta_{r})}$ (by Lemma 6 and
$\Theta_{r}=\Theta_{2},\Theta_{3}$)
$\displaystyle{\Gamma\models\Psi_{b},\Theta_{r}::(b_{\scriptscriptstyle
L}{:}\&\\{{\overline{l{:}B_{\scriptscriptstyle
L}}}\\},\Delta_{a},\Delta_{r})}\quad{\Gamma;\Delta_{a}\vdash
b.i;P::(a_{\scriptscriptstyle L}{:}A^{\prime}_{\scriptscriptstyle L})}$
$\displaystyle\quad a_{\scriptscriptstyle
S}{:}\hat{A}\in\Gamma\quad\vdash(A^{\prime}_{\scriptscriptstyle
L},A_{\scriptscriptstyle L},\hat{A})\;\text{ssync}$ (by inversion on $\Theta
3$)
$\displaystyle{\Gamma\models\Theta_{r}::(\Delta_{a},\Delta_{b},\Delta_{r})}\quad{\Gamma;\Delta_{b}\vdash\text{case}\;b_{\scriptscriptstyle
L}\;\text{of}\;\\{\overline{l\Rightarrow Q},\overline{m\Rightarrow
Q}\\}::(b_{\scriptscriptstyle
L}{:}\&\\{{\overline{l{:}B^{\prime}_{\scriptscriptstyle
L}},\overline{m{:}B^{\prime}_{\scriptscriptstyle L}}}\\})}$
$\displaystyle\quad b_{\scriptscriptstyle
S}{:}\hat{B}\in\Gamma\quad\vdash(\&\\{{\overline{l{:}B^{\prime}_{\scriptscriptstyle
L}},\overline{m{:}B^{\prime}_{\scriptscriptstyle
L}}}\\},\&\\{{\overline{l{:}B_{\scriptscriptstyle
L}}}\\},\hat{B})\;\text{ssync}$ (by inversion on $\Theta 3$)
$\displaystyle{\Gamma;\Delta_{b}\vdash Q_{i}::(b_{\scriptscriptstyle
L}{:}{B_{i}}^{\prime}_{\scriptscriptstyle L})}$ (inversion on ${\&}R$)
$\displaystyle{B_{i}}^{\prime}_{\scriptscriptstyle
L}\leq{B_{i}}_{\scriptscriptstyle
L}\quad\vdash({B_{i}^{\prime}}_{\scriptscriptstyle
L},{B_{i}}_{\scriptscriptstyle L},\hat{B})\;\text{ssync}$ (by inversion on
$\leq_{\&}$ and E& respectively)
$\displaystyle{\Gamma\models\Psi_{b}^{\prime},\Theta_{r}::(b_{\scriptscriptstyle
L}{:}{B_{i}}_{\scriptscriptstyle L},\Delta_{r},\Delta_{a})}$ (by $\Theta 3$)
$\displaystyle{\Gamma;\Delta_{a},b_{\scriptscriptstyle
L}{:}{B_{i}}_{\scriptscriptstyle L}\vdash P::(a_{\scriptscriptstyle
L}{:}A^{\prime}_{\scriptscriptstyle L})}$ (inversion on ${\&}L$)
$\displaystyle{\Gamma\models\Psi_{a}^{\prime},\Psi_{b}^{\prime},\Theta_{r}::(a_{\scriptscriptstyle
L}{:}A_{\scriptscriptstyle L},\Delta_{r})}$ (by $\Theta 3$)
$\displaystyle{\Gamma\models\Theta_{1},\Psi_{a}^{\prime},\Psi_{b}^{\prime},\Theta_{r}::(\Delta)}$
(by Lemma 7)
$\displaystyle{\Gamma\models\Lambda;\Theta_{1},\Psi_{a}^{\prime},\Psi_{b}^{\prime},\Theta_{r}::(\Gamma;\Delta)}$
(by $\Omega$)
The well-formedness conditions are maintained because $\Psi_{a}$ and
$\Psi_{b}$ were replaced by $\Psi_{a}^{\prime}$ and $\Psi_{b}^{\prime}$
respectively in $\Theta$.
The proof of D-$\oplus$ is similar to D-$\&$.
###### Case 3.
D-$\otimes$
$\text{proc}(a_{\scriptscriptstyle L},y_{\scriptscriptstyle
L}\leftarrow\text{recv}\;b_{\scriptscriptstyle
L};P),\text{proc}(b_{\scriptscriptstyle L},\text{send}\;b_{\scriptscriptstyle
L}\ c_{\scriptscriptstyle L};Q),\Psi_{c}\to\text{proc}(a_{\scriptscriptstyle
L},[c_{\scriptscriptstyle L}/y_{\scriptscriptstyle
L}]P),\text{proc}(b_{\scriptscriptstyle L},Q),\Psi_{c}$
Then
$\Theta=\Theta_{1},\Psi_{a},\Theta_{2},\Psi_{b},\Theta_{3},\Psi_{c},\Theta_{4}$.
$\displaystyle{\Gamma\models\Lambda;\Theta_{1},\Psi_{a},\Theta_{2},\Psi_{b},\Theta_{3},\Psi_{c},\Theta_{4}::(\Gamma;\Delta)}$
(assumption)
$\displaystyle{\Gamma\models\Lambda::(\Gamma)}\quad{\Gamma\models\Theta_{1},\Psi_{a},\Theta_{2},\Psi_{b},\Theta_{3},\Psi_{c},\Theta_{4}::(\Delta)}$
(by inversion on $\Omega$)
$\displaystyle{\Gamma\models\Psi_{a},\Theta_{2},\Psi_{b},\Theta_{3},\Psi_{c},\Theta_{4}::(a_{\scriptscriptstyle
L}{:}A_{\scriptscriptstyle L},\Delta_{r})}$ (by Lemma 4)
$\displaystyle{\Gamma\models\Psi_{a},\Psi_{b},\Psi_{c},\Theta_{r}::(a_{\scriptscriptstyle
L}{:}A_{\scriptscriptstyle L},\Delta_{r})}$ (by Lemma 6 and
$\Theta_{r}=\Theta_{2},\Theta_{3},\Theta_{4}$)
$\displaystyle{\Gamma\models\Psi_{b},\Psi_{c},\Theta_{r}::(b_{\scriptscriptstyle
L}{:}C^{a}_{\scriptscriptstyle L}\otimes B_{\scriptscriptstyle
L},\Delta_{a},\Delta_{r})}\quad{\Gamma;\Delta_{a},b_{\scriptscriptstyle
L}{:}B_{\scriptscriptstyle L}\vdash y_{\scriptscriptstyle
L}\leftarrow\text{recv}\;b_{\scriptscriptstyle L};P::(a_{\scriptscriptstyle
L}{:}A^{\prime}_{\scriptscriptstyle L})}$ $\displaystyle a_{\scriptscriptstyle
S}{:}\hat{A}\in\Gamma\quad\vdash(A^{\prime}_{\scriptscriptstyle
L},A_{\scriptscriptstyle L},\hat{A})\;\text{ssync}$ (by inversion on $\Theta
3$) $\displaystyle{\Gamma;\Delta_{a},b_{\scriptscriptstyle
L}{:}B_{\scriptscriptstyle L},c_{\scriptscriptstyle
L}{:}C^{a}_{\scriptscriptstyle L}\vdash[c_{\scriptscriptstyle
L}/y_{\scriptscriptstyle L}]P::(a_{\scriptscriptstyle
L}{:}A^{\prime}_{\scriptscriptstyle L})}$ (by inversion on ${\otimes}L$ and
$\alpha$-equivalence)
$\displaystyle{\Gamma\models\Psi_{c},\Theta_{r}::(c_{\scriptscriptstyle
L}{:}C_{\scriptscriptstyle
L},\Delta_{a},\Delta_{b},\Delta_{r})}\quad{\Gamma;\Delta_{b},c_{\scriptscriptstyle
L}{:}C_{\scriptscriptstyle L}\vdash\text{send}\;b_{\scriptscriptstyle L}\
c_{\scriptscriptstyle L};Q::(b_{\scriptscriptstyle
L}{:}C^{b}_{\scriptscriptstyle L}\otimes B^{\prime}_{\scriptscriptstyle L})}$
$\displaystyle b_{\scriptscriptstyle
S}{:}\hat{B}\in\Gamma\quad\vdash(C^{b}_{\scriptscriptstyle L}\otimes
B^{\prime}_{\scriptscriptstyle L},C^{a}_{\scriptscriptstyle L}\otimes
B_{\scriptscriptstyle L},\hat{B})\;\text{ssync}$ (by inversion on $\Theta 3$)
$\displaystyle{\Gamma;\Delta_{b}\vdash Q::(b_{\scriptscriptstyle
L}{:}B^{\prime}_{\scriptscriptstyle L})}\quad C_{\scriptscriptstyle L}\leq
C^{b}_{\scriptscriptstyle L}$ (by inversion on ${\otimes}R$) $\displaystyle
C^{b}_{\scriptscriptstyle L}\leq C^{a}_{\scriptscriptstyle L}\quad
B^{\prime}_{\scriptscriptstyle L}\leq B_{\scriptscriptstyle
L}\quad\vdash(B^{\prime}_{\scriptscriptstyle L},B_{\scriptscriptstyle
L},\hat{B})\;\text{ssync}$ (by inversion on $\leq_{\otimes}$ and $E{\otimes}$
respectively)
$\displaystyle{\Gamma\models\Psi_{c},\Theta_{r}::(c_{\scriptscriptstyle
L}{:}C^{a}_{\scriptscriptstyle L},\Delta_{a},\Delta_{b},\Delta_{r})}$ (by
Lemma 8 since $C_{\scriptscriptstyle L}\leq C^{b}_{\scriptscriptstyle L}\leq
C^{a}_{\scriptscriptstyle L}$.)
$\displaystyle{\Gamma\models\Psi_{b}^{\prime},\Psi_{c},\Theta_{r}::(b_{\scriptscriptstyle
L}{:}B_{\scriptscriptstyle L},c_{\scriptscriptstyle
L}{:}C^{a}_{\scriptscriptstyle L},\Delta_{a},\Delta_{r})}$ (by $\Theta 3$)
$\displaystyle{\Gamma\models\Psi_{a}^{\prime},\Psi_{b}^{\prime},\Psi_{c},\Theta_{r}::(a_{\scriptscriptstyle
L}{:}A_{\scriptscriptstyle L},\Delta_{r})}$ (by $\Theta 3$)
$\displaystyle{\Gamma\models\Theta_{1},\Psi_{a}^{\prime},\Psi_{b}^{\prime},\Psi_{c},\Theta_{r}::(\Delta)}$
(by Lemma 7)
$\displaystyle{\Gamma\models\Lambda;\Theta_{1},\Psi_{a}^{\prime},\Psi_{b}^{\prime},\Psi_{c},\Theta_{r}::(\Gamma;\Delta)}$
(by $\Omega$)
The well-formedness conditions are maintained because $\Psi_{a}$ and
$\Psi_{b}$ were replaced by $\Psi_{a}^{\prime}$ and $\Psi_{b}^{\prime}$
respectively in $\Theta$.
###### Case 4.
D-$\otimes$2
$\displaystyle\text{proc}(a_{\scriptscriptstyle L},y_{\scriptscriptstyle
L}\leftarrow\text{recv}\;b_{\scriptscriptstyle
L};P),\text{proc}(b_{\scriptscriptstyle L},\text{send}\;b_{\scriptscriptstyle
L}\ c_{\scriptscriptstyle S};Q)$ $\displaystyle\to\quad$
$\displaystyle\text{proc}(a_{\scriptscriptstyle L},[d_{\scriptscriptstyle
L}/y_{\scriptscriptstyle L}]P),\text{proc}(b_{\scriptscriptstyle
L},Q),\text{connect}(d_{\scriptscriptstyle L},c_{\scriptscriptstyle
S}),\text{unavail}(d_{\scriptscriptstyle S})\quad(d\;\;\text{fresh})$
Then $\Theta=\Theta_{1},\Psi_{a},\Theta_{2},\Psi_{b},\Theta_{3}$.
$\displaystyle{\Gamma\models\Lambda;\Theta_{1},\Psi_{a},\Theta_{2},\Psi_{b},\Theta_{3}::(\Gamma;\Delta)}$
(assumption)
$\displaystyle{\Gamma\models\Lambda::(\Gamma)}\quad{\Gamma\models\Theta_{1},\Psi_{a},\Theta_{2},\Psi_{b},\Theta_{3}::(\Delta)}$
(by inversion on $\Omega$)
$\displaystyle{\Gamma\models\Psi_{a},\Theta_{2},\Psi_{b},\Theta_{3}::(a_{\scriptscriptstyle
L}{:}A_{\scriptscriptstyle L},\Delta_{r})}$ (by Lemma 4)
$\displaystyle{\Gamma\models\Psi_{a},\Psi_{b},\Theta_{r}::(a_{\scriptscriptstyle
L}{:}A_{\scriptscriptstyle L},\Delta_{r})}$ (by Lemma 6 and
$\Theta_{r}=\Theta_{2},\Theta_{3}$)
$\displaystyle{\Gamma\models\Psi_{b},\Theta_{r}::(b_{\scriptscriptstyle
L}{:}C^{a}_{\scriptscriptstyle L}\otimes B_{\scriptscriptstyle
L},\Delta_{a},\Delta_{r})}\quad{\Gamma;\Delta_{a},b_{\scriptscriptstyle
L}{:}B_{\scriptscriptstyle L}\vdash y_{\scriptscriptstyle
L}\leftarrow\text{recv}\;b_{\scriptscriptstyle L};P::(a_{\scriptscriptstyle
L}{:}A^{\prime}_{\scriptscriptstyle L})}$ $\displaystyle a_{\scriptscriptstyle
S}{:}\hat{A}\in\Gamma\quad\vdash(A^{\prime}_{\scriptscriptstyle
L},A_{\scriptscriptstyle L},\hat{A})\;\text{ssync}$ (by inversion on $\Theta
3$) $\displaystyle{\Gamma;\Delta_{a},b_{\scriptscriptstyle
L}{:}B_{\scriptscriptstyle L},d_{\scriptscriptstyle
L}{:}C^{a}_{\scriptscriptstyle L}\vdash[d_{\scriptscriptstyle
L}/y_{\scriptscriptstyle L}]P::(a_{\scriptscriptstyle
L}{:}A^{\prime}_{\scriptscriptstyle L})}$ (by inversion on ${\otimes}L$ and
$\alpha$-equivalence)
$\displaystyle{\Gamma\models\Theta_{r}::(\Delta_{a},\Delta_{b},\Delta_{r})}\quad{\Gamma;\Delta_{b}\vdash\text{send}\;b_{\scriptscriptstyle L}\
c_{\scriptscriptstyle S};Q::(b_{\scriptscriptstyle
L}{:}C^{b}_{\scriptscriptstyle L}\otimes B^{\prime}_{\scriptscriptstyle L})}$
$\displaystyle b_{\scriptscriptstyle
S}{:}\hat{B}\in\Gamma\quad\vdash(C^{b}_{\scriptscriptstyle L}\otimes
B^{\prime}_{\scriptscriptstyle L},C^{a}_{\scriptscriptstyle L}\otimes
B_{\scriptscriptstyle L},\hat{B})\;\text{ssync}$ (by inversion on $\Theta 3$)
$\displaystyle{\Gamma;\Delta_{b}\vdash Q::(b_{\scriptscriptstyle
L}{:}B^{\prime}_{\scriptscriptstyle L})}\quad c_{\scriptscriptstyle
S}{:}\hat{C}\in\Gamma\quad\hat{C}\leq
C^{b}_{\scriptscriptstyle L}$ (by inversion on ${\otimes}R_{\scriptscriptstyle
S}$) $\displaystyle C^{b}_{\scriptscriptstyle L}\leq C^{a}_{\scriptscriptstyle
L}\quad B^{\prime}_{\scriptscriptstyle L}\leq B_{\scriptscriptstyle
L}\quad\vdash(B^{\prime}_{\scriptscriptstyle L},B_{\scriptscriptstyle
L},\hat{B})\;\text{ssync}$ (by inversion on $\leq_{\otimes}$ and $E{\otimes}$
respectively) $\displaystyle\hat{C}\leq C^{a}_{\scriptscriptstyle L}$ (by
transitivity of $\leq$)
$\displaystyle{\Gamma\models\text{connect}(d_{\scriptscriptstyle
L},c_{\scriptscriptstyle S}),\Theta_{r}::(d:C^{a}_{\scriptscriptstyle
L},\Delta_{a},\Delta_{b},\Delta_{r})}$ (by $\Theta 2$)
$\displaystyle{\Gamma\models\Psi_{b}^{\prime},\Psi_{d},\Theta_{r}::(b_{\scriptscriptstyle
L}{:}B_{\scriptscriptstyle L},d_{\scriptscriptstyle
L}{:}C^{a}_{\scriptscriptstyle L},\Delta_{a},\Delta_{r})}$ (by $\Theta 3$
where $\Psi_{d}=\text{connect}(d_{\scriptscriptstyle L},c_{\scriptscriptstyle
S})$)
$\displaystyle{\Gamma\models\Psi_{a}^{\prime},\Psi_{b}^{\prime},\Psi_{d},\Theta_{r}::(a_{\scriptscriptstyle
L}{:}A_{\scriptscriptstyle L},\Delta_{r})}$ (by $\Theta 3$)
$\displaystyle{\Gamma\models\Theta_{1},\Psi_{a}^{\prime},\Psi_{b}^{\prime},\Psi_{d},\Theta_{r}::(\Delta)}$
(by Lemma 7)
$\displaystyle{\Gamma^{\prime}\models\Theta_{1},\Psi_{a}^{\prime},\Psi_{b}^{\prime},\Psi_{d},\Theta_{r}::(\Delta)}$
(by Lemma 10 with $\Gamma^{\prime}=\Gamma,d_{\scriptscriptstyle S}{:}\bot$)
$\displaystyle{\Gamma^{\prime}\models\Lambda::(\Gamma)}$ (by Lemma 10)
$\displaystyle{\Gamma^{\prime}\models\text{unavail}(d_{\scriptscriptstyle
S})::(d_{\scriptscriptstyle S}{:}\bot)}$ (by $\Lambda 4$)
$\displaystyle{\Gamma^{\prime}\models\Lambda,\text{unavail}(d_{\scriptscriptstyle
S})::(\Gamma^{\prime})}$ (by $\Lambda 2$)
$\displaystyle{\Gamma^{\prime}\models\Lambda,\text{unavail}(d_{\scriptscriptstyle
S});\Theta_{1},\Psi_{a}^{\prime},\Psi_{b}^{\prime},\Psi_{d},\Theta_{r}::(\Gamma^{\prime};\Delta)}$
(by $\Omega$)
The well-formedness conditions are maintained because $\Psi_{a}$ and
$\Psi_{b}$ were replaced by $\Psi_{a}^{\prime}$ and $\Psi_{b}^{\prime}$
respectively in $\Theta$ and a $\Psi_{d}$ was added in $\Theta$ where $d$ is
fresh along with a corresponding $\text{unavail}(d_{\scriptscriptstyle S})$ in
$\Lambda^{\prime}=\Lambda,\text{unavail}(d_{\scriptscriptstyle S})$.
The proofs of D-$\multimap$ and D-$\multimap$2 are similar to D-$\otimes$ and
D-$\otimes$2 respectively.
We will now present some of the harder cases:
###### Case 5.
D-FWDLL
$\text{proc}(a_{\scriptscriptstyle L},\text{fwd}\;a_{\scriptscriptstyle L}\
b_{\scriptscriptstyle L}),\Psi_{b}\to\Psi_{b}\quad(a_{\scriptscriptstyle
L}:=b_{\scriptscriptstyle L},a_{\scriptscriptstyle S}:=b_{\scriptscriptstyle
S})$
Then $\Theta=\Theta_{1},\Psi_{a},\Theta_{2},\Psi_{b},\Theta_{3}$ and
$\Lambda=\text{unavail}(a_{\scriptscriptstyle
S}),\text{unavail}(b_{\scriptscriptstyle S}),\Lambda_{1}$ by Lemma 5.
$\displaystyle{\Gamma\models\Lambda;\Theta_{1},\Psi_{a},\Theta_{2},\Psi_{b},\Theta_{3}::(\Gamma;\Delta)}$
(assumption)
$\displaystyle{\Gamma\models\Lambda::(\Gamma)}\quad{\Gamma\models\Theta_{1},\Psi_{a},\Theta_{2},\Psi_{b},\Theta_{3}::(\Delta)}$
(by inversion on $\Omega$)
$\displaystyle{\Gamma\models\Psi_{a},\Theta_{2},\Psi_{b},\Theta_{3}::(a_{\scriptscriptstyle
L}{:}A_{\scriptscriptstyle L},\Delta_{r})}$ (by Lemma 4)
$\displaystyle{\Gamma\models\Psi_{a},\Psi_{b},\Theta_{r}::(a_{\scriptscriptstyle
L}{:}A_{\scriptscriptstyle L},\Delta_{r})}$ (by Lemma 6 and
$\Theta_{r}=\Theta_{2},\Theta_{3}$)
$\displaystyle{\Gamma\models\Psi_{b},\Theta_{r}::(b_{\scriptscriptstyle
L}{:}B_{\scriptscriptstyle
L},\Delta_{r})}\;{\Gamma;b_{\scriptscriptstyle L}{:}B_{\scriptscriptstyle
L}\vdash\text{fwd}\;a_{\scriptscriptstyle L}\ b_{\scriptscriptstyle
L}::(a_{\scriptscriptstyle L}{:}A^{\prime}_{\scriptscriptstyle
L})}\;a_{\scriptscriptstyle
S}{:}\hat{A}\in\Gamma\;\vdash(A^{\prime}_{\scriptscriptstyle
L},A_{\scriptscriptstyle L},\hat{A})\;\text{ssync}$ (by inversion on $\Theta
3$) $\displaystyle B_{\scriptscriptstyle L}\leq A^{\prime}_{\scriptscriptstyle
L}$ (by inversion on $ID_{\scriptscriptstyle L}$)
At this point we perform a case analysis on the structure of $\Psi_{b}$. In both cases we
will show that
${\Gamma^{\prime}\models\Psi_{a}^{\prime},\Theta_{r}::(a_{\scriptscriptstyle
L}{:}A_{\scriptscriptstyle L},\Delta_{r})}$ for some
$\Gamma^{\prime}\preceq\Gamma$ and $\Psi_{a}^{\prime}$ being directly defined
from $\Psi_{b}$.
###### Subcase 1.
$\Psi_{b}=\text{connect}(b_{\scriptscriptstyle L},c_{\scriptscriptstyle S})$
for some $c_{\scriptscriptstyle S}$.
$\displaystyle{\Gamma\models\text{connect}(b_{\scriptscriptstyle
L},c_{\scriptscriptstyle S}),\Theta_{r}::(b:B_{\scriptscriptstyle
L},\Delta_{r})}\quad c_{\scriptscriptstyle
S}{:}\hat{C}\in\Gamma\quad\hat{C}\leq B_{\scriptscriptstyle L}$ (by inversion
on $\Theta 2$) $\displaystyle\hat{C}\leq A_{\scriptscriptstyle L}$ (by
transitivity of $\leq$)
$\displaystyle{\Gamma\models\text{connect}(b_{\scriptscriptstyle
L},c_{\scriptscriptstyle S}),\Theta_{r}::(b:A_{\scriptscriptstyle
L},\Delta_{r})}$ (by $\Theta 2$)
$\displaystyle{\Gamma\models\text{connect}(a_{\scriptscriptstyle
L},c_{\scriptscriptstyle S}),\Theta_{r}::(a:A_{\scriptscriptstyle
L},\Delta_{r})}$ (from renaming)
###### Subcase 2.
$\Psi_{b}=\text{proc}(b_{\scriptscriptstyle L},P)$ for some process term $P$.
$\displaystyle{\Gamma\models\Theta_{r}::(\Delta_{b},\Delta_{r})}\quad{\Gamma;\Delta_{b}\vdash
P::(b_{\scriptscriptstyle L}{:}B^{\prime}_{\scriptscriptstyle L})}\quad
b_{\scriptscriptstyle
S}{:}\hat{B}\in\Gamma\quad\vdash(B^{\prime}_{\scriptscriptstyle
L},B_{\scriptscriptstyle L},\hat{B})\;\text{ssync}$ (by inversion on $\Theta
3$) $\displaystyle B^{\prime}_{\scriptscriptstyle L}\leq B_{\scriptscriptstyle
L}\leq A^{\prime}_{\scriptscriptstyle L}\leq A_{\scriptscriptstyle L}$
$\displaystyle\vdash(B^{\prime}_{\scriptscriptstyle L},A_{\scriptscriptstyle
L},\hat{B})\;\text{ssync}\quad\vdash(B^{\prime}_{\scriptscriptstyle
L},A_{\scriptscriptstyle L},\hat{A})\;\text{ssync}$ (by Lemma 11 and Lemma 12
respectively) $\displaystyle\vdash(B^{\prime}_{\scriptscriptstyle
L},A_{\scriptscriptstyle L},\hat{B}\land\hat{A})\;\text{ssync}$ (by Lemma 14)
$\displaystyle{\Gamma^{\prime}\models\Theta_{r}::(\Delta_{b},\Delta_{r})}$ (by
Lemma 10 with $\Gamma^{\prime}=[a_{\scriptscriptstyle
S}{:}\hat{B}\land\hat{A}/a_{\scriptscriptstyle S}{:}\hat{A}]\Gamma$)
$\displaystyle{\Gamma^{\prime};\Delta_{b}\vdash P::(b_{\scriptscriptstyle
L}{:}B^{\prime}_{\scriptscriptstyle L})}$ (by Lemma 9)
$\displaystyle{\Gamma^{\prime};\Delta_{b}\vdash[a_{\scriptscriptstyle
L}/b_{\scriptscriptstyle L},a_{\scriptscriptstyle S}/b_{\scriptscriptstyle
S}]P::(a_{\scriptscriptstyle L}{:}B^{\prime}_{\scriptscriptstyle L})}$ (by
$\alpha$ equivalence for $a_{\scriptscriptstyle L}/b_{\scriptscriptstyle L}$
and a combination of $\alpha$ equivalence and Lemma 10 for
$a_{\scriptscriptstyle S}/b_{\scriptscriptstyle S}$)
$\displaystyle{\Gamma^{\prime}\models\text{proc}(a_{\scriptscriptstyle
L},[a_{\scriptscriptstyle L}/b_{\scriptscriptstyle L},a_{\scriptscriptstyle
S}/b_{\scriptscriptstyle S}]P),\Theta_{r}::(a:A_{\scriptscriptstyle
L},\Delta_{r})}$ (by $\Theta 3$)
We will now continue assuming
${\Gamma^{\prime}\models\Psi_{a}^{\prime},\Theta_{r}::(a_{\scriptscriptstyle
L}{:}A_{\scriptscriptstyle L},\Delta_{r})}$ with
$\Gamma^{\prime}\preceq\Gamma$ and $\Psi_{a}^{\prime}=[a_{\scriptscriptstyle
L}/b_{\scriptscriptstyle L},a_{\scriptscriptstyle S}/b_{\scriptscriptstyle
S}]\Psi_{b}$. For the connect case that did not require a smaller $\Gamma$,
simply set $\Gamma^{\prime}=\Gamma$ since $\Gamma^{\prime}\preceq\Gamma$ by
reflexivity.
$\displaystyle{\Gamma^{\prime}\models\Theta_{1},\Psi_{a},\Theta_{r}::(\Delta)}$
(by Lemma 10)
$\displaystyle{\Gamma^{\prime}\models\Theta_{1},\Psi_{a}^{\prime},\Theta_{r}::(\Delta)}$
(by Lemma 7) $\displaystyle{\Gamma^{\prime}\models\Lambda::(\Gamma)}$ (by
Lemma 7)
$\displaystyle{\Gamma^{\prime}\models\text{unavail}(a_{\scriptscriptstyle
S})::(a_{\scriptscriptstyle S}{:}\bot)}$ (by $\Lambda 4$)
$\displaystyle{\Gamma^{\prime}\models\text{unavail}(b_{\scriptscriptstyle
S}),\Lambda_{1}::(\Gamma^{\prime\prime})}$ (by inversion on $\Lambda 2$ where
$\Gamma^{\prime}=\Gamma^{\prime\prime},a_{\scriptscriptstyle S}{:}\bot$)
$\displaystyle{\Gamma^{\prime}\models\Lambda::(\Gamma^{\prime})}$ (by $\Lambda
2$) $\displaystyle{\Gamma^{\prime}\models[a_{\scriptscriptstyle
S}/b_{\scriptscriptstyle S}]\Lambda::(\Gamma^{\prime})}$ (by $\alpha$
equivalence) $\displaystyle{\Gamma^{\prime}\models[a_{\scriptscriptstyle
S}/b_{\scriptscriptstyle
S}]\Theta_{1},\Psi_{a}^{\prime},[a_{\scriptscriptstyle
S}/b_{\scriptscriptstyle S}]\Theta_{r}::(\Delta)}$ (by $\alpha$ equivalence)
$\displaystyle{\Gamma^{\prime}\models\Lambda;[a_{\scriptscriptstyle
S}/b_{\scriptscriptstyle S},a_{\scriptscriptstyle L}/b_{\scriptscriptstyle
L}]\Theta_{1},\Psi_{a}^{\prime},[a_{\scriptscriptstyle
S}/b_{\scriptscriptstyle S}]\Theta_{r}::(\Gamma^{\prime};\Delta)}$ (by
$\Omega$)
Well-formedness is easily maintained because we only removed something from
the linear fragment (it is okay to have dangling unavail terms in the shared
fragment).
###### Case 6.
D-${\uparrow_{L}^{S}}$
$\displaystyle\text{proc}(a_{\scriptscriptstyle L},x_{\scriptscriptstyle
L}\leftarrow\text{acq}_{\scriptscriptstyle S}\;b_{\scriptscriptstyle
S};P),\text{proc}(b_{\scriptscriptstyle S},x_{\scriptscriptstyle
L}\leftarrow\text{acc}_{\scriptscriptstyle S}\;b_{\scriptscriptstyle S};Q)$
$\displaystyle\to\quad$ $\displaystyle\text{proc}(a_{\scriptscriptstyle
L},[b_{\scriptscriptstyle L}/x_{\scriptscriptstyle
L}]P),\text{proc}(b_{\scriptscriptstyle L},[b_{\scriptscriptstyle
L}/x_{\scriptscriptstyle L}]Q),\text{unavail}(b_{\scriptscriptstyle S})$
Then $\Lambda=\Lambda_{b},\Lambda_{1}$ and
$\Theta=\Theta_{1},\Psi_{a},\Theta_{2}$ with
$\Lambda_{b}=\text{proc}(b_{\scriptscriptstyle S},x_{\scriptscriptstyle
L}\leftarrow\text{acc}_{\scriptscriptstyle S}\;b_{\scriptscriptstyle S};Q)$.
We also define $\Psi_{b}^{\prime}=\text{proc}(b_{\scriptscriptstyle
L},[b_{\scriptscriptstyle L}/x_{\scriptscriptstyle L}]Q)$.
$\displaystyle{\Gamma\models\Lambda_{b},\Lambda_{1};\Theta_{1},\Psi_{a},\Theta_{2}::(\Gamma;\Delta)}$
(assumption)
$\displaystyle{\Gamma\models\Lambda_{b},\Lambda_{1}::(\Gamma)}\quad{\Gamma\models\Theta_{1},\Psi_{a},\Theta_{2}::(\Delta)}$
(by inversion on $\Omega$)
$\displaystyle{\Gamma\models\Lambda_{b}::(b_{\scriptscriptstyle
S}{:}{\uparrow_{L}^{S}}B_{\scriptscriptstyle
L})}\quad{\Gamma\models\Lambda_{1}::(\Gamma^{\prime})}$ (by inversion on
$\Lambda 2$ with $\Gamma=b_{\scriptscriptstyle
S}{:}{\uparrow_{L}^{S}}B_{\scriptscriptstyle L},\Gamma^{\prime}$)
$\displaystyle\vdash({\uparrow_{L}^{S}}B^{\prime}_{\scriptscriptstyle
L},{\uparrow_{L}^{S}}B_{\scriptscriptstyle
L},\top)\;\text{ssync}\quad{\Gamma\vdash x_{\scriptscriptstyle
L}\leftarrow\text{acc}_{\scriptscriptstyle S}\;b_{\scriptscriptstyle
S};Q::(b_{\scriptscriptstyle
S}{:}{\uparrow_{L}^{S}}B^{\prime}_{\scriptscriptstyle L})}$ (by inversion on
$\Lambda 3$) $\displaystyle{\Gamma;\cdot\vdash[b_{\scriptscriptstyle
L}/x_{\scriptscriptstyle L}]Q::(b_{\scriptscriptstyle
L}{:}B^{\prime}_{\scriptscriptstyle L})}$ (by inversion on
${\uparrow_{L}^{S}}R$ and $\alpha$ equivalence)
$\displaystyle\vdash(B^{\prime}_{\scriptscriptstyle L},B_{\scriptscriptstyle
L},{\uparrow_{L}^{S}}B^{\prime}_{\scriptscriptstyle L})\;\text{ssync}$ (by
inversion on $D{\uparrow_{L}^{S}}$)
$\displaystyle{\Gamma\models\Psi_{a},\Theta_{2}::(a_{\scriptscriptstyle
L}{:}A_{\scriptscriptstyle L},\Delta_{p})}$ (by Lemma 4)
$\displaystyle{\Gamma\models\Theta_{2}::(\Delta_{a},\Delta_{p})}\quad{\Gamma;\Delta_{a}\vdash
x_{\scriptscriptstyle L}\leftarrow\text{acq}_{\scriptscriptstyle
S}\;b_{\scriptscriptstyle S};P::(a_{\scriptscriptstyle
L}{:}A^{\prime}_{\scriptscriptstyle L})}$ $\displaystyle a_{\scriptscriptstyle
S}{:}\hat{A}\in\Gamma\quad\vdash(A^{\prime}_{\scriptscriptstyle
L},A_{\scriptscriptstyle L},\hat{A})\;\text{ssync}$ (by inversion on $\Theta
3$) $\displaystyle{\Gamma;\Delta_{a},b_{\scriptscriptstyle
L}{:}B^{a}_{\scriptscriptstyle L}\vdash[b_{\scriptscriptstyle
L}/x_{\scriptscriptstyle L}]P::(a_{\scriptscriptstyle
L}{:}A^{\prime}_{\scriptscriptstyle
L})}\quad{\uparrow_{L}^{S}}B_{\scriptscriptstyle
L}\leq{\uparrow_{L}^{S}}B^{a}_{\scriptscriptstyle L}$ (by inversion on
${\uparrow_{L}^{S}}L$ and $\alpha$ equivalence)
$\displaystyle{\Gamma\models\Psi_{b}^{\prime},\Theta_{2}::(b_{\scriptscriptstyle
L}{:}B_{\scriptscriptstyle L},\Delta_{a},\Delta_{p})}$ (by $\Lambda 3$)
$\displaystyle{\Gamma\models\Psi_{b}^{\prime},\Theta_{2}::(b_{\scriptscriptstyle
L}{:}B^{a}_{\scriptscriptstyle L},\Delta_{a},\Delta_{p})}$ (by Lemma 8)
$\displaystyle{\Gamma\models\Psi_{a}^{\prime},\Psi_{b}^{\prime},\Theta_{2}::(a_{\scriptscriptstyle
L}{:}A_{\scriptscriptstyle L},\Delta_{p})}$ (by $\Theta 3$)
$\displaystyle{\Gamma\models\Theta_{1},\Psi_{a}^{\prime},\Psi_{b}^{\prime},\Theta_{2}::(\Delta)}$
(by Lemma 7) $\displaystyle{\Gamma\models\text{unavail}(b_{\scriptscriptstyle
S})::(b_{\scriptscriptstyle S}{:}{\uparrow_{L}^{S}}B_{\scriptscriptstyle L})}$
(by $\Lambda 4$)
$\displaystyle{\Gamma\models\text{unavail}(b_{\scriptscriptstyle
S}),\Lambda_{1}::(\Gamma)}$ (by $\Lambda 2$)
$\displaystyle{\Gamma\models\Lambda;\Theta_{1},\Psi_{a}^{\prime},\Psi_{b}^{\prime},\Theta_{2}::(\Gamma;\Delta)}$
(by $\Omega$)
Well-formedness is maintained because $\Psi_{b}\notin\Theta$ and there is a
corresponding $\text{unavail}(b_{\scriptscriptstyle S})$ to the newly added
$\Psi_{b}^{\prime}$.
###### Case 7.
D-${\uparrow_{L}^{S}}$2
$\displaystyle\text{proc}(a_{\scriptscriptstyle L},x_{\scriptscriptstyle
L}\leftarrow\text{acq}_{\scriptscriptstyle L}\;b_{\scriptscriptstyle
L};P),\text{connect}(b_{\scriptscriptstyle L},c_{\scriptscriptstyle
S}),\text{proc}(c_{\scriptscriptstyle S},x_{\scriptscriptstyle
L}\leftarrow\text{acc}_{\scriptscriptstyle S}\;c_{\scriptscriptstyle S};Q)$
$\displaystyle\to\quad$ $\displaystyle\text{proc}(a_{\scriptscriptstyle
L},[c_{\scriptscriptstyle L}/x_{\scriptscriptstyle
L}]P),\text{proc}(c_{\scriptscriptstyle L},[c_{\scriptscriptstyle
L}/x_{\scriptscriptstyle L}]Q),\text{unavail}(c_{\scriptscriptstyle S})$
Then $\Lambda=\Lambda_{c},\Lambda_{1}$ and
$\Theta=\Theta_{1},\Psi_{a},\Theta_{2},\Psi_{b},\Theta_{3}$ with
$\Lambda_{c}=\text{proc}(c_{\scriptscriptstyle S},x_{\scriptscriptstyle
L}\leftarrow\text{acc}_{\scriptscriptstyle S}\;c_{\scriptscriptstyle S};Q)$.
We also define $\Psi_{c}^{\prime}=\text{proc}(c_{\scriptscriptstyle
L},[c_{\scriptscriptstyle L}/x_{\scriptscriptstyle L}]Q)$.
$\displaystyle{\Gamma\models\Lambda_{c},\Lambda_{1};\Theta_{1},\Psi_{a},\Theta_{2},\Psi_{b},\Theta_{3}::(\Gamma;\Delta)}$
(assumption)
$\displaystyle{\Gamma\models\Lambda_{c},\Lambda_{1}::(\Gamma)}\quad{\Gamma\models\Theta_{1},\Psi_{a},\Theta_{2},\Psi_{b},\Theta_{3}::(\Delta)}$
(by inversion on $\Omega$)
$\displaystyle{\Gamma\models\Lambda_{c}::(c_{\scriptscriptstyle
S}{:}{\uparrow_{L}^{S}}C_{\scriptscriptstyle
L})}\quad{\Gamma\models\Lambda_{1}::(\Gamma^{\prime})}$ (by inversion on
$\Lambda 2$ with $\Gamma=c_{\scriptscriptstyle
S}{:}{\uparrow_{L}^{S}}C_{\scriptscriptstyle L},\Gamma^{\prime}$)
$\displaystyle\vdash({\uparrow_{L}^{S}}C^{\prime}_{\scriptscriptstyle
L},{\uparrow_{L}^{S}}C_{\scriptscriptstyle
L},\top)\;\text{ssync}\quad{\Gamma\vdash x_{\scriptscriptstyle
L}\leftarrow\text{acc}_{\scriptscriptstyle S}\;c_{\scriptscriptstyle
S};Q::(c_{\scriptscriptstyle
S}{:}{\uparrow_{L}^{S}}C^{\prime}_{\scriptscriptstyle L})}$ (by inversion on
$\Lambda 3$) $\displaystyle{\Gamma;\cdot\vdash[c_{\scriptscriptstyle
L}/x_{\scriptscriptstyle L}]Q::(c_{\scriptscriptstyle
L}{:}C^{\prime}_{\scriptscriptstyle L})}$ (by inversion on
${\uparrow_{L}^{S}}R$ and $\alpha$ equivalence)
$\displaystyle\vdash(C^{\prime}_{\scriptscriptstyle L},C_{\scriptscriptstyle
L},{\uparrow_{L}^{S}}C^{\prime}_{\scriptscriptstyle L})\;\text{ssync}$ (by
inversion on $D{\uparrow_{L}^{S}}$)
$\displaystyle{\Gamma\models\Psi_{a},\Theta_{2},\Psi_{b},\Theta_{3}::(a_{\scriptscriptstyle
L}{:}A_{\scriptscriptstyle L},\Delta_{p})}$ (by Lemma 4)
$\displaystyle{\Gamma\models\Psi_{a},\Psi_{b},\Theta_{r}::(a_{\scriptscriptstyle
L}{:}A_{\scriptscriptstyle L},\Delta_{p})}$ (by Lemma 6 with
$\Theta_{r}=\Theta_{2},\Theta_{3}$)
$\displaystyle{\Gamma\models\text{connect}(b_{\scriptscriptstyle
L},c_{\scriptscriptstyle S}),\Theta_{r}::(b_{\scriptscriptstyle
L}{:}{\uparrow_{L}^{L}}B_{\scriptscriptstyle
L},\Delta_{a},\Delta_{p})}\quad{\Gamma;\Delta_{a},b_{\scriptscriptstyle
L}{:}{\uparrow_{L}^{L}}B_{\scriptscriptstyle L}\vdash x_{\scriptscriptstyle
L}\leftarrow\text{acq}_{\scriptscriptstyle L}\;b_{\scriptscriptstyle
L};P::(a_{\scriptscriptstyle L}{:}A^{\prime}_{\scriptscriptstyle L})}$
$\displaystyle a_{\scriptscriptstyle
S}{:}\hat{A}\in\Gamma\quad\vdash(A^{\prime}_{\scriptscriptstyle
L},A_{\scriptscriptstyle L},\hat{A})\;\text{ssync}$ (by inversion on $\Theta
3$)
$\displaystyle{\Gamma\models\Theta_{r}::(\Delta_{a},\Delta_{p})}\quad{\uparrow_{L}^{S}}C_{\scriptscriptstyle
L}\leq{\uparrow_{L}^{L}}B_{\scriptscriptstyle L}$ (by inversion on $\Theta 2$)
$\displaystyle C_{\scriptscriptstyle L}\leq B_{\scriptscriptstyle
L}\quad\vdash(C^{\prime}_{\scriptscriptstyle L},B_{\scriptscriptstyle
L},{\uparrow_{L}^{S}}C^{\prime}_{\scriptscriptstyle L})\;\text{ssync}$ (by
inversion on $\leq_{{\uparrow_{L}^{S}}{\uparrow_{L}^{L}}}$ and Lemma 11
respectively)
$\displaystyle{\Gamma\models\Psi_{c}^{\prime},\Theta_{r}::(c_{\scriptscriptstyle
L}{:}C_{\scriptscriptstyle L},\Delta_{a},\Delta_{p})}$ (by $\Lambda 3$)
$\displaystyle{\Gamma;\Delta_{a},c_{\scriptscriptstyle
L}{:}C_{\scriptscriptstyle L}\vdash[c_{\scriptscriptstyle
L}/x_{\scriptscriptstyle L}]P::(a_{\scriptscriptstyle
L}{:}A^{\prime}_{\scriptscriptstyle L})}$ (by inversion on
${\uparrow_{L}^{L}}L$ and $\alpha$ equivalence)
$\displaystyle{\Gamma\models\Psi_{a}^{\prime},\Psi_{c}^{\prime},\Theta_{r}::(a_{\scriptscriptstyle
L}{:}A_{\scriptscriptstyle L},\Delta_{p})}$ (by $\Theta 3$)
$\displaystyle{\Gamma\models\Theta_{1},\Psi_{a}^{\prime},\Psi_{c}^{\prime},\Theta_{r}::(\Delta)}$
(by Lemma 7) $\displaystyle{\Gamma\models\text{unavail}(c_{\scriptscriptstyle
S})::(c_{\scriptscriptstyle S}{:}{\uparrow_{L}^{S}}C_{\scriptscriptstyle L})}$
(by $\Lambda 4$)
$\displaystyle{\Gamma\models\text{unavail}(c_{\scriptscriptstyle
S}),\Lambda_{1}::(\Gamma)}$ (by $\Lambda 2$)
$\displaystyle{\Gamma\models\Lambda;\Theta_{1},\Psi_{a}^{\prime},\Psi_{c}^{\prime},\Theta_{r}::(\Gamma;\Delta)}$
(by $\Omega$)
Well-formedness is maintained because $\Psi_{c}\notin\Theta$ and there is a
corresponding $\text{unavail}(c_{\scriptscriptstyle S})$ to the newly added
$\Psi_{c}^{\prime}$.
The remaining omitted cases follow strategies similar to those presented above.
## Appendix D Progress Theorem
###### Theorem 31 (Progress).
If ${\Gamma\models\Lambda;\Theta::(\Gamma;\Delta)}$ then either:
1. (1)
$\Lambda;\Theta\rightarrow\Lambda^{\prime};\Theta$ for some $\Lambda^{\prime}$
or
2. (2)
$\Lambda$ is poised and one of:
1. (a)
$\Lambda;\Theta\rightarrow\Lambda^{\prime};\Theta^{\prime}$ or
2. (b)
$\Theta$ is poised or
3. (c)
a linear process in $\Theta$ is stuck and therefore unable to acquire
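Clause (c) is not vacuous; as a concrete illustration (our instance, anticipating Subcase 3 of the second part of the proof), take $\Lambda=\text{unavail}(a_{\scriptscriptstyle S})$ and $\Theta=\text{proc}(c_{\scriptscriptstyle L},x_{\scriptscriptstyle L}\leftarrow\text{acq}_{\scriptscriptstyle S}\;a_{\scriptscriptstyle S};P)$. Then $\Lambda$ is poised, but no rule applies to the linear process: it waits to acquire $a_{\scriptscriptstyle S}$, which is unavailable, so it is stuck.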
###### Proof D.1.
$\displaystyle{\Gamma\models\Lambda;\Theta::(\Gamma;\Delta)}$ (by assumption)
$\displaystyle{\Gamma\models\Lambda::(\Gamma)}\quad{\Gamma\models\Theta::(\Delta)}$
(by inversion on $\Omega$)
for some $\Gamma,\Lambda,\Theta,$ and $\Delta$.
We first show that either $\Lambda\to\Lambda^{\prime}$ for some
$\Lambda^{\prime}$ or that $\Lambda$ is poised by induction on the derivation
of ${\Gamma\models\Lambda::(\Gamma)}$.
###### Case 1.
${\Gamma\models\cdot::(\cdot)}$
$(\cdot)$ is poised since there is no proc term.
###### Case 2.
$\dfrac{{\Gamma\models\Lambda_{1}::(\Gamma_{1})}\quad{\Gamma\models\Lambda_{2}::(\Gamma_{2})}}{{\Gamma\models\Lambda_{1},\Lambda_{2}::(\Gamma_{1},\Gamma_{2})}}$
Then either $\Lambda_{1}\to\Lambda_{1}^{\prime}$ or $\Lambda_{1}$ is poised by
IH, and similarly, either $\Lambda_{2}\to\Lambda_{2}^{\prime}$ or
$\Lambda_{2}$ is poised by IH. If both $\Lambda_{1}$ and $\Lambda_{2}$ are
poised, then the concatenation $\Lambda_{1},\Lambda_{2}$ is poised. Otherwise,
we take the concatenation of the components that progresses. In particular, if
$\Lambda_{1}\to\Lambda_{1}^{\prime}$ and $\Lambda_{2}$ is poised,
$\Lambda_{1},\Lambda_{2}\to\Lambda_{1}^{\prime},\Lambda_{2}$ (and similarly
for the other two combinations).
###### Case 3.
$\dfrac{\vdash(A^{\prime}_{\scriptscriptstyle S},A_{\scriptscriptstyle S},\top)\;\text{ssync}\quad{\Gamma\vdash P::(a_{\scriptscriptstyle S}{:}A^{\prime}_{\scriptscriptstyle S})}}{{\Gamma\models\text{proc}(a_{\scriptscriptstyle S},P)::(a_{\scriptscriptstyle S}{:}A_{\scriptscriptstyle S})}}$
We proceed by case analysis on the syntactic form of $P$ inferred from
inversion on the appropriate typing rule on the derivation of ${\Gamma\vdash
P::(a_{\scriptscriptstyle S}{:}A^{\prime}_{\scriptscriptstyle S})}$.
###### Subcase 1.
$P=\text{fwd}\;a_{\scriptscriptstyle S}\ b_{\scriptscriptstyle S}$. This case
requires a global substitution on the top level $\Lambda$. Since there is no
ordering constraint on $\Lambda$, let
$\Lambda=\text{proc}(a_{\scriptscriptstyle
S},\text{fwd}\;a_{\scriptscriptstyle S}\ b_{\scriptscriptstyle
S}),\Lambda_{1}$ without loss of generality. Then by D-FWDSS,
$\Lambda\to[a_{\scriptscriptstyle S}/b_{\scriptscriptstyle S}]\Lambda_{1}$
###### Subcase 2.
${P=x_{\scriptscriptstyle S}\leftarrow X_{\scriptscriptstyle
S}\leftarrow\overline{b_{\scriptscriptstyle S}};Q}$, then by D-SPAWNSS,
$\text{proc}(a_{\scriptscriptstyle S},x_{\scriptscriptstyle S}\leftarrow
X_{\scriptscriptstyle S}\leftarrow\overline{b_{\scriptscriptstyle
S}};Q)\to\text{proc}(a_{\scriptscriptstyle S},[c_{\scriptscriptstyle
S}/x_{\scriptscriptstyle S}]Q),\text{proc}(c_{\scriptscriptstyle
S},[c_{\scriptscriptstyle S}/x^{\prime}_{\scriptscriptstyle
S},\overline{b_{\scriptscriptstyle
S}}/\overline{y^{\prime}_{\scriptscriptstyle S}}]P)\quad(c\;\;\text{fresh})$
###### Subcase 3.
${P=a_{\scriptscriptstyle L}\leftarrow\text{acc}_{\scriptscriptstyle
S}\;a_{\scriptscriptstyle S};Q}$, then $\text{proc}(a_{\scriptscriptstyle
S},P)$ is poised by definition.
###### Case 4.
${\Gamma\models\text{unavail}(a_{\scriptscriptstyle
S})::(a_{\scriptscriptstyle S}{:}\hat{A})}$
$\text{unavail}(a_{\scriptscriptstyle S})$ is poised since there is no proc
term.
That concludes the first part of the proof. Now to show the second part, we
will assume that $\Lambda$ is poised and proceed by induction on the
derivation of ${\Gamma\models\Theta::(\Delta)}$ to show one of:
1. (a)
$\Lambda;\Theta\to\Lambda^{\prime};\Theta^{\prime}$ for some
$\Lambda^{\prime}$ and $\Theta^{\prime}$
2. (b)
$\Theta$ is poised or
3. (c)
some $\Psi\in\Theta$ is stuck
We will showcase the style of the proof along with the interesting cases.
###### Case 1.
${\Gamma\models\cdot::(\cdot)}$
$(\cdot)$ is poised since there is no proc term.
###### Case 2.
$\dfrac{b_{\scriptscriptstyle S}{:}\hat{B}\in\Gamma\quad\hat{B}\leq A_{\scriptscriptstyle L}\quad{\Gamma\models\Theta_{1}::(\Delta_{1})}}{{\Gamma\models\text{connect}(a_{\scriptscriptstyle L},b_{\scriptscriptstyle S}),\Theta_{1}::(a:A_{\scriptscriptstyle L},\Delta_{1})}}$
By the IH, $\Theta_{1}$ either steps, is poised, or contains a $\Psi$ that is
stuck.
If $\Theta_{1}$ steps, then
${\Lambda;\Theta_{1}\to\Lambda^{\prime};\Theta_{1}^{\prime}}$ for some
$\Lambda^{\prime}$ and $\Theta_{1}^{\prime}$. Then
${\Lambda;\text{connect}(a_{\scriptscriptstyle L},b_{\scriptscriptstyle
S}),\Theta_{1}\to\Lambda^{\prime};\text{connect}(a_{\scriptscriptstyle
L},b_{\scriptscriptstyle S}),\Theta_{1}^{\prime}}$
If $\Theta_{1}$ is poised, then $\text{connect}(a_{\scriptscriptstyle
L},b_{\scriptscriptstyle S}),\Theta_{1}$ is poised because
$\text{connect}(-_{\scriptscriptstyle L},-_{\scriptscriptstyle S})$ is not a
proc term.
Finally, if there is some $\Psi\in\Theta_{1}$ that is stuck, then of course the
same ${\Psi\in(\text{connect}(a_{\scriptscriptstyle L},b_{\scriptscriptstyle
S}),\Theta_{1})}$ is stuck.
###### Case 3.
$\dfrac{c_{\scriptscriptstyle S}{:}\hat{C}\in\Gamma\quad\vdash(C^{\prime}_{\scriptscriptstyle L},C_{\scriptscriptstyle L},\hat{C})\;\text{ssync}\quad{\Gamma;\Delta_{c}\vdash P::(c_{\scriptscriptstyle L}{:}C^{\prime}_{\scriptscriptstyle L})}\quad{\Gamma\models\Theta_{1}::(\Delta_{c},\Delta_{1})}}{{\Gamma\models\text{proc}(c_{\scriptscriptstyle L},P),\Theta_{1}::(c:C_{\scriptscriptstyle L},\Delta_{1})}}$
By the IH, $\Theta_{1}$ either steps, is poised, or contains a $\Psi$ that is
stuck. We first cover two of the cases:
If $\Theta_{1}$ steps, then
${\Lambda;\Theta_{1}\to\Lambda^{\prime};\Theta_{1}^{\prime}}$ for some
$\Lambda^{\prime}$ and $\Theta_{1}^{\prime}$. Then
${\Lambda;\text{proc}(c_{\scriptscriptstyle
L},P),\Theta_{1}\to\Lambda^{\prime};\text{proc}(c_{\scriptscriptstyle
L},P),\Theta_{1}^{\prime}}$.
If there is some $\Psi\in\Theta_{1}$ that is stuck, then of course the same
${\Psi\in(\text{proc}(c_{\scriptscriptstyle L},P),\Theta_{1})}$ is stuck.
For the final case, we will assume that $\Theta_{1}$ is poised and proceed by
case analysis on the derivation of ${\Gamma;\Delta_{c}\vdash
P::(c_{\scriptscriptstyle L}{:}C^{\prime}_{\scriptscriptstyle L})}$. Unlike in
the first part, we make the step between identifying the appropriate typing
rule and inferring the form of $P$ explicit because some of the cases are more
complicated. In the typing judgment, we replace instantiated channel variables
in the context such as $x$ by actual channel names since they must already
exist in the configuration.
###### Subcase 1.
The forms of $P$ inferred from all linear right rules
$(1R,{\otimes}R,{\otimes}R_{\scriptscriptstyle
S},{\multimap}R,{\oplus}R,{\&}R,{\uparrow_{L}^{L}}R,$ and
${\downarrow_{L}^{L}}R)$ directly coincide with the definition of poised. For
example, $1R$ implies that $P=\text{close}\;a_{\scriptscriptstyle L}$, which
is poised, and so on. Since $\Theta_{1}$ is poised,
$\text{proc}(a_{\scriptscriptstyle L},P),\Theta_{1}$ is poised.
###### Subcase 2.
$\dfrac{{\Gamma;\Delta_{c}^{\prime},b_{\scriptscriptstyle L}{:}B_{\scriptscriptstyle L},y_{\scriptscriptstyle L}{:}A_{\scriptscriptstyle L}\vdash P::(c_{\scriptscriptstyle L}{:}C^{\prime}_{\scriptscriptstyle L})}}{{\Gamma;\Delta_{c}^{\prime},b_{\scriptscriptstyle L}{:}A_{\scriptscriptstyle L}\otimes B_{\scriptscriptstyle L}\vdash y_{\scriptscriptstyle L}\leftarrow\text{recv}\;b_{\scriptscriptstyle L};P::(c_{\scriptscriptstyle L}{:}C^{\prime}_{\scriptscriptstyle L})}}$
where $\Delta_{c}=\Delta_{c}^{\prime},b_{\scriptscriptstyle
L}{:}A_{\scriptscriptstyle L}\otimes B_{\scriptscriptstyle L}$. Then
$\Theta_{1}=\Theta_{2},\text{proc}(b_{\scriptscriptstyle L},-),\Theta_{3}$ for
some $\Theta_{2}$ and $\Theta_{3}$ (we know $b_{\scriptscriptstyle L}$ is not
provided by a connect term since connect terms offer channels of type
${\uparrow_{L}^{L}}D_{\scriptscriptstyle L}$). Since
$\text{proc}(b_{\scriptscriptstyle L},-)$ is poised and must offer a channel
of type $A_{\scriptscriptstyle L}\otimes B_{\scriptscriptstyle L}$, it must be
of form $\text{proc}(b_{\scriptscriptstyle
L},\text{send}\;b_{\scriptscriptstyle L}\ a_{\scriptscriptstyle L};Q)$. Thus,
by D-$\otimes$,
$\displaystyle\Lambda;\begin{subarray}{c}\text{proc}(c_{\scriptscriptstyle
L},y_{\scriptscriptstyle L}\leftarrow\text{recv}\;b_{\scriptscriptstyle
L};P),\Theta_{2},\\\ \text{proc}(b_{\scriptscriptstyle
L},\text{send}\;b_{\scriptscriptstyle L}\ a_{\scriptscriptstyle
L};Q),\Theta_{3}\end{subarray}\to\Lambda;\begin{subarray}{c}\text{proc}(c_{\scriptscriptstyle
L},[a_{\scriptscriptstyle L}/y_{\scriptscriptstyle L}]P),\Theta_{2},\\\
\text{proc}(b_{\scriptscriptstyle L},Q),\Theta_{3}\end{subarray}$
All the remaining linear left rules except ${\uparrow_{L}^{L}}L$ and
${\downarrow_{L}^{L}}L$ $(1L,{\multimap}L,{\multimap}L_{\scriptscriptstyle
S},{\oplus}L,{\&}L)$ follow a similar pattern.
###### Subcase 3.
$\dfrac{\hat{A}\leq{\uparrow_{L}^{S}}A_{\scriptscriptstyle L}\quad{\Gamma,a_{\scriptscriptstyle S}{:}\hat{A};\Delta_{c},x_{\scriptscriptstyle L}{:}A_{\scriptscriptstyle L}\vdash P::(c_{\scriptscriptstyle L}{:}C^{\prime}_{\scriptscriptstyle L})}}{{\Gamma,a_{\scriptscriptstyle S}{:}\hat{A};\Delta_{c}\vdash x_{\scriptscriptstyle L}\leftarrow\text{acq}_{\scriptscriptstyle S}\;a_{\scriptscriptstyle S};P::(c_{\scriptscriptstyle L}{:}C^{\prime}_{\scriptscriptstyle L})}}$
Since $\Lambda$ is poised, either
${\Lambda=\text{unavail}(a_{\scriptscriptstyle S}),\Lambda_{1}}$ or
${\Lambda=\text{proc}(a_{\scriptscriptstyle S},x_{\scriptscriptstyle
L}\leftarrow\text{acc}_{\scriptscriptstyle S}\;a_{\scriptscriptstyle
S};Q),\Lambda_{1}}$ for some $\Lambda_{1}$. In the first case,
$\text{proc}(c_{\scriptscriptstyle L},x_{\scriptscriptstyle
L}\leftarrow\text{acq}_{\scriptscriptstyle S}\;a_{\scriptscriptstyle S};P)$ is
stuck, so we are done. In the second case, by D-${\uparrow_{L}^{S}}$, we have
$\displaystyle\text{proc}(a_{\scriptscriptstyle S},x_{\scriptscriptstyle
L}\leftarrow\text{acc}_{\scriptscriptstyle S}\;a_{\scriptscriptstyle
S};Q),\Lambda_{1};\text{proc}(c_{\scriptscriptstyle L},x_{\scriptscriptstyle
L}\leftarrow\text{acq}_{\scriptscriptstyle S}\;a_{\scriptscriptstyle
S};P),\Theta_{1}$ $\displaystyle\to\quad$
$\displaystyle\text{unavail}(a_{\scriptscriptstyle
S}),\Lambda_{1};\text{proc}(c_{\scriptscriptstyle L},[a_{\scriptscriptstyle
L}/x_{\scriptscriptstyle L}]P),\text{proc}(a_{\scriptscriptstyle
L},[a_{\scriptscriptstyle L}/x_{\scriptscriptstyle L}]Q),\Theta_{1}$
###### Subcase 4.
$\dfrac{{\Gamma,x_{\scriptscriptstyle S}{:}A_{\scriptscriptstyle S};\Delta_{c}^{\prime}\vdash P::(c_{\scriptscriptstyle L}{:}C^{\prime}_{\scriptscriptstyle L})}}{{\Gamma;\Delta_{c}^{\prime},a_{\scriptscriptstyle L}{:}{\downarrow_{L}^{S}}A_{\scriptscriptstyle S}\vdash x_{\scriptscriptstyle S}\leftarrow\text{rel}_{\scriptscriptstyle S}\;a_{\scriptscriptstyle S};P::(c_{\scriptscriptstyle L}{:}C^{\prime}_{\scriptscriptstyle L})}}$
where $\Delta_{c}=\Delta_{c}^{\prime},a_{\scriptscriptstyle
L}{:}{\downarrow_{L}^{S}}A_{\scriptscriptstyle S}$. Then
$\Theta_{1}=\Theta_{2},\text{proc}(a_{\scriptscriptstyle L},-),\Theta_{3}$ for
some $\Theta_{2}$ and $\Theta_{3}$. Since there is a
$\text{proc}(a_{\scriptscriptstyle L},-)$ in the linear configuration, by
well-formedness condition, there must be a corresponding
$\text{unavail}(a_{\scriptscriptstyle S})\in\Lambda$, so
$\Lambda=\text{unavail}(a_{\scriptscriptstyle S}),\Lambda_{1}$. Furthermore,
since $\Theta_{1}$ is poised, the proc term must be of form
$\text{proc}(a_{\scriptscriptstyle L},x_{\scriptscriptstyle
S}\leftarrow\text{det}_{\scriptscriptstyle S}\;a_{\scriptscriptstyle S};Q)$.
By D-${\downarrow_{L}^{S}}$, we have
$\displaystyle\text{unavail}(a_{\scriptscriptstyle
S}),\Lambda_{1};\text{proc}(c_{\scriptscriptstyle L},x_{\scriptscriptstyle
S}\leftarrow\text{rel}_{\scriptscriptstyle S}\;a_{\scriptscriptstyle
S};P),\Theta_{2},\text{proc}(a_{\scriptscriptstyle L},x_{\scriptscriptstyle
S}\leftarrow\text{det}_{\scriptscriptstyle S}\;a_{\scriptscriptstyle
S};Q),\Theta_{3}$ $\displaystyle\to\quad$
$\displaystyle\text{proc}(a_{\scriptscriptstyle S},[a_{\scriptscriptstyle
S}/x_{\scriptscriptstyle S}]Q),\Lambda_{1};\text{proc}(c_{\scriptscriptstyle
L},[a_{\scriptscriptstyle L}/x_{\scriptscriptstyle
L}]P),\Theta_{2},\Theta_{3}$
###### Subcase 5.
$\dfrac{{\Gamma;\Delta_{c}^{\prime},x_{\scriptscriptstyle L}{:}A_{\scriptscriptstyle L}\vdash P::(c_{\scriptscriptstyle L}{:}C^{\prime}_{\scriptscriptstyle L})}}{{\Gamma;\Delta_{c}^{\prime},a_{\scriptscriptstyle L}{:}{\uparrow_{L}^{L}}A_{\scriptscriptstyle L}\vdash x_{\scriptscriptstyle L}\leftarrow\text{acq}_{\scriptscriptstyle L}\;a_{\scriptscriptstyle L};P::(c_{\scriptscriptstyle L}{:}C^{\prime}_{\scriptscriptstyle L})}}$
where $\Delta_{c}=\Delta_{c}^{\prime},a_{\scriptscriptstyle
L}{:}{\uparrow_{L}^{L}}A_{\scriptscriptstyle L}$. Then
$\Theta_{1}=\Theta_{2},\Psi_{a},\Theta_{3}$ where $\Psi_{a}$ is either of form
$\text{connect}(a_{\scriptscriptstyle L},b_{\scriptscriptstyle S})$ for some
$b_{\scriptscriptstyle S}$ or $\text{proc}(a_{\scriptscriptstyle L},-)$. In
the latter case, we appeal to the term being poised and the proof proceeds
like the other left rules. In the former case, there must be a term in
$\Lambda$ that provides $b_{\scriptscriptstyle S}$. Since $\Lambda$ is poised,
either ${\Lambda=\text{unavail}(b_{\scriptscriptstyle S}),\Lambda_{1}}$ or
${\Lambda=\text{proc}(b_{\scriptscriptstyle S},x_{\scriptscriptstyle
L}\leftarrow\text{acc}_{\scriptscriptstyle S}\;b_{\scriptscriptstyle
S};Q),\Lambda_{1}}$. In the former case, we can conclude that
$\text{proc}(c_{\scriptscriptstyle L},-)$ is stuck, so we are done. In the
latter case, by D-${\uparrow_{L}^{S}}$2, we have
$\displaystyle\text{proc}(b_{\scriptscriptstyle S},x_{\scriptscriptstyle
L}\leftarrow\text{acc}_{\scriptscriptstyle S}\;b_{\scriptscriptstyle
S};Q),\Lambda_{1};\text{proc}(c_{\scriptscriptstyle L},x_{\scriptscriptstyle
L}\leftarrow\text{acq}_{\scriptscriptstyle L}\;a_{\scriptscriptstyle
L};P),\Theta_{2},\text{connect}(a_{\scriptscriptstyle L},b_{\scriptscriptstyle
S}),\Theta_{3}$ $\displaystyle\to\quad$
$\displaystyle\text{unavail}(b_{\scriptscriptstyle
S}),\Lambda_{1};\text{proc}(c_{\scriptscriptstyle L},[b_{\scriptscriptstyle
L}/x_{\scriptscriptstyle L}]P),\text{proc}(b_{\scriptscriptstyle
L},[b_{\scriptscriptstyle L}/x_{\scriptscriptstyle
L}]Q),\Theta_{2},\Theta_{3}$
###### Subcase 6.
$\dfrac{{\Gamma;\Delta_{c}^{\prime},x_{\scriptscriptstyle L}{:}A_{\scriptscriptstyle L}\vdash P::(c_{\scriptscriptstyle L}{:}C^{\prime}_{\scriptscriptstyle L})}}{{\Gamma;\Delta_{c}^{\prime},a_{\scriptscriptstyle L}{:}{\downarrow_{L}^{L}}A_{\scriptscriptstyle L}\vdash x_{\scriptscriptstyle L}\leftarrow\text{rel}_{\scriptscriptstyle L}\;a_{\scriptscriptstyle L};P::(c_{\scriptscriptstyle L}{:}C^{\prime}_{\scriptscriptstyle L})}}$
where $\Delta_{c}=\Delta_{c}^{\prime},a_{\scriptscriptstyle
L}{:}{\downarrow_{L}^{L}}A_{\scriptscriptstyle L}$. Then
$\Theta_{1}=\Theta_{2},\text{proc}(a_{\scriptscriptstyle L},-),\Theta_{3}$.
Since $\Theta_{1}$ is poised, there are two possible forms of
$\text{proc}(a_{\scriptscriptstyle L},-)$. If we have
$\text{proc}(a_{\scriptscriptstyle L},x_{\scriptscriptstyle
L}\leftarrow\text{det}_{\scriptscriptstyle L}\;a_{\scriptscriptstyle L};Q)$,
then we appeal to the term being poised like the other left rules. If we
instead have $\text{proc}(a_{\scriptscriptstyle L},x_{\scriptscriptstyle
S}\leftarrow\text{det}_{\scriptscriptstyle S}\;a_{\scriptscriptstyle S};Q)$,
then we first identify that $\Lambda=\text{unavail}(a_{\scriptscriptstyle
S}),\Lambda_{1}$ for some $\Lambda_{1}$ by the well-formedness condition. By
D-${\downarrow_{L}^{S}}$2, we have
$\displaystyle\text{unavail}(a_{\scriptscriptstyle
S}),\Lambda_{1};\text{proc}(c_{\scriptscriptstyle L},x_{\scriptscriptstyle
L}\leftarrow\text{rel}_{\scriptscriptstyle L}\;a_{\scriptscriptstyle
L};P),\Theta_{2},\text{proc}(a_{\scriptscriptstyle L},x_{\scriptscriptstyle
S}\leftarrow\text{det}_{\scriptscriptstyle S}\;a_{\scriptscriptstyle
S};Q),\Theta_{3}$ $\displaystyle\to\quad$
$\displaystyle\text{proc}(a_{\scriptscriptstyle S},[a_{\scriptscriptstyle
S}/x_{\scriptscriptstyle S}]Q),\Lambda_{1};\text{proc}(c_{\scriptscriptstyle
L},[b_{\scriptscriptstyle L}/x_{\scriptscriptstyle
L}]P),\text{connect}(b_{\scriptscriptstyle L},a_{\scriptscriptstyle
S}),\Theta_{2},\Theta_{3}\quad(b\;\;\text{fresh})$
# GEO: Enhancing Combinatorial Optimization with Classical and Quantum
Generative Models
Javier Alcazar Zapata Computing Canada Inc., 325 Front St W, Toronto, ON, M5V
2Y1 Mohammad Ghazi Vakili Zapata Computing Canada Inc., 325 Front St W,
Toronto, ON, M5V 2Y1 Department of Chemistry, University of Toronto, Toronto,
ON, M5G 1Z8, Canada Department of Computer Science, University of Toronto,
Toronto, Ontario M5S 2E4, Canada Can B. Kalayci Zapata Computing Canada
Inc., 325 Front St W, Toronto, ON, M5V 2Y1 Department of Industrial
Engineering, Pamukkale University, Kinikli Campus, 20160, Denizli, Turkey
Alejandro Perdomo-Ortiz<EMAIL_ADDRESS>Zapata Computing Canada
Inc., 325 Front St W, Toronto, ON, M5V 2Y1
###### Abstract
We introduce a new framework that leverages machine learning models known as
generative models to solve optimization problems. Our Generator-Enhanced
Optimization (GEO) strategy is flexible enough to adopt any generative model, from
quantum to quantum-inspired or classical, such as Generative Adversarial
Networks, Variational Autoencoders, or Quantum Circuit Born Machines, to name
a few. Here, we focus on a quantum-inspired version of GEO relying on tensor-
network Born machines, referred to hereafter as TN-GEO. We present two
prominent strategies for using TN-GEO. The first uses data points previously
evaluated by any quantum or classical optimizer, and we show how TN-GEO
improves on the performance of the classical solver used as a standalone strategy in
hard-to-solve instances. The second strategy uses TN-GEO as a standalone
solver, i.e., when no previous observations are available. Here, we show its
superior performance when the goal is to find the best minimum given a fixed
budget for the number of function calls. This might be ideal in situations
where the cost function evaluation can be very expensive. To illustrate our
results, we run these benchmarks in the context of the portfolio optimization
problem by constructing instances from the S&P 500 and several other financial
stock indexes. We show that TN-GEO can propose unseen candidates with lower
cost function values than the candidates seen by classical solvers. This is
the first demonstration of the generalization capabilities of quantum-inspired
generative models that provide real value in the context of an industrial
application. We also comprehensively compare state-of-the-art algorithms in a
generalized version of the portfolio optimization problem. The results show
that TN-GEO is among the best of these state-of-the-art algorithms; a
remarkable outcome given that the solvers used in the comparison have been
fine-tuned over decades for this real-world industrial application. We see this as an
important step toward a practical advantage with quantum-inspired models and,
subsequently, with quantum generative models.
## I Introduction
Along with machine learning and the simulation of materials, combinatorial
optimization is one of the top candidates for practical quantum advantage. That
is, the moment when a quantum-assisted algorithm outperforms the best
classical algorithms in the context of a real-world application with a
commercial or scientific value. There is a growing portfolio of techniques to
tackle optimization problems with quantum subroutines, ranging from algorithms
tailored for quantum annealers (e.g., Refs. Kadowaki and Nishimori (1998);
Farhi _et al._ (2001)), gate-based quantum computers (e.g., Refs. Edward
Farhi (2014); Hadfield _et al._ (2019)) and quantum-inspired (QI) models
based on tensor networks (e.g., Ref. Mugel _et al._ (2020)).
Regardless of the quantum optimization approach proposed to date, there is a
need to translate the real-world problem into a polynomial unconstrained
binary optimization (PUBO) expression – a task which is not necessarily
straightforward and which usually results in an overhead in terms of the number
of variables. Specific real-world use cases illustrating these PUBO mappings
are depicted in Refs. Perdomo-Ortiz _et al._ (2012) and Perdomo-Ortiz _et
al._ (2019). Therefore, to achieve practical quantum advantage in the near-
term, it would be ideal to find a quantum optimization strategy that can work
on arbitrary objective functions, bypassing the translation and overhead
limitations raised here.
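As a standard illustration of this overhead (ours, not drawn from this paper): a cardinality constraint $\sum_{i=1}^{N}x_{i}=\kappa$ is typically absorbed into the objective as a quadratic penalty, $C(\boldsymbol{x})\rightarrow C(\boldsymbol{x})+\lambda\big(\sum_{i=1}^{N}x_{i}-\kappa\big)^{2}$, which introduces $O(N^{2})$ pairwise terms $x_{i}x_{j}$ and a penalty weight $\lambda$ that must be tuned, while any higher-order terms in $C(\boldsymbol{x})$ must additionally be reduced to quadratic form through auxiliary variables.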
In our work, we offer a solution to these challenges by proposing a novel
generator-enhanced optimization (GEO) framework which leverages the power of
(quantum or classical) generative models. This family of solvers can scale to
the large problem sizes at which combinatorial optimization becomes
intractable in real-world settings. Since our optimization strategy does not rely on the details of the
objective function to be minimized, it is categorized in the group of so-
called black-box solvers. Another highlight of our approach is that it can
utilize available observations obtained from attempts to solve the
optimization problem. These initial evaluations can come from any source, from
random search trials to tailored state-of-the-art (SOTA) classical or quantum
optimizers for the specific problem at hand.
Our GEO strategy is based on two key ideas. First, the generative-modeling
component aims to capture the correlations from the previously observed data
(steps 0-3 in Fig. 1). Second, since the focus here is on a minimization task,
the (quantum) generative models need to be capable of generating new “unseen”
solution candidates with potentially lower objective-function values than
those already “seen” and used as the training set
(steps 4-6 in Fig. 1). This exploration towards unseen and valuable samples is
by definition the fundamental concept behind generalization: the most
desirable and important feature of any practical ML model. We will elaborate
next on each of these components and demonstrate these two properties in the
context of the tensor-network-based generative models and its application to a
non-deterministic polynomial-time hard (NP-hard) version of the portfolio
optimization in finance.
To the best of our knowledge, this is the first optimization strategy proposed
to perform an efficient black-box exploration of the objective-function
landscape with the help of generative models. Although other proposals
leveraging generative models as a subroutine within the optimizer have
appeared since the publication of our manuscript (e.g., see GFlowNets Bengio
_et al._ (2021) and the variational neural annealing Hibat-Allah _et al._
(2021) algorithms), our framework is the only one capable of both handling
arbitrary cost functions and swapping the generator for a quantum or
quantum-inspired implementation. GEO also has the feature that the more data
is available, the more information can be passed and used to train the
(quantum) generator.
In this work, we highlight the different features of GEO by performing a
comparison with alternative solvers, such as Bayesian optimizers and generic
solvers like simulated annealing. In the case of the specific real-world
large-scale application of portfolio optimization, we compare against the SOTA
optimizers and show the competitiveness of our approach. These results are
presented in Sec. III. Before that, in Sec. II, we present the GEO approach and its
range of applicability.
Figure 1: Scheme for our Generator-Enhanced Optimization (GEO) strategy. The
GEO framework leverages generative models to utilize previous samples coming
from any quantum or classical solver. The trained quantum or classical
generator is responsible for proposing candidate solutions which might be out
of reach for conventional solvers. This seed data set (step 0) consists of
observation bitstrings $\\{\boldsymbol{x}^{(i)}\\}_{\rm{seed}}$ and their
respective costs $\\{\sigma^{(i)}\\}_{\rm{seed}}$. To give more weight to
samples with low cost, the seed samples and their costs are used to construct
a softmax function which serves as a surrogate to the cost function but in
the probabilistic domain. This softmax surrogate also serves as a prior
distribution from which the training-set samples are drawn to train the
generative model (steps 1-3). As shown in the figure between steps 1 and 2,
training samples from the softmax surrogate are biased favoring those with low
cost value. For the work presented here, we implemented a tensor-network
(TN)-based generative model. Therefore, we refer to this quantum-inspired
instantiation of GEO as TN-GEO. Other families of generative models from
classical, quantum, or hybrid quantum-classical can be explored as expounded
in the main text. The quantum-inspired generator corresponds to a tensor-
network Born machine (TNBM) model which is used to capture the main features
in the training data, and to propose new solution candidates which are
subsequently post-selected before their costs $\\{\sigma^{(i)}\\}_{\rm{new}}$
are evaluated (steps 4-6). The new set is merged with the seed data set (step
7) to form an updated seed data set (step 8) which is to be used in the next
iteration of the algorithm. More algorithmic details for the two TN-GEO
strategies proposed here, as a booster or as a stand-alone solver, can be
found in the main text and in A.5 and A.6 respectively.
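To make the softmax surrogate concrete, here is a minimal sketch (our own illustration; the temperature-like scale `T`, the seed sizes, and the random stand-in costs are assumptions, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)

def softmax_surrogate(costs, T=1.0):
    """Map seed costs to sampling probabilities that favor low cost."""
    z = -np.asarray(costs, dtype=float) / T   # lower cost -> larger logit
    z -= z.max()                              # stabilize the exponentials
    p = np.exp(z)
    return p / p.sum()

# Seed data: bitstring candidates and their evaluated costs (step 0).
seed_x = rng.integers(0, 2, size=(1000, 20))   # 1000 portfolios, N=20 assets
seed_cost = rng.normal(size=1000)              # stand-in cost values

# Draw a training set biased toward low-cost candidates (steps 1-2).
p = softmax_surrogate(seed_cost, T=0.5)
idx = rng.choice(len(seed_x), size=500, p=p)
train_set = seed_x[idx]
```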
## II Quantum-Enhanced Optimization with Generative Models
As shown in Fig. 1, depending on the GEO specifics we can construct an entire
family of solvers whose generative-modeling core ranges from classical to
quantum-inspired (QI), quantum-circuit (QC) enhanced, or hybrid
quantum-classical models. These
options can be realized by utilizing, for example, Boltzmann machines Cheng
_et al._ (2018) or Generative Adversarial Networks (GAN) Goodfellow _et al._
(2014), Tensor-Network Born Machines (TNBM) Cheng _et al._ (2017), Quantum
Circuit Born Machines (QCBM) Benedetti _et al._ (2018) or Quantum-Circuit
Associative Adversarial Networks (QC-AAN) Rudolph _et al._ (2020)
respectively, to name just a few of the many options for this probabilistic
component.
QI algorithms are an interesting alternative since they allow one to
simulate larger-scale quantum systems with the help of efficient tensor-
network (TN) representations. Depending on the complexity of the TN used to
build the quantum generative model, one can simulate from thousands of problem
variables down to a few tens, the latter being the limit of simulating a
universal gate-based quantum computing model. That is, one can control the
amount of quantum resources available in the quantum generative model by
choosing the QI model.
Therefore, from all quantum generative model options, we chose to use a QI
generative model based on TNs to test and scale our GEO strategy to instances
with a number of variables commensurate with those found in industrial-scale
scenarios. We refer to our solver hereafter as TN-GEO. For the training of our
TN-GEO models we followed the work of Han et al. Han _et al._ (2018) where
they proposed to use Matrix Product States (MPS) to build the unsupervised
generative model. The latter extends the scope from early successes of
quantum-inspired models in the context of supervised ML Stoudenmire and Schwab
(2016); Efthymiou _et al._ (2019); Roberts _et al._ (2019); Fishman _et
al._ (2020).
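For intuition about this model class, the following toy sketch (ours) defines a Born machine from random MPS tensors and samples from it by brute-force enumeration; it deliberately omits the MPS training procedure of Han et al. and only scales to small $N$:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

def random_mps(n_sites, bond_dim=4):
    # One rank-3 tensor per site: (left bond, physical bit, right bond).
    return [rng.normal(size=(bond_dim, 2, bond_dim)) for _ in range(n_sites)]

def amplitude(mps, bits):
    # Contract the chain for one bitstring; the trace closes the boundary.
    m = mps[0][:, bits[0], :]
    for tensor, bit in zip(mps[1:], bits[1:]):
        m = m @ tensor[:, bit, :]
    return np.trace(m)

def born_probabilities(mps):
    n = len(mps)
    amps = np.array([amplitude(mps, bits)
                     for bits in itertools.product((0, 1), repeat=n)])
    p = amps ** 2                  # Born rule: p(x) proportional to |psi(x)|^2
    return p / p.sum()

mps = random_mps(n_sites=8)
p = born_probabilities(mps)
draws = rng.choice(len(p), size=5, p=p)
print([format(i, "08b") for i in draws])   # sampled bitstrings
```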
In this paper we will discuss two modes of operation for our family of
quantum-enhanced solvers:
* •
In TN-GEO as a “booster” we leverage past observations from classical (or
quantum) solvers. To illustrate this mode we use observations from simulated
annealing (SA) runs. Simulation details are provided in Appendix A.5.
* •
In TN-GEO as a stand-alone solver all initial cost function evaluations are
decided entirely by the quantum-inspired generative model, and a random prior
is constructed just to give support to the target probability distribution the
MPS model is aiming to capture. Simulation details are provided in Appendix
A.6.
Both of these strategies are captured in the algorithm workflow diagram in
Fig. 1 and described in more detail in Appendix A.
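Schematically, one iteration of the GEO loop can be sketched as follows (our simplification: the generative core is replaced by an independent-Bernoulli stand-in rather than a TNBM, and `cost_fn` is any user-supplied black-box objective):

```python
import numpy as np

rng = np.random.default_rng(1)

def geo_iteration(seed_x, seed_cost, cost_fn, n_train=500, n_new=200, T=0.5):
    """One pass over steps 1-8 of Fig. 1 with a Bernoulli stand-in generator."""
    # Steps 1-2: softmax surrogate over the seed costs, favoring low cost.
    z = -seed_cost / T
    z = z - z.max()
    p = np.exp(z)
    p /= p.sum()
    train = seed_x[rng.choice(len(seed_x), size=n_train, p=p)]

    # Step 3: "train" the stand-in generative model (per-bit frequencies).
    theta = train.mean(axis=0)

    # Step 4: sample new candidate bitstrings from the model.
    cand = (rng.random((n_new, seed_x.shape[1])) < theta).astype(int)

    # Step 5: post-select candidates not already in the seed set.
    seen = {tuple(x) for x in seed_x}
    new_x = [x for x in cand if tuple(x) not in seen]
    if not new_x:
        return seed_x, seed_cost
    new_x = np.array(new_x)

    # Step 6: evaluate the black-box cost on the new candidates only.
    new_cost = np.array([cost_fn(x) for x in new_x])

    # Steps 7-8: merge into the updated seed set for the next iteration.
    return np.vstack([seed_x, new_x]), np.concatenate([seed_cost, new_cost])
```

Swapping the per-bit frequency model for a TNBM (or any quantum generator) recovers the corresponding GEO instantiation.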
## III Results and Discussion
To illustrate the implementation for both of these settings we tested their
performance on an NP-hard version of the portfolio optimization problem with
cardinality constraints. The selection of optimal investment on a specific set
of assets, or portfolios, is a problem of great interest in the area of
quantitative finance. This problem is of practical importance for investors,
whose objective is to allocate capital optimally among assets while respecting
some investment restrictions. The goal of this optimization task, introduced
by Markowitz Markowitz (1952), is to generate a set of portfolios that offers
either the highest expected return (profit) for a defined level of risk or the
lowest risk for a given level of expected return. In this work, we focus in
two variants of this cardinality constrained optimization problem. The first
scenario aims to choose portfolios which minimize the volatility or risk given
a specific target return (more details are provided in Appendix A.1.) To
compare with the reported results from the best performing SOTA algorithms, we
ran TN-GEO in a second scenario where the goal is to choose the best portfolio
given a fixed level of risk aversion. This is the most commonly used version
of this optimization problem when it comes to comparison among SOTA solvers in
the literature (more details are provided in Appendix A.2).
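For concreteness, a simplified version of the first objective can be written as follows (a sketch under our own assumptions of equal weighting over the selected assets and a quadratic penalty on the return target; the exact formulation used in the paper is in its Appendix A.1):

```python
import numpy as np

def portfolio_risk(x, sigma, mu, target_return, penalty=1e3):
    """Risk of an equally weighted portfolio over the selected assets.

    x: binary selection vector; sigma: covariance matrix of asset returns;
    mu: expected returns. Shortfalls below the target return are penalized.
    (Equal weighting and the quadratic penalty are our simplifications.)
    """
    k = x.sum()
    if k == 0:
        return np.inf
    w = x / k                              # equal weights on chosen assets
    risk = w @ sigma @ w                   # portfolio variance
    shortfall = max(0.0, target_return - w @ mu)
    return risk + penalty * shortfall ** 2
```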
### III.1 TN-GEO as a booster for any other combinatorial optimization solver
Figure 2: TN-GEO as a booster. Top: Strategies 1-3 correspond to the current
options a user might explore when solving a combinatorial optimization problem
with a suite of classical optimizers such as simulated annealing (SA),
parallel tempering (PT), generic algorithms (GA), among others. In strategy 1,
the user would use its computational budget with a preferred solver. In
strategy 2-4 the user would inspect intermediate results and decide whether to
keep trying with the same solver (strategy 2), try a new solver or a new
setting of the same solver used to obtain the intermediate results (strategy
3), or, as proposed here, to use the acquired data to train a quantum or
quantum-inspired generative model within a GEO framework such as TN-GEO
(strategy 4). Bottom: Results showing the relative TN-GEO enhancement from TN-
GEO over either strategy 1 or strategy 2. Positive values indicate runs where
TN-GEO outperformed the respective classical strategies (see Eq. 1). The data
represents bootstrapped medians from 20 independent runs of the experiments
and error bars correspond to the 95% confidence intervals. The two instances
presented here correspond to portfolio optimization instances where all the
assets in the S&P 500 market index were included ($N=500$), under two
different cardinality constraints $\kappa$. The cardinality constraint
indicates the number of assets that can be included at a time in valid
portfolios, yielding a search space of $M=\binom{N}{\kappa}$, with $M\sim
10^{69}$ portfolio candidates for $\kappa=50$.
In Fig. 2 we present the experimental design and the results obtained from
using TN-GEO as a booster. In these experiments we illustrate how using
intermediate results from simulated annealing (SA) can be used as seed data
for our TN-GEO algorithm. As described in Fig. 2, there are two strategies we
explored (strategies 1 and 2) to compare with our TN-GEO strategy (strategy
4). To fairly compare each strategy, we provide each with approximately the
same computational wall-clock time. For strategy 2, this translates into
performing additional restarts of SA with the time allotted for TN-GEO. In the
case of strategy 1, where we explored different settings for SA from the start
compared to those used in strategy 2, this amounts to using the same total
number of cost-function evaluations as those allocated to SA in
strategy 2. For our experiments this number was set to 20,000 cost function
evaluations for strategies 1 and 2. In strategy 4, the TN-GEO was initialized
with a prior consisting of the best 1,000 observations out of the first 10,000
coming from strategy 2 (see Appendix A.5 for details). To evaluate the
performance enhancement obtained from the TN-GEO strategy we compute the
relative TN-GEO enhancement $\eta$, which we define as
$\eta=\frac{C^{\rm{cl}}_{\rm{min}}-C^{\rm{TN\text{-}GEO}}_{\rm{min}}}{C^{\rm{cl}}_{\rm{min}}}\times 100\%.$ (1)
Here, $C^{\rm{cl}}_{\rm{min}}$ is the lowest minimum value found by the
classical strategy (e.g., strategies 1-3) while $C^{\rm{TN-GEO}}_{\rm{min}}$
corresponds to the lowest value found with the quantum-enhanced approach
(e.g., with TN-GEO). Therefore, positive values reflect an improvement over
the classical-only approaches, while negative values indicate cases where the
classical solvers outperform the quantum-enhanced proposal.
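In code, Eq. (1) is simply the following (with hypothetical illustrative values):

```python
def relative_enhancement(c_classical_min, c_tngeo_min):
    """Relative TN-GEO enhancement of Eq. (1), in percent."""
    return 100.0 * (c_classical_min - c_tngeo_min) / c_classical_min

# Hypothetical illustration: classical best risk 0.0250, TN-GEO best 0.0240.
print(relative_enhancement(0.0250, 0.0240))  # ~4.0, i.e., a 4% improvement
```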
Figure 3: Generalization capabilities of our quantum-inspired generative
model. Left panel corresponds to an investment universe with $N=50$ assets
while the right panel corresponds to one with $N=100$ assets. The blue
histogram represents the number of observations or portfolios obtained from
the classical solver (seed data set). In orange we represent samples coming
from our quantum generative model at the core of TN-GEO. The green dashed line
is positioned at the best risk value found in the seed data. This mark
emphasizes all the new outstanding samples obtained with the quantum
generative model, which correspond to lower portfolio risk values (better
minima) than those available from the classical solver by itself. The number
of outstanding samples in the case of $N=50$ is equal to 31, while 349
outstanding samples were obtained from the MPS generative model in the case of
$N=100$.
As shown in Fig. 2, we observe that TN-GEO outperforms on average both of
the classical-only strategies implemented. The quantum-inspired enhancement
observed here, as well as the trend for a larger enhancement as the number of
variables (assets) becomes larger, is confirmed in many other investment
universes with a number of variables ranging from $N=30$ to $N=100$ (see
Appendix B for more details). Although we show an enhancement compared to SA,
similar results could be expected when other solvers are used, since our
approach builds on solutions found by the solver and does not compete with it
from the start of the search. Furthermore, the more data available, the better
the expected performance of TN-GEO is. An important highlight of TN-GEO as a
booster is that these previous observations can come from a combination of
solvers, as different as purely quantum or classical, or hybrid.
The observed performance enhancement compared with the classical-only strategy
must come from a better exploration of the relevant search space, i.e.,
the space of those bitstring configurations $\boldsymbol{x}$ representing
portfolios which could yield a low risk value for a specified expected
investment return. That is the intuition behind the construction of TN-GEO.
The goal of the generative model is to capture the important correlations in
the previously observed data, and to use its generative capabilities to
propose similar new candidates.
Generating new candidates is by no means a trivial task in ML and it
determines the usefulness and power of the model since it measures its
generalization capabilities. In this setting of QI generative models, one
expects that the MPS-based generative model at the core of TN-GEO is not
simply memorizing the observations given as part of the training set, but that
it will provide new unseen candidates. This is an idea which has been recently
tested and demonstrated to some extent on synthetic data sets (see e.g., Refs.
Bradley _et al._ (2020), Stokes and Terilla (2019) and Miller _et al._
(2020)). In Fig. 3 we demonstrate that our quantum-inspired generative model is
generalizing to new samples and that these add real value to the optimization
search. To the best of our knowledge this is the first demonstration of the
generalization capabilities of quantum generative models in the context of a
real-world application in an industrial scale setting, and one of our main
findings in our paper.
Note that our TN-based generative model not only produces better minima than
the classical seed data, but it also generates a wealth of samples in the
low-cost spectrum. This bias is imprinted in the design of our TN-GEO and it
is the purpose of the softmax surrogate prior distribution shown in Fig. 1.
This richness of new samples could be useful not only for the next iteration
of the algorithm, but they may also be readily of value to the user solving
the application. In some applications there is value as well in having
information about the runners-up. Ultimately, the cost function is just a
model of the system guiding the search, and the lowest cost does not
necessarily translate into the best performance in the real-life investment strategy.
### III.2 Generator-Enhanced Optimization as a Stand-Alone Solver
Figure 4: TN-GEO as a stand-alone solver: In this comparison of TN-GEO against
four classical competing strategies, investment universes are constructed from
subsets of the S&P 500 with a diversity in the number of assets (problem
variables) ranging from $N=30$ to $N=100$. The goal is to minimize the risk
given an expected return which is one of the specifications in the
combinatorial problem addressed here. Error bars and their 95% confidence
intervals are calculated from bootstrapping over 100 independent random
initializations for each solver on each problem. The main line for each solver
corresponds to the bootstrapped median over these 100 repetitions,
demonstrating the superior performance of TN-GEO over the classical solvers
considered here. As specified in the text, with the exception of TN-GEO, the
classical solvers use to their advantage the a priori information coming from
the cardinality constraint imposed in the selection of valid portfolios.
Next, we explore the performance of our TN-GEO framework as a stand-alone
solver. The focus is on combinatorial problems whose cost functions are
expensive to evaluate and where finding the best minimum within the least
number of calls to this function is desired. In Fig. 4 we present the
comparison against four different classical optimization strategies. As the
first solver, we use the random solver, which corresponds to a fully random
search strategy over the $2^{N}$ bitstrings of all possible portfolios, where
$N$ is the number of assets in our investment universe. As the second solver, we
use the conditioned random solver, which is a more sophisticated random
strategy compared to the fully random search. The conditioned random strategy
uses the a priori information that the search is restricted to bitstrings
containing a fixed number of $\kappa$ assets. Therefore the number of
combinatorial possibilities is $M=\binom{N}{\kappa}$, which is significantly
less than $2^{N}$. As expected, when this information is not used the
performance of the random solver over the entire $2^{N}$ search space is
worse. The other two competing strategies considered here are SA and the
Bayesian optimization library GPyOpt authors (2016). In both of these
classical solvers, we adapted their search strategy to impose this cardinality
constraint with fixed $\kappa$ as well (details in Appendix. A.4). This raises
the bar even higher for TN-GEO which is not using that a priori information to
boost its performance (specific adaptations of the MPS generative model could
be implemented such that it conserves the number of assets by construction,
borrowing ideas from condensed-matter physics, where one can impose on the MPS
a conservation of the number of particles in the quantum state). As explained in
Appendix A.6, we only use this information indirectly during the construction
of the artificial seed data set which initializes the algorithm (step 0, Fig.
1), but it is not a strong constraint during the construction of the QI
generative model (step 3, Fig. 1) or imposed to generate the new candidate
samples coming from it (step 4, Fig. 1). Post selection can be applied a
posteriori such that only samples with the right cardinality are considered as
valid candidates towards the selected set (step 5, Fig. 1).
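Both the conditioned random baseline and the cardinality post-selection of step 5 are straightforward to sketch (our illustration; the values of $N$ and $\kappa$ here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)
N, kappa = 50, 10

def conditioned_random_portfolio():
    """Uniform sample over the binom(N, kappa) valid portfolios."""
    x = np.zeros(N, dtype=int)
    x[rng.choice(N, size=kappa, replace=False)] = 1
    return x

def post_select(candidates):
    """Keep only generator outputs with the right cardinality (step 5)."""
    return [x for x in candidates if x.sum() == kappa]
```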
In Fig. 4 we demonstrate the advantage of our TN-GEO stand-alone strategy
compared to any of these widely used solvers. In particular, it is interesting
to note that the gap between TN-GEO and the other solvers seems to be larger
for a larger number of variables.
### III.3 Comparison with state-of-the-art algorithms
Finally, we compare TN-GEO with nine different leading SOTA optimizers
covering a broad spectrum of algorithmic strategies for this specific
combinatorial problem, referred to hereafter as: 1) GTS Chang _et al._ (2000),
combining genetic algorithms, tabu search, and simulated annealing; 2) IPSO
Deng _et al._ (2012), an improved particle swarm optimization algorithm; 3)
IPSO-SA Mozafari _et al._ (2011), a hybrid algorithm combining particle swarm
optimization and simulated annealing; 4) PBILD Lwin and Qu (2013), a
population-based incremental learning and differential evolution algorithm; 5)
GRASP Baykasoğlu _et al._ (2015), a greedy randomized adaptive solution
procedure; 6) ABCFEIT Kalayci _et al._ (2017), an artificial bee colony
algorithm with feasibility enforcement and infeasibility toleration
procedures; 7) HAAG Kalayci _et al._ (2020), a hybrid algorithm integrating
ant colony optimization, artificial bee colony, and genetic algorithms; 8)
VNSQP Akbay _et al._ (2020), a variable neighborhood search algorithm combined
with quadratic programming; and 9) RCABC Cura (2021), a rapidly converging
artificial bee colony algorithm.
The test data used by the vast majority of researchers in the literature who
have addressed the cardinality-constrained portfolio optimization problem come
from the OR-Library Beasley (1990), which corresponds to the weekly prices
between March 1992 and September 1997 of the following indexes: Hang Seng in
Hong Kong (31 assets); DAX 100 in Germany (85 assets); FTSE 100 in the United
Kingdom (89 assets); S&P 100 in the United States (98 assets); and Nikkei 225
in Japan (225 assets).
Here we present the results obtained with TN-GEO and its comparison with the
nine SOTA metaheuristic algorithms mentioned above, whose results
are publicly available in the literature. Table 1 shows the results of all
algorithms and all performance metrics for each of the 5 index data sets (for
more details on the evaluation metrics, see Appendix A.2). Each algorithm
corresponds to a different column, with TN-GEO in the rightmost column. The
values are shown in red if the TN-GEO algorithm performed better or equally
well compared to the other algorithms on the corresponding performance metric.
The numbers in bold mean that the algorithm found the best (lowest) value
across all algorithms.
Of all the entries in this table, 67% are red entries, where TN-GEO either
wins or draws, which is a significant percentage given that these optimizers
are among the best reported in the last decades.
In Table 2 we show a pairwise comparison of TN-GEO against each of the SOTA
optimizers. This table reports the number of times TN-GEO wins, loses, or
draws compared to results reported for the other optimizer, across all the
performance metrics and for all the 5 different market indexes. Note that
since not all the performance metrics are reported for all the solvers and
market indexes, the total number of wins, draws, or losses varies. Therefore,
we report in the same table the overall percentage of wins plus draws in each
case. We see that this percentage is greater than 50% in all the cases.
Furthermore, in Table 2, we use the Wilcoxon signed-rank test Wilcoxon (1992),
a widely used nonparametric statistical test for evaluating and comparing the
performance of different algorithms across benchmarks Demšar (2006), to
statistically validate the results and provide a meaningful comparison between
the TN-GEO algorithm and the SOTA metaheuristic algorithms. The Wilcoxon
signed-rank test evaluates the null hypothesis that the median of the
differences between the results of the two algorithms is equal to 0, i.e.,
that there is no significant difference between their performance. The null
hypothesis is rejected if the significance value ($p$) is less than the
significance level ($\alpha$), which means that one of the algorithms
performs better than the other. Otherwise, the hypothesis is retained.
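For concreteness, the sketch below shows this pairwise comparison with SciPy's implementation of the test; the paired metric values are hypothetical placeholders standing in for the shared entries of one column of Table 1.

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical paired metric values for TN-GEO and one competitor (in the
# paper, the pairs come from the shared entries of Table 1).
tn_geo = np.array([1.0958, 1.2181, 1.6392, 0.6082, 2.3142, 0.8445])
other  = np.array([1.0957, 1.2155, 1.6368, 0.6059, 2.3126, 0.8451])

# Null hypothesis: the median of the paired differences is zero.
stat, p = wilcoxon(tn_geo, other)
alpha = 0.05
decision = "Reject" if p < alpha else "Retain"
print(f"W = {stat}, p = {p:.3f} -> {decision} the null hypothesis")
```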
As can be seen from Table 2, the TN-GEO algorithm significantly outperforms
the GTS and PBILD methods on all performance metrics, rejecting the null
hypothesis at the $0.05$ significance level. For the remaining algorithms, the
null hypotheses are retained at $\alpha=0.05$. Thus, in terms of performance
on all metrics combined, the results show no significant difference between
TN-GEO and these remaining seven SOTA optimizers (IPSO, IPSO-SA, GRASP,
ABCFEIT, HAAG, VNSQP, and RCABC).
Overall, the results confirm the competitiveness of our quantum-inspired
proposed approach against SOTA metaheuristic algorithms. This is remarkable
given that these metaheuristics have been explored and fine-tuned for decades.
Table 1: Detailed comparison with SOTA algorithms for each of the five index data sets and on seven different performance indicators described in Appendix A.2. Entries in red correspond to cases where TN-GEO performed better than or tied with the other algorithm. Entries in bold correspond to the best (lowest) value for each specific indicator.

Data Set | Performance Indicator | GTS | IPSO | IPSO-SA | PBILD | GRASP | ABCFEIT | HAAG | VNSQP | RCABC | TN-GEO
---|---|---|---|---|---|---|---|---|---|---|---
Hang Seng | Mean | 1.0957 | 1.0953 | - | 1.1431 | 1.0965 | 1.0953 | 1.0965 | 1.0964 | 1.0873 | 1.0958
| Median | 1.2181 | - | - | 1.2390 | 1.2155 | 1.2181 | 1.2181 | 1.2155 | 1.2154 | 1.2181
| Minimum | - | - | - | - | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000
| Maximum | - | - | - | - | 1.5538 | 1.5538 | 1.5538 | 1.5538 | 1.5538 | 1.5538
| MEUCD | - | - | 0.0001 | - | 0.0001 | 0.0001 | 0.0001 | 0.0001 | 0.0001 | 0.0001
| VRE | - | - | 1.6368 | - | 1.6400 | 1.6432 | 1.6395 | 1.6397 | 1.6342 | 1.6392
| MRE | - | - | 0.6059 | - | 0.6060 | 0.6047 | 0.6085 | 0.6058 | 0.5964 | 0.6082
DAX100 | Mean | 2.5424 | 2.5417 | - | 2.4251 | 2.3126 | 2.3258 | 2.3130 | 2.3125 | 2.2898 | 2.3142
| Median | 2.5466 | - | - | 2.5866 | 2.5630 | 2.5678 | 2.5587 | 2.5630 | 2.5629 | 2.5660
| Minimum | - | - | - | - | 0.0059 | 0.0023 | 0.0023 | 0.0059 | 0.0059 | 0.0023
| Maximum | - | - | - | - | 4.0275 | 4.0275 | 4.0275 | 4.0275 | 4.0275 | 4.0275
| MEUCD | - | - | 0.0001 | - | 0.0001 | 0.0001 | 0.0001 | 0.0001 | 0.0001 | 0.0001
| VRE | - | - | 6.7806 | - | 6.7593 | 6.7925 | 6.7806 | 6.7583 | 6.8326 | 6.7540
| MRE | - | - | 1.2770 | - | 1.2769 | 1.2761 | 1.2780 | 1.2767 | 1.2357 | 1.2763
FTSE100 | Mean | 1.1076 | 1.0628 | - | 0.9706 | 0.8451 | 0.8481 | 0.8451 | 0.8453 | 0.8406 | 0.8445
| Median | 1.0841 | - | - | 1.0841 | 1.0841 | 1.0841 | 1.0841 | 1.0841 | 1.0841 | 1.0841
| Minimum | - | - | - | - | 0.0016 | 0.0047 | 0.0006 | 0.0045 | 0.0016 | 0.0047
| Maximum | - | - | - | - | 2.0576 | 2.0638 | 2.0605 | 2.0669 | 2.0670 | 2.0775
| MEUCD | - | - | 0.0000 | - | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000
| VRE | - | - | 2.4701 | - | 2.4350 | 2.4397 | 2.4350 | 2.4349 | 2.4149 | 2.4342
| MRE | - | - | 0.3247 | - | 0.3245 | 0.3255 | 0.3186 | 0.3252 | 0.3207 | 0.3254
S&P100 | Mean | 1.9328 | 1.6890 | - | 1.6386 | 1.2937 | 1.2930 | 1.2930 | 1.2649 | 1.3464 | 1.2918
| Median | 1.1823 | - | - | 1.1692 | 1.1420 | 1.1369 | 1.1323 | 1.1323 | 1.1515 | 1.1452
| Minimum | - | - | - | - | 0.0009 | 0.0000 | 0.0000 | 0.0000 | 0.0009 | 0.0000
| Maximum | - | - | - | - | 5.4551 | 5.4422 | 5.4642 | 5.4551 | 5.4520 | 5.4422
| MEUCD | - | - | 0.0001 | - | 0.0001 | 0.0001 | 0.0001 | 0.0001 | 0.0001 | 0.0001
| VRE | - | - | 2.6281 | - | 2.5211 | 2.5260 | 2.5255 | 2.5105 | 2.5364 | 2.5269
| MRE | - | - | 0.7846 | - | 0.9063 | 0.8885 | 0.7044 | 0.9072 | 0.8858 | 0.9117
Nikkei | Mean | 0.6066 | 0.6870 | - | 0.5972 | 0.5782 | 0.5781 | 0.5781 | 0.5904 | 0.5665 | 0.5793
| Median | 0.6093 | - | - | 0.5896 | 0.5857 | 0.5856 | 0.5854 | 0.5857 | 0.5858 | 0.5855
| Minimum | - | - | - | - | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000
| Maximum | - | - | - | - | 1.1606 | 1.1606 | 1.1607 | 1.1606 | 1.1606 | 1.1606
| MEUCD | - | - | 0.0000 | - | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000
| VRE | - | - | 0.9583 | - | 0.8359 | 0.8396 | 0.8191 | 0.8561 | 0.8314 | 0.8353
| MRE | - | - | 1.7090 | - | 0.4184 | 0.4147 | 0.4233 | 0.4217 | 0.4042 | 0.4229
Table 2: Pairwise comparison of TN-GEO against each of the SOTA optimizers.
The asymptotic significance is part of the Wilcoxon signed-rank test results.
The null hypothesis that the performance of the two algorithms is the same is
tested at the $95\%$ confidence level (significance level: $\alpha=.05$).
Results show that TN-GEO is on par with all the SOTA algorithms, and in two
cases, GTS and PBILD, it significantly outperforms them. We also report the
counts of TN-GEO wins, losses, and ties compared to each of the other
algorithms.
TN-GEO vs: | GTS | IPSO | IPSO-SA | PBILD | GRASP | ABCFEIT | HAAG | VNSQP | RCABC
---|---|---|---|---|---|---|---|---|---
Wins (+) | 6 | 4 | 6 | 9 | 12 | 10 | 11 | 11 | 8
Losses (-) | 2 | 1 | 4 | 0 | 12 | 9 | 11 | 12 | 16
Ties | 2 | 0 | 5 | 1 | 11 | 16 | 13 | 12 | 11
(Wins+Ties)/Total | 80% | 80% | 67% | 100% | 66% | 74% | 69% | 66% | 54%
Asymptotic significance ($p$) | .036 | .080 | .308 | .008 | .247 | .888 | .363 | .594 | .110
Decision | Reject | Retain | Retain | Reject | Retain | Retain | Retain | Retain | Retain
## IV Outlook
Compared to other quantum optimization strategies, an important feature of TN-
GEO is its algorithmic flexibility. As shown here, unlike other proposals, our
GEO framework can be applied to arbitrary cost functions, which opens the
possibility of new applications that cannot be easily addressed by an explicit
mapping to a polynomial unconstrained binary optimization (PUBO) problem. Our
approach is also flexible with respect to the source of the seed samples, as
they can come from any solver, possibly more efficient or even application-
specific optimizers. The demonstrated generalization capabilities of the
generative model that forms its core help TN-GEO build on the progress of
previous experiments with other state-of-the-art solvers, and provide new
candidates that the classical optimizer may not be able to reach on its own.
We are optimistic that this flexible approach will open up the broad
applicability of quantum and quantum-inspired generative models to real-world
combinatorial optimization problems at the industrial scale.
Although we have limited the scope of this work to tensor network-based
generative quantum models, it would be a natural extension to consider other
generative quantum models as well. For example, hybrid classical quantum
models such as quantum circuit associative adversarial networks (QC-AAN)
Rudolph _et al._ (2020) can be readily explored to harness the power of
generative quantum models with so-called noisy intermediate-scale quantum
(NISQ) devices Preskill (2018). In particular, the QC-AAN framework opens up
the possibility of working with a larger number of variables and going beyond
discrete values (e.g., variables with continuous values). Both quantum-
inspired and hybrid quantum-classical algorithms can be tested in this GEO
framework on even larger instances of this NP-hard version of the portfolio
optimization problem, or on any other combinatorial optimization problem. As
the number of qubits in NISQ devices increases, it would be interesting to
explore generative models that can utilize more quantum resources, such as
Quantum Circuit Born Machines (QCBM) Benedetti _et al._ (2018): a general
framework to model arbitrary probability distributions and perform generative
modeling tasks with gate-based quantum computers.
Increasing the expressive power of the quantum-inspired core from MPS to other
more complex but still efficient QI approaches, such as tree tensor networks
Cheng _et al._ (2019), is another interesting research direction. Although we
have fully demonstrated the relevance and scalability of our algorithm for
industrial applications by increasing the performance of classical solvers on
industrial-scale instances (all 500 assets in the S&P 500 market index), there
is a need to explore the performance improvements that could be achieved by
more complex TN representations or on other combinatorial problems.
Although the goal of GEO was to show good behavior as a general black-box
algorithm without considering the specifics of the study application, it is a
worthwhile avenue to exploit the specifics of the problem formulation to
improve its performance and runtime. In particular, for the portfolio
optimization problem with a cardinality constraint, it is useful to
incorporate this constraint as a natural MPS symmetry, thereby reducing the
effective search space of feasible solutions from the $2^{N}$ bitstrings of
the full universe to the $\binom{N}{\kappa}$ selections allowed by the
cardinality constraint.
Finally, our thorough comparison with SOTA algorithms, which have been fine-
tuned for decades on this specific application, shows that our TN-GEO strategy
manages to outperform a couple of these and is on par with the other seven
optimizers. This is a remarkable feat for this new approach and hints at the
possibility of finding commercial value in these quantum-inspired strategies
in large-scale real-world problems, such as the instances considered in this
work. It also calls for more fundamental insights towards understanding when
and where it would be beneficial to use this TN-GEO framework, which relies
heavily on its quantum-inspired generative ML model. For example,
understanding the intrinsic bias in these models, responsible for their
remarkable performance, is another important milestone on the road to
practical quantum advantage with quantum devices in the near future. The
latter can be asserted given the tight connection of these quantum-inspired TN
models to fully quantum models deployed on quantum hardware. The question of
when to use quantum-inspired versus fully quantum models is a challenging one
that we are exploring in ongoing work.
###### Acknowledgements.
The authors would like to acknowledge Manuel S. Rudolph, Marta Mauri, Matthew
J.S. Beach, Yudong Cao, Luis Serrano, Jhonathan Romero-Fontalvo, and Brian
Dellabetta for their feedback on an early version of this manuscript.
## References
* Kadowaki and Nishimori (1998) Tadashi Kadowaki and Hidetoshi Nishimori, “Quantum annealing in the transverse ising model,” Phys. Rev. E. 58, 5355 (1998).
* Farhi _et al._ (2001) Edward Farhi, Jeffrey Goldstone, Sam Gutmann, Joshua Lapan, Andrew Lundgren, and Daniel Preda, “A quantum adiabatic evolution algorithm applied to random instances of an NP-Complete problem,” Science 292, 472–475 (2001).
* Farhi _et al._ (2014) Edward Farhi, Jeffrey Goldstone, and Sam Gutmann, “A quantum approximate optimization algorithm,” arXiv:1411.4028 (2014).
* Hadfield _et al._ (2019) Stuart Hadfield, Zhihui Wang, Bryan O’Gorman, Eleanor G Rieffel, Davide Venturelli, and Rupak Biswas, “From the quantum approximate optimization algorithm to a quantum alternating operator ansatz,” Algorithms 12, 34 (2019).
* Mugel _et al._ (2020) Samuel Mugel, Carlos Kuchkovsky, Escolastico Sanchez, Samuel Fernandez-Lorenzo, Jorge Luis-Hita, Enrique Lizaso, and Roman Orus, “Dynamic portfolio optimization with real datasets using quantum processors and quantum-inspired tensor networks,” (2020), arXiv:2007.00017 [quant-ph] .
* Perdomo-Ortiz _et al._ (2012) A. Perdomo-Ortiz, N. Dickson, M. Drew-Brook, G. Rose, and A. Aspuru-Guzik, “Finding low-energy conformations of lattice protein models by quantum annealing,” Sci. Rep. 2, 571 (2012).
* Perdomo-Ortiz _et al._ (2019) Alejandro Perdomo-Ortiz, Alexander Feldman, Asier Ozaeta, Sergei V. Isakov, Zheng Zhu, Bryan O’Gorman, Helmut G. Katzgraber, Alexander Diedrich, Hartmut Neven, Johan de Kleer, Brad Lackey, and Rupak Biswas, “Readiness of quantum optimization machines for industrial applications,” Phys. Rev. Applied 12, 014004 (2019).
* Bengio _et al._ (2021) Emmanuel Bengio, Moksh Jain, Maksym Korablyov, Doina Precup, and Yoshua Bengio, “Flow network based generative models for non-iterative diverse candidate generation,” (2021).
* Hibat-Allah _et al._ (2021) Mohamed Hibat-Allah, Estelle M. Inack, Roeland Wiersema, Roger G. Melko, and Juan Carrasquilla, “Variational neural annealing,” Nature Machine Intelligence 3, 952–961 (2021).
* Cheng _et al._ (2018) Song Cheng, Jing Chen, and Lei Wang, “Information perspective to probabilistic modeling: Boltzmann machines versus born machines,” Entropy 20, 583 (2018).
* Goodfellow _et al._ (2014) Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio, “Generative adversarial networks,” (2014), arXiv:1406.2661 [stat.ML] .
* Cheng _et al._ (2017) Song Cheng, Jing Chen, and Lei Wang, “Information perspective to probabilistic modeling: Boltzmann machines versus Born machines,” Entropy 20 (2017).
* Benedetti _et al._ (2018) Marcello Benedetti, Delfina Garcia-Pintos, Yunseong Nam, and Alejandro Perdomo-Ortiz, “A generative modeling approach for benchmarking and training shallow quantum circuits,” npj Quantum Information 5 (2018), 10.1038/s41534-019-0157-8.
* Rudolph _et al._ (2020) Manuel S. Rudolph, Ntwali Toussaint Bashige, Amara Katabarwa, Sonika Johri, Borja Peropadre, and Alejandro Perdomo-Ortiz, “Generation of high resolution handwritten digits with an ion-trap quantum computer,” (2020), arXiv:2012.03924 [quant-ph] .
* Han _et al._ (2018) Zhao-Yu Han, Jun Wang, Heng Fan, Lei Wang, and Pan Zhang, “Unsupervised generative modeling using matrix product states,” Phys. Rev. X 8, 031012 (2018).
* Stoudenmire and Schwab (2016) Edwin Stoudenmire and David J Schwab, “Supervised learning with tensor networks,” in _Advances in Neural Information Processing Systems 29_ , edited by D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett (Curran Associates, Inc., 2016) pp. 4799–4807.
* Efthymiou _et al._ (2019) Stavros Efthymiou, Jack Hidary, and Stefan Leichenauer, “TensorNetwork for machine learning,” (2019), arXiv:1906.06329 [cs.LG] .
* Roberts _et al._ (2019) Chase Roberts, Ashley Milsted, Martin Ganahl, Adam Zalcman, Bruce Fontaine, Yijian Zou, Jack Hidary, Guifre Vidal, and Stefan Leichenauer, “TensorNetwork: A library for physics and machine learning,” (2019), arXiv:1905.01330 [physics.comp-ph] .
* Fishman _et al._ (2020) Matthew Fishman, Steven R. White, and E. Miles Stoudenmire, “The ITensor software library for tensor network calculations,” (2020), arXiv:2007.14822 [cs.MS] .
* Markowitz (1952) Harry Markowitz, “Portfolio selection,” The Journal of Finance 7, 77–91 (1952).
* Bradley _et al._ (2020) Tai-Danae Bradley, E M Stoudenmire, and John Terilla, “Modeling sequences with quantum states: a look under the hood,” Machine Learning: Science and Technology 1, 035008 (2020).
* Stokes and Terilla (2019) James Stokes and John Terilla, “Probabilistic modeling with matrix product states,” Entropy 21 (2019).
* Miller _et al._ (2020) Jacob Miller, Guillaume Rabusseau, and John Terilla, “Tensor networks for probabilistic sequence modeling,” (2020), arXiv:2003.01039 [cs.LG] .
* authors (2016) The GPyOpt authors, “Gpyopt: A bayesian optimization framework in python,” http://github.com/SheffieldML/GPyOpt (2016).
* Chang _et al._ (2000) T-J Chang, Nigel Meade, John E Beasley, and Yazid M Sharaiha, “Heuristics for cardinality constrained portfolio optimisation,” Computers & Operations Research 27, 1271–1302 (2000).
* Deng _et al._ (2012) Guang-Feng Deng, Woo-Tsong Lin, and Chih-Chung Lo, “Markowitz-based portfolio selection with cardinality constraints using improved particle swarm optimization,” Expert Systems with Applications 39, 4558–4566 (2012).
* Mozafari _et al._ (2011) M Mozafari, F Jolai, and S Tafazzoli, “A new ipso-sa approach for cardinality constrained portfolio optimization,” International Journal of Industrial Engineering Computations 2, 249–262 (2011).
* Lwin and Qu (2013) Khin Lwin and Rong Qu, “A hybrid algorithm for constrained portfolio selection problems,” Applied intelligence 39, 251–266 (2013).
* Baykasoğlu _et al._ (2015) Adil Baykasoğlu, Mualla Gonca Yunusoglu, and F Burcin Özsoydan, “A grasp based solution approach to solve cardinality constrained portfolio optimization problems,” Computers & Industrial Engineering 90, 339–351 (2015).
* Kalayci _et al._ (2017) Can B Kalayci, Okkes Ertenlice, Hasan Akyer, and Hakan Aygoren, “An artificial bee colony algorithm with feasibility enforcement and infeasibility toleration procedures for cardinality constrained portfolio optimization,” Expert Systems with Applications 85, 61–75 (2017).
* Kalayci _et al._ (2020) Can B Kalayci, Olcay Polat, and Mehmet A Akbay, “An efficient hybrid metaheuristic algorithm for cardinality constrained portfolio optimization,” Swarm and Evolutionary Computation 54, 100662 (2020).
* Akbay _et al._ (2020) Mehmet Anil Akbay, Can B Kalayci, and Olcay Polat, “A parallel variable neighborhood search algorithm with quadratic programming for cardinality constrained portfolio optimization,” Knowledge-Based Systems 198, 105944 (2020).
* Cura (2021) Tunchan Cura, “A rapidly converging artificial bee colony algorithm for portfolio optimization,” Knowledge-Based Systems 233, 107505 (2021).
* Beasley (1990) John E Beasley, “OR-Library: distributing test problems by electronic mail,” Journal of the Operational Research Society 41, 1069–1072 (1990).
* Wilcoxon (1992) Frank Wilcoxon, “Individual comparisons by ranking methods,” in _Breakthroughs in statistics_ (Springer, 1992) pp. 196–202.
* Demšar (2006) Janez Demšar, “Statistical comparisons of classifiers over multiple data sets,” Journal of Machine Learning Research 7, 1–30 (2006).
* Preskill (2018) John Preskill, “Quantum computing in the NISQ era and beyond,” Quantum 2, 79 (2018).
* Cheng _et al._ (2019) Song Cheng, Lei Wang, Tao Xiang, and Pan Zhang, “Tree tensor networks for generative modeling,” Phys. Rev. B 99, 155131 (2019).
* Martin Andersen and Vandenberghe (2020) Martin Andersen, Joachim Dahl, and Lieven Vandenberghe, “Python software for convex optimization,” http://cvxopt.org (2020).
* Cura (2009) Tunchan Cura, “Particle swarm optimization approach to portfolio optimization,” Nonlinear analysis: Real world applications 10, 2396–2406 (2009).
* Cirac _et al._ (2020) Ignacio Cirac, David Perez-Garcia, Norbert Schuch, and Frank Verstraete, “Matrix product states and projected entangled pair states: Concepts, symmetries, and theorems,” (2020), arXiv:2011.12127 [quant-ph] .
* cod (2018) “Code for unsupervised generative modeling using matrix product states,” https://github.com/congzlwag/UnsupGenModbyMPS (2018).
* Perry and Wagner (2019) Matthew T. Perry and Richard J. Wagner, “Python module for simulated annealing,” https://github.com/perrygeo/simanneal (2019).
* Alcazar _et al._ (2020) Javier Alcazar, Vicente Leyton-Ortega, and Alejandro Perdomo-Ortiz, “Classical versus quantum models in machine learning: insights from a finance application,” Machine Learning: Science and Technology 1, 035003 (2020).
## Appendix A Methods
### A.1 Generation of portfolio optimization instances
The portfolio optimization problem aims at determining the fractions $w_{i}$
of a given capital to be invested in each asset $i$ of a universe of $N$
assets, such that the risk $\sigma(\boldsymbol{w})$ for a given level $\rho$
of the expected return $\langle r(\boldsymbol{w})\rangle$ is minimized,
constrained to $\sum_{i}^{N}w_{i}=1$. The problem can be formulated as:
$\min_{\boldsymbol{w}}\left\{\sigma^{2}(\boldsymbol{w})=\boldsymbol{w}^{T}\cdot\boldsymbol{\Sigma}\cdot\boldsymbol{w}\;:\;\langle r(\boldsymbol{w})\rangle=\boldsymbol{w}\cdot\boldsymbol{r}=\rho\right\}$ (2)
where the vectors $\boldsymbol{w}$ and $\boldsymbol{r}$ have dimensionality
$N$, $\boldsymbol{\Sigma}$ is the sample covariance matrix obtained from the
return time series of pairs of assets $i$ and $j$, and $\boldsymbol{r}$ is the
vector of average returns of the time series for each asset, with each daily
return, $r^{t}$, calculated as the relative increment in asset price from its
previous day (i.e., $r^{t}=(p^{t}-p^{(t-1)})/p^{(t-1)}$, with $p^{t}$ the
price for a particular asset at time $t$). The solution to Eq. 2 for a given
return level $\rho$ corresponds to the optimal portfolio strategy
$\boldsymbol{w}^{*}$, and the minimal value of the objective function
corresponds to the portfolio risk, denoted by $\sigma^{*}_{\rho}$.
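As a concrete illustration of these definitions, the following minimal sketch (assuming a hypothetical price array) builds the daily returns, the average-return vector $\boldsymbol{r}$, and the sample covariance $\boldsymbol{\Sigma}$ entering Eq. 2:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical price history: T = 250 days, N = 5 assets.
prices = rng.lognormal(mean=0.0, sigma=0.01, size=(250, 5)).cumprod(axis=0)

returns = np.diff(prices, axis=0) / prices[:-1]  # r^t = (p^t - p^{t-1})/p^{t-1}
r = returns.mean(axis=0)                         # average-return vector
Sigma = np.cov(returns, rowvar=False)            # sample covariance matrix

w = np.full(5, 1 / 5)                            # equal-weight example portfolio
risk = w @ Sigma @ w                             # sigma^2(w) in Eq. 2
expected_return = w @ r                          # <r(w)> in Eq. 2
```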
Note that the optimization task in Eq. 2 has the potential outcome of
investing small amounts in a large number of assets as an attempt to reduce
the overall risk by "over-diversifying" the portfolio. This type of investment
strategy can be challenging to implement in practice: portfolios composed of a
large number of assets are difficult to manage and may incur high transaction
costs. Therefore, several restrictions are usually imposed on the allocation
of capital among assets, as a consequence of market rules and conditions for
investment or to reflect investor profiles and preferences. For instance,
constraints can be included to control the amount of desired diversification,
i.e., bound limits $\{l_{i},u_{i}\}$ on the proportion of capital invested in
individual assets or groups of assets, leading to the constraint
$l_{i}<w_{i}<u_{i}$.
Additionally, a more realistic and common scenario is to include in the
optimization task a cardinality constraint, which directly limits the number
of assets to be transacted to a pre-specified number $\kappa<N$; the number of
different asset selections to be considered is then $M=\binom{N}{\kappa}$. In
this scenario, the problem can be formulated as a Mixed-Integer Quadratic
Program (MIQP) with the addition of binary variables $x_{i}\in\{0,1\}$ per
asset, for $i=1,\dots,N$, which are set to "1" when the $i$-th asset is
included as part of the $\kappa$ assets, or "0" if it is left out of this
selected set. Therefore, valid portfolios have a number $\kappa$ of 1's, as
specified in the cardinality constraint. For example, for $N=4$ and
$\kappa=2$, the six different valid configurations can be encoded as
$\{0011,0101,0110,1001,1010,1100\}$; the sketch below enumerates them
programmatically.
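The snippet below reproduces this enumeration for arbitrary $N$ and $\kappa$; it is a minimal illustration, not part of the solvers benchmarked here.

```python
from itertools import combinations

def valid_portfolios(N, kappa):
    """All C(N, kappa) bitstrings with exactly kappa ones."""
    out = []
    for ones in combinations(range(N), kappa):
        bits = ["0"] * N
        for i in ones:
            bits[i] = "1"
        out.append("".join(bits))
    return out

print(valid_portfolios(4, 2))
# ['1100', '1010', '1001', '0110', '0101', '0011']  (same set as in the text)
```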
The optimization task can then be described as follows:
$\min_{\boldsymbol{w},\boldsymbol{x}}\left\{\sigma^{2}(\boldsymbol{w})\;:\;\langle r(\boldsymbol{w})\rangle=\rho,\;\; l_{i}x_{i}<w_{i}<u_{i}x_{i}\quad i=1,\dots,N,\;\;\boldsymbol{1}\cdot\boldsymbol{x}=\kappa\right\}.$ (3)
In this reformulated problem, we denote by $\sigma_{\rho,\kappa}^{*}$ the
minimum portfolio risk obtained from Eq. 3 for a given return level $\rho$ and
cardinality $\kappa$. The optimal solution vectors $\boldsymbol{w}^{*}$ and
$\boldsymbol{x}^{*}$ define the portfolio investment strategy. Adding the
cardinality constraint and the investment bound limits transforms a simple
convex optimization problem (Eq. 2) into a much harder non-convex NP-hard
problem. For all the problem instances generated in this work we chose
$\kappa=N/2$, and the combinatorial nature of the problem lies in the growth
of the search space associated with the binary vector $\boldsymbol{x}$, which
makes it intractable to explore exhaustively for a number of assets in the few
hundreds. The size of the search space here is $M=\binom{N}{N/2}$.
It is important to note that given a selection of which assets belong to the
portfolio by instantiating $\boldsymbol{x}$ (say, with a specific
$\boldsymbol{x}^{(i)}$), solving the optimization problem in Eq. 3 to find the
respective investment fractions $\boldsymbol{w}^{(i)}$ and risk value
$\sigma_{\rho,N/2}^{(i)}$ can be achieved efficiently with conventional
quadratic programming (QP) solvers. In this work we used the python module
cvxopt Martin Andersen and Vandenberghe (2020) for solving this problem. Note
that we exploit this fact to break this constrained portfolio optimization
problem into a combinatorially intractable one (finding the best asset
selection $\boldsymbol{x}$), which we aim to solve with GEO, and a tractable
subroutine which can be solved efficiently with available solvers.
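A minimal sketch of this inner QP step with cvxopt is given below; the function name and bound handling are illustrative (strict bounds are approximated by non-strict ones), not the verbatim implementation used in this work.

```python
import numpy as np
from cvxopt import matrix, solvers  # Martin Andersen and Vandenberghe (2020)

def portfolio_qp(Sigma, r, x, rho, l=0.0, u=1.0):
    """Risk-minimizing weights w for the assets selected by binary vector x.
    Strict bounds l < w_i < u are approximated by non-strict inequalities."""
    idx = np.flatnonzero(x)                    # indices of the chosen assets
    k = len(idx)
    P = matrix(2.0 * Sigma[np.ix_(idx, idx)])  # cvxopt minimizes (1/2) w'Pw + q'w
    q = matrix(np.zeros(k))
    # Equalities: expected return equals rho, weights sum to one.
    A = matrix(np.vstack([r[idx], np.ones(k)]))
    b = matrix([float(rho), 1.0])
    # Bounds expressed as G w <= h.
    I = np.eye(k)
    G = matrix(np.vstack([-I, I]))
    h = matrix(np.hstack([-l * np.ones(k), u * np.ones(k)]))
    sol = solvers.qp(P, q, G, h, A, b)
    w = np.zeros(len(x), dtype=float)
    w[idx] = np.array(sol["x"]).ravel()
    return w, float(w @ Sigma @ w)             # weights and risk sigma^2(w)
```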
The set of pairs $(\sigma_{\rho,\kappa}^{*},\rho)$, dubbed the efficient
frontier, is neither convex nor continuous, in contrast with the solution to
the problem in Eq. 2.
### A.2 Problem formulation for comparison with state-of-the-art algorithms
To carry out the comparison with state-of-the-art algorithms, in line with the
formulation used in those works, we generalize the problem in Eq. 3 by
relaxing the constraint of a fixed level of portfolio return and instead
incorporating the portfolio return directly into the objective function, which
now comprises two terms: the first corresponding to the portfolio risk as
before, and the second corresponding to the portfolio return. The goal is to
balance both terms such that return is maximized and risk minimized. The
hyperparameter $\lambda$, named the risk aversion, controls whether an
investor gives more weight to risk or to return. The new formulation reads as
follows:
$\min_{\boldsymbol{w},\boldsymbol{x}}\left\{\lambda\sigma^{2}(\boldsymbol{w})-(1-\lambda)\langle r(\boldsymbol{w})\rangle\;:\;l_{i}x_{i}<w_{i}<u_{i}x_{i}\quad i=1,\dots,N,\;\;\boldsymbol{1}\cdot\boldsymbol{x}=\kappa\right\}.$ (4)
The rest of the constraints and variable definitions are as in Appendix A.1.
#### A.2.1 Performance Metrics
To compare the performance of the proposed GEO framework with the SOTA
metaheuristic algorithms in the literature, we use the performance metrics
most commonly applied to the cardinality-constrained portfolio optimization
problem. These metrics compute the distance between the heuristic efficient
frontier and the unconstrained efficient frontier, which allows the
performance of the algorithms to be evaluated.
Four of these performance metrics (the Mean, Median, Minimum, and Maximum in
Table 1) are based on the so-called Performance Deviation Errors ($PDE$),
formulated by Chang _et al._ (2000) as follows:
$PDE_{i}=\min\left(\left|\frac{100\left(x_{i}-x^{*}_{i}\right)}{x^{*}_{i}}\right|,\left|\frac{100\left(y_{i}-y^{*}_{i}\right)}{y^{*}_{i}}\right|\right)$ (5)
$\begin{split}x^{*}_{i}&=X_{k_{y}}+\frac{\left(X_{j_{y}}-X_{k_{y}}\right)\left(y_{i}-Y_{k_{y}}\right)}{\left(Y_{j_{y}}-Y_{k_{y}}\right)}\\ y^{*}_{i}&=Y_{k_{x}}+\frac{\left(Y_{j_{x}}-Y_{k_{x}}\right)\left(x_{i}-X_{k_{x}}\right)}{\left(X_{j_{x}}-X_{k_{x}}\right)}\\ j_{y}&=\operatorname*{arg\,min}_{l=1,\dots,\varepsilon^{*}\,\wedge\,Y_{l}\geq y_{i}}Y_{l}\qquad k_{y}=\operatorname*{arg\,max}_{l=1,\dots,\varepsilon^{*}\,\wedge\,Y_{l}\leq y_{i}}Y_{l}\\ j_{x}&=\operatorname*{arg\,min}_{l=1,\dots,\varepsilon^{*}\,\wedge\,X_{l}\geq x_{i}}X_{l}\qquad k_{x}=\operatorname*{arg\,max}_{l=1,\dots,\varepsilon^{*}\,\wedge\,X_{l}\leq x_{i}}X_{l}\end{split}$ (6)
where the pairs $(X_{l},Y_{l})$, $l=1,\dots,\varepsilon^{*}$, represent the
points on the standard efficient frontier and the pairs $(x_{i},y_{i})$,
$i=1,\dots,\varepsilon$, represent the points on the heuristic efficient
frontier. Here, $\varepsilon^{*}$ denotes the number of points on the standard
efficient frontier, while $\varepsilon$ denotes the number of points on the
heuristic efficient frontier. The mean, median, minimum, and maximum of the
$PDE$ can be used to compare the performance of the algorithms.
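A sketch of these $PDE$ statistics follows, assuming linear interpolation along frontiers sorted in ascending order (the interpolation scheme is our assumption; Eq. 6 specifies the bracketing points it reproduces):

```python
import numpy as np

def pde_metrics(X, Y, x, y):
    """Mean/Median/Minimum/Maximum PDE (Eqs. 5-6). (X, Y): standard frontier;
    (x, y): heuristic frontier; both assumed sorted in ascending order so
    that piecewise-linear interpolation reproduces the bracketing of Eq. 6."""
    x_star = np.interp(y, Y, X)   # frontier X-coordinate at each heuristic y_i
    y_star = np.interp(x, X, Y)   # frontier Y-coordinate at each heuristic x_i
    pde = np.minimum(np.abs(100 * (x - x_star) / x_star),
                     np.abs(100 * (y - y_star) / y_star))
    return {"Mean": pde.mean(), "Median": np.median(pde),
            "Minimum": pde.min(), "Maximum": pde.max()}
```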
Later, three additional performance measures (MEUCD: Mean Euclidean Distance,
VRE: Variance of Return Error, MRE: Mean Return Error) were formulated by
Cura (2009) as follows:
$MEUCD=\frac{\sum^{\varepsilon}_{i=1}\sqrt{\left(X^{*}_{i}-x_{i}\right)^{2}+\left(Y^{*}_{i}-y_{i}\right)^{2}}}{\varepsilon}$ (7)
$VRE=\frac{\sum^{\varepsilon}_{i=1}100\left|X^{*}_{i}-x_{i}\right|/x_{i}}{\varepsilon}$ (8)
$MRE=\frac{\sum^{\varepsilon}_{i=1}100\left|Y^{*}_{i}-y_{i}\right|/y_{i}}{\varepsilon}$ (9)
where $(X^{*}_{i},Y^{*}_{i})$ is the standard-frontier point closest to the
heuristic point $(x_{i},y_{i})$. Figure 5 shows a graphical representation of
the indices used to calculate the performance metrics for the convenience of
the reader, and the values for TN-GEO and all the other SOTA optimizers are
reported in Table 1.
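Similarly, a sketch of the three Cura metrics, pairing each heuristic point with its nearest standard-frontier point:

```python
import numpy as np

def cura_metrics(X, Y, x, y):
    """MEUCD, VRE, MRE (Eqs. 7-9), pairing each heuristic point with the
    closest point on the standard efficient frontier."""
    std_pts = np.column_stack([X, Y])
    heur_pts = np.column_stack([x, y])
    d2 = ((heur_pts[:, None, :] - std_pts[None, :, :]) ** 2).sum(axis=2)
    nearest = d2.argmin(axis=1)
    Xs, Ys = X[nearest], Y[nearest]
    meucd = np.sqrt((Xs - x) ** 2 + (Ys - y) ** 2).mean()
    vre = (100 * np.abs(Xs - x) / x).mean()
    mre = (100 * np.abs(Ys - y) / y).mean()
    return meucd, vre, mre
```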
Figure 5: A graphical demonstration of the indices used for performance metric
calculation.
### A.3 Quantum-Inspired Generative Model in TN-GEO
The addition of a probabilistic component is inspired by the success of
Bayesian Optimization (BO) techniques, which are among the most efficient
solvers when the performance metric aims to find the lowest minimum possible
within the least number of objective-function evaluations. For example, within
the family of BO solvers, GPyOpt authors (2016) uses a Gaussian Process (GP)
framework consisting of multivariate Gaussian distributions. This
probabilistic framework aims to capture relationships among the previously
observed data points (e.g., through tailored kernels), and it guides the
decision of where to sample the next evaluation with the help of the so-called
acquisition function. GPyOpt is one of the solvers we use to benchmark the new
quantum-enhanced strategies proposed here.
Although the GP framework in BO techniques is not a generative model, we
explore here the powerful unsupervised machine learning framework of
generative modeling in order to capture correlations from an initial set of
observations and evaluations of the objective function (steps 1-4 in Fig. 1).
For the implementation of the quantum-inspired generative model at the core of
TN-GEO we follow the procedure proposed and implemented in Ref. Han _et al._
(2018). Inspired by the probabilistic interpretation of quantum physics via
Born's rule, it was proposed that one can use the Born probabilities
$|\Psi(\boldsymbol{x})|^{2}$ over the $2^{N}$ states of an $N$-qubit system to
represent classical target probability distributions which would otherwise be
obtained with generative machine learning models. Hence,
$P(\boldsymbol{x})=\frac{|\Psi(\boldsymbol{x})|^{2}}{Z}\text{, with
}Z=\sum\limits_{\boldsymbol{x}\in\cal{S}}|\Psi(\boldsymbol{x})|^{2},$ (10)
with $\Psi(\boldsymbol{x})=\langle\boldsymbol{x}|\Psi\rangle$, and the
bitstrings $\boldsymbol{x}\in\{0,1\}^{\otimes N}$ are in one-to-one
correspondence with the decision variables over the investment universe with
$N$ assets in our combinatorial problem of interest here. In Ref. Han _et al._
(2018) these quantum-inspired generative models were named Born machines, but
we will refer to them hereafter as tensor-network Born machines (TNBM) to
differentiate them from the quantum circuit Born machines (QCBM) proposal
Benedetti _et al._ (2018), which was developed independently to achieve the
same purpose but by leveraging quantum wave functions from quantum circuits in
NISQ devices. As explained in the main text, either quantum generative model
can be adapted for the purpose of our GEO algorithm.
On the grounds of computational efficiency and scalability towards problem
instances with a large number of variables (on the order of hundreds or more),
following Ref. Han _et al._ (2018) we implemented the quantum-inspired
generative model based on Matrix Product States (MPS) to learn the target
distributions $|\Psi(\boldsymbol{x})|^{2}$.
MPS is a type of TN where the tensors are arranged in a one-dimensional
geometry. Despite its simple structure, the MPS can efficiently represent a
large number of quantum states of interest extremely well Cirac _et al._
(2020). Learning with the MPS is achieved by adjusting its parameters such
that the distribution obtained via Born's rule is as close as possible to the
data distribution. The MPS enjoys a direct sampling method that is more
efficient than those of other machine learning techniques, for instance
Boltzmann machines, which require a Markov chain Monte Carlo (MCMC) process
for data generation.
The key idea of the method to train the MPS, following the algorithm of Han
_et al._ (2018), consists of adjusting the values of the tensors composing the
MPS, as well as the bond dimensions among them, via minimization of the
negative log-likelihood function defined over the training dataset sampled
from the target distribution. For more details on the implementation see Ref.
Han _et al._ (2018), and for the respective code see Ref. cod (2018).
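To make the TNBM ingredients concrete, the following minimal sketch evaluates the Born probability of Eq. 10 for a bitstring under a given MPS, together with the negative log-likelihood loss; the tensor layout is an assumption, and the actual training (DMRG-style sweeps, adaptive bond dimension) follows Han _et al._ (2018) and Ref. cod (2018).

```python
import numpy as np

def amplitude(mps, bits):
    """Psi(x): contract the matrices selected by each bit of x.
    `mps` is a hypothetical list of tensors A_k with shape (D_left, 2, D_right)."""
    v = mps[0][:, bits[0], :]            # boundary tensor, shape (1, D)
    for A, b in zip(mps[1:], bits[1:]):
        v = v @ A[:, b, :]               # multiply in the matrix for bit b
    return v[0, 0]                       # boundary bond dimensions are 1

def born_probability(mps, bits, Z):
    """Eq. 10: P(x) = |Psi(x)|^2 / Z. For an MPS, Z can be computed exactly
    by contracting the network with its conjugate (not shown here)."""
    return abs(amplitude(mps, bits)) ** 2 / Z

def negative_log_likelihood(mps, data, Z):
    """Training loss minimized by the sweeping algorithm of Han et al. (2018)."""
    return -np.mean([np.log(born_probability(mps, x, Z)) for x in data])
```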
### A.4 Classical Optimizers
#### A.4.1 GPyOpt Solver
GPyOpt authors (2016) is a Python open-source library for Bayesian
optimization based on GPy, a Python framework for Gaussian process modelling.
For the comparison exercise with TN-GEO as a stand-alone solver, these are the
hyperparameters we used for the GPyOpt solver (a hypothetical configuration
sketch follows the list):
* •
Domain: to deal with the exponential growth in dimensionality, the variable
space for $n$ number of assets was partitioned as the cartesian product of $n$
1-dimensional spaces.
* •
Constraints: we added two inequalities in the number of assets in a portfolio
solution to represent the cardinality condition.
* •
Number of initial data points: 10
* •
Acquisition function: Expected Improvement
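A hypothetical configuration along these lines is sketched below; the constraint-expression strings and the objective `my_portfolio_risk` are illustrative assumptions rather than our exact setup.

```python
import GPyOpt  # authors (2016)

N, kappa = 30, 15

def risk(X):
    # Placeholder: evaluate the portfolio risk of each candidate row of X.
    return my_portfolio_risk(X)  # hypothetical user-supplied objective

# Cartesian product of N one-dimensional binary spaces.
domain = [{"name": f"x{i}", "type": "discrete", "domain": (0, 1)}
          for i in range(N)]
# Two inequalities (g(x) <= 0) enforcing sum_i x_i == kappa; the expression
# strings are schematic and assume numpy is available in GPyOpt's
# constraint-evaluation context.
constraints = [
    {"name": "card_up", "constraint": f"np.sum(x, axis=1) - {kappa}"},
    {"name": "card_lo", "constraint": f"{kappa} - np.sum(x, axis=1)"},
]

opt = GPyOpt.methods.BayesianOptimization(
    f=risk, domain=domain, constraints=constraints,
    initial_design_numdata=10, acquisition_type="EI")
opt.run_optimization(max_iter=100)
```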
#### A.4.2 Simulated Annealing Solver
For simulated annealing (SA) we implemented a modified version of Ref. Perry
and Wagner (2019). The main change consists of adapting the update rule such
that new candidates stay within the valid search space with fixed cardinality:
the conventional update rule of single bit flips changes the Hamming weight of
$\boldsymbol{x}$, which translates into a portfolio with a different
cardinality (a minimal sketch of the modified move follows the hyperparameter
list). The hyperparameters used are the following:
* •
Max temperature in thermalization: 1.0
* •
Min temperature in thermalization: 1e-4
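The sketch below shows the modified move on top of the simanneal Annealer; the swap move implements the change described above, while the risk function is a hypothetical placeholder.

```python
import random
from simanneal import Annealer  # Perry and Wagner (2019)

class CardinalityAnnealer(Annealer):
    """SA over asset selections with a cardinality-preserving move."""
    Tmax = 1.0    # max temperature in thermalization
    Tmin = 1e-4   # min temperature in thermalization

    def __init__(self, state, risk_fn):
        self.risk_fn = risk_fn      # hypothetical: maps a 0/1 list to sigma^2
        super().__init__(state)     # state: list of 0/1 with kappa ones

    def move(self):
        # Swap a randomly chosen '1' with a randomly chosen '0', so the
        # Hamming weight (portfolio cardinality) is conserved.
        ones = [i for i, b in enumerate(self.state) if b == 1]
        zeros = [i for i, b in enumerate(self.state) if b == 0]
        self.state[random.choice(ones)] = 0
        self.state[random.choice(zeros)] = 1

    def energy(self):
        return self.risk_fn(self.state)
```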
#### A.4.3 Conditioned Random Solver
This solver corresponds to the simplest and most naive approach that still
uses the cardinality information of the problem. In the conditioned random
solver we generate, by construction, bitstrings which satisfy the cardinality
constraint: given the desired cardinality $\kappa=N/2$ used here, one starts
from the all-zeros bitstring, $\boldsymbol{x}_{0}=0\cdots 0$, and flips $N/2$
bits at random from positions containing 0's, resulting in a valid portfolio
candidate $\boldsymbol{x}$ with cardinality $N/2$, as in the snippet below.
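A minimal sketch:

```python
import numpy as np

def conditioned_random_portfolio(N, kappa, rng):
    """Start from all zeros and flip kappa randomly chosen positions."""
    x = np.zeros(N, dtype=int)
    x[rng.choice(N, size=kappa, replace=False)] = 1
    return x

x = conditioned_random_portfolio(100, 50, np.random.default_rng())
```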
#### A.4.4 Random Solver
This solver corresponds to the simplest approach, which does not use even the
cardinality information of the problem. In the random solver, we generate
bitstrings selected uniformly at random from the $2^{N}$ bitstrings of all
possible portfolios, where $N$ is the number of assets in our investment
universe.
### A.5 Algorithm Methodology for TN-GEO as a booster
As explained in the main text, in this case it is assumed that the cost of
evaluating the objective function is not the major computational bottleneck,
and consequently there are no practical limitations on the number of
observations to be considered.
Following the algorithmic scheme in Fig. 1, we describe next the details for
each of the steps in our comparison benchmarks:
Step 0. Build the seed data set, $\{\boldsymbol{x}^{(i)}\}_{\rm{seed}}$ and
$\{\sigma^{(i)}_{\rho,N/2}\}_{\rm{seed}}$: For each problem instance defined
by $\rho$ and a random subset with $N$ assets from the S&P 500, gather all
initial available data obtained from previous optimization attempts with
classical solver(s). In our case, for each problem instance we collected
10,000 observations from the SA solver. These 10,000 observations,
corresponding to portfolio candidates $\{\boldsymbol{x}^{(i)}\}_{\rm{init}}$
and their respective risk evaluations
$\{\sigma^{(i)}_{\rho,N/2}\}_{\rm{init}}$, were sorted, and only the first
$n_{\rm{seed}}=1,000$ portfolio candidates with the lowest risks were selected
as the seed data set, labeled $\{\boldsymbol{x}^{(i)}\}_{\rm{seed}}$ and
$\{\sigma^{(i)}_{\rho,N/2}\}_{\rm{seed}}$ hereafter. The idea of selecting a
percentile of the original data is to provide the generative model inside GEO
with samples of the kind it should learn to generate. This percentile is a
hyperparameter, and we set it to 10% of the initial data for our purposes.
Step 1. Construct the softmax surrogate distribution: Using the seed data from
step 0, we construct a softmax multinomial distribution with $n_{\rm{seed}}$
classes, one for each point in the seed data set. The probability associated
with each of these classes in the multinomial is calculated as a Boltzmann
weight,
$p_{i}=\dfrac{e^{-\overline{\sigma}_{\rho,\kappa}^{(i)}}}{\sum_{j=1}^{n_{\rm{seed}}}e^{-\overline{\sigma}_{\rho,\kappa}^{(j)}}}$,
where
$\overline{\sigma}_{\rho,\kappa}^{(i)}=\sigma_{\rho,\kappa}(\boldsymbol{x}^{(i)})/T$
and $T$ is a "temperature" hyperparameter. In our simulations, $T$ was
computed as the standard deviation of the risk values of the seed data set. In
Bayesian optimization methods, the surrogate function tracks the landscape
associated with the values of the objective function (risk values here). The
softmax surrogate constructed here, by design a multinomial distribution over
the seed data observations, serves the purpose of representing the objective-
function landscape in probability space; that is, it assigns higher
probability to portfolio candidates with lower risk values. Since we use this
softmax surrogate to generate the training data set, this bias imprints a
preference in the quantum-inspired generative model to favor low-cost
configurations (see the loop sketch after this list).
Step 2. Sample from the softmax surrogate: We refer to these samples as the
training set, since they will be used to train the MPS-based generative model.
For our experiments we used $n_{\rm{train}}=10,000$ samples.
Step 3. Train the MPS generative model using the $n_{\rm{train}}$ samples from
the previous step.
Step 4. Obtain $n_{\rm{MPS}}$ samples from the trained generative model; these
constitute the new list of potential portfolio candidates. In our experiments,
$n_{\rm{MPS}}=4,000$. For the case of 500 assets, as sampling takes
considerably longer because of the problem dimension, this value was reduced
to 400 to match the time in SA.
Step 5. Select new candidates: From the $n_{\rm{MPS}}$ samples, select only
those that fulfill the cardinality condition and that have not been evaluated
before. These new portfolio candidates $\{\boldsymbol{x}^{(i)}\}_{\rm{new}}$
are saved for evaluation in the next step.
Step 6. Obtain risk values for the new selected samples: Solve Eq. 3 to
evaluate the objective function (portfolio risk) for each of the new
candidates $\{\boldsymbol{x}^{(i)}\}_{\rm{new}}$. We refer to the new cost-
function values as $\{\sigma^{(i)}_{\rho,N/2}\}_{\rm{new}}$.
Step 7. Merge the new portfolios, $\{\boldsymbol{x}^{(i)}\}_{\rm{new}}$, and
their respective cost-function evaluations,
$\{\sigma^{(i)}_{\rho,N/2}\}_{\rm{new}}$, with the seed portfolios,
$\{\boldsymbol{x}^{(i)}\}_{\rm{seed}}$, and their respective cost values,
$\{\sigma^{(i)}_{\rho,N/2}\}_{\rm{seed}}$, from step 0 above. This combined
superset is the new initial data set.
Step 8. Use the new initial data set from step 7 to restart the algorithm from
step 1. If a desired minimum has already been found, or if no more
computational resources are available, one can terminate the algorithm here.
In all of the benchmark results reported here using TN-GEO as a booster from
SA intermediate results, we run the algorithm for only this first cycle, and
the minimum reported for the TN-GEO strategy is the lowest minimum obtained up
to step 7 above.
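The following compact sketch strings steps 0-7 together for one booster cycle; `train_mps` and `sample_mps` are hypothetical stand-ins for the Han _et al._ (2018) MPS trainer and sampler, and `risk` is the QP subroutine of Eq. 3.

```python
import numpy as np

def tn_geo_booster_cycle(X_init, risks_init, risk, kappa,
                         n_seed=1000, n_train=10000, n_mps=4000):
    # Step 0: keep the lowest-risk fraction of prior observations as the seed.
    order = np.argsort(risks_init)[:n_seed]
    X_seed, r_seed = X_init[order], risks_init[order]
    # Step 1: softmax surrogate with Boltzmann weights; T = std of seed risks.
    T = r_seed.std()
    w = np.exp(-r_seed / T)
    p = w / w.sum()
    # Step 2: draw the training set from the surrogate.
    idx = np.random.choice(len(X_seed), size=n_train, p=p)
    # Steps 3-4: train the generative model and sample new candidates.
    mps = train_mps(X_seed[idx])                 # hypothetical trainer
    candidates = sample_mps(mps, n_mps)          # hypothetical sampler
    # Step 5: keep unseen candidates with the right cardinality.
    seen = {tuple(x) for x in X_seed}
    new = [x for x in candidates
           if x.sum() == kappa and tuple(x) not in seen]
    # Steps 6-7: evaluate the new candidates and merge with the seed set.
    if new:
        r_new = np.array([risk(x) for x in new])
        X_seed = np.vstack([X_seed, new])
        r_seed = np.concatenate([r_seed, r_new])
    return X_seed, r_seed                        # lowest entry = reported minimum
```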
### A.6 Algorithm Methodology for TN-GEO as a stand-alone solver
This section presents the algorithm for the TN-GEO scheme as a stand-alone
solver. In optimization problems where the objective function is inexpensive
to evaluate, we can easily probe it at many points in the search for a
minimum. However, if the cost-function evaluation is expensive, e.g., when
tuning hyperparameters of a deep neural network, then it is important to
minimize the number of evaluations drawn. This is the domain where
optimization techniques with a Bayesian flavor, in which the search is
conducted based on newly gathered information, are most useful in the attempt
to find the global optimum in a minimum number of steps.
The algorithmic steps for TN-GEO as a stand-alone solver follow the same logic
as those of the booster described in Sec. A.5. The main differences between
the two algorithms lie in the construction of the initial and seed data sets
in step 0, the temperature used in the softmax surrogate in step 1, and a more
stringent selection criterion in step 5. Since the other steps remain the
same, we focus here on the main changes to the algorithmic details provided in
Sec. A.5.
Step 0. Build the seed data set: Since evaluating the objective function could
be the major bottleneck (assumed to be expensive), we cannot rely on cost-
function evaluations to generate the seed data set. The strategy we adopted is
to initialize the algorithm with samples of bitstrings which satisfy the hard
constraints of the problem. In our specific example, we can easily generate
$n_{\rm{seed}}$ random samples,
$\mathcal{D}_{0}=\{\boldsymbol{x}^{(i)}\}_{\rm{seed}}$, which satisfy the
cardinality constraint. Since all the elements in this data set hold the
cardinality condition, the maximum length $n_{\rm{seed}}$ of $\mathcal{D}_{0}$
is $\binom{N}{\kappa}$. In our experiments, we set the number of samples to
$n_{\rm{seed}}=2,000$ for all problems considered here, up to $N=100$ assets.
Step 1. Construct the softmax surrogate distribution: Start by constructing a
uniform multinomial probability distribution in which each sample in
$\mathcal{D}_{0}$ has the same probability; that is, each point in the seed
data set has probability $p_{0}=1/n_{\rm{seed}}$. As in TN-GEO as a booster,
we will attempt to generate a softmax-like surrogate which favors samples with
low cost value, but we build up that information slowly as new samples are
evaluated. In this first iteration of the algorithm, we start by randomly
selecting a point $\boldsymbol{x}^{(1)}$ from $\mathcal{D}_{0}$ and evaluating
its objective function $\sigma^{(1)}$ (its risk value in our specific finance
example). To make this point $\boldsymbol{x}^{(1)}$ stand out from the other
unevaluated samples, we set its probability to be twice that of any of the
remaining $n_{\rm{seed}}-1$ points in $\mathcal{D}_{0}$. Since we increase the
probability of one of the points, we need to adjust the probability of the
other $n_{\rm{seed}}-1$ points from $p_{0}$ to $p^{\prime}_{0}$. Assuming that
the probability weights for observing each point follow a multinomial
distribution with Boltzmann weights, and fixing the temperature
hyperparameter, we can solve for the reference "risk" value $\sigma^{(0)}$
associated with all the other $n_{\rm{seed}}-1$ points, as shown in Eq. 11
below (a numeric sanity check of Eq. 11 is sketched after this list). It is
important to note that $\sigma^{(0)}$ is an artificial reference value which
is calculated analytically and does not require a call to the objective
function (in contrast to $\sigma^{(1)}$). Here, $\mathcal{N}$ is the
normalization factor of the multinomial and $T$ is the temperature
hyperparameter which, as in the case of TN-GEO as a booster, can be adjusted
later in the algorithm as more data is seen. Due to the lack of initial cost-
function values, in order to set a relevant typical "energy" scale for this
problem, we follow the procedure in Ref. Alcazar _et al._ (2020), where $T$ is
set to the square root of the mean of the covariance matrix defined in Eq. 2,
as this matrix encapsulates the risk information (volatility) as stated in
Markowitz's model.
$\begin{cases}(n_{\rm{seed}}-1)\,p^{\prime}_{0}+p_{1}=1\\ p_{1}=2\,p^{\prime}_{0}\end{cases}\;\Rightarrow\;\begin{cases}p^{\prime}_{0}=1/(1+n_{\rm{seed}})\\ p_{1}=2/(1+n_{\rm{seed}})\end{cases}$

$\begin{cases}\mathcal{N}=(n_{\rm{seed}}-1)\,e^{-\sigma^{(0)}/T}+e^{-\sigma^{(1)}/T}\\ p_{1}=e^{-\sigma^{(1)}/T}/\mathcal{N}\\ p^{\prime}_{0}=e^{-\sigma^{(0)}/T}/\mathcal{N}\end{cases}\;\Rightarrow\;\begin{cases}\mathcal{N}=(n_{\rm{seed}}+1)\,e^{-\sigma^{(1)}/T}/2\\ \sigma^{(0)}=T\log{2}+\sigma^{(1)}\end{cases}$ (11)
Step 2. Generate the training set: same as in TN-GEO as a booster (see
Appendix A.5).
Step 3. Train the MPS: same as in TN-GEO as a booster (see Appendix A.5).
Step 4. Generate samples from the trained MPS: same as in TN-GEO as a booster
(see Appendix A.5).
Step 5. Select new candidates from the trained MPS: In contrast to TN-GEO as a
booster, we cannot afford to evaluate all new candidates coming from the MPS
samples. In our procedure we selected only two new candidates, which must meet
the cardinality constraint: the most frequent sample ("exploitation") and the
least frequent sample ("exploration"). If all new samples appear with the same
frequency, the two are selected at random. In the case where no new samples
were generated, we choose them from the unevaluated samples of the original
seed data set $\mathcal{D}_{0}$.
Step 6. Obtain risk values for the new selected samples: same as in TN-GEO as
a booster (see Appendix A.5).
Step 7. Merge the new portfolios with the seed data set from step 0: same as
in TN-GEO as a booster (see Appendix A.5).
Step 8. Restart the next cycle of the algorithm with the merged data set as
the new seed data set: same as in TN-GEO as a booster (see Appendix A.5).
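The numeric sanity check below verifies Eq. 11 (referenced from step 1 above) for hypothetical values of $T$ and $\sigma^{(1)}$: the evaluated point ends up twice as probable as each unevaluated one, and the probabilities normalize.

```python
import numpy as np

n_seed, T, sigma_1 = 2000, 0.5, 1.37           # hypothetical values
sigma_0 = T * np.log(2) + sigma_1              # Eq. 11

Z = (n_seed - 1) * np.exp(-sigma_0 / T) + np.exp(-sigma_1 / T)
p0 = np.exp(-sigma_0 / T) / Z                  # each unevaluated point
p1 = np.exp(-sigma_1 / T) / Z                  # the evaluated point
assert np.isclose(p1, 2 * p0)                  # evaluated point stands out
assert np.isclose((n_seed - 1) * p0 + p1, 1.0) # probabilities normalize
```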
## Appendix B Relative TN-GEO Enhancement
Figure 6 shows the relative performance of strategies 1 and 2 referred to in
subsection III.1.
Figure 6: Relative TN-GEO enhancement, similar to that shown in the bottom
panel of Fig. 2 in the main text. For these experiments, portfolio
optimization instances with a number of variables ranging from $N=30$ to
$N=100$ were used. Each panel corresponds to a different investment universe
built from a random subset of the S&P 500 market index. Note the trend towards
a larger quantum-inspired enhancement as the number of variables (assets)
becomes larger, with the largest enhancement obtained for instances with all
the assets from the S&P 500 ($N=500$), as shown in Fig. 2.
# Parameter inference in a computational model of hemodynamics in pulmonary
hypertension
Amanda L. Colunga∗1, Mitchel J. Colebank∗1,2,
REU Program1, Mette S. Olufsen1
1Department of Mathematics, North Carolina State University, Raleigh, NC
2Edwards Lifesciences Foundation Cardiovascular Innovation and Research
Center,
and Department of Biomedical Engineering, University of California, Irvine,
Irvine, CA
∗ authors contributed equally
Correspondence: Mette S. Olufsen <EMAIL_ADDRESS>
###### Abstract
Pulmonary hypertension (PH), defined by a mean pulmonary arterial pressure
(mPAP) $>$ 20 mmHg, is characterized by increased pulmonary vascular
resistance and decreased pulmonary arterial compliance. There are few
measurable biomarkers of PH progression, but a conclusive diagnosis of the
disease requires invasive right heart catheterization (RHC). Patient-specific
computational models of the cardiovascular system are a potential noninvasive
tool for determining additional indicators of disease severity. Using
computational modeling, this study quantifies physiological parameters
indicative of disease severity in nine PH patients. The model includes all
four heart chambers and the pulmonary and systemic circulations. We consider
two sets of calibration data: static (systolic & diastolic values) RHC data
and a combination of static and continuous, time-series waveform data. We
determine a subset of identifiable parameters for model calibration using
sensitivity analyses and multistart inference, and carry out uncertainty
quantification post-inference. Results show that additional waveform data
enables accurate calibration of the right atrial reservoir and pump function
across the PH cohort. Model outcomes, including stroke work and pulmonary
resistance-compliance relations, reflect typical right heart dynamics in PH
phenotypes. Lastly, we show that estimated parameters agree with previous,
non-modeling studies, supporting this type of analysis in translational PH
research.
Keywords: Pulmonary hypertension, computational model, parameter inference,
cardiovascular modeling
Abbreviation | Definition
---|---
0D | Zero dimensional (time or spatial component)
CO | Cardiac output
CTEPH | Chronic thromboembolic pulmonary hypertension
iid | Independent and identically distributed
MPA | Main pulmonary artery
mPAP | Mean pulmonary arterial pressure
ODE | Ordinary differential equation
PAH | Pulmonary arterial hypertension
PAWP | Pulmonary arterial wedge pressure
PH | Pulmonary hypertension
PVR | Pulmonary vascular resistance
RA | Right atrium
RHC | Right heart catheterization
RV | Right ventricle
## 1 Introduction
Patients with a resting mean pulmonary arterial blood pressure (mPAP) greater
than 20 mmHg are diagnosed with pulmonary hypertension (PH) [51]. This disease
has no cure and, if left untreated, progresses rapidly, leading to thickening
and stiffening of the pulmonary vasculature, vascular-ventricular decoupling,
and right ventricular (RV) failure [18, 26]. There are five main PH
etiologies: pulmonary arterial hypertension (PAH, group 1), PH due to left
heart disease (group 2), PH due to lung disease and/or hypoxia (group 3),
chronic thromboembolic PH (CTEPH, group 4), and PH with unclear multifactorial
mechanisms (group 5) [19]. Only patients in groups 1 and 4 have PH as their
primary disease; in groups 2-5, PH is a comorbidity. Patients with PAH and
CTEPH experience common symptoms early on, including shortness of breath,
dizziness, fainting, fatigue, and swelling of the legs and abdomen [42]. Early
diagnosis is difficult; therefore, patients with suspected PH undergo several
tests. A definite diagnosis requires invasive pulmonary arterial blood
pressure measurements through right heart catheterization (RHC) [42, 35]. PH
symptoms do not appear until 1-2 years after disease onset [26], by which time
patients have typically undergone significant disease progression before
diagnosis, limiting and reducing treatment outcomes. Understanding how
cardiovascular parameters (e.g., pulmonary vascular resistance (PVR) and
compliance) are modulated with the disease can assist in early detection and
better therapeutic interventions. We utilize systems-level computational
models with RHC data to study how model parameters and outcomes are modulated
with PH.
Mathematical modeling is useful for monitoring and understanding
cardiovascular disease progression. Systems-level models with multiple
cardiovascular compartments have had notable success in analyzing in-vivo
dynamics [31, 50, 12]. For example, Colunga et al. [12] utilized a zero-
dimensional (0D) systems-level model to predict pressure-volume (PV) loops and
left ventricular (LV) power to understand heart transplant recovery. Kung et
al. [31] used a similar model to quantify exercise capacity in Fontan
patients, an essential indicator of patient survival. The study by Shimizu et
al. [50] used a 0D model to study postoperative dynamics in patients with a
hypoplastic RV. Their results show that the effectiveness of ventricular
repair can be predicted by RV stiffness. These studies used models to predict
patient outcomes. As noted by Colunga et al. [12], reliable results require
that model parameters are identifiable given the model structure and available
data. Parameters are identifiable if they influence the model output and can
be uniquely determined by available data. A parameter’s influence on model
predictions is quantified using local [17, 39] and global [16, 8, 7]
sensitivity analyses. Subset selection algorithms [39, 38] determine parameter
interdependence and reduce identifiability issues. Schiavazzi et al. [49]
estimated cardiovascular model parameters by fitting simulations to data from
single-ventricle patients with a Norwood physiology. They show that combining
local and global identifiability techniques, apriori, provides unique and
consistent parameter estimates given the available data. Our group [12] used
similar methods to analyze data from heart-transplant patients finding that
model predictions align with static RHC data measured at one point and over
longitudinal patient recordings.
These previous studies use noninvasive or static data, while others used
dynamic time-series data, such as pressure waveforms, for model calibration.
Marquis et al. [36] developed a compartment model of the systemic circulation.
The model was calibrated by fitting five identifiable model parameters to
simultaneously recorded LV pressure and volume waveforms in rats. Their
results showed that estimating these parameters led to agreement between the
dynamic model prediction and the waveform data. The study by Bjørdalsbakke et
al. [5] compared model sensitivity using static or dynamic outputs from a
systemic circulation model. They found that, in the global analysis, systemic
resistance was less influential on time-averaged aortic pressure than on the
static systolic and diastolic pressure outputs. Gerringer et al. [21] used three- and
four-element Windkessel models to predict main pulmonary artery (MPA) pressure
waveforms in control and PAH mice. The study matched model simulations to
dynamic MPA data, showing good agreement with the data. However, the authors
did not consider a closed-loop model. These studies demonstrate the importance
of employing sensitivity analyses and parameter reduction but do not discuss
what data, static and/or dynamic, are informative for parameter inference.
Most clinical protocols only utilize static data in electronic health records.
Though static measurements are extracted from waveform data, storing both
static and dynamic pressures adds complexity to data storage. However, PH time-
series pressure data may reveal important markers of disease severity.
The objective of this study is two-fold: we 1) investigate if systems-level
model calibration is improved by adding dynamic RHC data and 2) investigate if
patient-specific cardiovascular parameters are consistent with the
physiological understanding of PH. To do so, we study the impact of model
parameters on hemodynamic predictions using local and global sensitivity
analyses. To quantify the benefits of adding waveform data in parameter
inference, we consider two residual vectors: comparing model predictions to
static data (systolic, diastolic, and mean pressures and cardiac output (CO))
and using a combination of static and dynamic data (RHC time-series
waveforms). By integrating mathematical modeling, patient-specific data, and
physiological intuition, we categorize each patient’s functional state,
including right atrial (RA), RV, and pulmonary artery (PA) temporal dynamics.
In addition, we run simulations with estimated parameters to calculate
patient-specific physiological biomarkers, including PV loops and other
markers of PH severity.
## 2 Methods
### 2.1 Ethics and approval
Patient-specific data are obtained from two hospitals, adhering to their
respective institutional review board guidelines. Deidentified RHC patient
data are obtained from the Scottish Pulmonary Vascular Unit at Golden Jubilee
National Hospital, Glasgow, UK, and from the Center for Pulmonary Vascular
Disease at Duke University Medical Center, Durham, NC.
### 2.2 Blood pressure data
This study utilizes clinically deidentified RHC data from nine patients with
confirmed PH: five with PAH and four with CTEPH. Three CTEPH and three PAH
datasets are from Duke University, and one CTEPH and two PAH datasets are from
the Scottish Pulmonary Vascular Unit. Static data include height, weight, sex,
age, heart rate, systolic, diastolic, and mean systemic blood pressure
measured by cuff pressure. The patients underwent RHC, during which a catheter
was advanced from the RA to the RV and MPA. Dynamic pressure waveforms are
recorded in each compartment. The pulmonary arterial wedge pressure (PAWP,
mmHg), an estimate of left atrial pressure, is also recorded. CO (L/min) is
measured during RHC by thermodilution. All pressure readings are obtained over
7-8 heartbeats. Demographics are provided in table 1.
Table 1: Patient demographics; group 1: pulmonary arterial hypertension (PAH);
group 4: chronic thromboembolic pulmonary hypertension (CTEPH).
Patient | PH | Age | Sex | Height (cm) | Weight (kg) | CO ($\frac{\text{L}}{\text{min}}$)
---|---|---|---|---|---|---
1 | 1 | 64 | Male | 164.0 | 72.6 | 4.0
2 | 4 | 58 | Male | 161.0 | 70.0 | 4.3
3 | 1 | 27 | Female | 151.0 | 81.1 | 2.6
4 | 4 | 71 | Female | 167.6 | 93.3 | 6.1
5 | 4 | 51 | Male | 179.1 | 117.2 | 3.6
6 | 1 | - | Male | 178.0 | 108.0 | 6.4
7 | 1 | - | Male | 179.0 | 74.0 | 6.3
8 | 1 | - | Female | 183.0 | 82.0 | 5.6
9 | 4 | - | Female | 154.9 | 67.4 | 4.0
CO: cardiac output; PH: pulmonary hypertension.
For patients 6-9, age was omitted from the medical records.
### 2.3 Data extraction
Time-series data are extracted from clinical RHC reports using GraphClick
version 3.0.3 for Mac OS and Map Digitizer available on the Apple AppStore.
Beat-to-beat hemodynamic profiles for each patient are extracted by aligning
the RHC pressure waveforms to the electrocardiogram signal. The waveforms are
separated by R-R interval and stored as separate files. For this study,
a single representative RA, RV, and MPA signal is chosen for each patient (see
figure 1). Since RHC data are not measured simultaneously, the representative
waveforms are selected during expiration and assigned a cardiac cycle length
equal to the averaged pressure cycle length. To align the signals within the
cardiac cycle, we shift the RA and MPA signals to ensure that RA contraction
occurs before the start of RV isovolumic contraction and that peak RV pressure
occurs immediately before peak MPA pressure. Magnitudes of the RA, RV, and MPA
pressure signals are shifted slightly to ensure physiological valve dynamics.
Dynamic pressure waveforms from the RHC are shown in figure 1. Lastly, we
construct a normotensive, control patient using pressure and volume values
from literature [6, 29]; these pressure values are displayed in table 2.
Control parameters and model predictions are compared to those obtained using
PH data.
Figure 1: Data processing. Dynamic data from the right atrium (RA), right ventricle (RV), and main pulmonary artery (MPA) for each patient are digitized from right heart catheterization recordings and used for model calibration.
Table 2: Static pressure values (mmHg) from patient data used for nominal parameter calculations. Mean and standard deviation values are calculated for the PH data only. † control values obtained from [6, 29]. ‡ left atrial diastolic value used in place of PAWP.
Data | Control † | P1 | P2 | P3 | P4 | P5 | P6 | P7 | P8 | P9 | Mean $\pm$ SD
---|---|---|---|---|---|---|---|---|---|---|---
$p^{d}_{ra,M}$ | 12 | 14 | 24 | 28 | 16 | 31 | 15 | 24 | 25 | 15 | $21\pm 6$
$p^{d}_{ra,m}$ | 3 | 5 | 16 | 20 | 8 | 23 | 11 | 15 | 22 | 8 | $14\pm 7$
$p^{d}_{rv,M}$ | 21 | 87 | 91 | 93 | 69 | 81 | 54 | 76 | 61 | 53 | $74\pm 15$
$p^{d}_{rv,m}$ | 2 | 3 | 5 | 3 | 1 | 17 | 8 | 12 | 16 | 4 | $8\pm 6$
$p^{d}_{pa,M}$ | 21 | 86 | 90 | 92 | 68 | 81 | 53 | 75 | 60 | 52 | $73\pm 15$
$p^{d}_{pa,m}$ | 8 | 32 | 38 | 34 | 20 | 36 | 28 | 37 | 34 | 19 | $31\pm 7$
$p^{d}_{pa}$ | 12 | 48 | 55 | 54 | 41 | 53 | 37 | 51 | 45 | 34 | $46\pm 8$
$p^{d}_{W}$ | 5 ‡ | 4 | 5 | 8 | 11 | 20 | 10 | 17 | 22 | 12 | $12\pm 6$
$p^{d}_{sa,M}$ | 120 | 112 | 112 | 127 | 148 | 118 | 133 | 127 | 89 | 123 | $121\pm 16$
$p^{d}_{sa,m}$ | 80 | 76 | 76 | 90 | 78 | 77 | 87 | 92 | 68 | 65 | $79\pm 9$
$p^{d}_{sa}$ | 93 | 88 | 88 | 102 | 101 | 91 | 102 | 103 | 75 | 84 | $93\pm 10$
### 2.4 Mathematical model
This study utilizes a systems-level, ordinary differential equations (ODE)
model (shown in figure 2) that simulates dynamic pressure $p$ (mmHg), flow $q$
(mL/s), and volume $V$ (mL). The model consists of 8 compartments: the left
and right atria and ventricles, and the systemic and pulmonary arteries and
veins. The model is formulated using an electrical circuit analogy, with
pressure analogous to voltage, flow to current, volume to charge, and
compliance to capacitance. We include four heart valves: two atrioventricular
(tricuspid and mitral) and two semilunar (pulmonary and aortic). An
additional systemic venous valve is also included. To ensure proper flow
between compartments, heart valves are modeled as diodes, i.e., each valve is
either open or closed depending on the pressure gradient between compartments.
Equivalent to an RC circuit, the equations relating the three dependent
quantities are given by
$$\frac{dV_{s,i}}{dt}=q_{i-1}-q_{i}, \qquad (1)$$
$$q_{i}=\frac{p_{i}-p_{i+1}}{R_{i}}, \qquad (2)$$
$$V_{s,i}=V_{i}-V_{un,i}=C_{i}\,(p_{i}-p_{i+1}), \qquad (3)$$
where the subscripts $i-1$, $i$, $i+1$ refer to the preceding, current, and
succeeding compartments in the system, respectively. $V_{s,i}$ (mL) denotes
the stressed volume (the circulating volume), and $V_{un,i}$ (mL) is the
unstressed volume (the non-circulating volume, assumed constant). $R_{i}$
(mmHg$\cdot$s/mL) denotes the resistance between two compartments and $C_{i}$
(mL/mmHg) the compartment compliance. Equation (1) ensures conservation of
volume, equation (2) is the analog of Ohm’s law, and equation (3) relates
volume and pressure.
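To make equations (1)-(3) concrete, the following minimal sketch (our illustration; all values are placeholders, and pressure is recovered as $p_{i}=V_{s,i}/C_{i}$, i.e., equation (3) with the reference pressure taken as zero) integrates a toy loop of three passive, compliant compartments:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy loop of three passive compliant compartments separated by
# resistances, illustrating eqs. (1)-(3). Values are placeholders.
R = np.array([1.0, 0.5, 0.8])   # mmHg s/mL, resistance out of compartment i
C = np.array([2.0, 1.3, 20.0])  # mL/mmHg, compliance of compartment i

def rhs(t, Vs):
    p = Vs / C                    # eq. (3) with zero reference pressure
    q = (p - np.roll(p, -1)) / R  # eq. (2): flow from compartment i to i+1
    return np.roll(q, 1) - q      # eq. (1): inflow minus outflow

sol = solve_ivp(rhs, (0.0, 60.0), y0=[60.0, 40.0, 100.0], rtol=1e-8)
print(sol.y[:, -1])  # volumes redistribute until all pressures equalize
```

In the full model, the heart chambers replace this passive pressure-volume relation with the time-varying elastance of equation (4), and the diode-like valves gate the flows.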
Figure 2: Model schematic. Follows an electrical circuit analog. The model has
eight compartments: the systemic and pulmonary arteries and veins, two atria,
and two ventricles. Each compartment is modeled as compliant and is separated
by a resistor element. The right atrium, right ventricle, and pulmonary
arteries (red boxes) have both dynamic and static data. The pulmonary veins
and systemic arteries have only static data. RHC: right heart catheterization;
CO: cardiac output.
We model each heart chamber by a time-varying elastance function $E_{i}(t)$
(mmHg/mL) [17, 36], which relates pressure and volume by
$p_{i}\left(t\right)=E_{i}\left(\tilde{t}\right)V_{s,i},$ (4)
where $i=ra,la,rv,lv$ denote the left $(l)$ and right $(r)$ atria $(a)$ and
ventricles $(v)$. The time within the cardiac cycle is denoted by
$\tilde{t}=\mathrm{mod}(t,T)$, where $T$ (s) is the length of the cardiac
cycle. The ventricular elastance function $E_{v}(\tilde{t})$ is given by the
piecewise continuous function [17]
$$E_{v}(\tilde{t})=\begin{cases}\dfrac{E_{v,M}-E_{v,m}}{2}\left(1-\cos\left(\dfrac{\pi\tilde{t}}{T_{c,v}}\right)\right)+E_{v,m}, & 0\leq\tilde{t}\leq T_{c,v},\\[4pt] \dfrac{E_{v,M}-E_{v,m}}{2}\left(1+\cos\left(\dfrac{\pi(\tilde{t}-T_{c,v})}{T_{r,v}-T_{c,v}}\right)\right)+E_{v,m}, & T_{c,v}<\tilde{t}\leq T_{r,v},\\[4pt] E_{v,m}, & T_{r,v}<\tilde{t}\leq T,\end{cases}\qquad(5)$$
where $E_{v,m}$ and $E_{v,M}$ (mmHg/mL) are the minimal and maximal
ventricular elastances, and $T_{c,v}$ (s) and $T_{r,v}$ (s) denote the
duration of ventricular contraction and relaxation. The atrial elastance
function (shown in figure 3) is prescribed in a similar fashion [33]
$$E_{a}(\tilde{t})=\begin{cases}\dfrac{E_{a,M}-E_{a,m}}{2}\left(1-\cos\left(\dfrac{\pi(\tilde{t}-T_{r,a})}{T-T_{c,a}+T_{r,a}}\right)\right)+E_{a,m}, & 0\leq\tilde{t}\leq T_{r,a},\\[4pt] E_{a,m}, & T_{r,a}<\tilde{t}\leq\tau_{c,a},\\[4pt] \dfrac{E_{a,M}-E_{a,m}}{2}\left(1-\cos\left(\dfrac{\pi(\tilde{t}-\tau_{c,a})}{T_{c,a}-\tau_{c,a}}\right)\right)+E_{a,m}, & \tau_{c,a}<\tilde{t}\leq T_{c,a},\\[4pt] \dfrac{E_{a,M}-E_{a,m}}{2}\left(1+\cos\left(\dfrac{\pi(\tilde{t}-T_{c,a})}{T-T_{c,a}+T_{r,a}}\right)\right)+E_{a,m}, & T_{c,a}<\tilde{t}\leq T.\end{cases}\qquad(6)$$
Here, $E_{a,m}$ and $E_{a,M}$ (mmHg/mL) are the minimum and maximum elastances
of the atria and $T_{r,a}$, $\tau_{c,a}$ and $T_{c,a}$ (s) denote the start of
atrial relaxation, the start of atrial contraction, and the point of maximum
atrial contraction. The elastance model is parameterized by $0\leq
T_{r,a}\leq\ \tau_{c,a}\leq T_{c,a}\leq T$. Figure 3 shows a representative
elastance time course in the atria and ventricles.
Figure 3: Heart chamber elastance function. Representative elastance function
for the atrial (red) and ventricular (blue) heart chambers. Timing parameters
are shown above their respective phases of the cardiac cycle. Note that
ventricular isovolumic contraction occurs while the atrium is still relaxing.
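A minimal sketch of the ventricular elastance of equation (5) is shown below; the parameter values are illustrative, not patient-specific:

```python
import numpy as np

# Ventricular elastance, eq. (5), with illustrative parameters.
E_m, E_M = 0.03, 0.55            # mmHg/mL, minimal and maximal elastance
T_c, T_r, T = 0.30, 0.55, 0.85   # s, end of contraction, end of relaxation, cycle

def E_v(t):
    tt = np.mod(t, T)  # time within the cardiac cycle
    rise = 0.5 * (E_M - E_m) * (1 - np.cos(np.pi * tt / T_c)) + E_m
    fall = 0.5 * (E_M - E_m) * (1 + np.cos(np.pi * (tt - T_c) / (T_r - T_c))) + E_m
    return np.where(tt <= T_c, rise, np.where(tt <= T_r, fall, E_m))

t = np.linspace(0.0, 2 * T, 500)
p = E_v(t) * 40.0  # eq. (4): pressure = elastance x stressed volume (40 mL)
```

The atrial function of equation (6) follows the same pattern, with an additional diastolic plateau governed by the $T_{r,a}$, $\tau_{c,a}$, and $T_{c,a}$ timing parameters.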
### 2.5 Model outcomes
We compute four physiological quantities derived from the model predictions
and inferred parameters. These indices are utilized as biomarkers of PH
severity.
1. Stroke work per cycle (SW): defined as the time-averaged integral of the PV loop, i.e., $\frac{1}{T}\oint p(t)\,dV$, calculated in each heart chamber [46, 12].
2. Resistance ratio: the ratio of pulmonary to systemic resistance, $R_{p}/R_{s}$ [61].
3. Compliance ratio: the ratio of pulmonary to systemic compliance, $C_{pa}/C_{sa}$.
4. Pulsatility index (PI): the ratio of pulmonary arterial pulse pressure to average right atrial pressure, $(p_{pa,M}-p_{pa,m})/\bar{p}_{ra}$ [37].
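As a minimal sketch (toy traces; patient 6's $R$ and $C$ values taken from table 5), these indices can be computed from simulated pressure and volume signals as follows:

```python
import numpy as np

# Outcome indices of section 2.5 from toy simulated traces.
def stroke_work(p, V, T):
    # Time-averaged PV-loop integral, (1/T) * closed integral of p dV,
    # approximated with the trapezoidal rule over one full cycle.
    return np.trapz(p, V) / T

def pulsatility_index(p_pa, p_ra):
    return (p_pa.max() - p_pa.min()) / p_ra.mean()

s = np.linspace(0.0, 2.0 * np.pi, 400)
p = 50.0 + 35.0 * np.sin(s)   # toy ventricular pressure trace (mmHg)
V = 100.0 + 40.0 * np.cos(s)  # toy volume trace (mL)
print(stroke_work(p, V, T=0.85))
print(0.28 / 0.9, 2.91 / 1.68)  # patient 6: R_p/R_s and C_pa/C_sa (table 5)
```

The printed ratios approximately reproduce the patient 6 entries of table 6; small differences come from parameter rounding in table 5.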
### 2.6 Parameter values and initial conditions
The sparse hemodynamic data and the large number of model parameters make it
imperative that nominal parameter values and initial conditions are set in a
physiologically reasonable and patient-specific manner. Following previous approaches [36, 12], we use a
combination of patient-specific data (where available) and literature values.
Table 3 lists the nominal parameter values and their calculation.
#### 2.6.1 Compartment volumes and cardiac output
Using Hidalgo’s formula [25], each patient’s total blood volume ($V_{tot}$,
mL) is calculated as a function of height ($H$, cm), weight ($W$, kg), and sex
[60] as
$$V_{tot}=\begin{cases}(3.47\cdot\text{BSA}-1.954)\cdot 1000,&\text{if female},\\ (3.29\cdot\text{BSA}-1.229)\cdot 1000,&\text{if male},\end{cases} \qquad (7)$$
where $\text{BSA}=\sqrt{W\cdot H/3600}$ (m$^{2}$) is the body surface area [14].
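A small sketch of equation (7) follows; the grouping of the factor of 1000 (applied to the whole expression, converting litres to millilitres) is our reading of the source equation:

```python
import math

# Total blood volume, eq. (7), with BSA = sqrt(W*H/3600).
def total_blood_volume(height_cm, weight_kg, female):
    bsa = math.sqrt(weight_kg * height_cm / 3600.0)  # m^2
    if female:
        return (3.47 * bsa - 1.954) * 1000.0  # mL
    return (3.29 * bsa - 1.229) * 1000.0

print(total_blood_volume(164.0, 72.6, female=False))  # patient 1, ~4750 mL
```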
The heart’s initial stressed volumes (initial conditions) are calculated using
BSA-indexed values. In contrast, stressed volumes in the vasculature are based
on blood volume proportions [4]. The BSA-indexed volumes, $V_{i,ED}^{d}$, for
the right heart are based on Tello et al. [57], with $V_{ra,ED}^{d}=58.9\cdot
BSA$ and $V_{rv,ED}^{d}=116.9\cdot BSA$. We assume that the left heart chamber
volume is unaffected by PH, and use $V_{la,ED}^{d}=30\cdot BSA$ and
$V_{lv,ED}^{d}=80\cdot BSA$ [29]. Note that these values determine the blood
volume distributions for PH patients. The normotensive control simulation used
$V_{ra,ED}^{d}=30\cdot BSA$ and $V_{rv,ED}^{d}=78\cdot BSA$,
$V_{la,ED}^{d}=30\cdot BSA$, and $V_{lv,ED}^{d}=78\cdot BSA$ [29].
The total volumes for the systemic and pulmonary arteries are 13% and 3% of
$V_{tot}$, of which the stressed volumes are 27% and 58% of the total volume.
Pulmonary venous blood volume is 11% of $V_{tot}$, and 11% of this volume is
stressed. These values are from previous studies [17, 36]. To ensure that
blood volume distributions add to 100%, we calculate total systemic venous
blood volume as the remaining volume
$V_{sv\%}=100-13-3-11-V_{H\%},$
where $V_{H\%}$ is the percentage of total blood volume within the entire
heart. CO is calculated assuming that the total blood volume circulates in one
minute [17, 6].
#### 2.6.2 Pressure
Pulmonary circulation pressures are extracted from the RHC data, while
systemic arterial pressure is determined from cuff measurements. These values
are listed in table 2. Nominal pressure values for compartments for which we do
not have measurements (i.e., the left atrium, LV, and systemic veins) are
calculated by scaling pressures in their adjacent, data-calibrated
compartments [60]. We use the following relationships:
$$p_{sv}=\max(10,\,1.15\,p_{rv,m}), \qquad (8)$$
$$p_{la,m}=0.95\,p_{pv}, \qquad (9)$$
$$p_{la,M}=p_{la,m}+5, \qquad (10)$$
$$p_{lv,m}=0.97\,p_{la,M}, \qquad (11)$$
$$p_{lv,M}=1.01\,p_{sa,M}. \qquad (12)$$
The subscripts $sa,sv,la,$ and $pv$ denote the systemic arteries, systemic
veins, left atrium, and pulmonary veins, respectively. The additional
subscript $m$ and $M$ denote minimum and maximum value. For the left atrium,
we assume a pulse pressure of 5 mmHg, consistent with previous studies [43].
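As a sketch, the scaling relations (8)-(12) amount to a few lines of arithmetic; here the PAWP stands in for $p_{pv}$ (our reading), and the inputs are patient 1's static values from table 2 (mmHg):

```python
# Nominal pressures for compartments without data, eqs. (8)-(12).
p_rv_m, p_pv, p_sa_M = 3.0, 4.0, 112.0

p_sv = max(10.0, 1.15 * p_rv_m)  # eq. (8): floors systemic venous pressure at 10
p_la_m = 0.95 * p_pv             # eq. (9)
p_la_M = p_la_m + 5.0            # eq. (10): assumed 5 mmHg atrial pulse pressure
p_lv_m = 0.97 * p_la_M           # eq. (11)
p_lv_M = 1.01 * p_sa_M           # eq. (12)
```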
#### 2.6.3 Resistance
Each compartment is separated by a resistance to flow. Utilizing Ohm’s law,
the nominal vascular resistance is calculated as
$R_{i}=\frac{\Delta p}{\text{CO}},$ (13)
where the resistance in compartment $i$ depends on the pressure gradient,
$\Delta p$, and the CO; refer to table 3 for more details. The aortic and
pulmonary valve resistances are calculated as
$$R_{ava}=\frac{p_{lv,M}-p_{sa,M}}{\text{CO}}\quad\text{and}\quad R_{pva}=\frac{p_{rv,M}-p_{pa,M}}{\text{CO}}. \qquad (14)$$
For PH patients, RA and pulmonary venous pressures are elevated [2], and the
resistance equations overestimate the atrioventricular valve resistances. To
circumvent this, we set $R_{tva}=0.03$ and $R_{mva}=0.01$ for all nine PH
patients.
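A minimal sketch of the resistance calculations, again with patient 1's values (tables 1 and 2); the CO is converted to mL/s so the resistances come out in mmHg$\cdot$s/mL:

```python
# Nominal resistances via Ohm's law, eqs. (13)-(14).
CO = 4.0 * 1000.0 / 60.0            # 4 L/min -> mL/s

p_sa_M, p_pa_M, p_rv_M = 112.0, 86.0, 87.0
p_lv_M = 1.01 * p_sa_M              # from eq. (12)

R_ava = (p_lv_M - p_sa_M) / CO      # eq. (14), aortic valve
R_pva = (p_rv_M - p_pa_M) / CO      # eq. (14), pulmonary valve
R_tva, R_mva = 0.03, 0.01           # atrioventricular valves fixed (see text)
print(R_ava, R_pva)                 # both on the order of 0.015-0.017
```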
#### 2.6.4 Compliance
is defined as the relative change in volume for a given change in pressure
[59] and quantifies the ability of the vasculature to distend under load. In
this study, nominal compliance estimates are
$C_{i}=\frac{V_{i}-V_{un,i}}{\tilde{p}_{i}},$ (15)
where $\tilde{p}_{i}$ is a compartment-specific pressure [12]; see table 3 for
more details.
#### 2.6.5 Heart parameters
include elastance and timing parameters. Noting that compliance is the inverse
of elastance and that the compliance in the heart is minimal during end-
systole (computed at the maximum pressure and minimal volume) [36], we
calculate the maximum and minimum elastances as
$$E_{i,M}=\frac{p_{i,M}}{V_{i,m}-V_{un,i}}\quad\text{and}\quad E_{i,m}=\frac{p_{i,m}}{V_{i,M}-V_{un,i}}, \qquad (16)$$
where $i=la,ra,lv,rv$.
Nominal timing parameters for the RA and RV elastance functions are extracted
from the time-series data. Maximum and minimum RV elastance occur at peak
systole and the beginning of diastole, corresponding to $T_{c,v}$ and
$T_{r,v}$, respectively. RA dynamics are used to determine the end of atrial
systole, the start of atrial contraction, and peak atrial contraction, i.e.,
$T_{r,a}$, $\tau_{c,a}$, and $T_{c,a}$. Since dynamic data are unavailable
for the left atrium and LV, we set left-heart chamber timing parameters equal
to the right-heart timing parameters.
Table 3: Parameters in the 0D model and the methods for calculating their nominal values.
Parameter | Units | Equation | Reference
---|---|---|---
Heart Valves
$R_{ava}$ | $\frac{mmHg\ s}{mL}$ | $\frac{\displaystyle p_{lv,M}-p_{sa,M}}{\displaystyle q_{tot}}$ | Ohm’s Law
$R_{mva}$ | $\frac{mmHg\ s}{mL}$ | 0.01 | -
$R_{pva}$ | $\frac{mmHg\ s}{mL}$ | $\frac{\displaystyle p_{rv,M}-p_{pa,M}}{\displaystyle q_{tot}}$ | Ohm’s Law
$R_{tva}$ | $\frac{mmHg\ s}{mL}$ | 0.03 | -
$R_{sv}$ | $\frac{mmHg\ s}{mL}$ | $\frac{\displaystyle\bar{p}_{sv}-p_{ra,m}}{\displaystyle q_{tot}}$ | Ohm’s Law
$R_{pv}$ | $\frac{mmHg\ s}{mL}$ | $\frac{\displaystyle\bar{p}_{pv}-p_{la,m}}{\displaystyle q_{tot}}$ | Ohm’s Law
Systemic Vasculature
$R_{s}$ | $\frac{mmHg\ s}{mL}$ | $\frac{\displaystyle p_{sa,m}-\bar{p}_{sv}}{\displaystyle q_{tot}}$ | Ohm’s Law
$C_{sa}$ | $\frac{mL}{mmHg}$ | $\frac{\displaystyle V_{sa,M}-V_{sa,un}}{\displaystyle p_{sa,m}}$ | [12]
$C_{sv}$ | $\frac{mL}{mmHg}$ | $\frac{\displaystyle V_{sv,M}-V_{sv,un}}{\displaystyle\bar{p}_{sv}}$ | [12]
Pulmonary Vasculature
$R_{p}$ | $\frac{mmHg\ s}{mL}$ | $\frac{\displaystyle p_{pa,m}-\bar{p}_{pv}}{\displaystyle q_{tot}}$ | Ohm’s Law
$C_{pa}$ | $\frac{mL}{mmHg}$ | $\frac{\displaystyle V_{pa,M}-V_{pa,un}}{\displaystyle p_{pa,m}}$ | [12]
$C_{pv}$ | $\frac{mL}{mmHg}$ | $\frac{\displaystyle V_{pv,M}-V_{pv,un}}{\displaystyle\bar{p}_{pv}}$ | [12]
Heart Elastance
$E_{M,rv}$ | $\frac{mmHg}{mL}$ | $\frac{\displaystyle p_{rv,M}}{\displaystyle V_{rv,m}-V_{rv,un}}$ | [36]
$E_{m,rv}$ | $\frac{mmHg}{mL}$ | $\frac{\displaystyle p_{rv,m}}{\displaystyle V_{rv,M}-V_{rv,un}}$ | [36]
$E_{M,ra}$ | $\frac{mmHg}{mL}$ | $\frac{\displaystyle p_{ra,M}}{\displaystyle V_{ra,m}-V_{ra,un}}$ | [36]
$E_{m,ra}$ | $\frac{mmHg}{mL}$ | $\frac{\displaystyle p_{ra,m}}{\displaystyle V_{ra,M}-V_{ra,un}}$ | [36]
$E_{M,lv}$ | $\frac{mmHg}{mL}$ | $\frac{\displaystyle p_{lv,M}}{\displaystyle V_{lv,m}-V_{lv,un}}$ | [36]
$E_{m,lv}$ | $\frac{mmHg}{mL}$ | $\frac{\displaystyle p_{lv,m}}{\displaystyle V_{lv,M}-V_{lv,un}}$ | [36]
$E_{M,la}$ | $\frac{mmHg}{mL}$ | $\frac{\displaystyle p_{la,M}}{\displaystyle V_{la,m}-V_{la,un}}$ | [36]
$E_{m,la}$ | $\frac{mmHg}{mL}$ | $\frac{\displaystyle p_{la,m}}{\displaystyle V_{la,M}-V_{la,un}}$ | [36]
Heart Timing
$\tau_{c,a}$ | $s$ | Data | -
$T_{c,a}$ | $s$ | Data | -
$T_{r,a}$ | $s$ | Data | -
$T_{c,v}$ | $s$ | Data | -
$T_{r,v}$ | $s$ | Data | -
### 2.7 Model summary
The model consists of a system of eight ODEs, one for the stressed volume,
$V_{s,i}$, of each compartment, with twenty-five parameters. The system can be
written as
$$\mathbf{y}=g(t,\mathbf{x};\theta),\qquad \frac{d\mathbf{x}}{dt}=f(t,\mathbf{x};\theta),\qquad (17)$$
$$\mathbf{x}=\{V_{la},\,V_{lv},\,V_{sa},\,V_{sv},\,V_{ra},\,V_{rv},\,V_{pa},\,V_{pv}\},$$
where
$$\begin{split}\theta=\{&R_{s},R_{p},R_{ava},R_{mva},R_{pva},R_{tva},R_{pv},R_{sv},C_{sa},C_{sv},C_{pa},C_{pv},\\ &E_{la,M},E_{la,m},E_{ra,M},E_{ra,m},E_{lv,M},E_{lv,m},E_{rv,M},E_{rv,m},\\ &T_{r,a},\tau_{c,a},T_{c,a},T_{c,v},T_{r,v}\}.\end{split}\qquad (18)$$
Here $\mathbf{x}$ denotes the state variables ($V_{s,i}$ in compartment $i$).
The functions $f(t,\mathbf{x};\theta)$ denote the evolution of the states
(equation (1)), and $\bm{\theta}$ are the parameters. The vector $\mathbf{y}$
is the model output, including predictions of pressure and CO, used for
parameter inference.
### 2.8 Parameter inference
We estimate model parameters, some of which correspond to disease biomarkers,
by minimizing the relative least-squares error between model predictions and
data. We use the Levenberg-Marquardt algorithm to solve the generalized least-
squares problem [30]. The observed data $\mathbf{y}^{d}$ (static or time-
series) is assumed to be of the form
$\mathbf{y}^{d}=g(t,\mathbf{x};\mathbf{\theta})+\mathbf{\varepsilon},$ (19)
where $g(t,\mathbf{x};\theta)$ are the model predictions (here, pressure and
CO), and $\varepsilon$ is the measurement error, assumed to be independent and
identically distributed (iid) white Gaussian noise, i.e., $\varepsilon\ \sim\
\mathcal{N}(0,\ \sigma_{\varepsilon}^{2}\mathbf{I})$. Using this framework, we
estimate parameters that minimize the relative sum of squared errors,
$J=\mathbf{r}^{T}\mathbf{r}$, where $\mathbf{r}$ is the residual vector. The
residual encompasses the relative differences between the measured data
$\mathbf{y}^{d}$ and model predictions $\mathbf{y}=\
g(t,\mathbf{x};\mathbf{\theta})$.
The static residual is defined as
$\mathbf{r}_{s}=\frac{1}{\sqrt{N_{s}}}\frac{\mathbf{y}-\mathbf{y}^{d}}{\mathbf{y}^{d}},$
(20)
where the vector $\mathbf{y}=[p_{ra,M},\ p_{ra,m},\ p_{rv,M},\ p_{rv,m},\
p_{pa,M},\ p_{pa,m},$ $\ p_{sa,M},\ p_{sa,m},\ p_{pv,m},\ \text{CO}]$ includes
model outputs, $\mathbf{y}^{d}$ is the corresponding data, and $N_{s}$ is the
number of points. The three dynamic residuals are given by
$$\mathbf{r}_{ra}=\frac{1}{\sqrt{N_{ra}}}\frac{\mathbf{p}_{ra}(t;\theta)-\mathbf{p}_{ra}^{d}(t)}{\mathbf{p}_{ra}^{d}(t)}, \qquad (21)$$
$$\mathbf{r}_{rv}=\frac{1}{\sqrt{N_{rv}}}\frac{\mathbf{p}_{rv}(t;\theta)-\mathbf{p}_{rv}^{d}(t)}{\mathbf{p}_{rv}^{d}(t)}, \qquad (22)$$
$$\mathbf{r}_{pa}=\frac{1}{\sqrt{N_{pa}}}\frac{\mathbf{p}_{pa}(t;\theta)-\mathbf{p}_{pa}^{d}(t)}{\mathbf{p}_{pa}^{d}(t)}, \qquad (23)$$
where $\mathbf{p}_{i}(t;\theta)$, $\mathbf{p}_{i}^{d}(t)$, and $N_{i}$ are the
time-series pressure predictions, time-series pressure data, and number of
residual points for the RA, RV, and pulmonary arteries. We consider two
combined residuals as our quantities of interest:
$$\mathbf{r}_{1}=\mathbf{r}_{s}, \qquad \mathbf{r}_{2}=[\mathbf{r}_{s},\ \mathbf{r}_{ra},\ \mathbf{r}_{rv},\ \mathbf{r}_{pa}].$$
Similar to the approach in [36], each residual is computed over the last 30
cycles of the model predictions compared to the data. In the absence of volume
data, we include four penalty terms in our inference procedure to constrain
heart chamber volumes. PAH and CTEPH patients have enlarged RAs and RVs,
increasing the chamber volume [57]. We penalize end-diastolic model
predictions below a BSA-indexed volume threshold, as defined in subsection
2.6.1. The penalty functions are defined by
$J_{\text{penalty},i}=\max\left(0,\frac{\max(\mathbf{V}_{i})-V_{i,ED}^{d}}{V_{i,ED}^{d}}\right),$
(24)
where $i=la,lv,ra,rv$ and $\mathbf{V}_{i}$ is the predicted chamber volume.
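A compact sketch of this inference setup is shown below; model() is a hypothetical toy stand-in for the ODE simulator with two illustrative parameters, not the actual patient-specific model:

```python
import numpy as np
from scipy.optimize import least_squares

# Relative static residual, eq. (20), plus volume penalty, eq. (24),
# minimized with a Levenberg-Marquardt-type optimizer.
y_data = np.array([14.0, 5.0, 87.0, 3.0, 86.0, 32.0, 112.0, 76.0, 4.0, 4.0])
V_ED = np.array([107.0])                   # BSA-indexed volume threshold (mL)

def model(theta):
    y = y_data * theta[0]                  # toy static outputs
    V_max = np.array([150.0]) * theta[1]   # toy maximum chamber volume
    return y, V_max

def residual(theta):
    y, V_max = model(theta)
    r_s = (y - y_data) / y_data / np.sqrt(y_data.size)  # eq. (20)
    penalty = np.maximum(0.0, (V_max - V_ED) / V_ED)    # eq. (24)
    return np.concatenate([r_s, penalty])

opt = least_squares(residual, [1.2, 0.6], method="lm")
print(opt.x)  # theta[0] -> 1; theta[1] is idle while below the threshold
```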
### 2.9 Sensitivity analyses
We compute the sensitivity of the residual vectors $\mathbf{r}_{\mathbf{1}}$
and $\mathbf{r}_{\mathbf{2}}$ with respect to the model parameters. Both
local (derivative-based) and global (variance-based) sensitivity analyses are
used. The former are valid within a small neighborhood of the nominal
parameter values and quantify the gradient of the residual vectors
$\mathbf{r}_{\mathbf{1}}$ and $\mathbf{r}_{\mathbf{2}}$ with respect to the
parameters. The latter measure model sensitivity throughout the physiological
parameter space, simultaneously varying multiple factors.
The local sensitivity of the residual for a parameter $\theta_{i}$ at time $t$
is denoted by $\chi_{i}(t)$. Sensitivities are approximated numerically via
the complex-step method [13]. We rank parameters from most to least
influential by calculating the 2-norm of each sensitivity [36, 12]
$$\|\chi_{i}\|_{2}=\left(\sum_{l=1}^{N}\chi_{i}^{2}(t_{l})\right)^{\frac{1}{2}}, \qquad (25)$$
where $i=1,2,\dots,\mathcal{M}$ indexes the parameters and $l=1,2,\dots,N$
indexes the entries of the time vector.
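A sketch of the complex-step sensitivities and the ranking of equation (25); the residual function here is a toy stand-in and must accept complex input:

```python
import numpy as np

def complex_step_jacobian(f, theta, h=1e-30):
    # d f / d theta_i ~ Im(f(theta + i*h*e_i)) / h: exact to machine
    # precision, with no subtractive cancellation.
    theta = np.asarray(theta, dtype=complex)
    base = f(theta)
    J = np.empty((base.size, theta.size))
    for i in range(theta.size):
        step = theta.copy()
        step[i] += 1j * h
        J[:, i] = f(step).imag / h
    return J

def rank_parameters(J):
    norms = np.linalg.norm(J, axis=0)       # eq. (25), per parameter
    return np.argsort(norms)[::-1], norms   # most to least influential

f = lambda th: np.array([th[0] ** 2 + th[1], np.sin(th[0]) * th[2]])
order, norms = rank_parameters(complex_step_jacobian(f, [1.0, 2.0, 0.5]))
```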
While global sensitivity analysis is more computationally expensive than local
methods, its ability to vary multiple parameters at a time may expose
undiscovered relationships between parameters [16]. In this study, we use
variance-based global sensitivity analysis methods, computing first-order
($S$) and total-order ($S_{T}$) Sobol’ indices [53]. The former measures each
parameter’s individual contribution to the total output variance of the cost
function, while the latter also captures higher-order interactions between
parameters. The $S_{T}$ values are used to order parameters from most to least
influential. Additional information on the local and global methods can be
found in Section S2 of the Supplemental Material.
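The paper does not name its implementation; as one possible sketch, the SALib package provides a Saltelli sampling design and Sobol' index estimators, applied below to a toy cost function:

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Variance-based Sobol' analysis of a toy cost J(theta); the names and
# bounds are illustrative stand-ins for the model parameters.
problem = {
    "num_vars": 3,
    "names": ["R_p", "C_pa", "E_m_rv"],
    "bounds": [[0.1, 1.0], [0.5, 3.0], [0.01, 0.1]],
}
X = saltelli.sample(problem, 1024)  # Sobol' sequence-based design
Y = np.array([x[0] ** 2 + np.sin(x[1]) + 0.1 * x[2] for x in X])
Si = sobol.analyze(problem, Y)
print(Si["S1"], Si["ST"])  # first- and total-order indices
```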
### 2.10 Parameter subset selection
Once the sensitivity analysis is performed, additional steps are taken to
determine if the parameters are identifiable. A parameter is said to be
identifiable if it can be uniquely determined when fitting a model to data
[44, 34]. The model used in this study is analogous to an electrical resistor-
capacitor circuit. Circuit theory dictates that resistors and capacitors in
series and parallel can be combined to give an equivalent resistor and
capacitor. Therefore, if no data are available between two components, their
parameters cannot be estimated uniquely, i.e., they are non-identifiable.
Given the limited data and the large number of parameters (found in (18)), we
expect identifiability problems if all parameters are inferred from data [36,
12]. We take several steps to determine an identifiable and influential subset
with respect to the residual vectors. The subset selection process begins by
analyzing the global sensitivity results. Parameters with $S_{T_{i}}\approx 0$
are considered non-influential and fixed at their nominal values [54, 16].
After excluding these parameters, we use a singular value decomposition (SVD)
QR factorization method to determine local pairwise parameter interactions
[44]. Lastly, we use multistart inference to reduce the subset further until
we mitigate all identifiability issues.
#### 2.10.1 SVD-QR
The SVD-QR method [22] decomposes the sensitivity matrix as
$\bm{\chi}=\bm{U}\bm{\Sigma}\bm{V}^{\top}$, where $\bm{U}$ is the matrix of
left orthonormal eigenvectors, $\bm{\Sigma}$ is a diagonal matrix of singular
values, and $\bm{V}$ is the matrix of right orthonormal eigenvectors. The
total number of identifiable parameters, $\rho$, is the numerical rank of
$\Sigma$ and is used to partition $\bm{V}$ as $\bm{V}=[\bm{V}_{\rho}\ \
\bm{V}_{P-\rho}]$. The permutation matrix $\tilde{\bm{P}}$ is determined by QR
factorization such that $\bm{V}_{\rho}^{\top}\tilde{\bm{P}}=\bm{Q}\bm{R}$.
Here, $\bm{Q}$ is an orthogonal matrix, and the first $\rho$ columns of
$\bm{R}$ form an upper triangular matrix consisting of diagonal entries in
decreasing order. The first $\rho$ entries of $\tilde{\bm{P}}$ establish the
identifiable parameters for the subset.
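A minimal numerical sketch of the SVD-QR procedure follows; the sensitivity matrix here is synthetic, constructed so that the numerical rank is two:

```python
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(0)
# Toy sensitivity matrix: 50 residuals x 3 parameters, with the third
# column scaled to be negligible, giving numerical rank 2.
chi = rng.standard_normal((50, 3)) @ np.diag([5.0, 1.0, 1e-8])

U, S, Vt = np.linalg.svd(chi, full_matrices=False)
rho = int(np.sum(S > S[0] * 1e-6))  # numerical rank of Sigma
V_rho = Vt[:rho, :].T               # first rho right singular vectors

Q, R, piv = qr(V_rho.T, pivoting=True)  # column-pivoted QR of V_rho^T
identifiable = piv[:rho]                # indices of identifiable parameters
print(rho, identifiable)
```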
#### 2.10.2 Multistart inference
The previous methods ensure that the parameters are locally and linearly
identifiable. However, they do not guarantee practically identifiable
parameter subsets if the model has nonlinear behavior in output space [54].
Thus, we determine our final subset by inferring parameters from multiple
initial guesses randomly selected between $\pm 20\%$ of the nominal values.
Non-identifiable parameters likely approach different values, whereas
identifiable parameters converge to the same value regardless of initial guess
[49]. We assess identifiability by calculating each patient’s coefficient of
variance (CoV; the standard deviation relative to the mean). Subsets that
exhibit parameter CoV $>10\%$ are reduced by fixing the least influential
parameter above this threshold. The multistart inference is iteratively run
until the CoV for each parameter is below the $10\%$ threshold.
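A sketch of the multistart check, with a toy fit() standing in for the least-squares wrapper:

```python
import numpy as np

rng = np.random.default_rng(1)
theta_nom = np.array([0.5, 1.3, 0.06])

def fit(theta0):
    # Toy stand-in for the optimizer: a well-posed problem returns
    # (almost) the same optimum from any start; the jitter mimics
    # optimizer tolerance.
    return theta_nom * (1.0 + 1e-3 * rng.standard_normal(theta_nom.size))

starts = theta_nom * (1.0 + rng.uniform(-0.2, 0.2, (8, theta_nom.size)))
estimates = np.array([fit(t0) for t0 in starts])

cov = estimates.std(axis=0) / estimates.mean(axis=0)  # coefficient of variance
needs_reduction = (cov > 0.10).any()  # if True, fix the least influential offender
```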
### 2.11 Confidence and prediction intervals
Model parameter and output uncertainty are quantified using asymptotic
analysis [11]. Under the assumption that the noise $\mathbf{\varepsilon}$ is
iid, we compute the variance estimator $\hat{\sigma}_{\epsilon}^{2}$ and
parameter covariance estimator
$\hat{\mathbf{C}}=\hat{\sigma}_{\epsilon}^{2}\left(\hat{\bm{\chi}}^{\top}\hat{\bm{\chi}}\right)^{-1}$
using asymptotic analysis for nonlinear least-squares [52].
The 95% parameter confidence intervals for each inferred parameter,
$\hat{\theta}_{i}$, are computed as
$[{{\hat{\theta}}_{i}^{CI-},\hat{\theta}}_{i}^{CI+}]={\hat{\theta}}_{i}\pm
t_{N-\rho}^{0.975}\sqrt{{\hat{\mathbf{C}}}_{i,i}},$ (26)
where $t_{N-\rho}^{0.975}$ is the two-sided t-statistic for a $95\%$
confidence level, and $\sqrt{\hat{\mathbf{C}}_{i,i}}$ is the standard error of
the $i$th parameter estimator. Throughout, we denote these confidence
intervals by mean $\pm$ two standard deviations, i.e., $\hat{\theta}_{i}\pm
2\sigma_{\theta_{i}}$. The confidence and prediction intervals for the optimal
model output $\hat{y}_{j}$ at time $t_{j}$ are
$$[\hat{y}_{j}^{CI-},\,\hat{y}_{j}^{CI+}]=\hat{y}_{j}\pm t_{N-\rho}^{0.975}\sqrt{\hat{\bm{\chi}}_{j}^{T}\hat{\mathbf{C}}\,\hat{\bm{\chi}}_{j}}, \qquad (27)$$
$$[\hat{y}_{j}^{PI-},\,\hat{y}_{j}^{PI+}]=\hat{y}_{j}\pm t_{N-\rho}^{0.975}\sqrt{\hat{\sigma}_{\varepsilon}^{2}+\hat{\bm{\chi}}_{j}^{T}\hat{\mathbf{C}}\,\hat{\bm{\chi}}_{j}}, \qquad (28)$$
where $\hat{\bm{\chi}}_{j}^{T}$ is the sensitivity vector at $t_{j}$ evaluated
at $\hat{\bm{\theta}}=\{\hat{\bm{\theta}}_{\rho},\,\bm{\theta}_{\mathcal{M}-\rho}\}$.
Note that the prediction intervals account for the variance in both the model
output and the data and are hence wider.
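A sketch of equations (26)-(28) with synthetic residuals and sensitivities:

```python
import numpy as np
from scipy.stats import t

rng = np.random.default_rng(2)
N, rho = 200, 3
J = rng.standard_normal((N, rho))  # sensitivity matrix at the optimum
r = 0.05 * rng.standard_normal(N)  # residual vector at the optimum

sigma2 = (r @ r) / (N - rho)               # noise variance estimator
C = sigma2 * np.linalg.inv(J.T @ J)        # parameter covariance estimator
tval = t.ppf(0.975, N - rho)               # two-sided 95% t-statistic

se_theta = np.sqrt(np.diag(C))             # eq. (26): theta_hat +/- tval * se
var_y = np.einsum("ij,jk,ik->i", J, C, J)  # chi_j^T C chi_j per time point
ci_half = tval * np.sqrt(var_y)            # eq. (27) half-width
pi_half = tval * np.sqrt(sigma2 + var_y)   # eq. (28) half-width, always wider
```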
### 2.12 Simulations
To study the impact of PH, we run several simulations comparing PH patients to
a normotensive control subject.
##### Control:
Simulations for a control patient are conducted using normotensive pressure
and volume values given in table 2. Hemodynamic predictions are compared to
those from PH patients.
##### Static:
Similar to Colunga et al. [12], we calibrate model predictions utilizing only
static pressure and CO data for each PH patient, i.e., $\mathbf{r}_{1}$. We
use this as a benchmark procedure to determine the effects of adding dynamic
waveforms.
##### Dynamic waveforms:
Model predictions of systolic, diastolic, and mean pressure are calibrated in
combination with dynamic RA, RV, and pulmonary artery predictions, utilizing
residual $\mathbf{r}_{2}$.
## 3 Results
Local and global sensitivity analyses of both residuals $\mathbf{r}_{1}$ and
$\mathbf{r}_{2}$ distinguish influential and non-influential parameters. Next,
SVD-QR and multistart inference are used to construct subsets of identifiable
parameters. Model predictions are calibrated to measured RHC data using the
identifiable subset, and other outcomes, such as PV loops, are computed.
Uncertainty in the parameter estimates and model outputs is compared for the
two residual vectors and shown here for a single representative patient;
results for the remaining patients can be found in the Supplemental Material.
### 3.1 Sensitivity analyses
Figure 4 (a-b) shows the patient-specific local sensitivity parameter ranking
for $\mathbf{r}_{1}$ (static values only, panel (a)) and $\mathbf{r}_{2}$
(static and time-series data, panel (b)). Sensitivities are normalized by the
largest magnitude for each patient and residual, and parameters are sorted
based on their median ranking across all nine patients.
Parameters are ranked similarly for the two residual vectors; however,
accounting for dynamic predictions makes the timing parameter $\tau_{c,a}$
more influential on $\mathbf{r}_{2}$. The most influential parameters for both
residuals are $C_{sa},\,C_{pa},\,C_{pv},$ and $E_{M,rv}$. Seven of the nine
patients display consistent parameter rankings for both residual vectors.
Parameter $\tau_{c,a}$ is less influential for patients 3 and 5 than for the
other patients. Overall, the local analysis shows that parameters
[$R_{ava},\,R_{mva},\,R_{pva},\,R_{pv},\,R_{sv},\,E_{M,la},\,T_{r,a}$] are
non-influential for both residuals; parameters with sensitivities $\leq
10^{-1}$ are considered non-influential. The boundary separating influential
and non-influential parameters is marked with vertical lines in figure 4.
For the global sensitivity analysis, $n=10^{4}$ samples are generated for each
parameter using a Sobol’ sequence. The average first order ($S_{i}$) and total
($S_{T_{i}}$) effects across all nine patients are shown in figure 4 (c-d) for
the cost functional $J(\theta)$ using residuals $\mathbf{r}_{1}$ and
$\mathbf{r}_{2}$. Sobol’ indices are similar across all patients, and the
parameter ranking using the total Sobol’ index agrees with the local results.
A total index, $S_{T_{i}}$, near zero ($\leq 10^{-2}$) suggests that the
corresponding parameter is non-influential. Results show that $S_{T_{i}}$ is
$\leq 0.005$ for parameters $R_{ava}$, $R_{pva}$, $R_{pv}$, $R_{mva}$,
$E_{M,la}$, and $T_{r,a}$, consistent with the local sensitivity results,
suggesting that these parameters can be fixed at their nominal values. The
$S_{T_{i}}$ is also approximately zero for $T_{c,v}$ and $T_{r,v}$. Since the
local sensitivity identifies $T_{c,v}$ and $T_{r,v}$ as influential, we
include these in our subset selection procedure.
Figure 4: Sensitivity analysis. Parameter ranking based on a local sensitivity
analysis using either $\bm{r}_{1}$ (a) or $\bm{r}_{2}$ (b). Each patient is
plotted by a different color. The parameter order is based on median
sensitivity across all patients. Average first order $(S_{i})$ and total order
$(S_{T_{i}})$ Sobol’ indices using either $\bm{r}_{1}$ or $\bm{r}_{2}$ (c).
Average parameter ranking based on $(S_{T_{i}})$ magnitude for either residual
are shown in (d). The horizontal dashed lines separate influential (above) and
non-influential (below) parameters.
### 3.2 Parameter subsets and inference
Both SVD-QR and multi-start inference are used for parameter subset selection.
The non-influential parameters,
$\theta^{NI}=[R_{ava},\,R_{mva},\,R_{pva},\,R_{pv},\,E_{M,la},\,T_{r,a}]$ are
fixed prior to SVD-QR. Previous studies [60] found that the maximum and
minimum elastance cannot be inferred simultaneously. Since the minimum
elastance controls both the amplitude and the baseline elastance, this parameter
contains more information and is, therefore, more important to infer. The
study by Domogo and Ottesen [15] focused on left atrial dynamics using a 0D
computational model. They found that changes in atrial volume were sensitive
to maximal atrial compliance (i.e., minimal atrial elastance). This
observation supports our exclusion of maximal elastance parameters in subset
selection. Thus, the remaining maximal elastances,
[$E_{M,ra},\,E_{M,rv},\,E_{M,lv}$], are also fixed prior to SVD-QR. We
generate a subset for each residual, including parameters consistently
identified by SVD-QR across all nine patients. Parameters that are
inconsistent using SVD-QR are depicted in blue in tables S1 and S2 of the
Supplemental Material.
We run the multistart inference with these reduced SVD-QR subsets. For
instances of multistart inference that have parameters with high CoV ($>0.10$)
(purple in tables S1 and S2 in the Supplemental Material), the least
influential parameter is removed from the subset and fixed at its nominal
value. The final subsets used for each residual are
$\displaystyle\bm{\theta}^{r_{1}}$ $\displaystyle=$
$\displaystyle\left[R_{s},R_{p},R_{tva},C_{sa},C_{sv},C_{pa},E_{m,ra},E_{m,rv},E_{m,lv},T_{c,a},T_{r,v}\right],$
(29) $\displaystyle\bm{\theta}^{r_{2}}$ $\displaystyle=$
$\displaystyle\left[R_{s},R_{p},R_{tva},R_{sv},C_{sa},C_{sv},C_{pa},E_{m,ra},E_{m,rv},E_{m,lv},\tau_{c,a},T_{c,a},T_{c,v},T_{r,v}\right].$
(30)
Figure 5 shows the CoV of the final subsets for $\mathbf{r}_{1}$ and
$\mathbf{r}_{2}$. Table 4 and 5 list the estimated parameters using either
$\mathbf{r}_{1}$ or $\mathbf{r}_{2}$. These optimal values reflect the
optimization starting from the nominal guesses for each patient. We also
calculate the 95% parameter confidence intervals using eq. (26).
Figure 5: Multistart inference. For each patient, each subset is tested for identifiability using 8 randomized starting guesses within $\pm 20\%$ of the nominal value. The coefficients of variance (CoV) for the final parameter sets using (a) $\bm{r}_{1}$ or (b) $\bm{r}_{2}$ are below 10%.
Table 4: Estimated parameter values using $\mathbf{r_{1}}$ along with the $95\%$ confidence interval (depicted as $\hat{\theta_{i}}\pm 2\sigma_{\theta_{i}}$).
$\theta$ | P1 | P2 | P3 | P4 | P5 | P6 | P7 | P8 | P9
---|---|---|---|---|---|---|---|---|---
$R_{s}$ | 0.82$\pm$0.14 | 1.13$\pm$0.97 | 1.2$\pm$0.65 | 1.14$\pm$0.12 | 1.08$\pm$0.82 | 0.9$\pm$0.12 | 0.86$\pm$0.16 | 0.55$\pm$0.74 | 1.24$\pm$0.12
$R_{p}$ | 0.5$\pm$0.11 | 0.92$\pm$0.71 | 0.77$\pm$0.42 | 0.33$\pm$0.21 | 0.59$\pm$0.87 | 0.28$\pm$0.18 | 0.36$\pm$0.22 | 0.26$\pm$0.71 | 0.33$\pm$0.18
$R_{tva}$ | 0.02$\pm$1.63 | 0.12$\pm$3.95 | 0.12$\pm$2.26 | 0.07$\pm$0.57 | 0.09$\pm$3.01 | 0.01$\pm$3.29 | 0.01$\pm$2.03 | 0.03$\pm$1.57 | 0.05$\pm$0.65
$C_{sa}$ | 2.11$\pm$0.02 | 0.99$\pm$0.08 | 1.45$\pm$0.08 | 0.86$\pm$0.02 | 0.98$\pm$0.09 | 1.69$\pm$0.02 | 2.18$\pm$0.02 | 2.37$\pm$0.06 | 0.67$\pm$0.02
$C_{sv}$ | 57.86$\pm$0.36 | 22.36$\pm$0.7 | 14.26$\pm$0.48 | 45.68$\pm$0.45 | 21.16$\pm$0.87 | 34.67$\pm$0.18 | 15.67$\pm$0.13 | 13.4$\pm$0.41 | 35.67$\pm$0.25
$C_{pa}$ | 1.33$\pm$0.01 | 0.63$\pm$0.08 | 0.82$\pm$0.06 | 1.13$\pm$0.03 | 0.82$\pm$0.1 | 2.95$\pm$0.02 | 1.84$\pm$0.02 | 1.69$\pm$0.06 | 0.98$\pm$0.03
$E_{ra,m}$ | 0.06$\pm$0.37 | 0.23$\pm$0.71 | 0.28$\pm$0.48 | 0.1$\pm$0.44 | 0.25$\pm$0.81 | 0.1$\pm$0.18 | 0.16$\pm$0.13 | 0.26$\pm$0.39 | 0.12$\pm$0.25
$E_{rv,m}$ | 0.03$\pm$0.62 | 0.03$\pm$2.18 | 0.02$\pm$3.51 | 0.01$\pm$3.36 | 0.07$\pm$0.83 | 0.05$\pm$0.64 | 0.09$\pm$0.1 | 0.09$\pm$0.51 | 0.03$\pm$0.56
$E_{lv,m}$ | 0.02$\pm$0.13 | 0.03$\pm$2.66 | 0.04$\pm$1.2 | 0.07$\pm$0.15 | 0.11$\pm$0.51 | 0.05$\pm$0.35 | 0.1$\pm$0.29 | 0.13$\pm$0.76 | 0.09$\pm$0.14
$T_{c,a}$ | 0.73$\pm$0.31 | 0.74$\pm$0.54 | 0.9$\pm$0.68 | 0.9$\pm$0.23 | 0.85$\pm$0.91 | 0.83$\pm$0.85 | 0.86$\pm$0.44 | 0.67$\pm$0.94 | 0.79$\pm$0.13
$T_{r,v}$ | 0.48$\pm$0.04 | 0.52$\pm$2.22 | 0.76$\pm$0.85 | 0.5$\pm$0.48 | 0.6$\pm$1.19 | 0.56$\pm$0.63 | 0.58$\pm$0.07 | 0.54$\pm$0.62 | 0.56$\pm$0.32
Table 5: Estimated parameter values using $\mathbf{r_{2}}$ along with the
$95\%$ confidence interval (depicted as $\hat{\theta_{i}}\pm
2\sigma_{\theta_{i}}$).
$\theta$ | P1 | P2 | P3 | P4 | P5 | P6 | P7 | P8 | P9
---|---|---|---|---|---|---|---|---|---
$R_{s}$ | 0.77$\pm$0.82 | 1.14$\pm$1.31 | 1.25$\pm$0.95 | 1.12$\pm$0.32 | 1.15$\pm$0.57 | 0.9$\pm$0.2 | 0.79$\pm$0.42 | 0.58$\pm$0.28 | 1.21$\pm$0.21
$R_{p}$ | 0.49$\pm$0.1 | 0.9$\pm$0.14 | 0.76$\pm$0.13 | 0.34$\pm$0.1 | 0.58$\pm$0.12 | 0.28$\pm$0.07 | 0.36$\pm$0.09 | 0.26$\pm$0.05 | 0.33$\pm$0.09
$R_{tva}$ | 0.03$\pm$0.36 | 0.12$\pm$0.45 | 0.12$\pm$0.3 | 0.05$\pm$0.21 | 0.09$\pm$0.31 | 0.01$\pm$0.25 | 0.03$\pm$0.23 | 0.03$\pm$0.1 | 0.05$\pm$0.19
$R_{sv}$ | 0.02$\pm$1.77 | 0.02$\pm$2.77 | 0.01$\pm$3.42 | 0.01$\pm$1.84 | 0.01$\pm$1.69 | 0.01$\pm$1.07 | 0.05$\pm$1.03 | 0.01$\pm$0.58 | 0.02$\pm$1.11
$C_{sa}$ | 2.04$\pm$0.12 | 0.99$\pm$0.11 | 1.5$\pm$0.14 | 0.85$\pm$0.05 | 1.01$\pm$0.06 | 1.68$\pm$0.03 | 2.12$\pm$0.06 | 2.43$\pm$0.04 | 0.66$\pm$0.03
$C_{sv}$ | 35.45$\pm$0.47 | 23.24$\pm$0.23 | 15.7$\pm$0.13 | 42.86$\pm$0.21 | 24.05$\pm$0.08 | 33.5$\pm$0.1 | 12.16$\pm$0.2 | 14.42$\pm$0.02 | 31.12$\pm$0.16
$C_{pa}$ | 1.3$\pm$0.02 | 0.63$\pm$0.02 | 0.89$\pm$0.02 | 1.13$\pm$0.02 | 0.83$\pm$0.02 | 2.91$\pm$0.01 | 1.79$\pm$0.01 | 1.73$\pm$0.01 | 0.95$\pm$0.01
$E_{ra,m}$ | 0.07$\pm$0.25 | 0.22$\pm$0.25 | 0.24$\pm$0.2 | 0.11$\pm$0.16 | 0.22$\pm$0.12 | 0.1$\pm$0.07 | 0.19$\pm$0.09 | 0.23$\pm$0.06 | 0.13$\pm$0.12
$E_{rv,m}$ | 0.03$\pm$0.38 | 0.03$\pm$0.45 | 0.02$\pm$0.97 | 0.01$\pm$0.8 | 0.08$\pm$0.08 | 0.05$\pm$0.05 | 0.1$\pm$0.06 | 0.09$\pm$0.02 | 0.03$\pm$0.16
$E_{lv,m}$ | 0.02$\pm$0.44 | 0.03$\pm$0.61 | 0.04$\pm$0.53 | 0.06$\pm$0.22 | 0.11$\pm$0.19 | 0.05$\pm$0.08 | 0.11$\pm$0.17 | 0.14$\pm$0.09 | 0.09$\pm$0.17
$\tau_{c,a}$ | 0.48$\pm$0.28 | 0.43$\pm$0.43 | 0.63$\pm$0.22 | 0.69$\pm$0.1 | 0.41$\pm$0.3 | 0.6$\pm$0.09 | 0.54$\pm$0.16 | 0.44$\pm$0.08 | 0.55$\pm$0.11
$T_{c,a}$ | 0.91$\pm$0.14 | 0.74$\pm$0.17 | 0.92$\pm$0.12 | 0.9$\pm$0.08 | 0.85$\pm$0.1 | 0.87$\pm$0.05 | 0.98$\pm$0.05 | 0.69$\pm$0.03 | 0.82$\pm$0.06
$T_{c,v}$ | 0.31$\pm$0.04 | 0.27$\pm$0.05 | 0.2$\pm$0.06 | 0.29$\pm$0.03 | 0.22$\pm$0.04 | 0.34$\pm$0.02 | 0.35$\pm$0.03 | 0.24$\pm$0.02 | 0.32$\pm$0.02
$T_{r,v}$ | 0.5$\pm$0.03 | 0.51$\pm$0.04 | 0.76$\pm$0.04 | 0.62$\pm$0.03 | 0.48$\pm$0.04 | 0.56$\pm$0.02 | 0.58$\pm$0.02 | 0.51$\pm$0.01 | 0.59$\pm$0.02
We display the relative change between estimated PH parameters and
normotensive parameters in figure 6 as box-and-whisker plots to understand how
parameters change with PH. Note that estimated parameters shared between
$\bm{\theta}^{r_{1}}$ and $\bm{\theta}^{r_{2}}$ are nearly identical even with
additional parameters in $\bm{\theta}^{r_{2}}$. Parameters $R_{p}$, $R_{tva}$,
$E_{m,ra}$, $E_{m,rv}$, and $E_{m,lv}$ are consistently elevated in all PH
patients. The normotensive value of $R_{tva}$ is substantially smaller than
that of the PH patients, which explains the larger relative change compared to other
parameters in the subset. The timing parameters for the heart chambers,
compartment compliances, and systemic resistances $R_{s}$ and $R_{sv}$ remain
relatively close to normotensive values. The $R_{p}$-$C_{pa}$ (RC)
relationship was also determined from the inferred parameters. As shown in
figure 7, there is a clear inverse relationship between $R_{p}$ and $C_{pa}$
with the curve of best fit being $C_{pa}=0.6518/(0.1005+R_{p})$, $R^{2}=0.77$,
and constant RC time $R_{p}\cdot C_{pa}=0.55\pm 0.15$s.
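This fit is reproducible directly from the table 5 estimates; the sketch below (our illustration, using scipy's curve_fit) recovers the hyperbola and the stated RC time:

```python
import numpy as np
from scipy.optimize import curve_fit

# Inferred R_p and C_pa for patients 1-9 (table 5, residual r2).
R_p = np.array([0.49, 0.90, 0.76, 0.34, 0.58, 0.28, 0.36, 0.26, 0.33])
C_pa = np.array([1.30, 0.63, 0.89, 1.13, 0.83, 2.91, 1.79, 1.73, 0.95])

def hyperbola(R, a, b):
    return a / (b + R)  # C_pa = a / (b + R_p)

(a, b), _ = curve_fit(hyperbola, R_p, C_pa, p0=(0.65, 0.10))
rc_time = R_p * C_pa  # per-patient RC time (s)
print(a, b)                           # approximately 0.65 and 0.10
print(rc_time.mean(), rc_time.std())  # approximately 0.55 +/- 0.15 s
```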
Figure 6: Changes in parameters due to PH. Box and whisker plots showing
quantiles and outliers for the estimated parameters. Results show the relative
difference from the normotensive predictions.
Figure 7: Hyperbolic $R_{p}$-$C_{pa}$ relationship. Optimal values of $R_{p}$
and $C_{pa}$ for the normotensive (black) and PH (red) patients. The best-fit
curve is given by $C_{pa}=0.6518/(0.1005+R_{p})$, and is similar to previous
findings using isolated Windkessel models [56]. A PVR $\geq$ 3 Wood units
(3 WU = 0.18 mmHg s mL$^{-1}$) is considered in the PH range (dashed line).
### 3.3 Model forecasts and uncertainty
Post-inference predictions of pressure and CO using either $\bm{r}_{1}$ or
$\bm{r}_{2}$ are depicted in figure 8(a) along with the measured data from
patient 7. Predictions for all PH patients are included in the Supplemental
Material. Both $\bm{r}_{1}$ and $\bm{r}_{2}$ inference procedures are able to
match the static data well. Using $\bm{r}_{2}$ minimizes the mismatch between
the dynamic model outputs and the time-series data. Predictions of RA dynamics
improve drastically when including time-series data. In contrast, RV and PA
predictions improve only marginally. For five patients, CO predictions are
only slightly worse when matched using $\bm{r}_{2}$ vs. $\bm{r}_{1}$. However,
maximum and minimum pressure values still match the data well.
Figure 8: Optimal model predictions. (a) Optimal model fits for pressure,
$p_{i}$ (mmHg) and cardiac output, CO (L/min) using either $\bm{r}_{1}$
(dotted line) or $\bm{r}_{2}$ (solid line) compared to the data for patient 7.
(b) simulated pressure-volume loops in the ventricles and atria using residual
1 (dotted line) and residual 2 (solid line) for the dataset from patient 7.
A benefit of computational models is that essential but unmeasurable outcomes,
such as PV loops, can be predicted. We contrast PV loops from all four heart
chambers for the normotensive subject to the nine PH patients (using estimated
parameters from $\mathbf{r}_{2}$) in figure 9. Except for patients 1 and 2,
all PH patients have increased left atrial pressure. In contrast, RA PV loops
display elevated volumes and pressures relative to the normotensive simulation
for all patients. The RV and LV PV loops have similar shapes, yet the RV PV
loops in the PH group have a more drastic rise in pressure during isovolumic
contraction compared to the normotensive results.
We calculate SW for all four heart chambers by integrating simulated pressure
with respect to volume. These results and other model outcomes, including the
resistance and compliance ratios, $R_{p}/R_{s}$ and $C_{pa}/C_{sa}$, and the
pulsatility index PI, are shown in Table 6. Left atrial SW is lower in PH for
all but patients 5 and 8, and RA SW is higher in all PH patients relative to
the normotensive value. LV SW is lower in all CTEPH patients (3, 4, 5, and 9)
and two PAH patients (2 and 8), while RV SW is increased in all nine PH
patients. In general, there is a drastic increase in $R_{p}/R_{s}$ and
decrease in $C_{pa}/C_{sa}$ in PH relative to normotensive conditions. The PI
decreased in PH except in patient 1.
Figure 9: Simulated pressure-volume loops. Pressure-volume loops in the
normotensive (norm) subject and all nine PH patients are contrasted. Model
predictions include (a) left atrial, (b) right atrial, (c) left ventricular,
and (d) right ventricular pressure-volume loops.
Table 6: Model outcomes from normotensive and PH simulations.
Patient | SW (LA) | SW (LV) | SW (RA) | SW (RV) | $\mathbf{R_{p}/R_{s}}$ | $\mathbf{C_{pa}/C_{sa}}$ | PI
---|---|---|---|---|---|---|---
Norm | 0.031 | 1.676 | 0.013 | 0.223 | 0.08 | 3.84 | 4.25
1 | 0.010 | 1.556 | 0.064 | 0.882 | 0.63 | 0.64 | 5.37
2 | 0.020 | 0.868 | 0.041 | 0.532 | 0.79 | 0.64 | 2.40
3 | 0.023 | 1.150 | 0.034 | 0.618 | 0.60 | 0.59 | 2.52
4 | 0.018 | 1.502 | 0.039 | 0.590 | 0.30 | 1.33 | 3.94
5 | 0.038 | 0.779 | 0.043 | 0.368 | 0.50 | 0.83 | 1.63
6 | 0.012 | 1.728 | 0.024 | 0.488 | 0.31 | 1.74 | 1.87
7 | 0.009 | 1.618 | 0.042 | 0.640 | 0.45 | 0.84 | 1.76
8 | 0.038 | 0.892 | 0.022 | 0.423 | 0.44 | 0.71 | 1.07
9 | 0.021 | 0.888 | 0.035 | 0.324 | 0.27 | 1.44 | 2.85
Indices include stroke work (SW, Joule) in all four heart chambers, resistance
ratios (dimensionless), compliance ratios (dimensionless), and pulsatility
index (PI, dimensionless) calculated after estimating parameters using
$\bm{r}_{2}$. LA – left atrium, LV – left ventricle, RA – right atrium, RV –
right ventricle.
Parameter confidence intervals are provided in table 5. Model confidence and
prediction intervals for patient 7 are shown in figure 10 (see the
Supplemental Material for results from all nine patients) using either
residual vector. The confidence and prediction intervals show uncertainty in
mean pulmonary venous pressure (matched to PAWP data), CO, and maximum and
minimum pressures in the systemic arteries, RA, RV, and PA. The confidence
intervals for the RV and PA are smaller than those for the RA; the difference
is attributed to the larger mismatch between the RA data and model
simulations. Adding dynamic data in $\mathbf{r}_{2}$ increases the magnitude
of the sum of squared residuals, thus widening the prediction intervals in
figure 10b. Note that the PA, RA, and RV data nearly all fall within the 95%
prediction intervals shown in figure 10b.
Figure 10: Output uncertainty. Uncertainty in the model outputs for pressure,
$p_{i}$ (mmHg) and cardiac output, CO (L/min) using either $\bm{r}_{1}$ (a) or
$\bm{r}_{2}$ (b) for the quantity of interest.
## 4 Discussion
Electronic health records typically include RHC blood pressure measurements,
estimates of cardiac output, and systolic and diastolic blood pressure cuff
measurements in the systemic circulation. Traditionally, static pressures
(e.g., systolic & diastolic) are recorded, though the RHC also generates blood
pressure waveforms. Our goal is to examine if additional waveform data improve
model calibration and, therefore, characterization of PH and its phenotypes.
We use a systems-level cardiovascular model to characterize patient-specific
changes due to PH. We use a combination of sensitivity analyses, subset
selection, and multi-start inference to determine informative and identifiable
parameter subsets and estimate these parameters to patient RHC data. Results
show that the proposed model captures the hallmarks of PH both with and
without waveform data. We find increased RA, RV, and PA pressures, elevated
PVR, and reduced pulmonary arterial compliance in all PH patients. Finally, we
show that additional waveform data are essential in quantifying RA reservoir
and pump function. Overall, our results show that systems-level models can
capture patient-specific PH dynamics and parallel the current clinical
understanding of the disease.
### 4.1 Sensitivity analyses
Sensitivity analysis is crucial for determining which parameters influence the
model output. Our model has 25 parameters, yet limited data and the structure
of the model make inferring all the parameters infeasible. We use local and
global sensitivity analyses on two residual vectors: one comparing static
outputs and another comparing both static and dynamic outputs. Both methods consistently
identify 16 influential and six non-influential parameters, independent of the
technique and residual. Three parameters, $[R_{sv},\,T_{c,v},\,T_{r,v}]$, are
excluded from the sets as they are not consistently influential across the two
techniques. The influential parameters are candidates to be inferred, while
the non-influential parameters will be kept fixed at their nominal value.
The pulmonary valve resistance ($R_{pva}$) is non-influential; this parameter
is directly associated with the coupling between the RV and PA. However, none
of the PH patients in this study have a history of pulmonary valve stenosis.
Thus it is reasonable to keep this parameter fixed at its nominal value. The
pulmonary venous ($R_{pv}$) and mitral valve ($R_{mva}$) resistances are also
not influential. Since we do not have left heart data, the residuals do not
include left heart quantities, and therefore we expect these to be non-
influential. This finding agrees with previous studies [36, 17, 24] that fix
the valve resistances.
Both local and global analysis techniques are essential, as each highlights
different model features. Global sensitivities identify influential parameters
over the physiological parameter range, while local sensitivities are
evaluated at known values. Global sensitivity analysis samples parameters over
the physiological range, but due to nonlinear model behavior, this could
include combinations that generate a non-physiological output. The local
analysis, in turn, only provides a snapshot of the sensitivities; again, since
the model is nonlinear, a parameter's influence may change as its value
changes, i.e., a parameter influential before optimization could be
non-influential after optimization. For example (see figure 4), the atrial timing parameter
$\tau_{c,a}$ is less influential for patients 3 and 5 than for the other PH
patients, and $E_{M,la}$ is less influential for patient 4. These results
agree with Marquis et al. [36], where LV elastance and systolic timing
parameters varied across each rat. Global sensitivity analysis cannot identify
these discrepancies, as it integrates the sensitivity over the physiological
parameter space.
Finally, while influential parameters are consistent between methods,
individual parameters may have a different ranking. As shown in figure 4, the
maximal atrial elastance $E_{ra,M}$ is the second most influential parameter
in the global analysis, whereas the local analysis ranks the parameter
significantly lower. This can be attributed to interactions between $E_{M,ra}$
and $E_{ra,m}$, which account for the RA reservoir and pump function. Small
changes in $E_{ra,m}$ drastically affect maximum and minimum pressure values
while changes in $E_{ra,M}$ only affect the model output when $E_{ra,M}\gg
E_{ra,m}$. Though the ranking of $E_{M,ra}$ differs, $E_{ra,m}$ is always
influential.
Deficiencies in RA reservoir and contractile function are strong predictors of
mortality in PH [1]. RA filling during ventricular diastole is dictated by
systemic venous dynamics and tricuspid valve integrity. In the model, RA
systolic and diastolic pressures are determined by minimum elastance
$E_{ra,m}$, which is always influential. The tricuspid valve resistance
$R_{tva}$ forms the interface for RA-RV interactions. Hence, this parameter
influences the relationship between the two heart chambers throughout the
cardiac cycle. The high sensitivity of RA predictions to these parameters
mimics the current physiological understanding of altered RA function in PH
[1].
Two of the three parameters characterized differently between the local and
global methods are timing parameters dictating contraction and relaxation of
the heart. The timing of heart contraction and relaxation are well
approximated from dynamic pressure data. Hence, the uncertainty in these
parameters (i.e., the bounds for global sensitivity sampling) is substantially
smaller ($\pm 10-15\%$) than other model parameter uncertainty ($\pm 400\%$).
This partly explains why the Sobol’ indices are smaller than the local
sensitivities. Since our nominal timing parameter values are well informed,
the local analysis is more relevant and is used to determine timing parameter
influence.
The final parameter with varying influence is $R_{sv}$, the systemic venous
resistance. This parameter impacts central venous pressure and RA filling. As
we discuss later, while at the border between influential and non-influential,
the parameter is essential to predict atrial dynamics.
### 4.2 Parameter inference and subset selection
We fix non-influential parameters at their nominal values; however, this does
not guarantee that the parameter subset is practically identifiable [38, 24].
We combine SVD-QR subset selection and multistart parameter inference to
determine an identifiable parameter subset. SVD-QR methods reduce the number
of parameters [44], and multistart inference tests if solutions to the inverse
problem are unique. For each patient, our results provide consistent parameter
estimates across both residuals. Results reveal that the model with static
data has 11 identifiable parameters, while the model with static and dynamic
data has 14 identifiable parameters. An important observation is that the
identifiable parameter subsets are subsets of each other, i.e.,
$\bm{\theta}^{r_{1}}\subset\bm{\theta}^{r_{2}}$. These results demonstrate
that the patient-specific model is robust.
Our finding that sensitivity analysis alone is inadequate to determine
identifiable parameters agrees with results reported in the literature. For
example, Schiavazzi et al. [49] reported that sensitivity analysis does not
guarantee unique parameter estimates. The authors use multistart inference to
interrogate parameter identifiability and reduce their parameter subset. We
use a similar technique. A CoV cutoff of 10%, shown in figure 5, ensures that
parameter estimates are robust to simulations with 20% uncertainty in initial
guesses.
As shown in figure 6, the identifiable parameters $R_{p}$, $R_{tva}$, $E_{m,ra}$,
$E_{m,rv}$, and $E_{m,lv}$ are elevated in PH. The parameters $R_{p}$ and
$R_{tva}$ have the largest relative increase. PVR is a known biomarker of PH
disease severity; it is elevated in both PAH and CTEPH [27, 58]. The increase
in minimum elastance in the RA and RV indicates chamber stiffening, as
reported in PH [57]. An elevated end-diastolic elastance, $E_{m,rv}$, is
negatively correlated with RA reservoir, passive, and active strain [57],
suggesting that RA and RV functions deteriorate during PH progression. We also
observe a slight elevation in minimal LV elastance $E_{m,lv}$, correlating
with impaired LV function due to rightward septal bulging [40]. Another
important disease biomarker is pulmonary arterial compliance $C_{pa}$, which
measures arterial distensibility. Figure 6 shows a relative decrease in
$C_{pa}$ with PH, which, consistent with the literature [20], reflects the
stiffening of the proximal pulmonary arteries due to constitutive changes
(e.g., collagen accumulation) [23].
Several studies [20, 58, 56, 3, 32] have emphasized the inverse relationship
between $R_{p}$ and $C_{pa}$ in the pulmonary circulation, often referred to
as RC-time, $\tau=R_{p}C_{pa}$. Tedford et al. [56] report an inverse-
hyperbolic relationship from analysis of data from 1,009 patients with PH and
normal pulmonary capillary wedge pressure with best-fit curve
$C_{pa}=0.564/(0.047+R_{p})$ and RC time $\tau=0.48\pm 0.17$. Similarly, the
retrospective study by Assad et al. [3] found that the RC time is $\tau=0.7\pm
0.34$ in PAH patients (n=593) with a best-fit curve
$C_{pa}=0.775/(0.045+R_{p})$. They also noted that the inverse-hyperbolic RC-
time relationship is nearly identical for PAH and group 2 PH patients. Figure
7 shows this relationship from our patient cohort. The best fit curve
$C_{pa}=0.6518/(0.1105+R_{p})$ and constant RC time $\tau=0.55\pm 0.15$ are
consistent with results from these studies [56, 3]. Our results were obtained
from analysis of a closed-loop model, whereas the original RC times were
computed using an isolated Windkessel model. This suggests that our systems-
level model reproduces key features across large PH cohorts.
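The best-fit hyperbola can be reproduced with a standard nonlinear least-squares fit; the sketch below uses placeholder per-patient values (not our cohort data) purely to illustrate the procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def rc_hyperbola(Rp, a, b):
    # Inverse-hyperbolic RC relationship: Cpa = a / (b + Rp)
    return a / (b + Rp)

# Placeholder values; substitute the inferred Rp (mmHg s/mL) and
# Cpa (mL/mmHg) estimates from the model calibration.
Rp = np.array([0.15, 0.35, 0.60, 0.85, 1.10])
Cpa = np.array([2.45, 1.40, 0.92, 0.68, 0.54])

(a, b), _ = curve_fit(rc_hyperbola, Rp, Cpa, p0=(0.6, 0.1))
tau = Rp * Cpa                                   # RC time per patient
print(f"Cpa = {a:.3f}/({b:.3f} + Rp), tau = {tau.mean():.2f} +/- {tau.std():.2f}")
```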
The parameters in the static and dynamic residuals, including the systemic
venous resistance controlling flow from the systemic veins to the RA,
significantly affect RA filling. PH patients have a small reduction in
$R_{sv}$ relative to the normotensive patients, increasing systemic venous
inflow and diastolic RA filling. Growing evidence suggests that RA function is
impaired during PH, though little is known about how RA-RV coupling is altered
during disease progression [18, 1]. Using dynamic RA data for model
calibration may provide new insight into the mechanisms of RA contractile and
reservoir deterioration with RV dysfunction. Changes in RA contractile timing
can only be observed with dynamic pressure data. Other parameters only in the
dynamic residual include $T_{c,v},T_{c,a}$, and $\tau_{c,a}$. These parameters
are all associated with the timing of heart function, i.e., the generation of
the waveforms. Alenezi et al. [1] studied RA strain across 67 PAH
subjects using speckle-tracking imaging. The study found that RA dysfunction
was an independent predictor of mortality, and that RA strain rate (which is
time dependent) correlates with PAH severity. Future investigations using
modeling with RA pressure and strain data may reveal additional indicators of
RA dysfunction and PAH severity.
As shown in figure 8, including more data in the parameter inference procedure
not only increases the number of identifiable parameters but also changes
model predictions and inferred parameter values. Both residuals account for
systolic, diastolic, and mean values, which are well matched by the model
across all patients. Dynamic PA and RV predictions are unchanged between
$\mathbf{r}_{1}$ and $\mathbf{r}_{2}$. This is attributed to good nominal
estimates of the ventricular timing parameters $T_{c,v}$ and $T_{r,v}$, i.e.,
the optimized values are close to nominal values. In contrast, there is a
significant change in simulated RA dynamics when calibrating the model to
dynamic pressure data. The intricate dynamics of atrial filling and
contraction make it difficult to identify the RA timing parameters from
pressure data visually. The PV loops in figure 8 show large changes in atrial
dynamics when comparing $\mathbf{r}_{1}$ to $\mathbf{r}_{2}$. Domogo and
Ottesen [15] studied left atrial dynamics using a systems-level
model. Their model has a more sophisticated atrioventricular coupling, but the
authors noted that an elastance model can capture dynamic atrial data. The
time-varying dynamics of the atria are more complex, demonstrating the need
for dynamic rather than static data for model calibration. The RA is gaining
traction as a biomarker for PH severity [1, 57]. Hence our ability to
calibrate RA dynamics may provide further insight into the progression of RA-
RV-PA dysfunction in PH.
In the absence of volume data, we included additional volume constraints in
our inference procedure. It is well established that both PAH and CTEPH cause
increased RV myocardial remodeling, including wall thickening and dilatation
[57]. Penalizing the inference procedure to ensure BSA-indexed blood volumes
in all cardiac chambers constrains the model forecasts to volumes seen in
clinical studies [57]. The addition of constraints leads to increased RA
filling volumes and pressure magnitudes, as noted by Tello et al. [57].
Moreover, as shown in figure 9, the RV PV loop has a rightward shift but is
comparable in shape to its LV counterpart. This shift is known to occur in PH
[55], increasing RV end-systolic elastance. While not modeled explicitly, our
results show a reduction in LV PV loop area and SW due to RV dysfunction. A
recent study by Jayasekera et al. [28] reported significant decreases in LV
strain and prominent LV mechanical dyssynchrony in a cohort of patients with
PAH.
We predicted several outcomes using our model simulations, including cardiac
SW, a known indicator of cardiac oxygen consumption and overall cardiomyocyte
function. Clinically, SW is calculated as the product of stroke volume and
mean arterial pressure; using the model, SW is calculated more accurately by
determining the area inside the PV loop. Both left and right heart SW, listed
in table 6, change in PH. In general, LV SW decreases while right heart SW
increases in PH. These findings agree with the retrospective clinical analysis
by Chemla et al. [9], who found that RV SW is doubled in PH. Increased RV SW
is linked to severe pediatric PAH in a study by Yang et al. [61], who also used
a compartment model to generate PV loops. Without volume data, our model can
provide these indicators of disease severity, making them clinically relevant.
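A minimal sketch of the PV-loop calculation of SW, assuming the simulated loop is available as ordered volume and pressure samples over one cardiac cycle:

```python
import numpy as np

MMHG_ML_TO_J = 1.33322e-4   # 1 mmHg*mL expressed in Joules

def stroke_work(volume, pressure):
    """Stroke work as the area enclosed by one pressure-volume loop,
    computed with the shoelace formula. volume in mL, pressure in mmHg;
    the samples must trace one full cardiac cycle in order."""
    v = np.asarray(volume, dtype=float)
    p = np.asarray(pressure, dtype=float)
    area = 0.5 * abs(np.dot(v, np.roll(p, -1)) - np.dot(p, np.roll(v, -1)))
    return area * MMHG_ML_TO_J
```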
### 4.3 Uncertainty quantification
We efficiently determined both parameter and output uncertainty using
frequentist analyses. This study only infers identifiable parameters.
Parameters that are more influential have narrower confidence intervals
compared to less influential parameters (see table 5). A consequence of narrow
parameter bounds is that the model confidence and prediction intervals that
are sensitive to these influential parameters contain the measured data
remarkably well for both residuals.
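The frequentist intervals can be sketched as follows, given a least-squares optimum with its residual vector and Jacobian in hand; this is the standard asymptotic construction, not necessarily the authors' exact implementation.

```python
import numpy as np
from scipy import stats

def asymptotic_ci(theta, J, r, level=0.95):
    """Asymptotic confidence intervals for a least-squares optimum.

    theta: optimal parameters (p,), J: residual Jacobian at theta (n, p),
    r: residual vector at theta (n,). Assumes iid Gaussian errors."""
    n, p = J.shape
    s2 = (r @ r) / (n - p)                 # error-variance estimate
    cov = s2 * np.linalg.inv(J.T @ J)      # asymptotic parameter covariance
    half = stats.t.ppf(0.5 + level / 2, n - p) * np.sqrt(np.diag(cov))
    return theta - half, theta + half
```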
Output uncertainty is compared in figure 10 for the two residuals
$\mathbf{r}_{1}$ and $\mathbf{r}_{2}$. Model outputs computed using
$\mathbf{r}_{1}$ have relatively small uncertainty for static targets. For
$\mathbf{r}_{2}$, including both static and dynamic data, the uncertainty
increases significantly, likely due to the increased complexity of the inverse
problem. The least squares error is significantly higher, and even though the
model does an excellent job fitting data, there are parts of the waveform that
the simple lumped model used here cannot reproduce. However, we gain
information about the dynamic output uncertainty in dynamic RA, RV, and PA
predictions using $\mathbf{r}_{2}$. This better quantifies the expected beat-
to-beat variation we would expect to see on continuous RHC monitoring. In
general, a more liberal estimate of uncertainty, as shown for $\mathbf{r}_{2}$,
reduces the chance of a prediction biased by a single heart beat of
data.
Other groups have performed uncertainty quantification on cardiovascular
models. The study by Harrod et al. [24] investigated PA pressure uncertainty
using Markov chain Monte Carlo sampling. Their study focuses on uncertainties
in model outputs using a normotensive parameter set, whereas our work explores
output uncertainty using parameters indicative of PH. To our knowledge, this
is the first study to consider output uncertainty in a systems-level
cardiovascular model of PH. Several authors have performed uncertainty
quantification using one-dimensional [11, 41] or three-dimensional [48] fluid
dynamics models, which are fundamentally different than the systems-level
model used here. Colebank et al. [11] found that uncertainty bounds around PA
pressures were nearly identical between frequentist and Bayesian methods. The
study also compared uncertainty across normotensive and hypoxia-induced PH
mice. It showed larger uncertainty in the normotensive mice due to a larger
discrepancy in the model fit to data. We see a similar trend in our results,
with larger uncertainty typically attributed to patients with more complex RA
dynamics (see Supplemental Material). Our 0D model cannot capture the wave-
reflection dynamics resolved by a one-dimensional hemodynamics model. Yet, it
does capture the global diastolic decay in PA pressure, as shown in figure 10.
We match the model to RV dynamics exceptionally well; note the narrow
confidence and prediction intervals in figure 10. The study by Yang et al.
[61] captured RV mechanics in PH using a simplified, open-loop model. We show
that a more complex model accounting for the systemic circulation and left
heart can still predict RV dynamics with high accuracy.
### 4.4 Limitations
This study has several limitations. Our model accounts for LV and RV dynamics
without including interventricular interaction through the septal wall.
Several studies have included this mechanism in the modeling framework [40,
12], which is important for understanding RV effects on LV function. Adding
this model component provides a next step in understanding biventricular
function during PH progression [28]. We use data from 9 patients, 4 of whom
have CTEPH while the other 5 have PAH. We do not have a sufficiently large
sample size to deduce differences in PH phenotypes, though recent studies have
found differences in the biomechanics of the two subgroups [45]. Our inference
procedure enforces cardiac volumes that match previously recorded BSA-indexed
values; additional volume data (e.g., from a conductance catheter) would
better inform the model calibration. Yet these were not available for the
patients studied. Lastly, it is well established that PH disproportionately
affects women, with sex differences being a significant area of attention in
the PH community [10]. Combining a larger, more diverse patient cohort with
the parameter inference performed here may elucidate sex-dependent differences
in RA, RV, and PA parameters. Our study is a proof of concept that patient-
specific models can be constructed from RHC data, laying the foundation for
future studies on a larger population of patients.
## 5 Conclusions
This study uses a 0D, systems-level hemodynamics model to predict changes in
cardiovascular parameters due to PH. We utilize sensitivity analyses and
subset selection techniques to deduce the best parameter subsets for two
residuals: one with static data and one with additional dynamic RA, RV, and PA
pressure waveforms. Our results show that adding time-series waveform data
allows for additional parameters $R_{sv},\,\tau_{c,a},\,T_{c,v}$ to be
estimated without altering estimates in the static-only residual. These
additional parameters better describe RA pump and reservoir function, which
has been the focus of recent attention in the PH community [1]. Overall, model
outcomes are consistent with the physiological understanding of the disease;
PH induces increased PVR, decreased pulmonary arterial compliance, and
elevated minimum RA and RV elastance, leading to increased mPAP. While the
uncertainty in model predictions is smaller for the static residual, adding
time-series data provides useful insight into uncertainty in dynamic
predictions. Our study provides evidence that systems-level models can be
tuned to fit PH data. By combining static and dynamic data, the model can
predict right atrial function, which is important for differentiating PH
subtypes. The framework devised here may be able to explain the mechanisms
contributing to abnormal RA, RV, and PA function in PH.
## Citation Diversity Statement
In agreement with the editorial from the Biomedical Engineering Society (BMES)
[47] on biases in citation practices, we have analyzed the gender and race of
our bibliography. This is done manually, though automatic probabilistic tools
exist (e.g., https://zenodo.org/record/4104748#.YvVXpnbMI2z). We recognize
existing race and gender biases in citation practices and promote the use of
diversity statements encouraging fair gender and racial author inclusion.
Our references, including those in the Supplemental Material, contain 15.15%
woman(first)/woman(last), 13.64% man/woman, 16.67% woman/man, and 54.55%
man/man. This binary gender categorization is limited because it cannot
account for intersex, non-binary, or transgender people. In addition, our
references contain 6.06% author of color (first)/author of color(last), 12.12%
white author/author of color, 18.18% author of color/white author, and 63.64%
white author/white author. Our approach to gender and race categorization is
limited in that gender and race are assigned by us based on publicly available
information and online media. We look forward to future databases allowing all
authors to self-identify race and gender in an appropriate, anonymized, and
searchable fashion and new research that enables and supports equitable
practices in science.
### Data Accessibility.
The code and data used to produce these results can be found at
https://github.com/mjcolebank/CDG_NCSU/.
### Author Contributions.
ALC: conceptualization, formal analysis, investigation, methodology, software,
validation, visualization, writing—original draft, writing—review and editing;
MJC: conceptualization, formal analysis, investigation, methodology, software,
validation, visualization, writing—original draft, writing—review and editing;
REU: conceptualization, investigation, and methodology; MSO:
conceptualization, investigation, methodology, and editing
### Competing interests.
The authors declare they have no competing interests.
### Funding.
The project described was supported by the National Center for Research
Resources and the National Center for Advancing Translational Sciences,
National Institutes of Health, through Grant #TL1 TR001415 (MJC), the National
Heart Lung Blood Institute, National Institute of Health #R01HL147590 (MSO),
the National Science Foundation, Division of Mathematical Sciences, National
Science Foundation #1615820 (MSO) and Research Training Group Award #1246991
(MSO, MJC, REU), and The National GEM Consortium, GEM Graduate Fellowship
(ALC). The content is solely the responsibility of the authors and does not
necessarily represent the official views of the NIH or NSF. The funders had no
role in study design, data collection and analysis, decision to publish, or
preparation of the manuscript.
### Acknowledgements.
We thank the Research Experience for Undergraduates (RTG-REU) participants
Macie King, Christopher Schell, Matt Sheldon, Mariam Kharbat, and Robert
Sternquist, who took part in the summer 2019 program. We thank Dr.
Martin Johnson (Golden Jubilee Hospital, Glasgow, Scotland) and Dr. Sudarshan
Rajagopal (Duke University) for providing the patient data.
## References
* [1] F Alenezi, A Mandawat, ZJ. Il’Giovine, LK Shaw, I Siddiqui, VF Tapson, K Arges, D Rivera, MMD Romano, EJ Velazquez, PS Douglas, Z Samad, and S Rajagopal. Clinical utility and prognostic value of right atrial function in pulmonary hypertension. Circ Cardiovasc imag, 11:e006984, 2018.
* [2] F Alenezi, S Rajagopal, and S Kutty. Assessing right atrial function in pulmonary hypertension: window to the soul of the right heart? Am J Physiol, 318:H154–H155, 2020.
* [3] TR Assad, EL Brittain, QS Wells, EH Farber-Eger, SJ Halliday, LN Doss, M Xu, L Wang, FE Harrell, C Yu, IM Robbins, JH Newman, and AR Hemnes. Hemodynamic evidence of vascular remodeling in combined post- and precapillary pulmonary hypertension. Pulm Circ, 6:313–321, 2016.
* [4] JE Beneken and B De Wit. A physical approach to hemodynamic aspects of the human cardiovascular system., pages 1–45. Saunders, Philadelphia, PA, 1966.
* [5] NL Bjørdalsbakke, JT Sturdy, DR Hose, and LR Hellevik. Parameter estimation for closed-loop lumped parameter models of the systemic circulation using synthetic data. Math Biosci, 343:108731, 2022.
* [6] WF Boron and EL Boulpaep. Medical Physiology. Elsevier, Philadelphia, PA, 3rd edition, 2017.
* [7] M Calvo, VL Rolle, D Romero, N Behar, P Gomis, P Mabo, and AI Hernandez. Global sensitivity analysis of a cardiovascular model for the study of the autonomic response to head-up tilt testing. In Proc Ann Int Conf IEEE Eng Med Biol Soc, EMBS, volume 2018, pages 5458–5461, 2018.
* [8] JO Campos, J Sundnes, RW Dos Santos, and BM Rocha. Uncertainty quantification and sensitivity analysis of left ventricular function during the full cardiac cycle: Uq and sa in cardiac mechanics. Philos Trans Royal Soc A, 378:20190381, 2020.
* [9] D Chemla, V Castelain, K Zhu, Y Papelier, N Creuzé, S Hoette, F Parent, G Simonneau, M Humbert, and P Herve. Estimating right ventricular stroke work and the pulsatile work fraction in pulmonary hypertension. Chest, 143:1343–1350, 2013.
* [10] T-C Cheng, DM Tabima, LR Caggiano, AL Frump, TA. Hacker, JC Eickhoff, T Lahm, and NC Chesler. Sex differences in right ventricular adaptation to pressure overload in a rat model. J Appl Physiol, 132:888–901, 2022.
* [11] M Colebank, MU Qureshi, and MS Olufsen. Sensitivity analysis and uncertainty quantification of 1‐d models of pulmonary hemodynamics in mice under control and hypertensive conditions. Int J Numer Meth Biomed Eng, 37:e3242, 2019.
* [12] AL Colunga, KG Kim, NP Woodall, TF Dardas, JH Gennari, MS Olufsen, and BE Carlson. Deep phenotyping of cardiac function in heart transplant patients using cardiovascular system models. J Physiol, 598:3203–3222, 2020.
* [13] DJW De Pauw and PA Vanrolleghem. Using the complex-step derivative approximation method to calculate local sensitivity functions of highly nonlinear bioprocess models. Proc 17th IMACS World Congress on Scientific Computation, Applied Mathematics and Simulation (IMACS 2005), 2005.
* [14] AH DeCherney and GS Berkowitz. Simplified calculation of body-surface area. NEJM, 306:424–426, 1982.
* [15] AA Domogo and JT Ottesen. Patient-specific parameter estimation: Coupling a heart model and experimental data. J Theor Biol, 526:110791, 2021.
* [16] V Eck, W Donders, J Sturdy, J Feinberg, T Delhaas, L Hellevik, and W Huberts. A guide to uncertainty quantification and sensitivity analysis for cardiovascular applications. Int J Numer Meth Biomed Eng, 26:807–827, 2016.
* [17] LM Ellwein, HT Tran, C Zapata, V Novak, and MS Olufsen. Sensitivity analysis and model assessment: Mathematical models for arterial blood flow and blood pressure. Cardiovasc Eng, 8:94–108, 2008.
* [18] AU Fayyaz, WD Edwards, JJ Maleszewski, EA Konik, HM DuBrock, BA Borlaug, RP Frantz, SM Jenkins, and MM Redfield. Global pulmonary vascular remodeling in pulmonary hypertension associated with heart failure and preserved or reduced ejection fraction. Circulation, 137:1796–1810, 2018.
* [19] M Foshat and N Boroumand. The evolving classification of pulmonary hypertension. Arch Pathol Lab Med, 141:696–703, 2017.
* [20] TA Gelzinis. Pulmonary hypertension in 2021: part i-definition, classification, pathophysiology, and presentation. J Cardiothorac Vasc Anesth, 36:1552–1564, 2022.
* [21] JW Gerringer, JC Wagner, D Vélez-Rendón, and D Valdez-Jasso. Lumped-parameter models of the pulmonary vasculature during the progression of pulmonary arterial hypertension. Physiol Reports, 6:e13586, 2018.
* [22] G Golub, V Klema, and GW Stewart. Rank degeneracy and least squares problems. Technical Report STAN-CS-76-559, Dept of Comput Sci, Stanford University, Stanford CA, 1976.
* [23] S Guigui, SI Zaidi, JJ Lee, T Elajami, CG Mihos, and E Escolar. Relationship between compliance and pulmonary vascular resistance in pulmonary arterial hypertension. J Thorac Dis, 12:2971–2976, 2020.
* [24] KK Harrod, JL Rogers, JA Feinstein, AL Marsden, and DE Schiavazzi. Predictive modeling of secondary pulmonary hypertension in left ventricular diastolic dysfunction. Front Physiol, 12:1–23, 2021.
* [25] JU Hidalgo, SB Nadler, and T Bloch. The use of the electronic digital computer to determine best fit of blood volume formulas. J Nuclear Med, 3.2:94–99, 1962.
* [26] MM Hoeper, H-A Ghofrani, E Grünig, H Klose, H Olschewski, and S Rosenkranz. Pulmonary hypertension. Deutsches Arzteblatt international, 114:73–84, 2017.
* [27] M Humbert. Pulmonary arterial hypertension and chronic thromboembolic pulmonary hypertension: Pathophysiology. Eur Respir, 19:59–63, 2010.
* [28] G Jayasekera, A Macdonald, C. Mccomb, V Orchard, D Welsh, C Church, M Johnson, M Brewis, C Berry, A Radjenovic, and A Peacock. Left ventricular dysfunction and intra-ventricular dyssynchrony in idiopathic pulmonary arterial hypertension. Int J Cardiol, 365:131–139, 2022.
* [29] N Kawel-Boehm, A Maceira, ER Valsangiacomo-Buechel, J Vogel-Claussen, EB Turkbey, R Williams, S Plein, M Tee, J Eng, and DA Bluemke. Normal values for cardiovascular magnetic resonance in adults and children. J Cardiovasc Magn Res, 17:1–33, 2015.
* [30] CT Kelley. Iterative Methods for Optimization. SIAM, Philadelphia, PA, 1999.
* [31] E Kung, G Pennati, F Migliavacca, TY Hsia, R Figliola, A Marsden, A Giardini, and M Investigators. A simulation protocol for exercise physiology in fontan patients using a closed loop lumped-parameter model. J Biomech Eng, 136:1–14, 2014.
* [32] JW Lankhaar, N Westerhof, TJC Faes, KMJ Marques, JT Marcus, PE Postmus, and A Vonk-Noordegraaf. Quantification of right ventricular afterload in patients with and without pulmonary hypertension. Am J Physiol, 291:H1731–H1737, 2006.
* [33] F Liang, S Takagi, R Himeno, and H Liu. Multi-scale modeling of the human cardiovascular system with applications to aortic valvular and arterial stenoses. Med Biol Eng Comput, 47:743–755, 2009.
* [34] G Mader, MS Olufsen, and A Mahdi. Modeling cerebral blood flow velocity during orthostatic stress. Ann Biomed Eng, 43:1748–1758, 2015.
* [35] SA Mandras, HS Mehta, and A Vaidya. Pulmonary hypertension: A brief guide for clinicians. Mayo Clin Proc, 95:1978–1988, 2020.
* [36] AD Marquis, A Arnold, C Dean-Bernhoft, BE Carlson, and MS Olufsen. Practical identifiability and uncertainty quantification of a pulsatile cardiovascular model. Math Biosci, 304:9–24, 2018.
* [37] S Mazimba, TS Welch, H Mwansa, KK Breathett, JLW Kennedy, AD Mihalek, WC Harding, MM Mysore, DX Zhuo, and KC Bilchick. Haemodynamically derived pulmonary artery pulsatility index predicts mortality in pulmonary arterial hypertension. Heart Lung Circ, 28:752–760, 2019.
* [38] H Miao, X Xiaohua, AS Perelson, and H Wu. On identifiability of nonlinear ode models and applications in viral dynamics. SIAM Rev, 53:3–39, 2011.
* [39] MS Olufsen and JT Ottesen. A practical approach to parameter estimation applied to model predicting heart rate regulation. J Math Biol, 67:39–68, 2013.
* [40] G Palau-Caballero, J Walmsley, V Van Empel, J Lumens, and T Delhaas. Why septal motion is a marker of right ventricular failure in pulmonary arterial hypertension: Mechanistic analysis using a computer model. Am J Physiol, 312:H691–H700, 2017.
* [41] LM Paun, MJ Colebank, MS Olufsen, NA Hill, and D Husmeier. Assessing model mismatch and model selection in a bayesian uncertainty quantification analysis of a fluid-dynamics model of pulmonary blood circulation. J R Soc Interface, 17:20200886, 2020.
* [42] AJ Peacock and R Naeije. Pulmonary Circulation: Diseases and Their Treatment. Taylor and Francis, an imprint of CRC Press, Boca Raton, FL, 4th edition, 2016.
* [43] A Pironet, PC Dauby, S Paeme, S Kosta, JG Chase, and T Desaive. Simulation of left atrial function using a multi-scale model of the cardiovascular system. PLoS ONE, 8:e65146, 2013.
* [44] SR Pope, LM Ellwein, CL Zapata, V Novak, CT Kelley, and MS Olufsen. Estimation and identification of parameters in a lumped cerebrovascular model. Math Biosci Eng, 6:93–115, 2009.
* [45] F Raza, C Kozitza, C Lechuga, D Seiter, P Corrado, M Merchant, N Dharmavaram, C Korcarz, M Eldridge, C Francois, O Wieben, and N Chesler. Multimodality deep phenotyping methods to assess mechanisms of poor right ventricular–pulmonary artery coupling. Function, 3:1–5, 2022.
* [46] AE Rimehaug, O Lyng, DO Nordhaug, L Løvstakken, P Aadahl, and I Kirkeby-Garstad. Cardiac power integral: a new method for monitoring cardiovascular performance. Physiol Rep, 1:e00159, 2013.
* [47] B Rowson, SM Duma, MR King, I Efimov, A Saterbak, and NC Chesler. Citation diversity statement in bmes journals. Ann Biomed Eng, 49:947–949, 2021.
* [48] DE Schiavazzi, G Arbia, C Baker, AM Hlavacek, TY Hsia, AL Marsden, IE Vignon‐Clementel, and Modeling of Congenital Hearts Alliance (MOCH). Uncertainty quantification in virtual surgery hemodynamics predictions for single ventricle palliation. Int J Numer Meth Biomed Eng, 32:e02737, 2016.
* [49] DE Schiavazzi, A Baretta, G Pennati, TY Hsia, and AL Marsden. Patient-specific parameter estimation in single-ventricle lumped circulation models under uncertainty. Int J Numer Meth Biomed Eng, 33:1–34, 2017.
* [50] S Shimizu, T Shishido, D Une, A Kamiya, T Kawada, S Sano, and M Sugimachi. Right ventricular stiffness constant as a predictor of postoperative hemodynamics in patients with hypoplastic right ventricle: a theoretical analysis. J Physiol Sci, 60:205–212, 2010.
* [51] G Simonneau, D Montani, DS Celermajer, CP Denton, MA Gatzoulis, M Krowka, PG Williams, and R Souza. Haemodynamic definitions and updated clinical classification of pulmonary hypertension. Eur Respir J, 53:1801913, 2019.
* [52] RC Smith. Uncertainty Quantification: Theory, Implementation, and Applications, volume 12 of Computational Science and Engineering. SIAM, Philadelphia, PA, 2013.
* [53] IM Sobol. Global sensitivity indices for nonlinear mathematical models and their monte carlo estimates. Math Comput Simul, 55:271–280, 2001.
* [54] T Sumner, E Shephard, and IDL Bogle. A methodology for global-sensitivity analysis of time-dependent outputs in systems biology modelling. J R Soc Interface, 9:2156–2166, 2012.
* [55] DM Tabima, JL Philip, and NC Chesler. Right ventricular-pulmonary vascular interactions. Physiology, 32:346–356, 2017.
* [56] RJ Tedford, PM Hassoun, SC Mathai, RE Girgis, SD Russell, DR Thiemann, OH Cingolani, JO Mudd, BA Borlaug, MM Redfield, DJ Lederer, and DA Kass. Pulmonary capillary wedge pressure augments right ventricular pulsatile loading. Circulation, 125:289–297, 2012.
* [57] K Tello, A Dalmer, R Vanderpool, HA Ghofrani, R Naeije, F Roller, W Seeger, M Wiegand, H Gall, and MJ Richter. Right ventricular function correlates of right atrial strain in pulmonary hypertension: A combined cardiac magnetic resonance and conductance catheter study. Am J Physiol, 317:H156–H164, 2019.
* [58] A Vonk Noordegraaf, BE Westerhof, and N Westerhof. The relationship between the right ventricle and its load in pulmonary hypertension. J Am Coll Cardiol, 69:236–243, 2017.
* [59] Z Wang and NC Chesler. Pulmonary vascular wall stiffness: an important contributor to the increased right ventricular afterload with pulmonary hypertension. Pulm Circ, 1:212–223, 2011.
* [60] ND Williams, R Brady, S Gilmore, P Gremaud, HT Tran, JT Ottesen, J Mehlsen, and MS Olufsen. Cardiovascular dynamics during head-up tilt assessed via pulsatile and non-pulsatile models. J Math Biol, 79:987–1014, 2019.
* [61] W Yang, AL Marsden, MT Ogawa, C Sakarovitch, KK Hall, M Rabinovitch, and JA Feinstein. Right ventricular stroke work correlates with outcomes in pediatric pulmonary arterial hypertension. Pulm Circ, 8:1–9, 2018.
# Soft-Threshold Attention Based Audio-Visual Speech Enhancement Network
###### Abstract
Audio-visual speech enhancement systems are regarded as one of the promising
solutions for isolating and enhancing the speech of a desired speaker.
Conventional methods focus on predicting the clean speech spectrum via a naive
convolutional neural network based encoder-decoder architecture; these methods
a) do not use the data fully and effectively, and b) cannot process features
selectively. To tackle these problems, this paper proposes a soft-threshold
attention based convolutional recurrent network for audio-visual speech
enhancement, which a) applies a novel audio-visual fusion strategy that fuses
audio and visual features layer by layer in the encoding stage and feeds the
fused audio-visual features to each corresponding decoder layer, and, more
importantly, b) introduces a soft-threshold attention applied to every decoder
layer to softly select the informative modality. Experimental results
illustrate that the proposed architecture consistently outperforms recent
models in terms of both PESQ and STOI scores.
Index Terms— speech enhancement, audio-visual, soft-threshold attention,
multi-layer feature fusion model
## 1 Introduction
Speech processing systems are commonly used in a variety of applications such
as automatic speech recognition, speech synthesis, and speaker verification.
Numerous speech processing devices (e.g., mobile communication systems and
digital hearing aid systems) are often used in environments with high levels
of ambient noise, such as public places and cars. Generally speaking, the
presence of high-level noise interference severely decreases the perceptual
quality and intelligibility of the speech signal. Therefore, there is an
urgent need for speech enhancement algorithms which can automatically filter
out the noise signal and improve the effectiveness of speech processing
systems.
Recently, many approaches have been proposed to recover the clean speech of a
target speaker immersed in a noisy environment, which can be roughly divided
into two categories, i.e., audio-only speech enhancement (AO-SE) [1, 2] and
audio-visual speech enhancement (AV-SE) [3, 4]. AO-SE approaches make
assumptions on the statistical properties of the involved signals [5], and aim
to estimate target speech signals according to mathematically tractable
criteria [6]. Advanced AO-SE methods based on deep learning can predict the
target speech signal directly, but they tend to depart from knowledge-based
modelling. Compared with AO-
SE approaches, AV-SE methods have achieved an improvement in the performance
of intelligibility of speech enhancement due to the visual aspect which can
recover some of the suppressed linguistic features when target speech is
corrupted by noise interference [7]. However, AV-SE models must be trained
using data that are representative of the settings in which they are deployed;
to achieve robust performance in a wide variety of settings, very large AV
datasets for training and testing need to be collected. Furthermore, AV-SE is
inherently a multi-modal process, and it focuses not only on determining the
parameters of a model, but also on the possible fusion architectures [8].
Generally, a naive fusion strategy does not allow control over how the
information from the audio and visual modalities is fused; as a consequence,
one of the two modalities dominates the other.
Fig. 1: Schematic diagram of the proposed soft-threshold attention based CRN
model. STA denotes the soft-threshold attention unit.
To overcome the aforementioned limitations, this paper proposes a Soft-
threshold attention (STA) based Audio-Visual Convolutional Recurrent Network
(AVCRN) for speech enhancement, which integrates the selected audio and visual
cues into a unified network using a multi-layer audio-visual fusion strategy.
The proposed framework applies an STA inspired by the soft-thresholding
algorithm [9], which has often been used as a key step in many signal
denoising methods [10] and eliminates unimportant features [11]. Moreover, the
proposed model adopts a multi-layer audio-visual fusion strategy, in which the
extracted audio and visual features are concatenated in every encoding layer.
Once the two modalities in each layer are concatenated, the system passes them
through the STA as an additional input to the corresponding decoding layer.
The rest of this paper is organized as follows: In Section 2, the proposed
method is presented in detail. Section 3 is the dataset and experimental
settings. Section 4 demonstrates the results and analysis, and a conclusion is
shown in Section 5.
## 2 Model Architecture
### 2.1 Audio-visual CRN
The diagram of the proposed audio-visual CRN is shown in Figure 1. The model,
following an encoder-decoder scheme, uses a series of downsampling and
upsampling blocks to make its predictions, and consists of an encoder
component, a fusion component, and a decoder component.
The encoder component comprises an audio encoder and a video encoder.
Following previous CNN-based audio encoding models [12, 13, 14], the audio
encoder is designed as a CNN that takes the spectrogram as input. The video
encoder processes the input face embedding. In our approach, the video feature
vectors and audio feature vectors are concatenated at every step of the
encoding stage, so the size of the visual feature vectors after each
convolution layer has to match that of the corresponding audio feature
vectors, as shown in Figure 1.
The fusion component consists of an audio-visual fusion process and an audio-
visual embedding process. The audio-visual fusion process designates a
consolidated dimension for fusion: it combines the audio and visual streams in
each layer directly and feeds the combination into several convolution layers.
The audio-visual embedding process flattens the audio and visual streams from
3D to 1D, concatenates both flattened streams, and finally feeds the
concatenated feature vector into two LSTM layers. Audio-visual embedding is a
deeper feature fusion strategy, and the resulting vector is then used to build
the decoder component.
The decoder component, or audio decoder, is made of deconvolutional layers.
Because of the downsampling blocks, the model computes a number of
higher-level features on coarser time scales and generates the audio-visual
features through the audio-visual fusion process at each level; these are
concatenated with the local, high-resolution features computed by the
upsampling block of the same level. This concatenation results in multi-scale
features for predictions.
### 2.2 Soft-threshold attention
In the proposed architecture, a potential imbalance caused by concatenation-
based fusion easily arises in the decoder blocks, where features computed
along the contracting path are concatenated directly with decoder features of
the same hierarchical level. Consequently, the proposed model uses attention
gates, as shown in Figure 2, to selectively filter out unimportant information
using a soft-thresholding algorithm.
Soft-thresholding is a kind of filter that maps useful information to strongly
positive or negative features and noise information to near-zero features.
Deep learning enables the soft-thresholding algorithm to be learned
automatically using gradient descent, which is a promising way to eliminate
noise-related information and construct highly discriminative features. The
function of soft-thresholding can be expressed by
$Y=\begin{cases}X-\tau,&X>\tau\\ 0,&-\tau\leq X\leq\tau\\ X+\tau,&X<-\tau\end{cases}$ (1)
where $X$ is the input feature, $Y$ is the output feature, and $\tau$ is the
threshold. Note that $\tau$ is non-negative and is not independent of $X$;
their relation is expressed in Eq 3.
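In code, Eq 1 is a one-liner; the following numpy sketch is illustrative:

```python
import numpy as np

def soft_threshold(x, tau):
    """Soft-thresholding of Eq 1: shrink |x| by tau and zero the band
    [-tau, tau]. tau may be a per-channel array broadcast against x."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)
```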
Fig. 2: The soft-threshold attention, where $X_{i,j,k}$ is the feature map
generated by a convolution block from a concatenated input; $i$, $j$, and $k$
are the indices of width, height, and channel of the feature map $X$; $Y$ is
the output feature, whose size is the same as $X$; and $z$ and $\alpha$ are
intermediate feature maps used when determining the threshold.
The threshold is estimated by a set of deep learning blocks, as shown in
Figure 2. In the threshold-estimating module, the absolute value of the
feature map $X_{i,j,k}$, where $i$, $j$, and $k$ are the indices of width,
height, and channel, is taken and its dimension is reduced to 1D. Several
fully-connected layers then generate the attention mask [15], where a sigmoid
function at the last layer scales the attention mask between 0 and 1, which
can be expressed by
$\alpha=\frac{1}{1+e^{-z}}$ (2)
where $z$ is the output of the fully-connected layers, and $\alpha$ is the
attention mask. Finally, the threshold parameter $\tau$ is obtained by
multiplying the average value of $|X_{i,j,k}|$ by the attention mask $\alpha$:
$\tau=\alpha\times{\rm Avg}(|X_{i,j,k}|)$ (3)
where ${\rm Avg}(\cdot)$ denotes average pooling. Substituting Eq 2 and Eq 3
into Eq 1 yields the output feature $Y_{i,j,k}$.
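Putting Eq 1 to Eq 3 together, the threshold estimation can be sketched as below. The two fully-connected layers and the ReLU between them are illustrative stand-ins for the learned blocks in Figure 2; the weight shapes and names are assumptions, not the authors' code.

```python
import numpy as np

def sta_threshold(X, W1, b1, W2, b2):
    """One threshold per channel, following Figure 2.

    X: feature map of shape (H, W, C); W1 (C, H1), b1 (H1,), W2 (H1, C),
    b2 (C,): weights of the two fully-connected layers (learned in
    practice, illustrative here)."""
    a = np.abs(X).mean(axis=(0, 1))          # Avg(|X|) per channel
    h = np.maximum(a @ W1 + b1, 0.0)         # first FC layer + ReLU
    z = h @ W2 + b2                          # second FC layer
    alpha = 1.0 / (1.0 + np.exp(-z))         # Eq 2: sigmoid attention mask
    return alpha * a                         # Eq 3: tau = alpha * Avg(|X|)
```

The gate output is then $Y$ = soft_threshold($X$, $\tau$), with $\tau$ broadcast over the spatial dimensions of $X$.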
The STA has two advantages: firstly, it removes noise-related features from
the higher-level audio-visual fusion vectors; secondly, it balances the audio
and visual modalities in the audio-visual fusion vector and selectively takes
audio-visual features.
## 3 Experimental setup
### 3.1 Datasets
The dataset used in the proposed model involves two publicly available audio-
visual datasets: GRID[16] and TCD-TIMIT[17], which are the two most commonly
used databases in the area of audio-visual speech processing. GRID consists of
video recordings where 18 male speakers and 16 female speakers pronounce 1000
sentences each. TCD-TIMIT consists of 32 male speakers and 30 female speakers
with around 200 videos each.
The proposed model shuffles and splits the dataset into training, validation,
and evaluation sets of 24300 (15 males, 12 females, 900 utterances each), 4400
(12 males, 10 females, 200 utterances each), and 1200 utterances (4 males, 4
females, 150 utterances each), respectively. The noise dataset contains 25.3
hours ambient noise categorized into 12 types: room, car, instrument, engine,
train, human chatting, air-brake, water, street, mic-noise, ring-bell, and
music.
Part of the noise signals (23.9 hours) is used in both the training and
validation sets, while the rest is reserved for mixing the evaluation set. The
speech-noise mixtures for training and validation are generated by randomly
selecting utterances from the speech and noise datasets and mixing them at a
random SNR between -10 dB and 10 dB. The evaluation set is generated at SNRs
of 0 dB and -5 dB.
### 3.2 Audio representation
The audio representation is the magnitude spectrogram transformed into the log
Mel-domain. The input audio signals are raw waveforms, resampled to 16 kHz and
first transformed to spectrograms using the Short-Time Fourier Transform
(STFT) with a Hanning window. Each frame contains a window of 40 milliseconds,
which equals 640 samples per frame and corresponds to the duration of a single
video frame, and the frame shift is 160 samples (10 milliseconds).
The transformed spectrograms are then converted to log Mel-scale spectrograms
via Mel-scale filter banks. The resulting spectrograms have 80 Mel frequency
bands from 0 to 8 kHz. The whole spectrograms are sliced into pieces of
duration of 200 milliseconds corresponding to the length of 5 video frames,
resulting in spectrograms of size 80$\times$20, representing 20 temporal
samples, and 80 frequency bins in each sample.
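The front end described above can be sketched with librosa; the exact log base and magnitude/power convention are not specified in the paper and are assumptions here, as is the file name.

```python
import numpy as np
import librosa

# Load and resample the noisy waveform to 16 kHz (file name illustrative).
y, sr = librosa.load("noisy.wav", sr=16000)

# 40 ms Hann windows (640 samples) with a 10 ms (160-sample) shift,
# mapped onto 80 Mel bands spanning 0-8 kHz.
mel = librosa.feature.melspectrogram(
    y=y, sr=sr, n_fft=640, win_length=640, hop_length=160,
    window="hann", n_mels=80, fmin=0.0, fmax=8000.0)
log_mel = np.log(mel + 1e-8)              # log Mel-scale spectrogram

# Slice into 200 ms pieces of 80 bands x 20 frames, each aligned with
# 5 video frames.
chunks = [log_mel[:, t:t + 20] for t in range(0, log_mel.shape[1] - 19, 20)]
```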
### 3.3 Video representation
The visual representation is extracted from the input videos, which are
re-sampled to 25 frames per second. Each video is divided into
non-overlapping segments of 5 frames.
## 4 Experiment Results
Table 1: Model comparison in terms of STOI and PESQ scores. “Speech” interference denotes the background speech signal from unknown talker(s); “Natural” interference denotes ambient non-speech noise. Each cell reports the score under speech / natural interference at the given test SNR.

Model | STOI (%), -5 dB | STOI (%), 0 dB | PESQ, -5 dB | PESQ, 0 dB
---|---|---|---|---
Unprocessed | 57.8 / 51.4 | 64.7 / 62.6 | 1.59 / 1.03 | 1.66 / 1.24
TCNN (audio-only) | 73.2 / 78.7 | 80.8 / 81.3 | 2.01 / 2.19 | 2.47 / 2.58
Baseline | 77.9 / 81.3 | 88.6 / 87.9 | 2.41 / 2.35 | 2.77 / 2.94
AV-CRN (proposed) | 80.7 / 82.7 | 88.4 / 89.3 | 2.61 / 2.72 | 2.84 / 2.92
\+ Soft-threshold attention | 83.2 / 84.9 | 90.1 / 92.5 | 2.81 / 2.94 | 3.04 / 3.11
### 4.1 Competing models
To evaluate the performance of the proposed approach, comparisons are provided
with several recently proposed speech enhancement algorithms. Specifically,
the proposed model is compared with the TCNN model (an AO-SE approach) and the
AV-SE baseline system. Therefore, three networks were trained:
* •
TCNN[18]: Temporal convolutional neural network for real-time speech
enhancement in the time domain.
* •
Baseline[19]: A baseline work of visual speech enhancement.
* •
STA-based CRN: soft-threshold attention based convolution recurrent network
for audio-visual speech enhancement.
### 4.2 Results
The proposed network is evaluated using the following metrics: Short-Term
Objective Intelligibility (STOI) and Perceptual Evaluation of Speech Quality
(PESQ). Each measurement compares the enhanced speech with the clean reference
for each of the test stimuli provided in the dataset. In addition, the
proposed model is evaluated in two forms: the AV-CRN model without STA, i.e.,
AV-CRN, and the complete form of the proposed model, i.e., AV-CRN + STA.
Speech samples are available at:
https://XinmengXu.github.io/AVSE/AVCRN.html
Table 1 demonstrates the improvement in performance as each new component is
added to the speech enhancement architecture: the visual modality, the
multi-layer audio-visual feature fusion strategy, and finally the STA. We
observe that the AV-SE baseline work outperforms TCNN, an end-to-end deep
learning based AO-SE system, and that the AV-CRN model performs better than
the baseline system. The performance improvement from TCNN (AO-SE) to AV-CRN
stems primarily from two factors: a) the addition of the visual modality, and
b) the use of the multi-layer audio-visual fusion strategy, instead of
concatenating the audio and visual modalities only once in the whole network.
Finally, the results in Table 1 show that STA further improves the performance
of AV-CRN.
Table 2 demonstrates that our proposed approach produces state-of-the-art
results in terms of the speech quality metrics discussed above, comparing
against the following three recently proposed methods that use deep neural
networks to perform AV-SE on the GRID dataset:
* •
Deep-learning-based AV-SE[20]: Deep-learning-based audio-visual speech
enhancement in presence of Lombard effect
* •
OVA approach[21]: A LSTM based AV-SE with mask estimation
* •
L2L model[22]: A speaker independent audio-visual model for speech separation
In Table 2, $\Delta$PESQ denotes the PESQ improvement, to be compared with the
AV-CRN + STA results in Table 1. Results for the competing methods are taken
from the corresponding papers. Although the comparison results are for
reference only, the proposed model demonstrates robust performance in
comparison with state-of-the-art results on the GRID AV-SE tasks.
Table 2: Performance comparison of the proposed model with state-of-the-art results on GRID.

Method | $\Delta$PESQ, -5 dB | $\Delta$PESQ, 0 dB
---|---|---
Deep-learning-based AV-SE | 1.09 | 0.77
OVA Approach | 0.24 | 0.13
L2L Model | 0.28 | 0.16
## 5 Conclusion
This paper proposed a soft-threshold attention based convolutional recurrent
network for audio-visual speech enhancement. The multi-layer feature fusion
strategy processes a long temporal context by repeated downsampling and
convolution of feature maps, combining both high-level and low-level features
at different layers. In addition, the STA is inspired by the soft-thresholding
algorithm; it automatically selects informative features, maps them to
strongly positive or negative values, and eliminates the remaining near-zero
features. The results illustrate that the proposed model performs better than
some published state-of-the-art models on the GRID dataset.
## References
* [1] Li-Ping Yang and Qian-Jie Fu, “Spectral subtraction-based speech enhancement for cochlear implant patients in background noise,” The journal of the Acoustical Society of America, vol. 117, no. 3, pp. 1001–1004, 2005.
* [2] K Paliwal and Anjan Basu, “A speech enhancement method based on kalman filtering,” in ICASSP’87. IEEE International Conference on Acoustics, Speech, and Signal Processing. IEEE, 1987, vol. 12, pp. 177–180.
* [3] Jen-Cheng Hou, Syu-Siang Wang, Ying-Hui Lai, Yu Tsao, Hsiu-Wen Chang, and Hsin-Min Wang, “Audio-visual speech enhancement using multimodal deep convolutional neural networks,” IEEE Transactions on Emerging Topics in Computational Intelligence, vol. 2, no. 2, pp. 117–128, 2018.
* [4] Ibrahim Almajai and Ben Milner, “Visually derived wiener filters for speech enhancement,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 19, no. 6, pp. 1642–1651, 2010.
* [5] Yariv Ephraim, “Statistical-model-based speech enhancement systems,” Proceedings of the IEEE, vol. 80, no. 10, pp. 1526–1555, 1992.
* [6] Afshin Rezayee and Saeed Gazor, “An adaptive klt approach for speech enhancement,” IEEE Transactions on Speech and Audio Processing, vol. 9, no. 2, pp. 87–95, 2001.
* [7] Quentin Summerfield, “Use of visual information for phonetic perception,” Phonetica, vol. 36, no. 4-5, pp. 314–331, 1979.
* [8] Dhanesh Ramachandram and Graham W Taylor, “Deep multimodal learning: A survey on recent advances and trends,” IEEE Signal Processing Magazine, vol. 34, no. 6, pp. 96–108, 2017.
* [9] David L Donoho, “De-noising by soft-thresholding,” IEEE transactions on information theory, vol. 41, no. 3, pp. 613–627, 1995.
* [10] Minghang Zhao, Shisheng Zhong, Xuyun Fu, Baoping Tang, and Michael Pecht, “Deep residual shrinkage networks for fault diagnosis,” IEEE Transactions on Industrial Informatics, vol. 16, no. 7, pp. 4681–4690, 2019.
* [11] Minghang Zhao, Shisheng Zhong, Xuyun Fu, Baoping Tang, and Michael Pecht, “Deep residual shrinkage networks for fault diagnosis,” IEEE Transactions on Industrial Informatics, vol. 16, no. 7, pp. 4681–4690, 2019.
* [12] Szu-Wei Fu, Yu Tsao, and Xugang Lu, “SNR-aware convolutional neural network modeling for speech enhancement.,” in Interspeech, 2016, pp. 3768–3772.
* [13] Tomas Kounovsky and Jiri Malek, “Single channel speech enhancement using convolutional neural network,” in 2017 IEEE International Workshop of Electronics, Control, Measurement, Signals and their Application to Mechatronics (ECMSM). IEEE, 2017, pp. 1–5.
* [14] Ke Tan and DeLiang Wang, “A convolutional recurrent neural network for real-time speech enhancement.,” in Interspeech, 2018, pp. 3229–3233.
* [15] Jie Hu, Li Shen, and Gang Sun, “Squeeze-and-excitation networks,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2018, pp. 7132–7141.
* [16] Martin Cooke, Jon Barker, Stuart Cunningham, and Xu Shao, “An audio-visual corpus for speech perception and automatic speech recognition,” The Journal of the Acoustical Society of America, vol. 120, no. 5, pp. 2421–2424, 2006.
* [17] Naomi Harte and Eoin Gillen, “TCD-TIMIT: An audio-visual corpus of continuous speech,” IEEE Transactions on Multimedia, vol. 17, no. 5, pp. 603–615, 2015.
* [18] Ashutosh Pandey and DeLiang Wang, “TCNN: Temporal convolutional neural network for real-time speech enhancement in the time domain,” in ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2019, pp. 6875–6879.
* [19] Aviv Gabbay, Asaph Shamir, and Shmuel Peleg, “Visual speech enhancement,” Interspeech, pp. 1170–1174, 2018.
* [20] Daniel Michelsanti, Zheng-Hua Tan, Sigurdur Sigurdsson, and Jesper Jensen, “Deep-learning-based audio-visual speech enhancement in presence of lombard effect,” Speech Communication, vol. 115, pp. 38–50, 2019.
* [21] Wupeng Wang, Chao Xing, Dong Wang, Xiao Chen, and Fengyu Sun, “A robust audio-visual speech enhancement model,” in ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2020, pp. 7529–7533.
* [22] Ariel Ephrat, Inbar Mosseri, Oran Lang, Tali Dekel, Kevin Wilson, Avinatan Hassidim, William T Freeman, and Michael Rubinstein, “Looking to listen at the cocktail party: A speaker-independent audio-visual model for speech separation,” ACM Transactions on Graphics, 2018.
11institutetext: Zentrum für Astronomie der Universität Heidelberg,
Astronomisches Rechen-Institut, Mönchhofstr. 12-14, 69120 Heidelberg, Germany,
11email: <EMAIL_ADDRESS> 22institutetext: Center for Galaxy
Evolution Research & Department of Astronomy, Yonsei University, Seoul 03722,
Republic of Korea 33institutetext: Space Telescope Science Institute, 3700 San
Martin Drive, Baltimore, MD 21218, USA 44institutetext: College of Arts &
Sciences, St Martin’s University, Ernsdor Center 130 5000 Abbey Way SE Lacey,
WA 98503, USA 55institutetext: Key Laboratory for Research in Galaxies and
Cosmology, Shanghai Astronomical Observatory, 80 Nandan Road, Shanghai 20030,
China 66institutetext: Department of Physics & Astronomy, University of
California Los Angeles, 430 Portola Plaza, Box 951547, Los Angeles, CA
90095-157, USA 77institutetext: Department of Natural Sciences, University of
Michigan-Dearborn, 4901 Evergreen Rd, Dearborn, MI 48128, USA 88institutetext:
Indiana University Department of Astronomy, SW319, 727 E 3rd Street,
Bloomington, IN 47405 USA 99institutetext: Cerro Tololo Inter-American
Observatory, NSF’s National Optical-Infrared Astronomy Research Laboratory,
Casilla 603, La Serena, Chile 1010institutetext: Indiana University,
University Information Technology Services, CIB 2709 E 10th Street,
Bloomington, IN 47401 USA
# Blanco DECam Bulge Survey (BDBS) III: A new view of the double red clump in
the Milky Way bulge through luminosity and color distribution
Dongwook Lim 11 Andreas J. Koch-Hansen 11 Chul Chung 22 Christian I. Johnson
33 Andrea Kunder 44 Iulia T. Simion 55 R. Michael Rich 66 William I. Clarkson
77 Catherine A. Pilachowski 88 Scott Michael 88 A. Katherina Vivas 99 Michael
D. Young 1010
(Received November 21, 2020 / Accepted December 30, 2020)
Red clump (RC) stars are one of the best stellar tracers of the structure of
the Milky Way (MW) bulge. Here we report a new view of the double RC through
luminosity and color distributions of RC stars in nine bulge fields ($l$ =
0.0$\degree$, $\pm$4.5$\degree$; $b$ = -6.0$\degree$, -7.5$\degree$,
-9.0$\degree$) from the Blanco DECam Bulge Survey (BDBS), which covers near-
ultraviolet to near-infrared bandpasses. The bright and faint RCs show
contrasting distributions in $(u-g)_{0}$ and $(u-i)_{0}$ colors but similar
distributions in $(J-K_{s})_{0}$, with a variation depending on the Galactic
longitude, where the bright RC is typically redder than the faint RC. In
particular, the RC stars are clearly divided into bluer and redder
populations when using the $(u-g)_{0}$ color ($(u-g)_{0}<2.5$ for the bluer
RC; $(u-g)_{0}\geq 2.5$ for the redder RC). The bluer stars show a single clump in
the faint RC regime, whereas the redder stars form double clumps on both the
bright and faint RCs. The bright clump of the redder stars is dominant in the
positive longitude fields, while the faint clump of those red stars is
significant at negative longitudes. We also confirm that the bluer and redder
stars have different peak metallicity through comparison with spectroscopy
($\Delta$[Fe/H] $\sim$ 0.45 dex). Therefore, our results support a scenario
whereby the MW bulge is composed of a spheroid of metal-poor stars and a
boxy/peanut shape (X-shape) predominantly made up of metal-rich stars.
###### Key Words.:
Galaxy: bulge – Galaxy: formation – Galaxy: structure – Stars: horizontal-
branch
## 1 Introduction
The double red clump (RC) observed in the high-latitude fields of the Milky
Way (MW) bulge is an essential feature for understanding the nature of the
bulge. McWilliam & Zoccali (2010) and Nataf et al. (2010) simultaneously
reported that the RC stars in the bulge can be divided into two groups, namely
bright and faint RCs, from Two Micron All Sky Survey (2MASS) and Optical
Gravitational Lensing Experiment (OGLE)-III surveys. The RC stars have long
been used as a standard candle in order to examine the distance and structure
of the Local Universe, and in particular the Galactic bulge (e.g., Stanek et
al. 1994; Rattenbury et al. 2007; Wegg et al. 2015). Furthermore, the presence
of the double RC can be taken as evidence for an X-shaped feature where the
bright and faint RCs are placed on the near- and far-side arms of this
structure (see Ness et al. 2012; Wegg & Gerhard 2013). We note that an
X-shaped structure has been considered as a part of the boxy/peanut bulge and
is indeed observed in several external galaxies (Bureau et al. 2006; Buta et
al. 2015; Gonzalez et al. 2016). This X-shaped scenario for the origin of the
double RC is based on the two main characteristics of the RC stars: firstly,
stars in the bright and faint RCs show an almost identical distribution in the
($J-K$) and ($V-I$) colors, which suggests a negligible difference in
metallicity. Secondly, the peak magnitudes of the bright and faint RCs are
almost constant regardless of Galactic longitude, while the population ratio
between the two RCs changes significantly, which is inconsistent with the
expected influence of the Galactic bar component (McWilliam & Zoccali 2010).
Multiple chemical populations may complicate the interpretation of the RC
apparent magnitude distribution, as varying chemical compositions can change
the RC absolute magnitude – this is generally observed in globular clusters
(GCs). It is well established that GCs contain more than two stellar
populations: the later-generation stars are more enhanced in He and certain
light elements, such as N, Na, and Al, than the earlier-generation stars, and
are depleted in others like C, O, and Mg (Gratton et al. 2012; Bastian & Lardo
2018). In the same vein, Lee et al. (2015) demonstrated through population
synthesis modeling that He-enhanced later-generation stars would be placed on
the bright RC (bRC) regime, while the He-normal first-generation stars are
mainly located on the faint RC (fRC) regime. Thus, the multiple population
phenomenon may exist in the bulge, where the combined effects of different
metallicity and He abundance can cause the RC stars to have different
intrinsic luminosity (see Joo et al. 2017). Recently, Lee et al. (2018) and
Lim et al. (2021) reported a difference in CN band strength and [Fe/H] between
stars in the bright and faint RCs as supporting evidence of this scenario.
These chemical properties of the double RC show some similarities with those
observed among multiple stellar populations in the peculiar GC Terzan 5
(Origlia et al. 2011). Unlike the X-shaped scenario, the multiple population
scenario does not require any distance difference between the bright and faint
RCs. In particular, the multiple population scenario favors the “classical
bulge” model, whereas the X-shaped scenario follows the “pseudo bulge” model.
Therefore, investigating the properties of RC stars is crucial for
understanding the structure of the MW bulge (see also Athanassoula 2005; Nataf
2017).
On the other hand, the MW bulge is known to have more than two stellar
components with potentially different metallicity and kinematics, such as
velocity dispersion and rotation curve (e.g., Babusiaux et al. 2010; Ness et
al. 2013; Zoccali et al. 2017; Clarkson et al. 2018). The stars in the bulge
can be mainly divided into the metal-poor ([Fe/H] $\sim$ $-$0.25 dex) and
metal-rich ([Fe/H] $\sim$ +0.15 dex) components, although the metallicity
criterion varies depending on the study. The majority of the metal-poor
component shows a spheroid shape with roughly constant velocity dispersion
with Galactic latitude (although we note that a barred distribution can also
be seen in metal-poor stars; Kunder et al. 2020), whereas the metal-rich
component forms a boxy/peanut shape with a steeper gradient of velocity
dispersion with latitude. Thus, it appears that the MW bulge region contains
both metal-poor stars characteristic of a classical bulge and metal-rich stars
characteristic of a pseudo bulge, as well as the stellar populations of the
inner halo and discs (Rojas-Arriagada et al. 2014; Koch et al. 2016; Kunder et
al. 2016; Savino et al. 2020; see also Kunder et al. 2020).
Most studies of the MW bulge have been carried out using spectroscopic or
photometric observations covering optical to near-infrared (NIR) bands. In
this regard, the Blanco DECam Bulge Survey (BDBS; Rich et al. 2020) operating
in the near-ultraviolet (NUV) to NIR has opened up new opportunities for
studying the structure of the bulge. In particular, the NUV and optical colors
of stars are more sensitive to stellar metallicity, age, and chemical
composition of light elements. The BDBS is a precursor program of the Rubin
Observatory Legacy Survey of Space and Time (LSST), covering $\sim$200 square degrees
of the Galactic bulge at $-$11$\degree$ $<$ $l$ $<$ +11$\degree$ and
$-$13$\degree$ $<$ $b$ $<$ $-$2$\degree$ and was performed from 2013 to 2014
using the Dark Energy Camera (DECam) at the CTIO-4m telescope with $ugrizY$
filters. The main goal of the BDBS is to produce an optical multi-band map of
the southern Galactic bulge. A more detailed description of the BDBS is
presented in Rich et al. (2020) and Johnson et al. (2020).
Here, we investigate the luminosity and color distributions of RC stars in
various bulge fields taking advantage of metallicity-sensitive BDBS photometry
with NIR photometry of the Vista Variables in the Via Lactea (VVV; Minniti et
al. 2010), complemented by parallax information from the second Gaia data
release (DR2, Gaia Collaboration et al. 2018). The current paper is organized
as follows. In Section 2, we describe the data-selection process. The
luminosity and color distributions of RC stars are presented in Section 3,
while our results are compared with spectroscopy in Section 4. Finally, our
conclusion and discussion for the MW bulge drawn from the RC stars are given
in Section 5.
## 2 Data selection
In order to investigate the color and luminosity distributions of RC stars, we
obtained $ugrizY$ magnitudes from the BDBS for stars within circles of
1$\degree$ diameter around nine different fields of the bulge at $l$ =
0.0$\degree$, $\pm$4.5$\degree$ and $b$ = -6.0$\degree$, -7.5$\degree$, and
-9.0$\degree$. We first focus on the central field at ($l$, $b$) =
(0.0$\degree$, -7.5$\degree$) where the double RC is distinctly observed, and
then select the near-fields toward increasing or decreasing longitude and
latitude to examine the trends of the RC depending on Galactic position. Here,
we note that the double RC feature is most prominently observed in the high-
latitude fields ($|b|$ $\geq$ 6$\degree$; see McWilliam & Zoccali 2010; Nataf
et al. 2010). Only one GC, NGC 6558, lies in the selected areas; we minimize
its contamination by excluding stars within one half-light radius of the
cluster. We then selected the best sample of stars from the BDBS data by
applying criteria on measurement error, observation count, and quality flags
for each band: error1 $\leq$ error2, count $\geq$ 2, error_flag $\leq$ 1,
sky_flag $\leq$ 2, and shape_flag $\leq$ 1. Here, error1 is calculated from the
weighted flux, and error2 is the magnitude error added in quadrature for each
exposure. Three quality flags indicate the number of standard deviations away
from the mean of error, sky, and chi values, respectively. A detailed
description of the data-reduction process of the BDBS and its quality flags
can be found in Johnson et al. (2020). However, in the case of ($l$, $b$) =
(+4.5$\degree$, -9.0$\degree$) field, the bulk of bright stars ($K_{s_{0}}$
$<$ 12.5) is excluded with these criteria because of the low number of
observations at the edge of the survey (see Figure 1 of Rich et al. 2020).
Therefore, we applied more relaxed criteria for this particular field, namely
error1 $\leq$ error2, error_flag $\leq$ 2, and sky_flag $\leq$ 2.
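For concreteness, these cuts amount to a row filter over the photometric catalog. A minimal Python sketch follows, assuming a pandas table whose column names (`error1`, `error2`, `count`, `error_flag`, `sky_flag`, `shape_flag`) mirror the quantities above; the actual BDBS column names may differ.

```python
import pandas as pd

def select_best_sample(cat: pd.DataFrame, relaxed: bool = False) -> pd.DataFrame:
    """Apply the per-band quality cuts described in Section 2.

    `cat` holds one band's measurements; in practice the filter is
    applied band by band and the resulting masks are combined.
    """
    mask = (cat["error1"] <= cat["error2"]) & (cat["sky_flag"] <= 2)
    if relaxed:
        # Relaxed criteria used for the (l, b) = (+4.5, -9.0) field
        mask &= cat["error_flag"] <= 2
    else:
        mask &= (cat["count"] >= 2) & (cat["error_flag"] <= 1) & (cat["shape_flag"] <= 1)
    return cat[mask]
```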
Table 1: Number of stars in each field. Each entry gives N${}_{\rm total}$, followed by (N${}_{\rm RGB}$; N${}_{\rm RC}$).

 | $l$ = +4.5$\degree$ | $l$ = 0.0$\degree$ | $l$ = -4.5$\degree$
---|---|---|---
$b$ = -6.0$\degree$ | 177,214 (29,648; 18,925) | 93,887 (29,373; 19,817) | 90,713 (27,383; 18,428)
$b$ = -7.5$\degree$ | 154,298 (16,352; 9,962) | 72,895 (8,290; 4,908) | 125,829 (11,500; 6,687)
$b$ = -9.0$\degree$ | 82,176 (4,496; 2,313) | 87,258 (8,176; 4,846) | 114,217 (5,297; 2,588)
Figure 1: Dereddened CMDs for stars within a circle of 1$\degree$ diameter
around nine different fields of the bulge in the ($K_{s}$, $u-g$)0, ($K_{s}$,
$g-r$)0, ($K_{s}$, $r-i$)0, ($K_{s}$, $u-i$)0, and ($K_{s}$, $J-K_{s}$)0
planes using the BDBS and the VVV data. The Galactic position ($l$, $b$) of
each field is listed in the upper right corner. The green and purple boxes
indicate the selection criteria for the RC and RGB stars, respectively, and
the horizontal dotted line in each CMD divides the bright and faint RCs at
13.0 $K_{s_{0}}$ mag. We also plot the luminosity function of the RGB stars
for each field (rightmost panels in each field), and color distribution for
the stars in bRC (red), fRC (blue), and RGB (black) regimes, respectively, on
the top of each CMD. The presence of double RCs is particularly prominent in
the fields of $l$ = 0.0$\degree$, while the bRC (fRC) is dominant at the
positive (negative) longitude fields ($l$ = $\pm$4.5$\degree$). The stars in
the bright and faint RCs show different distributions in the ($u-g$)0 and ($u-i$)0
colors, but similar distributions in the ($r-i$)0 and ($J-K_{s}$)0 colors.
In addition, we cross-matched the BDBS data with the Gaia DR2 and the VVV DR2
NIR photometry using the CDS X-Match service from TOPCAT (Taylor 2005). We
note again that the two major obstacles in the study of the bulge are the
contamination of stars from other stellar components towards the Galactic
plane, such as disk and inner halo, and the high interstellar reddening toward
the bulge. Although the accuracy of the Gaia parallax ($\varpi$) measurements
is insufficient to fully disentangle the bulge stars from the Galactic disk,
we only used the stars within the range of -0.2 $<$ $\varpi$ $<$ 0.4 and -2.0
$<$ relative parallax error ($\varpi$/$\sigma_{\varpi}$) $<$ 4.0 for this work,
at least to exclude nearby stars. In each field, about 45% of stars are
excluded by this selection criterion and the majority of them seem to be main
sequence stars belonging to the thin and thick disks (see Figure 7 of Rich et
al. 2020). We note that this selection procedure by parallax does not
significantly affect our results, because, in general, about 80% of RGB and
85% of RC stars still remain in the samples. The NIR photometry obtained from
the VVV is used for identifying RC stars because these bands are less
sensitive to the reddening effect. The double RC feature is indeed most
prominently observed in the NIR bands (see McWilliam & Zoccali 2010; Wegg &
Gerhard 2013).
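For illustration, the cross-match and parallax selection can be sketched with astropy. We emphasize that the actual match was performed with the CDS X-Match service through TOPCAT, so the nearest-neighbour matching and the 1 arcsec tolerance below are assumptions of this sketch, not the procedure used.

```python
import astropy.units as u
from astropy.coordinates import SkyCoord

def match_and_parallax_cut(bdbs, gaia, max_sep=1.0 * u.arcsec):
    """Match BDBS rows to Gaia and keep stars passing the parallax cuts.

    Both inputs are assumed to be astropy Tables with 'ra'/'dec' columns
    in degrees; `gaia` also carries 'parallax' and 'parallax_error' (mas).
    """
    c_bdbs = SkyCoord(bdbs["ra"], bdbs["dec"], unit="deg")
    c_gaia = SkyCoord(gaia["ra"], gaia["dec"], unit="deg")
    idx, sep2d, _ = c_bdbs.match_to_catalog_sky(c_gaia)

    plx = gaia["parallax"][idx]
    rel = plx / gaia["parallax_error"][idx]
    keep = ((sep2d < max_sep)
            & (plx > -0.2) & (plx < 0.4)    # excludes most nearby disk stars
            & (rel > -2.0) & (rel < 4.0))   # relative parallax error cut
    return bdbs[keep], idx[keep]
```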
Following Johnson et al. (2020), we derived the reddening corrected magnitudes
using the extinction map from Simion et al. (2017) and the extinction
coefficients of Schlafly & Finkbeiner (2011) for the $u$-band, Green et al.
(2018) for the $grizY$-bands, Alonso-García et al. (2017) for the $JHK$-bands,
and Casagrande & VandenBerg (2018) for the Gaia photometry. Figure 1 shows
color–magnitude diagrams (CMDs) for our sample stars from the BDBS within our
parallax range in the nine Galactic bulge fields, showcasing the ($u-g$)0,
($g-r$)0, ($r-i$)0, ($u-i$)0, and ($J-K_{s}$)0 colors. The number of stars in
each field is listed in Table 1.
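Schematically, the dereddening is a linear, per-band correction of the form $m_{0} = m - R_{\rm band}\,E(B-V)$. In the sketch below the coefficient values are placeholders only; the adopted coefficients come from the references above, and the reddening itself is taken from the Simion et al. (2017) map.

```python
# Placeholder extinction coefficients R_band = A_band / E(B-V); the values
# actually adopted follow Schlafly & Finkbeiner (2011), Green et al. (2018),
# Alonso-Garcia et al. (2017), and Casagrande & VandenBerg (2018).
R_COEFF = {"u": 4.24, "g": 3.30, "r": 2.29, "i": 1.70, "z": 1.26,
           "Y": 1.06, "J": 0.72, "Ks": 0.31}

def deredden(mag, band, ebv):
    """Extinction-corrected magnitude m0 = m - R_band * E(B-V)."""
    return mag - R_COEFF[band] * ebv

# Dereddened colors follow directly, e.g.
# ug0 = deredden(u_mag, "u", ebv) - deredden(g_mag, "g", ebv)
```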
## 3 Color distribution of RC stars
First of all, we identify the red giant branch (RGB) and RC stars from the
CMDs in Figure 1, which simultaneously fall on the specific regions on each
CMD (purple boxes for RGB stars; green boxes for RC stars). We note that we
slightly adapted the color criteria with Galactic latitude, while the
magnitude ranges were kept identical for all fields as 10.0 $<$ $K_{s_{0}}$
mag $<$ 15.0 for RGB stars and 12.0 $<$ $K_{s_{0}}$ mag $<$ 14.0 for RC stars.
All stars in the RC regime are also included in the RGB group because the RC
and RGB stars cannot be distinguished on the CMD. Table 1 shows the number
of stars in the RGB and RC regimes for each field.
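The selection itself reduces to rectangular cuts on each CMD plus a magnitude split. The sketch below uses random demonstration data and hypothetical ($J-K_{s}$)0 color limits, since the exact, latitude-dependent boxes are defined graphically in Figure 1.

```python
import numpy as np

rng = np.random.default_rng(0)
ks0 = rng.uniform(9.0, 16.0, 10_000)   # demo Ks_0 magnitudes
jk0 = rng.uniform(0.3, 1.3, 10_000)    # demo (J-Ks)_0 colors

def in_box(mag, color, mag_range, color_range):
    """Boolean mask for stars inside a rectangular CMD selection box."""
    return ((mag_range[0] < mag) & (mag < mag_range[1]) &
            (color_range[0] < color) & (color < color_range[1]))

# Magnitude limits as quoted in the text; the color limits are placeholders
# (they were adapted slightly with Galactic latitude).
rgb = in_box(ks0, jk0, (10.0, 15.0), (0.55, 1.10))
rc  = in_box(ks0, jk0, (12.0, 14.0), (0.60, 1.00))
brc = rc & (ks0 <= 13.0)   # bright RC: 12.0 < Ks0 <= 13.0
frc = rc & (ks0 > 13.0)    # faint RC: 13.0 < Ks0 <= 14.0
```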
The rightmost panels of Figure 1 present the luminosity function of RGB stars.
These show a bimodal distribution at the magnitude range of the RC, which can
be divided into the bright and faint RCs at $K_{s_{0}}$ mag = 13.0 (horizontal
dotted line). This double RC feature is particularly apparent in the fields of
$l$ = 0.0$\degree$ and becomes significant with increasing latitude. (We note
that, besides RC stars, stars in the evolutionary stage of the red giant
branch bump (RGBB) are also embedded in the luminosity function of RGB stars.
In particular, the RGBB stars of the bRC, corresponding to $\sim$25% of the
number counts of the bRC stars, might fall within a magnitude range similar to
that of the fRC; see Nataf et al. 2014. However, the contamination by RGBB
stars was not taken into account in this study because it is similar in every
bulge field.) In addition, the bRC is more dominant than the
fRC in the positive longitude fields ($l$ = +4.5$\degree$), whereas the fRC
stars are more abundant at negative longitudes ($l$ = -4.5$\degree$). These
trends of the double RC depending on the Galactic position are identical to
the previous reports by McWilliam & Zoccali (2010) and Nataf et al. (2015).
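The bimodality can be quantified by histogramming the $K_{s_{0}}$ magnitudes of the RGB sample and searching for two peaks in the RC range; the bin width and prominence threshold in this sketch are illustrative choices, not values taken from the text.

```python
import numpy as np
from scipy.signal import find_peaks

def double_rc_peaks(ks0, bin_width=0.05):
    """Locate peaks of the Ks_0 luminosity function within the RC range."""
    bins = np.arange(10.0, 15.0 + bin_width, bin_width)
    counts, edges = np.histogram(ks0, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    in_rc = (centers > 12.0) & (centers < 14.0)
    # A double RC should return two peaks bracketing Ks_0 = 13.0
    peaks, _ = find_peaks(np.where(in_rc, counts, 0),
                          prominence=0.05 * counts.max())
    return centers[peaks]
```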
In order to examine the color distribution of the double RCs, we divide RC
stars into bright and faint RCs (12.0 $<$ $K_{s_{0}}$ $\leq$ 13.0 for bRC;
13.0 $<$ $K_{s_{0}}$ $\leq$ 14.0 for fRC) and then draw the histograms of each
color in the top panels of the CMD in Figure 1 (blue for fRC; red for bRC;
black for RGB). It is expected that the bRC is typically redder than the fRC
in every histogram because the RGB stars become redder with increasing
luminosity. Nevertheless, the differences in ($u-g$)0 and ($u-i$)0 between the
two RCs are more obvious than those in other colors. In particular, in the
fields of ($l$, $b$) = (0.0$\degree$, -7.5$\degree$), it appears that both the
bRC and fRC show the bimodal distribution in ($u-g$)0 with a stronger redder
peak in the bRC and a stronger bluer peak in the fRC. Thus, these CMDs and
histograms of the BDBS data suggest a possible difference in color
distribution between the bright and faint RCs.
However, it is necessary to confirm that the difference in color distribution
between the bright and faint RCs is not due to the general trend on the RGB.
Therefore, we determine the “delta colors” for stars in the field of ($l$,
$b$) = (0.0$\degree$, -7.5$\degree$) as the horizontal distance from the
fiducial line (purple lines in Figure 2), which is visually defined as the
right edge of the RGB similar to Lee et al. (2013). Figure 2 shows CMDs in the
($K_{s}$, $u-g$)0, ($K_{s}$, $u-i$)0, and ($K_{s}$, $J-K_{s}$)0 planes,
together with histograms of the respective delta colors for stars in each bin
of 1.0 mag from 10.0 to 15.0 in $K_{s_{0}}$-band. The histograms indicate the
different patterns in the $\Delta$($u-g$)0 and $\Delta$($u-i$)0 between the
bRC and fRC regimes. Although both the bRC and fRC have two peaks at the bluer
and redder colors, the majority of the bRC is in the redder peak while that of
the fRC is in the bluer peak. Thus, the bRC stars are generally redder than
the fRC stars in the ($u-g$)0 and ($u-i$)0 colors regardless of the trend of
the RGB. We note that the bimodal distribution of the bRC and fRC in ($u-i$)0
color has already been reported in the field of ($l$, $b$) = (+1$\degree$,
-8$\degree$) from the previous BDBS study (see Figure 17 of Johnson et al.
2020). In the case of $\Delta$($J-K_{s}$)0, the two RCs show a similar
distribution, which is consistent with the earlier finding by McWilliam &
Zoccali (2010). The $\Delta$($J-K_{s}$)0 color of the bRC is even somewhat
bluer than that of the fRC in contrast to the cases of $\Delta$($u-g$)0 and
$\Delta$($u-i$)0. This discrepancy implies that the NUV and optical photometry
of the BDBS is highly powerful and provides a new view of the double RC in the
bulge.
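Operationally, the delta color is the horizontal offset of each star from the fiducial line, evaluated at the star's $K_{s_{0}}$ magnitude. A minimal sketch follows; the control points standing in for the visually defined fiducial line are hypothetical.

```python
import numpy as np

# Hypothetical (Ks_0, (u-g)_0) control points tracing the red edge of the
# RGB; the fiducial line in Figure 2 was defined visually.
fid_ks = np.array([10.0, 11.0, 12.0, 13.0, 14.0, 15.0])
fid_ug = np.array([3.9, 3.7, 3.5, 3.4, 3.3, 3.2])

def delta_color(ks0, color0, fid_mag=fid_ks, fid_col=fid_ug):
    """Horizontal distance from the fiducial line at each star's Ks_0."""
    return color0 - np.interp(ks0, fid_mag, fid_col)
```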
Figure 2: Color–magnitude diagrams and histograms of $\Delta$($u-g$)0,
$\Delta$($u-i$)0, and $\Delta$($J-K_{s}$)0 for stars in the field of ($l$,
$b$) = (0.0$\degree$, -7.5$\degree$). The $\Delta$-colors are derived as the
difference between the original color and the fiducial line (purple lines in
the left panels), and the histogram is respectively drawn for stars within the
1.0 mag range from 10.0 to 15.0 mag in $K_{s_{0}}$-band. Both the bRC and fRC
stars show bimodal distributions in $\Delta$($u-g$)0 and $\Delta$($u-i$)0.
However, the majority of the bRC is in the redder side, whereas that of the
fRC is in the bluer side. In contrast, the bRC and fRC show similar
distributions in the $\Delta$($J-K_{s}$)0.
A more detailed comparison of the color and luminosity distributions between
the bright and faint RCs, as well as the longitude and latitude dependence of
the double RC, is presented in Figure 3, which shows density maps of the stars
on the RC in the ($K_{s}$, $u-g$)0 and ($K_{s}$, $J-K_{s}$)0 planes for all
fields used in this study. As has been shown before, the double RC is
prominently observed in the ($K_{s}$, $J-K_{s}$)0 CMD, particularly at the
fields of $l$ = 0.0$\degree$. The bright and faint RCs are clearly separated
in $K_{s_{0}}$-magnitude with a similar color of ($J-K_{s}$)0. The change in
the fraction of the bright and faint RC stars depending on the Galactic
longitude is also well demonstrated in the ($K_{s}$, $J-K_{s}$)0 plane of
Figure 3. For instance, the bRC is more significant than the fRC in the
positive longitude fields ($l$ = +4.5$\degree$), but this trend is reversed at
the negative longitude fields ($l$ = -4.5$\degree$). However, in the field of
($l$, $b$) = (+4.5$\degree$, -9.0$\degree$), the dominance of the bRC is less
clear compared to other fields of ($l$, $b$) = (+4.5$\degree$, -6.0$\degree$)
and (+4.5$\degree$, -7.5$\degree$). This is probably due to the loss of bright
stars in this field during the sample-selection procedure, although we applied
the relaxed criteria for this field (see Section 2 and Figure 1).
Figure 3: Density maps of stars in the RC regime for nine Galactic fields in
the ($K_{s}$, $u-g$)0 and ($K_{s}$, $J-K_{s}$)0 planes. The horizontal dotted
line indicates 13.0 mag in the $K_{s_{0}}$-band, which divides the bright and
faint RCs. While the two RCs have similar ($J-K_{s}$)0 color, they show
contrasting distribution in the ($K_{s}$, $u-g$)0 plane with large variations
depending on the Galactic position. In general, the bright stars form a redder
clump than the faint stars in the ($u-g$)0 color (see text for details).
Interestingly, the bright and faint RCs show contrasting distributions in
($u-g$)0 color. The fRC stars are concentrated in the bluer regime at around
($u-g$)0 $\sim$ 2.0 for all fields. Although the bRC stars are mainly placed
in the redder regime at ($u-g$)0 $\sim$ 3.0, their distribution patterns are
varied with Galactic position. In particular, while the bRC stars show a
redder clump with a tail toward bluer colors in the fields at $l$ =
0.0$\degree$ and +4.5$\degree$, a fainter, redder clump appears in the fields
of $l$ = -4.5$\degree$, at around $K_{s_{0}}$ = 13.3 mag and ($u-g$)0 = 2.9,
instead of a distinct bRC. In addition, closer inspection of ($l$,
$b$) = (+4.5$\degree$, -7.5$\degree$) and (0.0$\degree$, -7.5$\degree$) fields
reveals that the RGB stars can also be divided into the bluer and redder
branches in ($u-g$)0 color, in addition to the RCs (see also Figure 12).
Additional density maps using other color combinations are shown in Appendix
A.
In order to further examine the split of stars by color, we plot the
color–color contours using ($u-g$)0 and ($J-K_{s}$)0 for stars on the bright
and faint RCs, respectively, in Figure 4. As is clearly shown in the fields at
$b$ = -7.5$\degree$, the stars in both the bRC and fRC are separated into two
subgroups with different ($u-g$)0, but similar ($J-K_{s}$)0. Furthermore, a
similar pattern is commonly observed in all fields for the bRC and fRC. We
note that displaying the contours in two-color diagrams is more illustrative
of the color distribution than the density map drawn from all stars in the RC
regime of Figure 3. For instance, the presence of the redder clump in the bRC
regime at ($l$, $b$) = (+4.5$\degree$, -9.0$\degree$), which is not evident in
Figure 3, is distinct in this contour. Nevertheless, the separation by
($u-g$)0 color is less clear in some fields, such as the bRC in the field at
($l$, $b$) = (+4.5$\degree$, -6.0$\degree$) and the fRC in ($l$, $b$) =
(-4.5$\degree$, -9.0$\degree$), which
is probably due to the relatively large difference in the number ratio between
the bluer and redder subgroups. As the color of the RGB and RC stars is
generally related to their metallicity, this subgrouping would imply the
presence of two stellar populations with different metallicities in the outer
MW bulge. For a detailed investigation of the color and magnitude distribution
for these subgroups, we divide the stars in the RC regime into the bluer and
redder RCs regardless of magnitude (($u-g$)0 $<$ 2.5 for bluer RC stars;
($u-g$)0 $\geq$ 2.5 for redder RC stars; vertical dotted line in Figure 4).
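The subgrouping is a single cut at ($u-g$)0 = 2.5, and contours like those of Figure 4 can be approximated with a two-dimensional kernel density estimate; the grid resolution below is arbitrary.

```python
import numpy as np
from scipy.stats import gaussian_kde

def split_by_ug(ug0, cut=2.5):
    """Divide RC stars into bluer and redder subgroups at (u-g)_0 = 2.5."""
    ug0 = np.asarray(ug0)
    return ug0 < cut, ug0 >= cut

def density_grid(x, y, ngrid=100):
    """2D KDE on a regular grid, suitable for matplotlib's contour()."""
    x, y = np.asarray(x), np.asarray(y)
    kde = gaussian_kde(np.vstack([x, y]))
    xi = np.linspace(x.min(), x.max(), ngrid)
    yi = np.linspace(y.min(), y.max(), ngrid)
    xx, yy = np.meshgrid(xi, yi)
    zz = kde(np.vstack([xx.ravel(), yy.ravel()])).reshape(xx.shape)
    return xx, yy, zz
```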
Figure 4: Density contours of stars in the bright and faint RC regimes,
respectively, in the ($J-K_{s}$, $u-g$)0 plane. Stars on both RCs can be
divided into two subgroups with different ($u-g$)0 and similar ($J-K_{s}$)0 in
most fields. The vertical dotted line indicates ($u-g$)0 = 2.5, where we
divide the sample into bluer and redder RC stars. Figure 5: Density contours
of stars in the RC regime with subgrouping by ($u-g$)0 color, in the ($K_{s}$,
$u-g$)0 plane. While the bluer RC stars show a single faint clump in all
fields, the pattern of the redder RC stars varies with longitude and latitude.
In the redder RC regime, the bright (faint) clump is prominently observed in
the fields of positive (negative) longitude, and both clumps are shown in the
fields of $l$ = 0.0$\degree$. The peak magnitudes of these bright and faint
clumps of redder stars are almost the same regardless of Galactic position. We
note that the single clump of the bluer stars is fainter than the faint clump
of the redder stars. The horizontal dotted lines represent $K_{s_{0}}$ mag =
12.7, 13.3, and 13.6, respectively. Figure 6: Same as Figure 5 but for
($J-K_{s}$)0 color. The bluer and redder stars, divided by ($u-g$)0, overlap
in ($J-K_{s}$)0 color. Thus, the distinct double RC observed in the ($K_{s}$,
$J-K_{s}$)0 CMD may be due to the combination of the faint clump of the bluer
stars and the bright and faint clumps of the redder stars. Figure 7: Same as
Figure 5 but for ($u-i$)0 color. The bluer and redder stars show almost the
same pattern in the ($u-g$)0 and ($u-i$)0 colors. As the ($u-i$)0 color
correlates with metallicity (Johnson et al. 2020), the bluer and redder stars
would have a significant difference in metallicity. Figure 8: Color–magnitude
diagrams for stars in the field at ($l$, $b$) = (-1.0$\degree$, -8.5$\degree$)
together with [Fe/H] abundances of stars obtained from spectroscopy (data from
Lim et al. 2021). The colored circles indicate the metallicity of stars from
metal-poor (blue) to metal-rich (red). The [Fe/H] abundances of stars are
clearly enhanced with increasing color. In particular, the metallicity
gradient is apparent in the ($u-g$)0, ($g-r$)0, and ($u-i$)0 colors, but is
indistinct in ($r-i$)0 and ($J-K_{s}$)0. The upper panels show
color–metallicity relations (solid black lines) obtained from stars in 12.0
$\leq$ $K_{s_{0}}$ $\leq$ 14.0. The standard deviation ($\sigma$) of the
offset between the observed and fitted [Fe/H] values is indicated in the upper
left corner. The [Fe/H] of stars is tightly correlated with ($u-g$)0,
($g-r$)0, and ($u-i$)0 colors with a small standard deviation. We note that
the dashed magenta line in the ($u-i$)0 color represents the color–metallicity
relation calculated in Johnson et al. (2020).
Figure 5 shows the density contours in the ($K_{s}$, $u-g$)0 plane with
subgrouping of the bluer and redder stars in the RC regime. First, when we
examine the redder RC stars, the presence of the bright and faint clumps are
clearly shown in the fields of $l$ = 0.0$\degree$, while the bright clump is
more significant than the faint clump. The ratio of the faint clump to the
bright clump becomes smaller at the higher latitude field, but the average
magnitudes of these two clumps are almost constant at the $K_{s_{0}}$ mag of
12.7 for the bright clump and 13.3 for the faint clump regardless of latitude.
In addition, the bright and faint clumps of the redder stars show a small
difference in ($u-g$)0 color ($\sim$0.2 mag) compared to that derived from all
samples in the bRC and fRC ($\sim$1.0 mag; see Figure 3). The bright clump
becomes predominant in the fields of positive longitude ($l$ = +4.5$\degree$),
whereas the faint clump is significant in the fields of negative longitude
($l$ = -4.5$\degree$). It is important to note that the peak magnitudes of the
bright and faint clumps are not changed with Galactic position. Thus, the two
clumps of the redder RC stars show a negligible difference in color and a
variation of ratio with Galactic longitude and latitude with constant
magnitudes, which is identical to the properties of the double RCs expected
from the X-shaped model (e.g., McWilliam & Zoccali 2010, see also Section 1).
In addition, amongst the fields of $l$ = 0.0$\degree$, the bright and faint
clumps of redder RC stars are further apart in the higher latitude field. This
also supports the X-shaped scenario for the double clumps in the redder RC
regime. In contrast, the bluer RC stars only form a single extended clump on
the fainter region, at $K_{s_{0}}$ = 13.3$-$13.8 mag, in all fields. In
particular, the peak magnitude of this clump is even fainter than the faint
clump of the redder RC stars.
We also plot the density contours of the bluer and redder RC stars, divided by
($u-g$)0 color, in the ($K_{s}$, $J-K_{s}$)0 and ($K_{s}$, $u-i$)0 planes in
Figures 6 and 7. Although the blue and redder stars are similarly separated in
the ($J-K_{s}$)0 and ($u-i$)0 colors, they mildly overlap in the ($K_{s}$,
$J-K_{s}$)0 plane. It therefore appears that the distinct double RC observed
in the NIR photometry is composed of the bright clump of the redder stars and
the faint clumps of both the redder and bluer stars. We note that similar
distributions of bluer and redder stars are also observed in other color
combinations (see Figures 16 and 17). In addition, as the ($u-i$)0 color is
tightly correlated with metallicity (see Figure 18 of Johnson et al. 2020),
the difference in ($u-i$)0 between the bluer and redder stars suggests a
difference in metallicity, where the redder stars are more metal-rich. The
fact that the double RC appears only amongst the redder stars while the bluer
stars comprise a single clump corresponds to the current understanding of the
MW bulge that the double RC feature is prominent among the metal-rich stars
(Ness et al. 2012; Rojas-Arriagada et al. 2014). In the same vein, the
difference in metallicity between stars in the bright and faint RCs reported
by spectroscopic studies (e.g., Uttenthaler et al. 2012) is also reasonable
because the bluer (metal-poor) stars only form a single clump in the fRC
regime.
## 4 Comparison with spectroscopy
Figure 9: Same as Figure 8, but for [Na/Fe], [Al/Fe], and [O/Fe] abundances in
the ($K_{s}$, $u-g$)0 and ($K_{s}$, $J-K_{s}$)0 planes. The [Na/Fe] abundances
of stars increase with increasing ($u-g$)0 color, while the [O/Fe] abundances
decrease. These abundance gradients are not observed in the ($J-K_{s}$)0
color. In the case of [Al/Fe], a variation of abundance is not obvious in
either ($u-g$)0 or ($J-K_{s}$)0. Figure 10: Kernel density estimates of
[Fe/H], [Na/Fe], [Al/Fe], and [O/Fe] abundances for the bluer and redder
stars. The redder stars are generally more enhanced in [Fe/H] and [Na/Fe] than
the bluer stars, while this trend is reversed in [Al/Fe] and [O/Fe]. The
vertical red and blue dotted lines indicate the peak abundances of each group,
and the bandwidth for KDE is in the upper right corner of each panel.
In order to further examine the relationship between the colors of the RC
stars and their chemical composition, we compare the BDBS data with high-
resolution spectroscopic data. The spectroscopic data are from Lim et al.
(2021), and were obtained using the Michigan/Magellan Fiber System (M2FS;
Mateo et al. 2012) on the Magellan telescope for the RC and RGB stars in the
field of ($l$, $b$) = (-1$\degree$, -8.5$\degree$). For this comparison, we
performed the same sample-selection procedure as that described in Section 2,
this time for stars in this field from the BDBS, VVV, and Gaia data. A total
of 124 stars were cross-matched with the spectroscopic data, and the
metallicities of these stars are over-plotted on the CMDs in Figure 8. As is
expected, the [Fe/H] abundances of stars are gradually enhanced from -0.9 dex
to +0.4 dex with increasing color index (i.e., from blue to red). This
metallicity gradient with stellar color is particularly evident in ($u-g$)0,
($g-r$)0 , and ($u-i$)0, but less clear in ($r-i$)0 and ($J-K_{s}$)0. We also
estimate the color–metallicity relations for stars in the RC range (12.0
$\leq$ $K_{s_{0}}$ $\leq$ 14.0) in the upper panels of Figure 8. The mean
offsets between the observed and fitted [Fe/H] values are 0.14, 0.16, 0.22,
0.15, and 0.23 dex in ($u-g$)0, ($g-r$)0, ($r-i$)0, ($u-i$)0, and ($J-K_{s}$)0
colors, respectively, and their standard deviations are 0.18, 0.20, 0.26,
0.19, and 0.28. The [Fe/H] of stars show tight correlations in the ($u-g$)0,
($g-r$)0, and ($u-i$)0 colors with a low standard deviation ($\sigma$ $\leq$
0.2), while these correlations are less distinct in the ($r-i$)0 and
($J-K_{s}$)0 colors. The color–metallicity relations for the BDBS ($u-g$)0,
($g-r$)0, and ($u-i$)0 colors are as follows:
$[\mathrm{Fe/H}]=(0.633\pm 0.048)\times(u-g)_{0}-(1.665\pm 0.105),$
$[\mathrm{Fe/H}]=(2.068\pm 0.191)\times(g-r)_{0}-(1.738\pm 0.140),$
$[\mathrm{Fe/H}]=(0.451\pm 0.037)\times(u-i)_{0}-(1.726\pm 0.118).$
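Relations of this form can be recovered with an ordinary least-squares fit. The sketch below, using numpy's `polyfit` with the parameter covariance, is one reasonable choice; the paper does not state which fitting method was used.

```python
import numpy as np

def fit_color_metallicity(color0, feh):
    """Fit [Fe/H] = a * color + b; return 1-sigma errors and the scatter."""
    color0, feh = np.asarray(color0), np.asarray(feh)
    (a, b), cov = np.polyfit(color0, feh, deg=1, cov=True)
    a_err, b_err = np.sqrt(np.diag(cov))
    sigma = np.std(feh - (a * color0 + b))  # std of the fit residuals
    return (a, a_err), (b, b_err), sigma
```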
In particular, the relation with ($u-i$)0 color is comparable to that
determined by Johnson et al. (2020), which reads [Fe/H] = 0.563($u-i$)0 $-$
2.074 (magenta dashed line in Figure 8). Therefore, we can confirm that the
bluer and redder stars, divided in ($u-g$)0 in Section 3, have different mean
metallicities, separating into metal-poor and metal-rich populations of the
bulge. However, the trend of metallicity with magnitude is not noticeable.
(It appears that the difference in metallicity between stars in the bright and
faint RCs reported by Lim et al. (2021) is probably due to the dominance of
the redder (metal-rich) stars in the bRC regime and of the bluer (metal-poor)
stars in the fRC regime; see also Section 3.)
Figure 9 shows the comparison of the BDBS photometry with chemical abundances
of Na, Al, and O. The metal-rich redder stars are more enhanced in [Na/Fe]
than the metal-poor bluer stars, while this trend is reversed in [O/Fe] in the
($K_{s}$, $u-g$)0 CMD. However, these abundance gradients are not observed in
the ($K_{s}$, $J-K_{s}$)0 CMD, as was also the case for the [Fe/H] abundance in Figure 8.
The Na enhancement with a decline in O in the metal-rich stars is consistent
with the typical chemical trends of the bulge stars reported by several
studies; for example the [Na/Fe] is increased and [O/Fe] is decreased with
increasing [Fe/H] (e.g., Johnson et al. 2014; Zasowski et al. 2019). On the
other hand, the NUV photometry has been efficiently used to trace the multiple
stellar populations with different chemical abundance in light elements, such
as N and Na, for GCs (e.g., Cummings et al. 2014; Savino et al. 2018). In this
regard, the clear gradients of Na and O abundances with ($u-g$)0 color may
reflect abundance variations in light elements together with the primary
effect of different metallicity. Therefore, this result further implies that
the BDBS data are also useful for studying multiple populations in GCs through
specific color combinations, such as the $c_{y}$ index (e.g., Savino et al.
2018, see also Figure 14). However, for the case of [Al/Fe], the separation
between the bluer and redder stars and the gradient of abundance is not clear,
although the [Al/Fe] abundances of the bulge stars slightly decrease with
increasing [Fe/H], as in [O/Fe]. This could be because the variation of Al
abundance with metallicity is not large compared to those of Na and O (see Figure 2
of Zasowski et al. 2019).
We also plot the kernel density estimates (KDEs) of [Fe/H], [Na/Fe], [Al/Fe],
and [O/Fe] abundances for stars in the bluer and redder RC subgroups,
respectively, in Figure 10. While all the cross-matched stars show a distinct
bimodal distribution in [Fe/H], the bluer and redder stars are clearly
separated with a peak difference of $\sim$0.45 dex (peak value of -0.4 dex for
bluer RC stars; +0.05 dex for redder RC stars). This difference is comparable
to that between the metal-poor and metal-rich components of the bulge reported
by previous studies. Here we note that Ness et al. (2013) reported a mean
[Fe/H] of -0.25 dex for the metal-poor component and +0.15 dex for the metal-
rich component, and -0.4 dex and +0.3 dex of the peak values of [Fe/H] are
presented for the metal-poor and metal-rich components by Zoccali et al.
(2017). This similarity supports the idea that the bluer and redder stars of
this study correspond to the metal-poor and metal-rich components of the
bulge, respectively.
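For reference, a fixed-bandwidth KDE like those in Figure 10 can be sketched with scipy; note that `gaussian_kde` expects a bandwidth expressed relative to the sample standard deviation, so an absolute bandwidth in dex must be converted.

```python
import numpy as np
from scipy.stats import gaussian_kde

def abundance_kde(values, bandwidth, npts=400):
    """1D KDE of an abundance ratio with a fixed absolute bandwidth (dex)."""
    values = np.asarray(values, dtype=float)
    # Convert the absolute bandwidth to gaussian_kde's std-relative factor
    kde = gaussian_kde(values, bw_method=bandwidth / values.std(ddof=1))
    grid = np.linspace(values.min() - 0.3, values.max() + 0.3, npts)
    return grid, kde(grid)

# Peak abundance of a subgroup, e.g. for [Fe/H] of the redder stars:
# x, y = abundance_kde(feh_redder, bandwidth=0.10); peak = x[np.argmax(y)]
```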
In addition, the metal-rich, redder stars are also generally more enhanced in
[Na/Fe] but more depleted in [Al/Fe] and [O/Fe] than the metal-poor bluer
stars, although this trend is less clear in [Al/Fe]. The differences between
the two groups are $\sim$0.2 dex in [Na/Fe], $\sim$0.13 dex in [Al/Fe], and
$\sim$0.35 dex in [O/Fe]. These chemical characteristics are analogous to those observed
in GCs between the earlier and later stellar populations in terms of the Na-O
anti-correlation (see, e.g., Carretta et al. 2009; Bastian & Lardo 2018).
However, we note that the Na-enhancement with O-depletion of the metal-rich
stars could be naturally expected in the bulge (see, e.g., Johnson et al.
2014; Zasowski et al. 2019). Although the bluer and redder stars in the bulge
show a large difference in metallicity, unlike typical GCs, the intrinsic
metallicity variations are also observed in some peculiar GCs, such as
$\omega$-Centauri, Gaia 1, and Terzan 5 (Johnson & Pilachowski 2010; Origlia
et al. 2011; Massari et al. 2014; Mucciarelli et al. 2017; Simpson et al.
2017; Schiavon et al. 2017; Koch et al. 2018). In this regard, further
research is necessary to determine whether these chemical properties simply
reflect the general trends of the bulge stars or are associated with the
multiple populations of GCs.
## 5 Discussion
Here, we use the BDBS data to show that the bright and faint RCs observed in
the bulge have contrasting distributions in ($u-g$)0 and ($u-i$)0 colors with
significant variations depending on Galactic longitude and latitude. In
particular, the stars on the RC could be efficiently divided into bluer and
redder stars according to their ($u-g$)0 colors. The redder stars are
characteristic of the double RC, in that they show constant magnitudes in the
bright and faint clumps and a variation in number ratio with longitude, while
the bluer stars are mainly placed in the fainter RC regime regardless of the
studied field. We also confirm that the redder stars are more enhanced in
[Fe/H] and [Na/Fe], but more depleted in [O/Fe] than the bluer stars through a
comparison with our spectroscopy. Our result is consistent with previous
studies showing that the MW bulge hosts a spheroidal metal-poor component and
a boxy/peanut-shaped metal-rich component (Rojas-Arriagada
et al. 2014; Kunder et al. 2016).
### 5.1 Effect of the bar
Although our findings are reasonable in the context of the current
understanding of the bulge, which comprises a metal-poor and a metal-rich
component, two other possible explanations should be considered. The first is
that the redder RC originated from the metal-rich population of the Galactic
bar. McWilliam & Zoccali (2010) suggested that the influence of the bar
component is insufficient to explain the constant magnitude of the bright and
faint RCs in the various fields of the bulge. However, if we consider the
single clump of the redder stars in the fields of $l$ = +4.5$\degree$ and
-4.5$\degree$, the clump at positive longitude is 0.7 mag brighter than
that in the negative longitude field (see Figures 3 and 5). This difference in
magnitude is comparable to that expected for the 9$\degree$ separation in
longitude of the tilted bar (see Section 5 of McWilliam & Zoccali 2010). Thus,
while the RC of the metal-poor bulge stars is placed on the bluer and fainter
regime, the RC of the metal-rich bar stars is shifted to the redder regime,
which could explain the observed magnitude and color distribution, as well as
their dependence on the Galactic position. However, the effect of the bar
alone cannot explain the relatively distinct double RC of the redder stars in
the fields at $l$ = 0$\degree$, where its bright clump shows a magnitude that
is similar to the clump in the $l$ = +4.5$\degree$ fields. Moreover, the faint
clump shows magnitudes that are consistent with the clump in the $l$ =
-4.5$\degree$ fields. If the redder stars were to originate from the bar, they
would be expected to form a single clump (at $l$ = 0$\degree$) in between the
magnitude range of the clump of the $l$ = +4.5$\degree$ and $l$ =
-4.5$\degree$ fields. One other interesting feature is the magnitude variation
of the bluer clump in the fields of $b$ = $-$6.0$\degree$. As shown in Figures
5, 6, and 7, the peak magnitude of the bluer clump increases with decreasing
longitude from 13.0 ($l$ = +4.5$\degree$) to 13.5 ($l$ = $-$4.5$\degree$)
$K_{s_{0}}$ mag, while no clear variation is observed in the $b$ =
$-$7.5$\degree$ and $-$9.0$\degree$ fields. Kunder et al. (2020) reported that
one population of metal-poor RR Lyrae stars traces the barred structure. In
this regard, this feature may reflect the tilted bar structure among metal-
poor stars, because the barred signature would be negligible at higher
latitude ($b$ = $-$7.5$\degree$ and $-$9.0$\degree$).
Figure 11: Comparison of our observation (left panels) with synthetic
population models (right panels). The difference in the ($u-g$)0 and ($u-i$)0
colors, with similar ($J-K_{s}$)0 color, between the bRC and fRC can be
reproduced with two-population models. The bRC models (red circles) are
enhanced not only in [Fe/H] but also in He abundance with respect to the fRC
models (blue circles), with $\Delta$[Fe/H] = 0.5 dex and $\Delta Y$ = 0.1.
### 5.2 Multiple-population scenario
Another possible explanation for the observed color distribution could be the
difference in chemical composition between the bright and faint RCs based on
the multiple populations (see Lee et al. 2015; Joo et al. 2017; Lim et al.
2021). The color of the RC is mainly related to metallicity, while the helium
abundance of stars affects their luminosity. In this regard, the redder color
of the bRC and the bluer color of the fRC could be well explained by the
difference in both metallicity and He abundance of RC stars. Figure 11 shows a
comparison between observations of stars in the ($l$, $b$) = (0.0$\degree$,
-9.0$\degree$) field with synthetic models for the two stellar populations
with different chemical composition, but the same age of 10 Gyr and distance
modulus of 15.0 mag in the $K_{s}$-band. These models are based on the
evolutionary population synthesis of Chung et al. (2017). As shown in this
figure, the metal-poor and He-normal population of the fRC and the metal-rich
and He-enhanced population of the bRC ($\Delta$[Fe/H] = 0.6 dex; $\Delta$ $Y$
= 0.1) nicely reproduce the observed features, such as the difference in the
($u-g$)0 and ($u-i$)0 with similar ($J-K_{s}$)0 between the two RCs. The
enhancement of He abundance in the later-generation stars has been reported in
some massive GCs (e.g., King et al. 2012; Milone 2015). In this scenario, the
variation of the redder RC depending on longitude can be explained by the
different influence of the bar component (see Joo et al. 2017). Thus, the
young and metal-rich bar component comprises the faint clump of the redder
stars in the field of $l$ = 0.0$\degree$, while this component is embedded in
the bright clump of the redder stars in the positive longitude fields, and
forms the faint clump of the redder RC regime in the negative longitude
fields. However, the lack of the redder bright clump in the fields of $l$ =
-4.5$\degree$ and the almost identical magnitude of the redder faint clump
between the $l$ = 0.0$\degree$ and -4.5$\degree$ fields remain to be
explained.
### 5.3 X-shaped structure of the metal-rich component
In summary, our main findings for the color and luminosity distributions of
the RC stars are as follows: (1) RC stars can efficiently be divided into
metal-poor bluer and metal-rich redder stars in the ($u-g$)0 color. (2) The
metal-poor bluer stars populate a single RC with consistent magnitude in all
fields. (3) The metal-rich redder stars show distinct double clumps at the
fields of $l$ = 0.0$\degree$. (4) In these fields, the separation between the
bright and faint clumps of the metal-rich redder stars is extended with
increasing latitude, and (5) the bright clump of the metal-rich redder stars
becomes significant in the positive longitude fields, whereas the faint clump
of the redder stars is dominant in the negative longitude fields at the same
magnitude with the fields of $l$ = 0.0$\degree$. All these properties are best
explained by a spheroidal shape of the metal-poor component and a boxy/peanut
shape (X-shape) of the metal-rich component in the bulge.
However, based on this scenario there is an additional issue to be addressed.
As clearly shown in the field at ($l$, $b$) = (0.0$\degree$, -7.5$\degree$),
the RC of the metal-poor stars is even fainter than the faint clump of the
metal-rich stars (see Figure 5). If the double RC of the metal-rich stars
originates from a different distance on the X-shaped structure, the spheroidal
shape of the metal-poor stars should be placed in between the near and far
arms of the X-structure. In this case, the fainter luminosity of the metal-
poor component cannot be explained by a distance effect alone. This difference
in luminosity could be due to metallicity or population effects on the RC. The
RC stars become redder with increasing metallicity, while they become slightly
brighter in NIR bands and fainter in optical bands (see Salaris & Girardi
2002). This effect on luminosity increases with increasing age. For instance,
when we compare two stellar isochrones with the same age of 10 Gyr from the
“Bag of Stellar Tracks and Isochrones” (BaSTI; Hidalgo et al. 2018),
contrasting metal-poor ([M/H] = -0.401, $Z$ = 0.006, and $Y$ = 0.255) and
metal-rich ([M/H]=+0.06, $Z$ = 0.017, and $Y$ = 0.269) cases, the RC of the
metal-rich model is approximately 0.1 mag brighter in the $K_{s}$ band and 0.5
mag redder in ($u-g$) than that of the metal-poor model. We note that this
difference in magnitude is comparable to that suggested by Salaris & Girardi
(2002; see their Table 1). However, the metallicity effect seems to be
insufficient to explain our observed magnitude difference between the bluer RC
and the center of the redder double RC ($\sim$0.5 mag at ($l$, $b$) =
(0.0$\degree$, -7.5$\degree$)). On the other hand, as the luminosity of RC
stars is primarily affected by He abundance, a He-enhancement of the metal-
rich component of the bulge may be required (see Joo et al. 2017). Therefore,
further study using population synthesis modeling is essential to examine
whether the observed luminosity distribution of the metal-poor and metal-rich
components can be reproduced with metallicity and distance differences only or
whether an additional He abundance variation between the two components is
required.
The BDBS data allow an enormous sample of the bulge stars to be probed in six
passbands ($ugrizY$), providing a new view of the luminosity and color
distribution of the RC stars in the MW bulge. In particular, these data will
be of great help in investigating the bimodal structure of the MW bulge
composed of the metal-poor and metal-rich stellar populations. Further
investigation of the detailed 3D structure model of the metal-poor and metal-
rich populations of the bulge is required, and the BDBS is ideal for such
studies.
###### Acknowledgements.
We are grateful to the anonymous referee for a number of helpful suggestions
and comments. DL and AJKH gratefully acknowledge funding by the Deutsche
Forschungsgemeinschaft (DFG, German Research Foundation) – SFB 881 (“The Milky
Way System”, subprojects A03, A05, A11). DL thanks Sree Oh for comments and
encouragements. This research made use of the cross-match service provided by
CDS, Strasbourg. Data used in this paper comes from the Blanco DECam Survey
Collaboration. This project used data obtained with the Dark Energy Camera
(DECam), which was constructed by the Dark Energy Survey (DES) collaboration.
Funding for the DES Projects has been provided by the U.S. Department of
Energy, the U.S. National Science Foundation, the Ministry of Science and
Education of Spain, the Science and Technology Facilities Council of the
United Kingdom, the Higher Education Funding Council for England, the National
Center for Supercomputing Applications at the University of Illinois at
Urbana-Champaign, the Kavli Institute of Cosmological Physics at the
University of Chicago, the Center for Cosmology and Astro-Particle Physics at
the Ohio State University, the Mitchell Institute for Fundamental Physics and
Astronomy at Texas A&M University, Financiadora de Estudos e Projetos, Fundação
Carlos Chagas Filho de Amparo à Pesquisa do Estado do Rio de Janeiro, Conselho
Nacional de Desenvolvimento Científico e Tecnológico and the Ministério da
Ciência, Tecnologia e Inovação, the Deutsche Forschungsgemeinschaft, and the
Collaborating Institutions in the Dark Energy Survey. The Collaborating
Institutions are Argonne National Laboratory, the University of California at
Santa Cruz, the University of Cambridge, Centro de Investigaciones
Energéticas, Medioambientales y Tecnológicas-Madrid, the University of
Chicago, University College London, the DES-Brazil Consortium, the University
of Edinburgh, the Eidgenössische Technische Hochschule (ETH) Zürich, Fermi
National Accelerator Laboratory, the University of Illinois at Urbana-
Champaign, the Institut de Ciències de l'Espai (IEEC/CSIC), the Institut de
Física d’Altes Energies, Lawrence Berkeley National Laboratory, the Ludwig-
Maximilians Universität München and the associated Excellence Cluster
Universe, the University of Michigan, the National Optical Astronomy
Observatory, the University of Nottingham, the Ohio State University, the
OzDES Membership Consortium, the University of Pennsylvania, the University of
Portsmouth, SLAC National Accelerator Laboratory, Stanford University, the
University of Sussex, and Texas A&M University. Based on observations at Cerro
Tololo Inter-American Observatory (2013A-0529; 2014A-0480; PI: Rich), National
Optical Astronomy Observatory, which is operated by the Association of
Universities for Research in Astronomy (AURA) under a cooperative agreement
with the National Science Foundation.
## References
* Alonso-García et al. (2017) Alonso-García, J., Minniti, D., Catelan, M., et al. 2017, ApJ, 849, L13
* Athanassoula (2005) Athanassoula, E. 2005, MNRAS, 358, 1477
* Babusiaux et al. (2010) Babusiaux, C., Gómez, A., Hill, V., et al. 2010, A&A, 519, A77
* Bastian & Lardo (2018) Bastian, N. & Lardo, C. 2018, ARA&A, 56, 83
* Bureau et al. (2006) Bureau, M., Aronica, G., Athanassoula, E., et al. 2006, MNRAS, 370, 753
* Buta et al. (2015) Buta, R. J., Sheth, K., Athanassoula, E., et al. 2015, ApJS, 217, 32
* Carretta et al. (2009) Carretta, E., Bragaglia, A., Gratton, R. G., et al. 2009, A&A, 505, 117
* Casagrande & VandenBerg (2018) Casagrande, L. & VandenBerg, D. A. 2018, MNRAS, 479, L102
* Chung et al. (2017) Chung, C., Yoon, S.-J., & Lee, Y.-W. 2017, ApJ, 842, 91
* Clarkson et al. (2018) Clarkson, W. I., Calamida, A., Sahu, K. C., et al. 2018, ApJ, 858, 46
* Cummings et al. (2014) Cummings, J. D., Geisler, D., Villanova, S., & Carraro, G. 2014, AJ, 148, 27
* Gaia Collaboration et al. (2018) Gaia Collaboration, Brown, A. G. A., Vallenari, A., et al. 2018, A&A, 616, A1
* Gonzalez et al. (2016) Gonzalez, O. A., Gadotti, D. A., Debattista, V. P., et al. 2016, A&A, 591, A7
* Gratton et al. (2012) Gratton, R. G., Carretta, E., & Bragaglia, A. 2012, A&A Rev., 20, 50
* Green et al. (2018) Green, G. M., Schlafly, E. F., Finkbeiner, D., et al. 2018, MNRAS, 478, 651
* Hidalgo et al. (2018) Hidalgo, S. L., Pietrinferni, A., Cassisi, S., et al. 2018, ApJ, 856, 125
* Johnson & Pilachowski (2010) Johnson, C. I. & Pilachowski, C. A. 2010, ApJ, 722, 1373
* Johnson et al. (2014) Johnson, C. I., Rich, R. M., Kobayashi, C., Kunder, A., & Koch, A. 2014, AJ, 148, 67
* Johnson et al. (2020) Johnson, C. I., Rich, R. M., Young, M. D., et al. 2020, MNRAS, 499, 2357
* Joo et al. (2017) Joo, S.-J., Lee, Y.-W., & Chung, C. 2017, ApJ, 840, 98
* King et al. (2012) King, I. R., Bedin, L. R., Cassisi, S., et al. 2012, AJ, 144, 5
* Koch et al. (2018) Koch, A., Hansen, T. T., & Kunder, A. 2018, A&A, 609, A13
* Koch et al. (2016) Koch, A., McWilliam, A., Preston, G. W., & Thompson, I. B. 2016, A&A, 587, A124
* Kunder et al. (2020) Kunder, A., Pérez-Villegas, A., Rich, R. M., et al. 2020, AJ, 159, 270
* Kunder et al. (2016) Kunder, A., Rich, R. M., Koch, A., et al. 2016, ApJ, 821, L25
* Lee et al. (2013) Lee, Y.-W., Han, S.-I., Joo, S.-J., et al. 2013, ApJ, 778, L13
* Lee et al. (2018) Lee, Y.-W., Hong, S., Lim, D., et al. 2018, ApJ, 862, L8
* Lee et al. (2015) Lee, Y.-W., Joo, S.-J., & Chung, C. 2015, MNRAS, 453, 3906
* Lim et al. (2021) Lim, D., Lee, Y.-W., Koch, A., et al. 2021, ApJ, in press, arXiv:2012.03954
* Massari et al. (2014) Massari, D., Mucciarelli, A., Ferraro, F. R., et al. 2014, ApJ, 795, 22
* Mateo et al. (2012) Mateo, M., Bailey, J. I., Crane, J., et al. 2012, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 8446, Ground-based and Airborne Instrumentation for Astronomy IV, 84464Y
* McWilliam & Zoccali (2010) McWilliam, A. & Zoccali, M. 2010, ApJ, 724, 1491
* Milone (2015) Milone, A. P. 2015, MNRAS, 446, 1672
* Minniti et al. (2010) Minniti, D., Lucas, P. W., Emerson, J. P., et al. 2010, New A, 15, 433
* Mucciarelli et al. (2017) Mucciarelli, A., Monaco, L., Bonifacio, P., & Saviane, I. 2017, A&A, 603, L7
* Nataf (2017) Nataf, D. M. 2017, PASA, 34, e041
* Nataf et al. (2014) Nataf, D. M., Cassisi, S., & Athanassoula, E. 2014, MNRAS, 442, 2075
* Nataf et al. (2010) Nataf, D. M., Udalski, A., Gould, A., Fouqué, P., & Stanek, K. Z. 2010, ApJ, 721, L28
* Nataf et al. (2015) Nataf, D. M., Udalski, A., Skowron, J., et al. 2015, MNRAS, 447, 1535
* Ness et al. (2013) Ness, M., Freeman, K., Athanassoula, E., et al. 2013, MNRAS, 430, 836
* Ness et al. (2012) Ness, M., Freeman, K., Athanassoula, E., et al. 2012, ApJ, 756, 22
* Origlia et al. (2011) Origlia, L., Rich, R. M., Ferraro, F. R., et al. 2011, ApJ, 726, L20
* Rattenbury et al. (2007) Rattenbury, N. J., Mao, S., Sumi, T., & Smith, M. C. 2007, MNRAS, 378, 1064
* Rich et al. (2020) Rich, R. M., Johnson, C. I., Young, M., et al. 2020, MNRAS, 499, 2340
* Rojas-Arriagada et al. (2014) Rojas-Arriagada, A., Recio-Blanco, A., Hill, V., et al. 2014, A&A, 569, A103
* Salaris & Girardi (2002) Salaris, M. & Girardi, L. 2002, MNRAS, 337, 332
* Savino et al. (2020) Savino, A., Koch, A., Prudil, Z., Kunder, A., & Smolec, R. 2020, A&A, 641, A96
* Savino et al. (2018) Savino, A., Massari, D., Bragaglia, A., Dalessandro, E., & Tolstoy, E. 2018, MNRAS, 474, 4438
* Schiavon et al. (2017) Schiavon, R. P., Johnson, J. A., Frinchaboy, P. M., et al. 2017, MNRAS, 466, 1010
* Schlafly & Finkbeiner (2011) Schlafly, E. F. & Finkbeiner, D. P. 2011, ApJ, 737, 103
* Simion et al. (2017) Simion, I. T., Belokurov, V., Irwin, M., et al. 2017, MNRAS, 471, 4323
* Simpson et al. (2017) Simpson, J. D., De Silva, G. M., Martell, S. L., et al. 2017, MNRAS, 471, 4087
* Stanek et al. (1994) Stanek, K. Z., Mateo, M., Udalski, A., et al. 1994, ApJ, 429, L73
* Taylor (2005) Taylor, M. B. 2005, in Astronomical Society of the Pacific Conference Series, Vol. 347, Astronomical Data Analysis Software and Systems XIV, ed. P. Shopbell, M. Britton, & R. Ebert, 29
* Uttenthaler et al. (2012) Uttenthaler, S., Schultheis, M., Nataf, D. M., et al. 2012, A&A, 546, A57
* Wegg & Gerhard (2013) Wegg, C. & Gerhard, O. 2013, MNRAS, 435, 1874
* Wegg et al. (2015) Wegg, C., Gerhard, O., & Portail, M. 2015, MNRAS, 450, 4050
* Zasowski et al. (2019) Zasowski, G., Schultheis, M., Hasselquist, S., et al. 2019, ApJ, 870, 138
* Zoccali et al. (2017) Zoccali, M., Vasquez, S., Gonzalez, O. A., et al. 2017, A&A, 599, A12
## Appendix A Density maps from various colors
Figure 12: Density maps of stars in the RGB regime for nine Galactic fields in
the ($u-g$)0 and ($J-K_{s}$)0 color planes. A clear split of RGBs is shown in
the field of ($l$, $b$) = (+4.5$\degree$, -7.5$\degree$). Figure 13: Density
maps of stars in the RC regime for nine Galactic fields in the ($u-K_{s}$)0
color plane. Similar to the cases of ($u-g$)0 and ($u-i$)0 colors, contrasting
distributions between the bright and faint RC are shown. Figure 14: Same as
Figure 13, but in the ($u-g$)0-($g-i$)0 color plane. The overall patterns are
consistent with those in ($u-g$)0 and ($u-K_{s}$)0 colors. In particular, the
separation between the blue and redder RC stars is more distinct in this color
combination. Figure 15: Same as Figure 13, but in the ($G_{BP}-G_{Rp}$)0 color
plane. Although the double RC feature is shown, no difference in the color
distribution between the bright and faint RC stars is observed. Figure 16:
Density contours of stars in the RC regime with subgrouping by ($u-g$)0 color,
in the ($K_{s}$, $g-i$)0 plane. Similar to the ($u-g$)0 and ($u-i$)0 colors in
Figures 5 and 7, bluer and redder RC stars are well divided in this color
plane with a single clump of bluer stars and a double clump of redder stars.
Figure 17: Same as Figure 16 but for ($r-i$)0 color. The bluer and redder
stars are overlapped in this color, similar to the case of ($J-K_{s}$)0 color
(see Figure 6).
# A Long Stream of Metal-Poor Cool Gas around a Massive Starburst Galaxy at z
= 2.67
Hai Fu$^{1,2}$, R. Xue$^{1,3}$, J. X. Prochaska$^{4}$, A. Stockton$^{2}$, S. Ponnada$^{1,5}$, M. W. Lau$^{6}$, A. Cooray$^{7}$, and D. Narayanan$^{8}$

$^{1}$Department of Physics & Astronomy, University of Iowa, Iowa City, IA 52242
$^{2}$Institute for Astronomy, University of Hawaii, Honolulu, HI 96822
$^{3}$National Radio Astronomy Observatory, Charlottesville, VA 22903
$^{4}$Department of Astronomy and Astrophysics, UCO/Lick Observatory, University of California, Santa Cruz, CA 95064
$^{5}$Astronomy Department, California Institute of Technology, Pasadena, CA 91125
$^{6}$Department of Physics and Astronomy, University of California, Riverside, CA 92521
$^{7}$Department of Physics and Astronomy, University of California, Irvine, CA 92697
$^{8}$Department of Astronomy, University of Florida, Gainesville, FL 32611
(Submitted on 2020 Dec 23, accepted on 2021 Jan 11)
###### Abstract
We present the first detailed dissection of the circumgalactic medium (CGM) of
massive starburst galaxies at $z>2$. Our target is a submillimeter galaxy
(SMG) at $z=2.674$ that has a star formation rate of 1200 $M_{\odot}\,{\rm
yr}^{-1}$ and a molecular gas reservoir of $1.3\times 10^{11}$ $M_{\odot}$. We
characterize its CGM with two background QSOs at impact parameters of 93 kpc
and 176 kpc. We detect strong H i and metal-line absorption near the redshift
of the SMG toward both QSOs, each consisting of three main subsystems spanning
over 1500 km s$^{-1}$. The absorbers show remarkable kinematic and metallicity
coherence across a separation of $\sim$86 kpc. In particular, the cool gas in
the CGM of the SMG exhibits high H i column densities ($\log N_{\rm HI}/{\rm
cm}^{-2}=20.2,18.6$), low metallicities (${\rm[M/H]}\approx-2.0$), and similar
radial velocities ($\delta v\sim-300$ km s$^{-1}$). While the H i column densities
match previous results on the CGM around QSOs at $z>2$, the metallicities are
lower by more than an order of magnitude, making it an outlier in the line
width$-$metallicity relation of damped Ly$\alpha$ absorbers. The large
physical extent, the velocity coherence, the high surface density, and the low
metallicity are all consistent with the cool, inflowing, and near-pristine gas
streams predicted to penetrate hot massive halos at $z>1.5$. We estimate a
total gas accretion rate of $\sim$100 $M_{\odot}\,{\rm yr}^{-1}$ from three
such streams, which falls short of the star formation rate but is consistent
with simulations. At this rate, it takes about a gigayear to acquire the
molecular gas reservoir of the central starburst.
###### Subject headings:
Starburst galaxies; Circumgalactic medium
To appear in the Astrophysical Journal.
## 1. Introduction
The global gas supply for in situ star formation is a central question in
galaxy formation and evolution, because star formation and merging are the two
primary channels through which galaxies grow (Oser et al., 2010). According to
spherical hydrodynamical models (Birnboim & Dekel, 2003) and cosmological
simulations (Kereš et al., 2005), stable accretion shocks are established near
the virial radius when a dark matter (DM) halo grows to a mass threshold of
$M_{\rm shock}=2-3\times 10^{11}$ $M_{\odot}$. So in massive halos, a
significant fraction of the accreted gas is expected to be shock-heated to the
virial temperature ($T_{\rm vir}=8\times 10^{6}~{}(M_{\rm
halo}/10^{13}~{}M_{\odot})^{2/3}$ K) and develops an atmosphere of hot diffuse
gas. The virial shock effectively cuts off the fuel supply for star formation,
because of the inefficient radiative cooling of the hot gas even in the denser
inner regions (Kereš et al., 2009). But at high redshift, narrow filaments of
cool gas ($T\lesssim 10^{5}$ K) from the cosmic web may penetrate the hot
atmospheres of rare, massive halos without ever being shock-heated to the
virial temperature, thanks to the lower masses of typical halos at higher
redshifts that define the width of the filaments (Dekel & Birnboim, 2006;
Dekel et al., 2009). In fact, this cold mode accretion may dominate over the
hot mode accretion (radiative cooling of shock-heated virialized gas) at all
halo masses at $z>2$ (Kereš et al., 2009).
In emission, the predicted cold-mode accretion streams (or “cold streams” in
short) feeding high-redshift massive galaxies may appear as giant filamentary
Ly$\alpha$ nebulae around QSOs (Weidinger et al., 2004; Cantalupo et al.,
2014; Martin et al., 2015, 2019) and in dense protocluster environments
(Møller & Fynbo, 2001; Hennawi et al., 2015; Umehata et al., 2019; Li et al.,
2019; Daddi et al., 2020). However, it has been difficult to rule out outflows
as the alternative interpretation, especially when QSO photoionization
contributes to the Ly$\alpha$ emission and the chemical abundance of the
nebulae cannot be easily measured. In absorption, cold streams can be detected
and distinguished from other gaseous components based on neutral hydrogen (H
i) column density, kinematics, and particularly chemical abundance (Fumagalli
et al., 2011; Theuns, 2021).
In this project, we have selected a sample of massive starbursts at high
redshifts in the vicinity of background QSOs to trace the cool gas supply in
these early massive halos. There, the problem of gas supply is the most acute
because of the extremely short gas exhaustion timescale. We then utilize the
absorption-line spectra of background QSOs to characterize the physical state
of their circumgalactic medium (CGM) – the gas between the inner regions of
galaxies and the diffuse intergalactic medium (IGM) – and to search for large-
scale cool gas reservoirs. In this section, we review our knowledge of the
target galaxy sample and QSO absorption-line systems in the literature. These
earlier studies have motivated this project and will provide valuable
reference samples that can be compared with the system dissected in this work.
### 1.1. Submillimeter Galaxies
Heated dust in the interstellar medium cools by emitting a modified blackbody
spectrum (MBB; $S_{\nu}\propto(1-e^{-\tau_{\nu}})B_{\nu}(T)$) with
temperatures in the range $10~{}{\rm K}\lesssim T\lesssim 100~{}{\rm K}$,
forming the far-infrared (IR) hump in the spectral energy distribution (SED)
of a galaxy. At any given frequency along the Rayleigh-Jeans tail where the
dust should be optically thin, the observed flux density ($S_{\nu,\rm obs}$)
of the MBB is proportional to the dust mass ($M_{\rm dust}$), the dust
temperature ($T$), and the redshift ($z$) —
$\displaystyle S_{\nu,{\rm obs}}$ $\displaystyle\propto M_{\rm
dust}~{}T~{}(1+z)^{\beta-1}~{}\nu_{\rm obs}^{2+\beta}/d_{A}(z)^{2}$
$\displaystyle\propto M_{\rm dust}~{}T~{}(1+z)^{\beta-1},$ (1)
where $d_{A}(z)$ is the angular diameter distance at redshift $z$ (which
varies by only 22% between $z=1$ and $z=4$) and $\beta\approx 2$ is the dust
emissivity parameter ($\kappa_{\nu}\propto\nu^{\beta}$). Therefore, galaxies
selected at long wavelengths, such as the (sub)millimeter regime,
preferentially have higher dust mass, higher dust temperature, and are at
higher redshift than galaxies selected at shorter wavelengths. Furthermore,
holding the metallicity ($Z_{\rm gas}$) constant, high dust mass together with
high temperature implies that the galaxies are gas-rich ($M_{\rm gas}=M_{\rm
dust}/Z_{\rm gas}$) and have high star-formation efficiency (${\rm SFE}={\rm
SFR}/M_{\rm gas}$, where SFR is the star formation rate), because
$T^{4}\propto L_{\rm bol}/M_{\rm dust}\propto{\rm SFR}/(Z_{\rm gas}M_{\rm
gas}),$ (2)
a result from the Stefan-Boltzmann law.
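To make the scaling of Equation (1) concrete, the sketch below (a minimal illustration, assuming $\beta=2$ and the $\Lambda$CDM parameters adopted in § 1.4) evaluates the relative observed flux density of a source with fixed dust mass and temperature; the near-constancy of $S_{\nu,\rm obs}$ with redshift is the well-known negative K-correction that makes (sub)millimeter surveys efficient at selecting high-redshift galaxies.

```python
# Relative observed flux density from Eq. (1) at fixed M_dust, T, and
# observed frequency: S_obs ~ (1+z)^(beta-1) / d_A(z)^2.
from astropy.cosmology import LambdaCDM

cosmo = LambdaCDM(H0=70, Om0=0.3, Ode0=0.7)  # cosmology of Section 1.4
beta = 2.0
for z in (1.0, 2.0, 3.0, 4.0):
    d_A = cosmo.angular_diameter_distance(z).value  # Mpc
    s_rel = (1 + z) ** (beta - 1) / d_A**2          # arbitrary units
    print(f"z = {z:.0f}: relative S_obs = {s_rel:.2e}")
```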
Indeed, follow-up observations of the brightest galaxies selected at 850
$\mu$m ($S_{850}\gtrsim 3$ mJy), the submillimeter galaxies (SMGs; Smail et
al. 1997; Barger et al. 1998; Blain et al. 2002), have revealed a significant
population of gas-rich starburst galaxies that contribute almost as much to
the cosmic SFR density as UV-selected Lyman break galaxies at $z=2-3$ (Chapman
et al., 2005; Casey et al., 2014). The SMGs are mature ($\langle M_{\rm
star}\rangle\sim 10^{11}$ $M_{\odot}$; Hainline et al., 2011; Michałowski et
al., 2012; Targett et al., 2013), metal-rich ($\langle Z\rangle\sim
Z_{\odot}$; Swinbank et al., 2004), gas-rich ($\langle M_{\rm mol}\rangle\sim
3\times 10^{10}$ $M_{\odot}$; Greve et al., 2005; Tacconi et al., 2008; Ivison
et al., 2011; Bothwell et al., 2013), extreme star-forming systems
($\overline{\rm SFR}\sim 500$ $M_{\odot}\,{\rm yr}^{-1}$) with a broad
redshift distribution that peaks at $\langle z\rangle\sim 2.5$ (Chapman et
al., 2005; Wardlow et al., 2011). The molecular gas reservoirs are turbulent,
likely due to starburst-driven galactic outflows (Falgarone et al., 2017).
Notably, the nearly linear relation between CO and IR luminosities implies an
almost constant gas depletion timescale of $\tau_{\rm dep}\equiv M_{\rm
mol}/{\rm SFR}\sim 0.1~{}{\rm Gyr}$ (Bothwell et al., 2013), which is far
shorter than that of normal star-forming galaxies on the main sequence
($\tau_{\rm dep}\sim 0.6~{}{\rm Gyr}$ at $z=2.5$; Tacconi et al., 2018),
justifying the usage of “starburst” in describing SMGs. (Although most of the
difference in $\tau_{\rm dep}$ is driven by the conversion factor from CO to
molecular gas ($\alpha_{\rm CO}\equiv M_{\rm mol}/L_{\rm CO}^{\prime}$ in
units of $M_{\odot}/({\rm K~{}km~{}s^{-1}~{}pc^{2}})$), constraints from
dynamical masses and dust masses have shown that SMGs indeed have lower
$\alpha_{\rm CO}$ than normal star-forming galaxies (e.g., Hodge et al., 2012;
Magnelli et al., 2012a; Xue et al., 2018) and that the value adopted from
local ultraluminous IR galaxies (ULIRGs, $\alpha_{\rm CO}=1.0$; Downes &
Solomon, 1998; Papadopoulos et al., 2012) is more appropriate for SMGs than
the Galactic value ($\alpha_{\rm CO}=4.3$; Bolatto et al., 2013).)
On the other hand, the autocorrelation length for SMGs of $\sim$11 Mpc at
$z=1-3$ implies a characteristic dark matter halo mass of $M_{\rm halo}\sim
9\times 10^{12}$ $M_{\odot}$ for $h=0.7$ (Hickox et al., 2012). The high halo
mass is consistent with the high maximum rotation velocities ($V_{\rm
circ}\gtrsim 500$ km s-1) observed in several bright SMGs with spatially
resolved kinematics (e.g., Hodge et al., 2012; Xue et al., 2018). For
Navarro–Frenk–White (NFW) halos (Navarro et al., 1996) at $z=2.5$, the halo
mass is directly related to the maximum circular velocity by a power law:
$M_{\rm halo}=10^{13}~{}M_{\odot}~{}(V_{\rm circ}/500~{}{\rm
km~{}s}^{-1})^{3}$ (Bullock et al., 2001; Klypin et al., 2011). Such a mass is
well above the threshold mass for stable virial shocks ($M_{\rm shock}$), and
atmospheres of hot gas at the virial temperature ($\sim 8\times 10^{6}$ K) are
expected to fill the halo. But as previously discussed, at the early epoch of
the SMGs, cool gas filaments can penetrate their halos, which could
potentially deliver enough gas to build the molecular gas reservoir that
supports the ongoing intense star formation.
### 1.2. QSO Absorption-line Systems
Ever since the discovery of multiple absorption redshifts in QSO spectra
(Burbidge et al., 1968), quasar absorption-line spectroscopy has become a
powerful tool to study diffuse gas at various phases in the IGM and the CGM,
which account for the majority of the baryonic mass in the universe (see
Péroux & Howk 2020 for a recent review). The optical depth at the Lyman limit
($\lambda_{\rm rest}=912$ Å) reaches unity when the H i column density reaches
$\log N_{\rm HI}=17.2$ (column densities are given in units of cm$^{-2}$
throughout the paper). Accumulating evidence suggests that these optically
thick absorbers trace material in virialized structures (i.e., the CGM,
Fumagalli et al., 2016; Lehner et al., 2016), while the optically thin
absorbers in the Ly$\alpha$ forest (LYAF) likely trace the IGM (Rauch, 1998).
Due to their distinct physical properties, the optically thick absorbers are
empirically subdivided into three categories based on their H i column
densities: the Lyman limit systems (LLSs, $17.2\leq\log N_{\rm HI}<19$) that
are mostly ionized, the damped Ly$\alpha$ absorbers (DLAs; $\log N_{\rm
HI}\geq 20.3$) that are mostly neutral, and lastly the super-LLSs or sub-DLAs
(the two terms have been used interchangeably) for the intermediate
category of absorbers with $19\leq\log N_{\rm HI}<20.3$. Unlike absorbers at
lower column densities, gas in the DLAs is mostly neutral. In fact, at all
epochs since $z\sim 5$, the DLAs have contained most of the neutral gas that
is poised to fuel star formation in galaxies (Wolfe et al., 2005).
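Since these empirical classes recur throughout the paper, we note that they amount to simple thresholds in $\log N_{\rm HI}$; a minimal helper function (illustrative only) is:

```python
# Empirical H i classification of QSO absorbers (log N_HI in cm^-2),
# with the thresholds defined in the text.
def classify_absorber(logN_HI: float) -> str:
    if logN_HI >= 20.3:
        return "DLA"        # mostly neutral
    if logN_HI >= 19.0:
        return "sub-DLA"    # a.k.a. super-LLS
    if logN_HI >= 17.2:
        return "LLS"        # optically thick but mostly ionized
    return "LYAF"           # optically thin forest absorber
```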
### 1.3. Emission$-$Absorption Connection
Because the H i column density threshold of DLAs was set by the observed limit
of 21 cm emission at the cutoff boundaries of nearby spiral disks (Wolfe et
al., 1986), the DLAs were expected to arise from gas-rich galactic disks even
at high redshifts. However, the emission counterparts (i.e., the DLA galaxies)
of most DLAs have eluded detection. Among the limited detections in optical
searches, it is found that the DLA galaxies are very faint ($r\gtrsim 24$) and
close ($\sim$2″) to the QSOs (e.g., Steidel & Hamilton, 1992; Fynbo et al.,
2008), making it difficult to measure their redshifts. To improve efficiency,
searches of the emission counterparts of DLAs have focused on DLAs that are
clearly chemically enriched ([M/H] $>-0.7$) (e.g., Fynbo et al., 2013;
Jorgenson & Wolfe, 2014) or sightlines that pass through multiple DLAs (e.g.,
Srianand et al., 2016). But still, only 16 $z>1.9$ DLA host galaxies have been
identified via emission lines in the optical (see Krogager et al., 2017;
Møller & Christensen, 2020, for compilations) over an extensive search period
of nearly three decades. The advent of (sub)millimeter interferometers such as
the Atacama Large Millimeter/submillimeter Array (ALMA) and the Very Large
Array (VLA) has significantly improved the success rate of identifying
absorption-selected galaxies, because (1) the contrast between DLA host
galaxies and the QSO is more favorable at longer wavelengths, and (2) the
interferometers have an unattenuated view over a wide FoV and thus do not
require lucky slit placements. In only a few years, four $z\sim 4$ DLA
galaxies have been identified in [C ii] 158 $\mu$m (Neeleman et al., 2017,
2019) and five $z\sim 2$ DLA galaxies in CO (4$-$3) and CO (3$-$2) (Kanekar et
al., 2020).
Interestingly, the DLA galaxies previously identified in the optical/near-IR
are not detected in CO and vice versa (Kanekar et al., 2020), indicating that
observations at different wavelengths are complementary to one another and
that H i absorption selection tags gas-rich galaxies of all types. Some DLAs also
have multiple emission counterparts that are consistent with the absorption
redshifts, suggesting a group/cluster environment (e.g., Fynbo et al., 2018).
The opposite approach from the searches of DLA galaxies is to start from an
emission-selected galaxy sample and look for corresponding absorption lines in
the spectra of nearby background QSOs. This approach requires chance
alignments of foreground galaxies and background QSOs, and thus large samples
of both populations. The implicit assumption is that the emission-
selected galaxies have similar CGM properties, and therefore, the absorption
signals obtained from different galaxy-QSO pairs can be combined to provide
meaningful average properties of a typical halo in the studied galaxy
population. The searches for absorbers are no longer limited to DLAs, but
extend to all optically thick absorbers (i.e., LLSs and sub-DLAs). At $z\gtrsim 2$, the
targeted galaxy populations have included Lyman Break Galaxies (LBGs) (e.g.,
Simcoe et al., 2006; Rudie et al., 2012, 2013; Crighton et al., 2013, 2015)
and QSOs (e.g., Hennawi et al., 2006; Prochaska et al., 2013a; Lau et al.,
2016). In addition, using a sample of projected QSO pairs where one of the
QSOs intercepts a DLA, Rubin et al. (2015) have probed the CGM of the DLA
galaxy without identifying the DLA galaxy in emission. These studies have
mapped out the H i column density, the ion ratios, and the metallicity as a
function of impact parameters ($R_{\bot}$). The covering fraction of optically
thick H i absorbers increases from $\sim$30% around LBGs (Rudie et al., 2012)
and DLAs (Rubin et al., 2015) to $\gtrsim$60% around QSOs (Prochaska et al.,
2013a) for sightlines out to $R_{\bot}=100-200$ kpc (comparable to the virial
radius of DM halos with $M_{\rm halo}=10^{12.5}$ $M_{\odot}$ at $z=2$: $R_{\rm
vir}=154$ kpc). The abundance of neutral gas in the halos of QSOs is
particularly puzzling. Simulations predict that such massive halos are
dominated by a hot $T\sim 10^{7}\,{\rm K}$ virialized plasma and a
significantly lower covering factor of optically thick H i absorbers (e.g.,
Faucher-Giguère et al., 2015).
The unexpectedly large covering factor of LLSs around $z\sim 2$ QSOs and the
difficulty of reproducing the SMG population in galaxy formation models could
both be symptoms of the same problem. Attempting to reduce this tension
between observations and theory, more recent cosmological zoom-in simulations
have implemented recipes of stronger and presumably more realistic stellar
feedback, which manages to preserve cool gas reservoirs in the accreted sub-
halos during earlier phases of star formation before their infall into the
massive halo. The presence of these gas-rich sub-halos increases the cool gas
covering factor around QSOs (Faucher-Giguère et al., 2016), and their
prolonged bombardment of the central galaxy leads to a rising star formation history
that eventually produces SMGs between $2<z<3$ (Narayanan et al., 2015; Lovell
et al., 2021).
### 1.4. Organization
QSO absorption-line spectroscopy combined with efficient emission-line mapping
provides a powerful method to link star-forming galaxies with the neutral gas
reservoir that may fuel future star formation. Our understanding of the
formation and evolution of massive galaxies is severely limited by the lack of
observational constraints on the CGM of SMGs. The advent of Herschel large-
scale far-infrared surveys has provided an opportunity to use projected
SMG$-$QSO pairs to probe the CGM of SMGs. In this paper, we focus on one
particularly interesting system – GAMA J0913$-$0107 – where two background
QSOs have revealed an unusually H i-rich CGM around a luminous SMG.
The main text of the paper is organized as follows. We first provide an
orientation of the system in § 2, then proceed with a detailed study of the
emission sources (§ 3) and the absorption-line systems (§ 4), before finally
drawing connections between the absorbers and their emission counterparts in §
5. We conclude the paper with a summary of the main results and a discussion
of the implications in § 6. To keep the main text focused on the SMG$-$DLA
system at $z\approx 2.67$, we move additional material to the Appendices. We
analyze the nearby optical source to the SMG and its potential lensing effect
in Appendix A, present the methodology and result of our blind search of line
emitters in the ALMA band-3 data in Appendix B, give an inventory of the line-
of-sight contaminating absorbers at other redshifts in Appendix C, provide
tables of detailed ionic column density and metallicity measurements in
Appendix D, and describe our attempt to detect CO emission from faint optical
sources near QSO1 and the identification of Comp b in Appendix E.
Throughout this paper, we adopt a model optimization method that combines a
heuristic $\chi^{2}$ minimization algorithm with a Markov Chain Monte Carlo
(MCMC) algorithm (hereafter, the “amoeba + mcmc” method). It begins by using
the downhill simplex method amoeba (Press et al., 1992) with simulated
annealing (amoeba_sa) to find the solution that minimizes the residual. Although
computationally more expensive than other least-$\chi^{2}$ solvers (e.g., the
Levenberg-Marquardt technique), amoeba_sa has the advantage of avoiding being
trapped in local minima in a multidimensional parameter space. This advantage
is particularly important in more complex problems such as fitting the H i
absorption profiles with many Voigt profiles (§ 4.2). Next, starting from the
minimum-$\chi^{2}$ solution of amoeba_sa, we use the Differential Evolution
MCMC algorithm (Ter Braak, 2006) implemented in exofast_demc (Eastman et al.,
2013, 2019) to obtain the final solution and the statistical uncertainties of
the parameters. The exofast_demc routine first determines the stepping scale
of each parameter by varying it from the minimum $\chi^{2}$ solution until the
$\chi^{2}$ increases by one. It then starts the chains from positions that are
randomly offset from the minimum $\chi^{2}$ solution. The routine stops when
the chains are considered well-mixed and the steps in the initial “burn-in”
phase are removed. The marginalized 1$\sigma$ confidence interval of each
parameter is determined from the values at 15.8 and 84.1 percentiles of the
concatenated chains, and the median values are adopted as the formal solution.
Parameters derived from the model parameters are treated likewise: their
formal values and uncertainties are calculated from the 50, 15.8, and 84.1
percentiles of the array directly calculated from the chains of model
parameters.
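For readers who want to reproduce the spirit of this procedure in Python, the sketch below is an analogue rather than the actual code (which uses the IDL routines amoeba_sa and exofast_demc): scipy's Nelder-Mead simplex stands in for amoeba and the emcee ensemble sampler stands in for the Differential Evolution MCMC.

```python
# Illustrative two-stage "simplex + MCMC" fit: minimize chi^2, then
# sample the posterior around the minimum and report the median and
# the 15.8/84.1 percentiles, as described in the text.
import numpy as np
import emcee
from scipy.optimize import minimize

def chi2(theta, x, y, yerr, model):
    return np.sum(((y - model(x, theta)) / yerr) ** 2)

def fit_simplex_mcmc(x, y, yerr, model, theta0, nwalkers=32, nsteps=5000):
    # Stage 1: downhill simplex locates the minimum-chi^2 solution.
    res = minimize(chi2, theta0, args=(x, y, yerr, model),
                   method="Nelder-Mead")
    ndim = res.x.size
    # Stage 2: walkers start from small perturbations of that solution.
    log_prob = lambda t: -0.5 * chi2(t, x, y, yerr, model)
    p0 = res.x + 1e-3 * (np.abs(res.x) + 1e-12) * np.random.randn(nwalkers, ndim)
    sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
    sampler.run_mcmc(p0, nsteps)
    chain = sampler.get_chain(discard=nsteps // 5, flat=True)  # drop burn-in
    lo, med, hi = np.percentile(chain, [15.8, 50.0, 84.1], axis=0)
    return med, med - lo, hi - med  # formal solution and 1-sigma errors
```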
We assume the $\Lambda$CDM cosmology with $\Omega_{\rm m}=0.3$,
$\Omega_{\Lambda}=0.7$, and $h\equiv H_{0}/(100~{}{\rm km~{}s}^{-1}~{}{\rm
Mpc}^{-1})=0.7$ and quote proper/physical distances.
Figure 1.— Multi-wavelength images of the GAMA J0913$-$0107 system. Left: A
wide-field Herschel pseudo-color image combining 250 $\mu$m (blue), 350 $\mu$m
(green) and 500 $\mu$m (red) images. The SMG is the bright source near the
center of the 15.2′$\times$21.0′ region. Middle: An ALMA map zoomed in onto
the SMG showing CO (3$-$2) emission between $2.67<z<2.70$. This
36.5″$\times$50.4″ region encloses the SMG and its CO companion galaxies (red
tickmarks), and the two QSOs in the background of the system (black
tickmarks). This composite CO image is formed by combining the 11 channels
within $\pm$140 km s-1 of $z=2.674$ (i.e., $\nu_{\rm obs}=94.120\pm 0.039$
GHz). To show the CO emission from Comp b, the 8″$\times$8″ region centered on
Comp b (dotted box) is formed by combining the two channels where CO emission
is detected ($\nu_{\rm obs}=93.7535$, and 93.6676 GHz, corresponding to
$z=2.6884$ and 2.6917). The contours are drawn at $-3$ (black dotted), 3, 4,
5(black solid), 20, 40, and 60$\sigma$ (white solid). The synthesized beam of
1.6″$\times$1.3″ is shown at the lower right corner. Right: A deep $r$-band
image of the same region from KiDS (5$\sigma$ detection limit at $\sim$25
mag). In all images, the position of QSO1 sets the origin of the coordinates.
## 2\. System Overview
Table 1. Major Components of the GAMA J0913$-$0107 system and Impact Parameters

Designation | Short Name | R.A. (J2000) (deg) | Decl. (J2000) (deg) | $z$ | $L^{\prime}_{\rm CO3-2}$ (K km s-1 pc2) | $\theta_{1}$ (arcsec) | $\theta_{2}$ (arcsec) | $R_{\bot,1}$ (kpc) | $R_{\bot,2}$ (kpc)
---|---|---|---|---|---|---|---|---|---
ALMA J091339.55$-$010656.4 | SMM J0913 | 138.4147767 | $-$1.1156772 | 2.674 | $6.5\times 10^{10}$ | 11.7 | 22.1 | 93.1 | 175.5
ALMA J091338.28$-$010643.8 | Comp a | 138.4094849 | $-$1.1121639 | 2.6747 | $6.3\times 10^{9}$ | 23.3 | 24.8 | 185.1 | 197.1
ALMA J091338.49$-$010705.5 | Comp b | 138.4103803 | $-$1.1181869 | 2.6884 | $6.8\times 10^{8}$ | 7.4 | 4.1 | 58.9 | 32.2
— | — | — | — | 2.6917 | $5.3\times 10^{8}$ | — | — | — | —
SDSS J091338.97$-$010704.6 | QSO1 | 138.4124260 | $-$1.1179280 | 2.9161 | $\cdots$ | $\cdots$ | $\cdots$ | $\cdots$ | $\cdots$
SDSS J091338.30$-$010708.6 | QSO2 | 138.4096520 | $-$1.1190520 | 2.7488 | $5.6\times 10^{9}$ | 10.8 | $\cdots$ | 85.0 | $\cdots$
Fig. 1 illustrates the GAMA J0913$-$0107 system, where we label its major
components: the SMG, its CO companions, and the two background QSOs. Table 1
lists their coordinates, redshifts, CO (3$-$2) luminosities, along with the
impact parameters of the QSO sightlines.
The GAMA J0913$-$0107 system is one of the 163 SMG$-$QSO pairs with apparent
separations between 5″ and 30″, which were selected by cross-matching
Herschel-selected SMGs with optically-selected QSOs from a compilation of
spectroscopic surveys (Fu et al., 2016, 2017). Located in the R.A. = 9 hr
equatorial field of the Herschel Astrophysical Terahertz Large Area Survey
(H-ATLAS; Eales et al., 2010), the Herschel source at R.A. = $09^{\rm
h}13^{\rm m}39^{\rm s}$, Decl. = $-01^{\circ}06\arcmin 59\arcsec$ is detected
at high S/N by SPIRE (Spectral and Photometric Imaging Receiver; Griffin et
al., 2010) at 250, 350, and 500 $\mu$m, with de-boosted flux densities of
$S_{250}=52.5\pm 7.4$ mJy, $S_{350}=69.4\pm 8.8$ mJy, and $S_{500}=48.4\pm
9.2$ mJy (Valiante et al., 2016; Fu et al., 2017). The far-IR SED clearly
peaks around 350 $\mu$m (i.e., a “350 $\mu$m peaker”), giving a rough
photometric redshift of $\sim$2.5 assuming a dust temperature of $\sim$50 K,
following Wien’s displacement law in the $S_{\nu}$ convention: $\lambda_{\rm
peak}=102~{}\mu{\rm m}~{}(50~{}{\rm K}/T)~{}(1+z)$.
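Inverting this relation for the observed peak wavelength reproduces the quoted estimate (a quick check, with $T=50$ K assumed):

```python
# lambda_peak = 102 um x (50 K / T) x (1 + z), solved for z.
T = 50.0          # assumed dust temperature (K)
lam_peak = 350.0  # observed SED peak (um)
z_phot = lam_peak * T / (102.0 * 50.0) - 1.0
print(f"z_phot ~ {z_phot:.1f}")  # ~2.4, i.e., roughly 2.5
```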
Our ALMA 345 GHz imaging pinpointed the position of the Herschel source (Fu et
al., 2017), and Gemini near-IR and ALMA 94 GHz spectroscopy jointly determined
a spectroscopic redshift of 2.674 (§ 3.1 & 3.2). We designate the SMG as ALMA
J091339.55$-$010656.4 or SMM J0913 in short. In addition, our ALMA 94 GHz
observations detected companion galaxies in CO (3$-$2) near the redshift of
the SMG: Comp a at $z=2.6747$, and Comp b at $z=2.6884,2.6917$. Both are
within 23″ of the SMG position. Comp b has two redshifts because it is a
superposition of two galaxies (see § 3.5). Because Herschel has FWHM
resolutions of 18″, 25″, and 35″ at 250, 350, and 500 $\mu$m, respectively,
the SMG and its companions are blended in the Herschel images. But the
contribution of the companions to the Herschel fluxes should be negligible
given their orders-of-magnitude lower CO line luminosities.
There are two bright QSOs within 22″ of the SMG, QSO1 (SDSS
J091338.97$-$010704.6, $g=20.78$, $r=20.38$) at $z=2.9161$ and QSO2 (SDSS
J091338.30$-$010708.6, $g=20.71$, $r=20.44$) at $z=2.7488$. Both QSOs are in
the background of the SMG, allowing us to probe its CGM at impact parameters
of $R_{\bot}=93.1$ kpc and $R_{\bot}=175.5$ kpc, or approximately 0.5$\times$
and 0.9$\times$ the virial radius of a $10^{13}$ $M_{\odot}$ halo at $z=2.674$
($R_{\rm vir}=186$ kpc). Coincidentally, strong H i and metal absorption lines
near the SMG redshift have been previously detected in the QSO spectra (Finley
et al., 2014).
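The impact parameters listed in Table 1 follow directly from these angular separations; a minimal check with astropy (using the cosmology of § 1.4) is:

```python
# Proper transverse scale at the SMG redshift and the resulting
# impact parameters of the two sightlines (cf. Table 1).
from astropy.cosmology import LambdaCDM

cosmo = LambdaCDM(H0=70, Om0=0.3, Ode0=0.7)
scale = cosmo.kpc_proper_per_arcmin(2.674).value / 60.0  # kpc per arcsec
for name, theta in (("QSO1", 11.7), ("QSO2", 22.1)):
    print(f"{name}: {theta * scale:.1f} kpc")  # ~93 and ~176 kpc
```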
When comparing the CO map with the KiDS $r$-band image in Fig. 1, we notice an
$r=21.6$ optical source just 0.8″ from the SMG. As we will show in Appendix A,
it is a foreground galaxy with a spec-$z$ of $z=0.055$ and its gravitational
lensing effect on the SMG is negligible. We also find that Comp b is just
$\sim$3″ from an elongated optical source to the NNE. In the KiDS DR4 catalog,
the optical source has a designation of J091338.527$-$010703.60 and magnitudes
of $r=23.8\pm 0.1$ and $H=22.0\pm 0.3$. It has a photo-$z$ of $z_{\rm
p}=0.79^{+0.45}_{-0.06}$. Its SED shows a clear 1-magnitude drop-off between
$Y$-band and $Z$-band, corresponding to a 4000 Å-break at $z\sim 1.2-1.5$,
which is consistent with the maximum-likelihood photo-$z$ of $z_{\rm p}=1.37$
in the catalog. We thus conclude that the optical source is most likely a
foreground galaxy, although the extraction of an ALMA spectrum near the
position of the optical source led to the identification of Comp b (Appendix
E).
## 3\. The Submillimeter Galaxy and Its Companions
### 3.1. ALMA Position and Near-IR Spectroscopy
Herschel/SPIRE positions have large uncertainties. A comparison between ALMA
and Herschel positions showed a 1$\sigma$ positional offset of $\sim$4.2″ for
sources with S/N $\sim$ 6 at 250 $\mu$m (Eq. 5 of Fu et al., 2017). Near-IR
slit spectroscopy requires sub-arcsec positions, so we carried out
0.5″-resolution ALMA band-7 (345 GHz/870 $\mu$m) observations of GAMA
J0913$-$0107 as part of our Cycle-3 project 2015.1.00131.S (see Fu et al.,
2017, for details). GAMA J0913$-$0107 shared an hour-long observing session
with nine other Herschel SMGs in the same H-ATLAS field. With four scans and
44 antennas, we accumulated a total on-source integration time of 189.5 s. A
single high S/N source is detected in the $\sim$17″ Full Width at Half Power
(FWHP) of the primary beam. It has an 870 $\mu$m flux density of
$S_{870}=7.4\pm 0.5$ mJy, an offset of 4.1″ from the Herschel position, and a
beam-deconvolved FWHM of $0.46\arcsec\pm 0.04\arcsec$ along the major axis
(which corresponds to $\sim$4 kpc at $z=2.5$).
The accurate ALMA position enabled our follow-up near-IR slit spectroscopy.
Observations of SMM J0913 were carried out with the Gemini Near-InfraRed
Spectrograph (GNIRS; Elias et al., 2006) on 2017 Feb 19 as part of our queue
program GN-2017A-Q-31. The A0-type star HIP49125 ($V$ = 7.19, $K$ = 6.553 Vega
mag) was observed right after the science target to provide telluric
correction and flux calibration. Because our goal was to measure the redshift,
we used the cross-dispersed mode with the 32 l/mm grating to achieve a
continuous wavelength coverage between 0.85 and 2.5 $\mu$m (orders 3 to 8).
After applying a 30″ offset from an offset star to the NE, we placed the
1″-wide 7″-long slit on the ALMA 870 $\mu$m position at a position angle (PA)
of 73 deg (E of N; the PA was chosen to reach a guide star). The expected
spectral resolution of this configuration is $R=510$ (FWHM = 590 km s-1), but
the actual spectral resolution may be higher depending on the source size and
the seeing. We took 24 exposures of 136 s with a 3″-step ABBA dithering
pattern. Data reduction was performed with a modified version of Spextool
(Cushing et al., 2004) for GNIRS by K. Allers. The final coadded spectrum in
Fig. 2 includes all 24 frames and has a total on-source time of 54.4 min. We
detected an emission line at $\sim$4$\sigma$-level at 2.4121 $\mu$m
(heliocentric corrected, vacuum wavelength), which we identified as H$\alpha$
($\lambda_{\rm rest}=6564.63\,\AA$) at $z_{\rm H\alpha}=2.6743\pm 0.0003$. The
[N ii] $\lambda$6585.28 line is undetected, likely due to the elevated
background noise at its wavelength and its lower flux. Our best-fit Gaussian
model yields a line ${\rm FWHM}=230\pm 60$ km s-1 (i.e., the line is
unresolved) and a line flux of $F_{\rm H\alpha}=(6.1\pm 1.0)\times 10^{-17}$
erg s-1 cm-2, which are comparable to those of the SMGs identified by the VLA
(Alaghband-Zadeh et al., 2012; Fu et al., 2016).
### 3.2. ALMA CO (3$-$2) Spectral Line Imaging
Figure 2.— The GNIRS near-IR spectrum of SMM J0913. The top panel shows the
coadded 2D spectrum. The ordinate is the positional offset along the spatial
direction, and is centered on the SMG location. The bottom panel shows the
flux-calibrated 1D spectrum (black) and its 1$\sigma$ uncertainty (red).
Wavelengths affected by strong sky lines show large errors. The dashed lines
indicate the redshifted H$\alpha$ and [N ii] $\lambda\lambda$6550,6585 lines.
The brown curve shows the atmosphere transmission curve, using the right-side
ordinate.
To detect the molecular gas reservoir that fuels the intense star formation in
the SMG, we carried out deep ALMA band-3 (100 GHz/3 mm) spectral line
observations of SMM J0913 on 2018 December 10 and 15 with our Cycle 6 project
2018.1.00548.S. We tuned the four 1.875 GHz-bandwidth spectral windows to
center on 92.2 (BB3), 94.0 (BB4), 104.2 (BB1), and 106.0 GHz (BB2) in dual
linear polarization mode (XX and YY). We chose a spectral averaging factor of
16 to bin the Frequency Division Mode (FDM)’s input channel spacing of 0.488
MHz to an output channel spacing of 7.8125 MHz. The spectral averaging
significantly reduces the output data rate and essentially eliminates the
correlation between adjacent channels introduced by the Hanning window
function applied to the correlation functions (see § 5.5 in the ALMA Technical
Handbook). The resulting spectral response function is basically a top-hat
function with a width of 7.8125 MHz (i.e., the output channel spacing), which
corresponds to a spectral resolution of 22$-$25 km s-1. The two lower
frequency spectral windows provide a continuous frequency range between 91.27
and 94.94 GHz, which covers the CO (3$-$2) line ($\nu_{\rm rest}=345.79599$
GHz and $\lambda_{\rm rest}=866.96337$ $\mu$m) between $2.642\leq z\leq
2.789$, encompassing the SMG at $z=2.674$ and QSO2 at $z=2.7488$. The
frequency range covers a velocity window between $-2620$ and $+9240$ km s-1
relative to the SMG redshift. The two higher frequency spectral windows
provide a continuous frequency coverage between 103.27 and 106.94 GHz, which
covers the CO (3$-$2) line between $2.234\leq z\leq 2.348$ and traces the
continuum emission at rest-frame frequencies around $\nu_{\rm rest}=386$ GHz
($\lambda_{\rm rest}=776$ $\mu$m) at $z=2.674$.
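The quoted redshift windows follow directly from the CO (3$-$2) rest frequency; for example:

```python
# Redshift coverage of the 91.27-94.94 GHz frequency range for CO (3-2).
nu_rest = 345.79599  # GHz
for nu_obs in (94.94, 91.27):
    print(f"{nu_obs:.2f} GHz -> z = {nu_rest / nu_obs - 1:.3f}")
# 94.94 GHz -> z = 2.642;  91.27 GHz -> z = 2.789
```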
The primary beam of the ALMA 12-m antennas has an FWHP of 62″ at 94 GHz. We
set the field center at R.A. = $09^{\rm h}13^{\rm m}38.89^{\rm s}$, Decl. =
$-01^{\circ}07\arcmin 03.6\arcsec$, which is near the position of QSO1 but
$\sim$12″ offset from the SMG. Three of the four planned observing sessions
were executed, accumulating a total on-source time of 143.8 min with 6.05 s
integrations. Either 43 or 46 12-m antennas were operational, with baselines
ranging between 15.1 m and 740.5 m. The BL Lac object J0854$+$2006 served as
the amplitude, bandpass, and pointing calibrator, and the flat-spectrum radio
quasar J0909$+$0121 as the phase calibrator (Bonato et al., 2019).
Figure 3.— ALMA CO (3$-$2) spectrum of SMM J0913 and the best-fit double-
Gaussian model (red solid curve). The inset shows a zoomed-in version of the
spectrum to highlight the broad component (red dashed curve) beneath the
narrow component (blue dotted curve). The dotted rectangle shows the portion
of the spectrum shown in the inset. The bottom panel shows the residual (data
- model) in units of 1$\sigma$ error.
### 3.3. ALMA Data Processing
The raw visibility data were flagged and calibrated by the ALMA pipeline
(Pipeline ver. 42030M, CASA ver. 5.4.0-68). The calibrated visibilities of the
three observing sessions were then combined to form the final calibrated
measurement set (MS). The pipeline worked very well. After inspecting the
amplitudes of the calibrated visibilities, we found that additional flagging
was only necessary for a tiny fraction of data. We used the CASA task flagcmd
to flag the cross-correlation data of the antenna pair DA62 and DA65 in the
94.0 GHz spectral window between channels 188 and 191.
We use the CASA task tclean to image the calibrated visibilities of each
spectral window into spectral data cubes. When visibilities are gridded into
regularized $uv$-cells, we adopt natural weighting to maximize the
sensitivity. The synthesized beams are on average 1.7″$\times$1.3″ in FWHM, so
we set the imaging pixel size to 0.2″. In the spectral dimension, we retain
the original channel spacing of 7.8125 MHz. The data were recorded in the
Topocentric (TOPO) reference frame. Due to the motion of the Earth, every
observing scan has a slightly different sampling in sky frequency. We image
the data to the Solar System Barycenter (BARY) reference frame to be
consistent with the velocities measured in the heliocentric-corrected optical
and near-IR spectra.
Significant continuum emission and a strong emission line at $\sim$94.1 GHz
(in BB4) are detected at the ALMA 870 $\mu$m position of SMM J0913. To
minimize the sidelobes from this bright source, we used the Clark clean
deconvolution algorithm with a mask consisting of a single 2″-radius circle
centered on the SMG. The clean depth is set to 2$\times$ the rms listed below.
For each spectral window, we generate two datacubes: one avoids interpolation
in the spectral dimension by setting interpolation = nearest and is
uncorrected for the primary beam, and the other uses linear interpolation and
is corrected for the primary beam. The former is better suited for blind line
searches because maps in adjacent channels remain uncorrelated, while the
latter is better suited for measuring line parameters such as central
frequency, width, and integrated flux. The resulting spectral cubes have a
dimension of 540 pixels by 540 pixels by 240 channels. At the phase center,
the sensitivities of the datacubes reach ${\rm rms}=0.165,0.171,0.158,0.155$
mJy beam-1 channel-1 for BB1, BB2, BB3, and BB4, respectively. The rms values
are consistent with the visibility noise that we measured with visstat
($\sigma\sim 250$ mJy visibility-1 channel-1), because ${\rm
rms}\sim\sigma/\sqrt{n_{\rm ch}n_{\rm pol}n_{\rm baseline}n_{\rm int}}$, where
$n_{\rm ch}=1$ (1 channel binning), $n_{\rm pol}=2$ (2 polarizations), $n_{\rm
baseline}=n_{\rm ant}(n_{\rm ant}-1)/2=903$ for $n_{\rm ant}=43$ (903
baselines for 43 antennas), and $n_{\rm int}=t_{\rm on\_source}/t_{\rm
int}\sim 1426$ (total number of integrations).
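This consistency check amounts to a few lines of arithmetic:

```python
# Thermal-noise scaling: rms ~ sigma / sqrt(n_ch n_pol n_baseline n_int).
import numpy as np

sigma = 250.0                          # mJy per visibility per channel
n_ch, n_pol, n_ant = 1, 2, 43
n_baseline = n_ant * (n_ant - 1) // 2  # 903 baselines
n_int = 1426                           # ~143.8 min of 6.05 s integrations
rms = sigma / np.sqrt(n_ch * n_pol * n_baseline * n_int)
print(f"rms ~ {rms:.3f} mJy/beam/channel")  # ~0.156, matching the cubes
```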
In addition, we generate one continuum image from the two higher frequency
spectral windows (BB1 and BB2). We used the Multi-term Multi-Frequency
Synthesis (mtmfs) clean algorithm with a linear spectral model (i.e., nterms =
2). Again, we used natural weighting, 0.2″ pixel size, and the same clean mask
centered on the SMG. The sensitivity of the continuum image reaches ${\rm
rms}=7.4$ $\mu$Jy beam-1, consistent with the rms of the spectral cubes
divided by $\sqrt{n_{\rm ch}}\simeq 22$.
### 3.4. SMG Properties
Figure 4.— Physical properties of SMM J0913 (red data points). Left: the Far-IR SED from Herschel and ALMA and the formal MBB solution. The shaded area shows the 1$\sigma$ spread of the models in the MCMC chains. Middle: CO (1$-$0) luminosity vs. IR luminosity. Blue squares show SMGs from the literature (Harris et al., 2010; Riechers et al., 2011; Ivison et al., 2011; Bothwell et al., 2013; Sharon et al., 2013). The dashed lines show contours of constant gas depletion timescales. Right: SFR surface density vs. molecular gas mass surface density (i.e., the Kennicutt-Schmidt relation). Other data points show local ULIRGs (Kennicutt, 1998), high-redshift SMGs (Daddi et al., 2009; Genzel et al., 2010; Fu et al., 2013), and normal star-forming galaxies at $z\sim 2$ and $z\sim 0$ (Tacconi et al., 2013). The dashed lines indicate constant ratios of surface densities, which are another indicator of the gas depletion timescale.

Table 2. Spectroscopy of the SMG

Quantity | Measurement | Unit
---|---|---
H$\alpha$ from Gemini/GNIRS
$z_{\rm H\alpha}$ | $2.6743(3)$ | $\cdots$
$\Delta V_{\rm FWHM}$ | $<300$ | km s-1
$F_{\rm H\alpha}$ | $(6.1\pm 1.0)\times 10^{-17}$ | erg s-1 cm-2
$L_{\rm H\alpha}$ | $(2.9\pm 0.5)\times 10^{41}$ | erg s-1
SFRHα | $1.6\pm 0.3$ | $M_{\odot}\,{\rm yr}^{-1}$
CO (3$-$2) from ALMA Band-3
Narrow Component
$z_{\rm CO3-2}$ | 2.67399(3) | $\cdots$
$\Delta V_{\rm FWHM}$ | $249\pm 8$ | km s-1
$S_{\rm CO}\Delta V$ | $1.38\pm 0.06$ | Jy km s-1
$L^{\prime}_{\rm CO3-2}$ | $(5.0\pm 0.2)\times 10^{10}$ | K km s-1 pc2
Broad Component
$z_{\rm CO3-2}$ | 2.6741(7) | $\cdots$
$\Delta V_{\rm FWHM}$ | $906\pm 206$ | km s-1
$S_{\rm CO}\Delta V$ | $0.41\pm 0.07$ | Jy km s-1
$L^{\prime}_{\rm CO3-2}$ | $(1.5\pm 0.3)\times 10^{10}$ | K km s-1 pc2
Total Emission
$L^{\prime}_{\rm CO1-0}$ | $(1.25\pm 0.07)\times 10^{11}$ | K km s-1 pc2
$M_{\rm mol}$ | $(1.25\pm 0.07)\times 10^{11}$ | $M_{\odot}$
Intrinsic Source Size
Deconv. Maj. | $0.76\pm 0.09$ | arcsec
Deconv. Min. | $0.54\pm 0.17$ | arcsec
Table 3. Photometry of the SMG

Quantity | Measurement | Unit
---|---|---
Herschel/SPIRE
R.A. | 09:13:39.32 | hms
Decl. | $-$01:06:58.6 | dms
$S_{250}$ | $52.5\pm 7.4$ | mJy
$S_{350}$ | $69.4\pm 8.8$ | mJy
$S_{500}$ | $48.4\pm 9.2$ | mJy
ALMA Band-6 Continuum
R.A. | 09:13:39.55 | hms
Decl. | $-$01:06:56.4 | dms
$S_{\rm 343.5GHz}$ | $7.4\pm 0.5$ | mJy
Deconv. Maj. | $0.46\pm 0.04$ | arcsec
| $3.7\pm 0.3$ | kpc
Deconv. Min. | $0.28\pm 0.06$ | arcsec
| $2.2\pm 0.5$ | kpc
ALMA Band-3 Continuum
$S_{\rm 93.1GHz}$ | $106\pm 22$ | $\mu$Jy
$S_{\rm 105.1GHz}$ | $111\pm 20$ | $\mu$Jy
Deconv. Maj. | $0.68\pm 0.47$ | arcsec
Deconv. Min. | $0.47\pm 0.23$ | arcsec
Modified Blackbody Fit
$T$ | $44\pm 7$ | K
$\beta$ | $2.3\pm 0.2$ | $\cdots$
$\lambda_{0}$ | $98\pm 24$ | $\mu$m
$\pi r_{s}^{2}$ | $9^{+10}_{-5}$ | kpc2
$M_{\rm dust}$ | $(5.4\pm 1.2)\times 10^{8}$ | $M_{\odot}$
$L_{\rm IR}$ | $1.17^{+0.17}_{-0.11}\times 10^{13}$ | $L_{\odot}$
SFRIR | $1170^{+170}_{-110}$ | $M_{\odot}\,{\rm yr}^{-1}$
Fig. 3 shows the ALMA band-3 spectrum of the SMG. The spectrum is extracted
with an elliptical aperture matching the beam-convolved source size. A
prominent emission line peaks at $\nu_{\rm obs}=94.12$ GHz, which we identify
as the CO (3$-$2) line at $z_{\rm CO}=2.67399\pm 0.00003$. The CO detection
thus confirms the H$\alpha$ redshift from the GNIRS spectrum ($z_{\rm
H\alpha}=2.6743\pm 0.0003$, see § 3.1). Because the CO line is detected at a
higher S/N and is less affected by dust extinction, we adopt the CO redshift
for the SMG throughout the paper, i.e., $z_{\rm SMG}=z_{\rm CO}=2.674$ (a
slightly rounded-up value for simplicity).
A closer inspection of the CO (3$-$2) spectrum reveals a broad (FWHM $\sim$
1000 km s-1) emission-line component with a peak flux of $\sim$0.4 mJy
underneath the prominent narrow component (FWHM $\sim$ 250 km s-1). We thus
model the CO spectrum with two Gaussians and compare its result with a single-
Gaussian model.
We find that the improvement of the double-Gaussian model over the single-
Gaussian model is highly significant. The formal double-Gaussian solution
achieves a $\chi^{2}=220.2$ for a degree-of-freedom (DOF) of 209. For
comparison, the formal single-Gaussian solution achieves a $\chi^{2}=251.8$
for DOF = 212. According to the $F$-test, such a difference rejects the null
hypothesis that the double-Gaussian model does not provide a significantly
better fit at a confidence level of 99.99964% (4.6$\sigma$). The result of
the double-Gaussian fit from the “amoeba + mcmc” method is listed in Table 2.
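The quoted significance is the standard extra-sum-of-squares $F$-test for nested least-squares models, which can be verified with scipy:

```python
# F-test comparing the nested single- and double-Gaussian models.
from scipy.stats import f as f_dist

chi2_1, dof_1 = 251.8, 212  # single Gaussian
chi2_2, dof_2 = 220.2, 209  # double Gaussian
F = ((chi2_1 - chi2_2) / (dof_1 - dof_2)) / (chi2_2 / dof_2)
p = f_dist.sf(F, dof_1 - dof_2, dof_2)  # probability under the null
print(f"F = {F:.1f}, confidence = {100 * (1 - p):.5f}%")  # ~99.99964%
```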
The broad component has an FWHM of $\sim$900 km s-1 and accounts for almost a
quarter of the total emission-line flux. The existence of a narrow CO line on
top of a broad CO line with essentially no velocity offset indicates that the
SMG’s intense star-forming nucleus (the narrow component) is either embedded
in a fast-rotating disk or driving a bipolar outflow (the broad component).
Unfortunately, the spatial resolution of the ALMA data is inadequate to
distinguish between the two scenarios. We note that such a broad CO component
would not have been detected in the shallower spectroscopic data more
generally available for SMGs, so this feature may not be unique to SMM J0913.
With the redshift determined, we fit the SED between 250 $\mu$m and 3 mm from
Herschel/SPIRE and ALMA with a modified blackbody curve. We adopt the general
solution of the radiative transfer equation assuming local thermal equilibrium
at a constant temperature $T$:
$S_{\nu}=(1-e^{-\tau_{\nu}})~{}B_{\nu}(T)~{}\pi r_{s}^{2}/d_{L}^{2},$ (3)
where $B_{\nu}(T)$ is the Planck function at a temperature of $T$ and a rest-
frame frequency $\nu$, $\pi r_{s}^{2}$ the effective size of the dust emitting
region, and $d_{L}$ the luminosity distance. Assuming that the dust opacity
follows a power-law with a negative slope of $-\beta$ at wavelengths greater
than the dust size ($\sim$10 $\mu$m), the optical depth should follow the same
power-law:
$\tau_{\nu}=(\nu/\nu_{0})^{\beta}=(\lambda/\lambda_{0})^{-\beta},$ (4)
where $\nu_{0}$ ($\lambda_{0}$) is the rest-frame frequency (wavelength) at
which the dust becomes optically thick. Given the dust mass-absorption
coefficient of $\kappa=0.07$ m2 kg-1 at 850 $\mu$m for Galactic dust (Dunne et
al., 2000; James et al., 2002), it can be shown that the dust mass is:
$M_{\rm dust}=9.0\times 10^{9}~{}M_{\odot}~{}(\pi r_{s}^{2}/{\rm
kpc}^{2})~{}(\lambda_{0}/850\,\mu{\rm m})^{\beta}.$ (5)
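A minimal transcription of Equations (3)$-$(5) in Python (using astropy's BlackBody model; the Equation 5 normalization assumes the $\kappa$ value just quoted) reads:

```python
# General-opacity modified blackbody of Eqs. (3)-(5).
import numpy as np
from astropy import units as u
from astropy.cosmology import LambdaCDM
from astropy.modeling.models import BlackBody

cosmo = LambdaCDM(H0=70, Om0=0.3, Ode0=0.7)

def mbb_flux(nu_rest, T, beta, lam0_um, area_kpc2, z):
    """Flux density (Eq. 3) at rest-frame frequency nu_rest (a Quantity)."""
    lam_um = nu_rest.to(u.um, equivalencies=u.spectral()).value
    tau = (lam_um / lam0_um) ** (-beta)  # Eq. (4)
    bb = BlackBody(temperature=T * u.K)
    omega = (area_kpc2 * u.kpc**2 /
             cosmo.luminosity_distance(z)**2).decompose() * u.sr
    return ((1 - np.exp(-tau)) * bb(nu_rest) * omega).to(u.mJy)

def dust_mass_msun(area_kpc2, lam0_um, beta):
    """Dust mass (Eq. 5), in Msun, for kappa(850 um) = 0.07 m^2/kg."""
    return 9.0e9 * area_kpc2 * (lam0_um / 850.0) ** beta

# The formal solution of Table 3 (beta = 2.3, lambda_0 = 98 um,
# pi r_s^2 = 9 kpc^2) gives ~5.6e8 Msun, consistent with the tabulated
# (5.4 +/- 1.2) x 10^8 Msun.
print(f"{dust_mass_msun(9.0, 98.0, 2.3):.1e}")
```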
The result of the SED fit gives us a measure of the dust temperature, the
dust-obscured SFR, the dust mass, and the effective size of the dust
photosphere. Fig. 4 left shows the multi-band photometry, the median MCMC
model, and the 1$\sigma$ spread of the models. Table 3 lists the formal
parameters and their uncertainties. The dust is relatively warm ($T=44\pm 7$
K) and the dust photosphere has an effective size of $9_{-5}^{+10}$ kpc2,
comparable to the intrinsic source size measured at 343.5 GHz – $\pi
ab/4=6.4\pm 1.5$ kpc2.
The ALMA CO (3$-$2) line luminosity offers an estimate of the mass of the
molecular gas reservoir. Because CO (3$-$2) traces the warm and moderately
dense ($n_{\rm eff}\sim 10^{4}$ cm-3) component (e.g., Juneau et al., 2009),
we first convert the CO (3$-$2) to CO (1$-$0) luminosity using the average
brightness temperature ratio of $r_{31}\equiv L^{\prime}_{\rm
CO3-2}/L^{\prime}_{\rm CO1-0}=0.52$ observed in SMGs (Bothwell et al., 2013).
We then convert the CO (1$-$0) luminosity to the total molecular gas mass with
a CO-to-Molecular-Gas conversion factor of $\alpha_{\rm CO}=1.0$, a value
found appropriate for high-redshift dusty starbursts (e.g., Hodge et al.,
2012; Magnelli et al., 2012b; Xue et al., 2018). The result is a total
molecular gas mass of $M_{\rm mol}=(1.25\pm 0.07)\times
10^{11}~{}(r_{31}/0.52)^{-1}~{}(\alpha_{\rm CO}/1.0)$ $M_{\odot}$, near the
high end of the molecular gas masses measured in SMGs (see Fig. 4 middle).
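The chain of conversions can be summarized in a few lines, using the standard CO line-luminosity relation (e.g., Solomon & Vanden Bout 2005) together with the $r_{31}$ and $\alpha_{\rm CO}$ values adopted above:

```python
# L' = 3.25e7 (S dV / Jy km s^-1) (nu_obs / GHz)^-2 (d_L / Mpc)^2 (1+z)^-3,
# in K km/s pc^2, followed by the r_31 and alpha_CO conversions.
from astropy.cosmology import LambdaCDM

cosmo = LambdaCDM(H0=70, Om0=0.3, Ode0=0.7)

z, nu_obs = 2.674, 94.12      # GHz
sdv = 1.38 + 0.41             # Jy km/s, narrow + broad components (Table 2)
d_L = cosmo.luminosity_distance(z).value
Lp_32 = 3.25e7 * sdv / nu_obs**2 * d_L**2 / (1 + z) ** 3   # ~6.5e10
Lp_10 = Lp_32 / 0.52          # r_31 = 0.52
M_mol = 1.0 * Lp_10           # alpha_CO = 1.0 Msun / (K km/s pc^2)
print(f"L'_CO(3-2) ~ {Lp_32:.1e}, M_mol ~ {M_mol:.1e} Msun")  # ~1.25e11
```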
Combining the results from the SED fit and the CO (3$-$2) spectroscopy, we
found a gas depletion timescale of $\sim$0.1 Gyr, which is similar to other
SMGs but 6$\times$ shorter than co-eval main-sequence galaxies (Fig. 4
middle). The SMG’s dust emission is resolved by the ALMA band-6 data with a
beam-deconvolved size of 0.46″$\times$0.28″. Its CO (3$-$2) emission is
resolved by the ALMA band-3 data with a beam-deconvolved size of
0.76″$\times$0.54″. In both cases, we have measured the intrinsic source sizes
from clean’ed images using the CASA task imfit. These size measurements allow
us to place the SMG on the Kennicutt-Schmidt relation (Fig. 4 right). It is
characterized by high surface densities of SFR and molecular gas even compared
to the SMG population. But like other SMGs, it features a high star-formation
efficiency that is distinct from normal star-forming galaxies at $z\sim 2$
(e.g., Daddi et al., 2010; Genzel et al., 2010).
### 3.5. Companion Galaxies
We carried out a blind search of line emitters in the ALMA datacubes with a
matched-filter algorithm and tested the fidelity of the detections with
simulated noise-only interferometer data (see Appendix § B). In the spectral
window centered at 94 GHz (i.e., BB4), we found two robust line emitters: the
SMG at $z=2.674$ (S/N = 67) and Comp a at $z=2.6747$ (S/N = 6.4). The other
companion, Comp b, was detected at high significance when we combine the two
ALMA channels closest in velocity to the metal-line-detected absorbing clouds
C1 and C2 toward QSO1 (see § 4.3). The source has a peak S/N of 4.5 when
combining the two channels, while its peak S/N is only 3.6 in the two
individual channels, making it indistinguishable from noise spikes. The
matched-filter algorithm fails to identify Comp b because it combines only
adjacent channels to boost the S/N above the detection limit. In other words, the
detection of Comp b is possible only because (1) we have utilized the prior
knowledge of the redshifts of the absorption lines (with the implicit
assumption that the emission counterparts have similar redshifts), and (2) the
emission counterparts of the two clouds are superimposed on the sky
(increasing the S/N when they are combined).
Our blind search detected four additional high-fidelity line emitters: QSO2 at
$z=2.7488$ (S/N = 7.0), its companion at $z=2.7392$ ($\delta v=-770$ km s-1;
S/N = 6.1) located $\sim$30″ to the NE of QSO2, and two additional sources at
$z=2.3452$ (S/N = 5.5) and $z=2.3324$ (S/N = 5.2) that may correspond to the
$z_{\rm abs}=2.345$ H i and C iv absorbers that appear toward both QSOs (see
Fig. 12 in Appendix C). On the other hand, only the SMG is detected in the
continuum image of the two higher-frequency spectral windows (i.e., BB1 and
BB2).
While the detection of CO in the SMG and QSO2 is expected, the detection of
their companion CO emitters is not. We can use the ALMA Spectroscopic Survey
in the Hubble Ultra Deep Field (ASPECS; Decarli et al., 2019) to estimate a
baseline level of source-detection probability in normal field environments.
The ASPECS covers an area of 4.6 arcmin2 (${\rm PB}\geq 0.5$, where PB is a
correction factor for the primary beam pattern) with 17 pointings in band 3
and a spectral range of 21 GHz (84$-$105 GHz) with five tunings. The rms
sensitivity varies with frequency with a range between 0.12 and 0.4 mJy beam-1
channel-1 for a channel spacing of 7.8 MHz (the same as ours). To provide a
conservative estimate, we only count the 7 sources detected at ${\rm S/N}>6$
between 96 and 103 GHz (González-López et al., 2019), where ${\rm rms}\simeq
0.135$ mJy beam-1 channel-1. Only within this spectral range is ASPECS more
sensitive than our data (${\rm rms}\simeq 0.16$ mJy beam-1 channel-1). This
gives a source density of $0.22\pm 0.08$ arcmin-2 GHz-1 in the field. Given
that our ALMA observations cover an area of 0.88 arcmin2 where ${\rm PB}\geq
0.5$ and are $\sim$20% shallower, one would expect to identify less than
$0.36\pm 0.14$ sources at ${\rm S/N}>6$ over the 1.875 GHz bandwidth of a
baseband, and only a third of these (i.e., $<0.12\pm 0.05$ sources) are
expected to fall within $\pm$0.3 GHz (1000 km s-1) of the main galaxies to be
considered as companions. In other words, one would need to increase our
survey area by $>$8$\times$ to detect a chance “companion” galaxy of the SMG
or QSO2 in the field. Yet we have detected one ${\rm S/N}>6$ companion within
30″ of each main galaxy. Our result thus indicates that both the SMG and QSO2
inhabit overdense environments, which is consistent with their purported large
halo masses (Hickox et al., 2012).
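For transparency, the expected chance-companion counts quoted above amount to the following arithmetic (with Poisson errors assumed):

```python
# Chance-companion estimate from the ASPECS field counts.
import numpy as np

n_src = 7                           # ASPECS sources at S/N > 6, 96-103 GHz
density = n_src / (4.6 * 7.0)       # arcmin^-2 GHz^-1  -> ~0.22
err = np.sqrt(n_src) / (4.6 * 7.0)  # Poisson uncertainty -> ~0.08
n_bb = density * 0.88 * 1.875       # per baseband in our data -> ~0.36
n_comp = n_bb * (0.6 / 1.875)       # within +/-0.3 GHz -> ~0.12
print(f"{density:.2f} +/- {err:.2f} arcmin^-2 GHz^-1; "
      f"{n_bb:.2f} per baseband; {n_comp:.2f} companions")
```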
In addition to the companion galaxies detected in CO emission, both the SMG
and QSO2 are also associated with absorbers of high H i column density in the
spectrum of a common background QSO (QSO1), as we will show in the next
section.
## 4\. Absorption-line Systems
The two QSOs in the GAMA J0913$-$0107 system were first identified in the
Sloan Digital Sky Survey (SDSS) DR9 quasar catalog (Pâris et al., 2012). The
pair has a separation of only 10.8″, and more importantly, two closely
separated DLAs were immediately identified in the low-resolution SDSS spectrum
of QSO1 (Noterdaeme et al., 2012a). The stronger DLA ($\log N_{\rm HI}\simeq
21.3$) at $z_{\rm abs}\approx 2.75$ is associated with QSO2 at $z=2.7488$,
providing an important probe of the CGM around QSOs at an impact parameter of
$R_{\bot}=85$ kpc (see Appendix C). The other DLA ($\log N_{\rm HI}\simeq
20.5$) at $z_{\rm abs}\approx 2.68$ provides a window to probe the CGM of the
SMG at $z=2.674$, which is just 11.7″ from the QSO.
We searched the spectral databases with specdb
(https://github.com/specdb/specdb) and found that the QSO pair had
accumulated an excellent set of spectroscopic data from the Gemini
Multi-Object Spectrograph (GMOS; Prochaska et al., 2013b), VLT/X-shooter (Finley et al.,
2014), and Magellan Echellette Spectrograph (MagE; Rubin et al., 2015). Finley
et al. (2014) noticed the strong coincident absorption at $z_{\rm abs}\approx
2.68$ in the SDSS spectra of both QSOs, which motivated them to obtain the
higher resolution X-shooter spectra for a detailed analysis. The absorption
structure toward both QSOs is resolved into three major subsystems of variable
metallicities and with a total velocity span of $>$1700 km s-1. The observed
kinematic and metallicity coherence across sightlines is remarkable, given the
86 kpc separation between the QSOs. The authors interpreted the system as a
gaseous overdensity extending six Mpc along the line of sight, which is
suggestive of a clumpy filamentary structure that may eventually collapse and
form a proto-cluster. They attributed the two main subsystems at lower
velocities (A and B) to the IGM because of their low metallicity
(${\rm[Fe/H]}<-1.9$) and suspected that the third main subsystem (C) with
${\rm[Fe/H]}=-1.1$ is likely associated with a galaxy. Now with the detection
of the SMG and its companion galaxies, we will use the coincident absorption-
line system to characterize the CGM of these galaxies in § 5. We will show
that subsystems A and B are cool gas streams in the CGM of the SMG, and
subsystem C is indeed associated with a galaxy (Comp b).
In this section, we present a re-analysis of the $z_{\rm abs}\approx 2.68$
absorption system using a new reduction of the X-shooter spectra (§ 4.1).
Finley et al. (2014) used vpfit (https://people.ast.cam.ac.uk/~rfc/vpfit.html)
to fit Voigt profiles to the entire spectrum. Our approach is complementary to
the vpfit analysis and our results show a good agreement with those presented
in Finley et al. (2014). The main differences between the two analyses are:
1.
We fit Voigt profiles to the H i Lyman series after masking out contaminating
LYAF lines and quantify the statistical and systematic uncertainties of the
model parameters using an MCMC algorithm (§ 4.2).
2.
We measure the ionic column densities of metals with the apparent optical
depth method (AODM; § 4.3); a minimal sketch of the method is given after
this list. This is a more direct and conservative technique than Voigt
profile fitting with vpfit, because it relies only on equivalent width
measurements and uses a straightforward method to detect line saturation.
3.
We use ionic column density ratios to constrain the photoionization model for
each cloud, which in turn provides the ionization correction factors necessary
for metallicity estimates (§ 4.4).
4.
We use the SMG to define the systemic redshift and adopt the solar abundance
scale of Asplund et al. (2009).
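As referenced in item 2 above, the AODM reduces to a direct integral of the apparent optical depth over a velocity window; a minimal sketch, with the standard normalization of Savage & Sembach (1991), is:

```python
# Apparent optical depth method: tau_a(v) = ln(1/F_norm), and
# N_a(v) = 3.768e14 tau_a(v) / (f lambda[Angstrom])  [cm^-2 (km/s)^-1].
import numpy as np

def aodm_column(vel_kms, flux_norm, wrest_ang, f_osc, vmin, vmax):
    """Apparent column density (cm^-2) integrated over [vmin, vmax] km/s."""
    m = (vel_kms >= vmin) & (vel_kms <= vmax)
    # Pixels with near-zero flux signal saturation; clipping them makes
    # the integral a lower limit -- the straightforward saturation check
    # mentioned above.
    tau_a = np.log(1.0 / np.clip(flux_norm[m], 1e-3, None))
    na_v = 3.768e14 * tau_a / (f_osc * wrest_ang)
    return np.trapz(na_v, vel_kms[m])
```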
### 4.1. VLT X-shooter Spectroscopy
The X-shooter observations of the QSOs took place between 2013 March 31 and
2013 May 1 on the 8.2 m VLT/UT2 telescope (program ESO 089.A-0855; Finley et
al. 2014). X-shooter uses three individual echelle spectrographs to cover a
wide wavelength range between 0.3 and 2.5 $\mu$m simultaneously (Vernet et
al., 2011). Finley et al. (2014) estimated spectral resolutions of $R\sim
6400$ (FWHM = 47 km s-1) in the UVB arm (3000$-$5600Å), $R\sim 11000$ (27 km
s-1) in the VIS arm (5500Å$-$1$\mu$m), and $R\sim 6600$ (45 km s-1) in the NIR
arm (1$-$2.5 $\mu$m). The total exposure times are 100 min for QSO2 and 310
min for QSO1. We downloaded the raw data from the ESO archive and reduced the
data with the spectroscopy data reduction pipeline developed by George Becker
(ftp://ftp.ast.cam.ac.uk/pub/gdb/). The final 1D spectra were corrected
for the 0.2Å (0.5 pixel) wavelength redshift of the spectra in the VIS arm
noticed by Noterdaeme et al. (2012b), which is likely produced by
uncompensated instrumental flexure.
We fit the QSO continuum using the Python software package linetools
(https://github.com/linetools/linetools). First, the spectrum is
divided into a number of wavelength intervals, which are $\sim$50 Å wide
shortward of the Ly$\alpha$ emission, narrower across strong emission lines,
and wider in regions free of emission lines and longward of Ly$\alpha$. Next,
a spline is fit through the central wavelength and median flux of each
interval (i.e., the spline “knots”). Finally, these “knots” are iteratively
added, deleted, or moved until a satisfactory continuum fit is obtained.
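Schematically, the procedure looks like the sketch below (the actual fit uses linetools with interactive knot adjustment; the interval edges here are hypothetical):

```python
# Knot-based spline continuum: the median flux at the center of each
# wavelength interval defines a knot; a spline through the knots
# approximates the unabsorbed QSO continuum.
import numpy as np
from scipy.interpolate import CubicSpline

def continuum_from_knots(wave, flux, edges):
    knot_w, knot_f = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (wave >= lo) & (wave < hi)
        if m.any():
            knot_w.append(0.5 * (lo + hi))      # central wavelength
            knot_f.append(np.median(flux[m]))   # median flux in interval
    return CubicSpline(knot_w, knot_f)(wave)
```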
Figure 5.— Velocity profiles of H i absorption near the SMG redshift toward
QSO1 and QSO2, overlaid with best-fit Voigt profiles (blue curves). We
indicate data points in contamination-free regions with brown diamonds with
error bars. The gray dashed lines in the left panels show the H i Ly$\alpha$
and Ly$\delta$ absorption from the DLA at $z_{\rm abs}=2.751$; note how
significantly they affect the Ly$\alpha$ and Ly$\gamma$ profiles of the
absorption at $z_{\rm abs}\approx 2.68$. All velocities are relative to
$z_{\rm SMG}=2.674$.
Figure 6.— Velocity profiles of H i Ly$\beta$ and selected metal lines toward
QSO1 (left) and QSO2 (right). All velocities are relative to $z_{\rm
SMG}=2.674$. The velocity integration ranges of the clouds defined in Table 6
are highlighted in color. Vertical dotted lines strike out regions that are
blended with lines from absorbers at other redshifts (see Appendix C). The
error spectrum is plotted (blue) when it shows significant structures.
### 4.2. Voigt Profile Fitting of Neutral Hydrogen
Fig. 5 shows the H i absorption profiles (Ly$\alpha$ through Ly$\delta$) of
the absorbers at $z_{\rm abs}\approx 2.68$ toward the two QSOs. Although the
two QSOs are separated by 10.8″ (86 kpc at $z=2.674$), their H i absorption
profiles show strikingly similar velocity structures spanning over 1800 km
s-1, as first noted by Finley et al. (2014). The kinematic coherence indicates
that the medium responsible for the absorption is extended at least 86 kpc
across the sky plane.
Line blending from other absorbers is evident, as indicated by the
disagreement in velocity profile among the Lyman series. To measure H i column
densities in such a complex situation, it is beneficial to first identify a
guessed solution by iteratively varying the Voigt profiles (convolved to
$R=6400$) until an acceptable fit to the data is obtained. The guessed
solution not only provides a good starting point for the formal minimum
$\chi^{2}$ approach below, but also helps to identify regions contaminated by
line blending (which are then masked out). During this procedure, we
find that a minimum of 10 clouds are needed to adequately fit the Lyman series
in each sightline. Because each cloud is described by three parameters
($v,b,\log N_{\rm HI}$), our model for each QSO spectrum has a total of 30
free parameters.
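Each cloud enters the model through a standard Voigt optical-depth profile; a minimal sketch (scipy's wofz supplies the Voigt-Hjerting function, and the Ly$\alpha$ atomic data are standard values) is:

```python
# Optical depth of one Voigt component parameterized by (v, b, log N);
# wofz(u + i*a).real is the Voigt-Hjerting function H(a, u).
import numpy as np
from scipy.special import wofz

def voigt_tau(lam_ang, lam0_ang, f_osc, gamma, b_kms, logN):
    c_cms = 2.99792458e10
    lam0_cm, b_cms = lam0_ang * 1e-8, b_kms * 1e5
    u = (c_cms / b_cms) * (lam_ang / lam0_ang - 1.0)  # Doppler offsets
    a = gamma * lam0_cm / (4.0 * np.pi * b_cms)       # damping parameter
    tau0 = 0.02654 * f_osc * 10**logN * lam0_cm / (np.sqrt(np.pi) * b_cms)
    return tau0 * wofz(u + 1j * a).real

# e.g., Ly-alpha of the strongest H i component toward QSO1 (Table 4),
# at the subsystem-C redshift; the model profile would be convolved to
# R = 6400 before comparison with the data.
lam = np.linspace(4454.0, 4514.0, 3000)  # Angstrom
tau = voigt_tau(lam, 1215.67 * (1 + 2.6887), 0.4164, 6.265e8, 59.0, 20.23)
model = np.exp(-tau)
```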
To model the absorption toward QSO1, the H i Lyman lines of the DLA at $z_{\rm
abs}\approx 2.751$ (see Fig. 12$a$) must be included in the model because its
Ly$\alpha$ and Ly$\delta$ blend with the Ly$\alpha$ and Ly$\gamma$ profiles of
the absorber at $z_{\rm abs}\approx 2.68$. We find that the DLA’s H i
absorption is adequately modeled as two clouds separated by 290 km s-1
($z_{\rm abs}=2.7502,2.7538$), each with $\log N_{\rm HI}=21.0$ and $b=40$ km
s-1 (see Fig. 13). These parameters for the DLA at $z_{\rm abs}=2.751$ are
fixed in the fitting process.
The $\chi^{2}$ minimization is focused on the velocity range between $-1500$
and 2100 km s-1 for QSO1 and between $-600$ and 1350 km s-1 for QSO2. With the
guessed solution, we also mask out the pixels that are clearly contaminated by
line blending within the fitting ranges. The surviving “good” pixels are
indicated by diamond symbols with error bars in Fig. 5 and the Voigt models
are optimized using the amoeba + mcmc method described in § 1.4. The priors of
central velocities and column densities are centered around the guessed
solution, with bounds of $\pm$100 km s-1 for $v_{\rm HI}$ and $\pm$0.8 dex for
$\log N_{\rm HI}$. On the other hand, the Doppler parameter, $b_{\rm HI}$, is
allowed to vary between 5 and 70 km s-1. For three H i components, we found it
necessary to fix their velocities to those measured from low-ion metal lines,
because the H i series alone do not constrain their velocities well.
Specifically, these components are at $-470$ and $-225$ km s-1 toward QSO1 and
at $-209$ km s-1 toward QSO2. The optimized models are plotted against the
data as blue curves in Fig. 5 and the formal parameters and their statistical
uncertainties are tabulated in Table 4.
Because of the empirical nature of our placement of the unabsorbed QSO
continuum, the Voigt parameters suffer from significant systematic
uncertainties. In particular, we are interested in the systematic
uncertainties of $\log N_{\rm HI}$, which depend on (1) the column density
itself, through the varying gradient of the curve of growth, (2) the quality
of the spectrum, and (3) the significance of line blending. To quantify this, we run
the same modeling procedure as above but vary the QSO continuum model by
$\pm$10% and use the resulting offsets between the three formal solutions to
estimate systematic uncertainties. All subsequent errors in $\log N_{\rm HI}$
and metallicities ([X/H]) include both statistical and systematic
uncertainties.
Fig. 5 reveals that there are three separate kinematic clumps centered around
$\delta v\approx-300,+400,+1200$ km s-1 relative to $z_{\rm SMG}=2.674$ (i.e.,
$z_{\rm abs}\approx 2.6703,2.6789,2.6887$), which we designate as subsystems
A, B, and C, respectively, following the nomenclature of Finley et al. (2014).
The same clumps appear toward both QSOs, although their column densities vary
between sightlines. Half of the six subsystems are optically thick (i.e.,
$\log N_{\rm HI}>17.2$), including two sub-DLAs (QSO1-A and QSO1-C) and one
LLS (QSO2-A). Metal absorption lines from these subsystems are thus expected,
as we will show in the next subsection.
Table 4. Voigt Solution for H i Lyman Lines (columns 1$-$3: toward QSO1; columns 4$-$6: toward QSO2)

$v_{\rm HI}$ (km s-1) | $b_{\rm HI}$ (km s-1) | $\log N_{\rm HI}$ ($\log{\rm cm}^{-2}$) | $v_{\rm HI}$ (km s-1) | $b_{\rm HI}$ (km s-1) | $\log N_{\rm HI}$ ($\log{\rm cm}^{-2}$)
---|---|---|---|---|---
$-470.1_{-0.0}^{+0.0}$ | $32.2_{-1.2}^{+1.0}$ | $19.64_{-0.05}^{+0.04}$ | $-276.3_{-1.3}^{+1.2}$ | $47.2_{-0.7}^{+0.8}$ | $18.56_{-0.09}^{+0.06}$
$-224.7_{-0.0}^{+0.0}$ | $42.2_{-0.5}^{+0.6}$ | $20.06_{-0.04}^{+0.04}$ | $-208.5_{-0.0}^{+0.0}$ | $14.2_{-6.3}^{+6.6}$ | $17.48_{-0.47}^{+0.41}$
$52.5_{-2.4}^{+2.3}$ | $45.8_{-3.6}^{+3.7}$ | $14.66_{-0.03}^{+0.03}$ | $31.6_{-1.3}^{+1.3}$ | $38.2_{-1.4}^{+1.4}$ | $14.79_{-0.04}^{+0.04}$
$210.1_{-8.2}^{+7.8}$ | $43.3_{-12.0}^{+13.6}$ | $13.75_{-0.11}^{+0.10}$ | $205.1_{-5.9}^{+5.9}$ | $67.0_{-4.2}^{+2.2}$ | $13.75_{-0.03}^{+0.03}$
$360.1_{-3.4}^{+4.3}$ | $22.2_{-3.5}^{+3.6}$ | $15.90_{-0.25}^{+0.41}$ | $384.7_{-4.2}^{+4.0}$ | $46.2_{-3.6}^{+3.4}$ | $16.03_{-0.13}^{+0.18}$
$454.5_{-8.1}^{+9.0}$ | $58.3_{-8.3}^{+7.0}$ | $15.22_{-0.07}^{+0.06}$ | $509.1_{-8.4}^{+7.0}$ | $37.9_{-4.6}^{+5.3}$ | $14.85_{-0.08}^{+0.08}$
$612.6_{-9.4}^{+8.9}$ | $51.9_{-11.4}^{+11.5}$ | $14.08_{-0.08}^{+0.08}$ | $632.9_{-4.8}^{+4.6}$ | $18.4_{-7.9}^{+10.0}$ | $13.24_{-0.07}^{+0.08}$
$775.4_{-2.7}^{+2.7}$ | $62.1_{-3.9}^{+3.9}$ | $14.83_{-0.02}^{+0.02}$ | $798.1_{-1.5}^{+1.5}$ | $67.4_{-2.1}^{+1.7}$ | $14.41_{-0.02}^{+0.02}$
$1159.5_{-2.0}^{+2.0}$ | $59.0_{-0.7}^{+0.7}$ | $20.23_{-0.02}^{+0.02}$ | $1014.4_{-2.3}^{+2.4}$ | $29.4_{-2.6}^{+2.8}$ | $14.30_{-0.05}^{+0.06}$
$1474.8_{-2.2}^{+1.6}$ | $30.2_{-0.8}^{+1.0}$ | $18.79_{-0.19}^{+0.14}$ | $1181.8_{-1.9}^{+1.9}$ | $54.1_{-2.0}^{+2.0}$ | $16.00_{-0.09}^{+0.10}$
### 4.3. Ionic Column Densities from the AODM Method
Table 5. Selected Metal Transitions
Ion | $\lambda_{\rm rest}$ (Å) | $\log f$ | IP0 (eV) | IP1 (eV)
---|---|---|---|---
C ii | 1334.5323 | $-0.8935$ | 11.26 | 24.38
C iv | 1548.2040 | $-0.7215$ | 47.89 | 64.49
$\cdots$ | 1550.7776 | $-1.0234$ | $\cdots$ | $\cdots$
O i | 1302.1685 | $-1.3188$ | 0.00 | 13.62
Mg ii | 2796.3543 | $-0.2108$ | 7.65 | 15.04
$\cdots$ | 2803.5315 | $-0.5146$ | $\cdots$ | $\cdots$
Al ii | 1670.7886 | $0.2405$ | 5.99 | 18.83
Al iii | 1854.7183 | $-0.2526$ | 18.83 | 28.45
Si ii | 1304.3702 | $-1.0640$ | 8.15 | 16.35
$\cdots$ | 1526.7070 | $-0.8761$ | $\cdots$ | $\cdots$
Si iv | 1393.7602 | $-0.2899$ | 33.49 | 45.14
$\cdots$ | 1402.7729 | $-0.5952$ | $\cdots$ | $\cdots$
Fe ii | 1608.4508 | $-1.2388$ | 7.90 | 16.20
$\cdots$ | 2382.7642 | $-0.4949$ | $\cdots$ | $\cdots$
$\cdots$ | 2600.1725 | $-0.6209$ | $\cdots$ | $\cdots$
Fig. 6 compares the velocity profiles of H i Ly$\beta$ and a selection of
metal line transitions commonly observed in LLSs and DLAs (see Table 5 for
transition data). We find that five of the six H i subsystems are detected in
at least one metal transition; the only exception is QSO1-B. Similar to their
H i absorption, the metal-line absorptions of subsystems QSO1-A and C, the two
sub-DLAs, are resolved into multiple components. Note that the X-shooter
spectrum has higher resolution for metal lines in the range $1500<\lambda_{\rm
rest}<2700$ Å ($R=11000$ or FWHM = 27 km s-1) than for the H i Lyman lines at
$\lambda_{\rm rest}<1217$ Å ($R=6400$ or FWHM = 47 km s-1). For each distinct
metal-line cloud, we define a velocity integration window (highlighted in Fig.
6) and name it by adding a number suffix to designate its associated
subsystem. The cloud QSO1-C1 shows the strongest metal absorption with at
least four blended components within $\sim$200 km s-1. We treat it as a single
entity here, because for the purpose of measuring the gas metallicity it is
unnecessary to deblend these components with Voigt profile fitting. Because
the absorption toward QSO2 spans a narrower velocity range than that toward
QSO1, the former is missing the most blueshifted cloud “A1” and the most
redshifted cloud “C2”. For completeness, we also define cloud QSO1-B1 based on its H i absorption, even though no metal lines are detected there. As a result, there are a
total of eight metal-line clouds. The top section of Table 6 lists the
velocity integration windows of the clouds and their H i column densities, obtained by summing the linear column densities of the Voigt components within the velocity windows.
Each cloud contains only one H i Voigt component except QSO1-B1 and QSO2-A2,
both of which contain two closely separated components.
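One practical detail: the component columns must be summed in linear, not logarithmic, units. A minimal sketch, using the two QSO1-B1 components from Table 4 as inputs:

```python
# Total log N_HI of the Voigt components inside a cloud's velocity window,
# summed in linear space.
import numpy as np

def total_logN(v_comp, logN_comp, v_window):
    v_comp, logN_comp = np.asarray(v_comp), np.asarray(logN_comp)
    vmin, vmax = v_window
    inside = (v_comp >= vmin) & (v_comp <= vmax)
    return np.log10(np.sum(10.0 ** logN_comp[inside]))

# QSO1-B1 combines the components at +360 and +455 km/s (Table 4):
print(total_logN([360.1, 454.5], [15.90, 15.22], (325, 475)))  # ~15.98
```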
Table 6. Properties of Metal-line-defined Clouds
Quantity | QSO1-A1 | QSO1-A2 | QSO1-B1 | QSO1-C1 | QSO1-C2 | QSO2-A2 | QSO2-B1 | QSO2-C1
---|---|---|---|---|---|---|---|---
$\delta v/{\rm km\,s}^{-1}$ | [$-545,-405$] | [$-295,-155$] | [$325,475$] | [$975,1375$] | [$1425,1575$] | [$-300,-110$] | [$300,500$] | [$1100,1250$]
$\log N_{\rm HI}$ | $19.64_{-0.06}^{+0.04}$ | $20.06_{-0.05}^{+0.08}$ | $15.98_{-0.23}^{+0.39}$ | $20.23_{-0.08}^{+0.07}$ | $18.79_{-0.25}^{+0.33}$ | $18.59_{-0.43}^{+0.23}$ | $16.03_{-0.16}^{+0.21}$ | $16.00_{-0.09}^{+0.10}$
log C iv/C ii | $\cdots$ | $<-1.06$ | $\cdots$ | $<-1.10$ | $<-1.23$ | $<-0.80$ | $>0.93$ | $>0.36$
log Al iii/Al ii | $<-0.20$ | $<0.17$ | $\cdots$ | $-0.58$ | $<0.00$ | $<0.22$ | $\cdots$ | $\cdots$
log Si iv/Si ii | $<-1.47$ | $<-0.92$ | $\cdots$ | $-1.22$ | $<-0.86$ | $>-0.37$ | $\cdots$ | $\cdots$
$\log U$ | $<-3.4$ | $<-2.9$ | $\cdots$ | $-3.0$ | $<-3.2$ | $-2.9$ | $>-2.1$ | $>-2.5$
$\log U$, adopted | $-3.5$ | $-3.0$ | $-2.0$ | $-3.0$ | $-3.5$ | $-3.0$ | $-2.0$ | $-2.0$
$\log n_{\rm H}/{\rm cm}^{-3}$ | $-1.4$ | $-1.9$ | $-2.9$ | $-1.9$ | $-1.4$ | $-1.9$ | $-2.9$ | $-2.9$
$\log N_{\rm H}/{\rm cm}^{-2}$ | $19.9$ | $20.4$ | $19.5$ | $20.5$ | $19.7$ | $20.1$ | $19.5$ | $19.5$
$\log f_{\rm HI}$ | $-0.3$ | $-0.3$ | $-3.5$ | $-0.3$ | $-0.9$ | $-1.5$ | $-3.5$ | $-3.5$
$\log l/{\rm pc}$ | $2.8$ | $3.9$ | $3.9$ | $3.9$ | $2.6$ | $3.5$ | $3.9$ | $3.9$
$[\alpha/{\rm H}]$ | $-1.77\pm 0.06$ | $-2.28\pm 0.08$ | $\cdots$ | $-1.08\pm 0.08$ | $-1.15\pm 0.30$ | $-1.91\pm 0.34$ | $-1.02\pm 0.19$ | $-1.58\pm 0.15$
ion | O i | O i | $\cdots$ | Si ii | O i | C ii | C iv | C iv
IC | $-0.01$ | $-0.01$ | $\cdots$ | $-0.12$ | $-0.04$ | $-0.90$ | $-2.82$ | $-2.82$
$[{\rm Fe/H}]$ | $-1.92\pm 0.07$ | $-2.62\pm 0.14$ | $\cdots$ | $-1.27\pm 0.09$ | $-1.76\pm 0.34$ | $\cdots$ | $\cdots$ | $\cdots$
$[\alpha/{\rm Fe}]$ | $+0.15\pm 0.09$ | $+0.34\pm 0.16$ | $\cdots$ | $+0.19\pm 0.12$ | $+0.61\pm 0.45$ | $\cdots$ | $\cdots$ | $\cdots$
In Appendix D, we provide an overview of the AODM method and our measurements
of ionic column densities from all of the selected transitions (Table 10). The
listed uncertainties of the unsaturated and unblended detections in the table include both statistical and systematic errors. Column densities from the AODM method are taken as lower limits for lines with more than one saturated pixel (which we define as $I_{\rm obs}/I_{0}\leq 0.05$ or $\tau\geq 3$) and as upper limits for lines that are blended with transitions from absorbers at other redshifts. Lastly, for undetected transitions, we quote 3$\sigma_{\rm sta}$ upper limits on the column densities; systematic errors are not included there because it is not meaningful to adjust the QSO continuum around an undetected transition.
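The AOD integral itself is compact. Below is a minimal sketch in the spirit of Savage & Sembach (1991), using the saturation threshold stated above; the flux profile is a hypothetical stand-in for real data.

```python
# Apparent-optical-depth column density for one transition.
import numpy as np

def aod_logN(vel, norm_flux, f_osc, wrest_A):
    """vel in km/s, norm_flux = I_obs/I_0, oscillator strength f, lambda in A."""
    flux = np.clip(norm_flux, 0.05, None)  # saturated pixels -> lower limit
    tau = np.log(1.0 / flux)               # apparent optical depth
    # N [cm^-2] = 3.768e14 / (f * lambda[A]) * integral of tau dv [km/s]
    return np.log10(3.768e14 / (f_osc * wrest_A) * np.trapz(tau, vel))

vel = np.linspace(-100, 100, 81)                  # hypothetical profile
flux = 1.0 - 0.5 * np.exp(-(vel / 30.0) ** 2)
print(aod_logN(vel, flux, f_osc=10**-0.8935, wrest_A=1334.5323))  # C II 1334
```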
### 4.4. Ionization Correction and Metallicities
A relative metallicity measurement of the intervening gas requires (1) H i
column density, (2) ionic column density of a metal element, (3) the reference
solar abundances, and (4) the ionization correction. The definition of the
relative metallicity makes this explicit:
${\rm[X/H]}\equiv\log(N_{\rm X}/N_{\rm H})-\log(N_{\rm X}/N_{\rm H})_{\odot}=[\log(N_{\rm X_{i}}/N_{\rm HI})-\log(N_{\rm X}/N_{\rm H})_{\odot}]+(\log f_{\rm HI}-\log f_{\rm X_{i}})\equiv{\rm[X/H]}^{\prime}+{\rm IC}$ (6)
where Xi denotes the ionic state $i$ of element X, $f_{\rm X_{i}}\equiv N_{\rm
X_{i}}/N_{\rm X}$ is the fraction of the element in the ionic state $i$,
$f_{\rm HI}\equiv N_{\rm HI}/N_{\rm H}$ is the neutral fraction of hydrogen,
${\rm[X/H]}^{\prime}\equiv\log(N_{\rm X_{i}}/N_{\rm HI})-\log(N_{\rm X}/N_{\rm
H})_{\odot}$ is the raw metallicity, and ${\rm IC}\equiv\log f_{\rm HI}-\log
f_{\rm X_{i}}$ is the ionization correction.
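As a concrete illustration of Eq. 6, the sketch below uses the solar oxygen abundance of Asplund et al. (2009) together with hypothetical columns and ion fractions standing in for the measured and cloudy-derived values.

```python
# Worked example of Eq. 6 with hypothetical inputs.
logN_Xi, logN_HI = 15.20, 19.64     # hypothetical ionic and H I columns
log_XH_solar = -3.31                # solar (O/H) from Asplund et al. (2009)
log_fHI, log_fXi = -0.30, -0.29     # hypothetical CLOUDY ion fractions

XH_raw = (logN_Xi - logN_HI) - log_XH_solar   # raw metallicity [X/H]'
IC = log_fHI - log_fXi                        # ionization correction
print(f"[X/H] = {XH_raw:+.2f} + ({IC:+.2f}) = {XH_raw + IC:+.2f}")
```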
We have obtained the first two items ($\log N_{\rm HI}$ and $\log N_{\rm
X_{i}}$) in § 4.2 and § 4.3 and the results are listed in Tables 4 and 10.
Combined with the elemental abundances of the present-day solar photosphere
from Asplund et al. (2009), we can calculate the raw metallicity
${\rm[X/H]}^{\prime}$. Next, we calculate the ionization correction (IC) using
cloudy photoionization models (Ferland et al., 2017).
The IC factors are sensitive to both the H i column density ($\log N_{\rm
HI}$) and the ionization parameter $\log U=\log\Phi_{\rm H}/(n_{\rm H}c)$,
where $\Phi_{\rm H}$ is the surface flux of ionizing photons with $h\nu>1$ Ryd
at the illuminated face. The former has been measured, but the latter needs to
be constrained by comparing the column density ratios of different ionic states of the same element with the predictions of photoionization models.
Therefore, for each cloud listed in Table 6, we calculate a set of
photoionization models, with a termination condition set to meet the observed
H i column density. For the ionizing source, we use the Haardt & Madau (2012)
radiation background interpolated to $z=2.67$ with contributions from both
galaxies and quasars. For the cloud, we assume plane-parallel geometry, the
solar relative abundance pattern, a metallicity of ${\rm[M/H]}=-1.5$ (the derived ICs are insensitive to the assumed metallicity), and a range of hydrogen volume densities ($1.09\geq\log n_{\rm H}/{\rm cm}^{-3}\geq-4.91$) to cover ionization parameters of $-6\leq\log U\leq 0$. For each cloud, we
compare the observed ionic column ratios (C iv/C ii and Si iv/Si ii) and the
model-predicted ratios to constrain the ionization parameter. We list the
constraints on $\log U$ in Table 6. It is rare to have detections of both high
ions and low ions in the same cloud, leading to many upper or lower limits of
$\log U$. Nevertheless, we find that the most plausible ionization parameters lie in the range $-4\lesssim\log U\lesssim-2$, comparable to other published LLSs (e.g., Prochaska, 1999; Lehner et al., 2016). Depending on the data constraints, we adopt $\log U$ values of $-3.5,-3.0,$ or $-2.0$ for each cloud.
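Operationally, the constraint amounts to inverting a model curve of an ion ratio against $\log U$; a minimal sketch follows, where the monotonic model curve is a hypothetical stand-in for the cloudy grid output.

```python
# Bracketing log U from an observed ion ratio against a model grid.
import numpy as np

logU_grid = np.linspace(-6.0, 0.0, 61)
logratio_model = 1.5 * (logU_grid + 2.0)  # hypothetical log(C IV/C II) vs. log U

def infer_logU(logratio_obs):
    """Invert the model curve; a limit on the ratio maps to a limit on log U."""
    return np.interp(logratio_obs, logratio_model, logU_grid)

print(infer_logU(-1.10))  # an observed upper limit -> an upper limit on log U
```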
Finally, once both $\log N_{\rm HI}$ and $\log U$ are fixed, we use the cloudy
model of the same parameters to calculate the ICs for all of the ions (Table
11), which are then used to obtain the ionization-corrected metallicity
measurements for all of the transitions (Table 12). Notice that the ionization
correction only becomes important for (1) low ions in clouds with $\log N_{\rm
HI}\lesssim 19$ and (2) intermediate or high ions (e.g., C iv and Si iv) at
all column densities. For low ions in sub-DLAs, the ICs are fairly small
($\pm$0.15 dex). We also use the model to infer the total H column density
($\log N_{\rm H}$), the H neutral fraction ($\log f_{\rm HI}$), the H volume
density ($\log n_{\rm H}$), and the characteristic line-of-sight depth of the
cloud ($\log l=\log N_{\rm H}-\log n_{\rm H}$). The results are listed in
Table 6.
The best metallicity measurement is provided by O i, because O i has the
smallest ionization correction factors due to its charge-exchange reactions
with hydrogen (Field & Steigman, 1971) and its hydrogen-like ionization
potential. Unfortunately, O i $\lambda 1302$ is saturated in cloud C1 toward QSO1 (a common issue for this transition in DLAs) and is undetected in cloud B toward QSO1 and in the clouds toward QSO2. As a result, transitions from other ions need
to be used. The bottom section of Table 6 lists the final adopted
metallicities from our preferred $\alpha$-element transitions and Fe ii. Note
that the ICs for QSO2-B1 and QSO2-C1 are large because only the C iv lines are
detected in these clouds. We fail to obtain a reliable metallicity measurement
for QSO1-B1 because of the absence of metal absorption. For the four clouds
where we have both [$\alpha$/H] and [Fe/H], we found a moderate level of
$\alpha$-enhancement ([$\alpha$/Fe]) between 0.15 and 0.61 (with an inverse-
variance-weighted mean of 0.2), comparable to those previously measured in
$z>2$ DLAs ([$\alpha$/Fe] = $0.30\pm 0.16$; Rafelski et al. 2012).
We opt not to correct the gas-phase metallicity for dust depletion, because
(1) little depletion is expected from volatile elements such as O and C, (2)
the SDSS spectra and photometry of the QSOs show no evidence of significant
dust reddening, and (3) the depletion factors are largely uncertain in
external galaxies. As a reference, in the Milky Way’s ISM, volatile elements
(e.g., C, N, O, S, Zn) show depletions of ${\rm(X/H)}_{\rm ISM}-{\rm(X/H)}_{\rm gas}\lesssim 0.3$, while refractory elements (e.g., Mg, Al,
Si, Fe, Ni) show $0.7\lesssim{\rm(X/H)}_{\rm ISM}-{\rm(X/H)}_{\rm gas}\lesssim
2.0$ (Savage & Sembach, 1996; Groves et al., 2004). These local measurements
from a metal-rich ISM provide strict upper limits on the level of depletion
expected in the CGM of the SMG.
## 5. Emission-Absorption Connection
Figure 7.— Emission-absorption comparison. (a) CO (3$-$2) spectra of the SMG
and its companions, (b) H i Ly$\beta$ absorption, (c) H i column densities of
Voigt components (Table 4), (d) Al ii$\lambda$1670.8 absorption, and (e)
metallicities of metal-line-defined clouds (Table 6). The absorbers toward
QSO1 and QSO2 are color-coded in red and blue, respectively. In (a), the SMG
spectrum has been divided by a factor of 4 to show it together with the spectra of
its companions, and the vertical dashed lines indicate the centroid velocities
of the CO emission lines at 0, 58, 1171, and 1447 km s-1. All velocities are
relative to the redshift of the SMG ($z_{\rm SMG}=2.674$) and the gray shaded
regions indicate velocities beyond the escape velocity of a $10^{13}$
$M_{\odot}$ halo ($v_{\rm esc}\simeq 700$ km s-1).
Identifying the emission counterparts of the intervening gas helps us compare
the properties of the galaxies to those of their CGM. Having analyzed the ALMA
CO emitters in § 3 and the QSO absorption spectra in § 4, we are now ready to
draw connections between the emission and the absorption based on proximity in
both spatial and redshift dimensions.
Table 7. Properties of the CGM around the SMG and Comp b
Name | $\delta v$ (km s-1) | Galaxy | $R_{\bot}$ (kpc) | $\log N_{\rm HI}$ ($\log{\rm cm}^{-2}$) | [M/H]
---|---|---|---|---|---
QSO1-A | $[-545,-110]$ | SMG | 93.1 | $20.20_{-0.05}^{+0.07}$ | $-2.09\pm 0.07$
QSO2-A | — | — | 175.5 | $18.59_{-0.43}^{+0.23}$ | $-1.91\pm 0.34$
QSO1-B | $[300,550]$ | SMG | 93.1 | $15.98_{-0.23}^{+0.39}$ | $\cdots$
QSO2-B | — | — | 175.5 | $16.06_{-0.15}^{+0.20}$ | $-1.02\pm 0.19$
QSO1-C | [975,1575] | Comp b | 58.9 | $20.25_{-0.07}^{+0.06}$ | $-1.09\pm 0.11$
QSO2-C | — | — | 32.2 | $16.01_{-0.09}^{+0.10}$ | $-1.58\pm 0.15$
Fig. 7 directly compares the absorption profiles from the QSOs to the CO
emission profiles from the SMG and its companions. The H i Ly$\beta$ and Al
ii$\lambda$1670.8 profiles from the two QSOs are plotted together to
illustrate the striking kinematic coherence. In addition, the figure
illustrates $\log N_{\rm HI}$ from the Voigt profile solution in Table 4 and
the ionization-corrected metallicities of the metal-line-defined clouds in
Table 6.
In § 4, we have found that the $z_{\rm abs}\approx 2.68$ H i absorbers have
total H i column densities of $\log N_{\rm HI}=20.53_{-0.06}^{+0.06}$ and
$18.60_{-0.43}^{+0.23}$ toward QSO1 and QSO2, respectively. Each absorber is
resolved into three main subsystems (A, B, and C) with velocity spans of
$\sim$1500-2000 km s-1. Although their H i column densities vary significantly
between the two QSO sightlines, their radial velocities show remarkable
consistency, indicating that the QSOs are intercepting three expansive
sheets/filaments of gas. At the same time, the extreme velocity widths of the
absorption-line systems suggest that they probe merging systems (Prochaska et
al., 2019).
Results in Fig. 7 show that subsystem C is unlikely to be in the same halo as
subsystem A, for several reasons. First, subsystem C is 10$\times$ more
metal-enriched than subsystem A (${\rm[M/H]}\simeq-1.1$ vs. $-2.1$). Secondly,
the velocity spans of $\sim$1950 km s-1 (QSO1) and $\sim$1460 km s-1 (QSO2;
Table 4) and their asymmetric distributions around $z_{\rm SMG}$ (absorption
is centered at $z_{\rm abs}=2.68$) are inconsistent with gravitational motions
inside even a $10^{13}$ $M_{\odot}$ halo centered on the SMG, because the
escape velocity of such a halo following the NFW profile is flat at $\sim$700
km s-1 between 60 kpc and the virial radius of 186 kpc. Lastly, we have
detected CO emission from Comp b, which lies much closer to the QSOs than the SMG, at almost exactly the redshifts of the absorbing clouds in subsystem C.
Therefore, we consider subsystem A part of the CGM of the SMG at $z=2.674$,
and subsystem C part of the CGM of Comp b at $z=2.6884$ and 2.6917.
As for subsystem B ($\log N_{\rm HI}\simeq 16$), although its velocity allows
an association with the SMG, it is unimportant because its contribution to the
CGM is negligible compared to subsystem A.
Once the emission counterparts are determined, the absorption-line
measurements from the two QSO sightlines can be plotted against the impact
parameter to show crude radial profiles of the CGM around the SMG and Comp b.
We consolidate the results in Table 7, where we have assigned a velocity
window for each subsystem that captures its major clouds. We calculate the
total H i column densities from the H i Voigt components within these velocity
windows. When there are multiple metal-line clouds in a subsystem, the
metallicity is calculated as the $N_{\rm H}$-weighted mean [$\alpha$/H], where
$N_{\rm H}$ is the H i + H ii column density based on the adopted cloudy
model.
Figure 8.— Profiles of the CGM around the SMG (red) and Comp b (blue). Top: H
i column density vs. projected separation for the absorption-line clouds in
GAMA J0913$-$0107 and the literature. The curves show the projected H i column
densities of NFW halos, where the precipitous decline marks the virial radii.
Bottom: Metallicity vs. projected separation for the absorption-line clouds in
GAMA J0913$-$0107 and the literature. Literature QSO CGM data are from Lau et
al. (2016), SFG $\log N_{\rm HI}$ data are from Simcoe et al. (2006), Rudie et
al. (2012), and Crighton et al. (2013, 2015), and SFG [M/H] data are from
Simcoe et al. (2006). The horizontal dashed lines in the bottom panel indicate
the range of IGM metallicities measured from the LYAF at $z_{\rm abs}\sim 2.5$
(${\rm[M/H]}=-2.85\pm 0.75$; Simcoe et al., 2004). The sloped line shows the
oxygen abundance profile of the giant spiral galaxy M101 measured with an
electron-temperature-based method (Eq. 5 of Kennicutt et al., 2003). The solid
portion is covered by H ii regions, while the dotted portion is an
extrapolation.
The GAMA J0913$-$0107 system gives us the first glimpse into the CGM around an
SMG. With high H i column density and low metallicity at large impact
parameters, the CGM of the SMG is distinct from the CGM of QSOs and normal
star-forming galaxies at $z\sim 2-3$.
The profile of H i column density is shown in Fig. 8$a$. We compare our
measurements with literature QSO absorption-line measurements in the
surroundings of $z\sim 2-3$ QSOs (Lau et al., 2016) and Lyman Break Galaxies
(LBGs; Simcoe et al., 2006; Rudie et al., 2012; Crighton et al., 2013, 2015).
The H i column densities of the SMG’s CGM, similar to coeval QSOs, are
significantly greater than those of star-forming galaxies at $R_{\bot}\gtrsim
70$ kpc. The H i column density declines as we move away from the SMG, with a
gradient of $-2.0\pm 0.4$ dex per 100 kpc. How do the observed column
densities compare with the projected surface mass density $\Sigma_{M}(R)$ of
NFW halos? To make this comparison, we first calculate $\Sigma_{M}(R)$ by
integrating the NFW density profile $\rho(r)$ up to the virial radius:
$\Sigma_{M}(R)=2\int_{R}^{R_{\rm vir}}\frac{r\rho(r)}{\sqrt{r^{2}-R^{2}}}dr$
(7)
We then convert it to H i column densities by assuming a baryon$-$dark-matter
density ratio of $f_{b}\equiv\Omega_{b}/\Omega_{c}=0.187$ (Planck
Collaboration et al., 2020), a Helium correction of $f_{\rm He}=M_{\rm
H+He}/M_{\rm H}=1.36$, and an arbitrary neutral fraction of 10% (comparable to the neutral fractions estimated by our photoionization models in Table 6).
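A minimal numerical sketch of this conversion follows, with a hypothetical concentration ($c=4$) assumed for the NFW halo; the substitution $u=\sqrt{r^{2}-R^{2}}$ removes the endpoint singularity of Eq. 7.

```python
# Project an NFW halo (Eq. 7) and convert to log N_HI with the stated
# f_b = 0.187, f_He = 1.36, and 10% neutral fraction.
import numpy as np
from scipy.integrate import quad

M_SUN, M_P, KPC = 1.989e33, 1.673e-24, 3.086e21     # cgs constants
M_halo, R_vir, conc = 1e13 * M_SUN, 186 * KPC, 4.0  # c = 4 is hypothetical
r_s = R_vir / conc
rho_s = M_halo / (4 * np.pi * r_s**3 * (np.log(1 + conc) - conc / (1 + conc)))

def rho(r):                                   # NFW density profile
    x = r / r_s
    return rho_s / (x * (1 + x) ** 2)

def logN_HI(R):
    umax = np.sqrt(R_vir**2 - R**2)           # u = sqrt(r^2 - R^2)
    Sigma, _ = quad(lambda u: 2 * rho(np.sqrt(u**2 + R**2)), 0, umax)
    N_H = Sigma * 0.187 / (1.36 * M_P)        # baryon fraction, He correction
    return np.log10(0.10 * N_H)               # 10% neutral fraction

for R_kpc in (93.1, 175.5):
    print(R_kpc, round(logN_HI(R_kpc * KPC), 1))
```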
Fig. 8$a$ shows that the H i column density profiles of the SMG and the QSOs
are consistent with the expectation from a $\sim$$10^{13}$ $M_{\odot}$ halo.
This is in agreement with the detection of a high column of H i ($\log N_{\rm
HI}=18.6$) at an impact parameter as large as 176 kpc. The agreement also shows
that the neutral gas in the absorption-line systems can account for $\sim$10%
of the total baryonic mass in the halo if they have a filling factor close to
unity. On the other hand, the $\log N_{\rm HI}$ profile of LBGs is more
consistent with a $\sim$$10^{12}$ $M_{\odot}$ halo.
The metallicity profile is shown in Fig. 8$b$. The literature data points show that metal-enriched gas with $-1\lesssim{\rm[M/H]}\lesssim 0$ dominates the inner part ($R_{\bot}\lesssim 150$ kpc) of the CGM around both QSOs and LBGs. By contrast, the CGM of the SMG is metal-poor, with an almost constant metallicity of [M/H]$\approx-2$ across two sightlines separated by 86 kpc (10.8″). Its metallicity is near the 1$\sigma$ upper bound of the metallicities measured in the LYAF (i.e., the IGM; Simcoe et al., 2004), and it is lower than that of the CGM of QSOs by $\sim$1.5 dex at $R_{\bot}=93.1$ kpc and by $\sim$0.5 dex at $R_{\bot}=175.5$ kpc.
On the other hand, the CGM of Comp b shows properties similar to that of normal
star-forming galaxies at $z\sim 2-3$. Both line emitters in Comp b have orders
of magnitude lower molecular gas mass than the SMG (see Table 1). Assuming a
CO-to-molecular-mass correction factor of $\alpha_{\rm CO}=4.3$ and a CO
excitation correction of $r_{31}=0.52$, the emitters at $z=2.6884$ and 2.6917
have molecular gas masses of $M_{\rm mol}=5.6\times 10^{9}$ and $4.4\times
10^{9}$ $M_{\odot}$, respectively. The gas masses are comparable to those
measured in the lensed normal star-forming galaxies that have stellar masses
of $\sim 10^{10}$ $M_{\odot}$ (Saintonge et al., 2013). One would thus expect
a CGM similar to that of LBGs. The corresponding subsystem C provides absorption-line measurements at $R_{\bot}=32.3$ and 58.9 kpc. It shows significantly metal-enriched gas (compared to the IGM level) with a large variation in H i column density over a difference of just 27 kpc in impact parameter. Comp b is thus surrounded by a metal-enriched clumpy medium extending to at least $\sim$60 kpc.
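The molecular-mass arithmetic above is a one-liner; in this sketch the CO (3$-$2) line luminosities are hypothetical inputs back-computed from the quoted masses, while $\alpha_{\rm CO}$ and $r_{31}$ are the values adopted in the text.

```python
# M_mol = alpha_CO * L'_CO(1-0), with L'_CO(1-0) = L'_CO(3-2) / r31.
alpha_CO, r31 = 4.3, 0.52            # Msun (K km/s pc^2)^-1 and L'_32/L'_10

def M_mol(Lprime_CO32):
    return alpha_CO * Lprime_CO32 / r31

for Lp in (6.8e8, 5.3e8):            # hypothetical L'_CO(3-2) [K km/s pc^2]
    print(f"M_mol ~ {M_mol(Lp):.1e} Msun")
```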
In the above, we have shown that the CGM of SMM J0913 is distinctly different
from the CGM of QSOs and normal star-forming galaxies. But how do our
absorbers compare with other DLAs in terms of absorption-line properties only?
The H i column densities of subsystems A and C toward QSO1 miss the DLA
threshold of $\log N_{\rm HI}=20.3$ by merely $\sim$0.1 dex. Because such
small differences are comparable to the measurement uncertainty, more liberal
thresholds have been used to select DLAs (e.g., $\log N_{\rm HI}>20.1$ in
Rubin et al., 2015). Combined with the result that the two sub-DLAs have
neutral fractions of $\sim$50% from the cloudy models, we believe that it is
appropriate to compare their properties with those of the general DLA
population.
With high-resolution spectra of 100 DLAs at $z_{\rm abs}\sim 2-4$, Rafelski et
al. (2012) found that their metallicity distribution is well fit by a Gaussian
with a mean at ${\rm[M/H]}=-1.57$ and a dispersion of 0.57 dex. The DLA
metallicity only mildly decreases with redshift, but shows a strong
correlation with the width of low-ion metal lines (e.g., $\Delta V_{90}$ or
$W_{1526}$) (Neeleman et al., 2013). This correlation between kinematics and
metallicity is generally interpreted as a manifestation of the mass-
metallicity relation (e.g., Tremonti et al., 2004; Erb et al., 2006), because
the line width may reflect the halo mass (like in the Tully-Fisher relation),
which in turn is proportional to the stellar mass (Møller et al., 2013).
Previous works have used higher-resolution spectra ($R\gtrsim 40000$) to measure $\Delta V_{90}$, the velocity interval enclosing 90% of the optical depth of an unsaturated line. The alternative kinematic parameter, $W_{1526}$, the rest-frame equivalent width of Si ii$\lambda$1526.7, is unsuitable for our sub-DLAs because the line is unsaturated (see Fig. 6); the equivalent width becomes a good kinematics indicator only when the line is saturated, which weakens its dependence on the Si+ column density (and consequently, on metallicity). To place our sub-DLAs on the relation, we therefore adopt the velocity separation between kinematically resolved substructures as a surrogate for $\Delta V_{90}$. QSO1-A shows two velocity components of
similar strength separated by $\sim$250 km s-1, and QSO1-C is dominated by the
stronger C1 cloud, which appears to be a blend of four components with a
velocity span of $\sim$200 km s-1. These estimates of the velocity width
should be considered as lower limits on $\Delta V_{90}$, because they are the
separations between peak optical depths.
Figure 9.— Metallicity$-$line-width relation of (sub-)DLAs at $z\sim 2-3$.
Gray squares are 44 DLAs between $2<z_{\rm abs}<3$ compiled from the
literature (Neeleman et al., 2013). The red and blue circles show respectively
subsystems A and C toward QSO1. For both sub-DLAs, we have estimated the
velocity widths using the separation of resolved components in the $R\sim
11000$ X-shooter/VIS spectrum.
Fig. 9 compares the two sub-DLAs against literature DLAs. To control for the redshift evolution of the mass-metallicity relation, only the DLAs between $2<z_{\rm abs}<3$ are plotted. While QSO1-C blends into the general trend established by DLAs, QSO1-A is a clear outlier. Firstly, very few DLAs have
metallicities as low as QSO1-A: only 2 out of the 44 DLAs
($4.5_{-1.5}^{+5.5}$%) have ${\rm[M/H]}\leq-2.1$. Secondly, its high velocity
width places it significantly below the metallicity-line-width relation of
DLAs, which would have predicted a velocity width of only $\sim$20 km s-1
based on its metallicity at ${\rm[M/H]}=-2.1$.
This finding suggests that most of the DLAs are likely closer to their hosts
than our sub-DLA QSO1-A ($R_{\bot}=93$ kpc), consistent with previous
observations of DLA host galaxies (see § 1.3). More importantly, the unusual
combination of high velocity width and low metallicity provides a method, based on absorption-line properties alone, to potentially separate (sub-)DLAs associated with normal star-forming galaxies at low impact parameters from those associated with more massive galaxies like SMGs at large impact parameters.
## 6. Summary & Discussion
We have carried out a comprehensive study of the emission-absorption system
GAMA J0913$-$0107 at $z\sim 2.67$ with a multi-wavelength data set obtained
primarily from Herschel, ALMA, and VLT/X-shooter. The system consists of a
bright SMG, its CO companion galaxies, and a number of optically thick H i
absorbers toward two background QSOs within 22″ of the SMG. Our main results
are:
1.
The Herschel-selected SMG at $z=2.674$, with an 870 $\mu$m flux of 7.4 mJy and
an IR luminosity of $\sim 10^{13}$ $L_{\odot}$, is one of the most luminous
dusty star-forming galaxies. Its properties are similar to the general SMG
population at $z\sim 2$, featuring a short gas depletion timescale of
$\sim$0.1 Gyr and compact (sub-arcsec) sizes in both dust emission and CO
(3$-$2) emission. The high S/N spectrum reveals two CO (3$-$2) components at
almost the same redshift: $\sim$1/4 of the line flux is in a broad component
with a ${\rm FWHM}\simeq 900$ km s-1, while $\sim$3/4 of the flux is in a narrow component with ${\rm FWHM}\simeq 250$ km s-1.
2.
Three companion CO emitters are identified within 30″ and 1500 km s-1 of the
SMG. A comparison with the source counts from the ASPECS field survey
indicates that the SMG lives in an over-dense environment.
3.
Two nearby QSOs provide background beacons to probe the CGM of the SMG. A DLA
with a total H i column density of $\log N_{\rm HI}=20.5$ is identified at
$z_{\rm abs}\sim 2.68$ in the closer QSO sightline at $\theta=11.7\arcsec$.
The DLA is quite unusual, in terms of both the large impact parameter
($R_{\bot}=93.1$ kpc to the SMG) and the enormous velocity span ($\sim$2000 km
s-1). X-shooter resolved the DLA into three major subsystems, including two
sub-DLAs with distinctly different metallicities separated by $\sim$1600 km s-1. Remarkably, the same subsystems are also found in the farther QSO
sightline at $\theta=22.1\arcsec$: they have nearly the same velocities and
metallicities as their counterparts at $\theta=11.7\arcsec$, despite lower H i
column densities (total $\log N_{\rm HI}=18.6$).
4.
We use the absorption-line systems to characterize the CGM of the SMG and its
companion Comp b, and we compare their properties with the CGM of QSOs and
normal star-forming galaxies. The CGM of the SMG forms its own category: while
its high column densities at large impact parameters are similar to the
massive halos inhabited by $z=2-3$ QSOs, its metal content ($\sim$1% solar) is
an order of magnitude lower than the circum-QSO medium. On the other hand, the
CGM of the much less luminous Comp b is more consistent with that of normal
star-forming galaxies at $z=2-3$, showing significant metal enrichment
($\sim$10% solar) within $R_{\bot}\lesssim 60$ kpc.
The detection of high-column density, mostly neutral, metal-poor gas in the
CGM of a massive dusty starburst galaxy at $z=2.674$ has powerful implications for theories of galaxy formation and evolution. The remarkable consistency of
the H i absorbers in both kinematics and metallicity across two sightlines
separated by 86 kpc is at odds with CGM models that assume randomly floating H
i clouds in pressure equilibrium with hot X-ray gas. Instead, it is logical to
assume that the background QSOs have intercepted a single filament of cool gas
permeating the halo.
Narrow filaments of cool gas and satellite galaxies can penetrate the hot CGM
of massive halos without ever being shock-heated to the virial temperature,
because (1) massive halos are rare and tend to form at the intersections of
the cosmic web and (2) the cooling time is shorter in the filaments than in
the halo (Dekel & Birnboim, 2006; Dekel et al., 2009; Kereš et al., 2009). At
$z\gtrsim 2.5$, such cold streams can survive even in halos more massive than
$\sim$$10^{13}$ $M_{\odot}$ (although note that this mass limit is highly
uncertain). In particular, stream-feeding is likely important in the bright
SMGs with $S_{850}>5$ mJy because their comoving space density exceeds the
expectation from minor and major mergers (Dekel et al., 2009).
The observed properties of the absorption-line systems match the simulation-
predicted properties of cold streams. First, the simulations of Dekel et al.
(2009) show that the inflow velocity is comparable to the virial velocity and
is roughly constant along the filament. This is consistent with the velocity
shift ($\delta v=-300$ km s-1) and the kinematic coherence we saw between the
clouds in both QSO sightlines. Secondly, by post-processing gas in seven
simulated halos with $M_{\rm halo}=10^{10}-10^{12}$ $M_{\odot}$ between
$1.5<z<4.0$, Fumagalli et al. (2011) found that the absorption-line systems
associated with the smooth stream component have systematically lower
metallicity ($\sim$1% solar). This is exactly the level of the gas metallicity
we measured in the absorbers associated with the SMG.
Radially inflowing on nearly a free-fall timescale, the cold streams may
account for the bulk of the baryonic accretion rate and become the dominant
mechanism to feed the growth of galaxies (Kereš et al., 2009). We can crudely
estimate the gas accretion rate from the filament that the QSOs intercepted.
The filament has a length $>$176 kpc and a depth on the order of 10 kpc at
$R_{\bot}=93$ kpc. The former is estimated from the impact parameter of QSO2,
and the latter is estimated from the ratio between the column density and the
volume density of Hydrogen, $l=N_{\rm H}/n_{\rm H}$, inferred from the
photoionization model of the sub-DLA QSO1-A2. The depth-to-distance ratio is
0.11 radian or $6^{\circ}$, comparable to the opening angles of
$20-30^{\circ}$ seen in simulations (Dekel et al., 2009). Our photoionization
models also indicate similar H i+H ii column densities for QSO1-A2 ($\log
N_{\rm H}=20.4$) and QSO2-A2 ($\log N_{\rm H}=20.1$), the two main clouds
associated with the SMG at $R_{\bot}=93,176$ kpc. By assuming an opening angle
of $\beta=25^{\circ}$, an average hydrogen column density of $\log N_{\rm
H}=20.2$, and a $10^{13}$ $M_{\odot}$ NFW halo at $z=2.674$, we estimate that
the mass of the filament is $M_{\rm fil}=f_{\rm He}m_{p}N_{\rm
H}(\beta~{}R_{\rm vir}^{2}/2)\sim 1.3\times 10^{10}$ $M_{\odot}$. Given an
accretion timescale of $\tau_{\rm acc}=R_{\rm vir}/V_{\rm vir}=4\times 10^{8}$
yr, the gas accretion rate from this single filament is $\sim$33
$M_{\odot}\,{\rm yr}^{-1}$. Typically three main filaments are seen in the
simulations, so our estimate shows that cold-mode accretion can supply gas at
a rate of $\sim$100 $M_{\odot}\,{\rm yr}^{-1}$. Although accounting for only
$\sim$10% of the current SFR, our estimated cool gas accretion rate is in fact
comparable to the rate of total gas supply to the central galaxies in
$10^{13}$ $M_{\odot}$ halos at $z=2$ from cosmological simulations (see Fig. 9
of Kereš et al., 2009), and at this rate the molecular gas reservoir of
$10^{11}$ $M_{\odot}$ can be acquired in just $\sim$1 Gyr. On the other hand,
star formation at the current intensity seems unsustainable despite the
efficient gas supply from cold streams.
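The filament arithmetic above is easy to reproduce; a sketch with the numbers quoted in the text:

```python
# Filament mass and accretion rate from the adopted geometry.
import numpy as np

M_SUN, M_P, KPC = 1.989e33, 1.673e-24, 3.086e21   # cgs constants
f_He, beta = 1.36, np.deg2rad(25.0)               # He correction, opening angle
N_H = 10 ** 20.2                                  # mean H column [cm^-2]
R_vir = 186 * KPC                                 # virial radius of the halo

M_fil = f_He * M_P * N_H * beta * R_vir**2 / 2 / M_SUN   # ~1.3e10 Msun
tau_acc = 4e8                                            # R_vir / V_vir [yr]
print(f"M_fil ~ {M_fil:.1e} Msun, Mdot ~ {M_fil / tau_acc:.0f} Msun/yr")
```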
In this work, we have presented the first observational evidence that supports
the existence of cold streams in the CGM of a massive starburst galaxy. The
GAMA J0913$-$0107 system has an excellent data set and the results are highly
informative, but larger samples are clearly desired to draw conclusions on the
general properties of the CGM. We hope that this will serve as a springboard
for upcoming statistical studies of the CGM in similar galaxies. As an attempt
to inform these future studies, it is worth discussing the major technical
challenges we faced in this program:
1.
The large beam of Herschel (17.8″ at 250 $\mu$m) makes it inefficient to
identify SMG-QSO pairs with small angular separations ($\theta\lesssim
10\arcsec$). As a result, QSO1 in GAMA J0913$-$0107 is the only one that
probes below 100 kpc; and despite intercepting a sub-DLA it has yet to expose
the chemically enriched area of the CGM of the SMG.
2.
High S/N spectra with moderately high spectral resolution are needed to
unambiguously detect optically thick absorbers with $17.2<\log N_{\rm
HI}\lesssim 19$. For example, with the $R\sim 2000$ SDSS spectrum of QSO2, we
could not have identified the LLS associated with the SMG (i.e., QSO2-A2 with
$\log N_{\rm HI}=18.6$). But the LLS is unambiguously detected in the
X-shooter spectrum because of its resolved H i Lyman profiles and the clear
detection of low-ion metal lines. Similarly low column densities are expected
in most of our sample, because all of the other spectroscopically confirmed
SMG-QSO pairs we have so far have impact parameters between $100<R_{\bot}<300$
kpc (Fu et al., 2016, and unpublished data) and none of them show obvious
(sub-)DLA features (which may be expected given the large impact parameters).
3.
It has been difficult to obtain spectroscopic redshifts of the Herschel
sources because (1) they require sub-arcsec positions from interferometers to
place spectroscopic slits, (2) the near-IR spectral range suffers from heavy
telluric absorption and OH emission, and (3) SMGs tend to be weak in rest-
frame optical lines. The latter two points are the main reasons why our
redshift success rate is only $\sim$60%.
Possible solutions to these issues may be already on the horizon. To address
the first challenge, we need to design an efficient interferometer imaging
survey, because a sample of Herschel sources with less than 10″ apparent
offsets from optical QSOs is likely overwhelmed by contaminating sources. Two thirds of the Herschel-SDSS-selected pairs with apparent separations between 5″
and 10″ turned out to be IR-luminous QSOs instead of projected SMG-QSO pairs.
A good strategy is to observe multiple sources located within a $\sim$$10^{\circ}$
diameter circle in a single ALMA scheduling block (SB). In Fu et al. (2017),
we managed to observe $\sim$10 targets in a single $\sim$50 min SB, achieving
an on-source integration time of 200 s per source and an rms of 0.12 mJy
beam-1. Do we have enough such pairs to populate a 50-min SB? The surface
density of Herschel-QSO pairs with offsets less than 10″ is 0.16 deg$^{-2}$ (26 pairs in the 161.6 deg$^{2}$ H-ATLAS GAMA fields), which gives $\sim$13 targets for a $\sim$$10^{\circ}$ diameter circle.
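The target count per SB follows directly from this surface density:

```python
# Expected SMG-QSO pairs inside a ~10 deg diameter circle.
import numpy as np

surface_density = 26 / 161.6          # pairs deg^-2 in the H-ATLAS GAMA fields
area = np.pi * (10.0 / 2) ** 2        # circle area [deg^2]
print(f"~{surface_density * area:.0f} targets per SB")   # ~13
```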
To address the second challenge, we need QSO spectra with quality similar to
the X-shooter spectra presented here to better sample the spatial profile of
the CGM. The QSOs in our sample are selected to have $g<22$, the majority of
which require $\sim$1.5 hr of exposure time with an echellette spectrograph
on a 10-m-class telescope to reach a sufficient S/N at $R\sim 8000$ (e.g., see
the Keck/ESI survey of DLAs by Rafelski et al., 2012).
To address the last challenge, we need a more efficient method to obtain SMG
redshifts. One potential approach is to exploit the frequency scan mode
offered by modern millimeter interferometers. This method has the additional
advantage of bypassing the interferometer imaging step because the primary
beam is usually larger than the Herschel positional uncertainty and the line
detection also provides positional information. For instance, with NOEMA scans
of the 2 mm and 3 mm bands (only two spectral configurations per band), Neri
et al. (2020) obtained 12 secure redshifts for 13 sources. The average
telescope time spent is $\sim$105 min per source, including $\sim$40 min
overhead. Although the targets in this pilot study have 500 $\mu$m flux
densities greater than 80 mJy (many are strongly lensed), this technique could
be applied to fainter sources (like ours with $S_{500}>20$ mJy) as the
instrument sensitivity and overheads continue to improve.
We thank D. Kereš and J. Hennawi for discussions. This work is partially
supported by the National Science Foundation (NSF) grant AST-1614326. D. N.
acknowledges support from NSF AST-1909153. The National Radio Astronomy
Observatory is a facility of the NSF operated under cooperative agreement by
Associated Universities, Inc. This paper makes use of the following ALMA data:
ADS/JAO.ALMA#2015.1.00131.S, ADS/JAO.ALMA#2018.1.00548.S. ALMA is a
partnership of ESO (representing its member states), NSF (USA) and NINS
(Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI
(Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA
Observatory is operated by ESO, AUI/NRAO and NAOJ. Facilities: Herschel, ALMA,
Gemini/GNIRS, VLT/X-shooter, Keck/LRIS, Sloan, KiDS, VISTA
## References
* Alaghband-Zadeh et al. (2012) Alaghband-Zadeh, S., Chapman, S. C., Swinbank, A. M., et al. 2012, MNRAS, 424, 2232
* Asplund et al. (2009) Asplund, M., Grevesse, N., Sauval, A. J., & Scott, P. 2009, ARA&A, 47, 481
* Barger et al. (1998) Barger, A. J., Cowie, L. L., Sanders, D. B., et al. 1998, Nature, 394, 248
* Benítez (2000) Benítez, N. 2000, ApJ, 536, 571
* Birnboim & Dekel (2003) Birnboim, Y., & Dekel, A. 2003, MNRAS, 345, 349
* Blain et al. (2002) Blain, A. W., Smail, I., Ivison, R. J., Kneib, J.-P., & Frayer, D. T. 2002, Phys. Rep., 369, 111
* Bolatto et al. (2013) Bolatto, A. D., Wolfire, M., & Leroy, A. K. 2013, ARA&A, 51, 207
* Bonato et al. (2019) Bonato, M., Liuzzo, E., Herranz, D., et al. 2019, MNRAS, 485, 1188
* Bothwell et al. (2013) Bothwell, M. S., Smail, I., Chapman, S. C., et al. 2013, MNRAS, 429, 3047
* Bruzual & Charlot (2003) Bruzual, G., & Charlot, S. 2003, MNRAS, 344, 1000
* Bryan & Norman (1998) Bryan, G. L., & Norman, M. L. 1998, ApJ, 495, 80
* Bullock & Boylan-Kolchin (2017) Bullock, J. S., & Boylan-Kolchin, M. 2017, ARA&A, 55, 343
* Bullock et al. (2001) Bullock, J. S., Kolatt, T. S., Sigad, Y., et al. 2001, MNRAS, 321, 559
* Burbidge et al. (1968) Burbidge, E. M., Lynds, C. R., & Stockton, A. N. 1968, ApJ, 152, 1077
* Cantalupo et al. (2014) Cantalupo, S., Arrigoni-Battaia, F., Prochaska, J. X., Hennawi, J. F., & Madau, P. 2014, Nature, 506, 63
* Casey et al. (2014) Casey, C. M., Narayanan, D., & Cooray, A. 2014, Phys. Rep., 541, 45
* Chabrier (2003) Chabrier, G. 2003, PASP, 115, 763
* Chapman et al. (2005) Chapman, S. C., Blain, A. W., Smail, I., & Ivison, R. J. 2005, ApJ, 622, 772
* Chen et al. (2015) Chen, C.-C., Smail, I., Swinbank, A. M., et al. 2015, ApJ, 799, 194
* Chen et al. (2016) Chen, Y.-M., Shi, Y., Tremonti, C. A., et al. 2016, Nature Communications, 7, 13269
* Condon (1997) Condon, J. J. 1997, PASP, 109, 166
* Crighton et al. (2013) Crighton, N. H. M., Hennawi, J. F., & Prochaska, J. X. 2013, ApJ, 776, L18
* Crighton et al. (2015) Crighton, N. H. M., Hennawi, J. F., Simcoe, R. A., et al. 2015, MNRAS, 446, 18
* Cushing et al. (2004) Cushing, M. C., Vacca, W. D., & Rayner, J. T. 2004, PASP, 116, 362
* Daddi et al. (2009) Daddi, E., Dannerbauer, H., Stern, D., et al. 2009, ApJ, 694, 1517
* Daddi et al. (2010) Daddi, E., Elbaz, D., Walter, F., et al. 2010, ApJ, 714, L118
* Daddi et al. (2020) Daddi, E., Valentino, F., Rich, R. M., et al. 2020, arXiv e-prints, arXiv:2006.11089
* Decarli et al. (2019) Decarli, R., Walter, F., Gónzalez-López, J., et al. 2019, ApJ, 882, 138
* Dekel & Birnboim (2006) Dekel, A., & Birnboim, Y. 2006, MNRAS, 368, 2
* Dekel et al. (2009) Dekel, A., Birnboim, Y., Engel, G., et al. 2009, Nature, 457, 451
* Downes & Solomon (1998) Downes, D., & Solomon, P. M. 1998, ApJ, 507, 615
* Dunne et al. (2000) Dunne, L., Eales, S., Edmunds, M., et al. 2000, MNRAS, 315, 115
* Eales et al. (2010) Eales, S. A., Raymond, G., Roseboom, I. G., et al. 2010, A&A, 518, L23
* Eastman et al. (2013) Eastman, J., Gaudi, B. S., & Agol, E. 2013, PASP, 125, 83
* Eastman et al. (2019) Eastman, J. D., Rodriguez, J. E., Agol, E., et al. 2019, arXiv e-prints, arXiv:1907.09480
* Elias et al. (2006) Elias, J. H., Joyce, R. R., Liang, M., et al. 2006, in Proc. SPIE, Vol. 6269, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, 62694C
* Erb et al. (2006) Erb, D. K., Shapley, A. E., Pettini, M., et al. 2006, ApJ, 644, 813
* Falgarone et al. (2017) Falgarone, E., Zwaan, M. A., Godard, B., et al. 2017, Nature, 548, 430
* Faucher-Giguère et al. (2016) Faucher-Giguère, C.-A., Feldmann, R., Quataert, E., et al. 2016, MNRAS, 461, L32
* Faucher-Giguère et al. (2015) Faucher-Giguère, C.-A., Hopkins, P. F., Kereš, D., et al. 2015, MNRAS, 449, 987
* Ferland et al. (2017) Ferland, G. J., Chatzikos, M., Guzmán, F., et al. 2017, Rev. Mexicana Astron. Astrofis., 53, 385
* Field & Steigman (1971) Field, G. B., & Steigman, G. 1971, ApJ, 166, 59
* Finley et al. (2014) Finley, H., Petitjean, P., Noterdaeme, P., & Pâris, I. 2014, A&A, 572, A31
* Fu et al. (2017) Fu, H., Isbell, J., Casey, C. M., et al. 2017, ApJ, 844, 123
* Fu et al. (2013) Fu, H., Cooray, A., Feruglio, C., et al. 2013, Nature, 498, 338
* Fu et al. (2016) Fu, H., Hennawi, J. F., Prochaska, J. X., et al. 2016, ApJ, 832, 52
* Fumagalli et al. (2016) Fumagalli, M., O’Meara, J. M., & Prochaska, J. X. 2016, MNRAS, 455, 4100
* Fumagalli et al. (2011) Fumagalli, M., Prochaska, J. X., Kasen, D., et al. 2011, MNRAS, 418, 1796
* Fynbo et al. (2008) Fynbo, J. P. U., Prochaska, J. X., Sommer-Larsen, J., Dessauges-Zavadsky, M., & Møller, P. 2008, ApJ, 683, 321
* Fynbo et al. (2013) Fynbo, J. P. U., Geier, S. J., Christensen, L., et al. 2013, MNRAS, 436, 361
* Fynbo et al. (2018) Fynbo, J. P. U., Heintz, K. E., Neeleman, M., et al. 2018, MNRAS, 479, 2126
* Genzel et al. (2010) Genzel, R., Tacconi, L. J., Gracia-Carpio, J., et al. 2010, MNRAS, 407, 2091
* González-López et al. (2019) González-López, J., Decarli, R., Pavesi, R., et al. 2019, ApJ, 882, 139
* Greve et al. (2005) Greve, T. R., Bertoldi, F., Smail, I., et al. 2005, MNRAS, 359, 1165
* Griffin et al. (2010) Griffin, M. J., Abergel, A., Abreu, A., et al. 2010, A&A, 518, L3
* Groves et al. (2004) Groves, B. A., Dopita, M. A., & Sutherland, R. S. 2004, ApJS, 153, 9
* Haardt & Madau (2012) Haardt, F., & Madau, P. 2012, ApJ, 746, 125
* Hainline et al. (2011) Hainline, L. J., Blain, A. W., Smail, I., et al. 2011, ApJ, 740, 96
* Harris et al. (2010) Harris, A. I., Baker, A. J., Zonak, S. G., et al. 2010, ApJ, 723, 1139
* Hennawi et al. (2015) Hennawi, J. F., Prochaska, J. X., Cantalupo, S., & Arrigoni-Battaia, F. 2015, Science, 348, 779
* Hennawi et al. (2006) Hennawi, J. F., Prochaska, J. X., Burles, S., et al. 2006, ApJ, 651, 61
* Hickox et al. (2012) Hickox, R. C., Wardlow, J. L., Smail, I., et al. 2012, MNRAS, 421, 284
* Hodge et al. (2012) Hodge, J. A., Carilli, C. L., Walter, F., et al. 2012, ApJ, 760, 11
* Ivison et al. (2011) Ivison, R. J., Papadopoulos, P. P., Smail, I., et al. 2011, MNRAS, 412, 1913
* James et al. (2002) James, A., Dunne, L., Eales, S., & Edmunds, M. G. 2002, MNRAS, 335, 753
* Jorgenson & Wolfe (2014) Jorgenson, R. A., & Wolfe, A. M. 2014, ApJ, 785, 16
* Juneau et al. (2009) Juneau, S., Narayanan, D. T., Moustakas, J., et al. 2009, ApJ, 707, 1217
* Kanekar et al. (2020) Kanekar, N., Prochaska, J. X., Neeleman, M., et al. 2020, ApJ, 901, L5
* Kennicutt (1998) Kennicutt, Robert C., J. 1998, ApJ, 498, 541
* Kennicutt et al. (2003) Kennicutt, Robert C., J., Bresolin, F., & Garnett, D. R. 2003, ApJ, 591, 801
* Keres et al. (2005) Keres, D., Katz, N., Weinberg, D. H., & Davé, R. 2005, MNRAS, 363, 2
* Kereš et al. (2009) Kereš, D., Katz, N., Fardal, M., Davé, R., & Weinberg, D. H. 2009, MNRAS, 395, 160
* Klypin et al. (2011) Klypin, A. A., Trujillo-Gomez, S., & Primack, J. 2011, ApJ, 740, 102
* Kneib & Natarajan (2011) Kneib, J.-P., & Natarajan, P. 2011, A&A Rev., 19, 47
* Krogager et al. (2017) Krogager, J.-K., Møller, P., Fynbo, J. P. U., & Noterdaeme, P. 2017, MNRAS, 469, 2959
* Kuijken et al. (2019) Kuijken, K., Heymans, C., Dvornik, A., et al. 2019, A&A, 625, A2
* Lau et al. (2016) Lau, M. W., Prochaska, J. X., & Hennawi, J. F. 2016, ApJS, 226, 25
* Lehner et al. (2016) Lehner, N., O’Meara, J. M., Howk, J. C., Prochaska, J. X., & Fumagalli, M. 2016, ApJ, 833, 283
* Li et al. (2019) Li, Q., Cai, Z., Prochaska, J. X., et al. 2019, ApJ, 875, 130
* Lovell et al. (2021) Lovell, C. C., Geach, J. E., Davé, R., Narayanan, D., & Li, Q. 2021, MNRAS
* Magnelli et al. (2012a) Magnelli, B., Saintonge, A., Lutz, D., et al. 2012a, A&A, 548, 22
* Magnelli et al. (2012b) Magnelli, B., Lutz, D., Santini, P., et al. 2012b, A&A, 539, 155
* Martin et al. (2015) Martin, D. C., Matuszewski, M., Morrissey, P., et al. 2015, Nature, 524, 192
* Martin et al. (2019) Martin, D. C., O’Sullivan, D., Matuszewski, M., et al. 2019, Nature Astronomy, 3, 822
* Meiksin (2009) Meiksin, A. A. 2009, Reviews of Modern Physics, 81, 1405
* Michałowski et al. (2012) Michałowski, M. J., Dunlop, J. S., Cirasuolo, M., et al. 2012, A&A, 541, 85
* Møller & Christensen (2020) Møller, P., & Christensen, L. 2020, MNRAS, 492, 4805
* Møller et al. (2013) Møller, P., Fynbo, J. P. U., Ledoux, C., & Nilsson, K. K. 2013, MNRAS, 430, 2680
* Møller & Fynbo (2001) Møller, P., & Fynbo, J. U. 2001, A&A, 372, L57
* Morton (2003) Morton, D. C. 2003, ApJS, 149, 205
* Narayanan et al. (2015) Narayanan, D., Turk, M., Feldmann, R., et al. 2015, Nature, 525, 496
* Navarro et al. (1996) Navarro, J. F., Frenk, C. S., & White, S. D. M. 1996, ApJ, 462, 563
* Neeleman et al. (2017) Neeleman, M., Kanekar, N., Prochaska, J. X., et al. 2017, Science, 355, 1285
* Neeleman et al. (2019) Neeleman, M., Kanekar, N., Prochaska, J. X., Rafelski, M. A., & Carilli, C. L. 2019, ApJ, 870, L19
* Neeleman et al. (2013) Neeleman, M., Wolfe, A. M., Prochaska, J. X., & Rafelski, M. 2013, ApJ, 769, 54
* Neri et al. (2020) Neri, R., Cox, P., Omont, A., et al. 2020, A&A, 635, A7
* Noterdaeme et al. (2012a) Noterdaeme, P., Petitjean, P., Carithers, W. C., et al. 2012a, A&A, 547, L1
* Noterdaeme et al. (2012b) Noterdaeme, P., Laursen, P., Petitjean, P., et al. 2012b, A&A, 540, A63
* Oke et al. (1995) Oke, J. B., Cohen, J. G., Carr, M., et al. 1995, PASP, 107, 375
* Oser et al. (2010) Oser, L., Ostriker, J. P., Naab, T., Johansson, P. H., & Burkert, A. 2010, ApJ, 725, 2312
* Papadopoulos et al. (2012) Papadopoulos, P. P., van der Werf, P., Xilouris, E., Isaak, K. G., & Gao, Y. 2012, ApJ, 751, 10
* Pâris et al. (2012) Pâris, I., Petitjean, P., Aubourg, É., et al. 2012, A&A, 548, A66
* Pascale et al. (2011) Pascale, E., Auld, R., Dariush, A., et al. 2011, MNRAS, 415, 911
* Pavesi et al. (2018) Pavesi, R., Sharon, C. E., Riechers, D. A., et al. 2018, ApJ, 864, 49
* Péroux & Howk (2020) Péroux, C., & Howk, J. C. 2020, ARA&A, 58, 363
* Planck Collaboration et al. (2020) Planck Collaboration, Aghanim, N., Akrami, Y., et al. 2020, A&A, 641, A6
* Press et al. (1992) Press, W. H., Teukolsky, S. A., Vetterling, W. T., & Flannery, B. P. 1992, Numerical Recipes in C: The Art of Scientific Computing, 2nd ed. (Cambridge: Cambridge University Press)
* Prochaska (1999) Prochaska, J. X. 1999, ApJ, 511, L71
* Prochaska et al. (2013a) Prochaska, J. X., Hennawi, J. F., & Simcoe, R. A. 2013a, ApJ, 762, L19
* Prochaska et al. (2014) Prochaska, J. X., Lau, M. W., & Hennawi, J. F. 2014, ApJ, 796, 140
* Prochaska et al. (2019) Prochaska, J. X., Neeleman, M., Kanekar, N., & Rafelski, M. 2019, ApJ, 886, L35
* Prochaska et al. (2001) Prochaska, J. X., Wolfe, A. M., Tytler, D., et al. 2001, ApJS, 137, 21
* Prochaska et al. (2013b) Prochaska, J. X., Hennawi, J. F., Lee, K.-G., et al. 2013b, ApJ, 776, 136
* Rafelski et al. (2012) Rafelski, M., Wolfe, A. M., Prochaska, J. X., Neeleman, M., & Mendez, A. J. 2012, ApJ, 755, 89
* Rauch (1998) Rauch, M. 1998, ARA&A, 36, 267
* Rengelink et al. (1997) Rengelink, R. B., Tang, Y., de Bruyn, A. G., et al. 1997, A&AS, 124, 259
* Riechers et al. (2011) Riechers, D. A., Hodge, J., Walter, F., Carilli, C. L., & Bertoldi, F. 2011, ApJ, 739, L31
* Rigby et al. (2011) Rigby, E. E., Maddox, S. J., Dunne, L., et al. 2011, MNRAS, 415, 2336
* Rubin et al. (2015) Rubin, K. H. R., Hennawi, J. F., Prochaska, J. X., et al. 2015, ApJ, 808, 38
* Rudie et al. (2013) Rudie, G. C., Steidel, C. C., Shapley, A. E., & Pettini, M. 2013, ApJ, 769, 146
* Rudie et al. (2012) Rudie, G. C., Steidel, C. C., Trainor, R. F., et al. 2012, ApJ, 750, 67
* Saintonge et al. (2013) Saintonge, A., Lutz, D., Genzel, R., et al. 2013, ApJ, 778, 2
* Savage & Sembach (1991) Savage, B. D., & Sembach, K. R. 1991, ApJ, 379, 245
* Savage & Sembach (1996) —. 1996, ApJ, 470, 893
* Serra et al. (2015) Serra, P., Westmeier, T., Giese, N., et al. 2015, MNRAS, 448, 1922
* Sharon et al. (2013) Sharon, C. E., Baker, A. J., Harris, A. I., & Thomson, A. P. 2013, ApJ, 765, 6
* Simcoe et al. (2004) Simcoe, R. A., Sargent, W. L. W., & Rauch, M. 2004, ApJ, 606, 92
* Simcoe et al. (2006) Simcoe, R. A., Sargent, W. L. W., Rauch, M., & Becker, G. 2006, ApJ, 637, 648
* Smail et al. (1997) Smail, I., Ivison, R. J., & Blain, A. W. 1997, ApJ, 490, L5
* Srianand et al. (2016) Srianand, R., Hussain, T., Noterdaeme, P., et al. 2016, MNRAS, 460, 634
* Steidel & Hamilton (1992) Steidel, C. C., & Hamilton, D. 1992, AJ, 104, 941
* Swinbank et al. (2004) Swinbank, A. M., Smail, I., Chapman, S. C., et al. 2004, ApJ, 617, 64
* Tacconi et al. (2008) Tacconi, L. J., Genzel, R., Smail, I., et al. 2008, ApJ, 680, 246
* Tacconi et al. (2013) Tacconi, L. J., Neri, R., Genzel, R., et al. 2013, ApJ, 768, 74
* Tacconi et al. (2018) Tacconi, L. J., Genzel, R., Saintonge, A., et al. 2018, ApJ, 853, 179
* Targett et al. (2013) Targett, T. A., Dunlop, J. S., Cirasuolo, M., et al. 2013, MNRAS, 432, 2012
* Ter Braak (2006) Ter Braak, C. J. F. 2006, Statistics and Computing, 16, 239
* Theuns (2021) Theuns, T. 2021, MNRAS, 500, 2741
* Tremonti et al. (2004) Tremonti, C. A., Heckman, T. M., Kauffmann, G., et al. 2004, ApJ, 613, 898
* Umehata et al. (2019) Umehata, H., Fumagalli, M., Smail, I., et al. 2019, Science, 366, 97
* Valiante et al. (2016) Valiante, E., Smith, M. W. L., Eales, S., et al. 2016, MNRAS, 462, 3146
* Vernet et al. (2011) Vernet, J., Dekker, H., D’Odorico, S., et al. 2011, A&A, 536, A105
* Walter et al. (2016) Walter, F., Decarli, R., Aravena, M., et al. 2016, ApJ, 833, 67
* Wardlow et al. (2011) Wardlow, J. L., Smail, I., Coppin, K. E. K., et al. 2011, MNRAS, 415, 1479
* Weidinger et al. (2004) Weidinger, M., Møller, P., & Fynbo, J. P. U. 2004, Nature, 430, 999
* Wolfe et al. (2005) Wolfe, A. M., Gawiser, E., & Prochaska, J. X. 2005, ARA&A, 43, 861
* Wolfe et al. (1986) Wolfe, A. M., Turnshek, D. A., Smith, H. E., & Cohen, R. D. 1986, ApJS, 61, 249
* Xue et al. (2018) Xue, R., Fu, H., Isbell, J., et al. 2018, ApJ, 864, L11
## Appendix A Lensing of the SMG
In the KiDS $gri$ pseudo-color image in Fig. 1, there is an extended optical
source just 0.8″ to the NW of the ALMA source. This offset cannot be an
astrometric error, because the ALMA CO (3$-$2) emission of QSO2 agrees with its
KiDS optical position within 0.1″. The optical source is in the KiDS-VISTA
9-band photometric catalog ($ugriZYJHK_{s}$; Kuijken et al., 2019) with a
designation of KiDSDR4 J091339.496$-$010656.17 and a photometric redshift of
$z_{p}=0.07_{-0.04}^{+0.05}$. We obtained an optical spectrum with the Low
Resolution Imaging Spectrometer (LRIS; Oke et al., 1995) on the Keck I
telescope on 2017 Mar 23. Strong emission lines, such as [O ii]$\lambda$3728,
[O iii]$\lambda\lambda$4960,5008, were detected at high significance in the 20
min exposure, placing its redshift at 0.055.
The source is detected in all of the nine photometric bands included in the
KiDS$+$VISTA photometry catalog, with $r=21.58\pm 0.02$ and $K_{s}=20.84\pm
0.18$. Our best-fit stellar population synthesis model of the photometry
reveals a stellar mass of $\sim 3\times 10^{7}$ $M_{\odot}$ and an SFR of
$\sim 0.006$ $M_{\odot}\,{\rm yr}^{-1}$. We have used the Bruzual & Charlot
(2003) models assuming exponentially declining star-forming histories and the
Chabrier (2003) initial mass function.
Could the SMG be gravitationally magnified by this foreground dwarf galaxy?
Using the halo-mass$-$stellar-mass relation from abundance matching (Fig. 6 of
Bullock & Boylan-Kolchin, 2017), we estimate a halo mass of $\sim 3\times
10^{10}$ $M_{\odot}$. The halo mass corresponds to a line-of-sight velocity
dispersion ($\sigma$) of just $\sim$29 km s-1, assuming a singular isothermal
sphere (SIS; $\rho(r)=\sigma^{2}/2\pi Gr^{2}$) and the fitting function of the
halo overdensity $\Delta_{c}(z)$ from Bryan & Norman (1998). The Einstein
radius of the SIS can be calculated as (e.g., Kneib & Natarajan, 2011):
$r_{E}=4\pi\frac{\sigma^{2}}{c^{2}}\frac{D_{ds}}{D_{s}}$ (A1)
where $c$ is the speed of light, $D_{ds}$ is the angular diameter distance
between the deflector and the background source, and $D_{s}$ is the angular
diameter distance between the observer and the background source. For our
system consisting of a foreground lens at $z_{d}=0.055$ with $\sigma=29$ km
s-1 and a background source at $z_{s}=2.674$, the Einstein radius is
$r_{E}\sim 0.024\arcsec$. Because $r_{E}$ is 33$\times$ smaller than the 0.8″
offset between the SMG and the foreground galaxy, we conclude that the
foreground galaxy is unlikely to have any measurable lensing effect on the
SMG.
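A sketch of the Eq. A1 estimate with astropy follows; Planck18 is an assumed cosmology here and may differ slightly from the one adopted in this paper.

```python
# SIS Einstein radius for the foreground dwarf galaxy (Eq. A1).
import numpy as np
from astropy.cosmology import Planck18
from astropy import units as u, constants as const

z_d, z_s, sigma = 0.055, 2.674, 29 * u.km / u.s
D_ds = Planck18.angular_diameter_distance_z1z2(z_d, z_s)
D_s = Planck18.angular_diameter_distance(z_s)

theta_E = 4 * np.pi * (sigma / const.c).decompose() ** 2 * (D_ds / D_s) * u.rad
print(theta_E.to(u.arcsec))   # ~0.02 arcsec
```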
## Appendix B Blind Line Detection in the ALMA Band-3 Data
To search for faint emission-line sources with unknown line widths in 3D data
cubes, a matched-filtering algorithm is commonly used: e.g., in SoFiA (Serra
et al., 2015), FindClump (Walter et al., 2016), LineSeeker (González-López et
al., 2019), and MF3D (Pavesi et al., 2018). We wrote an IDL program to
implement this simple algorithm. The convolution kernel we chose to filter the
data in the spectral dimension is a top-hat function with a variable half-
width between $n=1$ and $n=9$. For each channel, the data in the neighboring
$\pm n$ channels are stacked with equal weighting. Given the average channel
spacing of $\sim$25 km s-1, the convolved channel widths range between
$\sim$75 km s-1 and $\sim$475 km s-1. As shown in González-López et al.
(2019), the simple top-hat function is as effective as the more sophisticated
Gaussian kernels in detecting low S/N line emission.
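The spectral filtering step reduces to a boxcar convolution along the frequency axis; a minimal numpy sketch on a toy noise cube:

```python
# Top-hat matched filter: stack each channel with its +/-n neighbors at
# equal weight, for half-widths n = 1..9.
import numpy as np

def tophat_filter(cube, n):
    """cube: (nchan, ny, nx); returns the boxcar-averaged cube."""
    kernel = np.ones(2 * n + 1) / (2 * n + 1)
    return np.apply_along_axis(
        lambda s: np.convolve(s, kernel, mode="same"), 0, cube)

cube = np.random.normal(size=(100, 16, 16))    # toy noise-only cube
filtered = [tophat_filter(cube, n) for n in range(1, 10)]
```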
In each convolved channel map, we measure the rms noise level with a robust
sigma routine and detect unresolved sources near the highest S/N pixel. An
elliptical Gaussian fixed to the shape of the core of the dirty beam is fit to
the 8″$\times$8″ subregion centered on the pixel and subtracted from the
image. The parameters of the best-fit Gaussians are saved at each iteration to
form the raw source catalog. The iterative process continues until the image
contains no pixels above the S/N threshold of ${\rm S/N}_{\rm pix,th}=4.5$. It
is worth noting that this source-detection algorithm is similar to the minor
cycles of the clean deconvolution algorithm, but here we subtract only the
core of the dirty beam to save computing time. This simplified approach is
justified by the low S/N of the sources other than the SMG.
Given that a single source can be detected in multiple channels and with
multiple convolution kernels, we remove duplicated detections by iteratively
looping through the raw source list from the highest to the lowest S/N and
discard all detections within 2″ and 0.2 GHz from the highest S/N source
remaining in the list.
Because our search is restricted to point sources, the source S/N simply
scales with the ratio between the peak of the best-fit Gaussian ($S_{\rm
peak}$) and the rms noise of the convolved channel map:
${\rm S/N}=0.77\frac{S_{\rm peak}}{{\rm rms}},$ (B1)
where the scaling factor of 0.77 accounts for fitting errors (Eq. 9 of
Rengelink et al. 1997; see also Condon 1997).
To estimate the fidelity of the detected sources, we search for sources in
simulated noise-only interferometer data instead of using negative “sources”
in the actual data (e.g., González-López et al., 2019). First, we introduce
random thermal noise by replacing the calibrated MS’s visibilities with a
normally distributed random array of complex numbers generated with
numpy.random. This is equivalent to the CASA simulator function setnoise in
the “simplenoise” mode, but our approach is faster because it only writes the
DATA column once and does not add MODEL_DATA and CORRECTED_DATA columns to the
MS. Unlike the fixed random number seed (11111) adopted in the CASA simulator,
each noise realization uses a different seed in our code. We set the widths of
the normal distributions for the real and imaginary parts to the standard
deviations of the original visibilities measured with visstat ($\sigma\sim
250$ mJy visibility$^{-1}$ with a slight dependence on the spectral window). Then,
we use tclean to image the noise-only visibilities into spectral data cubes
with natural weighting. The same tclean parameters are adopted except that we
turn off deconvolution by setting niter = 0, because the simulated data
contain no sources. For each spectral window, we generate a set of 10
simulated data cubes to provide enough source counts for the source fidelity
calculation.
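A minimal sketch of the noise-injection step, written here with python-casacore (an assumption for illustration; any MS I/O layer would do), overwriting the DATA column with complex Gaussian noise of the measured per-part width:

```python
import numpy as np
from casacore.tables import table

SIGMA = 0.250                      # Jy per visibility, per real/imag part
rng = np.random.default_rng()      # fresh seed for each noise realization

with table("noise_only.ms", readonly=False) as t:
    shape = t.getcol("DATA").shape
    noise = rng.normal(0.0, SIGMA, shape) + 1j * rng.normal(0.0, SIGMA, shape)
    t.putcol("DATA", noise.astype(np.complex64))  # only DATA is rewritten
# The noise-only MS is then imaged with tclean (natural weighting, niter=0).
```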
We run the same line search code on the simulated data with the same detection
parameters. We compare the normalized cumulative distribution functions (CDF)
of the detected sources in the actual data and in the simulated data to
estimate the source fidelity. We use the following equation, similar to
LineSeeker used in the ASPECS-LP survey (González-López et al., 2019):
${\rm Fidelity}=1-\frac{F_{\rm sim}(\geq{\rm S/N}~{}|~{}n)}{F_{\rm
data}(\geq{\rm S/N}~{}|~{}n)},$ (B2)
where $F(\geq{\rm S/N}~{}|~{}n)$ is the fraction of sources detected at or
above the source S/N, with its detection kernel width $n$, and at any
frequency channel in the spectral window. The subscript indicates whether it
is from the actual data or the simulated noise-only data. $F_{\rm data}$ is
measured directly from the CDF, while $F_{\rm sim}$ is obtained from the best-
fit error function of the CDF to mitigate noise at high S/N due to low source
counts. The fidelity is thus the probability that the detected source is not
due to random noise. A source has a high fidelity near unity when $F_{\rm
sim}\ll F_{\rm data}$, and a low fidelity near zero when $F_{\rm sim}\approx
F_{\rm data}$.
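A minimal sketch of Eq. (B2), given arrays of detection S/N values from the real and simulated searches for one kernel width $n$ (the error-function model and S/N grid here are illustrative):

```python
import numpy as np
from scipy.special import erf
from scipy.optimize import curve_fit

def frac_above(snr_values, grid):
    """F(>= S/N): fraction of detections at or above each grid point."""
    snr = np.sort(np.asarray(snr_values))
    return 1.0 - np.searchsorted(snr, grid, side="left") / snr.size

def fidelity(snr_data, snr_sim, grid):
    F_data = frac_above(snr_data, grid)
    # Smooth F_sim with a best-fit error function to tame high-S/N shot noise
    model = lambda s, s0, w: 0.5 * (1.0 - erf((s - s0) / w))
    p, _ = curve_fit(model, grid, frac_above(snr_sim, grid), p0=[5.0, 0.5])
    F_sim = model(grid, *p)
    return 1.0 - F_sim / np.maximum(F_data, 1e-12)  # Eq. (B2)
```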
We identified a total of six emission-line sources with ${\rm fidelity}>0.9$
within the primary beam from the non-interpolated data cubes of all four
spectral windows. We extracted the spectrum for each detection from the
linearly interpolated and primary-beam-corrected data cubes with an elliptical
aperture matched to the sizes of the restoring beams; i.e., we assumed
that the sources are unresolved. We measured the central frequency
($\nu_{0}$), FWHM, and line flux with a single-Gaussian model and list the
results in Table 8. Fig. 10 illustrates the distribution of the detected
sources in the field, and Fig. 11 shows the zoomed-in version of the
integrated intensity maps and their ALMA spectra.
Figure 10.— ALMA emission-line detections in the GAMA J0913$-$0107 field. The
KiDS $r$-band image is overlaid with contours from the ALMA band-3 emission-line
sources. Each emission-line source is labeled with its ID number and
redshift (when interpreted as CO (3$-$2)). The long-dashed circle shows the
ALMA primary beam at 94 GHz.
Figure 11.— For each line emitter, we show a 10″$\times$10″ KiDS cutout image overlaid with its ALMA line-emission map as contours, along with the source-integrated spectrum (flux density in mJy vs. observed frequency in GHz). The ALMA maps are created by integrating line emission over narrow spectral windows, highlighted in red in the corresponding spectra. The red filled ellipses illustrate the synthesized beam size.
Table 8. Line Emitters Found in the ALMA Band-3 Data (Sorted by S/N)
ID | R.A. (J2000) | Decl. (J2000) | $n$ | S/N | Fidelity | $\nu_{\rm obs}$ | FWHM | Line Flux | $z_{\rm CO32}$ | $L^{\prime}_{\rm CO32}$ | PB
---|---|---|---|---|---|---|---|---|---|---|---
| (hms) | (dms) | | | | (GHz) | (km s$^{-1}$) | (Jy km s$^{-1}$) | | (K km s$^{-1}$ pc$^{2}$) |
1 | 09:13:39.55 | $-$01:06:56.5 | 5 | 66.6 | 1.00 | 94.120 | $271\pm 5$ | $1.3435\pm 0.0264$ | $2.6740\pm 0.0001$ | $10.69\pm 0.01$ | 0.92
2 | 09:13:38.33 | $-$01:07:08.4 | 8 | 7.0 | 1.00 | 92.241 | $388\pm 69$ | $0.1482\pm 0.0282$ | $2.7488\pm 0.0004$ | $9.75\pm 0.08$ | 0.95
3 | 09:13:38.28 | $-$01:06:43.8 | 9 | 6.4 | 1.00 | 94.102 | $358\pm 67$ | $0.1724\pm 0.0318$ | $2.6747\pm 0.0003$ | $9.80\pm 0.08$ | 0.73
4 | 09:13:39.42 | $-$01:06:43.0 | 1 | 6.1 | 1.00 | 92.479 | $51\pm 10$ | $0.0590\pm 0.0134$ | $2.7392\pm 0.0001$ | $9.35\pm 0.10$ | 0.74
5 | 09:13:40.47 | $-$01:07:13.8 | 3 | 5.5 | 1.00 | 103.371 | $166\pm 32$ | $0.1509\pm 0.0293$ | $2.3452\pm 0.0002$ | $9.64\pm 0.08$ | 0.58
6 | 09:13:40.22 | $-$01:06:59.1 | 5 | 5.2 | 0.97 | 103.767 | $184\pm 53$ | $0.0488\pm 0.0260$ | $2.3324\pm 0.0002$ | $9.15\pm 0.23$ | 0.72
## Appendix C Other Absorbers in the QSO Spectra
Figure 12.— Continuum-normalized QSO spectra from X-shooter. (a,b): Labeled
are the main H i Ly$\alpha$ absorbers between $2.20<z_{\rm abs}<2.94$. The
redshift ranges covered by the ALMA observations for CO (3$-$2) are
highlighted. (c,d): Labeled in red are the main C iv
$\lambda\lambda$1548.2,1550.8 absorbers between $2.20<z_{\rm abs}<2.94$ and in
blue the main Mg ii $\lambda\lambda$2796.4,2803.5 absorbers between
$0.77<z_{\rm abs}<1.18$.
Using the low-resolution ($R\sim 2000$) BOSS spectrum of QSO1, Noterdaeme et
al. (2012a) identified two DLA candidates at $z_{\rm abs}=2.680$ and 2.751.
Subsequently, a number of Mg ii and C iv absorbers were identified toward both
QSOs using the BOSS spectra: $z_{\rm abs}=0.9388,2.2530,2.7512$ toward QSO1
and $z_{\rm abs}=0.7876,1.0126,1.5010,2.7248,2.7418$ toward QSO2 (Chen et al.,
2015, 2016). The X-shooter spectra confirm all of the previously known
absorbers and reveal several additional absorbers toward both QSOs: $z_{\rm
abs}=1.0855,2.283,2.9147$ toward QSO1 and $z_{\rm
abs}=0.886,1.0865,2.2525,2.2845,2.3065,2.3445,2.3595$ toward QSO2. Fig. 12
shows portions of the X-shooter spectra to illustrate all of the major
absorbers we have identified (8 toward QSO1 and 12 toward QSO2), omitting only
the Mg ii absorber at $z=1.501$ toward QSO2.
The $z_{\rm abs}\approx 2.75$ DLA toward QSO1 is apparently associated with QSO2
at $z=2.7488$. The DLA has been previously analyzed as part of the Quasar
Probing Quasar (QPQ) project using a lower-resolution GMOS spectrum, from
which $\log N_{\rm HI}=21.3\pm 0.15$ was measured (Prochaska et al., 2013b),
along with rest-frame equivalent widths of $2.60\pm 0.05$ Å for C ii$\lambda$1334.5
and $0.51\pm 0.05$ Å for C iv$\lambda$1548.2 (Prochaska et al., 2014). We
obtained similar results using the X-shooter spectrum (Fig. 13). With Voigt
profile fitting, we find that the H i Lyman series is adequately fit with two
components separated by 290 km s$^{-1}$ ($z_{\rm abs}=2.7502,2.7538$), each with
$\log N_{\rm HI}=21.0$ and $b=40$ km s$^{-1}$. We thus obtain a total column
density of $\log N_{\rm HI}=21.3$ with an estimated systematic uncertainty of
$\sim$0.1 dex. With the AODM method and ionization corrections (ICs) from a
Cloudy model with $\log N_{\rm HI}=21.3$, $\log U=-2.5$, and the HM12 radiation
background, we measure an ionization-corrected carbon metallicity of [C/H] =
$-1.2\pm 0.1$ from C iv$\lambda$1550.8 and an iron metallicity of [Fe/H] = $-1.6\pm 0.2$ from Fe
ii$\lambda$1608.5 (see Table 9). Given the impact parameter of $R_{\bot}=85$
kpc, this DLA fits nicely with the CGM profiles of $z=2-3$ QSOs in Fig. 8 (Lau
et al., 2016).
Table 9. Metal Line Measurements of the $z_{\rm abs}\approx 2.75$ DLA toward QSO1
Ion | $\lambda_{\rm rest}$ | EW | $\log N$ | [X/H]′ | [X/H]
---|---|---|---|---|---
| (Å) | (Å) | (cm$^{-2}$) | |
C ii | 1334.5323 | $2.13\pm 0.01$ | $>15.48$ | $>-2.25$ | $>-2.26$
C iv | 1548.2040 | $0.63\pm 0.02$ | $14.40\pm 0.03$ | $-3.33\pm 0.10$ | $-1.31\pm 0.10$
$\cdots$ | 1550.7776 | $0.45\pm 0.02$ | $14.50\pm 0.04$ | $-3.23\pm 0.11$ | $-1.21\pm 0.11$
O i | 1302.1685 | $1.94\pm 0.01$ | $>15.86$ | $>-2.13$ | $>-2.14$
Mg ii | 2796.3543 | $4.37\pm 0.04$ | $>14.44$ | $>-2.46$ | $>-2.26$
$\cdots$ | 2803.5315 | $3.40\pm 0.08$ | $>14.59$ | $>-2.31$ | $>-2.11$
Al ii | 1670.7886 | $1.88\pm 0.01$ | $>13.99$ | $>-1.76$ | $>-1.52$
Al iii | 1854.7183 | $0.43\pm 0.01$ | $13.45\pm 0.05$ | $-2.30\pm 0.11$ | $-1.59\pm 0.11$
Si ii | 1260.4221 | $2.07\pm 0.00$ | $>14.54$ | $>-2.27$ | $>-2.31$
$\cdots$ | 1304.3702 | $1.69\pm 0.01$ | $>15.46$ | $>-1.35$ | $>-1.40$
$\cdots$ | 1526.7070 | $1.89\pm 0.02$ | $>15.23$ | $>-1.58$ | $>-1.63$
Si iv | 1393.7602 | $0.67\pm 0.01$ | $13.98\pm 0.02$ | $-2.83\pm 0.10$ | $-1.48\pm 0.10$
$\cdots$ | 1402.7729 | $0.24\pm 0.01$ | $13.79\pm 0.07$ | $-3.02\pm 0.12$ | $-1.68\pm 0.12$
Fe ii | 1608.4508 | $1.33\pm 0.01$ | $15.22\pm 0.01$ | $-1.58\pm 0.10$ | $-1.62\pm 0.10$
$\cdots$ | 2382.7642 | $3.01\pm 0.03$ | $>14.67$ | $>-2.13$ | $>-2.16$
$\cdots$ | 2600.1725 | $3.16\pm 0.03$ | $>14.73$ | $>-2.07$ | $>-2.10$
Figure 13.— The DLA at $z\approx 2.75$ toward QSO1. Left: H i Lyman series and
Voigt profile fit (blue). Right: selected metal transitions and the adopted
AODM velocity integration windows. All velocities are relative to the systemic
redshift defined by the CO (3$-$2) line of QSO2 at $z=2.7488$. The DLA is at
an impact parameter of $R_{\bot}=85$ kpc.
## Appendix D The AODM Method and Results
This Appendix gives a brief overview of the AODM method (Savage & Sembach,
1991; Prochaska et al., 2001) and provides tables of ionic column densities,
ionization corrections, and ionization-corrected metallicities for all of the
selected metal transitions (Tables 10, 11, and 12). Equations are all in SI
units. The corresponding expressions in cgs units can be obtained by setting
$\epsilon_{0}=1/4\pi$.
The velocity-dependent scattering cross section of resonance line photons is
(Meiksin, 2009):
$\sigma(u)=\left(\frac{\pi
e^{2}}{m_{e}c}\right)\left[\frac{1}{4\pi\epsilon_{0}}\right]f\lambda_{0}\phi_{u},$
(D1)
where the constants $e$, $m_{e}$, $\epsilon_{0}$, $c$, $f$, and $\lambda_{0}$
are respectively the electron charge, the electron mass, the vacuum
permittivity, the speed of light, the oscillator strength, and the rest-frame
wavelength of the transition, and the function $\phi_{u}$ is the probability
density function per unit velocity due to line broadening (i.e., a Voigt
profile). For QSO absorption lines, the optical depth [$\tau(u)$] at velocity
$u$ is the product of this velocity-dependent cross section and the column
density ($N_{a}$):
$\tau(u)=\frac{e^{2}}{4\epsilon_{0}m_{e}c}f\lambda_{0}N_{a}\phi_{u}\equiv\frac{e^{2}}{4\epsilon_{0}m_{e}c}f\lambda_{0}N_{a,u}$
(D2)
where we have defined the column density per unit velocity, $N_{a,u}\equiv
N_{a}\phi_{u}$, by shifting the velocity dependency from the cross section to
the column density.
The above relation provides a method to measure column densities from observed
line profiles, because the optical depth is the natural logarithm of the
ratio between the incident continuum intensity ($I_{0}$) and the observed
attenuated intensity ($I_{\rm obs}$):
$\tau(u)=\ln\frac{I_{0}(u)}{I_{\rm obs}(u)}.$ (D3)
The total column density can then be calculated by integrating the apparent
optical depth over the velocity integration window:
$N_{a}=\sum N_{a,u}\Delta
u=\sum\frac{4\epsilon_{0}m_{e}c}{e^{2}f\lambda_{0}}\tau(u)\Delta u$ (D4)
and the 1$\sigma$ statistical variance on the column density through standard
error propagation is:
$\sigma^{2}_{\rm
sta}(N_{a})=\sum\left(\frac{4\epsilon_{0}m_{e}c}{e^{2}f\lambda_{0}}\right)^{2}\sigma^{2}[\tau(u)]\Delta
u^{2}$ (D5)
where the statistical uncertainty of optical depth is estimated from the noise
spectrum:
$\sigma_{\rm sta}[\tau(u)]=\sigma[I_{\rm obs}(u)]/I_{\rm obs}(u)$ (D6)
Similar to the H i Voigt profile fitting (but to a lesser extent because of the
narrower velocity range), the ionic column density is also affected by the
systematic uncertainty in our empirical model of the QSO continuum. We again
adopt a $\pm$10% error in the QSO continuum ($I_{0}$), which directly leads to
$\sigma_{\rm sys}[\tau(u)]=0.1$ and the equation for the systematic error of
the ionic column density:
$\sigma^{2}_{\rm
sys}(N_{a})=\sum\left(\frac{4\epsilon_{0}m_{e}c}{e^{2}f\lambda_{0}}\right)^{2}0.1^{2}\Delta
u^{2}.$ (D7)
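A minimal numeric sketch of Eqs. (D3)-(D5), in cgs units where the prefactor reduces to $m_{e}c/(\pi e^{2}f\lambda_{0})$, for a continuum-normalized flux array on a velocity grid (the saturation clip and input conventions are illustrative):

```python
import numpy as np

ME, C, E = 9.109e-28, 2.998e10, 4.803e-10   # g, cm/s, esu

def aodm(vel_kms, norm_flux, norm_err, f_osc, lam0_ang):
    """Apparent-optical-depth column density (cm^-2) and its statistical error."""
    flux = np.clip(norm_flux, 1e-4, None)    # avoid log of <=0 in saturated pixels
    tau = np.log(1.0 / flux)                 # Eq. (D3)
    tau_err = norm_err / flux                # Eq. (D6)
    dv = np.gradient(vel_kms) * 1e5          # channel widths, km/s -> cm/s
    pref = ME * C / (np.pi * E**2 * f_osc * lam0_ang * 1e-8)
    N = pref * np.sum(tau * dv)                          # Eq. (D4)
    N_err = pref * np.sqrt(np.sum((tau_err * dv) ** 2))  # Eq. (D5)
    return N, N_err
```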
Table 10. AODM Ionic Column Densities
Ion | $\lambda_{\rm rest}$ | $\log N$
---|---|---
| (Å) | QSO1-A1 | QSO1-A2 | QSO1-B1 | QSO1-C1 | QSO1-C2 | QSO2-A2 | QSO2-B1 | QSO2-C1
C ii | 1334.5323 | $\lesssim 14.74$ | $13.94\pm 0.05$ | $<12.90$ | $>15.21$ | $14.18\pm 0.03$ | $14.01\pm 0.06$ | $<13.33$ | $<13.25$
C iv | 1548.2040 | $<12.88$ | $<12.88$ | $<12.91$ | $14.08\pm 0.04$ | $<12.91$ | $<13.20$ | $14.23\pm 0.03$ | $13.61\pm 0.08$
$\cdots$ | 1550.7776 | $<13.21$ | $<13.19$ | $<13.19$ | $14.15\pm 0.07$ | $\lesssim 14.83$ | $\lesssim 14.30$ | $14.30\pm 0.04$ | $13.72\pm 0.13$
O i | 1302.1685 | $14.57\pm 0.03$ | $14.48\pm 0.04$ | $<13.16$ | $>15.55$ | $14.36\pm 0.06$ | $<13.78$ | $<13.79$ | $<13.75$
Mg ii | 2796.3543 | $13.24\pm 0.07$ | $12.99\pm 0.08$ | $\cdots$ | $>14.09$ | $13.21\pm 0.05$ | $\cdots$ | $\cdots$ | $<12.60$
$\cdots$ | 2803.5315 | $13.49\pm 0.06$ | $13.15\pm 0.09$ | $\lesssim 14.11$ | $>14.32$ | $13.08\pm 0.09$ | $\cdots$ | $<12.93$ | $<12.76$
Al ii | 1670.7886 | $12.37\pm 0.11$ | $11.97\pm 0.28$ | $<11.70$ | $13.49\pm 0.02$ | $12.15\pm 0.20$ | $12.28\pm 0.19$ | $<12.12$ | $<12.06$
Al iii | 1854.7183 | $<12.17$ | $<12.14$ | $<12.15$ | $12.91\pm 0.16$ | $<12.12$ | $<12.50$ | $<12.53$ | $<12.49$
Si ii | 1304.3702 | $13.76\pm 0.12$ | $13.22\pm 0.40$ | $<12.93$ | $14.81\pm 0.02$ | $13.32\pm 0.34$ | $<13.52$ | $<13.55$ | $<13.48$
$\cdots$ | 1526.7070 | $13.78\pm 0.08$ | $<13.23$ | $<13.15$ | $14.76\pm 0.02$ | $13.55\pm 0.12$ | $<13.58$ | $<13.50$ | $<13.38$
Si iv | 1393.7602 | $<12.30$ | $<12.30$ | $<12.36$ | $\lesssim 13.70$ | $<12.56$ | $13.15\pm 0.11$ | $<12.76$ | $<12.77$
$\cdots$ | 1402.7729 | $<12.82$ | $<12.84$ | $<12.79$ | $13.54\pm 0.11$ | $<12.80$ | $<13.22$ | $<13.42$ | $<13.12$
Fe ii | 1608.4508 | $13.55\pm 0.26$ | $<13.39$ | $<13.44$ | $14.52\pm 0.05$ | $<13.36$ | $<13.64$ | $<13.74$ | $<13.71$
$\cdots$ | 2382.7642 | $13.34\pm 0.05$ | $12.90\pm 0.13$ | $<12.58$ | $\lesssim 14.47$ | $\lesssim 13.90$ | $<12.79$ | $<12.66$ | $<12.71$
$\cdots$ | 2600.1725 | $\lesssim 13.64$ | $13.13\pm 0.10$ | $<12.58$ | $>14.25$ | $12.91\pm 0.17$ | $<13.18$ | $<12.92$ | $<12.94$
Table 11. Ionization Corrections
Ion | ${\rm IC}\equiv\log f_{\rm HI}-\log f_{\rm X}$ |
---|---|---
| QSO1-A1 | QSO1-A2 | QSO1-B1 | QSO1-C1 | QSO1-C2 | QSO2-A2 | QSO2-B1 | QSO2-C1
C ii | $-0.18$ | $-0.10$ | $-1.74$ | $-0.08$ | $-0.58$ | $-0.90$ | $-1.74$ | $-1.74$
C iv | $2.78$ | $1.90$ | $-2.82$ | $2.00$ | $1.98$ | $0.40$ | $-2.82$ | $-2.82$
O i | $-0.01$ | $-0.01$ | $2.56$ | $-0.01$ | $-0.04$ | $0.02$ | $2.56$ | $2.56$
Mg ii | $0.14$ | $0.25$ | $0.04$ | $0.27$ | $-0.36$ | $-0.71$ | $0.04$ | $0.04$
Al ii | $-0.05$ | $0.06$ | $-1.08$ | $0.12$ | $-0.68$ | $-1.20$ | $-1.08$ | $-1.08$
Al iii | $0.61$ | $0.55$ | $-1.18$ | $0.59$ | $0.21$ | $-0.51$ | $-1.18$ | $-1.18$
Si ii | $-0.21$ | $-0.14$ | $-1.17$ | $-0.12$ | $-0.62$ | $-0.95$ | $-1.17$ | $-1.17$
Si iv | $1.61$ | $1.03$ | $-2.37$ | $1.13$ | $0.83$ | $-0.46$ | $-2.37$ | $-2.37$
Fe ii | $-0.12$ | $-0.08$ | $1.58$ | $-0.07$ | $-0.37$ | $-0.49$ | $1.58$ | $1.58$
Table 12. Ionization-Corrected Metallicities
Ion | $\lambda_{\rm rest}$ | ${\rm[X/H]}\equiv{\rm[X/H]}^{\prime}+{\rm IC}$
---|---|---
| (Å) | QSO1-A1 | QSO1-A2 | QSO1-B1 | QSO1-C1 | QSO1-C2 | QSO2-A2 | QSO2-B1 | QSO2-C1
C ii | 1334.5323 | $\lesssim-1.51$ | $-2.66\pm 0.08$ | $<-1.25$ | $>-1.53$ | $-1.62\pm 0.29$ | $-1.91\pm 0.34$ | $<-0.87$ | $<-0.92$
C iv | 1548.2040 | $<-0.41$ | $<-1.72$ | $<-2.32$ | $-0.59\pm 0.09$ | $<-0.33$ | $<-1.42$ | $-1.05\pm 0.19$ | $-1.63\pm 0.13$
$\cdots$ | 1550.7776 | $<-0.08$ | $<-1.41$ | $<-2.04$ | $-0.52\pm 0.10$ | $\lesssim 1.59$ | $\lesssim-0.32$ | $-0.98\pm 0.19$ | $-1.53\pm 0.16$
O i | 1302.1685 | $-1.77\pm 0.06$ | $-2.28\pm 0.08$ | $<3.05$ | $>-1.38$ | $-1.15\pm 0.30$ | $<-1.48$ | $<3.63$ | $<3.62$
Mg ii | 2796.3543 | $-1.87\pm 0.09$ | $-2.42\pm 0.10$ | $\cdots$ | $>-1.47$ | $-1.53\pm 0.29$ | $\cdots$ | $\cdots$ | $<1.04$
$\cdots$ | 2803.5315 | $-1.61\pm 0.08$ | $-2.26\pm 0.11$ | $\lesssim 2.57$ | $>-1.24$ | $-1.66\pm 0.30$ | $\cdots$ | $<1.34$ | $<1.20$
Al ii | 1670.7886 | $-1.77\pm 0.12$ | $-2.47\pm 0.29$ | $<0.19$ | $-1.08\pm 0.08$ | $-1.77\pm 0.35$ | $-1.96\pm 0.38$ | $<0.56$ | $<0.52$
Al iii | 1854.7183 | $<-1.31$ | $<-1.82$ | $<0.54$ | $-1.18\pm 0.17$ | $<-0.91$ | $<-1.05$ | $<0.87$ | $<0.87$
Si ii | 1304.3702 | $-1.60\pm 0.13$ | $-2.49\pm 0.40$ | $<0.27$ | $-1.05\pm 0.08$ | $-1.60\pm 0.45$ | $<-1.53$ | $<0.84$ | $<0.80$
$\cdots$ | 1526.7070 | $-1.57\pm 0.09$ | $<-2.48$ | $<0.48$ | $-1.10\pm 0.08$ | $-1.37\pm 0.31$ | $<-1.48$ | $<0.79$ | $<0.70$
Si iv | 1393.7602 | $<-1.23$ | $<-2.24$ | $<-1.50$ | $\lesssim-0.91$ | $<-0.91$ | $-1.41\pm 0.35$ | $<-1.14$ | $<-1.11$
$\cdots$ | 1402.7729 | $<-0.72$ | $<-1.70$ | $<-1.07$ | $-1.07\pm 0.13$ | $<-0.68$ | $<-1.34$ | $<-0.49$ | $<-0.76$
Fe ii | 1608.4508 | $-1.71\pm 0.27$ | $<-2.24$ | $<3.54$ | $-1.27\pm 0.09$ | $<-1.30$ | $<-0.94$ | $<3.79$ | $<3.79$
$\cdots$ | 2382.7642 | $-1.92\pm 0.07$ | $-2.73\pm 0.15$ | $<2.68$ | $\lesssim-1.33$ | $\lesssim-0.76$ | $<-1.79$ | $<2.71$ | $<2.80$
$\cdots$ | 2600.1725 | $\lesssim-1.62$ | $-2.51\pm 0.12$ | $<2.69$ | $>-1.55$ | $-1.76\pm 0.34$ | $<-1.39$ | $<2.98$ | $<3.02$
## Appendix E The Identification of the Emission Counterpart of Subsystem C
Figure 14.— ($a$) The four faint ($r>23$) optical sources within 7″ of QSO1.
($b$) An ALMA CO map constructed by combining the channels at 93.7535 and
93.6676 GHz, where CO emission from Object C is detected. The contours are
drawn at $-3$, $-2$ (dashed), 2, 3, and 4$\sigma$ (solid). In both images,
QSO1 sets the origin of the coordinates. In the four panels in $c$, we show
the ALMA spectra of the objects. The vertical dashed lines indicate the CO
(3$-$2) frequencies that correspond to the redshifts of the major absorption-
line clouds toward QSO1.
In the deep $r$-band image from KiDS (5$\sigma$ limit at $\sim$25 mag) shown
in Fig. 14$a$, we labeled four faint optical sources within 7″ of QSO1. Here
we explore whether any of these sources is connected to the DLA at $z_{\rm
abs}\approx 2.68$ by examining their photometric redshifts and their ALMA
spectra.
Three of the sources (A, B, C) are listed in the joint KiDS-VISTA 9-band
photometric catalog (Kuijken et al., 2019):
* •
KiDSDR4 J091338.791$-$010700.71 (A): $r=23.6\pm 0.1$, $H=22.3\pm 0.4$, $z_{\rm
p}=1.09^{+0.09}_{-0.15}$;
* •
KiDSDR4 J091338.711$-$010705.48 (B): $r=24.5\pm 0.2$, $H=22.9\pm 0.7$, $z_{\rm
p}=0.45^{+0.78}_{-0.12}$;
* •
KiDSDR4 J091338.527$-$010703.60 (C): $r=23.8\pm 0.1$, $H=22.0\pm 0.3$, $z_{\rm
p}=0.79^{+0.45}_{-0.06}$,
where the $r$- and $H$-band magnitudes are from the homogenized “Gaussian
Aperture and PSF (GAaP)” photometry and $z_{\rm p}$ are the 9-band photometric
redshift estimates from the Bayesian photometric redshift code BPZ (Benítez,
2000). The 68% confidence intervals of the photometric redshifts suggest that
all three sources are in the far foreground of the SMG SMM J0913 ($z_{\rm
SMG}=2.674$). The fourth object (D) is not in the catalog, likely because of
its proximity to the bright QSO1. We measured its position directly from the
image: R.A. = $09^{\rm h}13^{\rm m}39.05^{\rm s}$, Decl. =
$-01^{\circ}07\arcmin 06.5\arcsec$.
We then extracted spectra at their optical positions from the ALMA band-3
datacube. For objects A, B, and D, we adopted elliptical apertures matching
the synthesized beam size (1.7″$\times$1.3″ at PA $=49^{\circ}$). We show these
spectra in Fig. 14$c$. Even at the depth of our ALMA data (rms $=0.155$ mJy
beam$^{-1}$ channel$^{-1}$ in BB4), none of the sources show emission lines at a
detectable level.
For Object C, we initially used an aperture centered on the optical position
and detected hints of emission lines at the expected frequencies of
absorption-line clouds C1 and C2. We then made a CO map by combining the two
channels that show the most significant emission. The CO image in Fig. 14$b$
reveals a highly significant source $\sim$3″ to the SSW of the optical
position, which we have designated Comp b. Guided by the CO image, we
re-extracted a spectrum from a 3.4″$\times$1.8″ elliptical aperture that
matches the geometry of Comp b to optimize the line detection. This spectrum
is shown in the Object C panel of Fig. 14$c$. Line
emission is clearly detected at the expected frequencies of absorption-line
clouds C1 and C2 toward QSO1. Through this exercise, we have identified Comp b
as the most likely emission counterpart of absorption subsystem C toward both
QSOs.
# TwInflation

Kaustubh Deshpande,$^{a}$ Soubhik Kumar,$^{b,c}$ and Raman Sundrum$^{a}$

$^{a}$Maryland Center for Fundamental Physics, Department of Physics, University of Maryland, College Park, MD 20742, USA
$^{b}$Berkeley Center for Theoretical Physics, Department of Physics, University of California, Berkeley, CA 94720, USA
$^{c}$Theoretical Physics Group, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA
###### Abstract
The general structure of Hybrid Inflation remains a very well-motivated
mechanism for lower-scale cosmic inflation in the face of improving
constraints on the tensor-to-scalar ratio. However, as originally modeled, the
“waterfall” field in this mechanism gives rise to a hierarchy problem
($\eta-$problem) for the inflaton after demanding standard effective field
theory (EFT) control. We modify the hybrid mechanism and incorporate a
discrete “twin” symmetry, thereby yielding a viable, natural and EFT-
controlled model of non-supersymmetric low-scale inflation, “Twinflation”.
Analogously to Twin Higgs models, the discrete exchange-symmetry with a “twin”
sector reduces quadratic sensitivity in the inflationary potential to ultra-
violet physics, at the root of the hierarchy problem. The observed phase of
inflation takes place on a hilltop-like potential but without fine-tuning of
the initial inflaton position in field-space. We also show that all parameters
of the model can take natural values, below any associated EFT-cutoff mass
scales and field values, thus ensuring straightforward theoretical control. We
discuss the basic phenomenological considerations and constraints, as well as
possible future directions.
††preprint: UMD-PP-021-01
## 1 Introduction
Cosmic inflation (see Baumann:2009ds for a review) is an attractive and
robust framework for helping to explain the state of the early universe,
resolving issues such as the horizon problem, the flatness problem, and the
origin of primordial fluctuations. It can be implemented minimally by the slow
rolling of a single real scalar field, the inflaton $(\phi)$, along its nearly
flat potential ($V(\phi)$). But this requires the inflaton to be
significantly lighter than the Hubble scale, which gives rise to a hierarchy
problem known as the “$\eta-$problem” (see e.g. Baumann:2014nda ).
Furthermore, the observations so far Planck2018Inflation seem to rule out or
strongly constrain some of the simplest forms of $V(\phi)$, originating from
straightforward and natural microscopic models explaining the lightness of the
inflaton. They typically predict a large tensor-to-scalar ratio, $r\gtrsim
0.01$, and hence a high scale of inflation. But with the non-observation of
primordial tensor fluctuations to date, the data seems to hint towards lower-
scale inflation. The upcoming and near-future proposed experiments like BICEP
Array BICEP_Array_Hui:2018cvg , Simons Observatory
Simons_Observatory_Ade:2018sbj , CMB-S4 CMB_S4_Abazajian:2019eic , LiteBIRD
LiteBIRD_Hazumi:2019lys , and PICO PICO_Hanany:2019lle , will be able to
measure $r\gtrsim 10^{-3}$, corresponding to $H\gtrsim 5\times 10^{12}$ GeV.
It is therefore interesting to reconsider the structure of inflationary
dynamics, especially keeping the $\eta-$problem in mind, to see whether
observable $r$ is a robust prediction or whether extremely small $r$ can be
readily achieved.
Indeed, inflation may well take place at a much lower scale than above, i.e.
with $H\ll 10^{12}$ GeV, with unobservably small tensor fluctuations at these
near-future experiments, although realizing such low-scale inflation with a
simple single-field model is typically fine-tuned. This fine-tuning can come
in the form of the potential, the model parameters, and also the initial
conditions (see e.g. Goldwirth:1991rj ; Dine:2011ws ; Brandenberger:2016uzh ;
Linde:2017pwt ; Chowdhury:2019otk ). On the other hand, multi-field inflation,
i.e. with the field(s) orthogonal to inflaton playing an important dynamical
role in (ending) inflation, can help in the model building for low-scale
inflation. The classic example of this is Hybrid Inflation Linde:1993cn .
Here, the inflaton couples to a “waterfall” field ($\sigma$) in such a way
that $\sigma$ has a $\phi$-dependent mass term. During inflation, the much
heavier $\sigma$ is fixed at $0$, while $\phi$ performs the slow roll. As the
inflaton rolls past a critical field value, $\sigma$ becomes tachyonic and
rapidly rolls down to the global minimum of the potential. This fast rolling
along the “waterfall” on the inflationary trajectory ends inflation by
releasing the vacuum energy in the $\sigma$ field. Hybrid inflation exhibits a
separation of roles with the space-time expansion during inflation dominantly
driven by vacuum energy in $\sigma$, and the slow-roll “clock” provided by
$\phi$, which helps in realizing low-scale inflation as we will review in Sec.
2. This provides a mechanism generating an effective inflationary trajectory
with an abrupt drop in vacuum energy, which is difficult to realize from a
single-field perspective. However, as we will review in Sec. 2, hybrid
inflation needs fine-tuning in the model parameters to achieve radiative
stability and EFT control. We will address this issue in the present work and
build an EFT-controlled and natural low-scale inflationary model.
The primary challenge offered by the hybrid inflation paradigm towards
building a microscopic model is the following: $\phi$ needs to be a light real
scalar, but with sufficiently strong non-derivative coupling with the heavy
$\sigma$ field as required for the waterfall effect. Even if $\phi$ is modeled
as a pseudo-Nambu Goldstone boson (pNGB) of a global symmetry, its coupling
with $\sigma$ explicitly breaks the symmetry and induces quadratic sensitivity
in the effective inflationary potential to the ultra-violet (UV) physics.
Hence, we need some extra ingredient to achieve naturalness in hybrid
inflation. This issue is similar to the case of the light Higgs boson as
required in the Standard Model (SM) in the presence of its Yukawa and gauge
couplings. This, hence, motivates one to apply different particle physics
mechanisms explored in the literature to address the hierarchy problem of the
SM Higgs boson, to the case of hybrid inflation mentioned above. There are
various supersymmetric constructions of hybrid inflation, see e.g.
Copeland:1994vg ; Dvali:1994ms ; Binetruy:1996xj ; Halyo:1996pp ;
Kallosh:2003ux . Little Inflaton Kaplan:2003aj ; ArkaniHamed:2003mz is also
one such proposal addressing the issue of naturalness in hybrid inflation
based on the Little Higgs mechanism Little_Higgs . This makes use of
“collective symmetry breaking” to protect the inflaton potential from the
radiative contributions sourced by its coupling with the waterfall field. See
also Sundrum:2009ii ; Ross:2016hyb ; Kaloper:2020jso ; Carta:2020oci for more
proposals aimed at building such a radiatively stable, EFT-controlled and
viable model for hybrid inflation.
Twin Higgs Chacko:2005pe is another mechanism proposed to address the
(little) hierarchy problem of the SM Higgs boson. Here, the light scalar is
protected from radiative corrections sourced by its non-derivative couplings
by using a discrete symmetry, with a symmetry-based cancellation of 1-loop
quadratic divergences. Inspired by this, in the present work, we make use of a
$\mathbb{Z}_{2}$-symmetry structure to build a quite simple, natural and EFT-
controlled model of hybrid inflation, which we will call “Twinflation”. (We
thank N. Craig, S. Koren and T. Trott for giving us permission to re-use this
name, first used by them in the different setting of Ref. Craig:2016lyx .) As
we will see in Sec. 5, Twinflation can naturally give rise to a viable model
of inflation, with a red tilt in the primordial scalar fluctuations consistent
with the observations Planck2018Inflation , and with the inflationary Hubble
scale as low as $\sim 10^{7}$ GeV.
Low-scale inflation and the consequent reheating, apart from explaining the
smallness of yet-unobserved primordial tensor fluctuations, can also be
motivated from other particle physics considerations. For example, if QCD
axions or axion-like particles constitute (a significant fraction of) cold
dark matter (CDM) and if Peccei-Quinn (PQ) symmetry is broken during
inflation, low-scale inflation is favored to avoid CDM isocurvature
constraints (see e.g. Axion_Cosmology_Review_Marsh:2015xka ;
ALPs_isocurvature_Diez-Tejedor:2017ivd ; Planck2018Inflation ). Such
inflationary scenarios are also often invoked so that heavy, unwanted relics
e.g. monopoles, moduli, and gravitinos, which might be generated by the UV physics
(see e.g. GravitinoProblem_Ellis:1982yb ; GravitinoProblem_Ellis:1984eq ;
GravitinoProblem_Murayama_etal ; ModuliProblem_Randall:1994fr ) are diluted
away or not reheated. (We note that it is also possible to avoid reheating heavy
relics just by requiring a low reheating temperature while still having
high-scale inflation.) Furthermore, for sufficiently low inflationary scales,
we can have complementary terrestrial particle physics probes of inflation and
reheating, such as at current and future collider experiments, see e.g.
Bezrukov:2009yw ; Allahverdi:2010zp ; Boehm:2012rh ; Bramante:2016yju .
The paper is organized as follows. In Sec. 2, we review the basic mechanism of
hybrid inflation, also reviewing that it requires fine-tuning of parameters to
achieve radiative stability and EFT control, the criteria of which we also
explain. In Sec. 3, we present a simple variant of hybrid inflation with a
soft (dimensionful) waterfall coupling, and show that even this suffers from a
similar naturalness problem as before. In Sec. 4, we describe the effective
single-field inflation with the massive waterfall field integrated out. Here,
we also introduce a simplifying notation for the effective inflationary
potential that arises quite generically from hybrid inflation (irrespective of
its naturalness) using which we can estimate the inflationary observables and
constrain some model parameters. In Sec. 5, we construct the Twinflation
model, starting with a simple renormalizable version, analysing its radiative
stability and EFT consistency, and then presenting a more complete version
realizing the pNGB structure of the inflaton. In Sec. 6, we discuss a simple
way to address the cosmological domain wall problem related to the spontaneous
breaking of a (simplifying but non-essential) $\sigma$-parity at the end of
inflation, via a small explicit breaking. We conclude in Sec. 7.
## 2 Hybrid inflation and naturalness
The basic mechanism of hybrid inflation can be described by the following
simple variant Lyth:1996kt of the original potential in Linde:1993cn :
$V(\phi,\sigma)=V_{\text{inf}}+v(\phi)+\frac{1}{2}M_{\sigma}^{2}\sigma^{2}+\frac{1}{4}\lambda_{\sigma}\sigma^{4}-\frac{1}{2}g\phi^{2}\sigma^{2}+\dots.$
(1)
Here, $\phi$ is the slowly rolling inflaton and $\sigma$ is the “waterfall”
field whose dynamics ends inflation. Inflation starts at small $\phi$, with
$0<g\phi^{2}<M_{\sigma}^{2}$, such that the minimum in the $\sigma$ direction
is at $\sigma=0$. The ellipsis in Eq. (1) includes higher-dimensional
interaction terms ensuring global stability of the potential at large field
values. A crucial ingredient of the hybrid inflation mechanism is that during
inflation the $\sigma$-mass is bigger than both the $\phi$-mass and the Hubble
scale. This ensures that $\sigma$ remains localized at $\sigma=0$, and does
not play any role until the end of inflation. Therefore, during inflation,
i.e. for $g\phi^{2}<M_{\sigma}^{2}$, $V(\phi,\sigma)$ in Eq. (1) effectively
reduces to
$\displaystyle V_{\rm eff}(\phi)\approx V_{\text{inf}}+v(\phi).$ (2)
For $|v(\phi)|\ll V_{\text{inf}}$, this implies that the detailed dynamics of
the inflaton is governed by $v(\phi)$, while the vacuum energy
$V_{\text{inf}}$ dominantly drives the spacetime expansion. We will see that
the relaxation of $V_{\rm inf}$ to zero, as needed at the end of inflation,
can be triggered by $\sigma$ dynamics, rather than purely the single-field
rolling of $\phi$. The crucial separation of roles between $v$ and $V_{\rm
inf}$ is one of the primary reasons why the waterfall mechanism allows for
consistent low-scale models of inflation.
As inflation progresses, $\phi$ slowly rolls down its potential $v(\phi)$,
i.e. towards larger $\phi$. As it crosses a critical value
$\phi_{*}=\frac{M_{\sigma}}{\sqrt{g}}$ (assumed to be smaller than the minimum
of $v(\phi)$), the effective mass-squared for $\sigma$ switches sign.
Consequently, the now-tachyonic $\sigma$ rapidly rolls down to its new
minimum. This _fast_ rolling of the waterfall field violates the _slow-_ roll
conditions and ends inflation by releasing the inflationary vacuum energy,
$V_{\text{inf}}$. The two fields finally settle into the global minimum which
can be characterized by some $\phi_{\rm min}$ with $\sigma_{\rm
min}=\sqrt{\frac{g\phi_{\rm min}^{2}-M_{\sigma}^{2}}{\lambda_{\sigma}}}$.
Demanding a negligible vacuum energy in the post-inflationary era fixes
$\displaystyle
V_{\text{inf}}=3H^{2}M_{\text{pl}}^{2}\approx\frac{\left(g\phi_{\rm
min}^{2}-M_{\sigma}^{2}\right)^{2}}{4\lambda_{\sigma}}=\frac{\left(1-\phi_{\rm
min}^{2}/\phi_{*}^{2}\right)^{2}}{4}\frac{M_{\sigma}^{4}}{\lambda_{\sigma}}\sim\mathcal{O}(1)\frac{M_{\sigma}^{4}}{\lambda_{\sigma}}.$
(3)
In the last step above, we have considered that the ellipsis in Eq. (1) fixes
the global minimum in $\phi$ only $\mathcal{O}(1)$ away from $\phi_{*}$, i.e.
$\phi_{*}\sim\mathcal{O}(\phi_{\rm min})$. This is also so that there is no
tuning required in the initial inflaton field location (see also Sec. 4). As
we will see in Sec. 5.4, all these aspects can be easily realized with $\phi$
being a pNGB of a global symmetry and consequently its couplings taking
trigonometric forms.
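To make the trigger explicit, the following minimal numeric sketch (arbitrary units, made-up parameter values) evaluates the effective $\sigma$ mass-squared of Eq. (1) on either side of $\phi_{*}=M_{\sigma}/\sqrt{g}$:

```python
import numpy as np

M_sigma2, g, lam_sigma = 1.0, 0.25, 0.1   # illustrative values
phi_star = np.sqrt(M_sigma2 / g)          # critical value, here 2.0

for phi in [0.5, 1.5, phi_star, 2.5]:
    m2_eff = M_sigma2 - g * phi**2                       # sigma mass-squared
    sigma_min = np.sqrt(max(g * phi**2 - M_sigma2, 0.0) / lam_sigma)
    print(f"phi={phi:4.2f}  m2_eff={m2_eff:+.3f}  sigma_min={sigma_min:.3f}")
# m2_eff > 0 pins sigma at 0 for phi < phi_*; beyond phi_* it is tachyonic
# and sigma rolls to sigma_min, releasing V_inf as in Eq. (3).
```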
In the original hybrid inflation model Linde:1993cn ,
$v(\phi)=+\frac{1}{2}m_{\phi}^{2}\phi^{2}$ is used, along with an opposite
choice of signs in the potential in Eq. (1) for the $M_{\sigma}^{2}$ and $g$
terms, allowing inflation to start at large $\phi$. This convex form of
$v(\phi)$ in hybrid inflation, however, leads to a blue tilt in the power
spectrum of the primordial scalar perturbations (after respecting the
constraint on the tensor-to-scalar ratio), which is strongly disfavored by the
Planck data Planck2018Inflation . In order to get the observed red-tilted spectrum, we
will consider a hilltop-like $v(\phi)$ Lyth:1996kt with inflation happening
somewhat near its maximum. In Sec. 4, we will see that no tuning is required
in the initial inflaton field value to achieve this. A simple example of such
a potential is
$v(\phi)=-\frac{1}{2}m_{\phi}^{2}\phi^{2}+\frac{\lambda_{\phi}}{4}\phi^{4}+\dots,$
(4)
which has a hilltop at $\phi=0$. The ellipsis above refers to sub-dominant
higher-dimensional terms in $\phi$.
### 2.1 Naturalness considerations
In high-scale models of inflation, the inflaton field typically traverses
super-Planckian field distances LythBound , requiring special UV structures to
ensure the consistency of the inflationary effective field theory, e.g. as in
Biaxion_KNP . Here, for our lower-scale inflation, we will aim to have a more
straightforward EFT consistency. In particular, we will be aiming to construct
a low-scale model of hybrid inflation where
* •
all the parameters take natural (or bigger) values,
* •
all the relevant mass scales and field values are smaller than the respective
EFT cutoff(s),
* •
the EFT cutoff(s) is (are) sub-Planckian.
In the following, we will examine the naturalness of hybrid inflation, in
light of the above requirements, first for the original model in Eq. (1) (with
a hilltop structure of $v(\phi)$) and then in Sec. 3 for our simple
modification with a soft waterfall coupling.
The non-derivative coupling with the waterfall field in Eq. (1) badly breaks
shift symmetry of the inflaton and radiatively generates quadratic sensitivity
in $m_{\phi}^{2}$ to the UV cutoff scale $\Lambda$ (more precisely, $\Lambda$
should be thought of as a placeholder for the mass of some heavy field):
$\left(\delta
m_{\phi}^{2}\right)_{\text{1-loop}}\sim\frac{g\Lambda^{2}}{16\pi^{2}}.$ (5)
In order to satisfy naturalness in $m_{\phi}^{2}$, we require
$\left(\delta
m_{\phi}^{2}\right)_{\text{1-loop}}\lesssim\left(m_{\phi}^{2}\right)_{\rm
tree}~{}~{}\textrm{i.e.}~{}~{}\Lambda^{2}\lesssim\left(16\pi^{2}\eta\right)\frac{H^{2}}{g},$
(6)
implying that the UV cutoff $\Lambda$ cannot be arbitrarily large. Here
$\eta\equiv
M_{\text{pl}}^{2}\frac{\partial^{2}_{\phi}V(\phi,\sigma)}{V(\phi,\sigma)}\ll
1$ is the slow-roll parameter during inflation, with $(m_{\phi}^{2})_{\rm
tree}\sim\eta H^{2}$. Furthermore, the requirement that $\sigma$ is not
dynamical during inflation, i.e. it being frozen at $\sigma=0$, implies its
effective mass should be bigger than the Hubble scale,
$M_{\sigma,\rm eff}^{2}\equiv
M_{\sigma}^{2}-g\phi_{0}^{2}\sim\mathcal{O}(1)\cdot g\phi_{0}^{2}\gtrsim
H^{2},$ (7)
where $\phi_{0}$ denotes a typical inflaton field value during inflation and
$M_{\sigma,\rm eff}^{2}\sim M_{\sigma}^{2}\sim\mathcal{O}(1)\cdot
g\phi_{0}^{2}$. To satisfy conditions in Eq. (6) and (7), we need
$\phi_{0}^{2}\gtrsim\frac{\Lambda^{2}}{16\pi^{2}\eta}.$ (8)
Since the observed tilt of the primordial perturbations gives $\eta\sim
10^{-2}$, this demands inflaton field displacement bigger than the UV scale,
i.e.
$\phi_{0}\gtrsim\Lambda.$ (9)
However, this is only marginally consistent with our requirements above, and
we cannot take $\phi_{0}\ll\Lambda$ as desired.
Furthermore, even marginally satisfying validity of the EFT, i.e.
$\phi_{0}\sim\Lambda$ in Eq. (9), we need to satisfy
$M_{\sigma,\textrm{eff}}^{2}\sim H^{2}$ in Eq. (7). However, using Eq. (3),
this then requires the post-inflationary $\sigma$-VEV to be $\sim
M_{\text{pl}}$:
$\langle\sigma\rangle_{\rm post-
inf.}^{2}\sim\frac{M_{\sigma}^{2}}{\lambda_{\sigma}}\sim
M_{\text{pl}}^{2}\frac{H^{2}}{M^{2}_{\sigma}}\sim M_{\text{pl}}^{2},$ (10)
which is against our EFT requirements of sub-Planckian field values mentioned
earlier. In detail, $\langle\sigma^{2}\rangle_{\rm post-inf.}=\frac{g\phi_{\rm
min}^{2}-M_{\sigma}^{2}}{\lambda_{\sigma}}=\frac{M_{\sigma}^{2}}{\lambda_{\sigma}}\left(\frac{\phi_{\rm
min}^{2}}{\phi_{*}^{2}}-1\right)$, and hence $\langle\sigma^{2}\rangle_{\rm
post-inf.}<\frac{M_{\sigma}^{2}}{\lambda_{\sigma}}$ is possible implying a
slightly sub-Planckian $\sigma$-VEV. However, this is only marginal, and we
would have a greater confidence in the EFT-control if the $\sigma$-VEV is
_parametrically_ lower than $M_{\text{pl}}$.
Thus, the only way to construct a consistent hybrid inflation model with Eq.
(1), which is under EFT control, is with fine-tuning in $m_{\phi}^{2}$, i.e.
with fine cancellations between $m^{2}_{\phi,\rm tree}$ and $\delta
m^{2}_{\phi,\rm 1-loop}$. Only at the cost of such a tuning, can we satisfy
$\phi_{0}<\Lambda$.
### 2.2 Allowing for different cutoff scales
Since the quadratic sensitivity of $m_{\phi}^{2}$ at 1-loop comes due to the
$\sigma$ field running in the loop, another solution one may try is allowing
for different cutoff scales for $\phi$ and $\sigma$, i.e. $\Lambda_{\phi}$ and
$\Lambda_{\sigma}$, respectively. This can come about if $\phi$ and $\sigma$
belong to two different sectors with different physical scales involved in
their UV completions. A familiar but dramatic example is given by the chiral
Lagrangian description of composite pions of QCD, cut off by the GeV hadronic
scale, while light leptons and gauge fields interacting with these pions have
a much higher cutoff.
With a choice
$\Lambda_{\phi}\gtrsim\phi_{0}\gtrsim\Lambda_{\sigma},$ (11)
one may evade Eq. (9) while still ensuring EFT control in the $\phi-$sector.
Now, we examine if hybrid inflation satisfies naturalness for all couplings,
all scales being sub-Planckian and also smaller than the respective cutoffs,
i.e. $m_{\phi},\phi_{0}\lesssim\Lambda_{\phi}$ and
$M_{\sigma},\langle\sigma\rangle\lesssim\Lambda_{\sigma}$. The radiative
corrections to $m_{\phi}^{2}$ now are
$\left(\delta m_{\phi}^{2}\right)_{\rm
1-loop}\sim\frac{g\Lambda_{\sigma}^{2}}{16\pi^{2}}\gtrsim\frac{g\langle\sigma\rangle^{2}}{16\pi^{2}}\sim\frac{H^{2}M_{\text{pl}}^{2}}{16\pi^{2}\phi_{0}^{2}},$
(12)
where we use $\Lambda_{\sigma}\gtrsim\langle\sigma\rangle$ and
$\langle\sigma\rangle\sim\frac{HM_{\text{pl}}}{\sqrt{g}\phi_{0}}$ following
Eq. (10). Now, we can see that 1-loop naturalness in $m_{\phi}^{2}$, i.e.
$\left(\delta m_{\phi}^{2}\right)_{\rm 1-loop}\lesssim m_{\phi}^{2}\sim\eta
H^{2}$, can only be satisfied with
$\phi_{0}\gtrsim M_{\text{pl}},$ (13)
which is against our requirements to realize a truly low-scale hybrid
inflation model.
Thus, even allowing for separate cutoffs, hybrid inflation is still not
naturally in EFT control.
## 3 Hybrid inflation with a soft “waterfall” coupling
The naturalness problem described in Sec. 2 stems from the quadratic UV scale
sensitivity in $m_{\phi}^{2}$. One of the simplest solutions is to have only a
soft shift symmetry breaking for $\phi$, i.e. a dimensionful $\phi-\sigma$
interaction, e.g.
$V(\phi,\sigma)=V_{\text{inf}}+\left(-\frac{m_{\phi}^{2}}{2}\phi^{2}+\frac{\lambda_{\phi}}{4}\phi^{4}+\dots\right)+\left(\frac{M_{\sigma}^{2}}{2}\sigma^{2}+\frac{\lambda_{\sigma}}{4}\sigma^{4}\right)-\frac{\mu\phi}{2}\sigma^{2}+\dots.$
(14)
Here, during inflation, i.e. for $\mu\phi<M_{\sigma}^{2}$, $\sigma$ remains
localized at $\sigma=0$, thus giving the same effective inflationary potential
as Eq. (2). The ellipsis after the last term in Eq. (14) above, as in Eq. (1),
includes higher-dimensional interaction terms which ensure that the global
minimum in $\phi$ is only $\mathcal{O}(1)$ away from the critical value
$\phi_{*}=\frac{M_{\sigma}^{2}}{\mu}$. As $\phi$ rolls down past $\phi_{*}$,
the waterfall in $\sigma$ is triggered, thus ending inflation by releasing the
inflationary vacuum energy
$V_{\text{inf}}\sim\mathcal{O}(1)\frac{M_{\sigma}^{4}}{\lambda_{\sigma}}$,
similarly to Eq. (3). As mentioned before, this parametric form of
$V_{\text{inf}}$ along with $\phi_{\rm min}\sim\mathcal{O}(\phi_{*})$ can be
explicitly realized in the pNGB realization of the inflaton which we detail in
Sec. 5.4.
### 3.1 Naturalness considerations
The soft coupling $\mu$ generates only a logarithmic cutoff sensitivity in
$m_{\phi}^{2}$:
$(\delta m_{\phi}^{2})_{\rm 1-loop}\sim\frac{\mu^{2}\ln\Lambda}{16\pi^{2}}.$
(15)
As in the previous case, demanding that the loop-induced inflaton mass is
smaller than its tree-level mass, i.e. $\frac{\mu^{2}}{16\pi^{2}}\lesssim\eta
H^{2}$ (taking $\ln\Lambda\sim\mathcal{O}(1)$), and that $\sigma$ is non-
dynamical during inflation, i.e.
$M_{\sigma,\textrm{eff}}^{2}\sim\mu\phi_{0}\gtrsim H^{2}$, we get
$\displaystyle\frac{H}{\phi_{0}}\lesssim\frac{\mu}{H}\lesssim
4\pi\sqrt{\eta}\sim\mathcal{O}(1).$ (16)
Therefore, at the first sight, there is no constraint such as
$\phi_{0}\gtrsim\Lambda$ as before. However, the $\mu$ term in Eq. (14) also
generates a quadratically divergent $\phi$-tadpole:
$\displaystyle V(\phi,\sigma)\ni\frac{\mu\Lambda^{2}}{16\pi^{2}}\phi.$ (17)
Indeed, the soft waterfall coupling breaks the $\phi\rightarrow-\phi$ symmetry,
allowing for a tadpole like the one above. Although it is possible for the
theory to have a larger tadpole, e.g. $\Lambda^{3}\phi$, it is _natural_ for it
to have the above radiatively generated value. We take $\mu\ll\Lambda$ to
characterize the small breaking of the $\phi\rightarrow-\phi$ symmetry in any
coupling of the model. The tadpole in Eq. (17) can be absorbed in Eq. (14)
with a large shift in the $\phi$ field:
$\delta\phi\sim\frac{\mu\Lambda^{2}}{16\pi^{2}m_{\phi}^{2}}\sim\frac{\mu\Lambda^{2}}{16\pi^{2}\eta
H^{2}}\sim\frac{\mu\Lambda^{2}}{H^{2}}.$ (18)
Such a large shift in $\phi$, however, also gives large contributions to other
terms in Eq. (14), e.g.
$\frac{\delta
M_{\sigma}^{2}}{M_{\sigma,\textrm{eff}}^{2}}\sim\frac{\delta\phi}{\phi_{0}}\sim\frac{\mu\Lambda^{2}}{H^{2}\phi_{0}}\sim\frac{M_{\sigma,\textrm{eff}}^{2}}{H^{2}}\frac{\Lambda^{2}}{\phi_{0}^{2}}.$
(19)
We can see from above that, in order for naturalness in $M_{\sigma}^{2}$ (and
also to allow for waterfall transition), i.e. for $\delta
M_{\sigma}^{2}\lesssim M_{\sigma,\textrm{eff}}^{2}$, we need
$\frac{\phi_{0}^{2}}{\Lambda^{2}}\gtrsim\frac{M_{\sigma,\textrm{eff}}^{2}}{H^{2}}\gtrsim
1.$ (20)
This again implies $\phi_{0}\gtrsim\Lambda$, which is in contradiction with
the EFT requirements stated earlier.
### 3.2 Allowing for different cutoff scales
Allowing even for different cutoff scales in this hybrid inflation model with
soft coupling, we get a similar result as Eq. (13). The radiative corrections
to $M_{\sigma}^{2}$ here are
$\left(\delta M_{\sigma}^{2}\right)_{\rm
1-loop}\sim\frac{\lambda_{\sigma}\Lambda_{\sigma}^{2}}{16\pi^{2}}+\frac{\mu^{2}\Lambda_{\sigma}^{2}}{16\pi^{2}m_{\phi}^{2}}.$
(21)
Naturalness for the first term on the right hand side above, as before,
demands $\langle\sigma\rangle\lesssim\Lambda_{\sigma}\lesssim
4\pi\langle\sigma\rangle$, now with
$\langle\sigma\rangle\sim\frac{HM_{\text{pl}}}{\sqrt{\mu\phi_{0}}}$. In order
to satisfy naturalness for the second term (sourced by quadratically divergent
$\phi$-tadpole), i.e.
$1\gtrsim\frac{\mu^{2}\Lambda_{\sigma}^{2}}{16\pi^{2}m_{\phi}^{2}M_{\sigma}^{2}}\gtrsim\frac{\mu\langle\sigma\rangle^{2}}{H^{2}\phi_{0}}\sim\frac{M_{\text{pl}}^{2}}{\phi_{0}^{2}},$
(22)
we again need
$\phi_{0}\gtrsim M_{\text{pl}}.$ (23)
Thus, we see that with either marginal or soft $\phi-\sigma$ coupling, even
with different cutoffs for the inflaton and the waterfall field, if we demand
EFT control (i.e. all scales being smaller than the respective cutoffs) and
sub-Planckian physics, the only way to have a consistent hybrid inflation
model is with fine-tuning of the relevant parameters, $m_{\phi}^{2}$ or
$M_{\sigma}^{2}$ as discussed in this and the previous section. This suggests
that in order to build a natural model for hybrid inflation, we need some
significant new mechanism to entirely get rid of the quadratic UV-sensitivity
in the inflaton potential coming from its necessarily non-derivative coupling
to the waterfall field.
## 4 Effective single-field inflation
The models described in Sec. 2 and 3 cannot give rise to consistent hybrid
inflation under EFT control without fine-tuning of parameters. Before we
propose such a natural model for hybrid inflation in Sec. 5, in this section
we first focus on effective single-field inflation with the massive waterfall
field integrated out. We also introduce here a simplifying notation for the
effective inflationary potential that arises quite generically from hybrid
inflation. As we will see, this simplified single-field analysis allows us to
easily estimate the inflationary observables and use them to constrain the
effective model parameters, even without knowing the detailed form of the full
potential. This “satellite view” will be helpful later in Sec. 5 by simply
identifying the realistic parts of parameter space deserving a fuller
analysis.
The waterfall field, although with a $\phi$-dependent mass, still remains
heavier than $H$ throughout inflation, except at the end of inflation when
$M_{\sigma}^{2}(\phi)$ passes through zero. Thus, prior to the end of
inflation we can integrate it out and get an effective single-field
description in terms of $\phi$. Hybrid inflation quite generically gives this
effective single-field inflationary potential in the form of Eq. (2), which
varies as some function $v(\phi)$ with a large vacuum energy offset
$V_{\text{inf}}$. In this section, we introduce a simplifying notation with
$v(\phi)=V_{0}\cdot F\left(\frac{\phi}{f}\right),$ (24)
where $V_{0}$ controls the magnitude, while the shape is specified by a
dimensionless function $F$. The effective inflationary potential then has the
following form:
$V_{\rm eff}(\phi)=V_{\text{inf}}+V_{0}\cdot
F\left(\frac{\phi}{f}\right)~{}~{};~{}~{}V_{\text{inf}}\gg V_{0}.$ (25)
The hilltop-like $v(\phi)$ that we considered earlier in Eq. (4) has the form
as in Eq. (24). We will also show later how this simple form arises
generically from a more complete hybrid inflation model in Sec. 5 where the
inflaton is realized as a pNGB, and where $F\left(\frac{\phi}{f}\right)$ takes
a trigonometric form.
The main benefit of using this simplifying notation is that, assuming the
function $F$ and its derivatives are $\sim\mathcal{O}(1)$ during inflation,
which is also the case in the model that we discuss later in Sec. 5, we can
obtain general expressions for inflationary observables as shown below, even
without specifying the explicit form of $F$. We assume that inflation
starts (more precisely, when the largest scales observable today exit the
horizon during inflation) at $\phi_{i}$, which is somewhat near the hilltop of
$F\left(\frac{\phi}{f}\right)$ as preferred by the data Planck2018Inflation ,
and ends at $\phi_{e}$ by a waterfall transition along the $\sigma$ field.
Then, the slow-roll inflation parameters are (the parameters $\epsilon,\eta$
as defined below are, in general, functions of $\phi$; unless an explicit
functional argument is shown, they refer to the parameters evaluated at the
epoch when the largest scales observable today exit the horizon, normally
$\sim$50-60 e-folds before the end of inflation):
$\begin{split}&\eta\equiv\frac{V^{\prime\prime}}{V}M_{\text{pl}}^{2}\sim\frac{V_{0}}{V_{\text{inf}}}\frac{M_{\text{pl}}^{2}}{f^{2}}\
,\
\epsilon\equiv\frac{1}{2}\left(\frac{V^{\prime}}{V}\right)^{2}M_{\text{pl}}^{2}\sim\eta^{2}\frac{f^{2}}{M_{\text{pl}}^{2}},\\\
&A_{s}\equiv\frac{1}{8\pi^{2}}\frac{H^{2}}{M_{\text{pl}}^{2}}\frac{1}{\epsilon}\sim\frac{10^{-2}}{\eta^{2}}\frac{H^{2}}{f^{2}}\
,\
\mathcal{N}_{e}\equiv\int_{\phi_{i}}^{\phi_{e}}\frac{d\phi}{M_{\text{pl}}\sqrt{2\epsilon(\phi)}}\sim\frac{1}{\eta}\int_{\theta_{i}}^{\theta_{e}}\frac{d\theta}{F^{\prime}(\theta)}\sim\frac{\mathcal{O}(1)}{\eta}.\end{split}$
(26)
The last relation above involving the number of observable e-foldings
$\mathcal{N}_{e}$ uses the notation $\theta\equiv\phi/f$. The first line of Eq.
(26) shows that, quite generically, the slow-roll parameter $\epsilon$ is
parametrically suppressed compared to $\eta$ (for $f\ll M_{\text{pl}}$),
thereby naturally explaining the smallness of the yet-unobserved primordial
tensor fluctuations Planck2018Inflation . The observables—spectral tilt of the
primordial scalar fluctuations ($1-n_{s}$), tensor-to-scalar ratio ($r$), and
the scalar power spectrum amplitude ($A_{s}$)—as per the Planck CMB data
Planck2018CosmoParam ; Planck2018Inflation are
$\begin{split}1-n_{s}=6\epsilon-2\eta\approx-2\eta\approx 0.04\ ,\
r=16\epsilon<0.06\ ,\ A_{s}\approx 2\times 10^{-9},\end{split}$ (27)
where, in the first part above, we assume $\epsilon\ll\eta$ as is the case
preferred by the data. Also, as the spectral tilt constraint above shows,
$\eta<0$ is strongly preferred, especially for the low-scale models we are
considering (i.e. for small $\epsilon$). A convex form of
$F\left(\frac{\phi}{f}\right)$ in Eq. (25), or more generally convex $v(\phi)$
in Eq. (2), e.g. $v(\phi)=+\frac{1}{2}m_{\phi}^{2}\phi^{2}$ as mentioned
earlier, gives $\eta>0$ and hence a blue spectral tilt which is strongly
disfavored. Hence, we consider a hilltop-like $F\left(\frac{\phi}{f}\right)$
with inflation happening somewhat close to its maximum. Eq. (27) constrains
the parameters of the effective single-field inflation as described by Eq.
(25), i.e. $(V_{\text{inf}},V_{0},f)$, as follows (we will do a better job of
estimating these parameters, especially $\frac{f}{H}$, in Sec. 5.4, taking the
$\sim\mathcal{O}(1)$ factors in $F$ and its derivatives from Eq. (25) into
account):
$\frac{f}{H}\sim\frac{0.1}{\eta\sqrt{A_{s}}}\sim 10^{6}\ ,\
\frac{V_{0}}{f^{4}}\sim 10^{2}\eta^{3}A_{s}\sim 10^{-12}\ ,\
\frac{V_{0}}{V_{\text{inf}}}\sim\frac{\epsilon}{\eta}\sim\mathcal{O}(10)\ r.$
(28)
Hilltop inflation models, in order to satisfy the slow roll conditions,
typically require inflation to happen very close to the hilltop. However, with
a large offset in the vacuum energy as in Eq. (25), this tuning in the initial
inflaton field location is not required. Here, the potential generically
satisfies slow-roll conditions for all values of $\phi$ and not just near its
extrema. As can be seen in Eq. (26), $\mathcal{N}_{e}\propto
1/\eta\sim\mathcal{O}(100)$. Hence, the dimensionless integral there needs
only to be $\mathcal{O}(1)$ to get $\mathcal{N}_{e}=50$-$60$, which can be easily
satisfied with $\phi_{i},\phi_{e}\sim\mathcal{O}(f)$.
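As a quick numeric check of Eq. (28) (assuming $F$ and its derivatives are $\mathcal{O}(1)$, and using $|\eta|\sim 0.02$ and $A_{s}\sim 2\times 10^{-9}$ from Eq. (27)):

```python
import numpy as np

eta, A_s = 0.02, 2e-9                      # |eta| from 1 - n_s ~ 0.04; Planck A_s

print(f"f/H    ~ {0.1 / (eta * np.sqrt(A_s)):.1e}")  # of order 1e5-1e6
print(f"V0/f^4 ~ {1e2 * eta**3 * A_s:.1e}")          # ~1e-12
print(f"N_e    ~ {1.0 / eta:.0f}")                   # Eq. (26): O(1)/eta
```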
## 5 Hybrid “Twinflation”
In the present section, we propose a natural model for hybrid inflation,
“Twinflation”, which satisfies naturalness for all parameters, all mass scales
and field values being smaller than the respective UV cutoff scales, and sub-
Planckian physics. We will also make use of the estimates in Sec. 4, since the
effective inflationary potential here has the same form as in Eq. (25), as we
will see later.
In order to get rid of the quadratic sensitivity of the inflaton potential
$V_{\rm eff}(\phi)$ towards the UV physics, we consider mirroring the
$\sigma$-field with a $\mathbb{Z}_{2}$ exchange symmetry. Considering the
original structure of hybrid inflation, Eq. (1), one could try
$g\phi^{2}\sigma^{2}\rightarrow
g\phi^{2}\left(\sigma_{A}^{2}-\sigma_{B}^{2}\right)$, such that the quadratic
sensitivity of the inflaton mass to the UV scale is canceled between
$\sigma_{A}$ and $\sigma_{B}$. However, no symmetry protects this structure
and hence it is not radiatively stable. Instead, we consider twinning the
$\sigma$-field in our variant hybrid inflation, Eq. (14), i.e.
$\mu\phi\sigma^{2}\rightarrow\mu\phi\left(\sigma_{A}^{2}-\sigma_{B}^{2}\right).$
(29)
Here, $m_{\phi}^{2}$ already has only log-sensitivity to the UV scale. The
twinning in $\sigma$ now prevents a quadratically divergent $\phi$-tadpole,
thereby removing the associated issues discussed in Sec. 3. Also, there
exists a symmetry protecting this structure:
$\sigma_{A}\rightarrow\sigma_{B},\phi\rightarrow-\phi$; along with
$\sigma$-parity i.e. $\sigma_{i}\rightarrow-\sigma_{i}$ ($i=A,B$) for
simplicity. (In the next section we will softly break the $\sigma-$parity in
a controlled manner to address the cosmological domain wall problem while
ensuring naturalness.) So, this structure is radiatively stable. This can also
be realized by a UV completion where $\phi$ is a pNGB of a $U(1)$ global
symmetry with soft explicit breaking (see Sec. 5.4).
A model construction similar to the one presented in Sec. 5.1, i.e. Eqs.
(30) and (31), was considered in Ref. Berezhiani:1995am but in the context of
mirror-world models to achieve asymmetric reheating of the mirror sector so as
to avoid the $\Delta N_{\rm eff}$ constraints. However, here our primary goal
is to point out the utility of the twin symmetry in Eq. (30) to address the
$\eta-$problem for the inflaton, by constraining inflaton radiative
corrections, while reheating can proceed as in standard hybrid inflation.
### 5.1 Basic model
We now consider the symmetry structure described above, namely,
$\sigma_{A}\rightarrow\sigma_{B}\ ,\ \phi\rightarrow-\phi$ (30)
under the twin symmetry, and also $\sigma_{i}\rightarrow-\sigma_{i}$ for
simplicity. The most general potential consistent with the above symmetry is
given by
$\begin{split}V(\phi,\sigma_{A,B})=~{}&V_{\text{inf}}+\left(-\frac{1}{2}m_{\phi}^{2}\phi^{2}+\frac{\lambda_{\phi}}{4}\phi^{4}+\dots\right)\\\
&+\left(\left(\frac{1}{2}M_{\sigma}^{2}\sigma_{A}^{2}+\frac{\lambda_{\sigma}}{4}\sigma_{A}^{4}\right)+(A\rightarrow
B)\right)+\frac{\bar{\lambda}_{\sigma}}{4}\sigma_{A}^{2}\sigma_{B}^{2}\\\
&+\frac{\mu}{2}\phi\left(\sigma_{A}^{2}-\sigma_{B}^{2}\right)+\kappa\phi^{2}\left(\sigma_{A}^{2}+\sigma_{B}^{2}\right)+\dots,\end{split}$
(31)
where the ellipsis after the last term includes higher-dimensional interaction
terms, as in Eq. (14). Approximate shift symmetry for the inflaton $\phi$ then
requires
$\mu,m_{\phi}\ll M_{\sigma}\ \ \textrm{and}\ \
\kappa,\lambda_{\phi}\ll\lambda_{\sigma},\bar{\lambda}_{\sigma}~{}~{},$ (32)
which ensures that $\phi$ is much lighter and more weakly coupled than
$\sigma_{i}$.
Let us first analyze the effective inflationary dynamics at tree-level. During
inflation, i.e. for $\mu\phi<M_{\sigma}^{2}$, both $\sigma$ fields remain
heavy, with vanishing VEVs. Then, integrating them out at tree-level amounts
simply to dropping $\sigma_{i}$ in Eq. (31). This gives
$V_{\rm{eff}}(\phi)=V_{\text{inf}}+\left(-\frac{1}{2}m_{\phi}^{2}\phi^{2}+\frac{\lambda_{\phi}}{4}\phi^{4}+\dots\right)=V_{\text{inf}}+\frac{\lambda_{\phi}}{4}(\phi^{2}-f^{2})^{2}+\dots,$
(33)
where $f\sim m_{\phi}/\sqrt{\lambda_{\phi}}$ and the ellipsis includes sub-
dominant higher-dimensional terms in $\phi$. This potential is of the form of
Eq. (25) and hence all the results of Sec. 4, in particular Eq. (28), apply
here. We will consider an inflationary trajectory somewhat close to the hilltop
of $V_{\rm{eff}}(\phi)$ (i.e. $\phi=0$), but still with a typical inflaton
field value of $\sim\mathcal{O}(f)$ to avoid any considerable tuning of the
initial location. As $\phi$ rolls down its potential, the masses $M_{\sigma_{i}}^{2}$ change as
$M^{2}_{\sigma_{A,B}}(\phi)=M^{2}_{\sigma}\pm\mu\phi.$ (34)
In order for the waterfall effect to take place, we need
$M_{\sigma}^{2}\sim\mathcal{O}(\mu f).$ (35)
Since $M_{\sigma_{A}}^{2}$ always stays positive along the inflationary
trajectory, $\sigma_{A}$ has no dynamical role in the model. But $\sigma_{B}$,
which is the true waterfall field here, turns tachyonic at
$\phi_{*}=\frac{M_{\sigma}^{2}}{\mu}\sim\mathcal{O}(f)$ and rapidly rolls down
to its new minimum. The global minimum can be characterized by
$~{}\sigma_{B,\textrm{min}}=\left(\frac{\mu\phi_{\rm
min}-M_{\sigma}^{2}}{\lambda_{\sigma}}\right)^{1/2}=\frac{M_{\sigma}}{\sqrt{\lambda_{\sigma}}}\left(\frac{\phi_{\rm
min}}{\phi_{*}}-1\right)^{1/2},~{}\sigma_{A,\textrm{min}}=0.$ (36)
This fast rolling to the global minimum ends inflation by releasing the vacuum
energy given by
$V_{\rm inf}=\frac{M_{\sigma}^{4}}{4\lambda_{\sigma}}\left(\frac{\phi_{\rm
min}}{\phi_{*}}-1\right)^{2}\sim\mathcal{O}(1)\frac{\mu^{2}f^{2}}{\lambda_{\sigma}}.$
(37)
In the last step above, as also alluded to before in Sec. 3, we have set
$\phi_{\rm min}\sim\mathcal{O}(\phi_{*})\sim\mathcal{O}(f)$ assuming that the
higher-dimensional interaction terms in the ellipsis in Eq. (31) fix the
global minimum in $\phi$ at $\sim\mathcal{O}(f)$. As we will see later in Sec.
5.4, this can be easily realized in a more complete model with $\phi$ as pNGB
of a $U(1)$ global symmetry.
### 5.2 Radiative stability and naturalness
In order for the tree-level analysis of the Twinflation model from the
previous section to be valid even at loop-level, we need the radiative
corrections in Eq. (31) to be sufficiently small, which we explore in this
section. The effect of loops is two-fold: renormalizing tree-level parameters,
and giving non-analytic field-dependence via logarithmic terms in the Coleman-
Weinberg (CW) potential. First, we require that renormalization of the tree-level
parameters respects radiative stability and naturalness, and derive the resulting
constraints on the model parameters. Then, in Sec. 5.3, we also consider the
effects of the full CW potential, but we will show that they can have
significant effects only at the boundary of the allowed parameter space, i.e.
when naturalness in $V_{\rm eff}(\phi)$ is saturated, which we examine
numerically and show in Fig. 1. In this section, we will therefore defer the
full CW analysis in order to first identify the bulk of the viable parameter
space.
Here we identify the constraints on the parameter space required to achieve
naturalness of the tree-level parameters. In the $\sigma$-sector, a quadratic
divergence in $M^{2}_{\sigma}$ is induced by the $\sigma$ self-quartic
couplings as
$\delta M^{2}_{\sigma,\rm
1-loop}\sim\frac{\lambda_{\sigma}\Lambda_{\sigma}^{2}}{16\pi^{2}}+\frac{\bar{\lambda}_{\sigma}\Lambda_{\sigma}^{2}}{16\pi^{2}}.$
(38)
Hence, naturalness in $M^{2}_{\sigma}$ demands the cutoff in $\sigma$-sector
to be
$\frac{M_{\sigma}}{\sqrt{\lambda_{\sigma}}}\lesssim\Lambda_{\sigma}\lesssim
4\pi\frac{M_{\sigma}}{\sqrt{\lambda_{\sigma}}}.$ (39)
The first constraint above is obtained by demanding that the VEV of $\sigma$
is smaller than the UV scale, which is one of our EFT consistency requirements.
We also consider $\bar{\lambda}_{\sigma}\lesssim\lambda_{\sigma}$ such that
the upper bound on $\Lambda_{\sigma}$ is controlled by $\lambda_{\sigma}$ as
above. Since both $\bar{\lambda}_{\sigma}$ and $\lambda_{\sigma}$ get the same
radiative contributions as mentioned below in Eq. (40), this is justified.
In the $\phi$-sector, for simplicity, first we consider an exact shift
symmetry, which is then only softly broken by the $\mu$ term in Eq. (31).
Then, the loop-level one-particle irreducible (1PI) effective potential has
contributions as follows (here we track only the $\mu$-dependent corrections):
$\begin{split}&\delta m^{2}_{\phi,\rm
1-loop}\sim\frac{\mu^{2}}{16\pi^{2}}\ln\Lambda_{\sigma}~{},\\\
&\delta\left(\lambda_{\phi},\lambda_{\sigma},\bar{\lambda}_{\sigma}\right)_{\rm{1-loop}}\sim\frac{\mu^{4}}{16\pi^{2}M_{\sigma}^{4}}\sim\frac{\mu^{2}}{16\pi^{2}f^{2}}~{},\\\
&\delta\kappa_{\rm
1-loop}\sim\frac{\lambda_{\sigma}\mu^{2}}{16\pi^{2}M_{\sigma}^{2}}\sim\frac{\lambda_{\sigma}\mu}{16\pi^{2}f}~{}.\end{split}$
(40)
Here, we first note that there is no quadratic sensitivity to the UV cutoff
scales as in Eq. (17), due to cancellations induced by the twin symmetry, and
only a log-sensitivity in $m^{2}_{\phi}$. Now, we will consider even tree-
level hard breaking of the $\phi$-shift symmetry, i.e. tree-level $\lambda_{\phi}$
and $\kappa$ couplings, which are comparable to the loop contributions above.
We will take the tree-level values for the other parameters to be at least
comparable to, or bigger than, their loop contributions. This gives
$m^{2}_{\phi,\rm
tree}\gtrsim\frac{\mu^{2}}{16\pi^{2}}~{}~{},~{}~{}\left(\lambda_{\sigma},\bar{\lambda}_{\sigma}\right)_{\rm
tree}\gtrsim\frac{\mu^{2}}{16\pi^{2}f^{2}}~{}~{},~{}~{}\lambda_{\phi,\rm
tree}\sim\frac{\mu^{2}}{16\pi^{2}f^{2}}~{}~{},~{}~{}\kappa_{\rm
tree}\sim\frac{\lambda_{\sigma}\mu}{16\pi^{2}f}~{}~{},$ (41)
taking $\ln\Lambda_{\sigma}\sim\mathcal{O}(1)$. We note that with the above
choice for $m_{\phi}^{2}$ and $\lambda_{\phi}$, the $\phi$-transit scale is
indeed $\mathcal{O}(f)$. However, the tree-level $\lambda_{\phi}$ and $\kappa$
hard-breaking terms now induce quadratic UV-sensitivity in $V_{\rm
eff}(\phi)$. Their values satisfying the above constraints are nevertheless
sufficiently small that naturalness in $m^{2}_{\phi}$ can still be
maintained, as shown below:
$\begin{split}&\delta
m^{2}_{\phi,\rm{1-loop},(\lambda_{\phi})}\sim\frac{\lambda_{\phi}\Lambda_{\phi}^{2}}{16\pi^{2}}\sim\frac{\mu^{2}}{16\pi^{2}}\frac{\Lambda_{\phi}^{2}}{16\pi^{2}f^{2}}\lesssim\frac{\mu^{2}}{16\pi^{2}}\lesssim
m^{2}_{\phi,\rm tree}\ ,\\\ &\delta
m^{2}_{\phi,\rm{1-loop},(\kappa)}\sim\frac{\kappa\Lambda_{\sigma}^{2}}{16\pi^{2}}\sim\frac{\mu^{2}}{16\pi^{2}}\frac{\Lambda_{\sigma}^{2}}{16\pi^{2}M_{\sigma}^{2}/\lambda_{\sigma}}\lesssim\frac{\mu^{2}}{16\pi^{2}}\lesssim
m^{2}_{\phi,\rm tree}.\end{split}$ (42)
As can be seen above, this requires cutoffs in the two sectors to be bounded
as
$\Lambda_{\phi}\lesssim 4\pi f~{}~{},~{}~{}\Lambda_{\sigma}\lesssim
4\pi\frac{M_{\sigma}}{\sqrt{\lambda_{\sigma}}}~{}~{},$ (43)
where the $\sigma$-cutoff also satisfies Eq. (39). We note that these cutoffs
can still be bigger than the respective field values.
#### Getting a consistent inflationary model:
In order to get a consistent single-field inflation model, we need to satisfy
$m^{2}_{\phi}\sim\eta H^{2}\ ,\ M_{\sigma}\gtrsim H\ ,\ V_{\text{inf}}\sim
H^{2}M_{\text{pl}}^{2}\sim\frac{M_{\sigma}^{4}}{\lambda_{\sigma}}.$ (44)
The first condition above, along with Eq. (41), requires
$\mu\lesssim\mathcal{O}(H)$. The second condition, i.e. the $\sigma$ fields
being at least heavier than the Hubble scale, combined with
$M_{\sigma}^{2}\sim\mu f$ (see Eq. (35)) and $f\sim 10^{6}H$ (see Eq. (28)),
requires $\mu\gtrsim 10^{-6}H$. Together, these constrain the model parameter
$\mu$ as
$10^{-6}\lesssim\frac{\mu}{H}\lesssim\mathcal{O}(1).$ (45)
The lower bound on $\mu$ above also satisfies $\langle\sigma\rangle\lesssim
M_{\text{pl}}$ following Eq. (37) and Eq. (39). A stronger requirement of
$\Lambda_{\sigma}\sim 4\pi\langle\sigma\rangle\lesssim M_{\text{pl}}$ implies
$\frac{\mu}{H}\gtrsim 10^{-3}$.
#### Lower bound on the Hubble scale:
The third condition in Eq. (44), which relates the inflationary Hubble scale
to the model parameters, implies
$\lambda_{\sigma}\sim\frac{M_{\sigma}^{4}}{H^{2}M_{\text{pl}}^{2}}\sim\frac{\mu^{2}f^{2}}{H^{2}M_{\text{pl}}^{2}}\sim
10^{22}\frac{\mu^{2}}{f^{2}}\frac{H^{2}}{M_{\text{pl}}^{2}},$ (46)
using Eq. (28) in the last step. Hence naturalness in $\lambda_{\sigma}$, i.e.
$\lambda_{\sigma}\gtrsim\frac{\mu^{2}}{16\pi^{2}f^{2}}$ (see Eq. (41)),
combined with Eq. (46) gives a lower bound on the inflationary Hubble scale
within our Twinflation model as
$H\gtrsim 10^{6}\textrm{GeV}.$ (47)
This also implies a lower bound on the tensor-to-scalar ratio as $r\gtrsim
10^{-16}$.
As we can see above, naturalness in $\lambda_{\sigma}$ also implies
$H^{2}M_{\text{pl}}^{2}\lesssim 16\pi^{2}f^{4}$ i.e.
$V_{\text{inf}}\lesssim\Lambda_{\phi}^{4}$, with the $\phi$-cutoff
$\Lambda_{\phi}\lesssim 4\pi f$. Also, perturbativity of $\lambda_{\sigma}$
combined with Eq. (37) and (39) implies
$V_{\text{inf}}\lesssim\Lambda_{\sigma}^{4}$. Thus, the inflationary energy
scale being smaller than the UV scales ensures good EFT control in this model.
Thus, our Twinflation model of Eq. (31), with the parameters satisfying the
constraints in Eq. (41), exhibits naturalness and EFT control. All the mass
scales and the field values are less than the corresponding UV cutoff scales,
especially $f\lesssim\Lambda_{\phi}$ and
$\langle\sigma\rangle\lesssim\Lambda_{\sigma}$. As we will see later in Sec.
5.4, there is a significant parameter space available satisfying
$\Lambda_{\phi},\Lambda_{\sigma}\lesssim M_{\text{pl}}$ (see Fig. 1) such that
we have a truly low-scale, sub-Planckian hybrid inflation model under EFT
control, satisfying all of our naturalness requirements as mentioned in Sec.
2.
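To make these consistency conditions easy to scan, the following minimal sketch checks one parameter point against the constraints collected above (Eqs. (35), (41), (43), (45), (46)). The specific point, the $\mathcal{O}(1)$ prefactors, and the perturbativity threshold are illustrative assumptions, not values from the paper's numerical analysis.

```python
import numpy as np

M_PL = 2.4e18  # reduced Planck mass in GeV

def check_twinflation_point(H, mu, f_over_H=1e6):
    """Check the naturalness/EFT constraints of Sec. 5.2 for a single
    parameter point (all dimensionful quantities in GeV)."""
    f = f_over_H * H
    V_inf = 3.0 * H**2 * M_PL**2        # Friedmann: V_inf = 3 H^2 Mpl^2
    M_sigma2 = mu * f                   # waterfall condition, Eq. (35)
    lam_sigma = M_sigma2**2 / V_inf     # Eq. (46), up to O(1) factors
    return {
        "mu window, Eq. (45)":         1e-6 <= mu / H <= 1.0,
        "natural lam_sigma, Eq. (41)": lam_sigma >= mu**2 / (16 * np.pi**2 * f**2),
        "Lambda_phi = 4*pi*f < Mpl":   4 * np.pi * f < M_PL,
        "Lambda_sigma < Mpl":          4 * np.pi * np.sqrt(M_sigma2 / lam_sigma) < M_PL,
        "perturbative lam_sigma":      lam_sigma < 4 * np.pi,  # rough threshold
    }

# An illustrative point that should land inside the allowed region of Fig. 1
for name, ok in check_twinflation_point(H=1e9, mu=1e8).items():
    print(f"{name:30s} {'OK' if ok else 'FAIL'}")
```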
### 5.3 One-loop Coleman-Weinberg effective potential
As we noted earlier, the $\sigma$ fields are always heavy before the end of
inflation, and hence can be integrated out to give a 1-loop Coleman-Weinberg
(CW) potential:
$\begin{split}V_{\rm
CW}(\phi)&=\sum_{i=A,B}\frac{M_{\sigma_{i}}^{4}(\phi)}{64\pi^{2}}\ln{\frac{M_{\sigma_{i}}^{2}(\phi)}{\Lambda_{\sigma}^{2}}}\\\
&=\frac{\mu^{2}f^{2}}{64\pi^{2}}\left[\left(2\frac{\phi^{2}}{f^{2}}+\cdots\right)\ln{\frac{\mu
f}{\Lambda_{\sigma}^{2}}}+\frac{(\phi_{*}+\phi)^{2}}{f^{2}}\ln{\frac{\phi_{*}+\phi}{f}}+\frac{(\phi_{*}-\phi)^{2}}{f^{2}}\ln{\frac{\phi_{*}-\phi}{f}}\right].\end{split}$
(48)
The first term above renormalizes $m_{\phi,\text{tree}}^{2}$ as in Eq. (40).
Parameterizing the tree-level inflaton mass as
$m_{\phi,\rm{tree}}^{2}\equiv c_{\phi}\frac{\mu^{2}}{16\pi^{2}}\ ,$ (49)
the naturalness constraint in Eq. (41) requires
$c_{\phi}\gtrsim\mathcal{O}(1)$. Then, $V_{\rm CW}(\phi)$ in Eq. (48) is
comparable to tree-level $V_{\rm eff}(\phi)$ in Eq. (33) only when
$c_{\phi}\approx 1$, while giving sub-dominant effects for the bulk of the
natural parameter space ($c_{\phi}\gg 1$). Nevertheless, in our full numerical
analysis in Sec. 5.4, we will incorporate the logarithmic effects in the
inflaton potential that distinguish the 1-loop result, but they are so modest as to
be difficult to resolve by eye, as we will see in Fig. 1.
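A quick way to see this $c_{\phi}$ scaling is to evaluate the field-dependent logs of Eq. (48) against the tree-level hilltop terms directly. The sketch below works in units $f=1$ and sets $\Lambda_{\sigma}^{2}=\mu f$ so that the first (mass-renormalizing) log in Eq. (48) drops out; the specific values of $\mu$ and $\phi_{*}$ are illustrative choices.

```python
import numpy as np

def V_CW(phi, mu, phi_star):
    """Field-dependent logs of the 1-loop CW potential, Eq. (48), in units
    f = 1, with M^2_{A,B}(phi) = mu*(phi_star +/- phi) and Lambda_sigma^2 = mu*f."""
    return (mu**2 / (64 * np.pi**2)) * (
        (phi_star + phi)**2 * np.log(phi_star + phi)
        + (phi_star - phi)**2 * np.log(phi_star - phi))

def V_tree(phi, mu, c_phi):
    """Tree-level hilltop terms with the natural couplings of Eqs. (41), (49)."""
    m2 = c_phi * mu**2 / (16 * np.pi**2)
    lam = mu**2 / (16 * np.pi**2)       # lambda_phi of Eq. (41) with f = 1
    return -0.5 * m2 * phi**2 + 0.25 * lam * phi**4

mu, phi_star = 1e-3, 0.9
phi = np.linspace(0.05, 0.8, 6)
for c_phi in (1.0, 100.0):
    ratio = (V_CW(phi, mu, phi_star) - V_CW(0.0, mu, phi_star)) / V_tree(phi, mu, c_phi)
    print(f"c_phi = {c_phi:5.0f}: max |V_CW/V_tree| = {np.max(np.abs(ratio)):.2f}")
# c_phi = 1 gives an O(1) ratio; c_phi = 100 suppresses it by ~100, as in the text
```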
### 5.4 Pseudo-Nambu-Goldstone inflaton realization
In this section, we discuss a simple and more complete extension of the model
in Eq. (31), realizing the inflaton as a pNGB of a global $U(1)$ symmetry,
with soft explicit breaking. The Lagrangian is given by,
$\displaystyle\mathcal{L}_{\rm UV}=$
$\displaystyle|\partial\Phi|^{2}-V_{\Phi}(|\Phi|^{2})$
$\displaystyle+\left(\left(\frac{1}{2}(\partial\sigma_{A})^{2}-\frac{1}{2}M_{\sigma}^{2}\sigma_{A}^{2}-\frac{\lambda_{\sigma}}{4}\sigma_{A}^{4}\right)+(A\rightarrow
B)\right)-\frac{\bar{\lambda}_{\sigma}}{4}\sigma_{A}^{2}\sigma_{B}^{2}$
$\displaystyle+\left(\frac{\mu\Phi}{2\sqrt{2}}(\sigma_{A}^{2}-\sigma_{B}^{2})+\frac{c_{\phi}}{64\pi^{2}}\left(\mu\Phi\right)^{2}+\text{h.c.}\right)-g|\Phi|^{2}\left(\sigma_{A}^{2}+\sigma_{B}^{2}\right)-V_{\text{inf}}.$
(50)
Similar to the symmetry structure in Eq. (30), we demand
$\displaystyle\Phi\rightarrow-\Phi,~{}~{}\sigma_{A}\rightarrow\sigma_{B}$ (51)
under the twin symmetry, and also for simplicity a $\mathbb{Z}_{2}$-symmetry
under which $\sigma_{i}\rightarrow-\sigma_{i}$ for $i=A,B$. Furthermore, we
treat $\mu$ as a $U(1)$ “spurion” with charge $-1$ that compensates the $+1$
charge of $\Phi$ under the $U(1)$. This spurion analysis, along with the
symmetry structure in Eq. (51), uniquely fixes the Lagrangian in Eq. (50) at
the dimension-4 level. There are two dimensionless coupling constants
$c_{\phi}$ and $g$, with
$\mu,M_{\sigma},\lambda_{\sigma},\bar{\lambda}_{\sigma}$ being the same as in
Eq. (31). (To simplify the notation, we keep using the same parameter $\mu$
as before, although now it carries a spurion charge.) The potential $V_{\Phi}$ is
such that it allows for spontaneous breaking of the $U(1)$, with the inflaton
($\phi$) being the corresponding Nambu-Goldstone boson (NGB). The $\mu$-term
in the third line of Eq. (50) then gives mass to the inflaton, as we will see
below, making it a pseudo-NGB. We parametrize the inflaton $\phi$ as
$\Phi=\frac{f+\chi}{\sqrt{2}}e^{i\phi/f}$, where $\chi$ is the radial mode and
$\langle\Phi\rangle=f$ is the VEV. Integrating out $\chi$ and redefining
$\frac{\phi}{f}\rightarrow\frac{\phi}{f}+\pi/2$, we get an effective
Lagrangian from Eq. (50) as
$\displaystyle\mathcal{L}_{\rm IR}=$
$\displaystyle\left(\left(\frac{1}{2}(\partial\sigma_{A})^{2}-\frac{1}{2}\widetilde{M}_{\sigma}^{2}\sigma_{A}^{2}-\frac{\lambda_{\sigma}}{4}\sigma_{A}^{4}\right)+(A\rightarrow
B)\right)-\frac{\bar{\lambda}_{\sigma}}{4}\sigma_{A}^{2}\sigma_{B}^{2}$
$\displaystyle+\frac{1}{2}(\partial\phi)^{2}-\frac{\mu
f}{2}\sin\left(\frac{\phi}{f}\right)\left(\sigma_{A}^{2}-\sigma_{B}^{2}\right)-c_{\phi}\frac{\mu^{2}f^{2}}{64\pi^{2}}\cos\left(\frac{2\phi}{f}\right)-V_{\text{inf}}.$
(52)
Here we have defined $\widetilde{M}_{\sigma}^{2}\equiv M_{\sigma}^{2}+gf^{2}$.
For the waterfall mechanism to work, we need both $M_{\sigma}^{2}\sim\mu f$,
which was discussed earlier, and $g\lesssim\mu/f$, which then implies
$\widetilde{M}_{\sigma}^{2}\sim M_{\sigma}^{2}\sim\mu f$. Hence, in what
follows, we will drop the tilde over $M_{\sigma}^{2}$. This value of $g$ is
technically natural since loop-contributions in the 1PI effective potential
include
$\displaystyle\delta g_{\rm
1-loop}\sim\frac{\lambda_{\sigma}\mu^{2}}{16\pi^{2}M_{\sigma}^{2}}\sim\frac{\lambda_{\sigma}\mu}{16\pi^{2}f}\ll\frac{\mu}{f}.$
(53)
Inflation starts somewhat near the hilltop along $\phi$, i.e. close to
$\phi=0$. Expanding for $\phi/f\ll 1$ in Eq. (52), we get (note that the size of the
cosine potential in $\phi$ ($\sim\mu^{2}f^{2}/16\pi^{2}$) is much smaller than
$V_{\text{inf}}\sim\mu^{2}f^{2}/\lambda_{\sigma}$, as we will see later in Eq.
(58), and hence the constant term from the cosine can be neglected here)
$\displaystyle\mathcal{L}_{\rm IR}\approx$
$\displaystyle\left(\left(\frac{1}{2}(\partial\sigma_{A})^{2}-\frac{1}{2}M_{\sigma}^{2}\sigma_{A}^{2}-\frac{\lambda_{\sigma}}{4}\sigma_{A}^{4}\right)+(A\rightarrow
B)\right)-\frac{\bar{\lambda}_{\sigma}}{4}\sigma_{A}^{2}\sigma_{B}^{2}$
$\displaystyle+\frac{1}{2}(\partial\phi)^{2}-\frac{\mu\phi}{2}\left(\sigma_{A}^{2}-\sigma_{B}^{2}\right)-V_{\text{inf}}+c_{\phi}\frac{\mu^{2}}{16\pi^{2}}\left(\frac{\phi^{2}}{2}-\frac{\phi^{4}}{6f^{2}}+\dots\right).$
(54)
For $c_{\phi}\gtrsim\mathcal{O}(1)$, as required by technical naturalness in
Eq. (50), this reproduces all the interactions relevant for hybrid inflation,
as studied earlier in Eq. (31), for $c_{\phi}>0$.
During inflation, i.e. with
$\sin\left(\frac{\phi}{f}\right)<\frac{M_{\sigma}^{2}}{\mu f}$, both
$\sigma_{A,B}$ remain heavy, with vanishing VEVs. Thus, integrating them
out at tree-level, which amounts to dropping them in Eq. (52), gives an effective
inflationary potential
$V_{\rm eff}(\phi)\approx
V_{\text{inf}}+c_{\phi}\frac{\mu^{2}f^{2}}{64\pi^{2}}\cos\left(\frac{2\phi}{f}\right).$
(55)
This is of the form of Eq. (25), with the function
$F\left(\frac{\phi}{f}\right)$ taking the trigonometric form above, and hence
all the results of Sec. 4 apply here too. As the inflaton rolls past a critical
value $\phi_{*}$ such that
$\sin\left(\frac{\phi_{*}}{f}\right)=\frac{M_{\sigma}^{2}}{\mu f},$ (56)
waterfall is triggered along $\sigma_{B}$. The fields then rapidly roll down
to the global minimum which is situated at
$\begin{split}&\frac{\phi_{\rm
min}}{f}=\frac{\pi}{2},~{}~{}\sigma_{A,\textrm{min}}=0,\\\
&\sigma_{B,\textrm{min}}=\sqrt{\frac{1}{\lambda_{\sigma}}\left(\mu
f\sin\left(\frac{\phi_{\rm
min}}{f}\right)-M_{\sigma}^{2}\right)}=\sqrt{\frac{\mu
f}{\lambda_{\sigma}}\left(1-\sin\left(\frac{\phi_{*}}{f}\right)\right)}\sim\mathcal{O}(1)\sqrt{\frac{\mu
f}{\lambda_{\sigma}}}.\end{split}$ (57)
The inflationary vacuum energy released during this waterfall transition is
given by
$V_{\text{inf}}\approx\frac{\mu^{2}f^{2}}{4\lambda_{\sigma}}\left(1-\sin\left(\frac{\phi_{*}}{f}\right)\right)^{2}\sim\mathcal{O}(1)\frac{\mu^{2}f^{2}}{\lambda_{\sigma}}.$
(58)
Thus, as mentioned earlier in Sec. 5.1, once $\phi$ is realized as a pNGB of a
$U(1)$ global symmetry as in this section, the global minimum in $\phi$ is
fixed only $\sim\mathcal{O}(1)$ away from the critical point triggering
waterfall, i.e. $\phi_{\rm min}\sim\mathcal{O}(\phi_{*})\sim\mathcal{O}(f)$.
Consequently, the parametric dependence of $V_{\text{inf}}$ (and hence $H$) on
the model parameters is obtained as in Eq. (58), which is as expected in Eq.
(37).
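As a rough cross-check of the slow-roll dynamics on the potential of Eq. (55), the sketch below computes $\eta$ and the e-fold count for an illustrative parameter point with $\phi_{*}/f=\pi/5$. Only $\mathcal{O}(1)$-level agreement with the paper's full numerical treatment (which also fixes $n_{s}=0.9649$ and includes the CW terms) should be expected.

```python
import numpy as np
from scipy.integrate import quad

def eta_sr(phi, c_phi, mu, f):
    """Slow-roll eta = Mpl^2 V''/V for Eq. (55), with V ~ V_inf = 3 H^2 Mpl^2
    (units H = 1; Mpl drops out)."""
    return -c_phi * mu**2 / (48 * np.pi**2) * np.cos(2 * phi / f)

def N_e(phi_i, phi_star, c_phi, mu, f):
    """Slow-roll e-folds accumulated from phi_i to the waterfall point phi_*."""
    dV = lambda phi: c_phi * mu**2 * f / (32 * np.pi**2) * np.sin(2 * phi / f)
    return quad(lambda phi: 3.0 / dV(phi), phi_i, phi_star)[0]

f, c_phi, mu = 1e6, 6.3, 1.0                  # illustrative; mu ~ H, cf. Eq. (45)
phi_i, phi_star = 0.1 * np.pi * f, np.pi * f / 5
print(f"N_e  = {N_e(phi_i, phi_star, c_phi, mu, f):.0f}")   # ~60
print(f"n_s ~ {1 + 2 * eta_sr(phi_i, c_phi, mu, f):.3f}")   # red-tilted
```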
Integrating out the heavy $\sigma$ fields at 1-loop level, similar to Eq.
(48), gives rise to the following logarithmic dependence from the Coleman-
Weinberg potential:
$\begin{split}V_{\rm
CW}\left(\theta\equiv\frac{\phi}{f}\right)=\frac{\mu^{2}f^{2}}{64\pi^{2}}&\left[\left(\sin\theta_{*}+\sin\theta\right)^{2}\ln{(\sin\theta_{*}+\sin\theta)}\right.\\\
&+\left.\left(\sin\theta_{*}-\sin\theta\right)^{2}\ln{(\sin\theta_{*}-\sin\theta)}\right].\end{split}$
(59)
As mentioned earlier in Sec. 5.2, this can give considerable effects only when
naturalness is saturated for $m_{\phi}^{2}$, i.e. for $c_{\phi}\approx 1$.
These effects, numerically computed in Fig. 1, are however so modest as to be
difficult to resolve by eye.
Figure 1: Available parameter space in the $U(1)$ version of our Twinflation
model (see Sec. 5.4) exhibiting naturalness and EFT-control:
$\phi_{*}/f=\pi/5$ for concreteness. The right and bottom edges of the shaded
region correspond to naturalness constraints on $m_{\phi}$ and
$\lambda_{\sigma}$, respectively. The top and left edges correspond to the
cutoffs $\Lambda_{\phi}$ and $\Lambda_{\sigma}$ being sub-Planckian,
respectively. $\Lambda_{\phi}\approx\Lambda_{\sigma}$ on the dotted line. The
parameter $c_{\phi}$ varies from 1 to $\sim 10^{4}$ as we move from the right to
the left edge, which makes the loop contributions to the inflaton potential
progressively smaller compared to the tree-level term. The dashed lines show contours
for $H=10^{7},10^{9},10^{11}$ GeV, corresponding to $r\approx
10^{-15},10^{-11},10^{-7}$, respectively. $n_{s}$ is fixed to 0.9649, its
central value from the Planck CMB constraints Planck2018Inflation . Varying
its value up or down by a percent shifts the entire blue region slightly to
the left or right, respectively, by about a percent, which is hardly resolvable
by eye.
Fig. 1 shows the available parameter space in our Twinflation model described
by Eq. (50), satisfying the requirements of naturalness and EFT control, and
giving a viable hybrid inflation model. Here we have fixed
$\frac{\phi_{*}}{f}=\frac{\pi}{5}$ for concreteness. This then gives the
initial field value $\frac{\phi_{i}}{f}\approx 0.1\pi$ to get 60 e-foldings,
using the effective potential in Eq. (55) and the analysis in Sec. 4 (this
value changes slightly for different $c_{\phi}$, i.e. when including the CW
potential from Eq. (59)). This gives the trigonometric
functions $\sim\mathcal{O}(1)$ for both $\frac{\phi_{i}}{f}$ and
$\frac{\phi_{*}}{f}$, as alluded to before in Sec. 4. The other essential
parameters $M_{\sigma}^{2}$ and $\lambda_{\sigma}$ are then fixed by the model
requirements in Eqs. (56), (58), and (28). The right and bottom edges of the
allowed parameter space correspond to naturalness constraints on $m_{\phi}$
(see Eq. (45)) and $\lambda_{\sigma}$ (see Eq. (41)), respectively. The top
and left edges correspond to the cutoffs in the $\phi$ and $\sigma$ sectors
being sub-Planckian, respectively. Here we consider $\Lambda_{\phi}\approx
4\pi f,\Lambda_{\sigma}\approx 4\pi\frac{M_{\sigma}}{\sqrt{\lambda_{\sigma}}}$
saturating the constraints in Eq. (43). Thus, the shaded region satisfies our
naturalness and EFT consistency requirements. $n_{s}$ is fixed to 0.9649, its
central value from the Planck CMB constraints Planck2018Inflation . Varying
its value up or down by a percent shifts the entire allowed region slightly to
the left or right, respectively, by about a percent. The dashed lines show
contours for $H$ which are mostly horizontal (i.e. constant $f/H$, see Eq.
(28)), but bending slightly upwards close to the right edge due to the CW
potential contribution. As we can see in the figure, $\Lambda_{\phi}$ being
sub-Planckian restricts the model to realize $H\lesssim 10^{11}$ GeV, while
the $\lambda_{\sigma}$-naturalness gives a lower bound on $H$ as $\sim 10^{6}$
GeV as expected from Eq. (47). The two cutoffs
$\Lambda_{\phi},\Lambda_{\sigma}$ are approximately equal on the dotted line.
Thus, as the figure shows, demanding $\Lambda_{\phi}\approx\Lambda_{\sigma}$
can only realize $H$ bigger than $\sim 10^{10}$ GeV. Only a small part of the
parameter space lying above this dotted line corresponds to
$\Lambda_{\phi}>\Lambda_{\sigma}$, while a majority of the allowed region has
$\Lambda_{\sigma}>\Lambda_{\phi}$.
The Lagrangian of the $U(1)$ model in Eq. (50) contains terms only up to
dimension-4. The full theory will also include higher-dimensional terms
respecting the symmetry in Eq. (51) and the spurion analysis mentioned
thereafter, which will thus be of the form
$\delta\mathcal{L}_{\rm UV,non-ren.}\ni
c_{nm}\frac{\left(\mu\Phi\right)^{n}\left(\sigma_{i}^{2}\right)^{m}}{\left(\Lambda^{2}\right)^{n+m-2}}~{}~{}.$
(60)
Here, the exponents $n,m$ and the combinations of $\sigma_{A,B}$ in
$\sigma_{i}^{2}$ will be such that they respect the symmetry in Eq. (51).
Also, for simplicity, we consider here a single UV cutoff scale $\Lambda$
suppressing these non-renormalizable terms. (It can be shown that even
with different cutoff scales for the $\phi$ and $\sigma$ fields, analogous to what
is shown here for $\Lambda_{\phi}\sim\Lambda_{\sigma}$, these non-
renormalizable terms do not pose any danger to our model.) In order to satisfy
naturalness in the $\sigma$-potential, it suffices to have
$c_{0m}\lesssim\left(16\pi^{2}\right)^{m-2}\lambda_{\sigma}$. This mild
requirement on the coefficients $c_{nm}$ in Eq. (60), i.e. $c_{nm}\sim
c_{0m}\lesssim\left(16\pi^{2}\right)^{m-2}\lambda_{\sigma}$, is sufficient to
render the entire model natural, even at the non-renormalizable level, as
illustrated below. The most vulnerable terms would be the super-renormalizable
terms in Eq. (50), i.e. the bare and $\Phi$-dependent $\sigma$ mass terms,
which we collectively refer to as $M_{\sigma}^{2}(\Phi)$. The higher-
dimensional terms in Eq. (60) can contribute to $M_{\sigma}^{2}(\Phi)$ at
loop- or tree-level (i.e. after setting some fields to their VEVs) as
$\frac{\delta
M_{\sigma}^{2}(\Phi)}{M_{\sigma}^{2}}\sim\frac{c_{nm}(\mu\Phi)^{n}\cdot\langle\sigma\rangle^{2(m-1)}}{M_{\sigma}^{2}\cdot\Lambda^{2(n+m-2)}}\lesssim\frac{(16\pi^{2})^{m-2}(\mu\Phi)^{n}\cdot\langle\sigma\rangle^{2(m-2)}}{\Lambda^{2(n+m-2)}}\sim\left(\frac{\mu\Phi}{\Lambda^{2}}\right)^{n}\lesssim\left(\frac{\mu}{\Lambda}\right)^{n},$
(61)
which is negligible due to the suppression from
$\frac{\mu}{\Lambda}\lesssim\frac{H}{4\pi f}\lesssim 10^{-6}$. Also, any
higher-dimensional terms in Eq. (50) involving $|\Phi|^{2}$ will be sub-
dominant since they will come with suppression factors of at least
$\frac{|\Phi|^{2}}{\Lambda^{2}}\sim\frac{1}{16\pi^{2}}$.
## 6 Addressing the cosmological domain wall problem
Spontaneous breaking of an exact discrete symmetry, in our model
$\sigma_{i}\rightarrow-\sigma_{i}$, during cosmological evolution, will lead
to the formation of domains (with $\langle\sigma_{B}\rangle>0$ or $<0$) after
the end of inflation, separated by cosmologically stable domain walls (DW).
The energy density in these domain walls redshifts slower than both matter and
radiation. This gives rise to a late-time universe dominated by domain walls,
contrary to what is observed during Big-Bang Nucleosynthesis. This is the so-
called “cosmological domain wall problem” DomainWallProblem_Zeldovich:1974uw ,
which our Twinflation model faces for an exact
$\sigma_{i}\rightarrow-\sigma_{i}$ symmetry. The $\sigma$ fields could be
charged under a $U(1)$ gauge symmetry, which then may not give rise to domain
walls, but instead form the much less constrained cosmic strings (see e.g.
Vilenkin:1982ks ; Hindmarsh:2011qj ; Auclair:2019wcv ). However, this approach
requires additional fields and structures. Here we will consider a simple
solution to the domain wall problem via small explicit breaking of the
discrete symmetry.
We first note that the $\sigma_{i}\rightarrow-\sigma_{i}$ symmetry is not an
essential ingredient of our model and has been used so far only for simplicity. We
can hence add a small soft breaking of this symmetry in Eq. (31) or (50) via
$V(\phi,\sigma_{i})\ni M\sigma_{i}^{3},$ (62)
where $M$ is a dimensionful spurion of this $\sigma$-parity breaking. This
leads to a bias between the previously degenerate vacua as
$\frac{\Delta V_{\rm
bias}}{V_{\text{inf}}}\sim\frac{M}{M_{\sigma}\sqrt{\lambda_{\sigma}}},$ (63)
where the denominator $V_{\text{inf}}$ is also the typical size of the
$\sigma$-potential. This bias provides a pressure force acting
against the surface tension of the walls, eventually leading to their
annihilation. Then, demanding that this annihilation of domain walls happens
before their cosmological energy domination, we need
DomainWallBias_Vilenkin:1981zs ; DomainWallBias_Gelmini:1988sf ;
DomainWalls_Saikawa:2017hiv
$\mathcal{O}(1)\gtrsim\frac{\Delta V_{\rm
bias}}{V_{\text{inf}}}\gtrsim\frac{M_{\sigma}^{2}}{\lambda_{\sigma}M_{\text{pl}}^{2}},$
(64)
which can be realized in our model, using Eq. (63), by having
$M_{\sigma}\sqrt{\lambda_{\sigma}}\gtrsim
M\gtrsim\frac{M_{\sigma}^{3}}{\sqrt{\lambda_{\sigma}}M_{\text{pl}}^{2}}.$ (65)
However, the cubic term in Eq. (62) radiatively generates the following
$\sigma$-tadpole:
$V(\phi,\sigma_{i})\ni M\frac{\Lambda_{\sigma}^{2}}{16\pi^{2}}\sigma_{i}\sim
M\frac{M_{\sigma}^{2}}{\lambda_{\sigma}}\sigma_{i}.$ (66)
Tadpole terms of this order shift the minimum in $\sigma_{i}$ in a
$\phi$-dependent way as
$\delta\sigma_{i}(\phi)\sim\frac{MM_{\sigma}^{2}}{\lambda_{\sigma}M_{\sigma_{i}}^{2}(\phi)}\sim\frac{M}{\lambda_{\sigma}}\left(1\pm\frac{\sin(\phi/f)}{\sin(\phi_{*}/f)}\right)^{-1},$
(67)
where $M_{\sigma_{i}}^{2}(\phi)=M_{\sigma}^{2}\pm\mu f\sin\left(\phi/f\right)$
is the $\phi$-dependent mass-squared for $\sigma_{i}$ (see Eq. (52)). This
shift contributes to the effective inflaton potential as follows. (As
$\phi\rightarrow\phi_{*}$, i.e. towards the end of inflation, the expressions
in Eqs. (67) and (68) seem to diverge. However, this is because the effective
mass of $\sigma_{B}$ vanishes at $\phi_{*}$, and hence we have to balance the
$\sigma$-tadpole against the $\sigma$-cubic term, which modifies these expressions
close to $\phi_{*}$.)
$\delta V_{\rm
eff}(\phi)\sim\sum_{i=A,B}\frac{M^{2}M_{\sigma}^{4}}{\lambda_{\sigma}^{2}M_{\sigma_{i}}^{2}(\phi)}\sim\frac{M^{2}M_{\sigma}^{2}}{\lambda_{\sigma}^{2}}\left(1-\frac{\sin^{2}(\phi/f)}{\sin^{2}(\phi_{*}/f)}\right)^{-1}.$
(68)
Figure 2: Addressing the cosmological domain wall problem in Twinflation: The
blue region (same as in Fig. 1) satisfies our naturalness and EFT consistency
requirements. Small explicit breaking of $\sigma$-parity (see Eq. (62)) solves
the domain wall problem. Its contribution to $V_{\rm eff}(\phi)$, via the
natural value of $\sigma$-tadpole, is sub-dominant in the green region shown
above.
Demanding that this contribution is sub-dominant to the inflaton potential
implies
$1\gtrsim\frac{\delta V_{\rm eff}(\phi)}{V_{\rm
eff}(\phi)}\sim\frac{16\pi^{2}M^{2}}{c_{\phi}\lambda_{\sigma}^{2}M_{\sigma}^{2}}\gtrsim\frac{16\pi^{2}M_{\sigma}^{4}}{c_{\phi}\lambda_{\sigma}^{3}M_{\text{pl}}^{4}},$
(69)
where in the last step we have used Eq. (65). Then, using our model
requirements –
$\lambda_{\sigma}\sim\frac{M_{\sigma}^{4}}{H^{2}M_{\text{pl}}^{2}},M_{\sigma}^{2}\sim\mu
f,\frac{f}{H}\sim 10^{6}$ – we get the constraint for the allowed parameter
region as
$\sqrt{c_{\phi}}\frac{\mu^{2}}{fM_{\text{pl}}}\gtrsim 10^{-17}.$ (70)
This is evaluated numerically and shown in Fig. 2 as the green region. We can
also note here that this now gives a lower bound on the Hubble scale as
$H\gtrsim 10^{7}\textrm{GeV},$ (71)
which is $\sim\mathcal{O}(10)$ bigger than that obtained in Eq. (47).
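The chain of estimates leading to this bound can be reproduced with a few lines of arithmetic. In the sketch below, $c_{\phi}\mu^{2}$ is fixed from $m_{\phi}^{2}\sim\eta H^{2}$ (Eqs. (44), (49)) with an assumed $\eta\simeq 0.0175$ from the red tilt; the prefactors are illustrative $\mathcal{O}(1)$ choices.

```python
import numpy as np

M_PL = 2.4e18  # reduced Planck mass, GeV

def dw_bound_ok(H, c_phi=1.0, f_over_H=1e6, eta=0.0175):
    """Check the domain-wall constraint of Eq. (70), with c_phi*mu^2 fixed
    by m_phi^2 ~ eta*H^2 via Eqs. (44) and (49)."""
    c_mu2 = 16 * np.pi**2 * eta * H**2          # = c_phi * mu^2
    lhs = np.sqrt(c_phi) * (c_mu2 / c_phi) / (f_over_H * H * M_PL)
    return lhs >= 1e-17

for H in np.logspace(6, 9, 7):                  # GeV
    print(f"H = {H:8.1e} GeV: {'allowed' if dw_bound_ok(H) else 'excluded'}")
# the allowed/excluded boundary comes out near 10^7 GeV, as in Eq. (71)
```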
Thus, the cosmological domain wall problem can be solved in our model by
introducing a small explicit breaking of $\sigma$-parity at the cost of some
reduction in the allowed parameter space as shown in Fig. 2. One might explore
more general ways of explicit $\sigma$-parity breaking than the simple one we
considered here via Eq. (62), possibly allowing for viable hybrid inflation in
the entire blue region. We leave this exploration for a future study.
## 7 Discussion
In the present work, we build a viable, natural, and EFT-controlled model of
low-scale hybrid inflation, “Twinflation”. Here, inflation happens somewhat
near the hilltop of the effective inflaton potential, although without any
fine-tuning of the initial position. This gives rise to the red tilt in the
scalar perturbations, consistent with the observations. The quadratic
sensitivity to the UV cutoff scales in the inflaton potential, induced by its
necessarily non-derivative coupling with the waterfall field, is removed by a
twin symmetry. All the parameters take (technically) natural values, without
any fine-tuning. All the mass scales and field values are below the respective
UV cutoff scales and also the Planck scale, thus rendering the model under
(straightforward) EFT control. This model can realize low-scale inflation with
the Hubble scale as low as $\sim 10^{6}$ GeV (see Fig. 1). It is therefore
easily consistent with the smallness of the yet-unobserved primordial tensor
fluctuations, which could be unobservably small ($r\sim 10^{-16}$) for the
lowest Hubble scales realized in our model.
Spontaneous breaking of the discrete symmetry
$\sigma_{i}\rightarrow-\sigma_{i}$ towards the end of inflation will lead to
cosmic domain wall formation in the post-inflationary universe. One simple way
to be compatible with our universe on large scales at late times is to
demand that such domain walls annihilate before they start dominating
the cosmic energy density. As discussed in Sec. 6, we show that this can be
easily implemented in our model with a small explicit breaking of the
$\sigma$-parity, which we only considered for technical simplification in any
case. This, however, can be achieved only in the parameter space as shown in
Fig. 2, allowing for the smallest inflationary Hubble scale to be $\sim
10^{7}$ GeV. We expect that allowing for more general ways of explicit
$\sigma$-parity breaking can possibly relax this constraint, which we leave
for a future study. It is also interesting that the domain wall dynamics can
give rise to a stochastic gravitational wave (GW) background observable in
future GW experiments. See DomainWalls_Saikawa:2017hiv for a review.
Hybrid inflation models typically require fine-tuned couplings. However, our
model does not require any fine-tuning in the parameters to achieve radiative
stability. With regards to the initial conditions, we also showed that there
is no tuning required in the initial inflaton field location, i.e. it need not
start very close to the hilltop and can have a transit of
$\sim\mathcal{O}(f)$. A large initial inflaton velocity can be compensated by
starting more uphill along the potential, up to the hilltop. However,
demanding that the field first damps to the terminal slow-roll velocity and
then gives the required number of e-foldings of slow-roll inflation before
entering the waterfall phase, we see that the initial velocity has to be
sufficiently small: $\frac{\dot{\phi}}{f^{2}}\lesssim\frac{H}{f}\sim 10^{-6}$. (See also
Buchmuller:2014epa for similar constraints.) Furthermore, there is the
question of whether inflation can begin in an inhomogeneous spacetime.
Numerical simulations show that whereas large-field inflation models are less
susceptible to inhomogeneities preventing the onset of inflation, small-field
inflation models may be more so Goldwirth:1991rj ; Laguna:1991zs ;
KurkiSuonio:1993fg ; Easther:2014zga ; East:2015ggf ; Clough:2016ymm . These
issues can however be addressed, for example, by invoking tunneling from a
prior metastable vacuum in the landscape of the theory, which naturally gives
rise to a state with small field velocity and inhomogeneity (see e.g.
Freivogel:2005vv ; Dutta:2011fe ; Guth:2013sya ; Masoumi:2017gmh ).
It would obviously be very interesting if we could directly observe the
waterfall field(s) ($\sigma_{i}$) via their mediation of primordial non-
Gaussianity (NG), using the idea of “Cosmological Collider Physics”
Chen:2009zp ; Arkani-Hamed:2015bza . Ordinarily such signals would be strongly
“Boltzmann”-suppressed by $e^{-\pi M_{\sigma}/H}$, since $M_{\sigma}\gg H$.
However, the recently discussed “scalar chemical potential” mechanism
NG_with_chemical_potential_Bodas:2020yho may eliminate this suppression and
be compatible with our twin symmetry structure. We leave an exploration of
this to future work.
As discussed in the Introduction, a variety of UV physics scenarios may give
rise to unwanted defects or relics like monopoles, moduli, and gravitinos (see e.g.
GravitinoProblem_Ellis:1982yb ; GravitinoProblem_Ellis:1984eq ;
GravitinoProblem_Murayama_etal ; ModuliProblem_Randall:1994fr ). Different UV
scenarios can also exhibit a meta-stable high temperature phase in which the
universe can remain stuck if the phase transition to the familiar low
temperature phase fails to complete RSPT_Creminelli:2001th . Reheating of the
universe at a low temperature, following inflation with a low Hubble scale,
might help to address these issues in a straightforward way. Another
motivation towards low-scale inflation can come from the constraints on
isocurvature perturbations sourced by (QCD) axionic dark matter (see e.g.
Planck2018Inflation ; Axion_Cosmology_Review_Marsh:2015xka ;
ALPs_isocurvature_Diez-Tejedor:2017ivd ). If the Peccei-Quinn symmetry is
broken during inflation, axions source dark matter isocurvature perturbations,
which are stronger for higher $H$ (for any given axion decay constant
$f_{a}$); their non-observation thus prefers low-scale inflation.
Furthermore, with current and future collider experiments, such as a future
$\sim\mathcal{O}(100)$ TeV collider, we might have the opportunity to
investigate the physics during and after such a low-scale inflation in
laboratory searches too, along with the cosmological ones!
###### Acknowledgements.
We are grateful to Anson Hook for useful conversation. KD and RS are supported
in part by the NSF grant PHY-1914731 and by the Maryland Center for
Fundamental Physics. SK is supported in part by the NSF grants PHY-1914731,
PHY-1915314 and the U.S. DOE Contract DE-AC02-05CH11231.
## References
* (1) D. Baumann, Inflation, in Theoretical Advanced Study Institute in Elementary Particle Physics: Physics of the Large and the Small, pp. 523–686, 2011. arXiv:0907.5424.
* (2) D. Baumann and L. McAllister, Inflation and String Theory. Cambridge Monographs on Mathematical Physics. Cambridge University Press, 5, 2015.
* (3) Planck Collaboration, Y. Akrami et al., Planck 2018 results. X. Constraints on inflation, arXiv:1807.06211.
* (4) H. Hui et al., BICEP Array: a multi-frequency degree-scale CMB polarimeter, Proc. SPIE Int. Soc. Opt. Eng. 10708 (2018) 1070807, [arXiv:1808.00568].
* (5) Simons Observatory Collaboration, P. Ade et al., The Simons Observatory: Science goals and forecasts, JCAP 02 (2019) 056, [arXiv:1808.07445].
* (6) K. Abazajian et al., CMB-S4 Science Case, Reference Design, and Project Plan, arXiv:1907.04473.
* (7) M. Hazumi et al., LiteBIRD: A Satellite for the Studies of B-Mode Polarization and Inflation from Cosmic Background Radiation Detection, J. Low Temp. Phys. 194 (2019), no. 5-6 443–452.
* (8) NASA PICO Collaboration, S. Hanany et al., PICO: Probe of Inflation and Cosmic Origins, arXiv:1902.10541.
* (9) D. S. Goldwirth and T. Piran, Initial conditions for inflation, Phys. Rept. 214 (1992) 223–291.
* (10) M. Dine and L. Pack, Studies in Small Field Inflation, JCAP 06 (2012) 033, [arXiv:1109.2079].
* (11) R. Brandenberger, Initial conditions for inflation — A short review, Int. J. Mod. Phys. D 26 (2016), no. 01 1740002, [arXiv:1601.01918].
* (12) A. Linde, On the problem of initial conditions for inflation, Found. Phys. 48 (2018), no. 10 1246–1260, [arXiv:1710.04278].
* (13) D. Chowdhury, J. Martin, C. Ringeval, and V. Vennin, Assessing the scientific status of inflation after Planck, Phys. Rev. D 100 (2019), no. 8 083537, [arXiv:1902.03951].
* (14) A. D. Linde, Hybrid inflation, Phys. Rev. D49 (1994) 748–754, [astro-ph/9307002].
* (15) E. J. Copeland, A. R. Liddle, D. H. Lyth, E. D. Stewart, and D. Wands, False vacuum inflation with Einstein gravity, Phys. Rev. D 49 (1994) 6410–6433, [astro-ph/9401011].
* (16) G. R. Dvali, Q. Shafi, and R. K. Schaefer, Large scale structure and supersymmetric inflation without fine tuning, Phys. Rev. Lett. 73 (1994) 1886–1889, [hep-ph/9406319].
* (17) P. Binetruy and G. R. Dvali, D term inflation, Phys. Lett. B 388 (1996) 241–246, [hep-ph/9606342].
* (18) E. Halyo, Hybrid inflation from supergravity D terms, Phys. Lett. B 387 (1996) 43–47, [hep-ph/9606423].
* (19) R. Kallosh and A. D. Linde, P term, D term and F term inflation, JCAP 10 (2003) 008, [hep-th/0306058].
* (20) D. E. Kaplan and N. J. Weiner, Little inflatons and gauge inflation, JCAP 0402 (2004) 005, [hep-ph/0302014].
* (21) N. Arkani-Hamed, H.-C. Cheng, P. Creminelli, and L. Randall, Pseudonatural inflation, JCAP 0307 (2003) 003, [hep-th/0302034].
* (22) N. Arkani-Hamed, A. G. Cohen, and H. Georgi, Electroweak symmetry breaking from dimensional deconstruction, Phys. Lett. B 513 (2001) 232–240, [hep-ph/0105239].
* (23) R. Sundrum and C. M. Wells, Warped Hybrid Inflation, JHEP 02 (2010) 097, [arXiv:0909.3254].
* (24) G. G. Ross, G. German, and J. A. Vazquez, Hybrid Natural Inflation, JHEP 05 (2016) 010, [arXiv:1601.03221].
* (25) N. Kaloper, M. König, A. Lawrence, and J. H. Scargill, On Hybrid Monodromy Inflation (Hic Sunt Dracones), arXiv:2006.13960.
* (26) F. Carta, N. Righi, Y. Welling, and A. Westphal, Harmonic Hybrid Inflation, arXiv:2007.04322.
* (27) Z. Chacko, H.-S. Goh, and R. Harnik, The Twin Higgs: Natural electroweak breaking from mirror symmetry, Phys. Rev. Lett. 96 (2006) 231802, [hep-ph/0506256].
* (28) N. Craig, S. Koren, and T. Trott, Cosmological Signals of a Mirror Twin Higgs, JHEP 05 (2017) 038, [arXiv:1611.07977].
* (29) D. J. E. Marsh, Axion Cosmology, Phys. Rept. 643 (2016) 1–79, [arXiv:1510.07633].
* (30) A. Diez-Tejedor and D. J. E. Marsh, Cosmological production of ultralight dark matter axions, arXiv:1702.02116.
* (31) J. R. Ellis, A. D. Linde, and D. V. Nanopoulos, Inflation Can Save the Gravitino, Phys. Lett. B 118 (1982) 59–64.
* (32) J. R. Ellis, J. E. Kim, and D. V. Nanopoulos, Cosmological Gravitino Regeneration and Decay, Phys. Lett. B 145 (1984) 181–186.
* (33) T. Moroi, H. Murayama, and M. Yamaguchi, Cosmological constraints on the light stable gravitino, Phys. Lett. B 303 (1993) 289–294.
* (34) L. Randall and S. D. Thomas, Solving the cosmological moduli problem with weak scale inflation, Nucl. Phys. B 449 (1995) 229–247, [hep-ph/9407248].
* (35) F. Bezrukov and D. Gorbunov, Light inflaton Hunter’s Guide, JHEP 05 (2010) 010, [arXiv:0912.0390].
* (36) R. Allahverdi, B. Dutta, and Y. Santoso, MSSM inflation, dark matter, and the LHC, Phys. Rev. D 82 (2010) 035012, [arXiv:1004.2741].
* (37) C. Boehm, J. Da Silva, A. Mazumdar, and E. Pukartas, Probing the Supersymmetric Inflaton and Dark Matter link via the CMB, LHC and XENON1T experiments, Phys. Rev. D 87 (2013), no. 2 023529, [arXiv:1205.2815].
* (38) J. Bramante, J. Cook, A. Delgado, and A. Martin, Low Scale Inflation at High Energy Colliders and Meson Factories, Phys. Rev. D 94 (2016), no. 11 115012, [arXiv:1608.08625].
* (39) D. H. Lyth and E. D. Stewart, More varieties of hybrid inflation, Phys. Rev. D 54 (1996) 7186–7190, [hep-ph/9606412].
* (40) D. H. Lyth, What would we learn by detecting a gravitational wave signal in the cosmic microwave background anisotropy?, Phys. Rev. Lett. 78 (1997) 1861–1863, [hep-ph/9606387].
* (41) J. E. Kim, H. P. Nilles, and M. Peloso, Completing natural inflation, JCAP 01 (2005) 005, [hep-ph/0409138].
* (42) Planck Collaboration, N. Aghanim et al., Planck 2018 results. VI. Cosmological parameters, arXiv:1807.06209.
* (43) Z. G. Berezhiani, A. D. Dolgov, and R. N. Mohapatra, Asymmetric inflationary reheating and the nature of mirror universe, Phys. Lett. B 375 (1996) 26–36, [hep-ph/9511221].
* (44) Y. Zeldovich, I. Kobzarev, and L. Okun, Cosmological Consequences of the Spontaneous Breakdown of Discrete Symmetry, Zh. Eksp. Teor. Fiz. 67 (1974) 3–11.
* (45) A. Vilenkin and A. E. Everett, Cosmic Strings and Domain Walls in Models with Goldstone and PseudoGoldstone Bosons, Phys. Rev. Lett. 48 (1982) 1867–1870.
* (46) M. Hindmarsh, Signals of Inflationary Models with Cosmic Strings, Prog. Theor. Phys. Suppl. 190 (2011) 197–228, [arXiv:1106.0391].
* (47) P. Auclair et al., Probing the gravitational wave background from cosmic strings with LISA, JCAP 04 (2020) 034, [arXiv:1909.00819].
* (48) A. Vilenkin, Gravitational Field of Vacuum Domain Walls and Strings, Phys. Rev. D 23 (1981) 852–857.
* (49) G. B. Gelmini, M. Gleiser, and E. W. Kolb, Cosmology of Biased Discrete Symmetry Breaking, Phys. Rev. D 39 (1989) 1558.
* (50) K. Saikawa, A review of gravitational waves from cosmic domain walls, Universe 3 (2017), no. 2 40, [arXiv:1703.02576].
* (51) W. Buchmüller, V. Domcke, K. Kamada, and K. Schmitz, Hybrid Inflation in the Complex Plane, JCAP 07 (2014) 054, [arXiv:1404.1832].
* (52) P. Laguna, H. Kurki-Suonio, and R. Matzner, Inhomogeneous inflation: The Initial value problem, Phys. Rev. D 44 (1991) 3077–3086.
* (53) H. Kurki-Suonio, P. Laguna, and R. A. Matzner, Inhomogeneous inflation: Numerical evolution, Phys. Rev. D 48 (1993) 3611–3624, [astro-ph/9306009].
* (54) R. Easther, L. C. Price, and J. Rasero, Inflating an Inhomogeneous Universe, JCAP 08 (2014) 041, [arXiv:1406.2869].
* (55) W. E. East, M. Kleban, A. Linde, and L. Senatore, Beginning inflation in an inhomogeneous universe, JCAP 09 (2016) 010, [arXiv:1511.05143].
* (56) K. Clough, E. A. Lim, B. S. DiNunno, W. Fischler, R. Flauger, and S. Paban, Robustness of Inflation to Inhomogeneous Initial Conditions, JCAP 09 (2017) 025, [arXiv:1608.04408].
* (57) B. Freivogel, M. Kleban, M. Rodriguez Martinez, and L. Susskind, Observational consequences of a landscape, JHEP 03 (2006) 039, [hep-th/0505232].
* (58) K. Dutta, P. M. Vaudrevange, and A. Westphal, The Overshoot Problem in Inflation after Tunneling, JCAP 01 (2012) 026, [arXiv:1109.5182].
* (59) A. H. Guth, D. I. Kaiser, and Y. Nomura, Inflationary paradigm after Planck 2013, Phys. Lett. B 733 (2014) 112–119, [arXiv:1312.7619].
* (60) A. Masoumi, A. Vilenkin, and M. Yamada, Initial conditions for slow-roll inflation in a random Gaussian landscape, JCAP 07 (2017) 003, [arXiv:1704.06994].
* (61) X. Chen and Y. Wang, Quasi-Single Field Inflation and Non-Gaussianities, JCAP 04 (2010) 027, [arXiv:0911.3380].
* (62) N. Arkani-Hamed and J. Maldacena, Cosmological Collider Physics, arXiv:1503.08043.
* (63) A. Bodas, S. Kumar, and R. Sundrum, The Scalar Chemical Potential in Cosmological Collider Physics, arXiv:2010.04727.
* (64) P. Creminelli, A. Nicolis, and R. Rattazzi, Holography and the electroweak phase transition, JHEP 03 (2002) 051, [hep-th/0107141].
Optimal conditions for multiplexing information into ring-core optical fibers
S. Rojas-Rojas,1,2 G. Cañas,3 G. Saavedra,4,* E. S. Gómez,1,2 S. P. Walborn,1,2 G. Lima,1,2
1Departamento de Física, Universidad de Concepción, 160-C Concepción, Chile
2 Millennium Institute for Research in Optics, Universidad de Concepción, 160-C Concepción, Chile
3Departamento de Física, Universidad del Bío-Bío, Collao 1202, 5-C Concepción, Chile
4Departamento de Ingeniería Eléctrica, Universidad de Concepción, 160-C Concepción, Chile
In optical communications, space-division multiplexing is a promising strategy to augment the fiber network capacity. It relies on modern fiber designs that support the propagation of multiple spatial modes. One of these fibers, the ring-core fiber (RCF), is able to propagate modes that carry orbital angular momentum (OAM), and has been shown to enhance not only classical, but also quantum communication systems. Typically, the RCF spatial modes are used as orthogonal transmission channels for data streams that are coupled into the fiber using different Laguerre-Gaussian (LG) beams. Here, we study the optimal conditions to multiplex information into ring-core fibers in this scheme. We determine which are the most relevant LG beams to be considered, and how their coupling efficiency can be maximized by properly adjusting the beam width with respect to the fiber parameters. Our results show that the coupling efficiency depends upon the OAM value, and that this can limit the achievable transmission rates. In this regard, we show that LG beams are not the optimal choice to couple information into RCF. Rather, another class of OAM-carrying beam, the perfect vortex beam, allows for nearly perfect coupling efficiencies for all spatial modes supported by these fibers.
§ INTRODUCTION
Over the last 50 years, the capability of fiber optic technology to deal with an ever-growing demand for higher communication rates has fueled a revolution in the telecommunication and networking industries, as well as in science and engineering. Using available techniques for time-, polarization-, and wavelength-division signal multiplexing, high-capacity communication systems have been implemented using single-mode fibers (SMFs) [1, 2, 3, 4, 5, 6, 7]. However, nowadays the information capacity carried by SMFs is rapidly approaching its physical limit [8, 9, 10], a problem known as the “capacity crunch”. One of the main proposals to overcome this limiting issue is the use of multiple spatial channels to multiplex signals into optical fibers, in addition to the aforementioned techniques [11, 8]. To achieve this, novel optical fibers that support the propagation of several spatial modes have been under development. To date, at least three main fiber types can be found, namely, few-mode fibers (FMFs) [12, 13], multi-core fibers (MCFs) [14], and ring-core fibers (RCFs) [15].
Ring-core fibers (see Fig. <ref>) have been successful in this context, with demonstrations showing that the spatial modes of these fibers can be used to enhance not only classical [16, 17, 18], but also quantum communication links [19, 20, 21, 22]. The spatial modes of a RCF can carry orbital angular momentum (OAM), and are usually excited using independent data streams propagating in different free-space Laguerre-Gaussian (LG) beams [16, 23, 24].
However, since there are several spatial modes supported by the fiber, the optical configuration adopted to couple the beams into the RCF may lead to drastic differences in the coupling efficiencies of each input signal. For instance, an LG beam, characterized by radial ($p$) and azimuthal ($\ell$) numbers, is a helically phased beam containing an azimuthal phase term $e^{i\ell \phi}$, where each photon carries OAM $\ell\hbar$, with $\ell$ being the topological charge of the beam and $\phi$ its azimuthal angle [25, 26]. The LG beam width depends on both of these numbers. For constant radial number, the ring radius scales with $\sqrt{|\ell|}$. The spatial eigenmodes of the RCF, on the other hand, have a fixed width parameter defined only by the core radius, and consequently an asymmetry in the coupling efficiencies as a function of $\ell$ is observed for a given optical configuration. This results in space-division multiplexing communication schemes with channels that may have drastically different overall transmissions, which in general decreases the achievable communication rates. For quantum communication, asymmetric coupling efficiencies result in undesired state transformations performed by the fiber, leading to high quantum error rates.
Figure: (a) Schematic of a ring-core fiber. Light propagates through the annular core with internal and external radii $b$ and $a$, respectively, with refractive index $n_{\rm co}$, embedded in a cladding with refractive index $n_{\rm cl}$. (b) Radial profile of the refractive index for the fiber parameters used in our study.
To overcome such limitations, we study the optimal conditions to multiplex information into a RCF. First, we show which are the most relevant LG beams to be considered, and how their coupling efficiency can be maximized by properly adjusting the beam width with respect to the fiber core radius. Then, we show that LG beams are not optimal to couple signals multiplexed in the spatial domain into a RCF. As an alternative to the usual LG beams, we consider the perfect vortex (PV) beams introduced by Ostrovsky et al. [27, 28], and we show that these PV beams allow for nearly perfect coupling efficiency for all spatial modes supported by RCFs.
§ THE SPATIAL MODES OF RING-CORE OPTICAL FIBERS
Figure: (a) Effective refractive index of the linearly polarized modes supported by the RCF, as a function of the internal radius $b$. The vertical blue line marks the particular configuration $b=6\,\mu$m used in our calculations, which supports the seven modes indicated by the labels, with the same radial order $m=1$ (higher-order modes are represented by the dashed curves). The example in the inset shows an enlarged region of the plot including the exact (vector) modes. (b) Example of a combination of exact vector modes giving rise to a linearly polarized mode in the limit $n_{\rm co}\simeq n_{\rm cl}$. (c) A complex superposition of orthogonal LP modes allows one to encode OAM. In this example the topological charge is $\ell=3$.
To study the coupling efficiency between a RCF and different spatial modes propagating in free-space, first we need to determine the bound modes of the fiber. The system under consideration is a ring core fiber, illustrated in Fig. <ref> (a). Let $z$ be the direction corresponding to the longitudinal axis of the RCF. The electric and magnetic components of the $j$-th bound mode carried by the fiber can then be expressed as ${\bf e}_je^{i\beta z}$ and ${\bf h}_je^{i\beta z}$ respectively, where the amplitudes ${\bf e}_j$ and ${\bf h}_j$ solve the vector eigenvalue equations derived from the source-free Maxwell equations [29]. The corresponding eigenvalue $\beta_j$ is the propagation constant of the mode. Depending on the symmetry of the particular problem, exact solutions of the vector equations can be found [30]. When the contrast $\Delta n$ between the core ($n_{\rm co}$) and cladding ($n_{\rm cl}$) refractive indices is low enough, different subsets of vector (exact) modes become nearly degenerate. This regime, commonly referred to as the weakly guiding approximation [31], is attainable with standard fabrication techniques and enables linear combinations of the exact solutions to become bound modes of the fiber. In particular, the fiber sustains linearly polarized (LP) modes whose longitudinal field components are small compared to the transverse components, and which keep the same direction of polarization across the transverse section (unlike the exact vector modes). This last property is related to the LP modes being solutions of the scalar equation $\nabla_t^2\Psi_\ell+(k^2n^2-\beta_\ell^2)\Psi_\ell=0$, derived from the exact vector equations in the limit $n_{\rm co}\simeq n_{\rm cl}$. Therefore, in the weakly guiding approximation the LP modes form a basis in which, for each propagation constant $\beta_\ell$ other than the fundamental one, there are two orthogonal states of different parity (i.e. different variation with the azimuthal angle $\phi$). Indeed, linear combinations of such degenerate basis states are also eigenmodes of the fiber with the same eigenvalue, which makes it possible to encode OAM using LP modes with azimuthal dependence $\exp(i\ell\phi)$, depicted in Fig. <ref> (c).
If the external cladding is taken to have a finite extension, the spatial profile of the LP modes can be expressed as
\begin{equation}\label{eq:lp}
\Psi_\ell(r,\phi)=S(\ell\phi)
\begin{cases}
C_1{\rm I}_\ell(wr) & 0\leq r < b\,,\\
A_1{\rm J}_\ell(ur)+A_2{\rm Y}_\ell(ur) & b\leq r < a\,,\\
C_2{\rm K}_\ell(wr)+C_3{\rm I}_\ell(wr) & a\leq r \leq c\,,
\end{cases}
\end{equation}
where $S(\ell\phi)$ is either $\cos(\ell\phi)$ or $\sin(\ell\phi)$ depending on the mode parity, while $J_\ell$, $Y_\ell$ are the Bessel functions of the first and second kind, and $I_\ell$, $K_\ell$ are the corresponding modified Bessel functions. The real coefficients $A_i$ and $C_i$ are determined by the condition that fields described by Eq. (<ref>) must be continuous and smooth across the whole fiber cross-section, and vanish at the cladding border $r=c$, in order to solve the scalar wave equation. These requirements result in the following characteristic equation for $\beta_\ell$:
\begin{equation}
\begin{split}
&\frac{{\rm I}'_\ell(wb){\rm J}_\ell(ub)-\frac{u}{w}{\rm J}'_\ell(ub){\rm I}_\ell(wb)}{{\rm I}'_\ell(wb){\rm Y}_\ell(ub)-\frac{u}{w}{\rm Y}'_\ell(ub){\rm I}_\ell(wb)}\\
&=\frac{\left({\rm K}'_\ell(wa)-\frac{{\rm K}_\ell(wc)}{{\rm I}_\ell(wc)}{\rm I}'_\ell(wa)\right){\rm J}_\ell(ua)-\frac{u}{w}{\rm J}'_\ell(ua)\left({\rm K}_\ell(wa)-\frac{{\rm K}_\ell(wc)}{{\rm I}_\ell(wc)}{\rm I}_\ell(wa)\right)}{\left({\rm K}'_\ell(wa)-\frac{{\rm K}_\ell(wc)}{{\rm I}_\ell(wc)}{\rm I}'_\ell(wa)\right){\rm Y}_\ell(ua)-\frac{u}{w}{\rm Y}'_\ell(ua)\left({\rm K}_\ell(wa)-\frac{{\rm K}_\ell(wc)}{{\rm I}_\ell(wc)}{\rm I}_\ell(wa)\right)}\,,
\end{split}
\end{equation}
with $w^2=\beta_\ell^2-k_0^2n_{\rm cl}^2$ and $u^2=k_0^2n_{\rm co}^2-\beta_\ell^2$ being the fiber parameters. Note that in the limit $c\rightarrow \infty$ we recover the characteristic equation for LP modes in annular cores [32, 33]. We start by computing the bound modes of a RCF with external radius $a=9.0\,\mu$m for different internal radii (cf. [34]). The fiber material is taken to be fused silica, such that $n_{\rm cl}=1.444$ when the wavelength of the incident light is $\lambda=1\,550$ nm. The refractive index contrast is $\Delta n=0.025$. We first solve the scalar equation to obtain the LP modes of the fiber. Our results for the effective refractive index $n_{\rm eff}$ (proportional to the propagation constant) of the modes are shown in Fig. <ref> (a). In order to ensure that the LP modes can be used to encode OAM, we need to confirm the validity of the weakly guiding approximation for the parameters in our analysis. To evaluate this, we solved the exact problem and obtained the $n_{\rm eff}$ curves for the vector solutions. The exact results are very well approximated by the LP modes, so the curves overlap in the full picture. Furthermore, we estimate the time spread of the LP modes to be $\sim0.1\,$ns per kilometer. For an internal radius $b$ equal to $6.0\,\mu$m (vertical blue line in the figure), we find that the fiber supports thirteen modes LP$_{\ell 1}$: the fundamental $\ell=0$ mode and two parities for each $\ell$ between 1 and 6. The second index $m=1$ indicates that all the modes are first-order regarding the radial distribution of the field amplitude. In Ref. [18], a RCF with an internal radius of $6.0\,\mu$m was used for space-division multiplexing, and the analysis in the following sections is made with this configuration. The example in Fig. <ref> (b) illustrates how LP modes arise as linear combinations of the vector modes in the limit where these become degenerate. In Fig. <ref> (c) we show a combination of LP modes that allows encoding OAM. For each OAM order, the topological charge is given by $\pm |\ell|$ ($\ell=3$ in the example), so it can be excited by a free-space beam with the same $\ell$ and linear polarization. Note that this is not possible when the refractive-index contrast is high, such as in fibers with doped cores [35]. In that regime, OAM modes must be constructed as phase-shifted combinations of even and odd vector modes, so the propagated field has circular polarization. Finally, we note that for certain quantum information protocols the relative time delay between modes, induced by the difference in their effective refractive index, may be relevant. In our case, the delay between the fundamental and the highest-order mode is 20.97 ps (10.82 ns) after a transmission distance of 50 cm (1 km). We leave a detailed study of the temporal delay of these modes for future work, and remark that the relative delays can be corrected at the receiver or transmitter, if necessary.
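The characteristic equation above can be solved numerically with standard special-function libraries. The sketch below is a minimal root-finding illustration under stated assumptions: the cladding radius $c$ (not specified in the text) is set arbitrarily to $40\,\mu$m, and sign changes arising from poles of the two ratios must be filtered out, here with a crude magnitude check.

```python
import numpy as np
from scipy.special import jv, yv, iv, kv, jvp, yvp, ivp, kvp
from scipy.optimize import brentq

a, b, c = 9.0, 6.0, 40.0          # radii in microns; c is an assumed value
n_cl, n_co = 1.444, 1.444 + 0.025
k0 = 2 * np.pi / 1.55             # vacuum wavenumber at 1550 nm (1/micron)

def char_eq(n_eff, l):
    """LHS minus RHS of the characteristic equation for the LP_{l,m} modes."""
    beta = k0 * n_eff
    w = np.sqrt(beta**2 - (k0 * n_cl)**2)
    u = np.sqrt((k0 * n_co)**2 - beta**2)
    q = kv(l, w * c) / iv(l, w * c)               # finite-cladding correction
    lhs = ((ivp(l, w*b) * jv(l, u*b) - (u/w) * jvp(l, u*b) * iv(l, w*b)) /
           (ivp(l, w*b) * yv(l, u*b) - (u/w) * yvp(l, u*b) * iv(l, w*b)))
    P = kvp(l, w*a) - q * ivp(l, w*a)
    Q = kv(l, w*a) - q * iv(l, w*a)
    rhs = ((P * jv(l, u*a) - (u/w) * jvp(l, u*a) * Q) /
           (P * yv(l, u*a) - (u/w) * yvp(l, u*a) * Q))
    return lhs - rhs

for l in range(7):
    grid = np.linspace(n_cl + 1e-6, n_co - 1e-6, 4000)
    vals = np.array([char_eq(n, l) for n in grid])
    for i in np.where(np.sign(vals[:-1]) != np.sign(vals[1:]))[0]:
        root = brentq(char_eq, grid[i], grid[i + 1], args=(l,))
        if abs(char_eq(root, l)) < 1e-6:          # discard pole crossings
            print(f"LP_{l}1 candidate: n_eff = {root:.5f}")
```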
§ COUPLING EFFICIENCY BETWEEN THE FIBER AND FREE-SPACE SPATIAL MODES
The OAM modes in a RCF can be used as orthogonal carriers to multiplex data streams. Typically, free-space modes are used to excite OAM fiber modes. We shall now study the coupling conditions to optimally multiplex information into ring-core fibers, using LG modes and PV beams.
§.§ Using LG beams
Figure: Overlap between the LP$_{\ell 1}$ modes sustained by the fiber and the matching Laguerre-Gaussian modes LG$_{p\ell}$ up to $p=3$, as a function of the ratio between the width $w_0$ of the LG modes and the external radius $a$ of the fiber core.
LG beams are a suitable choice of free-space modes for exciting the RCF modes, since they share the cylindrical symmetry of the fiber. Moreover, they are eigenmodes of cylindrically symmetric first-order optical systems, such as free-space propagation or spherical lenses. These modes are solutions of the paraxial Helmholtz equation and are given by [36]:
\begin{equation}\label{eq:lg}
\text{LG}_{p\ell}=M_{p\ell}\left(\frac{2r^2}{w_0^2}\right)^{\frac{|\ell|}{2}} L_p^{|\ell|}\left(\frac{2r^2}{w_0^2}\right)\exp\left(-\frac{r^2}{w_0^2}\right)\,\exp(i\ell\phi),
\end{equation}
where $L_p^{|\ell|}$ are the associated Laguerre polynomials, $w_0$ defines the width of the beam, and $M_{p\ell}$ is a normalization factor.
Using LG beams to couple OAM modes into a RCF has been successfully demonstrated, for example in [37, 18]. Nevertheless, coupling multiple OAM modes with different topological charges remains a practical challenge, as the diameter of the LG mode is proportional to $\sqrt{|\ell|}$. In practice, this means that one cannot simultaneously optimize the coupling of different free-space modes into the fiber using the same optical configuration. In [37], the optical modes share the same topological-charge magnitude $\ell=\pm 1$ with opposite wavefront rotation directions, and in [18] OAM modes were multiplexed with contiguous $\ell=\{+4,+5\}$ to simplify the coupling setup. Note that both references use systems with two spatial modes (dimensions).
To analyze the coupling efficiency of LG beams into a RCF we use the projection of the modes onto the LP basis and vice versa. As described in the previous section, OAM modes within the RCF can be generated as linear combinations of LP modes. We consider the following figure of merit, which measures the overlap between the modes:
\begin{equation}\label{eq:overlap}
\eta = \left| \iint {\rm LG}_{p\ell}(r,\phi) \Psi_{\ell}^\ast(r,\phi)\, dA\,\right |.
\end{equation}
Since both mode families share the angular dependence $\exp(i\ell\phi)$, the azimuthal integral in the definition of $\eta$ vanishes unless the indices coincide; hence LG and LP modes can be matched only if they have the same azimuthal index $\ell$.
The relationship in Eq. (<ref>) was used by Brüning et al. in Ref. [38] to study the overlap of the LG modes with the eigenmodes of a step-index fiber. Unlike that case, modes of the RCF can be matched to LG modes of different radial orders due to the ring shape of the core. Therefore, for each LP mode we evaluate the overlap with the first four radial orders ($p=0,1,2,3$) of the corresponding LG modes. The overlap between LG and LP modes can be characterized by a single parameter: the ratio $w_0/a$ between the beam width $w_0$ of the LG mode and the external core radius $a$ of the RCF. The result is shown in Fig. <ref>; a numerical sketch of this computation is given below.
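As a concrete illustration of Eq. (<ref>), the sketch below (ours, not from this work) evaluates the overlap numerically. Because the azimuthal factors cancel for equal $\ell$, only the radial integral is needed; as a stated assumption, the LP$_{\ell 1}$ radial profile is replaced by a Gaussian ring spanning the core, so the optima are qualitative only (the values in Table <ref> require the exact mode profiles).

```python
import numpy as np
from scipy.special import eval_genlaguerre
from scipy.integrate import trapezoid

a, b = 9.0, 6.0                       # core radii (micrometres)
r = np.linspace(0.0, 30.0, 6000)      # radial grid

def normalize(f):
    return f / np.sqrt(trapezoid(np.abs(f)**2 * r, r))

def lg_radial(p, ell, w0):
    # Radial part of the LG mode; the exp(i l phi) factor cancels in the
    # overlap with an LP mode of the same azimuthal index l.
    x = 2.0 * r**2 / w0**2
    return normalize(x**(abs(ell)/2) * eval_genlaguerre(p, abs(ell), x) * np.exp(-x/2))

# Assumption: Gaussian-ring stand-in for the LP_{l1} radial profile.
lp_ring = normalize(np.exp(-((r - 0.5*(a + b)) / (0.5*(a - b)))**2))

for ell in (0, 3, 6):
    ratios = np.linspace(0.2, 1.5, 261)
    eta = [abs(trapezoid(lg_radial(0, ell, w*a) * lp_ring * r, r)) for w in ratios]
    i = int(np.argmax(eta))
    print(f"l = {ell}: best w0/a ~ {ratios[i]:.2f}, overlap ~ {eta[i]:.3f}")
```

The optimal $w_0/a$ shrinks as $\ell$ grows, reproducing the trend discussed next.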
Vertical lines show the maximum coupling efficiency for a given order $\ell$. Each sub-figure presents a different radial order $p$ of the LG beams. As the azimuthal order of the LG mode is increased, the ratio $w_0/a$ needs to be decreased in order to couple light into the fiber with maximum efficiency. In general, higher coupling efficiencies are observed using LG beams with radial order $p=0$. However, large differences are observed in the optimal $w_0/a$ ratio for different values of $\ell$. For example, for $p=0$, a ratio $w_0/a$ of $1.2$ is required to optimally couple $\ell = 0$ into the RCF, while $w_0/a$ of $0.47$ is needed for $\ell = 6$. Alternatively, using $p=3$ results in lower coupling efficiencies; however, the optimal $w_0/a$ ratios of $0.35$ and $0.25$ for $\ell = 0$ and $6$ are much closer. For radial orders $p>0$, multiple peaks in the coupling efficiency are observed as $w_0/a$ is increased. This is because LG modes can have multiple rings, and as the beam width is increased the inner rings are coupled into the RCF. However, the highest overlap is observed for the outer ring of any LG mode, which usually has the highest intensity.
The maximal coupling efficiency that can be achieved for each optical configuration studied is listed in Table <ref>. Note that for $p = 0$ the efficiency increases with the azimuthal number, while for $p>0$ the opposite trend is observed. In the former case, this is because the rings get narrower as a function of $|\ell|$. In the latter case, as discussed previously, in the optimal coupling scenario only the external ring of the LG mode is coupled into the fiber for $p>0$, leading to coupling losses.
$\eta_{\rm max}$   LG$_{0\ell}$   LG$_{1\ell}$   LG$_{2\ell}$   LG$_{3\ell}$
LP$_{01}$          0.7214         0.8101         0.7353         0.6553
LP$_{11}$          0.8314         0.7878         0.7000         0.6235
LP$_{21}$          0.8860         0.7686         0.6729         0.6000
LP$_{31}$          0.9188         0.7532         0.6525         0.5818
LP$_{41}$          0.9411         0.7372         0.6355         0.5657
LP$_{51}$          0.9568         0.7258         0.6205         0.5484
LP$_{61}$          0.9690         0.7199         0.6039         0.5389
Table: Maximum overlap $\eta_{\rm max}$ between different pairs of LP$_{\ell 1}$ and LG$_{p\ell}$ modes.
§.§ Using PV beams
Figure: Overlap between the LP modes of the fiber and PV beams as a function of the PV radius $r_r$ (a) and width $w_0$ (b), in units of $a$. Vertical blue lines indicate the parameters for optimal coupling. The coupling is independent of the topological charge.
Our results show that the coupling efficiency between LG and LP modes strongly depends on the topological charge $\ell$, even for constant radial index $p$. This must be considered when coupling multiple OAM modes into a ring-core fiber, and leads to a trade-off between optimality and homogeneity of the coupling efficiencies. To eliminate this $\ell$-dependence, we now consider PV beams, whose field profile is more convenient for this type of application. PV beams are obtained as Fourier transforms of Bessel-Gaussian beams, and have a transverse field distribution given by [39]:
\begin{equation}\label{eq:pvb}
PV_{\ell}\simeq i^{\ell-1} \frac{w_g}{w_0}\exp(i\ell\phi)\exp\left(-\frac{(r-r_r)^2}{w_0^2}\right),
\end{equation}
where $w_g$ is the width of the Gaussian beam used to confine the Bessel beam, $w_0$ is the beam width at the focal plane ($w_0=2f/kw_g$), and the annular profile of the PV beam has thickness $2w_0$ and ring radius $r_r$. Thus, as long as $r_r$ is large enough for the approximation of Eq. (<ref>) to be valid, it is possible to set the field amplitude to have the desired transverse "ring" profile, independent of the value of $\ell$. We note that PV beams have also been used to demonstrate the propagation of OAM modes through specially designed ring-core and air-core fibers, which support up to $36$ and $10$ OAM modes, respectively [40, 35]. In our case, the linear polarization and the sign of the topological charge allow the fiber to support 26 modes. Unlike conventional few- and multi-mode fibers, in the RCF the spatial modes are confined within the annular core in such a way that the radial profile of the LP modes varies only slightly with $\ell$, being determined instead by the radii $a$ and $b$. Thus, it is possible to find a single pair $w_0$ and $r_r$ which optimally couples each LP$_{\ell1}$ to a PV beam PV$_{\ell}$ with the same topological charge. This is shown in Fig. <ref>, where we plot the overlap between the PV and LP modes obtained by using the PV beam of Eq. (<ref>) in relation (<ref>). In this case, an average coupling efficiency of $0.9959$ is achieved for the ratios $r_r/a=0.83$ and $w_0/a=0.235$. If instead we look for a specific PV beam optimized for each $\ell$, the maximum overlap is 0.9986 for $\ell=3$, close to the 0.9989 benchmark attainable with exact vector modes. This coupling efficiency outperforms all cases considered for the LG beams. For instance, the best coupling efficiency of LG beams is achieved for $p=0$, and is about $5\%$ lower than the coupling efficiency of PV beams into RCF modes. Furthermore, for $p>0$ the coupling efficiency is $25\%$ to $50\%$ lower than for PV beams (see Fig. <ref>).
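The corresponding computation for PV beams is simpler, because the radial profile of Eq. (<ref>) is the same Gaussian ring for every $\ell$. The sketch below (ours; same Gaussian-ring stand-in for the LP profile as before, so the optimal ratios are illustrative rather than the quoted $r_r/a=0.83$ and $w_0/a=0.235$) scans the two PV parameters:

```python
import numpy as np
from scipy.integrate import trapezoid

a, b = 9.0, 6.0
r = np.linspace(0.0, 30.0, 6000)

def normalize(f):
    return f / np.sqrt(trapezoid(np.abs(f)**2 * r, r))

lp_ring = normalize(np.exp(-((r - 0.5*(a + b)) / (0.5*(a - b)))**2))  # assumed LP profile

def pv_radial(r_r, w0):
    # Radial part of the PV beam: identical for every topological charge l,
    # so one (r_r, w0) pair serves all OAM orders simultaneously.
    return normalize(np.exp(-((r - r_r) / w0)**2))

best = max((abs(trapezoid(pv_radial(rr*a, ww*a) * lp_ring * r, r)), rr, ww)
           for rr in np.linspace(0.6, 1.0, 81)
           for ww in np.linspace(0.10, 0.50, 81))
print(f"best overlap ~ {best[0]:.4f} at r_r/a = {best[1]:.2f}, w0/a = {best[2]:.2f}")
```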
§.§ LG vs PV beams coupling efficiencies
To further compare the use of LG and PV beams to excite OAM modes in a RCF, we now examine the radial profiles of the beams and compare them to those of the bound modes of the fiber. Figure <ref> (a) shows the radial amplitude profiles of a variety of the studied beams. As discussed above, the annular structure strongly determines the radial profile of the LP modes (their average is shown by the beige region in the figure), so a single PV beam can be found (black curve) which optimally couples to all LP modes, with an average overlap of 0.9959. Profiles of the LG beams are shown for $w_0/a=0.475$, which corresponds to the highest coupling efficiency achieved in Fig. <ref> (for $\ell=6$). Visual inspection of the overlap between the amplitudes clearly shows that sub-optimal coupling is obtained for every LG mode when the beam width is fixed. The LG modes LG$_{06}$ and LG$_{05}$ have similar amplitude profiles, and thus similar efficiencies are observed (see Fig. <ref> (a)). However, LG modes with lower azimuthal order deviate greatly from the LP mode profile.
On the other hand, since the radial profile of the PV beam is independent of the azimuthal charge, the same overlap is observed between PV$_\ell$ and the bound mode LP$_{\ell 1}$ supported by the fiber. Despite this, the overlap between the PV and the LP modes is not perfect. The radial profile of the PV beam deviates slightly from that of the LP mode because the former is explicitly defined to have a Gaussian profile around $r_r$. Since the LP modes must satisfy the boundary conditions for the parallel components of the electromagnetic field, their radial profiles exhibit different decays in the inner and outer cladding, described by different kinds of Bessel functions in each region.
§.§ Achievable dimension of quantum communication channels
For OAM to be a viable candidate to alleviate the "capacity crunch" in optical fiber communication systems, and to expand quantum communication systems, a large number of OAM modes needs to be multiplexed into a single RCF. As shown here, using LG beams to multiplex OAM into a RCF leads to different transmission losses for each spatial channel. For classical communication links this results in a different quality of transmission for each channel, limiting the amount of information that can be encoded.
In quantum information, it is well known that some protocols become more robust when using quantum states in higher dimensions (qudits) [41, 42, 43, 44]. In the current scenario, a typical approach is to encode a $d$-dimensional qudit state as a single photon in a superposition of LG modes with different OAM $\ell$ and fixed radial order $p$ [45, 46, 47]. Here it is necessary to couple not only the individual OAM basis states into the RCF, but also their superpositions, with reasonable fidelity. Thus, one must search for a ratio parameter $w_0/a$ that gives the same coupling efficiency for several OAM modes, as in the sketch below. For instance, a mode overlap of $\sim 0.95$ can be achieved for the LG modes with $p=0$ and $\ell=\pm 4, \pm 5, \pm 6$ for the ratio $w_0/a=0.52$, allowing for a fairly high and homogeneous coupling efficiency for the basis elements of a 6-dimensional quantum state (see Fig. <ref>a). Similar situations for six-dimensional states occur for radial orders $p>0$, but with a coupling efficiency below $75\%$ for a ratio parameter $w_0/a$ between $0.25$ and $0.35$ (see Fig. <ref>). Consequently, the dimension of the quantum state encoded in the LG modes is restricted by the coupling efficiency between LG and RCF modes, rather than by the number of propagation modes allowed by the RCF. We note that the overall effect of coupling into the RCF is a non-unitary filtering operation on the qudit state. This operation could of course be corrected at the expense of further losses. Therefore, although we identify 13 different eigenmodes in the RCF we study, a 13-dimensional quantum state cannot be transmitted through the RCF without a drastic loss in fidelity, efficiency, or both.
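The search for a homogeneous coupling ratio can be automated by maximizing the worst-case overlap over the OAM basis of the qudit. A sketch under the same Gaussian-ring assumption for the LP profiles (so the resulting numbers are illustrative):

```python
import numpy as np
from scipy.special import eval_genlaguerre
from scipy.integrate import trapezoid

a, b = 9.0, 6.0
r = np.linspace(0.0, 30.0, 6000)
normalize = lambda f: f / np.sqrt(trapezoid(np.abs(f)**2 * r, r))
lp_ring = normalize(np.exp(-((r - 0.5*(a + b)) / (0.5*(a - b)))**2))  # assumed LP profile

def eta(ell, w0):
    x = 2.0 * r**2 / w0**2
    lg = normalize(x**(ell/2) * eval_genlaguerre(0, ell, x) * np.exp(-x/2))
    return abs(trapezoid(lg * lp_ring * r, r))

# Maximize the minimum overlap over the basis l = 4, 5, 6 (both signs of l
# share the same radial profile, so six basis states are covered).
ratios = np.linspace(0.30, 0.90, 121)
worst = [min(eta(ell, w*a) for ell in (4, 5, 6)) for w in ratios]
i = int(np.argmax(worst))
print(f"w0/a = {ratios[i]:.2f} gives min overlap {worst[i]:.3f} over l = 4, 5, 6")
```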
On the other hand, PV beams can be used to encode quantum states in higher dimensions, as well as classical channels in SDM systems. Because the PV beam shape is independent of the topological charge $\ell$, PV beams present a constant coupling efficiency for all OAM modes supported by a RCF. In this case, the dimension of a quantum state and the number of spatial channels are ultimately limited by the propagation modes allowed by the RCF, rather than by the coupling efficiency of the free-space beams into the fiber. The use of PV beams will thus lead to improved transmission systems, and will simplify the optical setups needed to realize high-order quantum and classical communication systems using RCFs.
Figure: (a) Variation in the radial profile of the LG$_{0\ell}$ modes as the topological charge $\ell$ is increased, for a fixed beam width $w_0/a=0.475$. Their radius scales as $\sqrt{\ell}$, so the maximal overlap with the LP modes (orange area) is achieved with $\ell=6$ for the chosen $w_0$.
§ CONCLUSIONS
The use of orbital angular momentum to generate spatially multiplexed channels in optical fiber has the potential to reduce the impact of the capacity crunch in classical communications fibers, and to increase the efficiency of quantum communication links. By evaluating the overlap between free-space optical beams capable of carrying OAM and the spatial modes supported by a ring-core fiber, we have computed the coupling efficiency between them for different beam parameters. We show that for Laguerre-Gaussian input beams the coupling efficiency depends not only on the beam width and fiber core diameters, but also on the OAM value of the beam. This leads to a decrease in communication capacity, as some OAM channels couple worse than others. One could presumably resort to much more complex optical systems to achieve homogeneous coupling efficiencies. As an alternative solution, we investigate the use of perfect vortex beams as input to the RCF. We show that in this case the coupling efficiencies are nearly independent of the OAM value, rendering these beams much more suitable for multiplexing OAM channels from free space into a ring-core fiber. We expect these results to play an important role in space-division multiplexing of both classical and quantum optical information.
§ ACKNOWLEDGMENTS
This work was supported by Fondo Nacional de Desarrollo Científico y Tecnológico (Fondecyt 1190933, Fondecyt 1190710, Fondecyt 1190901, Fondecyt 1200266, and Fondecyt 1200859), and by ANID - Millenium Science Initiative Program - ICN17_012. S.R.R. acknowledges support from Fondecyt 3180752.
§ DISCLOSURES
The authors declare no conflicts of interest.
[1]
Y. Zhu, M. Jiang, and F. Zhang, Direct detection of polarization
multiplexed single sideband signals with orthogonal offset carriers,
Opt. Express 26, 15887–15898 (2018).
[2]
T. Kan, K. Kasai, M. Yoshida, and M. Nakazawa, 42.3 tbit/s, 18 gbaud
64 qam wdm coherent transmission over 160 km in the c-band using an
injection-locked homodyne receiver with a spectral efficiency of 9 bit/s/hz,
Opt. Express 25, 22726–22737 (2017).
[3]
A. H. Gnauck, P. J. Winzer, S. Chandrasekhar, X. Liu, B. Zhu, and D. W.
Peckham, Spectrally efficient long-haul wdm transmission using
224-gb/s polarization-multiplexed 16-qam, J.
Lightwave Technol. 29, 373–377 (2011).
[4]
L. Galdino, A. Edwards, W. Yi, E. Sillekens, Y. Wakayama,
T. Gerard, W. S. Pelouch, S. Barnes, T. Tsuritani, R. I. Killey,
D. Lavery, and P. Bayvel, Optical fibre capacity optimisation
via continuous bandwidth amplification and geometric shaping,
IEEE Photonics Technology Letters 32,
1021–1024 (2020).
[5]
M. Ionescu, D. Lavery, A. Edwards, E. Sillekens, D. Semrau, L. Galdino, R. I.
Killey, W. Pelouch, S. Barnes, and P. Bayvel, 74.38 tb/s
transmission over 6300 km single mode fibre enabled by c$+$l amplification
and geometrically shaped pdm-64qam, J. Lightwave
Technol. 38, 531–537 (2020).
[6]
F. Hamaoka, K. Minoguchi, T. Sasai, A. Matsushita, M. Nakamura,
S. Okamoto, E. Yamazaki, and Y. Kisaka, 150.3-tb/s
ultra-wideband (s, c, and l bands) single-mode fibre transmission over 40-km
using >519gb/s/a pdm-128qam signals, in 2018 European Conference on
Optical Communication (ECOC), (2018), pp. 1–3.
[7]
J. Renaudier, A. C. Meseguer, A. Ghazisaeidi, P. Tran, R. R. Muller,
R. Brenot, A. Verdier, F. Blache, K. Mekhazni, B. Duval,
H. Debregeas, M. Achouche, A. Boutin, F. Morin, L. Letteron,
N. Fontaine, Y. Frignac, and G. Charlet, First 100-nm
continuous-band wdm transmission system with 115tb/s transport over 100km
using novel ultra-wideband semiconductor optical amplifiers, in 2017
European Conference on Optical Communication (ECOC), (2017), pp. 1–3.
[8]
R.-J. Essiambre, G. J. Foschini, G. Kramer, and P. J. Winzer, Capacity
limits of information transport in fiber-optic networks,
Phys. Rev. Lett. 101, 163901 (2008).
[9]
D. J. Richardson, Filling the light pipe,
Science 330, 327–328 (2010).
[10]
R.-J. Essiambre, G. Kramer, P. J. Winzer, G. J. Foschini, and B. Goebel,
Capacity limits of optical fiber networks,
J. Lightwave Technol. 28, 662–701 (2010).
[11]
D. J. Richardson, J. M. Fini, and L. E. Nelson, Space-division
multiplexing in optical fibres, Nature Photonics
7, 354–362 (2013).
[12]
P. Sillard, M. Bigot-Astruc, and D. Molin, Few-mode fibers for
mode-division-multiplexed systems, Journal of
Lightwave Technology 32, 2824–2829 (2014).
[13]
G. Rademacher, R. S. Luís, B. J. Puttnam, T. A. Eriksson, R. Ryf,
E. Agrell, R. Maruyama, K. Aikawa, Y. Awaji, H. Furukawa, and N. Wada,
High capacity transmission with few-mode fibers,
Journal of Lightwave Technology 37,
425–432 (2019).
[14]
K. Saitoh and S. Matsuo, Multicore fiber technology,
Journal of Lightwave Technology 34, 55–66 (2016).
[15]
C. Brunet, B. Ung, L. Wang, Y. Messaddeq, S. LaRochelle, and L. A. Rusch,
Design of a family of ring-core fibers for OAM transmission
studies, Optics Express 23, 10553–10563 (2015).
[16]
N. Bozinovic, Y. Yue, Y. Ren, M. Tur, P. Kristensen, H. Huang, A. E. Willner,
and S. Ramachandran, Terabit-scale orbital angular momentum mode
division multiplexing in fibers, Science
340, 1545–1548 (2013).
[17]
R. M. Nejad, K. Allahverdyan, P. Vaity, S. Amiralizadeh, C. Brunet,
Y. Messaddeq, S. LaRochelle, and L. A. Rusch, Mode division
multiplexing using orbital angular momentum modes over 1.4-km ring core
fiber, Journal of Lightwave Technology 34,
4252–4258 (2016).
[18]
L. Zhu, G. Zhu, A. Wang, L. Wang, J. Ai, S. Chen, C. Du, J. Liu, S. Yu, and
J. Wang, 18 km low-crosstalk OAM + WDM transmission with 224
individual channels enabled by a ring-core fiber with large high-order mode
group separation, Optics Letters 43,
1890–1893 (2018).
[19]
D. Cozzolino, D. Bacco, B. Da Lio, K. Ingerslev, Y. Ding, K. Dalgaard,
P. Kristensen, M. Galili, K. Rottwitt, S. Ramachandran, and L. K.
Oxenløwe, Orbital angular momentum states enabling fiber-based
high-dimensional quantum communication, Phys. Rev.
Applied 11, 064058 (2019).
[20]
A. Sit, R. Fickler, F. Alsaiari, F. Bouchard, H. Larocque, P. Gregg, L. Yan,
R. W. Boyd, S. Ramachandran, and E. Karimi, Quantum cryptography
with structured photons through a vortex fiber, Opt.
Lett. 43, 4108–4111 (2018).
[21]
H. Cao, S.-C. Gao, C. Zhang, J. Wang, D.-Y. He, B.-H. Liu, Z.-W. Zhou, Y.-J.
Chen, Z.-H. Li, S.-Y. Yu, J. Romero, Y.-F. Huang, C.-F. Li, and G.-C. Guo,
Distribution of high-dimensional orbital angular momentum
entanglement over a 1 km few-mode fiber, Optica
7, 232–237 (2020).
[22]
G. B. Xavier and G. Lima, Quantum information processing with
space-division multiplexing optical fibres,
Communications Physics 3, 9 (2020).
[23]
J. Zhang, J. Liu, L. Shen, L. Zhang, J. Luo, J. Liu, and S. Yu,
Mode-division multiplexed transmission of wavelength-division
multiplexing signals over a 100-km single-span orbital angular momentum
fiber, Photon. Res. 8, 1236–1242 (2020).
[24]
G. Zhu, Z. Hu, X. Wu, C. Du, W. Luo, Y. Chen, X. Cai, J. Liu, J. Zhu, and
S. Yu, Scalable mode division multiplexed transmission over a 10-km
ring-core fiber using high-order orbital angular momentum modes,
Opt. Express 26, 594–604 (2018).
[25]
L. Allen, M. W. Beijersbergen, R. J. C. Spreeuw, and J. P. Woerdman,
Orbital angular momentum of light and the transformation of
laguerre-gaussian laser modes, Phys. Rev. A
45, 8185–8189 (1992).
[26]
M. J. Padgett, Orbital angular momentum 25 years on [invited],
Opt. Express 25, 11265–11274 (2017).
[27]
A. S. Ostrovsky, C. Rickenstorff-Parrao, and V. Arrizón,
Generation of the “perfect” optical vortex using a
liquid-crystal spatial light modulator, Opt. Lett.
38, 534–536 (2013).
[28]
J. García-García, C. Rickenstorff-Parrao, R. Ramos-García,
V. Arrizón, and A. S. Ostrovsky, Simple technique for generating
the perfect optical vortex, Opt. Lett. 39,
5305–5308 (2014).
[29]
A. W. Snyder and J. Love, Optical Waveguide Theory (Chapman & Hall, London, 1983).
[30]
C. Brunet, B. Ung, P.-A. Bélanger, Y. Messaddeq, A. LaRochelle, and L. A.
Rusch, Vector Mode Analysis of Ring-Core Fibers: Design Tools for
Spatial Division Multiplexing, J. Lightwave
Technol. 32, 4648 (2014).
[31]
A. W. Snyder and W. R. Young, Modes of optical waveguides,
J. Opt. Soc. Am. 68, 297 (1978).
[32]
B. C. Sarkar, P. K. Choudhury, and T. Yoshino, On the analysis of a
weakly guiding doubly clad dielectric optical fiber with an annular core,
Microw. Opt. Techn. Lett. 31, 435 (2001).
[33]
J. Marcou and S. Février, Comments on `On the analysis of a
weakly guiding doubly clad dielectric optical fiber with an annular core',
Microw. Opt. Techn. Lett. 38, 249 (2003).
[34]
M. Kasahara, K. Saitoh, T. Sakamoto, N. Hanzawa, T. Matsui, K. Tsujikawa, and
F. Yamamoto, Design of Three-Spatial-Mode Ring-Core Fiber,
J. Lightwave Technol. 32, 1337 (2014).
[35]
C. Brunet, P. Vaity, Y. Messaddeq, S. LaRochelle, and L. A. Rusch,
Design, fabrication and validation of an oam fiber supporting 36
states, Opt. Express 22, 26117–26127 (2014).
[36]
L. Allen, S. Barnett, and M. Padgett, Optical Angular Momentum, Optics
and Optoelectronics (Institute of Physics Publishing, Bristol, UK, 2003).
[37]
N. Bozinovic, Y. Yue, Y. Ren, M. Tur, P. Kristensen, H. Huang, A. E. Willner,
and S. Ramachandran, Terabit-scale orbital angular momentum mode
division multiplexing in fibers, Science
340, 1545–1548 (2013).
[38]
R. Brüning, Y. Zhang, M. McLaren, M. Duparré, and A. Forbes,
Overlap relation between free-space Laguerre Gaussian modes and
step-index fiber modes, J. Opt. Soc. Am. A
32, 1678 (2015).
[39]
P. Vaity and L. Rusch, Perfect vortex beam: Fourier transformation of
a Bessel Beam, Opt. Lett. 40, 597 (2015).
[40]
P. Vaity, C. Brunet, Y. Messaddeq, S. LaRochelle, and L. A. Rusch,
Exciting oam modes in annular-core fibers via perfect oam beams, in
2014 The European Conference on Optical Communication (ECOC), (2014),
pp. 1–3.
[41]
N. J. Cerf, M. Bourennane, A. Karlsson, and N. Gisin, Security of
quantum key distribution using $\mathit{d}$-level systems,
Phys. Rev. Lett. 88, 127902 (2002).
[42]
Č. Brukner, M. Żukowski, and A. Zeilinger,
Quantum communication complexity protocol with two entangled
qutrits, Phys. Rev. Lett. 89, 197901 (2002).
[43]
D. Martínez, A. Tavakoli, M. Casanova, G. Cañas, B. Marques, and
G. Lima, High-dimensional quantum communication complexity beyond
strategies based on bell's theorem, Phys. Rev.
Lett. 121, 150504 (2018).
[44]
G. M. Nikolopoulos, K. S. Ranade, and G. Alber, Error tolerance of
two-basis quantum-key-distribution protocols using qudits and two-way
classical communication, Phys. Rev. A 73,
032325 (2006).
[45]
M. Erhard, R. Fickler, M. Krenn, and A. Zeilinger, Twisted photons:
new quantum perspectives in high dimensions, Light:
Science & Applications 7, 17146 (2018).
[46]
G. Gibson, J. Courtial, M. J. Padgett, M. Vasnetsov, V. Pas'ko, S. M. Barnett,
and S. Franke-Arnold, Free-space information transfer using light
beams carrying orbital angular momentum, Opt.
Express 12, 5448–5456 (2004).
[47]
A. M. Yao and M. J. Padgett, Orbital angular momentum: origins,
behavior and applications, Adv. Opt. Photon.
3, 161–204 (2011).
# Gaia EDR3 confirms that Westerlund 1 is closer and older than previously
thought
Mojgan Aghakhanloo Steward Observatory, University of Arizona, 933 N. Cherry
Ave., Tucson, AZ 85721, USA Jeremiah W. Murphy Department of Physics,
Florida State University, 77 Chieftan Way, Tallahassee, FL 32306, USA Nathan
Smith Steward Observatory, University of Arizona, 933 N. Cherry Ave., Tucson,
AZ 85721, USA John Parejko Department of Astronomy, University of
Washington, Box 351580, Seattle, WA 98195, USA Mariangelly Díaz-Rodríguez
Department of Physics, Florida State University, 77 Chieftan Way, Tallahassee,
FL 32306, USA Maria R. Drout The Observatories of the Carnegie Institution
for Science, 813 Santa Barbara St, Pasadena Jose H. Groh School of Physics,
Trinity College Dublin, The University of Dublin, Dublin, Ireland Joseph
Guzman Department of Physics, Florida State University, 77 Chieftan Way,
Tallahassee, FL 32306, USA Keivan G. Stassun Department of Physics &
Astronomy, Vanderbilt University, 6301 Stevenson Center Lane, Nashville, TN
37235, USA Department of Physics, Fisk University, 1000 17th Avenue N.,
Nashville, TN 37208, USA
###### Abstract
Using Gaia Early Data Release 3 (EDR3) parallaxes and Bayesian inference, we
infer the parallax of the Westerlund 1 (Wd1) cluster. We find a parallax of
$0.34\pm{0.05}$ mas, corresponding to a distance of $2.8^{+0.7}_{-0.6}$ kpc.
The new Gaia EDR3 distance is consistent with our previous result using Gaia
DR2 parallaxes. This confirms that Wd1 is less massive and older than
previously assumed. Compared to DR2, the EDR3 individual parallax
uncertainties for each star decreased by 30%. However, the aggregate parallax
uncertainty for the cluster remained the same. This suggests that the
uncertainty is dominated by systematics, possibly due to crowding, motions
within the cluster, or motions due to binary orbits.
stars: evolution — open clusters and associations: individual: Westerlund 1 —
methods: Bayesian analysis
## 1
Westerlund 1 (Wd1) has previously been discussed as potentially one of the
most massive young star clusters in the Galaxy. Wd1 is of significant interest
because it contains a large population of evolved massive stars such as Wolf-
Rayet stars, red and blue supergiants, yellow hypergiants, an LBV, and a
magnetar (Clark & Negueruela, 2003; Clark et al., 2005; Muno et al., 2005;
Crowther et al., 2006; Groh et al., 2006; Fenech et al., 2018). Previous
distance estimates to Wd1 ranged from 1.0 to 5.5 kpc (Westerlund, 1961, 1968;
Piatti et al., 1998; Clark et al., 2005; Crowther et al., 2006), although
values around 5 kpc have usually been adopted. Stellar luminosities at $\sim$5
kpc imply that the cluster’s current turnoff mass would be around 40 M⊙ or
more.
In Aghakhanloo et al. (2020), we used Gaia Data Release 2 (DR2; Prusti et al.,
2016; Brown et al., 2018) and Bayesian inference to estimate the distance to
Wd1. We modeled both cluster stars and Galactic field stars and inferred a
parallax of $0.35^{+0.07}_{-0.06}$ mas corresponding to a distance of
$2.6^{+0.6}_{-0.4}$ kpc. At this closer distance, stellar luminosities would
be reduced by a factor of more than 3. The turnoff mass would be reduced from
$\sim$40 M⊙ to around 22 M⊙, with a corresponding increase in age and a
decrease in the cluster’s total stellar mass compared to values usually
adopted in the literature.
In this work, we update the parallax of the cluster using the Gaia Early Data
Release 3 (EDR3; Gaia Collaboration et al., 2020). We infer a parallax of
$0.34\pm{0.05}$ mas, corresponding to a distance of $2.8^{+0.7}_{-0.6}$ kpc.
Fig. 1 shows the posterior distribution for cluster parallax,
$\varpi_{\text{cl}}$ (mas), density of the cluster stars, $n_{\text{cl}}$
(number per square arcminute), density of the field stars, $n_{\text{f}}$
(number per square arcminute), the field-star length scale, $L$ (kpc), the
field-star offset, $\varpi_{\text{os}}$ (mas), and the parallax zero-point of
the cluster, $\varpi_{\text{zp}}$ (mas). The two regions used to constrain
these parameters are an inner circle centred on the position of Wd1 and with a
radius of 1 arcmin, and an outer annulus from 9 to 10 arcmin. The values in
the top right corner show the mode and the highest 68% density interval (HDI)
for marginalized distributions. The density of the cluster is
$n_{\text{cl}}=153.93^{+8.87}_{-7.05}$ stars per square arcminute, density of
field stars is $n_{\text{f}}=41.46^{+0.83}_{-0.84}$ stars per square
arcminute, the field-star length scale is $L=1.32\pm{0.06}$ kpc, the field-
star offset is $\varpi_{\rm{os}}=0.15\pm{0.01}$ mas, and the parallax zero-
point of the cluster is $\varpi_{\text{zp}}=-0.06^{+0.05}_{-0.04}$ mas.
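The structure of this six-parameter inference can be sketched as a log-likelihood over the two regions. The code below is our reading of the model, not the authors' implementation: field stars follow an exponentially decreasing space density with length scale $L$, shifted by the parallax offset $\varpi_{\text{os}}$ and convolved with the Gaussian measurement errors; cluster members are Gaussian around $\varpi_{\text{cl}}+\varpi_{\text{zp}}$; Poisson terms constrain the surface densities. Priors and data selection are omitted.

```python
import numpy as np

def log_likelihood(theta, w_in, s_in, w_out, s_out, area_in, area_out):
    """theta = (varpi_cl, n_cl, n_f, L, varpi_os, varpi_zp); parallaxes in mas,
    L in kpc, densities per square arcminute, areas in square arcminutes."""
    varpi_cl, n_cl, n_f, L, varpi_os, varpi_zp = theta
    rr = np.linspace(1e-3, 30.0, 3000)              # distance grid, kpc
    dr = rr[1] - rr[0]
    prior = rr**2 * np.exp(-rr / L)                 # exponentially decreasing density
    prior /= np.sum(prior) * dr

    def field_pdf(w, s):
        # Distance prior convolved with the Gaussian parallax error,
        # including the field-star parallax offset varpi_os.
        like = np.exp(-0.5 * ((w - varpi_os - 1.0 / rr) / s)**2) / (np.sqrt(2*np.pi) * s)
        return np.sum(prior * like) * dr

    def cluster_pdf(w, s):
        return np.exp(-0.5 * ((w - varpi_cl - varpi_zp) / s)**2) / (np.sqrt(2*np.pi) * s)

    f_cl = n_cl / (n_cl + n_f)                      # cluster fraction in the inner circle
    ll = sum(np.log(f_cl * cluster_pdf(w, s) + (1 - f_cl) * field_pdf(w, s))
             for w, s in zip(w_in, s_in))
    ll += sum(np.log(field_pdf(w, s)) for w, s in zip(w_out, s_out))
    # Poisson terms for the observed star counts constrain n_cl and n_f.
    ll += len(w_in) * np.log((n_cl + n_f) * area_in) - (n_cl + n_f) * area_in
    ll += len(w_out) * np.log(n_f * area_out) - n_f * area_out
    return ll
```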
Figure 1: Posterior distribution for the six-parameter model. We report the
mode and the highest density 68% confidence interval for the cluster parallax
($\varpi_{\text{cl}}$), the cluster density ($n_{\text{cl}}$), the field-star
density ($n_{\text{f}}$), the field-star length scale ($L$), the field-star
offset ($\varpi_{\text{os}}$), and the parallax zero-point of the cluster
($\varpi_{\text{zp}}$). The parallax of the cluster is
$\varpi_{\text{cl}}=0.34\pm{0.05}$ mas, which corresponds to a distance of
$R=2.8^{+0.7}_{-0.6}$ kpc.
The new Gaia EDR3 result is consistent with our previous work using Gaia DR2.
In Gaia EDR3, the individual parallax errors decreased by 30% (Gaia
Collaboration et al., 2020). Also, in this sample, the number of sources in
the inner circle with a good solution (at least eight visibility periods, RUWE
$<1.40$, and astrometric excess noise sigma $\leq 2$) increased by a factor of
$\sim$3. Even though the individual Gaia EDR3 parallax precision for each
star increased, the new Gaia EDR3 parallax of the Wd1 cluster has the same
precision due to unmodeled systematic errors. If the uncertainty were dominated
by random statistics, then the cluster uncertainty should be of order
$\sigma_{i}/\sqrt{N}$, where $N$ is the number of stars with good solutions
and $\sigma_{i}$ is the uncertainty of each star. Therefore, if the
uncertainties were random, the Gaia EDR3 parallax uncertainty of the Wd1
cluster should be a factor of $\sim$2 smaller than the Gaia DR2 parallax
uncertainty. The fact that the Gaia EDR3 parallax precision of the cluster
stays the same implies that there is an unmodeled systematic error. Such
systematic errors could be due to crowding, motions within the cluster, or
motions due to binary orbits. Due to the increased number of sources in the
inner circle, the cluster density increases by a factor of $\sim$1.5. The Gaia
EDR3 field-star length scale is within $\sim$1.6 sigma of the Gaia DR2 result.
The field-star offset and the parallax zero-point of the cluster are
consistent with the previous results using Gaia DR2.
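The expected purely statistical gain follows from one line of arithmetic (0.7 for the 30% smaller individual errors, 3 for the roughly tripled sample):

```python
sigma_ratio, N_ratio = 0.7, 3.0
print(N_ratio**0.5 / sigma_ratio)   # ~2.5: naive random-statistics improvement factor
```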
W243 is a confirmed Luminous Blue Variable (LBV) associated with the Wd1
cluster (Clark, J. S. & Negueruela, I., 2004; Clark et al., 2005). In Gaia
DR2, the parallax of the individual star W243 is $0.979\pm{0.165}$ mas,
implying a distance of 1.78${}^{+2.37}_{-0.95}$ kpc (Smith et al., 2019),
while the Gaia EDR3 parallax of W243 is $0.012\pm{0.081}$ mas. In both Gaia
DR2 and EDR3, the astrometric excess noise sigma for this star is larger than
2, which indicates that the source may not be astrometrically well-behaved.
The significant difference between the Gaia DR2 and EDR3 data and the large
astrometric excess noise sigma may be due to crowding and binarity in this
region. Therefore, the distance to the Wd1 cluster is a more reliable distance
estimate to the LBV W243 than its individual distance estimate, and the
cluster parallax also provides a much more precise estimate of the distance.
While we infer that possible systematic effects limit the improvement in the
parallax precision of the Wd1 cluster as we move from Gaia DR2 to EDR3, we
nevertheless find a result consistent with our previously inferred distance.
This confirms that the Wd1 cluster is in fact closer, less massive, and less
luminous than typically assumed in the literature, with the important
consequence that the magnetar, the LBV, and the other evolved stars seen in
the cluster descended from initial masses far less than 40 M⊙, closer to 25
M⊙ or less.
## References
* Aghakhanloo et al. (2020) Aghakhanloo, M., Murphy, J. W., Smith, N., et al. 2020, MNRAS, 492, 2497, doi: 10.1093/mnras/stz3628
* Brown et al. (2018) Brown, A. G. A., Vallenari, A., Prusti, T., et al. 2018, A&A, 616, A1, doi: 10.1051/0004-6361/201833051
* Clark & Negueruela (2003) Clark, J. S., & Negueruela, I. 2003, A&A, 413, L15, doi: 10.1051/0004-6361:20031700
* Clark et al. (2005) Clark, J. S., Negueruela, I., Crowther, P. A., & Goodwin, S. P. 2005, A&A, 434, 949, doi: 10.1051/0004-6361:20042413
* Clark, J. S. et al. (2005) Clark, J. S., Larionov, V. M., & Arkharov, A. 2005, A&A, 435, 239, doi: 10.1051/0004-6361:20042563
* Clark, J. S. & Negueruela, I. (2004) Clark, J. S., & Negueruela, I. 2004, A&A, 413, L15, doi: 10.1051/0004-6361:20031700
* Gaia Collaboration et al. (2020) Gaia Collaboration, Brown, A. G. A., Vallenari, A., et al. 2020, Gaia Early Data Release 3: Summary of the contents and survey properties. https://arxiv.org/abs/2012.01533
* Crowther et al. (2006) Crowther, P. A., Hadfield, L. J., Clark, J. S., Negueruela, I., & Vacca, W. D. 2006, MNRAS, 372, 1407, doi: 10.1111/j.1365-2966.2006.10952.x
* Fenech et al. (2018) Fenech, D. M., Clark, J. S., Prinja, R. K., et al. 2018, A&A, 617, A137, doi: 10.1051/0004-6361/201832754
* Groh et al. (2006) Groh, J. H., Damineli, A., Teodoro, M., & Barbosa, C. L. 2006, A&A, 457, 591, doi: 10.1051/0004-6361:20064929
* Muno et al. (2005) Muno, M. P., Clark, J. S., Crowther, P. A., et al. 2005, ApJ, 636, L41, doi: 10.1086/499776
* Piatti et al. (1998) Piatti, A. E., Bica, E., & Clariá, J. J. 1998, A&AS, 127, 423, doi: 10.1051/aas:1998111
* Prusti et al. (2016) Prusti, T., de Bruijne, J. H. J., Brown, A. G. A., et al. 2016, A&A, 595, A1, doi: 10.1051/0004-6361/201629272
* Smith et al. (2019) Smith, N., Aghakhanloo, M., Murphy, J. W., et al. 2019, MNRAS, 488, 1760, doi: 10.1093/mnras/stz1712
* Westerlund (1961) Westerlund, B. E. 1961, PASP, 73, 51, doi: 10.1086/127618
* Westerlund (1968) —. 1968, ApJ, 154, L67, doi: 10.1086/180270
# Boundary conditions at a thin membrane for normal diffusion equation which
generate subdiffusion
Tadeusz Kosztołowicz <EMAIL_ADDRESS> Institute of Physics,
Jan Kochanowski University, Uniwersytecka 7, 25-406 Kielce, Poland
Aldona Dutkiewicz <EMAIL_ADDRESS> Faculty of Mathematics and Computer Science,
Adam Mickiewicz University, Uniwersytetu Poznańskiego 4, 61-614 Poznań, Poland
###### Abstract
We consider a particle transport process in a one-dimensional system with a
thin membrane, described by a normal diffusion equation. We consider two
boundary conditions at the membrane that are linear combinations of integral
operators, with time-dependent kernels, acting on the functions and their
spatial derivatives defined on both membrane surfaces. We show how the
boundary conditions at the membrane change the temporal evolution of the first
and second moments of the particle position distribution (the Green's
function), which is a solution to the normal diffusion equation. As these
moments define the kind of diffusion, an appropriate choice of boundary
conditions generates the moments characteristic of subdiffusion. The
interpretation of the process is based on a particle random walk model in
which the subdiffusion effect is caused by anomalously long stays of the
particle in the membrane.
## I Introduction
Anomalous diffusion in a one-dimensional system is usually characterized by
the following relation defined in the long time limit bg ; mk ; mk1 ; ks

$\left\langle(\Delta x)^{2}(t)\right\rangle\sim t^{\alpha},$ (1)

where $\left\langle(\Delta x)^{2}(t)\right\rangle$ is the mean square
displacement of the diffusing particle; $0<\alpha<1$ corresponds to
subdiffusion, $\alpha=1$ to normal diffusion, and $\alpha>1$ to superdiffusion.
Eq. (1) is usually taken as the definition of anomalous diffusion. We consider
the cases of subdiffusion and normal diffusion, $0<\alpha\leq 1$. Eq. (1)
characterizes a kind of diffusion when the parameter $\alpha$ is uniquely
defined. When there is a probability distribution of $\alpha$ smc , the
particle mean square displacement is described by a more complicated equation.
In the following we assume that $\alpha$ is unique.
Different models of subdiffusion lead to Eq. (1) in the long time limit bg ;
mk ; mk1 . We mention here diffusion in a system having a comb-like structure
and diffusion on fractals. We focus our attention on models based on
differential equations. Subdiffusion can be described by a differential
equation with a fractional time derivative mk ; mk1 ; ks ; compte

$\frac{\partial P(x,t|x_{0})}{\partial t}=D_{\alpha}\frac{\partial^{1-\alpha}}{\partial t^{1-\alpha}}\frac{\partial^{2}P(x,t|x_{0})}{\partial x^{2}},$ (2)

where $P(x,t|x_{0})$ is the Green's function, interpreted as the probability
density that the diffusing particle is at a point $x$ at time $t$,
$D_{\alpha}$ is a subdiffusion coefficient measured in units of
${\rm m^{2}/s^{\alpha}}$, and $x_{0}$ is the initial position of the particle.
The initial condition is

$P(x,0|x_{0})=\delta(x-x_{0}),$ (3)

where $\delta$ is the Dirac delta function. The Riemann-Liouville fractional
derivative is defined for $0<\gamma<1$ as

$\frac{d^{\gamma}f(t)}{dt^{\gamma}}=\frac{1}{\Gamma(1-\gamma)}\frac{d}{dt}\int_{0}^{t}dt^{\prime}\frac{f(t^{\prime})}{(t-t^{\prime})^{\gamma}}.$ (4)
The physical interpretation of subdiffusion within the Continuous Time Random
Walk model that leads to Eq. (1) is that the diffusing particle waits an
anomalously long time for its next jump. The probability density of the
waiting time $\psi_{\alpha}$ has a heavy tail, $\psi_{\alpha}(t)\sim
1/t^{1+\alpha}$ mk ; mk1 ; ks . Another example is the subdiffusion
differential equation with derivatives of natural orders frank ; lenzi

$\frac{\partial P^{\mu}(x,t)}{\partial t}=\frac{\partial}{\partial x}D(x,t)\frac{\partial P^{\nu}(x,t)}{\partial x},$ (5)

with $\mu,\nu>0$. When $D(x,t)={\rm const.}$, the solution $P$ provides Eq. (1)
with $\alpha=2\mu/(\mu+\nu)$; when $\mu<\nu$ we have subdiffusion. The physical
interpretation of this process is based on the non-additive Sharma–Mittal
entropy frank . When $D(t)\sim t^{\alpha-1}$ and $\mu=\nu=1$ one gets $P$
which leads to Eq. (1) lim . For diffusion in a box bounded by impenetrable
walls, assuming $D(x,t)=D|x|^{-\Theta}$ with $\Theta>0$, one gets the Green's
function which provides $\left\langle(\Delta
x)^{2}(t)\right\rangle\sim(Dt)^{\Theta/(2+\Theta)}$ fa .
The Continuous Time Random Walk model of subdiffusion assumes that particle
jumps are significantly hindered at each point of the system. However, in some
processes particle diffusion can be strongly hindered at a membrane only.
Considering diffusion of a particle along the $x$-axis, we have diffusion in a
one-dimensional system disturbed at a single point, at which a membrane
perpendicular to the $x$-axis is placed. Obstruction of the particle's passage
through the membrane may affect the nature of diffusion. An example is the
breaking of the Markov property for normal diffusion due to specific boundary
conditions at the membrane tk2020 . The change of the character of diffusion
can also be caused by the presence of an adsorbing wall in a system in which
the process is described by the normal diffusion equation. A boundary
condition at the wall then involves an integral operator with a time-dependent
kernel gui .
The mechanisms of particle transport through the membrane may be very
complicated. Some of them strongly hinder particle transport inside the
membrane, which affects the process in the outer regions. From a mathematical
point of view, these mechanisms provide specific boundary conditions at the
membrane bouncond ; ab ; see also the discussion in Ref. tk2020 and the
references cited therein (the list of references regarding this issue can be
significantly extended). In particular, the boundary conditions may contain
fractional derivatives kd ; tk2019 ; kwl . The diffusing particle can stay in
the membrane for a long time, which can happen, among others, in a lipid
bilayer membrane lipbil .
The question considered in this paper is whether there are boundary conditions
at the membrane that change the nature of the diffusion process described by
the normal diffusion equation in such a way that the process has subdiffusion
properties. Our considerations are based on the Laplace transforms of the
Green's functions. We consider boundary conditions whose Laplace transforms
are linear combinations of the probabilities and fluxes defined on both
membrane surfaces, with coefficients depending on the Laplace transform
parameter. As argued in Ref. tk2020 , such boundary conditions often occur in
models of diffusion in a membrane system. In the time domain the boundary
conditions are expressed by integral operators with time-dependent kernels. We
show that appropriately chosen boundary conditions at the membrane lead to
Green's functions for the normal diffusion equation providing Eq. (1) with
$0<\alpha<1$. We also present a particle random walk model describing the
process, in which the subdiffusion effect is caused by anomalously long stays
of the particle inside the membrane.
## II Method
In this section we consider how the boundary conditions at the membrane are
related to the first and second moments of the distribution of the particle's
location. This distribution (the Green's function) is a solution to the normal
diffusion equation with the initial condition Eq. (3).
### II.1 Boundary conditions at a membrane
The normal diffusion equation with constant diffusion coefficient $D$ is

$\frac{\partial P(x,t|x_{0})}{\partial t}=D\frac{\partial^{2}P(x,t|x_{0})}{\partial x^{2}}.$ (6)

In the following we use the Laplace transform
$\mathcal{L}[f(t)]=\hat{f}(s)=\int_{0}^{\infty}{\rm e}^{-st}f(t)dt$. In terms
of the Laplace transform Eq. (6) is

$s\hat{P}(x,s|x_{0})-P(x,0|x_{0})=D\frac{\partial^{2}\hat{P}(x,s|x_{0})}{\partial x^{2}}.$ (7)
We assume that a thin membrane is located at $x=0$. A thin membrane means that
the particle can stop inside the membrane but cannot move diffusively within
it. We additionally assume that $x_{0}<0$. The regions bounded by the membrane
are denoted as $A=(-\infty,0)$ and $B=(0,\infty)$. In the following, the
function $P$ and the diffusive flux $J$ carry the indices $A$ and $B$, which
indicate the location of the point $x$. In the time domain the flux is defined
as

$J_{i}(x,t|x_{0})=-D\frac{\partial P_{i}(x,t|x_{0})}{\partial x},$ (8)

and its Laplace transform is

$\hat{J}_{i}(x,s|x_{0})=-D\frac{\partial\hat{P}_{i}(x,s|x_{0})}{\partial x},$ (9)

with $i\in\{A,B\}$.
We consider boundary conditions at a thin membrane which in terms of the
Laplace transform are
$\hat{P}_{B}(0^{+},s|x_{0})=\hat{\Phi}(s)\hat{P}_{A}(0^{-},s|x_{0}),$ (10)
$\hat{J}_{B}(0^{+},s|x_{0})=\hat{\Xi}(s)\hat{J}_{A}(0^{-},s|x_{0}).$ (11)
Assuming that the system is unbounded, the above boundary conditions are
supplemented by
$\hat{P}_{A}(-\infty,s|x_{0})=\hat{P}_{B}(\infty,s|x_{0})=0.$ (12)
In the time domain the boundary conditions (10)–(12) are

$P_{B}(0^{+},t|x_{0})=\int_{0}^{t}dt^{\prime}\Phi(t-t^{\prime})P_{A}(0^{-},t^{\prime}|x_{0}),$ (13)

$J_{B}(0^{+},t|x_{0})=\int_{0}^{t}dt^{\prime}\Xi(t-t^{\prime})J_{A}(0^{-},t^{\prime}|x_{0}),$ (14)

$P_{A}(-\infty,t|x_{0})=P_{B}(\infty,t|x_{0})=0.$ (15)
The question arises whether Eqs. (10) and (11) constitute too narrow a set of
linear boundary conditions at a thin membrane. Let us consider the following
boundary conditions

$\gamma_{1}(s)\hat{P}_{A}(0^{-},s|x_{0})+\gamma_{2}(s)\hat{J}_{A}(0^{-},s|x_{0})=\gamma_{3}(s)\hat{P}_{B}(0^{+},s|x_{0})+\gamma_{4}(s)\hat{J}_{B}(0^{+},s|x_{0}),$ (16)

$\lambda_{1}(s)\hat{P}_{A}(0^{-},s|x_{0})+\lambda_{2}(s)\hat{J}_{A}(0^{-},s|x_{0})=\lambda_{3}(s)\hat{P}_{B}(0^{+},s|x_{0})+\lambda_{4}(s)\hat{J}_{B}(0^{+},s|x_{0}).$ (17)
Eqs. (16) and (17) are more general than Eqs. (10) and (11). However, as shown
in Appendix I, the boundary conditions (16) and (17) and the conditions (10)
and (11) provide the same Green's functions when

$\hat{\Phi}(s)=\frac{2\sqrt{Ds}W_{B}(s)}{W(s)+2\sqrt{Ds}W_{A}(s)},$ (18)

$\hat{\Xi}(s)=\frac{2\sqrt{Ds}W_{B}(s)}{W(s)-2\sqrt{Ds}W_{A}(s)},$ (19)

where

$W(s)=(\lambda_{1}(s)-\sqrt{Ds}\,\lambda_{2}(s))(\gamma_{3}(s)+\sqrt{Ds}\,\gamma_{4}(s))-(\lambda_{3}(s)+\sqrt{Ds}\,\lambda_{4}(s))(\gamma_{1}(s)-\sqrt{Ds}\,\gamma_{2}(s)),$ (20)

$W_{A}(s)=\frac{1}{2}\bigg[\bigg(\frac{\gamma_{1}(s)}{\sqrt{Ds}}+\gamma_{2}(s)\bigg)\bigg(\lambda_{3}(s)+\sqrt{Ds}\,\lambda_{4}(s)\bigg)-\bigg(\frac{\lambda_{1}(s)}{\sqrt{Ds}}+\lambda_{2}(s)\bigg)\bigg(\gamma_{3}(s)+\sqrt{Ds}\,\gamma_{4}(s)\bigg)\bigg],$ (21)

$W_{B}(s)=\frac{1}{2}\bigg[\bigg(\frac{\gamma_{1}(s)}{\sqrt{Ds}}+\gamma_{2}(s)\bigg)\bigg(\lambda_{1}(s)-\sqrt{Ds}\,\lambda_{2}(s)\bigg)-\bigg(\frac{\lambda_{1}(s)}{\sqrt{Ds}}+\lambda_{2}(s)\bigg)\bigg(\gamma_{1}(s)-\sqrt{Ds}\,\gamma_{2}(s)\bigg)\bigg],$ (22)
under the conditions $W(s)\neq 0$ and $W_{A}(s)\neq\pm W(s)/2\sqrt{Ds}$. Since
the boundary conditions determine the solutions to the diffusion equation
uniquely, the boundary conditions Eqs. (16) and (17) can be written as Eqs.
(10) and (11) under the above-mentioned conditions, whose interpretation is
given in Appendix I. In general, the boundary conditions (16) and (17) depend
on eight functions $\gamma_{i}$ and $\lambda_{i}$, $i\in\{1,2,3,4\}$, while
the boundary conditions Eqs. (10) and (11) are generated by two functions
$\hat{\Phi}$ and $\hat{\Xi}$ only. Thus, due to Eqs. (18) and (19), the
boundary conditions Eqs. (10) and (11) are uniquely determined by Eqs. (16)
and (17), but the opposite is not true.
Figure 1: Illustration of the boundary conditions at a thin membrane. The
operator $\Phi$ changes the probabilities that the particle is located at the
membrane surface, the operator $\Xi$ changes the flux flowing through the
membrane.
For example, one of the most frequently used boundary conditions at the
membrane is
$J_{A}(0,t|x_{0})=\lambda_{1}P_{A}(0^{-},t|x_{0})-\lambda_{2}P_{B}(0^{+},t|x_{0})$,
$\lambda_{1},\lambda_{2}>0$, supplemented by the condition that the flux is
continuous, $J_{A}(0^{-},t|x_{0})=J_{B}(0^{+},t|x_{0})$. These boundary
conditions can be written in the form of Eqs. (13) and (14) with
$\Phi(t)=\frac{\lambda_{1}}{\sqrt{D}}\left[\frac{1}{\sqrt{\pi t}}-\frac{\lambda_{2}}{\sqrt{D}}\;{\rm
e}^{\frac{\lambda_{2}^{2}t}{D}}{\rm
erfc}\left(\frac{\lambda_{2}\sqrt{t}}{\sqrt{D}}\right)\right]$ and
$\Xi(t)=\delta(t)$, where ${\rm erfc}(u)=(2/\sqrt{\pi})\int_{u}^{\infty}{\rm
e}^{-\tau^{2}}d\tau$ is the complementary error function tk2020 . For this
case we have $\hat{\Phi}(s)=\lambda_{1}/(\lambda_{2}+\sqrt{Ds})$ and
$\hat{\Xi}(s)=1$.
The Laplace transforms of the Green's functions for the normal diffusion
equation obtained for the boundary conditions (10)–(12) are tk2020

$\hat{P}_{A}(x,s|x_{0})=\frac{1}{2\sqrt{Ds}}\;{\rm e}^{-|x-x_{0}|\sqrt{\frac{s}{D}}}-\left(\frac{\hat{\Phi}(s)-\hat{\Xi}(s)}{\hat{\Phi}(s)+\hat{\Xi}(s)}\right)\frac{1}{2\sqrt{Ds}}\;{\rm e}^{(x+x_{0})\sqrt{\frac{s}{D}}},$ (23)

$\hat{P}_{B}(x,s|x_{0})=\left(\frac{\hat{\Phi}(s)\hat{\Xi}(s)}{\hat{\Phi}(s)+\hat{\Xi}(s)}\right)\frac{1}{\sqrt{Ds}}\;{\rm e}^{-(x-x_{0})\sqrt{\frac{s}{D}}}.$ (24)

In the following we use the function $P_{M}$ defined as

$P_{M}(t|x_{0})=1-\int_{-\infty}^{0}P_{A}(x,t|x_{0})dx-\int_{0}^{\infty}P_{B}(x,t|x_{0})dx.$ (25)

Eqs. (23), (24), and the Laplace transform of Eq. (25) provide

$\hat{P}_{M}(s|x_{0})=\frac{{\rm e}^{x_{0}\sqrt{\frac{s}{D}}}}{s}\left[\frac{\hat{\Phi}(s)\left(1-\hat{\Xi}(s)\right)}{\hat{\Phi}(s)+\hat{\Xi}(s)}\right].$ (26)
The function $P_{M}$ is the probability of not finding the particle in the
regions $A$ or $B$ at time $t$. The Green's functions Eqs. (23) and (24) are
normalized when $P_{M}(t|x_{0})\equiv 0$. Thus, the normalization condition is
met when the flux through the membrane is continuous, $\hat{\Xi}(s)\equiv 1$,
or when $\hat{\Phi}(s)\equiv 0$ and the flux is non-zero at the membrane. We
treat the second condition as unphysical: it is not possible for the
probability of finding the particle on the membrane surface $0^{+}$ to remain
zero while a non-zero flux flows from region $A$ to $B$.
In Sec. II.2 we consider a model of the random walk of a particle as it passes
through the membrane. This model gives a stochastic interpretation of the
boundary conditions. It also imposes a certain condition on the functions
$\hat{\Phi}$ and $\hat{\Xi}$.
### II.2 Random walk model of particle passing through the membrane
We consider a model in which a diffusing particle can be inside a thin
membrane for a very long time.
Figure 2: Illustration of the transport process described by Eq. (27). The
diffusive flux $J$ at the point $x$ depends on the distribution of waiting
times $\psi_{a}$ and $\psi_{b}$ for the particle to jump between the
neighbouring points $x^{-}$ and $x^{+}$ located in the media $a$ and $b$,
respectively.
Figure 3: Transport of a particle through the membrane. Point $0$ represents
the inside of the membrane where the particle can stay even for a long time,
points $0^{-}$ and $0^{+}$ mark the positions of the particle on membrane
surfaces, a more detailed description is in the text.
We define the Laplace transform of the diffusive flux that flows through the
boundary between two media $a$ and $b$ located at $x$ as

$\hat{J}(x,s|x_{0})=\frac{\epsilon s\hat{\psi}_{a}(s)}{2(1-\hat{\psi}_{a}(s))}\hat{P}_{a}(x^{-},s|x_{0})-\frac{\epsilon s\hat{\psi}_{b}(s)}{2(1-\hat{\psi}_{b}(s))}\hat{P}_{b}(x^{+},s|x_{0}),$ (27)

where $\hat{\psi}_{i}(s)$ is the Laplace transform of the probability density
of the waiting time for the particle's next step in medium $i$, $i\in\{a,b\}$,
and $\epsilon=x^{+}-x^{-}$ is the length of a particle step, see Fig. 2; the
derivation of Eq. (27) is in Appendix II. The function $\hat{\psi}$ is
expressed by the formula kd
$\hat{\psi}(s)=\frac{1}{1+\epsilon^{2}\eta(s)},$ (28)

where the function $\eta$, which in practice determines the kind of diffusion,
fulfils the condition $\eta(s)\rightarrow 0$ when $s\rightarrow 0$. In the
limit of small $\epsilon$ we have $\hat{\psi}(s)=1-\epsilon^{2}\eta(s)$. We
assume that the particle can stay inside the membrane at the point $0$. Let
the points $0^{-}$ and $0^{+}$ represent points located on the membrane
surfaces. Applying Eq. (27) to the system presented in Fig. 3 we get

$\hat{J}_{A}(0^{-},s|x_{0})=\frac{s}{2\epsilon\eta(s)}\hat{P}_{A}(0^{-},s|x_{0})-\frac{s}{2\epsilon\eta_{M}(s)}\hat{P}_{M}(s|x_{0}),$ (29)

$\hat{J}_{B}(0^{+},s|x_{0})=\frac{s}{2\epsilon\eta_{M}(s)}\hat{P}_{M}(s|x_{0})-\frac{s}{2\epsilon\eta(s)}\hat{P}_{B}(0^{+},s|x_{0}),$ (30)

where

$\hat{\psi}_{M}(s)=\frac{1}{1+\epsilon^{2}\eta_{M}(s)}.$ (31)

For normal diffusion the distribution of the waiting time for the particle's
next step is given by Eq. (28) with

$\eta(s)=\frac{s}{2D}.$ (32)
We now look for the function $\eta_{M}$ which, together with Eqs. (29) and
(30), provides Eq. (11). The probability that the particle is inside the
membrane, represented by the point $0$, is $P_{M}(t|x_{0})$. From Eqs. (23)
and (24) we get

$\hat{P}_{A}(0^{-},s|x_{0})=\left(\frac{\hat{\Xi}(s)}{\hat{\Phi}(s)+\hat{\Xi}(s)}\right)\frac{{\rm e}^{x_{0}\sqrt{\frac{s}{D}}}}{\sqrt{Ds}},$ (33)

$\hat{P}_{B}(0^{+},s|x_{0})=\left(\frac{\hat{\Phi}(s)\hat{\Xi}(s)}{\hat{\Phi}(s)+\hat{\Xi}(s)}\right)\frac{{\rm e}^{x_{0}\sqrt{\frac{s}{D}}}}{\sqrt{Ds}}.$ (34)

Combining Eqs. (11), (26), and (29)–(34) we obtain

$\eta_{M}(s)=\frac{\hat{\Phi}(s)(1-\hat{\Xi}^{2}(s))}{2\hat{\Xi}(s)(\hat{\Phi}(s)+\hat{\Xi}(s))}\sqrt{\frac{s}{D}}.$ (35)

The boundary conditions at the membrane, Eqs. (10) and (11), are thus
generated by the residence time of the particle in the membrane with the
distribution Eq. (31), in which $\eta_{M}$ is expressed by Eq. (35). However,
due to the normalization condition $\hat{\psi}_{M}(0)=1$, we must have
$\eta_{M}(s)\rightarrow 0$ when $s\rightarrow 0$. This condition and Eq. (35)
provide the following condition for the functions $\hat{\Phi}$ and $\hat{\Xi}$:

$\frac{\sqrt{s}\,\hat{\Phi}(s)(1-\hat{\Xi}^{2}(s))}{\hat{\Xi}(s)(\hat{\Phi}(s)+\hat{\Xi}(s))}\rightarrow 0$ (36)

when $s\rightarrow 0$.
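The random-walk picture can be illustrated with a minimal Monte Carlo; this is our own sketch, not a calculation from this work, and all parameter values are arbitrary. The walk is an ordinary lattice walk everywhere except that each visit to the membrane site draws a heavy-tailed residence time mimicking $\psi_{M}(t)\sim 1/t^{1+\mu}$ with $0<\mu<1$:

```python
import numpy as np

rng = np.random.default_rng(1)

D, eps = 1.0, 0.5
tau = eps**2 / (2.0 * D)        # waiting time of the normal walk, Eq. (32) regime
mu = 0.5                        # tail exponent of the membrane residence time
t_max, n_walkers = 500.0, 500
x0 = -2.0 * eps                 # start close to the membrane at x = 0

final_x = np.empty(n_walkers)
for k in range(n_walkers):
    x, t = x0, 0.0
    while t < t_max:
        if abs(x) < 0.5 * eps:                    # particle is at the membrane site
            t += tau * (1.0 + rng.pareto(mu))     # heavy-tailed stay, ~ t^(-1-mu)
        else:
            t += tau                              # normal-diffusion waiting time
        x += eps * rng.choice((-1.0, 1.0))        # unbiased jump
    final_x[k] = x

msd = np.mean((final_x - x0)**2)
print(f"MSD(t={t_max:g}) ~ {msd:.1f}; free normal diffusion would give {2*D*t_max:g}")
```

Repeating the run for several values of $t_{\rm max}$ and fitting $\log$ MSD against $\log t$ should give an effective exponent below one for small $\mu$, in the spirit of Eq. (1).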
### II.3 First and second moments of $P(x,t|x_{0})$
We now derive the relations between the moments of the particle's location at
time $t$, generated by the Green's functions $P_{A}$ and $P_{B}$, and the
functions $\Phi$ and $\Xi$ that define the boundary conditions at the
membrane. The moments are calculated by means of the formula

$\left\langle x^{i}(t)\right\rangle=\int_{-\infty}^{0}x^{i}P_{A}(x,t|x_{0})dx+\int_{0}^{\infty}x^{i}P_{B}(x,t|x_{0})dx.$ (37)

From Eqs. (23), (24), and the Laplace transform of Eq. (37) we get

$\mathcal{L}\left[\left\langle x(t)\right\rangle\right]=\frac{x_{0}}{s}+{\rm e}^{x_{0}\sqrt{\frac{s}{D}}}\hat{v}(s),$ (38)

$\mathcal{L}\left[\left\langle x^{2}(t)\right\rangle\right]=\frac{x^{2}_{0}}{s}+\frac{2D}{s^{2}}+{\rm e}^{x_{0}\sqrt{\frac{s}{D}}}\hat{w}(s),$ (39)

where

$\hat{v}(s)=\frac{\sqrt{D}}{s^{3/2}}\left(\frac{\left(\hat{\Phi}(s)-1\right)\hat{\Xi}(s)}{\hat{\Phi}(s)+\hat{\Xi}(s)}\right),$ (40)

$\hat{w}(s)=\frac{2D}{s^{2}}\left(\frac{\left(\hat{\Xi}(s)-1\right)\hat{\Phi}(s)}{\hat{\Phi}(s)+\hat{\Xi}(s)}\right).$ (41)
We consider the first and second moments in the long time limit, which
corresponds to the limit of a small parameter $s$. If $s\ll D/|x_{0}|^{2}$,
which corresponds to $t\gg|x_{0}|^{2}/D$, we can use the approximation ${\rm
e}^{x_{0}\sqrt{s/D}}\approx 1$. In this case it is convenient to define the
function

$\hat{z}(s)=\hat{w}(s)+\frac{2D}{s^{2}}.$ (42)

Then, Eqs. (38) and (39) read

$\mathcal{L}\left[\left\langle x(t)\right\rangle\right]=\frac{x_{0}}{s}+\hat{v}(s),$ (43)

$\mathcal{L}\left[\left\langle x^{2}(t)\right\rangle\right]=\frac{x^{2}_{0}}{s}+\hat{z}(s).$ (44)

From Eqs. (41) and (42) we get

$\hat{z}(s)=\frac{2D}{s^{2}}\left(\frac{\left(\hat{\Phi}(s)+1\right)\hat{\Xi}(s)}{\hat{\Phi}(s)+\hat{\Xi}(s)}\right).$ (45)

From Eqs. (40) and (45) we obtain

$\hat{\Phi}(s)=\frac{\hat{z}(s)+2\sqrt{\frac{D}{s}}\hat{v}(s)}{\hat{z}(s)-2\sqrt{\frac{D}{s}}\hat{v}(s)},$ (46)

$\hat{\Xi}(s)=\frac{\hat{z}(s)+2\sqrt{\frac{D}{s}}\hat{v}(s)}{\frac{4D}{s^{2}}-\hat{z}(s)+2\sqrt{\frac{D}{s}}\hat{v}(s)}.$ (47)
Thus, knowing the boundary conditions at the membrane, we can determine the
time evolution of the first and second moments of the particle position
distribution in the long time limit by inserting Eqs. (40) and (45) into Eqs.
(43) and (44), respectively, and then calculating the inverse Laplace
transforms of the obtained functions. Conversely, the temporal evolution of
these moments defines the boundary conditions at the membrane through Eqs.
(46) and (47).
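Relations (40) and (45)–(47) form a consistent pair of transformations between the moments and the membrane functions; this can be verified symbolically (a sketch assuming SymPy):

```python
import sympy as sp

# Substituting v(s) and z(s), expressed through Phi and Xi via Eqs. (40) and (45),
# into Eqs. (46) and (47) must return Phi and Xi.
s, D = sp.symbols('s D', positive=True)
Phi, Xi = sp.symbols('Phi Xi', positive=True)

v = sp.sqrt(D) / s**sp.Rational(3, 2) * (Phi - 1) * Xi / (Phi + Xi)     # Eq. (40)
z = 2 * D / s**2 * (Phi + 1) * Xi / (Phi + Xi)                          # Eq. (45)

Phi_rec = (z + 2*sp.sqrt(D/s)*v) / (z - 2*sp.sqrt(D/s)*v)               # Eq. (46)
Xi_rec = (z + 2*sp.sqrt(D/s)*v) / (4*D/s**2 - z + 2*sp.sqrt(D/s)*v)     # Eq. (47)

print(sp.simplify(Phi_rec - Phi))   # -> 0
print(sp.simplify(Xi_rec - Xi))     # -> 0
```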
### II.4 Boundary conditions at the membrane generated by the first and
second moments
The boundary conditions at the membrane generated by Eqs. (10), (11), (46),
and (47) read
$\displaystyle\left(\frac{s^{2}\hat{z}(s)}{2D}-\frac{s^{3/2}\hat{v}(s)}{\sqrt{D}}\right)\hat{P}_{B}(0^{+},s|x_{0})$
(48)
$\displaystyle=\left(\frac{s^{2}\hat{z}(s)}{2D}+\frac{s^{3/2}\hat{v}(s)}{\sqrt{D}}\right)\hat{P}_{A}(0^{-},s|x_{0}),$
$\displaystyle\left(1-\frac{s^{2}\hat{z}(s)}{4D}+\frac{s^{3/2}\hat{v}(s)}{2\sqrt{D}}\right)\hat{J}_{B}(0^{+},s|x_{0})$
(49)
$\displaystyle=\left(\frac{s^{2}\hat{z}(s)}{4D}+\frac{s^{3/2}\hat{v}(s)}{2\sqrt{D}}\right)\hat{J}_{A}(0^{-},s|x_{0}).$
Due to the formula
$\mathcal{L}^{-1}\left[\hat{g}(s)\hat{h}(s)\right]=\int_{0}^{t}g(t^{\prime})h(t-t^{\prime})dt^{\prime},$
(50)
in the time domain the boundary conditions Eqs. (48) and (49) take the forms
of integral operators with the kernels depending on the functions $v(t)$ and
$z(t)$.
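For illustration (our rewriting, assuming vanishing initial values so that multiplication by $s^{2}$ corresponds to $d^{2}/dt^{2}$, and using $\mathcal{L}^{-1}[s^{-1/2}]=1/\sqrt{\pi t}$), Eq. (48) can be written as
$\frac{1}{2D}\frac{d^{2}}{dt^{2}}\int_{0}^{t}\left[z(t-t^{\prime})-2\sqrt{D}\,\tilde{v}(t-t^{\prime})\right]P_{B}(0^{+},t^{\prime}|x_{0})dt^{\prime}=\frac{1}{2D}\frac{d^{2}}{dt^{2}}\int_{0}^{t}\left[z(t-t^{\prime})+2\sqrt{D}\,\tilde{v}(t-t^{\prime})\right]P_{A}(0^{-},t^{\prime}|x_{0})dt^{\prime},$
where $\tilde{v}(t)=\int_{0}^{t}v(t^{\prime})/\sqrt{\pi(t-t^{\prime})}\,dt^{\prime}$ carries the factor $s^{-1/2}$ of the term $s^{3/2}\hat{v}(s)$.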
### II.5 Green’s functions generated by the first and second moments
From Eqs. (23), (24), (26), (46), and (47) we get
$\displaystyle\hat{P}_{A}(x,s|x_{0})=\frac{{\rm
e}^{-|x-x_{0}|\sqrt{\frac{s}{D}}}}{2\sqrt{Ds}}$ (51)
$\displaystyle-\left(1-\frac{s^{2}\hat{z}(s)}{2D}+\frac{s^{3/2}\hat{v}(s)}{\sqrt{D}}\right)\frac{{\rm
e}^{(x+x_{0})\sqrt{\frac{s}{D}}}}{2\sqrt{Ds}},$
$\displaystyle\hat{P}_{B}(x,s|x_{0})=\left(\frac{s^{2}\hat{z}(s)}{4D}+\frac{s^{3/2}\hat{v}(s)}{2\sqrt{D}}\right)\frac{{\rm
e}^{-(x-x_{0})\sqrt{\frac{s}{D}}}}{\sqrt{Ds}},$ (52)
we also obtain
$\hat{P}_{M}(s|x_{0})=\left(1-\frac{s^{2}\hat{z}(s)}{2D}\right)\frac{{\rm
e}^{x_{0}\sqrt{\frac{s}{D}}}}{s}.$ (53)
## III Boundary conditions at a thin membrane which generate subdiffusion
We consider how the temporal evolution of the first and second moments that
are power functions of time affects the boundary conditions and Green’s
functions. These moments lead to the relation Eq. (1).
### III.1 Moments as power functions of time
We consider time evolution of the first and second moments, and consequently
the mean square displacement, as power functions of time. We use Eqs. (43) and
(44) assuming
$\hat{v}(s)=\frac{B}{s^{1+\beta}},$ (54) $\hat{z}(s)=\frac{A}{s^{1+\alpha}},$
(55)
where $\alpha,\beta,A>0$. In the time domain we have
$\left\langle x(t)\right\rangle=x_{0}+B^{\prime}t^{\beta},$ (56) $\left\langle
x^{2}(t)\right\rangle=x^{2}_{0}+A^{\prime}t^{\alpha},$ (57)
where $A^{\prime}=A/\Gamma(1+\alpha)$ and $B^{\prime}=B/\Gamma(1+\beta)$.
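The inversions use the standard transform pair
$\mathcal{L}^{-1}\left[\frac{1}{s^{1+\gamma}}\right]=\frac{t^{\gamma}}{\Gamma(1+\gamma)},\;\gamma>-1,$
applied with $\gamma=\beta$ and $\gamma=\alpha$, which gives Eqs. (56) and (57) directly from Eqs. (43), (44), (54), and (55).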
Using the equation
$\left\langle(\Delta x)^{2}(t)\right\rangle=\left\langle
x^{2}(t)\right\rangle-\left\langle x(t)\right\rangle^{2},$ (58)
we get $\left\langle(\Delta
x)^{2}(t)\right\rangle=A^{\prime}t^{\alpha}-B^{\prime
2}t^{2\beta}-2x_{0}B^{\prime}t^{\beta}$. Since $\left\langle(\Delta
x)^{2}(t)\right\rangle>0$, we suppose $\alpha\geq 2\beta$, but if
$\alpha=2\beta$ we assume that $A^{\prime}>B^{\prime 2}$. Under these
conditions for sufficiently long times this relation can be approximated as
$\left\langle(\Delta x)^{2}(t)\right\rangle=\tilde{A}t^{\alpha},$ (59)
where $\tilde{A}=A^{\prime}$ when $\alpha>2\beta$ and
$\tilde{A}=A^{\prime}-B^{\prime 2}$ when $\alpha=2\beta$.
### III.2 Boundary conditions at the membrane
Combining Eqs. (48), (49), (54), (55), and using the following formula valid
for bounded function $g$
$\mathcal{L}^{-1}[s^{\gamma}\hat{g}(s)]=\frac{d^{\gamma}g(t)}{dt^{\gamma}}\;,\;0<\gamma<1,$
(60)
we get the boundary conditions at the membrane with Riemann–Liouville
fractional time derivatives
$\displaystyle\left(\frac{A}{2D}\frac{\partial^{1-\alpha}}{\partial
t^{1-\alpha}}-\frac{B}{\sqrt{D}}\frac{\partial^{1/2-\beta}}{\partial
t^{1/2-\beta}}\right)P_{B}(0^{+},t|x_{0})$ (61)
$\displaystyle=\left(\frac{A}{2D}\frac{\partial^{1-\alpha}}{\partial
t^{1-\alpha}}+\frac{B}{\sqrt{D}}\frac{\partial^{1/2-\beta}}{\partial
t^{1/2-\beta}}\right)P_{A}(0^{-},t|x_{0}),$
$\displaystyle\left(1-\frac{A}{4D}\frac{\partial^{1-\alpha}}{\partial
t^{1-\alpha}}+\frac{B}{2\sqrt{D}}\frac{\partial^{1/2-\beta}}{\partial
t^{1/2-\beta}}\right)J_{B}(0^{+},t|x_{0})$ (62)
$\displaystyle=\left(\frac{A}{4D}\frac{\partial^{1-\alpha}}{\partial
t^{1-\alpha}}+\frac{B}{2\sqrt{D}}\frac{\partial^{1/2-\beta}}{\partial
t^{1/2-\beta}}\right)J_{A}(0^{-},t|x_{0}).$
The discussion in Sec. III.1 shows that $0<\alpha\leq 1$ and $0\leq\beta\leq
1/2$. Thus, all fractional derivatives in the above boundary conditions are of
non-negative orders which are not greater than one.
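For reference, the Riemann–Liouville fractional derivative of order $\gamma\in(0,1)$ used in Eqs. (61) and (62) is defined (for functions vanishing suitably at $t=0$) as
$\frac{d^{\gamma}g(t)}{dt^{\gamma}}=\frac{1}{\Gamma(1-\gamma)}\frac{d}{dt}\int_{0}^{t}\frac{g(t^{\prime})}{(t-t^{\prime})^{\gamma}}\,dt^{\prime};$
for $\gamma=0$ it reduces to the identity operator, which occurs here when $\alpha=1$ or $\beta=1/2$.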
### III.3 Solutions to diffusion equation
From Eqs. (51)–(55) we get
$\displaystyle\hat{P}_{A}(x,s|x_{0})=\frac{1}{2\sqrt{Ds}}\left[{\rm
e}^{-|x-x_{0}|\sqrt{\frac{s}{D}}}-{\rm
e}^{(x+x_{0})\sqrt{\frac{s}{D}}}\right]$ (63)
$\displaystyle+\left(\frac{As^{-\alpha+1/2}}{2D^{3/2}}-\frac{Bs^{-\beta}}{4D}\right)\;{\rm
e}^{(x+x_{0})\sqrt{\frac{s}{D}}},$
$\displaystyle\hat{P}_{B}(x,s|x_{0})=\left(\frac{As^{-\alpha+1/2}}{2D^{3/2}}+\frac{Bs^{-\beta}}{2D}\right)\;{\rm
e}^{-(x-x_{0})\sqrt{\frac{s}{D}}},$ (64)
$\hat{P}_{M}(s|x_{0})=\left(1-\frac{As^{1-\alpha}}{2D}\right)\frac{{\rm
e}^{x_{0}\sqrt{\frac{s}{D}}}}{s}.$ (65)
We calculate the inverse Laplace transforms of Eqs. (63)–(65) using the
formulas $\mathcal{L}^{-1}[{\rm e}^{-x\sqrt{s/D}}/\sqrt{Ds}]={\rm
e}^{-x^{2}/4Dt}/\sqrt{\pi Dt}$, $\mathcal{L}^{-1}[{\rm
e}^{-x\sqrt{s/D}}/s]={\rm erfc}(x/2\sqrt{Dt})$, $x>0$, and [19]
$\displaystyle\mathcal{L}^{-1}\left[s^{\nu}{\rm e}^{-as^{\beta}}\right]\equiv
f_{\nu,\beta}(t;a)$ (66)
$\displaystyle=\frac{1}{t^{\nu+1}}\sum_{k=0}^{\infty}{\frac{1}{k!\Gamma(-k\beta-\nu)}\left(-\frac{a}{t^{\beta}}\right)^{k}}\;,$
$a,\beta>0$. In this way we obtain the following solutions to the diffusion
equation Eq. (6) with the boundary conditions Eqs. (61) and (62)
$\displaystyle P_{A}(x,t|x_{0})=\frac{1}{2\sqrt{\pi Dt}}\left[{\rm
e}^{-\frac{(x-x_{0})^{2}}{4Dt}}-{\rm e}^{-\frac{(x+x_{0})^{2}}{4Dt}}\right]$
(67)
$\displaystyle+\frac{A}{2D^{3/2}}f_{-\alpha+1/2,1/2}\left(t;\frac{-(x+x_{0})}{\sqrt{D}}\right)$
$\displaystyle-\frac{B}{2D}f_{-\beta,1/2}\left(t;\frac{-(x+x_{0})}{\sqrt{D}}\right),$
$\displaystyle
P_{B}(x,t|x_{0})=\frac{A}{2D^{3/2}}f_{-\alpha+1/2,1/2}\left(t;\frac{x-x_{0}}{\sqrt{D}}\right)$
(68)
$\displaystyle+\frac{B}{2D}f_{-\beta,1/2}\left(t;\frac{x-x_{0}}{\sqrt{D}}\right).$
The inverse Laplace transform of Eq. (65) reads
$\displaystyle P_{M}(t|x_{0})={\rm
erfc}\left(\frac{-x_{0}}{2\sqrt{Dt}}\right)-\frac{A}{2D}f_{-\alpha,1/2}\left(t;\frac{-x_{0}}{\sqrt{D}}\right).$
(69)
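As a consistency check of Eq. (66), setting $\nu=-1/2$ and $\beta=1/2$ recovers the Gaussian kernel quoted above: only even $k$ contribute (since $1/\Gamma(-k/2+1/2)=0$ for odd $k$), and the series sums to
$f_{-1/2,1/2}(t;a)=\frac{1}{\sqrt{t}}\sum_{m=0}^{\infty}\frac{1}{(2m)!\,\Gamma(1/2-m)}\left(\frac{a^{2}}{t}\right)^{m}=\frac{{\rm e}^{-a^{2}/4t}}{\sqrt{\pi t}},$
in agreement with $\mathcal{L}^{-1}[{\rm e}^{-a\sqrt{s}}/\sqrt{s}]$.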
### III.4 Comparison of two models
We compare the Green’s functions for the diffusion equation (6) and for the
fractional subdiffusion equation (2). In both cases we assume the boundary
conditions that the functions are continuous at the membrane, but the flux is
continuous for the solutions to Eq. (2) only. The discontinuity of the flux at
the membrane in the first case generates a subdiffusion effect. We also assume
that the Green’s functions for both equations generate the same relation
$\left\langle(\Delta
x)^{2}(t)\right\rangle=\frac{2D_{\alpha}t^{\alpha}}{\Gamma(1+\alpha)}.$
Thus, we solve the normal diffusion equation with the boundary conditions (61)
and (62) with $A=2D_{\alpha}/\Gamma(1+\alpha)$ and $B=0$. We obtain
$\displaystyle P_{A}(x,t|x_{0})=\frac{1}{2\sqrt{\pi Dt}}\left({\rm
e}^{-\frac{(x-x_{0})^{2}}{4Dt}}-{\rm e}^{-\frac{(x+x_{0})^{2}}{4Dt}}\right)$
(70)
$\displaystyle+\frac{D_{\alpha}}{2D^{3/2}\Gamma(1+\alpha)}f_{1/2-\alpha,1/2}\left(t;\frac{|x+x_{0}|}{\sqrt{D}}\right),$
$\displaystyle P_{B}(x,t|x_{0})=\frac{D_{\alpha}}{2D^{3/2}\Gamma(1+\alpha)}$
(71) $\displaystyle\times
f_{1/2-\alpha,1/2}\left(t;\frac{x-x_{0}}{\sqrt{D}}\right),$
the function $P_{M}$ is
$\displaystyle P_{M}(t|x_{0})={\rm
erfc}\left(\frac{-x_{0}}{2\sqrt{Dt}}\right)$ (72)
$\displaystyle-\frac{D_{\alpha}}{D\Gamma(1+\alpha)}f_{-\alpha,1/2}\left(t;\frac{-x_{0}}{\sqrt{D}}\right),$
The solution to fractional diffusion equation in terms of the Laplace
transform is
$\hat{P}(x,s|x_{0})=\frac{s^{-1+\alpha/2}}{2\sqrt{D_{\alpha}}}\;{\rm
e}^{-|x-x_{0}|\sqrt{\frac{s^{\alpha}}{D_{\alpha}}}}.$
In the time domain we get
$\displaystyle
P(x,t|x_{0})=\frac{1}{2\sqrt{D_{\alpha}}}f_{-1+\alpha/2,\alpha/2}\left(t;\frac{|x-x_{0}|}{\sqrt{D_{\alpha}}}\right).$
(73)
The plots of the Green’s functions Eqs. (70), (71) for the model considered in
this paper, and of Eq. (73), the solution to the fractional subdiffusion
equation, are shown in Figs. 4 and 5. The Green’s functions are assumed to be
continuous at the membrane. However, as opposed to Eq. (73), the flux is
assumed to be discontinuous at the membrane for the functions Eqs. (70) and
(71). Then, the particle can stay inside the membrane as it passes through it.
The plots show that the subdiffusion effect is achieved by anomalously long
residence times within the membrane. The effect is stronger for smaller
$\alpha$. In Fig. 6 we can see that the probability of finding a particle
inside the membrane strongly depends on $\alpha$. If $\alpha$ is greater, the
mobility of the particle is greater and it is less likely to remain in the
membrane. From Eqs. (35), (46), (47), (54), and (55) we obtain
$\displaystyle\eta_{M}(s)=\frac{2\sqrt{D}}{A}s^{\alpha-1/2}\left(1-\frac{A}{2D}s^{1-\alpha}\right)$
(74)
$\displaystyle\times\left(\frac{1-\frac{B}{2\sqrt{D}}s^{-\beta+1/2}}{1+\frac{2B\sqrt{D}}{A}s^{\alpha-\beta-1/2}}\right),$
In the limit of small $s$ we get $\eta_{M}(s)\approx\frac{2\sqrt{D}}{A}s^{\alpha-1/2}$.
Using the approximation $\hat{\psi}_{M}(s)\approx
1-\epsilon^{2}\eta_{M}(s)\approx{\rm e}^{-\epsilon^{2}\eta_{M}(s)}$ and Eq.
(66) with $\nu=0$ we find that $\psi_{M}$ has the heavy tail
$\psi_{M}(t)\approx\frac{\kappa}{t^{\alpha+1/2}},\;t\rightarrow\infty,$ (75)
where $\kappa=2\epsilon^{2}\sqrt{D}(\alpha-1/2)/A\Gamma(3/2-\alpha)$. This
tail is "heavier" than the tail $\psi_{\alpha}(t)\sim 1/t^{1+\alpha}$,
$t\rightarrow\infty$, of the model that provides the fractional subdiffusion
equation Eq. (2) [2, 4].
## IV Final remarks
We have shown how boundary conditions at a thin membrane affect the first and
second moments of the probability density $P(x,t|x_{0})$ of a particle being
at position $x$ at time $t$. This probability is a solution to the normal
diffusion equation for the initial condition $P(x,0|x_{0})=\delta(x-x_{0})$.
We have also considered the inverse problem: how, knowing the time evolution
of these moments, one can find the boundary conditions and the Green’s
functions. The first and second moments, considered in the long time limit,
also determine the temporal evolution of $\left\langle(\Delta
x)^{2}(t)\right\rangle$, which is usually used to define the kind of
diffusion. We have shown that, assuming appropriate boundary conditions, we
can change the kind of diffusion in the membrane system despite the fact that
outside the membrane the process is described by the normal diffusion
equation. The other remarks are as follows.
(1) Whether the relation (1) alone defines the kind of diffusion has been
treated by some authors as an open problem. It has been shown in Ref. [20]
that an appropriate combination of subdiffusion and superdiffusion leads to
Green’s functions that generate Eq. (1) with $\alpha=1$, which is
characteristic for normal diffusion, although the process is non–Gaussian and
non–Markovian. The conclusion is that, in addition to the relation (1), the
characterization of a diffusion process should be based on its stochastic
interpretation. We have presented a stochastic random walk model in which, if
the particle enters the membrane, the waiting time for its jump has a heavy
tail $\psi_{M}(t)\sim 1/t^{\alpha+1/2}$ when $t\rightarrow\infty$, while the
waiting time for a particle jump in the regions external to the membrane is
the same as for normal diffusion. This tail is heavier than the tail of the
waiting time distribution $\psi_{\alpha}(t)\sim 1/t^{\alpha+1}$ in a model
providing the fractional subdiffusion equation Eq. (2). The function
$\psi_{M}$ affects diffusion of a particle at only one point, corresponding to
the position of the membrane, while the function $\psi_{\alpha}$ affects
particle diffusion at each point in the system. However, both determine the
relation Eq. (1) with the same $\alpha$ in the long time limit. Thus, in the
presented model subdiffusion is generated by the long retention of the
diffusing particle inside the membrane.
(2) A possible application of the particle random walk model in a system with
a subdiffusive thin membrane could be the diffusion of an antibiotic through a
thin layer of bacterial biofilm. The bacteria in the biofilm have many defense
mechanisms against the action of the antibiotic. One of them is the thickening
of the biofilm, which causes antibiotic particles to be trapped in the biofilm
for a long time [21].
(3) As an example, we have considered first and second moments that are power
functions of time. However, the results obtained in this paper can be applied
to other forms of the temporal evolution of the moments. For example, assuming
that the functions $\hat{v}$ and $\hat{z}$ are slowly varying, we obtain a
temporal evolution of the mean square particle displacement which is
characteristic for slow subdiffusion (ultraslow diffusion), see [11, 15, 16].
(4) The relations between the moments and the boundary conditions at the
membrane have the following properties. (a) When the Green’s function is
continuous at the membrane, $\hat{\Phi}(s)\equiv 1$, then $\hat{v}(s)\equiv
0$, see Eq. (40). Due to Eq. (43) there is $\left\langle
x(t)\right\rangle=x_{0}$. The second moment evolves over time according to the
formula $\left\langle
x^{2}(t)\right\rangle=\mathcal{L}^{-1}[(x_{0}^{2}+2D\hat{\Xi})/s^{2}]$. (b)
When the flux is continuous at the membrane, $\hat{\Xi}(s)\equiv 1$, then Eq.
(47) provides $\hat{z}=2D/s^{2}$. Thus, the flux is continuous at the membrane
only if $\left\langle x^{2}(t)\right\rangle=x_{0}^{2}+2Dt$. Due to Eq. (26),
the probability of a particle becoming trapped in the membrane is zero. Eq.
(35) shows that $\eta_{M}(s)\equiv 0$, thus $\hat{\psi}_{M}(s)\equiv 1$ and
$\psi_{M}(t)=\delta(t)$. This means that even when a particle enters the
membrane, it will immediately leave it. In this case the first moment evolves
in time as long as the Green’s function is not continuous at the membrane,
$\hat{\Phi}(s)\neq 1$. (c) When the probability density $P$ and flux $J$ are
continuous at the membrane, $\hat{\Phi}(s)\equiv 1$ and $\hat{\Xi}(s)\equiv
1$, then in time domain we have $\left\langle x(t)\right\rangle=x_{0}$ and
$\left\langle x^{2}(t)\right\rangle=x_{0}^{2}+2Dt$. In this case we get the
standard relation for normal diffusion $\left\langle(\Delta
x)^{2}(t)\right\rangle=2Dt$. This result is obvious as the continuity of the
Green’s function and flux means that there is no membrane effect on particle
diffusion.
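This last case can be verified directly: substituting $\hat{v}(s)\equiv 0$ and $\hat{z}(s)=2D/s^{2}$ into Eqs. (51)–(53) gives $\hat{P}_{M}(s|x_{0})=0$ and
$\hat{P}_{A}(x,s|x_{0})=\frac{{\rm e}^{-|x-x_{0}|\sqrt{\frac{s}{D}}}}{2\sqrt{Ds}},\qquad\hat{P}_{B}(x,s|x_{0})=\frac{{\rm e}^{-(x-x_{0})\sqrt{\frac{s}{D}}}}{2\sqrt{Ds}},$
which together invert to the free Gaussian Green’s function ${\rm e}^{-(x-x_{0})^{2}/4Dt}/\sqrt{4\pi Dt}$, as expected in the absence of a membrane.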
## Acknowledgments
This paper was partially supported by the Jan Kochanowski University under
grant SMGR.RN.20.222.628.
## Appendix I
The Laplace transforms of solutions to the diffusion equation with boundary
conditions Eq. (12) read
$\displaystyle\hat{P}_{A}(x,s|x_{0})=\frac{1}{2\sqrt{Ds}}{\rm
e}^{-|x-x_{0}|\sqrt{\frac{s}{D}}}$ (76) $\displaystyle+A{\rm
e}^{(x+x_{0})\sqrt{\frac{s}{D}}},$ $\hat{P}_{B}(x,s|x_{0})=B{\rm
e}^{-(x-x_{0})\sqrt{\frac{s}{D}}}.$ (77)
From Eqs. (9), (16), (17), (76), and (77) we get the following system of
linear equations with respect to $A$ and $B$
$\displaystyle
A\bigg{(}\gamma_{1}(s)-\sqrt{Ds}\gamma_{2}(s)\bigg{)}-B\bigg{(}\gamma_{3}(s)+\sqrt{Ds}\gamma_{4}(s)\bigg{)}$
(78)
$\displaystyle=-\frac{1}{2}\bigg{(}\frac{\gamma_{1}(s)}{\sqrt{Ds}}+\gamma_{2}(s)\bigg{)},$
$\displaystyle
A\bigg{(}\lambda_{1}(s)-\sqrt{Ds}\lambda_{2}(s)\bigg{)}-B\bigg{(}\lambda_{3}(s)+\sqrt{Ds}\lambda_{4}(s)\bigg{)}$
(79)
$\displaystyle=-\frac{1}{2}\bigg{(}\frac{\lambda_{1}(s)}{\sqrt{Ds}}+\lambda_{2}(s)\bigg{)}.$
The determinants $W(s)$, $W_{A}(s)$, and $W_{B}(s)$ for the system of
equations (78) and (79) are given by Eqs. (20), (21), and (22), respectively.
Solutions to Eqs. (78) and (79) $A=W_{A}(s)/W(s)$ and $B=W_{B}(s)/W(s)$ are
unique only if $W(s)\neq 0$. Under this condition the solutions to the
diffusion equation are determined by the membrane boundary conditions
uniquely. Comparing Eqs. (23) and (24) with (76) and (77), respectively, we
get Eqs. (18) and (19) if $A\neq\pm 1/(2\sqrt{Ds})$. Since the boundary
conditions determine the solution to the diffusion equation uniquely, the
equivalence of solutions (23), (24) and (76), (77) means the equivalence of
the boundary conditions (10), (11) and (16), (17). If $A=\pm
1/(2\sqrt{Ds})$, from Eq. (76) we get
$\displaystyle\hat{P}_{A}(x,s|x_{0})=\frac{1}{2\sqrt{Ds}}{\rm
e}^{-|x-x_{0}|\sqrt{\frac{s}{D}}}$ (80)
$\displaystyle\pm\frac{1}{2\sqrt{Ds}}{\rm e}^{(x+x_{0})\sqrt{\frac{s}{D}}}.$
The $+$ sign before the second term on the right–hand side of Eq. (80) gives
the Green’s function for a system with a fully reflecting wall; in this case
the boundary condition at the membrane is $J_{A}(0^{-},t|x_{0})=0$. The $-$
sign gives the Green’s function for a system with a fully absorbing wall, for
which the boundary condition is $P_{A}(0^{-},t|x_{0})=0$. In both cases
diffusion is considered in region $A$ only.
## Appendix II
We present how to get Eq. (27); here we use the notation shown in Fig. 2.
Within the Continuous Time Random Walk model the Laplace transform of the
diffusion flux reads [16]
$\hat{J}(x,s|x_{0})=-\frac{\epsilon^{2}s\hat{\psi}(s)}{2(1-\hat{\psi}(s))}\frac{\partial\hat{P}(x,s|x_{0})}{\partial
x}.$ (81)
The mean number of particle jumps in the time interval $[0,t]$ is
$\left\langle n(t)\right\rangle=\sum_{n=1}^{\infty}nQ_{n}(t)$, where $Q_{n}$
is the probability that the particle jumps $n$ times in the time interval. In
terms of the Laplace transform we have
$\hat{Q}_{n}(s)=\hat{\psi}^{n}(s)(1-\hat{\psi}(s))/s$, then
$\mathcal{L}[\left\langle
n(t)\right\rangle]=\hat{\psi}(s)/[s(1-\hat{\psi}(s))]$. The frequency of
particle jumps $\nu$ is defined as $\nu(t)=d\left\langle
n(t)\right\rangle/dt$. Since $\left\langle n(0)\right\rangle=0$ we get
$\hat{\nu}(s)=\hat{\psi}(s)/(1-\hat{\psi}(s))$. Using the above formula and
approximating the derivative as $\partial\hat{P}(x,s|x_{0})/\partial
x=[\hat{P}(x^{+},s|x_{0})-\hat{P}(x^{-},s|x_{0})]/\epsilon$ we define the
probability flux by the unidirectional fluxes. The unidirectional flux
$J_{x^{-}\rightarrow x^{+}}$ controls the probability that a particle jumps
from $x^{-}$ to $x^{+}$ per unit time; $J_{x^{+}\rightarrow x^{-}}$, which
controls particle jumps in the opposite direction, has a similar
interpretation. From the above equations we obtain
$\hat{J}(x,s|x_{0})=\hat{J}_{x^{-}\rightarrow
x^{+}}(x^{-},s|x_{0})-\hat{J}_{x^{+}\rightarrow x^{-}}(x^{+},s|x_{0}),$ (82)
where
$J_{x^{-}\rightarrow x^{+}}(x^{-},s|x_{0})=\frac{\epsilon
s\hat{\nu}(s)}{2}\hat{P}(x^{-},s|x_{0}),$ (83) $J_{x^{+}\rightarrow
x^{-}}(x^{+},s|x_{0})=\frac{\epsilon s\hat{\nu}(s)}{2}\hat{P}(x^{+},s|x_{0}).$
(84)
By adapting the above equations to the system presented in Fig. 2, we change
the particle jump frequency into frequencies defined in the media $a$ and $b$.
We get
$J_{x^{-}\rightarrow x^{+}}(x^{-},s|x_{0})=\frac{\epsilon
s\hat{\nu}_{a}(s)}{2}\hat{P}_{a}(x^{-},s|x_{0}),$ (85) $J_{x^{+}\rightarrow
x^{-}}(x^{+},s|x_{0})=\frac{\epsilon
s\hat{\nu}_{b}(s)}{2}\hat{P}_{b}(x^{+},s|x_{0}),$ (86)
where $\hat{\nu}_{i}(s)=\hat{\psi}_{i}(s)/(1-\hat{\psi}_{i}(s))$,
$i\in\\{a,b\\}$. From Eqs. (82), (85), and (86) we obtain Eq. (27).
## References
* (1) J.P. Bouchaud and A. Georges, Phys. Rep. 195, 127 (1990).
* (2) R. Metzler and J. Klafter, Phys. Rep. 339, 1 (2000).
* (3) R. Metzler and J. Klafter, J. Phys. A 37, R161 (2004).
* (4) J. Klafter and I.M. Sokolov, First Steps in Random Walks: From Tools to Applications (Oxford UP, NY, 2011).
* (5) T. Sandev, R. Metzler, and A. Chechkin, Fract. Calc. Appl. Analys. 21, 10 (2018); A. Chechkin, R. Gorenflo, and I.M. Sokolov, Phys. Rev. E 66, 046129 (2002); Frac. Calc. Appl. Anal. 6, 259 (2003); A. Chechkin, J. Klafter, and I.M. Sokolov, In: Fractional Dynamics: Recent Advances, World Scientific, Singapore (2011); T. Sandev, I.M. Sokolov, R. Metzler, and A. Chechkin, Chaos Solit. Fract. 102, 210 (2017); C.H. Eab and S.C. Lim, Phys. Rev. E 83, 031136 (2011);
* (6) A. Compte, Phys. Rev. E 53, 4191 (1996).
* (7) T. D. Frank, Nonlinear Fokker-Planck Equations. Fundamental and Applications, (Springer, Berlin, 2005); T. Kosztołowicz and K.D. Lewandowska, Phys. Rev. E 86, 021108 (2012).
* (8) E.K. Lenzi, R.S. Mendes, and C. Tsallis, Phys. Rev. E 67, 031104 (2003).
* (9) S.C. Lim and S.V. Muniandy, Phys. Rev. E 66, 021114 (2002).
* (10) K.S. Fa and E.K. Lenzi, Phys. Rev. E 71, 012101 (2005).
* (11) T. Kosztołowicz, Phys. Rev. E 102, 022123 (2020).
* (12) V.G. Guimarães, H.V. Ribeiro, Q. Li, L.R. Evangelista, E.K. Lenzi, and R.S. Zola, Soft Matter 11, 1658 (2015).
* (13) T. Zhang, B. Shi, Z. Guo, Z. Chai, and J. Lu, Phys. Rev. E 85, 016701 (2012); T. Kosztołowicz, K. Dworecki, and K.D. Lewandowska, ibid. 86, 021123 (2012); T. Kosztołowicz, Physica A 298, 285 (2001); D.K. Singh and A.R. Ray, J. Membr. Sci. 155, 107 (1999); Y.D. Kim, J. Y. Kim, H. K. Lee, and S. C. Kim, ibid. 190 69 (2001); R. Ash, ibid. 232, 9 (2004); S.M. Huang, M. Yang, W.-F. Zhong, and Y. Xu, ibid. 442, 8 (2013); A. Adrover, M. Giona, M. Grassi, R. Lapasin, and S. Pricl, ibid. 113, 7 (1996); M.J. Abdekhodaie, ibid. 174, 81 (2000); P. Taveira, A. Mendes, and C. Costa, ibid. 221, 123 (2003); M.I. Cabrera, J.A. Luna, and R.J.A. Grau, ibid. 280, 693 (2006); T. Kosztołowicz, ibid. 320, 492 (2008); I. Goychuk and P. Hänggi, Phys. Rev. E 70, 051915 (2004); N. Korabel and E. Barkai, ibid. 83, 051113 (2011); Phys. Rev. Lett. 104, 170603 (2010); M.A. Lomholt, I.M. Zaid, and R. Metzler, ibid. 98, 200603 (2007); I.M. Zaid, M.A. Lomholt, and R. Metzler, Biophys. J. 97, 710 (2009); D.S. Grebenkov, J. Chem. Phys. 151, 104108 (2019); ibid. 132, 034104 (2010).
* (14) A. Bobrowski, Convergence of one–parameter operator semigroups in models of mathematical biology and elsewhere (Cambridge UP, 2016).
* (15) T. Kosztołowicz and A. Dutkiewicz, Math. Meth. Appl. Sci. 43, 10500 (2020).
* (16) T. Kosztołowicz, Phys. Rev. E 99, 022127 (2019).
* (17) T. Kosztołowicz, S. Wa̧sik, and K.D. Lewandowska, Phys. Rev. E 96, 010101(R) (2017); T. Kosztołowicz, ibid. 91, 022102 (2015); Int. J. Heat Mass Transf. 111, 1322 (2017).
* (18) E. Awoonor–Williams and Ch.N. Rowley, Biochim. Biophys. Acta 1858, 1672 (2016); W. Shinoda, ibid. 1858, 2254 (2016).
* (19) T. Kosztołowicz, J. Phys. A 37, 10779 (2004).
* (20) B. Dybiec and E. Gudowska–Nowak, Phys. Rev. E 80, 061122 (2009).
* (21) T. Kosztołowicz and R. Metzler, Phys. Rev. E 102, 032408 (2020); T. Kosztołowicz, R. Metzler, S. Wa̧sik, and M. Arabski, PLoS One 15, e0243003 (2020).
* (22) A.V. Chechkin, J. Klafter, and I.M. Sokolov, Europhys. Lett. 63, 326 (2003); S.I. Denisov and H. Kantz, Phys. Rev. E 83, 041132 (2011); S.I, Denisov, S.B. Yuste, Yu.S. Bystrik, H. Kantz, and K. Lindenberg, ibid. 84, 061143 (2011); R. Metzler, J.H. Jeon, A.G. Cherstvy, and E. Barkai, Phys. Chem. Chem. Phys. 16, 24128 (2014); L.P. Sanders, M.A. Lomholt, L. Lizana, K. Fogelmark, R. Metzler, and T. Abjörnsson, New J. Phys. 16, 113050 (2014); A.S. Bodrova, A.V. Chechkin, A.G. Cherstvy, and R. Metzler, ibid. 17, 063038 (2015); A.V. Chechkin, H. Kantz, and R. Metzler, Eur. Phys. J. B 90, 205 (2017); T. Kosztołowicz, J. Stat. Mech. P10021 (2015).
# Lissy: Experimenting with on-chain order books

Mahsa Moosavi and Jeremy Clark

Concordia University, Montréal, Canada
###### Abstract
Financial regulators have long-standing concerns about fully decentralized
exchanges that run ‘on-chain’ without any obvious regulatory hooks. The
popularity of Uniswap, an automated market maker (AMM), made these concerns a
reality. AMMs implement a lightweight dealer-based trading system, but they
are unlike anything on Wall Street, require fees intrinsically, and are
susceptible to front-running attacks. This leaves the following research
questions we address in this paper: (1) are conventional (i.e., order books),
secure (i.e., resistant to front-running and price manipulation) and fully
decentralized exchanges feasible on a public blockchain like Ethereum, (2)
what is the performance profile, and (3) how much do Layer 2 techniques (e.g.,
Arbitrum) increase performance? To answer these questions, we implement,
benchmark, and experiment with an Ethereum-based call market exchange called
Lissy. We confirm the functionality is too heavy for Ethereum today (you
cannot expect to exceed a few hundred trade executions per block) but show it
scales dramatically (99.88% gas cost reduction) on Arbitrum.
## 1 Introductory Remarks
There are three main approaches to arranging a trade [19]. In a _quote-driven_
market, a dealer uses its own inventory to offer a price for buying or selling
an asset. In a _brokered exchange_ , a broker finds a buyer and seller. In an
_order-driven_ market, offers to buy (_bids_) and sell (_offers_ /_asks_) from
many traders are placed as orders in an order book. Order-driven markets can
be _continuous_ , with buyers/sellers at any time adding orders to the order
book (_makers_) or executing against an existing order (_takers_); or they can
be _called_ , where all traders submit orders within a window of time and
orders are matched in a batch (like an auction).
Conventional financial markets (e.g., NYSE, NASDAQ) use both continuous time
trading during open hours, and a call market before and after open hours to
establish an opening price and a closing price. After early experiments at
implementing continuous time trading on Ethereum (e.g., EtherDelta, OasisDEX),
it was generally accepted that conventional trading is infeasible on Ethereum
for performance reasons. Centralized exchanges continued their predominance,
while slowly some exchanges moved partial functionality on-chain (e.g.,
custody of assets) while executing trades off-chain.
A clever quote-driven alternative, called an automatic market maker (AMM), was
developed that only requires data structures and traversals with low gas
complexity. This approach has undesirable price dynamics (e.g., market impact
of a trade, slippage between the best bid/ask and actual average execution
price, etc.) which explains why there is no Wall Street equivalent, however,
it is efficient on Ethereum and works ‘good enough’ to attract trading. First
generation AMMs provide makers (called liquidity providers) with no ability to
act on price information—they are uninformed traders that can only lose
(called impermanent loss) on trades but make money on fees. Current generation
AMMs (e.g., Uniswap v3) provide informed makers with a limited ability
(called concentrated liquidity) to act on proprietary information [31] without
breaking Ethereum’s performance limitations. Ironically, the logical extension
of this is a move back to where it all started—a full-fledged order-driven
exchange that allows informed makers the fullest ability to trade
strategically.
Contributions. In this paper, we experiment with on-chain markets to
understand in detail if they remain infeasible on Ethereum and what the
limiting factors are. Some highlights from our research include answering the
following questions:
* $\bullet$
What type of exchange has the fairest price execution on balance? (A call
market.)
* $\bullet$
How many orders can be processed on-chain? (Upper-bounded by 152 per block.)
* $\bullet$
How much efficiency can be squeezed from diligently choosing the best data
structures? (Somewhat limited; turn 38 trades into 152.)
* $\bullet$
To what extent can we mitigate front-running attacks? (Almost entirely.)
* $\bullet$
Can we stop the exchange’s storage footprint on Ethereum from bloating? (Yes,
but it is so expensive that it is not worth it.)
* $\bullet$
Are on-chain order books feasible on layer 2? (Yes! Optimistic roll-ups reduce
gas costs by 99.88%.)
* $\bullet$
Which aspects of Ethereum did we encounter that required deeper than surface-
level knowledge to navigate? (Optimizing gas refunds, Solidity is not truly
object-oriented, miner extractable value (MEV) can be leveraged for good, and
bridging assets for layer 2.)
* $\bullet$
How hard is an on-chain exchange to regulate? (The design leaves almost no
regulatory hooks beyond miners (and sequencers on layer 2).)
## 2 Preliminaries
### 2.1 Ethereum
We assume the reader is familiar with the following concepts: blockchain
technology; smart contracts and decentralized applications (DApps) on
Ethereum; how Ethereum transactions are structured, broadcast, and finalized;
the gas model including the gas limit (approximately 11M gwei at the time of
our experiments) per block. A gas refund is a more esoteric subject (not
covered thoroughly in any academic work to our knowledge) that we use heavily
in our optimizations. Briefly, certain EVM operations (SELFDESTRUCT and SSTORE
0) cost negative gas, with the follow caveats: the refund is capped at 50% of
the total gas cost of the transaction, and (2) the block gas limit applies to
the pre-refunded amount (i.e., a transaction receiving a full refund can cost
up to 5.5M gas with an 11M limit). We provide full details of all of these
topics in Appendix 0.A.1.
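To make the refund rules concrete, the following minimal Solidity sketch (our own illustration, not part of Lissy; it assumes the refund schedule in force at the time of our experiments) exercises the two refundable operations:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.7.0;

// Illustration of the two EVM operations that generate gas refunds:
// zeroing a storage slot (SSTORE to 0) and destroying a contract
// (SELFDESTRUCT). The applied refund is capped at half the
// transaction's pre-refund gas cost.
contract RefundDemo {
    uint256[] private slots;

    function fill(uint256 n) external {
        for (uint256 i = 0; i < n; i++) {
            slots.push(i + 1); // each non-zero SSTORE costs 20,000 gas
        }
    }

    function clear() external {
        uint256 n = slots.length;
        for (uint256 i = 0; i < n; i++) {
            delete slots[i]; // each cleared slot credits a 15,000 gas refund
        }
    }

    function destroy() external {
        selfdestruct(payable(msg.sender)); // credits a 24,000 gas refund
    }
}
```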
### 2.2 Trade Execution Systems
Type | Description | Advantages | Disadvantages
---|---|---|---
Centralized Exchanges (CEX) | Order-driven exchange acts as a trusted third party (e.g., Binance, Bitfinex) | Conventional; highest performance; low fees; easy to regulate; low price slippage; verbose trading strategies | Fully trusted custodian; slow withdrawals; server downtime; uncertain fair execution
Partially On-chain Exchange | Order-driven exchange acts as a semi-trusted party (e.g., EtherDelta, 0x, IDEX, Loopring) | High performance; low fees; easy to regulate; low price slippage; verbose trading strategies; semi-custodial | Slow withdrawals; server downtime; front-running attacks; uncertain fair execution
On-Chain Dealers | Quote-driven decentralized exchange trades from inventory with public pricing rule (e.g., Uniswap v3) | Non-custodial; instant trading; moderate performance; fair execution | Unconventional; impermanent loss; high price slippage; intrinsic fees; front-running attacks; limited trading strategies; hard to regulate
On-chain Order-Driven Exchanges | Order-driven decentralized exchange executes trades between buyers and sellers (e.g., Lissy) | Conventional; non-custodial; low price slippage; fair execution; verbose trading strategies; front-running is mitigable | Very low performance; hard to regulate

Table 1: Comparison among different trade execution systems.
Table 1 illustrates various trade execution systems and summarizes their
advantages and disadvantages. Appendix 0.A.2 provides a full justification for
the table. Briefly, fully decentralized, on-chain exchanges require the lowest
trust, provide instant settlement, and have transparent trading rules that
will always execute correctly. Front-running attacks (see Section 5 for a very
thorough discussion) are weaknesses inherent in blockchains that require
specific mitigation.
### 2.3 Related Work
Call markets are studied widely in finance and provide high integrity prices
(e.g., closing prices that are highly referenced and used in derivative
products) [20, 30, 15]. They can also combat high frequency trading [7, 1]. An
older 2014 paper [12] on the ‘Princeton prediction market’ [6] show that call
markets mitigate most blockchain-based front-running attacks present in an on-
chain continuous-trading exchange as well as other limitations: block
intervals are slow and not continuous, there is no support for accurate time-
stamping, transactions can be dropped or reordered by miners, and fast traders
can react to submitted orders/cancellations when broadcast to network but not
in a block and have their orders appear first. The paper does not include an
implementation, was envisioned as running on a custom blockchain (Ethereum was
still in development in 2014) and market operations are part of the blockchain
logic.
The most similar academic work to this paper is the Ethereum-based periodic
auction by Galal et al. [16] and the continuous-time exchange TEX [23]. As
with us, front-running is a main consideration of these works. In a recent SoK
on front-running attacks in blockchain [14], three general mitigations are
proposed: confidentiality, sequencing, and design. Both of these papers use
confidentiality over the content of orders (cf. [37, 39, 38, 10, 27]). The
main downside is that honest traders cannot submit their orders and leave,
they must interact in a second round to reveal their orders. The second
mitigation approach is to sequence transactions according to some rule akin to
first-in-first-out [22, 25]. These are not available for experimentation on
Ethereum yet (although Chainlink has announced an intention; see A. Juels,
blog.chain.link, 11 Sep 2020). The third solution is to design the service in
a way that front-running attacks are not profitable—this is the approach taken
by Lissy, which uses no cryptography and is submit-and-go for traders. A detailed
comparison of front-running is provided in Section 5. Our paper also
emphasizes implementation details: Galal et al. do not provide a full
implementation, and TEX uses both on-chain and off-chain components, and thus
does not answer our research question of how feasible an on-chain order book
is.
## 3 Call Market Design
Operation | Description
---|---
depositToken() | Deposits ERC20 tokens in Lissy smart contract
depositEther() | Deposits ETH in Lissy smart contract
openMarket() | Opens the market
closeMarket() | Closes the market and processes the orders
submitBid() | Inserts the upcoming bids inside the priority queue
submitAsk() | Inserts the upcoming asks inside the priority queue
claimTokens() | Transfers tokens to the traders
claimEther() | Transfers ETH to the traders
Table 2: Primary operations of Lissy smart contract.
A call market opens for traders to submit bids and asks, which are enqueued
until the market closes. Trades are executed by matching the best priced bid
to the best priced ask until the best bid is less than the best ask, at which
point all remaining orders are discarded. See Appendix 0.A.3 for a numeric
example. If Alice’s bid of $100 is executed against Bob’s ask of $90, Alice
pays $100, Bob receives $90, and the $10 difference (called a price
improvement) is given to miners for reasons explained in the front-running
evaluation (Section 5).
For our experiments and measurements, we implement a call market from scratch.
Lissy will open for a specified period of time during which it will accept a
capped number of orders (e.g., 100 orders—parameterized so that all orders can
be processed), and these orders are added to a priority queue (discussed in
Section 3.1). Our vision is that the market would be open for a very short period
of time, close, and then reopen immediately (e.g., every other block). Lissy
is open source and written in 336 lines (SLOC) of Solidity plus the priority
queue (e.g., we implement 5 variants, each around 300 SLOC). We tested it with
the Mocha testing framework using Truffle [36] on Ganache-CLI [35] to obtain
our performance metrics. Once deployed, the bytecode of Lissy is 10,812 bytes
plus the constructor code (6,400 bytes) which is not stored. The Solidity
source code for Lissy and the Truffle test files are available in a GitHub
repository: https://github.com/MadibaGroup/2020-Orderbook. We have also
deployed Lissy on Ethereum’s testnet Rinkeby with flattened (single file)
source code of just the Lissy base class and priority queue implementations;
it is visible and can be interacted with here: [etherscan.io]. We cross-
checked for vulnerabilities with Slither (https://github.com/crytic/slither)
and SmartCheck (https://tool.smartdec.net); each only raises some
‘informational’ warnings that are intentional design choices (e.g., a costly
loop). All measurements assume a block gas limit of $11\,741\,495$ and a gas
price of 56 Gwei (EthStats, July 2020: https://ethstats.net/). Table 2
summarizes Lissy’s primary operations.
### 3.1 Priority Queues
In designing Lissy within Ethereum’s gas model, performance is the main
bottleneck. For a call market, closing the market and processing all the
orders are the most time-consuming steps. Assessing which data structures will
perform best is hard to do on paper (due to, e.g., gas refunds, a relatively
cheap mapping data structure, and only partial support for object-oriented
programming) without actually deploying and evaluating several variants.
We first observe that orders are executed in order: highest to lowest price
for bids, and lowest to highest price for asks. This means random access to
the data structure holding the orders is unnecessary (we discuss cancelling
orders later in Section 6.2). We can use a lightweight priority queue (PQ)
which has only two functions: Enqueue() inserts an element into the priority
queue; and Dequeue() removes and returns the highest priority element.
Specifically, we use two PQs—one for bids, where the highest price is the
highest priority, and one for asks, where the lowest price is the highest
priority.
As closing the market is very expensive with any PQ, we rule out sorting the
elements while dequeuing and instead sort during each enqueue. We then implement the
following 5 PQ variants:
1. 1.
Heap with Dynamic Array. A heap is a binary tree where data is stored in nodes
in a specific order where the root always represents the highest priority item
(i.e., highest bid price/lowest ask price). Our heap stores its data in a
Solidity-provided dynamically sized array. The theoretical time complexity is
logarithmic enqueue and logarithmic dequeue.
2. 2.
Heap with Static Array. This variant replaces the dynamic array with a
Solidity storage array where the size is statically allocated. This is
asymptotically the same and marginally faster in practice.
3. 3.
Heap with Mapping. In this variant, we store a key for the order in the heap
instead of the entire order itself. Once a key is dequeued, the order struct
is drawn from a Solidity mapping (which stores key-value pairs very
efficiently). This is asymptotically the same and faster with variable-sized
data.
4. 4.
Linked List. In this variant, elements are stored in a linked list (enabling
us to efficiently insert a new element between two existing elements during
enqueue). Solidity is described as object-oriented but the Solidity equivalent
of an object is an entire smart contract. Therefore, an object-oriented linked
list must either (1) create each node in the list as a struct—but this is not
possible as Solidity does not support recursive structs—or (2) make every node
in the list its own contract. The latter option seems wasteful and unusual,
but it surprisingly ends up being the most gas efficient data structure to
dequeue. The theoretical time complexity is linear enqueue and constant
dequeue.
5. 5.
Linked List with Mapping. Finally, we try a variant of a linked list using a
Solidity mapping. The value of the mapping is a struct with the incoming
order’s data and the key of the next (and previous) node in the list. The
contract stores the key of the first node (head) and last node (tail) in the
list. Asymptotically, it is linear enqueue and constant dequeue (see the
sketch after this list).
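A minimal sketch of this last variant (ours, singly linked for brevity where the description above also keeps a previous pointer; names are not those of the Lissy source):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.7.0;

// Max-priority queue as a linked list stored in a mapping:
// linear-time enqueue (walk to the insertion point), constant-time dequeue.
contract LinkedListMappingPQ {
    struct Node { uint256 price; uint256 next; }

    mapping(uint256 => Node) private nodes;
    uint256 private head;        // key of the best-priced node (0 = empty)
    uint256 private nextKey = 1; // 0 doubles as the null key

    function enqueue(uint256 price) external {
        uint256 key = nextKey++;
        nodes[key].price = price;
        if (head == 0 || nodes[head].price < price) {
            nodes[key].next = head; // new best price becomes the head
            head = key;
        } else {
            uint256 cur = head; // walk until the next node is worse-priced
            while (nodes[cur].next != 0 && nodes[nodes[cur].next].price >= price) {
                cur = nodes[cur].next;
            }
            nodes[key].next = nodes[cur].next;
            nodes[cur].next = key;
        }
    }

    function dequeue() external returns (uint256 price) {
        require(head != 0, "empty");
        price = nodes[head].price;
        uint256 old = head;
        head = nodes[head].next;
        delete nodes[old]; // clearing storage triggers the gas refund (Sec. 3.2)
    }
}
```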
We implemented, deployed, and tested each PQ. A simple test of enqueuing 50
integers chosen at random from a fixed interval is shown in Figure 1, and
dequeuing them all in Figure 2. Dequeuing removes data from the contract’s
storage, resulting in a gas refund. Based on our manual estimates (the EVM
does not expose the refund counter; we determine how many storage slots are
cleared and how many smart contracts destroyed, then multiply these numbers by
15,000 or 24,000, respectively), every variant receives the maximum gas refund
possible (i.e., half the total cost of the transaction). In other words, each
of them actually consumes twice the gasUsed amount in gas before the refund.
However, none of them are better or worse based on how much of a refund they
generate.
Figure 1: Gas costs for enqueuing 50 random integers into five priority queue
variants. For the x-axis, a value of 9 indicates it is the 9th integer entered
in the priority queue. The y-axis is the cost of enqueuing in gas.
| Gas Used | Refund | Full Refund?
---|---|---|---
Heap with Dynamic Array | 2,518,131 | 750,000 | $\CIRCLE$
Heap with Static Array | 1,385,307 | 750,000 | $\CIRCLE$
Heap with Mapping | 2,781,684 | 1,500,000 | $\CIRCLE$
Linked List | 557,085 | 1,200,000 | $\CIRCLE$
Linked List with Mapping | 731,514 | 3,765,000 | $\CIRCLE$
Figure 2: The gas metrics associated with dequeuing 50 integers from five
priority queue variants. Full refund amount is shown but the actual refund
that is applied is capped.
We observe that (1) the linked list variants are materially cheaper than the
heap variants at dequeuing; (2) dequeuing in a call market must be done as a
batch, whereas enqueuing is paid for one at a time by the trader submitting
the order; and (3) Ethereum will not permit more than hundreds of orders, so
asymptotic behaviour is not significant. For these reasons, we suggest using
one of the linked list variants. As can be seen in Figure 1, the cost of
inserting elements into the plain linked list PQ is significantly greater than
for the linked list with mapping, as each insertion causes the creation of a
new contract. Accordingly, we implement the call market with the linked list
with mapping, which balances a moderate gas cost for insertion (i.e., order
submission) with one for removal (i.e., closing the market and matching the
orders). In Section 4, we implement Lissy on Layer 2. There, the PQ variant
does not change the Layer 1 gas costs (as the calldata size is the same) and
the number of orders can be substantially increased; thus, we reconsider
asymptotic behaviour and choose a heap (with dynamic array) to lower L2 gas
costs across both enqueuing and dequeuing.
### 3.2 Cost/Benefit of Cleaning Up After Yourself
| Gas Used | Potential Refund | Full Refund?
---|---|---|---
Linked List without SELFDESTRUCT | 721,370 | 0 | $\LEFTcircle$
Linked List with SELFDESTRUCT | 557,085 | 1,200,000 | $\CIRCLE$
Linked List with Mapping and without DELETE | 334,689 | 765,000 | $\CIRCLE$
Linked List with Mapping and DELETE | 731,514 | 3,765,000 | $\CIRCLE$
Table 3: The gas metrics associated with dequeuing 50 integers from four
linked list variants. For the refund, ($\CIRCLE$) indicates the refund was
capped at the maximum amount and ($\LEFTcircle$) means a greater refund would
be possible.
One consequence of a linked list is that a new contract is created for every
node in the list. Beyond being expensive for adding new nodes (a cost that
will be borne by the trader in a call market), it also leaves a large
footprint in the active Ethereum state, especially if we leave the nodes on
the blockchain in perpetuity (i.e., we just update the head node of the list
and leave the previous head ‘dangling’). However in a PQ, nodes are only
removed from the head of the list; thus the node contracts could be
‘destroyed’ one by one using an extra operation, SELFDESTRUCT, in the
Dequeue() function. As shown in Table 3, the refund from doing this outweighs
the cost of the extra computation: gas costs are reduced from 721K to 557K.
This suggests a general principle: cleaning up after yourself will pay for
itself in gas refunds. Unfortunately, this is not universally true as shown by
applying the same principle to the linked list with mapping.
Dequeuing in a linked list with mapping can be implemented in two ways. The
simplest approach is to process a node, update the head pointer, and leave the
‘removed’ node’s data behind in the mapping untouched (where it will never be
referenced again). Alternatively, we can call DELETE on each mapping entry
once we finish processing a trade. As it can be seen in the last two rows of
Table 3, leaving the data on the blockchain is cheaper than cleaning it up.
The lesson here is that gas refunds incentivize developers to clean up storage
variables they will not use again, but it is highly contextual as to whether
it will pay for itself. Further, the cap on the maximum refund means that
refunds are not fully received for large cleanup operations (however removing
the cap impacts the miners’ incentives to include the transaction). In
Appendix 0.B, we present a second case study of the cost-benefit of clearing a
mapping when it is no longer needed (including our idea to store the mapping
in its own contract so it can SELFDESTRUCT with a single function call). The
unfortunate takeaway is, again, that it is cheapest to leave the mapping in
place. Cleaning up EVM state is a complicated and under-explored area of
Ethereum in the research literature. For our own work, we strive to be good
citizens of Ethereum and clean up to the extent that we can—thus all PQs in
Figure 2 implement some cleanup.
### 3.3 Lissy Performance Measurements
| Max Trades (w.c.) | Gas Used for Max Trades | Gas Used for 1000 Trades | Gas Used for Submission(avg)
---|---|---|---|---
Heap with Dynamic Array | 38 | 5,372,679 | 457,326,935 | 207,932
Heap with Static Array | 42 | 5,247,636 | 333,656,805 | 197,710
Heap with Mapping | 46 | 5,285,275 | 226,499,722 | 215,040
Linked List | 152 | 5,495,265 | 35,823,601 | 735,243
Linked List with Mapping | 86 | 5,433,259 | 62,774,170 | 547,466
Table 4: Performance of Lissy for each PQ variant. Each consumes just under
the block gas limit ($\sim$11M gas) with a full refund of half of its gas.
The main research question is how many orders can be processed under the
Ethereum block gas limit. The choice of PQ implementation is the main
influence on performance and the results are shown in Table 4. These numbers
are for the worst-case—when every submitted bid and ask is marketable (i.e.,
will require fulfillment). In practice, once closeMarket() hits the first bid
or ask that cannot be executed, it can stop processing all remaining orders.
Premised on Ethereum becoming more efficient over time, we were interested in
how much gas it would cost to execute 1000 pairs of orders, which is given in
the third column. The fourth column indicates the cost of submitting a bid or
ask — since this cost will vary depending on how many orders are already
submitted (recall Figure 1), we average the cost of 200 order submissions.
The main takeaway is that call markets appear to be limited to processing
about a hundred orders per transaction and even that is at the enormous cost
of monopolizing an entire Ethereum block just to close the market. Perhaps
Lissy can work today in some circumstances like very low liquidity tokens, or
markets with high volumes and a small number of traders (e.g., liquidation
auctions).
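To put these numbers in perspective (our arithmetic, at the gas price assumed in Section 3), the best case of 152 trades amortizes to
$\frac{5\,495\,265\ \text{gas}}{152\ \text{trades}}\approx 36\,153\ \text{gas per trade},\qquad 5\,495\,265\ \text{gas}\times 56\ \text{Gwei/gas}\approx 0.31\ \text{ETH per market close}.$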
## 4 Lissy on Arbitrum
Layer 2 (L2) solutions [18] are a group of scaling technologies proposed to
address specific drawbacks of executing transactions on Ethereum, which is
considered Layer 1 (L1). Among these proposals, roll-ups prioritize reducing
gas costs (as opposed to other valid concerns like latency and throughput,
which are secondary for Lissy). We review two variants, optimistic roll-ups
and zk roll-ups, in Appendix 0.A.1.3. Briefly, in a roll-up, every transaction
is stored (but not executed) on Ethereum, then executed off-chain, and the
independently verifiable result is pushed back to Ethereum, with some evidence
of being executed correctly. In the Appendix, we also compare Lissy on
Arbitrum to Loopring 3.0.
We choose to experiment with Lissy on the optimistic rollup Arbitrum (see
https://offchainlabs.com for more current details than the 2018 USENIX
Security paper [21]). To deploy a DApp on Arbitrum, or to execute a function on
an existing Arbitrum DApp, the transaction is sent to an inbox on L1. It is
not executed on L1, it is only recorded (as calldata) in the inbox. An open
network of validators watch the inbox for new transactions. Once inbox
transactions are finalized in an Ethereum block, validators will execute the
transactions and assert the result of the execution to other validators on a
sidechain called ArbOS. As the Inbox contract maintains all Arbitrum
transactions, anyone can recompute the entire current state of the ArbOS and
file a dispute if executions are not correctly reported on ArbOS. Disputes are
adjudicated by Ethereum itself and require a small, constant amount of gas,
invariant to how expensive the transaction being disputed is. When the dispute
challenge period is over, the new state of ArbOS is stored as a checkpoint on
Ethereum.
### 4.1 Lissy Performance Measurements on Arbitrum
| Layer1 gasUsed | Layer2 ArbGas
---|---|---
Lissy on Ethereum | 5,372,679 | N/A
Lissy on Arbitrum | 6,569 | 508,250
Table 5: Gas costs of closing a market on Ethereum and on Arbitrum. ArbGas
corresponds to Layer 2 computation used.
Testing Platforms. We implement Lissy using the Arbitrum Rollup chain hosted
on the Rinkeby testnet. It is visible and can be interacted with here:
[Arbitrum Explorer]. To call functions on Lissy, traders can (1) send
transactions directly to the Inbox contract, or (2) use a relay server (called
a Sequencer) provided by Arbitrum. The sequencer will group, order, and
send all pending transactions together as a single Rinkeby transaction to the
Inbox (and pays the gas).
In our Lissy variant on Arbitrum, the validators do all computations (both
enqueuing and dequeuing) so we choose to use a heap with dynamic array for our
priority queue, which balances the expense of both operations. Heaps are 32%
more efficient than linked lists for submitting orders and 29% less efficient
for closing. Recall that without a roll-up, such a priority queue can only
match 38 pairs at a cost of 5,372,679 gas. Table 5 shows that 38 pairs cost
only 6,569 in L1 gas (a 99.88% savings). This is the cost of submitting the
closeMarket() transaction to the Inbox to be recorded, which is 103 bytes of
calldata. Most importantly, recording closeMarket() in the Inbox will always
cost around 6,569 gas even as the number of trades increases from 38 pairs to
thousands or millions of pairs. Of course, as the number of trades increase,
the work for the validators on L2 increases, as measured in ArbGas. The price
of ArbGas in Gwei is not well established but is anticipated to be relatively
cheap. Arbitrum also reduces the costs for traders to submit an order: from
207,932 to 6,917 in L1 gas. In Appendix 0.A.1.3, the full interaction is shown
in Figure 4, which illustrates how traders interact with Lissy on Arbitrum
including bridges, inboxes, sequencers and validators.
Running Lissy on Arbitrum has one large caveat. If the ERC20 tokens being
traded are not issued on ArbOS, which is nearly always the case today, they
first need to be bridged onto ArbOS, as does the ETH. Traders send ETH or
tokens to Arbitrum’s bridge contracts which create the equivalent amount at
the same address on L2. Withdrawals work the same way in reverse, but are only
final on L1 after a dispute challenge period (currently 1 hour).888L1 users
might accept assets before they are finalized as they can determine their
eventual emergence on L1 is indisputable (eventual finality).
## 5 Front-running Evaluation
| | | Centralized Continuous Market (Coinbase) | Partially Off-chain Continuous Market (EtherDelta) | Partially Off-chain Continuous Market w/ Roll-up (Loopring) | On-chain Continuous Market (OasisDex) | On-chain Dark Continuous Market (TEX) | On-chain Automated Market Maker (Uniswap) | On-chain Call Market w/ Price Improvement | On-chain Call Market (Lissy) | On-chain Call Market w/ Roll-up (Lissy variant) | On-chain Dark Call Market (Galal et al.)
---|---|---|---|---|---|---|---|---|---|---|---|---
Who is Mallory? Authority, Trader, Miner, Sequencer | A | A,T,M | A,T,M,S | T,M | T,M | T,M | T,M | T,M | T,M,S | T,M
Attack Example | Mallory (maker) squeezes in a transaction before Alice’s (taker) order | Ins. | $\Circle$ | $\Circle$ | $\Circle$ | $\Circle$ | $\CIRCLE$ | $\Circle$ | $\CIRCLE$ | $\CIRCLE$ | $\CIRCLE$ | $\CIRCLE$
Mallory (taker) squeezes in a transaction before Bob’s (taker 2) | Disp. | $\Circle$ | $\Circle$ | $\Circle$ | $\Circle$ | $\CIRCLE$ | $\Circle$ | $\CIRCLE$ | $\CIRCLE$ | $\CIRCLE$ | $\CIRCLE$
Mallory (maker 1) suppresses a better incoming order from Alice (maker 2) until Mallory’s order is executed | Supp. | $\Circle$ | $\Circle$ | $\Circle$ | $\CIRCLE$ | $\CIRCLE$ | $\CIRCLE$ | $\LEFTcircle$ | $\LEFTcircle$ | $\LEFTcircle$ | $\LEFTcircle$
A hybrid attack based on the above (e.g., sandwich attacks, scalping) | I/S/D | $\Circle$ | $\Circle$ | $\Circle$ | $\Circle$ | $\CIRCLE$ | $\Circle$ | $\Circle$ | $\CIRCLE$ | $\CIRCLE$ | $\CIRCLE$
Mallory suspends the market for a period of time | Supp. | $\Circle$ | $\Circle$ | $\Circle$ | $\LEFTcircle$ | $\LEFTcircle$ | $\LEFTcircle$ | $\LEFTcircle$ | $\LEFTcircle$ | $\LEFTcircle$ | $\LEFTcircle$
Spoofing: Mallory (maker) puts an order as bait, sees Alice (taker) tries to execute it, and cancels it first | S&D | $\Circle$ | $\Circle$ | $\Circle$ | $\Circle$ | $\CIRCLE$ | $\Circle$ | $\CIRCLE$ | $\CIRCLE$ | $\CIRCLE$ | $\CIRCLE$
Cancellation Griefing: Alice (maker) cancels an order and Mallory (taker) fulfills it first | Disp. | $\Circle$ | $\Circle$ | $\Circle$ | $\Circle$ | $\CIRCLE$ | $\Circle$ | $\CIRCLE$ | $\CIRCLE$ | $\CIRCLE$ | $\CIRCLE$
Table 6: An evaluation of front-running attacks (rows) for different types of
order books (columns). Front-running attacks are in three categories:
Insertion, displacement, and suppression. A full dot ($\CIRCLE$) means the
front-running attack is mitigated or not applicable to the order book type, a
partial mitigation ($\LEFTcircle$) is awarded when the front-running attack is
possible but expensive, and we give no award ($\Circle$) if the attack is
feasible.
As we illustrate in Table 6, call markets have a unique profile of resilience
against _front-running attacks_ [12, 14, 13] that differs somewhat from
continuous-time markets and automated market makers. Traders are sometimes
distinguished as _makers_ (adds orders to a market) and _takers_ (trades
against a pre-existing, unexecuted orders). A continuous market has both. All
traders using an automated market maker are takers, while the investors who
provide tokens to the AMM (liquidity providers) are makers. Under our
definition, a call market only has makers: the only way to have a trade
executed is to submit an order. The front-running attacks in Table 6 are
subcategorized, using a recent SoK [14], as being _Insertion_ , _Displacement_
, and _Suppression_. To explain the difference, we will illustrate the first
three attacks in the table.
In an _insertion attack_ , Mallory learns of a transaction from Alice.
Consider Alice submitting a bid order for 100 tokens at any price (market
order). Mallory decides to add new ask orders to the book (limit orders) at
the maximum price reachable by Alice’s order given the rest of the asks in the
book. Mallory must arrange for her orders to be added before Alice’s
transaction and then arrange for Alice’s transaction to be the next (relevant)
transaction to run (e.g., before competing asks from other traders are added).
In a centralized exchange, Mallory would collude with the _authority_ running
the exchange to conduct this attack. On-chain, Mallory could be a fast
_trader_ who sees Alice’s transaction in the mempool and adds her transaction
with a higher gas fee to bribe miners to execute hers first (insertion is
probabilist and not guaranteed). Finally, Mallory could be the _miner_ of the
block that includes Alice’s transaction allowing her to insert with high
fidelity. Roll-ups use _sequencers_ discussed in Section 5.1.
A _displacement attack_ is like an insertion attack, except Mallory does not
care what happens to Alice’s original transaction—she only cares about being
first. If Mallory sees Alice trying to execute a trade at a good price, she
could try to beat Alice and execute the trade first. Mallory is indifferent to
whether Alice can then execute her trade or not. The analysis of both
insertion and displacement attacks is similar. Call markets mitigate these
basic insertion and displacement attacks because they do not have any time
priority (e.g., if you were to shuffle the order of all orders submitted
within the same call, the outcome would be exactly the same). A different way
to mitigate these attacks is to seal orders with confidentiality (a dark
market).
In a _suppression attack_ , Mallory floods the network with transactions until
a trader executes her order. Such selective denial of service is possible by
an off-chain operator. With on-chain continuous markets, it is not possible to
suppress Alice’s transaction while also letting through a transaction from a
taker—suppression applies to all Ethereum transactions or none. A call market
is uniquely vulnerable because it eventually times out (which does not require
an on-chain transaction) and new orders cannot be added. We still award a call
market partial mitigation since suppression attacks are expensive (cf. Fomo3D
attack [14]). If the aim of suppression is a temporary denial of service
(captured by attack 5 in the table), then all on-chain markets are vulnerable
to this expensive attack.
Some attacks combine more than one insertion, displacement, and/or suppression
attacks. AMMs are vulnerable to a double insertion called a sandwich attack
[41] which bookends a victim’s trade with the front-runner’s trades (plus
additional variants). In a traditional call market, a market clearing price is
chosen and all trades are executed at this price. All bids made at a higher
price will receive the assets for the lower clearing price (and conversely for
lower ask prices): this is called a price improvement and it allows traders to
submit at their best price. A hybrid front-running attack allows Mallory to
extract any price improvements. Consider the case where Alice’s ask crosses
Bob’s bid with a material price improvement. Mallory inserts a bid at Alice’s
price, suppresses Bob’s bid until the next call, and places an ask at Bob’s
price. She buys and then immediately sells the asset and nets the price
improvement as arbitrage. To mitigate this in Lissy, all price improvements
are given to the miner (using block.coinbase.transfer()). This does not
actively hurt traders—they always receive the same price that they quote in
their orders—and it removes any incentive for miners to front-run these
profits.
Other front-running attacks use order cancellations (see Section 6.2) which
Lissy mitigates by running short-lived markets with no cancellations.
There are two main takeaways from Table 6. Call markets provide strong
resilience to front-running, bested only slightly by dark markets like TEX
[23]; however, they achieve it through design alone: no cryptography and no
two-round protocols. A second observation is that dark call markets, like
Galal et al. [16], are no more resilient to front-running than a lit market
(however, confidentiality could provide resilience to predatory trading
algorithms that react quickly to trades without actually front-running).
### 5.1 Front-running on Arbitrum
In our Lissy variant on Arbitrum, traders can submit transactions to the
Layer 1 Inbox contract instead of directly to the Lissy DApp. This has the
same front-running profile as Lissy itself; only the Layer 1 destination
address is different. If a sequencer is mandatory, it acts with the same
privilege as a Layer 1 Ethereum miner in ordering the transactions it
receives. Technically, sequencers are not limited to roll-ups and could be
used in the context of normal Layer 1 DApps, but they are more apparent in the
context of roll-ups. A sequencer could be trusted to execute transactions in
the order it receives them, outsource to a fair ordering service, or (in a
tacit acknowledgment of the difficulties of preventing front-running) auction off
permission to order transactions to the highest bidder (called a MEV auction).
As shown in Table 6, a sequencer is an additional front-running actor but does
not otherwise change the kinds of attacks that are possible.
## 6 Design Landscape
Figure 3: A design landscape for on-chain call markets.
Lissy is a simple base class that implements the core functionality of a call
market. To use it in the real world, design decisions need to be made about
how it will be used. Figure 3 provides a design landscape for Lissy
deployment, with possible extensions and customization.
### 6.1 Token Divisibility and Ties
A common trading rule is to fill ties in proportion to their volume (i.e., pro
rata allocation; e.g., if Alice and Bob bid the same price for 100 tokens and
20 tokens respectively, and there are only 60 tokens left in marketable asks,
Alice receives 50 and Bob 10). This can fail when tokens are not divisible.
Consider the following corner case: 3 equally priced bids of 1 non-divisible
token each and 1 ask at the same price: (1) the winning bid could be chosen
randomly (cf. Libra [28]), or (2) the bids could be prioritized by time. In
Lissy, tokens are assumed to be divisible. If the volume of the current best
bid does not match the best ask, the larger order is partially filled and the
remaining volume is considered against the next best order. We note that the
conditions under which pro rata allocation fails (i.e., non-divisible assets,
an exact tie on price, and the tie falling in the final allocation) are
improbable. Option (1) is the fairest solution with one main drawback:
on-chain sources of 'randomness' are generally deterministic and manipulable
by miners [5, 9], while countermeasures can take a few blocks to select [4].
We implement (2), which means front-running attacks are possible in this one
improbable case.
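The following sketch illustrates pro rata allocation for the example above; it
is a minimal illustration in Python, not Lissy's contract code, and the trader
names are only for exposition.

```python
from fractions import Fraction

def pro_rata(bids, remaining):
    """Split `remaining` tokens among equally priced bids in
    proportion to their requested volumes (pro rata allocation)."""
    total = sum(volume for _, volume in bids)
    return {trader: int(Fraction(volume, total) * remaining)
            for trader, volume in bids}

# Example from above: Alice bids for 100, Bob for 20, 60 tokens remain.
print(pro_rata([("Alice", 100), ("Bob", 20)], 60))
# {'Alice': 50, 'Bob': 10}
```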
### 6.2 Order Cancellations
Support for cancellation opens the market to new front-running issues where
other traders (or miners) can displace cancellations until after the market
closes. However, one benefit of a call market is that beating a cancellation
with a new order has no effect, assuming the cancellation is run any time
before the market closes. Also, cancellations have a performance impact.
Cancelled orders can be removed from the underlying data structure or
accumulated in a list that is cross-checked when closing the market. Removing
orders requires a more verbose structure than a priority queue (e.g., a self-
balancing binary search tree instead of a heap; or methods to traverse a
linked list rather than only pulling from the head). Lissy does not support
order cancellations. We intend to open and close markets quickly (on the order
of blocks), so orders are relatively short-lived.
### 6.3 Who Pays to Close/Reopen the Market?
In the Princeton paper [12], the call market is envisioned as an alt-coin,
where orders accumulate within a block and a miner closes the market as part
of the logic of producing a new block (i.e., within the same portion of code
as computing their coinbase transaction in Bitcoin or gasUsed in Ethereum). In
Lissy, someone needs to execute closeMarket() at the right time and pay for
it, which is probably the most significant design challenge for Lissy.
Since price improvements are paid to the miners, the miner is incentivized to
run closeMarket() if it pays for itself. Efficient algorithms that let miners
automatically find 'miner extractable value (MEV)' opportunities [13] are an
open research problem. Even if someone else pays to close the market, MEV
smooths out some market functionality. Assume several orders are submitted and
then closeMarket() is called. A naive miner might order the closeMarket() before the
submitted orders, effectively killing those orders and hurting its own
potential profit. MEV encourages miners to make sure a profitable
closeMarket() in the mempool executes within its current block (to claim the
reward for itself) and that it runs after other orders in the mempool to
maximize its profit.
Without MEV, markets should open and close in different blocks. In this
alternative, the closeMarket() function calls openMarket() as a subroutine and
sets two modifiers: orders are only accepted in the block immediately after
the current block (i.e., the block that executes the closeMarket()) and
closeMarket() cannot be run again until two blocks after the current block.
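A minimal sketch of this block-window scheme follows, written in Python for
exposition rather than as contract code; the method names and bookkeeping are
illustrative assumptions.

```python
class BlockWindowMarket:
    """Orders are accepted only in the block immediately after the one
    that executed closeMarket(); the next closeMarket() must wait at
    least two blocks after that close."""
    def __init__(self, close_block):
        self.order_block = close_block + 1   # only block accepting orders
        self.next_close = close_block + 2    # earliest block for next close

    def submit_order(self, block, order):
        assert block == self.order_block, "orders accepted only next block"
        # ... enqueue the order ...

    def close_market(self, block):
        assert block >= self.next_close, "too early to close again"
        # ... match orders, pay price improvements, then reopen ...
        self.__init__(block)
```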
Another option is to have traders in the next call market pay to incrementally
close the current market. For example, each order in the next market needs to
pay to execute the next $x$ orders in the current market until the order book
is empty. This has two issues: first, amortizing the cost of closing the
market amongst the early traders of the new market disincentivizes trading
early in the market; second, if not enough traders submit orders in the new
market, the old market never closes (resulting in a backlog of old markets
waiting to close).
A closely related option is to levy a carefully computed fee against the
traders for every new order they submit. These fees are accumulated by the
DApp to use as a bounty. When the time window for the open market elapses, the
sender of the first closeMarket() function to be confirmed receives the
bounty. This is still not perfect: the closeMarket() cost does not increase
tightly linearly with the number of orders, and gas prices vary over time,
which could render the bounty insufficient to offset the closeMarket() cost.
If the DApp can pay for its own functions, an interested party can also
arrange for a commercial service (e.g., any.sender:
https://github.com/PISAresearch/docs.any.sender) to relay the closeMarket()
function call on Ethereum (an approach called meta-transactions). This creates
a regulatory hook.
The final option is to rely on an interested third party (such as the token
issuer for a given market) to always close the market, or occasionally bailout
the market when one of the above mechanisms fails. An external service like
Ethereum Alarm Clock (https://ethereum-alarm-clock-service.readthedocs.io/),
which also creates a regulatory hook, can be used to schedule regular
closeMarket() calls.
### 6.4 Collateralization Options
In Lissy, both the tokens and ETH that a trader wants to potentially use in
the order book are preloaded into the contract. We discuss some alternative
designs in Appendix 0.C.
## 7 Concluding Remarks
Imagine you have just launched a token on Ethereum. Now you want to be able to
trade it. While the barrier to entry for exchange services is low, it still
exists. For a centralized or decentralized exchange, you have to convince the
operators to list your token and you will be delayed while they process your
request. For an automated market maker, you will have to lock up a large
amount of ETH into the DApp, along with your tokens. For roll-ups, you will
have to host your own servers. By contrast to all of these, with an on-chain
order book, you just deploy the code alongside your token and trading is
immediately supported. This should concern regulators. Even if it is too slow
today, there is little reason for developers not to offer it as a fallback
solution that accompanies every token. With future improvements to blockchain
scalability, it could become the de facto trading method.
#### Acknowledgements.
The authors thank the AMF (Autorité des Marchés Financiers) for supporting
this research project. J. Clark also acknowledges partial funding from the
Natural Sciences and Engineering Research Council (NSERC)/Raymond Chabot
Grant Thornton/Catallaxy Industrial Research Chair in Blockchain Technologies,
as well as NSERC through a Discovery Grant. M. Moosavi acknowledges support
from Fonds de Recherche du Québec - Nature et Technologies (FRQNT).
## References
* [1] M. Aquilina, E. B. Budish, and P. O’Neill. Quantifying the high-frequency trading “arms race”: A simple new methodology and estimates. Chicago Booth Research Paper, (20-16), 2020.
* [2] E. Ben-Sasson, I. Bentov, Y. Horesh, and M. Riabzev. Scalable zero knowledge with no trusted setup. In CRYPTO, 2019.
* [3] E. Ben-Sasson, A. Chiesa, D. Genkin, E. Tromer, and M. Virza. Snarks for c: Verifying program executions succinctly and in zero knowledge. In CRYPTO, 2013.
* [4] D. Boneh, J. Bonneau, B. Bünz, and B. Fisch. Verifiable delay functions. In CRYPTO, 2018.
* [5] J. Bonneau, J. Clark, and S. Goldfeder. On bitcoin as a public randomness source. https://eprint.iacr.org/2015/1015.pdf, 2015. Accessed: 2015-10-25.
* [6] R. Brandom. This princeton professor is building a bitcoin-inspired prediction market, Nov 2013.
* [7] E. Budish, P. Cramton, and J. Shim. The high-frequency trading arms race: Frequent batch auctions as a market design response. The Quarterly Journal of Economics, 130(4):1547–1621, 2015.
* [8] J. V. Bulck, M. Minkin, O. Weisse, D. Genkin, B. Kasikci, F. Piessens, M. Silberstein, T. F. Wenisch, Y. Yarom, and R. Strackx. Foreshadow: Extracting the keys to the intel SGX kingdom with transient out-of-order execution. In USENIX Security Symposium, Baltimore, MD, Aug. 2018. USENIX Association.
* [9] B. Bünz, S. Goldfeder, and J. Bonneau. Proofs-of-delay and randomness beacons in ethereum. In IEEE S&B, 2017.
* [10] J. Cartlidge, N. P. Smart, and Y. Talibi Alaoui. Mpc joins the dark side. In ASIACCS, pages 148–159, 2019.
* [11] R. Cheng, F. Zhang, J. Kos, W. He, N. Hynes, N. Johnson, A. Juels, A. Miller, and D. Song. Ekiden: A platform for confidentiality-preserving, trustworthy, and performant smart contracts. In IEEE EuroS&P, pages 185–200. IEEE, 2019.
* [12] J. Clark, J. Bonneau, E. W. Felten, J. A. Kroll, A. Miller, and A. Narayanan. On decentralizing prediction markets and order books. In WEIS, 2014.
* [13] P. Daian, S. Goldfeder, T. Kell, Y. Li, X. Zhao, I. Bentov, L. Breidenbach, and A. Juels. Flash boys 2.0: Frontrunning, transaction reordering, and consensus instability in decentralized exchanges. In IEEE Symposium on Security and Privacy, 2020.
* [14] S. Eskandari, S. Moosavi, and J. Clark. Sok: Transparent dishonesty: front-running attacks on blockchain. In WTSC, pages 170–189. Springer, 2019.
* [15] E. Félez-Viñas and B. Hagströmer. Do volatility extensions improve the quality of closing call auctions? Financial Review, 56(3):385–406, 2021.
* [16] H. S. Galal and A. M. Youssef. Publicly verifiable and secrecy preserving periodic auctions. In WTSC. Springer, 2021.
* [17] R. Gennaro, C. Gentry, B. Parno, and M. Raykova. Quadratic span programs and succinct nizks without pcps. In EUROCRYPT, 2013.
* [18] L. Gudgeon, P. Moreno-Sanchez, S. Roos, P. McCorry, and A. Gervais. Sok: Layer-two blockchain protocols. In Financial Cryptography, pages 201–226. Springer, 2020.
* [19] L. Harris. Trading and exchanges: market microstructure for practitioners. Oxford, 2003.
* [20] P. Hillion and M. Suominen. The manipulation of closing prices. Journal of Financial Markets, 7(4):351–375, 2004.
* [21] H. Kalodner, S. Goldfeder, X. Chen, S. M. Weinberg, and E. W. Felten. Arbitrum: Scalable, private smart contracts. In USENIX Security Symposium, pages 1353–1370, 2018.
* [22] M. Kelkar, F. Zhang, S. Goldfeder, and A. Juels. Order-fairness for byzantine consensus. In CRYPTO, pages 451–480. Springer, 2020.
* [23] R. Khalil, A. Gervais, and G. Felley. Tex-a securely scalable trustless exchange. IACR Cryptol. ePrint Arch., 2019:265, 2019.
* [24] P. Kocher, J. Horn, A. Fogh, D. Genkin, D. Gruss, W. Haas, M. Hamburg, M. Lipp, S. Mangard, T. Prescher, M. Schwarz, and Y. Yarom. Spectre attacks: Exploiting speculative execution. In IEEE Symposium on Security and Privacy, pages 1–19, 2019.
* [25] K. Kursawe. Wendy, the good little fairness widget: Achieving order fairness for blockchains. In ACM AFT, 2020.
* [26] M. Lipp, M. Schwarz, D. Gruss, T. Prescher, W. Haas, A. Fogh, J. Horn, S. Mangard, P. Kocher, D. Genkin, Y. Yarom, and M. Hamburg. Meltdown: Reading kernel memory from user space. In USENIX Security Symposium, pages 973–990, Baltimore, MD, Aug. 2018. USENIX Association.
* [27] F. Massacci, C. N. Ngo, J. Nie, D. Venturi, and J. Williams. Futuresmex: secure, distributed futures market exchange. In IEEE Symposium on Security and Privacy, pages 335–353. IEEE, 2018.
* [28] V. Mavroudis and H. Melton. Libra: Fair order-matching for electronic financial exchanges. In ACM AFT, 2019.
* [29] A. Norry. The history of the mt gox hack: Bitcoin’s biggest heist. https://blockonomi.com/mt-gox-hack/, June 2019. (Accessed on 12/31/2019).
* [30] M. S. Pagano and R. A. Schwartz. A closing call’s impact on market quality at euronext paris. Journal of Financial Economics, 68(3):439–484, 2003.
* [31] A. Park. The conceptual flaws of constant product automated market making. Available at SSRN 3805750, 2021.
* [32] H. Ragab, A. Milburn, K. Razavi, H. Bos, and C. Giuffrida. Crosstalk: Speculative data leaks across cores are real. In IEEE Symposium on Security and Privacy, 2021.
* [33] Securities and Exchange Board of India. SEBI order in the matter of NSE colocation. https://www.sebi.gov.in/enforcement/orders/apr-2019/order-in-the-matter-of-nse-colocation_42880.html, 2019. (Accessed on 11/11/2019).
* [34] C. Signer. Gas cost analysis for ethereum smart contracts. Master’s thesis, ETH Zurich, Department of Computer Science, 2018.
* [35] Truffle Suite. Ganache. https://www.trufflesuite.com/ganache, May 2021. (Accessed on 05/26/2021).
* [36] Truffle Suite. Truffle. https://www.trufflesuite.com/docs/truffle/overview, May 2021. (Accessed on 05/26/2021).
* [37] C. Thorpe and D. C. Parkes. Cryptographic securities exchanges. In Financial Cryptography, 2007.
* [38] C. Thorpe and S. R. Willis. Cryptographic rule-based trading. In Financial Cryptography, 2012.
* [39] W. Yuen, P. Syverson, Z. Liu, and C. Thorpe. Intention-disguised algorithmic trading. In Financial Cryptography, 2010.
* [40] L. Zhao, J. I. Choi, D. Demirag, K. R. B. Butler, M. Mannan, E. Ayday, and J. Clark. One-time programs made practical. In Financial Cryptography, 2019.
* [41] L. Zhou, K. Qin, C. F. Torres, D. V. Le, and A. Gervais. High-frequency trading on decentralized on-chain exchanges. In IEEE Symposium on Security and Privacy, 2021.
## Appendix 0.A Additional background
### 0.A.1 Ethereum and Blockchain Technology
A public blockchain is an open peer-to-peer network that maintains a set of
transactions without a single entity in charge. In Ethereum, _transactions_
encode the bytecode of user-written _decentralized applications (DApps)_ to be
stored on the blockchain; and the function calls made to the DApp. Every
execution of every function call is validated by all honest, participating
nodes to be correct, a property that is robust against a fraction of faulty and
malicious network nodes (or more precisely, their accumulated computational
power). Once transactions are agreed upon, all honest participants will have
identical sets of transactions in the same order. For Ethereum, this is
conceptualized as the current state of a large _virtual machine (EVM)_ that is
running many DApps.
Transactions are broadcast by users to the blockchain network where they are
propagated to all nodes. Nodes that choose to _mine_ will collect transactions
(in the order of their choosing) into a block, and will attempt to have the
network reach a consensus that their block should be added to the set (or
chain) of previous blocks. A transaction is considered finalized once
consensus on its inclusion has held for several additional blocks.
#### 0.A.1.1 Ethereum’s Gas Model.
Every transaction results in the participating nodes having to execute
bytecode. This is not free. When a transaction is executed, each opcode in the
execution path accrues a fixed, pre-specified amount of _gas_. The function
caller will pledge to pay a certain amount of Ethereum’s internal currency
_ETH_ (typically quoted in units of Gwei which is one billionth of an ETH) per
unit of gas, and miners are free to choose to execute that transaction or
ignore it. The function caller is charged for exactly what the transaction
costs to execute, and they cap the maximum they are willing to be charged (gas
limit). If the cap is too low to complete the execution, the miner keeps the
Gwei and _reverts_ the state of the EVM (as if the function never ran).
A miner can include as many transactions (typically preferring transactions
that bid the highest for gas) that can fit under a pre-specified block gas
limit, which is algorithmically adjusted for every block. As of the time of
writing, the limit is approximately 11M gas. Essentially, our main research
question is how many on-chain trades can be executed without exceeding that
limit. Later, we also discuss several bytecode operations (_opcodes_) that
refund gas (i.e., cost negative gas), which we heavily utilize in our
optimizations.
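As a back-of-the-envelope illustration of that question (in Python), where the
per-trade gas figure is a hypothetical placeholder rather than a measured
Lissy cost:

```python
BLOCK_GAS_LIMIT = 11_000_000   # approximate block gas limit quoted above
GAS_PER_TRADE = 100_000        # hypothetical cost to match one pair of orders

# Upper bound on trades per block at this assumed per-trade cost.
print(BLOCK_GAS_LIMIT // GAS_PER_TRADE)  # 110
```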
#### 0.A.1.2 Gas Refunds.
In order to reconstruct the current state of Ethereum’s EVM, a node must
obtain a copy of every variable change since the genesis block (or a more
recent ‘checkpoint’ that is universally agreed to). For this reason, stored
variables persist for a long time and, at first glance, it seems pointless to
free up variable storage (and unclear what ‘free up’ even means). Once the
current state of the EVM is established by a node, it can forget about every
historical variable changes and only concern itself with the variables that
have non-zero value (as a byte string for non-integers) in the current state
(uninitialized variables in Ethereum have the value 0 by default). Therefore,
freeing up variables will reduce the amount of state Ethereum nodes need to
maintain going forward.
For this reason, some EVM operations cost a negative amount of gas. That is,
the gas is refunded to the sender at the end of the transaction; however, (1)
the refund is capped at 50% of the total gas cost of the transaction, and (2)
the block gas limit applies to the pre-refunded amount (i.e., a transaction
receiving a full refund can cost up to 5.5M gas with an 11M limit). Negative
gas operations include:
* SELFDESTRUCT. This operation destroys the contract that calls it and refunds
its balance (if any) to a designated receiver address. The SELFDESTRUCT
operation does not remove the initial bytecode of the contract from the
chain. It always refunds 24,000 gas. For example, if contract A stores a
single non-zero integer and contract B stores 100 non-zero integers, the
SELFDESTRUCT refund for both is the same (24,000 gas).
* SSTORE. This operation loads a storage slot with a value. Using SSTORE to load
a zero into a storage slot with a non-zero value means the nodes can start
ignoring it (recall that all variables, even if uninitialized, have zero by
default). Doing this refunds 15,000 gas per slot.
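A small Python sketch of the refund arithmetic described above, using the
15,000/24,000 figures, the 50% cap, and the pre-refund block limit; it is an
illustration of the rules, not an EVM implementation.

```python
def net_gas_cost(gas_used, slots_zeroed=0, contracts_destroyed=0,
                 block_gas_limit=11_000_000):
    """Net cost after refunds: 15,000 gas per storage slot zeroed via
    SSTORE and 24,000 per SELFDESTRUCT, with the refund capped at 50%
    of the transaction's gas. The block limit applies pre-refund."""
    assert gas_used <= block_gas_limit, "limit applies before refunds"
    refund = 15_000 * slots_zeroed + 24_000 * contracts_destroyed
    return gas_used - min(refund, gas_used // 2)

# A transaction at the full 11M limit with ample refunds nets 5.5M gas.
print(net_gas_cost(11_000_000, slots_zeroed=1_000))  # 5500000
```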
At the time of this writing, Ethereum transaction receipts only account for
the gasUsed, which is the total amount of gas units spent during a
transaction, and users are not able to obtain the value of the EVM’s refund
counter from inside the EVM [34]. So in order to account for refunds in Table
2, we calculate them manually. First, we determine exactly how many storage
slots are being cleared and how many smart contracts are being destroyed, then
we multiply these numbers by 15,000 and 24,000 respectively.
#### 0.A.1.3 Optimistic Roll-Ups.
Figure 4: Overview of Lissy on Arbitrum.
Layer 2 solutions are a group of technologies that are designed and proposed
to address specific drawbacks of executing transactions on Layer 1 (i.e.,
Ethereum and other blockchains) [18]. These technologies focus on fast
transaction throughput, reducing gas costs, or reducing transaction latency.
When using Lissy, we strive to reduce the gas cost, as performance is the main
bottleneck. Thus, we choose a Layer 2 technology called a roll-up, which aims
at reducing the gas cost of operating on Layer 1 by taking transaction
execution off-chain and only using the Ethereum blockchain for storing data.
In a roll-up, every transaction is executed by a server or cluster of servers
known as validators that can be run by a collection of users or third party
operators (here they can be run by the token issuer). These validators then
push the result of the executions (i.e., updates in the EVM state) back to
Ethereum and assure the Ethereum network that the transactions have been
executed correctly.
A function can be computed off-chain and the new state of the DApp, called a
rollup, is written back to the blockchain, accompanied by either (1) a proof
that the function was executed correctly, or (2) a dispute resolution process
that can resolve, on-chain, functions that are not executed correctly (e.g.,
Arbitrum [21]). In the case of (1), validating the proof must be cheaper than
running the function itself. There are two main approaches: (1a) the first is
to use cryptographic proof techniques (e.g., SNARKS [3, 17] and variants [2]).
This is called a zk-rollup. Note that the proofs are heavy to compute
(introducing a burden on the validators who generate them) but are considered
valid once posted to Ethereum. The second approach (1b) is to execute the
function in a trusted execution environment (TEE; e.g., Intel SGX) and
validate the TEE's quote on-chain (e.g., Ekiden [11]). (The TEE-based approach
is mired by recent attacks on SGX [24, 26, 8, 32]; however, these attacks do
not necessarily apply to the specifics of how SGX is used here, and safer TEE
technologies like Intel TXT (cf. [40]) can be substituted.) Approach (2) is
called an optimistic roll-up. Although the dispute time delays result
in a slower transaction finality, optimistic roll-ups substantially increase
the performance by decreasing the gas cost.
Arbitrum and Ethereum Optimism are the two prominent deployments of an
optimistic roll-up. Arbitrum uses a multi-round dispute process that results
in very minimal L1 gas costs to resolve a dispute. Specifically, if a dispute
over a transaction arises, the L1 cost of resolving the dispute is a small
fraction of the cost of executing the transaction itself (whereas in Optimism,
the dispute resolution cost is essentially the same as executing the
transaction).
Figure 4 shows how traders interact with Lissy on Arbitrum. First, a trader
sends a depositETH transaction on Ethereum to the Inbox contract to deposit X
amount of ETH to the Arbitrum chain. Once the transaction is confirmed, X
amount of ETH will be credited to the trader’s address on the Arbitrum chain.
The trader can now interact with Lissy and execute its functions by sending
the instruction and data required for those executions to either (1) the
regular Arbitrum Inbox on Ethereum, or (2) the sequencer. In our example, the
trader uses the regular Inbox to execute depositEther() and the sequencer to
execute submitBid() from Lissy, which lives entirely on the Arbitrum chain.
Accordingly, the trader deposits ETH to the Lissy smart contract by sending
the instruction and data for executing depositEther() to the Arbitrum Inbox
contract that lives on Ethereum. A validator fetches this transaction from the
Inbox, executes it, and asserts the result to ArbOS. Next, the trader sends
the instruction and data for execution of submitBid() to the sequencer. The
sequencer then inserts this message into the Inbox that it owns. This Inbox
contract has the same interface as the regular Inbox contract, however, it is
owned by the sequencer. A validator sees the transaction in the sequencer
Inbox of the bridge, executes it, and asserts the result to ArbOS.
Periodically, the entire state of ArbOS is committed back to Ethereum.
Our Lissy variant is not the first roll-up-based order book. Loopring 3.0
(https://loopring.org) offers a continuous-time order book. The primary
difference is that orders in Loopring 3.0 are submitted off-chain to the
operator directly, whereas our variant uses on-chain submission so that the
roll-up server does not need to be publicly reachable. Loopring 3.0 can
operate near high-frequency trading as order submission is unhampered by
Ethereum. However, its roll-up proof does not ensure that the exchange did not
reorder transactions, which is particularly problematic in a continuous-time
order book. Traders who prioritize trade fairness might opt for a solution
like our variant, while traders who want speed would vastly prefer the
Loopring architecture which offers near-CEX speed while being non-custodial.
Loopring leaves a regulatory hook whereas our variant could be nearly as
difficult to regulate as a fully on-chain solution if the roll-up server was
kept anonymous: Ethereum and Arbitrum themselves would be the only regulatory
hooks.
### 0.A.2 Trade Execution Systems
##### Centralized Exchanges (CEX).
Traditional financial markets (e.g., NYSE and NASDAQ) use order-matching
systems to arrange trades. An exchange will list one or more assets (stocks,
bonds, derivatives, or more exotic securities) to be traded with each other,
given its own order book priced in a currency (e.g., USD). Exchanges for
blockchain-based assets (also called crypto assets by enthusiasts) can operate
the same way, using a centralized exchange (CEX) design where a firm (e.g.,
Binance, Bitfinex, etc.) operates the platform as a trusted third party in
every aspect: custodianship over assets/currency being traded, exchanging
assets fairly, offering the best possible price execution. Security breaches
and fraud in centralized exchanges (e.g., MtGox [29], QuadrigaCX [33], and
many others) have become a common source of lost funds for users, while
accusations of unfair trade execution have been leveled but are difficult to
prove. Today, CEXes are often regulated as other money service businesses—this
provides some ability for the government to conduct financial tracking but
does little to provide consumer protection against fraud.
##### On-chain Order Books.
For trades between two blockchain-based assets (e.g., a digital asset priced
in a cryptocurrency, stablecoin, or second digital asset), order matching can
be performed ‘on-chain’ by deploying the order-matching system either on a
dedicated blockchain or inside a decentralized application (DApp). In this
model, traders entrust their assets to an autonomously operating DApp with
known source code instead of a third party custodian that can abscond with or
lose the funds. The trading rules will operate as coded, clearing and settling
can be guaranteed, and order submission is handled by the blockchain—a
reasonably fair and transparent system (but see front-running below). Finally,
anyone can create an on-chain order book for any asset (on the same chain) at
any time. While these properties sound ideal, performance is a substantial issue and the
main subject of this paper. Since it is an open system, there is no obvious
regulatory hook (beyond the blockchain itself).
In this paper, we focus on benchmarking an order book for the public
blockchain Ethereum. Ethereum is widely used and we stand to learn the most
from working in a performance-hostile environment. Exchanges could be given
their own dedicated blockchain, where trade execution logic can be coded into
the network protocol. Trading systems on permissioned blockchains (e.g.,
NASDAQ Linq, tZero) can also improve execution time and throughput, but they
reduce user transparency and trust if unregulated.
##### On-chain Dealers.
An advantage of on-chain trading is that other smart contracts, not just human
users, can initiate trades, enabling broader decentralized finance (DeFi)
applications. This has fueled a resurgence in on-chain exchange but through a
quote-driven design rather than an order-driven one. Automated market makers
(e.g., Uniswap v3) have all the trust advantages of an on-chain order book,
plus they are relatively more efficient. The trade-off is that they operate as
a dealer—the DApp exchanges assets from its own inventory. This inventory is
loaded into the DApp by an investor who will not profit from the trades
themselves but hopes their losses (termed ‘impermanent losses’) are offset
over the long-term by trading fees. By contrast, an order book requires no
upfront inventory and trading fees are optional. Finally, there is a
complicated difference in their price dynamics (e.g., market impact of a
trade, slippage between the best bid/ask and actual average execution price,
etc.)—deserving of an entire research paper to precisely define. We leave it
as an assertion that with equal liquidity, order books have more favorable
price dynamics for traders.
##### Hybrid Designs.
Before on-chain dealers became prominent in the late 2010s, the most popular
design was hybrid order-driven exchanges with some trusted off-chain
components and some on-chain functionalities. Such decentralized exchanges
(DEXes) were envisioned as operating fully on-chain, but performance
limitations drove developers to move key components, such as the order
matching system, off-chain to a centralized database. A landscape of DEX
designs exist (e.g., EtherDelta, 0x, IDEX, etc.): many avoid taking
custodianship of assets off-chain, and virtually all (for order-driven
markets) operate the order book itself off-chain (a regulatory hook). A non-
custodial DEX solves the big issue of a CEX—the operator stealing the
funds—however trade execution is still not provably fair, funds can still be
indirectly stolen by a malicious exchange executing unauthorized trades, and
server downtime is a common frustration for traders. An enhancement is to
prove that trade execution is correct (e.g., Loopring) but these proofs have
blind spots (discussed above in Appendix 0.A.1.3).
### 0.A.3 Call Markets
Assume traders submit their orders in Table 7 to a call market when it is
open. In the following, we explain how these orders are executed:
Time | Trader | Order Type | Order Price | Volume
---|---|---|---|---
09:10 | Mehdi | Ask | 10.18 | 4
09:12 | Avni | Bid | 12 | 3
09:15 | Kritee | Bid | 13 | 3
09:18 | Bob | Bid | 12.15 | 1
09:26 | Navjot | Ask | 10.15 | 4
09:30 | Alice | Ask | 10 | 1
Table 7: Example orders that are submitted to a call market.
* The call market first matches Alice's ask order to sell 1 at 10 with Avni's
bid order to buy 3 at 12. The trade occurs at the price Alice asks for, 10,
and 2 will be given to the miner as a price improvement. This trade fills
Alice's order and leaves Avni with a remainder of 2 to buy at 12.
* Next, the call market matches Avni's remainder of 2 with the next highest
priority ask order in the list, which is Navjot's order to sell 4 at 10.15.
The trade occurs at 10.15 and 1.85 per token will be given to the miner as a
price improvement. This trade fills the remainder of Avni's bid order and
leaves Navjot with a remainder of 2 to sell at 10.15.
* The market now matches the next highest priority bid order in the list,
Bob's bid order to buy 1 at 12.15, with the remainder of Navjot's ask order to
sell 2 at 10.15. The trade occurs at 10.15 and 2 will be given to the miner as
a price improvement. This trade fills Bob's bid order and leaves Navjot with a
remainder of 1 to sell at 10.15.
* Next, the market matches Kritee's bid order to buy 3 at 13 with the
remainder of Navjot's ask order to sell 1 at 10.15. The trade occurs at 10.15
and 2.85 will be given to the miner as a price improvement. This trade fills
Navjot's order and leaves Kritee with a remainder of 2 to buy at 13.
* The market then matches Mehdi's ask order to sell 4 at 10.18 with the
remainder of Kritee's bid order to buy 2 at 13. The trade occurs at 10.18 and
2.82 per token is given to the miner as a price improvement. This trade fills
Kritee's order and leaves Mehdi with a remainder of 2 to sell at 10.18
unfilled.
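The walkthrough can be replayed with a short Python sketch; it matches bids to
asks in the same priority order used above, executes each trade at the ask
price, and reports the per-token price improvement (which Lissy pays to the
miner). It is an illustration of the accounting, not the contract's matching
code.

```python
def replay(bids, asks):
    """Match orders pairwise; trades execute at the ask price and the
    bid-ask gap per token is the price improvement."""
    i = j = 0
    while i < len(bids) and j < len(asks) and bids[i][1] >= asks[j][1]:
        buyer, bid, bv = bids[i]
        seller, ask, av = asks[j]
        vol = min(bv, av)
        print(f"{buyer} buys {vol} from {seller} at {ask} "
              f"(improvement {bid - ask:.2f}/token)")
        bids[i] = (buyer, bid, bv - vol)
        asks[j] = (seller, ask, av - vol)
        if bids[i][2] == 0: i += 1
        if asks[j][2] == 0: j += 1

# Orders from Table 7, listed in the priority order used above.
replay(bids=[("Avni", 12.00, 3), ("Bob", 12.15, 1), ("Kritee", 13.00, 3)],
       asks=[("Alice", 10.00, 1), ("Navjot", 10.15, 4), ("Mehdi", 10.18, 4)])
```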
## Appendix 0.B Cleaning-Up Revisited: Clearing Mappings
Beyond the cleaning up issues with priority queues in Section 3.2, Lissy also
uses mappings with each market. Traders preload their account with tokens to
be traded (which comply with a common token standard called ERC20) and/or ETH.
Lissy tracks what they are owed using a mapping called totalBalance and allows
traders to withdraw their tokens at any time. However, if a trader submits an
order (i.e., an ask for their tokens), the tokens are committed and not
available for withdrawal until the market closes (after which the balances are
updated for each trade that is executed). Committed tokens are also tracked in
a mapping called unavailableBalance. Sellers can request a token withdrawal up
to their total balance minus their unavailable balance.
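A minimal Python model of the two mappings and the withdrawal rule follows;
the method names are illustrative and do not mirror Lissy's ABI.

```python
from collections import defaultdict

class Balances:
    """totalBalance tracks tokens a trader is owed; unavailableBalance
    tracks tokens committed to open orders until the market closes."""
    def __init__(self):
        self.total = defaultdict(int)
        self.unavailable = defaultdict(int)

    def submit_ask(self, trader, volume):
        assert self.total[trader] - self.unavailable[trader] >= volume
        self.unavailable[trader] += volume  # committed until market close

    def withdraw(self, trader, volume):
        # Withdrawals are limited to total minus unavailable balance.
        assert volume <= self.total[trader] - self.unavailable[trader]
        self.total[trader] -= volume
```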
As the DApp runs closeMarket(), it starts matching the best bids to the best
asks. As orders execute, totalBalance and unavailableBalance are updated. At a
certain point, the bids and asks will stop matching in price. At this point,
every order left in the order book cannot execute (because the priority queue
sorts orders by price, and so orders deeper in the queue have worse prices
than the order at the head of the queue). Therefore all remaining entries in
unavailableBalance can be cleared.
In Solidity, it is not possible to delete an entire mapping without
individually zeroing out each entry key-by-key. At the same time, it is
wasteful to let an entire mapping sit in the EVM when it will never be
referenced again. The following are some options for addressing this conflict.
1. Manually Clearing the Mapping. Since mappings cannot be iterated, a common
design pattern used by DApp developers is to store keys in an array and
iterate over the array to zero out each mapping and array entry. Clearing a
mapping this way costs substantially more than what is refunded.
2. Store the Mapping in a Separate DApp. We could wrap the mapping inside its
own DApp and, when we are done with the mapping, run SELFDESTRUCT on the
contract. This refunds us 24,000 gas, which is less than the cost of deploying
the extra contract. Additionally, every call to the mapping is more expensive
because (1) it is an external function call, and (2) the calls need access
control to ensure only the market contract can write to it (if a mapping is a
local variable, you get private access for free).
3. Leave and Ignore the Mapping. The final option is to not clear the mapping
and just create a new one (or create a new prefix for all mapping keys to
reflect the new version of the mapping). Unfortunately, this is the most
economical option for DApp developers even if it is the worst option for
Ethereum nodes.
Clearing storage is important for reducing EVM bloat. The Ethereum refund
model should be considered further by Ethereum developers to better
incentivize developers to be less wasteful in using storage.
## Appendix 0.C Collateralization Options in Call Markets
In Lissy, both the tokens and ETH that a trader wants to potentially use in
the order book are preloaded into the contract. Consider Alice, who holds a
token and decides she wants to trade it for ETH. In this model, she must first
transfer the tokens to the contract and then submit an ask order. If she does
this within the same block, there is a chance that a miner will execute the
ask before the transfer and the ask will revert. If she waits for
confirmation, this introduces a delay. This delay seems reasonable, but we
point out a few options by which it could be addressed:
1. Use msg.value. For the ETH side of a trade (i.e., for bids), ETH could be
sent with the function call to submitBid() to remove the need for
depositEther(). This works for markets that trade ERC20 tokens for ETH, but
would not work for ERC20-to-ERC20 exchanges.
2. Merge Deposits with Bids/Asks. Lissy could have an additional function that
atomically runs the functionality of depositToken() followed by the
functionality of submitAsk(). This removes the chance that the deposit and
order submission are ordered incorrectly.
3. Use ERC20 Approval. Instead of Lissy taking custody of the tokens, the
token holder could simply approve Lissy to transfer tokens on her behalf. If
Lissy is coded securely, it is unconcerning to allow the approval to stand
long-term, and the trader never has to lock up their tokens in the DApp. The
issue is that there is no guarantee that the tokens are actually available
when the market closes (i.e., Alice can approve a DApp to spend 100 tokens
even if she only has 5 tokens or no tokens). In this case, Lissy would
optimistically try to transfer the tokens and, if it fails, move on to the
next order. This also gives Alice an indirect way to cancel an order, by
removing the tokens backing the order—this could be a feature or it could be
considered an abuse.
4. Use a Fidelity Bond. Traders could post some number of tokens as a fidelity
bond and be allowed to submit orders up to 100x this value using approve. If a
trade fails because the pledged tokens are not available, the fidelity bond is
slashed as punishment. This allows traders to side-step time-consuming
transfers to and from Lissy while still incentivizing them to ensure that
submitted orders can actually be executed. The trade-off is that Lissy needs
to update balances with external calls to the ERC20 contract instead of simply
updating its internal ledger.
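A sketch of the fidelity-bond check from option 4 (the 100x multiplier comes
from the text; everything else is an illustrative assumption):

```python
def check_new_order(bond, open_volume, new_volume, leverage=100):
    """Allow open orders up to `leverage` times the posted bond; a
    failed transfer at market close would slash the bond instead."""
    if open_volume + new_volume > leverage * bond:
        raise ValueError("order exceeds bonded capacity")
```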
## Appendix 0.D Market Clearing Prices
Call markets are heralded for fair price discovery. This is why many exchanges
use a call market at the end of the day to determine the closing price of an
asset, which is an important price both optically (it is well published) and
operationally (many derivatives settle based on the closing price). We
purposely do not compute a ‘market clearing price’ with Lissy because miners
can easily manipulate the price (i.e., include a single wash trade at the
price they want fixed), although they forgo profit for doing so. This is not
merely hypothetical—Uniswap (the prominent quote-drive, on-chain exchange)
prices have been manipulated to exploit other DeFi applications relying on
them. Countermeasures to protect Uniswap price integrity could also apply to
Lissy: (1) taking a rolling median of prices over time, and (2) using it
alongside other sources for the same price and forming a consensus. While
Lissy does not emit a market clearing price, it can be computed by a web
application examining the order book at market close.
# Large-scale parameterized metasurface design using adjoint optimization
Mahdad Mansouree, Andrew McClung, Sarath Samudrala, Amir Arbabi
###### Abstract
Optical metasurfaces are planar arrangements of subwavelength meta-atoms that
implement a wide range of transformations on incident light. The design of
efficient metasurfaces requires that the responses of and interactions among
meta-atoms are accurately modeled. Conventionally, each meta-atom’s response
is approximated by that of a meta-atom located in a periodic array. Although
this approximation is accurate for metastructures with slowly varying meta-
atoms, it does not accurately model the complex interactions among meta-atoms
in more rapidly varying metasurfaces. Optimization-based design techniques
that rely on full-wave simulations mitigate this problem but thus far have
been mostly applied to topology optimization of small metasurfaces. Here, we
describe an adjoint-optimization-based design technique that uses
parameterized meta-atoms. Our technique has a lower computational cost than
topology optimization approaches, enabling the design of large-scale
metasurfaces that can be readily fabricated. As proof of concept, we present
the design and experimental demonstration of high numerical aperture
metalenses with significantly higher efficiencies than their conventionally-
designed counterparts.
###### keywords:
Adjoint technique, Optimization, Metasurface, Metalens
Department of Electrical and Computer Engineering, University of
Massachusetts Amherst, 151 Holdsworth Way, Amherst, MA 01003, USA
## Introduction
Optical metasurfaces are arrangements of subwavelength meta-atoms that scatter
optical waves and generate desirable wavefront, amplitude, and polarization
distributions 1. Metasurface-based designs of numerous optical components have
been demonstrated, including lenses 2, 3, 4, blazed gratings 5, and holograms
6, 7, 8, 9. Their planar form factor and the potential for low-cost
manufacture have spurred the recent development of complex optical systems
made of multiple metasurfaces, or metasystems, such as miniaturized cameras
10, spectrometers 11 and hyper-spectral imaging systems 12. However, cascading
multiple metasurfaces quickly increases a system’s optical loss, and high-
performance metasystems require high-efficiency metasurface components.
Currently, most metasurfaces are designed using a unit-cell-based approach in
which each meta-atom is simulated as a part of a periodic array 13, 2, 5, 14.
This approach is computationally inexpensive, readily scalable to arbitrarily
large structures, and produces designs that can be fabricated easily. However,
two implicit assumptions in the unit-cell approach can lead to inefficient
metasurface designs: First, the response of a meta-atom with dissimilar
neighbors differs from the response of the same meta-atom in an array with
identical elements. The ‘local periodicity’ approximation breaks down in
structures with rapidly varying meta-atoms, such as high numerical aperture
(NA) lenses. This approximation also has reduced accuracy in structures
comprising meta-atoms with lower refractive index, in which the response of a
meta-atom is more strongly affected by variations of its neighbors 15, 16.
Second, the response map used in unit-cell-based methods records only the
normally transmitted response for normally incident light. In an actual
metasurface, the transmission angle varies, and hence the true response would
deviate from the response map 17. These assumptions can lead to significant
mismatches between expected and actual responses of a meta-atom in a
metasurface. The effect of such a mismatch can be seen in a reduction of the
efficiency of the device; however, it is hard to exactly distinguish the
contribution of each assumption without analytical models.
In the absence of simple and accurate models that capture the individual and
collective behaviors of meta-atoms, optimization-based design (i.e., inverse
design) is a practical alternative. Optical structures have been designed
using a variety of optimization-based approaches. A comprehensive review of
these methods is presented in ref. 18. Heuristic optimization algorithms based
on random explorations of the optimization space (e.g., particle swarm
optimization or genetic algorithms) have been used to design diffraction
grating filters 19, polarization beam splitters 20 and other small structures.
Heuristic optimization is well-suited to problems with a small number of
degrees of freedom but inefficient for structures with larger design spaces
21.
In most problems, the gradient of the design objective, necessary for
gradient-based algorithms, can be determined using an adjoint technique.
Adjoint-based algorithms are suitable for optimization spaces with high
dimensionality 22, and have been used to design high-performance optical
devices, including grating couplers 23, polarization beam splitters 24,
photonic crystals 25, metagratings 24, and metasurfaces 26, 27. Adjoint-based
optimization is frequently applied to topology optimization problems 28, 23,
24, 25, 26, 27, 29, 22, 30, 29, 31, 32, 33, in which a structure or its meta-
atoms are defined by a large number of pixels or sets of curvilinear patterns
22, 30. This can lead to efficient metasurface designs, but because even
deeply subwavelength changes in meta-atom geometries can significantly alter
the scattering response of a design (see Supplementary Note 1 and Fig. S1),
the meta-atom geometries should be accurately approximated during simulations.
The accurate representation of meta-atoms with arbitrary shapes requires high-
resolution meshing, practically limiting the structures to 2D 33, 32, 31, 34,
35 or small 3D 31 designs. As a result, the technique has been mostly used for
optimizing periodic structures such as gratings and cylindrical structures 36,
37, 26, 29.
To address this limitation, topology optimization recently has been combined
with a local periodicity approximation 32, 29, 34, 35. In this approach,
topology optimization is done on a small subdomain of the device whose response
within the larger structure is approximated by periodic boundary conditions.
These subdomains are subsequently stitched together to form the large-scale
structure. This approach enables the optimization of large devices; however,
subdomains with periodic boundaries do not accurately model the local response
in high-NA metalenses or other rapidly-varying structures, limiting the
performance of designs arrived at by this approach.
Figure 1: Illustration of the parameterized metasurface design process using
adjoint optimization. As the structure is updated by the optimization method,
the desired output fields start to form.
Instead of designing free-form structures, here we propose and demonstrate an
adjoint optimization method based on parameterized rectangular meta-atoms
(Fig. 1). Parameterized meta-atoms lack the fine features typical of topology-
optimized structures, enabling simulations to converge at relatively low
resolution and thus very large metasurfaces to be designed. Confining the
design space to simple shapes (e.g. rectangular meta-atoms) also reduces the
cost of simulation preprocessing steps like subpixel smoothing 38, 39
(Supplementary Note 2 and Supplementary Fig. S2). More importantly, limiting
the optimization to this specific subspace of structures (i.e., rectangular
meta-atoms) removes a large number of potential local optima traps without
significantly affecting device performance and produces designs that conform
to a well-established metasurface platform that can be easily fabricated. Our
method also relies on a field interpolation technique for
$\mathbf{E}_{\parallel}$ and $D_{\bot}$ (see methods) and an efficient time to
frequency domain conversion technique to reduce the computational cost of
simulating large structures. Our method relies on full-wave, finite difference
time domain (FDTD) simulations of the entire structure, and iteratively
approaches an optimal design via gradient ascent. A similar parameterized
approach based on Mie theory was recently proposed by Zhan et. al. 40, 41.
However, the approach is limited to spherical meta-atoms, which are
challenging to fabricate, and does not account for the substrate’s effect. The
adjoint optimization technique does not rely on the two implicit assumptions
used in the unit-cell approach, and thereby achieves higher performing
designs. First, the variation in coupling among the meta-atoms caused by the
rapid variation of their dimensions is accounted for. Second, no assumption is
made about angular dependence of the meta-atom scattering response (i.e., its
element factor). In the following, we describe our method, and, as proof of
concept, use it to design and fabricate two metalenses with 50 $\upmu$m
diameter. The focusing efficiencies of metalenses designed using this method
show experimental improvements of 24% and 13% for NAs of 0.78 and 0.95 over
counterparts designed by the conventional periodic unit-cell method.
## Results
### Parameterized adjoint design method
We first describe the metastructure design using the parameterized adjoint
optimization method. The design process involves finding a set of meta-atom
parameters that generate the desired transformation efficiently. As shown in
Fig. 2a, to find the optimal design, we optimize the structure iteratively
from a trivial initial design (e.g., a uniform array in which all meta-atoms
are identical). In each iteration, the gradient of the objective function with
respect to all the design parameters is calculated using only two simulations
as conceptually shown in Fig. 2b-c. Based on the computed gradient, each meta-
atom is updated, generating a new design that is one step closer to the
optimal design. This cycle continues until all the parameters converge to
their final values (Fig. 2d).
Figure 2: Parameterized adjoint optimization. (a) Parameterized optimization:
meta-atom dimensions are updated but constrained to a simple shape. (b)
Representation of the forward problem. (c) Representation of the adjoint
problem. (d) Flow diagram showing steps in the optimization.
The goal of metastructure design is to transform an incident electric field
$\mathbf{E}^{\text{i}}$ into a desired transmitted or reflected field
distribution $\mathbf{E}^{\text{d}}$. This is schematically illustrated in
Fig. 2b, which shows a metasurface transforming a normally incident field into
a converging transmitted field. An arbitrary desired output can be achieved by
specifying an appropriate transmitted field distribution on a plane $S$ above
the metasurface. The metasurface we consider consists of an arrangement of
dissimilar meta-atoms positioned on a periodic lattice. Each meta-atom’s shape
is described by one or more parameters that are variables in the design
process. Thus a design can be expressed as a vector ${\mathbf{p}}$ containing
all the design parameters. In our proposed method, the design is cast as an
optimization problem of maximizing the fraction of the output field in the
desired field distribution. Specifically, an optimal design maximizes
$I=\left|F\right|^{2}$, where
$F(\mathbf{p})=\int_{S}\mathbf{E}^{\text{d}\ast}\cdot\mathbf{E}^{\text{f}}(\mathbf{p})\,\mathrm{d}A.$ (1)
Here $\mathbf{E}^{\text{d}}$ is the desired field in the frequency domain on
the plane $S$, $\mathbf{E}^{\text{f}}\left(\mathbf{p}\right)$ is the field
realized by a design defined by $\mathbf{p}$ in the forward simulation excited
by $\mathbf{E}^{\text{i}}$ (Fig. 2b), and * represents the complex conjugate
operation. $F$ is the complex-valued projection of $\mathbf{E}^{\text{f}}$ on
$\mathbf{E}^{\text{d}}$.
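Numerically, $F$ can be evaluated as a discrete overlap integral on the plane
$S$. The sketch below (Python/NumPy) shows one way to compute Eq. (1) and
$I=|F|^{2}$ from sampled fields; the array layout is our assumption, not the
authors' implementation.

```python
import numpy as np

def objective(E_f, E_d, dA):
    """Discrete Eq. (1): overlap of the realized field E_f with the
    desired field E_d on S, with I = |F|^2. E_f and E_d are complex
    arrays of the field components sampled on S; dA is the cell area."""
    F = np.sum(np.conj(E_d) * E_f) * dA
    return F, np.abs(F) ** 2
```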
Optimization starts from an initial design $\mathbf{p}^{(0)}$ and is updated
iteratively $(\mathbf{p}^{(1)},\ \mathbf{p}^{(2)},\ \mathbf{\ldots})$ via
gradient ascent. This process is illustrated in Fig. 2a: after each iteration,
$\mathbf{p}$ approaches its locally optimal value and the performance of the
metasurface improves. The gradient $\nabla_{\mathbf{p}}I$ is used to determine
how $\mathbf{p}$ changes in the next step and can be computed using an
additional simulation called the adjoint simulation. The adjoint simulation
uses the same design $\mathbf{p}$ as the forward simulation, but the structure
is instead excited by a surface current density
$\mathbf{J}_{\text{s}}^{\text{a}}$ $\equiv\mathbf{E}^{\text{d}\ast}$ that is
placed on the plane $S$ which generates a backward propagating wave (see Fig.
2c). The electric field in the adjoint simulation is denoted
$\mathbf{E}^{\text{a}}(\mathbf{p})$.
In general, the variation of $F$ with respect to small changes in the
boundaries of meta-atoms can be found using the functional derivative of $F$.
An expression for the functional derivative of $F$ based on symmetries of the
Green’s tensor can be found in Ref. 42. Here, we consider the special case of
rectangular meta-atoms with square cross-sections (inset of Fig. 2b). For such
meta-atoms, $\mathbf{p}=\left(w_{1},\ w_{2},..,w_{N}\right)$, where $w_{i}$
represents the width of $i^{th}$ meta-atom. Based on the Lorentz reciprocity
theorem 43, we show in Supporting Note 3 that the partial derivative of $F$
with respect to $w_{i}$ is given by
$\frac{\partial F}{\partial w_{i}}=\frac{1}{2}\,j\omega\left(n_{\text{m}}^{2}-n_{\text{c}}^{2}\right)\int_{\partial\Omega_{i}}\left(\mathbf{E}_{\parallel}^{\text{f}}\cdot\mathbf{E}_{\parallel}^{\text{a}}+\frac{1}{n_{\text{m}}^{2}n_{\text{c}}^{2}}D_{\bot}^{\text{f}}D_{\bot}^{\text{a}}\right)\mathrm{d}A,$ (2)
where $\omega$ is the angular frequency of the excitation, $n_{\mathrm{m}}$
and $n_{\mathrm{c}}$ are the refractive indices of meta-atom and cladding
materials, ${\partial\Omega}_{i}$ represents the four side surfaces of the
$i$th meta-atom, $\mathbf{E}_{\parallel}^{\text{f}}$ and $D_{\bot}^{\text{f}}$
are the tangential components of the electric field and normal component of
the displacement field obtained in the forward problem, and
$\mathbf{E}_{\parallel}^{\text{a}}$ and $D_{\bot}^{\text{a}}$ are the
corresponding fields in the adjoint problem. The gradient of the objective
function necessary to determine the next design is given by
$\nabla_{\mathbf{p}}I=2\operatorname{Re}\left\{F^{\ast}\nabla_{\mathbf{p}}F\right\}$,
where $\operatorname{Re}\left\{\cdot\right\}$ represents the real part of a
complex number. Both forward and adjoint simulations are performed using full-
wave simulations of the entire metasurface. Although this is computationally
more expensive than techniques that employ local periodicity approximations
32, 29, 34, 35, it allows the gradient to be calculated more accurately. The
flow diagrams in Fig. 2d and Fig. S3 summarize the optimization procedure.
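For concreteness, a minimal numerical sketch of the gradient computation
follows; it discretizes Eq. (2) on the side walls of a meta-atom and forms
$\nabla_{\mathbf{p}}I$ as above. The sampling and array conventions are our
assumptions, not the authors' code.

```python
import numpy as np

def dF_dwi(omega, n_m, n_c, Ef_par, Ea_par, Df_perp, Da_perp, dA):
    """Discrete Eq. (2): sum forward/adjoint field products sampled on
    the four side walls of meta-atom i (last axis of E*_par holds the
    two tangential components)."""
    integrand = (Ef_par * Ea_par).sum(axis=-1) \
        + Df_perp * Da_perp / (n_m**2 * n_c**2)
    return 0.5j * omega * (n_m**2 - n_c**2) * np.sum(integrand) * dA

def grad_I(F, dF_dp):
    """grad I = 2 Re{F* grad F}, as given above."""
    return 2 * np.real(np.conj(F) * dF_dp)
```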
### Metalens design example
To demonstrate the parameterized adjoint method, we designed two metalenses
with NAs of 0.78 and 0.95 (Fig. 3a). The diameters of both metalenses are 50
$\upmu$m, yielding focal lengths of 20 $\upmu$m and 8.3 $\upmu$m.
The metalenses are composed of 430-nm-tall square $\alpha$Si meta-atoms that
are arranged on a rectangular lattice with a period of 320 nm. The meta-atoms
rest on a fused silica substrate and are surrounded by air. For these designs,
the parameter vector consists of the meta-atom widths, $\mathbf{p}=(w_{1},\
w_{2},\ \ldots,\ w_{N})$, where $N\approx 19{,}200$ is the number of meta-atoms.
By imposing symmetries present in the problem we can reduce the design to 4800
independent variables. Still, the large number of independent variables and
the long time required for each simulation precludes a detailed study of the
design space. Both metalenses are initialized by a uniform array of 140-nm-
wide meta-atoms (i.e., $w_{i}$=140 nm).
Figure 3: Simulation and design of the optimized and control metalenses. (a)
Schematic of two metalenses with NAs of 0.78 and 0.95. The metalenses are
illuminated by normally incident $x$-polarized plane waves. The incident field
outside the metalens aperture is blocked by a perfect electric conductor (PEC)
layer. (b) The focusing efficiencies of the optimized metalenses during the
optimization process. Focusing efficiencies of the control metalenses are
shown for comparison. Inset shows color-coded width distributions of meta-
atoms at several steps during the optimization process. (c) Snapshot of
$E_{x}$ on the output apertures of the optimized (left) and control metalenses
(right). (d) Intensity at the focal planes of the optimized and control
metalenses.
Both forward and adjoint simulations were performed using a finite difference
time domain (FDTD) solver [39] with a sinusoidal excitation that was gradually
ramped up. In the forward simulations, the metalenses were illuminated by an
$x$-polarized, normally incident plane wave (Fig. 3a) with a free-space
wavelength of $\lambda_{0}$=850 nm. The desired output field
$\mathbf{E}^{\text{d}}$ was selected to be the field of an ideal, spherical-
aberration-free flat lens (see Methods) [44]. To expedite the simulations,
symmetric boundary conditions were used along both $x$ and $y$ axes, reducing
the simulation volume by a factor of four. The simulations were run until the
results converged, and then the fields were converted from time to frequency
domains using the method of ref. [45]. The fields on the meta-atom side
boundaries, necessary to determine $\nabla_{\mathbf{p}}F$, were interpolated
from points on the Yee grid using a bilinear approach (see Methods and Fig.
S4). Further simulation details are described in the Methods section.
In each step of the optimization, the design vector was updated according to
$\mathbf{p}^{(n+1)}=\mathbf{p}^{(n)}+s\nabla_{\mathbf{p}^{(n)}}I$, where $s$
is the step size. The step size was chosen to achieve an average meta-atom
width change of a few nanometers. As the optimization proceeded, the step size
was manually updated, allowing $\mathbf{p}$ to converge (see Methods and Fig.
S5). To enforce polarization insensitivity, we symmetrized the derivatives
along the $x=y$ plane (see Methods and Fig. S6).
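A minimal sketch of one such update follows; the function name and the few-nanometer target are illustrative assumptions, and the symmetrization step is sketched separately in the Methods section.

```python
import numpy as np

def update_widths(w, grad, target_nm=2.0):
    """One ascent step p <- p + s * grad_p I, with the step size s chosen so
    that the average absolute width change equals target_nm nanometers."""
    s = target_nm / np.mean(np.abs(grad))
    return w + s * grad
```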
As a quantitative measure of performance, we calculated the focusing
efficiency of each metalens during the optimization process. Focusing
efficiency is directly related to the accurate implementation of the desired
field profile, and metalenses with higher focusing efficiencies generally have
less undesired scattered light and form higher contrast images close to their
optical axes. For a fair comparison with the measured values (see the
Experimental demonstration section below), we defined the focusing efficiency
as the fraction of the power incident on the metalens aperture that passes
through a 7-$\upmu$m-diameter aperture in its focal plane. Figure 3b shows the
focusing efficiencies of the optimized metalenses as their design evolved
during the optimization process. Color-coded meta-atom width maps for these
metalenses at several steps during the design process are shown as insets in
Fig. 3b. At the first step of the optimization, the metalenses were periodic
arrays of posts and had low focusing efficiencies. As the design proceeded,
patterns similar to Fresnel zones appeared in the metalenses’ width
distributions (Fig. 3b, insets), and their focusing efficiencies increased.
The designs were run for 64 iterations, although after only 25 steps their
focusing efficiencies reached plateaus. At the last step, the focusing
efficiencies of the optimized metalenses with NAs of 0.78 and 0.94 were 78%
and 55%, respectively. For comparison, we designed two control metalenses
using the unit-cell approach with NAs, meta-atom heights, lattice constants,
and diameters identical to the optimized ones. The simulated focusing
efficiencies of the control metalenses are 69% and 43%. The details of the
designs and simulations of these control metalenses are presented in Methods.
Snapshots of the dominant component of the electric field ($E_{x}$) at the
output apertures of the control and optimized metalenses are presented in Fig.
3c (for $E_{y}$ distributions see Fig. S7). The field distributions in Fig. 3c
show that the optimized metalenses generate the desired fields with smaller
phase errors, and consequently produce brighter focal spots than the control
metalenses (Fig. 3d). The significantly higher focusing efficiencies of the
optimized metalenses compared to their control counterparts demonstrate the
efficacy of the parameterized adjoint optimization technique in designing
high-performance metasurfaces.
### Experimental demonstration
For experimental validation, we fabricated and characterized the optimized and
control metalenses. The metalenses were fabricated by depositing a layer of
aSi on a fused silica substrate and patterning it using electron beam
lithography and dry etching (see Methods for details). Figure 4a shows an SEM
image of a fabricated metalens. We characterized the metalenses using a setup
schematically shown in Fig. 4b. Metalenses were illuminated by a collimated
laser beam with a wavelength of $\lambda_{0}$=850 nm. The light transmitted
through the metalens was collected by an objective lens with an NA of 0.95 and
reimaged by a tube lens on an image sensor. Images of the focal spots, shown
in Fig. 4d, show enhanced peak intensities for the optimized metalenses
compared to the control ones.
Figure 4: Experimental results. (a) Scanning electron beam micrograph of a
fabricated metalens. (b) Schematic of the characterization setup for intensity
measurements and (c) efficiency measurements. (d) Intensity distributions of
optimized and conventional metalenses. (e) Intensity profiles taken along
the dashed lines shown in (d).
We measured the focusing efficiencies of the metalenses by measuring the ratio
of the optical power focused into a 7-$\upmu$m-diameter pinhole in the focal
plane of the metalenses and the power incident on their apertures (Fig. 4c).
The measured focusing efficiencies of the optimized metalenses with NAs of
0.78 and 0.94 are 65% and 49%, respectively, higher than values of 52% and 43%
obtained for their control counterparts. This represents 24% and 13% relative
enhancements for the 0.78 and 0.94 NA lenses, respectively. The smaller
increase for the higher NA metalens is attributable to the limitations of our
measurement setup (the objective lens used has an NA of 0.95) and to its
higher sensitivities to fabrication errors.
To study the sensitivity of our designs, an array of metalenses with a range
of constant meta-atom offsets was fabricated alongside those characterized in
Fig. 4. The study shows that the optimized metalenses have approximately the
same sensitivities as the control ones (see Fig. S8).
## Discussion
The parameterized adjoint optimization method accurately estimates shape
derivatives of parameterized meta-atoms (see Supplementary Note 5 and Fig.
S9). In contrast with methods that simulate structures in a dielectric
continuum and then discretize to obtain a physically realizable design
[22, 46, 26, 47], meta-atoms designed by our method maintain a dielectric discontinuity
at their boundaries throughout the whole design process, i.e., the simulation
and design domains are the same. Techniques such as level-set representation
can also be used to maintain boundaries with a dielectric discontinuity. We
previously demonstrated such a technique in a similar silicon on glass
material platform [48]. Compared to the parametrized technique presented in this
article, the simulations for the free-form level-set technique require
significantly higher resolutions (i.e., much smaller grid size) to converge
and the optimization domain has many more local optima. Due to their small
features, the optimized metasurfaces obtained using this level-set approach
are also significantly more difficult to fabricate. As a result, the
application of level-set representation has been limited to small
structures [48].
The parameterized adjoint optimization technique can be easily adapted for
designing other types of metasurfaces such as achromatic metasurfaces (see
Supplementary Note 4). We have presented the design of achromatic metalenses
with parameterized shapes in Figs. S10 and S11. These metasurfaces provide
comparable efficiencies to the ones designed using topology optimization [31],
and do not pose fabrication challenges similar to those of free-form
structures.
Using simple, parameterized shapes reduces the dimensionality of the
metasurface design space and simplifies the fabrication process. Designs
produced by adjoint topology optimization typically require hundreds of steps
to converge [26, 22]. Parametrization enables us to include our knowledge about
principles of operation of metasurfaces by selecting proper arrangement of the
meta-atoms and other parameters such as meta-atom height and lattice constant.
Our initial design (a uniform metasurface comprising identical meta-atoms),
although very simple, includes many important characteristics of the final
design, so the optimization converges faster. The metalenses presented in this work
evolved to designs with performance superior to the conventionally-designed
controls in fewer than 15 steps. The quick convergence enabled us to optimize
large-scale (50 $\upmu$m diameter) metastructures, which, to the best of our
knowledge, are currently some of the largest 3D adjoint-optimized metalenses.
We previously demonstrated multifunctional multi-layer metasurface devices [49]
using similar methods in approximately the same number of iterations.
Furthermore, the number of iterations could be further reduced by implementing
an adaptive step size [50].
The full-wave simulations employed in this work are computationally expensive.
We employed several techniques to keep the optimization of large devices
feasible. The computational cost of FDTD simulation is directly related to the
grid size used. We employed bi-linear field interpolation, which increases the
accuracy of the derivatives without reducing the grid size, keeping the
computation time for each iteration manageable. To convert the time-domain fields
to the frequency domain, we only used two time samples using an efficient
harmonic detection method [45]. This technique enables multi-wavelength
optimization at minimal additional cost (see Supplementary Note 4):
wavelengths with independent objective functions can be incorporated into the
simulations by adding appropriate sources and acquiring a few additional time-
domain samples without increasing the number or duration of the simulations.
Though in this work we presented metalenses optimized from a trivial initial
state, we could have selected a conventionally designed metasurface (based on
a unit-cell approach) as a starting point, which might have positioned the
initial and final designs nearer to each other.
Like any other gradient-based optimization method, designs determined by our
method represent local optima. However, parameterization allows us to restrict
our search to a judiciously selected subspace by using prior knowledge about
the problem. For example, information from low NA conventional designs can be
useful in determining the appropriate meta-atom height and the lattice
constant for a high NA adjoint-optimized design. To improve the chance of
finding the global optimum, multiple optimizations starting from different
initial designs can be run in parallel. Results of such a multiple-seed optimization
are shown in Fig. S12. Despite their different starting points, all designs
converged to metalenses with similar focusing efficiencies. This behavior
might not be general, but it appears to hold at least for single-layer
structures, a case of significant practical importance.
Because our method requires little knowledge about the final structure, it
allows us to design elements for which conventional techniques fail to produce
efficient designs, like multifunctional metasurfaces [49, 51, 52, 53]. In
multifunctional devices, the interdependence of parameters is significantly
more complex than in single-function designs, and simple models are unable to
capture meta-atom behavior accurately. In contrast, our method considers all the
complex interactions and generates more efficient designs. Our method can also
be easily extended to other kinds of multi-objective optimization, such as
robust designs that are tolerant to fabrication errors [54].
We envision that the adaptation of the parameterized adjoint optimization to
design of large-scale metasurfaces will enable efficient cascading of multiple
metasurfaces to implement compact, complex metasystems with high performance.
This work was funded by the Samsung Advanced Institute of Technology, and
performed in part at the Center for Nanoscale Systems (CNS) at Harvard
University, a member of the National Nanotechnology Coordinated Infrastructure
Network (NNCI), which is supported by the National Science Foundation under
NSF award no. 1541959.
## Methods
#### Metalens optimization
The two metalenses designed by the adjoint technique and the control
metalenses are composed of 430-nm-tall square cross-section aSi meta-atoms
($n_{\text{Si}}=3.84$) that are positioned on a square lattice with a lattice
constant of $\Lambda=320$ nm. The meta-atoms are on a fused silica substrate
($n_{\text{s}}=1.45$) and are cladded above by vacuum. One quadrant of each of
the metalenses is shown in Fig. S7.
The optimization flowchart is shown in Fig. S3. To reduce the required
computational resources, we simulated the fields in a small volume (52
$\upmu$m $\times$ 52 $\upmu$m $\times$ 1.33 $\upmu$m) around the metasurface.
All metalens optimization simulations were performed using a freely-available,
open-source FDTD solver [39]. Time-domain simulations were run until the fields
converged (133 fs). The structure was terminated on all sides by perfectly
matched layer (PML) boundary conditions. Because only the near field of the
structure was simulated, fields at the focal plane (Fig. 3d) were obtained by
Fourier domain propagation. To
further expedite the simulations, we exploited symmetries of the structure and
fields: even mirror symmetry was specified along the $x$-axis and odd mirror
symmetry along the $y$-axis, reducing the simulated volume by a factor of
four.
Simulations were done using a workstation with an Intel E5-2680 CPU; 10 cores
were used for each simulation. The FDTD grid size and the step size were
adjusted manually when a reduction in the rate of improvement was observed.
The simulations in each optimization run began with a grid size of 33 nm (low
resolution); after the device efficiency increased, the grid size was reduced
to 20 nm (high resolution). Each iteration, consisting of both forward and
adjoint simulations, took $\sim$15 min at low resolution and $\sim$97 min at
high resolution. Color-coded plots of meta-atom widths of the optimized and
control lens are shown in Fig. S13.
### Target field distribution
For an $x$-polarized plane wave input $\mathbf{E}^{\text{i}}=\hat{x}E_{0}$ at
wavelength $\lambda_{0}$ originating in a medium with refractive index
$n_{\text{c}}$, the desired field distribution for an ideal metalens with
focal length $f$ is:
$\displaystyle E_{x}^{\mathrm{d}}$
$\displaystyle=E_{0}\,t(\theta)\left(\cos\theta\cos^{2}\phi+\sin^{2}\phi\right)$
(3) $\displaystyle E_{y}^{\mathrm{d}}$
$\displaystyle=E_{0}\,t(\theta)\left(\cos\theta-1\right)\sin\phi\cos\phi,$ (4)
where $t(\theta)=\sqrt{\frac{n_{\text{c}}}{\cos\theta}}\exp(-\frac{2\pi
jf}{\lambda_{0}\cos\theta})$,
$\theta=\tan^{-1}\left(\sqrt{x^{2}+y^{2}}/f\right)$ [44] is the local deflection
angle of the metasurface, and $\phi=\tan^{-1}\left(y/x\right)$ (see Fig. S14).
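As a sketch, the target field of Eqs. (3) and (4) can be evaluated on the output aperture as follows. The function name and argument layout are illustrative assumptions; the sign convention matches $t(\theta)$ above, and the $(\cos\theta-1)$ factor correctly vanishes on axis.

```python
import numpy as np

def ideal_lens_field(x, y, f, lam0, n_c, E0=1.0):
    """Desired field of an ideal, spherical-aberration-free flat lens
    (Eqs. 3-4) at aperture coordinates (x, y)."""
    theta = np.arctan(np.sqrt(x**2 + y**2) / f)    # local deflection angle
    phi = np.arctan2(y, x)                         # azimuthal angle
    t = np.sqrt(n_c / np.cos(theta)) \
        * np.exp(-2j * np.pi * f / (lam0 * np.cos(theta)))
    Ex = E0 * t * (np.cos(theta) * np.cos(phi)**2 + np.sin(phi)**2)
    Ey = E0 * t * (np.cos(theta) - 1) * np.sin(phi) * np.cos(phi)
    return Ex, Ey
```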
### Field interpolation
The FDTD solver calculates fields on a rectangular grid (Yee grid). However,
to determine the gradient, fields on the meta-atom boundaries are required.
From the boundary conditions, we know the fields $D_{\perp}$ and
$\mathbf{E}_{\parallel}$ are continuous. To obtain the boundary fields, we
interpolated along axes normal to meta-atom boundaries using a two-sided
linear fit approach that considers field values at four Yee lattice points
(Fig. S4). For each field component $C$, one linear fit $C_{\text{in}}(x)$ was
determined using two points $(x_{-2},x_{-1})$ inside the meta-atom, and
another, $C_{\text{out}}(x)$, using two points $(x_{1},x_{2})$ outside the
meta-atom. The field at the boundary ($x_{0}$) was found based on the
distance-weighted average of these two extrapolated values as
$C(x_{0})\approx\alpha C_{\text{out}}(x_{0})+\beta C_{\text{in}}(x_{0}),$ (5)
where $\alpha$ and $\beta$ are given by
$\alpha=\frac{\left|x_{0}-x_{-1}\right|}{\left|x_{1}-x_{-1}\right|},\beta=\frac{\left|x_{0}-x_{1}\right|}{\left|x_{1}-x_{-1}\right|}.$
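A minimal sketch of this two-sided interpolation for a single field component and boundary location follows; all names are illustrative assumptions.

```python
def boundary_field(x_in, c_in, x_out, c_out, x0):
    """Eq. (5): extrapolate a field component to the meta-atom boundary x0.

    x_in, c_in   : coordinates (x_-2, x_-1) and values of the two Yee points
                   inside the meta-atom.
    x_out, c_out : coordinates (x_1, x_2) and values of the two points outside.
    """
    # One-sided linear fits, each evaluated at the boundary location x0.
    line = lambda xs, cs: cs[0] + (cs[1] - cs[0]) * (x0 - xs[0]) / (xs[1] - xs[0])
    C_in, C_out = line(x_in, c_in), line(x_out, c_out)
    # Distance-weighted average of the two extrapolated values (Eq. 5).
    span = abs(x_out[0] - x_in[1])          # |x_1 - x_{-1}|
    alpha = abs(x0 - x_in[1]) / span
    beta = abs(x0 - x_out[0]) / span
    return alpha * C_out + beta * C_in
```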
### Gradient symmetrization and scaling
To obtain polarization-insensitive metalens designs, in addition to the mirror
symmetries along $x$ and $y$ axes described above for the simulation domain,
we imposed a symmetry along the $x=y$ line (see Fig. S6). The gradients were
first determined for the simulated, $x$-polarized field for a quarter of the
metalens and then symmetrized according to:
$\nabla_{\mathbf{p}}I(x,y)\leftarrow\frac{1}{2}\nabla_{\mathbf{p}}I(x,y)+\frac{1}{2}\nabla_{\mathbf{p}}I(y,x).$
(6)
This operation is equivalent to computing the gradient for circularly
polarized input light and optimizing the metalens using this symmetrized
gradient ensures its polarization insensitivity. After determining the
symmetrized gradient, the step size $s$ was selected such that the average of
the absolute change of the meta-atom widths $\nabla_{\mathrm{p}}I$ was equal
to a few nanometers (see Fig. S5). The maximum change in the post widths was
limited to 10 times the average value to ensure the first order gradient
approximation is valid.
At the beginning of the optimization, the average absolute change was set to
2 nm. Then, as a reduction in the rate of improvement was observed (see Fig.
S5), it was reduced to 0.1 nm.
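A minimal sketch of the symmetrization and step scaling, assuming the gradient entries are stored on the meta-atom grid so that the mirror across $x=y$ is a matrix transpose; names and defaults are illustrative.

```python
import numpy as np

def symmetrize_gradient(G):
    """Eq. (6): average the gradient map with its mirror across x = y."""
    return 0.5 * (G + G.T)

def scaled_step(G, avg_nm=2.0, cap=10.0):
    """Scale the step so the mean |change| equals avg_nm, then cap individual
    width changes at cap times that average."""
    s = avg_nm / np.mean(np.abs(G))
    return np.clip(s * G, -cap * avg_nm, cap * avg_nm)
```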
### Control metalens designs
To compare the effectiveness of the proposed design method with the
conventional unit-cell design approach, we designed two control metalenses
using the unit-cell approach. The control metalenses have the same design
parameters as the optimized ones, i.e., with lattice constants of 320 nm, and
square cross-section aSi meta-atoms ($n_{\text{Si}}=3.84$) that are 430 nm
tall. Simulated transmittance and phase of the transmission coefficient for a
periodic array of meta-atoms are shown in Fig. S15a and were used to obtain
the design map shown in Fig. S15b.
### Fabrication
All metalenses were fabricated on the same fused silica substrate. To
compensate for systematic errors in lithography, etching and other fabrication
processes, an array of offset designs was included in the pattern. In each
element of this array, the widths of the square meta-atoms were uniformly
changed by a value in the range of $-$15 nm to 45 nm in steps of 5 nm.
measured efficiencies of fabricated metalenses with different offset values.
To pattern the metasurfaces, a 430-nm-thick layer of aSi was deposited on the
substrate using plasma-enhanced chemical vapor deposition. Then, an
approximately 220-nm-thick layer of electron-beam resist (ZEP520A-7, Zeon) was
spin coated on the substrate. To avoid charging effects, a conductive polymer
layer (ARPC-5090, Allresist) was spin coated on top of the resist. The
patterns were defined using a 125 kV electron-beam lithography system
(ELS-F125, Elionix), and then an aluminum oxide hard mask was deposited using
an electron-beam evaporator. After lifting off the hard mask in a solvent
(Remover PG, Microchem), the sample was etched using an inductively-coupled
plasma reactive ion etching tool in an SF6/C4F8 gas mixture. The hard mask was
removed in a heated solution of ammonium hydroxide and hydrogen peroxide.
### Characterization
We used the setup schematically drawn in Fig. S16a to acquire the focusing
efficiency of the metalenses. Each metalens was illuminated by a weakly
diverging Gaussian beam with a wavelength of 850 nm that was partially focused
by a lens with 5 cm focal length (AC254-050, Thorlabs). The light passed
through the metalens and came into focus at a focal plane. Light in the focal
plane was collected by a microscope objective with an NA of 0.95 (UMPlanFI
100$\times$, Olympus), and reimaged by a tube lens (AC254-200, Thorlabs) and a
camera (CoolSnap K4, Photometrics).
The focusing efficiency was defined as the ratio of the power focused inside a
7-$\upmu$m-diameter pinhole in the focal plane of the metalens to the total
power incident on the metalens. To measure the efficiency, we measured the
power in the reimaged focal plane passing through an aperture with a diameter
equivalent to a 7-$\upmu$m-diameter pinhole in the metalens focal plane. A
flip mirror in the imaging system (dashed box in Fig. S16) allowed us to
direct the reimaged spot toward an apertured power meter (S120C, Thorlabs) and
measure the focusing efficiencies.
The total incident power was measured by redirecting all the power in the
partially focused beam (Fig. S16b) to the power meter. The power incident on
the metalens differs from the measured power because of the reflection at the
second, fused-silica-to-air interface (Fig. S16b). The measured power was
corrected to indicate the actual power incident on the lens.
## References
* Kamali et al. 2018 Kamali, S. M.; Arbabi, E.; Arbabi, A.; Faraon, A. A review of dielectric optical metasurfaces for wavefront control. _Nanophotonics_ 2018, _7_ , 1041–1068
* Chen and Craighead 1996 Chen, F. T.; Craighead, H. G. Diffractive lens fabricated with mostly zeroth-order gratings. _Opt. Lett._ 1996, _21_ , 177
* Arbabi et al. 2015 Arbabi, A.; Horie, Y.; Bagheri, M.; Faraon, A. Dielectric metasurfaces for complete control of phase and polarization with subwavelength spatial resolution and high transmission. _Nat. Nanotechnol._ 2015, _10_ , 937–943
* Chen et al. 2012 Chen, X.; Huang, L.; Mühlenbernd, H.; Li, G.; Bai, B.; Tan, Q.; Jin, G.; Qiu, C.-W.; Zhang, S.; Zentgraf, T. Dual-polarity plasmonic metalens for visible light. _Nat. Commun._ 2012, _3_ , 1198
* Lalanne et al. 1998 Lalanne, P.; Astilean, S.; Chavel, P.; Cambril, E.; Launois, H. Blazed binary subwavelength gratings with efficiencies larger than those of conventional échelette gratings. _Opt. Lett._ 1998, _23_ , 1081–1083
* Huang et al. 2013 Huang, L.; Chen, X.; Mühlenbernd, H.; Zhang, H.; Chen, S.; Bai, B.; Tan, Q.; Jin, G.; Cheah, K.-W.; Qiu, C.-W.; Li, J.; Zentgraf, T.; Zhang, S. Three-dimensional optical holography using a plasmonic metasurface. _Nat. Commun._ 2013, _4_ , 2808
* Zhang et al. 2016 Zhang, X.; Jin, J.; Wang, Y.; Pu, M.; Li, X.; Zhao, Z.; Gao, P.; Wang, C.; Luo, X. Metasurface-based broadband hologram with high tolerance to fabrication errors. _Sci. Rep._ 2016, _6_ , 19856
* Zheng et al. 2015 Zheng, G.; Mühlenbernd, H.; Kenney, M.; Li, G.; Zentgraf, T.; Zhang, S. Metasurface holograms reaching 80% efficiency. _Nat. Nanotechnol._ 2015, _10_ , 308–312
* Larouche et al. 2012 Larouche, S.; Tsai, Y.-j.; Tyler, T.; Jokerst, N. M.; Smith, D. R. Infrared metamaterial phase holograms. _Nat. Mater._ 2012, _11_ , 450–454
* Arbabi et al. 2016 Arbabi, A.; Arbabi, E.; Kamali, S. M.; Horie, Y.; Han, S.; Faraon, A. Miniature optical planar camera based on a wide-angle metasurface doublet corrected for monochromatic aberrations. _Nat. Commun._ 2016, _7_ , 443–803
* Faraji-Dana et al. 2018 Faraji-Dana, M.; Arbabi, E.; Arbabi, A.; Kamali, S. M.; Kwon, H.; Faraon, A. Compact folded metasurface spectrometer. _Nat. Commun._ 2018, _9_ , 1–8
* Faraji-Dana et al. 2019 Faraji-Dana, M.; Arbabi, E.; Kwon, H.; Kamali, S. M.; Arbabi, A.; Bartholomew, J. G.; Faraon, A. Hyperspectral imager with folded metasurface optics. _ACS Photonics_ 2019, _6_ , 2161–2167
* Arbabi et al. 2015 Arbabi, A.; Horie, Y.; Ball, A. J.; Bagheri, M.; Faraon, A. Subwavelength-thick lenses with high numerical apertures and large efficiency based on high-contrast transmitarrays. _Nat. Commun._ 2015, _6_ , 7069
* Yu et al. 2011 Yu, N.; Genevet, P.; Kats, M. A.; Aieta, F.; Tetienne, J.-P.; Capasso, F.; Gaburro, Z. Light propagation with phase discontinuities: generalized laws of reflection and refraction. _Science_ 2011, _334_ , 333–337
* Bayati et al. 2019 Bayati, E.; Zhan, A.; Colburn, S.; Zhelyeznyakov, M. V.; Majumdar, A. Role of refractive index in metalens performance. _Appl. Opt._ 2019, _58_ , 1460–1466
* Yang and Fan 2017 Yang, J.; Fan, J. A. Analysis of material selection on dielectric metasurface performance. _Opt. Express_ 2017, _25_ , 23899–23909
* Torfeh and Arbabi 2020 Torfeh, M.; Arbabi, A. Modeling Metasurfaces Using Discrete-Space Impulse Response Technique. _ACS Photonics_ 2020, _7_ , 941–950
* Molesky et al. 2018 Molesky, S.; Lin, Z.; Piggott, A. Y.; Jin, W.; Vucković, J.; Rodriguez, A. W. Inverse design in nanophotonics. _Nat. Photonics_ 2018, _12_ , 659–670
* Shokooh-Saremi and Magnusson 2007 Shokooh-Saremi, M.; Magnusson, R. Particle swarm optimization and its application to the design of diffraction grating filters. _Opt. Lett._ 2007, _32_ , 894–896
* Shen et al. 2015 Shen, B.; Wang, P.; Polson, R.; Menon, R. An integrated-nanophotonics polarization beamsplitter with 2.4$\times$ 2.4 $\mu$m 2 footprint. _Nat. Photonics_ 2015, _9_ , 378
* Sigmund 2011 Sigmund, O. On the usefulness of non-gradient approaches in topology optimization. _Structural and Multidisciplinary Optimization_ 2011, _43_ , 589–596
* Jensen and Sigmund 2011 Jensen, J. S.; Sigmund, O. Topology optimization for nano-photonics. _Laser and Photonics Reviews_ 2011, _5_ , 308–321
* Niederberger et al. 2014 Niederberger, A. C. R.; Fattal, D. A.; Gauger, N. R.; Fan, S.; Beausoleil, R. G. Sensitivity analysis and optimization of sub-wavelength optical gratings using adjoints. _Opt. Express_ 2014, _22_ , 12971–12981
* Lalau-Keraly et al. 2013 Lalau-Keraly, C. M.; Bhargava, S.; Miller, O. D.; Yablonovitch, E. Adjoint shape optimization applied to electromagnetic design. _Opt. Express_ 2013, _21_ , 21693
* Burger et al. 2004 Burger, M.; Osher, S. J.; Yablonovitch, E. Inverse problem techniques for the design of photonic crystals. _IEICE T. Electron._ 2004, _87_ , 258–265
* Sell et al. 2017 Sell, D.; Yang, J.; Doshay, S.; Yang, R.; Fan, J. A. Large-angle, multifunctional metagratings based on freeform multimode geometries. _Nano Lett._ 2017, _17_ , 3752–3757
* Phan et al. 2018 Phan, T.; Sell, D.; Yang, J.; Doshay, S.; Fan, J. A. Metasurface Lenses Based on Topology-Optimized Wavelength-Scale Building Blocks. in Conference on Lasers and Electro-Optics, OSA Technical Digest (Optical Society of America, 2018), paper FF3C.6
* Tsuji et al. 2006 Tsuji, Y.; Hirayama, K.; Nomura, T.; Sato, K.; Nishiwaki, S. Design of optical circuit devices based on topology optimization. _IEEE Photonics Technol. Lett._ 2006, _18_ , 850–852
* Lin et al. 2019 Lin, Z.; Liu, V.; Pestourie, R.; Johnson, S. G. Topology optimization of freeform large-area metasurfaces. _Opt. Express_ 2019, _27_ , 15765–15775
* Yang and Fan 2017 Yang, J.; Fan, J. A. Topology-optimized metasurfaces: impact of initial geometric layout. _Opt. Lett._ 2017, _42_ , 3161
* Chung and Miller 2020 Chung, H.; Miller, O. D. High-NA achromatic metalenses by inverse design. _Opt. Express_ 2020, _28_ , 6945–6965
* Phan et al. 2019 Phan, T.; Sell, D.; Wang, E. W.; Doshay, S.; Edee, K.; Yang, J.; Fan, J. A. High-efficiency, large-area, topology-optimized metasurfaces. _Light: Science & Applications_ 2019, _8_ , 1–9
* Bayati et al. 2020 Bayati, E.; Pestourie, R.; Colburn, S.; Lin, Z.; Johnson, S. G.; Majumdar, A. Inverse designed metalenses with extended depth of focus. _ACS Photonics_ 2020, _7_ , 873–878
* Lin and Johnson 2019 Lin, Z.; Johnson, S. G. Overlapping domains for topology optimization of large-area metasurfaces. _Opt. express_ 2019, _27_ , 32445–32453
* Pérez-Arancibia et al. 2018 Pérez-Arancibia, C.; Pestourie, R.; Johnson, S. G. Sideways adiabaticity: beyond ray optics for slowly varying metasurfaces. _Opt. Express_ 2018, _26_ , 30202–30230
* Sigmund 2009 Sigmund, O. Manufacturing tolerant topology optimization. _Acta. Mech. Sinica._ 2009, _25_ , 227–239
* Wang et al. 2011 Wang, F.; Jensen, J. S.; Sigmund, O. Robust topology optimization of photonic crystal waveguides with tailored dispersion properties. _J. Opt. Soc. Am. B_ 2011, _28_ , 387–397
* Farjadpour et al. 2006 Farjadpour, A.; Roundy, D.; Rodriguez, A.; Ibanescu, M.; Bermel, P.; Joannopoulos, J.; Johnson, S. G.; Burr, G. Improving accuracy by sub-pixel smoothing in FDTD. Tuning the Optic Response of Photonic Bandgap Structures III. 2006; p 63220G
* Oskooi et al. 2010 Oskooi, A. F.; Roundy, D.; Ibanescu, M.; Bermel, P.; Joannopoulos, J. D.; Johnson, S. G. Meep: A flexible free-software package for electromagnetic simulations by the FDTD method. _Comput. Phys. Commun._ 2010, _181_ , 687–702
* Zhan et al. 2018 Zhan, A.; Fryett, T. K.; Colburn, S.; Majumdar, A. Inverse design of optical elements based on arrays of dielectric spheres. _Appl. Opt._ 2018, _57_ , 1437–1446
* Zhan et al. 2019 Zhan, A.; Gibson, R.; Whitehead, J.; Smith, E.; Hendrickson, J. R.; Majumdar, A. Controlling three-dimensional optical fields via inverse Mie scattering. _Sci. Adv._ 2019, _5_
* Miller 2012 Miller, O. D. Photonic Design: From Fundamental Solar Cell Physics to Computational Inverse Design. Ph.D. thesis, UC Berkeley, 2012
* Harrington 2001 Harrington, R. F. _Time-harmonic electromagnetic fields_ ; IEEE Press, 2001; p 480
* McClung et al. 2020 McClung, A.; Mansouree, M.; Samudrala, S.; Arbabi, A. Properties of ideal flat metalenses. in Conference on Lasers and Electro-Optics, OSA Technical Digest (Optical Society of America, 2020), paper FM11R.1.
* Furse 2000 Furse, C. M. Faster than Fourier: Ultra-efficient time-to-frequency-domain conversions for FDTD simulations. _IEEE Antenn. Propag. M._ 2000, _42_ , 24–34
* Su et al. 2018 Su, L.; Piggott, A. Y.; Sapra, N. V.; Petykiewicz, J.; Vuckovic, J. Inverse design and demonstration of a compact on-chip narrowband three-channel wavelength demultiplexer. _ACS Photonics_ 2018, _5_ , 301–305
* Sell et al. 2017 Sell, D.; Yang, J.; Doshay, S.; Fan, J. A. Periodic Dielectric Metasurfaces with High-Efficiency, Multiwavelength Functionalities. _Adv. Opt. Mater._ 2017, _5_ , 1700645
* Mansouree and Arbabi 2019 Mansouree, M.; Arbabi, A. Metasurface design using level-set and gradient descent optimization techniques. 2019 International Applied Computational Electromagnetics Society Symposium (ACES). 2019; pp 1–2
* Mansouree et al. 2020 Mansouree, M.; Kwon, H.; Arbabi, E.; McClung, A.; Faraon, A.; Arbabi, A. Multifunctional 2.5D metastructures enabled by adjoint optimization. _Optica_ 2020, _7_ , 77–84
* 50 Johnson, S. G. The NLopt nonlinear-optimization package. http://github.com/stevengj/nlopt
* Arbabi et al. 2016 Arbabi, E.; Arbabi, A.; Kamali, S. M.; Horie, Y.; Faraon, A. Multiwavelength polarization-insensitive lenses based on dielectric metasurfaces with meta-molecules. _Optica_ 2016, _3_ , 628–633
* Kamali et al. 2017 Kamali, S. M.; Arbabi, E.; Arbabi, A.; Horie, Y.; Faraji-Dana, M.; Faraon, A. Angle-multiplexed metasurfaces: Encoding independent wavefronts in a single metasurface under different illumination angles. _Phys. Rev. X_ 2017, _7_ , 041056
* Zhou et al. 2018 Zhou, Y.; Kravchenko, I. I.; Wang, H.; Nolen, J. R.; Gu, G.; Valentine, J. Multilayer noninteracting dielectric metasurfaces for multiwavelength metaoptics. _Nano Lett._ 2018, _18_ , 7529–7537
* Oskooi et al. 2012 Oskooi, A.; Mutapcic, A.; Noda, S.; Joannopoulos, J. D.; Boyd, S. P.; Johnson, S. G. Robust optimization of adiabatic tapers for coupling to slow-light photonic-crystal waveguides. _Opt. Express_ 2012, _20_ , 21558–21575
# Sensitivity Prewarping for Local Surrogate Modeling
Nathan Wycoff1, Mickaël Binois2 and Robert B. Gramacy3
To whom correspondence should be addressed: Nathan Wycoff <EMAIL_ADDRESS>
1 McCourt School of Public Policy, Georgetown University; 2 ACUMES, Inria
Sophia Antipolis; 3 Dept. of Statistics, Virginia Tech
###### Abstract
In the continual effort to improve product quality and decrease operations
costs, computational modeling is increasingly being deployed to determine
feasibility of product designs or configurations. Surrogate modeling of these
computer experiments via local models, which induce sparsity by only
considering short range interactions, can tackle huge analyses of complicated
input-output relationships. However, narrowing focus to local scale means that
global trends must be re-learned over and over again. In this article, we
propose a framework for incorporating information from a global sensitivity
analysis into the surrogate model as an input rotation and rescaling
preprocessing step. We discuss the relationship between several sensitivity
analysis methods based on kernel regression before describing how they give
rise to a transformation of the input variables. Specifically, we perform an
input warping such that the “warped simulator” is equally sensitive to all
input directions, freeing local models to focus on local dynamics. Numerical
experiments on observational data and benchmark test functions, including a
high-dimensional computer simulator from the automotive industry, provide
empirical validation.
###### keywords:
computer experiments; emulation; sensitivity analysis; Gaussian process;
dimension reduction; active subspace; subbagging
††articletype: Preprint
## 1 Introduction
As previously unimaginable computing power has become widely available,
industrial scientists are increasingly making use of computationally intensive
computer programs to simulate complex phenomena that cannot be explained by
simple mathematical models and which would be prohibitively expensive to
experiment upon physically. These computer experiments have varied business
applications, for example: Zhou (2013) describes virtualization of an injection
molding process; Montgomery and Truss (2001) explored the strength of
automobile components; Crema et al. (2015) developed a computer model to help
manage an assemble to order system. Despite the tremendous supply of
computational resources provided by increasingly powerful CPUs, the general
purpose GPU computing paradigm, and even more specialized hardware such as
tensor processing units, the demands of advanced computer models are still
sizeable. As such, there is a market for fitting surrogates to computer
simulations: flexible statistical models which learn the input-output mapping
defined by the simulator of interest, and are ideally suited to serve as a
substitute for the same. For detailed review, see Gramacy (2020); Santner et
al. (2018); Forrester et al. (2008).
One popular use of computer experiments is to perform sensitivity analysis
(e.g., Oakley and O’Hagan, 2004; Marrel et al., 2009; Gramacy et al., 2013; Da
Veiga et al., 2009; Gramacy, 2020, Ch. 8.2). This can consist of determining
which of the input parameters are most influential, or even whether some
latent combination of the inputs is driving the response. Sensitivity analysis
for computer experiments must take into account unique characteristics not
found in observational data. As in classical design of experiments, the
training data inputs can be chosen, which means there is no need to take into
account natural correlation between the input variables. Moreover, the design
may be selected to maximize information gain or other criteria (Gramacy, 2020,
Ch. 6). Further, in the case of deterministic experiments, we observe input-
output dynamics exactly, and sometimes may even have derivative information
(or can approximate it). Active Subspaces (AS; Constantine, 2015) exploit the
knowledge of these gradients to perform linear sensitivity analysis, that is
to say, sensitivity analysis which finds “directions”, or linear combinations
of inputs, of greatest influence, rather than evaluating individual input
variables. In this article, we will not assume knowledge of the gradient, but
we will leverage that the target simulator is smooth, such that we can
estimate its AS nonparametrically (Othmer et al., 2016; Palar and Shimoyama,
2017, 2018; Wycoff et al., 2021). These methods are closely related to
existing gradient-based kernel dimension reduction (Fukumizu and Leng, 2014)
techniques from the statistics literature, which we discuss in a unified
framework in Section 2.2.
Global Sensitivity Analysis (GSA), beyond being of interest in and of itself,
can also be used to perform a transformation to the input space before
applying standard modeling methods, a process referred to as premodeling in Li
et al. (2005). Sometimes, this can take the form of variable selection, as in
using lasso to select variables before fitting a standard linear model
(Belloni et al., 2013). Otherwise, the dimension of the space is not changed,
but simply our orientation within it, for instance by changing basis to that
implied by Principal Components Analysis (PCA). This has been recommended as a
preprocessor for “axis-aligned” methods such as generalized additive models
(de Souza et al., 2018) and tree-based methods (Rodriguez et al., 2006). And,
of course, these approaches can be combined to learn both a rotated and
truncated space, as in principal components regression (Hastie et al., 2009).
In this article, we argue that this approach also has much promise as a
preprocessor for local surrogate modeling of large-scale computer experiments
(e.g., Gramacy and Apley, 2015; Katzfuss et al., 2020). Practically speaking,
what we dub “prewarping” influences the local model both directly and
indirectly. Directly, because it redefines the definition of distances between
points upon which many surrogate models (e.g., those based on Gaussian process
regression) rely to compute relevant spatial correlations, and indirectly, as
the definition of “local” changes with the metric, thus influencing
neighborhood selection. We build on recently proposed linear GSA techniques
and show significant improvement compared to directly applying the local
methods to the original input space. Intuitively, GSA based preprocessing
handles global trends, and frees the local models to better represent nearby
information. We formalize this intuition in Section 3.1 by proposing that the
relationship between the warped inputs and the outputs be equally sensitive to
every input dimension at the global level. We find that this enhances
predictive ability on a battery of test functions and datasets.
This prewarping idea may be compared to preconditioning in numerical analysis
(Wathen, 2015), where a central problem is the solution of linear systems
$\mathbf{A}\mathbf{x}=\mathbf{b}$. Modern solution algorithms are typically
iterative, meaning that they operate by improving a given approximate solution
$\tilde{\mathbf{x}}$ over the course of many iterations until a measure of
error like $||\mathbf{A}\tilde{\mathbf{x}}-\mathbf{b}||$ is acceptable.
Numerical analysts have found that oftentimes, by first performing a linear
transformation to the input space, they improve the conditioning of the linear
system which results in fewer iterations required for a given level of
accuracy. Similarly, we propose performing a linear transformation of the
input space based on a GSA in the hope that this will result in fewer data
requirements for a given level of accuracy, or greater accuracy given data. If
a surrogate prior to the linear transformation corresponds to fitting $y_{i}$
versus $\mathbf{x}_{i}$, afterwards the problem becomes $y_{i}$ versus
$\mathbf{L}\mathbf{x}_{i}$, where $\mathbf{L}$ is derived from an appropriate
GSA.
In particular, given a large collection of simulator inputs $\mathbf{X}$ and
outputs $\mathbf{y}$, we propose first conducting a GSA using a Gaussian
Process (GP) fit to a (global) manageably-sized subset of the data. We prefer
a separable kernel (details in Section 2), learning correlation decay
separately along each dimension. The Automatic Relevance Determination (ARD;
Neal, 1996; Rasmussen and Williams, 2006, Ch. 5.1) principle holds that those
input dimensions with large kernel length-scales are less important, and can
be dropped when conducting variable selection. Scaling each input dimension by
the reciprocal of the associated length-scale, one possible $\mathbf{L}$, thus
imbues the local surrogate with inductive bias reflecting global trends.
PCA is an option that goes beyond re-scaling to linear projection. However,
PCA’s emphasis on dispersion means it is less useful for surrogate modeling,
where designs are typically chosen by the practitioner; i.e., there is no input
dispersion to learn beyond what we have ourselves imposed. AS, however, allows
for non-axis aligned measures of sensitivity, emitting an $\mathbf{L}$ for the
purposes of linear projection, while also accounting for the response. We
provide the details of how such sensitivities may be realized through
efficient sampling schemes, and how $\mathbf{L}$ may be backed out for the
purposes of input warping for downstream local analysis, and ultimately
accurate prediction. We privilege AS $\mathbf{L}$ prewarping as well as two
axis-by-axis sensitivity analyses, all of which we show improve upon simple
global and local schemes; however, there are certainly other possibilities.
After reviewing relevant background in Section 2, our proposed methodology is
detailed in Section 3. Section 4 begins by deploying our method on
observational data and low dimensional test functions, before tackling our
motivating automotive example, a $124$ dimensional problem with $500{,}000$
observations. Section 5 concludes the article and overviews promising future
work.
## 2 Background and Related Work
We review Gaussian processes before pivoting to gradient sensitivity analysis.
### 2.1 Gaussian Processes
Rather than specifying a functional form, a GP simply defines covariances
between input points via some function of the distance between them. For
example:
$\mathbb{V}\mathrm{ar}\left[y(\mathbf{x}_{i}),y(\mathbf{x}_{j})\right]=\sigma^{2}\exp\left\\{\frac{-||\mathbf{x}_{i}-\mathbf{x}_{j}||_{2}^{2}}{2l}\right\\},$
(1)
where the length-scale parameter $l$ controls how quickly correlation decays
as the distance between the inputs increases, and the covariance parameter
$\sigma$ scales the correlation to turn it into a covariance. Broadly
speaking, GP kernels differ firstly in how they calculate distance, and
secondly in how that distance is translated into a covariance. Isotropic
kernels such as (1) are those for which every input dimension is treated
identically in terms of distance calculations, whereas anisotropic kernels are
free to violate this. For instance, a tensor-product kernel assigns a
different length-scale to each dimension, allowing for correlation to decay at
different rates as different parameters are varied. Mathematically, this may
be expressed as
$k(\mathbf{x}_{i},\mathbf{x}_{j}):=\mathbb{V}\mathrm{ar}\left[y(\mathbf{x}_{i}),y(\mathbf{x}_{j})\right]=\sigma^{2}\exp\left\\{-\sum_{k=1}^{p}\frac{(x_{i,k}-x_{j,k})^{2}}{2l_{k}}\right\\},$
(2)
and evaluation of this kernel between all pairs is usually stored in a kernel
matrix $\mathbf{K}$. Notice that each summand in (2) has a different length-
scale $l_{k}$ in the denominator. Since as $l_{k}\to\infty$ the contribution
of that dimension to the covariance matrix shrinks to zero, the ARD principle
(Neal, 1996; Rasmussen and Williams, 2006, Ch. 5.1) argues that dimensions
with large length-scales can be ignored. However, technically speaking, there
is no guarantee that variable importance decreases monotonically with respect
to its length-scale, see (Lin and Joseph, 2020, Section 4.1) and (Wycoff et
al., 2021, Section 3.2) for counterexamples. Operating somewhat along this
principle, Sun et al. (2019) and Katzfuss et al. (2020) scale input dimensions
according to the inverse of their length-scale before fitting models which
involve finding local neighborhoods. This approach will form one of our
baselines in Section 3.1.
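For concreteness, a minimal NumPy sketch of the tensor-product kernel in Eq. (2) follows; the function name and argument layout are our own.

```python
import numpy as np

def ard_kernel(X1, X2, lengthscales, sigma2=1.0):
    """Separable Gaussian kernel of Eq. (2).
    X1: (n1, p), X2: (n2, p), lengthscales: (p,) vector of l_k."""
    d2 = (X1[:, None, :] - X2[None, :, :])**2 / (2.0 * np.asarray(lengthscales))
    return sigma2 * np.exp(-d2.sum(axis=-1))
```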
Inference in a GP is typically conducted in a Bayesian manner. Training data,
comprising observations $y(\mathbf{X})$ are collected at certain input
locations $\mathbf{X}$ and conditioned on, yielding a posterior GP with
modified mean and covariance functions. These latter apply at any desired
point $\tilde{\mathbf{x}}$ through textbook multivariate Gaussian conditioning:
$y(\tilde{\mathbf{x}})\mid\mathbf{y}(\mathbf{X})\sim N(\mu_{n+1},\Sigma_{n+1}),\qquad\mu_{n+1}=\beta_{0}+k(\tilde{\mathbf{x}},\mathbf{X})k(\mathbf{X},\mathbf{X})^{-1}(\mathbf{y}-\beta_{0}\mathbf{1}),\qquad\Sigma_{n+1}=k(\tilde{\mathbf{x}},\tilde{\mathbf{x}})-k(\tilde{\mathbf{x}},\mathbf{X})k(\mathbf{X},\mathbf{X})^{-1}k(\mathbf{X},\tilde{\mathbf{x}}).$ (3)
The most straightforward way to obtain these quantities involves calculating
the Cholesky decomposition of the kernel matrix $k(\mathbf{X},\mathbf{X})$, an
operation which scales cubically with the number of training locations, $n$,
and is computationally intractable when $n$ is in the low thousands. Much
recent work seeks to circumvent this bottleneck.
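The sketch below makes the conditioning of Eq. (3), and its cubic-in-$n$ Cholesky bottleneck, concrete; it assumes a kernel callable such as the ard_kernel sketch above, and the jitter term is a standard numerical safeguard rather than part of Eq. (3).

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def gp_predict(Xnew, X, y, kern, beta0=0.0, jitter=1e-8):
    """Posterior mean and covariance of Eq. (3) via a Cholesky solve."""
    K = kern(X, X) + jitter * np.eye(len(X))   # O(n^3) factorization cost
    cf = cho_factor(K, lower=True)
    Kx = kern(Xnew, X)
    mu = beta0 + Kx @ cho_solve(cf, y - beta0)
    Sigma = kern(Xnew, Xnew) - Kx @ cho_solve(cf, Kx.T)
    return mu, Sigma
```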
#### 2.1.1 Scaling Gaussian Processes to Many Observations
Exploiting the fact that an input point will generally only have high
correlation with its neighbors, Local Approximate Gaussian Processes (laGP;
Gramacy and Apley, 2015; Gramacy, 2020, Ch. 9.3), involve constructing a small
model at prediction time, incorporating only training points near where a
prediction is desired. These points may be selected via Nearest Neighbors (NN)
or more sophisticated criteria. The Vecchia approximation (Vecchia, 1992) also
exploits neighborhood structure, but this is used to build a partitioned
likelihood. Originally introduced for geospatial data, the Vecchia
approximation is most comfortable in low dimensional input spaces, which has
motivated a thread of research to adapt it to higher dimensional problems such
as surrogate modeling (Katzfuss et al., 2020). That these models select a
neighborhood set on the basis of inter-point distances means that proper
prewarping could not only give the model a better perspective of distances
within the set of local points itself, but also lead to a better set of local
points.
Another class of approaches involves choosing a kernel which represents the
inner product of a finite-dimensional yet sufficiently rich feature space.
Then, the kernel matrix $K$ has a rank bounded by the dimension of the feature
space, and can be decomposed efficiently using Woodbury identities. This is
the thrust of Fixed Rank Kriging (Cressie and Johannesson, 2008). Or, instead
of calculating the kernel on all $\mathcal{O}(n^{2})$ training pairs, the
inner product may be calculated through a smaller set of reference locations,
knots, or so-called Inducing Points (Smola and Bartlett, 2001; Snelson and
Ghahramani, 2006; Rasmussen and Williams, 2006, Ch. 8).
The concern with large datasets may seem somewhat antithetical to the idea
that each observation was obtained at great computational cost and should be
optimally exploited, but there is no other choice in high dimension.
Consequently, the adaptation of kernel-based surrogates to high dimensional
problems is an area of active research.
#### 2.1.2 Scaling Gaussian processes to High Dimension
GP modeling in high dimension requires large designs to accurately capture
signal. However, if we assume that the intrinsic dimension of the function is
lower than the nominal input dimension, we may be able to get away with a
smaller training dataset if a mapping can be learned into this reduced space.
Consequently, many approaches for deploying GP as surrogates in high input
dimension settings involve built-in (usually linear) dimension reduction.
Perhaps the most straightforward mechanism involves random projection, as
exemplified by Random Embeddings Bayesian Optimization (Wang et al., 2016,
REMBO), and expanded upon in Binois et al. (2015).
Other options include learning projection matrices before fitting a GP on the
reduced space. In the special case of a one-dimensional reduced space,
Bayesian inference via Markov-Chain Monte Carlo has been proposed to learn the
low dimensional subspace for both observational data (Choi et al., 2011) as
well as for computer emulators (Gramacy and Lian, 2012) via Single-Index
Models. Djolonga et al. (2013) combine finite differencing in random
directions with low rank matrix recovery to discover the projection matrix.
Garnett et al. (2014) give this approach a Bayesian treatment, even proposing
an adaptive sampling algorithm to sequentially select informative design
points. Where finite differencing is appropriate, Constantine et al. (2014)
propose to deploy adaptive sampling for selecting the low dimensional
projection, and also discuss a heuristic for selecting kernel length-scale
parameters on the reduced space.
Instead of defining the GP on a low dimensional space, we could split up the
dimensions of the input space and define a model on each one. For instance,
Durrande et al. (2012); Duvenaud et al. (2011) propose Additive GPs, where the
response is modeled as a sum of stochastic processes defined individually for
each main effect. The sum can be expanded to include stochastic processes of
any interaction level, as detailed in Durrande et al. (2013), or scalar
transformations of the response, as in Lin and Joseph (2020). Delbridge et al.
(2020) lies at the intersection of random projection and additive kernels:
several random projections are combined additively.
### 2.2 Gradient-Based Sensitivity Analysis
If derivatives of the simulator are available with respect to input
parameters, a natural way to define importance of the inputs is via the
magnitude of $\frac{\partial f(\mathbf{x})}{\partial x_{i}}$ since this
quantity tells us how much the output changes as input variable $i$ is
perturbed, assuming the input scales are comparable. Global sensitivity
analysis proceeds by defining some method of aggregating these local
quantities, such as the averaging proposed by Sobol and Gersham (1995), who
used $\mathbb{E}\\{(\frac{\partial f(\mathbf{x})}{\partial x_{i}})^{2}\\}$,
estimated via finite differencing, as a measure of variable importance for
screening purposes. De Lozzo and Marrel
(2016) describe a GP based estimator for this quantity. But we are interested
in directions of importance, which may be defined by those with large average
directional derivatives.
Functions varying only in certain directions are called Ridge Functions, and
thus have the form $f(\mathbf{x})=g(\mathbf{A}\mathbf{x})$, where
$\mathbf{A}\in\mathbb{R}^{r\times p}$, $g$ is any function on
$\mathbb{R}^{r}$, and $r<p$. As a modeling device, ridge functions have
inspired a number of nonlinear statistical predictors, including projection
pursuit (Friedman and Stuetzle, 1981). In the ridge function framework,
dimension reduction is assumed to be linear, but the actual function on the
low dimensional space need not be. The left panel of Figure 1 shows the ridge
function $f(x,y)=\sin(x+y)\cos(x+y)e^{-\frac{x+y}{10}}$. Eponymous ridges are
visible as constant diagonal bands in the heat plot. Here,
$\mathbf{A}=[1\ \ 1]$, and $g(z)=\sin(z)\cos(z)e^{-\frac{z}{10}}$.
Note, however, that ridge functions cannot exhibit “curvy” ridges, as in the
right panel. From the ridge function perspective, the right image represents a
two dimensional function, even though it depends only on the one dimensional quantity
$\sqrt{x^{2}+y^{2}}$.
Figure 1: Heat plots of left: $f(z)=\sin(z)\cos(z)e^{-\frac{z}{10}}$, with
$z=x+y$; and right: $z=\sqrt{x^{2}+y^{2}}$.
The Active Subspace method (AS; Constantine, 2015) provides a way to view
functions as being “almost” ridge functions. This analysis considers the
expected gradient outer product matrix with respect to some measure $\nu$:
$\mathbf{C}=\mathbb{E}_{\nu}\left[\nabla f\nabla f^{\top}\right]=\int\nabla
f\nabla f^{\top}\ d\nu\,.$ (4)
Functions are said to have an AS when they change mostly rather than uniquely
along a small set of directions, formalized in the sense that $\mathbf{C}$ has
a gap in its eigenvalues. The eigenspace associated with the eigenvalues that
make the cut are those directions in which large gradients are “often”
pointed, relative to the measure $\nu$.
In this article, the measure with respect to which the AS is defined will
either be the Lebesgue measure $\nu_{l}$ or the sample probability measure
$\nu_{s}$, which is given by $\nu_{s}(\mathcal{A})=\frac{1}{n}$ if
$\mathcal{A}=\\{\mathbf{x}_{i}\\}$ for any sample point $\mathbf{x}_{i}$ (such
that taking the expectation of some quantity with respect to this measure is
simply the average of that quantity observed at the sampling locations). We
use $\nu$ to denote a generic probability measure.
Readers familiar with techniques such as PCA that analyze the spectrum of the
covariance matrix might expect us instead to be interested in
$\mathbb{E}_{\nu}\left[(\nabla f-\mathbb{E}_{\nu}\left[\nabla f\right])(\nabla
f-\mathbb{E}_{\nu}\left[\nabla
f\right])^{\top}\right]=\mathbb{E}_{\nu}\left[\nabla f\nabla
f^{\top}\right]-\mathbb{E}_{\nu}\left[\nabla
f\right]\mathbb{E}_{\nu}\left[\nabla f\right]^{\top},$
the only difference being that the mean gradient is subtracted prior to the
outer product. However, in the case of analyzing gradients rather than data
points, the average gradient contains useful information about the function.
This is to the extent that Lee (2019) even proposes adding the
$\mathbb{E}_{\nu}\left[\nabla f\right]\mathbb{E}_{\nu}\left[\nabla
f\right]^{\top}$ term above rather than subtracting it to enhance the
influence of that direction.
Analytically computing the integral defining $\mathbf{C}$ is not possible for
a general blackbox $f$. However, if the gradient may be evaluated at arbitrary
input locations, a Monte Carlo estimator may be formed by first sampling $B$
many vectors $\mathbf{x}_{i}\sim\nu$, and then computing
$\frac{1}{B}\sum_{i\in\\{1,\ldots,B\\}}(\nabla f)(\mathbf{x}_{i})(\nabla
f)(\mathbf{x}_{i})^{\top}$. As with the axis-aligned sensitivities, we can of
course use finite-difference approximations; Constantine (2015) analyzes the
effect of numerical error in this step on the quality of the overall estimate
of $\mathbf{C}$.
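A sketch of this Monte Carlo estimator follows, with grad_f and draw standing in for a gradient oracle and a sampler from $\nu$ (names are ours).

```python
import numpy as np

def active_subspace(grad_f, draw, B=1000):
    """Monte Carlo estimate of C (Eq. 4) plus its eigendecomposition."""
    G = np.stack([grad_f(draw()) for _ in range(B)])  # (B, p) gradients
    C = G.T @ G / B
    evals, evecs = np.linalg.eigh(C)                  # ascending order
    return C, evals[::-1], evecs[:, ::-1]             # report descending
```

A gap in the returned eigenvalues then indicates an AS spanned by the leading eigenvectors.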
In situations where finite differencing is not appropriate, the derivative may
again be estimated via nonparametric methods (Othmer et al., 2016; Palar and
Shimoyama, 2017, 2018). Given a GP posterior with constant prior mean
$\beta_{0}$ on $f$, a natural way to estimate $\mathbf{C}$ is to use the
posterior mean of the integral quantity it is defined by (Eq. 4), which is now
a random variable as we are conducting Bayesian inference. Assuming a
sufficiently smooth kernel function, the gradient vector at any point
$\mathbf{x}^{*}$ has a multivariate Gaussian posterior $\nabla
f(\mathbf{x}^{*})\sim N(\mu_{\nabla},\Sigma_{\nabla})$, where
$\mu_{\nabla}=\mathbf{K}_{[\nabla,X]}\mathbf{K}_{[X,X]}^{-1}(\mathbf{y}-\beta_{0}\mathbf{1})\,,\qquad\Sigma_{\nabla}=\mathbf{K}_{[\nabla,\nabla]}-\mathbf{K}_{[\nabla,X]}\mathbf{K}_{[X,X]}^{-1}\mathbf{K}_{[X,\nabla]}\,.$
Above, $\mathbf{K}_{[\nabla,X]}$ represents the cross-covariance matrix
between the gradient at $\mathbf{x}^{*}$ and the observed outputs
$\mathbf{y}$, $\mathbf{K}_{[X,X]}$ that between the outputs $\mathbf{y}$ at
each training location, and $\mathbf{K}_{[\nabla,\nabla]}$ represents the
prior covariance matrix of the gradient vector. These quantities are easily
derived in terms of derivatives of the kernel function $k$ (Rasmussen and
Williams, 2006, Ch. 9), and were used as early as Morris et al. (1993) to
exploit observed derivative information to improve a computer experiment
response surface. We will use these facts to simplify the desired expectation:
$\displaystyle\mathbb{E}_{f}\left[\mathbf{C}_{\nu}|\mathbf{y}\right]$
$\displaystyle=\mathbb{E}_{f}\left[\mathbb{E}_{\mathbf{x}\sim\nu}\left[\nabla
f(\mathbf{x})\nabla f(\mathbf{x})^{\top}\right]|\mathbf{y}\right]$
$\displaystyle=\mathbb{E}_{\mathbf{x}\sim\nu}\left[\mathbb{E}_{f}\left[\nabla
f(\mathbf{x})\nabla
f(\mathbf{x})^{\top}|\mathbf{y}\right]\right]=\mathbb{E}_{\mathbf{x}\sim\nu}\left[\Sigma_{\nabla}(\mathbf{x})+\mu_{\nabla}(\mathbf{x})\mu_{\nabla}(\mathbf{x})^{\top}\right]\,.$
For general $\nu$, this expression may be evaluated via Monte Carlo. Wycoff et
al. (2021) provided closed forms for when $\nu$ is the Lebesgue measure on
$[0,1]^{p}$ (denoted $\nu_{l}$) and $k$ is Gaussian (Eqs. 1–2) or Matérn with
smoothness $\frac{3}{2}$ or $\frac{5}{2}$. The quantities above depend on the
choice of kernel hyperparameters, which must be estimated. We prefer
maximizing the marginal likelihood, but other options work (Fukumizu and Leng,
2014).
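When no closed form applies, the final expectation above is a straightforward average; the following sketch assumes callables mu_grad and Sigma_grad (names ours) that return the posterior gradient mean and covariance at a point.

```python
import numpy as np

def posterior_C(mu_grad, Sigma_grad, xs):
    """MC estimate of E[C | y] = E_x[ Sigma(x) + mu(x) mu(x)^T ] over xs ~ nu."""
    terms = [Sigma_grad(x) + np.outer(mu_grad(x), mu_grad(x)) for x in xs]
    return np.mean(terms, axis=0)
```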
The quantity $\mathbf{C}$ was studied for observational data as early as
Samarov (1993). Kernel based estimates were proposed by Fukumizu and Leng
(2014) with respect to the sample measure $\nu_{s}$, and deployed by Liu and
Guillas (2017) to reduce the dimension of a tsunami simulator. Authors have
also considered second order derivatives. Li (1992) proposes looking at
Hessian eigen-decompositions in Principal Hessian Directions as well as a
method to estimate the Hessian itself using Stein’s Lemma, effectively
calculating the cross-covariance between the response and the outer product of
the input vector. For more on GSA, see Iooss and Lemaître (2015).
## 3 Methodology
We first discuss how to turn a sensitivity analysis into an input warping
before discussing how to fit local models in the warped space.
### 3.1 Warping
Here we propose the heuristic of using the warping such that running the
sensitivity analysis again afterwards would result in all directions being
equally important. In the case of ARD, this would amount to conducting a
warping such that the optimal length-scales are all equal to 1, while in the
case of AS, $\mathbf{C}=\mathbf{I}$. In both of these cases the transformation
is linear, and thus can be represented by a matrix $\mathbf{L}$. The matrix
$\mathbf{L}$ should premultiply each design point
$\mathbf{z}_{i}=\mathbf{L}\mathbf{x}_{i}$, which looks like
$\mathbf{Z}=\mathbf{X}\mathbf{L}^{\top}$ when the design points are stacked in
the canonical design matrix $\mathbf{X}\in\mathbb{R}^{n\times p}$. This
process may be seen as decomposing the black-box $f$ into two parts: a linear
transformation $\mathbf{L}$ and a nonlinear function $g$. Here, $g$ is the
function upon which we are actually doing regression when we fit $\mathbf{y}$
to $\mathbf{Z}$.
#### 3.1.1 Bandwidth and Range Scaling
When using the separable Gaussian kernel (Eq. 2), a length-scale of $l_{k}$
for input variable $k$ living in $[0,1]$ is equivalent to using a length-scale
of $l_{k}=1$ and a domain of $\left[0,\frac{1}{\sqrt{l_{k}}}\right]$.
Therefore, scaling each input dimension by the root of its estimated length-
scale would achieve our desired result. This is because fitting a GP to the
scaled input-output relationship would result in length-scale estimates equal
to 1.
Algorithm 1 Bandwidth Scaling
Given: Data $\mathbf{X},\mathbf{y}$, Bags $B$, Bag size nsub, Sample Size $n$
1:for $b\in\{1,\ldots,B\}$ do$\triangleright$ Subbagging Iteration
2: $\mathcal{B}\sim\textrm{Cat}(\{1,\ldots,n\},\texttt{nsub})$$\triangleright$ Subsampling
3: $\hat{\boldsymbol{\theta}}_{\mathcal{B}}\leftarrow\underset{\theta}{\textrm{argmin}}\,\mathcal{L}_{GP}(\mathbf{y}_{\mathcal{B}},\mathbf{X}_{\mathcal{B}}|\theta)$$\triangleright$ Optimize GP Likelihood wrt $\boldsymbol{\theta}$
4:end for
5:$\hat{\boldsymbol{\theta}}\leftarrow\frac{1}{B}\sum_{\mathcal{B}}\hat{\boldsymbol{\theta}}_{\mathcal{B}}$
6:$\mathbf{L}\leftarrow\mathrm{Diag}\left(1/\sqrt{\hat{l}_{1}},\ldots,1/\sqrt{\hat{l}_{p}}\right)$$\triangleright$ Inverse-root length-scales from $\hat{\boldsymbol{\theta}}$
7:$\mathbf{Z}\leftarrow\mathbf{X}\mathbf{L}^{\top}$
Since we are just scaling the input space, $\mathbf{L}$ will be a diagonal
matrix with nonzero elements given by the inverse root of the length-scales:
$\mathbf{L}_{\mathrm{ARD}}=\mathrm{Diag}\left(\frac{1}{\sqrt{l_{1}}},\frac{1}{\sqrt{l_{2}}},\cdots,\frac{1}{\sqrt{l_{p}}}\right)$.
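As a concrete sketch of Algorithm 1's final steps, assuming the theta slot returned by hetGP's mleHomGP plays the role of the length-scales $l_{k}$ above (X and y as before; a single fit shown, without the subbagging loop):

R> library(hetGP)
R> fit <- mleHomGP(X, y, covtype = "Gaussian")  # separable Gaussian kernel MLE
R> L <- diag(1 / sqrt(fit$theta))               # inverse-root length-scales
R> Z <- X %*% t(L)                              # warped design, Z = X L^T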
In Gramacy (2020) and Cole et al. (2021) this is treated as a preprocessing
step, performed once before deployment within local models, while in Katzfuss
et al. (2020) this scaling is iteratively updated as the marginal likelihood
is optimized and length-scale estimates change. Cole et al. attributed the
idea to Derek Bingham, who called it “stretching and compressing”.
Other measures of input variable sensitivity could also be considered in developing transformations. As recommended by an anonymous reviewer, we will
consider another measure of sensitivity to be the range of the GP posterior
surface fit to data projected onto a given axis. In particular, to determine
the range sensitivity of variable $i$, we first fit a one dimensional GP
regression on $\mathbf{X}_{i}$ vs $\mathbf{y}$. Then, the sensitivity is
defined as the range of the posterior surface of that GP, that is to say, as
$\underset{x_{1},x_{2}\in[0,1]}{\max}|\hat{f}(x_{1})-\hat{f}(x_{2})|$ where
$\hat{f}$ is the posterior predictive mean. This is a nonconvex optimization
problem which we solve approximately by initializing $x_{1}$ and $x_{2}$ to be
the i’th coordinates of those design points corresponding to the largest and
smallest observed $y$ values and then applying a quasi-Newton method
(L-BFGS-B) refinement.
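A sketch of this range sensitivity for a single coordinate, again using hetGP for the 1d fit; the function name range_sens and the box constraints are ours:

R> range_sens <- function(xi, y) {   # xi: one input column, rescaled to [0,1]
+    fit  <- mleHomGP(matrix(xi), y, covtype = "Gaussian")
+    fhat <- function(x) predict(fit, x = matrix(x))$mean
+    obj  <- function(z) -abs(fhat(z[1]) - fhat(z[2]))
+    z0   <- c(xi[which.max(y)], xi[which.min(y)])  # warm start at observed extremes
+    -optim(z0, obj, method = "L-BFGS-B", lower = 0, upper = 1)$value
+  }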
#### 3.1.2 Active Subspace Rotation
In the case of a known AS matrix $\mathbf{C}$, the transformation $\mathbf{L}$
which satisfies our desire to “undo” the sensitivity analysis is given by
$\mathbf{L}=\Lambda^{1/2}\mathbf{U}^{\top}$, where
$\mathbf{U}\in\mathbb{R}^{p\times p}$ is the matrix with columns giving the
eigenvectors of $\mathbf{C}$ and $\Lambda^{1/2}$ a diagonal matrix containing
the square root of the eigenvalues. To show that this warping satisfies our
heuristic, recall that $f(\mathbf{x})=g(\mathbf{L}\mathbf{x})$, and let
$\nu_{\mathbf{z}}$ be the measure implied on
$\mathbf{z}:=\mathbf{L}\mathbf{x}$ by $\nu$.
$\displaystyle\mathbb{E}_{\nu}\left[\nabla_{x}f(\mathbf{x})\nabla_{x}f(\mathbf{x})^{\top}\right]=\mathbb{E}_{\nu}\left[\nabla_{x}g(\mathbf{L}\mathbf{x})\nabla_{x}g(\mathbf{L}\mathbf{x})^{\top}\right]$
$\displaystyle\iff\mathbb{E}_{\nu}\left[\nabla_{x}f(\mathbf{x})\nabla_{x}f(\mathbf{x})^{\top}\right]=\mathbb{E}_{\nu}\left[\mathbf{L}^{\top}(\nabla_{\mathbf{L}x}g(\mathbf{L}\mathbf{x}))(\nabla_{\mathbf{L}x}g(\mathbf{L}\mathbf{x}))^{\top}\mathbf{L}\right]$
$\displaystyle\iff\mathbb{E}_{\nu}\left[\nabla_{x}f(\mathbf{x})\nabla_{x}f(\mathbf{x})^{\top}\right]=\mathbf{L}^{\top}\mathbb{E}_{\nu}\left[\nabla_{\mathbf{L}x}g(\mathbf{L}\mathbf{x})\nabla_{\mathbf{L}x}g(\mathbf{L}\mathbf{x})^{\top}\right]\mathbf{L}$
$\displaystyle\iff\mathbf{U}\Lambda\mathbf{U}^{\top}=\mathbf{U}\Lambda^{\frac{1}{2}}\mathbb{E}_{\nu}\left[\nabla_{\mathbf{L}x}g(\mathbf{L}\mathbf{x})\nabla_{\mathbf{L}x}g(\mathbf{L}\mathbf{x})^{\top}\right]\Lambda^{\frac{1}{2}}\mathbf{U}^{\top}$
$\displaystyle\iff\textbf{I}=\mathbb{E}_{\nu}\left[\nabla_{\mathbf{L}x}g(\mathbf{L}\mathbf{x})\nabla_{\mathbf{L}x}g(\mathbf{L}\mathbf{x})^{\top}\right],$
or alternatively
$\mathbb{E}_{\nu_{\mathbf{z}}}\left[\nabla_{\mathbf{z}}g(\mathbf{z})\nabla_{\mathbf{z}}g(\mathbf{z})^{\top}\right]=\mathbf{I}$.
Consequently, all directions are of equal importance globally, and the local
model is freed to concentrate on local information. The decomposition is
illustrated in Figure 2, which shows the trajectory from simulator input to
simulator output in two different ways.
Figure 2: The function $f$ (bottom, red line) with a nontrivial AS maps from
$[0,1]^{2}$ to $\mathbb{R}$. It may alternatively be viewed as a linear
scaling $\mathbf{L}:[0,1]^{2}\to\mathbb{R}^{2}$, followed by a function $g$
with all directions of equal importance (top, green lines). Before
preprocessing, regression is on $f$; afterwards on $g$.
The bottom of the figure shows the standard modeling approach, where the
black-box simulator maps directly from the input space to the scalar response
in an anisotropic manner. The top shows our proposed decomposition, where
first a linear transformation maps the input hypercube into a polytope defined
by the sensitivity analysis, and second the now isotropic nonlinear function
may be modeled by local predictors. This procedure is delineated in Algorithm
2, which defines a family of warpings parameterized by the measure $\nu$. In
this article, we will study the transformations $\mathbf{L}_{l}$, associated
with the Lebesgue measure, and $\mathbf{L}_{s}$, associated with the sample
measure.
Algorithm 2 Active Subspace Rotation
Given: Data $\mathbf{X},\mathbf{y}$, $\nu\in\{\textrm{Lebesgue},\textrm{Sample}\}$, Bags $B$, Bag size nsub, Sample Size $n$
1:for $b\in\\{1,\ldots,B\\}$ do$\triangleright$ Subbagging Iteration
2: $\mathcal{B}\sim\textrm{Cat}(\{1,\ldots,n\},\texttt{nsub})$ $\triangleright$ Subsampling
3:
$\hat{\boldsymbol{\theta}}_{\mathcal{B}}\leftarrow\underset{\theta}{\textrm{argmin}}\,\mathcal{L}_{GP}(\mathbf{y}_{\mathcal{B}},\mathbf{X}_{\mathcal{B}}|\theta)$
$\triangleright$ Optimize GP Likelihood wrt $\boldsymbol{\theta}$
4: $\hat{\mathbf{C}}_{\mathcal{B}}\leftarrow\mathbb{E}_{\nu}\left[\nabla
f(\mathbf{x})\nabla f(\mathbf{x})^{\top}|\mathbf{y}_{\mathcal{B}}\right]$
$\triangleright$ Subset estimate of $\mathbf{C}$
5:end for
6:$\hat{\mathbf{C}}\leftarrow\frac{1}{B}\sum_{\mathcal{B}}\hat{\mathbf{C}}_{\mathcal{B}}$
7:$\mathbf{U},\boldsymbol{\Lambda}\leftarrow\texttt{eigendecomp}(\hat{\mathbf{C}})$
8:$\mathbf{L}\leftarrow\boldsymbol{\Lambda}^{\frac{1}{2}}\mathbf{U}^{\top}$
9:$\mathbf{Z}\leftarrow\mathbf{X}\mathbf{L}^{\top}$
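Given an estimate Chat of $\mathbf{C}$ (e.g., from the loop above), steps 7–9 of Algorithm 2 amount to a few lines of base R; the pmax guard against tiny negative eigenvalues is our addition:

R> e <- eigen(Chat, symmetric = TRUE)
R> L <- diag(sqrt(pmax(e$values, 0))) %*% t(e$vectors)  # L = Lambda^{1/2} U^T
R> Z <- X %*% t(L)                                      # warped design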
#### 3.1.3 Truncation
Once a transformation $\mathbf{L}$ is calculated, we may additionally select a
truncation dimension, creating another, more parsimonious class of options for
the warping. Determining the appropriate amount of such truncation depends on
what local predictor is to be applied downstream, on the warped (and lower
dimensional) inputs. We follow the approach outlined by Fukumizu and Leng (2014), which is actually designed to estimate kernel hyperparameters but is easily adapted to any low-dimensional parameter, such as model complexity. Our pseudo-code in that setting is provided in Algorithm 3. Notice that the method involves NN; however, this is just one of many possible downstream models, a discussion we shall table for the moment. We take the same approach to
truncation regardless of which GSA method gave rise to $\mathbf{L}$. In
particular, NN is applied to each candidate dimension, and the sum of squared
residuals computed. Rather than simply choosing that dimension which minimized
error magnitude, we found that optimizing the Bayesian Information Criterion
(BIC) was superior. In calculating BIC, we treated the dimension of the NN
model as the number of parameters it had and endowed it with a Gaussian error
structure.
Algorithm 3 Dimension Selection
Given: Rotated Design Matrix $\mathbf{Z}$, search interval [MIND, MAXD].
1:for $r^{*}\in\{\texttt{MIND},\ldots,\texttt{MAXD}\}$ do
2: $\mathbf{Z}_{r^{*}}\leftarrow\mathbf{Z}[,1:r^{*}]$
3: $\texttt{mse}[r^{*}]\leftarrow\texttt{mean}(\texttt{resid}(\texttt{KNN}(\mathbf{Z}_{r^{*}},\mathbf{y}))^{2})$ $\triangleright$ $\kappa$-Nearest Neighbors
4: $\texttt{bic}[r^{*}]\leftarrow n\log(\texttt{mse}[r^{*}])+r^{*}\log(n)$
5:end for
6:$r\leftarrow\underset{\texttt{MIND}\leq
r^{*}\leq\texttt{MAXD}}{\textrm{argmin}}\texttt{bic}[r^{*}]$
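A sketch of Algorithm 3 via FNN, under the assumption that knn.reg with the test argument omitted returns leave-one-out predictions; the neighbor count k = 10 is our illustrative choice:

R> library(FNN)
R> n   <- length(y)
R> bic <- sapply(MIND:MAXD, function(r) {
+    fit <- knn.reg(Z[, 1:r, drop = FALSE], y = y, k = 10)  # LOO k-NN fit
+    n * log(mean((y - fit$pred)^2)) + r * log(n)           # BIC with Gaussian errors
+  })
R> r <- (MIND:MAXD)[which.min(bic)]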
In our experiments (Section 4), all of our local models use the same truncated
dimension size $r$ selected by Algorithm 3. Other approaches still are
certainly possible. For instance, Constantine (2015) suggests manual
examination of $\mathbf{C}$’s spectrum for a gap, though such human
intervention may be at odds with the otherwise hands-off, automated approach
implied by the surrogate modeling context.
#### 3.1.4 Scaling Up
GP-based estimates of the active subspace carry the GP’s computational
burdens, and are limited to comparatively small datasets, just as the GP
itself is. We mitigate this via a subbagging approach (Breiman, 1996; Zhao et
al., 2018). Given a subbag size $n_{B}<n$ and a number of subbags $B$, we
simply sample $n_{B}$ many datapoints at random from our input-output pairs
before fitting a GP and developing an estimate of $\mathbf{C}$ based on those
data alone. This is repeated $B$ times, and each estimated $\mathbf{C}_{b}$ is
combined via averaging to form our estimator
$\frac{1}{B}\sum_{b=1}^{B}\mathbf{C}_{b}$. Since we are executing the cubic
cost GP operations not on $n$ data but on $n_{B}$ data, the overall computational expense is significantly lower in our applications despite the
fact that the procedure must be repeated several times. Furthermore, this is
an embarrassingly parallel task. Of course, this comes at the cost of
estimation error, and, to our knowledge, the impact of such subsampling on the
concentration rate of the estimate of $\mathbf{C}$ is an open question. We
find that it works in practice in Section 4.
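Schematically, the subbagging estimator looks as follows, where estimate_C is a hypothetical stand-in for any per-bag routine (e.g., the GP posterior mean of Eq. 4):

R> Chat <- matrix(0, p, p)
R> for (b in 1:B) {
+    idx  <- sample(n, n_B)                            # random subbag of size n_B < n
+    Chat <- Chat + estimate_C(X[idx, ], y[idx]) / B   # hypothetical per-bag estimate
+  }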
### 3.2 Local Modeling
For some regression methods, such as the basic linear model, linear
transformations such as those we have described in this section so far would
have no nontrivial impact. However, this is certainly not the case for local
models, which are influenced in two major ways, namely by altering the
partitioning scheme and by changing the default distance metric. Before we see
exactly how, we provide an overview of the particular local models we prefer;
the Supplementary Material provides further detail.
The simplest of these is NN. To predict at $\tilde{\mathbf{x}}$, NN determines
the $k$ closest training locations to $\tilde{\mathbf{x}}$, then averages
their responses to obtain a prediction. It is thus affected by the linear
warping through a warped definition of “closest”, which thus alters the points
which are being averaged for each prediction.
The laGP method also operates by building a prediction set at
$\tilde{\mathbf{x}}$. And, just like NN, it begins with some number $\kappa$
of nearest neighbors to $\tilde{\mathbf{x}}$. Next, however, points are added
to that set based on how useful they will be for prediction as measured by an
acquisition criterion built on a GP. This GP is grown until some pre-specified
“max” size. Both the conditioning set(s) (like NN) and the local kernel function are influenced by the linear pre-warping.
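For instance, running laGP on warped inputs might look as follows, assuming warped training and testing designs Z and ZZ (note that aGP's second argument is the response, despite its name):

R> library(laGP)
R> out <- aGP(Z, y, ZZ, start = 6, end = 50, method = "alc")  # local GP at each row of ZZ
R> out$mean; out$var                                          # predictive moments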
The Vecchia approximation is a related but distinct idea. Unlike NN or laGP,
which create local models at prediction time, the Vecchia approximation
specifies a single generative story for the data. Each datapoint, rather than
being conditioned upon all other training data, is instead conditioned on a
cascade of subsets, assumed conditionally independent of all others. This
requires the data be ordered, making the assumption that any data point is
conditionally independent of all those data that come after it in the order.
Since vector data in general have no natural ordering, one is generally
imposed by sorting along a given axis or finding an ordering that best encodes
input distances (Guinness, 2018). The Vecchia approximation stands to benefit
from an improved ordering (and kernel structure) via prewarping.
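A sketch with GpGp, under our understanding of its fit_model/predictions interface (an intercept-only trend is assumed; Z and ZZ are warped training and testing designs as above):

R> library(GpGp)
R> fit <- fit_model(y, locs = Z, X = matrix(1, nrow(Z), 1))   # Vecchia-approximated GP
R> mu  <- predictions(fit, locs_pred = ZZ, X_pred = matrix(1, nrow(ZZ), 1))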
#### Illustrating Influence on Neighborhood Selection
We shall now visually explore the effect preprocessing can have on the sets of
NN. Specifically, points which are farther from the prediction location along
axes with little influence, but closer along axes with much influence, are
comparatively favored. Figure 3 illustrates this principle, revisiting the
ridge function of Figure 1. In this toy example, we sample $400$ input
locations uniformly at random in the 2d input domain, then apply Lebesgue-
measure prewarping. The left panel shows the original input space, while the
right plot shows the new input space after applying an $\mathbf{L}_{l}$
rotation. The training set (black +’s) and prediction location (white
triangle) are the same in both, but the closest points (solid circles) are
changed. In each panel, the faded circles give the locations of the solid
circles from the other plot. We can see that the response value at the ten
nearest neighbors is much closer to the value at the predictive location after
the warping (right) than it is before (left).
Figure 3: The function $f(x,y)=\sin(x+y)\cos(x+y)e^{-\frac{x+y}{10}}$ with $x,y$ varying from $-2\pi$ to $2\pi$, rescaled to $[0,1]$, before (left) and after (right) $\mathbf{L}_{\nu_{l}}$ rotation. In both panels, the black +'s represent the training set and solid circles represent the 10 nearest points to an arbitrary prediction location, itself represented by the large white triangle. Faded circles give nearest neighbors from the other plot. Note that the rotated plot is not to scale for ease of viewing.
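The neighbor re-selection shown in Figure 3 can be reproduced with, e.g., FNN, where xstar denotes the prediction location:

R> library(FNN)
R> nn_before <- get.knnx(X, matrix(xstar, 1), k = 10)$nn.index
R> nn_after  <- get.knnx(X %*% t(L), matrix(xstar, 1) %*% t(L), k = 10)$nn.index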
## 4 Numerical Experiments
We shall now present results of experiments devised to quantitatively evaluate
sensitivity prewarping in predictive exercises. We begin with outlining the
comparators and metrics, followed by implementation details, and the actual
experiments. R scripts reproducing all figures shown in this document may be
found here: https://github.com/NathanWycoff/SensitivityPrewarping
### 4.1 Implementation details, comparators and metrics
The preprocessing methods will be assessed based on their effect on the
performance of downstream local models. As baselines, we entertain GPs fit on
random data subsets, which we’ll denote sGP, as well as $k$-NN (KNN), laGP
(laGP), and the Vecchia approximation (vecc) on the full, original dataset.
Implementations are provided by R packages hetGP (Binois and Gramacy, 2019;
Binois et al., 2018), FNN (Beygelzimer et al., 2013), laGP (Gramacy, 2016;
Gramacy and Apley, 2015), and GpGp (Guinness, 2018; Guinness et al., 2020),
respectively. These will be compared to KNN, laGP and vecc with the four
specific prewarping methods proposed in Section 3.1. The Bandwidth Scaling
$\mathbf{L}_{\mathrm{ARD}}$ will be denoted by prefix B, Lebesgue-measure
prewarping $\mathbf{L}_{l}$ by prefix L, sample-measure prewarping
$\mathbf{L}_{s}$ by S, and the range sensitivity prewarping by R. Further, we will consider truncation for all four prewarping techniques, denoted by a postfix of T.
For each test function, we first generate data using either a random Latin
Hypercube Sample (LHS; Stein, 1987) via the R package lhs (Carnell, 2020) for
synthetic data, or via uniform random subsampling with existing/observational
data, which we then randomly split into train and test sets. Then we fit the baseline models for $\mathbf{y}$ given $\mathbf{X}$ and calculate their performance. Next, we conduct the sensitivity analyses using $5$ subsamples, each of size 1,500, in all experiments, using GP regression to estimate kernel hyperparameters, as well as the nugget term, via MLE (Gramacy and Lee, 2012).
Afterwards, we compute the associated transformations to warp each
$\mathbf{X}$, yielding each $\mathbf{Z}$, as outlined in Algorithms 1 and 2.
Finally, each local model is fit to $\mathbf{Z}$ versus $\mathbf{y}$ for each
$\mathbf{Z}$ created by the different transformations, and their performance
on each recorded. This process is repeated for 10 Monte Carlo iterations.
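For the synthetic problems, the design-generation step is, schematically (f denotes the test function; n and n_test are ours):

R> library(lhs)
R> X <- randomLHS(n, p)          # random Latin hypercube design on [0,1]^p
R> y <- apply(X, 1, f)           # blackbox responses
R> test <- sample(n, n_test)     # random train/test split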
In surrogate modeling, quantification of uncertainty is often a high priority, so we define performance using not only the Mean Square prediction Error
(MSE), but also logarithmic Score (Gneiting and Raftery, 2007). For GP
predictors, this is defined as the log likelihood of the response at a
prediction location given the predictive mean and variance at that point using
our assumption of Gaussianity for the response (Gneiting and Raftery, 2007,
Eq. 25). Since NN is typically not deployed in situations where uncertainty
quantification is desired, we omit score calculations for it.
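Concretely, for a Gaussian predictor with moments pred$mean and pred$var evaluated at held-out responses y_test, the two metrics reduce to (a sketch; variable names are ours):

R> mse   <- mean((y_test - pred$mean)^2)
R> score <- mean(dnorm(y_test, mean = pred$mean, sd = sqrt(pred$var), log = TRUE))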
While calculation of $\mathbf{C}$ can involve sophisticated machinery, we have
endeavored to make its application as simple as possible. With the R package
activegp (Wycoff and Binois, 2020; Wycoff et al., 2021) loaded, prewarping is
as straightforward as:
R> Lt <- Lt_GP(X, y, measure = "lebesgue") ## or measure = "sample"
R> Z <- X %*% Lt[,1:r] ## r is truncated dimension
### 4.2 Observational Data
We first consider two high dimensional observational datasets. The Communities
and Crime dataset (Redmond and Baveja, 2002) combines census and law
enforcement statistics from the United States. The task is to predict crime
rate per capita given 122 socio-economic indicators measured on 1,994 communities. The Temperature dataset (Cawley et al., 2006) involves
temperature forecasting given the output of a weather model, and consists of
7,117 observations and 106 features.
Figure 4: Results on two observational test problems. Left and Center: the
$y$-axis gives either $\log_{10}$ MSE or negative Score (smaller is better).
The letter before the name (B, L, S, or R) represents the transformation used for prewarping (if there is one); T denotes truncation. Bold names indicate prewarping. Models that failed to fit are left blank. Right: logMSE vs -Score for each run; faded icons indicate individual runs while solid icons give
group medians. Circles indicate no prewarping, solid borders indicate no
truncation.
The performance of the competing methods is given in Figure 4. We find that
truncation is helpful for high dimensional problems, particularly on the
Temperature dataset, and more so for the active subspace rotations than for
the axis scaling methods (Bandwidth and Range). We also find that the
$\mathbf{L}_{s}$ generally outperforms $\mathbf{L}_{l}$. This is because the
observational data are not uniformly distributed, which has two implications.
First, since the training set is not uniformly distributed, Sample measure
overemphasizes certain parts of the input space compared to Lebesgue. Second,
because the test set was formed by random sampling, these same parts of the
input space that we have implicitly tuned our $\mathbf{L}$ estimate to are
those parts of the input space in which we tend to find testing locations. In
other words, there is simply a mismatch between the probability distribution
from which the observational data were drawn and that with respect to which
$\mathbf{L}_{l}$ is defined. We see that the preprocessing differentiated
itself the least on the Communities and Crime problem, potentially because
this problem consisted of significantly fewer observations, at around
$1{,}000$, making it difficult to estimate the rotation, and leading to high
variance.
Figure 5: A comparison on common test functions with $n=40{,}000$ runs. See
Figure 4 caption.
### 4.3 Benchmark Test Functions
We next evaluated the proposed methodology on benchmark test functions
(Surjanovic and Bingham, 2020) where we found that prewarping increased
performance in terms of both MSE and Score. In particular, we ran the
competing methods on the Borehole (Harper and Gupta, 1983, $p=8$), Robot Arm
(An and Owen, 2001, $p=8$), and Piston (Kenett and Zacks, 1998, $p=7$)
functions with a training set size of $40{,}000$ and test set size of
$2{,}000$ for each, sampled from a random LHS.
The results, shown in Figure 5, indicate that prewarping can be quite
beneficial for local modeling in terms of predictive accuracy. On these low
dimensional problems, each method performed similarly regardless of whether
truncation was applied, so we have omitted truncation in the results. On all
three problems, all forms of prewarping greatly outperform respective
baselines. On the Borehole problem the AS based methods $\mathbf{L}_{l}$ and
$\mathbf{L}_{s}$ outperform both the baselines and $\mathbf{L}_{\mathrm{ARD}}$
in terms of both MSE and Score. The Range prewarping seems to have a slight
edge in MSE and a slight disadvantage in Score. On the Robot Arm function, we
find that all prewarping methods perform similarly, with the sample-measure
$\mathbf{L}_{s}$ generally having a slight edge. The Range transformation
seems to be at a disadvantage on this problem. Finally, on the Piston problem,
prewarping generally leads to a decrease in MSE, though which particular
method is ahead depends on the local model considered. Range again does about
the same as no prewarping.
### 4.4 The Jones MOPTA Problem
Figure 6: A comparison on the 124d MOPTA function. See Figure 4 caption.
In this section, we study the performance of prewarping on an optimization
problem presented by General Motors at the 2008 “Modeling and Optimization:
Theory and Applications (MOPTA)” conference (Jones, 2008). The input variables
characterize the design of the automobile, such as materials, part gauges, and
shape, which determine the results of several crash simulations. The problem
is to minimize the mass of the configuration, while observing constraints,
such as the durability of the vehicle and the harshness of the crash. This is
a constrained optimization problem involving 124 input variables and 68
constraints. While the standard approaches to smooth, high dimensional,
constrained optimization are primarily gradient-based, the simulator, a multi-
disciplinary effort, does not provide gradients with respect to inputs, and
numerical noise means finite differencing approaches are not applicable.
Various authors have proposed sophisticated solutions for this challenging
problem, including those based on Bayesian optimization, evolutionary
strategies, or both. Regis (2011) proposed fitting a surrogate model to each
constraint as well as the objective function to launch a stochastic search for
good feasible solutions. Beaucaire et al. (2019) tackled the optimization
problem by effectively using an ensemble of surrogates, while Regis (2012)
combined surrogate modeling approaches with evolutionary algorithms, and Regis
and Wild (2017) combined surrogate modeling with trust region methods.
However, this article is concerned with the large data regime, which is
generally not the case when conducting Bayesian optimization. To study Jones
MOPTA as an emulation problem, we simply treat the sum of the objective and
all of the constraints as a black-box function to approximate. This black-box
is of interest as such augmented objective functions form the basis of
penalty-based approaches to constrained optimization (Nocedal and Wright,
2006).
We sampled $500{,}000$ points uniformly at random in the input space, treating
$2{,}000$ as a test set, chosen randomly. We chose not to include vecc fit on
the untruncated data as the runtime was too long. As the results in Figure 6
show, in terms of MSE, prewarping without truncation can somewhat improve
performance, but adding truncation as well results in improvements of an order of magnitude or more when doing AS prewarping ($\mathbf{L}_{s}$ or
$\mathbf{L}_{l}$). The exception is S-KNN, which is able to achieve
competitive accuracy without truncation. In terms of score, it would appear
that prewarping without truncation can result in a significant decrease in
performance compared to baseline. Indeed, looking at the scatterplot (Figure
6, right), we see that without truncation, the various local models and
prewarpings form a spectrum of solutions trading MSE for Score, whereas the
truncated AS prewarped local models significantly outperform in terms of
both. However, this trend is not universal among prewarpings: the Range
prewarping performs very well in terms of MSE without truncation, but not
with. It seems as though the Range prewarping can offer a good warping of the
space, but not one amenable to truncation.
## 5 Conclusions and Future Work
We introduced Sensitivity Prewarping, a simple-to-deploy framework for local
surrogate modeling of computer experiments. Specifically, we proposed the
heuristic of warping the space such that a global sensitivity analysis would
reveal that all directions are equally important, and showed specific
algorithms based on the ARD principle and/or AS to achieve this. By learning
directions of global importance, we free each of the local models from
individually learning global trends, and instead allow them to focus on their
prediction region. Our prewarping effectively defines a new notion of distance
which has the dual benefit of improving both neighborhood selection and the
value of distance in prediction. We also proposed a subbagging procedure for
scaling up inference of the AS as estimated via a GP.
Generally, our numerical experiments revealed that prewarping yields
significant benefits in terms of predictive accuracy, as measured by MSE, as
well as predictive uncertainty, as measured by Score. We showed how rotations
can improve inference on low dimensional test functions, and how truncation
can be transformative in high dimensional problems. Given the ease of
implementation and the important improvement in predictive accuracy, we submit
that this procedure has broad applicability.
We focused on three specific sensitivity analyses and three specific local
models, but there is plenty of room for further inquiry. Deploying this
framework with nonlinear sensitivity analysis (i.e., that which can measure
the importance of nonlinear functions of the inputs) could be fruitful, for
instance with Active Manifolds (Bridges et al., 2019). It would also be
interesting to study what sensitivity techniques could be expected to perform
well when paired with a given local model.
Another area where future work could lend improvements is in large scale
estimation of $\mathbf{C}$. In this article, we proposed a subbagging
solution, but many other approaches are conceivable. For instance,
$\mathbf{C}$ could be computed by using existing approximations to the kernel
matrix, such as the Vecchia approximation. An alternative would be to deploy
Krylov subspace methods, which have shown great promise in scaling GPs (Wahba
et al., 1995; Gibbs and MacKay, 1997; Pleiss et al., 2018; Dong et al., 2017),
to develop stochastic algorithms either to estimate the matrix $\mathbf{C}$
itself or its leading eigenspace directly (Golub and Meurant, 2010).
Arguably, the weakest link of this approach is the GP fit in the first stage
which produces our estimator of $\mathbf{C}$, required to compute $\mathbf{L}$
in the AS approach. This is because local models can compensate for breaches
of our GP assumptions such as stationarity and homoskedasticity, while the
global fit cannot. Hence, designing techniques for estimation of $\mathbf{C}$
via more sophisticated models is likely to be a fruitful thread of research.
Deep GPs (Damianou and Lawrence, 2013) are a natural next step, and have been
recently studied in the context of computer experiments (Sauer et al., 2020).
Finally, the simulators we studied in this article all accepted a vector of
inputs and returned a scalar response. Extensions to vector-valued, discrete,
or functional responses would increase the breadth of problems this framework
can take on.
## References
* An and Owen (2001) An, J. and A. Owen (2001). Quasi-regression. Journal of Complexity 17(4), 588 – 607.
* Beaucaire et al. (2019) Beaucaire, P., C. Beauthier, and C. Sainvitu (2019). Multi-point infill sampling strategies exploiting multiple surrogate models. In Proceedings of the Genetic and Evolutionary Computation Conference Companion, pp. 1559–1567.
* Belloni et al. (2013) Belloni, A., V. Chernozhukov, and C. Hansen (2013). Inference on Treatment Effects after Selection among High-Dimensional Controls. The Review of Economic Studies 81(2), 608–650.
* Beygelzimer et al. (2013) Beygelzimer, A., S. Kakadet, J. Langford, S. Arya, D. Mount, and S. Li (2013). FNN: Fast Nearest Neighbor Search Algorithms and Applications. R package version 1.1.
* Binois et al. (2015) Binois, M., D. Ginsbourger, and O. Roustant (2015). A warped kernel improving robustness in Bayesian optimization via random embeddings. In International Conference on Learning and Intelligent Optimization, pp. 281–286. Springer.
* Binois and Gramacy (2019) Binois, M. and R. B. Gramacy (2019). hetGP: Heteroskedastic Gaussian Process Modeling and Design under Replication. R package version 1.1.2.
* Binois et al. (2018) Binois, M., R. B. Gramacy, and M. Ludkovski (2018). Practical heteroscedastic Gaussian process modeling for large simulation experiments. Journal of Computational and Graphical Statistics 27(4), 808–821.
* Breiman (1996) Breiman, L. (1996). Bagging predictors. Machine Learning 24(2), 123–140.
* Bridges et al. (2019) Bridges, R., A. Gruber, C. Felder, M. Verma, and C. Hoff (2019). Active manifolds: A non-linear analogue to active subspaces. In International Conference on Machine Learning, pp. 764–772. PMLR.
* Carnell (2020) Carnell, R. (2020). lhs: Latin Hypercube Samples. R package version 1.0.2.
* Cawley et al. (2006) Cawley, G. C., M. R. Haylock, and S. R. Dorling (2006). Predictive uncertainty in environmental modelling. In The 2006 IEEE International Joint Conference on Neural Network Proceedings, pp. 5347–5354. Retrieved from http://theoval.cmp.uea.ac.uk/~gcc/competition/.
* Choi et al. (2011) Choi, T., J. Q. Shi, and B. Wang (2011). A Gaussian process regression approach to a single-index model. Journal of Nonparametric Statistics 23(1), 21–36.
* Cole et al. (2021) Cole, D. A., R. B. Christianson, and R. B. Gramacy (2021). Locally induced Gaussian processes for large-scale simulation experiments. Statistics and Computing 31(3), 1–21.
* Constantine (2015) Constantine, P. G. (2015). Active Subspaces. SIAM.
* Constantine et al. (2014) Constantine, P. G., E. Dow, and Q. Wang (2014). Active subspace methods in theory and practice: Applications to kriging surfaces. SIAM Journal on Scientific Computing 36(4), 1500–1524.
* Crema et al. (2015) Crema, G. G., F. G. Nezami, and S. R. Chakravarthy (2015). A stochastic model for managing an assemble-to-order system. In Proceedings of the 2015 Winter Simulation Conference, WSC ’15, pp. 2283–2294. IEEE Press.
* Cressie and Johannesson (2008) Cressie, N. and G. Johannesson (2008). Fixed rank kriging for very large spatial data sets. Journal of the Royal Statistical Society B 70(1), 209–226.
* Da Veiga et al. (2009) Da Veiga, S., F. Wahl, and F. Gamboa (2009). Local polynomial estimation for sensitivity analysis on models with correlated inputs. Technometrics 51(4), 452–463.
* Damianou and Lawrence (2013) Damianou, A. and N. Lawrence (2013). Deep Gaussian processes. In C. M. Carvalho and P. Ravikumar (Eds.), Proceedings of the Sixteenth International Conference on Artificial Intelligence and Statistics, Volume 31 of Proceedings of Machine Learning Research, pp. 207–215. PMLR.
* De Lozzo and Marrel (2016) De Lozzo, M. and A. Marrel (2016). Estimation of the derivative-based global sensitivity measures using a Gaussian process metamodel. SIAM/ASA Journal on Uncertainty Quantification 4(1), 708–738.
* de Souza et al. (2018) de Souza, J. B., V. A. Reisen, G. C. Franco, M. Ispány, P. Bondon, and J. M. Santos (2018). Generalized additive models with principal component analysis: an application to time series of respiratory disease and air pollution data. Journal of the Royal Statistical Society C 67(2), 453–480.
* Delbridge et al. (2020) Delbridge, I., D. Bindel, and A. G. Wilson (2020). Randomly projected additive gaussian processes for regression. In International Conference on Machine Learning, pp. 2453–2463. PMLR.
* Djolonga et al. (2013) Djolonga, J., A. Krause, and V. Cevher (2013). High-dimensional Gaussian process bandits. In Advances in Neural Information Processing Systems 26, pp. 1025–1033.
* Dong et al. (2017) Dong, K., D. Eriksson, H. Nickisch, D. Bindel, and A. Wilson (2017). Scalable log determinants for Gaussian process kernel learning. In Advances in Neural Information Processing Systems, pp. 6330–6340.
* Durrande et al. (2012) Durrande, N., D. Ginsbourger, and O. Roustant (2012). Additive kernels for Gaussian process modeling. Annales de la Faculté des Sciences de Toulouse, 17.
* Durrande et al. (2013) Durrande, N., D. Ginsbourger, O. Roustant, and L. Carraro (2013). ANOVA kernels and RKHS of zero mean functions for model-based sensitivity analysis. Journal of Multivariate Analysis 115, 57–67.
* Duvenaud et al. (2011) Duvenaud, D. K., H. Nickisch, and C. E. Rasmussen (2011). Additive Gaussian processes. In J. Shawe-Taylor, R. S. Zemel, P. L. Bartlett, F. Pereira, and K. Q. Weinberger (Eds.), Advances in Neural Information Processing Systems 24, pp. 226–234. Curran Associates, Inc.
* Forrester et al. (2008) Forrester, A., A. Sobester, and A. Keane (2008). Engineering Design via Surrogate Modelling: A Practical Guide. Wiley.
* Friedman and Stuetzle (1981) Friedman, J. H. and W. Stuetzle (1981). Projection pursuit regression. Journal of the American Statistical Association 76(376), 817–823.
* Fukumizu and Leng (2014) Fukumizu, K. and C. Leng (2014). Gradient-based kernel dimension reduction for regression. Journal of the American Statistical Association 109(505), 359–370.
* Garnett et al. (2014) Garnett, R., M. A. Osborne, and P. Hennig (2014). Active learning of linear embeddings for Gaussian processes. In Proceedings of the Thirtieth Conference on Uncertainty in Artificial Intelligence, UAI’14, pp. 230–239. AUAI Press.
* Gibbs and MacKay (1997) Gibbs, M. and D. J. MacKay (1997). Efficient implementation of Gaussian processes. Technical report.
* Gneiting and Raftery (2007) Gneiting, T. and A. E. Raftery (2007). Strictly proper scoring rules, prediction, and estimation. Journal of the American Statistical Association 102(477), 359–378.
* Golub and Meurant (2010) Golub, G. H. and G. Meurant (2010). Matrices, Moments and Quadrature with Applications. Princeton University Press.
* Gramacy et al. (2013) Gramacy, R., M. Taddy, and S. Wild (2013). Variable selection and sensitivity analysis using dynamic trees, with an application to computer code performance tuning. The Annals of Applied Statistics 7(1), 51–80.
* Gramacy (2016) Gramacy, R. B. (2016). laGP: Large-scale spatial modeling via local approximate Gaussian processes in R. Journal of Statistical Software 72(1), 1–46.
* Gramacy (2020) Gramacy, R. B. (2020). Surrogates: Gaussian Process Modeling, Design and Optimization for the Applied Sciences. Boca Raton, Florida: Chapman Hall/CRC.
* Gramacy and Apley (2015) Gramacy, R. B. and D. W. Apley (2015). Local Gaussian process approximation for large computer experiments. Journal of Computational and Graphical Statistics 24(2), 561–578.
* Gramacy and Lee (2012) Gramacy, R. B. and H. K. H. Lee (2012). Cases for the nugget in modeling computer experiments. Statistics and Computing 22(3), 713–722.
* Gramacy and Lian (2012) Gramacy, R. B. and H. Lian (2012). Gaussian process single-index models as emulators for computer experiments. Technometrics 54(1), 30–41.
* Guinness (2018) Guinness, J. (2018). Permutation and grouping methods for sharpening Gaussian process approximations. Technometrics 60(4), 415–429.
* Guinness et al. (2020) Guinness, J., M. Katzfuss, and Y. Fahmy (2020). GpGp: Fast Gaussian Process Computation Using Vecchia’s Approximation. R package version 0.3.1.
* Harper and Gupta (1983) Harper, W. V. and S. K. Gupta (1983). Sensitivity/uncertainty analysis of a borehole scenario comparing latin hypercube sampling and deterministic sensitivity approaches. Technical report, Office of Nuclear Waste Isolation.
* Hastie et al. (2009) Hastie, T., R. Tibshirani, and J. Friedman (2009). The Elements of Statistical Learning. Springer Series in Statistics. New York, NY, USA: Springer New York Inc.
* Iooss and Lemaître (2015) Iooss, B. and P. Lemaître (2015). A review on global sensitivity analysis methods. In C. Meloni and G. Dellino (Eds.), Uncertainty Management in Simulation-Optimization of Complex Systems: Algorithms and Applications, pp. 101–122. Springer.
* Jones (2008) Jones, D. R. (2008). Large-scale multi-disciplinary mass optimization in the auto industry. FORTRAN code for this simulator was retrieved from https://www.miguelanjos.com/jones-benchmark.
* Katzfuss et al. (2020) Katzfuss, M., J. Guinness, and E. Lawrence (2020). Scaled Vecchia approximation for fast computer-model emulation.
* Kenett and Zacks (1998) Kenett, R. and S. Zacks (1998). Modern Industrial Statistics: Design and Control of Quality and Reliability. Duxbury Press.
* Lee (2019) Lee, M. R. (2019). Modified active subspaces using the average of gradients. SIAM/ASA Journal on Uncertainty Quantification 7(1), 53–66.
* Li et al. (2005) Li, B., H. Zha, and F. Chiaromonte (2005). Contour regression: A general approach to dimension reduction. Ann. Statist. 33(4), 1580–1616.
* Li (1992) Li, K.-C. (1992). On principal Hessian directions for data visualization and dimension reduction: Another application of Stein’s lemma. Journal of the American Statistical Association 87(420), 1025–1039.
* Lin and Joseph (2020) Lin, L.-H. and V. R. Joseph (2020). Transformation and additivity in Gaussian processes. Technometrics 62(4), 525–535.
* Liu and Guillas (2017) Liu, X. and S. Guillas (2017). Dimension reduction for Gaussian process emulation: An application to the influence of bathymetry on tsunami heights. SIAM/ASA Journal on Uncertainty Quantification 5(1), 787–812.
* Marrel et al. (2009) Marrel, A., B. Iooss, B. Laurent, and O. Roustant (2009). Calculations of Sobol indices for the Gaussian process metamodel. Reliability Engineering & System Safety 94(3), 742–751.
* Montgomery and Truss (2001) Montgomery, G. P. and L. T. Truss (2001). Combining a statistical design of experiments with formability simulations to predict the formability of pockets in sheet metal parts. In SAE Technical Paper. SAE International.
* Morris et al. (1993) Morris, M. D., T. J. Mitchell, and D. Ylvisaker (1993). Bayesian design and analysis of computer experiments: Use of derivatives in surface prediction. Technometrics 35(3), 243–255.
* Neal (1996) Neal, R. M. (1996). Bayesian Learning for Neural Networks. Berlin, Heidelberg: Springer-Verlag.
* Nocedal and Wright (2006) Nocedal, J. and S. J. Wright (2006). Numerical Optimization (second ed.). New York, NY, USA: Springer.
* Oakley and O’Hagan (2004) Oakley, J. and A. O’Hagan (2004). Probabilistic sensitivity analysis of complex models: a Bayesian approach. Journal of the Royal Statistical Society B 66(3), 751–769.
* Othmer et al. (2016) Othmer, C., T. W. Lukaczyk, P. Constantine, and J. J. Alonso (2016). On active subspaces in car aerodynamics. In 17th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference. American Institute of Aeronautics and Astronautics.
* Palar and Shimoyama (2017) Palar, P. S. and K. Shimoyama (2017). Exploiting active subspaces in global optimization: How complex is your problem? In Proceedings of the Genetic and Evolutionary Computation Conference Companion on - GECCO ’17, pp. 1487–1494. ACM Press.
* Palar and Shimoyama (2018) Palar, P. S. and K. Shimoyama (2018). On the accuracy of kriging model in active subspaces. In 2018 AIAA/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, pp. 0913.
* Pleiss et al. (2018) Pleiss, G., J. Gardner, K. Weinberger, and A. G. Wilson (2018). Constant-time predictive distributions for Gaussian processes. In J. Dy and A. Krause (Eds.), Proceedings of the 35th International Conference on Machine Learning, Volume 80 of Proceedings of Machine Learning Research, pp. 4114–4123. PMLR.
* Rasmussen and Williams (2006) Rasmussen, C. E. and C. Williams (2006). Gaussian Processes for Machine Learning. MIT Press.
* Redmond and Baveja (2002) Redmond, M. and A. Baveja (2002). A data-driven software tool for enabling cooperative information sharing among police departments. European Journal of Operational Research 141(3), 660 – 678. Retrieved from https://archive.ics.uci.edu/ml/datasets/communities+and+crime.
* Regis (2011) Regis, R. G. (2011). Stochastic radial basis function algorithms for large-scale optimization involving expensive black-box objective and constraint functions. Computers and Operations Research 38(5), 837 – 853.
* Regis (2012) Regis, R. G. (2012). Surrogate-assisted evolutionary programming for high dimensional constrained black-box optimization. In Proceedings of the 14th Annual Conference Companion on Genetic and Evolutionary Computation, GECCO ’12, pp. 1431–1432. Association for Computing Machinery.
* Regis and Wild (2017) Regis, R. G. and S. M. Wild (2017). Conorbit: constrained optimization by radial basis function interpolation in trust regions. Optimization Methods and Software 32(3), 552–580.
* Rodriguez et al. (2006) Rodriguez, J. J., L. I. Kuncheva, and C. J. Alonso (2006). Rotation forest: A new classifier ensemble method. IEEE Transactions on Pattern Analysis and Machine Intelligence 28(10), 1619–1630.
* Samarov (1993) Samarov, A. M. (1993). Exploring regression structure using nonparametric functional estimation. Journal of the American Statistical Association 88(423), 836–847.
* Santner et al. (2018) Santner, T., B. Williams, and W. Notz (2018). The Design and Analysis of Computer Experiments, Second Edition. New York, NY: Springer–Verlag.
* Sauer et al. (2020) Sauer, A., R. B. Gramacy, and D. Higdon (2020). Active learning for deep Gaussian process surrogates. arXiv preprint arXiv:2012.08015.
* Smola and Bartlett (2001) Smola, A. and P. Bartlett (2001). Sparse greedy Gaussian process regression. In T. Leen, T. Dietterich, and V. Tresp (Eds.), Advances in Neural Information Processing Systems, Volume 13, pp. 619–625. MIT Press.
* Snelson and Ghahramani (2006) Snelson, E. and Z. Ghahramani (2006). Sparse Gaussian processes using pseudo-inputs. In Y. Weiss, B. Schölkopf, and J. Platt (Eds.), Advances in Neural Information Processing Systems, Volume 18, pp. 1257–1264. MIT Press.
* Sobol and Gersham (1995) Sobol, I. and A. Gersham (1995). On an alternative global sensitivity estimator. Proceedings of SAMO, 40–42.
* Stein (1987) Stein, M. (1987). Large sample properties of simulations using Latin hypercube sampling. Technometrics 29(2), 143–151.
* Sun et al. (2019) Sun, F., R. Gramacy, B. Haaland, E. Lawrence, and A. Walker (2019). Emulating satellite drag from large simulation experiments. SIAM/ASA Journal on Uncertainty Quantification 7(2), 720–759. preprint arXiv:1712.00182.
* Surjanovic and Bingham (2020) Surjanovic, S. and D. Bingham (2020). Virtual library of simulation experiments: Test functions and datasets. Retrieved from https://www.sfu.ca/~ssurjano/.
* Vecchia (1992) Vecchia, A. V. (1992). A new method of prediction for spatial regression models with correlated errors. Journal of the Royal Statistical Society B 54(3), 813–830.
* Wahba et al. (1995) Wahba, G., D. R. Johnson, F. Gao, and J. Gong (1995). Adaptive tuning of numerical weather prediction models: Randomized GCV in three- and four-dimensional data assimilation. Monthly Weather Review 123(11), 3358 – 3370.
* Wang et al. (2016) Wang, Z., F. Hutter, M. Zoghi, D. Matheson, and N. de Feitas (2016). Bayesian optimization in a billion dimensions via random embeddings. Journal of Artificial Intelligence Research 55, 361–387.
* Wathen (2015) Wathen, A. J. (2015). Preconditioning. Acta Numerica 24, 329–376.
* Wycoff and Binois (2020) Wycoff, N. and M. Binois (2020). activegp: Gaussian Process Based Design and Analysis for the Active Subspace Method. R package version 1.0.6.
* Wycoff et al. (2021) Wycoff, N., M. Binois, and S. M. Wild (2021). Sequential learning of active subspaces. Journal of Computational and Graphical Statistics 0(ja), 1–33.
* Zhao et al. (2018) Zhao, Y., Y. Amemiya, and Y. Hung (2018). Efficient Gaussian process modeling using experimental design-based subagging. Statistica Sinica 28(3), 1459–1479.
* Zhou (2013) Zhou, H. (2013). Computer Modeling for Injection Molding: Simulation, Optimization, and Control, pp. 25–47. John Wiley & Sons, Inc.
# On the importance of antimony for temporal evolution of emission from self-
assembled (InGa)(AsSb)/GaAs quantum dots on GaP(001)
Petr Steindl, Department of Condensed Matter Physics, Faculty of Science, Masaryk University, Kotlářská 267/2, 61137 Brno, Czech Republic; Huygens-Kamerlingh Onnes Laboratory, Leiden University, P.O. Box 9504, 2300 RA Leiden, Netherlands
Elisa Maddalena Sala, Center for Nanophotonics, Institute for Solid State Physics, Technische Universität Berlin, Hardenbergstr. 36, 10623 Berlin, Germany; EPSRC National Epitaxy Facility, The University of Sheffield, North Campus, Broad Lane, S3 7HQ Sheffield, United Kingdom
Benito Alén, Instituto de Micro y Nanotecnología, IMN-CNM, CSIC (CEI UAM+CSIC), Isaac Newton, 8, E-28760, Tres Cantos, Madrid, Spain
Dieter Bimberg, Center for Nanophotonics, Institute for Solid State Physics, Technische Universität Berlin, Germany; “Bimberg Chinese-German Center for Green Photonics” of the Chinese Academy of Sciences at CIOMP, 13033 Changchun, China
Petr Klenovský, Department of Condensed Matter Physics, Faculty of Science, Masaryk University, Kotlářská 267/2, 61137 Brno, Czech Republic; Czech Metrology Institute, Okružní 31, 63800 Brno, Czech Republic
###### Abstract
Understanding the carrier dynamics of nanostructures is the key for
development and optimization of novel semiconductor nano-devices. Here, we
study the optical properties and carrier dynamics of (InGa)(AsSb)/GaAs/GaP
quantum dots (QDs) by means of non-resonant energy and temperature modulated
time-resolved photoluminescence. Studying this material system is important in
view of the ongoing implementation of such QDs for nano memory devices. Our
set of structures contains a single QD layer, QDs overgrown by a GaSb capping
layer, and solely a GaAs quantum well, respectively. Theoretical analytical
models allow us to discern the common spectral features around the emission
energy of 1.8 eV related to the GaAs quantum well and the GaP substrate. We observe
type-I emission from QDs with recombination times between 2 ns and 10 ns,
increasing towards lower energies. The distribution suggests the coexistence
of momentum direct and indirect QD transitions. Moreover, based on the
considerable tunability of the dots depending on Sb incorporation, we suggest
their utilization as quantum photonic sources embedded in complementary metal-
oxide-semiconductor (CMOS) platforms, since GaP is almost lattice-matched to
Si. Finally, our analysis confirms the nature of the pumping power blue-shift
of emission originating from the charged-background induced changes of the
wavefunction topology.
###### pacs:
78.67.Hc, 73.21.La, 85.35.Be, 77.65.Ly
## I Introduction
In the last few decades, nano-structures like self-assembled III-V QDs have
been investigated due to their wide range of novel physical properties.
Advantages in this respect led to a number of different applications, such as
active media in semiconductor lasers Bimberg1997; Ledentsov;
Heinrichsdorff1997, as building blocks for quantum information devices,
particularly for quantum repeaters Bimberg2008_EL; Azuma_Qrep; Li2019, as
efficient single and entangled photon sources Lochamnn2006;
muller_quantum_2018; martin-sanchez_single_2009; schlehahn_single-photon_2015;
paul_single-photon_2017; salter_entangled-light-emitting_2010; Aberl:17;
Klenovsky2018; Senellart2017, including highly-entangled states for quantum
computing Lim_PRL2005; Lindner_PRL2009; Istrati2020; steindl2020artificial, or
as nanomemories Marent2011; BimbergPatent; Marent2009_microelectronics;
Bimberg2011_SbQDFlash; Marent_APL2007_10y. Among III-V QDs, particularly
type-I indirect (InGa)(AsSb)/GaAs QDs embedded in a GaP(001) matrix t_sala;
Sala2018 have recently attracted attention due to their promising use as
storage units for the QD-Flash nanomemory cells t_sala; Sala2018, as
potentially effective entangled photon sources Klenovsky2018_TUB, owing to
their smaller fine-structure splitting (FSS) of the ground state exciton
compared to well-known type-I systems such as (InGa)As/GaAs Aberl:17;
Klenovsky2018, and as quantum gates Burkard_PRB1999_QuantumGate; Krapek2010;
Klenovsky2016; Klenovsky2018_TUB. The concept of hole storage QD-Flash was
initially suggested by Bimberg and coworkers Marent2011; BimbergPatent;
Marent2009_microelectronics; Bimberg2011_SbQDFlash; Marent_APL2007_10y, following the first pioneering studies Kapteyn1999 regarding the mechanisms of electron escape from InAs/GaAs QDs using Deep Level Transient Spectroscopy (DLTS). The key feature of the QD-Flash is to combine
the fast access times of Dynamic Random Access Memories (DRAM) with the non-
volatility of the Flash, which leads to a universal memory type, potentially
simplifying future computer architectures. Recently, type-I indirect
(InGa)(AsSb)/GaAs/GaP QDs showed an improvement of one order of magnitude in
the storage time compared to pure In0.5Ga0.5As/GaAs/GaP QDs Bonato_APL2015;
Stracke2014, reaching $\sim$1 hr at room temperature t_sala; Sala2018. This
result represents the record to date for Metal-Organic Vapor Phase Epitaxy
(MOVPE)-grown QDs, thus opening up the possibility to use this technique to
fabricate memory devices based on high-quality III-V semiconductor QDs.
Additionally, in Ref. Klenovsky2018_TUB the authors theoretically discussed
the physical properties of such material system – particularly the quantum
confinement type – depending on the relative In/Ga and As/Sb contents in the
QDs. It was found that these QDs showed concurrently both direct and indirect
optical transitions for increasing Sb content, finally leading to type-II band
alignment Klenovsky2018_TUB. This makes such QDs excellent candidates for
quantum information technologies. Increasing the Sb content in the QDs has
been previously made possible by overgrowing (InGa)(AsSb)/GaAs/GaP QDs with a
GaSb capping layer, which has effectively modified the QD composition
Steindl2019_PL. Moreover, through detailed investigations of their optical
properties, it was found that such a procedure led to an energy swapping of the
$\Gamma$ and L states, thereby increasing the wavefunction leakage outside the
QDs Klenovsky2018_TUB; Steindl2019_PL. This property is indeed very appealing
for further improvement of storage times since an increased Sb incorporation
into the QDs leads to increased hole localization energy Klenovsky2018_TUB;
Bimberg2011_SbQDFlash; Marent_APL2007_10y. Finally, fabricating QDs on GaP
substrates is advantageous in terms of integration on Silicon platforms, since
the lattice mismatch between GaP and Si amounts to just 0.4%, thus making
defect-free MOVPE growth of GaP on Si possible Grassman_apl2013.
In this work, we take the next step and study the carrier dynamics of
(InGa)(AsSb)/GaAs/GaP QDs, by means of time-resolved-photoluminescence (TRPL)
for varying detection energy and sample temperature. This allows us to
energetically separate the overlapping optical transitions previously observed
in our recent work Steindl2019_PL. First, we provide a brief overview of our
sample structures. Afterwards, we discuss the experimental results on carrier
lifetimes for varying measurement conditions. Analytical models, describing
the observed physical phenomena are provided, leading us to discern the
different types of optical transitions involved. We would like to point out
that, to date, there is no such detailed optical investigation of this
material system.
## II Sample structures
The samples were grown by MOVPE in Stranski-Krastanov (SK) mode on GaP(001)
substrates at the TU Berlin t_sala; Sala2018. Such samples were also
previously investigated by means of steady-state photoluminescence
Steindl2019_PL. The structures of the samples studied in this work are
schematically depicted in all figures as insets.
All samples include a 5 ML-thick GaAs interlayer (IL), a crucial ingredient for the subsequent QD formation, as pointed out by Sala et al. t_sala; Sala2016. The sample containing the IL only is referred to as $\mathrm{S}_{\mathrm{w/o}}$; the samples labeled $\mathrm{S}_{\mathrm{with}}$ and $\mathrm{S}_{\mathrm{cap}}$ contain (InGa)(AsSb) QDs without and with a $\sim$1 ML GaSb capping layer, respectively. The
QDs are of truncated pyramid shape, with basis diameter of $\sim$15 nm and
height of $\sim$2.5 nm Klenovsky2018_TUB; Steindl2019_PL; Gajjela2020. For
detailed information about the growth procedure, see Sala2018; t_sala;
Steindl2019_PL. Additional details on their structure, particularly on size,
shape, and composition, can be found in a very recent work on XSTM and atom
probe tomography investigations on such QD samples Gajjela2020.
The sample photoluminescence (PL) is found at $\sim$1.8 eV and shows several poorly spectrally separated bands, representing a combination of momentum-direct and momentum-indirect type-I transitions from QDs Steindl2019_PL.
## III Experimental setup for TRPL measurements
In the TRPL experiments we used a pulsed laser with a wavelength of 405 nm and a 60 ps pulse width, focused on a 0.06 mm$^2$ area. The emitted PL spectrum was dispersed by a 1200 grooves/mm ruled grating and detected by a Si avalanche photodiode (APD). First, we cooled the samples to 15 K and detected the energy-resolved TRPL signal for each wavelength in a 200 ns temporal window. Then, for temperature-resolved TRPL, the sample temperature $T$ was varied in the range 15–130 K. Here, the temporal window was shortened to maximize resolution, from 200 ns at lower $T$ to 25 ns at higher $T$. Changing the temporal window entails a change in repetition rate, which was varied between 5 MHz (for the 200 ns temporal window; also used for energy-resolved TRPL) and 80 MHz (for the 25 ns window).
## IV Spectral line-shape model
Figure 1: Excitation power dependence of emission energies of samples (a)
$\mathrm{S}_{\mathrm{w/o}}$, (b) $\mathrm{S}_{\mathrm{with}}$, and (c)
$\mathrm{S}_{\mathrm{cap}}$. Symbols represent the emission energies fitted
from PL spectra. A typical (normalized) spectrum of each sample measured with
$D=3.3$ W/cm2 together with colored band-reconstruction over spectral range of
1650–1900 meV is shown in insets. The emission energies evolve in agreement
with diffuse interface model for spatial type-I transitions
Abramkin_blueshift_analytical; Steindl2019_PL (solid lines). Low-power
emission energies of IL transitions in $\mathrm{S}_{\mathrm{with}}$
($\mathrm{S}_{\mathrm{cap}}$) are red-shifted by $\mathcal{E}^{\mathrm{w}}$
($\mathcal{E}^{\mathrm{c}}$) in respect to that in
$\mathrm{S}_{\mathrm{w/o}}$.
For the description of PL in the time domain (TDPL), we take advantage of the
similarity in the grown structures, leading to expected shared spectral
features across samples associated with carriers confined in the GaAs IL,
i.e., zero-phonon (ZPL) and phonon-replica (rep-ZPL) transitions of electrons
from $X_{xy}$ conduction minima to $\Gamma$ valence band maximum
Prieto_APL1997; Steindl2019_PL. Through analysis of the line shape of the
$S_{\mathrm{w/o}}$ sample, we conclude that a convolution of two asymmetrical
bands with maximum emission energy $E_{\mathrm{max}}$, each showing a small
high-energy and a prominent low-energy tail, reproduces the spectra better
than the purely Gaussian spectral deconvolution used in Ref. Steindl2019_PL.
The low-energy tail is likely related to carrier localization in long-range IL
potential fluctuations Almosni2016, while the high-energy tail is likely
related to phonon-assisted thermal population of delocalized states,
especially at large excitation powers/temperatures or during the initial
stages of the relaxation process. We follow the work of Almosni et al. and
describe the long-range fluctuations responsible for the low-energy tail
through the following equation Almosni2016
$\displaystyle
I\propto\frac{\exp(\epsilon/E_{\mathrm{long}})}{E_{\mathrm{long}}}\exp(-\exp(\epsilon/E_{\mathrm{long}}))$
(1)
where a single parameter $E_{\mathrm{long}}$ characterizes the long-range
potential disorder energy. Meanwhile, the hot-carrier population is taken into
account through an $n$-phonon-assisted thermalization process with the line
shape Amtout1995
$\displaystyle
I_{n}\propto\epsilon^{5/2-n}\exp\left(-\frac{\epsilon}{k_{B}T_{\mathrm{ca}}}\right)$
(2)
with carrier thermalization energy $k_{B}T_{\mathrm{ca}}$ and
$\epsilon=E-E_{\mathrm{max}}$. We limit our description of $I_{\mathrm{IL}}$
(the convolution of Eqs. (1) and (2)) to the one-phonon process ($n=1$) only.
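To make the line-shape model concrete, a minimal numerical sketch of
$I_{\mathrm{IL}}$ as the convolution of Eqs. (1) and (2) is given below; the
parameter values are purely illustrative and would in practice be obtained
from fits such as those summarized in Tab. 1.

```python
import numpy as np

def low_energy_tail(eps, E_long):
    # Eq. (1): long-range potential-disorder tail, parameterized by E_long
    x = eps / E_long
    return np.exp(x) / E_long * np.exp(-np.exp(x))

def hot_carrier(eps, kT_ca, n=1):
    # Eq. (2): n-phonon-assisted thermalization line shape (eps > 0 side)
    out = np.zeros_like(eps)
    pos = eps > 0
    out[pos] = eps[pos] ** (2.5 - n) * np.exp(-eps[pos] / kT_ca)
    return out

eps = np.linspace(-60.0, 60.0, 2001)   # detuning E - E_max (meV)
d_eps = eps[1] - eps[0]

I1 = low_energy_tail(eps, E_long=5.0)  # hypothetical E_long (meV)
I2 = hot_carrier(eps, kT_ca=3.0, n=1)  # hypothetical k_B*T_ca (meV)

I_IL = np.convolve(I1, I2, mode="same") * d_eps  # convolution of (1) and (2)
I_IL /= I_IL.max()                               # normalize to unit peak
```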
As can be seen in Fig. 1, two replicas of the above line-shape model account
for most of the PL emission in these samples, yet not completely. To describe
the full PL spectrum, two additional Gaussian profiles are necessary. One of
them describes a rather broad band (FWHM larger than 35 meV), clearly
observable only at very low excitation powers, likely originating in
donor-acceptor pair (DAP) transitions in GaP Dean_PR68; Dean_1970 or in other
defects induced during GaAs IL and QD formation (the latter in the case of the
samples with QDs). We attribute the second Gaussian band to recombination from
the QDs; owing to the non-optimized excitation wavelength it is very weak and
observable mainly at high excitation powers. Before moving to the
time-resolved analysis, we demonstrate the validity of the fitting model by
applying it to the dependence of the PL on the continuous-wave excitation
power density $D$, measured at 15 K and published in our previous study
Steindl2019_PL.
As in that study, the fitted peak energies are used to analyse the emission
blue-shift with increasing $D$, in order to determine the type of carrier
spatial confinement. Although elsewhere in the literature Klenovsky2017;
Jo2012; Ledentsov1995; Jin; Gradkowski_pssb2009 the presence of a blue-shift
is automatically assigned to indirect spatial alignment, the so-called
type-II, we examine here the blue-shift using
$E=E_{0}+U\ln(D)+\beta D^{1/3}$ Abramkin_blueshift_analytical;
Steindl2019_PL, allowing us to disentangle type-II band-bending, due to state
squeezing represented by the parameter $\beta$, from the
spatial-alignment-independent blue-shift caused by crystalline defects and
described by the Urbach energy tail $U$. Since $\beta$ is negligible, the
analysis in Fig. 1 suggests that the emission bands of our heterostructures
are of type-I, i.e. spatially direct, as also previously reported based on
Gaussian fits Steindl2019_PL and in agreement with $\mathbf{k\cdot p}$
simulations Klenovsky2018_TUB. Moreover, we observe that the ZPL and rep-ZPL
transitions of samples $\mathrm{S}_{\mathrm{with}}$ and
$\mathrm{S}_{\mathrm{cap}}$ are red-shifted with respect to their energies
observed in the PL of $\mathrm{S}_{\mathrm{w/o}}$ by
$\mathcal{E}^{\mathrm{w}}=52$ meV and $\mathcal{E}^{\mathrm{c}}=82$ meV,
respectively. This shift partially reflects the strain relaxation initiated by
constituent segregation from the QD layer Gajjela2020 and, thus, the
partially induced change in band confinement. The former is also connected
with the natural spectral broadening when additional localized defect states
are created in the heterostructure. These additional states then form an
effective background potential increasing with excitation power, leading to
the energy blue-shift of the bands of the samples with QDs, characterized by
the Urbach energy. The bands of the sample with only the GaAs IL, however, do
not show this behavior.
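The blue-shift analysis above amounts to a three-parameter fit of the peak
energies versus $D$; a short illustrative sketch follows, in which the data
arrays are placeholders standing in for fitted peak energies, not measured
values.

```python
import numpy as np
from scipy.optimize import curve_fit

def blueshift(D, E0, U, beta):
    # E = E0 + U*ln(D) + beta*D**(1/3); U: Urbach-tail (defect) term,
    # beta: type-II band-bending (state-squeezing) term
    return E0 + U * np.log(D) + beta * D ** (1.0 / 3.0)

# placeholder arrays (hypothetical): excitation power density D (W/cm^2)
# and fitted peak energy (meV)
D = np.array([0.03, 0.1, 0.33, 1.0, 3.3, 10.0, 33.0])
E = np.array([1850.0, 1852.1, 1854.0, 1856.2, 1858.0, 1860.3, 1862.1])

popt, pcov = curve_fit(blueshift, D, E, p0=[1850.0, 2.0, 0.0])
E0, U, beta = popt
# a negligible beta indicates spatially direct (type-I) emission,
# while a sizeable beta would signal type-II band bending
```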
A similar shift can also be observed in the time domain after the non-resonant
pulse excitation, when the carriers first thermalize into the trap states and
form the initial background potential. As those recombine,
$E_{\mathrm{long}}$ decreases, the potential weakens and, thus, the emission
energy is gradually red-shifted, as we will discuss later in more detail. This
potential weakening is also connected with the spreading of the state
wavefunctions, effectively observable as an increase of the recombination
times in the excitation-resolved TRPL, see the supplemental information
Supplement.
Although we identify a QD band in the emission of the samples with dots, we
expect in the studied spectral range an even richer spectral response related
to momentum-indirect QD transitions Klenovsky2018_TUB and their compositional
variations Gajjela2020, which are most likely overshadowed by the much
stronger GaAs IL emission.
Table 1: Summary of the best-fit parameters of the spectral shape model
applied to the excitation power resolved PL and TDPL of all studied samples.
Symbol ∗ (∗∗) refers to a discrepancy of $+10$ meV ($-5$ meV) in
$E_{0}$ from TDPL with respect to the value extracted from the excitation
power-dependent PL.
For ZPL and rep-ZPL, we give $E_{\mathrm{long}}$ as FWHM.
sample | transition | FWHM (meV) | $E_{\mathrm{0}}$ (meV) | $U$ (meV) | $\Delta E$ (meV) | $\tau_{E}$ (ns) | $\tau_{1}^{\mathrm{TDPL}}$ (ns) | $\tau_{2}^{\mathrm{TDPL}}$ (ns)
---|---|---|---|---|---|---|---|---
$\mathrm{S}_{\mathrm{w/o}}$ | ZPL | 10 | $1858\pm 0.4$ | $0.5\pm 0.2$ | $1.3\pm 0.4$ | $50\pm 40$ | $10.7\pm 0.2$ | $52\pm 1$
rep-ZPL | 14 | $1826\pm 0.4$ | $0.8\pm 0.1$ | $5.9\pm 0.4$ | $31\pm 5$ | $11\pm 3$ | $87.6\pm 0.7$
$\mathrm{S}_{\mathrm{with}}$ | ZPL | 19 | $1796^{*}\pm 1$ | $3.9\pm 0.4$ | $13.8\pm 0.5$ | $41\pm 4$ | $6.8\pm 0.1$ | $47\pm 1$
rep-ZPL | 20 | $1765^{*}\pm 1$ | $2.8\pm 0.4$ | $11\pm 1$ | $46\pm 6$ | $12.9\pm 0.5$ | $47\pm 1$
QDs | 19 | $1777^{*}\pm 2$ | $3.6\pm 0.6$ | $14.3\pm 0.5$ | $35\pm 4$ | $10.4\pm 0.1$ |
$\mathrm{S}_{\mathrm{cap}}$ | ZPL | 20 | $1764\pm 0.4$ | $4.4\pm 0.1$ | $17\pm 1$ | $44\pm 7$ | $14.9\pm 0.1$ | $2.0\pm 0.1$
rep-ZPL | 23 | $1733\pm 0.4$ | $3.1\pm 0.2$ | $5.4\pm 0.7$ | $19\pm 4$ | $68\pm 4$ |
QDs | 8 | $1796^{**}\pm 0.6$ | $0.7\pm 0.2$ | $10\pm 1$ | $4.1\pm 0.4$ | $7.7\pm 2$ |
Table 2: Parameters obtained from the Gourdon–Lavallard model, Eq. (4). Units
of the variables: $\tau_{\mathrm{r}}^{i}$ is in ns; $E_{\mathrm{me}}^{i}$
and $U_{0}^{i}$ are in meV.
sample | GaAs IL | GaAs IL, phonon rep. | growth defects | DAP in GaP
---|---|---|---|---
$\tau_{\mathrm{r}}^{\mathrm{ZPL}}$ | $E_{\mathrm{me}}^{\mathrm{ZPL}}$ | $U_{\mathrm{0}}^{\mathrm{ZPL}}$ | $\tau_{\mathrm{r}}^{\mathrm{rep}}$ | $E_{\mathrm{me}}^{\mathrm{rep}}$ | $U_{\mathrm{0}}^{\mathrm{rep}}$ | $\tau_{\mathrm{r}}^{\mathrm{d}}$ | $E_{\mathrm{me}}^{\mathrm{d}}$ | $U_{\mathrm{0}}^{\mathrm{d}}$ | $\tau_{\mathrm{r}}^{\mathrm{DAP}}$ | $E_{\mathrm{me}}^{\mathrm{DAP}}$ | $U_{\mathrm{0}}^{\mathrm{DAP}}$
$\mathrm{S}_{\mathrm{w/o}}$ | $13.0\pm 1.0$ | $1882\pm 3$ | $4\pm 2$ | $14.4\pm 2.4$ | $1856\pm 2$ | $4.3\pm 1.4$ | $90\pm 1$ | $1877\pm 1$ | $5.3\pm 0.2$ | $260\pm 30$ | $1776\pm 3$ | $15.6\pm 0.5$
$\mathrm{S}_{\mathrm{with}}$ | $31.5\pm 0.7$ | $1835\pm 1$ | $8.0\pm 0.6$ | $30.7\pm 0.3$ | $1801\pm 2$ | $2.7\pm 1.1$ | $284\pm 2$ | $1810\pm 1$ | $14.9\pm 0.1$ | $561\pm 1$ | $1781\pm 1$ | $17.0\pm 0.1$
$\mathrm{S}_{\mathrm{cap}}$ | $18.4\pm 0.5$ | $1792\pm 1$ | $11\pm 1$ | $18.8\pm 0.3$ | $1743\pm 1$ | $2.5\pm 0.9$ | | | | $1156\pm 1$ | $1737\pm 1$ | $17.6\pm 0.2$
## V Emission energy dependent TRPL
Figure 2: False-color plots of PL intensity as a function of time and emission
energy for samples (a) $S_{\mathrm{w/o}}$, (b) $S_{\mathrm{with}}$, and (c)
$S_{\mathrm{cap}}$. The color scale is identical for all samples. Figure 3:
Fitted TDPL emission energies (symbols) which exhibit exponential-like energy
red-shift with temporal evolution (fit, black solid lines). While for
$\mathrm{S}_{\mathrm{w/o}}$ in (a), the shift is timid, for samples
$\mathrm{S}_{\mathrm{with}}$ (b) and $\mathrm{S}_{\mathrm{cap}}$ (c) it
exceeds 10 meV and leads to an observable spectral-shape variation within
temporal evolution (see Fig. 2 and insets with color-coded fitted emission
bands over the spectral range of 1.65–1.9 eV). The broken grey vertical lines
indicate the moment of the laser pulse excitation. Figure 4: Band schemes of
samples $\mathrm{S}_{\mathrm{w/o}}$ [panel (a)], $\mathrm{S}_{\mathrm{with}}$
[panel (d)] and $\mathrm{S}_{\mathrm{cap}}$ [panel (g)] according to the
observed TDPL transitions $E_{0}$. The insets show the experimentally observed
recombination times, transition (taken from fits of TDPL, solid lines) and
escape (dashed line) energies. The energy dispersion of (b) time constants and
(c) corresponding weights $w$ for sample $\mathrm{S}_{\mathrm{w/o}}$ obtained
by fitting the TRPL signal by the double mono-exponential model using Eq. (3)
(symbols) and fitted by the Gourdon-Lavallard’s model 4 (solid lines)
Gourdon_PSSB1989. That for samples $\mathrm{S}_{\mathrm{with}}$ and
$\mathrm{S}_{\mathrm{cap}}$ obtained from fitting of the TRPL signal by triple
mono-exponential model using Eq. (3) is shown in panels (d)–(f) and (g)–(i),
respectively. The deconvoluted time constants show good agreement with TDPL
intensity decays (full symbols with arrows representing time-domain $\Delta E$
shift; transitions are assigned by color in agreement with Fig. 3) and are
compared to the recombination time of wetting layer in InAs/GaAs QDs system of
25 ns (dashed line), taken from Ref. Karachinsky_WL25ns_missoriented_substr.
Shaded areas of $1-10$ ns, $10-40$ ns, and $>100$ ns correspond to different
recombination channels.
In this section, we study the energy-resolved carrier dynamics in our
heterostructures by TRPL. To assign the recombination times to the
characteristic bands, we first fit the signal (see raw experimental data in
Fig. 2) in individual time bins by the spectral shape model discussed in the
previous part, and we refer to this analysis as time-domain PL (TDPL). For the
best-fit results presented in Fig. 3, we use the parameters obtained from
steady-state excitation power dependency. Later, we analyse the signal for
each wavelength also by the double mono-exponential model (2ME)
$I(t)=A_{1}\exp(-t/\tau_{1})+A_{2}\exp(-t/\tau_{2}),$ (3)
characterized by amplitude $A_{1}$ ($A_{2}$) and decay time $\tau_{1}$
($\tau_{2}$) for the slow (fast) decay process. In the case of samples with
QDs, we also added a third exponential decay component ($\tau_{3}$) to the
analysis, representing the electron-hole recombination in QDs. Finally, we
analyze the spectral distribution of the time decay constants
$\tau_{1}$–$\tau_{3}$ by an analytical model developed by Gourdon and
Lavallard Gourdon_PSSB1989:
$\displaystyle\tau=\frac{\tau_{\mathrm{r}}}{1+\exp((E-E_{\mathrm{me}})/U_{0})}$
(4)
which is widely used in the literature Rubel_APL2007; Sugisaki_PRB2000, even
though in Eq. (4) the hopping processes Gourdon_PSSB1989 or temperature
dependence Zhicheng_SciRep2017 are not included. The meaning of the parameters
in Eq. (4) is as follows: $\tau_{\mathrm{r}}$ is the exciton radiative
lifetime, $E_{\mathrm{me}}$ the characteristic energy for which the radiative
time equals the transfer one, analogously to a mobility edge Oueslati_PRB1988;
Sugisaki_PRB2000, and $U_{0}$ is the measured energy of localized states,
similar to the Urbach energy tail, responsible for the observed energy
blue-shift Abramkin_blueshift_analytical. Note that the $\tau_{1}$ process
decays rather slowly and does not completely disappear within one temporal
window; therefore we take its repumping from previous pulses into account in
the TRPL fits, as discussed in the appendix. This issue is overcome in TDPL by
disentangling the individual transitions through line-shape model fitting,
where the slowest decay is assigned to (mainly non-radiative) pair
recombination of DAP in GaP Dean_PR68; Dean_1970. Moreover, for the evaluation
of the spectral dependence of $\tau_{1}$ we need to extend the model of Eq.
(4) by an additional contribution, likely connected with other defects created
during the epitaxial growth process.
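A condensed sketch of this two-stage analysis is given below (the repumping
correction discussed in the appendix is omitted; the input arrays are assumed
to hold the measured decay traces):

```python
import numpy as np
from scipy.optimize import curve_fit

def decay_2me(t, A1, tau1, A2, tau2):
    # Eq. (3): double mono-exponential decay
    return A1 * np.exp(-t / tau1) + A2 * np.exp(-t / tau2)

def gourdon_lavallard(E, tau_r, E_me, U0):
    # Eq. (4): tau_r = radiative lifetime, E_me = mobility edge,
    # U0 = energy scale of the localized states
    return tau_r / (1.0 + np.exp((E - E_me) / U0))

def tau_dispersion(t, E, I_of_t):
    """t (ns), E (meV), I_of_t: array of shape (len(E), len(t))."""
    taus = []
    for trace in I_of_t:
        popt, _ = curve_fit(decay_2me, t, trace,
                            p0=[trace.max(), 50.0, trace.max(), 10.0],
                            maxfev=10000)
        taus.append(max(popt[1], popt[3]))   # slow component tau_1
    taus = np.asarray(taus)
    popt, _ = curve_fit(gourdon_lavallard, E, taus,
                        p0=[taus.max(), E.mean(), 5.0])
    return taus, popt                        # tau(E) and (tau_r, E_me, U0)
```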
### V.1 Sample without QDs $\mathrm{S}_{\mathrm{w/o}}$
We start our discussion with the sample $\mathrm{S}_{\mathrm{w/o}}$. TDPL
deconvolution allows us to study not only the relaxation-time constants of the
considered decay process but also the energy changes of the states in the time
domain. Specifically, the emptying of the impurity states entails an
exponential-like decrease of the emission energies by a total energy $\Delta
E$ for both ZPL and rep-ZPL bands, as also recently observed for relaxed
GaAs/GaP QDs with type-I band alignment Shamirzaev_APL2010:
$\displaystyle E(t)=E_{0}+\Delta E\exp(-t/\tau_{E}),$ (5)
where $E_{0}+\Delta E$ is the energy of the observed state right after laser
excitation, which decays exponentially with time constant $\tau_{E}$ (an
effective time during which impurities and defects affect the electron state)
to the electron energy $E_{0}$. This can equally well be understood as due to
defects at the interfaces between segments of the heterostructure, which
create a local electric field (via non-equilibrium carriers) leading to a
shift $\Delta E$ of the electron state with energy $E_{0}$. The carriers then
recombine within $\tau_{\mathrm{E}}$, upon which the eigenenergy of the
electron state returns to its value without the local field, $E_{0}$. Note
that the shift $\Delta E$ cannot be caused by inter-valley scattering, which
is three orders of magnitude faster than the observed $\tau_{E}$
Zollner_APL89, nor by the thermalization of higher excited states (since
$\tau_{E}>$ radiative recombination times), nor by the thermalization of free
carriers created after excitation, which is one order of magnitude faster; see
$T_{\mathrm{ca}}$ in the supplemental information Supplement.
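The red-shift of Eq. (5) is extracted in the same spirit; a minimal sketch,
seeded with the $\mathrm{S}_{\mathrm{w/o}}$ ZPL values from Tab. 1 as
synthetic input (not measured data):

```python
import numpy as np
from scipy.optimize import curve_fit

def energy_shift(t, E0, dE, tau_E):
    # Eq. (5): emission energy relaxing from E0 + dE back to E0
    return E0 + dE * np.exp(-t / tau_E)

t = np.linspace(0.0, 150.0, 31)   # time after the pulse (ns)
# synthetic TDPL peak energies built from Tab. 1 (E0=1858, dE=1.3, tau_E=50)
E_t = energy_shift(t, 1858.0, 1.3, 50.0) + 0.05 * np.random.randn(t.size)

popt, pcov = curve_fit(energy_shift, t, E_t, p0=[1857.0, 1.0, 40.0])
E0_fit, dE_fit, tauE_fit = popt
```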
Even though both bands are shifted by a few meV, similarly to the total
blue-shift observed in the steady-state experiments, the integral PL spectrum
taken at different times of the measurement does not show any significant
shift and decays uniformly in time, with a decay constant of around 10-15 ns,
see the inset of Fig. 3 (a) and Table 2. These values are in good agreement
with the cryogenic radiative lifetime of the InAs/GaAs wetting layer of 25 ns
Karachinsky_WL25ns_missoriented_substr. Note that since for the studied
samples the energy-level separations of the IL, DAP, and QDs are not clearly
distinguishable, we use a double mono-exponential decay function (with time
constants $\tau_{1}^{\mathrm{TDPL}}$ and $\tau_{2}^{\mathrm{TDPL}}$) to
deconvolute the emission intensity, where the origin of the second time
constant is assigned as follows: DAP and other non-radiative defects decay
slowly ($\tau_{2}^{\mathrm{TDPL}}>40$ ns), whereas the quantum-dot transition
is fast ($\tau_{2}^{\mathrm{TDPL}}<10$ ns).
The standard TRPL deconvolution at each wavelength in Fig. 4 (b) shows two
contributions. The faster one, in good agreement with the ZPL and rep-ZPL TDPL
band decays with time constants around 13 ns, contributes a roughly constant
20 % to the total intensity [panel (c)]. The slower process, related to DAP
and crystalline defects, increases its time constant up to $\sim 200$ ns
towards lower energies, where no transition from the GaAs IL is expected
Klenovsky2018_TUB; Prieto_APL1997, and saturates below 1.79 eV, as expected
from the similarity with the two other samples. Note that similar behaviour,
with an extremely slow (up to a few $\mu$s) low-energy transition, was
independently reported for (In,Ga)As/GaP Robert2012; Robert2016, Ga(As,P)/GaP
Abramkin_JAP2012, and GaSb/GaP Abramkin2012 and interpreted as
momentum-indirect transitions from QDs. Because we observe such a transition
not only for our QDs with a completely different stoichiometry but also for
the GaAs/GaP sample clearly without any QDs, we tend to assign the slow
transition to defects in the GaP substrate Jedral1992; Moser1984, common to
all reported structures. Furthermore, we note in Fig. 4(b) a good agreement
between the TDPL and TRPL time constants, allowing us to deduce, in the
power- and temperature-resolved experiments, the character of the relaxation
based on the results of the TRPL measurements only.
### V.2 Sample with QDs $\mathrm{S}_{\mathrm{with}}$
The whole spectrum of $\mathrm{S}_{\mathrm{with}}$ (Fig. 2), including the ZPL
and rep-ZPL bands, is also red-shifted in TDPL with respect to that of
$\mathrm{S}_{\mathrm{w/o}}$, approximately by $\mathcal{E}^{\mathrm{w}}$, see
Fig. 3 and Table 2. That is close to the energy shift of
$E_{\mathrm{me}}(S_{\mathrm{w/o}})-E_{\mathrm{me}}(S_{\mathrm{with}})=47$ meV
for the ZPL (55 meV for the rep-ZPL), which, together with the similar time
constants $\tau_{1}^{\mathrm{TDPL}}$, points to similar physics behind the
$I_{\mathrm{IL}}$ transitions. The best-fit emission energies of the ZPL and
rep-ZPL after excitation reveal a non-equilibrium carrier background
potential, initially squeezing the electron wavefunction Klenovsky2017;
llorens_topology_2019. Later, as the potential weakens, the wavefunction
spatially spreads, leading to the gradual red-shift $\Delta E$ of 14 meV and
11 meV for the ZPL and rep-ZPL bands, respectively, towards their steady-state
energies. This time, in agreement with the large blue-shift in the excitation
power-dependent PL, the shifts are more prominent due to the significantly
increased number of defects created during QD-layer formation and, later, due
to additional atom segregation Gajjela2020. In addition to the bands observed
for sample $\mathrm{S}_{\mathrm{w/o}}$, we also observe a $\Delta E$ of 14 meV
for the TDPL QD band, with a time constant of $\sim$10 ns, suggesting
impurity-induced dynamics connected with the GaAs layer.
The TRPL signal, deconvoluted via Eq. (3) with three mono-exponential decay
contributions, shows two patterns: one similar to that observed for
$\mathrm{S}_{\mathrm{w/o}}$, and a much faster one, which we attribute to the
emission from the QDs. These processes, depicted in panels (d)–(f) of Fig. 4,
have different weights across the measured spectral range. While for energies
below 1.75 eV the DAP dynamical processes dominate, they lose importance at
larger energies in favor of the processes involving the GaAs IL. The QD
contribution is almost negligible over the whole spectral range, except for an
increase of $w_{3}$, corresponding to the QDs, centered around 1.80 eV and
1.83 eV, where $w_{3}$ is larger than 10%. The mean values of $\tau_{3}$ in
these spectral ranges are $9.0\pm 1.0$ ns and $6.0\pm 1.0$ ns, respectively.
For the spectral characteristics of the transitions, the Gourdon–Lavallard
model Gourdon_PSSB1989 was used, with one contribution for the process
$\tau_{2}$ and two contributions for the process $\tau_{1}$. The best-fit
values (see Tab. 2) show the mobility edge of the ZPL transition in the IL
shifted with respect to that of $\mathrm{S}_{\mathrm{w/o}}$ by 47 meV, in
agreement with the shift of the whole spectrum discussed previously. On the
other hand, the mobility edge of the DAP in GaP remains unaffected by the
heterostructure. The radiative time of the ZPL (rep-ZPL) band is $31.5\pm 0.7$
ns ($30.7\pm 0.3$ ns), which is more than two times larger than that of the
sample without QDs. This increase can be understood in terms of a different
material distribution, an effect of the strain relaxation discussed in
Steindl2019_PL due to the overgrowth of the GaAs IL with QDs, leading to a
change of the confinement potentials. On the other hand, the disorder energies
$U_{0}$ originating from the material redistribution – in our case mainly due
to the strain relaxation – are higher than for $\mathrm{S}_{\mathrm{w/o}}$,
indicating an increased disorder of the GaAs IL interface, which causes not
only the creation of trap states but also non-radiative rates at higher
energies, effectively enlarging the time constants.
### V.3 Sample with GaSb-capped QDs $\mathrm{S}_{\mathrm{cap}}$
As previously shown in Steindl2019_PL, overgrowing the QDs with a thin
($\sim$1 ML) GaSb cap leads to an effective increase of the Sb content in the
QDs. Through the TDPL analysis of sample $\mathrm{S}_{\mathrm{cap}}$, using
the line-shape model with the emission energies and FWHM adopted from the
excitation power dependence, we refine the character of the emission bands and
assign in Fig. 4 the lifetimes of the observed optical transitions; see
particularly the fit in the inset of Fig. 3 (c).
Across the studied spectral range, we again observe similar signatures as in
$\mathrm{S}_{\mathrm{w/o}}$, but red-shifted by $\mathcal{E}^{\mathrm{c}}$.
This shift is also apparent from the comparison of the mobility edges
extracted from the Gourdon–Lavallard model Gourdon_PSSB1989, given in Tab. 2.
In contrast to the previous samples, we also observe a 40 meV shift of the DAP
mobility edge, which is a rather significant change to be caused only by a
different character of the DAP process (i.e., its type or concentration), and
which possibly causes the much longer rep-ZPL transition time extracted from
TDPL. However, we do not observe any change of the mobility edge for samples
$\mathrm{S}_{\mathrm{w/o}}$ and $\mathrm{S}_{\mathrm{with}}$; this might still
be connected to the effect of the layer overgrowth on the dynamics. On the
other hand, we observe an almost unchanged ZPL radiative time of $16.2\pm 0.2$
ns (and $14.9\pm 0.1$ ns from TDPL).
The whole emission spectrum in Figs. 2(c) and 3(c) shows changes in the shape
of the emission bands in the time domain, including an observable spectral
red-shift. From the TRPL deconvolution with three mono-exponential decay
curves, it can be seen that the spectrum consists of a fast component at
energies greater than 1.75 eV, which completely disappears during the first
50 ns after excitation and is rapidly red-shifted during that period. After
50 ns, only a part of the band at energies below 1.75 eV remains bright. In
agreement with the observations for $\mathrm{S}_{\mathrm{with}}$, below 1.74
eV the DAP dynamical processes clearly dominate, with a time constant of
$\sim$1 $\mu$s. At larger energies, the emission due to DAP loses importance
in favor of the GaAs IL processes. For energies larger than 1.76 eV, the
contribution of the QDs also starts to be noticeable, with $w_{3}\sim$10 % and
$\tau_{3}$ of 2–6 ns.
The time-evolution of the best-fit emission energies of individual transitions
from the TDPL fit given in Fig. 3(c) shows that ZPL and rep-ZPL bands are
exponentially red-shifted by 17 meV and 5 meV, respectively, with time
constants $\tau_{E}$ of 19–44 ns.
The previous analysis showed an increase of the QD recombination times with
decreasing energy: from 6 ns to 9 ns for $\mathrm{S}_{\mathrm{with}}$ (at 1.83
eV and 1.80 eV, respectively) and from 2 ns to 6 ns for
$\mathrm{S}_{\mathrm{cap}}$ (at energies close to 1.79 eV and 1.73 eV). The
slower recombination times might be assigned to momentum-indirect transitions;
however, without a detailed single-dot spectroscopic study Rauter_indirectQD
this remains rather speculative, since the effect could equally well be caused
by ensemble averaging Schimpf2019.
## VI Temperature dependent TRPL
Figure 5: Individual TRPL decay times $\tau_{1}$–$\tau_{3}$ (black stars)
shown as a function of temperature, together with the radiative (blue) and
non-radiative (red) components, for all three samples: panels (a) and (b) show
decay times for sample $\mathrm{S}_{\mathrm{w/o}}$, (c)–(e) those for
$\mathrm{S}_{\mathrm{with}}$, and (f)–(h) those for
$\mathrm{S}_{\mathrm{cap}}$. The radiative and non-radiative components
(circles and squares) are fitted by Eq. (9) and Eq. (8) (broken curves),
respectively. The best-fit parameters from the models, including
$\tau_{\mathrm{C}}$ (horizontal dash-dot lines), are added for easier
comparison.
In this section, we separate the radiative and non-radiative contributions of
the observed decay times and complete the band schemes in Fig. 4 with the
non-radiative processes. The individual recombination channels as a function
of $T$ were extracted again using the 3ME (2ME) model for the deconvolution of
the TRPL signal of $\mathrm{S}_{\mathrm{with}}$ and
$\mathrm{S}_{\mathrm{cap}}$ ($\mathrm{S}_{\mathrm{w/o}}$). Contrary to the
sample $\mathrm{S}_{\mathrm{w/o}}$, the lifetime of the ZPL ($\tau_{2}$) for
the samples with QDs ($\mathrm{S}_{\mathrm{with}}$ and
$\mathrm{S}_{\mathrm{cap}}$) increases with $T$ between 30 and 50 K and
thereafter progressively decreases, which is characteristic of thermally
activated escape paths via shallow defects Manna_apl2012_TRPLtype2. Those are
most likely generated at the IL/QD interface during the strain relaxation
caused by the QD overgrowth Steindl2019_PL.
To separate the radiative ($\tau_{\mathrm{R}}$) and non-radiative
($\tau_{\mathrm{NR}}$) lifetimes of the individual transition channels, we
assume, in accordance with Ref. t_alvarez, that at 15 K the only loss
mechanism is radiative recombination. The $\tau_{\mathrm{R}}$ and
$\tau_{\mathrm{NR}}$ decay times can then be extracted from the slow decay
time $\tau_{1}$ by
$\tau_{\mathrm{R}}=\frac{I_{0}}{I_{\mathrm{PL}}(T)}\tau_{1},$ (6)
and
$\frac{1}{\tau_{1}}=\frac{1}{\tau_{\mathrm{R}}}+\frac{1}{\tau_{\mathrm{NR}}},$
(7)
where $I_{0}$ and $I_{\mathrm{PL}}$ are the PL intensities at 15 K and at
larger $T$, respectively. As can be seen in Fig. 5, thermally activated
scattering of localized carriers causes an exponential decrease of
$\tau_{\mathrm{NR}}$ with $T$. This process can be quantitatively interpreted
by a model involving two non-radiative channels
$\frac{1}{\tau_{\mathrm{NR}}}=\frac{1}{\tau_{\mathrm{NR}}^{1}}\exp{\left(\frac{-E_{1}}{k_{\mathrm{B}}T}\right)}+\frac{1}{\tau_{\mathrm{NR}}^{2}}\exp{\left(\frac{-E_{2}}{k_{\mathrm{B}}T}\right)},$
(8)
characterised by the activation energies $E_{1}$ and $E_{2}$ and time
constants $\tau_{\mathrm{NR}}^{1}$ and $\tau_{\mathrm{NR}}^{2}$, respectively.
Conversely, the radiative lifetime $\tau_{\mathrm{R}}$ of the exciton
increases exponentially with $T$
$\tau_{\mathrm{R}}=\tau_{\mathrm{R}}^{0}+\tau_{\mathrm{R}}^{T}\left[\exp{\left(\frac{T}{T_{C}}\right)}-1\right]\,,$
(9)
where $\tau_{\mathrm{R}}^{0}$ ($\tau_{\mathrm{R}}^{T}$) describes the $T$
independent (dependent) part of the radiative lifetime, and $T_{C}$ is the
characteristic value of $T$ corresponding to the energy of the localised
states. On the other hand, the behaviour with $T$ of the decay time of the
fast component $\tau_{2}$ suggests that there is a non-radiative contribution
even at the lowest $T$, which prevents us from using Eq. (6). To overcome this
limitation, we assume that the radiative lifetime at 15 K is the same as that
for the slow component $\tau_{1}$, i.e.,
$\tau_{2}^{\mathrm{R}}(15K)=\tau_{1}^{\mathrm{R}}(15K)$, and a $T$ independent
non-radiative decay $\tau_{\mathrm{C}}$ is also present and given by
$\displaystyle\frac{1}{\tau_{\mathrm{C}}}=\frac{1}{\tau_{2}(15K)}-\frac{1}{\tau_{2}^{\mathrm{R}}(15K)}\,.$
(10)
Since $\tau_{\mathrm{C}}$ is not dependent on $T$, we can now calculate the
radiative lifetime $\tau_{2}^{\mathrm{R}}$ of the fast component at any $T$
using Eq. (6), replacing $\tau_{1}$ with $\tau_{2}$ and $1/\tau_{\mathrm{NR}}$
with $1/\tau_{\mathrm{C}}+1/\tau_{2}^{\mathrm{NR}}$. The overall decay time as
a function of $T$ is then given by
$\displaystyle\frac{1}{\tau_{2}(T)}=\frac{1}{\tau_{\mathrm{C}}}+\frac{1}{\tau_{2}^{\mathrm{R}}(T)}+\frac{1}{\tau_{2}^{\mathrm{NR}}(T)}\,.$
(11)
Hence, we can repeat the analysis of the radiative and non-radiative parts
described by Eqs. (8)–(9) for $\tau_{2}^{\mathrm{R}}$ and
$\tau_{2}^{\mathrm{NR}}$. A similar approach can also be used for $\tau_{3}$
of samples $\mathrm{S}_{\mathrm{with}}$ and $\mathrm{S}_{\mathrm{cap}}$, with
the assumption that the radiative lifetime of $\tau_{3}$ equals that of
$\tau_{2}$, i.e., $\tau_{3}^{\mathrm{R}}(15K)=\tau_{2}^{\mathrm{R}}(15K)$, and
that a $T$-independent non-radiative lifetime $\tau_{\mathrm{C}}$ is given,
analogously to Eq. (10), by
$1/\tau_{\mathrm{C}}=1/\tau_{3}(15K)-1/\tau_{3}^{\mathrm{R}}(15K)$. The
numerical results of the described deconvolution are summarised in Tab. 3 for
the individual decay times taken at the maximum of the PL intensity for each
sample. Based on the previous analysis, we derive an Arrhenius-like equation
with an explicit dependence of the PL on all parameters obtained from the TRPL
results:
$\frac{I_{0}}{I_{\mathrm{PL}}(T)}=1+\sum_{i=1}^{2(3)}{\left[\tau_{i\mathrm{R}}^{0}+\tau_{i\mathrm{R}}^{T}\exp{\left(\frac{T}{T_{i\mathrm{C}}}\right)}\right]\times\left[\frac{1}{\tau_{i\mathrm{NR}}^{1}}\exp{\left(\frac{-E_{i1}}{k_{\mathrm{B}}T}\right)}+\frac{1}{\tau_{i\mathrm{NR}}^{2}}\exp{\left(\frac{-E_{i2}}{k_{\mathrm{B}}T}\right)}\right]},$
(12)
where the upper limit of the sum depends on the number of mono-exponential
decays in the fitting model used for the deconvolution of the TRPL signal.
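A sketch of the separation procedure of Eqs. (6)–(9) is given below (assuming,
as stated above, purely radiative recombination at 15 K; the 15 K point of
$\tau_{\mathrm{NR}}$ diverges by construction and is excluded from the fits):

```python
import numpy as np
from scipy.optimize import curve_fit

kB = 0.08617  # Boltzmann constant in meV/K

def split_lifetimes(tau1, I_PL):
    # Eq. (6): tau_R = (I0 / I_PL(T)) * tau1, with I0 = I_PL at 15 K
    tau_R = (I_PL[0] / I_PL) * tau1
    # Eq. (7) inverted: 1/tau_NR = 1/tau1 - 1/tau_R (diverges at 15 K)
    with np.errstate(divide="ignore"):
        tau_NR = 1.0 / (1.0 / tau1 - 1.0 / tau_R)
    return tau_R, tau_NR

def tau_nr_model(T, tau_NR1, E1, tau_NR2, E2):
    # Eq. (8): two thermally activated non-radiative channels
    rate = (np.exp(-E1 / (kB * T)) / tau_NR1
            + np.exp(-E2 / (kB * T)) / tau_NR2)
    return 1.0 / rate

def tau_r_model(T, tau_R0, tau_RT, T_C):
    # Eq. (9): exponentially increasing radiative lifetime
    return tau_R0 + tau_RT * (np.exp(T / T_C) - 1.0)

# given measured arrays T (K), tau1 (ns) and I_PL (arb. u.), one would call
# tau_R, tau_NR = split_lifetimes(tau1, I_PL), followed by
# curve_fit(tau_nr_model, T[1:], tau_NR[1:]) and curve_fit(tau_r_model, T, tau_R)
```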
Table 3: Summary of the TRPL Arrhenius-like fits using Eq. (12). The displayed values are obtained with an accuracy better than $10^{-2}\%$.
sample | process | $E_{1}$ [meV] | $\tau_{\mathrm{NR}}^{1}$ [ns] | $E_{2}$ [meV] | $\tau_{\mathrm{NR}}^{2}$ [ns] | $\tau_{\mathrm{R}}^{0}$ [ns] | $\tau_{\mathrm{R}}^{T}$ [ns] | $T_{C}$ [K]
---|---|---|---|---|---|---|---|---
$\mathrm{S}_{\mathrm{w/o}}$ | $\tau_{1}$ | 16.7 | 0.234 | 441.4 | 0.020 | 64.62 | 0.00 | 8.18
$\tau_{2}$ | 16.7 | 0.234 | 339.6 | 0.059 | 14.66 | 0.30 | 49.3
$\mathrm{S}_{\mathrm{with}}$ | $\tau_{1}$ | 5.2 | 36.18 | 64.3 | 0.087 | 96.45 | 0.820 | 21.6
$\tau_{2}$ | – | – | 57.3 | 0.050 | 14.61 | 8.18 | 62.2
$\tau_{3}$ | 10.0 | 4.97 | – | – | 8.58 | 3.13 | 56.6
$\mathrm{S}_{\mathrm{cap}}$ | $\tau_{1}$ | 23.5 | 0.237 | 591.3 | 47.73 | 82.62 | 0 | 36.9
$\tau_{2}$ | 25.4 | 0.090 | 284.7 | 0.111 | 13.17 | 1.95 | 47.1
$\tau_{3}$ | 8.1 | 9.05 | – | – | 3.10 | 0.106 | 14.5
We attributed the slowest process $\tau_{1}$ to the recombination of DAP and
other crystalline defects; it follows the same trend with increasing $T$ for
$\mathrm{S}_{\mathrm{w/o}}$ and $\mathrm{S}_{\mathrm{cap}}$, i.e., it
decreases over two orders of magnitude from 100 ns to 1 ns. Due to the larger
amount of defects, $\tau_{1}$ of $\mathrm{S}_{\mathrm{with}}$ decreases only
by one order of magnitude, to 20 ns, which significantly changes the character
of the radiative lifetime: it increases exponentially with $T$ from
$\tau_{\mathrm{R}}^{0}=96.45$ ns at 15 K due to thermalization of the defects.
In comparison, for $\mathrm{S}_{\mathrm{w/o}}$ and $\mathrm{S}_{\mathrm{cap}}$
we find the radiative lifetime to be constant at 64.62 ns and 82.62 ns,
respectively.
The radiative time constant $\tau_{\mathrm{R}}$ of the faster process
$\tau_{2}$ increases exponentially with $T$ across all samples, from
$\tau_{2}=$14 ns. This increase is most likely caused by impurity
thermalization, characterized by $T_{\mathrm{C}}$ ($T_{\mathrm{C}}\approx 50$
K corresponds to an energy close to the disorder energy determined for these
samples in Steindl2019_PL). While no material exchange between the GaAs IL and
the QD constituents occurs for sample $\mathrm{S}_{\mathrm{w/o}}$ by design,
confirmed by the fact that the amplitude $\tau_{\mathrm{R}}^{T}$ of the
thermalization-induced change of $\tau_{\mathrm{R}}$ is almost zero, after QD
formation In-Ga redistribution occurs, as previously reported in Refs.
Steindl2019_PL; Gajjela2020, leading to an almost thirty-fold increase of
$\tau_{\mathrm{R}}^{T}$ (sample $\mathrm{S}_{\mathrm{with}}$). The
redistribution can be suppressed by overgrowing the structure with a thin GaSb
capping layer (see the similarity between the panels of
$\mathrm{S}_{\mathrm{cap}}$ and $\mathrm{S}_{\mathrm{w/o}}$ in Fig. 5), which
for a thickness of $\sim$1 ML still leads to an approximately six-times larger
$\tau_{\mathrm{R}}^{T}$ than that for sample $\mathrm{S}_{\mathrm{w/o}}$,
since As-Sb intermixing between the QDs and the capping takes place, resulting
in an increase of the Sb content in the QDs Steindl2019_PL. It can be assumed
that the importance of this effect would be reduced for a thicker Sb layer,
because the capping might then be more robust; yet that can also result in
pushing the wavefunctions out of the QD body and in a corresponding change of
the type of spatial band alignment, as previously reported for similar dots
grown on GaAs substrates in Refs. Klenovsky_IOP2010; Klenovsky2010;
Klenovsky2015.
The fastest process $\tau_{3}$ was considered only for QD samples
$\mathrm{S}_{\mathrm{with}}$ and $\mathrm{S}_{\mathrm{cap}}$. The parameter
$\tau_{3}$ of the sample $\mathrm{S}_{\mathrm{with}}$ decreases from $\sim 10$
ns (at 15 K) to 6 ns (at 70 K). Since the value of the lifetime is close to
$\tau_{2}$, we assume that the electrons are localized preferably at the QD/IL
interface. The radiative part $\tau_{\mathrm{R}}$ is quenched with
$T_{\mathrm{C}}=56.6$ K, corresponding to thermalization energy of 4.9 meV,
which is in good agreement with 4.5 meV, previously extracted from the thermal
red shift Steindl2019_PL. The presence of additional Sb during QD formation
and ripening, which here would translate into the growth of the GaSb cap right
after the QD formation, has very likely led to the formation of smaller and
more homogeneous QDs, as a result of the Sb surfactant effect, as also pointed
out by Sala et al. in Refs. Sala2016; t_sala. This process could have, thus,
led to a better electron-wavefunction localization in the QD body, resulting
in a shorter decay time $\tau_{3}$ of $\approx 3$ ns (at 15 K and decreasing
to 2 ns at 70 K) for $\mathrm{S}_{\mathrm{cap}}$. This is in agreement with
the 2.5 ns observed for (InGa)(AsSb)/GaAs/GaP QDs grown with higher Sb flow
Sala2016. This points to the fact that growing a thin GaSb cap above the QDs
and using a higher Sb flow before QD formation are both efficient ways to
affect the QD structural properties and possibly increase the Sb content in
the QDs Gajjela2020.
The transition's $\tau_{R}$ is thermally quenched with $T_{\mathrm{C}}=14.5$ K
(corresponding to 1.3 meV, in good agreement with the 1.4–2.0 meV extracted
from $T$-resolved PL experiments Steindl2019_PL) into disordered centers, most
likely at the QD/IL interface. The analysis in panels (a) and (b) of Fig. 5
shows that the PL of the sample $\mathrm{S}_{\mathrm{w/o}}$ is thermally
quenched at low $T$ via phonon excitation from the $X$-valley in GaAs, with an
activation energy $E_{1}=16.7$ meV, in good agreement with the energies of
10–12 meV extracted from steady-state PL Steindl2019_PL and already observed
for GaAs/GaP QDs t_dagostar, and at larger $T$ via unipolar escape of
electrons from the $X$-valley of the GaAs layer and of GaP to the $L$-valley
in GaP, with activation energies of $E_{2}=441.4$ meV (461 meV determined from
8-band $\mathbf{k\cdot p}$) and $E_{2}=339.6$ meV (370 meV from 8-band
$\mathbf{k\cdot p}$) Steindl2019_PL, respectively.
From the analysis of the non-radiative lifetimes in panels (c)–(e) of Fig. 5,
we identify that the emission from sample $\mathrm{S}_{\mathrm{with}}$ at low
$T$ is thermally quenched via electron thermalization from $X_{xy}$ in the IL
to, most likely, nitrogen complexes present in the structure from the GaP
growth Skazochkin_GaPtraps, with escape energies of $8\pm 2$ meV, in good
agreement with Ref. ioffe. For larger temperatures, the dominant quenching
mechanism, with escape energies of $\sim$60 meV, is most likely the escape of
electrons from the $X_{xy}$-valley in the IL to the $X$-valley in the bulk
(41 meV determined from 8-band $\mathbf{k\cdot p}$, $43\pm 7$ meV observed in
Ref. Abramkin2019_GaAsonGaP). Having a lower eigenenergy and many available
electron states, this escape process is preferred over two concurrently
possible ones with similar energies – the escape of electrons from the
$X_{xy}$-valley in the IL to the $L$-valley in the IL (87 meV) and the escape
of $L$-electrons in the QDs to the bulk GaP (46 meV).
Also for the sample $\mathrm{S}_{\mathrm{cap}}$ we identify, using the same
analysis, in panels (f)–(h) of Fig. 5, a shallow impurity ($8.1$ meV), phonon
emission ($\approx 25$ meV), the escape of electrons from the IL to the GaP
substrate (284.7 meV; 245 meV from PL Steindl2019_PL, 288 meV from 8-band
$\mathbf{k\cdot p}$), and hole escape from the IL to the bulk ($\approx 590$
meV; 670 meV from 8-band $\mathbf{k\cdot p}$), see Fig. 15 in Steindl2019_PL.
Note that we attribute the increase of the activation energy corresponding to
phonon emission to the As-Sb intermixing between the GaAs IL and the GaSb
capping layer reported above. Calculating the activation energies with the
$\mathbf{k\cdot p}$ model, i.e., without atomistic resolution, cannot capture
effects such as intermixing or material redistribution on the surface of the
QDs, which create a concentration gradient leading to local strain and
potential changes affecting the escape of carriers; therefore, a slight
discrepancy between experiment and simulation is expected.
## VII Conclusions and outlook
We performed the first detailed analysis to date of the carrier dynamics of
(InGa)(AsSb)/GaAs/GaP QDs, by means of energy- and temperature-resolved
time-resolved photoluminescence. Based on the steady-state PL measurements
carried out in our previous work Steindl2019_PL as a reference, we developed a
spectral shape model taking into account phononic, impurity-related, and
thermalization effects to address the four emission bands expected from
${\bf k}\cdot{\bf p}$ calculations Klenovsky2018_TUB. The application of the
analytical models shows similarities across the samples studied here,
originating from the GaAs interlayer and from defects in the GaP substrate.
Specifically, the transitions are zero-phonon and phonon-assisted transitions
of electrons in the GaAs interlayer from the $X_{xy}$ valley to the $\Gamma$
valence band, with decay times around 15 ns, and donor-acceptor pair
recombination in GaP decaying extremely slowly (up to a few $\mu$s). Moreover,
we observe type-I emission from the QDs, which is faster than 10 ns and whose
recombination times vary across the studied range, most likely due to the
coexistence of momentum-direct and momentum-indirect transitions and the
compositional variations of individual dots. Finally, we want to point out the
spectral shift of the type-I emission from the GaAs interlayer and QD bands
caused by charge potentials from defects created during QD formation. This
shift is evident both in the pump-power-resolved photoluminescence and in the
time-domain study of the emission.
Our data suggest that epitaxial growth strategies, such as the overgrowth with
a thin GaSb cap, can be employed to efficiently increase the Sb content in the
QDs. Such an increase of the Sb concentration in the QDs enhances the carrier
confinement and will subsequently lead to an increase of the QD storage time,
which is of utmost importance for the implementation of such QDs in
nano-memory devices
Nowozin2013; Bimberg2011_SbQDFlash. However, the use of Sb, and its potential
partial segregation Gajjela2020; Desplanque_2017, may lead to the formation of
additional point defects, which could affect the storage time by increasing
the capture cross-section t_nowozin. Therefore, the development of truly
defect-free Sb-rich QDs on top of GaP is key for the further improvement of
QD-Flash nano-memories. In this respect, further epitaxial engineering
techniques are needed. However, considering the present study and our
previous work Steindl2019_PL, we have demonstrated that overgrowing such QDs
with a GaSb capping layer is a promising epitaxial method to increase the Sb
content in (InGa)(AsSb) QDs and to manipulate their carrier dynamics.
Furthermore, owing to their naturally small fine-structure splitting (FSS)
Klenovsky2018, such Sb-rich dots are promising candidates for entangled-photon
sources, potentially operating even above cryogenic temperatures thanks to the
Sb-increased electron confinement.
The use as entangled-photon, as well as single-photon, sources will require
future effort in the optimization of optical efficiency by both sample quality
and cavity enhancement Emberger2013. Even though the growth may be
challenging, these structures have benefits, such as small size and improved
compositional homogeneity compared to conventional SK QDs Sala2018; t_sala;
Gajjela2020. Moreover, considering the negligible lattice mismatch between GaP
and Si, they can serve as a CMOS-compatible quantum platform. Finally, since
the incorporation of Sb during growth leads to (i) a tunable quantum
confinement of the dots Klenovsky2018_TUB and (ii) the possibility to reduce
the amount of charge trap states originating from crystal-structure
imperfections, we suppose our dots might be superior to the recently proposed
SiGe quantum dots Rauter_ACSPhotonic2018_Ge-DEQD; Murphy2021.
## VIII Acknowledgements
P.S. is a Brno Ph.D. Talent Scholarship holder, funded by the Brno City
Municipality. E.M.S. and D.B. thank the DFG (Contract No. BI284/29-2). A part
of the work was carried out under the project CEITEC 2020 (LQ1601) with
financial support from the Ministry of Education, Youth and Sports of the
Czech Republic under the National Sustainability Programme II. Project
CUSPIDOR has received funding from the QuantERA ERA-NET Cofund in Quantum
Technologies implemented within the European Union’s Horizon 2020 Programme.
In addition, this project has received national funding from the MEYS and
funding from European Union’s Horizon 2020 (2014-2020) research and innovation
framework programme under grant agreement No 731473. The work reported in this
paper was (partially) funded by project EMPIR 17FUN06 Siqust. This project has
received funding from the EMPIR programme co-financed by the Participating
States and from the European Union’s Horizon 2020 research and innovation
programme. This works was also partially funded by Spanish MICINN under grant
PID2019-106088RB-C3 and by the MSCA-ITN-2020 Funding Scheme from the European
Union’s Horizon 2020 programme under Grant agreement ID: 956548.
## IX Appendix
### IX.1 Repumping
Figure A1: TRPL decay signal with $\tau=350$ ns (blue for the 1st window, red
for the $2^{\mathrm{nd}}$) after excitation (black), shown in two consecutive
temporal windows (200 ns). Gray symbols represent the compound signal from the
two temporal windows. The arrow points to the re-pumped signal above the
background level (including dark counts), due to the contribution to the
measured signal from the previous temporal window.
Because some of the observed transitions decay rather slowly and do not
completely disappear in one temporal window, we take into account re-pumping
of the slow TRPL component $\tau_{1}$ from previous pulses, which leads to a
“background” increase as can be seen in Fig. A1, complicating a proper
extraction of the background signal for individual wavelengths and correct
time-constant extraction. This issue is overcome in TDPL by disentangling
individual transitions by line-shape model fitting, where the slowest decay is
assigned to (mainly non-radiative) pair recombination processes of donor-
acceptor pairs (DAP) in GaP Dean_PR68; Dean_1970.
|
# Mean Trajectories of Multiple Tracking Points on A Brownian Rigid Body:
Convergence, Alignment and Twist
Jianping Xu <EMAIL_ADDRESS> The University of Texas at Austin, Austin, Texas
78712, USA
###### Abstract
We consider the mean trajectories of multiple tracking points on a rigid body
that conducts Brownian motion in the absence and presence of an external force
field. Based on a naïve representation of the rigid body as a polygon or
polyhedron, in which hydrodynamic interactions are neglected, we study the
Langevin dynamics of these Brownian polygons and polyhedra. A constant force,
a harmonic force and an exponentially decaying force are investigated as
examples. In two-dimensional space, depending on the magnitude and form of the
external force and on the isotropy or anisotropy of the body, the mean
trajectories of these tracking points can exhibit three regimes of
interaction: convergence, where the mean trajectories converge to either a
point or a single trajectory; alignment, where the mean trajectories juxtapose
in parallel; and twist, where the mean trajectories twist and intertwine,
forming a plait structure. Moreover, we show that in general a rigid body can
sample these regimes and transition between them, and that its Brownian
behavior can be modified during such a transition. Notably, in going from a
polygon in two-dimensional space to a polyhedron in three-dimensional space,
the alignment and twist regimes disappear and only the convergence regime
survives, due to the two additional rotational degrees of freedom in
three-dimensional space.
###### pacs:
05.40.Jc, 05.10.Gg
## I Introduction
A rigid body Favro (1960); Fernandes and de la Torre (2002); Delong et al.
(2015) that conducts Brownian motion can translate and rotate in space. Most
interestingly, in scenarios where the particle is screwlike Brenner (1965,
1967), L-shaped Kümmel et al. (2013), biaxial Wittkowski and Löwen (2012) and
ellipsoidal Han et al. (2006), etc., translation and rotation can couple
Brenner (1965, 1967); Sun et al. (2008); Chakrabarty et al. (2013, 2016,
2014); Han et al. (2006), leading to a rich class of trajectory patterns,
e.g., helical motion Wittkowski and Löwen (2012), circular motion Kümmel et
al. (2013). In recent years, apart from exploring these novel dynamic
behaviors arising from rigid-body Brownian motion, general models have been
built, such as Brownian Dynamics Ermak and McCammon (1978); Fernandes and de
la Torre (2002), Stokesian Dynamics Brady and Bossis (1988); Fiore and Swan
(2019), Fluctuating Hydrodynamics Sharma and Patankar (2004) and the Langevin
dynamics of arbitrarily shaped particles Sun et al. (2008); Delong et al.
(2015). Usually in these models the effects of non-stochastic factors such as
hydrodynamic interactions and particle geometry enter the displacement
equations as a resistance tensor Sun et al. (2008) or, equivalently, a
mobility tensor Delong et al. (2015), while stochasticity is contained in the
force and torque terms. Subsequently, the trajectory of a tracking point (TP)
on the body, e.g., the center of mass (CoM) or the center of friction (CoF,
also known as the center of hydrodynamic stress Chakrabarty et al. (2013)), is
generated. Hence the particle is still represented by a zero-volume TP rather
than a finite-volume body.
However, to a large extent, the analysis of a single trajectory of a single TP
could indeed provide rich information regarding the particle’s physical
properties and its interactions with the environment. Methods like single
trajectory analysis Tejedor et al. (2010); Holcman et al. (2015) and power
spectral analysis Schnellbächer and Schwarz (2018); Sposini et al. (2019) can
be utilized to extract useful information of the particle, e.g., diffusion
coefficient Michalet and Berglund (2012) and mean squared displacement
Michalet (2010). Most recently, an exciting new model based on information
theory has been proposed to infer the external force field from a stochastic
trajectory Frishman and Ronceray (2020). Indeed, the toolkit one can use to
decipher a trajectory is expanding fast. Nevertheless, when particle size
matters, the trajectory traced by a single TP does not reveal the orientation
of the body along its path and is thus not enough to characterize the
particle's state. Even worse, a trajectory recorded by tracking an
inappropriately chosen TP could contain errors.
It is therefore natural to consider trajectories of multiple TPs on a rigid
body, because this helps us understand both the differences and the
commonalities among different TPs. Since the object is Brownian, it is
necessary to consider the mean trajectory. Note that to obtain the "mean
trajectory" of a designated TP, one must consider an ensemble of identical
bodies and average over the ensemble. We expect to find simple but
characteristic regimes of interaction among the mean trajectories of different
tracking points. To achieve this, we adopt an extremely simplified
representation - a polygon (in 2D) or a polyhedron (in 3D). As shown in Fig.
1, each vertex of the polygon/polyhedron is a mass point with mass $m_{i}$ and
friction coefficient $\xi_{i}$ ($1\leq i\leq n$, $n$ being the number of
vertices). Vertex $i$ experiences a thermal fluctuation force
$\delta\mathbf{F}_{i}$, a friction force $\mathbf{f}_{i}$ exerted by the
fluid, and an external force $\mathbf{F}_{i}$. Edges are assumed rigid,
massless and non-interacting with the environment. Hydrodynamic interactions
among vertices (which would be mediated by the fluid) are also neglected.
Consequently, this leads to a simple Langevin equation system that describes
translation and rotation in space. This setup resembles the dot multisubunit
representation De La Torre and Bloomfield (1978) and the bead representation
Delong et al. (2015); Sun et al. (2008), although in many previous studies the
hydrodynamic interactions among beads are preserved to make the cases more
realistic. However, our simplified model turns out to be not too simplistic
and is able to generate rich dynamic behaviors. Besides, as shown in the
Appendix, the mean displacement curve generated from this model agrees well
with experimental and theoretical results for the boomerang colloidal particle
Chakrabarty et al. (2013).
Figure 1: (Color Online) Schematics of the polygon/polyhedron representation.
Each labeled vertex traces a trajectory in space as the body moves.
The paper is organized as follows. In Section II, the construction of the
Langevin dynamics model of the Brownian polygon/polyhedron system is
presented, together with the details of the computation. In Section III, the
convergence regime for motion in 2D space is identified and modeled.
Section IV, the alignment regime for motion in 2D space is identified and
modeled. In Section V, the twist regime for motion in 2D space is identified
and modeled. In Section VI, we discuss the transition between regimes and the
modification of Brownian behavior in the transition. In Section VII, we extend
the 2D investigations to 3D. Finally, concluding remarks are presented in
Section VIII.
## II Model Construction and Computation
Denote the position vector of the i-th vertex as $\mathbf{r}_{i}$. It is
customary to define various “centers” Chakrabarty et al. (2013) of the body.
In our context, first, the geometric center (GC), whose position vector is
$\mathbf{r}_{c}=\frac{1}{n}\sum_{i=1}^{n}\mathbf{r}_{i}$. Second, CoM, whose
position vector is
$\mathbf{r}_{m}=\frac{1}{\sum_{i=1}^{n}m_{i}}\sum_{i=1}^{n}m_{i}\mathbf{r}_{i}$.
Third, CoF, whose position vector is
$\mathbf{r}_{f}=\frac{1}{\sum_{i=1}^{n}\xi_{i}}\sum_{i=1}^{n}\xi_{i}\mathbf{r}_{i}$.
Depending on the distribution of mass and friction, these three centers can
separate or coincide. Then, denote the vectors joining from these centers to
vertex i as $\mathbf{R}^{c}_{i}(=\mathbf{r}_{i}-\mathbf{r}_{c})$,
$\mathbf{R}^{m}_{i}(=\mathbf{r}_{i}-\mathbf{r}_{m})$ and
$\mathbf{R}^{f}_{i}(=\mathbf{r}_{i}-\mathbf{r}_{f})$, respectively. These
vectors have the properties
$\sum_{i=1}^{n}\mathbf{R}^{c}_{i}=\mathbf{0},\quad\sum_{i=1}^{n}m_{i}\mathbf{R}^{m}_{i}=\mathbf{0},\quad\sum_{i=1}^{n}\xi_{i}\mathbf{R}^{f}_{i}=\mathbf{0},$
(1)
which can easily be shown to be true. These vectors are attached to the body
and can translate and rotate in the lab frame. Motion of the
polygon/polyhedron is decomposed into translation of the CoM and rotation
relative to the CoM. Denote vertex $i$'s velocity in the lab frame as
$\mathbf{v}_{i}$. Obviously,
$\mathbf{v}_{i}=\mathbf{v}_{m}+\bm{\omega}\times\mathbf{R}^{m}_{i}$, where
$\mathbf{v}_{m}$ is the velocity of the CoM and $\bm{\omega}$ the angular
velocity. The Newtonian mechanics of $\{\bm{\omega},\mathbf{v}_{m}\}$ Sun et
al. (2008) in the polygon/polyhedron picture reads,
$\displaystyle\big{(}\sum_{i=1}^{n}m_{i}|\mathbf{R}^{m}_{i}|^{2}\big{)}\frac{d\bm{\omega}}{dt}$
$\displaystyle=\sum_{i=1}^{n}\mathbf{R}^{m}_{i}\times(\mathbf{f}_{i}+\delta\mathbf{F}_{i}+\mathbf{F}_{i}),$
(2) $\displaystyle\big{(}\sum_{i=1}^{n}m_{i}\big{)}\frac{d\mathbf{v}_{m}}{dt}$
$\displaystyle=\sum_{i=1}^{n}(\mathbf{f}_{i}+\delta\mathbf{F}_{i}+\mathbf{F}_{i}),$
where $\sum_{i=1}^{n}m_{i}|\mathbf{R}^{m}_{i}|^{2}$ is the moment of inertia.
$|\mathbf{R}^{m}_{i}|$ is the length of the vector and stays constant.
$\delta\mathbf{F}_{i}$ obeys
$\langle\delta\mathbf{F}_{i}(t)\delta\mathbf{F}_{j}(t^{\prime})\rangle=2\xi_{i}kT\mathbf{B}\delta_{ij}\delta(t-t^{\prime}),$
(3)
where $\mathbf{B}$ is an identity matrix, $k$ is the Boltzmann constant, $T$
is the temperature, $\delta_{ij}$ is the Kronecker sign, $\delta(\cdot)$ is
the Dirac delta function and $\langle\cdot\rangle$ is the ensemble average. In
general, Eq. (3) could be written as
$\langle\delta\mathbf{F}_{i}(t)\delta\mathbf{F}_{j}(t^{\prime})\rangle=2kT\bm{\Xi}\delta_{ij}\delta(t-t^{\prime})$
Sun et al. (2008), where $\bm{\Xi}$ is a resistance tensor. In our simple
representation, $\bm{\Xi}$ reduces to $\xi_{i}\mathbf{B}$. The factor
$\delta_{ij}$ assumes that the thermal fluctuation at one vertex is
uncorrelated with that at another vertex, which is reasonable.
$\mathbf{f}_{i}=-\xi_{i}\mathbf{v}_{i}=-\xi_{i}(\mathbf{v}_{m}+\bm{\omega}\times\mathbf{R}^{m}_{i})$.
Substituting $\mathbf{f}_{i}$ into Eq. (2), after simple algebraic
manipulations one arrives at the Langevin equations,
$\displaystyle\big{(}\sum_{i=1}^{n}m_{i}|\mathbf{R}^{m}_{i}|^{2}\big{)}\frac{d\bm{\omega}}{dt}=$
$\displaystyle-\big{(}\sum_{i=1}^{n}\xi_{i}|\mathbf{R}^{m}_{i}|^{2}\big{)}\bm{\omega}-\sum_{i=1}^{n}\mathbf{R}^{m}_{i}\times\xi_{i}\mathbf{v}_{m}+\sum_{i=1}^{n}\mathbf{R}^{m}_{i}\times\delta\mathbf{F}_{i}+\sum_{i=1}^{n}\mathbf{R}^{m}_{i}\times\mathbf{F}_{i},$
(4)
$\displaystyle\big{(}\sum_{i=1}^{n}m_{i}\big{)}\frac{d\mathbf{v}_{m}}{dt}=$
$\displaystyle-\big{(}\sum_{i=1}^{n}\xi_{i}\big{)}\mathbf{v}_{m}-\sum_{i=1}^{n}\xi_{i}\bm{\omega}\times\mathbf{R}^{m}_{i}+\sum_{i=1}^{n}\delta\mathbf{F}_{i}+\sum_{i=1}^{n}\mathbf{F}_{i}.$
An additional equation for $\mathbf{R}^{m}_{i}$ comes from the rigidity of the
body. One could write
$\mathbf{R}^{m}_{i}-\mathbf{R}^{m}_{i}(0)=\Delta\mathbf{r}_{i}-\Delta\mathbf{r}_{m}=\int_{0}^{t}\mathbf{v}_{i}dt^{\prime}-\int_{0}^{t}\mathbf{v}_{m}dt^{\prime}=\int_{0}^{t}\bm{\omega}\times\mathbf{R}^{m}_{i}dt^{\prime}$,
where $\Delta\mathbf{r}$ is the displacement in lab frame. Equivalently,
$\frac{d\mathbf{R}^{m}_{i}}{dt}=\bm{\omega}\times\mathbf{R}^{m}_{i}.$ (5)
Eqs. (4-5) describe the Langevin dynamics for a polygon/polyhedron. Several
comments about Eqs. (4-5):
a): Different polygon/polyhedron geometries result in different sets of
vectors $\{\mathbf{R}^{m}_{i}\}_{1\leq i\leq n}$. First, these vectors enter
the moments of inertia/friction and impact the relaxation rate of the angular
velocity. Second, they participate in the torques acting on the body, as well
as in the translation-rotation coupling terms. Third, the geometry impacts the
relative positions of the GC, CoM and CoF, which is important for the
mean-trajectory patterns. In the numerical investigations we will study
different geometries.
b): Stochasticity is introduced into the system through
$\delta\mathbf{F}_{i}$. $\delta\mathbf{F}_{i}=\sqrt{2\xi_{i}kT}\mathbf{W}(t)$,
where $\mathbf{W}(t)$ is a vector, each component of which is a Gaussian white
noise. Stochasticity participates in driving the evolution of the velocity and
angular velocity. It competes with the deterministic force to shape the
evolution of a Brownian trajectory Frishman and Ronceray (2020).
c): Eq. (4) explicitly contains translation-rotation coupling Sun et al.
(2008); Chakrabarty et al. (2013, 2016, 2014); Han et al. (2006) terms. Under
such coupling, how the body translates influences how it rotates, and vice
versa. However, if $\xi_{i}/m_{i}=\xi_{j}/m_{j}|_{1\leqslant i\neq j\leqslant
n}$, clearly
$\sum_{i=1}^{n}\xi_{i}\mathbf{R}^{m}_{i}=const.\cdot\sum_{i=1}^{n}m_{i}\mathbf{R}^{m}_{i}=\mathbf{0}$
(Eq. (1)), hence the coupling terms vanish. Under such condition, translation
and rotation decouple. This forms a criterion of categorizing different
polygons/polyhedra as translation-rotation coupled (TRC) or non-TRC, as shown
in Tab. 1. Since $\{\xi_{i}\}_{1\leqslant i\leqslant n}$ measures the
interaction strength between the vertices and the carrying medium, we can also
categorize polygons/polyhedra based on how the $\xi_{i}$'s are distributed. We
define a body here to be isotropic if $\xi_{i}=\xi_{j}|_{1\leqslant i\neq
j\leqslant n}$ holds, and anisotropic otherwise. These two criteria overlap
and give a finer description of the body. Notably, it is readily verifiable
that for Eq. (4) all non-TRC bodies should behave similarly, regardless of
isotropy or anisotropy, which only makes a difference in the relaxation rate.
Therefore, for simplicity it suffices to investigate isotropic non-TRC,
isotropic TRC and anisotropic TRC bodies under different geometries.
Table 1: Categorization of a polygon/polyhedron. $A=\\{\xi_{i}/m_{i}=\xi_{j}/m_{j}|_{1\leqslant i\neq j\leqslant n}\mathrm{\,is\,true}\\}$, $B=\\{\xi_{i}=\xi_{j}|_{1\leqslant i\neq j\leqslant n}\mathrm{\,is\,true}\\}$.

| $A$ | ${}^{\neg}A$
---|---|---
$B$ | Isotropic non-TRC | Isotropic TRC
${}^{\neg}B$ | Anisotropic non-TRC | Anisotropic TRC
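To make the two criteria concrete, the following minimal Python sketch (our own illustration, not code accompanying the paper; the helper name `classify_body` is hypothetical) classifies a body from its vertex masses and friction coefficients:

```python
import numpy as np

def classify_body(m, xi, tol=1e-12):
    """Classify a polygon/polyhedron per Tab. 1.

    Non-TRC requires xi_i/m_i to be identical for all vertices, so that
    sum_i xi_i R_i^m = const * sum_i m_i R_i^m = 0 and the translation-
    rotation coupling terms in Eq. (4) vanish; isotropic requires all
    xi_i to be identical."""
    m, xi = np.asarray(m, dtype=float), np.asarray(xi, dtype=float)
    non_trc = np.ptp(xi / m) < tol   # criterion A: uniform xi_i/m_i
    iso = np.ptp(xi) < tol           # criterion B: uniform xi_i
    return ("Isotropic " if iso else "Anisotropic ") + \
           ("non-TRC" if non_trc else "TRC")

# The triangles of Fig. 2 (a) and (b):
print(classify_body([1, 1, 1], [0.28, 0.01, 0.01]))   # Anisotropic TRC
print(classify_body([0.5, 2, 0.5], [0.1, 0.1, 0.1]))  # Isotropic TRC
```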
d): Given a geometry, $\bm{\omega}(t)$, $\mathbf{v}_{m}(t)$, and
$\mathbf{R}^{m}_{i}(t)$ are solved for numerically. Initially, we set
$\bm{\omega}(0)=\mathbf{0}$ and $\mathbf{v}_{m}(0)=\mathbf{0}$;
$\mathbf{R}^{m}_{i}(0)$ depends on the initial placement of the body in the
coordinates. These vectors are then evolved according to Eqs. (4-5). Time is
discretized as $t=N\Delta t$, where $N$ indexes the time step and $\Delta t$
is the step size. The simulation is carried out in unitless fashion, with
$kT=1$ and $\Delta t=0.1$. At each time step, we sample the components of
$\mathbf{W}(N)$ in the $x$, $y$ (for 2D) or $x$, $y$, $z$ (for 3D) directions
independently from a Gaussian probability density function of zero mean and
standard deviation $\Delta t$, which gives $\delta\mathbf{F}_{i}(N)$. A
simplified representation of the numerical iteration is as follows,
$\displaystyle\bm{\omega}(N)=\bm{\omega}(N-1)+\Delta t\cdot
f\big{(}\bm{\omega}(N-1),\mathbf{R}^{m}_{i}(N-1),\mathbf{v}_{m}(N-1),\delta\mathbf{F}_{i}(N-1),\mathbf{F}_{i}(N-1)\big{)};$
$\displaystyle\mathbf{v}_{m}(N)=\mathbf{v}_{m}(N-1)+\Delta t\cdot
g\big{(}\bm{\omega}(N-1),\mathbf{R}^{m}_{i}(N-1),\mathbf{v}_{m}(N-1),\delta\mathbf{F}_{i}(N-1),\mathbf{F}_{i}(N-1)\big{)};$
$\displaystyle\mathbf{R}^{m}_{i}(N)=\mathbf{R}^{m}_{i}(N-1)+\Delta t\cdot
h\big{(}\bm{\omega}(N-1),\mathbf{R}^{m}_{i}(N-1)\big{)};$ (6)
where $f()$, $g()$, $h()$ are the algebraic operations given by Eqs. (4-5) and
$N=1,2,3,4\cdots$. This is an explicit scheme obtained by applying Euler's
method, and it is easy to execute. Such a scheme works well if $\Delta t$ is
small, e.g., $\Delta t=0.1$ in our case. One may also try methods such as
Runge-Kutta, which are computationally more expensive but more tolerant of a
large time step. Given $\bm{\omega}(0)$, $\mathbf{v}_{m}(0)$ and
$\mathbf{R}^{m}_{i}(0)$, Eq. (6) evolves the time series of these quantities.
After obtaining them, one can integrate to get the displacement and hence the
trajectory. To obtain the mean trajectory, one must run the scheme multiple
times and take the ensemble average. In our computations, each mean trajectory
is averaged over 2400 realizations. A sketch of one iteration step is given
below.
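As a concrete illustration, here is a minimal Python sketch of one iteration of Eq. (6) for a 2D polygon (our own code, not the authors'; the function names are hypothetical). The noise is sampled exactly as described above, with each component of $\mathbf{W}$ drawn with standard deviation $\Delta t$:

```python
import numpy as np

def cross_z(a, b):
    # z-component of the 2D cross product a x b
    return a[..., 0] * b[..., 1] - a[..., 1] * b[..., 0]

def euler_step(omega, v_m, R, m, xi, F, dt=0.1, kT=1.0, rng=np.random):
    """One explicit Euler step of Eqs. (4-6) in 2D.

    omega: scalar angular velocity; v_m: CoM velocity, shape (2,);
    R: vectors from CoM to vertices, shape (n, 2);
    m, xi: vertex masses and frictions, shape (n,);
    F: external force on each vertex, shape (n, 2)."""
    # delta F_i = sqrt(2 xi_i kT) W, components of W sampled with std dt
    W = rng.normal(0.0, dt, size=R.shape)
    dF = np.sqrt(2.0 * xi * kT)[:, None] * W
    I = np.sum(m * np.sum(R**2, axis=1))   # moment of inertia about CoM
    M = np.sum(m)                          # total mass
    omega_cross_R = omega * np.stack([-R[:, 1], R[:, 0]], axis=1)
    # Eq. (4): angular and translational accelerations
    domega = (-np.sum(xi * np.sum(R**2, axis=1)) * omega
              - np.sum(cross_z(R, xi[:, None] * v_m))
              + np.sum(cross_z(R, dF + F))) / I
    dv = (-np.sum(xi) * v_m
          - np.sum(xi[:, None] * omega_cross_R, axis=0)
          + np.sum(dF + F, axis=0)) / M
    # Eq. (5): rigid rotation of the CoM-to-vertex vectors
    return omega + dt * domega, v_m + dt * dv, R + dt * omega_cross_R
```

Integrating $\mathbf{v}_{m}$ and $\mathbf{v}_{m}+\bm{\omega}\times\mathbf{R}^{m}_{i}$ over a run gives the CoM and vertex trajectories; repeating the run and averaging (2400 realizations here) gives the mean trajectories.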
## III Convergence of mean trajectories
The simplest situation is a freely roaming polygon in 2D space where no
external force is present ($\\{\mathbf{F}_{i}\\}_{1\leq i\leq
n}=\\{\mathbf{0}\\}$). The polygon is driven only by
$\\{\delta\mathbf{F}_{i}\\}_{1\leq i\leq n}$. Eqs. (4-5) are solved under some
representative geometries. First we consider an equilateral triangle of
anisotropic TRC type, as shown in Fig. 2 (a).
Figure 2: (Color Online) Mean trajectories of vertices and CoM in the absence
of external force. Black dashed lines are initial placement of the polygon.
(a) Equilateral triangle, anisotropic TRC. (b) Equilateral triangle, isotropic
TRC. (c) Arrow-shaped polygon and (d) equilateral hexagon, isotropic non-TRC.
The mean trajectories of the three vertices and CoM till $t=500$ in the
$\langle x\rangle$-$\langle y\rangle$ space are generated. In this case,
$m_{1}=m_{2}=m_{3}=1$, $\xi_{1}=0.28$, $\xi_{2}=\xi_{3}=0.01$. CoM coincides
with GC at $(0,0)$ (black cross in the figure). The CoF by definition is at
$(\frac{0.9}{2},-\frac{0.9\sqrt{3}}{6})$, which is very close to vertex 1's
initial position $(\frac{1}{2},-\frac{\sqrt{3}}{6})$. The results show that
the mean trajectories of the three vertices (red, green, blue circles for
vertices 1, 2, 3) and of the CoM (black solid line) all converge to the CoF.
In Fig. 2 (b), the isotropic TRC case is considered: $m_{1}=m_{3}=0.5$,
$m_{2}=2$, and $\xi_{1}=\xi_{2}=\xi_{3}=0.1$. Again, the CoM is initially
placed at $(0,0)$. The CoF coincides with the GC at $(0,-\frac{\sqrt{3}}{6})$.
The mean trajectories converge to the CoF as well. Figs. 2 (c) and (d) compare
two isotropic non-TRC cases with rather different shapes: (c) an arrow-shaped
polygon and (d) an equilateral hexagon. In (c), vertices 1, 2, 3, 4's initial
positions are $(\frac{1}{4},0)$, $(\frac{3}{4},\frac{\sqrt{3}}{6})$,
$(-\frac{1}{4},0)$, $(\frac{3}{4},-\frac{\sqrt{3}}{6})$, respectively.
$m_{1}=m_{2}=m_{3}=m_{4}=0.75$, $\xi_{1}=\xi_{2}=\xi_{3}=\xi_{4}=0.075$. The
CoF, GC and CoM coincide at $(\frac{3}{8},0)$. However, because of the concave
geometry, these centers fall outside the body Chakrabarty et al. (2013). Again,
all the trajectories converge to CoF. In (d), vertices 1, 2, 3, 4, 5, 6’s
initial positions are $(1,0)$, $(\frac{3}{4},\frac{\sqrt{3}}{4})$,
$(\frac{1}{4},\frac{\sqrt{3}}{4})$, $(0,0)$,
$(\frac{1}{4},-\frac{\sqrt{3}}{4})$, $(\frac{3}{4},-\frac{\sqrt{3}}{4})$.
$m_{1}=m_{2}=m_{3}=m_{4}=m_{5}=m_{6}=0.5$,
$\xi_{1}=\xi_{2}=\xi_{3}=\xi_{4}=\xi_{5}=\xi_{6}=0.05$. CoF, GC and CoM
coincide at $(\frac{1}{2},0)$. In this case, the mean trajectories converge to
CoF as well. This convergence behavior in the absence of external force is
also verified through a semi-analytical solution to Eqs. (4-5), as shown in
the Appendix.
Surprisingly, the above results suggest that the convergence behavior is
invariant with respect to changes in polygon geometry. However, changes in
geometry may lead to observational effects. A concave body's CoF can lie
outside the body (as in Fig. 2 (c)), whereas the CoF of a convex body lies
inside. Therefore the observed Brownian motion of a convex body, such as a
sphere, looks unbiased, while that of a concave body, such as a boomerang
particle, looks biased Chakrabarty et al. (2013).
Given a geometry, we further show that the details of convergence depend on
the size of the system. Based on the case of Fig. 2 (a), we explored how the
size of the system influences the spatial and temporal scale of the
convergence. The MD of the CoM is compared for $l=1$, $l=2$, $l=4$ and $l=8$,
where $l$ is the edge length of the triangle.
Figure 3: (Color Online) (a): MD ($\langle x\rangle$ and $\langle y\rangle$)
of CoM versus time based on Fig. 2(a)’s results under different triangle edge
length $l$. (b): Normalized MD ($\langle x\rangle/l$ and $\langle y\rangle/l$)
of CoM versus time based on Fig. 2(a)’s results under different triangle edge
length $l$.
Fig. 3 (a) shows $\langle x\rangle$ and $\langle y\rangle$ of the CoM versus
time. Evidently, the larger $l$, the higher the plateau. This is reasonable
because the larger the triangle, the wider the separation between the CoM and
CoF. Fig. 3 (b) shows $\langle x\rangle/l$ and $\langle y\rangle/l$ versus
time. The normalized MDs converge to the same plateau for different $l$
values, which confirms that the plateau is proportional to, and bounded by,
the triangle size. It also shows that the larger the triangle, the longer it
takes to reach the plateau, although given enough time the triangle will
eventually equilibrate with the environment. Thereafter, on average, the body
behaves like a point. Notably, there is a zero-size limit: for triangles of
very small size, the spatial and temporal scale of the convergence becomes
negligible, and the MD of any TP on such a body would be zero, as predicted by
the classical theory of Brownian motion.
Because of the early-stage MD increase and the late-stage MD plateau, the mean
squared displacement (MSD) will typically first increase fast and then
converge back to classical Brownian behavior
($\langle\Delta\mathbf{r}^{2}\rangle\propto t$), exhibiting a crossover
Chakrabarty et al. (2013). We refer the reader to works dealing with the MSD
of different TPs, such as Refs. Delong et al. (2015); Chakrabarty et al.
(2013). The primary reason the MSD is not chosen here as the signature for
discriminating regimes is that the information on direction and orientation,
which is important for our study, is lost in the MSD.
Now consider an external force $\\{\mathbf{F}_{i}\\}_{1\leq i\leq n}$. Two
forms of force are considered here: a constant force and a harmonic force. In
the constant force scenario, each vertex feels the same force no matter where
the body is. In the harmonic force case, the forces felt by different vertices
differ because the vertices have distinct distances from the valley of the
harmonic potential. In Fig. 4 (a), the triangle of Fig. 2 (b) is subjected to
a constant force $\mathbf{F}=(0.001,0.001)$
($\mathbf{F}_{1}=\mathbf{F}_{2}=\mathbf{F}_{3}=\mathbf{F}$). The mean
trajectories of the three vertices (red for $m_{1}$, green for $m_{2}$, blue
for $m_{3}$) and that of the CoM (black solid line) till $t=500$ are shown in
the figure. These trajectories converge to a single one: the trajectory of the
CoF translating in $\mathbf{F}$'s direction.
Figure 4: (Color Online) Convergence of mean trajectories of vertices on (a)
an equilateral triangle (isotropic TRC) and (b) an equilateral hexagon
(isotropic non-TRC) under a constant force, and on (c) an equilateral triangle
(isotropic TRC) and (d) an equilateral hexagon (isotropic non-TRC) under a
harmonic force.
As a comparison, in Fig. 4 (b), the hexagon of Fig. 2 (d) is driven by a
constant force. The figure shows the mean trajectories of the six vertices of
the hexagon till $t=1300$. Convergence to a single trajectory is also
observed.
In Fig. 4 (c) and (d), the constant force of Fig. 4 (a) and (b) is replaced by
a harmonic force. Depending on the spring strength, the mean trajectories can
overshoot. Ultimately, however, they converge to a point in the valley.
The results in this section show that for an isotropic body, with or without
external force, the mean trajectories converge. They converge to the CoF, or
to the extrapolation of the CoF in the external force's direction. For an
anisotropic body, the convergence holds if the body is free.
## IV Alignment of the Mean Trajectories
In the alignment regime, the mean trajectories run parallel to one another. If
an anisotropic polygon is subject to an external force, the system can fall
into the alignment regime. For example, in Fig. 5 (a), the triangle of Fig. 2
(a) is subjected to a constant force $\mathbf{F}=(0.001,0.001)$
($\mathbf{F}_{1}=\mathbf{F}_{2}=\mathbf{F}_{3}=\mathbf{F}$). In this case
$\xi_{1}$ is significantly larger than $\xi_{2}$ and $\xi_{3}$, and vertex 1
experiences the highest resistance force in the triangle's motion. This
follows from the friction force
$\mathbf{f}_{i}=-\xi_{i}\mathbf{v}_{i}=-\xi_{i}(\mathbf{v}_{m}+\bm{\omega}\times\mathbf{R}^{m}_{i})$:
here $\xi_{1}$ is 28 times $\xi_{2}$ and $\xi_{3}$, while the magnitudes of
the $\mathbf{v}_{i}$ differ by at most
$|\bm{\omega}|\cdot|\mathbf{R}^{m}_{i}|$. $|\mathbf{R}^{m}_{i}|$ is bounded by
the triangle size, which is $\sim 1$, and according to Fig. 6 the magnitude of
$|\bm{\omega}|$ is smaller than 1 even for a very strong force. Therefore the
effect of the nonuniform $\\{\xi_{i}\\}_{1\leq i\leq n}$ is overwhelming and
$|\mathbf{f}_{1}|\gg|\mathbf{f}_{2}|\simeq|\mathbf{f}_{3}|$. Consequently,
vertices 2 and 3 are pushed to the front with vertex 1 lagging behind. The
polygon reorients to accommodate the force applied in the $(1,1)$ direction.
Figure 5: (Color Online) (a) Alignment of mean trajectories of vertices on an
equilateral triangle (anisotropic TRC) under a constant force. (b) The final
convergence of mean trajectories of vertices on an equilateral triangle
(anisotropic TRC) under a harmonic force.
After this accommodation, the trajectories continue and maintain their
separation from one another.
The situation gets more interesting if we replace the constant force with a
harmonic force, as shown in Fig. 5 (b). Similar to (a), the triangle tends to
align with the force, but before it manages to do so the body shifts to the
other side of the potential well. Then it must reorient toward the force,
which now points in the opposite direction. Depending on the magnitude of the
spring constant, the triangle may touch the center line several times, or just
once. Ultimately the triangle resides on the center line, where the force is
zero. Under zero force, the mean trajectories converge, consistent with the
results in Section III. Consequently, the alignment regime does not appear for
this force. Furthermore, if the spring constant is too small to trap the
triangle, the triangle behaves like a free body and again falls into the
convergence regime. In general, the alignment regime applies when the external
force does not frequently change its direction and persists long enough that
the polygon has time to reorient itself towards the force.
## V Twist of the Mean Trajectories
In Fig. 5 (a), before the triangle manages to align with the force there is a
period during which the force is correcting the triangle's orientation. We
find that as the magnitude of the external force rises, this regime becomes
more salient: unlike the cleanly aligned trajectories, the mean trajectories
become twisted and intertwined, forming a plait structure. We identify this as
the twist regime.
Figure 6: (Color Online) The twist regime developed before alignment is
achieved under different magnitude of the constant external force. (a), (b),
(c), (d) show the mean trajectories traced by vertices 1, 2, 3 and CoM (red
circle, green asterisk, blue cross and black solid line) based on the
simulation in Fig. 5 (a) under forces of $(0.001,0.001)$, $(0.01,0.01)$,
$(0.1,0.1)$, $(1,1)$, respectively. The small panel at the top left corner
shows the whole trajectory till $t=500$. The major panel shows the snapshot of
the twist part of the whole trajectory. As the force magnitude increases, the
mean trajectories traced by different vertices become increasingly
intertwined. (e), (f), (g), (h) display the ensemble average of the triangle's
angular velocity corresponding to (a), (b), (c), (d).
Fig. 5 (a) is selected as the base case. As shown in Figs. 6 (a)-(d), the
magnitude of the constant force rises from 0.001 and 0.01 through 0.1 to 1. As
the force strengthens, the twist of the mean trajectories becomes increasingly
significant. Figs. 6 (e)-(h) show the ensemble average of the angular velocity
corresponding to (a)-(d). $\langle\omega\rangle$ reflects the rotation caused
by the deterministic force. In (a) and (e), the force is small and one
witnesses a very mild undulation of the mean angular velocity, which
ultimately attenuates to zero, meaning the triangle has reached stable
alignment. The triangle first rotates clockwise (negative
$\langle\omega\rangle$), then counterclockwise (positive
$\langle\omega\rangle$). When the force rises to 0.01, in (b) and (f) the
frequency of switching between clockwise and counterclockwise ramps up. The
amplitude of the mean angular velocity also gets higher. Proceeding to 0.1
((c) and (g)) and 1 ((d) and (h)), both the frequency and the amplitude build
up. This is because a large force overcorrects the orientation, and this
overcorrection gets corrected again and again until the triangle reaches the
stable orientation. Under this switching between clockwise and
counterclockwise rotation, the triangle wiggles around the force's direction
and the mean trajectories get twisted and intertwined. Ultimately the triangle
aligns with the force.
Note that twist does not necessarily end in alignment. For example, in the
harmonic force case of Fig. 5 (b), the triangle ends up in the convergence
regime. Which regime the system samples therefore also depends on the form of
the external force. Under realistic conditions, the potential field could be
irregular, such that the force frequently changes its direction and magnitude,
which may keep the system in the twist regime indefinitely.
One interesting aspect that can be investigated with this section's apparatus
is how the Brownian behavior changes as one increases the magnitude of the
external force. One obtains the Brownian angular velocity by subtracting
$\langle\omega\rangle$ from the instantaneous angular velocity $\omega$,
$\Delta\omega=\omega-\langle\omega\rangle.$ (7)
$\Delta\omega$ represents the contribution from the Brownian source; a minimal
computation is sketched below. Typical results for $\Delta\omega$
corresponding to Fig. 6 (e)-(h) are shown in Fig. 7 (a)-(d).
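As a minimal sketch (our own, assuming the instantaneous angular velocities of all realizations are stored in a 2D array), Eq. (7) amounts to subtracting the ensemble mean at each time step:

```python
import numpy as np

def brownian_angular_velocity(omega_traj):
    """omega_traj: shape (n_realizations, n_steps), e.g. collected from
    the Euler scheme of Eq. (6). The ensemble mean <omega>(t) captures
    the deterministic rotation; subtracting it isolates the Brownian
    contribution of Eq. (7)."""
    return omega_traj - omega_traj.mean(axis=0, keepdims=True)
```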
Figure 7: (Color Online) Instantaneous angular velocity contributed from
Brownian source under different magnitudes of external force for anisotropic
triangle.
The figure shows that as the magnitude of the external force rises, the
Brownian fluctuation is excited. After alignment has been achieved (mostly
before $t=200$, as shown in Fig. 6), the Brownian force still causes small
rotations of the triangle even though $\langle\omega\rangle=0$. Again, the
excitation is caused by the correction mechanism: every now and then the
aligned triangle undergoes a random rotation, which the force rejects and
corrects, and if the force is large, the correction is quick. It looks as if
the external force is fueling the Brownian rotation; numerically, this stems
from the translation-rotation coupling in the original equations. Conversely,
going from a large force to a small force to zero force, the Brownian
fluctuation relaxes to low frequency. Therefore, the convergence regime
typically features relaxed Brownian fluctuations.
## VI Transition among Convergence, Alignment and Twist
As partly discussed in previous sections, a polygon can sample and transition
from one regime to another. In this section we consider a force that decays
exponentially in space, under which all three regimes can be experienced along
one path. Based on Fig. 5 (a), the constant force is replaced by a force that
decays exponentially with distance in the $(1,1)$ direction. The mean
trajectories under a small decay rate are shown in Fig. 8 (a).
Figure 8: (Color Online) Mean trajectories traced by vertices of the triangle
in Fig. 5 (a) under an exponentially decaying force in space till $t=500$. (a)
Small decay rate. (b) High decay rate.
The triangle sequentially experiences the twist, alignment and convergence
regimes as the force decays to zero. If, instead, the decay rate is high, one
may only observe the twist-to-convergence transition, as shown in Fig. 8 (b).
In general, the overall magnitude of the force determines the sampling between
the convergence and nonconvergent regimes (alignment and twist), while the
details of the force determine whether alignment or twist is selected. For
example, for a free anisotropic body (zero force), convergence rules. Once a
nontrivial force is established, all three regimes become possible depending
on the details and idiosyncrasies of the field: as shown above, an anisotropic
body ends up in alignment under a constant force, whereas it ends up in
convergence in harmonic and exponentially decaying potentials. Since the
Brownian rotation is correlated with the magnitude of the external force (Fig.
7), if the regime change involves a change in force magnitude, one should
expect a modification of the Brownian behavior. We arbitrarily select a single
triangle from the ensemble and obtain its Brownian angular velocity
$\Delta\omega$ till $t=800$ in the context of Fig. 8, with a slowly decaying
force ($\mathbf{F}=(1,1)\cdot e^{-\frac{0.001}{\sqrt{2}}(x+y+1)}$). The
results are shown in Fig. 9.
Figure 9: (Color Online) The frequency attenuation of instantaneous angular
velocity contributed from Brownian source as the system undergoes a twist to
alignment to convergence transition under a slowly decaying exponential force.
The frequency is seen to attenuate as the regime changes from twist to
alignment to convergence. When the transition between twist and alignment does
not involve a change in the force's magnitude (as in the constant force case),
there is no accompanying frequency change.
## VII From Polygon to Polyhedron
It is well known that 3D Brownian motion differs from its 2D version Han et
al. (2006, 2009); Mukhija and Solomon (2007). As discussed in Refs. Han et al.
(2006, 2009); Mukhija and Solomon (2007), 2D and quasi-2D confinement
significantly increase the friction anisotropy of the particle and affect the
anisotropy in diffusion compared to 3D. In other words, going from 2D to 3D,
one may expect the influence of friction anisotropy to decrease. It is
therefore interesting and meaningful to investigate how the patterns we found
above for 2D polygons change when one instead considers a polyhedron.
We start by considering a simple isotropic equilateral tetrahedron without
external force, as displayed in Fig. 10 (a).
Figure 10: (Color Online) Convergence of mean trajectories of vertices on a
tetrahedron of (a) isotropic non-TRC type in the absence of external force,
(b) anisotropic TRC type in the absence of external force, (c) isotropic
non-TRC type in the presence of a constant force, and (d) anisotropic TRC type
in the presence of a constant force. Red, green, blue, magenta circles are the
trajectories of vertices 1, 2, 3, 4; the black solid line is the trajectory of
the CoM.
The initial coordinates of the four vertices 1, 2, 3, 4 are
$(\frac{1}{2},-\frac{\sqrt{3}}{6},-\frac{\sqrt{6}}{12})$,
$(0,\frac{\sqrt{3}}{3},-\frac{\sqrt{6}}{12})$,
$(-\frac{1}{2},-\frac{\sqrt{3}}{6},-\frac{\sqrt{6}}{12})$,
$(0,0,\frac{\sqrt{6}}{4})$, respectively. $m_{1}=m_{2}=m_{3}=m_{4}=1$. In (a)
the friction is distributed evenly among vertices as
$\xi_{1}=\xi_{2}=\xi_{3}=\xi_{4}=0.075$. Then by definition the CoM and CoF
coincide at $(0,0,0)$. The figure shows the mean trajectories of the four
vertices till $t=160$. They converge to the CoF in the end.
In Fig. 10 (b), the friction is redistributed as
$\xi_{1}=\xi_{2}=\xi_{3}=0.04$ and $\xi_{4}=0.18$. We now have an anisotropic
tetrahedron with vertex 4 experiencing much more resistance from the medium.
Consequently, the new location of the CoF is approximately $(0,0,0.2858)$. The
figure shows the mean trajectories of the four vertices and the CoM (black
solid line) till $t=160$; in the end they all converge to the CoF. The results
in Fig. 10 (a) and (b) lead to the same conclusion as in the 2D system: in the
absence of external force, the mean trajectories converge regardless of
isotropy or anisotropy.
We proceed to subject the isotropic tetrahedron of (a) to a constant external
force $\mathbf{F}=(0.001,0.001,0.001)$, shown in Fig. 10 (c). This figure
shows the mean trajectories till $t=400$; in the end they converge to one
single trajectory. This again matches the conclusion for the 2D system that
the mean trajectories of an isotropic body converge under external force. As a
comparison, Fig. 10 (d) shows the results of subjecting the anisotropic
tetrahedron of (b) to the constant force. These also end up in convergence.
This differs from the 2D system, where an anisotropic body under external
force can experience the alignment and twist regimes. In 3D, the alignment and
twist regimes disappear. This can be understood as follows: the body tries to
align itself with the force, but rotation about the “alignment axis” again
leads to convergence. The wiggle about the “alignment axis” is likewise
averaged out by the extra rotations, hence there is no twist regime. Adding
more rotational degrees of freedom makes the particle behave more
isotropically Mukhija and Solomon (2007).
We summarize the results thus far in Tab. 2.
Table 2: Regime classifications for interactions between mean trajectories of multiple tracking points on a Brownian rigid body. (A = Alignment, C = Convergence, T = Twist)

| 2D | 3D
---|---|---
| Free | Forced | Free | Forced
Isotropic | C | C | C | C
Anisotropic | C | C/A/T | C | C
As the table shows, the convergence regime is overwhelmingly dominant in most
situations. Only for a forced anisotropic body in 2D may one observe the
alignment and twist regimes, which are made possible by translation-rotation
coupling. Translation-rotation coupling is also responsible for the
modification of Brownian behavior during regime transitions. However, such
coupling yields to the overwhelming effect of multiple rotational degrees of
freedom in 3D space. This is consistent with the results for an ellipsoid,
where the effect of translation-rotation coupling is much stronger under 2D
and quasi-2D confinement Han et al. (2006) than in 3D.
## VIII Conclusions
We have identified three regimes of interaction, namely convergence, alignment
and twist, between the mean trajectories of different tracking points on a
Brownian rigid body, based on a polygon/polyhedron representation. Depending
on the properties of the rigid body and the external force, in 2D the body can
sample from and transition between these three regimes, while in 3D only
convergence is sampled. Moreover, when a body in 2D transitions between
regimes, its Brownian behavior can be modified. Translation-rotation coupling
plays a fundamental role in making the nonconvergent regimes (alignment and
twist) possible; otherwise the convergence regime dominates.
Our results show that in most situations, different tracking points on a
Brownian rigid body are statistically equivalent because of the convergence of
the mean trajectories. Only for systems in the alignment and twist regimes are
different tracking points statistically different, such that an
inappropriately chosen tracking point could lead to an error in displacement.
However, such error is shown to be bounded by the size of the particle.
Therefore, if the particle size is small compared to the spatial scale of
interest, the choice of tracking point is not a concern. Nevertheless, in
scenarios such as rigid particle sedimentation near a surface, where the
spatial scale is reduced to be commensurate with the particle size, there
could be a special choice of tracking point that best estimates the long-time
transport coefficients Delong et al. (2015).
The result that only the convergence regime survives from 2D to 3D suggests
that the Brownian behaviors of a rigid body in a bulk medium and under
confinement are quite different. Confinement in space limits the number of
rotational degrees of freedom and allows more anisotropic behaviors, while
increasing the number of rotational degrees of freedom makes the particle
behave much more isotropically Han et al. (2006, 2009); Mukhija and Solomon
(2007). It is somewhat surprising that our extremely simplified model captures
such a dramatic change across dimensions. Indeed, there remain more details to
uncover in future work about the interactions between anisotropy,
translation-rotation coupling and spatial confinement in Brownian dynamics.
## APPENDIX: The Semi-Analytical Solution of Mean Displacement of CoM
This appendix presents the semi-analytical solution to mean displacement (MD)
of CoM in the absence of external forces.
When $\\{\mathbf{F}_{i}\\}_{1\leq i\leq n}=\\{\mathbf{0}\\}$, from Eq. (4),
one obtains
$\frac{d\mathbf{v}_{m}}{dt}=-\frac{\sum_{i=1}^{n}\xi_{i}}{\sum_{i=1}^{n}m_{i}}\mathbf{v}_{m}-\bm{\omega}\times\frac{1}{\sum_{i=1}^{n}m_{i}}\sum_{i=1}^{n}\xi_{i}\mathbf{R}^{m}_{i}+\frac{1}{\sum_{i=1}^{n}m_{i}}\sum_{i=1}^{n}\delta\mathbf{F}_{i}.$
(A.1)
Integrating Eq. (A.1) and using
$\Delta\mathbf{r}_{m}=\int_{0}^{t}\mathbf{v}_{m}dt^{\prime}$, we arrive at
($\mathbf{v}_{m}(0)=\mathbf{0}$ applied) an equation for displacement
$\Delta\mathbf{r}_{m}$,
$\frac{d}{dt}\Delta\mathbf{r}_{m}=-\frac{\sum_{i=1}^{n}\xi_{i}}{\sum_{i=1}^{n}m_{i}}\Delta\mathbf{r}_{m}-\frac{1}{\sum_{i=1}^{n}m_{i}}\int_{0}^{t}\bm{\omega}\times(\sum_{i=1}^{n}\xi_{i}\mathbf{R}^{m}_{i})dt^{\prime}+\frac{1}{\sum_{i=1}^{n}m_{i}}\sum_{i=1}^{n}\int_{0}^{t}\delta\mathbf{F}_{i}dt^{\prime}.$
(A.2)
Taking ensemble average of Eq. (A.2), one gets the Langevin equation for MD
$\langle\Delta\mathbf{r}_{m}\rangle$,
$\frac{d}{dt}\langle\Delta\mathbf{r}_{m}\rangle=-\frac{\sum_{i=1}^{n}\xi_{i}}{\sum_{i=1}^{n}m_{i}}\langle\Delta\mathbf{r}_{m}\rangle-\frac{1}{\sum_{i=1}^{n}m_{i}}\Big{\langle}\int_{0}^{t}\bm{\omega}\times(\sum_{i=1}^{n}\xi_{i}\mathbf{R}^{m}_{i})dt^{\prime}\Big{\rangle}.$
(A.3)
By Eq. (5), the integral in Eq. (A.3) could be carried out
$\frac{d}{dt}\langle\Delta\mathbf{r}_{m}\rangle=-(\frac{\sum_{i=1}^{n}\xi_{i}}{\sum_{i=1}^{n}m_{i}})\langle\Delta\mathbf{r}_{m}\rangle-\frac{1}{\sum_{i=1}^{n}m_{i}}\sum_{i=1}^{n}\xi_{i}\langle\mathbf{R}^{m}_{i}\rangle+\frac{1}{\sum_{i=1}^{n}m_{i}}\sum_{i=1}^{n}\xi_{i}\mathbf{R}^{m}_{i}(0).$
(A.4)
Applying the initial condition $\Delta\mathbf{r}_{m}(0)=\mathbf{0}$, solution
to Eq. (A.4) is
$\langle\Delta\mathbf{r}_{m}\rangle=-\frac{1}{\sum_{i=1}^{n}m_{i}}\int_{0}^{t}\Big{(}\sum_{i=1}^{n}\xi_{i}(\langle\mathbf{R}^{m}_{i}(t^{\prime})\rangle-\mathbf{R}^{m}_{i}(0))\Big{)}e^{-\frac{\sum_{i=1}^{n}\xi_{i}}{\sum_{i=1}^{n}m_{i}}(t-t^{\prime})}dt^{\prime}.$
(A.5)
The $\mathbf{R}^{m}_{i}(0)$ part of the integral can be evaluated explicitly, and Eq. (A.5) simplifies to
$\langle\Delta\mathbf{r}_{m}\rangle=\frac{\sum_{i=1}^{n}\xi_{i}\mathbf{R}^{m}_{i}(0)}{\sum_{i=1}^{n}\xi_{i}}(1-e^{-\frac{\sum_{i=1}^{n}\xi_{i}}{\sum_{i=1}^{n}m_{i}}t})-\frac{1}{\sum_{i=1}^{n}m_{i}}\int_{0}^{t}\Big{(}\sum_{i=1}^{n}\xi_{i}\langle\mathbf{R}^{m}_{i}(t^{\prime})\rangle\Big{)}e^{-\frac{\sum_{i=1}^{n}\xi_{i}}{\sum_{i=1}^{n}m_{i}}(t-t^{\prime})}dt^{\prime}.$
(A.6)
Eq. (A.6) can be expressed more concisely. Let us introduce the vector
$\mathbf{S}_{F}^{m}$ pointing from the CoM to the CoF, so that
$\mathbf{R}^{m}_{i}=\mathbf{S}_{F}^{m}+\mathbf{R}^{f}_{i}$. The
$\sum_{i=1}^{n}\xi_{i}\langle\mathbf{R}^{m}_{i}(t^{\prime})\rangle$ term can
then be rewritten as
$\sum_{i=1}^{n}\xi_{i}\langle\mathbf{R}^{m}_{i}(t^{\prime})\rangle=\langle\sum_{i=1}^{n}\xi_{i}(\mathbf{S}_{F}^{m}(t^{\prime})+\mathbf{R}^{f}_{i}(t^{\prime}))\rangle=(\sum_{i=1}^{n}\xi_{i})\langle\mathbf{S}_{F}^{m}(t^{\prime})\rangle.$
This holds because $\sum_{i=1}^{n}\xi_{i}\mathbf{R}^{f}_{i}=\mathbf{0}$ by the definition of the CoF (Eq. (1)). Therefore Eq. (A.6) reduces to
$\langle\Delta\mathbf{r}_{m}\rangle=\mathbf{S}_{F}^{m}(0)(1-e^{-\frac{\sum_{i=1}^{n}\xi_{i}}{\sum_{i=1}^{n}m_{i}}t})-\frac{\sum_{i=1}^{n}\xi_{i}}{\sum_{i=1}^{n}m_{i}}\int_{0}^{t}\langle\mathbf{S}_{F}^{m}(t^{\prime})\rangle
e^{-\frac{\sum_{i=1}^{n}\xi_{i}}{\sum_{i=1}^{n}m_{i}}(t-t^{\prime})}dt^{\prime}.$
(A.7)
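For reference, a short Python sketch (our own illustration; `md_com` is a hypothetical helper) that evaluates Eq. (A.7) by numerical quadrature, given the numerically obtained $\langle\mathbf{S}_{F}^{m}\rangle(t)$ on a uniform time grid:

```python
import numpy as np

def md_com(S_mean, t, xi_sum, m_sum):
    """Evaluate Eq. (A.7): <Delta r_m>(t) = S(0)(1 - e^{-k t})
    - k * int_0^t <S>(t') e^{-k (t - t')} dt', with k = sum(xi)/sum(m).
    S_mean: <S_F^m>(t) with shape (n_t, dim); t: uniform grid (n_t,)."""
    k = xi_sum / m_sum
    dt = t[1] - t[0]
    out = np.empty_like(S_mean)
    for j, tj in enumerate(t):
        kernel = np.exp(-k * (tj - t[:j + 1]))[:, None]
        integral = np.trapz(S_mean[:j + 1] * kernel, dx=dt, axis=0)
        out[j] = S_mean[0] * (1.0 - np.exp(-k * tj)) - k * integral
    return out
```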
Figure 11: (Color Online) (a) The behavior of
$\langle\mathbf{S}_{F}^{m}\rangle$ versus time. (b) The behavior of
$\langle\Delta\mathbf{r}_{m}\rangle$ versus time.
This result shows that $\langle\Delta\mathbf{r}_{m}\rangle$ is biased towards
the CoF until it saturates. This is only a semi-analytical result because we
do not have an analytical solution for $\langle\mathbf{S}_{F}^{m}\rangle$,
which is determined by the angular velocity $\omega$; the translation-rotation
coupling in Eq. (4) makes an analytical solution for $\omega$ difficult to
obtain. However, based on the case of Fig. 2 (a), we numerically compute the
behavior of $\langle\mathbf{S}_{F}^{m}\rangle$ versus time, as shown in Fig.
11 (a). Substituting the numerical solution for
$\langle\mathbf{S}_{F}^{m}\rangle$ into Eq. (A.7), we obtain the MD of the
CoM, shown in Fig. 11 (b). This result indicates that the contribution from
the integral part of Eq. (A.7) is negligible and the MD is almost entirely
determined by the first part of Eq. (A.7), a saturating exponential growth,
which agrees with the experimental and theoretical results of a boomerang
colloidal particle study Chakrabarty et al. (2013).
## References
* Favro (1960) L. D. Favro, Phys. Rev. 119, 53 (1960).
* Fernandes and de la Torre (2002) M. X. Fernandes and J. G. de la Torre, Biophys. J. 83, 3039 (2002).
* Delong et al. (2015) S. Delong, F. Balboa Usabiaga, and A. Donev, J. Chem. Phys. 143, 144107 (2015).
* Brenner (1965) H. Brenner, J. Colloid Sci. 20, 104 (1965).
* Brenner (1967) H. Brenner, J. Colloid Interface Sci. 23, 407 (1967).
* Kümmel et al. (2013) F. Kümmel, B. ten Hagen, R. Wittkowski, I. Buttinoni, R. Eichhorn, G. Volpe, H. Löwen, and C. Bechinger, Phys. Rev. Lett. 110, 198302 (2013).
* Wittkowski and Löwen (2012) R. Wittkowski and H. Löwen, Phys. Rev. E 85, 021406 (2012).
* Han et al. (2006) Y. Han, A. M. Alsayed, M. Nobili, J. Zhang, T. C. Lubensky, and A. G. Yodh, Science 314, 626 (2006).
* Sun et al. (2008) X. Sun, T. Lin, and J. D. Gezelter, J. Chem. Phys. 128, 234107 (2008).
* Chakrabarty et al. (2013) A. Chakrabarty, A. Konya, F. Wang, J. V. Selinger, K. Sun, and Q.-H. Wei, Phys. Rev. Lett. 111, 160603 (2013).
* Chakrabarty et al. (2016) A. Chakrabarty, F. Wang, K. Sun, and Q.-H. Wei, Soft Matter 12, 4318 (2016).
* Chakrabarty et al. (2014) A. Chakrabarty, A. Konya, F. Wang, J. V. Selinger, K. Sun, and Q.-H. Wei, Langmuir 30, 13844 (2014).
* Ermak and McCammon (1978) D. L. Ermak and J. A. McCammon, J. Chem. Phys. 69, 1352 (1978).
* Brady and Bossis (1988) J. F. Brady and G. Bossis, Annu. Rev. Fluid Mech. 20, 111 (1988).
* Fiore and Swan (2019) A. M. Fiore and J. W. Swan, J. Fluid Mech. 878, 544 (2019).
* Sharma and Patankar (2004) N. Sharma and N. A. Patankar, J. Comp. Phys. 201, 466 (2004).
* Tejedor et al. (2010) V. Tejedor, O. Bénichou, R. Voituriez, R. Jungmann, F. Simmel, C. Selhuber-Unkel, L. B. Oddershede, and R. Metzler, Biophys. J. 98, 1364 (2010).
* Holcman et al. (2015) D. Holcman, N. Hoze, and Z. Schuss, Biophys. J. 109, 1761 (2015).
* Schnellbächer and Schwarz (2018) N. D. Schnellbächer and U. S. Schwarz, New J. Phys. 20, 031001 (2018).
* Sposini et al. (2019) V. Sposini, R. Metzler, and G. Oshanin, New J. Phys. 21, 073043 (2019).
* Michalet and Berglund (2012) X. Michalet and A. J. Berglund, Phys. Rev. E 85, 061916 (2012).
* Michalet (2010) X. Michalet, Phys. Rev. E 82, 041914 (2010).
* Frishman and Ronceray (2020) A. Frishman and P. Ronceray, Phys. Rev. X 10, 021009 (2020).
* De La Torre and Bloomfield (1978) J. G. De La Torre and V. A. Bloomfield, Biopolymers 17, 1605 (1978).
* Han et al. (2009) Y. Han, A. Alsayed, M. Nobili, and A. G. Yodh, Phys. Rev. E 80, 011403 (2009).
* Mukhija and Solomon (2007) D. Mukhija and M. J. Solomon, J. Colloid Interface Sci. 314, 98 (2007).
# Machine-learning accelerated geometry optimization in molecular simulation
Yilin Yang, Department of Chemical Engineering, Carnegie Mellon University,
5000 Forbes Ave, Pittsburgh, PA 15213; Omar A. Jiménez-Negrón, Department of
Chemical Engineering, University of Puerto Rico-Mayagüez, Mayagüez, PR 00681,
Puerto Rico, USA; John R. Kitchin, Department of Chemical Engineering,
Carnegie Mellon University, 5000 Forbes Ave, Pittsburgh, PA 15213
###### Abstract
Geometry optimization is an important part of both computational materials
science and surface science because it is the path to finding ground state
atomic structures and reaction pathways, which are used in the estimation of
thermodynamic and kinetic properties of molecular and crystal structures. This
process is slow at the quantum level of theory because it involves an
iterative calculation of forces using quantum chemical codes such as density
functional theory (DFT), which are computationally expensive and which limit
the speed of the optimization algorithms. It would be highly advantageous to
accelerate this process because then one could either do the same amount of
work in less time, or more work in the same time. In this work, we provide a
neural network (NN) ensemble based active learning method to accelerate the
local geometry optimization of multiple configurations simultaneously. We
illustrate the acceleration on several case studies including bare metal
surfaces, surfaces with adsorbates, and nudged elastic band (NEB) calculations
for two reactions. In all cases the accelerated method requires fewer DFT
calculations than the standard method. In addition, we provide an
ASE-optimizer Python package to make it easier to use NN-ensemble active
learning for geometry optimization.
DFT, machine learning, geometry optimization, nudged elastic band,
acceleration
## I Introduction
Machine learning has been reshaping the research methods of many scientific
and engineering fields. In the area of surface catalysis, various applications
of machine learning techniques are emerging that enable larger simulations of
nanoparticles Jinnouchi and Asahi (2017), structure optimization Jacobsen,
Jørgensen, and Hammer (2018); Hansen _et al._ (2019), studies of segregation
Boes and Kitchin (2017), high throughput screening Lamoureux _et al._ (2019);
Back _et al._ (2019) and on the fly learning of force fields Vandermause _et
al._ (2020). One of the crucial requirements for a machine learning model to
work is a broad training dataset, which ensures the generalization ability of
a complex machine learning model on the test dataset. For example, accurate
adsorption energies of certain adsorbates on various kinds of catalytic
surfaces are one of the basic prerequisites for conducting high-throughput
screening for novel catalyst candidates Li _et al._ (2017); Ling _et al._
(2018); Zhang, Hu, and Wang (2020). Thus, many studies aim to build a reliable
machine learning model to predict the adsorption energies on different
adsorption sites Gu, Zhang, and Tao (2012); Ulissi _et al._ (2017); Hoyt _et
al._ (2019). In this case, a training set covering most of the possible
configurations is necessary to obtain a reasonable model, which affects the
reliability of the screening process.
The rate-limiting step to obtain the adsorption energies is often the geometry
optimization process. This process consists of a sequence of iterative single
point calculations with density functional theory (DFT), with the structure
updates performed by optimizers such as conjugate gradient descent or the
Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm. These algorithms start
with an initial guess, and then iteratively move the atoms to reduce the
forces to a specified tolerance. The forces are typically computed at each
step by the DFT code. One path to speeding up these calculations is to use a
better initial guess. An alternative approach is to use a surrogate model that
is computationally cheap, but sufficiently accurate that many steps can be
taken with the cheap model before a DFT calculation is required. Recently,
many machine learning methods have been developed to accelerate the local
geometry optimization process with this idea. For example, Peterson Peterson
(2016) used a neural network as the surrogate model to find the transition
state, but the uncertainty is not included. Torres et al Torres _et al._
(2019) and Koistinen et al Koistinen _et al._ (2017) used Gaussian Process
Regression (GPR) to estimate the uncertainty during the local geometry
optimization. Those implementations of GPR are based solely on the Cartesian
coordinates of the atoms, which limits the training set to past geometries of
the same configuration size and composition encountered during the
optimization, so information from other configurations cannot be utilized.
There are other
(2012); del Río, Mortensen, and Jacobsen (2019); del Río _et al._ (2020);
Vandermause _et al._ (2020); Shuaibi _et al._ (2021) or in molecular
dynamics Tong _et al._ (2018); Jinnouchi _et al._ (2020). Most of these
methods are also based on active learning with uncertainty measured by
Gaussian process regression or a neural network (NN) ensemble. In the active
learning relaxation process, a surrogate model is trained to replace the
expensive DFT calculation to conduct the energy minimization steps. At each
step the uncertainty of the model prediction is monitored. If the uncertainty
exceeds a specified threshold, DFT calls will be requested to get accurate
energy and force information for the uncertain configuration. Then this new
data point is used to update the surrogate model to improve it.
The work to date has mostly focused on the relaxation of a single
configuration, which might have limited acceleration when applied to relax
many configurations. In each case, the surrogate model essentially starts from
scratch, and has no ability to share information between similar
configurations. In this work, we illustrate and evaluate an online learning
method to accelerate the local geometry optimization for multiple
configurations simultaneously. More specifically, we focus on two aspects to
accelerate the online learning process. The first is the training of the
surrogate model used to relax the target configurations. As the training set
grows, training the machine learning model takes more time, which might result
in a longer overall relaxation time than using DFT alone, even with fewer DFT
calls. This issue is shared among various machine learning models, including
GPR and deep learning models. We note that a local training dataset is
sufficient for conducting the local geometry optimization at each step. Thus,
the size of the training set used to update the surrogate model at each step
can be limited, which significantly reduces the time needed to train the
surrogate models. The second point of this work is to discuss potential
methods that could be adapted to accelerate the active learning relaxation
process for a large number of configurations. We
illustrate three adaptations to three different scenarios: relaxation from
scratch, relaxation from a small dataset and relaxation from a large existing
dataset. The main point under these methods is that the information of
different relaxation trajectories could be shared among each other to
accelerate the overall relaxation process. Another objective of this work is
to provide an overview about the performance of NN-based online learning on
various local geometry optimization tasks.
## II Methodology
### II.1 Machine Learning Model for the surrogate models
Many machine learning models have been established to model the potential
energy surface (PES) such as the Gaussian Approximation Potentials (GAP)
Bartók _et al._ (2010) and Behler Parrinello Neural Networks (BPNN) Behler
and Parrinello (2007). In this work, we choose the single neural network
(SingleNN) as our basic model to approximate the energies and forces
information of the atomic configurations Liu and Kitchin (2020). SingleNN is a
modified BPNN which represents different elements by multiple output nodes in
a single NN instead of separate NNs. SingleNN uses the same symmetry functions
as the conventional BPNN, but uses a single neural network with an output for
each element, rather than a separate neural network for each element. Under
the same NN structure (same number of hidden layers and same nodes in each
layer), it contains fewer parameters than BPNN. Thus, the training and
inference time is lower. The feature representation of the atomic environment
used in this work is the atom-centered symmetry function (ACSF) Behler (2011).
The NN structure contains two hidden layers with 50 neurons at each layer. The
activation function used is tanh. These hyperparameters were chosen by cross
validation among different NN architectures on the dataset of previous work
and they are typical for machine learned potentials. The structure of this NN
looks relatively over-parameterized considering the small size of the dataset
in this work (typically the dataset contains 50 configurations). This is
because we want to utilize the benefits of an over-parameterized deep learning
model: 1) with high probability, convergence of the training process is easy
from a random initialization, and 2) an over-parameterized NN can lead to
dissimilar models with high probability using only different random
initializations Allen-Zhu, Li, and Liang (2019); Lakshminarayanan, Pritzel,
and Blundell (2016). There are also applications of large NN models on small
datasets with reasonable generalization ability Olson, Wyner, and Berk (2018).
We used early stopping to prevent overfitting of the NN model. For
our specific application, we need to note that overfitting is not expected to
be a severe problem because 1) the surrogate model is only used when
uncertainty is low, 2) if the uncertainty exceeds a threshold the data is
augmented by new DFT data, and 3) the final minimum is always validated by
DFT.
The atomic energy, total energy and forces predicted by our SingleNN are given
by Equations 1-4.
$\textbf{o}_{i}=\textbf{W}^{(2)}f_{a}^{(2)}\left(\textbf{W}^{(1)}f_{a}^{(1)}\left(\textbf{W}^{(0)}\textbf{g}_{i}+\textbf{b}^{(0)}\right)+\textbf{b}^{(1)}\right)+\textbf{b}^{(2)}$
(1)
$E_{i}=mask_{i}\cdot\textbf{o}_{i}$ (2)
$E_{tot}=\sum_{i}^{N}E_{i}$ (3)
$\textbf{f}_{i}=-\frac{\partial E_{tot}}{\partial\textbf{r}_{i}}$ (4)
In these equations $\textbf{o}_{i}$ is the output layer of the SingleNN for
atom $i$, $\textbf{g}_{i}$ is the fingerprint vector for atom $i$, and
$f_{a}^{(l)}$, $\textbf{W}^{(l)}$, $\textbf{b}^{(l)}$ are the activation
function, weight and bias at layer $l$. $mask_{i}$ is a one-hot vector
indicating the element of atom $i$. $E_{i}$ and $\textbf{f}_{i}$ are the
energy and forces of atom $i$. $N$ is the number of atoms in a configuration,
and $E_{tot}$ is the total energy of the configuration.
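A minimal PyTorch sketch of Eqs. (1)-(3) is given below. This is our own illustration of the architecture described in the text (two hidden layers of 50 tanh units, one output node per element), not the authors' implementation:

```python
import torch

class SingleNN(torch.nn.Module):
    """One shared network whose output layer has one node per element;
    a one-hot element mask selects each atomic energy (Eqs. 1-2)."""
    def __init__(self, n_features, n_elements, n_hidden=50):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(n_features, n_hidden), torch.nn.Tanh(),
            torch.nn.Linear(n_hidden, n_hidden), torch.nn.Tanh(),
            torch.nn.Linear(n_hidden, n_elements),  # o_i, Eq. (1)
        )

    def forward(self, g, mask):
        # g: (n_atoms, n_features) ACSF fingerprints
        # mask: (n_atoms, n_elements) one-hot element indicators
        E_atom = (mask * self.net(g)).sum(dim=1)  # E_i, Eq. (2)
        return E_atom.sum()                       # E_tot, Eq. (3)

# Forces (Eq. 4) follow by differentiating E_tot with respect to the
# atomic positions through the fingerprints, e.g. with torch.autograd.
```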
To measure the uncertainties of the model predictions, we adopt the NN
ensemble method as an approximate estimation Lakshminarayanan, Pritzel, and
Blundell (2016). We use 10 NNs in the NN ensemble and each NN has the same
structure. As mentioned in the original ensemble method paper, each NN is
trained on the same training set without bootstrapping but with different
random initialization. This is because different initializations are already
able to generate different NN models using the same training set because of
the over-parameterization of the NN model. Allen-Zhu, Li, and Liang (2019)
The prediction uncertainty is estimated by the variance of the model
predictions in the ensemble. We used a relative ratio to the maximum variance
of the NN ensemble in the training set as a criterion to check if a
configuration is uncertain or not. More specifically, Equation 5 quantifies
this uncertainty threshold.
$thld=\alpha\max_{i}{\mathrm{Var}\left[E_{tot}^{i}\right]}$ (5)
where $\alpha$ is a coefficient controlling the extent to which the prediction
of the NN ensemble is trusted, $\mathrm{Var}\left[E_{tot}^{i}\right]$ is the
prediction variance of the NN ensemble on the total energy of configuration
$i$ in the training set, and $thld$ is the threshold above which a prediction
is considered uncertain. We chose $\alpha$ by comparing the performance of
different values on a small dataset. For the various applications below,
setting $\alpha$ between 2 and 3 works for all examples, and we use 2 as the
default value in the GitHub package. The intuition is that if the NN ensemble
has a variance on a test configuration similar to the variance in the training
set, then the test configuration is likely close to the region of the training
dataset, and we can expect an error similar to the training error. If the
variance is far above the maximum variance in the training set, it is probable
that extrapolation is occurring, and we should be careful about the
prediction. This intuition is shared by different machine learning models like
the GPR and the NN ensemble. For example, Figure 1 shows the GPR and NN models
for the Lennard-Jones potential Jones (1924). Both models have small prediction
variance in the region of the training data. As the test data goes far away
from the training set, the prediction error and variance also increase.
Figure 1: Surrogate machine learning models for the Lennard Jones potential.
Left plot shows the GPR while the right plot shows the NN ensemble. Both
models have low prediction variance in the region of training set and high
variance for the data that is far from the training set.
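The uncertainty test of Eq. (5) is simple to express in code; the sketch below is our own illustration (the function name is hypothetical), with $\alpha=2$ as in the package default:

```python
import numpy as np

def is_uncertain(E_test, E_train, alpha=2.0):
    """Eq. (5): flag a configuration as uncertain if its ensemble
    variance exceeds alpha times the maximum ensemble variance seen
    on the training set.

    E_test: (n_models,) total-energy predictions for one configuration;
    E_train: (n_models, n_train) predictions on the training set."""
    thld = alpha * np.var(E_train, axis=0).max()
    return np.var(E_test) > thld
```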
We also compare this NN model with the GPR model in one of our datasets. The
details of the GPR formula are attached in the supporting information.
Optimization of the hyperparameters like the bandwidth and the data noise term
was done according to the previous literature reports Koistinen _et al._
(2017); Torres _et al._ (2019). The data noise in this application could be
the DFT convergence error related to the factors like k points and cutoff
energy.
### II.2 Relaxation with Active Learning
The framework of the active learning for relaxation is shown in Figure 2 which
is similar to most active learning frameworks, Jacobsen, Jørgensen, and Hammer
(2018); Vandermause _et al._ (2020) but we process multiple configurations
simultaneously to obtain extra acceleration. The rationale for pooling
different trajectories together is that information about similar atomic
environments across trajectories can be shared through a common atomic NN
surrogate model, as was also observed for the water NN potential Schran,
Brieuc, and Marx (2021). Another benefit of the pooling is that it can be
applied in a scalable way: different configurations share a common surrogate
model and there is no need to assign separate computing resources for the
training of each trajectory. For the specific procedure, we start from $N$
configurations to be relaxed and build a common NN ensemble for these $N$
configurations. At each step, we relax each configuration until the model
becomes uncertain for it. Then we query DFT for the true energies and forces
of these uncertain configurations, which are used to update the surrogate
model. During the relaxation process, we limit the size of the training set
and keep only the configurations of the most recent steps; all earlier
configurations are discarded in the iterative training of the NN ensemble.
This setting reduces the time needed to train a NN as the number of available
data points grows with the relaxation steps. Intuitively, this modification is
similar to L-BFGS compared to BFGS, which estimates the inverse of the Hessian
matrix at a point using recent gradients instead of the full history Liu and
Nocedal (1989). However, L-BFGS aims to alleviate a memory problem, while we
aim to reduce the training time of the surrogate model. A schematic sketch of
the loop is given below.
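The pooled loop can be summarized in schematic Python; here `dft`, `ensemble`, `relax_on_surrogate`, and `converged` are hypothetical stand-ins for the DFT calculator, the NN ensemble, a surrogate-driven optimizer, and the force-convergence check, not the package's actual API:

```python
# Initial DFT labels for all N configurations (top of Fig. 2).
data = [dft(atoms) for atoms in configs]
while not all(converged(atoms) for atoms in configs):
    # Retrain on only the most recent steps to bound the training time.
    ensemble.train(data[-max_history:])
    for i, atoms in enumerate(configs):
        if converged(atoms):
            continue
        # Relax on the cheap surrogate until it becomes uncertain.
        configs[i] = relax_on_surrogate(atoms, ensemble)
        if ensemble.is_uncertain(configs[i]):  # Eq. (5) test
            data.append(dft(configs[i]))       # query true labels
```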
Before running the online learning to relax the target configurations, several
cases should be considered. If no prior data related to the target
configurations is available, the initial model is built on the DFT information
of the initial configurations. If some existing relaxation trajectories are
related to the target configurations (e.g. alloys with the same elements but
different configurations), this data is incorporated with the DFT data of the
initial configurations to set up the initial NN model; this reused data also
accelerates the overall relaxation process. Finally, if training data is
available from previous relaxations that are similar to the initial
configurations, it is possible to conduct the relaxation in an offline way
through the NN model trained on the prior training set, without initially
accessing the DFT calculation.
Figure 2: Framework for relaxation with online active learning. The overall
workflow starts with the initial configurations that need to be relaxed.
First, the DFT energies and forces are calculated and the NN ensemble is
trained with this initial information. Then the model is used with optimizers
to reduce the energy of the configurations. The relaxation with the model
stops when it encounters uncertain configurations or reaches the relaxation
criterion. The uncertain configurations are submitted for further DFT
calculations.
### II.3 Application Dataset
In this work, we test the proposed online learning methods on a variety of
structures including bare pure metal slabs, bare metal alloy slabs, slabs with
an adsorbate, and a nanoparticle with an adsorbate. These structures increase
in complexity, and are expected to be increasingly expensive to do geometry
optimization with. More specifically, we take Au FCC(100), Au FCC(111), Au
FCC(211), Au FCC(643), Au FCC(111) with propylene on the surface, AuPd
FCC(111), AgPd FCC(111) with acrolein on the surface, AuPd icosahedron with CO
on edge as the examples for these structures. For the slab, the bottom two
layers are fixed and the remaining atoms are free to be relaxed. For
nanoparticles, all atoms are free to move during the relaxation. In addition
to the geometry relaxation of these structures, we also evaluate this method
on two climbing-image nudged elastic band (CINEB) cases Henkelman, Uberuaga,
and Jónsson (2000): Pt heptamer rearrangement over Pt FCC(111) surface and
acetylene hydrogenation over Pd FCC(111) surface. The CINEB algorithm is like
a constrained geometry optimization where forces in the direction tangent to
the bands are projected out. The basic framework to perform CINEB using NN
ensemble is similar to the CINEB based on Gaussian Process Regression (GPR)
Torres _et al._ (2019). In our work, the surrogate model is the NN ensemble
instead of the GPR. During the relaxation, when one of the configurations in
the CINEB is identified in the uncertain region of the NN ensemble, we query
for a DFT calculation for this configuration. This process continues until all
configurations are relaxed with certainty, then we query the DFT information
for the configuration with highest energy until the energy and force
prediction for the highest-energy configuration is certain and the true force
is lower than a specified threshold.
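As a sketch of how this fits together with ASE (our own illustration; `EnsembleCalculator` and the surrounding retraining loop are hypothetical, not the package's actual class names):

```python
from ase.neb import NEB
from ase.optimize import BFGS

# Build a climbing-image band between DFT-relaxed endpoints.
images = [initial] + [initial.copy() for _ in range(5)] + [final]
neb = NEB(images, climb=True)
neb.interpolate()
for image in images[1:-1]:
    image.calc = EnsembleCalculator(ensemble)  # hypothetical NN-ensemble calculator
# Relax the band on the surrogate; afterwards, uncertain images and the
# highest-energy image are labeled with DFT, the ensemble is retrained,
# and the band is re-relaxed until all images are certain and converged.
BFGS(neb).run(fmax=0.05)
```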
The DFT used in this work is performed by the Vienna Ab initio Simulation
Package (VASP) Kresse and Hafner (1993); Kresse and Furthmüller (1996) with
Perdew-Burke-Ernzerhof generalized gradient approximation (GGA-PBE) as the
exchange-correlation functional Perdew, Burke, and Ernzerhof (1996, 1997). For
the Pt heptamer rearrangement case, we used EMT as the calculator for energy
and forces because of the size of this system (unit cell with 343 Pt atoms) as
implemented in ASE Larsen _et al._ (2017). The related dataset, relaxation
trajectories, configurations in the NEB, as well as the code used to conduct
the active learning geometry optimization, are available on GitHub Yang , in
which the code to calculate the fingerprints is modified from the functions of
SimpleNN Lee _et al._ (2019).
## III Results and Discussion
### III.1 Active learning for geometry optimization of single configuration
Usually, geometry optimization is done for each configuration separately. For
example, one may be interested in the relaxed geometry of an occupied
adsorption site, then the geometry optimization would be performed on an
initial guess of the configuration. Active learning could be integrated into
the optimization trajectory to accelerate the process by using a surrogate
model with uncertainty. Using the example of Au slabs with or without an
adsorbate, we evaluated the performance of active learning on
single-configuration relaxation and compared it with the quasi-Newton
optimizer built into VASP (RMM-DIIS) Pulay (1980). As shown in Figure 3, the
acceleration for the bare slabs is not as significant as it is for the slab
with propylene on top. The more complex FCC(643) surface gains more
acceleration than the simpler FCC(100), FCC(111) and FCC(211) surfaces. The
results suggest that the surrogate model requires a minimum number of steps or
configurations to build up a sufficiently good approximate potential energy
surface before showing acceleration. These results show that with active
learning the number of DFT calls may be reduced by a factor of two to four for
geometry optimizations that require 20 or more relaxation steps.
Figure 3: Comparison of the number of DFT calls between active learning with
NN ensemble and quasi-Newton built in VASP when each configuration is relaxed
independently.
### III.2 Further acceleration by information sharing among configurations
and utilizing prior data
There are multiple ways to use machine learning to accelerate geometry
optimization. First one may build the surrogate machine learned model from the
relaxation trajectory of a single configuration as it develops, using the
surrogate model when it is sufficiently accurate. Alternatively, one can relax
many (related) configurations in parallel and train a single surrogate machine
learning model on the collection of developing trajectories (the multiple
method). Finally, if one has access to the relaxation trajectories from
previously relaxed configurations one can pretrain the surrogate machine
learning model and then use it (the warm up method).
We compare the performance of active learning with these different strategies,
single configuration, multiple configurations, and multiple configurations with
warm-up (pre-training), on the example of an adsorbed acrolein molecule on an
FCC(111) AgPd alloy system. This system is more complex than the examples in
the previous section, with lower symmetry, and it is expected to take more
relaxation steps to find a minimum energy geometry. Here we use the same query
strategy for new DFT single point calculations, but with different settings
for the initialization. The single configuration active learning method
focuses on relaxing one configuration at a time. The surrogate
model starts with the DFT information of the target configuration. At each
relaxation step, it relaxes this configuration and queries the DFT label for
one uncertain configuration. For the multiple configurations setting, the DFT
energies and forces of all target initial configurations are used to
initialize the NN model. Then all configurations are optimized until each
configuration is fully relaxed or enters an uncertain region of the surrogate
model. The warm-up setting requires some prior DFT data related to the target
configurations to be relaxed, such that the surrogate model can be pre-trained
with this prior DFT information, which serves as a prior belief about the
potential energy surface. A minimal sketch of this query loop is given below.
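To make the query loop concrete, the following minimal sketch illustrates the
single-configuration mode on a toy 2-D potential energy surface. The
calculator stand-in, ensemble size, step sizes, and thresholds (`sigma_max`,
`fmax`) are our illustrative assumptions, not the settings used in this work,
which trains fingerprint-based NNs against DFT.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def expensive_energy_forces(x):
    """Stand-in for a DFT single-point call on a toy 2-D PES."""
    e = np.sin(3.0 * x[0]) + (x[0] - 1.0) ** 2 + 0.5 * (x[1] + 0.2) ** 2
    grad = np.array([3.0 * np.cos(3.0 * x[0]) + 2.0 * (x[0] - 1.0),
                     x[1] + 0.2])
    return e, -grad  # force = -gradient

class NNEnsemble:
    """Ensemble of small NNs; the spread of member predictions is the
    uncertainty estimate used to decide when to query the calculator."""
    def __init__(self, n_members=5):
        self.members = [MLPRegressor(hidden_layer_sizes=(50, 50),
                                     max_iter=5000, random_state=i)
                        for i in range(n_members)]
    def fit(self, X, y):
        for m in self.members:
            m.fit(X, y)
    def predict(self, x):
        p = np.array([m.predict(np.atleast_2d(x))[0] for m in self.members])
        return p.mean(), p.std()

def surrogate_gradient(model, x, h=1e-3):
    """Finite-difference gradient of the ensemble-mean energy."""
    g = np.zeros_like(x)
    for i in range(x.size):
        xp, xm = x.copy(), x.copy()
        xp[i] += h
        xm[i] -= h
        g[i] = (model.predict(xp)[0] - model.predict(xm)[0]) / (2.0 * h)
    return g

def active_relax(x0, fmax=0.05, sigma_max=0.05, max_queries=50):
    X, y = [x0.copy()], [expensive_energy_forces(x0)[0]]
    x = x0.copy()
    for _ in range(max_queries):
        model = NNEnsemble()
        model.fit(np.array(X), np.array(y))
        for _ in range(200):                    # relax on the surrogate PES
            x = x - 0.1 * surrogate_gradient(model, x)
            if model.predict(x)[1] > sigma_max:
                break                           # too uncertain: stop and query
        e, f = expensive_energy_forces(x)       # query the true calculator
        X.append(x.copy())
        y.append(e)
        if np.abs(f).max() < fmax:              # converged on the *true* forces
            return x, len(X)
    return x, len(X)

x_final, n_calls = active_relax(np.array([0.0, 1.0]))
print(f"relaxed to {x_final} using {n_calls} expensive calls")
```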
The performance of the above three methods on 13 different acrolein/AgPd
configurations is shown in Figure 4. With standard DFT/QN geometry
optimization it takes about 193 DFT steps on average to relax the geometries.
All three methods in our work and the GPR model show acceleration, while the
NN methods outperform the GPR model. The hyperparameters
of the GPR model are taken from previous literature reports Koistinen
_et al._ (2017); Torres _et al._ (2019). We note that the hyperparameters
from the reported literature might not be optimal for our system, but
even so we observe acceleration of about four times fewer steps with the
GPR, 11 times fewer steps for the single configuration, and 13 times
fewer steps for the multiple configurations. The pretrained warm-up shows the
largest acceleration, indicating that its surrogate model is the most
accurate. Clearly, the information sharing through the surrogate
model accelerates the active learning relaxation process. The large reduction
in the number of DFT calls required directly translates to saved time and
computing resources. In the limit of a fully trained machine-learned
potential, one can expect no additional DFT calculations to be required for a
new relaxation, but in our experience and in the literature it takes thousands
of DFT calculations to reach that limit.
Figure 4: Number of DFT calls for three different active learning settings for
the relaxation of acrolein/AgPd(111). The blue line represents the single
configuration mode, the orange line the multiple configurations mode,
and the green line the multiple configurations with warm-up. The red
line serves as a baseline, showing the performance of the GPR model implemented
according to previous literature Koistinen _et al._ (2017); Torres _et al._
(2019). For comparison, with no ML it takes about 193 DFT calls to converge.
A related scenario is when we already have some data about the target
configurations that we want to relax. For example, we may have the active
learning relaxation trajectories for many configurations of acrolein/AgPd and
want to relax the remaining configurations. In this case we can utilize the
existing data to build a model approximating the PES of acrolein on AgPd and
conduct the relaxation process offline, since it is possible that the
information required to relax the remaining configurations is already included
in the existing trajectories. We show the offline relaxation performance in
Figure 5, in which 243 acrolein/AgPd relaxation trajectories are used to train
a NN model. Then, another 13 configurations are relaxed using this model.
Without any DFT calls, the NN reduces the maximum force of the
configurations from 0.7 eV/$\AA$ to below 0.1 eV/$\AA$, which could serve as a
preprocessing step if lower forces are required, in other words, to provide
better initial guesses. The NN ensembles provide uncertainty estimates, which
would be useful for determining whether the pretrained models are sufficiently
accurate for new configurations that are not similar to the training set.
Figure 5: Offline relaxation on 13 acrolein/AgPd configurations using a NN
trained on 243 existing relaxation trajectories. Blue points show the maximum
DFT forces for the initial configurations. Orange points show the maximum DFT
forces for the NN-relaxed configurations, while purple dots show the NN-
predicted maximum forces.
In summary, this section shows that machine learning surrogate models can be
trained on the fly or in advance in a variety of ways to accelerate geometry
optimization. The biggest on-the-fly acceleration occurs when multiple similar
configurations are relaxed in parallel with shared training data in a single
surrogate model. Further acceleration can be obtained if prior training data
already exists with which to pretrain the surrogate model. In the next section
we show that the acceleration is observed for many different atomistic
systems, and that the degree of acceleration is system dependent.
### III.3 Performance of the active learning on more complex systems and
nudged elastic band calculations
To explore the ability of the active learning with multiple configurations to
accelerate geometry optimization, we evaluate this method on three different
chemical structures: a bare AuPd FCC(111) slab, CO on an AuPd icosahedron
nanoparticle, and acrolein on an AgPd FCC(111) surface, as shown in the
illustrative examples. We measured the DFT calls required to fully relax the
configurations and compared them with the built-in VASP quasi-Newton optimizer
RMM-DIIS. We relaxed the configurations until the maximum force on the atoms
was less than 0.05 eV/$\AA$. The results are shown in Figure 6. Active
learning accelerates the relaxation process to different extents across these
three systems. For a simpler case like the bare AuPd slab, the acceleration
ratio is about 50% compared to the pure VASP optimizer. For more complicated
(i.e., lower symmetry and more atomic degrees of freedom) systems, the
acceleration was more significant, reducing the number of DFT calls by more
than 90%. This result shows that active learning is well suited to relaxing
more complicated structures. Once the NN has built a reasonable representation
of the potential energy surface of the target configurations from the first
several DFT calculations, the surrogate model can fine-tune the structure as a
replacement for further DFT calls.
Figure 6: Comparison of active learning (AL) and VASP quasi-Newton (QN) method
on relaxing three different structures: bare AuPd slab, CO on AuPd icosahedron
and acrolein on AgPd slab.
In addition to the local geometry optimization in the aforementioned cases, we
also evaluated the NN ensemble based active learning method in two climbing
image NEB (CINEB) examples: Pt heptamer rearrangement over a Pt FCC(111)
surface and acetylene hydrogenation over a Pd FCC(111) surface. We use an
effective medium theory (EMT) calculator Jacobsen, Stoltze, and Nørskov (1996)
for the heptamer and DFT for the hydrogenation reaction. We use EMT for the
heptamer because of the large size of the Pt slab; this example also shows
that the NN ensemble method is not limited to DFT. We note that EMT is a
relatively simpler potential than DFT; thus, we also include the acetylene
hydrogenation with DFT as an example. The reaction curves generated by the NN
ensemble with active learning and the corresponding VASP or EMT calculator are
shown in Figure 7. With the same initial and final states, the NN ensemble
found practically the same transition state as VASP or EMT for these two
systems. The corresponding activation energies have errors of 6 meV and 4 meV
relative to the EMT and DFT references, which is within convergence tolerance.
The required DFT or EMT calls are much fewer than those without active
learning, as shown in Table 1. In the case of acetylene hydrogenation, there
are some mismatched energies between NN and VASP for the intermediate
configurations other than the transition state. This is caused by the
intrinsic setting of the low scaling CINEB method based on active learning
Torres _et al._ (2019): only DFT data for the configuration with the highest
energy is evaluated for the convergence criterion. This problem could be
alleviated by modifying the convergence criterion to include the energies and
forces of the other images in the elastic band, such that all images in the
band are fully relaxed instead of only the highest-energy configuration
Koistinen _et al._ (2017). However, for the purpose of CINEB, the NN ensemble
with active learning accelerates the search for the transition state by
finding the configuration with the highest energy.
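For reference, a minimal CINEB setup in ASE with the EMT calculator (the
plain baseline counted in Table 1) might look like the sketch below; the
endpoint trajectory file names and the number of interior images are
placeholders, and the active-learning variant would substitute the NN-ensemble
surrogate for EMT between queries.

```python
from ase.io import read
from ase.neb import NEB
from ase.calculators.emt import EMT
from ase.optimize import MDMin

# Hypothetical, already-relaxed endpoint geometries.
initial = read("initial.traj")
final = read("final.traj")

# Build the band: endpoints plus 5 interior images (the count is an assumption).
images = [initial] + [initial.copy() for _ in range(5)] + [final]
for image in images:
    image.calc = EMT()

neb = NEB(images, climb=True)  # climbing-image NEB
neb.interpolate()              # linear interpolation between the endpoints
MDMin(neb).run(fmax=0.05)      # relax the band with the MDMin optimizer
```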
Figure 7: Climbing NEB curves generated by NN ensemble and (a) EMT for Pt heptamer rearrangement, (b) DFT for acetylene hydrogenation over Pd FCC(111) surface.

Table 1: EMT or DFT calls queried by the NN ensemble with active learning, versus EMT with MDMin and VASP with the built-in quasi-Newton optimizer, for Pt heptamer rearrangement and acetylene hydrogenation.

| | Pt heptamer rearrangement (EMT) | Acetylene hydrogenation (VASP) |
|---|---|---|
| Calculator | 596 calls | 1109 calls |
| NN ensemble with AL | 9 calls | 30 calls |
### III.4 Limiting the training data to recent configurations for training
efficiency
With the active learning approach we add training data as the geometry
optimizations proceed. This also adds (re)training time, which grows with the
size of the training set. In the first few steps from scratch, this is not a
problem since the training process completes quickly because of the
small size of the training set, and the time cost for training is negligible
compared to the DFT calculations. However, when the size of the training set
grows large over the relaxation steps, the time required to train a
model with high accuracy also scales up. Figure 8 illustrates the training
time for the NN over the active learning iterations. The initial training set
consists of 13 different acrolein/AgPd configurations. At each iteration,
uncertain configurations are added to the training set and the surrogate
model is updated. The training time scales linearly with the size of the
training set, which can become time consuming as the iterations increase.
Figure 8: Time spent on the training process using a single NN with 2 layers
and 50 neurons at each layer over iterations. The blue line shows the time for
the model trained on all queried configurations while the orange line shows
the time for training on the training set with fixed size. The experiment is
repeated 10 times and the shaded area is the standard deviation for the 10
experiments. Time measured on 4 CPU cores.
It is not always necessary to use all of the training data, however. We found
that the correlation (or similarity) between two configurations in the
relaxation trajectories decreases as the number of steps between them
increases. The correlation between two configurations can be quantified by
averaging the Pearson correlations between corresponding atomic fingerprints
in the two configurations. There is usually reasonable similarity between the
initial and final states (assuming a reasonable initial guess is used) because
the relaxation is local, so to highlight the change in similarity we scaled
each configuration's correlation by this baseline. The descending
correlation shown in Figure 9 for a relaxation trajectory suggests we may only
need the configurations from the most recent steps to perform local geometry
relaxation.
Figure 9: Scaled Pearson correlation coefficient between the intermediate
configurations and the final relaxed configuration. The Pearson correlation is
scaled by the base correlation between the initial configuration and the final
configuration.
Based on Fig. 9, it appears, in this system at least, that after about five
steps the new steps are decreasingly correlated with the initial steps.
Therefore, if we only keep the recent steps (e.g. the five most recent steps)
and only use these configurations to update the surrogate model, the training
time can be held almost constant as the active learning proceeds
(see Figure 8). We note in this case that the training time is still small
compared to the time required for a single DFT calculation, which is about 1.5
hours for the acrolein/AgPd unit cell with the VASP settings in this work.
When the total training set continues to grow or fewer computational
resources are available for training, the local training set may be
preferable. We note that there are cheaper probabilistic models, such as GPR,
that could be used for small datasets. But given the growing size of available
data and the wide application of deep learning models, a cheap way to access
uncertainty estimates for deep learning models is valuable.
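A sketch of the similarity measure behind Figure 9 and of the fixed-size
training window of Figure 8 is given below; the fingerprint array layout and
the scaling convention (dividing by the initial-vs-final baseline, per the
Figure 9 caption) are our reading of the text, not the exact implementation.

```python
import numpy as np

def config_correlation(fp_a, fp_b):
    """Mean Pearson correlation between corresponding atomic fingerprints.
    fp_a, fp_b: (n_atoms, n_features) arrays for two configurations."""
    return float(np.mean([np.corrcoef(a, b)[0, 1] for a, b in zip(fp_a, fp_b)]))

def scaled_similarity_to_final(trajectory_fps):
    """Correlation of each configuration with the final one, scaled by the
    initial-vs-final baseline (cf. Figure 9)."""
    final = trajectory_fps[-1]
    base = config_correlation(trajectory_fps[0], final)
    return [config_correlation(fp, final) / base for fp in trajectory_fps]

def recent_window(train_X, train_y, n_recent=5):
    """Keep only the most recent configurations for (re)training, bounding
    the training time (orange curve in Figure 8)."""
    return train_X[-n_recent:], train_y[-n_recent:]
```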
## IV Conclusion
Active learning has demonstrated promising performance in accelerating
structure optimization in various applications. In this work, we illustrate
that active learning with multiple configurations achieves further
acceleration compared to active learning with a single configuration by
sharing information across different configurations through a common NN
ensemble. On this basis, we also provide three active learning modes
for three scenarios with different amounts of prior data. By integrating
prior data into the active learning framework, more calls to expensive energy
and force calculators are saved. To explore the generalization ability of this
method, we compared the number of required underlying energetic calculations
between the active learning, the built-in VASP quasi-Newton optimizer, and
BFGS in ASE in various local geometry optimization tasks. The results show
that active learning reduces the number of DFT or EMT calls by 50% - 90%
depending on the system. From bare slabs to surfaces with adsorbates, the
acceleration becomes more significant. In addition to surface relaxation,
we also applied this method to the climbing NEB for Pt heptamer rearrangement
and acetylene hydrogenation. In these examples, the acceleration is even more
apparent (~98%) while keeping almost the same transition state as the
underlying ground-truth energy and force calculators. In conclusion, this work
shows the potential of this NN ensemble based active learning method in
various computational surface science and catalysis tasks.
## V Supplementary Material
See supplementary material for specific information about the code used in
this work and instructions for accessing the datasets used in this work.
###### Acknowledgements.
This material is based upon work supported by the U.S. Department of Energy,
Office of Science, Office of Basic Energy Sciences, Catalysis program under
Award Number DE-SC0018187. OAJN was supported under NSF DMREF Award
CBET-1921946.
## VI Data availability
The data that support the findings of this study are available within the
article and its supplementary material.
## References
* Jinnouchi and Asahi (2017) R. Jinnouchi and R. Asahi, The Journal of Physical Chemistry Letters 8, 4279 (2017).
* Jacobsen, Jørgensen, and Hammer (2018) T. L. Jacobsen, M. S. Jørgensen, and B. Hammer, Physical Review Letters 120, 026102 (2018).
* Hansen _et al._ (2019) M. H. Hansen, J. A. G. Torres, P. C. Jennings, Z. Wang, J. R. Boes, O. G. Mamun, and T. Bligaard, CoRR (2019), arXiv:1904.00904v1 [physics.chem-ph] .
* Boes and Kitchin (2017) J. R. Boes and J. R. Kitchin, The Journal of Physical Chemistry C 121, 3479 (2017).
* Lamoureux _et al._ (2019) P. S. Lamoureux, K. Winther, J. A. G. Torres, V. Streibel, M. Zhao, M. Bajdich, F. Abild-Pedersen, and T. Bligaard, ChemCatChem 11, cctc.201900595 (2019).
* Back _et al._ (2019) S. Back, J. Yoon, N. Tian, W. Zhong, K. Tran, and Z. W. Ulissi, The Journal of Physical Chemistry Letters 10, 4401 (2019).
* Vandermause _et al._ (2020) J. Vandermause, S. B. Torrisi, S. Batzner, Y. Xie, L. Sun, A. M. Kolpak, and B. Kozinsky, npj Computational Materials 6, 20 (2020).
* Li _et al._ (2017) Z. Li, S. Wang, W. S. Chin, L. E. Achenie, and H. Xin, Journal of Materials Chemistry A 5, 24131 (2017).
* Ling _et al._ (2018) C. Ling, Y. Ouyang, Q. Li, X. Bai, X. Mao, A. Du, and J. Wang, Small Methods 3, 1800376 (2018).
* Zhang, Hu, and Wang (2020) J. Zhang, P. Hu, and H. Wang, The Journal of Physical Chemistry C 124, 10483 (2020).
* Gu, Zhang, and Tao (2012) J. Gu, Y.-W. Zhang, and F. F. Tao, Chemical Society Reviews 41, 8050 (2012).
* Ulissi _et al._ (2017) Z. W. Ulissi, M. T. Tang, J. Xiao, X. Liu, D. A. Torelli, M. Karamad, K. Cummins, C. Hahn, N. S. Lewis, T. F. Jaramillo, K. Chan, and J. K. Nørskov, ACS Catalysis 7, 6600 (2017).
* Hoyt _et al._ (2019) R. A. Hoyt, M. M. Montemore, I. Fampiou, W. Chen, G. Tritsaris, and E. Kaxiras, Journal of Chemical Information and Modeling 59, 1357 (2019).
* Peterson (2016) A. A. Peterson, The Journal of Chemical Physics 145, 074106 (2016).
* Torres _et al._ (2019) J. A. G. Torres, P. C. Jennings, M. H. Hansen, J. R. Boes, and T. Bligaard, Physical Review Letters 122, 156001 (2019).
* Koistinen _et al._ (2017) O.-P. Koistinen, F. B. Dagbjartsdóttir, V. Ásgeirsson, A. Vehtari, and H. Jónsson, The Journal of Chemical Physics 147, 152720 (2017).
* Artrith and Behler (2012) N. Artrith and J. Behler, Physical Review B 85, 045439 (2012).
* del Río, Mortensen, and Jacobsen (2019) E. G. del Río, J. J. Mortensen, and K. W. Jacobsen, Physical Review B 100, 104103 (2019).
* del Río _et al._ (2020) E. G. del Río, S. Kaappa, J. A. G. Torres, T. Bligaard, and K. W. Jacobsen, The Journal of Chemical Physics 153, 234116 (2020).
* Shuaibi _et al._ (2021) M. Shuaibi, S. Sivakumar, R. Q. Chen, and Z. W. Ulissi, Machine Learning: Science and Technology 2, 025007 (2021).
* Tong _et al._ (2018) Q. Tong, L. Xue, J. Lv, Y. Wang, and Y. Ma, Faraday Discussions 211, 31 (2018).
* Jinnouchi _et al._ (2020) R. Jinnouchi, K. Miwa, F. Karsai, G. Kresse, and R. Asahi, The Journal of Physical Chemistry Letters 11, 6946 (2020).
* Bartók _et al._ (2010) A. P. Bartók, M. C. Payne, R. Kondor, and G. Csányi, Physical Review Letters 104, 136403 (2010).
* Behler and Parrinello (2007) J. Behler and M. Parrinello, Physical Review Letters 98, 146401 (2007).
* Liu and Kitchin (2020) M. Liu and J. R. Kitchin, The Journal of Physical Chemistry C 124, 17811 (2020).
* Behler (2011) J. Behler, The Journal of Chemical Physics 134, 074106 (2011).
* Allen-Zhu, Li, and Liang (2019) Z. Allen-Zhu, Y. Li, and Y. Liang, in _Advances in Neural Information Processing Systems_, Vol. 32, edited by H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (Curran Associates, Inc., 2019).
* Lakshminarayanan, Pritzel, and Blundell (2016) B. Lakshminarayanan, A. Pritzel, and C. Blundell, CoRR (2016), arXiv:1612.01474v3 [stat.ML] .
* Olson, Wyner, and Berk (2018) M. Olson, A. Wyner, and R. Berk, in _Advances in Neural Information Processing Systems_, Vol. 31, edited by S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (Curran Associates, Inc., 2018).
* 192 (1924) Proceedings of the Royal Society of London. Series A, Containing Papers of a Mathematical and Physical Character 106, 441 (1924).
* Schran, Brieuc, and Marx (2021) C. Schran, F. Brieuc, and D. Marx, The Journal of Chemical Physics 154, 051101 (2021).
* Liu and Nocedal (1989) D. C. Liu and J. Nocedal, Mathematical Programming 45, 503 (1989).
* Henkelman, Uberuaga, and Jónsson (2000) G. Henkelman, B. P. Uberuaga, and H. Jónsson, The Journal of Chemical Physics 113, 9901 (2000).
* Kresse and Hafner (1993) G. Kresse and J. Hafner, Physical Review B 47, 558 (1993).
* Kresse and Furthmüller (1996) G. Kresse and J. Furthmüller, Computational Materials Science 6, 15 (1996).
* Perdew, Burke, and Ernzerhof (1996) J. P. Perdew, K. Burke, and M. Ernzerhof, Physical Review Letters 77, 3865 (1996).
* Perdew, Burke, and Ernzerhof (1997) J. P. Perdew, K. Burke, and M. Ernzerhof, Physical Review Letters 78, 1396 (1997).
* Larsen _et al._ (2017) A. H. Larsen, J. J. Mortensen, J. Blomqvist, I. E. Castelli, R. Christensen, M. Dułak, J. Friis, M. N. Groves, B. Hammer, C. Hargus, E. D. Hermes, P. C. Jennings, P. B. Jensen, J. Kermode, J. R. Kitchin, E. L. Kolsbjerg, J. Kubal, K. Kaasbjerg, S. Lysgaard, J. B. Maronsson, T. Maxson, T. Olsen, L. Pastewka, A. Peterson, C. Rostgaard, J. Schiøtz, O. Schütt, M. Strange, K. S. Thygesen, T. Vegge, L. Vilhelmsen, M. Walter, Z. Zeng, and K. W. Jacobsen, Journal of Physics: Condensed Matter 29, 273002 (2017).
* (39) Y. Yang, “NN ensemble relaxer,” https://github.com/yilinyang1/NN-ensemble-relaxer.
* Lee _et al._ (2019) K. Lee, D. Yoo, W. Jeong, and S. Han, Computer Physics Communications 242, 95 (2019).
* Pulay (1980) P. Pulay, Chemical Physics Letters 73, 393 (1980).
* Jacobsen, Stoltze, and Nørskov (1996) K. Jacobsen, P. Stoltze, and J. Nørskov, Surface Science 366, 394 (1996).
# Boosting Performance for Software Defined Networks from Traffic Engineering
Perspective
Mohammed I. Salman, Department of Computer Science and Engineering, Wright State University, Dayton, Ohio
Bin Wang, Department of Computer Science and Engineering, Wright State University, Dayton, Ohio
###### Abstract
Path selection algorithms and rate adaptation objective functions are usually
studied separately. In contrast, this paper evaluates several traffic
engineering (TE) systems for software defined networking, obtained by
combining path selection techniques with average delay and load balancing, the
two most popular TE objective functions. Based on TE simulation results, the
best TE system for software defined networks computes its paths with an
oblivious routing model and its sending rates with an average delay objective
function. Thus, we propose the RACKE+AD system, which combines path sets
computed using Räcke's oblivious routing with traffic splitting based on an
average delay objective function. This system outperforms current
state-of-the-art systems, maximizes throughput, achieves better network
resource utilization, and minimizes delay. The proposed system outperformed
SMORE and SWAN by 4.2% and 9.6% respectively, achieving 27% better utilization
and delivering 34% more traffic with 50% less latency compared with both
systems on the GÉANT network.
###### Index Terms:
Traffic engineering, routing schemes, software defined networking, oblivious
routing, simulation, optimization
## I Introduction
Centralized traffic engineering (TE) has gained much attention following new
software defined networking (SDN) developments. Large technology companies
such as Microsoft [1] and Google [2] have shifted to this technology over the
last few years.
Some previous studies have deviated from the standard SDN centralization
feature to improve scalability and fast adaptation to changing traffic
conditions, e.g. Contra [3], HULA [4], MP-HULA [5], and DASH [6] balance
traffic load entirely in the data plane to reduce controller overhead. These
solutions provide scalable systems with short response time, but degrade
performance, with resulting distributed solutions far from optimal [7].
Performance can also be affected by the traffic splitting objective function.
Some TE systems balance load over some paths by minimizing maximum link
utilization (MLU) [1, 8]. However, while minimizing MLU balances load and
enhances performance under low traffic, it degrades performance significantly
during peak hours since additional constraints are required to satisfy all the
demands [9]. Other TE systems use meta-heuristic [10] or heuristic [11]
solutions that can provide fast routing convergence, but the solutions are
sub-optimal since they may be only local optima. Prior to SDN, several studies
considered different objectives [12, 13]. To our knowledge, performance
impacts from these objectives and path selection strategies have not been
properly considered for SDN. Any TE system has two key ingredients: which set
of paths is used for forwarding traffic, and how to split traffic over these
selected paths. To the best of our knowledge, no previous study has focused on
boosting performance by optimizing combinations of these key ingredients;
instead, previous work has focused on either path selection algorithms or
traffic-splitting objective functions, but not both.
Many studies suggest that a set of shortest paths should be used in TE systems
to achieve reliable performance [1, 14, 15]. Unfortunately, choosing shortest
paths may exacerbate congestion for topologies with high link capacity
heterogeneity. Oblivious routing strategies (we use “oblivious routing” and
“Räcke’s oblivious routing” interchangeably) offer demand-independent routing
schemes, i.e., routing schemes that are oblivious to the demands
[16, 17, 18, 19]. Although oblivious routing schemes can be devised with
guaranteed congestion ratio, the resulting routing scheme is static and unable
to adapt to changing traffic conditions. Several studies have shown that route
allocations calculated using an oblivious routing model achieve comparable
quality to adaptive solutions [8, 20]. Selected paths from this approach are
capacity-aware and diverse, which improves not only system performance, but
also robustness.
The capacity-aware concept applies not only to path selection but also
to sending rates. For example, the Kleinrock delay objective function [21]
minimizes congestion by increasing the cost of highly utilized links, thus
avoiding highly congested links. The widely used load balancing (LB) objective
function [1, 8, 22, 23, 24] minimizes utilization (relative load) for all
links, and can also be considered a capacity-aware objective function. The
main goal of minimizing MLU is to accommodate proportional increases across
all demands [12]. However, demands for all source destination (SD) pairs do
not increase at the same rate, and it is not trivial to predict future
demands. Thus, sending rates should be not only capacity aware, but also
demand aware.
Therefore, we constructed a new simulator and, motivated by SMORE [8] and AD
objective functions [23, 24, 25], we propose RACKE+AD, a centralized, adaptive,
semi-oblivious, demand-aware, near-optimal TE system with static routes
allocated using Räcke's oblivious routing model [16, 18, 19] and dynamic rate
adaptation obtained by approximating the average delay (AD) objective
function. RACKE+AD outperformed SWAN [1] and SMORE [8] in throughput,
congestion, and latency evaluated on the GÉANT and ATT topologies.
Contributions. Critical contributions from the current paper are as follows:
1. We present a routing scheme that outperforms current state-of-the-art
techniques.
2. We introduce a new, efficient TE simulator that can test many routing
schemes simultaneously. The simulator is optimized for testing different route
selection algorithm and objective function combinations and can be easily
extended to test future TE systems.
3. We demonstrate that a TE system with static routes and adaptive traffic
splitting offers many benefits, including performance, throughput, and
resource utilization.
## II System Model
All TE systems comprise two phases: identifying a set of paths to be used to
forward traffic (path selection), and identifying splitting ratios to
distribute traffic over these paths (rate adaptation). Generally, routes
selected in the path selection phase are static, i.e., selected once and only
recalculated when the network topology changes. Path selection is usually
offline because updating end-to-end paths may take hundreds of seconds for
wide area networks. In contrast, the rate adaptation phase must update path
weights regularly due to frequent demand changes. However, the time required
to update path weights is considerably less than the time required to update
paths in the network. Among the many path selection algorithms and
rate adaptation objective functions, the aim of this research is to find the
combination of these two phases that best enhances network performance.
Path and Rate Adaptation Properties: Intuitively, independently chosen paths
may not provide better performance than dependently chosen paths. However,
SMORE showed that path selection has considerable effect on performance [8].
Selected paths should be low stretch to minimize latency and naturally load
balanced to provide better performance. Low stretch motivated us to compare
SMORE performance and latency against k-shortest paths (KSP) approaches. SMORE
is naturally load balanced since route computation in Räcke’s oblivious
routing model is not independent and incorporates some randomness, i.e., the
obtained route set may not be the same if we were to run the model again.
Thus, we expect different performance for each run. On the other hand, KSP
selected paths are not capacity aware, whereas Räcke’s model selected paths
are capacity-aware due to the natural load balancing. Performance can be
further boosted if we use the same concept for splitting traffic over the
selected paths, and we expect the best performance when both phases, path
selection and rate adaptation, are capacity aware.
### II-A Rate Adaptation Models
#### II-A1 Load Balance
The load balance (LB) objective is also known as minimizing MLU, Wozencraft
objective [26], or minimizing congestion, where LB minimizes the load on the
most congested link. Thus, the LB problem can be expressed as [24]
$$\begin{aligned}
\min\quad & F(x)=r && (1)\\
\text{s.t.}\quad & \sum_{p\in P_{d}}x_{dp}=h_{d}, && d\in D \quad (1a)\\
& \sum_{d\in D}\sum_{p\in P_{d}}\delta_{dpl}x_{dp}\leq c_{l}r, && l\in L \quad (1b)
\end{aligned}$$
where: $x_{dp}$ is the flow on path $p$ for demand $d$; $h_{d}$ is the volume
of demand $d$; $c_{l}$ is the capacity of link $l$; $P_{d}$ is the set of
candidate paths for demand $d$; and $\delta_{dpl}\in\{0,1\}$ is a link-path
indicator, with $\delta_{dpl}=1$ if path $p$ for demand $d$ uses link $l$,
and 0 otherwise.
Two constraints are applied. The demand constraint (1a) ensures that all
demands are satisfied over some paths. The capacity constraint (1b) ensures
that the load does not exceed the link capacity whenever $r\leq 1$ after
solving (1). The linear program formulation above is the final form of the
problem, whereas the original problem is non-linear. The reader is referred to
Chapter 4 of [24] for details on how the problem is converted to this form.
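As a concrete illustration, problem (1) can be written with the Gurobi Python
interface used by our simulator; the data structures below (demands as a dict,
paths as lists of link ids) are illustrative assumptions rather than the
simulator's actual schema.

```python
import gurobipy as gp
from gurobipy import GRB

def solve_load_balance(demands, paths, capacity):
    """demands: {d: h_d}; paths: {d: [path, ...]} with each path a set/list
    of link ids; capacity: {l: c_l}. Returns (r, flows)."""
    m = gp.Model("load_balance")
    m.Params.OutputFlag = 0
    x = {(d, i): m.addVar(lb=0.0)
         for d in demands for i in range(len(paths[d]))}
    r = m.addVar(lb=0.0, name="r")
    # (1a): every demand is fully routed over its candidate paths
    for d, h in demands.items():
        m.addConstr(gp.quicksum(x[d, i] for i in range(len(paths[d]))) == h)
    # (1b): the load on each link is at most c_l * r
    for l, c in capacity.items():
        m.addConstr(gp.quicksum(x[d, i]
                                for d in demands
                                for i, p in enumerate(paths[d]) if l in p)
                    <= c * r)
    m.setObjective(r, GRB.MINIMIZE)
    m.optimize()
    return r.X, {k: v.X for k, v in x.items()}
```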
#### II-A2 Average Delay
For this objective function, the delay on any network link can be modeled as
$y/(c-y)$, shown in Figure 1 (solid line). As with the LB objective,
the original AD problem is non-linear and cannot be formulated directly as a
linear program. Thus, the delay function is replaced by the piecewise linear
approximation (2) (Figure 1, dotted line):
Figure 1: Piecewise linear approximation of the delay function.
$$g(z)=\begin{cases}
\tfrac{3}{2}z & \text{for } 0\leq z<\tfrac{1}{3}\\
\tfrac{9}{2}z-1 & \text{for } \tfrac{1}{3}\leq z<\tfrac{2}{3}\\
15z-8 & \text{for } \tfrac{2}{3}\leq z<\tfrac{4}{5}\\
50z-36 & \text{for } \tfrac{4}{5}\leq z<\tfrac{9}{10}\\
200z-171 & \text{for } \tfrac{9}{10}\leq z<\tfrac{19}{20}\\
4000z-3781 & \text{for } z\geq\tfrac{19}{20}
\end{cases} \quad (2)$$
The linear program for this AD problem is
$$\begin{aligned}
\min\quad & F=\sum_{l=1}^{L}\frac{r_{l}}{c_{l}} && (3)\\
\text{s.t.}\quad & \sum_{p=1}^{P_{d}}x_{dp}=h_{d}, && d=1,2,...,D \quad (3a)\\
& \sum_{d=1}^{D}\sum_{p=1}^{P_{d}}\delta_{dpl}x_{dp}=y_{l}, && l=1,2,...,L \quad (3b)\\
& r_{l}\geq\tfrac{3}{2}y_{l}, && l=1,2,...,L \quad (3c)\\
& r_{l}\geq\tfrac{9}{2}y_{l}-c_{l}, && l=1,2,...,L \quad (3d)\\
& r_{l}\geq 15y_{l}-8c_{l}, && l=1,2,...,L \quad (3e)\\
& r_{l}\geq 50y_{l}-36c_{l}, && l=1,2,...,L \quad (3f)\\
& r_{l}\geq 200y_{l}-171c_{l}, && l=1,2,...,L \quad (3g)\\
& r_{l}\geq 4000y_{l}-3781c_{l}, && l=1,2,...,L \quad (3h)\\
& x_{dp}\geq 0, && p=1,2,...,P_{d},\; d=1,2,...,D \quad (3i)\\
& y_{l}\geq 0, && l=1,2,...,L \quad (3j)
\end{aligned}$$
which is considerably more accurate [24] than the Fortz et al. [27]
approximation.
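The piecewise bounds (3c)-(3h) translate directly into six linear constraints
per link. A hedged sketch in the same Gurobi style is shown below, assuming
`y` maps each link to its load expression from (3b); this is an illustration
of the formulation, not the simulator's exact code.

```python
import gurobipy as gp
from gurobipy import GRB

# (slope a, capacity multiple b) for each linear piece: r_l >= a*y_l - b*c_l
PIECES = [(1.5, 0.0), (4.5, 1.0), (15.0, 8.0),
          (50.0, 36.0), (200.0, 171.0), (4000.0, 3781.0)]

def add_average_delay_objective(m, y, capacity):
    """y: {l: linear expression for the load y_l from (3b)}."""
    r = {l: m.addVar(lb=0.0) for l in y}
    for l, c in capacity.items():
        for a, b in PIECES:
            m.addConstr(r[l] >= a * y[l] - b * c)   # constraints (3c)-(3h)
    m.setObjective(gp.quicksum(r[l] / capacity[l] for l in y), GRB.MINIMIZE)
    return r
```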
### II-B Paths Selection Algorithms
#### II-B1 Räcke’s oblivious routing model
Räcke’s oblivious routing model iteratively computes a distribution over
randomized routing trees using an approximation algorithm. Link weights are
adjusted for each iteration based on how much the link has been utilized in
previous routing tree sets. A routing tree has leaves corresponding to nodes
in the original topology. Thus, a path can be obtained between nodes $u$ and
$v$ in the original graph by finding corresponding leaves for $u$ and $v$ in
the routing tree.
However, paths for Räcke's oblivious routing model are computed without
considering demands; thus, they do not overfit to a specific scenario [8].
Similar to SMORE, we adopt a simple mechanism to cap the number of paths
for each SD node pair: we keep the 4 highest-weight paths for each SD pair.
#### II-B2 K-shortest paths
The KSP algorithm we use is based on Yen's algorithm, the most commonly used
KSP algorithm for TE. KSP is a generalization of the shortest path routing
problem; the algorithm returns the $k$ loopless shortest paths, ordered from
shortest to longest. We use four paths for each SD pair, i.e., $k=4$, as
sketched below.
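A compact way to realize this (an assumption on our part; the simulator's own
implementation is not shown here) is networkx's `shortest_simple_paths`, which
generalizes Yen's algorithm and yields loopless paths in order of increasing
length:

```python
from itertools import islice
import networkx as nx

def k_shortest_paths(G, src, dst, k=4, weight="weight"):
    """First k loopless shortest paths from src to dst, shortest first."""
    return list(islice(nx.shortest_simple_paths(G, src, dst, weight=weight), k))
```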
## III Simulator Framework
We built a simulator to model and test different TE scenarios, with particular
attention to efficiency, simplicity, and extensibility. Although many network
simulators have been proposed previously [28, 29, 30, 31], they are generally
not optimized for modeling TE approaches and/or do not provide ease of use or
extensibility. The proposed simulator is built in Python and can test many TE
models in parallel while recording statistics in the background. We use the
Gurobi optimizer [32], integrated with Python, to solve the linear programming
problems. The framework, data, and Räcke's oblivious routing model
implementation are all available online at
https://github.com/MohammedSalman/TE-SIMULATOR.
Simulator inputs (e.g. topology, demands, path selection algorithms,
objective functions) are all specified in a Python script or
configuration file. The simulation produces visualized throughput graphs for
each TE system. The graphs are updated periodically as throughput data becomes
available. Three time-series metrics for each TE system are recorded in the
background during simulation: overall throughput, congestion per link, and
latency per path. Topology and traffic matrices are provided as input files,
whose locations the user provides in the configuration file.
If the locations are unavailable, a random topology and traffic matrices are
generated according to provided parameters, including the number of nodes $N$,
the number of links $L$, and the traffic distribution matrix. A hypothetical
configuration sketch is shown below.
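The following configuration sketch is purely illustrative; the key names are
our assumptions, not the repository's actual schema.

```python
CONFIG = {
    "topology_file": "geant.graphml",   # e.g. from topology-zoo
    "traffic_matrix_file": None,        # None: generate with the gravity model
    "path_selection": ["RACKE", "KSP", "OPTIMAL"],
    "objectives": ["LB", "AD"],
    "path_budget": 4,
    "random_topology": {"num_nodes": 38, "num_links": 104},
}
```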
## IV Simulation Setup
### IV-A Evaluating Routing Scheme Quality
We evaluate TE systems based on congestion, throughput, and delay. Congestion
reflects how a TE system utilizes network resources, and we mostly care about
congestion when traffic demand exceeds link capacity. Thus, avoiding
congestion can be seen as preserving as much residual capacity as
possible, which is important for unexpected traffic surges that could cause
bottlenecks. Congestion has a negative impact on delay due to queuing. We
measure path delay by summing the queuing delay of each link along that path,
$l/(c-l)$, where $l$ is the absolute link load and $c$ is the link capacity.
Throughput is the proportion of total demand that is successfully delivered to
the destinations.
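In code, the per-path delay metric is simply the following (a minimal sketch;
the data structures are our assumptions):

```python
def path_delay(path_links, load, capacity):
    """Sum of per-link queuing delays l/(c - l) along a path.
    Assumes load[l] < capacity[l] for every link on the path."""
    return sum(load[l] / (capacity[l] - load[l]) for l in path_links)
```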
### IV-B Simulation Settings
Path selection algorithms. We use three approaches for path selection: (i)
paths selected using Räcke's oblivious routing model, (ii) paths selected
using the KSP algorithm, and (iii) all available simple paths. We refer to
these as RACKE, KSP, and OPTIMAL, respectively.
Rate adaptation objective functions. We use two objective functions for rate
adaptation: AD and LB. We refer to a routing scheme with paths selected using
KSP and rate adaptation using the LB objective function as KSP+LB. Similarly,
a model where the routing scheme selects all available paths and rate
adaptation uses AD is referred to as OPTIMAL(AD), etc. The RACKE+LB routing
scheme parallels that used in SMORE [8], and KSP+LB is an approximation of the
SWAN scheme [1]. Table I shows the TE systems used in our experiments.
TABLE I: Implemented TE algorithms

TE System | Description
---|---
KSP+LB | k-Shortest Paths (KSP) for paths, LB for weights
KSP+AD | k-Shortest Paths (KSP) for paths, AD for weights
RACKE+LB | Räcke's oblivious routing for paths, LB for weights
RACKE+AD | Räcke's oblivious routing for paths, AD for weights
OPTIMAL(LB) | All paths, LB for weights (best achievable load balance)
OPTIMAL(AD) | All paths, AD for weights (best achievable average delay)
Path budget. Similar to SMORE and SWAN, and to ensure a fair comparison, we
use 4 paths to evaluate each routing scheme. If Räcke's oblivious routing
model produces a routing scheme in which an SD pair has more than 4 paths, we
use the 4 highest-weight paths, as in SMORE.
Traffic matrix generation. We use the gravity model to generate the traffic
matrix (TM) [8, 17]. The gravity model approximates real-world TMs for a
production network [33]. TMs are normally deduced from the incoming/outgoing
flow of each forwarding device. Since that information is not available, we
use a capacity-based heuristic instead [17], as sketched below.
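A minimal sketch of such a capacity-based gravity model follows; the
normalization to a target total volume is our assumption.

```python
import numpy as np

def gravity_traffic_matrix(node_capacity, total_demand):
    """TM entry (s, d) proportional to capacity[s] * capacity[d] for s != d,
    scaled so that all entries sum to total_demand."""
    c = np.asarray(node_capacity, dtype=float)
    w = np.outer(c, c)
    np.fill_diagonal(w, 0.0)          # no self-demand
    return total_demand * w / w.sum()
```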
Topologies. We evaluate many TE systems on the ATT and GÉANT production
topologies (dataset available at http://www.topology-zoo.org/dataset.html). The GÉANT
network (European academic network) contains 38 nodes and 104 directed links
with heterogeneous capacities. Fig. 2 shows the link capacity distribution for
this network. Different TE systems may behave differently depending on link
capacity distributions. Shortest-path TE systems may introduce bottlenecks
under heterogeneous link capacities as many SD pairs compete for the same
resources.
Figure 2: Capacity distribution for GÉANT network (log scaled).
## V Results
We evaluated multiple routing schemes using criteria focused on:
* how each TE system performs regarding throughput and congestion, and
* SMORE and KSP TE system impacts on latency.
### V-A Throughput
Performance for many TE systems was evaluated on the GÉANT and ATT networks
with a path budget of 4 for a fair comparison with SMORE. Figures 3(a) and
3(b) show the throughput and corresponding throughput distribution for the
GÉANT network, respectively. Rate adaptation using the AD objective function
significantly increases throughput, achieving 4.2% and 9.6% improvement over
SMORE and KSP+LB, respectively, which confirms the effectiveness of path
selection using Räcke's oblivious routing algorithm.
(a) Throughput
(b) Throughput distribution
Figure 3: Throughput for GÉANT topology
As with the GÉANT topology, higher throughput was achieved on the ATT topology
using the AD rate adaptation objective function, while KSP had slightly better
throughput than Räcke's oblivious routing path selection algorithm (Figs. 4(a)
and 4(b)).
(a) Throughput
(b) Throughput distribution
Figure 4: Throughput on ATT topology
Räcke's oblivious routing model with LB rate adaptation performed 1.14% better
than KSP on average. This may confirm that AD favors shortest paths when all
links have the same capacity. However, there is no guarantee that SMORE will
always outperform (or underperform) KSP under the same conditions due to the
randomness of the oblivious routing scheme. Figure 5 shows throughput
distributions for KSP+AD and six different Räcke's oblivious routing TE
systems obtained by repeatedly recomputing the oblivious routing scheme.
Output from KSP+AD remained constant since KSP+AD is deterministic. Räcke's
oblivious routing scheme outperformed KSP in 5 runs and underperformed in 1
run. Thus, there is a worst-case scenario where KSP may perform better than
SMORE. The best run had 2.29% higher throughput than KSP+AD. Therefore, a
network operator may choose to run Räcke's scheme several times and choose the
best outcome.
Figure 5: Throughput distribution for ATT topology for 1 KSP and 6 Räcke
schemes
### V-B Congestion
Figures 6(a) and 6(b) show network congestion for the GÉANT topology using AD
and LB. The AD objective function scheduled link loads differently from LB.
Figure 6(a) shows the maximum congested link over time. All TE systems
scheduled link loads that exceeded some link capacities since we
deliberately fed the system high volume demands to investigate TE system
performance under stressed conditions. AD appears to have
higher MLU (Fig. 6(a)), whereas Fig. 6(b) shows that the AD objective
distributes link loads much better than LB. TE systems with LB caused a
bottleneck on more than 40% of links, whereas TE systems with the AD objective
caused a bottleneck on 13% of links. This lower congestion ratio for AD is the
main reason for the higher throughput (Fig. 3).
The LB objective always distributes traffic evenly across the available
routes, in the sense that all paths are used and all nodes send and receive
traffic with quite similar link utilization (relative load) on all links.
Thus, all links might be over-utilized under high demands when the system is
infeasible. On the other hand, AD deals more with delay and throughput, but
generates worse MLU than LB. However, MLU is not a true network-wide metric,
as it only considers congestion on a single link rather than the whole
network. Thus, the congestion distribution is a more reasonable metric, and we
only measured MLU to make that point since it is heavily used in the
literature. Thus, two factors contributed to better throughput and less
congestion: routes selected using Räcke's oblivious routing algorithm, and
the AD objective. Similar results were obtained for the ATT topology (Figs.
7(a) and 7(b)).
(a) Max link congestion, GÉANT topology
(b) CDF of link congestions, GÉANT topology
Figure 6: Max link congestion and links’ congestion distribution on GÉANT
topology
(a) Max link congestion, ATT topology
(b) CDF of link congestions, ATT topology
Figure 7: Max link congestion and links’ congestion distribution on ATT
topology
### V-C Latency
Figure 8 shows the link delay distribution with respect to the traffic
delivered within that delay for the GÉANT and ATT topologies. Latency for each
path was computed by summing the link delays along the path. TE systems with
the AD objective outperform those with LB, achieving significantly lower
latency. Figure 8(a) shows that LB and AD TE systems differ considerably on
the GÉANT topology. TE systems with the AD objective initially deliver
approximately 34% more traffic than those with the LB objective, with 50%
lower latency. RACKE+AD delivered slightly more traffic than OPTIMAL(AD)
since the OPTIMAL(AD) goal is to reduce total delay rather than maximize
throughput. Figure 8(b) shows that routing schemes with AD also delivered more
traffic than those with LB on the ATT topology. However, the gap between the
two groups is somewhat smaller than for the GÉANT topology (Fig. 8(a)) because
the ATT network links are more homogeneous, hence there are smaller
performance differences between individual links.
(a) GÉANT topology.
(b) ATT topology.
Figure 8: Latency distribution
## VI Related Work
The classic approach for TE problems is to solve them as a linear program (LP)
[24, 26], referred to as a multi-commodity flow problem, where the objective
function usually minimizes MLU; the approximation of the AD objective function
is not as widely used. However, this classical approach does not consider
decoupling the TE system phases because all available paths are provided as
inputs. Choosing all available paths has two limitations: more paths mean
more decision variables in the LP, and forwarding devices, such as routers and
switches, have limited TCAM memory, hence fewer paths are always
preferable to keep the routing table as small as possible.
The conventional approach adjusts link weights to find a good routing scheme
that can increase throughput or minimize congestion in the network [27, 34].
However, OSPF can never reach the optimum because it uses the equal cost
multi-path approach, which splits traffic evenly among available shortest
paths without rate adaptation. Furthermore, optimizing link weights is an
NP-hard problem.
Centralized TE approaches recently became viable due to software-defined
networking (SDN) developments, which clearly decouple the two TE
phases. SWAN [1] distributes traffic over a set of k-shortest paths using an
LP that reserves a small amount of scratch capacity on links to apply updates
in a congestion-free manner. SOL [22] uses a greedy approach to randomly
select paths with the promise that this random selection will help balance
traffic across the network. The latter approach is somewhat similar
to valiant load balancing [35] but can lead to unnecessarily long paths and
consequently increased latency.
Oblivious routing [16, 17, 18] has also been proposed to find a routing scheme
that performs well under all possible demands. The Räcke oblivious routing
model [16] guarantees a congestion rate that is never worse than O(log n) of
optimal, where $n$ is the number of nodes in the graph. However, despite the
guaranteed congestion ratio, this approach cannot outperform systems like SWAN
since it considers all possible traffic demands. On the other hand, the
oblivious routing approach has inspired several studies (including the current
study) to investigate a careful path selection approach. SMORE [8] was
inspired by Räcke’s oblivious routing model to carefully select paths that
increase TE system performance and robustness. Paths selected this way have
low stretch, which is important to decrease latency, and are capacity aware,
which is important for load balancing. The approach proposed in this paper
suggests that careful route selection alone is not sufficient to reach the
expected maximum performance, and that an objective function different from
the commonly employed LB can further enhance
performance. Hence we were inspired to compare LB and AD objective function
performance, and subsequently propose the RACKE+AD TE system using oblivious
routing for path selection with AD to achieve better link delay and network
performance.
## VII Discussion
This section discusses the reason behind the large gap in performance and
delay between the LB and AD objective functions, and one potential limitation
of this work. The LB objective function tends to make the relative load the same for
all links when all SD pairs are sending and receiving traffic. This can
enhance performance to some extent but causes bottlenecks between some SD
pairs under stressed conditions and unpredicted demands, with consequential
congestion loss. On the other hand, the AD objective function increases the
cost for highly utilized links to avoid utilizing them if other less heavily
utilized links are available. Thus, AD is more demand aware than LB and hence
offers better contribution to performance. However, solving LP for LB is much
faster than for AD, particularly for larger networks due to the increased
number of constraints and decision variables.
## VIII Conclusion
Although a few TE systems have previously been optimized using different path
selection algorithms, few studies have investigated performance enhancement by
testing many objective functions for splitting traffic. These phases have only
been studied in isolation previously, with no prior studies testing all
possible combinations to find a routing scheme with the best available
performance.
This paper proposed RACKE+AD TE system and validated its performance
advantages by testing many possible combinations. RACKE+AD selects routes
using Räcke’s oblivious routing model and the average delay objective
function. Although the intuitive AD goal is to minimize network delay, it also
provides surprisingly better throughput than minimizing MLU (commonly known as
load balancing).
Simulations confirmed the proposed RACKE+AD system outperformed state-of-the-
art routing TE systems in terms of throughput, congestion, and delay. We
discussed a caveat when running Räcke’s oblivious routing model, where
k-shortest paths may give better performance due to randomness in oblivious
routing, and also discussed the importance of excluding the maximum congestion
metric when evaluating TE systems, particularly systems that do not split
traffic based on the LB objective function.
## Acknowledgment
We would like to thank the anonymous reviewers for their helpful comments and
suggestions. We also would like to thank Praveen Kumar from Cornell University
for addressing all the questions we had regarding the SMORE traffic
engineering system.
## References
* [1] Chi-Yao Hong et al. “Achieving High Utilization with Software-Driven WAN” In _Proceedings of the ACM SIGCOMM 2013 Conference on SIGCOMM_ , SIGCOMM ’13 Hong Kong, China: Association for Computing Machinery, 2013, pp. 15–26 DOI: 10.1145/2486001.2486012
* [2] Sushant Jain et al. “B4: Experience with a Globally-Deployed Software Defined Wan” In _SIGCOMM Comput. Commun. Rev._ 43.4 New York, NY, USA: Association for Computing Machinery, 2013, pp. 3–14 DOI: 10.1145/2534169.2486019
* [3] Kuo-Feng Hsu et al. “Contra: A Programmable System for Performance-aware Routing” In _17th USENIX Symposium on Networked Systems Design and Implementation (NSDI 20)_ Santa Clara, CA: USENIX Association, 2020, pp. 701–721 URL: https://www.usenix.org/conference/nsdi20/presentation/hsu
* [4] Naga Katta et al. “HULA: Scalable Load Balancing Using Programmable Data Planes” In _Proceedings of the Symposium on SDN Research_ , SOSR ’16 Santa Clara, CA, USA: Association for Computing Machinery, 2016 DOI: 10.1145/2890955.2890968
* [5] Cristian Hernandez Benet, Andreas J. Kassler, Theophilus Benson and Gergely Pongracz “MP-HULA: Multipath Transport Aware Load Balancing Using Programmable Data Planes” In _Proceedings of the 2018 Morning Workshop on In-Network Computing_ , NetCompute ’18 Budapest, Hungary: Association for Computing Machinery, 2018, pp. 7–13 DOI: 10.1145/3229591.3229596
* [6] Kuo-Feng Hsu et al. “Adaptive Weighted Traffic Splitting in Programmable Data Planes” In _Proceedings of the Symposium on SDN Research_ , SOSR ’20 San Jose, CA, USA: Association for Computing Machinery, 2020, pp. 103–109 DOI: 10.1145/3373360.3380841
* [7] Mohammad Alizadeh et al. “CONGA: Distributed Congestion-Aware Load Balancing for Datacenters” In _Proceedings of the 2014 ACM Conference on SIGCOMM_ , SIGCOMM ’14 Chicago, Illinois, USA: Association for Computing Machinery, 2014, pp. 503–514 DOI: 10.1145/2619239.2626316
* [8] Praveen Kumar et al. “Semi-Oblivious Traffic Engineering: The Road Not Taken” In _Proceedings of the 15th USENIX Conference on Networked Systems Design and Implementation_ , NSDI’18 Renton, WA, USA: USENIX Association, 2018, pp. 157–170
* [9] C. Zhang et al. “Scalable Traffic Engineering for Higher Throughput in Heavily-loaded Software Defined Networks” In _NOMS 2020 - 2020 IEEE/IFIP Network Operations and Management Symposium_ , 2020, pp. 1–7 DOI: 10.1109/NOMS47738.2020.9110259
* [10] M.. Tajiki et al. “CECT: computationally efficient congestion-avoidance and traffic engineering in software-defined cloud data centers” In _Cluster Computing_ 21.4, 2018, pp. 1881–1897 DOI: 10.1007/s10586-018-2815-6
* [11] W. Quan et al. “Adaptive Transmission Control for Software Defined Vehicular Networks” In _IEEE Wireless Communications Letters_ 8.3, 2019, pp. 653–656 DOI: 10.1109/LWC.2018.2879514
* [12] Eric Gourdin and Olivier Klopfenstein “Comparison of Different QoS-oriented Objectives for Multicommodity Flow Routing Optimization” In _Proceedings of the International Conference on Telecommunications (ICT_ , 2006
* [13] Simon Balon, Fabian Skivée and Guy Leduc “How Well Do Traffic Engineering Objective Functions Meet TE Requirements?” In _NETWORKING 2006. Networking Technologies, Services, and Protocols; Performance of Computer and Communication Networks; Mobile and Wireless Communications Systems_ Berlin, Heidelberg: Springer Berlin Heidelberg, 2006, pp. 75–86
* [14] B. Fortz, J. Rexford and M. Thorup “Traffic engineering with traditional IP routing protocols” In _IEEE Communications Magazine_ 40.10, 2002, pp. 118–124
* [15] Zhanyou Ye, Shi Hong Marcus Wu and Themistoklis Prodromakis “Computing Shortest Paths in 2D and 3D Memristive Networks” In _Memristor Networks_ Cham: Springer International Publishing, 2014, pp. 537–552 DOI: 10.1007/978-3-319-02630-5˙24
* [16] H. Racke “Minimizing congestion in general networks” In _The 43rd Annual IEEE Symposium on Foundations of Computer Science, 2002. Proceedings._ , 2002, pp. 43–52
* [17] D. Applegate and E. Cohen “Making Routing Robust to Changing Traffic Demands: Algorithms and Evaluation” In _IEEE/ACM Transactions on Networking_ 14.6, 2006, pp. 1193–1206 DOI: 10.1109/TNET.2006.886296
# A Multi-Platform Study of Crowd Signals Associated with Successful Online Fundraising
Henry K. Dambanemuya
Northwestern University
Evanston, IL 60208
<EMAIL_ADDRESS>
Emőke-Ágnes Horvát
Northwestern University
Evanston, IL 60208
<EMAIL_ADDRESS>
###### Abstract
The growing popularity of online fundraising (aka “crowdfunding”) has
attracted significant research on the subject. In contrast to previous studies
that attempt to predict the success of crowdfunded projects based on specific
characteristics of the projects and their creators, we present a more general
approach that focuses on crowd dynamics and is robust to the particularities
of different crowdfunding platforms. We rely on a multi-method analysis to
investigate the _correlates_ , _predictive importance_ , and _quasi-causal
effects_ of features that describe crowd dynamics in determining the success
of crowdfunded projects. By applying a multi-method analysis to a study of
fundraising in three different online markets, we uncover general crowd
dynamics that ultimately decide which projects will succeed. In all analyses
and across the three different platforms, we consistently find that funders’
behavioural signals (1) are significantly correlated with fundraising success;
(2) approximate fundraising outcomes better than the characteristics of
projects and their creators such as credit grade, company valuation, and
subject domain; and (3) have significant quasi-causal effects on fundraising
outcomes while controlling for potentially confounding project variables. By
showing that universal features deduced from crowd behaviour are predictive of
fundraising success on different crowdfunding platforms, our work provides
design-relevant insights about novel types of collective decision-making
online. This research thus inspires potential ways to leverage cues from the
crowd and catalyses research into crowd-aware system design.
_Keywords_ Fundraising, Crowdfunding, Collective Behaviour, Group Decision-Making
## 1 Introduction
Increasingly, people recognise crowdfunding as an enabler of a variety of
online fundraising activities that range from pro-social campaigns and
supporting creative works to sizeable equity investments [1, 2, 3, 4, 5, 6,
7]. This growing phenomenon is effective in reducing barriers in access to
capital by eliminating the effects of geographic distance between project
creators and funders [8] and reducing the transaction costs of making such
fundraising possible. Crowdfunding is also praised for promoting
entrepreneurship by providing new opportunities to access funding [3, 9] and
means to improve the livelihoods of people living in emerging economies [10,
9]. In the wake of the recent novel Coronavirus pandemic, online fundraising
has received heightened attention from many civic and international
organisations that harnessed the power of crowdfunding to support their
efforts due to a lack of traditional fundraising. For instance, the World
Health Organisation (WHO) launched its first-ever crowdfunding campaign (the
COVID-19 Solidarity Response Fund for WHO: https://covid19responsefund.org/en/),
and several other eminent GoFundMe campaigns supported some of the most
impacted countries, such as Italy (see “A record-breaking crowdfunding campaign
is helping Italy fight Covid-19”: https://qz.com/1836221/record-breaking-gofundme-campaigns-are-helping-italy-fight-covid-19/).
The growing popularity of online fundraising has attracted significant
research on the subject. Most studies have tried to identify factors
associated with successful fundraising, focusing on a single platform (e.g.
[8, 11, 12]), despite huge market variations both geographically and in the
type of the fundraising effort. Existing research, therefore, often does not
automatically generalise to other platforms and has resulted in conflicting
findings concerning which project and creator determinants are associated with
success. Furthermore, most prior studies have attempted to predict success
based on various attributes of the projects [13, 4, 14], interactions with the
crowd [15], the creators [16], and their networks [17, 18, 19, 20, 21].
However, ad-hoc design and policy changes on crowdfunding platforms can
confound all these factors [22]. Hence, the social computing community needs
controlled approaches to systematically investigate the effects of project
attributes and crowd behaviour on fundraising success. We thus present a
general approach that is robust to the particularities of different
crowdfunding platforms and markets and focuses on the crowd dynamics that
contribute capital. This idea is backed up by evidence for the importance of
successfully attracting funders early in the campaign [23, 24] and the role of
subsequent herding in reaching the target amount [1, 25]. The broad spectrum
of projects and creators, the quick pace of funding and untrained crowds using
comparatively sparse data when selecting worthy projects are factors that
substantially complicate decision-making in crowdfunding’s low information and
high-risk situations. In this context, most funders rely on collective cues
when deciding to contribute to a project. Due to the significant signalling
among crowd members, when and how much capital people provide becomes a
crucial descriptor of the decision-making dynamics. Accordingly, previous work
has found, on individual platforms, that simple features describing crowd
dynamics can be significant markers of fundraising success [11, 26, 27]. We
build on this observation by systematically investigating the dynamics of
crowd behaviour across widely different crowdfunding platforms and markets
through a multi-method analysis that relies on three different empirical
methods to demonstrate the robustness of the crowd features. Our three main
contributions are:
1. We investigate similarities and differences between a charity platform that
collects donations for public schools (www.donorschoose.org), a dominant
crowdfunding site that connects borrowers with lenders (www.prosper.com), and
a leading equity crowdfunding platform that offers investors the opportunity
to buy shares in start-ups (we are unable to disclose the name of this
platform due to our Non-Disclosure Agreement (NDA) with them). This is a
unique multi-platform and cross-market study on crowdfunding success.
2.
We systematically test a set of intuitive and universal features that describe
funder dynamics (crowd features) and show their value in determining
fundraising success within and across the studied platforms that span
different markets, geographies, and fundraising efforts.
3.
To substantiate our analysis, we develop a framework that uses an innovative
combination of methods for evaluating feature correlations and importance in a
human-interpretable machine learning model as well as in matching samples
along multiple dimensions to provide a causal understanding of the effect of
crowd features.
Our paper proceeds by first computing a set of crowd features that describe
collective behaviour in a variety of settings that involve decision-making
online. We first investigate correlation-based associations between individual
crowd features and fundraising success. In combination with characteristics of
projects that are visible to funders on each platform (project features), we
then perform supervised classification to predict fundraising outcomes and
compare the predictive performance of crowd features to that of project
features. Our results show that crowd features are significantly correlated
with and better at approximating fundraising success across different online
crowdfunding platforms than project features. However, since project features
have been shown in prior research to determine fundraising success [13, 4, 15,
16, 28] and are observable to funders on the crowdfunding platforms, we rely
on a quasi-experimental matching analysis to isolate and comparatively assess
the effects of crowd features on fundraising success while controlling for the
potential confounding influence of the observable project features. In
particular, we use Coarsened Exact Matching (CEM) [29] to examine the causal
effects of crowd features in relation to their specific crowdfunding platform
settings and show that the crowd effects are robust to platform heterogeneity.
By demonstrating that universal features deduced from the behaviour of the
contributing crowd are correlated with and predictive of fundraising success,
even when controlling for project features observable by the crowd, our study
provides empirical evidence of crowd dynamics features that are important in
the funding success of projects across different platforms and robust to the
particularities of the different online markets and platforms. Our work thus
contributes not only to crowdfunding, crowdsourcing, and social computing
literature but also to the growing body of knowledge on the science of
success. We provide empirical insights on the emergence of crowd dynamics that
eventually determine success in computer-supported cooperative work where
collective cues underpin decision-making, thereby promoting research-based,
crowd-aware platform design.
## 2 Related Work: Dynamics of Crowdfunding
Crowdfunding means raising money for a venture, cause, project, or
organisation by drawing on relatively small contributions from a large group
of individuals through a common online platform and without standard financial
intermediaries [5]. Online crowdfunding emerged in the early 2000s through
platforms such as DonorsChoose (2000), ArtistShare (2001), Prosper (2005),
IndieGoGo (2007), and Kickstarter (2009). Since then, these platforms have
attracted significant research attention in social computing and beyond (e.g.
[30, 31, 4, 14, 5, 3, 2, 32, 33]). Selecting from this vast literature, in
this section, we discuss the current understanding on why project creators
choose to crowdfund and what motivates diverse crowds to contribute towards
crowdfunding projects. We further review the literature on known indicators of
project success. We first focus on specific characteristics associated with
successful fundraising and then detail findings that might generalise across
different platforms.
For project creators, crowdfunding provides new opportunities to receive
capital [31] especially for demographics with limited access to resources from
traditional lending institutions [10]. In the wake of the 2008 financial
crisis, for example, crowdfunding became a viable solution for early-stage
companies struggling to obtain funding through conventional financing [9].
Project creators may also engage in crowdfunding for (1) establishing long-
term interactions with funders that extend beyond the financial transaction
and (2) receiving public validation for their projects and fundraising
abilities [31]. Existing studies further show that crowdfunding platforms also
range in terms of the motivations and goals of funders. For example, some
funders are attracted to these platforms as a means of demonstrating their
personal support to creators’ projects [31], in expectation of some kind of
reward [31, 3], seeking to support an important cause with no expectations of
reward [31], or making a political statement (for instance via
www.crowdpac.com). Stark differences in motivations both for project creators
and funders have given rise to various marketplaces and different crowdfunding
models (e.g. lending, charity, equity, and reward-based crowdfunding). This
heterogeneity in the nature of the fundraising effort raises the question:
Which findings from individual platforms hold for crowdfunding in general?
Despite the increasing public interest in crowdfunding, not all projects
succeed. In fact, most projects fail to reach their fundraising goal by
significant amounts and, typically, it is only by small margins that
successful projects meet their goal [5, 12]. Identifying factors that lead to
successful fundraising and predicting the probability of each project’s
success therefore remains one of the most important challenges in crowdfunding
research. Several studies have linked fundraising success to the nature of the
projects. For instance, across platforms like Kickstarter and Invesdor Oy
(reward and equity platforms, respectively), the type of project matters
because people tend to support efforts that reflect their cultural values or
further causes they care deeply about [34, 5]. As we would expect, the
fundraising goal correlates with fundraising success as indicated by research
on the reward-based platforms Kickstarter, Indiegogo, Ulule, Eppela, and
Demohour. Specifically, projects that request large amounts of money are more
likely to fail than modest requests [5, 35, 36, 37]. Additionally, the framing
of the request has also been linked to project success on the lending platform
Prosper, on Kickstarter, and on the two charity platforms DonorsChoose and
GoFundMe [38, 14, 39]. Furthermore, according to research based on
Kickstarter, Prosper, and AngelList (an equity platform) the visibility of the
project helps with attracting funders. In particular, social media posts [23,
18, 40], the size of the creators’ social network [5, 36, 41, 19, 42, 43], and
their reputation [16] increase chances of fundraising success. These studies
indicate that various characteristics of projects, especially some that are
specific to the platform, have an impact on potential funders’ decision-
making.
There is a general consensus in crowdfunding literature that identifiable
signals of quality play a key role in attracting contributions to projects.
However, different platforms have different ways to signal project quality.
For instance, project quality is often derived from descriptions that might
include financial information, e.g. income statements may signal transparency,
credibility, and feasibility [44, 34]. Additionally, media content on the
fundraising page has also been linked with perceived project quality, mainly
on Kickstarter. Particularly, a well-prepared concise video can quickly
capture the attention of the audience [14, 5], activity in terms of project
updates might indicate productivity [15, 45], and funders’ comments can
suggest engagement and increase accountability among project creators [45].
Most importantly, research also supports that collective cues play a crucial
role in funders’ evaluation of individual projects. On the one hand, there is
evidence for strong marketplace influences on funders’ behaviour: other
projects available on crowdfunding websites can draw money away [46], while
the structure and design of the platform also affects crowd engagement [22].
On the other hand, in line with findings about the importance of information
cascades and herding in successful fundraising [1, 25], most funders interpret
the amount [47] and arrival time [32, 24, 27, 23] of the first contributions
as indicators of project quality. This crucial signalling among crowd members
has triggered investigations into identifying descriptors of crowd dynamics
that are associated with high-quality projects and successful crowdfunding
[11, 26, 27, 33]. Yet, it remains unclear how important these crowd features
are as determinants of success on different crowdfunding platforms after
taking into account both general and platform-dependent project features.
Existing research points to the need for a study that is based on multiple
crowdfunding platforms as this might clarify contradictions in the literature
about the importance of specific aspects either related to qualities of the
project or the crowd dynamics among funders. For instance, a few studies have
found a negative correlation between the duration of crowdfunding campaigns
and their success [5, 35, 34]. While these studies suggest that longer
fundraising campaigns may convey a message of indecisiveness and inability to
deliver, Cordova et al. [37] found that longer campaigns may also increase the
likelihood of project success as the contributions will eventually add up to
or even exceed the requested amount. Another example is the inconclusive
finding about the role of activity on social media networks in fundraising
success. Specifically, while some evidence suggests that project creators’
social media posts are related to campaign success on Kickstarter [23, 18,
40], research on Indiegogo, for instance, suggests otherwise [35]. Further
work on Kickstarter observes that, although linked to the amount of early
contributions, social media connections do not matter [24]. Possible
explanations for the conflicting nature of evidence from these studies are
that (1) they are based on different crowdfunding platforms and/or (2)
different research methods were applied in each study. By conducting the same
analysis on data from multiple crowdfunding platforms, we hope to resolve some
of the contradictions in the literature and provide a robust assessment of the
universality of crowd features.
## 3 Data: Crowdfunding Platforms & Markets
We obtained data from three crowdfunding platforms that represent different
markets both in terms of geography (US and UK) and the market model, i.e.
lending, equity, and charity crowdfunding (several studies have looked at
reward-based crowdfunding, such as Kickstarter; our analysis excludes the
reward model due to the lack of fine-grained data about crowd dynamics on such
platforms). These different platforms capture the heterogeneity in funders’
motivations and goals which vary by the context and nature of the funding
effort in each market model. For example, lenders and investors may be
motivated by financial rewards [48, 49, 20, 50], whereas donors on charity
platforms may be motivated by reputation, self-image, or empathy-related
rewards [51, 52]. Additionally, the crowdfunding platforms differ in terms of
their uses (e.g. paying for financial, entrepreneurial, or social ventures)
and impacts (e.g. democratisation of financial services or greater
availability of funding for pro-social projects) [6]. Across the different
crowdfunding platforms, we further observe significant variation in the
information that is visible to funders, for example, project details that
inform potential contributors about the attributes of the project (e.g. auto
loan, request for classroom book supplies, or business expense) as well as the
characteristics of the project creators (e.g. their gender or income). Most
notably, the data from the different platforms come from very different time
periods (see Table 1). The temporal component is further compounded by the
fact that, at any considered time, different crowdfunding platforms and
markets will be experiencing different levels of adoption and maturity.
Considering the time differences across the platforms, any consistent
reliability of crowd dynamics features in predicting project success would be
unexpected and extremely interesting. Rather than provide a comparison between
the different platforms, in this section, we introduce the three crowdfunding
market models through representative platform data sets and describe important
project variables that are available for prospective funders. In addition to
identifying the project variables that are observable by funders on each
platform, we further compute a set of variables deduced from the behaviour of
the funding crowd and show in Section 5 that features pertaining to crowd
dynamics are significantly correlated with and predictive of fundraising
success even after we control for the potential confounding influences of the
observable project variables.
#### Lending Model
The Peer-to-Peer (P2P) lending model allows borrowers to receive varying
amounts of commercial interest-based unsecured loans from crowd members [17,
41, 11, 38, 19]. The contributed funds are offered as a loan to be paid within
a given time-frame and at a specified interest rate. We obtained crowd lending
data from Prosper, the oldest P2P lending marketplace in the US. The lending
data comprise 53,768 lenders who have collectively made 2,877,407
contributions towards 143,549 loans. The P2P platform attracts borrowers and
lenders from all walks of life seeking non-collateral loans or small
investments outside traditional financial institutions. For each project, the
data describe characteristics of the loan, such as the requested _amount_ ,
_interest rate_ on loan, and _monthly payment_. Included in the project
information are attributes of the borrower, such as their _Prosper score_ i.e.
a custom risk score built using historical Prosper.com data and allows the
platform to maintain consistency when evaluating individual loan requests.
There is also information about the _credit grade_ (i.e. the loan’s estimated
loss rate range), _debt-to-income ratio_ , and whether the borrower is a
_homeowner_ or not. These project features are shown to lenders on the
platform to signal each borrower’s creditworthiness. Additionally, these
features are commonly used by traditional financial institutions to make
expert lending decisions based on borrowers’ creditworthiness.
#### Equity Model
In equity crowdfunding, funders are investors entitled to shares of future
profits in an entrepreneurial venture. Equity crowdfunding expanded rapidly
after the 2008 financial crisis, but has grown slowly compared to peer-to-peer
lending due to high levels of government regulation on securities as well as
potential risks for fraud and the need for investor protection [9, 7, 34, 21,
25]. We obtained equity crowdfunding data from one of the leading platforms in
the UK and EU. The data comprise 21,907 investors who have collectively made
77,419 investments into 740 campaigns. On this platform, project creators
include start-ups and early-stage companies seeking capital. Since projects
are large capital campaigns, funders comprise both small and large
institutional investors as well as wealthy individual investors. For each
project, the data describe the requested _amount_ , _equity percentage_
offered in return of investment, and the company’s _valuation_ prior to the
investment. The data also describe the _number of entrepreneurs_ , whether the
entrepreneurs have passed the finance _quiz_ to make sure that investors
understand the risks of investing in startups and other growth-focused
businesses, and whether the equity investment requires investor
_self–certification_ , a process that requires investors to report their
income and net worth as well as the amount of their other crowdfunding
investments to reveal individual investor limits. Additionally, the project
data describe whether the equity campaign is compliant with the UK’s
Enterprise Investment Scheme (_EIS_) and Seed Enterprise Investment Scheme
(_SEIS_) which are tax incentive schemes for UK taxpayers who invest in
qualifying early-stage businesses that are permanently established within the
UK.
#### Charity Model
Some online fundraising efforts follow a charity model whereby funders serve
as philanthropists who expect no material or financial return for their
donations [47, 4, 46, 32]. We obtained charity crowdfunding data from
DonorsChoose, one of the earliest crowdfunding platforms that allows
individuals to make donations towards public school classroom projects. The
charity data comprise 850,498 donors who have made 1,004,658 donations to
215,825 public school projects from pre-K to grade 12. The projects are posted
by teachers from different parts of the US and from communities in rural,
urban, and suburban areas. They span several subject areas from math and
science to literacy and language. For each project, the data describe the
requested _amount_ , teacher’s _gender_ , students’ _grade level_ , _community
type_ , _subject area_ , and the _type of resource_ that the donations are
intended for (e.g. books, technology equipment, art supplies, or school
trips). Similar to the other platforms we study, these project details are
visible to donors (i.e. funders) on the site.
The notable differences between these crowdfunding markets and platforms are
reflected in the different project features listed above. From the different
project features observable on each platform, funders then decide what
projects to support based on their expectations of each project’s success
deduced from the project variables that they believe to be associated with
success. However, these project variables do not capture the role of funders’
contribution patterns towards project success [1, 25]. In the next section, we
therefore describe the crowd features that characterise funders’ behaviour
across all of these platforms and provide details about our methods for (1)
investigating the relationship between the crowd features and fundraising
success, (2) predicting fundraising success and comparing the relative
importance of project and crowd features in the predictive task, and (3)
estimating the quasi-causal effects of the crowd features on fundraising
success.
## 4 Predicting Successful Fundraising
On all three platforms, we only considered projects that were either fully
funded or failed to meet their funding goal. We excluded active projects,
DonorsChoose projects that received funds re-allocated from failed projects
(as these did not reflect true funder activity), and Prosper projects that had
no credit information (the platform stopped showing borrowers’ credit grade to
funders in 2009, hence we focus on projects posted before that time;
throughout our analyses, credit grade is an important variable of
creditworthiness because it is the most common indicator of financial health
used by lenders in traditional financial settings). Table 1 provides a
high-level summary of the data.
Table 1: Summary of our data collected from three different crowdfunding platforms. As shown, data were collected across multiple years, but at different times for the three platforms. The crowdfunding platforms also differ in terms of the number of projects, contributors, and contributions (i.e. loans, investments, and donations made to various projects). The bottom rows summarise the computed crowd features (mean, std) for each platform.

| Variable | Lending | Equity | Charity |
|---|---|---|---|
| Period | 2005 - 2008 | 2013 - 2015 | 2002 - 2016 |
| Projects | 143,549 | 740 | 215,825 |
| Contributors | 53,768 | 21,907 | 850,498 |
| Contributions | 2,877,407 | 77,419 | 1,004,658 |
| Appeal | 19.041 (40.318) | 104.620 (175.694) | 4.655 (4.906) |
| Momentum | 1.100 (0.876) | 1.080 (0.505) | 1.023 (0.595) |
| Variation | 0.384 (0.513) | 2.416 (1.854) | 0.516 (0.495) |
| Latency | 0.458 (0.419) | 0.289 (0.324) | 0.616 (0.236) |
| Engagement | 7.029 (2.221) | 52.449 (38.681) | 33.557 (44.384) |
### 4.1 Crowd Determinants of Fundraising Success
In addition to the project features identified above, we computed general
crowd features that characterise the collective dynamics of fundraising that
ultimately decide what is worthy of success. In contrast with old theories
claiming that genius and personal performance are behind outstanding
achievements in science, technology, business, and the arts [53, 54, 55, 56,
57], there is increasing evidence for the collective nature of success [58,
59]. Within this new line of research, there is indication that the crowd-
based valuation process is to a great extent random [60] and that arbitrary
initial advantages are inflated by positive feedback. We believe this
collective aspect can help us navigate the increasing number and diversity of
indicators conceivable and available via Web-based platforms to approximate
fundraising success via the broad appeal, crowd engagement, as well as the
variation and temporal patterns in fundraising activity. We therefore compute
the following five crowd features based on arguments from prior literature:
* •
Intuitively, the more funders a project attracts, the more likely that it will
meet its funding goal. Hence, we count the number of unique funders of each
project and consider that to be the project’s _appeal_. We expect appeal to
correlate with success as it has been shown to in previous studies [3, 31].
* •
Temporal aspects of funders’ activity, such as the arrival times of individual
contributions might also signal confidence in the project’s merit [11].
Accordingly, our next feature focuses on the speed at which funds are
accumulating, as a reflection of how fast funders make their determination to
contribute. We measure the _momentum_ of contributions through the coefficient
of variation for the times between consecutive contributions, i.e. the ratio
of the standard deviation to the mean of these time intervals.
* •
Along a similar argument, we also measure the _variation_ in contribution
amounts using a coefficient of variation. The main idea here is that the
amount of others’ contributions visible to funders can also influence the
behaviour of the crowd [26]. This feature signals potential herding mechanisms
that have been found to influence contribution dynamics on lending platforms
[11].
* •
Further, prior work has also found that early contributions to crowdfunding
may signal the crowd’s interest in a project thereby attracting other funders
to contribute as well [32]. To measure this temporal aspect, we compute each
project’s _latency_ as the difference between the time of the first
contribution and the time that the project was posted.
* •
Finally, for each project, we compute a crowd _engagement_ feature as the time
between the first and last contribution when the project reached either its
fundraising deadline or goal. While in some cases this measure may correlate
with project duration, it captures only the time frame in which funders were
actively contributing to a given project.
For all projects on the three crowdfunding platforms, we computed these five
features. Summary statistics per platform are shown in Table 1.
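The definitions above translate directly into code. The following is a minimal sketch, assuming each project arrives as sorted contribution timestamps, amounts, and funder IDs; the time units and any normalisation (Table 1 suggests, for instance, that latency may be expressed relative to campaign duration) are our assumptions for illustration, not the paper's verbatim implementation.

```python
import numpy as np

def crowd_features(times, amounts, funder_ids, posted_at):
    """Compute the five crowd features for a single project.

    times      -- sorted contribution timestamps
    amounts    -- contribution amounts aligned with `times`
    funder_ids -- one funder ID per contribution
    posted_at  -- time the project was posted
    """
    times = np.asarray(times, dtype=float)
    amounts = np.asarray(amounts, dtype=float)
    gaps = np.diff(times)  # time intervals between consecutive contributions

    def cv(x):  # coefficient of variation: std / mean
        return float(np.std(x) / np.mean(x)) if x.size and np.mean(x) > 0 else 0.0

    return {
        "appeal": len(set(funder_ids)),       # number of unique funders
        "momentum": cv(gaps),                 # CV of inter-contribution times
        "variation": cv(amounts),             # CV of contribution amounts
        "latency": times[0] - posted_at,      # wait before the first contribution
        "engagement": times[-1] - times[0],   # active fundraising window
    }
```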
### 4.2 Methods
In this section, we introduce the methods that make up our multi-method
analysis. We used Pearson’s correlation to investigate the relationship
between crowd features and crowdfunding success. We then combined crowd
features with project features provided by each platform to train and evaluate
the performance of Random Forest classifiers in predicting fundraising success
[61]. Essentially, these were binary classifications aiming to differentiate
between funded and failed projects based on available features. Since the
ranges of values of the features vary widely, we use min-max normalisation to
scale the features to a fixed range from 0 to 1. As a result of this
pre-processing step, each feature contributes approximately equally to the
learning process and the model becomes less sensitive to the
relative scales of the features. We also tried other classification methods such
as Logistic Regression, Naive Bayes, and Adaptive Boosting. The results with
these alternative methods were qualitatively indistinguishable from the ones
obtained with Random Forest; the latter is, moreover, interpretable and allows
for a better understanding of feature importance, which becomes crucial when
comparing the relative importance of crowd features to that of project
features. Since crowd lending and charity platforms have a large class
imbalance (20.2% and 99.4% funded projects, respectively), we under-sampled
the majority class and performed the classification task on balanced data on
all platforms. In all experimental setups, we perform $k$-fold cross
validation with hold-out samples. Specifically, for each platform, we randomly
divide the data into $k=5$ subsets. Each time, one of the $k$ subsets is used
as the test set (hold-out samples) and the other $k-1$ subsets are combined to
form a training set. Then we compute the average accuracy, precision, recall,
F1-Score, and area under the receiver operating characteristic curve (AUC)
across all $k$ trials. We further evaluated the importance of individual and
grouped (project vs crowd) features in predicting fundraising success using
the Random Forest permutation importance (piRF) score which is measured as the
relative increase in the model’s prediction error after permuting the
individual or grouped features’ value. We rely on Scikit-Learn’s Python API
for the Random Forest implementation [62].
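A condensed sketch of this pipeline is shown below for a single under-sampling iteration (the paper averages over 10,000); `X` and `y` are placeholders for the feature matrix and the binary funded/failed labels, and hyper-parameters beyond the 100 estimators and 5 folds stated above are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import StratifiedKFold
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)

# Randomly under-sample the majority class to balance funded/failed.
idx0, idx1 = np.flatnonzero(y == 0), np.flatnonzero(y == 1)
n = min(len(idx0), len(idx1))
keep = np.concatenate([rng.choice(idx0, n, replace=False),
                       rng.choice(idx1, n, replace=False)])
X_bal = MinMaxScaler().fit_transform(X[keep])  # scale each feature to [0, 1]
y_bal = y[keep]

# 5-fold cross-validation with hold-out folds.
for train, test in StratifiedKFold(5, shuffle=True, random_state=0).split(X_bal, y_bal):
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_bal[train], y_bal[train])
    # piRF: relative increase in prediction error when a feature's
    # values are permuted on the held-out fold.
    piRF = permutation_importance(clf, X_bal[test], y_bal[test],
                                  n_repeats=10, random_state=0)
    print(clf.score(X_bal[test], y_bal[test]), piRF.importances_mean)
```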
While Random Forest permutation importance scores provide a systematic ranking
of crowd and project features based on how predictive they are of fundraising
success, they cannot help understand _why crowdfunded projects with similar
covariates sometimes end up with dissimilar outcomes_ or _identify differences
in crowd behaviour that may explain such seemingly arbitrary outcomes_. To
investigate this question, we rely on Coarsened Exact Matching (CEM) which is
a widely-used method for deriving causal inferences from observational data
where the treatment variable is not randomly assigned [29]. Specifically, CEM
provides a quasi-experimental approach for assessing the effects of crowd
dynamics features on fundraising success while controlling for the confounding
influence of project features that are associated with funding success. Common
in the social sciences, this method has been used effectively to investigate
the effect of race in online dating [63], the impact of temperature and
precipitation variability on the risk of violence in sub-Saharan Africa [64],
and the influence of women’s inner social circles on their leadership success
[65].
The CEM approach begins by identifying and grouping projects with similar
platform-specific features observable by funders, but with varying crowd
features. Lending crowdfunding projects were matched based on the requested
amount, monthly loan payment, interest rate, Prosper score, credit grade,
debt-to-income ratio, and homeownership. Equity projects were matched
according to the requested amount, equity percentage offered, pre-money
valuation, number of entrepreneurs, investor self-certification and quiz
status, and EIS and SEIS compliance. Charity projects were matched based on
the requested amount, resource type, teacher’s gender, students’ grade level,
subject area, and community type. We then rely on CEM’s automated algorithm
for “coarsening” these project features to discrete values or “bins” and
matching projects with exact “bin signatures” thereby generating groups of
similar projects.
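To make the procedure concrete, here is a minimal pandas sketch of the coarsening-and-matching step. CEM's automated algorithm selects bin widths itself; the fixed number of bins below, and the assumption that `df` holds one row per project with a binary `funded` column, are simplifications for illustration. Hypothetical column names would stand in for the per-platform covariates listed above, e.g. `cem_match(df, ["amount", "interest_rate", "credit_grade"])`.

```python
import pandas as pd

def cem_match(df, covariates, bins=5):
    """Coarsen covariates into bins and exact-match on bin signatures.

    Keeps only projects whose stratum contains both funded (treated)
    and failed (control) projects.
    """
    coarse = df[covariates].copy()
    for c in covariates:
        if pd.api.types.is_numeric_dtype(coarse[c]):
            coarse[c] = pd.cut(coarse[c], bins=bins, labels=False)
        # categorical covariates (e.g. credit grade) are matched exactly
    out = df.assign(stratum=coarse.astype(str).agg("|".join, axis=1))
    has_both = out.groupby("stratum")["funded"].transform("nunique") == 2
    return out[has_both]
```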
We categorised projects into treatment and control groups based on whether
they were successfully funded or not, then estimated the effect of each crowd
feature on fundraising outcome (i.e. fully funded or not), while controlling
for project features. We do so using the traditional CEM measure of Sample
Average Treatment Effect on the Treated (SATT) measure:
$SATT=\frac{1}{n(T)}\sum_{i\in T}\{(Y_{i}\mid T_{i}=1)-(Y_{i}\mid T_{i}=0)\}$
where $Y_{i}$ is the outcome variable (funded ($Y_{i}=1$) or not ($Y_{i}=0$)),
$T$ is the set of crowd treatments ($T_{1}$=Appeal, $T_{2}$=Momentum,
$T_{3}$=Variation, $T_{4}$=Latency, $T_{5}$=Engagement), and $n(T)$ is the
number of crowd treatment effects, i.e. five. We thus compute the sample
average treatment effect of each crowd feature on fundraising success as the
difference between two possible outcomes. For each project, the _fundraising
outcome under crowd treatment condition_ $(Y_{i}|T_{i}=1)$ is always observed.
However, the counterfactual condition $(Y_{i}|T_{i}=0)$, i.e. the _fundraising
outcome if no treatment condition_ , e.g. if no crowd appeal, momentum,
variation etc., is always unobserved and imputed via simulation using a logit
model. Once the unobserved outcomes are imputed, the estimate of each crowd
feature’s sample average treatment effect is measured by simply averaging the
differences over all observations and imputed outcomes for the counterfactuals
$(Y_{i}|T_{i}=1)-(Y_{i}|T_{i}=0)$. The SATT therefore follows the Rubin causal
model (RCM), an approach to the statistical analysis of cause and effect based
on the framework of potential outcomes [66]. Based on the RCM, the causal
effect of each crowd feature is therefore the difference in fundraising
outcome between the observed and counterfactual condition.
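As a rough computational counterpart, the sketch below estimates the SATT for one crowd feature on CEM-matched data. Binarising the feature at its median to define treatment, and replacing the paper's logit-based imputation of counterfactuals with a stratified difference in means, are both simplifying assumptions.

```python
import numpy as np

def satt(matched, feature):
    """Stratified difference-in-means SATT estimate for one crowd feature.

    matched -- output of cem_match(); `funded` is the outcome and
               `stratum` identifies matched groups
    """
    treated = matched[feature] > matched[feature].median()  # high value = treated
    effects, weights = [], []
    for _, g in matched.groupby("stratum"):
        hi = treated.loc[g.index]
        if hi.any() and (~hi).any():  # need both conditions in the stratum
            effects.append(g.loc[hi, "funded"].mean() - g.loc[~hi, "funded"].mean())
            weights.append(int(hi.sum()))  # weight by treated count
    return float(np.average(effects, weights=weights))
```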
To allow for comparisons with other matching methods that retain all treated
projects and select an equal number of control projects to include in the
matched data set based on a distance or similarity measure, e.g. nearest
neighbour matching (NNM), we further pruned the CEM solution using the
Euclidean distance within each matched sample to achieve similar one-to-one
matching solutions with CEM as one would obtain with NNM. In this case, the
advantage of CEM over other matching methods is that for each project in the
treatment group (funded = 1) we have exactly one “twin-project” in the control
group (funded = 0) that has the exact same coarsened project features as the
project in the treatment condition. Any projects in the treatment group that
have no “twin-project” are thus discarded. This additional filtering procedure
ensures that we are making counterfactual inferences only from valid points of
comparison [67, 68]. To assess the goodness of the matching solutions, we used
the $L1$ statistic ($1$: perfect imbalance, $0$: perfect balance) which is a
measure of global imbalance with respect to the joint distribution of the
project covariates. The $L1$ statistic is not valuable on its own, but serves
rather as a point of comparison between matching solutions, thus $L1$ works
for imbalance as $R^{2}$ works for model fit: the absolute values mean less
than comparisons between matching solutions [69]. In comparison to nearest
neighbour matching, CEM produced better matching solutions and hence provides
a more reliable approach for deriving causal inferences from the observational
data used in this study.
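For reference, the multivariate $L1$ statistic can be computed as half the sum of absolute differences between the treated and control relative frequencies over the coarsened joint covariate distribution [69]. The sketch below follows that standard definition; the binning choice is again an assumption.

```python
import pandas as pd

def l1_imbalance(df, covariates, bins=5):
    """Multivariate L1 imbalance between funded (treated) and failed
    (control) projects (0: perfect balance, 1: perfect imbalance)."""
    coarse = df[covariates].apply(
        lambda c: pd.cut(c, bins=bins, labels=False)
        if pd.api.types.is_numeric_dtype(c) else c)
    sig = coarse.astype(str).agg("|".join, axis=1)
    f = sig[df["funded"] == 1].value_counts(normalize=True)  # treated frequencies
    g = sig[df["funded"] == 0].value_counts(normalize=True)  # control frequencies
    return 0.5 * f.subtract(g, fill_value=0).abs().sum()
```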
## 5 Results
We observe similar crowd behaviour across the different crowdfunding
platforms, despite differences in the number of projects posted per unit time,
individual contribution amounts towards each project, and project funding
success rate on each platform. The kernel density estimates of the crowd
features on all three platforms share similar distribution properties
indicating similarities in crowd activity in terms of individuals’ underlying
decisions about whether or not to fund a project, how quickly the crowd
decides to fund a project, how quickly funds are accumulating, variation in
contribution amounts, and how long funders remain engaged in fundraising
(Figure 1).
Figure 1: A comparison of kernel density estimates of crowd features on
different crowdfunding platforms shows similar distributions that describe the
underlying behaviour of funders on each platform.
We further empirically test the degree of multimodality of the crowd feature
distributions using Hartigan’s Dip Statistic (HDS) [70] and observe that crowd
appeal, momentum, and engagement follow uni-modal distributions (Dip test:
$p=1.0$). The crowd’s latency follows a bi-modal distribution (Dip test:
$p<0.05$) whereby some projects receive a substantial number of contributions
early, while other projects take much longer to secure those initial
contributions. The shapes of the bi-modal distributions also resemble the
“bathtub” effect (named after its shape), which is most notable on the lending
platform. This effect has been observed in simulation studies of funders’
donations over time on the DonorsChoose platform [32]. The “bathtub” effect
in crowd latency arises when projects either quickly receive funds immediately
after being posted or go through an initial period of few to no contributions
due to lack of crowd appeal or funders choosing to observe other people’s
contributions before making their own.
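A test of this multimodality can be run with the third-party `diptest` Python package (an assumption on our part; any implementation of Hartigan's dip statistic would do), as in the sketch below, where `df` is assumed to hold the per-project crowd features.

```python
import numpy as np
import diptest  # third-party implementation of Hartigan's dip test

for feature in ["appeal", "momentum", "variation", "latency", "engagement"]:
    dip, pval = diptest.diptest(np.asarray(df[feature], dtype=float))
    # pval < 0.05 rejects unimodality; we expect this only for latency,
    # whose bi-modal "bathtub" shape is described above.
    print(f"{feature}: dip={dip:.4f}, p={pval:.3f}")
```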
### 5.1 Crowd Features are Correlated with Fundraising Success
On all platforms, statistical comparisons between the mean values of crowd
features for funded and failed projects show that successful projects have
greater appeal, higher momentum of contribution activity, and greater
variation in contribution amounts compared to failed projects (Table 2). Thus
the crowd’s appeal, momentum, and variation in contribution amounts are
significantly positively correlated with fundraising success on all
crowdfunding platforms. These findings support previous qualitative and
quantitative findings that demonstrated the role of the number of contributors
and frequency in contributions on fundraising success [3, 31, 11, 24]. Our
results also lend empirical evidence to qualitative studies as they show that
the higher the variation in contribution amounts, hence less herding in
funders’ contributions, the more likely a project is to reach its fundraising
goal [71, 48]. Based on these findings, we therefore anticipate that for
crowdfunded projects to be successful, they need to appeal to all sorts of
funders, big and small, whose contributions complement each other to meet the
fundraising goal.
Table 2: Mean (std) values of crowd features by project category and funding
outcome. Pearson correlation between crowd features and fundraising success.
Accordingly, crowd feature values are statistically significantly different
for funded and failed projects. The only exception is latency on the charity
platform. Notation: * significant at $p<0.05$; ** significant at $p<0.01$; ***
significant at $p<0.001$.
| Feature | Lending: Funded (20.2%) | Lending: Failed (79.8%) | Lending: $r$ | Equity: Funded (35.3%) | Equity: Failed (64.7%) | Equity: $r$ | Charity: Funded (99.4%) | Charity: Failed (0.6%) | Charity: $r$ |
|---|---|---|---|---|---|---|---|---|---|
| Appeal | 67.544 (62.957) | 6.754 (16.924) | 0.605*** | 175.789 (174.561) | 31.399 (43.619) | 0.534*** | 3.951 (3.953) | 2.204 (1.976) | 0.038*** |
| Momentum | 1.906 (0.784) | 0.759 (0.664) | 0.599*** | 1.422 (0.534) | 0.881 (0.360) | 0.518*** | 1.025 (0.595) | 0.636 (0.544) | 0.040*** |
| Variation | 0.946 (0.588) | 0.242 (0.377) | 0.551*** | 3.511 (2.042) | 1.819 (1.427) | 0.436*** | 0.517 (0.495) | 0.303 (0.425) | 0.033*** |
| Latency | 0.135 (0.256) | 0.539 (0.413) | -0.388*** | 0.208 (0.310) | 0.332 (0.323) | -0.184*** | 0.616 (0.236) | 0.605 (0.233) | 0.004 |
| Engagement | 5.762 (3.020) | 7.350 (1.833) | -0.287*** | 57.180 (37.646) | 49.871 (31.170) | 0.104** | 33.352 (44.093) | 83.833 (60.817) | -0.088*** |
We further anticipate that funders are more likely to contribute to projects
with notable initial contributions compared to projects with little to no
initial contributions. This hypothesis is based on previous research that
shows that while projects with a moderate-sized initial contribution slightly
outperform projects with no contribution, small initial contributions
significantly decrease the chances of success for a project [47]. On the
lending and equity platforms, we observe that the shorter the crowd latency
(i.e. first funders respond quickly to a posted project) the more likely a
project will reach its fundraising goal, hence significant negative
correlations. This finding supports previous qualitative studies that
highlight the importance of early donations in making the fundraising goal
easier to achieve by reducing the remaining funds needed, while at the same
time signalling project quality and funders’ buy-in and decisiveness on a
project’s merits [32, 5, 35, 34]. We observe no significant correlation
between crowd latency and fundraising success on the equity platform. Finally,
we observe that crowd engagement is significantly negatively correlated with
fundraising success in the lending and charity platforms meaning that
successful campaigns typically take less time to be fully funded compared to
those that are unlikely to succeed. In contrast to relatively small
contributions on lending and charity platforms, we anticipate that equity
campaigns targeting large contributions require significantly more fundraising
time and effort to reach full funding. Our expectations are confirmed and
projects do need more engagement to reach the investment goal on the equity
platform.
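The statistics in Table 2 can be reproduced, in outline, with SciPy; with a binary outcome, Pearson's $r$ reduces to the point-biserial correlation. The sketch below is illustrative, and the Welch's $t$-test used for the mean comparison is our assumption, as the paper does not name the specific test.

```python
from scipy import stats

funded, failed = df[df["funded"] == 1], df[df["funded"] == 0]
for f in ["appeal", "momentum", "variation", "latency", "engagement"]:
    r, p_r = stats.pearsonr(df[f], df["funded"])   # correlation with success
    t, p_t = stats.ttest_ind(funded[f], failed[f], equal_var=False)  # mean comparison
    print(f"{f}: r={r:.3f} (p={p_r:.2g}); funded vs failed t-test p={p_t:.2g}")
```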
### 5.2 Crowd Features Predict Fundraising Success Better than Project
Features
We further combined the crowd features with project features provided by each
platform to train and evaluate the performance of Random Forest classifiers on
predicting fundraising success. Table 3 shows the Random Forest model’s
accuracy, precision, recall, F-Score, and area under the receiver operating
characteristic curve (AUC). On all platforms, the results across the
evaluation metrics closely agree. In particular, we achieve accuracy and AUC
scores above 0.7.
Table 3: Random Forest validation results of predicting fundraising success agree across multiple evaluation metrics. Shown here are the mean 5-fold cross-validation results (all $std\leq 0.015$) using 100 estimators over 10,000 iterations and a random under-sampling of the majority class in each iteration.

| Category | Accuracy | Precision | Recall | F-Score | AUC |
|---|---|---|---|---|---|
| Lending | 0.989 | 0.988 | 0.990 | 0.989 | 0.989 |
| Equity | 0.882 | 0.886 | 0.876 | 0.881 | 0.882 |
| Charity | 0.691 | 0.720 | 0.626 | 0.670 | 0.691 |
Figure 2: Random Forest permutation importance (piRF) ranking for project and crowd dynamics features. Crowd dynamics features (marked *) account for at least 75% of the predictive feature importance on all platforms.
Figure 3: A comparison of the grouped Random Forest permutation importance (piRF) between crowd and project features on all three platforms shows that crowd features are superior to project features in predicting fundraising success.
Most importantly, we observe across classifiers built for the different
platforms that crowd features have relatively higher Random Forest permutation
importance (piRF) scores computed on hold-out test sets during cross-
validation compared to project features visible to investors, lenders, and
donors, respectively. As Figure 2 shows, the five crowd features are in the
top 7 on the lending platform and top 8 on the equity platform. On the charity
platform they occupy the top 4 positions, with latency coming after the
project features. Given the simplicity of the latency measure (time difference
between first contribution and project posting), unsurprisingly it is the
worst-ranked crowd feature across all platforms. Additionally, when grouped
together, crowd features account for 57.2% of the lending, 83.9% of the
equity, and 66.9% of the charity features’ permutation importance (Figure 3).
These findings suggest that the dynamics of crowd behaviour add significant
value toward predicting fundraising success in crowdfunding, beyond that of
traditional project features and further suggest that features deduced from
crowd behaviour have huge potential benefits for project creators and
crowdfunding platforms (see Section 6). However, since project features are
visible to funders and influence their contribution behaviour, we employed a
CEM approach to investigate the causal effects of crowd features irrespective
of funders’ observations of specific project features.
### 5.3 Crowd Features Have Significant Causal Effects towards Fundraising
Outcomes
To perform CEM, we began by matching funded projects to failed projects with
the exact same coarsened project features as explained in Section 4.2. We
matched $7,150$ of $29,013$ funded projects in the lending platform
($L1=0.740$), $198$ of $261$ funded projects in the equity platform
($L1=0.485$), and $1,249$ of $214,531$ funded projects in the charity platform
($L1=0.792$). It is important to highlight that the resulting decrease in the
sample sizes of the matched samples is an artefact of matching among only
those funded projects for which well matching failed projects exist. From the
matched data, we then computed the sample average treatment effect of crowd
features on fundraising success. Since the SATT is based on potential
outcomes, we interpret the unit-level causal effects in terms of how
statistically different they are from zero (no effect) at the 5% level.
Figure 4: Coarsened Exact Matching (CEM) sample average treatment effect on
the treated (SATT) results for the effect of crowd features on fundraising
success at 95% confidence intervals. The SATT estimate is only statistically
significant when the 95% confidence interval (horizontal line) for each crowd
feature does not overlap the dotted vertical line at $0$, representing no
effect.
We observed that crowd appeal, momentum, and variation of contributions have
significant treatment effects on funding success on all three platforms
(Figure 4). Our results show that among projects with similar covariates, some
projects may fail to meet their fundraising goal due to low crowd appeal, low
momentum, and low variation as well as prolonged latency and engagement. Hence
the sooner a project receives funding and the quicker the contributions gain
momentum, the more likely the project will be successfully funded independent
of its merits. While engagement had a significant effect on fundraising
success only on lending and charity platforms, latency had a significant
effect on fundraising success only on lending and equity platforms. The
treatment effects for crowd engagement and latency were both negative,
indicating that the more prolonged these crowd dynamics, the lower the chances of
project success. These quasi-causal effects further confirm our prior central
tendency and correlation results (cf. Table 2) and feature importance results
from the Random Forest classifier (cf. Figure 2). Finally, the CEM results
further reinforce our Random Forest finding that there are differences in the
strength of individual crowd features’ association with project outcome. Once
again, we find that high appeal, momentum, and variation are robust predictors
of fundraising success thereby providing empirical evidence for crowd features
that are important indicators of fundraising success across different
platforms, while also being robust to the particularities of different online
markets.
## 6 Discussion
Our work presents a general approach to predicting fundraising success that
focuses on the behaviour of the funders rather than the characteristics of
project creators or their projects. The presented approach is based on the
simple intuition that the timing and amount of funders’ contributions have an
effect on fundraising outcome. We therefore provide a multi-method analysis
for investigating the relationship between the funding crowd’s behaviour, as
measured using five crowd features, and fundraising success. Through a
combination of correlation-based, supervised learning, and quasi-causal
inference methods, we demonstrate that our findings regarding the importance
of crowd dynamics features in fundraising success are not only stable across
different crowdfunding settings, but they are also consistent across three
conventional empirical methods. Specifically, we find evidence for the
collective nature of success as crowd features are significantly correlated
with fundraising success, approximate fundraising success better than the
characteristics of projects or their creators, and have significant causal
effects towards fundraising outcomes. In the following sections, we elaborate
on these findings and their technological implications.
### 6.1 The Evolving Nature of Crowdfunding Platforms
Consistent across three conventional empirical methods, our findings show that
the crowd features are robust to the particularities of different crowdfunding
platforms and markets, and impartial to platform design and policy changes.
This is especially important in studies of crowdfunding due to the evolving
nature of both the crowdfunding platforms and markets that make it difficult
to consistently investigate the effects of project covariates on fundraising
success due to ad-hoc design and policy changes. For example, on the
DonorsChoose website, several longitudinal platform changes to location
filtering (2004), recommendation (2012), and search (2015) can be expected to
influence findings on the effects of both funders’ behaviour and project
characteristics, such as school location, subject area, and resource type on
fundraising success. Specifically, changes in users’ ability to filter
projects by poverty level (2005), ranking most urgent projects high as the
default setting for search (2008), and refining the most urgent criteria to
meet both the highest poverty and closest to completion criteria (2012) have
been observed in prior literature to increase the effects of project location
and community type on fundraising success [22]. In another example, since its
SEC registration in 2009, Prosper no longer provides credit grade and other
credit information to its prospective lenders. Credit scores, for example,
were replaced by the Prosper score which is a custom risk score built using
historical in-house data based on Prosper users. Additionally, since 2009, new
borrowers to the platform were required to have a FICO score of at least 640,
while returning borrowers only needed a score of 600 to request a loan.
The platform changes identified above affect the type of information presented
to funders, the kinds of projects funders are most likely to see, as well as
funders’ contribution activity. Such platform design and policy changes can be
confounding not only when estimating the effect of project features but also
when evaluating the impact of crowd behaviour on fundraising success. On the
one hand, studies that solely focus on project determinants of fundraising
success, i.e. most existing literature on crowdfunding, risk overestimating
their findings. For example, platform design features that enable users to
filter and search projects by location may increase the importance of
projects’ location in determining fundraising success compared to platforms
that do not afford location search and filtering [22]. On the other hand,
studies that focus on crowd-based indicators of fundraising success without
controlling for confounding project-level variables risk under-estimating the
impact of changes in platform design on crowd behaviour. This is because
despite the impact that location search and filtering features, for example,
may have on the importance of projects’ location in determining fundraising
success, these same platform design features may inadvertently impact the
crowd appeal of projects of similar quality but different geographic
locations. These challenges therefore require controlled approaches to
systematically investigate the effects of both project and crowd features on
fundraising success on evolving crowdfunding platforms. Our work contributes a
framework for studying such scenarios and has implications beyond the study of
crowdfunding as well.
### 6.2 Main Findings & Design Contributions
Through a multi-platform study that aims to improve our understanding of the
determinants of fundraising success in different online capital markets, our
work engages with ongoing CSCW research on crowdfunding. Specifically, it
provides generalisable support for existing empirical and qualitative findings
on the role of early contributions [32, 24] and presents a suitable approach
for controlling for the effects of platform architecture and design changes
[22]. Through this approach, we demonstrate the crucial role of three crowd
features in determining fundraising success: the crowd appeal, momentum of
contributions, and variation in contribution amounts. Prior qualitative work
has long emphasised the importance of mobilising a community in crowdfunding,
for example by personally reaching out to potential contributors to increase
appeal, having an early stage publicity plan to generate fundraising momentum,
as well as multiple funding levels (e.g. targeted at big and small funders) to
increase the variation in contribution amounts [42]. Therefore, not only do we
lend empirical evidence to the efficacy of mobilising fundraising communities,
but we further demonstrate computational approaches for measuring funders’
behaviour in terms of the key drivers of fundraising success that characterise
different fundraising efforts (i.e. appeal, momentum, and variation).
Additionally, these findings support previous qualitative studies that point
towards a self–reinforcing pattern whereby early contributions accelerate
crowd appeal and momentum through the internal social capital that project
creators may develop in the crowdfunding community which in turn provides
crucial assistance in igniting a self–reinforcing mechanism that ultimately
leads to fundraising success [24]. Our results further help clarify
contradictory findings about the effect of project duration on fundraising
success. For instance, our CEM analysis shows that the crowd engagement which
corresponds to a project’s duration has negative effect on charity and lending
platforms. As such, they support the argument that extended activity (i.e. a
longer project duration) in crowdfunding settings that rely on small
individual contributions may signal the crowd’s indecisiveness regarding a
project’s merits [5, 35, 34]. At the same time, the positive effect of crowd
engagement on fundraising success in the equity platform suggests that when it
comes to large capital investments that require significantly more fundraising
time and effort (e.g. through due diligence requiring potentially face-to-face
interactions in response to higher levels of risk [2]), longer campaign
duration may help to increase the likelihood of project success as the
contributions will eventually add up to or even exceed the requested amount
[37].
Our findings have important implications for crowdfunding platform design.
Having demonstrated that crowd dynamics have significant correlational and
causal effects on fundraising outcomes, we believe that the choice
architectures of the platforms that mediate crowd behaviour may influence
fundraising outcomes. We hope that platform designers can build upon these new
and consequential observations to design platforms that harness crowd dynamics
in ways that lead to more efficient and successful fundraising. Additionally,
our findings are intended to challenge designers to reflect and think more
critically about the ways in which their platform design choices enable or
inhibit the crowd dynamics that lead to successful fundraising. For instance,
how can crowdfunding platforms better signal a project’s merit and appeal in
such a way that affords funders the ability to quickly and intelligently
decide what projects to fund thereby increasing the project’s momentum and
chances of success?
We hope that our findings will inspire platform designers to think more
broadly about how to create crowdfunding platforms that both promote and
support efficient crowd awareness, navigation, and coordination, and are
attuned and sensitive to the potential biases and inequalities that may result
from inefficient crowd decision-making [46]. Our findings also have
implications for funders that contribute to these platforms as we show that
even for projects of comparable quality, sometimes the difference between
funded and not funded is the difference in the funders’ behaviour, e.g.
whether they find a project appealing, the timing of their contribution, and
variation in the amount of their contribution compared to previous
contributions. Together, these platform-design and user implications suggest
that crowd-aware system design approaches could enhance social navigation and
may help to better coordinate crowd behaviour in platform-mediated decision-
making environments.
## 7 Conclusion
In this study, we showed that universal features deduced from crowd activity
are predictive of fundraising success in different crowdfunding platforms and
markets, thereby providing empirical insights on the emergence of collective
dynamics that ultimately determine what is worthy of success. Our multi-method
analysis has shown that crowd features are correlated with fundraising
success, predict fundraising success better than project features, and have a
significant effect on fundraising success independent of project features.
These results advance a general approach to approximating fundraising success
in online capital markets that is robust to platform heterogeneity. Such a
general approach is vital considering the evolving nature of crowdfunding
platforms both in terms of their user policies and interface design. To better
understand how crowdfunding platforms can be designed to promote efficient
crowd decision-making, future research should investigate the ways in which
the identified crowd features may lead to sub-optimal fundraising outcomes,
inefficiencies in capital allocation, or reinforce existing biases that may
exacerbate inequalities. Ultimately, a more nuanced understanding of how crowd
behaviour influences fundraising outcomes will inform how crowdfunding and
online campaign sites, in general, can be designed to promote the crowd
dynamics that lead to successful fundraising to achieve maximal impact.
## Acknowledgments
This work was supported by the U.S. National Science Foundation under Grant
No. IIS-1755873.
## References
* [1] Juanjuan Zhang and Peng Liu. Rational herding in microloan markets. Management Science, 58(5):892–912, 2012.
* [2] Ajay Agrawal, Christian Catalini, and Avi Goldfarb. Some simple economics of crowdfunding. Innovation policy and the economy, 14(1):63–97, 2014.
* [3] Paul Belleflamme, Thomas Lambert, and Armin Schwienbacher. Crowdfunding: Tapping the right crowd. Journal of business venturing, 29(5):585–609, 2014.
* [4] Tim Althoff, Cristian Danescu-Niculescu-Mizil, and Dan Jurafsky. How to ask for a favor: A case study on the success of altruistic requests. In Eighth International AAAI Conference on Weblogs and Social Media, 2014.
* [5] Ethan Mollick. The dynamics of crowdfunding: An exploratory study. Journal of business venturing, 29(1):1–16, 2014.
* [6] Rob Gleasure and Joseph Feller. Emerging technologies and the democratisation of financial services: A metatriangulation of crowdfunding research. Information and Organization, 26(4):101–115, 2016.
* [7] Nir Vulkan, Thomas Astebro, and Manuel Fernandez Sierra. Equity crowdfunding: A new phenomena. Journal of Business Venturing Insights, 5:37 – 49, 2016.
* [8] Ajay K Agrawal, Christian Catalini, and Avi Goldfarb. The geography of crowdfunding. Technical report, National Bureau of Economic Research, 2011.
* [9] Garry Bruton, Susanna Khavul, Donald Siegel, and Mike Wright. New financial alternatives in seeding entrepreneurship: Microfinance, crowdfunding, and peer-to-peer innovations. Entrepreneurship Theory and Practice, 39(1):9–26, 2015.
* [10] Niina Arvila, Heike Winschiers-Theophilus, Pietari Keskinen, Roosa Laurikainen, and Marko Nieminen. Enabling successful crowdfunding for entrepreneurs in marginalized communities. In Proceedings of the 23rd International Conference on Academic Mindtrek, pages 45–54, 2020.
* [11] Simla Ceyhan, Xiaolin Shi, and Jure Leskovec. Dynamics of bidding in a p2p lending service: Effects of herding and predicting loan success. In Proceedings of the 20th International Conference on World Wide Web, pages 547–556. ACM, 2011.
* [12] Michael D. Greenberg and Elizabeth M. Gerber. Learning to fail: Experiencing public failure online through crowdfunding. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’14, pages 581–590, Toronto, Ontario, Canada, April 2014. Association for Computing Machinery.
* [13] Dan Marom and Orly Sade. Are the life and death of an early stage venture indeed in the power of the tongue? lessons from online crowdfunding pitches. Unpublished. Working Paper Hebrew University, 2013.
* [14] Tanushree Mitra and Eric Gilbert. The language that gets people to give: Phrases that predict success on kickstarter. In Proceedings of the 17th ACM Conference on Computer Supported Cooperative Work & Social Computing, pages 49–61, 2014.
* [15] Anbang Xu, Xiao Yang, Huaming Rao, Wai-Tat Fu, Shih-Wen Huang, and Brian P Bailey. Show me the money! An analysis of project updates during crowdfunding campaigns. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 591–600, 2014.
* [16] Benjamin C Collier and Robert Hampshire. Sending mixed signals: Multilevel reputation effects in peer-to-peer lending markets. In Proceedings of the 2010 ACM Conference on Computer Supported Cooperative Work, pages 197–206, 2010.
* [17] Seth Freedman and Ginger Zhe Jin. Do social networks solve information problems for peer-to-peer lending? Evidence from Prosper.com. Technical Report 08-43, Indiana University, Bloomington: School of Public & Environmental Affairs, Bloomington, IN, 2008.
* [18] Chun-Ta Lu, Sihong Xie, Xiangnan Kong, and Philip S Yu. Inferring the impacts of social media on crowdfunding. In Proceedings of the 7th ACM International Conference on Web Search and data mining, pages 573–582, 2014.
* [19] Emőke-Ágnes Horvát, Jayaram Uparna, and Brian Uzzi. Network vs market relations: The effect of friends in crowdfunding. In Proceedings of the 2015 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining 2015, pages 226–233, 2015.
* [20] Gerrit KC Ahlers, Douglas Cumming, Christina Günther, and Denis Schweizer. Signaling in equity crowdfunding. Entrepreneurship Theory and Practice, 39(4):955–980, 2015.
* [21] Silvio Vismara. Equity retention and social network theory in equity crowdfunding. Small Business Economics, 46(4):579–590, 2016.
* [22] Abhijnan Chakraborty, Nuno Mota, Asia J Biega, Krishna P Gummadi, and Hoda Heidari. On the impact of choice architectures on inequality in online donation platforms. In The World Wide Web Conference, pages 2623–2629, 2019.
* [23] Vincent Etter, Matthias Grossglauser, and Patrick Thiran. Launch hard or go home! predicting the success of kickstarter campaigns. In Proceedings of the first ACM Conference on Online Social Networks, pages 177–182, 2013.
* [24] Massimo G Colombo, Chiara Franzoni, and Cristina Rossi-Lamastra. Internal social capital and the attraction of early contributions in crowdfunding. Entrepreneurship theory and practice, 39(1):75–100, 2015.
* [25] Silvio Vismara. Information cascades among investors in equity crowdfunding. Entrepreneurship Theory and Practice, 2016.
* [26] Gordon Burtch, Anindya Ghose, and Sunil Wattal. An empirical examination of the antecedents and consequences of contribution patterns in crowd-funded markets. Information Systems Research, 24(3):499–519, 2013.
* [27] Ajay Agrawal, Christian Catalini, and Avi Goldfarb. Crowdfunding: Geography, social networks, and the timing of investment decisions. Journal of Economics & Management Strategy, 24(2):253–274, 2015.
* [28] Riza Emekter, Yanbin Tu, Benjamas Jirasakuldech, and Min Lu. Evaluating credit risk and loan performance in online peer-to-peer (p2p) lending. Applied Economics, 47(1):54–70, 2015.
* [29] Stefano M Iacus, Gary King, and Giuseppe Porro. Causal inference without balance checking: Coarsened exact matching. Political Analysis, 20(1):1–24, 2012.
* [30] Pierre Azoulay, Joshua S Graff Zivin, and Gustavo Manso. Incentives and creativity: evidence from the academic life sciences. The RAND Journal of Economics, 42(3):527–554, 2011.
* [31] Elizabeth M. Gerber and Julie Hui. Crowdfunding: Motivations and deterrents for participation. ACM Transactions on Computer-Human Interaction (TOCHI), 20(6):34:1–34:32, December 2013.
* [32] Jacob Solomon, Wenjuan Ma, and Rick Wash. Don’t wait! how timing affects coordination of crowdfunding donations. In Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing, pages 547–556, 2015.
* [33] Henry K Dambanemuya and Emoke-Agnes Horvat. Harnessing collective intelligence in P2P lending. In Proceedings of the 10th ACM Conference on Web Science, pages 57–64. ACM, 2019.
* [34] Anna Lukkarinen, Jeffrey E Teich, Hannele Wallenius, and Jyrki Wallenius. Success drivers of online equity crowdfunding campaigns. Decision Support Systems, 87:26–38, 2016.
* [35] Douglas J Cumming, Gaël Leboeuf, and Armin Schwienbacher. Crowdfunding models: Keep-it-all vs. all-or-nothing. Financial Management, 2015.
* [36] Haichao Zheng, Dahui Li, Jing Wu, and Yun Xu. The role of multidimensional social capital in crowdfunding: A comparative study in china and us. Information & Management, 51(4):488–496, 2014.
* [37] Alessandro Cordova, Johanna Dolci, and Gianfranco Gianfrate. The determinants of crowdfunding success: Evidence from technology projects. Procedia-Social and Behavioral Sciences, 181:115–124, 2015.
* [38] Laura Larrimore, Li Jiang, Jeff Larrimore, David Markowitz, and Scott Gorski. Peer to peer lending: The relationship between language features, trustworthiness, and persuasion success. Journal of Applied Communication Research, 39(1):19–37, 2011.
* [39] Lauren Rhue and Lionel P. Robert. Emotional delivery in pro-social crowdfunding success. In Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems, CHI EA ’18, pages 1–6, Montreal QC, Canada, April 2018. Association for Computing Machinery.
* [40] Qizhen Zhang, Tengyuan Ye, Meryem Essaidi, Shivani Agarwal, Vincent Liu, and Boon Thau Loo. Predicting startup crowdfunding success through longitudinal social engagement analysis. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, CIKM ’17, pages 1937–1946, Singapore, Singapore, November 2017. Association for Computing Machinery.
* [41] Martina E Greiner and Hui Wang. The role of social capital in people-to-people lending marketplaces. ICIS 2009 Proceedings, page 29, 2009.
* [42] Julie S Hui, Michael D Greenberg, and Elizabeth M Gerber. Understanding the role of community in crowdfunding work. In Proceedings of the 17th ACM Conference on Computer Supported Cooperative Work & Social Computing, pages 62–74, 2014.
* [43] Jinwook Chung and Kyumin Lee. A long-term study of a crowdfunding platform: Predicting project success and fundraising amount. In Proceedings of the 26th ACM Conference on Hypertext & Social Media, pages 211–220, 2015.
* [44] Hadar Gafni, Dan Marom, and Orly Sade. Are the life and death of an early-stage venture indeed in the power of the tongue? lessons from online crowdfunding pitches. Strategic Entrepreneurship Journal, 13(1):3–23, 2019.
* [45] SeungHun Lee, KangHee Lee, and Hyun-chul Kim. Content-based Success Prediction of Crowdfunding Campaigns: A Deep Learning Approach. In Companion of the 2018 ACM Conference on Computer Supported Cooperative Work and Social Computing, CSCW ’18, pages 193–196, Jersey City, NJ, USA, October 2018. Association for Computing Machinery.
* [46] Rick Wash and Jacob Solomon. Coordinating donors on crowdfunding websites. In Proceedings of the 17th ACM Conference on Computer Supported Cooperative Work & Social Computing, pages 38–48, 2014.
* [47] Rembrand Koning and Jacob Model. Experimental study of crowdfunding cascades: When nothing is better than something. Available at SSRN 2308161, 2013.
* [48] Yong Lu, Bin Gu, Qiang Ye, and Zhexiang Sheng. Social influence and defaults in peer-to-peer lending networks. 2012.
* [49] Linda Dezső and George Loewenstein. Lenders’ blind trust and borrowers’ blind spots: A descriptive investigation of personal loans. Journal of Economic Psychology, 33(5):996–1011, 2012.
* [50] Magdalena Cholakova and Bart Clarysse. Does the possibility to make equity investments in crowdfunding projects crowd out reward–based investments? Entrepreneurship Theory and Practice, 39(1):145–172, 2015.
* [51] Katherine Choy and Daniel Schlagwein. IT affordances and donor motivations in charitable crowdfunding: The “Earthship Kapita” case. 2015.
* [52] Rob Gleasure and Joseph Feller. Does heart or head rule donor behaviors in charitable crowdfunding markets? International Journal of Electronic Commerce, 20(4):499–524, 2016.
* [53] Dean Keith Simonton. Origins of genius: Darwinian perspectives on creativity. Oxford University Press, 1999.
* [54] Robert K. Merton. Social theory and social structure. Simon and Schuster, 1968.
* [55] Peter J Bowler and Iwan Rhys Morus. Making modern science: A historical survey. University of Chicago Press, 2010.
* [56] Robert J. Sternberg. The nature of creativity: Contemporary psychological perspectives. CUP Archive, 1988.
* [57] James F. English. The economy of prestige: Prizes, awards, and the circulation of cultural value. Harvard University Press, 2009.
* [58] Albert-László Barabási. Network theory–the emergence of the creative enterprise. Science, 308(5722):639–641, 2005.
* [59] Roger Guimera, Brian Uzzi, Jarrett Spiro, and Luis A Nunes Amaral. Team assembly mechanisms determine collaboration network structure and team performance. Science, 308(5722):697–702, 2005.
* [60] Matthew J Salganik, Peter Sheridan Dodds, and Duncan J Watts. Experimental study of inequality and unpredictability in an artificial cultural market. Science, 311(5762):854–856, 2006.
* [61] Leo Breiman. Random forests. Machine Learning, 45(1):5–32, 2001.
* [62] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine Learning in Python. Journal of Machine Learning Research, 12:2825–2830, 2011.
* [63] Kevin Lewis. The limits of racial prejudice. Proceedings of the National Academy of Sciences, 110(47):18814–18819, 2013.
* [64] John O’Loughlin, Andrew M Linke, and Frank DW Witmer. Effects of temperature and precipitation variability on the risk of violence in sub-saharan africa, 1980–2012. Proceedings of the National Academy of Sciences, 111(47):16712–16717, 2014.
* [65] Yang Yang, Nitesh V Chawla, and Brian Uzzi. A network’s gender composition and communication pattern predict women’s leadership success. Proceedings of the National Academy of Sciences, 116(6):2033–2038, 2019.
* [66] Paul W Holland. Statistics and causal inference. Journal of the American statistical Association, 81(396):945–960, 1986.
* [67] Gary King, Richard Nielsen, Carter Coberley, James E Pope, and Aaron Wells. Comparative effectiveness of matching methods for causal inference. Unpublished manuscript, 15(1):41–67, 2011.
* [68] Gary King, Christopher Lucas, and Richard A Nielsen. The balance-sample size frontier in matching methods for causal inference. American Journal of Political Science, 61(2):473–489, 2017.
* [69] Matthew Blackwell, Stefano Iacus, Gary King, and Giuseppe Porro. cem: Coarsened exact matching in stata. The Stata Journal, 9(4):524–546, 2009.
* [70] John A Hartigan, Pamela M Hartigan, et al. The dip test of unimodality. The annals of Statistics, 13(1):70–84, 1985.
* [71] Sushil Bikhchandani, David Hirshleifer, and Ivo Welch. Learning from the behavior of others: Conformity, fads, and informational cascades. Journal of Economic Perspectives, 12(3):151–170, 1998.
Machine-Learning Mathematical Structures
Yang-Hui He
1 Merton College, University of Oxford, OX1 4JD, UK
2 London Institute of Mathematical Sciences, Royal Institution, London, W1S 4BS, UK
3 Department of Mathematics, City, University of London, EC1V 0HB, UK
4 School of Physics, NanKai University, Tianjin, 300071, China
<EMAIL_ADDRESS>
We review, for a general audience, a variety of recent experiments on
extracting structure from machine-learning mathematical data that have been
compiled over the years. Focusing on supervised machine-learning on labeled
data from different fields ranging from geometry to representation theory,
from combinatorics to number theory, we present a comparative study of the
accuracies on different problems. The paradigm should be useful for conjecture
formulation, finding more efficient methods of computation, as well as probing
into a certain hierarchy of structures in mathematics. Based on various
colloquia, seminars and conference talks in 2020, this is a contribution to
the launch of the journal “Data Science in the Mathematical Sciences.”
###### Contents
1 Introduction & Summary
  1.1 Bottom-up
  1.2 Top-Down
2 Mathematical Data
  2.1 Methodology
3 Exploring the Landscape of Mathematics
  3.1 Algebraic Geometry over $\mathbb{C}$
  3.2 Representation Theory
  3.3 Combinatorics
  3.4 Number Theory
4 Conclusions and Outlook
## 1 Introduction & Summary
How does one do mathematics? We do not propose this question with the
philosophical profundity it deserves, but rather, especially in light of our
inexpertise in such foundational issues, merely to draw from some observations
on the daily practices of the typical mathematician. One might first appeal to
the seminal programme of Russell and Whitehead [RW] in the axiomatization of
mathematics via systemization of symbolic logic – perhaps one should go back
to Frege’s foundations of arithmetic [Fre] or even Leibniz’s universal
calculus [Leib]. This programme was, however, dealt a devastating blow by
the incompleteness of Gödel [God] and the undecidability of Church-Turing
[Chu, Tur].
### 1.1 Bottom-up
Most practicing mathematicians, be they geometers or algebraists, are
undeterred by the theoretical limits of logic [MHK] – the existence of
undecidable statements does not preclude the continued search for the vastness
of provable propositions that constitute mathematics. Indeed, the usage of
Turing machines, and now, the computer, to prove theorems, dates to the late
1950s. The “Logical Theory Machine” and “General Problem Solver” of Newell-
Shaw-Simon [NSS] were able to prove some of the theorems of [RW] and in some
sense heralded artificial intelligence (AI) applied to mathematics.
The subsequent development of Type Theory in the 1970s by Martin-Löf [M-L],
Calculus of Constructions in the 1980s by Coquand [Coqu], Voevodsky’s
univalent foundations and homotopy type theory [Voe] in the 2000s, etc., can
all go under the rubric of automated theorem proving (ATP), a rich and
fruitful subject by itself [New]. With the dramatic advancement in
computational power and AI, modern systems such as Coq [Coq] managed to prove
the 4-colour theorem in 2005 and the Feit-Thompson theorem in 2012 [Gon].
Likewise, the Lean system [Lean] has been more recently launched with the
intent to gradually march through the basic theorems. To borrow a term from
theoretical physics, one could call all of the above as bottom-up mathematics,
where one reaches the truisms via constituent logical symbols.
Indeed, irrespective of ATP, the rôle of computers in mathematics is of
increasing importance. From the explicit computations which helped resolve
the 4-colour theorem in 1976 [AHK], to the completion of the classification of
the finite simple groups by Gorenstein et al. from the mid-1950s to 2004
[Wil], to the vast number of software and databases emerging in the last
decade or so to aid researchers in geometry [M2, Sing, GRdB, HeCY], number
theory [MAG, LMFdB], representation theory [GAP], knot theory [KNOT], as well
as the umbrella SageMath project [SAGE] etc., it is almost inconceivable that
the younger generation of mathematicians would not find the computer as
indispensable a tool as pen or chalk. The ICM panel of 2018 [ICM18] documents
a lively and recent discussion on this progress in computer assisted
mathematics.
In his 2019 lecture on the “Future of Mathematics”, Buzzard [Buz] emphasizes
not only the utility, but also the absolute necessity, of using theorem
provers, by astutely pointing out two papers from no less than the Annals of
Mathematics which state contradictory results. With the launch of the
XenaProject [Xena] using [Lean], Buzzard and Hale foresee that by the end of
this decade, all undergraduate and early PhD level theorems will be formalized
and auto-proven. An even more dramatic view is held by Szegedy [Sze] that
computers have beaten humans at Chess (1990s), Go (2018), and will beat us in
finding and proving new theorems by 2030.
### 1.2 Top-Down
The successes of ATP aside, the biggest critique of using AI to do mathematics
is, of course, the current want, if not impossibility, of human “inspiration”.
Whilst fringing upon so amorphous a concept is both a challenge for the
computer and beyond the scope of logic, an inspection of how mathematics has
been done clearly shows that experience and experimentation precedes
formalization. Countless examples come to mind: calculus (C17th) before
analysis (C19th), permutations (C19th) before abstract algebra (C19-20th),
algebraic geometry (Descartes) before Bourbaki (C20th), etc. Even our own
presentations in a journal submission are usually not in the order of how the
results are obtained in the course of research.
Perhaps “inspiration” can be defined as the sum of experience, experimentation
by trial and error, together with random firings of thoughts. Thus phrased,
“inspiration” perhaps becomes more amenable to the computer. In this sense,
one could think of the brain of Gauß as the best neural network of the C19th,
as demonstrated in countless cases. His noticing, at the age of 16 (and based
on our definition of inspiration), that $\pi(x):=\#\{p\leq x:p\mbox{ prime}\}$
proceeds approximately as $x/\log(x)$, is an excellent example,
whose statement and proof as the prime number theorem (PNT) had to wait for
another 50 years when complex analysis became available. The celebrated
conjecture of Birch-Swinnerton-Dyer, one must remember, came about from
extensive computer experiments in the 1960s [BSD]; its proof is still perhaps
waiting for a new branch of mathematics to be invented.
To borrow again a term in theoretical physics, this approach to mathematics -
of gaining insight from a panorama of results and data, combined with inspired
experimentation - can be called top-down. One might argue that much of
mathematics is done in this fashion. In this talk [HeTalk], based on a
programme initiated in [HeDL], we will (1) explore how computers can use the
recent techniques of data science to aid us with largely top-down mathematics,
and (2) speculate on the implications to the bottom-up approach. To extend the
analogy further, one can think of AlphaGo as top-down and AlphaZero, as
bottom-up.
## Acknowledgments
We are grateful for the kind invitations, in person and over Zoom, of the
various institutions over the most extraordinary year of 2020 – the
hospitality and conversations before the lock-down and the opportunity for a
glimpse of the outside world during: Harvard University, Tsinghua University
(YCMS, BIMSA), ZheJiang University, Universidad Católica del Norte Chile,
London Institute of Mathematical Sciences, Queen’s Belfast, London
Triangle@KCL, University of Connecticut, “Clifford Algebra & Applications
2020”@UST China, “String Maths 2020”@Capetown, “Coral Gables 2020”@Miami,
“International Congress of Mathematical Software 2020”@Braunschweig, “East
Asia Strings”@Taipei-Seoul-Tokyo, Nankai University, Imperial College London,
“Iberian Strings 2021”@Portugal, and Nottingham University. We are indebted to
STFC UK for grant ST/J00037X/1 and Merton College, Oxford for a quiet corner
of paradise.
## 2 Mathematical Data
In tandem with the formidable projects such as the abovementioned Xena, Coq,
or Lean, it is natural to explore the multitude of available mathematical data
with the recent advances in “big data”. Suppose we were given 100,000 cases of
either (a) matrices, or (b) association rules, with a typical example being as
follows:
$(a)\ {\scriptsize\left(\begin{array}{cccccccccc}
5&3&4&3&5&1&4&4&1&2\\ 5&0&4&5&2&4&4&2&2&4\\ 1&1&2&2&0&4&1&4&5&0\\
5&0&1&1&0&2&0&5&0&1\\ 2&5&0&1&1&3&2&3&0&3\\ 3&2&2&3&0&0&2&2&1&0\\
2&2&5&1&4&4&0&0&1&2\\ 5&0&0&0&4&5&0&4&1&1\\ 4&3&4&3&3&1&0&0&2&5\\
2&0&5&0&3&0&4&4&1&5
\end{array}\right)}\ ,\quad
(b)\ {\scriptsize\left(\begin{array}{cccccccccc}
5&3&4&3&5&1&4&4&1&2\\ 5&0&4&5&2&4&4&2&2&4\\ 1&1&2&2&0&4&1&4&5&0\\
5&0&1&1&0&2&0&5&0&1\\ 2&5&0&1&1&3&2&3&0&3\\ 3&2&2&3&0&0&2&2&1&0\\
2&2&5&1&4&4&0&0&1&2\\ 5&0&0&0&4&5&0&4&1&1\\ 4&3&4&3&3&1&0&0&2&5\\
2&0&5&0&3&0&4&4&1&5
\end{array}\right)}\longrightarrow 3\ .$ (2.1)
The matrices could come from any problem, as the adjacency matrix of a
directed non-simple graph, or as the map between two terms in a sequence in
homology, to name but two. The association rule could be computing a graph
invariant, or the rank of a homology group, respectively. Such data can then
be fed into standard machine-learning (ML) algorithms, which excel in finding
patterns. In the parlance of data science, (a) would be called unsupervised ML
on unlabeled data and (b), supervised ML, on labeled data.
Having been trained on large numbers of cases, two questions immediately
present themselves:
Q1:
Is there a pattern? This could range from finding clustering to discovering
short-cuts to the association rules, all leading to potential conjectures
which could then be formulated precisely and hopefully proven. In some sense,
this is a top-down question;
Q2:
Which branch of mathematics is the data likely to have come from? This bottom-
up question could shed some light on the inherent structure of mathematics.
We will present experiments bearing both questions in mind. This talk will be
a status report of the various comparative experiments undertaken in the last
couple of years in the aforementioned programme of ML mathematical structure.
### 2.1 Methodology
To be specific, let us comment on the data structure and the method of attack.
First, we will focus on supervised ML (type (b)). One can certainly give the
ML free rein to attempt finding patterns with methods such as dimension
reduction, or clustering analysis, which should be performed on all ensuing
examples in future works. Here, we shall, however, discuss only labeled data.
This is primarily motivated by the speculations on “experience” in the
introduction.
Extraordinary effort has been engaged, especially over the past 20 years, in
creating data-sets in mathematics where requisite quantities have been
computed using oftentimes exponential-complexity methods, compiled and made
freely available (typically $\sim$ 10 Gb in size and manageable for the
contemporary laptop). This supervision, in telling the ML what is interesting
to calculate (regardless of the how), imparts the “experience” of the
mathematical community while leaving the freedom for “intuition” to the AI.
After all, is not much of mathematics concerned with how to generate an
“output” (the label) for an “input” (the configuration)?
A great analogy would be the archetypal problem in ML: hand-writing
recognition. Suppose one is given a collection of images of hand-written
digits (the sample images displayed as (2.2) are omitted here)
and needs to let the computer know that these represent
$\{i\}_{i=0,1,\ldots,9}$. Given these shapes, a mathematician might first
think to set up some Morse function to detect critical points, or find a way
to calculate some topological invariant. This is, of course, highly expensive
and also vulnerable to the wide variation in how people write these digits.
How does Google or any modern personal device solve this problem? Any image is
represented by an $n\times n$ matrix, each of whose entries is a 3-vector in
$[0,1]\times[0,1]\times[0,1]$. In other words, we have $n^{2}$ pixels of RGB
values. If one wants only black and white, each entry is simply the gray-scale
value in $[0,1]$. Over the years, long before the recent explosion in AI
research 111Incidentally, one of the causes of the recent AI explosion is the
success of [AlexNet] on image recognition which, on utilizing GPUs, has
rendered ML efficient to the personal computer., NIST (www.nist.gov) has been
collecting handwriting samples (amongst the myriad of other information) by
human labeling:
$\mbox{(handwriting samples; images omitted)}\quad\longrightarrow\quad
28\times 28\times(RGB)\ \mbox{pixel grid}\quad\longrightarrow\quad 3$ (2.3)
The bulk of the complicated task has thus been done 222 When I was a grad
student at MIT, I remember Prof. D. Freedman calling certain problems
“perfectly adapted to large parallel clusters of graduate students.” and with
only 10000 labeled samples, a standard supervised ML, in this case a
convolutional neural network (CNN), could very quickly reach astounding
accuracy. Indeed, the accuracy continues to improve each time a user corrects
Google.
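As a concrete stand-in for the above pipeline, the following minimal sketch (our own illustration, not from the original text) trains an off-the-shelf classifier on scikit-learn's small bundled digits set, which consists of 8×8 gray-scale images rather than the 28×28 NIST samples; the point is merely that a modest number of labeled samples already yields high accuracy.

```python
# A minimal stand-in sketch (assumption: scikit-learn's bundled 8x8 digits
# set is used instead of the 28x28 NIST/MNIST samples described above).
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)   # 1797 labeled 8x8 gray-scale digits
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.2, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(100,), max_iter=1000).fit(X_tr, y_tr)
print("validation accuracy:", clf.score(X_va, y_va))   # typically ~0.97
```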
We will not introduce ML here and refer the reader to canons such as [GBA],
and leave a rapid initiation to [HeUni], as well as longer recent monographs
for mathematicians to [HeCY, RuePR, TTH]. We mention in passing that
supervised machine-learning can be thought of as a generalized, non-linear,
and not necessarily analytic, regression. In this sense, Gauß’s observation on
the PNT was an early example of supervised ML. One remark is that when the
output is continuous, an ML algorithm is typically called a regressor, and
when discrete, a classifier. We will mostly deal with classifiers on
discrete/categorical data in our examples below both for uniformity and
because regression often requires analytic forms which one might not know a
priori 333 There is a field of symbolic regression which attempts to
systematically guess at the functional form, into which we will not delve
here. .
Next, let us remark on the data. As well known, the vast majority of the
explosion in AI research has been geared toward the human experience, from
image processing to medical treatments, from speech recognition to mechanical
design, etc. A power of ML, in its emergence of complexity via connectivism
[GBA], is its ability and effort to deal with “noise” and variability, as
clearly seen in (2.3). The irony is that mathematical data does not quite
suffer from this short-coming; there is, by definition and construction,
inherent structure and regularity. A plethora of such data we will shortly
encounter and explore.
What is more, outliers are sometimes even more interesting, as exceptional Lie
algebras or sporadic groups come to mind [HM]. One constraint we will make,
however, is that the range of values in our data, both in the input and the
output, be not too great. Such large variation, especially in the case of
integers, as we will see below, tends to make regressors and classifiers
struggle. In principle, we could standardize by only considering binary data
with binary labels, and such a study should be undertaken systematically,
particularly in light of our forthcoming discussion on hierarchical
difficulty. For now, we will restrict our attention to cases where the entries
of our input tensors, as well as the output, lie within the same order of
magnitude.
Finally and perhaps most importantly, let us discuss the methodology.
Mathematicians have a “bag of tricks” when confronted with a problem and these
go under various names. While results grow steadily throughout history, the
fundamental set of ideas and methods increases at a much slower pace. Hilbert
formalized these to a programme of finitary methods. Landau established the
theoretical minimum. Migdal called them MathMagics. One can think of these as
a standard set of techniques, from analysis to algebra to combinatorics to
arithmetic, etc., which, combined together, can tackle the most abstruse of
problems. Again, complexity emerges from inter-connectivity. Perhaps hidden in
this emergence is the very basis of “intuition”.
Phrased this way, imitating this set of standard tricks seems natural to ML
444 Interestingly, the IMO Grand Challenge [IMO], which has just been
launched, aims to create an AI algorithm to get a Gold at the International
Maths Olympiad. It was reasoned that the IMO presents a perfect set of
difficult problems solvable with a limited set of techniques (known to the
high school student). . We can therefore take, as our set of methods, some of
the standard techniques from supervised ML, to name a few (a minimal
scikit-learn sketch instantiating them follows the list):
* •
neural network (NN): we will take care to use only relatively simple
architectures such as a feed-forward network (MLP) with only a few layers and
only simple activation functions, and without much hyper-parameter tuning.
While a typical such NN is usually represented graphically (with example
dimensions to illustrate schematically) as
(schematic figure of a feed-forward network omitted)
One can think of this as a composition of maps as
$I\stackrel{f_{0}}{\longrightarrow}\mathbb{R}^{n_{1}}\stackrel{f_{1}}{\longrightarrow}\mathbb{R}^{n_{2}}\stackrel{f_{2}}{\longrightarrow}\ldots\mathbb{R}^{n_{k-1}}\stackrel{f_{k-1}}{\longrightarrow}\mathbb{R}^{n_{k}}\stackrel{f_{k}}{\longrightarrow}O$ (2.4)
with $f_{i}$ typically as a sigmoid function $\sigma(x)=(1+e^{-x})^{-1}$, or a
ReLU function $\max(0,x)$. The integer $k$ is the depth of the NN and the
maximum over $n_{i}$ is the width. The power of “complexity via connectivism”
can now be very precisely stated in terms of the so-called universal
approximation theorems [UAT] which essentially state that given sufficient
depth/width, any input $\to$ output can be approximated to arbitrary
precision.
* •
support vector machine (SVM): this is a very interpretable way to analyze data
by finding an optimal hyperplane (and using so-called kernel tricks, hyper-
surfaces) which separate data-points with different labels (different
categories of configurations). The hyperplane, whose equation can be written
down explicitly, is found by maximizing its distances to points of different
labels.
* •
Statistical classifier: Improving on simple frequency analysis, a remarkably
powerful method is that of naïve Bayesian classifiers, where one tracks not
only an individual frequency of occurrence, but (assuming independence) also
that of sequences in the input collectively.
* •
Decision Tree & Clustering: One could organize the categorization of the
labeled data in a tree-like structure. Similarly, one could find nearest
neighbours and clusters in order to classify the input.
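For concreteness, each of these four families can be instantiated in a few lines of scikit-learn. The sketch below is illustrative only: the hyper-parameters are simple choices, not those used in the experiments reviewed here.

```python
# Illustrative scikit-learn instantiations of the standard methods above;
# hyper-parameters are simple choices, not tuned to any experiment below.
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

classifiers = {
    "MLP (few layers, sigmoid activation)":
        MLPClassifier(hidden_layer_sizes=(64, 64), activation="logistic"),
    "SVM (Gaussian kernel)": SVC(kernel="rbf"),
    "naive Bayes": GaussianNB(),
    "decision tree": DecisionTreeClassifier(),
    "nearest neighbours": KNeighborsClassifier(n_neighbors=5),
}
```

Any of these can be dropped into the validation scheme described in the next paragraph.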
It is curious that in most of the ensuing experiments, the performance is
comparable among most of these above standard methods. That is, the inherent
structure of mathematical data responds well to the standard methods.
Performance can be quantified. For discrete output this is usually done as
follows. We have data ${\cal D}=\{x_{I}^{(j)}\to d^{(j)}\}$ where $x$ is
typically some tensor input with multi-index $I$ and $d$ is the associated
output (label); $j$ indexes the data-points. We split this disjointly into a
training set ${\cal T}$ and a validation set ${\cal V}$ so that ${\cal
D}={\cal T}\sqcup{\cal V}$. Usually, $|{\cal T}|$ is taken 555 A standard
thing is to perform 5-fold cross-validation, where the data is randomly
divided into 5 equal parts, so that 4 parts can be chosen to be ${\cal T}$ and
the 1 part, ${\cal V}$. The ML algorithm can then be performed 5 times for the
5 different choices of the 4-part ${\cal T}$, so that an error bar can be
collected for the accuracy upon validation. to be 80% of $|{\cal D}|$, and
$|{\cal V}|$, the remaining 20%. The ML algorithm is applied to ${\cal T}$
(the training of the machine) and then the inputs of ${\cal V}$ are fed so as
to give a set of predicted values $\{\widetilde{d^{(j)}}\}$. The pairwise
comparison between the actual values $d^{(j)}$ and $\widetilde{d^{(j)}}$ for
each of the members of ${\cal V}$ is then a measure of how good the ML is on
the data.
Since our output $d$ is mostly discrete, say $n$ distinct values (categories),
we can write an $n\times n$ matrix with the $(i,j)$-th entry being the number
of cases predicted to be $j$ while the actual value is $i$. This is called a
confusion matrix $M$ which we wish to be as close to diagonal as possible. One
can use naïve precision $p$ (percentage of agreement of $d$ with $\tilde{d}$)
in conjunction with confidence (e.g., by the chi-squared $\chi^{2}$ of $M$, or
more precisely, the Matthews’ correlation coefficient
$\phi:=\sqrt{\chi^{2}/n}$) as a measure of how good the prediction is. We
desire both $p$ and $\phi$ to be as close to 1 as possible. Henceforth, we
report the pair as a measure of accuracy for all of the experiments, under
80-20 training/validation split:
$\mbox{Accuracy}:=(p,\phi)=\mbox{(na\"{\i}ve precision, Matthews'
correlation)}\ .$ (2.5)
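As a sketch (assuming scikit-learn; the function name `accuracy_pair` is illustrative, not from the text), the accuracy pair of (2.5) under the 80-20 split can be computed as follows. Note that sklearn's `matthews_corrcoef` implements the multi-class generalization of $\phi$, which coincides with $\sqrt{\chi^{2}/n}$ in the binary case.

```python
# A minimal sketch of the accuracy pair (p, phi) of Eq. (2.5); the function
# name `accuracy_pair` is illustrative, not from the text.
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, matthews_corrcoef

def accuracy_pair(clf, X, y, seed=0):
    """Fit clf on an 80% split of (X, y); return (p, phi) on the held-out 20%.
    matthews_corrcoef is sklearn's multi-class generalization of phi."""
    X_tr, X_va, y_tr, y_va = train_test_split(
        X, y, test_size=0.2, random_state=seed, stratify=y)
    clf.fit(X_tr, y_tr)
    pred = clf.predict(X_va)
    return accuracy_score(y_va, pred), matthews_corrcoef(y_va, pred)
```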
## 3 Exploring the Landscape of Mathematics
We have spent too long philosophizing and the advice from Leibniz to Feynman
to go and calculate rings in our ears. The main purpose of this talk is a
comparative status report of the results in different branches of mathematical
problems since [HeDL]. We will present the precision and confidence of the
various experiments while bearing in mind the two questions posed in the
beginning of §2.
### 3.1 Algebraic Geometry over $\mathbb{C}$
We begin with algebraic geometry over the complex numbers (we emphasize
$\mathbb{C}$ here as we will delve into arithmetic geometry later) for two
reasons. First, dealing with systems of complex multi-variate polynomials is
structurally convenient and the algebraic closure of $\mathbb{C}$ renders such
class of problems well behaved in a formal sense. Second, the initial
motivation of [HeDL] and, in parallel, the independent works of [KS, Rue,
CHKN], was to study the landscape of string theory. The reason for this is
that over the last 30 years or so, theoretical physicists, in alliance with
pure and computational mathematicians, have been compiling geometrical
quantities inspired by super-string compactification, especially for Calabi-
Yau manifolds [HeCY]. Meanwhile, the combined motivation from the Minimal
Model programme and Mirror Symmetry has led algebraic geometers to create
large databases of algebraic varieties [3CinG, GRdB]. This is the reason we
begin with this seemingly technical subject, which prompted [HeDL] to consider
supervised ML of mathematics. The details of the ensuing discussion are not
important; and the take-home message is that algebraic varieties are well-
represented by matrices.
#### Warm-up:
We can begin with a baby 0-dimension problem: consider a complex quadratic
$az^{2}+bz+c=0$, for $(a,b,c)\in\mathbb{C}^{3}$. For simplicity, let us take
the coefficients to be Gaussian integers uniformly sampled in the range $\pm
10\pm 10i$, and check whether there is root multiplicity. That is, we have
labeled data 666 Or, to facilitate an ML which tends to treat real data, we
can split the input into real and imaginary parts as
$\{(\mathrm{Re}(a),\mathrm{Im}(a),\mathrm{Re}(b),\mathrm{Im}(b),\mathrm{Re}(c),\mathrm{Im}(c))\to r\}$.
${\cal D}=\{(a,b,c)\to r\}\ \mbox{ with $r=1$ or $2$}\ .$ (3.1)
Of course, $r=1$ is much rarer, so we down-sample the number of cases of
$r=2$. This technique is called balancing and we will make sure all our data
are balanced, otherwise there will clearly be prediction bias. One can readily
generate, say $10^{6}$ cases, remove repeats and down-sample $r=2$ to produce
a balanced dataset $\tilde{{\cal D}}$ with around 3000 cases each of $r=1,2$. At our
80-20 split validation, a decision tree classifier can readily achieve
accuracy $\sim(0.98,0.96)$. One can be a little more adventurous and demand
that $(a,b,c)$ be real and find the number of real roots of the quadratic, in
which case a similar level of accuracy is achieved.
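A minimal sketch of this warm-up (our own illustration, under the assumptions stated in the comments; unlike the text, repeated triples are not removed here) could read:

```python
# A sketch of the 0-dimensional warm-up: label Gaussian-integer quadratics
# a*z^2 + b*z + c by root multiplicity (r = 1 iff b^2 - 4ac = 0), balance
# the classes, and train a decision tree on the real/imaginary parts.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
re = rng.integers(-10, 11, size=(10**6, 3))    # Re(a), Re(b), Re(c)
im = rng.integers(-10, 11, size=(10**6, 3))    # Im(a), Im(b), Im(c)
a, b, c = (re + 1j * im).T
ok = a != 0                                    # keep genuine quadratics
r = np.where(b**2 - 4 * a * c == 0, 1, 2)      # number of distinct roots
X, r = np.hstack([re, im]).astype(float)[ok], r[ok]

# balance by down-sampling the dominant r = 2 class
i1, i2 = np.flatnonzero(r == 1), np.flatnonzero(r == 2)
keep = np.concatenate([i1, rng.choice(i2, size=len(i1), replace=False)])
X_tr, X_va, y_tr, y_va = train_test_split(X[keep], r[keep], test_size=0.2)
clf = DecisionTreeClassifier().fit(X_tr, y_tr)
print("validation accuracy:", clf.score(X_va, y_va))
```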
#### Geometric Invariants:
To try something more sophisticated we need to appeal to databases of
varieties. As mentioned in the beginning of this section, Calabi-Yau manifolds
(CY), or complex, Kähler manifolds of zero Ricci curvature, have been a
favoured playground [HeEnc]. The simplest CY is the torus, which can
algebraically be realized as a cubic in $\mathbb{C}\mathbb{P}^{2}$ – an
elliptic curve. One can, as the quadratic equation example above, record these
as vectors of coefficients; to this we will return in §3.4. When computing
certain topological invariants, however, it suffices to consider only the
multi-degree of the polynomials. Such representation and the likes thereof,
luckily, has been extensively compiled over the decades.
One of the favourite databases of Calabi-Yau threefolds is the CICYs, short
for Complete Intersection Calabi-Yau manifolds, realized as complete
intersections of homogeneous multi-degree polynomials in products of complex
projective spaces.
That is, let the ambient space be
$A=\mathbb{C}\mathbb{P}^{n_{1}}\times\ldots\times\mathbb{C}\mathbb{P}^{n_{m}}$,
of dimension $n=n_{1}+n_{2}+\ldots+n_{m}$ and each having homogeneous
coordinates $[x_{1}^{(r)}:x_{2}^{(r)}:\ldots:x_{n_{r}}^{(r)}]$, with the
superscript $(r)$, $r=1,2,\ldots,m$, indexing the projective space
factors. The Calabi-Yau threefold is then defined as the complete intersection
of $K=n-3$ homogeneous polynomials in the coordinates $x_{j}^{(r)}$. This
information can be succinctly written as
$X=\left[\begin{array}{c|cccc}
\mathbb{C}\mathbb{P}^{n_{1}}&q_{1}^{1}&q_{2}^{1}&\ldots&q_{K}^{1}\\
\mathbb{C}\mathbb{P}^{n_{2}}&q_{1}^{2}&q_{2}^{2}&\ldots&q_{K}^{2}\\
\vdots&\vdots&\vdots&\ddots&\vdots\\
\mathbb{C}\mathbb{P}^{n_{m}}&q_{1}^{m}&q_{2}^{m}&\ldots&q_{K}^{m}
\end{array}\right]_{m\times K}\ ,\quad
\begin{array}{lrl}
\mbox{(i)}&&K=\sum\limits_{r=1}^{m}n_{r}-3\ ,\\
\mbox{(ii)}&&\sum\limits_{j=1}^{K}q^{r}_{j}=n_{r}+1\ ,\ \forall\;r=1,\ldots,m\ ,
\end{array}$ (3.2)
with non-negative integers $q_{j}^{r}$. Condition (i) demands complete
intersection, and condition (ii) implies the vanishing of the first Chern
class (CY condition), and also renders the column recording the dimensions
$n_{i}$ redundant. Thus, a homogeneous quintic threefold in
$\mathbb{C}\mathbb{P}^{4}$ can be written as $[5]$, the complete intersection
of 4 quadrics in $\mathbb{C}\mathbb{P}^{7}$ can be written as $[2,2,2,2]$,
etc. Likewise, the cubic elliptic curve can be written as $[3]$. We remark
that the physics, or the Calabi-Yau conditions are not important here: all
algebraic varieties can be written in terms of such matrices, dropping
conditions (i) and (ii). Furthermore, since we are only keeping track of the
degrees, $X$ is really a family of varieties, as the coefficients of the
polynomials define complex structure.
The classification, up to permutation and other geometrical equivalences, of
(3.2) was undertaken in [CDLS] in the late 1980s; they were shown to be finite
in number, a total of 7890 configurations, with a maximum of 12 rows, a
maximum of 15 columns, and all having entries $q_{j}^{r}\in[0,5]$.
Interestingly, the best super-computer at the time (at the particle
accelerator CERN, to which physicists Candelas et al. had access) was
employed.
A problem of vital interest to both mathematicians and physicists is to
compute topological invariants, for which matrix representations like (3.2)
are sufficient (the topological invariant should not depend on mild complex
deformations). For instance 777 Incidentally, the manifold
$\left[\begin{array}{cc}1&1\\ 3&0\\ 0&3\end{array}\right]$ is the
well-known Schön threefold. It is a double elliptic fibration over
$\mathbb{C}\mathbb{P}^{1}$ and a self-mirror threefold with Hodge numbers
$h^{1,1}=h^{2,1}=19$. We will return to elliptic fibrations shortly. , a
typical calculation is that
$h^{1,1}\left(\left[\begin{array}{cc}1&1\\ 3&0\\ 0&3\end{array}\right]\right)=19$, where $h^{1,1}$ is a Hodge
number (a complexified Betti number). Indeed, due to index theorems, the Betti
numbers are not independent and sum to (with signs) Euler numbers, which are
easier to compute. Thus one needs to be judicious in choosing which Hodge
numbers to calculate. Typically, as argued before, we can choose the ones with
the least variation in range.
The method to obtain Hodge numbers, and in general ranks of cohomology groups,
is standard long exact sequence chasing. But this is computationally very
expensive. Though most common quantities in algebraic geometry can in
principle be obtained from the excellent software such as [M2, Sing, SAGE],
the key component of Gröbner basis is a doubly exponential complexity
algorithm 888Note that ML techniques are beginning to be used in computing
Gröbner bases [GBML].. Yet, for various datasets such as the CICYs, the
topological quantities have been computed and compiled, using various tricks
[CDLS]. This is another reason the CICY data-set and those of CY manifolds in
general have been gazed upon with renewed zest.
Phrasing the above Hodge computation as the labeled data-point
$\left[\begin{array}{cc}1&1\\ 3&0\\ 0&3\end{array}\right]\to 19$ and
recognizing that this is structurally no different from a hand-writing problem
of (2.3), provided the starting point of [HeDL]. Enhancing the data by adding
in random row/column permutations, the 8000 or so CICYs can be established
into a labeled dataset of the form ${\cal D}=\\{M\to h^{1,1}\\}$ of size
$10^{6}$, say, where $M$ is the configuration matrix and $h^{1,1}$ is a
positive integer ranging from 1 to 19. To uniformize, we right-bottom pad all
configurations with 0 so that all $M$ are $12\times 15$; giving us a
19-channel classification problem:
$\Big\{\left[{\scriptsize\begin{array}{ccccccccccccccc}
1&1&0&0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&1&0&0&0&1&0&0&0&0&0&0&0&0\\
1&0&0&1&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&1&1&0&0&0&0&0&0&0&0&0\\
0&1&0&1&0&0&0&1&0&0&0&0&0&0&0\\ 1&0&0&0&0&0&0&0&1&1&0&0&0&0&0\\
0&0&0&0&0&1&0&1&0&0&1&0&0&0&0\\ 0&0&0&0&1&0&1&0&1&0&0&0&0&0&0\\
0&0&1&0&0&0&0&0&0&1&1&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\
0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0
\end{array}}\right]\to 9\ ,\
\left[{\scriptsize\begin{array}{ccccccccccccccc}
1&1&0&0&0&0&0&0&0&0&0&0&0&0&0\\ 3&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\
0&3&0&0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\
0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\
0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\
0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\
0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0
\end{array}}\right]\to 19\ ,\ \ldots\Big\}$ (3.3)
Relatively simple MLPs and SVMs can perform this task to accuracy
$\sim(0.9,0.9)$ [HeDL, BHJM, HL]. Recently, with more sophisticated
convolutional NNs, the accuracy has exceeded 0.99 [EF]. We emphasize again
that computing a topological invariant of any manifold (appropriately embedded
as an algebraic variety) can be cast into the form of (3.3). We turned to CY
manifolds because their data are readily available; similar experiments should
be carried out for general problems in geometry.
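A sketch of the data pipeline behind (3.3), assuming a hypothetical list `cicys` of (configuration matrix, $h^{1,1}$) pairs parsed elsewhere from the published CICY list (the parsing itself is omitted), might look as follows:

```python
# A sketch of the pipeline of Eq. (3.3). Assumption: `cicys` is a list of
# (configuration_matrix, h11) pairs obtained elsewhere (e.g. parsed from
# the published CICY list); it is NOT constructed here.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def pad(M, shape=(12, 15)):
    """Right-bottom pad a configuration matrix with zeros to 12 x 15."""
    out = np.zeros(shape, dtype=int)
    out[:M.shape[0], :M.shape[1]] = M
    return out

def augment(M, k=100):
    """k copies of M with rows and columns independently permuted, padded."""
    r, c = M.shape
    return [pad(M[rng.permutation(r)][:, rng.permutation(c)]) for _ in range(k)]

X, y = [], []
for M, h11 in cicys:
    for A in augment(np.asarray(M)):
        X.append(A.ravel())          # flatten the 12 x 15 matrix to a 180-vector
        y.append(h11)                # 19-channel classification target

clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500)
clf.fit(np.array(X), np.array(y))
```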
#### A Host of Activity:
Other ML explorations within the CICYs, such as line-bundle cohomology [Rue,
CL, BCDL, LS, OT, DHLL], distinguishing elliptically fibered manifolds
[HeLee], etc., all of which achieved similarly high accuracy and/or improved
computation times drastically. While on the topic of Calabi-Yau manifolds, one
cannot resist mentioning the tour de force work of Kreuzer-Skarke [KrSk],
which classified all reflexive polytopes (lattice polytopes with a single
interior point such that all facets are distance 1 therefrom) in dimension
$n=3$ and 4 up to $SL(n;\mathbb{Z})$, generalizing the classical result of the
16 reflexive polygons in $n=2$. The CY threefold is then realized as a
hypersurface (as the canonical divisor) in the toric variety [BB]. This $n=4$
case is a staggering 473,800,776 in number, which is still being data-mined by
many collaborations. The ML of this set is performed in [CHKN, CCHKLN, ACHN,
KlSch]. Again, the representations of these manifolds are in terms of integer
matrices: the vector coordinates of the vertices of the polytopes.
Likewise, explorations in Calabi-Yau volumes [KS], numerical Kähler metrics
[AHO, AGGKRR, DLQ, JPM], Jones polynomials and hyperbolic volumes [CJKP] as
well as knot invariants [GHRS], have all met with admirable success. In
summary, this class of problems, involving algebraic varieties, their
topological invariants, metrics and volumes, tends to produce data that is
well adapted to our ML paradigm. It is therefore fortunate that such a class of
problems is a favourite for theoretical physicists, especially string
theorists, as algebraic and differential geometry are undoubtedly the correct
language to describe Nature: general relativity being a manifestation of
Riemannian geometry and elementary particle physics, of gauge connections and
bundles (cf. a recent attempt in summarizing this dialogue [YGH]). There have
of late also been daring and intriguing proposals that quantum field theories
and space-time itself, are NNs [QFTNN].
### 3.2 Representation Theory
#### Warm-up:
With geometry behaving well to our pattern search, it is natural to wonder
about algebra. Again, we begin with a baby example. Let us take even versus
odd functions. Let $(x,y)$ be random real pairs uniformed distributed over a
rectangle, say $[0,\pi]\times[-1,1]$. Consider the 4-vectors $(x,y,-x,y)$ and
$(x,y,-x,-y)$; the former models an even function, and the latter, odd. We
thus have a dataset, say ranomly sampled to size $10^{5}$, of points in
$\mathbb{R}^{4}$, labeled into 2 categories:
${\cal D}=\\{(x,y,-x,y)\to 1\,(x,y,-x,-y)\to 0\\}\
,\quad(x,y)\in[0,\pi]\times[-1,1]\ .$ (3.4)
A simple SVM can readily achieve accuracy 999 Let us consider another
representation for even/odd. Suppose we fix a number $p$ and establish a
labeled data-set $\{n_{i}\}\rightarrow n\bmod p$, where $n_{i}$ is the list
of digits of $n$ in some base (it turns out that the choice of base is not important).
A simple classifier such as logistic regression, or an MLP with a linear layer
and a sigmoid layer, will very quickly “learn” this to accuracy and confidence
very close to 1 for $p=2$ (even/odd). The higher the $p$, the more the
categories to classify, and the accuracy decreases as expected. However, if we
do not fix $p$ and feed the classifier with pairs $(n,p)$ all mixed up [MHK],
then the accuracy is nearly zero. That arithmetic properties are hard to ML
will be the subject of §3.4. exceeding $(0.99,0.99)$. This, and more
sophisticated symmetry detection in various contexts, was carried out in
[CHLZ, KrSy].
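A minimal sketch of this experiment (our own illustration, assuming scikit-learn; the sample size is kept smaller than the $10^{5}$ of the text so that the SVM trains quickly):

```python
# A sketch of the even/odd warm-up (3.4): sample (x, y) uniformly on
# [0, pi] x [-1, 1], build the 4-vectors modelling even and odd functions,
# and fit an SVM with a Gaussian kernel.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 10**4 // 2    # smaller than the text's 10^5, so the SVM fits quickly

x = rng.uniform(0, np.pi, n)
y = rng.uniform(-1, 1, n)
even = np.stack([x, y, -x, y], axis=1)     # models even functions, label 1
odd = np.stack([x, y, -x, -y], axis=1)     # models odd functions, label 0
X = np.vstack([even, odd])
labels = np.concatenate([np.ones(n), np.zeros(n)])

X_tr, X_va, y_tr, y_va = train_test_split(X, labels, test_size=0.2,
                                          random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("validation accuracy:", clf.score(X_va, y_va))
```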
#### Finite Groups:
From symmetries, we obviously proceed to finite groups (and finite rings)
[HK]. Now, a finite group of size $n$ is defined by its $n\times n$ Cayley
multiplication table, which is a Latin square (Sudoku solution), i.e., each
row and column is a permutation of $1,2,\ldots,n$ with each number appearing
exactly once. However, not every Latin square is a Cayley table of a finite
group – we must have associativity built in. Of course, there are standard
theorems to check this but we will not do so. For uniformity, let us consider,
say, $n=12$; there are 5 non-isomorphic groups of this size.
We can thus generate a data-set as follows: consider $12\times 12$ Latin
squares which are not the Cayley tables of any of the groups, perform random
permutations on the rows and columns independently; label all these with 0;
likewise, consider those which are truly the groups, perform random
permutations, and label these matrices with 1. Again, the non-Cayley tables
vastly dominate, so we need to perform fewer permutations for these to produce a
balanced set. An SVM classifier, for instance, can distinguish the 0 and 1
cases with accuracy $\sim(0.96,0.92)$.
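As a sketch of the data-generation step (our own illustration; only the positive class is constructed, since producing Latin squares that are not group tables requires a separate generator):

```python
# A sketch of positive-class generation for the n = 12 experiment: take a
# known Cayley table (here the cyclic group C_12, built directly) and
# produce copies with independently permuted rows and columns, labeled 1.
# Non-group Latin squares (label 0) need a separate generator, omitted here.
import numpy as np

rng = np.random.default_rng(0)
n = 12

# Cayley table of C_12: entry (i, j) is (i + j) mod n, with symbols 1..n,
# so each row and column is a permutation of 1..n (a Latin square).
cayley_c12 = (np.add.outer(np.arange(n), np.arange(n)) % n) + 1

def permuted_copies(T, k=1000):
    """k copies of T with rows and columns independently permuted;
    this preserves the Latin-square property."""
    m = T.shape[0]
    return [T[rng.permutation(m)][:, rng.permutation(m)] for _ in range(k)]

positives = [(M.ravel(), 1) for M in permuted_copies(cayley_c12)]
```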
Perhaps a more striking case study is that of finite simple groups, to which
we alluded in the introduction. It would be understandably fascinating if some
ML algorithm can tell a simple group from a non-simple one. Ordinarily, one
would have to go through Sylow theorems or compute the character table. Here,
let us see if ML can do so by “looking” at the Cayley table.
A preliminary study was initiated in [HK] by taking all finite groups up to
size 70, say, computing all their Cayley tables using [GAP], and then enhancing
them with random row and column permutations. Note that up to size 70, there are
602 groups, but only 20 are simple, thus we need to balance the data by
permuting the tables for simple groups more. One can easily establish around
$10^{5}$ matrices this way, approximately evenly divided amongst the simple
and non-simples. For uniformity, we bottom-right pad the results with 0 so
that we have a binary classification (1 for simple versus 0 for non-simple)
problem of $70\times 70$ integer matrices (Latin squares). To give a few
examples, we have
$\Big\{\left(\begin{array}{cccc|l}
1&2&3&4&0\\ 2&1&4&3&0\\ 3&4&1&2&0\\ 4&3&2&1&0\\ \hline
0&0&0&0&\mbox{{\Huge 0}}_{66\times 66}
\end{array}\right)\to 0\ ,\
\left(\begin{array}{ccccc|l}
1&2&3&4&5&0\\ 2&3&4&5&1&0\\ 3&4&5&1&2&0\\ 4&5&1&2&3&0\\ 5&1&2&3&4&0\\ \hline
0&0&0&0&0&\mbox{{\Huge 0}}_{65\times 65}
\end{array}\right)\to 1\ ,\
\left(\begin{array}{cccccccc|l}
1&2&3&4&5&6&7&8&0\\ 2&4&5&6&7&1&8&3&0\\ 3&8&4&7&2&5&1&6&0\\ 4&6&7&1&8&2&3&5&0\\
5&3&6&8&4&7&2&1&0\\ 6&1&8&2&3&4&5&7&0\\ 7&5&1&3&6&8&4&2&0\\ 8&7&2&5&1&3&6&4&0\\ \hline
0&0&0&0&0&0&0&0&\mbox{{\Huge 0}}_{62\times 62}
\end{array}\right)\to 0\ ,\ \ldots\Big\}\ ,$ (3.5)
corresponding, respectively, to the Klein viergruppe $C_{2}\times C_{2}$, the
cyclic group $C_{5}$, the quaternion group of size 8, etc. Surprisingly, in a
matter of minutes on an ordinary laptop, an SVM (with a Gaussian kernel) could
perform this task to accuracy $\gtrapprox(0.98,0.96)$. Whilst we need to
include more groups in this study (which, sadly, becomes computationally and
memory intensive, since Cayley tables grow as $n^{2}$), the investigations
hint at the remarkable possibility that
> Proto-conjecture: Consider the (infinite dimensional) space of finite
> groups, represented appropriately (e.g., by having the Cayley table
> flattened to vectors in $\mathbb{Z}^{n^{2}}$); then there is a hyper-surface
> separating the simple groups from the non-simple groups.
Fixing an $n$, one can consider all groups of order less than $n$, mix them up
and balance the data as discussed; the explicit hyper-surfaces have been
computed [HK] and work is in progress to understand them.
#### Continuous Groups:
What about continuous groups? Experiments inspired by standard computations in
Lie groups were undertaken in [CHLM] (to be concrete, the classical groups of
type $ABCD$, as well as the exceptional group $G_{2}$, were explored). Two of
the most important calculations in representation theory, especially that of
Lie groups (and particularly for mathematical physics), are (1) branching
rules: the decomposition of a representation $R$ of a group $G$ into those of
its maximal subgroup $H$; and (2) tensor products: given a group $G$ and two
of its representations $R_{1,2}$, decompose $R_{1}\otimes R_{2}$ into irreps.
Again,
there is a convenient way to encode this data: every representation $R$ of a
Lie group is uniquely written in terms of a weight vector
$v_{R}\in\mathbb{Z}_{\geq 0}^{r}$ where $r$ is the rank of the group. In this
way, a typical example of calculation (2) would be as follows. Take
$G=A_{2}=SU(3)$; the tensor decomposition ${\bf 3}\otimes{\bf 15}={\bf
8}\oplus{\bf 10}\oplus{\bf 27}$ can be phrased as
$\left([0,1]\ ,[2,1]\right)\longrightarrow\left([1,1]\ ,[0,3]\ ,[2,2]\right)\
.$ (3.6)
Such data can be readily obtained from [LieArt], even though the computation
time grows exponentially with the dimension of the representation. While it
might be difficult to learn the precise decomposition due to the large
variation in the output (something which would be interesting to investigate),
predicting numerical quantities such as the number of terms in the
decomposition, for calculations of both types (1) and (2), was found to be
efficient, and accuracy $\sim(0.96,0.9)$ can be achieved [CHLM].
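As an illustration of the data-set shape, here is a minimal sketch of learning calculation (2): predicting the number of irreps in $R_{1}\otimes R_{2}$ from the two weight vectors. The file name "su3_tensor.csv", its layout (four weight-vector entries followed by the term count, e.g. the row 0,1,2,1,3 encoding (3.6)), and the choice of a random-forest regressor are all our assumptions for the sketch, not details from [CHLM].

```python
# Sketch: regress the number of terms in an SU(3) tensor decomposition from
# the weight vectors of the two factors. Assumes a hypothetical CSV exported
# from LieART with rows like "0,1,2,1,3" (cf. (3.6): [0,1] x [2,1] -> 3 irreps).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

data = np.loadtxt("su3_tensor.csv", delimiter=",")
X, y = data[:, :4], data[:, 4]   # two rank-2 weight vectors -> number of terms

reg = RandomForestRegressor(n_estimators=100, random_state=0)
print(cross_val_score(reg, X, y, cv=5))  # 5-fold cross-validated scores
```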
### 3.3 Combinatorics
From geometry and algebra, we move on to combinatorics and graph theory. Of
course, permutation symmetries have been a key component of ML, both as
built-in layers of NNs (q.v. e.g., [GBA]) and in algorithms for detecting them
(q.v. e.g., [HW]). Our motivations here are different, and will focus on the
intrinsic patterns in combinatorial problems.
#### Graph Properties:
While the initial motivation of [HeYau] is to study the discrete
generalization of Calabi-Yau manifolds by considering the spectrum of the
graph analogue of the Laplacian, much of the preparatory work is of general
interest. The Wolfram database of connected, simple, undirected graphs [Wolf]
was downloaded up to 100 vertices, a total of around 8000 graphs. Such objects
have a standard representation in terms of the adjacency matrix $a_{ij}$,
whose $(i,j)$-th entry records an edge between vertex $i$ and vertex $j$.
Because we are only dealing with simple undirected graphs here (no
multi-edges, no self-loops, and only edges rather than directed arrows),
$a_{ij}$ is binary, symmetric, and with diagonal entries 0. Furthermore, the
matrices are not block-diagonalizable because the graphs are all connected.
There is a host of interesting properties of graphs - such as whether it is
planar, what its genus is, etc. - which have been studied over the centuries
since Euler; this is why the Wolfram database exists. Thus, we have yet
another family of labeled data, exemplified by the following:
$genus\left(\mbox{triangular dipyramid}\right)=0\qquad\leadsto\qquad\left(\begin{array}{ccccc}0&0&1&1&1\\ 0&0&1&1&1\\ 1&1&0&1&1\\ 1&1&1&0&1\\ 1&1&1&1&0\end{array}\right)\longrightarrow 0\ .$ (3.7)
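A minimal sketch of building such labeled pairs, assuming graphs are handled with the networkx library (our choice; the source worked from the Wolfram database) and labeling by planarity, i.e. genus 0, since networkx exposes a planarity check but not a genus computation:

```python
# Sketch: adjacency matrix -> label, with simultaneous row/column permutation
# for data enhancement. The dipyramid of (3.7) is K5 minus the edge {0,1}.
import networkx as nx
import numpy as np

rng = np.random.default_rng(0)

def labeled_example(G):
    A = nx.to_numpy_array(G, dtype=int)   # the adjacency matrix a_ij
    planar, _ = nx.check_planarity(G)     # planar <=> genus 0
    return A, int(planar)

def augment(A, k=10):
    # rows and columns index the same vertices, so (unlike Cayley tables)
    # they must be permuted simultaneously
    perms = [rng.permutation(A.shape[0]) for _ in range(k)]
    return [A[np.ix_(p, p)] for p in perms]

G = nx.complete_graph(5)
G.remove_edge(0, 1)                       # the graph of (3.7)
A, label = labeled_example(G)
print(label)                              # 1: planar, i.e. genus 0
print(augment(A, 2)[0])                   # one permuted copy
```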
Again, we can enhance the data by including random permutations of
rows/columns (note that, unlike for Cayley tables, the rows and columns, which
index the vertices, must be permuted simultaneously). This, together with
balancing, gives us various labeled data-sets of the form $\{a_{ij}\to P\}$
with relevant property $P$, and of size $\sim 10^{5}$. In particular, [HeYau]
finds, in approximately decreasing order of accuracy:
Girth:
the minimum over the lengths of all cycles of the graph; it is $\infty$ if the
graph is acyclic (has no cycles). To test whether a graph is acyclic or not,
as a binary classification problem, a decision tree can get accuracy
$\sim(0.95,0.91)$. On the other hand, a 3-category classification (of whether
the girth is 3, 4, or $>4$) achieves $\sim(0.77,0.66)$. This is interesting
since deciding whether a graph is acyclic is easy: there is a polynomial time
algorithm.
Genus:
the genus of the Riemann surface onto which the graph can be embedded. This
gives a 3-way classification of $g=0$, $g=1$ and $g>1$ in analogy to Riemann
uniformization for surfaces (complex curves). Logistic regression gives
accuracy $\sim(0.81,0.72)$;
Planarity:
whether the graph can be embedded into a plane, in the sense that it can be
drawn so that no edges cross except where they meet at the nodes. This is a
binary classification for which logistic regression finds accuracy
$\sim(0.81,0.62)$;
Euler/Hamilton:
if a cycle traverses all edges exactly once, it is an Euler cycle. On the
other hand, if a cycle visits all vertices exactly once, it is a Hamilton
cycle. The presence of an Euler cycle is the famous Königsberg bridge problem
and that of a Hamilton cycle, the celebrated traveling salesman problem. The
former is known to have a polynomial time algorithm whilst the latter is NP
hard. Curiously, the binary classification problem of whether a graph has an
Euler cycle has, with a random forest classifier, accuracy
$\sim(0.73,0.47)$, while for the presence of a Hamilton cycle, accuracy is
$\sim(0.78,0.56)$, which is comparable. Though it seems counter-intuitive that
a “hard” and an “easy” problem should behave similarly to an ML algorithm, one
should bear in mind that heuristic and approximate solutions to the Hamilton
cycle problem abound. Thus, stochastically, these two problems are on the same
level of difficulty, in accordance with our ML results.
More sophisticated properties of graphs have also been explored, such as
categorizing chromatic number, graph Laplacians, Ricci-flatness, etc. [HeYau].
For directed graphs and associated representations of quivers, there has been
a host of recent activity, especially in the context of cluster algebras. In
physics, for instance, cluster mutation is identified as Seiberg duality for
supersymmetric QFTs. A systematic study was done in [BFHHMX] (q.v. summary
table on p7) to see how various ML algorithms detect quiver properties such as
mutation type and equivalence.
### 3.4 Number Theory
As one might intuitively suspect, number theory problems will be hard; finding
simple new patterns in the primes, for example, would have unfathomable
repercussions.
#### Warm-up:
Let us begin with primes as a warm-up. Suppose we have a sequence of labeled
data
$\begin{array}{l}\{2\}\to 3\ ;\\ \{2,3\}\to 5\ ;\\ \{2,3,5\}\to 7\ ;\\ \{2,3,5,7\}\to 11\ ;\ \ldots\end{array}$ (3.8)
One can easily check that, even with millions of training data, one would be
hard pressed to find an ML algorithm capable of predicting the next prime, and
that we are better off with a simple regression against the $n\log(n)$ curve
of the PNT.
In case the reader is worried about the large variation in the output, let us
re-cast this into a binary classification problem.
We set up the data as follows:
1. Let $\delta(n)=0$ or $1$ be the prime characteristic function, so that it
is 1 when an odd number $n$ is prime and 0 otherwise (there is no need to
include even numbers);
2. Consider a “sliding window” of size, say, 100, and consider the list of
vectors $\{\delta(2i+1),\ldots,\delta(2(i+100)+1)\}_{i=1,\ldots,50000}$ in
$\mathbb{R}^{100}$. This is the list of our inputs;
3. For output, consider the prime characteristic of a number at some fixed
distance beyond each window, say, $\delta(2(i+100+k)+1)$ for $k=10000$.
We thus have a binary classification problem of binary vectors, of the form
(we have ordered the set with a subscript for reference)
$\begin{array}{l}\{1,1,1,0,1,1,\ldots,1,0,1,1,0,0\}_{1}\to 0\ ;\\ \{1,1,0,1,1,0,\ldots,0,1,1,0,0,0\}_{2}\to 0\ ;\\ \ldots\\ \{1,0,0,0,0,0,\ldots,0,0,0,0,1,0\}_{600}\to 1\ ;\ldots\end{array}$ (3.9)
Now, the primes are increasingly rare by the PNT, so we down-sample the
0-labeled windows, keeping around 9000 each of 0 and 1. Applying various
classifiers, it was found that k-nearest neighbours (using $k=50$ and the
Hamming distance) worked best, at accuracy around $(0.77,0.60)$.
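A minimal sketch of this experiment follows; to keep it quick we use a smaller corpus and slightly simplified indexing than the description above (those shortcuts, and the train/test split, are ours):

```python
# Sketch: sliding windows of the prime characteristic function, classified
# with k-nearest neighbours (k=50, Hamming distance, as in the text).
import numpy as np
from sympy import isprime
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

W, K, N = 100, 10000, 20000        # window size, offset, number of windows
top = 2 * (N + W + K) + 2
delta = np.array([int(isprime(m)) for m in range(3, top, 2)])  # odd n only

X = np.array([delta[i:i + W] for i in range(1, N)])    # the windows
y = np.array([delta[i + W + K] for i in range(1, N)])  # the far-away target

# primes are rare, so down-sample the 0-labeled windows to balance classes
ones, zeros = np.where(y == 1)[0], np.where(y == 0)[0]
pick = np.random.default_rng(0).choice(zeros, len(ones), replace=False)
keep = np.concatenate([ones, pick])

Xtr, Xte, ytr, yte = train_test_split(X[keep], y[keep],
                                      test_size=0.3, random_state=0)
knn = KNeighborsClassifier(n_neighbors=50, metric="hamming").fit(Xtr, ytr)
print("test accuracy:", knn.score(Xte, yte))
```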
On the other hand, if we use the Liouville $\lambda$-function - which is 1 if
the number of prime factors of $n$ (counted with multiplicity) is even and
$-1$ if odd - instead of the prime-characteristic $\delta$, we find accuracy
around $(0.50,0.001)$ with any standard ML algorithm, which is as good as
randomly guessing (see Footnote 10 below). This means that it is extremely
difficult to predict the precise behaviour of $\lambda(n)$, as is well known.
Going back to $\delta$, it is indeed curious that we are doing quite a bit
better than random guessing. Now, it has recently come to be known [AKS] that
PRIMES, the problem of deciding whether $\delta(n)=0$ or 1, is actually
polynomial time, so this is an intrinsically “easier” problem. Experience
tells us that a data-structure like (3.9), had it come from algebraic geometry
over $\mathbb{C}$, would be getting much higher accuracies.

Footnote 10: To give another example of how difficult “divisibility” is, let
us reconsider Footnote 9. As mentioned, fixing a prime $p$ and considering the
residue of $n$ (expressed as a string of its binary digits) mod $p$ is
essentially a linear problem and can be quickly learnt by an SVM; if we do not
fix $p$, and have the input as $(n,p)$, both expressed as binary digits, then
it is much more difficult to find a classifier which works. We remark that
trying the same for the even/odd property of the digits of $\pi$, say, also
gives no better than a random guess.
#### Arithmetic Geometry:
Having prepared ourselves with traditional problems involving primes, it is
natural to consider problems which lie between geometry and number theory,
which have spear-headed much of the modern approach to arithmetic. Initial
exploration [ABH] to BSD [BSD] using standard ML methods as well as
topological data analysis (persistence diagrams) showed that elliptic curve
data behaved not much better than frontal attacks to primes. However, the
representation used for the elliptic curve was the Weierstraß coefficients,
which, like (3.8), had huge variation in input/output structure. Indeed, as
emphasized before, one should normalize the data to avoid unnecessarily large
numerical range, such as the (3.9). Similarly, for the Hodge number problem
(3.3) in geometry, it was natural to use $h^{1,1}$ which is a 19-channel
classification problem, rather than $h^{2,1}$, which have a range in the
hundreds.
With this consideration, a much more conducive representation for the elliptic
curves was used in [HLOac]: the non-trivial coefficients of the L-function.
Recall that for an elliptic curve $E$, the local L-factor is defined by
$\exp\left(\sum\limits_{k=1}^{\infty}\frac{\#E\left(\mathbb{F}_{p^{k}}\right)T^{k}}{k}\right):=\frac{L_{p}(E,T)}{(1-T)(1-pT)}$
with $L_{p}(E,T)=1-a_{p}T+pT^{2}$ and
$a_{p}=p+1-\#E\left(\mathbb{F}_{p}\right)$. Thus we have labeled data-sets (of
size around $10^{5}$) [LMFdB] of the form
where $p_{1},\ldots,p_{N}$ are the first $N$ primes. Now, with this
representation, even at $N$ as small as 100, we can classify rank, torsion
order, existence of integer points, etc., to accuracy $(0.98,0.96)$,
$(1.0,1.0)$, $(1.0,1.0)$, respectively, using a naïve Bayesian classifier.
Interestingly, the most difficult quantity of BSD, the Tate-Shafarevich group,
obtained the least accuracy, with precision $<0.6$. Similar results were
obtained for genus 2 curves. In fact, other refined properties of arithmetic
curves, such as those pertaining to the Sato-Tate Conjecture, can also be
classified by establishing data-sets as in (3.10), and high accuracy can be
attained [HLOst].
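A minimal sketch of the kind of classifier just described; the file name "curves.csv" and its layout (100 $a_{p}$ values followed by the rank, exported from the LMFDB) are our assumptions for illustration:

```python
# Sketch: naive Bayes on the a_p coefficients of (3.10), predicting rank.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

data = np.loadtxt("curves.csv", delimiter=",")
X, y = data[:, :100], data[:, 100].astype(int)  # N = 100 primes, as in text

clf = GaussianNB()                              # a naive Bayesian classifier
print(cross_val_score(clf, X, y, cv=5))         # 5-fold accuracies
```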
In the same vein, [HLCnf] studied properties of number fields. The type of the
Galois group and the size of the class group can be predicted from the
coefficients of the minimal polynomial or from those of the Dedekind zeta
function, with accuracy $\gtrapprox(0.97,0.93)$. Likewise, the degree of the
Galois extension of a dessin d’enfant can be predicted by looking at the
permutation triple information of the dessin [HHP]. This is quite comforting
and surprising, since computing Belyi maps from dessins is notoriously
difficult.
In sum, we have taken problems from some of the central themes of modern
number theory: BSD, the Langlands Programme, and Grothendieck’s Esquisse.
Interestingly, the data therein possess structures which are amenable to ML,
much more so than classical analytic number theory (even for simple problems
like remainders on division, as discussed in Footnote 10, let alone the
Liouville $\lambda$). It is as if arithmetic geometry is closer to geometry
than to arithmetic.
## 4 Conclusions and Outlook
We have taken a casual promenade in the vast landscape of mathematics, armed
purposefully only with a small arsenal of techniques from ML, in order to
explore the structure of different branches, exemplified by concrete data that
had been carefully compiled over the decades. The methods employed, from SVMs
to Bayesian classifiers, from simple feed-forward NNs to decision trees, have
no idea of the intricacies of the underlying mathematics, yet they are
guessing correct answers to high accuracy, sometimes even to 100%.
This paradigm is clearly useful in at least two respects. First, for
computations which would traditionally be too expensive and where one wants a
quick estimate, this ML approach can be orders of magnitude faster. For
example, computing cohomology groups of algebraic varieties requires putting
everything into Gröbner bases, which is exponentially prohibitive, but the ML,
exemplified by the CICYs, takes only a matter of seconds to minutes. Second, in
the cases where accuracy consistently reaches 1.00, one has a potential
conjecture. Of course, as with the first case, one needs to be careful about
“interpolation” versus “extrapolation”: we need to ensure that the ML’s
learning is not merely restricted to the data-set even when cross-validation
is performed, but that it truly has the ability to go beyond the features. For
instance, one could train on bundles of lower degree and validate on those of
higher degree, or one could train on smaller graphs and validate on larger
graphs, and if the accuracy remains 1.00, then one could proceed to
conjectures (see Footnote 11 below).

Footnote 11: One should bear in mind that, in parallel to the traditional
formulation of conjectures from data, such as PNT or BSD, there is an
increasing number of important statistical statements in mathematics, such as
distributions of ranks of elliptic curves or in prime progressions. On the
other hand, it goes without saying that one should always be careful with
formulating conjectures based on data mining: the famous Skewes number
immediately springs to mind as a caveat.
Of course, interpretable ML is a burgeoning field, and NNs especially, due to
their complex inter-connectivities, are notoriously difficult to untangle.
This “intelligible intelligence” [UT] in uncovering laws of science [IMWRR],
information geometry [BN], and symbolic mathematics [LC] using NNs is becoming
increasingly relevant. In the above explorations, we have already seen exact
cohomology formulae [BCDL] and the conjectured existence of a hyper-surface
separating simple and non-simple finite groups [HK], etc. Indeed, if the
predictions of [Buz, ICM18] are true, then machine-aided conjectures and
proofs will go hand in hand within a decade. In our sense, finding
interpretable results would be extracting “semantics” from “syntax” [Zilb],
going from “top-down” to “bottom-up”.
One might even lean toward the other extreme and forgo interpretability in
certain situations. After all, if an ML algorithm - without an analytic
interpretation - does produce the correct result 100% of the time, it is as
good as an analytic formula. Conversely, there are many exact formulae which
are ineffective. For example, a simple consequence of Wilson’s theorem is that
the $n$-th prime is
$\left\lfloor\frac{n!\bmod(n+1)}{n}\right\rfloor(n-1)+2$. The $n!$ clearly
compels people not to use this when finding the $n$-th prime.
Similarly, exact expressions for cohomology over algebraic varieties do exist,
to which we alluded in §3.1. Even for projective space, Bott-Borel-Weil gives
$h^{q}(\mathbb{C}\mathbb{P}^{n},(\wedge^{p}T\mathbb{C}\mathbb{P}^{n})\otimes{\cal O}(k))=\left\{\begin{array}{lll}\binom{k+n+p+1}{p}\binom{k+n}{n-p}&q=0&k>-p-1,\\ 1&q=n-p&k=-n-1,\\ \binom{-k-p-1}{-k-n-1}\binom{-k-n-2}{p}&q=n&k<-n-p-1,\\ 0&{\rm otherwise}&\end{array}\right.$
which is a non-trivial expression. Take the example of the cohomology of a
single line bundle of bi-degree $(-k,m)$ on a bi-degree $(2,4)$ hyper-surface
in $\mathbb{C}\mathbb{P}^{1}\times\mathbb{C}\mathbb{P}^{3}$ (which is a CICY
threefold); this is known, from painful long-exact-sequence chasing, to be
$h^{q}(X,\mathcal{O}_{X}(-k,m))=\left\{\begin{array}{ll}(k+1)\binom{m}{3}-(k-1)\binom{m+3}{3}&q=0,\quad k<\frac{(1+2m)(6+m+m^{2})}{3(2+3m(1-m))},\\ (k-1)\binom{m+3}{3}-(k+1)\binom{m}{3}&q=1,\quad k>\frac{(1+2m)(6+m+m^{2})}{3(2+3m(1-m))},\\ 0&{\rm otherwise}.\end{array}\right.$ (4.11)
One can only imagine how much more complicated the expression would be for
non-complete-intersections and for more complicated bundles than a single line
bundle! The precise answers and regions of validity are more suited to a
computer programme than to any human comprehension, beyond the guarantee that
an exact sequence calculation would produce the right result. The point is
that such expressions in principle exist, and the principle is important.
Whether they are written explicitly, or as the list of parameters and
architecture of an NN, neither is more enlightening than the other. While we
remain agnostic, we hope the reader can appreciate both the necessity and the
occasional dispensability of interpretability.
Utility aside, our paradigm is also an approach toward understanding the
fundamental structure of mathematics. Modeling our standard ML algorithms as
the “bag of tricks” of the working mathematician, and the various data as
representing the field whence they come, we have gone through a plethora of
problems ranging from geometry to arithmetic. The collection of algorithms has
no idea about the AKS algorithm, nor cohomology theory, nor graph theory, nor
abstract algebra, nor arithmetic geometry …, but they are seemingly picking up
“harder” versus “easier” problems. Guessing the Liouville $\lambda$ function
seems to be impossible for any of the standard methods, while guessing ranks
of cohomology groups of complex algebraic varieties seems easy for several
different classifiers and regressors.
We are tempted to approximately rank this level of difficulty - being well
aware that different disciplines are certainly intertwined and separation by
sub-field is often not possible. The “difficulty”, we note, is not necessarily
“computational complexity”. It is correlated with it in the several examples
we have seen, exemplified by the polynomial algorithm for PRIMES, or by the
stochastic search for Hamiltonian cycles in graphs, etc. From the many
experiments, we seem to have (where $<$ means less amenable to algorithmic
analysis):
$\begin{split}\left[\mbox{numerical analysis}\right]<\left[\mbox{algebraic
geometry over $\mathbb{C}$ $\sim$ arithmetic geometry}\right]<\\
\left[\mbox{algebra/representation
theory}\right]<\left[\mbox{combinatorics}\right]<\left[\mbox{analytic number
theory}\right]\end{split}$ (4.12)
This “hierarchy” of different branches of mathematics is reminiscent of the
multitude of problems, ranging from efficient numerical methods which
proliferate in all areas, in contrast to the undecidability of Diophantine
systems (Hilbert 10th) or to finding new patterns in the zeros of the Riemann
zeta function. One could take this to a fundamental level [Zilb] with model
theoretical considerations. In categoricity theory, the theorems of Morley-
Shelah and Hart-Hrushovski-Laskowski [Cat] give a classification of the number
of isomorphism classes of models in various cardinalities, thereby giving a
sense of the difficulty of the theory to which a problem belongs. These and
endless further explorations we shall leave to the readers’ pleasure. In the
meantime, they are encouraged to submit to the topical collection [HDKL] and
to the upcoming journal Data Science in the Mathematical Sciences [DSMS].
## References
* [AlexNet] A. Krizhevsky, I. Sutskever, G. Hinton, “ImageNet classification with deep convolutional neural network”, Comm. ACM. 60 (6): 84 - 90 (2012).
* [ABH] L. Alessandretti, A. Baronchelli and Y. H. He, “Machine Learning meets Number Theory: The Data Science of Birch-Swinnerton-Dyer,” [arXiv:1911.02008 [math.NT]].
* [AKS] M. Agrawal, N. Kayal, N. Saxena, “PRIMES is in P”, Annals of Mathematics. 160 (2): 781 - 793, 2002.
* [ACHN] R. Altman, J. Carifio, J. Halverson and B. D. Nelson, “Estimating Calabi-Yau Hypersurface and Triangulation Counts with Equation Learners,” JHEP 1903, 186 (2019) [arXiv:1811.06490 [hep-th]].
* [AGGKRR] L. B. Anderson, M. Gerdes, J. Gray, S. Krippendorf, N. Raghuram and F. Ruehle, “Moduli-dependent Calabi-Yau and SU(3)-structure metrics from Machine Learning,” [arXiv:2012.04656 [hep-th]].
* [AHK] K. Appel, W. Haken, “Every Planar Map is Four Colorable. I. Discharging”, Illin. J. of Maths, 21 (3): 429–490; K. Appel, W. Haken, J. Koch, “Every Planar Map is Four Colorable. II. Reducibility”, pp. 491–567, (1977)
* [AHO] A. Ashmore, Y. H. He and B. A. Ovrut, “Machine learning Calabi-Yau metrics,” Fortsch. Phys. 68 (2020) no.9, 2000068 [arXiv:1910.08605 [hep-th]].
* [BB] V. Batyrev, L. Borisov, “Mirror duality and string theoretic Hodge numbers”. Inv. Math. 126 (1): 183-203 (1996).
* [BCDL] C. R. Brodie, A. Constantin, R. Deen and A. Lukas, “Machine Learning Line Bundle Cohomology,” Fortsch. Phys. 68 (2020) no.1, 1900087 [arXiv:1906.08730 [hep-th]].
* [BFHHMX] J. Bao, S. Franco, Y. H. He, E. Hirst, G. Musiker and Y. Xiao, “Quiver Mutations, Seiberg Duality and Machine Learning,” Phys. Rev. D 102 (2020) no.8, 086013 [arXiv:2006.10783 [hep-th]].
* [BHJM] K. Bull, Y. H. He, V. Jejjala and C. Mishra, “Machine Learning CICY Threefolds,” Phys. Lett. B 785 (2018), 65-72 [arXiv:1806.03121 [hep-th]].
–, “Getting CICY High,” PLB 795 (2019), 700-706 [arXiv:1903.03113 [hep-th]].
* [BN] F. Barbaresco, F. Nielsen, Ed, Geometric Structures of Statistical Physics, Informational Geometry and Learning, SPIGL’20 proceedings, Les Houches, Springer 2021.
F. Nielsen, Ed, , Progress in Information Geometry: Theory and Applications,
Springer 2021
* [BSD] B. Birch, P. Swinnerton-Dyer, “Notes on Elliptic Curves (II)”, J. Reine Angew. Math. 165 (218): 79 - 108 (1965); experiments on EDSAC-2, Cambridge.
* [Buz] K. Buzzard, “The future of Mathematics,” https://wwwf.imperial.ac.uk/~buzzard/one_off_lectures/msr.pdf; https://www.youtube.com/watch?v=Dp-mQ3HxgDE
* [Cat] S. Shelah, “Classification theory and the number of nonisomorphic models”, Studies in Logic and the Found. of Maths, vol. 92, IX, 1.19, p.49 (1990).
B. Hart, E. Hrushovski, M. Laskowski, “The Uncountable Spectra of Countable
Theories”, Annals of Maths. 152 (1): 207–257. arXiv:math/0007199
B. Zilber, “Categoricity”, AMS Gödel Lecture, 2003,
https://people.maths.ox.ac.uk/zilber/godel.pdf
* [CCHKLN] J. Carifio, W. J. Cunningham, J. Halverson, D. Krioukov, C. Long and B. D. Nelson, “Vacuum Selection from Cosmology on Networks of String Geometries,” Phys. Rev. Lett. 121 (2018) no.10, 101602 [arXiv:1711.06685 [hep-th]].
* [CDLS] P. Candelas, A. M. Dale, C. A. Lutken, R. Schimmrigk, “Complete Intersection Calabi-Yau Manifolds,” Nucl. Phys. B 298, 493 (1988).
M. Gagnon, Q. Ho-Kim, “An Exhaustive list of complete intersection Calabi-Yau
manifolds,” Mod. Phys. Lett. A 9 (1994) 2235.
T. Hubsch, Calabi-Yau manifolds: A Bestiary for physicists, World Scientific,
1994, ISBN 9810206623
* [3CinG] T. Coates, M. Gross, A. Corti, M. Reid, “Classification, Computation, and Construction: New Methods in Geometry,” http://geometry.ma.ic.ac.uk/3CinG/
* [Chu] A. Church, “An Unsolvable Problem of Elementary Number Theory”. Amer. J. of Math. 58 (2): 345 - 363 (1936).
* [CHKN] J. Carifio, J. Halverson, D. Krioukov and B. D. Nelson, “Machine Learning in the String Landscape,” JHEP 1709, 157 (2017) [arXiv:1707.00655 [hep-th]].
* [CHLZ] H. Y. Chen, Y. H. He, S. Lal and M. Z. Zaz, “Machine Learning Etudes in Conformal Field Theories,” [arXiv:2006.16114 [hep-th]].
* [CHLM] H. Y. Chen, Y. H. He, S. Lal and S. Majumder, “Machine Learning Lie Structures & Applications to Physics,” [arXiv:2011.00871 [hep-th]].
* [CJKP] V. Jejjala, A. Kar and O. Parrikar, “Deep Learning the Hyperbolic Volume of a Knot,” Phys. Lett. B 799 (2019), 135033 [arXiv:1902.05547 [hep-th]].
J. Craven, V. Jejjala and A. Kar, “Disentangling a Deep Learned Volume
Formula,” [arXiv:2012.03955 [hep-th]].
* [CL] A. Constantin and A. Lukas, “Formulae for Line Bundle Cohomology on Calabi-Yau Threefolds,” Fortsch. Phys. 67 (2019) no.12, 1900084 [arXiv:1808.09992 [hep-th]].
* [Coq] The Coq Proof Assistant, https://coq.inria.fr/
* [Coqu] T. Coquand, “An analysis of Girard’s paradox,” Proc. IEEE Symposium on Logic in Computer Science, 227 - 236 (1986).
* [DHLL] R. Deen, Y. H. He, S. J. Lee and A. Lukas, “Machine Learning String Standard Models,” [arXiv:2003.13339 [hep-th]].
* [DSMS] Y.-H. He, Manag. Ed., Data Science in the Mathematical Sciences, World Scientific, 2021, to appear.
* [DLQ] M. R. Douglas, S. Lakshminarasimhan and Y. Qi, “Numerical Calabi-Yau metrics from holomorphic networks,” [arXiv:2012.04797 [hep-th]].
* [EF] H. Erbin and R. Finotello, “Inception Neural Network for Complete Intersection Calabi-Yau 3-folds,” [arXiv:2007.13379 [hep-th]].
–“Machine learning for complete intersection Calabi-Yau manifolds: a
methodological study,” [arXiv:2007.15706 [hep-th]].
* [Fre] G. Frege, Die Grundlagen der Arithmetik. Eine logisch-mathematische Untersuchung über den Begriff der Zahl, Verlag von Wilhelm Koebner (1884).
* [GAP] The GAP Group, _GAP – Groups, Algorithms, and Programming, Version 4.9.2_ ; 2018, https://www.gap-system.org
* [GBA] Ian Goodfellow, Yoshua Bengio, Aaron Courville, Deep Learning, ISBN: 9780262035613, MIT Press, 2016.
* [GBML] J. De Loera, S. Petrovic, L. Silverstein, D. Stasi, D. Wilburne. “Random monomial ideals,” J. Algebra 519, 440 - 473, 2019.
D. Peifer, M. Stillman, D. Halpern-Leistner, “Learning selection strategies in
Buchberger’s algorithm”, [arXiv:2005.01917]
* [GHRS] S. Gukov, J. Halverson, F. Ruehle and P. Sułkowski, “Learning to Unknot,” [arXiv:2010.16263 [math.GT]].
* [God] K. Gödel, “Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme, I.” Monatshefte für Mathematik und Physik 38: 173-198 (1931).
* [Gon] G. Gonthier et al., “Formal Proof—The Four- Color Theorem” Not. AMS, 2005
–, “A Machine-Checked Proof of the Odd Order Theorem”, Interactive Theorem
Proving pp 163 – 179, Lect. Notes in CS, Vol 7998 (2013)
* [GRdB] The Graded Ring Database, http://www.grdb.co.uk/
The $C^{3}$NG collaboration: http://geometry.ma.ic.ac.uk/3CinG/index.php/team-
members-and-collaborators/ Data at:
http://geometry.ma.ic.ac.uk/3CinG/index.php/data/
http://coates.ma.ic.ac.uk/fanosearch/
* [HeCY] Y.-H. He, “The Calabi-Yau Landscape: from Geometry, to Physics, to Machine-Learning,” [arXiv:1812.02893 [hep-th]]. To appear, Springer.
* [HeEnc] Y. H. He, “Calabi-Yau Spaces in the String Landscape,” Entry to Oxford Res. Encyclo. of Physics, B. Foster Ed., OUP, 2020 [arXiv:2006.16623 [hep-th]]
* [HeTalk] q.v., some of the author’s talks at StringData 2020 https://www.youtube.com/watch?v=GqoqxFsaogY; Clifford 2020 http://wlkt.ustc.edu.cn/video/detail_5372_24779.htm; and CMSA, Harvard https://www.youtube.com/watch?v=zj_Xc2QG-vw
* [HeDL] Y. H. He, “Deep-Learning the Landscape,” arXiv:1706.02714 [hep-th]. Science, vol 365, issue 6452, Aug 2019.
–, “Machine-learning the string landscape,” Phys. Lett. B 774, 564 (2017).
* [HDKL] Y-H. He, P. Dechant, A. Kaspryzyk, A. Lukas, Ed. “Machine-learning mathematical structures,” topical collection for Advances in Applied Clifford Algebras, Birkhäuser, Springer, call open: https://www.springer.com/journal/6/updates/18581430
* [HHP] Y. H. He, E. Hirst and T. Peterken, “Machine-Learning Dessins d’Enfants: Explorations via Modular and Seiberg-Witten Curves,” [arXiv:2004.05218 [hep-th]].
* [HeLee] Y. H. He and S. J. Lee, “Distinguishing elliptic fibrations with AI,” Phys. Lett. B 798 (2019), 134889 [arXiv:1904.08530 [hep-th]].
* [HK] Y. H. He and M. Kim, “Learning Algebraic Structures: Preliminary Investigations,” [arXiv:1905.02263 [cs.LG]].
* [HL] Y. H. He and A. Lukas, “Machine Learning Calabi-Yau Four-folds,” to appear PLB, [arXiv:2009.02544 [hep-th]].
* [HLOac] Y. H. He, K. H. Lee and T. Oliver, “Machine-Learning Arithmetic Curves,” [arXiv:2012.04084 [math.NT]].
* [HLCnf] Y. H. He, K. H. Lee and T. Oliver, “Machine-Learning Number Fields,” [arXiv:2011.08958 [math.NT]].
* [HLOst] Y. H. He, K. H. Lee and T. Oliver, “Machine-Learning the Sato–Tate Conjecture,” [arXiv:2010.01213 [math.NT]].
* [HM] Y. H. He and J. McKay, “Sporadic and Exceptional,” [arXiv:1505.06742 [math.AG]].
* [HeUni] Y.-H. He, “Universes as Big Data,” [arXiv:2011.14442 [hep-th]], to appear IJMPA.
* [HeYau] Y. H. He and S. T. Yau, “Graph Laplacians, Riemannian Manifolds and their Machine-Learning,” [arXiv:2006.16619 [math.CO]].
* [HW] D. Helmbold, M. Warmuth,“Learning Permutations with exponential weights”, J. Machine Learning Res. 2009 (10) 1705-1736.
* [ICM18] J. Davenport, B. Poonen, J. Maynard, H. Helfgott, P. H. Tiep, L. Cruz-Filipe, “Machine-Assisted Proofs”, ICM 2018 panel.
* [ICMS] Mathematical Software, http://icms-conference.org/, q.v., also the author’s talk at ICMS2020.
* [IMO] The IMO Grand Challenge, https://imo-grand-challenge.github.io/
* [IMWRR] R. Iten, T. Metger, H. Wilming, L. del Rio, R. Renner, “Discovering Physical Concepts with Neural Networks”, Phys. Rev. Lett. 124, 010508, 2020
* [JPM] V. Jejjala, D. K. Mayorga Pena and C. Mishra, “Neural Network Approximations for Calabi-Yau Metrics,” [arXiv:2012.15821 [hep-th]].
* [KNOT] The Knots Atlas, http://katlas.org/wiki/Main_Page
* [KrSk] M. Kreuzer and H. Skarke, “Complete classification of reflexive polyhedra in four-dimensions,” Adv. Theor. Math. Phys. 4, 1209 (2002) [hep-th/0002240]. http://hep.itp.tuwien.ac.at/~kreuzer/CY/
* [KlSch] D. Klaewer, L. Schlechter, “Machine Learning Line Bundle Cohomologies of Hypersurfaces in Toric Varieties,” PLB 789 (2019), 438-443 [arXiv:1809.02547 [hep-th]].
* [KS] D. Krefl and R. K. Seong, “Machine Learning of Calabi-Yau Volumes,” Phys. Rev. D 96 (2017) no.6, 066014 [arXiv:1706.03346 [hep-th]].
* [KrSy] S. Krippendorf and M. Syvaeri, “Detecting Symmetries with Neural Networks,” [arXiv:2003.13679 [physics.comp-ph]].
* [Lean] Lean Theorem Prover, https://leanprover.github.io/
* [Leib] G. Leibniz, Opera omnia nunc primum collecta, Dutens, 6 vols., Geneva 1768.
W. Lenzen, “Leibniz’s Logic” in Handbook of the History of Logic, D. M. Gabbay
& J. Woods (eds.), Elsevier (2004).
* [LieArt] R. Feger, T. W. Kephart and R. J. Saskowski, “LieART 2.0 – A Mathematica Application for Lie Algebras and Representation Theory,” Comput. Phys. Commun. 257 (2020), 107490 [arXiv:1912.10969 [hep-th]]. https://lieart.hepforge.org/
* [LC] G. Lample, F. Charton “Deep Learning for Symbolic Mathematics”, arXiv:1912.01412 [cs.SC]
* [LMFdB] The L-functions & Modular Forms Database, http://www.lmfdb.org/
* [LS] M. Larfors and R. Schneider, “Explore and Exploit with Heterotic Line Bundle Models,” Fortsch. Phys. 68 (2020) no.5, 2000034 [arXiv:2003.04817 [hep-th]].
* [MAG] Magma Comp. Algebra System, http://magma.maths.usyd.edu.au/
* [MHK] Minhyong Kim, Private communications.
* [M-L] P. Martin-Löf, “An intuitionistic theory of types: predicative part, Logic Colloquium” (Bristol, 1973), 73–118. Studies in Logic and the Foundations of Maths, Vol. 80, Amsterdam, 1975.
* [M2] D. Grayson, M. Stillman, “Macaulay2, a software system for research in algebraic geometry”, Available at https://faculty.math.illinois.edu/Macaulay2/
* [New] M. Newborn, Automated Theorem Proving: Theory and Practice, Springer 2001.
* [NSS] A. Newell, J. Shaw, H. Simon, Computer programme (1956) & “Report on a general problem-solving program,” Proc. Int. Conf. Information Processing, pp. 256 - 264 (1959).
* [OT] H. Otsuka and K. Takemoto, “Deep learning and k-means clustering in heterotic string vacua with line bundles,” JHEP 05 (2020), 047 [arXiv:2003.11880 [hep-th]].
* [QFTNN] K. Hashimoto, “AdS/CFT correspondence as a deep Boltzmann machine,” Phys. Rev. D 99 (2019) no.10, 106017 [arXiv:1903.04951 [hep-th]].
E. d. Koch, R. de Mello Koch and L. Cheng, “Is Deep Learning a Renormalization
Group Flow?,” [arXiv:1906.05212 [cs.LG]].
J. Halverson, A. Maiti and K. Stoner, “Neural Networks and Quantum Field
Theory,” [arXiv:2008.08601 [cs.LG]].
V. Vanchurin, “The world as a neural network.” Entropy 22.11 (2020): 1210.
arXiv:2008.01540.
S. Wolfram, https://www.wolframphysics.org/
* [Rue] F. Ruehle, “Evolving neural networks with genetic algorithms to study the String Landscape,” JHEP 08 (2017), 038 [arXiv:1706.07024 [hep-th]].
* [RuePR] F. Ruehle, “Data science applications to string theory,” Phys. Rept. 839 (2020), 1-117
* [RW] A. Whitehead, B. Russell, Principia mathematica, CUP (1910).
* [SAGE] SageMath, “the Sage Mathematics Software System”, The Sage Developers, http://www.sagemath.org
* [Sze] C. Szegedy, https://scale.com/interviews/christian-szegedy
* [Sing] W. Decker, G-M. Greuel, G. Pfister, H. Schönemann, Singular, A computer algebra system for polynomial computations. http://www.singular.uni-kl.de
* [TTH] A. Tanaka, A. Tomiya, K. Hashimoto, Deep Learning and Physics, Springer, to appear 2021.
* [Tur] A. M. Turing, “On Computable Numbers, with an Application to the Entscheidungsproblem”, Proc. LMS, 2, 42, pp. 230-65 (1937).
* [Voe] V. Voevodsky, “A Very Short Note on Homotopy Lambda Calculus” (2006); “The Equivalence Axiom and Univalent Models of Type Theory,” arXiv:1402.5556; https://www.math.ias.edu/vladimir/Univalent_Foundations
* [UAT] G. Cybenko, “Approximation by superpositions of a sigmoidal function”, Math. of Control, Signals, and Systems, 2 (4), 303 - 314, 1989;
K. Hornik, “Approximation capabilities of multilayer feedforward networks”,
Neural Networks, 4(2), 251 - 257, 1991.
P. Kidger, T. Lyons, “Universal Approximation with Deep Narrow Networks”,
Conference on Learning Theory. [arXiv:1905.08539]
B. Hanin, “Approximating Continuous Functions by ReLU Nets of Minimal Width,”
ArXiv:1710.11278.
* [UT] S. M. Udrescu and M. Tegmark, “AI Feynman: a Physics-Inspired Method for Symbolic Regression,” [arXiv:1905.11481 [physics.comp-ph]].
* [Wolf] Wolfram Research, Inc., Mathematica, Champaign, IL www.wolfram.com
* [Xena] The Xena Project, https://wwwf.imperial.ac.uk/~buzzard/xena/
* [YGH] C. N. Yang, M. L. Ge and Y. H. He, Ed. “Topology and Physics,” with contributions from Atiyah, Penrose, Witten, et al., WS 2019. ISBN: 978-981-3278-49-3 https://doi.org/10.1142/11217
* [Wil] R. Wilson, The Finite Simple Groups, Springer, 2009.
* [Zilb] B. Zilber, Private communications and work in progress.
# Old but Gold: Reconsidering the value of
feedforward learners for software analytics
Rahul Yedida, Xueqi Yang and Tim Menzies, North Carolina State University
###### Abstract.
There has been an increased interest in the use of deep learning approaches
for software analytics tasks. State-of-the-art techniques leverage modern deep
learning techniques such as LSTMs, yielding competitive performance, albeit at
the price of longer training times.
Recently, Galke and Scherp (2021) showed that at least for image recognition,
a decades-old feedforward neural network can match the performance of modern
deep learning techniques. This motivated us to try the same in the SE
literature. Specifically, in this paper, we apply feedforward networks with
some preprocessing to two analytics tasks: issue close time prediction and
vulnerability detection. We test the hypothesis put forward by Galke and
Scherp (2021) - that feedforward networks suffice for many analytics tasks,
which we call the “Old but Gold” hypothesis - on these two tasks. For three
out of five datasets from these tasks, we achieve new high-water mark results
(that out-perform the prior state-of-the-art results), and for a fourth data
set, Old but Gold performed as well as the recent state of the art.
Furthermore, the Old but Gold results were obtained orders of magnitude faster
than prior work. For example, for issue close time, Old but Gold found good
predictors in 90 seconds (as opposed to the newer methods, which took 6 hours
to run).
Our results support the “Old but Gold” hypothesis and lead to the following
recommendation: try simpler alternatives before more complex methods. At the
very least, this will produce a baseline result against which researchers can
compare other, supposedly more sophisticated, approaches. And in the best
case, they will obtain useful results that are as good as anything else, in a
small fraction of the effort.
To support open science, all our scripts and data are available on-line at
https://github.com/fastidiouschipmunk/simple.
Conference: MSR ’22: Proceedings of the 19th International Conference on
Mining Software Repositories; May 23–24, 2022; Pittsburgh, PA, USA
## 1\. Introduction
As modern infrastructure allows for cheaper processing, it has inevitably led
to the exploration of more complex modeling. For example, many software
engineering researchers are now using deep learning methods (Gao et al, 2020;
Hoang et al, 2019; Liu et al, 2019; Zhou et al, 2019, 2019; Chen and Zhou,
2018; Lee et al, 2020).
One problem with deep learning is that it can be very slow to run (Jiang and
Agrawal, 2018; Le et al, 2011; Martens et al, 2010). For example, for the case
study of this paper, we estimate that we would need 6 years of CPU time. Such
long runtimes can complicate many aspects of the scientific process (e.g.
initial investigations, subsequent attempts at reproduction).
Accordingly, this paper checks if anything simpler than a deep learner can
handle SE tasks. Outside of SE, there is some suggestion that deep learning
researchers have rushed on too far and have overlooked the benefits of simpler
neural architectures. For example, Galke and Scherp (2021) offer an “Old but
Gold” hypothesis; i.e., that in their rush to try new algorithms, researchers
have overlooked the advantages of more traditional approaches. In their work,
Galke and Scherp (2021) showed that for image classification, simple,
decades-old feedforward networks (described in §3.1) can perform as well as
modern deep learning techniques, at some small fraction of the computational
cost.
Since deep learning is widely used in software engineering, it seems prudent
to check for old but gold effects in SE applications. In this paper we explore
two standard software analytics problems using older-style neural networks as
well as the latest state-of-the-art deep learning algorithms.
The experiments of this paper show that simpler methods than prior work are
better for some domains. Specifically, a simple extension to a 1980s-style
feedforward neural network, which we call “SIMPLE”, runs much faster than
prior work (90 seconds versus 6 hours for issue lifetime prediction). Since
they run faster, feedforward networks are more amenable to automatic tuning
methods. Such tuning requires multiple runs of a learner (Tantithamthavorn et
al, 2016; Fu et al, 2016; Agrawal and Menzies, 2018; Agrawal et al, 2019) and
so the faster the learner, the more we can tune it (which we do in this
paper). Hence SIMPLE’s feedforward networks out-perform the prior work in
issue lifetime prediction since the latter is fundamentally hard to customize
to the task at hand.
The rest of this paper is structured as follows. §2 presents the necessary
background and discusses the SE tasks under consideration. §4 discusses our
proposed approach. Then, in §5, we show our results. We discuss the threats
to the validity of our study in §6. In §8 we conclude that, before analysts
try very sophisticated (but very slow) algorithms, they might achieve better
results, much sooner, by applying hyper-parameter optimization to simple (but
very fast) algorithms.
### 1.1. Preliminaries
Before beginning, just to say the obvious, we note the experiments of this
paper are based on two case studies. Hence, they do not show that all deep
learners can be replaced by faster and simpler methods.
That said, we would argue that this paper is at the very least arguing for a
methodological change in how software analytics researchers report their deep
learning results. Deep learners (or, indeed, any data mining results) should
be compared to a simpler baseline method (in our case, feedforward networks)
and also be adjusted via automatic tuning algorithms. The experience of this
paper is that such a baseline + tuning analysis can lead to challenging and
insightful results.
## 2\. Case Studies
Before going into algorithmic details, this paper first presents the two
domains that will be explored by those algorithms.
### 2.1. Vulnerability Detection
Cyber attacks often rely on software vulnerabilities, i.e., unintentional
security flaws in software that can be taken advantage of to obtain
unauthorized access, steal data, etc. As of writing this paper, the Common
Vulnerabilities and Exposures (CVE) database111https://cve.mitre.org/ contains
over 165,000 records of vulnerabilities. This number only counts the
registered vulnerabilities, and not unknown (or “zero-day”) vulnerabilities.
For the security of software systems and the data associated with them (for
example, in SQL databases), it is critical that these vulnerabilities be
discovered and patched. However, manually searching for vulnerabilities is a
time-consuming task.
There are several existing solutions that attempt to automate this task (Viega
et al, 2000; Grieco et al, 2016; Kim et al, 2017). However, these rely on
significant human effort. Specifically, they rely on the use of human-
generated features, which can take time, and be expensive (since skilled human
time is expensive). Moreover, these approaches tend to either have too many
false negatives (i.e., missed vulnerabilities), or too many false positives
(i.e., a “learner” that blindly marks non-vulnerable code as a vulnerability).
These issues make these techniques less useful in practice.
#### 2.1.1. Algorithms for Vulnerability Detection
To tackle these two problems, deep learning solutions have been recently
proposed. Li et al (2018a) propose VulDeePecker, a bidirectional LSTM
(Hochreiter and Schmidhuber, 1997) technique. From an external perspective,
their approach takes in program segments, trains a deep learner, and then uses
it to detect vulnerable code. Because this approach relies on training on the
code to generate vector representations (which the network then uses to make
predictions), it can be slow to run. Zhou et al (2019) propose Devign, which
instead uses graph neural networks (Kipf and Welling, 2016) to detect
vulnerabilities. A graph neural network takes in a graph input, and uses
“graph convolutions” to extract hierarchical features. These features can then
be used to make predictions in the later layers of the network. The authors of
Devign experiment with several graph representations of source code, and
recommend a composite version of their approach. Based on our literature
review, we assert that this is the state-of-the-art approach for vulnerability
detection.
However, deep learning approaches themselves can have issues. The major one is
that deep learners can be slow to run (Jiang and Agrawal, 2018; Le et al,
2011; Martens et al, 2010). The primary reason for this is the use of more
modern deep learning techniques such as the above mentioned bidirectional
LSTMs. While these certainly have a lot of representational capacity, they
suffer from having orders of magnitude more parameters than simpler,
feedforward networks, and therefore take longer to optimize.
In this paper, we take a similar approach to VulDeePecker in that we use a
deep learning technique to transform code into a vector representation, and
then use our simple feedforward networks for prediction. However, unlike their
approach, we use an off-the-shelf code-to-vector transformation tool, code2vec
(Alon et al, 2019). Because there is no training involved in using this model
off-the-shelf, our runtimes are significantly faster, since only the
feedforward networks need to be trained.
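A minimal sketch of this two-stage pipeline is below. It assumes the code2vec embeddings have already been exported to files named "embeddings.npy" and "labels.npy" (one fixed-length vector and one 0/1 label per code fragment); those file names, the layer sizes, and the use of scikit-learn's MLPClassifier are our illustrative assumptions, not the paper's tuned configuration.

```python
# Sketch: off-the-shelf code2vec embeddings fed to a small feedforward net.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X = np.load("embeddings.npy")   # shape: (n_samples, embedding_dim)
y = np.load("labels.npy")       # 1 = vulnerable, 0 = not

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64, 32),  # two small ReLU layers
                    activation="relu", max_iter=200, random_state=0)
clf.fit(Xtr, ytr)               # only the feedforward net is trained
print("test accuracy:", clf.score(Xte, yte))
```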
### 2.2. Predicting Bugzilla Issue Close Time
When programmers work on repositories, predicting issue close time has
multiple benefits for the developers, managers, and stakeholders since it
helps:
* •
Developers prioritize work;
* •
Managers allocate resources and improve consistency of release cycles;
* •
Stakeholders understand changes in project timelines and budgets.
* •
It is also useful to predict issue close time when an issue is created; e.g.
to send a notification if it is predicted that the current issue is an easy
fix.
We explore issue close time, for two reasons. Firstly, it is a well studied
problem (Lee et al, 2020; Rees-Jones et al, 2017; Vieira et al, 2019;
Akbarinasaji et al, 2018; Guo et al, 2010; Giger et al, 2010; Marks et al,
2011; Kikas et al, 2016; Habayeb et al, 2017). Secondly, recent work has
proposed a state-of-the-art deep learning approach to issue close time
prediction (see the DeepTriage deep learning systems from COMAD’19, described
later in this paper (Mani et al, 2019)).
#### 2.2.1. Traditional Algorithms for Predicting Issue Close Time
Most large software systems have a system to track bugs, or issues, in the
product. These issues typically go through the same lifecycle, in which they
transition across various states, including UNCONFIRMED and CLOSED, while also
being assigned final states such as WONTFIX (Weiss et al, 2007).
To find prior work on predicting issue close time, we searched for papers in
the last ten years (since 2010) in Google Scholar using keywords “bug fix
time”, “issue close time”, and “issue lifetime”. Then, we filtered them
according to the criterion that they must be published in a top venue
according to Google Scholar metrics Software
Systems222https://scholar.google.com/citations?view_op=top_venues&hl=en&vq=eng_softwaresystems.
Finally, using engineering judgement, we added in systems that were
recommended by reviewers of a prior draft of this paper. That search found
several noteworthy systems:
* •
Guo et al (2010) use logistic regression on a large closed-source project
(Microsoft Windows), to predict whether or not a bug will be fixed. Using
regression analysis, they identified the factors that led to bugs being fixed
or not fixed.
* •
Giger et al (2010) use decision trees to predict the bug-fix time for Mozilla,
Eclipse, and GNOME projects. They divided their target class into two labels:
fast and slow, to get a binary classification problem, and used the area under
the ROC curve (AUC) metric as their evaluation criteria.
* •
Marks et al (2011) also used decision trees, but instead, use an ensemble
method, i.e., random forests, on Eclipse and Mozilla data. Their motivation
for using random forests, apart from the better performance as compared to
standard decision trees, is the ability to extract the relative importance of
features in the input data. They report accuracy scores of 63.8% and 67.2% on
the Mozilla and Eclipse repositories respectively.
* •
At MSR’16, Kikas, Dumas, and Pfahl (Kikas et al, 2016) built time-dependent
models for issue close time prediction using Random Forests with a combination
of static code features and non-code features, reporting high performance.
* •
More recently, Habayeb et al (2017) reported in IEEE TSE’17 a prediction
system based on hidden Markov chains. Like Giger et al (2010), they divided
their target labels into fast and slow fix-times and experimented with
different values of the number of hidden states of the hidden Markov model.
Based on the above, we assert that the two prior state-of-the-art non-neural
methods in this area used random forests and logistic regression. Hence we
will use these two systems as part of the following study.
#### 2.2.2. Deep Learning and Issue Close Time
As to deep learning and issue close time prediction, two contenders for
“state-of-the-art” are DASENet (Lee et al, 2020) and DeepTriage(Mani et al,
2019). The DASENet paper asserts that their algorithm defeats DeepTriage but,
after much effort, we could not reproduce that result333We found that the
reproduction package published with DASENet has missing files. We tried
contacting the authors of that paper, without success.. Hence, for this study,
we use DeepTriage since:
* •
It is a state-of-the-art deep learner for lifetime prediction.
* •
It has been very recently published (2019);
* •
Its reproduction package allowed us to run that code on our machines.
* •
It uses datasets commonly used in the literature (Technical aside: we were
tempted to use the dataset provided by Vieira et al (2019) for our deep
learning baseline. However, their lack of prior benchmarks meant we could not
provide a comparison to demonstrate the efficacy of our approach.)
From a technical perspective, DeepTriage is Mani et al (2019)’s extension of
bidirectional LSTMs with an “attention mechanism”. A Long Short-Term Memory
(LSTM) (Hochreiter and Schmidhuber, 1997) is a form of recurrent neural
network that has additional “gate” mechanisms to allow the network to model
connections between long-distance tokens in the input. Bidirectional variants
of recurrent models, such as LSTMs, consider the token stream in both forward
and backward directions; this allows for the network to model both the
previous and the following context for each input token. Attention mechanisms
(Bahdanau et al, 2014) use learned weights to help the network “pay attention”
to tokens that are more important than others in a context. Prior to running
DeepTriage, its authors recommend using a standard set of preprocessing
techniques: pattern matching to remove special characters and stack traces,
tokenization, and pruning the corpus to a fixed length. Beyond these steps,
they rely on the deep learner to perform automated feature engineering.
## 3\. Algorithms
Having discussed the domains we need to explore, this paper now turns to how
we will explore them.
For the purposes of exposition, we divide this discussion of these algorithms
into three groups:
* •
Older style Feedforward Networks
* •
Newer-style Deep Learners
* •
Hyperparameter optimizers, which we use to tune the parameters of Feedforward
Networks and Deep learners
Note that the system we are calling SIMPLE is a combination of Feedforward
Networks and Hyperparameter Optimization.
### 3.1. Feedforward Networks
Feedforward neural networks (LeCun et al, 2015) apply a general “activation
function” at each node after performing the matrix multiplication of the
weights with the inputs. These networks grew in popularity following the
invention of the ReLU (rectified linear unit) function (Nair and Hinton,
2010), $f(x)=\max(0,x)$, which significantly improved the results of neural
networks. Specifically, for a layer $i$, if the weight matrix is represented
in matrix form as $W^{[i]}$, the bias terms (the constants) are represented by
$b^{[i]}$, and the values of the activation function are represented as
$a^{[i]}$, then $a^{[0]}=X$ and $z^{[i]}=W^{[i]T}a^{[i-1]}+b^{[i]}$ and
$a^{[i]}=f(z^{[i]})$ where $X$ is the input matrix.
There are several activation functions; for brevity, we only discuss the ones
relevant in this study (a small numerical sketch of the forward pass follows
the list below). Following the advice of LeCun et al (2015), for binary and
multi-class classification problems:
* •
For the last layer of the network, this study uses Sigmoid(x)
$=\frac{1}{1+e^{-x}}$ and Softmax(x)
$=\frac{\exp(x_{k})}{\sum_{j=1}^{|x|}\exp(x_{j})}$ respectively.
* •
For the other layers, we use ReLU(x) $=\max(0,x)$.
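The following is a minimal numpy sketch of the forward pass defined by the equations above; the toy layer sizes and random weights are ours, for illustration only:

```python
# Sketch: a^[0] = X; z^[i] = W^[i]T a^[i-1] + b^[i]; a^[i] = f(z^[i]),
# with ReLU inside the network and Sigmoid at the final layer.
import numpy as np

def relu(x):    return np.maximum(0, x)
def sigmoid(x): return 1 / (1 + np.exp(-x))

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 10))       # a^[0] = X: 5 features x 10 samples

sizes = [5, 8, 8, 1]               # input, two hidden layers, binary output
W = [rng.normal(size=(sizes[i], sizes[i + 1])) for i in range(3)]
b = [np.zeros((sizes[i + 1], 1)) for i in range(3)]

a = X
for i in range(3):
    z = W[i].T @ a + b[i]                    # z^[i] = W^[i]T a^[i-1] + b^[i]
    a = relu(z) if i < 2 else sigmoid(z)     # last layer: Sigmoid
print(a.shape)                               # (1, 10): one probability each
```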
### 3.2. Deep Learning
For the rest of this paper, the following distinction will be important:
* •
The algorithms DeepTriage and VulDeePecker (used for issue close time and
vulnerability defection, respectively) are based on new neural network
technology comprising extensive layers of reasoning, where layer $i$ organizes
the inputs offered to layer $i+1$.
* •
Our SIMPLE method is based on old feedforward neural networks which is a
technology that dates back decades. At each node of these networks, the inputs
are multiplied with weights that are learned, and then an activation function
is applied. The weights are learned by the backpropagation algorithm
(Rumelhart et al, 1985).
The difference between these approaches can be understood via Figure 1. The
older methods use just a few layers, while the “deep” learners use many
layers. Also, the earliest networks used a threshold function at each node,
while feedforward networks typically use the ReLU function $f(x)=\max(0,x)$.
Figure 1. Illustration of a neural net model. Feedforward networks, such as
those used in SIMPLE, have far fewer hidden layers than deep learners.
### 3.3. Hyperparameter Optimization
A common factor in all neural networks (feed-forward, deep learner, etc) is
the architecture of the many layers of neural networks (Goodfellow et al,
2016). In deep learning terminology, an “architecture” refers to the
arrangement of nodes in the network and the connections between them, which
dictates how the backpropagation algorithm updates the parameters of the
model. Depending on the choice of the optimization algorithm (such as Adam
(Kingma and Ba, 2014)) and the architecture used, the model also has several
hyper-parameters, such as the number of layers, the number of nodes in each
layer, and hyper-parameters of the optimization algorithm itself (Brown et al,
2020).
The selection of appropriate hyper-parameters is something of a black art.
Hence there exists a whole line of research called hyper-parameter
optimization that explores automatic methods for finding these values.
For this study, we consider two such optimizers: TPE (tree-structured Parzen
estimators) from Bergstra et al. (Bergstra and Bengio, 2012; Bergstra et al,
2011) and DODGE from Agrawal et al. (Agrawal et al, 2019, 2021):
* •
TPE is a candidate hyper-parameter tuner since a December 2020 Google Scholar
search for “Hyper-parameter optimization” reported that the papers by Bergstra
et al. (Bergstra and Bengio, 2012; Bergstra et al, 2011) on TPE optimization
have more citations (2159 and 4982 citations, respectively; the nearest other
work was a 2013 paper by Thornton et al. on Auto-WEKA (Thornton et al, 2013),
with 931 citations) than any other paper in this arena.
* •
DODGE is another candidate hyper-parameter tuner since, unlike TPE, it has
been extensively tested on SE data sets. In 2019, Agrawal et al. (Agrawal et
al, 2019) reported that for a range of SE problems (bad smell detection,
defect prediction, issue severity prediction), learners tuned by DODGE
out-perform prior state-of-the-art results (but a missing part of their
analysis is that they did not study deep learning algorithms; hence, this
paper).
How to choose between these algorithms? In 2021, Agrawal et al. (Agrawal et
al, 2021) showed that DODGE is preferred over TPE for “intrinsically simple”
data sets. Levina and Bickel (2004) argue that many datasets embedded in high-
dimensional spaces can be compressed without significant information loss.
They go on to say that a simple linear transformation like Principal
Components Analysis (PCA) (Pearson, 1901) is insufficient, as the
lower-dimensional embeddings of the high-dimensional points are not merely
projections. Instead, Levina and Bickel (2004) propose a method that computes
the intrinsic dimensionality by counting the number of points within a
distance $r$ while varying $r$. For notes on that computation, see Table 2.
Table 1. Feedforward networks are controlled by these hyper-parameters.
Preprocessors:
* • StandardScaler: all input numerics are adjusted to $(x-\mu)/\sigma$.
* • MinMaxScaler (range = (0, 1)): scale each feature to $(0,1)$.
* • Normalizer (norm = randchoice(['l1', 'l2', 'max'])): normalize to a unit norm.
* • MaxAbsScaler (range = (0, 1)): scale each feature by its maximum absolute value.
* • Binarizer (threshold = randuniform(0,100)): binarize variables on some threshold.
---
Hyper-parameters:
* • Number of layers
* • Number of units in each layer
* • Batch size (i.e., the number of samples processed at a time)
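As an illustration of how the Table 1 search space might be expressed in code, the following sketch draws one random configuration; the numeric ranges for layers, units, and batch size are our assumptions, not DODGE's exact option lists.

```python
# An illustrative random draw from a Table 1-style search space.
import random
from sklearn.preprocessing import (StandardScaler, MinMaxScaler, Normalizer,
                                   MaxAbsScaler, Binarizer)

def sample_config():
    preprocessor = random.choice([
        StandardScaler(),                              # (x - mu) / sigma
        MinMaxScaler(feature_range=(0, 1)),            # scale to (0, 1)
        Normalizer(norm=random.choice(["l1", "l2", "max"])),  # unit norm
        MaxAbsScaler(),                                # divide by max |x|
        Binarizer(threshold=random.uniform(0, 100)),   # 0/1 on a threshold
    ])
    hyper = {"n_layers": random.randint(1, 4),         # assumed ranges
             "n_units": random.choice([16, 32, 64, 128]),
             "batch_size": random.choice([32, 64, 128])}
    return preprocessor, hyper
```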
Intrinsic dimensionality (which we will denote as $D$) can be used to select
an appropriate hyper-optimization strategy. The experiments of Agrawal et al.
(Agrawal et al, 2021) show that DODGE beats TPE for low-dimensional data (when
$D<8$) while TPE is the preferred algorithm for more complex data.
Table 2. Notes on intrinsic dimensionality. Before presenting the mathematics
of the Levina and Bickel (2004) measure, we offer a little story to explain
the intuition behind it. Consider a brother and sister who live in different
parts of town. The sister lives alone, out-of-town, on a road running
north-south with houses on only one side of the street. Note that if this
sister tries to find company by walking: • vertically up or down; • or east or
west, then she will meet no one else. But if she walks north or south, then
she might find company. That is, the humans in that part of town live in a
one-dimensional space (north-south). Meanwhile, the brother lives downtown in
the middle of a large block of flats that is also oriented north-south. The
brother is ill-advised to walk east-west, since then he will fall off a
balcony. On the other hand, if he: • climbs up or down one storey; • or walks
to the neighboring flats north or south, then he might meet other people. That
is to say, the humans in that block of flats effectively live in a
two-dimensional space (north-south and up-down). To compute Levina's intrinsic
dimensionality, we create a 2-d plot where the x-axis shows $r$, i.e., how far
we have walked away from any instance, and the y-axis shows $C(r)$, which
counts how many other points we have met after walking some distance $r$ away
from any one of the $n$ instances:
$y=C(r)=\frac{2}{n(n-1)}\sum\limits_{i=1}^{n}\sum\limits_{j=i+1}^{n}I\left[\lVert x_{i}-x_{j}\rVert<r\right]$
The maximum slope of $\ln C(r)$ vs. $\ln r$ is then reported as the intrinsic
dimensionality. Note that $I[\cdot]$ is the indicator function (i.e.,
$I[x]=1$ if $x$ is true, otherwise it is 0) and that $x_{i}$ is the $i$th
sample in the dataset. Note also that, as shown by Aggarwal et al (2001), at
higher dimensions the distance calculations should use the $L_{1}$ norm, i.e.,
$\sum_{i}\lvert x_{i}\rvert$, rather than the $L_{2}$ norm, i.e.,
$\sqrt{\sum_{i}x_{i}^{2}}$.
---
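For readers who prefer code to formulas, the following sketch (ours) implements the Table 2 computation: it evaluates $C(r)$ on a grid of radii and reports the maximum slope of $\ln C(r)$ vs. $\ln r$, using the $L_1$ norm per Aggarwal et al (2001); the grid size is an arbitrary choice.

```python
# Levina-Bickel-style intrinsic dimensionality from pairwise distances.
import numpy as np
from scipy.spatial.distance import pdist

def intrinsic_dimensionality(X, n_r=20):
    d = pdist(X, metric="cityblock")      # all pairwise L1 distances
    n = len(X)
    rs = np.linspace(d.min(), d.max(), n_r + 2)[1:-1]   # interior radii
    # C(r) = 2/(n(n-1)) * number of pairs closer than r
    C = np.array([(d < r).sum() for r in rs]) * 2.0 / (n * (n - 1))
    keep = C > 0                          # ln C(r) needs C(r) > 0
    slopes = np.diff(np.log(C[keep])) / np.diff(np.log(rs[keep]))
    return slopes.max()
```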
Using the calculation methods of Agrawal et al. (Agrawal et al, 2021), we find
that for our data:
$\mathit{D(Firefox,\;Chromium,\;Eclipse)}=\{2.1,\;1.95,\;1.9\}$
From this, we make two observations. Firstly, in a result that may not have
surprised Levina et al., this data from Firefox, Chromium, and Eclipse can be
compressed to just a few dimensions. Secondly, all our data falls below the
$D<8$ threshold proposed by Agrawal et al. (Agrawal et al, 2021). Hence, for
this study, we use DODGE.
Compared to other hyper-parameter tuners, DODGE is a very simple algorithm
that runs in two steps (see the sketch after this list):
1. (1)
During an initial random step, DODGE selects hyper-parameters at random from
Table 1. Each such tuning is used to configure a learner, and the value of
that configuration is assessed by applying the learner to a data set. If ever
a NEW result has performance scores near an OLD result, then a "tabu" zone is
created around the OLD and NEW configurations, and subsequent random searches
avoid that region of configurations.
2. (2)
In the next step, DODGE selects configurations via a binary chop of the tuning
space. Each chop moves the bounds for numeric choices in by half the distance
from the most distant value to the value that produced the "best" performance.
For notes on what "best" means, see §4.6.
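The following is a heavily simplified, one-dimensional sketch (ours, not the released DODGE implementation) of those two steps; the tabu radius, the score tolerance, and the evaluation budgets are illustrative assumptions.

```python
# Simplified DODGE: random sampling with tabu zones, then binary chop.
import random

def dodge(evaluate, lo, hi, n_random=12, n_chop=12,
          score_eps=0.05, tabu_radius=0.1):
    results, tabu = [], []
    # Step 1: random sampling, skipping "tabu" zones around old configs.
    for _ in range(10 * n_random):
        if len(results) >= n_random:
            break
        x = random.uniform(lo, hi)
        if any(abs(x - t) < tabu_radius for t in tabu):
            continue
        s = evaluate(x)
        for x_old, s_old in results:
            if abs(s - s_old) < score_eps:   # NEW score near an OLD score:
                tabu.extend([x, x_old])      # avoid both regions later
        results.append((x, s))
    best_x, best_s = max(results, key=lambda r: r[1])
    # Step 2: binary chop -- halve the bounds toward the best value so far.
    for _ in range(n_chop):
        lo, hi = (lo + best_x) / 2.0, (hi + best_x) / 2.0
        x = random.uniform(lo, hi)
        s = evaluate(x)
        if s > best_s:
            best_x, best_s = x, s
    return best_x, best_s
```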
Agrawal et al. recommend fewer than 50 evaluations for each of DODGE's two
stages. Note that this is far less than other hyper-parameter optimization
strategies. To see that, consider another hyper-parameter optimization
approach based on genetic algorithms, which mutate $P$ individuals over $G$
generations (and, between each generation, individuals give "birth" to new
individuals by crossing over attributes from two parents). Holland (John,
1992) recommends $P=G=100$ as useful defaults for genetic algorithms. Those
default settings imply that a standard genetic algorithm tuner would require
$100\times 100=10{,}000$ evaluations.
Note that we also considered tuning DeepTriage, but that proved impractical:
* •
The DeepTriage learner used in this study can take up to six CPU hours to
learn one model from the issue close time data. When repeated 20 times (for
statistical validity) over our 15 data sets, that means that using DODGE (with
42 evaluations) on DeepTriage would require over 8 years of CPU time.
* •
On the other hand, with 20 repeats over our datasets, DODGE with feedforward
networks terminated in 26 hours; i.e., nearly 2,700 times faster than tuning
DeepTriage.
## 4\. Experimental Methods
### 4.1. Methods for Issue close time prediction
This section discusses how we comparatively evaluate different ways to do
issue close time prediction. We explore three learners:
* L1:
DeepTriage: a state-of-the-art deep learner from COMAD’19 (Mani et al, 2019);
* L2:
Our SIMPLE neural network learner, described in §4.5;
* L3:
Non-neural approaches: random forest from Marks et al (2011), and logistic
regression from Guo et al (2010) (we present the better of the two results,
where “better” is defined via the statistical methods of §4.7).
These learners will be studied twice:
* S0:
Once, with the default off-the-shelf settings for the learners' control parameters;
* S1:
Once again, using the settings found after some automatic tuning.
The original research plan was to present six sets of results:
planned = {L1,L2,L3} * {S0,S1}
However, as noted below, the tuning times from DeepTriage were so slow that we
could only report five results:
actual = ({L1} * {S0}) + ({L2,L3} * {S0,S1})
### 4.2. Methods for Vulnerability Detection
For vulnerability detection, we use source code as the starting point for our
approach. The first step is to convert the source code into a vector
representation. For this, we use the code2vec method of Alon et al (2019).
Specifically, inspired by the Attention mechanism (Bahdanau et al, 2014;
Vaswani et al, 2017), Alon et al propose a "Path-Attention" framework based on
paths in the abstract syntax tree (AST) of the code. However, the two systems
that we study (ffmpeg and qemu) are written in C/C++, while code2vec was
initially built for Java code. To our benefit, code2vec uses an intermediate
AST representation as its input, which we generate using the astminer toolkit
(https://github.com/JetBrains-Research/astminer). Having done that, we then
use code2vec to create vector representations of our two software systems.
Next, we reduce the dimensionality of these vectors using an autoencoder, an
encoder-decoder architecture (Badrinarayanan et al, 2017) that performs
non-linear dimensionality reduction. Finally, we perform random oversampling
to handle class imbalance (a sketch of this pipeline follows).
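A minimal sketch of that preprocessing pipeline, assuming Keras for the autoencoder and imbalanced-learn for the oversampling; the code dimension and epoch count are illustrative placeholders, not the paper's exact settings.

```python
# code2vec embeddings -> autoencoder reduction -> random oversampling.
from tensorflow import keras
from imblearn.over_sampling import RandomOverSampler

def reduce_and_balance(X, y, code_dim=32, epochs=50):
    inp = keras.Input(shape=(X.shape[1],))
    code = keras.layers.Dense(code_dim, activation="relu")(inp)  # encoder
    out = keras.layers.Dense(X.shape[1])(code)                   # decoder
    autoencoder = keras.Model(inp, out)
    encoder = keras.Model(inp, code)
    autoencoder.compile(optimizer="adam", loss="mse")
    autoencoder.fit(X, X, epochs=epochs, verbose=0)  # learn to reconstruct X
    X_small = encoder.predict(X, verbose=0)          # non-linear reduction
    # Balance the classes by randomly duplicating minority-class samples.
    return RandomOverSampler(random_state=0).fit_resample(X_small, y)
```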
We emphasize here that this step is preprocessing, not training. Any software
analytics solution that can be applied effectively in the real world must use
source code as input; however, machine learning models expect vector inputs.
Therefore, a preprocessing step is necessary to bridge this representation gap
between the raw source code and the input to the machine learning system. For
example, Li et al (2018b) use a bidirectional LSTM model to extract vectors,
and append a Dense layer to this deep learner to make predictions. Training
end-to-end in this manner has the advantage of simplicity, but comes at a
computational cost since each training step also trains the preprocessor. By
decoupling these two parts, we allow for training the preprocessor once (per
software system) and then using the actual learner (in our case, the
feedforward network) to make predictions.
We train our feedforward networks in a straightforward manner: 200 epochs
using the Adam optimizer with default settings. We perform hyper-parameter
optimization using DODGE, for 30 iterations, as recommended by its authors.
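A sketch (ours) of that training setup in Keras; the layer, unit, and batch-size values are placeholders that DODGE would tune from Table 1.

```python
# A small feedforward classifier trained for 200 epochs with default Adam.
from tensorflow import keras

def build_and_train(X_train, y_train, n_classes, n_layers=2, n_units=32):
    model = keras.Sequential(
        [keras.layers.Dense(n_units, activation="relu")
         for _ in range(n_layers)]
        + [keras.layers.Dense(n_classes, activation="softmax")]
    )
    model.compile(optimizer=keras.optimizers.Adam(),  # Adam, default settings
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(X_train, y_train, epochs=200, batch_size=32, verbose=0)
    return model
```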
### 4.3. Data for Issue close time prediction
To obtain a fair comparison with the prior state-of-the-art, we use the same
data as used in the prior study (DASENet) (Lee et al, 2020). One reason to
select this baseline is that we were able to obtain the data used in the
original study (see our reproduction package) and, therefore, were able to
obtain results comparable to prior work. For a summary of that data, see Table
3.
For the comparison with the Mani et al (2019) study, the data was collected
from Bugzilla for the three projects: Firefox, Chromium, and Eclipse:
* •
To collect that data, Mani et al (2019) applied standard text mining
preprocessing (pattern matching to remove special characters and stack traces,
tokenization, and pruning the corpus to a fixed length).
* •
Next, the activities of each day were collected into “bins”, which contain
metadata (such as whether the person was the reporter, days from opening,
etc.), system records (such as labels added or removed, new people added to
CC, etc.), and user activity such as comments.
* •
The metadata can directly be represented in numerical form, while the user and
system records are transformed from text to numerical form using the word2vec
(Mikolov et al, 2013a, b) system. These features, along with the metadata,
form the input to the DeepTriage (Mani et al, 2019) system and our feedforward
learners for comparison.
In the same manner as prior work using the Bugzilla datasets, we discretize
the target class into 2, 3, 5, 7, and 9 bins (so that each bin has roughly the
same number of samples). This yields datasets that are near-perfectly balanced
(for example, in the Firefox 2-class dataset, we observed a 48%-52% class
ratio).
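A one-line way to reproduce this equal-frequency discretization, assuming pandas; the column name below is hypothetical.

```python
# Split issue close times into k roughly equal-sized (equal-frequency) bins.
import pandas as pd

def discretize(close_times, k):
    # labels=False returns integer bin indices 0..k-1;
    # duplicates="drop" guards against ties at the quantile edges.
    return pd.qcut(close_times, q=k, labels=False, duplicates="drop")

# e.g. discretize(df["days_to_close"], 5) yields a near-balanced 5-class target.
```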
Table 3. Issue close time prediction data, from the Lee et al (2020) study. Note that, because of the manner of data collection (using bin-sequences for each day of each report), many more data samples are generated than the number of reports mined.
Project | Observation Period | # Reports | # Train | # Test
---|---|---|---|---
Eclipse | Jan 2010–Mar 2016 | 16,575 | 44,545 | 25,459
Chromium | Mar 2014–Aug 2015 | 15,170 | 44,801 | 25,200
Firefox | Apr 2014–May 2016 | 13,619 | 44,800 | 25,201
Table 4. Vulnerability detection datasets, from the Devign (Zhou et al, 2019) paper. VFCs = Vulnerability-Fixing Commits.
Project | Total commits | VFCs | Non-VFCs
---|---|---|---
qemu | 11,910 | 4,932 | 6,978
ffmpeg | 13,962 | 5,962 | 8,000
### 4.4. Data for Vulnerability Detection
For vulnerability detection, we use the datasets provided by Zhou et al
(2019). However, although the authors test their approach on four projects,
only two are released: ffmpeg and qemu. These are two large, widely used
C/C++ applications: ffmpeg is a library that handles audio and video tasks
such as encoding; qemu is a hypervisor. To collect this data, the authors
gathered vulnerability-fixing commits (VFCs) and non-vulnerability-fixing
commits (non-VFCs) using (a) keyword-based filtering of commits based on
their commit messages and (b) manual labeling. Then, vulnerable and
non-vulnerable functions were extracted from these commits. The authors use
Joern (Yamaguchi et al, 2014) to extract abstract syntax trees, control flow
graphs, and data flow graphs from these functions.
In total, for qemu, the authors collected 11,910 commits, of which 4,932 were
VFCs and 6,978 were non-VFCs. For ffmpeg, the authors collected 13,962
commits, of which 5,962 were VFCs and 8,000 were non-VFCs. This data is
summarized in Table 4.
### 4.5. Tuning the SIMPLE Algorithm
Our SIMPLE algorithm is shown in Algorithm 1.
Table 1 shows the parameters that control the feedforward network used by
SIMPLE.
One issue with any software analytics paper is how researchers decide on the
"magic numbers" that control their learners (e.g., Table 1). In order to keep
this paper about simple feedforward networks versus deep learning (and not
about complex methods for hyper-parameter optimization), we selected the
controlling hyper-parameters for the feedforward networks using the simple
DODGE hyper-parameter optimizer (§3.3).
1 Set random number seed;
2 for _20 times_ do
3 Shuffle the data;
4 Set train, test = 70%, 30% splits of the data;
/* Learning */
5 Apply a feedforward neural network; on the training data, tune the hyper-parameters of Table 1 using DODGE (see §3.3);
6 Take the best model found on the training data and apply it to the test data;
7 Report performance scores on the test data;
8 end for
Algorithm 1 SIMPLE
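A runnable sketch of Algorithm 1's outer loop, where tune_with_dodge and score are hypothetical stand-ins for the DODGE tuning of §3.3 and the metrics of §4.6.

```python
# Algorithm 1's outer loop: 20 repeats of shuffle, 70/30 split, tune, test.
from sklearn.model_selection import train_test_split

def simple(X, y, tune_with_dodge, score, repeats=20, seed=42):
    results = []
    for i in range(repeats):
        # Shuffle and split the data 70%/30% (a fresh split per repeat).
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=0.30, shuffle=True, random_state=seed + i)
        model = tune_with_dodge(X_tr, y_tr)  # best model found on train data
        results.append(score(model, X_te, y_te))
    return results
```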
### 4.6. Performance Metrics
Since we wish to compare our approach to prior work, we take the
methodological step of adopting the same performance scores as those used in
prior work. Lee et al (2020) use the following two metrics in their study:
* •
Accuracy is the percentage of correctly classified samples. If TP, TN, FP, FN
are the true positives, true negatives, false positives, and false negatives
(respectively), then accuracy is $\mathit{(TP+TN)/(TP+TN+FP+FN)}$.
* •
Top-2 Accuracy, for multi-class classification, is defined as the percentage
of samples whose class label is among the two classes predicted by the
classifier as most likely. Specifically, we predict the probabilities of a
sample being in each class, and sort them in descending order. If the true
label of the sample is among the top 2 classes ranked by the classifier, it is
marked as “correct”.
Additionally, for vulnerability detection, Zhou et al (2019) use F1-score as
their metric, which is defined as follows. Let recall be defined as the
fraction of true positive samples that the classifier correctly identified,
and precision be the fraction of samples classified as positive, that were
actually positive. That is,
$\mathrm{Recall}=\frac{TP}{TP+FN}\qquad\mathrm{Precision}=\frac{TP}{TP+FP}$
Then F1-score is the harmonic mean of recall and precision, i.e.,
$\mathrm{F1}=\frac{2\cdot\mathrm{precision}\cdot\mathrm{recall}}{\mathrm{precision}+\mathrm{recall}}$
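Our illustrative implementation of these scores (not the original studies' evaluation scripts); plain accuracy and F1 come directly from scikit-learn, so only the top-k score needs custom code.

```python
# Top-k accuracy for multi-class predictions given per-class probabilities.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score  # accuracy and F1

def top_k_accuracy(y_true, probs, k=2):
    # A sample counts as correct if its true label is among the k classes
    # given the highest predicted probability.
    top_k = np.argsort(probs, axis=1)[:, -k:]
    return float(np.mean([y in row for y, row in zip(y_true, top_k)]))
```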
Table 5. Results on BugZilla data used in prior deep learning state of the
art. The target label is discretized into a different number of classes
(columns) as in the prior work. Dark cells indicate statistically better
performance.
Key: DT = DeepTriage (Mani et al, 2019); NDL-T = best result of untuned non-neural methods, i.e., the best of logistic regression (Guo et al, 2010) and random forests (Marks et al, 2011); NDL+T = best of DODGE-tuned non-neural methods, i.e., NDL-T plus tuning; FF = untuned feedforward network, i.e., Algorithm 1 without tuning; SIMPLE = FF plus tuning; $T_{k}$ = Top-$k$ accuracy.
Project | Model | 2-class | 3-class | 5-class | 7-class | 9-class
---|---|---|---|---|---|---
| | $T_{1}$ | $T_{1}$ | $T_{2}$ | $T_{1}$ | $T_{2}$ | $T_{1}$ | $T_{2}$ | $T_{1}$ | $T_{2}$
Firefox | DT | 67 | 44 | 78 | 31 | 58 | 21 | 39 | 19 | 35
NDL-T | 70 | 43 | 64 | 30 | 42 | 18 | 30 | 18 | 30
NDL+T | 68 | 47 | 79 | 34 | 61 | 25 | 45 | 21 | 39
FF | 71 | 49 | 82 | 37 | 63 | 26 | 47 | 23 | 41
SIMPLE | 70 | 53 | 86 | 39 | 67 | 37 | 61 | 25 | 45
Chromium | DT | 63 | 43 | 75 | 27 | 52 | 22 | 38 | 18 | 33
NDL-T | 64 | 35 | 56 | 23 | 36 | 15 | 27 | 15 | 28
NDL+T | 64 | 49 | 79 | 30 | 56 | 26 | 42 | 23 | 40
FF | 65 | 53 | 82 | 35 | 60 | 27 | 45 | 26 | 42
SIMPLE | 68 | 55 | 83 | 36 | 61 | 29 | 48 | 28 | 45
Eclipse | DT | 61 | 44 | 73 | 27 | 51 | 20 | 37 | 19 | 34
NDL-T | 66 | 33 | 54 | 23 | 38 | 16 | 29 | 16 | 29
NDL+T | 65 | 52 | 81 | 30 | 56 | 27 | 44 | 27 | 42
FF | 66 | 54 | 81 | 32 | 59 | 30 | 47 | 30 | 46
SIMPLE | 69 | 56 | 84 | 35 | 62 | 31 | 48 | 33 | 49
Table 6. Vulnerability detection results.
Project | Model | F1-score
---|---|---
qemu | NDL-T | 59
NDL+T | 45
FF | 51
SIMPLE | 73
Devign | 73
ffmpeg | NDL-T | 52
NDL+T | 52
FF | 57
SIMPLE | 67
Devign | 74
### 4.7. Statistics
Since some of our deep learners are so slow to execute, one challenge in these
results is to compare the results of a very slow system against those of a
very fast one (SIMPLE), where the latter can be run multiple times while it is
impractical to repeatedly run the former. Hence, for our definition of "best",
we will compare one result of size $|N_{1}|=1$ from the slower learner
(DeepTriage) to a sample of $|N_{2}|=20$ results from the other.
Statistically, our evaluation of these results requires a check of whether one
result is less than a "small effect" different from the central tendency of
the other population. For that statistical task, Rosenthal et al (1994) say
there are two "families" of methods: the $r$ group, which is based on the
Pearson correlation coefficient, and the $d$ family, which is based on
absolute differences normalized by (e.g.) the size of the standard deviation.
Rosenthal et al (1994) comment that "none is intrinsically better than the
other". Hence, the most direct method is utilized in our paper: using a $d$
family method, one distribution is judged the same as another if their mean
values differ by less than Cohen's delta ($d\times$ standard deviation):
(1)
$d=\mathit{small\;effect}=0.3\times\sqrt{\frac{\sum_{i=1}^{n}\left(x_{i}-\bar{x}\right)^{2}}{n-1}},\quad\text{where }\bar{x}=\frac{1}{n}\sum_{i=1}^{n}x_{i}$
i.e., 30% of the standard deviation of the $N_{2}$ population.
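In code, that check amounts to the following sketch (ours), which treats two results as statistically indistinguishable when their means differ by less than 30% of the $N_2$ sample's standard deviation.

```python
# The Equation (1) "small effect" check for one slow result vs. 20 fast ones.
import statistics

def indistinguishable(slow_result, fast_results, d=0.3):
    small_effect = d * statistics.stdev(fast_results)        # Equation (1)
    return abs(slow_result - statistics.mean(fast_results)) < small_effect
```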
## 5\. Results
In this section, we discuss our results by answering two research questions:
RQ1. Does “Old but Gold” hold for issue lifetime prediction?
RQ2. Does “Old but Gold” hold for vulnerability detection?
### 5.1. RQ1: Issue lifetime prediction
In this section, we discuss the answer to RQ1, which was, “Does the Old but
Gold hypothesis hold for issue lifetime prediction?”
In Table 5, the best results are indicated by gray cells. The columns of that
table describe how detailed our time predictions are. A column labeled
$k$-class means that the data was discretized into $k$ distinct labels, as
done in prior work (see Lee et al (2020) for details).
Recall that cells are in gray if they are statistically significantly better.
In all cases, SIMPLE's results were (at least) as good as anything else.
Further, once we start exploring more detailed time divisions (the 3-class,
5-class, etc. problems), SIMPLE is the stand-out best algorithm.
Another thing we can say about these results is that SIMPLE is much faster
than the other approaches. The above results took $\approx$ 90 hours to
generate, of which 9 hours were required for SIMPLE (for 20 runs, over all 15
datasets) and 80 hours were required for the deep learner (for 1 run, over all
15 datasets). Recall that if we had also attempted to tune the deep learner,
that runtime would have exploded to over 8 years of CPU time.
From this discussion, we conclude RQ1 as follows:
The “Old but Gold” hypothesis holds for issue lifetime prediction.
### 5.2. RQ2: Vulnerability detection
In this section, we discuss the answer to RQ2, which was, "Does the effect
hold for vulnerability detection?"
Table 6 shows our results for vulnerability detection. While our data is
limited (in that we could only use the two datasets released by the authors of
(Zhou et al, 2019)), the data we do have suggests that SIMPLE can perform as
well as Devign. In the case where SIMPLE lost, the difference was small (7%).
Therefore, we recommend the more complex deep learner when that 7% is
justified by domain constraints (e.g., a highly safety-critical system);
however, a pragmatic engineering case could be made that the difference is
marginal and negligible. We postulate that the slightly better performance of
Devign is due to the superior preprocessing done by the multiple deep learning
layers used by their approach, which allows for richer feature extraction.
That said, our approach runs much faster than their sophisticated technique.
While we could not reproduce their runtimes (since their code is not open
source), our approach takes 205 seconds on average, while their approach runs
overnight (for their runtime, we contacted the authors, who reported that "it
ran overnight on their machines").
Our conclusion is that:
For vulnerability detection, the “Old but Gold” hypothesis worked for half the
data sets studied here.
These results mean that we cannot unequivocally advocate simple methods for
vulnerability detection. But then, neither can they advocate for the use of
deep learning for vulnerability prediction. In our view, these results
strongly motivate the need for further study in this area (since, if simpler
methods do indeed prevail for vulnerability detection, then this would
simplify research into pressing current issues of software security).
## 6\. Threats to Validity
Sampling bias: As with any other data mining paper, it is important to discuss
sampling bias. We claim that this is mitigated by testing on 3 large SE
projects over multiple discretizations, and demonstrating our results across
all of them. Further, these datasets have been used in prior work that have
achieved state-of-the-art performance recently. Nevertheless, in future work,
it would be useful to explore more data.
Learner bias: Our learner bias here corresponds to the choice of architectures
we used in our deep learners. As discussed above, we chose the architectures
based on our reading of “standard DL” from the literature. While newer
architectures may lead to better results, the crux of this paper was on how
simple networks suffice. Therefore, we maintain that the intentional usage of
the simple, feedforward architecture was necessary to prove our hypothesis.
Evaluation bias: We compared our methods using top-1 and top-2 accuracy
scores, consistent with prior work. These metrics are valid since the method
by which the classes were discretized (as discussed in prior work) yields
equal-frequency classes. We further reduce evaluation bias by running our
experiments 20 times for each setup and using the distribution statistics of
§4.7 to check whether one setup is significantly better than another.
Order bias: This refers to bias in the order in which data elements appear in
the training and testing sets. We minimize this by running the experiment 20
times, each with a different random train-test split.
External validity: We tune the hyper-parameters of the neural network using
DODGE, removing external biases from the approach. Our baseline results are
based on the results of Montufar et al. (Montufar et al, 2014), which have
been evaluated by the deep learning community. We also compare our work to
non-deep-learning methods, both with and without tuning by DODGE, to provide a
complete picture of the performance of our suggested approach in relation to
prior work and other learners.
Figure 2. The distribution of papers across venues.
Figure 3. A summary of our literature review of deep learning methods in SE.
The blue row denotes the DeepTriage system used in this paper. Legend: A =
attention mechanism, B = deep belief network, C = convolutional networks, E =
embedding, F = feedforward networks (which includes traditional perceptrons
(Rosenblatt, 1961; McCulloch and Pitts, 1943)), G = graph networks, M = misc
(i.e., some particular architecture invented by the author, used in one
paper), S = sequence, W = word2vec. For a list of the papers shown in the
right-hand-side column, see Table 7.
## 7\. Literature Review: Deep Learning in SE
Using a literature review, this section argues that the issue raised in this
paper (that researchers rush to use the latest methods from the deep learning
literature without baselining them against simpler alternatives) is widespread
in the software analytics literature.
To understand how deep learning is used in SE, we performed the following
steps.
* •
Seed: Our approach started with collecting relevant papers. As a seed, we
collected papers from the recent literature review conducted by Watson
(Watson, 2020).
* •
Search: To this list, we added papers found by our own searches on Google
Scholar. Our search keywords included "deep learning AND software", "deep
learning AND defect prediction", and "deep learning AND bug fix" (this last
criterion was added since we found that some recent papers, such as Lee et al
(2020), used the term "bug fix time" rather than "issue close time").
* •
Filter: Next, we filtered papers using the following criteria: (a) published
in top venues as listed in the Google Scholar metrics for Software Systems,
Artificial Intelligence, and Computational Linguistics, or released on arXiv
in the last 3 years, or widely cited ($>$100 cites); and (b) has at least 10
cites per year, unless it was published in or after 2017 (the last three
years). The distribution of papers across the different venues is shown in
Figure 2.
* •
Backward Snowballing: As recommended by Wohlin (2014), we performed
"snowballing" on our list of papers (i.e., we added papers cited by the papers
in our list that also satisfy the criteria above). Our snowballing stopped
when either (a) the list of papers cited by the current generation was a
subset of the papers already in the list, or (b) no further papers were found.
This led to a list of 99 papers, which we summarize in Figure 3. Some
engineering judgement was used in assigning papers to the categories of that
figure. For example, a paper on learning a latent embedding of an API (Nguyen
et al, 2017) for various purposes, such as discovering analogous APIs among
third-parties (Chen et al, 2019), was categorized as "code comprehension".
Similarly, most papers performing some variant of code translation, including
API translation as in (Gu et al, 2017), were categorized into "language
processing", a bin that contains programming language processing and natural
language processing. Tasks that we could not justifiably merge into an
existing bin (e.g., image processing (Ott et al, 2018; Sun et al, 2018)) were
given their own special category.
Note the numbers on top of the columns of Figure 3:
* •
Slightly more than half (60.1%) of those papers compare their results to
non-DL methods. We suggest that this number should be higher; it is important
to benchmark new methods against the prior state-of-the-art.
* •
Only a minority of papers (39.4%) performed any sort of hyper-parameter
optimization (HPO), i.e., used methods that tune the various
"hyper-parameters", such as the number of layers of the deep learner, to eke
out the best performance of the deep learner.
* •
Even fewer papers (18.2%) applied hyper-parameter optimization in a
non-trivial manner; i.e., not using deprecated grid search (Bergstra and
Bengio, 2012), and using a hold-out set to assess the tuning before going to a
separate test set.
* •
Finally, few papers (10.1%) both used non-trivial hyper-parameter optimization
and compared their results to prior non-deep-learning work. These "best of
breed" papers are listed in Table 7.
Table 7. Papers in column (z) of Figure 3.
Paper | Reference
---|---
Suggesting Accurate Method and Class Names | (Allamanis et al, 2015)
Automated Vulnerability Detection in Source Code Using Deep Representation Learning | (Russell et al, 2018)
A convolutional attention network for extreme summarization of source code | (Allamanis et al, 2016)
Automating intention mining | (Huang et al, 2018)
Sentiment analysis for software engineering: How far can we go? | (Lin et al, 2018)
500+ times faster than deep learning: A case study exploring faster methods for text mining stackoverflow | (Menzies et al, 2018)
Automatically learning semantic features for defect prediction | (Wang et al, 2016)
Deep green: Modelling time-series of software energy consumption | (Romansky et al, 2017)
On the Value of Oversampling for Deep Learning in Software Defect Prediction | (Yedida and Menzies, 2021)
In summary, we find that the general pattern in the literature is that, while
there is much new work on deep learning, there is not so much work comparing
these new methods to older, simpler approaches. This is a concern since, as
shown in this paper, those older, simpler methods, being faster, are more
amenable to hyper-parameter optimization and can yield better results when
tuned. As we stated above, 40% of papers do not compare against simpler,
non-deep-learning methods, and only 18% of papers apply hyper-parameter
optimization to their approach, possibly because doing so is computationally
infeasible with more complex methods.
## 8\. Discussion and Conclusion
In this paper, we explored the state of the literature applying deep learning
techniques to software engineering tasks. We discussed and explored a systemic
tendency to choose fundamentally more complex models than needed. We used
this, and the study by Galke and Scherp (2021), as motivation to apply simpler
neural-network models to two software engineering tasks: predicting issue
close time, and vulnerability detection. Our model is much simpler than prior
state-of-the-art deep learning models and takes significantly less time to
run. We argue that these "old but gold" models are sorely under-used in modern
SE applications of deep learning, with researchers preferring more
sophisticated methods.
As to why SIMPLE performs so well, we hypothesize that its power came from
tuning the hyper-parameters. To test this, we also ran the feedforward
architecture without tuning (see FF in Table 5). We note a stark difference
between the performance of the untuned and tuned versions of this
architecture.
From our results, we say that deep learning is a promising method, but it
should be considered in the context of other techniques. We suggest to the
community that, before analysts jump to more complex approaches, they try a
simpler approach; at the very least, this will form a baseline that can
endorse the value of the more complex learner. There is much literature on baselines in
SE: for example, in his textbook on empirical methods for AI, Cohen (1995)
strongly advocates comparing against simpler baselines. In the machine
learning community, Holte (1993) uses the “OneR” baseline to judge the
complexity of upcoming tasks. In the SE community, Whigham et al (2015)
recently proposed baseline methods for effort estimation (for other baseline
methods, see Mittas and Angelis (2012)). Shepperd and MacDonell (2012) argue
convincingly that measurements are best viewed as ratios compared to
measurements taken from some minimal baseline system. Work on cross versus
within-company cost estimation has also recommended the use of some very
simple baseline (they recommend regression as their default model (Kitchenham
et al, 2006)).
Our results present a cautionary tale about the pitfalls of using deep
learners. While it is certainly tempting to use the state-of-the-art results
from the deep learning literature (which, as prior work has shown, certainly
yield good results), we advise the reader to instead first attempt simpler
models and apply hyper-parameter tuning to achieve better performance,
faster.
It is left as future work to explore whether this same principle of using
SIMPLE models works equally well for other software engineering tasks. By
relying on simple architectures rather than deep learners, we obtain faster,
simpler, and more space-efficient models. This exploration naturally lends itself to
the application of modern deep learning theory to further simplify these
SIMPLE models. In particular, Han et al (2015) explored model compression
techniques based on reduced-precision weights, an idea that is gaining
increasing attention in the deep learning community (we refer the reader to
Gupta et al (2015) and Wang et al (2018) for details, and Tung and Mori (2018)
for a parallel implementation of these techniques). Further, knowledge
distillation (Hinton et al, 2015), a method of training student learners (such
as decision trees) from a parent deep learning model, has shown great promise,
with the student learners outperforming the deep learners they were derived
from. This would make it possible to have the accuracy of deep learning with
the speed of decision tree learning.
To repeat some comments from the introduction, the experiments of this paper
are based on two case studies. Hence, they do not show that all deep learners
can be replaced by faster and simpler methods. That said, we would say that
there is enough evidence here to give the software analytics community reason
to pause, and reflect, on the merits of rushing headlong into new things
without a careful consideration of all that has gone before.
## Declarations
* •
Funding: None.
* •
Conflicts of interest/Competing interests: None.
* •
Availability of data and material: All data used in this manuscript is
publicly available at https://github.com/mkris0714/Bug-Related-Activity-Logs.
* •
Code availability: All source code used is available at
https://github.com/fastidiouschipmunk/simple.
## References
* Aggarwal et al (2001) Aggarwal CC, Hinneburg A, Keim DA (2001) On the surprising behavior of distance metrics in high dimensional space. In: International conference on database theory, Springer, pp 420–434
* Agrawal and Menzies (2018) Agrawal A, Menzies T (2018) Is "better data" better than "better data miners"? In: 2018 IEEE/ACM 40th International Conference on Software Engineering (ICSE), IEEE, pp 1050–1061
* Agrawal et al (2019) Agrawal A, Fu W, Chen D, Shen X, Menzies T (2019) How to "dodge" complex software analytics. IEEE Transactions on Software Engineering
* Agrawal et al (2021) Agrawal A, Yang X, Agrawal R, Shen X, Menzies T (2021) Simpler hyperparameter optimization for software analytics: Why, how, when? arXiv preprint arXiv:2008.07334
* Akbarinasaji et al (2018) Akbarinasaji S, Caglayan B, Bener A (2018) Predicting bug-fixing time: A replication study using an open source software project. Journal of Systems and Software 136:173–186
* Allamanis et al (2015) Allamanis M, Barr ET, Bird C, Sutton C (2015) Suggesting accurate method and class names. In: Proceedings of the 2015 10th Joint Meeting on Foundations of Software Engineering, pp 38–49
* Allamanis et al (2016) Allamanis M, Peng H, Sutton C (2016) A convolutional attention network for extreme summarization of source code. In: International conference on machine learning, pp 2091–2100
* Alon et al (2019) Alon U, Zilberstein M, Levy O, Yahav E (2019) code2vec: Learning distributed representations of code. Proceedings of the ACM on Programming Languages 3(POPL):1–29
* Badrinarayanan et al (2017) Badrinarayanan V, Kendall A, Cipolla R (2017) Segnet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE transactions on pattern analysis and machine intelligence 39(12):2481–2495
* Bahdanau et al (2014) Bahdanau D, Cho K, Bengio Y (2014) Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473
* Bergstra and Bengio (2012) Bergstra J, Bengio Y (2012) Random search for hyper-parameter optimization. The Journal of Machine Learning Research 13(1):281–305
* Bergstra et al (2011) Bergstra J, Bardenet R, Bengio Y, Kégl B (2011) Algorithms for hyper-parameter optimization. Advances in neural information processing systems 24:2546–2554
* Brown et al (2020) Brown TB, Mann B, Ryder N, Subbiah M, Kaplan J, Dhariwal P, Neelakantan A, Shyam P, Sastry G, Askell A, et al (2020) Language models are few-shot learners. arXiv preprint arXiv:2005.14165
* Chen et al (2019) Chen C, Xing Z, Liu Y, Ong KLX (2019) Mining likely analogical apis across third-party libraries via large-scale unsupervised api semantics embedding. IEEE Transactions on Software Engineering
* Chen and Zhou (2018) Chen Q, Zhou M (2018) A neural framework for retrieval and summarization of source code. In: Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering, Association for Computing Machinery, New York, NY, USA, ASE 2018, p 826–831, DOI 10.1145/3238147.3240471, URL https://doi.org/10.1145/3238147.3240471
* Cohen (1995) Cohen PR (1995) Empirical methods for artificial intelligence, vol 139. MIT press Cambridge, MA
* Fu et al (2016) Fu W, Menzies T, Shen X (2016) Tuning for software analytics: Is it really necessary? Information and Software Technology 76:135–146
* Galke and Scherp (2021) Galke L, Scherp A (2021) Forget me not: A gentle reminder to mind the simple multi-layer perceptron baseline for text classification. arXiv preprint arXiv:2109.03777
* Gao et al (2020) Gao Z, Jiang L, Xia X, Lo D, Grundy J (2020) Checking smart contracts with structural code embedding. IEEE Transactions on Software Engineering
* Giger et al (2010) Giger E, Pinzger M, Gall H (2010) Predicting the fix time of bugs. In: Proceedings of the 2nd International Workshop on Recommendation Systems for Software Engineering, pp 52–56
* Goodfellow et al (2016) Goodfellow I, Bengio Y, Courville A, Bengio Y (2016) Deep learning, vol 1. MIT press Cambridge
* Grieco et al (2016) Grieco G, Grinblat GL, Uzal L, Rawat S, Feist J, Mounier L (2016) Toward large-scale vulnerability discovery using machine learning. In: Proceedings of the Sixth ACM Conference on Data and Application Security and Privacy, pp 85–96
* Gu et al (2017) Gu X, Zhang H, Zhang D, Kim S (2017) Deepam: Migrate apis with multi-modal sequence to sequence learning. arXiv preprint arXiv:1704.07734
* Guo et al (2010) Guo PJ, Zimmermann T, Nagappan N, Murphy B (2010) Characterizing and predicting which bugs get fixed: an empirical study of microsoft windows. In: Proceedings of the 32Nd ACM/IEEE International Conference on Software Engineering-Volume 1, pp 495–504
* Gupta et al (2015) Gupta S, Agrawal A, Gopalakrishnan K, Narayanan P (2015) Deep learning with limited numerical precision. In: International Conference on Machine Learning, pp 1737–1746
* Habayeb et al (2017) Habayeb M, Murtaza SS, Miranskyy A, Bener AB (2017) On the use of hidden markov model to predict the time to fix bugs. IEEE Transactions on Software Engineering 44(12):1224–1244
* Han et al (2015) Han S, Mao H, Dally WJ (2015) Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. arXiv preprint arXiv:1510.00149
* Hinton et al (2015) Hinton G, Vinyals O, Dean J (2015) Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531
* Hoang et al (2019) Hoang T, Dam HK, Kamei Y, Lo D, Ubayashi N (2019) Deepjit: an end-to-end deep learning framework for just-in-time defect prediction. In: 2019 IEEE/ACM 16th International Conference on Mining Software Repositories (MSR), IEEE, pp 34–45
* Hochreiter and Schmidhuber (1997) Hochreiter S, Schmidhuber J (1997) Long short-term memory. Neural computation 9(8):1735–1780
* Holte (1993) Holte RC (1993) Very simple classification rules perform well on most commonly used datasets. Machine learning 11(1):63–90
* Huang et al (2018) Huang Q, Xia X, Lo D, Murphy GC (2018) Automating intention mining. IEEE Transactions on Software Engineering
* Jiang and Agrawal (2018) Jiang P, Agrawal G (2018) A linear speedup analysis of distributed deep learning with sparse and quantized communication. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems, pp 2530–2541
* John (1992) John H (1992) Genetic algorithms. Scientific american 267(1):44–50
* Kikas et al (2016) Kikas R, Dumas M, Pfahl D (2016) Using dynamic and contextual features to predict issue lifetime in github projects. In: Proceedings of the 13th International Conference on Mining Software Repositories, Association for Computing Machinery, New York, NY, USA, MSR ’16, p 291–302, DOI 10.1145/2901739.2901751, URL https://doi.org/10.1145/2901739.2901751
* Kim et al (2017) Kim S, Woo S, Lee H, Oh H (2017) Vuddy: A scalable approach for vulnerable code clone discovery. In: 2017 IEEE Symposium on Security and Privacy (SP), IEEE, pp 595–614
* Kingma and Ba (2014) Kingma DP, Ba J (2014) Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980
* Kipf and Welling (2016) Kipf TN, Welling M (2016) Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907
* Kitchenham et al (2006) Kitchenham B, Mendes E, Travassos GH (2006) A systematic review of cross-vs. within-company cost estimation studies. In: 10th International Conference on Evaluation and Assessment in Software Engineering (EASE) 10, pp 1–10
* Le et al (2011) Le QV, Ngiam J, Coates A, Lahiri A, Prochnow B, Ng AY (2011) On optimization methods for deep learning. In: ICML
* LeCun et al (2015) LeCun Y, Bengio Y, Hinton G (2015) Deep learning. nature 521(7553):436–444
* Lee et al (2020) Lee Y, Lee S, Lee CG, Yeom I, Woo H (2020) Continual prediction of bug-fix time using deep learning-based activity stream embedding. IEEE Access 8:10503–10515
* Levina and Bickel (2004) Levina E, Bickel P (2004) Maximum likelihood estimation of intrinsic dimension. Advances in neural information processing systems 17:777–784
* Li et al (2018a) Li H, Xu Z, Taylor G, Studer C, Goldstein T (2018a) Visualizing the loss landscape of neural nets. In: Advances in Neural Information Processing Systems, pp 6389–6399
* Li et al (2018b) Li Z, Zou D, Xu S, Ou X, Jin H, Wang S, Deng Z, Zhong Y (2018b) Vuldeepecker: A deep learning-based system for vulnerability detection. arXiv preprint arXiv:1801.01681
* Lin et al (2018) Lin B, Zampetti F, Bavota G, Di Penta M, Lanza M, Oliveto R (2018) Sentiment analysis for software engineering: How far can we go? In: Proceedings of the 40th International Conference on Software Engineering, pp 94–104
* Liu et al (2019) Liu H, Jin J, Xu Z, Bu Y, Zou Y, Zhang L (2019) Deep learning based code smell detection. IEEE Transactions on Software Engineering
* Mani et al (2019) Mani S, Sankaran A, Aralikatte R (2019) Deeptriage: Exploring the effectiveness of deep learning for bug triaging. In: COMAD’19: ACM India Joint International Conference on Data Science and Management of Data, pp 171–179
* Marks et al (2011) Marks L, Zou Y, Hassan AE (2011) Studying the fix-time for bugs in large open source projects. In: Proceedings of the 7th International Conference on Predictive Models in Software Engineering, pp 1–8
* Martens et al (2010) Martens J, et al (2010) Deep learning via hessian-free optimization. In: ICML, vol 27, pp 735–742
* McCulloch and Pitts (1943) McCulloch WS, Pitts W (1943) A logical calculus of the ideas immanent in nervous activity. The bulletin of mathematical biophysics 5(4):115–133
* Menzies et al (2018) Menzies T, Majumder S, Balaji N, Brey K, Fu W (2018) 500+ times faster than deep learning:(a case study exploring faster methods for text mining stackoverflow). In: 2018 IEEE/ACM 15th International Conference on Mining Software Repositories (MSR), IEEE, pp 554–563
* Mikolov et al (2013a) Mikolov T, Chen K, Corrado G, Dean J (2013a) Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781
* Mikolov et al (2013b) Mikolov T, Sutskever I, Chen K, Corrado GS, Dean J (2013b) Distributed representations of words and phrases and their compositionality. In: Advances in neural information processing systems, pp 3111–3119
* Mittas and Angelis (2012) Mittas N, Angelis L (2012) Ranking and clustering software cost estimation models through a multiple comparisons algorithm. IEEE Transactions on software engineering 39(4):537–551
* Montufar et al (2014) Montufar GF, Pascanu R, Cho K, Bengio Y (2014) On the number of linear regions of deep neural networks. In: Advances in neural information processing systems, pp 2924–2932
* Nair and Hinton (2010) Nair V, Hinton GE (2010) Rectified linear units improve restricted boltzmann machines. In: ICML
* Nguyen et al (2017) Nguyen TD, Nguyen AT, Phan HD, Nguyen TN (2017) Exploring api embedding for api usages and applications. In: 2017 IEEE/ACM 39th International Conference on Software Engineering (ICSE), IEEE, pp 438–449
* Ott et al (2018) Ott J, Atchison A, Harnack P, Bergh A, Linstead E (2018) A deep learning approach to identifying source code in images and video. In: 2018 IEEE/ACM 15th International Conference on Mining Software Repositories (MSR), IEEE, pp 376–386
* Pearson (1901) Pearson K (1901) Liii. on lines and planes of closest fit to systems of points in space. The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science 2(11):559–572
* Rees-Jones et al (2017) Rees-Jones M, Martin M, Menzies T (2017) Better predictors for issue lifetime. arXiv preprint arXiv:1702.07735
* Romansky et al (2017) Romansky S, Borle NC, Chowdhury S, Hindle A, Greiner R (2017) Deep green: Modelling time-series of software energy consumption. In: 2017 IEEE International Conference on Software Maintenance and Evolution (ICSME), IEEE, pp 273–283
* Rosenblatt (1961) Rosenblatt F (1961) Principles of neurodynamics. perceptrons and the theory of brain mechanisms. Tech. rep., Cornell Aeronautical Lab Inc Buffalo NY
* Rosenthal et al (1994) Rosenthal R, Cooper H, Hedges L (1994) Parametric measures of effect size. The handbook of research synthesis 621(2)
* Rumelhart et al (1985) Rumelhart DE, Hinton GE, Williams RJ (1985) Learning internal representations by error propagation. Tech. rep., California Univ San Diego La Jolla Inst for Cognitive Science
* Russell et al (2018) Russell R, Kim L, Hamilton L, Lazovich T, Harer J, Ozdemir O, Ellingwood P, McConley M (2018) Automated vulnerability detection in source code using deep representation learning. In: 2018 17th IEEE international conference on machine learning and applications (ICMLA), IEEE, pp 757–762
* Shepperd and MacDonell (2012) Shepperd M, MacDonell S (2012) Evaluating prediction systems in software project estimation. Information and Software Technology 54(8):820–827
* Sun et al (2018) Sun SH, Noh H, Somasundaram S, Lim J (2018) Neural program synthesis from diverse demonstration videos. In: International Conference on Machine Learning, pp 4790–4799
* Tantithamthavorn et al (2016) Tantithamthavorn C, McIntosh S, Hassan AE, Matsumoto K (2016) Automated parameter optimization of classification techniques for defect prediction models. In: Proceedings of the 38th International Conference on Software Engineering, Association for Computing Machinery, New York, NY, USA, ICSE ’16, p 321–332, DOI 10.1145/2884781.2884857, URL https://doi.org/10.1145/2884781.2884857
* Thornton et al (2013) Thornton C, Hutter F, Hoos HH, Leyton-Brown K (2013) Auto-weka: Combined selection and hyperparameter optimization of classification algorithms. In: Proceedings of the 19th ACM SIGKDD international conference on Knowledge discovery and data mining, pp 847–855
* Tung and Mori (2018) Tung F, Mori G (2018) Clip-q: Deep network compression learning by in-parallel pruning-quantization. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp 7873–7882
* Vaswani et al (2017) Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez AN, Kaiser Ł, Polosukhin I (2017) Attention is all you need. In: Advances in neural information processing systems, pp 5998–6008
* Viega et al (2000) Viega J, Bloch JT, Kohno Y, McGraw G (2000) Its4: A static vulnerability scanner for c and c++ code. In: Proceedings 16th Annual Computer Security Applications Conference (ACSAC’00), IEEE, pp 257–267
* Vieira et al (2019) Vieira R, da Silva A, Rocha L, Gomes JP (2019) From reports to bug-fix commits: A 10 years dataset of bug-fixing activity from 55 apache’s open source projects. In: Proceedings of the Fifteenth International Conference on Predictive Models and Data Analytics in Software Engineering, pp 80–89
* Wang et al (2018) Wang N, Choi J, Brand D, Chen CY, Gopalakrishnan K (2018) Training deep neural networks with 8-bit floating point numbers. In: Advances in neural information processing systems, pp 7675–7684
* Wang et al (2016) Wang S, Liu T, Tan L (2016) Automatically learning semantic features for defect prediction. In: 2016 IEEE/ACM 38th International Conference on Software Engineering (ICSE), IEEE, pp 297–308
* Watson (2020) Watson CA (2020) Deep learning in software engineering. PhD thesis, College of William & Mary
* Weiss et al (2007) Weiss C, Premraj R, Zimmermann T, Zeller A (2007) How long will it take to fix this bug? In: Fourth International Workshop on Mining Software Repositories (MSR’07: ICSE Workshops 2007), IEEE, pp 1–1
* Whigham et al (2015) Whigham PA, Owen CA, Macdonell SG (2015) A baseline model for software effort estimation. ACM Transactions on Software Engineering and Methodology (TOSEM) 24(3):1–11
* Wohlin (2014) Wohlin C (2014) Guidelines for snowballing in systematic literature studies and a replication in software engineering. In: Proceedings of the 18th international conference on evaluation and assessment in software engineering, pp 1–10
* Yamaguchi et al (2014) Yamaguchi F, Golde N, Arp D, Rieck K (2014) Modeling and discovering vulnerabilities with code property graphs. In: 2014 IEEE Symposium on Security and Privacy, IEEE, pp 590–604
* Yedida and Menzies (2021) Yedida R, Menzies T (2021) On the value of oversampling for deep learning in software defect prediction. IEEE Transactions on Software Engineering
* Zhou et al (2019) Zhou Y, Liu S, Siow J, Du X, Liu Y (2019) Devign: Effective vulnerability identification by learning comprehensive program semantics via graph neural networks. In: Advances in Neural Information Processing Systems, pp 10197–10207
# TextGNN: Improving Text Encoder via Graph Neural Network in Sponsored Search
Jason Yue Zhu (Stanford University, Stanford, CA, USA), Yanling Cui
(Microsoft, Beijing, China), Yuming Liu (Microsoft, Beijing, China), Hao Sun
(Microsoft, Beijing, China), Xue Li (Microsoft, Sunnyvale, CA, USA), Markus
Pelger (Stanford University, Stanford, CA, USA), Tianqi Yang (Microsoft,
Beijing, China), Liangjie Zhang (Microsoft, Beijing, China), Ruofei Zhang
(Microsoft, Sunnyvale, CA, USA), and Huasha Zhao (Microsoft, Sunnyvale, CA,
USA)
(2021)
###### Abstract.
Text encoders based on C-DSSM or transformers have demonstrated strong
performance in many Natural Language Processing (NLP) tasks. Low-latency
variants of these models have also been developed in recent years in order to
apply them in the field of sponsored search, which has strict computational
constraints. However, these models are not a panacea for all Natural Language
Understanding (NLU) challenges, as the purely semantic information in the data
is not sufficient to fully identify user intents. We propose the TextGNN
model, which naturally extends strong twin-tower structured encoders with
complementary graph information from user historical behaviors; this graph
serves as a natural guide that helps us better understand user intents and
hence generate better language representations. The model inherits all the
benefits of twin-tower models such as C-DSSM and TwinBERT, so it can still be
used in low-latency environments, while achieving a significant performance
gain over the strong encoder-only baseline models in both offline evaluations
and the online production system. In offline experiments, the model achieves a
0.14% overall increase in ROC-AUC with a 1% increase in accuracy for long-tail
low-frequency Ads, and in online A/B testing, the model shows a 2.03% increase
in Revenue Per Mille with a 2.32% decrease in Ad defect rate.
Ad Relevance; Sponsored Search; Text Encoder; Graph Neural Network;
Transformers; C-DSSM; BERT; Knowledge Distillation
Journal year: 2021. Copyright: iw3c2w3. Conference: Proceedings of the Web
Conference 2021 (WWW '21), April 19–23, 2021, Ljubljana, Slovenia. DOI:
10.1145/3442381.3449842. ISBN: 978-1-4503-8312-7/21/04. CCS concepts:
Information systems (Recommender systems; Language models; Similarity
measures; Learning to rank; Query representation).
## 1\. Introduction
Sponsored search refers to the business model of search engine platforms in
which third-party sponsored information is shown to targeted users along with
other organic search results. This allows advertisers, such as manufacturers
or retailers, to increase the exposure of their products to more targeted
potential buyers, and at the same time gives users quicker access to solutions
for their needs. Hence it has become an indispensable part of the modern web
experience. While many existing models are very powerful for various tasks in
sponsored search, three main challenges remain for future developments in this
field: 1) while the existing models perform strongly when matching common
queries with popular products, they usually still find long-tail low-frequency
queries/Ads more challenging; the worse embedding representations of rare
items are potentially caused by under-training due to the naturally scarce
data on these low-frequency examples; 2) while many modern models improve on
implicit feature engineering over the existing input data, finding new and
easily accessible data with complementary information is still a promising,
but rarely explored, route to greatly improving model performance; 3) search
engine systems generally have very strict constraints on computational
resources and latency. Many recently developed large models are simply
infeasible to deploy on highly constrained online search engine systems.
Representation learning for queries, products, or users has been a key
research field with many breakthroughs over recent years and has been adopted
in many production sponsored search systems (Huang et al., 2020; Pal et al.,
2020; Grbovic and Cheng, 2018; Bai et al., 2018). The Convolutional Deep
Structured Semantic Model (C-DSSM) (Shen et al., 2014) is among the first
powerful solutions for encoding text data into low-dimensional representation
vectors, which can be applied to downstream tasks with efficient inference
performance, but its NLU performance has been surpassed by many recently
developed NLP models. Pre-trained language models that emerged in recent
years, such as transformers (Vaswani et al., 2017) and BERT (Devlin et al.,
2019), have demonstrated far superior performance in many NLU tasks, even
reaching human-level performance on some of them. These models are better at
capturing contextual information in sentences and generate better language
representation embeddings, leading to much stronger performance in downstream
tasks. However, due to their complexity, these models are unfortunately not
feasible to run in low-latency systems without modifications. Recently, the
transformer model has been modified and trained with special techniques such
as knowledge distillation (Hinton et al., 2015), which allows a much smaller
model with a similar transformer structure, called TwinBERT (Lu et al., 2020),
to run with reasonable computational cost in production systems while having
little or no performance loss compared to the full-size BERT models. This
breakthrough significantly improves the user Information Retrieval experience
when using search engines. However, while both C-DSSM and TwinBERT are
specifically designed to be applied to low-latency systems with strong
performance, they are not a panacea for all the problems in sponsored search.
Their modeling ability is sometimes hindered by the limited information in the
original input texts, and hence they still struggle to understand many
challenging low-frequency inputs.
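For reference, a generic sketch of the Hinton et al. (2015) distillation loss that such training builds on; this illustrates the general technique, not TwinBERT's exact training recipe, and the temperature and mixing weight below are typical defaults rather than values from the paper.

```python
# Knowledge distillation: student matches true labels plus the teacher's
# temperature-softened output distribution.
import tensorflow as tf

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Hard loss: the student should predict the true labels.
    hard = tf.keras.losses.sparse_categorical_crossentropy(
        labels, student_logits, from_logits=True)
    # Soft loss: match the teacher's softened distribution; the T^2 factor
    # rescales the gradients as in the original paper.
    soft = tf.keras.losses.kl_divergence(
        tf.nn.softmax(teacher_logits / T),
        tf.nn.softmax(student_logits / T)) * (T * T)
    return alpha * hard + (1.0 - alpha) * soft
```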
Given the strong performance of the baseline models in NLU tasks, it would be
extremely difficult to further improve them solely through structural changes
to the model without introducing new complementary information. Newly
developed NLP models achieve relatively small improvements with exponential
growth in model complexity, and hence reach the point of diminishing returns,
making it harder to satisfy all the latency constraints. A genuinely improved
model in this field should therefore be able to take in additional information
beyond the traditional semantic text inputs, demonstrate stronger performance
on the harder low-frequency inputs, and at the same time not significantly
increase the inference time.
A natural and easily accessible data source that provides information beyond semantic text in a search engine system is users' implicit feedback, recorded in logs in the form of clicks on the links shown to them. A click signals a connection between a query and an Ad, so a large behavior graph based on clicks can be built easily. In recent years, various Graph Neural Network (GNN) structures (Zhou et al., 2019) have been proposed to handle abundant graph-structured data and have demonstrated strong performance and breakthroughs in social network, recommendation, and natural science tasks. Motivated by recent developments in the GNN community, we aim to identify ways to incorporate complementary and abundant graph-structured data into the text model in a natural way. Most existing GNN models focus only on the aggregation of pre-existing neighbor features that are fixed throughout training. Instead of training the language model and the graph model separately, we want the two models to work in conjunction with each other to generate better query/Ad representations that help understand users' needs more deeply.
The main contributions of this work are threefold:
1. (1)
We propose TextGNN (the BERT-version implementation of the model may be found at https://github.com/microsoft/TextGNN), a general end-to-end framework for NLU that combines strong language-modeling text encoders with graph information processed by Graph Neural Networks to achieve stronger performance than each of its individual components.
2. (2)
We find a systematic way to leverage graph information that greatly improves robustness and improves performance by 1% on hard examples, which are very challenging when only semantic information is used.
3. (3)
We train TextGNN with knowledge distillation to obtain a compact model. The model has been adopted in a production system with strict computational and latency constraints, achieving a 2.03% increase in Revenue Per Mille and a 2.32% decrease in Ad defect rate in online A/B testing.
The rest of this paper is organized as follows. Section 2 gives a brief introduction to sponsored search and the Ad Relevance task. Section 3 reviews
related literature. Section 4 discusses the details of the model, including
the architecture, the construction of graph-type data, and the training
methodology. Section 5 reports the experimental results of TextGNN in
comparison to the baseline model under both offline and online settings with a
few illustrative case study examples. Section 6 concludes the paper and
briefly discusses the future directions of this work.
## 2\. Sponsored Search and Ad Relevance
The TextGNN model is developed to improve the existing Ad Relevance model at a major sponsored search platform. In a typical sponsored search ecosystem, there are three parties: the user, the advertiser, and the search engine platform. When the user types a query into the search engine, the goal of the platform is to understand the user's underlying intent behind the semantic meaning of the query, and then to best match it with a short list of Ads submitted by advertisers, shown alongside other organic search results.
In the back end, when a query is received by the platform, the system first conducts a quick but crude recall step using highly efficient Information Retrieval algorithms (such as TF-IDF (Jones, 1972) or BM25 (Robertson et al., 1995)) to retrieve an initial list of matched candidates. This relatively long list is then passed to downstream components for finer filtering and final ranking using much more sophisticated but slightly less efficient models to serve the users. In both of these later steps, Deep Learning based Ad Relevance models play a key role in delivering high-quality content to the user and matching advertisers' products with potential customers. For the Ad Relevance task, our model usually relies only on the query from a user and the keywords provided by the advertiser. A query is a short text that a user types into the search engine when looking for relevant information or products, and the model needs to identify the user's intent based on this short query. A keyword is a short text submitted by an advertiser, chosen to express the intent of the Ad toward potential customers. The keyword is in general not visible to end users, but it is crucial for the search engine platform to match user intents.
When an Ad is displayed to a user, we call this an impression. The platform earns nothing from an impression itself; it earns revenue only when the displayed Ad is clicked by the user. Because of this mechanism, the search engine platform has an incentive to display the Ads that best match user intents, which directly affects revenue. Lastly, given the scale of search engine traffic, Ad Relevance models are an indispensable component of the system, and any improvement in model performance can have a huge impact on the business side of the search engine.
## 3\. Related Work
Text Encoders, including C-DSSM and pre-trained Transformer-based language models (such as BERT), have achieved impressive state-of-the-art performance in many NLP tasks thanks to their effective language or contextual word representations, and have hence become one of the most important and most active research areas.
C-DSSM is developed specifically for extracting semantic information into a low-dimensional representation vector by combining convolutional layers, which extract local contextual semantic information in the string, with max-pooling layers, which help identify globally important features. It is still a workhorse model used extensively in the stacks of many production search engine systems.
The large and expensive BERT model has recently become very popular. The model is usually learned in two steps: first it is trained on an extremely large corpus with unsupervised tasks such as masked language modeling (MLM) and next sentence prediction (NSP) to learn general language patterns, and then in a second step it is fine-tuned on task-specific labelled data for downstream tasks. Despite the strong performance of BERT models on language representations, they are in general too expensive to deploy in real-time search engine systems with strict constraints on computation cost and latency.
Figure 1. Architecture of the twin tower TwinBERT model
The distilled TwinBERT is one successful model that adapts the Transformer family to sponsored search applications and achieves comparable performance at a reasonable inference-time cost compared with heavily stacked transformer layers. The TwinBERT model, illustrated in Figure 1, benefits from two important techniques. 1) Given two input texts, a query and a keyword, a vanilla transformer encoder would concatenate them into one input sequence, while TwinBERT uses a twin-tower structure to decouple the two-sentence input. Such a twin-tower structure was first proposed in the DSSM model (Huang et al., 2013) for web document ranking. Since the keywords are already known to the platform, the encoded outputs of the keyword-side tower can be pre-generated offline and fetched efficiently at inference time. Without the concatenated keyword strings, the input to the query-side tower can also be capped at a low maximum length, greatly reducing the inference-time complexity compared to a large BERT model. 2) Knowledge distillation is used to transfer the knowledge learned by a teacher model to a much smaller student model. Our teacher model can be seen as a stronger version of the BM25 signal in the earlier weak supervision method (Dehghani et al., 2017). While the teacher model has strong performance, it is usually too costly and infeasible to use directly in a production system. Knowledge distillation lets us train a smaller model that is much faster at inference with little or no significant loss in performance (Li et al., 2019b) (Sanh et al., 2019). With a TwinBERT model of only 3 encoder layers and all these optimizations, it is possible to deploy in real-world production systems while satisfying the strict computational-resource and latency requirements.
However, as a pure language model, TwinBERT can only rely on the semantic meanings of the query-keyword pairs to infer their relationship, and when uncommon words are encountered it remains very challenging to correctly infer relevance for our main applications based on such limited input information.
Graph Neural Networks have also become a hot research area in recent years due to their efficacy in dealing with complex graph data. Graph Convolutional Networks (GCN) (Kipf and Welling, 2017), GraphSAGE (Hamilton et al., 2017), and Graph Attention Networks (GAT) (Velickovic et al., 2018) are among the most popular GNN models; they effectively propagate neighbor information through connected edges and hence generate convincing and highly interpretable results on many graph-specific tasks such as node/edge/graph property prediction. Recently there have also been attempts to bring GNNs to the sponsored search area, for example for click-through rate (CTR, the ratio of the number of clicks to the number of impressions) prediction (Li et al., 2019a) (Yang et al., 2019), but so far these attempts have focused only on using GNNs to generalize interactions among existing fixed features. There is no convincing argument for why these features naturally form a graph, and the GNN itself has no impact on the generation of the features. Alternatively, people have proposed to utilize graph information implicitly through label propagation to unlabeled examples (Kim et al., 2009), but explicitly using the neighbor features in the model structure is more efficient at aggregating complementary information, as demonstrated in our experiments.
To the best of our knowledge, we are the first to extend various text encoders with a graph in a natural way and to co-train text encoder and GNN parameters simultaneously, achieving stronger performance on our downstream tasks.
## 4\. TextGNN
Figure 2. TextGNN Architecture: twin tower structure for decoupled generation
of query/keyword embeddings
In this section we will discuss the architecture of the proposed TextGNN model
in Section 4.1. Then we describe the graph we used to naturally augment the
semantic information of the input query-keyword sentence pairs in Section 4.2.
Lastly in Section 4.3 we briefly recap knowledge distillation and its
application in our model.
### 4.1. Model Architecture
The architecture of the TextGNN model is discussed in detail in this subsection and illustrated in Figure 2. The proposed model is a natural extension of the high-performance C-DSSM/TwinBERT baseline with additional information from graph-structured data. In the sponsored search scenario, we have tens of millions of candidate Ads, so it is infeasible to use a complex text encoder to compute the similarity between a search query and each Ad one by one. The twin-tower structure is a good choice here: we can compute the Ad representation vectors in advance, and when a query arrives, we compute only the query's representation vector online. Note that the complex text encoder needs to run only once per incoming search query, whereas a vanilla BERT would require a run for each unique query-keyword pair. For transformer encoders, the computation cost of self-attention is also quadratic in the length of the input string, so encoding the query and keyword strings separately is much cheaper than encoding the concatenated string. With these benefits in mind, our model follows the twin-tower structure of the baseline models with small encoder layers, so that all the benefits of the twin-tower design are inherited and the model can be deployed in the production system. Taking the query-side tower as an example, the query and its three neighbors (defined later in the graph construction section) each pass through a general Text Encoder block to generate a vector representation of the short sentence. The information from the four representation vectors is then aggregated by a GNN Aggregator into a single output vector. This output vector is then combined with the direct output of the text encoder for the query sentence through either concatenation or addition, similar to the idea of a Residual Connection Network (He et al., 2015). The combined output vector is the final output of the query-side tower and can then be interacted with the keyword-side output (generated by the very similarly structured keyword-side tower) in the crossing layer to obtain the final output, as in a C-DSSM/TwinBERT model.
#### 4.1.1. Text Encoder Block
The Text Encoder block is very similar to a single tower in the C-DSSM/TwinBERT model. For example, for a transformer-type text encoder, a sentence is first tokenized using the BERT WordPiece tokenizer. Trainable token embedding vectors are combined with BERT-style positional embeddings through addition before passing through three BERT encoder layers. The only difference from a BERT-style model is that the segment embeddings in BERT are no longer needed, as all inputs come from the same sentence. With a structure so similar to a BERT-type one, we can conveniently load the weights of the first three layers of a pre-trained large BERT model to get a good starting point that leads to much better performance, faster convergence, and significantly less required training data compared to random initialization. After the text encoder layers, we obtain a sequence of vectors corresponding to each token in the sentence. These vectors are then combined using a weighted-average pooling layer similar to that in the TwinBERT model, which has demonstrated better performance in generating a single vector representation for a sentence.
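As an illustration of the pooling step, a learned softmax-weighted average over the token vectors could look like the sketch below; the exact weighting scheme used in TwinBERT may differ, so treat the `score` layer as an assumption.

```python
import torch
import torch.nn as nn

class WeightedAveragePooling(nn.Module):
    """Learned weighted average over per-token vectors, masking padding tokens."""

    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)   # learnable scalar weight per token vector

    def forward(self, token_vecs, mask):
        # token_vecs: (B, L, D); mask: (B, L), 1 for real tokens, 0 for padding
        w = self.score(token_vecs).squeeze(-1)          # (B, L) raw weights
        w = w.masked_fill(mask == 0, float("-inf"))     # ignore padding positions
        w = torch.softmax(w, dim=-1).unsqueeze(-1)      # (B, L, 1) normalized
        return (w * token_vecs).sum(dim=1)              # (B, D) sentence vector
```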
The four Text Encoder blocks within a single tower are set to share the same parameters. The model is flexible enough to allow the two towers to have entirely different Text Encoder blocks, but since the TwinBERT paper shows that shared encoder blocks generally lead to slightly better performance, we use that approach.
#### 4.1.2. GNN Aggregator
In one tower of our TextGNN, the four Text Encoder blocks generate four vector representations: one for the center node (query/keyword) and three for its one-hop neighbors. To aggregate the information from the four vectors into one, we adopt a GNN aggregation layer, taking the query/keyword as the central node and performing one-hop aggregation over the three neighbor nodes. The aggregation itself is very general and can use most existing GNN aggregators, such as GCN, GraphSAGE, and GAT. In our experiments we found that GAT, which assigns learnable weights to the neighbors to generate a weighted average, demonstrates the strongest performance, so we use it throughout.
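For concreteness, a single-head GAT-style aggregation over the three neighbor vectors might look like the following sketch, following the general form of Velickovic et al. (2018); the production aggregator may use multiple heads or differ in details.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GATAggregator(nn.Module):
    """Single-head GAT-style one-hop aggregation over a fixed set of 3 neighbors."""

    def __init__(self, dim: int):
        super().__init__()
        self.W = nn.Linear(dim, dim, bias=False)     # shared linear transform
        self.a = nn.Linear(2 * dim, 1, bias=False)   # attention scoring vector

    def forward(self, center, neighbors):
        # center: (B, D); neighbors: (B, 3, D)
        hc = self.W(center).unsqueeze(1)             # (B, 1, D)
        hn = self.W(neighbors)                       # (B, 3, D)
        e = F.leaky_relu(self.a(torch.cat([hc.expand_as(hn), hn], dim=-1)))
        alpha = torch.softmax(e, dim=1)              # learnable neighbor weights
        return (alpha * hn).sum(dim=1)               # (B, D) weighted average
```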
#### 4.1.3. Skip Layer
The output vector of the query/keyword encoder is connected to the output of the GNN Aggregator to form the final output of the query-/keyword-side tower. This layer can be thought of as a skip layer (He et al., 2015), so that the additional GNN outputs serve as complementary information to the text semantic representation vector. In this sense, the encoder-only models can be considered a special case of the TextGNN model in which the GNN output is completely skipped. The two vectors are combined using either concatenation or addition. If they have different dimensions, an additional dense layer is applied after the GNN Aggregator to up/downscale the GNN output dimension to match the Text Encoder output.
#### 4.1.4. Crossing Layer
Given the final outputs of the query- and keyword-side towers, the two vectors are first combined through concatenation, and the similarity score is then computed using the residual network proposed in the TwinBERT model. Formally, the residual function is defined as:
(1) $\textbf{y}=\mathcal{F}(\textbf{x},W,b)+\textbf{x},$
where x is the concatenation of the query-side vector q and the keyword-side vector k, and $\mathcal{F}$ is the mapping function from x to the residual with parameters $W$ and $b$. A logistic regression layer is then applied to the output vector y to predict the binary relevance label.
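A minimal sketch of this crossing layer, assuming a two-layer MLP as $\mathcal{F}$ and a sigmoid output for the logistic regression head (layer sizes are illustrative):

```python
import torch
import torch.nn as nn

class CrossingLayer(nn.Module):
    """Concatenate tower outputs, apply the residual of Eq. (1), then classify."""

    def __init__(self, dim: int):
        super().__init__()
        # F(x, W, b): a small MLP producing the residual
        self.res = nn.Sequential(
            nn.Linear(2 * dim, 2 * dim), nn.ReLU(), nn.Linear(2 * dim, 2 * dim)
        )
        self.head = nn.Linear(2 * dim, 1)   # logistic-regression output layer

    def forward(self, q, k):
        x = torch.cat([q, k], dim=-1)       # x = [q; k]
        y = self.res(x) + x                 # y = F(x, W, b) + x
        return torch.sigmoid(self.head(y))  # binary relevance probability
```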
### 4.2. Graph Construction
On top of the powerful model structure, it is also crucial to have access to high-quality graph-structured data. Such data should satisfy the following properties:
1. (1)
Relevant: since the graph neural networks propagate information along the
edges, we are looking for neighbors that are highly relevant to the intent of
the center node (query/keyword).
2. (2)
Complementary: we expect the GNN to excel most in situations where the language-modeling part struggles to infer the intent from the semantic meaning of the sentence alone, but where additional neighbors can provide extremely valuable complementary information that helps the model better understand the inputs. This happens most frequently on rare, low-frequency items, where language models usually struggle on the long-tail inputs.
3. (3)
Accessible: in a sponsored search system, there are large numbers of user input queries and candidate keywords for which we must find neighbors in a graph. As a large graph is preferred, the neighbors need to be found with little effort, and constructing the graph data should be feasible without heavy manual work, strong assumptions, or complicated structures.
Given these requirements, we find that the user behavior graph generated from historical user click logs is a great candidate for our purpose. It is based on the insight that if a user inputs a query $a$ and then clicks the Ad $b$, then $b$ must fit the user's intent from $a$ well enough to trigger the click. In the next two subsections, we discuss this behavior graph and an extension that addresses its sparse coverage.
Figure 3. Click Graph Construction: use ANN proxy neighbor if no native
neighbor available
#### 4.2.1. User Click Graph
The eligible neighbors of a query are the keywords of Ads that have been shown to be relevant to the query and have received explicit positive feedback via a click. A general assumption for ranking the candidates is that the empirically observed CTR is highly correlated with the relevance between the query and the keyword. Based on this assumption, as illustrated in Figure 3(a), we take all clicked Ads that have been shown to users at least 50 times in the past year (to partially address the issue of noisy CTR estimates on Ads with small numbers of impressions) and keep the top three by CTR as the neighbors. Table 1 shows an illustrative example, where the search query is "usps com careers login". Its top three neighbors, which are the keywords of the corresponding Ads, are listed with their historical total numbers of impressions and clicks. Although the first keyword "united state postal service jobs" was shown only 59 times, significantly fewer than the third keyword "postal service hiring" with 1,721 impressions, it has a much higher CTR of 30.5% compared to 22.3%, indicating that users who searched for this query were more likely to find the first keyword useful, which is a strong indication of higher relevance.
Table 1. Example of neighbors of a query from the Click Graph
Query | Clicked Neighbor Keyword | # Impressions | # Clicks
---|---|---|---
usps com careers login | united state postal service jobs | 59 | 18
| usps com employment | 344 | 92
| postal service hiring | 1721 | 384
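The selection rule described above can be summarized in a short sketch. The thresholds match the text; the log schema (per-pair impression/click aggregates) and field names are hypothetical.

```python
from collections import defaultdict

def build_click_neighbors(click_log, min_impressions=50, top_k=3):
    """click_log: iterable of (query, keyword, impressions, clicks) aggregates."""
    candidates = defaultdict(list)
    for query, keyword, impressions, clicks in click_log:
        # keep only clicked Ads with enough impressions for a stable CTR estimate
        if clicks > 0 and impressions >= min_impressions:
            candidates[query].append((clicks / impressions, keyword))
    # rank each query's clicked keywords by empirical CTR, keep the top three
    return {
        q: [kw for _, kw in sorted(pairs, reverse=True)[:top_k]]
        for q, pairs in candidates.items()
    }
```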
#### 4.2.2. User Click Graph with Semantic ANN
For rare, low-frequency queries/keywords, we observe by construction substantially less feedback in the click logs. Furthermore, to avoid noisy CTR estimates, we exclude neighbors shown fewer than 50 times in the past year; this unfortunately eliminates a number of neighbors and makes the situation even worse for long-tail inputs. To address this issue, we propose a neighbor completion technique based on Approximate Nearest Neighbor (ANN) search (Indyk and Motwani, 1998) using Neighborhood Graph Search (NGS) (Wang and Li, 2012). As illustrated in Figure 3(b), we first infer vector representations with a powerful C-DSSM (used extensively in a major sponsored search system) for all nodes in the user click graph. Next, for a query for which we cannot identify any eligible clicked keywords, we infer its vector representation with the same C-DSSM. We then use the ANN search tool to find another query that is semantically close to the original query and does have click neighbors, and use its clicked keywords as approximate neighbors for the original query. This has the same spirit as the common technique of query rewriting in search engine systems, but does so in a more implicit way. For keywords without any clicked queries, we find neighbors in a similar way.
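A minimal sketch of this completion step, with brute-force cosine similarity standing in for the NGS ANN index (embeddings and neighbor lists are assumed to be precomputed):

```python
import numpy as np

def complete_neighbors(query_vec, known_vecs, known_neighbors):
    """Borrow the click neighbors of the nearest embedded query.

    query_vec: (D,) C-DSSM embedding of a query with no click neighbors
    known_vecs: (N, D) embeddings of queries that do have click neighbors
    known_neighbors: list of N neighbor lists, aligned with known_vecs rows
    """
    q = query_vec / np.linalg.norm(query_vec)
    m = known_vecs / np.linalg.norm(known_vecs, axis=1, keepdims=True)
    nearest = int(np.argmax(m @ q))      # a real ANN index (e.g., NGS) replaces this
    return known_neighbors[nearest]      # proxy neighbors for the original query
```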
In Table 2 we show another example in which we could not find any eligible neighbors for the query "video games computers free", but its ANN query "no internet games" has user behavior feedback, and the three approximate neighbors are clearly relevant to the original query.
Table 2. Example of a query with Semantic ANN: proxy neighbors are quite relevant to the original query
Query | ANN Query | Clicked Neighbor Keyword | # Impressions | # Clicks
---|---|---|---|---
video games computers free | no internet games | free games | 58 | 1
| | online games | 260 | 4
| | online computer games | 67 | 1
For both types of graphs, we take at most the top three neighbors. The number of neighbors can be set as a hyper-parameter of the model framework. We choose three for the following reasons:
1. (1)
More than one neighbor provides additional complementary information and also adds robustness.
2. (2)
Each additional neighbor means an extra run of the text encoder. Even though the encoder blocks can run in parallel, a large number of neighbors can still be computationally challenging for the system.
3. (3)
We do not want to include less relevant neighbors that introduce additional noisy information and "pollute" the encoded representation.
Choosing three neighbors therefore balances all of these requirements and concerns.
### 4.3. Knowledge Distillation
In order to obtain a high-performance but compact model that satisfies the computation and latency constraints, we use the teacher-student training framework via knowledge distillation. We use an expensive but high-performance RoBERTa model (Liu et al., 2019) as the teacher model to label a very large query-keyword pair dataset; the label scores lie between 0 and 1. Our model is relatively data-hungry, and without this teacher model to automatically label the huge dataset, our existing human-labelled data would not be sufficient to train a strong model approaching teacher-level performance. Since the model target, the RoBERTa score, is a continuous value, it provides more fine-grained information than traditional binary labels. For example, a score of 0.99 indicates stronger relevance than a score of 0.51, although both would be categorized as relevant pairs. We use the mean squared error between the model output and the RoBERTa teacher score as the loss.
With such a strong teacher model, we train the student TwinBERT/TextGNN models with small encoder blocks (only 3 transformer layers). The student models are therefore much more feasible at inference time yet achieve close to teacher-level performance with only very minor loss. We can even further fine-tune the student model on a smaller human-labelled dataset with binary labels and achieve performance surpassing the much larger teacher model. Hence, the performance of our model is not capped by the teacher model.
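A minimal sketch of one distillation training step under these assumptions (the student's input interface and optimizer setup are illustrative):

```python
import torch
import torch.nn.functional as F

def distillation_step(student, optimizer, batch):
    """One training step: regress the student score onto the teacher score."""
    query_tokens, keyword_tokens, teacher_score = batch   # teacher_score in [0, 1]
    pred = student(query_tokens, keyword_tokens).squeeze(-1)
    loss = F.mse_loss(pred, teacher_score)                # match the RoBERTa teacher
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```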
## 5\. Experiments
In this section we present experimental results of TextGNN on various tasks, comparing against strong baseline models to demonstrate the superiority of the proposed model and the efficacy of introducing graph information. In Section 5.1 we discuss some key statistics of the complementary graph data and related details of our training methods. Section 5.2 compares performance against the baseline encoder-only models. Section 5.3 presents a more detailed sub-group analysis. Section 5.4 presents case studies of typical false-positive and false-negative examples for TwinBERT that are correctly classified by the new TextGNN model, providing intuitive insight into why the additional graph information is valuable. Lastly, in Section 5.5 we present an initial effort to apply our model to the online production system and show significant improvement over the baseline in online A/B testing.
### 5.1. Data and Training Details
For our knowledge distillation training, 397 million query-keyword pairs are scored by the teacher RoBERTa model. The student models are initialized using the parameters of the first three transformer layers of the 12-layer uncased BERT-base checkpoint (Wolf et al., 2019). The models are evaluated on a small evaluation dataset consisting of 243 thousand human-labelled samples. The query-keyword pairs were labelled with five different levels of relevance: excellent, perfect, good, fair, and bad. In the evaluation stage, the first four levels (excellent, perfect, good, and fair) are mapped to positive samples (label 1), while the bad category is kept as the negative category (label 0). The model ROC-AUC is our main evaluation metric.
We construct the behavior click graph from the historical search engine click logs from July 2019 to June 2020. In Table 3 we present statistics on the neighbor coverage, comparing the two graph construction methods. Key observations:
1. (1)
Without the added ANN neighbors, almost 2/3 of the queries lack neighbors in the user click graph. The situation is significantly better for keywords, as the majority of Ads have been shown to and clicked by users.
2. (2)
With the ANN search, we increase the neighbor coverage to essentially 100%.
3. (3)
Among all nodes, the majority have at least three eligible neighbors. For examples with fewer than three neighbors, dummy padding is added.
Table 3. Coverage Summary of Two Graph Construction Methods: almost full coverage after adopting ANN Neighbors
 | Click Only (Q) | Click Only (K) | ANN (Q) | ANN (K)
---|---|---|---|---
1 Neighbor | 4% | 7% | 5% | 7%
2 Neighbors | 3% | 4% | 3% | 4%
3 Neighbors | 30% | 76% | 92% | 88%
Coverage | 37% | 87% | 100% | 99%
### 5.2. Model Performance Results
In this experiment we train the baseline TwinBERT model and the new TextGNN model with the same common hyper-parameters for a fair comparison. The same training dataset files were used by both models, but the additional neighbor information is not read by the baseline TwinBERT model, as it has no mechanism to process it.
Table 4 presents the ROC-AUC values of the baseline model and of TextGNN based on the two types of graphs. The addition of the GNN significantly improves the performance of the baseline model, and a performance increase of this magnitude leads to a huge difference in revenue for large-scale systems.
Table 4. ROC-AUC Comparison: TextGNN with ANN Neighbor Graph significantly outperforms the baseline TwinBERT Model
Model | AUC
---|---
TwinBERT | 0.8459
TextGNN | 0.8461
TextGNN with ANN Neighbor | 0.8471
### 5.3. Sub-group Analysis
In addition to the stronger overall performance of the TextGNN models over the baseline, we conduct a more detailed sub-group analysis of the inference results to confirm that the TextGNN models indeed improve on the tail examples, as expected.
We split the validation data into three bins by Ad frequency in the dataset (as a proxy for the population frequency of impressions). 43% of the samples are Ads that appear only once (among the 243k samples), which are the rare examples, and 12% of the samples appear twice. Even though the tail Ads are individually rarely recalled and shown to users, collectively they constitute the majority of the total traffic, so improvements on these long-tail examples can lead to significant benefits.
The results in Figure 4 show that the TextGNN model based on the vanilla click graph achieves an extremely large improvement on the rarest Ads, but its performance degrades on common ones. Our hypothesis is that for the more common examples the semantic information is already good, and the limited additional information from a sparse graph is not enough to offset the potential under-fitting of a more complex model. Once we adopt ANN to generate a more complete graph, the TextGNN model demonstrates stronger performance than the baseline across the board.
Lastly, we note that the non-ANN version is still much stronger than the ANN version in the bin of the rarest Ads, potentially because the ANN proxy neighbors are on average of lower quality than the native neighbors and hence introduce noise into the model. This analysis also reveals a future direction for improving the model: we could potentially use the sample frequency as a simple indicator to switch between candidate models based on their strength in different sub-groups.
Figure 4. Performance on Different Subgroups of Data by Ads Frequency: TextGNN
with vanilla click neighbor achieves extremely large gain in low frequency
Ads, while the ANN version outperforms the baseline across the board
### 5.4. Case Studies
Table 5. Case study examples: neighbors provide crucial complementary information
False Positive Examples
Query | Query Neighbors | Keyword | Keyword Neighbors
---|---|---|---
achilles heel | what is an achilles heel; what is achilles heel; causes heel spurs | plantar fasciitis shoes | shoes plantar fasciitis heel pain; work shoes plantar fasciitis; tennis shoes good plantar fasciitis
animal repellent products | animal repeller; keep squirrel out attic; animal repellent | animal odor | best cleaning remove & product home; air fresheners home; best air fresheners
False Negative Examples
Query | Query Neighbors | Keyword | Keyword Neighbors
---|---|---|---
sharding | mongodb cluster; database sharding; N/A | sql server | sql server download windows 10; sql server hosting; sequel server database
use imovie | imovies; imovie 11 tutorials; imovie video editor | adobe premiere | adobe premiere pro mac; adobe premier mac; use imovie
We expect the introduction of graph data to improve model performance especially on tail inputs, which are often "hard" samples for the baseline models. In Table 5, we present some such "hard" cases to demonstrate the value that graph data can bring.
#### 5.4.1. False-positive Examples of TwinBERT
The first example shows a user searching for the Greek mythology concept "achilles heel", which TwinBERT incorrectly judged relevant to plantar fasciitis shoes. Semantically, heel is very close to shoes, and the Achilles region is strongly associated with tendon pain. However, the neighbors strongly indicate that people who search for this query are actually looking for the story from Greek mythology, not the foot injury.
The second example shows that TwinBERT judges "animal repellent products" highly relevant to animal cleaning products. Semantically, repellent is indeed close in meaning to "remove", but the two products serve completely different purposes. When averaging over the neighbors, it is very clear that this is a negative example.
#### 5.4.2. False-negative Examples of TwinBERT
The query "sharding" refers to a very specific concept in database systems, describing how large datasets are split and stored. Without domain knowledge, it is very hard to understand such an uncommon word. Furthermore, the word is tokenized to [CLS], sha, ##rdi, ##ng, [SEP] by the BERT WordPiece tokenizer, making it essentially impossible for TwinBERT to identify the relevance. However, from historical user behavior we clearly see both sides sharing the very important common word "database", allowing the TextGNN model to leverage user behavior to identify domain-specific connections and find the hidden relevance.
The second false negative is an example of two video-editing applications on the Mac platform. Without domain knowledge, it is impossible to conclude from the semantic meaning alone that adobe premier mac is a video-editing application. However, since the query string is identified as a neighbor of the keyword, our graph model can use this information to find the correct connection.
### 5.5. Online A/B Test
A slightly simplified version of our TextGNN model has already been successfully deployed on a major sponsored search platform and has demonstrated significant performance gains. We evaluated the models on the sponsored product advertising system, where user search queries are matched with products with rich information provided by advertisers. In this initial effort, we chose C-DSSM as the text encoder for its much faster inference time on the large-scale Ads corpus, and we use graph aggregators only on the product-side tower. Note again that the product-side representations can be generated offline in advance, so at the online serving stage the latency is identical to that of a traditional C-DSSM model. We use the TextGNN model outputs as features fed into a downstream online product advertising system and evaluated the efficacy of this simple model in both offline and online settings.
For evaluation, we randomly sampled examples from online logs, had the data labelled manually by human experts, and observed an average 1.3% PR-AUC lift (we only show normalized relative numbers due to business confidentiality) across different validation sets when comparing the simplified TextGNN model with the baseline C-DSSM model.
The online A/B testing results of the TextGNN model are summarized in Table 6. We applied the model to both the recall and relevance stages of Ad serving in the system and observe significant gains in several normalized key online metrics that are crucial for our sponsored search system. The two most important metrics are:
1. (1)
Revenue Per Mille (RPM): the revenue gained per thousand search requests, one of the most important online metrics for sponsored search.
2. (2)
Ad Defect Rate: the ratio of irrelevant Ad impressions to the total number of Ad impressions. In online A/B tests, this ratio is approximated by sampling Ad impressions and submitting them for human evaluation. It is highly correlated with user satisfaction and hence is considered a very crucial metric.
As shown in the table, the TextGNN model yields very impressive results: it greatly boosts RPM while reducing the Ad Defect Rate, a strong sign that the model can improve revenue and user experience simultaneously. It is worth pointing out that the current production model already contains many advanced sub-models and features, so an improvement of this magnitude in the online KPIs is considered a significant gain for a system at this scale.
Table 6. Online A/B Testing: significant improvements in the production product advertising system
Tasks | Relative RPM | Relative Ad Defect Rate
---|---|---
TextGNN Relevance | +2.03% | -2.32%
TextGNN Selection | +1.21% | -0.34%
## 6\. Conclusion
We present TextGNN, a powerful NLP model that combines two strong model structures, text encoders and GNNs, into a single end-to-end framework and shows strong performance on the Ad relevance task. The model retains the strong natural language understanding ability of existing powerful text encoders while complementing them with additional information from graph-structured data, achieving stronger performance than what pure semantic information alone can deliver. We demonstrate with experiments that the TextGNN model shows much stronger overall performance than a strong text-encoder-only baseline, and that the new model delivers its biggest gains on the most difficult, low-frequency Ads. As a next step, an ensemble approach could be explored that automatically mixes the outputs of different representation models based on Ad frequency to achieve even better performance.
## References
* Bai et al. (2018) Xiao Bai, Erik Ordentlich, Yuanyuan Zhang, Andy Feng, Adwait Ratnaparkhi, Reena Somvanshi, and Aldi Tjahjadi. 2018. Scalable Query N-Gram Embedding for Improving Matching and Relevance in Sponsored Search. In _Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining_ (London, United Kingdom) _(KDD ’18)_. Association for Computing Machinery, New York, NY, USA, 52–61. https://doi.org/10.1145/3219819.3219897
* Dehghani et al. (2017) Mostafa Dehghani, Hamed Zamani, Aliaksei Severyn, Jaap Kamps, and W. Bruce Croft. 2017. Neural Ranking Models with Weak Supervision. In _Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval_ (Shinjuku, Tokyo, Japan) _(SIGIR ’17)_. Association for Computing Machinery, New York, NY, USA, 65–74. https://doi.org/10.1145/3077136.3080832
* Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_. Association for Computational Linguistics, Minneapolis, Minnesota, 4171–4186. https://doi.org/10.18653/v1/N19-1423
* Grbovic and Cheng (2018) Mihajlo Grbovic and Haibin Cheng. 2018. Real-Time Personalization Using Embeddings for Search Ranking at Airbnb. In _Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining_ (London, United Kingdom) _(KDD ’18)_. Association for Computing Machinery, New York, NY, USA, 311–320. https://doi.org/10.1145/3219819.3219885
* Hamilton et al. (2017) Will Hamilton, Zhitao Ying, and Jure Leskovec. 2017\. Inductive Representation Learning on Large Graphs. In _Advances in Neural Information Processing Systems 30_ , I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (Eds.). Curran Associates, Inc., 1024–1034. http://papers.nips.cc/paper/6703-inductive-representation-learning-on-large-graphs.pdf
* He et al. (2015) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2015\. Deep Residual Learning for Image Recognition. http://arxiv.org/abs/1512.03385 cite arxiv:1512.03385Comment: Tech report.
* Hinton et al. (2015) Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015\. Distilling the Knowledge in a Neural Network. arXiv:1503.02531 [stat.ML]
* Huang et al. (2020) Jui-Ting Huang, Ashish Sharma, Shuying Sun, Li Xia, David Zhang, Philip Pronin, Janani Padmanabhan, Giuseppe Ottaviano, and Linjun Yang. 2020. Embedding-based Retrieval in Facebook Search. _Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining_ (Aug 2020). https://doi.org/10.1145/3394486.3403305
* Huang et al. (2013) Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Acero, and Larry Heck. 2013\. Learning Deep Structured Semantic Models for Web Search using Clickthrough Data. ACM International Conference on Information and Knowledge Management (CIKM). https://www.microsoft.com/en-us/research/publication/learning-deep-structured-semantic-models-for-web-search-using-clickthrough-data/
* Indyk and Motwani (1998) Piotr Indyk and Rajeev Motwani. 1998. Approximate Nearest Neighbors: Towards Removing the Curse of Dimensionality. 604–613.
* Jones (1972) Karen Spärck Jones. 1972\. A statistical interpretation of term specificity and its application in retrieval. _Journal of Documentation_ 28 (1972), 11–21.
* Kim et al. (2009) Soo-Min Kim, Patrick Pantel, Lei Duan, and Scott Gaffney. 2009\. Improving Web Page Classification by Label-Propagation over Click Graphs. In _Proceedings of the 18th ACM Conference on Information and Knowledge Management_ (Hong Kong, China) _(CIKM ’09)_. Association for Computing Machinery, New York, NY, USA, 1077–1086. https://doi.org/10.1145/1645953.1646090
* Kipf and Welling (2017) Thomas N. Kipf and Max Welling. 2017. Semi-Supervised Classification with Graph Convolutional Networks. In _Proceedings of the 5th International Conference on Learning Representations_ (Palais des Congrès Neptune, Toulon, France) _(ICLR ’17)_. https://openreview.net/forum?id=SJU4ayYgl
* Li et al. (2019b) Xue Li, Zhipeng Luo, Hao Sun, Jianjin Zhang, Weihao Han, Xianqi Chu, Liangjie Zhang, and Qi Zhang. 2019b. Learning Fast Matching Models from Weak Annotations. In _The World Wide Web Conference_. Association for Computing Machinery, 2985–2991.
* Li et al. (2019a) Zekun Li, Zeyu Cui, Shu Wu, Xiaoyu Zhang, and Liang Wang. 2019a. Fi-GNN: Modeling Feature Interactions via Graph Neural Networks for CTR Prediction. In _Proceedings of the 28th ACM International Conference on Information and Knowledge Management_ (Beijing, China) _(CIKM ’19)_. Association for Computing Machinery, New York, NY, USA, 539–548. https://doi.org/10.1145/3357384.3357951
* Liu et al. (2019) Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv:1907.11692 [cs.CL]
* Lu et al. (2020) Wenhao Lu, Jian Jiao, and Ruofei Zhang. 2020. TwinBERT: Distilling Knowledge to Twin-Structured BERT Models for Efficient Retrieval. arXiv:2002.06275 [cs.IR]
* Pal et al. (2020) Aditya Pal, Chantat Eksombatchai, Yitong Zhou, Bo Zhao, Charles Rosenberg, and Jure Leskovec. 2020\. PinnerSage: Multi-Modal User Embedding Framework for Recommendations at Pinterest. In _Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining_ (Virtual Event, CA, USA) _(KDD ’20)_. Association for Computing Machinery, New York, NY, USA, 2311–2320. https://doi.org/10.1145/3394486.3403280
* Robertson et al. (1995) Stephen Robertson, S. Walker, S. Jones, M. M. Hancock-Beaulieu, and M. Gatford. 1995\. Okapi at TREC-3. In _Overview of the Third Text REtrieval Conference (TREC-3)_ (overview of the third text retrieval conference (trec–3) ed.). Gaithersburg, MD: NIST, 109–126. https://www.microsoft.com/en-us/research/publication/okapi-at-trec-3/
* Sanh et al. (2019) Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. _CoRR_ abs/1910.01108 (2019). arXiv:1910.01108 http://arxiv.org/abs/1910.01108
* Shen et al. (2014) Yelong Shen, Xiaodong He, Jianfeng Gao, Li Deng, and Gregoire Mesnil. 2014. Learning Semantic Representations Using Convolutional Neural Networks for Web Search. WWW 2014. https://www.microsoft.com/en-us/research/publication/learning-semantic-representations-using-convolutional-neural-networks-for-web-search/
* Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is All You Need. In _Proceedings of the 31st International Conference on Neural Information Processing Systems_ (Long Beach, California, USA) _(NIPS’17)_. Curran Associates Inc., Red Hook, NY, USA, 6000–6010.
* Velickovic et al. (2018) Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. 2018. Graph Attention Networks. _ICLR_ (2018).
* Wang and Li (2012) Jingdong Wang and Shipeng Li. 2012. Query-Driven Iterated Neighborhood Graph Search for Large Scale Indexing. In _Proceedings of the 20th ACM International Conference on Multimedia_ (Nara, Japan) _(MM ’12)_. Association for Computing Machinery, New York, NY, USA, 179–188.
* Wolf et al. (2019) Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R’emi Louf, Morgan Funtowicz, and Jamie Brew. 2019. HuggingFace’s Transformers: State-of-the-art Natural Language Processing. _ArXiv_ abs/1910.03771 (2019).
* Yang et al. (2019) Xiao Yang, Tao Deng, Weihan Tan, Xutian Tao, Junwei Zhang, Shouke Qin, and Zongyao Ding. 2019. Learning Compositional, Visual and Relational Representations for CTR Prediction in Sponsored Search. In _Proceedings of the 28th ACM International Conference on Information and Knowledge Management_ (Beijing, China) _(CIKM ’19)_. Association for Computing Machinery, New York, NY, USA, 2851–2859. https://doi.org/10.1145/3357384.3357833
* Zhou et al. (2019) Jie Zhou, Ganqu Cui, Zhengyan Zhang, Cheng Yang, Zhiyuan Liu, Lifeng Wang, Changcheng Li, and Maosong Sun. 2019\. Graph Neural Networks: A Review of Methods and Applications. arXiv:1812.08434 [cs.LG]
# NICER Discovery of Millisecond X-ray Pulsations and an Ultracompact Orbit
in IGR J17494$-$3030
Mason Ng MIT Kavli Institute for Astrophysics and Space Research,
Massachusetts Institute of Technology, Cambridge, MA 02139, USA Paul S. Ray
Space Science Division, U.S. Naval Research Laboratory, Washington, DC 20375,
USA Peter Bult Department of Astronomy, University of Maryland, College
Park, MD 20742, USA Astrophysics Science Division, NASA Goddard Space Flight
Center, Greenbelt, MD 20771, USA Deepto Chakrabarty MIT Kavli Institute for
Astrophysics and Space Research, Massachusetts Institute of Technology,
Cambridge, MA 02139, USA Gaurava K. Jaisawal National Space Institute,
Technical University of Denmark, Elektrovej 327-328, DK-2800 Lyngby, Denmark
Christian Malacaria NASA Marshall Space Flight Center, NSSTC, 320 Sparkman
Drive, Huntsville, AL 35805, USA Universities Space Research Association,
NSSTC, 320 Sparkman Drive, Huntsville, AL 35805, USA Diego Altamirano School
of Physics and Astronomy, University of Southampton, Southampton, SO17 1BJ, UK
Zaven Arzoumanian Astrophysics Science Division, NASA Goddard Space Flight
Center, Greenbelt, MD 20771, USA Keith C. Gendreau Astrophysics Science
Division, NASA Goddard Space Flight Center, Greenbelt, MD 20771, USA Tolga
Güver Istanbul University, Science Faculty, Department of Astronomy and Space
Sciences, Beyazıt, 34119, Istanbul, Turkey Istanbul University Observatory
Research and Application Center, Istanbul University 34119, Istanbul, Turkey
Matthew Kerr Space Science Division, U.S. Naval Research Laboratory,
Washington, DC 20375, USA Tod E. Strohmayer Astrophysics Science Division,
NASA Goddard Space Flight Center, Greenbelt, MD 20771, USA Joint Space-
Science Institute, NASA Goddard Space Flight Center, Greenbelt, MD 20771, USA
Zorawar Wadiasingh Astrophysics Science Division, NASA Goddard Space Flight
Center, Greenbelt, MD 20771, USA Centre for Space Research, North-West
University, Potchefstroom Campus, Private Bag X6001, Potchefstroom 2520, South
Africa Universities Space Research Association (USRA), Columbia, MD 21046,
USA Michael T. Wolff Space Science Division, U.S. Naval Research Laboratory,
Washington, DC 20375, USA
(Received January 15, 2021; Revised January 29, 2021; Accepted January 31,
2021)
###### Abstract
We report the detection of 376.05 Hz (2.66 ms) coherent X-ray pulsations in
NICER observations of a transient outburst of the low-mass X-ray binary IGR
J17494$-$3030 in 2020 October/November. The system is an accreting millisecond
X-ray pulsar in a 75 minute ultracompact binary. The mass donor is most likely
a $\simeq 0.02\,M_{\odot}$ finite-entropy white dwarf composed of He or C/O.
The fractional rms pulsed amplitude is 7.4%, and the soft (1–3 keV) X-ray
pulse profile contains a significant second harmonic. The pulsed amplitude and
pulse phase lag (relative to our mean timing model) are energy-dependent, with local maxima at 4 keV and 1.5 keV, respectively. We also recovered
the X-ray pulsations in archival 2012 XMM-Newton observations, allowing us to
measure a long-term pulsar spin-down rate of $\dot{\nu}=-2.1(7)\times
10^{-14}$ Hz s-1 and to infer a pulsar surface dipole magnetic field strength
of $\simeq 10^{9}$ G. We show that the mass transfer in the binary is likely
non-conservative, and we discuss various scenarios for mass loss from the
system.
stars: neutron – stars: oscillations (pulsations) – binaries: close – stars:
rotation – X-rays: binaries – X-rays: individual (IGR J17494$-$3030)
Facilities: NICER, XMM. Software: astropy (Astropy Collaboration et al., 2013), NumPy and SciPy (Oliphant, 2007), Matplotlib (Hunter, 2007), IPython (Perez & Granger, 2007), tqdm (Da Costa-Luis et al., 2020), NICERsoft, PRESTO (Ransom et al., 2002), PINT (Luo et al., 2020), HEASoft 6.28
## 1 Introduction
Accreting millisecond X-ray pulsars (AMXPs; see Di Salvo & Sanna, 2020, for a
recent review) are rapidly rotating, weakly magnetized ($\sim 10^{8}$ G)
neutron stars accreting from a low-mass ($\lesssim 1M_{\odot}$) companion in a
low-mass X-ray binary (LMXB). Most known AMXPs are X-ray transient systems in
which long ($\sim$years) intervals of X-ray quiescence are punctuated by brief
($\sim$weeks) outbursts of enhanced X-ray emission. These transient outbursts
are understood to arise from a thermal instability in the accretion disk
around a neutron star or black hole LMXB primary, analogous to “dwarf nova”
optical outbursts in accreting white dwarfs (see Lasota, 2001; Hameury, 2020,
and references therein).
The X-ray transient IGR J17494$-$3030 (Galactic coordinates $l=359.1^{\circ}$,
$b=-1.5^{\circ}$; hereafter called IGR J17494) was first discovered in a 2012
March outburst in the 3–80 keV hard X-ray band (IBIS and JEM-X) in an INTEGRAL
survey of the Galactic center region (Boissay et al., 2012). Soft X-ray
(0.5–10 keV) monitoring observations with Swift showed that the outburst
lasted approximately one month (Armas Padilla et al., 2013) before fading into
quiescence (Chakrabarty et al., 2013). XMM-Newton 0.5–10 keV spectroscopy
suggested that the compact primary is a neutron star (Armas Padilla et al.,
2013). A new outburst was detected with INTEGRAL in 2020 October (Ducci et
al., 2020), leading to a more precise X-ray localization with Chandra
(Chakrabarty & Jonker, 2020) and the identification of a 4.5 GHz radio
counterpart with the VLA (van den Eijnden et al., 2020).
Soft X-ray observations of the 2020 outburst with the Neutron Star Interior
Composition Explorer (NICER) revealed the presence of coherent 376 Hz
pulsations modulated by a 75 minute binary orbit, establishing the system as a
millisecond pulsar (neutron star) in an ultracompact binary (Ng et al., 2020).
In this Letter, we first outline the NICER and XMM-Newton observations and
data processing. We then present results from timing and spectral analyses of
the NICER observations, as well as from a timing analysis of the archival 2012
XMM-Newton observations. Finally, we constrain the possible nature of the
donor in the IGR J17494 system and discuss further implications of the source.
## 2 Observations and Data Processing
### 2.1 NICER
NICER is an X-ray telescope mounted on the International Space Station (ISS)
since 2017 June. NICER has 56 aligned pairs of X-ray concentrator optics and
silicon drift detectors (52 detectors are usually active on NICER). NICER is
capable of fast-timing observations in the 0.2–12.0 keV band, with timing
accuracy of time-tagged photons to better than 100 ns (Gendreau et al., 2012;
LaMarr et al., 2016; Prigozhin et al., 2016).
NICER observed IGR J17494 from 2020 October 27 to November 4 (the source became unobservable due to Sun-angle constraints around November 5) for a total exposure time of $32.3{\rm\,ks}$ after filtering, in ObsIDs 3201850101–3201850108. (During the course of the observations, several detectors were turned off for scheduled maintenance; detectors 01, 02, 10, 13, 34, 43, and 44 were affected. In all observations, 46–48 detectors were active. Detectors 11, 20, 22, and 60 have been inactive since launch.) These observations were available through the public NASA HEASARC data archive.
There were additional NICER observations, to which we did not have access,
during this interval for a proprietary guest observer investigation (PI: A.
Sanna; shown as the shaded region in the top panel of Figure 1). The events
were barycenter-corrected in the ICRS reference frame, with source coordinates
R.A. $=267.348417\arcdeg$ and Decl.$=-30.499722\arcdeg$ (equinox J2000.0)
obtained from a recent Chandra observation (Chakrabarty & Jonker, 2020), using
barycorr from FTOOLS with the JPL DE405 solar system ephemeris (Standish,
1998).
The NICER observations were processed with HEASoft version 6.28 and the NICER
Data Analysis Software (nicerdas) version 7.0 (2020-04-23_V007a). The
following criteria, which we note are relaxed compared to standard filtering
criteria as the latter were too restrictive and resulted in no events, were
imposed in the construction of the good time intervals (GTIs): no
discrimination of events when NICER (on the ISS) was inside or outside of the
South Atlantic Anomaly during the course of the observations; $\geq 20\arcdeg$
for the source-Earth limb angle ($\geq 30\arcdeg$ for the Sun-illuminated
Earth); $\geq$ 38 operational Focal Plane Modules (FPMs); undershoot (dark
current) count-rate range of 0–400 per FPM (underonly_range); overshoot
(saturation from charged particles) count-rate range of 0–2 per FPM
(overonly_range and overonly_expr); pointing offset is $<0.015\arcdeg$ from
the nominal source position.
We analyzed spectral data using XSPEC v12.11.1 (Arnaud, 1996). NICER data were selected in the 1–10 keV range, to avoid contamination from optical loading and significant interstellar absorption at lower energies. The spectra were rebinned to have at least 25 counts per bin. Background spectra were extracted using nibackgen3C50 version 6 from the official NICER tools (https://heasarc.gsfc.nasa.gov/docs/nicer/tools/nicer_bkg_est_tools.html). Standard response files made available by the NICER team (https://heasarc.gsfc.nasa.gov/docs/heasarc/caldb/data/nicer/xti/index.html) were used to perform the spectral analysis.
### 2.2 XMM-Newton
XMM-Newton performed a $43{\rm\,ks}$ observation of IGR J17494 on 2012 March
31 (ObsID 0694040201). The EPIC-PN camera was operated in timing mode,
yielding a time resolution of 29.56 $\mu$s, which is sufficient to allow us to
search for the presence of coherent pulsations. We processed these data using SAS version 18.0 and the latest version of the calibration files (https://www.cosmos.esa.int/web/xmm-newton/current-calibration-files).
Applying standard screening criteria, we retained only those events with
photon energies in the 0.4–10 keV range, with $\textsc{pattern}\leq 4$ and
screening $\textsc{flag}=0$. Source events were extracted from rawx columns
$[34:42]$, while background events were extracted from rawx $[51:59]$.
Constructing a $32$-s resolution light curve of the source and background
data, we find that the source count-rate gradually decreased over the span of
the observation, dropping from 2 ct s-1 to 1 ct s-1. Additionally, we filtered
out an episode of background flaring that occurred between $15750{\rm\,s}$ and
$21500{\rm\,s}$ after the start of the observation. Finally, we applied
barycentric corrections to the cleaned event data, again using the JPL DE405
solar system ephemeris and the source coordinates quoted previously.
## 3 Results
### 3.1 NICER
The NICER 1–7 keV light curve for the 2020 outburst is shown in the top panel
of Figure 1. The source gradually faded until MJD 59155.4, after which it
decayed more rapidly. The X-ray spectrum prior to the proprietary data gap was
fairly constant and well-fit with a two-component absorbed power-law and
blackbody model (tbabs(powerlaw+bbodyrad) in XSPEC), with absorption column
density $n_{\rm H}=2.07(6)\times 10^{22}{\rm\,cm^{-2}}$, photon index
$\Gamma=1.90(6)$, blackbody temperature $kT=0.58(3)$ keV, and blackbody radius
$R_{\rm bb}=2.9(5)\,d_{10}$ km, where $d_{10}$ is the source distance in units
of 10 kpc. The uncertainties are reported at the 90% confidence level. The
reduced $\chi^{2}$ ($\chi_{\nu}^{2}$) of the fit was $1.14$ for 849 degrees of
freedom. The spectrum softened during the late decay phase of the outburst, where the same two-component model fit yielded $\Gamma=4.3_{-0.6}^{+0.9}$, with $n_{\rm H}$ assumed to be unchanged throughout the observations. The peak
absorbed 1–10 keV flux we observed was $1.01\times
10^{-10}{\rm\,erg\,s^{-1}\,cm^{-2}}$ on MJD $59149.4$, corresponding to an
unabsorbed flux of $1.43\times 10^{-10}{\rm\,erg\,s^{-1}\,cm^{-2}}$. The
lowest absorbed flux we measured was $1.21\times
10^{-12}{\rm\,erg\,s^{-1}\,cm^{-2}}$ on MJD $59157.5$, corresponding to an
unabsorbed flux of $3.23\times 10^{-12}{\rm\,erg\,s^{-1}\,cm^{-2}}$. This is
roughly a factor of 3 fainter than the minimum flux detected by XMM-Newton at
the end of the 2012 outburst (Armas Padilla et al., 2013). A more detailed
X-ray spectral analysis will be reported elsewhere.
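For readers who wish to reproduce a fit of this kind in PyXspec (the Python
interface to XSPEC), a minimal sketch might look as follows. The file names
are hypothetical placeholders, and the starting parameter values are simply
the best-fit values quoted above; this is not the authors' actual script.

```python
# Minimal PyXspec sketch of the two-component fit described above.
from xspec import AllData, Fit, Model, Spectrum

spec = Spectrum("nicer_src_grp25.pha")    # spectrum grouped to >=25 counts/bin
spec.background = "nicer_3c50_bkg.pha"    # nibackgen3C50 background estimate

AllData.ignore("**-1.0 10.0-**")          # restrict the fit to 1-10 keV

m = Model("tbabs*(powerlaw + bbodyrad)")  # absorbed power law + blackbody
m.TBabs.nH = 2.07                         # 10^22 cm^-2, starting value from text
m.powerlaw.PhoIndex = 1.90
m.bbodyrad.kT = 0.58                      # keV

Fit.statMethod = "chi"
Fit.perform()
print(Fit.statistic, Fit.dof)             # compare with chi^2_nu = 1.14 for 849 dof
```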
We first detected X-ray pulsations with a data analysis pipeline that employs
multiple techniques (https://github.com/masonng-astro/nicerpy_xrayanalysis),
particularly Lv3_incoming.py and the scripts therein for X-ray pulsation
searches, including averaged power spectral stacking with Bartlett’s method
(Bartlett, 1948) and acceleration searches with PRESTO (Ransom et al., 2002).
The initial detection was made through PRESTO, an open-source pulsar timing
software package (https://github.com/scottransom/presto) designed for
efficient searches for binary millisecond pulsars. We ran a Fourier-domain
acceleration search scheme with the accelsearch task over the range 1–1000 Hz,
and posited that the Doppler motion would cause the possible signal to drift
over a maximum of 100 bins in Fourier frequency space. This yielded a strong
$\simeq 376.05{\rm\,Hz}$ pulsation candidate (trial-adjusted significance of
$3.5\sigma$) in the 2–12 keV range.
Figure 1: Top: NICER 1–7 keV light curve for IGR J17494. The shaded band
denotes a gap where proprietary NICER data were unavailable to us. Middle:
Pulse arrival time delay as a function of orbital phase relative to the
ascending node. The crosses are our measurements, and the solid curve is our
best-fit model. The squares are the fit residuals, plotted on a 30$\times$
magnified scale. Bottom: Pulse profiles in the 1–3 keV (solid red) and 3–7 keV
(dashed blue) bands. The 1–3 keV profile contains a significant second
harmonic.
After initial identification of the candidate in the 2–12 keV range, we
optimized the pulse significance by adjusting the energy range to maximize the
$Z_{1}^{2}$ statistic, where
$Z_{1}^{2}=\frac{2}{N}\left[\left(\sum_{j=1}^{N}\cos 2\pi\nu
t_{j}\right)^{2}+\left(\sum_{j=1}^{N}\sin 2\pi\nu t_{j}\right)^{2}\right],$
(1)
where $t_{j}$ are the $N$ photon arrival times (Buccheri et al., 1983). We
found that an optimal energy range of 1.01–7.11 keV yielded
$Z_{1}^{2}=1915.41$. Our subsequent timing analyses were carried out over 1–7
keV.
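As an illustration, the $Z_{1}^{2}$ statistic of Equation 1 is straightforward
to compute from barycentred photon arrival times. The following NumPy sketch
generalises it to $n$ harmonics; with n_harm=1 it reproduces Equation 1.

```python
import numpy as np

def z2_statistic(t, nu, n_harm=1):
    """Z_n^2 statistic (Buccheri et al., 1983) for photon arrival times t (s)
    folded at trial frequency nu (Hz); n_harm=1 reproduces Equation 1."""
    phase = 2.0 * np.pi * nu * np.asarray(t)
    z2 = sum(np.cos(k * phase).sum() ** 2 + np.sin(k * phase).sum() ** 2
             for k in range(1, n_harm + 1))
    return 2.0 * z2 / len(t)
```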
The acceleration searches indicated that the pulsation frequency is modulated
by a binary orbit. We used the acceleration data to estimate an initial timing
model with a provisional circular orbit. We then used this initial model to
construct $35$ pulse times of arrival (TOAs) with the photon_toa.py tool in
the NICERsoft (https://github.com/paulray/NICERsoft) data analysis package,
using a Gaussian pulse template and ensuring an integration time of 500 s for
each TOA (with minimum exposure time of 200 s). We then used these TOAs to
compute corrections to our initial orbit model using weighted least-squares
fitting with the PINT pulsar data analysis package (Luo et al., 2020). Our
best-fit orbit ephemeris is shown in Table 1, and the orbital delay curve is
shown in the middle panel of Figure 1. Using our best-fit timing model,
pulsations were detected throughout the entire outburst. At the end of the
observations, we were able to detect the pulsations in observations from MJD
59154–59157 (November 1–4) by combining all the data. The mean unabsorbed flux
over this 4-day interval was $8.5\times 10^{-12}$ erg s$^{-1}$ cm$^{-2}$ (1–10 keV). We
did not have sufficient sensitivity to detect the pulsations in individual
pointings from these dates. The time-averaged fractional root-mean-squared
(rms) pulsed amplitude was 7.4% (1–7 keV). Examining the lower and higher
energies separately, we found amplitudes of 7.2% in the 1–3 keV band and 8.7%
in the 3–7 keV band. The soft and hard X-ray pulse profiles are shown in the
bottom panel of Figure 1. The 1–3 keV profile shows the presence of a second
harmonic; this component is not significantly detected in the 3–7 keV profile.
To further examine the energy dependence of the pulse waveform, we adaptively
binned the timing data in energy. We required the energy bins to contain a
multiple of 5 pulse-invariant (PI) energy channels (0.05 keV), such that each
bin contained at least 5000 counts. For each of these energy bins, we then
folded the data using our best-fit timing solution and measured the
background-corrected fractional rms pulsed amplitude and the pulse phase
offset relative to the model. The resulting energy dependencies are shown in
Figure 2. The pulsed amplitude has a local maximum of 11% at 4 keV, while the
pulse phase lag has a local maximum of $+0.05$ cycles (130 $\mu$s) at around
1.5 keV.
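One way to implement the adaptive binning described above is sketched below;
the function name and interface are ours, but the grouping rule (bin widths in
multiples of 5 PI channels, i.e. 0.05 keV, each holding at least 5000 counts)
follows the text.

```python
import numpy as np

def adaptive_energy_bins(pi, block=5, min_counts=5000):
    """Group event PI channels into contiguous bins whose widths are multiples
    of `block` channels (5 PI channels = 0.05 keV for NICER) and which each
    hold at least `min_counts` events."""
    pi = np.sort(np.asarray(pi))
    edges, lo = [int(pi[0])], int(pi[0])
    for hi in range(lo + block, int(pi[-1]) + block, block):
        if np.count_nonzero((pi >= lo) & (pi < hi)) >= min_counts:
            edges.append(hi)
            lo = hi
    return np.asarray(edges)
```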
Figure 2: Top: Fractional rms pulsed amplitude as a function of energy, as
measured by NICER. Bottom: Pulse phase lag as a function of energy, as
measured by NICER. The lag is measured relative to the best-fit timing model
in Table 1.
### 3.2 XMM-Newton
The uncertainty in our $P_{\rm orb}$ value does not allow us to coherently
extrapolate our timing model back to the 2012 outburst. Thus, we searched for
pulsations in the XMM-Newton data by constructing a grid of trial
$T_{\mathrm{asc}}$ values around the local epoch that spanned one orbital
period. The grid resolution was set to 50 s, which is equivalent to $4\arcdeg$
in orbital longitude. For each trial ephemeris, we then demodulated the event
data and computed the $Z_{1}^{2}$ statistic (see Eq. 1). We evaluated this
statistic for pulse frequencies in a $\pm 3{\rm\,mHz}$ window around the spin
frequency measured with NICER, adopting a frequency resolution of $1/T$, with
$T$ the duration of the XMM-Newton observation. The best candidate solution
produced by this search had $Z_{1}^{2}=89$, which converts to a trial-adjusted
pulse detection significance of $8\sigma$.
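A sketch of such a grid search is given below, reusing the z2_statistic
helper sketched earlier. Here `events` holds barycentred photon arrival times
in seconds; the orbital parameters are the NICER values from Table 1, and the
function name and structure are ours.

```python
import numpy as np

def grid_search(events, nu0=376.05017022, porb=4496.67, asini=0.015186,
                dt_asc=50.0, dnu=3e-3):
    """Grid search over trial T_asc (one orbital period in 50 s steps, ~4 deg
    in orbital longitude) and pulse frequency (+/- dnu around nu0, step 1/T)."""
    T = events.max() - events.min()
    best = (0.0, None, None)
    for t_asc in events.min() + np.arange(0.0, porb, dt_asc):
        # remove the circular-orbit Doppler delay for this trial ascending node
        t_dem = events - asini * np.sin(2 * np.pi * (events - t_asc) / porb)
        for nu in np.arange(nu0 - dnu, nu0 + dnu, 1.0 / T):
            z2 = z2_statistic(t_dem, nu)
            if z2 > best[0]:
                best = (z2, t_asc, nu)
    return best   # (Z_1^2, T_asc, nu) of the strongest candidate
```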
Adopting the best $T_{\mathrm{asc}}$ and pulse frequency from the grid search
as a provisional model, we performed a phase-coherent pulse analysis. We
divided the light curve into $\approx 3$ ks segments, and measured the pulse
phase in each segment separately. The phase residuals were fit using a
circular orbital model and constant spin frequency, where we kept the orbital
period and projected semimajor axis fixed at their NICER values. The best-fit
values were $\nu_{2012}=376.0501759(19)$ Hz and $T_{\rm asc,2012}=$ MJD
$56017.33680(5)$. Comparing to our NICER measurement, we find
$\Delta\nu\equiv\nu_{2020}-\nu_{2012}=-5.7\pm 1.9\ \mu$Hz. This indicates long-
term spin-down of the pulsar between outbursts, at a rate
$\dot{\nu}=-2.1(7)\times 10^{-14}$ Hz s$^{-1}$. Owing to the uncertainty in the exact
orbital cycle count between the 2012 and 2020 epochs, we are unable to use
these $T_{\rm asc}$ measurements to further refine the orbital period.
The XMM-Newton data also showed an energy-dependent trend in pulse phase lag
similar to that observed in the NICER data. We were unable to measure an
energy-dependence in the pulsed amplitude with XMM-Newton, but the results
from the two data sets were consistent within the measurement uncertainties.
Table 1: IGR J17494$-$3030 timing parameters from the 2020 outburst
Parameter | Value
---|---
Right ascension, $\alpha$ (J2000) | $267.348417\arcdeg$
Declination, $\delta$ (J2000) | $-30.499722\arcdeg$
Position epoch (TT) | MJD $59156.34$
Spin frequency, $\nu_{0}$ (Hz) | $376.05017022(4)$
Spin frequency derivative (during outburst), $|\dot{\nu}|$ (Hz/s) | $<1.8\times 10^{-12}$
Spin epoch, $t_{0}$ (TDB) | MJD $59149.0$
Binary period, $P_{\rm orb}$ (s) | $4496.67(3)$
Projected semimajor axis, $a_{x}\sin i$ (lt-ms) | $15.186(12)$
Epoch of ascending node passage, $T_{\rm asc}$ (TDB) | MJD $59149.069012(15)$
Eccentricity, $e$ | $<0.006\ (2\sigma)$
Spin frequency derivative (long-term), $\dot{\nu}$ (Hz/s) | $-2.1(7)\times 10^{-14}$
## 4 Discussion
The discovery of coherent millisecond X-ray pulsations from IGR J17494$-$3030
definitively identifies the source as an accreting neutron star. We can use
the long-term spin-down of the pulsar between its 2012 and 2020 X-ray
outbursts to estimate the pulsar’s magnetic field strength. Assuming that the
spin-down is due to magnetic dipole radiation, we can calculate the pulsar’s
magnetic dipole moment (Spitkovsky, 2006)
$\mu=5.2\times 10^{26}\left(1+\sin^{2}\alpha\right)^{-1/2}\left(\frac{I}{10^{45}\,{\rm g\,cm^{2}}}\right)^{1/2}\ {\rm G\,cm^{3}},$ (2)
where $\alpha$ is the angle between the magnetic and spin axes, and $I$ is the
neutron star moment of inertia. The corresponding surface dipole field
strength of $\simeq 10^{9}$ G is on the high end of the distribution inferred
for other AMXPs (Mukherjee et al., 2015).
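The numerical coefficient in Equation 2 can be checked directly from the
measured $\nu$ and long-term $\dot{\nu}$. The sketch below evaluates the
standard dipole spin-down relation, $\mu=[Ic^{3}|\dot{\nu}|/(4\pi^{2}\nu^{3})]^{1/2}(1+\sin^{2}\alpha)^{-1/2}$,
for the aligned ($\alpha=0$) case; the fiducial moment of inertia is ours.

```python
import numpy as np

C = 2.998e10                   # speed of light, cm/s
I = 1e45                       # g cm^2, fiducial neutron star moment of inertia
nu, nudot = 376.05017022, 2.1e-14

mu = np.sqrt(I * C**3 * nudot / (4 * np.pi**2 * nu**3))   # alpha = 0 case
print(f"mu     = {mu:.2e} G cm^3")     # ~5.2e26, matching Equation 2
print(f"B_pole ~ {2 * mu / 1e18:.1e} G")  # dipole field at the pole for R = 10 km
```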
We found that the fractional rms pulsed amplitude and the pulse phase of IGR
J17494 vary as a function of photon energy. Both the amplitude and the phase
lag reach a local maximum at a (different) characteristic energy of 4 and 1.5
keV, respectively. Energy-dependent variations of the pulse waveform are
ubiquitous among AMXPs, although the location of these local maxima varies
greatly from source to source (Gierliński et al., 2002; Gierliński & Poutanen,
2005; Falanga et al., 2005; Patruno et al., 2009; Falanga et al., 2012). The
behavior can be understood through a two-component emission model, with
thermal emission originating from the stellar surface and scattered Compton
emission originating from some height above the surface (Gierliński et al.,
2002; Wilkinson et al., 2011). Accounting for the difference in geometry and
emission patterns, such a model can self-consistently explain the energy
dependence of both the phase lags and the pulsed amplitudes (Poutanen &
Gierliński, 2003).
Our measurement of a 75 min binary orbit allows us to constrain the nature of
the mass donor in this system. The vast majority of Roche-lobe–filling LMXBs
and cataclysmic variables contain hydrogen-rich donor stars, and they all have
binary periods $P_{\rm orb}\gtrsim 80$ min (Paczynski & Sienkiewicz, 1981;
Rappaport et al., 1982). The so-called ultracompact binaries ($P_{\rm
orb}\lesssim 80$ min) have H-depleted donors (Nelson et al., 1986; Pylyser &
Savonije, 1988, 1989; Nelemans et al., 2010). IGR J17494 has the longest known
period for an ultracompact LMXB and lies near the period boundary, making it a
particularly interesting case. We also note the recent discovery of the
rotation-powered millisecond gamma-ray pulsar PSR J1653$-$0158 in a 75 min
(non-accreting) binary (Nieder et al., 2020). This is the shortest orbital
period known for a rotation-powered binary pulsar, and this “black widow”
system is believed to have evolved from an ultracompact LMXB after mass
transfer ended.
From our measured orbital parameters, the binary mass function of IGR J17494
is
$f_{m}\equiv\frac{(M_{d}\sin i)^{3}}{(M_{\rm ns}+M_{d})^{2}}=\frac{4\pi^{2}(a_{x}\sin i)^{3}}{GP_{\rm orb}^{2}}\approx 1.39\times 10^{-6}\ M_{\odot},$ (3)
where $M_{\rm ns}$ is the neutron star mass, $M_{d}$ is the donor mass,
$a_{x}\sin i$ is the projected semimajor axis, and the binary inclination $i$
is defined as the angle between the line of sight and the orbital angular
momentum vector. For a given value of $M_{\rm ns}$, we can use Equation 3 to
calculate $M_{d}$ as a function of $i$ (see top panel of Figure 3). Assuming
$M_{\rm ns}=1.4\ (2.0)\,M_{\odot}$, the minimum donor mass (for an edge-on
binary with $i=90^{\circ}$) is 0.014 (0.018) $M_{\odot}$. For a random
ensemble of binaries, the probability distribution of $\cos i$ is uniformly
distributed and $\Pr(i<i_{0})=1-\cos i_{0}$. Thus, the donor mass is likely to
be very low, with a 90% confidence upper limit of $M_{d}<0.033\
(0.041)\,M_{\odot}$ for $M_{\rm ns}=1.4\ (2.0)\,M_{\odot}$.
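The donor-mass constraint is easy to reproduce numerically. A minimal sketch,
with helper names of our choosing, evaluates Equation 3 from the Table 1
parameters and solves it for $M_{d}$ at a given inclination:

```python
import numpy as np
from scipy.optimize import brentq

G, C, MSUN = 6.674e-8, 2.998e10, 1.989e33   # cgs constants

def mass_function(porb_s, asini_lt_s):
    """f_m = 4 pi^2 (a_x sin i)^3 / (G Porb^2), in solar masses (Equation 3)."""
    ax = asini_lt_s * C
    return 4 * np.pi**2 * ax**3 / (G * porb_s**2) / MSUN

def donor_mass(m_ns, incl_deg, fm):
    """Solve (M_d sin i)^3 / (M_ns + M_d)^2 = f_m for M_d (solar masses)."""
    s = np.sin(np.radians(incl_deg))
    return brentq(lambda md: (md * s)**3 / (m_ns + md)**2 - fm, 1e-4, 1.0)

fm = mass_function(4496.67, 15.186e-3)   # ~1.39e-6 Msun
print(donor_mass(1.4, 90.0, fm))         # ~0.014 Msun, the edge-on minimum
```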
Assuming a Roche-lobe–filling donor, we can calculate the donor radius $R_{d}$
as a function of $M_{d}$ (Eggleton, 1983); this is shown in the bottom panel
of Figure 3 for $M_{\rm ns}=1.4\,M_{\odot}$. For comparison, the figure also
shows the mass-radius relations for different types of low-mass stars: cold
white dwarfs (WDs; Zapolsky & Salpeter, 1969; Rappaport & Joss, 1984; Nelemans
et al., 2001); hot (finite-entropy) WDs composed of either He, C, or O (Deloye
& Bildsten, 2003); and low-mass H-rich stars, including brown dwarfs (Chabrier
et al., 2000). We see that cold WD models are inconsistent with our measured
mass-radius constraint, indicating that thermal bloating is likely important.
Moderately hot He WDs with central temperature $T_{c}=2.5\times 10^{6}$ K or
C/O WDs with $T_{c}=5\times 10^{6}$ K are consistent with our constraint at
high binary inclination. Hotter WDs and moderately old (cool) brown dwarfs are
also consistent, but the required inclinations have low a priori probability.
Finally, H-rich dwarfs above the mass-burning limit are also possible, but
only for extremely low (improbable) inclinations. We conclude that the donor
is likely to be a $\simeq 0.02\,M_{\odot}$ finite-entropy He or C/O white
dwarf.
Figure 3: Top: Donor star mass $M_{d}$ as a function of binary inclination
$i$, assuming $M_{\rm ns}=1.4\,M_{\odot}$. The a priori probability
distribution is uniform in $\cos i$, so low masses are likeliest. Bottom:
Mass-radius constraints for the donor star. The thick solid black curve is the
mass-radius constraint for a Roche-lobe–filling donor from our orbital
measurements. The dashed black line shows cold WD models. The blue and red
lines show representative “warm” and hot WD models, respectively, with He
(dotted), C (dashed), and O (dash-dotted) compositions. These models take
$T_{c}=2.5$ and 7.9 MK for He and $T_{c}=5$ and 10 MK for C/O. The solid cyan
curves show brown dwarf models for ages 0.1, 0.5, 1.0, 5.0, and 10.0 Gyr (from
top to bottom). The likeliest donor is a warm $\simeq 0.02M_{\odot}$ He or C/O
WD.
The angular momentum evolution of the binary is described by (Verbunt, 1993;
Verbunt & van den Heuvel, 1995)
$-\frac{\dot{J}}{J}=-\frac{\dot{M}_{d}}{M_{d}}\,f_{\rm ML},$ (4)
where $\dot{J}$ is the rate of change of the orbital angular momentum $J$ due
to effects other than mass loss from the system, $\dot{M}_{d}$ ($<0$) is the
rate of change of the donor mass, and the dimensionless factor $f_{\rm ML}$ is
given by
$f_{\rm ML}=\frac{5}{6}+\frac{n}{2}-\beta q-\frac{(1-\beta)(q+3\alpha)}{3(1+q)},$ (5)
where $q=M_{d}/M_{\rm ns}\ll 1$ is the binary mass ratio, $\beta$ is the
fraction of $\dot{M}_{d}$ that accretes onto the neutron star ($\beta=1$ for
conservative mass transfer),
$n=\frac{d(\ln R_{d})}{d(\ln M_{d})}$ (6)
denotes how the donor radius $R_{d}$ changes with mass loss, and $\alpha$ is
the specific angular momentum of any (non-conservative) mass lost from the
system in units of the donor star’s specific angular momentum. Thus, $\alpha$
parameterizes the site of any mass ejection from the system, where
$\alpha\simeq 1$ for mass loss close to the donor and $\alpha\simeq q^{2}$ for
mass loss close to the pulsar. Mass transfer in ultracompact binaries is
primarily driven by angular momentum loss due to gravitational radiation from
the binary orbit (see Rappaport et al., 1982, and references therein); for a
circular orbit, this loss is given by (Landau & Lifshitz, 1989; Peters, 1964)
$-\left(\frac{\dot{J}}{J}\right)_{\rm GW}=\frac{32\,G^{3}}{5\,c^{5}}\,\frac{M_{\rm ns}M_{d}(M_{\rm ns}+M_{d})}{a^{4}},$ (7)
where $a$ is the binary separation. Inserting this into the left-hand side of
Equation 4, we can then calculate the gravitational-wave–driven mass transfer
rate from the donor into the accretion disk as
$\dot{M}_{\rm GW}=-\dot{M}_{d}=\frac{32G^{3}}{5c^{5}}\left(\frac{4\pi^{2}}{G}\right)^{4/3}\frac{M_{\rm ns}^{8/3}\,q^{2}}{(1+q)^{1/3}\,P_{\rm orb}^{8/3}\,f_{\rm ML}}\approx 2.6\times 10^{-12}\left(\frac{M_{\rm ns}}{1.4M_{\odot}}\right)^{2/3}\left(\frac{M_{d}}{0.014M_{\odot}}\right)^{2}\left(\frac{f_{\rm ML}}{0.66}\right)^{-1}\ M_{\odot}\,{\rm yr}^{-1}.$ (8)
Our scaling value of $f_{\rm ML}=0.66$ corresponds to $n=-1/3$ (typical for
degenerate donors) and $\beta=1$.
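The scaling in Equation 8 can be verified with a few lines of Python; the
sketch below evaluates the exact expression in cgs units using the fiducial
values from the text.

```python
import numpy as np

# Sketch evaluation of Equation 8 in cgs units; values follow the text.
G, C, MSUN, YR = 6.674e-8, 2.998e10, 1.989e33, 3.156e7
m_ns, m_d = 1.4 * MSUN, 0.014 * MSUN
q, porb, f_ml = m_d / m_ns, 4496.67, 0.66

mdot = (32 * G**3 / (5 * C**5)) * (4 * np.pi**2 / G)**(4.0 / 3) \
       * m_ns**(8.0 / 3) * q**2 / ((1 + q)**(1.0 / 3) * porb**(8.0 / 3) * f_ml)
print(f"{mdot * YR / MSUN:.1e} Msun/yr")   # ~2.6e-12, matching Equation 8
```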
Although accretion onto the neutron star is mediated by episodic outbursts,
mass continuity requires that the long-term average accretion luminosity
reflect $\dot{M}_{\rm GW}$ if the mass transfer is conservative. Our
observations are not ideal for examining this, since we did not observe the
early (brightest) part of the 2020 outburst with NICER. However, the
unabsorbed 0.5–10 keV X-ray fluence in the 2012 outburst was $1.1\times
10^{-4}$ erg cm-2 (Armas Padilla et al., 2013). Assuming that the 2012
outburst was typical, that the long-term average accretion rate is dominated
by the outbursts, and that there were no intervening outbursts between 2012
and 2020, the outburst separation of $\approx 3100$ days yields a long-term
average X-ray flux of $F_{x,{\rm avg}}=3.9\times 10^{-13}$ erg s$^{-1}$ cm$^{-2}$
(0.5–10 keV). We can then write the accretion luminosity as
$\frac{GM_{\rm ns}\beta\dot{M}_{\rm GW}}{R_{\rm ns}}=\left(\frac{\Delta\Omega}{4\pi}\right)4\pi d^{2}f_{\rm bol}\,F_{x,{\rm avg}},$ (9)
where $R_{\rm ns}$ is the neutron star radius, $d$ is the distance to the
source, $f_{\rm bol}$ is the bolometric correction (accounting for accretion
luminosity outside the 0.5–10 keV bandpass), and $\Delta\Omega$ is the solid
angle into which the accretion luminosity is emitted. Based on the INTEGRAL
hard X-ray observations in 2012 (Boissay et al., 2012), we estimate $f_{\rm
bol}\approx 1.7$. Assuming $R_{\rm ns}=10$ km and taking $\beta=1$ and
$\Delta\Omega=4\pi$, we obtain an implausibly large distance of 20 kpc.
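For completeness, the distance implied by Equation 9 under these assumptions
can be computed as in the sketch below, using the mass transfer rate from the
Equation 8 sketch above.

```python
import numpy as np

# Sketch of the distance implied by Equation 9, with beta = 1, dOmega = 4*pi.
G, MSUN, KPC = 6.674e-8, 1.989e33, 3.086e21
m_ns, r_ns = 1.4 * MSUN, 1e6          # 10 km neutron star
mdot = 1.6e14                         # g/s, from the Equation 8 sketch
f_bol, f_x_avg = 1.7, 3.9e-13         # bolometric correction; erg/s/cm^2

L_acc = G * m_ns * mdot / r_ns        # accretion luminosity
d = np.sqrt(L_acc / (4 * np.pi * f_bol * f_x_avg))
print(f"d ~ {d / KPC:.0f} kpc")       # ~20 kpc, as in the text
```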
Although it is not impossible that the source lies on the far side of the
Galaxy, a location near the Galactic center is far more likely given the line
of sight. There are several reasons that our distance estimate might be
significantly inflated. Obtaining a more plausible distance of 8 kpc would
require
$\frac{1}{\beta}\left(\frac{\Delta\Omega}{4\pi}\right)\left(\frac{f_{\rm bol}}{1.7}\right)\left(\frac{f_{\rm ML}}{0.66}\right)\left(\frac{M_{\rm ns}}{1.4\,M_{\odot}}\right)^{-5/3}\left(\frac{M_{d}}{0.014\,M_{\odot}}\right)^{-2}\left(\frac{F_{x,{\rm avg}}}{3.9\times 10^{-13}\ {\rm erg\,s^{-1}\,cm^{-2}}}\right)\approx 6.$ (10)
Some combination of these factors may be different than what we assumed above.
However, a heavier neutron star ($M_{\rm ns}>1.4\,M_{\odot}$), a heavier mass
donor (equivalent to a lower binary inclination), or significant beaming
($\Delta\Omega<4\pi$) would further inflate the distance estimate. Also, our
estimate of $f_{\rm bol}$ is fairly robust, given the broad X-ray coverage of
the INTEGRAL data. It is possible that we have underestimated $F_{x,{\rm
avg}}$. This could happen if we missed accretion outbursts that occurred
between 2012 and 2020, or if the quiescent (non-outburst) flux is as high as
$\sim 10^{-12}$ erg s$^{-1}$ cm$^{-2}$. The former possibility can be explored through a
careful analysis of archival X-ray monitoring data, while the latter
possibility could be checked through sensitive X-ray observations of the
source in quiescence.
The factor $f_{\rm ML}$ may be somewhat larger than we assumed. Although we
calculated it using the usual value of $n=-1/3$ for degenerate donors, Deloye
& Bildsten (2003) showed that the WD donors in ultracompact binaries can have
$n$ values in the range of $-0.1$ to $-0.2$ due to the importance of Coulomb
interactions for extremely low donor masses. However, this is unlikely to
increase $f_{\rm ML}$ by more than a factor of $\simeq 1.2$.
Non-conservative mass transfer ($\beta<1$) is a more promising avenue. The
radio detection of IGR J17494 (van den Eijnden et al., 2020) points to the
likelihood of a collimated jet ejection during the outburst. Moreover, a
similar distance conundrum was invoked to infer non-conservative mass transfer
in the ultracompact LMXB pulsar XTE J0929$-$314 (Marino et al., 2017) as well
as several other AMXPs (Marino et al., 2019). Also, there was evidence found
for an outflow in the ultracompact LMXB pulsar IGR J17062$-$6143 (Degenaar et
al., 2017; van den Eijnden et al., 2018), possibly arising from a magnetic
propeller-driven wind from the inner accretion disk (Illarionov & Sunyaev,
1975).
During the long periods of X-ray (accretion) quiescence, mass loss from the
binary could arise from several different mechanisms. These are motivated by
the study of rotation-powered radio millisecond pulsars in detached (non-
accreting) binaries: the so-called “black widow” ($M_{c}\lesssim
0.05M_{\odot}$) and “redback” ($M_{c}\gtrsim 0.1M_{\odot}$) systems, where
$M_{c}$ is the companion mass (see, e.g., Romani et al., 2016, and references
therein). One possibility is black-widow–like ablation of the companion,
driven by rotation-powered gamma-ray emission from the pulsar (Ginzburg &
Quataert, 2020). Such ablation could also be driven by particle heating via
the rotation-powered pulsar wind (see Harding & Gaisser, 1990, and references
therein). Hard X-rays and gamma-rays from the intrabinary shock observed in
many black widow systems could significantly affect the mass loss rate
(Wadiasingh et al., 2018). Another possibility is that the pulsar wind could
drive an outflow from the inner Lagrange ($L_{1}$) point by overcoming the ram
pressure of accreting material (Burderi et al., 2001; Di Salvo et al., 2008).
As an example, we consider the case of gamma-ray ablation. If we assume the
gamma-ray luminosity is $\simeq 10\%$ of the spin-down luminosity ($\simeq
3\times 10^{35}$ erg s$^{-1}$ based on our long-term $\dot{\nu}$ measurement) as
typically seen in black widow systems (Abdo et al., 2013), this would imply a
companion mass loss rate of $\sim 10^{-11}M_{\odot}/{\rm yr}$ (Ginzburg &
Quataert, 2020). For a source distance of 8 kpc and assuming that
gravitational wave losses dominate in Equation 4, this implies $\beta\approx
0.04$ and $\alpha\approx 0.4$, suggesting that the mass ejection occurs
somewhere between the pulsar and the $L_{1}$ point ($\alpha\approx 0.8$).
However, Ginzburg & Quataert (2020) argue that magnetic braking of the donor
(through magnetic coupling to the ablated wind) likely dominates gravitational
radiation as an angular momentum sink in black widow systems. If so, then that
could both decrease $\beta$ and increase $\alpha$ even further in our case.
All of the X-ray–quiescent mechanisms mentioned above rely on the system
entering a rotation-powered radio pulsar state during X-ray quiescence. We
note that a growing class of so-called transitional millisecond pulsars
(tMSPs) has been identified that switch between LMXB and radio pulsar states
(see Papitto & de Martino, 2020, for a review). The known tMSPs would be
classified as redback systems in their radio pulsar state. If IGR J17494 is a
tMSP, then its low companion mass would make it a black widow system in its
rotation-powered state. We note that the X-ray properties of IGR J17494
correspond to those of the so-called very faint X-ray transients (VFXTs;
Wijnands et al., 2006), whose low outburst luminosities and long-term
accretion rates are difficult to understand. Our observations support the
suggestion that some VFXTs may also be tMSPs (Heinke et al., 2015). The
distinction between VFXTs and ordinary LMXBs may somehow relate to the level
of non-conservative mass transfer.
M.N. and this work were supported by NASA under grant 80NSSC19K1287 as well as
through the NICER mission and the Astrophysics Explorers Program. NICER work
at NRL is also supported by NASA. D.A. acknowledges support from the Royal
Society. This research has made use of data and/or software provided by the
High Energy Astrophysics Science Archive Research Center (HEASARC), which is a
service of the Astrophysics Science Division at NASA/GSFC and the High Energy
Astrophysics Division of the Smithsonian Astrophysical Observatory.
## References
* Abdo et al. (2013) Abdo, A. A., Ajello, M., Allafort, A., et al. 2013, ApJS, 208, 17
* Armas Padilla et al. (2013) Armas Padilla, M., Wijnands, R., & Degenaar, N. 2013, MNRAS, 436, L89
* Arnaud (1996) Arnaud, K. A. 1996, in Astronomical Society of the Pacific Conference Series, Vol. 101, Astronomical Data Analysis Software and Systems V, ed. G. H. Jacoby & J. Barnes, 17
* Astropy Collaboration et al. (2013) Astropy Collaboration, Robitaille, T. P., Tollerud, E. J., et al. 2013, A&A, 558, A33
* Bartlett (1948) Bartlett, M. S. 1948, Nature, 161, 686
* Boissay et al. (2012) Boissay, R., Chenevez, J., Bozzo, E., et al. 2012, The Astronomer’s Telegram, 3984, 1
* Buccheri et al. (1983) Buccheri, R., Bennett, K., Bignami, G. F., et al. 1983, A&A, 128, 245
* Burderi et al. (2001) Burderi, L., Possenti, A., D’Antona, F., et al. 2001, ApJ, 560, L71
* Chabrier et al. (2000) Chabrier, G., Baraffe, I., Allard, F., & Hauschildt, P. 2000, ApJ, 542, 464
* Chakrabarty & Jonker (2020) Chakrabarty, D., & Jonker, P. G. 2020, The Astronomer’s Telegram, 14146, 1
* Chakrabarty et al. (2013) Chakrabarty, D., Jonker, P. G., & Markwardt, C. B. 2013, The Astronomer’s Telegram, 4886, 1
* Da Costa-Luis et al. (2020) Da Costa-Luis, C., Larroque, S. K., Altendorf, K., et al. 2020, tqdm: A fast, Extensible Progress Bar for Python and CLI, v.v4.47.0, Zenodo, doi:10.5281/zenodo.3912045
* Degenaar et al. (2017) Degenaar, N., Pinto, C., Miller, J. M., et al. 2017, MNRAS, 464, 398
* Deloye & Bildsten (2003) Deloye, C. J., & Bildsten, L. 2003, The Astrophysical Journal, 598, 1217–1228
* Di Salvo et al. (2008) Di Salvo, T., Burderi, L., Riggio, A., Papitto, A., & Menna, M. T. 2008, MNRAS, 389, 1851
* Di Salvo & Sanna (2020) Di Salvo, T., & Sanna, A. 2020, arXiv e-prints, arXiv:2010.09005
* Ducci et al. (2020) Ducci, L., Chenevez, J., Sidoli, L., et al. 2020, The Astronomer’s Telegram, 14119, 1
* Eggleton (1983) Eggleton, P. P. 1983, ApJ, 268, 368
* Falanga et al. (2012) Falanga, M., Kuiper, L., Poutanen, J., et al. 2012, A&A, 545, A26
* Falanga et al. (2005) —. 2005, A&A, 444, 15
* Gendreau et al. (2012) Gendreau, K. C., Arzoumanian, Z., & Okajima, T. 2012, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 8443, Space Telescopes and Instrumentation 2012: Ultraviolet to Gamma Ray, 844313
* Gierliński et al. (2002) Gierliński, M., Done, C., & Barret, D. 2002, MNRAS, 331, 141
* Gierliński & Poutanen (2005) Gierliński, M., & Poutanen, J. 2005, MNRAS, 359, 1261
* Ginzburg & Quataert (2020) Ginzburg, S., & Quataert, E. 2020, MNRAS, 495, 3656
* Hameury (2020) Hameury, J. M. 2020, Advances in Space Research, 66, 1004
* Harding & Gaisser (1990) Harding, A. K., & Gaisser, T. K. 1990, ApJ, 358, 561
* Heinke et al. (2015) Heinke, C. O., Bahramian, A., Degenaar, N., & Wijnands, R. 2015, MNRAS, 447, 3034
* Hunter (2007) Hunter, J. D. 2007, Computing in Science and Engineering, 9, 90
* Illarionov & Sunyaev (1975) Illarionov, A. F., & Sunyaev, R. A. 1975, A&A, 39, 185
* LaMarr et al. (2016) LaMarr, B., Prigozhin, G., Remillard, R., et al. 2016, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 9905, Space Telescopes and Instrumentation 2016: Ultraviolet to Gamma Ray, ed. J.-W. A. den Herder, T. Takahashi, & M. Bautz, 99054W
* Landau & Lifshitz (1989) Landau, L. D., & Lifshitz, E. M. 1989, The Classical Theory of Fields, 4th rev. Eng. ed. (Oxford: Pergamon)
* Lasota (2001) Lasota, J.-P. 2001, New A Rev., 45, 449
* Luo et al. (2020) Luo, J., Ransom, S., Demorest, P., et al. 2020, arXiv e-prints, arXiv:2012.00074
* Marino et al. (2017) Marino, A., Di Salvo, T., Gambino, A. F., et al. 2017, A&A, 603, A137
* Marino et al. (2019) Marino, A., Di Salvo, T., Burderi, L., et al. 2019, A&A, 627, A125
* Mukherjee et al. (2015) Mukherjee, D., Bult, P., van der Klis, M., & Bhattacharya, D. 2015, MNRAS, 452, 3994
* Nelemans et al. (2001) Nelemans, G., Portegies Zwart, S. F., Verbunt, F., & Yungelson, L. R. 2001, A&A, 368, 939
* Nelemans et al. (2010) Nelemans, G., Yungelson, L. R., van der Sluys, M. V., & Tout, C. A. 2010, MNRAS, 401, 1347
* Nelson et al. (1986) Nelson, L. A., Rappaport, S. A., & Joss, P. C. 1986, ApJ, 304, 231
* Ng et al. (2020) Ng, M., Ray, P. S., Strohmayer, T. E., et al. 2020, The Astronomer’s Telegram, 14124, 1
* Nieder et al. (2020) Nieder, L., Clark, C. J., Kandel, D., et al. 2020, ApJ, 902, L46
* Oliphant (2007) Oliphant, T. E. 2007, Computing in Science and Engineering, 9, 10
* Paczynski & Sienkiewicz (1981) Paczynski, B., & Sienkiewicz, R. 1981, ApJ, 248, L27
* Papitto & de Martino (2020) Papitto, A., & de Martino, D. 2020, arXiv e-prints, arXiv:2010.09060
* Patruno et al. (2009) Patruno, A., Altamirano, D., Hessels, J. W. T., et al. 2009, ApJ, 690, 1856
* Perez & Granger (2007) Perez, F., & Granger, B. E. 2007, Computing in Science and Engineering, 9, 21
* Peters (1964) Peters, P. C. 1964, Phys. Rev., 136, B1124
* Poutanen & Gierliński (2003) Poutanen, J., & Gierliński, M. 2003, MNRAS, 343, 1301
* Prigozhin et al. (2016) Prigozhin, G., Gendreau, K., Doty, J. P., et al. 2016, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 9905, Space Telescopes and Instrumentation 2016: Ultraviolet to Gamma Ray, ed. J.-W. A. den Herder, T. Takahashi, & M. Bautz, 99051I
* Pylyser & Savonije (1988) Pylyser, E., & Savonije, G. J. 1988, A&A, 191, 57
* Pylyser & Savonije (1989) Pylyser, E. H. P., & Savonije, G. J. 1989, A&A, 208, 52
* Ransom et al. (2002) Ransom, S. M., Eikenberry, S. S., & Middleditch, J. 2002, AJ, 124, 1788
* Rappaport & Joss (1984) Rappaport, S., & Joss, P. C. 1984, ApJ, 283, 232
* Rappaport et al. (1982) Rappaport, S., Joss, P. C., & Webbink, R. F. 1982, ApJ, 254, 616
* Romani et al. (2016) Romani, R. W., Graham, M. L., Filippenko, A. V., & Zheng, W. 2016, ApJ, 833, 138
* Spitkovsky (2006) Spitkovsky, A. 2006, ApJ, 648, L51
* Standish (1998) Standish, E. M. 1998, JPL Planetary and Lunar Ephemerides, DE405/LE405, JPL Interoffice Memo 312.F-98-048 (Pasadena: NASA Jet Propulsion Laboratory)
* van den Eijnden et al. (2018) van den Eijnden, J., Degenaar, N., Pinto, C., et al. 2018, MNRAS, 475, 2027
* van den Eijnden et al. (2020) van den Eijnden, J., Degenaar, N., Wijnands, R., et al. 2020, The Astronomer’s Telegram, 14129, 1
* Verbunt (1993) Verbunt, F. 1993, ARA&A, 31, 93
* Verbunt & van den Heuvel (1995) Verbunt, F., & van den Heuvel, E. P. J. 1995, in X-ray Binaries, ed. W. H. G. Lewin, J. van Paradijs, & E. P. J. van den Heuvel (Cambridge, Cambridge Univ. Press), 457–494
* Wadiasingh et al. (2018) Wadiasingh, Z., Venter, C., Harding, A. K., Böttcher, M., & Kilian, P. 2018, ApJ, 869, 120
* Wijnands et al. (2006) Wijnands, R., in’t Zand, J. J. M., Rupen, M., et al. 2006, A&A, 449, 1117
* Wilkinson et al. (2011) Wilkinson, T., Patruno, A., Watts, A., & Uttley, P. 2011, MNRAS, 410, 1513
* Zapolsky & Salpeter (1969) Zapolsky, H. S., & Salpeter, E. E. 1969, ApJ, 158, 809
# Attention Based Video Summaries of Live Online Zoom Classes
Hyowon Lee, Mingming Liu, Hamza Riaz, Navaneethan Rajasekaren,
Michael Scriney, Alan F. Smeaton
Insight Centre for Data Analytics
Dublin City University,
Glasnevin, Dublin 9, Ireland.
<EMAIL_ADDRESS>
###### Abstract
This paper describes a system developed to help University students get more
from their online lectures, tutorials, laboratory and other live sessions. We
do this by logging their attention levels on their laptops during live Zoom
sessions and providing them with personalised video summaries of those live
sessions. Using facial attention analysis software we create personalised
video summaries composed of just the parts where a student’s attention was
below some threshold. We can also factor in other criteria into video summary
generation such as parts where the student was not paying attention while
others in the class were, and parts of the video that other students have
replayed extensively which a given student has not. Attention and usage based
video summaries of live classes are a form of personalised content, they are
educational video segments recommended to highlight important parts of live
sessions, useful in both topic understanding and in exam preparation. The
system also allows a Professor to review the aggregated attention levels of
those in a class who attended a live session and logged their attention
levels. This allows her to see which parts of the live activity students were
paying most, and least, attention to. The Help-Me-Watch system is deployed and
in use at our University in a way that protects students’ personal data,
operating in a GDPR-compliant way.
## Introduction
The conventional model of teaching at University level has been changed,
possibly forever, as a result of the COVID-19 pandemic. Prior to this, most
Universities and Colleges had moved to a method of teaching students based on
a combination of stand up lectures to large or small classes, smaller group
interactive tutorials, laboratory sessions, peer mentoring and others. This
was backed up by online resources and access to course materials like notes,
presentations, links to online pre-recorded videos, quizzes, and other
interactive artefacts. Online Virtual Learning Environments (VLEs) have
emerged as a platform to gather and manage such resources and the use of
systems including Moodle (?) and Blackboard (?) had become widespread.
MOOCs have also played a role in the digitisation of education with
Universities putting some or most of their teaching content online for both
their own students as well as for a wider population of those interested in
learning, either for formal qualification or just for broadening their
knowledge (?). The effect of this has been that the demands of students when
they are learning online are different to the on-campus environment and we are
still trying to understand what those new requirements are. Rather than being
driven by pedagogical concerns, much of the move to online learning, both the
gradual shift over the last decade and the recent stampede as a result of the
pandemic, has been done initially because it was possible and then because it
was necessary. We have not moved online because it was pedagogically correct,
and while it brings convenience and reach, as well as economies of scale, we
don’t know if the pedagogy should still be the same (?).
As reported in (?), who conducted a recent extensive bibliometric analysis of
the field, the rapid growth in the use of digital technologies in higher level
education is not restricted to just the sciences and engineering disciplines
but is right across the field, including the social sciences. This move to
educating students digitally and using digital technologies is partially as a
result of the development of technology itself which enables it, and partly
attributable to the changing nature of our students. Today, our students are
comfortable with using technology because they’ve grown up with it and thus
they can embrace the use of it as part of their education. In fact as pointed
out in (?), students as typical representatives of Generation Z are more
comfortable with technology than the typical Generation X, who correspond to
their Professors.
In early 2020 the higher level education sector, like most others, had to
change as a result of the pandemic. We had to pivot from a model of on-campus
activities to one where students were online, remote, perhaps had moved back
home, and accessing their teaching materials and classes using VLEs and video
conferencing systems like Zoom, Skype or Microsoft Teams. Because of the rate
of spread of COVID-19, this happened almost overnight in most places and with
almost zero preparation time, the default was to continue with scheduled
classes and other interactive sessions taking place as online video
conferences.
While this helped to get most of us through to the end of that teaching
semester, the feedback from students was that many found it difficult to
maintain interest and motivation for attending online lectures over Zoom
because they were isolated from others, working alone, and online synchronous
lectures were a lacklustre equivalent of face-to-face sessions that did not
transfer well (?). This illustrated that the pedagogy of online is not the
same as on-campus. Many also had internet access difficulties.
As a result of this experience, many Universities have since moved to flipped
classrooms or hybrid delivery as a form of blended learning (?) where students
are required to prepare for online classes in advance through pre-recorded
video lectures or material to be read. The online live classes are supposedly
more engaging as they are more interactive and can focus on problem-based
learning activities.
However, while this may improve the material, the other factors remain, namely
that students are on Zoom, likely working alone and isolated from their peers,
and they are easily demotivated. That means that the online class experience
for them is lessened because there are real world distractions, the level of
engagement afforded by video-conference sessions as a whole is poor and the
amount of social interaction with others while at an online lecture, is
minimal (?).
In the work introduced in this paper we use AI techniques to address these
shortcomings using the Help-Me-Watch system which is in use at our University.
This highlights for students those parts of the live and interactive sessions
they missed because, although they were attending, they were distracted or not
paying sufficient attention. We do this by generating a personalised video
summary of recommended content from a recording of the live session based on
their (lack of) attention during that live session. The approach presents
several interesting challenges in the way video summaries are generated based
on ambient monitoring of students’ attention levels, the playback usage of
different parts of the video recording and the ways which aggregated feedback
to the Professor can be generated and presented.
## Student Monitoring
Prior to the arrival of the COVID-19 pandemic, University education had
already taken steps towards virtualisation with the use of online platforms,
namely VLEs, for providing online access to learning materials. This has
helped the teaching and learning process by providing easy access as well as
allowing interaction through quizzes, class polls and surveys, computer
programming environments, etc.
However, notwithstanding the reservations some have about its use, for example
(?), VLE access log data, which records student access to online resources, has
been used for over a decade in a field known as learning analytics. The
applications for this are early detection of students who are struggling with
their learning (?) as well as personalising the delivery of educational
content (?) and even in predicting course outcome in terms of final grades in
examinations (?).
While this may be regarded as a form of student monitoring, learning analytics
has positive connotations compared to other types of person-monitoring, and
there are many examples of it having a positive impact on learning and on
outcomes. In general, similar to that reported in (?), students are not
affected in their learning despite knowing that they are being monitored
ambiently, just like in other aspects of society we do not change our
behaviour when we know our activities are logged when we are online, in city
spaces with CCTV, etc. Yet despite the widespread use of learning analytics,
the evidence for the success in its use in improving student learning remains
anecdotal rather than systematic (?) and it will take time for these benefits
to become accepted across the board.
## Video Summarisation
Automatic video summarization is the task of generating a short version of a
longer video by selecting the highlights or gist of the original video, thus
compacting the storyline into a reduced timespace. It is not a new topic as
this review article from 2008 shows (?). Video summarization approaches depend
on the genre of the video being summarised meaning that we will adopt
different strategies for summarizing different video types. For example if we
are summarizing a video with a storyline, like a movie which is a thriller,
then we may not want to reveal the ending of the story, whereas for an action
movie we may wish to include the best of the action shots in the summary (?).
If we want to generate a movie trailer which does not reveal the storyline but
includes the scenes with most suspense, as an incentive for the viewer to want
to watch the full movie, then we need semantic understanding of the original
video as was done with the movie Morgan where the trailer was generated using
the IBM Watson system (?).
Generating summaries of sports videos requires a different approach as we want
the most exciting moments in the sporting event to be included in the
highlights. Using cricket as an example, (?) have used a range of cues for
determining the highlights including excitation level as indicated by the
pitch of the commentator’s voice.
Other video genres including CCTV footage, egocentric video, TV news or
documentaries, and lecture presentations, will each have their own differing
criteria as to what should be included in their summary.
The idea of generating a video summary based on direct feedback from viewers
as the video is being watched, has been reported previously. For example,
using the facial expressions of viewers, perception based summaries which
identify the most affective scenes in videos, have been generated (?). This
approach was tested on 8 short video clips of various genres and a range of
emotions were classified from the facial analysis of viewers, including
neutral, happy, surprised, angry, disgust, fear, and sad. Using this it was
shown that it is possible to generate quite elaborate video summaries without
requiring analysis of the original video content.
## The Help-Me-Watch System
We built and deployed a system which generates personal video summaries of
live online Zoom class content for students, called Help-Me-Watch. The design
and information flow in the system is shown in Figure 1, and it operates in 4
phases.
Figure 1: Information flow for the Help-Me-Watch system
* •
The system begins by inviting a Professor to register their forthcoming course
with the system (1). This generates a unique public passcode for all lectures
in the course (2) which is shared with students for them to use for that
course (3). Also in advance of the live session, students download and install
our Help-Me-Watch application on their laptop (4). The system also generates a
private passcode for the Professor which she keeps private.
* •
During the class, students and the Professor connect to Zoom at the appointed
time for the live session and students also run the Help-Me-Watch app on their
computer (Mac or Windows) (5), entering the public passcode. This downloads the
course code and title and ensures that attention level data from students
attending different courses are kept separate. During the live class,
students’ webcams compute their individual attention levels and stream this
data back to our server (6) for processing.
* •
Some time after the live session, when a student wants to do a post-class
review of the material presented during the live class, the live session which
has been recorded in full on the Zoom platform (7) is automatically summarised
for that student using their own attention level data, and optionally the
attention level data from other students in the class and usage data on which
parts of the video other students have played. Each student is thus able to
review their own personalised summary version of the lecture on their own
laptop or mobile device (8).
* •
Also at some point after the class, the Professor can review the class (9) by
entering their private passcode for that course, so students cannot access
this facility, and the aggregated and anonymised student attention data for
the live session is presented (10) as feedback into what parts of their
lecture attracted most, and least, attention from the class. This is presented
as a stacked line graph and it is a proxy from the kind of visual body
language any presenter would get in front of any audience, except it is
retrospective and not live, though live feedback is an option we will pursue
in the future.
Figure 2 shows a screengrab where a student has used Help-Me-Watch for 6
recorded lectures for courses CA358 and CA349. She has chosen to review the
live lecture for CA349 IT Architecture, a class held on 20th October 2020
between 11am and 12pm. Figure 2 shows that the student can choose between
replaying the full original video (53 minutes duration) or playing just the
parts where her attention levels dropped below a threshold (18 minutes
duration), or automatically generated video summaries where the individual
video segments which are appended together to make the summary are of
5-minutes (25 minutes overall), 2-minutes (18 minutes overall) or 30 seconds
(9 minutes overall) duration.
In the case of Figure 2 the student has chosen to view the 18 minutes of the
“all I missed” summary and the actual parts that were missed, or where
attention levels dropped below a threshold, are highlighted as red bars on the
screen. The screen also includes an embedded video playback window, with play,
pause and stop controls. The student is about halfway through playing this
summary, with the on-screen material containing a description of the
convolutional neural network used in the ImageNet challenge in 2012, which is
part of the course on IT Architecture.
Figure 2: Screengrab of student replaying a video summary of a past lecture
In addition to using students’ attention level data to generate personalised
video summaries for each student, the attention level data is aggregated and
summarised with the contributions of students anonymised and can be presented
back to the Professor. For this to happen the Professor uses a second
automatically-generated passcode, this time the private passcode, so only the
Professor can access this.
Figure 3 shows a screengrab of the lecture review options for a Professor,
indicating she has allowed Help-Me-Watch to be used for 4 of her lectures to
date as part of her module CA229: Developing Internet Applications. She has
chosen to highlight the lecture which took place on 22 October 2020 between
2pm and 3pm. Figure 3 shows that the class was 48 minutes in duration, that 12
students used Help-Me-Watch and the stacked bar chart shows anonymised
aggregated attention levels from those 12 students. From this we can see that
for the first 20 minutes the lecture was of mid-range interest and then got
interesting towards the middle part, perhaps because the Professor was giving
details of the class assignment. It then tailed off to about 2:45 before
rising again for the remainder of the lecture. The lesson for this Professor
for this particular lecture is that the second half was better than the first
in terms of student attention, and there was something really interesting for
these 12 students in the middle. In a more recent implementation of Help-Me-
Watch we have synchronised the stacked line graph of attention levels with a
video playback window, similar to what is used in, for example, medical
debriefings.
Figure 3: Screengrab for Professor reviewing past lectures
The Help-Me-Watch system has been built, deployed and is in use at Dublin City
University where we are using it to gather usage data and feedback from
Professors and students. In this, the estimation of student attention level
which runs on the app downloaded onto students’ laptops is based on a real-
time eye blink detection algorithm (?) that computes the eye aspect ratio
(EAR) between height and width of the eye. It does this by estimating the
landmark positions around the eye in real time and extracting a single scalar
value. Baseline EAR values across different lectures, both mean and variances,
for each student will vary depending on their ethnicity and we use the values
specific to a student to determine the thresholds for including clips into
their video summaries. As our dataset and the EAR profiles for individual
students grow, we can generate summaries using not just the overall attention
levels across all students attending a class, but also how those attention
levels differ from the individual student baselines.
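A minimal sketch of such an EAR computation is shown below, using the common
six-landmark eye model from the blink-detection literature; the landmark
ordering and function name are illustrative rather than the exact
implementation deployed in Help-Me-Watch.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """EAR = (|p2-p6| + |p3-p5|) / (2 |p1-p4|) for six (x, y) eye landmarks,
    ordered from the outer corner: the ratio drops sharply when the eye closes."""
    p1, p2, p3, p4, p5, p6 = (np.asarray(p, dtype=float) for p in eye)
    return (np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)) / \
           (2.0 * np.linalg.norm(p1 - p4))
```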
The eye aspect ratio is a simple algorithm which is accurate and robust and we
have tested it in our lab with students of different ethnicities, genders,
ages and in different lighting conditions. We have also tested it with and
without reading glasses and with and without facial hair. Other more
sophisticated real-time methods for attention measurement or even emotion
classification could be used but we are satisfied with the robustness of the
present implementation.
However, even though eye gaze as a proxy for attention may be a true
reflection of attention in one-to-one conversations between two people either
in person or on video conferencing, when listing to an online presentation a
user can be paying attention but not looking at the laptop screen. For
example, when taking notes or perhaps looking at a second, larger monitor on
the desk, a student’s eye gaze is not fixed at the webcam. We examine this
issue in the next section.
## Analysis of Usage
To illustrate Help-Me-Watch in action we analyse recordings from an online
Zoom tutorial session where 9 students from the class used the system to log
and upload their attention levels during the 45 minute online tutorial.
Some students had started recording their attention before the start of the
video recording by the Professor or continued their recording after the
recording ended and we delete these readings before/after the lecture Zoom
recording’s time boundaries. Our baseline video summary approach used in this
example is based on “all I missed” and uses 1-minute aggregations of attention
levels. Where a one-minute aggregation is in the lower half of observed
attention levels for that Zoom session only, that minute is included in the
generated summary. Figure 4 and Table 1 show what these summaries look like
for the 9 students. Note that where a student’s attention level was not logged
in a 1-minute window either because s/he arrived late, left early, or was not
looking at their screen, then the missed part forms part of their video
summary.
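A sketch of this baseline selection rule is given below; it assumes one
attention reading per second of the recording, with NaN where nothing was
logged, and the function name is ours.

```python
import numpy as np

def minutes_to_include(att_per_sec, duration_min):
    """Baseline 'all I missed' selection. A minute is included if its mean
    attention is in the lower half of observed values for the session, or if
    it was never logged (late arrival, early departure, looking away)."""
    per_min = np.full(duration_min, np.nan)
    for m in range(duration_min):
        chunk = np.asarray(att_per_sec[m * 60:(m + 1) * 60], dtype=float)
        if chunk.size and not np.all(np.isnan(chunk)):
            per_min[m] = np.nanmean(chunk)
    threshold = np.nanmedian(per_min)          # median = lower-half cutoff
    return [m for m in range(duration_min)
            if np.isnan(per_min[m]) or per_min[m] < threshold]
```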
We ignore students G, H and I because they attended (or recorded their
attention for) much less than the full Zoom tutorial duration. The other 6
full attendances generate summaries of varied duration, from 16 minutes (D) to
27 minutes (A) for a 47 minute tutorial. They are also non-contiguous and
fragmented, with an average of 8 segments appended together to make the
summaries. The segments appearing in these 6 students’ summaries range from 1
minute (18 such segments) up to the longest contiguous segment of 15 minutes
(C).
With the fragmented nature and strict cutoffs for stopping and starting
segments at 1-minute intervals in our baseline algorithm, these may be
difficult for students to view and comprehend. We offset that by inserting a
3-second video gap when there is a skip in the video, so as not to disorient
students.
Figure 4: Video segments included in “all-I-missed” summaries for 9 students.
Table 1: Video segments selected for inclusion in the “all-I-missed” baseline summaries for 9 students.
Student | Summary segments (minutes) | No. Segments | Duration (min)
---|---|---|---
A | 0-9 10-13 17-21 24-28 29-30 34-36 38-44 46-47 | 8 | 27
B | 3-4 9-10 12-13 21-22 23-26 30-34 35-47 | 7 | 23
C | 0-2 3-5 10-11 13-14 15-18 20-21 23-27 32-47 | 8 | 29
D | 2-4 6-7 11-14 15-16 17-18 21-22 24-25 35-41 42-47 | 9 | 16
E | 0-4 6-8 11-15 22-23 28-31 35-38 39-47 | 7 | 25
F | 0-2 15-16 19-20 22-23 27-32 34-36 38-45 | 7 | 18
G | 0-6 12-13 21-26 27-47 | 4 | 32
H | 0-1 3-4 5-6 7-47 | 4 | 38
I | 0-16 18-24 31-47 | 3 | 38
The baseline summarisation strategy favours including material below some
computed mean attention level which does not factor in any variance of that
attention level for the student or for that Zoom session. As a hypothetical
example, we could have a student with a constant average attention level of
0.3 and dropping to 0.2 for just the last minute in which case everything
except that last minute will be above the mean. We could address the
fragmented nature by varying the threshold so as to reduce the actual number
of segments included in the summary but first we will look at the variance in
attention for the 9 students.
We regard the series of raw per-second attention levels and the 1-minute
attention levels as being a stationary time series in the sense that the means
and variances for each student are constant over time and not subject to some
evolving change during a Zoom class. For such time series, historical
volatility, denoted $\sigma$ (?; ?), is a statistical measure widely used in
economics and finance by analysts and traders in the creation of investing
strategies. Historical volatility is the degree of variation over time,
usually measured by the standard deviation of logarithmic changes of attention
levels. We computed the raw per-second volatility of attention levels of the 6
students who attended all of the Zoom tutorial under consideration; these
values are shown in Table 2.
Table 2: Attention-level standard deviations and volatility for the 6 students who attended the full Zoom tutorial.
Student | $\sigma$ (per-second attention levels) | $\sigma$ (1-minute attention levels) | 1-minute volatility
---|---|---|---
A | 0.212 | 0.342 | 0.213
B | 0.079 | 0.110 | 0.236
C | 0.236 | 0.264 | 0.220
D | 0.085 | 0.082 | 0.323
E | 0.224 | 0.198 | 0.232
F | 0.071 | 0.112 | 0.193
This analysis shows considerable differences among students in their
concentration levels, with $\sigma$ ranging from 0.342 to 0.198; lower values
indicate more consistent attention levels.
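A minimal sketch of this volatility measure, assuming strictly positive
attention readings, is as follows:

```python
import numpy as np

def volatility(att):
    """Historical volatility: standard deviation of logarithmic changes
    between successive attention readings (zero readings are dropped,
    since the log change is undefined there)."""
    att = np.asarray(att, dtype=float)
    att = att[att > 0]
    return float(np.std(np.diff(np.log(att))))
```

Applied per 1-minute block of per-second readings, the same function yields
the per-minute volatility values discussed below.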
In generating a video summary we know there is a tension between one which is
choppy and fragmented but includes all the parts missed during the initial
online class, compared to a summary which is smoother and with fewer context
switches but which is longer in duration. If a student is constantly chopping
from attending to the online class to focusing on something else or is mind-
wandering, then it follows that with a higher volatility measure they will
have less focused attention to the Zoom class. A personal summary will thus
have to be either fragmented in nature, or include large contiguous segments
in the summary where the student may have already paid attention. This is
likely to be frustrating to view, somewhat like re-viewing an old movie or TV
episode and realising half-way through that you think you saw this before as
you’re remembering parts of it.
A summary generated for a student with low volatility in their attention to an
online class is even more unsatisfactory since their differences between
attention and non-attention are less pronounced, so it is more difficult to
identify which segments to include in the summary. Thus while our baseline
algorithm is a crude first implementation, this analysis supports the approach
of aggregating attention into 1-minute chunks for the purpose of summary
generation.
We also consider a student who may be looking at the lecture intently, then
looking away to take written rather than typed notes and then look back at the
screen, then away to take notes, etc. Here the student’s attention levels on a
per-second basis will flip or toggle a lot between high and low attention
levels over a short period of time. This will be reflected as high volatility
in attention for the 1-minute segment(s) during which this may occur.
We took all attention level values for the 6 participants and for each
participant’s 1-minute blocks we calculated attention volatility for that
minute. Figure 5 shows these volatility levels for each participant for each
minute during the Zoom session. From this we can see a lot of variability in
attention volatility across participants, with participant D (average
volatility 0.323) being highest and participant F (average volatility 0.193)
lowest; these averages are shown in Table 2. We see no correlation in when
participants have high, or low, volatility periods. The message from this is
that we need to do some observational user experiments to interpret volatility
and what is actually causing it in practice, but our initial interpretation is
that high volatility could indicate segments which should be included in
summaries.
Figure 5: Volatility in attention levels during 1-minute spans
## Future Work
Some of our plans for future development are engineering improvements while others are more conceptual. On the engineering side, instead of arbitrarily including whole 1-minute segments in the summary, we will introduce intelligent trimming of the segments incorporated into the summary, based on pauses in the Professor's dialogue or on slide changes where slides are used.
We do not yet cater for recording attention levels of students attending live
online sessions using their smartphones or tablets, just their laptops, though
we do support playback and reviewing on smartphones.
Our feedback to the Professor is a stacked bar chart, colour coded for anonymised students. It is a rich infographic: not only does it show overall class attention (from those who used the Help-Me-Watch app), but it also shows when students joined and left the session and whether rises in overall class attention are due to the majority of the class or just a small number of students. In the case of the feedback to the Professor in Figure 3, we can see that the rise (and subsequent fall) in student attention level around the middle of the lecture is spread almost right across the class, so it is not just attention from a small subset of students: the whole class was paying attention.
We plan to include analysis of lecture content, both audio and visual, to feed back to the Professor what, rather than just where, caused the highs and lows in student attention. In the case of audio, this will follow previous work on summarization of sports video, where the excitation level of the commentator, in terms of voice pitch, is indicative of something exciting happening in the sports broadcast. Similarly, we will analyse the visual content to determine how student attention behaves when PowerPoint slides are on-screen for too long and the Professor's face is off-screen, whether animations make a difference to student attention, and whether slide material should be revealed slowly, one bullet point at a time, or presented all at once. Data we are gathering from use of the Help-Me-Watch system will allow personalised feedback to Professors on which features of their presentation style work and which do not, in terms of grabbing and retaining student engagement.
Finally we would like to offer the Professor the opportunity to flag parts of
the content which should be included in everyone’s summary, i.e. to indicate
the important parts of the lecture which no student should miss.
## Conclusions
In this paper we introduced a system which uses relatively modest AI techniques to generate personalised video summaries of online classes for students, to help with class revision and to address some of the shortcomings of online learning. The system, called Help-Me-Watch, allows educational content to be recommended to students based on their own and, in future, other students' attention levels during the live classes.
The system is deployed in a real-world setting in our University and is actively gathering usage data. This will allow us to carry out A/B testing to compare the usefulness of different approaches to generating video summaries, and a series of planned user studies will provide further feedback on the system.
## Acknowledgements
This research was partly supported by Science Foundation Ireland under Grant
Number SFI/12/RC/2289_P2, co-funded by the European Regional Development Fund.
The research was also supported by the Google Cloud COVID-19 Credits Program
for our cloud based development work.
## References
* [Azcona, Hsiao, and Smeaton 2018] Azcona, D.; Hsiao, I.-H.; and Smeaton, A. F. 2018. Personalizing computer science education by leveraging multimodal learning analytics. In 2018 IEEE Frontiers in Education Conference (FIE), 1–9. IEEE.
* [Azcona, Hsiao, and Smeaton 2019] Azcona, D.; Hsiao, I.-H.; and Smeaton, A. F. 2019. Detecting students-at-risk in computer programming classes with learning analytics from students’ digital footprints. User Modeling and User-Adapted Interaction 29(4):759–788.
* [Cilliers 2017] Cilliers, E. J. 2017. The challenge of teaching generation z. PEOPLE: International Journal of Social Sciences 3(1).
* [Gil-Jaurena and Domínguez 2018] Gil-Jaurena, I., and Domínguez, D. 2018. Teachers’ roles in light of massive open online courses (MOOCs): Evolution and challenges in higher distance education. International Review of Education 64(2):197–219.
* [González-Zamar et al. 2020] González-Zamar, M.-D.; Abad-Segura, E.; Luque de la Rosa, A.; and López-Meneses, E. 2020. Digital education and artistic-visual learning in flexible university environments: Research analysis. Education Sciences 10(11):294.
* [Heaton-Shrestha et al. 2005] Heaton-Shrestha, C.; Edirisingha, P.; Burke, L.; and Linsey, T. 2005. Introducing a VLE into campus-based undergraduate teaching: Staff perspectives on its impact on teaching. International Journal of Educational Research 43(6):370–386.
* [Hong and Lee 2017] Hong, Y., and Lee, Y.-J. 2017. A general approach to testing volatility models in time series. Journal of Management Science and Engineering 2(1):1–33.
* [Ifenthaler, Mah, and Yau 2019] Ifenthaler, D.; Mah, D.-K.; and Yau, J. Y.-K. 2019. Utilising learning analytics for study success: Reflections on current empirical findings. In Utilizing learning analytics to support study success. Springer. 27–36.
* [Javed et al. 2019] Javed, A.; Irtaza, A.; Malik, H.; Mahmood, M. T.; and Adnan, S. 2019. Multimodal framework based on audio-visual features for summarisation of cricket videos. IET Image Processing 13:615–622.
* [Joho et al. 2009] Joho, H.; Jose, J. M.; Valenti, R.; and Sebe, N. 2009. Exploiting facial expressions for affective video summarisation. In Proceedings of the ACM International Conference on Image and Video Retrieval, CIVR ’09. New York, NY, USA: Association for Computing Machinery.
* [Kelly 2020] Kelly, K. 2020. Results from Top Hat’s COVID-19 student survey about online learning. https://philonedtech.com/results-from-top-hats-covid-19-student-survey-about-online-learning/. Last accessed 5 January 2021.
* [Kumar, Gankotiya, and Dutta 2011] Kumar, S.; Gankotiya, A. K.; and Dutta, K. 2011. A comparative study of Moodle with other e-learning systems. In 2011 3rd International Conference on Electronics Computer Technology, volume 5, 414–418. IEEE.
* [Marques et al. 2018] Marques, M.; Ochoa, S. F.; Bastarrica, M. C.; and Gutierrez, F. J. 2018. Enhancing the student learning experience in software engineering project courses. IEEE Transactions on Education 61(1):63–73.
* [Martin and Bolliger 2018] Martin, F., and Bolliger, D. U. 2018. Engagement matters: Student perceptions on the importance of engagement strategies in the online learning environment. Online Learning 22(1):205–222.
* [Money and Agius 2008] Money, A. G., and Agius, H. 2008. Video summarisation: A conceptual framework and survey of the state of the art. Journal of Visual Communication and Image Representation 19(2):121–143.
* [Smeaton et al. 2006] Smeaton, A. F.; Lehane, B.; O’Connor, N. E.; Brady, C.; and Craig, G. 2006. Automatically selecting shots for action movie trailers. In Proceedings of the 8th ACM International Workshop on Multimedia Information Retrieval, MIR ’06, 231–238. New York, NY, USA: Association for Computing Machinery.
* [Smith et al. 2017] Smith, J. R.; Joshi, D.; Huet, B.; Hsu, W.; and Cota, J. 2017. Harnessing AI for augmenting creativity: Application to movie trailer creation. In Proceedings of the 25th ACM International Conference on Multimedia, 1799–1808.
* [Somarajan et al. 2019] Somarajan, S.; Shankar, M.; Sharma, T.; and Jeyanthi, R. 2019. Modelling and analysis of volatility in time series data. In Soft Computing and Signal Processing. Springer. 609–618.
* [Soukupová and Cech 2016] Soukupová, T., and Cech, J. 2016. Real-time eye blink detection using facial landmarks. In 21st Computer Vision Winter Workshop, Rimske Toplice, Slovenia.
* [Stephenson 2018] Stephenson, J. 2018. Teaching & Learning Online: New Pedagogies for New Technologies. Routledge.
* [Stokel-Walker 2020] Stokel-Walker, C. 2020. Universities are using surveillance software to spy on students. WIRED.
* [van der Velde et al. 2020] van der Velde, R.; Blignaut-van Westrhenen, N.; Labrie, N. H.; and Zweekhorst, M. B. 2020. ‘The idea is nice … but not for me’: First-year students’ readiness for large-scale ‘flipped lectures’—what (de)motivates them? Higher Education 1–19.
* [Waheed et al. 2020] Waheed, H.; Hassan, S.-U.; Aljohani, N. R.; Hardman, J.; Alelyani, S.; and Nawaz, R. 2020. Predicting academic performance of students from VLE big data using deep learning models. Computers in Human Behavior 104:106189.
††thanks: This manuscript has been authored by UT-Battelle, LLC under Contract
No. DE-AC05-00OR22725 with the U.S. Department of Energy. The United States
Government retains and the publisher, by accepting the article for
publication, acknowledges that the United States Government retains a non-
exclusive, paid-up, irrevocable, world-wide license to publish or reproduce
the published form of this manuscript, or allow others to do so, for United
States Government purposes. The Department of Energy will provide public
access to these results of federally sponsored research in accordance with the
DOE Public Access Plan (http://energy.gov/downloads/doe-public-access-plan).
# Compressed Sensing for STM imaging of defects and disorder
Brian E. Lerner, Anayeli Flores-Garibay, Benjamin J. Lawrie, Petro Maksymovych
<EMAIL_ADDRESS>
Oak Ridge National Laboratory, 1 Bethel Valley Rd, Oak Ridge, TN 37831
###### Abstract
Compressed sensing (CS) is a valuable technique for reconstructing
measurements in numerous domains. CS has not yet gained widespread adoption in
scanning tunneling microscopy (STM), despite potentially offering the
advantages of lower acquisition time and enhanced tolerance to noise. Here we
applied a simple CS framework, using a weighted iterative thresholding
algorithm for CS reconstruction, to representative high-resolution STM images
of superconducting surfaces and adsorbed molecules. We calculated
reconstruction diagrams for a range of scanning patterns, sampling densities,
and noise intensities, evaluating reconstruction quality for the whole image
and chosen defects. Overall we find that typical STM images can be
satisfactorily reconstructed down to 30% sampling density, already a strong improvement. We furthermore outline limitations of this method, such as
sampling pattern artifacts, which become particularly pronounced for images
with intrinsic long-range disorder, and propose ways to mitigate some of them.
Finally we investigate compressibility of STM images as a measure of intrinsic
noise in the image and a precursor to CS reconstruction, enabling a priori
estimation of the effectiveness of CS reconstruction with minimal
computational cost.
Keywords: Compressed sensing, scanning tunneling microscopy
## I Introduction
Scanning tunneling microscopy (STM) and spectroscopy (STS) have become
indispensable techniques for electronic, structural and magnetic
characterization of surfaces with atomic resolution. STM has enabled
investigations of broken symmetry and vortex interactions in superconductors
Hoffman (2011); Fischer _et al._ (2007), enabled the band structure mapping
of quantum materials Oppliger and Natterer (2020), and was used for the first
observations of spatial LDOS modulations Hasegawa and Avouris (1993); Crommie
_et al._ (1993).
However, small tunneling currents limit the rate of current measurement to the millisecond timescale, so STM measurements are characterized by comparatively long measurement times Oppliger and Natterer (2020). This limitation becomes apparent in experiments that probe extended surface areas, search for rare events such as low-density defects, or must balance high-resolution measurement in real space against energy resolution. In such cases, the ability to accurately reconstruct the underlying periodic and defect structure of nanoscale samples with reduced measurement time is highly desirable.
Compressed sensing (CS) shows potential for meeting this demand. CS is based
on the notion that if a basis set can be found where the signal is sparse (and
as a corollary the signal is compressible in that basis), accurate
reconstruction is possible using fewer measurements than required by the
Shannon-Nyquist Sampling Theorem. CS has been successfully employed for
diverse applications including radio interferometry Honma _et al._ (2014),
nuclear magnetic resonance of protein structure Kazimierczuk and Orekhov
(2011); Holland _et al._ (2011), recovery of correlations of entangled photon
pairs Simmerman _et al._ (2020); Lawrie and Pooser (2013), medical imaging
Lustig _et al._ (2007, 2008) and many more.
An image is compressible by virtue of its sparsity in a transform domain. Most
images in the natural world have a sparse frequency or wavelet representation,
including those generated by scanning microscopies. Indeed, CS has been
successfully implemented in scanning electron He and Carin (2009), atomic
force Oxvig _et al._ (2017), and piezoresponse force microscopy Kelley _et
al._ (2020), and quasiparticle interference imaging by STS Oppliger and
Natterer (2020); Nakanishi-Ohno _et al._ (2016). However, a detailed
understanding of the potential of CS for STM has yet to be developed,
particularly with respect to imaging defects and disorder.
In this paper, we explore the parameter space of a simple CS framework in the
context of representative STM images from surfaces of superconductors and
single molecule layers (introduced in section II). Our specific focus is to
emphasize the quality of reconstruction around defects and as a function of
added noise. In sections III and IV, the basic methodology of CS is laid out,
and the framework is described. Using a soft weighted iterative thresholding
(SWIT) algorithm of practical computational complexity, we performed
reconstructions across variable noise perturbation intensities and sampling
densities. These reconstructions are evaluated for structural similarity index
measure (SSIM) and mean squared error (MSE) and are used to calculate
reconstruction diagrams in section V. Our results reveal that accurate reconstruction can be obtained at sampling densities as low as 20-30% for images with both point and extended nanoscale defects, i.e. with almost 5-fold compression. We also note artifacts arising in the reconstructions, and detail ways of mitigating these deviations through proper algorithm configuration. To apply CS effectively in practice, it is very helpful to understand what types of images can be reconstructed well. In section V, we therefore also characterize our images using compressibility, finding compressibility to be an effective measure of noise in the STM images, and a necessary, albeit not sufficient, criterion for effective CS reconstruction.
## II Experimental Data
We applied CS to representative STM images of a cleaved 100-surface of FeSe
superconductor with Se vacancy defects Huang _et al._ (2016) (Fig. 1a) and
two kinds of adsorbed molecular layers - C60 on Ag(111) (Fig. 1c) and TCNQ
(tetracyanoquinodimethane) on graphite (Fig. 1b). Each of the sample images
has a different size, lattice structure, and point or extended defect.
Moreover, as seen in Fig. 1d, the images represent three kinds of intensity
distribution, centered on low values corresponding to the atomic lattice in
the case of FeSe, a broader and more uniform distribution in the case of TCNQ
and a distinctly bimodal distribution for C60, owing to a single atomic step
of the underlying substrate.
## III CS Basics
Sparsity regularization is a common approach to impose constraints on
underdetermined optimization problems Claerbout and Muir (1973), which gave rise to
CS methodology in the mid-2000s Donoho (2006); Candes _et al._ (2006). CS is
designed to reconstruct a signal $x\in\mathbb{R}^{n\times 1}$ from samples
$y\in\mathbb{R}^{m\times 1}$, where typically $m\ll n$. Successful
reconstruction is possible when $x$ has a sparse representation
$\alpha\in\mathbb{R}^{n\times 1}$, i.e. in some basis the number of
significant coefficients $k$ in $\alpha$ is small compared to $n$. The CS
algorithm computes $\alpha$. Once obtained, $x$ is recovered using the basis
transform $\Psi\in\mathbb{R}^{n\times n}$:
$\displaystyle x=\Psi\alpha$ (1)
The sampling process has a matrix representation $\Phi\in\mathbb{R}^{m\times
n}$ constructed by stacking each measurement vector:
$\displaystyle\Phi x=y$ (2)
Substituting eq. 1 for $x$ in eq. 2 and setting $A=\Phi\Psi$ we have:
$\displaystyle A\alpha=y$ (3)
CS provides a solution $\alpha$ for this underdetermined system of equations by
minimizing the sparsity of $\alpha$ under the constraints of eq. 3, expressed
as:
$\displaystyle\min\|\alpha\|_{\ell_{0}}\quad\text{s.t.}\quad A\alpha=y$ (4)
While this provides an exact solution, $\ell_{0}$ minimization is a
combinatorial optimization problem that is computationally expensive, and
intractably so for large signals Candes _et al._ (2006). Fortunately, the
$\ell_{1}$ norm can be substituted to convert the problem into one of convex
optimization, where for most inputs, $\alpha$ is recovered exactly Candes _et
al._ (2006).
Figure 1: STM images of FeSe (a), TCNQ (b), and C60 (c), with representative
defects magnified in each inset. (d) The distribution of normalized constant-
current STM height for each image.
## IV Framework
The CS framework can utilize a variety of 1) sampling matrices $\Phi$, 2)
transform matrices $\Psi$, and 3) optimization algorithms. $\Psi$ should
necessarily be chosen to ensure sparsity in the transform domain, but it
should also be incoherent with $\Phi$. The algorithm minimizes the sparsity in
$\alpha$ while remaining correlated to the measurements $y$ (eq. 3). In our
reconstructions, we use Lissajous and rotated line trajectories for sampling
patterns, the discrete cosine transform (DCT), and a SWIT algorithm. The
elements of this framework, with special regard to their applicability for
STM, are discussed in the following.
### IV.1 Transform Matrix
STM images often exhibit a large amount of order and are generally smooth
(i.e. differentiable in the absence of noise). As a result, the images lend
themselves to sparsity in the DCT basis. The DCT transform matrix also has the
advantage of being maximally incoherent with point sampling matrices Candes
and Wakin (2008), and has a fast matrix implementation Arildsen _et al._
(2015). This transform has been utilized in previous applications of CS
Romberg (2008); Jensen _et al._ (2013); Anderson _et al._ (2013), and has
historically been used for JPEG compression Wallace (1992). The discrete
wavelet transform (DWT) is another commonly used dictionary in compressed
sensing, though it works most efficiently with dense sampling matrices with
random entries like those used for single-pixel imaging and is less incoherent
than DCT for point sampling matrices Arildsen _et al._ (2015).
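For reference, the orthonormal 2-D DCT pair used as $\Psi$ can be realized with SciPy (a minimal sketch; the wrapper names are ours):

```python
from scipy.fft import dctn, idctn

def dct2(img):
    # Forward orthonormal 2-D DCT: image -> (sparse) coefficient array alpha.
    return dctn(img, norm='ortho')

def idct2(alpha):
    # Inverse transform, i.e. x = Psi alpha in eq. (1).
    return idctn(alpha, norm='ortho')
```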
Figure 2: DCTs of (a) FeSe, (b) TCNQ, and (c) C60. (d) The intensity of the
diagonal coefficients for each DCT, as well as the DCT of an array of random
Gaussian noise, which demonstrate varying sparsity levels.
### IV.2 Sampling Matrix
When scanning a surface, it is conventional to use a raster scan, where the
probe traverses the sample in a series of alternating lines, resulting in an
evenly sampled grid. The speed of the probe and the sampling frequency are set
based on the demands of the experiment. While the design of the sampling
matrix $\Phi$ in other CS applications is often flexible (programmable with a
spatial light modulator for optical CS applications, for instance), we are
constrained to sampling along the continuous path of the probe. Here, since we
are concerned with the algorithmic aspects of the reconstruction, we chose to
use pre-existing STM images and resample them with smooth Lissajous (Fig. 3d)
and rotated line (Fig. 3a) patterns which make the methods more compatible
with fast scanning. The sampling can furthermore be randomized along the
sampling path, but we have not seen a significant impact from such
randomization.
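To illustrate how such a trajectory can be rasterized into a point-sampling mask (our sketch; the patterns used in this work were generated with the magni package, and the frequencies and oversampling factor below are arbitrary):

```python
import numpy as np

def lissajous_mask(n, density=0.2, a=7, b=11):
    # Rasterize x = sin(a t), y = sin(b t) onto an n x n boolean grid. The
    # achieved density depends on (a, b); the factor of 4 oversamples t so
    # that the path is connected, and may need tuning.
    t = np.linspace(0, 2 * np.pi, int(4 * density * n * n))
    ix = np.rint((np.sin(a * t) + 1) / 2 * (n - 1)).astype(int)
    iy = np.rint((np.sin(b * t) + 1) / 2 * (n - 1)).astype(int)
    mask = np.zeros((n, n), dtype=bool)
    mask[iy, ix] = True
    return mask
```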
Figure 3: The path of the rotated line pattern is shown in (a), with simulated
start and end points denoted by green and red circles. Despite sparse sampling
of the image (b), decent reconstruction is achieved (c). The same process is
also shown for Lissajous (d-f). Reconstructions in this figure performed for
20% sampling density and 100 iterations.
### IV.3 Optimization Algorithm
There are a variety of reconstruction algorithms that have already been
explored for other CS applications. In the convex optimization class, the
$\ell_{0}$ norm is replaced by the $\ell_{1}$ norm. Greedy pursuit algorithms
use an iterative approach where locally optimal decisions are made in each
iteration. Iterative thresholding Herrity _et al._ (2006) is a type of greedy
pursuit algorithm that has relatively low computational complexity and is
robust to noise. Due to these benefits, we employed a SWIT algorithm as
successfully demonstrated in Oxvig _et al._ (2017). The algorithm works as
follows:
```python
import numpy as np

def swit(A, y, eta_ws, kappa=0.6, eps=1e-3, n_iter=100):
    # Soft weighted iterative thresholding: start from alpha = 0, r = y.
    alpha = np.zeros(A.shape[1])
    r = y.copy()
    for _ in range(n_iter):
        c = A.T @ r                        # c = A^T r
        alpha = eta_ws(alpha + kappa * c)  # weighted soft-threshold step, eq. (5)
        r = y - A @ alpha                  # update the residual
        if np.linalg.norm(r) < eps * np.linalg.norm(y):
            break                          # ||r||_2 < eps * ||y||_2
    return alpha
```
Initialization to $\alpha=0$ can be changed to an educated guess and the
stopping condition can be arbitrarily chosen, while the step size $\kappa$
ensures convergence. The soft weighted thresholding function $\eta_{t}^{ws}$
is implemented as:
$\displaystyle\eta_{t}^{ws}(x)$ $\displaystyle=\frac{1}{w}\,\mathrm{sgn}(x)\left(|wx|-t\right),\quad|wx|-t>0$ (5)
$\displaystyle=0,\quad|wx|-t\leq 0$ (6)
The method for calculating the threshold $t$ is customizable. Here, we set a
fixed value on the number of nonzero coefficients while initializing the
algorithm. In each iteration, the coefficients are weighted as described
above, $t$ is adjusted to maintain the specified sparsity, and coefficients
below $t$ are zeroed. By tuning the weights to model expected DCT dispersion,
weighted iterative thresholding algorithms tend to outperform their non-
weighted counterparts Oxvig _et al._ (2017). Each of the reconstructions
constituting the reconstruction diagrams ran for 100 iterations due to
computational considerations, though in our experiment we found that
reconstruction tends to improve up to around 300 iterations (and sometimes many more) before plateauing.
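As a minimal sketch of eq. (5) together with the sparsity-maintaining threshold rule just described (our illustration; the names `eta_ws` and `make_threshold_fn` are ours, and the Gaussian DCT weight model itself is not reproduced here):

```python
import numpy as np

def eta_ws(x, w, t):
    # Weighted soft threshold, eq. (5): shrink |w*x| by t, zero it below t.
    # Assumes elementwise weights w > 0.
    z = np.abs(w * x) - t
    return np.where(z > 0, np.sign(x) * z / w, 0.0)

def make_threshold_fn(w, k):
    # Per-iteration rule: pick t so that roughly the k largest weighted
    # coefficients survive (1 <= k <= x.size), then apply eq. (5).
    def step(x):
        t = np.sort(np.abs(w * x).ravel())[-k]
        return eta_ws(x, w, t)
    return step
```

A call such as `swit(A, y, make_threshold_fn(w, k))` then reproduces the loop above.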
Figure 4: Reconstructed images for ten, five, and two-fold undersampling for
TCNQ (a-c), C60 (d-f), and FeSe (g-i), with magnified defects in insets. All
reconstructions performed for 100 iterations using the rotated line sampling
pattern.
### IV.4 Quality Assessment
To understand the bounds of reconstruction, we evaluated the SWIT algorithm
while systematically varying the noise intensity $\delta$ and sampling density
$\rho$. While iterative thresholding algorithms are noted for being noise-
robust Qu _et al._ (2010), little investigation has been carried out to
confirm this for reconstruction of STM images. In order to test this, we
generated $1/f$ noise in Python and applied it to pixels along the simulated
measurement path so as to mimic varying noise levels during measurement. The
noise perturbation scale for each image was normalized to range from 0.1–1 of
the highest-peak FWHM in the image’s intensity histogram (Fig. 1d). We used
FWHM as a measure of spread due to the approximately Gaussian shape of the
distributions. We implemented rotated line and Lissajous sampling patterns
across $\rho$ from 0.02–0.5. The patterns used here were generated using magni
Oxvig _et al._ (2014), a compressed sensing Python package for atomic force
microscopy.
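The exact $1/f$ generator is not specified in the text; one common spectral-shaping recipe (our sketch, with a hypothetical function name) is:

```python
import numpy as np

def pink_noise(n, rng=None):
    # 1/f ("pink") noise: filter white noise so that power falls off as 1/f
    # (amplitude as 1/sqrt(f)), then normalize to unit standard deviation.
    rng = np.random.default_rng() if rng is None else rng
    spectrum = np.fft.rfft(rng.standard_normal(n))
    f = np.fft.rfftfreq(n)
    f[0] = f[1]  # avoid division by zero at DC
    noise = np.fft.irfft(spectrum / np.sqrt(f), n)
    return noise / np.std(noise)
```

Scaled copies of such noise can then be added to the pixels visited along the simulated scan path.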
For each reconstruction in this $\delta$–$\rho$ parameter space, the quality
of the reconstructed image was evaluated for SSIM and MSE. SSIM was calculated
using scikit-image’s van der Walt _et al._ (2014) default implementation,
which is adapted from Wang _et al._ (2004). The MSE is derived in the
standard way,
$\displaystyle\frac{1}{N}\sum(\chi-x)^{2}$ (7)
where $N$ is the number of pixels, $x$ is the reconstructed image and $\chi$
is the base image.
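For concreteness, the evaluation step can be sketched with modern scikit-image (the wrapper name `evaluate` is ours; older releases expose SSIM as `skimage.measure.compare_ssim`):

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def evaluate(chi, x):
    # chi: base image, x: reconstruction, both scaled to [0, 1].
    mse = np.mean((chi - x) ** 2)  # eq. (7)
    return ssim(chi, x, data_range=1.0), mse
```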
To perform these reconstructions, we build our CS framework with a DCT
transform, due to the benefits espoused in Sec. IV, in combination with the
noted sampling patterns. We solve eq. 3 using a SWIT algorithm as described in
the code block in Sec. IV, with $\kappa=0.6$ and a stopping condition that
occurs when the ratio of the 2-norm of the residual ($y-A\alpha$) and the 2-norm of
$y$ is less than a tolerance $\epsilon=0.001$. The weights of the soft
thresholding function $\eta_{t}^{ws}$ used in these reconstructions are
adopted from a Gaussian model of DCT structure in Oxvig _et al._ (2017),
which was used to successfully reconstruct AFM images.
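Putting the pieces together, an operator form of the same loop for 2-D images can be sketched as follows (reusing the hypothetical `eta_ws` above; because the DCT is orthonormal, applying $A^{T}$ amounts to scattering the residual back onto the scan path and taking its DCT):

```python
import numpy as np
from scipy.fft import dctn, idctn

def swit2d(img, mask, w, k, kappa=0.6, n_iter=100):
    # Psi = orthonormal 2-D DCT, Phi = point sampling on the boolean `mask`,
    # so A @ alpha corresponds to idctn(alpha, norm='ortho')[mask].
    y = img[mask]
    alpha = np.zeros(img.shape)
    for _ in range(n_iter):
        r = np.zeros(img.shape)
        r[mask] = y - idctn(alpha, norm='ortho')[mask]  # residual on the path
        alpha = alpha + kappa * dctn(r, norm='ortho')   # A^T r gradient step
        t = np.sort(np.abs(w * alpha).ravel())[-k]      # keep ~k weighted coeffs
        alpha = eta_ws(alpha, w, t)                     # eq. (5)
    return idctn(alpha, norm='ortho')
```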
Figure 5: Noise perturbation intensity ($\delta$) vs. sampling density
($\rho$) phase diagrams for reconstructions of TCNQ (a), C60 (b) and FeSe (c),
with relevant reconstructions shown above and below the diagram for each
sample. The parameters used for the reconstructions in the top row are marked
by green dots in the respective diagrams; the bottom row parameters are marked
by yellow dots.
## V Results
Our first observation is that CS is generally very good at reconstructing STM
images even at a sampling density as low as 20% of the original image. To
ascertain that this conclusion applies not only to spatial order in the
images, but also to defect sites, we have identified the latter using scale-space methods for detection of protrusions (using a Laplacian of Gaussian filter), and then built local masks of the defects, comparing reconstruction
in that local region. As seen in the insets of Fig. 4, single vacancies in
FeSe and extended defects in the TCNQ overlayer (missing molecules)
reconstruct well. At 50% sampling density, the reconstructed defects are
indistinguishable from their unsampled counterparts.
The $\delta$–$\rho$ reconstruction diagrams (Fig. 5) demonstrate the method’s
robustness to moderate $1/f$ noise. All reconstructions have high SSIM above
sampling density $\rho\approx 30\%$, which only begins to degrade at noise
perturbations of 0.4 for TCNQ and 0.8 for FeSe. While high-noise distortions
are apparent in the reconstructions of TCNQ (Fig. 5a) and FeSe (Fig. 5c), the
simplicity of FeSe’s vacancies and the regularity of its lattice likely lead
to smoother SSIM falloff at high noise. C60, in stark contrast, has a wholly
noise-independent transition (Fig. 5e). C60 also exhibits a sharp transition
to higher SSIM at sampling density around 30%, which exceeds the transition
point of the other samples by 10-20%. Visual examination of the
reconstructions (Fig. 5h) reveals the presence of sampling pattern artifacts
at low SSIM which disappear after the transition line. The reasons for this
deviation will be discussed below.
Figure 6: The STM images, along with random Gaussian noise (b) and an ordered
lattice (c), were transformed into the DCT basis before being compressed and
inverse transformed. The MSE (normalized against the highest value of each curve) vs. the normalized compressed size, i.e. the compression ratio in the DCT domain, is shown for each image in (a). This procedure is repeated for
different levels of Gaussian noise applied to TCNQ pre-transform (d). (e)
Compression error vs. CS reconstruction error as a function of noise for
varying sampling/compression ratio.
Given that CS is predicated on the principle of compression, we explored the
extent to which our CS results correlate to image compressibility for typical
STM images as well as simulated arrays, one composed of pseudo-random Gaussian
noise (Fig. 6b) and the other an ordered lattice (Fig. 6c). We evaluated
compressibility by transforming each image and keeping a compressed set of the
most significant coefficients, setting the rest to 0 before transforming back
to real space and evaluating the MSE. The pseudo-random noise image displays
the highest error across compression sizes, i.e. it is the most
incompressible, while the ordered lattice is most compressible. STM images
fall between these two extremes, as seen in Fig. 6a. Intriguingly, there is a
very significant difference between individual images, which actually goes
against the trend that may be inferred from the visual inspection of the
original data in Fig. 1. C60, not TCNQ or FeSe, is the most compressible
image, while FeSe is notably less compressible than either TCNQ or C60. The
difference in compressibility stems from the signal to noise ratio that
characterizes these images. To ascertain that this is the case, in Fig. 6d, we
plot compressibility of TCNQ as a function of strength of added noise
(measured as a fraction of the largest signal in the image). The
compressibility curve very clearly traverses the range observed in Fig. 6a,
eventually becoming equivalent to noise. We note that these images were all acquired on different days, with different physical tips and different instrument conditions. The ability to “calibrate” an STM image by its compressibility thus appears to be a valuable measure of data quality and experimental conditions.
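The compressibility test just described is straightforward to reproduce (a sketch with our naming):

```python
import numpy as np
from scipy.fft import dctn, idctn

def compression_error(img, keep_fraction):
    # Keep the largest-magnitude DCT coefficients, zero the rest, invert,
    # and return the MSE against the original image (the Fig. 6 procedure).
    c = dctn(img, norm='ortho')
    k = max(1, int(keep_fraction * c.size))
    thresh = np.sort(np.abs(c).ravel())[-k]
    c[np.abs(c) < thresh] = 0.0
    return np.mean((img - idctn(c, norm='ortho')) ** 2)
```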
We now show that the compressibility of an image generally correlates with its
CS performance. In Fig. 6e we plot the normalized CS reconstruction error vs
the normalized DCT compression error as a function of noise, for three levels
of data compression. For 5-fold compression (20% sampling), the correlation is
reasonably good, which confirms our notion. However, for smaller densities, CS
systematically produces higher error than obtained by DCT compression, which
reduces the correlation between the two techniques. We speculate that these deviations are partly due to CS being sensitive to the compatibility of the sampling and transform matrices, both with each other and with the image, as well as to the algorithm type and configuration.
Figure 7: C60 reconstructions for Lissajous (b,e) and rotated line (c,f)
sampling patterns using a 1% threshold on the number of non-zero coefficients
and 300 iterations. The top reconstructions utilized a wide-variance DCT
weight model (a) which was also used for the reconstructions in Fig. 5. Those
on the bottom utilized a model with a severely limited variance; the relevant
low-frequency corner of this model is shown in the inset of (a). The diagonal
of each model and sample DCT is compared in (d).
A striking disparity, however, appears for C60, which is the most compressible
of the typical STM images (Fig. 6a) but requires the highest sampling density
to achieve quality reconstruction. Interestingly, past the transition line in
both the SSIM (Fig. 5) and MSE (Fig. 9) phase diagrams, C60 generally has the highest SSIM, followed by TCNQ and then FeSe. Resolving this puzzle depends on an
understanding of how and when sampling pattern artifacts appear, as their
presence is the major cause of $\rho$ dependence in the phase transition. We
have found that this type of artifact can be removed by properly configuring
the SWIT algorithm. Small disturbances can be removed by increasing the number
of iterations, but more prominent artifacts require increased iterations
and/or specialized setup of the threshold function (eq. 5).
In each iteration of the SWIT, the threshold function weights each coefficient using a DCT model and, based on a specified threshold ratio, keeps a certain number of coefficients while setting the rest to 0. We show that by setting the threshold ratio to 1% instead of 5%, running for 300 iterations, and minimizing the variance in the weight model, the artifacts can be removed from C60. Reconstruction with the Lissajous pattern was more responsive (Fig. 7b)
to the same DCT-model variance (Fig. 7a) used for the phase diagrams, though
interestingly the rotated line reconstructions improved (Fig. 7f) only with
severely minimized variance (Fig. 7a inset).
To determine the ideal thresholding function parameters, we evaluated C60 and
TCNQ for SSIM across a range of threshold ratios and variances (Fig. 8). We
see that SSIM falls off for TCNQ at low threshold ratios for all variances $\sigma$, and in the limit of low $\sigma$ and threshold ratio, a trend consistent for both sampling patterns. This behavior is expected, as reducing the threshold ratio and decreasing $\sigma$ are both tantamount to applying a low-pass filter. Surprisingly, the filtering at low $\sigma$ and threshold ratio produces distinctly higher SSIM for the defect compared to the global image, though visual inspection reveals intense lattice warping. The defect diagrams for both samples show higher SSIM for rotated line than Lissajous, a difference especially stark for C60. In contrast to TCNQ, which has similar trends in performance for both patterns, C60 is quite different. For Lissajous, the SSIM falls off at threshold ratios around 20% independently of $\sigma$. Rotated line maintains high SSIM across low $\sigma$ for all thresholds, though a transition line develops with increasing $\sigma$ that falls exponentially to very low threshold ratios. At low threshold ratio, C60 is seemingly immune to SSIM degradation, though the defect diagram has a
slight dip at very low threshold. Visual inspection of reconstructions in this
regime reveals heavy and unsatisfactory smoothing which retains a semblance of
the step defect and an accordingly high SSIM. For all samples and patterns in
Fig. 8 though, overlapping high-SSIM regions across global and defect diagrams
reveal an optimal parameter space for defect-lattice reconstruction and
provide a proof-of-principle for effectively tuning the thresholding function
parameters.
Figure 8: SSIM evaluated for reconstructions of TCNQ (a,c) and C60 (b,d)
across varying levels of $\sigma$ (the width of the variance in the DCT weight
model) and threshold ratio (the relative number of nonzero coefficients
used by the optimization algorithm). The top and bottom rows respectively
correspond to reconstructions performed using Lissajous and rotated line
sampling patterns. All reconstructions performed with sampling density
$\rho=0.2$.
To better understand C60’s sensitivity to sampling pattern, we refer back to
its DCT (Fig. 2). Each DCT coefficient distribution features a cluster of
high-magnitude coefficients in the upper left-hand corner, i.e. for low
frequencies. It is important to note that the spread is also dependent on the
image dimensions that dictate the full extent of the DCT frequency range. TCNQ
and FeSe exhibit denser low-frequency clusters and a scattering of high-
magnitude mini-clusters– features not present in the diffuse coefficient
spread for C60. The likely source of this spread is the multitude of
randomized short-range orientations of individual C60 molecules (inset of Fig. 8).
We postulate that the diffuse spread leads to complex frequency-domain
interactions with sampling patterns and the thresholding function, conditions
that make it more difficult to tune the algorithm’s parameters.
Figure 9: Noise perturbation intensity ($\delta$) vs. sampling density
($\rho$) MSE phase diagrams for reconstructions of TCNQ (a,d), C60 (b,e) and
FeSe (c,f) for rotated line (top row) and Lissajous (bottom row) sampling
patterns with defect phase diagrams in the insets. All phase diagrams have
been normalized to their respective maximum MSE.
In our studies, SSIM proved to be a faithful reconstruction quality metric in
terms of capturing the influence of unwanted artifacts. Reconstructions were
also evaluated for MSE, another commonly used quality metric. MSE lacks SSIM’s
useful universal scale, making cross-comparison of images and phase diagrams
more difficult. Furthermore, MSE is not adept at capturing structural
artifacts Wang and Bovik (2009), and this flaw is displayed in phase diagrams
created using the metric (Fig. 9). While they moderately resemble those for
SSIM, these diagrams fail to properly differentiate between good
reconstructions and those marred by artifacts. As a particularly harsh
example, the poorly reconstructed image of TCNQ at noise and sampling density
equal to 0.1 gives a poor SSIM; the MSE, however, sits at a median value. A reverse effect occurs for FeSe at these parameters, but visual inspection of the reconstruction shows long-scale structure largely intact, seemingly further confirming SSIM’s utility. However, the small-scale structure, i.e. the lattice and defects, is perturbed, and MSE may be better at capturing such anomalies.
## VI Conclusions
Our results show that there are significant benefits to using CS for STM, which should also extend to other scanning probe microscopies. The reduction in acquisition time can be sizeable, allowing for more efficient sampling of materials, with greater spatial extent and a higher probability of locating regions of interest. This methodology is readily applicable to imaging of periodic structures, but also to defects and imperfections. We intentionally used a simple framework to set up a baseline on which future improvements in CS reconstruction can be made. It is clear that with proper thresholding initialization, satisfactory reconstruction can be obtained without the presence of sampling pattern artifacts. In order to properly set the weights, however, it is advisable to inform the model with prior imaging of a similar sample. More broadly, CS is a highly extensible framework open to more intelligent and in-situ approaches for determining the most effective sampling path and selecting successful algorithm parameters and transform matrices. Intriguingly, “anomalies” in CS reconstruction, such as the ones we observed with C60, may signal interesting properties of the material, such as the lack of true long-range order or dynamic processes in the experiment, which can then be studied with higher fidelity.
## References
* Hoffman (2011) J. E. Hoffman, Spectroscopic scanning tunneling microscopy insights into Fe-based superconductors, Reports on Progress in Physics 74, 124513 (2011).
* Fischer _et al._ (2007) Ø. Fischer, M. Kugler, I. Maggio-Aprile, C. Berthod, and C. Renner, Scanning tunneling spectroscopy of high-temperature superconductors, Reviews of Modern Physics 79, 353 (2007).
* Oppliger and Natterer (2020) J. Oppliger and F. D. Natterer, Sparse sampling for fast quasiparticle-interference mapping, Physical Review Research 2, 1 (2020).
* Hasegawa and Avouris (1993) Y. Hasegawa and P. Avouris, Direct observation of standing wave formation at surface steps using scanning tunneling spectroscopy, Physical Review Letters 71, 1071 (1993).
* Crommie _et al._ (1993) M. F. Crommie, C. P. Lutz, and D. M. Eigler, Imaging standing waves in a two-dimensional electron gas, Nature 363, 524 (1993).
* Honma _et al._ (2014) M. Honma, K. Akiyama, M. Uemura, and S. Ikeda, Super-resolution imaging with radio interferometry using sparse modeling, Publications of the Astronomical Society of Japan 66, 95 (2014).
* Kazimierczuk and Orekhov (2011) K. Kazimierczuk and V. Y. Orekhov, Accelerated nmr spectroscopy by using compressed sensing, Angewandte Chemie International Edition 50, 5556 (2011).
* Holland _et al._ (2011) D. J. Holland, M. J. Bostock, L. F. Gladden, and D. Nietlispach, Fast multidimensional nmr spectroscopy using compressed sensing, Angewandte Chemie International Edition 50, 6548 (2011).
* Simmerman _et al._ (2020) E. M. Simmerman, H. H. Lu, A. M. Weiner, and J. M. Lukens, Efficient compressive and bayesian characterization of biphoton frequency spectra, Optics Letters 45, 2886 (2020).
* Lawrie and Pooser (2013) B. J. Lawrie and R. C. Pooser, Toward real-time quantum imaging with a single pixel camera, Optics Express 21, 7549 (2013).
* Lustig _et al._ (2007) M. Lustig, D. Donoho, and J. M. Pauly, Sparse mri: The application of compressed sensing for rapid mr imaging, Magnetic Resonance in Medicine 58, 1182 (2007).
* Lustig _et al._ (2008) M. Lustig, D. L. Donoho, J. M. Santos, and J. M. Pauly, Compressed sensing mri, IEEE Signal Processing Magazine 25, 72 (2008).
* He and Carin (2009) L. He and L. Carin, Exploiting structure in wavelet-based bayesian compressive sensing, IEEE Transactions on Signal Processing 57, 3488 (2009).
* Oxvig _et al._ (2017) C. S. Oxvig, T. Arildsen, and T. Larsen, Structure assisted compressed sensing reconstruction of undersampled AFM images, Ultramicroscopy 172, 1 (2017).
* Kelley _et al._ (2020) K. P. Kelley, M. Ziatdinov, L. Collins, M. A. Susner, R. K. Vasudevan, N. Balke, S. V. Kalinin, and S. Jesse, Fast scanning probe microscopy via machine learning: Non-rectangular scans with compressed sensing and gaussian process optimization, Small 16, 2002878 (2020).
* Nakanishi-Ohno _et al._ (2016) Y. Nakanishi-Ohno, M. Haze, Y. Yoshida, K. Hukushima, Y. Hasegawa, and M. Okada, Compressed sensing in scanning tunneling microscopy/spectroscopy for observation of quasi-particle interference, Journal of the Physical Society of Japan 85, 093702 (2016).
* Huang _et al._ (2016) D. Huang, T. A. Webb, C.-L. Song, C.-Z. Chang, J. S. Moodera, E. Kaxiras, and J. E. Hoffman, Dumbbell defects in fese films: A scanning tunneling microscopy and first-principles investigation, Nano Letters 16, 4224 (2016), pMID: 27282020, https://doi.org/10.1021/acs.nanolett.6b01163 .
* Claerbout and Muir (1973) J. F. Claerbout and F. Muir, Robust modeling with erratic data, Geophysics 38, 826 (1973).
* Donoho (2006) D. L. Donoho, Compressed Sensing, IEEE Transactions on Information Theory 52, 1289 (2006).
* Candes _et al._ (2006) E. J. Candes, J. Romberg, and T. Tao, Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information, IEEE Transactions on Information Theory 52, 489 (2006).
* Candes and Wakin (2008) E. J. Candes and M. B. Wakin, An introduction to compressive sampling, IEEE Signal Processing Magazine 25, 21 (2008).
* Arildsen _et al._ (2015) T. Arildsen, C. S. Oxvig, P. S. Pedersen, J. Østergaard, and T. Larsen, Reconstruction algorithms in undersampled afm imaging, IEEE Journal of Selected Topics in Signal Processing 10, 31 (2015).
* Romberg (2008) J. Romberg, Imaging via compressive sampling, IEEE Signal Processing Magazine 25, 14 (2008).
* Jensen _et al._ (2013) T. L. Jensen, T. Arildsen, J. Østergaard, and T. Larsen, in _2013 International Conference on Signal-Image Technology & Internet-Based Systems_ (2013) pp. 130–135.
* Anderson _et al._ (2013) H. S. Anderson, J. Ilic-Helms, B. Rohrer, J. Wheeler, and K. Larson, in _Computational Imaging XI_, Vol. 8657 (SPIE, 2013) pp. 94–105.
* Wallace (1992) G. K. Wallace, The jpeg still picture compression standard, IEEE Transactions on Consumer Electronics 38, xviii (1992).
* Herrity _et al._ (2006) K. K. Herrity, A. C. Gilbert, and J. A. Tropp, in _2006 IEEE International Conference on Acoustics Speech and Signal Processing Proceedings_, Vol. 3 (2006) pp. III–III.
* Qu _et al._ (2010) X. Qu, W. Zhang, D. Guo, C. Cai, S. Cai, and Z. Chen, Iterative thresholding compressed sensing mri based on contourlet transform, Inverse Problems in Science and Engineering 18, 737 (2010).
* Oxvig _et al._ (2014) C. S. Oxvig, P. S. Pedersen, T. Arildsen, J. Østergaard, and T. Larsen, Magni: A Python Package for Compressive Sampling and Reconstruction of Atomic Force Microscopy Images, Journal of Open Research Software 2, e29 (2014).
* van der Walt _et al._ (2014) S. van der Walt, J. L. Schönberger, J. Nunez-Iglesias, F. Boulogne, J. D. Warner, N. Yager, E. Gouillart, T. Yu, and the scikit-image contributors, scikit-image: image processing in Python, PeerJ 2, e453 (2014).
* Wang _et al._ (2004) Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, Image quality assessment: From error visibility to structural similarity, IEEE Transactions on Image Processing 13, 600 (2004).
* Wang and Bovik (2009) Z. Wang and A. C. Bovik, Mean squared error: Love it or leave it? A new look at signal fidelity measures, IEEE Signal Processing Magazine 26, 98 (2009).
###### Acknowledgements.
We gratefully acknowledge Seokmin Jeon and Simon Kelly for their help with
sample preparation for STM experiments with adsorbed molecules. Data analysis
and interpretation was sponsored by the U. S. Department of Energy, Office of
Science, Basic Energy Sciences, Materials Sciences and Engineering Division.
Experimental data was acquired at the Center for Nanophase Materials Sciences,
which is a DOE Office of Science User Facility. Student (BEL, AFG) research
support was provided by the DOE Science Undergraduate Laboratory Internships
(SULI) program.
# $C^{m}$ Semialgebraic Sections Over the Plane
Charles Fefferman, Garving K. Luli
## 1 Introduction
In this paper we settle the two-dimensional case of a conjecture involving
unknown semialgebraic functions with specified smoothness.
Recall that a _semialgebraic set_ $E\subset\mathbb{R}^{n}$ is a union of
finitely many sets of the form
$\\{x\in\mathbb{R}^{n}:P_{1}(x),P_{2}(x),\cdots,P_{r}(x)>0,\text{ and
}Q_{1}(x)=Q_{2}(x)=\cdots=Q_{s}(x)=0\\}$
for polynomials $P_{1},\cdots,P_{r},Q_{1},\cdots,Q_{s}$ on $\mathbb{R}^{n}$.
(We allow the cases $r=0$ or $s=0$.)
A _semialgebraic function_ $\phi:E\rightarrow\mathbb{R}^{D}$ is a function
whose graph $\\{(x,\phi(x)):x\in E\\}$ is a semialgebraic set.
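For a concrete illustration (a standard example, not taken from this paper): $\phi(x)=|x|$ is semialgebraic, since its graph is the union

$\\{(x,y):x>0,\ y-x=0\\}\cup\\{(x,y):-x>0,\ y+x=0\\}\cup\\{(x,y):x=0,\ y=0\\}\text{,}$

each piece being of the form above.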
We define smoothness in terms of $C^{m}$ and $C^{m}_{loc}$. Here,
$C^{m}\left(\mathbb{R}^{n},\mathbb{R}^{D}\right)$ denotes the space of all
$\mathbb{R}^{D}$-valued functions on $\mathbb{R}^{n}$ whose derivatives up to
order $m$ are continuous and bounded on $\mathbb{R}^{n}$.
$C_{loc}^{m}\left(\mathbb{R}^{n},\mathbb{R}^{D}\right)$ denotes the space of
$\mathbb{R}^{D}$-valued functions on $\mathbb{R}^{n}$ with continuous
derivatives up to order $m$. If $D=1$, we write
$C^{m}\left(\mathbb{R}^{n}\right)$ and
$C_{loc}^{m}\left(\mathbb{R}^{n}\right)$ in place of
$C^{m}\left(\mathbb{R}^{n},\mathbb{R}^{D}\right)$ and
$C_{loc}^{m}\left(\mathbb{R}^{n},\mathbb{R}^{D}\right)$, respectively.
To motivate our conjecture, we pose the following problems.
###### Problem 1 (Semialgebraic Whitney Problem; see [43])
Fix $m\geq 0$. Let $\phi:E\rightarrow\mathbb{R}$ be semialgebraic. Suppose
$\phi$ extends to a $C^{m}_{loc}$ function on $\mathbb{R}^{n}$. Does it
necessarily extend to a $C^{m}_{loc}$ semialgebraic function on
$\mathbb{R}^{n}$?
###### Problem 2 (Linear Equations)
Fix $m\geq 0.$ Consider the linear equation
(1) $A_{1}F_{1}+\cdots+A_{D}F_{D}=f$
for unknowns $F_{1},\cdots,F_{D}$ on $\mathbb{R}^{n}$, where
$A_{1},\cdots,A_{D}$, $f$ are given semialgebraic functions. If equation
$\left(\ref{equation}\right)$ admits a $C^{m}_{loc}$ solution
$F_{1},\cdots,F_{D}$, does it necessarily admit a $C^{m}_{loc}$ semialgebraic
solution?
More generally, in place of (1) we can consider underdetermined systems of
linear equations.
Problem 1 was raised by Bierstone and Milman in [43].
Note that $m$ is fixed in the above problems so we are not allowed to lose
derivatives.
Problems 1 and 2 are instances of a more general question. The purpose of this
paper is to settle that question, and in particular provide affirmative
answers to Problems 1 and 2, in the case of
$C^{m}_{loc}\left(\mathbb{R}^{2}\right)$.
To pose our more general question, we set up notations and give a few basic
definitions.
Fix $m\geq 0$. If $F\in C^{m}_{loc}(\mathbb{R}^{n})$ and $x\in\mathbb{R}^{n}$,
we write $J_{x}(F)$ (the “jet” of $F$ at $x$) to denote the $m$-th degree
Taylor polynomial of $F$ at $x$.
Thus, $J_{x}(F)$ belongs to $\mathcal{P}$, the vector space of all such
polynomials.
For $x\in\mathbb{R}^{n}$, $P,Q\in\mathcal{P}$, we define
$P\odot_{x}Q=J_{x}(PQ)$. The multiplication $\odot_{x}$ makes $\mathcal{P}$
into a ring, denoted by $\mathcal{R}_{x}$, the “ring of $m$-jets at $x$”. We
have $J_{x}\left(FG\right)=J_{x}\left(F\right)\odot_{x}J_{x}\left(G\right)$
for $F,G\in C^{m}_{loc}\left(\mathbb{R}^{n}\right)$.
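For a quick illustration (ours, not from the paper): take $n=1$, $m=1$. Then $J_{x}(F)(y)=F(x)+F^{\prime}(x)(y-x)$, and for $P(y)=a_{0}+a_{1}(y-x)$ and $Q(y)=b_{0}+b_{1}(y-x)$ we have

$P\odot_{x}Q=a_{0}b_{0}+\left(a_{0}b_{1}+a_{1}b_{0}\right)(y-x)\text{,}$

i.e. $\odot_{x}$ multiplies Taylor polynomials and truncates the product to degree $m$.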
We consider vector-valued functions
$F=\left(F_{1},\cdots,F_{D}\right):\mathbb{R}^{n}\rightarrow\mathbb{R}^{D}$,
and we write $F\in C^{m}_{loc}\left(\mathbb{R}^{n},\mathbb{R}^{D}\right)$ if
each $F_{i}\in C^{m}_{loc}\left(\mathbb{R}^{n}\right)$. We define
$J_{x}F=\left(J_{x}F_{1},\cdots,J_{x}F_{D}\right)\in\mathcal{P\oplus\cdots\oplus
P}$. Under the natural multiplication
$Q\odot_{x}\left(P_{1},\cdots,P_{D}\right):=\left(Q\odot_{x}P_{1},\cdots,Q\odot_{x}P_{D}\right)\text{,}$
the vector space $\mathcal{P\oplus\cdots\oplus P}$ becomes an
$\mathcal{R}_{x}$ module, which we denote by $\mathcal{R}_{x}^{D}$.
We will discuss $\mathcal{R}_{x}$-submodules of $\mathcal{R}_{x}^{D}$; we
allow both $\left\\{0\right\\}$ and $\mathcal{R}_{x}^{D}$ as submodules of
$\mathcal{R}_{x}^{D}$.
Fix $m,n,D$, and a subset $E\subset\mathbb{R}^{n}$. For each $x\in E$, let
$H\left(x\right)=f\left(x\right)+I\left(x\right)\subset\mathcal{R}_{x}^{D}$
be given, where $f\left(x\right)\in\mathcal{R}_{x}^{D}$ and
$I\left(x\right)\subset\mathcal{R}_{x}^{D}$ is an $\mathcal{R}_{x}$-submodule.
Then the family
(2) $\mathcal{H}=(H(x))_{x\in E}$
is called a “bundle” over $E$. $H(x)$ is called the fiber of $\mathcal{H}$ at
$x$.
###### Remark 1.1
We remark that our notion of bundle differs from the notion of a bundle
considered previously (e.g., [28]). In the present version, we do not require
$E$ to be compact and we require all the fibers $H\left(x\right)$ to be non-
empty.
When $m,n,D$ are not clear from context, we speak of a “bundle with respect to
$C^{m}_{loc}\left(\mathbb{R}^{n},\mathbb{R}^{D}\right)$”.
If $\mathcal{H}$ is given by (2) and $E^{\prime}\subset E$, then we write
$\left.\mathcal{H}\right|_{E^{\prime}}$ to denote the bundle
$\left(H\left(x\right)\right)_{x\in E^{\prime}}$, and refer to
$\mathcal{H}|_{E^{\prime}}$ as the restriction of $\mathcal{H}$ to
$E^{\prime}$.
A “section” of the bundle $\mathcal{H}$ in (2) is a vector-valued function
$F\in C^{m}_{loc}(\mathbb{R}^{n},\mathbb{R}^{D})$ such that $J_{x}F\in H(x)$
for all $x\in E$.
Note that sections $F$ belong to
$C^{m}_{loc}\left(\mathbb{R}^{n},\mathbb{R}^{D}\right)$ by definition.
The bundle (2) is called “semialgebraic” if
$\left\\{\left(x,P_{1},\cdots,P_{D}\right)\in\mathbb{R}^{n}\oplus\mathcal{P}\oplus\cdots\oplus\mathcal{P}:x\in E,\left(P_{1},\cdots,P_{D}\right)\in H\left(x\right)\right\\}$
is a semialgebraic set.
We can now state our general problem.
###### Problem 3
Let $\mathcal{H}=(H(x))_{x\in E}$ be a semialgebraic bundle with respect to
$C^{m}_{loc}\left(\mathbb{R}^{n},\mathbb{R}^{D}\right)$. If $\mathcal{H}$ has
a section, does it necessarily have a semialgebraic section?
Again, we note that sections of $\mathcal{H}$ must belong to $C^{m}_{loc}$ for
fixed $m$, so we are not allowed to lose derivatives.
One checks easily that Problems 1 and 2 are instances of Problem 3.
Indeed, suppose $\phi:E\rightarrow\mathbb{R}$ is semialgebraic, as in Problem
1. Set $\mathcal{H=}\left(H\left(x\right)\right)_{x\in E}$, where
$H\left(x\right)=\left\\{P\in\mathcal{P}:P\left(x\right)=\phi\left(x\right)\right\\}\text{.}$
Then $\mathcal{H}$ is a semialgebraic bundle, and a section of $\mathcal{H}$
is precisely a function $F\in C^{m}_{loc}\left(\mathbb{R}^{n}\right)$ such
that $F=\phi$ on $E$.
Similarly, given an equation (1) as in Problem 2, set
$\mathcal{H=}\left(H\left(x\right)\right)_{x\in\mathbb{R}^{n}}$ with
$H\left(x\right)=\left\\{\left(P_{1},\cdots,P_{D}\right)\in\mathcal{P}^{D}:A_{1}\left(x\right)P_{1}\left(x\right)+\cdots+A_{D}\left(x\right)P_{D}\left(x\right)=f\left(x\right)\right\\}\text{.}$
Then $\mathcal{H}$ is a semialgebraic bundle, and a section of $\mathcal{H}$
is precisely a solution $F=\left(F_{1},\cdots,F_{D}\right)\in
C^{m}_{loc}\left(\mathbb{R}^{n},\mathbb{R}^{D}\right)$ of equation (1).
In this paper, we settle the two-dimensional case of Problem 3.
###### Theorem 1
Let $\mathcal{H}$ be a semialgebraic bundle with respect to
$C^{m}_{loc}\left(\mathbb{R}^{2},\mathbb{R}^{D}\right).$ If $\mathcal{H}$ has
a section, then it has a semialgebraic section.
We give a quick sketch of the proof of Theorem 1.
By a change of coordinates and a partition of unity, we may localize the
problem to a small thin wedge
$\Gamma(c)=\\{(x_{1},x_{2})\in\mathbb{R}^{2}:x_{1}\in\left[0,c\right],0\leq
x_{2}\leq x_{1}\\}.$
More precisely, it is enough to prove that $\mathcal{H}|_{\Gamma(c^{\prime})}$
has a section for sufficiently small $c^{\prime}$.
We may assume also that our bundle
$\mathcal{H=}\left(H\left(x_{1},x_{2}\right)\right)_{\left(x_{1},x_{2}\right)\in\Gamma\left(c\right)}$
satisfies $H\left(\left(0,0\right)\right)=\left\\{0\right\\}$.
We analyze what it means for a given $F=\left(F_{1},\cdots,F_{D}\right)\in
C^{m}_{loc}\left(\mathbb{R}^{2},\mathbb{R}^{D}\right)$ with $J_{(0,0)}F=0$ to
be a section of $\mathcal{H}$. Our analysis produces finitely many
semialgebraic curves $\gamma_{1},\gamma_{2},\cdots,\gamma_{s_{\max}}$ in
$\Gamma\left(c\right)$, and we find that $F$ is a section of $\mathcal{H}$ if
and only if
* •
$F\left(x_{1},x_{2}\right)$ and its $x_{2}$-derivatives up to order $m$
satisfy finitely many linear equations on the $\gamma_{s}$ and
* •
$F$ satisfies finitely many linear equations on
$\Gamma(c)\setminus\left(\gamma_{1}\cup\cdots\cup\gamma_{s_{\max}}\right).$
The curves $\gamma_{s}$ have the form
$\gamma_{s}=\left\\{\left(x,\psi_{s}\left(x\right)\right):x\in\left[0,c\right]\right\\}$
for semialgebraic functions $\psi_{1},\cdots,\psi_{s_{\max}}$ of one variable.
The heart of our proof is to use the above characterization to produce
finitely many linear equations and inequalities for unknown functions
$\xi_{sk}^{l}\left(x\right)$ of one variable
($l=0,\cdots,m;k=1,\cdots,D;s=1,\cdots,s_{\max}$) with the following
properties:
(A)
If $F=\left(F_{1},\cdots,F_{D}\right)\in
C^{m}_{loc}\left(\mathbb{R}^{2},\mathbb{R}^{D}\right)$ is a section of
$\mathcal{H}$ then the functions
(3)
$\xi_{sk}^{l}\left(x_{1}\right)=\left.\partial_{x_{2}}^{l}F_{k}\left(x_{1},x_{2}\right)\right|_{x_{2}=\psi_{s}\left(x_{1}\right)}$
satisfy the above equations and inequalities for $x\in\left[0,c\right]$; and
conversely
(B)
If semialgebraic functions $\xi_{sk}^{l}\left(x\right)$ satisfy the above
equations and inequalities for $x\in\left[0,c\right]$, then for some small
$c^{\prime}<c$ there exists a semialgebraic section
$F=\left(F_{1},\cdots,F_{D}\right)$ of $\mathcal{H}|_{\Gamma{(c}^{\prime}{)}}$
such that (3) holds for $x\in\left[0,c^{\prime}\right]$.
We can easily deduce Theorem 1 from (A) and (B), as follows.
Because $\mathcal{H}|_{\Gamma\left(c\right)}$ has a section, (A) tells us that
the relevant equations and inequalities for the $\xi_{sk}^{l}$ admit a
solution.
Because all functions appearing in those equations and inequalities are
semialgebraic (except perhaps the unknowns $\xi_{sk}^{l}$), it follows easily
that we may take the $\xi_{sk}^{l}\left(x\right)$ to depend semialgebraically
on $x$. Thanks to (B), we obtain a semialgebraic section of
$\mathcal{H}|_{\Gamma\left(c^{\prime}\right)}$, completing the proof of
Theorem 1. See Section 7 for details.
Let us recall some of the literature regarding Problems 1, 2, 3. The
literature on Whitney’s extension problem goes back to the seminal works of H.
Whitney [41, 42], and includes fundamental contributions by G. Glaeser [31],
Yu. Brudnyi and P. Shvartsman [8, 10, 11, 9], E. Bierstone, P. Milman, and W.
Pawłucki [4, 5, 3], as well as our own papers [13, 14, 15, 16, 17, 18, 20, 19,
21, 22, 23, 24, 25, 26]. In the semialgebraic (and $o$-minimal) setting, the
analogue of the classical Whitney extension theorem is due to K. Kurdyka and
W. Pawłucki [34] and A. Thamrongthanyalak [39].
Problem 1 in the setting of $C^{1}_{loc}\left(\mathbb{R}^{n}\right)$ was
settled affirmatively by M. Aschenbrenner and A. Thamrongthanyalak [1]. Our
results on Problem 3 imply an affirmative solution for
$C^{m}_{loc}\left(\mathbb{R}^{2}\right)$. For
$C^{m}_{loc}\left(\mathbb{R}^{n}\right)$ with $m\geq 2$ and $n\geq 3$,
Problems 1, 2, 3 remain open.
The problem of deciding whether a (possibly underdetermined) system of linear
equations of the form (1) admits a $C^{0}_{loc}$ solution was proposed by
Brenner [7], and Epstein-Hochster [12]. Two independent solutions to this
problem appear in Fefferman-Kollár [27]. Fefferman-Luli [30] solved the
analogous problem for $C^{m}_{loc}$ $\left(m\geq 1\right)$. See also [29].
Kollár-Nowak [33] proved by example that an equation of the form (1) may fail
to admit a solution by $C^{0}_{loc}$-rational functions, even though
$A_{1},\cdots,A_{D}$ and $f$ are polynomials and a $C^{0}_{loc}$ solution
$\left(F_{1},\cdots,F_{D}\right)$ exists. They showed that $x_{1}^{3}x_{2}f_{1}+(x_{1}^{3}-(1+x_{3}^{2})x_{2}^{3})f_{2}=x_{1}^{4}$ admits no continuous rational solution $(f_{1},f_{2})\in C^{0}_{loc}(\mathbb{R}^{3},\mathbb{R}^{2})$, although a continuous semialgebraic solution exists; indeed, the existence of a semialgebraic $C^{0}_{loc}$ solution here follows from [40]. Moreover, [33] shows that a solution by $C^{0}_{loc}$ semialgebraic functions exists for Problems 1 and 2 posed over $\mathbb{R}^{2}$, again provided $A_{1},\cdots,A_{D},f$ are polynomials.
A recent paper of Bierstone-Campesato-Milman [2] shows that given a system of
equations (1) with semialgebraic data $A_{i}$, $f$, there exists a function
$r:\mathbb{N}\rightarrow\mathbb{N}$ independent of $f$ such that if the system
(1) admits a $C^{r(m)}_{loc}$ solution, then it admits a semialgebraic
$C^{m}_{loc}$ solution. The result of Bierstone-Campesato-Milman is more
general than the version stated above; it applies to suitable $o$-minimal
structures.
Acknowledgement. We are grateful to Matthias Aschenbrenner, Edward Bierstone,
Jean-Baptiste Campesato, Fushuai (Black) Jiang, Bo’az Klartag, János Kollár,
Pierre Milman, Assaf Naor, Kevin O’Neill, Wiesław Pawłucki, and Pavel
Shvartsman for their interest and valuable comments. We would also like to
thank the participants of the 11th Whitney workshop for their interest in our work, and we thank Trinity College Dublin for hosting the workshop. The first
author is supported by the Air Force Office of Scientific Research (AFOSR),
under award FA9550-18-1-0069, the National Science Foundation (NSF), under
grant DMS-1700180, and the US-Israel Binational Science Foundation (BSF),
under grant 2014055. The second author is supported by NSF Grant DMS-1554733
and the UC Davis Chancellor’s Fellowship.
## 2 Notation and Preliminaries
A function $f:\mathbb{R}^{n}\rightarrow\mathbb{R}$ is called a Nash function
if it is real-analytic and semialgebraic.
Write $B(x,r)$ to denote the ball of radius $r$ about $x$ in $\mathbb{R}^{n}$.
The dimension of a semialgebraic set $E\subset\mathbb{R}^{n}$ is the maximum
of the dimensions of all the imbedded (not necessarily compact) submanifolds
of $\mathbb{R}^{n}$ that are contained in $E$.
We recall a few definitions from the Introduction.
Fix $m,n,D$, and a subset $E\subset\mathbb{R}^{n}$. For each $x\in E$, let
(4)
$H\left(x\right)=f\left(x\right)+I\left(x\right)\subset\mathcal{R}_{x}^{D}$
be given, where $f\left(x\right)\in\mathcal{R}_{x}^{D}$ and
$I\left(x\right)\subset\mathcal{R}_{x}^{D}$ is an $\mathcal{R}_{x}$-submodule.
Then the family
$\mathcal{H}=(H(x))_{x\in E}$
is called a bundle over $E$. $H(x)$ is called the fiber of $\mathcal{H}$ at
$x$.
When $m,n,D$ are not clear from context, we speak of a “bundle with respect to
$C^{m}_{loc}\left(\mathbb{R}^{n},\mathbb{R}^{D}\right)$”.
If $\mathcal{H}$ is given by (4) and $E^{\prime}\subset E$, then we write
$\left.\mathcal{H}\right|_{E^{\prime}}$ to denote the bundle
$\left(H\left(x\right)\right)_{x\in E^{\prime}}$, and refer to it as the
restriction of $\mathcal{H}$ to $E^{\prime}$. If $\mathcal{H}=(H(x))_{x\in E}$
and $\mathcal{H}^{\prime}=(H^{\prime}(x))_{x\in E}$ are bundles,
$\mathcal{H}^{\prime}$ is called a subbundle of $\mathcal{H}$ if
$H^{\prime}(x)\subset H(x)$ for all $x\in E$. We write
$\mathcal{H}\supset\mathcal{H}^{\prime}$ to denote that $\mathcal{H}^{\prime}$
is a subbundle of $\mathcal{H}$.
What we called a “bundle” in [28] we now call a “classical bundle”.
The definition is as follows. Fix $m,n,D$. Let $E\subset\mathbb{R}^{n}$ be
compact. A classical bundle over $E$ is a family
$\mathcal{{H}}=\left({H}\left(x\right)\right)_{x\in E}$ of (possibly empty)
affine subspaces ${H}\left(x\right)\subset\mathcal{P}^{D}$, parametrized by
the points $x\in E$, such that each non-empty ${H}\left(x\right)$ has the form
${H}\left(x\right)=\vec{P}^{x}+\vec{I}\left(x\right)$
for some $\vec{P}^{x}\in\mathcal{P}^{D}$ and some $\mathcal{R}_{x}$-submodule
$\vec{I}\left(x\right)$ of $\mathcal{P}^{D}$.
When $m,n,D$ are not clear from context, we speak of a “classical bundle with
respect to $C^{m}(\mathbb{R}^{n},\mathbb{R}^{D})$”.
We remark again that our notion of bundle differs from the notion of bundles
considered previously (e.g., [28]). In the present version, we do not require
that $E$ be compact and we require all the fibers $H(x)$ to be non-empty.
A section of the bundle $\mathcal{H}$ is a vector-valued function $F\in
C_{loc}^{m}(\mathbb{R}^{n},\mathbb{R}^{D})$ such that $J_{x}F\in H(x)$ for all
$x\in E$. A section of a classical bundle $\mathcal{H}$ is a vector-valued
function $F\in C^{m}(\mathbb{R}^{n},\mathbb{R}^{D})$ such that $J_{x}F\in
H(x)$ for all $x\in E$.
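As a simple illustration (this example is ours): take $D=1$ and, given $f:E\rightarrow\mathbb{R}$, let $H(x)=f(x)+I(x)$, where $f(x)$ is regarded as a constant jet and $I(x)=\\{P\in\mathcal{R}_{x}:P(x)=0\\}$, an ideal of $\mathcal{R}_{x}$. Then a section of $\mathcal{H}=(H(x))_{x\in E}$ is precisely a function $F\in C^{m}_{loc}(\mathbb{R}^{n})$ with $F=f$ on $E$, so deciding whether a bundle of this form admits a (semialgebraic) section is exactly a (semialgebraic) Whitney extension problem.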
## 3 Tools
### 3.1 Glaeser Refinements, Stable Glaeser Refinements
Given a bundle $\mathcal{H}=(H(x))_{x\in E}$ for
$C^{m}_{loc}(\mathbb{R}^{n},\mathbb{R}^{D})$ or a classical bundle
$\mathcal{H}=(H(x))_{x\in E}$ for $C^{m}(\mathbb{R}^{n},\mathbb{R}^{D})$, we
define the Glaeser refinement $\mathcal{H}^{\prime}=(H^{\prime}(x))_{x\in E}$
as follows:
(GR)
Let $x_{0}\in E$. A given $P_{0}\in H(x_{0})$ belongs to $H^{\prime}(x_{0})$
if and only if the following holds. Given $\epsilon>0$, there exists
$\delta>0$ such that for all $x_{1},\cdots,x_{k}\in B(x_{0},\delta)\cap E$,
where $k$ is a large enough constant depending only on $m$, $n$, and $D$,
there exist $P_{i}\in H(x_{i})$ ($i=1,\cdots,k$), such that
$\left|\partial^{\alpha}(P_{i}-P_{j})(x_{i})\right|\leq\epsilon|x_{i}-x_{j}|^{m-|\alpha|},$
for all $|\alpha|\leq m,0\leq i,j\leq k$.
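To illustrate (GR) in the simplest setting (the example is ours, not taken from [28]): let $m=1$, $n=D=1$, $E=\\{0\\}\cup\\{1/j:j\geq 1\\}$, $H(1/j)=\\{P\in\mathcal{P}:P(1/j)=1/j^{2}\\}$, and $H(0)=\mathcal{P}$. Write $P_{0}(x)=a+bx$ for a candidate element of $H^{\prime}(0)$. Taking $x_{0}=0$, $x_{1}=1/j$ and $\alpha=0$ in (GR), and noting $P_{1}(1/j)=1/j^{2}$, we need $|a+b/j-1/j^{2}|\leq\epsilon/j$, i.e., $|aj+b-1/j|\leq\epsilon$, whenever $1/j<\delta$; letting $j\rightarrow\infty$ forces $a=0$ and then $b=0$. Conversely, $0\in H^{\prime}(0)$: choosing $P_{i}=J_{x_{i}}(x\mapsto x^{2})$ (so that $P_{0}=0$), the inequalities in (GR) hold by Taylor’s theorem. Thus $H^{\prime}(0)=\\{0\\}$: a single Glaeser refinement already detects that any $F\in C^{1}_{loc}(\mathbb{R})$ with $F(1/j)=1/j^{2}$ for all $j$ must satisfy $J_{0}F=0$.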
A bundle or a classical bundle $\mathcal{H}$ is Glaeser stable if
$\mathcal{H}^{\prime}=\mathcal{H}$.
Note that the Glaeser refinement $\mathcal{H}^{\prime}$ of $\mathcal{H}$ may
have empty fibers, even if $\mathcal{H}$ has none. In that case, we know that
$\mathcal{H}$ has no sections. If $\mathcal{H}$ is a classical bundle, then so
is $\mathcal{H}^{\prime}$. If $\mathcal{H}$ is a bundle and no fibers of
$\mathcal{H}^{\prime}$ are empty, then $\mathcal{H}^{\prime}$ is a bundle.
Both for bundles and for classical bundles, every section of $\mathcal{H}$ is
a section of $\mathcal{H}^{\prime}$. (See [28] for the case of classical
bundles; the elementary proofs carry over unchanged for bundles.) Note in
particular that if a given bundle $\mathcal{H}$ has a section, then
$\mathcal{H}^{\prime}$ has no empty fibers, hence $\mathcal{H}^{\prime}$ is a
bundle and $\mathcal{H}^{\prime}$ has a section.
Starting from a classical bundle $\mathcal{H}$, or a bundle $\mathcal{H}$ with
a section, we can perform iterated Glaeser refinement to pass to ever smaller
subbundles $\mathcal{H}^{\left(1\right)}$, $\mathcal{H}^{\left(2\right)}$,
etc., without losing sections. We set
$\mathcal{H}^{\left(0\right)}=\mathcal{H}$, and for $l\geq 0$, we set
$\mathcal{H}^{\left(l+1\right)}=\left(\mathcal{H}^{\left(l\right)}\right)^{\prime}$.
Thus, by an obvious induction on $l$, we have
$\mathcal{H=\mathcal{H}}^{\left(0\right)}\supset\mathcal{H}^{\left(1\right)}\supset\cdots$,
yet $\mathcal{H}$ and $\mathcal{H}^{\left(l\right)}$ have the same sections
for all $l\geq 0$.
If $\mathcal{H}=(H(x))_{x\in E}$ is a semialgebraic bundle with respect to $C^{m}_{loc}(\mathbb{R}^{n},\mathbb{R}^{D})$, then, by an obvious induction on $l$, each $H^{(l)}(x)$ depends semialgebraically on $x$, where $\mathcal{H}^{(l)}=(H^{(l)}(x))_{x\in E}.$
In principle, each $\mathcal{H}^{\left(l\right)}$ can be computed from $\mathcal{H}$. We remark that iterated Glaeser refinement stabilizes after finitely many iterations: for a large enough integer $l^{*}$ determined by $m,n,D$, we have $\mathcal{H}^{(l^{*}+1)}=\mathcal{H}^{(l^{*})}$, so that $\mathcal{H}^{(l^{*})}$ is Glaeser stable. (See [28] for the case of classical bundles; the argument, which goes back to Glaeser [31] and Bierstone-Milman-Pawłucki [4, 5], applies unchanged for bundles.) We call $\mathcal{H}^{(l^{*})}$ the stable Glaeser refinement of $\mathcal{H}$.
The main results of [28] give the following:
###### Theorem 2
For a large enough integer constant $l_{\ast}$ determined by $m,n,$ and $D$,
the following holds. Let $\mathcal{H}$ be a classical bundle with respect to
$C^{m}\left(\mathbb{R}^{n},\mathbb{R}^{D}\right)$. Let
$\mathcal{H}^{\left(0\right)},\mathcal{H}^{\left(1\right)},\mathcal{H}^{\left(2\right)},\cdots$
be its iterated Glaeser refinements. Then $\mathcal{H}$ has a section if and
only if $\mathcal{H}^{\left(l_{\ast}\right)}$ has no empty fibers. Suppose
$\mathcal{H}^{\left(l_{\ast}\right)}$ has no empty fibers. Let $x_{0}\in E$
and let $P_{0}$ belong to the fiber of $\mathcal{H}^{\left(l_{\ast}\right)}$
at $x_{0}$. Then there exists a section $F$ of the bundle $\mathcal{H}$, such
that $J_{x_{0}}(F)=P_{0}$. Moreover, there exists a constant $k^{\\#}$
depending only on $m,n,$ and $D$ such that the following holds: Suppose
$\mathcal{H}=(H(x))_{x\in E}$ is a Glaeser stable classical bundle. Assume the
following holds for some constant $M>0$:
* •
Given $x_{1},\cdots,x_{k^{\\#}}\in E$, there exist polynomials
$P_{1},\cdots,P_{k^{\\#}}\in\mathcal{P}^{D}$, with $P_{i}\in H(x_{i})$ for
$1\leq i\leq k^{\\#}$; $|\partial^{\alpha}P_{i}(x_{i})|\leq M$ for all
$|\alpha|\leq m,1\leq i\leq k^{\\#}$; and
$|\partial^{\alpha}(P_{i}-P_{j})(x_{j})|\leq M|x_{i}-x_{j}|^{m-|\alpha|}$ for
all $|\alpha|\leq m,1\leq i,j\leq k^{\\#}$.
Then there exists $F\in C^{m}(\mathbb{R}^{n},\mathbb{R}^{D})$ with
$\|F\|_{C^{m}(\mathbb{R}^{n},\mathbb{R}^{D})}\leq C(m,n,D)M$ and $J_{x}(F)\in
H(x)$ for all $x\in E$.
### 3.2 Puiseux Series
We will use the following elementary result regarding semialgebraic functions.
For a proof, see [32].
###### Lemma 3.1
Suppose $f:\mathbb{R}\rightarrow\mathbb{R}$ is semialgebraic. Then there
exists a polynomial $P\left(z,x\right)\not\equiv 0$ on $\mathbb{R}^{2}$ such
that $P\left(f\left(x\right),x\right)\equiv 0$. Moreover, for each
$x_{0}\in\mathbb{R}$ there exists $\delta>0$ such that $f\left(x\right)$ for
$x\in(x_{0},x_{0}+\delta)$ is given by a convergent Puiseux series.
###### Corollary 3.1
Let $F(x)$ be a semialgebraic function of one variable, satisfying
$|F(x)|=O(x^{p})$ on $(0,c]$ for some given $p$. Then the derivatives of $F$
satisfy $|F^{(k)}(x)|=O(x^{p-k})$ on $(0,c^{\prime}]$ for some $c^{\prime}$.
Similarly, if $F(x)=o(x^{p})$ for $x$ in $(0,c)$, then $F^{(k)}(x)=o(x^{p-k})$
for $x$ in $(0,c^{\prime})$. More generally, $|F^{(k)}(x)|=O(|F(x)|/x^{k})$ on
$(0,c^{\prime})$.
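For instance (an illustration of ours), $F(x)=x^{3/2}-x^{2}$ satisfies $|F(x)|=O(x^{3/2})$ on $(0,1]$, and indeed $F^{\prime}(x)=\frac{3}{2}x^{1/2}-2x=O(x^{1/2})$. The mechanism behind Corollary 3.1 is Lemma 3.1: near $0^{+}$ the function $F$ is given by a convergent Puiseux series $\sum_{q\geq q_{0}}a_{q}x^{q}$ with exponents in $\frac{1}{N}\mathbb{Z}$ for some $N$; the hypothesis $|F(x)|=O(x^{p})$ forces $q_{0}\geq p$, and term-by-term differentiation lowers every exponent by exactly $1$.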
###### Corollary 3.2
Let $F$ be a semialgebraic function in $C^{m}_{loc}(\Omega_{1})$, where
$\Omega_{\delta}=\\{(x,y)\in\mathbb{R}^{2}:0\leq y\leq x<\delta\\}$ for
$\delta>0$. Then for small enough $\delta$, $F|_{\Omega_{\delta}}$ extends to
a $C^{m}$ semialgebraic function on $\mathbb{R}^{2}$.
Sketch of proof. The result follows in one line from known results, but we
sketch an elementary proof.
Without loss of generality, we may suppose that $J_{(0,0)}F=0$. Then
$\partial_{x_{2}}^{k}F(x_{1},0)=o(x_{1}^{m-k})$ for $k\leq m$, hence
$\partial_{x_{1}}^{l}\partial_{x_{2}}^{k}F(x_{1},0)=o(x_{1}^{m-k-l})$ for
$0\leq k,l\leq m$.
We set $\tilde{F}(x_{1},x_{2})$ equal to the $m$-th degree Taylor polynomial of
$x_{2}\mapsto F(x_{1},x_{2})$ about $x_{2}=0$ for each fixed $x_{1}$. The
above estimates for derivatives of $F$ show that $\tilde{F}$ is $C^{m}$ on
$\tilde{\Omega}_{\delta}=\\{(x_{1},x_{2}):0\leq-x_{2}\leq x_{1}\leq\delta\\}$,
and its $x_{2}$-derivatives up to order $m$ agree with those of $F$ on the
$x_{1}$-axis. In particular, $J_{(0,0)}\tilde{F}=0$.
Similarly, we set $F^{\\#}(x_{1},x_{2})$ equal to the $m$-th degree Taylor
polynomial of $x_{2}\mapsto F(x_{1},x_{2})$ about $x_{2}=x_{1}$ for each fixed
$x_{1}$. Then $F^{\\#}$ is $C^{m}$ on
$\Omega^{\\#}_{\delta}=\\{(x_{1},x_{2}):0\leq x_{1}\leq x_{2}\leq 2x_{1}\leq
2\delta\\}$, and its $x_{2}$-derivatives up to order $m$ agree with those of
$F$ on the line $x_{1}=x_{2}$. In particular, $J_{(0,0)}F^{\\#}=0$.
Setting $F^{+}=\begin{cases}F&\mbox{on }\Omega_{\delta}\\\ \tilde{F}&\mbox{on
}\tilde{\Omega}_{\delta}\\\ F^{\\#}&\mbox{on
}\Omega^{\\#}_{\delta}\end{cases}$, we see that $F^{+}$ is a $C^{m}$ semialgebraic function on $\\{(x_{1},x_{2}):x_{1}\in[0,\delta],\,-x_{1}\leq x_{2}\leq 2x_{1}\\}$, that $F^{+}=F$ on $\Omega_{\delta}$, and that $J_{(0,0)}F^{+}=0$.
Next, let $\theta(t)$ be a $C^{m}$ semialgebraic function of one variable,
equal to 1 in $[0,1]$ and supported in $[-1,2]$. Then, for small enough
$\delta$, the function $F^{++}(x_{1},x_{2})=\theta(\frac{x_{2}}{x_{1}})\cdot
F^{+}(x_{1},x_{2})$ for $x_{1}>0$, $F^{++}(x_{1},x_{2})=0$ otherwise, is a
$C^{m}$ semialgebraic function on the disc $B(0,\delta)$ that agrees with our
given $F$ on $\Omega_{\delta}$.
Finally, multiplying $F^{++}$ by a semialgebraic cutoff function supported in
a small disc about $(0,0)$ and equal to $1$ in a smaller disc, we obtain a
$C^{m}$ semialgebraic function on $\mathbb{R}^{2}$ that agrees with $F$ on
$\Omega_{\delta}$ for small enough $\delta$.
### 3.3 Singularities of Semialgebraic Sets and Functions
We recall a few standard properties of semialgebraic sets and functions.
* •
Let $U\subset\mathbb{R}^{n}$ be an open semialgebraic set, and let
$F:U\rightarrow\mathbb{R}^{k}$ be semialgebraic. Then there exists a
semialgebraic subset $X\subset U$ of dimension less than $n$ (the “singular
set” of $F$) such that $F$ is real-analytic on $U\setminus X$. (See Chapter 8
in [6].)
* •
A zero-dimensional semialgebraic set is finite. A one-dimensional
semialgebraic set is a union of finitely many real-analytic arcs and finitely
many points. (See Chapter 2 in [6].)
### 3.4 Existence of Semialgebraic Selections
For sets $X,Y$, we denote a map $\Xi$ from $X$ to the power set of $Y$ by
$\Xi:X\rightrightarrows Y$ and call such $\Xi$ a set-valued map; a set-valued
map $\Xi$ is semialgebraic if $\\{(x,y):y\in\Xi(x)\\}$ is a semialgebraic set.
Let $E\subset\mathbb{R}^{n}$ and $\Xi:E\rightrightarrows\mathbb{R}^{D}$. A
selection of $\Xi$ is a map $f:E\rightarrow\mathbb{R}^{D}$ such that
$f(x)\in\Xi(x)$ for every $x\in E$. We recall the following well-known result
regarding semialgebraic selection (see, for example, [36]).
###### Theorem 3
Let $\Xi:E\rightrightarrows\mathbb{R}^{D}$ be semialgebraic. If each $\Xi(x)$
is nonempty, then $\Xi$ has a semialgebraic selection.
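For example (an illustration of ours): let $E=[0,\infty)$ and $\Xi(x)=\\{y\in\mathbb{R}:y^{2}=x\\}$; then $f(x)=\sqrt{x}$ is a semialgebraic selection, since its graph $\\{(x,y):y^{2}=x,y\geq 0\\}$ is a semialgebraic set. Note that Theorem 3 asserts nothing about continuity or smoothness of the selection.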
### 3.5 Growth of Semialgebraic Functions
Recall from [30] the following result:
###### Lemma 3.2 (Growth Lemma)
Let $E\subset\mathbb{R}^{n_{1}}$ and $E^{+}\subset E\times\mathbb{R}^{n_{2}}$
be compact and semialgebraic, with $\dim E^{+}\geq 1$. Let $A$ be a
semialgebraic function on $E^{+}$. Then there exist an integer $K\geq 1$, a
semialgebraic function $A_{1}$ on $E$, and a compact semialgebraic set
$\underline{E}^{+}\subset E^{+}$, with the following properties.
(GL1)
$\dim\underline{E}^{+}<\dim E^{+}$.
For $x\in E$, set
$E^{+}\left(x\right)=\left\\{y\in\mathbb{R}^{n_{2}}:\left(x,y\right)\in
E^{+}\right\\}$ and
$\underline{E}^{+}\left(x\right)=\left\\{y\in\mathbb{R}^{n_{2}}:\left(x,y\right)\in\underline{E}^{+}\right\\}$.
Then, for each $x\in E$, the following hold.
(GL2)
If $\underline{E}^{+}\left(x\right)$ is empty, then
$\left|A\left(x,y\right)\right|\leq A_{1}\left(x\right)\text{ for all }y\in
E^{+}\left(x\right).$
(GL3)
If $\underline{E}^{+}\left(x\right)$ is non-empty, then
$\left|A\left(x,y\right)\right|\leq
A_{1}\left(x\right)\cdot\left[\text{dist}\left(y,\underline{E}^{+}\left(x\right)\right)\right]^{-K}\text{
for all }y\in E^{+}\left(x\right)\setminus\underline{E}^{+}\left(x\right).$
The Growth Lemma follows easily from a special case of a theorem of Łojasiewicz and Wachta [35], as explained in [30]. We thank W. Pawłucki for teaching us that implication.
We will apply the Growth Lemma to prove the following.
###### Lemma 3.3
Let $F\left(x,y\right)$ be a bounded semialgebraic function on
$\left[-1,1\right]\times(0,1],$ and suppose that
(5) $\lim_{y\rightarrow 0^{+}}F\left(x,y\right)=0\text{ for each
}x\in\left[-1,1\right]\text{.}$
Then there exist a positive integer $N$ and a semialgebraic function
$A\left(x\right)$ on $\left[-1,1\right]$ such that
$\left|F\left(x,y\right)\right|\leq A\left(x\right)y^{\frac{1}{N}}\text{ for all }\left(x,y\right)\in\left[-1,1\right]\times(0,1]\text{.}$
Proof. It is enough to show that for some positive integer $N$ we have
(6)
$\sup_{y\in(0,1]}\frac{\left|F\left(x,y\right)\right|}{y^{1/N}}<\infty\text{
for all }x\in\left[-1,1\right]\text{,}$
for we may then set
$A\left(x\right)=\sup_{y\in(0,1]}\frac{\left|F\left(x,y\right)\right|}{y^{1/N}}$,
and $A\left(x\right)$ will depend semialgebraically on $x$.
For each fixed $x$, the function $y\mapsto F\left(x,y\right)$ is bounded and
given near $\left(0,0\right)$ by a convergent Puiseux series that tends to
zero as $y\rightarrow 0^{+}$. Hence, for some positive integer $N_{x}$ we have
(7)
$\sup_{y\in(0,1]}\frac{\left|F\left(x,y\right)\right|}{y^{1/N_{x}}}<\infty\text{.}$
Our task is to show that $N_{x}$ may be taken independent of $x.$ Thanks to
(7), we may exclude from consideration any given finite set of “bad”
$x\in\left[-1,1\right]$.
We recall our main hypothesis (5). For each
$\left(x,\varepsilon\right)\in\left[-1,1\right]\times(0,1]$ there exists
$\delta\in(0,1]$ such that $\left(x,\varepsilon,\delta\right)$ belongs to the
semialgebraic set
$\left\\{\left(x,\varepsilon,\delta\right)\in\left[-1,1\right]\times(0,1]\times(0,1]:\left|F\left(x,y\right)\right|\leq\varepsilon\text{
for all }y\in(0,\delta]\right\\}.$
Hence, there exists a semialgebraic function
$\delta\left(x,\varepsilon\right)$ mapping $\left[-1,1\right]\times(0,1]$ into
$(0,1]$ such that
(8) $\left|F\left(x,y\right)\right|\leq\varepsilon\text{ for
}y\in(0,\delta\left(x,\varepsilon\right)],x\in\left[-1,1\right],\varepsilon\in(0,1].$
We set $\delta\left(x,0\right)=1$ for $x\in\left[-1,1\right]$. Then
$\delta:\left[-1,1\right]\times\left[0,1\right]\rightarrow(0,1]$ is
semialgebraic and satisfies (8).
We now apply Lemma 3.2 to the function
$\frac{1}{\delta\left(x,\varepsilon\right)}$.
Thus, we obtain a semialgebraic set
$\underline{E}\subset\left[-1,1\right]\times\left[0,1\right]$, a positive
integer $N,$ and a positive semialgebraic function
$\underline{\delta}\left(x\right)$ on $\left[-1,1\right]$, with the following
properties.
* •
$\dim\underline{E}\leq 1$.
* •
For $x\in\left[-1,1\right]$, let
$\underline{E}\left(x\right)=\left\\{\varepsilon:\left(x,\varepsilon\right)\in\underline{E}\right\\}$.
Then
(9)
$\delta\left(x,\varepsilon\right)\geq\underline{\delta}\left(x\right)\text{ (all }\varepsilon>0\text{) if }\underline{E}\left(x\right)=\emptyset$
and
(10)
$\delta\left(x,\varepsilon\right)\geq\underline{\delta}\left(x\right)\cdot\left[\text{dist}\left(\varepsilon,\underline{E}\left(x\right)\right)\right]^{N}\text{ (all }\varepsilon\not\in\underline{E}(x)\text{) if }\underline{E}\left(x\right)\not=\emptyset\text{.}$
Because $\dim\underline{E}\leq 1,$ there are at most finitely many
$x\in\left[-1,1\right]$ for which $\underline{E}\left(x\right)$ is infinite.
As explained above, we may discard those “bad” $x$; it is enough to prove (6) for all $x$ such that $\underline{E}\left(x\right)$ is finite.
From now on, we restrict attention to “good” $x,$ i.e., those $x$ for which
$\underline{E}\left(x\right)$ is finite.
Set
$\underline{\varepsilon}\left(x\right)=\begin{cases}\frac{1}{2}\min\left(\underline{E}\left(x\right)\setminus\left\\{0\right\\}\right)&\text{if }\underline{E}\left(x\right)\text{ contains points other than }0\\\ 1&\text{otherwise.}\end{cases}$
So $\underline{\varepsilon}\left(x\right)>0$ for all “good” $x$.
If $\underline{E}\left(x\right)\not=\emptyset$, then
$\text{dist}\left(\varepsilon,\underline{E}\left(x\right)\right)\geq\varepsilon$
for $0<\varepsilon\leq\underline{\varepsilon}\left(x\right)$, hence
(10) gives
(11)
$\delta\left(x,\varepsilon\right)\geq\underline{\delta}\left(x\right)\varepsilon^{N}\text{
for }0<\varepsilon\leq\underline{\varepsilon}\left(x\right)\text{.}$
If instead $\underline{E}\left(x\right)=\emptyset$, then because
$\underline{\varepsilon}\left(x\right)=1,$ (9) again gives (11).
Thus, (11) holds in all cases.
Now suppose
$0<y<\underline{\delta}\left(x\right)\cdot\left(\underline{\varepsilon}\left(x\right)\right)^{N}$.
Then, setting
$\varepsilon=\left(\frac{y}{\underline{\delta}\left(x\right)}\right)^{1/N}$
and applying (11), we find that $\delta\left(x,\varepsilon\right)\geq y.$ The
defining property of $\delta\left(x,\varepsilon\right)$ therefore tells us
that
$\left|F\left(x,y\right)\right|\leq\varepsilon=\left(\frac{y}{\underline{\delta}\left(x\right)}\right)^{1/N}\text{.}$
Thus, for any “good” $x,$ we have shown that
(12)
$\frac{\left|F\left(x,y\right)\right|}{y^{1/N}}\leq\left(\underline{\delta}\left(x\right)\right)^{-1/N}\text{
for
}0<y<\underline{\delta}\left(x\right)\cdot\left(\underline{\varepsilon}\left(x\right)\right)^{N}\text{.}$
On the other hand, recall that $F$ is bounded; say,
$\left|F\left(x,y\right)\right|\leq M$ for all
$\left(x,y\right)\in\left[-1,1\right]\times(0,1]$.
Hence,
(13)
$\frac{\left|F\left(x,y\right)\right|}{y^{1/N}}\leq\frac{M}{\left(\underline{\delta}\left(x\right)\right)^{1/N}\underline{\varepsilon}\left(x\right)}\text{
for
}\underline{\delta}\left(x\right)\cdot\left(\underline{\varepsilon}\left(x\right)\right)^{N}\leq
y\leq 1\text{.}$
Our desired estimate (6) is now immediate from (12) and (13).
The proof of Lemma 3.3 is complete.
Similar ideas can be used to prove an $n$-dimensional version of Lemma 3.3,
but we don’t discuss it here.
### 3.6 Logarithmic Derivatives of Semialgebraic Functions
Let $V$ be a semialgebraic subset of $\mathbb{R}^{n}\times\mathbb{R}^{m}$.
Given $x\in\mathbb{R}^{n}$, we write $V(x)$ to denote the set of all
$t\in\mathbb{R}^{m}$ such that $(x,t)\in V$. Given
$(x,t)\in\mathbb{R}^{n}\times\mathbb{R}^{m}$, we write $\delta_{V}(x,t)$ to
denote the distance from $t$ to $V(x)$. We take $\delta_{V}(x,t)=+\infty$ if
$V(x)$ is empty. For a smooth function $F(x,t)$ on
$\mathbb{R}^{n}\times\mathbb{R}^{m}$, we write $\nabla_{t}F(x,t)$ to denote
the gradient of the function $t\mapsto F(x,t)$.
The following theorem is proven by A. Parusinski in [37, 38]. We thank Edward
Bierstone, Jean-Baptiste Campesato, Pierre Milman, and Wieslaw Pawłucki for
pointing out the references, and thus helping us remove 10 pages from our
paper.
###### Theorem 4
Let $F(x,t)$ be a (real-valued) subanalytic function of
$(x,t)\in\mathbb{R}^{n}\times\mathbb{R}^{m}$. Then there exist a closed
codimension 1 subanalytic set $V\subset\mathbb{R}^{n}\times\mathbb{R}^{m}$ and
a constant $C>0$ such that outside $V$ the function $F$ is smooth and
moreover,
(14) $|\nabla_{t}F(x,t)|\leq
C\frac{\left|F\left(x,t\right)\right|}{\delta_{V}\left(x,t\right)}\text{.}$
If $F$ is semialgebraic, then we can take $V$ to be semialgebraic.
As a special case of Theorem 4, we have the following.
###### Theorem 5
Let $F\left(x\right)$ be a semialgebraic function on $\mathbb{R}^{n}$. Then
there exist a closed semialgebraic $V\subset\mathbb{R}^{n}$ of dimension at
most $\left(n-1\right)$, and a constant $C$, such that $F$ is $C^{m}_{loc}$
outside $V$, and
$\left|\nabla F\left(x\right)\right|\leq
C\left|F\left(x\right)\right|\cdot\left[\text{dist}\left(x,V\right)\right]^{-1}$
for $x\in\mathbb{R}^{n}\setminus V$.
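As a simple sanity check (ours): for $F(x)=x_{1}$ on $\mathbb{R}^{n}$ we may take $V=\\{x_{1}=0\\}$ and $C=1$, since $|\nabla F(x)|=1=|F(x)|\cdot\left[\text{dist}(x,V)\right]^{-1}$ off $V$. The content of Theorem 5 is that some such $V$ and $C$ exist for every semialgebraic $F$, with no smoothness assumed on $F$ to begin with.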
### 3.7 Variant of Helly’s Theorem
We recall the following result from convex geometry. Surely more precise
versions of the result are well known, but we had trouble tracking down a
reference so we will provide a proof.
###### Theorem 6 (Helly’s Theorem Variant)
Let $(p_{\omega})_{\omega\in\Omega}$ be a family of seminorms on a vector
space $V$ of dimension $D$. Assume that
$\sup_{\omega\in\Omega}p_{\omega}(v)<\infty$ for every $v\in V$. Then there
exist $\omega_{1},\cdots,\omega_{L}\in\Omega$, with $L$ depending only on $D$,
such that
$\sup_{\omega\in\Omega}p_{\omega}(v)\leq
C\cdot\max\\{p_{\omega_{1}}(v),\cdots,p_{\omega_{L}}(v)\\}\text{ for all }v\in
V,$
with $C$ also depending only on $D$.
We use the following variant of the classical Helly theorem (see Section 3 in
[14]) from elementary convex geometry.
###### Lemma 3.4
Let $(K_{\omega})_{\omega\in\Omega}$ be a collection of compact convex
symmetric subsets of $\mathbb{R}^{D}$. Suppose the intersection of all the
$K_{\omega}$ has nonempty interior. Then there exist
$\omega_{1},\cdots,\omega_{L}$ such that $K_{\omega_{1}}\cap\cdots\cap
K_{\omega_{L}}\subset C\cdot\bigcap_{\omega\in\Omega}K_{\omega}$, where $C$
and $L$ depend only on $D$.
The proof of the “Lemma on Convex Sets” in Section 3 of [14] applies here and
proves Lemma 3.4, even though our present hypotheses differ slightly from
those of [14].
We apply Lemma 3.4 to prove Theorem 6.
Proof of Theorem 6. Suppose first that each $p_{\omega}$ is a norm, not just a
seminorm. Then the conclusion of Theorem 6 follows by applying Lemma 3.4 to
the family of convex sets $K_{\omega}=\\{v\in V:p_{\omega}(v)\leq 1\\}$,
${\omega\in\Omega}$.
Now suppose each $p_{\omega}$ is a seminorm. Let $H(\omega)=\\{v\in
V:p_{\omega}(v)=0\\}$, and let $H$ be the intersection of all the $H(\omega)$.
Each $H(\omega)$ is a vector subspace of $V$. Consequently there exist
$\lambda_{1},\cdots,\lambda_{s}\in\Omega$, with $s\leq D$, such that
$H=H(\lambda_{1})\cap\cdots\cap H(\lambda_{s})$.
For $\omega\in\Omega$ and $v\in V$, set
$p^{*}_{\omega}(v)=p_{\lambda_{1}}(v)+\cdots+p_{\lambda_{s}}(v)+p_{\omega}(v)$.
Then $p^{*}_{\omega}$ is a seminorm on $V$, and $p^{*}_{\omega}(v)=0$ if and
only if $v\in H$. Regarding each $p^{*}_{\omega}$ as a norm on $V/H$, and
applying Theorem 6 for collections of norms, we complete the proof of Theorem
6.
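As a quick sanity check on Theorem 6 (ours): take $V=\mathbb{R}^{2}$ and $p_{\omega}(v)=|\langle v,\omega\rangle|$ for $\omega\in\Omega=S^{1}$. Then $\sup_{\omega\in\Omega}p_{\omega}(v)=|v|$, while $\omega_{1}=(1,0)$ and $\omega_{2}=(0,1)$ give $|v|\leq\sqrt{2}\max\\{p_{\omega_{1}}(v),p_{\omega_{2}}(v)\\}$; so the conclusion holds here with $L=2$ and $C=\sqrt{2}$.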
## 4 Preliminary Reductions
The purpose of this section is to reduce Theorem 1 to the following:
###### Lemma 4.1 (Main Lemma)
Let $\mathcal{H}=(H(x))_{x\in\mathbb{R}^{2}}$ be a semialgebraic bundle for
$C^{m}_{loc}(\mathbb{R}^{2},\mathbb{R}^{D})$. Assume $\mathcal{H}$ is Glaeser
stable. Assume $H(0)=\\{0\\}$. Then, for small enough $c>0$,
$\mathcal{H}|_{\Gamma(c)}$ has a semialgebraic section, where
$\Gamma(c)=\\{(x_{1},x_{2})\in\mathbb{R}^{2}:x_{1}\in\left[0,c\right],0\leq
x_{2}\leq x_{1}\\}.$
To deduce Theorem 1 from Lemma 4.1 we argue as follows.
Suppose we are given a Glaeser stable bundle
$\mathcal{H}=(H(x))_{x\in\mathbb{R}^{2}}$ for
$C^{m}_{loc}(\mathbb{R}^{2},\mathbb{R}^{D})$ with $H(x)\subset\mathcal{P}^{D}$
depending semialgebraically on $x$. Assume $H(0)=\\{0\\}$.
Let
$\Gamma(c)=\\{(x_{1},x_{2})\in\mathbb{R}^{2}:x_{1}\in\left[0,c\right],0\leq
x_{2}\leq x_{1}\\}$. Theorem 2 tells us that $\mathcal{H}|_{\Gamma(c)}$ has a
section $F_{c}$. The main lemma asserts that for $c$ small enough
$\mathcal{H}|_{\Gamma(c)}$ has a semialgebraic section.
We will cover a full neighborhood of $0$ by rotating wedges of the form
$\Gamma(c)$. Using a partition of unity subordinate to the cover and the fact
that $H(0)=\\{0\\}$, we can then patch together sections of $\mathcal{H}$, and
obtain a semialgebraic section over a full neighborhood of $0$.
We may drop the restriction $H(0)=\\{0\\}$, because without loss of generality
our given section $F_{c}$ has jet $0$ at the origin, so we may just cut down
$H(0)$ to $\\{0\\}$. We can also drop the restriction that $\mathcal{H}$ is
Glaeser stable (assuming $\mathcal{H}$ has a section) since we can always pass
to the stable Glaeser refinement. Thus, any semialgebraic bundle having a
section has a semialgebraic section over some neighborhood of $0$. We can use
compactness and a partition of unity to conclude that $\mathcal{H}$ admits a
semialgebraic section over any given compact set.
###### Lemma 4.2
Suppose $H(z)$ depends semialgebraically on $z\in\mathbb{R}^{2}$. If
$\mathcal{H}=(H(z))_{z\in\mathbb{R}^{2}}$ has a section, then $\mathcal{H}$
has a section $F\in C^{m}_{loc}(\mathbb{R}^{2},\mathbb{R}^{D})$ such that for
all $|\alpha|\leq m$, $|\partial^{\alpha}F(x)|\leq C(1+|x|)^{K}$ on
$\mathbb{R}^{2}$, for some $C$ and $K$.
Proof. To prove this lemma, we may assume that $\mathcal{H}$ is Glaeser
stable.
Taking $E_{R}=\left\\{x\in\mathbb{R}^{2}:\left|x\right|\leq R\right\\}$ with
$R\geq 1$, and applying Theorem 2, we obtain a section $F_{R}$ of
$\mathcal{H}|_{E_{R}}$, with $\left|\left|F_{R}\right|\right|_{C^{m}}\leq
C\left(R\right)^{K}$, because the “$M$” in the result quoted above applied to
$\mathcal{H}|_{E_{R}}$ can be taken to depend semialgebraically on $R$.
(That’s where we use the fact that the bundle $\mathcal{H}$ is semialgebraic.)
We can now easily use a partition of unity to patch together $F_{2^{k}}$,
$k=1,2,3,\cdots$, into a section $F$ as in the conclusion of Lemma 4.2.
Fix $K$ as in the conclusion of Lemma 4.2. Let $\Phi:\Delta\rightarrow\mathbb{R}^{2}$ be a semialgebraic diffeomorphism from the open unit disc $\Delta$ onto $\mathbb{R}^{2}$, for example, $\Phi(x)=\frac{x}{1-|x|^{2}}$. Let $\theta(x)>0$ be a semialgebraic
function on $\mathbb{R}^{2}$ that tends to zero so rapidly that
$\partial^{\alpha}[(\theta F)\circ\Phi](y)\rightarrow 0\text{, for all
}|\alpha|\leq m\text{ as }y\rightarrow\partial\Delta,$
whenever $|\partial^{\alpha}F(x)|\leq C(1+|x|)^{K}$ on $\mathbb{R}^{2}$,
$|\alpha|\leq m$.
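(To check that $\Phi(x)=\frac{x}{1-|x|^{2}}$ qualifies: writing $r=|x|$, the map fixes directions and sends $r\mapsto\frac{r}{1-r^{2}}$, which increases strictly from $0$ to $\infty$ on $[0,1)$; hence $\Phi$ is a smooth bijection of $\Delta$ onto $\mathbb{R}^{2}$ with smooth inverse, and its graph $\\{(x,y):y(1-|x|^{2})=x,\ |x|<1\\}$ is semialgebraic, so $\Phi$ and $\Phi^{-1}$ are semialgebraic.)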
We can now form a bundle $\mathcal{H}^{*}$ as follows: For $x$ in $\Delta$,
the fiber $H^{*}(x)$ consists of all $J_{x}((\theta F)\circ\Phi)$ for sections
$F$ of the bundle $\mathcal{H}$.
The fibers of $\mathcal{H}^{*}$ over points not in $\Delta$ are $\\{0\\}$.
Then $\mathcal{H}^{*}$ is a semialgebraic bundle admitting a section.
We have seen that semialgebraic bundles with sections have semialgebraic
sections over any compact set. In particular, $\mathcal{H}^{*}$ has a
semialgebraic section $\mathcal{F}$ over $\Delta^{\text{closure}}$. Then
$\frac{\mathcal{F}\circ\Phi^{-1}(x)}{\theta(x)}$ is a semialgebraic section of
$\mathcal{H}$ over $\mathbb{R}^{2}$.
Consequently, we can deduce Theorem 1 from Lemma 4.1.
The rest of the paper is devoted to the proof of Lemma 4.1.
## 5 Characterization of Sections
### 5.1 Semialgebraic Bundles
Fix $U\subset\mathbb{R}^{n}$ open, semialgebraic. Fix
$\psi:U\rightarrow\mathbb{R}^{k}$ Nash. Let
$\hat{\psi}(x)=(x,\psi(x))\in\mathbb{R}^{n}\times\mathbb{R}^{k}$ for $x\in U$.
We set $\hat{U}=\hat{\psi}(U)$. Let $\mathcal{P}$ denote the vector space of
polynomials of degree at most $m$ on $\mathbb{R}^{n}\times\mathbb{R}^{k}$. We
write $z=(x,y)$ to denote a point of $\mathbb{R}^{n}\times\mathbb{R}^{k}$. We
write $\mathcal{R}_{z}$ to denote the ring obtained from $\mathcal{P}$ by taking the product to be multiplication of $m$-jets at $z$. We fix a bundle
$\mathcal{H}=(H(z))_{z\in\hat{U}}$, where, for each
$z=\hat{\psi}(x)\in\hat{U}$ we have $H(z)=f^{x}+I(x)$,
$f^{x}\in\mathcal{P}^{D}$, $I(x)$ an $\mathcal{R}_{\hat{\psi}(x)}$-submodule
of $\mathcal{P}^{D}$. (We point out that $\mathcal{H}$ is a bundle, not a
classical bundle, see Remark 1.1.)
We suppose $\mathcal{H}$ is Glaeser stable. We assume that $H(z)$ depends
semialgebraically on $z\in\hat{U}$. (We sometimes abuse notation by writing
$I(z)$ for $I(x)$, where $z=\hat{\psi}(x)$.)
Under the above assumptions and definitions, we will prove the following
result.
###### Lemma 5.1
There exist a semialgebraic set $U_{\text{bad}}\subset\mathbb{R}^{n}$ of
dimension less than $n$; Nash functions $A_{j\beta}^{i},G^{i}$ on $U\setminus
U_{\text{bad}}$ ($i=1,\cdots,i_{\max},j=1,\cdots,D,\beta$ a multiindex of
order $\leq m$ for $\mathbb{R}^{k}$) with the following property. Let
$B\subset U\setminus U_{\text{bad}}$ be a closed ball. Set
$\hat{B}=\hat{\psi}(B)$. Let $F=(F_{1},\cdots,F_{D})\in
C^{m}_{loc}(\mathbb{R}^{n}\times\mathbb{R}^{k},\mathbb{R}^{D})$. Then $F$ is a
section of $\mathcal{H}|_{\hat{B}}$ if and only if $\sum_{|\beta|\leq
m}\sum_{j=1}^{D}A_{j\beta}^{i}(x)\cdot(\partial_{y}^{\beta}F_{j}(x,\psi(x)))=G^{i}(x)$
for all $x\in B$ (each $i$).
Proof. We may suppose that $f^{x}$ and $I(x)$ depend semialgebraically on
$x\in U$. We write $f^{x}=(f_{1}^{x},\cdots,f_{D}^{x})$ and
$\psi(x)=(\psi_{1}(x),\cdots,\psi_{k}(x))\quad(x\in U)$.
For $l=1,\cdots,n$, we introduce the vector field
$X_{l}=\frac{\partial}{\partial x_{l}}+\sum_{p=1}^{k}\frac{\partial\psi_{p}(x)}{\partial x_{l}}\frac{\partial}{\partial y_{p}}\text{ on }U\times\mathbb{R}^{k}.$
On $U\times\mathbb{R}^{k}$, the $X_{l}$ are Nash, and $[X_{l},X_{l^{\prime}}]=0$. For $\alpha=(\alpha_{1},\cdots,\alpha_{n})$, we
write $X^{\alpha}=X_{1}^{\alpha_{1}}\cdots X_{n}^{\alpha_{n}}$.
The $X_{1},\cdots,X_{n}$, $\frac{\partial}{\partial
y_{1}},\cdots,\frac{\partial}{\partial y_{k}}$ form a frame on
$U\times\mathbb{R}^{k}$. Because $I\left(x\right)$ depends semialgebraically
on $x\in U$, we may express
* (15)
$I\left(x\right)=\left\\{\left(P_{1},\cdots,P_{D}\right)\in\mathcal{P}^{D}:\left.\sum_{\begin{subarray}{c}\left|\alpha\right|+\left|\beta\right|\leq m\\\ j=1,\cdots,D\end{subarray}}\tilde{A}_{j\alpha\beta}^{i}\left(x\right)\left(X^{\alpha}\partial_{y}^{\beta}P_{j}\right)\right|_{\hat{\psi}\left(x\right)}=0\text{, for }i=1,\cdots,i_{\max}\right\\}$ for semialgebraic $\tilde{A}_{j\alpha\beta}^{i}$ on $U$.
We take $U_{\text{bad}}^{1}$ to be the union of the singular sets of the
$\tilde{A}_{j\alpha\beta}^{i}$. Then $U_{\text{bad}}^{1}$ is a semialgebraic
set of dimension $<n$ in $\mathbb{R}^{n}$, and the
$\tilde{A}_{j\alpha\beta}^{i}$ are real-analytic on $U\setminus
U_{\text{bad}}^{1}$.
We may therefore rewrite the equation in (15) in the form
$\left.\sum_{\begin{subarray}{c}\left|\alpha\right|+\left|\beta\right|\leq
m\\\
j=1,\cdots,D\end{subarray}}\left(X^{\alpha}\left\\{A_{j\alpha\beta}^{i}\left(x\right)\partial_{y}^{\beta}P_{j}\right\\}\right)\right|_{\hat{\psi}\left(x\right)}=0\text{.}$
The $A_{j\alpha\beta}^{i}$ are Nash on $U\setminus U_{\text{bad}}^{1}$. Thus,
for any closed ball $B\subset U\setminus U_{\text{bad}}^{1}$ the following
holds. (We set $\hat{B}=\hat{\psi}\left(B\right)$.)
A given $F=\left(F_{1},\cdots,F_{D}\right)\in
C^{m}_{loc}\left(\mathbb{R}^{n}\times\mathbb{R}^{k},\mathbb{R}^{D}\right)$ is
a section of $\left(I\left(z\right)\right)_{z\in\hat{B}}$ if and only if
$\sum_{\left|\alpha\right|\leq
m}X^{\alpha}\left\\{\sum_{\left|\beta\right|\leq
m-\left|\alpha\right|}A_{j\alpha\beta}^{i}\left(x\right)\partial_{y}^{\beta}F_{j}\left(x,y\right)\right\\}=0\text{
on }\hat{B}\text{ for all }i\text{.}$
We look for integers $s\geq 0$ for which there exist Nash functions
$A_{j\alpha\beta}^{i}$ on $U\setminus U_{\text{bad}}^{1}$ with the following
property (“Property $\prod\left(s\right)$”):
Let $B\subset U\setminus U_{\text{bad}}^{1}$ be a closed ball; set
$\hat{B}=\hat{\psi}\left(B\right)$. Then $\left(F_{1},\cdots,F_{D}\right)\in
C^{m}_{loc}\left(\mathbb{R}^{n}\times\mathbb{R}^{k},\mathbb{R}^{D}\right)$ is
a section of $\left(I\left(z\right)\right)_{z\in\hat{B}}$ if and only if
(17) $\sum_{\left|\alpha\right|\leq
s}X^{\alpha}\left\\{\sum_{\left|\beta\right|\leq
m-\left|\alpha\right|}\sum_{j=1}^{D}A_{j\alpha\beta}^{i}\left(x\right)\partial_{y}^{\beta}F_{j}\left(x,y\right)\right\\}=0\text{
on }\hat{B}\text{ for all }i\text{.}$
We have seen that we can achieve Property $\prod\left(m\right)$.
###### Claim 5.1
Let $s$ be the smallest possible integer $\geq 0$ for which we can achieve
Property $\prod\left(s\right)$, and let $A_{j\alpha\beta}^{i}$ be as in
Property $\prod\left(s\right)$. Then $s=0$. In other words, Property
$\prod(0)$ holds.
Proof of Claim 5.1. Assuming $s\geq 1$, we will achieve Property $\prod(s-1)$,
contradicting the fact that $s$ is as small as possible.
Fix $B\subset U\setminus U_{\text{bad}}^{1}$ a closed ball, and let
$(F_{1},\cdots,F_{D})\in
C^{m}_{loc}(\mathbb{R}^{n}\times\mathbb{R}^{k},\mathbb{R}^{D})$ be a section
of $(I(z))_{z\in\hat{B}}$. (As always, $\hat{B}=\hat{\psi}(B)$.) Fix $x_{0}\in B$
and fix a multiindex $\alpha_{0}$ with $|\alpha_{0}|=s$. For $j=1,\cdots,D$,
define functions on $\mathbb{R}^{n}\times\mathbb{R}^{k}$ by setting
$F_{j}^{\\#}(z)=\theta\cdot F_{j}(z)$ where $\theta\in
C_{0}^{\infty}(\mathbb{R}^{n}\times\mathbb{R}^{k})$ with jet
$(J_{\hat{\psi}(x_{0})}\theta)(x,y)=(x-x_{0})^{\alpha_{0}}$.
Then $(F_{1}^{\\#},\cdots,F_{D}^{\\#})\in
C^{m}_{loc}(\mathbb{R}^{n}\times\mathbb{R}^{k},\mathbb{R}^{D})$ is a section
of $(I(z))_{z\in\hat{B}}$ because each $I(z)$ is an
$\mathcal{R}_{z}$-submodule of $\mathcal{R}_{z}^{D}$.
Applying Property $\prod(s)$ to $(F_{1}^{\\#},\cdots,F_{D}^{\\#})$, we learn
that
$\left.\sum_{\left|\beta\right|\leq
m-\left|\alpha_{0}\right|}\sum_{j=1}^{D}A_{j\alpha_{0}\beta}^{i}\left(x_{0}\right)\left(\partial_{y}^{\beta}F_{j}\right)\right|_{\hat{\psi}\left(x_{0}\right)}=0\text{
}\left(\text{all }i\right)\text{.}$
This holds for all $x_{0}$ and for all $\left|\alpha_{0}\right|=s$. Thus, if
$\left(F_{1},\cdots,F_{D}\right)\in
C^{m}_{loc}\left(\mathbb{R}^{n}\times\mathbb{R}^{k},\mathbb{R}^{D}\right)$ is
a section of $\left(I\left(z\right)\right)_{z\in\hat{B}}$, then
(18) $\sum_{\left|\beta\right|\leq
m-\left|\alpha\right|}\sum_{j=1}^{D}A_{j\alpha\beta}^{i}\left(x\right)\partial_{y}^{\beta}F_{j}\left(x,y\right)=0$
on $\hat{B}$ for all $\left|\alpha\right|=s$ and for all $i$. Because the
$X_{j}$ are tangent to $\hat{B}$, it follows from (18) that
(19) $X^{\alpha}\left\\{\sum_{\left|\beta\right|\leq
m-\left|\alpha\right|}\sum_{j=1}^{D}A_{j\alpha\beta}^{i}\left(x\right)\partial_{y}^{\beta}F_{j}\left(x,y\right)\right\\}=0$
on $\hat{B}$ for all $\left|\alpha\right|=s$ and for all $i$. From (17) and
(19), we conclude that
(20) $\sum_{\left|\alpha\right|\leq
s-1}X^{\alpha}\left\\{\sum_{\left|\beta\right|\leq
m-\left|\alpha\right|}\sum_{j=1}^{D}A_{j\alpha\beta}^{i}\left(x\right)\partial_{y}^{\beta}F_{j}\left(x,y\right)\right\\}=0$
on $\hat{B}$ for all $i$. Thus, any section of
$\left(I\left(z\right)\right)_{z\in\hat{B}}$ satisfies (18) and (20).
Conversely, suppose $\left(F_{1},\cdots,F_{D}\right)\in C^{m}_{loc}\left(\mathbb{R}^{n}\times\mathbb{R}^{k},\mathbb{R}^{D}\right)$
satisfies (18) and (20). Then, because (18) implies (19), it follows that (17)
holds, and consequently $\left(F_{1},\cdots,F_{D}\right)$ is a section of
$\left(I\left(z\right)\right)_{z\in\hat{B}}$. Thus, a given
$\left(F_{1},\cdots,F_{D}\right)\in
C^{m}_{loc}\left(\mathbb{R}^{n}\times\mathbb{R}^{k},\mathbb{R}^{D}\right)$ is
a section of $\left(I\left(z\right)\right)_{z\in\hat{B}}$ if and only if (18)
and (20) hold. If $s\geq 1,$ this implies that we have achieved Property
$\prod\left(s-1\right)$, contradicting the minimal character of $s$, and
establishing Claim 5.1.
We return to the proof of Lemma 5.1. Because Property $\prod(s)$ holds with
$s=0$, there exist Nash functions $A_{j\beta}^{i}$ on $U\setminus
U_{\text{bad}}^{1}$, for which the following (“Property $\prod^{\ast}$”)
holds:
Let $B\subset U\setminus U_{\text{bad}}^{1}$ be a closed ball. Set
$\hat{B}=\hat{\psi}\left(B\right)$. Then a given
$\left(F_{1},\cdots,F_{D}\right)\in
C^{m}_{loc}\left(\mathbb{R}^{n}\times\mathbb{R}^{k},\mathbb{R}^{D}\right)$ is
a section of $\left(I\left(z\right)\right)_{z\in\hat{B}}$ if and only if
(21) $\sum_{\left|\beta\right|\leq
m}\sum_{j=1}^{D}A_{j\beta}^{i}\left(x\right)\partial_{y}^{\beta}F_{j}\left(x,y\right)=0\text{
on }\hat{B}\text{ (all }i\text{)}.$
We fix $A_{j\beta}^{i}$ as above.
We now return to our bundle
$\mathcal{H}=\left(f^{z}+I\left(z\right)\right)_{z\in\hat{U}}$.
(We abuse notation by writing $f^{z}$ for $f^{x}$ where
$z=\hat{\psi}\left(x\right)$.)
Let $B\subset U\setminus U_{\text{bad}}^{1}$ be a closed ball, and let
$\hat{B}=\hat{\psi}\left(B\right)$. Let $\left(F_{1},\cdots,F_{D}\right)$ and
$\left(\tilde{F}_{1},\cdots,\tilde{F}_{D}\right)\in
C^{m}_{loc}\left(\mathbb{R}^{n}\times\mathbb{R}^{k},\mathbb{R}^{D}\right)$ be
any two sections of $\mathcal{H}|_{\hat{B}}$.
Then $\left(F_{1}-\tilde{F}_{1},\cdots,F_{D}-\tilde{F}_{D}\right)$ is a
section of $\left(I\left(z\right)\right)_{z\in\hat{B}}$, and therefore by
(21), we have
(22) $\sum_{\begin{subarray}{c}\left|\beta\right|\leq m\\\
j=1,\cdots,D\end{subarray}}A_{j\beta}^{i}\left(x\right)\partial_{y}^{\beta}F_{j}\left(x,y\right)=\sum_{\begin{subarray}{c}\left|\beta\right|\leq
m\\\
j=1,\cdots,D\end{subarray}}A_{j\beta}^{i}\left(x\right)\partial_{y}^{\beta}\tilde{F}_{j}\left(x,y\right)\text{
on }\hat{B}\text{ for all }i\text{.}$
Moreover, given $x_{0}\in B$, we can take our section
$\left(\tilde{F}_{1},\cdots,\tilde{F}_{D}\right)$ above to satisfy
$J_{\hat{\psi}\left(x_{0}\right)}\tilde{F}_{j}=f_{j}^{x_{0}}\text{
}\left(j=1,\cdots,D\right)\text{,}$
because $\left(f_{1}^{x_{0}},\cdots,f_{D}^{x_{0}}\right)\in
H\left(\hat{\psi}\left(x_{0}\right)\right)$ and $\mathcal{H}|_{\hat{B}}$ is
Glaeser stable and has nonempty fibers. (See Theorem 2.) Therefore, (22)
implies that
(23) $\sum_{\left|\beta\right|\leq
m}\sum_{j=1}^{D}A_{j\beta}^{i}\left(x\right)\partial_{y}^{\beta}F_{j}\left(x,y\right)=G^{i}\left(x\right)\text{
}$
on $\hat{B}$ for each $i$, where
$G^{i}\left(x\right)=\sum_{\left|\beta\right|\leq m}\sum_{j=1}^{D}A_{j\beta}^{i}\left(x\right)\left(\partial_{y}^{\beta}f_{j}^{x}\right)|_{\hat{\psi}\left(x\right)}\quad\left(x\in U\setminus U_{\text{bad}}^{1}\right)\text{.}$
Clearly, $G^{i}\left(x\right)$ is a semialgebraic function on $U\setminus
U_{\text{bad}}^{1}$, and it is independent of the ball $B$ in the above
discussion.
Thus, we have seen that any section $\left(F_{1},\cdots,F_{D}\right)$ of
$\mathcal{H}|_{\hat{B}}$ must satisfy (23).
Conversely, suppose $\left(F_{1},\cdots,F_{D}\right)\in
C^{m}_{loc}\left(\mathbb{R}^{n}\times\mathbb{R}^{k},\mathbb{R}^{D}\right)$
satisfies (23). Let $\left(\tilde{F}_{1},\cdots,\tilde{F}_{D}\right)\in
C^{m}_{loc}\left(\mathbb{R}^{n}\times\mathbb{R}^{k},\mathbb{R}^{D}\right)$ be
a section of $\mathcal{H}|_{\hat{B}}$. (We know that a section exists because
$\mathcal{H}|_{\hat{B}}$ is Glaeser stable and has nonempty fibers.) We know
that $\left(\tilde{F}_{1},\cdots,\tilde{F}_{D}\right)$ satisfies (23), hence
$\sum_{\left|\beta\right|\leq
m}\sum_{j=1}^{D}A_{j\beta}^{i}\left(x\right)\partial_{y}^{\beta}\left[F_{j}-\tilde{F}_{j}\right]\left(x,y\right)=0$
on $\hat{B}$ for each $i$.
Recalling Property $\prod^{\ast}$, we now see that
$\left(F_{1}-\tilde{F}_{1},\cdots,F_{D}-\tilde{F}_{D}\right)$ is a section of
$\left(I\left(z\right)\right)_{z\in\hat{B}}.$ Because
$\left(\tilde{F}_{1},\cdots,\tilde{F}_{D}\right)\in
C^{m}_{loc}\left(\mathbb{R}^{n}\times\mathbb{R}^{k},\mathbb{R}^{D}\right)$ is
a section of
$\mathcal{H}|_{\hat{B}}=\left(f^{z}+I\left(z\right)\right)_{z\in\hat{B}}$, we
conclude that $\left(F_{1},\cdots,F_{D}\right)$ is a section of
$\mathcal{H}|_{\hat{B}}$. Thus, if $\left(F_{1},\cdots,F_{D}\right)\in
C^{m}_{loc}\left(\mathbb{R}^{n}\times\mathbb{R}^{k},\mathbb{R}^{D}\right)$
satisfies (23), then it is a section of $\mathcal{H}|_{\hat{B}}$.
We have now seen that a given $\left(F_{1},\cdots,F_{D}\right)\in
C^{m}_{loc}\left(\mathbb{R}^{n}\times\mathbb{R}^{k},\mathbb{R}^{D}\right)$ is
a section of $\mathcal{H}|_{\hat{B}}$ if and only if (23) holds.
Thus, all the conclusions of Lemma 5.1 hold, except that perhaps the $G^{i}$
are not real-analytic.
We set $U_{\text{bad}}^{2}$ to be the union of all the singular sets of the semialgebraic functions $G^{i}$. That’s a semialgebraic set of dimension $<n$ in $\mathbb{R}^{n}$.
We take $U_{\text{bad}}=U_{\text{bad}}^{1}\cup U_{\text{bad}}^{2}$, a
semialgebraic set of dimension $<n$ in $\mathbb{R}^{n}$.
The functions $A_{j\beta}^{i}$ and $G^{i}$ are Nash on $U\setminus
U_{\text{bad}}$.
If $B\subset U\setminus U_{\text{bad}}$ is a closed ball and
$\hat{B}=\hat{\psi}\left(B\right)$, then a given $\left(F_{1},\cdots,F_{D}\right)\in
C^{m}_{loc}\left(\mathbb{R}^{n}\times\mathbb{R}^{k},\mathbb{R}^{D}\right)$ is
a section of $\mathcal{H}|_{\hat{B}}$ if and only if
$\sum_{\left|\beta\right|\leq
m}\sum_{j=1}^{D}A_{j\beta}^{i}\left(x\right)\left(\partial_{y}^{\beta}F_{j}\right)|_{\hat{\psi}\left(x\right)}=G^{i}\left(x\right)$
on $B$ for each $i$.
This completes the proof of Lemma 5.1.
###### Remark 5.1
Lemma 5.1 and its proof hold also for $k=0$. In that case, $\hat{\psi}$ is the
identity map and there are no $y$-variables, hence no $y$-derivatives in the
conclusion of Lemma 5.1.
###### Corollary 5.1
Let $\mathcal{H},U,\psi,\cdots$ be as in Lemma 5.1. Let
$\left(F_{1},\cdots,F_{D}\right)\in
C^{m}_{loc}\left(\mathbb{R}^{n}\times\mathbb{R}^{k},\mathbb{R}^{D}\right).$
Then $\left(F_{1},\cdots,F_{D}\right)$ is a section of
$\mathcal{H}|_{\hat{U}\setminus\hat{\psi}\left(U_{\text{bad}}\right)}$ if and
only if
$\sum_{\left|\beta\right|\leq
m}\sum_{j=1}^{D}A_{j\beta}^{i}\left(x\right)\partial_{y}^{\beta}F_{j}\left(x,y\right)=G^{i}\left(x\right)$
on $\hat{U}\setminus\hat{\psi}\left(U_{\text{bad}}\right)$, for all $i$.
Proof. $U\setminus U_{\text{bad}}$ is a union of (infinitely many overlapping)
closed balls $B$. Applying Lemma 5.1 to each $B$, we obtain the desired
conclusion.
### 5.2 Gaussian Elimination with Parameters
Suppose we are given a system of linear equations
* (24)
$X_{i}+\sum_{j>k}A_{ij}X_{j}=b_{i}$, for $i=1,\cdots,k$ with
$\left|A_{ij}\right|\leq 2^{k}$ for $i=1,\cdots,k$, $j=k+1,\cdots,M$, and
* (26)
$\sum_{j>k}C_{ij}X_{j}=g_{i}$, for $i=k+1,\cdots,N$,
where $0\leq k\leq N,M;$ the $A_{ij}$, $C_{ij}$, $b_{i}$, $g_{i}$ are
semialgebraic functions defined on a semialgebraic set
$E\subset\mathbb{R}^{n}$; and $X_{1},\cdots,X_{M}$ are unknowns.
We say that this system is in $k$-echelon form on $E$.
If $k=0$, then we have simply (26) for $i=1,\cdots,N$, so every system of
linear equations with coefficient matrix and right-hand sides depending
semialgebraically on $x\in E$ is in $0$-echelon form on $E$.
If also $C_{ij}\equiv 0$ on $E$ for all $i=k+1,\cdots,N$, $j=k+1,\cdots,M$,
then we say that our system of equations is in echelon form on $E$. In
particular, a system in $k$-echelon form with $k=\min\\{N,M\\}$ is in echelon
form on $E$. Suppose our system is in $k$-echelon form with $k<\min\\{N,M\\}$.
We partition $E$ as follows. Let $E_{\text{good}}=\\{x\in E:\text{All the
}C_{ij}(x)=0\\}$. For $\tilde{i}=k+1,\cdots,N$ and $\tilde{j}=k+1,\cdots,M$,
we let $\tilde{E}(\tilde{i},\tilde{j})=\\{x\in
E:|C_{\tilde{i}\tilde{j}}|=\max_{ij}|C_{ij}|>0\\}$. The $E_{\text{good}}$ and
$\tilde{E}(i,j)$ form a covering of $E$.
We enumerate the pairs $(i,j)$ in any order and then form sets $E(i,j)$ by
removing from $\tilde{E}(i,j)$ all points contained in some
$\tilde{E}(i^{\prime},j^{\prime})$ with $(i^{\prime},j^{\prime})$ preceding
$(i,j)$. Then $E_{\text{good}}$ and the $E(i,j)$ form a partition of $E$ into
semialgebraic sets. On $E_{\text{good}}$, our system is in echelon form.
On each $E(a,b)$, we will exhibit a system of linear equations in
$(k+1)$-echelon form, equivalent to the given system (24), (26). For fixed $(a,b)$, we relabel equations and unknowns so that our system still has the form (24), (26), but with $|C_{k+1,k+1}|=\max_{ij}|C_{ij}|>0$. Dividing equations (26) by $C_{k+1,k+1}$, we may assume that
(28) $C_{k+1,k+1}=1$
and all
(29) $|C_{ij}|\leq 1.$
Note that $A_{ij},C_{ij},b_{i},g_{i}$ still depend semialgebraically on $x$.
From each equation (24), we subtract $A_{i(k+1)}$ times equation (26) with $i=k+1$. From each equation (26) ($i\not=k+1$), we subtract $C_{i,k+1}$ times equation (26) with $i=k+1$. Thus, we obtain equations of the form
(30)
$\left[\begin{array}[]{l}X_{i}+\sum_{j>k}\tilde{A}_{ij}X_{j}=\tilde{b}_{i},\text{for
}i=1,\cdots,k\\\ X_{k+1}+\sum_{j>k+1}C_{k+1,j}X_{j}=g_{k+1},\\\ \sum_{j\geq
k+1}\tilde{C}_{ij}X_{j}=\tilde{g}_{i}\text{, for }i>k+1.\end{array}\right.$
Here, $\tilde{A}_{ij}=A_{ij}-A_{i\left(k+1\right)}C_{k+1,j}$ for
$i=1,\cdots,k$, $j\geq k+1$; and $\tilde{C}_{ij}=C_{ij}-C_{i,k+1}C_{k+1,j}$
for $i=k+2,\cdots,N$, $j>k+1$.
In particular, $\tilde{A}_{i,k+1}=A_{i,k+1}-A_{i,k+1}\cdot C_{k+1,k+1}=0$, and
$\tilde{C}_{i,k+1}=C_{i,k+1}-C_{i,k+1}\cdot C_{k+1,k+1}=0$, thanks to (28).
Also, $\left|\tilde{A}_{ij}\right|\leq\left|A_{ij}\right|+\left|A_{i,k+1}\right|\cdot\left|C_{k+1,j}\right|\leq\left|A_{ij}\right|+\left|A_{i,k+1}\right|$ by (29), and this is at most $2^{k}+2^{k}=2^{k+1}$ because our system (24), (26) is in $k$-echelon form. Recall that $|C_{k+1,j}|\leq 1$.
These remarks show that the system of equations (30) is in
$\left(k+1\right)$-echelon form.
We repeat this procedure, starting with a system in $0$-echelon form, and partition $E$ more and more finely into pieces $E_{\nu}$, on each of which a system equivalent to (24), (26) is either in echelon form, or in $k$-echelon form for ever higher $k$. The procedure has to stop after at most $\min\left(N,M\right)$ steps, because a system in $k$-echelon form with $k=\min\left(N,M\right)$ is automatically in echelon form.
Thus, we have proven the following result:
###### Lemma 5.2
Consider a system of linear equations
(31) $\sum_{j=1}^{M}C_{ij}\left(x\right)X_{j}=g_{i}\left(x\right)\text{
}\left(i=1,\cdots,N\right)$
where the $C_{ij}\left(x\right)$ and $g_{i}\left(x\right)$ are semialgebraic
functions defined on a semialgebraic set $E\subset\mathbb{R}^{n}$.
Then we can partition $E$ into semialgebraic sets $E_{\nu}$
$\left(\nu=1,\cdots,\nu_{\max}\right)$, for which the following holds for each
$\nu$:
There exist a permutation
$\pi:\left\\{1,\cdots,M\right\\}\rightarrow\left\\{1,\cdots,M\right\\}$ and an
integer $0\leq k\leq\min\left(N,M\right)$ such that for each $x\in E_{\nu}$,
the system (31) is equivalent to a system of the form
(32) $\left[\begin{array}[]{c}X_{\pi
i}+\sum_{j>k}\tilde{A}_{ij}\left(x\right)X_{\pi
j}=\tilde{g}_{i}\left(x\right)\text{ for }i=1,\cdots,k\\\
0=\tilde{b}_{i}\left(x\right)\text{ for
}i=k+1,\cdots,N\text{.}\end{array}\right.$
That is, for each $x\in E_{\nu}$ and each
$\left(X_{1},\cdots,X_{M}\right)\in\mathbb{C}^{M}$, (31) holds at $x$ if and
only if (32) holds at $x$. Here, the $\tilde{A}_{ij},\tilde{g}_{i},$ and
$\tilde{b}_{i}$ are semialgebraic functions on $E_{\nu}$, and
$\left|\tilde{A}_{ij}\left(x\right)\right|\leq 2^{k}$ on $E_{\nu}$.
In essence, the method for solving the system (31) is just the usual Gaussian
elimination, except that we take extra care to maintain the growth condition
$\left|\tilde{A}_{ij}\left(x\right)\right|\leq 2^{k}$.
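To make the procedure concrete, here is a minimal pointwise sketch in Python (an illustration of ours, not part of the proof): it performs the elimination at a single point, choosing at each step the entry of maximal absolute value as the pivot, exactly as above. The genuinely semialgebraic part of Lemma 5.2 — partitioning $E$ according to which entry realizes $\max_{ij}|C_{ij}(x)|$, so that the same pivot pattern is used for every $x$ in a given piece $E_{\nu}$ — has no counterpart at a single point and is indicated only in the comments.

```python
import numpy as np

def echelon_form_at_point(C, g):
    """Pointwise sketch of the elimination in Section 5.2 (illustration only).

    In the semialgebraic setting one fixes, on each piece E(i,j) of the
    partition of E, which entry C_ij(x) realizes max_ij |C_ij(x)|; here we
    simply work at one point.  Reduces C X = g to the form (32), keeping
    the growth bound |A_ij| <= 2^k at every stage.
    """
    C = np.asarray(C, dtype=float).copy()
    g = np.asarray(g, dtype=float).copy()
    N, M = C.shape
    perm = np.arange(M)              # the permutation pi of the unknowns
    k = 0
    while k < min(N, M):
        block = np.abs(C[k:, k:])
        i0, j0 = np.unravel_index(np.argmax(block), block.shape)
        if block[i0, j0] == 0.0:     # all remaining C_ij vanish: echelon form
            break
        # Move the entry of maximal absolute value to the pivot slot (k, k).
        C[[k, k + i0], :] = C[[k + i0, k], :]
        g[[k, k + i0]] = g[[k + i0, k]]
        C[:, [k, k + j0]] = C[:, [k + j0, k]]
        perm[[k, k + j0]] = perm[[k + j0, k]]
        # Divide rows k..N-1 by the pivot, as in (28), (29): afterwards
        # C[k, k] = 1 and |C[i, j]| <= 1 for all i, j >= k.
        piv = C[k, k]
        C[k:, :] /= piv
        g[k:] /= piv
        # Subtract multiples of the pivot row from every other row.  The
        # multipliers are the A-block entries (at most 2^k) for rows above
        # the pivot and the C-entries (at most 1) below it, so the A-block
        # at most doubles: the bound 2^(k+1) of the text.
        for i in range(N):
            if i != k:
                mult = C[i, k]
                C[i, :] -= mult * C[k, :]
                g[i] -= mult * g[k]
        k += 1
    # Rows 0..k-1 now read  X_{perm[i]} + sum_{j >= k} A_ij X_{perm[j]} = b_i,
    # and rows k..N-1 read  0 = g_i, exactly as in (32).
    return C, g, perm, k
```

For instance, `echelon_form_at_point(np.array([[0., 2.], [1., 0.]]), np.array([2., 3.]))` returns the identity coefficient block, right-hand side `[1., 3.]`, and `perm = [1, 0]`: the echelon system recovers $X_{2}=1$, $X_{1}=3$ from $2X_{2}=2$, $X_{1}=3$.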
### 5.3 What It Means to be a Section of a Semialgebraic Bundle
We work with a semialgebraic bundle $\mathcal{H}=(H(x))_{x\in\mathbb{R}^{2}}$.
Each $H(x)$ is a coset of an $\mathcal{R}_{x}$-submodule of
$(\mathcal{R}_{x})^{D}$, depending semialgebraically on $x$. Here,
$\mathcal{R}_{x}$ is the ring of the $m$-jets of functions at $x$. A function
$F=(F_{1},\cdots,F_{D})\in C^{m}_{loc}(\Omega,\mathbb{R}^{D})$
($\Omega\subset\mathbb{R}^{2}$ open) is a section of $\mathcal{H}$ if for all
$x\in\Omega$ the $m$-jet $J_{x}F$ belongs to $H(x)$. A function $F\in
C^{m}_{loc}(\Omega,\mathbb{R}^{D})$ is called a local section near $x^{0}$
($x^{0}\in\Omega$) if for some small disc $B\subset\Omega$ centered at $x^{0}$
we have $J_{x}F\in H(x)$ for all $x\in B$.
Let $\Omega=\\{(x,y)\in\mathbb{R}^{2}:0\leq y\leq x\\}$. Let
$\mathcal{H}=(H(x))_{x\in\mathbb{R}^{2}}$ be a semialgebraic bundle, with
$H((0,0))=\\{0\\}$. We assume that $\mathcal{H}$ has a section. We want a
convenient condition on functions $F\in C^{m}_{loc}(\Omega,\mathbb{R}^{D})$
that is equivalent to the assertion that $F|_{B\cap\Omega^{\text{interior}}}$
is a section of $\mathcal{H}$ for a small enough disc $B$ centered at the
origin. We achieve (approximately) that.
To do so, we partition $\Omega$ into semialgebraic open subsets of
$\mathbb{R}^{2}$, finitely many semialgebraic curves in $\mathbb{R}^{2}$, and
finitely many points. To start with, we partition $\Omega$ into the point
$(0,0)$, the arcs $\\{(x,0):x>0\\},\\{(x,x):x>0\\},$ and
$\Omega^{\text{interior}}$.
As we proceed, we will cut up each of our semialgebraic open sets into
finitely many semialgebraic open subsets, finitely many semialgebraic arcs,
and finitely many points. We won’t keep track explicitly of the arcs and
points at first; we just discard semialgebraic subsets of $\mathbb{R}^{2}$ of
dimension $\leq 1$.
We apply Lemma 5.1 in the case $k=0$ to $\Omega^{\text{interior}}$ and
$\mathcal{H}$. (See Remark 5.1.)
Thus, we obtain a semialgebraic $V_{1}\subset\Omega^{\text{interior}}$ of
dimension $\leq 1$, outside of which the following holds for some
semialgebraic functions $A_{ij}^{\\#}(x),\phi_{i}^{\\#}(x)$ for $1\leq i\leq
i_{\max},1\leq j\leq D,x\in\Omega^{\text{interior}}\setminus V_{1}$:
Let $F=(F_{1},\cdots,F_{D})$ belong to $C^{m}_{loc}(U,\mathbb{R}^{D})$ where
$U$ is a neighborhood of $x^{0}\in\Omega^{\text{interior}}\setminus V_{1}$.
Then $F$ is a local section of $\mathcal{H}$ near $x^{0}$ if and only if
* (33)
$\sum_{j=1}^{D}A_{ij}^{\\#}(x)F_{j}(x)=\phi_{i}^{\\#}(x)$, for
$i=1,\cdots,i_{\max}$, for all $x$ in a neighborhood of $x^{0}$.
The equations (33) have a solution for each fixed $x$, because $\mathcal{H}$
has a section. Next, we apply Lemma 5.2 to the above system of linear
equations.
Thus, we obtain a partition of $\Omega^{\text{interior}}\setminus V_{1}$ into
semialgebraic sets $E_{\nu}^{\\#}$ ($\nu=1,\cdots,\nu_{\max}^{\\#}$), for
which we have integers $\tilde{k}_{\nu}\geq 0$, permutations
$\tilde{\pi}_{\nu}:\\{1,\cdots,D\\}\rightarrow\\{1,\cdots,D\\}$, and
semialgebraic functions $\tilde{A}_{ij}^{\nu}(x)$ ($1\leq
i\leq\tilde{k}_{\nu},\tilde{k}_{\nu}+1\leq j\leq D,x\in E_{\nu}^{\\#}$),
$\tilde{\varphi}_{i}^{\nu}(x)$ such that for any $x\in E_{\nu}^{\\#}$, the system of equations (33) is equivalent to
(35)
$F_{\tilde{\pi}_{\nu}i}\left(x\right)+\sum_{j>\tilde{k}_{\nu}}\tilde{A}_{ij}^{\nu}\left(x\right)F_{\tilde{\pi}_{\nu}j}\left(x\right)=\tilde{\varphi}_{i}^{\nu}\left(x\right)\text{ for }i=1,\cdots,\tilde{k}_{\nu}.$
Moreover, the $\tilde{A}_{ij}^{\nu}\left(x\right)$ are bounded. Note that the
functions $\tilde{b}_{i}$ in (32) are identically $0$ because our equations
(33) have a solution.
Because $\mathcal{H}$ has a section, there exists
$F=\left(F_{1},\cdots,F_{D}\right)\in
C^{m}_{loc}\left(\Omega,\mathbb{R}^{D}\right)$ satisfying (33) for all $x\in\Omega^{\text{interior}}\setminus V_{1}$, hence also satisfying (35) in $E_{\nu}^{\\#}$. Consequently, the left-hand side of (35) is bounded (for bounded $x$), and thus also the $\tilde{\varphi}_{i}^{\nu}\left(x\right)$ are bounded (for bounded $x$).
Applying Theorem 5, we obtain a semialgebraic $V_{2}\subset\mathbb{R}^{2}$ of
dimension $\leq 1$, satisfying
(36)
$\left|\partial^{\alpha}\tilde{\varphi}_{i}^{\nu}\left(x\right)\right|,\left|\partial^{\alpha}\tilde{A}_{ij}^{\nu}\left(x\right)\right|\leq
C\left[\text{dist}\left(x,V_{2}\right)\right]^{-\left|\alpha\right|}\text{ for
bounded }x\text{ outside }V_{2}\text{, for }\left|\alpha\right|\leq
m+100\text{.}$
By adding $\partial\Omega$ to $V_{2}$ and removing from $V_{2}$ all points
outside $\Omega$, we may assume $V_{2}\subset\Omega$. (This operation does not
increase the distance from $V_{2}$ to any point of $\Omega$.)
Let $\hat{E}_{\nu}$ $\left(\nu=1,\cdots,\nu_{\max}\right)$ be the connected
components of the interiors of the sets $E_{\nu}^{\\#}\setminus V_{2}$
($\nu=1,\cdots,\nu_{\max}^{\\#}$).
Then $\Omega$ is partitioned into the $\hat{E}_{\nu}$ and $V_{3}$, where
$V_{3}$ is a semialgebraic subset of $\Omega$ of dimension $\leq 1$. The
$\hat{E}_{\nu}$ are pairwise disjoint open connected semialgebraic sets. Any
path in $\Omega$ that does not meet $V_{3}$ stays entirely in a single
$\hat{E}_{\nu}$. Indeed, suppose not: let $\gamma\left(t\right)\in\Omega$
$\left(t\in\left[0,1\right]\right)$ be a path starting at
$\gamma\left(0\right)\in\hat{E}_{\nu}$ not staying in $\hat{E}_{\nu}$ and not
meeting $V_{3}$. Pick $t_{\ast}=\inf\left\\{t>0:\gamma\left(t\right)\not\in\hat{E}_{\nu}\right\\}$. Then $t_{\ast}>0$ since $\hat{E}_{\nu}$ is open. We can’t have $\gamma\left(t_{\ast}\right)\in\hat{E}_{\nu^{\prime}}$ with $\nu^{\prime}\not=\nu$, for otherwise $\gamma\left(t\right)\in\hat{E}_{\nu^{\prime}}$ (and $\in\hat{E}_{\nu}$) for $t\in[t_{\ast}-\varepsilon,t_{\ast})$, contradicting the disjointness of the $\hat{E}_{\nu}$. We can’t have $\gamma(t_{\ast})\in\hat{E}_{\nu}$, since $\hat{E}_{\nu}$ is open and that would imply $\gamma(t)\in\hat{E}_{\nu}$ for all $t\in[t_{\ast},t_{\ast}+\varepsilon]$, contradicting the definition of $t_{\ast}$. Thus, $\gamma\left(t_{\ast}\right)\in V_{3}$, contradicting the fact that $\gamma$ does not meet $V_{3}$.
Moreover, there exist integers $\hat{k}_{\nu}\geq 0$, permutations
$\hat{\pi}_{\nu}:\left\\{1,\cdots,D\right\\}\rightarrow\left\\{1,\cdots,D\right\\}$,
and semialgebraic functions $\hat{A}_{ij}^{\nu}\left(x\right)$ $\left(1\leq
i\leq\hat{k}_{\nu}\text{, }\hat{k}_{\nu}+1\leq j\leq D\right)$ and
$\hat{\varphi}_{i}^{\nu}\left(x\right)$ $\left(1\leq
i\leq\hat{k}_{\nu}\right)$ defined on $\hat{E}_{\nu}$, with the following
properties
* (37)
$\left|\partial^{\alpha}\hat{A}_{ij}^{\nu}\left(x\right)\right|$,
$\left|\partial^{\alpha}\hat{\varphi}_{i}^{\nu}\left(x\right)\right|\leq
C\left[\text{dist}\left(x,V_{3}\right)\right]^{-\left|\alpha\right|}$ for
bounded $x\in\hat{E}_{\nu}$, $\left|\alpha\right|\leq m+100$, and
* (39)
Let $x^{0}\in\hat{E}_{\nu}$ and let $F=\left(F_{1},\cdots,F_{D}\right)$ be
$C^{m}_{loc}$ in a neighborhood of $x^{0}$. Then $F$ is a local section of
$\mathcal{H}$ near $x^{0}$ if and only if
$F_{\hat{\pi}_{\nu}i}\left(x\right)+\sum_{j>\hat{k}_{\nu}}\hat{A}_{ij}^{\nu}\left(x\right)F_{\hat{\pi}_{\nu}j}\left(x\right)=\hat{\varphi}_{i}^{\nu}\left(x\right)$
in a neighborhood of $x^{0}$ for each $i=1,\cdots,\hat{k}_{\nu}$.
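To fix ideas, here is a schematic special case of (39) (our illustration, not part of the argument): take $D=2$, $\hat{k}_{\nu}=1$, and $\hat{\pi}_{\nu}$ the identity. Then $F=\left(F_{1},F_{2}\right)$ is a local section of $\mathcal{H}$ near $x^{0}\in\hat{E}_{\nu}$ if and only if
$F_{1}\left(x\right)+\hat{A}_{12}^{\nu}\left(x\right)F_{2}\left(x\right)=\hat{\varphi}_{1}^{\nu}\left(x\right)$
in a neighborhood of $x^{0}$; the component $F_{2}$ is locally unconstrained.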
We partition $V_{3}\cup\left\\{\left(x,0\right):x\geq
0\right\\}\cup\left\\{\left(x,x\right):x\geq 0\right\\}$ into finitely many
Nash open arcs (not containing their endpoints) and finitely many points.
For small enough $\delta>0$, $B\left(0,\delta\right)\subset\mathbb{R}^{2}$ avoids all the above arcs not containing $0$ in their closure, and all the above points except possibly the point $0$. Taking $\delta$ small, we may assume that the
remaining arcs have convergent Puiseux series in $B(0,\delta)$.
Notice that our semialgebraic one-dimensional sets are all contained in
$\Omega$; so no arcs have tangent lines at $0$ lying outside the sector
$\Omega$. Thus, the remaining arcs have the form $\\{y=\psi_{s}(x)\\}$ in
$B(0,\delta)$, where $\psi_{1},\cdots,\psi_{s_{\max}}$ are semialgebraic
functions of one variable, with convergent Puiseux expansion in $[0,\delta]$.
We discard duplicates, i.e., we may assume $\psi_{s}$ is never identically
equal to $\psi_{s^{\prime}}$ for $s^{\prime}\not=s$. Note that the line segments $\\{(x,0):0<x<\delta\\}$ and $\\{(x,x):0<x<\delta\\}$ are among our arcs. Taking $\delta>0$ smaller yet, we may assume that for each
$s\not=s^{\prime}$, either $\psi_{s}(x)<\psi_{s^{\prime}}(x)$ for all
$x\in(0,\delta)$, or $\psi_{s}(x)>\psi_{s^{\prime}}(x)$ for all
$x\in(0,\delta)$. (That’s because the $\psi_{s}$ are given by convergent
Puiseux expansions.) Thus, in $B(0,\delta)$, our curves may be labelled so
that $0\equiv\psi_{0}(x)<\psi_{1}(x)<\cdots<\psi_{s_{\max}}(x)\equiv x$ for
$x\in(0,\delta)$. The arcs are
$\gamma_{s}=\\{(x,\psi_{s}(x)):x\in[0,\delta]\\}$ for $s=0,\cdots,s_{\max}$.
(Here we have thrown in the point $0$, and taken $\delta$ small to allow
ourselves to include $x=\delta$, not just $x<\delta$.)
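(As an illustrative check of this comparability, our own example: for $\psi_{s}(x)=x^{3/2}-x^{2}$ and $\psi_{s^{\prime}}(x)=\frac{1}{2}x^{3/2}$ we have
$\psi_{s}\left(x\right)-\psi_{s^{\prime}}\left(x\right)=\tfrac{1}{2}x^{3/2}\left(1-2x^{1/2}\right)>0\text{ for }0<x<\tfrac{1}{4}\text{,}$
so the sign of $\psi_{s}-\psi_{s^{\prime}}$ near $0$ is decided by the leading term of the Puiseux expansion of the difference.)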
The sets we discarded in passing from $V_{3}$ to the semialgebraic arcs
$\gamma_{0},\cdots,\gamma_{s_{\max}}$ are irrelevant in the sense that
$V_{3}\cap
B(0,\delta)\subset(\gamma_{0}\cup\gamma_{1}\cup\cdots\cup\gamma_{s_{\max}})\cap
B(0,\delta)$.
Let $E_{s}$ ($s=1,\cdots,s_{\max}$) be the part of $B(0,\delta)$ lying between $\gamma_{s-1}$ and $\gamma_{s}$, i.e., $E_{s}=\\{(x,y)\in B(0,\delta):0<x<\delta,\psi_{s-1}(x)<y<\psi_{s}(x)\\}$.
Any two points in a given $E_{s}$ may be joined by a path in
$B(0,\delta)\setminus\bigcup_{s=0}^{s_{\max}}\gamma_{s}\subset
B(0,\delta)\setminus V_{3}$, hence all points in a given $E_{s}$ lie in the
same $\hat{E}_{\nu}$.
Therefore, for $s=1,\cdots,s_{\max}$, there exist $k_{s}\geq 0$, permutations
$\pi_{s}:\\{1,\cdots,D\\}\rightarrow\\{1,\cdots,D\\}$, and semialgebraic
functions $A_{ij}^{s}(x)$, $\varphi_{i}^{s}(x)$ ($1\leq i\leq k_{s};j=k_{s}+1,\cdots,D$) on $E_{s}$, with the following properties
* (41)
Let $x^{0}\in E_{s}$, and let $F=(F_{1},\cdots,F_{D})$ be $C^{m}_{loc}$ in a
neighborhood of $x^{0}$. Then $F$ is a local section of $\mathcal{H}$ near
$x^{0}$ if and only if
* (43)
$F_{\pi_{s}i}(x)+\sum_{j>k_{s}}A_{ij}^{s}(x)F_{\pi_{s}j}(x)=\varphi_{i}^{s}(x)$
in a neighborhood of $x^{0}$ for each $i=1,\cdots,k_{s}$.
Moreover,
* (45)
$|\partial^{\alpha}A_{ij}^{s}(x)|$, $|\partial^{\alpha}\varphi_{i}^{s}(x)|\leq C\left[\text{dist}(x,\gamma_{s}\cup\gamma_{s-1})\right]^{-|\alpha|}$ on
$E_{s}$ for $|\alpha|\leq m+100$.
In particular, if $F=(F_{1},\cdots,F_{D})\in
C^{m}_{loc}(\Omega,\mathbb{R}^{D})$, then $J_{x}F\in H(x)$ for all
$x\in[\Omega\cap
B(0,\delta)]\setminus(\gamma_{0}\cup\cdots\cup\gamma_{s_{\max}})$ if and only
if for each $s=1,\cdots,s_{\max}$, (43) holds on all of $E_{s}$.
Next, we apply Lemma 5.1 to $\mathcal{H}_{s}=(H(x))_{x\in\gamma_{s}}$,
($s=0,\cdots,s_{\max}$). We obtain semialgebraic functions for which the
following holds.
Let $\left(x^{0},\psi_{s}\left(x^{0}\right)\right)\in\gamma_{s}$ be given, and
let $F=\left(F_{1},\cdots,F_{D}\right)\in
C^{m}_{loc}\left(U,\mathbb{R}^{D}\right)$, where $U$ is a neighborhood of
$\gamma_{s}$ in $\mathbb{R}^{2}$. Then, except for finitely many bad $x^{0}$,
we have the following equivalence:
$F$ is a local section of $\mathcal{H}_{s}$ near
$\left(x^{0},\psi_{s}\left(x^{0}\right)\right)$ if and only if
$\sum_{\begin{subarray}{c}1\leq j\leq D\\\ 0\leq l\leq m\end{subarray}}\Theta_{jl}^{si}\left(x\right)\partial_{y}^{l}F_{j}|_{\left(x,\psi_{s}\left(x\right)\right)}=g^{si}\left(x\right)\quad\left(i=1,\cdots,i_{\max}\left(s\right)\right)$
for all $x$ in a neighborhood of $x^{0}$. Here, the $\Theta$’s and $g$’s are
semialgebraic functions of one variable. To say that $F$ is a local section of
$\mathcal{H}_{s}$ near $\left(x^{0},\psi_{s}\left(x^{0}\right)\right)$ means
that $J_{\left(x,\psi_{s}\left(x\right)\right)}F\in
H\left(x,\psi_{s}\left(x\right)\right)$ for all $x$ in a neighborhood of
$x^{0}$.
By restricting attention to $B\left(0,\delta\right)$ and taking $\delta>0$
smaller, we may exclude from $B\left(0,\delta\right)$ all these bad $x^{0}$,
except for $x^{0}=0$.
Combining our results (41), (45) on the $E_{s}$ with the above result on the arcs $\gamma_{s}$, we obtain the following result.
###### Lemma 5.3
Let $\Omega=\left\\{\left(x,y\right)\in\mathbb{R}^{2}:0\leq y\leq x\leq 1\right\\}$ and let $\mathcal{H}=\left(H\left(x\right)\right)_{x\in\Omega}$ be a semialgebraic bundle, with each $H\left(x\right)$ consisting of $m$-jets at $x$ of functions from $\mathbb{R}^{2}$ to $\mathbb{R}^{D}$.
Assume $H\left(\left(0,0\right)\right)=\left\\{0\right\\}$ and assume
$\mathcal{H}$ has a section.
Then there exist the following objects, with properties to be specified below:
* •
A positive number $\delta\in\left(0,1\right)$.
* •
Semialgebraic functions
$0=\psi_{0}\left(x\right)<\psi_{1}\left(x\right)<\cdots<\psi_{s_{\max}}\left(x\right)=x$
on $\left(0,\delta\right),$ all given by convergent Puiseux expansions on
$\left(0,\delta\right)$.
* •
Integers $k_{s}$ $\left(0\leq k_{s}\leq D\right)$ and permutations $\pi_{s}:\left\\{1,\cdots,D\right\\}\rightarrow\left\\{1,\cdots,D\right\\}$ for $s=1,\cdots,s_{\max}$.
* •
Semialgebraic functions $A_{ij}^{s}\left(x,y\right)$
$\left(s=1,\cdots,s_{\max},1\leq i\leq k_{s},k_{s}<j\leq D\right)$ and
$\varphi_{i}^{s}\left(x,y\right)$ $(s=1,\cdots,s_{\max},1\leq i\leq k_{s})$
defined on
$E_{s}=\left\\{\left(x,y\right):0<x<\delta,\psi_{s-1}\left(x\right)<y<\psi_{s}\left(x\right)\right\\}$.
* •
Semialgebraic functions $\Theta_{jl}^{si}\left(x\right)$, $g^{si}\left(x\right)$ ($s=0,\cdots,s_{\max}$, $i=1,\cdots,i_{\max}\left(s\right)$, $j=1,\cdots,D$, $l=0,\cdots,m$) defined on $\left(0,\delta\right)$, and given there by convergent Puiseux expansions.
The above objects have the following properties
* •
(Estimates) For $\left(x,y\right)\in\Omega$ with $0<x<\delta$ and
$\psi_{s-1}\left(x\right)<y<\psi_{s}\left(x\right)$, we have
$\left|\partial^{\alpha}A_{ij}^{s}\left(x,y\right)\right|$,
$\left|\partial^{\alpha}\varphi_{i}^{s}\left(x,y\right)\right|\leq
C\left[\min\left(\left|y-\psi_{s}\left(x\right)\right|,\left|y-\psi_{s-1}\left(x\right)\right|\right)\right]^{-\left|\alpha\right|}$
for $\left|\alpha\right|\leq m+100$.
* •
(Condition for sections) Let $F=(F_{1},...,F_{D})\in
C^{m}_{loc}(\Omega,\mathbb{R}^{D})$, and suppose $J_{x}F\in H\left(x\right)$
for all $x\in\Omega$.
* (47)
Then for $s=1,\cdots,s_{\max}$, $i=1,\cdots,k_{s}$,
$x\in\left(0,\delta\right)$,
$\psi_{s-1}\left(x\right)<y<\psi_{s}\left(x\right)$, we have
$F_{\pi_{s}i}\left(x,y\right)+\sum_{D\geq
j>k_{s}}A_{ij}^{s}\left(x,y\right)F_{\pi_{s}j}\left(x,y\right)=\varphi_{i}^{s}\left(x,y\right)\text{;}$
and for $s=0,1,\cdots,s_{\max}$, $i=1,\cdots,i_{\max}\left(s\right)$,
$x\in\left(0,\delta\right)$, we have
$\sum_{j=1}^{D}\sum_{l=0}^{m}\Theta_{jl}^{si}\left(x\right)\partial_{y}^{l}F_{j}\left(x,\psi_{s}\left(x\right)\right)=g^{si}\left(x\right)\text{;}$
and $J_{\left(0,0\right)}F_{j}=0$ for all $j$.
Conversely, if $F=(F_{1},...,F_{D})\in C^{m}_{loc}(\Omega,\mathbb{R}^{D})$ and the conditions in (47) are satisfied, then $J_{z}F\in H\left(z\right)$ for all $z=\left(x,y\right)\in\Omega$ with $0\leq x<\delta$.
## 6 A Second Main Lemma
This section is devoted to the proof of the following lemma. See (A) and (B)
in the Introduction.
###### Lemma 6.1 (Second Main Lemma)
Let $\mathcal{H}=(H(z))_{z\in\Omega}$ with
$\Omega=\left\\{(x,y)\in\mathbb{R}^{2}:0\leq y\leq x\leq 1\right\\}$ and
suppose $H(z)$ depends semialgebraically on $z$. (As usual,
$H(z)\subset\mathcal{R}_{z}^{D}$ is a coset of an
$\mathcal{R}_{z}$-submodule.)
Suppose $\mathcal{H}$ has a section, and suppose $H\left(\left(0,0\right)\right)=\left\\{0\right\\}$. Then there exist semialgebraic
functions $\theta_{jl}^{si}(x)$, $g^{si}(x)$, $\tilde{\theta}_{jl}^{si}(x)$,
$\tilde{g}^{si}(x)$ of one variable, and
$0=\psi_{0}(x)<\cdots<\psi_{s_{\max}}(x)=x$, also semialgebraic, for which the
following hold.
Suppose $F=(F_{1},\cdots,F_{D})\in C^{m}(\Omega,\mathbb{R}^{D})$ is a section
of $\mathcal{H}$. Let $f_{jl}^{s}(x)=\partial_{y}^{l}F_{j}(x,\psi_{s}(x))$ for
$0\leq s\leq s_{\max}$, $0\leq l\leq m$, $1\leq j\leq D$.
Then
* (49)
$\sum_{j,l}\theta_{jl}^{si}(x)f_{jl}^{s}(x)=g^{si}(x)$ on $(0,\delta)$ for
some $\delta>0$ for each $s,i$; and
$\sum_{j,l}\tilde{\theta}_{jl}^{si}(x)f_{jl}^{s}(x)=\tilde{g}^{si}(x)+o(1)$ as
$x\rightarrow 0^{+}$, each $s$, $i$; and
$f_{jl}^{s}(x)=\sum_{k=0}^{m-l}\frac{1}{k!}f_{j(l+k)}^{s-1}(x)\cdot\left(\psi_{s}\left(x\right)-\psi_{s-1}\left(x\right)\right)^{k}+o\left(\left[\psi_{s}\left(x\right)-\psi_{s-1}\left(x\right)\right]^{m-l}\right)$
as $x\rightarrow 0^{+}$, each $s$, $j$, $l$.
* (51)
Conversely, if $f_{jl}^{s}\left(x\right)$ are semialgebraic functions satisfying (49), then there exists a semialgebraic $C^{m}$ section
$F=\left(F_{1},\cdots,F_{D}\right)$ of $\mathcal{H}$ over
$\Omega_{\delta^{\prime}}=\left\\{\left(x,y\right):0\leq y\leq
x\leq\delta^{\prime}\right\\}$ (some $\delta^{\prime}>0$) such that
$\partial_{y}^{l}F_{j}\left(x,\psi_{s}\left(x\right)\right)=f_{jl}^{s}\left(x\right)$
for $0<x<\delta^{\prime}$.
We call the curves $y=\psi_{s}(x)$ “critical curves”.
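For orientation: the third condition in (49) is simply Taylor's theorem in the $y$-variable, applied between consecutive critical curves. Indeed, for a $C^{m}$ section $F$ and fixed $x$, expanding $y\mapsto\partial_{y}^{l}F_{j}(x,y)$ about $y=\psi_{s-1}(x)$ and evaluating at $y=\psi_{s}(x)$ gives
$\partial_{y}^{l}F_{j}\left(x,\psi_{s}\left(x\right)\right)=\sum_{k=0}^{m-l}\frac{1}{k!}\partial_{y}^{l+k}F_{j}\left(x,\psi_{s-1}\left(x\right)\right)\cdot\left(\psi_{s}\left(x\right)-\psi_{s-1}\left(x\right)\right)^{k}+o\left(\left[\psi_{s}\left(x\right)-\psi_{s-1}\left(x\right)\right]^{m-l}\right)\text{,}$
where the error is $o(\cdot)$ as $x\rightarrow 0^{+}$ because the $m$-th derivatives of $F$ are continuous at $(0,0)$ and both curve points tend to the origin.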
### 6.1 The Jet of a Section at a Critical Curve
Fix $m\geq 1$. Recall that $\mathcal{P}$ denotes the space of polynomials of
degree $\leq m$ on $\mathbb{R}^{2}$, and $J_{z}F\in\mathcal{P}$ denotes the
$m$-jet of $F$ at $z\in\mathbb{R}^{2}$. $\odot_{z}$ denotes multiplication of
jets at $z$. We write $\mathfrak{p}$ to denote the space of polynomials of
degree $\leq m$ on $\mathbb{R}$. If $F(x,y)$ is a $C^{m}_{loc}$ function in a
neighborhood of $(\bar{x},0)$, then $j_{\bar{x}}F\in\mathfrak{p}$ is the
$m$-jet at $0$ of the function $y\mapsto F(\bar{x},y)$. We write $\boxdot$ to
denote multiplication of $m$-jets at $0$ of $C^{m}_{loc}$ functions of one
variable.
If $\vec{F}=(F_{1},\cdots,F_{j_{\max}})$ is a vector of $C^{m}_{loc}$
functions on $\mathbb{R}^{2}$, then $J_{z}\vec{F}$ denotes
$(J_{z}F_{1},\cdots,J_{z}F_{j_{\max}})\in\mathcal{P}^{j_{\max}}.$
Similarly, $j_{\bar{x}}\vec{F}$ denotes
$(j_{\bar{x}}F_{1},\cdots,j_{\bar{x}}F_{j_{\max}})\in\mathfrak{p}^{j_{\max}}.$
A function $F^{\\#}:(0,\delta)\rightarrow\mathfrak{p}$ may be regarded as a
function of $(x,y)\in(0,\delta)\times\mathbb{R}$ such that for fixed $x$, the
function $y\mapsto F^{\\#}(x,y)$ is a polynomial of degree at most $m$.
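Concretely, such a map $F^{\\#}$ is given by coefficient functions: writing (our notation)
$F^{\\#}\left(x,y\right)=\sum_{s=0}^{m}a_{s}\left(x\right)y^{s}\text{, }a_{s}:\left(0,\delta\right)\rightarrow\mathbb{R}\text{,}$
this is exactly the form that appears in (78) and (80) below.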
Fix positive integers $i_{\max},j_{\max}$. Let Aff denote the vector space of
all affine functions defined on $\mathfrak{p}^{j_{\max}+i_{\max}}$. We make
the following assumptions:
* •
We are given $C^{\infty}$ semialgebraic functions
$A_{ij},B_{i},(i=1,\cdots,i_{\max},j=1,\cdots,j_{\max})$ defined on
$\Omega_{1}$, where for $\delta>0$,
$\Omega_{\delta}=\\{(x,y)\in\mathbb{R}^{2}:0<x<\delta,0<y<\psi(x)\\}$, and
$\psi:(0,1)\rightarrow(0,\infty)$ is a semialgebraic function satisfying
$0<\psi(x)\leq x$ for $x\in(0,1)$.
* •
We assume that $\partial^{\alpha}A_{ij},\partial^{\alpha}B_{i}$ extend to
continuous functions on $\Omega_{1}^{+}$ for $|\alpha|\leq m$, where, for
$\delta>0$,
$\Omega_{\delta}^{+}=\\{(x,y)\in\mathbb{R}^{2}:0<x\leq\delta,0<y\leq\psi(x)\\}$.
* •
We suppose that
$\displaystyle|\partial^{\alpha}A_{ij}(x,y)|$ $\displaystyle\leq$
$\displaystyle Cy^{-|\alpha|},\text{ and}$
$\displaystyle|\partial^{\alpha}B_{i}(x,y)|$ $\displaystyle\leq$
$\displaystyle Cy^{-|\alpha|}$
on $\Omega^{+}_{1}$ for $|\alpha|\leq m$.
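These hypotheses are not vacuous. For instance (our illustration, not used in the argument), with $\psi(x)=x$ and $i_{\max}=j_{\max}=1$ one may take
$A\left(x,y\right)=\frac{y}{x+y}\text{, }B\left(x,y\right)=x\text{:}$
$A$ is semialgebraic, homogeneous of degree $0$, and smooth on the closed sector minus the origin, so $\partial^{\alpha}A$ is homogeneous of degree $-\left|\alpha\right|$ and $\left|\partial^{\alpha}A\left(x,y\right)\right|\leq C_{\alpha}x^{-\left|\alpha\right|}\leq C_{\alpha}y^{-\left|\alpha\right|}$ on $0<y\leq x$, as required.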
###### Lemma 6.2
Under the above assumptions, there exist $\delta\in(0,1)$ and semialgebraic
maps
$\lambda_{1},\cdots,\lambda_{k_{\max}},\mu_{1},\cdots,\mu_{l_{\max}}:(0,\delta)\rightarrow\text{Aff}$
such that the following hold:
* (53)
Suppose $\vec{F}=(F_{1},\cdots,F_{j_{\max}})$ and
$\vec{G}=(G_{1},\cdots,G_{i_{\max}})$ belong to
$C^{m}(\Omega_{\delta}^{\text{closure}},\mathbb{R}^{j_{\max}})$ and
$C^{m}(\Omega_{\delta}^{\text{closure}},\mathbb{R}^{i_{\max}})$ respectively,
with $J_{(0,0)}\vec{F}=0,J_{(0,0)}\vec{G}=0$. Suppose also that
$G_{i}=\sum_{j}A_{ij}F_{j}+B_{i}$ for each $i$. Then
$[\lambda_{k}(\bar{x})](j_{\bar{x}}\vec{F},j_{\bar{x}}\vec{G})=0$ for
$k=1,\cdots,k_{\max},\bar{x}\in(0,\delta)$, and
$[\mu_{l}(\bar{x})](j_{\bar{x}}\vec{F},j_{\bar{x}}\vec{G})$ is bounded on
$(0,\delta)$ and tends to zero as $\bar{x}\rightarrow 0$, for each
$l=1,\cdots,l_{\max}$. We do not assume $\vec{F}$ or $\vec{G}$ is
semialgebraic.
* (55)
Suppose there exists an $(\vec{F},\vec{G})$ as in (53). Let $\vec{F}^{\\#}=(F_{1}^{\\#},\cdots,F_{j_{\max}}^{\\#})$, $\vec{G}^{\\#}=(G_{1}^{\\#},\cdots,G_{i_{\max}}^{\\#})$, where the
$F_{j}^{\\#}$ and $G_{i}^{\\#}$ are semialgebraic maps from
$(0,\delta)\rightarrow\mathfrak{p}$. Suppose that
$[\lambda_{k}(\bar{x})](\vec{F}^{\\#}(\bar{x}),\vec{G}^{\\#}(\bar{x}))=0,$
for $k=1,\cdots,k_{\max},\bar{x}\in(0,\delta)$; and that
$[\mu_{l}(\bar{x})](\vec{F}^{\\#}(\bar{x}),\vec{G}^{\\#}(\bar{x}))$ is bounded
on $(0,\delta)$ and tends to zero as $\bar{x}\rightarrow 0$. Then there exist
$\delta^{\prime}>0$ and $\vec{F}=(F_{1},\cdots,F_{j_{\max}})$,
$\vec{G}=(G_{1},\cdots,G_{i_{\max}})$ semialgebraic and in
$C^{m}(\Omega_{\delta^{\prime}}^{\text{closure}},\mathbb{R}^{j_{\max}})$ and
$C^{m}(\Omega_{\delta^{\prime}}^{\text{closure}},\mathbb{R}^{i_{\max}})$
respectively, with $J_{(0,0)}\vec{F}=0,J_{(0,0)}\vec{G}=0$,
$G_{i}=\sum_{j}A_{ij}F_{j}+B_{i}$ and
$j_{\bar{x}}\vec{F}=\vec{F}^{\\#}(\bar{x}),j_{\bar{x}}\vec{G}=\vec{G}^{\\#}(\bar{x})$,
for all $\bar{x}\in(0,\delta^{\prime})$. (Note that here we have passed from
$\delta$ to a smaller $\delta^{\prime}$.)
The remainder of this section is devoted to a proof of Lemma 6.2.
Let $\delta>0$ be small, to be picked below.
###### Definition 6.1
We define a bundle $\mathcal{H}$ over
$[0,1]\times\\{0\\}\subset\mathbb{R}^{2}$. Here,
$\mathcal{H}=(H(\bar{x},0))_{\bar{x}\in[0,1]}$, with
$H(\bar{x},0)\subset\mathcal{P}^{j_{\max}+i_{\max}}$ defined as follows.
* •
$H(0,0)=\\{0\\}$.
* •
If $\bar{x}\in(0,1]$, then
$(\vec{P},\vec{Q})=(P_{1},\cdots,P_{j_{\max}},Q_{1},\cdots,Q_{i_{\max}})\in
H(\bar{x},0)$ if and only if
$y^{|\alpha|-m}\partial^{\alpha}\left\\{\sum_{j}A_{ij}P_{j}+B_{i}-Q_{i}\right\\}(\bar{x},y)\rightarrow
0$
as $y\rightarrow 0^{+}$, for each $|\alpha|\leq m$ and each $i$.
We will show that $\mathcal{H}$ is a bundle, i.e., $H(z)$ is a translate of an
$\mathcal{R}_{z}$-submodule of $\mathcal{R}_{z}^{j_{\max}+i_{\max}}$ for each
$z\in[0,\delta]\times\\{0\\}$; and we will show that
$J_{(\bar{x},0)}(\vec{F},\vec{G})\in H(\bar{x},0)$ (each
$\bar{x}\in[0,\delta]$) if $\vec{F},\vec{G}$ are as in (53).
Suppose $J_{(0,0)}(\vec{F},\vec{G})=0$, $\vec{F},\vec{G}$ are $C^{m}$ on
$\Omega_{\delta}^{\text{closure}}$, $G_{i}=\sum_{j}A_{ij}F_{j}+B_{i}$ on
$\Omega_{\delta}$. Let $\bar{x}\in(0,\delta]$. Then
$\partial^{\alpha}[A_{ij}(F_{j}-J_{(\bar{x},0)}F_{j})](\bar{x},y)=o(y^{m-|\alpha|})$
and
$\partial^{\alpha}[G_{i}-J_{(\bar{x},0)}G_{i}](\bar{x},y)=o(y^{m-|\alpha|})$
on $\Omega_{\delta}$ for $|\alpha|\leq m$, by Taylor’s theorem and our
estimates for $\partial^{\alpha}A_{ij}$. The above remarks imply that
$\partial^{\alpha}\\{\sum_{j}A_{ij}J_{(\bar{x},0)}F_{j}+B_{i}-J_{(\bar{x},0)}G_{i}\\}(\bar{x},y)=o(y^{m-|\alpha|})$ as $y\rightarrow 0^{+}$.
Therefore, $J_{(\bar{x},0)}(\vec{F},\vec{G})\in H(\bar{x},0)$ for
$\bar{x}\in(0,\delta]$. For $\bar{x}=0$, we just note that
$J_{(0,0)}(\vec{F},\vec{G})=0\in H(0,0)$. That proves our assertion about
$J_{(\bar{x},0)}(\vec{F},\vec{G})$.
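For the reader's convenience, the first of the two displayed estimates above is just the Leibniz rule combined with Taylor's theorem: since $\left|\partial^{\beta}A_{ij}\right|\leq Cy^{-\left|\beta\right|}$ and $\partial^{\gamma}\left(F_{j}-J_{\left(\bar{x},0\right)}F_{j}\right)\left(\bar{x},y\right)=o\left(y^{m-\left|\gamma\right|}\right)$,
$\partial^{\alpha}\left[A_{ij}\left(F_{j}-J_{\left(\bar{x},0\right)}F_{j}\right)\right]\left(\bar{x},y\right)=\sum_{\beta\leq\alpha}\binom{\alpha}{\beta}\left(\partial^{\beta}A_{ij}\right)\cdot\partial^{\alpha-\beta}\left(F_{j}-J_{\left(\bar{x},0\right)}F_{j}\right)=\sum_{\beta\leq\alpha}O\left(y^{-\left|\beta\right|}\right)\cdot o\left(y^{m-\left|\alpha\right|+\left|\beta\right|}\right)=o\left(y^{m-\left|\alpha\right|}\right)\text{.}$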
Note that for $\bar{x}\not=0,$ $H\left(\bar{x},0\right)$ is a translate in $\mathcal{P}^{j_{\max}+i_{\max}}$ of
$I\left(\bar{x}\right)=\left\\{\left(\vec{P},\vec{Q}\right):\partial^{\alpha}\left(\sum_{j}A_{ij}P_{j}-Q_{i}\right)\left(\bar{x},y\right)=o\left(y^{m-\left|\alpha\right|}\right)\text{, as }y\rightarrow 0^{+}\text{, }\left|\alpha\right|\leq m\text{, each }i\right\\}\text{.}$
Let $\left(\vec{P},\vec{Q}\right)\in I\left(\bar{x}\right)$ and let
$S\in\mathcal{P}$. Then for $\left|\alpha\right|\leq m,$ we have
$\partial^{\alpha}\left(S\cdot\left[\sum_{j}A_{ij}P_{j}-Q_{i}\right]\right)\left(\bar{x},y\right)=o\left(y^{m-\left|\alpha\right|}\right),$
hence
(57)
$\partial^{\alpha}\left(\sum_{j}A_{ij}\left(SP_{j}\right)-\left(SQ_{i}\right)\right)\left(\bar{x},y\right)=o\left(y^{m-\left|\alpha\right|}\right)\text{,
as }y\rightarrow 0^{+}\text{.}$
Also, our estimates on $\partial^{\alpha}A_{ij},$ together with Taylor’s theorem, give
$\partial^{\alpha}\left(A_{ij}\left(SP_{j}-J_{\left(\bar{x},0\right)}\left(SP_{j}\right)\right)\right)\left(\bar{x},y\right)=o\left(y^{m-\left|\alpha\right|}\right)$
and
$\partial^{\alpha}\left(SQ_{i}-J_{\left(\bar{x},0\right)}\left(SQ_{i}\right)\right)\left(\bar{x},y\right)=o\left(y^{m-\left|\alpha\right|}\right)\text{ as }y\rightarrow 0^{+}\text{ for }\left|\alpha\right|\leq m\text{.}$
That is,
(58)
$\partial^{\alpha}\left(A_{ij}\left(SP_{j}-S\odot_{\left(\bar{x},0\right)}P_{j}\right)\right)\left(\bar{x},y\right)=o\left(y^{m-\left|\alpha\right|}\right)$
and
(59)
$\partial^{\alpha}\left(SQ_{i}-S\odot_{\left(\bar{x},0\right)}Q_{i}\right)\left(\bar{x},y\right)=o\left(y^{m-\left|\alpha\right|}\right)\text{ as }y\rightarrow 0^{+}\text{ for }\left|\alpha\right|\leq m.$
It now follows from (57), (58), and (59) that
$\partial^{\alpha}\left(\sum_{j}A_{ij}\left[S\odot_{\left(\bar{x},0\right)}P_{j}\right]-\left[S\odot_{\left(\bar{x},0\right)}Q_{i}\right]\right)\left(\bar{x},y\right)=o\left(y^{m-\left|\alpha\right|}\right)$
as $y\rightarrow 0^{+}$, for each $\left|\alpha\right|\leq m$.
This completes the proof that $I\left(\bar{x}\right)$ is a submodule, when $\bar{x}\not=0$.
For $\bar{x}=0,$ we just note that $\left\\{0\right\\}$ is an
$\mathcal{R}_{\left(0,0\right)}$-submodule of
$\mathcal{R}_{\left(0,0\right)}^{j_{\max}+i_{\max}}.$
We have now shown that
* •
$\mathcal{H}=(H(\bar{x},0))_{\bar{x}\in[0,\delta]}$ is a bundle.
* •
If $(\vec{F},\vec{G})$ is as in (53), then $(\vec{F},\vec{G})$ is
a section of $\mathcal{H}$.
* •
$H(\bar{x},0)\subset\mathcal{P}^{j_{\max}+i_{\max}}$ depends semialgebraically
on $\bar{x}$, since $A_{ij}$ and $B_{i}$ are semialgebraic.
###### Lemma 6.3
Let $\mathcal{H}=(H(\bar{x},0))_{(\bar{x},0)\in[0,\delta]\times\\{0\\}}$ be a semialgebraic bundle, with each $H(\bar{x},0)\subset\mathcal{P}^{j_{\max}+i_{\max}}$. Then there
exist semialgebraic functions
$\lambda_{1},\cdots,\lambda_{k_{\max}}:(0,\delta)\rightarrow\text{Aff}$, and a
finite set of bad points
$\\{\bar{\bar{x}}_{1}^{\text{bad}},\cdots,\bar{\bar{x}}_{S}^{\text{bad}}\\}$
such that the following holds for any $\bar{\bar{x}}\in\left(0,\delta\right)$
other than the bad points. Let
$\left(\vec{F},\vec{G}\right)=\left(F_{1},\cdots,F_{j_{\max}},G_{1},\cdots,G_{i_{\max}}\right)$
be $C^{m}$ in a neighborhood of $\left(\bar{\bar{x}},0\right)$ in
$\mathbb{R}^{2}$. Then
$J_{\left(\bar{x},0\right)}\left(\vec{F},\vec{G}\right)\in
H\left(\bar{x},0\right)\text{ for all }\bar{x}\text{ in some neighborhood of
}\bar{\bar{x}}$
if and only if
$\left[\lambda_{k}\left(\bar{x}\right)\right]\left(j_{\bar{x}}\vec{F},j_{\bar{x}}\vec{G}\right)=0\text{
for all }\bar{x}\text{ in some neighborhood of
}\bar{\bar{x}},(k=1,\cdots,k_{\max})\text{.}$
Proof. This is a one-dimensional case of Lemma 5.1, whose proof can be found in Section 5.1.
Proof of Lemma 6.2. We apply Lemma 6.3 to the bundle $\mathcal{H}$ defined in
Definition 6.1. By making $\delta$ smaller, we may assume there are no bad
points $\bar{\bar{x}}_{{}^{\text{bad}}}$. Thus, we have achieved the
following: There exist semialgebraic functions
$\lambda_{1},\cdots,\lambda_{k_{\max}}:(0,\delta]\rightarrow\text{Aff}$ such
that for any $\bar{\bar{x}}\in(0,\delta)$ and any $(\vec{F},\vec{G})$ that is
$C^{m}$ in a neighborhood of $(\bar{\bar{x}},0)$, we have
$J_{\left(\bar{x},0\right)}(\vec{F},\vec{G})\in H(\bar{x},0)\text{ for all
}\bar{x}\text{ in some neighborhood of }\bar{\bar{x}}$
if and only if
$\left[\lambda_{k}\left(\bar{x}\right)\right]j_{\bar{x}}\left(\vec{F},\vec{G}\right)=0\text{
for all }\bar{x}\text{ in some neighborhood of
}\bar{\bar{x}},(k=1,\cdots,k_{\max})\text{.}$
In particular, if $\left(\vec{F},\vec{G}\right)$ is as in (53), then
$\left[\lambda_{k}\left(\bar{x}\right)\right]j_{\bar{x}}\left(\vec{F},\vec{G}\right)=0\text{
for all }\bar{x}\in(0,\delta),(k=1,\cdots,k_{\max})\text{.}$
Next, we apply Theorem 6 in Section 3.7.
Recall $H(\bar{x},0)$ is an affine space, so $\mathbb{R}\cdot H(\bar{x},0)$ is
a vector space.
We regard $\mathbb{R}\cdot H(\bar{x},0)$ as the space of all
$(\vec{P},\vec{Q},t)$ such that
$\partial^{\alpha}\\{\sum_{j}A_{ij}P_{j}+tB_{i}-Q_{i}\\}(\bar{x},y)=o(y^{m-|\alpha|})$
as $y\rightarrow 0^{+}$.
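In particular (an observation implicit in the definitions): the slice $t=1$ recovers $H\left(\bar{x},0\right)$, while the slice $t=0$ recovers the module $I\left(\bar{x}\right)$ introduced above, i.e.,
$\left(\vec{P},\vec{Q},1\right)\in\mathbb{R}\cdot H\left(\bar{x},0\right)\Longleftrightarrow\left(\vec{P},\vec{Q}\right)\in H\left(\bar{x},0\right)\text{, and }\left(\vec{P},\vec{Q},0\right)\in\mathbb{R}\cdot H\left(\bar{x},0\right)\Longleftrightarrow\left(\vec{P},\vec{Q}\right)\in I\left(\bar{x}\right)\text{.}$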
We define seminorms on $\mathbb{R}\cdot H(\bar{x},0)$ by
$|||(\vec{P},\vec{Q},t)|||_{\alpha,i,y}=\left|y^{|\alpha|-m}\partial^{\alpha}\left\\{\sum_{j}A_{ij}P_{j}+tB_{i}-Q_{i}\right\\}(\bar{x},y)\right|$
for fixed $\bar{x}$ and $0<y<\psi(\bar{x})$. Notice that on $H(\bar{x},0)$,
the seminorm agrees with
$|||(\vec{P},\vec{Q})|||_{\alpha,i,y}=\left|y^{|\alpha|-m}\partial^{\alpha}\left\\{\sum_{j}A_{ij}P_{j}+B_{i}-Q_{i}\right\\}(\bar{x},y)\right|$
for fixed $\bar{x}\not=0$ and $0<y<\psi(\bar{x}),|\alpha|\leq
m,i=1,\cdots,i_{\max}$.
Note that
$\sup_{\alpha,i,y}|||(\vec{P},\vec{Q})|||_{\alpha,i,y}$
is bounded for fixed $\left(\vec{P},\vec{Q}\right)\in
H\left(\bar{x},0\right)$, by definition of $H\left(\bar{x},0\right)$.
Thus, by Theorem 6 in Section 3.7, for each $\bar{x}\in\left(0,\delta\right),$
there exist $y_{\sigma}\in\left(0,\psi\left(\bar{x}\right)\right)$
$\left(\sigma=1,\cdots,\sigma_{\max}\right)$ with $\sigma_{\max}$ depending
only on $i_{\max},j_{\max},m$ such that for any
$\left(\vec{P},\vec{Q}\right)\in H\left(\bar{x},0\right)$, we have
(60) $\displaystyle\sup_{\begin{subarray}{c}0<y<\psi\left(\bar{x}\right)\\\ \left|\alpha\right|\leq m\\\ i=1,\cdots,i_{\max}\end{subarray}}\left|y\right|^{\left|\alpha\right|-m}\left|\partial^{\alpha}\left\\{\sum_{j}A_{ij}P_{j}+B_{i}-Q_{i}\right\\}\left(\bar{x},y\right)\right|$
$\displaystyle\leq$ $\displaystyle C\max_{\begin{subarray}{c}\sigma=1,\cdots,\sigma_{\max}\\\ \left|\alpha\right|\leq m\\\ i=1,\cdots,i_{\max}\end{subarray}}\left|y_{\sigma}\right|^{\left|\alpha\right|-m}\left|\partial^{\alpha}\left\\{\sum_{j}A_{ij}P_{j}+B_{i}-Q_{i}\right\\}\left(\bar{x},y_{\sigma}\right)\right|\text{.}$
Moreover, (60) is a semialgebraic condition. Therefore, we may take $y_{1},\cdots,y_{\sigma_{\max}}\in\left(0,\psi\left(\bar{x}\right)\right)$ satisfying (60) to depend semialgebraically on $\bar{x}\in\left(0,\delta\right)$.
Because $0<y_{\sigma}\left(\bar{x}\right)<\psi\left(\bar{x}\right)\leq\bar{x}$
for $\bar{x}\in\left(0,\delta\right)$ and because
$y_{\sigma}\left(\bar{x}\right)$ depends semialgebraically on $\bar{x}$, we
can take $\delta$ small to achieve the estimates
* (61)
$\left|\left(\frac{d}{dx}\right)^{\alpha}y_{\sigma}\left(\bar{x}\right)\right|\leq
C\bar{x}^{1-\alpha}$ for $0\leq\alpha\leq m+100$,
$\sigma=1,\cdots,\sigma_{\max},$ $\bar{x}\in\left(0,\delta\right).$
* (63)
$0<y_{\sigma}(\bar{x})<\psi(\bar{x})\leq\bar{x}$ for
$\sigma=1,\cdots,\sigma_{\max}$, $\bar{x}\in\left(0,\delta\right)$.
* (65)
$\bar{x}\mapsto y_{\sigma}(\bar{x})$ is a semialgebraic function.
* (67)
For any $\bar{x}\in(0,\delta)$ and any
$\left(\vec{P},\vec{Q}\right)=\left(P_{1},\cdots,P_{j_{\max}},Q_{1},\cdots,Q_{i_{\max}}\right)\in
H\left(\bar{x},0\right),$ we have
$\displaystyle\sup_{\begin{subarray}{c}0<y<\psi\left(\bar{x}\right)\\\ \left|\alpha\right|\leq m\\\ i=1,\cdots,i_{\max}\end{subarray}}\left|y\right|^{\left|\alpha\right|-m}\left|\partial^{\alpha}\left\\{\sum_{j}A_{ij}P_{j}+B_{i}-Q_{i}\right\\}\left(\bar{x},y\right)\right|$
$\displaystyle\leq$ $\displaystyle C\max_{\begin{subarray}{c}\sigma=1,\cdots,\sigma_{\max}\\\ \left|\alpha\right|\leq m\\\ i=1,\cdots,i_{\max}\end{subarray}}\left|y_{\sigma}\left(\bar{x}\right)\right|^{\left|\alpha\right|-m}\left|\partial^{\alpha}\left\\{\sum_{j}A_{ij}P_{j}+B_{i}-Q_{i}\right\\}\left(\bar{x},y_{\sigma}\left(\bar{x}\right)\right)\right|\text{,}$
with $C$ depending only on $i_{\max},j_{\max},m$.
Fix $\bar{x}\in(0,\delta)$, and let
$(\vec{p},\vec{q})=\left(p_{1},\cdots,p_{j_{\max}},q_{1},\cdots,q_{i_{\max}}\right)\in\mathfrak{p}^{j_{\max}+i_{\max}}$.
Thus, each $p_{j}$ and $q_{i}$ is a polynomial in $y$ of degree at most $m$.
For $0\leq a\leq m,$ $\sigma=1,\cdots,\sigma_{\max},$ $i=1,\cdots,i_{\max},$
let
$\displaystyle\mu_{a,\sigma,i}^{\\#}\left[\bar{x}\right]\left(p_{1},\cdots,p_{j_{\max}},q_{1},\cdots,q_{i_{\max}}\right)$
$\displaystyle=$
$\displaystyle\left.\left(y_{\sigma}\left(\bar{x}\right)\right)^{a-m}\partial_{y}^{a}\left\\{\sum_{j}A_{ij}\left(\bar{x},y\right)p_{j}\left(y\right)+B_{i}\left(\bar{x},y\right)-q_{i}\left(y\right)\right\\}\right|_{y=y_{\sigma}\left(\bar{x}\right)}.$
Note that we don’t take $x$-derivatives here, only $y$-derivatives.
The $\mu_{a,\sigma,i}^{\\#}(\bar{x})$ are affine functions from
$\mathfrak{p}^{j_{\max}+i_{\max}}$ to $\mathbb{R}$; thus, each
$\mu_{a,\sigma,i}^{\\#}(\bar{x})$ belongs to Aff. Let
$\mu_{1}\left(\bar{x}\right),\cdots,\mu_{l_{\max}}\left(\bar{x}\right)$ be an
enumeration of the $\mu_{a,\sigma,i}^{\\#}\left(\bar{x}\right)$, together with
the linear maps
$\displaystyle\left(p_{1},\cdots,p_{j_{\max}},q_{1},\cdots,q_{i_{\max}}\right)$
$\displaystyle\mapsto$
$\displaystyle\left(\bar{x}\right)^{a-m}\partial_{y}^{a}p_{j}\left(0\right)$
$\displaystyle\left(p_{1},\cdots,p_{j_{\max}},q_{1},\cdots,q_{i_{\max}}\right)$
$\displaystyle\mapsto$
$\displaystyle\left(\bar{x}\right)^{a-m}\partial_{y}^{a}q_{i}\left(0\right)\text{.}$
We will prove the following
* (69)
Let $\vec{F},\vec{G}$ be as assumed in (53). Then, as $\bar{x}$ varies over $\left(0,\delta\right)$, the $\left[\mu_{l}\left(\bar{x}\right)\right]\left(j_{\bar{x}}\vec{F},j_{\bar{x}}\vec{G}\right)$ remain bounded, and these quantities tend to zero as $\bar{x}$ tends to $0^{+}$.
To prove (69), we recall that
$\sum_{j}A_{ij}F_{j}+B_{i}-G_{i}=0\text{,}$
hence
$\displaystyle\mu_{a,\sigma,i}^{\\#}\left(\bar{x}\right)\left(j_{\bar{x}}\vec{F},j_{\bar{x}}\vec{G}\right)$ $\displaystyle=$ $\displaystyle-\left(y_{\sigma}\left(\bar{x}\right)\right)^{a-m}\left.\partial_{y}^{a}\left[\sum_{j}A_{ij}\left(\bar{x},y\right)\left\\{F_{j}\left(\bar{x},y\right)-j_{\bar{x}}F_{j}\left(y\right)\right\\}-\left\\{G_{i}\left(\bar{x},y\right)-j_{\bar{x}}G_{i}\left(y\right)\right\\}\right]\right|_{y=y_{\sigma}\left(\bar{x}\right)}.$
Let
$w_{F}\left(\bar{x}\right)=\max_{\left|\beta\right|=m,j=1,\cdots,j_{\max}}\left(\sup_{0\leq
y\leq\psi\left(\bar{x}\right)}\left[\partial^{\beta}F_{j}\left(\bar{x},y\right)\right]-\inf_{0\leq
y\leq\psi\left(\bar{x}\right)}\left[\partial^{\beta}F_{j}\left(\bar{x},y\right)\right]\right)$
and similarly define $w_{G}\left(\bar{x}\right)$ as above, with $G$ in place
of $F.$
Because $\vec{F},\vec{G}$ belong to
$C^{m}\left({\Omega}^{\text{closure}}_{\delta},\mathbb{R}^{j_{\max}}\right)$
and
$C^{m}\left({\Omega}^{\text{closure}}_{\delta},\mathbb{R}^{i_{\max}}\right)$
respectively, while $\psi\left(\bar{x}\right)\rightarrow 0$ as
$\bar{x}\rightarrow 0,$ we know that $w_{F}\left(\bar{x}\right)$,
$w_{G}\left(\bar{x}\right)$ are bounded as $\bar{x}$ varies over
$\left(0,\delta\right)$, and moreover
$w_{F}\left(\bar{x}\right),w_{G}\left(\bar{x}\right)\rightarrow 0$ as
$\bar{x}\rightarrow 0^{+}$.
Taylor’s theorem gives
* (72)
$\left|\partial_{y}^{a}\left[F_{j}\left(\bar{x},y\right)-j_{\bar{x}}F_{j}\left(y\right)\right]\right|\leq
Cw_{F}\left(\bar{x}\right)\cdot y^{m-a}$ for $0\leq a\leq m$,
$0<y<\psi\left(\bar{x}\right),$ $j=1,\cdots,j_{\max}$.
* (74)
$\left|\partial_{y}^{a}\left\\{G_{i}\left(\bar{x},y\right)-j_{\bar{x}}G_{i}\left(y\right)\right\\}\right|\leq
Cw_{G}\left(\bar{x}\right)\cdot y^{m-a}$ for $0\leq a\leq
m,0<y<\psi\left(\bar{x}\right),i=1,\cdots,i_{\max}$.
We recall that
* (76)
$|\partial_{y}^{a}A_{ij}(\bar{x},y)|\leq Cy^{-a}$ for $0\leq a\leq
m,0<y<\psi\left(\bar{x}\right),i=1,\cdots,i_{\max},j=1,\cdots,j_{\max}$.
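As a sanity check, (72) is the integral form of the Taylor remainder in the $y$-variable: for $0\leq a<m$,
$\partial_{y}^{a}\left[F_{j}\left(\bar{x},y\right)-j_{\bar{x}}F_{j}\left(y\right)\right]=\frac{1}{\left(m-a-1\right)!}\int_{0}^{y}\left(y-t\right)^{m-a-1}\left[\partial_{y}^{m}F_{j}\left(\bar{x},t\right)-\partial_{y}^{m}F_{j}\left(\bar{x},0\right)\right]dt\text{,}$
and the bracket is at most $w_{F}\left(\bar{x}\right)$ in absolute value, giving the bound $\frac{1}{\left(m-a\right)!}w_{F}\left(\bar{x}\right)y^{m-a}$; for $a=m$ the left-hand side is $\partial_{y}^{m}F_{j}\left(\bar{x},y\right)-\partial_{y}^{m}F_{j}\left(\bar{x},0\right)$, which is at most $w_{F}\left(\bar{x}\right)$ directly. The same computation gives (74).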
Putting (72), (74), (76) into (6.1), we find that
$\left|\mu_{a,\sigma,i}^{\\#}\left(\bar{x}\right)\left(j_{\bar{x}}\vec{F},j_{\bar{x}}\vec{G}\right)\right|\leq
Cw_{F}\left(\bar{x}\right)+Cw_{G}\left(\bar{x}\right)\text{,}$
hence the $\mu_{a,\sigma,i}^{\\#}\left(\bar{x}\right)\left(j_{\bar{x}}\vec{F},j_{\bar{x}}\vec{G}\right)$ remain bounded as $\bar{x}$ varies over $\left(0,\delta\right)$, and these quantities tend to zero as $\bar{x}\rightarrow 0^{+}$.
Also, because $J_{\left(0,0\right)}\vec{F}=0,J_{\left(0,0\right)}\vec{G}=0,$
and $\vec{F},\vec{G}$ are in
$C^{m}\left({\Omega}_{\delta}^{\text{closure}},\mathbb{R}^{j_{\max}}\right)$
and
$C^{m}\left({\Omega}_{\delta}^{\text{closure}},\mathbb{R}^{i_{\max}}\right)$
respectively, we see that
$\left(\bar{x}\right)^{a-m}\partial_{y}^{a}F_{j}\left(\bar{x},0\right),\left(\bar{x}\right)^{a-m}\partial_{y}^{a}G_{i}\left(\bar{x},0\right),$
for $0\leq a\leq m$, remain bounded as $\bar{x}$ varies over
$\left(0,\delta\right),$ and these quantities tend to zero as
$\bar{x}\rightarrow 0^{+}$.
Thus, all the
$\mu_{l}\left(\bar{x}\right)\left(j_{\bar{x}}\vec{F},j_{\bar{x}}\vec{G}\right)$
remain bounded on $\left(0,\delta\right)$ and tend to zero as
$\bar{x}\rightarrow 0^{+}$.
We have proven (69). Thus, we have defined our $\lambda_{1},\cdots,\lambda_{k_{\max}}$ and $\mu_{1},\cdots,\mu_{l_{\max}}$, and we have proven (53).
We now set out to prove (55).
Thus, let $\vec{F}^{\\#}=\left(F_{1}^{\\#},\cdots,F_{j_{\max}}^{\\#}\right)$ and $\vec{G}^{\\#}=\left(G_{1}^{\\#},\cdots,G_{i_{\max}}^{\\#}\right)$ be as in (55).
Recall, each $F_{j}^{\\#}$ and $G_{i}^{\\#}$ is a semialgebraic map from
$(0,\delta)$ into $\mathfrak{p}$, and moreover
$\left[\lambda_{k}\left(\bar{x}\right)\right]\left(\vec{F}^{\\#}\left(\bar{x}\right),\vec{G}^{\\#}\left(\bar{x}\right)\right)=0\text{
for }k=1,\cdots,k_{\max},\text{ all }\bar{x}\in\left(0,\delta\right);\text{
and}$
$\left[\mu_{l}\left(\bar{x}\right)\right]\left(\vec{F}^{\\#}\left(\bar{x}\right),\vec{G}^{\\#}\left(\bar{x}\right)\right)$
is bounded as $\bar{x}$ varies over $\left(0,\delta\right)$ and tends to zero
as $\bar{x}\rightarrow 0^{+}$ for each $l=1,\cdots,l_{\max}$.
Then
* (78)
$F_{j}^{\\#}\left(\bar{x}\right)$ has the form
$y\mapsto\sum_{s=0}^{m}F_{js}\left(\bar{x}\right)y^{s}$ and
* (80)
$G_{i}^{\\#}\left(\bar{x}\right)$ has the form
$y\mapsto\sum_{s=0}^{m}G_{is}\left(\bar{x}\right)y^{s}$,
with $F_{js},G_{is}$ semialgebraic functions of one variable. Taking $\delta$
small (depending on $\vec{F}^{\\#},\vec{G}^{\\#}$), we may assume the
$F_{js},G_{is}$ are $C^{\infty}$ on $(0,\delta)$.
Now, we define $\vec{F}=\left(F_{1},\cdots,F_{j_{\max}}\right)$,
$\vec{G}=\left(G_{1},\cdots,G_{i_{\max}}\right),\vec{G}^{\\#\\#}=\left(G_{1}^{\\#\\#},\cdots,G_{i_{\max}}^{\\#\\#}\right),$
where
(82)
$F_{j}\left(\bar{x},y\right)=\sum_{s=0}^{m}F_{js}\left(\bar{x}\right)y^{s}$
for $\left(\bar{x},y\right)\in\left(0,\delta\right)\times\mathbb{R}$,
$j=1,\cdots,j_{\max},$
(83)
$G_{i}^{\\#\\#}\left(\bar{x},y\right)=\sum_{s=0}^{m}G_{is}\left(\bar{x}\right)y^{s}$
for $\left(\bar{x},y\right)\in\left(0,\delta\right)\times\mathbb{R}$,
$i=1,\cdots,i_{\max}$,
$G_{i}\left(\bar{x},y\right)=\sum_{j}A_{ij}\left(\bar{x},y\right)F_{j}\left(\bar{x},y\right)+B_{i}\left(\bar{x},y\right)$
for $\left(\bar{x},y\right)\in\Omega_{\delta}$, $i=1,\cdots,i_{\max}$.
Note that $F_{j},G_{i}^{\\#\\#}$ are $C^{\infty}$ functions on
$\left(0,\delta\right)\times\mathbb{R}$ because the $F_{js},G_{is}$ are
$C^{\infty}$ functions on $\left(0,\delta\right)$.
The functions $F_{j},G_{i}^{\\#\\#},G_{i}$ are semialgebraic because
$F_{j}^{\\#},G_{i}^{\\#}$ are semialgebraic.
Let $\bar{x}\in\left(0,\delta\right)$. Then
(84)
$j_{\bar{x}}F_{j}=F_{j}^{\\#}\left(\bar{x}\right)\in\mathfrak{p},j_{\bar{x}}G_{i}^{\\#\\#}=G_{i}^{\\#}\left(\bar{x}\right)\in\mathfrak{p}.$
Therefore, for all $\bar{x}$ in a small neighborhood of a given
$\overline{\overline{x}}\in\left(0,\delta\right)$, we have
$\lambda_{k}(\bar{x})\left(j_{\bar{x}}\vec{F},j_{\bar{x}}\vec{G}^{\\#\\#}\right)=\lambda_{k}(\bar{x})\left(\vec{F}^{\\#}\left(\bar{x}\right),\vec{G}^{\\#}\left(\bar{x}\right)\right)=0$
for $k=1,\cdots,k_{\max}$; the last equality is an assumption made in (55).
Because $\vec{F},\vec{G}^{\\#\\#}$ are $C^{\infty}$ in a neighborhood of
$\left(\overline{\overline{x}},0\right),$ the defining property of the
$\lambda_{k}$ now tells us that
$\left(J_{\left(\bar{x},0\right)}\vec{F},J_{\left(\bar{x},0\right)}\vec{G}^{\\#\\#}\right)\in
H\left(\bar{x},0\right)$
for all $\bar{x}$ in a small neighborhood of $\overline{\overline{x}}$.
Recalling that $\overline{\overline{x}}\in\left(0,\delta\right)$ is arbitrary,
we conclude that
(85)
$\left(J_{\left(\bar{x},0\right)}\vec{F},J_{\left(\bar{x},0\right)}\vec{G}^{\\#\\#}\right)\in
H\left(\bar{x},0\right)\text{ for all }\bar{x}\in\left(0,\delta\right).$
By definition of $H\left(\bar{x},0\right)$ and by the estimates
$\displaystyle\partial^{\alpha}\left(F_{j}-J_{\left(\bar{x},0\right)}F_{j}\right)\left(\bar{x},y\right)$
$\displaystyle=$ $\displaystyle o\left(y^{m-\left|\alpha\right|}\right),$
$\displaystyle\partial^{\alpha}\left(G_{i}^{\\#\\#}-J_{\left(\bar{x},0\right)}G_{i}^{\\#\\#}\right)\left(\bar{x},y\right)$
$\displaystyle=$ $\displaystyle o\left(y^{m-\left|\alpha\right|}\right),\text{
and}$ $\displaystyle\left|\partial^{\alpha}A_{ij}\left(x,y\right)\right|$
$\displaystyle\leq$ $\displaystyle Cy^{-\left|\alpha\right|}\text{,}$
we therefore have the following:
* (86)
For any $\bar{x}\in\left(0,\delta\right),$ any $i=1,\cdots,i_{\max},$ and any
$\left|\alpha\right|\leq m$, the quantity
$y^{\left|\alpha\right|-m}\partial^{\alpha}\left\\{\sum_{j}A_{ij}F_{j}+B_{i}-G_{i}^{\\#\\#}\right\\}\left(\bar{x},y\right)$
is bounded as $y$ varies over $\left(0,\psi\left(\bar{x}\right)\right)$ and
tends to zero as $y\rightarrow 0^{+}$.
We don’t yet know that the above convergence is uniform in $\bar{x}.$
Next, we recall from (55) the assumption that the $\mu_{l}\left(\bar{x}\right)\left(\vec{F}^{\\#}\left(\bar{x}\right),\vec{G}^{\\#}\left(\bar{x}\right)\right)$ remain bounded as $\bar{x}$ varies over $\left(0,\delta\right)$ and moreover these quantities tend to zero as $\bar{x}\rightarrow 0^{+}$.
Thus, the quantities
(88)
$\left(y_{\sigma}\left(\bar{x}\right)\right)^{a-m}\partial_{y}^{a}\left\\{\sum_{j}A_{ij}F_{j}+B_{i}-G_{i}^{\\#\\#}\right\\}\left(\bar{x},y_{\sigma}\left(\bar{x}\right)\right)$
for $0\leq a\leq m,i=1,\cdots,i_{\max},\sigma=1,\cdots,\sigma_{\max}$, remain
bounded as $\bar{x}$ varies over $\left(0,\delta\right)$, and tend to zero as
$\bar{x}\rightarrow 0^{+}$.
Because those quantities are semialgebraic functions of one variable, we may
pass to a smaller $\delta$ and assert for any $b$, say $0\leq b\leq m$, that
(89)
$\left(\frac{d}{d\bar{x}}\right)^{b}\left\\{y_{\sigma}\left(\bar{x}\right)^{a-m}\partial_{y}^{a}\left[\sum_{j}A_{ij}F_{j}+B_{i}-G_{i}^{\\#\\#}\right]\left(\bar{x},y_{\sigma}\left(\bar{x}\right)\right)\right\\}=o\left(\bar{x}^{-b}\right)$
as $\bar{x}\rightarrow 0^{+}$ and this quantity is bounded for $\bar{x}$
bounded away from $0$.
For $0\leq a+b\leq m,$ we will check that
(90)
$\left(\bar{x}\right)^{a+b-m}\left(\frac{d}{d\bar{x}}\right)^{b}\left\\{\partial_{y}^{a}\left[\sum_{j}A_{ij}F_{j}+B_{i}-G_{i}^{\\#\\#}\right]\left(\bar{x},y_{\sigma}\left(\bar{x}\right)\right)\right\\}=o\left(1\right)$
as $\bar{x}\rightarrow 0^{+}$ and the left-hand side is bounded.
To see this, we write
$\displaystyle\left(\frac{d}{d\bar{x}}\right)^{b}\left\\{\partial_{y}^{a}\left[\sum_{j}A_{ij}F_{j}+B_{i}-G_{i}^{\\#\\#}\right]\left(\bar{x},y_{\sigma}\left(\bar{x}\right)\right)\right\\}$
$\displaystyle=$
$\displaystyle\left(\frac{d}{d\bar{x}}\right)^{b}\left\\{\left(y_{\sigma}\left(\bar{x}\right)\right)^{m-a}\left(y_{\sigma}\left(\bar{x}\right)\right)^{a-m}\partial_{y}^{a}\left[\sum_{j}A_{ij}F_{j}+B_{i}-G_{i}^{\\#\\#}\right]\left(\bar{x},y_{\sigma}\left(\bar{x}\right)\right)\right\\}$
$\displaystyle=$
$\displaystyle\sum_{b^{\prime}+b^{\prime\prime}=b}\text{coeff}\left(b^{\prime},b^{\prime\prime}\right)\underset{\left(\dagger\right)}{\underbrace{\left[\left(\frac{d}{d\bar{x}}\right)^{b^{\prime}}\left(y_{\sigma}\left(\bar{x}\right)\right)^{m-a}\right]}}\cdot$
$\displaystyle\underset{\left(\ddagger\right)}{\underbrace{\left[\left(\frac{d}{d\bar{x}}\right)^{b^{\prime\prime}}\left\\{\left(y_{\sigma}\left(\bar{x}\right)\right)^{a-m}\partial_{y}^{a}\left[\sum_{j}A_{ij}F_{j}+B_{i}-G_{i}^{\\#\\#}\right]\left(\bar{x},y_{\sigma}\left(\bar{x}\right)\right)\right\\}\right]}}\text{.}$
Since $y_{\sigma}\left(\bar{x}\right)$ is given by a Puiseux series for $\bar{x}\in\left(0,\delta\right)$ (small enough $\delta$),
$\left(\dagger\right)=O\left(y_{\sigma}\left(\bar{x}\right)^{m-a}\cdot\bar{x}^{-b^{\prime}}\right)=O\left(y_{\sigma}\left(\bar{x}\right)^{m-a-b^{\prime}}\right)\text{,}$
because $0<y_{\sigma}\left(\bar{x}\right)<\psi\left(\bar{x}\right)\leq\bar{x}$. By (89), $\left(\ddagger\right)$ is $o\left(\bar{x}^{-b^{\prime\prime}}\right)$ as $\bar{x}\rightarrow 0^{+}$.
So in fact, we get not only (90) but the stronger result
(91)
$\left(\frac{d}{d\bar{x}}\right)^{b}\left\\{\partial_{y}^{a}\left[\sum_{j}A_{ij}F_{j}+B_{i}-G_{i}^{\\#\\#}\right]\left(\bar{x},y_{\sigma}\left(\bar{x}\right)\right)\right\\}=o\left(y_{\sigma}\left(\bar{x}\right)^{m-a}\cdot\bar{x}^{-b}\right)$
as $\bar{x}\rightarrow 0^{+};$ the left-hand side is bounded.
Introduce the vector field $X_{\sigma}=\frac{\partial}{\partial x}+y_{\sigma}^{\prime}\left(x\right)\frac{\partial}{\partial y}$ on $\mathbb{R}^{2}$. We have
$\left(\frac{d}{d\bar{x}}\right)^{b}\left\\{\mathcal{F}\left(\bar{x},y_{\sigma}\left(\bar{x}\right)\right)\right\\}=\left.\left(X_{\sigma}\right)^{b}\mathcal{F}\right|_{\left(\bar{x},y_{\sigma}\left(\bar{x}\right)\right)}\text{ for any }\mathcal{F}\in C_{loc}^{b}\left(\mathbb{R}^{2}\right)\text{.}$
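As a quick check of this identity for $b=2$ (a routine chain-rule computation): since $y_{\sigma}^{\prime}$ does not depend on $y$,
$X_{\sigma}^{2}\mathcal{F}=\left(\partial_{x}+y_{\sigma}^{\prime}\partial_{y}\right)\left(\mathcal{F}_{x}+y_{\sigma}^{\prime}\mathcal{F}_{y}\right)=\mathcal{F}_{xx}+2y_{\sigma}^{\prime}\mathcal{F}_{xy}+\left(y_{\sigma}^{\prime}\right)^{2}\mathcal{F}_{yy}+y_{\sigma}^{\prime\prime}\mathcal{F}_{y}\text{,}$
which is exactly $\left(\frac{d}{d\bar{x}}\right)^{2}\left\\{\mathcal{F}\left(\bar{x},y_{\sigma}\left(\bar{x}\right)\right)\right\\}$ by the chain rule.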
Therefore, (91) yields
(92)
$\left(X_{\sigma}^{b}\partial_{y}^{a}\right)\left[\sum_{j}A_{ij}F_{j}+B_{i}-G_{i}^{\\#\\#}\right]\left(\bar{x},y_{\sigma}\left(\bar{x}\right)\right)=o\left(y_{\sigma}\left(\bar{x}\right)^{m-a}\cdot\bar{x}^{-b}\right)\text{
as }\bar{x}\rightarrow 0^{+}$
and the left-hand side is bounded for all $\bar{x}$, for $a+b\leq
m,\sigma=1,\cdots,\sigma_{\max},i=1,\cdots,i_{\max}$.
This implies that
* (93)
$\left(y_{\sigma}\left(\bar{x}\right)\right)^{\left|\alpha\right|-m}\partial^{\alpha}\left[\sum_{j}A_{ij}F_{j}+B_{i}-G_{i}^{\\#\\#}\right]\left(\bar{x},y_{\sigma}\left(\bar{x}\right)\right)$
is bounded on $(0,\delta)$ and tends to zero as $\bar{x}\rightarrow 0^{+}$,
for $\left|\alpha\right|\leq
m,i=1,\cdots,i_{\max},\sigma=1,\cdots,\sigma_{\max}$.
Let $\alpha=\left(b,a\right),$
$\partial^{\alpha}=\partial_{x}^{b}\partial_{y}^{a}$.
We deduce (93) from (92) by induction on $b$. For $b=0,$ (93) is the same as (92).
Assume we know (93) for all $b^{\prime}<b.$ We prove (93) for the given $b,$ using our induction hypothesis for $b^{\prime}$, together with (92).
The quantity
(95)
$X_{\sigma}^{b}\partial_{y}^{a}\left\\{\sum_{j}A_{ij}F_{j}+B_{i}-G_{i}^{\\#\\#}\right\\}\left(\bar{x},y_{\sigma}\left(\bar{x}\right)\right)$
is a sum of terms of the form
(96)
$\left(\partial_{x}^{b_{1}}y_{\sigma}\left(\bar{x}\right)\right)\cdot\cdots\cdot\left(\partial_{x}^{b_{\nu}}y_{\sigma}\left(\bar{x}\right)\right)\cdot\partial_{x}^{\bar{b}}\partial_{y}^{a+\nu}\left\\{\sum_{j}A_{ij}F_{j}+B_{i}-G_{i}^{\\#\\#}\right\\}\left(\bar{x},y_{\sigma}\left(\bar{x}\right)\right)$
with $b_{t}\geq 1$ each $t$, $b_{1}+\cdots+b_{\nu}+\bar{b}=b.$
Note
$\bar{b}+\left(a+\nu\right)=a+\bar{b}+b_{1}+\cdots+b_{\nu}-\left(b_{1}-1\right)-\cdots-\left(b_{\nu}-1\right)\leq
a+b$.
We know that (95) $=o\left(y_{\sigma}\left(\bar{x}\right)^{m-a-b}\right)$ by (92).
If $\bar{b}<b,$ then by our induction hypothesis, the term (96) is dominated by
$\displaystyle O\left(y_{\sigma}\left(\bar{x}\right)^{-\left[b_{1}-1\right]-\cdots-\left[b_{\nu}-1\right]}\right)\cdot o\left(y_{\sigma}\left(\bar{x}\right)^{m-\left[a+\nu\right]-\bar{b}}\right)=o\left(y_{\sigma}\left(\bar{x}\right)^{m-a-\bar{b}-b_{1}-\cdots-b_{\nu}}\right)=o\left(y_{\sigma}\left(\bar{x}\right)^{m-a-b}\right)\text{,}$
where the first factor comes from (61) together with $0<y_{\sigma}\left(\bar{x}\right)<\bar{x}$, which give $\partial_{x}^{b_{t}}y_{\sigma}\left(\bar{x}\right)=O\left(\bar{x}^{1-b_{t}}\right)=O\left(y_{\sigma}\left(\bar{x}\right)^{1-b_{t}}\right)$.
Therefore, in the equation (95) $=\sum$ (96), all terms are $o\left(y_{\sigma}\left(\bar{x}\right)^{m-a-b}\right)$, except possibly the term arising from $\bar{b}=b$, which is
$\partial_{x}^{b}\partial_{y}^{a}\left\\{\sum_{j}A_{ij}F_{j}+B_{i}-G_{i}^{\\#\\#}\right\\}\left(\bar{x},y_{\sigma}\left(\bar{x}\right)\right)\text{.}$
Therefore,
$\partial_{x}^{b}\partial_{y}^{a}\left\\{\sum_{j}A_{ij}F_{j}+B_{i}-G_{i}^{\\#\\#}\right\\}\left(\bar{x},y_{\sigma}\left(\bar{x}\right)\right)=o\left(y_{\sigma}\left(\bar{x}\right)^{m-a-b}\right)\text{,
as }\bar{x}\rightarrow 0^{+}\text{.}$
This completes our induction on $b$, proving (93).
Thus,
* (97)
$\max_{\begin{subarray}{c}\sigma=1,\cdots,\sigma_{\max}\\\
i=1,\cdots,i_{\max}\\\ \left|\alpha\right|\leq
m\end{subarray}}\left(y_{\sigma}\left(\bar{x}\right)\right)^{\left|\alpha\right|-m}\left|\partial^{\alpha}\left\\{\sum_{j}A_{ij}F_{j}+B_{i}-G_{i}^{\\#\\#}\right\\}\left(\bar{x},y_{\sigma}\left(\bar{x}\right)\right)\right|\text{
}$ is bounded on $\left(0,\delta\right)$ and tends to zero as $\bar{x}$ tends
to $0^{+}$.
Recall that our $\mu_{l}(\bar{x})$ include the affine maps
$(p_{1},\cdots,p_{j_{\max}},q_{1},\cdots,q_{i_{\max}})\mapsto\bar{x}^{a-m}\partial_{y}^{a}p_{j}(0)$
and
$(p_{1},\cdots,p_{j_{\max}},q_{1},\cdots,q_{i_{\max}})\mapsto\bar{x}^{a-m}\partial_{y}^{a}q_{i}(0)$
for $0\leq a\leq m.$ Our assumption on the $\mu$’s made in (55) tells us therefore that
$\bar{x}^{a-m}\partial_{y}^{a}\left(F_{j}^{\\#}\left(\bar{x}\right)\right)\left(0\right)$
and
$\bar{x}^{a-m}\partial_{y}^{a}\left(G_{i}^{\\#}\left(\bar{x}\right)\right)\left(0\right)$
are bounded on $\left(0,\delta\right)$ and tend to zero as $\bar{x}\rightarrow
0^{+}$.
That is,
* (99)
$\bar{x}^{s-m}F_{js}\left(\bar{x}\right),\bar{x}^{s-m}G_{is}\left(\bar{x}\right)$ are bounded on $\left(0,\delta\right)$ and tend to zero as $\bar{x}\rightarrow 0^{+}$ ($0\leq s\leq m$).
* (101)
Because $F_{js},G_{is}$ are semialgebraic functions of one variable, it
follows that, for $s,t\leq m$, the functions
$\left(\frac{d}{d\bar{x}}\right)^{t}F_{js}\left(\bar{x}\right),\left(\frac{d}{d\bar{x}}\right)^{t}G_{is}\left(\bar{x}\right)$
are bounded on $\left(0,\delta\right)$ if $s+t\leq m$ and are
$o\left(\bar{x}^{m-s-t}\right)$ as $\bar{x}\rightarrow 0^{+}$ (even if
$s+t>m$).
Recalling now the definitions of the $F_{j}$ and $G_{i}^{\\#\\#}$ in terms of the $F_{js},G_{is}$ (see (82), (83)), we conclude that
$\displaystyle\partial_{\bar{x}}^{t}\partial_{y}^{s}F_{j}\left(\bar{x},y\right)$
$\displaystyle=$ $\displaystyle\sum_{m\geq\underline{s}\geq
s}\left[\left(\frac{d}{d\bar{x}}\right)^{t}F_{j\underline{s}}\left(\bar{x}\right)\right]\left(\text{coefficient
}\left(\underline{s},s\right)\right)\cdot y^{\underline{s}-s}$
$\displaystyle=$ $\displaystyle\sum_{m\geq\underline{s}\geq
s}o\left(\bar{x}^{m-t-\underline{s}}\right)\cdot y^{\underline{s}-s}\text{.}$
If $s+t=m,$ then this is equal to $\sum_{m\geq\underline{s}\geq s}o\left(\left(\frac{y}{\bar{x}}\right)^{\underline{s}-s}\right)=o\left(1\right)$ for $0<y<\psi\left(\bar{x}\right)\leq\bar{x}$.
Therefore, for $\left|\beta\right|=m,$ we have
$\left|\partial^{\beta}F_{j}\left(\bar{x},y\right)\right|=o\left(1\right)$ as
$\left(\bar{x},y\right)\in\Omega_{\delta}$ tends to zero.
Similarly,
$\left|\partial^{\beta}G_{i}^{\\#\\#}\left(\bar{x},y\right)\right|=o\left(1\right)$
as $\left(\bar{x},y\right)\in\Omega_{\delta}$ tends to zero.
That is, for $\left|\beta\right|=m$, the functions
$\partial^{\beta}F_{j}\left(\bar{x},y\right)$ and
$\partial^{\beta}G_{i}^{\\#\\#}\left(\bar{x},y\right)$ are bounded on
$\Omega_{\delta}$ and they tend to zero as $\bar{x}\rightarrow 0^{+}$ (keeping
$\left(\bar{x},y\right)\in\Omega_{\delta}$).
Let
$\mathcal{E}\left(\bar{x}\right)=\sup\left\\{\left|\partial^{\beta}F_{j}\left(\bar{x},y\right)\right|,\left|\partial^{\beta}G_{i}^{\\#\\#}\left(\bar{x},y\right)\right|:\left|\beta\right|=m,0<y<\psi\left(\bar{x}\right)\text{
(all }i,j)\right\\}.$
Then
(103) $\mathcal{E}\left(\bar{x}\right)$ is bounded on $\left(0,\delta\right)$ and tends to zero as $\bar{x}\rightarrow 0^{+}$.
By Taylor’s theorem,
$\left|\partial^{\alpha}\left\\{F_{j}-J_{\left(\bar{x},0\right)}F_{j}\right\\}\left(\bar{x},y\right)\right|\leq
Cy^{m-\left|\alpha\right|}\mathcal{E}\left(\bar{x}\right)\text{ for
}\left|\alpha\right|\leq m,\left(\bar{x},y\right)\in\Omega_{\delta}\text{.}$
Recall that
$\left|\partial^{\alpha}A_{ij}\left(\bar{x},y\right)\right|\leq
Cy^{-\left|\alpha\right|}\text{ for }\left|\alpha\right|\leq m\text{ and
}\left(\bar{x},y\right)\in\Omega_{\delta}\text{.}$
Just as we estimated the functions $F_{j}$ above, we have from Taylor’s
theorem that
$\left|\partial^{\alpha}\left\\{G_{i}^{\\#\\#}-J_{\left(\bar{x},0\right)}G_{i}^{\\#\\#}\right\\}\left(\bar{x},y\right)\right|\leq
Cy^{m-\left|\alpha\right|}\mathcal{E}\left(\bar{x}\right)\text{ for
}\left|\alpha\right|\leq m,\left(\bar{x},y\right)\in\Omega_{\delta}\text{.}$
Combining these estimates, we see that
(104)
$\displaystyle\left|\partial^{\alpha}\left\\{\sum_{j}A_{ij}\left(F_{j}-J_{\left(\bar{x},0\right)}F_{j}\right)-\left(G_{i}^{\\#\\#}-J_{\left(\bar{x},0\right)}G_{i}^{\\#\\#}\right)\right\\}\left(\bar{x},y\right)\right|$ $\displaystyle\leq$ $\displaystyle Cy^{m-\left|\alpha\right|}\mathcal{E}\left(\bar{x}\right)\text{ for }\left|\alpha\right|\leq m,\left(\bar{x},y\right)\in\Omega_{\delta}\text{.}$
Combining (97), (103), (104), we see that
(105)
$\displaystyle\left(y_{\sigma}\left(\bar{x}\right)\right)^{\left|\alpha\right|-m}\partial^{\alpha}\left\\{\sum_{j}A_{ij}\left[J_{\left(\bar{x},0\right)}F_{j}\right]+B_{i}-\left[J_{\left(\bar{x},0\right)}G_{i}^{\\#\\#}\right]\right\\}\left(\bar{x},y_{\sigma}\left(\bar{x}\right)\right)$
$\displaystyle\text{is bounded on }\left(0,\delta\right)\text{ and tends to 0
as }\bar{x}\text{ tends to }0^{+}\text{.}$
Recall that $\left(J_{(\bar{x},0)}\vec{F},J_{\left(\bar{x},0\right)}\vec{G}^{\\#\\#}\right)\in H\left(\bar{x},0\right)$ for all $\bar{x}\in\left(0,\delta\right)$ (see (85)).
The above results, together with the property (67) of the $y_{\sigma}\left(\bar{x}\right)$, now tell us that
* (106)
$y^{\left|\alpha\right|-m}\partial^{\alpha}\left\\{\sum_{j}A_{ij}\left(J_{\left(\bar{x},0\right)}F_{j}\right)+B_{i}-\left(J_{\left(\bar{x},0\right)}G_{i}^{\\#\\#}\right)\right\\}\left(\bar{x},y\right)$
is bounded on $\Omega_{\delta}$ and tends to zero as
$\left(\bar{x},y\right)\in\Omega_{\delta}$ tends to zero.
Together with (103), (104), this yields the following result
* (108)
$y^{\left|\alpha\right|-m}\partial^{\alpha}\left\\{\sum_{j}A_{ij}F_{j}+B_{i}-G_{i}^{\\#\\#}\right\\}\left(\bar{x},y\right)$
is bounded on $\Omega_{\delta}$ and tends to zero as
$\left(\bar{x},y\right)\in\Omega_{\delta}$ tends to zero. Here,
$i=1,\cdots,i_{\max}$ and $|\alpha|\leq m$ are arbitrary.
From (86), we have
* (110)
$\lim_{y\rightarrow
0^{+}}y^{|\alpha|-m}\partial^{\alpha}\left(\sum_{j}A_{ij}F_{j}+B_{i}-G_{i}^{\\#\\#}\right)(x,y)=0$
for each fixed $x\in(0,\delta)$.
The functions $A_{ij},F_{j},B_{i},G_{i}^{\\#\\#}$ are semialgebraic.
Therefore, by Lemma 3.3, there exist a positive integer $K$ and a
semialgebraic function of one variable $\mathcal{A}(x)$ such that
* (112)
$\left|y^{|\alpha|-m}\partial^{\alpha}\left(\sum_{j}A_{ij}F_{j}+B_{i}-G_{i}^{\\#\\#}\right)(x,y)\right|\leq\mathcal{A}(x)\cdot
y^{\frac{1}{K}}$ for all $(x,y)\in\Omega_{\delta}$, $|\alpha|\leq
m,i=1,\cdots,i_{\max}$.
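The exponent $y^{1/K}$ in (112) reflects the general behavior of semialgebraic functions; roughly speaking (our heuristic, with the precise statement supplied by Lemma 3.3), a semialgebraic quantity $h\left(x,y\right)$ that tends to zero as $y\rightarrow 0^{+}$ for fixed $x$ admits, for each fixed $x$, a convergent Puiseux expansion
$h\left(x,y\right)=c\left(x\right)y^{r\left(x\right)}+\text{higher-order terms, with }r\left(x\right)>0\text{ rational,}$
and in a semialgebraic family the exponents $r\left(x\right)$ have a common denominator, which yields a bound of the form $\left|h\left(x,y\right)\right|\leq\mathcal{A}\left(x\right)\cdot y^{1/K}$.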
Taking $\delta$ smaller, we may assume $\mathcal{A}(x)$ is $C^{\infty}$ on
$(0,\delta]$.
Consequently,
$y^{|\alpha|-m}\partial^{\alpha}\left(\sum_{j}A_{ij}F_{j}+B_{i}-G_{i}^{\\#\\#}\right)(x,y)$
tends to zero as $y\rightarrow 0^{+}$, uniformly as $x$ varies over
$(\varepsilon,\delta)$ for any $\varepsilon>0$.
Recalling that $G_{i}=\sum_{j}A_{ij}F_{j}+B_{i}$, we see that for
$\left|\alpha\right|\leq m,i=1,\cdots,i_{\max},$
(114)
$y^{|\alpha|-m}\partial^{\alpha}\left\\{G_{i}-G_{i}^{\\#\\#}\right\\}\left(x,y\right)\rightarrow
0$
as $y\rightarrow 0^{+}$ uniformly for $x$ in each interval
$\left(\varepsilon,\delta\right)$.
Recalling that $G_{i}^{\\#\\#}$ belongs to $C^{\infty}$ in a neighborhood of
$\left(x,0\right)$ (each $x\in\left(0,\delta\right)$), we conclude that the
derivatives $\partial^{\alpha}G_{i}\left(x,y\right)$ ($\left|\alpha\right|\leq
m,i=1,\cdots,i_{\max}$), initially defined on
$\Omega_{\delta}=\left\\{\left(x,y\right):0<x<\delta,0<y<\psi\left(x\right)\right\\}$
extend to continuous functions on
(115) $\Omega_{\delta}^{++}\equiv\left\\{\left(x,y\right):0<x<\delta,0\leq
y<\psi\left(x\right)\right\\}.$
Next, recall that $F_{js}$ is $C^{\infty}$ on $\left(0,\delta\right)$ and that we assume $\left|\partial^{\alpha}A_{ij}\left(x,y\right)\right|,\left|\partial^{\alpha}B_{i}\left(x,y\right)\right|\leq Cy^{-\left|\alpha\right|}$ on
(116)
$\Omega^{+}=\left\\{\left(x,y\right):0<x<\delta,0<y\leq\psi\left(x\right)\right\\}\text{,}$
on which the functions $\partial^{\alpha}A_{ij},\partial^{\alpha}B_{i}$ are assumed to be continuous.
We defined
$\displaystyle G_{i}$ $\displaystyle=$
$\displaystyle\sum_{j}A_{ij}F_{j}+B_{i}$ $\displaystyle=$
$\displaystyle\sum_{j}A_{ij}\left(x,y\right)\left[\sum_{s=0}^{m}F_{js}\left(x\right)y^{s}\right]+B_{i}\left(x,y\right)\text{.}$
The above remarks (and the fact that $\psi\left(x\right)\not=0$ for
$x\in\left(0,\delta\right)$) show that $\partial^{\alpha}G_{i}$ extends to a
continuous function on $\Omega^{+}$ (see (116)), for $\left|\alpha\right|\leq
m,i=1,\cdots,i_{\max}$.
Combining our results for $\Omega^{+}$ (see (116)) and for $\Omega^{++}$ (see
(115)), we see that $\partial^{\alpha}G_{i}$ extends to a continuous function
on
$\Omega_{\frac{2\delta}{3}}^{\text{closure}}\setminus\left\\{\left(0,0\right)\right\\}$
for each $i=1,\cdots,i_{\max},\left|\alpha\right|\leq m$.
Also, $\partial^{\alpha}F_{j}$ is a continuous function on $\Omega_{\frac{2}{3}\delta}^{\text{closure}}\setminus\left\\{\left(0,0\right)\right\\}$ because $F_{j}$ is $C^{\infty}$ on $\left(0,\delta\right)\times\mathbb{R}$.
By (99), we have $G_{is}(x)=o(x^{m-s})$ ($0\leq s\leq m$) on $(0,\delta)$.
Because $G_{is}$ is semialgebraic, it follows that after possibly reducing
$\delta$, we have
$\left(\frac{d}{dx}\right)^{t}G_{is}\left(x\right)=o\left(x^{m-s-t}\right)\text{
for }0\leq t\leq m,0\leq s\leq m,i=1,\cdots,i_{\max}\text{.}$
Because
$G_{i}^{\\#\\#}\left(x,y\right)=\sum_{\underline{s}=0}^{m}G_{i\underline{s}}\left(x\right)y^{\underline{s}}$
and $0<y<\psi\left(x\right)\leq x$ on $\Omega_{\delta}$, we have on
$\Omega_{\delta}$ that
$\displaystyle\left|\partial_{x}^{t}\partial_{y}^{s}G_{i}^{\\#\\#}\left(x,y\right)\right|$
$\displaystyle=$
$\displaystyle\left|\sum_{\underline{s}=s}^{m}\text{coeff}\left(\underline{s},s\right)\cdot\left(\frac{d}{dx}\right)^{t}G_{i\underline{s}}\left(x\right)\cdot
y^{\underline{s}-s}\right|$ $\displaystyle=$ $\displaystyle
o\left(\sum_{\underline{s}=s}^{m}x^{m-\underline{s}-t}\cdot
y^{\underline{s}-s}\right)$ $\displaystyle=$ $\displaystyle
o\left(\sum_{\underline{s}=s}^{m}x^{m-\underline{s}-t}\cdot
x^{\underline{s}-s}\right)$ $\displaystyle=$ $\displaystyle
o\left(x^{m-s-t}\right)\text{ on }\Omega_{\delta}\text{ for }s,t\leq
m\text{.}$
In particular,
* (117)
$\partial^{\alpha}G_{i}^{\\#\\#}\left(x,y\right)\rightarrow 0$ as
$\left(x,y\right)\in\Omega_{\delta}$ tends to $\left(0,0\right)$ for
$\left|\alpha\right|\leq m,i=1,\cdots,i_{\max}$.
On the other hand, recalling the definition $G_{i}=\sum_{j}A_{ij}F_{j}+B_{i}$, we see from (108) that $\partial^{\alpha}\left(G_{i}-G_{i}^{\\#\\#}\right)\left(x,y\right)\rightarrow 0$ as $\left(x,y\right)\in\Omega_{\delta}$ tends to $\left(0,0\right)$ for each $\left|\alpha\right|\leq m$. Together with (117), this shows that $\partial^{\alpha}G_{i}\left(x,y\right)\rightarrow 0$ as $\left(x,y\right)\in\Omega_{\delta}$ tends to $\left(0,0\right)$ for each $\left|\alpha\right|\leq m$.
Next, recall from (99) that $F_{js}(x)=o(x^{m-s})$ for $x\in(0,\delta)$, $j=1,\cdots,j_{\max},s=0,\cdots,m$.
Because the $F_{js}$ are semialgebraic functions of one variable, we conclude (after reducing $\delta$) that
$\left(\frac{d}{dx}\right)^{t}F_{js}\left(x\right)=o\left(x^{m-s-t}\right)$ on
$\left(0,\delta\right)$ for $t\leq m$.
Now, for $s+t\leq m$ and $\left(x,y\right)\in\Omega_{\delta}$ (hence
$0<y<\psi\left(x\right)\leq x$), we have
$\displaystyle\left|\left(\frac{\partial}{\partial
y}\right)^{s}\left(\frac{\partial}{\partial
x}\right)^{t}F_{j}\left(x,y\right)\right|$ $\displaystyle=$
$\displaystyle\left|\left(\frac{\partial}{\partial
y}\right)^{s}\left(\frac{\partial}{\partial
x}\right)^{t}\sum_{\underline{s}=0}^{m}F_{j\underline{s}}\left(x\right)y^{\underline{s}}\right|$
$\displaystyle=$
$\displaystyle\left|\sum_{\underline{s}=s}^{m}\text{coeff}\left(\underline{s},s\right)\left[\left(\frac{d}{dx}\right)^{t}F_{j\underline{s}}\left(x\right)\right]\cdot
y^{\underline{s}-s}\right|$ $\displaystyle\leq$ $\displaystyle
C\sum_{\underline{s}=s}^{m}\left|\left(\frac{d}{dx}\right)^{t}F_{j\underline{s}}\left(x\right)\right|\cdot
x^{\underline{s}-s}$ $\displaystyle=$ $\displaystyle
o\left(\sum_{\underline{s}=s}^{m}x^{m-\underline{s}-t}x^{\underline{s}-s}\right)=o\left(x^{m-s-t}\right)\text{.}$
Thus, for $\left|\alpha\right|\leq m$, and $j=1,\cdots,j_{\max},$ we have
$\partial^{\alpha}F_{j}\left(x,y\right)\rightarrow 0\text{ as
}\left(x,y\right)\in\Omega_{\delta}\text{ tends to }\left(0,0\right)\text{.}$
We now know the following: $G_{i}=\sum_{j}A_{ij}F_{j}+B_{i}$ on $\Omega_{\delta}.$ The $F_{j}$ and $G_{i}$ are semialgebraic on $\Omega_{\delta}$.
For $\left|\alpha\right|\leq m$, the derivatives
$\partial^{\alpha}F_{j},\partial^{\alpha}G_{i}$ extend to continuous functions
on
$\Omega_{2\delta/3}^{\text{closure}}\setminus\left\\{\left(0,0\right)\right\\}$.
For $\left|\alpha\right|\leq m,$ the derivatives
$\partial^{\alpha}F_{j}\left(z\right)$, $\partial^{\alpha}G_{i}\left(z\right)$
tend to zero as $z\in\Omega_{\delta}$ tends to zero.
It follows that the $F_{j}$ and $G_{i}$ extend from $\Omega_{\delta/2}$ to
semialgebraic functions in
$C^{m}\left(\Omega_{\delta/2}^{\text{closure}}\right)$ and those functions all
have $m$-jet zero at the origin. We extend $F_{j},G_{i}$ to semialgebraic
$C^{m}_{loc}$ functions on $\mathbb{R}^{2}$, using Corollary 3.2.
Next, we show that
$j_{\bar{x}}\left(\vec{F},\vec{G}\right)=\left(\vec{F}^{\\#}\left(\bar{x}\right),\vec{G}^{\\#}\left(\bar{x}\right)\right)$
for $\bar{x}\in\left(0,\delta\right)$.
From (84), we have
$j_{\bar{x}}\left(\vec{F},\vec{G}^{\\#\\#}\right)=\left(\vec{F}^{\\#}\left(\bar{x}\right),\vec{G}^{\\#}\left(\bar{x}\right)\right)\text{.}$
From (114), we see that $j_{\bar{x}}\left(G_{i}-G_{i}^{\\#\\#}\right)=0$ for
all $\bar{x}\in\left(0,\delta\right)$. Therefore,
$j_{\bar{x}}\left(\vec{F},\vec{G}\right)=j_{\bar{x}}\left(\vec{F},\vec{G}^{\\#\\#}\right)=\left(\vec{F}^{\\#}\left(\bar{x}\right),\vec{G}^{\\#}\left(\bar{x}\right)\right)\text{,}$
as desired.
Thus, we have proven (55).
The proof of Lemma 6.2 is complete.
### 6.2 Patching near a cusp
###### Lemma 6.4
Let $\psi(x)$ be a semialgebraic function on $[0,\delta]$, satisfying
$\psi(0)=0,0<\psi(x)\leq x$ for all $x\in(0,\delta]$. We set
$E_{\delta}=\\{(x,y)\in\mathbb{R}^{2}:0\leq x\leq\delta,0\leq
y\leq\psi(x)\\},$ $E_{\delta}^{+}=\\{(x,y)\in\mathbb{R}^{2}:0\leq
x\leq\delta,\frac{1}{3}\psi(x)\leq y\leq\psi(x)\\},\text{ and }$
$E_{\delta}^{-}=\\{(x,y)\in\mathbb{R}^{2}:0\leq x\leq\delta,0\leq
y\leq\frac{2}{3}\psi(x)\\}.$
Fix a semialgebraic function of one variable, $\theta\left(t\right)$,
satisfying $0\leq\theta\left(t\right)\leq 1$, $\theta\left(t\right)=1$ for
$t\leq 1/3,$ $\theta\left(t\right)=0$ for $t\geq 2/3$, $\theta\in C^{m+100}$.
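Such a $\theta$ exists. For instance (one concrete choice, with $N=m+100$), take $\theta\left(t\right)=1$ for $t\leq\frac{1}{3}$, $\theta\left(t\right)=0$ for $t\geq\frac{2}{3}$, and
$\theta\left(t\right)=\frac{\int_{t}^{2/3}\left(s-\frac{1}{3}\right)^{N}\left(\frac{2}{3}-s\right)^{N}ds}{\int_{1/3}^{2/3}\left(s-\frac{1}{3}\right)^{N}\left(\frac{2}{3}-s\right)^{N}ds}\text{ for }\frac{1}{3}\leq t\leq\frac{2}{3}\text{.}$
This $\theta$ is piecewise polynomial, hence semialgebraic, and $\theta\in C^{N}$ because $\theta^{\prime}$ vanishes to order $N$ at $t=\frac{1}{3}$ and $t=\frac{2}{3}$.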
Then set
$\theta_{-}\left(x,y\right)=\theta\left(\frac{y}{\psi\left(x\right)}\right)\text{,
}\theta_{+}\left(x,y\right)=1-\theta_{-}\left(x,y\right)\text{ for
}\left(x,y\right)\in E_{\delta}\setminus\\{(0,0)\\}\text{.}$
Thus, $\theta_{+},\theta_{-}\geq 0$ and $\theta_{+}+\theta_{-}=1$ on
$E_{\delta}\setminus\\{(0,0)\\}$.
Let $F^{+}\in C^{m}(E_{\delta}^{+})$ and $F^{-}\in C^{m}(E_{\delta}^{-})$ be
semialgebraic functions, with $J_{(0,0)}F^{+}=J_{(0,0)}F^{-}=0$.
Suppose that
(119)
$\partial_{y}^{l}F^{+}(x,\psi(x))-\sum_{j=0}^{m-l}\frac{1}{j!}\partial_{y}^{l+j}F^{-}(x,0)\cdot(\psi(x))^{j}=o((\psi(x))^{m-l})$
as $x\rightarrow 0^{+}$ for each $l=0,\cdots,m$.
Define $F=\theta_{+}\cdot F^{+}+\theta_{-}\cdot F^{-}$ on
$E_{\delta}\setminus\{(0,0)\}$, $F(0,0)=0$.
Then $F$ is a $C^{m}$ semialgebraic function on $E_{\delta^{\prime}}$ for some
small $\delta^{\prime}$. The jet of $F$ at the origin is zero. Moreover,
$F=F^{+}$ in a neighborhood of any point $(x,\psi(x))$, $0<x<\delta^{\prime}$;
and $F=F^{-}$ in a neighborhood of any point $(x,0),0<x<\delta^{\prime}$.
Proof. Because $0\leq\psi(x)\leq x$ and $\psi$ is given near $0$ by a
convergent Puiseux series, we have $\psi^{(k)}(x)=O(x^{1-k})$ as $x\rightarrow
0^{+}$, for $k=0,\cdots,m+100$. Also, because $F^{+},F^{-}$ have zero jet at
$(0,0)$, we have, for $|\alpha|=m$, $\partial^{\alpha}F^{+}(x,y)=o(1)$ as
$(x,y)\in E_{\delta}^{+}$ tends to zero and $\partial^{\alpha}F^{-}(x,y)=o(1)$
as $(x,y)\in E_{\delta}^{-}$ tends to zero.
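For instance (an illustration of this bound, not part of the hypotheses): if $\psi(x)=x^{3/2}$, then $\psi^{\prime}(x)=\frac{3}{2}x^{1/2}=O(1)$, $\psi^{\prime\prime}(x)=\frac{3}{4}x^{-1/2}=O(x^{-1})$, and in general $\psi^{(k)}(x)=O(x^{3/2-k})$, which is $O(x^{1-k})$ as $x\rightarrow 0^{+}$.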
By induction on $\mu$, we now prove that
(120)
$\partial_{x}^{\mu}\partial_{y}^{l}F^{+}(x,\psi(x))-\sum_{j=0}^{m-l-\mu}\frac{1}{j!}\partial_{x}^{\mu}\partial_{y}^{l+j}F^{-}(x,0)\cdot(\psi(x))^{j}=o((\psi(x))^{m-\mu-l})$
as $x\rightarrow 0^{+}$ for $\mu+l\leq m$.
For $\mu=0$, (120) is a hypothesis of our lemma. Assuming (120) for $\mu$,
we prove it for $\mu+1$. Thus, fix $l$ satisfying $(\mu+1)+l\leq m$. Recalling
that $\partial_{x}^{\mu}\partial_{y}^{l+j}F^{-}(x,0)=o(1)$ when $\mu+(l+j)=m$,
we conclude from (120) that
(122)
$\partial_{x}^{\mu}\partial_{y}^{l}F^{+}(x,\psi(x))-\sum_{j=0}^{m-l-\mu-1}\frac{1}{j!}\partial_{x}^{\mu}\partial_{y}^{l+j}F^{-}(x,0)\cdot(\psi(x))^{j}=o((\psi(x))^{m-\mu-l})$
as $x\rightarrow 0^{+}$.
Because the above functions are semialgebraic functions of one variable and
thus given near $0$ by convergent Puiseux series, it follows that the
$x$-derivative of the left-hand side of (122) is
$o((\psi(x))^{m-\mu-l}\cdot x^{-1})$, hence
$o((\psi(x))^{m-\mu-l-1})$, because $0<\psi(x)\leq x$. Thus,
$\displaystyle\left[\left(\partial_{x}+\psi^{\prime}\left(x\right)\partial_{y}\right)\left(\partial_{x}^{\mu}\partial_{y}^{l}F^{+}\right)\right]\left(x,\psi\left(x\right)\right)-\sum_{j=0}^{m-l-\mu-1}\frac{1}{j!}\partial_{x}^{\mu+1}\partial_{y}^{l+j}F^{-}\left(x,0\right)\left(\psi\left(x\right)\right)^{j}$
$\displaystyle-\sum_{j=1}^{m-l-\mu-1}\frac{1}{j!}\partial_{x}^{\mu}\partial_{y}^{l+j}F^{-}\left(x,0\right)j\left(\psi\left(x\right)\right)^{j-1}\psi^{\prime}\left(x\right)$
$\displaystyle=$ $\displaystyle
o\left(\left(\psi\left(x\right)\right)^{m-\mu-l-1}\right)\text{.}$
It follows that
$\displaystyle\left[\partial_{x}^{\mu+1}\partial_{y}^{l}F^{+}\left(x,\psi\left(x\right)\right)-\sum_{j=0}^{m-l-\left(\mu+1\right)}\frac{1}{j!}\partial_{x}^{\mu+1}\partial_{y}^{l+j}F^{-}\left(x,0\right)\left(\psi\left(x\right)\right)^{j}\right]$
$\displaystyle+\psi^{\prime}\left(x\right)\left[\partial_{x}^{\mu}\partial_{y}^{l+1}F^{+}\left(x,\psi\left(x\right)\right)-\sum_{j=0}^{m-l-\mu-2}\frac{1}{j!}\partial_{x}^{\mu}\partial_{y}^{l+1+j}F^{-}\left(x,0\right)\left(\psi\left(x\right)\right)^{j}\right]$
$\displaystyle=$ $\displaystyle
o\left(\left(\psi\left(x\right)\right)^{m-\left(\mu+1\right)-l}\right)\text{.}$
For $j=m-l-\mu-1$, we have
$\partial_{x}^{\mu}\partial_{y}^{l+1+j}F^{-}\left(x,0\right)=o\left(1\right)$,
hence the inductive hypothesis (120) with $\left(l+1\right)$ in place of $l$
tells us that the second term in square brackets in the display above is
$o\left(\left(\psi\left(x\right)\right)^{m-\left(\mu+1\right)-l}\right)$.
Also, $\left|\psi^{\prime}\left(x\right)\right|=O\left(1\right)$.
Consequently, the first term in square brackets in the display above is
$o\left(\left(\psi\left(x\right)\right)^{m-\left(\mu+1\right)-l}\right)$,
proving the analogue of (120) for $\mu+1,$ thus completing the induction and
establishing (120).
We bring in the cutoff functions $\theta_{+}$ and $\theta_{-}$. Note that
$\theta_{+}$ is supported in $E_{\delta}^{+}$ and $\theta_{-}$ is supported in
$E_{\delta}^{-}$.
We will estimate the derivatives of $\theta_{+}$, $\theta_{-}$ on
$E_{\delta}$.
We have
$\left(\frac{d}{dx}\right)^{k}\frac{1}{\psi\left(x\right)}=O\left(\frac{1}{\psi\left(x\right)}x^{-k}\right)\text{
as }x\rightarrow 0^{+},$
because $\psi$ is given by a convergent Puiseux series.
Because $0<\psi\left(x\right)\leq x$ for $x\in\left(0,\delta\right)$ and
$0\leq y\leq\psi\left(x\right)$ in $E_{\delta}$, it follows that
$\partial_{x}^{l}\partial_{y}^{k}\left(\frac{y}{\psi\left(x\right)}\right)=O\left(\left(\psi\left(x\right)\right)^{-k-l}\right)$
as $\left(x,y\right)\in E_{\delta}\rightarrow 0$, for all $k,l\geq 0$.
Now, $\partial_{x,y}^{\alpha}\theta_{-}\left(x,y\right)$ is a sum of terms
$\theta^{\left(s\right)}\left(\frac{y}{\psi\left(x\right)}\right)\cdot\prod_{\sigma=1}^{s}\left[\partial_{x,y}^{\alpha_{\sigma}}\left(\frac{y}{\psi\left(x\right)}\right)\right]$
with $\alpha_{1}+\cdots+\alpha_{s}=\alpha$, $s\leq\left|\alpha\right|$.
Each such term is
$O\left(\prod_{\sigma=1}^{s}\left(\frac{1}{\psi\left(x\right)}\right)^{\left|\alpha_{\sigma}\right|}\right)=O\left(\left(\frac{1}{\psi\left(x\right)}\right)^{\left|\alpha\right|}\right)$.
Thus,
(125)
$\left|\partial_{x,y}^{\alpha}\theta_{-}\left(x,y\right)\right|,\left|\partial_{x,y}^{\alpha}\theta_{+}\left(x,y\right)\right|\leq\frac{C_{\alpha}}{\left(\psi\left(x\right)\right)^{\left|\alpha\right|}}\text{
on }E_{\delta}\text{ (smaller }\delta\text{) for }\left|\alpha\right|\leq
m+100\text{.}$
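As an illustrative low-order case of this estimate (our computation, following the chain-rule expansion above): for $\alpha=(0,1)$, $\partial_{y}\theta_{-}(x,y)=\theta^{\prime}\left(\frac{y}{\psi(x)}\right)\cdot\frac{1}{\psi(x)}=O\left(\frac{1}{\psi(x)}\right)$; and for $\alpha=(1,0)$, $\partial_{x}\theta_{-}(x,y)=-\theta^{\prime}\left(\frac{y}{\psi(x)}\right)\cdot\frac{y\,\psi^{\prime}(x)}{(\psi(x))^{2}}=O\left(\frac{1}{\psi(x)}\right)$, using $0\leq y\leq\psi(x)$ and $\psi^{\prime}(x)=O(1)$.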
Next, we return to $F^{+},F^{-}$, and prove the following estimate
(126)
$\partial_{x}^{\mu}\partial_{y}^{l}\left(F^{+}-F^{-}\right)\left(x,y\right)=o\left(\left[\psi\left(x\right)\right]^{m-\mu-l}\right)\text{
as }\left(x,y\right)\in E_{\delta}^{+}\cap E_{\delta}^{-}\rightarrow 0$
for each $\mu,l$ with $\mu+l\leq m$.
To see this, fix $\mu$, $0\leq\mu\leq m$, and look at the polynomials
$\displaystyle P_{x}^{+}\left(y\right)$ $\displaystyle=$
$\displaystyle\sum_{j=0}^{m-\mu}\frac{1}{j!}\left[\partial_{y}^{j}\partial_{x}^{\mu}F^{+}\left(x,\psi\left(x\right)\right)\right]\cdot\left(y-\psi\left(x\right)\right)^{j}\text{,}$
$\displaystyle P_{x}^{-}\left(y\right)$ $\displaystyle=$
$\displaystyle\sum_{j=0}^{m-\mu}\frac{1}{j!}\left[\partial_{y}^{j}\partial_{x}^{\mu}F^{-}\left(x,0\right)\right]\cdot
y^{j}\text{.}$
Estimate ((120)) shows that
(127)
$\partial_{y}^{l}\left(P_{x}^{+}-P_{x}^{-}\right)|_{y=\psi\left(x\right)}=o\left(\left(\psi\left(x\right)\right)^{m-\mu-l}\right)\text{
for }l=0,\cdots,m-\mu\text{.}$
For $y$ satisfying $\left(x,y\right)\in E_{\delta}^{+}\cap E_{\delta}^{-}$, we
have $\left|y\right|,\left|y-\psi\left(x\right)\right|\leq\psi\left(x\right)$
and therefore (127) yields
$\partial_{y}^{l}\left(P_{x}^{+}-P_{x}^{-}\right)\left(x,y\right)=o\left(\left(\psi\left(x\right)\right)^{m-\mu-l}\right)$
as $\left(x,y\right)\in E_{\delta}^{+}\cap E_{\delta}^{-}$ tends to zero.
On the other hand, Taylor’s theorem gives for $\left(x,y\right)\in
E_{\delta}^{+}\cap E_{\delta}^{-}\setminus\{(0,0)\}$ the estimates
$\partial_{y}^{l}\left[\partial_{x}^{\mu}F^{+}-P_{x}^{+}\right]\left(x,y\right)=O\left(\left(\psi\left(x\right)\right)^{m-\mu-l}\cdot\max_{\bar{y}\in\left[\frac{1}{3}\psi\left(x\right),\psi\left(x\right)\right]}\left|\partial_{y}^{m-\mu}\partial_{x}^{\mu}F^{+}\left(x,\bar{y}\right)\right|\right)$
and
$\partial_{y}^{l}\left[\partial_{x}^{\mu}F^{-}-P_{x}^{-}\right]\left(x,y\right)=O\left(\left(\psi\left(x\right)\right)^{m-\mu-l}\cdot\max_{\bar{y}\in\left[0,\frac{2}{3}\psi\left(x\right)\right]}\left|\partial_{y}^{m-\mu}\partial_{x}^{\mu}F^{-}\left(x,\bar{y}\right)\right|\right)\text{.}$
The maxima in these last two estimates are $o\left(1\right)$, because
$J_{\left(0,0\right)}F^{+}=J_{\left(0,0\right)}F^{-}=0$.
Thus, as $\left(x,y\right)\in E_{\delta}^{+}\cap
E_{\delta}^{-}\setminus\{(0,0)\}$ approaches zero, the quantities
$\partial_{y}^{l}\left[\partial_{x}^{\mu}F^{+}-P_{x}^{+}\right]\left(x,y\right)$,
$\partial_{y}^{l}\left[\partial_{x}^{\mu}F^{-}-P_{x}^{-}\right]\left(x,y\right)$,
$\partial_{y}^{l}\left[P_{x}^{+}-P_{x}^{-}\right]\left(x,y\right)$ are all
$o\left(\left(\psi\left(x\right)\right)^{m-\mu-l}\right)$.
Consequently,
$\left(\partial_{y}^{l}\partial_{x}^{\mu}F^{+}-\partial_{y}^{l}\partial_{x}^{\mu}F^{-}\right)\left(x,y\right)=o\left(\left(\psi\left(x\right)\right)^{m-\mu-l}\right)$
as $\left(x,y\right)\in E_{\delta}^{+}\cap E_{\delta}^{-}\setminus\{(0,0)\}$
approaches zero, completing the proof of (126).
We now set $F=\theta_{+}F^{+}+\theta_{-}F^{-}$ on
$E_{\delta}\setminus\{(0,0)\}$ and $F(0,0)=0$.
Evidently, $F$ is $C^{m}$ away from the origin, and semialgebraic; moreover,
$F=F^{+}$ in a neighborhood of any point
$\left(x^{0},\psi\left(x^{0}\right)\right)$ in $E_{\delta}$
$\left(x^{0}\not=0\right)$, and $F=F^{-}$ in a neighborhood of any point
$\left(x^{0},0\right)\in E_{\delta}$ $\left(x^{0}\not=0\right)$.
It remains to check that $F\in C^{m}\left(E_{\delta}\right)$ near $0$ and that
$J_{\left(0,0\right)}F=0$. That amounts to showing that
(128)
$\partial_{x,y}^{\alpha}F\left(x,y\right)=o\left(x^{m-\left|\alpha\right|}\right)\text{
as }\left(x,y\right)\in E_{\delta}\setminus\{(0,0)\}\text{ approaches
}(0,0)\text{ (all }\left|\alpha\right|\leq m\text{).}$
To prove (128), we may assume $\left(x,y\right)\in E_{\delta}^{+}\cap
E_{\delta}^{-}\setminus\{(0,0)\}$, because otherwise the left-hand side of
(128) is $\partial_{x,y}^{\alpha}F^{+}$ for $\left(x,y\right)\in
E_{\delta}^{+}\setminus\{(0,0)\}$ or else $\partial_{x,y}^{\alpha}F^{-}$ for
$\left(x,y\right)\in E_{\delta}^{-}\setminus\{(0,0)\}$, in which case (128)
holds because $J_{\left(0,0\right)}F^{+}=J_{\left(0,0\right)}F^{-}=0$.
For $\left(x,y\right)\in E_{\delta}^{+}\cap
E_{\delta}^{-}\setminus\{(0,0)\}$, we have
(129) $F=F^{-}+\theta_{+}\left(F^{+}-F^{-}\right)\text{.}$
Because $J_{\left(0,0\right)}F^{-}=0$, we have
(130)
$\partial_{x,y}^{\alpha}F^{-}\left(x,y\right)=o\left(x^{m-\left|\alpha\right|}\right)\text{
as }\left(x,y\right)\in E_{\delta}^{+}\cap
E_{\delta}^{-}\setminus\{(0,0)\}\text{ tends to }(0,0)\text{, for
}\left|\alpha\right|\leq m\text{.}$
We recall that
$\partial_{x,y}^{\alpha}\theta_{+}\left(x,y\right)=O\left(\left(\psi\left(x\right)\right)^{-\left|\alpha\right|}\right)$
and
$\partial_{x,y}^{\alpha}\left(F^{+}-F^{-}\right)\left(x,y\right)=o\left(\left(\psi\left(x\right)\right)^{m-\left|\alpha\right|}\right)$
for $\left|\alpha\right|\leq m$, as $\left(x,y\right)\in E_{\delta}^{+}\cap
E_{\delta}^{-}\setminus\{(0,0)\}$ tends to $(0,0)$.
Therefore, for $\left|\alpha\right|\leq m$, as $\left(x,y\right)\in
E_{\delta}^{+}\cap E_{\delta}^{-}\setminus\{(0,0)\}$ tends to $(0,0)$, we
have
$\partial_{x,y}^{\alpha}\left\{\theta_{+}\left(F^{+}-F^{-}\right)\left(x,y\right)\right\}=o\left(\left(\psi\left(x\right)\right)^{m-\left|\alpha\right|}\right)\text{,}$
hence
(131)
$\partial_{x,y}^{\alpha}\left\{\theta_{+}\left(F^{+}-F^{-}\right)\left(x,y\right)\right\}=o\left(x^{m-\left|\alpha\right|}\right)\text{,}$
because $0<\psi\left(x\right)\leq x$. Putting (130), (131) into (129), we see
that
$\partial_{x,y}^{\alpha}F\left(x,y\right)=o\left(x^{m-\left|\alpha\right|}\right)$
as $\left(x,y\right)\in E_{\delta}^{+}\cap E_{\delta}^{-}\setminus\{(0,0)\}$
tends to $(0,0)$, for $\left|\alpha\right|\leq m$.
Thus, (128) holds. The proof of Lemma 6.4 is complete.
Next, we introduce a change of variables in a neighborhood of $0$ in
$\mathbb{R}_{+}^{2}=\left\{\left(x,y\right):x>0\right\}$ of the form
(132) $\bar{x}=x,\ \bar{y}=y+\tilde{\psi}\left(x\right)\text{,}$
where $\tilde{\psi}\left(x\right)$ is semialgebraic and satisfies
$\left|\tilde{\psi}\left(x\right)\right|\leq Cx$ for
$x\in\left(0,\delta\right)$.
The inverse change of variables is of course
$x=\bar{x},y=\bar{y}-\tilde{\psi}\left(\bar{x}\right)\text{.}$
Note that
$\partial_{x,y}^{\alpha}\left(\bar{x},\bar{y}\right)=O\left(x^{1-\left|\alpha\right|}\right)$
for $\left|y\right|\leq Cx\ll 1$ because $\tilde{\psi}$ is given near $0$ as a
convergent Puiseux series, hence $\left|\tilde{\psi}\left(x\right)\right|\leq
Cx$ implies $\left|\tilde{\psi}^{\left(k\right)}\right|\leq C_{k}x^{1-k}$ for
small $x$.
The change of variables (132) does not preserve $C^{m}$, but it does preserve
$C^{m}$ functions whose jets at $0$ are equal to zero.
Indeed, suppose $F\left(\bar{x},\bar{y}\right)\in C^{m}\left(\bar{E}\right)$
for
$\bar{E}\subset\left\{\left(\bar{x},\bar{y}\right):\left|\bar{y}\right|\leq
C\bar{x}\right\}$, with $0\in\bar{E}$ and $J_{0}F=0$.
Then $\bar{E}$ corresponds under (132) to a set
$E\subset\left\{\left(x,y\right):\left|y\right|\leq C^{\prime}x\right\}$,
$0\in E$.
We may regard $F$ as a function of $\left(x,y\right)$, and for
$\left|\alpha\right|\leq m$, $\partial_{x,y}^{\alpha}F\left(x,y\right)$ is a
sum of terms
$\partial_{\bar{x},\bar{y}}^{\beta}F\left(\bar{x},\bar{y}\right)\cdot\prod_{\nu=1}^{\left|\beta\right|}\left[\partial_{x,y}^{\alpha_{\nu}}\left(\bar{x},\bar{y}\right)\right]$
with $|\beta|\leq m$ and $\sum_{\nu}\alpha_{\nu}=\alpha$. If
$J_{\left(0,0\right)}F=0$ as a function of $\left(\bar{x},\bar{y}\right)$,
then
$\partial_{\bar{x},\bar{y}}^{\beta}F\left(\bar{x},\bar{y}\right)=o\left(\bar{x}^{m-\left|\beta\right|}\right)$
on $\bar{E}$, hence
$\partial_{\bar{x},\bar{y}}^{\beta}F\left(\bar{x},\bar{y}\right)=o\left(x^{m-\left|\beta\right|}\right)$
on $E$. Also, on $E,$
$\prod_{\nu=1}^{\left|\beta\right|}\left[\partial_{x,y}^{\alpha_{\nu}}\left(\bar{x},\bar{y}\right)\right]=\prod_{\nu=1}^{\left|\beta\right|}O\left(x^{1-\left|\alpha_{\nu}\right|}\right)=O\left(x^{\left|\beta\right|-\sum_{\nu}\left|\alpha_{\nu}\right|}\right)=O\left(x^{\left|\beta\right|-\left|\alpha\right|}\right)\text{.}$
Consequently,
$\partial_{x,y}^{\alpha}F\left(x,y\right)=o\left(x^{m-\left|\alpha\right|}\right)$
on $E\setminus\{(0,0)\}$, for $\left|\alpha\right|\leq m$. Thus, as claimed,
$F\in C^{m}\left(E\right)$ and $J_{\left(0,0\right)}F=0$.
The following generalization of Lemma 6.4 is reduced to Lemma 6.4 by means of
the change of variables discussed above.
###### Lemma 6.5
Let $0\leq\psi_{-}(x)\leq\psi_{+}\left(x\right)\leq x$ be semialgebraic
functions on $[0,\delta]$, with $\psi_{-}<\psi_{+}$ on $(0,\delta]$. We set
$E_{\delta}=\{(x,y)\in\mathbb{R}^{2}:0\leq
x\leq\delta,\ \psi_{-}\left(x\right)\leq y\leq\psi_{+}(x)\},$
$E_{\delta}^{+}=\{(x,y)\in\mathbb{R}^{2}:0\leq
x\leq\delta,\ 0\leq\psi_{+}(x)-y\leq\frac{2}{3}\left(\psi_{+}(x)-\psi_{-}\left(x\right)\right)\},\text{
and}$ $E_{\delta}^{-}=\{(x,y)\in\mathbb{R}^{2}:0\leq x\leq\delta,\ 0\leq
y-\psi_{-}\left(x\right)\leq\frac{2}{3}\left(\psi_{+}(x)-\psi_{-}\left(x\right)\right)\}.$
Fix a semialgebraic function of one variable, $\theta\left(t\right)$,
satisfying $0\leq\theta\left(t\right)\leq 1$, $\theta\left(t\right)=1$ for
$t\leq 1/3,$ $\theta\left(t\right)=0$ for $t\geq 2/3$, $\theta\in C^{m+100}$.
Then set
$\theta_{-}\left(x,y\right)=\theta\left(\frac{y-\psi_{-}(x)}{(\psi_{+}-\psi_{-})\left(x\right)}\right)\text{,
}\theta_{+}\left(x,y\right)=1-\theta_{-}\left(x,y\right)\text{ for
}\left(x,y\right)\in E_{\delta}\setminus\{(0,0)\}\text{.}$
Thus, $\theta_{+},\theta_{-}\geq 0$ and $\theta_{+}+\theta_{-}=1$ on
$E_{\delta}\setminus\{(0,0)\}$.
Let $F^{+}\in C^{m}(E_{\delta}^{+})$ and $F^{-}\in C^{m}(E_{\delta}^{-})$ be
semialgebraic functions, with $J_{(0,0)}F^{+}=J_{(0,0)}F^{-}=0$.
Suppose that
$\partial_{y}^{l}F^{+}(x,\psi_{+}(x))-\sum_{j=0}^{m-l}\frac{1}{j!}\partial_{y}^{l+j}F^{-}(x,\psi_{-}(x))\cdot(\psi_{+}(x)-\psi_{-}\left(x\right))^{j}=o((\psi_{+}(x)-\psi_{-}\left(x\right))^{m-l})$
as $x\rightarrow 0^{+}$ for each $l=0,\cdots,m$.
Define $F=\theta_{+}\cdot F^{+}+\theta_{-}\cdot F^{-}$ on
$E_{\delta}\setminus\{(0,0)\}$, $F(0,0)=0$.
Then $F$ is a $C^{m}$ semialgebraic function on $E_{\delta^{\prime}}$ for some
small $\delta^{\prime}$. The jet of $F$ at $(0,0)$ is zero. Moreover,
$F=F^{+}$ in a neighborhood of any point $(x,\psi_{+}(x))$,
$0<x<\delta^{\prime}$, and $F=F^{-}$ in a neighborhood of any point
$(x,\psi_{-}(x))$, $0<x<\delta^{\prime}$.
### 6.3 Proof of Lemma 6.1
Let $\mathcal{H}=(H(z))_{z\in\mathbb{R}^{2}}$ be a semialgebraic bundle with a
$C^{m}_{loc}$ section. Each $H(z)$ is a coset of an $\mathcal{R}_{z}$
submodule in $\mathcal{R}_{z}^{D}$. Assume $H((0,0))=\{0\}$. Let
$\Omega_{\delta}=\{(x,y)\in\mathbb{R}^{2}:0\leq x\leq\delta,\ 0\leq y\leq x\}$
for $\delta>0$. We look for semialgebraic $C^{m}_{loc}$ sections of
$\mathcal{H}|_{\Omega_{\delta}}$, for some small $\delta$ (which will keep
shrinking as we discuss further).
We apply Lemma 5.3. Thus, we obtain the following
* •
Semialgebraic functions
$0\leq\psi_{0}\left(x\right)\leq\psi_{1}\left(x\right)\leq\cdots\leq\psi_{s_{\max}}\left(x\right)=x$
on $\left(0,\delta\right),$ all given by convergent Puiseux expansions on
$\left(0,\delta\right)$.
* •
Integers $k_{s}$ $\left(0\leq k_{s}\leq D\right)$ and permutations
$\pi_{s}:\left\{1,\cdots,D\right\}\rightarrow\left\{1,\cdots,D\right\}$
for $s=1,\cdots,s_{\max}$.
* •
Semialgebraic functions $A_{ij}^{s}\left(x,y\right)$ $(s=1,\cdots,s_{\max}$,
$1\leq i\leq k_{s},k_{s}<j\leq D)$ and $\varphi_{i}^{s}\left(x,y\right)$
$\left(s=1,\cdots,s_{\max},1\leq i\leq k_{s}\right)$ defined on
$E_{s}=\left\{\left(x,y\right):0<x<\delta,\ \psi_{s-1}\left(x\right)<y<\psi_{s}\left(x\right)\right\}$.
* •
Semialgebraic functions $\theta_{jl}^{si}\left(x\right)$,
$g^{si}\left(x\right)$
$(s=0,\cdots,s_{\max},i=1,\cdots,i_{\max}\left(s\right)$, $j=1,\cdots,D,$
$l=0,\cdots,m)$ defined on $\left(0,\delta\right)$, and given there by
convergent Puiseux expansions.
The above objects have the following properties
* •
(Estimates) For $\left(x,y\right)\in\Omega_{1}$ with $0<x<\delta$ and
$\psi_{s-1}\left(x\right)<y<\psi_{s}\left(x\right)$, we have
$\left|\partial^{\alpha}A_{ij}^{s}\left(x,y\right)\right|$,
$\left|\partial^{\alpha}\varphi_{i}^{s}\left(x,y\right)\right|\leq
C\left[\min\left(\left|y-\psi_{s}\left(x\right)\right|,\left|y-\psi_{s-1}\left(x\right)\right|\right)\right]^{-\left|\alpha\right|}$
for $\left|\alpha\right|\leq m+100$.
* •
(Condition for sections) Let $F=(F_{1},\cdots,F_{D})\in
C^{m}\left(\Omega_{1},\mathbb{R}^{D}\right)$, and suppose $J_{x}F\in
H\left(x\right)$ for all $x\in\Omega_{1}$.
Then for $s=1,\cdots,s_{\max}$, $i=1,\cdots,k_{s}$,
$x\in\left(0,\delta\right)$,
$\psi_{s-1}\left(x\right)<y<\psi_{s}\left(x\right)$, we have
(133) $F_{\pi_{s}i}\left(x,y\right)+\sum_{D\geq
j>k_{s}}A_{ij}^{s}\left(x,y\right)F_{\pi_{s}j}\left(x,y\right)=\varphi_{i}^{s}\left(x,y\right)\text{;}$
and for $s=0,1,\cdots,s_{\max}$, $i=1,\cdots,i_{\max}\left(s\right)$,
$x\in\left(0,\delta\right)$, we have
(134)
$\sum_{j=1}^{D}\sum_{l=0}^{m}\theta_{jl}^{si}\left(x\right)\partial_{y}^{l}F_{j}\left(x,\psi_{s}\left(x\right)\right)=g^{si}\left(x\right)\text{;}$
and
(135) $J_{\left(0,0\right)}F_{j}=0$
for all $j$.
Conversely, if $F=(F_{j})_{j=1,\cdots,D}\in
C^{m}_{loc}\left(\mathbb{R}^{2},\mathbb{R}^{D}\right)$ satisfies (133), (134),
(135), then $F$ is a section of $\mathcal{H}$ over
$\Omega_{\delta}^{\text{closure}}$.
Next, we set (for $s=1,\cdots,s_{\max})$:
$E_{s}^{+}=\left\{\left(x,y\right)\in\mathbb{R}^{2}:0\leq x\leq\delta,\text{
}0\leq\psi_{s}\left(x\right)-y\leq\frac{2}{3}\left(\psi_{s}\left(x\right)-\psi_{s-1}\left(x\right)\right)\right\}$
and
$E_{s}^{-}=\left\{\left(x,y\right)\in\mathbb{R}^{2}:0\leq x\leq\delta\text{,
}0\leq
y-\psi_{s-1}\left(x\right)\leq\frac{2}{3}\left(\psi_{s}\left(x\right)-\psi_{s-1}\left(x\right)\right)\right\}\text{.}$
Then $E_{s}^{+,\text{interior}}\cup E_{s}^{-,\text{interior}}=E_{s}$. On
$E_{s}^{+,\text{interior}}$ we have
$\left|\partial^{\alpha}A_{ij}^{s}\left(x,y\right)\right|$,
$\left|\partial^{\alpha}\varphi_{i}^{s}\left(x,y\right)\right|\leq
C\left(\psi_{s}\left(x\right)-y\right)^{-\left|\alpha\right|}$ for
$\left|\alpha\right|\leq m+100$, and on $E_{s}^{-,\text{interior}}$ we have
$\left|\partial^{\alpha}A_{ij}^{s}\left(x,y\right)\right|$,
$\left|\partial^{\alpha}\varphi_{i}^{s}\left(x,y\right)\right|\leq
C\left(y-\psi_{s-1}\left(x\right)\right)^{-\left|\alpha\right|}$ for
$\left|\alpha\right|\leq m+100$.
We may apply Lemma 6.2 after a change of variables of the form
$(\bar{x},\bar{y})=(x,\pm(y-\psi(x))).$
Thus, we obtain the following objects, with properties described below.
* •
Semialgebraic functions $\theta_{jl}^{+,si}\left(x\right)$,
$g^{+,si}\left(x\right)$, $i=1,\cdots,i_{\max}^{+}\left(s\right)$,
$\theta_{jl}^{-,si}\left(x\right)$, $g^{-,si}\left(x\right)$,
$i=1,\cdots,i_{\max}^{-}\left(s\right)$, $l=0,\cdots,m,$ defined on
$\left(0,\delta\right)$ (smaller $\delta$).
* •
Semialgebraic functions $\tilde{\theta}_{jl}^{+,si}\left(x\right)$,
$\tilde{g}^{+,si}\left(x\right)$,
$i=1,\cdots,\tilde{\imath}_{\max}^{+}\left(s\right)$,
$\tilde{\theta}_{jl}^{-,si}\left(x\right)$, $\tilde{g}^{-,si}\left(x\right)$,
$i=1,\cdots,\tilde{\imath}_{\max}^{-}\left(s\right)$, $l=0,\cdots,m,$ defined
on $\left(0,\delta\right)$ (smaller $\delta$).
The properties for these functions are as follows.
Let $F=(F_{1},\cdots,F_{D})\in
C^{m}_{loc}\left(\mathbb{R}^{2},\mathbb{R}^{D}\right)$ satisfy (133) in
$E_{s}^{+,\text{interior}}$ and $J_{\left(0,0\right)}F=0$. Then
(136) $\sum_{\begin{subarray}{c}1\leq j\leq D\\ 0\leq l\leq
m\end{subarray}}\theta_{jl}^{+,si}\partial_{y}^{l}F_{j}\left(x,\psi_{s}\left(x\right)\right)=g^{+,si}\left(x\right)$
for $x\in\left(0,\delta\right)$ and all $i$, and
(137) $\sum_{\begin{subarray}{c}1\leq j\leq D\\ 0\leq l\leq
m\end{subarray}}\tilde{\theta}_{jl}^{+,si}\partial_{y}^{l}F_{j}\left(x,\psi_{s}\left(x\right)\right)=\tilde{g}^{+,si}\left(x\right)+o\left(1\right)\text{
as }x\rightarrow 0^{+}$
for $x\in\left(0,\delta\right)$ and all $i$.
Similarly, let $F=(F_{1},\cdots,F_{D})\in
C^{m}_{loc}\left(\mathbb{R}^{2},\mathbb{R}^{D}\right)$ satisfy (133) in
$E_{s}^{-,\text{interior}}$ and $J_{\left(0,0\right)}F=0$. Then
(138) $\sum_{\begin{subarray}{c}1\leq j\leq D\\ 0\leq l\leq
m\end{subarray}}\theta_{jl}^{-,si}\partial_{y}^{l}F_{j}\left(x,\psi_{s-1}\left(x\right)\right)=g^{-,si}\left(x\right)$
for $x\in\left(0,\delta\right)$ and all $i$, and
(139) $\sum_{\begin{subarray}{c}1\leq j\leq D\\ 0\leq l\leq
m\end{subarray}}\tilde{\theta}_{jl}^{-,si}\partial_{y}^{l}F_{j}\left(x,\psi_{s-1}\left(x\right)\right)=\tilde{g}^{-,si}\left(x\right)+o\left(1\right)\text{
as }x\rightarrow 0^{+}$
for all $i$.
(140)
Conversely, fix $s$ and suppose we are given semialgebraic functions
$f_{jl}^{+,s}\left(x\right)$ on $\left(0,\delta\right)$ satisfying
$\sum_{\begin{subarray}{c}1\leq j\leq D\\ 0\leq l\leq
m\end{subarray}}\theta_{jl}^{+,si}f_{jl}^{+,s}\left(x\right)=g^{+,si}\left(x\right)\text{
(all }i\text{)}$
and
$\sum_{\begin{subarray}{c}1\leq j\leq D\\ 0\leq l\leq
m\end{subarray}}\tilde{\theta}_{jl}^{+,si}f_{jl}^{+,s}\left(x\right)=\tilde{g}^{+,si}\left(x\right)+o\left(1\right)\text{
as }x\rightarrow 0^{+}\text{ (all }i\text{)}.$
Then there exists a semialgebraic function
$F=\left(F_{1},\cdots,F_{D}\right)\in
C^{m}\left(E_{s}^{+},\mathbb{R}^{D}\right)$ such that (133) holds in
$E_{s}^{+,\text{interior}}$ and
$\partial_{y}^{l}F_{j}\left(x,\psi_{s}\left(x\right)\right)=f_{jl}^{+,s}\left(x\right)$
and $J_{\left(0,0\right)}F_{j}=0$ for all $j$.
(142)
Similarly, fix $s$ and suppose we are given semialgebraic functions
$f_{jl}^{-,s}\left(x\right)$ on $\left(0,\delta\right)$ satisfying
$\sum_{\begin{subarray}{c}1\leq j\leq D\\ 0\leq l\leq
m\end{subarray}}\theta_{jl}^{-,si}f_{jl}^{-,s}\left(x\right)=g^{-,si}\left(x\right)\text{
(all }i\text{)}$
and
$\sum_{\begin{subarray}{c}1\leq j\leq D\\ 0\leq l\leq
m\end{subarray}}\tilde{\theta}_{jl}^{-,si}f_{jl}^{-,s}\left(x\right)=\tilde{g}^{-,si}\left(x\right)+o\left(1\right)\text{
as }x\rightarrow 0^{+}\text{ (all }i\text{)}.$
Then there exists a semialgebraic function
$F=\left(F_{1},\cdots,F_{D}\right)\in
C^{m}\left(E_{s}^{-},\mathbb{R}^{D}\right)$ such that (133) holds in
$E_{s}^{-,\text{interior}}$ and
$\partial_{y}^{l}F_{j}\left(x,\psi_{s-1}\left(x\right)\right)=f_{jl}^{-,s}\left(x\right)$
and $J_{\left(0,0\right)}F_{j}=0$ for all $j$.
(144)
Moreover, if $F=(F_{1},\cdots,F_{D})\in
C^{m}\left(E_{s}^{\text{closure}},\mathbb{R}^{D}\right)$ with
$J_{\left(0,0\right)}F=0$, then
$f_{jl}^{+,s}=\partial_{y}^{l}F_{j}\left(x,\psi_{s}\left(x\right)\right)$ and
$f_{jl}^{-,s}=\partial_{y}^{l}F_{j}\left(x,\psi_{s-1}\left(x\right)\right)$
satisfy the key hypothesis of Lemma 6.5, namely,
$f_{jl}^{+,s}\left(x\right)-\sum_{k=0}^{m-l}\frac{1}{k!}f_{j(l+k)}^{-,s}\left(x\right)\left(\psi_{s}\left(x\right)-\psi_{s-1}\left(x\right)\right)^{k}=o\left(\left[\psi_{s}\left(x\right)-\psi_{s-1}\left(x\right)\right]^{m-l}\right)\text{
as }x\rightarrow 0^{+}$
by Taylor’s theorem.
Now, suppose $F=(F_{1},\cdots,F_{D})\in
C^{m}_{loc}\left(\mathbb{R}^{2},\mathbb{R}^{D}\right)$ is a section of
$\mathcal{H}$ over $\Omega_{\delta}$. Then, setting
$f_{jl}^{s}\left(x\right)=\partial_{y}^{l}F_{j}\left(x,\psi_{s}\left(x\right)\right)$
for $x\in\left(0,\delta\right)$ (smaller $\delta$), we learn (because the
$F_{j}$ satisfy (133), (134), (135)) that properties (134)$\cdots$(139) yield
a collection of assertions of the form
(146) $\sum_{\begin{subarray}{c}j=1,\cdots,D\\
l=0,\cdots,m\end{subarray}}\theta_{jl}^{\#,si}\left(x\right)f_{jl}^{s}\left(x\right)=g^{\#,si}\left(x\right)\text{
on }\left(0,\delta\right)$
and
(147) $\sum_{\begin{subarray}{c}j=1,\cdots,D\\
l=0,\cdots,m\end{subarray}}\tilde{\theta}_{jl}^{\#,si}\left(x\right)f_{jl}^{s}\left(x\right)=\tilde{g}^{\#,si}\left(x\right)+o\left(1\right)\text{
as }x\rightarrow 0^{+}\text{;}$
and also from (144) we have
(148)
$f_{jl}^{s}\left(x\right)=\sum_{k=0}^{m-l}\frac{1}{k!}f_{j\left(l+k\right)}^{s-1}\left(x\right)\left[\psi_{s}\left(x\right)-\psi_{s-1}\left(x\right)\right]^{k}+o\left(\left[\psi_{s}\left(x\right)-\psi_{s-1}\left(x\right)\right]^{m-l}\right)\text{
as }x\rightarrow 0^{+}\text{.}$
Conversely, if the $f_{jl}^{s}\left(x\right)$ are semialgebraic functions of
one variable, satisfying (146), (147), and (148), then for each
$s=1,\cdots,s_{\max}$ there exist
$F_{+}^{s}=(F_{+,1}^{s},\cdots,F_{+,D}^{s})\in
C^{m}\left(E_{s}^{+,\text{closure}},\mathbb{R}^{D}\right)$,
$F_{-}^{s}=(F_{-,1}^{s},\cdots,F_{-,D}^{s})\in
C^{m}\left(E_{s}^{-,\text{closure}},\mathbb{R}^{D}\right)$ semialgebraic such
that (133), (134), (135) hold in $E_{s}^{+}$, $E_{s}^{-}$, respectively, and
$\partial_{y}^{l}F_{+,j}^{s}\left(x,\psi_{s}\left(x\right)\right)=f_{jl}^{s}\left(x\right)$,
$\partial_{y}^{l}F_{-,j}^{s}\left(x,\psi_{s-1}\left(x\right)\right)=f_{jl}^{s-1}\left(x\right)$
and $J_{\left(0,0\right)}F_{+}^{s}=J_{\left(0,0\right)}F_{-}^{s}=0$.
Note that $F_{+}^{s}$ is a section of $\mathcal{H}$ over $E_{s}^{+}$, and
$F_{-}^{s}$ is a section of $\mathcal{H}$ over $E_{s}^{-}$.
Thanks to (148) and Lemma 6.5, we may patch together $F_{+}^{s}$, $F_{-}^{s}$
into a semialgebraic $F_{s}=(F_{s,1},\cdots,F_{s,D})\in
C^{m}(E_{s}^{\text{closure}},\mathbb{R}^{D})$ such that $J_{(0,0)}F_{s}=0$,
$F_{s}$ is a section of $\mathcal{H}$ over $E_{s}^{\text{closure}}$, and
$\partial_{y}^{l}F_{sj}(x,\psi_{s}(x))=f_{jl}^{s}(x)$ and
$\partial_{y}^{l}F_{sj}(x,\psi_{s-1}(x))=f_{jl}^{s-1}(x)$.
Because of these conditions, the $F_{s}$ ($s=1,\cdots,s_{\max}$) fit together
(their transverse derivatives up to order $m$ match at the boundaries where
the $E_{s}$ meet), so using also Corollary 3.2, we obtain from the $F_{s}$ a
single semialgebraic $F=(F_{1},\cdots,F_{D})\in
C^{m}_{loc}(\mathbb{R}^{2},\mathbb{R}^{D})$ such that $J_{(0,0)}F=0$, and $F$
is a section of $\mathcal{H}$ over $\Omega_{\delta}$.
Thus, we have proven Lemma 6.1.
## 7 Proof of Lemma 4.1 (Main Lemma)
From the Second Main Lemma (Lemma 6.1), we can easily deduce Lemma 4.1.
Indeed, suppose
$\mathcal{H}=\left(H\left(x,y\right)\right)_{\left(x,y\right)\in\Omega_{\delta}}$
is as in the hypotheses of Lemma 4.1.
Let $\theta_{jl}^{si},g^{si},\tilde{\theta}_{jl}^{si},\tilde{g}^{si},\psi_{s}$
be as in Lemma 6.1.
For $x\in\left(0,\delta\right)$ with $\delta$ small enough, we introduce the
following objects:
$\displaystyle W\left(x\right)$ $\displaystyle=$
$\displaystyle\left\{\left(\xi_{jl}^{s}\right)_{\begin{subarray}{c}0\leq
s\leq s_{\max}\\ 0\leq l\leq m\\ 1\leq j\leq
D\end{subarray}}\in\mathbb{R}^{\left(s_{\max}+1\right)\cdot\left(m+1\right)\cdot
D}:\sum_{j,l}\theta_{jl}^{si}\left(x\right)\xi_{jl}^{s}=g^{si}\left(x\right)\text{,
each }s,i\right\}\text{,}$
$\displaystyle\mathcal{F}\left(\left(\xi_{jl}^{s}\right),x\right)$
$\displaystyle=$
$\displaystyle\sum_{s,i}\left|\sum_{j,l}\tilde{\theta}_{jl}^{si}\left(x\right)\xi_{jl}^{s}-\tilde{g}^{si}\left(x\right)\right|$
$\displaystyle+\sum_{s\not=0}\sum_{j,l}\frac{\left|\xi_{jl}^{s}-\sum_{k=0}^{m-l}\frac{1}{k!}\xi_{j\left(l+k\right)}^{s-1}\cdot\left(\psi_{s}\left(x\right)-\psi_{s-1}\left(x\right)\right)^{k}\right|}{\left[\psi_{s}\left(x\right)-\psi_{s-1}\left(x\right)\right]^{m-l}}\text{,}$
$\displaystyle\mathcal{F}_{\min}\left(x\right)$ $\displaystyle=$
$\displaystyle\inf\left\{\mathcal{F}\left(\left(\xi_{jl}^{s}\right),x\right):\left(\xi_{jl}^{s}\right)\in
W\left(x\right)\right\}\text{, and}$ $\displaystyle\Xi_{OK}\left(x\right)$
$\displaystyle=$ $\displaystyle\left\{\left(\xi_{jl}^{s}\right)\in
W\left(x\right):\mathcal{F}\left(\left(\xi_{jl}^{s}\right),x\right)\leq\mathcal{F}_{\min}\left(x\right)+x\right\}\text{.}$
Because
$\theta_{jl}^{si},g^{si},\tilde{\theta}_{jl}^{si},\tilde{g}^{si},\psi_{s}$ are
semialgebraic, the objects defined above depend semialgebraically on $x$.
Thanks to conclusion (49) of Lemma 6.1, each $W\left(x\right)$ and each
$\Xi_{OK}(x)$ is non-empty, and
(149) $\mathcal{F}_{\min}\left(x\right)\rightarrow 0\text{ as }x\rightarrow
0^{+}\text{.}$
From Theorem 3 we obtain
(150)
Semialgebraic functions $\xi_{jl}^{s}\left(x\right)$ on
$\left(0,\delta\right)$ such that
$\left(\xi_{jl}^{s}\left(x\right)\right)\in\Xi_{OK}\left(x\right)$ for each
$x\in\left(0,\delta\right)$.
In particular, for $x\in\left(0,\delta\right)$, we have
(152)
$\displaystyle\sum_{j,l}\theta_{jl}^{si}\left(x\right)\xi_{jl}^{s}\left(x\right)$
$\displaystyle=$ $\displaystyle g^{si}\left(x\right)\text{ for each }s,i\text{;}$
(153)
$\displaystyle\left|\sum_{j,l}\tilde{\theta}_{jl}^{si}\left(x\right)\xi_{jl}^{s}\left(x\right)-\tilde{g}^{si}\left(x\right)\right|$
$\displaystyle\leq$
$\displaystyle\left[\mathcal{F}_{\min}\left(x\right)+x\right]\text{ for each
}s,i;$
and
(154)
$\displaystyle\left|\xi_{jl}^{s}\left(x\right)-\sum_{k=0}^{m-l}\frac{1}{k!}\xi_{j\left(l+k\right)}^{s-1}(x)\cdot\left(\psi_{s}\left(x\right)-\psi_{s-1}\left(x\right)\right)^{k}\right|$
$\displaystyle\leq$
$\displaystyle\left[\mathcal{F}_{\min}\left(x\right)+x\right]\cdot\left(\psi_{s}\left(x\right)-\psi_{s-1}\left(x\right)\right)^{m-l}\text{,
for each }s,j,l\text{ }\left(s\not=0\right)\text{.}$
From (149), (153), (154), we see that
(155)
$\sum_{j,l}\tilde{\theta}_{jl}^{si}\left(x\right)\xi_{jl}^{s}\left(x\right)=\tilde{g}^{si}\left(x\right)+o\left(1\right)\text{
as }x\rightarrow 0^{+}\text{,}$
and
(156)
$\displaystyle\xi_{jl}^{s}\left(x\right)-\sum_{k=0}^{m-l}\frac{1}{k!}\xi_{j\left(l+k\right)}^{s-1}(x)\cdot\left(\psi_{s}\left(x\right)-\psi_{s-1}\left(x\right)\right)^{k}$
$\displaystyle=$ $\displaystyle
o\left(\left[\psi_{s}\left(x\right)-\psi_{s-1}\left(x\right)\right]^{m-l}\right)\text{
as }x\rightarrow 0^{+}\text{.}$
Finally, from (150), (152), (155), (156), and the assertion (51) in Lemma
6.1, we conclude that $\mathcal{H}|_{\Omega_{\delta^{\prime}}}$ has a
$C^{m}_{loc}$ semialgebraic section for some $\delta^{\prime}<\delta$.
This completes the proof of Lemma 4.1 and that of Theorem 1.
# Slider: On the Design and Modeling of a 2D Floating Satellite Platform
Avijit Banerjee, Jakub Haluska, Sumeet G. Satpute, Dariusz Kominiak, and
George Nikolakopoulos${}^{1}$
${}^{1}$Robotics and Artificial Intelligence, Department of Computer,
Electrical and Space Engineering, Luleå University of Technology, Luleå.
{aviban, jakhal, sumsat, darkom}<EMAIL_ADDRESS>
###### Abstract
In this article, a floating robotic emulation platform for the virtual
demonstration of satellite motion in space is presented. The robotic platform
design is characterized by its friction-less, levitating, yet planar motion
over a hyper-smooth surface. The robotic platform, integrated with sensor and
actuator units, has been fully designed and manufactured by the Robotics and
Artificial Intelligence Team at Luleå University of Technology. A detailed
design description is formulated, along with the mathematical modeling
describing the platform’s dynamic motion. Finally, the proposed design is
validated in extensive simulation studies, while the overall test-bed
experimental setup, as well as the vehicle hardware and software
architectures, are discussed in detail. Furthermore, the entire design,
including the 3D CAD model for printing and the different testbed elements, is
provided in an open-source repository, and a test campaign is used to showcase
the platform’s capabilities and illustrate its operation.
## I Introduction
The advancement of space technology in the recent era has revolutionised our
perception of space activities. Space agencies across the globe focus on
autonomous robotic missions that enable on-orbit servicing, manufacturing and
maintenance of satellites, close monitoring, docking, and active debris
removal. With the growing interest in small-scale satellites over the past
decade [1], present-generation space missions focus on the prospect of
multiple-spacecraft formation missions towards autonomous operations in
space. Such ambitious tasks involve highly complex and challenging objectives
that require efficient and effective autonomous guidance, navigation and
control systems to ensure overall mission success [2].
To autonomously carry out such highly complex and widespread objectives,
advanced guidance, navigation, and control (GNC) techniques are of primary
importance [2] in executing these challenging operations with a high degree
of reliability and accuracy. To ensure higher technology readiness levels
before deployment into space, rigorous validation of the robotic space system
through ground-based, high-fidelity campaigns under relevant conditions is
essential for cost-effective, low-risk, and potentially high-return solutions.
In view of that, friction-less, micro-gravity orbital motion is one of the
most critical aspects of the space environment that needs to be replicated in
hardware-in-the-loop tests [3]. However, recreating space-like micro-gravity
conditions is challenging in laboratory settings on Earth. A comprehensive
study reported in [4] provides a systematic review of the development of such
simulation environments. Parabolic flight tests [5], in which a free-fall
maneuver is performed for a short period, are a potential micro-gravity
research tool. Another possibility is to conduct drop-tower tests [6], which
can provide realistic micro-gravity conditions. Experimental validations
based on parabolic flight tests, related to space robotics and on-orbit
maintenance, were published in [7]. These techniques, however, are
constrained by the overall flight time and the limited space available inside
the dropped capsule [8]. Moreover, such approaches are significantly
expensive for simulating the micro-gravity environment.
Researchers have suggested innovative methods for long-duration tests for
space robotics emulation. One such approach considers an underwater neutral
buoyancy system to replicate weightlessness [9]. While such underwater test
facilities provide significant capacity for astronaut training, the utility
of submerging a satellite is rather limited. Another approach considers a
weight-reducing suspension system [10] to counterbalance the gravitational
force; the concept has been validated with a robotic manipulator. However,
the limited allowable motion and the disturbances introduced by the
suspension mechanism restrict the applicability of such methods.
For generating an approximate micro-gravity environment, planar air-bearing
based mechanisms provide the most flexible dynamic equivalency with an ideal
space-representative framework. Robotic vehicles supported by air bearings,
representing spacecraft/satellites, have some level of manoeuvring capability
over a smooth, planar, nearly friction-less surface. Air bearings attached to
the platform release pressurized air and create a thin film that levitates
the platform, thereby counterbalancing its weight to produce a micro-gravity
effect (the in-plane components of gravity on the test vehicles are
negligible), thus emulating the drag-free and weightless environment of
orbital spaceflight. The only limitation of the design is that it is
restricted to three degrees of freedom, i.e., two translational and one
rotational motion, which nevertheless closely resembles a space scenario
(since out-of-plane motion is very limited in actual space missions).
Moreover, these facilities capture various hardware phenomena (e.g. realistic
actuation mechanisms, computational constraints, sensor noise, actuator
uncertainty, delays, etc.). In this manner, an air-bearing based
friction-less platform provides GNC testbeds for rigorous validation, both in
terms of software evaluation and hardware-based implementation, in
high-fidelity test environments that have the capability to emulate realistic
conditions in space. Towards this direction, it should be mentioned that, to
the best of our knowledge, there are no commercially available friction-less
micro-gravity emulation platforms of this kind. Various government
organizations and university laboratories across the globe have indigenously
constructed their own test-bed facilities; examples can be found in [11],
[12], [13]. Primarily, these emulation platforms synthesize the planar motion
of a robotic vehicle, while in some designs additional degrees of freedom are
achieved by adding an air bearing on top of the planar platform [14, 15, 16].
It should be noted that there is usually no thorough characterization of the
test beds in the literature, which restricts a thorough comparison of the
experimental results obtained using the various test facilities.
In line with the presented background, this article presents the design of a
novel floating platform, named the Slider from now on, which has been fully
designed and manufactured by the Robotics and Artificial Intelligence Team
[17] at Luleå University of Technology in Sweden. Figure 1 depicts the
hardware-in-the-loop test-bed facility, which consists of an epoxy-topped
flat table, a robotic arm connected to a two-dimensional gantry system, the
Slider platform on top of the flat table, and an ABB industrial robot in the
vicinity of the table. The platform will be a useful tool to advance the
state of the art in GNC evaluation and can be used to perform end-to-end
system-level verification and validation before a system's operational
deployment.
The main contributions of the article stem from: a) the introduction of a
novel design of a planar floating platform, the Slider, with a detailed
design specification based on a limited number of air bearings; b) a full
mathematical model derivation representing the translational and rotational
motion of the Slider platform over the friction-less table, as required for
the sequential control-design step; and c) on top of that, an analytical
mathematical model of the framework suitable for control design, together
with a low-level actuator design framework, specifically developed for the
proposed platform to enable accurate actuation. The entire design of the
Slider platform, including the 3D CAD model, data for 3D printing and laser
cutting, the blueprint diagram, and an extensive list of the various
components, is made available in the GitHub repository [18]. We sincerely
believe that open-sourcing our design will benefit interested space-research
communities in quickly rebuilding the hardware platform in an individually
customized setup. A visual demonstration of the Slider in operational mode
can be found in [19].
The rest of the article is structured as follows. In Section II, a design
description of the physical floating platform equipped with its various
components is presented. An initial validation of the overall design is
described in Section III. In Section IV, the mathematical model of the
floating platform is formulated. In Section V, an open-loop simulation with
low-level actuator-selection logic is developed and the mathematical model is
validated through numerical simulations. Finally, the article is concluded,
along with future directions, in Section VI.
## II Design Description of Slider Platform
The robotic emulator platform is designed to maneuver smoothly over a
friction-less table. The flat top surface of the table (shown in Fig. 1) is
coated with epoxy resin, which creates the smooth and flat surface required
to replicate the friction-less motion of a spacecraft in the space
environment. A schematic design of the Slider platform is presented in Fig.
2. The Slider is supported by three air bearings attached to its bottom deck.
The functional surface of each air bearing is porous in nature: compressed
air is evenly released through these small holes, which creates an air
cushion. The air cushion supports the weight of the Slider platform and
allows it to levitate over the epoxy-topped table. Although it cannot offer a
true micro-gravity framework, such a mechanism provides a nearly
friction-less environment in the 2-dimensional plane of the flat table, which
closely resembles motion in space. For this reason, it is preferred as an
emulation platform to demonstrate state-of-the-art autonomous technologies
for complicated space missions. The Slider platform consists of the various
subsystems described as follows.
Figure 1: Hardware-in-the-loop testing facility with a $4\times
4~{}\mathrm{m}$ epoxy-topped flat table, a floating platform (Slider), and
two robotic manipulators at Luleå University of Technology.
(a) Side View
(b) Bottom View
Figure 2: Physical model of slider platform
Figure 3: Blueprint model of slider platform
### II-A Structural Design of Slider
The structural design of the platform is constructed in such a way that it is
light-weight, supports the necessary payload components (e.g. air bearings,
compressed-air tank, thrusters, etc.), and provides sufficient rigidity to
the overall assembly. One of the key features of the design is the ease of
manufacturing: the technologies used for production are primarily 3D printing
and laser cutting. The outline structure consists of a circular base and a
ceiling surface, connected by three 'top-down frames' as shown in Fig. 2. The
circular base of the structure is made out of a $6$ mm polycarbonate sheet,
which is light-weight and has sufficient stiffness to hold the various
components mounted on it. The heavier components (e.g. air tank, battery,
etc.) are placed on the circular base as close as possible to the center of
the platform. Such compact placement ensures that the centre of gravity of
the Slider lies close to its geometric center. The baseline dimension of the
platform (approximately 350 mm in diameter) is largely defined by the length
of the air tank and the size of the air-pressure regulators mounted over it.
Components such as the thruster assemblies and regulators are placed along
the periphery of the circular base, as shown in Fig. 3, which provides the
maximum possible torque arm for controlling the platform's motion. A list of
the major components used to build the Slider platform is given in Table I; a
more elaborate design description is presented in Tables A1 and A2 of the
appendix. Various components of the baseline structure are built in the
laboratory using 3D printing technology with Polylactic Acid (PLA) [20] as
the printing material.
TABLE I: List of Components

Component | Manufacturer | Product type/Specification
---|---|---
Solenoid valve | Festo | MFH-2-M5
12V coil | Festo | MSFG-12-OD
Regulator | Festo | MS2-LR-QS6-D6-AR-BAR-B
Airsoft regulator | Polarstar | MRS
Tank regulator | Ninja | HP UL Reg 4500psi
Air tank | DYE | UL
Relay module | Seeed | Groove -2-Channel SPDT Relay
Upboard Computer | Arduino | Micro-controller
Figure 4: Schematic representation of Air-management system
### II-B Air Management System
In order to drive all pneumatic components (primarily the air bearings and
the thruster assembly), the Slider platform carries a $1.2$ l air tank, which
is filled with pressurized air and placed symmetrically about the
$X_{B}-Z_{B}$ plane of the Slider. The air tank can store compressed air
pressurized up to $300$ bar, and it is connected to the air bearings
(attached to the bottom deck) and the thrusters through air tubes. A
schematic representation of the air-management system is presented in Fig. 4.
Pneumatic flow from the air tank is divided into two separate branches. Each
branch is equipped with a low-pressure regulator capable of controlling the
output pressure: the output air pressure is regulated down to about $5$ bar
for the operation of the air bearings and to about $7$ bar for the thrusting
mechanism. The relatively heavy regulators are mounted on the front side of
the tank to appropriately counteract the weight of the air tank along the
$Y_{B}-Z_{B}$ plane. The compressed air is evenly released through the porous
surfaces of the air bearings in a regulated manner. Three air bearings, each
of size $40$ mm, are mounted on the bottom deck. When operational, the three
air bearings together create an air cushion with a thickness of about $15$
microns, resulting in friction-less motion of the platform over the table.
### II-C Actuation Mechanism
In order to mobilize the Slider platform in a controlled manner, it is
equipped with eight small thrusters that operate in an on-off mode. The
thrusters are synthesized from 3D-printed nozzles integrated with $12$ V
solenoid valves. The solenoid valves are controlled by relay modules, which
operate on signals received from the on-board computer or the RC receiver
through an Arduino board. The energy is stored in a $4$S LiPo $1400$ mAh
battery and is regulated down to $12$ V for the solenoid valves and down to
$5$ V for the on-board computer. Engineering resin is used as the material
for synthesizing the nozzles. The eight thrusters are distributed into four
brackets, which are placed far apart from the geometric centre while
maintaining a compact footprint of a square with a side of about $39$ mm. The
placement of the thruster assembly is presented in Fig. 3. The two thrusters
sharing a bracket (one aligned with the $x$-axis and the other with the
$y$-axis) maintain a small offset, as shown in Fig. 8. Such a widespread
arrangement provides large torque arms, which maximizes the magnitude of the
achievable torque. Each thruster produces a constant-magnitude force of
$0.7$ N while activated. The thruster assembly is capable of providing the
force and torque required to translate, as well as to orient, the Slider over
the friction-less table. An initial set of experiments has been carried out
to construct the force and torque model of the small thrusters. The
experimental setup for thruster modeling is presented in Fig. 5. A thruster
is mounted on a rod of length about $0.5$ m to provide a sufficiently large
torque arm; such an arrangement is required to compensate for the limited
resolution of the 6D torque/force sensor. The calibrated responses of the
thrusters are presented in Fig. 6. It is evident that the delay in the
thruster response is fairly insignificant for engineering practice; the
undulations in the measured signals are essentially due to the presence of
unwanted sensor noise. Since the thrusters operate in a switching mode, a
specific combination of thruster activations results in a particular type of
directed motion of the Slider. The specific combinations of thrusters that
must be activated for the various directed motions are presented in Table II
(a code sketch of this selection logic follows the table).
TABLE II: Thrust activation logic

Motion | $T_{1}$ | $T_{2}$ | $T_{3}$ | $T_{4}$ | $T_{5}$ | $T_{6}$ | $T_{7}$ | $T_{8}$
---|---|---|---|---|---|---|---|---
Forward | | | ✓ | | ✓ | | |
Backward | ✓ | | | | | | ✓ |
Left | | ✓ | | ✓ | | | |
Right | | | | | | ✓ | | ✓
Clockwise | ✓ | | | ✓ | ✓ | | | ✓
C-Clockwise | | ✓ | ✓ | | | ✓ | ✓ |
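The following minimal sketch illustrates how the on/off selection logic of Table II can be encoded in software. The activation sets are taken directly from the table and the $0.7$ N per-thruster force from Section II-C; everything else (function and variable names, the command strings) is our own illustrative choice, not the Slider flight code.

```python
# Sketch of the Table II thruster-selection logic (illustrative, not the
# actual Slider flight software).

THRUST_N = 0.7  # constant force per thruster when activated [N] (Sec. II-C)

# Directed-motion command -> set of active thrusters (indices as in Table II)
ACTIVATION = {
    "forward":  {3, 5},
    "backward": {1, 7},
    "left":     {2, 4},
    "right":    {6, 8},
    "cw":       {1, 4, 5, 8},   # clockwise rotation
    "ccw":      {2, 3, 6, 7},   # counter-clockwise rotation
}

def thruster_states(command):
    """Return the on/off (1/0) states of thrusters T1..T8 for a command."""
    active = ACTIVATION[command]
    return [1 if i in active else 0 for i in range(1, 9)]

# Example: a clockwise command fires T1, T4, T5 and T8.
assert thruster_states("cw") == [1, 0, 0, 1, 1, 0, 0, 1]
```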
Figure 5: Experimental setup for thruster modeling
Figure 6: Excitation and measurement of output force and torque
## III Initial Test
An initial trial of the platform has been carried out in the Kiruna space-lab
facility. The goal of the experiment is to test and prove the concept and to
demonstrate the platform's manoeuvrability over the friction-less table.
During the trial, the platform motion has been controlled manually using a
remote-control (RC) transmitter in an open-loop manner. The platform is
equipped with an RC receiver, which is directly connected to the on-board
computer. The on-board computer communicates with the thruster assembly and
provides the 'on/off' command state to the set of assigned thrusters. The RC
transmitter's sticks are set to operate in an 'on/off' fashion, while
modulation and pulsation of the thrusters are controlled directly from the
operator end. The control logic for the RC transmitter has been designed as
follows. Three dedicated channels of the RC transmitter are used to command
the platform's directed motion: one is assigned to control the forward and
backward movement, while the others are used for sideways (left-right) and
rotational (clockwise and counter-clockwise) motion. The actuation logic for
selecting the set of thrusters for each directed motion is described in Table
II; a sketch of a possible channel decoding follows this paragraph.
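As a complement to the description above, the following is a minimal, hypothetical sketch of how the three RC channels could be decoded into the directed-motion commands of Table II; the channel ordering, stick polarity, and command names are our illustrative assumptions, not the actual Slider implementation.

```python
# Hypothetical decoding of the three RC channels into Table II commands
# (channel assignment and stick polarity are illustrative assumptions).
def decode_rc(ch_fwd_back, ch_side, ch_rot):
    """Each channel value is -1, 0 or +1 (stick low / neutral / high).

    Returns a command string usable with ACTIVATION from the earlier
    sketch, or None when all thrusters should stay off.
    """
    if ch_fwd_back:
        return "forward" if ch_fwd_back > 0 else "backward"
    if ch_side:
        return "left" if ch_side > 0 else "right"
    if ch_rot:
        return "ccw" if ch_rot > 0 else "cw"
    return None  # neutral sticks: no thruster active
```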
During the initial trial, the air tank was filled with compressed air up to
$200$ bar. The operating conditions for the platform were set up as follows:
the pressure regulator for the air bearings was set to $5$ bar, and that for
the thruster assembly was maintained at $7$ bar. With this setup, we reached
about $7$ minutes of flight time. Airflow through the bearings is allowed
continuously during the entire operation period, whereas the thrusters are
operated through the on-board computer (as commanded from the RC
transmitter). The on-time of the thrusters is estimated to be $30\%$ of the
entire flight time. Typically, the flight time depends primarily on the
consumption rate of the pressurised air and hence on thruster usage. The
battery life far exceeds the lifetime of the air tank and thus does not limit
the platform's flight time; however, refilling of the air tank is required
more often. A video demonstration of the platform motion during the initial
trial can be found in [19].
During the initial trial, the platform motion was controlled manually and
operated in open-loop mode. However, realistic demonstrations of various
orbital manoeuvres and synchronised movement of multiple slider platforms
require advanced control laws operating autonomously in a closed-loop manner.
The formulation of advanced model-based control designs requires a
mathematical model of the platform. In view of that, the dynamic model of the
slider platform, including the thruster-based actuation mechanism, is
presented next.
## IV Equation of Motion
In order to describe the equation of motion of the slider over the
friction-less table, two reference frames are considered. An inertial frame of
reference (denoted $X-Y-Z$) is assumed to be attached to a corner point of the
friction-less table. The axes of the inertial frame are directed along the
length, width, and height of the table. Another, moving frame of reference
(denoted $X_{B}-Y_{B}-Z_{B}$) is attached to the centre of gravity (CG) of the
slider and moves with it. The slider can translate over the table surface, as
well as rotate about its $Z_{B}$ axis. The position of the slider (denoted
$x,y$) is described in the inertial frame of reference. Since the various
sensors and actuators are attached to the slider body, it is preferred to
define its velocity and actuation forces in the body frame. Let $v_{x}$ and
$v_{y}$ denote the velocity components of the slider along $X_{B}$ and $Y_{B}$
respectively. Since the motion of the slider is restricted to $2$-D, the
motion along the $z$ component is ignored. The translational kinematic
equations of motion of the slider are:
Figure 7: Reference frames used to describe the dynamics
Figure 8: Thruster placement in a bracket
$\left[\begin{matrix}\dot{x}\\ \dot{y}\end{matrix}\right]=R_{B}^{I}\left[\begin{matrix}v_{x}\\ v_{y}\end{matrix}\right]$ (1)
where $R_{B}^{I}=\left[\begin{matrix}\cos\theta&-\sin\theta\\ \sin\theta&\cos\theta\end{matrix}\right]$ represents the rotation matrix that transforms a vector from the body frame to the inertial frame of reference. Here, $\theta$ represents the orientation of the slider (heading angle between the $X$ and $X_{B}$ axes). Since the slider can rotate only about its $Z_{B}$ axis, its rotational velocity, denoted $r$, is directed along $Z_{B}$. Based on Newton's laws of motion [21], the translational dynamics of the slider are formulated as
$\left[\begin{matrix}\dot{v}_{x}\\ \dot{v}_{y}\end{matrix}\right]=\left[\begin{matrix}rv_{y}+\frac{f_{x}}{m}\\ -rv_{x}+\frac{f_{y}}{m}\end{matrix}\right]$ (2)
where $f_{x},f_{y}$ denote the actuation forces and $m$ represents the mass of
the slider. Note that the translational dynamics incorporate the Coriolis
effects ($rv_{x},rv_{y}$) due to the rotational motion. The rotational motion
of the slider is formulated based on conservation of angular momentum [22] as
follows
$\left[\begin{matrix}\dot{\theta}\\ \dot{r}\end{matrix}\right]=\left[\begin{matrix}r\\ \frac{\tau}{I_{zz}}\end{matrix}\right]$ (3)
where $\tau$ denotes the applied torque and $I_{zz}$ indicates the principal
moment of inertia about the $Z_{B}$ direction. Note that the slider platform
is designed in a balanced manner such that the off-diagonal components of the
moment of inertia matrix are negligible. Combining Eqs. (1)-(3), the dynamical
equation of motion in compact form is represented as
$\left[\begin{matrix}\dot{x}\\ \dot{y}\\ \dot{\theta}\\ \dot{v}_{x}\\ \dot{v}_{y}\\ \dot{r}\end{matrix}\right]=\left[\begin{matrix}v_{x}\cos\theta-v_{y}\sin\theta\\ v_{x}\sin\theta+v_{y}\cos\theta\\ r\\ rv_{y}+\frac{f_{x}}{m}\\ -rv_{x}+\frac{f_{y}}{m}\\ \frac{\tau}{I_{zz}}\end{matrix}\right]$ (4)
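For illustration, Eq. (4) can be propagated numerically. The following is a minimal Python sketch using forward-Euler integration with the mass, inertia, and $0.01$ s propagation time step of Table III; the integrator choice is ours, as the paper does not specify one:

```python
import numpy as np

def slider_dynamics(state, f_x, f_y, tau, m=4.436, I_zz=1.092):
    """Right-hand side of Eq. (4); state = [x, y, theta, v_x, v_y, r]."""
    x, y, theta, v_x, v_y, r = state
    return np.array([
        v_x * np.cos(theta) - v_y * np.sin(theta),  # inertial x rate
        v_x * np.sin(theta) + v_y * np.cos(theta),  # inertial y rate
        r,                                          # heading rate
        r * v_y + f_x / m,                          # body v_x rate (Coriolis + force)
        -r * v_x + f_y / m,                         # body v_y rate
        tau / I_zz,                                 # angular acceleration
    ])

def propagate(state, f_x, f_y, tau, dt=0.01):
    """One forward-Euler step with the 0.01 s propagation step of Table III."""
    return np.asarray(state, dtype=float) + dt * slider_dynamics(state, f_x, f_y, tau)
```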
The slider’s actuation unit is equipped with a total of eight small thrusters
attached to the platform. The control action, i.e. the force and torque
components, is related to the actuation of the thruster units and modeled as
$\left[\begin{matrix}f_{x}\\ f_{y}\\ \tau\end{matrix}\right]=\left[\begin{matrix}\sum\limits_{k=1}^{8}T_{k}\cos\beta_{k}\\ \sum\limits_{k=1}^{8}T_{k}\sin\beta_{k}\\ \sum\limits_{k=1}^{8}\left(T_{k}r_{T_{k}}^{y}\cos\beta_{k}-T_{k}r_{T_{k}}^{x}\sin\beta_{k}\right)\end{matrix}\right]$ (5)
where $T_{k}$ denotes the thrust magnitude, $(r_{T_{k}}^{x},r_{T_{k}}^{y})$
indicates the position of the $k^{th}$ thruster in the $X_{B}-Y_{B}$ plane,
and $\beta_{k}$ represents its orientation with respect to the $X_{B}$ axis.
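A minimal Python sketch of the thrust-to-wrench mapping of Eq. (5), assuming the thruster geometry of Table III(a) and the sign convention of the torque expression as reconstructed above:

```python
import numpy as np

# Thruster positions (m) and orientations (rad) from Table III(a).
R_T = np.array([[195, -140], [140, -195], [-195, -140], [-140, -195],
                [-195, 140], [-140, 195], [195, 140], [140, 195]]) * 1e-3
BETA = np.deg2rad([0, 270, 180, 270, 180, 90, 0, 90])

def wrench(T):
    """Map the eight thrust magnitudes T (N) to (f_x, f_y, tau) via Eq. (5)."""
    T = np.asarray(T, dtype=float)
    f_x = np.sum(T * np.cos(BETA))
    f_y = np.sum(T * np.sin(BETA))
    # Torque about Z_B, following the sign convention of Eq. (5)
    tau = np.sum(T * (R_T[:, 1] * np.cos(BETA) - R_T[:, 0] * np.sin(BETA)))
    return f_x, f_y, tau
```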
TABLE III: Numerical values for system and simulation parameters

(a) Position and orientation of individual thrusters

Thruster | Position $(r_{T_{k}}^{x},r_{T_{k}}^{y})$ (mm) | Orientation $\beta_{k}$ (deg)
---|---|---
$T_{1}$ | $(195,-140)$ | $0$
$T_{2}$ | $(140,-195)$ | $270$
$T_{3}$ | $(-195,-140)$ | $180$
$T_{4}$ | $(-140,-195)$ | $270$
$T_{5}$ | $(-195,140)$ | $180$
$T_{6}$ | $(-140,195)$ | $90$
$T_{7}$ | $(195,140)$ | $0$
$T_{8}$ | $(140,195)$ | $90$

(b) System parameters

Parameter | Value
---|---
Mass ($m$) | $4.436$ kg
Moment of inertia ($I_{zz}$) | $1.092$ kg m$^{2}$
Control command time step | $0.5$ s
System propagation time step | $0.01$ s
Minimum on-time of thruster | $10$ ms
$T_{lb},T_{ub}$ | $0,0.7$ N
## V Model based Open Loop Simulation
In order to validate the mathematical model, an open-loop simulation has been
carried out. The numerical parameters used for the simulation study are
presented in Table III. Initially, the slider platform is considered to be
resting at the corner of the table, i.e. $[x,y,\theta,v_{x},v_{y},r]=0_{6\times 1}$.
The platform model is excited with a two-step ramp-type input command
(combining both positive and negative actuation). The open-loop control
excitation signal is depicted in the fourth subplot of Fig. 11. An identical
control input is applied along the three channels, i.e. $f_{x},f_{y}$ and
$\tau$. The simulation-based validation has been carried out in a two-step
approach. In the first step, the ideal system response is evaluated by
propagating the dynamic model, Eq. (4), in the presence of continuous-time
force and torque commands. Here, the realistic actuator model consisting of
the eight-thruster assembly is ignored. The ideal time responses of the slider
pose and velocities due to the continuous input command are presented in
Figs. 10 and 11 respectively. Here the blue dotted lines represent the ideal
system responses. In the next step, the realistic actuator model with its
physical limitations (such as the maximum thrust bound, minimum on-time, etc.)
is accounted for in the simulation study. The open-loop force and torque
commands, i.e. $f_{x},f_{y},\tau$, need to be realised by actuating the on-off
thruster assembly, i.e. $T_{1}-T_{8}$. Rewriting Eq. (5), the relation between
$f_{x},f_{y},\tau$ and $T_{1}-T_{8}$ can be expressed as
$Ax=b$ (6)

where $b=\left[\begin{array}{c}f_{x}\\ f_{y}\\ \tau\end{array}\right]$, $x=\left[\begin{array}{c}T_{1}\\ T_{2}\\ \vdots\\ T_{8}\end{array}\right]$, $A=\left[\begin{array}{ccc}\cos\beta_{1}&\cdots&\cos\beta_{8}\\ \sin\beta_{1}&\cdots&\sin\beta_{8}\\ \left(r_{T_{1}}^{y}\cos\beta_{1}-r_{T_{1}}^{x}\sin\beta_{1}\right)&\cdots&\left(r_{T_{8}}^{y}\cos\beta_{8}-r_{T_{8}}^{x}\sin\beta_{8}\right)\end{array}\right]$

Note that the matrix $A$ is known and constant, determined by the placement
and orientation of each thruster.
Figure 9: Schematic representation of low level actuation logic
### Thruster Selection Logic
The objective of the thruster selection logic is to determine the set of
thrusters that must be activated in order to achieve the commanded force and
torque inputs. A pseudo-inverse based solution of Eq. (6) is adopted in [23].
That method incorporates a null-space adjustment depending on the sign of the
input command. In this paper, the above problem has been translated into an
optimization framework [24] as follows:

$\min_{x}\ J=x^{T}x$ (7)

$\text{subject to}\quad Ax=b$ (8)

$x_{lb}\leq x\leq x_{ub}$ (9)
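The paper adopts the optimization framework of [24] but does not prescribe a particular solver; as one possible realization, the quadratic program of Eqs. (7)-(9) can be solved numerically, e.g. with SciPy's SLSQP method, as in this sketch:

```python
import numpy as np
from scipy.optimize import minimize

def select_thrusters(A, b, T_lb=0.0, T_ub=0.7):
    """Solve min x^T x subject to A x = b and T_lb <= x <= T_ub (Eqs. 7-9)."""
    n = A.shape[1]
    res = minimize(
        fun=lambda x: x @ x,                 # actuation effort, Eq. (7)
        x0=np.full(n, 0.5 * T_ub),           # interior starting point
        jac=lambda x: 2.0 * x,
        method="SLSQP",
        bounds=[(T_lb, T_ub)] * n,           # thrust limits, Eq. (9)
        constraints=[{"type": "eq", "fun": lambda x: A @ x - b}],  # Eq. (8)
    )
    return res.x  # optimal thrust magnitudes T_k*
```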
Figure 10: Open-loop response of slider pose
Figure 11: Variation of slider velocity profiles and input excitation
The solution of the above optimization problem provides the optimal magnitude
of each thruster ($T_{k}^{*}$) that collectively meets the required force and
torque demand, while minimizing the actuation effort and respecting the
physical limitations of each thruster. Note that the magnitude $T_{k}^{*}$
obtained from the thruster selection logic can be any value between the
$x_{lb}$ and $x_{ub}$ limits. However, in practice these thrusters operate in
an ON-OFF mode, i.e. each either provides full actuation at maximum thrust or
produces no thrust. In order to address this, a pulse width modulation (PWM)
technique is incorporated. A dedicated PWM unit is assigned to each thruster,
as shown in Fig. 9. The PWM block generates a sequence of pulses such that the
average thrust produced by each thruster closely follows the required
magnitude $T_{k}^{*}$.
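As a rough sketch of this stage, the commanded magnitude $T_{k}^{*}$ can be converted into an on-time per PWM period (each PWM block operates at $10$ Hz, as described below); suppressing commands below the $10$ ms minimum on-time of Table III is a simplifying assumption of ours:

```python
def pwm_on_time(T_star, T_max=0.7, pwm_period=0.1, min_on=0.010):
    """On-time (s) per 10 Hz PWM period for a commanded thrust T_star (N).

    The duty cycle is chosen so the average thrust over one period equals
    T_star; commands shorter than the 10 ms minimum on-time are suppressed
    (a simplifying assumption for this sketch).
    """
    on_time = pwm_period * (T_star / T_max)
    return 0.0 if on_time < min_on else min(on_time, pwm_period)
```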
The actuator-based simulation has been implemented as follows. The input
commands ($f_{x},f_{y}$ and $\tau$) are processed through a zero-order-hold
mechanism, and their magnitudes are held constant over each control time step,
i.e. $0.5$ s. During this period, the input signal is processed through the
various intermediate blocks presented in Fig. 9. Each PWM block operates at a
frequency of $10$ Hz. The resulting sequences of output pulses for the
thrusters $T_{1}-T_{8}$ are presented in Figs. 12 and 13. With the pulse
inputs applied as the actuation commands for the thrusters, the nonlinear
dynamic model is propagated, and the corresponding time responses are
presented in Figs. 10 and 11. Here the solid black lines represent the
thrust-actuated system responses. In the first subplot of Fig. 10, the slider
trajectory is presented, with arrowheads indicating its orientation at the
given points. The detailed orientation profile is presented in the fourth
subplot of Fig. 10. It is evident that the thruster-based actuation mechanism
closely reproduces the ideal responses. However, the zoomed view in Fig. 11
shows a non-smooth behaviour, which is due to the pulsating mode of the
actuation input.
Figure 12: Actuation command for thrusters $T_{1}-T_{4}$
Figure 13: Actuation command for thrusters $T_{5}-T_{8}$
## VI Conclusion
In this article, a friction-less floating robotic test-bed facility for
hardware-in-the-loop experimental studies of a planar satellite, indigenously
developed at Luleå University of Technology, is presented. A detailed design
description of the physical platform, with the specification of each
component, was presented. Moreover, a mathematical model describing its
dynamic motion was formulated. This capability can be used to conduct research
on coordinated control of spacecraft teams. A successful initial test has been
conducted in an open-loop, manually operated remote control manner. Multiple
simulation studies have been carried out with the formulated mathematical
model in various scenarios. An optimization-based actuator allocation logic
has been developed and tested in a simulation framework. The on-board
actuation, composed of eight thrusters, is equivalent to the actuator
configuration of a typical spacecraft, further establishing close realistic
equivalence. Additionally, multiple such robotic platforms can be operated
simultaneously over the friction-less table. This state-of-the-art dynamic
hardware-in-the-loop emulation facility will continue to be fruitful for the
advancement of spacecraft proximity operations research.
## References
* [1] A. Toorian, K. Diaz, and S. Lee, “The cubesat approach to space access,” in _2008 IEEE Aerospace Conference_. IEEE, 2008, pp. 1–14.
* [2] M. Sorgenfrei and M. Nehrenz, “Operational considerations for a swarm of cubesat-class spacecraft,” in _SpaceOps 2014 Conference_ , 2014, p. 1679.
* [3] P. Bodin, R. Larsson, F. Nilsson, C. Chasset, R. Noteborn, and M. Nylund, “Prisma: an in-orbit test bed for guidance, navigation, and control experiments,” _Journal of Spacecraft and Rockets_ , vol. 46, no. 3, pp. 615–623, 2009.
* [4] J. L. Schwartz, M. A. Peck, and C. D. Hall, “Historical review of air-bearing spacecraft simulators,” _Journal of Guidance, Control, and Dynamics_ , vol. 26, no. 4, pp. 513–522, 2003.
* [5] V. Pletser, “Short duration microgravity experiments in physical and life sciences during parabolic flights: the first 30 esa campaigns,” _Acta Astronautica_ , vol. 55, no. 10, pp. 829–854, 2004.
* [6] P. Von Kampen, U. Kaczmarczik, and H. J. Rath, “The new drop tower catapult system,” _Acta Astronautica_ , vol. 59, no. 1-5, pp. 278–283, 2006.
* [7] C. Menon, A. Aboudan, S. Cocuzza, A. Bulgarelli, and F. Angrilli, “Free-flying robot tested on parabolic flights: kinematic control,” _Journal of Guidance, Control, and Dynamics_ , vol. 28, no. 4, pp. 623–630, 2005.
* [8] T. Rybus and K. Seweryn, “Planar air-bearing microgravity simulators: review of applications, existing solutions and design parameters,” _Acta Astronautica_ , vol. 120, pp. 239–259, 2016.
* [9] W. Whitacre, “An autonomous underwater vehicle as a spacecraft attitude control simulator,” in _43rd AIAA Aerospace Sciences Meeting and Exhibit_ , 2005, p. 137.
* [10] H. B. Brown and J. M. Dolan, “A novel gravity compensation system for space robots.” ASCE, 1994.
* [11] M. Sabatini, M. Farnocchia, and G. B. Palmerini, “Design and tests of a frictionless 2d platform for studying space navigation and control subsystems,” in _2012 IEEE Aerospace Conference_. IEEE, 2012, pp. 1–12.
* [12] R. Zappulla, J. Virgili-Llop, C. Zagaris, H. Park, and M. Romano, “Dynamic air-bearing hardware-in-the-loop testbed to experimentally evaluate autonomous spacecraft proximity maneuvers,” _Journal of Spacecraft and Rockets_ , vol. 54, no. 4, pp. 825–839, 2017.
* [13] D. Gallardo, R. Bevilacqua, and R. Rasmussen, “Advances on a 6 degrees of freedom testbed for autonomous satellites operations,” in _AIAA Guidance, Navigation, and Control Conference_ , 2011, p. 6591.
* [14] Y. Eun, S.-Y. Park, and G.-N. Kim, “Development of a hardware-in-the-loop testbed to demonstrate multiple spacecraft operations in proximity,” _Acta Astronautica_ , vol. 147, pp. 48–58, 2018.
* [15] K. Saulnier, D. Pérez, R. Huang, D. Gallardo, G. Tilton, and R. Bevilacqua, “A six-degree-of-freedom hardware-in-the-loop simulator for small spacecraft,” _Acta Astronautica_ , vol. 105, no. 2, pp. 444–462, 2014.
* [16] M. Wilde, B. Kaplinger, T. Go, H. Gutierrez, and D. Kirk, “Orion: A simulation environment for spacecraft formation flight, capture, and orbital robotics,” in _2016 IEEE Aerospace Conference_. IEEE, 2016, pp. 1–14.
* [17] “Robotics and AI research group at Luleå University of Technology,” Retrieved from www.ltu.se/robotics, 2021.
* [18] “A repository of design documents related to the friction-less slider platform,” Retrieved from www.github.com/LTU-RAI/The_SliderLow_Friction_Platform, 2021.
* [19] H. Jakub, G. Sumeet, B. Avijit, and N. George, “The slider low-friction platform - Luleå University of Technology,” Retrieved from www.youtube.com/watch?v=nJjMiHpQhhA, 2021.
* [20] R. E. Drumright, P. R. Gruber, and D. E. Henton, “Polylactic acid technology,” _Advanced Materials_ , vol. 12, no. 23, pp. 1841–1846, 2000.
* [21] J. L. Junkins and H. Schaub, _Analytical mechanics of space systems_. American Institute of Aeronautics and Astronautics, 2009.
* [22] M. J. Sidi, _Spacecraft dynamics and control: a practical engineering approach_. Cambridge university press, 1997, vol. 7.
* [23] M. Ghobadi, M. Shafaee, and M. J. Nadoushan, “Reliability approach to optimal thruster configuration design for spacecraft attitude control subsystem,” _Journal of Aerospace Technology and Management_ , vol. 12, 2020.
* [24] F. Martel, “Optimal 6 axis command of a space vehicle with a precomputed thruster selection catalogue table,” in _Proceedings of the 18th International Symposium on Space Flight Dynamics_ , 2004, pp. 11–15.
## VII Appendix

Additional supplementary details of each component used to construct the
slider platform are presented in this section. Each component is identified
with a serial number between $1$ and $55$. The location of each component is
indicated on the blueprint model of the slider platform, shown in Fig. A1.
Descriptions such as the product specification and the material used for
fabrication are listed for each component in Tables A1 and A2. More detailed
design-related supplementary materials for the slider platform are made
available in the GitHub repository [18], including the 3D CAD model, data for
3D printing and laser cutting, the blueprint diagram, and an extensive list of
the various components.
(a) Top view
(b) Side view
Figure A1: Blueprint model of slider platform indicating detailed components

TABLE A1: List of components (serial no. 1-25) indicated in Fig. A1

ITEM | QTY | Product type/Specification | Material | Description
---|---|---|---|---
1 | 1 | Base-6mm | Polycarbonate, Clear | Laser Cut (6mm)
2 | 6 | 130778 QSM-M5-4-100 | | QSM-push-in fitting
3 | 3 | S104001 | | Air bearing
4 | 3 | S8013B11 | Stainless Steel | Air bearing bolt
5 | 3 | S8013H04-NuT | Stainless Steel | Air bearing Hex nut
6 | 3 | S8013H04-ScrewNut | Brass, Soft Yellow | Air bearing housing
7 | 1 | Low-top-platform-mount-front-V4-1st part | PLA | 3D print
8 | 1 | Name-plate | PLA | 3D print
9 | 3 | Leg-washer | PLA | 3D print
10 | 2 | Low-top-platform-mount-v2 | PLA | 3D print
11 | 3 | LM2596 DC-DC StepDown Converter v1 | | Step-down voltage regulator
12 | 8 | 4573 MFH-2-M5 | | MFH-Solenoid valve
13 | 8 | 320410 MSFG-12-OD-(P) | | MSFG-p-Solenoid coil
14 | 2 | Valve-holder-v3 | PLA | 3D print
15 | 6 | 153333 QSML-M5-4 | | QSML-Push-in L-fitting
16 | 8 | Festo-connector-cover | PLA | 3D print
17 | 8 | Nozzle-SLA-base | Resin "Grey pro" | 3D print (SLA)
18 | 8 | 8030314 NPFC-R-G18-M5-FM | | NPFC-R-Threaded fittings
19 | 8 | Nozzle-bumper-V2 | TPU 95A | 3D print
20 | 4 | 153374 QSMY-6-4 | | QSMY-Push-in Y-connector
21 | 5 | 153129 QST-6 | | QST-Push-in T connector
22 | 4 | T-piece-clamp | PLA | 3D print
23 | 4 | Relay-mount | PLA | 3D print
24 | 4 | Grove 2 Channel SPDT Relay | | Relay
25 | 2 | 153484 QH-QS-6 | | QH-QS-ball valve
TABLE A2: List of components (serial no. 26-55) indicated in Fig. A1

ITEM | QTY | Product type/Specification | Material | Description
---|---|---|---|---
26 | 1 | Valve holder-V3 | PLA |
27 | 1 | Zippy compact 1400 | | Battery
28 | 1 | Battery-case-zippy-1400i-V2 | PLA | 3D print
29 | 1 | Regulator-mount-v2 | PLA | 3D print
30 | 1 | 8086628 MS2-LR-M5-D6-AR-BAR-B | | Regulator w filter Air bearings
31 | 2 | 5003640 MS2-LR/LFR-B | | MS2-WR (p)-Mounting bracket
32 | 1 | 8086644 MS2-LFR-QS6-D6-AR-BAR-C-M-B | | Regulator Thrusters
33 | 1 | Regulator-mount-long-v3 | PLA | PLA 3D print
34 | 1 | T-piece-holder | PLA | 3D print
35 | 2 | 130618 QSW-6HL | | QSW-HL-Push-in connector
36 | 1 | Splice-sla | Resin "Grey pro" | 3D print (SLA)
37 | 1 | 186096 QS-G1/8-6 | | QS_G-Push-in fitting
38 | 1 | Bottle mount V3-extended | PLA | 3D print
39 | 1 | Bottle mount V3 | PLA | 3D print
40 | 1 | Arduino shield | | Arduino shield
41 | 1 | Arduino Nano 33 IoT | | Arduino
42 | 1 | Arduino-w-shiled-holder | Resin "Black" | 3D print (SLA)
43 | 1 | Polarstar-regulator | | Airsoft regulator
44 | 1 | paintball air tank-DYE | | Paintball tank 1.1 L
45 | 1 | top ring-6MM-V2 | PLA | 3D print
46 | 1 | UP_BOARD_&_HEAT_SINK_by_JMJV | | Up-board computer
47 | 1 | Futaba-reciever | PLA | RC receiver
48 | 1 | Futaba-controller-hodler | PLA | 3D print
49 | 1 | Valve holder-V4 | PLA | 3D print
50 | 1 | holder_ps | PLA | 3D print
51 | 1 | ps-camera | | Playstation camera
52 | 1 | Gopro-mount | PLA | 3D print
53 | 2 | Valve-holder-v3-mirror | PLA | 3D print
54 | 1 | Spacer | PLA | 3D print
55 | 1 | Low-top-platform-mount-front-V4-2nd part | PLA | 3D print
Further author information: (Send correspondence to E.R.K.)
E-mail: <EMAIL_ADDRESS>
# Design and implementation of a noise temperature measurement system for the
Hydrogen Intensity and Real-time Analysis eXperiment (HIRAX)
Emily R. Kuhn Department of Physics, Yale University, New Haven, CT, USA
Benjamin R. B. Saliwanchik Department of Physics, Yale University, New Haven,
CT, USA Department of Physics, Brookhaven National Laboratory, Upton, NY, USA
Maile Harris Department of Physics, Yale University, New Haven, CT, USA
Moumita Aich School of Mathematics, Statistics & Computer Science, University
of KwaZulu-Natal, Westville Campus, Durban 4041, South Africa Kevin Bandura
Department of Computer Science and Electrical Engineering, and Center for
Gravitational Waves and Cosmology, West Virginia University, Morgantown, WV,
USA Tzu-Ching Chang Jet Propulsion Laboratory, California Institute of
Technology, Pasadena, CA, USA H. Cynthia Chiang School of Mathematics,
Statistics & Computer Science, University of KwaZulu-Natal, Westville Campus,
Durban 4041, South Africa Department of Physics, McGill University, Montréal,
QC, Canada Devin Crichton School of Mathematics, Statistics & Computer
Science, University of KwaZulu-Natal, Westville Campus, Durban 4041, South
Africa Institute for Particle Physics and Astrophysics, ETH Zürich, Zürich,
Switzerland Aaron Ewall-Wice Department of Astronomy and Physics, UC
Berkeley, CA, USA Austin A. Gumba School of Mathematics, Statistics &
Computer Science, University of KwaZulu-Natal, Westville Campus, Durban 4041,
South Africa N. Gupta Inter-University Centre for Astronomy and
Astrophysics, Post Bag 4, Ganeshkhind, Pune 411 007, India Kabelo Calvin
Kesebonye School of Mathematics, Statistics & Computer Science, University of
KwaZulu-Natal, Westville Campus, Durban 4041, South Africa Jean-Paul Kneib
Institute of Physics, Laboratory of Astrophysics, Ecole Polytechnique Fédérale
de Lausanne (EPFL), Observatoire de Sauverny, 1290 Versoix, Switzerland
Martin Kunz Département de Physique Théorique and Center for Astroparticle
Physics, University of Geneva Kavilan Moodley School of Mathematics,
Statistics & Computer Science, University of KwaZulu-Natal, Westville Campus,
Durban 4041, South Africa Astrophysics Research Centre, University of KwaZulu-
Natal, Westville Campus, Durban 4041, South Africa Laura B. Newburgh
Department of Physics, Yale University, New Haven, CT, USA Viraj Nistane
Département de Physique Théorique and Center for Astroparticle Physics,
University of Geneva Warren Naidoo School of Mathematics, Statistics &
Computer Science, University of KwaZulu-Natal, Westville Campus, Durban 4041,
South Africa Deniz Ölçek Department of Physics, McGill University, Montréal,
QC, Canada Jeffrey B. Peterson Department of Physics, Carnegie Mellon
University, Pittsburgh, PA, USA Alexandre Refregier Institute for Particle
Physics and Astrophysics, ETH Zürich, Zürich, Switzerland Jonathan L. Sievers
Department of Physics, McGill University, Montréal, QC, Canada School of
Chemistry and Physics, University of KwaZulu-Natal, Durban, South Africa
Corrie Ungerer ArioGenix (Pty) Ltd, Pretoria, South Africa Alireza Vafaei
Sadr Département de Physique Théorique and Center for Astroparticle Physics,
University of Geneva Jacques van Dyk Pronex Engineering Management
Consultants CC, Pretoria, South Africa Amanda Weltman Department of
Mathematics and Applied Mathematics, University of Cape Town, South Africa
Dallas Wulf Department of Physics, McGill University, Montréal, QC, Canada
###### Abstract
This paper describes the design, implementation, and verification of a test-
bed for determining the noise temperature of radio antennas operating between
400-800 MHz. The requirements for this test-bed were driven by the HIRAX
experiment, which uses antennas with embedded amplification, making system
noise characterization difficult in the laboratory. The test-bed consists of
two large cylindrical cavities, each containing radio-frequency (RF) absorber
held at different temperatures (300 K and 77 K), allowing a measurement of
system noise temperature through the well-known ‘Y-factor’ method. The
apparatus has been constructed at Yale, and over the course of the past year
has undergone detailed verification measurements. To date, three preliminary
noise temperature measurement sets have been conducted using the system,
putting us on track to make the first noise temperature measurements of the
HIRAX feed and perform the first analysis of feed repeatability.
###### keywords:
Radio instrumentation, 21cm cosmology, antenna characterization
## 1 INTRODUCTION
The Hydrogen Intensity and Real-time Analysis eXperiment (HIRAX) is a 21 cm
neutral hydrogen intensity mapping experiment to be deployed in the Karoo
Desert in South Africa [1]. It will consist of 1024 six-meter parabolic dishes
[2], and will map much of the southern sky over the course of four years.
HIRAX is designed to improve constraints on the dark energy equation of state
through measurements of large scale structure at high redshift. It will target
a measurement of the $100h^{-1}$Mpc Baryon Acoustic Oscillation scale through
21 cm emission of neutral hydrogen contained in abundance in galaxies. The
HIRAX redshift range, 0.8 $<z<2.5$, corresponds to the radio band of 400-800
MHz and will be measured in 1024 frequency bins. In addition to 21cm cosmology
and hydrogen absorber science, HIRAX will discover and monitor transients such
as fast radio bursts (FRBs) and pulsars, as is currently done with CHIME in
the Northern Hemisphere [3, 4, 5, 6, 7, 8]. HIRAX’s southern location will
allow for a variety of cross-correlation measurements with other cosmology
surveys such as ACTPol, DES, and the Vera Rubin Observatory. Currently, an
8-element prototype array has been deployed at Hartebeesthoek Radio Astronomy
Observatory (HartRAO), and a 256-element array is being developed at the final
HIRAX site (Figure 1).
Figure 1: The HIRAX prototype array at Hartebeesthoek Radio Astronomy
Observatory, South Africa (left) and a rendering of the final 1024 dish
configuration with the current prototype dish model in the Karoo Desert, South
Africa (right).
The HIRAX signal chain is comprised of both custom and commercial parts[1].
Each of the 1024 dishes will have a dual-polarization antenna feed with a
first-stage differential low-noise amplifier (LNA, Avago MGA–16116) embedded
directly into the antenna balun for noise reduction (Figure 2). This design
choice has been adopted to keep the total system noise to less than 50 K,
allowing a sensitive measurement of the $\sim$100$\mu$K cosmological 21 cm
signal[9]. The signal is transformed to an optical signal using an RF-over-
Fiber (RFoF) module[10], and then carried to the correlator building, where it
is turned back to RF. The analog signal is filtered and further amplified,
before it is channelized and digitized by an ICE board[11], and then
correlated.
The HIRAX feed is a dual polarization cloverleaf dipole with low loss ($<$0.15
dB) and small reflectivity ($<-15$ dB). It consists of FR-4 dielectric (PCB)
with a metalized layer, a PCB balun and support board. A ring choke is used to
circularize the beam and decrease crosstalk and ground spillover. The
cloverleaf design is based on that of the CHIME antenna [12], with the main
differences being HIRAX uses FR-4 instead of Teflon-based PCB (to reduce cost)
and sums polarization along a different axis.
Parameter | Value
---|---
Frequency Range | 400–800 MHz
Frequency Resolution | 390 kHz, 1024 channels
Dish size | 6 m diameter, f/D = 0.23
Field of View | 15–56 deg$^{2}$
System Temperature | $\sim$50 K
Antenna Noise Temperature | 20–30 K

Table 1: HIRAX instrument parameters.
The focus of this work is on assessing the HIRAX feed contribution to system
noise. To measure cosmological emission efficiently we require the total noise
to be kept below 50 K, of which at most 30 K can come from the feed itself. The
feed design is optimized to reduce system noise by removing particular sources
of loss in the analog chain, primarily by moving the first stage amplification
into the antenna balun. The choice to embed amplification directly into the
balun stem makes noise temperature characterization challenging, as the noise
temperature, gain, and stability of the amplifier cannot be measured directly
with a noise figure meter or related laboratory equipment. To measure the
noise temperature of the feed and amplifier, we must inject a broad-band
signal into the HIRAX antenna. In this paper, we describe a test-bed built at
Yale University that allows us to measure signals from radio absorber at two
known temperatures, and infer the noise temperature of the antenna and
amplifier via the Y-factor method (see e.g. Microwave Engineering [13]).
Figure 2: The HIRAX antenna. The HIRAX antenna feed has the first stage
amplification integrated into the antenna structure. This reduces feed noise
and makes it more lightweight, but in turn prevents the noise temperature from
being directly measured with a noise figure meter.
The Y-factor method allows one to determine the noise temperature of an
antenna by comparing the output power from two loads at different
temperatures. This method can be derived from the observation that an antenna
in a cavity in thermal equilibrium at temperature T behaves like a resistor of
temperature T, where the antenna internal noise can be likened to that of
Johnson noise in the resistor[14]. It is most commonly expressed through the
following set of equations[13]:
$Y=\frac{P_{\text{hot}}}{P_{\text{cold}}}$ (1)
$T_{\text{noise}}=\frac{T_{\text{hot}}-YT_{\text{cold}}}{Y-1},$ (2)
where $P_{\text{hot}}$ is the measured power from a “hot” load,
$P_{\text{cold}}$ is the measured power from a “cold” load, and
$T_{\text{hot}}$, $T_{\text{cold}}$ are the corresponding load temperatures.
$Y$ is referred to as the “Y Factor”. This measurement requires linearity in
the gain of active components and known sources of input power.
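For concreteness, Eqs. (1)-(2) amount to the following few lines of Python (a minimal sketch; the function name is ours):

```python
import numpy as np

def y_factor_noise_temp(P_hot, P_cold, T_hot=300.0, T_cold=77.0):
    """Noise temperature from measured powers via Eqs. (1)-(2).

    P_hot and P_cold are linear powers (e.g. watts); convert from dBm
    first if needed, P_W = 1e-3 * 10**(P_dBm / 10).
    """
    Y = np.asarray(P_hot, dtype=float) / np.asarray(P_cold, dtype=float)
    return (T_hot - Y * T_cold) / (Y - 1.0)
```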
This type of measurement is commonly taken using the sky as a cold load, but
that is impractical for the HIRAX feed, which has a wide beam that is still
being characterized, and is impractical at our location due to the radio
frequency interference (RFI) rich environment around universities. We instead
designed and constructed an experimental system with hot/cold loads of
300 K/77 K (corresponding to room temperature and liquid nitrogen temperature)
to be used in a laboratory setting. We aim to determine the antenna noise
temperature to within 5 K, or 10% of the expected system noise.
## 2 Experiment Design
The noise temperature measurement system has been designed as a pair of
reflective closed cylindrical cavities with radio-frequency (RF) absorber in
the bottom. One cavity is kept at ambient temperatures such that the absorber
emits at $T_{\rm hot}$ = 300 K, and designated the ‘warm’ load. The second
cavity is filled with liquid nitrogen such that the absorber emits at $T_{\rm
cold}$ = 77 K, and is designated the ‘cold’ load. The feed is attached to the
lid of each cavity such that its beam terminates in the absorber. There were
several constraints on the design, including:
* •
Cost: We use commercially available materials with minimal labor to assemble.
* •
Size: We required that the cavities are smaller than $\sim$1.5 m, to stay
within a reasonable footprint in the lab space, and ensure standard material
stock is suitable for construction.
* •
Cavity material: We use steel because it is easily weldable (important for
containing liquid nitrogen as a safeguard) and inexpensive relative to other
materials.
* •
Absorber: We use commercially available AEP-18 series pyramidal foam
(https://www.mvg-world.com/en/products/absorbers/standard-absorbers/pyramidal-absorbers-aep-series)
for the RF absorber, which has $\sim$30 dB absorption in our band.
* •
Shape: We optimize the cavity dimensions to minimize resonances, and to be
structurally stable and capable of containing 600 L of liquid nitrogen.
* •
Insulation: We line all surfaces of the chamber with insulation to reduce
liquid nitrogen boil-off and limit the accumulation of water vapor. We also
add a foam lid to encourage a nitrogen vapor layer and isolate the feed from
the cold vapor.
* •
Reflectivity: We coat the inside of the cavities with aluminum tape to
increase the reflectivity, primarily to improve the hold-time of the liquid
nitrogen in the cavity.
* •
Indistinguishability: The two cavities must be sufficiently similar, or their
differences sufficiently repeatable and characterized, to keep systematic
errors less than 5 K. Cavity differences will be quantified in more detail in
later sections.
As described in this section, the design of the test-bed was optimized using
CST simulations prior to construction, and materials were chosen to allow one
of the cavities to hold liquid nitrogen.
### 2.1 Simulations
We optimized the design of the cavities using CST Microwave Studio
(https://www.3ds.com/products-services/simulia/products/cst-studio-suite/solvers/).
CST was a natural choice for modeling, as the HIRAX and CHIME
collaborations had previously used it to construct feed models, which were
leveraged for this work. We began by simulating the HIRAX feed in free space
and monitoring the $S_{11}$ (return loss). Because the cavity should be an
absorptive blackbody, the primary figure of merit was to match the cavity
reflections, parameterized by $S_{11}$, to those of the feed in free space
(such that dominant reflections are internal to the feed itself). The
simulations were performed and optimized with a passive HIRAX feed attached to
the lid of the cavity, with the lid functioning as a ground plane. HIRAX plans
to employ a circular choke to circularize the beam, which was not included in
these simulations.
The company from which we obtained RF absorber did not supply relevant
material parameters, such as dielectric constant ($\epsilon$) and loss tangent
($\tan\delta$), so we created a user-defined absorptive material through the
CST optimization function with guidance from literature[15]. This optimization
was performed by placing a slab of RF absorber in front of the feed in
software, and sweeping material parameters until $S_{11}$ was minimized. We
ultimately simulated the RF absorber as 18 in tall pyramidal cones (numbering
16 to a 2 ft $\times$ 2 ft block, to match dimensions of those available
commercially) with density 159 kg/m3, $\epsilon$=2.7 and $\tan{\delta}$=1.
Once the RF absorber material was modeled, we set it within several steel
cavities of various shapes to determine which to pursue for the design. This
determination was made by comparing the feed $S_{11}$ profile of different
cavity options with the free space profile. We ultimately settled on a
cylinder, as it matched free space as well as the other options, is known to
be the most mechanically robust shape, and is simplest to construct (Figure
3). We then optimized cavity dimensions, finding a diameter of 129.5 cm and a
height of 70.8 cm.
Figure 3: Preliminary $S_{11}$ simulation results for the unamplified HIRAX
feed in various cavity shapes before optimization (orange), plotted against
simulation results for free space (blue). All cavities are the same height,
use the same RF absorber material parameters, are made from aluminum, and
share similar base dimensions (i.e. the diameter of the cylinder and cone
bases and the length of the cube sides are equal in length). From these early
simulations, we find little difference in RF performance between leading
contenders, and settled on a cylinder to optimize for the final design. Aside
from RF performance, cylinders are mechanically robust, and simple to
construct.
We similarly estimated the reflection off of the liquid nitrogen surface by
specifying $\epsilon=1.44$ and $\tan\delta=5\times 10^{-5}$, which are
measured parameters for 18-26 GHz[16, 17] (parameters were not available in
our band). We determined reflections to be below -14 dB. As described in more
detail in Section 4, the nitrogen is sufficiently transparent to continue with
the design, but may have to be estimated and accounted for during later
analysis.
Finally, we investigated tolerances on cavity dimensions by sweeping through a
variety of length parameters centered about the optimal dimensions and solving
for $S_{11}$. The resulting deviations in $S_{11}$ would represent possible
differences between the two cavities, which we required to be less than 1 dB.
The results indicated that the cavity dimensions must be constructed with
tolerance of $\sim$1 cm. We followed a similar procedure to set a tolerance
for the insulation thickness, accomplished by assuming total RF transparency
of insulation, and leaving an air gap of corresponding volume in simulation.
This procedure yielded an insulating layer of thickness up to 80 mm.
The simulated $S_{11}$ for the final, optimized cavity design is shown in
Figure 4. The resulting $S_{11}$ is similar to the free-space reflection, and
below the -10 dB reflection requirement for HIRAX. Also shown is the
simulation result for reflection including the liquid nitrogen surface, which
modestly changes the $S_{11}$ but remains well below -10 dB within the 400-800
MHz band.
Figure 4: $S_{11}$ simulation results for the unamplified HIRAX feed. We
compare free space results to results from the Y-factor measurement system hot
and cold loads (the cold load contains a simulated liquid nitrogen surface),
finding all profiles to be similar to well below -10dB. Their differences are
computed to give a sub-1K error in noise temperature from 400-750 MHz.
### 2.2 System Construction
From the simulation work described above, we determined that cylinders, 129.5
$\pm 1$ cm in diameter and 70.8 $\pm 1$ cm in height, were the optimal cavity
shape for the noise temperature test-bed loads. The cylinders were constructed
in 2018 by Welding Works in Madison, CT (http://weldingworks.com/), formed of
1/32 in steel with welded seams and circular removable lids, which attach to
the rim of the cavities with 24 threaded screws along the circumference (see
Figure 5 for full schematic). The dimensions of the cylinders were measured
upon delivery, and found to be within specifications. Each lid was reinforced
with two horizontal L-bars to prevent sagging. A square hole 30 cm$\times$30
cm was removed from the lid, so that a feed mounted to a 32.4 cm$\times$32.4
cm plate could be easily moved from one load to the other during measurements.
The outside of the steel cylinders was painted white to prevent rusting and to
decrease the radiative power load during a nitrogen fill, thus reducing the
nitrogen boil-off rate. Aluminium tape was applied to the cylinder interiors
to decrease the emissivity further and reduce the nitrogen boil-off by a
factor of $\sim$3.5, for an expected boil-off rate of 8.6 L/hour.
This design required constructing an insulating cylinder capable of holding
$>$550 L of liquid nitrogen, to be inserted into one of the cavities. This
insert must be radio transparent in our frequency range, closed-cell or
otherwise leak-proof, and thermally insulating. We constructed small-scale
test inserts of a variety of materials, including heat-sealed HD30
Zotefoam (https://www.zotefoams.com/) and conformable polyurethane spray
insulations. Ultimately, a layered approach was found to meet the requirements
above. The insulation consists of three layers: a fiberglass inner layer
bonded to 0.6 cm thick foamular
(https://www.homedepot.com/p/Owens-Corning-FOAMULAR-1-4-in-x-4-ft-x-50-ft-R-1-Fanfold-Rigid-Foam-Board-Insulation-Sheathing-21UM/100320301),
a middle layer of 5 cm thick cryogenically-rated polyurethane foam
(https://www.rhhfoamsystems.com/products/all-products/high-density-spray-foam/#eluid099ab007),
and an outer layer of 0.6 cm
foamular. The fiberglass layer was constructed by placing strips of fiberglass
cloth and epoxy over a mold that was then set in a vacuum-bag to remove air
pockets. This fiberglass layer was placed in the “cold” cylinder, the cylinder
walls were lined with foamular, and the cryogenically-rated Polyurethane spray
foam was used to fill in the interior. The final insert is 49.5 cm deep, with
an internal diameter of 116.8 cm and thickness of 5.1 cm. A zotefoam lid is
secured to the insulation insert to help thermally isolate the HIRAX feed from
the cold nitrogen vapor, and keep the vapor shield intact during measurements.
The warm cylinder is not required to hold liquid nitrogen, so a simpler
Zotefoam insert of the same internal diameter as the cold insulation was
constructed to hold the RF absorber, matching the absorber placement between
the two loads. As described in Section 3, the two cylinders were measured to
have $S_{11}$ and radiated power properties that are sufficiently similar to
allow a measurement within the 5 K specified error range.
For the blackbody, we used commercial 18 in RF absorber. It is packaged in 24
in$\times$24 in blocks, with cones 18 in high. We cut the RF absorber into the
insert curvature using a hand-constructed hot wire foam cutter, and used a
laser cut jig of appropriate curvature to guide the cutting process.
Figure 5: (Top:) Schematic of the noise temperature measurement system design.
(Bottom:) Labeled photograph of the hot/cold loads for noise temperature
tests, taken during a liquid nitrogen fill. The loads are covered and sealed
with a steel lid for measurements.
A schematic of the two cylinders is shown in Figure 5 (upper panel), and a
photo of the constructed cylinders is shown in Figure 5 (lower panel). We have
filled the cold cylinder on five occasions with $\sim$550 L of liquid nitrogen
for a total of $\sim$4 weeks of measurement time, and it has maintained its
structural integrity. It takes three 230 L nitrogen dewars to fill the cavity
insert, and one dewar per day for refilling. The boil-off rate is $\sim$5.5
L/hour, with slight variations depending on seasonal climate conditions.
## 3 System Verification Measurements
A variety of measurements were performed to characterize experimental
uncertainties and verify that the cavities met specifications. As described
below, these measurements include verifying the radio-frequency transparency
of the foam materials, quantifying the degree of similarity between the two
cavities, and assessing the RF spectrum of the absorbers in each cavity. The
verification measurements were primarily performed with an unamplified HIRAX
feed to allow $S_{11}$ (return loss) measurements, which are not possible with
the amplified HIRAX feed.
### 3.1 RF transparency tests of insulation materials
As described in Section 2.2, several layers of insulation were added into the
experimental system to successfully contain the 550 L of liquid nitrogen
required for this experiment. These layers must be RF transparent, as any
absorption in the insulating foam will add an unquantified warm temperature
component to the cold temperature measurement and bias the calculated noise
temperature. To assess material transparency and inform the final design, we
took $S_{11}$ measurements (a R&S FSH4 multi-purpose analyzer was used for
these measurements,
www.rohde-schwarz.com/us/product/fsh-productstartpage_63493-8180.html) of the
cavity at different stages of
insulation construction, which occurred over the span of several months in
2019. These measurements were performed without RF absorber such that the
cavity should be highly reflective, forming a sensitive measurement of
absorption in the insulating foam.
The $S_{11}$ measurements used to verify insulation transparency are shown in
Figure 6. As noted, there is no RF absorber present in the cavity for these
measurements, so the $S_{11}$ value should be near 0.0 dB (indicating a fully
reflective system). The median value across the spectrum is 0.3 dB, which can
be attributed to losses in the feed. The lines marked ‘empty’ are measurements
of the cavity devoid of any insulating foam. There are strong negative
features in the $S_{11}$ spectrum which are not present when RF absorber is
added (see Figure 7), so we attribute these features to destructive
interference from standing waves at a variety of characteristic distances
inside of the cavity. The foam inserts were constructed over the course of six
months, in order of: the cryogenic polyurethane foam (2019/03), the fiberglass
insert (2019/06), and the full insulation layer (2019/08). In addition, the
empty cavity was measured twice (2019/03 and 2019/06), with the $S_{11}$
spectra changing by up to 0.2 dB. The typical change between the empty
cylinder and the full insulation is $<$0.1 dB, within the range of
fluctuations between the two empty measurements. This indicates that the
insulating foam layer has no discernible absorption within measurement errors.
Figure 6: (Top) Return loss ($S_{11}$) measurements of the unamplified HIRAX
feed in a reflective cavity as various insulation components are added in the
construction. (Bottom) Return loss measurements of the cavity with full
insulation compared with the range of empty measurements. The empty cavity was
measured twice (2019/03 and 2019/06), with the $S_{11}$ spectra changing by up
to 0.2 dB. The insulation components are shown to be RF transparent to within
this range of fluctuations for most of the band. Not shown is the addition of
an aluminum tape layer, which also has a negligible impact.

Figure 7: $S_{11}$
measurements of the unamplified HIRAX feed in the warm cavities with RF
absorber installed. We compare measurement results from the two cylinders that
make up the Y-factor measurement system (both at 300 K), finding they are
identical to sub-dB level, and share the same overall level as simulation
results, though resonance peak locations differ. These return loss
measurements match with published passive feed measurements of CHIME, which
shares the HIRAX antenna design[12]. More details are described in the text.
### 3.2 Return loss with RF absorber installed
For the second set of verification measurements, we performed a series of
$S_{11}$ measurements of the full system (including RF absorber) at ambient
temperature to verify: (1) that the system does indeed mimic free space and
(2) that the two cavities are sufficiently similar to one another, within the
-10 dB design specification for the antenna. These $S_{11}$ measurements were
taken over several months using a passive HIRAX feed.
The $S_{11}$ measured for a single polarization in both cylinders is shown in
Figure 7. Also shown is the simulated feed in free-space for comparison,
reproduced from Figure 4. The $S_{11}$ profiles evolved slightly in time, but
consistently remained at or below -12dB across the full band, indicating a
well-matched antenna viewing a system simulating free space. These
measurements matched simulations in overall $S_{11}$ level, but had different
resonance locations. Although the measurements do not fully agree with the
free-space simulations, they appear consistent with measured free-space values
shown for a similar feed built for the CHIME experiment[12], and so we
attribute differences in resonance locations to differences between the
modelled feed in CST and the as-built feed. Despite their differences, these
measurements indicate that the cavities are similar to within specifications,
with cavity differences accounting for sub-Kelvin uncertainty across our band.
This will be explored in Section 4.
### 3.3 Blackbody spectrum comparison
We can perform an additional check to verify that the RF absorber in the
cavity is indeed functioning as a blackbody. The RF absorber should emit
thermal radiation, and hence have a blackbody thermal spectrum. This aspect of
system performance is verified by installing the unamplified feed in one of
the cavities, amplifying the resulting signal with commercial amplifiers of
known gain and noise temperature, and measuring the resulting spectrum with a
spectrum analyzer (measurements are made with an R&S FSH4 multi-purpose
analyzer). For a $\sim$300 K absorber in the frequency range 400-800 MHz, the
low-frequency approximation to the blackbody spectrum is valid, providing an
estimated power of:
$P=GkT\Delta\nu$ (3)
where $G$ is the device gain, $k$ is Boltzmann’s constant, $T$ is the sum of
the physical temperature and device noise temperature, and $\Delta\nu$ is the
bandwidth in Hz. We assume a temperature T containing contributions from the
temperature of the RF absorber (300 K), the estimated feed loss (20 K), and
the noise temperature of the first amplifier in the amplifier chain
(determined by the data sheets).
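As a minimal sketch, the expected power of Eq. (3) can be evaluated in dBm as follows; the LNA noise temperature and bandwidth values in the signature are illustrative placeholders, not data-sheet numbers:

```python
import numpy as np

k_B = 1.380649e-23  # Boltzmann constant, J/K

def expected_power_dbm(G_dB, T_load=300.0, T_loss=20.0, T_lna=40.0, bw_hz=3e6):
    """Expected power in dBm from Eq. (3), P = G k T delta-nu.

    T sums the absorber temperature, the estimated feed loss, and the
    first-stage LNA noise temperature. The 40 K LNA value and 3 MHz
    bandwidth are illustrative placeholders, not data-sheet numbers.
    """
    G = 10.0 ** (np.asarray(G_dB, dtype=float) / 10.0)
    P_watts = G * k_B * (T_load + T_loss + T_lna) * bw_hz
    return 10.0 * np.log10(P_watts / 1e-3)
```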
We compared this theoretical power with the measured power in dBm from the
spectrum analyzer, using a variety of amplification chains. The amplifiers
used in this measurement are commercially obtained from Mini-Circuits
999www.minicircuits.com/, and have available data sheets reporting gain and
noise temperature. These amplifiers were chosen because they had good gain in
the HIRAX band and have high enough compression points to ensure the
measurements would be linear with input power. Using multiple amplification
stages brought the signal well above the noise floor of the spectrum analyzer,
and 3 dB attenuators were placed between the amplifiers to reduce reflections
and oscillations in the amplified signal.
The results comparing the inferred power to the expected blackbody spectrum
are shown in Figure 8. The different amplifier chains agree with the expected
spectrum to within 2 dBm, which is consistent with contributions we have not
taken into account, such as estimated losses from the cable ($<$1 dB) and
systematic errors in the absolute power measurement from the spectrum analyzer
($<$1 dBm, typ. $<$0.5 dBm). The additional features in the spectrum between
725-775 MHz occur near communication bands, which is evidence that we have not
eliminated RFI in these measurements using the closed cylinder as the only RFI
protection (further RFI mitigation will be employed in future measurements).
Figure 8: Theoretical (blackbody) spectrum, measured spectrum, and residuals
for the passive HIRAX feed + four commercial amplifier chains. The top plot
shows a comparison between the expected power of the amplifier chains at 300K
(dashed lines) and the corresponding spectrum measurements in the experimental
system (solid lines). The expected power is computed from P $=G_{\rm
chain}(\nu)k_{B}(T_{\rm load}+T_{\rm loss}+T_{\rm LNA,1})\Delta\nu$, where
$T_{\rm load}$ is the thermal load temperature, $T_{\rm loss}$ is the assumed
feed noise temperature from material loss, and $T_{\rm LNA,1}$ is the noise
temperature of the first LNA in the chain. The bottom plot shows the
residuals, revealing only slight differences between experimental and
theoretical values (neglecting the features in the vicinity of 750 MHz) that
could be accounted for by cable loss, gain uncertainty, and other systematics,
verifying that we measure a blackbody. The chains are comprised of the
following Mini-circuits amplifiers: chain 1 = ZFL-1000H+ $\to$ ZX60-P103LN+
$\to$ ZX60-P103LN+; chain 2 = ZX60-112LN+ $\to$ ZX60-P103LN+ $\to$
ZX60-P103LN+; chain 3 = ZX60-P103LN+ $\to$ ZX60-P103LN+ $\to$ ZX60-P103LN+;
chain 4 = ZX60-P103LN+ $\to$ ZX60-P105LN+ $\to$ ZX60-P103LN+. All amplifiers
have frequency dependent gain, and all chains include 9dB attenuation.
## 4 Systematics and Uncertainties
In this section we describe and quantify the contributions of statistical and
systematic errors to the noise temperature measurement error budget.
### 4.1 Statistical uncertainties
The noise temperature measurements are limited in integration time, as the
antennas under test can cool while attached to the cold cylinder, thereby
changing the noise temperature of the LNAs (LNA noise is temperature dependent
and lower at lower temperatures). For initial data taking, we averaged 50
samples with sweep time 0.02 s for a total integration time of 1 s. This
averaging takes $<$10 s, which is more than adequate to keep the feed from
cooling (temperature effects still to be characterized). For an integration
over 50 samples in 3 MHz frequency bins, statistical fluctuations in the
individual frequency bins are limited to $\pm 0.1$ dBm. These fluctuations can
be extrapolated to an error in noise temperature by standard error propagation
for the Y-factor linear calculation, summing errors in measured $P_{\rm hot}$
and $P_{\rm cold}$ in quadrature, to obtain $\pm$ 4.82 K for a 30 K assumed
noise temperature. This noise can be smoothed through further binning, and
fluctuations in noise temperature can be integrated down between successive
noise temperature measurements to further reduce noise. We also observe slight
fluctuations in the average spectrum level to within $\pm 0.01$ dBm over the
course of a one-hour measurement, corresponding to a $\pm$ 0.48 K uncertainty.
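A minimal sketch of this propagation, assuming first-order error propagation through Eq. (2) with the hot- and cold-load power errors summed in quadrature (it reproduces an error of roughly the quoted magnitude):

```python
import numpy as np

def noise_temp_error(sigma_dbm=0.1, T_hot=300.0, T_cold=77.0, T_noise=30.0):
    """Propagate a +/- sigma_dbm power fluctuation into noise-temperature error.

    First-order propagation through the Y-factor relation, Eq. (2), with the
    hot- and cold-load contributions summed in quadrature.
    """
    Y = (T_hot + T_noise) / (T_cold + T_noise)   # expected Y factor
    frac = 10.0 ** (sigma_dbm / 10.0) - 1.0      # fractional power error
    sigma_Y = Y * frac * np.sqrt(2.0)            # both loads, in quadrature
    dT_dY = (T_cold - T_hot) / (Y - 1.0) ** 2    # derivative of Eq. (2)
    return abs(dT_dY) * sigma_Y

print(f"{noise_temp_error():.1f} K")  # ~5 K, of the same order as the quoted 4.82 K
```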
### 4.2 Contribution of reflections to uncertainties
A variety of systematics must be considered for our measurements, including
reflections from various system components such as the liquid nitrogen and
zotefoam insulation lid. We can use passive feed $S_{11}$ measurements to
bound how cavity differences will impact the noise temperature measurements.
The measured $S_{11}$ contains three contributions: (i) signal reflected
within the feed structure, (ii) losses in the feed (i, ii inherent to the
feed), and (iii) signal reflected back to the feed within the cavity (which
depends on the cavity and its interplay with the feed). Here we use the
measured $S_{11}$ without distinguishing between these contributions since
they cannot be differentiated within our test setup. We follow the scheme
detailed below:
The amount of energy radiated from a load can be expressed as an equivalent
brightness temperature[18],
$T_{B}=(1-\Gamma^{2})T$ (4)
where $\Gamma$ is the load reflection coefficient and $T$ is load temperature.
Supposing a noise temperature $T_{\text{noise}}$, the power measured from this
load during a Y-factor measurement would be:
$P_{B}=Gk[T_{B}+T_{\text{noise}}]\Delta\nu=Gk[(1-\Gamma^{2})T+T_{\text{noise}}]\Delta\nu.$
(5)
For the purposes of estimating uncertainty, the reflection coefficient
$\Gamma$ can be obtained from $S_{11}$ via
$\Gamma=10^{S_{11}\text{[dB]}/20}.$ (6)
$S_{11}$ measurements of the Y-factor system provide an upper bound on load
reflections, as $\Gamma$ includes reflection contributions from the feed
itself as well as from the RF absorber. The following set of equations shows
the noise temperature computation with and without corrections for the
reflections:

$T^{\text{true}}_{\text{noise}}=\frac{(1-\Gamma_{\text{hot}}^{2})T_{\text{hot}}-Y(1-\Gamma_{\text{cold}}^{2})T_{\text{cold}}}{Y-1}$ (7)

$T^{\text{uncorrected}}_{\text{noise}}=\frac{T_{\text{hot}}-YT_{\text{cold}}}{Y-1}$ (8)
where $T^{\text{true}}_{\text{noise}}$ is the true noise temperature and
$T^{\text{uncorrected}}_{\text{noise}}$ is the noise temperature one would
naively calculate by applying a Y-factor computation without accounting for
reflections. When conducting a noise temperature measurement, Y is measured
directly, but for the purposes of error assessment we compute a theoretical Y
value from expected noise temperature and measured reflections:
$Y=\frac{P_{B,\text{hot}}}{P_{B,\text{cold}}}=\frac{(1-\Gamma_{\text{hot}}^{2})T_{\text{hot}}+T_{\text{noise}}}{(1-\Gamma_{\text{cold}}^{2})T_{\text{cold}}+T_{\text{noise}}}$
(9)
We estimate uncertainties from reflections as the difference between the
“true” and “uncorrected” cases (eqs. 7, 8) for various measurement scenarios.
We take $T_{\text{hot}}$ = 300 K, $T_{\text{cold}}$ = 77 K, and
$T_{\text{noise}}$ = 30 K, and compute $\Gamma_{\text{hot}}$ and
$\Gamma_{\text{cold}}$ directly from $S_{11}$ measurements in the ambient
temperature/cryogenic cavities.
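A short sketch of this computation (illustrative only, implementing equations (6)-(9) with the assumed temperatures above; the $S_{11}$ values here are representative rather than measured):

```python
def gamma_from_s11(s11_dB):
    """Reflection coefficient magnitude from a return-loss value in dB (eq. 6)."""
    return 10.0 ** (s11_dB / 20.0)

def reflection_bias(s11_hot_dB, s11_cold_dB,
                    T_hot=300.0, T_cold=77.0, T_noise=30.0):
    """Bias T_uncorrected - T_true incurred by ignoring load reflections."""
    gh2 = gamma_from_s11(s11_hot_dB) ** 2
    gc2 = gamma_from_s11(s11_cold_dB) ** 2
    # Theoretical Y-factor including reflections (eq. 9)
    Y = ((1 - gh2) * T_hot + T_noise) / ((1 - gc2) * T_cold + T_noise)
    # Naive noise temperature ignoring reflections (eq. 8); eq. 7 recovers T_noise
    T_uncorrected = (T_hot - Y * T_cold) / (Y - 1.0)
    return T_uncorrected - T_noise

# Identical cavities at -15 dB: small positive bias (a safe upper bound)
print(reflection_bias(-15.0, -15.0))   # ~ +1.0 K
# Discrepant cavities: the upper-bound property can be lost
print(reflection_bias(-15.0, -13.5))   # ~ -0.6 K
```

The sign of the bias in these two cases illustrates the two regimes discussed next.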
Using this description, we estimate the impact of reflections in two different
regimes:
* •
When $S_{11}=S_{11,{\rm hot}}=S_{11,{\rm cold}}$, the noise temperature is
modified by $T_{\rm uncorrected}=T_{\rm true}/(1-\Gamma^{2})$, so $T_{\rm
uncorrected}>T_{\rm true}$ (this is the case where the two cylinders are
identical and the liquid nitrogen surface is completely sub-dominant). In this
regime, we can use the Y-factor measurements without temperature modifications
to get an upper bound on noise temperature.
* •
If $S_{11,{\rm hot}}\neq S_{11,{\rm cold}}$, and they are sufficiently
discrepant, we are no longer guaranteed an upper bound on noise temperature.
For a noise temperature of 30 K, and $S_{11,{\rm hot}}$ = -15 dB, the expected
level of the free space HIRAX feed profile, we require $S_{11,{\rm
cold}}<-13.5$ dB to maintain this upper bound.
Figure 9 (right) shows the biases we incur when not accounting for reflections
in these two cases. Without liquid nitrogen, we are in the regime where
$S_{11,{\rm hot}}=S_{11,{\rm cold}}$, and find the uncertainty at most $\sim$1
K (black curve). When liquid nitrogen is introduced, there are new reflections
($S_{11,{\rm hot}}\neq S_{11,{\rm cold}}$), giving at most $\pm$ 3.5 K
uncertainty. These uncertainties provide a lower bound at some frequencies and
upper bound at others, and remain within 2 K for much of the frequency band.
Although a 2 K error is within the specification, it is large enough that removing or correcting for these reflections is desirable.
We measured the $S_{11}$ of the system at various nitrogen depths, as shown in
Figure 9 (left), to assess the reflections off of the liquid nitrogen layer.
This measurement revealed that as the distance from the liquid nitrogen
surface changes, the location of the ‘dip’ feature, which we associate with
constructive interference in the cavity, shifts by $\sim$30 MHz. The profile
continued to change when the cavity was almost full of nitrogen, at a stage
when material contractions due to cooling would be complete and the RF absorber was nearly submerged, so we suspect this feature is due to reflections from
the liquid surface layer and not contraction or altered RF properties from
cooling. Still, it is possible that the RF absorber shrinks in cold
conditions, and we are currently investigating the impact of perturbations in
absorber size in CST.
Figure 9: $S_{11}$ profiles (as measured by the passive HIRAX feed) at various
liquid nitrogen depths and associated uncertainty in noise temperature.
(Left): The return loss $S_{11}$ measurements of the cold load exhibit a
resonance that shifts in frequency as the cavity fills with liquid nitrogen.
Though the resonance shifts, the $S_{11}$ profiles remain below -10 dB (dashed
line) across the full band, which is within the design specification for the
feed. (Right): The uncertainty in noise temperature measurements is computed
as described in the main text (equations 4 to 8). The shaded region indicates
when this uncertainty provides an upper bound on the measurements, which is
the preferred regime. Outside of this regime, the uncertainties remain below
approximately 3 K, which is itself a conservatively high bound.
In addition to reflections from liquid nitrogen, it is important to consider
the reflections that might occur off of the insulation lid. This lid is made
from zotefoam, which is known for its RF transparent properties. The
insulating lid sits directly below the antenna under test, shielding it from
the cold nitrogen gas. It is important to understand the lid transparency, as
any absorption in the zotefoam could introduce a $>$77 K component into the 77
K measurement in a way that is difficult to quantify. To assess the lid RF
reflections, we measured the $S_{11}$ of the cold system with and without the
zotefoam lid on, finding the measurements identical to within $\sim$0.2 dB for values above -15 dB, indicating these reflections contribute negligibly to the uncertainty.
We also consider the impact that asymmetries in the system might have on
reflections, and how that propagates into noise temperature. To assess system
symmetry, we took $S_{11}$ measurements at a series of feed rotations
(rotating in polarization angle), considering angles 0, 30, 60, and 90
degrees. Comparing these measurements, we see no discernible difference in
$S_{11}$ (order $<$0.25 dB discrepancies for $S_{11}>$-17 dB), and determine
system asymmetries to have negligible contributions to uncertainty. To further
support this determination, we also performed Y-factor measurements at the
same series of angles, and found no discernible difference in the
corresponding noise temperature results.
### 4.3 Estimated contribution of cavity differences to noise temperature
calculations
Differences between cavities can cause systematic errors in the measurements,
and because only one cavity is filled with liquid nitrogen the cavities cannot
be interchanged. We can assess the magnitude of these systematic errors
through a series of tests at both ambient and cryogenic temperatures.
For the first of the analyses, we use a HIRAX feed to measure spectra from
both cavities when warm, and apply a -4.89 dB offset to the measurement from
the cylinder that will be filled with nitrogen. This offset brings the spectrum to the level we would expect from a cold measurement, assuming a 30 K noise temperature. We take the warm cavity result as $P_{\rm hot}$ and the offset
result as $P_{\rm cold}$ and use the Y-factor method with $T_{\rm hot}$ = 300
K, $T_{\rm cold}$ = 77 K to compute a noise temperature. The resulting ‘mock’
noise temperature should have a mean value of 30 K, with spectral features
that are generated only by differences between the spectra in the two
cylinders. The results are shown in Figure 10, where the mean noise
temperature of 30 K has been removed, showing only the expected variations
from differences between the cylinders. This feature, a few Kelvin in size,
indicates a discrepancy between the RF properties of the two cavities. It is
consistent across polarizations and repeatable across feeds and measurement
days. It dampens slightly, though remains, when the RF absorber is swapped between the two systems (demonstrated in Figure 10). These results suggest there could be
slight geometric differences in cylinder construction or a difference in
insulation transparency (one cylinder has different insulation than the other,
for ease of construction).
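The offset itself follows directly from the radiometer relation $P\propto Gk(T_{\rm load}+T_{\rm noise})\Delta\nu$; a short check (illustrative only, assuming the 30 K noise temperature used above):

```python
import numpy as np

T_hot, T_cold, T_noise = 300.0, 77.0, 30.0   # K, assumed 30 K noise temperature

# Power ratio a 77 K load would produce relative to a 300 K load
# for identical gain and bandwidth
offset_dB = 10.0 * np.log10((T_cold + T_noise) / (T_hot + T_noise))
print(f"{offset_dB:.2f} dB")   # -4.89 dB, the offset applied to the warm spectrum

# Re-running the Y-factor on the offset spectrum recovers 30 K by construction
Y = 10.0 ** (-offset_dB / 10.0)
print((T_hot - Y * T_cold) / (Y - 1.0))   # 30.0 K
```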
Figure 10: Deviations from the expected 30 K noise temperature due to inherent
differences between the two loads (measured at 300 K). The frequency dependent
feature is common across both polarizations and different feeds, and remains
when absorber configurations are switched between the two systems. This offset
can be removed from the final measurement results.
This sinusoidal feature is recovered in real noise temperature data from
hot/cold measurements using a cryogenic load. We take a Y-factor measurement
using the same cavity for both hot and cold measurements, where the hot
measurement is taken just before the liquid nitrogen fill and the cold
measurement is taken directly after. Subtracting this measurement from a
measurement taken using the two different cavities at two different
temperatures removes any spectral features common to both measurements (e.g.
from the noise temperature) and yields features that result from cavity
differences. This subtraction is shown in Figure 11 and reveals the same
frequency-dependent profile seen in the warm measurement comparison detailed
above. Their strong agreement indicates that the systematics from cavity
differences can be subtracted out from measurements to obtain the final noise
temperature results.
Figure 11: Characteristic offset in noise temperature measurements due to
discrepancies between the two RF cavities, as predicted by warm verification
measurements and verified by Y-factor measurements. Here, $T_{N}^{(1,2)}$
denotes the noise temperature measured with cavity 1 (at 300 K) as the hot
load and cavity 2 (at 77 K) as the cold load. We take $\overline{T}_{N}^{(1,2)}$ to denote the predicted noise temperature computed from two warm measurements (also shown in Figure 10) using a spectrum measurement in cavity 1 (at 300 K) as the hot power and a spectrum measurement in cavity 2 (at 300 K) minus 4.9 dB as the “cold” power (the 4.9 dB offset is consistent with measuring 77 K). We
plot the difference between the noise temperature measured using two different
loads for hot/cold measurements and using the same load for hot/cold
measurements, along with the difference in predicted noise temperature for two
different loads and the same load. The recovered feature is highly consistent
across different feeds and different measurement days, and as a result can be
removed from final measurements.
### 4.4 Additional sources of error
We anticipate additional sources of error in our measurements that we are
working to mitigate. These include amplifier drifts, feed temperature
fluctuations, cable effects, airflow over the nitrogen surface, and RFI. The
full error budget is shown in Table 2.
We have observed long timescale drifts and fluctuations in amplifier gain.
Over the course of several months, the amplifier gain appears to drift up to a
few dB, presumably in accordance with temperature and humidity conditions in
the lab environment. Although we plan to measure noise temperature on far
shorter time scales (typically no more than 5 minutes between a hot and cold
measurement), these observations highlight the importance of documenting
ambient temperature and humidity during the measurements in case absolute power measurements must be compared across longer time scales at a later point.
The long-term gain drifts suggest that lab conditions like temperature and
humidity impact antenna gain. These effects are critical to consider in the
measurement scheme, as we cannot completely isolate the antenna from the cool
nitrogen gas leaking out of the insulation when making a cold measurement, and
therefore cannot guarantee it measures both loads at the same physical
temperature. We have a preliminary data set to characterize this effect, where
we continuously measured the spectrum from each load for 10 minutes (the
largest timescale we’d consider for one measurement, and ample time for the
feed temperature to equilibrate), and then compared the relative drift of
spectrum medians in that time frame. Although these measurements suggest no
discernible drift, if further analysis indicates this is problematic we can
measure the cold load for multiple, shorter time periods. We seek to further
mitigate temperature and humidity effects by enclosing the front of the
antenna in a zotefoam box through which we flow room-temperature nitrogen gas.
This precaution will help keep the antenna at 300 K for the full measurement and reduce frost buildup.
In addition to long-term gain drifts, we notice that the HIRAX amplifiers and
the spectrum analyzer both take up to an hour to warm up and stabilize in gain
and readout when first plugged in. These effects are mitigated by keeping all
amplifiers under test powered for at least one hour prior to measurements.
We also consider the effects of using lossy cables and measuring too close to
the amplifier noise floor. The noise temperature of our setup propagates as
$T_{\text{noise}}=T_{\text{LNA}}+\frac{T_{\text{cable}}}{G_{\text{LNA}}}+\frac{T_{\text{LNA,2}}}{G_{\text{LNA}}G_{\text{cable}}}+\dots$ (10)
(following the cascaded-device noise treatment from Microwave Engineering [13]). From this equation, we see that noise temperature contributions from
cable loss should be negligible in the measurement, as the cable sits behind
$G_{\text{LNA}}$ = 40 dB of amplification in the signal chain. Conservatively assuming that $T_{\text{cable}}$ = 300 K, $G_{\text{cable}}<$ -3 dB, and
$T_{\text{LNA,2}}<$ 100 K, cable loss and second-stage amplifier effects would
contribute $<$0.05 K to the final result. Similarly, measuring near the
spectrum analyzer noise floor is not of concern. While measuring close to the
noise floor will bias a noise temperature measurement, this is avoidable by
adding appropriate second-stage LNAs into the measurement chain.
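A quick numerical check of these cascade contributions (illustrative only, using the conservative values above):

```python
def db_to_linear(db):
    return 10.0 ** (db / 10.0)

G_lna   = db_to_linear(40.0)    # first-stage LNA gain, 40 dB
T_cable = 300.0                 # K, cable treated conservatively as a 300 K stage
G_cable = db_to_linear(-3.0)    # cable "gain" (i.e. -3 dB of loss)
T_lna2  = 100.0                 # K, second-stage amplifier noise temperature

# Terms beyond the first stage in eq. 10
extra = T_cable / G_lna + T_lna2 / (G_lna * G_cable)
print(f"{extra:.3f} K")   # 0.050 K, negligible against a ~30 K noise temperature
```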
Previous experiments have found that rapid airflow over a nitrogen surface can
artificially raise the nitrogen temperature by 2 degrees[19]. This was initially a concern for this experiment: for safety reasons, we draw the nitrogen gas expelled from the tank into the laboratory’s HVAC system, and were worried about drawing O2 through the tank in the process. However, this is very likely not an issue in the setup, as we are careful to only remove the gas once it has already been vented, and the zotefoam insulation lid forms a seal that prevents rapid O2 draw. As a worst case, if $T_{\text{cold}}$ were off by 2 K, we would see a 3.5 K error in the noise temperature, which is more than we can tolerate given the existing sources of uncertainty from the nitrogen surface and cavity discrepancies.
Category | Source of Error | Projected Uncertainty
---|---|---
statistical uncertainty | spectrum analyzer fluctuations | $<$5 K*
| spectrum analyzer & amplifier gain drift | $<$0.5 K
| liquid nitrogen surface reflections | $<$3.5 K*
characterized together | LN2 fill cavity contractions | $\ll$3.5 K
| insulation reflections | $\ll$3.5 K
| cavity RF differences | $<$8 K*
| airflow over LN2 surface | $\ll$3.5 K
characterized separately | cavity asymmetry | $<$0.1 K
| cable effects | $<$0.1 K
| amplifier compression | $<$0.1 K
| temperature/humidity effects | —
characterization in progress | RFI | —
| back lobe size | —
Table 2: Experimental uncertainties and systematics. The main contributors to
the error budget are spectrum analyzer fluctuations, nitrogen surface
reflections, and cavity differences. These have been marked with a * and are
removable or able to be averaged down, as discussed in the text. Additional
sources of error are under investigation.
## 5 Preliminary results
We have taken Y-factor measurement data for three different HIRAX feeds over
the course of three separate measurement trials. The procedure is as follows.
We first mount the antenna under test on a 12 in $\times$ 12 in ground plate
that slots into the cavity lid, and power the antenna through a bias-t along
an SMA cable. We then bolt the mounting plate into place on the “hot” cylinder
lid, so the antenna is directly facing the RF absorber. We initiate data
taking on the spectrum analyzer (previously a Rohde & Schwarz FSH4), and leave the antenna passively taking data (integrating with 3 MHz bandwidth for 1 s total integration time), saving a file every 10 seconds for 5 minutes. At this stage, we move the antenna and plate over to the cold cylinder, bolt the plate onto its lid, and leave it passively taking data for 5 minutes. This process is repeated three times.
Though we established that feed orientation does not impact noise temperature
results in a measurable way, we keep the orientation consistent for all
measurements. We are also careful to ensure that cables are positioned to
reduce strain on the feed SMA connector joints. Later, in the analysis, we tag
data files that correspond to hot/cold measurements, and use those files to
perform a Y-factor computation.
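The per-bin computation in the analysis stage reduces to a few lines; the following is a minimal sketch (file handling omitted, with synthetic spectra standing in for the tagged data files):

```python
import numpy as np

def y_factor_noise_temp(p_hot_dBm, p_cold_dBm, T_hot=300.0, T_cold=77.0):
    """Per-frequency-bin noise temperature from time-averaged hot/cold spectra."""
    # Convert dBm to linear power; any common scale factor cancels in the ratio
    p_hot = 10.0 ** (np.asarray(p_hot_dBm) / 10.0)
    p_cold = 10.0 ** (np.asarray(p_cold_dBm) / 10.0)
    Y = p_hot / p_cold
    return (T_hot - Y * T_cold) / (Y - 1.0)

# Synthetic flat spectra consistent with a 30 K noise temperature
n_bins = 1024
p_hot = -50.0 * np.ones(n_bins)
p_cold = p_hot + 10.0 * np.log10((77.0 + 30.0) / (300.0 + 30.0))
print(y_factor_noise_temp(p_hot, p_cold).mean())   # ~30 K
```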
Because we have not yet verified noise temperature measurements with known
amplifiers, we present relative measurements between different HIRAX feeds and
different measurement days instead of absolute noise temperature values. We
find the HIRAX feed noise temperature results are repeatable across all three
feeds, and three separate measurement days to within $\pm$ 5% (shown in Figure
12). These preliminary results are encouraging, but further measurements and
analysis are required for verification, as outlined in the next section.
Figure 12: Noise temperature consistency for HIRAX feed measurements, shown as
a percent change between measurement days (left) and different feeds (right).
In both scenarios, excluding RFI, noise temperatures are consistent to within
$5\%$ across measurements (light blue band). We note that RFI is persistent
across the frequency band.
## 6 Summary and Future Work
This paper details a system designed and optimized to measure the noise
temperature of the HIRAX feeds. In addition, we report results from
verification measurements performed to assess systematic and statistical
errors. Because HIRAX uses an amplifier embedded in the antenna, a system like this, which injects the calibration signal optically (through the antenna beam rather than a cabled input), is the only way to measure the noise temperature in the lab. This measurement will be critical to verifying that
faint cosmological signal of interest.
This measurement system consists of two identical cavities held at different
temperatures to allow for a ‘Y-factor’ measurement. It was designed in the
software CST, and constructed over several months in 2019. The construction
process included building a cryogenic cavity able to safely and effectively
contain over 550 L of liquid nitrogen. The verification procedure utilized
both passive and active HIRAX antennas to measure return loss and spectra of
the two cavities and quantify systematic and statistical errors. Initial
results indicate that this system is within tolerances, with the main sources
of error able to be removed or averaged down (Table 2).
The verification data sets have shown additional limitations that will need to
be accounted for and mitigated. Passive feed measurements indicate that RFI is
a contaminant to the data stream, with the potential to significantly bias the
noise temperature measurement. To address this challenge we will take two
approaches: (i) building a Faraday enclosure around the measurement system and
(ii) upgrading the data acquisition from a handheld spectrum analyzer to the
ICE board used for HIRAX[11]. The fast sampling rate of the ICE board allows
detection of rapid RFI spikes that may currently be averaged into the data. We
can then remove contaminated frequency channels in analysis. In addition, the
correlator will provide a lower noise floor, and streamline data storage and
analysis.
The current set of verification measurements has focused on quantifying
sources of systematic error and assessing whether the cylinders are similar to
within specifications. Once these upgrades are completed, we will begin noise
temperature verification measurements with this system. We will measure the
noise temperature of the HIRAX passive feed with commercial amplifiers of
known noise temperatures to assess whether we have met the required specification (the amplifiers will be the set of four used in Figure 8). The noise temperature
of the commercial amplifiers will also be independently verified with a
commercial noise figure meter. From these Y-factor measurements, we expect to
recover the amplifier noise temperature plus some consistent contribution from
the antenna loss, which we estimate at 10-20 K from simulations.
An additional consideration is the antenna backlobe size. Any backlobe in this
system will be looking at the laboratory ceiling, a $\sim$300 K source. As a
result, additional power is added during the cold measurement equivalent to
300 K times the ratio of the backlobe to the total beam. Backlobe effects are
a significant contributor to noise temperature ($>$5 K) if the backlobe makes
up more than 1.5% of the total beam, which sets a requirement that the
backlobe remain on average below -17 dB relative to the peak. Simulations have
indicated that the HIRAX feed may be in this regime, such that we will need to
account for the additional power in the cold temperature computation.
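The scaling behind this requirement is simple; a short illustrative calculation (the 300 K ceiling temperature and the backlobe fractions are the representative values quoted above):

```python
# Added power during the cold measurement: T_add = f_backlobe * T_ceiling,
# where f_backlobe is the fraction of total beam power in the backlobe.
T_ceiling = 300.0  # K, laboratory ceiling seen by the backlobe

for f_backlobe in (0.01, 0.015, 0.02):
    print(f"{f_backlobe:.1%} backlobe -> {f_backlobe * T_ceiling:.1f} K added")
# 1.0% -> 3.0 K; 1.5% -> 4.5 K; 2.0% -> 6.0 K
# A backlobe fraction near 1.5-2% reaches the ~5 K level quoted above.
```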
Once this verification work is completed and systematics are fully
characterized, we expect to have an operational Y-factor measurement system,
which will provide the first noise temperature measurements of the HIRAX feed.
This system will be used to measure all 256 feeds used for the initial HIRAX
deployment, as a spot check on production quality and consistency, as well as
next generation feed designs to help inform future prototypes.
###### Acknowledgements.
This work was made possible by the support from the Yale Wright Laboratory and
Yale Center for Research Computing staff and administrators, and the Wright
Laboratory computing and machine shop resources. In particular, we acknowledge
Frank Lopez, Craig Miller, William Tyndall and James Nikkel for their help
constructing the experiment and ensuring personnel safety. Some of the
supplementary beam measurements were made in the North Carolina State
University Neofabrication Facility’s anechoic chamber. The initial feed
simulation was generously shared by Andre Johnson, who did some of the feed
simulation work for CHIME. This work was supported by a NASA Space Technology
Research Fellowship, and is based upon work supported by the National Science
Foundation under Grant No. 1751763. KM acknowledges support from the National
Research Foundation of South Africa. HIRAX is funded by the NRF of South
Africa.
## References
* [1] Newburgh, L., Bandura, K., Bucher, M., Chang, T.-C., Chiang, H., Cliche, J., Davé, R., Dobbs, M., Clarkson, C., Ganga, K., et al., “HIRAX: a probe of dark energy and radio transients,” in [Ground-based and Airborne Telescopes VI ], 9906, 99065X, International Society for Optics and Photonics (2016).
* [2] Saliwanchik, B., Bandura, K., Chiang, H. C., Chichton, D., Ewall-Wice, A., Kuhn, E., MacKay, V., Moodley, K., Newburgh, L., Peterson, J., et al., “Mechanical and Optical Design of the HIRAX Radio Telescope,” in [Ground-based and Airborne Telescopes VIII ], 11445, 114455O, International Society for Optics and Photonics (2020).
* [3] Boyle, P. and the CHIME/FRB Collaboration, “First detection of fast radio bursts between 400 and 800 MHz by CHIME/FRB,” ATel 11901, 1 (2018).
* [4] Amiri, M., Bandura, K., Bhardwaj, M., Boubel, P., Boyce, M. M., Boyle, P. J., Brar, C., Burhanpurkar, M., Cassanelli, T., Chawla, P., Cliche, J. F., Cubranic, D., Deng, M., Denman, N., Dobbs, M., Fandino, M., Fonseca, E., Gaensler, B. M., Gilbert, A. J., Gill, A., Giri, U., Good, D. C., Halpern, M., Hanna, D. S., Hill, A. S., Hinshaw, G., Höfer, C., Josephy, A., Kaspi, V. M., Landecker, T. L., Lang, D. A., Lin, H. H., Masui, K. W., Mckinven, R., Mena-Parra, J., Merryfield, M., Michilli, D., Milutinovic, N., Moatti, C., Naidu, A., Newburgh, L. B., Ng, C., Patel, C., Pen, U., Pinsonneault-Marotte, T., Pleunis, Z., Rafiei-Ravandi, M., Rahman, M., Ransom, S. M., Renard, A., Scholz, P., Shaw, J. R., Siegel, S. R., Smith, K. M., Stairs, I. H., Tendulkar, S. P., Tretyakov, I., Vanderlinde, K., Yadav, P., and Collaboration, T. C., “A second source of repeating fast radio bursts,” Nature 566(7743), 235–238 (2019).
* [5] Amiri, M., Bandura, K., Bhardwaj, M., Boubel, P., Boyce, M. M., Boyle, P. J., Brar, C., Burhanpurkar, M., Chawla, P., Cliche, J. F., Cubranic, D., Deng, M., Denman, N., Dobbs, M., Fandino, M., Fonseca, E., Gaensler, B. M., Gilbert, A. J., Giri, U., Good, D. C., Halpern, M., Hanna, D., Hill, A. S., Hinshaw, G., Höfer, C., Josephy, A., Kaspi, V. M., Landecker, T. L., Lang, D. A., Masui, K. W., Mckinven, R., Mena-Parra, J., Merryfield, M., Milutinovic, N., Moatti, C., Naidu, A., Newburgh, L. B., Ng, C., Patel, C., Pen, U., Pinsonneault-Marotte, T., Pleunis, Z., Rafiei-Ravandi, M., Ransom, S. M., Renard, A., Scholz, P., Shaw, J. R., Siegel, S. R., Smith, K. M., Stairs, I. H., Tendulkar, S. P., Tretyakov, I., Vanderlinde, K., Yadav, P., and Collaboration, T. C., “Observations of fast radio bursts at frequencies down to 400 megahertz,” Nature 566(7743), 230–234 (2019).
* [6] Andersen, B., Bandura, K., Bhardwaj, M., Boubel, P., Boyce, M., Boyle, P., Brar, C., Cassanelli, T., Chawla, P., Cubranic, D., et al., “CHIME/FRB Discovery of Eight New Repeating Fast Radio Burst Sources,” The Astrophysical Journal Letters 885(1), L24 (2019).
* [7] Amiri, M., Andersen, B. C., Bandura, K. M., Bhardwaj, M., Boyle, P. J., Brar, C., Chawla, P., Chen, T., Cliche, J. F., Cubranic, D., Deng, M., Denman, N. T., Dobbs, M., Dong, F. Q., Fandino, M., Fonseca, E., Gaensler, B. M., Giri, U., Good, D. C., Halpern, M., Hessels, J. W. T., Hill, A. S., Höfer, C., Josephy, A., Kania, J. W., Karuppusamy, R., Kaspi, V. M., Keimpema, A., Kirsten, F., Landecker, T. L., Lang, D. A., Leung, C., Li, D. Z., Lin, H. H., Marcote, B., Masui, K. W., Mckinven, R., Mena-Parra, J., Merryfield, M., Michilli, D., Milutinovic, N., Mirhosseini, A., Naidu, A., Newburgh, L. B., Ng, C., Nimmo, K., Paragi, Z., Patel, C., Pen, U. L., Pinsonneault-Marotte, T., Pleunis, Z., Rafiei-Ravandi, M., Rahman, M., Ransom, S. M., Renard, A., Sanghavi, P., Scholz, P., Shaw, J. R., Shin, K., Siegel, S. R., Singh, S., Smegal, R. J., Smith, K. M., Stairs, I. H., Tendulkar, S. P., Tretyakov, I., Vanderlinde, K., Wang, H., Wang, X., Wulf, D., Yadav, P., Zwaniga, A. V., and Collaboration*, T. C., “Periodic activity from a fast radio burst source,” Nature 582(7812), 351–355 (2020).
* [8] Andersen, B. C., Bandura, K. M., Bhardwaj, M., Bij, A., Boyce, M. M., Boyle, P. J., Brar, C., Cassanelli, T., Chawla, P., Chen, T., Cliche, J. F., Cook, A., Cubranic, D., Curtin, A. P., Denman, N. T., Dobbs, M., Dong, F. Q., Fandino, M., Fonseca, E., Gaensler, B. M., Giri, U., Good, D. C., Halpern, M., Hill, A. S., Hinshaw, G. F., Höfer, C., Josephy, A., Kania, J. W., Kaspi, V. M., Landecker, T. L., Leung, C., Li, D. Z., Lin, H. H., Masui, K. W., Mckinven, R., Mena-Parra, J., Merryfield, M., Meyers, B. W., Michilli, D., Milutinovic, N., Mirhosseini, A., Münchmeyer, M., Naidu, A., Newburgh, L. B., Ng, C., Patel, C., Pen, U. L., Pinsonneault-Marotte, T., Pleunis, Z., Quine, B. M., Rafiei-Ravandi, M., Rahman, M., Ransom, S. M., Renard, A., Sanghavi, P., Scholz, P., Shaw, J. R., Shin, K., Siegel, S. R., Singh, S., Smegal, R. J., Smith, K. M., Stairs, I. H., Tan, C. M., Tendulkar, S. P., Tretyakov, I., Vanderlinde, K., Wang, H., Wulf, D., Zwaniga, A. V., and Collaboration, T. C., “A bright millisecond-duration radio burst from a Galactic magnetar,” Nature 587(7832), 54–58 (2020).
* [9] Shaw, J. R., Sigurdson, K., Sitwell, M., Stebbins, A., and Pen, U.-L., “Coaxing cosmic 21 cm fluctuations from the polarized sky using m-mode analysis,” Physical Review D 91(8), 083514 (2015).
* [10] Mena, J., Bandura, K., Cliche, J.-F., Dobbs, M., Gilbert, A., and Tang, Q. Y., “A Radio-Frequency-over-Fiber link for large-array radio astronomy applications,” Journal of Instrumentation 8(10), T10003 (2013).
* [11] Bandura, K., Bender, A., Cliche, J., de Haan, T., Dobbs, M., Gilbert, A., Griffin, S., Hsyu, G., Ittah, D., Parra, J. M., et al., “ICE: a scalable, low-cost FPGA-based telescope signal processing and networking system,” Journal of Astronomical Instrumentation 5(04), 1641005 (2016).
* [12] Deng, M., Campbell-Wilson, D., and the CHIME Collaboration, “The cloverleaf antenna: A compact wide-bandwidth dual-polarization feed for CHIME,” International Symposium on Antenna Technology and Applied Electromagnetics (ANTEM) 16, IEEE (July 2014).
* [13] Pozar, D. M., [Microwave Engineering ], John Wiley & Sons (2009).
* [14] Dicke, R. H., “The Measurement of Thermal Radiation at Microwave Frequencies,” Review of Scientific Instruments 17(7), 268–275 (1946).
* [15] Lamb, J. W., “Miscellaneous data on materials for millimetre and submillimetre optics,” International Journal of Infrared and Millimeter Waves 17(12), 1997–2034 (1996).
* [16] Reesor, G., Dagg, I., and Mohabir, S., “The complex dielectric constant of liquid nitrogen in the region 18 to 26 GHz,” Canadian Journal of Physics 53(23), 2611–2612 (1975).
* [17] Smith, P., Davis, L., Button, T., and Alford, N. M., “The dielectric loss tangent of liquid nitrogen,” Superconductor Science and Technology 4(3), 128 (1991).
* [18] Balanis, C. A., [Antenna theory: analysis and design ], John wiley & sons (2016).
* [19] Paine, S. N., Turner, D. D., and Küchler, N., “Understanding thermal drift in liquid nitrogen loads used for radiometric calibration in the field,” Journal of Atmospheric and Oceanic Technology 31(3), 647–655 (2014).
Further author information: (Send correspondence to B.R.B.S.)
E-mail: <EMAIL_ADDRESS>
# Mechanical and Optical Design of the HIRAX Radio Telescope
Benjamin R. B. Saliwanchik Department of Physics, Yale University, New Haven,
CT, USA Department of Physics, Brookhaven National Laboratory, Upton, NY, USA
Aaron Ewall-Wice NASA Jet Propulsion Laboratory, Pasadena, CA, USA
Department of Astronomy, University of California, Berkeley, CA, USA Devin
Crichton School of Mathematics, Statistics, & Computer Science, University of
KwaZulu-Natal, Durban, South Africa Institute for Particle Physics and
Astrophysics, ETH Zürich, Zürich, Switzerland Emily R. Kuhn Department of
Physics, Yale University, New Haven, CT, USA Deniz Ölçek Department of
Physics, McGill University, Quebec, Canada Kevin Bandura Department of
Computer Science and Electrical Engineering, and Center for Gravitational
Waves and Cosmology, West Virginia University, Morgantown, WV, USA Martin
Bucher School of Mathematics, Statistics, & Computer Science, University of
KwaZulu-Natal, Durban, South Africa Astroparticle and Cosmology Laboratory,
University of Paris, Paris, France Tzu-Ching Chang NASA Jet Propulsion
Laboratory, Pasadena, CA, USA H. Cynthia Chiang School of Mathematics,
Statistics, & Computer Science, University of KwaZulu-Natal, Durban, South
Africa Department of Physics, McGill University, Quebec, Canada Kit Gerodias
Department of Physics, McGill University, Quebec, Canada Kabelo Kesebonye
School of Mathematics, Statistics, & Computer Science, University of KwaZulu-
Natal, Durban, South Africa Vincent MacKay Department of Physics, University
of Toronto, Toronto, Canada Kavilan Moodley School of Mathematics,
Statistics, & Computer Science, University of KwaZulu-Natal, Durban, South
Africa Laura B. Newburgh Department of Physics, Yale University, New Haven,
CT, USA Viraj Nistane Department of Theoretical Physics and Centre for
Astroparticle Physics, Université de Genève, Genève, Switzerland Jeffrey B.
Peterson Department of Physics, Carnegie Mellon University, Pittsburgh, PA,
USA Elizabeth Pieters Department of Physics, McGill University, Quebec,
Canada Carla Pieterse School of Mathematics, Statistics, & Computer Science,
University of KwaZulu-Natal, Durban, South Africa Keith Vanderlinde
Department of Physics, University of Toronto, Toronto, Canada Jonathan L.
Sievers Department of Physics, McGill University, Quebec, Canada School of
Chemistry and Physics, University of KwaZulu-Natal, Durban, South Africa
Amanda Weltman Department of Mathematics, University of Cape Town, Cape Town,
South Africa Dallas Wulf Department of Physics, McGill University, Quebec,
Canada
###### Abstract
The Hydrogen Intensity and Real-time Analysis eXperiment (HIRAX) is a planned
interferometric radio telescope array that will ultimately consist of 1024
close packed 6 m dishes that will be deployed at the SKA South Africa site.
HIRAX will survey the majority of the southern sky to measure baryon acoustic
oscillations (BAO) using the 21 cm hyperfine transition of neutral hydrogen.
It will operate between 400 and 800 MHz with 391 kHz resolution, corresponding to
a redshift range of $0.8<z<2.5$ and a minimum $\Delta z/z$ of $\sim$0.003
(frequency resolution $500<R<1000$). One of the primary science goals of HIRAX
is to constrain the dark energy equation of state by measuring the BAO scale
as a function of redshift over a cosmologically significant range. Achieving
this goal places stringent requirements on the mechanical and optical design
of the HIRAX instrument which are described in this paper. This includes the
simulations used to optimize the mechanical and electromagnetic
characteristics of the instrument, including the dish focal ratio, receiver
support mechanism, and instrument cabling. As a result of these simulations,
the dish focal ratio has been reduced to 0.23 to reduce inter-dish crosstalk,
the feed support mechanism has been redesigned as a wide (35 cm diam.) central
column, and the feed design has been modified to allow the cabling for the
receiver to pass directly along the symmetry axis of the feed and dish in
order to eliminate beam asymmetries and reduce sidelobe amplitudes. The beams
from these full-instrument simulations are also used in an astrophysical
m-mode analysis pipeline which is used to evaluate cosmological constraints
and determine potential systematic contamination due to physical non-
redundancies of the array elements. This end-to-end simulation pipeline was
used to inform the dish manufacturing and assembly specifications which will
guide the production and construction of the first-stage HIRAX 256-element
array.
## 1 Introduction
The fact that the universe is in a state of accelerated expansion today is
supported by the observational evidence from various cosmological tools such
as the Cosmic Microwave Background (CMB), Type 1a Supernovae (SN1a), and
Baryon Acoustic Oscillations (BAO)[1]. The increasing rate of expansion is
attributed to an unknown cosmological component called Dark Energy, which
constitutes around 70% of the total energy density of the current universe.
Current data shows Dark Energy began to affect the dynamics of the universe
around a redshift of z $\sim$ 2 (10.5 Gyr ago), and became the dominant
component of the energy density at around a redshift of z $\sim$ 0.5 (5.2 Gyr
ago). Constraining the properties of Dark Energy, and the evolution of the
universe over cosmic timescales, is an important goal of modern cosmology.
Baryon acoustic oscillations provide a standard ruler, a structure of a fixed
co-moving scale, which can be used to measure the expansion history of the
universe over a wide range of redshifts[2]. Baryon acoustic oscillations in
the observed matter density power spectrum arise because all modes are
initially excited with the same phase on superhorizon scales, and only the
growing mode is excited. Before recombination the photons and baryons are
tightly coupled to each other owing to Thomson scattering and oscillate much
like ordinary sound, following horizon crossing. However, after recombination
the photons free stream, causing the phase oscillations of the plasma at the
moment of recombination to become frozen into the power spectrum. The
characteristic scale of the first peak of this power spectrum is the sound
horizon (comoving distance that density waves in the photon-baryon fluid could
have travelled until decoupling), which is strongly constrained by the CMB to
be 146.8 $\pm$ 1.8 Mpc [3]. Due to the variations in matter density produced
by the BAO, and structure formation favoring areas of over-density, the
distribution of galaxies subsequently displays a spatial correlation function
which corresponds to the BAO power spectrum. This large scale structure
produced by the BAO has been detected at high significance in optical galaxy
surveys of the low redshift universe, and has provided an important constraint
on cosmological parameters, including Dark Energy[4, 5, 6].
One method to measure the BAO spectrum at higher redshift is to directly
measure the neutral hydrogen density associated with galaxies by using the 21
cm emission line. This probe is particularly attractive because of the
omnipresence of neutral hydrogen at all redshifts. Since the characteristic
scale of the BAO is very large, there is no need to detect individual galaxies
at high resolution, and instead a measurement of the power spectrum of neutral
hydrogen density on large scales can completely capture the BAO structure.
This technique is called intensity mapping [7]. Galaxies are observed
collectively via low spatial resolution measurements of the redshifted 21 cm line of neutral hydrogen, which also has the potential to make 3D intensity mapping more efficient than optical galaxy surveys.
The Hydrogen Intensity and Real-time Analysis eXperiment (HIRAX) will measure
the HI density field over the redshift range $0.8<z<2.5$, bracketing the
redshift at which dark energy begins to affect the dynamics of the universe.
The large daily survey area (approximately 1,000 deg$^2$) and real-time
processing capabilities of HIRAX will also make it an effective observatory
for detecting and monitoring radio transients, such as fast radio bursts
(FRBs) and pulsars[8, 9, 10, 11, 12]. The Southern Hemisphere survey location
also provides overlap with the survey fields of cosmology surveys undertaken
by ACT[13], SPT[14], DESI[15], and the Vera Rubin Observatory[16], allowing
for significant cross correlation studies. HIRAX will also deliver a blind HI
21 cm line survey at $0.8<z<2.5$. This will complement the MeerKAT Absorption
Line Survey (MALS; $0<z<1.4$) by extending the exploration of cold atomic gas
in the circum-galactic and inter-galactic medium to higher redshifts[17].
Detecting the BAO in the presence of significant astronomical foregrounds,
most notably synchrotron radiation from the Milky Way galaxy, is a significant
challenge. The high level requirements for the array to achieve its science
goals are sensitivity, redundancy, and control of systematics. The array is
designed to meet the sensitivity requirements by a combination of low system
temperature and large collecting area. The latter aspect is what drives us to a large array, consisting of approximately one thousand 6 m dishes. The former is achieved by developing low-loss antennas and amplifiers, which are addressed in Kuhn et al.[18]. The simulations presented in this work are part
of the effort to design and produce an array with very high levels of
redundancy between elements, and with very low systematic levels.
Tight control over redundancy is a new design driver that has emerged for the
next generation of 21 cm observatories such as HIRAX[19]. We are aiming to
solve the problem of redundancy in hardware, which requires a level of
precision that is significantly higher than existing experiments such as
CHIME[20] and HERA[21]. That is, rather than correcting for the differences
between array element structures and positions completely in software, we aim
to reduce the element differences as much as possible in hardware, before
applying fine corrections in software. This requires significantly more
stringent specifications than normal for a radio telescope, on the order of 1
part in 1000 relative to the wavelength, driven largely by the high foreground
to signal ratio for the 21 cm signal. Edge effects due to the finite size of
the array are also a source of non-redundancy, and must be taken into
consideration. Additionally, we are also concerned with minimizing crosstalk
between the elements in the array, which can result from either sky signals
bouncing between elements, or from amplifier noise being transmitted from one
element and picked up in another. Broadly, the issues we address in this work are how to optimize the design of the instrument to minimize systematics, what level of element redundancy is necessary, and whether that level is feasible to achieve in hardware, within cost.
## 2 Telescope Design
The full HIRAX array will consist of 1024 parabolic dishes 6 m in diameter,
with a focal ratio of f/D = 0.23. The optimization of the focal ratio is
discussed in Section 3.1. A first-stage array consisting of 256 elements is
fully funded, and bidding for construction of the array is underway. The full
array will be arranged in a close-packed $32\times 32$ element configuration
to enhance sensitivity on the BAO length scales, and to increase the
redundancy of the array. The increased array redundancy in turn simplifies
calibration, reduces the number of correlations to be calculated, and reduces
the data volume to be stored. The telescope design and array layout can be
seen in Figure 1.
Figure 1: Top: Conceptual illustration of the HIRAX telescope design. The
dishes are 6 m in diameter, with a focal ratio of 0.23. The receiver is
supported by a fiberglass column which allows for axisymmetric cabling, which
is important for beam symmetry, polarization, and low sidelobe amplitudes, and
which is rigid under high winds. The receiver is a dual-polarization
cloverleaf antenna, with a metal “can” structure to provide a backplane, and
reduce spillover and crosstalk. Analog signals from the dishes are transmitted
via RF over fiber to the back end correlator. Bottom: A rendering of the full
1024 element array. The array is close-packed to provide improved sensitivity
on the BAO angular scales, and to provide highly redundant baselines, which
facilitate calibration and correlation.
The dish design includes a low mount that allows for easier access to the
feed, reduces wind force on the base, and reduces cost. The HIRAX array
operates in drift scan, so the only degree of freedom necessary in the dish
pointing is to vary the elevation between $\pm 30^{\circ}$ from zenith to encompass the intended survey field.
The receiver is supported by a central fiberglass column. The architecture of
this support structure was motivated by the optimal path of the cables from
the feed to the surface of the dish. Because the cables are metal and lie in
the optical path, the sidelobes of the primary beam depend sensitively upon
the cable placement and angle. Simulations show that sidelobe levels and
asymmetry are minimized when the cables run straight down the boresight axis
of the dish, rather than at an inclined angle, as discussed in Section 3.1. A
column was therefore chosen to provide a natural environmental enclosure
following this cable path. The support structure includes additional provision
for fully enclosing the feed and the radio-frequency over fiber (RFoF) modules
that are co-located with the feed, for weatherproofing and protection of the
full receiver system.
The HIRAX feed is a dual-polarization cloverleaf antenna consisting of copper
layers in an FR-4 (printed circuit board) composite (shown in Figure 3). This
method of manufacturing is easy to produce at scale, easy to assemble with a
minimal jig for alignment, and the FR-4 protects the metallic elements from
corrosion. The cloverleaf antenna is a proven design, having been deployed on
the Canadian Hydrogen Intensity Mapping Experiment (CHIME) [22]. While the
original CHIME feeds were passive, the versions developed for HIRAX integrate
the first stage low noise amplifier (LNA) with the antenna balun to reduce the
system noise. As a result of the lower system noise, the feed material was
changed to FR-4 (compared to Teflon in CHIME), which has higher loss but is substantially cheaper. In an additional change from the CHIME architecture, the signal from the individual array elements is transmitted to the correlator by RFoF instead of via coaxial cable. This reduces cost relative to coax for a large array, with no loss in performance. Noise temperature measurements of
the HIRAX feeds are being conducted using a cryogenic RF chamber to perform a
differential Y-factor measurement between hot (295 K) and cold (77 K) loads.
Details of those measurements are presented in Kuhn et al. [18].
Since we intend to use our EM simulations to optimize elements of the feed and
dish design, we wanted to verify that those simulations were accurate. We
conducted the first range measurements of the HIRAX feed beams at the MESA
antenna test facility at the NASA Jet Propulsion Laboratory, and further
measurements at North Carolina State University. Figure 2 shows the measured
beams of the HIRAX cloverleaf antenna and can, and compares this to the CST
simulations. These measurements confirm the accuracy of the simulations, and
that the feeds were produced according to design.
Figure 2: E-plane co-polarization beam measurements of the HIRAX cloverleaf
antenna and can (orange), compared to the simulated beams (blue) in 50 MHz
increments across the HIRAX observing band. These measurements confirm the
beam width and gain are as designed and simulated. Differences in the depth of
nulls are due to low amplitude reflections in the range, which are not present
in free-space simulations. Measurements were taken at the North Carolina State
University test range.
Work is currently underway to develop a drone calibration platform, which will
allow for beam measurements of the full instrument, including the 6m dish and
amplification chains, in order to verify the simulated performance of the
instrument, fully characterize the instrument, and aid in calibration. This
platform has already been used to perform beam measurements of the Baryon
Mapping Experiment (BMX) array at Brookhaven National Laboratory[23]. Beam
measurements of the HIRAX dishes are currently delayed due to the ongoing Covid-19 pandemic, but will be performed when international research travel is again permitted by institutions.
## 3 Electromagnetic Simulations
### 3.1 Design Simulations
Electromagnetic simulations of the instrument were performed to select between
design options, and to optimize the design elements. Instrument elements
explored include:
* •
The diameter of the feed “can” structure
* •
The nature and positioning of the power cabling for the feed
* •
The feed mechanical support mechanism, in particular deciding between a system
with several “feed legs” and a monolithic “feed column”
* •
Optimizing the dish focal ratio to reduce crosstalk and other array effects
Figure 3: The HIRAX feed and CST models. Top left: The HIRAX dual-polarization
cloverleaf antenna (green PCB) and can (white). Top right: CST model of the
HIRAX feed and can. Only one polarization is fed in the simulation, as
indicated by the blue bar and red arrow. Bottom left: The 6 m dish and 10 cm
diameter fiberglass feed column model. Bottom right: The dish and 35 cm
diameter feed column model.
#### 3.1.1 Feed Can
The HIRAX cloverleaf antenna is backed with a metal structure referred to as
the “can” (Figure 3), a combination of a ground plane and a cylindrical
surface which helps to circularize the beam and reduce the beam FWHM, to
prevent over illuminating the dish, and reduce spillover. Spillover is
especially a concern in a close-packed array, as proximity of the neighboring
dishes increased pickup of spilled power. Without the can, the FWHM of the
cloverleaf alone is $90^{\circ}$ ($60^{\circ}$) at 400 MHz (800 MHz), while
with a nominal 330 mm diameter can the FWHM is reduced to $70^{\circ}$
($67^{\circ}$) at 400 MHz (800 MHz). Increasing the can diameter reduces
$S_{11}$ because it provides a better ground plane, but it also decreases
aperture efficiency, because it blocks more of the beam reflected from the
dish. The effect on $S_{11}$ was found to be a slow function of radius, while
the loss of aperture efficiency scales roughly with $r^{2}$. Therefore a small
diameter can was used; $330$ mm in diameter, slightly larger than the
cloverleaf itself.
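The quoted $r^{2}$ scaling can be illustrated with a rough geometric blockage estimate (this ignores illumination taper and diffraction, so it is only an order-of-magnitude guide, not the CST result):

```python
# Rough geometric aperture blockage by a centered can of diameter d_can
# on a dish of diameter d_dish: blocked fraction ~ (d_can / d_dish)^2.
d_dish = 6.0  # m

for d_can in (0.33, 0.5, 1.0):  # m
    blockage = (d_can / d_dish) ** 2
    print(f"{d_can * 1000:.0f} mm can: {blockage:.2%} of the aperture blocked")
# 330 mm -> ~0.3%; the penalty grows quadratically with can diameter
```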
#### 3.1.2 Feed Support
In initial prototype dishes the feed was supported above the dish by four
cylindrical aluminum conduit “legs” 25 mm in diameter, forming the edges of a
square right pyramid, with the legs meeting at an apex behind the feed, and an
apex angle of approximately $45^{\circ}$. However, simulations showed that
scattering off the metal feed legs introduced strongly polarized components
into the beam, and increased sidelobe levels, motivating a design change to
a non-conductive support structure.
Similar feed legs could have been constructed out of a non-conductive
composite material such as fiberglass, but there were additional mechanical
problems with the feed leg model. Positioning the feed could be accomplished
by a mechanism that slid over the conduit legs, and could adjust the position
of the receiver independently on each leg. However, this mechanism did not
translate easily into adjustments in a simple orthogonal coordinate system.
Furthermore, it was prone to motion in several axes, including compression
towards the dish (z-axis), and rotation around the z-axis. Weatherproofing and
feed access were also issues with the feed leg support mechanism.
A proposed solution was to instead use a fiberglass column, extending from the
vertex of the dish to the feed (Figure 3). Mechanical FEA simulations showed
this feed support mechanism simultaneously solved several problems. It was
significantly more stable in all axes, particularly the z-axis, which is
especially important for ensuring the instrument is in focus. Large diameter
cylinders are especially strong, and a 35 cm diameter cylinder, just larger
than the feed can, was found to be able to withstand the peak survival wind
speed of 44.4 m/s at the SKA South Africa site, while deflecting from the
focal point by less than 1 mm. A feed column also provided a weather proof
enclosure, easy access to the feed and RFoF module by a detachable section at
the top of the cylinder, and a stable point from which fine adjustments in
feed position could be made in an orthogonal basis system. A similar system
was employed by the CBASS collaboration, using an RF transparent receiver
support column constructed of Plastazote LD45, a closed-cell polyethylene
foam[24].
A small diameter (10 cm) feed column was also proposed, and a prototype was
fielded in the dish shown in Figure 8. However, in addition to greater
mechanical stability for fixed wall thickness, a large diameter column also
has some RF advantages. EM simulations showed that a small diameter column
reduces the main beam amplitude by up to 0.5 dB, due to the long path length
of dielectric material along the main beam line of sight (see Figure 4). Both
models slightly alter the sidelobe amplitudes, but in different directions at
different frequencies. A larger diameter column does slightly reduce the
instrument radiative efficiency compared to a smaller diameter column, for
fixed wall thickness, because of the greater net amount of dielectric in the
beam path, but this effect is also not significant. There was a $<1\%$
difference in radiative efficiency between 10 cm and 35 cm diameter columns
with 5 mm wall thickness, several times thicker than what is mechanically required.
Figure 4: Comparison of 10 cm diameter receiver support column (orange) and 35
cm column (blue), with 5 mm wall thickness, at 400 MHz. A smaller diameter
support column slightly decreases main beam amplitude (by 0.5 dB). Sidelobes
can be slightly decreased or enhanced, depending on frequency.
#### 3.1.3 Power Cabling
In connection with the issue of feed support, it was suspected that the cables
for powering the LNA in the feed might affect the instrument beams.
Simulations showed that a single coax cable running from the feed to the dish
edge induced polarization structures in the beam and enhanced sidelobe
structure at a level comparable to the feed support legs themselves, because
an incident plane wave polarized in the same direction as the cable will
excite currents along it. Additionally, if there is only one cable, and not a
symmetric set of feed legs, the sidelobe effect is asymmetric, which is
undesirable.
This effect can be reduced, but not eliminated, by reducing the cable
diameter. Significant improvements were seen by reducing from coax
($\mbox{$\sim$}1$ cm diam.) to 15 AWG wire ($\mbox{$\sim$}1.5$ mm diam.).
Reductions below approximately 1 mm diameter produce diminishing returns. 15
AWG wire is sufficient for the current required to power the LNA (200mA), and
since in the full array the sky signal will be transmitted on RFoF, neither
coax, nor any other metal cable elements will be required to run from the dish
to the feed, further reducing scattering.
The asymmetric sidelobes from the cabling can be further reduced by orienting
the power cables parallel to the beam, that is, running them from the feed to
the dish vertex. Simulations were run exploring a range of cable angles,
between 90 degrees (perpendicular to the beam axis) and zero degrees (parallel
to the beam axis), and with the feed side of the cable positioned at a radius
from the feed center greater than the can radius (because it mechanically had
to wrap around the can to access the feed). The sidelobe behavior was not a
monotonic function of angle, but the asymmetry was generally reduced with
increasing angle. However, even at zero degrees, the effect was not completely
eliminated, as there is still a small component of the cable perpendicular to
the beam, due to the offset from the feed center. Running the cable down the
exact symmetry axis of the feed and telescope, however, does completely remove
the asymmetric sidelobe effect (see Figure 5). This configuration also reduces
the sidelobe amplitude by up to 7 dB relative to the initial (perpendicular)
wiring case.
Figure 5: HIRAX telescope beams at 400 MHz using 15 AWG wire
($\mbox{$\sim$}1.5$ mm diameter) to power the LNA embedded in the balun of the
feed. Top: wire routed from focal point to edge of dish. Middle: wire routed
from the edge of the feed can ($\mbox{$\sim$}18$ cm offset from the axis of
symmetry of the dish and antenna) to the vertex of dish. Bottom: wire routed
though cloverleaf balun, precisely along the boresight axis of the antenna, to
the dish vertex. The completely symmetric wiring case eliminates all
asymmetric beam features, and reduces the sidelobe amplitudes by up to 7 dB
relative to the initial configuration.
We determined that the feed could easily be modified to allow for this cable
routing since the balun is already hollow, and a hole in the base board could
be added to the PCB design to allow the cable to pass through. There was a
minor concern that this would require slightly offsetting the feed point of
the antenna, which has also been shown to produce beam asymmetry. However,
simulations showed that as long as both the cable and the feed point were both
within approximately the balun diameter ($\mbox{$\sim$}1$ cm), which is
mechanically feasible, then the effects of both offsets are negligible. The
feed design has therefore been modified to allow the power cable to be exactly
on axis, and the feed point to be $<1$ cm offset from center.
#### 3.1.4 Dish Focal Ratio
Reflections within and between an interferometer’s antenna elements introduce
spectral structure in the antenna gain at frequency scales corresponding to
the time delay of the reflection. If these reflections extend to fine frequency
scales, they can cause the otherwise smooth continuum foregrounds to
contaminate the spectral scales important for cosmology. Lowering the focal
ratio reduces cross-coupling between feeds, reducing reflections between elements, but it also has the potential to increase reflections within a single antenna element.
We evaluated the relative delay performance of dishes with different focal ratios by running CST simulations of a plane-wave excitation of the dish and feed from zenith, and using the voltages in a 50 $\Omega$ termination of the feed to estimate the time-domain response kernel that leaks these foregrounds, following the formalism of Ewall-Wice et al.[25]. In order to evaluate the
relative contributions from inter-dish versus intra-dish reflections, we
perform our simulations with two different sets of boundary conditions. First,
we set our boundary conditions to be open, simulating a dish in isolation,
which gives us the contributions from reflections within the dish. Second, we
perform simulations with periodic rectangular boundary conditions to simulate
one of the HIRAX antennas embedded within an infinite rectangular packed
array. This response function corresponds to the spectral structure that would
appear in an autocorrelation for a sky with only a source at zenith. While
auto-correlations are not very sensitive to 21 cm fluctuations, the
simulations are useful for evaluating the relative levels of spectral
structure. The impact of intra-dish reflections on cross-correlations between
dishes is the subject of ongoing work.
In Figure 6, we compare the time-domain responses of antennas with different
focal ratios with open and periodic boundary conditions. Comparing the open
boundary curves, we see that decreasing the focal ratio worsens intra-dish
reflections by several dB. However, comparing the curves from periodic
boundary conditions, we see that inter-dish reflections dominate the time-
response of a single element, especially at large delays, which are most
sensitive to cosmology. From the perspective of reducing spectral structure,
it makes sense for us to decrease the focal ratio for a fixed dish diameter as
much as is allowed by mechanical constraints and cost. Reducing the focal
ratio also reduces the potential for line-of-sight noise coupling by reducing
$S_{31}$. We demonstrate this reduction in the left panel of Figure 7, where
we plot the $S_{31}$ coupling parameter between the parallel-polarized feeds
of two adjacent antennas.
However, reducing the focal ratio can also decrease the illumination of the
dish and degrade sensitivity. In the right panel of Figure 7, we see that the
aperture efficiency of the dish is moderately affected by reducing the focal
ratio. A focal ratio of 0.23 reduces $S_{31}$ by $\mbox{$\sim$}10$ dB relative
to 0.25, while only reducing aperture efficiency by $\mbox{$\sim$}5\%$. The
selected focal ratio value of 0.23 was optimal for minimizing $S_{31}$ and the
delay kernel amplitude at high delay ($\tau>50$ ns), within the constraints of
mechanical support and per-element cost.
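The geometry behind this trade-off follows from the standard paraboloid relation (a consistency check we add here, not a simulation output). For a dish of diameter $D$ and focal length $F$, the surface is $z(r)=r^{2}/(4F)$, so the depth at the rim is

$$z_{\mathrm{rim}}=\frac{(D/2)^{2}}{4F}=\frac{D^{2}}{16F}.$$

For $D=6$ m, f/D = 0.25 gives $F=1.50$ m and $z_{\mathrm{rim}}=1.50$ m, placing the focus exactly in the aperture plane, while f/D = 0.23 gives $F=1.38$ m and $z_{\mathrm{rim}}\approx 1.63$ m, recessing the feed roughly 0.25 m below the rim.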
Figure 6: The delay kernel of the HIRAX dish towards a plane-wave at zenith
with open boundary conditions (dashed lines) and periodic boundary conditions
with two meters between dish edges (solid lines). This kernel determines the
extent to which foregrounds are leaked from small line-of-sight Fourier modes
where they exist intrinsically to large line-of-sight Fourier modes. In order
to minimize foreground leakage and maximize our ability to recover
cosmological fluctuations, this kernel should be kept as narrow as possible in
delay. Different colors represent different values for the dish focal ratio
for a fixed dish diameter of six meters. Reducing the focal ratio deepens the
dish and lowers the feed below the rim. Comparing the open boundary curves, we
see that reducing the focal ratio increases intra-dish reflections, raising
the time-domain response by several dB. The curves with periodic boundaries
include reflections of the sky signal off of nearby antennas. While a higher
focal ratio is preferred for a single dish in isolation, we see that the
inter-dish reflections dominate over intra-dish reflections, especially at
higher delays. These inter-dish reflections are mitigated by lowering the
focal ratio. Thus, when considering the HIRAX dish embedded in an array, we
find that lowering the focal ratio has a net beneficial impact on the time-
domain response.
Figure 7: Comparison of nearest-neighbor crosstalk ($S_{31}$, left) and
aperture efficiencies (right) for a wide range of focal ratios. Decreasing the
focal ratio (deeper dishes) reduces crosstalk. Decreasing the focal ratio also
reduces the aperture efficiency, reducing overall system sensitivity.
Geometries below f/D$\sim$0.225 are also increasingly difficult to
mechanically support, requiring significantly more backing structure due to
the deep shape. The selected value of 0.23 reduces $S_{31}$ by
$\mbox{$\sim$}10$ dB relative to 0.25, while only reducing aperture efficiency
by $\mbox{$\sim$}5\%$.
### 3.2 Design Tolerance Simulations
In addition to the design selection and optimization simulations, an extensive
suite of simulations was performed exploring the effects of inaccuracies in
the manufacture or assembly of the instrument that degrade the redundancy of
the array, in order to place specifications on the components and assembly.
These simulations included, among others:
* •
Varying feed position in six axes (translations and rotations)
* •
Varying dish diameter, holding focal ratio constant
* •
Varying dish focal ratio, holding diameter constant
* •
Perforations in dish conductive surface
* •
Reduced dish surface conductivity
The resulting beams from these simulations were in turn used to mock-observe
simulated 21 cm skies, as described in Section 5, in order to examine the
effects of these mechanical errors on observations, and the recovered data
products. In addition to non-uniform manufacturing, edge effects due to the
finite extent of the array are also a source of non-redundancy. This category
of effects is not explored here, but will be the subject of future work.
#### 3.2.1 Feed Position
For all simulations, the coordinate system origin is at the nominal feed
position. The positive z-axis points from the feed to the dish vertex, the
x-axis is horizontal when the dish is pointed at the horizon, and the y-axis
is vertical. For the feed position simulations, a series of simulations was
performed for each axis, shifting the feed along that axis. The positions
were: from 0 mm to 10 mm from the nominal focal point in 1 mm steps along the
x and y-axes, and from -5 mm to 5 mm in 1 mm steps along the z-axis. For the
three rotational axes, the feed was rotated from $0^{\circ}$ to $5^{\circ}$ in
$0.5^{\circ}$ increments, around each axis.
The primary effect of translation along the z-axis is to throw the instrument
out of focus, which reduces gain, and alters beam sidelobe structure.
Translations in the other two dimensions primarily result in beam pointing
errors, and asymmetric sidelobes. Rotation around the z-axis does not change
the shape of the beam, but due to the non-azimuthally symmetric sidelobe does
result in relative changes in sidelobe amplitudes between pairs of dishes with
different rotations. Rotations around the x and y-axis primarily change the
pointing of the beam, and create asymmetric sidelobes. The resulting design
tolerance was $\pm 1$ mm in all linear translation directions, $\pm 1.5$
arcmin for the azimuthal rotation, and $\pm 2.5$ arcmin for the other rotations.
The summary of design specifications can be seen in Table 1.
#### 3.2.2 Dish Diameter
The dish diameter was varied from 599 cm to 601 cm in 1 mm steps, while
holding the focal ratio constant (so as not to conflate the results with those
of the focal ratio simulations). Changing the dish diameter throws the
instrument out of focus, and changes the beam width. We determined that the
accuracy, averaged over the array, must be within $\pm 3$ mm, and the
individual dish precision within $\pm 1$ mm.
#### 3.2.3 Focal Ratio
The focal ratio was varied from f/D = 0.245 to 0.255 in steps of 0.001, while
holding the dish diameter fixed at 6 m. Changing the focal ratio changes the
aperture efficiency, and can alter the sidelobe levels. These simulations were
distinct from those exploring the focal ratio in Section 3.1, as they were
intended to explore fine variations around a nominal value, to constrain
manufacturing tolerances, not to explore large changes in the nominal value.
These simulations contributed to the dish shape accuracy specification in
Table 1 ($\pm 3$ mm maximum deviation from the ideal paraboloid
curve).
#### 3.2.4 Dish Perforations
Radio dishes are often constructed of metal mesh instead of solid metal
panels, for purposes of lightweighting, lowering wind cross section, and cost
reduction. As long as the gaps in the mesh are substantially smaller than the
wavelength, there is no reduction in the performance, but as the gap size
approaches the wavelength, the dish becomes increasingly transparent. A common
criterion for the maximum gap size is $\lambda/10$. We explored the effects
of hole size on the instrument beams in order to set specifications on what
types of mesh were permissible and to regulate production errors resulting in
accidental perforations or tears in the dish surface. As a first-pass method
of simulating this effect, the dish was perforated with a regular array of
circular holes, arranged in 16 equally spaced azimuthal angles ($22.5^{\circ}$
separation), and 6 radial distances from the center of the dish with 25 cm
spacing, for a total of 96 perforations in the dish. The hole diameter was
varied from 0.5 cm to 5.0 cm, in 5 mm steps.
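For reference, the $\lambda/10$ criterion at the band edges follows from simple arithmetic:

$$\frac{\lambda}{10}=\frac{c}{10\nu}=7.5\ \mathrm{cm}\ \ (400\ \mathrm{MHz}),\qquad \approx 3.8\ \mathrm{cm}\ \ (800\ \mathrm{MHz}),$$

so the 5 mm specification ultimately adopted (see below) corresponds to roughly $\lambda/75$ even at the top of the band.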
This method was used because a realistically large number of holes in the dish
would vastly increase the complexity of the meshing, and therefore the
simulation run times. The holes were restricted to a central 1.5 m radius of
the dish to again simplify the model: at this diameter holes that are circular
in cross section and projected from a plane normal to the dish vertex are not
significantly distorted by the curvature of the dish. If holes were projected
out to the full 3 m radius of the dish, they would be extremely elongated in
the azimuthal direction, and instead of exploring one hole radius, we would be
exploring a radially and azimuthally varying range of values. Altering the
projection angle of the holes with radius would allow the full surface
area to be perforated, and will be explored in future analysis.
Perforations create additional structure within the beam, which increases with
hole size and with frequency. Larger holes also decrease gain and increase
power received from the ground (though this effect was not modeled in our
simulations). The resulting specification on perforation size from this
analysis (5 mm maximum gap dimension, see Table 1) is significantly stronger
than the $\lambda/10$ criterion (as large as 7.5 cm at 400 MHz). In future this
set of simulations will be
expanded to a more realistic model using a pseudo-randomly placed array of
perforations over a wider area of the dish.
#### 3.2.5 Conductivity
Lastly, we simulated different conductivity values of the dish’s reflective
surface to determine any impact on the beams (if for example, the metal mesh
used in the surface was excessively fine, and the resistance per strand
increased); or if a non-uniformity in the conductivity would have an effect on
the beams (as if the mesh density varied, or if different metals were used in
different parts of the dish). Two sets of simulations were run to explore
these possibilities, one in which the conductivity of the entire dish was
varied, and one in which the dish was modeled as two halves with different
conductivities. The latter represents a limiting case, and the beam effects
calculated from a dish surface with maximally inhomogeneous conductivity can
be interpreted as an upper limit. In the first set of simulations, the
conductivity was varied from $6\times 10^{7}$ S/m (approximately the
conductivity of copper) to $1\times 10^{6}$ S/m (roughly the conductivity of
stainless steel, and the lowest conductivity of materials one might reasonably
use) in steps of $5\times 10^{6}$ S/m. In the second set of simulations, one
half of the dish was held at $6\times 10^{7}$ S/m, while the other was stepped
down from $6\times 10^{7}$ S/m in increments of $5\times 10^{6}$ S/m to
$1\times 10^{6}$ S/m. This series of simulations showed essentially no changes
in the beam over the range of explored values. This is in agreement with
experimental measurements of the radiation efficiency of dipole and meander
antennas as a function of conductivity, which show no change in efficiency
until the conductivity is below $1\times 10^{6}$ S/m [26]. The model of
conductivity variations employed in these simulations is a simplified model of
the large possible parameter space of variations in the dish surface
conductivity, and will be expanded upon in future simulations.
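This null result is consistent with a simple skin-effect estimate (an order-of-magnitude check we add for context, not part of the simulations). At mid-band, $\nu=600$ MHz, even the lowest conductivity considered gives

$$\delta=\sqrt{\frac{2}{\omega\mu_{0}\sigma}}\approx 21\ \mu\mathrm{m},\qquad R_{s}=\frac{1}{\sigma\delta}\approx 0.05\ \Omega/\mathrm{sq}\qquad(\sigma=1\times 10^{6}\ \mathrm{S/m}),$$

a surface resistance negligible compared to the 377 $\Omega$ impedance of free space, so ohmic losses in the reflector remain imperceptible in the beam.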
Table 1: HIRAX Telescope Element Specifications
Element | Specification | Notes
---|---|---
Axial symmetry of receiver support | $\pm 1$ mm |
Receiver support RF attenuation | $<0.5$ dB |
Deviation of power cabling from boresight | $\pm 2$ mm |
Rigidity of receiver support | $\pm 0.5$ mm | In x, y, and z dimensions
Positioning of receiver relative to focal point | $\pm 0.5$ mm | In x, y, and z dimensions
Orientation of receiver relative to boresight | $\pm 2.5$ arcmin | Polar angle
 | $\pm 1.5$ arcmin | Azimuthal angle
Dish diameter | $\pm 3$ mm | Accuracy
 | $\pm 1$ mm | Precision
Dish shape accuracy | $\pm 3$ mm | Deviation from ideal paraboloid
Dish electrical connectivity | $<5$ mm | Maximum dimension of gaps
Dish surface conductivity | $>1\times 10^{6}$ S/m |
* Instrument specifications determined by EM simulations. This is a subset of the total system specifications.
## 4 Photogrammetry Measurements
The specifications outlined above are significantly more stringent than those
typically required for radio telescopes at these wavelengths, due to the
increased accuracy necessary to control array redundancy. Achieving these
specifications within cost is a significant engineering challenge.
Photogrammetry measurements of our first prototype dishes were performed to
ascertain whether the specifications had been met, and to inform the design
and production of future dishes. The results show that already in the first
prototype we are close to achieving the specified goals, though improvements
still need to be made.
The depth of the dishes is designed to reduce crosstalk between the elements
in the close-packed array, but this is an uncommon design for radio telescopes
(though similarly deep dishes have been used for other close-packed radio
arrays such as HERA [21], CHIME [20], and CHORD [27]), and requires greater
support than more typical shallow profile dishes. Figure 8 shows a prototype
dish located at the Hartebeesthoek Radio Astronomy Observatory (HartRAO). The
pictured prototype dish was manufactured by MMS Technology Ltd. in South
Africa (mmstechnology.co.za). A second metal dish prototype was constructed
jointly by NVJ and Rebcon Engineering, but is not discussed here.
Figure 8: A 6 m diameter f/D = 0.25 prototype dish at the Hartebeesthoek Radio
Astronomy Observatory in South Africa. The dish is aluminum embedded in
fiberglass composite, and the receiver is supported from the dish vertex by a
10 cm diameter fiberglass column. HIRAX will operate in drift scan, so the
mount has only an elevation axis, which allows for pointings within $\pm
30^{\circ}$ of zenith. PI for scale.
Photogrammetry measurements were performed on the prototype dish in Figure 8
to assess the accuracy of the dish surface and to compare gravitational
deflections to those predicted by mechanical finite element analysis (FEA)
simulations. Figure 9 shows FEA simulations of the f/D = 0.25 dish, and the
corresponding photogrammetry results, for zenith angles of $0^{\circ}$ and
$30^{\circ}$. (Dish FEA simulations were performed by MMS Technology (Pty)
Ltd., figure courtesy of Heinrich Bauermeister; photogrammetry was performed
by the South African Radio Astronomy Observatory (SARAO), figure courtesy of
Mattieu de Villiers.)
The photogrammetry data points are fit to a rotationally symmetric paraboloid,
and the plots show the data residuals with respect to this fit. The dominant
deformation is the quadrupolar “potato chip” mode, which arises from the
cross-shaped dish backing structure and increases in amplitude with zenith
angle. The FEA simulations confirm the quadrupolar pattern of the distortions
under gravitational load; however, the expected amplitudes are several times
smaller than those observed in measurements. Deformations at zenith pointing
are predicted to be less than 1 mm, but are measured to have a maximum
amplitude of 5 mm. At $30^{\circ}$ zenith angle the maximum amplitude
increases to 4 mm in the model and 8 mm in the measured data.
The deformations in both the zenith and $30^{\circ}$ pointings are also most
significant towards the edge of the dish. This may be an indication that a
more substantial backing structure is necessary to support the large diameter
of the dish.
Figure 9: HIRAX prototype dish photogrammetry and FEA simulations. The top row
shows FEA simulations with predicted total deformations, while the bottom row
shows the measured deviations from the theoretical paraboloid surface,
measured normal to the surface. The left column shows zenith pointing, the
right column shows $30^{\circ}$ elevation below zenith, the lowest pointing
for the HIRAX survey. The measured deformations, while close to the design
specifications, are several times larger than those predicted by the
simulations. Dish FEA simulations were performed by MMS Technology (Pty) Ltd.,
and figures are courtesy of Heinrich Bauermeister. Photogrammetry was
performed by the South African Radio Astronomy Observatory (SARAO), and
figures are courtesy of Mattieu de Villiers.
Figure 10 shows the dish surface displacements after the best-fit paraboloid
and quadrupole are subtracted at three different angles: $0^{\circ}$ (zenith),
$15^{\circ}$, and $30^{\circ}$. The residual displacements are less than 1 mm,
showing the dish surface shape is well described by the combination of the
ideal paraboloid and a quadrupolar mode. Figure 10 also shows the differences
between pairs of angles, which reveal the systematic changes in the dish
surface as the elevation angle is adjusted. These differences are larger, with
maximum amplitudes between one and two millimeters. This is further
evidence that the dish needs additional support structure to reduce the
variations in dish shape as it is tilted.
These data inform our future design and production methods: while close to
acceptable limits, the dish surface needs to be more precise, and the backing
structure may need to be improved as well. The dish at zenith has a measured
RMS surface deviation of 2.8 mm, and the requirement is $<1$ mm RMS deviation
from the ideal paraboloid surface. We are confident that the mechanical
improvements required to achieve the design specifications are feasible. The
derivation of the specifications is discussed below.
Figure 10: Residual displacements after subtracting best-fit paraboloid and
quadrupole. Color scales are in millimeters. Top row (left to right):
$0^{\circ}$ (zenith), $15^{\circ}$, and $30^{\circ}$. Deviations from the
model are below 1 mm, showing that the surface is well described by a
paraboloid plus a quadrupolar distortion arising from the backing structure.
Bottom row, differences between residuals at angle pairs, showing the
systematic changes in the dish surface as the dish is tilted (left to right):
$0^{\circ}$ and $15^{\circ}$, $0^{\circ}$ and $30^{\circ}$, and $15^{\circ}$
and $30^{\circ}$.
## 5 Implications for 21 cm Cosmology
In order to examine the effects of beam shape systematics related to the
mechanical specification of the dishes, we have propagated visibility
perturbations derived from the CST simulations described above through to 21
cm power spectrum errors. The method for this is outlined briefly here but
will be more comprehensively described in future work.
Firstly, we construct a summarised representation of the deviations observed
in the beam shapes in the CST parameter sweep simulations outlined in Section
3.2. This is done by applying a principal component analysis (PCA) to the
observed radial-profile deviations of the beam directivities exported from the
CST simulations (currently only deviations in the co-pol. beams are
considered). From this analysis, we select the two modes (functions of frequency
and radial angular coordinate) that encapsulate the most variance in the beam
radial profile for a given parameter in the dish-feed system that has been
swept. The appropriately weighted linear combination of these modes then
represents an approximation of the linear-order perturbation of the beam
shape, which can be expressed as a derivative with respect to the physical
parameter under consideration. With this linear approximation of the effect of
varying a given physical parameter on the beam shape, we can efficiently
generate realisations of perturbed beams motivated by what is observed in
the CST simulation parameter sweeps.
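A minimal sketch of this mode extraction via an SVD-based PCA is shown below; the array shapes and the random stand-in for the exported profiles are illustrative assumptions, not the actual simulation outputs.

```python
import numpy as np

# Hypothetical stack of beam radial-profile deviations exported from the
# CST parameter sweep: n_sims simulations x (n_freq * n_theta) samples.
rng = np.random.default_rng(0)
n_sims, n_freq, n_theta = 20, 32, 64
profiles = rng.normal(size=(n_sims, n_freq * n_theta))   # stand-in data

# Remove the mean beam profile, then PCA via SVD of the deviation matrix.
deviations = profiles - profiles.mean(axis=0)
U, S, Vt = np.linalg.svd(deviations, full_matrices=False)

# Keep the two modes capturing the most variance, as in the text; each mode
# is a function of frequency and radial angular coordinate.
modes = Vt[:2].reshape(2, n_freq, n_theta)
weights = U[:, :2] * S[:2]       # per-simulation mode amplitudes

# A perturbed beam is then approximated to linear order as
#   beam ~ mean + w1 * mode1 + w2 * mode2
explained = (S[:2] ** 2).sum() / (S ** 2).sum()
print(f"variance captured by 2 modes: {explained:.2%}")
```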
To evaluate a potential systematic’s effect on 21 cm cosmology, we generate
random realisations of the perturbed beams based on a distribution of physical
parameters across the dishes of the array. Figure 11 shows the effect of such
perturbations in the feed position along the focal axis relative to the dish,
$\delta z$, on the recovered 21 cm power spectrum. Here we have assumed the
distribution of $\delta z$ across dishes is that of a Gaussian with standard
deviation of 10 mm. Using the perturbed beams we first generate visibility-
space perturbations in simulated data assuming a nominal sky model including
galactic foregrounds and cosmological 21 cm signal, and a nominal instrument
and survey. These perturbed visibilities are then used to construct a
foreground filtered power spectrum estimate using a modified version of the
$m$-mode formalism [28, 29]. Figure 11 shows the deviations of the
recovered power spectrum with respect to the input 21 cm power spectrum due to
these systematic perturbations.
Currently this effort has only considered small scale simulations of the array
(limited to core baselines and subsets of the full frequency range), and
radially symmetric perturbations in the co-pol. beam shapes. Future work will
extend this to full scale simulations using a less summarised prescription for
the beam deviations as well as incorporating full polarization information.
These results have been used to inform the telescope specifications listed in
Table 1. For this purpose we first determined our tolerance on
systematic contributions to the required power spectrum sensitivity from high-
level Fisher matrix forecasts on Dark Energy parameter constraints. This
tolerance was set such that the impact of the systematic effects under
evaluation would reduce the Fisher-forecasted Dark Energy Figure of Merit
(FoM) by no more than 20% of the fiducial forecasted level that assumes a
power spectrum noise comprised of purely statistical noise with a nominal
residual foreground contribution. Table 1 represents a mechanically
achievable set of specifications that is consistent with this tolerance, as
informed by the simulations described here. A more quantitative
description of this process will be presented in future work.
Figure 11: Deviations in the recovered 21 cm power spectrum due to a single
realization of systematic beam perturbations due to Gaussian distributed
positional offsets (with standard deviation=10 mm) of the feed along the focal
axis, as calibrated by CST simulations. The results shown here are scaled by
the estimated statistical noise on the power spectrum bins such that values of
magnitudes greater than one would indicate systematic contributions larger
than the estimated statistical noise level. The simulations from which these
results were generated consider only the frequency range of 600-650 MHz and
utilises only a subset of the baselines available to HIRAX, but assumes the
same level of redundancy as in the 1024 element array. Upcoming publications
will expand upon these results.
## 6 Summary and Future Work
The simulations and verification measurements performed for this work have
significantly impacted the design of the HIRAX array. The cabling and receiver
support structure were completely re-designed to improve RF performance and
mechanical stability. Routing the cabling precisely along the telescope axis
of symmetry reduces sidelobe amplitudes by 7 dB from the previous prototype
design, and eliminates asymmetries in the sidelobe structure. Selecting a
large (35 cm) diameter receiver support column over a small (10 cm) support
column increases the main beam amplitude by 0.5 dB. The focal ratio was
adjusted, resulting in a significant decrease in crosstalk. Decreasing the
focal ratio from 0.25 to 0.23 reduces inter-dish reflections by up to 5 dB, as
measured by the delay kernel, and line-of-sight noise coupling ($S_{31}$) by
up to 10 dB, while only reducing aperture efficiency by $5\%$, from
approximately $60\%$ to $55\%$. We have also conducted extensive range
measurements of the HIRAX feed to verify the accuracy of our simulations, and
are preparing for field measurements of the full 6 m dish elements with a
drone based calibration platform.
We have traced the effects of the modeled system non-redundancies to recovered
cosmological parameters, and have derived specifications for the manufacture
and assembly of the dish from those cosmological simulations, as summarized in
Table LABEL:tab:specs. This is of particular importance for the 21 cm
cosmology field: this is the first time the redundancy requirements of a 21 cm
array have been explicitly derived from the telescope mechanical and
electromagnetic performance to the cosmology simulations pipeline.
Future simulations will continue to examine the effects of intra-dish and
inter-dish reflections on array performance, will expand the simulation models
of dish perforations and conductivity beyond the first-pass models used here,
and examine the effects of large scale (quadrupolar) and small scale (RMS)
deformations of the dish surface on the instrument beams and survey
cosmological constraints. Future publications will expand on the details of
the HIRAX electromagnetic simulations, beam verification measurements, and
cosmological simulations. There is currently a tender out for production of
the dishes for the HIRAX 256-element array, based in part on the
specifications derived here, and construction of the array will begin once the
tender is complete, in 2021.
###### Acknowledgements.
This work was supported by the National Science Foundation (NSF) under Grant
No. 1751763. AEW acknowledges support from the Berkeley Center of Cosmological
Physics. ERK is supported by a NASA Space Technology Research Fellowship
(NSTRF). KM acknowledges support from the National Research Foundation (NRF)
of South Africa. The HIRAX radio array is funded by the NRF. DC acknowledges
the financial assistance of the South African Radio Astronomy Observatory
(SARAO, www.sarao.ac.za). Results presented here were produced with the
support of the computing and technical resources of the Yale Center for
Research Computing and Wright Laboratory, and with the support of personnel
and facilities at NCSU, Caltech, and NASA JPL.
## References
* [1] Weinberg, D. H., Mortonson, M. J., Eisenstein, D. J., Hirata, C., Riess, A. G., and Rozo, E., “Observational probes of cosmic acceleration,” Physics Reports 530, 87–255 (Sept. 2013).
* [2] Bassett, B. A. and Hlozek, R., [Dark Energy ], ch. Baryon Acoustic Oscillations, Cambridge University Press (2010).
* [3] Kogut, A. et al., “Three-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Foreground Polarization,” Astrophys. J. 665, 355–362 (2007).
* [4] Eisenstein, D. J. et al., “Detection of the Baryon Acoustic Peak in the Large‐Scale Correlation Function of SDSS Luminous Red Galaxies,” The Astrophysical Journal 633, 560–574 (Nov 2005).
* [5] Percival, W. J. et al., “Measuring the Baryon Acoustic Oscillation scale using the Sloan Digital Sky Survey and 2dF Galaxy Redshift Survey,” Monthly Notices of the Royal Astronomical Society 381, 1053–1066 (Sep 2007).
* [6] Abbott, T. M. C. et al., “Dark Energy Survey Year 1 results: measurement of the baryon acoustic oscillation scale in the distribution of galaxies to redshift 1,” Monthly Notices of the Royal Astronomical Society 483, 4866–4883 (Dec 2018).
* [7] Chang, T.-C., Pen, U.-L., Peterson, J. B., and McDonald, P., “Baryon Acoustic Oscillation Intensity Mapping as a Test of Dark Energy,” Phys. Rev. Lett. 100, 091303 (2008).
* [8] The CHIME/FRB Collaboration, “Observations of fast radio bursts at frequencies down to 400 megahertz,” Nature 566, 230–234 (Jan 2019).
* [9] The CHIME/FRB Collaboration, “A second source of repeating fast radio bursts,” Nature 566, 235–238 (Jan 2019).
* [10] The CHIME/FRB Collaboration, “CHIME/FRB Detection of Eight New Repeating Fast Radio Burst Sources,” The Astrophysical Journal Letters 885(1) (2019).
* [11] Fonseca, E. et al., “Nine New Repeating Fast Radio Burst Sources from CHIME/FRB,” The Astrophysical Journal 891, L6 (Feb 2020).
* [12] The CHIME/FRB Collaboration, “Periodic activity from a fast radio burst source,” Nature 582, 351–355 (Jun 2020).
* [13] Swetz, D. S. et al., “The Atacama Cosmology Telescope: The Receiver and Instrumentation,” The Astrophysical Journal Supplement Series 194 (2018).
* [14] Benson, B. A. et al., “SPT-3G: A Next-Generation Cosmic Microwave Background Polarization Experiment on the South Pole Telescope,” Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series 9153 (July 2014).
* [15] Levi, M. et al. (DESI Collaboration), “The DESI Experiment, a whitepaper for Snowmass 2013,” (2013).
* [16] Brough, S. et al., “The Vera Rubin Observatory Legacy Survey of Space and Time and the Low Surface Brightness Universe,” (2020).
* [17] Gupta, N. et al., “The MeerKAT Absorption Line Survey (MALS),” (2017).
* [18] Kuhn et al., “Noise temperature testing for the Hydrogen Intensity and Real-time Analysis eXperiment (HIRAX),” Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series 11445 (Dec. 2020).
* [19] Newburgh, L. B. et al., “HIRAX: A Probe of Dark Energy and Radio Transients,” Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series 9906 (July 2016).
* [20] Bandura, K. et al., “Canadian hydrogen intensity mapping experiment (chime) pathfinder,” Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series 9145 (July 2014).
* [21] DeBoer, D. R. et al., “Hydrogen Epoch of Reionization Array (HERA),” Publications of the Astronomical Society of the Pacific 129, 045001 (Mar 2017).
* [22] Deng, M., Campbell-Wilson, D., and the CHIME Collaboration, “The cloverleaf antenna: A compact wide-bandwidth dual-polarization feed for chime,” International Symposium on Antenna Technology and Applied Electromagnetics (ANTEM) 16, IEEE (July 2014).
* [23] O’Connor, P. et al., “The Baryon Mapping Experiment (BMX), a 21cm intensity mapping pathfinder,” Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series 11445 (Dec. 2020).
* [24] King, O. G. et al., “The C-Band All-Sky Survey: instrument design, status, and first-look data,” Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series 7741 (Jul 2010).
* [25] Ewall-Wice, A. et al., “The Hydrogen Epoch of Reionization Array Dish. II. Characterization of Spectral Structure with Electromagnetic Simulations and Its Science Implications.,” The Astrophysical Journal 831, 196 (Nov. 2016).
* [26] Shahpari, M. and Thiel, D. V., “The impact of reduced conductivity on the performance of wire antennas,” IEEE Transactions on Antennas and Propagation 63, 4686–4692 (Nov 2015).
* [27] Vanderlinde, K. et al., “LRP 2020 Whitepaper: The Canadian Hydrogen Observatory and Radio-transient Detector (CHORD),” (2019).
* [28] Shaw, J. R., Sigurdson, K., Pen, U.-L., Stebbins, A., and Sitwell, M., “All-sky Interferometry with Spherical Harmonic Transit Telescopes,” Astrophysical Journal 781, 57 (Feb. 2014).
* [29] Shaw, J. R., Sigurdson, K., Sitwell, M., Stebbins, A., and Pen, U.-L., “Coaxing cosmic 21 cm fluctuations from the polarized sky using m -mode analysis,” Phys. Rev. D 91, 083514 (Apr. 2015).
# Advances In Video Compression System Using Deep Neural Network: A Review And
Case Studies
Dandan Ding⋆, Zhan Ma⋆, Di Chen, Qingshuang Chen, Zoe Liu, and Fengqing Zhu
D. Ding is with the School of Information Science and Engineering, Hangzhou
Normal University, Hangzhou, Zhejiang, China. Z. Ma is with the School of
Electronic Science and Engineering, Nanjing University, Nanjing, Jiangsu,
China. D. Chen, Q. Chen, and F. Zhu are with the School of Electrical and
Computer Engineering, Purdue University, West Lafayette, Indiana, USA. Z. Liu
is with Visionular Inc., 280 2nd St., Los Altos, CA, USA. ⋆These authors
contributed equally.
###### Abstract
Significant advances in video compression systems have been made in the past
several decades to satisfy the nearly exponential growth of Internet-scale
video traffic. From the application perspective, we have identified three
major functional blocks including pre-processing, coding, and post-processing,
that have been continuously investigated to maximize the end-user quality of
experience (QoE) under a limited bit rate budget. Recently, artificial
intelligence (AI) powered techniques have shown great potential to further
increase the efficiency of the aforementioned functional blocks, both
individually and jointly. In this article, we review extensively recent
technical advances in video compression system, with an emphasis on deep
neural network (DNN)-based approaches; and then present three comprehensive
case studies. On pre-processing, we show a switchable texture-based video
coding example that leverages DNN-based scene understanding to extract
semantic areas for the improvement of subsequent video coder. On coding, we
present an end-to-end neural video coding framework that takes advantage of
the stacked DNNs to efficiently and compactly code input raw videos via fully
data-driven learning. On post-processing, we demonstrate two neural adaptive
filters to respectively facilitate the in-loop and post filtering for the
enhancement of compressed frames. Finally, a companion website hosting the
contents developed in this work can be accessed publicly at
https://purdueviper.github.io/dnn-coding/.
###### Index Terms:
Deep Neural Networks, Texture Analysis, Neural Video Coding, Adaptive Filters
## I Introduction
In recent years, Internet traffic has been dominated by a wide range of
applications involving video, including video on demand (VOD), live streaming,
ultra-low latency real-time communications, etc. With ever increasing demands
in resolution (e.g., 4K, 8K, gigapixel [1], high speed [2]) and fidelity
(e.g., high dynamic range [3], and higher bit precision or bit depth [4]),
more efficient video compression is imperative for content transmission and
storage, by which networked video services can be successfully deployed.
Fundamentally, video compression systems devise appropriate algorithms to
minimize the end-to-end reconstruction distortion (or maximize the quality of
experience (QoE)), under a given bit rate budget. This is a classical rate-
distortion (R-D) optimization problem. In the past, the majority of effort had
been focused on the development and standardization of video coding tools for
optimized R-D performance, such as the intra/inter prediction, transform,
entropy coding, etc., resulting in a number of popular standards and
recommendation specifications (e.g., ISO/IEC MPEG series [5, 6, 7, 8, 9, 10,
11], ITU-T H.26x series [9, 12, 13, 10, 11], AVS series [14, 15, 16], as well
as the AV1 [17, 18] from the Alliance of Open Media (AOM)[19]). All these
standards have been widely deployed in the market and enabled advanced and
high-performing services to both enterprises and consumers. They have been
adopted to cover all major video scenarios from VOD, to live streaming, to
ultra-low latency interactive real-time communications, used for applications
such as telemedicine, distance learning, video conferencing, broadcasting,
e-commerce, online gaming, short video platforms, etc. Meanwhile, the system
R-D efficiency can also be improved from pre-processing and post-processing,
individually and jointly, for content adaptive encoding (CAE). Notable
examples include saliency detection for subsequent region-wise quantization
control, and adaptive filters to alleviate compression distortions [20, 21,
22].
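In its standard Lagrangian form (shown here for context), the constrained problem over coding decisions $\theta$ (modes, partitions, quantizers) under a bit budget $R_{T}$,

$$\min_{\theta}\ D(\theta)\quad\mathrm{s.t.}\quad R(\theta)\leq R_{T},$$

is relaxed to the unconstrained objective $J(\theta)=D(\theta)+\lambda R(\theta)$, where the multiplier $\lambda$ selects the operating point on the R-D curve.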
In this article, we therefore consider pre-processing, coding, and post-
processing as three basic functional blocks of an end-to-end video compression
system, and optimize them to provide a compact and high-quality representation
of the original input video.
* •
The “coding” block is the core unit that converts raw pixels or pixel blocks
into a compact binary representation. Over the past decades, the “coding” R-D
efficiency has been gradually improved by introducing more advanced tools to
better exploit spatial, temporal, and statistical redundancy [23].
Nevertheless, this process inevitably incurs compression artifacts, such as
blockiness and ringing, due to the R-D trade-off, especially at low bit rates.
* •
The “post-processing” block is introduced to alleviate visually perceptible
impairments produced as byproducts of coding. Post-processing mostly relies on
the designated adaptive filters to enhance the reconstructed video quality or
QoE. Such “post-processing” filters can also be embedded into the “coding”
loop to jointly improve reconstruction quality and R-D efficiency, e.g., in-
loop deblocking [24] and sample adaptive offset (SAO) [25];
* •
The “pre-processing” block exploits the discriminative content preference of
the human visual system (HVS), caused by the non-linear response and frequency
selectivity (e.g., masking) of visual neurons in the visual pathway. Pre-
processing can extract content semantics (e.g., saliency, object instance) to
improve the psychovisual performance of the “coding” block, for example, by
allocating unequal qualities (UEQ) across different areas according to pre-
processed cues [26]. (Although adaptive filters can also be used in pre-
processing for pre-filtering, e.g., denoising, motion deblurring, contrast
enhancement, edge detection, etc., our primary focus in this work will be on
semantic content understanding for subsequent intelligent “coding”.)
Building upon the advancements in deep neural networks (DNNs), numerous
video processing algorithms have recently been greatly improved to
achieve superior performance, mostly by leveraging the powerful nonlinear
representation capacity of DNNs. At the same time, we have also witnessed an
explosive growth in the invention of DNN-based techniques for video
compression from both academic research and industrial practices. For example,
DNN-based filtering in post-processing was extensively studied when developing
the VVC standard under the joint task force of ISO/IEC and ITU-T experts over
the past three years. More recently, the standard committee issued a Call-for-
Evidence (CfE) [27, 28] to encourage the exploration of deep learning-based
video coding solutions beyond VVC.
In this article, we discuss recent advances in pre-processing, coding, and
post-processing, with particular emphasis on the use of DNN-based approaches
for efficient video compression. We aim to provide a comprehensive overview to
bring readers up to date on recent advances in this emerging field. We also
suggest promising directions for further exploration. As summarized in Fig. 1,
we first dive into video pre-processing, emphasizing the analysis and
application of content semantics, e.g., saliency, object, texture
characteristics, etc., to video encoding. We then discuss recently-developed
DNN-based video coding techniques for both modularized coding tool development
and end-to-end fully learned framework exploration. Finally, we provide an
overview of the adaptive filters that can be either embedded in codec loop, or
placed as a post enhancement to improve final reconstruction. We also present
three case studies, including 1) _switchable texture-based video coding_ in
pre-processing; 2) _end-to-end neural video coding_ ; and 3) _efficient neural
filtering_ , to provide examples of the potential of DNNs to improve both
subjective and objective efficiency over traditional video compression
methodologies.
Figure 1: Topic Outline. This article reviews DNN-based techniques used in
pre-processing, coding, and post-processing of a practical video compression
system. The “pre-processing” module leverages content semantics (e.g.,
texture) to guide video coding, followed by the “coding” step to represent the
video content using more compact spatio-temporal features. Finally, quality
enhancement is applied in “post-processing” to improve reconstruction quality
by alleviating processing artifacts. Companion case studies are respectively
offered to showcase the potential of DNN algorithms in video compression.
The remainder of the article is organized as follows: From Section II to IV,
we extensively review the advances in respective pre-processing, coding, and
post-processing. Traditional methodologies are first briefly summarized, and
then DNN-based approaches are discussed in detail. As case studies, we
propose three neural approaches in Section V, VI, and VII, respectively.
Regarding pre-processing, we develop a CNN-based texture analysis/synthesis
scheme for the AV1 codec. For video compression, an end-to-end neural coding
framework is developed. In our discussion of post-processing, we present
different neural methods for in-loop and post filtering that can enhance the
quality of reconstructed frames. Section VIII summarizes this work and
discusses open challenges and future research directions. For your
convenience, Table I provides an overview of abbreviations and acronyms that
are frequently used throughout this paper.
TABLE I: Abbreviations and Annotations
Abbreviation | Description
---|---
AE | AutoEncoder
CNN | Convolutional Neural Network
CONV | Convolution
ConvLSTM | Convolutional LSTM
DNN | Deep Neural Network
FCN | Fully-Connected Network
GAN | Generative Adversarial Network
LSTM | Long Short-Term Memory
RNN | Recurrent Neural Network
VAE | Variational AutoEncoder
BD-PSNR | Bjøntegaard Delta PSNR
BD-Rate | Bjøntegaard Delta Rate
GOP | Group of Pictures
MS-SSIM | Multiscale SSIM
MSE | Mean Squared Error
PSNR | Peak Signal-to-Noise Ratio
QP | Quantization Parameter
QoE | Quality of Experience
SSIM | Structural Similarity Index
UEQ | UnEqual Quality
VMAF | Video Multi-Method Assessment Fusion
AV1 | AOMedia Video 1
AVS | Audio Video Standard
H.264/AVC | H.264/Advanced Video Coding
H.265/HEVC | H.265/High-Efficiency Video Coding
VVC | Versatile Video Coding
AOM | Alliance of Open Media
MPEG | Moving Picture Experts Group
## II Overview of DNN-based Video Pre-processing
Pre-processing techniques are generally applied prior to the video coding
block, with the objective of guiding the video encoder to remove psychovisual
redundancy and to maintain or improve visual quality, while simultaneously
lowering bit rate consumption. One category of pre-processing techniques is
the execution of pre-filtering operations. Recently, a number of deep
learning-based pre-filtering approaches have been adopted for targeted coding
optimization. These include denoising [29, 30], motion deblurring [31, 32],
contrast enhancement [33], edge detection [34, 35], etc. Another important
topic area is closely related to the analysis of video content semantics,
e.g., object instance, saliency attention, texture distribution, etc., and its
application to intelligent video coding. For the sake of simplicity, we refer
to this group of techniques as “pre-processing” for the remainder of this
paper. In our discussion below, we also limit our focus to saliency-based and
analysis/synthesis-based approaches.
### II-A Saliency-Based Video Pre-processing
#### II-A1 Saliency Prediction
Saliency is the quality of being particularly noticeable or important. Thus,
the salient area refers to the region of an image that predominantly attracts the
attention of subjects. This concept corresponds closely to the highly
discriminative and selective behaviour displayed in visual neuronal processing
[36, 37]. Content feature extraction, activation, suppression and aggregation
also occur in the visual pathway [38].
Earlier attempts to predict saliency typically utilized handcrafted image
features, such as color, intensity, and orientation contrast [39]; motion
contrast [40]; and camera motion [41].
Later on, DNN-based semantic-level features were extensively investigated for
both image content [42, 43, 44, 45, 46, 47, 48] and video sequences [49, 50,
51, 52, 53, 54, 55]. Among these features, image saliency prediction only
exploits spatial information, while video saliency prediction often relies on
spatial and temporal attributes jointly. One typical example of video saliency
is a moving object that incurs spatio-temporal dynamics over time, and is
therefore more likely to attract users’ attention. For example, Bazzani _et
al._ [49] modeled the spatial relations in videos using 3D convolutional
features and the temporal consistency with a convolutional long short-term
memory (LSTM) network. Bak _et al._ [50] applied a two-stream network that
exploited different fusion mechanisms to effectively integrate spatial and
temporal information. Sun _et al._ [51] proposed a step-gained FCN to combine
the time-domain memory information and space-domain motion components. Jiang
_et al._ [52] developed an object-to-motion CNN that was applied together with
an LSTM network. All of these efforts to efficiently predict video saliency
leveraged spatio-temporal attributes. More details regarding the spatio-
temporal saliency models for video content can be found in [56].
#### II-A2 Salient Object
One special example of image saliency involves the object instance in a visual
scene, specifically the moving object in videos. A simple yet effective
solution to the problem of predicting image saliency in this case involves
segmenting foreground objects and background components.
The segmentation of foreground objects and background components has mainly
relied on foreground extraction or background subtraction. For example, motion
information has frequently been used to mask out foreground objects [57, 58,
59, 60, 61].
Recently, both CNN and foreground attentive neural network (FANN) models have
been developed to perform foreground segmentation [62, 63]. In addition to
conventional Gaussian mixture model-based background subtraction, recent
explorations have also shown that CNN models could be effectively used for the
same purpose [64, 65]. To address these separated foreground objects and
background attributes, Zhang _et al._ [66] introduced a new background mode to
more compactly represent background information with better R-D efficiency. To
the best of our knowledge, such foreground object/background segmentation has
been mostly applied in video surveillance applications, where the visual scene
lends itself to easier separation.
#### II-A3 Video Compression with UEQ Scales
Recalling that saliency or a salient object refers to more visually attentive
areas, it is straightforward to apply a UEQ setting in a video encoder, where
light compression is used to encode the salient area, while heavy compression is
used elsewhere. Use of this technique often results in a lower level of total
bit rate consumption without compromising QoE.
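A minimal sketch of such a UEQ scheme appears below, mapping a per-pixel saliency map to a per-CTU QP grid. The offset range, the linear mapping, and the function name are illustrative assumptions, not any particular codec's rate control.

```python
import numpy as np

def saliency_qp_offsets(saliency, base_qp=32, max_offset=6, ctu=64):
    """Map a per-pixel saliency map in [0, 1] to a per-CTU QP grid: salient
    CTUs get a lower QP (lighter compression), background CTUs a higher one."""
    h, w = saliency.shape
    rows, cols = h // ctu, w // ctu
    qp_map = np.empty((rows, cols), dtype=int)
    for r in range(rows):
        for c in range(cols):
            mean_sal = saliency[r * ctu:(r + 1) * ctu, c * ctu:(c + 1) * ctu].mean()
            # Centered on 0.5: saliency 1.0 -> -max_offset, 0.0 -> +max_offset.
            qp_map[r, c] = base_qp + int(round((0.5 - mean_sal) * 2 * max_offset))
    return np.clip(qp_map, 0, 51)        # valid H.265/HEVC QP range

# Synthetic saliency map with a salient central object.
sal = np.zeros((256, 256))
sal[64:192, 64:192] = 1.0
print(saliency_qp_offsets(sal))          # QP 26 on the object, 38 elsewhere
```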
For example, Hadi _et al._ [67] extended the well-known Itti-Koch-Niebur (IKN)
model to estimate saliency in the DCT domain, also considering camera motion.
In addition, saliency-driven distortion was also introduced to accurately
capture the salient characteristics, in order to improve R-D optimization in
H.265/HEVC. Li _et al._ [68] suggested using graph-based visual saliency to
adapt the quantizations in H.265/HEVC, to reduce total bits consumption.
Similarly, Ku _et al._ [69] applied saliency-weighted Coding Tree Unit
(CTU)-level bit allocation, where the CTU-aligned saliency weights were
determined via low-level feature fusion.
The aforementioned methodologies rely on traditional handcrafted saliency
prediction algorithms. As DNN-based saliency algorithms have demonstrated
superior performance, we can safely assume that their application to video
coding will lead to better compression efficiency. For example, Zhu _et al._
[70] adopted a spatio-temporal saliency model to accurately control the QP in
an encoder whose spatial saliency was generated using a 10-layer CNN, and
whose temporal saliency was calculated assuming the 2D motion model (resulting
in an average of 0.24 BD-PSNR gains over H.265/HEVC reference model (version
HM16.8)). Performance improvement due to fine-grained quantization adaptation
was reported using an open-source x264 encoder [71]. This was accomplished by
jointly examining the input video frame and associated saliency maps. These
saliency maps were generated by utilizing three CNN models suggested in [52,
56, 72]. Up to 25% bit rate reduction was reported when distortion was
measured using the edge-weighted SSIM (EW-SSIM). Similarly, Sun _et al._ [73]
implemented a saliency-driven CTU-level adaptive bit rate control, where the
static saliency map of each frame was extracted using a DNN model, and the
dynamic saliency region was tracked using a moving object segmentation
algorithm. Experimental results revealed that the PSNR of salient regions was
improved by 1.85 dB on average.
Though saliency-based pre-processing is mainly driven by psychovisual studies,
it heavily relies on saliency detection to perform UEQ-based adaptive
quantization with a lower rate of bit consumption but visually identical
reconstruction. On the other hand, visual selectivity behaviour is closely
associated with video content distribution (e.g., frequency response), leading
to perceptually unequal preference. Thus, it is highly expected that such
content semantics-induced discriminative features can be utilized to improve
the system efficiency when integrated into the video encoder. To this end, we
will discuss the analysis/synthesis-based approach for pre-processing in the
next section.
### II-B Analysis/Synthesis Based Pre-processing
Since most videos are consumed by human vision, subjective perception of HVS
is the best way to evaluate quality. However, it is quite difficult to devise
a profoundly accurate mathematical HVS model in actual video encoder for rate
and perceptual quality optimization, due to the complicated and unclear
information processing that occurs in the human visual pathway. Instead, many
pioneering psychovisual studies have suggested that neuronal response to
compound stimuli is highly nonlinear [74, 75, 76, 77, 78, 79, 80, 81] within
the receptive field. This leads to well-known visual behaviors, such as
frequency selectivity, masking, etc., where such stimuli are closely related
to the content texture characteristics. Intuitively, video scenes can be
broken down into areas that are either “perceptually significant” (e.g.,
measured in an MSE sense) or “perceptually insignificant”. For “perceptually
insignificant” regions, users will not perceive compression or processing
impairments without a side-by-side comparison with the original sample. This
is because the HVS gains semantic understanding by viewing content as a whole,
instead of interpreting texture details pixel-by-pixel [82]. This notable
effect of the HVS is also referred to as “masking,” where visually
insignificant information, e.g., perceptually insignificant pixels, will be
noticeably suppressed.
In practice, we can first analyze the texture characteristics of original
video content in the pre-processing step, e.g., Texture Analyzer in Fig. 2, in
order to sort textures by their significance. Subsequently, we can use any
standard compliant video encoder to encode the perceptually significant areas
as the main bitstream payload, and apply a statistical model to represent the
perceptually insignificant textures with model parameters encapsulated as side
information. Finally, we can use decoded areas and parsed textures to jointly
synthesize the reconstructed sequences in the Texture Synthesizer. This type
of texture modeling makes joint use of statistical and psychovisual
representations, generally requiring fewer bits while yielding a visually
identical sensation, compared to the traditional hybrid “prediction+residual”
method. (A comprehensive survey of texture analysis/synthesis based video
coding technologies can be found in [83].) Therefore, texture analysis and
synthesis play a vital role in subsequent video coding. We will discuss related
techniques below.
Figure 2: Texture Coding System. A general framework of analysis/synthesis
based video coding.
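To make the flow of Figure 2 concrete, the toy sketch below treats block variance as the significance criterion and a mean/std Gaussian model as the texture statistics; both choices, along with the threshold, are deliberate simplifications we introduce for illustration, not the analyzer and synthesizer used later in this paper.

```python
import numpy as np

def encode_with_texture_mode(frame, var_thresh=50.0, block=16):
    """Toy encoder side of the Fig. 2 pipeline on one grayscale frame: blocks
    whose variance exceeds var_thresh are treated as perceptually significant
    and kept verbatim (standing in for the conventional encoder); the rest are
    summarized by mean/std model parameters (the side information)."""
    h, w = frame.shape
    payload, side_info = {}, {}
    for r in range(0, h, block):
        for c in range(0, w, block):
            blk = frame[r:r + block, c:c + block]
            if blk.var() > var_thresh:
                payload[(r, c)] = blk.copy()
            else:
                side_info[(r, c)] = (blk.mean(), blk.std())
    return payload, side_info

def decode_with_texture_mode(payload, side_info, shape, block=16, seed=0):
    """Texture synthesizer: restore significant blocks, resynthesize the rest."""
    rng = np.random.default_rng(seed)
    out = np.zeros(shape)
    for (r, c), blk in payload.items():
        out[r:r + block, c:c + block] = blk
    for (r, c), (mu, sigma) in side_info.items():
        out[r:r + block, c:c + block] = rng.normal(mu, sigma, (block, block))
    return out

# Left half is flat (modeled as texture); right half is busy (kept verbatim).
frame = np.full((64, 64), 128.0)
frame[:, 32:] += np.random.default_rng(1).normal(0.0, 10.0, (64, 32))
payload, side = encode_with_texture_mode(frame)
recon = decode_with_texture_mode(payload, side, frame.shape)
print(len(payload), "blocks coded,", len(side), "blocks synthesized")
```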
#### II-B1 Texture Analysis
Early developments in texture analysis and representation can be categorized
into filter-based or statistical modeling-based approaches. The Gabor filter
is one typical example of a filter-based approach, by which the input image is
convolved with nonlinear activation to derive the corresponding
texture representation [84, 85]. At the same time, in order to identify static
and dynamic textures for video content, Thakur et al. [86] utilized the 2D
dual tree complex wavelet transform and steerable pyramid transform [87],
respectively. To accurately capture the temporal variations in video, Bansal
et al. [88] again suggested the use of optic flow for dynamic texture
indication and later synthesis, where optical flow could be generated using
temporal filtering. Leveraging statistical models such as the Markovian random
field (MRF) [89, 90] is an alternative way to analyze and represent texture.
For efficient texture description, statistical modeling such as this was then
extended using handcrafted local features, e.g., the scale invariant feature
transform (SIFT) [91], speeded up robust features (SURF) [92], and local
binary patterns (LBP) [93].
Recently, stacked DNNs have demonstrated their superior efficiency in many
computer vision tasks. This efficiency is mainly due to the powerful capacity
of DNN features to be used for video content representation. The most
straightforward scheme directly extracted features from the FC6 or FC7 layer
of AlexNet [94] for texture representation. Furthermore, Cimpoi et al. [95]
demonstrated that Fisher-vectorized [96] CNN features were a decent texture
descriptor candidate.
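A minimal sketch of the FC6-feature texture descriptor mentioned above is given below, using torchvision's pretrained AlexNet; the layer indices assume a recent torchvision's AlexNet definition, and the preprocessing constants are the standard ImageNet values.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T

# Pretrained AlexNet in eval mode (Dropout layers become identity).
alexnet = models.alexnet(weights=models.AlexNet_Weights.DEFAULT).eval()

def fc6_texture_descriptor(image):
    """image: float tensor of shape (3, H, W) with values in [0, 1]."""
    pre = T.Compose([T.Resize((224, 224)),
                     T.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])])
    x = pre(image).unsqueeze(0)
    with torch.no_grad():
        x = alexnet.features(x)          # conv feature maps
        x = alexnet.avgpool(x)
        x = torch.flatten(x, 1)
        x = alexnet.classifier[:3](x)    # Dropout, Linear (FC6), ReLU
    return x.squeeze(0)                  # 4096-dim texture feature

patch = torch.rand(3, 128, 128)
print(fc6_texture_descriptor(patch).shape)   # torch.Size([4096])
```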
#### II-B2 Texture Synthesis
Texture synthesis reverse-engineers the analysis in pre-processing to restore
pixels accordingly. It generally includes both non-parametric and parametric
methods. For non-parametric synthesis, texture patches are usually resampled
from reference images [97, 98, 99]. In contrast, the parametric method
utilizes statistical models to reconstruct the texture regions by jointly
optimizing the observation outcomes of the model and the model itself [100,
101, 87].
DNN-based solutions exhibit great potential for texture synthesis
applications. One notable example demonstrating this potential used a pre-
trained image classification-based CNN model to generate texture patches
[102]. Li et al. [103] then demonstrated that a Markovian GAN-based texture
synthesis could offer remarkable quality improvement.
To briefly summarize, earlier “texture analysis/synthesis” approaches often
relied on handcrafted models, as well as corresponding parameters. While they
have shown good performance to some extent for a set of test videos, it is
usually very difficult to generalize them to large-scale video datasets
without fine-tuning parameters further. On the other hand, related
neuroscience studies propose a broader definition of texture which is more
closely related to perceptual sensation, and which existing mathematical or
data-driven texture representations struggle to fully capture. Furthermore,
recent DNN-based schemes present a promising
perspective. However, the complexity of these schemes has not yet been
appropriately exploited. So, in Section V, we will reveal a CNN-based pixel-
level texture analysis approach to segment perceptually insignificant texture
areas in a frame for compression and later synthesis. To model the textures
both spatially and temporally, we introduce a new coding mode called the
“switchable texture mode” that is determined at group of pictures (GoP) level
according to the bit rate saving.
## III Overview of DNN-based Video Coding
A number of investigations have shown that DNNs can be used for efficient
image/video coding [104, 105, 106, 107]. This topic has attracted extensive
attention in recent years, demonstrating its potential to enhance the
conventional system with better R-D performance.
There are three major directions currently under investigation. One is
resolution resampling-based video coding, by which the input videos are first
down-sampled prior to being encoded, and the reconstructed videos are up-
sampled or super-resolved to the same resolution as the input [108, 109, 110,
111]. This category generally develops up-scaling or super-resolution
algorithms on top of standard video codecs. The second direction under
investigation is modularized neural video coding (MOD-NVC), which has
attempted to improve individual coding tools in traditional hybrid coding
framework using learning-based solutions. The third direction is end-to-end
neural video coding (E2E-NVC), which fully leverages the stacked neural
networks to compactly represent input image/video in an end-to-end learning
manner. In the following sections, we will primarily review the latter two
cases, since the first one has been extensively discussed in many other
studies [112].
### III-A Modularized Neural Video Coding (MOD-NVC)
The MOD-NVC has inherited the traditional hybrid coding framework within which
handcrafted tools are refined or replaced using learned solutions. The general
assumption is that existing rule-based coding tools can be further improved
via a data-driven approach that leverages powerful DNNs to learn robust and
efficient mapping functions for more compact content representation. Two
comprehensive articles have reviewed relevant studies in this direction
[107, 106]. We briefly introduce key techniques in intra/inter prediction,
quantization, and entropy coding. Though in-loop filtering is another
important piece of the “coding” block, we review it under quality
enhancement-aimed “post-processing” because of its similarities with post
filtering, for a more cohesive presentation.
#### III-A1 Intra Prediction
Video frames exhibit highly correlated distributions across neighboring
samples spatially. Thus, block redundancy can be effectively exploited using
causal neighbors. Meanwhile, due to local structural dynamics, block pixels
can often be better represented by a variety of angular prediction
directions.
In conventional standards, such as the H.264/AVC, H.265/HEVC, or even emerging
VVC, specific prediction rules are carefully designed to use weighted
neighbors for respective angular directions. From H.264/AVC to the recent VVC,
intra coding efficiency has been gradually improved by allowing more fine-
grained angular directions and flexible block size/partitions. In practice, an
optimal coding mode is often determined by R-D optimization.
One would intuitively expect that coding performance can be further improved
if better predictions can be produced. Therefore, there have been a number of
attempts to leverage the powerful capacity of stacked DNNs for better intra
predictor generation, including the CNN-based predictor refinement suggested
in [113] to reduce prediction residual, additional learned mode trained using
FCN models reported in [114, 115], using RNNs in [116], using CNNs in [108],
or even using GANs in [117], etc. These approaches have actively utilized the
neighbor pixels or blocks, and/or other context information (e.g., mode) if
applicable, in order to accurately represent the local structures for better
prediction. Many of these approaches have reported more than 3% BD-Rate gains
against the popular H.265/HEVC reference model. These examples demonstrate the
efficiency of DNNs in intra prediction.
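
As a sketch of the predictor-refinement flavor of these works (the interface below is hypothetical, not the exact network of [113]), a small CNN can take the rule-based angular prediction plus a map of reconstructed causal neighbors and emit a learned correction:

```python
import torch
import torch.nn as nn

class IntraRefineCNN(nn.Module):
    """Sketch of CNN-based intra predictor refinement: the network sees the
    rule-based angular prediction and reconstructed causal neighbors, and
    outputs a learned correction (interface is illustrative)."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 1, 3, padding=1),
        )

    def forward(self, angular_pred, neighbor_map):
        # Refined prediction = conventional prediction + learned residual.
        return angular_pred + self.net(torch.cat([angular_pred, neighbor_map], 1))

# Usage on a single 8x8 luma block (batch and channel dims of 1):
refined = IntraRefineCNN()(torch.rand(1, 1, 8, 8), torch.rand(1, 1, 8, 8))
```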
#### III-A2 Inter Prediction
In addition to the spatial intra prediction, temporal correlations have also
been exploited via inter prediction, by which previously reconstructed frames
are utilized to generate the inter predictor for compensation using displaced
motion vectors.
Temporal prediction can be enhanced using references with higher fidelity, and
more fine-grained motion compensation. For example, fractional-pel
interpolation is usually deployed to improve prediction accuracy [118]. On the
other hand, motion compensation with flexible block partitions is another
major contributor to inter coding efficiency.
Similarly, early attempts have been made to utilize DNN solutions for
better inter coding. For instance, CNN-based interpolations were studied in
[119, 120, 121] to improve the half-pel samples. In addition, a virtual
reference could be generated using CNN models for an improved R-D
decision in [122]. Xia et al. [123] further extended this approach using
multiscale CNNs to create an additional reference closer to the current frame
by which accurate pixel-wise motion representation could be used. Furthermore,
conventional references could also be enhanced using DNNs to refine the
compensation [124].
#### III-A3 Quantization and Entropy Coding
Quantization and entropy coding are used to remove statistical redundancy.
Scalar quantization is typically implemented in video encoders to remove
insensitive high-frequency components, without losing the perceptual quality,
while saving the bit rate. Recently, a three-layer DNN was developed to
predict the local visibility threshold $C_{T}$ for each CTU, by which more
accurate quantization could be achieved via the connection between $C_{T}$ and
actual quantization step size. This development led to noticeable R-D
improvement, e.g., up to 11% as reported in [125].
Context-adaptive binary arithmetic coding (CABAC) and its variants are
techniques that are widely adopted to encode binarized symbols. The efficiency
of CABAC is heavily reliant on the accuracy of probability estimation in
different contexts. Since H.264/AVC, handcrafted probability transfer
functions (developed through exhaustive simulations, and typically implemented
using look-up tables) have been utilized. In [115] and [126], the authors
demonstrated that a combined FCN and CNN model could be used to predict intra
mode probability for better entropy coding. Another example was presented in
[127], where stacked CNNs accurately encode transform indexes. Likewise, in
[128], the intra DC coefficient probability could also be estimated using
DNNs for better performance.
All of these explorations have reported positive R-D gains when incorporating
DNNs in traditional hybrid coding frameworks. A companion H.265/HEVC-based
software model is also offered by Liu et al. [106] to help the community
further pursue this line of exploration. However, integrating DNN-based
tools can dramatically increase both the computational and space
complexity. Therefore, creating harmony between learning-based and
conventional rule-based tools under the same framework requires further
investigation. It is also worth noting that an alternative approach is
currently being explored in parallel. In this approach, researchers suggest
using an end-to-end neural video coding (E2E-NVC) framework to drive the raw
video content representation via layered feature extraction, activation,
suppression, and aggregation, mostly in a supervised learning fashion, instead
of refining individual coding tools.
### III-B End-to-End Neural Video Coding (E2E-NVC)
Representing raw video pixels as compactly as possible by massively exploiting
its spatio-temporal and statistical correlations is the fundamental problem of
lossy video coding. Over decades, traditional hybrid coding frameworks have
utilized pixel-domain intra/inter prediction, transform, entropy coding, etc.,
to fulfill this purpose. Each coding tool is extensively examined under a
specific codec structure to carefully justify the trade-off between R-D
efficiency and complexity. This process led to the creation of well-known
international or industry standards, such as the H.264/AVC, H.265/HEVC, AV1,
etc.
On the other hand, DNNs have demonstrated a powerful capacity for video
spatio-temporal feature representation for vision tasks, such as object
segmentation, tracking, etc. This naturally raises the question of whether it
is possible to encode those spatio-temporal features in a compact format for
efficient lossy compression.
Recently, we have witnessed the growth of video coding technologies that rely
completely on end-to-end supervised learning. Most learned schemes still
closely follow the conventional intra/inter frame definition by which
different algorithms are investigated to efficiently represent the intra
spatial textures, inter motion, and the inter residuals (if applicable) [104,
129, 130, 131]. Raw video frames are fed into stacked DNNs to extract,
activate, and aggregate appropriate compact features (at the bottleneck layer)
for quantization and entropy coding. Similarly, R-D optimization is also
facilitated to balance the rate and distortion trade-off. In the following
paragraphs, we will briefly review the aforementioned key components.
#### III-B1 Nonlinear Transform and Quantization
The autoencoder or variational autoencoder (VAE) architectures are typically
used to transform the intra texture or inter residual into compressible
features.
For example, Toderici et al. [132] first applied fully-connected recurrent
autoencoders for variable-rate thumbnail image compression. Their work was
then improved in [133, 134] with the support of full-resolution image, unequal
bit allocation, etc. Variable bit rate is intrinsically enabled by these
recurrent structures. The recurrent autoencoders, however, suffer from higher
computational complexity at higher bit rates, because more recurrent
processing is required. Alternatively, convolutional autoencoders have been
extensively studied in past years, where different bit rates are adapted by
setting a variety of $\lambda$s to optimize the R-D trade-off. Note that
different network models may be required for individual bit rates, making
hardware implementation challenging (e.g., switching models from one bit rate
to another). Recently, conditional convolution [135] and a scaling factor [136]
were proposed to enable variable-rate compression using a single or very
limited network model without noticeable coding efficiency loss, which makes
the convolutional autoencoders more attractive for practical applications.
To generate a more compact feature representation, Balle et al. [105]
suggested replacing the traditional nonlinear activation, e.g., ReLU, using
generalized divisive normalization (GDN) that is theoretically proven to be
more consistent with human visual perception. A subsequent study [137]
revealed that GDN outperformed other nonlinear rectifiers, such as ReLU,
leakyReLU, and tanh, in compression tasks. Several follow-up studies [138,
139] directly applied GDN in their networks for compression exploration.
Quantization converts arbitrary-valued elements into symbols from a limited
alphabet for efficient entropy coding, and it is inherently
non-differentiable. However, quantization must be differentiable in an
end-to-end learning framework to allow backpropagation. A number of methods,
such as adding uniform noise [105], stochastic rounding [132] and soft-to-hard
vector quantization [140], were therefore developed to provide continuous,
differentiable approximations.
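
The two most common workarounds can be summarized in a few lines; this is a generic sketch of the noise-based relaxation of [105] paired with hard rounding at inference, not any particular codec's implementation:

```python
import torch

def quantize_train(y):
    """Training-time surrogate: add i.i.d. uniform noise in [-0.5, 0.5),
    a differentiable stand-in for rounding (cf. [105])."""
    return y + torch.empty_like(y).uniform_(-0.5, 0.5)

def quantize_infer(y):
    """Inference-time quantization: hard rounding to integer symbols."""
    return torch.round(y)
```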
#### III-B2 Motion Representation
Chen et al. [104] developed the DeepCoder where a simple convolutional
autoencoder was applied for both intra and residual coding at fixed
32$\times$32 blocks, and block-based motion estimation in traditional video
coding was re-used for temporal compensation. Lu et al. [141] introduced the
optical flow for motion representation in their DVC work, which, together with
the intra coding in [142], demonstrated similar performance compared with the
H.265/HEVC. However, coding efficiency suffered from a sharp loss at low bit
rates. Liu et al. [143] extended their non-local attention optimized image
compression (NLAIC) for intra and residual encoding, and applied second-order
flow-to-flow prediction for more compact motion representation, showing
consistent rate-distortion gains across different contents and bit rates.
Motion can also be implicitly inferred via temporal interpolation. For
example, Wu et al. [144] applied RNN-based frame interpolation. Together with
the residual compensation, RNN-based frame interpolation offered comparable
performance to the H.264/AVC. Djelouah et al. [145] furthered interpolation-
based video coding by utilizing advanced optical flow estimation and feature
domain residual coding. However, temporal interpolation usually led to an
inevitable structural coding delay.
Another interesting exploration made by Rippel _et al._ in [130] was to
jointly encode motion flow and residual using compound features, where a
recurrent state was embedded to aggregate multi-frame information for
efficient flow generation and residual coding.
#### III-B3 R-D Optimization
Li et al. [146] utilized a separate three-layer CNN to generate an importance
map for spatial-complexity-based adaptive bit allocation, leading to
noticeable subjective quality improvement. Mentzer et al. [140] further
utilized the masked bottleneck layer to unequally weight features at different
spatial locations. Such importance map embedding is a straightforward approach
to end-to-end training. Importance derivation was later improved with the non-
local attention [147] mechanism to efficiently and implicitly capture both
global and local significance for better compression performance [136].
Probabilistic models play a vital role in data compression. Assuming the
Gaussian distribution for feature elements, Balle et al. [142] utilized hyper
priors to estimate the parameters of a Gaussian scale mixture (GSM) model for
latent features. Later, Hu et al. [148] used hierarchical hyper priors (coarse-to-
fine) to improve the entropy models in multiscale representations. Minnen et
al. [149] improved the context modeling using joint autoregressive spatial
neighbors and hyper priors based on the Gaussian mixture model (GMM).
Autoregressive spatial priors were commonly fused by PixelCNNs or PixelRNNs
[150]. Reed et al. [151] further introduced multiscale PixelCNNs, yielding
competitive density estimation and a great boost in speed (e.g., from $O(N)$ to
$O(\log N)$). Prior aggregation was later extended from 2D architectures to 3D
PixelCNNs [140]. Channel-wise weight-sharing 3D implementations could
greatly reduce network parameters without performance loss, and a parallel 3D
PixelCNN for practical decoding was presented by Chen et al. [136]. Previous
methods accumulated all the priors to estimate the probability based on a
single Gaussian assumption for each element; recent studies have shown that
weighted GMMs can further improve coding efficiency [152, 153].
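
The following sketch illustrates how such a weighted Gaussian mixture entropy model turns estimated parameters into bin probabilities and a rate estimate for integer-quantized latents; shapes and names are illustrative rather than taken from any cited implementation.

```python
import torch

def gmm_rate(y_hat, weights, means, scales):
    """Estimate bits for quantized latents y_hat under a weighted GMM.

    y_hat: [...]; weights/means/scales: [..., K] mixture parameters
    (e.g., predicted from hyper priors plus autoregressive context).
    The bin probability is the mixture CDF mass over [y - 0.5, y + 0.5].
    """
    comp = torch.distributions.Normal(means, scales)
    y = y_hat.unsqueeze(-1)
    p_bins = comp.cdf(y + 0.5) - comp.cdf(y - 0.5)         # [..., K]
    p = (weights * p_bins).sum(dim=-1).clamp_min(1e-9)     # mixture mass
    return -torch.log2(p).sum()                            # estimated bits
```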
Pixel-wise error, such as MSE, is one of the most popular loss functions.
Concurrently, SSIM (or MS-SSIM) has also been adopted because of its greater
consistency with visual perception. Simulations revealed that an SSIM-based
loss can improve reconstruction quality, especially at low bit rates. Towards
perceptually optimized encoding, perceptual losses measured by
adversarial loss [154, 155, 156] and VGG loss [157] were embedded in learning
to produce visually appealing results.
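
A typical training objective therefore combines one of these distortion terms with the estimated rate. The sketch below uses MSE for concreteness, with the understanding that an MS-SSIM, VGG, or adversarial term would be substituted for perceptually optimized training (those implementations are external and omitted here).

```python
import torch.nn.functional as F

def rd_loss(x, x_hat, est_bits, num_pixels, lmbda=0.01):
    """Rate-distortion objective L = lambda * D + R (a generic sketch).

    D: distortion (here pixel MSE; a perceptual term could replace it).
    R: estimated rate in bits per pixel from the entropy model.
    """
    D = F.mse_loss(x_hat, x)
    R = est_bits / num_pixels
    return lmbda * D + R
```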
Though E2E-NVC is still in its infancy, its fast growing R-D efficiency holds
a great deal of promise. This is especially true, given that we can expect
neural processors to be deployed massively in the near future [158].
## IV Overview of DNN-based Post-processing
Compression artifacts are inevitably present in both traditional hybrid coding
frameworks and learned compression approaches, e.g., blockiness, ringing,
cartoonishness, etc., severely impairing visual sensation and QoE. Thus,
quality enhancement filters are often applied as a post-filtering step or in-
loop module to alleviate compression distortions. Towards this goal, adaptive
filters are usually developed to minimize the error between original and
distorted samples.
### IV-A In-loop Filtering
Existing video standards mainly utilize in-loop filters to improve
the subjective quality of reconstruction, and also to offer better R-D
efficiency due to enhanced references. Examples include deblocking [24],
sample adaptive offset (SAO) [25], constrained directional enhancement filter
(CDEF) [159], loop-restoration (LR) [160], adaptive loop filter (ALF) [161],
etc.
Recently, numerous CNN models have been developed for in-loop filtering via a
data-driven approach to learn the mapping functions. It is worth pointing out
that prediction relationships must be carefully examined when designing in-
loop filters, due to the frame referencing structure and potential error
propagation. Both intra and inter predictions are utilized in popular video
encoders, where an intra-coded frame only exploits the spatial redundancy
within current frame, while an inter-coded frame jointly explores the spatio-
temporal correlations across frames over time.
Earlier explorations of this subject have mainly focused on designing DNN-
based filters for intra-coded frames, particularly by trading network depth
and parameters for better coding efficiency. For example, IFCNN [162], and
VRCNN [163] are shallow networks with $\approx$50,000 parameters, providing up
to 5% BD-Rate savings for the H.265/HEVC intra encoder. More gains can be
obtained if we use a deeper and denser network [164, 165, 166], e.g., 5.7% BD-
Rate gain reported in [164] by using the model with 3,340,000 parameters, and
8.50% BD-Rate saving obtained in [167] by using the model with 2,298,160
parameters. The more parameters a model has, the more complex it is.
Unfortunately, greater complexity limits the network’s potential for practical
application. Such intra-frame-based in-loop filters treat decoded frames
equally, without the consideration of in-loop inter-prediction dependency.
Nevertheless, the aforementioned networks can be used for post-filtering
outside the coding loop.
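
The flavor of these shallow intra-frame filters can be sketched as a few convolutional layers that learn a restoration residual on top of the decoded frame (a VRCNN-like toy with a global skip connection, not the exact architecture of [162] or [163]):

```python
import torch.nn as nn

class ShallowRestoreCNN(nn.Module):
    """Shallow restoration sketch: a small network (tens of thousands of
    parameters) learns the restoration residual of a decoded luma frame;
    the global skip connection keeps training stable."""
    def __init__(self, ch=24):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, ch, 5, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 1, 3, padding=1),
        )

    def forward(self, decoded):
        return decoded + self.body(decoded)  # residual learning
```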
It is necessary to include temporal prediction dependency while designing the
in-loop CNN-based filters for inter-frame coding. Some studies leveraged prior
knowledge from the encoding process to assist the CNN training and inference.
For example, Jia _et al._ [168] incorporated the co-located block information
for in-loop filtering. Meng _et al._ [169] utilized the coding unit partition
for further performance improvement. Li _et al._ [170] input both the
reconstructed frame and the difference between the reconstructed and predicted
pixels to improve the coding efficiency. Applying prior knowledge in learning
may improve the coding performance, but it further complicates the CNN model
by involving additional information in the networks. On the other hand, the
contribution of this prior knowledge is quite limited because such additional
priors are already implicitly embedded in the reconstructed frame.
If a CNN-based in-loop filter is applied to frame $I_{0}$, its impact will
gradually propagate to frame $I_{1}$, which has frame $I_{0}$ as its
reference; subsequently, $I_{1}$ is the reference of $I_{2}$, and so forth.
(Even though more advanced inter referencing strategies can be devised, this
propagation behavior remains the same.) If frame $I_{1}$ is filtered again by
the same CNN model, an over-filtering problem will be triggered, resulting in
severely degraded performance, as analyzed in [171].
To overcome this challenging problem, a CNN model called SimNet [172] was
built to capture the relationship between the reconstructed frame and its
original, adaptively skipping filtering operations in inter coding. SimNet
reported 7.27% and 5.57% BD-Rate savings for intra- and inter-coding of AV1,
respectively. A similar skipping strategy was suggested by Chen et al. [173]
to enable a wide-activation residual network, yielding 14.42% and 9.64% BD-
Rate savings for respective intra- and inter-coding on the AV1 platform.
Alternative solutions resort to the more expensive R-D optimization to avoid
the over-filtering problem. For example, Yin et al. [174] developed three sets
of CNN filters for luma and chroma components, where the R-D optimal CNN model
is used and signaled in bitstream. Similar ideas are developed in [175, 176]
as well, in which multiple CNN models are trained and the R-D optimal model is
selected for inference.
It is impractical to use deeper and denser CNN models in applications. It is
also very expensive to conduct R-D optimization to choose the optimal one from
a set of pre-trained models. Note that a limited number of pre-trained models
are theoretically insufficient to be generalized for large-scale video
samples. To this end, in Section VII-A, we introduce a guided-CNN scheme which
adapts shallow CNN models according to the characteristics of input video
content.
### IV-B Post Filtering
Post filtering is generally applied to the compressed frames at the decoder
side to further enhance the video quality for better QoE.
Previous in-loop filters designated for intra-coded frames can be re-used for
single-frame post-filtering [177, 178, 179, 180, 181, 182, 163, 183, 184,
185]. Appropriate re-training may be applied in order to better capture the
data characteristics. However, single-frame post-filtering may introduce
quality fluctuation across frames. This may be due to the limited capacity of
CNN models to deal with a great amount of video contents. Thus, multi-frame
post filtering can be devised to massively exploit the correlation across
successive temporal frames. By doing so, it not only greatly improves the
single-frame solution, but also offers better temporal quality over time.
Typically, a two-step strategy is applied for multi-frame post filtering.
First, neighboring frames are aligned to the current frame via (pixel-level)
motion estimation and compensation (MEMC). Then, all aligned frames are fed
into networks for high-quality reconstruction. Thus, the accuracy of MEMC
greatly affects reconstruction performance. In applications, learned optical
flow networks, such as FlowNet [186], FlowNet2 [187], PWC-Net [188], and
TOFlow [189], are widely used.
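
The alignment step itself is typically plain backward warping of a neighboring frame by the estimated dense flow. A self-contained PyTorch sketch (assuming flow in pixel units, e.g., produced by one of the networks above) is:

```python
import torch
import torch.nn.functional as F

def warp_to_current(neighbor, flow):
    """Backward-warp a neighboring frame toward the current frame.

    neighbor: [N, C, H, W] frame to align; flow: [N, 2, H, W] dense optical
    flow in pixels (channel 0 = x, channel 1 = y displacement), e.g., from a
    pre-trained flow network.
    """
    n, _, h, w = neighbor.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=0).unsqueeze(0).to(neighbor)  # [1,2,H,W]
    coords = base + flow
    # Normalize sampling coordinates to [-1, 1] as grid_sample expects.
    gx = 2.0 * coords[:, 0] / max(w - 1, 1) - 1.0
    gy = 2.0 * coords[:, 1] / max(h - 1, 1) - 1.0
    grid = torch.stack((gx, gy), dim=-1)                           # [N,H,W,2]
    return F.grid_sample(neighbor, grid, align_corners=True)
```

The warped neighbors and the current frame are then stacked as network input for the enhancement stage.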
Some exploration has already been made in this arena: Bao et al. [190] and
Wang et al. [191] implemented a general video quality enhancement framework
for denoising, deblocking, and super-resolution, where Bao et al. [190]
employed the FlowNet and Wang et al. [191] used pyramid, cascading, and
deformable convolutions to respectively align frames temporally. Meanwhile,
Yang et al. [192] proposed a multi-frame quality enhancement framework called
MFQE-1.0, in which a spatial transformer motion compensation (STMC) network is
used for alignment, and a deep quality enhancement network (QE-net) is
employed to improve reconstruction quality. Then, Guan et al. [193] upgraded
MFQE-1.0 to MFQE-2.0 by replacing QE-net using a dense CNN model, leading to
better performance and less complexity. Later on, Tong et al. [194] suggested
using FlowNet2 in MFQE-1.0 for temporal frame alignment (instead of default
STMC), yielding 0.23 dB PSNR gain over the original MFQE-1.0. Similarly,
FlowNet2 is also used in [195] for improved efficiency.
All of these studies suggested the importance of temporal alignment in post
filtering. Thus, in the subsequent case study (see Section VII-B), we first
examine the efficiency of alignment, and then further discuss the
contributions from respective intra-coded and inter-coded frames for the
quality enhancement of the final reconstruction. This will help readers gain a
deeper understanding of similar post filtering techniques.
## V Case Study for Pre-processing:
Switchable Texture-based Video Coding
This section presents a switchable texture-based video pre-processing that
leverages DNN-based semantic understanding for subsequent coding improvement.
In short, we exploit DNNs to accurately segment “perceptually InSIGnificant”
(pInSIG) texture areas to produce a corresponding pInSIG mask. This mask
drives the encoder to treat the two region types separately: pInSIG textures
are typically inferred without additional residuals, while “perceptually
SIGnificant” (pSIG) areas elsewhere are coded using the traditional hybrid
coding method. This approach is implemented on top of the AV1 codec [196, 197,
198] by enabling a GoP-level switchable mechanism, which yields noticeable
bit rate savings for both standard test sequences and additional challenging
sequences from the YouTube UGC dataset [199], under similar perceptual quality.
The method we propose is a pioneering work that integrates learning-based
texture analysis and reconstruction approaches with a modern video codec to
enhance video compression performance.
Figure 3: Texture Analyzer. Proposed semantic segmentation network using
PSPNet [200] and ResNet-50 [201].
### V-A Texture Analysis
Our previous attempt [202] yielded encouraging bit rate savings without
decreasing visual quality. This was accomplished by perceptually
differentiating pInSIG textures and other areas to be encoded in a hybrid
coding framework. However, the corresponding texture masks were derived using
traditional methods, at the coding block level. On the other hand, building
upon advancements created by DNNs and large-scale labeled datasets (e.g.,
ImageNet [203], COCO [204], and ADE20K [205]), learning-based semantic scene
segmentation algorithms [206, 200, 205] have been tremendously improved to
generate accurate pixel-level texture masks.
In this work, we first rely on the powerful ResNet-50 [201] with dilated
convolutions [207, 208] to extract feature maps that effectively embed the
content semantics. We then introduce the pyramid pooling module from PSPNet
[200] to produce a pixel-level semantic segmentation map shown in Fig. 3. Our
implementation starts with a pre-trained PSPNet model generated using the MIT
SceneParse150 [209] as a scene parsing benchmark. We then retrained the model
on a subset of a densely annotated dataset ADE20K [205]. In the end, the model
offers a pixel segmentation accuracy of 80.23%.
It is worthwhile to note that such pixel-level segmentation may result in the
creation of a large number of semantic classes. Nevertheless, this study
suggests grouping similar texture classes commonly found in natural scenes
into four major categories, e.g., “earth and grass”, “water, sea and river”,
“mountain and hill”, and “tree”. Each texture category would have an
individual segmentation mask to guide the compression performed by the
succeeding video encoder.
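
A sketch of this grouping step is shown below; the class ids are hypothetical placeholders, since the real mapping would be looked up from the ADE20K/SceneParse150 label list.

```python
import numpy as np

# Hypothetical class-id sets; real ids come from the ADE20K label list.
TEXTURE_GROUPS = {
    "earth_and_grass":     {10, 14},
    "water_sea_and_river": {22, 27, 61},
    "mountain_and_hill":   {17, 69},
    "tree":                {5},
}

def texture_masks(seg_map):
    """Collapse a pixel-level semantic map into four binary texture masks,
    one per category, each of which guides the succeeding encoder."""
    return {name: np.isin(seg_map, list(ids))
            for name, ids in TEXTURE_GROUPS.items()}
```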
### V-B Switchable Texture-Based Video Coding
Texture masks are generally used to identify texture blocks, and to perform
the encoding of texture blocks and non-texture blocks separately, as
illustrated in Fig. 4a. In this case study, the AV1 reference software
platform is selected to exemplify the efficiency of our proposal.
Texture Blocks. Texture and non-texture blocks are identified by overlaying
the segmentation mask from the texture analyzer on its corresponding frame.
These frame-aligned texture masks produce pixel-level accuracy, which is
capable of supporting arbitrary texture shapes. However, in order to support
the block processing commonly adopted by video encoders, we propose refining
original pixel-level masks to their block-based representations. The minimum
size of a texture block is 16$\times$16. In order to avoid boundary artifacts
and maintain temporal consistency, we implemented a conservative two-step
strategy to determine the texture blocks. First, the block itself must be fully
contained in the texture region marked by the pixel-level mask. Then, its
warped representation in the temporal references (e.g., the preceding and
succeeding frames in encoding order) has to lie inside the masked texture
area of the corresponding reference frames as well. Finally, these texture blocks
are encoded using the texture mode, and non-texture blocks are encoded as
usual using the hybrid coding structure.
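
The conservative two-step test above can be sketched as follows; `warp_footprint` is a hypothetical helper that returns the block's warped footprint inside a reference-frame mask using the global motion model.

```python
import numpy as np

def find_texture_blocks(mask_cur, ref_masks, warp_footprint, block=16):
    """Flag blocks as texture blocks per the two-step rule (a sketch):
    (1) the block lies fully inside the current frame's pixel-level mask;
    (2) its warped footprint lies fully inside every reference mask."""
    h, w = mask_cur.shape
    flags = np.zeros((h // block, w // block), dtype=bool)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            if not mask_cur[y:y + block, x:x + block].all():
                continue  # fails step (1)
            flags[by, bx] = all(
                warp_footprint(ref, y, x, block).all() for ref in ref_masks)
    return flags
```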
Texture Mode. A texture mode coded block is inferred by its temporal reference
using the global motion parameters without incurring any motion compensation
residuals. In contrast, non-texture blocks are compressed using a hybrid
“prediction+residual” scheme. For each current frame and any one of its
reference frames, AV1 syntax specifies only one set of global motion
parameters at the frame header. Therefore, to comply with the AV1 syntax, our
implementation only considers one texture class for each frame. This
guarantees the general compatibility of our solution to existing AV1 decoders.
We further modified the AV1 global motion tool to estimate the motion
parameters based on the texture regions of the current frame and its reference
frame. We used the same feature extraction and model fitting approach as in
the global motion coding tool in order to provide a more accurate motion model
for the texture regions. This was done to prevent visual artifacts on the
block edges between the texture and non-texture blocks in the reconstructed
video. Although we have demonstrated our algorithms using the AV1 standard, we
expect that the same methodology can be applied to other standards. For
instance, when using the H.265/HEVC standard, we can leverage the SKIP mode
syntax to signal the texture mode instead of utilizing the global motion
parameters.
Previous discussions have suggested that the texture mode is enabled along
with inter prediction. Our extensive studies have also demonstrated that it is
better to activate the texture mode in frames where bi-directional predictions
are allowed (e.g., B-frames), for the optimal trade-off between bit rate
saving and perceived quality. As will be shown in the following performance
comparisons, we use an 8-frame GoP (or Golden-Frame (GF) group as defined in
AV1) and enable the texture mode in every other frame, by which the compound
prediction from bi-directional references can be facilitated for prediction
warping. Such bi-directional prediction could also alleviate possible temporal
quality flickering.
Switchable Optimization. In our previous work [210], the texture mode was
enabled for every B frame, demonstrating significant bit rate reduction at the
same level of perceptual sensation in most standard test videos, in comparison
to the AV1 anchor. However, for some videos the scheme performed worse than
the anchor. One reason for this effect is that higher QP settings typically
incur more all-zero residual blocks. In addition, the texture mode is content-
dependent: a relatively small number of texture blocks may be present in some
videos. Both scenarios limit the bit rate savings, while an overhead of extra
bits is still mandatory for global motion signaling whenever texture mode is
enabled.
To address these problems, we introduce a switchable scheme to determine
whether texture mode could be potentially enabled for a GoP or a GF group. The
criteria for switching are based on the texture region percentage that is
calculated as the average ratio of texture blocks in B-frames, and on the
potential bit rate savings with or without texture mode. Figure 4b illustrates
the switchable texture mode decision. Currently, we use bit rate saving as a
criterion for switch decisions when the texture mode is enabled. This assumes
perceptual sensation will remain nearly the same, since these texture blocks
are perceptually insignificant.
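
In pseudocode form, the per-GF-group decision reduces to a simple predicate (the ratio threshold below is illustrative, not a value from our experiments):

```python
def enable_texture_mode(texture_ratio, bits_with_tex, bits_anchor,
                        min_ratio=0.05):
    """GoP/GF-group switch (a sketch): turn texture mode on only when enough
    texture blocks exist in the B-frames and the texture-mode encoding
    actually spends fewer bits than the anchor encoding."""
    return texture_ratio >= min_ratio and bits_with_tex < bits_anchor
```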
Figure 4: Texture mode and switchable control scheme. (a) Texture mode encoder
implementation. (b) Switchable texture mode decision.
### V-C Experimental Results
We selected sequences with texture regions from standard test sequences and
the more challenging YouTube UGC dataset (https://media.withyoutube.com/)
[199]. The YouTube UGC dataset is a sample selected from thousands of User
Generated Content (UGC) videos uploaded to YouTube. The names of the UGC
videos follow the format Category_Resolution_UniqueID. We calculate the bit
rate savings at different QP values for 150 frames of the test sequences. In
our experiments, we used the following parameters for the AV1 codec (change-Id:
Ibed6015aa7cce12fcc6f314ffde76624df4ad2a1) as the baseline:
8-frame GoP or GF group using random access configuration; 30 FPS; constant
quality rate control policy; multi-layer coding structure for all GF groups;
maximum intra frame interval at 150. We evaluate the performance of our
proposed method in terms of bit rate savings and perceived quality.
#### V-C1 Coding Performance
To evaluate the performance of the proposed switchable texture mode method,
bit rate savings at four quantization levels (QP = 16, 24, 32, 40) are
calculated for each test sequence in comparison to the AV1 baseline.
TABLE II: Bit rate saving (%) comparison between handcrafted-feature (FM) [211], block-level DNN (BM) [212] and pixel-level DNN (PM) [210] texture analysis against the AV1 baseline for selected standard test sequences using the tex-allgf method.

| Video Sequence | QP=16 FM | QP=16 BM | QP=16 PM | QP=24 FM | QP=24 BM | QP=24 PM | QP=32 FM | QP=32 BM | QP=32 PM | QP=40 FM | QP=40 BM | QP=40 PM |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Coastguard | -0.17 | 7.80 | 9.14 | -0.36 | 6.99 | 8.01 | -0.43 | 4.70 | 5.72 | -0.62 | 1.90 | 2.13 |
| Flower | 7.42 | 10.55 | 13.00 | 5.42 | 8.66 | 10.78 | 2.51 | 5.96 | 4.95 | 0.19 | 3.38 | 1.20 |
| Waterfall | 3.65 | 4.63 | 13.11 | 1.58 | 3.96 | 7.21 | -0.14 | -0.33 | 1.30 | -3.00 | -3.74 | -3.48 |
| Netflix_aerial | 1.15 | 8.59 | 9.15 | -0.26 | 2.15 | 5.59 | -1.32 | -0.68 | 1.05 | -2.10 | -4.59 | -4.01 |
| Intotree | 0.88 | 5.32 | 9.71 | 0.15 | 4.32 | 9.42 | -0.14 | 1.99 | 8.46 | -0.26 | -2.83 | 4.92 |
Texture Analysis. We compare two DNN-based texture analysis methods [212, 210]
with a handcrafted feature-based approach [211] for selected standard test
sequences. Results are shown in Table II. A positive bit rate saving (%)
indicates a reduction compared with the AV1 baseline. Compared to the feature-
based approach, DNN-based methods show improved performance in terms of bit
rate saving. The feature-based approach relies on color and edge information
to generate the texture mask and is less accurate and consistent both
spatially and temporally. Therefore, the number of blocks that are
reconstructed using texture mode is usually much smaller than that of the DNN-
based methods. Note that the parameters used in the feature-based approach
require manual tuning for each video to optimize the texture analysis output. The
pixel-level segmentation [210] shows further advantages compared with block-
level method [212], since the CNN model does not require block size to be
fixed.
TABLE III: Bit rate saving (%) comparison of the tex-allgf and tex-switch methods against the AV1 baseline. Cells marked with † are cases where the GoP-level switch removes or reduces a loss of tex-allgf; cells marked with ‡ are cases where tex-switch saves less than tex-allgf.

| Resolution | Video Sequence | QP=16 tex-allgf | QP=16 tex-switch | QP=24 tex-allgf | QP=24 tex-switch | QP=32 tex-allgf | QP=32 tex-switch | QP=40 tex-allgf | QP=40 tex-switch |
|---|---|---|---|---|---|---|---|---|---|
| CIF | Bridgeclose | 15.78 | 15.78 | 10.87 | 10.87 | 4.21 | 4.21 | 2.77 | 2.77 |
| CIF | Bridgefar | 10.68 | 10.68 | 8.56 | 8.56 | 6.34 | 6.34 | 6.01 | 6.01 |
| CIF | Coastguard | 9.14 | 9.14 | 8.01 | 8.01 | 5.72 | 5.72 | 2.13 | 2.13 |
| CIF | Flower | 13.00 | 13.00 | 10.78 | 10.78 | 4.95 | 4.95 | 1.20 | 1.20 |
| CIF | Waterfall | 13.11 | 13.11 | 7.21 | 7.21 | 1.30 | 1.30 | -3.48 | 0.00† |
| 512×270 | Netflix_aerial | 9.15 | 9.15 | 5.59 | 5.59 | 1.05 | 1.05 | -4.01 | 0.00† |
| 360P | NewsClip_360P-1e1c | 10.77 | 10.77 | 9.27 | 9.27 | 5.23 | 5.23 | 1.54 | 1.54 |
| 360P | NewsClip_360P-22ce | 17.37 | 17.37 | 15.79 | 15.79 | 16.37 | 16.37 | 17.98 | 17.98 |
| 360P | TelevisionClip_360P-3b9a | 1.45 | 1.45 | 0.48 | 0.48 | -1.09 | 0.00† | -3.26 | 0.00† |
| 360P | TelevisionClip_360P-74dd | 1.66 | 1.66 | 1.17 | 1.17 | 0.36 | 0.36 | -0.37 | 0.00† |
| 480P | HowTo_480P-04f1 | 3.81 | 3.81 | 2.57 | 2.57 | 0.93 | 0.93 | 0.06 | 0.36† |
| 480P | HowTo_480P-4c99 | 2.36 | 2.36 | 1.67 | 1.67 | 0.37 | 0.00‡ | -1.16 | 0.00† |
| 480P | MusicVideo_480P-1eee | 3.31 | 3.31 | 3.29 | 3.29 | 2.53 | 2.53 | -0.30 | -0.30 |
| 480P | NewsClip_480P-15fa | 6.31 | 6.31 | 6.05 | 5.79‡ | 0.53 | 0.11‡ | -0.79 | 0.03† |
| 480P | NewsClip_480P-7a0d | 11.54 | 11.54 | 10.03 | 10.03 | 1.53 | 1.53 | 0.08 | 0.00‡ |
| 480P | TelevisionClip_480P-19d3 | 3.13 | 3.13 | 2.86 | 2.86 | 1.66 | 1.66 | 0.58 | 0.00‡ |
| 720P | HowTo_720P-0b01 | 12.72 | 12.72 | 11.84 | 11.84 | 9.31 | 9.31 | 6.35 | 6.35 |
| 720P | MusicVideo_720P-3698 | 1.76 | 1.76 | 1.07 | 1.07 | 0.30 | 0.30 | -0.17 | 0.00† |
| 720P | MusicVideo_720P-4ad2 | 6.93 | 6.93 | 3.81 | 3.81 | 1.87 | 1.87 | 0.60 | 0.11‡ |
| 1080P | HowTo_1080P-4d7b | 7.31 | 7.31 | 6.07 | 6.07 | 3.21 | 3.21 | 0.72 | 0.72 |
| 1080P | MusicVideo_1080P-55af | 3.88 | 3.88 | 1.78 | 1.78 | 0.31 | 0.33† | -0.99 | -0.68† |
| 1080P | intotree | 9.71 | 9.71 | 9.42 | 9.42 | 8.46 | 8.46 | 4.92 | 4.92 |
|  | Average | 7.96 | 7.96 | 6.28 | 6.27‡ | 3.38 | 3.40† | 1.45 | 2.05† |
Switchable Scheme. We also compare the proposed method, a.k.a., tex-switch,
with our previous work in [210], a.k.a., tex-allgf, which enables texture mode
for all frames in a GF group. All three methods use the same encoder setting
for fair comparison. Bit rate saving results for various videos at different
resolutions against the AV1 baseline are shown in Table III. A positive bit
rate saving (%) indicates a reduction compared with the AV1 baseline.
In general, compared to the AV1 baseline, the coding performance of tex-allgf
shows significant bit rate savings at lower QPs. However, as QP increases, the
savings are diminished. In some cases, tex-allgf exhibits poorer coding
performance than the AV1 baseline at a high QP (e.g., negative numbers at QP
40). At a high QP, most blocks have zero residual due to heavy quantization,
leading to very limited margins for bit rate savings using texture mode. In
addition, a few extra bits are required for signalling the global motion of
texture-mode coded blocks. In such cases, the bits saved by skipping residuals
in texture mode cannot compensate for the overhead of this side information.
Furthermore, the proposed tex-switch method retains the greatest bit rate
savings offered by tex-allgf, and resolves the loss at higher QP settings. As
shown in Table III, negative numbers are mostly removed (marked with †)
by the introduction of a GoP-level switchable texture mode. In some cases
where tex-switch has zero bit rate savings compared to the AV1 baseline, the
texture mode is completely disabled for all the GF groups, whereas tex-allgf
shows a loss. In a few cases, however, tex-switch has less bit rate saving than
tex-allgf (marked with ‡). This is because the bit rate saving
performance of the first GF group in the scene fails to accurately represent
the whole scene in some of the UGC sequences with short scene cuts. A possible
solution is to identify additional GF groups that show potential bit rate
savings and enable texture mode for these GF groups.
#### V-C2 Subjective Evaluation
Figure 5: Subjective evaluation of visual preference. Results show average
subjective preference (%) for QP = 16, 24, 32, 40 compared between AV1
baseline and proposed switchable texture mode.
Although significant bit rate savings have been achieved compared to the AV1
baseline, it is acknowledged that identical QP values do not necessarily imply
the same video quality. We have performed a subjective visual quality study
with 20 participants. Reconstructed videos produced by the proposed method
(tex-switch) and the baseline AV1 codec at QP = 16, 24, 32 and 40 are arranged
randomly and assessed by the participants using a double stimulus continuous
quality scale (DSCQS) method [213]. Subjects were asked to choose among
three options: the first video has better visual quality, the second video has
better visual quality, or there is no difference between the two versions.
The results of this study are summarized in Figure 5. “Same Quality”
indicates the percentage of participants who could not tell the difference
between the videos reconstructed by the AV1 baseline codec and by the proposed
method tex-switch (69.03% on average); “tex-switch” indicates the
percentage of participants who preferred the reconstructions of the proposed
method (14.32% on average); and “AV1” indicates the percentage
of participants who thought the visual quality of the videos reconstructed
using the AV1 baseline was better (16.65% on average).
We observe that the results are sequence dependent and that spatial and
temporal artifacts can appear in the reconstructed video. The main artifacts
come from the inaccurate pixel-based texture mask. For example, in some frames
of TelevisionClip_360P-74dd sequence, the texture masks include parts of the
moving objects in the foreground, which are reconstructed using texture mode.
Since the motion of the moving objects is different from the motion of the
texture area, there are noticeable artifacts around those parts of the frame.
To further improve the accuracy of region analysis using DNN-based pre-
processing, we plan to incorporate an in-loop perceptual visual quality metric
for optimization during the texture analysis and reconstruction.
### V-D Discussion And Future Direction
We proposed a DNN-based texture analysis/synthesis coding tool for the AV1 codec.
Experimental results show that our proposed method can achieve noticeable bit
rate reduction with satisfying visual quality for both standard test sets and
user generated content, which is verified by a subjective study. We envision
that video coding driven by semantic understanding will continue to improve in
terms of both quality and bit rate, especially by leveraging advances of deep
learning methods. However, there remain several open challenges that require
further investigation.
Accuracy of region analysis is one of the major challenges for integrating
semantic understanding into video coding. However, recent advances in scene
understanding have significantly improved the performance of region analysis.
Visual artifacts are still noticeable when a non-texture region is incorrectly
included in the texture mask, particularly if the analysis/synthesis coding
system is open loop. One potential solution is to incorporate some perceptual
visual quality measures in-loop during the texture region reconstruction.
Video segmentation benchmark datasets are important for developing machine
learning methods for video based semantic understanding. Existing segmentation
datasets are either based on images with texture [214], or contain general
video objects only [215, 216], or focus on visual quality but lack
segmentation ground truth.
## VI Case Study for Coding:
End-to-End Neural Video Coding (E2E-NVC)
This section presents a framework for end-to-end neural video coding. We
include a discussion of its key components, as well as its overall efficiency.
Our proposed method is extended from our pioneering work in [104] but with
significant performance improvements by allowing fully end-to-end learning-
based spatio-temporal feature representation. More details can be found in
[136, 217, 131].
Figure 6: End-to-End Neural Video Coding (E2E-NVC). This E2E-NVC in (a)
consists of modularized intra and inter coding, where inter coding utilizes
respective motion and residual coding. Each component is well exploited using
a stacked CNNs-based VAE for efficient representations of intra pixels,
displaced inter residuals, and inter motions. All modularized components are
inter-connected and optimized in an end-to-end manner. (b) General VAE model
applies stacked convolutions (e.g., 5$\times$5) with main encoder-decoder
(${\bf E}_{m}$, ${\bf D}_{m}$) and hyper encoder-decoder pairs (${\bf E}_{h}$,
${\bf D}_{h}$), where main encoder ${\bf E}_{m}$ includes four major
convolutional layers (e.g., convolutional downsampling and three residual
blocks ($\times$3) for robust feature processing [201]). Hyper decoder ${\bf
D}_{h}$ mirrors the steps in hyper encoder ${\bf E}_{h}$ for hyper prior
information generation. Prior aggregation (PA) engine collects the information
from hyper prior, autoregressive spatial neighbors, as well as temporal
correspondences (if applicable) for main decoder ${\bf D}_{m}$ to reconstruct
input scene. Non-local attention is adopted to simulate the saliency masking
at bottlenecks, and rectified linear unit (ReLU) is implicitly embedded with
convolutions for enabling the nonlinearity. “Q” is for quantization, AE and AD
for respective arithmetic encoding and decoding. 2$\downarrow$ and 2$\uparrow$
are downsampling and upsampling at a factor of 2 for both horizontal and
vertical dimensions.
### VI-A Framework
As with all modern video encoders, the proposed E2E-NVC compresses the first
frame in each group of pictures as an intra-frame using a VAE based
compression engine (neuro-Intra). It codes the remaining frames in each group
using motion compensated prediction. As shown in Fig. 6a, the proposed E2E-NVC
uses the VAE compressor (neuro-Motion) to generate the multiscale motion field
between the current frame and the reference frame. Then, a multiscale motion
compensation network (MS-MCN) takes multiscale compressed flows, warps the
multiscale features of the reference frame, and combines these warped features
to generate the predicted frame. The prediction residual is then coded using
another VAE-based compressor (neuro-Res).
A low-delay E2E-NVC based video encoder is specifically illustrated in this
work. Given a group of pictures (GOP) $\mathbb{X}=\{{\bf X}_{1},{\bf
X}_{2},...,{\bf X}_{t}\}$, we first encode ${\bf X}_{1}$ using the neuro-Intra
module to obtain its reconstructed frame $\hat{\bf X}_{1}$. The following frame
${\bf X}_{2}$ is encoded predictively, using neuro-Motion, MS-MCN, and neuro-
Res together, as shown in Fig. 6a. Note that MS-MCN takes the multiscale
optical flows
$\left\{\vec{f}^{1}_{d},\vec{f}^{2}_{d},...,\vec{f}^{s}_{d}\right\}$ derived
by the pyramid decoder in neuro-Motion, and then uses them to generate the
predicted frame $\hat{\bf X}^{p}_{2}$ by multiscale motion compensation.
Displaced inter-residual ${\bf r}_{2}={{\bf X}_{2}}-{\hat{\bf X}^{p}_{2}}$ is
then compressed in neuro-Res, yielding the reconstruction $\hat{\bf r}_{2}$.
The final reconstruction $\hat{\bf X}_{2}$ is given by ${\hat{\bf
X}_{2}}={\hat{\bf X}^{p}_{2}}+{\hat{\bf r}_{2}}$. All of the remaining
P-frames in the group of pictures are then encoded using the same procedure.
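
The coding loop just described can be summarized in a short sketch; the four module interfaces below are illustrative stand-ins for neuro-Intra, neuro-Motion, MS-MCN, and neuro-Res.

```python
def encode_gop(frames, neuro_intra, neuro_motion, ms_mcn, neuro_res):
    """Low-delay E2E-NVC loop (a sketch with illustrative interfaces)."""
    recon = [neuro_intra(frames[0])]          # intra-code X_1
    for x in frames[1:]:
        flows = neuro_motion(x, recon[-1])    # multiscale compressed flows
        x_pred = ms_mcn(flows, recon[-1])     # multiscale motion compensation
        r_hat = neuro_res(x - x_pred)         # coded displaced inter-residual
        recon.append(x_pred + r_hat)          # X_hat = prediction + residual
    return recon
```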
Fig. 6b illustrates the general architecture of the VAE model. The VAE model
includes a main encoder-decoder pair that is used for latent feature analysis
and synthesis, as well as a hyper encoder-decoder for hyper prior generation.
The main encoder ${\bf E}_{m}$ uses four stacked CNN layers. Each
convolutional layer employs stride convolutions to achieve downsampling (at a
factor of 2 in this example) and cascaded convolutions for efficient feature
extraction (here, we use three ResNet-based residual blocks [201]; we choose
cascaded ResNets because they are highly efficient and reliable, though other
efficient CNN architectures could also be applied). We use a
two-layer hyper encoder ${\bf E}_{h}$ to further generate the subsequent hyper
priors as side information, which is used in the entropy coding of the latent
features.
We apply stacked convolutional layers with a limited (3$\times$3) receptive
field to capture spatial locality; stacking these layers simulates layer-wise
feature extraction. The same ideas are used in many relevant studies [142,
149]. We utilize the simple ReLU as the nonlinear activation function
(although other nonlinear activation functions, such as the generalized
divisive normalization of [105], could be used as well).
The human visual system operates in two stages: First, the observer scans an
entire scene to gain a complete understanding of everything within the field
of vision. Second, the observer focuses their attention on specific salient
regions. During image and video compression, this mechanism of visual
attention can be used to ensure that bit resources are allocated where they
are most needed (e.g., via unequal feature quantization) [218, 140]. This
allows resources to be assigned such that salient areas are more accurately
reconstructed, while resources are conserved in the reconstruction of less-
salient areas. To more accurately discern salient from non-salient areas, we
adopt the non-local attention module (NLAM) at the bottleneck layers of both
the main encoder and hyper encoder, prior to quantization, in order to include
both global and local information.
To enable more accurate conditional probability density modeling for entropy
coding of the latent features, we introduce the Prior Aggregation (PA) engine,
which fuses the inputs from the hyper priors, spatial neighbors, and temporal
context if applicable (intra and residual coding only use joint spatial and
hyper priors, without temporal inference). Information theory suggests that
more accurate context modeling requires fewer resources (e.g., bits) to
represent information [219]. For the sake of simplicity, we assume the latent
features (e.g., motion, image pixels, residuals) follow the Gaussian
distribution, as in [149, 148]. We use the PA engine to derive the mean and
standard deviation of the distribution for each feature.
Figure 7: Efficiency of neuro-Intra. PSNR vs. rate performance of neuro-Intra
in comparison to NLAIC [136], Minnen (2018) [149], BPG (4:4:4) and JPEG2000.
Note that the curves for neuro-Intra and NLAIC overlap.
### VI-B Neural Intra Coding
Our neuro-Intra is a simplified version of the Non-Local Attention optimized
Image Compression (NLAIC) that was originally proposed in [136].
One major difference between the NLAIC and the VAE model using autoregressive
spatial context in [149] is the introduction of the NLAM inspired by [220]. In
addition, we have applied a 3D 5$\times$5$\times$5 masked CNN to extract
spatial priors (this 5$\times$5$\times$5 convolutional kernel shares the same
parameters across all channels, offering a great model complexity reduction
compared with the 2D CNN-based solution in [149]); these spatial priors are
fused with hyper priors in PA for entropy context modeling (e.g., the bottom
part of Fig. 9). Here, we have assumed a single Gaussian distribution for the
context modeling of entropy coding. Note that intra-pixel and inter-residual
coding in this paper do not use temporal priors; only spatial priors are
utilized.
Figure 8: Multiscale Motion Estimation and Compensation. One-stage neuro-
Motion with MS-MCN uses a pyramidal flow decoder to synthesize the multiscale
compressed optical flows (MCFs) that are used in a multiscale motion
compensation network for generating predicted frames.
The original NLAIC applies multiple NLAMs in both main and hyper coders,
leading to excessive memory consumption at a large spatial scale. In E2E-NVC,
NLAMs are only used at the bottleneck layers for both main and hyper encoder-
decoder pairs, allowing bits to be allocated adaptively.
To overcome the non-differentiability of the quantization operation,
quantization is usually simulated by adding uniform noise in [142]. However,
such noise augmentation is not exactly consistent with the rounding in
inference, which can yield performance loss (as reported by [135]). Thus, we
apply universal quantization (UQ) [135] in neuro-Intra. UQ is used for neuro-
Motion and neuro-Res as well. When applied to the common Kodak dataset, neuro-
Intra performed as well as NLAIC [136], and outperformed Minnen (2018) [149],
BPG (4:4:4) and JPEG2000, as shown in Fig. 7.
Figure 9: Context-Adaptive Modeling Using Joint Spatio-temporal and Hyper
Priors. All priors are fused in PA to provide estimates of the probability
distribution parameters.
### VI-C Neural Motion Coding and Compensation
Inter-frame coding plays a vital role in video coding. The key is how to
efficiently represent motion in a compact format for compensation. In contrast
to the pixel-domain block-based motion estimation and compensation in
conventional video coding, we rely on optical flow to accurately capture the
temporal information for motion compensation.
To improve inter-frame prediction, we extend our earlier work [131] to
multiscale motion generation and compensation. This multiscale motion
processing directly transforms two concatenated frames (where one frame is the
reference from the past, and one is the current frame) into quantized temporal
features that represent the inter-frame motion. These quantized features are
decoded into compressed optical flow in an unsupervised way for frame
compensation via warping. This one-stage scheme does not require any pre-
trained flow network such as FlowNet2 or PWC-net to generate the optical flow
explicitly. It allows us to quantize the motion features rather than the
optical flows, and to train the motion feature encoder and decoder together
with explicit consideration of quantization and rate constraint.
The neuro-Motion module is modified for multiscale motion generation, where
the main encoder is used for feature fusion. We replace the main decoder with
a pyramidal flow decoder, which generates the multiscale compressed optical
flows (MCFs). MCFs will be processed together with the reference frame, using
a multiscale motion compensation network (MS-MCN) to obtain the predicted
frame efficiently, as shown in Fig. 8. Please refer to [217] for more details.
Encoding motion compactly is another important factor for overall performance
improvement. We suggest the joint spatio-temporal and hyper prior-based
context-adaptive model shown in Fig. 9 for efficiently inferring current
quantized features. This is implemented in the PA engine of Fig. 6b.
The joint spatio-temporal and hyper prior-based context-adaptive model mainly
consists of a spatio-temporal-hyper aggregation module (STHAM) and a temporal
updating module (TUM), shown in Fig. 9. At timestamp $t$, STHAM is introduced
to accumulate all the accessible priors and estimate the mean and standard
deviation of Gaussian Mixture Model (GMM) jointly using:
$(\mu_{\mathscr{F}},\sigma_{\mathscr{F}})=\mathbb{F}({\mathscr{F}}_{1},...,{\mathscr{F}}_{i-1},\hat{\bf
z}_{t},{\bf h}_{t-1}),$ (1)
where $\mathscr{F}_{i},i=0,1,2,...$ are elements of the quantized latent
features (e.g., motion flow), and ${\bf h}_{t-1}$ denotes the temporal priors
aggregated from the motion flows preceding the current frame. Spatial priors
are autoregressively derived using masked 5$\times$5$\times$5 3D convolutions
and then concatenated with the decoded hyper priors and temporal priors using
stacked 1$\times$1$\times$1 convolutions. The neuro-Motion module exploits
temporal redundancy to further improve prediction efficiency, leveraging the
correlation between second-order moments of the inter motion. A probabilistic
model of each element to be encoded is then derived with the estimated
$\mu_{\mathscr{F}}$ and $\sigma_{\mathscr{F}}$:
$p_{{\mathscr{F}}_{i}|({\mathscr{F}}_{1},...,{\mathscr{F}}_{i-1},\hat{\bf z}_{t},{\bf h}_{t-1})}({\mathscr{F}}_{i}|{\mathscr{F}}_{1},...,{\mathscr{F}}_{i-1},\hat{\bf z}_{t},{\bf h}_{t-1})=\prod_{i}\left(\mathcal{N}(\mu_{\mathscr{F}},\sigma_{\mathscr{F}}^{2})*\mathcal{U}(-\tfrac{1}{2},\tfrac{1}{2})\right)({\mathscr{F}}_{i}).$ (2)
Note that the TUM is applied recurrently to the current quantized features
$\mathscr{F}_{t}$ using a standard ConvLSTM [221]:
$({\bf h}_{t},{\bf c}_{t})={\rm ConvLSTM}({\mathscr{F}_{t},{\bf h}_{t-1},{\bf
c}_{t-1}}),$ (3)
where ${\bf h}_{t}$ are updated temporal priors for the next frame, ${\bf
c}_{t}$ is a memory state to control information flow across multiple time
instances (e.g., frames). Other recurrent units can also be used to capture
temporal correlations as in (3).
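
For reference, a minimal ConvLSTM cell implementing the recurrence in (3) looks as follows (a generic sketch of [221], not our exact TUM configuration):

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Minimal ConvLSTM cell: turns the current quantized features into
    updated temporal priors h_t with memory state c_t, as in Eq. (3)."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        # A single convolution produces all four gates at once.
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, h, c):
        i, f, o, g = self.gates(torch.cat([x, h], dim=1)).chunk(4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c
```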
It is worth noting that leveraging second-order information for the
representation of compact motion is also widely explored in traditional video
coding approaches. For example, motion vector predictions from spatial and
temporal co-located neighbors are standardized in H.265/HEVC, by which only
motion vector differences (after prediction) are encoded.
### VI-D Neural Residual Coding
Inter-frame residual coding is another significant module contributing to the
overall efficiency of the system. It is used to compress the temporal
prediction error pixels. It also affects the prediction efficiency of the
next frame, since errors usually propagate temporally.
Here we use the VAE architecture in Fig. 6b to encode the residual ${\bf
r}_{t}$. The rate-constrained loss function is used:
$L=\lambda\cdot\mathbb{D}_{2}\left({\bf X}_{t},({\bf X}^{p}_{t}+{\hat{\bf
r}_{t}})\right)+R,$ (4)
where $\mathbb{D}_{2}$ is the $\ell_{2}$ loss between a residual compensated
frame ${\bf X}^{p}_{t}+{\hat{\bf r}_{t}}$ and ${\bf X}_{t}$. neuro-Res is
first pretrained on the frames predicted by the pretrained neuro-Motion and
MS-MCN, with the loss function in (4) where the rate $R$ only accounts for the
residual bits. We then refine neuro-Res jointly with neuro-Motion and MS-
MCN over pairs of frames, using a loss where $R$ incorporates the bits for
both motion and residual.
### VI-E Experimental Comparison
Figure 10: BD-Rate illustration using PSNR & MS-SSIM. (a) NVC offers an average 35.34% gain against the anchor H.264/AVC when distortion is measured using PSNR. (b) NVC shows over 50% gains against the anchor H.264/AVC under MS-SSIM evaluation. MS-SSIM is usually studied as a perceptual quality metric in image compression, especially at low bit rates.

TABLE IV: BD-Rate gains of NVC, H.265/HEVC and DVC against the H.264/AVC anchor. For each codec, BDBR is the BD-Rate change (negative is better) and BD-(D) is the BD-distortion gain, measured under PSNR (dB) and MS-SSIM respectively.

| Sequences | H.265/HEVC BDBR (PSNR) | H.265/HEVC BD-(D) (PSNR) | H.265/HEVC BDBR (MS-SSIM) | H.265/HEVC BD-(D) (MS-SSIM) | DVC BDBR (PSNR) | DVC BD-(D) (PSNR) | DVC BDBR (MS-SSIM) | DVC BD-(D) (MS-SSIM) | NVC BDBR (PSNR) | NVC BD-(D) (PSNR) | NVC BDBR (MS-SSIM) | NVC BD-(D) (MS-SSIM) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| ClassB | -32.03% | 0.78 | -27.67% | 0.0046 | -27.92% | 0.72 | -22.56% | 0.0049 | -45.66% | 1.21 | -54.90% | 0.0114 |
| ClassC | -20.88% | 0.91 | -19.57% | 0.0054 | -3.53% | 0.13 | -24.89% | 0.0081 | -17.82% | 0.73 | -43.11% | 0.0133 |
| ClassD | -12.39% | 0.57 | -9.68% | 0.0023 | -6.20% | 0.26 | -22.44% | 0.0067 | -15.53% | 0.70 | -43.64% | 0.0123 |
| ClassE | -36.45% | 0.99 | -30.82% | 0.0018 | -35.94% | 1.17 | -29.08% | 0.0027 | -49.81% | 1.70 | -58.63% | 0.0048 |
| UVG | -48.53% | 1.00 | -37.5% | 0.0056 | -37.74% | 1.00 | -16.46% | 0.0032 | -48.91% | 1.24 | -53.87% | 0.0100 |
| Average | -30.05% | 0.85 | -25.04% | 0.0039 | -22.26% | 0.65 | -23.08% | 0.0051 | -35.54% | 1.11 | -50.83% | 0.0103 |
Figure 11: Visual Comparison. Reconstructed frames of NVC, H.265/HEVC and H.264/AVC. NVC avoids blocky artifacts, visible noise, etc., and provides better quality at a lower bit rate.
We applied the same low-delay coding setting as DVC [129] to our method, and used the traditional H.264/AVC and H.265/HEVC for comparison. We encoded 100 frames with a GOP of 10 on the H.265/HEVC test sequences, and 600 frames with a GOP of 12 on the UVG dataset. For H.265/HEVC, we applied the fast mode of x265 (http://x265.org/), a popular open-source H.265/HEVC encoder implementation, while the fast mode of x264 (https://www.videolan.org/developers/x264.html) was used as the representative H.264/AVC encoder.
Fig. 10 shows the leading compression efficiency of NVC, measured with PSNR and MS-SSIM respectively, across the H.265/HEVC and UVG test sequences. In Table IV, using the same H.264/AVC anchor, our NVC presents 35% BD-Rate gains, while H.265/HEVC and DVC offer 30% and 22% gains, respectively. If distortion is measured by MS-SSIM, our efficiency gains are even larger: NVC achieves about a 50% improvement, while both H.265/HEVC and DVC achieve only around 25%.
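For reference, the BD-Rate numbers above follow Bjøntegaard's method: fit log-rate as a polynomial in quality for each codec, then average the gap between the two fitted curves over the overlapping quality range. A minimal sketch (our own helper, using the common cubic fit):

```python
import numpy as np

def bd_rate(rates_anchor, psnr_anchor, rates_test, psnr_test):
    """Average bitrate difference (%) of the test codec vs. the anchor at
    equal quality, from four (rate, PSNR) points per codec."""
    pa = np.polyfit(psnr_anchor, np.log(rates_anchor), 3)  # log-rate fit
    pt = np.polyfit(psnr_test, np.log(rates_test), 3)
    lo = max(min(psnr_anchor), min(psnr_test))             # overlap range
    hi = min(max(psnr_anchor), max(psnr_test))
    ia, it = np.polyint(pa), np.polyint(pt)
    avg_a = (np.polyval(ia, hi) - np.polyval(ia, lo)) / (hi - lo)
    avg_t = (np.polyval(it, hi) - np.polyval(it, lo)) / (hi - lo)
    return (np.exp(avg_t - avg_a) - 1) * 100.0             # negative = savings
```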
Our NVC rivals the recent DVC_Pro [222], an upgrade of the earlier DVC [141]: NVC attains 35.54% and 50.83% BD-Rate reductions measured by PSNR and MS-SSIM distortion respectively, versus 34.57% and 45.88% for DVC_Pro. DVC [141] mainly achieves higher coding efficiency than H.265/HEVC at high bit rates, but its performance declines sharply at low bit rates (e.g., performing worse than H.264/AVC at some rates). We have also observed that DVC's performance varies across test sequences. DVC_Pro upgrades DVC with better intra/residual coding using [149] and $\lambda$ fine-tuning, showing state-of-the-art performance [222].
Visual Comparison. We provide a visual quality comparison between NVC, H.264/AVC, and H.265/HEVC in Fig. 11. Generally, NVC yields reconstructions of much higher quality than its competitors, even at a lower bit rate. For the sample clip "RaceHorse", which contains non-translational motion and a complex background, NVC uses 7% fewer bits while improving quality by more than 1.5 dB PSNR compared with H.264/AVC. For other cases, our method also shows robust improvement. Traditional codecs usually suffer from blocky artifacts and motion-induced noise close to object edges; in H.264/AVC, one can clearly observe block partition boundaries with severe pixel discontinuity. Our results provide higher-quality reconstruction and avoid such noise and artifacts.
### VI-F Discussion And Future Direction
We developed an end-to-end deep neural video coding framework that learns compact spatio-temporal representations of the raw video input. Our extensive simulations yielded very encouraging results, demonstrating that the proposed method offers consistent and stable gains over existing methods (e.g., traditional H.265/HEVC, recent learning-based approaches [129], etc.) across a variety of bit rates and a wide range of content.
The H.264/AVC, H.265/HEVC, AVS, AV1, and even the VVC are masterpieces of hybrid prediction/transform framework-based video coding. Techniques such as rate-distortion optimization and rate control can certainly be incorporated to improve learning-based solutions. For example, reference frame selection is an important means by which we can embed and aggregate the most appropriate information for reducing temporal error and improving overall inter-coding efficiency. Making deep learning-based video coding practically applicable is another direction worthy of deeper investigation.
## VII Case Studies for Post-processing: Efficient Neural Filtering
In this case study, both in-loop and post filtering are demonstrated using stacked DNN-based neural filters for quality enhancement of reconstructed frames. Specifically, we design a single-frame guided CNN that adapts pre-trained CNN models to different video contents for in-loop filtering, and a multi-frame CNN that leverages spatio-temporal information for post filtering. Both reveal noticeable performance gains. In practice, neural filters can be deployed either in-loop or as post filters, according to the application requirements.
### VII-A In-loop Filtering via Guided CNN
As reviewed in Section IV, most existing works design a CNN model to directly map a degraded input frame to its restored version (e.g., the ground truth label), as illustrated in Fig. 12a. To ensure that the model generalizes to other contexts, such CNN models are often designed with deeper layers, denser connections, wider receptive fields, etc., amounting to hundreds of millions of parameters. As a consequence, such generalized models are poorly suited to most practical applications. To address this problem, we propose instead to use content-adaptive weights to guide a shallow CNN model (as shown in Fig. 12b).
The principle underlying this approach is sparse signal decomposition: We
expect that the CNN model can represent any input as a weighted combination of
channel-wise features. Note that weighting coefficients are dependent on input
signals, making this model generalizable to a variety of content
characteristics.
Method. Let $\bf{x}$ be a degraded block with $\rm N$ pixels in column-wise vector format. The corresponding source block of $\bf{x}$ is $\bf{s}$, with processing error ${\bf d}={\bf s}-{\bf x}$. We wish to derive ${\bf r}_{\rm corr}$ from $\bf{x}$ so that the final reconstruction ${\bf x}_{\rm corr}={\bf x}+{\bf r}_{\rm corr}$ is closer to $\bf s$.
Figure 12: CNN-based Restoration. (a) Conventional model structure. (b) Guided
CNN model with adaptive weights.
Let the CNN output layer have $\rm M$ channels, i.e., ${\bf r}_{0},{\bf r}_{1},\cdots,{\bf r}_{\rm M-1}$. Then ${\bf r}_{\rm corr}$ is assumed to be a linear combination of these channel-wise feature vectors,

${\bf r}_{\rm corr}={a_{0}}{\bf r}_{0}+{a_{1}}{\bf r}_{1}+\cdots+{a_{\rm M-1}}{\bf r}_{\rm M-1},$ (5)
where $a_{0},a_{1},\cdots,a_{\rm M-1}$ are the weighting parameters that are
explicitly signaled in the compressed bitstream.
Our objective is to minimize the distance between the restored block ${\bf x}_{\rm corr}$ and its corresponding source $\bf s$, i.e., $\left|{\bf x}_{\rm corr}-{\bf s}\right|^{2}=\left|{\bf r}_{\rm corr}-{\bf d}\right|^{2}$. Given the channel-wise output features ${\bf r}_{0},{\bf r}_{1},\cdots,{\bf r}_{\rm M-1}$ for a degraded input $\bf x$, the weighting parameters $a_{0},a_{1},\cdots,a_{\rm M-1}$ can be estimated by least-squares optimization as

$\left[a_{0},a_{1},\cdots,a_{\rm M-1}\right]^{\rm T}=({\bf R}^{\rm T}{\bf R})^{-1}{\bf R}^{\rm T}{\bf d},$ (6)

where ${\bf R}=\left[{\bf r}_{0},{\bf r}_{1},\dots,{\bf r}_{\rm M-1}\right]$ is the $\rm N\times M$ matrix whose columns are the stacked output features. The reconstruction error is then

$e=|{\bf r}_{\rm corr}-{\bf d}|^{2}=|{\bf d}|^{2}-{\bf d}^{\rm T}{\bf R}({\bf R}^{\rm T}{\bf R})^{-1}{\bf R}^{\rm T}{\bf d}.$ (7)
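Equations (6) and (7) amount to an ordinary least-squares projection of the error $\bf d$ onto the column space of $\bf R$; a minimal NumPy sketch (function names are ours):

```python
import numpy as np

def guided_weights(R, d):
    """Eq. (6): a = (R^T R)^{-1} R^T d for the N x M feature matrix R and
    the error d = s - x; lstsq is the numerically stable equivalent."""
    a, *_ = np.linalg.lstsq(R, d, rcond=None)
    return a

def reconstruction_error(R, d):
    """Eq. (7): residual energy after projecting d onto span(R)."""
    a = guided_weights(R, d)
    return float(np.sum((R @ a - d) ** 2))
```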
Loss Function. Assuming that one training batch comprises $\rm T$ patch pairs $\{{\bf s}_{i},{\bf x}_{i}\},i=0,1,\cdots,{\rm T}-1$, the overall reconstruction error over the training set is

$E=\sum\nolimits_{i}\left\{|{\bf d}_{i}|^{2}-{\bf d}_{i}^{\rm T}{\bf R}_{i}({\bf R}_{i}^{\rm T}{\bf R}_{i})^{-1}{\bf R}_{i}^{\rm T}{\bf d}_{i}\right\},$ (8)

where ${\bf d}_{i}={\bf s}_{i}-{\bf x}_{i}$ is the error for the $i^{th}$ patch and ${\bf R}_{i}=[{\bf r}_{i,0},{\bf r}_{i,1},\cdots,{\bf r}_{i,{\rm M-1}}]$ collects the corresponding channel-wise features in matrix form, with ${\bf r}_{i,j}$ being the $j^{th}$ channel output when training sample ${\bf x}_{i}$ is passed through the CNN model. Given that $\left|{\bf d}_{i}\right|^{2}$ is independent of the network model, the loss function simplifies to

$L=\sum\nolimits_{i}\left\{-{\bf d}_{i}^{\rm T}{\bf R}_{i}({\bf R}_{i}^{\rm T}{\bf R}_{i})^{-1}{\bf R}_{i}^{\rm T}{\bf d}_{i}\right\}.$ (9)
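Since the least-squares solve is differentiable, the loss in (9) can be evaluated directly on a batch of feature matrices. A hedged PyTorch sketch (the shapes and the small regularizer are our assumptions):

```python
import torch

def guided_cnn_loss(R, d, eps=1e-6):
    """Eq. (9) over a batch: R is (T, N, M) channel features, d is (T, N, 1).
    Maximizes the energy of d explained by span(R) for each patch."""
    RtR = R.transpose(1, 2) @ R                                # (T, M, M)
    RtR = RtR + eps * torch.eye(R.shape[-1], device=R.device)  # stabilize
    a = torch.linalg.solve(RtR, R.transpose(1, 2) @ d)         # LS weights
    explained = (d.transpose(1, 2) @ R @ a).squeeze()          # d^T R a
    return -explained.sum()
```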
Experimental Studies. A shallow baseline CNN model (as described in Table V) is used to demonstrate the efficiency of the guided CNN model. This model comprises seven layers in total with a fixed kernel size of 3$\times$3. At the bottleneck layer, the output feature map has $\rm M$ channels; after extensive simulations, $\rm M=2$ was selected. In total, our model requires only 3,744 parameters, far fewer than the number required by existing methods.
TABLE V: Layered structure and parameter settings of baseline CNN model.
Layer | Kernel size | Input channels | Output channels | Parameters
---|---|---|---|---
1 | $3\times 3$ | $1$ | $16$ | $144$
2 | $3\times 3$ | $16$ | $8$ | $1152$
3 | $3\times 3$ | $8$ | $8$ | $576$
4 | $3\times 3$ | $8$ | $8$ | $576$
5 | $3\times 3$ | $8$ | $8$ | $576$
6 | $3\times 3$ | $8$ | $8$ | $576$
7 | $3\times 3$ | $8$ | $\rm M=2$ | $144$
Total parameters | $3744$
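For reference, the layer stack of Table V maps directly to a few lines of PyTorch. The 3,744-parameter total implies bias-free convolutions; the inter-layer ReLU is our assumption:

```python
import torch.nn as nn

def baseline_cnn(M=2):
    """Seven 3x3 conv layers of Table V; 3744 weights with bias=False."""
    chans = [1, 16, 8, 8, 8, 8, 8, M]
    layers = []
    for i in range(7):
        layers.append(nn.Conv2d(chans[i], chans[i + 1], 3,
                                padding=1, bias=False))
        if i < 6:                      # assumed activation between layers
            layers.append(nn.ReLU(inplace=True))
    return nn.Sequential(*layers)
```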
In training, the 1000 pictures of DIV2K [223] were used. All frames were compressed using the AV1 encoder with the in-loop filters CDEF [159] and LR [160] turned off, to generate the corresponding quantization-induced degraded reconstructions. We divided the 64 QPs into six ranges and trained one model per QP range; the six ranges cover QP values 7 to 16, 17 to 26, 27 to 36, 37 to 46, 47 to 56, and 57 to 63. Compressed frames falling into the same QP range were used to train the corresponding CNN model. Frames were segmented into 64$\times$64 patches, with 1,000 patches per batch. We adopted the adaptive moment estimation (Adam) algorithm with the initial learning rate set at 1e-4; the learning rate was halved every 20 epochs.
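The optimizer configuration described above corresponds to a standard step schedule; a small illustrative sketch (the stand-in model is ours):

```python
import torch
import torch.nn as nn

model = nn.Conv2d(1, 16, 3, padding=1)   # stand-in for the CNN being trained
opt = torch.optim.Adam(model.parameters(), lr=1e-4)           # initial LR 1e-4
sched = torch.optim.lr_scheduler.StepLR(opt, step_size=20, gamma=0.5)

for epoch in range(60):
    # ... iterate over batches of 1,000 64x64 patches here ...
    sched.step()                         # halves the learning rate every 20 epochs
```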
We used the TensorFlow platform running on an NVIDIA GeForce GTX 1080Ti GPU to evaluate coding efficiency across four QPs, i.e., {32, 43, 53, 63}. Our test set included 24 video sequences with resolutions ranging from 2560$\times$1600 to 352$\times$288. The first 50 frames of each sequence were tested in both intra and inter configurations.
In our experiments, $\rm N$ was set to 64, 128, 256, and the whole frame. We found that $\rm N=256$ yields the best performance. For each block, the linear combination parameters $a_{i}$ $(i=0,1)$ were derived accordingly. To strike an appropriate balance between bit consumption and model efficiency, our experiments suggest keeping the dynamic range of $a_{i}$ within 15.
We compared the respective BD-Rate reductions of our guided CNN model and the baseline CNN model against the AV1 anchor encoder, for which all filters were enabled. The baseline CNN model is described in Table V; our guided CNN model is the baseline model plus the adaptive weights.

Both the baseline and guided CNN models were applied on top of the AV1 encoder with only the deblocking filter enabled and the other filters (including CDEF and LR) turned off. The findings reported in Table VI demonstrate that either the baseline or the guided CNN model can replace the additional adaptive in-loop filters while improving R-D efficiency. Furthermore, regardless of block size and frame type, our guided model always outperformed the baseline CNN, mainly because the adaptive weights better characterize content dynamics. Similar lightweight CNN structures can be upgraded using deep models [164, 163, 167] for potentially greater BD-Rate savings.
TABLE VI: BD-Rate savings of the baseline and guided CNN models against the AV1 anchor (AI = All Intra, RA = Random Access; N is the block size used for weight derivation, and "Frame" uses the whole frame).

Resolution | Sequence | Baseline (AI) | Guided N=64 (AI) | Guided N=128 (AI) | Guided N=256 (AI) | Guided Frame (AI) | Baseline (RA) | Guided N=64 (RA) | Guided N=128 (RA) | Guided N=256 (RA) | Guided Frame (RA)
---|---|---|---|---|---|---|---|---|---|---|---
$2560\times 1600$ | PeopleOnStreet | $-1.15\%$ | $-1.95\%$ | $-2.84\%$ | $-2.90\%$ | $-2.81\%$ | $-0.19\%$ | $-0.22\%$ | $-1.03\%$ | $-1.02\%$ | $-0.83\%$
 | Traffic | $-1.71\%$ | $-1.76\%$ | $-3.01\%$ | $-3.16\%$ | $-3.03\%$ | $-0.26\%$ | $+1.89\%$ | $-1.64\%$ | $-2.15\%$ | $-2.17\%$
$1920\times 1080$ | BasketballDrive | $-0.45\%$ | $+2.95\%$ | $-0.72\%$ | $-1.06\%$ | $-0.72\%$ | $-0.02\%$ | $+8.04\%$ | $+0.87\%$ | $+0.07\%$ | $-0.05\%$
 | BQTerrace | $-0.98\%$ | $-3.19\%$ | $-3.66\%$ | $-3.44\%$ | $-2.10\%$ | $-0.33\%$ | $+0.68\%$ | $-1.62\%$ | $-1.91\%$ | $-1.51\%$
 | Cactus | $-1.64\%$ | $-1.38\%$ | $-2.79\%$ | $-2.89\%$ | $-2.56\%$ | $-0.21\%$ | $+1.18\%$ | $-1.13\%$ | $-1.31\%$ | $-0.96\%$
 | Kimono | $-0.23\%$ | $+3.55\%$ | $-0.18\%$ | $-0.88\%$ | $-0.95\%$ | $-0.07\%$ | $+6.07\%$ | $+0.84\%$ | $-0.07\%$ | $-0.01\%$
 | ParkScene | $-1.21\%$ | $+0.01\%$ | $-1.92\%$ | $-2.21\%$ | $-2.11\%$ | $-0.07\%$ | $+1.11\%$ | $-1.46\%$ | $-1.82\%$ | $-0.92\%$
 | blue-sky | $-2.89\%$ | $-0.96\%$ | $-2.58\%$ | $-2.86\%$ | $-2.56\%$ | $+0.00\%$ | $+3.46\%$ | $-2.02\%$ | $-2.96\%$ | $-2.77\%$
 | crowd_run | $-3.01\%$ | $-2.34\%$ | $-3.11\%$ | $-3.22\%$ | $-3.08\%$ | $-0.13\%$ | $-1.69\%$ | $-2.19\%$ | $-2.07\%$ | $-1.09\%$
$832\times 480$ | BasketballDrill | $-2.99\%$ | $-5.55\%$ | $-6.45\%$ | $-6.26\%$ | $-5.88\%$ | $-0.25\%$ | $-0.33\%$ | $-2.10\%$ | $-1.79\%$ | $-1.55\%$
 | BQMall | $-1.74\%$ | $-3.96\%$ | $-4.48\%$ | $-4.46\%$ | $-4.35\%$ | $-0.15\%$ | $+0.16\%$ | $-1.05\%$ | $-1.13\%$ | $-0.76\%$
 | PartyScene | $-0.83\%$ | $-3.77\%$ | $-4.02\%$ | $-3.97\%$ | $-3.81\%$ | $-0.20\%$ | $-1.10\%$ | $-1.43\%$ | $-1.25\%$ | $-0.13\%$
 | RaceHorsesC | $-1.91\%$ | $-2.01\%$ | $-2.58\%$ | $-2.49\%$ | $-2.38\%$ | $-0.21\%$ | $-0.70\%$ | $-1.28\%$ | $-1.03\%$ | $-0.80\%$
$416\times 240$ | BasketballPass | $-3.08\%$ | $-3.66\%$ | $-4.60\%$ | $-4.72\%$ | $-4.65\%$ | $-0.20\%$ | $+0.71\%$ | $-0.63\%$ | $-0.62\%$ | $-0.36\%$
 | BlowingBubbles | $-2.60\%$ | $-3.36\%$ | $-3.78\%$ | $-3.77\%$ | $-3.76\%$ | $-0.34\%$ | $-0.55\%$ | $-1.05\%$ | $-0.87\%$ | $-0.86\%$
 | BQSquare | $-4.92\%$ | $-6.09\%$ | $-6.23\%$ | $-6.27\%$ | $-6.22\%$ | $-0.50\%$ | $-0.54\%$ | $-0.92\%$ | $-1.13\%$ | $-1.17\%$
 | RaceHorses | $-3.57\%$ | $-5.39\%$ | $-5.75\%$ | $-5.75\%$ | $-5.76\%$ | $-0.51\%$ | $-2.82\%$ | $-3.06\%$ | $-2.69\%$ | $-2.94\%$
$1280\times 720$ | Johnny | $-2.01\%$ | $-2.41\%$ | $-4.03\%$ | $-4.21\%$ | $-4.12\%$ | $-0.31\%$ | $+8.32\%$ | $-0.94\%$ | $-2.57\%$ | $-2.63\%$
 | FourPeople | $-1.94\%$ | $-0.54\%$ | $-3.49\%$ | $-3.76\%$ | $-2.85\%$ | $-0.29\%$ | $+17.99\%$ | $+1.20\%$ | $-1.65\%$ | $-1.60\%$
 | KristenAndSara | $-2.71\%$ | $-1.49\%$ | $-3.97\%$ | $-4.32\%$ | $-4.26\%$ | $-0.42\%$ | $+15.95\%$ | $+0.53\%$ | $-2.49\%$ | $-2.31\%$
$352\times 288$ | Harbour | $-0.79\%$ | $-1.18\%$ | $-1.43\%$ | $-1.38\%$ | $-1.42\%$ | $-0.23\%$ | $-1.00\%$ | $-1.29\%$ | $-1.40\%$ | $-1.08\%$
 | Ice | $-3.59\%$ | $-5.54\%$ | $-6.88\%$ | $-7.08\%$ | $-7.19\%$ | $-0.59\%$ | $-1.59\%$ | $-3.59\%$ | $-3.65\%$ | $-3.97\%$
 | Silent | $-1.68\%$ | $-1.88\%$ | $-2.80\%$ | $-2.77\%$ | $-2.79\%$ | $-0.21\%$ | $+1.96\%$ | $-0.29\%$ | $-0.27\%$ | $-0.70\%$
 | Students | $-3.08\%$ | $-4.10\%$ | $-4.77\%$ | $-4.81\%$ | $-4.88\%$ | $-0.52\%$ | $+1.25\%$ | $-1.16\%$ | $-1.44\%$ | $-1.66\%$
 | Average | $-2.11\%$ | $-2.33\%$ | $-3.59\%$ | $-3.69\%$ | $-3.51\%$ | $-0.26\%$ | $+2.43\%$ | $-1.10\%$ | $-1.55\%$ | $-1.37\%$
### VII-B Multi-frame Post Filtering
This section demonstrates how multi-frame video enhancement (MVE) based post filtering can be used to reduce compression artifacts. We implemented the proposed approach on AV1 reconstructed frames and achieved significant coding improvement; similar observations are expected with other anchors, such as H.265/HEVC.
Method. Single-frame video enhancement (SVE) refers to applying the fusion network alone, without leveraging temporal frame correlations. As discussed in Section IV, a great number of network models can be used for SVE. In most cases, efficiency and complexity are at odds: better efficiency comes at the cost of deeper networks and larger parameter counts. Recently, Yu et al. [224] discovered that models with more feature channels before activation can provide significantly better performance under the same parameter and computational budgets. We therefore designed a wide activation residual network (WARN) by combining wide activation with a powerful deep residual network (ResNet) [225], shown in Fig. 13. The figure illustrates the three inputs used to produce an enhanced output in the MVE framework; in contrast, SVE takes a single frame as input and outputs the corresponding enhanced representation.
Figure 13: WARN. This wide activation residual network is used to fuse and enhance input frames for improved quality. In the MVE case, it takes three inputs to enhance the LF; in the SVE case, it takes a single frame as input and outputs its enhanced version. WARN generally follows the residual network structure, with a residual link and embedded ResBlks. Note that the ResBlk is extended from its plain version to support wide activation prior to the ReLU.
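The wide-activation idea can be captured in a few lines: expand the channels before the ReLU so the nonlinearity acts on a richer feature space, then project back. A sketch of such a ResBlk (the expansion factor and channel width are illustrative assumptions, not the exact settings used here):

```python
import torch.nn as nn

class WideResBlock(nn.Module):
    """Wide-activation residual block in the spirit of [224, 225]."""
    def __init__(self, ch=32, expand=4):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch * expand, 3, padding=1),  # widen before ReLU
            nn.ReLU(inplace=True),
            nn.Conv2d(ch * expand, ch, 3, padding=1),  # project back
        )

    def forward(self, x):
        return x + self.body(x)                        # residual link
```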
This MVE closely follows the two-step strategy reviewed in Section IV. It uses FlowNet2 [187] to perform pixel-level motion estimation/compensation for temporal frame alignment; a WARN-based fusion network is then used for the final enhancement. We use the two high-quality frames (HFs) immediately preceding and succeeding a low-quality frame (LF) to enhance the LF in between. Bi-directional warping is performed for each LF to produce the compensated HFs shown in Fig. 14.
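The alignment step amounts to backward-warping each HF toward the LF with the estimated optical flow; a minimal sketch using bilinear sampling (FlowNet2 would supply `flow`; the helper itself is ours):

```python
import torch
import torch.nn.functional as F

def warp(hf, flow):
    """Backward-warp frame hf (B, C, H, W) by flow (B, 2, H, W), in pixels."""
    b, _, h, w = hf.shape
    yy, xx = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xx, yy)).float().to(hf.device)   # (2, H, W) pixel grid
    coords = base.unsqueeze(0) + flow                    # displaced coordinates
    gx = 2.0 * coords[:, 0] / (w - 1) - 1.0              # normalize to [-1, 1]
    gy = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid = torch.stack((gx, gy), dim=-1)                 # (B, H, W, 2)
    return F.grid_sample(hf, grid, align_corners=True)
```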
Figure 14: Enhancement Framework. (a) Single-input WARN-based SVE to enhance the HF. (b)+(c) Two-step MVE using FlowNet2 for temporal alignment, and three-input WARN-based fusion using the preceding and succeeding HFs for LF enhancement.
Experimental Studies. We evaluated both SVE and MVE against the AV1 baseline. A total of 118 video sequences were selected to train the network models. Specifically, the first 200 frames of each sequence were encoded with the AV1 encoder at QPs {32, 43, 53, 63} to generate reconstructed frames, yielding 23,600 reconstructed frames in total. After frame alignment, we selected one training set containing a compensated $HF_{0}$, a compensated $HF_{1}$, and the to-be-enhanced LF from every 8 frames, which yielded a total of 2900 training sets. These sets were used to train the WARN model as the fusion network; note that the WARN models for SVE and MVE were trained separately. The GOP size was 16 with a hierarchical prediction structure. The LFs and HFs were identified by their QPs, i.e., frames decoded with a QP lower than the base QP were treated as HFs, such as frames 0, 4, 8, 12, and 16 in Fig. 15.
Algorithms were implemented on the TensorFlow platform with an NVIDIA GeForce GTX 1080Ti GPU. In training, frames were segmented into 64$\times$64 patches, with 64 patches per batch. We adopted the Adam optimizer with the initial learning rate set at 1e-4; the learning rate was then adjusted using the step strategy with $\gamma=0.5$. An additional 18 sequences were employed for testing, mostly to evaluate video quality. The first 50 frames of each test sequence were compressed, and the reconstructed frames were then enhanced using the proposed SVE and MVE methods.
We applied the proposed method to AV1 reconstructed frames; the results are presented in Table VII. Owing to the hierarchical coding structure in inter prediction, the LFs in Fig. 15 were enhanced using the neighboring HFs via the MVE framework, while the HFs themselves were enhanced using the SVE method.
Figure 15: The hierarchical coding structure in the AV1 encoder. The LFs are enhanced using HFs following the prediction structure via the MVE scheme, and the HFs are restored using the SVE method.
The overall BD-Rate savings of the SVE and MVE methods against the AV1 are tabulated in Table VII. SVE achieves average BD-Rate reductions of 8.2% and 5.0% for the all-intra and random-access scenarios, respectively, while MVE obtains 20.1% and 7.5% average BD-Rate savings, further demonstrating the effectiveness of the proposed scheme. Under random access, the selected HFs are generally distant from a target LF, which reduces the benefit provided by inter HFs. Intra coding, by contrast, uniformly demonstrates greater BD-Rate savings, because the neighboring frames nearest to the target LFs can be used, which contributes significantly to the enhancement.
Besides the objective measures, sample snapshots of reconstructed frames are shown in Fig. 16, clearly demonstrating that blocky and ringing artifacts from the AV1 baseline are attenuated after applying either SVE- or MVE-based filtering. Notably, MVE produces more visually appealing images than SVE.
TABLE VII: BD-Rate improvement of the proposed SVE and MVE schemes against the AV1 (AI = All Intra, RA = Random Access).

Class | Sequence | SVE (AI) | MVE (AI) | SVE (RA) | MVE (RA)
---|---|---|---|---|---
A | PeopleOnStreet | $-9.1\%$ | $-14.7\%$ | $-5.0\%$ | $-8.1\%$
 | Traffic | $-7.6\%$ | $-22.2\%$ | $-5.8\%$ | $-8.8\%$
B | BasketballDrive | $-5.9\%$ | $-13.1\%$ | $-4.4\%$ | $-6.4\%$
 | BQTerrace | $-8.0\%$ | $-23.7\%$ | $-7.7\%$ | $-9.8\%$
 | Cactus | $-7.7\%$ | $-21.9\%$ | $-3.9\%$ | $-6.0\%$
 | Kimono | $-3.8\%$ | $-20.4\%$ | $-3.9\%$ | $-7.1\%$
 | ParkScene | $-5.1\%$ | $-26.3\%$ | $-4.9\%$ | $-8.0\%$
C | BasketballDrill | $-12.5\%$ | $-21.3\%$ | $-5.6\%$ | $-7.9\%$
 | BQMall | $-8.9\%$ | $-18.7\%$ | $-3.5\%$ | $-6.1\%$
 | PartyScene | $-7.2\%$ | $-19.0\%$ | $-3.2\%$ | $-5.0\%$
 | RaceHorsesC | $-5.9\%$ | $-18.3\%$ | $-3.3\%$ | $-5.6\%$
D | BasketballPass | $-10.0\%$ | $-18.5\%$ | $-3.4\%$ | $-6.2\%$
 | BlowingBubbles | $-7.0\%$ | $-19.8\%$ | $-4.6\%$ | $-6.7\%$
 | BQSquare | $-10.8\%$ | $-21.3\%$ | $-11.0\%$ | $-13.6\%$
 | RaceHorses | $-9.2\%$ | $-19.3\%$ | $-4.9\%$ | $-7.8\%$
E | FourPeople | $-9.7\%$ | $-21.7\%$ | $-5.1\%$ | $-7.4\%$
 | Johnny | $-9.6\%$ | $-20.7\%$ | $-5.5\%$ | $-8.0\%$
 | KristenAndSara | $-9.6\%$ | $-21.2\%$ | $-4.4\%$ | $-7.0\%$
 | Average | $-8.2\%$ | $-20.1\%$ | $-5.0\%$ | $-7.5\%$
Figure 16: Qualitative Visualization. Zoomed-in snapshots of reconstructed frames for the AV1 baseline, the SVE- and MVE-filtered restorations, and the ground truth label.
### VII-C Discussion And Future Direction
In this section, we proposed DNN-based approaches for video quality
enhancement. For in-loop filtering, we developed a guided CNN framework to
adapt pre-trained CNN models to various video contents. Under this framework,
the guided CNN learns to project an input signal onto a subspace of dimension
$\rm M$. The weighting parameters for a linear combination of these channels
are explicitly signaled in the encoded bitstream to obtain the final
restoration. For post filtering, we devised a spatio-temporal multi-frame
architecture to alleviate the compression artifacts. A two-step scheme is
adopted in which optical flow is first obtained for accurate motion
estimation/compensation, and then a wide activation residual network called
WARN is designed for information fusion and quality enhancement. Our proposed
enhancement approaches can be implemented on different CNN architectures.
The quality of enhanced frames plays a significant role in overall coding performance, since these frames serve as references for the motion estimation of subsequent frames. Our future work will investigate the joint effect of in-loop filtering and motion estimation on reference frames to exploit the inherent correlations of these coding tools, which could further improve coding performance.
## VIII Discussion and Conclusion
As an old Chinese saying goes, “A journey of a thousand miles begins with a
single step.” This is particularly true in the realm of technological
advancement. Both the fields of video compression and machine learning have
been established for many decades, but until recently, they evolved separately
in both academic explorations and industrial practice.
Lately, however, we have begun to witness the interdisciplinary advancements yielded by the proactive application of deep learning technologies [226] to video compression systems. Benefits of these advances include remarkable improvements in performance across many technical aspects. To showcase the products of this disciplinary cross-pollination, we identified three major functional blocks in a practical video system, i.e., pre-processing, coding, and post-processing. We then reviewed related studies and publications to familiarize the audience with these topics. Finally, we presented three case studies to highlight the state-of-the-art efficiency resulting from the application of DNNs to video compression systems, which demonstrates this avenue of exploration's great potential to bring about a new generation of video techniques, standards, and products.
Though this article presents separate DNN-based case studies for pre-
processing, coding, and post-processing, we believe that a fully end-to-end
DNN model could potentially offer a greater improvement in performance, while
enabling more functionalities. For example, Xia et al. [227] applied deep
object segmentation in pre-processing, and used it to guide neural video
coding, demonstrating noticeable visual improvements at very low bit rates.
Meanwhile, Lee et al. [228] and others observed similar effects, when a neural
adaptive filter was successfully used to further enhance neural compressed
images.
Nevertheless, a number of open problems requiring substantial further study
have been discovered. These include:
* •
Model Generalization: It is vital for DNN models to generalize to a wide variety of video content, different artifacts, etc. Currently, most DNN-based video compression techniques rely on supervised learning, which often demands a significant amount of labelled image/video data to cover the full spectrum of the aforementioned application scenarios. Continuously developing a large-scale dataset, such as ImageNet (http://www.image-net.org/), presents one possible solution to this problem. An alternative approach may use more advanced techniques to alleviate the uncertainty caused by limited training samples, including (but not limited to) few-shot learning [229] and self-supervised learning [226].
* •
Complexity: Existing DNN-based methods are mainly criticized for their unbearable complexity in both computation and memory. Compared to conventional video codecs, which require tens of kilobytes of on-chip memory, most DNN algorithms require several megabytes or even gigabytes of memory space. Moreover, although inference may be very fast, training can take hours, days, or even weeks to yield converged and reliable models [141]. All of these issues present serious barriers to the market adoption of DNN-based tools, particularly on energy-efficient mobile platforms. One promising solution is to design specialized hardware to accelerate DNN algorithms [158]. Currently, neural processing units (NPUs) have attracted significant attention and are gradually being deployed in heterogeneous platforms (e.g., the Qualcomm AI Engine in the Snapdragon chip series, the Neural Engine in Apple silicon, etc.). This paints a promising picture of a future in which DNN algorithms can be deployed on NPU-equipped devices at a massive scale.
* •
QoE Metric: Video quality matters. A video QoE metric that better correlates with the human visual system is highly desirable, not only for quality evaluation, but also for loss control in DNN-based video compression. There has been notable development in both subjective and objective video quality assessment, yielding several well-known metrics, such as SSIM [230], just-noticeable-distortion (JND) [231], and VMAF [232], some of which are actively adopted for evaluating video algorithms, application products, etc. On the other hand, existing DNN-based video coding approaches can adaptively optimize efficiency for a pre-defined loss function, such as MSE, SSIM, adversarial loss [157], or VGG-feature-based semantic loss; however, none of these loss functions has shown clear advantages. A unified, differentiable, HVS-driven metric is of great importance if DNN-based video coding techniques are to offer perceptually better QoE.
The exponential growth of Internet traffic, a majority of which involves
videos and images, has been the driving force for the development of video
compression systems. The availability of a vast amount of images through the
Internet, meanwhile, has been critical for the renaissance of the field of
machine learning. In this work, we show that recent progress in deep learning
can, in return, improve video compression. These mutual positive feedbacks
suggest that significant progress could be achieved in both fields when they
are investigated together. Therefore, the approaches presented in this work
could be the stepping stones for improving the compression efficiency in
Internet-scale video applications.
From a different perspective, most compressed videos will be ultimately
consumed by human beings or interpreted by machines, for subsequent task
decisions. This is a typical computer vision (CV) problem, i.e., content
understanding and decisions for consumption or task-oriented application
(e.g., detection, classification, etc.) Existing approaches have performed
these tasks by first decoding the video, and then examining the tasks via
learned or rule-based methods based on decoded pixels. Such separate
processing, e.g., video decoding followed by CV tasks, is relied upon mainly
because traditional pixel-prediction based differential video compression
methods break the spatio-temporal features that could be potentially helpful
for vision tasks. In contrast, recent DNN-based video compression algorithms
rely on the feature extraction, activation, suppression, and aggregation for
more compact representation. For these reasons, it is expected that the CV
tasks can be fulfilled in the compressive domain without bit decoding and
pixel reconstruction. Our earlier attempts have shown very encouraging gain in
the accuracy of classification and retrieval in compressive formats, without
resorting to the traditional feature-based approaches using decoded pixels,
which we report in [233, 234]. Using powerful DNNs to unify video compression
and computer vision techniques is an exciting new field. It is also worth
noting that the ISO/IEC MPEG is now actively working on a new project called
“Video Coding for Machine”
(VCM)121212https://mpeg.chiariglione.org/standards/exploration/video-coding-
machines, with emphasis on exploring video compression solutions for both
human perception and machine intelligence.
## References
* [1] D. J. Brady, M. E. Gehm, R. A. Stack, D. L. Marks, D. S. Kittle, D. R. Golish, E. Vera, and S. D. Feller, “Multiscale gigapixel photography,” _Nature_ , vol. 486, no. 7403, pp. 386–389, 2012.
* [2] M. Cheng, Z. Ma, S. Asif, Y. Xu, H. Liu, W. Bao, and J. Sun, “A dual camera system for high spatiotemporal resolution video acquisition,” _IEEE Trans. Pattern Analysis and Machine Intelligence_ , no. 01, pp. 1–1, 2020.
* [3] F. Dufaux, P. Le Callet, R. Mantiuk, and M. Mrak, _High dynamic range video: from acquisition, to display and applications_. Academic Press, 2016.
* [4] M. Winken, D. Marpe, H. Schwarz, and T. Wiegand, “Bit-depth scalable video coding,” in _2007 IEEE International Conference on Image Processing_ , vol. 1. IEEE, 2007, pp. I–5.
* [5] P. Tudor, “Mpeg-2 video compression,” _Electronics & communication engineering journal_, vol. 7, no. 6, pp. 257–264, 1995.
* [6] B. G. Haskell, A. Puri, and A. N. Netravali, _Digital video: an introduction to MPEG-2_. Springer Science & Business Media, 1996.
* [7] T. Sikora, “The mpeg-4 video standard verification model,” _IEEE Transactions on circuits and systems for video technology_ , vol. 7, no. 1, pp. 19–31, 1997.
* [8] W. Li, “Overview of fine granularity scalability in mpeg-4 video standard,” _IEEE Transactions on circuits and systems for video technology_ , vol. 11, no. 3, pp. 301–317, 2001.
* [9] T. Wiegand, G. J. Sullivan, G. Bjontegaard, and A. Luthra, “Overview of the H.264/AVC video coding standard,” _IEEE Transactions on circuits and systems for video technology_ , vol. 13, no. 7, pp. 560–576, 2003.
* [10] G. J. Sullivan, J.-R. Ohm, W.-J. Han, and T. Wiegand, “Overview of the high efficiency video coding (HEVC) standard,” _IEEE Transactions on circuits and systems for video technology_ , vol. 22, no. 12, pp. 1649–1668, 2012.
* [11] V. Sze, M. Budagavi, and G. J. Sullivan, “High efficiency video coding (hevc),” in _Integrated circuit and systems, algorithms and architectures_. Springer, 2014, vol. 39, pp. 49–90.
* [12] G. J. Sullivan, P. N. Topiwala, and A. Luthra, “The H.264/AVC advanced video coding standard: Overview and introduction to the fidelity range extensions,” in _Applications of Digital Image Processing XXVII_ , vol. 5558. International Society for Optics and Photonics, 2004, pp. 454–474.
* [13] A. Vetro, T. Wiegand, and G. J. Sullivan, “Overview of the stereo and multiview video coding extensions of the h. 264/mpeg-4 avc standard,” _Proceedings of the IEEE_ , vol. 99, no. 4, pp. 626–642, 2011.
* [14] L. Yu, S. Chen, and J. Wang, “Overview of AVS-video coding standards,” _Signal processing: Image communication_ , vol. 24, no. 4, pp. 247–262, 2009.
* [15] S. Ma, S. Wang, and W. Gao, “Overview of ieee 1857 video coding standard,” in _2013 IEEE International Conference on Image Processing_. IEEE, 2013, pp. 1500–1504.
* [16] J. Zhang, C. Jia, M. Lei, S. Wang, S. Ma, and W. Gao, “Recent development of avs video coding standard: AVS3,” in _2019 Picture Coding Symposium (PCS)_. IEEE, 2019, pp. 1–5.
* [17] Y. Chen, D. Murherjee, J. Han, A. Grange, Y. Xu, Z. Liu, S. Parker, C. Chen, H. Su, U. Joshi _et al._ , “An overview of core coding tools in the AV1 video codec,” in _2018 Picture Coding Symposium_. IEEE, 2018, pp. 41–45.
* [18] J. Han, B. Li, D. Mukherjee, C.-H. Chiang, C. Chen, H. Su, S. Parker, U. Joshi, Y. Chen, Y. Wang _et al._ , “A technical overview of av1,” _arXiv preprint arXiv:2008.06091_ , 2020.
* [19] “AOM - alliance for open media,” http://www.aomedia.org/.
* [20] A. Aaron, Z. Li, M. Manohara, J. De Cock, and D. Ronca, “Per-Title Encode Optimization,” The Netflix Tech Blog, https://netflixtechblog.com/per-title-encode-optimization-7e99442b62a2, (14 December 2015).
* [21] T. Shoham, D. Gill, S. Carmel, N. Terterov, and P. Tiktov, “Content-adaptive frame level rate control for video encoding using a perceptual video quality measure,” in _Applications of Digital Image Processing XLII_ , A. G. Tescher and T. Ebrahimi, Eds., vol. 11137, September 2019, p. 26.
* [22] Y.-C. Lin, H. Denman, and A. Kokaram, “Multipass encoding for reducing pulsing artifacts in cloud based video transcoding,” in _2015 IEEE International Conference on Image Processing_ , September 2015, pp. 907–911.
* [23] G. J. Sullivan and T. Wiegand, “Video compression-from concepts to the H.264/AVC standard,” _Proceedings of the IEEE_ , vol. 93, no. 1, pp. 18–31, 2005.
* [24] A. Norkin, G. Bjontegaard, A. Fuldseth, M. Narroschke, M. Ikeda, K. Andersson, M. Zhou, and G. Van der Auwera, “HEVC deblocking filter,” _IEEE Transactions on Circuits and Systems for Video Technology_ , vol. 22, no. 12, pp. 1746–1754, 2012.
* [25] C.-M. Fu, E. Alshina, A. Alshin, Y.-W. Huang, C.-Y. Chen, C.-Y. Tsai, C.-W. Hsu, S.-M. Lei, J.-H. Park, and W.-J. Han, “Sample adaptive offset in the HEVC standard,” _IEEE Transactions on Circuits and Systems for Video technology_ , vol. 22, no. 12, pp. 1755–1764, 2012.
* [26] R. Gupta, M. T. Khanna, and S. Chaudhury, “Visual saliency guided video compression algorithm,” _Signal Processing: Image Communication_ , vol. 28, no. 9, pp. 1006–1022, 2013.
* [27] S. Liu, X. Li, W. Wang, E. Alshina, K. Kawamura, K. Unno, Y. Kidani, P. Wu, A. Segall, M. Wien _et al._ , “Ahg on neural network based coding tools,” _Joint Video Expert Team_ , no. JVET-S0267/M54764, June 2020.
* [28] S. Liu, E. Alshina, J. Pfaff, M. Wien, P. Wu, and Y. Ye, “Report of ahg11 meeting on neural network-based video coding,” _Joint Video Expert Team_ , no. JVET-T0042/M54848, July 2020.
* [29] K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, “Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising,” _IEEE Transactions on Image Processing_ , vol. 26, no. 7, pp. 3142–3155, 2017.
* [30] C. Tian, Y. Xu, L. Fei, and K. Yan, “Deep learning for image denoising: a survey,” in _International Conference on Genetic and Evolutionary Computing_. Springer, 2018, pp. 563–572.
* [31] A. Chakrabarti, “A neural approach to blind motion deblurring,” in _European conference on computer vision_. Springer, 2016, pp. 221–235.
* [32] J. Koh, J. Lee, and S. Yoon, “Single-image deblurring with neural networks: A comparative survey,” _Computer Vision and Image Understanding_ , p. 103134, 2020.
* [33] Y. Zhu, X. Fu, and A. Liu, “Learning dual transformation networks for image contrast enhancement,” _IEEE Signal Processing Letters_ , 2020.
* [34] W. Guan, T. Wang, J. Qi, L. Zhang, and H. Lu, “Edge-aware convolution neural network based salient object detection,” _IEEE Signal Processing Letters_ , vol. 26, no. 1, pp. 114–118, 2018.
* [35] L. Xu, J. Ren, Q. Yan, R. Liao, and J. Jia, “Deep edge-aware filters,” in _International Conference on Machine Learning_ , 2015, pp. 1669–1678.
* [36] L. Zhaoping, “A new framework for understanding vision from the perspective of the primary visual cortex,” _Current opinion in neurobiology_ , vol. 58, pp. 1–10, 2019.
* [37] X. Chen, M. Zirnsak, G. M. Vega, E. Govil, S. G. Lomber, and T. Moore, “The contribution of parietal cortex to visual salience,” _bioRxiv_ , 2019, doi: http://doi.org/10.1101/619643.
* [38] O. Schwartz and E. Simoncelli, “Natural signal statistics and sensory gain control.” _Nature neuroscience_ , vol. 4, no. 8, p. 819, 2001.
* [39] L. Itti, C. Koch, and E. Niebur, “A model of saliency-based visual attention for rapid scene analysis,” _IEEE Transactions on pattern analysis and machine intelligence_ , vol. 20, no. 11, pp. 1254–1259, 1998.
* [40] L. Itti, “Automatic foveation for video compression using a neurobiological model of visual attention,” _IEEE transactions on image processing_ , vol. 13, no. 10, pp. 1304–1318, 2004.
* [41] T. V. Nguyen, M. Xu, G. Gao, M. Kankanhalli, Q. Tian, and S. Yan, “Static saliency vs. dynamic saliency: a comparative study,” in _Proceedings of the 21st ACM international conference on Multimedia_ , 2013, pp. 987–996.
* [42] E. Vig, M. Dorr, and D. Cox, “Large-scale optimization of hierarchical features for saliency prediction in natural images,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2014, pp. 2798–2805.
* [43] N. Liu, J. Han, D. Zhang, S. Wen, and T. Liu, “Predicting eye fixations using convolutional neural networks,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2015, pp. 362–370.
* [44] G. Li and Y. Yu, “Deep contrast learning for salient object detection,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2016, pp. 478–487.
* [45] N. Liu and J. Han, “Dhsnet: Deep hierarchical saliency network for salient object detection,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2016, pp. 678–686.
* [46] Q. Hou, M.-M. Cheng, X. Hu, A. Borji, Z. Tu, and P. H. Torr, “Deeply supervised salient object detection with short connections,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2017, pp. 3203–3212.
* [47] B. Yan, H. Wang, X. Wang, and Y. Zhang, “An accurate saliency prediction method based on generative adversarial networks,” in _2017 IEEE International Conference on Image Processing_. IEEE, 2017, pp. 2339–2343.
* [48] Y. Xu, S. Gao, J. Wu, N. Li, and J. Yu, “Personalized saliency and its prediction,” _IEEE transactions on pattern analysis and machine intelligence_ , vol. 41, no. 12, pp. 2975–2989, 2018.
* [49] L. Bazzani, H. Larochelle, and L. Torresani, “Recurrent mixture density network for spatiotemporal visual attention,” _arXiv preprint arXiv:1603.08199_ , 2016.
* [50] C. Bak, A. Kocak, E. Erdem, and A. Erdem, “Spatio-temporal saliency networks for dynamic saliency prediction,” _IEEE Transactions on Multimedia_ , vol. 20, no. 7, pp. 1688–1698, 2017.
* [51] M. Sun, Z. Zhou, Q. Hu, Z. Wang, and J. Jiang, “SG-FCN: A motion and memory-based deep learning model for video saliency detection,” _IEEE transactions on cybernetics_ , vol. 49, no. 8, pp. 2900–2911, 2018.
* [52] L. Jiang, M. Xu, T. Liu, M. Qiao, and Z. Wang, “Deepvs: A deep learning based video saliency prediction approach,” in _Proceedings of the European Conference on Computer Vision_ , 2018, pp. 602–617.
* [53] Z. Wang, J. Ren, D. Zhang, M. Sun, and J. Jiang, “A deep-learning based feature hybrid framework for spatiotemporal saliency detection inside videos,” _Neurocomputing_ , vol. 287, pp. 68–83, 2018.
* [54] R. Cong, J. Lei, H. Fu, F. Porikli, Q. Huang, and C. Hou, “Video saliency detection via sparsity-based reconstruction and propagation,” _IEEE Transactions on Image Processing_ , vol. 28, no. 10, pp. 4819–4831, 2019.
* [55] K. Min and J. J. Corso, “Tased-net: Temporally-aggregating spatial encoder-decoder network for video saliency detection,” in _Proceedings of the IEEE International Conference on Computer Vision_ , 2019, pp. 2394–2403.
* [56] W. Wang, J. Shen, J. Xie, M.-M. Cheng, H. Ling, and A. Borji, “Revisiting video saliency prediction in the deep learning era,” _IEEE transactions on Pattern Analysis and Machine Intelligence_ , 2019.
* [57] A. Vetro, T. Haga, K. Sumi, and H. Sun, “Object-based coding for long-term archive of surveillance video,” in _Proceedings of International Conference on Multimedia and Expo_ , vol. 2, July 2003, p. 417, Baltimore, MD.
* [58] T. Nishi and H. Fujiyoshi, “Object-based video coding using pixel state analysis,” in _Proceedings of the 17th International Conference on Pattern Recognition_ , vol. 3, August 2004, pp. 306–309, Cambridge, UK.
* [59] L. Zhu and Q. Zhang, “Motion-based foreground extraction in compressed video,” in _2010 International Conference on Measuring Technology and Mechatronics Automation_ , vol. 2. IEEE, 2010, pp. 711–714.
* [60] Z. Zhang, T. Jing, J. Han, Y. Xu, and X. Li, “Flow-process foreground region of interest detection method for video codecs,” _IEEE Access_ , vol. 5, pp. 16263–16276, 2017.
* [61] Y. Guo, Z. Xuan, and L. Song, “Foreground target extraction method based on neighbourhood pixel intensity correction,” _Australian Journal of Mechanical Engineering_ , pp. 1–10, 2019.
* [62] A. Shahbaz, V.-T. Hoang, and K.-H. Jo, “Convolutional neural network based foreground segmentation for video surveillance systems,” in _IECON 2019-45th Annual Conference of the IEEE Industrial Electronics Society_ , vol. 1. IEEE, 2019, pp. 86–89.
* [63] S. Zhou, J. Wang, D. Meng, Y. Liang, Y. Gong, and N. Zheng, “Discriminative feature learning with foreground attention for person re-identification,” _IEEE Transactions on Image Processing_ , vol. 28, no. 9, pp. 4671–4684, 2019\.
* [64] M. Babaee, D. T. Dinh, and G. Rigoll, “A deep convolutional neural network for background subtraction,” _arXiv preprint arXiv:1702.01731_ , 2017.
* [65] X. Liang, S. Liao, X. Wang, W. Liu, Y. Chen, and S. Z. Li, “Deep background subtraction with guided learning,” in _2018 IEEE International Conference on Multimedia and Expo_. IEEE, 2018, pp. 1–6.
* [66] S. Zhang, K. Wei, H. Jia, X. Xie, and W. Gao, “An efficient foreground-based surveillance video coding scheme in low bit-rate compression,” in _2012 Visual Communications and Image Processing_. IEEE, 2012, pp. 1–6.
* [67] H. Hadizadeh and I. V. Bajić, “Saliency-aware video compression,” _IEEE Transactions on Image Processing_ , vol. 23, no. 1, pp. 19–33, 2013.
* [68] Y. Li, W. Liao, J. Huang, D. He, and Z. Chen, “Saliency based perceptual HEVC,” in _2014 IEEE International Conference on Multimedia and Expo Workshops_. IEEE, 2014, pp. 1–5.
* [69] C. Ku, G. Xiang, F. Qi, W. Yan, Y. Li, and X. Xie, “Bit allocation based on visual saliency in HEVC,” in _2019 IEEE Visual Communications and Image Processing_. IEEE, 2019, pp. 1–4.
* [70] S. Zhu and Z. Xu, “Spatiotemporal visual saliency guided perceptual high efficiency video coding with neural network,” _Neurocomputing_ , vol. 275, pp. 511–522, 2018.
* [71] V. Lyudvichenko, M. Erofeev, A. Ploshkin, and D. Vatolin, “Improving video compression with deep visual-attention models,” in _Proceedings of the 2019 International Conference on Intelligent Medicine and Image Processing_ , 2019, pp. 88–94.
* [72] M. Cornia, L. Baraldi, G. Serra, and R. Cucchiara, “Predicting human eye fixations via an lstm-based saliency attentive model,” _IEEE Transactions on Image Processing_ , vol. 27, no. 10, pp. 5142–5154, 2018.
* [73] X. Sun, X. Yang, S. Wang, and M. Liu, “Content-aware rate control scheme for HEVC based on static and dynamic saliency detection,” _Neurocomputing_ , 2020.
* [74] M. Carandini, J. B. Demb, V. Mante, D. J. Tolhurst, Y. Dan, B. A. Olshausen, J. L. Gallant, and N. C. Rust, “Do we know what the early visual system does?” _Journal of Neuroscience_ , vol. 25, no. 46, pp. 10 577–10 597, 2005.
* [75] J. Kremkow, J. Jin, S. J. Komban, Y. Wang, R. Lashgari, X. Li, M. Jansen, Q. Zaidi, and J.-M. Alonso, “Neuronal nonlinearity explains greater visual spatial resolution for darks than lights,” _Proceedings of the National Academy of Sciences_ , vol. 111, no. 8, pp. 3170–3175, 2014.
* [76] J. Ukita, T. Yoshida, and K. Ohki, “Characterisation of nonlinear receptive fields of visual neurons by convolutional neural network,” _Scientific reports_ , vol. 9, no. 1, pp. 1–17, 2019.
* [77] P. Neri, “Nonlinear characterization of a simple process in human vision,” _Journal of Vision_ , vol. 9, no. 12, pp. 1–1, 2009.
* [78] D. J. Heeger, “Normalization of cell responses in cat striate cortex,” _Visual Neuroscience_ , vol. 9, no. 2, pp. 181–197, 1992.
* [79] N. J. Priebe and D. Ferster, “Mechanisms of neuronal computation in mammalian visual cortex,” _Neuron_ , vol. 75, no. 2, pp. 194–208, 2012.
* [80] M. Carandini and D. J. Heeger, “Normalization as a canonical neural computation,” _Nature Reviews Neuroscience_ , vol. 13, no. 1, p. 51, 2012\.
* [81] M. H. Turner and F. Rieke, “Synaptic rectification controls nonlinear spatial integration of natural visual inputs,” _Neuron_ , vol. 90, no. 6, pp. 1257–1271, 2016.
* [82] D. Doshkov and P. Ndjiki-Nya, “Chapter 6 - how to use texture analysis and synthesis methods for video compression,” in _Academic Press Library in signal Processing_ , ser. Academic Press Library in Signal Processing, S. Theodoridis and R. Chellappa, Eds. Oxford, UK: Elsevier, 2014, vol. 5, pp. 197–225.
* [83] P. Ndjiki-Nya, D. Doshkov, H. Kaprykowsky, F. Zhang, D. Bull, and T. Wiegand, “Perception-oriented video coding based on image analysis and completion: A review,” _Signal Processing: Image Communication_ , vol. 27, no. 6, pp. 579–594, 2012.
* [84] A. K. Jain and F. Farrokhnia, “Unsupervised texture segmentation using gabor filters,” in _Proceedings of the International Conference on Systems, Man, and Cybernetics Conference proceedings_. IEEE, 1990, pp. 14–19, Los Angeles, CA.
* [85] A. C. Bovik, M. Clark, and W. S. Geisler, “Multichannel texture analysis using localized spatial filters,” _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , vol. 12, no. 1, pp. 55–73, 1990.
* [86] U. S. Thakur and O. Chubach, “Texture analysis and synthesis using steerable pyramid decomposition for video coding,” in _Proceedings of International Conference on Systems, Signals and Image Processing_ , September 2015, pp. 204–207, London, UK.
* [87] J. Portilla and E. P. Simoncelli, “A parametric texture model based on joint statistics of complex wavelet coefficients,” _International Journal of Computer Vision_ , vol. 40, no. 1, pp. 49–70, 2000.
* [88] S. Bansal, S. Chaudhury, and B. Lall, “Dynamic texture synthesis for video compression,” in _Proceeding of National Conference on Communications_ , Feb 2013, pp. 1–5, New Delhi, India.
* [89] G. R. Cross and A. K. Jain, “Markov random field texture models,” _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , no. 1, pp. 25–39, 1983.
* [90] R. Chellappa and S. Chatterjee, “Classification of textures using Gaussian Markov random fields,” _IEEE Transactions on Acoustics, Speech, and Signal Processing_ , vol. 33, no. 4, pp. 959–963, 1985.
* [91] D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” _International Journal of Computer Vision_ , vol. 60, no. 2, pp. 91–110, 2004.
* [92] H. Bay, T. Tuytelaars, and L. Van Gool, “Surf: Speeded up robust features,” in _Proceedings of the European Conference on Computer Vision_. Springer, 2006, pp. 404–417, Graz, Austria.
* [93] T. Ojala, M. Pietikainen, and T. Maenpaa, “Multiresolution gray-scale and rotation invariant texture classification with local binary patterns,” _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , vol. 24, no. 7, pp. 971–987, 2002.
* [94] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” _Advances in Neural Information Processing Systems_ , pp. 1097–1105, 2012.
* [95] M. Cimpoi, S. Maji, and A. Vedaldi, “Deep filter banks for texture recognition and segmentation,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2015, pp. 3828–3836, Boston, Massachusetts.
* [96] F. Perronnin, J. Sánchez, and T. Mensink, “Improving the fisher kernel for large-scale image classification,” in _Proceedings of the European Conference on Computer Vision_. Springer, 2010, pp. 143–156, Crete, Greece.
* [97] A. A. Efros and T. K. Leung, “Texture synthesis by non-parametric sampling,” in _Proceedings of the Seventh IEEE International Conference on Computer Vision_ , vol. 2. IEEE, 1999, pp. 1033–1038, Kerkyra, Greece.
* [98] L.-Y. Wei and M. Levoy, “Fast texture synthesis using tree-structured vector quantization,” in _Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques_ , 2000, pp. 479–488, New Orleans, LA.
* [99] M. Ashikhmin, “Synthesizing natural textures,” in _Proceedings of the Symposium on Interactive 3D Graphics_ , 2001, pp. 217–226, New York, NY.
* [100] H. Derin and H. Elliott, “Modeling and segmentation of noisy and textured images using Gibbs random fields,” _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , no. 1, pp. 39–55, 1987.
* [101] D. J. Heeger and J. R. Bergen, “Pyramid-based texture analysis/synthesis,” in _Proceedings of the 22nd annual Conference on Computer Graphics and Interactive Techniques_ , 1995, pp. 229–238, Los Angeles, CA.
* [102] L. Gatys, A. S. Ecker, and M. Bethge, “Texture synthesis using convolutional neural networks,” _Advances in Neural Information Processing Systems_ , pp. 262–270, 2015.
* [103] C. Li and M. Wand, “Precomputed real-time texture synthesis with markovian generative adversarial networks,” in _Proceedings of the European Conference on Computer Vision_. Springer, 2016, pp. 702–716, Amsterdam, The Netherlands.
* [104] T. Chen, H. Liu, Q. Shen, T. Yue, X. Cao, and Z. Ma, “Deepcoder: A deep neural network based video compression,” in _Proceedings of the IEEE Visual Communications and Image Processing_. IEEE, 2017, pp. 1–4, St. Petersburg, FL.
* [105] J. Ballé, V. Laparra, and E. P. Simoncelli, “End-to-end optimized image compression,” _arXiv preprint arXiv:1611.01704_ , 2016.
* [106] D. Liu, Y. Li, J. Lin, H. Li, and F. Wu, “Deep learning-based video coding: A review and a case study,” _ACM Computing Surveys (CSUR)_ , vol. 53, no. 1, pp. 1–35, 2020.
* [107] S. Ma, X. Zhang, C. Jia, Z. Zhao, S. Wang, and S. Wanga, “Image and video compression with neural networks: A review,” _IEEE Transactions on Circuits and Systems for Video Technology_ , 2019.
* [108] Y. Li, D. Liu, H. Li, L. Li, F. Wu, H. Zhang, and H. Yang, “Convolutional neural network-based block up-sampling for intra frame coding,” _IEEE Transactions on Circuits and Systems for Video Technology_ , vol. 28, no. 9, pp. 2316–2330, 2018.
* [109] F. Jiang, W. Tao, S. Liu, J. Ren, X. Guo, and D. Zhao, “An end-to-end compression framework based on convolutional neural networks,” _IEEE Transactions on Circuits and Systems for Video Technology_ , vol. 28, no. 10, pp. 3007–3018, 2017.
* [110] M. Afonso, F. Zhang, and D. R. Bull, “Video compression based on spatio-temporal resolution adaptation,” _IEEE Transactions on Circuits and Systems for Video Technology_ , vol. 29, no. 1, pp. 275–280, 2018.
* [111] J. Lin, D. Liu, H. Yang, H. Li, and F. Wu, “Convolutional neural network-based block up-sampling for HEVC,” _IEEE Transactions on Circuits and Systems for Video Technology_ , vol. 29, no. 12, pp. 3701–3715, 2018.
* [112] W. Yang, X. Zhang, Y. Tian, W. Wang, J.-H. Xue, and Q. Liao, “Deep learning for single image super-resolution: A brief review,” _IEEE Transactions on Multimedia_ , vol. 21, no. 12, pp. 3106–3121, 2019.
* [113] W. Cui, T. Zhang, S. Zhang, F. Jiang, W. Zuo, and D. Zhao, “Convolutional neural networks based intra prediction for HEVC,” _arXiv preprint arXiv:1808.05734_ , 2018.
* [114] J. Li, B. Li, J. Xu, R. Xiong, and W. Gao, “Fully connected network-based intra prediction for image coding,” _IEEE Transactions on Image Processing_ , vol. 27, no. 7, pp. 3236–3247, 2018.
* [115] J. Pfaff, P. Helle, D. Maniry, S. Kaltenstadler, W. Samek, H. Schwarz, D. Marpe, and T. Wiegand, “Neural network based intra prediction for video coding,” _Applications of Digital Image Processing XLI_ , vol. 10752, p. 1075213, 2018.
* [116] Y. Hu, W. Yang, M. Li, and J. Liu, “Progressive spatial recurrent neural network for intra prediction,” _IEEE Transactions on Multimedia_ , vol. 21, no. 12, pp. 3024–3037, 2019.
* [117] Z. Jin, P. An, and L. Shen, “Video intra prediction using convolutional encoder decoder network,” _Neurocomputing_ , vol. 394, pp. 168–177, 2020.
* [118] B. Girod, “Motion-compensating prediction with fractional-pel accuracy,” _IEEE Transactions on Communications_ , vol. 41, no. 4, pp. 604–612, 1993.
* [119] N. Yan, D. Liu, H. Li, and F. Wu, “A convolutional neural network approach for half-pel interpolation in video coding,” in _Proceedings of the IEEE International Symposium on Circuits and Systems_. IEEE, 2017, pp. 1–4, Baltimore, MD.
* [120] H. Zhang, L. Song, Z. Luo, and X. Yang, “Learning a convolutional neural network for fractional interpolation in HEVC inter coding,” in _Proceedings of the IEEE Conference on Visual Communications and Image Processing_. IEEE, 2017, pp. 1–4, St. Petersburg, FL.
* [121] J. Liu, S. Xia, W. Yang, M. Li, and D. Liu, “One-for-all: Grouped variation network-based fractional interpolation in video coding,” _IEEE Transactions on Image Processing_ , vol. 28, no. 5, pp. 2140–2151, 2018.
* [122] L. Zhao, S. Wang, X. Zhang, S. Wang, S. Ma, and W. Gao, “Enhanced motion-compensated video coding with deep virtual reference frame generation,” _IEEE Transactions on Image Processing_ , vol. 28, no. 10, pp. 4832–4844, 2019.
* [123] S. Xia, W. Yang, Y. Hu, and J. Liu, “Deep inter prediction via pixel-wise motion oriented reference generation,” in _Proceedings of the IEEE International Conference on Image Processing_. IEEE, 2019, pp. 1710–1774, Taipei, Taiwan.
* [124] S. Huo, D. Liu, F. Wu, and H. Li, “Convolutional neural network-based motion compensation refinement for video coding,” in _Proceedings of the IEEE International Symposium on Circuits and Systems_. IEEE, 2018, pp. 1–4, Florence, Italy.
* [125] M. M. Alam, T. D. Nguyen, M. T. Hagan, and D. M. Chandler, “A perceptual quantization strategy for HEVC based on a convolutional neural network trained on natural images,” _Applications of Digital Image Processing XXXVIII_ , vol. 9599, p. 959918, 2015.
* [126] R. Song, D. Liu, H. Li, and F. Wu, “Neural network-based arithmetic coding of intra prediction modes in HEVC,” in _Proceedings of the IEEE Visual Communications and Image Processing_. IEEE, 2017, pp. 1–4, St. Petersburg, FL.
* [127] S. Puri, S. Lasserre, and P. Le Callet, “CNN-based transform index prediction in multiple transforms framework to assist entropy coding,” in _Proceedings of the European Signal Processing Conference_. IEEE, 2017, pp. 798–802, Kos island, Greece.
* [128] C. Ma, D. Liu, X. Peng, and F. Wu, “Convolutional neural network-based arithmetic coding of dc coefficients for HEVC intra coding,” in _Proceedings of the IEEE International Conference on Image Processing_. IEEE, 2018, pp. 1772–1776, Athens, Greece.
* [129] G. Lu, W. Ouyang, D. Xu, X. Zhang, C. Cai, and Z. Gao, “Dvc: An end-to-end deep video compression framework,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2019, pp. 11 006–11 015, Long Beach, CA.
* [130] O. Rippel, S. Nair, C. Lew, S. Branson, A. G. Anderson, and L. Bourdev, “Learned video compression,” in _Proceedings of the IEEE International Conference on Computer Vision_ , 2019, pp. 3454–3463, South Korea.
* [131] H. Liu, H. Shen, L. Huang, M. Lu, T. Chen, and Z. Ma, “Learned video compression via joint spatial-temporal correlation exploration,” in _Proceedings of the AAAI Conference on Artificial Intelligence_ , vol. 34, no. 07, 2020, pp. 11 580–11 587.
* [132] G. Toderici, S. M. O’Malley, S. J. Hwang, D. Vincent, D. Minnen, S. Baluja, M. Covell, and R. Sukthankar, “Variable rate image compression with recurrent neural networks,” in _Proceedings of the International Conference on Learning Representations_ , 2016.
* [133] G. Toderici, D. Vincent, N. Johnston, S. Jin Hwang, D. Minnen, J. Shor, and M. Covell, “Full resolution image compression with recurrent neural networks,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2017, pp. 5306–5314, Honolulu, Hawaii.
* [134] N. Johnston, D. Vincent, D. Minnen, M. Covell, S. Singh, T. Chinen, S. Jin Hwang, J. Shor, and G. Toderici, “Improved lossy image compression with priming and spatially adaptive bit rates for recurrent networks,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2018, pp. 4385–4393.
* [135] Y. Choi, M. El-Khamy, and J. Lee, “Variable rate deep image compression with a conditional autoencoder,” in _Proceedings of the IEEE International Conference on Computer Vision_ , 2019, pp. 3146–3154.
* [136] T. Chen, H. Liu, Z. Ma, Q. Shen, X. Cao, and Y. Wang, “Neural image compression via non-local attention optimization and improved context modeling,” _arXiv preprint arXiv:1910.06244_ , 2019.
* [137] J. Ballé, “Efficient nonlinear transforms for lossy image compression,” _arXiv preprint arXiv:1802.00847_ , 2018.
* [138] J. Lee, S. Cho, and S.-K. Beack, “Context-adaptive entropy model for end-to-end optimized image compression,” _arXiv preprint arXiv:1809.10452_ , 2018.
* [139] J. Klopp, Y.-C. F. Wang, S.-Y. Chien, and L.-G. Chen, “Learning a code-space predictor by exploiting intra-image-dependencies.” in _BMVC_ , 2018, p. 124\.
* [140] F. Mentzer, E. Agustsson, M. Tschannen, R. Timofte, and L. Van Gool, “Conditional probability models for deep image compression,” in _IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_ , vol. 1, no. 2, 2018, p. 3.
* [141] G. Lu, W. Ouyang, D. Xu, X. Zhang, C. Cai, and Z. Gao, “DVC: An end-to-end deep video compression framework,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2019, pp. 11 006–11 015.
* [142] J. Ballé, D. Minnen, S. Singh, S. J. Hwang, and N. Johnston, “Variational image compression with a scale hyperprior,” _arXiv preprint arXiv:1802.01436_ , 2018.
* [143] H. Liu, T. Chen, P. Guo, Q. Shen, X. Cao, Y. Wang, and Z. Ma, “Non-local attention optimized deep image compression,” _arXiv preprint arXiv:1904.09757_ , 2019.
* [144] C.-Y. Wu, N. Singhal, and P. Krähenbühl, “Video compression through image interpolation,” in _Proceedings of the European Conference on Computer Vision_ , 2018, pp. 416–431.
* [145] A. Djelouah, J. Campos, S. Schaub-Meyer, and C. Schroers, “Neural inter-frame compression for video coding,” in _Proceedings of the IEEE International Conference on Computer Vision_ , 2019, pp. 6421–6429.
* [146] M. Li, W. Zuo, S. Gu, D. Zhao, and D. Zhang, “Learning convolutional networks for content-weighted image compression,” _arXiv preprint arXiv:1703.10553_ , 2017.
* [147] X. Wang, R. Girshick, A. Gupta, and K. He, “Non-local neural networks,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2018, pp. 7794–7803.
* [148] Y. Hu, W. Yang, and J. Liu, “Coarse-to-fine hyper-prior modeling for learned image compression.” in _Proceedings of the AAAI Conference on Artificial Intelligence_ , 2020, pp. 11 013–11 020.
* [149] D. Minnen, J. Ballé, and G. D. Toderici, “Joint autoregressive and hierarchical priors for learned image compression,” in _Advances in Neural Information Processing Systems_ , 2018, pp. 10 794–10 803.
* [150] A. v. d. Oord, N. Kalchbrenner, and K. Kavukcuoglu, “Pixel recurrent neural networks,” _arXiv preprint arXiv:1601.06759_ , 2016.
* [151] S. Reed, A. van den Oord, N. Kalchbrenner, S. G. Colmenarejo, Z. Wang, Y. Chen, D. Belov, and N. De Freitas, “Parallel multiscale autoregressive density estimation,” in _Proceedings of the 34th International Conference on Machine Learning_. JMLR. org, 2017, pp. 2912–2921.
* [152] Z. Cheng, H. Sun, M. Takeuchi, and J. Katto, “Learned image compression with discretized gaussian mixture likelihoods and attention modules,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2020, pp. 7939–7948.
* [153] J. Lee, S. Cho, and M. Kim, “An end-to-end joint learning scheme of image compression and quality enhancement with improved entropy minimization,” _arXiv_ , pp. arXiv–1912, 2019.
* [154] O. Rippel and L. Bourdev, “Real-time adaptive image compression,” _arXiv preprint arXiv:1705.05823_ , 2017.
* [155] C. Huang, H. Liu, T. Chen, S. Pu, Q. Shen, and Z. Ma, “Extreme image coding via multiscale autoencoders with generative adversarial optimization,” in _Proceedings of IEEE Visual Communications and Image Processing_ , 2019.
* [156] E. Agustsson, M. Tschannen, F. Mentzer, R. Timofte, and L. V. Gool, “Generative adversarial networks for extreme learned image compression,” in _Proceedings of the IEEE International Conference on Computer Vision_ , 2019, pp. 221–231.
* [157] H. Liu, T. Chen, Q. Shen, T. Yue, and Z. Ma, “Deep image compression via end-to-end learning,” in _Proceedings of the IEEE International Conference on Computer Vision Workshops_ , 2018.
* [158] J. L. Hennessy and D. A. Patterson, “A new golden age for computer architecture,” _Communications of the ACM_ , vol. 62, no. 2, pp. 48–60, 2019\.
* [159] S. Midtskogen and J.-M. Valin, “The AV1 constrained directional enhancement filter (CDEF),” in _2018 IEEE International Conference on Acoustics, Speech and Signal Processing_. IEEE, 2018, pp. 1193–1197.
* [160] D. Mukherjee, S. Li, Y. Chen, A. Anis, S. Parker, and J. Bankoski, “A switchable loop-restoration with side-information framework for the emerging AV1 video codec,” in _2017 IEEE International Conference on Image Processing_. IEEE, 2017, pp. 265–269.
* [161] C.-Y. Tsai, C.-Y. Chen, T. Yamakage, I. S. Chong, Y.-W. Huang, C.-M. Fu, T. Itoh, T. Watanabe, T. Chujoh, M. Karczewicz _et al._ , “Adaptive loop filtering for video coding,” _IEEE Journal of Selected Topics in Signal Processing_ , vol. 7, no. 6, pp. 934–945, 2013.
* [162] W.-S. Park and M. Kim, “Cnn-based in-loop filtering for coding efficiency improvement,” in _2016 IEEE 12th Image, Video, and Multidimensional Signal Processing Workshop (IVMSP)_. IEEE, 2016, pp. 1–5.
* [163] Y. Dai, D. Liu, and F. Wu, “A convolutional neural network approach for post-processing in HEVC intra coding,” in _International Conference on Multimedia Modeling_. Springer, 2017, pp. 28–39.
* [164] Y. Zhang, T. Shen, X. Ji, Y. Zhang, R. Xiong, and Q. Dai, “Residual highway convolutional neural networks for in-loop filtering in HEVC,” _IEEE Transactions on image processing_ , vol. 27, no. 8, pp. 3827–3841, 2018.
* [165] X. Xu, J. Qian, L. Yu, H. Wang, X. Zeng, Z. Li, and N. Wang, “Dense inception attention neural network for in-loop filter,” in _2019 Picture Coding Symposium_. IEEE, 2019, pp. 1–5.
* [166] K. Lin, C. Jia, Z. Zhao, L. Wang, S. Wang, S. Ma, and W. Gao, “Residual in residual based convolutional neural network in-loop filter for avs3,” in _2019 Picture Coding Symposium_. IEEE, 2019, pp. 1–5.
* [167] J. Kang, S. Kim, and K. M. Lee, “Multi-modal/multi-scale convolutional neural network based in-loop filter design for next generation video codec,” in _2017 IEEE International Conference on Image Processing_. IEEE, 2017, pp. 26–30.
* [168] C. Jia, S. Wang, X. Zhang, S. Wang, and S. Ma, “Spatial-temporal residue network based in-loop filter for video coding,” in _2017 IEEE Visual Communications and Image Processing_. IEEE, 2017, pp. 1–4.
* [169] X. Meng, C. Chen, S. Zhu, and B. Zeng, “A new HEVC in-loop filter based on multi-channel long-short-term dependency residual networks,” in _2018 Data Compression Conference_. IEEE, 2018, pp. 187–196.
* [170] D. Li and L. Yu, “An in-loop filter based on low-complexity cnn using residuals in intra video coding,” in _2019 IEEE International Symposium on Circuits and Systems_. IEEE, 2019, pp. 1–5.
* [171] D. Ding, L. Kong, G. Chen, Z. Liu, and Y. Fang, “A switchable deep learning approach for in-loop filtering in video coding,” _IEEE Transactions on Circuits and Systems for Video Technology_ , vol. 30, no. 7, pp. 1871–1887, 2020\.
* [172] D. Ding, G. Chen, D. Mukherjee, U. Joshi, and Y. Chen, “A CNN-based in-loop filtering approach for AV1 video codec,” in _Proceedings of the Picture Coding Symposium_. IEEE, 2019, pp. 1–5, Ningbo, China.
* [173] G. Chen, D. Ding, D. Mukherjee, U. Joshi, and Y. Chen, “AV1 in-loop filtering using a wide-activation structured residual network,” in _Proceedings of the IEEE International Conference on Image Processing_. IEEE, 2019, pp. 1725–1729, Taipei, Taiwan.
* [174] H. Yin, R. Yang, X. Fang, and S. Ma, “Ce13-1.2: adaptive convolutional neural network loop filter,” _JVET-N0480_ , 2019.
* [175] T. Li, M. Xu, C. Zhu, R. Yang, Z. Wang, and Z. Guan, “A deep learning approach for multi-frame in-loop filter of HEVC,” _IEEE Transactions on Image Processing_ , vol. 28, no. 11, pp. 5663–5678, 2019.
* [176] C. Jia, S. Wang, X. Zhang, S. Wang, J. Liu, S. Pu, and S. Ma, “Content-aware convolutional neural network for in-loop filtering in high efficiency video coding,” _IEEE Transactions on Image Processing_ , vol. 28, no. 7, pp. 3343–3356, 2019.
* [177] C. Dong, Y. Deng, C. Change Loy, and X. Tang, “Compression artifacts reduction by a deep convolutional network,” in _Proceedings of the IEEE International Conference on Computer Vision_ , 2015, pp. 576–584, Santiago, Chile.
* [178] L. Cavigelli, P. Hager, and L. Benini, “CAS-CNN: A deep convolutional neural network for image compression artifact suppression,” in _Proceedings of the IEEE International Joint Conference on Neural Networks_. IEEE, 2017, pp. 752–759, Anchorage, Alaska.
* [179] J. Guo and H. Chao, “Building dual-domain representations for compression artifacts reduction,” in _Proceedings of the European Conference on Computer Vision_. Springer, 2016, pp. 628–644, Amsterdam, The Netherlands.
* [180] L. Galteri, L. Seidenari, M. Bertini, and A. Del Bimbo, “Deep generative adversarial compression artifact removal,” in _Proceedings of the IEEE International Conference on Computer Vision_ , 2017, pp. 4826–4835, Venice, Italy.
* [181] P. Liu, H. Zhang, K. Zhang, L. Lin, and W. Zuo, “Multi-level wavelet-cnn for image restoration,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops_ , 2018, pp. 773–782, Salt Lake City, UT.
* [182] Y. Zhang, L. Sun, C. Yan, X. Ji, and Q. Dai, “Adaptive residual networks for high-quality image restoration,” _IEEE Transactions on Image Processing_ , vol. 27, no. 7, pp. 3150–3163, 2018.
* [183] T. Wang, M. Chen, and H. Chao, “A novel deep learning-based method of improving coding efficiency from the decoder-end for HEVC,” in _Proceedings of the Data Compression Conference_. IEEE, 2017, pp. 410–419, Snowbird, Utah.
* [184] R. Yang, M. Xu, and Z. Wang, “Decoder-side HEVC quality enhancement with scalable convolutional neural network,” in _2017 IEEE International Conference on Multimedia and Expo_. IEEE, 2017, pp. 817–822.
* [185] X. He, Q. Hu, X. Zhang, C. Zhang, W. Lin, and X. Han, “Enhancing HEVC compressed videos with a partition-masked convolutional neural network,” in _2018 25th IEEE International Conference on Image Processing_. IEEE, 2018, pp. 216–220.
* [186] A. Dosovitskiy, P. Fischer, E. Ilg, P. Hausser, C. Hazirbas, V. Golkov, P. Van Der Smagt, D. Cremers, and T. Brox, “Flownet: Learning optical flow with convolutional networks,” in _Proceedings of the IEEE international conference on computer vision_ , 2015, pp. 2758–2766.
* [187] E. Ilg, N. Mayer, T. Saikia, M. Keuper, A. Dosovitskiy, and T. Brox, “Flownet 2.0: Evolution of optical flow estimation with deep networks,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2017, pp. 2462–2470.
* [188] D. Sun, X. Yang, M.-Y. Liu, and J. Kautz, “Pwc-net: Cnns for optical flow using pyramid, warping, and cost volume,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2018, pp. 8934–8943.
* [189] T. Xue, B. Chen, J. Wu, D. Wei, and W. T. Freeman, “Video enhancement with task-oriented flow,” _International Journal of Computer Vision_ , vol. 127, no. 8, pp. 1106–1125, 2019.
* [190] W. Bao, W.-S. Lai, X. Zhang, Z. Gao, and M.-H. Yang, “MEMC-Net: Motion estimation and motion compensation driven neural network for video interpolation and enhancement,” _IEEE transactions on pattern analysis and machine intelligence_ , 2019.
* [191] X. Wang, K. C. Chan, K. Yu, C. Dong, and C. Change Loy, “EDVR: Video restoration with enhanced deformable convolutional networks,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops_ , 2019, pp. 0–0.
* [192] R. Yang, M. Xu, Z. Wang, and T. Li, “Multi-frame quality enhancement for compressed video,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2018, pp. 6664–6673.
* [193] Z. Guan, Q. Xing, M. Xu, R. Yang, T. Liu, and Z. Wang, “MFQE 2.0: A new approach for multi-frame quality enhancement on compressed video,” _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , 2019.
* [194] J. Tong, X. Wu, D. Ding, Z. Zhu, and Z. Liu, “Learning-based multi-frame video quality enhancement,” in _Proceedings of the IEEE International Conference on Image Processing_. IEEE, 2019, pp. 929–933, Taipei, Taiwan.
* [195] M. Lu, M. Cheng, Y. Xu, S. Pu, Q. Shen, and Z. Ma, “Learned quality enhancement via multi-frame priors for HEVC compliant low-delay applications,” in _2019 IEEE International Conference on Image Processing_. IEEE, 2019, pp. 934–938.
* [196] U. Joshi, D. Mukherjee, J. Han, Y. Chen, S. Parker, H. Su, A. Chiang, Y. Xu, Z. Liu, Y. Wang _et al._ , “Novel inter and intra prediction tools under consideration for the emerging AV1 video codec,” in _Applications of Digital Image Processing XL_ , vol. 10396. International Society for Optics and Photonics, 2017, p. 103960F.
* [197] Z. Liu, D. Mukherjee, W.-T. Lin, P. Wilkins, J. Han, and Y. Xu, “Adaptive multi-reference prediction using a symmetric framework,” _Electronic Imaging_ , vol. 2017, no. 2, pp. 65–72, 2017.
* [198] Y. Chen, D. Murherjee, J. Han, A. Grange, Y. Xu, Z. Liu, S. Parker, C. Chen, H. Su, U. Joshi _et al._ , “An overview of core coding tools in the AV1 video codec,” in _2018 Picture Coding Symposium_. IEEE, 2018, pp. 41–45.
* [199] Y. Wang, S. Inguva, and B. Adsumilli, “YouTube UGC dataset for video compression research,” _IEEE International Workshop on Multimedia Signal Processing_ , September 2019, Kuala Lumpur, Malaysia.
* [200] H. Zhao, J. Shi, X. Qi, X. Wang, and J. Jia, “Pyramid scene parsing network,” in _Proceedings of the IEEE conference on Computer Vision and Pattern Recognition_ , July 2017, pp. 2881–2890, Honolulu, HI.
* [201] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2016, pp. 770–778.
* [202] M. Bosch, F. Zhu, and E. J. Delp, “Segmentation-Based Video Compression Using Texture and Motion Models,” _IEEE Journal of Selected Topics in Signal Processing_ , vol. 5, no. 7, pp. 1366–1377, November 2011.
* [203] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein _et al._ , “ImageNet large scale visual recognition challenge,” _International Journal of Computer Vision_ , vol. 115, no. 3, pp. 211–252, 2015.
* [204] T. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick, “Microsoft COCO: Common objects in context,” in _Proceedings of the IEEE European Conference on Computer Vision_ , September 2014, pp. 740–755, Zürich, Switzerland.
* [205] B. Zhou, H. Zhao, X. Puig, T. Xiao, S. Fidler, A. Barriuso, and A. Torralba, “Semantic understanding of scenes through the ADE20K dataset,” _International Journal of Computer Vision_ , vol. 127, no. 3, pp. 302–321, 2019.
* [206] J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , June 2015, pp. 3431–3440, Boston, MA.
* [207] L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille, “Semantic image segmentation with deep convolutional nets and fully connected crfs,” _arXiv preprint arXiv:1412.7062_ , 2014.
* [208] F. Yu and V. Koltun, “Multi-scale context aggregation by dilated convolutions,” _arXiv preprint arXiv:1511.07122_ , 2015.
* [209] B. Zhou, H. Zhao, X. Puig, S. Fidler, A. Barriuso, and A. Torralba, “Scene parsing through ade20k dataset,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2017, pp. 633–641.
* [210] D. Chen, Q. Chen, and F. Zhu, “Pixel-level texture segmentation based AV1 video compression,” in _2019 IEEE International Conference on Acoustics, Speech and Signal Processing_. IEEE, 2019, pp. 1622–1626.
* [211] M. Bosch, F. Zhu, and E. J. Delp, “Spatial texture models for video compression,” _Proceedings of IEEE International Conference on Image Processing_ , vol. 1, pp. 93–96, September 2007, San Antonio, TX.
* [212] C. Fu, D. Chen, E. Delp, Z. Liu, and F. Zhu, “Texture segmentation based video compression using convolutional neural networks,” _Electronic Imaging_ , vol. 2018, no. 2, pp. 155–1, 2018.
* [213] I.-R. R. BT.500-14, “Methodologies for the subjective assessment of the quality of television images,” Geneva, Tech. Rep., 2019.
* [214] M. Haindl and S. Mikes, “Texture segmentation benchmark,” in _Proceedings of the 19th International Conference on Pattern Recognition_. IEEE, 2008, pp. 1–4.
* [215] N. Xu, L. Yang, Y. Fan, D. Yue, Y. Liang, J. Yang, and T. Huang, “YouTube-VOS: A large-scale video object segmentation benchmark,” _arXiv preprint_ , p. arXiv:1809.03327, 2018.
* [216] F. Perazzi, J. Pont-Tuset, B. McWilliams, L. Van Gool, M. Gross, and A. Sorkine-Hornung, “A benchmark dataset and evaluation methodology for video object segmentation,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2016, pp. 724–732.
* [217] H. Liu, M. Lu, Z. Ma, F. Wang, Z. Xie, X. Cao, and Y. Wang, “Neural video coding using multiscale motion compensation and spatiotemporal context model,” _accepted by IEEE Trans. Circuits and Systems for Video Technology_ , Oct. 2020.
* [218] M. Li, W. Zuo, S. Gu, D. Zhao, and D. Zhang, “Learning convolutional networks for content-weighted image compression,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2018, pp. 3214–3223.
* [219] T. M. Cover and J. A. Thomas, _Elements of information theory_. John Wiley & Sons, 2012.
* [220] Y. Zhang, K. Li, K. Li, B. Zhong, and Y. Fu, “Residual non-local attention networks for image restoration,” _arXiv preprint arXiv:1903.10082_ , 2019\.
* [221] X. Shi, Z. Chen, H. Wang, D.-Y. Yeung, W.-K. Wong, and W.-c. Woo, “Convolutional lstm network: A machine learning approach for precipitation nowcasting,” _Advances in neural information processing systems_ , vol. 28, pp. 802–810, 2015.
* [222] G. Lu, X. Zhang, W. Ouyang, L. Chen, Z. Gao, and D. Xu, “An end-to-end learning framework for video compression,” _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , 2020.
* [223] R. Timofte, E. Agustsson, L. Van Gool, M.-H. Yang, and L. Zhang, “Ntire 2017 challenge on single image super-resolution: Methods and results,” in _Proceedings of the IEEE conference on Computer Vision and Pattern Recognition Workshops_ , 2017, pp. 114–125.
* [224] J. Yu, Y. Fan, J. Yang, N. Xu, Z. Wang, X. Wang, and T. Huang, “Wide activation for efficient and accurate image super-resolution,” _arXiv preprint arXiv:1808.08718_ , 2018.
* [225] K. He, X. Zhang, S. Ren, and J. Sun, “Identity mappings in deep residual networks,” in _European Conference on Computer Vision_. Springer, 2016, pp. 630–645.
* [226] Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” _nature_ , vol. 521, no. 7553, pp. 436–444, 2015.
* [227] Q. Xia, H. Liu, and Z. Ma, “Object-based image coding: A learning-driven revisit,” in _2020 IEEE International Conference on Multimedia and Expo (ICME)_. IEEE, 2020, pp. 1–6.
* [228] J. Lee, S. Cho, and M. Kim, “A hybrid architecture of jointly learning image compression and quality enhancement with improved entropy minimization,” _arXiv preprint arXiv:1912.12817_ , 2019.
* [229] Y. Wang, Q. Yao, J. T. Kwok, and L. M. Ni, “Generalizing from a few examples: A survey on few-shot learning,” _ACM Computing Surveys (CSUR)_ , vol. 53, no. 3, pp. 1–34, 2020.
* [230] Z. Wang, E. P. Simoncelli, and A. C. Bovik, “Multiscale structural similarity for image quality assessment,” in _Proceedings of The Thrity-Seventh Asilomar Conference on Signals, Systems Computers_ , vol. 2, Nov 2003, pp. 1398–1402, pacific Grove, CA.
* [231] D. Yuan, T. Zhao, Y. Xu, H. Xue, and L. Lin, “Visual jnd: a perceptual measurement in video coding,” _IEEE Access_ , vol. 7, pp. 29 014–29 022, 2019.
* [232] Netflix, Inc., “VMAF: Perceptual video quality assessment based on multi-method fusion,” https://github.com/Netflix/vmaf, 2017.
* [233] Q. Shen, J. Cai, L. Liu, H. Liu, T. Chen, L. Ye, and Z. Ma, “Codedvision: Towards joint image understanding and compression via end-to-end learning,” in _Pacific Rim Conference on Multimedia_. Springer, 2018, pp. 3–14.
* [234] L. Liu, H. Liu, T. Chen, Q. Shen, and Z. Ma, “Codedretrieval: Joint image compression and retrieval with neural networks,” in _2019 IEEE Visual Communications and Image Processing (VCIP)_. IEEE, 2019, pp. 1–4.
|
Send correspondence to Y. Sekimoto, E-mail: <EMAIL_ADDRESS>
# Concept Design of Low Frequency Telescope for CMB B-mode Polarization
satellite LiteBIRD
Y. Sekimoto Japan Aerospace Exploration Agency (JAXA), Institute of Space and
Astronautical Science (ISAS), Sagamihara, Kanagawa 252-5210, Japan The
University of Tokyo, Department of Astronomy, Tokyo 113-0033, Japan High
Energy Accelerator Research Organization (KEK), Tsukuba, Ibaraki 305-0801,
Japan P.A.R. Ade Cardiff University, School of Physics and Astronomy,
Cardiff CF10 3XQ, UK A. Adler Stockholm University E. Allys Laboratoire de
Physique de l’$\acute{\rm E}$cole Normale Sup$\acute{\rm e}$rieure, ENS,
Universit$\acute{\rm e}$ PSL, CNRS, Sorbonne Universit$\acute{\rm e}$,
Universit$\acute{\rm e}$ de Paris, 75005 Paris, France K. Arnold University
of California, San Diego, Department of Physics, San Diego, CA 92093-0424, USA
D. Auguste Université Paris-Saclay, CNRS/IN2P3, IJCLab, 91405 Orsay, France
J. Aumont IRAP, Universit$\acute{\rm e}$ de Toulouse, CNRS, CNES, UPS,
(Toulouse), France R. Aurlien University of Oslo, Institute of Theoretical
Astrophysics, NO-0315 Oslo, Norway J. Austermann National Institute of
Standards and Technology (NIST), Boulder, Colorado 80305, USA C. Baccigalupi
International School for Advanced Studies (SISSA), Via Bonomea 265, 34136,
Trieste, Italy A.J. Banday IRAP, Universit$\acute{\rm e}$ de Toulouse, CNRS,
CNES, UPS, (Toulouse), France R. Banerji University of Oslo, Institute of
Theoretical Astrophysics, NO-0315 Oslo, Norway R.B. Barreiro Instituto de
Fisica de Cantabria (IFCA, CSIC-UC), Avenida los Castros SN, 39005, Santander,
Spain S. Basak School of Physics, Indian Institute of Science Education and
Research Thiruvananthapuram, Maruthamala PO, Vithura, Thiruvananthapuram
695551, Kerala, India J. Beall National Institute of Standards and
Technology (NIST), Boulder, Colorado 80305, USA D. Beck Stanford University,
Department of Physics, CA 94305-4060, USA S. Beckman University of
California, Berkeley, Department of Physics, Berkeley, CA 94720, USA J.
Bermejo Instituto Universitario de Microgravedad Ignacio Da Riva (IDR/UPM),
Plaza Cardenal Cisneros 3, 28040 - Madrid, Spain P. de Bernardis
Dipartimento di Fisica, Università La Sapienza, P. le A. Moro 2, Roma, Italy
and INFN Roma M. Bersanelli Dipartimento di Fisica, Università degli Studi
di Milano, INAF-IASF Milano, and Sezione INFN Milano J. Bonis Université
Paris-Saclay, CNRS/IN2P3, IJCLab, 91405 Orsay, France J. Borrill Lawrence
Berkeley National Laboratory (LBNL), Computational Cosmology Center, Berkeley,
CA 94720, USA University of California, Berkeley, Space Science Laboratory,
Berkeley, CA 94720, USA F. Boulanger Laboratoire de Physique de
l’$\acute{\rm E}$cole Normale Sup$\acute{\rm e}$rieure, ENS,
Universit$\acute{\rm e}$ PSL, CNRS, Sorbonne Universit$\acute{\rm e}$,
Universit$\acute{\rm e}$ de Paris, 75005 Paris, France S. Bounissou Institut
d’Astrophysique Spatiale (IAS), CNRS, UMR 8617, Universit$\acute{\rm e}$
Paris-Sud 11, B$\hat{\rm a}$timent 121, 91405 Orsay, France M. Brilenkov
University of Oslo, Institute of Theoretical Astrophysics, NO-0315 Oslo,
Norway M. Brown University of Manchester, Manchester M13 9PL, United Kingdom
M. Bucher AstroParticle and Cosmology (APC) - University Paris Diderot,
CNRS/IN2P3, CEA/Irfu, Obs de Paris, Sorbonne Paris Cité, France E. Calabrese
Cardiff University, School of Physics and Astronomy, Cardiff CF10 3XQ, UK P.
Campeti International School for Advanced Studies (SISSA), Via Bonomea 265,
34136, Trieste, Italy A. Carones Dipartimento di Fisica, Università di Roma
“Tor Vergata”, and Sezione INFN Roma2 F.J. Casas Instituto de Fisica de
Cantabria (IFCA, CSIC-UC), Avenida los Castros SN, 39005, Santander, Spain A.
Challinor DAMTP, Centre for Mathematical Sciences, Wilberforce Road,
Cambridge CB3 0WA, U.K. Institute of Astronomy, Madingley Road, Cambridge CB3
0HA, U.K. Kavli Institute for Cosmology Cambridge, Madingley Road, Cambridge
CB3 0HA, U.K. V. Chan University of Toronto K. Cheung University of
California, Berkeley, Department of Physics, Berkeley, CA 94720, USA Y.
Chinone University of Tokyo, School of Science, Research Center for the Early
Universe, RESCEU J.F. Cliche McGill University, Physics Department,
Montreal, QC H3A 0G4, Canada L. Colombo Dipartimento di Fisica, Università
degli Studi di Milano, INAF-IASF Milano, and Sezione INFN Milano F. Columbro
Dipartimento di Fisica, Università La Sapienza, P. le A. Moro 2, Roma, Italy
and INFN Roma J. Cubas Universidad Politécnica de Madrid A. Cukierman
University of California, Berkeley, Department of Physics, Berkeley, CA 94720,
USA Stanford University, Department of Physics, CA 94305-4060, USA D. Curtis
University of California, Berkeley, Space Science Laboratory, Berkeley, CA
94720, USA G. D’Alessandro Dipartimento di Fisica, Università La Sapienza,
P. le A. Moro 2, Roma, Italy and INFN Roma N. Dachlythra Stockholm
University M. De Petris Dipartimento di Fisica, Università La Sapienza, P.
le A. Moro 2, Roma, Italy and INFN Roma C. Dickinson University of
Manchester, Manchester M13 9PL, United Kingdom P. Diego-Palazuelos Instituto
de Fisica de Cantabria (IFCA, CSIC-UC), Avenida los Castros SN, 39005,
Santander, Spain M. Dobbs McGill University, Physics Department, Montreal,
QC H3A 0G4, Canada T. Dotani Japan Aerospace Exploration Agency (JAXA),
Institute of Space and Astronautical Science (ISAS), Sagamihara, Kanagawa
252-5210, Japan L. Duband Univ. Grenoble Alpes, CEA, IRIG-DSBT, 38000
Grenoble, France S. Duff National Institute of Standards and Technology
(NIST), Boulder, Colorado 80305, USA J.M. Duval Univ. Grenoble Alpes, CEA,
IRIG-DSBT, 38000 Grenoble, France K. Ebisawa Japan Aerospace Exploration
Agency (JAXA), Institute of Space and Astronautical Science (ISAS),
Sagamihara, Kanagawa 252-5210, Japan T. Elleflot Lawrence Berkeley National
Laboratory (LBNL), Physics Division, Berkeley, CA 94720, USA H.K. Eriksen
University of Oslo, Institute of Theoretical Astrophysics, NO-0315 Oslo,
Norway J. Errard AstroParticle and Cosmology (APC) - University Paris
Diderot, CNRS/IN2P3, CEA/Irfu, Obs de Paris, Sorbonne Paris Cité, France T.
Essinger-Hileman NASA Goddard Space Flight Center F. Finelli INAF - OAS
Bologna, via Piero Gobetti, 93/3, 40129 Bologna (Italy) R. Flauger
University of California, San Diego, Department of Physics, San Diego, CA
92093-0424, USA C. Franceschet Dipartimento di Fisica, Università degli
Studi di Milano, INAF-IASF Milano, and Sezione INFN Milano U. Fuskeland
University of Oslo, Institute of Theoretical Astrophysics, NO-0315 Oslo,
Norway M. Galloway University of Oslo, Institute of Theoretical
Astrophysics, NO-0315 Oslo, Norway K. Ganga AstroParticle and Cosmology
(APC) - University Paris Diderot, CNRS/IN2P3, CEA/Irfu, Obs de Paris, Sorbonne
Paris Cité, France J.R. Gao SRON Netherlands Institute for Space Research
R. Genova-Santos Instituto de Astrofisica de Canarias (IAC), Spain M.
Gerbino Dipartimento di Fisica e Scienze della Terra, Università di Ferrara
and Sezione INFN di Ferrara, Via Saragat 1, 44122 Ferrara, Italy M. Gervasi
University of Milano Bicocca, Physics Department, p.zza della Scienza, 3,
20126 Milan Italy T. Ghigna University of Oxford Kavli Institute for the
Physics and Mathematics of the Universe (Kavli IPMU, WPI), UTIAS, The
University of Tokyo, Kashiwa, Chiba 277-8583, Japan E. Gjerløw University of
Oslo, Institute of Theoretical Astrophysics, NO-0315 Oslo, Norway M.L.
Gradziel National University of Ireland Maynooth J. Grain Institut
d’Astrophysique Spatiale (IAS), CNRS, UMR 8617, Universit$\acute{\rm e}$
Paris-Sud 11, B$\hat{\rm a}$timent 121, 91405 Orsay, France F. Grupp MPE A.
Gruppuso INAF - OAS Bologna, via Piero Gobetti, 93/3, 40129 Bologna (Italy)
J.E. Gudmundsson Stockholm University T. de Haan High Energy Accelerator
Research Organization (KEK), Tsukuba, Ibaraki 305-0801, Japan N.W. Halverson
Center for Astrophysics and Space Astronomy, University of Colorado, Boulder,
CO, 80309, USA P. Hargrave Cardiff University, School of Physics and
Astronomy, Cardiff CF10 3XQ, UK T. Hasebe Japan Aerospace Exploration Agency
(JAXA), Institute of Space and Astronautical Science (ISAS), Sagamihara,
Kanagawa 252-5210, Japan M. Hasegawa High Energy Accelerator Research
Organization (KEK), Tsukuba, Ibaraki 305-0801, Japan M. Hattori Tohoku
University, Graduate School of Science, Astronomical Institute, Sendai,
980-8578, Japan M. Hazumi High Energy Accelerator Research Organization
(KEK), Tsukuba, Ibaraki 305-0801, Japan Japan Aerospace Exploration Agency
(JAXA), Institute of Space and Astronautical Science (ISAS), Sagamihara,
Kanagawa 252-5210, Japan Kavli Institute for the Physics and Mathematics of
the Universe (Kavli IPMU, WPI), UTIAS, The University of Tokyo, Kashiwa, Chiba
277-8583, Japan The Graduate University for Advanced Studies (SOKENDAI),
Miura District, Kanagawa 240-0115, Hayama, Japan S. Henrot-Versillé
Université Paris-Saclay, CNRS/IN2P3, IJCLab, 91405 Orsay, France D. Herman
University of Oslo, Institute of Theoretical Astrophysics, NO-0315 Oslo,
Norway D. Herranz Instituto de Fisica de Cantabria (IFCA, CSIC-UC), Avenida
los Castros SN, 39005, Santander, Spain C.A. Hill Lawrence Berkeley National
Laboratory (LBNL), Physics Division, Berkeley, CA 94720, USA University of
California, Berkeley, Department of Physics, Berkeley, CA 94720, USA G.
Hilton National Institute of Standards and Technology (NIST), Boulder,
Colorado 80305, USA Y. Hirota The University of Tokyo, Tokyo 113-0033, Japan
E. Hivon Institut d’Astrophysique de Paris, CNRS/Sorbonne
Universit$\acute{\rm e}$, Paris France R.A. Hlozek University of Toronto Y.
Hoshino Saitama University, Saitama 338-8570, Japan E. de la Hoz Instituto
de Fisica de Cantabria (IFCA, CSIC-UC), Avenida los Castros SN, 39005,
Santander, Spain J. Hubmayr National Institute of Standards and Technology
(NIST), Boulder, Colorado 80305, USA K. Ichiki Nagoya University, Kobayashi-
Maskawa Institute for the Origin of Particles and the Universe, Aichi
464-8602, Japan T. Iida ispace, inc. H. Imada Kavli Institute for the
Physics and Mathematics of the Universe (Kavli IPMU, WPI), UTIAS, The
University of Tokyo, Kashiwa, Chiba 277-8583, Japan National Astronomical
Observatory of Japan, Mitaka, Tokyo 181-8588, Japan K. Ishimura Waseda
University H. Ishino Okayama University, Department of Physics, Okayama
700-8530, Japan G. Jaehnig Center for Astrophysics and Space Astronomy,
University of Colorado, Boulder, CO, 80309, USA T. Kaga Japan Aerospace
Exploration Agency (JAXA), Institute of Space and Astronautical Science
(ISAS), Sagamihara, Kanagawa 252-5210, Japan S. Kashima National
Astronomical Observatory of Japan, Mitaka, Tokyo 181-8588, Japan N. Katayama
Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU,
WPI), UTIAS, The University of Tokyo, Kashiwa, Chiba 277-8583, Japan A. Kato
High Energy Accelerator Research Organization (KEK), Tsukuba, Ibaraki
305-0801, Japan The Graduate University for Advanced Studies (SOKENDAI),
Miura District, Kanagawa 240-0115, Hayama, Japan T. Kawasaki Kitasato
University, Sagamihara, Kanagawa 252-0373, Japan R. Keskitalo Lawrence
Berkeley National Laboratory (LBNL), Computational Cosmology Center, Berkeley,
CA 94720, USA University of California, Berkeley, Space Science Laboratory,
Berkeley, CA 94720, USA T. Kisner Lawrence Berkeley National Laboratory
(LBNL), Computational Cosmology Center, Berkeley, CA 94720, USA University of
California, Berkeley, Space Science Laboratory, Berkeley, CA 94720, USA Y.
Kobayashi The University of Tokyo, Tokyo 113-0033, Japan N. Kogiso Osaka
Prefecture University, Sakai, Osaka 599-8531, Japan A. Kogut NASA Goddard
Space Flight Center K. Kohri High Energy Accelerator Research Organization
(KEK), Tsukuba, Ibaraki 305-0801, Japan E. Komatsu Max-Planck-Institut for
Astrophysics, D-85741 Garching, Germany K. Komatsu Okayama University,
Department of Physics, Okayama 700-8530, Japan K. Konishi The University of
Tokyo, Tokyo 113-0033, Japan N. Krachmalnicoff International School for
Advanced Studies (SISSA), Via Bonomea 265, 34136, Trieste, Italy I.
Kreykenbohm University of Erlangen-Nürnberg C.L. Kuo SLAC National
Accelerator Laboratory, Kavli Institute for Particle Astrophysics and
Cosmology (KIPAC), Menlo Park, CA 94025, USA Stanford University, Department
of Physics, CA 94305-4060, USA A. Kushino Kurume University, Kurume, Fukuoka
830-0011, Japan L. Lamagna Dipartimento di Fisica, Università La Sapienza,
P. le A. Moro 2, Roma, Italy and INFN Roma J.V. Lanen National Institute of
Standards and Technology (NIST), Boulder, Colorado 80305, USA M. Lattanzi
Istituto Nazionale di Fisica Nucleare - Sezione di Ferrara A.T. Lee Lawrence
Berkeley National Laboratory (LBNL), Physics Division, Berkeley, CA 94720, USA
University of California, Berkeley, Department of Physics, Berkeley, CA 94720,
USA C. Leloup AstroParticle and Cosmology (APC) - University Paris Diderot,
CNRS/IN2P3, CEA/Irfu, Obs de Paris, Sorbonne Paris Cité, France F. Levrier
Laboratoire de Physique de l’$\acute{\rm E}$cole Normale Sup$\acute{\rm
e}$rieure, ENS, Universit$\acute{\rm e}$ PSL, CNRS, Sorbonne
Universit$\acute{\rm e}$, Universit$\acute{\rm e}$ de Paris, 75005 Paris,
France E. Linder Lawrence Berkeley National Laboratory (LBNL), Physics
Division, Berkeley, CA 94720, USA University of California, Berkeley, Space
Science Laboratory, Berkeley, CA 94720, USA T. Louis Université Paris-
Saclay, CNRS/IN2P3, IJCLab, 91405 Orsay, France G. Luzzi Italian Space
Agency (ASI) T. Maciaszek Centre National d’Études Spatiales (CNES), France
B. Maffei Institut d’Astrophysique Spatiale (IAS), CNRS, UMR 8617,
Universit$\acute{\rm e}$ Paris-Sud 11, B$\hat{\rm a}$timent 121, 91405 Orsay,
France D. Maino Dipartimento di Fisica, Università degli Studi di Milano,
INAF-IASF Milano, and Sezione INFN Milano M. Maki High Energy Accelerator
Research Organization (KEK), Tsukuba, Ibaraki 305-0801, Japan S. Mandelli
Dipartimento di Fisica, Università degli Studi di Milano, INAF-IASF Milano,
and Sezione INFN Milano E. Martinez-Gonzalez Instituto de Fisica de
Cantabria (IFCA, CSIC-UC), Avenida los Castros SN, 39005, Santander, Spain S.
Masi Dipartimento di Fisica, Università La Sapienza, P. le A. Moro 2, Roma,
Italy and INFN Roma T. Matsumura Kavli Institute for the Physics and
Mathematics of the Universe (Kavli IPMU, WPI), UTIAS, The University of Tokyo,
Kashiwa, Chiba 277-8583, Japan A. Mennella Dipartimento di Fisica,
Università degli Studi di Milano, INAF-IASF Milano, and Sezione INFN Milano
M. Migliaccio Dipartimento di Fisica, Università di Roma “Tor Vergata”, and
Sezione INFN Roma2 Y. Minami High Energy Accelerator Research Organization
(KEK), Tsukuba, Ibaraki 305-0801, Japan K. Mitsuda National Astronomical
Observatory of Japan, Mitaka, Tokyo 181-8588, Japan J. Montgomery McGill
University, Physics Department, Montreal, QC H3A 0G4, Canada L. Montier
IRAP, Universit$\acute{\rm e}$ de Toulouse, CNRS, CNES, UPS, (Toulouse),
France G. Morgante INAF - OAS Bologna, via Piero Gobetti, 93/3, 40129
Bologna (Italy) B. Mot IRAP, Universit$\acute{\rm e}$ de Toulouse, CNRS,
CNES, UPS, (Toulouse), France Y. Murata Japan Aerospace Exploration Agency
(JAXA), Institute of Space and Astronautical Science (ISAS), Sagamihara,
Kanagawa 252-5210, Japan J.A. Murphy National University of Ireland Maynooth
M. Nagai National Astronomical Observatory of Japan, Mitaka, Tokyo 181-8588,
Japan Y. Nagano Okayama University, Department of Physics, Okayama 700-8530,
Japan T. Nagasaki High Energy Accelerator Research Organization (KEK),
Tsukuba, Ibaraki 305-0801, Japan R. Nagata Japan Aerospace Exploration
Agency (JAXA), Institute of Space and Astronautical Science (ISAS),
Sagamihara, Kanagawa 252-5210, Japan S. Nakamura Yokohama National
University, Yokohama, Kanagawa 240-8501, Japan T. Namikawa DAMTP, Centre for
Mathematical Sciences, Wilberforce Road, Cambridge CB3 0WA, U.K. P. Natoli
Dipartimento di Fisica e Scienze della Terra, Università di Ferrara and
Sezione INFN di Ferrara, Via Saragat 1, 44122 Ferrara, Italy S. Nerval
University of Toronto T. Nishibori Japan Aerospace Exploration Agency
(JAXA), Research and Development Directorate, Tsukuba, Ibaraki 305-8505, Japan
H. Nishino University of Tokyo, School of Science, Research Center for the
Early Universe, RESCEU C. O’Sullivan National University of Ireland Maynooth
H. Ogawa Osaka Prefecture University, Sakai, Osaka 599-8531, Japan H. Ogawa
Japan Aerospace Exploration Agency (JAXA), Institute of Space and
Astronautical Science (ISAS), Sagamihara, Kanagawa 252-5210, Japan S. Oguri
Japan Aerospace Exploration Agency (JAXA), Institute of Space and
Astronautical Science (ISAS), Sagamihara, Kanagawa 252-5210, Japan H. Ohsaki
The University of Tokyo, Tokyo 113-0033, Japan I.S. Ohta Konan University
N. Okada Japan Aerospace Exploration Agency (JAXA), Institute of Space and
Astronautical Science (ISAS), Sagamihara, Kanagawa 252-5210, Japan N. Okada
Osaka Prefecture University, Sakai, Osaka 599-8531, Japan L. Pagano
Dipartimento di Fisica e Scienze della Terra, Università di Ferrara and
Sezione INFN di Ferrara, Via Saragat 1, 44122 Ferrara, Italy A. Paiella
Dipartimento di Fisica, Università La Sapienza, P. le A. Moro 2, Roma, Italy
and INFN Roma D. Paoletti INAF - OAS Bologna, via Piero Gobetti, 93/3, 40129
Bologna (Italy) G. Patanchon AstroParticle and Cosmology (APC) - University
Paris Diderot, CNRS/IN2P3, CEA/Irfu, Obs de Paris, Sorbonne Paris Cité, France
J. Peloton Université Paris-Saclay, CNRS/IN2P3, IJCLab, 91405 Orsay, France
F. Piacentini Dipartimento di Fisica, Università La Sapienza, P. le A. Moro
2, Roma, Italy and INFN Roma G. Pisano Dipartimento di Fisica, Università La
Sapienza, P. le A. Moro 2, Roma, Italy and INFN Roma Cardiff University,
School of Physics and Astronomy, Cardiff CF10 3XQ, UK G. Polenta Space
Science Data Center, Italian Space Agency, via del Politecnico, 00133, Roma,
Italy D. Poletti International School for Advanced Studies (SISSA), Via
Bonomea 265, 34136, Trieste, Italy T. Prouvé Univ. Grenoble Alpes, CEA,
IRIG-DSBT, 38000 Grenoble, France G. Puglisi Stanford University, Department
of Physics, CA 94305-4060, USA D. Rambaud IRAP, Universit$\acute{\rm e}$ de
Toulouse, CNRS, CNES, UPS, (Toulouse), France C. Raum University of
California, Berkeley, Department of Physics, Berkeley, CA 94720, USA S.
Realini Dipartimento di Fisica, Università degli Studi di Milano, INAF-IASF
Milano, and Sezione INFN Milano M. Reinecke Max-Planck-Institut for
Astrophysics, D-85741 Garching, Germany M. Remazeilles University of
Manchester, Manchester M13 9PL, United Kingdom A. Ritacco Institut
d’Astrophysique Spatiale (IAS), CNRS, UMR 8617, Universit$\acute{\rm e}$
Paris-Sud 11, B$\hat{\rm a}$timent 121, 91405 Orsay, France Laboratoire de
Physique de l’$\acute{\rm E}$cole Normale Sup$\acute{\rm e}$rieure, ENS,
Universit$\acute{\rm e}$ PSL, CNRS, Sorbonne Universit$\acute{\rm e}$,
Universit$\acute{\rm e}$ de Paris, 75005 Paris, France G. Roudil IRAP,
Universit$\acute{\rm e}$ de Toulouse, CNRS, CNES, UPS, (Toulouse), France
J.A. Rubino-Martin Instituto de Astrofisica de Canarias (IAC), Spain M.
Russell University of California, San Diego, Department of Physics, San
Diego, CA 92093-0424, USA H. Sakurai The Institute for Solid State Physics
(ISSP), The University of Tokyo, Kashiwa, Chiba 277-8581, Japan Y. Sakurai
Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU,
WPI), UTIAS, The University of Tokyo, Kashiwa, Chiba 277-8583, Japan M.
Sandri INAF - OAS Bologna, via Piero Gobetti, 93/3, 40129 Bologna (Italy) M.
Sasaki University of Erlangen-Nürnberg G. Savini Optical Science
Laboratory, Physics and Astronomy Dept., University College London (UCL) D.
Scott University of British Columbia, Canada J. Seibert University of
California, San Diego, Department of Physics, San Diego, CA 92093-0424, USA
B. Sherwin DAMTP, Centre for Mathematical Sciences, Wilberforce Road,
Cambridge CB3 0WA, U.K. Kavli Institute for Cosmology Cambridge, Madingley
Road, Cambridge CB3 0HA, U.K. Lawrence Berkeley National Laboratory (LBNL),
Physics Division, Berkeley, CA 94720, USA K. Shinozaki Japan Aerospace
Exploration Agency (JAXA), Research and Development Directorate, Tsukuba,
Ibaraki 305-8505, Japan M. Shiraishi National Institute of Technology,
Kagawa College P. Shirron NASA Goddard Space Flight Center G. Signorelli
INFN Sezione di Pisa, Largo Bruno Pontecorvo 3, 56127 Pisa (Italy) G. Smecher
Three-Speed Logic, Inc. S. Stever Okayama University, Department of Physics,
Okayama 700-8530, Japan Kavli Institute for the Physics and Mathematics of
the Universe (Kavli IPMU, WPI), UTIAS, The University of Tokyo, Kashiwa, Chiba
277-8583, Japan R. Stompor AstroParticle and Cosmology (APC) - University
Paris Diderot, CNRS/IN2P3, CEA/Irfu, Obs de Paris, Sorbonne Paris Cité, France
H. Sugai Kavli Institute for the Physics and Mathematics of the Universe
(Kavli IPMU, WPI), UTIAS, The University of Tokyo, Kashiwa, Chiba 277-8583,
Japan S. Sugiyama Saitama University, Saitama 338-8570, Japan A. Suzuki
Lawrence Berkeley National Laboratory (LBNL), Physics Division, Berkeley, CA
94720, USA J. Suzuki High Energy Accelerator Research Organization (KEK),
Tsukuba, Ibaraki 305-0801, Japan T.L. Svalheim University of Oslo, Institute
of Theoretical Astrophysics, NO-0315 Oslo, Norway E. Switzer NASA Goddard
Space Flight Center R. Takaku Japan Aerospace Exploration Agency (JAXA),
Institute of Space and Astronautical Science (ISAS), Sagamihara, Kanagawa
252-5210, Japan The University of Tokyo, Department of Physics, Tokyo
113-0033, Japan H. Takakura The University of Tokyo, Department of
Astronomy, Tokyo 113-0033, Japan Japan Aerospace Exploration Agency (JAXA),
Institute of Space and Astronautical Science (ISAS), Sagamihara, Kanagawa
252-5210, Japan S. Takakura Kavli Institute for the Physics and Mathematics
of the Universe (Kavli IPMU, WPI), UTIAS, The University of Tokyo, Kashiwa,
Chiba 277-8583, Japan Y. Takase Okayama University, Department of Physics,
Okayama 700-8530, Japan Y. Takeda Japan Aerospace Exploration Agency (JAXA),
Institute of Space and Astronautical Science (ISAS), Sagamihara, Kanagawa
252-5210, Japan A. Tartari INFN Sezione di Pisa, Largo Bruno Pontecorvo 3,
56127 Pisa (Italy) E. Taylor University of California, Berkeley, Department
of Physics, Berkeley, CA 94720, USA Y. Terao The University of Tokyo, Tokyo
113-0033, Japan H. Thommesen University of Oslo, Institute of Theoretical
Astrophysics, NO-0315 Oslo, Norway K.L. Thompson SLAC National Accelerator
Laboratory, Kavli Institute for Particle Astrophysics and Cosmology (KIPAC),
Menlo Park, CA 94025, USA Stanford University, Department of Physics, CA
94305-4060, USA B. Thorne University of Oxford T. Toda Okayama University,
Department of Physics, Okayama 700-8530, Japan M. Tomasi Dipartimento di
Fisica, Università degli Studi di Milano, INAF-IASF Milano, and Sezione INFN
Milano M. Tominaga The University of Tokyo, Department of Astronomy, Tokyo
113-0033, Japan Japan Aerospace Exploration Agency (JAXA), Institute of Space
and Astronautical Science (ISAS), Sagamihara, Kanagawa 252-5210, Japan N.
Trappe National University of Ireland Maynooth M. Tristram Université
Paris-Saclay, CNRS/IN2P3, IJCLab, 91405 Orsay, France M. Tsuji National
Institute of Technology, Kagawa College M. Tsujimoto Japan Aerospace
Exploration Agency (JAXA), Institute of Space and Astronautical Science
(ISAS), Sagamihara, Kanagawa 252-5210, Japan C. Tucker Cardiff University,
School of Physics and Astronomy, Cardiff CF10 3XQ, UK J. Ullom National
Institute of Standards and Technology (NIST), Boulder, Colorado 80305, USA G.
Vermeulen Néel Institute, CNRS P. Vielva Instituto de Fisica de Cantabria
(IFCA, CSIC-UC), Avenida los Castros SN, 39005, Santander, Spain F. Villa
INAF - OAS Bologna, via Piero Gobetti, 93/3, 40129 Bologna (Italy) M. Vissers
National Institute of Standards and Technology (NIST), Boulder, Colorado
80305, USA N. Vittorio Dipartimento di Fisica, Università di Roma “Tor
Vergata”, and Sezione INFN Roma2 I. Wehus University of Oslo, Institute of
Theoretical Astrophysics, NO-0315 Oslo, Norway J. Weller Max-Planck-Institut
for Astrophysics, D-85741 Garching, Germany B. Westbrook University of
California, Berkeley, Department of Physics, Berkeley, CA 94720, USA J. Wilms
University of Erlangen-Nürnberg B. Winter Optical Science Laboratory,
Physics and Astronomy Dept., University College London (UCL) Mullard Space
Science Laboratory, University College London, London E.J. Wollack NASA
Goddard Space Flight Center N.Y. Yamasaki Japan Aerospace Exploration Agency
(JAXA), Institute of Space and Astronautical Science (ISAS), Sagamihara,
Kanagawa 252-5210, Japan T. Yoshida Japan Aerospace Exploration Agency
(JAXA), Institute of Space and Astronautical Science (ISAS), Sagamihara,
Kanagawa 252-5210, Japan J. Yumoto The University of Tokyo, Tokyo 113-0033,
Japan M. Zannoni University of Milano Bicocca, Physics Department, p.zza
della Scienza, 3, 20126 Milan Italy A. Zonca San Diego Supercomputer Center,
University of California, San Diego, La Jolla, California, USA
###### Abstract
LiteBIRD has been selected as JAXA’s strategic large mission in the 2020s, to
observe the cosmic microwave background (CMB) $B$-mode polarization over the
full sky at large angular scales. The challenges of LiteBIRD are the wide
field-of-view (FoV) and broadband capabilities of millimeter-wave polarization
measurements, which are derived from the system requirements. The possible
paths of stray light increase with a wider FoV, and the far-sidelobe knowledge of $-56$ dB is a challenging optical requirement. A crossed-Dragone configuration was chosen for the low frequency telescope (LFT: 34–161 GHz),
one of LiteBIRD’s onboard telescopes. It has a wide field-of-view
($18^{\circ}\times 9^{\circ}$) with an aperture of 400 mm in diameter,
corresponding to an angular resolution of about 30 arcminutes around 100 GHz.
The focal ratio of f/3.0 and the $90^{\circ}$ crossing angle of the optical axes were chosen after an extensive study of the stray light. The primary and secondary
reflectors have rectangular shapes with serrations to reduce the diffraction
pattern from the edges of the mirrors. The reflectors and structure are made of aluminum so that they contract proportionally from room temperature down to the operating temperature of $5\,$K. A 1/4-scale model of the LFT has been developed to
validate the wide field-of-view design and to demonstrate the reduced far
sidelobes. A polarization modulation unit (PMU), realized with a half-wave plate (HWP), is placed in front of the aperture stop, the entrance pupil of
this system. A large focal plane with approximately 1000 AlMn TES detectors
and frequency-multiplexing SQUID amplifiers is cooled to 100 mK. The lenses and sinuous antennas provide broadband capability. Performance specifications of the
LFT and an outline of the proposed verification plan are presented.
###### keywords:
Cosmic microwave background, space program, millimeter-wave polarization,
cryogenic telescope
## 1 INTRODUCTION
LiteBIRD, the Lite (Light) satellite for the study of $B$-mode polarization
and Inflation from cosmic background Radiation Detection, observes the cosmic
microwave background (CMB) polarization over the full sky at large angular
scales [1, 2, 3]. Cosmological inflation predicts primordial gravitational
waves, which would imprint large-scale curl ($B$-mode) patterns on the CMB polarization map [4, 5, 6, 7]. Measurements of the CMB $B$-mode signal are regarded as the best probe for detecting primordial gravitational waves and measuring the energy scale of inflation. The scientific objective of LiteBIRD is to test
major inflationary models [8]. The power of the $B$-modes is proportional to
the tensor-to-scalar ratio, $r$. The current upper limit is $r<0.044$ [9]. The mission goal of LiteBIRD is to measure $r$ with a precision
of $\delta r<0.001$, which provides a crucial test of cosmic inflation. The
required angular coverage is $2<\ell<200$, where $\ell$ is the multipole
moment.
LiteBIRD has been selected as JAXA’s strategic large mission for launch in the late 2020s. It will be launched on an H3 vehicle for three years of observations at the Lagrangian point (L2) of the Earth-Sun system. It is a spinning satellite with a precession angle ($\alpha$) of $45^{\circ}$ and a spin angle ($\beta$) of $50^{\circ}$, with a spin rate of 0.05 rpm and a precession period of 180 minutes; these values are optimized for the crossing angles and revisits of previously scanned regions.
The concept design has been studied by researchers from Japan, U.S., Canada,
and Europe since September 2016.
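To make this scan geometry concrete, the following minimal sketch (our illustration, not taken from the mission documents; only the parameter values are those quoted above) propagates the boresight direction and confirms that it sweeps between $|\alpha-\beta|=5^{\circ}$ and $\alpha+\beta=95^{\circ}$ from the anti-Sun axis:

```python
import numpy as np

# Illustrative sketch of the scan geometry quoted above (not from the paper);
# z is taken as the anti-Sun direction.
alpha = np.deg2rad(45.0)      # precession angle
beta = np.deg2rad(50.0)       # spin angle
spin_rate = 0.05 / 60.0       # spin rate in rev/s (0.05 rpm)
prec_period = 180.0 * 60.0    # precession period in s (180 min)

def boresight(t):
    """Boresight unit vector at time t (seconds)."""
    phi = 2.0 * np.pi * t / prec_period   # precession phase about z
    psi = 2.0 * np.pi * spin_rate * t     # spin phase about the spin axis
    # spin axis, tilted by alpha from z and precessing about z
    s = np.array([np.sin(alpha) * np.cos(phi),
                  np.sin(alpha) * np.sin(phi),
                  np.cos(alpha)])
    # orthonormal pair perpendicular to the spin axis
    e1 = np.array([-np.sin(phi), np.cos(phi), 0.0])
    e2 = np.cross(s, e1)
    # boresight, tilted by beta from the spin axis and spinning about it
    return np.cos(beta) * s + np.sin(beta) * (np.cos(psi) * e1 + np.sin(psi) * e2)

t = np.linspace(0.0, prec_period, 20001)
ang = np.degrees(np.arccos([boresight(ti)[2] for ti in t]))
print(ang.min(), ang.max())   # ~5 and ~95 deg: alpha -/+ beta from the anti-Sun axis
```

Since the covered annulus ($5^{\circ}$–$95^{\circ}$) contains the ecliptic poles, which lie $90^{\circ}$ from the anti-Sun direction, the full sky is swept as the anti-Sun direction revolves over a year.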
LiteBIRD observes millimeter waves from 34 GHz to 448 GHz with two
instruments, LFT and MHFT [10, 11]. Both instruments cover the same relative bandwidth, with a minimum-to-maximum frequency ratio of about 1:5. LFT will explore synchrotron and CMB
emission, while MHFT covers CMB emission and will also extend to higher
frequencies to explore the dust contribution. The bands in common between the
two telescopes, i.e. 89–161 GHz, allow reduction of systematics associated
with the telescopes, and add redundancy. A transmissive half-wave plate (HWP)
for polarization modulation has a limited bandwidth, and so LiteBIRD has two
instruments to cover the frequency bands. Both instruments are operated at a cryogenic temperature of $5\,$K to reduce the photon noise. The focal plane
design is based on multi-chroic TES detectors at 100 mK operation [12, 13].
The cryogenic chain of LiteBIRD is described by Hasebe et al. [14] and Duval et al. [15].
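As a quick arithmetic check of the quoted 1:5 ratio (our sketch; the MHFT lower edge of 89 GHz is inferred from the overlap bands mentioned above):

```python
# Minimum-to-maximum frequency ratios implied by the quoted coverages.
print(161.0 / 34.0)   # ~4.7 for LFT (34-161 GHz), roughly 1:5
print(448.0 / 89.0)   # ~5.0 for MHFT, if its coverage starts at the 89 GHz overlap
```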
Challenges for LiteBIRD are wide field-of-view (FoV) and broadband
capabilities of millimeter-wave polarization measurements, which are derived
from the sensitivity specifications. The wide FoV corresponds to a large focal-plane area; a detector pixel has a different spill-over or edge taper depending on its position on the focal plane. The possible paths of stray light increase with a wider FoV. A stable system is also required to perform the all-sky survey.
LiteBIRD is currently in its conceptual study phase. It is important to
define preliminary design specifications in order to make progress on the
system design. The derivation of the detailed requirements and the detailed
design study are moving in parallel, and affect each other iteratively. In
this paper we introduce a list of design specifications in this phase. Based
on further simulation-based studies of the error budget allocation over the
entire system, the numbers we list for the design specifications may change.
## 2 Overview of LFT
LFT has been designed to meet the specifications described in the next section. This section gives a brief overview of LFT before the design details. LFT is a wide field-of-view telescope designed to observe the CMB and synchrotron radiation in the frequency range 34–161 GHz, as shown in Figure 1. The aperture diameter is 400 mm, and the angular resolution is 24–71 arcminutes. LFT is operated at a cryogenic temperature of 5 K to reduce the optical loading and is surrounded by radiators called V-grooves. The thermal design of LiteBIRD is described in Hasebe et al. [14]. LFT has a crossed-Dragone antenna made of aluminum. A frame structure at $5\,$K supports all components: the PMU (polarization modulation unit), the focal plane, the primary and secondary reflectors, and the absorbers. An earlier design [2] has been updated.
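As a rough plausibility check (our sketch, not a formula from the paper), these resolutions are consistent with a diffraction-limited beam from the 400 mm aperture, ${\rm FWHM}\approx k\lambda/D$ with $k$ of order unity depending on the edge taper:

```python
import numpy as np

c = 2.998e8   # speed of light, m/s
D = 0.400     # aperture diameter, m (from the text)

def fwhm_arcmin(freq_ghz, k=1.22):
    """Rough diffraction-limited FWHM; k depends on the aperture illumination."""
    lam = c / (freq_ghz * 1e9)
    return np.degrees(k * lam / D) * 60.0

print(fwhm_arcmin(100.0))  # ~31 arcmin, close to the ~30 arcmin quoted at 100 GHz
```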
A PMU with a transmissive HWP (half-wave plate) [16] is mounted in front of the aperture stop. The LFT focal plane is based on multi-chroic TES detectors operated at 100 mK [12, 13]. The telescope structure provides the interfaces to the LFT PMU and the LFT focal plane.
Figure 1: Overview of low frequency telescope (LFT). MHFT and side panels are
not shown for clarity.
## 3 LFT design specifications
The performance specifications for LFT are as follows.
### 3.1 Frequency bands and noise
Frequency coverage
34–161 GHz
Band sensitivities
LFT shall have the array sensitivities as tabulated in Table 1, which shall
satisfy the map-level sensitivity specifications. The sensitivity is limited by the number of pixels, which is closely related to the field of view of the telescope. The noise of each detector pixel is limited by the optical loading.
Table 1: Performance specifications of LFT. The bandwidth (BW) is (High $-$ Low)/Center frequency. For the 68, 78, and 89 GHz bands, which use both 16 mm and 32 mm pixels, the polarization sensitivity is that of the combined arrays.

Center freq. [GHz] | BW | Beam FWHM [arcmin] | Pixel dia. [mm] | No. det. | NET$_{\rm array}$ [$\mu$K$\sqrt{\rm s}$] | Pol. sensitivity [$\mu$K arcmin]
---|---|---|---|---|---|---
40 | 0.30 | 70.5 | 32 | 48 | 18.5 | 37.4
50 | 0.30 | 58.5 | 32 | 24 | 16.5 | 33.5
60 | 0.23 | 51.1 | 32 | 48 | 10.5 | 21.3
68 | 0.23 | 41.6 | 16 | 144 | 9.8 | 16.9
68 | 0.23 | 47.1 | 32 | 24 | 15.7 |
78 | 0.23 | 36.9 | 16 | 144 | 7.7 | 12.1
78 | 0.23 | 43.8 | 32 | 48 | 9.5 |
89 | 0.23 | 33.0 | 16 | 144 | 6.1 | 11.3
89 | 0.23 | 41.5 | 32 | 24 | 14.2 |
100 | 0.23 | 30.2 | 16 | 144 | 5.1 | 6.6
119 | 0.30 | 26.3 | 16 | 144 | 3.8 | 4.6
140 | 0.30 | 23.7 | 16 | 144 | 3.6 | 4.8
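Inverting the caption's bandwidth definition, ${\rm low}=f_{0}(1-{\rm BW}/2)$ and ${\rm high}=f_{0}(1+{\rm BW}/2)$, gives the band edges; a small consistency sketch (ours) shows that the lowest and highest edges recover the quoted 34–161 GHz coverage:

```python
# Band edges from the center frequency f0 [GHz] and fractional bandwidth bw.
bands = [(40, 0.30), (50, 0.30), (60, 0.23), (68, 0.23), (78, 0.23),
         (89, 0.23), (100, 0.23), (119, 0.30), (140, 0.30)]
for f0, bw in bands:
    low, high = f0 * (1 - bw / 2), f0 * (1 + bw / 2)
    print(f"{f0} GHz band: {low:.1f}-{high:.1f} GHz")
# Lowest edge: 40 GHz band -> 34.0 GHz; highest edge: 140 GHz band -> 161.0 GHz,
# matching the 34-161 GHz LFT coverage.
```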
Band shape
The frequency bandpasses are defined by a combination of superconducting band-pass filters on the wafer [12] and quasi-optical metal-mesh filters [17] in front of the focal plane that reject higher frequencies. Frequencies below the defined band (red leak) might contribute to sidelobes through the distorted beam pattern; the red leak is rejected only by the superconducting band-pass filter on the wafer [12]. Frequencies above the defined band (blue leak) might contribute to noise through far-infrared radiation; the blue leak is rejected by both the on-chip filter and the quasi-optical metal-mesh filter in front of the focal plane.
1/f noise
The knee frequency of the post-demodulation $1/f$ noise should be below $0.1\,$mHz (assuming a $0.05\,$rpm spin rate, precession angle $\alpha=45^{\circ}$, and spin angle $\beta=50^{\circ}$). The knee frequency of the raw $1/f$ noise should be well below 3.1 Hz ($46\,$rpm$\,\times 4$), where 46 rpm is the LFT HWP rotation rate. In the HWP failure mode, the pair-differenced $f_{\rm knee}$ is $20\,$mHz for individual detectors and $100\,$mHz for the common mode.
Data loss and operational duty cycle
The operating life of the instruments should be long enough to perform
observations for 3 years. The system shall have an operational duty cycle of
85 % for science observations, including all downtime for cryogenic cycling,
detector operation preparation, and data transfer. Data loss due to cosmic ray
glitches should be less than 5 %.
### 3.2 Beam
Angular resolution
The angular resolution of each detector response should be sufficient to cover the required angular scales of $2<\ell<200$, where $\ell$ is the spherical harmonic index; it shall have an FWHM of $80^{\prime}$ or better. The angular resolution should be better than $30^{\prime}$ at $100\,$GHz for measuring the recombination bump, the prominent structure at degree scales in the $B$-mode power spectrum coming from primordial gravitational waves. It shall also be better than $80^{\prime}$ at $40\,$GHz for dealing with point sources.
Pointing offset knowledge
The pointing offset knowledge should be less than $2.1^{\prime}$[18, 19].
Far sidelobe knowledge
The extended component of the far sidelobes should be known at a precision level of $-56$ dB [20, 21]. Radiation from the Galactic plane entering through the far sidelobes contaminates the signal and therefore the inferred power spectrum. The far sidelobe is currently defined as the angular domain beyond 0.2 rad from the beam center.
Small-scale features of sidelobes
The small-scale features of the far sidelobes should be known at a precision level of $-33\,$dB, defined more precisely as (intensity/$0.05\,\%$)$\,\times\,$(diameter/$30^{\prime}$)$^{2}$, where the diameter is the FWHM of the small-scale features due to possible optical ghosts or multiple optical reflections.
Near sidelobe knowledge
The beam pattern of the near sidelobes (out to $10^{\circ}$ from the co-polar beam peak) should be known at a precision level of $-30\,$dB. It should also be confirmed to be consistent with the designed pattern at a precision level of 10 % or better.
Beam stability knowledge
The beam-shape stability over time should be better than 0.46 % (synchronous) / $2\,$% (random) for the beam width; better than 1.7′′ (synchronous) / 16′′ (random) for the pointing; better than $0.086\,$% (synchronous) / $2.7\,$% (random) for the third flattening (often called ellipticity); and better than $-46$ dB for sidelobes at several to $30\,$degrees [19]. The time scale of the synchronous beam fluctuation is $163\,$msec for LFT, during which the HWP rotates by $45^{\circ}$, while “random” refers to a component that fluctuates randomly over time. These specifications correspond to differential beam shapes and are related, in a broad sense, to the optical quality of the instrument. Note that for a perfect polarization modulator, differential beam effects are negligibly small; the beam stability specifications are therefore tied to imperfections of the polarization modulation system.
### 3.3 Polarization
Knowledge of polarization efficiency
The polarization efficiency knowledge should be better than $0.2\,$%.
Absolute polarization angle knowledge (monopole)
The absolute polarization angle knowledge on the stable monopole component
should be better than $2.7^{\prime}$[18, 19].
Polarization modulation
The modulation frequency should be $>4\times 0.76\,$Hz, which ensures at least four modulation cycles during a beam-size excursion of 30′. The modulation frequency should be $<4\times 4.5\,$Hz, a limit set by the bolometer time constant.
Modulation synchronous instrumental polarization knowledge
The $4f$ synchronous instrumental polarization knowledge should be better than
$0.0063\,$%.
### 3.4 Gain
Gain variation in time
The gain variation in time for a single detector should be better than $10\,$%, assuming that the gain parameter is updated every 1200 sec (one 0.05 rpm spin period). The effective differential gain should be smaller than $0.0069\,$% (synchronous, i.e., over the $163\,$msec in which the LFT HWP rotates by $45^{\circ}$) / $0.3\,$% (random).
### 3.5 Other specifications
There are other specifications. According to the system design [2], heat
dissipation of LFT is limited to 4 mW, which includes the PMU and temperature
control of the LFT optical components. The minumum eigen-frequency for LFT is
assumed to be 100 Hz and 50 Hz for axial and lateral axes, respectively;
however, this might be optimized by a combined design with the cryo-structure
of the payload module (PLM). LFT is designed to withstand quasi-static loads
of 20 g for the axial and lateral axes. EMC/EMI specifications have been
studied with simulations[22].
## 4 Optical design
### 4.1 Antenna design
After trade-off studies of various optical configurations, among crossed-Dragone, offset-Gregorian [23], and open-Dragone [24, 25] designs, we concluded that the crossed-Dragone antenna is the best option for LFT because of its wide field of view and low cross polarization. Multiple reflections in crossed-Dragone antennas have been described earlier [26].
Table 2: Optical specifications of LFT antenna.
Aperture diameter | 400 mm
---|---
Field of view | $18^{\circ}\times 9^{\circ}$
Strehl ratio | $>0.95$ at $161\,$GHz
Focal plane telecentricity | $<1.0^{\circ}$
Focal ratio | $2.9<$ F/# $<3.1$
PSF flattening | $<5\,$%
Cross polarization | $<-30$ dB
Rotation of polarization angle across FoV | $<\pm 1.5^{\circ}$
The crossed-Dragone antenna of LFT has been designed with anamorphic aspherical surfaces [27] to achieve the specifications listed in Table 2. The anamorphic aspherical surface is described by the following equation for both the primary mirror (PM) and the secondary mirror (SM) [27]:
$z_{m}=\frac{C_{m,x}x_{m}^{2}+C_{m,y}y_{m}^{2}}{1+\sqrt{1-(1+k_{m,x})C^{2}_{m,x}x_{m}^{2}-(1+k_{m,y})C^{2}_{m,y}y_{m}^{2}}}+\sum_{i=2}^{5}A_{m,i}\left[\left(1-B_{m,i}\right)x_{m}^{2}+\left(1+B_{m,i}\right)y_{m}^{2}\right]^{i},$
(1)
where $m=$ PM, SM; $C_{m,x}$ and $C_{m,y}$ are the curvatures in the $x$ and $y$ directions; $k_{m,x}$ and $k_{m,y}$ are the conic constants in the $x$ and $y$ directions; and $A_{m,i}$ and $B_{m,i}$ are aspherical coefficients.
Table 3: Optical parameters of the anamorphic aspherical surfaces [27].
| $C_{m,x}$/mm$^{-1}$ | $C_{m,y}$/mm$^{-1}$ | $k_{m,x}$ | $k_{m,y}$ | $y_{m,0}$/mm | $z_{m,0}$/mm | $\theta_{m}$/deg. |
---|---|---|---|---|---|---|---|---
PM | $-1.60053\times 10^{-4}$ | $-4.71355\times 10^{-4}$ | 15.857906 | $-5.174224$ | 0 | 696.344 | 0 |
SM | 4.05234$\times 10^{-4}$ | 5.04062$\times 10^{-4}$ | $-4.162644$ | $-1.282787$ | $-163.771$ | 346.223 | 42.45664 |
FP | | | | | 550.924 | 343.223 | 90 |
| $A_{m,2}$ | $B_{m,2}$ | $A_{m,3}$ | $B_{m,3}$ | $A_{m,4}$ | $B_{m,4}$ | $A_{m,5}$ | $B_{m,5}$
PM | $-5.28\times 10^{-12}$ | $-3.31\times 10^{-1}$ | 1.63$\times 10^{-18}$ | $-0.716$ | $-2.50\times 10^{-24}$ | $-0.973$ | $-2.17\times 10^{-34}$ | 0.0929
SM | $-3.10\times 10^{-16}$ | 59.834 | 7.42$\times 10^{-18}$ | $-0.375$ | $-3.45\times 10^{-23}$ | -1.157 | $-3.89\times 10^{-31}$ | $-0.349$
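For readers who want to reproduce the mirror shapes, the sketch below evaluates the sag of Eq. (1) with the primary-mirror parameters of Table 3. It is a minimal illustration under the stated parameters, not a validated optical prescription.

```python
import numpy as np

# Minimal sketch of Eq. (1): sag of an anamorphic aspherical surface.
# The parameters are the primary-mirror (PM) values from Table 3.

def sag(x, y, Cx, Cy, kx, ky, A, B):
    """Anamorphic aspheric sag z(x, y) of Eq. (1). A and B hold the
    coefficients A_i, B_i for i = 2..5 (list index 0 -> i = 2)."""
    conic = (Cx * x**2 + Cy * y**2) / (
        1.0 + np.sqrt(1.0 - (1.0 + kx) * Cx**2 * x**2
                          - (1.0 + ky) * Cy**2 * y**2))
    poly = sum(A[i] * ((1.0 - B[i]) * x**2 + (1.0 + B[i]) * y**2) ** (i + 2)
               for i in range(4))
    return conic + poly

# Primary mirror (PM) parameters from Table 3, lengths in mm
Cx, Cy = -1.60053e-4, -4.71355e-4
kx, ky = 15.857906, -5.174224
A = [-5.28e-12, 1.63e-18, -2.50e-24, -2.17e-34]
B = [-3.31e-1, -0.716, -0.973, 0.0929]

print(sag(100.0, 100.0, Cx, Cy, kx, ky, A, B))  # sag [mm] at (x, y) = (100, 100) mm
```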
A ray diagram of LFT is shown in Figure 2; the telescope has an aperture diameter of 400 mm and an FoV of $18^{\circ}\times 9^{\circ}$. The aperture diameter is derived from the requirement of an angular resolution of 80′ at 40 GHz. The FoV corresponds to a focal plane area of $420\,$mm$\,\times 210\,$mm, which is roughly proportional to the sensitivity. This meets the sensitivity requirement in Table 1.
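A rough diffraction estimate shows how the 400 mm aperture follows from the 80′ requirement at 40 GHz. The sketch below uses the uniform-illumination rule of thumb FWHM $\approx 1.22\lambda/D$; the beams in Table 1 differ somewhat because of the edge taper and pixel coupling.

```python
import math

# Rough diffraction check of the 400 mm aperture (illustrative only):
# FWHM ~ 1.22 * lambda / D for a uniformly illuminated circular aperture.

c = 299_792_458.0          # speed of light in m/s
D = 0.400                  # aperture diameter in m

for f_ghz in (40.0, 100.0, 161.0):
    lam = c / (f_ghz * 1e9)
    fwhm_rad = 1.22 * lam / D
    fwhm_arcmin = math.degrees(fwhm_rad) * 60.0
    print(f"{f_ghz:5.0f} GHz: FWHM ~ {fwhm_arcmin:4.1f} arcmin")
    # 40 GHz gives ~79 arcmin, close to the 80' requirement quoted above
```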
Optical rays from the focal plane are designed to have a $640\,$mm diameter at the aperture, to keep sufficient edge taper at both the primary and secondary reflectors. The Strehl ratio at $161\,$GHz is larger than 0.95, as shown in Figure 3. The rotation of the polarization angle for the $y$-axis polarization across the field of view is also shown in Figure 3; it is estimated to be $<\pm 1.5^{\circ}$ according to a ray-tracing simulation with a finite resistivity. The derived optical parameters are tabulated in Table 3.
The allocated volumes of LFT and MHFT are shown in Figure 4. The field of view of LFT is maximized under the volume constraint. Crossed-Dragone antennas with f/2.5, 3.0, and 3.5 were compared; the volume is roughly proportional to the f-number. Under the volume constraint, smaller f-numbers are preferable, but they increase the stray light. We chose f/3.0 for LFT, considering the focal-plane dimensions and feed parameters.
We updated the design of the crossed-Dragone antenna reported in [27]. The f/3.0 focal ratio and the $90^{\circ}$ crossing angle of the optical axes were chosen after an extensive study of stray light (right panel of Figure 2).
Figure 5 shows the stray light for several crossing angles. At a crossing angle of $110^{\circ}$, the direct path from the feed to the sky is small, but there are many triple-reflection paths; at $82^{\circ}$, there are large direct paths. The $90^{\circ}$ angle is thus a compromise between triple reflections and direct paths.
The detector hood and the front hood, whose height is 500 mm, reduce stray light into the far sidelobes, as shown in Figure 2. The $y$-extent of the focal plane in the focal plane coordinate system (Figure 6) is limited by multiple reflections and stray light; the $x$-extent is limited by the $5\,$K allocated area of LiteBIRD, as shown in Figure 4.
The primary and secondary mirrors have rectangular shapes of 835 $\times$ 795 mm and 872 $\times$ 739 mm, respectively, with serrations to reduce diffraction patterns from the mirror edges. The mirror sizes were reduced from the previous design [2]: the $2\,$K cold aperture stop was removed because of cooling-capacity limitations, and the distance between the aperture and the main reflector was reduced accordingly. The optical design is based on the feed parameters tabulated in Table 4.
Figure 2: (Left) Ray tracing diagram of Low Frequency Telescope (LFT). Blue,
Red, and Green lines show $\theta_{y}=+4.5^{\circ}$, $\theta_{y}=0^{\circ}$,
$\theta_{y}=-4.5^{\circ}$, respectively. (Right) Possible stray light paths of
LFT. Red lines show direct paths. Blue and green lines show triple
reflections.
Figure 3: (Left) Map of Strehl ratio of LFT antenna at $161\,$GHz. (Right)
Rotation of polarization angle of $y$-axis polarization across the field of
view in units of degrees.
Figure 4: (Left) Usable volume of LFT and MHFT and the PLM coordinate system. V-grooves are also shown; the innermost V-groove is at $30\,$K. The top of the truss is the $5\,$K structural interface for LFT and MHFT. (Right) Allocated area of LFT and MHFT in the PLM coordinate system.
Figure 5: Stray light as a function of the crossing angle of the optical axes of the crossed-Dragone configuration.
Table 4: Frequency bands and feed parameters. The bandwidth (BW) is (High $-$ Low)/Center frequency. The number (No.) of detectors is twice the number of pixels because two orthogonal polarizations are detected.
Type | Center freq. | BW | Low | High | Pixel dia. | Beam waist | No. pix | No. det.
---|---|---|---|---|---|---|---|---
| [GHz] | | [GHz] | [GHz] | [mm] | radius [mm] | |
1 | 40 | 0.30 | 34 | 46 | 32 | 11.64 | 24 | 48
| 60 | 0.23 | 53 | 67 | 32 | 11.64 | 24 | 48
| 78 | 0.23 | 69 | 87 | 32 | 11.64 | 24 | 48
2 | 50 | 0.30 | 43 | 58 | 32 | 11.64 | 12 | 24
| 68 | 0.23 | 60 | 76 | 32 | 11.64 | 12 | 24
| 89 | 0.23 | 79 | 99 | 32 | 11.64 | 12 | 24
3 | 68 | 0.23 | 60 | 76 | 16 | 5.82 | 72 | 144
| 89 | 0.23 | 79 | 99 | 16 | 5.82 | 72 | 144
| 119 | 0.30 | 101 | 137 | 16 | 5.82 | 72 | 144
4 | 78 | 0.23 | 69 | 87 | 16 | 5.82 | 72 | 144
| 100 | 0.23 | 89 | 112 | 16 | 5.82 | 72 | 144
| 140 | 0.30 | 119 | 161 | 16 | 5.82 | 72 | 144
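The fractional-bandwidth convention of Tables 1 and 4 can be checked directly. In the sketch below, the band edges are the integer-GHz values of Table 4, so the results match the tabulated BW to within rounding (~0.01).

```python
# Consistency check of the fractional-bandwidth convention of Tables 1
# and 4: BW = (High - Low) / Center. Band edges are rounded to integer
# GHz in Table 4, so some bands differ from the tabulated BW by ~0.01.

bands = {  # center [GHz]: (low, high) from Table 4
    40: (34, 46), 50: (43, 58), 60: (53, 67), 68: (60, 76),
    78: (69, 87), 89: (79, 99), 100: (89, 112), 119: (101, 137),
    140: (119, 161),
}

for center, (low, high) in bands.items():
    print(f"{center:3d} GHz: BW = {(high - low) / center:.3f}")
```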
Figure 6: LFT focal plane pixel arrangement. There are eight square (10 cm $\times$ 10 cm) tiles. Red, yellow, green, and blue pixels correspond to Types 1, 2, 3, and 4 of Table 4, respectively. The LFT focal plane coordinate system is shown with black arrows. The scales are in units of millimeters.
### 4.2 Optical simulation
Physical optics simulations of LFT with GRASP10 [28] have been performed following the approach of Imada et al. [29]. At lower frequencies it is relatively difficult to meet the far-sidelobe requirement because of diffraction effects. Figure 7 shows the impact of the feed sidelobes.
The LFT antenna pattern assuming a Gaussian feed is shown in the left panels of Figure 7, while the pattern for a feed simulated with HFSS [30] is shown in the right panels. The upper panels show the antenna pattern of a pixel near the primary reflector, and the lower panels show that of a pixel near the aperture. The direct path from the feed sidelobe clearly contributes to the far sidelobe of LFT at a level of $-60\,$dB. The feed sidelobe of a pixel near the aperture contributes a point-like sidelobe through triple reflections (feed $\rightarrow$ primary $\rightarrow$ secondary $\rightarrow$ primary $\rightarrow$ sky; shown in green). Note that there are discrepancies at a level around $-20\,$dB between the feed sidelobes in the HFSS simulation and those in the room-temperature measurement of the sinuous/lens feed [31].
Figure 7: Optical simulation of the far-field beam pattern of LFT at 34 GHz. Gray shows the nominal beam pattern without stray light, red shows the direct path from the focal plane to the sky, green shows triple reflections (feed $-$ primary $-$ secondary $-$ primary $-$ sky), and blue shows triple reflections (feed $-$ secondary $-$ primary $-$ secondary $-$ sky). (Top, Left) A pixel near the primary reflector around ($x,y$) = ($-190\,$mm, $-87\,$mm) with a Gaussian feed. (Top, Right) The same pixel with an HFSS simulation of the sinuous antenna; the feed sidelobe contributes to the far sidelobe of LFT through the direct path (red). (Bottom, Left) A pixel near the aperture stop around ($x,y$) = ($-190\,$mm, $+87\,$mm) with a Gaussian feed. (Bottom, Right) The same pixel with an HFSS simulation of the sinuous antenna; the feed sidelobe contributes to the far sidelobe of LFT through triple reflections (feed $-$ primary $-$ secondary $-$ primary $-$ sky; green).
We have also simulated the antenna pattern at $30\,$GHz, as shown in Figure 8, because a bandpass filter cannot cut off sharply at a specific frequency (e.g., 34 GHz), which causes a red leak into the sidelobes. The feed here is polarized along the $x$ axis and located at ($x,y$) = ($-88\,$mm, $+44\,$mm) with a diameter of $24\,$mm; this differs from the current design, but the qualitative effects are the same. Several features originating from diffraction at the mirror edges are marked with circles in both panels. These features are at a higher level than the nominal diffracted point spread function (PSF).
Figure 8: Physical optics simulation at 30 GHz, which is out of band. (a) 2D map; the features from diffraction at the mirror edges can be seen to the right of the main lobe. (b) 1D cut.
The current simulations take into account the reflectors, the aperture stop, and the front baffle, all with perfect absorbers. The following items, which might generate additional sidelobes, will be considered in further studies:
* Actual absorbers have finite reflectance on the aperture stop, front hood, detector hood, frame, and panels. The absorbers covering the optical cavity and the focal plane are not ideal, and their reflectance depends on frequency as well as on the angle of incidence.
* There are multiple reflections (i.e., ghost effects) or multiple scattering among the HWP, the focal plane, the aperture stop, the quasi-optical low-pass filters, and the absorbers.
### 4.3 Other optical components
The aperture stop at 4.8 K, with an inner diameter of 400 mm, is made of a millimeter absorber, TK-RAM [32, 33], on an aluminum plate. It shapes the beam in a configuration with a relatively low edge taper of about $3\,$dB.
Millimeter absorbers that reduce reflections are attached to the inside surface of the $5\,$K frame, which acts as an optical cavity. Eccosorb AN72 and HR10 are candidate absorbers; however, they have large total mass loss (TML) and collected volatile condensable materials (CVCM). According to the NASA outgassing database [34], AN72 washed with ethanol shows acceptable TML and CVCM.
The front hood, as shown in Figure 9, is made of the millimeter absorber Eccosorb AN72 and an aluminum plate.
### 4.4 Thermal control
The temperature stability of the optical components of LFT is required to meet the single-detector specification of $f_{\rm knee}=20\,$mHz, which corresponds to a timescale of $50\,$seconds. The noise equivalent temperature (NET) of each detector is around $50\,\mu$K$/\sqrt{\rm Hz}$, so the noise integrates down to $\Delta T=7\,\mu$K over the $50\,$seconds. It is necessary to meet the following constraint:
$(\Delta T)^{2}\gg\sum_{o=1}^{N_{\rm o}}\left(\delta T_{o}\times\eta_{o}\times\epsilon_{o}\times({\rm optical\ efficiency})\right)^{2},$ (2)
where $N_{\rm o}$ is the number of optical components, $\delta T_{o}$ is the temperature stability of optical component $o$, $\eta_{o}$ is its optical load fraction, and $\epsilon_{o}$ is its emissivity. The optical efficiency of the feed is assumed to be 0.69, and the noise contribution of each optical component is assumed to be less than $2\,\mu$K. The derived specifications on the stability of the LFT optical components are shown in Table 5. These specifications give a rough estimate of the required temperature stability, $\delta T_{o}/T_{o}\sim 10^{-5}$ in the worst case; more accurate estimates are needed, because the optical load fraction ($\eta_{o}$) depends on the focal-plane position, the feed sidelobe, and the frequency, as described in Section 4.2 and Figure 7.
The temperatures of the aperture stop and the other optical components are planned to be stabilized with heaters to reduce the $1/f$ noise level.
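The numbers in this subsection can be tied together with a short sketch: integrating the quoted NET over the 50 second timescale reproduces $\Delta T=7\,\mu$K, and sharing it equally among the Table 5 components gives a per-component allocation close to the 2 μK assumed above (the text's value is slightly stricter, consistent with the "$\gg$" in Eq. (2)).

```python
import math

# Sketch of the stability budget behind Eq. (2), using the numbers given
# in this subsection: NET ~ 50 uK/sqrt(Hz) per detector and the 50 s
# timescale implied by the 20 mHz single-detector knee.

net_uk_rts = 50.0        # detector NET in uK/sqrt(Hz)
t_s = 50.0               # integration timescale in seconds
n_components = 8         # optical components listed in Table 5

delta_t = net_uk_rts / math.sqrt(t_s)              # ~7 uK integrated noise
equal_share = delta_t / math.sqrt(n_components)    # ~2.5 uK if shared equally

print(f"integrated detector noise Delta T ~ {delta_t:.1f} uK")
print(f"equal per-component share ~ {equal_share:.1f} uK "
      f"(the text assumes < 2 uK, slightly stricter)")
```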
Table 5: Specifications for the temperature stability of the LFT optical components on a 50 second timescale. The optical load fraction ($\eta_{o}$) is a typical value, because it depends on the focal-plane position, the feed sidelobe, and frequency. $\epsilon_{o}$ is the emissivity of the optical components.
Components | Temperature [K] | $\eta_{o}$ | $\epsilon_{o}$ | Stability
---|---|---|---|---
| min. | max. | | | [mK]
Front hood | 5 | 6 | 0.004 | 0.99 | 3
PMU/HWP | 4.5 | 20 | 0.63 | 0.01 | 0.5
PMU mount | 4.5 | 20 | 0.004 | 0.99 | 0.7
Around aperture stop | 4.5 | 4.8 | 0.2 | 0.99 | 0.02
$5\,$K frame | 4.5 | 5 | 0.1 | 0.99 | 0.03
LFT reflectors | 4.5 | 5 | 0.9 | 0.002 | 1.6
Detector hood | 1.8 | 2 | 0.08 | 0.99 | 0.04
Low-pass filter | 1.7 | 2 | 0.9 | 0.01 | 0.3
## 5 Structure design
The structural design of LFT is shown in Figure 9. The frame and reflectors of LFT are made of aluminum so that they contract similarly (within 0.4 %) from $300\,$K to $5\,$K [35]. Structural and thermal stability of the telescope is required for the all-sky survey of CMB polarization. Aluminum has good thermal conductance at $5\,$K and is mechanically stable. The frame has structural interfaces at $5\,$K with the PMU and with the focal plane, which is operated at 0.1 K. The fasteners between the reflectors and the frame are planned to be SUS (stainless steel) bolts; these generate local deformations over areas of several mm, which do not affect the global shape of the reflectors. The telescope is supported by aluminum trusses on the $5\,$K interface plate. The total mass of LFT, including the trusses, the PMU, and the focal plane, is estimated to be 200 kg.
Optical tolerance analysis leads to the alignment specifications of LFT (Table 6), which are derived from the allowed polarization angle variation. The gravitational deformation of LFT is estimated to be $\delta x=-14\,\mu$m, $\delta y=-23\,\mu$m, and $\delta z=22\,\mu$m, all reasonably small. We can therefore plan the ground verification and calibration without directional constraints from gravitational effects. According to the scaled model (see Section 8), the alignment can be achieved with careful design and assembly.
Table 6: Alignment specifications of LFT. All values are maxima.
Requirement | Primary (M1) | Secondary (M2) | Frame | Combined
---|---|---|---|---
Mechanical shape error | 15 $\mu$m r.m.s. | 15 $\mu$m r.m.s. | | 30 $\mu$m r.m.s.
Alignment dx | ± 0.1 mm | ± 0.1 mm | ± 0.2 mm | ± 0.4 mm
Alignment dy | ± 0.1 mm | ± 0.1 mm | ± 0.2 mm | ± 0.4 mm
Alignment dz | ± 0.2 mm | ± 0.2 mm | ± 0.2 mm | ± 0.6 mm
Tilt Rot-x | ± 0.5 arcmin | ± 0.5 arcmin | ± 0.6 arcmin | ± 1.6 arcmin
Tilt Rot-y | ± 0.4 arcmin | ± 0.4 arcmin | ± 0.2 arcmin | ± 1.0 arcmin
Tilt Rot-z | ± 0.1 arcmin | ± 0.1 arcmin | ± 0.2 arcmin | ± 0.4 arcmin
The surface roughness of the reflectors is designed to be 2–4$\,\mu$m in Ra on the scale of 10 mm, which scatters away infrared radiation, mainly from the Galactic plane. According to the Ruze formula, $\eta_{e}=\exp\left[-\left(\frac{4\pi\epsilon}{\lambda}\right)^{2}\right]$, infrared radiation above 5–10 THz (wavelengths of 30–60$\,\mu$m) can be scattered.
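The sketch below evaluates the Ruze formula for an assumed 3 μm surface error (the middle of the 2–4 μm range; the formula strictly takes an rms error, whereas the specification is given in Ra). In-band losses are negligible, while a large fraction of radiation above 5–10 THz is scattered, as stated.

```python
import math

# Ruze-formula estimate of how much radiation the specified roughness
# scatters; a 3 um error is assumed here (the spec is 2-4 um in Ra).

eps_um = 3.0                      # assumed surface error in um
c_um_thz = 299.79                 # lambda[um] = c_um_thz / f[THz]

for f_thz in (0.161, 1.0, 5.0, 10.0):
    lam_um = c_um_thz / f_thz
    eta = math.exp(-(4.0 * math.pi * eps_um / lam_um) ** 2)
    print(f"{f_thz:6.3f} THz: reflection efficiency {eta:.3f} "
          f"({1.0 - eta:.1%} scattered)")
```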
The telescope is tightly covered with aluminum panels and absorbers to reduce stray light from the inner surface of the $30\,$K V-groove (see Figure 1). The absorber, made of plastic and carbon, is adhered to a panel with epoxy, and the panel is fixed to the $5\,$K frame. The cryogenic contraction of the absorber and the epoxy will be designed carefully so as not to deform the frame.
Figure 9: (Left) Lateral view of structural design of LFT. The side panel is
covered with millimeter absorbers. (Right) Top view of LFT.
## 6 LFT Polarization modulation unit (PMU)
Figure 10: LFT Polarization Modulation Unit (PMU)[16]. The sapphire half-wave
plate is shown in blue.
A polarization modulation unit with a transmissive sapphire HWP has been developed for LiteBIRD (Figure 10) [36, 37, 38]; its progress is reported separately [16]. The PMU/HWP is placed in front of the aperture stop, the entrance pupil of $400\,$mm diameter. The HWP rotates continuously at $46\,$rpm = $0.77\,$Hz. The PMU uses superconducting magnets for levitation [36]. Eddy currents and magnetic hysteresis dissipate power and raise the temperature of the rotating HWP from $5\,$K to $20\,$K. The HWP rotation axis is tilted by $5^{\circ}$ with respect to the optical axis to mitigate multiple reflections, including optical ghosts, between the HWP and the focal plane.
We have derived the following interface specifications for the LFT PMU and focal plane from the LFT specifications (Section 3) and the system designs during the ISAS pre-phase A2 [3, 2].
1. The optical effects of the PMU over the 34–161 GHz observation band are minimized to meet the near- and far-sidelobe specifications of LFT.
2. The opaque $20\,$K parts of the PMU are designed to reduce the optical loading.
3. The mass of the PMU is $30\,$kg.
4. The heat loads to the $5\,$K stage, including the PMU wire harness, are less than $3\,$mW.
5. The AC magnetic field variation and the DC magnetic field are minimized to reduce their effects on the focal plane.
## 7 LFT Focal plane
Figure 11: (Left) LFT focal plane assembly. (Right) Structural interface
between the focal plane and LFT.
The LFT focal plane has been designed and developed with antenna-coupled TES detectors [12]. The lens and sinuous antenna provide broadband capability [31]. The focal plane, with AlMn TES, is cooled to 100 mK with ADRs [15]; the cold readout with SQUID amplifiers is also cooled to 100 mK. Cosmic-ray mitigation has been investigated extensively [39, 40]. Progress on the LFT focal plane is reported separately [13].
The focal plane consists of eight square (10 cm $\times$ 10 cm) tiles, as shown in Figure 11, and is shielded with a hood at 2 K to reduce stray light (see Figure 2). A quasi-optical metal-mesh low-pass filter [17] is placed in front of the square modules to reduce thermal loads from the far-infrared radiation of the Galactic plane and from the $20\,$K radiation of the PMU. A magnetic shield covers the focal plane, except for the optical input, to reduce magnetic variation from the PMU. The structural interface at $5\,$K between the focal plane and LFT is designed as shown in Figure 11.
The following interface specifications for the focal plane flow down from the LFT specifications and system designs.
1. The optical efficiency of each detector is higher than 0.69.
2. The return loss of the feeds at in-band frequencies is better than $-10\,$dB.
3. The main beam width of the feeds is consistent with the Gaussian beam radius defined in Table 4 within 5 %.
4. The sidelobes of each detector are less than $-17\,$dB. Figure 7 shows the effects of the feed sidelobes.
5. The optical cross talk among pixels is less than 0.03 %.
6. The lower band edges at 34 GHz and 60 GHz (of the 40 GHz and 68 GHz bands, respectively) have sharper cut-offs to reduce sidelobe contamination from lower frequencies. Figure 8 shows the beam pattern at 30 GHz.
7. The polarization efficiency of the feeds should be higher than 98 %, corresponding to a cross polarization of $<-17$ dB.
8. The polarization angle of each detector changes by less than $\pm 5^{\circ}$ across the frequency band.
9. The detector noise is essentially the photon-noise limit of the 2.7 K cosmic microwave background. The NET values are tabulated in Table 1.
10. The common-mode $1/f$ knee of the detector module is stable at better than 100 mHz.
11. The $1/f$ knee of each detector is stable at better than 20 mHz.
12. Micro-vibration at the $5\,$K interface is less than 30 $\mu$G/$\sqrt{\rm Hz}$ over 10–200 Hz and 80 $\mu$G/$\sqrt{\rm Hz}$ over 200–500 Hz. Under this condition, the focal plane shall achieve the required sensitivity. This requirement is based on experience with the Hitomi X-ray satellite [41].
13. The detector yield, including the readout electronics, is larger than 80 %.
14. The dead-time fraction due to cosmic-ray glitches is less than 0.05.
15. The mass of the focal plane assembly is assumed to be 17 kg without the magnetic shield.
16. The first eigenfrequency of the focal plane is required to be larger than 141 Hz for all three axes.
## 8 Scaled model demonstration
Figure 12: (Left) LFT quarter (1/4) scaled model and the near-field
measurement system [42]. (Right) Far-field patterns of the quarter LFT at the
center (top) and edges (middle and bottom) of the focal plane, measured at 220
GHz, which corresponds to 55 GHz in the full model [42].
A quarter-scale (1/4) model of the LFT antenna has been designed and developed to verify the wide-field design. The measurement frequencies are scaled accordingly, so the antenna pattern of the scaled model reveals that of the full-size telescope.
The near-field measurement system with the scaled LFT has been developed as shown in Figure 12 [42]. Measured amplitude and phase data are transformed to the far field. Figure 12 shows far-field beam patterns at three focal-plane positions (see Figure 6): the center, the top-right edge, and the bottom-right edge, at a frequency of 220 GHz, which corresponds to 55 GHz in the full-size LFT. We confirmed the suppression of the far sidelobes with the scaled-model measurements.
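For context, the near-field-to-far-field processing used in such measurements amounts to a two-dimensional Fourier transform of the sampled complex field. The sketch below illustrates the idea on a toy Gaussian aperture field; the grid, spacing, and test field are illustrative assumptions, not the actual parameters of the measurement in [42].

```python
import numpy as np

# Toy planar near-field -> far-field transform: the far-field angular
# spectrum is the 2D Fourier transform of the sampled complex near field.

lam = 299.79e9 / 220e9 * 1e-3            # wavelength at 220 GHz, ~1.36 mm
n, pitch = 256, 0.5 * lam                # samples per axis, lambda/2 spacing

x = (np.arange(n) - n / 2) * pitch
X, Y = np.meshgrid(x, x)
near_field = np.exp(-(X**2 + Y**2) / (2 * (20 * lam) ** 2))  # toy aperture field

spectrum = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(near_field)))
amp = np.abs(spectrum)
power_db = 20 * np.log10(np.maximum(amp, 1e-12 * amp.max()) / amp.max())

u = np.fft.fftshift(np.fft.fftfreq(n, d=pitch)) * lam  # direction cosines
print(f"beam peak at u = {u[np.argmax(power_db.max(axis=0))]:.3f} "
      f"(boresight), floor {power_db.min():.0f} dB")
```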
Rotation of the polarization angle over the field of view is another key parameter of the wide-field design. A dedicated compact antenna test range (CATR), i.e., a collimated millimeter-wave source, has been developed to measure the polarization angle across the wide field of view of the 1/4 LFT; the polarization angle of the 1/4-scale LFT has been measured with a resolution of 0.1′ [43]. The angle of the $x$ (horizontal) polarization was measured to rotate by around 60′ across the focal plane, while that of the $y$ (vertical) polarization rotates by around 30′.
The structural design of the LFT antenna has also been studied with the 1/4-scale model. The frame structure of the 1/4 LFT, shown in Figure 13, was assembled from plates and rectangular bars. The reflector alignment of the assembled 1/4 LFT was measured with a coordinate measuring machine (Mitutoyo Legex 12128), as shown in Figure 13. The fitted optical surfaces, referenced to the aperture center, differ from the designed values by at most $36\,\mu$m and 22′′. The measured alignment met the quarter-scaled values of the alignment requirements of Table 6.
Figure 13: (Left) LFT quarter-scale (1/4) model. (Right) Measurement of the reflector surfaces with a coordinate measuring machine.
## 9 Verification Plan
Verification and calibration of a cryogenic telescope in ground facilities before launch are challenging. The verification plan is tabulated in Table 7. Two development models (DM/EM and FM; DM: demonstration model, EM: engineering model, FM: flight model) are planned [2].
Table 7: Verification plan of LFT.
| | DM/EM | FM
---|---|---|---
LFT-antenna tests at room temperature | |
| Shape measurements with a 3D coordinated machine | $\checkmark$ | $\checkmark$
| Millimeter-wave antenna pattern with horns | $\checkmark$ | $\checkmark$
| V-grooves/MHFT diffraction | $\checkmark$ | $-$
LFT-antenna cryogenic tests at $5\,$K | |
| Strain measurements | $\checkmark$ | $-$
| Deformation measurements: photogrammetry or laser sensing | optional | optional
| Millimeter-wave antenna pattern with horns | optional | optional
LFT AIV and calibration with FP and PMU | |
| Antenna pattern | $\checkmark$ | $\checkmark$
| Polarization angle | $\checkmark$ | $\checkmark$
| Frequency response | $\checkmark$ | $\checkmark$
The antenna pattern of LFT before integration with the focal plane will be tested at room temperature; possible methods are a near-field beam measurement [42] or a CATR measurement [43].
Diffraction effects due to the V-grooves and the MHFT structures will be evaluated at room temperature and modeled to confirm that they are small enough ($<-60$ dB), as designed. A structural thermal model (STM) of the mission payload will be constructed and tested with mechanical coolers to verify the structural and thermal performance [2]; it will also be used to measure the electromagnetic effects of the V-grooves at room temperature.
The cryogenic deformation of LFT will then be measured to confirm that it is small enough, as designed. There are a few methods to measure the cryogenic deformation of LFT: 1) strain measurements with strain gauges; 2) photogrammetric measurements; and 3) laser reflection measurements.
To verify the requirements of LFT and to calibrate LFT with the focal plane and the PMU, we plan to build a beam measurement system in a cryogenic environment. There are three candidate methods to measure cryogenic beam patterns, polarization angles, and spectral response (Table 8). One approach is near-field beam measurements in front of the front hood of LFT; to obtain the far-field pattern from the near-field measurements, the phase distribution must be retrieved with a reference source [44].
Table 8: Possible cryogenic RF measurements. CATR: compact antenna test range. CW: continuous wave/coherent source.
| Near Field | CATR with CW | CATR with blackbody
---|---|---|---
Phase retrieval | necessary | unnecessary | unnecessary
Volume | compact | large | large
Time | longer | fast | faster
Standing wave | no concern | little concern | no concern
Pol. angle | difficult | possible | possible
Spectral response | difficult | possible | $-$
Figure 14: Schematic drawing of the cryogenic set-up with a CATR (compact antenna test range), which moves three-dimensionally on two goniometer stages. It is planned to measure the co-polar and cross-polar beam patterns, the polarization angle, and the spectral response of LFT with CW and blackbody sources.
Another method is direct measurement of the far-field pattern with a collimated source, i.e., a compact antenna test range (CATR), which requires a larger cryogenic volume, as shown in Figure 14. This concept has three merits over the phase-retrieval near-field beam measurement:
1. The polarization angle of LFT can also be measured with a collimated beam, as demonstrated by Takakura et al. [43].
2. The frequency spectral response can be measured with a broadband coherent source; a few broadband photo-mixers have been demonstrated at millimeter-wave frequencies [45, 46].
3. Beam patterns can be measured with continuum sources as well as coherent sources. Beam measurements with a continuum source are faster than measurements at multiple frequencies with coherent sources.
In either method, it is crucial either to decouple the room-temperature mechanics from the cryogenic sources or to develop moving mechanics that operate at low temperature.
## 10 Summary
Based on the performance specifications of LFT, a wide field-of-view design
has been studied as well as structural and thermal designs. A 1/4-scaled model
of LFT has been developed to verify the design. The measured beam pattern was
consistent with the optical model at a level of $-50$ dB. Interface
specifications of the LFT PMU and LFT focal plane are presented. The
verification scheme of LFT is planned as the ISAS/JAXA pre-phase A activity.
###### Acknowledgements.
This work is supported in Japan by ISAS/JAXA for Pre-Phase A2 studies, by the
acceleration program of JAXA research and development directorate, by the
World Premier International Research Center Initiative (WPI) of MEXT, by the
JSPS Core-to-Core Program of A. Advanced Research Networks, and by JSPS
KAKENHI Grant Numbers JP15H05891, JP17H01115, and JP17H01125. The Italian
LiteBIRD phase A contribution is supported by the Italian Space Agency (ASI
Grants No. 2020-9-HH.0 and 2016-24-H.1-2018), the National Institute for
Nuclear Physics (INFN) and the National Institute for Astrophysics (INAF). The
French LiteBIRD phase A contribution is supported by the Centre National
d’Etudes Spatiale (CNES), by the Centre National de la Recherche Scientifique
(CNRS), and by the Commissariat à l’Energie Atomique (CEA). The Canadian
contribution is supported by the Canadian Space Agency. The US contribution is
supported by NASA grant no. 80NSSC18K0132. Norwegian participation in LiteBIRD
is supported by the Research Council of Norway (Grant No. 263011). The Spanish
LiteBIRD phase A contribution is supported by the Spanish Agencia Estatal de
Investigación (AEI), project refs. PID2019-110610RB-C21 and AYA2017-84185-P.
Funds that support the Swedish contributions come from the Swedish National
Space Agency (SNSA/Rymdstyrelsen) and the Swedish Research Council (Reg. no.
2019-03959). The German participation in LiteBIRD is supported in part by the
Excellence Cluster ORIGINS, which is funded by the Deutsche
Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s
Excellence Strategy (Grant No. EXC-2094 - 390783311). This research used
resources of the Central Computing System owned and operated by the Computing
Research Center at KEK, as well as resources of the National Energy Research
Scientific Computing Center, a DOE Office of Science User Facility supported
by the Office of Science of the U.S. Department of Energy.
## References
* [1] M. Hazumi and LiteBIRD collaboration, “LiteBIRD: a small satellite for the study of B-mode polarization and inflation from cosmic background radiation detection,” in SPIE Astronomical Telescopes + Instrumentation, pp. 844219–9, 2012.
* [2] Y. Sekimoto and LiteBIRD collaboration, “Concept design of the LiteBIRD satellite for CMB B-mode polarization,” in Space Telescopes and Instrumentation 2018: Optical, Infrared, and Millimeter Wave, 10698, p. 106981Y, SPIE, 2018.
* [3] M. Hazumi and LiteBIRD Joint Study Group, “LiteBIRD satellite: JAXA’s new strategic L-class mission for all-sky surveys of cosmic microwave background polarization,” in Space Telescopes and Instrumentation 2020, SPIE, 2020.
* [4] U. Seljak and M. Zaldarriaga, “Signature of gravity waves in the polarization of the microwave background,” Phys. Rev. Lett. 78, pp. 2054–2057, Mar 1997.
* [5] M. Kamionkowski, A. Stebbins, A. Kosowsky, and A. Stebbins, “A Probe of Primordial Gravity Waves and Vorticity,” Phys. Rev. Lett. 78, pp. 2058–2061, mar 1997.
* [6] M. Zaldarriaga and U. Seljak, “All-sky analysis of polarization in the microwave background,” Phys. Rev. D 55(4), pp. 1830–1840, 1997.
* [7] M. Kamionkowski, A. Kosowsky, and A. Stebbins, “Statistics of cosmic microwave background polarization,” Phys. Rev. D 55(12), pp. 7368–7388, 1997.
* [8] M. Kamionkowski and E. D. Kovetz, “The Quest for B Modes from Inflationary Gravitational Waves,” Annu. Rev. Astron. Astrophys. 54, pp. 227–69, oct 2016.
* [9] M. Tristram, A. J. Banday, K. M. Górski, R. Keskitalo, C. R. Lawrence, K. J. Andersen, R. B. Barreiro, J. Borrill, H. K. Eriksen, R. Fernandez-Cobos, T. S. Kisner, E. Martínez-González, B. Partridge, D. Scott, T. L. Svalheim, H. Thommesen, and I. K. Wehus, “Planck constraints on the tensor-to-scalar ratio.” arXiv: 2010.01139v1, 2020.
* [10] L. Montier and LiteBIRD Joint Study Group, “Overview of the Mid- and High-Frequency Telescopes of the LiteBIRD satellite mission,” in Space Telescopes and Instrumentation 2020, SPIE, 2020.
* [11] L. Lamagna et al., “The optical design of the Litebird Middle and High Frequency Telescope,” in Space Telescopes and Instrumentation 2020, SPIE, 2020.
* [12] A. Suzuki et al., “The LiteBIRD Satellite Mission - Sub-Kelvin Instrument,” J. of Low Temperature Physics 193, pp. 1048–1056, 2018.
* [13] B. Westbrook et al., “Detector fabrication development for the LiteBIRD satellite mission,” in Space Telescopes and Instrumentation 2020, SPIE, 2020.
* [14] T. Hasebe, Y. Sekimoto, T. Dotani, K. Mitsuda, K. Shinozaki, and S. Yoshida, “Optimization of cryogenic architecture for LiteBIRD satellite using radiative cooling,” Journal of Astronomical Telescopes, Instruments, and Systems 5(4), p. 044002, 2019.
* [15] J.-M. Duval, T. Prouvé, P. Shirron, K. Shinozaki, Y. Sekimoto, T. Hasebe, G. Vermeulen, J. André, M. Hasumi, L. Montier, and B. Mot, “LiteBIRD Cryogenic Chain: 100 mK Cooling with Mechanical Coolers and ADRs,” Journal of Low Temperature Physics 199, pp. 730–736, 2020.
* [16] Y. Sakurai et al., “Breadboard model of polarization modulator unit based on a continuous rotating half-wave plate for low frequency telescope of LiteBIRD space mission,” in Space Telescopes and Instrumentation 2020, SPIE, 2020.
* [17] P. A. R. Ade, G. Pisano, C. Tucker, and S. Weaver, “A review of metal mesh filters,” in SPIE, pp. 62750U–62750U, 2006.
* [18] H. Ishino and LiteBIRD collaboration, “LiteBIRD: lite satellite for the study of B-mode polarization and inflation from cosmic microwave background radiation detection,” in Proc. SPIE, 9904, p. 99040X, 2016.
* [19] H. Ishino, R. Nagata, and LiteBIRD working group, “Development of LiteBIRD analysis pipeline and systematics evaluation,” in Proceedings of the 16th Space Science Symposium, 2016.
* [20] R. Nagata, “Requirement analysis for LiteBIRD optical system,” JAXA Supercomputer System Annual Report April 2017-March 2018 JSS2 Inter-University Research , feb 2019.
* [21] A. T. Lee et al., “LiteBIRD an all-sky cosmic microwave background probe of inflation,” Bulletin of the American Astronomical Society 7(51), p. 286, 2019.
* [22] M. Tsuji et al., “Simulating electromagnetic transfer function from the transmission antennae to the sensors vicinity in LiteBIRD,” in Space Telescopes and Instrumentation 2020, SPIE, 2020.
* [23] H. Tran, A. Lee, S. Hanany, M. Milligan, and T. Renbarger, “Comparison of the crossed and the Gregorian Mizuguchi-Dragone for wide-field millimeter-wave astronomy,” Applied Optics 47(2), pp. 103–109, 2008.
* [24] B. E. Bernacki, J. F. Kelly, D. Sheen, B. Hatchell, P. Valdez, J. Tedeschi, T. Hall, and D. McMakin, “Wide-field-of-view millimeter-wave telescope design with ultra-low cross-polarization,” in SPIE, 8362, pp. 836207–836211, may 2012.
* [25] K. Young et al., “Optical design of PICO: a concept for a space mission to probe inflation and cosmic origins,” in Space Telescopes and Instrumentation 2018: Optical, Infrared, and Millimeter Wave, 10698, pp. 1242 – 1253, SPIE, 2018.
* [26] H. Tran et al., “Optical Design of the EPIC-IM Crossed Dragone Telescope,” in SPIE, 7731, pp. 77311R–15, 2010.
* [27] S. Kashima, M. Hazumi, H. Imada, N. Katayama, T. Matsumura, Y. Sekimoto, and H. Sugai, “Wide field-of-view crossed Dragone optical system using anamorphic aspherical surfaces,” Appl. Opt. 57(15), pp. 4171–4179, 2018.
* [28] “TICRA.” http://www.ticra.com/software/grasp/.
* [29] H. Imada, T. Dotani, T. Hasebe, M. Hazumi, J. Inatani, H. Ishino, S. Kashima, N. Katayama, K. Kimura, T. Matsumura, R. Nagata, Y. Sekimoto, H. Sugai, A. Suzuki, and S. Utsunomiya, “The optical design and physical optics analysis of a cross-Dragonian telescope for LiteBIRD,” in Space Telescopes and Instrumentation 2018: Optical, Infrared, and Millimeter Wave, 10698, p. 106984K, SPIE, 2018.
* [30] “HFSS.” https://www.ansys.com/products/electronics/ansys-hfss.
* [31] J. M. Edwards, R. O’Brient, A. T. Lee, and G. M. Rebeiz, “Dual-Polarized Sinuous Antennas on Extended Hemispherical Silicon Lenses,” IEEE Transactions on Antennas and Propagation 60, pp. 4082–4091, sep 2012.
* [32] “TK-RAM.” http://www.terahertz.co.uk/.
* [33] J. Saily and A. Raisanen, “Characterization of Submillimeter Wave Absorbers from 200 - 600 GHz,” International Journal of Infrared and Millimeter Waves 5, p. 71, 2004.
* [34] “NASA outgassing data.” https://outgassing.nasa.gov/.
* [35] “NIST Material Properties References.” https://trc.nist.gov/cryogenics/materials/references.htm.
* [36] Y. Sakurai, T. Matsumura, T. Iida, H. Kanai, N. Katayama, H. Imada, H. Ohsaki, Y. Terao, T. Shimomura, H. Sugai, H. Kataza, R. Yamamoto, and S. Utsunomiya, “Design and thermal characteristics of a 400 mm diameter levitating rotor in a superconducting magnetic bearing operating below at 10 k for a cmb polarization experiment,” IEEE Transactions on Applied Superconductivity 28, pp. 1–4, June 2018.
* [37] Y. Sakurai et al., “Design and development of a polarization modulator unit based on a continuous rotating half-wave plate for LiteBIRD,” in Proc. SPIE, 10708, p. 107080E, 2018.
* [38] K. Komatsu, T. Matsumura, H. Imada, H. Ishino, N. Katayama, and Y. Sakurai, “Demonstration of the broadband half-wave plate using the nine-layer sapphire for the cosmic microwave background polarization experiment,” Journal of Astronomical Telescopes, Instruments, and Systems 5, p. 1, nov 2019.
* [39] S. Stever et al., “Cosmic ray glitch predictions, physical modelling, and overall effect on the LiteBIRD space mission,” JCAP , p. in preparation, 2020.
* [40] M. Tominaga et al., “Simulation of cosmic ray effects in the cmb b-mode polarization observation satellite litebird,” in Space Telescopes and Instrumentation 2020, SPIE, 2020.
* [41] Y. Takei, S. Yasuda, K. Ishimura, et al., “Vibration isolation system for cryocoolers of soft x-ray spectrometer on-board ASTRO-H (Hitomi),” Journal of Astronomical Telescopes, Instruments, and Systems 4, p. 011216, feb 2018.
* [42] H. Takakura, Y. Sekimoto, J. Inatani, S. Kashima, H. Imada, T. Hasebe, T. Kaga, Y. Takeda, and N. Okada, “Far-Sidelobe Antenna Pattern Measurement of LiteBIRD Low Frequency Telescope in 1/4 Scale,” IEEE Transactions on Terahertz Science and Technology 9, pp. 598–605, nov 2019.
* [43] H. Takakura, Y. Sekimoto, J. Inatani, S. Kashima, and M. Sugimoto, “Polarization angle measurement of litebird low frequency telescope scaled model,” in Space Telescopes and Instrumentation 2020, SPIE, 2020.
* [44] D. Smith, M. Leach, M. Elsdon, and S. Foti, “Indirect Holographic Techniques for Determining Antenna Radiation Characteristics and Imaging Aperture Fields,” IEEE Antennas and Propagation Magazine 49, pp. 54–67, feb 2007.
* [45] A. Hirata, T. Nagatsuma, R. Yano, H. Ito, T. Furuta, Y. Hirota, T. Ishibashi, H. Matsuo, A. Ueda, T. Noguchi, Y. Sekimoto, M. Ishiguro, and S. Matsuura, “Output power measurement of photonic millimeter-wave and sub-millimeter-wave emitter at 100-800 GHz,” Electron Lett 38, p. 798, 2002.
* [46] H. Kiuchi, T. Kawanishi, and A. Kanno, “Wide frequency range optical synthesizer with high-frequency resolution,” IEEE Photonics Technology Letters 29, pp. 78–81, Jan 2017.
# Cosmological chirality and magnetic fields from parity violating particle
decays
Tanmay Vachaspati$^{*}$ and Alexander Vilenkin$^{\dagger}$
$^{*}$Physics Department, Arizona State University, Tempe, AZ 85287, USA.
$^{\dagger}$Institute of Cosmology, Department of Physics and Astronomy, Tufts University, Medford, MA 02155, USA.
###### Abstract
We estimate the chirality of the cosmological medium due to parity-violating decays of standard model particles, focusing on the example of tau leptons. The resulting chirality is, however, too small to make a significant contribution to the cosmological magnetic field via the chiral magnetic effect.
## I Introduction
The last few decades have seen growing interest in cosmic magnetic fields on
several fronts [1]. Several ideas have been proposed that can generate
magnetic fields in cosmology, some of which are directly tied to known
particle physics [2, 3, 4] and its possible extensions [5, 6, 7, 8, 1]. The
magneto-hydrodynamical (MHD) evolution of cosmological magnetic fields is now
understood quite well on the basis of analytical arguments [9, 10] and direct
simulations [11]. There are claims for an indirect lower bound on the
cosmological magnetic field strength [12, 13, 14, 15, 16], though not without
debate [17, 18], and more direct evidence [19]. Concurrently there are claims
of a parity violating signature that can be used to measure the magnetic field
helicity spectrum [20, 21] though with no significant detections as yet [22,
23].
In parallel to these developments, motivated by heavy-ion collision
experiments [24], there has been renewed interest in chiral effects in
plasmas, namely the chiral-magnetic [25] and chiral-vortical [26] effects (CME
and CVE respectively). The CME and CVE have also been applied to the evolution
of cosmological and astrophysical magnetic fields [5, 27, 28, 29, 30, 31, 32,
33]. In this paper we discuss how CME and CVE can effectively arise in
standard cosmology with standard particle interactions due to the parity-
violating decays of heavy leptons and quarks. The basic idea is that the
standard model has a number of unstable particles that decay at various
cosmological epochs, primarily due to the weak interactions. Since the weak
interactions violate parity, the decay products are chiral and this provides a
net particle helicity to the cosmological medium. The net particle helicity in
principle leads to electric currents via the CME that can generate magnetic
helicity. However, accounting only for decays of standard model particles, the
net particle helicity is too small to significantly affect cosmological
magnetic fields and their helicity.
We start by describing the physical effect in some detail in the context of
the tau lepton in Sec. II, where we also estimate the induced electric
currents. We find an upper bound to the magnetic helicity that can be
generated due to chiral effects in Sec. III. Our conclusions are summarized
and discussed in Sec. IV.
## II Chirality production in tau decays
To illustrate the physics of the effect, in this section we will discuss the
decay of tau leptons in the background of a magnetic field and fluid
vorticity. Except for small differences, the physics carries over to the case
of decays of other particles.
### II.1 Particle decay
Tau leptons decay into electrons (or muons) and neutrinos
$\tau^{-}\to e^{-}+\nu_{\tau}+{\bar{\nu}}_{e}$ (1)
and anti-tau into positrons and neutrinos
$\tau^{+}\to e^{+}+{\bar{\nu}}_{\tau}+\nu_{e}$ (2)
These decays violate parity since they proceed primarily through the weak interactions. The tau therefore decays predominantly into a relativistic left-handed electron, while an anti-tau decays into a relativistic right-handed positron. Due to the lepton asymmetry of the universe there are more taus than anti-taus, so the cosmological medium gains net left-handed chirality as taus decay.
The decay-product electrons are chiral since they are produced by the weak interactions, but chirality is not conserved for massive particles. Instead, as emphasized in Ref. [34] in the context of supernovae and neutron stars, chirality is nearly equal to helicity for ultrarelativistic particles, so it is better to think of the final electrons as being in a definite helicity state. Helicity can change only through particle interactions. We adopt this view in what follows.
The $\tau$ mass is $m_{\tau}=1777~{}{\rm MeV}$ and the $\tau$ lifetime in its rest frame is $\tau_{\tau}=2.9\times 10^{-13}~{}{\rm s}$. However, the decaying taus are constantly replenished by the reactions inverse to (1) and (2) (tau particles are also produced and destroyed in scattering reactions like $\tau^{-}+{\nu}_{e}\to e^{-}+\nu_{\tau}$; we disregard these in what follows, since they do not change the order of magnitude of the effect), so the number density of taus, $n_{\tau}$, remains comparable to that of photons until the time
$t_{\tau}\sim 10^{-7}~{}{\rm s},$ (3)
when the cosmic temperature drops to $T\sim m_{\tau}$. At later times $n_{\tau}$ decreases exponentially.
The particle helicity density, $n_{\chi}$, is produced in tau decays and is dissipated by helicity-flipping scatterings and by the chiral anomaly. The anomalous rate is proportional to $\alpha^{3}B^{2}$ [35], where $\alpha\approx 1/137$ is the fine structure constant and $B$ is the magnetic field strength, and it is much slower than helicity-flipping scatterings for vanishing or weak magnetic fields. We ignore the anomalous flipping for now and return to it in Sec. III, where we consider the effect of particle chirality on the generation of magnetic fields. The evolution of the particle helicity density can be described by the kinetic equation in the relaxation-time approximation,
$\frac{d}{dt}(a^{3}n_{\chi})=\frac{a^{3}}{\tau_{d}}(\delta n_{\tau}-\delta
n_{\tau}^{\rm eq})-\frac{a^{3}n_{\chi}}{\tau_{\chi}},$ (4)
where
$\delta n_{\tau}=n_{\tau}^{+}-n_{\tau}^{-},$ (5)
$n_{\tau}^{-}$ and $n_{\tau}^{+}$ are the densities of tau and anti-tau particles, respectively, $\delta n_{\tau}^{\rm eq}$ is the equilibrium value of $\delta n_{\tau}$, $\tau_{d}\sim(T/m_{\tau})\tau_{\tau}$ is the decay time of taus (assuming $T>m_{\tau}$, with time dilation taken into account), and $\tau_{\chi}^{-1}$ is the electron helicity-flipping rate. At $T\gg m_{e}$, the helicity-flipping rate is suppressed by a factor $m_{e}^{2}/T^{2}$ compared to the scattering rate $\alpha T$ [36] (earlier estimates of the flipping rate were suppressed by another factor of $\alpha$ [34]),
$\tau_{\chi}\sim\frac{1}{\alpha T}\frac{T^{2}}{m_{e}^{2}}.$ (6)
The excess of anti-tau’s over tau’s, $\delta n_{\tau}$, decreases due to tau
decay and is described by the equation,
$\frac{d}{dt}(a^{3}\delta n_{\tau})=\frac{a^{3}}{\tau_{d}}(\delta
n_{\tau}^{\rm eq}-\delta n_{\tau}).$ (7)
At temperatures below the electroweak phase transition, $T\lesssim T_{\rm EW}\sim 100$ GeV, we have $\tau_{d}\ll t$, where $t$ is the cosmic time (this is easily verified using the relation $t\sim m_{\rm P}/\sqrt{N}T^{2}$, where $m_{\rm P}$ is the Planck mass and $N$ is the number of particle species in equilibrium). This means that the equilibrium density of taus is established very quickly (compared to the Hubble time), and the approximate solution of (7) is $\delta n_{\tau}\approx\delta n_{\tau}^{\rm eq}$. Inserting (7) in (4) and then using $\delta n_{\tau}\approx\delta n_{\tau}^{\rm eq}$ we have
$\frac{d}{dt}(a^{3}n_{\chi})=-\frac{d}{dt}\left(a^{3}\delta n_{\tau}^{\rm
eq}\right)-\frac{a^{3}n_{\chi}}{\tau_{\chi}}.$ (8)
With a given $\delta n_{\tau}^{\rm eq}$, this equation can be solved in
quadratures, but we shall instead find an approximate solution. Since we are
in the regime where $\tau_{\chi}\ll t$, the term on the left-hand side can be
neglected and we obtain
$n_{\chi}\approx-\tau_{\chi}T^{3}\frac{d}{dt}\left(\frac{\delta n_{\tau}^{\rm
eq}}{T^{3}}\right),$ (9)
where we have used $aT\approx{\rm const}$.
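Spelling out the intermediate step: neglecting the left-hand side of (8) gives $a^{3}n_{\chi}/\tau_{\chi}\approx-\frac{d}{dt}(a^{3}\delta n_{\tau}^{\rm eq})$, and since $aT\approx{\rm const}$ we may substitute $a^{3}\propto T^{-3}$, so that
$n_{\chi}\approx-\frac{\tau_{\chi}}{a^{3}}\frac{d}{dt}\left(a^{3}\delta n_{\tau}^{\rm eq}\right)=-\tau_{\chi}T^{3}\frac{d}{dt}\left(\frac{\delta n_{\tau}^{\rm eq}}{T^{3}}\right),$
which reproduces (9).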
Once we determine the equilibrium excess of anti-taus over taus, denoted by
$\delta n_{\tau}^{\rm eq}$, we can estimate the chirality density of the
universe due to tau decays using (9).
### II.2 Equilibrium density
The equilibrium density $\delta n_{\tau}^{\rm eq}$ is given by
$\delta n_{\tau}^{\rm
eq}=\frac{1}{2\pi^{2}}\int_{0}^{\infty}dpp^{2}\left[f\left(\frac{E-\mu_{\tau}}{T}\right)-f\left(\frac{E+\mu_{\tau}}{T}\right)\right],$
(10)
where $f(x)=(e^{x}+1)^{-1}$ is the Fermi distribution,
$E=\sqrt{p^{2}+m_{\tau}^{2}}$, and $\mu_{\tau}$ is the chemical potential of
$\tau$ particles. At $T\gg m_{\tau},\mu_{\tau}$ we can expand the integrand in
Eq. (10) in powers of $m_{\tau}^{2}/p^{2}$ and $\mu_{\tau}/T$. The
integrations are then easily performed and we find
$\delta n_{\tau}^{\rm
eq}\approx\frac{\mu_{\tau}T^{2}}{6}\left(1-\frac{3m_{\tau}^{2}}{2\pi^{2}T^{2}}\right).$
(11)
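The expansion (11) is easy to verify numerically against the exact integral (10); the sketch below does this for an illustrative choice $T=10\,m_{\tau}$ and $\mu_{\tau}/T=10^{-3}$, for which the two expressions agree at the $10^{-3}$ level or better.

```python
import numpy as np
from scipy.integrate import quad

# Numerical check of the expansion (11) against the exact integral (10),
# for an illustrative choice T = 10 m_tau and mu_tau / T = 1e-3.

m_tau = 1777.0                   # MeV
T = 10.0 * m_tau
mu = 1e-3 * T

f = lambda x: 1.0 / (np.exp(x) + 1.0)        # Fermi distribution
E = lambda p: np.sqrt(p * p + m_tau * m_tau)

integrand = lambda p: p * p * (f((E(p) - mu) / T) - f((E(p) + mu) / T))
exact, _ = quad(integrand, 0.0, 60.0 * T, limit=200)
exact /= 2.0 * np.pi**2

approx = (mu * T**2 / 6.0) * (1.0 - 3.0 * m_tau**2 / (2.0 * np.pi**2 * T**2))

print(f"exact integral : {exact:.6e} MeV^3")
print(f"expansion (11) : {approx:.6e} MeV^3")
```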
We assume that the baryon and/or lepton asymmetry of the universe was
generated at $T\gg T_{EW}$ by some interactions beyond the Standard Model, for
example by $(B-L)$-violating leptoquark decays. This asymmetry was then
redistributed between the Standard Model leptons and quarks by sphaleron
processes, so at $T\ll T_{EW}$ we expect the chemical potentials of light
baryons and leptons to be of the order $\mu/T\sim\eta_{B}$ [37, 38], where
$\eta_{B}\sim 10^{-9}$ is the observed baryon to photon ratio. In the high-
temperature regime, when $T$ is large compared to all relevant particle
masses, we have $\mu_{\tau}/T\approx{\rm const}$, with a mass correction
${\cal O}(m^{2}/T^{2})$ [39]. Then Eq. (11) becomes
$\frac{\delta n_{\tau}^{\rm eq}}{T^{3}}\approx
C\eta_{B}-K\eta_{B}\frac{m_{\tau}^{2}}{T^{2}},$ (12)
where $C$ and $K$ are ${\cal O}(1)$ numerical constants (this estimate assumes that taus are the heaviest particles present in equilibrium at temperature $T$; if a heavier particle is present in equilibrium, it too will contribute to the mass correction and may change the estimate). The mass-correction term in (12) can be understood qualitatively as follows. As the temperature decreases, it becomes energetically favorable to transfer the conserved $\tau$-lepton number from $\tau$-particles to $\tau$-neutrinos. The excess $\tau$-lepton number carried by $\tau$-particles decreases as a result [39].
Substituting Eq. (12) in (9) we obtain
$n_{\chi}\approx-3K\eta_{B}\tau_{\chi}m_{\tau}^{2}{\dot{T}}.$ (13)
With ${\dot{T}}=-T/2t$, $t\sim m_{\rm P}/T^{2}$ and $\tau_{\chi}$ from Eq.
(6), this gives (omitting numerical factors)
$n_{\chi}\sim\frac{\eta_{B}m_{\tau}^{2}}{\alpha m_{e}^{2}}\frac{T}{m_{\rm
P}}n_{\gamma},$ (14)
where $n_{\gamma}\sim T^{3}$ is the photon number density.
This estimate was derived assuming $T\gg m_{\tau}$, but it still applies at
$T\sim m_{\tau}$. Reactions (1), (2) remain in equilibrium when $T$ drops well
below $m_{\tau}$. In this regime, $\delta n_{\tau}$ and $n_{\chi}$ decrease
exponentially.
Similar formulae can be written down for the decay of other unstable
particles. The largest helicity is injected by the decay of the heaviest
particle into the lightest particle and at the highest temperature.
## III Magnetic helicity
As noted in Ref. [32], the maximum magnetic helicity that can be obtained from
particle helicity can be derived from the chiral anomaly equation, which can
be written as a conservation law,
$n_{\chi}+\frac{4\pi}{\alpha}h={\rm constant},$ (15)
where $h=\langle{\bf A}\cdot{\bf B}\rangle$ is the magnetic helicity. Assuming
that the initial magnetic helicity and the final particle helicity vanish, we
get
$h_{\rm max}=\frac{\alpha}{4\pi}n_{\chi}\sim\frac{\eta_{B}m_{\tau}^{2}}{4\pi
m_{e}^{2}}\frac{T}{m_{\rm P}}n_{\gamma}$ (16)
where we have used (14). We compare $h_{\rm max}$ to the magnetic helicity that could be induced by baryogenesis [3, 4],
$h_{B}\sim\frac{\eta_{B}}{\alpha}n_{\gamma}\sim 10^{-5}\,{\rm cm}^{-3}\sim 10^{-45}\,{\rm G}^{2}\,{\rm Mpc}$ (17)
where we have used the known cosmic baryon number density and natural units. Then
$h_{\rm max}\sim\frac{\alpha m_{\tau}^{2}}{4\pi m_{e}^{2}}\frac{T}{m_{\rm
P}}h_{B}\sim 10^{-10}h_{B}$ (18)
where we have used $T\sim 100\,{\rm GeV}$ in the numerical estimate. Even if
the decay of top quarks with mass $\sim 175\,{\rm GeV}$ to down quarks with
mass $\sim 1\,{\rm MeV}$ is considered, $h_{\rm max}\sim 10^{-6}h_{B}$.
For comparison with observations: even with the most conservative lower bound of $10^{-19}\,{\rm G}$ on Mpc scales, the corresponding estimate of the observed helicity is $\sim 10^{-38}\,{\rm G}^{2}\,{\rm Mpc}$.
## IV Conclusions
We have shown that the decays of certain standard model particles can lead to
a chiral cosmological medium around the epoch of the electroweak phase
transition. The final result for the chiral asymmetry due to tau-lepton decays
is given in (14). However, the asymmetry is suppressed by the baryon to
photon ratio ($\eta_{B}\sim 10^{-9}$), and the effect on magnetic field
helicity generation is very weak, as shown in Sec. III. Nonetheless, it is of
interest that the cosmological medium was chiral at the earliest moments even
within the standard model of particle physics.
## V Acknowledgements
We thank the participants of the Nordita workshop on “Quantum Anomalies and
Chiral Magnetic Phenomena”, especially Axel Brandenburg and Kohei Kamada for
feedback. We also thank Matt Baumgart, Cecilia Lunardini, Igor Shovkovy, and
Tracy Slatyer for discussions. TV’s work is supported by the U.S. Department
of Energy, Office of High Energy Physics, under Award No. DE-SC0019470 at
Arizona State University. AV is supported by the National Science Foundation
Award No. 1820872.
## References
* [1] Tanmay Vachaspati. Progress on cosmological magnetic fields, 2020.
* [2] T. Vachaspati. Magnetic fields from cosmological phase transitions. Phys.Lett., B265:258–261, 1991.
* [3] John M. Cornwall. Speculations on primordial magnetic helicity. Phys. Rev., D56:6146–6154, 1997.
* [4] Tanmay Vachaspati. Estimate of the primordial magnetic field helicity. Phys. Rev. Lett., 87:251302, 2001.
* [5] M. Joyce and Mikhail E. Shaposhnikov. Primordial magnetic fields, right-handed electrons, and the Abelian anomaly. Phys. Rev. Lett., 79:1193–1196, 1997.
* [6] Michael McNeil Forbes and Ariel R. Zhitnitsky. Primordial galactic magnetic fields from domain walls at the QCD phase transition. Phys. Rev. Lett., 85:5268–5271, 2000.
* [7] Trevor Stevens, Mikkel B. Johnson, Leonard S. Kisslinger, and Ernest M. Henley. Non-Abelian Higgs model of magnetic field generation during a cosmological first-order electroweak phase transition. Phys. Rev., D85:063003, 2012.
* [8] F. Miniati, G. Gregori, B. Reville, and S. Sarkar. Axion-driven cosmic magnetogenesis during the QCD crossover. Physical Review Letters, 121(2), Jul 2018.
* [9] Robi Banerjee and Karsten Jedamzik. The Evolution of cosmic magnetic fields: From the very early universe, to recombination, to the present. Phys. Rev., D70:123003, 2004.
* [10] Karsten Jedamzik and Gunter Sigl. The Evolution of the Large-Scale Tail of Primordial Magnetic Fields. Phys.Rev., D83:103005, 2011.
* [11] Axel Brandenburg, Tina Kahniashvili, Sayan Mandal, Alberto Roper Pol, Alexander G. Tevzadze, and Tanmay Vachaspati. Evolution of hydromagnetic turbulence from the electroweak phase transition. Phys. Rev., D96(12):123528, 2017.
* [12] A. Neronov and I. Vovk. Evidence for Strong Extragalactic Magnetic Fields from Fermi Observations of TeV Blazars. Science, 328:73, April 2010.
* [13] Warren Essey, Shin’ichiro Ando, and Alexander Kusenko. Determination of intergalactic magnetic fields from gamma ray data. Astropart. Phys., 35:135–139, 2011.
* [14] K. Dolag, M. Kachelriess, S. Ostapchenko, and R. Tomas. Lower limit on the strength and filling factor of extragalactic magnetic fields. Astrophys. J., 727:L4, 2011.
* [15] Matthew Wood, Jonathan Biteau, Regina Caputo, Mattia Di Mauro, and Manuel Meyer. Preliminary Results of the $Fermi$ High-Latitude Extended Source Catalog. PoS, ICRC2017:647, 2018.
* [16] M. Ackermann et al. The Search for Spatial Extension in High-latitude Sources Detected by the $Fermi$ Large Area Telescope. Astrophys. J. Suppl., 237(2):32, 2018.
* [17] Avery E. Broderick, Philip Chang, and Christoph Pfrommer. The Cosmological Impact of Luminous TeV Blazars I: Implications of Plasma Instabilities for the Intergalactic Magnetic Field and Extragalactic Gamma-Ray Background. Astrophys.J., 752:22, 2012.
* [18] Timothy C. Arlen, Vladimir V. Vassiliev, Thomas Weisgarber, Scott P. Wakely, and S. Yusef Shafi. Intergalactic Magnetic Fields and Gamma Ray Observations of Extreme TeV Blazars. Astrophys. J., 796(1):18, 2014.
* [19] Wenlei Chen, James H. Buckley, and Francesc Ferrer. Search for GeV $\gamma$-Ray Pair Halos Around Low Redshift Blazars. Phys. Rev. Lett., 115:211103, 2015.
* [20] Hiroyuki Tashiro, Wenlei Chen, Francesc Ferrer, and Tanmay Vachaspati. Search for CP Violating Signature of Intergalactic Magnetic Helicity in the Gamma Ray Sky. Mon. Not. Roy. Astron. Soc., 445(1):L41–L45, 2014.
* [21] Wenlei Chen, Borun D. Chowdhury, Francesc Ferrer, Hiroyuki Tashiro, and Tanmay Vachaspati. Intergalactic magnetic field spectra from diffuse gamma rays. Mon. Not. Roy. Astron. Soc., 450(4):3371–3380, 2015.
* [22] Julia Asplund, Guðlaugur Jóhannesson, and Axel Brandenburg. On the measurement of handedness in Fermi Large Area Telescope data, 2020.
* [23] M. Kachelriess and B. C. Martinez. Searching for primordial helical magnetic fields, 2020.
* [24] Dmitri E. Kharzeev. The Chiral MagnetoHydroDynamics of QCD fluid at RHIC and LHC. J. Phys., G38:124061, 2011.
* [25] A. Vilenkin. Equilibrium Parity Violating Current in a Magnetic Field. Phys. Rev., D22:3080–3084, 1980.
* [26] A. Vilenkin. Macroscopic Parity Violating Effects: Neutrino Fluxes from Rotating Black Holes and in Rotating Thermal Radiation. Phys. Rev., D20:1807–1812, 1979.
* [27] Alexey Boyarsky, Jurg Frohlich, and Oleg Ruchayskiy. Self-consistent evolution of magnetic fields and chiral asymmetry in the early Universe. Phys. Rev. Lett., 108:031301, 2012.
* [28] Hiroyuki Tashiro, Tanmay Vachaspati, and Alexander Vilenkin. Chiral Effects and Cosmic Magnetic Fields. Phys. Rev., D86:105033, 2012.
* [29] Maxim Dvornikov and Victor B. Semikoz. Lepton asymmetry growth in the symmetric phase of an electroweak plasma with hypermagnetic fields versus its washing out by sphalerons. Phys. Rev. D, 87(2):025023, 2013.
* [30] Maxim Dvornikov and Victor B. Semikoz. Instability of magnetic fields in electroweak plasma driven by neutrino asymmetries. JCAP, 05:002, 2014.
* [31] Maxim Dvornikov and Victor B. Semikoz. Influence of the turbulent motion on the chiral magnetic effect in the early Universe. Phys. Rev. D, 95(4):043538, 2017.
* [32] Axel Brandenburg, Jennifer Schober, Igor Rogachevskii, Tina Kahniashvili, Alexey Boyarsky, Jurg Frohlich, Oleg Ruchayskiy, and Nathan Kleeorin. The turbulent chiral-magnetic cascade in the early universe. Astrophys. J., 845(2):L21, 2017.
* [33] Youhei Masada, Kei Kotake, Tomoya Takiwaki, and Naoki Yamamoto. Chiral magnetohydrodynamic turbulence in core-collapse supernovae. Phys. Rev., D98(8):083018, 2018.
* [34] Dorota Grabowska, David B. Kaplan, and Sanjay Reddy. Role of the electron mass in damping chiral plasma instability in Supernovae and neutron stars. Phys. Rev., D91(8):085035, 2015.
* [35] Daniel G. Figueroa and Mikhail Shaposhnikov. Anomalous non-conservation of fermion/chiral number in Abelian gauge theories at finite temperature. JHEP, 04:026, 2018.
* [36] A. Boyarsky, V. Cheianov, O. Ruchayskiy, and O. Sobol. Evolution of the Primordial Axial Charge across Cosmic Times. Phys. Rev. Lett., 126:021801, 2021.
* [37] V. A. Kuzmin, V. A. Rubakov, and M. E. Shaposhnikov. Anomalous Electroweak Baryon Number Nonconservation and GUT Mechanism for Baryogenesis. Phys. Lett., B191:171–173, 1987.
* [38] Jeffrey A. Harvey and Michael S. Turner. Cosmological baryon and lepton number in the presence of electroweak fermion number violation. Phys. Rev., D42:3344–3349, 1990.
* [39] A. I. Bochkarev, S. Yu. Khlebnikov, and M. E. Shaposhnikov. Sphalerons and Baryogenesis: Electroweak CP Violation at High Temperatures. Nucl. Phys., B329:493–518, 1990.
# A Bright Ultraviolet Excess in the Transitional 02es-like Type Ia Supernova
2019yvq
J. Burke Department of Physics, University of California, Santa Barbara, CA
93106-9530, USA Las Cumbres Observatory, 6740 Cortona Dr, Suite 102, Goleta,
CA 93117-5575, USA D. A. Howell Department of Physics, University of
California, Santa Barbara, CA 93106-9530, USA Las Cumbres Observatory, 6740
Cortona Dr, Suite 102, Goleta, CA 93117-5575, USA S. K. Sarbadhicary Center
for Data Intensive and Time Domain Astronomy, Department of Physics and
Astronomy, Michigan State University, East Lansing, MI 48824 D. J. Sand
Steward Observatory, University of Arizona, 933 North Cherry Avenue, Tucson,
AZ 85721-0065, USA R. C. Amaro Steward Observatory, University of Arizona,
933 North Cherry Avenue, Tucson, AZ 85721-0065, USA D. Hiramatsu Department
of Physics, University of California, Santa Barbara, CA 93106-9530, USA Las
Cumbres Observatory, 6740 Cortona Dr, Suite 102, Goleta, CA 93117-5575, USA
C. McCully Department of Physics, University of California, Santa Barbara, CA
93106-9530, USA Las Cumbres Observatory, 6740 Cortona Dr, Suite 102, Goleta,
CA 93117-5575, USA C. Pellegrino Department of Physics, University of
California, Santa Barbara, CA 93106-9530, USA Las Cumbres Observatory, 6740
Cortona Dr, Suite 102, Goleta, CA 93117-5575, USA J. E. Andrews Steward
Observatory, University of Arizona, 933 North Cherry Avenue, Tucson, AZ
85721-0065, USA P. J. Brown Department of Physics and Astronomy, Texas A&M
University, 4242 TAMU, College Station, TX 77843, USA George P. and Cynthia
Woods Mitchell Institute for Fundamental Physics & Astronomy Koichi Itagaki
(板垣公一) Itagaki Astronomical Observatory, Yamagata 990-2492, Japan M.
Shahbandeh Department of Physics, Florida State University, Tallahassee, FL
32306, USA K. A. Bostroem Department of Physics and Astronomy, University of
California, 1 Shields Avenue, Davis, CA 95616-5270, USA L. Chomiuk Center for
Data Intensive and Time Domain Astronomy, Department of Physics and Astronomy,
Michigan State University, East Lansing, MI 48824 E. Y. Hsiao Department of
Physics, Florida State University, Tallahassee, FL 32306, USA Nathan Smith
Steward Observatory, University of Arizona, 933 North Cherry Avenue, Tucson,
AZ 85721-0065, USA S. Valenti Department of Physics and Astronomy, University
of California, 1 Shields Avenue, Davis, CA 95616-5270, USA
(Received Soon; Revised After that; Accepted After that)
###### Abstract
We present photometric and spectroscopic observations of the nearby Type Ia SN
2019yvq, from its discovery $\sim$1 day after explosion to $\sim$100 days
after its peak brightness. This SN exhibits several unusual features, most
notably an extremely bright UV excess seen within $\sim$5 days of its
explosion. As seen in Swift UV data, this early excess outshines its “peak”
brightness, making this object more extreme than other SNe with early UV/blue
excesses (e.g. iPTF14atg and SN 2017cbv). In addition, it was underluminous
($M_{B}=-18.4$), relatively quickly declining ($\Delta m_{15}(B)=1.35$), and
shows red colors past its early blue bump. Unusual (although not
unprecedented) spectral features include extremely broad-lined and high-
velocity Si absorption. Despite obvious differences in peak spectra, we
classify SN 2019yvq as a transitional member of the 02es-like subclass due to
its similarities in several respects (e.g. color, peak luminosity, peak Ti,
nebular [Ca II]). We model this dataset with a variety of published models,
including SN ejecta–companion shock interaction and sub-Chandrasekhar mass WD
double detonation models. Radio constraints from the VLA place an upper limit
of $(4.5\text{--}20)\times 10^{-8}$ M⊙/yr on the mass-loss rate from a
symbiotic progenitor, which does not exclude a red giant or main sequence
companion. Ultimately we find that no one model can accurately replicate all
aspects of the dataset, and further we find that the ubiquity of early
excesses in 02es-like SNe Ia requires a progenitor system that is capable of
producing isotropic UV flux, ruling out some models for this class of objects.
supernovae: individual (SN 2019yvq) – supernovae: general
journal: ApJ
facilities: Las Cumbres Observatory (Sinistro), FTN (FLOYDS), Bok (B&C Spectrograph), MMT (Blue Channel spectrograph), IRTF (SpeX), Swift (UVOT), VLA
software: astropy (Astropy Collaboration et al., 2013; The Astropy Collaboration et al., 2018), SNooPy (Burns et al., 2011), Tardis (Kerzendorf et al., 2019), sncosmo (Barbary et al., 2016), SALT2 (Guy et al., 2007), MLCS2k2 (Jha et al., 2007), lightcurve_fitting (Hosseinzadeh, 2019), emcee (Foreman-Mackey et al., 2013)
## 1 Introduction
Despite the fact that Type Ia supernovae (SNe) were used as standardizable
candles to discover the accelerating expansion of the universe and constrain
its energy content (Riess et al., 1998; Perlmutter et al., 1999), open
questions remain about their progenitor systems. The SNe themselves are
understood to be the thermonuclear explosions of carbon/oxygen white dwarfs
(WDs) (Hoyle & Fowler, 1960), but beyond that there are large uncertainties
about both the progenitor system(s) and explosion mechanism(s).
Many possible progenitor systems have been theorized. The two broad classes
are the single-degenerate channel (Whelan & Iben, 1973), where the WD accretes
matter slowly from a nondegenerate companion, and the double-degenerate
channel (Iben & Tutukov, 1984), where the source of the extra matter needed to
ignite the WD is a second WD. Within these two broad channels exist many
specific and sometimes exotic scenarios, e.g. dynamically driven double-
degenerate double-detonation systems (Shen et al., 2018) or rotating super-
Chandrasekhar mass WD progenitors (Yoon & Langer, 2005). For reviews, see
Howell (2011), Wang & Han (2012), and Maoz et al. (2014).
Kasen (2010) predicted an observational signature that could distinguish
between the single- and double-degenerate cases. If the donor star were
nondegenerate, the SN ejecta would run into it and become shock-heated. The
shock-heated ejecta would then emit an excess of UV/blue light which could be
detected in the SN’s early-time lightcurve. The strength of this signature is
dependent on the companion’s size and separation, the velocity of the ejecta,
and the viewing angle of the event. Kasen (2010) predicted that the viewing
angle effect alone would make this early blue excess visible in only 10% of
SNe Ia which explode through this single-degenerate channel.
Following the publication of Kasen (2010), many rolling supernova searches
were examined for evidence of the effect in the optical and UV (Hayden et al.,
2010; Bianco et al., 2011; Ganeshalingam et al., 2011; Tucker, 2011). These
found no evidence for the predicted shock with a red giant companion. Brown et
al. (2012a) also excluded red giant companions from a smaller sample of SNe Ia
with constraining UV data. The early optical observations of SN 2011fe were
additionally able to place extremely tight constraints on optical and UV shock
emission from the companion (Nugent et al., 2011; Brown et al., 2012b).
Early blue excesses have since been seen in a small number of SNe, most
notably SN 2012cg (Marion et al., 2016), iPTF14atg (Cao et al., 2015),
iPTF16abc (Miller et al., 2018), and SN 2017cbv (Hosseinzadeh et al., 2017).
The proliferation of transient surveys has allowed for much more consistent
and thorough followup of young SNe (e.g. Yao et al., 2019). This in turn has
revealed a wide range of early behaviors including varying early color
evolution (Bulla et al., 2020; Stritzinger et al., 2018; Brown et al., 2017,
2018) and a range of (sometimes broken) power laws which describe their rising
lightcurves (Olling et al., 2015; Miller et al., 2018, 2020a; Li et al., 2019;
Shappee et al., 2019; Dimitriadis et al., 2019).
A number of progenitor scenarios can reproduce some range of these observed
properties, including explosions which vary the degree of nickel mixing in the
exploding WD (Piro & Morozova, 2016) leading to a range of early colors, and
models of sub-Chandrasekhar mass WDs detonated by the ignition of a surface
layer of He (Polin et al., 2019a) leading to a wide range of absolute
magnitudes and colors.
In this paper we present early-time photometry and spectroscopy of the Type Ia
SN 2019yvq, a SN discovered in late 2019 which displays a rare, and unusually
strong, blue bump at early times. The object displays other unusual behavior,
including extremely broad and high-velocity Si II at peak and strong nebular
[Fe II] and [Ca II]. Its unique combination of characteristics makes it an
excellent stress-test for several models of SNe Ia. Multiple papers have
already been written about this object (Miller et al., 2020b; Siebert et al.,
2020; Tucker et al., 2020), which we reference throughout, as this work agrees
with prior findings in some respects and disagrees in others.
In Section 2 we describe the object’s discovery and the observational followup
by Las Cumbres Observatory, which obtained data presented here for the first
time, and the Swift space telescope. In Section 3 we discuss interesting
features of the dataset, and we compare specifically to 02es-like SNe Ia in
Section 4. In Section 5 we compare our data to models from Kasen (2010) and
Polin et al. (2019a) and discuss the difficulty of finding a single model that
reproduces all features of our dataset. In Section 6 we discuss constraints on
the progenitor system as indicated by radio observations from the Karl G.
Jansky Very Large Array. We discuss implications of the event and its
properties in Section 7. We conclude in Section 8.
## 2 Discovery & Observations
### 2.1 Discovery
SN 2019yvq was discovered by Koichi Itagaki (Itagaki, 2019) on 2019 December
28.74 UT using a Celestron 14 inch telescope at an unfiltered magnitude of
16.7. A nondetection of the same field, using an identical setup, was found
the night before (2019 December 27.72 UT), with a limiting unfiltered
magnitude of $\sim$18.2. This nondetection is approximately 0.3 days after the
nondetection reported by ASAS-SN in Tucker et al. (2020), and places an even
more stringent limit on the rise-time and early lightcurve. Following the
initial discovery, both the ZTF (Bellm et al., 2019) and ATLAS (Tonry et al.,
2018) surveys reported detections of SN 2019yvq. An initial classification
spectrum using HOWPol on the 1.5-m Kanata telescope on 2020 January 01.84
suggested that SN 2019yvq was a Type Ib/c supernova (Kawabata, 2020), although
a subsequent spectrum (taken on 2020 January 4.07) with the SPRAT spectrograph
on the Liverpool telescope clearly showed that SN2019yvq was a SN Ia before
maximum light. A spectrum from the SED Machine on the Palomar 60-in telescope
taken on 2020 January 12.36 further confirmed that SN 2019yvq is a SN Ia. We
have downloaded these spectra from the Transient Name Server (TNS;
https://wis-tns.weizmann.ac.il/) and incorporated them into our analysis.
Figure 1: UV and optical extinction-corrected photometry of SN 2019yvq. As
discussed in Section 3.1 we adopt $E(B-V)_{\textrm{host}}=0.052$ throughout
our analysis, in addition to $E(B-V)_{\textrm{Milky Way}}=0.017$. The first
epoch shows an extremely strong blue/UV excess. The lines connecting the
points are simple linear interpolations to guide the eye, especially to the
strength of the early UV excess, and do not represent models. The epochs of
the spectra shown in Figure 2 are included as vertical grey lines.
SN 2019yvq is located at right ascension $12^{\rm h}27^{\rm m}21^{\rm s}.85$
and declination $+64^{\circ}47^{\prime}59^{\prime\prime}.8$ (J2000), and lies
12.9 arcsec to the southeast of the host galaxy NGC 4441, which has a redshift
of $z=0.00908$ (Rothberg & Joseph, 2006, retrieved via NED;
http://ned.ipac.caltech.edu/). NGC 4441 is an SAB0-type galaxy, and is
clearly undergoing a merger event, as can be seen in deep images from the DESI
Legacy Imaging Survey (http://legacysurvey.org/viewer; Dey et al., 2019). A
surface brightness fluctuation (SBF) distance to NGC 4441 suggests
$D$$\approx$20 Mpc (Tonry et al., 2001), although the disturbed nature of the
host likely affects this measurement. The Hubble-flow distance is
$D$$\approx$40 Mpc, which is in agreement with the distance modulus calculated
in Miller et al. (2020b). Both to be consistent with Siebert et al. (2020) and
Tucker et al. (2020), and because using the SBF distance value would further
decrease the object’s already low luminosity, we adopt the distance modulus
from Miller et al. (2020b) throughout this work ($\mu=33.14\pm 0.11$,
$D=42.5\pm 2.2$ Mpc). We also adopt a Milky Way extinction value of
$E(B-V)$=0.017 mag using the Schlafly & Finkbeiner (2011) calibration of the
Schlegel et al. (1998) dust maps.
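For reference, the adopted distance modulus and distance are related by $\mu=5\log_{10}(D/10\,{\rm pc})$, i.e. $D=10^{(\mu-25)/5}\,{\rm Mpc}=10^{(33.14-25)/5}\,{\rm Mpc}\approx 42.5$ Mpc.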
Figure 2: The top left and right-hand panels indicate the optical spectral
evolution of SN 2019yvq, separated into panels purely for readability. The
bottom left panel shows the IR spectrum at $\sim$6 days taken with SpeX on the
IRTF (Section 2.3). Epochs (in days) with respect to B-band maximum are
included as labels on each spectrum. The wavelengths of spectral features are
marked with dashed lines, corresponding to their approximate velocity which
they have at maximum light to guide the eye in tracking their velocity
evolution. Telluric features are marked with $\oplus$. The primary source for
spectra was the FLOYDS instrument at Las Cumbres (black spectra), but a number
of other spectra (detailed in Sections 2.1 and 2.3) are included as well. The
final three spectra have been binned by a factor of 5, for clarity.
### 2.2 Photometry
Figure 1 displays our full photometric dataset.
An intense UBVgri follow-up campaign was undertaken using the 1-m telescopes
of Las Cumbres Observatory (LCO; Brown et al., 2013). Data were reduced using
lcogtsnpipe (Valenti et al., 2016) by performing PSF-fitting photometry.
Zeropoints for images in the UBV filters were calculated from Landolt standard
fields (Landolt, 1992) taken on the same night by the same telescope.
Likewise, zeropoints for images in the gri filter set were calculated by using
Sloan magnitudes of stars in the same field as the object (SDSS Collaboration
et al., 2017).
Observations from the Neil Gehrels Swift Observatory (Swift; Gehrels et al.,
2004) and the Ultra-Violet Optical Telescope (UVOT; Roming et al., 2005) were
obtained under GI Program 1518168 and reduced using the pipeline associated
with the Swift Optical Ultraviolet Supernovae Archive (SOUSA; Brown et al.,
2014) and the zeropoints of Breeveld et al. (2010). The temporal sensitivity
changes were corrected for using the 20200925 CALDB
(https://heasarc.gsfc.nasa.gov/docs/heasarc/caldb/swift/docs/uvot/uvotcaldb_throughput_06.pdf).
Template observations from 2012 were used to subtract the host galaxy count
rates from the UVW2, UVM2, and UVW1 filters.
In addition to the Las Cumbres and Swift photometric data, we have also
obtained unfiltered photometry taken with the Itagaki Astronomical
Observatory’s Celestron 14-inch telescope in the days after discovery,
including the nondetection taken the day prior to SN 2019yvq’s discovery.
We gather $g$ and $r$ band data from the public ZTF data stream using the MARS
transient broker (https://mars.lco.global/), and present the near-peak data in
Figure 1 as comparison.
### 2.3 Spectroscopy
Figure 2 displays our full spectroscopic dataset.
A sequence of optical spectra were taken primarily with the FLOYDS
spectrograph mounted on Las Cumbres Observatory’s 2-m telescope on Haleakala,
HI, and were reduced as described in Valenti et al. (2014).
Additional optical spectroscopy was obtained with the 2.3-m Bok telescope and
the B&C spectrograph using both the 300 line/mm grating and a higher
resolution 1200/mm line grating. We also obtained an MMT medium resolution
(1200 l/mm) spectrum on 2020-02-18 11:27 UTC using the Blue Channel
spectrograph (Schmidt et al., 1989). These data were reduced using standard
IRAF tasks. We use the Na ID doublet in the high resolution data as one method
of estimating host galaxy extinction from cold gas as discussed in Section
3.2.3.
Finally, a near-infrared spectrum of SN 2019yvq was taken on 2020 Jan 20 (UT)
with SpeX (Rayner et al., 2003) on the NASA Infrared Telescope Facility in
cross-dispersed ‘SXD’ mode, providing wavelength coverage from $\sim$0.8–2.4
$\mu$m; these data were reduced in a standard way, as described in Hsiao et
al. (2019).
All new data are made publicly available on the Weizmann Interactive Supernova
Repository (https://wiserep.weizmann.ac.il/; Yaron & Gal-Yam, 2012).
## 3 Data Analysis
### 3.1 Lightcurve and Color Evolution Analysis
Method | $E(B-V)$ | $\sigma_{E(B-V)}$ | $M_{B}$
---|---|---|---
Na ID | 0.052 | ${}_{-0.025}^{+0.053}$ | -18.41
Lira Law | 0.268 | 0.043 | -19.29
SNooPy | 0.342 | $0.031\pm 0.060\textrm{\ (sys)}$ | -19.60
SNooPy (no $i$) | 0.445 | $0.049\pm 0.060\textrm{\ (sys)}$ | -20.02
SALT2 | 0.347 | 0.015 | -19.62
SALT2 (no $i$) | 0.631 | 0.019 | -20.78
MLCS2k2 | 0.252 | 0.0036 | -19.23
MLCS2k2 (no $i$) | 0.279 | 0.0038 | -19.34
Table 1: Range of extinction values and peak absolute magnitudes computed
using different methods and SN Ia fitting programs. SALT2 and MLCS2k2 fits
were done using the sncosmo package and Lira Law fits were done with a fixed
slope, as discussed in the text. We adopt the Na ID extinction value
throughout our analysis.
The lightcurve of SN 2019yvq is presented in Figure 1. The most striking
feature of this lightcurve is the strong wavelength-dependent excess of the
first epoch, seen in data from Las Cumbres, ZTF, and Swift. We note especially
the excess in the mid-UV Swift filters, where the magnitude during the initial
bump is brighter than the “peak” magnitude. This is even more extreme than
other objects with an observed mid-UV excess at early times such as SN 2012cg
(Marion et al., 2016) and iPTF14atg (Cao et al., 2015). We also note that SN
2017cbv (Hosseinzadeh et al., 2017), the SN Ia with the most clearly resolved
early optical blue bump, displayed only a moderate excess in the UVW1, UVM2,
or UVW2 bands compared to what is expected from companion shock interaction
models (as shown in Figure 3 of that paper), although its UV colors are still
quite blue compared to other normal SNe Ia (Brown et al., 2017).
Different methods of estimating the extinction due to the host galaxy of SN
2019yvq yielded significantly different results, as summarized in Table 1. For
all fits we fixed $R_{V,\textrm{host}}=3.1$.
Figure 3: Comparisons of the $B-V$ color evolution of SN 2019yvq (black) to
the Lira Law (pink). The best-fit line (dashed) to the appropriate SN 2019yvq
data has a slope $2.9\sigma$ away from the expected slope. Fixing the slope
(solid line) is one method of measuring the host extinction, reported in Table
1. Following the convention of Phillips et al. (1999), data are plotted
relative to $t_{V}$ (days from V-band maximum).
Figure 4: Color evolution of SN 2019yvq compared with other SNe Ia. We assume
an explosion epoch of SN 2019yvq derived from the best-fit companion shocking
model, and the two sets of model colors plotted are the best-fit models
described in Section 5. We note again the extremely strong early blue color in
every filter combination besides $r-i$.
One method of calculating extinction in SNe Ia is the “Lira Law.” As shown in
Figure 1 of Phillips et al. (1999), the $B-V$ color evolution of many SNe Ia
is similar between 30 and 90 days after $V$ maximum, and can be fit with a
line described by Equation 1 of that paper. That expected linear color
evolution is shown in pink in Figure 3. $E(B-V)$ can then be measured by
fitting a line with the same slope to the color data, and finding the linear
offset needed to deredden the fit to the expected Lira Law values. Using this
method we measure $E(B-V)=0.268\pm 0.043$ for SN 2019yvq. However, the $B-V$
color evolution of SN 2019yvq has a best-fit slope $2.9\sigma$ away from the
slope predicted by the Lira Law. The shallower slope of SN 2019yvq is not
unprecedented (see e.g. Förster et al., 2013), but does cast doubt on the
$E(B-V)$ value obtained from the Lira Law comparison.
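A minimal sketch of the fixed-slope fit (ours, with hypothetical colors), assuming the Phillips et al. (1999) intrinsic relation $(B-V)_{0}=0.725-0.0118(t_{V}-60)$ for $30<t_{V}<90$ days:

```python
import numpy as np

# Lira Law reddening estimate: with the slope fixed, E(B-V) is simply the
# mean offset of the observed B-V colors from the intrinsic Lira line.
t_V = np.array([35.0, 45.0, 55.0, 65.0])      # days from V maximum (hypothetical)
bv_obs = np.array([1.25, 1.12, 1.02, 0.90])   # observed B-V colors (hypothetical)

bv_lira = 0.725 - 0.0118 * (t_V - 60.0)       # Phillips et al. (1999), Eq. 1
offsets = bv_obs - bv_lira
ebv = offsets.mean()
ebv_err = offsets.std(ddof=1) / np.sqrt(len(offsets))
print(f"E(B-V) = {ebv:.3f} +/- {ebv_err:.3f}")
```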
We also attempted to fit the BVgri data from Las Cumbres using the SNooPy
software package (Burns et al., 2011). We obtained the extinction value by
comparing to EBV_model, which required a high extinction value (0.342) to
match the data, similar to the findings in Miller et al. (2020b). The fits
start at a phase of -10 days with respect to maximum light, and thus the early
excess should not bias the results. We found that the fits strongly
overpredicted the secondary i maximum, so we also performed fits which
excluded those data.
In contrast to normal SNe Ia, SN 2019yvq lacks a strong secondary NIR peak,
although Tucker et al. (2020) do find evidence of a weak secondary NIR maximum
in both the ZTF i-band data and the TESS lightcurve. We take this very weak
secondary NIR peak as one of several pieces of evidence that the object is
intrinsically underluminous compared to normal SNe Ia (see Section 4).
We repeated this process on the $UBVgri$ Las Cumbres data using the SALT2 (Guy
et al., 2007) and MLCS2k2 (Jha et al., 2007) fitting packages, accessed
through SNCosmo (Barbary et al., 2016) with an added CCM89Dust component to
measure $E(B-V)$. We exclude the first three epochs of data, to reduce biases
from attempting to fit the early blue excess. The fits were generally poor: in
order to achieve a $\chi_{\textrm{reduced}}^{2}$ of less than 2 on the best
fits (MLCS2k2, no i band), we required a systematic error of more than three
times the average flux error to be added in quadrature at each point. In
general the fits again overpredicted the secondary i-band peak. Values for the
SNooPy and SNCosmo fits are reported in Table 1.
The fact that different methods of estimating $E(B-V)$ led to such a wide
range of extinction values, and the fact that methods which relied on fitting
to SN Ia templates resulted in generally poor fits, led us to conclude that SN
2019yvq is an inherently peculiar SN Ia. We therefore adopt the extinction
value obtained from fitting the Na ID lines, $E(B-V)=0.052^{+0.053}_{-0.025}$
(see Section 3.2.3 for methodology). This value, while significantly lower
than other possible values, results in an underluminous peak absolute
magnitude, which is consistent with SN 2019yvq’s weak secondary IR maximum and
high lightcurve decline rate. Additionally, it is consistent with the value
calculated in Miller et al. (2020b) ($E(B-V)_{\textrm{host}}\approx 0.032$),
which they derive using the same method, but a different spectrum. Siebert et
al. (2020) and Tucker et al. (2020) adopt this value from Miller et al.
(2020b), so our extinction value is also consistent with all previously
published work on SN 2019yvq.
We fit a fifth-order polynomial to the near-peak B data to obtain standard
lightcurve parameters. We find that SN 2019yvq reached its peak apparent
magnitude of $B_{\textrm{max}}=15.01\pm 0.03$ ($M_{B}=-18.4\pm 0.1$) on MJD
$58862.8\pm 0.4$, with $\Delta m_{15}(B)=1.36\pm 0.10$. We note that this
$\Delta m_{15}$ is lower than the value inferred by Miller et al. (2020b) from
the g lightcurve and used in Siebert et al. (2020).
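A sketch of this peak-parameter measurement (ours, on hypothetical near-peak data):

```python
import numpy as np

# Fit a fifth-order polynomial to the near-peak B lightcurve, then read off
# t_max, B_max, and Delta m_15(B) = B(t_max + 15 d) - B_max.
mjd = np.array([58850., 58853., 58856., 58859., 58862., 58865.,
                58868., 58871., 58874., 58877., 58880.])
B = np.array([15.80, 15.45, 15.18, 15.05, 15.01, 15.05,
              15.20, 15.42, 15.70, 15.95, 16.15])   # hypothetical magnitudes

t = mjd - mjd[0]
poly = np.poly1d(np.polyfit(t, B, 5))
grid = np.linspace(t.min(), t.max() - 15.0, 2000)   # keep t_max + 15 d in range
t_max = grid[np.argmin(poly(grid))]                 # brightest = minimum magnitude
B_max = poly(t_max)
dm15 = poly(t_max + 15.0) - B_max
print(f"t_max = MJD {mjd[0] + t_max:.1f}, B_max = {B_max:.2f}, dm15(B) = {dm15:.2f}")
```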
The color evolution of SN 2019yvq is presented in Figure 4. The Swift data for
all objects were extinction-corrected using the method of Brown et al. (2010)
(Table 1). We note that SN 2019yvq becomes rapidly redder in all optical
colors (besides $r-i$) over the first five days. In $(B-V)$ and $(g-r)$
especially, it is much redder than typical SNe Ia such as SN 2011fe (data from
Zhang et al., 2016) and more closely mirrors the evolution of iPTF14atg.
iPTF14atg was also an underluminous SN Ia with a strong early UV excess (Cao
et al., 2015), and belonged to the 02es-like subclass, whose namesake is
described in Ganeshalingam et al. (2012). As discussed in Section 4, we
classify SN 2019yvq as a transitional 02es-like.
In terms of Swift UV colors, SN 2019yvq stands out even more compared to
typical SNe Ia, and is $\gtrsim$1 magnitude bluer than SN 2017cbv in
$(UVW1-U)$ at $\sim$5 days after the estimated explosion time. This extreme UV
color and subsequent evolution is again most similar to iPTF14atg within ten
days of explosion.
Based on the lightcurve parameters, we can begin to put SN 2019yvq in context
with other SNe Ia, especially those with early light curve data as well. In
the left panel of Figure 5, we show the $M_{B}$ versus $\Delta m_{15}(B)$
relation of Phillips (1993), populated with a large sample of nearby SNe Ia
(see Figure 14 from Parrent et al. 2014, with original data from Blondin et
al., 2012; Folatelli et al., 2012; Pakmor et al., 2013). When we include the
“blue” and “red” sample of early SN Ia of Stritzinger et al. (2018) (hereafter
S18), we see the tendency of early blue objects to be slower declining and
slightly brighter than the red sample. SN 2019yvq notably stands out from the
“early-blue” sample with its much higher decline-rate. In this parameter space
it is closer to another transitional 02es-like, SN 2006bt (the orange star in
Figure 5), although still well-separated from that object.
Figure 5: Demographic properties of SN 2019yvq (black star in each plot). We
note that SN 2019yvq is at the edge of normal parameter space in several
respects, and is well-separated from the early blue objects of S18. It is
instead closer to (although still substantially different from) the
transitional 02es-like SN 2006bt (orange star in each plot). Left: Luminosity
decline rate relation for SNe Ia, with the gray background points coming from
the union of samples presented by several groups (Blondin et al., 2012;
Folatelli et al., 2012). The orange polygon and data points replicate the
sample of 02es-like SNe Ia in Taubenberger (2017), with the transitional SN
2006bt represented by the orange star in each plot. In blue and red we show
the early SN Ia sample presented by S18, split by their early light curve
colors. Out of the S18 sample, we have adjusted the absolute magnitude of SN
2017cbv to match the distance of $D=12.3$ Mpc found in Sand et al. (2018).
Center: The location of SN 2019yvq (black star) in the Branch diagram (Branch
et al., 2006), which groups SNe Ia as broad line (BL), shallow silicon (SS),
core normal (CN), or cool (CL) based on the pseudo-equivalent widths of two Si
II features. The background sample is the same as the left panel, and the only
other 02es-like (in orange) in Blondin et al. (2012) is SN 2002es itself.
Right: A replica of the plot from Polin et al. (2019a) comparing $0.01$
M$\odot$ He shell double detonation models to a sample of SNe Ia from Zheng et
al. (2018), with velocities measured at peak. The prototype object SN 2002es
has a Si II velocity which is too low (5890 km s-1) to fit in the axis range
of these plots.
### 3.2 Spectral Analysis
We show the spectral evolution of SN 2019yvq in Figure 2, from roughly $-14$
to $+117$ days with respect to $B$-band maximum. Using the Supernova
IDentification software package (SNID; Blondin & Tonry, 2007) on the FLOYDS
spectrum taken at $+$1.8 d with respect to $B$-band maximum we find that all
reasonable matches correspond to normal SN Ia. In particular, the spectrum is
well matched to SN 2002bo near maximum light except in the region of
$\sim$4000–4500 Å, which we attribute to weak Ti II absorption and discuss
further in Section 4. We note that the initial spectrum of SN 2019yvq shows
faint H$\beta$, H$\alpha$ and [N II] emission; upon investigation, we believe
this emission is from the host galaxy due to slight mis-centering of the SN
within the slit.
#### 3.2.1 Velocities and Spectral Classification
We measure a Si II $\lambda$6355 velocity of 14,400 km s-1 near maximum light,
as well as pseudo-equivalent width (peW) values of 169 Å and 20 Å for the Si
II $\lambda$6355 and $\lambda$5972 features, respectively, from the +1.8d
FLOYDS spectrum (these measurements, and those that follow, are in broad
agreement with those of Miller et al. 2020b). Here SN 2019yvq is clearly a
high-velocity (HV) object in the Wang et al. (2009) classification scheme
(e.g. objects with Si II $\lambda$6355 $\gtrsim$11,800 km s-1 near max). To
put SN 2019yvq in the context of the standard Branch classification scheme
(Branch et al., 2006), we plot it along with a larger sample of SNe Ia
(Blondin et al., 2012) in the center panel of Figure 5. Here SN 2019yvq is
clearly a Broad Lined (BL) SN Ia, with a very deep and broad Si II
$\lambda$6355 feature. This is consistent with its match to SN 2002bo, which
was another BL event. We also plot the blue and red sample from S18 on the
Branch diagram, and note that SN 2019yvq again stands alone among the early
blue objects as a broad lined event, as most of the others are Shallow Silicon
or Core Normals, and instead it is closer to the transitional 02es-like SN
2006bt.
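A sketch of the velocity measurement (ours; the published measurements may use a different procedure, e.g. fitting the absorption profile rather than taking its minimum):

```python
import numpy as np

C_KMS = 299792.458  # speed of light, km/s

def si_ii_velocity(wave, flux, rest=6355.0, window=(5800.0, 6300.0)):
    """Blueshift velocity (km/s) of the Si II 6355 absorption minimum,
    measured on a rest-frame, lightly smoothed spectrum."""
    sel = (wave > window[0]) & (wave < window[1])
    lam_min = wave[sel][np.argmin(flux[sel])]
    z = lam_min / rest - 1.0
    # Relativistic Doppler; at ~14,000 km/s this differs from the
    # classical estimate by a few percent.
    beta = ((1.0 + z) ** 2 - 1.0) / ((1.0 + z) ** 2 + 1.0)
    return -C_KMS * beta
```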
To explore the demographic place of SN 2019yvq further, we plot the Si II
$\lambda$6355 velocity near maximum light versus the absolute $B$-band
magnitude in the right panel of Figure 5. This plot is largely a reproduction
of Figure 11 in Polin et al. (2019a), with the grey data points originating
from the SNe Ia sample of Zheng et al. (2018); the blue and red sample of S18
and SN 2006bt are plotted as well. As discussed by Polin et al. (2019a), two
groups of SNe Ia are apparent in the plot: one that is tightly clumped at
$v\approx 10,500$ km s-1 and $M_{B}\approx-19.4$ and is attributed to
Chandrasekhar mass explosions, and a second group that follows a relationship
between luminosity and velocity, roughly tracking expectations from the sub-
Chandrasekhar class of explosions, as illustrated by the dashed line which
depicts a set of 0.01 $M_{\odot}$ He shell double detonation models. It is
clear that SN 2019yvq is not well-matched by either population, and a model
with different He shell mass is needed to replicate its position, as is found
in Section 5.2.
#### 3.2.2 Search for Unburned Carbon
The presence of unburned carbon in SN Ia spectra is potentially a powerful
discriminant between explosion models. Chandrasekhar-mass delayed detonation
explosions predict complete carbon burning for normal-bright SNe Ia (e.g.
Kasen et al., 2009), and increasing amounts of unburned carbon for fainter SNe
Ia (e.g. Höflich et al., 2002). In the explosions of sub-Chandrasekhar mass
white dwarfs, on the other hand, the initial surface detonation may leave
little or no detectable carbon (e.g. Fink et al., 2010; Polin et al., 2019a).
The most commonly searched for carbon feature is C II $\lambda$6580Å, which
can be difficult to detect both because it fades quickly after explosion and
is near the strong Si II $\lambda$6355Å absorption line. Large spectroscopic
samples have found that $\sim$20-30% of early time SNe Ia data have C II
signatures, with the chances of detection increasing the earlier the data were
taken (Thomas et al., 2011; Parrent et al., 2011; Folatelli et al., 2012;
Silverman & Filippenko, 2012; Wyatt et al., 2020). Interestingly, several of
the SN Ia with early light curve excesses have also displayed strong early
carbon, including SN 2017cbv (Hosseinzadeh et al., 2017), iPTF16abc (Miller et
al., 2018) and SN2018oh (Li et al., 2019).
We have closely inspected all of our SN 2019yvq optical spectra through
maximum light at the expected position of C II $\lambda$6580 Å, near the red
shoulder of the Si II $\lambda$6355 Å absorption line. No C II feature is
apparent, and our earliest data do not show the strong carbon absorption seen
in SN 2017cbv and iPTF16abc, although the signal to noise of our early data is
not good enough to make definitive claims on any weak C II feature. We have
further inspected our IRTF spectrum taken at +6 d with respect to $B$-band
maximum, as it has been suggested that the C I $\lambda$1.0693 $\mu$m line is
a good tracer of unburned carbon. No C I line is apparent, but this spectrum
is later than ideal since this feature is most visible around maximum light
(e.g. Hsiao et al., 2013, 2019). Detailed modeling is necessary to completely
rule out any subtle carbon feature, but this is beyond the scope of the
current work.
In conclusion, we can make no definitive claim about the presence of either C
II $\lambda$6580 Å or C I $\lambda$1.0693 $\mu$m, partially due to low signal
to noise data, although we can rule out the strong carbon seen in previous SNe
Ia with blue light curve excesses. This lack of strong carbon is in broad
agreement with expectations from sub-Chandrasekhar helium shell detonation
models (e.g. Polin et al., 2019a), which we explore further in our model
comparisons below.
#### 3.2.3 Medium Resolution Spectra and Na ID
The Na ID doublet is often used to estimate host galaxy extinction in nearby
SNe (e.g. Poznanski et al., 2012), although the correlation between host
extinction and Na ID equivalent width has a large scatter (e.g. Galbany et
al., 2019). Although the diffuse interstellar band at 5780Å has been shown to
be a superior tracer of host extinction (Phillips et al., 2013), we do not
detect the line in our medium resolution Bok spectrum. The Na ID doublet at
the redshift of SN2019yvq’s host ($z$=0.00908) is clearly visible in our
medium resolution Bok B&C spectrum ($R$$\approx$3400) taken on 2020 January 29
UT (a medium resolution MMT Blue Channel spectrum taken on 2020 February 18
does not have sufficient signal to detect the doublet), and we measure 0.28Å
and 0.18Å for the equivalent width of the D1 and D2 lines, respectively. Using
the correlation found by Poznanski et al. (2012), this translates to an
expected host extinction of $E(B-V)_{\textrm{host, Na
ID}}=0.052^{+0.053}_{-0.025}$ mag. As discussed in Section 3.1, this is the
host extinction value we use throughout the paper.
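A minimal sketch of this estimate (ours), assuming the combined-doublet calibration of Poznanski et al. (2012), $\log_{10}E(B-V)=1.17\,{\rm EW}({\rm D1+D2})-1.85$ with $\sim$0.08 dex of intrinsic scatter; the asymmetric errors quoted in the text additionally fold in measurement uncertainty:

```python
# Na I D host-extinction estimate from the measured equivalent widths.
ew_d1, ew_d2 = 0.28, 0.18                  # Angstroms, from the Bok spectrum
log_ebv = 1.17 * (ew_d1 + ew_d2) - 1.85    # Poznanski et al. (2012), combined
ebv = 10 ** log_ebv
hi = 10 ** (log_ebv + 0.08) - ebv          # +/- 0.08 dex intrinsic scatter
lo = ebv - 10 ** (log_ebv - 0.08)
print(f"E(B-V) ~ {ebv:.3f} (+{hi:.3f}/-{lo:.3f}) mag")   # ~0.05 mag
```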
Figure 6: Nebular spectra of SNe Ia focusing on the [Ca II], [Fe II], [Ni II]
line complex. This feature is strongest in the nebular spectra of
underluminous SNe Ia, and is the subject of thorough modeling in Siebert et
al. (2020) for a +153d Keck spectrum of SN 2019yvq. The legend displays the
shortened SN name (e.g. SN2019yvq $\rightarrow$ 19yvq) and the epoch in days
after $B$ maximum. Spectra have been normalized to have identical mean fluxes
over their full wavelength range ($\sim$3500–10000 Å). SN 2019yvq lies in
between normal SNe Ia (represented by SN 2011fe) and low-luminosity SNe Ia
(represented by the 91bg-like SN 1999by).
#### 3.2.4 Nebular spectra of SN 2019yvq
The nebular spectra of SNe Ia can provide an independent way to differentiate
between progenitor systems, since different progenitors and explosion channels
should have different nebular signatures.
The violent merger of two WDs should result in nebular [O I] due to its
ejection at low velocities (Pakmor et al., 2012), although this has only been
seen in the nebular spectra of the 02es-like SN 2010lp (Taubenberger et al.,
2013) and is not present in the nebular spectra of SN 2019yvq.
The double-detonation scenario should only partially burn the core, leaving
strong Ca signatures (Polin et al., 2019b). SN 2019yvq does display nebular
[Ca II] which is intermediate in strength between typical- and low-luminosity
SNe Ia, as shown in Figure 6.
Lastly, the companion interaction scenario should produce H and He emission
from the swept-up material (Botyánszki et al., 2018; Dessart et al., 2020),
although this is seen in an extremely limited number of cases (Kollmeier et
al., 2019; Prieto et al., 2020). We use the nebular spectra of SN 2019yvq to
measure limits on the luminosity and mass of swept-up H and He, following the
methodology of Sand et al. (2019) and references therein. To briefly
summarize, we first smooth the spectrum on a scale much larger than the
expected width of an H$\alpha$ feature. We then subtract off the smoothed
spectrum and search for any excess flux in the residuals, assuming an expected
width of FWHM $\approx$ 1000 km s-1 (22 Å) for the line width and a potential
offset from the rest wavelength of up to $\sim$1000 km s-1 as well. Following
Equation 1 from Botyánszki et al. (2018), we then estimate the mass of the
stripped material, after predicting the luminosity of SN 2019yvq at +200 days.
For the nebular spectrum taken +106 days past maximum,
$M_{\textrm{H}}<1.6\times 10^{-3}\textrm{ M\textsubscript{$\odot$}}$ and
$M_{\textrm{He}}<2.0\times 10^{-2}\textrm{ M\textsubscript{$\odot$}}$ (using
the He I $\lambda$6678 line). Using an additional nebular spectrum taken +117
days past maximum, $M_{\textrm{H}}<1.7\times 10^{-3}\textrm{
M\textsubscript{$\odot$}}$ and $M_{\textrm{He}}<2.1\times 10^{-2}\textrm{
M\textsubscript{$\odot$}}$. With access to a higher signal-to-noise spectrum,
Siebert et al. (2020) place even stricter limits on the amount of swept-up H
and He: $M_{\textrm{H}}<2.8\times 10^{-4}\textrm{ M\textsubscript{$\odot$}}$
and $M_{\textrm{He}}<2.4\times 10^{-4}\textrm{ M\textsubscript{$\odot$}}$.
Parameter | 02es-like SNe Ia | SN 2019yvq
---|---|---
$M_{B}$ | -17.6 – -18.1 | -18.41
$\Delta m_{15}(B)$ | 1.1 – 1.3 | 1.36
Rise time (days) | 19 – 20 | 18.7
$(B-V)_{\textrm{max}}$ | 0.2 – 0.5 | 0.22
Secondary IR maximum | Weak | Weak
$v_{\textrm{Si II}}$ (km s-1) | 6000 – 10000 | 14400
Ti II at peak | Yes | Yes
nebular [Fe II] and [Ca II] | Yes | Yes
Table 2: Comparisons between SN 2019yvq and 02es-like SNe Ia. Parameter ranges
for 02es-like SNe Ia are taken from Taubenberger (2017) and are intended to be
approximate, reflecting the small sample size and diversity of this subclass.
The combination of the presence of [Ca II] and a lack of narrow hydrogen
emission is consistent with a double-detonation progenitor system, which is
what is inferred by Siebert et al. (2020). Despite these limits, we cannot
unequivocally claim that SN 2019yvq is a double detonation event due to
discrepancies in best-fit models of photospheric photometry and nebular
spectroscopy. Our conclusion in this regard is in agreement with Tucker et al.
(2020) and Miller et al. (2020b), and is discussed in more detail in Section
5.2.
Figure 7: Comparisons of SNe Ia peak spectra over a wide range of
luminosities. Although the spectrum of SN 2019yvq is quite similar to SN
2002bo (a more typical luminosity SN Ia), its primary difference is in the
$\sim$4000–4500 Å region. This coincides with the “titanium trough” present in
lower luminosity SNe Ia, and SN 2019yvq’s extra absorption in this wavelength
region supports the interpretation of it as an underluminous SN Ia despite
obvious differences when comparing to the spectrum of SN 2002es. The
combination of low temperature and luminosity with broad high-velocity Si II
is rarely seen in SNe Ia and is difficult to reproduce in models.
## 4 Comparisons to SN 2002es
SN 2019yvq shares some characteristics with 02es-like SNe Ia, and could be
considered an 02es-like depending on how broad a definition of that subclass
is taken. We classify it as a transitional 02es-like. Although this term has
not previously been used in the literature to describe any objects, it
accurately reflects the nature of SN 2019yvq. Table 2 summarizes various
photometric and spectroscopic signatures of 02es-like SNe Ia, taken from
Taubenberger (2017). See Ganeshalingam et al. (2012) for a study of the
eponymous SN 2002es, and Taubenberger (2017) and White et al. (2015) for
reviews of this subclass.
SN 2019yvq is at the edge of what could be considered 02es-like in several
respects. Its peak brightness and lightcurve width are on the edge of the
class, as seen in the left panel of Figure 5. Like 02es-like SNe Ia, SN
2019yvq also displays an almost nonexistent secondary IR maximum and red
colors after its initial blue excess (see Figure 4 and its similarity to the
02es-like iPTF14atg).
Spectroscopically there are both similarities and obvious differences, as
highlighted in Figure 7. The peak spectrum of SN 2019yvq is most similar to SN
2002bo, which also displayed deep Si II 6355 and had a similar Si II line
ratio. SN 2002bo had a more typical peak luminosity for SNe Ia
($M_{B}=-19.41$, Benetti et al., 2004). SN 2019yvq’s Si II velocity and line
ratio make it an outlier compared to other 02es-like SNe Ia, since these
spectral features would normally indicate an energetic and luminous event.
Figure 7 also includes for comparison SN 2006bt, which displayed Si II 6355
which was higher-velocity and broader than typical SNe Ia, but weaker and
lower-velocity than SN 2019yvq. We would also classify SN 2006bt as a
transitional 02es-like (in agreement with Taubenberger, 2017), and we refer to
Foley et al. (2010) for a thorough study of this unusual object.
02es-like SNe Ia are also characterized by Ti II at peak, which is seen in
lower luminosity SNe Ia like SN 1991bg (see Figure 7). We note that the
spectra of SN 2019yvq and SN 2002bo are quite dissimilar bluewards of
$\sim$4500 Å, which is precisely at one end of the Ti II “trough”. Ti II and V
II are efficient at suppressing blue flux in SNe Ia, and we refer to Figure 11
of Cartier et al. (2017) to demonstrate their effects on SNe Ia spectra. In
the wavelength regime of the Ti trough, SN 2019yvq is again intermediate
between typical-luminosity SNe Ia (SN 2011fe, SN 2002bo) and low-luminosity
SNe Ia (SN 2002es, SN 1991bg). We take SN 2019yvq’s suppressed blue flux as
tentative evidence for it having Ti, albeit weaker than the more extreme case
of SN 1991bg.
Strong [Ca II] and [Fe II] emission is also seen in the nebular spectra of
sub-luminous SNe Ia, such as the 02es-like SN 2010lp (Taubenberger et al.,
2013). As already discussed in Section 3.2.4 and shown in Figure 6, SN 2019yvq
displays nebular [Ca II] emission which is intermediate between low-luminosity
and normal-luminosity SNe Ia, again placing it in a transitional region of
parameter space.
The weak/nonexistent secondary IR maximum, relatively high decline rate,
nebular [Ca II], and Ti II are all pieces of evidence in support of SN 2019yvq
being an underluminous event. When the appropriate extinction is used, this
brings its peak luminosity and color to the border of what could be considered
02es-like SNe Ia, and we classify it as a transitional member of that
subclass.
02es-like SN | Host type | Earliest epoch (days) | Filter | Early excess?
---|---|---|---|---
SN 2019yvq1 | SAB0 | -15.8 | Swift | Yes
iPTF14atg2 | E-S0 | -15.5 | Swift | Yes
iPTF14dpk3 | Starburst | -16.3 | R | Maybe
PTF10acdh4 | $\cdots$ | -14.5 | R | Unknown
PTF10ujn4 | $\cdots$ | -10.7 | R | Unknown
PTF10bvr4 | E | ?? | R | Unknown
SN 2002es5 | S0 | -7.3 | B | Unknown
SN 1999bh6 | Sb | 0.6 | B | Unknown
SN 2006bt6,7 | S0/a | -2.6 | B | Unknown
PTF10ops6,8 | SAa? | -6.6 | B | Unknown
SN 2010lp6 | SAb | -7 | B | Unknown
Table 3: A literature sample of known 02es-like SNe Ia. iPTF14atg is the only
other 02es-like observed in blue filters as early as SN 2019yvq, and it also
displays a UV excess. iPTF14dpk displayed a sharp rise from its last non-
detection, and its first detection is high relative to a power law rise.
PTF10ops is either $\sim$148 kpc offset from the spiral galaxy SDSS
J214737.86+055309.3, or in a very faint satellite galaxy of it. Sources: 1:
this work; 2: Cao et al. (2015); 3: Cao et al. (2016); 4: White et al. (2015);
5: Ganeshalingam et al. (2012); 6: Taubenberger (2017); 7: Foley et al.
(2010); 8: Maguire et al. (2011).
Table 3 lists all known 02es-like SNe Ia, including SN 2019yvq. The three SNe
which were detected the earliest all display unusual lightcurve properties.
iPTF14atg (Cao et al., 2015) has already been discussed as a prime example of
an early UV excess. The early lightcurve of iPTF14dpk (Cao et al., 2016)
differed from iPTF14atg, as it rose more than 1.8 magnitudes/day between its
last non-detection and earliest detection (in $R$, the only observed band at
that epoch). Cao et al. (2016) take this as evidence of a dark phase, a time
period after the explosion where the energy generated by radioactive decay has
not yet reached the photosphere (i.e. the explosion has occurred but is not
yet visible). The lightcurve also declined between the first and second
epochs, although Cao et al. (2016) attribute this to scatter consistent with
the errors and not a physical dimming. The paper concludes that the lightcurve
of iPTF14dpk is consistent with the ejecta-companion interaction scenario but
seen from an unfavorable viewing angle.
The fact that the three 02es-like SNe Ia which have the earliest observations
all display extremely unusual, but consistent, lightcurve properties could be
evidence that they all arise from identical progenitor systems, but the sample
of such well-observed events will need to be expanded beyond its current
limited numbers to make this statement with statistical confidence. But even
with the small sample size we can say that the companion-ejecta interaction
models, which predict a strong UV excess $\sim$10% of the time due to viewing
angle constraints, are unlikely to be the source of 02es-like SNe Ia if two of
the three SNe observed at the right epochs display such an excess with
certainty, and the third displays a potential weak excess. We discuss these
implications more in Section 7.
## 5 Model Comparisons
We compare SN 2019yvq to two main classes of models which are capable of
producing early blue bumps: companion shocking models from Kasen (2010) and
double detonation sub-Chandrasekhar mass models from Polin et al. (2019a). Our
best-fit models in these two categories are included in Figure 8. We also
discuss comparisons to models with varying Ni distributions. No one model
reproduces all features of the dataset, so we discuss their benefits and
shortcomings.
### 5.1 Companion Shocking
As discussed in the introduction, Kasen (2010) predicted that an early blue/UV
excess could be seen in the lightcurves of SNe Ia when the ejecta collide with
a nondegenerate companion and gets shock-heated. This excess arising from
companion shocking would only be visible within a few days of the explosion,
and would only be seen for $\sim$10% of SNe Ia due to viewing angle effects.
Hosseinzadeh et al. (2017) previously used these models to fit the lightcurve
of SN 2017cbv. As described in that paper, they require a total of eight
parameters to generate fits: (1) the explosion epoch $t_{0}$, (2) the
companion separation $a$, (3) a factor involving the ejecta mass and speed
$(x\propto Mv^{7})$, (4) the time of maximum $t_{\textrm{max}}$, (5) the
lightcurve stretch $s$, (6) and (7) factors on the $r$ and $i$ flux of the
SiFTO template (Conley et al., 2008) $r_{r}$ and $r_{i}$, and (8) a factor on
the $U$ shock flux $r_{U}$.
We make use of lightcurve_fitting (Hosseinzadeh, 2019) to fit these models,
which uses a Markov Chain Monte Carlo routine based on the emcee package
(Foreman-Mackey et al., 2013) to generate fits. The models consist of two
components: a blackbody flux component and a SiFTO template which can be
stretched and scaled. We extend the blackbody component of the model to
include the early UVW2, UVM2, and UVW1 Swift data, since the first two epochs
were taken in a regime where the SN flux was dominated by the early excess.
Fits struggled to converge until the following steps were taken: (1) we put a
tight prior on the explosion epoch and enforced adherence to the non-detection
from Itagaki Astronomical Observatory, and (2) we extended the multiplicative
factor on the $U$ shock flux to include Swift data due to the strength of the
excess in those bands as well. The parameters for our best-fit model are
listed in Table 4, along with the corresponding best-fit model for SN 2017cbv
from Hosseinzadeh et al. (2017).
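As an illustration of those two fixes, here is a minimal emcee-style sketch (ours, not the internals of lightcurve_fitting; shock_plus_template and model_mag are hypothetical stand-ins for the model evaluation, and the prior values, apart from the non-detection from Section 2.1, are illustrative):

```python
import numpy as np
import emcee

T_LIMIT_MJD, MAG_LIMIT = 58844.72, 18.2  # Itagaki non-detection (Section 2.1)
T0_GUESS, T0_SIGMA = 58844.5, 0.5        # tight prior on explosion epoch (illustrative)

def log_prior(theta):
    t0 = theta[0]                        # remaining parameters omitted for brevity
    if model_mag(theta, T_LIMIT_MJD) < MAG_LIMIT:  # would have been detected
        return -np.inf
    return -0.5 * ((t0 - T0_GUESS) / T0_SIGMA) ** 2

def log_posterior(theta, t, flux, flux_err):
    lp = log_prior(theta)
    if not np.isfinite(lp):
        return -np.inf
    model = shock_plus_template(theta, t)  # shock blackbody + stretched template
    return lp - 0.5 * np.sum(((flux - model) / flux_err) ** 2)

# sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior, args=(t, f, ef))
```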
Figure 8: Comparisons between the Las Cumbres and early Swift data for SN 2019yvq and two different models. The non-detection and first detection from Itagaki are included in black. Shown in the dashed line is the best-fit companion shocking model from Kasen (2010). The parameters for this model are in Table 4 (see Section 5.1 for more detail). The SN template used to generate the companion shocking model did not extend into the mid-UV, so only the blackbody flux component is shown for the Swift filters. The dotted line is the best-fit double detonation model from Polin et al. (2019a): a 0.95 M$\odot$ WD progenitor with 0.055 M$\odot$ of He (see Section 5.2 for more detail).
 | SN 2019yvq | SN 2017cbv
---|---|---
$t_{0}$ (MJD) | 58844.3$\pm$0.1 | 57821.9
$a$ (R$\odot$) | $52^{+6}_{-4}$ | 56
$\frac{M}{\textrm{M}_{\textrm{Ch}}}\left(\frac{v}{10000\textrm{ km s}^{-1}}\right)^{7}$ | $0.099\pm 0.03$ | $3.84\pm 0.19$
$t_{\textrm{max}}$ (MJD) | $58863.14\pm 0.08$ | 57840.2
$s$ | $0.878\pm 0.007$ | 1.04
$r_{r}$ | $0.920\pm 0.006$ | 0.95
$r_{i}$ | $0.736^{+0.006}_{-0.007}$ | 0.85
$r_{U}$ | $1.27\pm 0.04$ | 0.61
Table 4: Comparisons between the best-fit parameters of the Kasen (2010)
companion shocking models for SN 2019yvq (this work) and SN 2017cbv
(Hosseinzadeh et al., 2017). Parameters: time of explosion ($t_{0}$),
companion separation ($a$), a parameter involving the ejecta mass and velocity
($\propto Mv^{7}$), time of peak ($t_{\textrm{max}}$), lightcurve stretch
($s$), factors on the $r$ and $i$ flux in the SiFTO template ($r_{r},r_{i}$),
and a flux factor on the $U$ through $UVW2$ shock flux ($r_{U}$).
The most significant of these is the $r_{U}$ factor: Hosseinzadeh et al.
(2017) find that the $U$ shock flux for models describing SN 2017cbv must be
scaled by a factor of 0.61. There are several possible explanations for this,
including assumptions of spherical symmetry and blackbody SEDs, or the effects
of line blanketing from iron group elements (IGEs) causing the UV/blue flux to
be overestimated.
However, we do not find that the $U$ (and $UVW1$, $UVM2$, $UVW2$) shock flux
needs to be scaled down to match the data. Instead, the best-fit model has a UV
flux enhancement of about 27%. An increase of this size is unsurprising: the
analytic expressions for the blackbody luminosity used in lightcurve_fitting
and derived from Kasen (2010) replicate the numerical models of companion-
ejecta interaction seen at a viewing angle of approximately $30^{\circ}$ (see
Figure 2 of that paper). Explosions with smaller viewing angles result in
higher observed luminosities, up to about 0.25 dex (a factor of 1.8) brighter
for a perfectly aligned scenario. Although our model does not include the
viewing angle as a parameter, better-aligned explosions can generate the
required shock flux enhancement.
The other notably discrepant parameter between the two fits is the parameter
involving mass and velocity. It is worth noting that the relevant velocity is
not exactly the ejecta velocity, rather it is the transition velocity between
different power laws in the density profile for the modeled ejecta. Assuming
$\textrm{M}_{\textrm{Ch}}$ of ejecta, the value of this parameter for SN
2017cbv corresponds to a velocity of about 12000 km s$^{-1}$. Using the same
assumption, the value for SN 2019yvq corresponds to a transition velocity of
about 7000 km s$^{-1}$.
The best-fit companion separation (52 R$_{\odot}$) implies a companion radius of
$\sim$20 R$_{\odot}$, assuming Roche lobe overflow (Eggleton, 1983). This stellar
radius does not exclude most main sequence stars, and the separation lies
towards the extreme of the expected distribution for main sequence donor
stars, based on binary population synthesis models (Liu et al., 2015).
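For reference, the Eggleton (1983) fitting formula behind this estimate is easy to evaluate directly. The sketch below assumes a donor-to-accretor mass ratio near unity, which our fit does not constrain; with $a=52$ R$_{\odot}$ it recovers the $\sim$20 R$_{\odot}$ companion radius quoted above.

```python
import numpy as np

def roche_lobe_radius(a, q):
    """Eggleton (1983) effective Roche lobe radius for separation a and
    mass ratio q = M_donor / M_accretor (accurate to ~1% for all q)."""
    q23 = q ** (2.0 / 3.0)
    return a * 0.49 * q23 / (0.6 * q23 + np.log(1.0 + q ** (1.0 / 3.0)))

a = 52.0  # best-fit separation in R_sun (Table 4)
for q in (0.5, 1.0, 2.0):  # illustrative mass ratios
    print(f"q = {q}: R_L = {roche_lobe_radius(a, q):.1f} R_sun")
# q ~ 1 gives R_L ~ 20 R_sun for a Roche-lobe-filling donor.
```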
Miller et al. (2020b) also use the Kasen (2010) models to fit their data,
although with a different methodology. They fit only shock-dominated data
(within $\sim$3.5 days of explosion) and use a slightly different analytical
form for the shock flux. They find a best-fit companion separation of $13\pm
1$ R$_{\odot}$ and an explosion date of $58845.82\pm 0.04$ (MJD). This companion
separation is several times smaller than our best-fit value (Table 4), and the
explosion date is more than 1.5 days after ours. Since their explosion date is
in fact almost two hours after the initial detection from Itagaki, we are
unsurprised by the disagreement in companion separations.
As a final remark on the best-fit parameters in Table 4, we note that SN
2019yvq and SN 2017cbv have similar rise times (18.7 days and 18.2 days,
respectively). These values are quite typical for SNe Ia – Firth et al.
(2015a) find an average rise time of $18.98\pm 0.54$ days in a sample of 18
well-sampled objects.
Although lightcurve_fitting generates model lightcurves and not spectra, we
reproduce the spectral effects of this model by taking a spectrum of SN 2011fe
at a similar epoch to our earliest spectrum and diluting it with a blackbody
of the predicted size and temperature. The effects of this blackbody dilution
are shown in Figure 9: the diluted spectrum qualitatively reproduces the early
spectrum of SN 2019yvq (in black), with its blue continuum and weak features.
Further, quantitatively fitting for the temperature needed to reproduce the
strength of the spectral features (keeping the radius fixed at the value
predicted by the fits) yields a temperature only about 350 K higher than the
models predict. The consistency of these two temperatures provides independent
support for the companion shocking models.
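The dilution itself is a simple operation: the template flux is added to the blackbody flux $\pi B_{\lambda}(T)\,(R/d)^{2}$ implied by the model's photospheric radius and temperature. A minimal sketch follows, with a flat placeholder standing in for the SN 2011fe spectrum and a hypothetical photospheric radius; the temperature and distance are values from this work.

```python
import numpy as np

h, c, kB = 6.626e-27, 2.998e10, 1.381e-16  # cgs constants

def planck_flam(wave_A, T):
    """Planck function B_lambda(T) in erg s^-1 cm^-2 A^-1 sr^-1."""
    lam = wave_A * 1e-8  # Angstrom -> cm
    B = (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * kB * T))
    return B * 1e-8      # per cm -> per Angstrom

def dilute(wave_A, flux_sn, T, R_cm, d_cm):
    """Add the flux of a blackbody photosphere of radius R at distance d
    to a template spectrum (both in erg s^-1 cm^-2 A^-1)."""
    return flux_sn + np.pi * planck_flam(wave_A, T) * (R_cm / d_cm) ** 2

wave = np.linspace(3500.0, 9000.0, 500)
flux_11fe = np.full_like(wave, 1e-15)        # placeholder for the real spectrum
diluted = dilute(wave, flux_11fe, T=8794.0,  # model temperature at this epoch
                 R_cm=5e14,                  # hypothetical photospheric radius
                 d_cm=42.5 * 3.086e24)       # 42.5 Mpc, as adopted in Section 6
```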
Companion shocking models can produce a wide range of early blue bumps
depending on the companion separation, size, and viewing angle (see Figures 2
and 3 of Kasen, 2010). While the fits for SN 2019yvq are not perfect, notably
underpredicting the strength of the decline to the second epoch of Swift data,
they closely reproduce both the wavelength-dependent behavior of the early
excess and the temperature expected from diluting an early spectrum with
blackbody flux.
Figure 9: Our earliest spectrum of SN 2019yvq (black line) compared to a
spectrum of SN 2011fe at a comparable epoch. Epochs listed with respect to
days from $B$-band maximum. The magenta line represents the SN 2011fe spectrum
diluted by an 8794 K blackbody, the temperature predicted at that epoch by our
best-fit companion shocking models. Allowing the temperature of the blackbody
to vary and comparing to the SN 2019yvq spectrum with a $\chi^{2}_{\nu}$ test,
we obtain a best-fit temperature about 350 K higher (yellow line). The green
line represents the spectrum at the same epoch (measured from explosion) from
the best-fit double detonation model.
### 5.2 Double Detonation
As described in detail in Polin et al. (2019a), the explosion mechanism of
these models consists of the ignition of a surface layer of He which then
detonates the underlying C/O WD. We compared observations of SN 2019yvq with
double-detonation models which had WD masses between 0.6 and 1.3 M$_{\odot}$ and
He shell masses between 0.01 and 0.1 M$_{\odot}$.
We measure the overall best-fit model in our grid by doing a simple reduced
$\chi^{2}$ comparison between each model and the $UBVgri$ photometry. We fix
the explosion epoch to be the same used in the best-fit companion shocking
model, as described in Section 5.1. Normally one would infer an explosion
epoch from a power-law fit to the rising data (e.g. Ganeshalingam et al.,
2011; Firth et al., 2015b); however, in this case such fits were very poorly
constrained, primarily because of the limited number of epochs available for
fitting: only four remained after ignoring the obviously non-power-law first
epoch.
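Schematically, the grid selection reduces to the following sketch; the model grid, photometry, and uncertainties here are synthetic placeholders, with models assumed to be pre-interpolated onto the observed epochs.

```python
import numpy as np

rng = np.random.default_rng(1)
obs_mag = rng.normal(18.0, 0.5, 40)        # placeholder UBVgri magnitudes
obs_err = np.full_like(obs_mag, 0.05)

# Hypothetical grid keyed by (M_WD, M_He) in solar masses.
grid = {(0.90, 0.050): obs_mag + rng.normal(0, 0.20, 40),
        (0.95, 0.055): obs_mag + rng.normal(0, 0.05, 40),
        (1.00, 0.060): obs_mag + rng.normal(0, 0.30, 40)}

def reduced_chi2(model_mag):
    """Reduced chi^2 of one model against the photometry."""
    return np.sum(((obs_mag - model_mag) / obs_err) ** 2) / obs_mag.size

best = min(grid, key=lambda k: reduced_chi2(grid[k]))
print("best-fit (M_WD, M_He):", best)
```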
The best-fit model in our grid has a 0.95 M$_{\odot}$ WD with a 0.055 M$_{\odot}$
layer of He. This model is shown as the dotted line in the photometry of
Figure 8 and the color evolution of Figure 4, and the spectrum from this model
matching the epoch of our earliest SN 2019yvq spectrum is shown in Figure 9.
Although most of this spectrum is a blue continuum with weak features, in
general agreement with the observations, we find that it predicts much
stronger features in the $\sim$4000–5000 Å range and a stronger downturn
blueward of $\sim$4000 Å than are observed.
This model does have a strong excess at the correct epochs (i.e. up to $\sim$4
days after the explosion); however, it dramatically underpredicts most of the
$U$ data. The drop after the early excess is also stronger in all bands than is
seen in the data, and the models predict a “red bump” which is not seen in the
data (see Figure 4). Additionally, all reasonably well-fitting models in the
grid predict a $U$ decline that is steeper than observed; in the case of the
best-fit model, it is steeper than the observed decline rate by more than a
factor of two (in magnitudes per day).
Double detonation models do, however, match the observed data in several
respects: a lack of C in the spectra, a weak secondary IR maximum, and a
blue/UV excess at roughly the right epochs are all points of agreement.
Both Miller et al. (2020b) and Siebert et al. (2020) use the models from Polin
et al. (2019a) to fit different aspects of SN 2019yvq’s dataset. Fitting to
the $gri$ ZTF photometry in addition to some Swift data over approximately the
same epochs shown here, Miller et al. (2020b) find a best-fit model consisting
of a 0.92 M$_{\odot}$ WD with a 0.04 M$_{\odot}$ He shell. Their results are
similar to what is presented here: general agreement on some counts (early blue
excess), and disagreement on others (difficulty fitting bluer filters).
Siebert et al. (2020) extend the best-fit model of Miller et al. (2020b) into
the nebular phase, and show that the best-fit model based on photospheric
photometry is a poor match for nebular spectroscopy, overpredicting the
strength of the [Ca II] and [Fe II] feature by a factor of several. Instead,
to match the nebular spectra they find a best-fit model consisting of a 1.1
M$_{\odot}$ WD with a 0.05 M$_{\odot}$ He shell. This nebular model is in turn a
poor match to the photospheric photometry, overpredicting the bluer bands by
more than a magnitude and greatly underpredicting the strength of the early
excess in optical bands.
We find it difficult to reconcile this discrepancy, and cannot definitively
claim that SN 2019yvq is the result of a double-detonation, despite the
several points in favor of these models as listed above.
### 5.3 Nickel Distributions
#### 5.3.1 Photometry
Variations in Ni distributions in the WD progenitor are also known to produce
a range of SN Ia behavior (e.g. Piro & Morozova, 2016; Magee et al., 2020).
Using the same methodology described in Section 5.2, we look for best-fit
models from the grid of 255 models provided by Magee et al. (2020). These
models make use of the radiative transfer code TURTLS (Magee et al., 2018) and
vary the density profiles, Ni masses, kinetic energy, and degree of Ni mixing
to produce a range of lightcurves up to $+25$ days from the explosion.
Fitting the $UBVgri$ Las Cumbres lightcurve, we find the best-fit model is
EXP_Ni0.8_KE0.50_P4.4. This has an exponential density profile, 0.8 M$_{\odot}$
of Ni, and a kinetic energy of 0.50 foe ($10^{51}$ erg). The last element of the
model name (P4.4) describes the scaling parameter which determines the Ni
distribution, and indicates the Ni is comparatively well mixed through the
ejecta.
However, while this model does as well as the other two classes of models we
have discussed at fitting the rise time and peak absolute magnitude, it
contains no early excess. The authors note in Magee et al. (2020) that
although they can fit a majority of SNe in their sample, the remaining objects
have an early excess which the models cannot replicate. Since we consider the
early UV excess to be the most unique feature of this SN, the most difficult
and interesting aspect to model, and potentially the biggest clue to what the
progenitor system is, we do not include this best-fit model in Figure 8.
The same authors also released a set of models using a similar methodology
capable of reproducing early excesses due to clumps of $^{56}$Ni in the outer
ejecta (Magee & Maguire, 2020). However, since these models were based on SN
2017cbv and SN 2018oh data and both these SNe had typical peak luminosities
unlike the underluminous SN 2019yvq, we do not include them as comparisons.
Additionally, these models display early red bumps similar to those seen in
the double detonation models, which are not seen in our data (see Figure 4).
#### 5.3.2 Spectroscopy
In addition to the above photometric modeling, we also utilize Tardis
(Kerzendorf & Sim, 2014) to examine the spectroscopic effects of varying Ni
distributions and photospheric velocities. A full exploration of these effects
is outside the scope of this paper, but we report initial observations here.
We start with a base model, which consists of an early SN 2011fe spectrum
identical to the one used in Heringer et al. (2017) at an epoch of $+5.9$ days
from the explosion, similar to the epoch of our earliest spectrum. The
v_inner_boundary (photospheric velocity) of this model is $12,400$ km s$^{-1}$.
We then alter the Ni distribution and photospheric velocity of this model in an
attempt to replicate the SN 2019yvq spectrum.
Our perturbations were unsuccessful at reproducing the earliest spectrum, but
we note observable effects of altering the Ni distribution. Adopting a uniform
Ni distribution for the outer ejecta with a mass fraction of 0.19 (replicating
the most mixed model of Piro & Morozova, 2016), we note that the red wings of
the Si II 6355 and O I 7774 lines become asymmetrically broader, and that the
Ca NIR triplet drastically reduces in strength. Artificially introducing a
mass of Ni in the outermost portions of the ejecta ($>20,000$ km s$^{-1}$)
weakens the Mg II complex and other features blueward of $\sim$4500 Å. As the
density of this outer Ni mass is increased, other dramatic effects are
introduced, such as extreme broadening of the O I 7774 feature, which are not
seen in the early spectra of SN 2019yvq.
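The abundance perturbations described above amount to simple profiles over the ejecta velocity grid. The sketch below is only illustrative of the inputs (it does not use Tardis's actual configuration schema), and the outer-shell mass fraction is a hypothetical choice.

```python
import numpy as np

# Illustrative ejecta velocity grid (km/s), spanning the model's inner
# boundary to its maximum velocity; the shell spacing is arbitrary here.
v = np.linspace(12400.0, 24000.0, 30)

# Perturbation 1: uniform 56Ni mass fraction of 0.19 in the outer ejecta,
# replicating the most mixed model of Piro & Morozova (2016).
X_Ni_uniform = np.full_like(v, 0.19)

# Perturbation 2: extra Ni confined to the outermost shells (>20,000 km/s);
# a mass fraction of 0.5 is a hypothetical value for illustration.
X_Ni_outer = np.where(v > 20000.0, 0.5, 0.0)

# In a full calculation these profiles would be passed to the radiative
# transfer code, with the other species rescaled so that each shell's
# mass fractions still sum to one.
```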
We also experiment with varying the photospheric velocity of the models, as
our earliest spectrum has a Si II 6355 velocity of approximately $21,000$ km
s$^{-1}$, which is significantly higher than the default value of $12,400$ km
s$^{-1}$. Miller et al. (2020b) find that velocities as high as $25,000$ km
s$^{-1}$ are necessary to fit their earliest spectrum, but since the maximum
velocity in the Tardis model is $24,000$ km s$^{-1}$ this is unreachable for us.
We do note that at high photospheric velocities, such as $18,000$ to $20,000$
km s$^{-1}$, the strengths of most spectroscopic features begin to match the
weak values of our earliest spectrum and the spectrum begins to be dominated by
a blue continuum.
However, as also pointed out by Miller et al. (2020b), Tardis has a
wavelength-independent photospheric boundary, inside of which the emission is a
quasi-blackbody. Because our Tardis models have a limited velocity range,
increasing the model’s photospheric velocity thus increases the fraction of
the model’s mass which acts as a blackbody, and effectively dilutes the
spectral features from the tenuous outer layers with a strong blackbody
component. Blackbody dilution is also a signature of the companion shocking
models, and is shown in Figure 9. The blackbody temperature predicted by the
companion shocking models is also thousands of Kelvin hotter than the
photospheric temperatures Tardis calculates for this velocity range (between
6,000 and 7,000 K).
Miller et al. (2020b) use additional Ni distribution models based on Magee &
Maguire (2020) and find that the predicted spectra have strong line blanketing
blueward of $\sim$4400 Å, in addition to overpredicting the $i$-band flux.
Since unusual Ni distributions result in spectral features absent in the
observed spectra, and since high photospheric velocities replicate the effects
of the companion interaction scenario, we do not include these spectra in our
comparisons.
## 6 Progenitor Constraints from Radio Observations
Radio emission is a sensitive probe of circumstellar medium (CSM) of the
progenitor. The CSM is polluted by mass-loss from the progenitor in the pre-SN
stage, and interaction of the SN ejecta with this CSM accelerates electrons to
relativistic energies and amplifies the ambient magnetic field, producing
synchrotron radio emission (Chevalier, 1982, 1984, 1998). Simple models of
radio emission have provided constraints on the CSM environment and progenitor
properties for both core-collapse (e.g. Ryder et al., 2004; Soderberg et al.,
2006; Chevalier & Fransson, 2006; Weiler et al., 2007; Salas et al., 2013) and
SNe Ia (Panagia et al., 2006; Chomiuk et al., 2016). Radio emission is yet to
be detected from a SN Ia , but non-detections have provided stringent
constraints on progenitor scenarios (Chomiuk et al., 2016), particularly for
nearby events like SN 2011fe (Horesh et al., 2012; Chomiuk et al., 2012) and
SN 2014J (Pérez-Torres et al., 2014).
A radio observation of SN 2019yvq was obtained with the Karl G. Jansky Very
Large Array (VLA) on 2020 Jan 26 at 11:39:53, 29.77 days after $t_{0}$
(derived in Section 2.2). The observation block was 1 hr long, with 38.23 min
of time on source for SN 2019yvq. Observations were taken in X-band
(8–12 GHz) in the D-configuration of the VLA (DDT: 19B-346, PI: S.
Sarbadhicary). The observations were obtained in wide-band continuum mode,
yielding 4 GHz of bandwidth sampled by 32 spectral windows, each 128 MHz wide
and sampled by 1 MHz-wide channels, with two polarizations. We used 3C286 as
our flux and bandpass calibrator, and J1313+6735 as our phase calibrator. Data
were calibrated with the VLA CASA calibration pipeline (version 5.6.2-2;
https://science.nrao.edu/facilities/vla/data-processing/pipeline). The
pipeline consists of a collection of algorithms that automatically loads the
raw data into a CASA measurement set (MS) format, flags corrupted data (e.g.
due to antenna shadowing, channel edges, radio frequency interference or RFI),
applies various corrections (e.g. antenna position, atmospheric opacity) and
derives delay, flux-scale, bandpass and phase calibrations which are applied
to the data.
We imaged the calibrated visibility dataset with tclean in CASA. We used
multi-term, multi-frequency synthesis as our deconvolution algorithm (set with
deconvolver=‘mtmfs’ in tclean), which performs deconvolution on a Taylor-
series expansion of the wide-band spectral data in order to minimize
frequency-dependent artifacts (Rau & Cornwell, 2011). We set nterms=2, which
uses the first two Taylor terms to create images of intensity (Stokes-I) and
spectral index. The SN is offset $\sim 13^{\prime\prime}$ from the bright
central radio nucleus of the galaxy, and as a result the emission at the SN
site is dominated by sidelobes from the nucleus for the typical resolution
$\sim 7.2^{\prime\prime}$ expected in X-band images in D-configuration. For
this reason, we only imaged the 10-12 GHz bandwidth with tclean, excluded
visibility data from baselines shorter than 6 k$\lambda$, and applied Briggs-
weighting on the remaining visibility data with the parameter robust=0. This
provided just enough angular resolution and source sensitivity at the SN site
to determine if any radio emission separate from the nucleus is associated
with the SN site.
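For concreteness, the imaging step described above can be sketched with the CASA 6 casatasks interface. The deconvolver, nterms, Briggs robust value, baseline cut, and frequency selection follow the text; the measurement set name and image geometry are placeholders.

```python
from casatasks import tclean

tclean(vis='SN2019yvq_Xband.ms',       # hypothetical measurement set name
       imagename='SN2019yvq_11GHz',
       spw='*:10~12GHz',               # image only the 10-12 GHz bandwidth
       uvrange='>6klambda',            # exclude baselines shorter than 6 klambda
       deconvolver='mtmfs',            # multi-term, multi-frequency synthesis
       nterms=2,                       # Stokes-I intensity + spectral index
       weighting='briggs', robust=0.0,
       imsize=1024, cell='1.0arcsec',  # placeholder image geometry
       niter=1000)
```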
No radio source was detected at the site of SN 2019yvq in the cleaned,
deconvolved 11-GHz image with a synthesized beam of $5.5^{\prime\prime}\times
4.2^{\prime\prime}$. The flux at the exact location of the SN is $-25~\mu$Jy.
Using the AIPS task IMEAN, we obtain an RMS of $11.7~\mu$Jy per beam, which
translates to a 3$\sigma$ 11-GHz luminosity limit of $7.6\times 10^{25}$
erg s$^{-1}$ Hz$^{-1}$, assuming a distance of 42.5 Mpc.
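This limit follows directly from the image RMS and the adopted distance; a quick check with the numbers quoted above:

```python
import numpy as np

rms_uJy = 11.7                       # image RMS per beam
d_cm = 42.5 * 3.086e24               # 42.5 Mpc in cm
f_lim = 3 * rms_uJy * 1e-6 * 1e-23   # 3-sigma limit in erg s^-1 cm^-2 Hz^-1
L_lim = 4 * np.pi * d_cm**2 * f_lim
print(f"L_11GHz < {L_lim:.1e} erg/s/Hz")  # ~7.6e25, as quoted in the text
```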
The 3$\sigma$ upper limit can shed some light on the CSM around SN 2019yvq,
following the methodology of Chomiuk et al. (2012) and Chomiuk et al. (2016).
Using the Chevalier (1982) model of a CSM characterized by $\rho=\dot{M}/(4\pi
r^{2}v_{w})$ (where $\rho$ is the density in g cm$^{-3}$, $\dot{M}$ is the
mass-loss rate from the progenitor, $r$ is the distance from the progenitor and
$v_{w}$ is the wind velocity), we obtain an upper limit of $(4.5$–$20)\times
10^{-8}$ M$_{\odot}$ yr$^{-1}$ on the mass-loss rate from a symbiotic
progenitor (involving a red-giant companion, assuming $v_{w}=10$ km s$^{-1}$).
The range of mass-loss rates reflects the uncertainty in the parameter
$\epsilon_{b}$, the fraction of shock energy shared by the amplified magnetic
field, with typical values in the range 0.01–0.1 for SNe (Chomiuk et al.,
2012). These limits are shown in Figure 10.
Chomiuk et al. (2016) measured the mean mass-loss rate of symbiotic
progenitors in the Milky Way to be $\mathrm{log}_{10}(\dot{M})=-6.41\pm 1.03$
M$_{\odot}$ yr$^{-1}$ (assuming $v_{w}=100$ km s$^{-1}$), so our measurement
does not exclude the possibility of a red-giant companion. Scenarios involving
accretion from a main-sequence companion accompanied by steady nuclear burning
are also not excluded by our limit (Chomiuk et al., 2012).
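For orientation, the wind density profile assumed in this model is straightforward to evaluate; the sketch below uses the upper end of our mass-loss limit and an illustrative shock radius.

```python
import numpy as np

M_SUN, YR = 1.989e33, 3.156e7  # cgs

def csm_density(mdot_msun_yr, v_w_kms, r_cm):
    """Wind density rho = Mdot / (4 pi r^2 v_w) from Chevalier (1982)."""
    mdot = mdot_msun_yr * M_SUN / YR
    return mdot / (4.0 * np.pi * r_cm**2 * v_w_kms * 1e5)

# Upper limit from the text (2e-7 M_sun/yr), red-giant wind (v_w = 10 km/s),
# evaluated at a hypothetical shock radius of 1e15 cm.
rho = csm_density(2e-7, 10.0, 1e15)
print(f"rho ~ {rho:.1e} g/cm^3 (~{rho / 1.67e-24:.0e} H atoms/cm^3 if pure H)")
```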
Figure 10: Limits (in gray) for the mass loss rate of the progenitor of SN
2019yvq from its VLA observations, following the model of Chevalier (1982),
shown for the typical range of values of $\epsilon_{b}$, which parameterizes
the fraction of shock energy in the amplified post-shock magnetic field in radio
light curve models. These observations can rule out some symbiotic progenitor
systems, but they do not exclude red giant companions or other methods of mass
loss.
## 7 Discussion
SN 2019yvq is an unusual event in many respects. It has: a strong early UV
flash; red colors aside from the early flash; a relatively faint peak
luminosity, a moderately high decline rate, and a weak secondary IR maximum;
broad, high-velocity Si II 6355 paired with both weak Si II 5972 and Ti II at
peak; and nebular [Ca II] and [Fe II]. These paint a conflicting picture, with some
aspects pointing to a low-energy explosion (low luminosity, weak secondary IR
maximum, nebular [Ca II], peak Ti II) and others pointing to a high-energy
event (Si II velocity and line ratio). Due to several characteristics it
shares, or almost shares, with low-luminosity 02es-like SNe Ia, we classify it
as a transitional member of that subclass (see Table 2 and the rest of Section
4).
This object being a transitional 02es-like has two major implications.
The first is the confirmation that transitional 02es-like SNe Ia can exist.
This has precedent in the object SN 2006bt (Foley et al., 2010; Ganeshalingam
et al., 2010), which can be considered a transitional member of this class
(Taubenberger, 2017) despite its high velocities (12,500 km s$^{-1}$ at 3 days
before maximum) and relatively bright luminosity
($M_{B,\textrm{peak}}\sim-19$, with uncertain reddening correction). This
object is included in both Figure 5 (orange star) and Figure 7 for comparison.
However, SN 2019yvq is by no means a clone of SN 2006bt as it lies in
extremely sparsely populated regions of parameter space in several respects
(see Figure 5, also Figure 2 of Tucker et al., 2020). On the Phillips relation
SN 2019yvq has similar parameters to SN 2012Z, but on the Branch diagram SN
2019yvq is most similar to SN 2002bo. SNe 2002bo and 2012Z are substantially
different SNe. A transitional 02es-like SN that not only shares
characteristics with both these SNe but is also distinct from another
transitional member of its subclass provides evidence that there is a
continuum of events between normal SNe Ia and 02es-likes. If there is a
continuum of events rather than discrete subclasses, this also suggests that
02es-like SNe do not arise from progenitor systems which are distinct from
those of normal SNe Ia.
The second major implication comes from the fact that the three 02es-like SNe
Ia with very early data (SN 2019yvq, iPTF14atg, and iPTF14dpk) all display
unusual early-time lightcurves (see Section 4 and Table 3). Of these, the two
with Swift data at these early epochs display the two strongest early UV
flashes in SNe Ia. iPTF14dpk unfortunately only has $R$-band photometry, and
while at first glance its first data point appears indicative of an early
excess, Cao et al. (2016) argue that this would require an extreme explosion
energy and would lead to higher velocities than are observed. The lack of
multi-band photometry makes us hesitant to accept that conclusion as
definitive. According to Kasen (2010), if such early excesses are due to
companion–ejecta shock interaction, they should only be seen in $\sim$10% of
events with such early data. Instead, for 02es-like SNe Ia, they are seen in
two (or three) of the three early events. This is unlikely – even with the
current small sample size, the odds of so many early excesses are somewhere
between 1 in 100 and 1 in 1000. And as discussed in Section 5.2, the
discrepancies between photospheric and nebular best-fit models make us
hesitant to claim that SN 2019yvq is a double detonation event either, even
though those models can produce early UV excesses. We are left considering
progenitor scenarios which could produce an early excess which is both fit
relatively successfully by shock interaction models but is not viewing angle-
dependent.
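The quoted odds follow from treating each event as an independent $\sim$10% chance of a favorable viewing angle: two of three events showing an excess has probability of order $p^{2}$, and three of three of order $p^{3}$.

```python
p = 0.10  # fraction of events expected to show a shock excess (Kasen 2010)

print(f"two of three events:   ~1 in {1 / p**2:.0f}")   # 1 in 100
print(f"three of three events: ~1 in {1 / p**3:.0f}")   # 1 in 1000
```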
In addition to models which have already been discussed (double detonations
and varied Ni distributions, see Sections 5.2 and 5.3.1), there are a few
possibilities for progenitor systems configured in such a way as to produce
more isotropic shocks. One option lies in the accretion disks which form as the
(primary) WD accretes matter. Levanon & Soker (2019) model the exquisitely
sampled early bump seen in the K2 data of SN 2018oh as the interaction of the
SN ejecta with what they refer to as “Disk-Originated Matter”; such accretion
disks could also give rise to bipolar jets. The addition of an accretion disk
and jets would more easily account for the ubiquity of early excesses since
these components can be seen more isotropically. Piro & Morozova (2016), in
addition to modeling the degree of Ni mixing in WD progenitors, also
investigate the effects of a more general distribution of CSM. These models
can produce early excesses which occur on a range of timescales and
intensities, depending on the total amount of external matter in the CSM and
its density scaling. In particular, they can produce early bumps which only
last $\sim$2 days, which could explain the (potential) extremely brief excess
seen in iPTF14dpk. These CSM models also get redder immediately after the
explosion instead of bluer like the Ni mixing models. This early reddening
more accurately reflects the color evolution of SN 2019yvq.
Cao et al. (2016) model the 02es-like SNe Ia iPTF14atg and iPTF14dpk as
interacting with non-degenerate companions, but seen from different viewing
angles. The addition of SN 2019yvq as another member of the rare 02es-like
subclass, with a commensurate early UV excess, leads us to doubt that all
three of these excesses arise from ejecta–companion shock interaction.
Something about their progenitor systems must be more isotropic than is
assumed in Kasen (2010) to explain the ubiquity of these early excesses in
02es-like SNe Ia.
## 8 Conclusions & Summary
We have discussed the discovery and follow-up observations of SN 2019yvq, a
nearby SN Ia with a rare and unusually strong excess in its early lightcurve,
in addition to several other uncommon features. This early excess is most
pronounced in the UV, where the object is brighter during the excess than
during the epochs of its optical peak.
This object is one of a very limited number of SNe Ia with an early UV/blue
excess, and it demonstrates a stronger excess than the other objects in this
sample. SN 2019yvq deviates significantly from SNe Ia that are blue at early
times but otherwise normal. Instead it shares some, but not all, features of
the 02es-like SN Ia subclass, including a low peak luminosity, red color,
moderately high decline rate, Ti II at peak, and nebular [Ca II] and [Fe II].
We classify SN 2019yvq as a transitional member of the 02es-like subclass.
Although models which simulate WD double detonation and ejecta–companion shock
interaction can create lightcurves with excess flux at early times, we find
that no one model can accurately reproduce all unusual aspects of this
object’s dataset. This is in broad agreement with the conclusions drawn in
Miller et al. (2020b) and Tucker et al. (2020), which include several pieces
of data not present here (including i-band ZTF data, post-maximum TESS data,
and a Keck NIRES spectrum) and, like us, are unable to satisfactorily explain
every aspect of the SN 2019yvq dataset. As in Siebert et al. (2020) we also
find strong [Ca II] and [Fe II] emission in the nebular spectra of SN 2019yvq
in addition to strong limits on the amount of swept-up H and He, but we do not
take this as exclusive evidence of a double detonation explosion.
Two other 02es-like SNe Ia also display unusual early lightcurves (iPTF14atg
and iPTF14dpk). The deviations from a power-law rise in all 02es-like SNe Ia
with sufficiently early data make us further doubt that the early UV excess
seen in SN 2019yvq arises from ejecta–companion shock interaction, as viewing
angle effects dictate that such excesses should only be seen in $\sim$10% of
events with early data, not $\sim$100%. 02es-like SNe Ia must originate in
progenitor systems capable of displaying early excesses nearly isotropically.
The addition of CSM or accretion disks and jets could account for this needed
isotropy.
This SN demonstrates the importance of prompt discovery, reporting, and
follow-up of young SNe. In this case, the non-detection one day before
discovery enabled rapid follow-up with multiple facilities around the world
and in space. The
synthesis of such high-cadence multiwavelength datasets is a powerful tool for
understanding the origins of SNe Ia, or for providing even more observational
peculiarities which accurate models must account for.
We are grateful to A. Polin for providing the lightcurve and spectra models in
Polin et al. (2019a), and to G. Hosseinzadeh for assistance in our use of
lightcurve_fitting. We also thank E. Heringer for providing the Tardis models
from Heringer et al. (2017), and R. Cartier for providing the syn++ models
from Cartier et al. (2017). J.B., D.A.H., D.H., C.M., and C.P. are supported
by NSF grants AST-1313484 and AST-1911225, as well as by NASA grant
80NSSC19kf1639. S.K.S. and L.C. are supported by NSF grant AST-1907790. Time
domain research by D.J.S. is supported by NSF grants AST-1821987, 1813466, &
1908972, and by the Heising-Simons Foundation under grant #2020-1864. P.J.B.
is supported by NASA grants 80NSSC20K0456 and 80NSSC19K0316. This research
makes use of observations from the Las Cumbres Observatory network, in
addition to the MARS ZTF alert broker developed by Las Cumbres Observatory
software engineers. This research has made use of the NASA/IPAC Extragalactic
Database (NED) which is operated by the Jet Propulsion Laboratory, California
Institute of Technology, under contract with NASA. This research made use of
Tardis, a community-developed software package for spectral synthesis in
supernovae (Kerzendorf & Sim, 2014; Kerzendorf et al., 2019). The development
of Tardis received support from the Google Summer of Code initiative and from
ESA’s Summer of Code in Space program. Tardis makes extensive use of Astropy
and PyNE.
## References
* Astropy Collaboration et al. (2013) Astropy Collaboration, Robitaille, T. P., Tollerud, E. J., et al. 2013, A&A, 558, A33
* Barbary et al. (2016) Barbary, K., rbiswas4, Goldstein, D., et al. 2016, doi:10.5281/zenodo.168220
* Bellm et al. (2019) Bellm, E. C., Kulkarni, S. R., Graham, M. J., et al. 2019, PASP, 131, 018002
* Benetti et al. (2004) Benetti, S., Meikle, P., Stehle, M., et al. 2004, MNRAS, 348, 261
* Bianco et al. (2011) Bianco, F. B., Howell, D. A., Sullivan, M., et al. 2011, ApJ, 741, 20
* Blondin & Tonry (2007) Blondin, S., & Tonry, J. L. 2007, ApJ, 666, 1024
* Blondin et al. (2012) Blondin, S., Matheson, T., Kirshner, R. P., et al. 2012, AJ, 143, 126
* Botyánszki et al. (2018) Botyánszki, J., Kasen, D., & Plewa, T. 2018, ApJ, 852, L6
* Branch et al. (2006) Branch, D., Dang, L. C., Hall, N., et al. 2006, PASP, 118, 560
* Breeveld et al. (2010) Breeveld, A. A., Curran, P. A., Hoversten, E. A., et al. 2010, MNRAS, 406, 1687
* Brown et al. (2014) Brown, P. J., Breeveld, A. A., Holland, S., Kuin, P., & Pritchard, T. 2014, Ap&SS, 354, 89
* Brown et al. (2012a) Brown, P. J., Dawson, K. S., Harris, D. W., et al. 2012a, ApJ, 749, 18
* Brown et al. (2017) Brown, P. J., Landez, N. J., Milne, P. A., & Stritzinger, M. D. 2017, ApJ, 836, 232
* Brown et al. (2018) Brown, P. J., Perry, J. M., Beeny, B. A., Milne, P. A., & Wang, X. 2018, ApJ, 867, 56
* Brown et al. (2010) Brown, P. J., Roming, P. W. A., Milne, P., et al. 2010, ApJ, 721, 1608
* Brown et al. (2012b) Brown, P. J., Dawson, K. S., de Pasquale, M., et al. 2012b, ApJ, 753, 22
* Brown et al. (2013) Brown, T. M., Baliber, N., Bianco, F. B., et al. 2013, PASP, 125, 1031
* Bulla et al. (2020) Bulla, M., Miller, A. A., Yao, Y., et al. 2020, arXiv e-prints, arXiv:2001.00587
* Burns et al. (2011) Burns, C. R., Stritzinger, M., Phillips, M. M., et al. 2011, AJ, 141, 19
* Cao et al. (2016) Cao, Y., Kulkarni, S. R., Gal-Yam, A., et al. 2016, ApJ, 832, 86
* Cao et al. (2015) Cao, Y., Kulkarni, S. R., Howell, D. A., et al. 2015, Natur, 521, 328
* Cartier et al. (2017) Cartier, R., Sullivan, M., Firth, R. E., et al. 2017, MNRAS, 464, 4476
* Chevalier (1982) Chevalier, R. A. 1982, ApJ, 259, 302
* Chevalier (1984) —. 1984, ApJ, 285, L63
* Chevalier (1998) —. 1998, ApJ, 499, 810
* Chevalier & Fransson (2006) Chevalier, R. A., & Fransson, C. 2006, ApJ, 651, 381
* Chomiuk et al. (2012) Chomiuk, L., Soderberg, A. M., Moe, M., et al. 2012, ApJ, 750, 164
* Chomiuk et al. (2016) Chomiuk, L., Soderberg, A. M., Chevalier, R. A., et al. 2016, ApJ, 821, 119
* Conley et al. (2008) Conley, A., Sullivan, M., Hsiao, E. Y., et al. 2008, ApJ, 681, 482
* Dessart et al. (2020) Dessart, L., Leonard, D. C., & Prieto, J. L. 2020, A&A, 638, A80
* Dey et al. (2019) Dey, A., Schlegel, D. J., Lang, D., et al. 2019, AJ, 157, 168
* Dimitriadis et al. (2019) Dimitriadis, G., Foley, R. J., Rest, A., et al. 2019, ApJ, 870, L1
* Eggleton (1983) Eggleton, P. P. 1983, ApJ, 268, 368
* Fink et al. (2010) Fink, M., Röpke, F. K., Hillebrandt, W., et al. 2010, A&A, 514, A53
* Firth et al. (2015a) Firth, R. E., Sullivan, M., Gal-Yam, A., et al. 2015a, MNRAS, 446, 3895
* Firth et al. (2015b) —. 2015b, MNRAS, 446, 3895
* Folatelli et al. (2012) Folatelli, G., Phillips, M. M., Morrell, N., et al. 2012, ApJ, 745, 74
* Foley et al. (2010) Foley, R. J., Narayan, G., Challis, P. J., et al. 2010, ApJ, 708, 1748
* Foreman-Mackey et al. (2013) Foreman-Mackey, D., Hogg, D. W., Lang, D., & Goodman, J. 2013, PASP, 125, 306
* Förster et al. (2013) Förster, F., González-Gaitán, S., Folatelli, G., & Morrell, N. 2013, ApJ, 772, 19
* Galbany et al. (2019) Galbany, L., Ashall, C., Höflich, P., et al. 2019, A&A, 630, A76
* Ganeshalingam et al. (2011) Ganeshalingam, M., Li, W., & Filippenko, A. V. 2011, MNRAS, 416, 2607
* Ganeshalingam et al. (2010) Ganeshalingam, M., Li, W., Filippenko, A. V., et al. 2010, ApJS, 190, 418
* Ganeshalingam et al. (2012) —. 2012, ApJ, 751, 142
* Gehrels et al. (2004) Gehrels, N., Chincarini, G., Giommi, P., et al. 2004, ApJ, 611, 1005
* Guy et al. (2007) Guy, J., Astier, P., Baumont, S., et al. 2007, A&A, 466, 11
* Hayden et al. (2010) Hayden, B. T., Garnavich, P. M., Kasen, D., et al. 2010, ApJ, 722, 1691
* Heringer et al. (2017) Heringer, E., van Kerkwijk, M. H., Sim, S. A., & Kerzendorf, W. E. 2017, ApJ, 846, 15
* Höflich et al. (2002) Höflich, P., Gerardy, C. L., Fesen, R. A., & Sakai, S. 2002, ApJ, 568, 791
* Horesh et al. (2012) Horesh, A., Kulkarni, S. R., Fox, D. B., et al. 2012, ApJ, 746, 21
* Hosseinzadeh (2019) Hosseinzadeh, G. 2019, Light Curve Fitting, v0.0.0, Zenodo, doi:10.5281/zenodo.2639464. https://doi.org/10.5281/zenodo.2639464
* Hosseinzadeh et al. (2017) Hosseinzadeh, G., Sand, D. J., Valenti, S., et al. 2017, ApJ, 845, L11
* Howell (2011) Howell, D. A. 2011, NatCo, 2, 350
* Hoyle & Fowler (1960) Hoyle, F., & Fowler, W. A. 1960, ApJ, 132, 565
* Hsiao et al. (2013) Hsiao, E. Y., Marion, G. H., Phillips, M. M., et al. 2013, ApJ, 766, 72
* Hsiao et al. (2019) Hsiao, E. Y., Phillips, M. M., Marion, G. H., et al. 2019, PASP, 131, 014002
* Iben & Tutukov (1984) Iben, I., J., & Tutukov, A. V. 1984, ApJS, 54, 335
* Itagaki (2019) Itagaki, K. 2019, Transient Name Server Discovery Report, 2019-2720, 1
* Jha et al. (2007) Jha, S., Riess, A. G., & Kirshner, R. P. 2007, ApJ, 659, 122
* Kasen (2010) Kasen, D. 2010, ApJ, 708, 1025
* Kasen et al. (2009) Kasen, D., Röpke, F. K., & Woosley, S. E. 2009, Nature, 460, 869
* Kawabata (2020) Kawabata, M. 2020, Transient Name Server Classification Report, 2020-24, 1
* Kerzendorf et al. (2019) Kerzendorf, W., Nöbauer, U., Sim, S., et al. 2019, tardis-sn/tardis: TARDIS v3.0 alpha2, Zenodo, doi:10.5281/zenodo.2590539. https://doi.org/10.5281/zenodo.2590539
* Kerzendorf & Sim (2014) Kerzendorf, W. E., & Sim, S. A. 2014, MNRAS, 440, 387
* Kollmeier et al. (2019) Kollmeier, J. A., Chen, P., Dong, S., et al. 2019, MNRAS, 486, 3041
* Landolt (1992) Landolt, A. U. 1992, AJ, 104, 340
* Levanon & Soker (2019) Levanon, N., & Soker, N. 2019, ApJ, 872, L7
* Li et al. (2019) Li, W., Wang, X., Vinkó, J., et al. 2019, ApJ, 870, 12
* Liu et al. (2015) Liu, Z.-W., Moriya, T. J., & Stancliffe, R. J. 2015, MNRAS, 454, 1192
* Magee & Maguire (2020) Magee, M. R., & Maguire, K. 2020, arXiv e-prints, arXiv:2007.02101
* Magee et al. (2020) Magee, M. R., Maguire, K., Kotak, R., et al. 2020, A&A, 634, A37
* Magee et al. (2018) Magee, M. R., Sim, S. A., Kotak, R., & Kerzendorf, W. E. 2018, A&A, 614, A115
* Maguire et al. (2011) Maguire, K., Sullivan, M., Thomas, R. C., et al. 2011, MNRAS, 418, 747
* Maoz et al. (2014) Maoz, D., Mannucci, F., & Nelemans, G. 2014, ARA&A, 52, 107
* Marion et al. (2016) Marion, G. H., Brown, P. J., Vinkó, J., et al. 2016, ApJ, 820, 92
* Miller et al. (2018) Miller, A. A., Cao, Y., Piro, A. L., et al. 2018, ApJ, 852, 100
* Miller et al. (2020a) Miller, A. A., Yao, Y., Bulla, M., et al. 2020a, arXiv e-prints, arXiv:2001.00598
* Miller et al. (2020b) Miller, A. A., Magee, M. R., Polin, A., et al. 2020b, ApJ, 898, 56
* Nugent et al. (2011) Nugent, P. E., Sullivan, M., Cenko, S. B., et al. 2011, Nature, 480, 344
* Olling et al. (2015) Olling, R. P., Mushotzky, R., Shaya, E. J., et al. 2015, Natur, 521, 332
* Pakmor et al. (2012) Pakmor, R., Kromer, M., Taubenberger, S., et al. 2012, ApJ, 747, L10
* Pakmor et al. (2013) Pakmor, R., Kromer, M., Taubenberger, S., & Springel, V. 2013, ApJ, 770, L8
* Panagia et al. (2006) Panagia, N., Van Dyk, S. D., Weiler, K. W., et al. 2006, ApJ, 646, 369
* Parrent et al. (2014) Parrent, J., Friesen, B., & Parthasarathy, M. 2014, Ap&SS, 351, 1
* Parrent et al. (2011) Parrent, J. T., Thomas, R. C., Fesen, R. A., et al. 2011, ApJ, 732, 30
* Pérez-Torres et al. (2014) Pérez-Torres, M. A., Lundqvist, P., Beswick, R. J., et al. 2014, ApJ, 792, 38
* Perlmutter et al. (1999) Perlmutter, S., Aldering, G., Goldhaber, G., et al. 1999, The Astrophysical Journal, 517, 565–586. http://dx.doi.org/10.1086/307221
* Phillips (1993) Phillips, M. M. 1993, ApJ, 413, L105
* Phillips et al. (1999) Phillips, M. M., Lira, P., Suntzeff, N. B., et al. 1999, AJ, 118, 1766
* Phillips et al. (2013) Phillips, M. M., Simon, J. D., Morrell, N., et al. 2013, ApJ, 779, 38
* Piro & Morozova (2016) Piro, A. L., & Morozova, V. S. 2016, ApJ, 826, 96
* Polin et al. (2019a) Polin, A., Nugent, P., & Kasen, D. 2019a, ApJ, 873, 84
* Polin et al. (2019b) —. 2019b, arXiv e-prints, arXiv:1910.12434
* Poznanski et al. (2012) Poznanski, D., Prochaska, J. X., & Bloom, J. S. 2012, MNRAS, 426, 1465
* Prieto et al. (2020) Prieto, J. L., Chen, P., Dong, S., et al. 2020, ApJ, 889, 100
* Rau & Cornwell (2011) Rau, U., & Cornwell, T. J. 2011, A&A, 532, A71
* Rayner et al. (2003) Rayner, J. T., Toomey, D. W., Onaka, P. M., et al. 2003, PASP, 115, 362
* Riess et al. (1998) Riess, A. G., Filippenko, A. V., Challis, P., et al. 1998, The Astronomical Journal, 116, 1009–1038. http://dx.doi.org/10.1086/300499
* Roming et al. (2005) Roming, P. W. A., Kennedy, T. E., Mason, K. O., et al. 2005, Space Sci. Rev., 120, 95. https://doi.org/10.1007%2Fs11214-005-5095-4
* Rothberg & Joseph (2006) Rothberg, B., & Joseph, R. D. 2006, AJ, 131, 185
* Ryder et al. (2004) Ryder, S. D., Sadler, E. M., Subrahmanyan, R., et al. 2004, MNRAS, 349, 1093
* Salas et al. (2013) Salas, P., Bauer, F. E., Stockdale, C., & Prieto, J. L. 2013, MNRAS, 428, 1207
* Sand et al. (2018) Sand, D. J., Graham, M. L., Botyánszki, J., et al. 2018, ApJ, 863, 24
* Sand et al. (2019) Sand, D. J., Amaro, R. C., Moe, M., et al. 2019, ApJ, 877, L4
* Schlafly & Finkbeiner (2011) Schlafly, E. F., & Finkbeiner, D. P. 2011, ApJ, 737, 103. https://doi.org/10.1088%2F0004-637x%2F737%2F2%2F103
* Schlegel et al. (1998) Schlegel, D. J., Finkbeiner, D. P., & Davis, M. 1998, ApJ, 500, 525
* Schmidt et al. (1989) Schmidt, G. D., Weymann, R. J., & Foltz, C. B. 1989, PASP, 101, 713
* SDSS Collaboration et al. (2017) SDSS Collaboration, Albareti, F. D., Allende Prieto, C., et al. 2017, ApJS, 233, 25
* Shappee et al. (2019) Shappee, B. J., Holoien, T. W. S., Drout, M. R., et al. 2019, ApJ, 870, 13
* Shen et al. (2018) Shen, K. J., Boubert, D., Gänsicke, B. T., et al. 2018, ApJ, 865, 15
* Siebert et al. (2020) Siebert, M. R., Dimitriadis, G., Polin, A., & Foley, R. J. 2020, arXiv e-prints, arXiv:2007.13793
* Silverman & Filippenko (2012) Silverman, J. M., & Filippenko, A. V. 2012, MNRAS, 425, 1917
* Soderberg et al. (2006) Soderberg, A. M., Chevalier, R. A., Kulkarni, S. R., & Frail, D. A. 2006, ApJ, 651, 1005
* Stritzinger et al. (2018) Stritzinger, M. D., Shappee, B. J., Piro, A. L., et al. 2018, ApJ, 864, L35
* Taubenberger (2017) Taubenberger, S. 2017, in Handbook of Supernovae, ed. A. W. Alsabti & P. Murdin (Springer, Cham)
* Taubenberger et al. (2013) Taubenberger, S., Kromer, M., Pakmor, R., et al. 2013, ApJ, 775, L43
* The Astropy Collaboration et al. (2018) The Astropy Collaboration, Price-Whelan, A. M., Sipőcz, B. M., et al. 2018, ArXiv e-prints, arXiv:1801.02634
* Thomas et al. (2011) Thomas, R. C., Aldering, G., Antilogus, P., et al. 2011, ApJ, 743, 27
* Tonry et al. (2001) Tonry, J. L., Dressler, A., Blakeslee, J. P., et al. 2001, ApJ, 546, 681
* Tonry et al. (2018) Tonry, J. L., Denneau, L., Heinze, A. N., et al. 2018, PASP, 130, 064505
* Tucker (2011) Tucker, B. E. 2011, Ap&SS, 335, 223
* Tucker et al. (2020) Tucker, M. A., Ashall, C., Shappee, B. J., et al. 2020, arXiv e-prints, arXiv:2009.07856
* Valenti et al. (2014) Valenti, S., Sand, D., Pastorello, A., et al. 2014, MNRAS, 438, L101
* Valenti et al. (2016) Valenti, S., Howell, D. A., Stritzinger, M. D., et al. 2016, MNRAS, 459, 3939. https://doi.org/10.1093%2Fmnras%2Fstw870
* Wang & Han (2012) Wang, B., & Han, Z. 2012, New A Rev., 56, 122
* Wang et al. (2009) Wang, X., Filippenko, A. V., Ganeshalingam, M., et al. 2009, ApJ, 699, L139
* Weiler et al. (2007) Weiler, K. W., Williams, C. L., Panagia, N., et al. 2007, ApJ, 671, 1959
* Whelan & Iben (1973) Whelan, J., & Iben, Icko, J. 1973, ApJ, 186, 1007
* White et al. (2015) White, C. J., Kasliwal, M. M., Nugent, P. E., et al. 2015, ApJ, 799, 52
* Wyatt et al. (2020) Wyatt, S. D., Sand, D. J., Hsiao, E. Y., et al. 2020, arXiv e-prints, arXiv:2012.02858
* Yao et al. (2019) Yao, Y., Miller, A. A., Kulkarni, S. R., et al. 2019, ApJ, 886, 152
* Yaron & Gal-Yam (2012) Yaron, O., & Gal-Yam, A. 2012, PASP, 124, 668
* Yoon & Langer (2005) Yoon, S. C., & Langer, N. 2005, A&A, 435, 967
* Zhang et al. (2016) Zhang, K., Wang, X., Zhang, J., et al. 2016, ApJ, 820, 67
* Zheng et al. (2018) Zheng, W., Kelly, P. L., & Filippenko, A. V. 2018, ApJ, 858, 104
# g-mode oscillations in hybrid stars: A tale of two sounds
Prashanth Jaikumar <EMAIL_ADDRESS> Department of Physics and Astronomy, California State University Long Beach, Long Beach, California 90840, USA
Alexandra Semposki <EMAIL_ADDRESS> and Madappa Prakash <EMAIL_ADDRESS> Department of Physics and Astronomy, Ohio University, Athens, Ohio 45701, USA
Constantinos Constantinou <EMAIL_ADDRESS> INFN-TIFPA, Trento Institute of Fundamental Physics and Applications, Povo, 38123 Trento, Italy, and European Centre for Theoretical Studies in Nuclear Physics and Related Areas, Villazzano, 38123 Trento, Italy
###### Abstract
We study the principal core $g$-mode oscillation in hybrid stars containing
quark matter and find that they have an unusually large frequency range
($\approx$ 200 - 600 Hz) compared to ordinary neutron stars or self-bound
quark stars of the same mass. Theoretical arguments and numerical calculations
that trace this effect to the difference in the behaviour of the equilibrium
and adiabatic sound speeds in the mixed phase of quarks and nucleons are
provided. We propose that the sensitivity of core $g$-mode oscillations to
non-nucleonic matter in neutron stars could be due to the presence of a mixed
quark-nucleon phase. Based on our analysis, we conclude that for binary
mergers where one or both components may be a hybrid star, the fraction of
tidal energy pumped into resonant $g$-modes in hybrid stars can exceed that of
a normal neutron star by a factor of 2–3, although resonance occurs during the
last stages of inspiral. A self-bound star, on the other hand, has a much
weaker tidal overlap with the $g$-mode. The cumulative tidal phase error in
hybrid stars, $\Delta\phi\cong$ 0.5 rad, is comparable to that from tides in
ordinary neutron stars, presenting a challenge in distinguishing between the
two cases. However, should the principal $g$-mode be excited to sufficient
amplitude for detection in a postmerger remnant with quark matter in its
interior, its frequency would be a possible indication for the existence of
non-nucleonic matter in neutron stars.
## I Introduction
The core of a neutron star (NS) can, in principle, support phases of dense
deconfined quark matter (QM) [1]. Confirmation of the presence of quarks in
NSs, however, has not been possible either through observations or from
lattice-gauge calculations of finite baryon density matter starting from the
Lagrangian of Quantum Chromodynamics (QCD). Although perturbative calculations
of QM have been performed [2, 3, 4], their applicability is limited to baryon
densities $n_{B}\gtrsim 40n_{s}$ [5], where $n_{s}\simeq 0.16~{}\rm{fm^{-3}}$
is the nuclear matter saturation density. Such densities, however, lie well
beyond the central densities $n_{c}$ in the range 3–8$\,n_{s}$ of observed NSs.
In view of this conundrum, theoretical studies of QM in NSs have been
exploratory in nature by positing either a sharp 1st-order or a smooth
crossover hadron-to-quark phase transition. Depending on the treatment of the
phase transition and the equations of state (EOSs) of hadronic and quark
matter, either a phase of pure QM or a phase in which hadrons are admixed with
quarks can be realized (for a detailed account, see Ref. [6] and an extensive
list of references therein). In either case, stars with quarks are difficult
to distinguish from normal NSs based on the knowledge of masses and radii
alone as similar results can be obtained with both. While the long-term
cooling of a NS can be affected by the presence of quarks, cooling data are
relatively sparse and gathered over decades [7, 8]. Gravitational wave
observations from compact binary mergers can be another probe of the EOS, but
currently, constraints on tidal polarizability [9, 10, 11] from gravitational
wave data [12] are consistent with both normal and quark-containing stars,
depending on the theoretical assumptions made [6, 13, 14].
In this paper, we are particularly interested in how NS oscillations can shed
light on the presence of QM in stars that contain an admixture of nucleons and
quarks (termed hybrid stars). Andersson and Kokkotas [15] have proposed that
NS oscillations (in particular, the $f,p$ modes) could be a “fingerprint” for
the supra-nuclear EOS in gravitational wave data. A review of potential
signatures of QM in NSs in the multi-messenger era, including the role of
their oscillations, can be found in [16]. Along these lines, we offer in this
work a new diagnostic of deconfined QM in NSs based on asteroseismology. We
show that a steep rise in the frequency of the principal $g$-mode (gravity
mode) occurs as soon as QM appears in a mixed phase in the core, exceeding the
typical core $g$-mode frequency of a nucleonic star by a factor of two or
more. This rise is essentially driven by a drop in the local equilibrium speed
of sound at the onset of the phase transition, while the adiabatic sound speed
changes only slightly. If this $g$-mode becomes resonant with the tidal force
during the late stages of a binary inspiral, the resulting energy transfer
from the orbital motion to the star via tidal coupling can affect the phase of
the gravitational waveform, and potentially signal a hybrid star.
NS oscillations are categorized by the nature of the dominant restoring force
for the perturbation in question. Several types of modes can be supported by a
star and it is desirable to investigate as many of them as possible in detail.
These modes are typically excited and sustained in different regions of the
star and their amplitudes and damping rates are subject to considerable
uncertainty. Here, for reasons that will become apparent, we focus our
attention on the $g$-mode and its coupling to gravitational waves.
A $g$-mode is a specific kind of non-radial fluid oscillation initiated when a
parcel of fluid is displaced against the background of a stratified
environment [17, 18]. While pressure equilibrium is rapidly restored via sound
waves, chemical equilibrium can take longer causing buoyancy forces to oppose
the displacement. Since cold NSs are not convective, the opposing force sets
up stable oscillations throughout the core, with a typical frequency, called
the (local) Brunt-Väisälä frequency [19], which depends on the difference
between the equilibrium and adiabatic sound speeds as well as the local metric
coefficients. Convectively stable $g$-modes exist for a wide range of models
of the EOS [20]. Though the $g$-mode in NSs has been studied before [21, 22,
23], with recent works incorporating additional features like hyperonic matter
and superfluidity [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34], the novel
aspect of our work is the investigation of the $g$-mode frequency in a phase
transition from nuclear matter to a mixed phase of quarks and nucleons. We
point out that a similar result was obtained in [30] for superfluid hyperonic
stars. However, the calculations presented there were for a fixed NS mass of
about 1.64$M_{\odot}$ with a radius of 13.6 km. While their chosen hyperonic
EOS does support a maximum mass of $2.015M_{\odot}$ [35], the nuclear and
quark EOSs chosen in our work satisfy additional observational and
experimental constraints, and present the effect for a wide range of masses
up to the observed maximum.
This paper also extends the results of [36] by incorporating aspects of
General Relativity in the fluid oscillation equations (while remaining within
the Cowling approximation), updating the nuclear EOS to include consistency
with radius constraints from tidal polarizability [37] and NICER data [38, 39]
on the radius of a $\simeq 1.4M_{\odot}$ NS. We also provide new analytical
results for the two sound speeds in a mixed phase of quarks and nucleons. Our
study can be of practical interest to investigations of the sound speed in
NSs, which is attracting renewed attention [40, 41, 42, 43]. The matter of
detecting $g$-modes from the pre- or post-coalescence phase of binary NS
mergers is not addressed in detail here, but we present an estimate of its
impact on the accumulated tidal phase up to the merger.
It is pertinent to note that oscillation modes other than $g$-modes can also
potentially be affected by the presence of QM in NSs. Radial oscillation
modes, which however do not couple to gravitational waves, were studied in
[44]. Among non-radial modes, the $i$-mode (interface mode) has been recently
investigated for the special case of crystalline QM surrounded by a hadronic
envelope in [45], and its frequency can range from 300–1500 Hz, which can be
probed by current or third generation interferometric detectors. The $r$-mode
(Rossby mode) frequency and its damping rate for NSs containing QM also
differs from a purely nucleonic one [46, 47, 48]. The $s$-mode (shear mode)
can be excited in a mixed phase of quarks and nucleons and is sensitive to the
shear modulus of structures in the mixed phase [49], probing the surface
tension of QM. The $g$-mode oscillation in stars containing QM has been
studied in the case of a sharp interface [50, 51] between hadronic and quark
matter, yielding the spectrum of so-called discontinuity $g$-modes, but these
works assume a strong 1st-order phase transition and a large value of the
surface tension for QM, while we study the case of a mixed phase of
significant extent that would be favored if the same surface tension were
small enough (here, we do not explicitly study surface and curvature effects
or the impact of a non-trivially structured mixed phase on the oscillation
spectrum [52, 49, 53], but a more complete treatment of $g$-modes in hybrid
stars should address these issues). The $g$-mode for baryonic stars with
superfluid effects was studied in [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34]
highlighting the subtle role of temperature and composition gradients in
driving the $g$-mode. In this work, we investigate the composition gradients
induced by the mixed phase of quarks and nucleons, which supports unusually
high-frequency $g$-modes through its effect on the adiabatic and equilibrium
sound speeds.
The organization of this paper is as follows. In Sec. II, we introduce the
governing equations of the $g$-mode and outline its relation to the two sound
speeds. In Sec. III, we present the EOS models in the nucleonic and quark
sectors chosen for our study of the $g$-mode spectrum. The rationale for our
parameter choices and the basic features of these models are highlighted here
for orientation. We stress that our choices are representative, but not
exhaustive. In Sec. IV, we derive expressions for the two sound speeds in the
mixed phase of quarks and nucleons. Results for the sound speeds, the Brunt-
Väisälä frequency and the $g$-mode frequency in nucleonic, quark and hybrid
stars are gathered and interpreted in Sec. V. The tidal forcing and phase
error due to $g$-mode excitation are estimated in Sec. VI. Our summary,
conclusions and outlook are contained in Sec. VII. The appendices contain
details about the determination of parameters in the nuclear EOS model, the
resulting NS structural properties and the two sound speeds.
## II $g$-mode Oscillations
In this section, we outline the equations for fluid oscillations and non-
rotating NS structure that were used to determine the eigenfrequencies of the
$g$-mode. In general, the oscillatory displacement of a fluid element in a
spherically symmetric star is represented by a vector field
${\vec{\xi}}^{nlm}(\vec{r}){\rm e}^{-i\omega t}$ with $n,l$ and $m$ denoting
the radial, azimuthal and magnetic mode indices. To be precise, the
frequencies also carry subscripts $nlm$ implicitly understood, with
degeneracies that are broken in more realistic cases such as with rotation or
magnetic fields. For even-parity or spheroidal modes, separation into radial
and tangential components yields $\xi_{r}^{nlm}(\vec{r})$ =
$\eta_{r}^{nl}(r)Y_{lm}(\theta,\phi)$ and $\xi_{\perp}^{nlm}(\vec{r})$ =
$r\eta_{\perp}^{nl}(r)\nabla_{\perp}Y_{lm}(\theta,\phi)$, respectively, where
$Y_{lm}(\theta,\phi)$ are the spherical harmonics. From the perturbed
continuity equation for the fluid, the tangential function $\eta_{\perp}$ can
be traded for fluid variables as $\delta p/\epsilon$ =
$\omega^{2}r\eta_{\perp}(r)Y_{lm}(\theta,\phi){\rm e}^{-i\omega t}$, where
$\delta p$ is the corresponding local (Eulerian) pressure perturbation and
$\epsilon$ the local energy density. Within the relativistic Cowling
approximation (the Cowling approximation neglects the back reaction of the
gravitational potential and reduces the number of equations we have to solve;
while this is not strictly consistent with our fully general relativistic (GR)
treatment of the equilibrium structure of the star, it does not change our
conclusions qualitatively or even quantitatively, since the approximation is
accurate for $g$-mode frequencies at the few % level [54]), the equations of
motion to be solved to determine the frequency of a particular mode are
[55, 17, 29]
$\displaystyle-\frac{1}{\mathrm{e}^{\lambda/2}r^{2}}\frac{\partial}{\partial r}\left[\mathrm{e}^{\lambda/2}r^{2}\xi_{r}\right]+\frac{l(l+1)\mathrm{e}^{\nu}}{r^{2}\omega^{2}}\frac{\delta p}{p+\epsilon}-\frac{\Delta p}{\gamma p}=0\,,$
$\displaystyle\frac{\partial\delta p}{\partial r}+g\left(1+\frac{1}{c_{\mathrm{s}}^{2}}\right)\delta p+\mathrm{e}^{\lambda-\nu}h\left(N^{2}-\omega^{2}\right)\xi_{r}=0\,,$ (1)
where $h=p+\epsilon$, and we have suppressed the indices on $\omega$ and
$\xi$. Equation (1) involves thermodynamic quantities that follow from the
specific EOS. Specifically, $p$ denotes pressure, $\epsilon$ energy density,
and $\gamma$ the adiabatic index of the fluid. The Lagrangian variation of the
pressure enters as $\Delta p$, and is related to the Eulerian variation
$\delta p$ through the operator relation $\Delta\equiv\delta+\xi\cdot\nabla$.
The symbol $c_{s}$ denotes the adiabatic sound speed, which is related to the
adiabatic index as $c_{s}^{2}=\gamma p/(\mu_{n}n_{B})$ where $\mu_{n}$ is the
neutron chemical potential (in beta-equilibrated, charge-neutral matter, the
neutron chemical potential is sufficient to determine all other chemical
potentials) and $n_{B}$ the local baryon density. The equilibrium sound speed
enters through the Brunt-Väisälä frequency ($N$) which is given by
$\displaystyle N^{2}\equiv g^{2}\Big{(}\frac{1}{c_{e}^{2}}-\frac{1}{c_{s}^{2}}\Big{)}{\rm e}^{\nu-\lambda}\,,$ (2)
where $g=-\nabla\phi=-\nabla p/h$ with $h=\epsilon+p$ the enthalpy of the
fluid. Finally, $\nu(r)$ and $\lambda(r)$ are metric functions of the
unperturbed star which feature in the Schwarzschild interior metric ($r<R$):
$\displaystyle-\mathrm{d}s^{2}\equiv\,g_{\alpha\beta}\mathrm{d}x^{\alpha}\mathrm{d}x^{\beta}=$
$\displaystyle-\mathrm{e}^{\nu(r)}\mathrm{d}t^{2}+\mathrm{e}^{\lambda(r)}\mathrm{d}r^{2}$
(3)
$\displaystyle+r^{2}\left(\mathrm{d}\theta^{2}+\sin^{2}\theta\mathrm{d}\varphi^{2}\right).$
Explicitly,
$\displaystyle e^{\lambda(r)}=\frac{1}{1-\left(\frac{2Gm(r)}{c^{2}r}\right)}$
(4)
and
$\displaystyle e^{\nu(r)}=\exp\bigg{[}-\frac{2G}{c^{2}}\int_{0}^{r}\frac{m(r^{\prime})+4\pi p(r^{\prime})r^{\prime 3}/c^{2}}{r^{\prime}\left(r^{\prime}-\frac{2Gm(r^{\prime})}{c^{2}}\right)}\,dr^{\prime}\bigg{]}{\rm e}^{\nu_{0}},$ (5)
where $m(r^{\prime})$ is the enclosed mass of the star at $r^{\prime}$. These
metric functions must match to their exterior values at the surface $r=R$,
hence the constant factor ${\rm e}^{\nu_{0}}$ [56].
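Once the background star and the two sound speeds are in hand, Eq. (2) is a
pointwise evaluation. A minimal Python sketch (the profile arrays `g`, `ce2`,
`cs2`, `nu`, `lam` are hypothetical placeholders for quantities computed
elsewhere):

```python
import numpy as np

def brunt_vaisala_sq(g, ce2, cs2, nu, lam):
    """N^2 of Eq. (2), evaluated on a radial grid.

    g        : local gravity -(dp/dr)/(eps + p), per Eq. (2)
    ce2, cs2 : equilibrium and adiabatic squared sound speeds
    nu, lam  : metric functions nu(r) and lambda(r) of Eq. (3)
    """
    return g**2 * (1.0/ce2 - 1.0/cs2) * np.exp(nu - lam)
```

Wherever $c_{s}^{2}>c_{e}^{2}$ this gives $N^{2}>0$, i.e., stable buoyancy
oscillations; $N^{2}<0$ would instead signal convective instability.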
In this work, we study the fundamental $g$-mode with $n$ = 1 and fix the
mode’s multipolarity at $l$ = 2. For the non-rotating stars we consider here,
solutions are degenerate in $m$. Note that our definition of the “fundamental”
mode refers to the lowest radial order of the $g$-mode which also has the
highest frequency. This should not be confused with the qualitatively
different $f$-mode which is also referred to sometimes as the fundamental
mode. Furthermore, overtones with lower frequency exist, but we do not perform
any computations with them here, since the fundamental $g$-mode has the
highest frequency and will be excited during the final stage of the pre-merger
phase when tidal forces are strongest. The system of equations in Eq. (1)
cannot be solved analytically even with a simple model of a neutron star. Our
aim will be to solve this numerically as an eigenvalue system for the $g$-mode
frequency $\omega$. Physically, the solution to this system of equations,
under the boundary conditions $\Delta p=0$ at the surface and
$\xi_{r},\,\delta p/\epsilon$ regular at the center, only exists for discrete
values of the mode frequency $\omega$. These values represent the $g$-mode
spectrum for a chosen stellar model. Because we have employed the Cowling
approximation and ignored the perturbations of the metric that must accompany
fluid perturbations, we cannot compute the imaginary part of the
eigenfrequency (damping time) of the $g$-mode (the damping time of $g$-modes
due to viscosity and gravitational wave emission, crudely estimated in [57,
36], suggests that the $g$-mode can become secularly unstable for temperatures
$10^{8}~{}{\rm K}<T<10^{9}~{}{\rm K}$ for rotational speeds exceeding twice
the $g$-mode frequency of a static star). We turn now to discuss the EOS
models for nucleonic and quark matter employed in this work.
## III Models for the Equation of State
The EOS models chosen in this work were predicated on the requirement that the
squared sound speeds $c_{e}^{2}$ (see Eq.(25)) and $c_{s}^{2}$ could be
calculated straightforwardly. In the nucleonic sector, we employ the model of
Zhao and Lattimer (ZL) [43] which is consistent with nuclear systematics at
and below the nuclear saturation density $n_{s}$. With suitable choices of the
slope of the nuclear symmetry energy at $n_{s}$ (see below), this EOS is also
consistent with the recent chiral effective theory calculations of Drischler
et al. [58] in which uncertainty estimates of the EOS up to $2n_{s}$ were
provided (see Fig. 2 of this reference). In addition, the ZL EOS is able to
support $\simeq 2M_{\odot}$ stars required by mass measurements of heavy NSs
[59], and is consistent with the recent radius measurements of $\sim
1.4M_{\odot}$ stars [38, 39] and the tidal deformability estimates from the
binary neutron star merger GW170817 [37].
Among the many models and treatments available in the quark sector [6], we
utilize the vMIT model of Gomes et al. [60] as a caricature of strongly
interacting quarks at the densities attained within NSs. Such interactions
between quarks are required to satisfy astrophysical data, particularly those
of heavy mass NSs. For the treatment of the nucleon-to-quark transition at
supra-nuclear densities, we employ the Gibbs construction [61] which renders
the transition to be smooth. Alternative models and treatments that feature
strong first- or second-order phase transitions will be undertaken in
subsequent work.
### III.1 The ZL EOS for Nucleonic Matter
For completeness and to set the stage for the calculation of the two sound
speeds in the next section, relevant details of the ZL model are provided
below. The total energy density of interacting nucleons in neutron star matter
(NSM) is
$\displaystyle\epsilon_{B}=$
$\displaystyle\sum_{i=n,p}\frac{1}{\pi^{2}}\int_{0}^{k_{Fi}}k^{2}\sqrt{M_{B}^{2}+k^{2}}\,dk$
$\displaystyle+n_{B}V(n_{n},n_{p})\,,$ (6)
where the Fermi momenta $k_{Fi}=(3\pi^{2}n_{i})^{1/3}$ with $i=n,p$ and
$n_{B}=n_{n}+n_{p}$, and $M_{B}$ is the baryon mass in vacuum. In the ZL
model, interactions between nucleons are written as
$\displaystyle V(n_{n},n_{p})\equiv V(u,x)=$
$\displaystyle\,4x(1-x)(a_{0}u+b_{0}u^{\gamma})$
$\displaystyle+(1-2x)^{2}(a_{1}u+b_{1}u^{\gamma_{1}}),$ (7)
where $u=n_{B}/n_{s}$ and the proton fraction $x=n_{p}/n_{B}$. Adding and
subtracting $a_{0}u+b_{0}u^{\gamma}$, the above equation can be rewritten as
$\displaystyle V(u,x)$ $\displaystyle=V_{0}+S_{2i}(u)(1-2x)^{2}\quad$ with
$\displaystyle V_{0}$ $\displaystyle=a_{0}u+b_{0}u^{\gamma},\quad$
$\displaystyle\quad S_{2i}(u)$
$\displaystyle=(a_{1}-a_{0})u+b_{1}u^{\gamma_{1}}-b_{0}u^{\gamma}\,,$ (8)
where the subscript “$2i$” in $S_{2i}$ refers to the interacting part of the
total symmetry energy $S_{2}=S_{2k}+S_{2i}$, with $S_{2k}$ representing the
kinetic part. Expanding the kinetic part in Eq. (6) to order $(1-2x)^{2}$, we
obtain the result (for the derivation of the kinetic part of the symmetry
energy and its derivatives, see the Appendix)
$\displaystyle
S_{2k}=\frac{1}{8}\left[\frac{1}{n}\frac{\partial^{2}\epsilon_{Bk}}{\partial
x^{2}}\right]_{x=\frac{1}{2}}=\frac{k_{F}^{2}}{6E_{F}}\,,$ (9)
where $k_{F}=(3\pi^{2}n_{B}/2)^{1/3}$ is the Fermi wave number of symmetric
nuclear matter (SNM) and $E_{F}=\sqrt{k_{F}^{2}+M_{B}^{2}}$. Collecting the
results, the energy per baryon relative to $M_{B}$ is given by
$\displaystyle\frac{\epsilon_{B}}{n_{B}}-M_{B}$ $\displaystyle=$
$\displaystyle E(u,x)=E_{{\rm SNM}}+S_{2}(u)(1-2x)^{2}$ where $\displaystyle
E_{{\rm SNM}}$ $\displaystyle=$ $\displaystyle
T_{1/2}+V_{1/2}=T_{1/2}+(a_{0}u+b_{0}u^{\gamma}),$ $\displaystyle S_{2}(u)$
$\displaystyle=$
$\displaystyle\frac{k_{F}^{2}}{6E_{F}}+(a_{1}-a_{0})u+b_{1}u^{\gamma_{1}}-b_{0}u^{\gamma}\,.$
(10)
The kinetic energy per baryon $T_{1/2}$ in SNM ($x=1/2$) is given by the
expression
$\displaystyle T_{1/2}$ $\displaystyle=$
$\displaystyle\frac{\epsilon_{1/2}^{\rm kin}}{n_{B}}-M_{B}\quad{\rm with}$
$\displaystyle\epsilon_{1/2}^{\rm kin}$ $\displaystyle=$
$\displaystyle\frac{2}{4\pi^{2}}\bigg{[}k_{F}E_{F}\left(k_{F}^{2}+\frac{M_{B}^{2}}{2}\right)$
(11) $\displaystyle-$
$\displaystyle\frac{1}{2}M_{B}^{4}\ln\left(\frac{k_{F}+E_{F}}{M_{B}}\right)\bigg{]}\,,$
where $n_{B}$, $k_{F}$ and $E_{F}$ refer to those in SNM.
The baryon pressure $p_{B}$ is
$\displaystyle p_{B}=n_{s}u^{2}\frac{dE_{B}}{du}=p_{\rm
SNM}+n_{s}u^{2}(1-2x)^{2}\frac{dS_{2}(u)}{du}\,,$ (12)
where
$\displaystyle p_{SNM}$ $\displaystyle=$ $\displaystyle p_{1/2}^{\rm
kin}+n_{s}~{}(a_{0}u^{2}+\gamma b_{0}u^{\gamma+1}),\quad{\rm with}$
$\displaystyle p_{1/2}^{\rm kin}$ $\displaystyle=$
$\displaystyle\frac{2}{12\pi^{2}}\bigg{[}k_{F}E_{F}\left(k_{F}^{2}-\frac{3}{2}M_{B}^{2}\right)$
$\displaystyle+$
$\displaystyle\frac{3}{2}M_{B}^{4}\ln\left(\frac{k_{F}+E_{F}}{M_{B}}\right)\bigg{]},\quad{\rm
and}$ $\displaystyle u\frac{dS_{2}(u)}{du}$ $\displaystyle=$
$\displaystyle\frac{2}{3}S_{2k}\left[1-18\left(\frac{S_{2k}}{k_{F}}\right)^{2}\right]+(a_{1}-a_{0})u$
(13) $\displaystyle+$ $\displaystyle b_{1}\gamma_{1}u^{\gamma_{1}}-b_{0}\gamma
u^{\gamma}.$
The incompressibility $K_{B}$ in SNM is obtained from
$\displaystyle K_{B}$ $\displaystyle=$ $\displaystyle
9\frac{dp_{B}}{dn_{B}}=9\left[2u\frac{dE_{B}}{du}+u^{2}\frac{d^{2}E_{B}}{du^{2}}\right]$
(14) $\displaystyle=$ $\displaystyle
9\left\\{\frac{k_{F}^{2}}{3E_{F}}+\left[2a_{0}u+\gamma(\gamma+1)b_{0}u^{\gamma}\right]\right\\}$
The energy per baryon in pure neutron matter (PNM in which $x=0$) relative to
the baryon mass is
$\displaystyle E_{{\rm PNM}}$ $\displaystyle=$ $\displaystyle
T_{0}+V_{0}=T_{0}+(a_{1}u+b_{1}u^{\gamma_{1}})$ $\displaystyle T_{0}$
$\displaystyle=$ $\displaystyle\frac{\epsilon_{0}^{\rm
kin}}{n_{B}}-M_{B}\quad{\rm with}$ $\displaystyle\epsilon_{0}^{\rm kin}$
$\displaystyle=$
$\displaystyle\frac{1}{4\pi^{2}}\bigg{[}k_{Fn}E_{Fn}\left(k_{Fn}^{2}+\frac{M_{B}^{2}}{2}\right)$
(15) $\displaystyle-$
$\displaystyle\frac{1}{2}M_{B}^{4}\ln\left(\frac{k_{Fn}+E_{Fn}}{M_{B}}\right)\bigg{]}\,,$
where now $n_{B}=n_{n}$, $k_{Fn}=(3\pi^{2}n_{n})^{1/3}=2^{1/3}k_{F}$, and
$E_{Fn}=\sqrt{k_{Fn}^{2}+M_{B}^{2}}$.
The determination of the EOS constants in SNM and PNM, and relevant NS
structural properties are summarized in Appendix A.
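To make the preceding formulas concrete, the following Python sketch evaluates
the ZL energy per baryon and pressure, Eqs. (9)-(12). The numerical couplings
below are illustrative placeholders only, not fitted values; the actual
constants are determined as described in Appendix A:

```python
import numpy as np

HBARC = 197.327   # MeV fm
MB = 939.0        # vacuum baryon mass [MeV]
NSAT = 0.16       # saturation density n_s [fm^-3]

# placeholder couplings -- to be replaced by the Appendix A fits
a0, b0, gam = -100.0, 60.0, 1.4      # SNM sector (illustrative)
a1, b1, gam1 = -26.0, 7.0, 2.4       # isovector sector (illustrative)

def kF_EF(u):
    """Fermi momentum and energy of SNM at u = n_B/n_s."""
    kF = HBARC * (1.5 * np.pi**2 * u * NSAT)**(1.0/3.0)
    return kF, np.sqrt(kF**2 + MB**2)

def S2(u):
    """Total symmetry energy S_2(u) = S_2k + S_2i, Eq. (10)."""
    kF, EF = kF_EF(u)
    return kF**2/(6.0*EF) + (a1 - a0)*u + b1*u**gam1 - b0*u**gam

def E(u, x):
    """Energy per baryon relative to M_B, Eq. (10)."""
    kF, EF = kF_EF(u)
    nB = u * NSAT
    eps_kin = (kF*EF*(kF**2 + 0.5*MB**2)
               - 0.5*MB**4*np.log((kF + EF)/MB)) / (2.0*np.pi**2*HBARC**3)
    return (eps_kin/nB - MB) + a0*u + b0*u**gam + S2(u)*(1.0 - 2.0*x)**2

def p_B(u, x, du=1e-5):
    """Baryon pressure p_B = n_s u^2 dE/du at fixed x, Eq. (12)."""
    return NSAT * u**2 * (E(u + du, x) - E(u - du, x)) / (2.0*du)
```

The analytic derivative in Eq. (13) provides a check on the finite-difference
evaluation of $dS_{2}/du$ used here.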
### III.2 The vMIT Equation of State for Quark Matter
In recent years, variations of the original bag model [62] have been adopted
[60, 63] to calculate the structure of NSs with quarks in their cores to
account for $\geq 2\mathrm{M}_{\odot}$ maximum-mass stars. In these models,
termed vMIT or vBag, the QCD perturbative results are dropped and replaced by
repulsive vector interactions between quarks. We provide numerical examples
only for the vMIT model, as results of the vBag model turn out to be
qualitatively similar.
The Lagrangian density of the vMIT bag model is
$\mathcal{L}=\sum_{i}\left[\bar{\psi}_{i}\left(i\not{\partial}-m_{i}-B\right)\psi_{i}+\mathcal{L}_{\mathrm{int}}\right]\Theta,$
(16)
where
$\mathcal{L}_{\mathrm{int}}=\mathcal{L}_{\mathrm{pert}}+\mathcal{L}_{\mathrm{vec}}$
describes quarks of mass $m_{i}$ confined within a bag as denoted by the
$\Theta$ function. For three flavors $i=u,d,s$ and three colors $N_{c}=3$ of
quarks, the number and baryon densities, energy density, pressure and chemical
potentials in the bag model are given by
$\displaystyle n_{i}$
$\displaystyle=2N_{c}\int^{k_{Fi}}\frac{d^{3}k}{(2\pi)^{3}},\quad
n_{B}=\frac{1}{3}\sum_{i}n_{i}$ (17) $\displaystyle\epsilon_{Q}$
$\displaystyle=2N_{c}\sum_{i}\int^{k_{Fi}}\frac{d^{3}k}{(2\pi)^{3}}\sqrt{k^{2}+m_{i}^{2}}+\epsilon_{\mathrm{int}}+B$
(18) $\displaystyle p_{Q}$
$\displaystyle=\frac{2N_{c}}{3}\sum_{i}\int^{k_{Fi}}\frac{d^{3}k}{(2\pi)^{3}}\frac{k^{2}}{\sqrt{k^{2}+m_{i}^{2}}}+p_{\mathrm{int}}-B$
(19) $\displaystyle\mu_{i}$
$\displaystyle=\sqrt{k_{Fi}^{2}+m_{i}^{2}}+\mu_{\mathrm{int},i}$ (20)
The upper limit of the integrals $k_{Fi}$ is the Fermi wave number for each
species $i$, which, at zero temperature, appropriately terminates the
integration over $k$. The first terms in $\epsilon_{Q}$ and in $p_{Q}$ are
free Fermi gas (FG) contributions, $\epsilon_{\mathrm{FG}}$ and
$p_{\mathrm{FG}}$ respectively, the second terms are due to
$\mathcal{L}_{\text{int }}$ and $B$ is the bag constant that accounts for the
cost of confining the quarks inside a bag. The $m_{i}$ are quark masses,
generally taken to be current quark masses. The $u$ and $d$ quark masses are
commonly set to zero (because at high density, $k_{Fi}$ in these cases far
exceed $m_{i}$), whereas that of the $s$ quark is taken at its Particle Data
Group (PDG) value. The QCD perturbative calculations of $\epsilon_{\text{pert
}}$ and $p_{\text{pert }}$, and the ensuing results for the structure of NSs
containing quarks within the cores as well as self-bound strange quark stars
are discussed in [5]. At leading order of QCD corrections, the results are
qualitatively similar to what one obtains by simply using the FG results with
an appropriately chosen value of $B$. As results of perturbative calculations
are deemed to be valid only for $n_{B}\geq 40n_{s}$, they are dropped in the
vMIT model. The Lagrangian density from vector interactions
$\quad\mathcal{L}_{\text{vec
}}=-G_{v}\sum_{i}\bar{\psi}\gamma_{\mu}V^{\mu}\psi+\left(m_{V}^{2}/2\right)V_{\mu}V^{\mu}\,,$
(21)
where interactions among the quarks occur via the exchange of a vector-
isoscalar meson $V^{\mu}$ of mass $m_{V},$ is chosen in Ref. [54]. Explicitly,
$\displaystyle\epsilon_{Q}$
$\displaystyle=\sum_{i}\epsilon_{\mathrm{FG},\mathrm{i}}+\frac{1}{2}\left(\frac{G_{v}}{m_{V}}\right)^{2}n_{Q}^{2}+B$
(22) $\displaystyle p_{Q}$
$\displaystyle=\sum_{i}p_{\mathrm{FG},\mathrm{i}}+\frac{1}{2}\left(\frac{G_{v}}{m_{V}}\right)^{2}n_{Q}^{2}-B$
(23) $\displaystyle\mu_{i}$
$\displaystyle=\sqrt{k_{Fi}^{2}+m_{i}^{2}}+\left(\frac{G_{v}}{m_{V}}\right)^{2}n_{Q}\,,$
(24)
where $n_{Q}=\sum_{i}n_{i},$ and the bag constant $B$ is chosen appropriately
to enable a transition to matter containing quarks. Note that terms associated
with the vector interaction above are similar to those in hadronic models. We
studied model parameters in a wide range $B^{1/4}=(155-180)~{}\mathrm{MeV}$
and $a=\left(G_{v}/m_{V}\right)^{2}=(0.1-0.3)~{}\mathrm{fm}^{2}$ and report
results for specific values within this range.
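A minimal numerical sketch of Eqs. (22)-(24) follows (Python, natural units of
MeV and fm; the closed forms for the free Fermi-gas terms are the standard
degenerate-gas expressions, and the function name is ours). Nonzero quark
masses, such as the $(m_{u},m_{d},m_{s})=(5,7,150)$ MeV set used in Sec. V,
keep the logarithms finite:

```python
import numpy as np

HBARC = 197.327  # MeV fm

def vmit_eos(kF, m, a, B):
    """Energy density, pressure and chemical potentials of the vMIT model,
    Eqs. (22)-(24).

    kF : quark Fermi momenta k_Fi [MeV] (array, one entry per flavor)
    m  : quark masses m_i [MeV];  a = (G_v/m_V)^2 [fm^2];  B [MeV/fm^3]
    """
    Nc = 3
    mu_star = np.sqrt(kF**2 + m**2)
    n_i = Nc * kF**3 / (3.0*np.pi**2 * HBARC**3)      # [fm^-3]
    nQ = n_i.sum()
    log = np.log((kF + mu_star)/m)
    # degenerate Fermi-gas closed forms (degeneracy 2*Nc per flavor)
    eps_fg = (2*Nc/(16*np.pi**2))*(kF*mu_star*(2*kF**2 + m**2) - m**4*log)/HBARC**3
    p_fg   = (2*Nc/(48*np.pi**2))*(kF*mu_star*(2*kF**2 - 3*m**2) + 3*m**4*log)/HBARC**3
    eps = eps_fg.sum() + 0.5*a*nQ**2*HBARC + B        # Eq. (22) [MeV/fm^3]
    p   = p_fg.sum()   + 0.5*a*nQ**2*HBARC - B        # Eq. (23) [MeV/fm^3]
    mu  = mu_star + a*nQ*HBARC                        # Eq. (24) [MeV]
    return eps, p, mu
```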
## IV Sound Speeds in the Pure and Mixed Phases
As the difference of the adiabatic and equilibrium sound speeds drives the
restoring force for $g$-modes, it is instructive to collect some general
expressions for these two sound speeds in the pure and mixed phases. For the
pure phase of $npe$ matter, these expressions are derived and applied in [20],
but given their central role in this work, and the fact that we also extend
the application to $npe\mu$ and quark matter, we detail their derivation below
for completeness. For the mixed phase, we derive expressions that have not, to
our knowledge, been previously reported in the literature. First, a point of
notation: the equilibrium squared sound speed is commonly defined in the
literature [20, 64, 6] by the symbol $c_{s}^{2}$, which we reserve here for
the squared adiabatic sound speed, as in [29]. The equilibrium sound speed is
defined by
$\displaystyle c_{e}^{2}=\frac{dp}{d\epsilon}\,,$ (25)
where $p$ and $\epsilon$ are the total pressure and energy density in matter
$\displaystyle\epsilon=\epsilon_{B}\,+\sum\limits_{l=e^{-},\,\mu^{-}}\epsilon_{l}\,,\quad
p=p_{B}\,+\sum\limits_{l=e^{-},\,\mu^{-}}p_{l}\,.$ (26)
In Eq.(26), the leptonic energies are the $T$=0 degenerate Fermi gas
expressions for massive leptons. Being a total derivative, the derivative in
Eq. (25) is taken along the curve satisfying both mechanical and chemical
equilibrium, i.e., with $\beta$-equilibrium conditions holding. In NSM, when only $npe$ are present
in equilibrium, the composition at fixed baryon density ($n_{n}+n_{p}$) is
completely fixed once the proton fraction $x_{p}$ (=$x_{e}$ by charge
neutrality) is determined. In this case, the squared adiabatic sound speed is
defined as
$\displaystyle c_{s}^{2}=\left(\frac{\partial
p}{\partial\epsilon}\right)_{x}\,,$ (27)
where $x=x_{p}=x_{e}$. In the partial derivative, the composition is held
fixed, i.e., $\beta$-equilibrium conditions are imposed only after all
derivatives have been evaluated. The resulting distinction between these two
speeds plays an important role in determining the oscillation frequencies of
non-radial oscillations such as $g$-modes:
$\displaystyle\omega^{2}\propto\left(\frac{1}{c_{e}^{2}}-\frac{1}{c_{s}^{2}}\right)=\frac{(c_{s}^{2}-c_{e}^{2})}{c_{e}^{2}c_{s}^{2}}\,.$
(28)
Note that both the above speeds are dependent on density which varies over a
large range in NSM. Furthermore, an individual knowledge of both speeds is
required. In what follows, we apply Eqs. (25) and (27) to the case of a pure
and mixed phase.
### IV.1 The Pure Phase
#### IV.1.1 Sound speeds in $npe$ matter
It is useful to recast the general expressions Eqs. (25) and (27) in terms of
derivatives of the individual chemical potentials with respect to density,
since such expressions are amenable to both analytical and numerical checks.
Without loss of generality, we have
$\displaystyle
c_{e}^{2}=\frac{dp}{d\epsilon}=\left(\frac{dp}{dn_{B}}\right)\bigg{/}\left(\frac{d\epsilon}{dn_{B}}\right)\,.$
(29)
Considering $npe$ matter as an example, differentiating the total energy
density inclusive of electrons
$\displaystyle\epsilon(n_{B},x)=n_{B}[M_{B}+E(n_{B},x)]$ (30)
with respect to $n_{B}$, we have
$\displaystyle\left(\frac{d\epsilon}{dn_{B}}\right)=\frac{\epsilon}{n_{B}}+n_{B}\left(\frac{dE}{dn_{B}}\right)\,,$
(31)
where $E(n_{B},x)$ is the energy per baryon. The second term on the right hand
side of Eq. (31) becomes
$\displaystyle\left(\frac{dE}{dn_{B}}\right)=\left(\frac{\partial E}{\partial
n_{B}}\right)+\left(\frac{\partial E}{\partial
x}\right)_{n_{B}}\left(\frac{dx}{dn_{B}}\right)\,.$ (32)
For the equilibrium sound speed, the $\beta$-equilibrium condition
$\left(\frac{\partial E}{\partial x}\right)_{n_{B}}=0$ yields
$\displaystyle\left(\frac{dE}{dn_{B}}\right)=\left(\frac{\partial E}{\partial
n_{B}}\right)\,.$ (33)
Thus,
$\displaystyle\left(\frac{d\epsilon}{dn_{B}}\right)=\frac{\epsilon}{n_{B}}+\frac{1}{n_{B}}\left[n_{B}^{2}\left(\frac{dE}{dn_{B}}\right)\right]=\frac{(\epsilon+p)}{n_{B}}\,.$
(34)
From the thermodynamic identity, using charge neutrality ($x=x_{e}$) and beta-
equilibrium,
$\displaystyle\epsilon+p$ $\displaystyle=$
$\displaystyle\mu_{n}n_{n}+\mu_{p}n_{p}+\mu_{e}n_{e}$ (35) $\displaystyle=$
$\displaystyle\mu_{n}n_{B}-(\mu_{n}-\mu_{p}-\mu_{e})n_{B}=\mu_{n}n_{B}\,,$
leading to the simple result
$\displaystyle\left(\frac{d\epsilon}{dn_{B}}\right)=\left(\frac{\partial\epsilon}{\partial
n_{B}}\right)_{x}=\mu_{n}$ (36)
This implies that
$\displaystyle c_{e}^{2}$ $\displaystyle\equiv$
$\displaystyle\frac{dp}{d\epsilon}=\frac{1}{\mu_{n}}\left(\frac{dp}{dn_{B}}\right)=\frac{1}{\mu_{n}}n_{B}\left(\frac{d\mu_{n}}{dn_{B}}\right)\,$
(37) $\displaystyle=$ $\displaystyle\left(\frac{d\ln\mu_{n}}{d\ln
n_{B}}\right)\,,$
where we have again taken advantage of the thermodynamic identity to relate
the required derivative of $p$ to that of $\mu_{n}$.
The adiabatic squared sound speed can be expressed as
$\displaystyle c_{s}^{2}$ $\displaystyle=$ $\displaystyle\left(\frac{\partial
p}{\partial\epsilon}\right)_{x}=\left(\frac{\partial p}{\partial
n_{B}}\right)_{x}\bigg{/}\left(\frac{\partial\epsilon}{\partial
n_{B}}\right)_{x}\,$ (38) $\displaystyle=$
$\displaystyle\frac{n_{B}}{\epsilon+p}\left(\frac{\partial p}{\partial
n_{B}}\right)_{x}=\frac{1}{\mu_{\rm avg}}\left(\frac{\partial p}{\partial
n_{B}}\right)_{x}\,,$ (39)
where we have used the equality in Eq. (34), which is valid at constant
composition even in the absence of $\beta$-equilibrium, and introduced an
average chemical potential $\mu_{\rm
avg}=(\sum_{i}\mu_{i}n_{i})/n_{B}=(\epsilon+p)/n_{B}$. Since $p=p_{B}+p_{e}$,
$c_{s}^{2}=\frac{1}{(\epsilon+p)}\left[\left(u\frac{\partial p_{B}}{\partial
u}\right)_{x}+\left(u\frac{\partial p_{e}}{\partial u}\right)_{x}\right]\,.$
(40)
The required derivatives are analytic:
$\displaystyle\left(u\frac{\partial p_{B}}{\partial u}\right)_{x}$
$\displaystyle=$
$\displaystyle\frac{2}{9\pi^{2}}\frac{k_{F}^{5}}{E_{F}}+n_{s}u\Big{[}2a_{0}u+b_{0}\gamma(\gamma+1)u^{\gamma}\Big{]}$
(41) $\displaystyle+$ $\displaystyle
n_{s}u(1-2x)^{2}\left\\{\frac{2k_{F}^{2}}{27E_{F}}\left(1-\frac{9k_{F}^{2}}{10E_{F}^{2}}+\frac{3k_{F}^{4}}{10E_{F}^{4}}\right)\right.$
$\displaystyle+$
$\displaystyle\left.\bigg{[}2u(a_{1}-a_{0})+b_{1}\gamma_{1}(\gamma_{1}+1)u^{\gamma_{1}}\right.$
$\displaystyle-$
$\displaystyle\left.b_{0}\gamma(\gamma+1)u^{\gamma}\bigg{]}\right\\}$
$\displaystyle\left(u\frac{\partial p_{e}}{\partial u}\right)_{x}$
$\displaystyle=$ $\displaystyle\frac{1}{3}n_{e}\mu_{e}\,.$ (42)
Thus, the difference of squared sound speeds becomes
$\displaystyle c_{s}^{2}-c_{e}^{2}=\frac{1}{\mu_{\rm avg}}\left(\frac{\partial
p}{\partial
n_{B}}\right)_{x}-\frac{1}{\mu_{n}}\left(\frac{dp}{dn_{B}}\right)\,.$ (43)
At this point all the necessary ingredients for the calculation of the speed-
of-sound difference are present. It is instructive, however, to obtain a
complementary expression in which its physical causes, namely
$\beta$-equilibrium and compositional gradients, are made explicit. To that
end, we proceed as follows: Noting that
$\displaystyle\frac{dp}{dn_{B}}=\left(\frac{\partial p}{\partial
n_{B}}\right)_{x}+\left(\frac{\partial p}{\partial
x}\right)_{n_{B}}\frac{dx}{dn_{B}}\,,$ (44)
Eq. (43) can be recast as
$\displaystyle c_{s}^{2}-c_{e}^{2}$ $\displaystyle=$
$\displaystyle\left(\frac{1}{\mu_{\rm
avg}}-\frac{1}{\mu_{n}}\right)\left(\frac{\partial p}{\partial
n_{B}}\right)_{x}-\frac{1}{\mu_{n}}\left(\frac{\partial p}{\partial
x}\right)_{n_{B}}\frac{dx}{dn_{B}}$ (45) $\displaystyle=$
$\displaystyle-~{}\frac{x\tilde{\mu}}{\mu_{\rm
avg}\mu_{n}}\left(\frac{\partial p}{\partial
n_{B}}\right)_{x}-\frac{1}{\mu_{n}}\left(\frac{\partial p}{\partial
x}\right)_{n_{B}}\frac{dx}{dn_{B}}\,$
where $\tilde{\mu}=\mu_{e}+\mu_{p}-\mu_{n}$. Anticipating that
$\beta$-equilibrium will be imposed at the end, we note that the first term
above vanishes as $\tilde{\mu}=0$, which leads to
$\displaystyle c_{s}^{2}-c_{e}^{2}=-\frac{1}{\mu_{n}}\left(\frac{\partial
p}{\partial x}\right)_{n_{B}}\frac{dx}{dn_{B}}\,.$ (46)
Utilizing $p=n_{B}^{2}\frac{\partial E}{\partial n_{B}}$ and interchanging the
order of derivatives
$\displaystyle
c_{s}^{2}-c_{e}^{2}=-\frac{1}{\mu_{n}}n_{B}^{2}\frac{\partial}{\partial
n_{B}}\left(\frac{\partial E}{\partial x}\right)_{n_{B}}\frac{dx}{dn_{B}}\,,$
(47)
which can be further rewritten as (observe the interesting relation
$\frac{\partial p}{\partial x}=n_{B}^{2}\frac{\partial\tilde{\mu}}{\partial
n_{B}}$, noted also in the context of bulk viscosity studies [65])
$\displaystyle
c_{s}^{2}-c_{e}^{2}=-\frac{1}{\mu_{n}}n_{B}^{2}\left(\frac{\partial\tilde{\mu}}{\partial
n_{B}}\right)_{x}\frac{dx}{dn_{B}}\,.$ (48)
It remains now to determine $\frac{dx}{dn_{B}}$. As
$\displaystyle d\tilde{\mu}=\left(\frac{\partial\tilde{\mu}}{\partial
n_{B}}\right)_{x}dn_{B}+\left(\frac{\partial\tilde{\mu}}{\partial
x}\right)_{n_{B}}dx=0\,,$ (49)
$\displaystyle\frac{dx}{dn_{B}}=-\left(\frac{\partial\tilde{\mu}}{\partial
n_{B}}\right)_{x}\bigg{/}\left(\frac{\partial\tilde{\mu}}{\partial
x}\right)_{n_{B}}\,.$ (50)
With this relation, Eq. (48) becomes
$\displaystyle c_{s}^{2}$ $\displaystyle=$ $\displaystyle
c_{e}^{2}+\frac{\left[n_{B}\left(\frac{\partial\tilde{\mu}}{\partial
n_{B}}\right)_{x}\right]^{2}}{\mu_{n}\left(\frac{\partial\tilde{\mu}}{\partial
x}\right)_{n_{B}}}\,,$ (51)
which illustrates the influence of density and compositional gradients on the
difference of the two sound speeds. Thus far, we have simply retraced the steps originally
given in [20] (Sec. 4.2 of this reference). In $npe$ matter under the
constraint of charge neutrality, the independent variables chosen are $n_{B}$
and $x$, and thus a partial derivative of $\tilde{\mu}$ with respect to
$n_{B}$ ($x$) implies that $x$ ($n_{B}$) is fixed.
Casting the expressions for the sound speeds in terms of the chemical
potentials is expedient, as illustrated below for the case of $npe$ matter.
Note that the average chemical potential $\mu_{\rm avg}=\mu_{n}$ only in
$\beta$-equilibrium. At fixed $x$, with the relation
$\tilde{\mu}=\mu_{e}-\hat{\mu}$,
$\displaystyle n_{B}\frac{\partial\tilde{\mu}}{\partial n_{B}}$
$\displaystyle=$ $\displaystyle u\frac{\partial(\mu_{e}-\hat{\mu})}{\partial
u}\,.$ (52)
For the term in Eq. (52) involving electrons, we have
$\displaystyle\mu_{e}=\hbar c~{}(3\pi^{2}n_{s}u)^{1/3}x^{1/3}\quad{\rm
and}\quad u\frac{\partial\mu_{e}}{\partial u}=\frac{\mu_{e}}{3}\,,$ (53)
while for baryons (details of the derivatives of the kinetic part of the
symmetry energy are given in Appendix A),
$\displaystyle\hat{\mu}$ $\displaystyle=$
$\displaystyle\mu_{n}-\mu_{p}=4S_{2}(u)(1-2x)\quad{\rm with}$ $\displaystyle
S_{2}(u)$ $\displaystyle=$ $\displaystyle
S_{2k}+S_{2i}=\frac{k_{F}^{2}}{6E_{F}}$ $\displaystyle+$
$\displaystyle(a_{1}-a_{0})u+b_{1}u^{\gamma_{1}}-b_{0}u^{\gamma}\,,$
$\displaystyle uS_{2k}^{\prime}$ $\displaystyle=$
$\displaystyle\frac{1}{3}\cdot
2S_{2k}\left[1-18\left(\frac{S_{2k}}{k_{F}}\right)^{2}\right],$
$\displaystyle{\rm and}\quad uS_{2i}^{\prime}$ $\displaystyle=$
$\displaystyle(a_{1}-a_{0})u+\gamma_{1}b_{1}u^{\gamma_{1}}-\gamma
b_{0}u^{\gamma}\,,$ $\displaystyle uS_{2}^{\prime}$ $\displaystyle=$
$\displaystyle uS_{2k}^{\prime}+uS_{2i}^{\prime}.$ (54)
Putting together the above results, we have
$\displaystyle n_{B}\frac{\partial\tilde{\mu}}{\partial
n_{B}}=\frac{\mu_{e}}{3}-4(1-2x)~{}uS_{2}^{\prime}\,.$ (55)
Derivatives with respect to $x$ of $\tilde{\mu}$ at fixed density are also
straightforward. For
$\displaystyle\frac{\partial\tilde{\mu}}{\partial
x}=\frac{\partial(\mu_{e}-\hat{\mu})}{\partial x}\,,$ (56)
we note that
$\displaystyle\frac{\partial\mu_{e}}{\partial x}$ $\displaystyle=$
$\displaystyle\frac{1}{3}\frac{\mu_{e}}{x}\quad{\rm
and}\quad\frac{\partial\hat{\mu}}{\partial x}=-8S_{2}(u)\quad{\rm so~{}that}$
$\displaystyle\frac{\partial\tilde{\mu}}{\partial x}$ $\displaystyle=$
$\displaystyle\frac{1}{3}\frac{\mu_{e}}{x}+8S_{2}(u)\,.$ (57)
The equivalence of Eqs. (40) and (51) is established analytically in Appendix
A.5.
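In practice, the analytic route of Eqs. (51)-(57) can also be cross-checked
against direct numerical differentiation of Eqs. (25) and (27). A sketch
(Python; `mu_n`, `pressure` and `eps` are hypothetical callables built from
the ZL expressions, with `mu_n` evaluated along the $\beta$-equilibrium
curve):

```python
import numpy as np

def ce2(nB, mu_n, dn=1e-4):
    """Equilibrium c_e^2 = dln(mu_n)/dln(n_B), Eq. (37)."""
    return (np.log(mu_n(nB + dn)) - np.log(mu_n(nB - dn))) / \
           (np.log(nB + dn) - np.log(nB - dn))

def cs2(nB, x, pressure, eps, dn=1e-4):
    """Adiabatic c_s^2 of Eq. (39): composition x held fixed while
    differentiating; beta-equilibrium enters only through the choice of x."""
    dp = (pressure(nB + dn, x) - pressure(nB - dn, x)) / (2.0*dn)
    mu_avg = (eps(nB, x) + pressure(nB, x)) / nB
    return dp / mu_avg

# usage: x_eq = x_beta(nB)   # hypothetical beta-equilibrium solver
# print(ce2(nB, mu_n), cs2(nB, x_eq, pressure, eps))
```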
#### IV.1.2 Sound speeds in $npe\mu$ matter
Going beyond the results in [20], one way to include muons is by choosing
$n_{B}$, $x=x_{e}+x_{\mu}$ and $x_{\mu}\equiv y$ as the independent variables.
The formal expression for the squared adiabatic sound speed remains the same
as in $npe$ matter, i.e., Eq. (40) but now $\left(u~{}\partial p_{e}/\partial
u\right)_{x}$ [Eq. (42)] is replaced by
$\left(u\frac{\partial p_{lep}}{\partial
u}\right)_{x,x_{\mu}}=\frac{1}{3}n_{e}\mu_{e}+\frac{1}{3}n_{\mu}\left(\frac{\mu_{\mu}^{2}-m_{\mu}^{2}}{\mu_{\mu}}\right)\,.$
(58)
where $lep=e^{-},\mu^{-}$.
Furthermore, by retracing the steps leading to Eq. (51), its $npe\mu$
equivalent is obtained as
$c_{s}^{2}-c_{e}^{2}=-\frac{1}{\mu_{n}}\left(\left.\frac{\partial P}{\partial
x}\right|_{n_{B},y}\frac{dx}{dn_{B}}+\left.\frac{\partial P}{\partial
y}\right|_{n_{B},x}\frac{dy}{dn_{B}}\right)$ (59)
with (the intermediate steps leading to Eqs. (60)-(61) are detailed in
Appendix A.6)
$\displaystyle\frac{dx}{dn_{B}}$ $\displaystyle=$
$\displaystyle\frac{\left.\frac{\partial\tilde{\mu}_{x}}{\partial
y}\right|_{n_{B},x}\left.\frac{\partial\tilde{\mu}_{y}}{\partial
n_{B}}\right|_{x,y}-\left.\frac{\partial\tilde{\mu}_{y}}{\partial
y}\right|_{n_{B},x}\left.\frac{\partial\tilde{\mu}_{x}}{\partial
n_{B}}\right|_{x,y}}{\left.\frac{\partial\tilde{\mu}_{x}}{\partial
y}\right|_{n_{B},x}\left.\frac{\partial\tilde{\mu}_{y}}{\partial
x}\right|_{n_{B},y}-\left.\frac{\partial\tilde{\mu}_{y}}{\partial
y}\right|_{n_{B},x}\left.\frac{\partial\tilde{\mu}_{x}}{\partial
x}\right|_{n_{B},y}}$ (60) $\displaystyle\frac{dy}{dn_{B}}$ $\displaystyle=$
$\displaystyle\frac{\left.\frac{\partial\tilde{\mu}_{x}}{\partial
x}\right|_{n_{B},y}\left.\frac{\partial\tilde{\mu}_{y}}{\partial
n_{B}}\right|_{x,y}-\left.\frac{\partial\tilde{\mu}_{y}}{\partial
x}\right|_{n_{B},y}\left.\frac{\partial\tilde{\mu}_{x}}{\partial
n_{B}}\right|_{x,y}}{\left.\frac{\partial\tilde{\mu}_{x}}{\partial
x}\right|_{n_{B},y}\left.\frac{\partial\tilde{\mu}_{y}}{\partial
y}\right|_{n_{B},x}-\left.\frac{\partial\tilde{\mu}_{y}}{\partial
x}\right|_{n_{B},y}\left.\frac{\partial\tilde{\mu}_{x}}{\partial
y}\right|_{n_{B},x}}~{}.$ (61)
The chemical potentials $\tilde{\mu}_{x}=\mu_{p}+\mu_{e}-\mu_{n}$ and
$\tilde{\mu}_{y}=\mu_{\mu}-\mu_{e}$ are zero in $\beta$-equilibrated matter.
Equations (59)-(61), while demonstrating that compositional gradients are at
the core of g-mode oscillations, are lengthy and computationally more involved
compared to the direct calculation of the adiabatic sound speed in $npe\mu$
matter using Eqs. (41) and (58). For the sake of completeness, we provide here
the explicit expressions for the adiabatic sound speed in $npe\mu$ matter
arising from (59)-(61) which are in excellent numerical agreement with the
more direct method:
$\displaystyle
c_{s}^{2}=c_{e}^{2}+\frac{1}{\mu_{n}}\Big{(}T_{1}+T_{2}+T_{3}+T_{4}\Big{)}$
(62)
where $T_{j}=N_{j}/D$ with
$\displaystyle N_{1}$
$\displaystyle=\left[\frac{\mu_{e}}{3}-4(1-2x)~{}uS_{2}^{\prime}\right]^{2}$
(63) $\displaystyle N_{2}$
$\displaystyle=\left[\frac{\mu_{e}}{3}-4(1-2x)~{}uS_{2}^{\prime}\right]8S_{2}x_{e}\left[\frac{k_{F_{\mu}}}{k_{F_{e}}}-\frac{x_{\mu}}{x_{e}}\right]$
$\displaystyle N_{3}$
$\displaystyle=\left[\frac{k_{F_{\mu}}^{2}}{3\mu_{e}}-4(1-2x)~{}uS_{2}^{\prime}\right]\left(\mu_{e}+8S_{2}x_{e}\right)\left[\frac{x_{\mu}}{x_{e}}-\frac{k_{F_{\mu}}}{k_{F_{e}}}\right]$
$\displaystyle N_{4}$
$\displaystyle=\left[\frac{k_{F_{\mu}}^{2}}{3\mu_{e}}-4(1-2x)~{}uS_{2}^{\prime}\right]\frac{k_{F_{\mu}}}{k_{F_{e}}}\left[\frac{\mu_{e}}{3}-4(1-2x)~{}uS_{2}^{\prime}\right]$
$\displaystyle D$
$\displaystyle=\left[\frac{\mu_{e}}{3x_{e}}+8S_{2}\left(1+\frac{k_{F_{\mu}}}{k_{Fe}}\right)\right]$
and $k_{F_{e}}=\mu_{e}$ (massless electrons) and
$k_{F_{\mu}}$=$\sqrt{{\mu_{\mu}}^{2}-m_{\mu}^{2}}$. These equations explicitly
display the connection to the nuclear symmetry energy $S_{2}$ and its density
derivative $S_{2}^{\prime}$.
At the muon threshold ($k_{F_{\mu}},x_{\mu}$=0 $\Rightarrow$ $x$=$x_{e}$), it
is easy to see that $N_{2},N_{3},N_{4}$=0, while $N_{1}(\equiv N_{1}^{npe}$)
recovers Eq. (51) for $npe$ matter. At extremely high baryon density, muons
are ultra-relativistic ($\mu_{\mu}$=$k_{F_{\mu}}$=$k_{F_{e}}$,
$x_{\mu}$=$x_{e}$=$x/2$) so that $N_{2},N_{3}$=0,
$N_{1}$=$N_{4}$=$N_{1}^{npe}/2$ and the total leptonic contribution to the
sound speed is equally divided between electrons and muons.
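Equations (60)-(61) are simply Cramer's rule applied to the conditions
$d\tilde{\mu}_{x}=d\tilde{\mu}_{y}=0$ along the $\beta$-equilibrium curve, and
can be evaluated without transcribing the determinants by solving the
equivalent $2\times 2$ linear system numerically. A sketch (the callable
`mu_tilde(nB, x, y)`, returning the pair
$(\tilde{\mu}_{x},\tilde{\mu}_{y})$, is a hypothetical stand-in for the ZL
chemical potentials):

```python
import numpy as np

def composition_gradients(nB, x, y, mu_tilde, h=1e-6):
    """Solve d(mu_x) = d(mu_y) = 0 along beta-equilibrium for
    dx/dn_B and dy/dn_B, the content of Eqs. (60)-(61)."""
    f0 = np.array(mu_tilde(nB, x, y))
    J = np.empty((2, 2))                           # d(mu_x, mu_y)/d(x, y)
    J[:, 0] = (np.array(mu_tilde(nB, x + h, y)) - f0)/h
    J[:, 1] = (np.array(mu_tilde(nB, x, y + h)) - f0)/h
    b = (np.array(mu_tilde(nB + h, x, y)) - f0)/h  # d/dn_B at fixed x, y
    return np.linalg.solve(J, -b)                  # [dx/dn_B, dy/dn_B]
```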
#### IV.1.3 Sound speeds in Quark Matter
We now move to a discussion of sound speeds in dense quark matter at zero
temperature. For the pure quark phase, the difference of the two sound speeds
has been computed to leading order in the quark mass [66, 36] using the non-
interacting 3-flavor FG model with massive quarks (see Sec.III.2). These
expressions reveal that for the non-interacting FG model, a non-zero quark
mass is necessary to support $g$-modes. This is because a system of massless
$uds$ quarks is charge neutral with equal numbers of each flavor at any
density; effectively, there is no change in composition with density to drive
the $g$-mode.
To leading order in the $s$-quark’s mass $m_{s}$, the Brunt-Väisälä frequency
is [36]
$\displaystyle N_{q}\simeq\left(\frac{g}{2\pi
c_{e}}\right)\left(\frac{m_{s}^{2}}{\sqrt{B}}\right)\,,$ (64)
where $c_{e}^{2}=dp_{q}/d\epsilon_{q}$ is the equilibrium squared sound speed
in QM (numerically, $N_{q}\approx 100$ Hz for a current quark mass
$m_{s}\approx 100$ MeV, but the effect of interactions in addition to this
yields significantly lower values for $N_{q}$ [36]). It is possible to obtain
an exact expression for $c_{e}^{2}$ and $c_{s}^{2}$ in QM for the FG model,
and also for the vMIT model, as we show below.
The equilibrium sound speed may be simply calculated by numerically evaluating
$c_{e,vMIT}^{2}=dp/d\epsilon$ in the pure quark phase. However, additional
insight into its compositional structure is gained by expressing it in terms
of the various chemical potentials involved. Starting from the relation (valid
in $\beta$-equilibrium)
$\displaystyle\mu_{n}=2\mu_{d}+\mu_{u}=(2\mu_{d}^{\ast}+\mu_{u}^{\ast})+3an_{Q}\,,$
(65)
where $\mu_{f}^{\ast}=\sqrt{k_{F_{f}}^{2}+m_{f}^{2}}$ for a quark of flavor
$f$, and using $c_{e}^{2}=d\ln\mu_{n}/d\ln n_{B}$ (in charge neutral and
$\beta$-equilibrated matter,
$\mu_{B}=\sum_{f}x_{f}\mu_{f}+\sum\limits_{\ell=e,\mu}x_{\ell}\mu_{\ell}=\mu_{n}$
as in the nucleonic phase), we obtain
$\displaystyle c_{e,vMIT}^{2}=\frac{1}{\mu_{n}}$ $\displaystyle\bigg{[}$
$\displaystyle\frac{1}{3}\left\\{2\mu_{d}^{\ast}\left(1-\frac{m_{d}^{2}}{\mu_{d}^{\ast^{2}}}\right)\frac{d\ln
n_{d}}{d\ln n_{B}}\right.$ (66) $\displaystyle+$
$\displaystyle\left.\mu_{u}^{\ast}\left(1-\frac{m_{u}^{2}}{\mu_{u}^{\ast^{2}}}\right)\frac{d\ln
n_{u}}{d\ln n_{B}}\right\\}$ $\displaystyle+$ $\displaystyle
3an_{Q}\bigg{]}\,.$
Contributions from the leptons are implicitly included in the above
expression.
For the non-interacting FG model, the pressure $p=\sum_{f}p_{\rm
FG}(\mu_{f},\mu_{e})$. Introducing the partial fractions $x_{f}=n_{f}/n_{B}$,
where $n_{f}=(\mu_{f}^{2}-m_{f}^{2})^{3/2}/\pi^{2}$ and
$n_{e}=\mu_{e}^{3}/3\pi^{2}$, the partial derivative of pressure with respect
to baryon density in the definition of the adiabatic sound speed in Eq. (39)
can be re-expressed in terms of partial derivatives with respect to the
various chemical potentials, yielding
$\displaystyle c_{s,{\rm FG}}^{2}=\frac{1}{\mu_{\rm
avg}}\left[\sum_{f}\frac{1}{3}\mu_{f}x_{f}\left(1-\frac{m_{f}^{2}}{\mu_{f}^{2}}\right)+\frac{1}{3}\mu_{e}x_{e}\right]\,,$
(67)
where $\mu_{\rm avg}=(\sum\limits_{f=u,d,s,e}n_{f}\mu_{f})/n_{B}$. Note that
if all $m_{f}$ = 0 (i.e., one is in a charge neutral phase with $x_{e}$ = 0),
$c_{s,{\rm FG}}^{2}=c_{e,{\rm FG}}^{2}=1/3$ and there can be no $g$-modes.
Inclusion of ${\cal O}(\alpha_{s})$ corrections to this model does not change
the fact that a non-zero quark mass is necessary for $g$-modes.
In the vMIT model given by Eqs. (22)-(24),
$\mu_{f}^{\ast}=\sqrt{k_{F_{f}}^{2}+m_{f}^{2}}=\mu_{f}-an_{Q}$, and as was
done for the FG model, we compute partial derivatives with respect to
$\mu_{f}^{\ast}$, noting that $n_{f}(\mu_{f})$ = $n_{\rm FG}(\mu_{f}^{\ast})$.
The resulting expression for the adiabatic sound speed in the vMIT model is
$\displaystyle c_{s,vMIT}^{2}=\frac{1}{\mu_{\rm avg}}$ $\displaystyle\bigg{[}$
$\displaystyle\sum_{f}\frac{1}{3}\mu_{f}^{\ast}x_{f}\left(1-\frac{m_{f}^{2}}{\mu_{f}^{\ast^{2}}}\right)$
(68) $\displaystyle+$
$\displaystyle\frac{1}{3}\mu_{e}x_{e}+3an_{Q}\bigg{]}\,,$
where all quantities $\mu_{f},\mu_{e},x_{e},x_{f}$ are equilibrium values
(inclusion of muons is straightforward and adds a term
$\frac{1}{3}\mu_{\mu}x_{\mu}\left(1-\frac{m_{\mu}^{2}}{\mu_{\mu}^{\ast^{2}}}\right)$
on the right hand side of Eq. (67)). If we switch off interactions
($a\rightarrow$ 0), we recover results of the non-interacting FG model.
Interestingly, if we retain the interaction term, but set all quark masses
equal or to zero (implying that $x_{e}$ = 0), we find that
$c_{s,vMIT}^{2}=c_{e,vMIT}^{2}$ so stable $g$-modes are not supported in the
pure quark phase. Therefore, while both sound speeds are modified by
interactions, e.g.,
$\displaystyle c_{s,vMIT}^{2}=\frac{1}{\mu_{\rm
avg}}\left[\mu_{q}^{*}+3an_{Q}\right]\neq c_{s,{\rm FG}}^{2}\,,$ (69)
at asymptotically high density where quark masses are negligible, there can be
no $g$-modes in quark matter in the vMIT model ($g$-modes would still exist in
a mixed phase of vMIT quark matter and nucleons, as the electron fraction
would vary from $\beta$-processes involving nucleons).
Note that when all chemical potentials and partial fractions are set to their
equilibrium values for $c_{s,vMIT}^{2}$ in the pure phase, $\mu_{\rm
avg}=\mu_{n}$. A comparison of Eqs. (66) and (68) reveals the differences
between the two sound speeds. While effects of interactions enter in the same
formal way for the two squared speeds, the occurrence of the logarithmic
derivatives of the quark densities distinguishes $c_{e,vMIT}^{2}$ from
$c_{s,vMIT}^{2}$, which features the partial fractions $x_{f}$. This
difference is the principal reason why the latter becomes larger than the former. In
both cases, the $d$-quark contributions are larger than those of $u$ and $s$
quarks.
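Eq. (68) is a purely algebraic function of the equilibrium composition; a
short Python sketch (inputs are assumed to come from a prior charge-neutral
$\beta$-equilibrium solve):

```python
import numpy as np

HBARC = 197.327  # MeV fm

def cs2_vmit(mu_star, m, x, nQ, a, mu_e=0.0, x_e=0.0):
    """Adiabatic c_s^2 in the vMIT model, Eq. (68), in units of c^2.

    mu_star : effective chemical potentials mu_f* [MeV];  m : masses [MeV]
    x       : quark fractions x_f = n_f/n_B (summing to 3); x_e : electrons
    nQ      : total quark density [fm^-3];  a = (G_v/m_V)^2 [fm^2]
    """
    mu_f = mu_star + a*nQ*HBARC                  # full mu_f, Eq. (24)
    mu_avg = (x*mu_f).sum() + x_e*mu_e           # (eps + p)/n_B
    num = ((mu_star*x*(1.0 - (m/mu_star)**2)).sum() + mu_e*x_e)/3.0 \
          + 3.0*a*nQ*HBARC
    return num/mu_avg
```

Setting $a=0$ reproduces the FG result of Eq. (67), and for equal, vanishing
masses (hence $x_{e}=0$) the function returns Eq. (69)'s value
$(\mu_{q}^{\ast}+3an_{Q})/\mu_{\rm avg}$.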
### IV.2 Sound Speeds in the Mixed Phase
Once we have expressions for the sound speed in a pure phase of quarks or
nucleons, it is possible to compute the sound speed in the mixed phase of the
two, obtained from a Gibbs construction. The only information required, other
than the sound speeds in the pure phases, is the partial phase fraction of
quarks $\chi$ at any density. It is more convenient to begin with the
reciprocal relation
$\displaystyle\frac{1}{c_{e,{\rm mix}}^{2}}=\left(\frac{d\epsilon_{\rm
mix}}{dp_{\rm mix}}\right)\,.$ (70)
In a Gibbs mixed phase, the pressures in the two phases are equal, while the
energy density is a proportional mix of the quark ($q$) and nucleonic/hadronic
($h$) phases: $\epsilon_{\rm mix}=(1-\chi)\epsilon_{h}+\chi\epsilon_{q}$.
Substituting this in Eq.(70) gives
$\displaystyle\frac{1}{c_{e,{\rm
mix}}^{2}}=\frac{(1-\chi)}{c_{e,h}^{2}}+\frac{\chi}{c_{e,q}^{2}}+(\epsilon_{q}-\epsilon_{h})\frac{d\chi/dn_{B}}{dP/dn_{B}}\,.$
(71)
The derivatives in Eq.(71) must be computed numerically after solving for
$\chi$, hence afford no particular advantage over a direct numerical
computation of the sound speed from Eq.(70) itself. However, note that the
last term in Eq.(71) is always positive in the mixed phase.
As before, the general definition of the adiabatic sound speed applies to the
mixed phase
$\displaystyle c_{s,{\rm mix}}^{2}=\left(\frac{dp_{\rm mix}}{d\epsilon_{\rm
mix}}\right)_{x_{i}={\rm const.}=x_{i,{\rm eq.}}}\,,$ (72)
and the thermodynamic identity becomes $\epsilon_{\rm mix}+p_{\rm mix}$ =
$\sum_{i}n_{i}\mu_{i}$ = $n_{B}\mu_{\rm avg}$. Noting that the derivatives
$\partial\epsilon_{h}/\partial n_{B,h}$ and
$\partial\epsilon_{q}/\partial n_{B,q}$ are equal to the respective
$\mu_{\rm avg}$, it is once again more convenient to begin with the reciprocal
relation
$\displaystyle\frac{1}{c_{s,{\rm mix}}^{2}}=\left(\frac{d\epsilon_{\rm
mix}}{dp_{\rm mix}}\right)_{x_{i}={\rm const.}=x_{i,{\rm eq.}}}\,,$ (73)
and use the chain rule to compute derivatives with respect to density. This
leads to
$\displaystyle\left(\frac{d\epsilon_{\rm mix}}{dp_{\rm
mix}}\right)_{x_{i}=x_{i,{\rm eq.}}}=\frac{(1-\chi)\mu_{\rm
avg}}{\left(\frac{\partial p_{h}}{\partial n_{B,h}}\right)}+\frac{\chi\mu_{\rm
avg}}{\left(\frac{\partial p_{q}}{\partial n_{B,q}}\right)}\,,$ (74)
which, using Eq. (39), becomes
$\displaystyle\frac{1}{c_{s,{\rm
mix}}^{2}}=\frac{(1-\chi)}{c_{s,h}^{2}}+\frac{\chi}{c_{s,q}^{2}}\,.$ (75)
Comparing Eqs. (71) and (75), and to the extent that the two sound speeds in
the pure hadronic/quark phase are almost equal, we expect that the last term
in Eq. (71), which tracks the rapidly changing composition in the mixed phase,
is mainly responsible for $c_{s,\rm mix}^{2}>c_{e,\rm mix}^{2}$. The more
rapid the appearance of new chemical species and the softer the mixed phase,
the larger the Brunt-Väisälä frequency will be.
Furthermore, as will become evident from our results in Sec. V, the adiabatic
sound speed is continuous across the transition to and from the mixed phase,
while the equilibrium sound speed has a slight jump to accommodate the
derivative of $\chi$. The reciprocal relation for the adiabatic sound speeds
is reminiscent of the addition of resistors in a parallel circuit, with
voltage as pressure and current as energy density. Such impedance analogies
arise commonly in electrical engineering when modeling the behavior of
transducers.
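Both mixing rules are one-liners once the pure-phase speeds and the quark
fraction are known; a sketch (Python; `dchi_dnB` and `dp_dnB` are assumed to
come from numerical differentiation of the Gibbs solution):

```python
def ce2_mix(chi, ce2_h, ce2_q, eps_h, eps_q, dchi_dnB, dp_dnB):
    """Equilibrium sound speed squared in the Gibbs mixed phase, Eq. (71)."""
    inv = (1.0 - chi)/ce2_h + chi/ce2_q + (eps_q - eps_h)*dchi_dnB/dp_dnB
    return 1.0/inv

def cs2_mix(chi, cs2_h, cs2_q):
    """Adiabatic sound speed squared in the mixed phase, Eq. (75):
    reciprocals combine like resistors in parallel."""
    return 1.0/((1.0 - chi)/cs2_h + chi/cs2_q)
```

Since the last term of Eq. (71) is positive, `ce2_mix` is always reduced
relative to the naive reciprocal mix, widening its gap to `cs2_mix`.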
## V Results
### V.1 Structural properties, sound speeds and the Brunt-Väisälä frequency
Figure 1: Mass-radius curves for the ZL EOS without and with muons.
Configurations with muons are slightly more compact, but both cases support
$M_{\rm max}\simeq 2M_{\odot}$. Except for $L$, the EOS parameters are
$K_{0}=220$ MeV, $S_{v}=31$ MeV, and $\gamma_{1}=1.6$ for all curves.
Figure 1 shows $M$-$R$ curves for ZL EOSs with and without muons for the
indicated parameters in the caption. The radii of $\sim 1.4~{}M_{\odot}$
stars, $R_{1.4}$, for the different models shown lie within the bounds
inferred from available data. For example, data from X-ray observations have
yielded $R_{1.4}=9$-$14$ km for canonical masses of $\sim 1.4~{}M_{\odot}$
[67, 68, 69]. Measured tidal deformations from gravitational wave data in the
binary NS merger GW170817 give 8.9-13.2 km for binary masses in the range
1.36(1.17)-1.6(1.36) $M_{\odot}$ [70], whereas for the same masses Capano et
al. [71] report $11\pm 1$ km. X-ray pulse analysis of NICER data from PSR
J0030+0451 by Miller et al. (2019) [39] finds $13.02^{+1.14}_{-1.19}$ km for
$M=1.44\pm 0.15~{}M_{\odot}$, whereas for the same star Riley et al. (2019)
[38] obtain $12.71^{+1.14}_{-1.19}$ km and
$M=1.34^{+0.15}_{-0.16}~{}M_{\odot}$. The maximum masses ($\simeq 2M_{\odot}$)
of these EOSs (by adjusting the constants of the ZL EOS to make the EOS
stiffer, yet causal, at supra-nuclear densities, masses larger than
$2M_{\odot}$ can be obtained; an example will be shown later) are also within
the uncertainties of high mass NSs which range from $1.908\pm
0.016~{}M_{\odot}$ to $2.27^{+0.17}_{-0.15}~{}M_{\odot}$ [72, 73, 74, 75, 76].
Although differences in $R_{1.4}$ with and without muons for a given EOS are
small, the appearance of muons in the star leads to distinct features in the
Brunt-Väisälä frequency (see below).
Figure 2: Difference between the adiabatic and equilibrium squared sound
speeds (normalized to the squared speed of light) for the ZL EOS ($K_{0}$=220
MeV, $S_{v}=31$ MeV, $L=60$ MeV and $\gamma_{1}$=1.6) without and with muons.
In Fig. 2, differences in the two squared sound speeds are shown as a function
of $n_{B}$ with and without muons for the ZL EOS with $L=60$ MeV. The small
jump at $n_{B}\simeq 0.14~{}{\rm fm}^{-3}$, the density at which muons appear,
is caused by a sudden drop in the equilibrium sound speed. The differences at
large densities are due to the increasing concentration of muons.
Figure 3: The Brunt-Väisälä frequency in the NS for the ZL EOS ($K_{0}$=220
MeV, $S_{v}=31$ MeV, $L=60$ MeV, and $\gamma_{1}$=1.6) without and with muons.
Figure 4: EOS for the mixed phase of nucleons and quarks (middle curve) using
the Gibbs construction. For the ZL EOS without muons, $K_{0}$=220 MeV,
$S_{v}=31$ MeV, $L=60$ MeV, and $\gamma_{1}$=1.6. Parameters for the vMIT EOS
are: ($m_{u},m_{d},m_{s}$)=(5,7,150) MeV, $B^{1/4}$=180 MeV and $a$=0.1. The
circle indicates the central $p$ and $\epsilon$ of the maximum mass star
($n_{c,{\rm max}}/n_{s}$=7.63, for $M_{\rm max}/M_{\odot}$=1.82).
Figure 3 shows the Brunt-Väisälä frequency $N$ vs $r/R$ in the star. In the
results shown, the crust is assumed to be a homogeneous fluid for simplicity,
hence $N$ vanishes there. The location where muons begin to appear is signaled
by the small kink in the bottom panel. Overall, $N$ is slightly larger with
muons in the density range in the core of a 1.4$M_{\odot}$ star, consistent
with Fig. 2. This has a proportional impact on the $g$-mode frequency as shown
in the next section.
The EOS of the mixed phase, obtained from the Gibbs construction with the ZL
EOS for the nucleonic sector and the vMIT EOS for the quark sector, is shown
in Fig. 4. The ZL EOS does not include the small effect of muons. In the quark
sector, muons have not been included since their impact relative to quarks is
tiny.
Figure 5: Quark fraction vs $n_{B}$ corresponding to Fig. 4. The circle
indicates the central density of the maximum mass star ($n_{c,{\rm max}}$=1.22
${\rm fm}^{-3}$ for $M_{\rm max}/M_{\odot}$=1.82).
The compositional change in the mixed phase is indicated by the quark fraction
$\chi$ in Fig. 5. The steep rise of $\chi$ from the onset indicates the sort
of rapid compositional change that can impact the $g$-mode frequency. A
similar effect has been reported [30] in the context of the appearance of
strange baryons (e.g. hyperons), which is not a phase transition. Note that,
for the EOSs considered, the central density of the maximum mass star,
indicated by the filled circle on the curve, lies in the mixed phase so that
the pure quark phase is not realized.
Figure 6: The two sound speeds (top panel) and their differences (bottom
panel) in the mixed phase for the EOS parameters corresponding to Fig. 4. The
pure quark phase is not achieved prior to the maximum mass in this case. The
termination at $n_{B}$=0.08 ${\rm fm}^{-3}$ demarcates the core-crust
boundary, since we assume $c_{s}$=$c_{e}$ in the crust. Both sound speeds take much smaller values
in the crust than in the core. The circle indicates the central density of the
maximum mass star ($n_{c,{\rm max}}$=1.22 ${\rm fm}^{-3}$ for $M_{\rm
max}/M_{\odot}$=1.82).
Figure 6 shows results for the individual sound speeds and their differences
for the mixed phase. The two sound speeds in the mixed phase behave very
differently. Specifically, the equilibrium sound speed suddenly drops (rises)
at the onset (end) of the mixed phase, whereas the adiabatic sound speed
varies smoothly.
Figure 7: The Brunt-Väisälä frequency in a hybrid star of mass
$1.4~{}M_{\odot}$. The ZL EOS does not include muons and parameters for the
nuclear and quark EOS are as in Fig. 4. Quarks enter at $n_{B}\simeq$ 0.42
${\rm fm}^{-3}$ corresponding to $r/R$ = 0.383, and the mixed phase extends beyond the
central density. The value of $N$ decreases towards the core due to the
decreasing value of $g$, even as the sound speed difference does not change
much.
The Brunt-Väisälä frequency of a $1.4~{}M_{\odot}$ hybrid star is shown in
Fig. 7. Note the broader width of the peak when quarks enter, and its location
in denser regions of the star, as compared to the nucleonic stars depicted in
Fig. 3. This explains why the $g$-mode, which is a long-wavelength global
oscillation, is strongly impacted by the mixed phase (see results in the next
section).
Figure 8: The mass-radius curves for a hybrid star (Gibbs construction) with
EOS parameters chosen such that the mixed phase supports $M_{\rm
max}=2.05~{}M_{\odot}$. In the left panel, muons are not included, whereas the
right panel is with muons included.
Figure 8 shows $M$-$R$ curves for a hybrid star whose $M_{\rm
max}=2.05~{}M_{\odot}$. This value is obtained by increasing the
compressibility parameter of the ZL EOS from $K_{0}$ = 220 to $K_{0}$ = 240
MeV, and increasing $\gamma_{1}$ from 1.6 to 1.8 while maintaining causality.
Including muons pushes the onset of the mixed phase to slightly higher
densities, which causes the maximum mass of a hybrid star with muons to be
higher than for a hybrid star without muons. This is in contrast to the effect
of muons in an ordinary NS, where the softening results in a lower maximum
mass. The leftmost curves in these figures refer to a self-bound quark star,
and are shown here to provide contrast.
### V.2 Boundary conditions for the $g$-mode oscillation
Having established the equilibrium structure and computed the sound speeds, we
have all the variables necessary to solve Eqs. (1) at hand, except for the
boundary conditions that determine the (real) eigenfrequencies. The boundary
conditions for Newtonian structure equations are obtained as a straightforward
limiting case of Eqs. (1), and are discussed at length in [20]. To summarize
those results, in the Newtonian non-relativistic case, regularity of
$\xi_{r},~{}\delta p/\rho$ can be checked by Taylor expansion around $r$ = 0.
The resulting condition is:
$\displaystyle
r^{2}\xi_{r}=\frac{l}{\omega^{2}}(Y_{0}+\phi_{0})r^{l+1},\quad\frac{\delta
p}{\rho}=Y_{0}r^{l}\,,$ (76)
where $Y_{0},~{}\phi_{0}$ are constants. For our purposes, $\phi_{0}$ = 0
since we ignore perturbations in the gravitational potential, as in [20].
$Y_{0}$ is an arbitrary normalization constant allowed by the linearity of
these equations. Effectively, this means that the overall scale of the
eigenfunctions is arbitrary. It must be determined by external factors, such
as the strength of the force (tidal effects in a merger, for example). The
normalization has no impact on the numerical value of the eigenfrequency. It
is therefore conventional to choose $Y_{0}$ = 1. We will make, for simplicity,
and without loss of generality, a slightly different choice:
$\frac{l}{\omega^{2}}Y_{0}=1$ (77)
so that (for $l$=2), $\xi_{r}\rightarrow r$ as $r\rightarrow 0$. In practice,
we start the integration slightly off-center, so $\xi_{r}$ will be small but
non-zero. The other condition at the center becomes
$\frac{\delta p}{\rho}=\frac{\omega^{2}}{l}r^{l}{\rm e}^{-\nu_{0}}\,,$ (78)
again, with $l$ = 2 for our case. For the relativistic form of the oscillation
equations, the above conditions still apply with the change $\frac{\delta
p}{\rho}\rightarrow\frac{\delta p}{\epsilon+p}$. The boundary condition at the
surface is the vanishing of the Lagrangian pressure perturbation $\Delta
p=c_{s}^{2}\Delta\epsilon=0$. This projects out the radial component of
$\vec{\xi}$. In the non-relativistic case, $\nabla p=-\rho g$ while in the
relativistic case, $\nabla p=-gh$ with $h=(\epsilon+p)$ the enthalpy.
With some algebra, one can arrive at a simpler form of Eqs. (1):
$\displaystyle\frac{dU}{dr}$ $\displaystyle=$
$\displaystyle\frac{g}{c_{s}^{2}}U+{\rm e}^{\lambda/2}\left[\frac{l(l+1){\rm
e}^{\nu}}{\omega^{2}}-\frac{r^{2}}{c_{s}^{2}}\right]V$
$\displaystyle\frac{dV}{dr}$ $\displaystyle=$ $\displaystyle{\rm
e}^{\lambda/2-\nu}\frac{\omega^{2}-N^{2}}{r^{2}}U+g\Delta(c^{-2})V\,,$ (79)
where $U$ = $r^{2}{\rm e}^{\lambda/2}\xi_{r}$, $V$ = $\delta p/(\epsilon+p)$
and
$\Delta(c^{-2})=c_{e}^{-2}-c_{s}^{-2}$.
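For numerical work, the central conditions translate into seed values for $U$
and $V$ at a small starting radius $r_{0}$. A short sketch consistent with
Eqs. (77)-(78) and the definitions of $U$ and $V$ above (function name ours):

```python
import numpy as np

def central_values(r0, omega2, l, nu0, lam0):
    """Seed U = r^2 e^{lam/2} xi_r and V = delta p/(eps+p) near r = 0,
    using the normalization (l/omega^2) Y_0 = 1 of Eq. (77)."""
    xi_r = r0**(l - 1)                          # xi_r -> r for l = 2
    U0 = r0**2 * np.exp(lam0/2.0) * xi_r
    V0 = (omega2/l) * r0**l * np.exp(-nu0)      # Eq. (78), relativistic V
    return U0, V0
```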
We employ a 4th-order Runge-Kutta scheme to find a global solution of the
linear perturbation equations, Eqs. (79), subject to the boundary conditions
for the relativistic case outlined above. Since the solution set comprises
overtones, we select the lowest order $g$-mode (highest frequency) by
checking that the radial eigenfunction $\xi_{r}$ has only one node inside the
star. The corresponding eigenfrequency is plotted in the figures that follow.
We examine the trends of the $g$-mode vs. mass for various parameter choices,
for the pure nuclear, self-bound and hybrid stars.
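Schematically, the shooting procedure looks as follows (Python sketch; `rhs`
packages the right-hand sides of Eqs. (79), returning
`np.array([dU/dr, dV/dr])` with the background profiles interpolated onto the
grid, `central_values` is the seeding sketch above, and the surface
discriminant encodes $\Delta p=0$ in the $U$, $V$ variables; the bracketing
interval is a placeholder):

```python
import numpy as np
from scipy.optimize import brentq

def integrate_mode(omega2, rhs, r, U0, V0):
    """Fourth-order Runge-Kutta sweep of Eqs. (79), center to surface."""
    y = np.array([U0, V0])
    for ri, h in zip(r[:-1], np.diff(r)):
        k1 = rhs(ri, y, omega2)
        k2 = rhs(ri + h/2, y + h/2*k1, omega2)
        k3 = rhs(ri + h/2, y + h/2*k2, omega2)
        k4 = rhs(ri + h, y + h*k3, omega2)
        y = y + (h/6)*(k1 + 2*k2 + 2*k3 + k4)
    return y                                      # [U(R), V(R)]

def surface_discriminant(omega2, rhs, r, U0, V0, gR, lamR):
    """Delta p = 0 at r = R, i.e. V(R) = g e^{-lam/2} U(R)/R^2."""
    U, V = integrate_mode(omega2, rhs, r, U0, V0)
    return V - gR*np.exp(-lamR/2.0)*U/r[-1]**2

# bracket a sign change in omega^2 and refine; afterwards count the nodes
# of xi_r to confirm the n = 1 (fundamental) g-mode was selected:
# w2_g = brentq(surface_discriminant, w2_lo, w2_hi,
#               args=(rhs, r, U0, V0, gR, lamR))
```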
Figure 9: Contrasts of the $g$-mode frequencies vs mass of normal NSs for the
ZL EOS without and with muons. The two curves with different $L$’s in each
panel are for EOSs with $K_{0}=220$ MeV, $S_{v}=31$ MeV and $\gamma_{1}$=1.6.
Figure 9 contrasts the influence of varying the density dependence of the
symmetry energy, by changing the slope of the symmetry energy parameter $L$ at
$n_{s}$, of the underlying ZL EOS for normal neutron stars with fixed
$K_{0}=220$ MeV and $S_{v}=31$ MeV. For $L$ = 60 MeV as well as $L$=70 MeV,
the softening effect of muons leads to a noticeable increase in the $g$-mode
frequency at a given mass. Comparing $L$ = 60 MeV with $L$ = 70 MeV for a
fixed composition, however, the $g$-mode frequency for
$M\gtrsim 0.5$-$0.6~{}\mathrm{M}_{\odot}$ is higher for the stiffer EOS.
Figure 10: Contrasts of g-mode frequencies vs stellar mass in a hybrid star.
Parameters of the EOSs are as in the insets. In the left panel, muons are not
included, whereas the right panel is with muons included.
In Fig. 10, results contrasting the $g$-mode frequencies in normal, hybrid,
and self-bound stars are presented. The contents of this figure constitute the
principal result of this work, viz., the abrupt rise in the scale of the
$g$-mode frequency at the onset of the mixed phase in the hybrid star. For the
EOS parameters displayed in the figure, the jump occurs around 1.4
$M_{\odot}$, so that a hybrid star in a merger would have a distinctly higher
g-mode frequency than a normal NS. In the left panel, the ZL EOS does not
include muons, whereas in the right panel the ZL EOS includes muons. The
$g$-mode frequency in the mixed phase is again higher than in a pure phase,
but since the mixed phase appears at a higher density due to muons, the rise
in the $g$-mode is less dramatic compared to a hybrid star without muons.
Results for the self-bound star are shown here for comparison, and to
emphasize that its $g$-mode frequency is comparatively small (10-50 Hz).
Unlike the $f$-mode frequency for the hybrid star, which gradually
interpolates between those of the normal NS and self-bound star [50, 77] and
shows no dramatic effects of compositional changes, the $g$-mode frequency for
the hybrid star is the highest of all and is sensitive to the onset of
quarks, making it less subject to ambiguity. One does not need to know the mass of
the star to ascertain if it can be a hybrid star if the $g$-mode frequency can
be precisely determined.
The unusually large $g$-mode frequency for the hybrid star with a Gibbs mixed
phase may be understood in a qualitative sense using general thermodynamic
considerations without reference to details of the EOS. In general, the
equilibrium sound speed $c_{e,\mathrm{mix}}$ in a system with two conserved
charges ($\mu_{B}$ and $\mu_{Q}$) can be expressed as
$\displaystyle c_{e,\mathrm{mix}}^{2}$ $\displaystyle=$
$\displaystyle\frac{dp_{\mathrm{mix}}\left(\mu_{B},\mu_{Q}\right)}{d\epsilon_{\mathrm{mix}}}$
(80) $\displaystyle=$ $\displaystyle\frac{\partial
p_{\mathrm{mix}}}{\partial\mu_{B}}\left(\frac{d\mu_{B}}{d\epsilon_{\mathrm{mix}}}\right)+\frac{\partial
p_{\mathrm{mix}}}{\partial\mu_{Q}}\left(\frac{d\mu_{Q}}{d\epsilon_{\mathrm{mix}}}\right)$
where $\mu_{Q}$ is the charge chemical potential. Glendenning [61] showed that
in such a situation, while $\mu_{B}$ is smooth at the onset of the mixed
phase, $\mu_{Q}$ is not, as there is freedom to rearrange charges between the
two phases to achieve global charge neutrality and minimize the free energy.
In fact, the steady rise with density of $\mu_{Q}$ in the pure nuclear phase
changes abruptly to a decline in the mixed phase, tempering the equilibrium
sound speed as shown by our numerical results presented in Fig. 6 and
confirmed by other works [78] which use different EOS from ours for the
nucleon and quark sector. On the other hand, the adiabatic sound speed
$c_{s,\mathrm{mix}}$ is evaluated at fixed composition and shows no such
effect, hence the difference of the two sound speeds (usually small in a pure
phase) abruptly increases in the mixed phase. This is reflected as a positive
jump in the Brunt-Väisälä frequency and therefore of the $g$-mode in the mixed
phase.
## VI $g$-mode Energy and Tidal Forcing
Unlike the Sun, where convection from within can drive oscillations, any
oscillations of an evolved NS likely require an external agent to excite and
sustain the perturbation beyond its normal damping time. A violent event such
as a NS merger is bound to produce oscillations in the pre-merger phase due to
tidal forcing or in the postmerger (ringdown) phase as the hypermassive
remnant relaxes to its stable rotating configuration. Here, we estimate the
impact of the $g$-mode on tidal phasing leading up to the merger, as the
$g$-mode spectrum in the postmerger remnant can be modified by thermal and
convective effects which are beyond the scope of the current work. We follow
[18] and assume spherically symmetric non-rotating stars, the Newtonian
approximation to orbital dynamics and quadrupolar gravitational wave emission.
These simplifying approximations allow for a first estimate of the excitation
energy and amplitude of the $g$-mode, as well as the phase difference due to
dynamic tides associated to the $g$-mode (not to be confused with the quasi-
static tides due to global deformation). Our estimates can be systematically
improved by going to the post-Newtonian approximation or numerical relativity.
The estimates are derived by modeling the NS as a forced simple harmonic
oscillator with a mass $M_{\ast}$, radius $R_{\ast}$ and a natural frequency
$\omega$=$\omega_{g}$, the angular frequency of the $g$-mode. The forcing
comes from the quadrupolar moment of the companion star’s gravitational force
(mass $M$), which couples to the $g$-mode. By following the analysis of [18],
we arrive at an expression for the accumulated phase error $\Delta\Phi(t)$
caused by the $g$-mode:
$\displaystyle\Delta\Phi(t)\approx\frac{3\pi\Gamma}{m}\left[\frac{\Omega_{e}(t)}{\Omega}-1\right]\left(-\frac{\Delta
E}{E}\right)\,,$ (81)
where $\Delta E$ is the energy pumped into the $g$-mode, $E$ the total
(kinetic plus potential) orbital energy of the system, $\Omega_{e}(t)$ the
time-dependent orbital frequency of the binary, and $\Omega$=$\omega_{g}/m$.
The quantity $m$ in Eq. (81) is the azimuthal mode index ($m$=2 in this case).
Finally, $\Gamma$ is a quantity that appears as a result of applying the
stationary phase approximation to the evaluation of the time to resonance
[18], and is quantified below. A $\Delta\Phi(t)$ of ${\cal O}(1)$ signifies a
large deviation from the point particle approximation to the gravitational
waveform from the merger. Explicitly, the quantity $\Delta E$ (for angular
quantum number $l$) is given by
$\displaystyle\Delta E$ $\displaystyle=$
$\displaystyle\left(\frac{5\pi}{384m}\right)\frac{M/M_{\ast}}{[1+M/M_{\ast}]^{(2l+1)/3}}\left(\frac{c^{2}R_{\ast}}{GM_{\ast}}\right)^{5/2}$
(82) $\displaystyle\times$
$\displaystyle\left(\frac{\Omega}{\Omega_{d}}\right)^{(4l-7)/3}\left(\frac{GM_{\ast}^{2}}{R_{\ast}}\right)S_{lm}^{2}\,.$
where $\Omega_{d}$=$(GM_{\ast}/R_{\ast}^{3})^{1/2}$ is a natural frequency
unit and $S_{lm}$ is proportional to the overlap integral between the mode
eigenstate $|lm\rangle$ and the vector spherical harmonic
$\left|P_{lm}\right\rangle=\nabla\left[r^{l}Y_{lm}(\theta,\phi)\right]$. The
total instantaneous orbital energy is $E=-GMM_{\ast}/2a$ with $a=a(t)$ the
instantaneous orbital separation. The evolution of the orbital frequency for a
circularized orbit using the formula for quadrupolar gravitational wave
emission gives
$\displaystyle\Omega_{e}(\tau)=\frac{1}{8}\left(\frac{GM_{c}}{c^{3}}\right)^{-5/8}\frac{1}{\tau^{3/8}}\,,$ (83)
where $M_{c}$ is the chirp mass of the binary system and $\tau$ is the time to
coalescence. All quantities on the right hand side in Eq. (81) can be
calculated, once the parameters of the binary ($M,M_{\ast},R_{\ast}$) and the
resonant $g$-mode frequency are fixed. We choose $M$=$M_{\ast}$=1.5
$M_{\odot}$ for neutron/hybrid stars and pure quark stars. The strongest tidal
coupling is likely to the $l$=$m$=2 $g$-mode whose characteristic frequency we
choose as
$\displaystyle\omega_{g}(\mathrm{NS})\cong 2\pi(200)\,{\rm Hz}\,,\quad\omega_{g}(\mathrm{HS})\cong 2\pi(300)\,{\rm Hz}\,,\quad{\rm and}\quad\omega_{g}(\mathrm{QS})\cong 2\pi(40)\,{\rm Hz}\,,$ (84)
based on the $g$-mode eigenfrequencies in the previous section. Even without
computing $S_{lm}$, one can estimate from Eq. (83) the time at which the
$g$-mode becomes resonant as
$\displaystyle\tau_{0}(\mathrm{NS})\cong 272\,{\rm ms}\,,\quad\tau_{0}(\mathrm{HS})\cong 103\,{\rm ms}\,,\quad{\rm and}\quad\tau_{0}(\mathrm{QS})\cong 22\,{\rm s}\,,$ (85)
where the zero of time is the moment of coalescence. Assuming circularized
orbits, the standard equations of binary orbit evolution $a(t)$ [79] give
$\displaystyle a_{0}(\mathrm{NS})\cong 111~{}\mathrm{km}\,,\quad a_{0}(\mathrm{HS})\cong 85~{}\mathrm{km}\,,\quad{\rm and}\quad a_{0}(\mathrm{QS})\cong 326~{}\mathrm{km}\,.$ (86)
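The scalings behind Eqs. (85) and (86) are easy to check numerically. The short Python sketch below is our own cross-check, not part of the original calculation: it inverts Eq. (83) for the time to resonance, assuming the standard quadrupole chirp relation with its factor of $5^{3/8}$, and uses Kepler's third law for the separation. It reproduces $\tau_{0}$(HS) and $\tau_{0}$(QS) essentially exactly and $\tau_{0}$(NS) to within about 10%, while the Keplerian separations come out roughly 10% below Eq. (86); the residuals presumably reflect rounding and convention choices in the quoted numbers.

```python
import numpy as np

G, c, Msun = 6.674e-11, 2.998e8, 1.989e30        # SI units

M = Mstar = 1.5 * Msun                           # equal-mass binary of Sec. VI
m = 2                                            # azimuthal mode index
Mc = (M * Mstar) ** 0.6 / (M + Mstar) ** 0.2     # chirp mass M_c

def tau0(f_g):
    """Time to coalescence at which the l=m=2 g-mode of frequency f_g (Hz)
    becomes resonant, Omega_e = omega_g / m; inverts Eq. (83), assuming the
    standard quadrupole chirp relation (with its factor 5^(3/8))."""
    Omega = 2.0 * np.pi * f_g / m                # resonant orbital frequency
    return 5.0 * (8.0 * Omega) ** (-8 / 3) * (G * Mc / c**3) ** (-5 / 3)

def a0(f_g):
    """Orbital separation at resonance, from Kepler's third law."""
    Omega = 2.0 * np.pi * f_g / m
    return (G * (M + Mstar) / Omega**2) ** (1 / 3)

for label, f_g in [("NS", 200.0), ("HS", 300.0), ("QS", 40.0)]:
    print(f"{label}: tau0 ~ {tau0(f_g):.3g} s, a0 ~ {a0(f_g) / 1e3:.0f} km")
# NS: ~0.30 s, ~100 km;  HS: ~0.10 s, ~77 km;  QS: ~22 s, ~293 km
```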
We note that the $g$-mode of the hybrid star, which has a larger resonant
frequency than that of neutron or quark stars, is excited later in the merger
and is likely to be stronger in amplitude owing to the closer separation of
the binary, since the forcing term is $\propto 1/a^{3}$ for $l$=2. Finally, from our
calculations for the $g$-mode eigenfunction and the associated density
perturbation $\delta\epsilon(r)$, we estimate
$\displaystyle S_{lm}^{\rm NS}\cong 4.5\times 10^{-3}\,,\quad S_{lm}^{\rm HS}\cong 6.2\times 10^{-3}\,,\quad{\rm and}\quad S_{lm}^{\rm QS}\cong 9.9\times 10^{-6}\,,$ (87)
using Eq. (40) for $S_{lm}$ from [32]. From these estimates, Eq. (82) can be
utilized to yield the estimated fractional orbital energy pumped into the
$g$-mode:
$\displaystyle\left|\frac{\Delta E}{E}\right|^{\rm NS}\cong 2.3\times 10^{-3}\,,\quad\left|\frac{\Delta E}{E}\right|^{\rm HS}\cong 5.9\times 10^{-3}\,,\quad{\rm and}\quad\left|\frac{\Delta E}{E}\right|^{\rm QS}\cong 2\times 10^{-9}\,,$ (88)
and finally from Eq. (81), we obtain the phase error due to the resonant
excitation of the $g$-mode to be
$\displaystyle\Delta\Phi^{\rm NS}\cong 0.8\,,\quad\Delta\Phi^{\rm HS}\cong 0.45\,,\quad{\rm and}\quad\Delta\Phi^{\rm QS}\cong 6\times 10^{-4}\,.$ (89)
Note that $\Delta\Phi^{\rm NS}$ and $\Delta\Phi^{\rm HS}$ are comparable.
Although $\left|\Delta E/E\right|$ is larger for a hybrid star, as expected,
its higher $g$-mode frequency means that the mode is excited later in the
merger, when less time remains for accumulating a phase error. These results
are very sensitive to the value of $S_{lm}$ ($\Delta\Phi\propto S_{lm}^{2}$),
which itself can vary by a factor of 2 or more depending on the EOS.
### Comparison with other works
Table 1: Comparison of characteristic $g$-mode frequencies (denoted by $\omega_{g}$ in the table) reported in a selection of the literature. As other works usually fix the stellar mass $M$, we include this information. The symbol $\Lambda$ is used here as shorthand for hyperonic degrees of freedom and SF denotes superfluidity in the nucleonic sector. Values of $f_{g}$ that vary with the NS mass can be inferred from Figs. 9 and 10 of this work. The entries are representative, not exhaustive.
Authors [Ref.] | Core Composition | $M$ [$M_{\odot}$] | $f_{g}=\omega_{g}/(2\pi)$ [kHz]
---|---|---|---
Reisenegger & Goldreich [17] | $npe$ | 1.405 | 0.215
Lai [20] | $npe$ | 1.4 | 0.073
Kantor & Gusakov [29] | $npe$ | 1.4 | 0.13
Kantor & Gusakov [29] | $npe\mu$ | 1.4 | 0.19
Kantor & Gusakov [29] | $npe\mu$(SF) | 1.4 | 0.46
Dommes & Gusakov [30] | $npe\mu\Lambda$(SF) | 1.634 | 0.742
Yu & Weinberg [33] | $npe\mu$ | 1.4 | 0.13
Yu & Weinberg [33] | $npe\mu$(SF) | 2.0 | 0.45
Rau & Wasserman [34] | $npe\mu$(SF) | 2.0 | 0.45
Jaikumar et al. [this work] | $npe$ | 1.4 | 0.24
Jaikumar et al. [this work] | $npe\mu$ | 1.4 | 0.27
Jaikumar et al. [this work] | $npe\mu q$ | 2.0 | 0.58
Table 1 compares our results for zero-temperature core $g$-modes in the Gibbs
mixed phase of hadrons and quarks with other works, some of which also find an
enhancement of the frequency due to other compositional mixes or collective
fluid effects like superfluidity; the values in the table, however, do not
include the effect of entrainment on the $g$-mode, which has also been
studied. Details of the different EOSs used and of the effects of non-zero
temperature and entrainment can be found in the respective references. We confirm the
value of the $g$-mode frequency for $npe$ and $npe\mu$ non-superfluid matter
described by the Akmal-Pandharipande-Ravenhall (APR)-EOS as reported in [29],
which also serves as a test of our numerics. In comparison to [29] with the
APR-EOS or [33] with the SLy4 equation of state, the ZL-EOS yields a larger
value of the $g$-mode frequency as it is less stiff than either of those two.
While the EOS and the treatment of gravitational perturbations differ between
the cited works, the results for $npe\mu$ matter with superfluidity appear to
be in general agreement with each other. Note the considerably larger value of
the $g$-mode frequency for hyperonic stars with superfluidity compared to
hybrid stars or superfluid neutron stars. All of these, in turn, are larger
than for non-superfluid neutron/hyperonic stars, although the latter
calculations employ Newtonian gravity. A study of $g$-mode frequencies and damping times in
superfluid hybrid stars is a future objective that would make this comparison
more complete.
## VII Summary and Conclusions
The main objective of this work was to ascertain the characteristics of
$g$-mode oscillations of NSs containing QM in their cores. Toward this end,
the nucleon-to-quark phase transition was treated using Gibbs construction
which renders an otherwise sharp first order transition smooth. The cores of
such hybrid stars accommodate admixtures of nucleonic and quark matter, the
pure quark phase never being achieved. This feature, while allowing contrasts
between the structural properties (e.g., $M$ vs $R$) of normal and hybrid
stars to be made, also permits comparisons of observables that depend on their
interior compositions, such as short- and long-term cooling, oscillation
modes, etc. Determining the composition of the star is essential to break the
degeneracy that exists in the masses and radii of normal and hybrid stars, as
one may be masquerading as the other.
The nucleonic part of the EOS used in this work tallies with nuclear
systematics near and below $n_{s}$ in addition to being consistent with
results from modern chiral EFT calculations up to $2n_{s}$ for which
uncertainty quantifications have been made. The EOS employed in the quark
sector is sufficiently stiff, hence non-perturbative, to support $\sim
2M_{\odot}$ NSs required by recent observations. Furthermore, the overall EOS
gives radii of $\sim 1.4M_{\odot}$ stars that lie within the bounds of recent
determinations. The EOS is also consistent with the tidal deformation inferred
from gravitational wave detection in the event GW170817. Appendix A summarizes
the structural properties for the EOSs used and provides mathematical details
for the derivation of the sound speeds.
Unlike $M$-$R$ curves, for which the pressure vs density relation (EOS) alone
is sufficient, the analysis of $g$-mode oscillations requires simultaneous
information about the equilibrium and adiabatic squared sound speeds,
$c_{e}^{2}=dp/d\epsilon$ and $c_{s}^{2}=\partial p/\partial\epsilon|_{x}$,
where $x$ is the local proton fraction. The
distinction between these two sound speeds plays a central role in determining
the Brunt-Väisälä frequencies $\omega^{2}\propto c_{e}^{-2}-c_{s}^{-2}$ of
non-radial $g$-mode oscillations. Thus, a future detection of $g$-modes would
take gravitational wave astronomy beyond the current capability of $M$-$R$
measurements to determine the composition of the star.
We find that the $g$-mode is sensitive to the presence of QM in NSs, where
quarks are part of a mixed phase with nuclear matter in the core. The
equilibrium sound speed drops sharply at the boundary of the mixed phase (Fig.
5), raising the local Brunt-Väisälä frequency and the fundamental $g$-mode
frequency of the star (Fig. 6). Contrasts of $g$-mode frequencies between
normal and hybrid stars containing quark matter (Fig. 9) form the principal
results of our work.
Our analysis leads to the conclusion that in binary mergers where one or both
components may be a hybrid star, the fraction of tidal energy pumped into the
resonant $g$-mode in hybrid stars can exceed that of a NS by a factor of 2-3,
although resonance occurs later in the inspiral. On the other hand, a
self-bound star has a much weaker tidal overlap with the $g$-mode. The
cumulative tidal phase error in hybrid stars, $\Delta\Phi\cong 0.5$, is
comparable to that from tides in ordinary NSs. While this happenstance may
present a challenge in distinguishing between the two cases, should the
$g$-mode be excited to sufficient amplitude in a postmerger remnant, its
frequency spectrum would be
a possible indication for the existence of non-nucleonic matter, including
quarks. The detection of such $g$-mode frequencies in binary mergers observed
by current gravitational wave detectors seems challenging, but possible with
next generation detectors.
The novel features of this work include (i) use of nucleonic EOSs that are
consistent with constraints from modern chiral EFT calculations coupled with
sufficiently stiff quark EOSs to calculate structural properties of hybrid
stars that lie within the bounds of astrophysical measurements, (ii) a first
calculation of the two sound speeds and the principal $g$-mode frequency of
hybrid stars employing Gibbs phase criteria, and (iii) a concomitant analysis
of tidal phase effects in a binary merger due to $g$-modes in hybrid stars. In
future work, we aim to report on $g$-mode frequencies in alternative
treatments of quark matter in NSs such as a first order nucleon-to-quark phase
transition and crossover transitions as in quarkyonic matter.
_Acknowledgments._ — We are grateful to the anonymous referee for meticulous
review of the equations. We acknowledge discussions with Thomas Klähn on the
non-perturbative EOS for quark matter. Thanks are due to Sophia Han for
remarks on observational constraints on the EOS. P.J. is supported by the U.S.
NSF Grant No. PHY-1608959 and PHY-1913693. The research of A.S. and M.P. was
supported by the U.S. Department of Energy, Grant No. DE-FG02-93ER-40756. C.C.
acknowledges support from the European Union’s Horizon 2020 research and
innovation programme under the Marie Skłodowska-Curie grant agreement No.
754496 (H2020-MSCA-COFUND-2016 FELLINI).
## Appendix A Determination of EOS constants in SNM and PNM for the ZL EOS
### A.1 SNM
The constants $a_{0},b_{0}$ and $\gamma$ in Eq. (10) for SNM are determined by
utilizing the empirical properties of SNM at $u=1$. Specifically, the values
used are $E_{1/2}=-B=-16$ MeV at $n_{s}=0.16~{}{\rm fm^{-3}}$,
$p_{1/2}/n_{s}=0$, and $K_{1/2}=220$ MeV. Manipulating the relations
$\displaystyle -B=T_{1/2}+a_{0}+b_{0}\,,$ (90)
$\displaystyle 0=T_{1/2}^{\prime}+a_{0}+\gamma b_{0}\,,$ (91)
$\displaystyle\frac{K_{1/2}}{9}=T_{1/2}^{\prime\prime}+2T_{1/2}^{\prime}+2a_{0}+\gamma(\gamma+1)b_{0}\,,$ (92)
the constants are given by
$\displaystyle\gamma=\frac{K_{1/2}/9-T_{1/2}^{\prime\prime}}{T_{1/2}-T_{1/2}^{\prime}+B}\,,\quad b_{0}=\frac{K_{1/2}/9-T_{1/2}^{\prime\prime}}{\gamma(\gamma-1)}\,,\quad{\rm and}\quad a_{0}=-B-T_{1/2}-b_{0}\,,$ (93)
where $T_{1/2}^{\prime}=u\frac{dT_{1/2}}{du}$ and
$T_{1/2}^{\prime\prime}=u^{2}\frac{d^{2}T_{1/2}}{du^{2}}$. Explicit
expressions for these derivatives are
$\displaystyle T_{1/2}^{\prime}=\left.\frac{p_{1/2}^{\rm kin}}{n}\right|_{n_{s}}=\frac{1}{n_{s}}\cdot\frac{2}{12\pi^{2}}\bigg[k_{F}E_{F}\left(k_{F}^{2}-\frac{3}{2}M_{B}^{2}\right)+\frac{3}{2}M_{B}^{4}\ln\left(\frac{k_{F}+E_{F}}{M_{B}}\right)\bigg]_{k_{Fs}}\,,\qquad T_{1/2}^{\prime\prime}=\frac{K_{1/2}^{\rm kin}}{9}-2T_{1/2}^{\prime}=\frac{k_{Fs}^{2}}{3E_{Fs}}-2T_{1/2}^{\prime}\,,$ (94)
where $k_{Fs}=(3\pi^{2}n_{s}/2)^{1/3}$. To obtain the first term in the
rightmost equality above, it is advantageous to use the thermodynamical
identity $p=n\mu-\epsilon$ for the kinetic parts, whence
$\frac{dp}{dn}=n\frac{d\mu}{dn}=\frac{d\mu}{dk_{F}}\frac{dk_{F}}{dn}$. The
result quoted above ensues from the relations
$\frac{dk_{F}}{dn}=\frac{k_{F}}{3n}$ and
$\mu=E_{F}=\sqrt{k_{F}^{2}+M_{B}^{2}}$ both evaluated at $n_{s}$ and $k_{Fs}$.
Numerical values of the derivatives and constants so derived are
$\displaystyle T_{1/2}\simeq 21.79~{}{\rm MeV}\,,\quad T_{1/2}^{\prime}\simeq 14.34~{}{\rm MeV}\,,\quad T_{1/2}^{\prime\prime}\simeq-5.030~{}{\rm MeV}\,,\quad\gamma\simeq 1.256\,,\quad a_{0}\simeq-129.3~{}{\rm MeV}\,,\quad{\rm and}\quad b_{0}\simeq 91.49~{}{\rm MeV}\,,$ (95)
as in ZL. For other permissible values of $K_{1/2}$ in the range $220\pm 30$
MeV, Eqs. (93) and (94) can be used to determine the corresponding constants.
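The chain of Eqs. (90)-(95) is straightforward to script. The Python sketch below is our own illustration: it implements Eq. (93), with the kinetic terms either supplied directly or evaluated from the closed-form integrals of a free relativistic Fermi gas, cf. Eq. (94). Fed the quoted values of Eq. (95), it returns $\gamma\simeq 1.257$, $b_{0}\simeq 91.3$ MeV and $a_{0}\simeq-129.1$ MeV; computing the kinetic terms from scratch instead gives $T_{1/2}\simeq 21.8$ MeV, $T_{1/2}^{\prime}\simeq 14.4$ MeV and constants within a few percent of Eq. (95), the residuals tracing to the precise mass and conversion constants adopted in ZL.

```python
import numpy as np

hbarc, MB = 197.327, 939.565            # MeV fm, MeV (assumed constants)
B, ns, K = 16.0, 0.16, 220.0            # binding energy, sat. density, K_{1/2}

def kinetic_terms(g=4.0, M=MB, n=ns):
    """Closed-form T_{1/2}, T'_{1/2}, T''_{1/2} of a free relativistic
    Fermi gas with degeneracy g (g = 4 for SNM), cf. Eq. (94)."""
    kF = (6.0 * np.pi**2 * n / g) ** (1 / 3) * hbarc
    EF = np.hypot(kF, M)
    nb = n * hbarc**3                   # baryon density in MeV^3
    lg = np.log((kF + EF) / M)
    eps = g / (16 * np.pi**2) * (kF * EF * (2*kF**2 + M**2) - M**4 * lg)
    T = eps / nb - M
    Tp = (g / 2) * (kF*EF*(kF**2 - 1.5*M**2) + 1.5*M**4*lg) / (12 * np.pi**2) / nb
    Tpp = kF**2 / (3 * EF) - 2 * Tp
    return T, Tp, Tpp

def snm_constants(T, Tp, Tpp, B=B, K=K):
    """Eq. (93): gamma, b0, a0 from the empirical SNM conditions."""
    gamma = (K / 9 - Tpp) / (T - Tp + B)
    b0 = (K / 9 - Tpp) / (gamma * (gamma - 1))
    return gamma, b0, -B - T - b0

print(snm_constants(21.79, 14.34, -5.030))   # quoted inputs, Eq. (95)
print(snm_constants(*kinetic_terms()))       # from the Fermi-gas integrals
```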
### A.2 PNM
In the PNM sector, in which $x=0$, the constants in Eq. (15) to be determined
are $a_{1}$, $b_{1}$ and $\gamma_{1}$. As in SNM, $E_{0}$ and $T_{0}$ are relative to
the baryon mass $M_{B}$. Denoting the energy per baryon of PNM by $E_{0}$, its
various terms and the associated pressure are
$\displaystyle E_{0}=T_{0}+V_{0}=T_{0}+a_{1}u+b_{1}u^{\gamma_{1}}\,,\qquad p_{0}=n_{s}\left(u^{2}\frac{dE_{0}}{du}\right)=n_{s}\left(u^{2}T_{0}^{\prime}+a_{1}u^{2}+\gamma_{1}b_{1}u^{\gamma_{1}+1}\right)\,.$ (96)
Evaluating the above equations at $u=1$ leads to
$\displaystyle E_{0}=S_{v}-B=T_{0}+a_{1}+b_{1}\,,$ (97)
$\displaystyle p_{0}=n_{s}(T_{0}^{\prime}+a_{1}+\gamma_{1}b_{1})\,,$ (98)
where $S_{v}=(E_{0}-E_{1/2})$ at $u=1$. The last equation above is generally
written as
$\displaystyle\frac{p_{0}}{n_{s}}=\frac{L}{3}\quad{\rm with}\quad L=3\left[n\frac{dS_{v}}{dn}\right]_{n_{s}}=3~{}[uS_{v}^{\prime}]_{u=1}\,,\quad{\rm so~{}that}\quad\frac{L}{3}=T_{0}^{\prime}+a_{1}+\gamma_{1}b_{1}\,,$ (99)
where $S_{v}^{\prime}=\frac{dS_{v}}{du}$. Manipulating Eqs. (97) and (99)
leads to the relations
$\displaystyle b_{1}=\frac{\frac{L}{3}+B-S_{v}+T_{0}-T_{0}^{\prime}}{\gamma_{1}-1}\quad{\rm and}\quad a_{1}=S_{v}-B-T_{0}-b_{1}\,.$ (100)
Taking guidance from the empirical properties of isospin asymmetric nuclear
matter, we choose $S_{v}=31$ MeV, $L$ in the range ($30$-$70$) MeV, and
$\gamma_{1}=5/3$. The resulting values of the constants are
$\displaystyle a_{1}\simeq-\left(\frac{L}{2}+14.72\right)~{}{\rm MeV}\quad{\rm and}\quad b_{1}\simeq\left(\frac{L}{2}-4.62\right)~{}{\rm MeV}\,.$ (101)
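The same machinery applies in the PNM sector with degeneracy $g=2$. A self-contained sketch of ours (with the same caveats as above) evaluates $T_{0}\simeq 34.4$ MeV and $T_{0}^{\prime}\simeq 22.4$ MeV from the free neutron gas and then applies Eq. (100); for $\gamma_{1}=5/3$ it gives $a_{1}\simeq-(L/2+14.8)$ MeV and $b_{1}\simeq(L/2-4.6)$ MeV, in good agreement with Eq. (101).

```python
import numpy as np

hbarc, MB = 197.327, 939.565            # MeV fm, MeV (assumed constants)
B, ns, Sv, gamma1 = 16.0, 0.16, 31.0, 5.0 / 3.0

kF = (3 * np.pi**2 * ns) ** (1 / 3) * hbarc   # pure neutron matter, g = 2
EF = np.hypot(kF, MB)
nb = ns * hbarc**3
lg = np.log((kF + EF) / MB)

# free-neutron-gas kinetic energy per baryon and its logarithmic derivative
T0 = (kF * EF * (2*kF**2 + MB**2) - MB**4 * lg) / (8 * np.pi**2) / nb - MB
T0p = (kF * EF * (kF**2 - 1.5*MB**2) + 1.5*MB**4 * lg) / (12 * np.pi**2) / nb

for L in (30.0, 50.0, 70.0):            # MeV; Eq. (100)
    b1 = (L / 3 + B - Sv + T0 - T0p) / (gamma1 - 1)
    a1 = Sv - B - T0 - b1
    print(f"L={L:.0f}: a1={a1:7.2f} MeV, b1={b1:6.2f} MeV")
```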
### A.3 Sensitivity of the EOS constants
The EOS constants above depend on the input values of $B,~{}n_{s},~{}K_{0}$
and $S_{v},~{}L,~{}\gamma_{1}$ in SNM and PNM, respectively. Although we have
used representative values for these quantities at $u=1$, nuclear data permits
variations in them. Furthermore, one or more sets of these constants may be
correlated, as, for example, $S_{v}$ and $L$. Additional constraints are to
support $\simeq 2M_{\odot}$ NSs and to maintain causality, at least within the
star. These points must be borne in mind when varying the input values,
particularly when correlated errors are present in theoretical evaluations of
these quantities.
### A.4 NS properties with the ZL EOSs
The various properties shown in Table 2 below are for beta-equilibrated normal
NSs and correspond to variations in the characteristic properties of the ZL
EOSs.
Table 2: Structural properties of nucleonic NSs with $M=1.4\,M_{\odot}$ and $M_{\rm max}$ for the ZL EOSs. For each mass, $\beta=(GM/c^{2}R)\simeq(1.475/R)(M/M_{\odot})$ is the compactness parameter, and $n_{c}$, $p_{c}$ and $y_{c}$ are the central values of the density, pressure and proton fraction, respectively. The corresponding equilibrium squared speeds of sound are denoted by $c_{e}^{2}$. The $\Lambda$'s denote tidal deformabilities.
Property | ZL-A | ZL-B | ZL-C | Units
---|---|---|---|---
$K_{0}$ | 220 | 220 | 240 | MeV
$S_{v}$ | 31 | 31 | 31 | MeV
$L$ | 50 | 70 | 60 | MeV
$\gamma_{1}$ | 1.6 | 1.6 | 1.8 |
$R_{1.4}$ | 11.77 | 12.69 | 12.61 | km
$\beta_{1.4}$ | 0.175 | 0.163 | 0.164 |
$n_{c,1.4}/n_{s}$ | 3.35 | 2.78 | 2.75 |
$p_{c,1.4}$ | 83.65 | 60.59 | 61.50 | ${\rm MeV~{}fm^{-3}}$
$(c_{e}^{2})_{c,1.4}$ | 0.385 | 0.345 | 0.363 | $c^{2}$
$\Lambda_{1.4}$ | 713.4 | 970.2 | 504.1 |
$R_{\rm max}$ | 10.01 | 10.68 | 10.8 | km
$M_{\rm max}$ | 1.997 | 2.02 | 2.13 | $M_{\odot}$
$\beta_{\rm max}$ | 0.294 | 0.279 | 0.291 |
$n_{\rm c,max}/n_{s}$ | 7.71 | 6.96 | 6.67 |
$p_{\rm c,max}$ | 798.89 | 602.04 | 646.7 | ${\rm MeV~{}fm^{-3}}$
$(c_{e}^{2})_{\rm c,max}$ | 0.874 | 0.777 | 0.866 | $c^{2}$
$\Lambda_{\rm max}$ | 9.11 | 9.6 | 6.39 |
Structural properties of hybrid stars are discussed and shown in various
figures in the text.
### A.5 Proof of equivalence of Eqs. (40) and (51)
Here we establish the equivalence of the direct approach to computing
$c_{s}^{2}$ from Eq. (40) with that from Eq. (51) for a general EOS with a
parabolic dependence on the proton fraction $x$, in the case of $npe$ matter.
Both approaches yield identical results, which we have verified numerically as
well.
The equilibrium squared sound speed is
$c_{e}^{2}=\left(\frac{dP}{d\epsilon}\right)_{\rm
eq}=\frac{n_{B}\frac{d}{dn_{B}}\left(P_{\rm
bar}+P_{e}\right)}{\left(\epsilon+P\right)}\,,$ (102)
where the total pressure $P=P_{\rm bar}+P_{e}$ is comprised of the pressure
from baryons and electrons. Writing
$\frac{d}{dn_{B}}=\frac{\partial}{\partial
n_{B}}+\frac{dx}{dn_{B}}\frac{\partial}{\partial x}\,,$ (103)
we get
$c_{e}^{2}=\frac{\left(n_{B}\frac{\partial P_{bar}}{\partial
n_{B}}+n_{B}\frac{\partial P_{e}}{\partial
n_{B}}\right)}{\epsilon+P}+\frac{n_{B}\left(\frac{\partial P_{\text{bar
}}}{\partial x}\frac{dx}{dn_{B}}+\frac{\partial P_{\text{e}}}{\partial
x}\frac{dx}{dn_{B}}\right)}{\epsilon+P}\,.$ (104)
Comparing with Eq. (39), the first term on the right hand side is simply
$c_{s}^{2}$. For the specific case of an EOS with parabolic dependence in $x$
(of which the APR-EOS in Kantor & Gusakov [29] and the ZL-EOS in Zhao and
Lattimer [43] are examples), we have
$E(u,x)=E_{0}(u)+(1-2x)^{2}S_{2}(u)\,,\quad u=n_{B}/n_{0}\,,$ (105)
where $n_{0}$ is the saturation density, $E_{0}$ the energy per baryon
(neutrons and protons) and $S_{2}$ the symmetry energy. Computing the pressure
and its derivatives with respect to $n_{B}$ and $x$ for the EOS in Eq. (105),
we find
$\frac{\partial P_{\text{bar}}}{\partial x}=-4n_{B}(1-2x)uS_{2}^{\prime}\,,$
(106)
where the prime on $S_{2}$ is with respect to $u$. Since (assuming massless
electrons) $\frac{\partial P_{\text{e}}}{\partial x}=\frac{1}{3}n_{B}\mu_{e}$,
it follows that
$c_{e}^{2}=c_{s}^{2}+\frac{1}{\mu_{n}}\left(\frac{\mu_{e}}{3}-4(1-2x)uS_{2}^{\prime}\right)n_{B}\frac{dx}{dn_{B}}\,.$
(107)
From the $\beta$-equilibrium condition
$\mu_{e}=4S_{2}(1-2x)\Rightarrow\mu_{e}/(4S_{2})=(1-2x)\,,$ (108)
we get upon differentiation
$\frac{1}{4S_{2}}\left(\frac{d\mu_{e}}{dn_{B}}\right)-\left(\frac{\mu_{e}}{4S_{2}^{2}}\right)\left(\frac{dS_{2}}{dn_{B}}\right)=-2\left(\frac{dx}{dn_{B}}\right)\,.$
(109)
Using
$\left(\frac{d\mu_{e}}{dn_{B}}\right)=\frac{\mu_{e}}{3x}\left(\frac{dx}{dn_{B}}\right)+\frac{\mu_{e}}{3n_{B}}\,,$
(110)
and solving for $dx/dn_{B}$ from Eq. (109), a minor rearrangement yields
$n_{B}\left(\frac{dx}{dn_{B}}\right)=\frac{-\left(\frac{\mu_{e}}{3}-4(1-2x)uS_{2}^{\prime}\right)}{\left(\frac{\mu_{e}}{3x}+8S_{2}\right)}\,.$
(111)
Putting together Eq. (111) with Eq. (107), we get
$\displaystyle
c_{e}^{2}=c_{s}^{2}-\frac{1}{\mu_{n}}\frac{\left(\frac{\mu_{e}}{3}-4(1-2x)uS_{2}^{\prime}\right)^{2}}{\left(\frac{\mu_{e}}{3x}+8S_{2}\right)}\,.$
(112)
Comparing Eq. (112) with Eqs. (51), (55) and (57) in the text, namely,
$\displaystyle c_{s}^{2}=c_{e}^{2}+\frac{\left[n_{B}\left(\frac{\partial\tilde{\mu}}{\partial n_{B}}\right)_{x}\right]^{2}}{\mu_{n}\left(\frac{\partial\tilde{\mu}}{\partial x}\right)_{n_{B}}}\,,$ (113)
$\displaystyle n_{B}\frac{\partial\tilde{\mu}}{\partial n_{B}}=\frac{\mu_{e}}{3}-4(1-2x)~{}uS_{2}^{\prime}\,,$ (114)
$\displaystyle\frac{\partial\tilde{\mu}}{\partial x}=\frac{1}{3}\frac{\mu_{e}}{x}+8S_{2}(u)\,,$ (115)
we see that Eq. (51) is consistent with the direct definition of $c_{s}^{2}$
from Eq. (40) and that Eq. (51) applies in general for any form of the EOS
with a parabolic dependence in $x$, although in the text we arrived at Eqs.
(55) and (57) in the context of the ZL-EOS.
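The implicit-differentiation step from Eq. (108) to Eq. (111) can also be checked symbolically. The sympy sketch below is our own verification; it assumes massless electrons, so that $\mu_{e}=(3\pi^{2}n_{B}x)^{1/3}$ in natural units, and an arbitrary symmetry-energy function $S_{2}(u)$.

```python
import sympy as sp

u, x, n0 = sp.symbols('u x n_0', positive=True)
S2 = sp.Function('S2')(u)               # symmetry energy S_2(u), left general

nB = u * n0
mue = (3 * sp.pi**2 * nB * x) ** sp.Rational(1, 3)   # massless electrons

# beta-equilibrium condition, Eq. (108): F(u, x) = 0
F = mue - 4 * S2 * (1 - 2 * x)

# implicit differentiation: dx/dn_B = -(dF/dn_B) / (dF/dx)
dx_dnB = -(sp.diff(F, u) / n0) / sp.diff(F, x)
lhs = nB * dx_dnB

# right-hand side of Eq. (111)
S2p = sp.diff(S2, u)
rhs = -(mue / 3 - 4 * (1 - 2 * x) * u * S2p) / (mue / (3 * x) + 8 * S2)

print(sp.simplify(lhs - rhs))           # prints 0
```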
### A.6 Derivation of Eqs. (59)-(61)
In $npe\mu$ matter, we choose the independent variables to be the baryon
density $n_{B}$, the lepton fraction $x$, and the muon fraction $y\equiv
x_{\mu}$. The electron fraction $x_{e}$ is the difference $x-y$.
The starting point for the speed-of-sound difference is
$c_{s}^{2}-c_{e}^{2}=\frac{1}{\mu_{avg}}\left.\frac{\partial P}{\partial
n_{B}}\right|_{x,y}-\frac{1}{\mu_{n}}\frac{dP}{dn_{B}}~{}.$ (116)
The total derivative of the pressure $P(n_{B},x,y)$ with respect to $n_{B}$ is
given by
$\frac{dP}{dn_{B}}=\left.\frac{\partial P}{\partial
n_{B}}\right|_{x,y}+\left.\frac{\partial P}{\partial
x}\right|_{n_{B},y}\frac{dx}{dn_{B}}+\left.\frac{\partial P}{\partial
y}\right|_{n_{B},x}\frac{dy}{dn_{B}}$ (117)
and therefore
$\displaystyle c_{s}^{2}-c_{e}^{2}=\left(\frac{1}{\mu_{avg}}-\frac{1}{\mu_{n}}\right)\left.\frac{\partial P}{\partial n_{B}}\right|_{x,y}-\frac{1}{\mu_{n}}\left(\left.\frac{\partial P}{\partial x}\right|_{n_{B},y}\frac{dx}{dn_{B}}+\left.\frac{\partial P}{\partial y}\right|_{n_{B},x}\frac{dy}{dn_{B}}\right)$ (118)
$\displaystyle\phantom{c_{s}^{2}-c_{e}^{2}}=\left(\frac{\mu_{n}-\mu_{avg}}{\mu_{avg}~{}\mu_{n}}\right)\left.\frac{\partial P}{\partial n_{B}}\right|_{x,y}-\frac{1}{\mu_{n}}\left(\left.\frac{\partial P}{\partial x}\right|_{n_{B},y}\frac{dx}{dn_{B}}+\left.\frac{\partial P}{\partial y}\right|_{n_{B},x}\frac{dy}{dn_{B}}\right)\,.$ (119)
The average chemical potential $\mu_{avg}$ is
$\mu_{avg}=(1-x)\mu_{n}+x\mu_{p}+(x-y)\mu_{e}+y\mu_{\mu}\,,$ (120)
which means that
$\displaystyle\mu_{n}-\mu_{avg}=x(\mu_{n}-\mu_{p}-\mu_{e})+y(\mu_{e}-\mu_{\mu})\equiv-x\tilde{\mu}_{x}-y\tilde{\mu}_{y}\,,$ (121)
with the obvious definitions for $\tilde{\mu}_{x}$ and $\tilde{\mu}_{y}$.
In $\beta$-equilibrium $\mu_{n}=\mu_{avg}$ (as well as
$\tilde{\mu}_{x}=\tilde{\mu}_{y}=0$) and, correspondingly,
$c_{s}^{2}-c_{e}^{2}=-\frac{1}{\mu_{n}}\left(\left.\frac{\partial P}{\partial
x}\right|_{n_{B},y}\frac{dx}{dn_{B}}+\left.\frac{\partial P}{\partial
y}\right|_{n_{B},x}\frac{dy}{dn_{B}}\right)~{},$ (122)
which is Eq. (59) in the main text. Using $P=n_{B}^{2}\left.\frac{\partial
E}{\partial n_{B}}\right|_{x,y}$, the speed-of-sound difference can be
expressed as
$\displaystyle c_{s}^{2}-c_{e}^{2}=-\frac{n_{B}^{2}}{\mu_{n}}\left.\frac{\partial}{\partial n_{B}}\left(\left.\frac{\partial E}{\partial x}\right|_{n_{B},y}\frac{dx}{dn_{B}}+\left.\frac{\partial E}{\partial y}\right|_{n_{B},x}\frac{dy}{dn_{B}}\right)\right|_{x,y}=-\frac{n_{B}^{2}}{\mu_{n}}\left(\left.\frac{\partial\tilde{\mu}_{x}}{\partial n_{B}}\right|_{x,y}\frac{dx}{dn_{B}}+\left.\frac{\partial\tilde{\mu}_{y}}{\partial n_{B}}\right|_{x,y}\frac{dy}{dn_{B}}\right)~{}.$ (123)
The calculation of $\frac{dx}{dn_{B}}$ and $\frac{dy}{dn_{B}}$ begins from the
total differentials of $\tilde{\mu}_{x}$ and $\tilde{\mu}_{y}$ which are
$d\tilde{\mu}_{x}=\left.\frac{\partial\tilde{\mu}_{x}}{\partial
n_{B}}\right|_{x,y}dn_{B}+\left.\frac{\partial\tilde{\mu}_{x}}{\partial
x}\right|_{n_{B},y}dx+\left.\frac{\partial\tilde{\mu}_{x}}{\partial
y}\right|_{n_{B},x}dy=0$ (124)
and
$d\tilde{\mu}_{y}=\left.\frac{\partial\tilde{\mu}_{y}}{\partial
n_{B}}\right|_{x,y}dn_{B}+\left.\frac{\partial\tilde{\mu}_{y}}{\partial
x}\right|_{n_{B},y}dx+\left.\frac{\partial\tilde{\mu}_{y}}{\partial
y}\right|_{n_{B},x}dy=0~{}.$ (125)
From the former differential, it follows that
$dy=\frac{-1}{\left.\frac{\partial\tilde{\mu}_{x}}{\partial
y}\right|_{n_{B},x}}\left(\left.\frac{\partial\tilde{\mu}_{x}}{\partial
n_{B}}\right|_{x,y}dn_{B}+\left.\frac{\partial\tilde{\mu}_{x}}{\partial
x}\right|_{n_{B},y}dx\right)$ (126)
which, when substituted in the latter, leads to
$\displaystyle 0=\left.\frac{\partial\tilde{\mu}_{y}}{\partial n_{B}}\right|_{x,y}dn_{B}+\left.\frac{\partial\tilde{\mu}_{y}}{\partial x}\right|_{n_{B},y}dx-\frac{\left.\frac{\partial\tilde{\mu}_{y}}{\partial y}\right|_{n_{B},x}}{\left.\frac{\partial\tilde{\mu}_{x}}{\partial y}\right|_{n_{B},x}}\left(\left.\frac{\partial\tilde{\mu}_{x}}{\partial n_{B}}\right|_{x,y}dn_{B}+\left.\frac{\partial\tilde{\mu}_{x}}{\partial x}\right|_{n_{B},y}dx\right)~{}.$ (127)
One then collects terms proportional to $dn_{B}$ and $dx$:
$\displaystyle\left(\left.\frac{\partial\tilde{\mu}_{y}}{\partial n_{B}}\right|_{x,y}-\frac{\left.\frac{\partial\tilde{\mu}_{y}}{\partial y}\right|_{n_{B},x}}{\left.\frac{\partial\tilde{\mu}_{x}}{\partial y}\right|_{n_{B},x}}\left.\frac{\partial\tilde{\mu}_{x}}{\partial n_{B}}\right|_{x,y}\right)dn_{B}=-\left(\left.\frac{\partial\tilde{\mu}_{y}}{\partial x}\right|_{n_{B},y}-\frac{\left.\frac{\partial\tilde{\mu}_{y}}{\partial y}\right|_{n_{B},x}}{\left.\frac{\partial\tilde{\mu}_{x}}{\partial y}\right|_{n_{B},x}}\left.\frac{\partial\tilde{\mu}_{x}}{\partial x}\right|_{n_{B},y}\right)dx\,,$ (128)
or, equivalently,
$\frac{dx}{dn_{B}}=\frac{\left.\frac{\partial\tilde{\mu}_{x}}{\partial
y}\right|_{n_{B},x}\left.\frac{\partial\tilde{\mu}_{y}}{\partial
n_{B}}\right|_{x,y}-\left.\frac{\partial\tilde{\mu}_{y}}{\partial
y}\right|_{n_{B},x}\left.\frac{\partial\tilde{\mu}_{x}}{\partial
n_{B}}\right|_{x,y}}{\left.\frac{\partial\tilde{\mu}_{x}}{\partial
y}\right|_{n_{B},x}\left.\frac{\partial\tilde{\mu}_{y}}{\partial
x}\right|_{n_{B},y}-\left.\frac{\partial\tilde{\mu}_{y}}{\partial
y}\right|_{n_{B},x}\left.\frac{\partial\tilde{\mu}_{x}}{\partial
x}\right|_{n_{B},y}}~{}.$ (129)
Similarly,
$\frac{dy}{dn_{B}}=\frac{\left.\frac{\partial\tilde{\mu}_{x}}{\partial
x}\right|_{n_{B},y}\left.\frac{\partial\tilde{\mu}_{y}}{\partial
n_{B}}\right|_{x,y}-\left.\frac{\partial\tilde{\mu}_{y}}{\partial
x}\right|_{n_{B},y}\left.\frac{\partial\tilde{\mu}_{x}}{\partial
n_{B}}\right|_{x,y}}{\left.\frac{\partial\tilde{\mu}_{x}}{\partial
x}\right|_{n_{B},y}\left.\frac{\partial\tilde{\mu}_{y}}{\partial
y}\right|_{n_{B},x}-\left.\frac{\partial\tilde{\mu}_{y}}{\partial
x}\right|_{n_{B},y}\left.\frac{\partial\tilde{\mu}_{x}}{\partial
y}\right|_{n_{B},x}}~{}.$ (130)
The speed-of-sound difference, as given by Eqs. (123), (129), and (130), is
physically transparent because $\beta$-equilibrium and compositional gradients
are brought to the forefront via $\tilde{\mu}_{x(y)}$ and $\partial/\partial
x(y)$, respectively (the latter two equations are Eqs. (60) and (61) in the
main text). However, this intuitive picture comes at the expense of
computational complexity.
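In a numerical implementation, the closed forms need not be coded explicitly: at each density, Eqs. (124)-(125) are simply a $2\times 2$ linear system for $(dx/dn_{B},\,dy/dn_{B})$. A minimal sketch of ours follows, in which the partial derivatives are assumed to come from one's EOS routine; the numbers shown are purely illustrative placeholders. For $npe$ matter the analogous one-variable system reduces to the single ratio of Eq. (111).

```python
import numpy as np

def composition_gradients(dmu_x, dmu_y):
    """Solve the linear system of Eqs. (124)-(125) for (dx/dn_B, dy/dn_B).

    dmu_x, dmu_y: partials of mu~_x and mu~_y with respect to (n_B, x, y),
    each a length-3 sequence evaluated at one beta-equilibrium point."""
    A = np.array([[dmu_x[1], dmu_x[2]],
                  [dmu_y[1], dmu_y[2]]], dtype=float)
    rhs = -np.array([dmu_x[0], dmu_y[0]], dtype=float)
    return np.linalg.solve(A, rhs)

# illustrative numbers only, standing in for EOS-derived partials:
dx_dnB, dy_dnB = composition_gradients((120.0, 900.0, -40.0),
                                       (35.0, -40.0, 300.0))
print(dx_dnB, dy_dnB)
```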
## References
* Collins and Perry [1975] J. C. Collins and M. J. Perry, Phys. Rev. Lett. 34, 1353 (1975).
* Kurkela _et al._ [2010] A. Kurkela, P. Romatschke, and A. Vuorinen, Phys. Rev. D 81, 105021 (2010).
* Kurkela and Vuorinen [2016] A. Kurkela and A. Vuorinen, Phys. Rev. Lett. 117, 042501 (2016).
* Gorda _et al._ [2018] T. Gorda, A. Kurkela, P. Romatschke, M. Säppi, and A. Vuorinen, Phys. Rev. Lett. 121, 202701 (2018).
* Annala _et al._ [2020] E. Annala, T. Gorda, A. Kurkela, J. Nättilä, and A. Vuorinen, Nature Physics 16, 907 (2020).
* Han _et al._ [2019] S. Han, M. A. A. Mamun, S. Lalit, C. Constantinou, and M. Prakash, Phys. Rev. D 100, 103022 (2019).
* Page _et al._ [2015] D. Page, J. M. Lattimer, M. Prakash, and A. W. Steiner, Novel Superfluids, International Series of Monographs on Physics, edited by K. H. Bennemann, and J. B. Ketterson (Oxford University Press) , 505 (2015).
* Potekhin _et al._ [2015] A. Y. Potekhin, J. A. Pons, and D. Page, Space Sci. Rev. 191, 239 (2015).
* Hinderer [2008] T. Hinderer, The Astrophysical Journal 677, 1216 (2008).
* Damour and Nagar [2009] T. Damour and A. Nagar, Phys. Rev. D 80, 084035 (2009).
* Postnikov _et al._ [2010] S. Postnikov, M. Prakash, and J. M. Lattimer, Phys. Rev. D 82, 024016 (2010).
* Abbott _et al._ [2017] B. P. Abbott, others for the LIGO Scientific Collaboration, and Virgo Collaboration, Phys. Rev. Lett. 119, 161101 (2017).
* Parisi _et al._ [2020] A. Parisi, C. Vásquez Flores, C. H. Lenzi, C.-S. Chen, and G. Lugones, arXiv e-prints , arXiv:2009.14274 (2020).
* Tews _et al._ [2019] I. Tews, J. Margueron, and S. Reddy, European Physical Journal A 55, 97 (2019).
* Kokkotas _et al._ [2001] K. D. Kokkotas, T. A. Apostolatos, and N. Andersson, Mon. Not. Roy. Ast. Soc. 320, 307 (2001).
* Alford _et al._ [2019] M. G. Alford, S. Han, and K. Schwenzer, Journal of Physics G Nuclear Physics 46, 114001 (2019).
* Reisenegger and Goldreich [1992] A. Reisenegger and P. Goldreich, Astrophys. J. 395, 240 (1992).
* Reisenegger and Goldreich [1994] A. Reisenegger and P. Goldreich, Astrophys. J. 426, 688 (1994).
* Cox [1980] J. P. Cox, _Theory of Stellar Pulsation_ (Princeton University Press, 1980).
* Lai [1994] D. Lai, Mon. Not. Roy. Astron. Soc. 270, 611 (1994).
* Finn [1987] L. S. Finn, Mon. Not. Roy. Ast. Soc. 227, 265 (1987).
* Strohmayer [1993] T. E. Strohmayer, Astrophys. J. 417, 273 (1993).
* McDermott [1990] P. N. McDermott, Mon. Not. Roy. Ast. Soc. 245, 508 (1990).
* Lee [1995] U. Lee, Astron. Astrophys. 303, 515 (1995).
* Prix and Rieutord [2002] R. Prix and M. Rieutord, Astron. & Astrophys. 393, 949–963 (2002).
* Andersson and Comer [2001] N. Andersson and G. L. Comer, Mon. Not. Roy. Ast. Soc. 328, 1129 (2001).
* Gusakov and Kantor [2013] M. E. Gusakov and E. M. Kantor, Phys. Rev. D 88, 101302 (2013).
* Gualtieri _et al._ [2014] L. Gualtieri, E. M. Kantor, M. E. Gusakov, and A. I. Chugunov, Phys. Rev. D 90, 024010 (2014).
* Kantor and Gusakov [2014] E. M. Kantor and M. E. Gusakov, Mon. Not. Roy. Astron. Soc. 442, 90 (2014).
* Dommes and Gusakov [2016] V. A. Dommes and M. E. Gusakov, Mon. Not. Roy. Astron. Soc. 455, 2852 (2016).
* Passamonti _et al._ [2016] A. Passamonti, N. Andersson, and W. C. G. Ho, Mon. Not. Roy. Astron. Soc. 455, 1489 (2016).
* Yu and Weinberg [2017] H. Yu and N. N. Weinberg, Mon. Not. Roy. Astron. Soc. 470, 350 (2017).
* Yu and Weinberg [2017] H. Yu and N. N. Weinberg, Mon. Not. Roy. Ast. Soc. 464, 2622 (2017).
* Rau and Wasserman [2018] P. B. Rau and I. Wasserman, Mon. Not. Roy. Ast. Soc. 481, 4427 (2018).
* Gusakov _et al._ [2014] M. E. Gusakov, P. Haensel, and E. M. Kantor, Monthly Notices of the Royal Astronomical Society 439, 318 (2014).
* Wei _et al._ [2020] W. Wei, M. Salinas, T. Klähn, P. Jaikumar, and M. Barry, The Astrophysical Journal 904, 187 (2020).
* Abbott _et al._ [2018] B. P. Abbott _et al._ , Phys. Rev. Lett. 121, 161101 (2018).
* Riley _et al._ [2019] T. E. Riley _et al._ , The Astrophysical Journal Letters 887, L21 (2019).
* Miller _et al._ [2019] M. C. Miller _et al._ , The Astrophysical Journal Letters 887, L24 (2019).
* Bedaque and Steiner [2015] P. Bedaque and A. W. Steiner, Phys. Rev. Lett. 114, 031103 (2015).
* Kanakis-Pegios _et al._ [2020] A. Kanakis-Pegios, P. S. Koliogiannis, and C. C. Moustakidis, Phys. Rev. C 102, 055801 (2020).
* McLerran and Reddy [2019] L. McLerran and S. Reddy, Phys. Rev. Lett. 122, 122701 (2019).
* Zhao and Lattimer [2020] T. Zhao and J. M. Lattimer, Phys. Rev. D 102, 023021 (2020).
* Brillante and Mishustin [2014] A. Brillante and I. N. Mishustin, EPL 105, 39001 (2014).
* Lau and Yagi [2020] S. Y. Lau and K. Yagi, arXiv e-prints (2020), arXiv:2012.13000 [astro-ph.HE] .
* Jaikumar _et al._ [2008] P. Jaikumar, G. Rupak, and A. W. Steiner, Phys. Rev. D 78, 123007 (2008).
* Sa’d [2008] B. A. Sa’d, arXiv e-prints (2008), arXiv:0806.3359 [astro-ph] .
* Rupak and Jaikumar [2010] G. Rupak and P. Jaikumar, Phys. Rev. C 82, 055806 (2010).
* Sotani _et al._ [2013] H. Sotani, T. Maruyama, and T. Tatsumi, Nucl. Phys. A906, 37 (2013).
* Flores and Lugones [2014] C. V. Flores and G. Lugones, Class. Quant. Grav. 31, 155002 (2014).
* Ranea-Sandoval _et al._ [2018] I. F. Ranea-Sandoval, O. M. Guilera, M. Mariani, and M. G. Orsaria, Journal of Cosmology and Astroparticle Physics 2018 (12), 031.
* Heiselberg [1995] H. Heiselberg, in _Strangeness and Quark Matter_ , edited by G. Vassiliadis, A. D. Panagiotou, S. Kumar, and J. Madsen (1995) pp. 298–307.
* Xia _et al._ [2018] C.-J. Xia, G.-X. Peng, T.-T. Sun, W.-L. Guo, D.-H. Lu, and P. Jaikumar, Phys. Rev. D 98, 034031 (2018).
* Gregorian [2015] P. Gregorian, Masters Thesis, Eberhard Karls Universität Tübingen (2015), URL: https://dspace.library.uu.nl/handle/1874/306758.
* McDermott _et al._ [1983] P. N. McDermott, H. M. van Horn, and J. F. Scholl, Astrophys. J. 268, 837 (1983).
* Chandrasekhar and Ferrari [1991] S. Chandrasekhar and V. Ferrari, Proceedings of the Royal Society of London. Series A: Mathematical and Physical Sciences 432, 247 (1991).
* Lai [1999] D. Lai, Mon. Not. Roy. Astron. Soc. 307, 1001 (1999).
* Drischler _et al._ [2016] C. Drischler, A. Carbone, K. Hebeler, and A. Schwenk, Phys. Rev. C 94, 054307 (2016).
* Lattimer and Prakash [2011] J. M. Lattimer and M. Prakash, From Nuclei to Stars, ed. S. Lee, Singapore: WorldScientific , 275 (2011).
* Gomes _et al._ [2019] R. O. Gomes, P. Char, and S. Schramm, The Astrophysical Journal 877, 139 (2019).
* Glendenning [1992] N. K. Glendenning, Phys. Rev. D 46, 1274 (1992).
* Baym and Chin [1976] G. Baym and S. A. Chin, Physics Letters B 62, 241 (1976).
* Klähn and Fischer [2015] T. Klähn and T. Fischer, Astrophys. J. 810, 134 (2015).
* Tews _et al._ [2018] I. Tews, J. Carlson, S. Gandolfi, and S. Reddy, The Astrophysical Journal 860, 149 (2018).
* Haensel _et al._ [2000] P. Haensel, K. P. Levenfish, and D. G. Yakovlev, Astron. & Astrophys. 357, 1157 (2000), arXiv:astro-ph/0004183 [astro-ph] .
* Abney _et al._ [1996] M. Abney, R. I. Epstein, and A. V. Olinto, Astrophys. J. Lett. 466, L91 (1996).
* Lattimer [2012] J. M. Lattimer, Annual Review of Nuclear and Particle Science 62, 485 (2012).
* Özel and Freire [2016] F. Özel and P. Freire, Ann. Rev. Astron. & Astrophys. 54, 401 (2016).
* Özel _et al._ [2016] F. Özel, D. Psaltis, T. Güver, G. Baym, C. Heinke, and S. Guillot, The Astrophysical Journal 820, 28 (2016).
* De _et al._ [2018] S. De, D. Finstad, J. M. Lattimer, D. A. Brown, E. Berger, and C. M. Biwer, Phys. Rev. Lett. 121, 091102 (2018).
* Capano _et al._ [2020] C. D. Capano, I. Tews, S. M. Brown, B. Margalit, S. De, S. Kumar, D. A. Brown, B. Krishnan, and S. Reddy, Nature Astronomy 4, 625 (2020).
* Demorest _et al._ [2010] P. B. Demorest, T. Pennucci, S. M. Ransom, M. S. E. Roberts, and J. W. T. Hessels, Nature (London) 467, 1081 (2010).
* Fonseca _et al._ [2016] E. Fonseca, T. T. Pennucci, J. A. Ellis, _et al._ , Astrophys. J. 832, 167 (2016).
* Arzoumanian _et al._ [2018] Z. Arzoumanian, A. Brazier, S. Burke-Spolaor, _et al._ , The Astrophysical Journal Supplement Series 235, 37 (2018).
* Cromartie _et al._ [2020] H. T. Cromartie _et al._ , Nature Astronomy 4, 72 (2020).
* Linares _et al._ [2018] M. Linares, T. Shahbaz, and J. Casares, The Astrophysical Journal 859, 54 (2018).
* Flores _et al._ [2017] C. V. Flores, Z. B. Hall, II, and P. Jaikumar, Phys. Rev. C96, 065803 (2017).
* Wu and Shen [2019] X. H. Wu and H. Shen, Phys. Rev. C 99, 065802 (2019).
* Maggiore [2007] M. Maggiore, _Gravitational Waves Vol 1: Theory and Experiments_ (Oxford University Press, 2007).
# Weakly-Supervised Hierarchical Models for Predicting Persuasion Strategies
Jiaao Chen, Diyi Yang
###### Abstract
Modeling persuasive language has the potential to better facilitate our
decision-making processes. Despite its importance, computational modeling of
persuasion is still in its infancy, largely due to the lack of benchmark
datasets that can provide quantitative labels of persuasive strategies to
expedite this line of research. To this end, we introduce a large-scale multi-
domain text corpus for modeling persuasive strategies in good-faith text
requests. Moreover, we design a hierarchical weakly-supervised latent variable
model that can leverage partially labeled data to predict such associated
persuasive strategies for each sentence, where the supervision comes from both
the overall document-level labels and very limited sentence-level labels.
Experimental results showed that our proposed method outperformed existing
semi-supervised baselines significantly. We have publicly released our code at
https://github.com/GT-SALT/Persuasion_Strategy_WVAE.
## Introduction
Persuasive communication has the potential to bring significant positive and
pro-social factors to our society (Hovland, Janis, and Kelly 1971). For
instance, persuasion could largely help fundraising for charities and
philanthropic organizations or convincing substance-abusing family members to
seek professional help. Given the nature of persuasion, it is of great
importance to study how and why persuasion works in language. Modeling
persuasive language is challenging in the field of natural language
understanding since it is difficult to quantify the persuasiveness of requests
and even harder to generalize persuasive strategies learned from one domain to
another. Although researchers from social psychology have offered useful
advice on how to understand persuasion, most of them have been conducted from
a qualitative perspective (Bartels 2006; Popkin and Popkin 1994).
Computational modeling of persuasion is still in its infancy, largely due to
the lack of benchmarks that can provide a unified, representative corpus to
facilitate this line of research, with a few exceptions like (Luu, Tan, and
Smith 2019b; Atkinson, Srinivasan, and Tan 2019; Wang et al. 2019).
Most existing datasets concerning persuasive text are either (1) too small
(e.g., on the order of hundreds of examples) for current machine learning
models (Yang et al. 2019) or (2) not representative for understanding
persuasive strategies, as they cover only one specific domain (Wang et al. 2019).
research and technology maximally useful, both for practical use and
scientific study, a generic and representative corpus is a must, which can
represent persuasive language in a way that is not exclusively tailored to any
one specific dataset or platform. To fill these gaps, building on theoretical
work on persuasion and these prior empirical studies, we first introduce a set
of generic persuasive strategies and a multi-domain corpus to understand
different persuasion strategies that people use in their requests for
different types of persuasion goals in various domains.
However, constructing a large-scale dataset that contains persuasive
strategy labels is often time-consuming and expensive. To mitigate the cost
of labeling fine-grained sentence persuasive strategy, we then introduce a
simple but effective weakly-supervised hierarchical latent variable model that
leverages mainly global or document-level labels (e.g., overall persuasiveness
of the textual requests) alongside with limited sentence annotations to
predict sentence-level persuasion strategies. Our work is inspired by prior
work (Oquab et al. 2015) in computer vision that used the global image-level
labels to classify local objects. Intuitively, our model is hierarchically
semi-supervised, with sentence-level latent variables to reconstruct the input
sentence and all latent variables of sentences are aggregated to predict
document-level persuasiveness. Specifically, at the sentence-level, we utilize
two latent variables representing persuasion strategies and context
separately, in order to disentangle information pertaining to label-oriented
and content-specific properties to do reconstructions; at the document level,
we encode those two latent variables together to predict the overall document
labels in the hope that it could supervise the learning of sentence-level
persuasive strategies. To sum up, our contributions include:
1. A set of generic persuasive strategies based on theoretical and empirical studies, together with a relatively large-scale dataset that includes annotations of persuasive strategies for three domains.
2. A hierarchical weakly-supervised latent variable model to predict persuasive strategies with partially labeled data.
3. Extensive experimental results that test the effectiveness of our models and visualize the importance of our proposed persuasion strategies.
## Related Work
There has been much attention paid to computational persuasive language
understanding (Guo, Zhang, and Singh 2020; Atkinson, Srinivasan, and Tan 2019;
Lukin et al. 2017; Yang and Kraut 2017; Shaikh et al. 2020). For instance, Tan
et al. (2016) looked at how the interaction dynamics such as the language
interplay between opinion holders and other participants predict the
persuasiveness via ChangeMyView subreddit. Althoff, Danescu-Niculescu-Mizil,
and Jurafsky (2014) studied donations in Random Acts of Pizza on Reddit, using
the social relations between recipient and donor plus linguistic factors like
narratives to predict the success of these altruistic requests. Although prior
work offered predictive and insightful models, most research determined their
persuasion labels or variables without reference to a taxonomy of persuasion
techniques. Yang et al. (2019) identified the persuasive strategies employed
in each sentence among textual requests from crowdfunding websites in a semi-
supervised manner. Wang et al. (2019) looked at utterance in persuasive
dialogues and annotated a corpus with different persuasion strategies such as
self-modeling, foot-in-the-door, credibility, etc., together with classifiers
to predict such strategies at a sentence level. These works mainly focused on
a small subset of persuasion strategies and on the identification of such
strategies in a specific context. Inspired by these works, we propose a
generic and representative set of persuasion strategies to capture the various
strategies that people use in their requests.
Strategy | Definition and Examples | Connection with Prior Work
---|---|---
Commitment | The persuaders indicate their intentions to take action or justify their earlier decisions, to convince others that they have made the correct choice. e.g., I just lent to Auntie Fine’s Donut Shop. (Kiva) | _Commitment_ (Yang et al. 2019), _Self-modeling_ (Wang et al. 2019), _Commitment_ (Vargheese et al. 2020b)
Emotion | Making requests full of emotional valence and arousal to influence others. e.g., Guys I’m desperate. (Borrow); I’ve been in the lowest depressive state of my life. (RAOP) | _Ethos_ (Carlile et al. 2018), _Emotion appeal_ (Carlile et al. 2018), _Sentiment_ (Durmus et al. 2018), _Emotion words_ (Luu et al. 2019a), _Emotion_ (Asai et al. 2020)
Politeness | The use of polite language in requests. e.g., Your help is deeply appreciated! (Borrow) | _Politeness_ (Durmus et al. 2018), _Politeness_ (Althoff et al. 2014), _Politeness_ (Nashruddin et al. 2020)
Reciprocity | Responding to a positive action with another positive action; people are more likely to help if they have received help themselves. e.g., I will pay 5% interest no later than May 1, 2016. (Borrow); I’ll pay it forward with my first check. (RAOP) | _Reciprocity_ (Althoff et al. 2014), _Reciprocity_ (Roethke et al. 2020), _Reciprocity_ (Vargheese et al. 2020b)
Scarcity | Emphasizing the urgency or rarity of one’s needs. e.g., Need this loan urgently. (Borrow); I haven’t ate a meal in two days. (RAOP); Loan expiring today and still needs $650. (Kiva) | _Scarcity_ (Vargheese et al. 2020b), _Scarcity_ (Yang et al. 2019), _Scarcity_ (Lawson et al. 2020)
Credibility | The use of credentials to establish credibility and earn others’ trust. e.g., Can provide any documentation needed. (Borrow); She has already repaid 2 previous loans. (Kiva) | _Credibility appeal_ (Wang et al. 2019), _Social Proof_ (Roethke et al. 2020), _Social Proof_ (Vargheese et al. 2020a)
Evidence | Providing concrete facts or evidence for the narrative or request. e.g., My insurance was canceled today. (Borrow); There is a Pizza Hut and a Dominos near me. (RAOP); $225 to go and 1 A+ member on the loan. (Kiva) | _Evidentiality_ (Althoff et al. 2014), _Evidence_ (Carlile et al. 2018), _Evidence_ (Stab and Gurevych 2014), _Concreteness_ (Yang et al. 2019), _Evidence_ (Durmus et al. 2018)
Impact | Emphasizing the importance or impact of the request. e.g., I will use this loan to pay my rent. (Borrow); This loan will help him with his business. (Kiva) | _Logos_ (Carlile et al. 2018), _Logic appeal_ (Wang et al. 2019), _Impact_ (Yang et al. 2019)
Table 1: The generic taxonomy of persuasive strategies, their definitions,
example sentences, and connections with prior work.
Recently many semi-supervised learning approaches have been developed for
natural language processing, including adversarial training (Miyato, Dai, and
Goodfellow 2016), variational auto-encoders (Kingma et al. 2014; Yang et al.
2017; Gururangan et al. 2019), consistency training (Xie et al. 2020; Chen,
Wu, and Yang 2020; Chen, Yang, and Yang 2020) and various pre-training
techniques (Kiros et al. 2015; Dai and Le 2015). The contextual word
representations (Peters et al. 2018; Devlin et al. 2019) have emerged as
powerful mechanisms to make use of large scale unlabeled data. Most of these
prior works focus on semi-supervised learning, in which labels are partially
available and the supervision for labeled and unlabeled data is at the
sentence level. In contrast, our work is hierarchically weakly supervised, and
we aim to predict sentence-level labels rather than document-level
persuasiveness. To the best of our knowledge, weakly supervised learning has
been explored much less in natural language processing, except for a few
recent works (Lee, Chang, and Toutanova 2019; Min et al. 2019) in question
answering. There are a few exceptions: Yang et al. (2019) utilized a small
amount of hand-labeled sentences together with a large number of requests
automatically labeled at the document level for text classification. Pryzant,
Chung, and Jurafsky (2017) proposed an adversarial objective to learn text
features highly predictive of advertisement outcomes. Our work has an
analogous task in computer vision, weakly supervised image segmentation
(Papandreou et al. 2015; Pinheiro and Collobert 2015), which uses image labels
or bounding-box information to predict pixel-level labels. Similar to image
segmentation, obtaining global (document/image) level labels for persuasion
understanding is much cheaper than obtaining local (sentence/pixel) level
labels. Different from multi-task learning, where models have full supervision
in each task, our proposed model is fully supervised at the document level
while partially supervised at the sentence level.
## Persuasion Taxonomy and Corpus
Previous work on modeling persuasion in language either focuses on a small
subset of strategies or looks at one specific platform, and is therefore hard
to adapt to other contexts. To fill this gap, we propose a set of generic persuasive strategies
based on widely used persuasion models from social psychology. Specifically,
we leverage Petty and Cacioppo’s elaboration likelihood model (1986) and
Chaiken’s social information processing model (Chaiken 1980), which suggest
that people process information in two ways: either performing a relatively
deep analysis of the quality of an argument or relying on some simple
superficial cues to make decisions (Cialdini 2001). Guided by these psychology
insights, we examine the aforementioned computational studies on persuasion
and argumentation (Wang et al. 2019; Yang et al. 2019; Durmus, Cardie, and
Durmus 2018; Vargheese, Collinson, and Masthoff 2020a; Carlile et al. 2018),
and further synthesize these theoretical and practical tactics into eight
unified categories: _Commitment, Emotion, Politeness, Reciprocity, Scarcity_
that allow people to use simple inferential rules to make decisions, and
_Credibility, Evidence, Impact_ that require people to evaluate the
information based on its merits, logic, and importance. As shown in Table 1,
our taxonomy distilled, extended, and unified existing persuasion strategies.
Different from prior work that introduced domain-specific persuasion tactics
with limited generalizability, our generic taxonomy can be easily plugged into
different text domains, making large-scale understanding of persuasion in
language across multiple contexts comparable and replicable.
### Dataset Collection & Statistics
We collected our data from three different domains related to persuasion: (1)
Kiva (www.kiva.org), a peer-to-peer philanthropic lending platform where
persuading others to make loans is key to success (no interest); (2) the
subreddit r/Random_Acts_Of_Pizza (www.reddit.com/r/Random_Acts_Of_Pizza)
(RAOP), where members write requests to ask for free pizzas (social purpose,
no direct money transaction); and (3) the subreddit r/borrow
(www.reddit.com/r/borrow) (Borrow), which focuses on posts written to borrow
money from others (with interest). After removing personal and sensitive
information, we obtained 40,466 posts from Kiva, 18,026 posts from RAOP, and
49,855 posts from Borrow.
We sampled 5% of documents for annotation, with document length ranging from 1
to 6 sentences for Kiva, 1 to 8 for RAOP, and 1 to 7 for Borrow, as documents
with at most 6 sentences account for 89% of posts in Kiva, 86% of posts in
RAOP have no more than 8 sentences, and 85% of posts in Borrow have at most 7
sentences. We recruited four research assistants to label persuasion
strategies for each sentence in the sampled documents. Definitions and
examples of the different persuasion strategies were provided, together with a
training session in which we asked annotators to annotate a number of example
sentences and walked them through any disagreements. To assess the reliability
of the annotated labels, we then asked them to annotate the same 100 documents
with 400 sentences and computed Cohen’s Kappa coefficient to measure
inter-rater reliability. We obtained an average score of 0.538 on Kiva, 0.613
on RAOP, and 0.623 on Borrow, which indicates moderate agreement (McHugh
2012). Annotators then annotated the remaining 1,200 documents independently.
The dataset statistics are shown in Table 2, and the sentence-level label
distribution in each dataset is shown in Figure 1. We merged rare strategies
into an _Other_ category: specifically, Commitment, Scarcity, and Emotion in
Borrow; Credibility and Commitment in RAOP; and Reciprocity and Emotion in
Kiva. We used whether the requester received pizzas or loans from the
subreddits as the document-level labels for RAOP and Borrow; 30.1% of
requesters successfully got pizzas on RAOP and 48.5% received loans on Borrow.
In Kiva, we used the number of people who lent loans as the document-level
label, bucketed into $[1,2)$, $[2,3)$, $[3,4)$, and $[4,\infty)$, accounting
for 44.1%, 20.3%, 12.4% and 33.2% of all documents.
Figure 1: The distribution of each persuasion strategy in the three annotated datasets.
 | # Docs | # Sents w/ label | # Sents w/o label | Doc Labels | Sent Labels
---|---|---|---|---|---
Borrow | 49,855 | 5,800 | 164,293 | Success or not | Evidence, Impact, Politeness, Reciprocity, Credibility
RAOP | 18,026 | 3,600 | 77,517 | Success or not | Evidence, Impact, Politeness, Reciprocity, Scarcity, Emotion
Kiva | 40,466 | 6,300 | 135,330 | # People loaned | Evidence, Impact, Politeness, Credibility, Scarcity, Commitment
Table 2: Dataset statistics. For strategies that are rare, we merged them into
an _Other_ category.
## Method
To alleviate the dependencies on labeled data, we propose a hierarchical
weakly-supervised latent variable model to leverage partially labeled data to
predict sentence-level persuasive strategies. Specifically, we introduce a
sentence-level latent variable model to reconstruct the input sentence and
predict the sentence-level persuasion labels spontaneously, supervised by the
global or document-level labels (e.g., overall persuasiveness of the
documents). The overall architecture of our method is shown in Figure 2.
### Weakly Supervised Latent Model
We are given a corpus of $N$ documents
$\mathbf{D}=\{\mathbf{d}_{i}\}_{i=1}^{N}$, where each document
$\mathbf{d}_{i}=\{\mathbf{s}_{i}^{j}\}_{j=1}^{M}$ consists of $M$ sentences.
For each document $\mathbf{d}_{i}\in\mathbf{D}$, its document-level label is
denoted as $\mathbf{t}_{i}$, representing the overall persuasiveness of the
document. We divide the corpus into two parts,
$\mathbf{D}=\mathbf{D}_{L}\cup\mathbf{D}_{U}$, where $\mathbf{D}_{L}$
($\mathbf{D}_{U}$) denotes the set of documents with (without) sentence
labels. For each document $\mathbf{d}_{i}\in\mathbf{D}_{L}$, the corresponding
sentence labels are $\{\mathbf{y}_{i}^{j}\}_{j=1}^{M}$, where
$\mathbf{y}_{i}^{j}\in\mathbf{C}=\{c_{k}\}_{k=1}^{K}$ represents the
persuasive strategy of the given sentence. In many practical scenarios,
obtaining document-level labels $\{\mathbf{t}_{i}\}$ is much easier and
cheaper than obtaining the fine-grained sentence labels
$\{\mathbf{y}_{i}^{j}\}$, since the number of sentences $M$ in a document
$\mathbf{d}_{i}$ can be very large. Similarly, in our setting, the number of
documents with fully labeled sentences is very limited, i.e.,
$|\mathbf{D}_{L}|\ll|\mathbf{D}|$. To this end, we introduce a
novel hierarchical weakly supervised latent variable model that can leverage
both the document-level labels and the small amount of sentence-level labels
to discover the sentence persuasive strategies. Our model is weakly supervised
since we will utilize document labels to facilitate the learning of sentence
persuasive strategies. The intuition is that global document labels of
persuasiveness carry useful information about local sentence persuasive
strategies and can thus provide supervision in an indirect manner.
Figure 2: Overall architecture. At sentence-level, the input sentences are
first encoded into two latent variables: $y$ representing strategies and $z$
containing context information; the decoder reconstructs the input sentences.
At document-level, a predictor network aggregates the latent variables within
the input document to predict document-level labels. For labeled documents,
labels are directly used for the reconstruction and prediction; for unlabeled
ones, latent variables $y$ are used.
#### Sentence Level VAE
Following prior work on semi-supervised variational autoencoders (VAEs)
(Kingma and Welling 2013), for an input sentence $\mathbf{s}$, we assume a
graphical model whose latent representation contains a continuous vector
$\mathbf{z}$, denoting the content of a sentence, and a discrete persuasive
strategy label $\mathbf{y}$:
$\displaystyle
p(\mathbf{s},\mathbf{z},\mathbf{y})=p(\mathbf{s}|\mathbf{z},\mathbf{y})p(\mathbf{z})p(\mathbf{y})$
(1)
To learn the semi-supervised VAE, we optimize the variational lower bound as
our learning objective. For an unlabeled sentence, we maximize the evidence
lower bound:
$\displaystyle\log p(\mathbf{s})$ $\displaystyle\geq\mathbb{E}_{\mathbf{y}\sim
q(\mathbf{y}|\mathbf{s})}[\mathbb{E}_{\mathbf{z}\sim
q(\mathbf{z}|\mathbf{s},\mathbf{y})}[\log
p(\mathbf{s}|\mathbf{z},\mathbf{y})]$ (2)
$\displaystyle\quad-\text{KL}[q(\mathbf{z}|\mathbf{s},\mathbf{y})||p(\mathbf{z})]]-\text{KL}[q(\mathbf{y}|\mathbf{s})||p(\mathbf{y})]$
where $p(\mathbf{s}|\mathbf{y},\mathbf{z})$ is a decoder (generative network)
to reconstruct input sentences and $q(\mathbf{y}|\mathbf{s})$ is an encoder
(an inference or a predictor network) to predict sentence-level labels.
For labeled sentences, the variational lower bound is:
$\displaystyle\log p(\mathbf{s},\mathbf{y})$
$\displaystyle\geq\mathbb{E}_{\mathbf{z}\sim
q(\mathbf{z}|\mathbf{s},\mathbf{y})}[\log
p(\mathbf{s}|\mathbf{z},\mathbf{y})]$ (3)
$\displaystyle\quad-\text{KL}[q(\mathbf{z}|\mathbf{s},\mathbf{y})||p(\mathbf{z})]+\text{constant}$
In addition, for sentences with labels, we also update the inference network
$q(\mathbf{y}|\mathbf{s})$ via minimizing the cross entropy loss
$\mathbb{E}_{(\mathbf{s},\mathbf{y})}[-\log q(\mathbf{y}|\mathbf{s})]$
directly.
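To make the objective concrete, below is a minimal PyTorch sketch of the per-sentence losses in Eqs. (2)-(3). The tensor shapes, helper names, and single-sample approximation of the expectations are our own illustrative assumptions, not the paper's released code.

```python
import torch
import torch.nn.functional as F

def kl_gaussian(mu, logvar):
    # Closed-form KL[q(z|s,y) || N(0, I)]
    return -0.5 * torch.sum(1.0 + logvar - mu.pow(2) - logvar.exp())

def unlabeled_sentence_loss(recon_logprob, q_y, log_prior_y, mu, logvar):
    """Negative ELBO of Eq. (2) for one unlabeled sentence.
    recon_logprob: log p(s|z,y) for a single reparameterized (z, y) sample
    q_y: q(y|s) probabilities over K strategies; log_prior_y: log p(y)."""
    kl_y = torch.sum(q_y * (torch.log(q_y + 1e-8) - log_prior_y))
    return -(recon_logprob - kl_gaussian(mu, logvar) - kl_y)

def labeled_sentence_loss(recon_logprob, mu, logvar, q_y_logits, y_true):
    """Negative bound of Eq. (3) plus the extra cross-entropy term that
    trains the inference network q(y|s) on labeled sentences."""
    neg_elbo = -(recon_logprob - kl_gaussian(mu, logvar))
    ce = F.cross_entropy(q_y_logits.unsqueeze(0), y_true.view(1))
    return neg_elbo + ce
```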
#### Document Level VAE
Different from sentence-level VAEs, we model the input document $\mathbf{d}$
with sentences $\\{\mathbf{s}^{j}\\}_{j=1}^{M}=\mathbf{s}^{1:M}$ as a whole
and assume that the document-level label $\mathbf{t}$ depends on the sentence-
level latent variables. Thus we obtain the document-level VAE model as:
$\displaystyle
p(\mathbf{d},\mathbf{t},\mathbf{y}^{1:M},\mathbf{z}^{1:M})=p(\mathbf{d},\mathbf{t}|\mathbf{y}^{1:M},\mathbf{z}^{1:M})\prod_{j=1}^{M}p(\mathbf{y}^{j})\prod_{j=1}^{M}p(\mathbf{z}^{j})$
(4)
where $p(\mathbf{d},\mathbf{t}|\mathbf{y}^{1:M},\mathbf{z}^{1:M})$ is the
generative model for all sentences in the document $\mathbf{d}$ and the
document label $\mathbf{t}$.
For simplicity, we further assume conditional independence between the
sentences $\mathbf{s}^{1:M}$ in $\mathbf{d}$ and its label $\mathbf{t}$ given
the latent variables:
$p(\mathbf{d},\mathbf{t}|\mathbf{y}^{1:M},\mathbf{z}^{1:M})=p(\mathbf{t}|\mathbf{y}^{1:M},\mathbf{z}^{1:M})\prod_{j=1}^{M}p(\mathbf{s}^{j}|\mathbf{y}^{j},\mathbf{z}^{j}).$
Since the number of possible sentence-label combinations is huge, directly
computing the marginal probability becomes intractable. Thus we optimize the
evidence lower bound. By using mean field approximation (Jain, Koehler, and
Mossel 2018), we factorize the posterior distribution as:
$q(\mathbf{z}^{1:M},\mathbf{y}^{1:M}|\mathbf{d},\mathbf{t})=\prod_{j=1}^{M}q(\mathbf{z}^{j}|\mathbf{y}^{j},\mathbf{s}^{j},\mathbf{t})\prod_{j=1}^{M}q(\mathbf{y}^{j}|\mathbf{s}^{j},\mathbf{t})$.
That is, the posterior distribution of latent variables $\mathbf{y}^{j}$ and
$\mathbf{z}^{j}$ only depends on the sentence $\mathbf{s}^{j}$ and the
document label $\mathbf{t}$. For documents without sentence labels, the
evidence lower bound is:
$\displaystyle\log p(\mathbf{d},\mathbf{t})\geq\mathbb{E}_{\mathbf{y}\sim
q(\mathbf{y}|\mathbf{s},\mathbf{t})}[\mathbb{E}_{\mathbf{z}\sim
q(\mathbf{z}|\mathbf{s},\mathbf{y},\mathbf{t})}[\log
p(\mathbf{t}|\mathbf{y},\mathbf{z})$ (5)
$\displaystyle\quad+\sum_{j=1}^{M}\log
p(\mathbf{s}^{j}|\mathbf{y}^{j},\mathbf{z}^{j})]-\sum_{j=1}^{M}\text{KL}[q(\mathbf{z}^{j}|\mathbf{s}^{j},\mathbf{y}^{j},\mathbf{t})||p(\mathbf{z}^{j})]]$
$\displaystyle\quad-\sum_{j=1}^{M}\text{KL}[q(\mathbf{y}^{j}|\mathbf{s}^{j},\mathbf{t})||p(\mathbf{y}^{j})]=U(\mathbf{d},\mathbf{t})$
For documents with sentence labels, the variational lower bound can be adapted
from the above as:
$\displaystyle\log
p(\mathbf{d},\mathbf{t},\mathbf{y})\geq\mathbb{E}_{\mathbf{z}\sim
q(\mathbf{z}|\mathbf{s},\mathbf{y},\mathbf{t})}[\log
p(\mathbf{t}|\mathbf{y},\mathbf{z})$ (6)
$\displaystyle\quad+\sum_{j=1}^{M}\log
p(\mathbf{s}^{j}|\mathbf{y}^{j},\mathbf{z}^{j})]-\sum_{j=1}^{M}\text{KL}[q(\mathbf{z}^{j}|\mathbf{s}^{j},\mathbf{y}^{j},\mathbf{t})||p(\mathbf{z}^{j})]$
$\displaystyle\quad=L(\mathbf{d},\mathbf{t},\mathbf{y})+\text{constant}$
Combining the losses for documents with and without sentence labels, we obtain
the overall loss function:
$\displaystyle L=$
$\displaystyle\quad\mathbb{E}_{\mathbf{d}\in\mathbf{D}_{U}}U(\mathbf{d},\mathbf{t})+\mathbb{E}_{\mathbf{d}\in\mathbf{D}_{L}}L(\mathbf{d},\mathbf{t},\mathbf{y}^{1:M})$
(7)
$\displaystyle\quad+\alpha\cdot\mathbb{E}_{\mathbf{d}\in\mathbf{D}_{L}}\sum_{j=1}^{M}\log
q(\mathbf{y}^{j}|\mathbf{s}^{j},\mathbf{t})$
Here, $\mathbb{E}_{\mathbf{d}\in\mathbf{D}_{L}}\sum_{j=1}^{M}\log
q(\mathbf{y}^{j}|\mathbf{s}^{j},\mathbf{t})$ represents the discriminative
loss for sentences with labels, and $\alpha$ controls the trade-off between
the generative and discriminative losses (the influence of $\alpha$ is
discussed in the Appendix).
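As a sketch of how Eq. (7) combines the two bounds, assume the per-document negative bounds and cross-entropy terms have already been computed as above; the batching scheme here is an illustrative assumption.

```python
import torch

def overall_loss(neg_U_terms, neg_L_terms, disc_terms, alpha=5.0):
    """Combine Eq. (7): negative bounds for unlabeled documents (U) and
    labeled documents (L), plus the discriminative term weighted by alpha.
    Each argument is a list of scalar tensors, one per document."""
    loss = torch.stack(neg_U_terms).mean() + torch.stack(neg_L_terms).mean()
    return loss + alpha * torch.stack(disc_terms).mean()
```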
Compared to the sentence-level VAE (S-VAE), which only learns sentence
representations via the generative network $p(\mathbf{s}|\mathbf{y,z})$, the
document-level VAE exploits contextual relations between sentences by
aggregating multiple sentences in a document and predicting document-level
labels via the predictor network
$p(\mathbf{t}|\mathbf{y}^{1:M},\mathbf{z}^{1:M})$. The document-level weakly
supervised VAE (WS-VAE) incorporates both direct sentence-level supervision
and indirect document-level supervision to better use unlabeled sentences,
which further helps persuasion strategy classification.
Note that our hierarchical weakly-supervised latent variable model presents a
generic framework to utilize dependencies between sentence-level and document-
level labels, and can be easily adapted to other NLP tasks where document-
level supervision is rich and sentence-level supervision is scarce.
### Training Details
In practice, we parameterize the inference network
$q(\mathbf{y}|\mathbf{s},\mathbf{t})$ and
$q(\mathbf{z}|\mathbf{y},\mathbf{s},\mathbf{t})$ using a LSTM or a BERT which
encodes the sentences (and document label) to get the posterior distribution.
We used another LSTM as the decoder to model the the generative network
$p(\mathbf{s}|\mathbf{z},\mathbf{y})$. At the document level, each sentence’s
content vector and strategy vector is fed as input to a LSTM to model the
predictor network $p(\mathbf{t}|\mathbf{z}^{1:M},\mathbf{y}^{1:M})$.
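A minimal sketch of such a predictor network is given below; the layer sizes and the use of the final hidden state are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DocPredictor(nn.Module):
    """Predictor network p(t | z^{1:M}, y^{1:M}): per-sentence content
    vectors z_j and (soft) strategy vectors y_j are concatenated and run
    through an LSTM; the last hidden state predicts the document label."""
    def __init__(self, z_dim=64, num_strategies=8, hidden=128, num_labels=2):
        super().__init__()
        self.lstm = nn.LSTM(z_dim + num_strategies, hidden, batch_first=True)
        self.out = nn.Linear(hidden, num_labels)

    def forward(self, z, y):
        # z: (batch, M, z_dim), y: (batch, M, num_strategies)
        _, (h, _) = self.lstm(torch.cat([z, y], dim=-1))
        return self.out(h[-1])  # logits for p(t | z^{1:M}, y^{1:M})
```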
Reparametrization: It is challenging to back-propagate through random
variables as it involves non-differentiable sampling procedures. For latent
variable $\mathbf{z}$, we utilized the reparametrization technique proposed by
Kingma and Welling (2013) to re-parametrize the Gaussian random variable
$\mathbf{z}$ as $\mathbf{\mu}+\mathbf{\sigma}\epsilon$, where $\epsilon\sim
N(0,I)$, $\mu$ and $\sigma$ are deterministic and differentiable. For discrete
latent variable $\mathbf{y}$, we adopted Gumbel softmax (Jang, Gu, and Poole
2017) to approximate it continuously:
$y_{k}=\frac{\exp\left(\left(\log\left(\pi_{k}\right)+g_{k}\right)/\tau\right)}{\sum_{k^{\prime}=1}^{K}\exp\left(\left(\log\left(\pi_{k^{\prime}}\right)+g_{k^{\prime}}\right)/\tau\right)}$
where $\pi_{1:K}$ are the probabilities of a categorical distribution, $g_{k}$
follows Gumbel$(0,1)$ and $\tau$ is the temperature. The approximation is
accurate when $\tau\to 0$ and smooth when $\tau>0$. We gradually decrease
$\tau$ in the training process.
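Both reparameterizations can be written in a few lines; this sketch follows the formulas above, with the numerical stabilizers as our own assumptions.

```python
import torch

def reparameterize_gaussian(mu, logvar):
    # z = mu + sigma * eps, eps ~ N(0, I)  (Kingma and Welling 2013)
    return mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)

def gumbel_softmax(log_pi, tau):
    # Continuous relaxation of the categorical y (Jang, Gu, and Poole 2017);
    # tau is annealed toward 0 during training.
    u = torch.rand_like(log_pi)
    g = -torch.log(-torch.log(u + 1e-20) + 1e-20)  # Gumbel(0, 1) samples
    return torch.softmax((log_pi + g) / tau, dim=-1)
```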
Prior Estimation: Classical variational models usually assume simple priors
such as uniform distributions. We performed a Gaussian kernel density
estimation over training data to estimate the prior for $\mathbf{y}$, and
assumed the latent variable $z$ follows a standard Gaussian distribution.
## Experiment and Result
Dataset | Train | Dev | Test
---|---|---|---
Borrow | 900 | 400 | 400
RAOP | 300 | 200 | 300
Kiva | 1000 | 400 | 400
Table 3: Split statistics about train, dev, and test set.

Dataset | Model | F1 (20 docs) | F1 (50 docs) | F1 (100 docs) | F1 (Max) | Doc-Level Accuracy
---|---|---|---|---|---|---
Kiva | LSTM | $26.1\pm 0.8$ | $37.6\pm 1.0$ | $43.3\pm 1.0$ | $54.6\pm 2.0$ | -
Kiva | SH-Net | $29.1\pm 0.4$ | $38.8\pm 0.9$ | $43.4\pm 0.9$ | $54.8\pm 0.9$ | $34.8\pm 1.0$
Kiva | BERT | $28.6\pm 4.0$ | $38.5\pm 0.7$ | $44.6\pm 3.0$ | $57.0\pm 1.0$ | -
Kiva | S-VAE | $30.9\pm 1.0$ | $40.3\pm 0.7$ | $43.6\pm 0.9$ | $55.7\pm 1.0$ | -
Kiva | WS-VAE | $31.5\pm 0.8$ | $40.9\pm 1.0$ | $44.0\pm 1.0$ | $55.4\pm 0.8$ | $35.5\pm 1.0$
Kiva | WS-VAE-BERT | $34.2\pm 0.2$ | $43.0\pm 0.9$ | $45.2\pm 0.9$ | $59.1\pm 0.9$ | $36.7\pm 2.0$
RAOP | LSTM | $28.5\pm 1.0$ | $37.7\pm 1.0$ | $42.5\pm 1.0$ | $47.8\pm 0.9$ | -
RAOP | SH-Net | $30.0\pm 1.0$ | $39.1\pm 1.0$ | $42.8\pm 1.0$ | $48.1\pm 1.0$ | $66.6\pm 1.0$
RAOP | BERT | $30.6\pm 2.0$ | $39.5\pm 2.0$ | $43.4\pm 2.0$ | $54.0\pm 1.0$ | -
RAOP | S-VAE | $31.7\pm 0.7$ | $40.1\pm 1.0$ | $43.2\pm 1.0$ | $48.8\pm 2.0$ | -
RAOP | WS-VAE | $32.1\pm 0.9$ | $39.9\pm 0.9$ | $43.8\pm 0.9$ | $49.1\pm 2.0$ | $65.3\pm 1.0$
RAOP | WS-VAE-BERT | $41.0\pm 0.8$ | $45.6\pm 2.0$ | $51.2\pm 0.8$ | $58.3\pm 2.0$ | $67.8\pm 1.0$
Borrow | LSTM | $53.4\pm 0.9$ | $62.6\pm 0.9$ | $68.1\pm 0.8$ | $74.4\pm 2.0$ | -
Borrow | SH-Net | $53.7\pm 1.0$ | $63.2\pm 1.0$ | $68.0\pm 0.7$ | $74.5\pm 1.0$ | $56.5\pm 2.0$
Borrow | BERT | $56.7\pm 1.0$ | $64.1\pm 3.0$ | $68.5\pm 1.0$ | $74.6\pm 0.4$ | -
Borrow | S-VAE | $59.2\pm 0.7$ | $65.3\pm 0.4$ | $68.8\pm 0.6$ | $74.6\pm 0.5$ | -
Borrow | WS-VAE | $59.5\pm 1.0$ | $66.0\pm 0.7$ | $68.9\pm 1.0$ | $74.7\pm 0.3$ | $56.5\pm 0.9$
Borrow | WS-VAE-BERT | $62.6\pm 2.0$ | $68.5\pm 1.0$ | $70.4\pm 1.0$ | $75.9\pm 0.7$ | $57.5\pm 0.8$
Table 4: Sentence-level persuasion strategy prediction performance (Macro F1
Score) and document-level prediction performance (Accuracy). Models are
trained with 20 labeled documents (81 sentences in Kiva, 99 in RAOP, 59 in
Borrow), 50 (200 in Kiva, 236 in RAOP, 168 in Borrow), 100 (355 in Kiva, 480
in RAOP, 356 in Borrow), and the full training set (3,512 sentences in Kiva,
1,382 in RAOP, 3,136 in Borrow). Results are averaged over 5 runs and
reported with 95% confidence intervals.
Experiment Setup: We randomly sampled from the labeled documents to form the
maximum labeled training set, the development set, and the test set used to
train and evaluate models, and we also utilized all the unlabeled documents
as training data.
The data splits are shown in Table 3. We utilized NLTK (Bird, Klein, and Loper
2009) to split the documents into sentences and tokenize each sentence with
BERT-base uncased tokenizer (Devlin et al. 2019). We added a special CLS token
at the beginning of each sentence and a special SEP token at the end of each
sentence. We used BERT (Devlin et al. 2019) as the discriminative network,
LSTM as the generative network and predictor network. The inference network is
a 2-layer MLP. We trained our model via AdamW (Loshchilov and Hutter 2017) and
tuned hyper-parameters on the development set.
### Baselines and Model Settings

Parameter details are stated in the Appendix.
We compared our model on per-sentence strategy classification against several
baselines: (1) LSTM (Hochreiter and Schmidhuber 1997): an LSTM is used as the
sentence encoder, and the last layer's hidden states serve as sentence
representations for classifying persuasion strategies; only labeled sentences
are used. (2) SH-Net (Yang et al. 2019): SH-Net uses a hierarchical LSTM to
classify strategies with supervision from both sentence-level and
document-level labels, so both labeled and unlabeled documents are used. We
followed their implementation and modified the document-level inputs to be
concatenations of the latent variables $y$ and $z$. (3) BERT (Devlin et al.
2019): we used the pre-trained BERT-base uncased model and fine-tuned it for
persuasion strategy classification; BERT only uses labeled sentences. (4)
S-VAE: a sentence-level VAE that applies variational autoencoders to
classification by reconstructing the input sentences while learning to
classify them; both labeled and unlabeled sentences are used.
Figure 3: Average attention weight learned in the predictor network for
different strategies in three datasets.
WS-VAE denotes our proposed weakly supervised latent variable model, which
makes use of sentence-level and document-level labels at the same time while
also reconstructing the input documents. We further show that WS-VAE is
orthogonal to pre-trained models like BERT by using pre-trained BERT as the
discriminative network to encode the input sentences and a 2-layer LSTM as
the generative and predictor networks; we denote this variant WS-VAE-BERT, a
special case of WS-VAE built on pre-trained transformer models.
### Results
##### Varying the Number of Labeled Documents
We tested the models with varying amounts of labeled documents, from 20 to
the maximum number of labeled training documents, and summarize the results
in Table 4. The simple LSTM classifier showed the worst performance across
the three datasets, especially when few labeled documents were given. After
adding document-level supervision as well as unlabeled documents, SH-Net
achieved better Macro F1 scores and lower variance, showing the impact of
document-level supervision on sentence-level learning. BERT fine-tuned on the
persuasion strategy classification task outperformed LSTM and SH-Net with
limited labeled data in most cases.
By reconstructing each input sentence from its persuasion strategy and
context latent variables, S-VAE showed a significant performance boost
compared to using only indirect supervision from document-level labels. This
indicates that direct supervision from the input sentence itself helps more
than hierarchical supervision from the document level alone. By combining
sentence reconstruction with document-level predictions in the hierarchical
latent variable model, WS-VAE outperformed S-VAE. When combined with
state-of-the-art pre-trained models like BERT, our WS-VAE-BERT achieved the
best performance on all three datasets, suggesting that the improvement comes
not only from large pre-trained models but also from our hierarchical latent
variable model.
We also report document-level prediction accuracy for models trained on all
labeled documents. Even though document-level prediction is not our goal, we
observed a consistent trend: higher document-level performance correlated
with higher sentence-level accuracy, suggesting that the global
document-level supervision helps sentence-level predictions.
Figure 4: Attention weights for content vectors and strategy vectors when
predicting document-level labels in the predictor network.

Figure 5: Cosine similarities between different persuasive strategies
(Credibility, Reciprocity, Evidence, Commitment, Scarcity, Emotion, Impact
and Politeness).
##### Importance of Strategies vs Content
To better understand how persuasive strategies and text content jointly
affect the success of textual requests, we added an attention layer over the
content latent variable $z$ and strategy latent variable $y$ in the predictor
network of WS-VAE-BERT to visualize their relative importance, as shown in
Figure 4. In all three domains, content vectors tend to receive larger
weights than strategy vectors. This suggests that when people write requests
to convince others to take action, content is relatively more important than
persuasion strategy; nevertheless, leveraging proper persuasive strategies
can further boost the likelihood of a request being fulfilled.
##### Attention Weight
We further calculated the average attention weights learned in the predictor
network (attending over the strategy latent variable $y$ and content latent
variable $z$ to predict document-level labels) for different strategies in
the three datasets, shown in Figure 3. We observed that _Reciprocity_,
_Commitment_, _Scarcity_ and _Impact_ play more important roles, while
_Credibility_, _Evidence_, _Emotion_ and _Politeness_ receive lower average
attention weights, indicating that simple superficial strategies may be more
influential on overall persuasiveness in online forums than strategies that
require deeper analysis.
##### Relation between Persuasive Strategies
To explore possible relations among different persuasive strategies, we took
the embedding of each persuasive strategy from the predictor network and
visualized their pairwise similarities in Figure 5. All similarity scores
were below 0.5, showing that the strategies in our taxonomy are largely
orthogonal to each other and capture different aspects of persuasive
language. However, some strategies show relatively higher relations; for
example, _Scarcity_ correlates strongly with _Evidence_ on RAOP and Kiva,
indicating that people may often use them together in their requests.
## Conclusion and Future Work
This work introduced a set of generic persuasive strategies based on theories
on persuasion, together with a large-scale multi-domain text corpus annotated
with their associated persuasion strategies. To further utilize both labeled
and unlabeled data in real-world scenarios, we designed a hierarchical weakly-
supervised latent variable model to utilize document-level persuasiveness
supervision to guide the learning of specific sentence-level persuasive
strategies. Experimental results showed that our proposed method outperformed
existing semi-supervised baselines significantly on three datasets. Note that
we assumed the document-level persuasiveness label depends only on
sentence-level information; however, there are other factors closely related
to overall persuasiveness, such as requesters' or lenders' backgrounds and
their prior interactions (Valeiras-Jurado 2020; Longpre, Durmus, and Cardie
2019). Future work can investigate how these audience factors further affect
the predictions of both sentence- and document-level labels. As an initial
effort, our latent variable methods disentangle
persuasion strategies and the content, and highlight the relations between
persuasion strategies and the overall persuasiveness, which can be further
leveraged by real-world applications to make textual requests more effective
via different choices of persuasion strategies.
## Acknowledgment
We would like to thank Jintong Jiang, Leyuan Pan, Yuwei Wu, Zichao Yang, the
anonymous reviewers, and the members of Georgia Tech SALT group for their
feedback. We acknowledge the support of NVIDIA Corporation with the donation
of GPU used for this research. DY is supported in part by a grant from Google.
## References
* Althoff, Danescu-Niculescu-Mizil, and Jurafsky (2014) Althoff, T.; Danescu-Niculescu-Mizil, C.; and Jurafsky, D. 2014. How to Ask for a Favor: A Case Study on the Success of Altruistic Requests. In _Proceedings of ICWSM_.
* Asai et al. (2020) Asai, S.; Yoshino, K.; Shinagawa, S.; Sakti, S.; and Nakamura, S. 2020. Emotional Speech Corpus for Persuasive Dialogue System. In _Proceedings of The 12th Language Resources and Evaluation Conference_ , 491–497. Marseille, France: European Language Resources Association. ISBN 979-10-95546-34-4. URL https://www.aclweb.org/anthology/2020.lrec-1.62.
* Atkinson, Srinivasan, and Tan (2019) Atkinson, D.; Srinivasan, K. B.; and Tan, C. 2019. What Gets Echoed? Understanding the “Pointers” in Explanations of Persuasive Arguments. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , 2904–2914.
* Bartels (2006) Bartels, L. M. 2006. Priming and persuasion in presidential campaigns. _Capturing campaign effects_ 78–112.
* Bird, Klein, and Loper (2009) Bird, S.; Klein, E.; and Loper, E. 2009. _Natural Language Processing with Python_. O’Reilly Media, Inc., 1st edition. ISBN 0596516495, 9780596516499.
* Carlile et al. (2018) Carlile, W.; Gurrapadi, N.; Ke, Z.; and Ng, V. 2018. Give Me More Feedback: Annotating Argument Persuasiveness and Related Attributes in Student Essays. In _Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , 621–631. Melbourne, Australia: Association for Computational Linguistics. doi:10.18653/v1/P18-1058. URL https://www.aclweb.org/anthology/P18-1058.
* Chaiken (1980) Chaiken, S. 1980. Heuristic versus systematic information processing and the use of source versus message cues in persuasion. _Journal of personality and social psychology_ 39(5): 752.
* Chen, Wu, and Yang (2020) Chen, J.; Wu, Y.; and Yang, D. 2020. Semi-supervised Models via Data Augmentation for Classifying Interactive Affective Responses. In _AffCon@AAAI_.
* Chen, Yang, and Yang (2020) Chen, J.; Yang, Z.; and Yang, D. 2020. MixText: Linguistically-Informed Interpolation of Hidden Space for Semi-Supervised Text Classification. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , 2147–2157. Online: Association for Computational Linguistics. doi:10.18653/v1/2020.acl-main.194. URL https://www.aclweb.org/anthology/2020.acl-main.194.
* Cialdini (2001) Cialdini, R. 2001. 6 principles of persuasion. _Arizona State University, eBrand Media Publication_ .
* Dai and Le (2015) Dai, A. M.; and Le, Q. V. 2015. Semi-supervised sequence learning. In _Advances in neural information processing systems_ , 3079–3087.
* Devlin et al. (2019) Devlin, J.; Chang, M.-W.; Lee, K.; and Toutanova, K. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , 4171–4186. Minneapolis, Minnesota: Association for Computational Linguistics.
* Durmus, Cardie, and Durmus (2018) Durmus, E.; Cardie, C.; and Durmus, E. 2018. Exploring the Role of Prior Beliefs for Argument Persuasion. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)_ , 1035–1045. New Orleans, Louisiana: Association for Computational Linguistics. doi:10.18653/v1/N18-1094. URL https://www.aclweb.org/anthology/N18-1094.
* Guo, Zhang, and Singh (2020) Guo, Z.; Zhang, Z.; and Singh, M. 2020. In Opinion Holders’ Shoes: Modeling Cumulative Influence for View Change in Online Argumentation. In _Proceedings of The Web Conference 2020_ , 2388–2399.
* Gururangan et al. (2019) Gururangan, S.; Dang, T.; Card, D.; and Smith, N. A. 2019. Variational Pretraining for Semi-supervised Text Classification. _arXiv preprint arXiv:1906.02242_ .
* Hochreiter and Schmidhuber (1997) Hochreiter, S.; and Schmidhuber, J. 1997. Long Short-Term Memory. _Neural Comput._ 9(8): 1735–1780. ISSN 0899-7667. doi:10.1162/neco.1997.9.8.1735. URL http://dx.doi.org/10.1162/neco.1997.9.8.1735.
* Hovland, Janis, and Kelly (1971) Hovland, C. I.; Janis, I. L.; and Kelly, H. 1971. Communication and persuasion. _Attitude change_ 66–80.
* Jain, Koehler, and Mossel (2018) Jain, V.; Koehler, F.; and Mossel, E. 2018. The Mean-Field Approximation: Information Inequalities, Algorithms, and Complexity. _CoRR_ abs/1802.06126. URL http://arxiv.org/abs/1802.06126.
* Jang, Gu, and Poole (2017) Jang, E.; Gu, S.; and Poole, B. 2017. Categorical Reparameterization with Gumbel-Softmax. URL https://arxiv.org/abs/1611.01144.
* Kingma et al. (2014) Kingma, D. P.; Mohamed, S.; Rezende, D. J.; and Welling, M. 2014. Semi-supervised learning with deep generative models. In _Advances in neural information processing systems_ , 3581–3589.
* Kingma and Welling (2013) Kingma, D. P.; and Welling, M. 2013. Auto-Encoding Variational Bayes. URL http://arxiv.org/abs/1312.6114. Cite arxiv:1312.6114.
* Kiros et al. (2015) Kiros, R.; Zhu, Y.; Salakhutdinov, R. R.; Zemel, R.; Urtasun, R.; Torralba, A.; and Fidler, S. 2015. Skip-thought vectors. In _Advances in neural information processing systems_ , 3294–3302.
* Lawson et al. (2020) Lawson, P.; Pearson, C. J.; Crowson, A.; and Mayhorn, C. B. 2020. Email phishing and signal detection: How persuasion principles and personality influence response patterns and accuracy. _Applied Ergonomics_ 86: 103084.
* Lee, Chang, and Toutanova (2019) Lee, K.; Chang, M.-W.; and Toutanova, K. 2019. Latent Retrieval for Weakly Supervised Open Domain Question Answering. _arXiv preprint arXiv:1906.00300_ .
* Longpre, Durmus, and Cardie (2019) Longpre, L.; Durmus, E.; and Cardie, C. 2019. Persuasion of the Undecided: Language vs. the Listener. In _Proceedings of the 6th Workshop on Argument Mining_ , 167–176.
* Loshchilov and Hutter (2017) Loshchilov, I.; and Hutter, F. 2017. Fixing Weight Decay Regularization in Adam. _CoRR_ abs/1711.05101. URL http://arxiv.org/abs/1711.05101.
* Lukin et al. (2017) Lukin, S.; Anand, P.; Walker, M.; and Whittaker, S. 2017. Argument Strength is in the Eye of the Beholder: Audience Effects in Persuasion. In _Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers_ , 742–753.
* Luu, Tan, and Smith (2019a) Luu, K.; Tan, C.; and Smith, N. 2019a. Measuring Online Debaters’ Persuasive Skill from Text over Time. _Transactions of the Association for Computational Linguistics_ 7(0): 537–550. ISSN 2307-387X. URL https://transacl.org/index.php/tacl/article/view/1639.
* Luu, Tan, and Smith (2019b) Luu, K.; Tan, C.; and Smith, N. A. 2019b. Measuring Online Debaters’ Persuasive Skill from Text over Time. _Transactions of the Association for Computational Linguistics_ 7: 537–550.
* McHugh (2012) McHugh, M. 2012. Interrater reliability: The kappa statistic. _Biochemia medica : časopis Hrvatskoga društva medicinskih biokemičara / HDMB_ 22: 276–82. doi:10.11613/BM.2012.031.
* Min et al. (2019) Min, S.; Chen, D.; Hajishirzi, H.; and Zettlemoyer, L. 2019. A discrete hard em approach for weakly supervised question answering. _arXiv preprint arXiv:1909.04849_ .
* Miyato, Dai, and Goodfellow (2016) Miyato, T.; Dai, A. M.; and Goodfellow, I. 2016. Adversarial training methods for semi-supervised text classification. _arXiv preprint arXiv:1605.07725_ .
* Nashruddin, Alam, and Harun (2020) Nashruddin, N.; Alam, F. A.; and Harun, A. 2020. Moral Values Found in Linguistic Politeness Patterns of Bugis Society. _Edumaspul: Jurnal Pendidikan_ 4(1): 132–141.
* Oquab et al. (2015) Oquab, M.; Bottou, L.; Laptev, I.; and Sivic, J. 2015. Is object localization for free?-weakly-supervised learning with convolutional neural networks. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 685–694.
* Papandreou et al. (2015) Papandreou, G.; Chen, L.-C.; Murphy, K. P.; and Yuille, A. L. 2015. Weakly-and semi-supervised learning of a deep convolutional network for semantic image segmentation. In _Proceedings of the IEEE international conference on computer vision_ , 1742–1750.
* Peters et al. (2018) Peters, M. E.; Neumann, M.; Iyyer, M.; Gardner, M.; Clark, C.; Lee, K.; and Zettlemoyer, L. 2018. Deep contextualized word representations. _arXiv preprint arXiv:1802.05365_ .
* Petty and Cacioppo (1986) Petty, R. E.; and Cacioppo, J. T. 1986. The elaboration likelihood model of persuasion. In _Communication and persuasion_ , 1–24. Springer.
* Pinheiro and Collobert (2015) Pinheiro, P. O.; and Collobert, R. 2015. From image-level to pixel-level labeling with convolutional networks. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 1713–1721.
* Popkin and Popkin (1994) Popkin, S. L.; and Popkin, S. L. 1994. _The reasoning voter: Communication and persuasion in presidential campaigns_. University of Chicago Press.
* Pryzant, Chung, and Jurafsky (2017) Pryzant, R.; Chung, Y.; and Jurafsky, D. 2017. Predicting Sales from the Language of Product Descriptions. In _eCOM@ SIGIR_.
* Roethke et al. (2020) Roethke, K.; Klumpe, J.; Adam, M.; and Benlian, A. 2020. Social influence tactics in e-commerce onboarding: The role of social proof and reciprocity in affecting user registrations. _Decision Support Systems_ 131: 113268. ISSN 0167-9236. doi:https://doi.org/10.1016/j.dss.2020.113268. URL http://www.sciencedirect.com/science/article/pii/S0167923620300233.
* Shaikh et al. (2020) Shaikh, O.; Chen, J.; Saad-Falcon, J.; Chau, D. H.; and Yang, D. 2020. Examining the Ordering of Rhetorical Strategies in Persuasive Requests. _Findings of EMNLP_ .
* Stab and Gurevych (2014) Stab, C.; and Gurevych, I. 2014. Identifying argumentative discourse structures in persuasive essays. In _Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , 46–56.
* Tan et al. (2016) Tan, C.; Niculae, V.; Danescu-Niculescu-Mizil, C.; and Lee, L. 2016. Winning Arguments: Interaction Dynamics and Persuasion Strategies in Good-faith Online Discussions. In _Proceedings of the 25th International Conference on World Wide Web_ , WWW ’16, 613–624. Republic and Canton of Geneva, Switzerland: International World Wide Web Conferences Steering Committee. ISBN 978-1-4503-4143-1.
* Valeiras-Jurado (2020) Valeiras-Jurado, J. 2020. Genre-specific persuasion in oral presentations: Adaptation to the audience through multimodal persuasive strategies. _International Journal of Applied Linguistics_ 30(2): 293–312.
* Vargheese, Collinson, and Masthoff (2020a) Vargheese, J. P.; Collinson, M.; and Masthoff, J. 2020a. Exploring susceptibility measures to persuasion. In _International Conference on Persuasive Technology_ , 16–29. Springer.
* Vargheese, Collinson, and Masthoff (2020b) Vargheese, J. P.; Collinson, M.; and Masthoff, J. 2020b. Exploring Susceptibility Measures to Persuasion. In Gram-Hansen, S. B.; Jonasen, T. S.; and Midden, C., eds., _Persuasive Technology. Designing for Future Change_ , 16–29. Cham: Springer International Publishing. ISBN 978-3-030-45712-9.
* Wang et al. (2019) Wang, X.; Shi, W.; Kim, R.; Oh, Y.; Yang, S.; Zhang, J.; and Yu, Z. 2019. Persuasion for Good: Towards a Personalized Persuasive Dialogue System for Social Good. _arXiv preprint arXiv:1906.06725_ .
* Xie et al. (2020) Xie, Q.; Dai, Z.; Hovy, E.; Luong, M.-T.; and Le, Q. V. 2020. Unsupervised Data Augmentation for Consistency Training. URL https://openreview.net/forum?id=ByeL1R4FvS.
* Yang et al. (2019) Yang, D.; Chen, J.; Yang, Z.; Jurafsky, D.; and Hovy, E. 2019. Let’s Make Your Request More Persuasive: Modeling Persuasive Strategies via Semi-Supervised Neural Nets on Crowdfunding Platforms. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , 3620–3630.
* Yang and Kraut (2017) Yang, D.; and Kraut, R. E. 2017. Persuading teammates to give: Systematic versus heuristic cues for soliciting loans. _Proceedings of the ACM on Human-Computer Interaction_ 1: 114.
* Yang et al. (2017) Yang, Z.; Hu, Z.; Salakhutdinov, R.; and Berg-Kirkpatrick, T. 2017. Improved variational autoencoders for text modeling using dilated convolutions. In _Proceedings of the 34th International Conference on Machine Learning-Volume 70_ , 3881–3890. JMLR. org.
## Appendix
## Dataset & Annotation Details
In different contexts, people tend to write documents with different numbers
of sentences, which might be associated with different sets of persuasion
strategies.
The mean and standard deviation of the number of sentences per document are
4.68 and 4.63 in Borrow, 5.10 and 4.40 in RAOP, and 3.83 and 4.12 in Kiva.
We recruited two graduate and two undergraduate students to label the
persuasion strategies for each sentence in documents randomly sampled from
the whole corpus. Definitions and examples of the different persuasion
strategies were provided to the annotators. We also conducted a training
session in which annotators labeled 50 example sentences and we walked
through any disagreements or confusions with them. The annotators then
independently annotated 1,200 documents.
To assess the reliability of the annotated labels, a common set of 100
documents containing 400 sentences was given to all annotators to label, and
we computed Cohen's Kappa coefficients. We obtained average scores of 0.538
on Kiva, 0.613 on RAOP and 0.623 on Borrow, indicating moderate agreement and
reasonable annotation quality (McHugh 2012).
## WS-VAE
##### Sentence level VAE
Based on prior work on semi-supervised VAEs (Kingma and Welling 2013), for an
input sentence $\mathbf{s}$, we assume a graphical model whose latent
representation contains a continuous vector $\mathbf{z}$, denoting the content
of a sentence, and a discrete persuasive strategy label $\mathbf{y}$:
$\displaystyle
p(\mathbf{s},\mathbf{z},\mathbf{y})=p(\mathbf{s}|\mathbf{z},\mathbf{y})p(\mathbf{z})p(\mathbf{y}).$
To learn the semi-supervised VAE, we optimize the variational lower bound as
our learning objective. For an unlabeled sentence, we maximize:
$\displaystyle\log p(\mathbf{s})$
$\displaystyle=\log\mathbb{E}_{\mathbf{y}\sim
p(\mathbf{y})}\mathbb{E}_{\mathbf{z}\sim
p(\mathbf{z})}[p(\mathbf{\mathbf{s}}|\mathbf{z},\mathbf{y})]$
$\displaystyle\geq\mathbb{E}_{\mathbf{y}\sim
q(\mathbf{y}|\mathbf{s})}[\mathbb{E}_{\mathbf{z}\sim
q(\mathbf{z}|\mathbf{s},\mathbf{y})}[\log
p(\mathbf{s}|\mathbf{z},\mathbf{y})]$
$\displaystyle\quad-\text{KL}[q(\mathbf{z}|\mathbf{s},\mathbf{y})||p(\mathbf{z})]]$
$\displaystyle\quad-\text{KL}[q(\mathbf{y}|\mathbf{s})||p(\mathbf{y})],$
where $p(\mathbf{s}|\mathbf{y},\mathbf{z})$ is a decoder (generative network)
to reconstruct input sentences and $q(\mathbf{y}|\mathbf{s})$ is an encoder
(an inference or a predictor network) to predict sentence-level labels. For
labeled sentences, the variational lower bound becomes:
$\displaystyle\log p(\mathbf{s},\mathbf{y})$
$\displaystyle=\log\mathbb{E}_{\mathbf{z}\sim
p(\mathbf{z})}[p(\mathbf{s}|\mathbf{z},\mathbf{y})p(\mathbf{y})]$
$\displaystyle\geq\mathbb{E}_{\mathbf{z}\sim
q(\mathbf{z}|\mathbf{s},\mathbf{y})}[\log
p(\mathbf{s}|\mathbf{z},\mathbf{y})]$
$\displaystyle\quad-\text{KL}[q(\mathbf{z}|\mathbf{s},\mathbf{y})||p(\mathbf{z})]+\text{constant}$
In addition, for sentences with labels, we also update the inference network
$q(\mathbf{y}|\mathbf{s})$ via minimizing the cross entropy loss
$\mathbb{E}_{(\mathbf{s},\mathbf{y})}[-\log q(\mathbf{y}|\mathbf{s})]$
directly.
##### Document level VAE
Different from sentence-level VAEs, we model the input document $\mathbf{d}$
with sentences $\\{\mathbf{s}^{j}\\}_{j=1}^{M}=\mathbf{s}^{1:M}$ as a whole
and assume that the document-level label $\mathbf{t}$ depends on the sentence-
level latent variables. Thus we obtain the document-level VAE model as:
$\displaystyle p(\mathbf{d},\mathbf{t},\mathbf{y}^{1:M},\mathbf{z}^{1:M})=$
$\displaystyle p(\mathbf{d},\mathbf{t}|\mathbf{y}^{1:M},\mathbf{z}^{1:M})$
$\displaystyle\prod_{j=1}^{M}p(\mathbf{y}^{j})\prod_{j=1}^{M}p(\mathbf{z}^{j}),$
where $p(\mathbf{d},\mathbf{t}|\mathbf{y}^{1:M},\mathbf{z}^{1:M})$ is the
generative model for all sentences in the document $\mathbf{d}$ and the
document label $\mathbf{t}$. For simplicity, we further assume conditional
independence between the sentences $\mathbf{s}^{1:M}$ in $\mathbf{d}$ and its
label $\mathbf{t}$ given the latent variables:
$\displaystyle p(\mathbf{d},\mathbf{t}|\mathbf{y}^{1:M},\mathbf{z}^{1:M})=$
$\displaystyle p(\mathbf{t}|\mathbf{y}^{1:M},\mathbf{z}^{1:M})$
$\displaystyle\prod_{j=1}^{M}p(\mathbf{s}^{j}|\mathbf{y}^{j},\mathbf{z}^{j}).$
Since the number of possible sentence-label combinations is huge, directly
computing the marginal probability becomes intractable. Thus we optimize the
evidence lower bound. By using mean field approximation (Jain, Koehler, and
Mossel 2018), we factorize the posterior distribution as:
$\displaystyle q(\mathbf{z}^{1:M},\mathbf{y}^{1:M}|\mathbf{d},\mathbf{t})$
$\displaystyle\quad\quad=q(\mathbf{z}^{1:M}|\mathbf{y}^{1:M},\mathbf{s}^{1:M},\mathbf{t})q(\mathbf{y}^{1:M}|\mathbf{s}^{1:M},\mathbf{t})$
$\displaystyle\quad\quad=\prod_{j=1}^{M}q(\mathbf{z}^{j}|\mathbf{y}^{j},\mathbf{s}^{j},\mathbf{t})\prod_{j=1}^{M}q(\mathbf{y}^{j}|\mathbf{s}^{j},\mathbf{t}),$
That is, the posterior distribution of latent variables $\mathbf{y}^{j}$ and
$\mathbf{z}^{j}$ only depends on the sentence $\mathbf{s}^{j}$ and the
document label $\mathbf{t}$. For documents without sentence labels, the
variational lower bound $U(\mathbf{d},\mathbf{t})$ is:
$\displaystyle\log p(\mathbf{d},\mathbf{t})=\log\mathbb{E}_{\mathbf{y}\sim
p(\mathbf{y})}\mathbb{E}_{\mathbf{z}\sim
p(\mathbf{z})}[p(\mathbf{t}|\mathbf{z}^{1:M},\mathbf{y}^{1:M})$
$\displaystyle\quad\quad\quad\quad\quad\quad\prod_{j=1}^{M}p(\mathbf{s}^{j}|\mathbf{z}^{j},\mathbf{y}^{j})\prod_{j=1}^{M}p(\mathbf{y}^{j})\prod_{j=1}^{M}p(\mathbf{z}^{j})]$
$\displaystyle\geq\mathbb{E}_{\mathbf{y}^{1:M}\sim
q(\mathbf{y}^{1:M}|\mathbf{s}^{1:M},\mathbf{t})}[\mathbb{E}_{\mathbf{z}^{1:M}\sim
q(\mathbf{z}^{1:M}|\mathbf{s}^{1:M},\mathbf{y}^{1:M},\mathbf{t})}$
$\displaystyle\quad[\log
p(\mathbf{t}|\mathbf{y}^{1:M},\mathbf{z}^{1:M})+\sum_{j=1}^{M}\log
p(\mathbf{s}^{j}|\mathbf{y}^{j},\mathbf{z}^{j})]$
$\displaystyle\quad\quad\quad\quad-\sum_{j=1}^{M}\text{KL}[q(\mathbf{z}^{j}|\mathbf{s}^{j},\mathbf{y}^{j},\mathbf{t})||p(\mathbf{z}^{j})]]$
$\displaystyle\quad\quad\quad\quad-\sum_{j=1}^{M}\text{KL}[q(\mathbf{y}^{j}|\mathbf{s}^{j},\mathbf{t})||p(\mathbf{y}^{j})]$
$\displaystyle\quad\quad\quad\quad=U(\mathbf{d},\mathbf{t})$
For documents with sentence labels, the variational lower bound can be
adapted from the above as:
$\displaystyle\log p(\mathbf{d},\mathbf{t},\mathbf{y}^{1:M})$
$\displaystyle=\log\mathbb{E}_{\mathbf{z}\sim
p(\mathbf{z})}[p(\mathbf{t}|\mathbf{z}^{1:M},\mathbf{y}^{1:M})$
$\displaystyle\quad\quad\quad\quad\quad\quad\prod_{j=1}^{M}p(\mathbf{s}^{j}|\mathbf{z}^{j},\mathbf{y}^{j})\prod_{j=1}^{M}p(\mathbf{y}^{j})\prod_{j=1}^{M}p(\mathbf{z}^{j})]$
$\displaystyle\geq\mathbb{E}_{\mathbf{z}^{1:M}\sim
q(\mathbf{z}^{1:M}|\mathbf{s}^{1:M},\mathbf{y}^{1:M},\mathbf{t})}$
$\displaystyle\quad[\log
p(\mathbf{t}|\mathbf{y}^{1:M},\mathbf{z}^{1:M})+\sum_{j=1}^{M}\log
p(\mathbf{s}^{j}|\mathbf{y}^{j},\mathbf{z}^{j})]$
$\displaystyle\quad-\sum_{j=1}^{M}\text{KL}[q(\mathbf{z}^{j}|\mathbf{s}^{j},\mathbf{y}^{j},\mathbf{t})||p(\mathbf{z}^{j})]+\text{constant}$
$\displaystyle\quad=L(\mathbf{d},\mathbf{t},\mathbf{y}^{1:M})+\text{constant}$
Combining the losses for documents with and without sentence labels, we
obtain the overall loss function:
$\displaystyle L=$
$\displaystyle\quad\mathbb{E}_{\mathbf{d}\in\mathbf{D}_{U}}U(\mathbf{d},\mathbf{t})+\mathbb{E}_{\mathbf{d}\in\mathbf{D}_{L}}L(\mathbf{d},\mathbf{t},\mathbf{y}^{1:M})$
$\displaystyle\quad+\alpha\cdot\mathbb{E}_{\mathbf{d}\in\mathbf{D}_{L}}\sum_{j=1}^{M}\log
q(\mathbf{y}^{j}|\mathbf{s}^{j},\mathbf{t})$
Here, $\mathbb{E}_{\mathbf{d}\in\mathbf{D}_{L}}\sum_{j=1}^{M}\log
q(\mathbf{y}^{j}|\mathbf{s}^{j},\mathbf{t})$ represents the discriminative
loss for sentences with persuasive strategy labels, and $\alpha$ controls the
trade-off between the generative and discriminative losses.
## Threshold on KL Divergence
Yang et al. (2017) found that VAEs can easily get stuck in one of two local
optima: the KL term on $\mathbf{y}$ is very large and all samples collapse to
one class, or the KL term on $\mathbf{y}$ is very small and
$q(\mathbf{y}|\mathbf{s})$ stays close to the prior distribution. Thus we
minimize the KL term only when it is larger than a threshold $w$:
$\text{KL}_{\mathbf{y}}=\max(w,\text{KL}[q(\mathbf{y}|\mathbf{s})||p(\mathbf{y})])$
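In code, this thresholding is a one-line clamp; the sketch below assumes $q(\mathbf{y}|\mathbf{s})$ and the log-prior are given as tensors.

```python
import torch

def thresholded_kl_y(q_y, log_prior_y, w=1.2):
    """KL_y = max(w, KL[q(y|s) || p(y)]): below the threshold w the term
    is constant, so it contributes no gradient and no extra pressure."""
    kl = torch.sum(q_y * (torch.log(q_y + 1e-8) - log_prior_y))
    return torch.clamp(kl, min=w)
```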
## Influence of the Trade-off Weight $\alpha$
The overall loss function of our proposed weakly-supervised hierarchical
latent variable model is:
$\displaystyle L=$
$\displaystyle\quad\mathbb{E}_{\mathbf{d}\in\mathbf{D}_{U}}U(\mathbf{d},\mathbf{t})+\mathbb{E}_{\mathbf{d}\in\mathbf{D}_{L}}L(\mathbf{d},\mathbf{t},\mathbf{y}^{1:M})$
$\displaystyle\quad+\alpha\cdot\mathbb{E}_{\mathbf{d}\in\mathbf{D}_{L}}\sum_{j=1}^{M}\log
q(\mathbf{y}^{j}|\mathbf{s}^{j},\mathbf{t})$
Here, $\alpha$ controls the balance between the reconstruction loss and the
supervised sentence classification loss. When $\alpha$ is small, the
sentence-level classifier is not well learned; when $\alpha$ is large, the
model tends to learn only the sentence-level classification task and ignores
the reconstruction and document-level predictions. In our experiments, we set
$\alpha$ to 5 via a grid search over the set $\\{1,5,10,20\\}$.
## Model Implementation Details
### S-VAE
S-VAE, the sentence-level latent variable model, applies variational
autoencoders to sentence-level classification by reconstructing the input
sentences while learning to classify them; this encourages the model to
assign each input sentence a label $y$ under which the reconstruction loss is
low. S-VAE is a special case of our proposed WS-VAE that only operates at the
sentence level. The weight for the reconstruction term is 1, the weight for
the classification term is 5, and the weights for the KL divergence terms are
annealed from a small value to 1 during training. The learning rate is 0.001.
### WS-VAE
WS-VAE, our proposed weakly supervised latent variable model, takes advantage
of sentence-level and document-level labels at the same time, while also
reconstructing the input documents. The weight for the reconstruction term is
1, the weight for the classification term is 5, the weights for the KL
divergence terms are annealed from a small value to 1 during training, and
the weight for the predictor term is 0.5. The threshold for the KL
regularization on $q(y|s)$ is 1.2. The learning rate is 0.001.
### WS-VAE-BERT
WS-VAE-BERT, a special case of WS-VAE based on pre-trained transformer
models, combines WS-VAE with the pre-trained BERT. The weight for the
reconstruction term is 1, the weight for the classification term is 5, the
weights for the KL divergence terms are annealed from a small value to 1
during training, and the weight for the predictor term is 0.1. The threshold
for the KL regularization on $q(y|s)$ is 1.2. The learning rate is 0.00001.
Dataset | Threshold on $y$ | Macro F1
---|---|---
Kiva | 0 | 0.228
Kiva | 1.2 | 0.315
Kiva | 2.0 | 0.305
RAOP | 0 | 0.274
RAOP | 1.2 | 0.321
RAOP | 2.0 | 0.316
Borrow | 0 | 0.485
Borrow | 1.2 | 0.595
Borrow | 2.0 | 0.542

Table 5: Macro F1 Score with different thresholds on $y$ in the KL
regularization term for WS-VAE. Models are trained on the three datasets with
20 labeled documents (81 sentences in Kiva, 99 sentences in RAOP and 59
sentences in Borrow).
## Impact of Variational Regularization
To show the importance of the variational regularization on the latent
variable $y$ (the threshold $w$ on the KL divergence introduced in Section
Threshold on KL Divergence), we performed an ablation study on the KL term
for $y$. We tested WS-VAE with different threshold values on the three
datasets using 20 labeled documents; the results are shown in Table 5. When
the threshold is small (e.g., 0), i.e., under strong regularization on $y$,
performance is poor because $q(y|s)$ stays close to the estimated prior
distribution and barely learns from the objective. When the threshold is
large (e.g., 2), i.e., with effectively no regularization on $y$, we also get
lower F1 scores. With an appropriate threshold such as 1.2, WS-VAE achieves
the best performance.
Figure 6: Macro F1 scores with 20 documents with sentence labels and different
numbers of documents without sentence labels for WS-VAE. Results on Borrow
follow the left y-axis, while RAOP and Kiva follow the right y-axis.
## Varying the Number of Unlabeled Documents
We visualized WS-VAE’s performances on three datasets when varying the amount
of unlabeled data in Figure 6: macro F1 scores increased with more unlabeled
data, demonstrating the effectiveness of the introduction of unlabeled
sentences, and our hierarchical weakly-supervised model.
# Rapid Method for Generation Prioritization during System Restoration with
Renewable Resources
Adam Mate, Eduardo Cotilla-Sanchez School of Electrical Engineering &
Computer Science
Oregon State University, Corvallis, OR 97331 USA
<EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract
Quick and reliable power system restoration is critically important after
natural disasters or other sudden threats, such as cyber-attacks. Leveraging
renewable resources in system restoration shortens recovery times, preventing
loss of life and avoiding economic losses, and improves the resilience
of the entire grid. However, it is not a common practice today; the inherent
variability of these resources represents a challenge for a streamlined
restoration process. This paper presents a prioritized method, starting with
renewable generator units and then lowering priority to conventional units,
to plan the operational schedule of a power system during the restoration
process. The goal is to achieve a well-balanced system in the presence of
significant renewable penetration. Validation and benchmarking experiments
were performed on a customized version of the RTS-GMLC test system using six
months of the year-long data, tested through hourly simulations. After
evaluating the performance and computational costs, this method proved faster
than common approaches: a MILP Unit Commitment algorithm, widely used today,
and an “enable-and-try” algorithm. In summary, a more convenient method is
provided for use during time-sensitive restoration, as an online
operation-planning aid.
###### Index Terms:
operational planning, generation prioritization, power system restoration,
RTS-GMLC, renewables integration
## I Introduction
Natural disasters (e.g., hurricanes, earthquakes, floods) and other extreme
weather conditions (e.g., blizzards, heat waves) are becoming more common,
posing an increasing threat to our power systems. The U.S. Pacific Northwest
(PNW) in particular faces a complex and devastating disaster scenario: the
imminent Cascadia Subduction Zone (CSZ) megathrust earthquake, yielding a
powerful tsunami, hundreds of aftershocks and increased volcanic activity in
the region [1]–[2]. To effectively cope with such challenges, the resiliency
of the power system must be improved and power system restoration
accelerated. Several types of renewable generators are able to withstand the
above threats better than traditional ones. Wind turbines and solar panels
have proven many times their ability to quickly respond to and recover from
extreme events [3]. Therefore, renewable resources should be leveraged during
restoration, as they can shorten recovery times [4]–[5].
A key step in power system restoration is determining the operational schedule
of restored units. Classic unit commitment (UC) algorithms optimize for
operational costs (i.e., supplying system loads at the lowest total cost);
however, they can take an extended amount of time to find the optimal schedule
depending on the integrated modeling details and the selected solution
approach (for details see Section IV-A).
After catastrophes, every minute counts in system restoration; any delay can
have tragic consequences. In this paper, a prioritization method is proposed
for power systems with significant renewable penetration (greater than 20% on
average). The method prioritizes generator units and determines which ones
should be dispatched. Renewable units that remain available after the event
are enabled by default, and the goal is to decide within seconds which
conventional units (and with what generation setpoints) should be enabled
alongside them. As data about the ongoing restoration process is received,
the method can reconsider its earlier decisions close to real time and adjust
to always provide the best schedule.
## II Model Description
To evaluate the proposed prioritization method, the Reliability Test System of
the Grid Modernization Lab Consortium (RTS-GMLC) test system [6] was used.
RTS-GMLC is an updated version of the IEEE RTS-96 test system [7], with
modernizing changes [8]:
* •
Created relative node locations based on line distances (arbitrary geographic
region in the SW United States).
* •
Fixed data errors, improved transmission system, updated bus loads and
regional load profiles, modernized generation fleet (new unit types, new
conventional generators, and new renewable generation profiles).
* •
Hourly and 5-minute operations data for a year – from Jan. 1st, 2020 to Dec.
31st, 2020.
The default RTS-GMLC case represents a peak load flow state, with wind and
solar generation disabled. It consists of 73 buses, 158 generator units
(including 72 conventional and 82 renewable units), 120 AC transmission
lines, and 51 loads. Case values are based on the original RTS-96 system.
Forecasted
hourly data is available for the active power generation of solar, wind and
hydro units, and for the active loads of the system. Hourly data is not
provided for the reactive power generations and loads, conventional units,
synchronous condensers (abbrv.: sync-conds), and the storage unit.
The term time_period used below refers to a single one-hour period of the
year: the RTS-GMLC dataset consists of 8,784 hourly time_periods.
The following subsections present and discuss in detail all implemented
customizations of the RTS-GMLC test system.
### II-A Load Data
Hourly real power demand data is provided for each area and in every
time_period separately. To get the new $P_{d}$ load value (newMW) of a
specific bus in an area:
$\textnormal{newMW}=\textnormal{oldMW}\cdot\textnormal{MW\\_rescaling}$ (1)
where oldMW is the default RTS-GMLC active load value of the bus, and the
MW_rescaling ratio-value is determined as:
$\textnormal{MW\\_rescaling}=\frac{\textnormal{time\\_period\\_load}}{\textnormal{total\\_demand}}$
(2)
where time_period_load is the provided active load timeseries value of the
area, and total_demand is the calculated total real power demand of the entire
area in the time_period.
Hourly reactive power demand data is not provided. The default $Q_{d}$ load
values of buses were kept unchanged from the RTS-96 values. Using a fixed,
peak load flow state value during the simulations, however, is not an accurate
characterization of the reactive load profile that varies throughout a day and
the year. Thus, new rescaling ratio-values were introduced to improve the load
profile. To get the new $Q_{d}$ load value (newMVar) of a specific bus in an
area:
$\textnormal{newMVar}=\textnormal{oldMVar}\cdot\textnormal{MVar\\_rescaling}$
(3)
where oldMVar is the default RTS-GMLC reactive load value of the bus, and
MVar_rescaling ratio-value is determined as:
$\textnormal{MVar\\_rescaling}=\frac{\textnormal{time\\_period\\_load}}{\textnormal{max\\_demand}}$
(4)
where time_period_load is the provided active load timeseries value of the
area, and max_demand is the determined maximum time_period_load value of the
area throughout the entire year. More specifically, to create the new
MVar_rescaling ratio-value of an area in a certain time_period, the provided
active load timeseries values were used: 1) the yearly maximum timeseries
value of the area is determined and set to 1, and 2) the values of the other
time_periods are computed as ratios relative to the area maximum.
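The two rescalings can be summarized in a short sketch following Eqs. (1)-(4); the data layout and field names are illustrative assumptions.

```python
def rescale_area_loads(buses, ts_load, total_demand, max_ts_load):
    """Rescale the default Pd/Qd of every bus in one area for one
    time_period. ts_load: the area's active load timeseries value;
    total_demand: the area's total real power demand in the time_period;
    max_ts_load: the area's yearly maximum timeseries value."""
    mw_rescaling = ts_load / total_demand      # Eq. (2)
    mvar_rescaling = ts_load / max_ts_load     # Eq. (4)
    for bus in buses:
        bus["Pd"] = bus["Pd_default"] * mw_rescaling    # Eq. (1)
        bus["Qd"] = bus["Qd_default"] * mvar_rescaling  # Eq. (3)
```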
Figure 1: Improved reactive load profile of the RTS-GMLC system.
Fig. 1 presents the improved reactive load profile of RTS-GMLC between
January 26 and February 2 (i.e., 168 time_periods). The graph illustrates how
the total reactive power demand of each area changes throughout the days and
the week, instead of staying at a constant 580 [MVar]. This is a more
realistic profile and a better fit for the hourly simulations.
### II-B Energy Portfolio
Figure 2: Modified portfolio of the RTS-GMLC system, using different
prioritization approaches.
In the RTS-GMLC system, the forecasted renewable generation is substantial
throughout the year, and could supply most loads by itself in numerous
time_periods. Enabling every generator (fixed-value conventional and
hourly-changing renewable units) simultaneously would lead to an unbalanced
power system in which generation greatly exceeds demand. For this reason, the
available units must be coordinated; key generators need to be selected to
operate based on the system state and operational goals.
The renewable generation profile of RTS-GMLC is based on the Southwest U.S., a
region filled with solar and wind resources [6]. Today’s energy portfolio of
Oregon and the PNW, however, differs from this: about a half and a tenth of
the generated power comes from hydro and wind resources, respectively [9]
–[10]. The desired goal was to change the renewable portfolio of RTS-GMLC to
resemble the PNW’s portfolio in every time_period. In this research, based on
historical data and anticipated generation-changes, the renewable portfolio of
the PNW (i.e. goal portfolio) in 2020 is predicted to be: Solar 0.5%, Hydro
46.75%, Wind 10.5%, and Other Renewable 2.25%. The minimum renewable
generation requirement in 2020, based on legal mandates [11] –[12], is assumed
to be 20%.
In achieving the significant renewable penetration that characterizes the
PNW, the forecasted available potentials of every renewable generator were
modified in each time_period as follows (a code sketch of this rescaling is
given after the list):
* •
Renewable generators are the highest priority; all units are enabled that are
forecasted to have generation. Concentrated Solar Power (CSP) units are not
used in the PNW, so were disabled.
* •
The 2.25% “Other Renewable” is proportionally distributed between the “Wind”
and “Solar” categories of goal portfolio. Distribution is based on the current
generation’s share of the total renewable generation.
* •
After the total active power demand (total_load) is determined, the allowed
generation (based on forecasted data) of renewable units is changed:
* –
units of a specific resource-type collectively generate $P_{total-type}$
active power
* –
individual units of that type are allowed to generate their full forecasted
power ($P_{gmax}$) only while $P_{total-type}$'s share of the total_load is
less than or equal to the corresponding percentage in the goal portfolio
* –
if the share is less than the goal portfolio percentage, their $P_{gmax}$
limits are set to be their forecasted generations in that time_period;
otherwise $P_{gmax}$ limits are rescaled to achieve the goal portfolio
* •
If the minimum renewable generation requirement is not yet fulfilled at this
point and headroom remains in the forecasted maximum generation, the allowed
generation ($P_{gmax}$) of appropriate units is increased by the remaining
potential to fulfill the requirement.
* •
$P_{gmin}$ limits of renewable units are kept unchanged. Since hourly data is
not provided for reactive power generations, the default RTS-GMLC values were
kept unchanged throughout the year. Solar and wind units are not able to
participate in reactive power generation.
### II-C Synchronous Condensers and Storage Units
Reactive power is not able to travel far, thus it must be generated where it
is used. The renewable generators of RTS-GMLC contribute greatly to the
active power supply of loads, but do not contribute to reactive power
generation (except for hydro units). With the introduced significant
renewable share and the preferred use of renewable units, this leads to cases
with insufficient reactive power generation and substantial generation
imbalance across the power system. For this reason, the sync-conds of the
default RTS-GMLC case must be reassessed and modified.
Hourly data is not provided for the sync-conds. The default reactive power
generation limits of the three existing sync-conds (one in each area: Bus114,
Bus214 and Bus314) were updated to better fit the changed energy portfolio of
the system: the $Q_{gmin}$ minimum limit was set to -50 [MVar], and the
$Q_{gmax}$ maximum limit was set to 100 [MVar]. Furthermore, to compensate for
the missing reactive power generation and slightly reduce the system-level
imbalance, new sync-conds were added with realistic generation limits.
In each time_period, every bus that has a connected renewable unit receives a
newly added sync-cond. The generation limits of these additional sync-conds
are based on the total power generation of the bus's renewable unit(s), as
sketched below. If the total generated active power of the unit(s) is greater
than 250 [MW], the limits of the added sync-cond are set as $Q_{gmin}$ = -50
[MVar] and $Q_{gmax}$ = 100 [MVar]. Otherwise, if the total generated power is
greater than 100 [MW], the limits are set as $Q_{gmin}$ = -25 [MVar] and
$Q_{gmax}$ = 25 [MVar]. Otherwise, $Q_{gmin}$ = -5 [MVar] and $Q_{gmax}$ = 10
[MVar].
The storage unit of the system was disabled.
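The tiered limit rule above can be condensed into the following illustrative Python sketch (thresholds and limits exactly as listed; the function name is ours):

```python
def added_synccond_limits(p_renewable_total):
    """Return (Qgmin, Qgmax) in [MVar] for the sync-cond added at a bus,
    given the total generated active power [MW] of its renewable unit(s)."""
    if p_renewable_total > 250.0:
        return (-50.0, 100.0)
    if p_renewable_total > 100.0:
        return (-25.0, 25.0)
    return (-5.0, 10.0)
```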
### II-D Observations
Fig. 2 presents the modified energy portfolio of RTS-GMLC between January 26
and February 2, using different generation prioritization approaches. The
Universal Selection Scheme (abbrv.: USS – Section III-B; top graph) is the
proposed new method, while the MILP Unit Commitment (abbrv.: MILP UC – Section
IV-A; bottom left graph) and the Minimum Number of Generators (abbrv.: MNG –
Section IV-B; bottom right graph) algorithms were implemented for result
comparison. The graphs illustrate the similarities and differences between the
determined operational schedules.
The experiments performed (Section V) verify that the changed portfolio of
RTS-GMLC has significant renewable penetration. On average, 20-25% of the
total system load is supplied by renewable sources throughout the
time_periods, so the minimum renewable generation requirement is fulfilled.
The modified portfolio of the system also broadly resembles the PNW's
predicted energy portfolio. On the other hand, there are notable differences
between the prioritization approaches used; these are discussed in detail in
Section V-A.
## III Generation Prioritization Method
Besides the significant renewable generation in the modified RTS-GMLC (Section
II-B), the remainder of the total system load is supplied by nonrenewable
sources. Thus, the conventional generators must be prioritized and key units
selected to generate. The following subsections present a new prioritization
method for this process.
### III-A GPWD Factor
To characterize the importance of each generator, the Generator Participation
Weight Determination (GPWD) factor was introduced. This new index is composed
of easily obtainable values and is used to rank generators in each
time_period, creating a list that distinguishes between significant and less
significant units. The resulting list loosely resembles the priority lists
presented in [13] and [14], but is better suited for use during time-sensitive
restoration.
GPWD is calculated as follows (a sketch of the computation is given after the
component descriptions below):
$\textnormal{GPWD}=\textnormal{PS}+\textnormal{APF-P}+\textnormal{APF-Q}+\textnormal{MP}-\frac{P_{gmin}}{Q_{gmax}}$
(5)
PS: Prior State
* •
the Status of a unit in the prior time_period; an enabled unit receives a
value of 1, a disabled unit receives 0
* •
turning generators ON and OFF frequently is neither beneficial nor realistic,
so previously enabled units rank higher in the present time_period
APF-P and APF-Q
* •
Area Participation Factors based on $P_{g}$ active power, and $Q_{g}$ reactive
power generations; values of enabled units add up to 1 in each area for both
cases
* •
To obtain the values: after setting the $P_{gmin}$ generation limits of all
conventional generators to 0 [MW], an Optimal Power Flow (OPF) [15] simulation
is performed. Then, using the OPF results in every area separately: 1)
calculate the total $P_{g}$ (or absolute-valued $Q_{g}$) generation of
conventional units; 2) determine each individual unit's share of the total
generation.
* •
a higher APF value means a greater contribution in the area, and thus a more
important unit
MP: Maximum Power
* •
the maximum generable power of a unit compared to that of the largest
generator in the power system; each unit receives a value between 0 and 1
* –
if the relative $P_{gmax}$ (or $Q_{gmax}$) size of a unit is greater than 95%,
the unit receives a value of 0.5; if $P_{gmax}$ (or $Q_{gmax}$) is between 95%
and 80%, the unit receives 0.25; otherwise the unit receives 0
* –
MP is the sum of the two values resulting from the relative $P_{gmax}$ and
$Q_{gmax}$ sizes; only the largest units in the power system receive MP values
* •
MP keeps the largest unit(s) of the system active most of the time, as they
contribute greatly toward supplying the missing load and are harder to turn
ON/OFF frequently
$P_{gmin}$/$Q_{gmax}$ ratio
* •
ratio of the generators’ two default generation limits: $P_{gmin}$ minimum
active power and $Q_{gmax}$ maximum reactive power generation limits
* •
each unit receives a value between 0 and 1; after determining the
$P_{gmin}$/$Q_{gmax}$ ratio of each generator, the individual values are
normalized by the maximum ratio of the time_period
* •
smaller ratios are preferred, because those units reduce the reactive power
generation imbalance (caused by the significant renewable penetration) more
than they contribute toward the active power generation
If the determined GPWD factor of a generator is smaller than 0, it is changed
to 0. Disabled units receive 0 as well.
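The following Python sketch assembles eq. (5) from the components described above (illustrative only; the APF values are assumed to come from the OPF run described earlier, and all names are ours):

```python
def gpwd(prior_status, apf_p, apf_q, rel_pmax, rel_qmax,
         pgmin, qgmax, max_ratio):
    """GPWD factor of eq. (5) for one conventional unit.

    prior_status      : 1 if the unit was enabled in the prior time_period, else 0
    apf_p, apf_q      : area participation factors from the OPF results
    rel_pmax, rel_qmax: the unit's Pgmax/Qgmax relative to the system's largest unit
    pgmin, qgmax      : the unit's default generation limits
    max_ratio         : largest Pgmin/Qgmax ratio of the time_period
    """
    def mp_component(rel):
        # only the largest units of the system receive MP credit
        if rel > 0.95:
            return 0.5
        if rel > 0.80:
            return 0.25
        return 0.0

    mp = mp_component(rel_pmax) + mp_component(rel_qmax)
    ratio = (pgmin / qgmax) / max_ratio       # relative value in [0, 1]
    return max(prior_status + apf_p + apf_q + mp - ratio, 0.0)  # clamp at 0
```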
### III-B Universal Selection Scheme
GPWD factors are used to rank conventional units: a larger value corresponds
to a higher rank (greater importance) on the created list. To decide which
units participate in supplying the demand, the USS method was created. USS is
implemented in each time_period and area separately, and uses the GPWD-ranked
list of units.
Units are enabled based on the following values, after taking the preparatory
steps detailed below:
* •
Disable every conventional unit in the system, then re-enable one unit (the
one with the highest $P_{gmax}$ active power generation capability) at each
Slack bus.
* •
Determine the $hour\\_of\\_the\\_day$ of the time_period.
* •
Determine the $renewable\\_percentage$ goal value (abbrv.: $renew\\_pct$): the
planned renewable generation share of total load in the time_period (based on
Section II-B).
* •
Calculate active and reactive missing generation of each area: difference
between forecasted load and the enabled total renewable generation (based on
Section II-B).
* •
Set MW and MVar generation goals in each area: The area’s
$MW\\_generation\\_goal$ is 115% of the active missing generation (considering
the effect of power transmission losses in the system, and keeping 10%
spinning reserve). The area’s $MVar\\_generation\\_goal$ is the reactive
missing generation minus 85% of the added extra sync-conds’ total generation.
* •
Take into consideration the enabled units of the Slack buses: deduct
(0.5x$P_{gmin}$+0.5x$P_{gmax}$) from their area’s $MW\\_generation\\_goal$,
and deduct 50% of their $Q_{gmax}$ from their area’s
$MVar\\_generation\\_goal$. In the equations, $P_{gmin}$, $P_{gmax}$ and
$Q_{gmax}$ values are the generation limits of the enabled units.
Generators are enabled as long as there is missing generation in their area,
i.e., while the $MW\\_generation\\_goal$ and/or the $MVar\\_generation\\_goal$
of their area is greater than 0. These goal computations are sketched below.
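A minimal sketch of the goal computations (illustrative names; `synccond_total` stands for the total generation of the area's added extra sync-conds):

```python
def area_goals(missing_mw, missing_mvar, synccond_total, slack_units):
    """Initial MW/MVar generation goals of one area."""
    mw_goal = 1.15 * missing_mw             # transmission losses + 10% reserve
    mvar_goal = missing_mvar - 0.85 * synccond_total
    for u in slack_units:                   # deduct the enabled Slack-bus units
        mw_goal -= 0.5 * u.pgmin + 0.5 * u.pgmax
        mvar_goal -= 0.5 * u.qgmax
    return mw_goal, mvar_goal
```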
General description of the USS method (a code sketch follows the list):
1. 1.
take first (or next) generator of the GPWD-ranked list;
2. 2.
determine the area and status (can be enabled or must stay disabled) of the
unit;
3. 3.
decide if the unit needs to be enabled in the time_period; if not, terminate
the setting-process;
4. 4.
after enabling the unit, calculate the new area generation goals (deduct the
effect of the enabled unit from the old area generation goals);
5. 5.
start over from 1).
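This general description can be written as the following Python skeleton (not the MATLAB code of this work; `should_enable` and `deduct` stand for points 3) and 4) of the Step being executed, and `run_opf` is a hypothetical wrapper around an OPF solver such as MATPOWER's):

```python
def uss_step(ranked_units, area_goals, should_enable, deduct, run_opf):
    """One Step of USS over the GPWD-ranked conventional units.

    ranked_units : units sorted by descending GPWD factor
    area_goals   : e.g. {area: {"MW": goal, "MVar": goal}}
    should_enable: callable(unit, area_goals) -> bool, point 3) of the Step
    deduct       : callable(unit, area_goals), point 4) of the Step
    run_opf      : callable() -> bool, True on power flow convergence
    """
    for unit in ranked_units:             # 1) take the next unit on the list
        if unit.must_stay_disabled:       # 2) determine area and status
            continue
        if not should_enable(unit, area_goals):
            break                         # 3) terminate the setting-process
        unit.enabled = True
        deduct(unit, area_goals)          # 4) update the area generation goals
    return run_opf()                      # convergence decides the next Step
```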
The detailed USS method consists of three Steps, and further specifies points
3) and 4) of the general description above. An OPF simulation [15] is
performed after each Step to determine the success (i.e., power flow
convergence) of the created system-setup and to decide whether continuing to
the next Step is necessary.
Step 1: enable as few conventional generators as possible
* •
3): enable units while the $MW\\_generation\\_goal$ OR the
$MVar\\_generation\\_goal$ in their area is greater than 0
* •
4): to get the new generation goals of the enabled unit’s area, deduct
(0.15x$P_{gmin}$+0.85x$P_{gmax}$) from the old $MW\\_generation\\_goal$, and
deduct (0.85x$Q_{gmax}$) from the old $MVar\\_generation\\_goal$
* •
Once the setting-process ends (either the full GPWD-ranked list has been
traversed, or the process was terminated because the generation goal(s) fell
to 0 or below), the rest of the generators on the list remain disabled in the
time_period.
* •
If the performed OPF simulation was successful, the statuses of the units are
determined and the USS method is concluded; otherwise it proceeds to Step 2.
Step 2: enable a realistic number of units based on active power generation of
conventional generators
* •
3): enable units while the $MW\\_generation\\_goal$ in their area is greater
than 0
* •
4): to get the new area generation goals, this rule applies:
* –
when the renewable generation is low, the time_periods require more
conventional units (with generation closer to their $P_{gmax}$ limits); when
the renewable generation is high, they require fewer conventional units (with
generation closer to their $P_{gmin}$ limits)
* –
thus, from the old $MW\\_generation\\_goal$ deduct (this schedule and that of
Step 3 are sketched in code after Step 3):
(0.50x$P_{gmin}$+0.50x$P_{gmax}$) if $renew\\_pct$$<=$10%
(0.55x$P_{gmin}$+0.45x$P_{gmax}$) if $renew\\_pct$$<=$17.5%
(0.60x$P_{gmin}$+0.40x$P_{gmax}$) if $renew\\_pct$$<=$25%
(0.65x$P_{gmin}$+0.35x$P_{gmax}$) if $renew\\_pct$$>$25%
* •
As in Step 1, once the setting-process ends, the rest of the generators on the
list remain disabled.
* •
If the performed OPF simulation was unsuccessful, the method proceeds to Step
3.
Step 3: enable a realistic number of units based on reactive power generation
of conventional generators
* •
3): enable units while the $MVar\\_generation\\_goal$ in their area is greater
than 0
* •
4): to get the new area generation goal, this rule applies:
* –
different times of the day require different numbers of enabled units: during
the night (when reactive power demand is lower) fewer are needed, while during
the day (when reactive power demand is higher) more are needed
* –
thus, from the old $MVar\\_generation\\_goal$ deduct:
(0.25x$Q_{gmax}$) if $hour\\_of\\_the\\_day$ = 1-6, 24
(0.20x$Q_{gmax}$) if $hour\\_of\\_the\\_day$ = 7-10, 22-23
(0.15x$Q_{gmax}$) if $hour\\_of\\_the\\_day$ = 11-21
* •
As in earlier steps, once the setting-process ends, the rest of the generators
on the list remain disabled.
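The deduction schedules of Steps 2 and 3 can be sketched in Python as follows (coefficients exactly as listed above; the function names are illustrative):

```python
def step2_mw_deduction(pgmin, pgmax, renew_pct):
    """Per-unit deduction from MW_generation_goal in Step 2 [MW]."""
    if renew_pct <= 0.10:
        a, b = 0.50, 0.50
    elif renew_pct <= 0.175:
        a, b = 0.55, 0.45
    elif renew_pct <= 0.25:
        a, b = 0.60, 0.40
    else:
        a, b = 0.65, 0.35
    return a * pgmin + b * pgmax

def step3_mvar_deduction(qgmax, hour_of_the_day):
    """Per-unit deduction from MVar_generation_goal in Step 3 [MVar]."""
    if hour_of_the_day in (1, 2, 3, 4, 5, 6, 24):    # night hours
        return 0.25 * qgmax
    if hour_of_the_day in (7, 8, 9, 10, 22, 23):     # shoulder hours
        return 0.20 * qgmax
    return 0.15 * qgmax                              # daytime, hours 11-21
```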
Once the USS method concludes, the operational schedule of the time_period is
determined. Renewable units are enabled and set based on the modifications of
Section II-B. Conventional units are enabled based on the last performed Step
of the USS method, and are set to generate with their $P_{gmin}$ and
$Q_{gmax}$ values as a starting point. A further OPF or PF simulation on the
restored power system determines the exact generation setpoints of these
units.
## IV Implemented Algorithms for Result Comparison
Figure 3: Comparison of different prioritization approaches during normal
operation.
### IV-A MILP Unit Commitment
Unit Commitment is a mathematical optimization problem that determines the
optimal operational schedule of generator units within a power system subject
to device and operating constraints [16]. In most cases the target objective
is to minimize the operational costs throughout the system.
Numerous UC solution approaches have been explored, and algorithms have been
developed and tested over the years, including techniques for regulated and
deregulated markets, systems with renewable energy resources and energy
storage units, distributed generation systems, and more [17]–[18]. In the
electric utility industry, the Lagrangian Relaxation (LR) technique has
traditionally been used to solve UC problems, and it remains a widely used and
powerful solution approach [18]–[19]. Nowadays, with the spread of efficient
commercial solvers such as CPLEX [20]–[22], the common and most efficient
practice for solving UC problems is Mixed-Integer Linear Programming (MILP).
MILP algorithms solve linear programming relaxations and check for an integer
solution [18],[20]. The objective function and constraints are required to be
linear functions of the decision variables. Their greatest advantage over LR
is global optimality; they guarantee a solution that is globally optimal or
within an acceptable tolerance [22]–[24]. On the other hand, they scale poorly
and may fail as the number of units increases or as additional modeling detail
is integrated. Their efficiency also suffers from computational delay and the
need for large memory [17]–[18],[22].
The MILP UC algorithm implemented here is based on an openly accessible UC
script by MathWorks [25]. The MILP computation was solved using the
INTLINPROG solver of MATLAB's Optimization Toolbox [26].
MathWorks' script was customized and optimized for the RTS-GMLC system in the
following manner:
* •
MILP UC was implemented for each system area separately to account for the
unique properties of the areas. It is executed in each time_period separately.
* •
Only the conventional generators need to be optimized; the data of other units
and system elements were ignored.
* •
Input data (RTS-GMLC default data) was modified to fit the application
circumstances:
* –
Fuel cost data was provided in units of [$/MMBTU].
* –
Operational cost data was provided as a piecewise linear cost function with
four breakpoints. [$/hr/MW] unit values were calculated for each generator to
speed up the algorithm; the given four values were averaged into a single
value.
* –
When a generator was enabled in the previous time_period, its start-up cost
was changed to 0 [$] in the present time_period.
* –
Ramp-up and ramp-down rates were converted from the given [MW/min] units to
[MW/hr] units.
* –
All values of disabled generators were set to 0, as they do not participate in
the algorithm.
* •
Forecasted load data (targeted MW active power generation of the area) is
increased by 5% to serve as spinning reserve for the generators of the area
and to compensate for potential variabilities and modelling inaccuracies.
* •
The objective function is the sum of three terms: the cost of turning the
generator on (Status x Start-up cost), the cost of running the generator if it
is on ($P_{g}$ x Operating cost), and the cost of generating power ($P_{g}$ x
Fuel cost). A minimal single-period sketch is given after this list.
* •
The number of integrated modeling details was kept low to increase
computational speed.
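As a concrete illustration of this objective and the basic UC constraints, here is a minimal single-period sketch, assuming SciPy (>= 1.9) instead of MATLAB's INTLINPROG; the generator data are made up, and the operating and fuel costs are merged into a single [$/MWh] coefficient in line with the averaging described above:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

pmin = np.array([20.0, 15.0, 10.0])         # minimum stable generation [MW]
pmax = np.array([100.0, 80.0, 50.0])        # maximum generation [MW]
startup = np.array([500.0, 300.0, 100.0])   # start-up costs [$]
run_cost = np.array([12.0, 15.0, 20.0])     # operating + fuel costs [$/MWh]
demand = 120.0 * 1.05                       # forecasted load + 5% reserve

n = len(pmin)
# decision vector x = [u_1..u_n (binary statuses), p_1..p_n (dispatch, MW)]
c = np.concatenate([startup, run_cost])     # Status x Start-up + Pg x costs
integrality = np.concatenate([np.ones(n), np.zeros(n)])

A = np.zeros((2 * n + 1, 2 * n))
A[:n, :n] = -np.diag(pmax); A[:n, n:] = np.eye(n)            # p_i <= pmax_i*u_i
A[n:2 * n, :n] = np.diag(pmin); A[n:2 * n, n:] = -np.eye(n)  # p_i >= pmin_i*u_i
A[2 * n, n:] = 1.0                                           # sum(p_i) >= demand
lb = np.concatenate([-np.inf * np.ones(2 * n), [demand]])
ub = np.concatenate([np.zeros(2 * n), [np.inf]])

res = milp(c=c, integrality=integrality,
           constraints=LinearConstraint(A, lb, ub),
           bounds=Bounds(np.zeros(2 * n), np.concatenate([np.ones(n), pmax])))
print(res.x[:n].round(), res.x[n:])         # commitment statuses and dispatch
```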
### IV-B Minimum Number of Generators
The minimum number of conventional generators is the number of units needed to
successfully perform an OPF simulation [15] in the created power system, i.e.,
to reach power flow convergence. This algorithm determines and enables the
minimum number of units in the entire system, in each time_period separately.
MNG utilizes the previously formed GPWD factor-ranked conventional generator
list, in which the units are listed from the largest GPWD value to the
smallest (presented in Section III-A). In this “enable-and-try” process,
generators are enabled one by one, from the top of the list to the bottom,
until the OPF simulation of the resulting system-setup is successful.
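A Python sketch of this loop (with `run_opf` a hypothetical wrapper reporting OPF convergence, e.g. around MATPOWER's runopf):

```python
def mng(ranked_units, run_opf):
    """Enable GPWD-ranked units one by one until the OPF converges."""
    for unit in ranked_units:       # largest GPWD factor first
        unit.enabled = True         # enable one more unit ...
        if run_opf():               # ... and try an OPF simulation
            return ranked_units     # converged: minimum number reached
    return None                     # no convergent setup even with all units
```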
## V Results and Discussion
The testing of the proposed prioritization method was done on a computer with
an Intel(R) Core(TM) i7-7500U 2.90 GHz CPU and 12 GB RAM. The software used
was MATLAB R2018b 64-bit, with MATPOWER 6.0 [27].
Figure 4: One-line diagram of the RTS-GMLC system during the restoration time-
frame.
### V-A Prioritization During Normal Operation
First, validation experiments were performed during the “Normal Operation” of
the RTS-GMLC system, where all system elements were continuously operational
and connected to the grid according to the customization changes detailed
above. Five months of the year-long data were tested through hourly
simulations: the months with the lowest areal and total system loads (March,
June, and October) and the months with the highest (July and August). Each
month is 30 or 31 days long, resulting in 720 or 744 simulated time_periods.
Fig. 3 presents the results in table format. Each column belongs to a
different prioritization approach, and each row details its performance in a
specific month or (in the last row) approximated for the entire year. The
table states the number of time_periods with “working” (converged OPF or PF
simulation of the created system-setup) and “not working” operational
schedules, the total computational times, the average number of enabled
conventional units, and the average renewable shares of total generation.
As Fig. 2 also illustrates, the USS method and the MNG algorithm both
determine working operational schedules in every time_period; the MILP UC
algorithm, however, is not always able to provide working schedules, which
explains the missing (or unrealistic) columns in its graph. Further validating
the conclusion of Section II-D, Fig. 3 shows that the results of all three
approaches satisfy, on yearly average, the 20% minimum renewable generation
requirement.
Comparing the USS method to the MILP UC algorithm, it must be noted that the
former dispatches significantly fewer conventional units, resulting in more
feasible and economical schedules. Although the MNG algorithm creates the best
operational schedules, it is the slowest among the three: five times slower
than the USS method. Furthermore, even though the implemented MILP UC
algorithm was designed to be fast, the proposed prioritization method is 2.11
times faster. Considering the small size of the RTS-GMLC system, this is a
significant difference.
### V-B Prioritization During Restoration
Validation experiments were performed during an ongoing system restoration
process. Based on historical data and expected consequences [1],[2], a
fictional restoration time-frame was implemented for a presumed Cascadia
Subduction Zone (CSZ) earthquake event. To create a connection between the
RTS-GMLC system and the PNW, it was assumed that Area 1 of RTS-GMLC
corresponds to the Pacific Coast region of the PNW, Area 2 to the region
between the Coastal Range and the Cascades, and Area 3 to the region east of
the Cascades.
Two weeks of data were tested through hourly simulations, between January 26
and February 8. It was assumed that RTS-GMLC operates according to the
schedule below (note: “TP” abbreviates time_period); Fig. 4 presents the one-
line diagram of the system during this period.
1. 1.
Normal Operation (01/26 TP-1 to 01/26 TP-21)
2. 2.
CSZ Earthquake Disaster (01/26 TP-22 to 01/29 TP-9): at 9 pm local time a CSZ
event struck the region; Area 3 remains intact and continues to operate, while
Areas 1 and 2 disconnect and enter complete blackout
3. 3.
Partially Restored Operation I. (01/29 TP-10 to 02/03 TP-17): about three days
after the CSZ event, the 230 [kV] side of Area 2 (orange dashed lines in Fig.
4) is restored, and is connected to operate with Area 3
4. 4.
Partially Restored Operation II. (02/03 TP-18 to 02/08 TP-24): about a week
after the CSZ event, the 138 [kV] side of Area 2 (red dashed lines in Fig. 4)
is restored and connected to the operating areas; Area 1 (red dotted lines in
Fig. 4) remains nonoperational
Fig. 5 presents the results in table format, similarly to Fig. 3. The last row
displays the total computational times, and the total percentage of “working”
schedules during the complete restoration time-frame.
The same conclusions can be drawn regarding performance and computational cost
as in Section V-A. As expected, the MILP UC algorithm became much faster as
the system size (and element count) was reduced, but the determined schedules
are infeasible in many cases. The MNG algorithm provided the best schedules
during the time-frame and enabled the fewest conventional units in each step
of the restoration, but was considerably slower than the other approaches. The
proposed USS method provides reliable operational schedules the fastest among
the three prioritization approaches, whether during normal operation or during
a restoration process.
Figure 5: Comparison of different prioritization approaches during the
restoration time-frame.
### V-C Closing Remarks
Altogether, about six months of data were used to perform the validation and
benchmarking experiments on the proposed USS method. The selected periods
cover a wide range of possible system-states - months with the highest and
lowest areal and system loads, a month from each quarter of the year (to
account for the seasonality of power generation and demand), and times during
normal and islanded operation (as part of a restoration process) - all in a
power system with significant renewable penetration.
The presented Universal Selection Scheme method (and the associated GPWD
factor) proved to be a fast, efficient and convenient tool under various
circumstances. Thus, it is recommended for use during time-sensitive
restoration to rapidly plan the operational schedule of generator units in a
power system.
## References
* [1] Oregon Seismic Safety Policy Advisory Commission, “The Oregon resilience plan: reducing risk and improving recovery for the next Cascadia earthquake and tsunami,” Tech. Rep., 2013.
* [2] Cascadia Region Earthquake Workgroup, “Cascadia Subduction Zone Earthquakes: A Magnitude 9.0 Earthquake Scenario,” 2013.
* [3] American Council on Renewable Energy, “The role of renewable energy in national security,” 2018.
* [4] A. El-Zonkoly, “Renewable energy sources for complete optimal power system black-start restoration,” IET Gen., Transm. & Distr., Vol.: 9, Issue: 6, pp. 531–539, 2015.
* [5] J. Sprooten et al., “Power system restoration with high penetration level of renewable generation,” SASGC 2014.
* [6] E. Preston et al., “Evaluation of year 2020 IEEE RTS generation reliability indices,” IEEE ICPMAPS 2018.
* [7] C. Grigg et al., “The IEEE Reliability Test System-1996. A report prepared by the Reliability Test System Task Force of the Application of Probability Methods Subcommittee,” IEEE Trans. Power Syst., Vol.: 14, Issue: 3, pp. 1010–1020, 1999.
* [8] NREL, Golden CO, “Reliability Test System of the Grid Modernization Lab Consortium (RTS-GMLC) test system.” [Online]. Available: https://github.com/GridMod/RTS-GMLC
* [9] Northwest Power and Conservation Council, “Power Supply,” Updated: May 2018. [Online]. Available: https://www.nwcouncil.org/energy/energy-topics/power-supply
* [10] Oregon State Department of Energy, “Electricity Mix in Oregon.” [Online]. Available: https://www.oregon.gov/energy/energy-oregon/pages/electricity-mix-in-oregon.aspx
* [11] Oregon State Department of Energy, “Renewable Portfolio Standard.” [Online]. Available: https://www.oregon.gov/energy/energy-oregon/Pages/Renewable-Portfolio-Standard.aspx
* [12] Washington State Department of Commerce, “Energy Independence Act (EIA or I-937).” [Online]. Available: https://www.commerce.wa.gov/growing-the-economy/energy/energy-independence-act/
* [13] A. Elsayed et al., “A new priority list unit commitment method for large-scale power systems,” IMEPSC 2017.
* [14] V. Raj et al., “Analysis of unit commitment problem through Lagrange relaxation and priority listing method,”PIIC 2014.
* [15] F. Capitanescu, “Critical review of recent advances and further developments needed in AC optimal power flow,” Electric Power Systems Research, Vol.: 136, pp. 57–68, 2016.
* [16] G. Sheble et al., “Unit commitment literature synopsis,” IEEE Trans. Power Syst., Vol.: 9, Issue: 1, pp. 128–135, 1994.
* [17] T. Logenthiran et al., “Formulation of Unit Commitment problems and analysis of available methodologies used for solving the problems,” IEEE ICSET 2010.
* [18] M. Tahanan et al., “Large-scale Unit Commitment under uncertainty,” 4OR-Q, Vol.: 13, Issue: 2, pp. 115–171, 2015.
* [19] S. Virmani et al., “Implementation of a Lagrangian relaxation based unit commitment problem,” IEEE Trans. Power Syst., Vol.: 4, Issue: 4, pp. 1373–1380, 1989.
* [20] E. Bixby et al., “MIP: Theory and Practice — Closing the gap”, CSMO 1999.
* [21] H. Daneshi et al., “Mixed integer programming method to solve security constrained unit commitment with restricted operating zone limits,” IEEE ICEIT 2008.
* [22] D. Streiffert et al., “A mixed integer programming solution for market clearing and reliability analysis,” IEEE PESGM 2005.
* [23] X. Guan et al., “Optimization based methods for unit commitment: Lagrangian relaxation versus general mixed integer programming,” IEEE PESGM 2003.
* [24] T. Li et al., “Price-based unit commitment: a case of Lagrangian relaxation versus mixed integer programming,” IEEE Trans. Power Syst., Vol.: 20, Issue: 4, pp. 2015–2025, 2005.
* [25] MathWorks, “MATLAB Examples - Script 9: Unit Commitment.” [Online]. Available: https://www.mathworks.com/examples/matlab/community/36286-script-9-unit-commitment
* [26] MathWorks, “MATLAB R2018b Documentation.”
* [27] R. Zimmerman et al., “MATPOWER: Steady-State Operations, Planning and Analysis Tools for Power Systems Research and Education,” IEEE Trans. Power Syst., Vol.: 26, Issue: 1, pp. 12–19, 2011.
# Periods of Hilbert Modular forms, Kronecker series and Cohomology
YoungJu Choie, Department of Mathematics, Pohang University of Science and
Technology (POSTECH), Pohang, Republic of Korea<EMAIL_ADDRESS>
###### Abstract.
Generalizing a result of [24, 8] about elliptic modular forms, we give a
closed formula for the sum of all Hilbert Hecke eigenforms over a totally real
number field with strict class number $1$, multiplied by their period
polynomials, as a single product of the Kronecker series.
###### Key words and phrases:
Hilbert modular form, parabolic cohomology, period polynomial, Kronecker
series
###### 2000 Mathematics Subject Classification:
11F41, 11F50, 11F60, 11F67
This work was partially supported by NRF 2018R1A4A1023590 and NRF
2017R1A2B2001807
## 1\. Introduction
Based on Bol’s result [3] Eichler initiated a theory of the periods of
integrals so that an automorphic form of the first or second kind leads to a
cohomology class in the mapping of a Fuchsian group into a polynomial module
and the (converse) correspondence of each such cohomology class leads to an
automorphic form in one complex variable [9]. Shimura extended this theory by
showing that the structure of an abelian variety in certain cases can be also
given to the periods of such integrals and showed critical values of the
$L$-functions attached to elliptic modular forms can be computed explicitly
using the cohomology group [20]. This method was developed by Manin [16] who
proved an algebraic theorem for the periods of elliptic cusp forms for the
full modular group and studied $p$-adic properties of the algebraic factors in
$L$-functions. Kohnen-Zagier [14] further extended this theory to elliptic
modular forms including Eisenstein series [14] and studied forms whose period
polynomials have arithmetically interesting rational structure relating to
Bernoulli numbers, binary quadratic forms, zeta-functions of real quadratic
fields, modular forms of half-integral weight and Hilbert modular forms.
Hence, period polynomials, which allow us to compute the critical values of
$L$-function of modular forms at once, give a rich source of relations between
modular forms and arithmetic.
The period polynomial of an elliptic cusp form $f(\tau)=\sum_{\ell\geq
1}a_{f}(\ell)q^{\ell}\,\,(\tau\in\mathbb{H}=$ upper half plane, $q=e^{2\pi
i\tau})$ of weight $k$ on $SL_{2}(\mathbb{Z})$ is the polynomial of degree
$k-2$ defined by
(1.1) $\displaystyle r_{f}(X)=\int_{0}^{i\infty}f(\tau)(\tau-X)^{k-2}d\tau$
or equivalently by
$r_{f}(X)=-\sum_{n=0}^{k-2}\frac{(k-2)!}{(k-2-n)!}\frac{L(f,n+1)}{(2\pi
i)^{n+1}}X^{k-2-n},$
where $L(f,s)=\sum_{n\geq 1}\frac{a_{f}(n)}{n^{s}}\,(Re(s)\gg 0).$ The maps
$f\rightarrow r_{f}^{ev}$ and $f\rightarrow r_{f}^{od}$ assigning to $f$ the
even and odd parts of $r_{f}$ are both injective with known images from the
Eichler-Shimura-Manin theory.
When $f$ is a Hecke eigenform then one has the two-variable polynomial
$\displaystyle
r_{f}(X,Y):=\frac{r_{f}^{ev}(X)r_{f}^{od}(Y)+r_{f}^{od}(X)r_{f}^{ev}(Y)}{(2i)^{k-3}<f,\,f>}\in\mathbb{Q}_{f}[X,Y]$
where $\mathbb{Q}_{f}$ is the field generated by Fourier coefficients of $f$
over $\mathbb{Q}.$
Zagier [24] found the following attractive formula:
$\displaystyle\frac{(XY-1)(X+Y)}{X^{2}Y^{2}}T^{-2}+\sum_{k\geq
2}\sum_{f\in\mathcal{B}_{k}}r_{f}(X,Y)f(\tau)\frac{T^{k-2}}{(k-2)!}$
$\displaystyle=$ $\displaystyle
F_{\tau}(T,-XYT)F_{\tau}(XT,YT),\,F_{\tau}(u,v)=\frac{\theta^{{}^{\prime}}(0)\theta(u+v)}{\theta(u)\theta(v)}$
where
$\theta(u)=\sum_{n\in\mathbb{Z}}(-1)^{n}q^{\frac{1}{2}(n+\frac{1}{2})^{2}}e^{(n+\frac{1}{2})u}$
is a Jacobi theta function and $\mathcal{B}_{k}$ is the set of all Hecke
eigenforms of weight $k$ on $SL_{2}(\mathbb{Z}).$
Zagier's identity (1) relates a generating function, which contains all Hecke
eigenforms together with all of their critical values, to the Jacobi form
$F_{\tau}(u,v).$ Expansions with respect to the variable $T$ give an algorithm
to compute Hecke eigenforms (see [24] for more details).
It took almost 30 years to see that such an identity (1) is not accidental but
also exists for the general groups $\Gamma_{0}(N)$ (see [8]). It now seems
natural to ask whether one can obtain such a relation for general automorphic
forms. In this paper we derive such a formula, namely, an identity between a
generating function of periods of Hilbert modular forms over totally real
number fields with strict class number one and Jacobi forms (see Theorem 2.2).
The function $F_{\tau}(u,v)$ in (1) was introduced by Kronecker in a more
general form (see page 70 in [23]), and several of its properties have been
explored by Zagier [24]. The essential property of $F_{\tau}(u,v)$ is that it
can be identified with a sum of derivatives of Eisenstein series (the
“Kuznetsov lifting”), and using this fact we are able to extend Zagier's
identity to totally real number fields. The main result of this paper exhibits
the first connection between the Kronecker series and the critical $L$-values
of Hilbert modular forms over totally real number fields. It also gives a
systematic way to compute Hilbert Hecke eigenforms and the special values of
$L$-functions by taking expansions of the Kronecker series.
This paper is organized as follows: in Section 2 we state (Main) Theorem 2.2
after introducing the necessary notation. In Section 3 the analog of the
Eisenstein-Kronecker series over a totally real number field and the
rationality of period polynomials of Hilbert modular forms are discussed.
Section 4 gives a detailed proof of the Main Theorem. Finally, we comment on
the connection between parabolic cohomology and the period theory of Hilbert
modular forms and, in conclusion, discuss a possible application of the
Kronecker series to evaluating the special $L$-values of general automorphic
forms.
Acknowledgement I would like to thank the referees for numerous helpful
comments and suggestions which greatly improved the exposition of this paper.
## 2\. Notations and statement of Main Theorem
### 2.1. Notation
* •
$\mathbb{F}:$ a totally real number field of degree $t$ with discriminant $D$
and strict class number $1$
* •
$\mathcal{O}:$ the ring of integers of $\mathbb{F}$ containing a unit of
negative norm
* •
$\mathcal{O}^{*}:$ the group of units of $\mathcal{O}$
* •
$\mathcal{O}^{*,+}:$ the group of totally positive units of $\mathcal{O}$
* •
$U^{+}=\\{\epsilon^{2}\,:\,\epsilon\in\mathcal{O}^{*,+}\\}$
* •
$\alpha\succ 0:\,$ a totally positive element
* •
$\alpha_{1}\rightarrow\alpha_{2},\cdots,\alpha_{t}$ for the conjugation
* •
$\mathcal{N}(\alpha)=\prod_{j=1}^{t}\alpha_{j}$ the norm
* •
$tr(\alpha)=\sum_{j=1}^{t}\alpha_{j}$ the trace
* •
$\mathfrak{D}:$ the different of $\mathbb{F}$
* •
$\zeta_{\mathbb{F}}(s)=\sum_{{c}\subset\mathcal{O}}\frac{1}{\mathcal{N}({c})^{s}},$
where $s\in\mathbb{C}$ and the integral ideal $c$
* •
$|\mathbf{r}|=\sum_{i=1}^{t}r_{i},\,\mathbf{r}+\mathbf{r}^{\prime}=(r_{1}+r_{1}^{\prime},\cdots,r_{t}+r_{t}^{\prime}),\,$
$\Gamma(\mathbf{r}+\mathbf{1})=\mathbf{r}!=r_{1}!\cdots
r_{t}!,\,\left(\begin{smallmatrix}\mathbf{r}\\\
\mathbf{r}^{\prime}\end{smallmatrix}\right)=\left(\begin{smallmatrix}r_{1}\\\
r_{1}^{\prime}\end{smallmatrix}\right)\left(\begin{smallmatrix}r_{2}\\\
r_{2}^{\prime}\end{smallmatrix}\right)\cdots\left(\begin{smallmatrix}r_{t}\\\
r_{t}^{\prime}\end{smallmatrix}\right)$ for
$\mathbf{r}=(r_{1},r_{2},\cdots,r_{t}),\mathbf{r}^{\prime}=(r_{1}^{\prime},r_{2}^{\prime},\cdots,r_{t}^{\prime})\in\mathbb{Z}_{\geq
0}^{t}$
* •
$z^{\mathbf{r}}={z_{1}}^{r_{1}}{z_{2}}^{r_{2}}\cdots{z_{t}}^{r_{t}},\,tr(\mathbf{m}z)=\sum_{j=1}^{t}m_{j}z_{j},\,\mathcal{N}(z)=\prod_{j=1}^{t}z_{j}$
for $z=(z_{1},z_{2},\cdots,z_{t})\in\mathbb{C}^{t}$ and
${\mathbf{m}}\in\mathbb{F}$
* •
$\mathbb{H}^{t}:$ the $t$-copies of complex upper half plane $\mathbb{H}$
* •
$\tau=(\tau_{1},\cdots,\tau_{t})=x+\sqrt{-1}y\in\mathbb{H}^{t},x=(x_{1},\cdots,x_{t})\in\mathbb{R}^{t},y=(y_{1},\cdots,y_{t})\in(\mathbb{R}^{+})^{t},\,q=\prod_{j=1}^{t}q_{j},\,q_{j}=e^{2\pi
i\tau_{j}},\,1\leq j\leq t.$
* •
$\sigma=(\sigma_{1},\cdots,\sigma_{t})\in\Gamma=SL_{2}(\mathcal{O})^{t}:$ an
element in the Hilbert modular group
* •
The action of the group $\Gamma$, which is embedded into
$SL_{2}(\mathbb{R})\times\cdots\times SL_{2}(\mathbb{R}),$ on $\mathbb{H}^{t}$
is given by linear fractional transformations
$\left(\begin{smallmatrix}a&b\\\
c&d\end{smallmatrix}\right)\tau=\frac{a\tau+b}{c\tau+d}=\bigl{(}\frac{a_{1}\tau_{1}+b_{1}}{c_{1}\tau_{1}+d_{1}},\cdots,\frac{a_{t}\tau_{t}+b_{t}}{c_{t}\tau_{t}+d_{t}}\bigr{)},\tau=(\tau_{1},\cdots,\tau_{t})\in\mathbb{H}^{t}$
* •
For a holomorphic function $\chi$ on $\mathbb{H}^{t},$
$\chi^{(\ell)}(\tau)=\frac{\partial^{|\ell|}}{\partial{\tau}^{\ell}}\chi(\tau):=\frac{\partial^{|\ell|}}{\partial{\tau_{1}}^{\ell_{1}}\cdots\partial{\tau_{t}}^{\ell_{t}}}\chi(\tau),\,\,\,\forall\ell=(\ell_{1},\cdots,\ell_{t})\in\mathbb{Z}^{t}_{\geq
0}$
* •
$\mathbb{D}^{\ell}\bigl{(}\chi(\tau)\bigr{)}:=\chi^{(\ell)}(\tau)$
* •
$S_{\mathbf{k}}\subset M_{\mathbf{k}}:$ the space of Hilbert cusp forms
$\subset$ the space of Hilbert modular forms on $\Gamma$ of parallel weight
$\mathbf{k}=(k,\cdots,k),$ even $k\geq 2.$
* •
$\mathcal{B}^{0}_{k}\subset\mathcal{B}_{k}:$ a basis, consisting of all
normalized Hecke eigenforms, of $S_{\mathbf{k}}\subset M_{\mathbf{k}},$
respectively
* •
$\mathbb{Q}_{f}:$ the field spanned by Fourier coefficients of $f$ over
$\mathbb{Q}$
* •
$\bigl{<}f\,,\,g\bigr{>}:=\int_{\Gamma\backslash\mathbb{H}^{t}}f(\tau)\overline{g(\tau)}\frac{dx\,dy}{y^{2}},\,$
the Petersson inner product for $f\in S_{\mathbf{k}},g\in M_{\mathbf{k}}$
* •
For a function $f$ on $\mathbb{H}^{t}$ and
$\mathbf{\ell}=(\ell_{1},\cdots,\ell_{t})\in\mathbb{Z}^{t},$
$\displaystyle(f|_{\ell}\sigma)(z):=(cz+d)^{-\mathbf{\ell}}f(\frac{az+b}{cz+d}),\sigma=\left(\begin{smallmatrix}a&b\\\
c&d\end{smallmatrix}\right)\in\Gamma$
### 2.2. Statement of Main Theorem
Take a cusp form $f(\tau)=\sum_{\mathfrak{D}^{-1}\ni\nu\succ
0}a_{f}(\nu)e^{2\pi itr(\nu\tau)}$ in $S_{\mathbf{k}}$ and consider the
complete $L$-function of $f:$ for $s\in\mathbb{C},$
$\Lambda(f,s):=\int_{\mathbb{R}_{+}^{t}/U^{+}}f(iy)y^{s-1}\,dy=D^{s}(2\pi)^{-ts}\Gamma(s)^{t}L(f,s),$
where $\,L(f,s)=\sum_{\mathfrak{D}^{-1}/U^{+}\ni\nu\succ
0}\frac{a_{f}(\nu)}{\mathcal{N}(\nu)^{s}}\,\,\,(Re(s)\gg 0).$ It is well-known
that $\Lambda(f,s)$ has an analytic continuation and functional equation [4,
11]
$\displaystyle\Lambda(f,s)=(-1)^{\frac{tk}{2}}\Lambda(f,k-s).$
Consider the following polynomials in $X=(X_{1},\cdots,X_{t})$, called the
even (odd) period polynomial associated to $f:$
$\displaystyle R_{f}^{ev}(X)$ $\displaystyle:=$
$\displaystyle\sum_{\tiny{\begin{array}[]{cc}0\leq n\leq k-2\\\ n\equiv
0\pmod{2}\end{array}}}\frac{\Gamma({k}-{1})^{t}}{\Gamma(n+1)^{t}\Gamma(k-n-1)^{t}}R_{{k-2-n}}(f)\mathcal{N}(X)^{{n}},$
$\displaystyle R_{f}^{od}(X)$ $\displaystyle:=$
$\displaystyle\sum_{\tiny{\begin{array}[]{cc}0<n<k-2\\\ n\equiv
1\pmod{2}\end{array}}}\frac{(-1)^{nt}\Gamma({k}-{1})^{t}}{\Gamma(n+1)^{t}\Gamma(k-n-1)^{t}}R_{{k-2-n}}(f)\mathcal{N}(X)^{{n}},$
$\displaystyle
R_{f}(X):=(-1)^{t}\bigl{(}R_{f}^{ev}(X)+R_{f}^{od}(X)\bigr{)},\,\,$
where
$\displaystyle
R_{{n}}(f):=\int_{{\mathbb{R}_{+}}^{t}/U^{+}}f(\tau)\tau^{n}d\tau=i^{t(n+1)}\Lambda(f,n+1).$
Using the functional equation of $\Lambda(f,s)$ we get
$\displaystyle R_{{k-2-n}}(f)=(-1)^{t(n+1)}R_{{n}}(f)$
and so we get
$\mathcal{N}(X)^{k-2}R_{f}(-\frac{1}{X})=(-1)^{t}R_{f}(X).$
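In more detail, the first relation is the following one-line verification
(added for the reader's convenience), using $R_{n}(f)=i^{t(n+1)}\Lambda(f,n+1)$
and the functional equation:
$R_{k-2-n}(f)=i^{t(k-1-n)}\Lambda(f,k-1-n)=i^{t(k-1-n)}(-1)^{\frac{tk}{2}}\Lambda(f,n+1)=(-1)^{t(k-n-1)}R_{n}(f)=(-1)^{t(n+1)}R_{n}(f),$
since $i^{t(k-1-n)}(-1)^{\frac{tk}{2}}=i^{t(2k-n-1)}=(-1)^{t(k-n-1)}\,i^{t(n+1)}$
and $k$ is even.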
Let $f$ be a primitive (Hilbert) Hecke eigenform and consider the polynomial
in the $2t$ variables $X=(X_{1},\cdots,X_{t})$ and $Y=(Y_{1},\cdots,Y_{t})$
$\displaystyle
R_{f}(X,Y):=(-1)^{t}\frac{R^{ev}_{f}(X)R^{od}_{f}(Y)+R^{ev}_{f}(Y)R^{od}_{f}(X)}{D^{k-\frac{1}{2}}\,(2i)^{t(k-3)}<f,\,f>}\in\mathbb{C}[X,Y].$
It transforms under $\sigma\in Gal(\mathbb{C}/\mathbb{Q})$ by
$R_{\sigma(f)}=\sigma(R_{f})$ so that $R_{f}(X,Y)$ has coefficients in the
number field $\mathbb{Q}_{f}$ generated by the Fourier coefficients of $f.$
Summing over the basis $\mathcal{B}_{k}^{0},$ consisting of all normalized
Hecke eigenforms of $S_{\mathbf{k}},$ the following function
(2.3) $\displaystyle
C_{k}^{cusp}(X,Y;\tau):=\sum_{f\in\mathcal{B}_{k}^{0}}R_{f}(X,Y)f(\tau)$
is in $\mathbb{Q}[[q]][X,Y]$ for each even integer $k\geq 2.$ Further we
extend the definition of $R_{f}(X,Y)$ (see section 3.2) to non-cusp forms and
include the Eisenstein series in the sum (2.3). Then we define
(2.4) $\displaystyle
C_{k}(X,Y;\tau):=\sum_{f\in\mathcal{B}_{k}}R_{f}(X,Y)f(\tau).$
###### Example 2.1.
Take $\mathbb{F}=\mathbb{Q}(\sqrt{5}).$
1. (1)
$\displaystyle C_{2}(X,Y;\tau)$
$\displaystyle=\sum_{f\in\mathcal{B}_{k}}R_{f}(X,Y)f(\tau)=2^{4}\cdot 3\cdot
5\frac{\bigl{(}\mathcal{N}(X)+\mathcal{N}(Y)\bigr{)}\bigl{(}\mathcal{N}(XY)+1\bigr{)}}{\mathcal{N}(XY)}G_{\mathbb{F},2}(\tau),$
with the normalized Eisenstein series of weight $(k,k)$ on $\Gamma$ given by
$\displaystyle
G_{\mathbb{F},k}(\tau)=\frac{\zeta_{\mathbb{F}}(1-k)}{2^{2}}+\sum_{\mathcal{D}^{-1}\ni\nu\succ
0}\sigma_{k-1}(\nu\mathcal{D})e^{2\pi
itr(\nu\tau)},\sigma_{r}(\mathfrak{n})=\sum_{\mathfrak{c}|\mathfrak{n}}\mathcal{N}(\mathfrak{c})^{r}.$
2. (2)
Let $\mathbb{F}=\mathbb{Q}(\sqrt{5})$ and take the unique cusp form $f$ of
weight $8$ on $\Gamma.$ Using the example in [1] we get
$\frac{R_{f}^{ev}(X)R_{f}^{od}(Y)}{5^{\frac{15}{2}}(2i)^{10}<f,f>}=c(1+\frac{361}{2^{2}\cdot
5}X^{2}+\frac{361}{2^{2}\cdot 5}X^{4}+X^{6})(Y+\frac{2}{3}Y^{3}+Y^{5})$
up to rational constant multiple $c.$
Combining all these functions into a single generating function, we define
(2.5) $\displaystyle C(X,Y;\tau;T)$
$\displaystyle:=\frac{(\mathcal{N}(X)+\mathcal{N}(Y))(\mathcal{N}(XY)+(-1)^{t})}{\mathcal{N}(XYT)^{2}}+\sum_{k\geq
2}C_{k}(X,Y;\tau)\frac{\mathcal{N}(T)^{{k-2}}}{\Gamma(k-1)^{t}}.$
On the other hand, consider
(2.6) $\displaystyle\,\,F_{\tau}(u,v):=(-2)^{t}\sum_{k\geq
0}\widetilde{{G}_{\mathbb{F},{k}}}(\tau,\frac{uv}{2\pi
i})(\mathcal{N}(u)^{{k-1}}+\mathcal{N}(v)^{{k-1}}),u,v\in\mathbb{C}^{t},$
where
(2.9)
$\displaystyle\widetilde{{G}_{\mathbb{F},{k}}}(\tau,\lambda):=\biggl{\\{}\begin{array}[]{cc}\sum_{\ell=(\ell_{1},\cdots,\ell_{t})\in\mathbb{Z}^{t}_{\geq
0}}\frac{{\lambda}^{\ell}}{{\ell}!({\ell}+\mathbf{{k}-{1}})!}\mathbb{D}^{\ell}\bigl{(}{G}_{\mathbb{F},{k}}(\tau)\bigr{)}&\mbox{
if $k\geq 2$}\\\ \frac{1}{2^{t}}&\mbox{if
$k=0$}\end{array}\biggr{\\}},\lambda\in\mathbb{C}^{t}$
and a normalized Hilbert Eisenstein series $G_{\mathbb{F},{k}}(\tau)$ (p 20 in
[11]) defined by
$\displaystyle E_{\mathbb{F},{k}}(\tau)$ $\displaystyle:=$
$\displaystyle\frac{D^{\frac{1}{2}-k}(2\pi
i)^{tk}}{\Gamma(k)^{t}}\bigl{(}\frac{1}{2^{t}}\zeta_{\mathbb{F}}(1-k)+\sum_{\tiny{\begin{array}[]{cc}\nu\in\mathfrak{D}^{-1}\\\
\nu\succ 0\end{array}}}\sigma_{k-1}(\nu\mathfrak{D})e^{2\pi
itr(\nu\tau)}\bigr{)}$ $\displaystyle:=$
$\displaystyle\frac{D^{\frac{1}{2}-k}(2\pi
i)^{tk}}{\Gamma(k)^{t}}G_{\mathbb{F},{k}}(\tau),\,\sigma_{r}(\mathfrak{n})=\sum_{\mathfrak{c}|\mathfrak{n}}\mathcal{N}(\mathfrak{c})^{r}.$
Now we state the main result of this paper:
###### Theorem 2.2.
(Main Theorem) Let $C(X,Y;\tau;T)$ be the generating function of the periods
of Hilbert modular forms given in (2.5). Then we have
1. (1)
$C(X,Y;\tau;T)\in\frac{1}{\mathcal{N}(XYT)^{2}}{\mathbb{Q}}[X,Y][[q,T]].$
2. (2)
$C(X,Y;\tau;T)=F_{\tau}(T,-XYT)\,F_{\tau}(XT,YT).$
###### Remark 2.1.
$\widetilde{{G}_{\mathbb{F},{k}}}(\tau,\lambda)$ in (2.9) is the ”Kuznetsov
lifting” of the Hilbert Eisenstein series $G_{\mathbb{F},\mathbf{k}}(\tau).$
Its modular transformation property is known (see Theorem 2 in [6]): for any
$\left(\begin{smallmatrix}a&b\\\ c&d\end{smallmatrix}\right)\in\Gamma,k\geq
2,$
$\mathcal{N}(c\tau+d)^{-{k}}e^{-tr(\frac{c\lambda}{c\tau+d})}\widetilde{{G}_{\mathbb{F},{k}}}(\frac{a\tau+b}{c\tau+d},\frac{\lambda}{c\tau+d})=\widetilde{{G}_{\mathbb{F},{k}}}(\tau,\lambda)$
and its generating function $\mathcal{F}_{\tau}(u,v):=(-2)^{t}\sum_{k\geq
2}\widetilde{{G}_{\mathbb{F},{k}}}(\tau,\frac{uv}{2\pi
i})(\mathcal{N}(u)^{{k-1}}+\mathcal{N}(v)^{{k-1}})$ behaves as a Jacobi-like
form [10, 7] with a modular transformation property
$\mathcal{F}_{\frac{a\tau+b}{c\tau+d}}(\frac{u}{c\tau+d},\frac{v}{c\tau+d})=\mathcal{N}(c\tau+d)\,e^{tr(\frac{cuv}{c\tau+d})}{\mathcal{F}}_{\tau}(u,v),\forall\left(\begin{smallmatrix}a&b\\\
c&d\end{smallmatrix}\right)\in\Gamma.$
## 3\. Algebraicity and Periods of Hecke Eigenforms
### 3.1. Algebraicity
The study of period relations for automorphic forms was started by Shimura. He
showed the existence of relations up to factors in $\overline{\mathbb{Q}}^{*}$
in many instances and made a general conjecture relating periods of Hilbert
modular varieties and their compact analogs, that is, the quaternionic modular
varieties [17, 18]. There is a weaker conjecture, which states that a product
of two periods, called the quadratic periods, may be interpreted, up to
algebraic factors, as a Petersson inner product. This was proved by M. Harris
[12] under a certain technical condition. More precisely, the values
$\Lambda(f,m+1)$ for $0\leq m\leq k-2$ are called the critical values.
###### Theorem 3.1.
(Theorem 4.3 in [19]) Let $f$ be a Hilbert Hecke eigenform of weight
$\mathbf{k}=(k,\cdots,k)$ over a totally real number field $\mathbb{F}$ of
degree $t$ and $\sigma\in Gal(\overline{\mathbb{Q}}/\mathbb{Q}).$
1. (1)
For each $r\in\mathbb{Z}^{t}/2\mathbb{Z}^{t}$ and for $f^{\sigma},\sigma\in
Gal(\overline{\mathbb{Q}}/\mathbb{Q}),$ there exist nonzero complex numbers
$\omega_{f}^{r}$ such that $(\frac{L(f,m)}{(2\pi
i)^{tm}\omega_{f}^{r}})^{\sigma}=\frac{L(f^{\sigma},m)}{(2\pi
i)^{tm}\omega_{f^{\sigma}}^{r}},$ for any integer $m$ such that $0<m<k.$
2. (2)
$\frac{L(f,m)}{(2\pi i)^{tm}\omega_{f}^{r}}\in\mathbb{Q}_{f}.$
3. (3)
If $p=(p_{1},\cdots,p_{t}),r=(r_{1},\cdots,r_{t})$ with $p_{i}+r_{i}\equiv
1\pmod{2},1\leq\forall i\leq t,$ we have $\frac{w_{f}^{p}\cdot
w_{f}^{r}}{<f\,,f>}\in\mathbb{Q}_{f}$ and $\bigl{(}\frac{w_{f}^{p}\cdot
w_{f}^{r}}{<f\,,f>}\bigr{)}^{\sigma}=\frac{w_{f^{\sigma}}^{p}\cdot
w_{f^{\sigma}}^{r}}{<f^{\sigma}\,,f^{\sigma}>}.$
### 3.2. Period of non-cusp forms
Since $\mathcal{B}_{k}$ in (2.4) contains non-cusp forms, one needs to explain
the “period function” corresponding to a non-cusp form $f.$ Take a non-cusp
form $f(\tau)=\sum_{0\preceq\nu\in\mathfrak{D}^{-1}}a_{f}(\nu)e^{2\pi
itr(\nu\tau)}$ in $M_{\mathbf{k}}$ and consider
$\Lambda(f,s):=\int_{(\mathbb{R}_{+})^{t}/U^{+}}\bigl{(}f(iy)-a_{f}(0)\bigr{)}y^{{s}-1}dy=D^{s}(2\pi)^{-ts}\Gamma(s)^{t}L(f,s),s\in\mathbb{C}.$
It has a meromorphic continuation to $\mathbb{C}$ and satisfies the functional
equation $\Lambda(f,s)=(-1)^{\frac{tk}{2}}\Lambda(f,k-s),$ but now has simple
poles, with residues $-a_{f}(0)$ and $(-1)^{kt}a_{f}(0)$ up to a constant
multiple, at $s=0$ and $s=k,$ respectively.
Define
$\displaystyle R_{f}(X):=\frac{(-1)^{t}\sqrt{D}\cdot
a_{f}(0)}{({k-1})^{t}}(\mathcal{N}(X)^{{k-1}}+(-1)^{t}\mathcal{N}(X)^{-{1}})$
$\displaystyle+$
$\displaystyle\sum_{n=0}^{k-2}(-1)^{\frac{t(k+n-1)}{2}}\frac{\Gamma({k-1})^{t}}{\Gamma(n+1)^{t}\Gamma(k-n-1)^{t}}\Lambda(f,k-1-n)\mathcal{N}(X)^{n}.$
The assumption that $\mathbb{F}$ has strict class number $1$ implies that the
space of Hilbert modular forms is a direct sum (see [4], p 12)
$M_{\mathbf{k}}=S_{\mathbf{k}}\oplus<G_{\mathbb{F},k}>$
and so (2.4) becomes
$C_{k}(X,Y;\tau)=\sum_{f\in\mathcal{B}_{k}^{0}}R_{f}(X,Y)f(\tau)+R_{G_{\mathbb{F},k}}(X,Y)G_{\mathbb{F},k}(\tau),$
where $R_{G_{\mathbb{F},k}}(X,Y),$ defined by
(3.2) $\displaystyle
R_{G_{\mathbb{F},k}}(X,Y)=(-1)^{t}\frac{R_{G_{\mathbb{F},k}}^{ev}(X)R_{G_{\mathbb{F},k}}^{od}(Y)+R_{G_{\mathbb{F},k}}^{ev}(Y)R_{G_{\mathbb{F},k}}^{od}(X)}{D^{k-\frac{1}{2}}(2i)^{t(k-3)}<G_{\mathbb{F},k},G_{\mathbb{F},k}>},$
is a symmetrized sum of products of period polynomials of the normalized Hecke
Eisenstein series $G_{\mathbb{F},{k}}(\tau)$, given as follows:
###### Proposition 3.2.
Take
$w_{G_{\mathbb{F},{k}}}^{-}=\frac{\sqrt{D}\Gamma(k-1)^{t}}{2^{t}},\,\,w_{G_{\mathbb{F},{k}}}^{+}=\frac{D^{k-\frac{3}{2}}\zeta_{\mathbb{F}}(k-1)}{(2\pi
i)^{t(k-1)}}w_{G_{\mathbb{F},{k}}}^{-}$
and
$\displaystyle p_{{k}}^{+}(X)=\mathcal{N}(X)^{k-2}+(-1)^{t},\,$ $\displaystyle
p_{{k}}^{-}(X)=\sum_{-1\leq n\leq k-1,n\equiv
1\pmod{2}}\frac{\zeta_{\mathbb{F}}(1-(n+1))\zeta_{\mathbb{F}}(n+2-k)}{\Gamma(n+1)^{t}\Gamma(k-n-1)^{t}}{\mathcal{N}(X)}^{n}.$
For $k\geq 2$ the period function of $G_{\mathbb{F},{k}}(\tau)$ is given by
(3.3) $\displaystyle
R_{G_{\mathbb{F},{k}}}(X)=(-1)^{t}\bigl{(}w_{G_{\mathbb{F},{k}}}^{-}\cdot
p_{{k}}^{-}(X)+w_{G_{\mathbb{F},{k}}}^{+}\cdot p_{{k}}^{+}(X)\bigr{)}$
so that
$R_{G_{\mathbb{F},{k}}}^{ev}(X)=w_{G_{\mathbb{F},{k}}}^{+}\cdot
p_{{k}}^{+}(X)\mbox{ and
}R_{G_{\mathbb{F},{k}}}^{od}(X)=w_{G_{\mathbb{F},{k}}}^{-}\cdot
p_{{k}}^{-}(X).$
###### Remark 3.3.
1. (1)
Like in the case of an elliptic modular form (see [14]), the period function
$R_{G_{\mathbb{F},{k}}}(X)$ is $\frac{1}{\mathcal{N}(X)}$ times a polynomial :
$R_{G_{\mathbb{F},{k}}}(X)\in\frac{1}{\mathcal{N}(X)}\mathbb{C}[X].$
2. (2)
[11] Note that
$\frac{D^{n-\frac{1}{2}}\zeta_{\mathbb{F}}(n)\Gamma(n)^{t}}{(2\pi
i)^{tn}}=\frac{\zeta_{\mathbb{F}}(1-n)}{2^{t}}$ and $\zeta_{\mathbb{F}}(-n)=0$
for any positive even integer $n.$
3. (3)
1. (a)
[26] Let $\mathbb{F}$ be a real quadratic field with discriminant $D.$ It is
known that $\zeta_{\mathbb{F}}(1-n)=B_{n}B_{n,\chi}$ for even positive integer
$n.$ $B_{r}$ and $B_{r,\chi}$ are the $r$th Bernoulli number
$(B_{0}=1,B_{1}=-\frac{1}{2},B_{2}=\frac{1}{6},\cdots)$ and the $r$th twisted
Bernoulli number
$(B_{1,\chi}=\frac{1}{D}\sum_{a=1}^{D}\chi(a)a,B_{2,\chi}=\frac{1}{D}\sum_{a=1}^{D}\chi(a)a^{2}-\sum_{a=1}^{D}\chi(a)a,B_{3,\chi}=\cdots),$
respectively. Here $\chi\pmod{D}$ is a primitive character defined by
$\chi(\cdot)=\bigl{(}\frac{D}{\cdot}\bigr{)}.$
2. (b)
(open problem) It is well known that the generating functions of $B_{j}$ and
$B_{j,\chi}$ are
$\sum_{n=0}^{\infty}B_{n}\frac{t^{n}}{n!}=\frac{te^{t}}{e^{t}-1}\mbox{\, and
}\sum_{m\geq
0}B_{m,\chi}\frac{t^{m}}{m!}=\sum_{j=1}^{D}\frac{\chi(j)te^{jt}}{e^{Dt}-1}.$
Similarly, it would be interesting to express the generating function
$\sum_{m\geq 2}^{\infty}\zeta_{\mathbb{F}}(1-m)\frac{t^{m}}{m!}=\sum_{m\geq
2}^{\infty}\frac{B_{m}B_{m,\chi}}{m^{2}}\frac{t^{m}}{m!}$
in terms of elementary functions.
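As a numerical sanity check of the normalization in (a) (this code is not from the paper; it assumes SymPy and uses the classical value $\zeta_{\mathbb{Q}(\sqrt{5})}(-1)=1/30$):

```python
from sympy import Rational, bernoulli, legendre_symbol

D = 5                                     # discriminant of Q(sqrt(5))
chi = lambda a: 0 if a % D == 0 else legendre_symbol(a, D)

def B_chi(n):
    # twisted Bernoulli number via Bernoulli polynomials:
    # B_{n,chi} = D^(n-1) * sum_{a=1}^{D} chi(a) * B_n(a/D),
    # which reproduces the explicit sums for B_{1,chi} and B_{2,chi} above
    return D**(n - 1) * sum(chi(a) * bernoulli(n, Rational(a, D))
                            for a in range(1, D + 1))

n = 2
print(bernoulli(n), B_chi(n))             # B_2 = 1/6, B_{2,chi} = 4/5
print(bernoulli(n) * B_chi(n) / n**2)     # 1/30 = zeta_F(-1) for F = Q(sqrt 5)
```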
Proof of Proposition 3.2: The period polynomial of $G_{\mathbb{F},k}(\tau)$
can be computed from the definition in (3.2): using
$\Lambda(G_{\mathbb{F},k},n+1)=\frac{D^{n+1}\Gamma(n+1)^{t}}{(2\pi)^{t(n+1)}}\zeta_{\mathbb{F}}(n+1)\zeta_{\mathbb{F}}(n+2-k),$
we have
$\displaystyle
R_{G_{\mathbb{F},k}}(X)=\frac{(-1)^{t}\sqrt{D}\zeta_{\mathbb{F}}(1-k)}{2^{t}(k-1)^{t}}(\mathcal{N}(X)^{k-1}+(-1)^{t}\mathcal{N}(X)^{-1})$
$\displaystyle+\frac{(-1)^{t}{D}^{k-1}\Gamma(k-1)^{t}\zeta_{\mathbb{F}}(k-1)}{2^{t}(2\pi
i)^{t(k-1)}}\bigl{(}\mathcal{N}(X)^{k-2}+(-1)^{t}\bigr{)}$
$\displaystyle+\sum_{n=1}^{k-3}(-1)^{\frac{t(k+n-1)}{2}}\frac{\Gamma({k-1})^{t}D^{k-n-1}}{(2\pi)^{t(k-n-1)}\Gamma(n+1)^{t}}\zeta_{\mathbb{F}}(1-(n+1))\zeta_{\mathbb{F}}(k-1-n)\mathcal{N}(X)^{{n}}$
(using the functional equation of $\zeta_{\mathbb{F}}(s)$ of Remark 3.3-(2))
$\displaystyle=\frac{(-1)^{t}{D}^{k-1}\Gamma(k-1)^{t}\zeta_{\mathbb{F}}(k-1)}{2^{t}(2\pi
i)^{t(k-1)}}\bigl{(}\mathcal{N}(X)^{k-2}+(-1)^{t}\bigr{)}$
$\displaystyle+\frac{(-1)^{t}\sqrt{D}\Gamma(k-1)^{t}}{2^{t}}\sum_{-1\leq
n\equiv 1\pmod{2}\leq
k-1}\frac{\zeta_{\mathbb{F}}(1-(n+1))\zeta_{\mathbb{F}}(n+2-k)}{\Gamma(n+1)^{t}\Gamma(k-n-1)^{t}}\mathcal{N}(X)^{n}$
$\displaystyle=(-1)^{t}\bigl{(}\omega_{G_{\mathbb{F},k}}^{+}p_{{k}}^{+}(X)+\omega_{G_{\mathbb{F},k}}^{-}p_{{k}}^{-}(X)\bigr{)}.$
This completes the proof. ∎
#### 3.2.1. Petersson scalar product of Eisenstein Series
The Petersson scalar product of an $SL_{2}(\mathbb{Z})$-invariant function was
defined by Zagier [25] using the Rankin-Selberg method. Similarly we have
###### Proposition 3.4.
$<G_{\mathbb{F},{k}}(\tau),G_{\mathbb{F},{k}}(\tau)>=\frac{\Gamma(k-1)^{t}\zeta_{\mathbb{F}}(k-1)}{(4\pi)^{t(k-1)}}\frac{\zeta_{\mathbb{F}}(1-k)}{2^{t}}$
Proof of Proposition 3.4: Following the method in [25], the Petersson norm of
the Hilbert Eisenstein series $G_{\mathbb{F},{k}}(\tau)$ can be computed as
$\displaystyle<G_{\mathbb{F},{k}}(\tau),G_{\mathbb{F},{k}}(\tau)>=(-1)^{\frac{tk}{2}}(4\pi)^{-tk}\Gamma(k)^{t}\cdot\zeta^{*}_{\mathbb{F}}(k)\zeta^{*}_{\mathbb{F}}(2-k)$
where
$\zeta_{\mathbb{F}}^{*}(s):=D^{\frac{s}{2}}\pi^{-\frac{ts}{2}}\Gamma(\frac{s}{2})^{t}\zeta_{\mathbb{F}}(s)=\zeta_{\mathbb{F}}^{*}(1-s)$
(see p 57 [26]). Using the identities
$\Gamma(\frac{k}{2})\Gamma(\frac{k-1}{2})=\Gamma(k-1)\sqrt{\pi}2^{-(k-2)}$ and
$\zeta_{\mathbb{F}}(k)=D^{-k+\frac{1}{2}}\frac{(2\pi
i)^{tk}}{2^{t}\Gamma(k)^{t}}\zeta_{\mathbb{F}}(1-k)$ we get the Petersson norm
of the Eisenstein series $G_{\mathbb{F},k}.$ ∎
Now write the function $C_{k}(X,Y;\tau)$ in (2.4) as a sum of
$C^{cusp}_{k}(X,Y;\tau)$ in (2.3) and
$C^{Eis}_{k}(X,Y;\tau):=R_{G_{\mathbb{F},k}}(X,Y)G_{\mathbb{F},k}(\tau):$
$C_{k}(X,Y;\tau)=C^{cusp}_{k}(X,Y;\tau)+C^{Eis}_{k}(X,Y;\tau).$
###### Proposition 3.5.
We have
1. (1)
$C^{Eis}_{k}(X,Y;\tau)=(-1)^{t}\frac{2^{t}\Gamma(k-1)^{t}}{\zeta_{\mathbb{F}}(1-k)}(p^{+}_{k}(X)p_{k}^{-}(Y)+p^{+}_{k}(Y)p_{k}^{-}(X))G_{\mathbb{F},k}(\tau).$
2. (2)
$C(X,Y;\tau;T)=F_{\tau}(XT,YT)F_{\tau}(T,-XYT)\mbox{ as
$\tau\rightarrow(i\infty,\cdots,i\infty)$}$
Proof of Proposition 3.5:
1. (1)
From (3.2) recall that
$R_{G_{\mathbb{F},k}}(X,Y)=(-1)^{t}\frac{R_{G_{\mathbb{F},k}}^{ev}(X)R_{G_{\mathbb{F},k}}^{od}(Y)+R_{G_{\mathbb{F},k}}^{ev}(Y)R_{G_{\mathbb{F},k}}^{od}(X)}{D^{k-\frac{1}{2}}(2i)^{t(k-3)}<G_{\mathbb{F},k},G_{\mathbb{F},k}>}.$
So, Proposition 3.2 and Proposition 3.4 imply that
$\displaystyle
C^{Eis}_{k}(X,Y;\tau)=R_{G_{\mathbb{F},k}}(X,Y)G_{\mathbb{F},k}(\tau)$
$\displaystyle=\frac{\omega^{+}_{G_{\mathbb{F},k}}\omega^{-}_{G_{\mathbb{F},k}}\bigl{(}p^{+}_{k}(X)p^{-}_{k}(Y)+p^{+}_{k}(Y)p^{-}_{k}(X)\bigr{)}}{D^{k-\frac{1}{2}}(2i)^{t(k-3)}<G_{\mathbb{F},k},\,G_{\mathbb{F},k}>}G_{\mathbb{F},k}(\tau)$
$\displaystyle=(-1)^{t}\frac{2^{t}\Gamma(k-1)^{t}}{\zeta_{\mathbb{F}}(1-k)}(p^{+}_{k}(X)p_{k}^{-}(Y)+p^{+}_{k}(Y)p_{k}^{-}(X))G_{\mathbb{F},k}(\tau).$
2. (2)
Using Proposition 3.5 part (1) the value of $C(X,Y;\tau;T)$ as
$\tau\rightarrow(i\infty,\cdots,i\infty)$ is
$\displaystyle C(X,Y;(i\infty,\cdots,i\infty);T)$
$\displaystyle=\frac{(\mathcal{N}(X)+\mathcal{N}(Y))(\mathcal{N}(XY)+(-1)^{t})}{\mathcal{N}(XYT)^{2}}+\sum_{k\geq
2}C_{k}^{Eis}(X,Y;(i\infty,\cdots,i\infty))\frac{\mathcal{N}(T)^{k-2}}{\Gamma(k-1)^{t}}$
$\displaystyle=$
$\displaystyle\frac{(\mathcal{N}(X)+\mathcal{N}(Y))(\mathcal{N}(XY)+(-1)^{t})}{\mathcal{N}(XYT)^{2}}+(-1)^{t}\sum_{k\geq
2}\bigl{(}p_{{k}}^{+}(X)p_{{k}}^{-}(Y)+p_{{k}}^{+}(Y)p_{{k}}^{-}(X)\bigr{)}{\mathcal{N}(T)^{k-2}}$
since $G_{\mathbb{F},k}(i\infty)=\frac{\zeta_{\mathbb{F}}(1-k)}{2^{t}}.$ On
the other hand, a direct computation shows that
$\displaystyle
F_{\tau}(T,-XYT)F_{\tau}(XT,YT)|_{\tau\rightarrow(i\infty,\cdots,i\infty)}$
$\displaystyle=\frac{(\mathcal{N}(X)+\mathcal{N}(Y))(\mathcal{N}(XY)+(-1)^{t})}{\mathcal{N}(XYT)^{2}}+(-1)^{t}\sum_{k\geq
2}\bigl{(}p^{+}_{k}(X)p^{-}_{k}(Y)+p^{+}_{k}(Y)p^{-}_{k}(X)\bigr{)}\mathcal{N}(T)^{k-2}$
This completes the proof of Proposition 3.5.
∎
## 4\. Proofs
### 4.1. Proof of Theorem 2.2 (Main Theorem)
$(1)$ Using Theorem 3.1 with a proper choice of the Petersson norm $<f,\,f>,$
we see that
$\displaystyle
R_{f}(X,Y):=(-1)^{t}\frac{R^{ev}_{f}(X)R^{od}_{f}(Y)+R^{ev}_{f}(Y)R^{od}_{f}(X)}{D^{k-\frac{1}{2}}\,(2i)^{t(k-3)}<f,\,f>}\in\mathbb{Q}_{f}[X,Y],f\in
S_{\mathbf{k}}.$
With an action of $\sigma\in Gal(\mathbb{C}/\mathbb{Q}_{f})$ by
$R_{\sigma(f)}=\sigma(R_{f})$ we see that
$\displaystyle
C(X,Y;\tau;T)=\frac{(\mathcal{N}(X)+\mathcal{N}(Y))(\mathcal{N}(XY)+(-1)^{t})}{\mathcal{N}(XYT)^{2}}$
$\displaystyle+$ $\displaystyle\sum_{k\geq
2}\sum_{f\in\mathcal{B}_{k}}R_{f}(X,Y)f(\tau)\frac{\mathcal{N}(T)^{{k-2}}}{\Gamma(k-1)^{t}}\in\frac{1}{\mathcal{N}(XYT)^{2}}\mathbb{Q}[X,Y][[q,T]].$
This proves rationality of $C(X,Y;\tau;T).$
$(2)$ To prove Theorem 2.2 part (2) write the Taylor expansion (2.6)
$\displaystyle F_{\tau}(u,v)$ $\displaystyle=$
$\displaystyle\frac{1}{\mathcal{N}(u)}+\frac{1}{\mathcal{N}(v)}$
$\displaystyle+$ $\displaystyle(-2)^{t}\sum_{k\geq
2}\sum_{\ell\in\mathbb{Z}_{\geq
0}^{t}}\frac{\mathbb{D}^{\ell}\bigl{(}G_{\mathbb{F},k}(\tau)\bigr{)}}{(2\pi
i)^{\ell}\ell!(\ell+\mathbf{k}-\mathbf{1})!}(u^{\ell}v^{\ell+\mathbf{k}-1}+u^{\ell+\mathbf{k}-1}v^{\ell})$
or write it as
(4.1) $\displaystyle
F_{\tau}(u,v)=\sum_{\mathbf{h}=(h,\cdots,h),\mathbf{\ell}=(\ell_{1},\cdots,\ell_{t})}g_{{h},\mathbf{\ell}}(\tau)(u^{\ell}v^{\ell+\mathbf{h}-\mathbf{1}}+u^{\ell+\mathbf{h}-\mathbf{1}}v^{\ell})$
with
$g_{{h},\mathbf{\ell}}(\tau)=\left\\{\begin{array}[]{ccll}\frac{(-2)^{t}}{\,(2\pi
i)^{\ell}\Gamma(\mathbf{\ell}+\mathbf{1})\Gamma(\mathbf{\ell}+\mathbf{h})}\mathbb{D}^{\ell}(G_{\mathbb{F},{h}}(\tau)),&\mbox{\,
if $h\geq 2,\ell_{i}\geq 0,i=1,\cdots,t$}\\\ \frac{1}{2^{t}},&\mbox{\, if
${h}=0,\ell=(0,\cdots,0)$}\\\ 0,&\mbox{\, otherwise}\end{array}\right\\}.$
Next let
$F_{\tau}(T,-XYT)F_{\tau}(XT,YT)=\sum_{{k}\geq
0}b_{\mathbf{k}}(X,Y;\tau)\frac{\mathcal{N}(T)^{{k}-{2}}}{\Gamma(k-1)^{t}}.$
Since we have already checked that
$C(X,Y;(i\infty,\cdots,i\infty),T)=F_{\tau}(T,-XYT)F_{\tau}(XT,YT)|_{\tau\rightarrow(i\infty,\cdots,i\infty)}$
in Proposition 3.5, it is enough to confirm that
$\frac{\bigl{<}b_{\mathbf{k}}(X,Y;\cdot),f(\cdot)\bigr{>}}{\bigl{<}f,\,f\bigr{>}}=R_{f}(X,Y)\mbox{\,\,
for each $f\in\mathcal{B}^{0}_{k}$}.$
From the expression (4.1) we see
(4.6) $\displaystyle b_{\mathbf{k}}(X,Y;\tau)$ $\displaystyle=$
$\displaystyle\sum_{\tiny{\begin{array}[]{ccc}\ell,\mathbf{h},\ell^{\prime},\mathbf{h}^{\prime}\\\
\mathbf{h}+\mathbf{h}^{\prime}+{2}(\mathbf{\ell}+\mathbf{\ell}^{\prime})=\mathbf{k}\\\
\mathbf{h}=(h,\cdots,h),\mathbf{h}^{\prime}=(h^{\prime},\cdots,h^{\prime})\end{array}}}g_{{h},\mathbf{\ell}}(\tau)g_{{h}^{\prime},\mathbf{\ell}^{\prime}}(\tau)$
$\displaystyle\times[(-XY)^{\mathbf{\ell}+\mathbf{h}-\mathbf{1}}+(-XY)^{\mathbf{\ell}}][X^{\mathbf{\ell}^{\prime}}Y^{\mathbf{\ell}^{\prime}+\mathbf{h}^{\prime}-\mathbf{1}}+X^{\mathbf{\ell}^{\prime}+\mathbf{h}^{\prime}-\mathbf{1}}Y^{\mathbf{\ell}^{\prime}}]$
The coefficients of $\mathcal{N}(X)^{{p}}\mathcal{N}(Y)^{{q}}$ with $q$ or $p$
equal to $-1$ or to $k-1$ involve only $G_{\mathbb{F},k}(\tau)$ and have
already been treated in Proposition 3.5. Also, the coefficient of
$\mathcal{N}(X)^{p}\mathcal{N}(Y)^{q}$ in (4.6) is invariant under
$q\leftrightarrow k-2-p$ and $q\leftrightarrow p$, so that we may assume $0\leq
p<q\leq\frac{k-2}{2}.$ For such $p,q$ the coefficient of
$\mathcal{N}(X)^{{p}}\mathcal{N}(Y)^{{q}}$ in (4.6) equals
$\displaystyle\sum_{\substack{\ell,\ell^{\prime}\succeq-\mathbf{1}\\ \ell+\ell^{\prime}=\mathbf{p}=(p,\cdots,p)}}(-1)^{|\ell|}g_{{k}-{p}-{q}-{1},\mathbf{\ell}}(\tau)\,g_{{q}-{p}+{1},\mathbf{\ell}^{\prime}}(\tau)=\sum_{\substack{\ell,\ell^{\prime}\succeq-\mathbf{1}\\ \ell+\ell^{\prime}=\mathbf{p}}}2^{2t}\,\frac{(-1)^{|\ell|}\,\mathbb{D}^{\ell}(G_{\mathbb{F},k-p-q-1})\,\mathbb{D}^{\ell^{\prime}}(G_{\mathbb{F},q-p+1})}{\ell!\,(\ell+\mathbf{k-p-q}-\mathbf{2})!\,{\ell^{\prime}}!\,(\mathbf{q}-\ell^{\prime})!}=\frac{2^{2t}}{\Gamma({q}+{1})^{t}\,\Gamma({k}-{q}-{1})^{t}}\,[G_{\mathbb{F},{q}-{p}+{1}},G_{\mathbb{F},{k}-{q}-{p}-{1}}]^{Hil}_{\mathbf{p}}.$
Here, $[\cdot\,,\,\cdot]^{Hil}_{\mathbf{p}}$ denotes the $\mathbf{p}=(p,\cdots,p)$th Rankin-Cohen bracket (see Corollary 1 in [6]), defined by
$\displaystyle[G_{\mathbb{F},{q}-{p}+{1}},G_{\mathbb{F},{k}-{q}-{p}-{1}}]^{Hil}_{\mathbf{p}}:=\frac{1}{(2\pi i)^{tp}}\sum_{\substack{0\leq\ell_{i}\leq p\\ \ell=(\ell_{1},\cdots,\ell_{t})\\ \ell+\ell^{\prime}=\mathbf{p}}}\frac{(-1)^{|\mathbf{\ell}|}\,\Gamma({q}+{1})^{t}\,\Gamma({k}-{q}-{1})^{t}}{{\ell^{\prime}}!\,(\mathbf{{k}+{\ell^{\prime}}-{q}-{p}-{2}})!\,{\ell}!\,(\mathbf{{q}-{\ell}})!}\,\mathbb{D}^{\mathbf{\ell}}(G_{\mathbb{F},{q}-{p}+{1}})\,\mathbb{D}^{{\ell^{\prime}}}(G_{\mathbb{F},{k}-{q}-{p}-{1}}).$
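As a quick sanity check of this normalization (our remark; it follows directly from the displayed definition), specialize to $\mathbf{p}=0$: only the term $\ell=\ell^{\prime}=0$ contributes, the Gamma factors cancel since ${\ell^{\prime}}!\,(\mathbf{k}-\mathbf{q}-\mathbf{2})!\,{\ell}!\,\mathbf{q}!=\Gamma(k-q-1)^{t}\Gamma(q+1)^{t}$, and the zeroth bracket reduces to the ordinary product
$[G_{\mathbb{F},{q}+{1}},\,G_{\mathbb{F},{k}-{q}-{1}}]^{Hil}_{\mathbf{0}}=G_{\mathbb{F},{q}+{1}}(\tau)\,G_{\mathbb{F},{k}-{q}-{1}}(\tau),$
as expected of a Rankin-Cohen bracket of order zero.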
On the other hand we recall the following result (Theorem 3 in [6]) :
###### Theorem 4.1.
[6] Suppose that $f(\tau)=\sum_{\mathcal{D}^{-1}\ni\nu\succ
0}a_{f}(\nu)e^{2\pi itr(\nu\tau)}\in S_{\mathbf{k}}$ and
$g(\tau)=\sum_{\mathcal{D}^{-1}\ni\nu\succeq 0}a_{g}(\nu)e^{2\pi
itr(\nu\tau)}\in M_{\mathbf{k}_{2}}$ with $k=k_{1}+k_{2}+2p>2.$ Then
$\frac{D^{\frac{1}{2}-k_{1}}(2\pi
i)^{tk_{1}}}{\Gamma(k_{1})^{t}}<f,[G_{\mathbb{F},k_{1}},g_{k_{2}}]_{\mathbf{p}}>=\frac{\Gamma(k-1)^{t}\Gamma(k_{1}+p)^{t}}{(4\pi)^{t(k-1)}\Gamma(k_{1})^{t}\Gamma(p+1)^{t}}\sum_{\nu\succ
0}\frac{a_{f}(\nu)\overline{a_{g}(\nu)}}{\mathcal{N}(\nu)^{k-p-1}}$
Taking $g_{k_{2}}(\tau)=G_{\mathbb{F},k_{2}}(\tau)$ in Theorem 4.1 we get
$\displaystyle<f,[G_{\mathbb{F},k_{1}},G_{\mathbb{F},k_{2}}]_{\mathbf{p}}>=(-1)^{\frac{tk_{1}}{2}}\frac{D^{k_{1}-\frac{1}{2}}\Gamma(k-1)^{2}\Gamma(k_{1}+p)^{t}}{2^{t(k-1)}(2\pi)^{t(k+k_{1}-1)}\Gamma(p+1)^{t}}\sum_{\nu>0}\frac{a_{f}(\nu)\sigma_{k_{2}-1}(\nu)}{\mathcal{N}(\nu)^{k-p-1}}$
$\displaystyle=(-1)^{\frac{tk_{1}}{2}}\frac{D^{k_{1}-\frac{1}{2}}\Gamma(k-1)^{t}\Gamma(k_{1}+p)^{t}}{2^{t(k-1)}(2\pi)^{t(k+k_{1}-1)}\Gamma(p+1)^{t}}L(f,k-p-1)L(f,k_{1}+p)$
$\displaystyle(\mbox{since
$R_{n}(f)=i^{t(n+1)}D^{n+1}(2\pi)^{-t(n+1)}\Gamma(n+1)^{t}L(f,n+1)$})$
$\displaystyle=\frac{(-1)^{\frac{t(k-1)}{2}}\Gamma(k-1)^{t}}{D^{k-\frac{1}{2}}2^{t(k-1)}\Gamma(p+1)^{t}\Gamma(k-p-1)^{t}}R_{k-p-2}(f)R_{k_{1}+p-1}(f).$
And so we have that
$\displaystyle\sum_{f\in\mathcal{B}^{0}_{k}}\frac{R_{k-p-2}(f)R_{k_{1}+p-1}(f)f(\tau)}{i^{t(k-1)}D^{k-\frac{1}{2}}2^{t(k-3)}<f,f>}=\frac{2^{2t}\Gamma(p+1)^{t}\Gamma(k-p-1)^{t}}{\Gamma(k-1)^{t}}[G_{\mathbb{F},k_{1}},G_{\mathbb{F},k_{2}}]^{Hil}_{\mathbf{p}}.$
Now take $k_{1}=k-q-p-1,k_{2}=q+1-p$ for $q+p\equiv 1\pmod{2},p,q>0,$ to get
$\displaystyle\sum_{f\in\mathcal{B}^{0}_{k}}(-1)^{t}\frac{R_{{k-q-2}}(f)R_{{k-p-2}}(f)}{D^{k-\frac{1}{2}}(2i)^{t(k-3)}<f,\,f>}f(\tau)=\frac{2^{2t}\Gamma({k-p-1})^{t}\Gamma({p+1})^{t}}{\Gamma({k-1})^{t}}[G_{\mathbb{F},{k-1-q-p}},\,G_{\mathbb{F},{q+1-p}}]^{Hil}_{\mathbf{p}}.$
Since
$\displaystyle\sum_{f\in\mathcal{B}^{0}_{k}}R_{f}(X,Y)f(\tau)=\sum_{f\in\mathcal{B}^{0}_{k}}(-1)^{t}\frac{R_{f}^{ev}(X)R_{f}^{od}(Y)+R_{f}^{od}(X)R_{f}^{ev}(Y)}{D^{k-\frac{1}{2}}(2i)^{t(k-3)}<f,\,f>}f(\tau)$
we get
$\frac{\bigl{<}\,b_{\mathbf{k}}(X,Y;\cdot),\,f(\cdot)\,\bigr{>}}{\bigl{<}\,f,\,f\,\bigr{>}}=R_{f}(X,Y).$
Combining these computations with Proposition 3.5, we conclude that
$\displaystyle C(X,Y;\tau;T)=F_{\tau}(T,-XYT)\,F_{\tau}(XT,YT)$
which completes the proof. ∎
## 5\. Conclusion
One of the main reasons for the importance of modular forms in number theory is that spaces of modular forms are generated by forms with rational Fourier coefficients. The “period theory” gives another natural rational structure on modular forms. A striking result of Zagier [24] states that this rational information about modular forms can be packaged into a single product of Kronecker series $F_{\tau}(u,v)$, which is a Jacobi form. The recent results in [2, 21] show that Eisenstein-Kronecker numbers have a rich arithmetic nature, such as a connection with the special values of Hecke $L$-functions over imaginary quadratic fields and Katz's two-variable $p$-adic Eisenstein measure.
In this paper, we identified the Kronecker series as a “Kuznetsov lifting” of holomorphic Hilbert Eisenstein series over totally real number fields with strict class number 1. This is the first case connecting the Kronecker series to the critical values of Hilbert modular $L$-functions over a totally real number field, and it seems worthwhile to explore the hidden arithmetic relations further.
On the other hand, in terms of geometric interpretation, a modular form can be regarded as a section of a certain sheaf of differential forms on the open modular curve attached to a congruence subgroup $\Gamma\subset SL_{2}(\mathbb{Z})$. Noting that the singular cohomology of the open modular curve is given by the group cohomology $H^{*}(\Gamma,W)$, the comparison of de Rham and singular cohomology gives an Eichler isomorphism. Matsushima and Murakami [15] extended these results to show that the space of automorphic forms on a symmetric space $M$ is isomorphic to $H^{*}(M,S)$ for a certain locally constant sheaf $S$ over $M$. The cohomology of Hilbert surfaces in terms of Hilbert cusp forms has been studied by many researchers, including [11, 22]. Relating the critical values of $L$-functions of Hilbert cusp forms to cohomology was first studied by Yoshida [22]. Following the work of Knopp [13] and Kohnen-Zagier [14], which sheds considerable new light on Eichler-Shimura isomorphisms, such as rational structures of elliptic modular forms, we are able to associate the space of Hilbert modular forms over totally real number fields to the parabolic cohomology group in terms of the period polynomial, by taking anti-derivatives of Hilbert modular forms [5].
## References
* [1] A. Babei, L. Rolen and I. Wagner, The Riemann hypothesis for period polynomials of Hilbert modular forms, Journal of Number Theory, Vol. 218, January 2021, pp. 44 - 61.
* [2] K. Bannai and S. Kobayashi, Algebraic theta functions and the p-adic interpolation of Eisenstein-Kronecker numbers, Duke Mathematical Journal, 153 (2010), no. 2, 229 - 295.
* [3] G. Bol, Invarianten linearer Differentialgleichungen, Abhandlungen aus dem Mathematischen Seminar der Universität Hamburg 16 (1949), no. 3-4, 1 - 28.
* [4] J. H. Bruinier, G. van der Geer, G. Harder and D. Zagier, The $1-2-3$ of modular forms, Lectures from the Summer School on Modular Forms and their Applications held in Nordfjordeid, June 2004, Edited by Kristian Ranestad, Universitext. Springer-Verlag, Berlin, 2008.
* [5] Y. Choie, Parabolic cohomology and Hilbert modular forms, Preprint (2021) .
* [6] Y. Choie, H. Kim and O. Richter, Differential operators on Hilbert modular forms, Journal of Number Theory 122 (2007), no. 1, 25-36.
* [7] Y. Choie and M. Lee, Jacobi-like forms, Pseudodifferential operators, and Quasimodular forms, Springer Monographs in Mathematics (2019).
* [8] Y. Choie, Y. Park and D. Zagier, Periods of modular forms on $\Gamma_{0}(N)$ and Products of Jacobi Theta functions, Journal of the European Mathematical Society, Vol. 21, Issue 5, 1379 - 1410 (2019).
* [9] M. Eichler, Eine Verallgemeinerung der Abelschen Integrale, Mathematische Zeitschrift, vol. 67 (1957) pp 267-298.
* [10] M. Eichler and D. Zagier, The theory of Jacobi forms, Progress in Mathematics, 55, Birkhäuser Boston, Inc., Boston, MA, 1985.
* [11] G. van der Geer, Hilbert modular surfaces, Ergebnisse der Mathematik und ihrer Grenzgebiete (3) [Results in Mathematics and Related Areas (3)], 16, Springer-Verlag, Berlin, 1988.
* [12] M. Harris, $L$-functions of $2\times 2$ unitary groups and factorization of periods of Hilbert modular forms, Journal of the American Mathematical Society 6 (1993), no. 3, 637 - 719.
* [13] M. Knopp, Some new results on the Eichler cohomology of automorphic forms, Bulletin of the American Mathematical Society, 80 (1974), 607 - 632.
* [14] W. Kohnen and D. Zagier, Modular forms with rational periods, Modular forms (Durham, 1983), 197-249, Ellis Horwood Series in Mathematics and its Applications: Statistics and Operational Research, Horwood, Chichester, 1984.
* [15] Y. Matsushima and S. Murakami, On vector bundle valued harmonic forms and automorphic forms on symmetric Riemannian manifolds, Annals of Mathematics (2) 78 (1963), 365 - 416.
* [16] Y. Manin, Periods of cusp forms, and $p$-adic Hecke series, (Russian) Matematicheskii Sbornik (N.S.) 92 (134) (1973), 378 - 401, 503.
* [17] G. Shimura, On the critical values of certain Dirichlet series and the periods of automorphic forms, Inventiones Mathematicae 94 (1988), no. 2, 245 - 305.
* [18] $\underline{\makebox[40.00006pt]{}}$, Algebraic relations between critical values of zeta functions and inner products, American Journal of Mathematics, 105 (1983), no. 1, 253 - 285.
* [19] $\underline{\makebox[40.00006pt]{}}$, The special values of the zeta functions associated with Hilbert modular forms, Duke Mathematical Journal, 45 (1978), no. 3, 637 - 679.
* [20] $\underline{\makebox[40.00006pt]{}}$, Sur les intégrales attachées aux formes automorphes, Journal of the Mathematical Society of Japan 11 (1959), 291 - 311.
* [21] J. Sprang, Eisenstein-Kronecker series via the Poincaré bundle, Forum of Mathematics, Sigma 7 (2019), No. e34, 59 pp.
* [22] H. Yoshida, Absolute CM-periods. Mathematical Surveys and Monographs, 106, American Mathematical Society, Providence, RI, 2003.
* [23] A. Weil, Elliptic functions according to Eisenstein and Kronecker, Reprint of the 1976 original, Classics in Mathematics, Springer-Verlag, Berlin, 1999.
* [24] D. Zagier, Periods of modular forms and Jacobi theta functions, Inventiones Mathematicae, 104, 449 - 465 (1991).
* [25] $\underline{\makebox[40.00006pt]{}}$, The Rankin-Selberg method for automorphic functions which are not of rapid decay, Journal of the Faculty of Science, University of Tokyo, Section IA, Mathematics 28 (1981), no. 3, 415 - 437 (1982).
* [26] $\underline{\makebox[40.00006pt]{}}$, On the values at negative integers of the Zeta-functions of a real quadratic fields, L’Enseignement Mathématique, 22 (1976), no. 1 - 2, 55 - 95.
# A positive operator-valued measure for two-photon detection via sum-
frequency generation
Sofiane Merkouche (corresponding author), Valérian Thiel, and Brian J. Smith
Oregon Center for Optical, Molecular, and Quantum Science, and Department of Physics, University of Oregon, Eugene, OR 97403
###### Abstract
Spontaneous parametric down conversion (PDC), in the perturbative limit, can
be considered as a probabilistic splitting of one input photon into two output
photons. Conversely, sum-frequency generation (SFG) implements the reverse
process of combining two input photons into one. Here we show that a single-
photon projective measurement in the temporal-mode basis of the output photon
of a two-photon SFG process effects a generalized measurement on the input
two-photon state. We describe the positive operator-valued measure (POVM)
associated with such a measurement, and show that its elements are
proportional to the two-photon states produced by the time-reversed PDC
process. Such a detection acts as a joint measurement on two photons, and is
thus an important component of many quantum information processing protocols
relying on photonic entanglement. Using the retrodictive approach, we analyze
the properties of the two-photon POVM that are relevant for quantum protocols
exploiting two-photon states and measurements.
## I Introduction
Entangled photon pairs are an extremely useful system for studying both the
fundamentals Aspect (2015) and applications of quantum mechanics, and are the
workhorse of experimental quantum optics. This is mainly due to their ease of
generation in the laboratory through spontaneous parametric downconversion
(PDC), whereby a nonlinear medium such as a crystal is pumped with a bright
laser beam and mediates the probabilistic splitting of one pump photon into a
pair of photons, subject to energy and momentum conservation. Over the past
three decades, much progress has been made in the generation of PDC photon
pairs with well-engineered polarization, spectral-temporal, and spatial
structure, exhibiting varying degrees of correlation in all of these degrees
of freedom. Particular attention has been given recently to encoding quantum
information in the spectral-temporal degree of freedom of light. This is
because time-frequency modes of light, generally referred to as temporal
modes, can encode a large amount of information, are particularly well-suited
to integrated optics technology, and are robust to communication channel noise
Brecht _et al._ (2015). In addition, time-frequency entangled photons are
useful for applications such as large-alphabet quantum key distribution Nunn
_et al._ (2013), quantum-enhanced spectroscopy Raymer _et al._ (2013);
Schlawin and Buchleitner (2017); Dayan (2007), and quantum-enhanced sensing
Zhuang _et al._ (2017).
Complementary to two-photon state generation is two-photon joint detection,
which is an example of the more general concept of a joint quantum measurement
on two systems. It is known that joint quantum measurements on separately
prepared systems can inherently reveal more information than accessible
through separate measurements relying on local operations and classical
communication Bennett _et al._ (1999). In addition entangled measurements,
joint measurements whose eigenstates are entangled states, are as crucial a
resource as entangled states in quantum protocols such as quantum
teleportation Bennett _et al._ (1993), remote state preparation Bennett _et
al._ (2001), entanglement swapping Halder _et al._ (2007); Sangouard _et
al._ (2011), superdense coding, and quantum illumination Lloyd (2008). In
fact, the equal footing that entangled states and entangled measurements have
in quantum protocols such as teleportation has only recently been given due
attention Gisin (2019).
One way to implement a two-photon joint measurement is to use the complement
of PDC, sum-frequency generation (SFG). Here two photons interact in a
nonlinear medium and are upconverted to a single photon, conserving energy and
momentum. Two-photon measurement via SFG has been explored theoretically Dayan
(2007) and experimentally Dayan _et al._ (2005). In addition, it has been
pointed out that the theory of two-photon detection by SFG closely parallels
that of two-photon absorption in a molecule, and a unified framework
describing both of these processes can be found in reference Dayan (2007).
In this work we construct and analyze the positive operator valued measure
(POVM) associated with joint two-photon measurements relying on SFG followed
by mode-selective detection of the upconverted photon in the time-frequency
domain. Our development of the two-photon POVM closely parallels that of the
POVM for a single photon detected after a filter, as described in reference
van Enk (2017). We then give some figures of merit for such measurements that
are relevant to some of the aforementioned protocols, namely the projectivity,
orthogonality, and entanglement of the measurement operators. We illustrate
the role of entanglement in measurements with a model of the spectral quantum
teleportation scenario. We conclude by highlighting some questions and
possible future directions left open by this work.
## II Framework
### II.1 The three-wave mixing interaction
We begin by writing down the transformation describing three-wave mixing,
which includes both parametric down-conversion and sum-frequency generation,
in the interaction picture. We assume a given polarization configuration and
assume that all the interacting fields occupy a single transverse spatial
mode, so that only the time-frequency degrees of freedom of the field are
relevant. Under these conditions the transformation may be expressed as
$\begin{gathered}\hat{H}=\hat{H}_{PDC}+\hat{H}_{SFG},\\\
\hat{H}_{PDC}=\chi\int\text{d}\omega_{s}\text{d}\omega_{i}\Phi(\omega_{s},\omega_{i})\
\hat{a}_{p}(\omega_{s}+\omega_{i})\hat{a}^{\dagger}_{s}(\omega_{s})\hat{a}^{\dagger}_{i}(\omega_{i}),\\\
\hat{H}_{SFG}=(\hat{H}_{PDC})^{\dagger},\end{gathered}$ (1)
where $\hat{a}^{(\dagger)}_{j}(\omega_{j})$ is the annihilation (creation)
operator for a single photon at monochromatic mode $j$ with frequency
$\omega_{j}$, and $j=p,s,i$ label the pump, signal, and idler frequencies;
$\chi\ll 1$ is a parameter characterizing the efficiency of the process,
describing the second-order nonlinearity and containing all the parameters
that are constant or slowly-varying over the integration; and
$\Phi(\omega_{s},\omega_{i})$ is the phase-matching function, which has the
form
$\Phi(\omega_{s},\omega_{i})\propto\mathrm{sinc}\left(\frac{\Delta\mathbf{k}\cdot\mathbf{L}}{2}\right),$
(2)
where $\mathbf{L}$ is the vector quantifying the length of the interaction
medium, and
$\Delta\mathbf{k}=\mathbf{k}_{p}(\omega_{s}+\omega_{i})-\mathbf{k}_{s}(\omega_{s})-\mathbf{k}_{i}(\omega_{i})$
is the wavevector mismatch for the three fields. $\Phi$ takes on its maximum
value when $\Delta\mathbf{k}=\mathbf{0}$, and thus corresponds to momentum
conservation in the process. Finally, we have separated the transformation
explicitly into $\hat{H}_{PDC}$, the term responsible for PDC, and its
Hermitian conjugate, $\hat{H}_{SFG}$, responsible for SFG.
The interacting fields evolve unitarily under this transformation, and for our
analysis, we will consider only the weak-interaction limit, so that, for an
input state $\ket{\Psi_{\text{in}}}$, the output state is given by
$\ket{\Psi_{\text{out}}}=\exp{[-i\hat{H}]}\ket{\Psi_{\text{in}}}\approx\left(1-i\hat{H}\right)\ket{\Psi_{\text{in}}}.$
(3)
Note that, in a slight abuse of notation, we are using $\hat{H}$ to reflect
the fact that this transformation is derived from the interaction Hamiltonian
for three-wave mixing, although the latter is a time-dependent quantity with a
different dimensionality (see Appendix A).
### II.2 PDC photon pairs and the joint spectral amplitude
Figure 1: Two-dimensional plot of the magnitude of a typical JSA. The solid
lines contour a Gaussian pump mode $\phi_{p}(\omega_{s}+\omega_{i})$, and the
dashed lines contour the phasematching function $\Phi(\omega_{s},\omega_{i})$.
This shows how spectral correlations arise in the JSA. Frequencies are in
arbitrary units.
It is instructive to briefly review the spectral-temporal structure of photon
pairs generated by PDC, governed by the $\hat{H}_{PDC}$ term. In most
applications PDC is pumped by a strong coherent state occupying a spectral
mode function $\phi_{p}(\omega)$, which can be treated as a classical field
amplitude $E_{p}(\omega)=E_{0}\phi_{p}(\omega)$, where $E_{0}$ quantifies the
field strength, and $\phi_{p}(\omega)$ is normalized as $\int\text{d}\omega\
|\phi_{p}(\omega)|^{2}=1$. However, since we are working in the perturbative
limit, it is equivalent to consider a single-photon pump in the state
$\ket{\Psi_{\text{in}}}=\ket{\phi_{p}}=\int\text{d}\omega\phi_{p}(\omega)\hat{a}_{p}^{\dagger}(\omega)\ket{\text{vac}}.$
(4)
After this state undergoes unitary evolution according to equation (3), we
obtain the output state
$\ket{\Psi_{\text{out}}}=\ket{\phi_{p}}-i\sqrt{w}\ket{\Psi_{\text{PDC}}},$ (5)
where
$\ket{\Psi_{\text{PDC}}}=\frac{\chi}{\sqrt{w}}\int\text{d}\omega_{s}\text{d}\omega_{i}\phi_{p}(\omega_{s}+\omega_{i})\Phi(\omega_{s},\omega_{i})\hat{a}^{\dagger}_{s}(\omega_{s})\hat{a}^{\dagger}_{i}(\omega_{i})\ket{\text{vac}}$
(6)
is a normalized two-photon state, and where
$w=\int\text{d}\omega_{s}\text{d}\omega_{i}|\chi\
\phi_{p}(\omega_{s}+\omega_{i})\Phi(\omega_{s},\omega_{i})|^{2}$ (7)
is a normalization factor.
It is convenient here to define the joint spectral amplitude (JSA)
$f(\omega_{s},\omega_{i})=\frac{\chi}{\sqrt{w}}\phi_{p}(\omega_{s}+\omega_{i})\Phi(\omega_{s},\omega_{i}),$
(8)
so that
$\ket{\Psi_{\text{PDC}}}=\int\text{d}\omega_{s}\text{d}\omega_{i}f(\omega_{s},\omega_{i})\hat{a}^{\dagger}_{s}(\omega_{s})\hat{a}^{\dagger}_{i}(\omega_{i})\ket{\text{vac}}$
(9)
The JSA can be viewed as a two-photon wavefunction, and its modulus squared,
$|f(\omega_{s},\omega_{i})|^{2}$, is the probability density function for the
photon pair in frequency space, normalized as
$\int\text{d}\omega_{s}\text{d}\omega_{i}|f(\omega_{s},\omega_{i})|^{2}=1$.
Considerable progress has been made in engineering the temporal-mode structure
of PDC photon pairs, which is completely characterized by the JSA, and this is
done by shaping of the pump spectral amplitude
$\phi_{p}(\omega_{s}+\omega_{i})$ and engineering of the phasematching
$\Phi(\omega_{s},\omega_{i})$ in the nonlinear medium. We plot schematically
in Fig. 1 a typical JSA configuration showing its dependence on the pump
amplitude and the phasematching function. A thorough review of the state-of-
the-art in two-photon state engineering in the time-frequency domain can be
found in reference Ansari _et al._ (2018a).
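To make the construction concrete, the following numerical sketch (ours, not from any of the cited references; the grid, bandwidths, and toy dispersion are illustrative assumptions) discretizes the JSA of equation (8) for a Gaussian pump and the sinc phase-matching of equation (2), and checks the normalization:

```python
import numpy as np

# Illustrative sketch (ours): discretize the JSA of Eq. (8) on a grid.
# Frequencies are detunings from the central frequencies, arbitrary units.
w = np.linspace(-5, 5, 400)
ws, wi = np.meshgrid(w, w, indexing="ij")

def pump(nu, bw=1.0):
    """Gaussian pump spectral amplitude phi_p(ws + wi)."""
    return np.exp(-nu**2 / (2 * bw**2))

def phasematching(ws, wi, gamma=0.5):
    """sinc phase-matching of Eq. (2), with an assumed toy dispersion
    that linearizes Delta-k in the detunings."""
    dk = gamma * (ws - wi)
    return np.sinc(dk / np.pi)  # np.sinc(x) = sin(pi x)/(pi x)

f = pump(ws + wi) * phasematching(ws, wi)
dw = w[1] - w[0]
f /= np.sqrt(np.sum(np.abs(f)**2) * dw**2)   # normalization of Eq. (8)

print(np.sum(np.abs(f)**2) * dw**2)          # -> 1.0, so |f|^2 is a density
```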
### II.3 Two-photon SFG and the two-photon POVM
Figure 2: PDC uses a $\chi^{(2)}$ interaction medium to convert a single-
photon state $\ket{1_{\phi}}$ in the mode $p$ to a pair of photons in modes
$s$ and $i$, described by the state $\ket{\psi_{\text{PDC}}}$ given in the
text. In the time-reverse picture, a projective measurement
$\hat{\text{P}}_{n}$ of a single photon produced by SFG implements measurement
with POVM element $\hat{\Pi}_{n}$ on the two input photons.
We now turn our attention to the SFG term in equation (1), explicitly given by
$\hat{H}_{SFG}=\chi^{*}\int\text{d}\omega_{s}\text{d}\omega_{i}\Phi^{*}(\omega_{s},\omega_{i})\hat{a}^{\dagger}_{p}(\omega_{s}+\omega_{i})\hat{a}_{s}(\omega_{s})\hat{a}_{i}(\omega_{i})$
(10)
and consider the upconversion of an arbitrary pure two photon state given by
$\ket{\Psi_{\text{in}}}=\ket{\psi_{g}}=\int\text{d}\omega_{s}\text{d}\omega_{i}g(\omega_{s},\omega_{i})\hat{a}^{\dagger}_{s}(\omega_{s})\hat{a}^{\dagger}_{i}(\omega_{i})\ket{\text{vac}},$
(11)
where $g(\omega_{s},\omega_{i})$ is a two-photon JSA. The output state will
then be
$\ket{\Psi_{\text{out}}}=\ket{\psi_{g}}-i\chi^{*}\ket{\sigma},$ (12)
where
$\ket{\sigma}=\int d\nu\sigma(\nu)\hat{a}^{\dagger}_{p}(\nu)\ket{\text{vac}},$
(13)
with the (unnormalized) spectral amplitude function
$\sigma(\nu)=-\frac{1}{2}\int d\nu^{\prime}\
\tilde{\Phi}^{*}\left(\nu,\nu^{\prime}\right)\tilde{g}\left(\nu,\nu^{\prime}\right).$
(14)
We obtain this last equation by changing variables to the sum and difference
frequencies $\nu=\omega_{s}+\omega_{i}$ and
$\nu^{\prime}=\omega_{s}-\omega_{i}$, and defining
$\tilde{\Phi}^{*}(\nu,\nu^{\prime})=\Phi^{*}\left(\frac{\nu+\nu^{\prime}}{2},\frac{\nu-\nu^{\prime}}{2}\right)$
(and likewise for $\tilde{g}(\nu,\nu^{\prime})$).
We are now equipped to develop the two-photon POVM corresponding to a
detection of the upconverted single-photon state $\ket{\sigma}$, which closely
mirrors the one-photon, pre-filter POVM described in reference van Enk (2017).
Consider performing an ideal, projective measurement of the upconverted photon
onto an orthonormal set of temporal mode single photon states
$\\{(\hat{\text{P}}_{n}=\ket{\phi_{n}}\bra{\phi_{n}})_{n=1}^{\infty}\\}$ with
$\ket{\phi_{n}}=\int\text{d}\omega\phi_{n}(\omega)\hat{a}^{\dagger}_{p}(\omega)\ket{\text{vac}},$
(15)
satisfying
$\braket{\phi_{n}}{\phi_{m}}=\int\text{d}\omega\
\phi^{*}_{n}(\omega)\phi_{m}(\omega)=\delta_{nm}.$ (16)
Such a measurement can in principle be realized using a quantum pulse gate,
recently described and demonstrated in references Ansari _et al._ (2018b);
Reddy and Raymer (2018), whereby a strong pump field in a particular temporal
mode selects out that same mode from an input signal field and upconverts it
through SFG to a register mode which can be easily detected with a
spectrometer. The probability for a successful detection for this measurement
will be given by
$\begin{gathered}p_{n}=|\chi^{*}\braket{\phi_{n}}{\sigma}|^{2}\\\
=\left|-\frac{\chi^{*}}{2}\int\ \text{d}\nu\text{d}\nu^{\prime}\
\phi_{n}^{*}(\nu)\tilde{\Phi}^{*}\left(\nu,\nu^{\prime}\right)\tilde{g}\left(\nu,\nu^{\prime}\right)\right|^{2}\\\
=\left|\chi^{*}\int\text{d}\omega_{s}\text{d}\omega_{i}\phi_{n}^{*}(\omega_{s}+\omega_{i})\Phi^{*}(\omega_{s},\omega_{i})g(\omega_{s},\omega_{i})\right|^{2}\end{gathered}$
(17)
However, this same probability can be obtained by applying the Born rule to
the input state
$\hat{\rho}_{\text{in}}=\ket{\Psi_{\text{in}}}\bra{\Psi_{\text{in}}}$ in the
two-photon space:
$p_{n}=\text{Tr}(\hat{\rho}_{\text{in}}\hat{\Pi}_{n}),$ (18)
if we define a POVM element
$\hat{\Pi}_{n}=w_{n}\ket{\Psi_{n}}\bra{\Psi_{n}},$ (19)
where
$\ket{\Psi_{n}}=\frac{\chi}{\sqrt{w_{n}}}\int\text{d}\omega\text{d}\omega^{\prime}\phi_{n}(\omega+\omega^{\prime})\Phi(\omega,\omega^{\prime})\hat{a}^{\dagger}_{s}(\omega)\hat{a}^{\dagger}_{i}(\omega^{\prime})\ket{\text{vac}},$
(20)
and
$w_{n}=\int\text{d}\omega\text{d}\omega^{\prime}|\chi\
\phi_{n}(\omega+\omega^{\prime})\Phi(\omega,\omega^{\prime})|^{2}.$ (21)
We immediately recognize $\ket{\Psi_{n}}$ as the normalized two-photon state
that would result from PDC with a pump photon in the state $\ket{\phi_{n}}$.
That is, a projective measurement of an upconverted photon with projector
$\hat{\text{P}}_{n}=\ket{\phi_{n}}\bra{\phi_{n}}$ implements a generalized
measurement of the two input photons with POVM element $\hat{\Pi}_{n}$. This
is schematically shown in Fig. 2. Furthermore, the properties of
$\hat{\Pi}_{n}$ follow immediately from the properties of the PDC state
$\ket{\Psi_{n}}$, as we will see in the following section. It is convenient to
associate with the POVM element $\hat{\Pi}_{n}$ a measurement JSA
$f_{n}(\omega,\omega^{\prime})=\frac{\chi}{\sqrt{w_{n}}}\phi_{n}(\omega+\omega^{\prime})\Phi(\omega,\omega^{\prime}).$
(22)
To complete the POVM, we note that we are considering an ideal detector in the
SFG mode, such that any upconverted photon is detected with certainty. We are
thus justified in defining an element corresponding to no detection as
$\hat{\Pi}_{\text{null}}=\mathds{1}-\sum_{n=1}^{\infty}\hat{\Pi}_{n},$ (23)
where $\mathds{1}$ denotes the identity operator in the relevant two-photon
subspace. Using the fact that the $\phi_{n}$ mode functions form a complete
orthonormal set, we can evaluate
$\sum_{n=1}^{\infty}\hat{\Pi}_{n}=|\chi|^{2}\int\text{d}\omega\text{d}\omega^{\prime}|\Phi(\omega,\omega^{\prime})|^{2}\ket{\omega,\omega^{\prime}}\bra{\omega,\omega^{\prime}},$
(24)
where
$\ket{\omega,\omega^{\prime}}=\hat{a}^{\dagger}_{s}(\omega)\hat{a}^{\dagger}_{i}(\omega^{\prime})\ket{\text{vac}}$.
Noting that the identity in the two-photon subspace can be resolved as
$\mathds{1}=\int\text{d}\omega\text{d}\omega^{\prime}\ket{\omega,\omega^{\prime}}\bra{\omega,\omega^{\prime}},$
(25)
we can express $\hat{\Pi}_{\text{null}}$ explicitly as
$\hat{\Pi}_{\text{null}}=\int\text{d}\omega\text{d}\omega^{\prime}\left(1-|\chi|^{2}|\Phi(\omega,\omega^{\prime})|^{2}\right)\ket{\omega,\omega^{\prime}}\bra{\omega,\omega^{\prime}}.$
(26)
Finally we may write down the complete two-photon POVM as
$\left\\{(\hat{\Pi}_{n})_{n=1}^{\infty},\hat{\Pi}_{\text{null}}\right\\},$
(27)
satisfying
$\sum_{n=1}^{\infty}\hat{\Pi}_{n}+\hat{\Pi}_{\text{null}}=\mathds{1}.$ (28)
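As a numerical illustration (ours; the Hermite-Gauss detection modes are those of equation (16), while the phase-matching model and all parameter values are assumptions), one can evaluate the probabilities $p_{n}$ of equation (17) on a grid and verify the bound $\sum_{n}p_{n}\leq\max|\chi\Phi|^{2}$ implied by the positivity of $\hat{\Pi}_{\text{null}}$ in equation (26):

```python
import numpy as np
from math import factorial, pi, sqrt
from numpy.polynomial.hermite import hermval

# Illustrative sketch (ours): detection probabilities p_n of Eq. (17) for
# Hermite-Gauss detection modes, and the bound sum_n p_n <= max|chi Phi|^2
# required for Pi_null of Eq. (26) to be a positive operator.
w = np.linspace(-6, 6, 300); dw = w[1] - w[0]
ws, wi = np.meshgrid(w, w, indexing="ij")
chi = 0.01                                   # weak interaction, chi << 1

def hg(n, x, bw=1.0):
    """Orthonormal Hermite-Gauss modes phi_n of Eq. (16)."""
    c = np.zeros(n + 1); c[n] = 1.0
    return (hermval(x / bw, c) * np.exp(-x**2 / (2 * bw**2))
            / sqrt(2**n * factorial(n) * bw * sqrt(pi)))

Phi = np.sinc(0.4 * (ws - wi) / pi)            # assumed toy phase-matching
g = hg(0, ws + wi, 2.0) * hg(0, ws - wi, 0.7)  # test input JSA g, Eq. (11)
g /= sqrt(np.sum(np.abs(g)**2) * dw**2)

p = [abs(chi * np.sum(np.conj(hg(n, ws + wi)) * np.conj(Phi) * g) * dw**2)**2
     for n in range(6)]                        # last line of Eq. (17)
print("p_n:", p)
print(sum(p), "<=", np.max(np.abs(chi * Phi)**2))
```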
## III Properties of the measurement operator
### III.1 Projectivity
We will now take advantage of the well-studied properties of the two-photon
PDC state $\ket{\Psi_{n}}$ to analyze some of the useful properties of the
POVM element $\hat{\Pi}_{n}$. We begin by defining the retrodicted two-photon
state Amri _et al._ (2011), corresponding to an outcome $n$, as
$\hat{\rho}_{n}=\frac{\hat{\Pi}_{n}}{\text{Tr}(\hat{\Pi}_{n})}=\ket{\Psi_{n}}\bra{\Psi_{n}}.$
(29)
We consider the measurement projective if $\hat{\rho}_{n}$ is a pure state satisfying $\text{Tr}(\hat{\rho}_{n}^{2})=1$, which is indeed the case in equation (29).
In general, however, single-photon detectors are not perfectly resolving. In
the case of the quantum pulse gate, a detector click may not correspond to a single pulse mode, but rather to an incoherent mixture of a few modes. In the
case of a non-ideal spectrally resolving detection, one either uses a filter
of finite bandwidth, or a spectrometer with finite resolution. In all of these
cases, it is more accurate to describe a non-ideally resolving, that is, non-
projective, single-photon measurement by
$\hat{\text{P}}_{q}=\sum_{n}q_{n}\hat{\text{P}}_{n}$ (30)
where $0\leq q_{n}\leq 1$ are weighting coefficients. This leads to a two-
photon POVM element
$\hat{\Pi}_{q}=\sum_{n}q_{n}\hat{\Pi}_{n},$ (31)
and a retrodicted state
$\hat{\rho}_{q}=\frac{\hat{\Pi}_{q}}{\text{Tr}(\hat{\Pi}_{q})},$ (32)
which has $\text{Tr}(\hat{\rho}_{q}^{2})\leq 1$ and is not in general a pure
state. Evidently, the two-photon POVM elements are projective if and only if
the single-photon measurement operators are projective.
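A minimal numerical sketch of this point (ours; the weights, modes, and phase-matching are assumed for illustration) computes $\text{Tr}(\hat{\rho}_{q}^{2})$ for a click mixing two detection modes, using $\text{Tr}(\hat{\rho}_{q}^{2})=\sum_{n,m}c_{n}c_{m}|\braket{\Psi_{n}}{\Psi_{m}}|^{2}$ with $c_{n}=q_{n}w_{n}/\sum_{n^{\prime}}q_{n^{\prime}}w_{n^{\prime}}$, which follows by expanding equations (31)-(32):

```python
import numpy as np
from math import factorial, pi, sqrt
from numpy.polynomial.hermite import hermval

# Illustrative sketch (ours): purity of the mixed retrodicted state rho_q
# of Eq. (32) for a non-projective click P_q = q_0 P_0 + q_1 P_1 (Eq. 30).
w = np.linspace(-6, 6, 300); dw = w[1] - w[0]
ws, wi = np.meshgrid(w, w, indexing="ij")

def hg(n, x, bw=1.0):
    c = np.zeros(n + 1); c[n] = 1.0
    return (hermval(x / bw, c) * np.exp(-x**2 / (2 * bw**2))
            / sqrt(2**n * factorial(n) * bw * sqrt(pi)))

Phi = np.exp(-(ws - 0.3 * wi)**2 / 2)        # assumed toy phase-matching

def psi(n):                                  # retrodicted JSA of Eq. (20)
    f = hg(n, ws + wi) * Phi
    return f / sqrt(np.sum(np.abs(f)**2) * dw**2)

def wn(n):                                   # weight w_n of Eq. (21), chi = 1
    return np.sum(np.abs(hg(n, ws + wi) * Phi)**2) * dw**2

q = np.array([0.7, 0.3])                     # assumed weights q_n
c = q * np.array([wn(0), wn(1)]); c /= c.sum()
G = np.array([[np.sum(np.conj(psi(n)) * psi(m)) * dw**2 for m in (0, 1)]
              for n in (0, 1)])              # overlaps <Psi_n|Psi_m>, Eq. (34)
purity = sum(c[n] * c[m] * abs(G[n, m])**2 for n in (0, 1) for m in (0, 1))
print("Tr(rho_q^2) =", purity)               # < 1: rho_q is mixed
```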
Projective two-photon measurements are of particular importance in quantum
teleportation and remote-state preparation, and entanglement swapping, because
in these schemes the measurement acts as a herald to a single photon state or
a two-photon entangled state, respectively. Ideally the heralded states should be pure to be useful for quantum information processing, and the purity of the heralded state is limited by both the purity of the input states and the purity (projectivity) of the heralding measurement Amri _et al._ (2011).
### III.2 Orthogonality
Orthogonal measurements are measurements which project onto orthogonal states,
and thus satisfy
$\hat{\Pi}_{n}\hat{\Pi}_{m}\propto\delta_{nm}\hat{\Pi}_{n}.$ (33)
We note here that orthogonal measurements of the SFG photon do not correspond
to orthogonal two-photon POVM elements in general. This is analogous to the
fact that PDC pumped with orthogonal pulse modes does not produce orthogonal
PDC states in general. The non-orthogonality of the two-photon states can be
seen by taking
$\begin{gathered}\braket{\Psi_{n}}{\Psi_{m}}=\\\
\frac{|\chi|^{2}}{\sqrt{w_{n}w_{m}}}\int\text{d}\omega\text{d}\omega^{\prime}\phi^{*}_{n}(\omega+\omega^{\prime})\phi_{m}(\omega+\omega^{\prime})|\Phi(\omega,\omega^{\prime})|^{2}\neq\delta_{nm}.\end{gathered}$
(34)
This is due to the filtering induced by the phasematching function. This is
indeed analogous to what happens when two orthogonal modes are subjected to
linear filtering (see reference van Enk (2017) on this point): in general the
transmitted modes considered alone are not orthogonal, even though filtering
is a unitary process. The orthogonality is preserved only when considering all
of the modes involved in the transformation, whereas here we are only
considering the signal and idler modes and not the pump.
Figure 3: JSA’s for the configuration described in the text, where the phasematching function is engineered through group-velocity matching to make an angle $\theta=45^{\text{o}}$ with respect to the $\omega_{s}$-axis. It then becomes independent of the sum frequency $\nu=\omega_{s}+\omega_{i}$, and thus orthogonal measurements of the SFG photon correspond to orthogonal two-photon POVM elements. Blue (red) indicates positive (negative) amplitudes. In the case of PDC, the amount of correlation in the JSA can be controlled by shaping of the pump pulse, as described in reference Ansari _et al._ (2018a). Here we plot the JSA’s obtained by shaping the pump into the (a) zeroth-, (b) first-, and (c) second-order Hermite-Gauss modes, resulting in mutually orthogonal two-photon states. Frequencies are in arbitrary units.
An obvious question that arises then is, in what cases do the POVM elements,
in fact, correspond to orthogonal measurements? The answer to this question
becomes obvious when we rewrite equation (34) in terms of the sum and
difference frequencies $\nu$ and $\nu^{\prime}$,
$\begin{gathered}\braket{\Psi_{n}}{\Psi_{m}}=\\\
\frac{|\chi|^{2}}{4\sqrt{w_{n}w_{m}}}\int\text{d}\nu\text{d}\nu^{\prime}\phi^{*}_{n}(\nu)\phi_{m}(\nu)\left|\tilde{\Phi}\left(\nu,\nu^{\prime}\right)\right|^{2}.\end{gathered}$
(35)
Clearly, only when the phasematching function does not depend on the sum frequency $\nu$, that is, $\Phi=\Phi(\nu^{\prime})$, do we obtain
$\braket{\Psi_{n}}{\Psi_{m}}=\delta_{nm},$ (36)
and the $\hat{\Pi}_{n}$ then satisfy
$\hat{\Pi}_{n}\hat{\Pi}_{m}=\delta_{nm}w_{n}\hat{\Pi}_{n}.$ (37)
Orthogonality of the two-photon POVM elements is of interest, for example, in
the quantum illumination scheme as originally described by Lloyd Lloyd (2008).
Here an entangled two-photon state $\ket{\Psi_{n}}$ is prepared and one of the
photons sent to reflect off a possibly present target, while the other photon
is kept in the lab. The two photons are then to be jointly measured, whereupon
a successful projection onto the initial state $\ket{\Psi_{n}}$ indicates the
presence of the target. If one is to implement this scheme using SFG as the
two-photon measurement, non-orthogonal measurements would suffer from the
possibility that the desired state $\ket{\Psi_{n}}$ could give a positive
outcome corresponding to the “wrong” measurement associated with a non-
orthogonal state $\ket{\Psi_{m}}$.
In general, the orthogonality condition (36) can be approximately satisfied as
long as the phase-matching function varies slowly enough in the $\nu$
direction, in comparison to the support of the detection mode function. This
happens, for example, in a sufficiently short interaction medium. However,
there are two limiting cases that are of note. The first is the spectrally
resolved detection limit, which corresponds to simply measuring the output
with an ideal spectrometer. In this limit, the detection mode can be
approximated by a delta function,
$\phi_{n}(\omega)\rightarrow\delta(\omega-\omega_{n}),$ (38)
and
$f_{n}(\omega,\omega^{\prime})\propto\delta(\omega+\omega^{\prime}-\omega_{n}),$
(39)
where $\omega_{n}$ is the measured frequency at the spectrometer. This is the
analogue of pumping a PDC source with monochromatic, or continuous-wave (cw),
light. In both of these cases, orthogonal pump (or measurement modes) with
frequencies $\omega_{n}$ and $\omega_{m}$ correspond to orthogonal two-photon
states (or measurements) with sum frequencies $\omega_{n}$ and $\omega_{m}$.
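The contrast between the two regimes is easy to see numerically. The sketch below (ours; both phase-matching profiles are toy Gaussian models chosen for illustration) evaluates the overlap of equation (34) for two same-parity detection modes, once for a phase-matching depending only on $\nu^{\prime}$ and once for one that also depends on $\nu$:

```python
import numpy as np
from math import factorial, pi, sqrt
from numpy.polynomial.hermite import hermval

# Illustrative sketch (ours): the overlap of Eq. (34) for two detection
# modes of the same parity (HG0 and HG2), comparing a phase-matching that
# depends only on nu' (the 45-degree case) with one that also depends on nu.
w = np.linspace(-6, 6, 300); dw = w[1] - w[0]
ws, wi = np.meshgrid(w, w, indexing="ij")

def hg(n, x, bw=1.0):
    c = np.zeros(n + 1); c[n] = 1.0
    return (hermval(x / bw, c) * np.exp(-x**2 / (2 * bw**2))
            / sqrt(2**n * factorial(n) * bw * sqrt(pi)))

def overlap(Phi):
    f0, f2 = hg(0, ws + wi) * Phi, hg(2, ws + wi) * Phi
    f0 /= sqrt(np.sum(f0**2) * dw**2)
    f2 /= sqrt(np.sum(f2**2) * dw**2)
    return abs(np.sum(f0 * f2) * dw**2)

Phi_45 = np.exp(-(ws - wi)**2 / 2)                      # Phi = Phi(nu') only
Phi_gen = np.exp(-(ws - wi)**2 / 2 - (ws + wi)**2 / 8)  # also depends on nu

print(overlap(Phi_45))    # ~ 0: orthogonal POVM elements, Eq. (36)
print(overlap(Phi_gen))   # clearly nonzero: orthogonality is lost
```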
The second case of interest is achieved by extended phase-matching techniques,
as described in reference Ansari _et al._ (2018a). For certain nonlinear
materials and field configurations, it is possible, using group-velocity
matching, to make the phase-matching function approximately constant in the
$\nu$ direction over some range of interest. More precisely, the phase-
matching function can be engineered to make an angle $\theta=45^{\text{o}}$ in
the $\omega_{s}$-$\omega_{i}$ plane, perpendicular to the angle that the pump
function makes. This configuration has been used by Ansari _et al._ to generate
PDC states with a controllable temporal-mode structure and degree of
entanglement through pump pulse-shaping Ansari _et al._ (2018b). This concept
is illustrated schematically in Fig. 3. More recently, similarly exotic two-
photon states have been obtained through phasematching shaped by the periodic
poling of the nonlinear crystal, rather than pulse-shaping of the pump
Graffitti _et al._ (2020).
An interesting result that follows from the limit where $\Phi$ is independent
of $\nu$ is the possibility of downconverting an arbitrary pulse shape in a
nonlinear medium into an entangled photon pair, and recovering the pump pulse
shape by upconverting the photon pair in an identical medium. This can be seen
by taking $\tilde{g}(\nu,\nu^{\prime})=\phi(\nu)\tilde{\Phi}(\nu^{\prime})$ in
equation (14), and obtaining
$\sigma(\nu)=\phi(\nu)\int d\nu^{\prime}|\tilde{\Phi}(\nu^{\prime})|^{2},$
(40)
which is evidently proportional to the input $\phi(\nu)$. The spatial analogue
of this result, whereby a pump beam shaped in a specific transverse spatial
mode is downconverted, and the photon resulting from the upconversion of the
PDC pair is shown to recover the transverse spatial mode, has recently been
experimentally demonstrated by Jimenez _et al._ (2019).
### III.3 Entanglement
We now turn to perhaps a more interesting question regarding the two-photon
measurement operator: when is the POVM element $\hat{\Pi}_{n}$ a projector
onto an entangled two-photon state, and thus can be said to enact an entangled
measurement on the input photons Vértesi and Navascués (2011); Renou _et al._ (2018)? We can answer this question readily: $\hat{\Pi}_{n}$ is an entangled measurement if the retrodicted state $\hat{\rho}_{n}$ is an entangled
state. Entangled measurements play a central role in quantum teleportation,
superdense coding, and quantum illumination, among many other protocols, and
recently the role of entanglement in joint measurements has been recognized to
be equally important to the role of entanglement of states as a shared
resource Gisin (2019).
To illustrate the role of entangled measurements in a quantum protocol, we
will investigate briefly the spectral quantum teleportation scenario,
described by Molotkov Molotkov (1998) and by Humble Humble (2010) (and whose
spatial analogue was described by Walborn et al Walborn _et al._ (2007)). In
this protocol, Alice and Bob share a two-photon entangled state described by a
JSA $f_{s}(\omega_{a},\omega_{b})$, and Alice is to teleport a single photon
state with spectral amplitude $\psi_{c}(\omega_{c})$ by performing an SFG
measurement on this photon and her half of the entangled state, and
communicating the measurement result to Bob.
Figure 4: Spectral teleportation scenario considered in the text. Alice and
Bob share entangled photons $a$ and $b$ in the state $\ket{\Psi_{s}}$. Alice
performs a two-photon SFG measurement $\hat{\Pi}_{m}$ on her photon $a$ and
photon $c$, in the state $\ket{\psi_{c}}$, and communicates the result of her
measurement to Bob, whereupon Bob reconstructs the state $\ket{\psi_{b|m}}$.
Reference Molotkov (1998) considers only the case of a maximally-correlated
pair of entangled photons shared between Alice and Bob, while reference Humble
(2010) generalizes this result to the case of a Gaussian JSA, which is a good
approximation to what can be produced using pulsed lasers as a pump. In both
references however, Alice’s joint measurement is a spectrally-resolved
measurement of the SFG photon. Here we use our formalism to generalize further
to a pulse-mode resolved measurement of the SFG photon, as can be realized
with a quantum pulse gate, by considering a generalized measurement JSA
$f_{m}(\omega_{a},\omega_{c})$. It was first pointed out in the original
proposal of quantum teleportation Bennett _et al._ (1993) that in addition to
the maximally-entangled state (generalized Bell-state) shared by Alice and
Bob, quantum teleportation with unit fidelity is achieved when Alice’s joint
measurement projects onto a maximally-entangled state. Here we show behavior
that is consistent with this result by quantifying the teleportation fidelity
as a function of the entanglement of both the shared state and the joint
measurement. It is worth clarifying that our current goal is not to
demonstrate that the POVM element is entangled, but rather, it is to show that
our POVM formalism is sufficient to describe quantum teleportation in the
time-frequency domain, provided we stipulate entanglement as a property of the
measurement. This is in keeping with the more familiar case of the Bell-state
measurement’s role in qubit teleportation.
The teleportation scenario we consider is shown schematically in Fig. 4. Alice
and Bob share entangled photons a and b, respectively, described by a Gaussian
JSA similar to the one in reference Humble (2010):
$\begin{gathered}\ket{\Psi_{s}}=\int\text{d}\omega_{a}\ \text{d}\omega_{b}\
f_{s}(\omega_{a},\omega_{b})\hat{a}^{\dagger}_{a}(\omega_{a})\hat{a}^{\dagger}_{b}(\omega_{b})\ket{\text{vac}}\\\
f_{s}(\omega_{a},\omega_{b})=N_{s}\text{Exp}\left[-\frac{1}{\gamma_{s}^{2}(1-\alpha^{2})}\left(\frac{\omega_{a}^{2}}{2}+\frac{\omega_{b}^{2}}{2}+\alpha\omega_{a}\omega_{b}\right)\right]\end{gathered}$
(41)
where $\alpha\in[-1,1]$ is the correlation between the photon frequencies,
with $\alpha=1$ corresponding to maximal frequency anticorrelation, such as
would be obtained from a cw pump; $\gamma_{s}$ is the characteristic bandwidth
of the PDC photons, and $N_{s}$ is the normalization constant. Alice provides
a single photon c to be teleported, described by the state
$\ket{\psi_{c}}=\int\text{d}\omega_{c}\psi_{c}(\omega_{c})\hat{a}^{\dagger}_{c}(\omega_{c})\ket{\text{vac}}$
(42)
where $\psi_{c}(\omega_{c})$ is an arbitrary spectral amplitude function.
Alice initiates the teleportation by performing an SFG measurement on photons
a and c, represented by an operator
$\hat{\Pi}_{m}=w_{m}\ket{\Psi_{m}}\bra{\Psi_{m}}$, with
$\begin{gathered}\ket{\Psi_{m}}=\int\text{d}\omega_{a}\ \text{d}\omega_{c}\
f_{m}(\omega_{a},\omega_{c})\hat{a}^{\dagger}_{a}(\omega_{a})\hat{a}^{\dagger}_{c}(\omega_{c})\ket{\text{vac}}\\\
f_{m}(\omega_{a},\omega_{c})=N_{m}\text{Exp}\left[-\frac{1}{\gamma_{m}^{2}(1-\beta^{2})}\left(\frac{\omega_{a}^{2}}{2}+\frac{\omega_{c}^{2}}{2}+\beta\omega_{a}\omega_{c}\right)\right]\end{gathered}$
(43)
with parameters defined similarly to $\ket{\Psi_{s}}$.
Figure 5: Behavior of the teleportation fidelity for the different cases
described in the text. Plot (a) shows the behavior with $\alpha$, the state
entanglement, and $\sigma=\gamma_{c}/\gamma_{s}$, for the ideal SFG
measurement, with $\beta=1$ and $\gamma_{m}\rightarrow\infty$, as considered
in Ref Humble (2010). The same plot describes the fidelity as a function of
$\beta$ and $\sigma=\gamma_{c}/\gamma_{m}$ for the case of a maximally
entangled state with $\alpha=1$ and $\gamma_{s}\rightarrow\infty$. Plots (b)
and (c) illustrate the behavior of the fidelity when the entangled state and
the entangled measurement have comparable bandwidths (here
$\gamma_{s}=\gamma_{m}=1$). Here the fidelity behaves differently with
$\alpha$ and with $\beta$, because $f_{s}$ and $f_{m}$ are not in general
interchangeable in the expression for $\psi_{b|m}$. All quantities are
dimensionless.
We point out here that we have centered both $f_{s}$ and $f_{m}$ at 0 in
frequency space, without loss of generality. This is because, in the protocol
described in reference Humble (2010), Alice communicates her obtained
frequency $\omega_{a}+\omega_{c}$ to Bob, whereupon he performs the
appropriate frequency translation to his photon $b$ to recover the state that
would have resulted, had Alice obtained $\omega_{a}+\omega_{b}$ in her
measurement. Further note that we are using the parameters $\alpha$ and
$\beta$ to quantify the entanglement of the shared state and the joint
measurement, respectively, rather than a more familiar measure of entanglement
for pure states, such as the Schmidt number Parker _et al._ (2000). We have
made this choice because, although the Schmidt number $K$ bears a simple
relationship with our parameter $\alpha$ (or $\beta$), satisfying
$K=\frac{1}{\sqrt{1-\alpha^{2}}}$ (see Appendix B), the latter has the
convenient feature of being bounded by the interval $[-1,1]$, whereas the
Schmidt number diverges for maximal entanglement.
With all of this in consideration, Alice’s joint measurement on photons a and
c heralds Bob’s photon b in the teleported state
$\begin{gathered}\ket{\psi_{b|m}}=\int\text{d}\omega_{b}\psi_{b|m}(\omega_{b})\hat{a}^{\dagger}_{b}(\omega_{b})\ket{\text{vac}},\\\
\psi_{b|m}(\omega_{b})=N_{b|m}\int\text{d}\omega_{a}\text{d}\omega_{c}f^{*}_{m}(\omega_{a},\omega_{c})f_{s}(\omega_{a},\omega_{b})\psi_{c}(\omega_{c}).\end{gathered}$
(44)
where $N_{b|m}$ is the appropriate normalization constant. The teleportation
fidelity is then given by the modulus squared of the overlap,
$F=\left|\braket{\psi_{c}}{\psi_{b|m}}\right|^{2}=\left|\int\text{d}\omega\psi_{c}^{*}(\omega)\psi_{b|m}(\omega)\right|^{2}$ (45)
For this analysis, we let $\psi_{c}$ be a Gaussian function with
characteristic width $\gamma_{c}$,
$\psi_{c}(\omega)=\frac{1}{\sqrt{\gamma_{c}\sqrt{\pi}}}e^{-\omega^{2}/2\gamma_{c}^{2}}.$
(46)
Using this form for the states and measurements, we obtain an algebraic
expression for the fidelity which depends on five parameters,
$F=F(\alpha,\beta,\gamma_{s},\gamma_{m},\gamma_{c})$. The full expression is
unwieldy and not very instructive to display here. We shall verify that our
formalism reproduces the result of reference Humble (2010) in the appropriate
limits. That reference studies the behavior of the fidelity as a function of
$\alpha$ and $\sigma=\gamma_{c}/\gamma_{s}$ for a uniformly phasematched SFG
process followed by an ideally-resolved frequency detection. This corresponds
to taking the limit $\gamma_{m}\rightarrow\infty$ and $\beta=1$. In these
limits, our formalism exactly recovers the fidelity
$\begin{gathered}F_{\gamma_{m}\rightarrow\infty}=\sqrt{\frac{4\sigma^{2}(\sigma^{2}+1)(\sigma^{2}+1-\alpha^{2})}{((\sigma^{2}+1)^{2}-\alpha^{2})^{2}}},\end{gathered}$
(47)
which is displayed in Fig. 5 (a). In that reference, an interesting feature of
this behavior of the fidelity was noted. That is, although the fidelity
increases monotonically with the source entanglement $\alpha$ for $\sigma\ll
1$, this is no longer true when $\gamma_{c}$ is comparable to
$\gamma_{s}$. In particular, the fidelity is equal to one along the curve
$\alpha^{2}=1-\sigma^{4}$, and is equal to $\sqrt{8/9}$ at the upper-right
hand corner of the plot, where $\alpha=1$ and $\sigma=1$. In the language of
our formalism, given the ideal entangled measurement, with infinite SFG
bandwidth and ideal spectral resolution, there is a trade-off between spectral
bandwidth and spectral entanglement of the sources.
Our result allows us to generalize further, however, and also consider the
case of the Gaussian SFG measurement with finite bandwidth. First we consider
the reverse scenario to the one above, where the source is perfectly
entangled, with $\gamma_{s}\rightarrow\infty$ and $\alpha=1$, and look at the
dependence of the fidelity on $\beta$ and $\sigma$. In this case we find that
the fidelity exhibits the same dependence, that is,
$F_{\gamma_{s}\rightarrow\infty}=\sqrt{\frac{4\sigma^{2}(\sigma^{2}+1)(\sigma^{2}+1-\beta^{2})}{((\sigma^{2}+1)^{2}-\beta^{2})^{2}}},$
(48)
and we can conclude that, given an ideal entangled state between Alice and
Bob, there is a trade-off between spectral bandwidth and spectral entanglement
of the measurement.
Finally, we arrive at the most realistic case, where both the entangled source
and the measurement have finite bandwidths, corresponding to finite
phasematching in the PDC and SFG processes. Here we set them equal, taking
$\gamma_{s}=\gamma_{m}=1$, and obtain
$\begin{gathered}F_{\gamma_{m}=\gamma_{s}}=\sqrt{\frac{4\sigma^{2}(\beta^{2}-2(1+\sigma^{2}))(\beta^{2}-(2-\alpha^{2})(1+\sigma^{2}))}{(1+\sigma^{2})^{2}(\alpha^{2}+\beta^{2}-2(1+\sigma^{2}))^{2}}}.\end{gathered}$
(49)
In this case we find the interesting and counterintuitive result that the
behaviors of the fidelity with the source entanglement $\alpha$ and with the
measurement entanglement $\beta$ are no longer equivalent. We show this by
plotting the behavior of the limiting cases of $F_{\gamma}(\alpha,1,\sigma)$
(spectral resolution of the SFG) and $F_{\gamma}(1,\beta,\sigma)$
(monochromatic pumping of the PDC) in Fig. 5 (b) and (c), respectively. In the
case of $\beta=1$, the fidelity is maximized along the curve
$\alpha^{2}=\frac{1+\sigma^{2}-2\sigma^{4}}{1+\sigma^{2}}$ and has similar
limiting behaviors to the ideal case considered in reference Humble (2010).
The case of $\alpha=1$ exhibits a starker contrast, taking its maximum value
along the curve $\beta^{2}=\frac{-1+\sigma^{2}+2\sigma^{4}}{-1+\sigma^{2}}$.
Unlike any of the previous cases, the fidelity is no longer equal to unity in
the bottom right-hand corner, for $\sigma=1$, $\beta=0$, but instead it is
equal to $\sqrt{8/9}$.
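The closed forms above can be cross-checked directly against the defining integrals. The sketch below (ours; the test point $(\alpha,\beta,\sigma)$ is an arbitrary assumption) evaluates $\psi_{b|m}$ from equation (44) and the fidelity of equation (45) on a grid, and compares with equation (49) for $\gamma_{s}=\gamma_{m}=1$:

```python
import numpy as np

# Illustrative numerical check (ours): teleported amplitude of Eq. (44),
# fidelity of Eq. (45), compared with the closed form of Eq. (49) for
# gamma_s = gamma_m = 1. The test point (alpha, beta, sigma) is arbitrary.
alpha, beta, sigma = 0.6, 0.4, 0.8
w = np.linspace(-8, 8, 241); dw = w[1] - w[0]
x, y = np.meshgrid(w, w, indexing="ij")

def jsa(corr):                          # Gaussian JSA of Eqs. (41)/(43), gamma = 1
    return np.exp(-(x**2 / 2 + y**2 / 2 + corr * x * y) / (1 - corr**2))

f_s = jsa(alpha)                        # axes: (omega_a, omega_b)
f_m = jsa(beta)                         # axes: (omega_a, omega_c)
psi_c = np.exp(-w**2 / (2 * sigma**2))  # input photon, Eq. (46)

kernel = (np.conj(f_m) @ psi_c) * dw    # integrate over omega_c
psi_b = (f_s.T @ kernel) * dw           # integrate over omega_a: psi_{b|m}
psi_b /= np.sqrt(np.sum(np.abs(psi_b)**2) * dw)
psi_cn = psi_c / np.sqrt(np.sum(psi_c**2) * dw)

F_num = abs(np.sum(psi_cn * psi_b) * dw)**2        # Eq. (45)
F_49 = np.sqrt(4 * sigma**2 * (beta**2 - 2 * (1 + sigma**2))
               * (beta**2 - (2 - alpha**2) * (1 + sigma**2))
               / ((1 + sigma**2)**2
                  * (alpha**2 + beta**2 - 2 * (1 + sigma**2))**2))
print(F_num, F_49)   # should agree up to grid discretization error
```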
We emphasize that $\beta<1$ does not represent a non-ideal spectral resolution
of the upconverted photon, since we are only considering projective
measurements, but instead corresponds to a coherent broadband measurement, as
could be obtained using a quantum pulse gate. What this last result suggests
is that, for finite bandwidths of the entangled source and the entangled
measurement, it is not generally the case that spectral resolution maximizes
the teleportation fidelity. Further, the asymmetry between the behaviors of
entangled state and the entangled measurement can be understood from the fact
that the state JSA $f_{s}$ and the measurement JSA $f_{m}$ are not
interchangeable in the expression for $\psi_{b|m}$, with $f_{m}$ having both
of its arguments integrated over. Most notably, we have shown that, by
treating two-photon measurements more generally and on equal footing with the
two-photon states, it is possible not only to recover previously-obtained
results in the limit of ideal measurements, but also to uncover which states
and measurements are optimal for a given task (in this case spectral
teleportation), under more realistic constraints (in this case, finite PDC and
SFG bandwidths).
This brief analysis leaves open the question of how to generalize to a more
realistic, non-ideally resolved SFG measurement. For a mixed bipartite state
$\hat{\rho}$, a convenient measure of entanglement is the negativity Vidal and
Werner (2002). The negativity essentially counts the negative eigenvalues of
$\hat{\rho}$ partially transposed with respect to one of its subsystems, and
it sets an upper bound on the teleportation capacity of the state. This
suggests that we may define a negativity associated with a non-projective POVM
element $\hat{\Pi}_{q}$ as the negativity of its mixed retrodicted state
$\hat{\rho}_{q}$. The role of finite spectral resolution in SFG detection has
been investigated numerically for entanglement swapping in reference Vitullo
_et al._ (2018). However, it could be more elegant to frame this relationship
in terms of the negativities both of the input states and the measurements in
scenarios such as quantum teleportation and entanglement swapping, and this
remains to be explored in future work.
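As a minimal illustration of this proposed figure of merit (ours; the two-mode truncation and the Bell-like retrodicted states are assumptions made for the sake of a small example), one can compute the negativity of a mixed retrodicted state $\hat{\rho}_{q}$ by taking the partial transpose and summing its negative eigenvalues:

```python
import numpy as np

# Illustrative sketch (ours): negativity of a mixed retrodicted state
# rho_q (Eq. 32), with the two-photon space truncated to 2 x 2 modes.
def ket(a, b, c, d):
    v = np.array([a, b, c, d], dtype=complex)
    return v / np.linalg.norm(v)

psi0 = ket(1, 0, 0, 1)      # Bell-like retrodicted state for outcome 0
psi1 = ket(0, 1, 1, 0)      # Bell-like retrodicted state for outcome 1
q = [0.7, 0.3]              # assumed weights q_n w_n, already normalized

rho = sum(qn * np.outer(p, p.conj()) for qn, p in zip(q, (psi0, psi1)))

# Partial transpose on the signal subsystem: rho_{ij,kl} -> rho_{kj,il}.
rho_pt = rho.reshape(2, 2, 2, 2).transpose(2, 1, 0, 3).reshape(4, 4)
eigs = np.linalg.eigvalsh(rho_pt)
negativity = -eigs[eigs < 0].sum()
print("negativity of rho_q:", negativity)   # > 0: an entangled measurement
```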
## IV Conclusion
We have demonstrated how to construct the POVM associated with two-photon
detection by SFG followed by temporal-mode-selective single-photon detection.
We have shown that this POVM is proportional to the two-photon state created
in the time-reverse PDC process pumped with a field in the detected mode. This
allowed us to characterize several aspects of the POVM relevant to its
adequacy for quantum information protocols. In particular, we have shown that
a projective measurement of the SFG photon corresponds to a projective two-
photon POVM element. We have pointed out the special case where orthogonal SFG
single-photon measurements correspond to orthogonal two-photon measurements.
And finally, we have shown the correspondence between the two-photon
entanglement retrodicted by the SFG measurement and the two-photon
entanglement produced by the time-reversed PDC process. These results could
have implications for quantum information experiments relying on PDC and SFG
in terms of exploring the interplay between entangled states and entangled
measurements. Additionally, it remains an open question how best to certify
the entanglement of the SFG measurement Bennet _et al._ (2014), or even to
perform quantum tomography of the process. Finally, given recent interest in
using quantum light for two-photon absorption Schlawin and Buchleitner (2017)
Landes _et al._ (2020), our results open the question of whether it is
possible to have a combined framework of two-photon processes in terms of
quantum measurement theory.
###### Acknowledgements.
We would like to acknowledge M. G. Raymer and S. J. van Enk for valuable
discussions. This work is funded by NSF grant No. 1839216.
## Appendix
## Appendix A Deriving the three-wave mixing transformation
Strictly speaking, the Hamiltonian describing the nonlinear interactions we
consider is a time-dependent quantity, $\hat{H}(t)$, whereby a state
$\ket{\Psi_{\text{out}}}$ evolves from an initial state
$\ket{\Psi_{\text{in}}}$ according to
$\begin{gathered}\ket{\Psi_{\text{out}}}=\exp{\Big{[}-\frac{i}{\hbar}\int_{0}^{t}\text{d}t^{\prime}\hat{H}(t^{\prime})\Big{]}}\ket{\Psi_{\text{in}}}\\\
\approx\left(1-\frac{i}{\hbar}\int_{0}^{t}\text{d}t^{\prime}\hat{H}(t^{\prime})\right)\ket{\Psi_{\text{in}}}\end{gathered}$
(50)
The relevant Hamiltonian for three-wave mixing has the form
$\hat{H}(t)=\chi\int_{V}\text{d}V\hat{E}^{+}_{p}(\mathbf{r},t)\hat{E}^{-}_{s}(\mathbf{r},t)\hat{E}^{-}_{i}(\mathbf{r},t)+\text{H.c.}$
(51)
where $\hat{E}^{+(-)}_{j}$ denotes the positive (negative) frequency component
of the $j$ field operator, with $j=p,s,i$. $V$ denotes the interaction volume,
which we take to be infinite in the transverse direction (by assuming the
field modes are well-confined within the crystal area), and of length $L$ in
the longitudinal direction. Finally, $\mathbf{r}$ and $t$ denote the space and
time coordinates, and $\chi$ describes the interaction strength. We
expand the field operators into their plane-wave components,
$\begin{gathered}\hat{E}^{+}_{j}(\mathbf{r},t)=\int\text{d}\omega_{j}A_{j}(\omega_{j})\exp{\Big{[}i(\mathbf{k}_{j}(\omega_{j})\cdot\mathbf{r}-\omega_{j}t)\Big{]}}\hat{a}_{j}(\omega_{j}),\\\
\hat{E}^{-}_{j}=(\hat{E}^{+}_{j})^{\dagger},\end{gathered}$ (52)
where $A_{j}(\omega_{j})$ is a slowly-varying function of $\omega$.
Substituting these into the Hamiltonian and absorbing all the slowly-varying
functions into $\chi$, we obtain
$\displaystyle\hat{H}(t)=$
$\displaystyle\chi\int_{V}\text{d}V\int\text{d}\omega_{p}\text{d}\omega_{s}\text{d}\omega_{i}\hat{a}_{p}(\omega_{p})\hat{a}^{\dagger}_{s}(\omega_{s})\hat{a}^{\dagger}_{i}(\omega_{i})$
(53)
$\displaystyle\times\exp\Big{[}i(\mathbf{k}_{p}(\omega_{p})-\mathbf{k}_{s}(\omega_{s})-\mathbf{k}_{i}(\omega_{i}))\cdot\mathbf{r}\Big{]}$
$\displaystyle\times\exp\Big{[}-i(\omega_{p}-\omega_{s}-\omega_{i})t\Big{]}+\text{H.c.}.$
Now we use this form of the Hamiltonian to compute output state (50) to first
order in the expansion, whereupon we carry the integration over the transverse
spatial directions to infinity. Additionally, we carry out the time integral
from negative to positive infinity because the input and output states are
observed long before and after the interaction time $t$, resulting in a delta-
function in $(\omega_{p}-\omega_{s}-\omega_{i})$ (energy conservation). All of
this obtains
$\begin{gathered}\ket{\Psi_{\text{out}}}\approx\Bigg{[}1-i\chi\int_{0}^{L}\text{d}z\int\text{d}\omega_{s}\text{d}\omega_{i}\exp\Big{[}i(\Delta\mathbf{k})_{z}z\Big{]}\\\
\times\hat{a}_{p}(\omega_{s}+\omega_{i})\hat{a}^{\dagger}_{s}(\omega_{s})\hat{a}^{\dagger}_{i}(\omega_{i})+\text{H.c.}\Bigg{]}\ket{\Psi_{\text{in}}},\end{gathered}$
(54)
where we have also absorbed the $\hbar$ into $\chi$. Carrying out the
integration over $z$ provides the phase-matching function
$\Phi(\omega_{s},\omega_{i})$, and we define the transformation
$\hat{H}=\chi\int\text{d}\omega_{s}\text{d}\omega_{i}\Phi(\omega_{s},\omega_{i})\hat{a}_{p}(\omega_{s}+\omega_{i})\hat{a}^{\dagger}_{s}(\omega_{s})\hat{a}^{\dagger}_{i}(\omega_{i})+\text{H.c},$
(55)
such that
$\ket{\Psi_{\text{out}}}\approx\left(1-i\hat{H}\right)\ket{\Psi_{\text{in}}}.$
(56)
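For completeness, the $z$-integration referred to above is elementary (a step we spell out here; the overall constant is absorbed into $\chi$):
$\int_{0}^{L}\text{d}z\,\exp\Big{[}i(\Delta\mathbf{k})_{z}z\Big{]}=L\,\exp\Big{[}\frac{i(\Delta\mathbf{k})_{z}L}{2}\Big{]}\,\mathrm{sinc}\left(\frac{(\Delta\mathbf{k})_{z}L}{2}\right),$
which, up to the overall phase factor, reproduces the sinc form of equation (2).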
## Appendix B Relating the entanglement parameter $\alpha$ to the Schmidt
number $K$
In section III.3 we used the scenario of spectral teleportation to illustrate
the role of entanglement in the measurement, on par with entanglement in the
state, in a quantum protocol. To that end, we quantified the teleportation
fidelity in terms of the correlation parameters $\alpha$ ($\beta$) of the
bivariate Gaussian state $f_{s}(\omega,\omega^{\prime})$ (measurement
$f_{m}(\omega,\omega^{\prime})$). This parameter has the advantage of being
bounded by the interval $[-1,1]$, with maximal entanglement at the boundaries,
whereas more common measures of entanglement for pure states, such as the
entropy and the Schmidt number, diverge for maximal entanglement. Here we show
for completeness how the Schmidt number $K$ depends functionally on $\alpha$,
while the same analysis holds for $\beta$.
The Gaussian JSA $f_{s}(\omega,\omega^{\prime})$ from (41) has a Schmidt
decomposition of the form
$f_{s}(\omega,\omega^{\prime})=\sum^{\infty}_{j=0}\ \sqrt{\lambda_{j}}\
u_{j}(\omega)v_{j}(\omega^{\prime}),$ (57)
where $\\{u_{j}(\omega)\\}$ is the orthonormal set of Hermite-Gauss functions
spanning the spectral Hilbert space over $\omega$, and the same is true of
$\\{v_{j}(\omega^{\prime})\\}$ Humble (2010). The Schmidt coefficients
$\lambda_{j}$ are given by
$\lambda_{j}=\text{sech}^{2}\ \zeta\ \text{tanh}^{2j}\ \zeta,$ (58)
satisfying $\sum_{j=0}^{\infty}\lambda_{j}=1$, and where $\zeta$ is given by
$\alpha=\tanh\ 2\zeta.$ (59)
The Schmidt number $K$ is then given by
$K=\frac{1}{\sum_{j=0}^{\infty}\lambda_{j}^{2}}=\cosh 2\zeta.$ (60)
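For completeness (a short check, not spelled out in the text), both the normalization $\sum_{j}\lambda_{j}=1$ and Eq. (60) follow from the geometric series $\sum_{j=0}^{\infty}\tanh^{2j}\zeta=(1-\tanh^{2}\zeta)^{-1}$:
$\sum_{j=0}^{\infty}\lambda_{j}=\frac{\operatorname{sech}^{2}\zeta}{1-\tanh^{2}\zeta}=1,\qquad\sum_{j=0}^{\infty}\lambda_{j}^{2}=\frac{\operatorname{sech}^{4}\zeta}{1-\tanh^{4}\zeta}=\frac{1}{\cosh^{2}\zeta+\sinh^{2}\zeta}=\frac{1}{\cosh 2\zeta}.$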
Combining Eqs. (59) and (60) via the identity $\cosh x=(1-\tanh^{2}x)^{-1/2}$, we arrive at the simple relationship
$K=\frac{1}{\sqrt{1-\alpha^{2}}},$ (61)
where, as expected, $K$ is equal to unity for the case of no correlation,
$\alpha=0$, and diverges for maximal correlation, $\alpha=\pm 1$.
## References
* Aspect (2015) A. Aspect, Physics 8, 123 (2015).
* Brecht _et al._ (2015) B. Brecht, D. V. Reddy, C. Silberhorn, and M. G. Raymer, Physical Review X 5, 041017 (2015), arXiv:1504.06251 .
* Nunn _et al._ (2013) J. Nunn, L. J. Wright, C. Söller, L. Zhang, I. A. Walmsley, and B. J. Smith, Optics Express 21, 15959 (2013), arXiv:1305.0960 .
* Raymer _et al._ (2013) M. G. Raymer, A. H. Marcus, J. R. Widom, and D. L. Vitullo, Journal of Physical Chemistry B 117, 15559 (2013).
* Schlawin and Buchleitner (2017) F. Schlawin and A. Buchleitner, New Journal of Physics 19, 013009 (2017).
* Dayan (2007) B. Dayan, Physical Review A - Atomic, Molecular, and Optical Physics 76, 043813 (2007), arXiv:0704.2859 .
* Zhuang _et al._ (2017) Q. Zhuang, Z. Zhang, and J. H. Shapiro, Physical Review A 96, 040304(R) (2017), arXiv:1705.06793 .
* Bennett _et al._ (1999) C. H. Bennett, D. P. DiVincenzo, C. A. Fuchs, T. Mor, E. Rains, P. W. Shor, J. A. Smolin, and W. K. Wootters, Physical Review A - Atomic, Molecular, and Optical Physics 59, 1070 (1999), arXiv:quant-ph/9804053 .
* Bennett _et al._ (1993) C. H. Bennett, G. Brassard, C. Crépeau, R. Jozsa, A. Peres, and W. K. Wootters, Phys. Rev. Lett. 70, 1895 (1993).
* Bennett _et al._ (2001) C. H. Bennett, D. P. DiVincenzo, P. W. Shor, J. A. Smolin, B. M. Terhal, and W. K. Wootters, Physical Review Letters 87, 077902 (2001).
* Halder _et al._ (2007) M. Halder, A. Beveratos, N. Gisin, V. Scarani, C. Simon, and H. Zbinden, Nature Physics 3, 692 (2007).
* Sangouard _et al._ (2011) N. Sangouard, B. Sanguinetti, N. Curtz, N. Gisin, R. Thew, and H. Zbinden, Physical Review Letters 106, 120403 (2011).
* Lloyd (2008) S. Lloyd, Science 321, 1463 (2008).
* Gisin (2019) N. Gisin, Entropy 21 (2019), 10.3390/e21030325, arXiv:1809.10901 .
* Dayan _et al._ (2005) B. Dayan, A. Pe’er, A. A. Friesem, and Y. Silberberg, Physical Review Letters 94, 043602 (2005).
* van Enk (2017) S. J. van Enk, Physical Review A - Atomic, Molecular, and Optical Physics 96, 033834 (2017), arXiv:1705.09033 .
* Ansari _et al._ (2018a) V. Ansari, J. M. Donohue, B. Brecht, and C. Silberhorn, Optica 5, 534 (2018a), arXiv:1803.04316 .
* Ansari _et al._ (2018b) V. Ansari, J. M. Donohue, M. Allgaier, L. Sansoni, B. Brecht, J. Roslund, N. Treps, G. Harder, and C. Silberhorn, Physical Review Letters 120, 213601 (2018b).
* Reddy and Raymer (2018) D. V. Reddy and M. G. Raymer, Optica 5, 423 (2018), arXiv:1710.06736 .
* Amri _et al._ (2011) T. Amri, J. Laurat, and C. Fabre, Physical Review Letters 106, 020502 (2011).
* Graffitti _et al._ (2020) F. Graffitti, P. Barrow, A. Pickston, A. M. Brańczyk, and A. Fedrizzi, Phys. Rev. Lett. 124, 053603 (2020).
* Jimenez _et al._ (2019) G. D. Jimenez, V. G. Garces, and K. A. O’Donnell, Phys. Rev. A 99, 023853 (2019).
* Vértesi and Navascués (2011) T. Vértesi and M. Navascués, Physical Review A - Atomic, Molecular, and Optical Physics 83, 062112 (2011), arXiv:1101.5361 .
* Renou _et al._ (2018) M. O. Renou, J. Kaniewski, and N. Brunner, Physical Review Letters 121, 250507 (2018).
* Molotkov (1998) S. N. Molotkov, Physics Letters, Section A: General, Atomic and Solid State Physics 245, 339 (1998).
* Humble (2010) T. S. Humble, Physical Review A - Atomic, Molecular, and Optical Physics 81, 062339 (2010).
* Walborn _et al._ (2007) S. P. Walborn, D. S. Ether, R. L. de Matos Filho, and N. Zagury, Physical Review A - Atomic, Molecular, and Optical Physics 76, 033801 (2007).
* Parker _et al._ (2000) S. Parker, S. Bose, and M. B. Plenio, Phys. Rev. A 61, 032305 (2000).
* Vidal and Werner (2002) G. Vidal and R. F. Werner, Physical Review A - Atomic, Molecular, and Optical Physics 65, 032314 (2002), arXiv:quant-ph/0102117v1 .
* Vitullo _et al._ (2018) D. L. P. Vitullo, M. G. Raymer, B. J. Smith, M. Karpiński, L. Mejling, and K. Rottwitt, Physical Review A - Atomic, Molecular, and Optical Physics 98, 023836 (2018).
* Bennet _et al._ (2014) A. Bennet, T. Vértesi, D. J. Saunders, N. Brunner, and G. J. Pryde, Physical Review Letters 113, 080405 (2014), arXiv:1404.1422 .
* Landes _et al._ (2020) T. Landes, M. Allgaier, S. Merkouche, B. J. Smith, A. H. Marcus, and M. G. Raymer, (2020), arXiv:2012.06736 [quant-ph] .
# Positively $p$-nuclear operators, positively $p$-integral operators and
approximation properties
Dongyang Chen, School of Mathematical Sciences, Xiamen University, Xiamen, 361005, China, <EMAIL_ADDRESS>; Amar Belacel, Laboratory of Pure and Applied Mathematics (LMPA), University of Laghouat, Laghouat, Algeria, a.belacel@lagh-univ.dz; and Javier Alejandro Chávez-Domínguez, Department of Mathematics, University of Oklahoma, Norman, Oklahoma, 73019, USA, <EMAIL_ADDRESS>
###### Abstract.
In the present paper, we introduce and investigate a new class of positively $p$-nuclear operators, which are positive analogues of right $p$-nuclear operators. One of our main results establishes an identification of the dual space of positively $p$-nuclear operators with the class of positive $p$-majorizing operators, a notion dual to that of positive $p$-summing operators. As applications, we prove duality relationships between the latticially $p$-nuclear operators introduced by O. I. Zhukova and positively $p$-nuclear operators. We also introduce a new concept of positively $p$-integral operators via positively $p$-nuclear operators and prove that the inclusion map from $L_{p^{*}}(\mu)$ to $L_{1}(\mu)$ ($\mu$ finite) is positively $p$-integral. New characterizations of the latticially $p$-integral operators of O. I. Zhukova and of positively $p$-integral operators are presented and used to prove that an operator is latticially $p$-integral (resp. positively $p$-integral) precisely when its second adjoint is. Finally, we describe the space of positively $p^{*}$-integral operators as the dual of the $\|\cdot\|_{\Upsilon_{p}}$-closure of the subspace of finite rank operators in the space of positive $p$-majorizing operators. Approximation properties, and even positive approximation properties, are needed in establishing the main identifications.
###### Key words and phrases:
latticially $p$-nuclear operators; positively $p$-nuclear operators;
latticially $p$-integral operators; positively $p$-integral operators;
approximation properties.
###### 2010 Mathematics Subject Classification:
Primary 47B10, 46B28, 46B42, 46B45.
*Corresponding author
Dongyang Chen was supported by the National Natural Science Foundation of
China (Grant No. 11971403) and the Natural Science Foundation of Fujian
Province of China (Grant No. 2019J01024).
## 1\. Introduction
Introduced first by A. Grothendieck in [14], the theory of $p$-summing
operators was exhaustively studied by A. Pietsch [27] and J. Lindenstrauss and
A. Pełczyński [21]. In 1955, A. Grothendieck [15] introduced and studied
nuclear and integral operators that are central to his theory of tensor
products. A. Persson and A. Pietsch [26] introduced and investigated
$p$-nuclear and $p$-integral operators that are natural generalizations to
arbitrary $1\leq p\leq\infty$ of the classes of nuclear operators and integral
operators. The classes of $p$-summing, $p$-nuclear and $p$-integral operators
are extremely useful in the study of many different problems in Banach space theory. We recommend [10] and [28] for a thorough treatment of these topics. It is therefore natural to generalize these three classes of operators to various settings.
In 1998, the generalization of the theory of $p$-summing operators to the
noncommutative setting was first developed by G. Pisier [29] by means of the
so-called completely $p$-summing maps. Subsequently, the classes of nuclear operators, integral operators and other operator ideals were generalized to the noncommutative setting ([12, 19], etc.). In 2009, J. Farmer and W. B. Johnson initiated in [11] the study of $p$-summing operators in the nonlinear setting, under the name of Lipschitz $p$-summing operators. The paper [11] has motivated the study of various classes of classical operator ideals in the nonlinear setting (see, for instance, [18], [4], [7], [5], etc.).
In comparison with the noncommutative and nonlinear settings, the theory of $p$-summing, $p$-nuclear and $p$-integral operators in the Banach lattice setting seems to have attracted much less attention. In 1971, U.
Schlotterbeck [34] (see also [33]) characterized abstract $M$-spaces
($AM$-spaces for short) and abstract $L$-spaces ($AL$-spaces) in a way quite
different from the classical Kakutani’s representation theorems for
$AM$-spaces with a unit and $AL$-spaces: A Banach lattice $X$ is isometrically lattice isomorphic to an $AL$-space ($AM$-space, respectively) if and only if
every positive unconditionally summable sequence in $X$ is absolutely summable
(every norm null sequence in $X$ is order bounded), that is, the identity map
$I_{X}$ on $X$ takes positive unconditionally summable sequences to absolutely
summable sequences ($I_{X}$ takes norm null sequences to order bounded
sequences). In 1972, H. H. Schaefer [32] generalized this property of the
identity map on $AL$-spaces ($AM$-spaces, respectively) in a natural way and
introduced the concept of the so called cone absolutely summing operators
(majorizing operators, respectively). Furthermore, H. H. Schaefer [32]
characterized cone absolutely summing operators (majorizing operators,
respectively) by factoring positively through $AL$-spaces ($AM$-spaces,
respectively). On the other hand, by introducing the $l$-norm on the class of
all cone absolutely summing operators (the $m$-norm on the class of all
majorizing operators), H. H. Schaefer [32] extended Schlotterbeck’s
characterizations of $AL$-spaces ($AM$-spaces, respectively). In 1971, L. Krsteva [20] (written in Russian; see also [13]) extended cone absolutely summing operators to the so-called latticially $p$-summing operators. Unaware of [20] and [13], O. Blasco ([3, 2]) introduced the notion of positive $p$-summing operators, which is exactly the same as that of latticially $p$-summing operators. With latticially $p$-summing operators at hand, it is natural to
think about $p$-nuclear and $p$-integral operators in the Banach lattice
setting. In 1998, O. I. Zhukova [35] defined and investigated a partially positive version of $p$-nuclear operators, the latticially $p$-nuclear operators. Using latticially $p$-nuclear operators, O. I. Zhukova [35] naturally introduced the notion of latticially $p$-integral operators and proved analogues of some well-known results from the classical theory of $p$-summing, $p$-nuclear and $p$-integral operators.
This paper is a continuation of [6]. The aim of the present paper is to
develop the theory of $p$-nuclear and $p$-integral operators in the Banach
lattice setting. The paper is organized as follows.
It is known [28, Theorem 18.2.5] that the adjoint operator ideal $[\mathcal{N}_{p},\nu_{p}]^{*}$ of $[\mathcal{N}_{p},\nu_{p}]$ is equal to $[\prod_{p^{*}},\pi_{p^{*}}]$. This formula describes the dual space
$(\mathcal{N}_{p}(E,F))^{*}$ as the space $\prod_{p^{*}}(F,E^{**})$ if $E^{*}$
and $F$ have the metric approximation property. O. I. Zhukova [35] established
an analogous representation theorem for
$(\widetilde{\mathcal{N}}_{p}(E,X))^{*}$, the dual space of the latticially
$p$-nuclear operators, in terms of latticially $p$-summing operators if
$E^{*}$ has the metric approximation property or $X$ has the positive metric
approximation property. In Section 2, we introduce the notion of positively
$p$-nuclear operators, which is a partially positive version of right $p$-nuclear operators ([25], [31, Sec. 6.2]). Firstly, we show that the class of
positively $p$-nuclear operators does not coincide with the class of right
$p$-nuclear operators. Secondly, we establish a representation theorem for
$(\widetilde{\mathcal{N}}^{p}(X,E))^{*}$, the dual space of the positively
$p$-nuclear operators, by means of positive $p$-majorizing operators
introduced by D. Chen, A. Belacel and J. A. Chávez-Domínguez [6] if $E$ has
the approximation property or $X^{*}$ has the positive metric approximation
property. Recall that when $E^{*}$ has the approximation property, any
operator $T:E\rightarrow F$ with nuclear adjoint is nuclear and both nuclear
norms coincide (see for instance [31, Proposition 4.10]). The analogous result
for $p$-nuclear operators due to O. I. Reinov [30, Theorem 1] states that when
$E^{*}$ or $F^{***}$ has the approximation property, then an operator
$T:E\rightarrow F$ with $p$-nuclear adjoint is right $p$-nuclear and the
$p$-nuclear norm of $T^{*}$ and the right $p$-nuclear norm of $T$ coincide. As
a corollary of our representation theorem for
$(\widetilde{\mathcal{N}}^{p}(X,E))^{*}$, we prove that when $E^{***}$ has the
approximation property or $X^{*}$ has the positive metric approximation
property, then an operator $T:X\rightarrow E$ with a latticially $p$-nuclear
adjoint is positively $p$-nuclear and the latticially $p$-nuclear norm of
$T^{*}$ and the positively $p$-nuclear norm of $T$ coincide. Furthermore, we
use O. I. Zhukova’s representation theorem for
$(\widetilde{\mathcal{N}}_{p}(E,X))^{*}$ to prove that when $E^{*}$ has the
approximation property or $X^{****}$ has the positive metric approximation
property, then an operator $S:E\rightarrow X$ with a positively $p$-nuclear
adjoint is latticially $p$-nuclear and the positively $p$-nuclear norm of
$S^{*}$ and the latticially $p$-nuclear norm of $S$ coincide. Finally, we use
our representation theorem for $(\widetilde{\mathcal{N}}^{p}(X,E))^{*}$ to
describe the space of positive $p$-majorizing operators via positively
$p$-nuclear operators and nuclear operators.
The operator ideal of $p$-integral operators is defined to be the maximal hull
of the ideal of $p$-nuclear operators ([28]). Following A. Defant and K.
Floret [9], the maximal hull of a Banach operator ideal is defined by finite
dimensional subspaces and finite co-dimensional subspaces. It should be
mentioned that the maximal hull can be restated by finite rank operators (see
[28, Theorem 8.7.4]). Based on this restatement, O. I. Zhukova [35] defined
the class of latticially $p$-integral operators to be the left positive
maximal hull of the class of latticially $p$-nuclear operators. In Section 3,
we define the class of positively $p$-integral operators to be the right
positive maximal hull of the class of positively $p$-nuclear operators.
Relating to order completeness, we show that positively $p$-integral operators
can be characterized by finite dimensional sublattices and finite co-
dimensional subspaces. But, when relating to positive metric approximation
property, we characterize positively $p$-integral operators only by finite co-
dimensional subspaces and latticially $p$-integral operators only by finite
dimensional subspaces. As applications, we establish the duality relationships
between latticially $p$-integral operators and positively $p$-integral
operators. Consequently, we prove that an operator $S:E\rightarrow X$ is
latticially $p$-integral precisely when $S^{**}$ is if $X^{**}$ has the
positive metric approximation property (resp. an operator $T:X\rightarrow E$
is positively $p$-integral precisely when $T^{**}$ is if $X^{*}$ has the
positive metric approximation property). O. I. Zhukova [35] proved that the
class of latticially $p$-nuclear operators from $E$ to $X$ can be embedded
isometrically into the class of latticially $p$-integral operators from $E$ to
$X$ whenever $E^{*}$ has the metric approximation property and $X$ has the
positive metric approximation property. Analogously, we prove that the class
of positively $p$-nuclear operators from $X$ to $E$ can be embedded
isometrically into the class of positively $p$-integral operators from $X$ to
$E$ if $X^{*}$ has the positive metric approximation property and $E$ has the
metric approximation property. [28, Theorem 19.2.13] states that the adjoint operator ideal $[\prod_{p},\pi_{p}]^{*}$ of $[\prod_{p},\pi_{p}]$ is $[\mathcal{I}_{p^{*}},i_{p^{*}}]$. This formula describes $\mathcal{I}_{p^{*}}(F,E^{**})$ as the dual of the $\pi_{p}$-closure of $\mathcal{F}(E,F)$ in $\prod_{p}(E,F)$ when $E^{*}$ and $F$ have the metric
approximation property. Analogously, O. I. Zhukova [35] described
$\widetilde{\mathcal{I}}_{p^{*}}(E,X^{**})$, the space of latticially
$p^{*}$-integral operators from $E$ to $X^{**}$, as the dual of the
$\|\cdot\|_{\Lambda_{p}}$-closure of $\mathcal{F}(X,E)$ in the space of
latticially $p$-summing operators if $E^{*}$ has the metric approximation
property and $X^{**}$ has the positive metric approximation property. In this
section, we describe $\widetilde{\mathcal{I}}^{p^{*}}(X,E^{**})$, the space of
positively $p^{*}$-integral operators from $X$ to $E^{**}$, as the dual of the
$\|\cdot\|_{\Upsilon_{p}}$-closure of $\mathcal{F}(E,X)$ in the space of
positive $p$-majorizing operators if $E^{**}$ has the metric approximation
property, $X^{*}$ has the positive metric approximation property and $X$ is
order continuous.
Notation and Preliminaries. Our notation and terminology are standard, as may be found in [28, 10, 23]. Throughout the paper, $X,Y,Z$ will always denote real
Banach lattices, whereas $E,F,G$ will denote real Banach spaces. By an
operator, we always mean a bounded linear operator. For a Banach lattice $X$,
we denote by $X_{+}$ the positive cone of $X$, i.e., $X_{+}:=\{x\in X:x\geq 0\}$. We write $LDim(X)$ for the collection of all finite-dimensional sublattices of $X$. If $M$ is a closed subspace of $E$, we denote by $i_{M}$ the canonical inclusion from $M$ into $E$ and by $Q_{M}$ the natural quotient map from $E$ onto $E/M$. We let $M^{\perp}:=\{u^{*}\in E^{*}:\langle u^{*},u\rangle=0$ for all $u\in M\}$. We write $FIN(E)$ for the collection of
all finite-dimensional subspaces of $E$ and $COFIN(E)$ for the collection of
all finite co-dimensional subspaces of $E$. An operator $T:X\rightarrow Y$
which preserves the lattice operations is called a lattice homomorphism, that is, $T(x_{1}\vee x_{2})=Tx_{1}\vee Tx_{2}$ for all $x_{1},x_{2}\in X$. A one-to-one, surjective lattice homomorphism is called a lattice isomorphism. As
customary, $B_{E}$ denotes the closed unit ball of $E$, $E^{*}$ its linear
dual and $I_{E}$ the identity map on $E$. We denote by $\mathcal{L}(E,F)$
(resp. $\mathcal{F}(E,F)$) the space of all operators (resp. finite rank
operators) from $E$ to $F$. The classes of $p$-summing, $p$-nuclear and
$p$-integral operators are denoted by $\prod_{p},\mathcal{N}_{p}$ and
$\mathcal{I}_{p}$, respectively. For Banach lattices $X$ and $Y$,
$\mathcal{F}_{+}(X,Y)$ stands for the set of all positive finite rank
operators from $X$ to $Y$. The letters $p,q,r$ will designate elements of
$[1,+\infty]$, and $p^{*}$ denotes the exponent conjugate to $p$ (i.e.,
$\frac{1}{p}+\frac{1}{p^{*}}=1$). For a Banach space $E$, we denote by
$l_{p}(E)$ and $l^{w}_{p}(E)$ the spaces of all $p$-summable and weakly
$p$-summable sequences in $E$, respectively, with their usual norms
$\|(u_{n})_{n}\|_{p}:=(\sum_{n=1}^{\infty}\|u_{n}\|^{p})^{\frac{1}{p}},\quad\|(u_{n})_{n}\|_{p}^{w}:=\sup_{u^{*}\in
B_{E^{*}}}(\sum_{n=1}^{\infty}|\langle
u^{*},u_{n}\rangle|^{p})^{\frac{1}{p}}.$
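As a standard illustration of the gap between these two spaces (an example added here for clarity; it is not used later): in $E=l_{p^{*}}$ with $1<p<\infty$, the unit vector basis $(e_{n})_{n}$ satisfies
$\|(e_{n})_{n}\|_{p}^{w}=\sup_{u^{*}\in B_{l_{p}}}(\sum_{n=1}^{\infty}|u^{*}_{n}|^{p})^{\frac{1}{p}}=1,\quad\text{while}\quad\|(e_{n})_{n}\|_{p}=\infty,$
so $(e_{n})_{n}\in l^{w}_{p}(l_{p^{*}})\setminus l_{p}(l_{p^{*}})$.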
The reader is referred to [28, 10, 23] for any unexplained notation or
terminology.
## 2\. Positively $p$-nuclear operators
Recall [28] that an operator $S:E\rightarrow F$ is called $p$-nuclear if
$S=\sum_{j=1}^{\infty}u^{*}_{j}\otimes v_{j},$
where $(u^{*}_{j})_{j}\in l_{p}(E^{*}),(v_{j})_{j}\in l^{w}_{p^{*}}(F)$.
One sets
$\nu_{p}(S):=\inf\|(u^{*}_{j})_{j}\|_{p}\cdot\|(v_{j})_{j}\|_{p^{*}}^{w},$
where the infimum is taken over all so-called $p$-nuclear representations
described above.
$1$-nuclear operators are simply called nuclear operators. The class of all
nuclear operators with nuclear norm is denoted by $[\mathcal{N},\nu].$ O. I.
Zhukova [35] introduced the concept of latticially $p$-nuclear operators, which can be considered partially positive analogues of $p$-nuclear operators, as follows.
###### Definition 2.1.
[35] An operator $S:E\rightarrow X$ is called latticially $p$-nuclear if
$\displaystyle S=\sum_{j=1}^{\infty}u^{*}_{j}\otimes x_{j},$ (2.1)
where $(u^{*}_{j})_{j}\in l_{p}(E^{*}),(x_{j})_{j}\in l^{w}_{p^{*}}(X)_{+}$.
The representation (2.1) is referred to as a latticially $p$-nuclear
representation of $S$.
Put
$\widetilde{\nu}_{p}(S):=\inf\|(u^{*}_{j})_{j}\|_{p}\cdot\|(x_{j})_{j}\|_{p^{*}}^{w},$
where the infimum is taken over all latticially $p$-nuclear representations of
$S$.
The class of all latticially $p$-nuclear operators is denoted by
$\widetilde{\mathcal{N}}_{p}$. O. I. Zhukova [35] observed that latticially
$p$-nuclear operators have the left positive ideal property, that is, if
$S\in\widetilde{\mathcal{N}}_{p}(E,X),T\in\mathcal{L}(F,E)$ and
$R:X\rightarrow Y$ is positive, then $RST$ is latticially $p$-nuclear and
$\widetilde{\nu}_{p}(RST)\leq\|R\|\widetilde{\nu}_{p}(S)\|T\|$. It was also
pointed out in [35] that
$[\widetilde{\mathcal{N}}_{p},\widetilde{\nu}_{p}]\subseteq[\widetilde{\mathcal{N}}_{q},\widetilde{\nu}_{q}]$
for $p<q$. O. I. Zhukova [35] mentioned that an operator $S:E\rightarrow X$ is
latticially $p$-nuclear if and only if
$\displaystyle S=\sum_{j=1}^{\infty}u^{*}_{j}\otimes x_{j},$ (2.2)
where $(u^{*}_{j})_{j}\in l_{p}(E^{*}),(|x_{j}|)_{j}\in l^{w}_{p^{*}}(X)$.
O. I. Zhukova set
$\widetilde{\nu}_{p}^{\prime}(S):=\inf\|(u^{*}_{j})_{j}\|_{p}\cdot\|(|x_{j}|)_{j}\|_{p^{*}}^{w},$
where the infimum is taken over all representations (2.2) of $S$.
He also observed that $\widetilde{\nu}_{p}^{\prime}\leq\widetilde{\nu}_{p}\leq
2\widetilde{\nu}_{p}^{\prime}$ and
$[\widetilde{\mathcal{N}}_{1},\widetilde{\nu}_{1}^{\prime}]=[\mathcal{N},\nu].$
Recall ([25],[31, Sec.6.2]) that an operator $S:E\rightarrow F$ is called
right $p$-nuclear if $S$ can be written as
$S=\sum_{j=1}^{\infty}u^{*}_{j}\otimes v_{j},$
where $(u^{*}_{j})_{j}\in l^{w}_{p^{*}}(E^{*}),(v_{j})_{j}\in l_{p}(F)$.
Moreover, the right $p$-nuclear norm of $S$ is defined as
$\nu^{p}(S):=\inf\|(u^{*}_{j})_{j}\|_{p^{*}}^{w}\cdot\|(v_{j})_{j}\|_{p},$
where the infimum is taken over all possible representations of $S$ as above.
The class of all right $p$-nuclear operators is denoted by $\mathcal{N}^{p}$.
It is easy to see that if $S:E\rightarrow F$ is $p$-nuclear, then $S^{*}$ is
right $p$-nuclear and $\nu^{p}(S^{*})\leq\nu_{p}(S)$.
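Indeed (we record the short and standard computation): if $S=\sum_{j=1}^{\infty}u^{*}_{j}\otimes v_{j}$ with $(u^{*}_{j})_{j}\in l_{p}(E^{*})$ and $(v_{j})_{j}\in l^{w}_{p^{*}}(F)$, then
$S^{*}=\sum_{j=1}^{\infty}J_{F}v_{j}\otimes u^{*}_{j},$
where $J_{F}:F\rightarrow F^{**}$ is the canonical embedding, $(J_{F}v_{j})_{j}\in l^{w}_{p^{*}}(F^{**})$ with $\|(J_{F}v_{j})_{j}\|^{w}_{p^{*}}=\|(v_{j})_{j}\|^{w}_{p^{*}}$, and $(u^{*}_{j})_{j}\in l_{p}(E^{*})$; taking the infimum over all $p$-nuclear representations of $S$ yields $\nu^{p}(S^{*})\leq\nu_{p}(S)$.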
In this section, we introduce the notion of positively $p$-nuclear operators,
inspired by the definition in the Banach space setting.
###### Definition 2.2.
We say that an operator $T:X\rightarrow E$ is positively $p$-nuclear if
$\displaystyle T=\sum_{j=1}^{\infty}x^{*}_{j}\otimes u_{j},$ (2.3)
where $(x^{*}_{j})_{j}\in l^{w}_{p^{*}}(X^{*})_{+},(u_{j})_{j}\in l_{p}(E)$.
We call the representation (2.3) a positively $p$-nuclear representation of
$T$. We set
$\widetilde{\nu}^{p}(T):=\inf\|(x^{*}_{j})_{j}\|_{p^{*}}^{w}\cdot\|(u_{j})_{j}\|_{p},$
where the infimum is taken over all positively $p$-nuclear representations of
$T$. The class of all positively $p$-nuclear operators is denoted by
$\widetilde{\mathcal{N}}^{p}$.
We collect some basic properties of positively $p$-nuclear operators which are
immediate from the definition. These elementary properties will be used
throughout the paper.
###### Proposition 2.3.
(a) If $T\in\widetilde{\mathcal{N}}^{p}(X,E),S\in\mathcal{L}(E,F)$ and $R:Y\rightarrow X$ is positive, then $STR$ is positively $p$-nuclear and $\widetilde{\nu}^{p}(STR)\leq\|S\|\widetilde{\nu}^{p}(T)\|R\|$.
(b) $[\widetilde{\mathcal{N}}^{p},\widetilde{\nu}^{p}]\subseteq[\widetilde{\mathcal{N}}^{q},\widetilde{\nu}^{q}]$ for $p<q$.
(c) $T:X\rightarrow E$ is positively $p$-nuclear if and only if
$\displaystyle T=\sum_{j=1}^{\infty}x^{*}_{j}\otimes u_{j},$ (2.4)
where $(|x^{*}_{j}|)_{j}\in l^{w}_{p^{*}}(X^{*}),(u_{j})_{j}\in l_{p}(E)$. In this case, if we let
$|\widetilde{\nu}^{p}|(T):=\inf\|(|x^{*}_{j}|)_{j}\|_{p^{*}}^{w}\cdot\|(u_{j})_{j}\|_{p},$
where the infimum is taken over all representations (2.4) of $T$, then
$|\widetilde{\nu}^{p}|(T)\leq\widetilde{\nu}^{p}(T)\leq 2|\widetilde{\nu}^{p}|(T).$
(d) $[\widetilde{\mathcal{N}}^{1},|\widetilde{\nu}^{1}|]=[\mathcal{N},\nu].$
(e) If $S\in\widetilde{\mathcal{N}}_{p}(E,X)$, then $S^{*}\in\widetilde{\mathcal{N}}^{p}(X^{*},E^{*})$ and $\widetilde{\nu}^{p}(S^{*})\leq\widetilde{\nu}_{p}(S).$ The converse is true if $X$ is a dual Banach lattice, in which case $\widetilde{\nu}^{p}(S^{*})=\widetilde{\nu}_{p}(S).$
(f) If $T\in\widetilde{\mathcal{N}}^{p}(X,E)$, then $T^{*}\in\widetilde{\mathcal{N}}_{p}(E^{*},X^{*})$ and $\widetilde{\nu}_{p}(T^{*})\leq\widetilde{\nu}^{p}(T).$ The converse is true if $E$ is a dual Banach space, in which case $\widetilde{\nu}_{p}(T^{*})=\widetilde{\nu}^{p}(T).$
###### Remark 2.4.
The class $\widetilde{\mathcal{N}}^{p}$ does not coincide with $\mathcal{N}^{p}$. Indeed, O. I. Zhukova [35] remarked that the operator
$T:L_{1}[0,1]\rightarrow L_{2}[0,1]$ defined by
$Tf=\sum\limits_{n=1}^{\infty}\frac{1}{n}(\int_{[0,1]}f(t)r_{n}(t)dt)r_{n},\quad
f\in L_{1}[0,1],$
where $(r_{n})_{n}$ is the Rademacher function sequence, is $p$-nuclear for every $p>1$ and yet is not latticially $p$-nuclear for any $p$. Hence, $T^{*}$ is right $p$-nuclear for every $p>1$. But, by Proposition 2.3 (e), $T^{*}$ is not positively $p$-nuclear for any $p$.
To describe the conjugate of the space of positively $p$-nuclear operators, we
need the concept of positive $p$-majorizing operators introduced in [6].
###### Definition 2.5.
[6] We say that an operator $S:E\rightarrow X$ is positive $p$-majorizing if
there exists a constant $C>0$ such that
$(\sum_{j=1}^{n}|\langle x^{*}_{j},Su_{j}\rangle|^{p})^{\frac{1}{p}}\leq
C\|(x^{*}_{j})_{j=1}^{n}\|^{w}_{p},$ (2.5)
for all finite families $(u_{j})_{j=1}^{n}$ in $B_{E}$ and
$(x^{*}_{j})_{j=1}^{n}$ in $(X^{*})_{+}$.
We denote by $\Upsilon_{p}(E,X)$ the space of all positive $p$-majorizing
operators from $E$ to $X$. It is easy to see that $\Upsilon_{p}(E,X)$ becomes
a Banach space with the norm $\|\cdot\|_{\Upsilon_{p}}$ given by the infimum
of the constants $C$ satisfying (2.5). Obviously, positive $p$-majorizing
operators have the left positive ideal property, that is, if
$S\in\Upsilon_{p}(E,X),T\in\mathcal{L}(F,E)$ and $R:X\rightarrow Y$ is
positive, then $RST$ is positive $p$-majorizing and
$\|RST\|_{\Upsilon_{p}}\leq\|R\|\|S\|_{\Upsilon_{p}}\|T\|$.
###### Definition 2.6.
[3] An operator $T:X\rightarrow E$ is said to be positive $p$-summing if there
exists a constant $C>0$ such that
$(\sum_{i=1}^{n}\|Tx_{i}\|^{p})^{\frac{1}{p}}\leq C\|(x_{i})_{i=1}^{n}\|^{w}_{p},$ (2.6)
for any choice of finitely many vectors $x_{1},x_{2},\cdots,x_{n}$ in $X_{+}$.
The space of all positive $p$-summing operators from $X$ to $E$ is denoted by
$\Lambda_{p}(X,E)$. This space becomes a Banach space with the norm
$\|\cdot\|_{\Lambda_{p}}$ given by the infimum of the constants $C$ satisfying
(2.6). It is easy to see that positive $p$-summing operators have the right
positive ideal property, that is, if
$T\in\Lambda_{p}(X,E),S\in\mathcal{L}(E,F)$ and $R:Y\rightarrow X$ is
positive, then $STR$ is positive $p$-summing and
$\|STR\|_{\Lambda_{p}}\leq\|S\|\|T\|_{\Lambda_{p}}\|R\|$.
In [6], we proved the following duality relationships between positive $p$-summing operators and positive $p$-majorizing operators, which will be used later.
###### Theorem 2.7.
[6]
(a) An operator $T:X\rightarrow E$ is positive $p$-summing if and only if $T^{*}$ is positive $p$-majorizing. In this case, $\|T\|_{\Lambda_{p}}=\|T^{*}\|_{\Upsilon_{p}}$.
(b) An operator $S:F\rightarrow Y$ is positive $p$-majorizing if and only if $S^{*}$ is positive $p$-summing. In this case, $\|S\|_{\Upsilon_{p}}=\|S^{*}\|_{\Lambda_{p}}$.
Recall that a Banach space $E$ has the approximation property ($AP$ for short)
if for every $\epsilon>0$ and for every compact subset $K$ of $E$, there
exists an operator $S\in\mathcal{F}(E)$ such that $\|Su-u\|<\epsilon$ for all
$u\in K$. In addition, if the operator $S$ can be chosen with $\|S\|\leq 1$,
$E$ is said to have the metric approximation property ($MAP$). A Banach lattice
$X$ is said to have the positive metric approximation property ($PMAP$) if for
every $\epsilon>0$ and for every compact subset $K$ of $X$, there exists an
operator $R\in\mathcal{F}_{+}(X)$ with $\|R\|\leq 1$ such that
$\|Rx-x\|<\epsilon$ for all $x\in K$.
We need a result due to A. Lissitsin and E. Oja [22] which says that positive
finite-rank operators between dual Banach lattices are locally conjugate.
###### Lemma 2.8.
[22] Let $X,Y$ be Banach lattices, let $F$ be a finite subset of $Y^{*}$ and
let $\epsilon>0$. If $S\in\mathcal{F}_{+}(Y^{*},X^{*})$, then there exists an
operator $R\in\mathcal{F}_{+}(X,Y)$ such that $\|R\|\leq(1+\epsilon)\|S\|$ and
$\|R^{*}y^{*}-Sy^{*}\|<\epsilon$ for all $y^{*}\in F$.
It follows from Lemma 2.8 that $X^{*}$ has the $PMAP$ if and only if for every
$\epsilon>0$ and each compact subset $K$ of $X^{*}$, there exists an operator
$R\in\mathcal{F}_{+}(X)$ with $\|R\|\leq 1$ such that
$\|R^{*}x^{*}-x^{*}\|<\epsilon$ for all $x^{*}\in K$.
###### Lemma 2.9.
Suppose that $E$ has the $AP$ or $X^{*}$ has the $PMAP$. Assume that
$S:E\rightarrow X^{**}$ is positive $p^{*}$-majorizing. Let
$(x^{*}_{n})_{n}\in(l^{w}_{p^{*}}(X^{*}))_{+}$ and $(u_{n})_{n}\in l_{p}(E)$.
Then
$\sum\limits_{n=1}^{\infty}x^{*}_{n}\otimes u_{n}=0$ implies
$\sum\limits_{n=1}^{\infty}\langle Su_{n},x^{*}_{n}\rangle=0.$
###### Proof.
It is clear that the conclusion holds true if $S$ is finite-rank. Suppose that
$S:E\rightarrow X^{**}$ is positive $p^{*}$-majorizing.
Case 1. $E$ has the $AP$.
Let $\epsilon>0$. Choose $1\leq\xi_{n}\rightarrow\infty$ with
$\|(\xi_{n}u_{n})_{n}\|_{p}\leq(1+\epsilon)\|(u_{n})_{n}\|_{p}$. Since $E$ has
the $AP$, there exists an operator $U\in\mathcal{F}(E)$ such that
$\|U(\frac{u_{n}}{\xi_{n}\|u_{n}\|})-\frac{u_{n}}{\xi_{n}\|u_{n}\|}\|<\epsilon$
for all $n$. Note that $\sum\limits_{n=1}^{\infty}\langle
SUu_{n},x^{*}_{n}\rangle=0$. By Theorem 2.7, we get
$\displaystyle|\sum\limits_{n=1}^{\infty}\langle Su_{n},x^{*}_{n}\rangle|$
$\displaystyle=|\sum\limits_{n=1}^{\infty}\langle
S(u_{n}-Uu_{n}),x^{*}_{n}\rangle|$
$\displaystyle=|\sum\limits_{n=1}^{\infty}\langle
S^{*}J_{X^{*}}x^{*}_{n},u_{n}-Uu_{n}\rangle|$
$\displaystyle\leq(\sum_{n=1}^{\infty}\|S^{*}J_{X^{*}}x^{*}_{n}\|^{p^{*}})^{\frac{1}{p^{*}}}(\sum_{n=1}^{\infty}\|u_{n}-Uu_{n}\|^{p})^{\frac{1}{p}}$
$\displaystyle\leq\epsilon(1+\epsilon)\|S\|_{\Upsilon_{p^{*}}}\|(x^{*}_{n})_{n=1}^{\infty}\|_{p^{*}}^{w}\|(u_{n})_{n}\|_{p}.$
Letting $\epsilon\rightarrow 0$, we get
$\sum\limits_{n=1}^{\infty}\langle Su_{n},x^{*}_{n}\rangle=0.$
Case 2. $X^{*}$ has the $PMAP$.
We may assume that $\lim\limits_{n\rightarrow\infty}\|x^{*}_{n}\|=0$. Let
$\epsilon>0$. We choose a positive integer $N$ with
$(\sum\limits_{n=N+1}^{\infty}\|u_{n}\|^{p})^{\frac{1}{p}}<\epsilon.$ Let
$\delta>0$ be such that $\delta
N^{\frac{1}{p^{*}}}\|S\|\|(u_{n})_{n=1}^{\infty}\|_{p}<\epsilon.$ Since
$X^{*}$ has the $PMAP$, it follows from Lemma 2.8 that there exists an
operator $R\in\mathcal{F}_{+}(X)$ with $\|R\|\leq 1$ such that
$\|R^{*}x^{*}_{n}-x^{*}_{n}\|<\delta$ for all $n$. Note that
$\sum\limits_{n=1}^{\infty}\langle R^{**}Su_{n},x^{*}_{n}\rangle=0.$ By
Theorem 2.7, we get
$\displaystyle|\sum\limits_{n=1}^{\infty}\langle Su_{n},x^{*}_{n}\rangle|$
$\displaystyle=|\sum\limits_{n=1}^{\infty}\langle
Su_{n},x^{*}_{n}-R^{*}x^{*}_{n}\rangle|$
$\displaystyle\leq\sum\limits_{n=1}^{N}|\langle
Su_{n},x^{*}_{n}-R^{*}x^{*}_{n}\rangle|+\sum\limits_{n=N+1}^{\infty}|\langle
S^{*}x^{*}_{n},u_{n}\rangle|+\sum\limits_{n=N+1}^{\infty}|\langle
S^{*}R^{*}x^{*}_{n},u_{n}\rangle|$
$\displaystyle\leq(\sum_{n=1}^{N}\|x^{*}_{n}-R^{*}x^{*}_{n}\|^{p^{*}})^{\frac{1}{p^{*}}}(\sum_{n=1}^{N}\|Su_{n}\|^{p})^{\frac{1}{p}}+(\sum_{n=N+1}^{\infty}\|S^{*}x^{*}_{n}\|^{p^{*}})^{\frac{1}{p^{*}}}(\sum_{n=N+1}^{\infty}\|u_{n}\|^{p})^{\frac{1}{p}}$
$\displaystyle+(\sum_{n=N+1}^{\infty}\|S^{*}R^{*}x^{*}_{n}\|^{p^{*}})^{\frac{1}{p^{*}}}(\sum_{n=N+1}^{\infty}\|u_{n}\|^{p})^{\frac{1}{p}}$
$\displaystyle\leq\delta
N^{\frac{1}{p^{*}}}\|S\|\|(u_{n})_{n=1}^{\infty}\|_{p}+2\epsilon\|S\|_{\Upsilon_{p^{*}}}\|(x^{*}_{n})_{n=1}^{\infty}\|_{p^{*}}^{w}$
$\displaystyle\leq\epsilon+2\epsilon\|S\|_{\Upsilon_{p^{*}}}\|(x^{*}_{n})_{n=1}^{\infty}\|_{p^{*}}^{w}.$
Letting $\epsilon\rightarrow 0$, we get
$\sum\limits_{n=1}^{\infty}\langle Su_{n},x^{*}_{n}\rangle=0.$
∎
Consequently, under the hypotheses of Lemma 2.9, if $T:X\rightarrow E$ is
positively $p$-nuclear and $S:E\rightarrow X^{**}$ is positive
$p^{*}$-majorizing, then
$\textrm{trace}(ST):=\sum\limits_{n=1}^{\infty}\langle
Su_{n},x^{*}_{n}\rangle$ is independent of the choice of the positively
$p$-nuclear representation $T=\sum\limits_{n=1}^{\infty}x^{*}_{n}\otimes
u_{n}$. Moreover, it is easy to see that
$|\textrm{trace}(ST)|\leq\|S\|_{\Upsilon_{p^{*}}}\widetilde{\nu}^{p}(T).$
To prove the main result of this section, we need a lemma due to A. Lissitsin
and E. Oja [22] that demonstrates the connection between finite-dimensional
subspaces and finite-dimensional sublattices in order complete Banach
lattices. This lemma will be used frequently throughout this paper.
###### Lemma 2.10.
[22, Lemma 5.5] Let $M$ be a finite-dimensional subspace of an order complete
Banach lattice $X$ and let $\epsilon>0$. Then there exist a sublattice $Z$ of
$X$ containing $M$, a finite-dimensional sublattice $G$ of $Z$, and a positive
projection $P$ from $Z$ onto $G$ such that $\|Px-x\|\leq\epsilon\|x\|$ for all
$x\in M$.
We also need the principle of local reflexivity in Banach lattices due to J.
L. Conroy, L. C. Moore [8] and S. J. Bernau [1], which plays a crucial role in
Banach lattice theory.
###### Theorem 2.11.
[1, Theorem 2] Let $X$ be a Banach lattice and let $M$ be a finite-dimensional
sublattice of $X^{**}$. Then for every finite-dimensional subspace $L$ of
$X^{*}$ and every $\epsilon>0$, there exists a lattice isomorphism $R$ from
$M$ into $X$ such that
(a) $\|R\|,\|R^{-1}\|\leq 1+\epsilon$;
(b) $|\langle x^{**},x^{*}\rangle-\langle x^{*},Rx^{**}\rangle|\leq\epsilon\|x^{**}\|\|x^{*}\|$ for all $x^{**}\in M$ and $x^{*}\in L$.
Now we are in a position to give the main result of this section.
###### Theorem 2.12.
Suppose that $E$ has the $AP$ or $X^{*}$ has the $PMAP$. Then
$\Upsilon_{p^{*}}(E,X^{**})=(\widetilde{\mathcal{N}}^{p}(X,E))^{*}.$
###### Proof.
Let us define an operator
$V:\Upsilon_{p^{*}}(E,X^{**})\rightarrow(\widetilde{\mathcal{N}}^{p}(X,E))^{*}$
by
$S\mapsto V_{S}(T)=\textrm{trace}(ST),\quad
S\in\Upsilon_{p^{*}}(E,X^{**}),T\in\widetilde{\mathcal{N}}^{p}(X,E).$
Then $\|V_{S}\|\leq\|S\|_{\Upsilon_{p^{*}}}$.
Let $\varphi\in(\widetilde{\mathcal{N}}^{p}(X,E))^{*}$. We define an operator
$S:E\rightarrow X^{**}$ by $\langle
Su,x^{*}\rangle=\langle\varphi,x^{*}\otimes u\rangle$ for $u\in E,x^{*}\in
X^{*}$.
We claim that $S$ is positive $p^{*}$-majorizing.
Let $u_{1},u_{2},\cdots,u_{n}$ in $B_{E}$ and $x^{***}_{1},x^{***}_{2},\cdots,x^{***}_{n}$ in $(X^{***})_{+}$ be given. Let $\epsilon>0$. We set $M=\textrm{span}\{Su_{j}:1\leq j\leq n\}$ and $L=\textrm{span}\{x^{***}_{j}:1\leq j\leq n\}$. It follows from Lemma 2.10
that there exist a sublattice $Z$ of $X^{***}$ containing $L$, a finite-
dimensional sublattice $G$ of $Z$ and a positive projection $P$ from $Z$ onto
$G$ such that $\|Px^{***}-x^{***}\|\leq\epsilon\|x^{***}\|$ for all
$x^{***}\in L$. By Theorem 2.11, we get a lattice isomorphism $R$ from $G$
into $X^{*}$ such that $\|R\|,\|R^{-1}\|\leq 1+\epsilon$ and
$\displaystyle|\langle x^{***},x^{**}\rangle-\langle
x^{**},Rx^{***}\rangle|\leq\epsilon\|x^{***}\|\|x^{**}\|,$ (2.7)
for all $x^{***}\in G,x^{**}\in M.$ Let $x^{*}_{j}=RPx^{***}_{j}\geq 0$ $(j=1,2,\cdots,n)$. We choose $(\lambda_{j})_{j=1}^{n}$ such that
$\sum\limits_{j=1}^{n}|\lambda_{j}|^{p}=1$ and
$(\sum_{j=1}^{n}|\langle
Su_{j},x^{*}_{j}\rangle|^{p^{*}})^{\frac{1}{p^{*}}}=\sum_{j=1}^{n}\lambda_{j}\langle
Su_{j},x^{*}_{j}\rangle.$
Let
$T=\sum\limits_{j=1}^{n}x^{*}_{j}\otimes\lambda_{j}u_{j}\in\widetilde{\mathcal{N}}^{p}(X,E)$.
Then we have
$\displaystyle(\sum_{j=1}^{n}|\langle
Su_{j},x^{*}_{j}\rangle|^{p^{*}})^{\frac{1}{p^{*}}}$
$\displaystyle=\langle\varphi,T\rangle$
$\displaystyle\leq\|\varphi\|\widetilde{\nu}^{p}(T)$
$\displaystyle\leq\|\varphi\|\|(x^{*}_{j})_{j=1}^{n}\|^{w}_{p^{*}}$
$\displaystyle\leq\|\varphi\|(1+\epsilon)^{2}\|(x^{***}_{j})_{j=1}^{n}\|^{w}_{p^{*}}$
(2.8)
By (2.7), we get
$\displaystyle(\sum_{j=1}^{n}|\langle x^{***}_{j},Su_{j}\rangle-\langle
Su_{j},x^{*}_{j}\rangle|^{p^{*}})^{\frac{1}{p^{*}}}$
$\displaystyle\leq(\sum_{j=1}^{n}|\langle x^{***}_{j},Su_{j}\rangle-\langle
Px^{***}_{j},Su_{j}\rangle|^{p^{*}})^{\frac{1}{p^{*}}}$
$\displaystyle+(\sum_{j=1}^{n}|\langle Px^{***}_{j},Su_{j}\rangle-\langle
Su_{j},x^{*}_{j}\rangle|^{p^{*}})^{\frac{1}{p^{*}}}$
$\displaystyle\leq\epsilon\|S\|(\sum_{j=1}^{n}\|x^{***}_{j}\|^{p^{*}})^{\frac{1}{p^{*}}}+\epsilon(1+\epsilon)\|S\|(\sum_{j=1}^{n}\|x^{***}_{j}\|^{p^{*}})^{\frac{1}{p^{*}}}$
(2.9)
Combining (2.8) and (2.9), we get
$\displaystyle(\sum_{j=1}^{n}|\langle
x^{***}_{j},Su_{j}\rangle|^{p^{*}})^{\frac{1}{p^{*}}}$
$\displaystyle\leq(\sum_{j=1}^{n}|\langle
Su_{j},x^{*}_{j}\rangle|^{p^{*}})^{\frac{1}{p^{*}}}+(\sum_{j=1}^{n}|\langle
x^{***}_{j},Su_{j}\rangle-\langle
Su_{j},x^{*}_{j}\rangle|^{p^{*}})^{\frac{1}{p^{*}}}$
$\displaystyle\leq\|\varphi\|(1+\epsilon)^{2}\|(x^{***}_{j})_{j=1}^{n}\|^{w}_{p^{*}}+\epsilon(2+\epsilon)\|S\|(\sum_{j=1}^{n}\|x^{***}_{j}\|^{p^{*}})^{\frac{1}{p^{*}}}$
Letting $\epsilon\rightarrow 0$, we get
$(\sum_{j=1}^{n}|\langle
x^{***}_{j},Su_{j}\rangle|^{p^{*}})^{\frac{1}{p^{*}}}\leq\|\varphi\|\|(x^{***}_{j})_{j=1}^{n}\|^{w}_{p^{*}},$
which implies that $S$ is positive $p^{*}$-majorizing and
$\|S\|_{\Upsilon_{p^{*}}}\leq\|\varphi\|$.
By the definition of the operator $S$, we see that
$\langle\varphi,T\rangle=V_{S}(T)$ for all $T\in\mathcal{F}(X,E)$. Since
$\mathcal{F}(X,E)$ is $\widetilde{\nu}^{p}$-dense in
$\widetilde{\mathcal{N}}^{p}(X,E)$, it follows that $\varphi=V_{S}$. Hence the
operator $V$ is a surjective linear isometry.
∎
###### Corollary 2.13.
Suppose that $E^{***}$ has the $AP$ or $X^{*}$ has the $PMAP$. If the operator
$T:X\rightarrow E$ has a latticially $p$-nuclear adjoint, then $T$ is
positively $p$-nuclear and
$\widetilde{\nu}^{p}(T)=\widetilde{\nu}_{p}(T^{*}).$
###### Proof.
Suppose that $T$ is not positively $p$-nuclear. Since $T^{*}$ is latticially
$p$-nuclear, it follows from Proposition 2.3(e) that $T^{**}$ is positively
$p$-nuclear and so is $T^{**}J_{X}=J_{E}T$. Hence
$J_{E}T\in\widetilde{\mathcal{N}}^{p}(X,E^{**})\setminus\widetilde{\mathcal{N}}^{p}(X,E).$
Since $E^{***}$ has the $AP$, $E^{**}$ has the $AP$. By Theorem 2.12 and the
Hahn-Banach Theorem, we get an operator $S\in\Upsilon_{p^{*}}(E^{**},X^{**})$
such that $\textrm{trace}(SJ_{E}T)=1$ and $\textrm{trace}(SJ_{E}R)=0$ for all
$R\in\widetilde{\mathcal{N}}^{p}(X,E).$ This yields that $SJ_{E}u=0$ for all
$u\in E$. Let us take any latticially $p$-nuclear representation
$T^{*}=\sum\limits_{n=1}^{\infty}u^{**}_{n}\otimes x^{*}_{n}$. Since
$SJ_{E}u=0$ for all $u\in E$, we get $\sum\limits_{n=1}^{\infty}\langle
x^{*}_{n},x\rangle Su^{**}_{n}=0$ for all $x\in X$. Moreover, for every
$x^{*}\in X^{*}$, we have
$0=\langle SJ_{E}u,x^{*}\rangle=\langle J^{*}_{E}S^{*}x^{*},u\rangle.$
By the Goldstine-Weston Theorem, we get
$\displaystyle\langle J^{**}_{E}u^{**},S^{*}x^{*}\rangle=\langle
u^{**},J^{*}_{E}S^{*}x^{*}\rangle=0.$ (2.10)
Note that
$\displaystyle
1=\textrm{trace}(SJ_{E}T)=\textrm{trace}(ST^{**}J_{X})=\sum\limits_{n=1}^{\infty}\langle
Su^{**}_{n},x^{*}_{n}\rangle.$ (2.11)
It follows from Theorem 2.7 that
$\sum\limits_{n=1}^{\infty}\|u^{**}_{n}\|\|S^{*}x^{*}_{n}\|\leq\|(u^{**}_{n})_{n}\|_{p}\|S\|_{\Upsilon_{p^{*}}}\|(x^{*}_{n})_{n}\|_{p^{*}}^{w}<\infty.$
In the case $X^{*}$ has the $PMAP$, an argument analogous to that of Lemma 2.9
Case 2 shows that $\sum\limits_{n=1}^{\infty}\langle
Su^{**}_{n},x^{*}_{n}\rangle=0,$ which contradicts (2.11). It remains to
prove the conclusion in the case $E^{***}$ has the $AP$.
We define an operator
$V:=S^{*}J_{X^{*}}T^{*}J_{E}^{*}:E^{***}\stackrel{{\scriptstyle
J^{*}_{E}}}{{\longrightarrow}}E^{*}\stackrel{{\scriptstyle
T^{*}}}{{\longrightarrow}}X^{*}\stackrel{{\scriptstyle
J_{X^{*}}}}{{\longrightarrow}}X^{***}\stackrel{{\scriptstyle
S^{*}}}{{\longrightarrow}}E^{***}.$
It is easy to see that $V=\sum\limits_{n=1}^{\infty}u^{**}_{n}\otimes
S^{*}x^{*}_{n}$ is nuclear. Furthermore, for $u^{***}\in E^{***},u^{**}\in
E^{**}$, we get
$\displaystyle\langle Vu^{***},u^{**}\rangle$ $\displaystyle=\langle
S^{*}J_{X^{*}}T^{*}J_{E}^{*}u^{***},u^{**}\rangle$
$\displaystyle=\sum\limits_{n=1}^{\infty}\langle
u^{**}_{n},J^{*}_{E}u^{***}\rangle\langle S^{*}x^{*}_{n},u^{**}\rangle$
$\displaystyle=\sum\limits_{n=1}^{\infty}\langle
J^{**}_{E}u^{**}_{n},u^{***}\rangle\langle S^{*}x^{*}_{n},u^{**}\rangle.$
Therefore, $V=\sum\limits_{n=1}^{\infty}J^{**}_{E}u^{**}_{n}\otimes
S^{*}x^{*}_{n},\sum\limits_{n=1}^{\infty}\|J^{**}_{E}u^{**}_{n}\|\|S^{*}x^{*}_{n}\|<\infty.$
Since $E^{***}$ has the $AP$, we get, by (2.11),
$\sum\limits_{n=1}^{\infty}\langle
J^{**}_{E}u^{**}_{n},S^{*}x^{*}_{n}\rangle=\sum\limits_{n=1}^{\infty}\langle
S^{*}x^{*}_{n},u^{**}_{n}\rangle=\sum\limits_{n=1}^{\infty}\langle
Su^{**}_{n},x^{*}_{n}\rangle=1.$
This contradicts (2.10).
In conclusion, we have proved in both cases that if $J_{E}T$ is positively
$p$-nuclear, then so is $T$. Since $\widetilde{\mathcal{N}}^{p}(X,E)$ is a
closed subspace of $\widetilde{\mathcal{N}}^{p}(X,E^{**})$ under the canonical
mapping $J_{E}$, we get
$\widetilde{\nu}^{p}(T)=\widetilde{\nu}^{p}(J_{E}T)=\widetilde{\nu}^{p}(T^{**}J_{X})\leq\widetilde{\nu}^{p}(T^{**})\leq\widetilde{\nu}_{p}(T^{*}).$
∎
###### Theorem 2.14.
Suppose that $E^{*}$ has the $AP$ or $X^{****}$ has the $PMAP$. If the
operator $S:E\rightarrow X$ has a positively $p$-nuclear adjoint, then $S$ is
latticially $p$-nuclear and
$\widetilde{\nu}_{p}(S)=\widetilde{\nu}^{p}(S^{*}).$
###### Proof.
Suppose that $S$ is not latticially $p$-nuclear. By [35, Theorem 3], there
exists an operator $T\in\Lambda_{p^{*}}(X^{**},E^{**})$ such that
$\textrm{trace}(TJ_{X}S)=1$ and $\textrm{trace}(TJ_{X}R)=0$ for all
$R\in\widetilde{\mathcal{N}}_{p}(E,X)$. This implies that $TJ_{X}x=0$ for all
$x\in X$. It follows from the Goldstine-Weston Theorem that $\langle
J^{**}_{X}x^{**},T^{*}u^{*}\rangle=0$ for all $u^{*}\in E^{*},x^{**}\in
X^{**}$. Take any positively $p$-nuclear representation
$S^{*}=\sum\limits_{n=1}^{\infty}x^{**}_{n}\otimes u^{*}_{n}$. Then, we have
$\displaystyle\sum\limits_{n=1}^{\infty}\langle
Tx^{**}_{n},u^{*}_{n}\rangle=\textrm{trace}(TS^{**}J_{E})=\textrm{trace}(TJ_{X}S)=1.$
(2.12)
Moreover, $\sum\limits_{n=1}^{\infty}\langle u^{*}_{n},u\rangle Tx^{**}_{n}=0$
for all $u\in E$.
If $E^{*}$ has the $AP$, then $E^{*}$ has the $AP$ with conjugate operators, that is, the approximating finite rank operators on $E^{*}$ may be taken to be adjoints of finite rank operators on $E$. We argue as in Case 1 of the proof of Lemma 2.9 to show that $\sum\limits_{n=1}^{\infty}\langle Tx^{**}_{n},u^{*}_{n}\rangle=0$, which contradicts (2.12). Now assume that $X^{****}$ has the $PMAP$.
Let
$U=(T^{*}J_{E^{*}})(S^{*}J_{X}^{*}):X^{***}\stackrel{{\scriptstyle
J^{*}_{X}}}{{\longrightarrow}}X^{*}\stackrel{{\scriptstyle
S^{*}}}{{\longrightarrow}}E^{*}\stackrel{{\scriptstyle
J_{E^{*}}}}{{\longrightarrow}}E^{***}\stackrel{{\scriptstyle
T^{*}}}{{\longrightarrow}}X^{****}.$
It is easy to check that
$S^{*}J^{*}_{X}=\sum\limits_{n=1}^{\infty}J_{X^{**}}x^{**}_{n}\otimes
u^{*}_{n}.$ Combining Lemma 2.9 with Theorem 2.7, we get
$0=\sum\limits_{n=1}^{\infty}\langle
J_{X}^{**}x^{**}_{n},T^{*}u^{*}_{n}\rangle=\sum\limits_{n=1}^{\infty}\langle
J_{X^{**}}x^{**}_{n},T^{*}u^{*}_{n}\rangle=\sum\limits_{n=1}^{\infty}\langle
Tx^{**}_{n},u^{*}_{n}\rangle=1.$
This is a contradiction.
Therefore, we have proved in both cases that if $J_{X}S$ is latticially
$p$-nuclear, so is $S$. Since $\widetilde{\mathcal{N}}_{p}(E,X)$ can be
considered to be a closed subspace of $\widetilde{\mathcal{N}}_{p}(E,X^{**})$
under the canonical embedding $J_{X}$, we get
$\widetilde{\nu}_{p}(S)=\widetilde{\nu}_{p}(J_{X}S)=\widetilde{\nu}_{p}(S^{**}J_{E})\leq\widetilde{\nu}_{p}(S^{**})\leq\widetilde{\nu}^{p}(S^{*}).$
This completes the proof. ∎
In the rest of this section, we describe the space of positive $p$-majorizing operators via positively $p$-nuclear operators. First we prove a lemma which is of interest in itself.
###### Lemma 2.15.
Suppose that $T:X\rightarrow E$ is positively $p$-nuclear and $S:F\rightarrow
X$ is positive $p^{*}$-majorizing. Then $TS$ is nuclear and
$\nu(TS)\leq\widetilde{\nu}^{p}(T)\|S\|_{\Upsilon_{p^{*}}}$.
###### Proof.
Let $\epsilon>0$. Then $T$ admits a positively $p$-nuclear representation
$T=\sum\limits_{n=1}^{\infty}x^{*}_{n}\otimes u_{n}$ such that
$\|(x^{*}_{n})_{n}\|_{p^{*}}^{w}\|(u_{n})_{n}\|_{p}\leq(1+\epsilon)\widetilde{\nu}^{p}(T).$
By Theorem 2.7, we get
$\displaystyle\sum_{n=1}^{\infty}\|S^{*}x^{*}_{n}\|\|u_{n}\|$
$\displaystyle\leq(\sum_{n=1}^{\infty}\|S^{*}x^{*}_{n}\|^{p^{*}})^{\frac{1}{p^{*}}}(\sum_{n=1}^{\infty}\|u_{n}\|^{p})^{\frac{1}{p}}$
$\displaystyle\leq\|S\|_{\Upsilon_{p^{*}}}\|(x^{*}_{n})_{n}\|_{p^{*}}^{w}\|(u_{n})_{n}\|_{p}$
$\displaystyle\leq\|S\|_{\Upsilon_{p^{*}}}(1+\epsilon)\widetilde{\nu}^{p}(T).$
This means that $TS$ is nuclear and
$\nu(TS)\leq\|S\|_{\Upsilon_{p^{*}}}(1+\epsilon)\widetilde{\nu}^{p}(T).$
Letting $\epsilon\rightarrow 0$, we get
$\nu(TS)\leq\widetilde{\nu}^{p}(T)\|S\|_{\Upsilon_{p^{*}}}$.
∎
Let $E$ be a Banach space and $X$ be a Banach lattice. We set
$\mathcal{U}_{*}^{p}(E,X):=\{S\in\mathcal{L}(E,X):TS$ is nuclear for all $T\in\widetilde{\mathcal{N}}^{p}(X,E)\}.$
For $S\in\mathcal{U}_{*}^{p}(E,X)$, we define
$V_{S}:\widetilde{\mathcal{N}}^{p}(X,E)\rightarrow\mathcal{N}(E),T\mapsto TS.$
It follows from the closed graph theorem that $V_{S}$ is continuous. We define
a norm $\zeta^{p}$ on $\mathcal{U}_{*}^{p}(E,X)$ by
$\zeta^{p}(S):=\|V_{S}\|,\quad S\in\mathcal{U}_{*}^{p}(E,X).$
A routine argument shows that $[\mathcal{U}_{*}^{p}(E,X),\zeta^{p}]$ is a
Banach space.
We note that if $E$ has the $AP$ and $U\in\mathcal{N}(E)$, then
$\textrm{trace}(U)=\sum\limits_{n=1}^{\infty}\langle u^{*}_{n},u_{n}\rangle$
is independent of the choice of the nuclear representation
$U=\sum\limits_{n=1}^{\infty}u^{*}_{n}\otimes u_{n}$. Moreover,
$|\textrm{trace}(U)|\leq\nu(U)$.
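Indeed, for any nuclear representation $U=\sum_{n=1}^{\infty}u^{*}_{n}\otimes u_{n}$ one has
$|\textrm{trace}(U)|=\Big{|}\sum_{n=1}^{\infty}\langle u^{*}_{n},u_{n}\rangle\Big{|}\leq\sum_{n=1}^{\infty}\|u^{*}_{n}\|\|u_{n}\|,$
and taking the infimum over all such representations gives $|\textrm{trace}(U)|\leq\nu(U)$ (a one-line justification recorded here for completeness).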
###### Theorem 2.16.
Suppose that $E$ has the $AP$. Then
$\Upsilon_{p^{*}}(E,X)=\mathcal{U}_{*}^{p}(E,X)$
for all Banach lattices $X$.
###### Proof.
By Lemma 2.15, we get $\Upsilon_{p^{*}}(E,X)\subseteq\mathcal{U}_{*}^{p}(E,X)$
and $\zeta^{p}\leq\|\cdot\|_{\Upsilon_{p^{*}}}$.
Conversely, for $S\in\mathcal{U}_{*}^{p}(E,X)$, we define
$\varphi\in(\widetilde{\mathcal{N}}^{p}(X,E))^{*}$ by
$\langle\varphi,T\rangle=\textrm{trace}(TS),\quad
T\in\widetilde{\mathcal{N}}^{p}(X,E).$
Clearly, $\|\varphi\|\leq\zeta^{p}(S)$. It follows from Theorem 2.12 that
there exists a unique operator $\widetilde{S}\in\Upsilon_{p^{*}}(E,X^{**})$
such that $\|\widetilde{S}\|_{\Upsilon_{p^{*}}}=\|\varphi\|$ and
$\textrm{trace}(\widetilde{S}T)=\langle\varphi,T\rangle$ for all
$T\in\widetilde{\mathcal{N}}^{p}(X,E)$. The uniqueness of $\widetilde{S}$
implies that $J_{X}S=\widetilde{S}.$ Hence $S$ is positive $p^{*}$-majorizing
and
$\|S\|_{\Upsilon_{p^{*}}}=\|J_{X}S\|_{\Upsilon_{p^{*}}}=\|\varphi\|\leq\zeta^{p}(S).$
The conclusion follows.
∎
## 3\. Positively $p$-integral operators
Let us begin this section by recalling the definition of maximal Banach
operator ideals.
###### Definition 3.1.
[9] Let $[\mathfrak{A},\mathbf{A}]$ be a Banach operator ideal.
For $T\in\mathcal{L}(E,F)$ define
$\mathbf{A}^{\max}(T):=\sup\{\mathbf{A}(Q_{L}Ti_{M}):M\in FIN(E),L\in COFIN(F)\}$
$\mathfrak{A}^{\max}(E,F):=\{T\in\mathcal{L}(E,F):\mathbf{A}^{\max}(T)<\infty\}$
and call
$[\mathfrak{A},\mathbf{A}]^{\max}:=[\mathfrak{A}^{\max},\mathbf{A}^{\max}]$
the maximal hull of $[\mathfrak{A},\mathbf{A}]$.
$[\mathfrak{A},\mathbf{A}]$ is called maximal if
$[\mathfrak{A},\mathbf{A}]=[\mathfrak{A}^{\max},\mathbf{A}^{\max}]$.
There is another criterion for the maximal hull $[\mathfrak{A},\mathbf{A}]^{\max}$.
###### Theorem 3.2.
[28] Let $[\mathfrak{A},\mathbf{A}]$ be a Banach operator ideal. An operator
$T\in\mathcal{L}(E,F)$ belongs to $\mathfrak{A}^{\max}(E,F)$ if and only if
there exists a constant $C>0$ such that
$\mathbf{A}(RTS)\leq C\|R\|\|S\|$ for all $S\in\mathcal{F}(G,E)$ and
$R\in\mathcal{F}(F,H),$
where $G,H$ are arbitrary Banach spaces.
In this case,
$\mathbf{A}^{\max}(T)=\inf C.$
Recall [28] that an operator $S:E\rightarrow F$ is called $p$-integral if it
belongs to $[\mathcal{N}_{p},\nu_{p}]^{\max}.$ The $p$-integral norm of $S$ is
defined by $i_{p}(S):=\nu_{p}^{\max}(S).$ It follows from Theorem 3.2 that an operator $S:E\rightarrow F$ is $p$-integral if and only if there exists a constant $C>0$ such that $\nu_{p}(RSA)\leq C\|R\|\|A\|$ for all $A\in\mathcal{F}(G,E)$ and $R\in\mathcal{F}(F,H),$ where $G,H$ are arbitrary Banach spaces. Moreover, $i_{p}(S)=\inf C.$
In an analogous way, O. I. Zhukova [35] introduced the notion of latticially
$p$-integral operators by use of latticially $p$-nuclear operators.
###### Definition 3.3.
[35] An operator $S:E\rightarrow X$ is called latticially $p$-integral if
there is a number $C$ such that the inequality $\widetilde{\nu}_{p}(BSA)\leq
C\|A\|\|B\|$ is valid for arbitrary $F$ and $Y$ and arbitrary operators
$A\in\mathcal{F}(F,E),B\in\mathcal{F}_{+}(X,Y)$.
One sets
$\widetilde{i}_{p}(S)=\inf C.$
The class of all latticially $p$-integral operators is denoted by
$\widetilde{\mathcal{I}}_{p}$. It easily follows from the left positive ideal
property of latticially $p$-nuclear operators that latticially $p$-integral
operators also have the left positive ideal property.
Naturally, we introduce the notion of positively $p$-integral operators by
means of positively $p$-nuclear operators.
###### Definition 3.4.
We say that an operator $T:X\rightarrow E$ is positively $p$-integral if there
exists a constant $C>0$ such that
$\widetilde{\nu}^{p}(RTS)\leq C\|R\|\|S\|$ for all
$S\in\mathcal{F}_{+}(Y,X),R\in\mathcal{F}(E,F)$,
where $Y$ is an arbitrary Banach lattice and $F$ is an arbitrary Banach space.
We put
$\widetilde{i}^{p}(T):=\inf C.$
The class of all positively $p$-integral operators is denoted by
$\widetilde{\mathcal{I}}^{p}$. It follows from Proposition 2.3 that positively
$p$-integral operators have the right positive ideal property. Clearly, every
positively $p$-nuclear operator is positively $p$-integral with
$\widetilde{i}^{p}\leq\widetilde{\nu}^{p}$.
The definitions of latticially $p$-integral operators and positively
$p$-integral operators both stem from another characterization of the maximal
hull of Banach operator ideals (Theorem 3.2), not from the original definition
of the maximal hull (Definition 3.1). But the following result shows that the
class of positively $p$-integral operators coincides with the right positive
maximal hull of positively $p$-nuclear operators under the hypothesis of order
completeness.
###### Theorem 3.5.
Let $X$ be an order complete Banach lattice and $E$ be a Banach space. Let
$C>0$ and $T\in\mathcal{L}(X,E)$. The following statements are equivalent:
(a) $T$ is positively $p$-integral with $\widetilde{i}^{p}(T)\leq C$;
(b) $\widetilde{\nu}^{p}(Q_{L}Ti_{G})\leq C$ for all $G\in LDim(X),L\in COFIN(E)$.
###### Proof.
The implication $(a)\Rightarrow(b)$ is trivial.
$(b)\Rightarrow(a)$. Let a finite-rank operator $S:E\rightarrow F$ and a positive finite-rank operator $R:Y\rightarrow X$ be given. Let $M=RY$ and let
$\epsilon>0$. It follows from Lemma 2.10 that there exist a sublattice $Z$ of
$X$ containing $M$, a finite-dimensional sublattice $G$ of $Z$ and a positive
projection $P$ from $Z$ onto $G$ such that $\|Px-x\|\leq\epsilon\|x\|$ for all
$x\in M$. We define an operator $\widehat{S}:E/\textrm{Ker}(S)\rightarrow F$
by $u+\textrm{Ker}(S)\mapsto Su$. Clearly, the operator $\widehat{S}$ is one-
to-one, has the same range as $S$ and $\|\widehat{S}\|=\|S\|$. Then
$L:=\textrm{Ker}(S)$ is finite co-dimensional and $S=\widehat{S}Q_{L}$. By
(b), we get
$\displaystyle\widetilde{\nu}^{p}(STPR)$
$\displaystyle=\widetilde{\nu}^{p}(\widehat{S}Q_{L}Ti_{G}PR)$
$\displaystyle\leq C\|\widehat{S}\|\|PR\|$
$\displaystyle\leq(1+\epsilon)C\|S\|\|R\|.$
By Proposition 2.3, we have
$\displaystyle\widetilde{\nu}^{p}(STR)$
$\displaystyle\leq\widetilde{\nu}^{p}(STR-STPR)+\widetilde{\nu}^{p}(STPR)$
$\displaystyle\leq\widetilde{\nu}^{p}(STR-STPR)+(1+\epsilon)C\|S\|\|R\|$
$\displaystyle\leq\widetilde{\nu}^{1}(STR-STPR)+(1+\epsilon)C\|S\|\|R\|$
$\displaystyle\leq 2|\widetilde{\nu}^{1}|(STR-STPR)+(1+\epsilon)C\|S\|\|R\|$
$\displaystyle=2\nu(STR-STPR)+(1+\epsilon)C\|S\|\|R\|$
$\displaystyle\leq 2\nu(ST)\|R-PR\|+(1+\epsilon)C\|S\|\|R\|$
$\displaystyle\leq 2\epsilon\nu(ST)\|R\|+(1+\epsilon)C\|S\|\|R\|.$
Letting $\epsilon\rightarrow 0$, we get
$\widetilde{\nu}^{p}(STR)\leq C\|S\|\|R\|.$
This completes the proof.
∎
###### Theorem 3.6.
Suppose that $X^{*}$ has the $PMAP$. Let $T\in\mathcal{L}(X,E)$ and let $C>0$.
The following statements are equivalent:
(i) $T$ is positively $p$-integral with $\widetilde{i}^{p}(T)\leq C$.
(ii) $\sup\{\widetilde{\nu}^{p}(Q_{L}T):L\in COFIN(E)\}\leq C$.
###### Proof.
$(ii)\Rightarrow(i)$ is obvious.
$(i)\Rightarrow(ii)$. Let $L\in COFIN(E)$ and let $\epsilon>0$. We write
$Q_{L}T=\sum\limits_{i=1}^{n}x^{*}_{i}\otimes\phi_{i}$ with $x^{*}_{i}\in X^{*},\phi_{i}\in E/L$ $(i=1,2,\cdots,n)$. Choose $\delta>0$ such that
$\delta\sum\limits_{i=1}^{n}\|\phi_{i}\|<\epsilon.$ Since $X^{*}$ has the
$PMAP$, it follows from Lemma 2.8 that there exists an operator
$A\in\mathcal{F}_{+}(X)$ with $\|A\|\leq 1$ such that
$\|A^{*}x^{*}_{i}-x^{*}_{i}\|<\delta$ for all $i=1,2,\cdots,n$. By $(i)$, we
get $\widetilde{\nu}^{p}(Q_{L}TA)\leq C.$
By Proposition 2.3, we get
$\displaystyle\widetilde{\nu}^{p}(Q_{L}T)$
$\displaystyle\leq\widetilde{\nu}^{p}(Q_{L}T-Q_{L}TA)+\widetilde{\nu}^{p}(Q_{L}TA)$
$\displaystyle\leq\widetilde{\nu}^{1}(Q_{L}T-Q_{L}TA)+C$ $\displaystyle\leq
2|\widetilde{\nu}^{1}|(Q_{L}T-Q_{L}TA)+C$
$\displaystyle=2\nu(Q_{L}T-Q_{L}TA)+C$ $\displaystyle\leq
2\sum_{i=1}^{n}\|A^{*}x^{*}_{i}-x^{*}_{i}\|\|\phi_{i}\|+C$ $\displaystyle\leq
2\epsilon+C.$
Letting $\epsilon\rightarrow 0$, we get $\widetilde{\nu}^{p}(Q_{L}T)\leq C$.
∎
An analogous argument shows the following theorem.
###### Theorem 3.7.
Suppose that $X$ has the $PMAP$. Let $S\in\mathcal{L}(E,X)$ and let $C>0$. The
following statements are equivalent:
(i) $S$ is latticially $p$-integral with $\widetilde{i}_{p}(S)\leq C$.
(ii) $\sup\{\widetilde{\nu}_{p}(Si_{M}):M\in FIN(E)\}\leq C$.
###### Corollary 3.8.
(a) If $S:E\rightarrow X$ is latticially $p$-integral, then $S^{*}$ is positively $p$-integral. In this case, $\widetilde{i}^{p}(S^{*})\leq\widetilde{i}_{p}(S)$.
(b) If $T:X\rightarrow E$ is positively $p$-integral and $X^{*}$ has the $PMAP$, then $T^{*}$ is latticially $p$-integral. In this case, $\widetilde{i}_{p}(T^{*})\leq\widetilde{i}^{p}(T).$
###### Proof.
(a). Let $A\in\mathcal{F}_{+}(Y,X^{*})$ and $B\in\mathcal{F}(E^{*},F)$ be given. We may assume that $F$ is finite-dimensional. Let $\epsilon>0$. By [17, Lemma 3.1], there exists a weak$^{*}$-continuous operator $C:E^{*}\rightarrow F$ such that
$\|C\|\leq(1+\epsilon)\|B\|$ and $C|_{S^{*}AY}=B|_{S^{*}AY}$. Let
$D:F^{*}\rightarrow E$ be an operator such that $D^{*}=C$. Since $S$ is
latticially $p$-integral, we get
$\widetilde{\nu}_{p}(A^{*}J_{X}SD)\leq\|A^{*}J_{X}\|\widetilde{i}_{p}(S)\|D\|\leq(1+\epsilon)\|A\|\widetilde{i}_{p}(S)\|B\|.$
Clearly, $BS^{*}A=(A^{*}J_{X}SD)^{*}J_{Y}$. By Proposition 2.3 (e), we get
$\displaystyle\widetilde{\nu}^{p}(BS^{*}A)$
$\displaystyle\leq\widetilde{\nu}^{p}((A^{*}J_{X}SD)^{*})$
$\displaystyle\leq\widetilde{\nu}_{p}(A^{*}J_{X}SD)$
$\displaystyle\leq(1+\epsilon)\|A\|\widetilde{i}_{p}(S)\|B\|.$
Letting $\epsilon\rightarrow 0$, we get
$\widetilde{\nu}^{p}(BS^{*}A)\leq\|A\|\widetilde{i}_{p}(S)\|B\|.$
Hence, $S^{*}$ is positively $p$-integral and
$\widetilde{i}^{p}(S^{*})\leq\widetilde{i}_{p}(S)$.
(b). Let $M\in FIN(E^{*})$ be given. We let $L:={}^{\perp}\!M=\{u\in E:\langle u^{*},u\rangle=0$ for all $u^{*}\in M\}$. Then $L\in COFIN(E)$. Note that $Q_{L}^{*}:(E/L)^{*}\rightarrow E^{*}$
is an isometric embedding and the range of $Q_{L}^{*}$ is $L^{\perp}=M$. Let
us define an operator $A:M\rightarrow(E/L)^{*}$ by
$Au^{*}=(Q_{L}^{*})^{-1}(u^{*})(u^{*}\in M)$. Clearly, $\|A\|=1$ and
$Q^{*}_{L}A=i_{M}$. By Proposition 2.3 $(f)$ and Theorem 3.6, we get
$\displaystyle\widetilde{\nu}_{p}(T^{*}i_{M})$
$\displaystyle=\widetilde{\nu}_{p}(T^{*}Q^{*}_{L}A)$
$\displaystyle\leq\widetilde{\nu}_{p}(T^{*}Q^{*}_{L})$
$\displaystyle\leq\widetilde{\nu}^{p}(Q_{L}T)$
$\displaystyle\leq\widetilde{i}^{p}(T).$
By Theorem 3.7, $T^{*}$ is latticially $p$-integral and
$\widetilde{i}_{p}(T^{*})\leq\widetilde{i}^{p}(T).$
∎
The following result is immediate from Definition 3.3 and Definition 3.4.
###### Lemma 3.9.
(a) If $S^{**}:E^{**}\rightarrow X^{**}$ is latticially $p$-integral, then so is
$S$. In this case, $\widetilde{i}_{p}(S)\leq\widetilde{i}_{p}(S^{**}).$
(b) If $T^{**}:X^{**}\rightarrow E^{**}$ is positively $p$-integral, then so is
$T$. In this case, $\widetilde{i}^{p}(T)\leq\widetilde{i}^{p}(T^{**}).$
Combining Corollary 3.8 and Lemma 3.9, we obtain the following two
corollaries.
###### Corollary 3.10.
Suppose that $X^{**}$ has the $PMAP$. The following are equivalent for an
operator $S:E\rightarrow X$:
(i) $S$ is latticially $p$-integral.
(ii) $S^{*}$ is positively $p$-integral.
(iii) $S^{**}$ is latticially $p$-integral.
In this case,
$\widetilde{i}_{p}(S)=\widetilde{i}^{p}(S^{*})=\widetilde{i}_{p}(S^{**}).$
###### Corollary 3.11.
Suppose that $X^{*}$ has the $PMAP$. The following are equivalent for an
operator $T:X\rightarrow E$:
(i) $T$ is positively $p$-integral.
(ii) $T^{*}$ is latticially $p$-integral.
(iii) $T^{**}$ is positively $p$-integral.
In this case,
$\widetilde{i}^{p}(T)=\widetilde{i}_{p}(T^{*})=\widetilde{i}^{p}(T^{**}).$
Next we present an important example of positively $p$-integral operators.
###### Theorem 3.12.
Let $(\Omega,\Sigma,\mu)$ be a probability measure space and $1\leq p<\infty$.
Then the inclusion map $i_{p}:L_{p^{*}}(\mu)\rightarrow L_{1}(\mu)$ is
positively $p$-integral with $\widetilde{i}^{p}(i_{p})\leq 1$.
To prove Theorem 3.12, we need the following three elementary lemmas.
Let $\tau=(A_{i})_{i=1}^{n}$ be a partition of a probability measure space
$(\Omega,\Sigma,\mu)$. We define an operator
$Q_{\tau}:L_{p^{*}}(\mu)\rightarrow L_{1}(\mu),\quad
g\mapsto\sum_{i=1}^{n}\frac{\int_{A_{i}}gd\mu}{\mu(A_{i})}\chi_{A_{i}},$
where $\frac{\int_{A_{i}}gd\mu}{\mu(A_{i})}=0$ if $\mu(A_{i})=0$. It is easy
to see that $\|Q_{\tau}\|=1$.
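To illustrate the definition, note that $Q_{\tau}$ is precisely the conditional expectation operator associated with the finite $\sigma$-algebra generated by $\tau$. For instance, take $\Omega=[0,1]$ with Lebesgue measure, $\tau=([0,\frac{1}{2}],(\frac{1}{2},1])$ and $g(t)=t$. Then
$Q_{\tau}g=\frac{1}{4}\chi_{[0,\frac{1}{2}]}+\frac{3}{4}\chi_{(\frac{1}{2},1]},$
the piecewise average of $g$ over the cells of the partition.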
###### Lemma 3.13.
$\widetilde{\nu}^{p}(Q_{\tau})=1.$
###### Proof.
Let $f_{i}=\frac{\chi_{A_{i}}}{\mu(A_{i})^{\frac{1}{p^{*}}}}(i=1,2,\cdots,n)$.
Then $\|(f_{i})_{i=1}^{n}\|_{p}=1$. For each $i$, we define
$\varphi_{i}\in(L_{p^{*}}(\mu))^{*}$ by
$\langle\varphi_{i},g\rangle=\frac{\int_{A_{i}}gd\mu}{\mu(A_{i})^{\frac{1}{p}}}(g\in
L_{p^{*}}(\mu)).$ Then $Q_{\tau}=\sum\limits_{i=1}^{n}\varphi_{i}\otimes
f_{i}$. Let us define an operator $T:L_{p^{*}}(\mu)\rightarrow l_{p^{*}}$ by
$Tg=(\langle\varphi_{i},g\rangle)_{i=1}^{n}$ for $g\in L_{p^{*}}(\mu)$. Note
that
$|\langle\varphi_{i},g\rangle|\leq\frac{\int_{A_{i}}|g|d\mu}{\mu(A_{i})^{\frac{1}{p}}}\leq\frac{(\int_{\Omega}|g\chi_{A_{i}}|^{p^{*}}d\mu)^{\frac{1}{p^{*}}}(\int_{\Omega}\chi_{A_{i}}d\mu)^{\frac{1}{p}}}{\mu(A_{i})^{\frac{1}{p}}}=(\int_{A_{i}}|g|^{p^{*}}d\mu)^{\frac{1}{p^{*}}}.$
Hence
$\sum_{i=1}^{n}|\langle\varphi_{i},g\rangle|^{p^{*}}\leq\sum_{i=1}^{n}\int_{A_{i}}|g|^{p^{*}}d\mu=\int_{\Omega}|g|^{p^{*}}d\mu.$
This implies
$\|(\varphi_{i})_{i=1}^{n}\|^{w}_{p^{*}}=\|T\|\leq 1.$
Consequently
$\widetilde{\nu}^{p}(Q_{\tau})\leq\|(\varphi_{i})_{i=1}^{n}\|^{w}_{p^{*}}\cdot\|(f_{i})_{i=1}^{n}\|_{p}\leq
1.$
Since $\|Q_{\tau}\|=1$, we get $\widetilde{\nu}^{p}(Q_{\tau})=1.$ ∎
The following lemma may be known. For the sake of completeness, we include the
proof here.
###### Lemma 3.14.
Let $f_{1},f_{2},\cdots,f_{n}\in L_{\infty}(\mu)$. Then, for every
$\epsilon>0$, there exists a partition $\tau=(A_{i})_{i=1}^{m}$ of $\Omega$
such that
$\|f_{j}-\sum_{i=1}^{m}\frac{\int_{A_{i}}f_{j}d\mu}{\mu(A_{i})}\chi_{A_{i}}\|_{p}<\epsilon,\quad
j=1,2,\cdots,n.$
###### Proof.
We only prove the conclusion for $n=2$. Other cases are analogous.
We may assume that $f_{1},f_{2}$ are bounded. We set
$\alpha=\min(\min_{t\in\Omega}f_{1}(t),\min_{t\in\Omega}f_{2}(t))$
and
$\beta=\max(\max_{t\in\Omega}f_{1}(t),\max_{t\in\Omega}f_{2}(t)).$
We choose $a_{0}<a_{1}<\cdots<a_{m}$ such that
$[\alpha,\beta]\subseteq\bigcup\limits_{i=1}^{m}(a_{i-1},a_{i}],\quad
a_{i}-a_{i-1}<\epsilon,i=1,2,\cdots,m.$
Let $A_{ij}=f_{1}^{-1}((a_{i-1},a_{i}])\cap
f_{2}^{-1}((a_{j-1},a_{j}])(i,j=1,2,\cdots,m)$. Then $(A_{ij})_{i,j=1}^{m}$ is
a partition of $\Omega$.
Note that
$a_{i-1}\mu(A_{ij})\leq\int_{A_{ij}}f_{1}d\mu\leq a_{i}\mu(A_{ij}),\quad
i,j=1,2,\cdots,m$
and hence
$|f_{1}(t)-\frac{\int_{A_{ij}}f_{1}d\mu}{\mu(A_{ij})}|\leq\epsilon,\quad t\in
A_{ij},i,j=1,2,\cdots,m.$
This means
$\int_{\Omega}|f_{1}-\sum_{i,j=1}^{m}\frac{\int_{A_{ij}}f_{1}d\mu}{\mu(A_{ij})}\chi_{A_{ij}}|^{p}d\mu=\sum_{i,j=1}^{m}\int_{A_{ij}}|f_{1}-\frac{\int_{A_{ij}}f_{1}d\mu}{\mu(A_{ij})}|^{p}d\mu\leq\epsilon^{p}.$
That is
$\|f_{1}-\sum_{i,j=1}^{m}\frac{\int_{A_{ij}}f_{1}d\mu}{\mu(A_{ij})}\chi_{A_{ij}}\|_{p}\leq\epsilon.$
Similarly
$\|f_{2}-\sum_{i,j=1}^{m}\frac{\int_{A_{ij}}f_{2}d\mu}{\mu(A_{ij})}\chi_{A_{ij}}\|_{p}\leq\epsilon.$
∎
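For a one-dimensional illustration of this construction, take $\Omega=[0,1]$ with Lebesgue measure, $f_{1}(t)=t$ and $a_{i}=\frac{i}{m}$ with $\frac{1}{m}<\epsilon$. Then $A_{i}=(\frac{i-1}{m},\frac{i}{m}]$, each cell average equals $\frac{2i-1}{2m}$, and
$|f_{1}(t)-\sum_{i=1}^{m}\frac{\int_{A_{i}}f_{1}d\mu}{\mu(A_{i})}\chi_{A_{i}}(t)|\leq\frac{1}{2m}<\epsilon$
for almost every $t\in[0,1]$, so the $L_{p}$-error is also smaller than $\epsilon$.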
###### Lemma 3.15.
Let $E$ be a Banach space and let $T\in\mathcal{F}(L_{1}(\mu),E)$. Then, for
every $\epsilon>0$, there exists a partition $\tau=(A_{i})_{i=1}^{m}$ of
$\Omega$ such that $\nu(Ti_{p}-TQ_{\tau})<\epsilon.$
###### Proof.
We write $T=\sum\limits_{j=1}^{n}f_{j}\otimes u_{j},f_{j}\in
L_{\infty}(\mu),u_{j}\in E,j=1,2,\cdots,n$. It follows from Lemma 3.14 that
there exists a partition $\tau=(A_{i})_{i=1}^{m}$ of $\Omega$ such that
$\|f_{j}-\sum_{i=1}^{m}\frac{\int_{A_{i}}f_{j}d\mu}{\mu(A_{i})}\chi_{A_{i}}\|_{p}<\frac{\epsilon}{\sum\limits_{k=1}^{n}\|u_{k}\|},\quad
j=1,2,\cdots,n.$
Hence
$\displaystyle\nu(Ti_{p}-TQ_{\tau})$
$\displaystyle\leq\sum\limits_{j=1}^{n}\|f_{j}-Q^{*}_{\tau}f_{j}\|\|u_{j}\|$
$\displaystyle=\sum\limits_{j=1}^{n}\|f_{j}-\sum_{i=1}^{m}\frac{\int_{A_{i}}f_{j}d\mu}{\mu(A_{i})}\chi_{A_{i}}\|_{p}\|u_{j}\|$
$\displaystyle<\epsilon.$
∎
###### Proof of Theorem 3.12.
Given any finite-rank operator $R:L_{1}(\mu)\rightarrow E$ and positive
finite-rank operator $S:X\rightarrow L_{p^{*}}(\mu)$. Let $\epsilon>0$.
According to Lemma 3.15, there exists a partition $\tau=(A_{i})_{i=1}^{m}$ of
$\Omega$ such that $\nu(Ri_{p}-RQ_{\tau})<\epsilon.$ By Proposition 2.3 and
Lemma 3.13, we get
$\displaystyle\widetilde{\nu}^{p}(Ri_{p}S)$
$\displaystyle\leq\widetilde{\nu}^{p}(Ri_{p}S-RQ_{\tau}S)+\widetilde{\nu}^{p}(RQ_{\tau}S)$
$\displaystyle\leq\widetilde{\nu}^{p}(Ri_{p}-RQ_{\tau})\|S\|+\|R\|\widetilde{\nu}^{p}(Q_{\tau})\|S\|$
$\displaystyle\leq\widetilde{\nu}^{1}(Ri_{p}-RQ_{\tau})\|S\|+\|R\|\|S\|$
$\displaystyle\leq 2|\widetilde{\nu}^{1}|(Ri_{p}-RQ_{\tau})\|S\|+\|R\|\|S\|$
$\displaystyle=2\nu(Ri_{p}-RQ_{\tau})\|S\|+\|R\|\|S\|$ $\displaystyle\leq
2\epsilon\|S\|+\|R\|\|S\|.$
Letting $\epsilon\rightarrow 0$, we get
$\widetilde{\nu}^{p}(Ri_{p}S)\leq\|R\|\|S\|.$
This completes the proof. $\Box$
###### Remark 3.16.
It was known (see [10, Example 2.9 (b), Corollary 2.8] for instance) that the
canonical map $j_{p}$ from $C(K)$ to $L_{p}(\mu)$ ($\mu$ a regular Borel
measure on a compact Hausdorff space $K$) is $p$-integral. O. I. Zhukova [35]
strengthened this result and proved that $j_{p}$ is latticially $p$-integral.
Although the adjoint of $i_{p}$ is the inclusion map of $L_{\infty}(\mu)$ into
$L_{p}(\mu)$, it seems that there is no implication between O. I. Zhukova’s
result and Theorem 3.12, even by Corollaries 3.10 and 3.11.
We now reveal a close relationship between positively $p$-nuclear operators and
positively $p$-integral operators. We need two lemmas.
###### Lemma 3.17.
Suppose that $X^{*}$ has the $PMAP$ and $E$ is a Banach space. Let
$T\in\widetilde{\mathcal{N}}^{p}(X,E).$ Then, for every $\epsilon>0$, there
exists an operator $R\in\mathcal{F}_{+}(X)$ with $\|R\|\leq 1$ such that
$\widetilde{\nu}^{p}(T-TR)<\epsilon$.
###### Proof.
Let $\epsilon>0$. We choose $\delta>0$ with
$2\delta+\delta(1+\delta)^{2}\widetilde{\nu}^{p}(T)<\epsilon.$ We choose a
positively $p$-nuclear representation
$T=\sum\limits_{j=1}^{\infty}x^{*}_{j}\otimes u_{j}$ such that
$\|(x^{*}_{j})_{j=1}^{\infty}\|_{p^{*}}^{w}\|(u_{j})_{j=1}^{\infty}\|_{p}\leq(1+\delta)\widetilde{\nu}^{p}(T).$
We may assume that $\|(x^{*}_{j})_{j}\|_{p^{*}}^{w}=1$. We choose
$1\leq\xi_{j}\rightarrow\infty$ such that
$\|(\xi_{j}u_{j})_{j=1}^{\infty}\|_{p}\leq(1+\delta)\|(u_{j})_{j=1}^{\infty}\|_{p}.$
Choose a positive integer $N$ with
$(\sum\limits_{j=N+1}^{\infty}\|u_{j}\|^{p})^{\frac{1}{p}}<\delta$ and also
choose a positive real $\eta>0$ with $\eta N^{\frac{1}{p^{*}}}<\delta$. Since
$X^{*}$ has the $PMAP$, it follows from Lemma 2.8 that there exists an
operator $R\in\mathcal{F}_{+}(X)$ with $\|R\|\leq 1$ such that
$\|R^{*}(\frac{x^{*}_{j}}{\xi_{j}})-\frac{x^{*}_{j}}{\xi_{j}}\|<\eta$ for all
$j$. Note that
$T-TR=\sum_{j=1}^{N}(x^{*}_{j}-R^{*}x^{*}_{j})\otimes
u_{j}+\sum_{j=N+1}^{\infty}(x^{*}_{j}-R^{*}x^{*}_{j})\otimes u_{j}.$
Hence
$\displaystyle\widetilde{\nu}^{p}(T-TR)$
$\displaystyle\leq\widetilde{\nu}^{p}(\sum_{j=1}^{N}(\frac{x^{*}_{j}}{\xi_{j}}-R^{*}(\frac{x^{*}_{j}}{\xi_{j}}))\otimes\xi_{j}u_{j})+\widetilde{\nu}^{p}(\sum_{j=N+1}^{\infty}(x^{*}_{j}-R^{*}x^{*}_{j})\otimes
u_{j})$
$\displaystyle\leq\|(\frac{x^{*}_{j}}{\xi_{j}}-R^{*}(\frac{x^{*}_{j}}{\xi_{j}}))_{j=1}^{N}\|_{p^{*}}^{w}\|(\xi_{j}u_{j})_{j=1}^{N}\|_{p}+\|(x^{*}_{j}-R^{*}x^{*}_{j})_{j=N+1}^{\infty}\|_{p^{*}}^{w}\|(u_{j})_{j=N+1}^{\infty}\|_{p}$
$\displaystyle\leq(\sum_{j=1}^{N}\|\frac{x^{*}_{j}}{\xi_{j}}-R^{*}(\frac{x^{*}_{j}}{\xi_{j}})\|^{p^{*}})^{\frac{1}{p^{*}}}(1+\delta)\|(u_{j})_{j=1}^{\infty}\|_{p}+2\|(x^{*}_{j})_{j=1}^{\infty}\|_{p^{*}}^{w}\|(u_{j})_{j=N+1}^{\infty}\|_{p}$
$\displaystyle\leq\eta
N^{\frac{1}{p^{*}}}(1+\delta)^{2}\widetilde{\nu}^{p}(T)+2\delta$
$\displaystyle\leq\delta(1+\delta)^{2}\widetilde{\nu}^{p}(T)+2\delta$
$\displaystyle<\epsilon,$
which completes the proof.
∎
###### Lemma 3.18.
Suppose that $E$ has the $MAP$ and $X$ is a Banach lattice. Let
$T\in\widetilde{\mathcal{N}}^{p}(X,E).$ Then, for every $\epsilon>0$, there
exists an operator $S\in\mathcal{F}(E)$ with $\|S\|\leq 1$ such that
$\widetilde{\nu}^{p}(T-ST)<\epsilon$.
###### Proof.
Let $\epsilon>0$. Let $\delta>0$ be such that
$\delta(1+\delta)^{2}\widetilde{\nu}^{p}(T)<\epsilon.$ We choose a positively
$p$-nuclear representation $T=\sum\limits_{j=1}^{\infty}x^{*}_{j}\otimes
u_{j}$ such that
$\|(x^{*}_{j})_{j=1}^{\infty}\|_{p^{*}}^{w}\|(u_{j})_{j=1}^{\infty}\|_{p}\leq(1+\delta)\widetilde{\nu}^{p}(T).$
Choose $1\leq\xi_{j}\rightarrow\infty$ such that
$\|(\xi_{j}u_{j})_{j=1}^{\infty}\|_{p}\leq(1+\delta)\|(u_{j})_{j=1}^{\infty}\|_{p}.$
Since $E$ has the $MAP$, there exists an operator $S\in\mathcal{F}(E)$ with
$\|S\|\leq 1$ such that
$\|S(\frac{u_{j}}{\xi_{j}\|u_{j}\|})-\frac{u_{j}}{\xi_{j}\|u_{j}\|}\|<\delta,\quad
j=1,2,\cdots.$
Hence
$\displaystyle\widetilde{\nu}^{p}(T-ST)$
$\displaystyle\leq\|(x^{*}_{j})_{j=1}^{\infty}\|_{p^{*}}^{w}\|(u_{j}-Su_{j})_{j=1}^{\infty}\|_{p}$
$\displaystyle\leq\|(x^{*}_{j})_{j=1}^{\infty}\|_{p^{*}}^{w}\delta(1+\delta)\|(u_{j})_{j=1}^{\infty}\|_{p}$
$\displaystyle\leq\delta(1+\delta)^{2}\widetilde{\nu}^{p}(T)$
$\displaystyle<\epsilon.$
This finishes the proof.
∎
###### Theorem 3.19.
Suppose that $X^{*}$ has the $PMAP$ and $E$ has the $MAP$. Then
$\widetilde{\nu}^{p}(T)=\widetilde{i}^{p}(T)$ for all
$T\in\widetilde{\mathcal{N}}^{p}(X,E).$
###### Proof.
Let $T\in\widetilde{\mathcal{N}}^{p}(X,E).$ It suffices to show that
$\widetilde{\nu}^{p}(T)\leq\widetilde{i}^{p}(T)$.
Let $\epsilon>0$. By Lemma 3.17, there exists an operator
$R\in\mathcal{F}_{+}(X)$ with $\|R\|\leq 1$ such that
$\widetilde{\nu}^{p}(T-TR)<\epsilon$. Applying Lemma 3.18 to $TR$, there
exists an operator $S\in\mathcal{F}(E)$ with $\|S\|\leq 1$ such that
$\widetilde{\nu}^{p}(TR-STR)<\epsilon$. Thus, we get
$\displaystyle\widetilde{\nu}^{p}(T)$
$\displaystyle\leq\widetilde{\nu}^{p}(T-TR)+\widetilde{\nu}^{p}(TR-
STR)+\widetilde{\nu}^{p}(STR)$ $\displaystyle\leq
2\epsilon+\widetilde{\nu}^{p}(STR)$ $\displaystyle\leq
2\epsilon+\|S\|\widetilde{i}^{p}(T)\|R\|$ $\displaystyle\leq
2\epsilon+\widetilde{i}^{p}(T).$
Letting $\epsilon\rightarrow 0$, we get
$\widetilde{\nu}^{p}(T)\leq\widetilde{i}^{p}(T),$
which completes the proof.
∎
To describe the space of positively $p$-integral operators, we set
$\Upsilon_{p}^{0}(E,X):=\overline{\mathcal{F}(E,X)}^{\|\cdot\|_{\Upsilon_{p}}}.$
###### Lemma 3.20.
Suppose that $E^{**}$ has the $MAP$ and $X^{*}$ has the $PMAP$. Let
$S\in\Upsilon_{p}^{0}(E,X)$ and let
$T\in\widetilde{\mathcal{I}}^{p^{*}}(X,E^{**})$. Then $TS$ is nuclear and
$\nu(TS)\leq\widetilde{i}^{p^{*}}(T)\|S\|_{\Upsilon_{p}}.$
###### Proof.
Case 1. $S$ is finite-rank.
Since $E^{**}$ has the $MAP$, $E^{*}$ also has the $MAP$. By [28, Proposition
10.3.1], we get
$\nu(TS)=\sup\\{|\textrm{trace}(RTS)|:R\in\mathcal{L}(E^{**}),\|R\|\leq 1\\}.$
Since $E^{**}$ has the $MAP$, we get
$\sup\\{|\textrm{trace}(RTS)|:R\in\mathcal{L}(E^{**}),\|R\|\leq
1\\}=\sup\\{|\textrm{trace}(RTS)|:R\in\mathcal{F}(E^{**}),\|R\|\leq 1\\}.$
Theorem 2.15 and Theorem 3.19 yield
$\displaystyle\sup\\{|\textrm{trace}(RTS)|:R\in\mathcal{F}(E^{**}),\|R\|\leq
1\\}$
$\displaystyle\leq\sup\\{\widetilde{\nu}^{p^{*}}(RT)\|S\|_{\Upsilon_{p}}:R\in\mathcal{F}(E^{**}),\|R\|\leq
1\\}$
$\displaystyle=\sup\\{\widetilde{i}^{p^{*}}(RT)\|S\|_{\Upsilon_{p}}:R\in\mathcal{F}(E^{**}),\|R\|\leq
1\\}$
$\displaystyle\leq\sup\\{\|R\|\widetilde{i}^{p^{*}}(T)\|S\|_{\Upsilon_{p}}:R\in\mathcal{F}(E^{**}),\|R\|\leq
1\\}$ $\displaystyle\leq\widetilde{i}^{p^{*}}(T)\|S\|_{\Upsilon_{p}}.$
Hence, we get
$\nu(TS)\leq\widetilde{i}^{p^{*}}(T)\|S\|_{\Upsilon_{p}}.$
Case 2. $S\in\Upsilon_{p}^{0}(E,X)$.
Let $\epsilon>0$. Then there exists a sequence $(S_{n})_{n}$ in
$\mathcal{F}(E,X)$ such that
$\sum\limits_{n}S_{n}=S$ in $\|\cdot\|_{\Upsilon_{p}}$ and
$\sum\limits_{n}\|S_{n}\|_{\Upsilon_{p}}\leq(1+\epsilon)\|S\|_{\Upsilon_{p}}.$
By Case 1,
$\nu(TS_{n})\leq\widetilde{i}^{p^{*}}(T)\|S_{n}\|_{\Upsilon_{p}}$ for all $n$.
This implies
$\sum_{n}\nu(TS_{n})\leq(1+\epsilon)\widetilde{i}^{p^{*}}(T)\|S\|_{\Upsilon_{p}}.$
Hence
$\sum\limits_{n}TS_{n}=U$ in the nuclear norm $\nu$ for some
$U\in\mathcal{N}(E,E^{**})$, and so
$\sum\limits_{n}TS_{n}=U$ in the operator norm $\|\cdot\|$.
Note that
$\sum\limits_{n}S_{n}=S$ in operator norm $\|\cdot\|$.
Therefore, we get $TS=U\in\mathcal{N}(E,E^{**})$. Moreover,
$\nu(TS)=\nu(U)\leq(1+\epsilon)\widetilde{i}^{p^{*}}(T)\|S\|_{\Upsilon_{p}}.$
Letting $\epsilon\rightarrow 0$, we get
$\nu(TS)\leq\widetilde{i}^{p^{*}}(T)\|S\|_{\Upsilon_{p}}.$
∎
###### Lemma 3.21.
Let $T\in\mathcal{F}(X,E)$.
(a) If $E$ has the $MAP$ or $X^{*}$ has the $PMAP$, then
$\widetilde{\nu}^{p}(T)=\sup\\{|\textrm{trace}(RT)|:R\in\mathcal{F}(E,X),\|R\|_{\Upsilon_{p^{*}}}\leq
1\\}.$
(b) If $E=F^{**}$ has the $MAP$, then
$\widetilde{\nu}^{p}(T)=\sup\\{|\textrm{trace}(TS)|:S\in\mathcal{F}(F,X),\|S\|_{\Upsilon_{p^{*}}}\leq
1\\}.$
###### Proof.
(a). By Theorem 2.12, we get
$\widetilde{\nu}^{p}(T)=\sup\\{|\textrm{trace}(ST)|:S\in\Upsilon_{p^{*}}(E,X^{**}),\|S\|_{\Upsilon_{p^{*}}}\leq
1\\}.$
We set
$c_{T}:=\sup\\{|\textrm{trace}(RT)|:R\in\mathcal{F}(E,X),\|R\|_{\Upsilon_{p^{*}}}\leq
1\\}.$
Clearly, $c_{T}\leq\widetilde{\nu}^{p}(T).$ It remains to prove the reverse.
Let $S\in\Upsilon_{p^{*}}(E,X^{**}),\|S\|_{\Upsilon_{p^{*}}}\leq 1$.
Case 1. $E$ has the $MAP$.
Let $\epsilon>0$. Then there exists an operator $A\in\mathcal{F}(E),\|A\|\leq
1+\epsilon$ such that $AT=T$. We let $R=SA$ and write
$R=\sum_{i=1}^{n}u^{*}_{i}\otimes x^{**}_{i},\quad u^{*}_{i}\in
E^{*},x^{**}_{i}\in X^{**},i=1,2,\cdots,n.$
and
$T=\sum_{j=1}^{m}x^{*}_{j}\otimes u_{j},\quad x^{*}_{j}\in X^{*},u_{j}\in
E,j=1,2,\cdots,m.$
We choose $\delta>0$ such that
$\delta(2+\delta)\sum_{j=1}^{m}\sum_{i=1}^{n}\|u^{*}_{i}\|\|u_{j}\|\|x^{**}_{i}\|\|x^{*}_{j}\|<\epsilon$
and $(1+\delta)^{2}\leq 1+\epsilon.$
We set $M=\textrm{span}\\{x^{**}_{i}:1\leq i\leq n\\}$ and
$L=\textrm{span}\\{x^{*}_{j}:1\leq j\leq m\\}.$ It follows from Lemma 2.10
that there exist a sublattice $Z$ of $X^{**}$ containing $M$, a finite-
dimensional sublattice $G$ of $Z$ and a positive projection $P$ from $Z$ onto
$G$ such that $\|Px^{**}-x^{**}\|\leq\delta\|x^{**}\|$ for all $x^{**}\in M$.
By Theorem 2.11, there exists a lattice isomorphism $B$ from $G$ into $X$ such
that $\|B\|,\|B^{-1}\|\leq 1+\delta$ and
$|\langle x^{**},x^{*}\rangle-\langle
x^{*},Bx^{**}\rangle|\leq\delta\|x^{**}\|\|x^{*}\|,\quad x^{**}\in G,x^{*}\in
L.$
Let $\widetilde{R}=BPR\in\mathcal{F}(E,X)$. Then
$\displaystyle\|\widetilde{R}\|_{\Upsilon_{p^{*}}}$
$\displaystyle=\|BPSA\|_{\Upsilon_{p^{*}}}$
$\displaystyle\leq\|BP\|\|S\|_{\Upsilon_{p^{*}}}\|A\|$
$\displaystyle\leq\|B\|\|P\|\|A\|$ $\displaystyle\leq(1+\epsilon)^{2}.$
Note that for all $i,j$, we have
$\displaystyle|\langle x^{**}_{i},x^{*}_{j}\rangle-\langle
x^{*}_{j},BPx^{**}_{i}\rangle|$ $\displaystyle\leq|\langle
x^{**}_{i},x^{*}_{j}\rangle-\langle Px^{**}_{i},x^{*}_{j}\rangle|+|\langle
Px^{**}_{i},x^{*}_{j}\rangle-\langle x^{*}_{j},BPx^{**}_{i}\rangle|$
$\displaystyle\leq\delta\|x^{**}_{i}\|\|x^{*}_{j}\|+\delta\|Px^{**}_{i}\|\|x^{*}_{j}\|$
$\displaystyle\leq\delta\|x^{**}_{i}\|\|x^{*}_{j}\|+\delta(1+\delta)\|x^{**}_{i}\|\|x^{*}_{j}\|$
$\displaystyle=\delta(2+\delta)\|x^{**}_{i}\|\|x^{*}_{j}\|.$
This implies
$\displaystyle|\textrm{trace}(ST)-\textrm{trace}(\widetilde{R}T)|$
$\displaystyle=|\textrm{trace}(RT)-\textrm{trace}(\widetilde{R}T)|$
$\displaystyle=|\sum_{j=1}^{m}\sum_{i=1}^{n}\langle
u^{*}_{i},u_{j}\rangle(\langle x^{**}_{i},x^{*}_{j}\rangle-\langle
x^{*}_{j},BPx^{**}_{i}\rangle)|$
$\displaystyle\leq\delta(2+\delta)\sum_{j=1}^{m}\sum_{i=1}^{n}\|u^{*}_{i}\|\|u_{j}\|\|x^{**}_{i}\|\|x^{*}_{j}\|$
$\displaystyle<\epsilon.$
This yields
$|\textrm{trace}(ST)|\leq\epsilon+|\textrm{trace}(\widetilde{R}T)|\leq\epsilon+(1+\epsilon)^{2}c_{T}.$
Hence
$\widetilde{\nu}^{p}(T)\leq\epsilon+(1+\epsilon)^{2}c_{T}.$
Letting $\epsilon\rightarrow 0$, we get $\widetilde{\nu}^{p}(T)\leq c_{T}.$
Case 2. $X^{*}$ has the $PMAP$.
Let $\epsilon>0$. We write $T=\sum\limits_{i=1}^{n}x^{*}_{i}\otimes
u_{i}(x^{*}_{i}\in X^{*},u_{i}\in E,i=1,2,\cdots,n).$ Choose $\delta>0$ with
$\delta\sum\limits_{i=1}^{n}\|Su_{i}\|<\epsilon.$ Since $X^{*}$ has the
$PMAP$, it follows from Lemma 2.8 that there exists an operator
$B\in\mathcal{F}_{+}(X),\|B\|\leq 1$ such that
$\|B^{*}x^{*}_{i}-x^{*}_{i}\|<\delta$ for all $i=1,2,\cdots,n$. We set
$R=B^{**}S\in\mathcal{F}(E,X)$. Then $\|R\|_{\Upsilon_{p^{*}}}\leq 1$ and
$\displaystyle|\textrm{trace}(ST)-\textrm{trace}(RT)|$
$\displaystyle=|\sum_{i=1}^{n}\langle
Su_{i},x^{*}_{i}\rangle-\sum_{i=1}^{n}\langle B^{*}x^{*}_{i},Su_{i}\rangle|$
$\displaystyle\leq\sum_{i=1}^{n}\|Su_{i}\|\|B^{*}x^{*}_{i}-x^{*}_{i}\|$
$\displaystyle\leq\delta\sum\limits_{i=1}^{n}\|Su_{i}\|<\epsilon.$
Hence
$\widetilde{\nu}^{p}(T)\leq\epsilon+c_{T}.$
Letting $\epsilon\rightarrow 0$, we get $\widetilde{\nu}^{p}(T)\leq c_{T}.$
This completes the proof of (a).
(b). By (a), it suffices to prove
$\sup\\{|\textrm{trace}(RT)|:R\in\mathcal{F}(E,X),\|R\|_{\Upsilon_{p^{*}}}\leq
1\\}=\sup\\{|\textrm{trace}(TS)|:S\in\mathcal{F}(F,X),\|S\|_{\Upsilon_{p^{*}}}\leq
1\\}.$
For the sake of convenience, we set
$\alpha:=\sup\\{|\textrm{trace}(RT)|:R\in\mathcal{F}(E,X),\|R\|_{\Upsilon_{p^{*}}}\leq
1\\}$
and
$\beta:=\sup\\{|\textrm{trace}(TS)|:S\in\mathcal{F}(F,X),\|S\|_{\Upsilon_{p^{*}}}\leq
1\\}.$
Let $S\in\mathcal{F}(F,X)$ with $\|S\|_{\Upsilon_{p^{*}}}\leq 1$. By Theorem
2.7, $\|S^{**}\|_{\Upsilon_{p^{*}}}=\|S\|_{\Upsilon_{p^{*}}}\leq 1.$ It is
easy to check that $\textrm{trace}(TS)=\textrm{trace}(S^{**}T)$. Hence, we get
$\beta\leq\alpha.$
Conversely, let $R\in\mathcal{F}(E,X)$ with $\|R\|_{\Upsilon_{p^{*}}}\leq 1$.
Let $\epsilon>0$. We write $T=\sum\limits_{i=1}^{n}x^{*}_{i}\otimes u_{i}$,
where $x^{*}_{i}\in X^{*},u_{i}\in E(i=1,2,\cdots,n)$. Choose $\delta>0$ such
that $\delta\|R\|\sum\limits_{i=1}^{n}\|x^{*}_{i}\|<\epsilon.$ Since $E$ has
the $MAP$, there exists an operator $A\in\mathcal{F}(E)$ with $\|A\|\leq 1$
such that $\|Au_{i}-u_{i}\|<\delta$ for all $i=1,2,\cdots,n.$ Hence we get
$\displaystyle|\textrm{trace}(RT)-\textrm{trace}(RAT)|$
$\displaystyle=|\sum\limits_{i=1}^{n}\langle
x^{*}_{i},Ru_{i}\rangle-\sum\limits_{i=1}^{n}\langle
x^{*}_{i},RAu_{i}\rangle|$
$\displaystyle\leq\sum\limits_{i=1}^{n}\|x^{*}_{i}\|\|R\|\|Au_{i}-u_{i}\|$
$\displaystyle<\epsilon.$ (3.1)
We also write $A=\sum\limits_{j=1}^{m}u^{*}_{j}\otimes w_{j},u^{*}_{j}\in
E^{*},w_{j}\in E(j=1,2,\cdots,m).$ We set $M=\textrm{span}\\{u^{*}_{j}:1\leq
j\leq m\\}$ and $L=\textrm{span}\\{u_{i}:1\leq i\leq n\\}$. It follows from
the principle of local reflexivity in Banach spaces that there exists an
operator $C:M\rightarrow F^{*}$ such that
(i) $C|_{M\cap F^{*}}=I_{M\cap F^{*}}$;
(ii) $(1-\epsilon)\|u^{*}\|\leq\|Cu^{*}\|\leq(1+\epsilon)\|u^{*}\|,\quad
u^{*}\in M$;
(iii) $\langle u^{*},u\rangle=\langle u,Cu^{*}\rangle,\quad u^{*}\in M,u\in
L.$
We set $B=\sum\limits_{j=1}^{m}Cu^{*}_{j}\otimes w_{j}$ and $S=RB$. Clearly,
$CA^{*}=B^{*}$. By (ii), we get
$\|S\|_{\Upsilon_{p^{*}}}\leq\|R\|_{\Upsilon_{p^{*}}}\|B\|\leq\|B\|=\|B^{*}\|=\|CA^{*}\|\leq\|C\|\|A^{*}\|\leq
1+\epsilon.$
By (iii), it can be verified that $\textrm{trace}(RAT)=\textrm{trace}(TS)$.
Thus (3.1) yields that
$|\textrm{trace}(RT)-\textrm{trace}(TS)|<\epsilon.$
This implies
$|\textrm{trace}(RT)|\leq\epsilon+(1+\epsilon)\beta.$
By the arbitrariness of $R$, we get
$\alpha\leq\epsilon+(1+\epsilon)\beta.$
Letting $\epsilon\rightarrow 0$, we get $\alpha\leq\beta.$ This means
$\alpha=\beta.$
∎
###### Theorem 3.22.
Suppose that $E^{**}$ has the $MAP$, $X^{*}$ has the $PMAP$ and $X$ is order
continuous. Then
$\widetilde{\mathcal{I}}^{p^{*}}(X,E^{**})=(\Upsilon_{p}^{0}(E,X))^{*}.$
###### Proof.
We define an operator
$U:\widetilde{\mathcal{I}}^{p^{*}}(X,E^{**})\rightarrow(\Upsilon_{p}^{0}(E,X))^{*}$
by
$T\mapsto U_{T}(S)=\textrm{trace}(TS),\quad
T\in\widetilde{\mathcal{I}}^{p^{*}}(X,E^{**}),S\in\Upsilon_{p}^{0}(E,X).$
By Lemma 3.20, we get $\|U_{T}\|\leq\widetilde{i}^{p^{*}}(T).$
Let $\varphi\in(\Upsilon_{p}^{0}(E,X))^{*}$. We define an operator
$T:X\rightarrow E^{**}$ by $\langle
Tx,u^{*}\rangle=\langle\varphi,u^{*}\otimes x\rangle$ for $x\in X,u^{*}\in
E^{*}$. Obviously, $\|T\|\leq\|\varphi\|$ and
$\langle\varphi,S\rangle=\textrm{trace}(TS)$ for all $S\in\mathcal{F}(E,X)$.
Claim. $T$ is positively $p^{*}$-integral.
Let $G\in LDim(X)$ and $L\in COFIN(E^{**})$. Let $\epsilon>0$. Since $X^{*}$
has the $PMAP$, $X$ also has the $PMAP$. By [24, Theorem 2.7], there exists an
operator $D\in\mathcal{F}_{+}(X)$ with $\|D\|\leq 1+\epsilon$ such that
$D|_{G}=I_{G}$. By Lemma 3.21(b), we get
$\displaystyle\widetilde{\nu}^{p^{*}}(Q_{L}Ti_{G})$
$\displaystyle=\widetilde{\nu}^{p^{*}}(Q_{L}TDi_{G})$
$\displaystyle\leq\widetilde{\nu}^{p^{*}}(TD)$
$\displaystyle=\sup\\{|\textrm{trace}(TDV)|:V\in\mathcal{F}(E,X),\|V\|_{\Upsilon_{p^{*}}}\leq
1\\}$
$\displaystyle=\sup\\{|\langle\varphi,DV\rangle|:V\in\mathcal{F}(E,X),\|V\|_{\Upsilon_{p^{*}}}\leq
1\\}$ $\displaystyle\leq\|\varphi\|\|D\|$
$\displaystyle\leq(1+\epsilon)\|\varphi\|.$
Letting $\epsilon\rightarrow 0$, we get
$\widetilde{\nu}^{p^{*}}(Q_{L}Ti_{G})\leq\|\varphi\|.$ It follows from Theorem
3.5 that $T$ is positively $p^{*}$-integral and
$\widetilde{i}^{p^{*}}(T)\leq\|\varphi\|.$
Finally, by the definition of $\Upsilon_{p}^{0}(E,X)$, we see that
$\varphi=U_{T}$. Therefore the mapping $U$ is a surjective linear isometry.
∎
## References
* [1] S. J. Bernau, A unified approach to the principle of local reflexivity, in: H.E. Lacey (Ed), Notes in Banach Spaces, Univ. Texas Press, Austin, 1980, pp. 427-439.
* [2] O. Blasco, Boundary values of vector-valued harmonic functions considered as operators, Studia Math. 86(1987), 19-33.
* [3] O. Blasco, Positive $p$-summing operators on $L_{p}$-spaces, Proc. Amer. Math. Soc. 100(1987), 275-280.
* [4] J. A. Chávez-Domínguez, Duality for Lipschitz $p$-summing operators, J. Funct. Anal. 261(2011), 387-407.
* [5] A. Belacel and D. Chen, Lipschitz $(p,r,s)$-integral operators and Lipschitz $(p,r,s)$-nuclear operators, J. Math. Anal. Appl. 461(2018), 1115-1137.
* [6] D. Chen, A. Belacel and J. A. Chávez-Domínguez, Positive $p$-summing operators and disjoint $p$-summing operators, Positivity, DOI: 10.1007/s11117-020-00798-y.
* [7] D. Chen and B. Zheng, Lipschitz $p$-integral operators and Lipschitz $p$-nuclear operators, Nonlinear Analysis 75(2012), 5270-5282.
* [8] J. L. Conroy and L. C. Moore, Local reflexivity in Banach lattices, Unpublished.
* [9] A. Defant and K. Floret, Tensor norms and operator ideals, North-Holland Mathematics Studies 176, North-Holland Publishing, Amsterdam, 1993.
* [10] J. Diestel, H. Jarchow and A. Tonge, Absolutely summing operators, Cambridge Studies in Adv. Math., Vol.43, Cambridge Univ. Press, Cambridge, 1995.
* [11] J. D. Farmer and W. B. Johnson, Lipschitz $p$-summing operators, Proc. Amer. Math. Soc. 137(2009), 2989-2995.
* [12] E. G. Effros, M. Junge and Z.-J. Ruan, Integral mappings and the principle of local reflexivity for noncommutative $L^{1}$-spaces, Ann. Math. 151(2000), 59-92.
* [13] V. A. Geĭler and I. L. Chuchaev, The second conjugate of a summing operator, Izv. Vyssh. Uchebn. Zaved. Mat., No. 12(1982), 17-22.
* [14] A. Grothendieck, Résumé de la théorie métrique des produits tensoriels topologiques, Bol. Soc. Mat. São Paulo 8(1953), 1-79.
* [15] A. Grothendieck, Produits tensoriels topologiques et espaces nucléaires, Mem. Amer. Math. Soc. 16 (1955).
* [16] A. Grothendieck, Sur certaines classes des suites dans les espaces de Banach, et le théorème de Dvoretzky-Rogers, Bol. Soc. Mat. São Paulo 8(1956), 81-110.
* [17] W. B. Johnson, H. P. Rosenthal and M. Zippin, On bases, finite dimensional decompositions and weaker structures in Banach spaces, Israel J. Math. 9(1971), 488-506.
* [18] W. B. Johnson, B. Maurey and G. Schechtman, Non-linear factorization of linear operators, Bull. London. Math. Soc. 41(2009), 663-668.
* [19] M. Junge and J. Parcet, Maurey’s factorization theory for operator spaces, Math. Ann. 347(2010), 299-338.
* [20] L. Krsteva, The $p^{+}$-absolutely summing operators and their connection with $(b-o)$-linear operators [in Russian], Diplomnaya Rabota (Thesis), Leningrad Univ., Leningrad (1971).
* [21] J. Lindenstrauss and A. Pełczyński, Absolutely summing operators in $\mathcal{L}_{p}$ spaces and their applications, Studia Math. 29(1968), 275-326.
* [22] A. Lissitsin and E. Oja, The convex approximation property of Banach spaces, J. Math. Anal. Appl. 379(2011), 616-626.
* [23] P. Meyer-Nieberg, Banach lattices, Universitext, Springer-Verlag, Berlin-Heidelberg-New York, 1991.
* [24] N. J. Nielsen, The positive approximation property of Banach lattices, Israel J. Math. 62(1988), 99-112.
* [25] A. Persson, On some properties of $p$-nuclear and $p$-integral operators, Studia Math. 33(1969), 213-222.
* [26] A. Persson and A. Pietsch, $p$-nukleare und $p$-integrale Abbildungen in Banachräumen, Studia Math. 33(1969), 19-62.
* [27] A. Pietsch, Absolut $p$-summierende Abbildungen in normierten Räumen, Studia Math. 28(1967), 333-353.
* [28] A. Pietsch, Operator Ideals, North-Holland Math. Library, vol. 20, North-Holland Publishing Co., Amsterdam, 1980, translated from German by the author.
* [29] G. Pisier, Non-commutative vector valued $L_{p}$-spaces and completely $p$-summing maps, Asterisque. 247(1998).
* [30] O. I. Reinov, On linear operators with $p$-nuclear adjoints, Vestnik St. Petersburg Univ. Math. 33(2000), 19-21.
* [31] R. A. Ryan, Introduction to tensor products of Banach spaces, Springer Monographs in Mathematics, Springer-Verlag London Ltd., London, 2002.
* [32] H. H. Schaeffer, Normed tensor products of Banach lattices, Israel J. Math. 13(1972), 400-415.
* [33] H. H. Schaeffer, Banach lattices and positive operators, Springer-Verlag, Berlin and New York, 1974.
* [34] U. Schlotterbeck, Ueber Klassen majorisierbarer Operatoren in Banachverbänden, Rev. Acad. Ci. Zaragoza XXVI(1971), 585-614.
* [35] O. I. Zhukova, On modifications of the classes of $p$-nuclear, $p$-summing and $p$-integral operators, Sib. Math. J. 30(1998), 894-907.
# Elastic instabilities and bifurcations in flows of wormlike micellar
solutions past single and two vertically aligned microcylinders: Effect of
blockage and gap ratios
Mohd Bilal Khan and C. Sasmal <EMAIL_ADDRESS>, Soft Matter Engineering and
Microfluidics Lab, Department of Chemical Engineering, Indian Institute of
Technology Ropar, Punjab, India-140001.
###### Abstract
This study presents an extensive numerical investigation of the flow
characteristics of wormlike micellar solutions past a single microcylinder and
two vertically aligned microcylinders placed in a microchannel in the creeping
flow regime. The rheological behaviour of the micellar solution is described
by the two-species Vasquez-Cook-McKinley (VCM) constitutive model, which takes
into account both the breakage and reformation dynamics of micelles. For the
case of a single microcylinder, as the blockage ratio (ratio of the cylinder
diameter to the channel height) is gradually varied, we find the existence of
a flow bifurcation in the system, together with a gradual transition through a
range of flow states, for instance, steady and symmetric or Newtonian like,
steady and asymmetric, unsteady periodic and asymmetric, unsteady
quasi-periodic and asymmetric, and finally, unsteady quasi-periodic and
symmetric. For the case of two microcylinders, we observe the presence of
three distinct flow states in the system, namely, diverging (D),
asymmetric-diverging (AD) and converging (C) states, as the spacing between
the two cylinders is varied. Similar flow states have also been observed in
recent experiments dealing with wormlike micellar solutions. However, we show
that both the transition from one flow state to another in the case of a
single microcylinder and the occurrence of any particular flow state in the
case of two microcylinders are strongly dependent upon the values of the
Weissenberg number and the non-linear VCM model parameter $\xi$, which
indicates how easily a micelle breaks. Based on the results and discussion
presented herein for the single and two microcylinders, we ultimately provide
an explanation for the formation of preferential paths or lanes during the
flow of viscoelastic fluids through porous media, as seen in many prior
experiments in the creeping flow regime.
## I Introduction
Addition of a small amount of highly flexible surfactant molecules into a
solvent like water greatly influences the flow characteristics of the
resulting solution in a broad-spectrum of measurable scales. Beyond a critical
concentration, these amphiphilic surfactant molecules spontaneously
self-assemble into large aggregates called micelles, which can take different
shapes such as spherical, ellipsoidal, wormlike, or lamellar Dreiss (2007);
Dreiss and Feng (2017). Further increasing the surfactant concentration leads
to the entanglement of these micelles, thereby giving rise to complex
viscoelastic properties Yang (2002); Walker (2001). However, the rheological
behaviour of these micellar solutions, particularly wormlike micellar
solutions, is found to be more complex than that seen for polymer solutions or
melts under otherwise identical conditions Rothstein (2008, 2003); Berret
(1997). This is because these wormlike micelles can undergo
continuous scission and reformation in a flow field, which is unlikely to
happen for polymers due to the presence of a strong covalent backbone. Due to
the presence of interesting rheological properties, these micellar solutions
are widely used in many industrial applications, such as in the petroleum
industry in the enhanced oil recovery process, as drag-reducing agents, in
cosmetics and pharmaceutical industries, in coating and paints industries, in
biomedical applications, etc Schramm (2000); Möbius, Miller, and Fainerman
(2001); Raffa _et al._ (2015). Therefore, a detailed understanding of the
complex flow behaviour of these micellar solutions is much needed for their
effective application.
One of the examples wherein the complex flow behaviour of micellar solutions
can be seen is the flow through porous media. In many experiments, it has been
found that the micellar solution selects a preferential path or lane during
the flow through a porous medium. For instance, De et al. De _et al._
(2018) observed the formation of lanes when a micellar solution comprising
cetyl trimethyl ammonium bromide (CTAB) and sodium salicylate (NaSal) flows
through a model porous medium consisting of a microchannel with cylindrical
pillars placed in it. In another study De _et al._ (2017), they found a
similar formation of lanes and their path switching phenomena when dealing
with a hydrolyzed polyacrylamide (HPAM) polymer solution. Muller et al.
Müller, Vorwerk, and Brunn (1998) also noticed the same phenomenon in a
polyalphaolefin polymer solution flowing in a model porous medium consisting
of a glass pipe filled with Duran glass spheres. They further noted spatial
and temporal variations of these preferential paths in the porous media.
Recently, both Walkama et al. Walkama, Waisbord, and Guasto (2020) and
Eberhard et al. Eberhard _et al._ (2020) also showed the formation of these
lanes in both ordered and disordered model porous structures during the flow
of high-molecular-weight polyacrylamide (PAA) and xanthan gum polymer
solutions, respectively.
To understand such complex flow behaviour of either micellar or polymer
solutions in porous media, it is natural to start with a simple system
consisting of a single microcylinder placed in a microchannel. This simple
benchmark system creates a non-homogeneous flow field in the system, which in
turn, facilitates the understanding of the flow behaviour of various complex
fluids. This ultimately leads to a better understanding of the flow behaviour
in a more complex system. For this reason, a significant number of studies,
comprising both experiments and numerical simulations, have been carried
out on this benchmark system both for polymer Alves, Pinho, and Oliveira
(2001); McKinley, Armstrong, and Brown (1993); Hu and Joseph (1990); Shiang
_et al._ (1997); Qin _et al._ (2019) as well as micellar Moss and Rothstein
(2010); Zhao, Shen, and Haward (2016); Haward _et al._ (2019); Khan and
Sasmal (2020) solutions. These studies have revealed interesting flow physics
not seen in simple Newtonian fluids under otherwise identical conditions. For
instance, the emergence of an elastic instability
Qin _et al._ (2019) and flow bifurcation Haward _et al._ (2019) have been
found in this model geometry.
Although the geometrical configuration of this model system is simple, the
flow dynamics within it can be greatly altered either by changing the blockage
ratio (ratio of the cylinder diameter to the channel height) or by placing
another microcylinder beside, above, or below the existing cylinder with
various intercylinder spacings. For instance, both Moss and Rothstein Moss and
Rothstein (2010) and Zhao et al. Zhao, Shen, and Haward (2016) found that the
onset of the elastic instability in CPyCl (cetylpyridinium chloride)/NaSal and
CTAB/SHNC (3-hydroxy naphthalene-2-carboxylate) micellar solutions were
delayed as the blockage ratio was decreased. Furthermore, Zhao et al. Zhao,
Shen, and Haward (2016) observed a broad spectrum of flow states in this model
geometry as the blockage ratio and Weissenberg number were varied, for
instance, Newtonian like, bending streamlines, vortex growth upstream,
unsteady downstream, chaotic upstream and three-dimensional time dependent.
Recently, Varchanis et al. Varchanis _et al._ (2020) conducted both
experiments using a polyethylene oxide (PEO) polymer solution and numerical
simulations using the linear Phan-Thien-Tanner (l-PTT) constitutive model over
a wide range of the blockage ratio. They found the existence of supercritical
and subcritical pitchfork bifurcations in the flow field as the
blockage ratio was varied, and also observed no bifurcation in the flow for
certain ranges of the blockage ratio.
Apart from the influence of the blockage ratio, the placing of another
microcylinder in the channel can also greatly modify the flow field in this
model geometry. For example, Haward et al. Haward, Toda-Peters, and Shen
(2018) experimentally found a significantly modified flow field in between the
two microcylinders compared with that seen for the single microcylinder case,
particularly at high Weissenberg numbers. Varshney and Steinberg
Varshney and Steinberg (2017) found an increase in the vortex growth in
between the two microcylinders. This is in stark contrast to the finding that
polymer additives in a Newtonian solvent suppress vortices
Cressman, Bailey, and Goldburg (2001); Zhu and Xi (2019). Both these studies
used a polymer solution in their experiments wherein two microcylinders were
placed horizontally side-by-side. Recently, Hopkins et al. Hopkins, Haward,
and Shen (2020) performed experiments using CPyCl/NaSal micellar solution for
the flow past two microcylinders placed vertically side-by-side over a broad
range of the intercylinder gaps and Weissenberg numbers. This experimental
study, performed for the first time for this geometry, found the existence of
three stable flow states in the system depending upon the values of the
intercylinder gap and Weissenberg number, namely, diverging (D) state in which
all of the fluid preferably passes through the gaps in between the channel
walls and cylinder surface, asymmetric-diverging (AD) state in which the fluid
prefers to pass through either the gap in between the upper channel wall and
top cylinder surface or the lower channel wall and bottom cylinder surface,
and converging (C) state in which most of the fluid passes through the gap in
between the two cylinders. They presented a phase diagram on the existence of
all these flow states as a function of the intercylinder gap and Weissenberg
number, and also found a critical value of the intercylinder gap at which all
these three states, namely, D, AD and C co-exist together, thereby showing the
existence of a tristable state in viscoelastic fluids for the first time.
All these aforementioned studies demonstrate that the flow physics past a
microcylinder confined in a channel can become increasingly complex if one
changes either the blockage ratio or places an additional microcylinder in it.
This is primarily due to the variation of the extent of shear and extensional
flow fields in the domain, and due to the interaction of the elastic stresses
generated around the microcylinders. However, it can be seen that most of
these investigations are experimental, and in comparison, only a few numerical
studies have been carried out Varchanis _et al._ (2020).
Furthermore, these numerical simulations are based on the single-species
viscoelastic constitutive equations, thus restricting their applicability to
only polymer solutions in which breakage and reformation dynamics are absent
unlike wormlike micellar solutions. Therefore, these widely used
single-species viscoelastic constitutive equations are sometimes unable to
predict some typical flow physics occurring in wormlike micellar solutions.
For instance,
many experimental studies have found the existence of unsteady motion of a
sphere falling freely in wormlike micellar solutions in the creeping flow
regime once the Weissenberg number exceeds a critical value Mohammadigoushki and
Muller (2016); Chen and Rothstein (2004). It was predicted experimentally that
this motion was due to the breakage of long and stretched micelles downstream
of the sphere, resulting from an increase in the extensional flow strength.
Only recently Sasmal (2021), it has been proven that this motion is, indeed,
due to the breakage of micelles downstream of the sphere using the two-species
Vasquez-Cook-McKinley (VCM) model Vasquez, McKinley, and Cook (2007). This
model considers the wormlike micelles as an elastic segment composed of
Hookean springs, which all together form an elastic network that can
continuously break and reform in a flow field. The breaking and reforming
processes of this model were incorporated based on the discrete and simplified
version of Cates’ reversible breaking theory for wormlike micelles Cates
(1987). According to this model, a long micelle of length $L$ is likely to
break in the middle into two short micelles of equal length $L/2$, and two
short micelles can also recombine into a long micelle. This is opposed to
Cates’ original theory, in which a long micelle can break at any point along
its length with equal probability and micelles of any length can join together
to form a longer micelle. However, the simplification adopted for the
breakage and reformation dynamics in the VCM model makes it easy to implement
in any CFD platform to simulate the complex flows of micellar solutions, and
it also allows one to capture the temporal and spatial variations
in the number density of short and long micelles.
The VCM model efficiently captures all the typical flow characteristics of
wormlike micellar solutions like shear thinning, shear banding, extensional
hardening and subsequent thinning, etc. in homogeneous viscometric flows Pipe
_et al._ (2010); Zhou, McKinley, and Cook (2014). For non-viscometric flows,
the VCM model also successfully predicts many experimental observations seen
in flows through complex geometries, for instance, the formation of a lip
vortex in a microfluidic cross-slot cell Kalb, Cromer _et al._ (2017); Kalb,
Villasmil-Urdaneta, and Cromer (2018), flow characteristics in a micropore
with step expansion and contraction Sasmal (2020), transient evolution of the
velocity profiles in a Taylor-Couette flow Mohammadigoushki _et al._ (2019),
etc. Only recently, the flow characteristics of WLM solutions through the
benchmark system of a microcylinder confined in a channel at a fixed blockage
ratio have been studied based on this VCM model by us in our earlier study
Khan and Sasmal (2020). In this investigation, as in the experiments Moss and
Rothstein (2010); Zhao, Shen, and Haward (2016), we also observed the
emergence of an elastic instability in the system once the Weissenberg number
exceeds a critical value. Furthermore, we showed that this instability is
greatly influenced by the non-linear VCM model parameter $\xi$, which
indicates how easily a micelle breaks. However, a knowledge gap remains in the
literature, in particular for the flow past two vertically aligned
microcylinders, which may facilitate the understanding of the formation of
preferential paths or lanes during the flow of viscoelastic fluids in porous
media.
Therefore, the aim of this study is threefold: firstly, we aim to numerically
investigate how the blockage ratio influences the flow dynamics of a micellar
solution past a single microcylinder placed in a channel using
the two-species VCM constitutive model. Secondly, for the first time in
numerical simulations, we plan to extend the investigation for two vertically
aligned microcylinders placed in a channel for different intercylinder gap
ratios, and try to reproduce some of the flow behaviours observed in recent
experiments carried out with WLM solutions Hopkins, Haward, and Shen (2020).
Lastly and most importantly, we aim to provide evidence for the formation of
preferential paths or lanes during the flow of viscoelastic fluids through
porous media based on the analysis of our single- and two-microcylinder
results.
## II Problem description and governing equations
The present study aims to investigate the flow behavior of a wormlike micellar
solution past a single microcylinder and two vertically aligned microcylinders of diameter
$d$ (or of radius $R$) placed in a rectangular microchannel with different
blockage $(BR)$ and gap $(G)$ ratios, as shown schematically in sub Fig. 1(a)
and (c), respectively. The WLM solution enters the channel with a uniform
velocity of $U_{in}$. In the case of a single cylinder, the blockage ratio is
defined as the ratio of the cylinder diameter to that of the channel height,
i.e., $BR=\frac{d}{H}$. In the case of two cylinders, on the other hand, the
gap ratio is defined as $G=\frac{S_{1}}{S_{1}+S_{2}}$, where $S_{1}$ is the
distance between the two cylinders and $S_{2}$ is the distance between the
channel wall and the surface of the cylinder. A value of $G=0$ implies that
the surfaces of the top and bottom cylinders just touch each other, while
$G=1$ indicates that the cylinder surface touches the channel wall. In both
the cases, the upstream $(L_{u})$ and downstream $(L_{d})$ lengths of the
channel are kept at $100d$. This length is found to be sufficiently large that
it does not influence the flow dynamics around the microcylinders.
Figure 1: Schematic of the present problem for (a) a single microcylinder and
(c) two side-by-side vertically aligned microcylinders; sub Figs. (b) and (d)
show the corresponding computational meshes. The flow direction is indicated
by arrows in the schematic.
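As a quick numerical illustration of the two geometric parameters defined above, the following minimal Python sketch evaluates $BR$ and $G$ together with their limiting cases; the numerical values are arbitrary and chosen only for demonstration.

```python
# Illustrative helpers for the geometric parameters defined above;
# all numerical values are arbitrary and chosen only for demonstration.
def blockage_ratio(d, H):
    """BR = d/H for the single-microcylinder configuration."""
    return d / H

def gap_ratio(S1, S2):
    """G = S1/(S1 + S2) for the two-microcylinder configuration."""
    return S1 / (S1 + S2)

print(blockage_ratio(d=1.0, H=3.0))  # BR ~ 0.33, close to the BR = 0.34 case
print(gap_ratio(S1=0.0, S2=1.0))     # G = 0: the two cylinder surfaces touch
print(gap_ratio(S1=1.0, S2=0.0))     # G = 1: cylinder surfaces touch the walls
```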
### II.1 Flow equations
The present flow field will be governed by the following equations, written in
their dimensionless forms:
Equation of continuity
$\bm{\nabla}\cdot\bm{U}=0$ (1)
Cauchy momentum equation
$El^{-1}\frac{D\bm{U}}{Dt}=-\nabla P+\nabla\cdot\bm{\tau}$ (2)
In the above equations, $\bm{U}$, $t$ and $\bm{\tau}$ are the velocity vector,
time and total extra stress tensor, respectively. All the spatial dimensions
are scaled by the cylinder radius $R$, velocity is scaled by
$R/\lambda_{eff}$, stress is scaled by the plateau modulus $G_{0}$ and time is
scaled by $\lambda_{eff}$. Here
$\lambda_{eff}=\frac{\lambda_{A}}{1+c_{Aeq}^{{}^{\prime}}\lambda_{A}}$ is the
effective relaxation time for the two-species VCM model in which $\lambda_{A}$
and $c_{Aeq}^{{}^{\prime}}$ are the dimensional relaxation time and
equilibrium breakage rate of the long worm A, respectively, as discussed in
detail in the subsequent subsection. The elasticity number is defined as
$El=\frac{Wi}{Re}$, where $Wi=\frac{\lambda_{eff}U_{in}}{R}$ is the
Weissenberg number, and $Re=\frac{RU_{in}\rho}{\eta_{0}}$ is the Reynolds
number. Here $\rho$ and $\eta_{0}$ are the solution density and zero-shear
rate viscosity, respectively. For an inertialess flow, the left hand side of
Eq. 2 is essentially zero. The total extra stress tensor, $\bm{\tau}$, for a
wormlike micellar solution is given as:
$\bm{\tau}=\bm{\tau_{w}}+\bm{\tau_{s}}$ (3)
where $\bm{\tau_{w}}$ is the non-Newtonian contribution from the wormlike
micelles, whereas $\bm{\tau_{s}}$ is the contribution from the Newtonian
solvent, equal to $\beta\dot{\bm{\gamma}}$. Here the
parameter $\beta$ is the ratio of the solvent viscosity to that of the zero-
shear rate viscosity of the wormlike micellar solution and
$\dot{\bm{\gamma}}=\nabla\bm{U}+\nabla\bm{U}^{T}$ is the strain-rate tensor.
For the two-species VCM model, the total extra stress tensor is given by
$\bm{\tau}=\bm{\tau}_{w}+\bm{\tau_{s}}=(\bm{A}+2\bm{B})-\left(n_{A}+n_{B}\right)\bm{I}+\beta\dot{\bm{\gamma}}$
(4)
Here $n_{A}$ and $\bm{A}$ are the number density and conformation tensor of
the long worm A, respectively, whereas $n_{B}$ and $\bm{B}$ are those of the
short worm B. The temporal and spatial evolution of the number densities and
conformation tensors of the long and short worms is written in the following
subsection based on the VCM model.
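As a minimal sketch of how the dimensionless groups introduced above are evaluated, the snippet below computes $\lambda_{eff}$, $Wi$, $Re$ and $El$ from a set of assumed dimensional parameters; all input values are illustrative only and are not taken from any experiment.

```python
# Minimal sketch of the scaling described above; all dimensional input
# values below are assumed for illustration only.
lam_A = 1.0     # relaxation time of the long worm A [s] (assumed)
c_Aeq = 1.6     # dimensional equilibrium breakage rate [1/s] (assumed)
R     = 50e-6   # cylinder radius [m] (assumed)
U_in  = 1e-3    # inlet velocity [m/s] (assumed)
rho   = 1.0e3   # solution density [kg/m^3] (assumed)
eta_0 = 1.0     # zero-shear-rate viscosity [Pa s] (assumed)

lam_eff = lam_A / (1.0 + c_Aeq * lam_A)  # effective relaxation time [s]
Wi = lam_eff * U_in / R                  # Weissenberg number
Re = R * U_in * rho / eta_0              # Reynolds number
El = Wi / Re                             # elasticity number

# El >> 1 here, so inertia is negligible (creeping flow regime).
print(f"lam_eff = {lam_eff:.3f} s, Wi = {Wi:.2f}, Re = {Re:.1e}, El = {El:.1e}")
```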
### II.2 Two-species constitutive equations for wormlike micelles: Vasquez-
Cook-McKinley (VCM) model
The VCM constitutive equations provide the species conservation equations for
the long $(n_{A})$ and short worms $(n_{B})$ along with the equations for the
evolution of their conformation tensors $\bm{A}$ and $\bm{B}$, respectively.
According to this model, the equations for the variations of $n_{A}$, $n_{B}$,
$\bm{A}$, and $\bm{B}$ are given in their non-dimensional forms as follows:
$\mu\frac{Dn_{A}}{Dt}-2\delta_{A}\nabla^{2}n_{A}=\frac{1}{2}c_{B}n_{B}^{2}-c_{A}n_{A}$
(5)
$\mu\frac{Dn_{B}}{Dt}-2\delta_{B}\nabla^{2}n_{B}=-c_{B}n_{B}^{2}+2c_{A}n_{A}$
(6)
$\mu\bm{A}_{(1)}+\bm{A}-n_{A}\bm{I}-\delta_{A}\nabla^{2}\bm{A}=c_{B}n_{B}\bm{B}-c_{A}\bm{A}$
(7)
$\epsilon\mu\bm{B}_{(1)}+\bm{B}-\frac{n_{B}}{2}\bm{I}-\epsilon\delta_{B}\nabla^{2}\bm{B}=-2\epsilon
c_{B}n_{B}\bm{B}+2\epsilon c_{A}\bm{A}$ (8)
Here the subscript $()_{(1)}$ denotes the upper-convected derivative defined
as $\frac{\partial()}{\partial
t}+\bm{U}\cdot\nabla()-\left((\nabla\bm{U})^{T}\cdot()+()\cdot\nabla\bm{U}\right)$.
The non-dimensional parameters $\mu$, $\epsilon$ and $\delta_{A,B}$ are
defined as $\frac{\lambda_{A}}{\lambda_{eff}}$,
$\frac{\lambda_{B}}{\lambda_{A}}$ and $\frac{\lambda_{A}D_{A,B}}{R^{2}}$,
respectively, where $\lambda_{B}$ is the relaxation time of the short worm $B$
and $D_{A,B}$ are the dimensional diffusivities of the long and short worms.
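To make the roles of the breakage and reformation terms concrete, the following minimal sketch integrates the homogeneous, diffusion-free limit of Eqs. (5) and (6) (no flow, with the rates frozen at the equilibrium values quoted later in this section; the initial condition is hypothetical). It also verifies that the total amount of long-worm material, $n_{A}+n_{B}/2$, is conserved by these kinetics.

```python
# Minimal sketch: homogeneous, diffusion-free limit of Eqs. (5)-(6) with the
# rates frozen at their equilibrium values; the initial state is hypothetical.
from scipy.integrate import solve_ivp

mu, c_Aeq, c_Beq = 2.6, 1.6, 0.8607  # non-dimensional parameters of this study

def rhs(t, y):
    nA, nB = y
    dnA = (0.5 * c_Beq * nB**2 - c_Aeq * nA) / mu   # Eq. (5) without transport
    dnB = (-c_Beq * nB**2 + 2.0 * c_Aeq * nA) / mu  # Eq. (6) without transport
    return [dnA, dnB]

sol = solve_ivp(rhs, (0.0, 50.0), [1.0, 0.0], rtol=1e-10, atol=1e-12)
nA, nB = sol.y[0, -1], sol.y[1, -1]
print(f"n_A -> {nA:.4f}, n_B -> {nB:.4f}")
print(f"conserved n_A + n_B/2 = {nA + nB / 2.0:.6f}")  # stays at its initial value 1
print(f"breakage flux c_A*n_A        = {c_Aeq * nA:.4f}")
print(f"reformation flux c_B*n_B^2/2 = {0.5 * c_Beq * nB**2:.4f}")  # balances it
```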
Furthermore, according to the VCM model, the non-dimensional breakage rate
$(c_{A})$ of the long worm A into two equally sized small worms B depends on
the local state of the stress field, given by the expression
$c_{A}=c_{Aeq}+\mu\frac{\xi}{3}\left(\dot{\bm{\gamma}}:\frac{\bm{A}}{n_{A}}\right)$.
On the other hand, the reforming rate of the long worm A from the two short
worms B is assumed to be constant, given by the equilibrium reforming rate,
i.e., $c_{B}=c_{Beq}$. Here the non-linear parameter $\xi$ is the scission
energy required to break a long worm into two equal-sized short worms. The
significance of this parameter is that as its value decreases, the amount of
stress needed to break a micelle increases. The values of the VCM model
parameters chosen for the present study are as follows: $\beta_{VCM}=10^{-4}$,
$\mu=2.6$, $C_{Aeq}=1.6$, $C_{Beq}=0.8607$, $\epsilon=0.005$,
$\delta_{A}=\delta_{B}$ and $\xi=0.00001,0.01,0.1$. The response of the
present micellar solution with these VCM model parameters in standard
viscometric flows is shown in Fig. 2. One can see that the solution exhibits
the shear-thinning property in shear flows and extensional hardening and
subsequent thinning in uniaxial extensional flows, which are very often seen
to occur for a wormlike micellar solution.
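The stress coupling in the breakage rate $c_{A}$ can also be illustrated with a short sketch. The conformation tensor used below is a hypothetical, UCM-like guess for steady simple shear and is not taken from the present simulations.

```python
# Sketch of the stress-coupled breakage rate c_A of the long species A.
import numpy as np

def breakage_rate(gamma_dot, A, n_A, c_Aeq=1.6, mu=2.6, xi=0.01):
    """c_A = c_Aeq + (mu*xi/3) * (gamma_dot : A/n_A); ':' is the double dot."""
    return c_Aeq + (mu * xi / 3.0) * np.tensordot(gamma_dot, A / n_A)

# Hypothetical planar simple shear at non-dimensional rate Wi.
Wi = 2.0
gamma_dot = np.array([[0.0, Wi],
                      [Wi, 0.0]])  # symmetric strain-rate tensor
A = np.array([[1.0 + 2.0 * Wi**2, Wi],
              [Wi, 1.0]])          # UCM-like guess for the conformation tensor
print(breakage_rate(gamma_dot, A, n_A=1.0))  # > c_Aeq: shear enhances breakage
```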
Figure 2: Variations of the non-dimensional shear stress (a) and shear
viscosity (b) with the non-dimensional shear rate (or the shear Weissenberg
number) and first normal stress difference (c) and extensional viscosity (d)
with the non-dimensional extension rate (or the extensional Weissenberg
number) in homogeneous shear and uniaxial extensional flows, respectively.
Here the symbols (both filled and open) are used to discuss some results
presented in section IV.
Furthermore, one can see that as the value of $\xi$ increases, the
shear-thinning tendency of the micellar solution increases, whereas the
extensional hardening and subsequent thinning tendency decreases.
## III Numerical details
The finite-volume-method-based open source computational fluid dynamics code
OpenFOAM Weller _et al._ (1998) and the recently developed rheoFoam solver
available in rheotool Pimenta and Alves (2016) have been used to solve the
aforementioned governing equations, namely, the mass, momentum, constitutive
and number density evolution equations. All the diffusion terms in the momentum,
constitutive and number density equations were discretized using the second-
order accurate Gauss linear orthogonal interpolation scheme. All the gradient
terms were discretized using the Gauss linear interpolation scheme. While the
linear systems of the pressure and velocity fields were solved using the
preconditioned conjugate solver (PCG) with DIC (Diagonal-based Incomplete
Cholesky) preconditioner, the stress fields were solved using the
preconditioned bi-conjugate gradient solver (PBiCG) solver with DILU
(Diagonal-based Incomplete LU) preconditioner Ajiz and Jennings (1984); Lee,
Zhang, and Lu (2003). All the advective terms in the constitutive equations
were discretized using the high-resolution CUBISTA (Convergent and Universally
Bounded Interpolation Scheme for Treatment of Advection) scheme for its
improved iterative convergence properties Alves, Oliveira, and Pinho (2003).
In the present study, the pressure-velocity coupling was established using the
SIMPLE (Semi-Implicit Method for Pressure-Linked Equations) method, and the
improved both side diffusion (iBSD) technique was used to stabilize the
numerical solutions. The absolute tolerance level for the pressure, velocity,
stress and micellar concentration fields was set as $10^{-10}$.
A suitable grid density is selected for both the systems by performing the
standard grid independence study. In doing so, three different grid densities
for each blockage (in the case of single microcylinder) and gap (in the case
of two microcylinders) ratio, namely, G1, G2, and G3, consisting of a
different number of grid points on the cylinder surface as well as in the
whole computational domain were created, and the simulations were run at the
highest value of the Weissenberg number considered in the present study. After
inspecting the results (in terms of the variation of the velocity, stress and
number densities of micelles at different probe locations in the computation
domain) obtained for different grid densities, the grid G2 with a range of
59280-82900 (depending upon the blockage ratio) hexahedral cells for the
single microcylinder and 83200-88200 (depending upon the gap ratio) hexahedral
cells for the two microcylinders cases were found to be adequate for the
present study. Careful consideration was given to the construction of each
grid. For instance, a very fine mesh is created in the vicinity of the
solid cylinder wall to capture the steep gradients of velocity, stress, or
concentration fields, whereas a relatively coarse mesh is created away from
the solid wall, see sub Figs. 1(b) and (d). As with the grid independence
study, a systematic time-independence study was also carried out to choose an
optimum time step size, and a non-dimensional time step size of 0.00001 was
selected for both the systems. The computational domain and its meshing have
been done with the help of the blockMeshDict subroutine available in OpenFOAM.
Finally, appropriate boundary conditions are employed at different boundaries
of the present computational domain to complete the problem description. On
the solid surfaces, the standard no-slip and no-penetration boundary
conditions for the velocity, i.e., $\bm{U}=0$ are imposed, whereas a no-flux
boundary condition is assumed for both the stress and the micellar number
density, i.e., $\textbf{n}\cdot\nabla\textbf{A}=0$,
$\textbf{n}\cdot\nabla\textbf{B}=0$, $\textbf{n}\cdot\nabla{n_{A}}=0$ and
$\textbf{n}\cdot\nabla{n_{B}}=0$, where $\textbf{n}$ is the outward unit normal vector.
All the simulations were run in a parallel fashion with MPI (Message Passing
Interface) interface facility available in OpenFOAM wherein each simulation
was distributed among 8 to 12 CPU cores, each having 2 GB of RAM. A detailed
validation of the present numerical set up has already been presented in our
earlier studies Sasmal (2020); Khan and Sasmal (2020), and hence it is not
repeated here.
## IV Results and discussion
### IV.1 Single microcylinder case: Effect of blockage ratio
Before studying the complex flow dynamics of a wormlike micellar solution,
first, we present the results of the flow behavior of a simple Newtonian fluid
around a single microcylinder confined between two parallel channel walls at
different blockage ratios. Figure 3 shows the streamlines and velocity
magnitude plots of a Newtonian fluid at a particular value of $BR=0.34$. It
can be clearly seen that both the streamline and velocity magnitude plot show
a perfect fore-aft symmetry along the horizontal and vertical mid planes
passing through the origin, as expected for a simple Newtonian fluid flowing
under the creeping flow condition. The streamlines just follow a smooth order
and steady path without crossing to each other. Furthermore, the streamlines
are seen to be attached with the cylinder surface and hence, no separation of
flow happens. This result is inline with that observed in our earlier
numerical study Khan and Sasmal (2020) and experimental observation of Zhao
et. al. Zhao, Shen, and Haward (2016). The velocity magnitude is seen to be
maximum in the narrow gap between the channel wall and cylinder surface. For
other blockage ratios considered in this study, a similar flow pattern is
observed for the Newtonian fluid. The only difference seen is that the maximum
velocity magnitude in the gaps between the channel wall and cylinder surface
decreases as the blockage ratio decreases. This is simply due to an increase
in the flow area with the decreasing value of the blockage ratio.
Figure 3: Representative streamline and velocity magnitude plots for Newtonian
fluid with blockage ratio of $BR=0.34$.
Unlike for a Newtonian fluid, the flow of WLM solutions is expected to depend
strongly on the blockage ratio due to their complex rheological behaviour.
Additionally, one can expect a strong dependence on non-dimensional parameters
such as the Weissenberg number and the non-linear VCM model parameter $\xi$.
At very low values of the Weissenberg number, for instance at $Wi=0.01$, the
flow behaviour of WLM solutions at different blockage ratios is found to be
similar to that observed for the Newtonian fluid (results not shown here),
owing to the weak viscoelastic effects. However, as the Weissenberg number
gradually increases, the flow dynamics become strongly dependent on the values
of the blockage ratio, the Weissenberg number, and the non-linear VCM model
parameter $\xi$. For example, at $Wi=1$, although the flow remains steady and
the streamlines follow ordered paths as seen for the Newtonian fluid and for
WLM solutions at $Wi=0.01$, the symmetry of the flow profiles about the
vertical mid-plane passing through the origin starts to break, Fig. 4. As the
blockage ratio increases, the tendency to break this vertical symmetry
increases; for instance, see the results in sub Figs. 4(b) and (d) for
$BR=0.34$ and 0.167, respectively. However, the horizontal symmetry still
persists at this value of the Weissenberg number irrespective of the value of
$BR$. The corresponding surface plot of the non-dimensional principal stress
difference, defined as
$PSD=\sqrt{\left(\tau_{xx}-\tau_{yy}\right)^{2}+\left(2\tau_{xy}\right)^{2}}$,
is presented in Fig. 5 at different blockage ratios. Regardless of the
blockage ratio, the PSD value is high in the vicinity of the cylinder surface
due to the presence of a high shearing zone. Apart from this, a strand of high
PSD value, also known as the birefringent strand, forms along the horizontal
mid-plane downstream of the cylinder. This is due to the formation of a
strongly extensional flow field in this region, which aligns more of the long
micelles with the flow as well as breaking them into smaller ones; both
effects tend to increase the PSD value in this region. As the blockage ratio
increases, both the thickness and the magnitude of this birefringent strand
increase due to an increase in both the shear and extensional flow strengths.
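For concreteness, the PSD field can be evaluated point-wise from sampled
stress components. The following is a minimal Python sketch, not the authors'
OpenFOAM post-processing; the random arrays are hypothetical stand-ins for
stress fields sampled from the solution.

```python
import numpy as np

def principal_stress_difference(tau_xx, tau_yy, tau_xy):
    """PSD = sqrt((tau_xx - tau_yy)^2 + (2*tau_xy)^2), element-wise."""
    return np.sqrt((tau_xx - tau_yy) ** 2 + (2.0 * tau_xy) ** 2)

# Synthetic stand-ins for stress fields sampled onto a grid:
tau_xx, tau_yy, tau_xy = (np.random.rand(64, 64) for _ in range(3))
psd = principal_stress_difference(tau_xx, tau_yy, tau_xy)
print("max PSD on the sampled grid:", psd.max())
```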
Figure 4: Representative streamline and velocity magnitude plots of a WLM
solution at $Wi=1.0$ and $\xi=0.01$ for different blockage ratios. Figure 5:
Surface plot of the principal stress difference of a WLM solution at $Wi=1.0$
and $\xi=0.01$ for different blockage ratios.
As the Weissenberg number is further increased, say to 2.5, the flow remains
steady and horizontally symmetric for the lowest blockage ratio of $BR=0.167$,
sub Fig. 6(e). On the other hand, at the maximum blockage ratio of $BR=0.67$
considered in this study, the flow becomes unsteady and quasi-periodic at the
same Weissenberg number. At this blockage ratio, a distortion of the
streamline profiles is observed, particularly at the rear side of the
cylinder. Furthermore, the region of maximum velocity magnitude alternates
between the lower (sub Fig. 6(a)) and upper (sub Fig. 6(b)) narrow gap regions
situated between the channel wall and the cylinder surface. This suggests the
emergence of an elastic instability in the flow field, and of an elastic wave
downstream of the cylinder associated with the shifting of the maximum
velocity magnitude zone between the two gap regions, as discussed and
explained in detail in our earlier study Khan and Sasmal (2020). Moreover, a
small vortex forms downstream of the cylinder at this blockage ratio and
Weissenberg number. The nature of the flow field at these two extreme blockage
ratios, namely $BR=0.167$ and 0.67, is further confirmed in Fig. 7(a), wherein
the temporal variation of the non-dimensional stream-wise velocity is plotted
at a probe located at the mid-point between the cylinder surface and the
channel wall for different blockage ratios. At $BR=0.167$, the velocity
reaches a steady value with time, indicating a steady flow field; whereas at
$BR=0.67$, it fluctuates with time, indicating the occurrence of unsteadiness
in the flow field. The power spectrum of these velocity fluctuations is
presented in sub Fig. 7(d); it shows that the flow is governed by a single
dominant frequency along with a broad spectrum of smaller frequencies,
indicating the quasi-periodic nature of the flow field at these values of $Wi$
and $BR$.
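The steady/periodic/quasi-periodic classification used here can be reproduced
from a probe time series with a standard periodogram. Below is a minimal
Python sketch, not the authors' post-processing code; the probe signal is a
synthetic stand-in with one dominant frequency plus weak broadband noise,
mimicking the quasi-periodic case of Fig. 7(d).

```python
import numpy as np
from scipy.signal import periodogram

dt = 1e-5  # non-dimensional time step (the value quoted in Sec. III)
t = np.arange(0.0, 2.0, dt)
# Synthetic stand-in for the sampled probe velocity:
u = 0.1 * np.sin(2 * np.pi * 40 * t) + 0.005 * np.random.randn(t.size)

# Remove the mean, then compute the power spectrum of the fluctuations.
freqs, power = periodogram(u - u.mean(), fs=1.0 / dt)
dominant = freqs[np.argmax(power)]
print(f"dominant frequency: {dominant:.4g}")
# A single sharp peak indicates periodic flow; a dominant peak over a broad
# band of smaller peaks indicates quasi-periodic flow; negligible power at
# all non-zero frequencies indicates a steady state.
```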
Figure 6: Representative streamline and velocity magnitude plots of a WLM
solution at $Wi=2.5$ and $\xi=0.01$ for different blockage ratios. Figure 7:
(a) Temporal variation of the stream-wise velocity component at a probe
location … and (b-d) power spectral density plot of the velocity fluctuations
at different blockage ratios at $Wi=2.5$ and $\xi=0.01$.
In between these two extreme blockage ratios, there is a range of blockage
ratios wherein the fluid prefers to flow through one side of the cylinder; for
instance, see sub Figs. 6(c) and (d) for the results at $BR=0.34$ and 0.25,
respectively. This results in the formation of an almost stagnant region on
the opposite side of the cylinder. Here the preferential side occurs at $Y<0$
for $BR=0.34$ (sub Fig. 6(c)), whereas for $BR=0.25$, it occurs at $Y>0$ (sub
Fig. 6(d)). However, the selection of this preferential side is completely
random, and the fluid is equally likely to pass through the other side of the
cylinder. The occurrence of this flow asymmetry indicates the origin of a
pitchfork bifurcation in the flow field. This kind of bifurcation has also
been observed in earlier experimental investigations dealing with polymer
Haward, Hopkins, and Shen (2020) and WLM solutions Haward _et al._ (2019), as
well as in numerical investigations performed with a single-species
viscoelastic constitutive model Varchanis _et al._ (2020). At $BR=0.34$, the
flow field is unsteady, whereas it is steady at $BR=0.25$, as can be seen from
the temporal variation of the non-dimensional stream-wise velocity presented
in sub Fig. 7(a). The corresponding power spectrum of the velocity
fluctuations at $BR=0.34$ is depicted in sub Fig. 7(b). From this figure, one
can see that the flow is governed by a single dominant frequency, suggesting
regular periodic unsteadiness in the flow field. At $BR=0.57$, an asymmetry in
the flow field is also seen (results not shown here), and the flow field is
again found to be unsteady and quasi-periodic in nature, as is evident from
the power spectrum of the velocity fluctuations presented in sub Fig. 7(c).
The corresponding variation of the PSD value at $Wi=2.5$ and different
blockage ratios is depicted in Fig. 8. Once again, at this Weissenberg number,
a long birefringent strand of high PSD value forms downstream of the cylinder,
as seen at $Wi=1$ (Fig. 5). However, the PSD value is higher at $Wi=2.5$ than
at $Wi=1$ due to the increase in the flow strength. Furthermore, the strand
bends downstream of the cylinder at blockage ratios of 0.34 (sub Fig. 8(b))
and 0.25 (sub Fig. 8(c)) due to the presence of asymmetric flow at these
blockage ratios.
Figure 8: Surface plot of the principal stress difference of a WLM solution at
$Wi=2.5$ and $\xi=0.01$ for different blockage ratios.
To characterize the asymmetric nature of the flow more quantitatively, we
define a dimensionless flow asymmetry parameter $I_{s}$ as follows Varchanis
_et al._ (2020); Haward _et al._ (2019)
$I_{s}=\frac{U_{X,1}-U_{X,2}}{U_{X,1}+U_{X,2}}$ (9)
Here $U_{X,1}$ and $U_{X,2}$ are the stream-wise velocities at the midpoints
between the cylinder surface and the upper and lower channel walls,
respectively. A value of $|I_{s}|=0$ denotes a perfectly symmetric flow,
whereas $|I_{s}|=1$ implies a perfectly asymmetric flow in which the whole
fluid passes through one side of the cylinder. Note that in the case of an
unsteady flow, the time-averaged value of $U_{X}$ is used in the calculation
of $I_{s}$. The variation of the absolute value of $I_{s}$ with the
Weissenberg number and blockage ratio is presented in Fig. 9. It can be seen
that the value of $I_{s}$ is essentially zero for the blockage ratios of 0.167
and 0.67. This is due to the existence of steady symmetric and unsteady
symmetric quasi-periodic flows at these two blockage ratios, respectively. On
the other hand, at blockage ratios of 0.25 and 0.34, there is a critical value
of the Weissenberg number up to which the asymmetry parameter is zero; beyond
it, the parameter suddenly starts to increase and finally levels off to an
almost constant value at high Weissenberg numbers. The critical value of the
Weissenberg number at which the transition from symmetric to asymmetric flow
occurs (i.e., the onset of the pitchfork bifurcation) increases as the
blockage ratio decreases. For instance, it is around 1.25 at $BR=0.34$, while
it is around 1.75 at $BR=0.25$.
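A minimal Python sketch of evaluating Eq. (9) from probe data follows; this is
not the authors' code, and the probe time series are synthetic stand-ins. For
unsteady flows the time-averaged velocities are used, as stated above.

```python
import numpy as np

def asymmetry_parameter(U1, U2):
    """Eq. (9): I_s from (time-averaged) gap velocities."""
    u1, u2 = np.mean(U1), np.mean(U2)
    return (u1 - u2) / (u1 + u2)

# Synthetic stand-ins for probe time series in the upper and lower gaps:
U_upper = 0.9 + 0.02 * np.random.randn(1000)
U_lower = 0.3 + 0.02 * np.random.randn(1000)
Is = asymmetry_parameter(U_upper, U_lower)
print(f"|I_s| = {abs(Is):.2f}  (0: symmetric, 1: fully asymmetric)")
```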
Figure 9: Variation of the flow asymmetry parameter $(I_{s})$ with the
Weissenberg number and blockage ratio at $\xi=0.01$. Figure 10: Variation of
the flow asymmetry parameter $(I_{s})$ with the blockage ratio at $Wi=2.5$ and
$\xi=0.01$. In this figure: (I) steady and symmetric, (II) steady and
asymmetric, (III) unsteady, periodic and asymmetric, (IV) unsteady,
quasi-periodic and asymmetric, and (V) unsteady, quasi-periodic and symmetric.
Furthermore, one can see that the value of the flow asymmetry parameter
$I_{s}$ increases with the blockage ratio, in line with the simulations of
Varchanis et al. Varchanis _et al._ (2020). Based on the value of the flow
asymmetry parameter, a phase diagram is presented in Fig. 10, wherein the
different flow states observed in the present study are summarized as a
function of the blockage ratio at a Weissenberg number of 2.5 and non-linear
VCM model parameter $\xi=0.01$. At blockage ratios lower than 0.167, the flow
is steady and symmetric. Beyond that and up to $BR=0.27$, a transition to a
steady and asymmetric flow occurs. After that, the flow transitions to an
unsteady periodic state and then to a quasi-periodic state as the blockage
ratio gradually increases. On increasing the blockage ratio beyond about 0.55,
the flow transitions to a quasi-periodic and symmetric state in which the flow
resymmetrizes.
Next, we aim to explain the origin of this asymmetric flow resulting from the
flow bifurcation and elastic instabilities in WLM solutions. It is well known
that the onset of elastic instabilities in either polymer or micellar
solutions results from the presence of curved streamlines in the vicinity of
the microcylinder and the accumulation of elastic stresses downstream of it
Pakdel and McKinley (1996); McKinley, Pakdel, and Öztekin (1996); Fardin and
Lerouge (2012); Zhao, Shen, and Haward (2016), both of which can be seen in
the streamline plots (Fig. 6) and the PSD contours (Fig. 5) presented here as
well. The criterion developed by McKinley and co-workers is often used to
predict the onset of these purely elastic instabilities, written as McKinley,
Pakdel, and Öztekin (1996)
$\left(\frac{\lambda
U}{\mathscr{R}}\frac{\tau_{xx}}{\eta_{0}\dot{\gamma}}\right)\geq M_{crit}^{2}$
(10)
where $\mathscr{R}$ is the characteristic radius of streamline curvature and
$\tau_{xx}$ is the tensile or normal stress along the flow direction. If the
dimensionless value of the left-hand side of Eq. 10 becomes greater than or
equal to the critical value $M_{crit}^{2}$ at any position in the flow field,
an instability will be triggered in the system. For the flow of a
constant-viscosity viscoelastic polymer (Boger fluid) solution past a cylinder
confined in a channel, a value of $M_{crit}=6.08$ was found from linear
stability analysis McKinley, Pakdel, and Öztekin (1996). For the present case
of a wormlike micellar solution, however, this value should obviously differ
owing to the shear-thinning viscous properties and the breakage and
reformation dynamics of the micelles. Once this instability is triggered in
the flow field, a small and random lateral fluctuation of the birefringent
strand (shown in Fig. 8) of high elastic stresses downstream of the cylinder,
in either the $-Y$ or $+Y$ direction, creates a resistance to the flow of
fluid in that direction. This forces the fluid to pass through the other side
of the cylinder, which eventually creates an imbalance in the shear rate on
the two sides of the cylinder. If the fluid is shear-thinning, this imbalance
in the shear rate, and hence in the viscosity, is accentuated, causing the
fluid to pass through one side of the cylinder (the side at which the shear
rate is high, i.e., the viscosity is low). This explanation is in line with
that provided earlier for the flow of either WLM Haward _et al._ (2019) or
polymer Haward, Hopkins, and Shen (2020) solutions past a cylinder. Therefore,
for asymmetric flow to appear, the fluid should have shear-thinning properties
and a sufficient amount of elastic stress should accumulate downstream of the
cylinder Haward, Hopkins, and Shen (2020).
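For illustration, the left-hand side of Eq. (10) can be evaluated point-wise
on sampled fields to locate where the criterion is exceeded. The sketch below
is a hypothetical Python post-processing step, not the authors'
implementation; the relaxation time, zero-shear viscosity, and sampled fields
are placeholder inputs, and the curvature radius $\mathscr{R}$ would in
practice have to be estimated from the streamlines.

```python
import numpy as np

def M_squared(lam, U, R, tau_xx, eta0, gamma_dot):
    """Left-hand side of Eq. (10), element-wise."""
    return (lam * U / R) * (tau_xx / (eta0 * gamma_dot))

# Hypothetical sampled fields along streamlines near the cylinder:
rng = np.random.default_rng(0)
U, R = rng.uniform(0.5, 2.0, 1000), rng.uniform(0.1, 5.0, 1000)
tau_xx, gamma_dot = rng.uniform(0.0, 50.0, 1000), rng.uniform(0.5, 10.0, 1000)
M2 = M_squared(lam=1.0, U=U, R=R, tau_xx=tau_xx, eta0=1.0, gamma_dot=gamma_dot)
# Instability is expected wherever M2 >= M_crit^2; M_crit = 6.08 holds for a
# Boger fluid, and the text notes it should differ for a WLM solution.
print("fraction of points above M_crit^2:", np.mean(M2 >= 6.08 ** 2))
```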
To make this explicit, we calculate the local shear $(Wi_{s}^{l})$ and
extensional $(Wi_{e}^{l})$ Weissenberg numbers, based on the local shear rate
in the gap region and the local extension rate downstream of the cylinder
respectively, for $BR=0.34$, $Wi=2.5$ and $\xi=0.01$, at which an asymmetric
flow was observed (sub Fig. 6(c)). We find that these values (presented as
open symbols in Fig. 2) lie in the shear-thinning region (in the case of the
shear Weissenberg number) and the extensional hardening region (in the case of
the extensional Weissenberg number) of the plots presented in Fig. 2. As the
blockage ratio increases to 0.67, the values of both $Wi_{s}^{l}$ and
$Wi_{e}^{l}$ (presented as filled symbols in Fig. 2) increase due to the
increase in the flow velocity resulting from the decrease in the flow area.
One can see that although the value of $Wi_{e}^{l}$ still lies in the
extensional hardening region, the value of $Wi_{s}^{l}$ now lies in the
plateau region of the shear viscosity plot. This causes a resymmetrization of
the flow field at this blockage ratio, as shown in sub Figs. 6(a) and (b).
This is further confirmed by changing the value of $\xi$, which indicates the
scission energy needed to break a micelle. As the value of $\xi$ increases to
0.1, i.e., as the micelles become progressively easier to break, a symmetric
flow (with $|I_{s}|=0$) is observed (sub Fig. 11(c)) at the same $BR=0.34$ and
$Wi=2.5$, as opposed to the asymmetric flow seen at $\xi=0.01$.
Figure 11: Representative streamline and velocity magnitude plots at $BR=0.34$
and $Wi=2.5$. (a) and (b) $\xi=0.00001$, (c) $\xi=0.1$.
This is simply due to the fact that, although the shear-thinning property
increases with $\xi$ due to the easier breakage of micelles, the magnitude of
the elastic stresses downstream of the cylinder becomes insufficient to create
an instability in the system. On the other hand, further simulations were also
run at a lower value of $\xi=0.0001$, at which the micelles become harder to
break. A resymmetrization of the flow field is again seen, as shown in sub
Figs. 11(a) and (b) at two different times. At this value of $\xi$, although
the value of $Wi_{e}^{l}$ increases, the value of $Wi_{s}^{l}$ lies in the
plateau region shown in Fig. 2.
### IV.2 Two vertically aligned microcylinders case: Effect of gap ratio
After discussing the results for the case of a single microcylinder, we now
turn our attention to presenting and discussing the results for two
microcylinders placed vertically side-by-side in a channel, as schematically
shown in Fig. 1(c). The streamline and velocity magnitude plots for this
configuration are depicted in Fig. 12 at two gap ratios, namely 0.28 (a-d) and
0.50 (e-f), for a range of values of the Weissenberg number. As in the single
cylinder case, for a Newtonian fluid a perfect symmetry of the flow profiles
about the horizontal and vertical mid-planes passing through the origin is
present irrespective of the value of the gap ratio $G$, see sub Figs. 12(a)
and (e). The fluid passes through all three gaps available in the system;
however, at $G=0.28$, the velocity magnitude is larger in the gap regions
between either the top or bottom cylinder and the channel wall than in the gap
region between the two cylinders. In contrast, the reverse trend is seen for
the gap ratio of $G=0.50$. This is simply because, for a Newtonian fluid in
the creeping flow regime, the volumetric flow rate of the fluid is linearly
proportional to the available flow area. At $G=0.28$, the flow area is larger
in the gap between either the top or bottom cylinder and the channel wall than
between the two cylinders, whereas at $G=0.50$ the opposite holds. Below a
critical low value of the Weissenberg number, $Wi<Wi_{1}\approx 0.3$, the flow
characteristics of a WLM solution look similar to those of a Newtonian fluid
regardless of the gap ratio, as was also seen for the single cylinder case;
for instance, see the results presented in sub Figs. 12(b) and (f) for gap
ratios of 0.28 and 0.50, respectively. This is solely because, at these low
Weissenberg and Reynolds numbers, the elastic effects as well as the breakage
and reformation dynamics of the micelles are very weak, and hence the solution
behaves like a Newtonian fluid.
Figure 12: Representative streamline and velocity magnitude plots for
vertically side-by-side two microcylinders case at $\xi=0.01$.
However, as the Weissenberg number gradually increases and exceeds the first
critical Weissenberg number $(Wi_{1})$, the system undergoes a first
transition due to the increase in the elastic forces. For instance, at
$G=0.28$, a transition from the low-Weissenberg-number symmetric state to a
diverging (D) state occurs, in which the fluid passes through the gaps between
the cylinders and the channel walls and completely avoids the region between
the two cylinders, sub Fig. 12(c). The flow still remains steady and symmetric
about the horizontal mid-plane passing through the origin, as can be observed
in sub Fig. 13(a), wherein the temporal variation of the non-dimensional
stream-wise velocity is plotted at a probe located at the origin. On further
increasing the Weissenberg number beyond a second critical value $Wi>Wi_{2}$,
a second transition in the flow state is observed, in which the micellar
solution mostly prefers to flow through only the gap between the top cylinder
and the channel wall $(Y>0)$, as shown in sub Fig. 12(d). However, there is an
equal chance that most of the fluid instead passes through the gap between the
bottom cylinder and the channel wall $(Y<0)$ (not shown here). This state is
known as the asymmetric-diverging (AD) state. In this state, the flow becomes
unsteady, as is evident in sub Fig. 13(a), wherein the non-dimensional
stream-wise velocity fluctuates with time. The nature of this unsteadiness is
quasi-periodic, as the power spectrum of the velocity fluctuations is governed
by more than one dominant frequency, sub Fig. 13(b). This state is analogous
to the state observed in sub Fig. 6(d) for the case of a single cylinder. On
the other hand, at $G=0.5$, only one transition in the flow state happens,
when the Weissenberg number exceeds its first critical value $Wi>Wi_{1}$. In
this state, the whole micellar solution preferentially passes through the gap
region between the two cylinders, avoiding the gaps between the cylinders and
the channel walls. This state is known as the converging (C) state. However, a
transition from a steady flow field to an unsteady one occurs within this
state as the Weissenberg number gradually increases. For instance, one can see
that the non-dimensional stream-wise velocity reaches a steady value at
$Wi=1.5$, whereas it fluctuates as the Weissenberg number is further increased
to 2.5, sub Fig. 13(c). These velocity fluctuations are governed by two
dominant frequencies (sub Fig. 13(d)), as opposed to the broad frequency
spectrum seen at $G=0.28$ (sub Fig. 13(b)) under otherwise identical
conditions. Furthermore, the amplitude of these velocity fluctuations is much
larger in the latter case ($G=0.28$) than in the former ($G=0.5$).
Figure 13: Temporal variation of the stream-wise velocity component at a probe
location $X=0$ and $Y=0$ for the two gap ratios, $G=0.28$ (a) and $G=0.5$ (c),
and the corresponding power spectral density plots of the velocity
fluctuations at $G=0.28$ (b) and $G=0.5$ (d). All the results are presented
for the non-linear VCM model parameter $\xi=0.01$.
Figure 14: Variation of the flow asymmetry parameters for the two
microcylinders case at $G=0.28$ (a-c) and at $G=0.5$ (d-f). In sub figure (c):
(I) Newtonian-like state, (II) diverging or ’D’ state, and (III)
asymmetric-diverging or ’AD’ state; in sub figure (f): (I) Newtonian-like
state and (II) converging or ’C’ state. Figure 15: Variation of the principal
stress difference for the two microcylinders case at (a) $G=0.28$, $Wi=1.0$;
(b) $G=0.28$, $Wi=5.0$; (c) $G=0.5$, $Wi=1.0$; (d) $G=0.5$, $Wi=5.0$.
Following Hopkins et al. Hopkins, Haward, and Shen (2020), we also calculate
two asymmetry parameters, namely $I^{{}^{\prime}}_{d}$ and
$I^{{}^{\prime\prime}}_{d}$, to distinguish the flow states more
quantitatively for the two microcylinders case. These are defined as follows:
$I^{{}^{\prime}}_{d}=\frac{\frac{1}{2}\left(U_{X,u}+U_{X,l}\right)-U_{X,m}}{\frac{1}{2}\left(U_{X,u}+U_{X,l}\right)+U_{X,m}}$
(11)
$I^{{}^{\prime\prime}}_{d}=\frac{U_{X,u}-U_{X,l}}{U_{X,u}+U_{X,l}+U_{X,m}}$
(12)
In the above equations, $U_{X,u}$, $U_{X,l}$ and $U_{X,m}$ are the
time-averaged stream-wise velocities obtained at the mid-points of the upper
gap (between the top cylinder and the channel wall), the lower gap (between
the bottom cylinder and the channel wall), and the gap between the two
cylinders, respectively. The variations of $I^{{}^{\prime}}_{d}$ and
$I^{{}^{\prime\prime}}_{d}$ with the Weissenberg number are shown in sub Figs.
14(a-b) and (d-e) for the gap ratios of 0.28 and 0.5, respectively. The total
asymmetry parameter $I_{d}=I^{{}^{\prime}}_{d}+I^{{}^{\prime\prime}}_{d}$,
showing the complete bifurcation diagram, is presented in sub Figs. 14(c) and
(f) for $G=0.28$ and 0.50, respectively. The first transition in the value of
$I^{{}^{\prime}}_{d}$ occurs at $Wi\approx 0.3$, when the flow transitions
from the symmetric to the diverging (D) state. After this transition, as the
Weissenberg number gradually increases, the value of $I^{{}^{\prime}}_{d}$
also gradually increases, ultimately leveling off at a value of 1, sub Fig.
14(a). This trend in $I^{{}^{\prime}}_{d}$ suggests that almost no fluid
passes between the two cylinders as the Weissenberg number increases. The
second transition in the flow state, from the diverging (D) to the
asymmetric-diverging (AD) state, occurs when the value of
$I^{{}^{\prime\prime}}_{d}$ transitions at $Wi\approx 2.5$, sub Fig. 14(b).
The complete bifurcation diagram at $G=0.28$ is shown in sub Fig. 14(c) in
terms of the variation of the total asymmetry parameter $I_{d}$ with $Wi$. It
can be seen that the first bifurcation leads to $I_{d}\rightarrow 1$, whereas
the second bifurcation results in $I_{d}\rightarrow 1.5$. On the other hand,
at $G=0.50$, the first bifurcation occurs when the flow transitions from the
symmetric to the converging (C) state at $Wi\approx 0.15$, which is marked by
the transition of the value of $I^{{}^{\prime}}_{d}$ in sub Fig. 14(d). As the
Weissenberg number increases, the value of $I^{{}^{\prime}}_{d}$ tends to -1,
suggesting that all of the fluid prefers to flow through the gap region
between the two cylinders. The value of $I^{{}^{\prime\prime}}_{d}$ remains
almost zero over the whole range of Weissenberg numbers considered (sub Fig.
14(e)), and hence a second bifurcation is not observed at $G=0.50$ as it was
at $G=0.28$. The complete bifurcation diagram for this gap ratio is depicted
in sub Fig. 14(f).
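A minimal Python sketch of Eqs. (11) and (12) follows; this is not the
authors' code, and the velocity values are purely illustrative. The
illustrative ’AD’-like inputs drive the total parameter towards 1.5,
consistent with Fig. 14(c).

```python
def diverging_parameters(U_u, U_l, U_m):
    """Return (I'_d, I''_d) from upper-, lower-, and middle-gap velocities."""
    I1 = (0.5 * (U_u + U_l) - U_m) / (0.5 * (U_u + U_l) + U_m)
    I2 = (U_u - U_l) / (U_u + U_l + U_m)
    return I1, I2

# Illustrative time-averaged velocities for an 'AD'-like case:
I1, I2 = diverging_parameters(U_u=0.9, U_l=0.2, U_m=0.05)
print(f"I'_d = {I1:.2f}, I''_d = {I2:.2f}, I_d = {I1 + I2:.2f}")
# I'_d -> 1 marks the diverging (D) state, I'_d -> -1 the converging (C)
# state, and a non-zero I''_d marks the asymmetric-diverging (AD) state.
```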
To explain the formation of these different flow states in the case of flow
past two microcylinders, the corresponding PSD plots at these two gap ratios
are presented in Fig. 15. At $G=0.28$ and $Wi=1.0$, at which the ’D’ state
occurs, it can be observed that the gap between the two cylinders is closed by
a region of high PSD value (sub Fig. 15(a)), thereby blocking the fluid from
passing through this region. Furthermore, at this Weissenberg number, a long
birefringent strand of high PSD value also forms in the horizontal mid-plane
downstream of the cylinders. As the Weissenberg number increases further, both
the length and the magnitude of this strand increase. A small and random
lateral fluctuation of this strand in either the $+Y$ or $-Y$ direction
downstream of the cylinders blocks the flow of fluid in that direction,
resulting in the formation of the ’AD’ state (sub Fig. 15(b)). This is
reminiscent of the behaviour seen in the case of the single microcylinder. On
the other hand, at $G=0.5$, the velocity magnitude between the two cylinders
progressively increases with the Weissenberg number due to the shear-thinning
property of the micellar solution, and hence more fluid prefers to pass
through this area owing to the formation of a low-viscosity region. As a
result, the birefringent strands formed downstream of both cylinders shift
towards the channel walls (see sub Figs. 15(c) and (d)), thereby blocking the
fluid from passing through the gap regions between the cylinder surfaces and
the channel walls. This allows even more fluid to pass through the gap region
between the two cylinders. This effect accumulates as the Weissenberg number
increases further, resulting in the formation of the ’C’ state. At this gap
ratio, the space between the two cylinders is not closed by a region of high
PSD value (sub Fig. 15(c)), unlike at $G=0.28$ where such a region blocks the
flow, and therefore the fluid can easily pass through this space. As in the
single microcylinder case, we have again found that the flow bifurcation can
be completely suppressed if the non-linear VCM model parameter $\xi$ increases
to 0.1. In other words, if the micelles become progressively easier to break,
the bifurcation in the two cylinders case can also be completely avoided due
to the increase in the shear-thinning and decrease in the elastic effects,
Fig. 2. On the other hand, at a decreased value of $\xi=0.0001$, at which the
micelles become progressively harder to break, we again observe the
disappearance of these bifurcations in the flow irrespective of the gap ratio,
due to an increase in the elastic effects and a decrease in the shear-thinning
effects, as we saw for the single microcylinder case in the preceding
subsection.
Figure 16: Streamline and velocity magnitude plots for the flow of WLM
solutions through an ordered porous structure consisting of a microchannel
with multiple microcylinders placed in it at $Wi=4$ and $\xi=0.01$.
All the results presented and discussed here for the single and two
microcylinders cases can now facilitate the understanding of how a
viscoelastic fluid selects a preferential path or lane during its flow through
either an ordered or a disordered porous matrix, as observed in many prior
experiments De _et al._ (2017, 2018); Walkama, Waisbord, and Guasto (2020);
Eberhard _et al._ (2020); Müller, Vorwerk, and Brunn (1998). The onset of this
phenomenon occurs due to the flow bifurcation (either the ’D’ or ’AD’ or ’C’
state) resulting from the interaction between the shear-thinning properties of
the micellar solution and the elastic stresses generated in the system, as
explained above. Once the fluid prefers to flow through a particular gap
region in the porous medium due to the flow bifurcation, it then forms a lane
or path as it moves forward. To demonstrate this, we have carried out further
numerical simulations for an ordered porous matrix created by placing nine
microcylinders in a microchannel, as schematically shown in Fig. 16. One can
clearly see the formation of a preferential path or lane during the flow of
the micellar solution through this ordered porous matrix.
## V Conclusions
In this study, the flow phenomena of wormlike micellar (WLM) solutions past a
single microcylinder and past two vertically aligned microcylinders placed in
a rectangular channel are numerically investigated in detail in the creeping
flow regime. The two-species Vasquez-Cook-McKinley (VCM) constitutive model,
which includes both the breakage and reformation dynamics of micelles, is used
to characterize the rheological behaviour of the WLM solutions. At low
Weissenberg numbers, the flow dynamics are found to be steady and symmetric
for both the single and two microcylinders cases regardless of the blockage
ratio ($BR=\frac{D}{H}$, where $D$ is the cylinder diameter and $H$ is the
channel height) and the gap ratio ($G=\frac{S_{1}}{S_{1}+S_{2}}$, where
$S_{1}$ is the distance between the two cylinders and $S_{2}$ is the distance
between the channel wall and the cylinder surface), as seen for simple
Newtonian fluids in the creeping flow regime. However, as the Weissenberg
number gradually increases to high values, the flow features become rich in
physics and dependent on the blockage and gap ratios. For instance, in the
case of a single microcylinder, a range of blockage ratios is found in which
an asymmetric flow exists due to the occurrence of a supercritical pitchfork
bifurcation in the flow field. At higher blockage ratios, a resymmetrization
of the flow field happens. Along with this, transitions through a wide range
of flow states are found as the blockage ratio gradually increases. However,
all these observations are found to be a strong function of the non-linear VCM
model parameter $\xi$, which indicates how easy or hard it is to break a
micelle. As the value of $\xi$ increases, i.e., as it becomes progressively
easier to break a micelle (thereby increasing the shear-thinning tendency and
decreasing the elastic property), the asymmetric flow disappears entirely
irrespective of the blockage ratio. On the other hand, as the micelles become
progressively harder to break, i.e., with decreasing $\xi$, the asymmetric
flow again disappears. This suggests that there is a range of $\xi$ values at
which both the shear-thinning properties of the micellar solution and the
accumulation of elastic stresses downstream of the cylinder become
significant, thereby resulting in an asymmetric flow in the system. This
observation is in line with that presented earlier for the flow of either WLM
Haward _et al._ (2019) or polymer Haward, Hopkins, and Shen (2020) solutions
past a cylinder.
In the case of two microcylinders aligned vertically with each other, once
again, the flow field of the WLM solutions resembles that of a Newtonian
fluid, i.e., steady and symmetric, at low Weissenberg numbers. As the
Weissenberg number gradually increases to higher values, three distinct flow
states are observed in the system, namely: the diverging (’D’) state, in which
most of the fluid passes through the gaps between the cylinder surfaces and
the channel walls; the asymmetric-diverging (’AD’) state, in which the
micellar solution prefers to flow through the gap between either the top
channel wall and cylinder surface or the bottom channel wall and cylinder
surface; and the converging (’C’) state, in which most of the fluid flows
through the gap between the two cylinders. All these flow states have also
been observed in recent experiments on WLM solutions flowing past two
microcylinders Hopkins, Haward, and Shen (2020). We have found that the
occurrence of any of these states depends strongly on the values of the gap
ratio and the non-linear VCM model parameter $\xi$. Once again, the reason
behind the formation of these flow states lies in the interaction between the
shear-thinning properties and the accumulation of elastic stresses downstream
of the cylinders. Therefore, the formation of any of these flow states can be
controlled by changing the scission energy needed to break a micelle, i.e.,
the value of $\xi$. We have found the occurrence of a bistable state at
$G=0.28$ and a single stable state at $G=0.50$. In between these two values of
$G$, one can expect a critical gap ratio at which all three states co-exist (a
tristable state), as seen in recent experiments Hopkins, Haward, and Shen
(2020). However, we were unable to find that critical value of the gap ratio
in the present simulations.
Finally, based on the results and explanations presented herein for the single
and two microcylinders, we have provided the reason behind the formation of
preferential paths or lanes during the flow of either WLM or polymer solutions
through a porous medium, as observed in many earlier experiments De _et al._
(2018, 2017); Walkama, Waisbord, and Guasto (2020); Eberhard _et al._ (2020).
The onset of this phenomenon occurs due to the flow bifurcation (either the
’D’ or ’AD’ or ’C’ state) resulting from the interaction between the
shear-thinning properties of the viscoelastic fluid and the elastic stresses
generated in the system. This lane formation can happen in both polymer and
wormlike micellar solutions as long as the solution exhibits shear-thinning
properties and accumulates sufficient elastic stresses downstream of the
obstacle, as has been experimentally observed in both types of solution. For a
wormlike micellar solution, both the shear-thinning and elastic properties are
influenced by how easy or hard it is to break a micelle (set by the non-linear
parameter $\xi$ in the case of the VCM model), and hence one can say that the
lane formation in wormlike micellar solutions depends indirectly on the
breakage and reformation dynamics of the micelles.
## VI Acknowledgements
The authors would like to thank IIT Ropar for providing the funding through
the ISIRD research grant (Establishment 1/2018/IITRPR/921) to carry out this
work.
## VII Availability of data
The data that support the findings of this study are available within the
article.
## References
* Dreiss (2007) C. A. Dreiss, “Wormlike micelles: where do we stand? recent developments, linear rheology and scattering techniques,” Soft Matt. 3, 956–970 (2007).
* Dreiss and Feng (2017) C. A. Dreiss and Y. Feng, _Wormlike Micelles: Advances in Systems, Characterisation and Applications_ , Vol. 6 (Royal Society of Chemistry, 2017).
* Yang (2002) J. Yang, “Viscoelastic wormlike micelles and their applications,” Cur. Opi. Col. Int. Sci. 7, 276–281 (2002).
* Walker (2001) L. M. Walker, “Rheology and structure of worm-like micelles,” Cur. Opi. Col. Int. Sci. 6, 451–456 (2001).
* Rothstein (2008) J. P. Rothstein, “Strong flows of viscoelastic wormlike micelle solutions,” Rheol. Rev 2008, 1–46 (2008).
* Rothstein (2003) J. P. Rothstein, “Transient extensional rheology of wormlike micelle solutions,” J. Rheol. 47, 1227–1247 (2003).
* Berret (1997) J.-F. Berret, “Transient rheology of wormlike micelles,” Langmuir 13, 2227–2234 (1997).
* Schramm (2000) L. L. Schramm, _Surfactants: Fundamentals and Applications in the Petroleum Industry_ (Cambridge University Press, 2000).
* Möbius, Miller, and Fainerman (2001) D. Möbius, R. Miller, and V. B. Fainerman, _Surfactants: Chemistry, Interfacial Properties, Applications_ (Elsevier, 2001).
* Raffa _et al._ (2015) P. Raffa, D. A. Z. Wever, F. Picchioni, and A. A. Broekhuis, “Polymeric surfactants: synthesis, properties, and links to applications,” Chem. Reviews 115, 8504–8563 (2015).
* De _et al._ (2018) S. De, S. P. Koesen, R. V. Maitri, M. Golombok, J. T. Padding, and J. F. M. van Santvoort, “Flow of viscoelastic surfactants through porous media,” AIChE J. 64, 773–781 (2018).
* De _et al._ (2017) S. De, J. Van Der Schaaf, N. G. Deen, J. A. M. Kuipers, E. A. J. F. Peters, and J. T. Padding, “Lane change in flows through pillared microchannels,” Phys. Fluids 29, 113102 (2017).
* Müller, Vorwerk, and Brunn (1998) M. Müller, J. Vorwerk, and P. Brunn, “Optical studies of local flow behaviour of a non-newtonian fluid inside a porous medium,” Rheol. Acta 37, 189–194 (1998).
* Walkama, Waisbord, and Guasto (2020) D. M. Walkama, N. Waisbord, and J. S. Guasto, “Disorder suppresses chaos in viscoelastic flows,” Phys. Rev. Lett. 124, 164501 (2020).
* Eberhard _et al._ (2020) U. Eberhard, H. Seybold, E. Secchi, J. Jiménez-Martínez, P. Rühs, A. Ofner, J. Andrade, and M. Holzner, “Mapping the local viscosity of non-Newtonian fluids flowing through disordered porous structures,” Sci. Reports 10, 1–12 (2020).
* Alves, Pinho, and Oliveira (2001) M. A. Alves, F. T. Pinho, and P. J. Oliveira, “The flow of viscoelastic fluids past a cylinder: finite-volume high-resolution methods,” J. Non-Newt. Fluid Mech. 97, 207–232 (2001).
* McKinley, Armstrong, and Brown (1993) G. H. McKinley, R. C. Armstrong, and R. Brown, “The wake instability in viscoelastic flow past confined circular cylinders,” Phil. Tran. Royal Society of London. Series A: Phys. Eng. Sci. 344, 265–304 (1993).
* Hu and Joseph (1990) H. H. Hu and D. D. Joseph, “Numerical simulation of viscoelastic flow past a cylinder,” J. Non-Newt. Fluid Mech. 37, 347–377 (1990).
* Shiang _et al._ (1997) A. H. Shiang, J. C. Lin, A. Öztekin, and D. Rockwell, “Viscoelastic flow around a confined circular cylinder: measurements using high-image-density particle image velocimetry,” J. Non-Newt. Fluid Mech. 73, 29–49 (1997).
* Qin _et al._ (2019) B. Qin, P. F. Salipante, S. D. Hudson, and P. E. Arratia, “Upstream vortex and elastic wave in the viscoelastic flow around a confined cylinder,” J. Fluid Mech. 864 (2019).
* Moss and Rothstein (2010) G. R. Moss and J. P. Rothstein, “Flow of wormlike micelle solutions past a confined circular cylinder,” J. Non-Newt. Fluid Mech. 165, 1505–1515 (2010).
* Zhao, Shen, and Haward (2016) Y. Zhao, A. Q. Shen, and S. J. Haward, “Flow of wormlike micellar solutions around confined microfluidic cylinders,” Soft Matt. 12, 8666–8681 (2016).
* Haward _et al._ (2019) S. J. Haward, N. Kitajima, K. Toda-Peters, T. Takahashi, and A. Q. Shen, “Flow of wormlike micellar solutions around microfluidic cylinders with high aspect ratio and low blockage ratio,” Soft Matt. 15, 1927–1941 (2019).
* Khan and Sasmal (2020) M. B. Khan and C. Sasmal, “Effect of chain scission on flow characteristics of wormlike micellar solutions past a confined microfluidic cylinder: A numerical analysis,” Soft Matt. 16, 5261–5272 (2020).
* Varchanis _et al._ (2020) S. Varchanis, C. C. Hopkins, A. Q. Shen, J. Tsamopoulos, and S. J. Haward, “Asymmetric flows of complex fluids past confined cylinders: A comprehensive numerical study with experimental validation,” Phys. Fluids 32, 053103 (2020).
* Haward, Toda-Peters, and Shen (2018) S. J. Haward, K. Toda-Peters, and A. Q. Shen, “Steady viscoelastic flow around high-aspect-ratio, low-blockage-ratio microfluidic cylinders,” J. Non-Newt. Fluid Mech. 254, 23–35 (2018).
* Varshney and Steinberg (2017) A. Varshney and V. Steinberg, “Elastic wake instabilities in a creeping flow between two obstacles,” Phys. Rev. Fluids 2, 051301 (2017).
* Cressman, Bailey, and Goldburg (2001) J. R. Cressman, Q. Bailey, and W. I. Goldburg, “Modification of a vortex street by a polymer additive,” Phys. Fluids 13, 867–871 (2001).
* Zhu and Xi (2019) L. Zhu and L. Xi, “Vortex dynamics in low-and high-extent polymer drag reduction regimes revealed by vortex tracking and conformation analysis,” Phys. Fluids 31, 095103 (2019).
* Hopkins, Haward, and Shen (2020) C. C. Hopkins, S. J. Haward, and A. Q. Shen, “Tristability in viscoelastic flow past side-by-side microcylinders,” arXiv preprint arXiv:2010.14749 (2020).
* Mohammadigoushki and Muller (2016) H. Mohammadigoushki and S. J. Muller, “Sedimentation of a sphere in wormlike micellar fluids,” J. Rheol. 60, 587–601 (2016).
* Chen and Rothstein (2004) S. Chen and J. P. Rothstein, “Flow of a wormlike micelle solution past a falling sphere,” J. Non-Newt. Fluid Mech. 116, 205–234 (2004).
* Sasmal (2021) C. Sasmal, “Unsteady motion past a sphere translating steadily in wormlike micellar solutions: A numerical analysis,” J. Fluid Mech. In press (2021).
* Vasquez, McKinley, and Cook (2007) P. A. Vasquez, G. H. McKinley, and P. L. Cook, “A network scission model for wormlike micellar solutions: I. Model formulation and viscometric flow predictions,” J. Non-Newt. Fluid Mech. 144, 122–139 (2007).
* Cates (1987) M. E. Cates, “Reptation of living polymers: dynamics of entangled polymers in the presence of reversible chain-scission reactions,” Macromolecules 20, 2289–2296 (1987).
* Pipe _et al._ (2010) C. J. Pipe, N. J. Kim, P. A. Vasquez, L. P. Cook, and G. H. McKinley, “Wormlike micellar solutions: II. Comparison between experimental data and scission model predictions,” J. Rheol. 54, 881–913 (2010).
* Zhou, McKinley, and Cook (2014) L. Zhou, G. H. McKinley, and L. P. Cook, “Wormlike micellar solutions: III. VCM model predictions in steady and transient shearing flows,” J. Non-Newt. Fluid Mech. 211, 70–83 (2014).
* Kalb, Cromer _et al._ (2017) A. Kalb, M. Cromer, _et al._ , “Role of chain scission in cross-slot flow of wormlike micellar solutions,” Phys. Rev. Fluids 2, 071301 (2017).
* Kalb, Villasmil-Urdaneta, and Cromer (2018) A. Kalb, L. A. Villasmil-Urdaneta, and M. Cromer, “Elastic instability and secondary flow in cross-slot flow of wormlike micellar solutions,” J. Non-Newt. Fluid Mech. 262, 79–91 (2018).
* Sasmal (2020) C. Sasmal, “Flow of wormlike micellar solutions through a long micropore with step expansion and contraction,” Phys. Fluids 32, 013103 (2020).
* Mohammadigoushki _et al._ (2019) H. Mohammadigoushki, A. Dalili, L. Zhou, and P. Cook, “Transient evolution of flow profiles in a shear banding wormlike micellar solution: Experimental results and a comparison with the VCM model,” Soft Matt. 15, 5483–5494 (2019).
* Weller _et al._ (1998) H. G. Weller, G. Tabor, H. Jasak, and C. Fureby, “A tensorial approach to computational continuum mechanics using object-oriented techniques,” Com. Phys. 12, 620–631 (1998).
* Pimenta and Alves (2016) F. Pimenta and M. Alves, “rheotool,” https://github.com/fppimenta/rheoTool (2016).
* Ajiz and Jennings (1984) M. A. Ajiz and A. Jennings, “A robust incomplete choleski-conjugate gradient algorithm,” Int. J. Num. Methods Eng. 20, 949–966 (1984).
* Lee, Zhang, and Lu (2003) J. Lee, J. Zhang, and C. C. Lu, “Incomplete LU preconditioning for large scale dense complex linear systems from electromagnetic wave scattering problems,” J. Comp. Phys. 185, 158–175 (2003).
* Alves, Oliveira, and Pinho (2003) M. A. Alves, P. J. Oliveira, and F. T. Pinho, “A convergent and universally bounded interpolation scheme for the treatment of advection,” Int. J. Num. Methods Fluids 41, 47–75 (2003).
* Haward, Hopkins, and Shen (2020) S. J. Haward, C. C. Hopkins, and A. Q. Shen, “Asymmetric flow of polymer solutions around microfluidic cylinders: Interaction between shear-thinning and viscoelasticity,” J. Non-Newt. Fluid Mech. 278, 104250 (2020).
* Pakdel and McKinley (1996) P. Pakdel and G. H. McKinley, “Elastic instability and curved streamlines,” Phys. Rev. Lett. 77, 2459 (1996).
* McKinley, Pakdel, and Öztekin (1996) G. H. McKinley, P. Pakdel, and A. Öztekin, “Rheological and geometric scaling of purely elastic flow instabilities,” J. Non-Newt. Fluid Mech. 67, 19–47 (1996).
* Fardin and Lerouge (2012) M.-A. Fardin and S. Lerouge, “Instabilities in wormlike micelle systems,” The European Phys. J. E 35, 1–29 (2012).
# Tuiteamos o pongamos un tuit? Investigating the Social Constraints of
Loanword Integration in Spanish Social Media
Ian Stewart
University of Michigan
<EMAIL_ADDRESS>
Diyi Yang
Georgia Institute of Technology
<EMAIL_ADDRESS>
Jacob Eisenstein
Google Research
<EMAIL_ADDRESS>
Work completed at Georgia Institute of Technology.
###### Abstract
Speakers of non-English languages often adopt loanwords from English to
express new or unusual concepts. While these loanwords may be borrowed
unchanged, speakers may also integrate the words to fit the constraints of
their native language, e.g. creating Spanish _tuitear_ from English _tweet_.
Linguists have often considered the process of loanword integration to be more
dependent on language-internal constraints, but sociolinguistic constraints
such as speaker background remain only qualitatively understood. We
investigate the role of social context and speaker background in Spanish
speakers' use of integrated loanwords on social media. We find first that
newspaper authors use the integrated forms of loanwords and native words more
often than social media authors, showing that integration is associated with
formal domains. In social media, we find that speaker background and
expectations of formality explain loanword and native word integration, such
that authors who use more Spanish and who write to a wider audience tend to
use integrated verb forms more often. This study shows that loanword
integration reflects not only language-internal constraints but also social
expectations that vary by conversation and speaker.
## 1 Introduction
Languages exchange loanwords constantly as multilingual people adopt words
from other languages to express themselves in their native language
(Haspelmath, 2009). The English word _tweet_ has been adopted into many other
languages following the success of Twitter, e.g. producing the Spanish verb
_tuitear_. One form of adoption is known as _integration_ by which a speaker
adapts the loanword to the underlying grammar of the language, e.g. adding the
Spanish verb ending _-ear_ to the loanword _tweet_ to help the word adhere to
Spanish grammar (Poplack and Dion, 2012). Speakers may choose to use loanwords
with the prescriptively correct form, in this case adding verbal morphology,
or with less standard forms, in this case using a paraphrase such as _send a
tweet_. We show several examples of this alternation in Table 1. To further
the theoretical understanding of the process of loanword integration, this
work assesses this process from a speaker's perspective.
Loanword | Verbs | Count
---|---|---
Connect | _conectear_ , _hacer un conexión_ | 7785
Like | _likear_ , _dar un like_ | 5666
Stalk | _stalkear_ , _ser un stalker_ | 5455
Flash | _flashear_ , _hacer flash_ | 4521
Ship | _shippear_ , _hacer ship_ | 4079
Table 1: Top 5 most frequent loanwords on social media and corresponding verb
forms.
Researchers have often studied the process of loanword adoption and
integration from a language-internal perspective, such as phonological
constraints on loanword use (Kang, 2011). However, loanwords also carry
_social meaning_ (Levendis and Calude, 2019) that relates to formality and
standard language norms, and speakers may have their own intuitions about the
``correct'' way to use a loanword. Therefore, a speaker's background, such as
their multilingual knowledge (Poplack, 1988), and the social context of a
conversation (Lev-Ari and Peperkamp, 2014) may also play a role in the
integration of loanwords. Such social and behavioral factors may also help
explain the long-term _acceptance_ of loanwords into a language (Chesley,
2010; Zenner et al., 2012). To that end, we leverage multilingual data from
social media to assess the speaker-level factors that underlie loanword
integration.
Our study provides the following contributions:
* •
We first collect verb forms for a variety of English loanwords related to
technology and social life online, as well as similar _control_ pairs for
native Spanish verbs (§ 3.1, § 3.2).
* •
To test for the effect of formality, we compare the rate of integrated verb
use for loanwords and native verbs between social media posts and newspaper
articles (§ 4.1). We find that loanwords and native verbs are integrated at a
higher rate in newspaper articles, suggesting that integration is associated
with more formal language registers.
* •
Drawing on this finding, we test the role of different contextual and speaker-
background factors as they explain the choice to use integrated verbs for
loanwords (§ 3.4, § 4.2). With regression analysis on social media data, we
show that speaker background plays a large role: Latin American speakers and
high-Spanish speakers tend to choose integrated verbs for loanwords and native
words. We also find that the context of a post explains integration, because
posts with a larger presumed audience have higher rates of integration.
Lastly, we find several points of divergence between loanwords and native
verbs, suggesting some differences in social perception of the word groups.
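As an illustration of the analysis pipeline (not the authors' released code),
the regression in § 4.2 could be set up as a logistic regression over post-
and author-level covariates; the column names and synthetic data below are
hypothetical stand-ins for the variables later summarized in Table 3.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the post-level data:
rng = np.random.default_rng(0)
n = 2000
posts = pd.DataFrame({
    "has_hashtag": rng.integers(0, 2, n),
    "has_mention": rng.integers(0, 2, n),
    "post_length": rng.integers(10, 280, n),
    "spanish_rate": rng.uniform(0, 1, n),
    "region": rng.choice(["latin_america", "europe", "us"], n),
})
# Hypothetical outcome: 1 if the integrated verb form was used.
posts["integrated"] = rng.integers(0, 2, n)

model = smf.logit(
    "integrated ~ has_hashtag + has_mention + post_length"
    " + spanish_rate + C(region)", data=posts).fit()
print(model.summary())
```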
## 2 Related work
Loanword integration has mainly been studied from the perspective of
_pronunciation_ , i.e. whether a loanword adheres to the phonology of the
source or target language (Kang, 2011). Speakers may have to choose between
different valid pronunciations, e.g. pronouncing the word _Iraq_ with an
American English ``short-A'' (/Iræk/) or an Arabic ``long-A'' (/IrAk/)
(Hall-Lew et al., 2010). Traditional studies of loanword integration relied on
sociolinguistic interviews and elicitation, which often lack spontaneous
loanword use (Poplack, 1988). With the growing availability of large-scale
written corpora, researchers have tracked the adoption of loanwords over time,
particularly English loanwords into other languages (Chesley, 2010; Garley and
Hockenmaier, 2012; Zenner et al., 2012). Such large-scale corpora also allow
researchers to track _morphological_ integration (Coats, 2018; Kilgarriff,
2010), which is a word's ability to combine with bound morphemes from the
target language (e.g. _tuitear_ [``to tweet''] = _tuit_ [``tweet''] + _-ear_
[VERB.INF]). We continue this line of work and study the role of contextual
and speaker-background factors in loanword integration. This helps test
theories related to multilingual decisions Poplack et al. (2020) and how
loanwords are collectively adopted into a language (Levendis and Calude,
2019).
The loanword integration process relates partly to structure: if the source
and target language are similar (e.g. Italian and Spanish) then a speaker may
have little difficulty in integrating the loanword (Boersma et al., 2009;
Peperkamp, 2004). However, a speaker's decision to integrate a loanword also
depends on the speaker's prior experiences and the social context of the
conversation (Wohlgemuth, 2009). For one, the choice of using an integrated
loanword depends on the speaker's own background with the source language
(Poplack, 1988) and their willingness to uphold linguistic standards for the
loanword. In addition, the process of loanword integration may be related to
the _domain_ of speech, as some writing domains such as newspapers have strong
norms (Biber and Conrad, 2019) and therefore may prefer the formal version of
the loanword. Lastly, the social expectations of a given _conversation_ may
convince a speaker to use the integrated form (Lev-Ari and Peperkamp, 2014),
e.g. if their listeners are expecting a less formal response and therefore a
non-integrated loanword. While some work has tested both linguistic and social
constraints on the integration of loanwords (Garley, 2014; Sanchez, 2005),
linguists generally lack access to speech across a variety of speakers and
social contexts. This work addresses the social meaning of loanwords by
drawing on the rich speaker-level data available from social media.
## 3 Data
### 3.1 Identifying Loanwords
The use of a loanword is considered distinct from code-switching (switching
between languages), because a loanword is produced in isolation within the
``matrix'' language (Poplack, 1988; Cacoullos and Aaron, 2003). This study
concerns the alternation between integrated verbs, i.e. those in which the
loanword has been morphologically integrated into the language (_tuitear_ ``to
tweet'') and light verbs, i.e. phrases in which the loanword is used as a noun
(_poner un tweet_ ``to send a tweet''). We seek light verb phrases that are
semantically similar to the integrated verbs, to avoid possible confounds on
the choice between forms.
The list of loanword integrated verbs was identified from two resources:
Wiktionary and social media. We first collected all verbs on Spanish-language
Wiktionary that are English-origin loanwords and end in one of the standard
verb suffixes (_-(e)ar_) (accessed 1 Jan 2020:
https://es.wiktionary.org/wiki/Categoria:ES:Palabras_de_origen_ingles). Using
a sample of Reddit and Twitter data (Spanish-language posts from 1 July 2017
to 30 June 2019; for Reddit, all comments, $\sim$560,000; for Twitter, a 1%
sample from the Twitter stream, $\sim$110,000,000), we collected all words in
Spanish-language posts tagged using langid Lui and Baldwin (2012) that match
the structure ENGLISH_WORD \+ _-(e)ar_ (English words collected from a
standard spellcheck dictionary, accessed 1 Nov 2019:
http://wordlist.aspell.net/dicts/, and filtered to exclude words shorter than
$n=4$ characters), under the assumption that most loanword verbs use the
_-(e)ar_ conjugation (Rodney and Jubilado, 2012). From the combined set of
verbs, we removed all cases of ambiguity, e.g. _plantar_ , which can be formed
by English _plant_ \+ _-ar_ but is also a native Spanish word.
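The matching step described above can be sketched as follows in Python (not
the authors' pipeline; the tiny inline word set and the example post are
placeholders, since in practice the stems come from the aspell dictionary).
langid is used to keep Spanish-language posts, and each token is tested for an
English stem under the _-(e)ar_ conjugation.

```python
import re
import langid

# In practice, English stems come from a standard spellcheck dictionary;
# a tiny inline set keeps this sketch self-contained.
english_words = {"stalk", "like", "ship", "flash", "connect"}

def stems(token):
    """Possible English stems under the -(e)ar conjugation,
    e.g. 'stalkear' -> ['stalk', 'stalke']."""
    out = []
    if token.endswith("ear"):
        out.append(token[:-3])
    if token.endswith("ar"):
        out.append(token[:-2])
    return out

def loanword_verb_candidates(posts):
    candidates = set()
    for post in posts:
        if langid.classify(post)[0] != "es":   # keep Spanish posts only
            continue
        for token in re.findall(r"[a-z]+", post.lower()):
            if any(s in english_words for s in stems(token)):
                candidates.add(token)          # e.g. "stalkear", "likear"
    return candidates

print(loanword_verb_candidates(["Voy a stalkear su perfil jaja"]))
```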
For each loanword, we identified a corresponding light verb phrase with a
meaning similar to the integrated form. Spanish has a closed class of light
verbs used to form phrases with nouns (Buckingham, 2013), such as _tomar_
(``take'') in _tomar un viaje_ (``take a vacation''). We used dictionary
definitions from Wiktionary and WordReference to identify valid light verb
forms, and we queried the internet for the remaining loanwords to determine
their validity (e.g. comparing search results for _hacer un tweet_ versus
_poner un tweet_). We validated the loanword pairs with Spanish linguistics
experts familiar with the process of loanword integration. The experts removed
several loanwords that may have been considered native words by Spanish
speakers (e.g., Spanish speakers may not consider _flipar_ (“to flip”) to be a
loanword due to its older status).
This process yielded 120 integrated and light verb pairs that we used to
define the dependent variable of the study, i.e. integrated verb use vs. light
verb use. We show examples of the most frequent loanword and light verb pairs
in Table 1. Many of the words identified relate to technology and online
behavior (e.g. _likear_ ``to like (on social media)''), which represents a
sample bias. Because we study loanword use specifically on Twitter, it is
likely that the loanwords here relate more to the interests of the platform
community rather than the general population.
### 3.2 Identifying Native Verbs
Studying loanwords in isolation can yield interesting results, but we must
also determine whether the patterns of usage reflect constraints on Spanish
verbs in general (Wichmann and Wohlgemuth, 2008). To address this concern, we
collect an additional set of verbs that are native to Spanish.
We first identified light verb constructions from several grammar blogs and
dictionaries (e.g. the “support verbs” listed at
https://comunicarbien.wordpress.com/2011/08/06/verbos-de-apoyo/, accessed 1
Jan 2020) and generated the corresponding integrated verb by adding a standard
verb suffix to the noun phrase and verifying it with a dictionary (e.g. for
the light verb construction _tomar un viaje_ (“to take a trip”) with the noun
_viaje_ , we generated the integrated verb _viajar_ (“to travel”)). This
process yielded 49 pairs of
native integrated and light verbs that serve as a baseline to compare with
loanword use. We extracted all uses of these native verbs from the set of
loanword-using authors mentioned above. As shown in Table 2, the native verbs
occur more frequently than the loanword verbs, which compensates for the fact
that we have fewer word types for native verbs.
Native word | Verbs | Count
---|---|---
Dream | _soñar_ , _tener un sueño_ | 39,392
Buy | _comprar_ , _hacer la compra_ | 36,337
End | _terminar_ , _poner término_ | 34,234
Use | _usar_ , _hacer uso_ | 30,834
Test | _probar_ , _poner a prueba_ | 29,930
Table 2: Top 5 most frequent native word pairs and corresponding verb forms on
social media.
The complete list of loanwords and native verbs is provided in Appendix A for
replicability and for linguists to build upon in future work.
### 3.3 Collecting Loanword Author Data
For our social media data, we collect posts from a 1% Twitter archive sample
of Spanish-language posts, ranging from 1 July 2017 to 30 June 2019. We match
all original (non-RT) posts that contain at least one loanword verb form,
either integrated or light (we searched for the most frequently inflected
forms of each verb, covering all forms of the indicative present, simple past
and imperfect). We also remove all verb forms that are ambiguous: e.g. the
verb _acceso_ ("I access") has the same spelling as the noun _acceso_
("access"). This yields roughly 87,000 posts from 80,000 unique authors over
the period of study, of which roughly 23,000 posts from 20,000 authors were
used in the regression after filtering for the available variables described
in § 3.4.
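To illustrate the inflection search, the following sketch (ours; the helper name is hypothetical) enumerates the regular conjugations queried for an _-ar_ loanword verb:

```python
def inflected_forms(infinitive):
    """Generate the regular forms searched for: indicative present,
    simple past (preterite) and imperfect of an -ar verb."""
    stem = infinitive[:-2]  # drop the final "-ar"
    present   = ["o", "as", "a", "amos", "áis", "an"]
    preterite = ["é", "aste", "ó", "amos", "asteis", "aron"]
    imperfect = ["aba", "abas", "aba", "ábamos", "abais", "aban"]
    return {stem + ending for ending in present + preterite + imperfect}

print(sorted(inflected_forms("likear")))
# ['likeaba', 'likeabais', 'likeaban', ..., 'likeo', 'likeó']
```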
Next, we collect all available prior posts from these loanword authors using
both the original archive sample (2017-2019) and the authors' full timelines
(2014-2019), collected in March 2020. We recovered roughly 10 million posts
from the authors (about 100 extra posts per author), from which we extracted
native verb use and speaker background variables for analysis (see Table 3).
### 3.4 Extracting Speaker-Level Variables
Variable type | Name | Description | Mean / distribution
---|---|---|---
_Formality (loanword posts / native word posts)_ | | |
Post content | Hashtag | Whether post contains a hashtag. | 8.1% / 6.6%
 | Mention | Whether post contains an @-mention. | 35.2% / 7.4%
 | Post length | Length of post in characters, excluding the verb phrase. | 88 / 131
_Background (all authors)_ | | |
Posting behavior | Activity | Mean posts per day. | 8.5
 | Content re-sharing | Percent of prior posts that are retweets. | 35.2%
 | Link sharing | Percent of prior posts that contain a URL. | 0.5%
Location | Location | Author's geographic region based on self-reported location. | 54.6% UNK, 34.7% Latin America, 7.0% Europe, 2.7% US, 0.9% Other
Language | Language type | Percent of prior posts written in Spanish. | 83.8% high Spanish, 15.5% medium Spanish, 0.7% low Spanish
 | Verb use | Percent of prior native verb posts that contain an integrated verb. | 95.4%
Table 3: Summary of all social media variables used in the study.
For the speaker-level analysis, we seek to assess the relative importance of
several author-level and post-level factors in explaining loanword
integration. Following prior work in loanword use, we investigate factors
related to formality Biber and Conrad (2019) and aspects of speaker background
Poplack and Dion (2012) that reflect support for language standards. We
therefore use the following metrics to predict verb integration.
* Formality:
  * Post features: First, we approximate a post's intended _audience_ by marking the presence of a hashtag (larger audience) and the presence of an @-mention (smaller audience). We also use the length of a post, excluding the verb phrase, to identify posts that are longer and therefore potentially more formal, following prior work on perceptions of formality in online communication (Chhaya et al., 2018; Pavlick and Tetreault, 2016).
* Speaker background:
  * Posting behavior: Authors who post frequently may have more extensive knowledge of linguistic norms online and therefore adhere to the standard integrated verb form. For this metric, we extract the author's mean number of prior posts per day. In addition, authors who share more content online may also be more connected to online norms and may therefore adopt the more standard verb form. We compute an author's rate of sharing as (1) the percentage of prior posts that contain a URL and (2) the percentage of prior posts that are retweets.
  * Location: The Spanish dialects spoken in Latin America have diverged significantly from Castilian Spanish (Lipski, 1994), which may result in different patterns of loanword adoption. Following prior work (Kariryaa et al., 2018), we use an author's self-reported profile location as a location marker, assigning an author to a particular country based on unambiguous country, state or city keywords. We identify authors' locations at the region level: Latin America, US, Europe, or other (we acknowledge the considerable diversity of Spanish dialects spoken in Latin America (Buckingham, 2013), but we analyze at the region level to avoid data sparsity).
  * Language use: Bilingual speakers may be more likely to use the light verb forms of the loanwords, because bilingual speakers often use paraphrases to address unfamiliar concepts (Jenkins, 2003) and may perceive light verb constructions differently (Doğruöz and Nakov, 2014). We tag the authors' prior posts using langid, filtering to posts with a confidence score above 90% to reduce the likelihood of code-switching, and compute the rate of Spanish use for all authors who have written at least 5 posts. We then bin language use under the assumption that its effect may not be linear: authors who use exclusively Spanish (100%) are assumed to be "strict" monolingual speakers, as compared to more "relaxed" bilingual (0-50%) or mid-range bilingual (50-100%) speakers. A sketch of this tagging-and-binning step appears below.
In addition to language choice, speakers who use more integrated native verbs
may also use more integrated forms for loanwords. We compute the authors' rate
of prior integrated verb use as the number of integrated native verb tokens (§
3.2) normalized by the total number of native verb tokens.
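A minimal sketch of the language-use variable (using langid's normalized probabilities; the bin labels follow the description above):

```python
from langid.langid import LanguageIdentifier, model

identifier = LanguageIdentifier.from_modelstring(model, norm_probs=True)

def spanish_rate(posts, min_posts=5):
    """Rate of Spanish use over an author's confidently tagged prior posts."""
    confident = []
    for text in posts:
        lang, prob = identifier.classify(text)
        if prob > 0.9:  # drop low-confidence (possibly code-switched) posts
            confident.append(lang)
    if len(confident) < min_posts:
        return None  # author has too few usable posts
    return sum(lang == "es" for lang in confident) / len(confident)

def language_bin(rate):
    if rate is None:
        return "unknown"
    if rate == 1.0:
        return "high Spanish"   # "strict" monolingual (100%)
    return "medium Spanish" if rate >= 0.5 else "low Spanish"
```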
All variables in the social media data are summarized in Table 3. Note that we
choose not to analyze individuals' gender and age due to the relative
difficulty of extracting such information from social media data, particularly
in non-English contexts (Wang et al., 2019).
## 4 Results
### 4.1 Domain Differences in Loanword Integration
Figure 1: Integrated verb use across social media text (blue/left) and
newspaper text (orange/right). Each unit is the ratio of integrated verb use
for a single word type.
The first hypothesis to test concerns the role of domain. As newspapers are
generally considered more formal than social media (Biber and Conrad, 2019;
Pavlick and Tetreault, 2016), we expect loanwords and native verbs to be
produced more often in the presumably more formal integrated forms.
H1: Writers in a more formal domain will tend to use the integrated form of
loanwords at a higher rate than writers in a less formal domain.
To test this hypothesis, we collect data from a corpus of Spanish-language
newspapers from 21 different Spanish-speaking countries and regions (News On
the Web Spanish, approximately 7 billion tokens over 25 million documents,
accessed May 2020: https://www.corpusdelespanol.org/now/). We collect the 50
most frequent loanword pairs and native verb pairs from the social media data
and compute their raw frequencies in the newspaper data. For each pair of
integrated verb and light verb, we compute the rate of integrated verb use as
the normalized frequency of the integrated verb. Formally, for a word base
$w$, the set of all integrated verb forms $\mathcal{W}_{i,w}$, and the set of
all light verb forms for the word $\mathcal{W}_{l,w}$, the rate of integrated
verb use $I_{w}$ is defined as:
$I_{w}=\frac{\sum_{w_{i}\in\mathcal{W}_{i,w}}\text{count}(w_{i})}{\sum_{w^{\prime}\in\mathcal{W}_{i,w}\cup\mathcal{W}_{l,w}}\text{count}(w^{\prime})}$
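As a minimal illustration of this computation (the counts below are made up, chosen only to reproduce the social media rate of $0.115$ reported for _like_ in Table 4):

```python
def integration_rate(counts, integrated_forms, light_forms):
    """I_w: integrated-verb tokens over all tokens for the word base w."""
    integrated = sum(counts.get(w, 0) for w in integrated_forms)
    total = integrated + sum(counts.get(w, 0) for w in light_forms)
    return integrated / total if total else float("nan")

counts = {"likeo": 40, "likea": 75, "dar like": 600, "poner like": 285}
print(integration_rate(counts, {"likeo", "likea"}, {"dar like", "poner like"}))
# 115 / 1000 = 0.115
```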
We show the rates of integration across domains and locations in Figure 1. The
first key finding is that the rate of integration is not significantly
different for newspapers across locations, despite known dialect differences
across regions. In addition, we see that for loanwords both social media and
newspapers favor the integrated form over the light verb form, in
correspondence with the expected ``hierarchy'' of loanword adaptation that
places light verbs below integration (Wohlgemuth, 2009). With respect to H1,
we see that newspaper writers consistently use the integrated form of
loanwords and native verbs more frequently than the social media authors.
Loanwords are integrated at a mean per-word rate of $91\%$ in the newspapers
as compared to $82\%$ in social media, while native verbs have a rate of
$93\%$ in the newspapers and $82\%$ in social media (both differences are
significant with $p<0.01$ by Wilcoxon's signed-rank test). We show in Figure 1
that this difference holds across all regions ($p<0.05$ across all location
pairs except loanwords in the US and native verbs in Latin America, by
Wilcoxon's test with Bonferroni correction).
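The domain comparison is a paired test over per-word rates; a sketch with placeholder data (the arrays stand in for the 50 per-word $I_{w}$ values in each domain):

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
social = np.clip(rng.normal(0.82, 0.15, 50), 0, 1)          # I_w on social media
news = np.clip(social + rng.normal(0.09, 0.05, 50), 0, 1)   # I_w in newspapers

stat, p = wilcoxon(social, news)  # paired signed-rank test over word types
print(f"W={stat:.1f}, p={p:.3g}")  # the paper reports p < 0.01 for both groups
```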
The consistent difference between social media and newspaper writing suggests
that the domain of newspaper writing has more formal standards with respect to
the use of both loanwords and native words (Geeraerts, 2003). Such consistency
may reflect differences in how newspaper writers are expected to cover
emerging phenomena such as new loanwords. A newspaper writer might be
encouraged to use the formal version of a newer loanword to maximize the
likelihood of their readers' understanding the word Iwasaki (1994); Llopis and
Sánchez-Lafuente (2009). To investigate this in more detail, we show the
loanwords with the highest absolute difference in integration rate across
social media and newspapers in Table 4. The loanwords that are integrated more
often in newspapers seem to be relatively newer and possibly related more to
online social media activity (e.g. _block_ , _hype_), while the loanwords that
are integrated more often on social media seem to be somewhat older and
relevant to a wider range of activities (e.g. _host_ , _rock_). This finding
about domain reinforces the _social_ meaning of loanword use, which informs
the following speaker-level analysis.
Word | $I_{w,\text{social media}}$ | $I_{w,\text{newspaper}}$ | $\Delta\>I_{w}$
---|---|---|---
zap | 0.179 | 1.000 | -0.821
block | 0.153 | 0.857 | -0.704
hype | 0.393 | 0.995 | -0.602
link | 0.335 | 0.872 | -0.536
like | 0.115 | 0.649 | -0.534
… | … | … | …
pitch | 0.998 | 0.988 | 0.011
host | 0.990 | 0.972 | 0.018
google | 0.561 | 0.531 | 0.030
rock | 0.787 | 0.648 | 0.139
DM | 1.000 | 0.120 | 0.880
Table 4: Loanwords with biggest differences in integration between newspaper
and social media.
### 4.2 Speaker-level factors in loanword integration
We now turn to speaker-level data to assess the relative impact of different
social factors in the use of integrated loanwords. If integrated verbs are
considered more formal than light verbs (§ 4.1), then we expect factors
relevant to formality and speech standards to predict integrated verb use for
both loanwords and native verbs:
H2: Speakers in social contexts that prefer formal language standards, and
with backgrounds that support more standard language use, will tend to use
integrated loanwords.
We use logistic regression to predict the use of an integrated verb (1/0) for
a given loanword or native word, using different subsets of post-level and
speaker-level features specified in § 3.4. We add fixed effects for all
sufficiently frequent authors and word types (all authors and words with a
count less than $N=5$ were assigned to a RARE category to avoid sparsity). To
avoid overfitting the fixed-effect variables, we use ridge regression and
choose the L2 weight to maximize likelihood on held-out data (the weight is
selected by grid search on a 10% test split of the data, for each separate
regression). For the default values of categorical variables in the
regression, we specify "Unknown" for author location and "low Spanish" for
prior language use. All scalar variables (post length, post activity, content
sharing, link sharing, native integrated verb use) were log-transformed and
Z-normalized before regression.
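A minimal sketch of this regression setup with scikit-learn (feature matrix, dimensions, and grid are placeholders; fixed effects would enter as one-hot columns, and scalar features are assumed already log-transformed and Z-normalized):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(25436, 40))     # post-, speaker-, and fixed-effect features
y = rng.integers(0, 2, size=25436)   # 1 = integrated verb, 0 = light verb
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.1, random_state=0)

best = None
for C in [0.01, 0.1, 1.0, 10.0]:     # grid over the inverse L2 weight
    clf = LogisticRegression(penalty="l2", C=C, max_iter=1000).fit(X_tr, y_tr)
    # held-out log-likelihood, the selection criterion described above
    ll = clf.predict_log_proba(X_te)[np.arange(len(y_te)), y_te].sum()
    if best is None or ll > best[0]:
        best = (ll, C, clf)
print("selected C:", best[1])
```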
We show the social media regression results in Table 5. The following
significant results emerge from the analysis.
Variable type | Variable | Native words $\beta$ | S.E. | Loanwords $\beta$ | S.E.
---|---|---|---|---|---
| Intercept | 2.572* | 0.030 | 1.376* | 0.234
Formality | | | | |
Post features | Has hashtag | 0.099* | 0.010 | 0.079 | 0.026
| Has mention | -0.050* | 0.009 | -0.087* | 0.015
| Post length | -0.046* | 0.002 | 0.051* | 0.008
Background | | | | |
Author behavior | Post activity | 0.006 | 0.003 | -0.034 | 0.011
| URL sharing | -0.015* | 0.003 | 0.024* | 0.010
| RT sharing | 0.025* | 0.003 | -0.010 | 0.009
Location | Latin America | 0.133* | 0.005 | 0.228* | 0.016
| Europe | -0.223* | 0.010 | -0.367* | 0.033
| US | 0.008 | 0.015 | -0.143 | 0.048
| Other | 0.171* | 0.025 | -0.193 | 0.082
Language | High Spanish | 0.606* | 0.031 | 0.589* | 0.110
| Medium Spanish | 0.687* | 0.030 | 0.424* | 0.107
| Integrated verb use | | | -0.006 | 0.007
Sample size | | 235969 | | 25436 |
Likelihood ratio (vs. null) | | 2427* | | 3995* |
Table 5: Regression results for predicting integrated verb use for native verbs and loanwords.
* indicates $p<0.01$, otherwise $p>0.01$; Bonferroni correction applied for
significance testing for individual coefficients. Bold indicates variables for
which effects are significant across both conditions and point in opposite
directions.
#### 4.2.1 Speaker-level Factors: Formality
First, we find the following trends with respect to formality.
##### Post context matters
Speakers tend to use the integrated form more often for native verbs when
using hashtags ($\beta$=0.099) and less often for both loanwords and native
verbs when using @-mentions ($\beta$=-0.087 loanwords, $\beta$=-0.050 native
verbs). Prior work demonstrated a similar effect with nonstandard English
words on Twitter (Pavalanathan and Eisenstein, 2015) and found that hashtags
and @-mentions correlated with larger and smaller audience expectations. Since
formal language is often expected with a larger audience (Bell, 1984), Spanish
speakers may naturally choose the integrated verb forms to adapt to a larger
potential audience. For post length, we find that longer posts tend to have
integrated verbs more often for loanwords ($\beta$=0.051) and less often for
native verbs ($\beta$=-0.046). This effect may be related to post content
(e.g. including direct objects for loanword verbs) but it may also reflect
inherent differences in the perceptions of loanwords and native verbs.
#### 4.2.2 Speaker-level factors: Background
For loanword and native verb integration, we find the following trends with
respect to speaker background.
##### Information sharing affects integration differently
We find that frequent URL-sharing speakers are more likely to use the
integrated form for loanwords ($\beta$=0.024), and less likely to use the
integrated form for native verbs ($\beta$=-0.015). If we assume that people
who share more URLs are more interested in sharing new information (Holton et
al., 2014), then these people may also be more likely to use formal verb forms
for newer words (loanwords) and informal forms for older words (native verbs),
due to the speakers' increased awareness of how new information should be
treated. For RT sharing, we find that authors who frequently retweet others
are more likely to use the integrated form of native verbs ($\beta$=0.025),
which suggests that authors with more social ties (higher network
embeddedness; cf. Milroy and Milroy 1985) tend toward more standard language
choices for frequently used words, i.e. native verbs.
##### Latin American authors prefer integration
For both word groups, Latin American authors use integrated verbs at a higher
rate ($\beta$=0.228 for loanwords, $\beta$=0.133 for native verbs). Prior
studies in World Englishes have found that dialects in post-colonial countries
such as India sometimes adopt more linguistically conservative features
(Sharma, 2017), which may be reflected in the higher rate of verb integration
in Latin America (cf. conservative pronunciation in Latin American Spanish;
Guy 2014). In contrast, authors from Europe tend to use less verb integration
($\beta$=-0.367 for loanwords, $\beta$=-0.223 for native verbs), which
suggests that using standard forms is less important for mainland Spain
authors due to the dialect's relative prestige (Hernández-Campoy and Villena-
Ponsoda, 2009).
##### More integration for monolinguals
For loanwords, high-Spanish authors use integrated verbs at a higher rate than
low-Spanish authors ($\beta$=0.589), and medium-Spanish authors use integrated
verbs at a slightly higher rate ($\beta$=0.424). For native verbs, both high-
Spanish and medium-Spanish authors use integrated verbs at a higher rate than
low-Spanish authors ($\beta$=0.606 high-Spanish, $\beta$=0.687 medium-
Spanish). Integrated verbs may be considered canonical and therefore more
accessible for monolingual speakers, while light verbs could be more readily
accessible to bilingual speakers who may default to simpler light verb
constructions (González-Vilbazo and López, 2011). For example, the loanword
phrase _dar un like_ may sound more natural to a bilingual speaker who is
uncertain of the acceptability of _likear_.
We note that for some of the variables such as post length and URL sharing,
the effect direction for loanword integration is the opposite of the direction
for native word integration. The use of loanwords may bear a different social
meaning for speakers as compared to native words (e.g. speakers consider
loanwords to be newer in their vocabulary, Levendis and Calude 2019), which
results in different effects on integration for the same social variable.
However, we leave more careful investigation of the differences between the
word types for future work.
## 5 Discussion
We investigate the tendency for Spanish-speaking authors to use integrated
verb forms for English loanwords, with a corpus of social media data augmented
with speaker-level information.
The study provides a data set of loanwords and native words that linguists can
use to investigate specific contexts of usage (e.g. in quotations, Iwasaki
1994). The study also offers a pipeline for collecting various forms of
loanwords using structured data (dictionaries) and data ``in the wild.'' More
broadly, our work demonstrates the utility of social media as a window into
speaker-level and contextual factors that underlie multilingual phenomena such
as loanwords.
Our analyses show that integrated verb use for loanwords is clearly connected
to underlying expectations of formality and standardness in language use,
which also apply to native verbs. The findings of this study provide
additional context to prior work that showed some social correlates of
loanword integration such as neighborhood composition (Poplack, 1988). The
decision to use integrated verb forms appears to rely not just on the
speakers' background (e.g. linguistic knowledge) but also on utterance-level
context (e.g. audience), suggesting that the process is not ``inevitable''
(Poplack and Dion, 2012). Furthermore, the differences in domain-level and
speaker-level effects across word groups (and within word groups, e.g. Table
4) suggest different social perceptions, i.e. ``marked'' loanwords versus
older, well-accepted native verbs. Such implicit social evaluations can help
predict the long-term entrenchment of loanwords in a speech community Chesley
(2010); Zenner et al. (2012), and shed light on processes of cross-cultural
contact and attitudes Lev-Ari and Peperkamp (2014).
This study has several limitations that merit further research. First, the
findings are narrowly focused on one form of integration, i.e. the alternation
between different verb forms. Future work should consider other forms of
loanword integration on social media, including in orthography (Eng.
_football_ $\rightarrow$ Sp. _fútbol_) and syntax (_el key_ vs. _la key_ ``the
key'') (Montes-Alcalá and Shin, 2011; Vendelin and Peperkamp, 2006). It may be
the case that some forms of loanword integration are more socially salient
than others (Myers-Scotton, 1998) and therefore more strongly constrained by
factors such as audience expectations. In addition, this analysis found some
location-level effects but did not zoom in to the level of the community,
which is important since different speech communities may have different
perceptions of the social value of loanwords (Aaron, 2015; Garley, 2014). As
people of different linguistic backgrounds continue to interact on social
media (Kim et al., 2014), it will be important to consider how different sub-
communities on the platform adopt loanwords from one another, as such
processes can lead to long-term language change. Lastly, different languages
may have different expectations about the social meaning of integrated
loanword use, e.g. integrated verbs in Japanese may seem less formal than
their light verb equivalent (Tsujimura and Davis, 2011). More cross-linguistic
work is needed to understand how well the social ramifications of loanword
integration can be generalized (Haspelmath, 2009) and whether they reflect
culture-specific norms rather than inherent trends about language and
socialization.
## Acknowledgments
This project was funded under NSF CAREER grant #1452443 to JE and a Data
Curation Award from Georgia Institute of Technology's Institute for Data
Engineering and Science (IDEaS) to DY. The authors thank Dr. Cecilia Montes-
Alcalá and Dr. Lewis Chad Howe for their feedback on the validity of the
loanword and native word pairs, as well as their feedback on early paper
drafts. The authors also thank members of the Computational Linguistics lab
and the SALT Lab at Georgia Institute of Technology for their feedback.
## References
* Aaron (2015) Jessi Elana Aaron. 2015. Lone English-origin nouns in Spanish: The precedence of community norms. _International Journal of Bilingualism_ , 19(4):459–480.
* Bell (1984) Allan Bell. 1984. Language style as audience design. _Language in society_ , 13(2):145–204.
* Biber and Conrad (2019) Douglas Biber and Susan Conrad. 2019. _Register, genre, and style_. Cambridge University Press.
* Boersma et al. (2009) Paul Boersma, Silke Hamann, et al. 2009. Loanword adaptation as first-language phonological perception. _Loanword phonology_ , pages 11–58.
* Buckingham (2013) Louisa Buckingham. 2013. Light verb constructions in Latin American newspapers: Creative variants and coinages. _Spanish in Context_ , 10(1):114–135.
* Cacoullos and Aaron (2003) Rena Torres Cacoullos and Jessi Elana Aaron. 2003. Bare English-origin nouns in Spanish: Rates, constraints, and discourse functions. _Language Variation and Change_ , 15(3):289–328.
* Chesley (2010) Paula Chesley. 2010. Lexical borrowings in French: Anglicisms as a separate phenomenon. _Journal of French Language Studies_ , 20(3):231–251.
* Chhaya et al. (2018) Niyati Chhaya, Kushal Chawla, Tanya Goyal, Projjal Chanda, and Jaya Singh. 2018. Frustrated, polite, or formal: Quantifying feelings and tone in email. In _Proceedings of the Second Workshop on Computational Modeling of People’s Opinions, Personality, and Emotions in Social Media_ , pages 76–86.
* Coats (2018) Steven Coats. 2018. Variation of New German Verbal Anglicisms in a Social Media Corpus. In _Proceedings of the 6th conference on CMC and social media corpora for the humanities_.
* Doğruöz and Nakov (2014) A. Seza Doğruöz and Preslav Nakov. 2014. Predicting dialect variation in immigrant contexts using light verb constructions. In _EMNLP_ , pages 1391–1395.
* Garley (2014) Matt Garley. 2014. Seen and not heard: The relationship of orthography, morphology, and phonology in loanword adaptation in the german hip hop community. _Discourse, Context & Media_, 3:27–36.
* Garley and Hockenmaier (2012) Matt Garley and Julia Hockenmaier. 2012. Beefmoves: dissemination, diversity, and dynamics of English borrowings in a German hip hop forum. In _ACL_ , pages 135–139.
* Geeraerts (2003) Dirk Geeraerts. 2003. Cultural models of linguistic standardization. In René Dirven, Roslyn Frank, and Martin Pütz, editors, _Cognitive models in language and thought. Ideology, metaphors and meanings_ , volume 2568.
* González-Vilbazo and López (2011) Kay González-Vilbazo and Luis López. 2011. Some properties of light verbs in code-switching. _Lingua_ , 121(5):832–850.
* Guy (2014) Gregory Guy. 2014. Variation and change in Latin American Spanish and Portuguese. In _Portuguese-Spanish interfaces: Diachrony, synchrony, and contact_ , pages 443–464.
* Hall-Lew et al. (2010) Lauren Hall-Lew, Elizabeth Coppock, and Rebecca L Starr. 2010. Indexing political persuasion: Variation in the Iraq vowels. _American Speech_ , 85(1):91–102.
* Haspelmath (2009) Martin Haspelmath. 2009. Lexical borrowing: Concepts and issues. In _Loanwords in the world's language: A Comparative Handbook_ , pages 944–967.
* Hernández-Campoy and Villena-Ponsoda (2009) Juan Manuel Hernández-Campoy and Juan Andrés Villena-Ponsoda. 2009. Standardness and nonstandardness in Spain: dialect attrition and revitalization of regional dialects of Spanish. _International Journal of the Sociology of Language_ , 2009(196-197):181–214.
* Holton et al. (2014) Avery E Holton, Kang Baek, Mark Coddington, and Carolyn Yaschur. 2014. Seeking and sharing: Motivations for linking on Twitter. _Communication Research Reports_ , 31(1):33–40.
* Iwasaki (1994) Yasufumi Iwasaki. 1994. Englishization of Japanese and acculturation of English to Japanese culture. _World Englishes_ , 13(2):261–272.
* Jenkins (2003) Devin L. Jenkins. 2003. Bilingual Verb Constructions in Southwestern Spanish. _Bilingual Review_ , pages 195–204.
* Kang (2011) Yoonjung Kang. 2011. Loanword phonology. _The Blackwell companion to phonology_ , pages 1–25.
* Kariryaa et al. (2018) Ankit Kariryaa, Isaac Johnson, Johannes Schöning, and Brent Hecht. 2018. Defining and predicting the localness of volunteered geographic information using ground truth data. In _CHI_ , pages 1–12.
* Kilgarriff (2010) Adam Kilgarriff. 2010. Google the verb. _Language Resources and Evaluation_ , 44(3):281–290.
* Kim et al. (2014) Suin Kim, Ingmar Weber, Li Wei, and Alice Oh. 2014. Sociolinguistic analysis of Twitter in multilingual societies. In _Proceedings of the 25th ACM conference on Hypertext and social media_ , pages 243–248.
* Lev-Ari and Peperkamp (2014) Shiri Lev-Ari and Sharon Peperkamp. 2014. An experimental study of the role of social factors in language change: The case of loanword adaptations. _Laboratory Phonology_ , 5(3):379–401.
* Levendis and Calude (2019) Katharine Levendis and Andreea Calude. 2019. Perception and Flagging of Loanwords–A diachronic case-study of Māori loanwords in New Zealand English. _Ampersand_ , 6:100056.
* Lipski (1994) John Lipski. 1994. _Latin American Spanish_. Longman, New York.
* Llopis and Sánchez-Lafuente (2009) María Ángeles Orts Llopis and Ángela Almela Sánchez-Lafuente. 2009. Translating the Spanish economic discourse of the crisis: Dealing with the inevitability of English loanwords. _International Journal of English Studies_ , 9(3):133–158.
* Lui and Baldwin (2012) Marco Lui and Timothy Baldwin. 2012. langid. py: An off-the-shelf language identification tool. In _ACL_ , pages 25–30.
* Milroy and Milroy (1985) James Milroy and Lesley Milroy. 1985. Linguistic change, social network and speaker innovation. _Journal of linguistics_ , 21(2):339–384.
* Montes-Alcalá and Shin (2011) Cecilia Montes-Alcalá and Naomi Lapidus Shin. 2011. Las keys versus el key: Feminine gender assignment in mixed-language texts. _Spanish in context_ , 8(1):119–143.
* Myers-Scotton (1998) Carol Myers-Scotton. 1998. A theoretical introduction to the markedness model. In _Codes and consequences: Choosing linguistic varieties_.
* Pavalanathan and Eisenstein (2015) Umashanthi Pavalanathan and Jacob Eisenstein. 2015. Audience-modulated variation in online social media. _American Speech_ , 90(2):187–213.
* Pavlick and Tetreault (2016) Ellie Pavlick and Joel Tetreault. 2016. An empirical analysis of formality in online communication. _Transactions of the Association for Computational Linguistics_ , 4:61–74.
* Peperkamp (2004) Sharon Peperkamp. 2004. A psycholinguistic theory of loanword adaptations. In _Annual Meeting of the Berkeley Linguistics Society_ , volume 30, pages 341–352.
* Poplack (1988) Shana Poplack. 1988. Contrasting patterns of code-switching in two communities. _Codeswitching: Anthropological and sociolinguistic perspectives_ , 48:215–244.
* Poplack and Dion (2012) Shana Poplack and Nathalie Dion. 2012. Myths and facts about loanword development. _Language Variation and Change_ , 24(3):279–315.
* Poplack et al. (2020) Shana Poplack, Suzanne Robillard, Nathalie Dion, and John C. Paolillo. 2020. Revisiting phonetic integration in bilingual borrowing. _Language_ , 96(1):126–159.
* Rodney and Jubilado (2012) C Rodney and C Jubilado. 2012. Morphological Study of Verb of Anglicisms in Spanish Computer Language. _Polyglossia_ , 23:43–47.
* Sanchez (2005) Tara Sanchez. 2005. The (socio-)linguistics of morphological borrowing: A quantitative look at qualitative constraints and universals. _University of Pennsylvania Working Papers in Linguistics_ , 11(2):12.
* Sharma (2017) Devyani Sharma. 2017. English in India. In _Varieties of English_ , pages 311–329.
* Tsujimura and Davis (2011) Natsuko Tsujimura and Stuart Davis. 2011. A construction approach to innovative verbs in Japanese. _Cognitive Linguistics_ , 22(4):799–825.
* Vendelin and Peperkamp (2006) Inga Vendelin and Sharon Peperkamp. 2006. The influence of orthography on loanword adaptations. _Lingua_ , 116(7):996–1007.
* Wang et al. (2019) Zijian Wang, Scott Hale, David Ifeoluwa Adelani, Przemyslaw Grabowicz, Timo Hartman, Fabian Flöck, and David Jurgens. 2019. Demographic inference and representative population estimates from multilingual social media data. In _The Web Conference_ , pages 2056–2067.
* Wichmann and Wohlgemuth (2008) Søren Wichmann and Jan Wohlgemuth. 2008. Loan verbs in a typological perspective. _Empirical approaches to language typology_ , 35:89.
* Wohlgemuth (2009) Jan Wohlgemuth. 2009. _A typology of verbal borrowings_ , volume 211. Walter de Gruyter.
* Zenner et al. (2012) Eline Zenner, Dirk Speelman, and Dirk Geeraerts. 2012. Cognitive Sociolinguistics meets loanword research: Measuring variation in the success of anglicisms in Dutch. _Cognitive Linguistics_ , 23(4):749–792.
## Appendix A Appendix
### A.1 All integrated and light verb pairs
To assist study replication, we list all pairs of integrated and light verbs
for loanwords and native verbs used in this study. We list them in
alphabetical order (by integrated verb) in the format:
_loanword/translation_ : integrated verb ; light verb phrase(s)
##### Loanwords
* _access_ : accesar ; hacer/tener acces
* _aim_ : aimear ; hacer/tener aim
* _alert_ : alertear ; hacer alert
* _audit_ : auditar ; hacer (un) audit
* _ban_ : banear ; hacer un ban
* _bang_ : bangear ; hacer bang
* _bash_ : bashear ; hacer/dar bash
* _block_ : blockear ; hacer/dar (un) block
* _boycott_ : boicotear ; hacer (un) boicot
* _box_ : boxear ; hacer (el) box/boxing
* _bully_ : bulear ; hacer/ser (el) bully
* _bust_ : bustear ; hacer (el) bust
* _cast_ : castear ; hacer cast/casting
* _change_ : changear ; hacer change
* _chat_ : chatear ; hacer chat
* _check_ : chequear ; hacer un cheque
* _shoot_ : chutar ; hacer/tomar el shot
* _combat_ : combatear ; hacer (el) combat
* _connect_ : conectar ; hacer (un) conexión
* _crack_ : crackear ; hacer crack
* _customize_ : customizar ; hacer custom/customized
* _default_ : defaultear ; hacer default
* _delete_ : deletear ; hacer/poner delete
* _DM_ : dmear ; mandar/enviar/poner un dm
* _dope_ : dopar ; hacer doping
* _downvote_ : downvotear ; poner/dar (un) downvote
* _draft_ : draftear ; hacer/tener draft
* _drain_ : drenar ; hacer (el) dren
* _smash_ : esmachar ; hacer smash
* _sniff_ : esnifar ; hacer sniff
* _standard_ : estándar ; hacer (un) standard
* _exit_ : exitear ; hacer exit
* _export_ : exportear ; hacer export
* _externalize_ : externalizar ; hacer external
* _fangirl_ : fangirlear ; hacer/ser fangirl
* _film_ : filmar ; tomar (un) film
* _flash_ : flashear ; hacer (un) flash
* _flex_ : flexear ; hacer (un) flex
* _flirt_ : flirtear ; hacer flirt
* _focus_ : focalizar ; hacer focus
* _format_ : formatear ; hacer/dar (el) formato
* _form_ : formear ; hacer form
* _freak_ : friquear ; estar freaked
* _freeze_ : frizar ; hacer freeze
* _fund_ : fundear ; dar/hacer fund/funding
* _gentrify_ : gentrificar ; hacer/tener gentrificación
* _ghost_ : gostear ; hacer gost/ghost
* _google_ : googlear ; buscar en google
* _hack_ : hackear ; hacer hack
* _hail_ : hailear ; hacer hail
* _hang_ : hanguear ; hacer hang
* _harm_ : harmear ; hacer harm
* _hypnosis_ : hipnotizar ; hacer hipnosis
* _host_ : hostear ; hacer host
* _hype_ : hypear ; hacer hype
* _intercept_ : interceptear ; hacer/tirar interception
* _hang_ : janguear ; hacer hang (out)
* _lag_ : lagear ; hacer (un) lag
* _like_ : likear ; dar/poner (un) like
* _limit_ : limitear ; hacer (un) limit
* _lynch_ : linchar ; hacer lynch
* _link_ : linkear ; dar/poner (un) link
* _love_ : lovear ; hacer love
* _look_ : luquear ; dar/hacer (un) look
* _make_ : makear ; hacer make
* _melt_ : meltear ; hacer melt
* _mope_ : mopear ; hacer mope
* _nag_ : nagear ; hacer nag
* _knock_ : noquear ; dar/hacer (un) knockout
* _pack_ : packear ; hacer pack
* _pan_ : panear ; hacer/dar (un) panorama
* _panic_ : paniquear ; tener panic
* _park_ : parquear ; hacer parking
* _perform_ : performar ; hacer (un) performance
* _pitch_ : pichear ; hacer (un) pitch
* _pin_ : pinear ; hacer pin
* _PM_ : pmear ; enviar/mandar (un) pm
* _punch_ : ponchar ; hacer un punch
* _post_ : postear ; dar/poner (un) post
* _posterize_ : posterizar ; hacer poster
* _print_ : printear ; hacer print
* _protest_ : protestear ; hacer (un) protest
* _push_ : puchar ; hacer un push
* _pump_ : pumpear ; hacer pump(s)
* _quote_ : quotear ; hacer quote
* _rank_ : rankear ; hacer rank
* _rant_ : rantear ; hacer (un) rant
* _rape_ : rapear ; hacer (un) rape
* _record_ : recordear ; hacer (un) recording
* _render_ : renderizar ; hacer render(ed)
* _rent_ : rentear ; hacer rental/renting
* _report_ : reportear ; hacer (un) report
* _reset_ : resetear ; hacer reset
* _respect_ : respectear ; hacer respect
* _ring_ : ringear ; hacer ring
* _rock_ : rockear ; hacer rock
* _roll_ : rollear ; hacer roll
* _sample_ : samplear ; hacer (un) sample
* _selfie_ : selfiar ; tomar (un) selfie
* _sext_ : sextear ; dar/mandar un sext
* _ship_ : shippear ; hacer ship
* _shitpost_ : shitpostear ; hacer/poner un shitpost
* _shock_ : shockear ; hacer shock
* _sign-in_ : signear ; hacer sign-in
* _stalk_ : stalkear ; actuar como un stalker
* _strike_ : strikear ; hacer/dar un strike
* _surf_ : surfear ; hacer surf
* _tackle_ : taclear ; hacer tackle
* _text_ : textear ; mandar/enviar un text
* _tick_ : ticar ; hacer (un) tick
* _torment_ : tormentear ; hacer torment
* _touch_ : touchear ; hacer (un) touch
* _transport_ : transportear ; hacer transport
* _travel_ : travelear ; hacer travel
* _troll_ : trolear ; actuar como un trol
* _tweet_ : tweetear ; poner/enviar/hacer (un) tweet
* _twerk_ : twerkear ; hacer twerk
* _upvote_ : upvotear ; dar (un) upvote
* _vape_ : vapear ; hacer/tomar vape/vaping
* _zap_ : zapear ; hacer zap/zapping
##### Native verbs
* _admire_ : admirar ; tener admiración
* _befriend_ : amistar ; tener amistad
* _encourage_ : animar ; subir el ánimo
* _note_ : anotar ; tomar nota
* _land_ : aterrizar ; hacer un aterrizaje
* _joke_ : bromear ; hacer bromas
* _mock_ : burlarse ; hacer burla
* _punish_ : castigar ; poner un castigo
* _buy_ : comprar ; hacer la compra
* _copy_ : copiar ; hacer una copia
* _tickle_ : cosquillar ; hacer cosquillas
* _blame_ : culpar ; echar la culpa
* _damage_ : dañar ; hacer daño
* _decide_ : decidir ; tomar decisiones
* _apologize_ : disculparse ; pedir disculpas
* _shower_ : ducharse ; darse una ducha
* _question_ : dudar ; poner en duda
* _exemplify_ : ejemplificar ; poner un ejemplo
* _estimate_ : estimar ; tener estima
* _explain_ : explicar ; dar una explicación
* _finish_ : finalizar ; poner fin
* _photograph_ : fotografiar ; tomar fotos
* _escape_ : fugarse ; darse a la fuga
* _mention_ : mencionar ; hacer mención
* _look at_ : mirar ; echar una mirada
* _penalize_ : multar ; poner una multa
* _negotiate_ : negociar ; hacer negocios
* _originate_ : originar ; dar origen
* _participate_ : participar ; tomar parte
* _walk_ : pasear ; dar un paseo
* _step_ : pisar ; poner el pie
* _value_ : preciar ; poner precio
* _ask_ : preguntar ; hacer (una) pregunta
* _anticipate_ : prever ; tener previsto
* _test_ : probar ; poner a prueba
* _recommend_ : recomendar ; hacer recomendación
* _write_ : redactar ; hacer una redacción
* _cure_ : remediar ; poner remedio
* _breathe_ : respirar ; dar un respiro
* _jump_ : saltar ; dar un salto
* _nap_ : sestear ; echar una siesta
* _dream_ : soñar ; tener un sueño
* _end_ : terminar ; poner término
* _use_ : usar ; hacer uso
* _travel_ : viajar ; hacer un viaje
* _see_ : vistar ; echar un vistazo
* _fly_ : volar ; tomar un vuelo
# Unadjusted Langevin algorithm for non-convex weakly smooth potentials
Dao, Xin, and Yixin
Department of Mathematics and Department of Computer Science, University of Mississippi
(2020)
###### Abstract
Discretization of continuous-time diffusion processes is a widely recognized
method for sampling. However, the canonical Euler-Maruyama discretization of
the Langevin diffusion process, referred to as the Unadjusted Langevin
Algorithm (ULA), has been studied mostly in the context of smooth
(gradient-Lipschitz) and strongly log-concave densities, a restriction that
considerably hinders its deployment in many sciences, including statistics and
machine learning. In this paper, we establish several theoretical
contributions to the literature on such sampling methods for non-convex
distributions. In particular, we introduce a new mixture weakly smooth
condition, under which we prove that ULA converges given an additional
log-Sobolev inequality. We also show that ULA for the smoothed potential
converges in $L_{2}$-Wasserstein distance. Moreover, using convexification of
a non-convex domain [24] in combination with regularization, we establish
convergence in Kullback-Leibler (KL) divergence with the number of iterations
required to reach an $\epsilon$-neighborhood of a target distribution
depending only polynomially on the dimension. We relax the conditions of [31]
and prove convergence guarantees under isoperimetry and non-strong convexity
at infinity.
###### keywords:
Langevin Monte Carlo, non-convex sampling, Kullback-Leibler divergence, regularization, weakly smooth
###### Contents
1. 1 Introduction
2. 2 Preliminaries
1. 2.1 Assumptions on the potential $U$
2. 2.2 Smoothing using $p$-generalized Gaussian smoothing
3. 3 Convergence in KL divergence along ULA under LSI
1. 3.1 Recall KL divergence along Langevin dynamics
2. 3.2 Main result: KL divergence along Unadjusted Langevin Algorithm
3. 3.3 Sampling via smoothing potential
4. 4 Extended result
1. 4.1 ULA convergence under $\gamma-$Poincaré inequality, $\alpha$-mixture weakly smooth and $2-$dissipativity
2. 4.2 ULA convergence under non-strongly convex outside the ball, $\alpha$-mixture weakly smooth and $2-$dissipativity
5. 5 Conclusion
6. A Measure definitions and isoperimetry
7. B Proofs of $p$-generalized Gaussian smoothing
1. B.1 Proof of $\alpha$-mixture weakly smooth property
2. B.2 Proof of $p$-generalized Gaussian smoothing properties
8. C Proofs under LSI
1. C.1 Proof of Lemma 3.2
2. C.2 Proof of Lemma 3.2
3. C.3 Proof of Lemma 3.1
4. C.4 Proof of Theorem 3.1
9. D Proof of sampling via smoothing potential
1. D.1 Proof of Lemma 3.3
2. D.2 Proof of Lemma 3.2
3. D.3 Proof of Lemma 3.4
10. E Convexification of non-convex domain
1. E.1 Proof of Lemma 4.1
2. E.2 Proof of Lemma 4.2
3. E.3 Proof of lemma 4.3
4. E.4 Proof of lemma 4.1
5. E.5 Proof of lemma 4.1
6. E.6 Proof of Lemma 5
11. F Proof of additional lemmas
## 1 Introduction
Over the last decade, Bayesian inference has become one of the most prevalent
inference tools across a variety of disciplines, including computational
statistics and statistical learning [30]. In general, Bayesian inference seeks
to generate samples from a posterior distribution of the form:
$\rho(x)=\mathrm{e}^{-U(x)}/\int_{\mathbb{R}^{d}}\mathrm{e}^{-U(y)}\mathrm{d}y,$
(1.1)
where the function $U(x)$, known as the potential function, is often convex.
The most conventional approaches, such as random-walk Metropolis-Hastings
[30], often struggle to select a proper proposal distribution for sampling. As
a result, it has been proposed to consider continuous dynamics that inherently
leave the target distribution $\rho$ invariant. Probably the most well-known
example of these continuous dynamics is the over-damped Langevin diffusion
[10] associated with $U$,
$\displaystyle dX_{t}=-\nabla U(X_{t})dt+\sqrt{2}dB_{t},$ (1.2)
where $B_{t}$ is a $d$-dimensional Brownian motion and its Euler-Maruyama
discretization hinges on the following update rule:
$\mathrm{x}_{k+1}=\mathrm{x}_{k}-\eta_{k}\nabla
U(\mathrm{x}_{k})+\sqrt{2\eta}\xi_{k},$ (1.3)
where $(\eta_{k})_{k\geq 1}$ is a sequence of step sizes, which can be kept
constant or decrease to $0$, and $\xi_{k}\sim{N}(0,\ I_{d})$ are independent
Gaussian random vectors ($I_{d}$ denotes the $d$-dimensional identity matrix).
The Euler-Maruyama discretization, also known as the Langevin Monte Carlo
(LMC) algorithm, requires knowledge only of the gradient of $U$ rather than of
$U$ itself, which makes it ideally suited to settings where $U$ is known only
up to a normalizing constant. Owing to its simplicity, efficiency, and
well-understood properties, LMC has found various applications [33, 9, 28, 34,
23]. Much of the theory of convergence of sampling used to focus on asymptotic
convergence, without a detailed study of dimension dependence. Recently, there
has been a surge of interest in non-asymptotic rates of convergence, which
include dimension dependence, especially polynomial dependence on the
dimension of the target distribution; see, e.g., [8, 10, 12, 15, 3, 14, 7, 22,
25, 26]. However, there is a critical gap between the theory of discretizing
an underlying stochastic differential equation (SDE) and the broad spectrum of
applications in statistical inference.
In particular, techniques from SDEs traditionally require $U(x)$ to have
Lipschitz-continuous gradients, a requirement that excludes many typical
applications [13]. [6] recently established an original approach for weakly
smooth (possibly non-smooth) potentials through smoothing. Their technique
rests on results from the optimization community, perturbing the gradient
evaluation point by a Gaussian. However, [6] analyzes the over-damped Langevin
diffusion in the context of convex potential functions, while many
applications involving sampling in high-dimensional spaces are non-convex. In
other work, [16] proposes an elegant result using tail growth for weakly
smooth and weakly dissipative potentials. Using degenerated convexity and a
modified log-Sobolev inequality, they prove that LMC reaches an
$\epsilon$-neighborhood of a target distribution in KL divergence with a
convergence rate of
$\tilde{O}(d^{\frac{1}{\alpha}+\frac{1+\alpha}{\alpha}(\frac{2}{\beta}-\mathbf{1}_{\{\beta\neq
1\}})}\epsilon^{\frac{-1}{\alpha}})$, where $\alpha$ and $\beta$ are the
degrees of weak smoothness and dissipativity defined in the next section.
Unfortunately, (weakly) smooth conditions typically cannot cover a mixture of
distributions with different tail-growth behaviors, which excludes a large
range of applications. Therefore, we first introduce an $\alpha$-mixture
weakly smooth condition to overcome this limitation of the current weakly
smooth condition. Under our novel condition and a log-Sobolev inequality, we
recover the ULA convergence results of [31]. In addition, we show that ULA for
the smoothed potential converges in $L_{2}$-Wasserstein distance. Since the
log-Sobolev inequality is preserved under bounded perturbation, we extend the
results using convexification of a non-convex domain [24]. Our contributions
can be outlined as follows.
First, for a potential function $U$ that satisfies the $\alpha$-mixture weakly
smooth condition and a $\gamma$-log-Sobolev inequality, we prove that ULA
achieves a convergence rate of
$\tilde{O}\left(d^{\frac{1}{\alpha}}\gamma^{\frac{-1}{\alpha}}\epsilon^{\frac{-1}{\alpha}}\right)$
(1.4)
in KL-divergence.
Second, our convergence results also cover sampling from non-convex potentials
satisfying the $\alpha$-mixture weakly smooth condition, $2$-dissipativity,
and a $\gamma$-Poincaré or $\gamma$-Talagrand inequality, with a convergence
rate in KL divergence of
$\tilde{O}\left(d^{\frac{2}{\alpha}+1}\gamma^{\left(\frac{-1}{\alpha}-1\right)}\epsilon^{\frac{-1}{\alpha}}\right).$
(1.5)
Third, we further investigate the case of $\alpha$-mixture weak smoothness,
$2$-dissipativity, and non-strong convexity outside a ball of radius $R$, and
obtain a convergence rate of
$\tilde{O}\left(d^{1+\frac{2}{\alpha}}\gamma^{\left(\frac{-1}{\alpha}-1\right)}e^{5\left(2\sum_{i}L_{i}R^{1+\alpha_{i}}+4L_{N}R^{2}+4\max\left\\{L_{i}\right\\}R^{1+\alpha}\right)\left(\frac{1}{\alpha}+1\right)}\epsilon^{\frac{-1}{\alpha}}\right).$
(1.6)
Finally, our convergence results remain valid under finite perturbations,
indicating that they apply to an even larger class of potentials. Moreover,
convergence in KL divergence implies convergence in total variation and in
$L_{2}$-Wasserstein metrics via Pinsker-type inequalities, which in turn gives
convergence rates of $O(\cdot\,\epsilon^{\frac{-2}{\alpha}})$ and
$O(\cdot\,\epsilon^{\frac{-2\beta}{\alpha}}d^{\frac{2}{\alpha}})$ in place of
$O(\cdot\,\epsilon^{\frac{-1}{\alpha}})$ in each case above, respectively for
the total variation and $L_{2}$-Wasserstein metrics.
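To make this conversion explicit, recall the standard inequalities (Csiszár-Kullback-Pinsker, and Talagrand's inequality, which holds under $LSI(\gamma)$):

```latex
\|p-\pi\|_{TV} \;\le\; \sqrt{\tfrac{1}{2}\,H(p|\pi)},
\qquad
\mathcal{W}_{2}(p,\pi) \;\le\; \sqrt{\tfrac{2}{\gamma}\,H(p|\pi)},
% so accuracy \epsilon in TV or W_2 requires H(p|\pi) = O(\epsilon^2);
% substituting \epsilon^2 for \epsilon in the KL rates above multiplies
% the \epsilon-exponent by two.
```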
The rest of the paper is organized as follows. Section 2 sets out the
notation, definitions, and smoothing properties needed for our main results in
Section 3. Section 4 extends the convexification of a non-convex domain from
[24], from potentials strongly convex outside a ball to potentials
non-strongly convex outside a ball, and employs this result in combination
with regularization to obtain convergence in KL divergence for potentials
satisfying log-Sobolev, Talagrand, or Poincaré inequalities, or non-strong
convexity at infinity. Section 5 presents our conclusions.
## 2 Preliminaries
This section provides the notational conventions, assumptions, and some
auxiliary results used in this paper. We let $\left|s\right|$, for a real
number $s\in\mathbb{R}$, denote its absolute value and use $\left\langle\ ,\
\right\rangle$ to specify inner products. We use $\|x\|_{p}$ to denote the
$p$-norm of a vector $x\in\mathbb{R}^{d}$ and throughout the paper, we drop
the subscript and just write
$\|x\|\stackrel{{\scriptstyle\triangle}}{{=}}\|x\|_{2}$ whenever $p=2$. For a
function $U$ :$\mathbb{R}^{d}\rightarrow\mathbb{R}$, which is twice
differentiable, we use $\nabla U(x)$ and $\nabla^{2}U(x)$ to denote
correspondingly the gradient and the Hessian of $U$ with respect to $x$. We
use $A\succeq B$ if $A-B$ is a positive semi-definite matrix. We use big-oh
notation $O$ in the sense that $f(x)=O(g(x))$ means
$\limsup_{x\rightarrow\infty}\frac{f(x)}{g(x)}<\infty$, and $\tilde{O}$
suppresses logarithmic factors.
### 2.1 Assumptions on the potential $U$
The objective of this paper is to sample from a distribution
$\pi\propto\exp(-U(x))$, where $x\in\mathbb{R}^{d}$. While sampling from the
exact distribution $\pi(x)$ is generally computationally demanding, it is
largely adequate to sample from an approximate distribution $\tilde{\pi}(x)$
that is close to $\pi(x)$ in some distance. In this paper, we suppose some of
the following conditions hold:
###### Assumption 2.1.
($\alpha$-mixture weakly smooth) There exist
$0<\alpha=\alpha_{1}<\cdots<\alpha_{N}\leq 1$ and $0<L_{i}<\infty$,
$i=1,\ldots,N$, so that $\forall x,\ y\in\mathbb{R}^{d}$ we have
$\left\|\nabla U(x)-\nabla
U(y)\right\|\leq\sum_{i=1}^{N}L_{i}\left\|x-y\right\|^{\alpha_{i}}$, where
$\nabla U(x)$ denotes an arbitrary sub-gradient of $U$ at $x$.
###### Assumption 2.2.
($\left(\alpha,\ell\right)-$weakly smooth) There exist $0\leq\ell$,
$0<L<\infty$ and $\alpha\in[0,1]$ so that $\forall x,\ y\in\mathbb{R}^{d}$, we
obtain
$\left\|\nabla U(x)-\nabla U(y)\right\|\leq
L\left(1+\left\|x-y\right\|^{\ell}\right)\left\|x-y\right\|^{\alpha},$
where $\nabla U(x)$ represents an arbitrary sub-gradient of $U$ at $x$.
###### Assumption 2.3.
($\left(\mu,\theta\right)$-degenerated convex outside the ball) There exists
some $\mu>0,$ $1\geq\theta\geq 0$ so that for every $\left\|x\right\|\geq R,$
the potential function $U(x)$ satisfies $\nabla^{2}U(x)\succeq
m\left(\left\|x\right\|\right)I_{d}$ where
$m\left(r\right)=\mu\left(1+r^{2}\right)^{-\frac{\theta}{2}}.$
###### Assumption 2.4.
($\beta-$dissipativity). There exists $\beta\geq 1$, $a$, $b>0$ such that
$\forall x\in\mathbb{R}^{d}$, $\left\langle\nabla U(x),x\right\rangle\geq
a\left\|x\right\|^{\beta}-b.$
###### Assumption 2.5.
($LSI\left(\gamma\right)$) There exists some $\gamma>0$ so that for all
probability distributions $p(x)$ absolutely continuous w.r.t. $\pi(x)$,
$H(p|\pi)\leq\frac{1}{2\gamma}I(p|\pi)$, where $I(p|\pi)$ is the relative
Fisher information.
###### Assumption 2.6.
($PI\left(\gamma\right)$) There exists some $\gamma>0,$ so that for all smooth
function $g\colon\mathbb{R}^{d}\to\mathbb{R}$,
$Var_{\pi}(g)\leq\frac{1}{\gamma}E_{\pi}\left[\left\|\nabla
g\right\|^{2}\right]$ where $Var_{\pi}(g)=E_{\pi}[g^{2}]-E_{\pi}[g]^{2}$ is
the variance of $g$ under $\pi$.
###### Assumption 2.7.
(non-strongly convex outside the ball) For every $\left\|x\right\|\geq R$, the
Hessian of the potential function $U$ is positive semi-definite, that is, for
every $y\in\mathbb{R}^{d}$, ${\displaystyle\left\langle y,\nabla^{2}U(x)\
y\right\rangle\geq 0}.$
###### Assumption 2.8.
The function $U(x)$ has stationary point at zero $\nabla U(0)=0.$
###### Remark 2.1.
Assumption 2.8 is imposed without loss of generality. Assumption 2.1 often
holds for a mixture of distributions with different tail-growth behaviors. It
is straightforward to generalize from a mixture of two distributions with the
same constant $L$, so we will work with Assumption 2.2 to simplify the proofs
while optimizing the convergence rate. Assumption 2.2 is an extension of
$\alpha$-weak smoothness, i.e. $\alpha$-Hölder continuity of the
(sub)gradients of $U$; when $\ell=0$, we recover ordinary $\alpha$-weak
smoothness.
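For intuition, here is a simple example (ours, not from the paper) of a potential satisfying Assumption 2.1 with two exponents:

```latex
% Take U(x) = \tfrac{1}{2}\|x\|^{2} + \tfrac{1}{1+\alpha}\|x\|^{1+\alpha}
% with 0 < \alpha < 1, so \nabla U(x) = x + x\|x\|^{\alpha-1}.
% Since z \mapsto z\|z\|^{\alpha-1} is \alpha-Hölder with constant 2^{1-\alpha},
\|\nabla U(x)-\nabla U(y)\|
  \;\le\; \|x-y\| \;+\; 2^{1-\alpha}\,\|x-y\|^{\alpha},
% i.e. Assumption 2.1 holds with N=2, (\alpha_{1},L_{1})=(\alpha,2^{1-\alpha})
% and (\alpha_{2},L_{2})=(1,1), while no single Hölder exponent covers both
% small and large \|x-y\|.
```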
### 2.2 Smoothing using $p$-generalized Gaussian smoothing
The following property follows straightforwardly from Assumption 2.1, for all
$x,\ y\in\mathbb{R}^{d}$:
###### Lemma 2.1.
If a potential $U:\mathbb{R}^{d}\rightarrow\mathbb{R}$ satisfies the
$\alpha$-mixture weakly smooth condition with
$0<\alpha=\alpha_{1}<\cdots<\alpha_{N}\leq 1$ and $0<L_{i}<\infty$,
$i=1,\ldots,N$, then:
$U(y)\leq U(x)+\left\langle\nabla U(x),\
y-x\right\rangle+\sum_{i}\frac{L_{i}}{1+\alpha_{i}}\|y-x\|^{1+\alpha_{i}}.$
(2.1)
In particular, if the potential $U:\mathbb{R}^{d}\rightarrow\mathbb{R}$
satisfies $\left(\alpha,\ell\right)-$weakly smooth for some $\alpha+\ell\leq
1$ and $\alpha\in(0,1]$, then:
$U(y)\leq U(x)+\left\langle\nabla U(x),\
y-x\right\rangle+\frac{L}{1+\alpha}\|y-x\|^{1+\alpha}+\frac{L}{1+\ell+\alpha}\|y-x\|^{1+\ell+\alpha}.$
(2.2)
###### Proof.
See Appendix B.1 ∎
Here, to deal with the heavy tail behavior of some distributions in the
mixture, we use $p$-generalized Gaussian smoothing. Particularly, for $\mu\geq
0$, $p$-generalized Gaussian smoothing $U_{\mu}$ of $U$ is defined as:
$U_{\mu}(y):=\mathrm{E}_{\xi}[U(y+\mu\xi)]=\frac{1}{\kappa}\int_{\mathbb{R}^{d}}U(y+\mu\xi)e^{-\left\|\xi\right\|_{p}^{p}/p}d\xi,$
where
$\kappa\stackrel{{{}_{def}}}{{=}}\int_{\mathbb{R}^{d}}e^{-\left\|\xi\right\|_{p}^{p}/p}d\xi=\frac{2^{d}\Gamma^{d}(\frac{1}{p})}{p^{d-\frac{d}{p}}}$
and $\xi\sim N_{p}(0,I_{d\times d})$ (the $p$-generalized Gaussian
distribution). The rationale for considering the $p$-generalized Gaussian
smoothing $U_{\mu}$ rather than $U$ is that it typically enjoys superior
smoothness properties. In particular, $U_{\mu}$ is smooth even when $U$ is
not. In addition, $p$-generalized Gaussian smoothing is more general than
Gaussian smoothing: it includes the normal distribution when $p=2$ and the
Laplace distribution when $p=1$, allows tails either heavier or lighter than
normal, and in the limit $p\rightarrow\infty$ includes the continuous uniform
distribution. More importantly, we prove that the smoothed potential
$U_{\mu}(x)$ is actually smooth (gradient Lipschitz). This property is novel
and potentially useful in optimization or sampling, especially when the
potential exhibits some form of weakly smooth behavior. To simplify the
proofs, we consider $p\in\mathbb{R}^{+}$, $1\leq p\leq 2$, and derive some
primary features of $U_{\mu}$ by adapting the results of [27].
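A minimal numerical sketch of this smoothing (ours): coordinates of $\xi\sim N_{p}(0,I_{d})$ can be drawn via the standard Gamma construction, and $U_{\mu}$ estimated by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_pgg(p, d, n):
    """n draws of xi ~ N_p(0, I_d): i.i.d. coordinates with density
    proportional to exp(-|xi_j|^p / p), via |xi_j| = G^(1/p), G ~ Gamma(1/p, p)."""
    g = rng.gamma(shape=1.0 / p, scale=p, size=(n, d))
    signs = rng.choice([-1.0, 1.0], size=(n, d))
    return signs * g ** (1.0 / p)

def smoothed_potential(U, x, mu, p=1.0, n=100_000):
    """Monte Carlo estimate of U_mu(x) = E_xi[ U(x + mu * xi) ]."""
    xi = sample_pgg(p, x.shape[0], n)
    return U(x + mu * xi).mean()

# toy weakly smooth potential U(x) = ||x||^{3/2}  (alpha = 1/2)
U = lambda X: np.linalg.norm(X, axis=-1) ** 1.5
print(smoothed_potential(U, np.zeros(5), mu=0.1, p=1.0))
```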
###### Lemma 2.2.
If a potential $U:\mathbb{R}^{d}\rightarrow\mathbb{R}$ satisfies the
$\alpha$-mixture weakly smooth condition with
$0<\alpha=\alpha_{1}<\cdots<\alpha_{N}\leq 1$ and $0<L_{i}<\infty$,
$i=1,\ldots,N$, then:
(i) $\forall x\in\mathbb{R}^{d}$ :
$\left|U_{\mu}(x)-U(x)\right|{\displaystyle\leq\sum_{i}L_{i}\mu^{1+\alpha_{i}}d^{\frac{1+\alpha_{i}}{p}},}$
(ii) $\forall x\in\mathbb{R}^{d}$: ${\displaystyle\left\|\nabla
U_{\mu}(x)-\nabla
U(x)\right\|\leq\sum_{i}L_{i}\mu^{\alpha_{i}}d^{\frac{3}{p}}},$
(iii) $\forall x,\ y\in\mathbb{R}^{d}$: ${\displaystyle\left\|\nabla
U_{\mu}(y)-\nabla
U_{\mu}(x)\right\|\leq\sum_{i}\frac{L_{i}}{\mu^{1-\alpha_{i}}}d^{\frac{2}{p}}\left\|y-x\right\|.}$
In particular, if the potential $U:\mathbb{R}^{d}\rightarrow\mathbb{R}$
satisfies $\left(\alpha,\ell\right)-$weakly smooth for some $\alpha+\ell\leq
1$ and $\alpha\in[0,1]$, then:
(i) $\forall x\in\mathbb{R}^{d}$ :
$\left|U_{\mu}(x)-U(x)\right|{\displaystyle\leq
2L\mu^{1+\ell+\alpha}d^{\frac{1+\ell+\alpha}{p}},}$
(ii) $\forall x\in\mathbb{R}^{d}$: ${\displaystyle\left\|\nabla
U_{\mu}(x)-\nabla U(x)\right\|\leq L\mu^{\alpha}d^{1+\frac{1}{p}}},$
(iii) $\forall x,\ y\in\mathbb{R}^{d}$: ${\displaystyle\left\|\nabla
U_{\mu}(y)-\nabla
U_{\mu}(x)\right\|\leq\frac{L}{\mu^{1-\alpha}}d^{\frac{2}{p}}\left\|y-x\right\|.}$
###### Proof.
See Appendix B.2 ∎
## 3 Convergence in KL divergence along ULA under LSI
In this section we review the definition of KL divergence and the convergence
of KL divergence along the Langevin dynamics in continuous time under log-
Sobolev inequality. We then derive our main result for ULA algorithm under
LSI.
### 3.1 Recall KL divergence along Langevin dynamics
Let $p,\pi$ be probability density functions with respect to the Lebesgue
measure on $\mathbb{R}^{d}$. KL divergence of $p$ with respect to $\pi$ is
defined as
$\displaystyle
H(p|\pi)\stackrel{{\scriptstyle\triangle}}{{=}}\int_{\mathbb{R}^{d}}p(x)\log\frac{p(x)}{\pi(x)}\,dx.$
(3.1)
By definition, KL divergence can be considered an asymmetric measure of the
"distance" of a probability distribution $p$ from a base distribution $\pi$.
$H(p|\pi)$ is always nonnegative and equals zero only when $p$ equals $\pi$ in
distribution. KL divergence is a rather strong measure of distance, which
upper bounds a variety of distance measures. We provide the definitions of
other measures in Appendix A. In general, convergence in KL divergence implies
convergence in total variation by Csiszar-Kullback-Pinsker inequality. In
addition, under log-Sobolev inequality with constant $\gamma$, KL divergence
also controls the quadratic Wasserstein $W_{2}$ distance by
$\mathcal{W}_{2}(p,\ \pi)^{2}\leq\frac{2}{\gamma}H(p|\pi).$
The Langevin dynamics for target distribution $\pi=e^{-U}$ is a continuous-
time stochastic process $(X_{t})_{t\geq 0}$ in $\mathbb{R}^{d}$ that
progresses following the stochastic differential equation:
$\displaystyle dX_{t}=-\nabla U(X_{t})\,dt+\sqrt{2}\,dW_{t}$ (3.2)
where $(W_{t})_{t\geq 0}$ is the standard Brownian motion in $\mathbb{R}^{d}$.
If $(X_{t})_{t\geq 0}$ evolves following the Langevin dynamics (3.2), then its
probability density function $(p_{t})_{t\geq 0}$ satisfies the Fokker-Planck
equation:
$\displaystyle\frac{\partial p_{t}}{\partial t}\,=\,\nabla\cdot(p_{t}\nabla
U)+\Delta p_{t}\,=\,\nabla\cdot\left(p_{t}\nabla\log\frac{p_{t}}{\pi}\right).$ (3.3)
Here $\nabla\cdot$ is the divergence and $\Delta$ is the Laplacian operator.
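The second equality in (3.3) is immediate: since $\pi=e^{-U}$ we have
$\nabla\log\pi=-\nabla U$ and $\Delta p_{t}=\nabla\cdot(p_{t}\nabla\log p_{t})$, so
$\nabla\cdot(p_{t}\nabla U)+\Delta p_{t}=\nabla\cdot\left(p_{t}\left(\nabla\log
p_{t}+\nabla U\right)\right)=\nabla\cdot\left(p_{t}\nabla\log\frac{p_{t}}{\pi}\right)$.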
In general, by evolving along the Langevin dynamics, a distribution will get
closer to its target distribution $\pi$. From [31] Lemma 1, we have
$\displaystyle\frac{d}{dt}(H(p_{t}|\pi))=-\mathbb{E}_{p_{t}}\left\|\nabla\log\frac{p_{t}}{\pi}\right\|^{2}.$
(3.4)
Since $\mathbb{E}_{p_{t}}\left\|\nabla\log\frac{p_{t}}{\pi}\right\|^{2}\geq 0$,
the identity (3.4) shows that the KL divergence with respect to $\pi$ is
decreasing along the Langevin dynamics, and thus the distribution $p_{t}$
indeed converges to $\pi$. When $\pi$ satisfies a log-Sobolev inequality (LSI)
with constant $\gamma$, [31] Lemma 2 shows that
$\displaystyle H(p_{t}|\pi)\leq e^{-2\gamma t}H(p_{0}|\pi).$ (3.5)
Hence, KL divergence converges exponentially fast along the Langevin dynamics.
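For completeness, (3.5) follows by combining (3.4) with Definition A.1: under
$LSI(\gamma)$, $\mathbb{E}_{p_{t}}\left\|\nabla\log\frac{p_{t}}{\pi}\right\|^{2}=I(p_{t}|\pi)\geq
2\gamma H(p_{t}|\pi)$, so $\frac{d}{dt}H(p_{t}|\pi)\leq-2\gamma H(p_{t}|\pi)$ and
Grönwall's inequality gives (3.5).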
The log-Sobolev inequality can be thought of as a relaxation of log-concavity.
LSI was originally introduced by [17] for the Gaussian measure, and it
characterizes concentration of measure and sub-Gaussian tail behavior, among
other properties. [2] broadened it to strongly log-concave measures: $\pi$
enjoys LSI with constant $\gamma$ whenever $-\log\pi$ is $\gamma$-strongly
convex. However, LSI is more general than the strongly log-concave condition
since it is preserved under bounded perturbations [18], Lipschitz mappings,
and tensorization, among other operations. Therefore, we study the KL
convergence under the log-Sobolev inequality.
### 3.2 Main result: KL divergence along Unadjusted Langevin Algorithm
In general, a practical algorithm needs to be discretized [20], but discretized
algorithms are often more complicated to analyze and require more assumptions.
In this section, we investigate the behavior of the KL divergence along the
Unadjusted Langevin Algorithm (ULA) in discrete time. To sample from a target
distribution $\pi=e^{-U}$ in $\mathbb{R}^{d}$, the update rule of the
discretized ULA algorithm with step size $\eta>0$ is defined as
$\displaystyle x_{k+1}=x_{k}-\eta\nabla U(x_{k})+\sqrt{2\eta}\,z_{k}$ (3.6)
where $z_{k}\sim N(0,I)$ is an independent standard Gaussian random variable
in $\mathbb{R}^{d}$. As $x_{k}$ is updated following ULA, let $p_{k}$ denote
its probability distribution. It is known that ULA converges to a biased
limiting distribution $\pi_{\eta}\neq\pi$ for any fixed $\eta>0$; thus
$H(p_{k}|\pi)$ does not converge to $0$ along ULA, as it has an asymptotic
bias $H(\pi_{\eta}|\pi)>0$. When the true target distribution $\pi$ satisfies
$\alpha$-mixture weak smoothness and LSI, we can nevertheless prove convergence
in KL divergence along ULA. A key observation is that ULA converges uniformly
in time if the one-iteration discretization error between the ULA output and
the Langevin dynamics is bounded. This technique has been used in many papers,
e.g. [31, 8]. Our proof structure is similar to that of [31], whose analysis
needs stronger assumptions. A minimal implementation sketch of the ULA update
is given below.
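To make the update rule (3.6) concrete, the following is a minimal Python sketch
of ULA (our illustration, not an implementation from this paper). The potential
$U(x)=\frac{1}{1+\alpha}\|x\|^{1+\alpha}$ is chosen only because its gradient
$\|x\|^{\alpha-1}x$ is $\alpha$-Hölder continuous, making it weakly smooth; the
step size and iteration count below are placeholders rather than the tuned
values of Theorem 3.1.

```python
import numpy as np

def grad_U(x, alpha=0.5):
    """Gradient of the illustrative weakly smooth potential
    U(x) = ||x||^(1 + alpha) / (1 + alpha); it is alpha-Holder continuous."""
    r = np.linalg.norm(x)
    if r == 0.0:
        return np.zeros_like(x)  # the gradient vanishes at the origin
    return r ** (alpha - 1.0) * x

def ula(x0, eta, n_iters, alpha=0.5, seed=None):
    """Unadjusted Langevin Algorithm, Eq. (3.6):
    x_{k+1} = x_k - eta * grad U(x_k) + sqrt(2 * eta) * z_k, z_k ~ N(0, I)."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    samples = []
    for _ in range(n_iters):
        z = rng.standard_normal(x.shape)
        x = x - eta * grad_U(x, alpha) + np.sqrt(2.0 * eta) * z
        samples.append(x.copy())
    return np.array(samples)

# Example run: 10,000 steps in dimension 10 with a small placeholder step size.
chain = ula(x0=np.zeros(10), eta=1e-2, n_iters=10_000, alpha=0.5, seed=0)
```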
Let $x_{k+1}\sim p_{k+1}$ be the output of one step of ULA (3.6) from
$x_{k}\sim p_{k}$. Then we have the following.
###### Lemma 3.1.
Suppose $\pi$ is $\gamma$-log-Sobolev and $\alpha$-mixture weakly smooth with
$\max\left\\{L_{i}\right\\}=L\geq 1$. If
$0<\eta\leq\left(\frac{\gamma}{9N^{\frac{3}{2}}L^{3}}\right)^{\frac{1}{\alpha}}$,
then along each step of ULA (3.6),
$\displaystyle H(p_{k+1}|\pi)\leq
e^{-\gamma\eta}H(p_{k}|\pi)+2\eta^{\alpha+1}D_{3},$ (3.7)
where
$D_{3}=\sum_{i}10N^{3}L^{6}+16NL^{4}+8N^{2}L^{4}d^{\frac{3}{p}}+4NL^{2}d.$
(3.8)
In particular, suppose $\pi$ is $\gamma$-log-Sobolev and
$\left(\alpha,\ell\right)$-weakly smooth with $0<\alpha+\ell\leq 1$. If
$0<\eta\leq\left(\frac{\gamma}{2L^{1+\alpha}}\right)^{\frac{1}{\alpha}}$, then
along each step of ULA (3.6),
$\displaystyle H(p_{k+1}|\pi)\leq
e^{-\gamma\eta}H(p_{k}|\pi)+2\eta^{\alpha+1}D_{3}^{\prime},$ (3.9)
where
$D_{3}^{\prime}=16L^{2+2\alpha+2\ell}+4L^{2+2\alpha}d^{\frac{3-\alpha}{1+\alpha}\left(\alpha+\ell\right)}+4L^{2}d^{\alpha+\ell}$.
###### Proof.
See Appendix C.3. ∎
By applying this one-step bound recursively, we obtain the following theorem.
###### Theorem 3.1.
Suppose $\pi$ is $\gamma$-log-Sobolev and $\alpha$-mixture weakly smooth with
$\max\left\\{L_{i}\right\\}=L\geq 1$, and for any $x_{0}\sim p_{0}$ with
$H(p_{0}|\pi)=C_{0}<\infty$, the iterates $x_{k}\sim p_{k}$ of LMC with step
size $\eta\leq
1\wedge\frac{1}{4\gamma}\wedge\left(\frac{\gamma}{9N^{\frac{3}{2}}L^{3}}\right)^{\frac{1}{\alpha}}$
satisfy
$\displaystyle H(p_{k}|\pi)\leq e^{-\gamma\eta
k}H(p_{0}|\pi)+\frac{8\eta^{\alpha}D_{3}}{3\gamma}.$ (3.10)
Then, for any $\epsilon>0$, to achieve $H(p_{k}|\pi)<\epsilon$, it suffices to
run ULA with step size $\eta\leq
1\wedge\frac{1}{4\gamma}\wedge\left(\frac{\gamma}{9N^{\frac{3}{2}}L^{3}}\right)^{\frac{1}{\alpha}}\wedge\left(\frac{3\epsilon\gamma}{16D_{3}}\right)^{\frac{1}{\alpha}}$
for $k\geq\frac{1}{\gamma\eta}\log\frac{2H\left(p_{0}|\pi\right)}{\epsilon}$
iterations.
###### Proof.
See Appendix C.4. ∎
If we initialize with a Gaussian distribution $p_{0}=N(0,\frac{1}{L}I)$, we
have the following lemma.
###### Lemma 3.2.
Suppose $\pi=e^{-U}$ is $\alpha$-mixture weakly smooth. Let
$p_{0}=N(0,\frac{1}{L}I)$. Then $H(p_{0}|\pi)\leq
U(0)-\frac{d}{2}\log\frac{2\Pi
e}{L}+\sum_{i}\frac{L_{i}}{1+\alpha_{i}}\left(\frac{d}{L}\right)^{\frac{1+\alpha_{i}}{2}}=O(d).$
###### Proof.
See Appendix C.1. ∎
Therefore, Theorem 3.1 states that to achieve $H(p_{k}|\pi)\leq\epsilon$, ULA
has iteration complexity
$\tilde{O}\left(\frac{d^{\frac{3-\alpha}{1+\alpha}}}{\epsilon^{\frac{1}{\alpha}}\gamma^{\frac{1}{\alpha}+1}}\right).$
By Pinsker’s inequality, we have
$TV\left(p_{k},\pi\right)\leq\sqrt{\frac{H(p_{k}|\pi)}{2}}$, which implies that
to get $TV\left(p_{k},\pi\right)\leq\epsilon$ it is enough to obtain
$H(p_{k}|\pi)\leq 2\epsilon^{2}$. This bound indicates that the number of
iterations needed to reach $\epsilon$ accuracy in total variation is
$\tilde{O}\left(d^{\frac{3-\alpha}{1+\alpha}}\gamma^{\frac{-1}{\alpha}-1}\epsilon^{\frac{-2}{\alpha}}\right)$.
On the other hand, from the Talagrand inequality, which follows from the
log-Sobolev inequality, we know that $W_{2}^{2}(p_{k},\
\pi)\leq\frac{2}{\gamma}H\left(p_{k}|\pi\right)$; substituting this into the
bound above, the number of iterations needed for the $L_{2}$-Wasserstein
distance is
$\tilde{O}\left(d^{\frac{3-\alpha}{1+\alpha}}\gamma^{\frac{-1}{\alpha}-1}\epsilon^{\frac{-2}{\alpha}}\right)$.
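As a numerical illustration of how these bounds translate into concrete run
lengths, the short computation below plugs hypothetical constants (chosen only
to exercise the formulas; they come from no experiment in this paper) into the
step-size cap and iteration count of Theorem 3.1.

```python
import numpy as np

# Hypothetical problem constants, chosen only to exercise the formulas.
gamma, L, N, alpha, p, d = 1.0, 2.0, 1, 0.5, 2.0, 100
eps, H0 = 1e-2, 50.0   # target KL accuracy and assumed initial KL divergence

# D_3 from Eq. (3.8); the summand in (3.8) does not depend on i, so the
# sum over i = 1, ..., N contributes a factor of N.
D3 = N * (10 * N**3 * L**6 + 16 * N * L**4
          + 8 * N**2 * L**4 * d**(3 / p) + 4 * N * L**2 * d)

# Step size: the minimum of the four caps in Theorem 3.1.
eta = min(1.0,
          1 / (4 * gamma),
          (gamma / (9 * N**1.5 * L**3)) ** (1 / alpha),
          (3 * eps * gamma / (16 * D3)) ** (1 / alpha))

# Iteration count k >= (1 / (gamma * eta)) * log(2 * H(p_0 | pi) / eps).
k = int(np.ceil(np.log(2 * H0 / eps) / (gamma * eta)))
print(f"eta = {eta:.3e}, iterations k = {k:.3e}")
```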
### 3.3 Sampling via smoothing potential
Inspired by the approach of [6], we study the convergence of the discrete-time
process for the smoothing potential, which has the following form:
$U_{\mu}(x):=\mathrm{\mathbb{E}}_{\xi}[U(x+\mu\xi)].$ (3.11)
Observe from Lemma 2.2 that $U(\cdot)$ is $\alpha$-mixture weakly smooth but
$U_{\mu}(x)$ is smooth. Recall that ULA in terms of the smoothing potential
$U_{\mu}$ can be specified as:
$x_{k+1}=x_{k}-\eta\nabla U_{\mu}(x_{k})+\sqrt{2\eta}\varsigma_{k},$ (3.12)
where $\varsigma_{k}\sim N(0,\ I_{d\times d})$ are independent Gaussian random
vectors. In general, we do not have access to an oracle for $\nabla
U_{\mu}(x)$, so rather than working with $\nabla U_{\mu}(x)$ as specified by
Eq. 3.12, we use an estimate of the gradient:
$\displaystyle g_{\mu}(x)=\nabla U(x+\mu\xi)$ (3.13)
where $\xi\sim N_{p}(0,I_{d})$. Based on the above estimate of the gradient,
we obtain the following result; a minimal implementation sketch follows the
lemma.
###### Lemma 3.3.
For any $x_{k}\in\mathbb{R}^{d}$, $g_{\mu}(x_{k},\zeta_{k})=\nabla
U_{\mu}(x_{k})+\zeta_{k}$ is an unbiased estimator of $\nabla U_{\mu}$ such
that
$\displaystyle\mathrm{Var}\left[g_{\mu}(x_{k},\zeta_{k})\right]\leq
4N^{2}L^{2}\mu^{2\alpha}d^{\frac{2\alpha}{p}}.$
###### Proof.
See Appendix D.1. ∎
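For concreteness, here is a minimal sketch of one iteration of (3.12) driven by
the stochastic gradient (3.13). It uses the standard fact that each coordinate
of $\xi\sim N_{p}(0,I_{d})$ has density proportional to $e^{-|t|^{p}/p}$, so
$|t|^{p}/p$ follows a $\mathrm{Gamma}(1/p,1)$ distribution; the gradient oracle
`grad_U` in the usage example is a hypothetical stand-in for the actual
potential's gradient.

```python
import numpy as np

def sample_np(d, p, rng):
    """Draw xi ~ N_p(0, I_d): coordinate-wise, |xi_i|^p / p ~ Gamma(1/p, 1),
    so xi_i = s * (p * G)^(1/p) with a uniform random sign s (a standard
    property of the exponential-power density exp(-|t|^p / p))."""
    g = rng.gamma(shape=1.0 / p, scale=1.0, size=d)
    signs = rng.choice([-1.0, 1.0], size=d)
    return signs * (p * g) ** (1.0 / p)

def smoothed_ula_step(x, eta, mu, p, grad_U, rng):
    """One step of Eq. (3.12) using the stochastic gradient of Eq. (3.13):
    g_mu(x) = grad U(x + mu * xi) is an unbiased estimate of grad U_mu(x)."""
    xi = sample_np(x.size, p, rng)
    g = grad_U(x + mu * xi)          # perturbed-gradient estimate
    z = rng.standard_normal(x.size)  # driving Gaussian noise
    return x - eta * g + np.sqrt(2.0 * eta) * z

# Usage with an illustrative weakly smooth gradient (hypothetical oracle).
rng = np.random.default_rng(0)
grad_U = lambda x: np.linalg.norm(x) ** (-0.5) * x if np.linalg.norm(x) > 0 else x
x = np.ones(10)
for _ in range(1000):
    x = smoothed_ula_step(x, eta=1e-2, mu=0.05, p=1.5, grad_U=grad_U, rng=rng)
```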
Let the distribution of the $k^{th}$ iterate $x_{k}$ be denoted by
$\pi_{\mu,k}$, and let $\pi_{\mu}\propto\exp(-U_{\mu})$ be the distribution
with $U_{\mu}$ as its potential. First, we prove that the $p$-generalized
Gaussian smoothing does not alter the target distribution substantially in
terms of the Wasserstein distance, by bounding $W_{2}(\pi,\pi_{\mu})$.
###### Lemma 3.4.
Assume that $\pi\propto\exp(-U)$ and $\pi_{\mu}\propto\exp(-U_{\mu})$, and that
$\pi$ has a bounded second moment, that is,
$\int\left\|x\right\|^{2}\pi(x)dx=E_{2}<\infty$. We deduce the following
bound
$W_{2}^{2}(\pi,\ \pi_{\mu})\leq 8.24NL\mu^{1+\alpha}d^{\frac{2}{p}}E_{2}.$
for any $\mu\leq 0.05$.
###### Proof.
See Appendix D.3. ∎
We then derive a result on the mixing time of the Langevin diffusion with
stochastic gradient estimates under a log-Sobolev inequality, which enables us
to bound $W_{2}(\pi_{\mu,k},\pi_{\mu})$. Our main outcome is stated in the
following theorem.
###### Theorem 3.2.
Suppose $\pi_{\mu}$ is $\gamma_{1}$-log-Sobolev and $\alpha$-mixture weakly
smooth with $\max\left\\{L_{i}\right\\}=L\geq 1$ and
$\int\left\|x\right\|^{2}\pi(x)dx=E_{2}<\infty$. For any $x_{0}\sim p_{0}$
with $H(p_{0}|\pi)=C_{0}<\infty$, the iterates $x_{k}\sim p_{k}$ of ULA with
step size
$\eta\leq\min\left\\{1,\frac{1}{4\gamma_{1}},\left(\frac{\gamma_{1}}{13N^{\frac{3}{2}}L^{3}}\right)^{\frac{1}{\alpha}}\right\\}$
(3.14)
satisfy
$\displaystyle W_{2}(\pi_{\mu,k},\pi)\leq e^{-\frac{\gamma_{1}}{2}\eta
k}\sqrt{H(p_{0}|\pi_{\mu})}+\sqrt{\frac{8\eta^{\alpha}D_{4}}{3\gamma_{1}}}+3\sqrt{NLE_{2}}d^{\frac{1}{p}}\eta^{\frac{\alpha}{2}},$
where
$D_{4}=\sum_{i}10N^{3}L^{6}+16NL^{4}+8N^{2}L^{4}d^{\frac{3}{p}}+4NL^{2}d+8N^{2}L^{2}d^{\frac{2\alpha}{p}}$.
Then, for any $\epsilon>0$, to achieve $W_{2}(\pi_{\mu,k},\pi)<\epsilon$, it
suffices to run ULA with step size $\eta\leq
1\wedge\frac{1}{4\gamma_{1}}\wedge\left(\frac{\gamma_{1}}{13N^{\frac{3}{2}}L^{3}}\right)^{\frac{1}{\alpha}}\wedge\left(\frac{\epsilon\gamma_{1}}{6\sqrt{D_{4}}}\right)^{\frac{2}{\alpha}}\wedge\left(\frac{\epsilon}{9\sqrt{NLE_{2}}d^{\frac{1}{p}}}\right)^{\frac{2}{\alpha}}$
for
$k\geq\frac{2}{\gamma_{1}\eta}\log\frac{3\sqrt{H\left(p_{0}|\pi\right)\gamma_{1}}}{\epsilon}$
iterations.
###### Proof.
See Appendix D.2. ∎
## 4 Extended result
Since log-Sobolev inequalities are preserved under bounded perturbations by
[18]'s theorem, we provide our extended results through convexification of a
non-convex domain [24, 35]. Convexification of a non-convex domain is an
approach originally proposed by [24, 35], and developed and applied to
potentials that are strongly convex outside a compact set by [35]. We
emphasize that it is nontrivial to apply their results in our case because of
the requirement of strong convexity. Before starting our extension, we need an
additional lemma, taken from [24, 35], for our proof.
###### Lemma 4.1.
[[24] Lemma 2]. Define $\Omega=\mathbb{R}^{d}\backslash\mathbb{B}(0,R)$,
where $\mathbb{B}(0,R)$ is the open ball of radius $R$ centered at $0$, and
define $V\left(x\right)=\inf\left\\{\sum_{i}\lambda_{i}U(\ x_{i})\right\\}$
where the infimum runs over all convex combinations of points $x_{i}$ (that
is, $\lambda_{i}\geq 0$, $\sum_{i}\lambda_{i}=1$ and
$\sum_{i}\lambda_{i}x_{i}=x$). Then for all $x\in\mathbb{B}(0,R)$, $V(\ x)$
can be represented as a convex combination of values $U\left(x_{j}\right)$
with $\left\|x_{j}\right\|=R$; that is,
$V\left(x\right)=\inf\left\\{\sum_{j}\lambda_{j}U(\ x_{j})\right\\}$ where
$\lambda_{j}\geq 0$, $\sum_{j}\lambda_{j}=1$, $\sum_{j}\lambda_{j}x_{j}=x$
and $\left\|x_{j}\right\|=R.$ Consequently,
$\inf_{\left\|\bar{x}\right\|=R}U(\bar{x})\leq V(\
x)\leq\sup_{\left\|\bar{x}\right\|=R}U(\bar{x}).$
###### Proof.
See Appendix E.1. ∎
Adapting techniques from [24] to non-strongly convex and $\alpha$-mixture
weakly smooth potentials, we derive a tighter bound on the difference between
the constructed convex potential and the original one in the following lemma.
###### Lemma 4.2.
For $U$ that is $\alpha$-mixture weakly smooth and
$\left(\mu,\theta\right)$-degenerately convex outside the ball of radius $R$,
there exists $\hat{U}\in C^{1}(\mathbb{R}^{d})$ with a Hessian that exists
everywhere on $\mathbb{R}^{d}$, and $\hat{U}$ is
$\left(\left(1-\theta\right)\frac{\mu}{2},\theta\right)$-degenerately convex on
$\mathbb{R}^{d}$ (that is,
$\nabla^{2}\hat{U}(x)\succeq\left(1-\theta\right)\frac{\mu}{2}\left(1+\left\|x\right\|^{2}\right)^{-\frac{\theta}{2}}I_{d}$),
such that
$\displaystyle\sup\left(\hat{U}(\ x)-U(\ x)\right)$
$\displaystyle-\inf\left(\hat{U}(\ x)-U(\
x)\right)\leq\sum_{i}L_{i}R^{1+\alpha_{i}}+\frac{4\mu}{\left(2-\theta\right)}\
R^{2-\theta}.$ (4.1)
###### Proof.
See Appendix E.2. ∎
###### Remark 4.1.
This result can be applied to potentials that are degenerately convex outside
the ball. Setting $\mu=0$ yields a result for potentials that are non-strongly
convex outside the ball, while setting $\theta=0$ yields a result for
potentials that are strongly convex outside the ball. The constant could be
improved by a factor of $2$ by taking the $\epsilon$ defined in the proof
arbitrarily small.
### 4.1 ULA convergence under the $\gamma$-Poincaré inequality, $\alpha$-mixture
weak smoothness and $2$-dissipativity
In general, $PI$ is weaker than $LSI$. In order to apply the previous
log-Sobolev results, we also need a $2$-dissipativity assumption. First, using
the convexification of the non-convex domain result above, we have the
following bounded-perturbation lemma.
###### Lemma 4.3.
For $U$ satisfying $\gamma$-Poincaré and $\alpha$-mixture weak smoothness,
there exists $\breve{U}\in C^{1}(\mathbb{R}^{d})$ with a Hessian that exists
everywhere on $\mathbb{R}^{d}$, such that $e^{-\breve{U}}$ satisfies a
log-Sobolev inequality on $\mathbb{R}^{d}$ and
$\sup\left(\breve{U}(\ x)-U(\ x)\right)-\inf\left(\breve{U}(\ x)-U(\
x)\right)\leq 2\sum_{i}L_{i}R^{1+\alpha_{i}}+4L_{N}R^{2}+4LR^{1+\alpha}.$
(4.2)
###### Proof.
See Appendix E.3. ∎
By the bounded perturbation theorem, this result implies that $\pi$ satisfies a
log-Sobolev inequality, which in turn gives us the following result.
###### Theorem 4.1.
Suppose $\pi$ is $\gamma$-Poincaré, $\alpha$-mixture weakly smooth with
$\alpha_{N}=1$, and $2$-dissipative (i.e. $\left\langle\nabla
U(x),x\right\rangle\geq a\left\|x\right\|^{2}-b$ for some $a,b>0$), and for
any $x_{0}\sim p_{0}$ with $H(p_{0}|\pi)=C_{0}<\infty$, the iterates
$x_{k}\sim p_{k}$ of ULA with step size $\eta\leq
1\wedge\frac{1}{4\gamma_{3}}\wedge\left(\frac{\gamma_{3}}{16L^{1+\alpha}}\right)^{\frac{1}{\alpha}}$
satisfy
$\displaystyle H(p_{k}|\pi)\leq e^{-\gamma_{3}\eta
k}H(p_{0}|\pi)+\frac{8\eta^{\alpha}D_{3}}{3\gamma_{3}},$ (4.3)
where $D_{3}$ is defined as in equation (3.8) and
$\displaystyle M_{2}$
$\displaystyle=\int\left\|x\right\|^{2}e^{-\breve{U}(x)}dx=O(d)$ (4.4)
$\displaystyle\zeta$
$\displaystyle=\sqrt{2\left[\frac{2\left(b+\left(L+\frac{\lambda_{0}}{2}\right)R^{2}+aR^{2}+d\right)}{a}+M_{2}\right]\frac{e^{4\left(2\sum_{i}L_{i}R^{1+\alpha_{i}}+4L_{N}R^{2}+4LR^{1+\alpha}\right)}}{\gamma}}$
(4.5) $\displaystyle A$ $\displaystyle=(1-\frac{L}{2})\frac{8}{a^{2}}+\zeta,$
(4.6) $\displaystyle B$
$\displaystyle=2\left[\frac{2\left(\left(b+4\left(L+\frac{\lambda_{0}}{4}\right)R^{2}+aR^{2}\right)+d\right)}{a}+M_{2}\right](1-\frac{L}{2}+\frac{1}{\zeta}),$
(4.7) $\displaystyle\gamma_{3}$ $\displaystyle=\frac{2\gamma
e^{-\left(2\sum_{i}L_{i}R^{1+\alpha_{i}}+4L_{N}R^{2}+4LR^{1+\alpha}\right)}}{[A\gamma+(B+2)e^{4\left(2\sum_{i}L_{i}R^{1+\alpha_{i}}+4L_{N}R^{2}+4LR^{1+\alpha}\right)})]}.$
Then, for any $\epsilon>0$, to achieve $H(p_{k}|\pi)<\epsilon$, it suffices to
run ULA with step size $\eta\leq
1\wedge\frac{1}{4\gamma_{3}}\wedge\left(\frac{\gamma_{3}}{16L^{1+\alpha}}\right)^{\frac{1}{\alpha}}\wedge\left(\frac{3\epsilon\gamma_{3}}{16D_{3}}\right)^{\frac{1}{\alpha}}$
for $k\geq\frac{1}{\gamma_{3}\eta}\log\frac{2H\left(p_{0}|\pi\right)}{\epsilon}$
iterations.
###### Proof.
See Appendix E.5. ∎
From Theorem 4.1, LMC can achieve $H(p_{k}|\pi)\leq\epsilon$ with iteration
complexity
$\tilde{O}\left(\frac{d^{\frac{1}{\alpha}}}{\epsilon^{\frac{1}{\alpha}}\gamma_{3}^{\frac{1}{\alpha}+1}}\right)$
where
$\displaystyle\gamma_{3}$ $\displaystyle=O\left(\frac{1}{d\gamma
e^{5\left(2\sum_{i}L_{i}R^{1+\alpha_{i}}+4L_{N}R^{2}+4LR^{1+\alpha}\right)}}\right),$
so the number of iterations needed is
$\tilde{O}\left(\frac{d^{\frac{2}{\alpha}+1}e^{5\left(2\sum_{i}L_{i}R^{1+\alpha_{i}}+4L_{N}R^{2}+4LR^{1+\alpha}\right)\left(\frac{1}{\alpha}+1\right)}}{\gamma_{3}^{\left(\frac{1}{\alpha}+1\right)}\epsilon^{\frac{1}{\alpha}}}\right).$
As before, from Pinsker’s inequality, the number of iterations to reach
$\epsilon$ accuracy in total variation is
$\tilde{O}\left(\frac{d^{\frac{2}{\alpha}+1}e^{5\left(2\sum_{i}L_{i}R^{1+\alpha_{i}}+4L_{N}R^{2}+4LR^{1+\alpha}\right)\left(\frac{1}{\alpha}+1\right)}}{\gamma_{3}^{\left(\frac{1}{\alpha}+1\right)}\epsilon^{\frac{2}{\alpha}}}\right).$
(4.8)
To have $W_{2}(p_{k},\ \pi)\leq\epsilon$, it is sufficient to choose
$\mathrm{H}(p_{k}|\pi)=\tilde{O}\left(\epsilon^{4}d^{-2}\right)$, which in
turn implies that the number of iterations for $W_{2}(p_{k},\
\pi)\leq\epsilon$ is
$\tilde{O}\left(\frac{d^{\frac{4}{\alpha}+1}e^{5\left(2\sum_{i}L_{i}R^{1+\alpha_{i}}+4L_{N}R^{2}+4LR^{1+\alpha}\right)\left(\frac{1}{\alpha}+1\right)}}{\gamma_{3}^{\left(\frac{1}{\alpha}+1\right)}\epsilon^{\frac{4}{\alpha}}}\right).$
(4.9)
### 4.2 ULA convergence under non-strong convexity outside the ball,
$\alpha$-mixture weak smoothness and $2$-dissipativity
Using the convexification of the non-convex domain result above, we obtain the
following lemma.
###### Lemma 4.4.
Suppose $\pi$ is non-strongly convex outside the ball of radius $R$ and
$\alpha$-mixture weakly smooth with $\alpha_{N}=1$, and satisfies
$2$-dissipativity (i.e. $\left\langle\nabla U(x),x\right\rangle\geq
a\left\|x\right\|^{2}-b$ for some $a,b>0$). Then there exists $\breve{U}\in
C^{1}(\mathbb{R}^{d})$ with a Hessian that exists everywhere on
$\mathbb{R}^{d}$ such that $\breve{U}$ is convex on $\mathbb{R}^{d}$ and
$\sup\left(\breve{U}(\ x)-U(\ x)\right)-\inf\left(\breve{U}(\ x)-U(\
x)\right)\leq 2\sum_{i}L_{i}R^{1+\alpha_{i}}.$ (4.10)
###### Proof.
It follows directly from Lemma 4.2. ∎
Based on the result in the previous section, we obtain the following theorem.
###### Theorem 4.2.
Suppose $\pi$ is non-strongly convex outside the ball $\mathbb{B}(0,R)$,
$\alpha$-mixture weakly smooth with $\alpha_{N}=1$, and $2$-dissipative (i.e.
$\left\langle\nabla U(x),x\right\rangle\geq a\left\|x\right\|^{2}-b$ for some
$a,b>0$), and for any $x_{0}\sim p_{0}$ with $H(p_{0}|\pi)=C_{0}<\infty$,
the iterates $x_{k}\sim p_{k}$ of LMC with step size $\eta\leq
1\wedge\frac{1}{4\gamma_{3}}\wedge\left(\frac{\gamma_{3}}{16L^{1+\alpha}}\right)^{\frac{1}{\alpha}}$
satisfy
$\displaystyle H(p_{k}|\pi)\leq e^{-\gamma_{3}\eta
k}H(p_{0}|\pi)+\frac{8\eta^{\alpha}D_{3}}{3\gamma_{3}},$ (4.11)
where $D_{3}$ is defined as in equation (3.8) and for some universal constant
$K$,
$\displaystyle M_{2}$
$\displaystyle=\int\left\|x\right\|^{2}e^{-\breve{U}(x)}dx=O(d)$ (4.12)
$\displaystyle\zeta$
$\displaystyle=K\sqrt{64d\left[\frac{2\left(b+\left(L+\frac{\lambda_{0}}{2}\right)R^{2}+aR^{2}+d\right)}{a}+M_{2}\right]\left(\frac{a+b+2aR^{2}+3}{ae^{-4\left(4L_{N}R^{2}+4LR^{1+\alpha}\right)}}\right)}$
(4.13) $\displaystyle A$ $\displaystyle=(1-\frac{L}{2})\frac{8}{a^{2}}+\zeta,$
(4.14) $\displaystyle B$
$\displaystyle=2\left[\frac{2\left(\left(b+4\left(L+\frac{\lambda_{0}}{4}\right)R^{2}+aR^{2}\right)+d\right)}{a}+M_{2}\right](1-\frac{L}{2}+\frac{1}{\zeta}),$
(4.15) $\displaystyle\gamma_{3}$
$\displaystyle=\frac{2e^{-\left(2\sum_{i}L_{i}R^{1+\alpha_{i}}+4L_{N}R^{2}+4LR^{1+\alpha}\right)}}{A+(B+2)32K^{2}d\left(\frac{a+b+2aR^{2}+3}{a}\right)e^{4\left(4L_{N}R^{2}+4LR^{1+\alpha}\right)}}=\frac{1}{O(d)}.$
Then, for any $\epsilon>0$, to achieve $H(p_{k}|\pi)<\epsilon$, it suffices to
run ULA with step size $\eta\leq
1\wedge\frac{1}{4\gamma_{3}}\wedge\left(\frac{\gamma_{3}}{16L^{1+\alpha}}\right)^{\frac{1}{\alpha}}\wedge\left(\frac{3\epsilon\gamma_{3}}{16D_{3}}\right)^{\frac{1}{\alpha}}$
for $k\geq\frac{1}{\gamma_{3}\eta}\log\frac{2H\left(p_{0}|\pi\right)}{\epsilon}$
iterations.
###### Proof.
See Appendix E.5. ∎
## 5 Conclusion
In this article, we derive polynomial-in-dimension theoretical guarantees for
the unadjusted LMC algorithm for a family of potentials that are
$\alpha$-mixture weakly smooth and isoperimetric (i.e. satisfy log-Sobolev,
Poincaré, or Talagrand inequalities). In addition, we investigate the family
of potentials that are non-strongly convex outside a ball and $2$-dissipative.
Our analysis extends the recently published work of [31] in combination with
the convexification of non-convex domains [24]. There are a number of valuable
directions one could explore; we speculate on some here. It may be possible to
broaden our results by applying underdamped LMC or higher-order LMC to these
classes of potentials while keeping the computational complexity polynomially
dependent on $d$. Another fascinating question is whether it is feasible to
sample from distributions with non-smooth and entirely non-convex structure,
and to integrate this into a derivative-free LMC algorithm.
## Appendix A Measure definitions and isoperimetry
Let $p,\pi$ be probability distributions on $\mathbb{R}^{d}$ with full support
and smooth densities, and define the Kullback-Leibler (KL) divergence of $p$
with respect to $\pi$ as
$H(p|\pi)\stackrel{{\scriptstyle\triangle}}{{=}}\int_{\mathbb{R}^{d}}p(x)\log\frac{p(x)}{\pi(x)}\,dx.$
(A.1)
Likewise, we denote the entropy of $p$ by
${\displaystyle\mathrm{H}(p)\stackrel{{\scriptstyle\triangle}}{{=}}-\int
p(x)\log p(x)dx}$ (A.2)
and, with $\mathcal{B}(\mathbb{R}^{d})$ denoting the Borel $\sigma$-field of
$\mathbb{R}^{d}$, define the relative Fisher information and total variation
metrics respectively as
${\displaystyle\mathrm{I}(p|\pi)\stackrel{{\scriptstyle\triangle}}{{=}}\int_{\mathbb{R}^{d}}p(x)\left\|\nabla\log\frac{p(x)}{\pi(x)}\right\|^{2}dx},$
(A.3)
${\displaystyle TV(p,\ \pi)\stackrel{{\scriptstyle\triangle}}{{=}}\sup_{A\in\mathcal{B}(\mathbb{R}^{d})}\left|\int_{A}p(x)dx-\int_{A}\pi(x)dx\right|}.$
(A.4)
Furthermore, we define a transference plan $\zeta$, a distribution on
$(\mathbb{R}^{d}\times\mathbb{R}^{d},\
\mathcal{B}(\mathbb{R}^{d}\times\mathbb{R}^{d}))$ (where
$\mathcal{B}(\mathbb{R}^{d}\times\mathbb{R}^{d})$ is the Borel $\sigma$-field
of $\mathbb{R}^{d}\times\mathbb{R}^{d}$), such that
$\zeta(A\times\mathbb{R}^{d})=p(A)$ and $\zeta(\mathbb{R}^{d}\times A)=\pi(A)$
for any $A\in\mathcal{B}(\mathbb{R}^{d})$. Let $\Gamma(p,\ \pi)$ designate the
set of all such transference plans. Then for $\beta>0$, the
$L_{\beta}$-Wasserstein distance is formulated as:
$W_{\beta}(p,\pi)\stackrel{{\scriptstyle\triangle}}{{=}}\left(\inf_{\zeta\in\Gamma(p,\pi)}\int_{x,y\in\mathbb{R}^{d}}\|x-y\|^{\beta}\mathrm{d}\zeta(x,\
y)\right)^{1/\beta}.$ (A.5)
Note that although the KL divergence is an asymmetric measure of distance
between probability distributions, it is the preferred measure of distance
here since it controls the total variation distance via Pinsker’s inequality.
In addition, the KL divergence also governs the quadratic Wasserstein $W_{2}$
distance under the log-Sobolev, Talagrand, and Poincaré inequalities defined
below.
###### Definition A.1.
The probability distribution $\pi$ satisfies a logarithmic Sobolev inequality
with constant $\gamma>0$ (in short: $LSI(\gamma)$) if for all probability
distributions $p$ absolutely continuous $w.r.t.\ \pi$,
###### Definition A.2.
The probability distribution $\pi$ satisfies a Talagrand inequality with
constant $\gamma>0$ (in short: $T(\gamma)$) if for all probability
distributions $p$, absolutely continuous $w.r.t.\ \pi$, with finite moments of
order 2,
$W_{2}(p,\ \pi)\leq\sqrt{\frac{2H(p|\pi)}{\gamma}}.$ (A.7)
###### Definition A.3.
The probability distribution $\pi$ satisfies a Poincaré inequality with
constant $\gamma>0$ (in short: $PI(\gamma)$) if for all smooth functions
$g\colon\mathbb{R}^{d}\to\mathbb{R}$,
$Var_{\pi}(g)\leq\frac{1}{\gamma}E_{\pi}[\|\nabla g\|^{2}],$ (A.8)
where $Var_{\pi}(g)=E_{\pi}[g^{2}]-E_{\pi}[g]^{2}$ is the variance of $g$ under $\pi$.
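As a standard concrete example (a known fact, following Bakry–Émery and
Otto–Villani, stated here only for orientation): for the Gaussian
$\pi=N(0,\sigma^{2}I_{d})$, the potential $-\log\pi$ is
$\frac{1}{\sigma^{2}}$-strongly convex, so $\pi$ satisfies
$LSI(\frac{1}{\sigma^{2}})$; moreover $LSI(\gamma)$ implies $T(\gamma)$, which
in turn implies $PI(\gamma)$, so the Gaussian satisfies all three inequalities
above with $\gamma=\frac{1}{\sigma^{2}}$.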
## Appendix B Proofs of $p$-generalized Gaussian smoothing
### B.1 Proof of $\alpha$-mixture weakly smooth property
###### Lemma B.1.
If potential $U:\mathbb{R}^{d}\rightarrow\mathbb{R}$ satisfies
$\alpha$-mixture weakly smooth then:
$U(y)\leq U(x)+\left\langle\nabla U(x),\
y-x\right\rangle+\sum_{i}\frac{L_{i}}{1+\alpha_{i}}\|y-x\|^{1+\alpha_{i}}.$
In particular, if potential $U:\mathbb{R}^{d}\rightarrow\mathbb{R}$ satisfies
$\left(\alpha,\ell\right)-$weakly smooth for some $\alpha+\ell\leq 1$ and
$\alpha\in[0,1]$, then:
$U(y)\leq U(x)+\left\langle\nabla U(x),\
y-x\right\rangle+\frac{L}{1+\alpha}\|y-x\|^{1+\alpha}+\frac{L}{1+\ell+\alpha}\|y-x\|^{1+\ell+\alpha}.$
###### Proof.
We have
$\displaystyle\left|U(x)-U(y)-\langle\nabla U(y),x-y\rangle\right|$
$\displaystyle=$ $\displaystyle\Big{|}\int_{0}^{1}\langle\nabla
U(y+t(x-y)),x-y\rangle\text{d}t-\langle\nabla U(y),x-y\rangle\Big{|}$
$\displaystyle=$ $\displaystyle\Big{|}\int_{0}^{1}\langle\nabla
U(y+t(x-y))-\nabla U(y),x-y\rangle\text{d}t\Big{|}.$ $\displaystyle\leq$
$\displaystyle\int_{0}^{1}\|\nabla U(y+t(x-y))-\nabla U(y)\|\|x-y\|\text{d}t$
$\displaystyle\leq$
$\displaystyle\int_{0}^{1}\sum_{i}L_{i}t^{\alpha_{i}}\|x-y\|^{\alpha_{i}}\|x-y\|\text{d}t$
$\displaystyle=$
$\displaystyle\sum_{i}\frac{L_{i}}{1+\alpha_{i}}\|x-y\|^{1+\alpha_{i}},$
where the first line comes from the Taylor expansion, the third line follows
from the Cauchy–Schwarz inequality, and the fourth line is due to Assumption
2.1. This gives us the desired result. By replacing Assumption 2.1 with
Assumption 2.2, we immediately get
$U(y)\leq U(x)+\left\langle\nabla U(x),\
y-x\right\rangle+\frac{L}{1+\alpha}\|y-x\|^{1+\alpha}+\frac{L}{1+\ell+\alpha}\|y-x\|^{1+\ell+\alpha}.$
∎
### B.2 Proof of $p$-generalized Gaussian smoothing properties
###### Lemma B.2.
If potential $U:\mathbb{R}^{d}\rightarrow\mathbb{R}$ satisfies
$\alpha$-mixture weakly smooth then:
(i) $\forall x\in\mathbb{R}^{d}$ :
$\left|U_{\mu}(x)-U(x)\right|{\displaystyle\leq\sum_{i}L_{i}\mu^{1+\alpha_{i}}d^{\frac{1+\alpha_{i}}{p}},}$
(ii) $\forall x\in\mathbb{R}^{d}$: ${\displaystyle\left\|\nabla
U_{\mu}(x)-\nabla
U(x)\right\|\leq\sum_{i}L_{i}\mu^{\alpha_{i}}d^{\frac{3}{p}}},$
(iii) $\forall x,\ y\in\mathbb{R}^{d}$: ${\displaystyle\left\|\nabla
U_{\mu}(y)-\nabla
U_{\mu}(x)\right\|\leq\sum_{i}\frac{L_{i}}{\mu^{1-\alpha_{i}}}d^{\frac{2}{p}}\left\|y-x\right\|.}$
In particular, if the potential $U:\mathbb{R}^{d}\rightarrow\mathbb{R}$
satisfies $\left(\alpha,\ell\right)-$weakly smooth for some $\alpha+\ell\leq
1$ and $\alpha\in[0,1]$, then:
(i) $\forall x\in\mathbb{R}^{d}$ :
$\left|U_{\mu}(x)-U(x)\right|{\displaystyle\leq
2L\mu^{1+\ell+\alpha}d^{\frac{1+\ell+\alpha}{p}},}$
(ii) $\forall x\in\mathbb{R}^{d}$: ${\displaystyle\left\|\nabla
U_{\mu}(x)-\nabla U(x)\right\|\leq 2L\mu^{\alpha}d^{\frac{3}{p}}},$
(iii) $\forall x,\ y\in\mathbb{R}^{d}$: ${\displaystyle\left\|\nabla
U_{\mu}(y)-\nabla
U_{\mu}(x)\right\|\leq\frac{L}{\mu^{1-\alpha}}d^{\frac{2}{p}}\left\|y-x\right\|.}$
###### Proof.
(i). Since $U_{\mu}(x)=\mathrm{\mathbb{E}}_{\xi}[U(x+\mu\xi)]$,
$U(x)=\mathrm{\mathbb{E}}_{\xi}[U(x)]$ and
$\mathbb{E}_{\xi}\mu\left\langle\nabla U(x),\ \xi\right\rangle=0$, we have
$U_{\mu}(x)-U(x)=\mathbb{E}_{\xi}\left[U(x+\mu\xi)-U(x)-\mu\left\langle\nabla
U(x),\ \xi\right\rangle\right].$
By the definition of the density of $p$-generalized Gaussian distribution [1],
we also have:
$U_{\mu}(x)-U(x)=\frac{1}{\kappa}\int_{\mathbb{R}^{d}}[U(x+\mu\xi)-U(x)-\mu\left\langle\nabla
U(x),\ \xi\right\rangle]e^{-\left\|\xi\right\|_{p}^{p}/p}d\xi.$
Applying Eq. 2.2 and previous inequality:
$\displaystyle|U_{\mu}(x)-U(x)|$
$\displaystyle=\left|\frac{1}{\kappa}\int_{\mathbb{R}^{d}}\left[U(x+\mu\xi)-U(x)-\mu\left\langle\nabla
U(x),\ \xi\right\rangle\right]e^{-\left\|\xi\right\|_{p}^{p}/p}d\xi\right|$
$\displaystyle\leq\sum_{i}\frac{L_{i}}{\kappa(1+\alpha_{i})}\mu^{1+\alpha_{i}}\int_{\mathbb{R}^{d}}\left\|\xi\right\|^{(1+\alpha_{i})}e^{-\left\|\xi\right\|_{p}^{p}/p}d\xi$
$\displaystyle=\sum_{i}\frac{L_{i}\mu^{1+\alpha_{i}}}{(1+\alpha_{i})}E\left[\left\|\xi\right\|^{(1+\alpha_{i})}\right].$
If $p\leq 2$ then $\left\|\xi\right\|\leq\left\|\xi\right\|_{p}$ and we get
$\displaystyle|U_{\mu}(x)-U(x)|$
$\displaystyle\leq\sum_{i}\frac{L_{i}\mu^{1+\alpha_{i}}}{(1+\alpha_{i})}E\left[\left\|\xi\right\|^{(1+\alpha_{i})}\right]$
$\displaystyle\stackrel{{{}_{1}}}{{\leq}}\sum_{i}\frac{L_{i}\mu^{1+\alpha_{i}}}{(1+\alpha_{i})}\mathbb{E}\left[\left\|\xi\right\|_{p}^{2}\right]^{\frac{1+\alpha_{i}}{2}}$
$\displaystyle\stackrel{{{}_{2}}}{{\leq}}\sum_{i}\frac{L_{i}\mu^{1+\alpha_{i}}}{(1+\alpha_{i})}\left(\left(d+1\right)^{\frac{2}{p}}\right)^{\frac{1+\alpha_{i}}{2}}$
$\displaystyle\leq\sum_{i}\frac{L_{i}\mu^{1+\alpha_{i}}}{(1+\alpha_{i})}d^{\frac{1+\alpha_{i}}{p}}$
$\displaystyle\leq\sum_{i}\frac{L_{i}\mu^{1+\alpha_{i}}}{(1+\alpha_{i})}d^{\frac{2}{p}}$
where step $1$ follows from Jensen's inequality and $0\leq\alpha\leq 1$, step
$2$ is from Lemma F.16 below, which states that if $\xi\sim
N_{p}\left(0,I_{d}\right)$ then $d^{\left\lfloor\frac{n}{p}\right\rfloor}\leq
E(\left\|\xi\right\|_{p}^{n})\leq\left[d+\frac{n}{2}\right]^{\frac{n}{p}}$,
where $\left\lfloor x\right\rfloor$ denotes the largest integer less than or
equal to $x$, and the last step is a simplification valid when $d$ is large
enough and $\mu$ is small enough. By replacing Assumption 2.1 with Assumption
2.2, for $\mu$ small enough, we immediately get
$\left|U_{\mu}(x)-U(x)\right|{\displaystyle\leq
2L\mu^{1+\ell+\alpha}d^{\frac{1+\ell+\alpha}{p}}.}$
(ii). We adapt the technique of [27] to $p$-generalized Gaussian smoothing.
Let $y=x+\mu\xi$, then $U_{\mu}(x)$ is rewritten in another form as
$\displaystyle U_{\mu}(x)$
$\displaystyle=\mathrm{\mathbb{E}}_{\xi}[U(x+\mu\xi)]$
$\displaystyle=\frac{1}{\kappa\mu}\int_{\mathbb{R}^{d}}U(y)e^{-\frac{1}{p\mu^{p}}\left\|y-x\right\|_{p}^{p}}dy.$
Now taking the gradient with respect to $x$ of $U_{\mu}(x)$ gives
$\nabla_{x}U_{\mu}(x)=\frac{1}{\kappa\mu}\nabla_{x}\int_{\mathbb{R}^{d}}U(y)e^{-\frac{1}{p\mu^{p}}\left\|y-x\right\|_{p}^{p}}dy.$
By Fubini Theorem with some regularity (i.e. $\mathbb{E}|U(y)|<\infty$), we
can exchange the gradient and integral and get
$\displaystyle\nabla_{x}U_{\mu}(x)$
$\displaystyle=\frac{1}{\kappa\mu}\int_{\mathbb{R}^{d}}\nabla_{x}\left(U(y)e^{-\frac{1}{p\mu^{p}}\left\|y-x\right\|_{p}^{p}}\right)dy$
$\displaystyle=\frac{1}{\kappa\mu}\int_{\mathbb{R}^{d}}U(y)\nabla_{x}\left(e^{-\frac{1}{p\mu^{p}}\left\|y-x\right\|_{p}^{p}}\right)dy$
$\displaystyle=\frac{1}{\kappa\mu}\int_{\mathbb{R}^{d}}U(y)e^{-\frac{1}{p\mu^{p}}\left\|y-x\right\|_{p}^{p}}\frac{-1}{\mu^{p}}\left\|y-x\right\|_{p}^{p-1}\nabla_{x}(\left\|y-x\right\|_{p})dy$
$\displaystyle=\frac{1}{\kappa\mu}\int_{\mathbb{R}^{d}}U(y)e^{-\frac{1}{p\mu^{p}}\left\|y-x\right\|_{p}^{p}}\frac{1}{\mu^{p}}(y-x)\circ\left|y-x\right|^{p-2}dy.$
where $\circ$ stands for the Hadamard product and $\left|\cdot\right|$ is used
for absolute value of each component of the vector $y-x$. Therefore, by
changing variable back to $\xi$, we deduce
$\displaystyle\nabla_{x}U_{\mu}(x)$
$\displaystyle=\frac{1}{\kappa}\int_{\mathbb{R}^{d}}U(x+\mu\xi)e^{-\frac{1}{p}\left\|\xi\right\|_{p}^{p}}\frac{1}{\mu}\xi\circ\left|\xi\right|^{p-2}d\xi$
$\displaystyle=\mathbb{E}_{\xi}\left[\frac{U(x+\mu\xi)\xi\circ\left|\xi\right|^{p-2}}{\mu}\right].$
In addition, if $\xi\sim N_{p}(0,I_{d})$,
$\mathbb{E}\left(\xi\right)=\frac{1}{\kappa}\int\xi
e^{-\frac{1}{p}\left\|\xi\right\|_{p}^{p}}d\xi=0$ and then
$\nabla_{\xi}\mathbb{E}\left(\xi\right)=0$. Since $\xi
e^{-\frac{1}{p}\left\|\xi\right\|_{p}^{p}}$ is bounded, we can exchange the
gradient and the integral and get
$\displaystyle\nabla_{\xi}\frac{1}{\kappa}\int\xi
e^{-\frac{1}{p}\left\|\xi\right\|_{p}^{p}}d\xi$
$\displaystyle=\frac{1}{\kappa}\int\nabla_{\xi}\left(\xi
e^{-\frac{1}{p}\left\|\xi\right\|_{p}^{p}}\right)d\xi$ $\displaystyle 0$
$\displaystyle=\frac{1}{\kappa}\int
e^{-\frac{1}{p}\left\|\xi\right\|_{p}^{p}}d\xi+\frac{1}{\kappa}\int\xi\nabla_{\xi}\left(e^{-\frac{1}{p}\left\|\xi\right\|_{p}^{p}}\right)d\xi$
$\displaystyle 0$
$\displaystyle=1-\frac{1}{\kappa}\int\xi\mathrm{e}^{-\frac{1}{p}\left\|\xi\right\|_{p}^{p}}\left\|\xi\right\|_{p}^{p-1}\nabla_{\xi}\left(\left\|\xi\right\|_{p}\right)d\xi$
$\displaystyle 0$
$\displaystyle=1-\frac{1}{\kappa}\int\xi\cdot\xi\circ\left|\xi\right|^{p-2}e^{-\frac{1}{p}\left\|\xi\right\|_{p}^{p}}d\xi,$
which implies
$\frac{1}{\kappa}\int\xi\cdot\xi\circ\left|\xi\right|^{p-2}e^{-\frac{1}{p}\left\|\xi\right\|_{p}^{p}}d\xi=1.$
(B.1)
On the other hand, we also have $\frac{1}{\kappa}\int
e^{-\frac{1}{p}\left\|\xi\right\|_{p}^{p}}d\xi=1$ so $\nabla_{\xi}\int
e^{-\frac{1}{p}\left\|\xi\right\|_{p}^{p}}d\xi=0.$ By exchange the gradient
and the integral and we also get
$\displaystyle 0$ $\displaystyle=\nabla_{\xi}\int
e^{-\frac{1}{p}\left\|\xi\right\|_{p}^{p}}d\xi$
$\displaystyle=\int\nabla_{\xi}e^{-\frac{1}{p}\left\|\xi\right\|_{p}^{p}}d\xi$
$\displaystyle=\int\nabla_{\xi}\left(e^{-\frac{1}{p}\left\|\xi\right\|_{p}^{p}}\right)d\xi$
$\displaystyle=-\int
e^{-\frac{1}{p}\left\|\xi\right\|_{p}^{p}}\xi\circ\left|\xi\right|^{p-2}d\xi$
which implies that
$\mathbb{E}_{\xi}\left[\xi\circ\left|\xi\right|^{p-2}\right]=0.$ (B.2)
From B.1 and B.2, we obtain
$\displaystyle\left\|\nabla U_{\mu}(x)-\nabla U(x)\right\|$
$\displaystyle=\left\|\frac{1}{\kappa}\int_{\mathbb{R}^{d}}\left[\frac{U(x+\mu\xi)-U(x)}{\mu}-\left\langle\nabla
U(x),\xi\right\rangle\right]\xi\circ\left|\xi\right|^{p-2}e^{-\frac{1}{p}\left\|\xi\right\|_{p}^{p}}d\xi\right\|$
$\displaystyle\stackrel{{{}_{1}}}{{\leq}}\frac{1}{\kappa\mu}\int_{\mathbb{R}^{d}}\left|U(x+\mu\xi)-U(x)-\mu\left\langle\nabla
U(x),\xi\right\rangle\right|e^{-\frac{1}{p}\left\|\xi\right\|_{p}^{p}}\left\|\xi\circ\left|\xi\right|^{p-2}\right\|d\xi$
$\displaystyle\stackrel{{{}_{2}}}{{\leq}}\sum_{i}\frac{L_{i}\mu^{\alpha_{i}}}{\kappa\left(1+\alpha_{i}\right)}\int_{\mathbb{R}^{d}}\left\|\xi\right\|^{\alpha_{i}+1}e^{-\frac{1}{p}\left\|\xi\right\|_{p}^{p}}\left\|\xi\circ\left|\xi\right|^{p-2}\right\|d\xi$
$\displaystyle=\sum_{i}\frac{L_{i}\mu^{\alpha_{i}}}{\kappa\left(1+\alpha_{i}\right)}\int_{\mathbb{R}^{d}}\left\|\xi\right\|^{\alpha_{i}+1}e^{-\frac{1}{p}\left\|\xi\right\|_{p}^{p}}\left\|\xi^{p-1}\right\|d\xi,$
where step $1$ follows from Jensen's inequality, step $2$ is due to Eq. 2.2,
and the last step follows from the component-wise operation of the norm. If
$p\leq 2$, then by the generalized Hölder inequality,
$\left\|\xi^{p-1}\right\|$ can be bounded as follows:
$\displaystyle\left\|\xi^{p-1}\right\|$
$\displaystyle\leq\left\|\xi^{p-1}\right\|_{p}$
$\displaystyle=\left\|\xi^{p-1}\cdot 1_{d}\right\|_{p}$
$\displaystyle\stackrel{{\scriptstyle}}{{\leq}}\left\|\xi\right\|_{p}^{p-1}\left\|1_{d}\right\|_{p}^{2-p}$
$\displaystyle=\left\|\xi\right\|_{p}^{p-1}d^{\frac{2-p}{p}}.$ (B.3)
As a result, if $1\leq p\leq 2$ we have
$\displaystyle\left\|\nabla U_{\mu}(x)-\nabla U(x)\right\|$
$\displaystyle\leq\sum_{i}\frac{L_{i}\mu^{\alpha_{i}}}{\kappa\left(1+\alpha_{i}\right)}\int_{\mathbb{R}^{d}}\left\|\xi\right\|^{\alpha_{i}+1}\left\|\xi\right\|_{p}^{p-1}e^{-\frac{1}{p}\left\|\xi\right\|_{p}^{p}}d\xi$
$\displaystyle\stackrel{{{}_{1}}}{{\leq}}\sum_{i}\frac{L_{i}\mu^{\alpha_{i}}}{\left(1+\alpha_{i}\right)}d^{\frac{2-p}{p}}\mathbb{E}\left[\left\|\xi\right\|_{p}^{p+\alpha_{i}}\right]$
$\displaystyle\stackrel{{{}_{2}}}{{\leq}}\sum_{i}\frac{L_{i}\mu^{\alpha_{i}}}{\left(1+\alpha_{i}\right)}d^{\frac{2-p}{p}}\mathbb{E}\left[\left\|\xi\right\|_{p}^{2p}\right]^{\frac{p+\alpha}{2p}}$
$\displaystyle\stackrel{{{}_{3}}}{{\leq}}\sum_{i}\frac{L_{i}\mu^{\alpha_{i}}}{\left(1+\alpha_{i}\right)}d^{\frac{2-p}{p}}\left(d+p\right)^{\frac{p+\alpha}{p}}$
$\displaystyle\stackrel{{\scriptstyle}}{{\leq}}\sum_{i}L_{i}\mu^{\alpha_{i}}d^{\frac{3}{p}}$
where step $1$ is from $\left\|\xi\right\|\leq\left\|\xi\right\|_{p}$, step
$2$ follows from Jensen's inequality and $\alpha\leq p$, step $3$ is due to
Eq. 2.2, and the last two steps use a simplification valid for large enough
$d$ and small enough $\mu$. By replacing Assumption 2.1 with Assumption 2.2,
for $\mu$ small enough, we immediately get
$\left\|\nabla U_{\mu}(x)-\nabla U(x)\right\|\leq
2L\mu^{\alpha}d^{\frac{3}{p}}.$
(iii). In this case, using Eqs. 2.2 and B.2, we get:
$\nabla
U_{\mu}(x)=\frac{1}{\kappa}\int_{\mathbb{R}^{d}}\left[\frac{U(x+\mu\xi)-U(x)}{\mu}\right]\xi\circ\left|\xi\right|^{p-2}e^{-\frac{1}{p}\left\|\xi\right\|_{p}^{p}}d\xi.$
Let $V(x)=U(x+\mu\xi)-U(x)$, from above equation, we obtain
$\displaystyle\left\|\nabla U_{\mu}(y)-\nabla U_{\mu}(x)\right\|$
$\displaystyle=\left\|\frac{1}{\mu\kappa}\int_{\mathbb{R}^{d}}\left(V(y)-V(x)\right)e^{-\frac{1}{p}\left\|\xi\right\|_{p}^{p}}\xi\circ\left|\xi\right|^{p-2}d\xi\right\|$
$\displaystyle=\left\|\frac{1}{\mu\kappa}\int_{\mathbb{R}^{d}}\int_{0}^{1}\left\langle\nabla
V\left(ty+\left(1-t\right)x\right),y-x\right\rangle
dt\,e^{-\frac{1}{p}\left\|\xi\right\|_{p}^{p}}\xi\circ\left|\xi\right|^{p-2}d\xi\right\|$
$\displaystyle=\left\|\frac{1}{\mu\kappa}\int_{\mathbb{R}^{d}}\int_{0}^{1}\left\langle\nabla
U\left(ty+\left(1-t\right)x+\mu\xi\right)-\nabla
U\left(ty+\left(1-t\right)x\right),y-x\right\rangle
dt\,e^{-\frac{1}{p}\left\|\xi\right\|_{p}^{p}}\xi\circ\left|\xi\right|^{p-2}d\xi\right\|$
$\displaystyle\leq\frac{1}{\mu\kappa}\int_{\mathbb{R}^{d}}\int_{0}^{1}\left\|\nabla
U\left(ty+\left(1-t\right)x+\mu\xi\right)-\nabla
U\left(ty+\left(1-t\right)x\right)\right\|\left\|y-x\right\|dt\,e^{-\frac{1}{p}\left\|\xi\right\|_{p}^{p}}\left\|\xi\circ\left|\xi\right|^{p-2}\right\|d\xi$
$\displaystyle\leq\sum_{i}\frac{L_{i}}{\mu^{1-\alpha_{i}}\kappa}\int_{\mathbb{R}^{d}}\left\|\xi\right\|^{\alpha_{i}}\left\|y-x\right\|\,e^{-\frac{1}{p}\left\|\xi\right\|_{p}^{p}}\left\|\xi^{p-1}\right\|d\xi.$
Since $p\leq 2$ we have
$\displaystyle\left\|\nabla U_{\mu}(y)-\nabla U_{\mu}(x)\right\|$
$\displaystyle\leq\sum_{i}\frac{L_{i}}{\mu^{1-\alpha_{i}}}d^{\frac{2-p}{p}}\mathbb{E}\left(\left\|\xi\right\|_{p}^{p-1+\alpha}\right)\left\|y-x\right\|$
$\displaystyle\stackrel{{{}_{1}}}{{\leq}}\sum_{i}\frac{L_{i}}{\mu^{1-\alpha_{i}}}d^{\frac{2-p}{p}}\mathbb{E}\left(\left\|\xi\right\|_{p}^{p}\right)^{\frac{p-1+\alpha}{p}}\left\|y-x\right\|$
$\displaystyle\stackrel{{{}_{2}}}{{\leq}}\sum_{i}\frac{L_{i}}{\mu^{1-\alpha_{i}}}d^{\frac{2-p}{p}}\left(d+\frac{p}{2}\right)^{\frac{p-1+\alpha}{p}}\left\|y-x\right\|$
$\displaystyle\stackrel{{\scriptstyle}}{{\leq}}\sum_{i}\frac{L_{i}}{\mu^{1-\alpha_{i}}}d^{\frac{2}{p}}\left\|y-x\right\|,$
where step $1$ follows from Jensen's inequality and $\alpha_{i}\leq 1$, step
$2$ is due to Eq. 2.2, and the last two steps use a simplification valid for
large enough $d$ and small enough $\mu$. By replacing Assumption 2.1 with
Assumption 2.2, for $\mu$ small enough, we immediately get
$\left\|\nabla U_{\mu}(y)-\nabla
U_{\mu}(x)\right\|\leq\frac{L}{\mu^{1-\alpha}}d^{\frac{2}{p}}\left\|y-x\right\|.$
∎
## Appendix C Proofs under LSI
### C.1 Proof of Lemma 3.2
###### Lemma C.1.
Suppose $\pi=e^{-U}$ satisfies $\alpha$-mixture weakly smooth. Let
$p_{0}=N(0,\frac{1}{L}I)$. Then $H(p_{0}|\pi)\leq
U(0)-\frac{d}{2}\log\frac{2\Pi
e}{L}+\sum_{i}\frac{L_{i}}{1+\alpha_{i}}\left(\frac{d}{L}\right)^{\frac{1+\alpha_{i}}{2}}=O(d).$
###### Proof.
Since $U$ is mixture weakly smooth, for all $x\in\mathbb{R}^{d}$ we have
$\displaystyle U(x)$ $\displaystyle\leq U(0)+\langle\nabla
U(0),x\rangle+\sum_{i}\frac{L_{i}}{1+\alpha_{i}}\|x\|^{1+\alpha_{i}}$
$\displaystyle=U(0)+\sum_{i}\frac{L_{i}}{1+\alpha_{i}}\|x\|^{1+\alpha_{i}}.$
Let $X\sim\rho=N(0,\frac{1}{L}I)$. Then
$\displaystyle\mathbb{E}_{\rho}[U(X)]$ $\displaystyle\leq
U(0)+\sum_{i}\frac{L_{i}}{1+\alpha_{i}}\mathbb{E}_{\rho}\left(\|x\|^{1+\alpha_{i}}\right)$
$\displaystyle\leq
U(0)+\sum_{i}\frac{L_{i}}{1+\alpha_{i}}\mathbb{E}_{\rho}\left(\|x\|^{2}\right)^{\frac{1+\alpha_{i}}{2}}$
$\displaystyle\leq
U(0)+\sum_{i}\frac{L_{i}}{1+\alpha_{i}}\left(\frac{d}{L}\right)^{\frac{1+\alpha_{i}}{2}}.$
Recall the entropy of $\rho$ is
$H(\rho)=-\mathbb{E}_{\rho}[\log\rho(X)]=\frac{d}{2}\log\frac{2\Pi e}{L}$.
Therefore, the KL divergence is
$\displaystyle H(\rho|\pi)$
$\displaystyle=\int\rho\left(\log\rho+U\right)dx$
$\displaystyle=-H(\rho)+\mathbb{E}_{\rho}[U]$ $\displaystyle\leq
U(0)-\frac{d}{2}\log\frac{2\Pi
e}{L}+\sum_{i}\frac{L_{i}}{1+\alpha_{i}}\left(\frac{d}{L}\right)^{\frac{1+\alpha_{i}}{2}}$
$\displaystyle=O(d).$
This is the desired result. ∎
### C.2 Bounds on the gradient moments
###### Lemma C.2.
Assume $\pi=e^{-U(x)}$ is $\alpha$-mixture weakly smooth. Then
$\mathbb{E}_{\pi}\left[\left\|\nabla U(x)\right\|^{2}\right]\leq
2\left(\sum_{i}L_{i}\right)^{2}d^{\frac{3}{p}},$
In particular, if $\pi=e^{-U(x)}$ is $\left(\alpha,\ell\right)$-weakly smooth.
Then
$\mathbb{E}_{\pi}\left[\left\|\nabla U(x)\right\|^{2\alpha}\right]\leq
L^{2\alpha}d^{\frac{3-\alpha}{1+\alpha}\alpha},$
$\mathbb{E}_{\pi}\left[\left\|\nabla U(x)\right\|^{2\ell+2\alpha}\right]\leq
L^{2\left(\ell+\alpha\right)}d^{\frac{3-\alpha}{1+\alpha}\left(\ell+\alpha\right)},$
for $d$ sufficiently large.
###### Proof.
Since $\pi$ is the stationary distribution of the Langevin dynamics (3.2), we have
$\frac{d}{dt}\mathbb{E}_{\pi}\left[U_{\mu}\left(x\right)\right]=\int\left(\left(\triangle
U_{\mu}\left(x\right)\right)-\left\langle\nabla U\left(x\right),\nabla
U_{\mu}\left(x\right)\right\rangle\right)\pi\left(x\right)dx=0.$
So
$\displaystyle\mathbb{E}_{\pi}\left\langle\nabla U\left(x\right),\nabla
U_{\mu}\left(x\right)\right\rangle$
$\displaystyle=\mathbb{E}_{\pi}\left(\triangle U_{\mu}\left(x\right)\right)$
$\displaystyle\stackrel{{\scriptstyle}}{{\leq}}\sum_{i}\frac{L_{i}}{\mu^{1-\alpha_{i}}}d^{\frac{2}{p}},$
where the last step comes from Lemma 2.2, which shows that $\nabla
U_{\mu}\left(x\right)$ is
$\sum_{i}\frac{L_{i}}{\mu^{1-\alpha_{i}}}d^{\frac{2}{p}}$-Lipschitz, i.e.
$\nabla^{2}U_{\mu}\left(x\right)\preceq\left(\sum_{i}\frac{L_{i}}{\mu^{1-\alpha_{i}}}d^{\frac{2}{p}}\right)\,I$.
In addition,
$\displaystyle\mathbb{E}_{\pi}\left\langle\nabla U\left(x\right),\nabla
U_{\mu}\left(x\right)\right\rangle$
$\displaystyle=\mathbb{E}_{\pi}\left[\left\|\nabla
U(x)\right\|^{2}\right]+\mathbb{E}_{\pi}\left\langle\nabla
U\left(x\right),\nabla U_{\mu}\left(x\right)-\nabla
U\left(x\right)\right\rangle$
$\displaystyle\stackrel{{{}_{1}}}{{\geq}}\mathbb{E}_{\pi}\left[\left\|\nabla
U(x)\right\|^{2}\right]-\mathbb{E}_{\pi}\left\|\nabla
U\left(x\right)\right\|\left\|\nabla U_{\mu}\left(x\right)-\nabla
U\left(x\right)\right\|$
$\displaystyle\stackrel{{\scriptstyle}}{{\geq}}\mathbb{E}_{\pi}\left[\left\|\nabla
U(x)\right\|^{2}\right]-\sqrt{\mathbb{E}_{\pi}\left[\left\|\nabla
U(x)\right\|^{2}\right]}\sum_{i}L_{i}\mu^{\alpha_{i}}d^{\frac{3}{p}},$
where step $1$ follows from Young's inequality and the last step comes from
the Cauchy–Schwarz inequality and Lemma 2.2. From the quadratic inequality
$\mathbb{E}_{\pi}\left[\left\|\nabla
U(x)\right\|^{2}\right]-\sqrt{\mathbb{E}_{\pi}\left[\left\|\nabla
U(x)\right\|^{2}\right]}\sum_{i}L_{i}\mu^{\alpha_{i}}d^{\frac{3}{p}}\leq\sum_{i}\frac{L_{i}}{\mu^{1-\alpha_{i}}}d^{\frac{2}{p}}$
and since $\sqrt{\mathbb{E}_{\pi}\left[\left\|\nabla
U(x)\right\|^{2}\right]}\geq 0$ we obtain
$\displaystyle\sqrt{\mathbb{E}_{\pi}\left[\left\|\nabla
U(x)\right\|^{2}\right]}$
$\displaystyle\leq\frac{1}{2}\left[\sqrt{\left(\sum_{i}L_{i}\mu^{\alpha_{i}}\right)^{2}d^{\frac{6}{p}}+4\sum_{i}\frac{L_{i}}{\mu^{1-\alpha_{i}}}d^{\frac{2}{p}}}+\sum_{i}L_{i}\mu^{\alpha_{i}}d^{\frac{3}{p}}\right].$
Simply choosing $\mu=1$, we get
$\displaystyle\mathbb{E}_{\pi}\left[\left\|\nabla U(x)\right\|^{2}\right]$
$\displaystyle\leq\frac{1}{4}\left[\sqrt{\left(\sum_{i}L_{i}\right)^{2}d^{\frac{6}{p}}+4\left(\sum_{i}L_{i}\right)d^{\frac{2}{p}}}+\sum_{i}L_{i}d^{\frac{3}{p}}\right]^{2}$
$\displaystyle\leq 2\left(\sum_{i}L_{i}\right)^{2}d^{\frac{3}{p}},$
for large enough $d$. If we replace Assumption 2.1 by Assumption 2.2, then
choosing $p=2$ and $\mu=d^{-\frac{2}{1+\alpha}}$, we deduce
$\displaystyle\mathbb{E}_{\pi}\left[\left\|\nabla U(x)\right\|^{2}\right]$
$\displaystyle\leq\frac{1}{4}\left[\sqrt{L^{2}\mu^{2\alpha}d^{\frac{6}{p}}+4\frac{Ld^{\frac{2}{p}}}{\mu^{1-\alpha}}}+L\mu^{\alpha}d^{\frac{3}{p}}\right]^{2}$
$\displaystyle\leq L^{2}d^{\frac{3-\alpha}{1+\alpha}},$
for $d$ large enough, as desired. Since $\alpha\leq 1$, $x\rightarrow
x^{\alpha}$ is a concave function. By Jensen's inequality,
$\displaystyle\mathbb{E}_{\pi}\left[\left\|\nabla
U(x)\right\|^{2\alpha}\right]$
$\displaystyle\leq\left(\mathbb{E}_{\pi}\left[\left\|\nabla
U(x)\right\|^{2}\right]\right)^{\alpha}$ $\displaystyle\leq
L^{2\alpha}d^{\frac{3-\alpha}{1+\alpha}\alpha}.$
Similarly, since $\ell+\alpha\leq 1$, by Jensen's inequality we also have
$\displaystyle\mathbb{E}_{\pi}\left[\left\|\nabla
U(x)\right\|^{2\ell+2\alpha}\right]$
$\displaystyle\leq\left(\mathbb{E}_{\pi}\left[\left\|\nabla
U(x)\right\|^{2}\right]\right)^{\ell+\alpha}$ $\displaystyle\leq
L^{2\left(\ell+\alpha\right)}d^{\frac{3-\alpha}{1+\alpha}\left(\ell+\alpha\right)},$
as desired.
∎
### C.3 Proof of Lemma 3.1
###### Lemma C.3.
Suppose $\pi$ is $\gamma$-log-Sobolev and $\alpha$-mixture weakly smooth with
$\max\left\\{L_{i}\right\\}=L\geq 1$. If
$0<\eta\leq\left(\frac{\gamma}{9N^{\frac{3}{2}}L^{3}}\right)^{\frac{1}{\alpha}}$,
then along each step of ULA (3.6),
$\displaystyle H(p_{k+1}|\pi)\leq
e^{-\gamma\eta}H(p_{k}|\pi)+2\eta^{\alpha+1}D_{3},$ (C.1)
where
$D_{3}=\sum_{i}10N^{3}L^{6}+16NL^{4}+8N^{2}L^{4}d^{\frac{3}{p}}+4NL^{2}d$.
In particular, suppose $\pi$ is $\gamma$-log-Sobolev and
$\left(\alpha,\ell\right)$-weakly smooth with $0<\alpha+\ell\leq 1$. If
$0<\eta\leq\left(\frac{\gamma}{2L^{1+\alpha}}\right)^{\frac{1}{\alpha}}$, then
along each step of ULA (3.6),
$\displaystyle H(p_{k+1}|\pi)\leq
e^{-\gamma\eta}H(p_{k}|\pi)+2\eta^{\alpha+1}D_{3}^{\prime},$ (C.2)
where
$D_{3}^{\prime}=16L^{2+2\alpha+2\ell}+4L^{2+2\alpha}d^{\frac{3-\alpha}{1+\alpha}\left(\alpha+\ell\right)}+4L^{2}d^{\alpha+\ell}$.
###### Proof.
We adapt the proof of [31]. First, recall that the interpolation of the LMC
discretization is
$x_{k,t}\stackrel{{\scriptstyle}}{{=}}x_{k}-t\nabla
U(x_{k})+\sqrt{2t}\,z_{k},$
where $z_{k}\sim N(0,I)$ is independent of $x_{k}$. Let $x_{k}\sim p_{k}$ and
$x^{\ast}\sim\pi$ with an optimal coupling $(x_{k},x^{\ast})$ so that
$\mathbb{E}[\|x_{k}-x^{\ast}\|^{2}]=W_{2}(p_{k},\pi)^{2}$. Let
$D_{1i}=8NL_{i}^{2+2\alpha_{i}}\left(\left(\sum_{j}L_{j}\right)^{2}+1\right)+16L_{i}^{2+2\alpha_{i}}+8L_{i}^{2}\left(\sum_{i}L_{i}\right)^{2}d^{\frac{3}{p}}+4L_{i}^{2}d^{\alpha_{i}}$,
we deduce
$\displaystyle L_{i}^{2}E_{p_{k}}\left[\left\|-t\nabla
U(x_{k})+\sqrt{2t}z_{k}\right\|^{2\alpha_{i}}\right]$
$\displaystyle\stackrel{{{}_{1}}}{{\leq}}2L_{i}^{2}t^{2\alpha_{i}}\mathbb{E}_{p_{k}}\left[\left\|\nabla
U(x_{k})\right\|^{2\alpha_{i}}\right]+4L_{i}^{2}t^{\alpha_{i}}\mathbb{E}_{p_{k}}\left[\left\|z_{k}\right\|^{2\alpha_{i}}\right]$
$\displaystyle\stackrel{{{}_{2}}}{{\leq}}2L_{i}^{2}t^{2\alpha_{i}}\mathbb{E}_{p_{k}}\left[\left\|\nabla
U(x_{k})\right\|^{2\alpha_{i}}\right]+4L_{i}^{2}t^{\alpha_{i}}\mathbb{E}_{p_{k}}\left[\left\|z_{k}\right\|^{2}\right]^{\alpha_{i}}$
$\displaystyle\stackrel{{{}_{3}}}{{\leq}}4L_{i}^{2}t^{2\alpha_{i}}\mathbb{E}\left[\left\|\nabla
U(x_{k})-\nabla U(x^{*})\right\|^{2\alpha_{i}}+\left\|\nabla
U(x^{*})\right\|^{2\alpha_{i}}\right]+4L_{i}^{2}t^{\alpha_{i}}d^{\alpha_{i}}$
$\displaystyle\stackrel{{{}_{4}}}{{\leq}}4L_{i}^{2}t^{2\alpha_{i}}\mathrm{\mathbb{E}}\left(\sum_{i}L_{i}\left\|x_{k}-x^{*}\right\|^{\alpha_{i}}\right)^{2\alpha_{i}}+4L_{i}^{2}t^{2\alpha_{i}}\mathbb{E}\left\|\nabla
U(x^{*})\right\|^{2\alpha_{i}}+4L_{i}^{2}t^{\alpha_{i}}d^{\alpha_{i}}$
$\displaystyle\leq
8L_{i}^{2+2\alpha_{i}}t^{2\alpha_{i}}N\sum_{j}L_{i}^{2\alpha_{i}}\mathrm{\mathbb{E}}\left[\left\|x_{k}-x^{*}\right\|^{2\alpha_{j}\alpha_{i}}\right]+4L_{i}^{2}t^{2\alpha}\mathbb{E}\left\|\nabla
U(x^{*})\right\|^{2}$
$\displaystyle+4L_{i}^{2}t^{2\alpha}+4L_{i}^{2}t^{\alpha}d^{\alpha}$
$\displaystyle\stackrel{{{}_{5}}}{{\leq}}8NL_{i}^{2+2\alpha_{i}}t^{2\alpha_{i}}\left(\left(\sum_{j}L_{j}\right)^{2}+1\right)\mathrm{\mathbb{E}}\left[1+\left\|x_{k}-x^{*}\right\|^{2}\right]+4L_{i}^{2}t^{2\alpha}\mathbb{E}\left\|\nabla
U(x^{*})\right\|^{2}$
$\displaystyle+4L_{i}^{2}t^{2\alpha}+4L_{i}^{2}t^{\alpha}d^{\alpha}$
$\displaystyle\stackrel{{\scriptstyle}}{{\leq}}8NL_{i}^{2+2\alpha_{i}}\eta^{2\alpha}\left(\left(\sum_{j}L_{j}\right)^{2}+1\right)\mathrm{\mathbb{E}}\left[\left\|x_{k}-x^{*}\right\|^{2}\right]$
$\displaystyle+\left(8NL_{i}^{2+2\alpha_{i}}\left(\left(\sum_{j}L_{j}\right)^{2}+1\right)+16L_{i}^{2+2\alpha_{i}}+8L_{i}^{2}\left(\sum_{i}L_{i}\right)^{2}d^{\frac{3}{p}}+4L_{i}^{2}d^{\alpha_{i}}\right)\eta^{\alpha_{i}}$
$\displaystyle\leq\frac{16N}{\gamma}\left(\left(\sum_{j}L_{j}\right)^{2}+1\right)L^{2+2\alpha_{i}}\eta^{2\alpha_{i}}H(p_{k}|\pi)+D_{1i}\eta^{\alpha_{i}},$
(C.3)
where step $1$ follows from Lemma F.13 in Appendix F, step $2$ is from
$\alpha\leq 1$ and Jensen’s inequality, step $3$ uses the moments of the
normal distribution, step $4$ follows from Assumption 2.1, step $5$ uses
$\alpha_{i}\leq 1$, and the last step is due to the Talagrand inequality,
which follows from the log-Sobolev inequality, together with Lemma F.16 in
Appendix F below.
Similarly, we get
$\displaystyle\mathrm{\mathbb{E}}_{p_{kt}}\left\|\nabla U(x_{k})-\nabla
U(x_{k,t})\right\|^{2}$
$\displaystyle\stackrel{{{}_{1}}}{{\leq}}\sum_{i}L_{i}^{2}\mathrm{\mathbb{E}}_{p_{kt}}\left\|\tilde{x}_{k,t}-x_{k}\right\|^{2\alpha_{i}}$
$\displaystyle=\sum_{i}L_{i}^{2}\mathrm{\mathbb{E}}_{p_{k}}\left\|-t\nabla
U(x_{k})+\sqrt{2t}z_{k}\right\|^{2\alpha_{i}}$
$\displaystyle\stackrel{{{}_{2}}}{{\leq}}\sum_{i}\frac{16N}{\gamma}\left(\left(\sum_{j}L_{j}\right)^{2}+1\right)L^{2+2\alpha_{i}}\eta^{2\alpha_{i}}H(p_{k}|\pi)+\left(\sum_{i}D_{1i}\eta^{\alpha_{i}}\right)$
$\displaystyle\stackrel{{{}_{3}}}{{\leq}}\frac{20N^{3}}{\gamma}L^{6}\eta^{2\alpha}H(p_{k}|\pi)+D_{3}\eta^{\alpha}$
(C.4)
where step $1$ follows from Assumption 2.1, step $2$ comes from the same
reasoning as equation (C.3), and the last step uses $\eta\leq\frac{1}{L}$,
$\eta\leq 1$, and the definition of $D_{3}$. Therefore, from [31] Lemma 3, the
time derivative of the KL divergence along LMC is bounded by
$\displaystyle\frac{d}{dt}H\left(p_{k,t}|\pi\right)$
$\displaystyle\leq-\frac{3}{4}I\left(p_{k,t}|\pi\right)+\mathbb{E}_{p_{kt}}\left[\left\|\nabla
U(x_{k,t})-\nabla U(x_{k})\right\|^{2}\right]$
$\displaystyle\leq-\frac{3}{4}I(p_{k,t}|\pi)+\frac{20N^{3}}{\gamma}L^{6}\eta^{2\alpha}H(p_{k}|\pi)+D_{3}\eta^{\alpha}$
$\displaystyle\leq-\mathrm{\frac{3\gamma}{2}}H(p_{k,t}|\pi)+\frac{20N^{3}}{\gamma}L^{6}\eta^{2\alpha}H(p_{k}|\pi)+D_{3}\eta^{\alpha},$
where in the last inequality we have used Definition A.1 of the LSI.
Multiplying both sides by $e^{\frac{3\gamma}{2}t}$ and integrating from $t=0$
to $t=\eta$, we obtain
$\displaystyle e^{\frac{3\gamma}{2}\eta}H(p_{k+1}|\pi)-H(p_{k}|\pi)$
$\displaystyle\leq
2\left(\frac{e^{\frac{3\gamma}{2}\eta}-1}{3\gamma}\right)\left(\frac{20N^{3}}{\gamma}L^{6}\eta^{2\alpha}H(p_{k}|\pi)+D_{3}\eta^{\alpha}\right)$
(C.5) $\displaystyle\leq
2\eta\left(\frac{20N^{3}}{\gamma}L^{6}\eta^{2\alpha}H(p_{k}|\pi)+D_{3}\eta^{\alpha}\right)$
(C.6)
where the last line holds by $e^{c}\leq 1+2c$ for
$0<c=\frac{3\gamma}{2}\eta<1$. Rearranging the terms of the above inequality
and using the facts that $1+\eta^{1+2\alpha}\frac{40N^{3}}{\gamma}L^{6}\leq
1+\frac{\gamma\eta}{2}\leq e^{\frac{\gamma\eta}{2}}$ when
$\eta\leq\left(\frac{\gamma}{9N^{\frac{3}{2}}L^{3}}\right)^{\frac{1}{\alpha}}$,
and that $e^{-\frac{3\gamma}{2}\eta}\leq 1$, leads to
$\displaystyle H(p_{k+1}|\pi)$ $\displaystyle\leq
e^{-\frac{3\gamma}{2}\eta}\left(1+\eta^{1+2\alpha}\frac{40N^{3}}{\gamma}L^{6}\right)H(p_{k}|\pi)+2\eta^{\alpha+1}D_{3}$
$\displaystyle\leq e^{-\gamma\eta}H(p_{k}|\pi)+2\eta^{\alpha+1}D_{3}.$ (C.7)
as desired. ∎
### C.4 Proof of Theorem 3.1
###### Theorem C.1.
Suppose $\pi$ is $\gamma$-log-Sobolev and $\alpha$-mixture weakly smooth with
$\max\left\\{L_{i}\right\\}=L\geq 1$, and for any $x_{0}\sim p_{0}$ with
$H(p_{0}|\pi)=C_{0}<\infty$, the iterates $x_{k}\sim p_{k}$ of ULA with step
size
$\eta\leq\min\left\\{1,\frac{1}{4\gamma},\left(\frac{\gamma}{9N^{\frac{3}{2}}L^{3}}\right)^{\frac{1}{\alpha}}\right\\}$
(C.8)
satisfy
$\displaystyle H(p_{k}|\pi)\leq e^{-\gamma\eta
k}H(p_{0}|\pi)+\frac{8\eta^{\alpha}D_{3}}{3\gamma},$ (C.9)
where
$D_{3}=\sum_{i}10N^{3}L^{6}+16NL^{4}+8N^{2}L^{4}d^{\frac{3}{p}}+4NL^{2}d$.
Then, for any $\epsilon>0$, to achieve $H(p_{k}|\pi)<\epsilon$, it suffices to
run LMC with step size
$\eta\leq\min\left\\{1,\frac{1}{4\gamma},\left(\frac{\gamma}{9N^{\frac{3}{2}}L^{3}}\right)^{\frac{1}{\alpha}},\left(\frac{3\epsilon\gamma}{16D_{3}}\right)^{\frac{1}{\alpha}}\right\\}$
(C.10)
for $k\geq\frac{1}{\gamma\eta}\log\frac{2H\left(p_{0}|\pi\right)}{\epsilon}$
iterations.
###### Proof.
Applying the one-step inequality (C.1) recursively, and using the inequality
$1-e^{-c}\geq\frac{3}{4}c$ for $0<c=\gamma\eta\leq\frac{1}{4}$, we obtain
$\displaystyle H(p_{k}|\pi)$ $\displaystyle\leq\,e^{-\gamma\eta
k}H(p_{0}|\pi)+\frac{2\eta^{\alpha+1}D_{3}}{1-e^{-\gamma\eta}}$
$\displaystyle\leq\,e^{-\gamma\eta
k}H(p_{0}|\pi)+\frac{2\eta^{\alpha+1}D_{3}}{\frac{3}{4}\gamma\eta}$
$\displaystyle\leq\,e^{-\gamma\eta
k}H(p_{0}|\pi)+\frac{8\eta^{\alpha}D_{3}}{3\gamma}.$ (C.11)
Note that the last inequality holds if we choose $\eta$ such that it satisfies
$\eta\leq\min\left\\{1,\frac{1}{4\gamma},\left(\frac{\gamma}{9N^{\frac{3}{2}}L^{3}}\right)^{\frac{1}{\alpha}}\right\\}.$
Given $\epsilon>0$, if we further assume
$\eta\leq\left(\frac{3\epsilon\gamma}{16D_{3}}\right)^{\frac{1}{\alpha}}$,
then the above implies $H(p_{k}|\pi)\leq e^{-\gamma\eta
k}H(p_{0}|\pi)+\frac{\epsilon}{2}.$ This means for
$k\geq\frac{1}{\gamma\eta}\log\frac{2H\left(p_{0}|\pi\right)}{\epsilon},$ we
have $H(p_{k}|\pi)\leq\frac{\epsilon}{2}+\frac{\epsilon}{2}=\epsilon$, as
desired. ∎
## Appendix D Proof of sampling via smoothing potential
### D.1 Proof of Lemma 3.3
###### Lemma D.1.
For any $x_{k}\in\mathbb{R}^{d}$, $g_{\mu}(x_{k},\zeta_{k})=\nabla
U_{\mu}(x_{k})+\zeta_{k}$ is an unbiased estimator of $\nabla U_{\mu}$ such
that
$\displaystyle\mathrm{Var}\left[g_{\mu}(x_{k},\zeta_{k})\right]$
$\displaystyle\leq 4N^{2}L^{2}\mu^{2\alpha}d^{\frac{2\alpha}{p}}.$
###### Proof.
Recall that by definition of $U_{\mu}$, we have $\nabla
U_{\mu}(x)=\mathrm{\mathbb{E}}_{\zeta}[\nabla U(x+\mu\mathrm{\zeta})]$,
where $\mathrm{\zeta}\sim N_{p}(0,I_{d\times d})$ is independent of
$\zeta_{1}$. Clearly,
$\mathrm{E}_{\mathrm{\zeta_{1}}}[g(x,\mathrm{\zeta_{1}})]=\nabla
U_{\mu}(x)$. We now proceed to bound the variance of $g(x,\zeta_{1})$. We
have:
$\displaystyle\mathrm{\mathbb{E}}_{\mathrm{\zeta_{1}}}[\|\nabla
U_{\mu}(x)-g(x,\zeta_{1})\|_{2}^{2}]$
$\displaystyle\leq\mathrm{\mathbb{E}}_{\zeta_{1}}[\|\mathrm{E}_{\zeta}[\nabla U(x+\mu\mathrm{\zeta})]-\nabla
U(x+\mu\mathrm{\zeta_{1}})\|^{2}]\text{ }$
$\displaystyle\leq\mathrm{\mathbb{E}}_{\zeta_{1},\mathrm{\zeta}}[\|\nabla
U(x+\mu\mathrm{\zeta})-\nabla U(x+\mu\mathrm{\zeta_{1}})\|^{2}]$
$\displaystyle\leq
N\sum_{i}L_{i}^{2}\mathrm{\mathbb{E}}_{\mathrm{\zeta_{1}},\mathrm{\zeta}}[\|\mu(\mathrm{\zeta}-\mathrm{\zeta_{1}})\|^{2\alpha_{i}}]$
$\displaystyle\leq
N\sum_{i}L_{i}^{2}\mu^{2\alpha_{i}}\mathrm{\mathbb{E}}_{\zeta_{1},\mathrm{\zeta}}[\|\mathrm{\zeta}-\mathrm{\zeta_{1}}\|^{2\alpha_{i}}]$
$\displaystyle\leq
2N\sum_{i}L_{i}^{2}\mu^{2\alpha_{i}}\left(\mathrm{\mathbb{E}}\left[\|\mathrm{\zeta}\|^{2\alpha_{i}}\right]+\mathrm{\mathbb{E}}\left[\|\mathrm{\zeta_{1}}\|^{2\alpha_{i}}\right]\right)$
$\displaystyle\leq
2N\sum_{i}L_{i}^{2}\mu^{2\alpha_{i}}\left(\left(\mathrm{\mathbb{E}}\left[\|\mathrm{\zeta}\|^{2}\right]\right)^{\alpha_{i}}+\left(\mathrm{\mathbb{E}}\left[\|\zeta_{1}\|^{2}\right]\right)^{\alpha_{i}}\right)$
$\displaystyle\leq
4N\sum_{i}L_{i}^{2}\mu^{2\alpha_{i}}d^{\frac{2\alpha_{i}}{p}}$
$\displaystyle\leq 4N^{2}L^{2}\mu^{2\alpha}d^{\frac{2\alpha}{p}},$
as claimed. ∎
### D.2 Proof of Theorem 3.2
Before proving Theorem 3.2, we need an additional lemma.
###### Lemma D.2.
[[31] modified Lemma 3] Suppose $x_{k,t}$ is the interpolation of the
discretized process (3.12). Let $p_{k,t}$, $p_{kt}$ and $p_{kt\zeta}$ denote
its distribution, the joint distribution of $x_{k,t}$ and $x_{k}$, and the
joint distribution of $x_{k,t}$, $x_{k}$ and $\zeta$, respectively. Here
$g(x_{k},\zeta)$ is an estimate of $\nabla U_{\mu}(x_{k})$ with noise $\zeta$
such that $E_{\zeta}g(x_{k},\zeta)=\nabla U_{\mu}(x_{k})$. Then
${\displaystyle\frac{d}{dt}H\left(p_{k,t}|\pi_{\mu}\right)\leq-\frac{3}{4}I\left(p_{k,t}|\pi_{\mu}\right)+\mathbb{E}_{p_{kt\zeta}}\left[\left\|\nabla
U_{\mu}(x_{k,t})-g(x_{k},\zeta)\right\|^{2}\right]}.$ (D.1)
###### Proof.
The steps follow exactly as in [31] Lemma 3, and we provide the proof here for
completeness. For each $t>0$, let $p_{k\zeta|t}(x_{k},\zeta)$ denote the
distribution of $x_{k}$ and $\zeta$ conditioned on $x_{k,t}$, and let
$p_{t|k\zeta}(x_{k,t})$ denote the distribution of $x_{k,t}$ conditioned on
$x_{k}$ and $\zeta$. Following the Fokker–Planck equation, we have
$\frac{\partial p_{t|k\zeta}(x_{k,t})}{\partial
t}=\nabla\cdot\left(p_{t|k\zeta}(x_{k,t})g(x_{k},\zeta)\right)+\triangle
p_{t|k\zeta}(x_{k,t}),$ (D.2)
which, after integrating with respect to $x_{k}$ and $\zeta$, yields
$\displaystyle\frac{\partial p_{k,t}(x)}{\partial t}$
$\displaystyle=\int\int\frac{\partial p_{t|k\zeta}(x)}{\partial
t}p_{k\zeta}(x_{k},\zeta)dx_{k}d\zeta$
$\displaystyle=\int\int\left(\nabla\cdot\left(p_{t|k\zeta}(x)g(x_{k},\zeta)\right)+\triangle
p_{t|k\zeta}(x)\right)p_{k\zeta}(x_{k},\zeta)dx_{k}d\zeta$
$\displaystyle=\int\int\nabla\cdot\left(p_{t|k\zeta}(x)g(x_{k},\zeta)\right)p_{k\zeta}(x_{k},\zeta)dx_{k}d\zeta+\triangle
p_{k,t}(x)$
$\displaystyle=\nabla\cdot\left(p_{k,t}(x)\int\int
p_{k\zeta|t}(x_{k},\zeta)g(x_{k},\zeta)dx_{k}d\zeta\right)+\triangle p_{k,t}(x)$ (D.3)
$\displaystyle=\nabla\cdot\left(p_{k,t}(x)\mathbb{E}_{p_{k\zeta|t}}[g(x_{k},\zeta)|x_{k,t}=x]\right)+\triangle
p_{k,t}(x).$ (D.4)
Combining this with $\int p_{t}\frac{\partial}{\partial
t}\log\frac{p_{t}}{\pi_{\mu}}\,dx=\int\frac{\partial p_{t}}{\partial
t}\,dx=\frac{d}{dt}\int p_{t}\,dx=0$, we get the following bound on the time
derivative of the KL divergence:
$\displaystyle\frac{d}{dt}H\left(p_{k,t}|\pi_{\mu}\right)$
$\displaystyle=\frac{d}{dt}\int
p_{k,t}(x)\log\left(\frac{p_{k,t}(x)}{\pi_{\mu}(x)}\right)dx$
$\displaystyle=\int\frac{\partial p_{k,t}}{\partial
t}(x)\log\left(\frac{p_{k,t}(x)}{\pi_{\mu}(x)}\right)dx$
$\displaystyle=\int\left[\nabla\cdot\left(p_{k,t}(x)\mathrm{\mathbb{E}}_{p_{k\zeta|t}}[g(x_{k},\zeta)|x_{k,t}=x]\right)\right]\log\left(\frac{p_{k,t}(x)}{\pi_{\mu}(x)}\right)dx$
$\displaystyle+\int\left[\triangle
p_{k,t}(x)\right]\log\left(\frac{p_{k,t}(x)}{\pi_{\mu}(x)}\right)dx$
$\displaystyle\stackrel{{\scriptstyle\left(i\right)}}{{=}}\int\left[\nabla\cdot\left(p_{k,t}(x)\mathrm{\mathbb{E}}_{p_{k\zeta|t}}[g(x_{k},\zeta)|x_{k,t}=x]\right)\right]\log\left(\frac{p_{k,t}(x)}{\pi_{\mu}(x)}\right)dx$
$\displaystyle+\int\left[\nabla\cdot\left(p_{k,t}(x)\left(\nabla\log\left(\frac{p_{k,t}(x)}{\pi_{\mu}(x)}\right)-\nabla
U(x)\right)\right)\right]\log\left(\frac{p_{k,t}(x)}{\pi_{\mu}(x)}\right)dx$
$\displaystyle\stackrel{{\scriptstyle\left(ii\right)}}{{=}}-\int
p_{k,t}(x)\left\langle\mathrm{\mathbb{E}}_{p_{k\zeta|t}}[g(x_{k},\zeta)|x_{k,t}=x],\
\nabla\log\left(\frac{p_{k,t}(x)}{\pi_{\mu}(x)}\right)\right\rangle dx$
$\displaystyle-\int
p_{k,t}(x)\left\langle\nabla\log\left(\frac{p_{k,t}(x)}{\pi_{\mu}(x)}\right)-\nabla
U(x),\ \nabla\log\left(\frac{p_{k,t}(x)}{\pi_{\mu}(x)}\right)\right\rangle dx$
$\displaystyle=-I\left(p_{k,t}|\pi_{\mu}\right)$ $\displaystyle+\int
p_{k,t}(x)\left\langle\nabla
U(x)-\mathrm{\mathbb{E}}_{p_{k\zeta|t}}[g(x_{k},\zeta)|x_{k,t}=x],\
{\displaystyle\nabla\log\left(\frac{p_{k,t}(x)}{\pi_{\mu}(x)}\right)}\right\rangle
dx$
$\displaystyle=-I\left(p_{k,t}|\pi_{\mu}\right)+\mathrm{\mathbb{E}}_{p_{kt\zeta}}\left\langle\nabla
U(x_{k,t})-g(x_{k},\zeta),\
{\displaystyle\nabla\log\left(\frac{p_{k,t}(x)}{\pi_{\mu}(x)}\right)}\right\rangle$
$\displaystyle\stackrel{{\scriptstyle\left(iii\right)}}{{\leq}}-I\left(p_{k,t}|\pi_{\mu}\right)$
$\displaystyle+\mathrm{E}_{p_{kt\zeta}}\left\|\nabla
U(x_{k,t})-g(x_{k},\zeta)\right\|^{2}+\frac{1}{4}\mathrm{\mathbb{E}}_{p_{k,t}}\left\|\nabla\log\left(\frac{p_{k,t}(x)}{\pi_{\mu}(x)}\right)\right\|^{2}$
$\displaystyle=-\frac{3}{4}I\left(p_{k,t}|\pi_{\mu}\right)+\mathrm{\mathbb{E}}_{p_{kt\zeta}}\left\|\nabla
U(x_{k,t})-g(x_{k},\zeta)\right\|^{2}$ (D.5)
in which equality $\left(i\right)$ follows from $\triangle
p_{k,t}=\nabla\cdot(\nabla p_{k,t})$, equality $\left(ii\right)$ follows from
the divergence theorem, inequality $\left(iii\right)$ follows from
$\left\langle u,\
v\right\rangle{\displaystyle\leq\|u\|^{2}+\frac{1}{4}\|v\|^{2}}$, and in the
last step the expectation is taken with respect to $x_{k}$, $x_{k,t}$
and $\zeta.$ ∎
We are now ready to state and prove Theorem 3.2.
###### Theorem D.1.
Suppose $\pi_{\mu}$ is $\gamma_{1}$-log-Sobolev, $\alpha$-mixture weakly
smooth, $L=1\vee\max\left\\{L_{i}\right\\}$, and for any $x_{0}\sim p_{0}$
with $H(p_{0}|\pi_{\mu})=C_{0}<\infty$, the iterates $x_{k}\sim p_{k}$ of ULA
with step size
$\eta\leq\min\left\\{1,\frac{1}{4\gamma_{1}},\left(\frac{\gamma_{1}}{13N^{\frac{3}{2}}L^{3}}\right)^{\frac{1}{\alpha}}\right\\}$
(D.6)
satisfy
$\displaystyle H(p_{k}|\pi_{\mu})\leq e^{-\frac{3\gamma_{1}}{2}\eta
k}H(p_{0}|\pi_{\mu})+2\eta^{\alpha+1}D_{4},$ (D.7)
where
$D_{4}=\sum_{i}10N^{3}L^{6}+16NL^{4}+8N^{2}L^{4}d^{\frac{3}{p}}+4NL^{2}d+8N^{2}L^{2}d^{\frac{2\alpha}{p}}$.
Then, for any $\epsilon>0$, to achieve $W_{2}(p_{k},\pi)<\epsilon$, it
suffices to run LMC with step size
$\eta\leq\min\left\\{1,\frac{1}{4\gamma_{1}},\left(\frac{\gamma_{1}}{13N^{\frac{3}{2}}L^{3}}\right)^{\frac{1}{\alpha}},\left(\frac{3\epsilon\gamma_{1}}{16D_{4}}\right)^{\frac{1}{\alpha}}\right\\}$
(D.8)
for
$k\geq\frac{2}{\gamma_{1}\eta}\log\frac{3H\left(p_{0}|\pi_{\mu}\right)}{\epsilon}$
iterations.
###### Proof.
We adapt the proof of [31]. First, recall that the interpolation of the ULA
discretization is
$x_{k,t}=x_{k}-t\,g(x_{k},\zeta)+\sqrt{2t}\,z_{k},\quad t\in[0,\eta],$
where $z_{k}\sim N(0,I)$ is independent of $x_{k}$. Let $x_{k}\sim p_{k}$ and
$x^{\ast}\sim\pi_{\mu}$ with an optimal coupling $(x_{k},x^{\ast})$ so that
$\mathbb{E}[\|x_{k}-x^{\ast}\|^{2}]=W_{2}(p_{k},\pi_{\mu})^{2}$. Choosing
$\mu=\sqrt{\eta}$, we have
$\displaystyle\mathrm{\mathbb{E}}_{p_{kt\zeta}}\left\|\nabla
U(x_{k,t})-g(x_{k},\zeta)\right\|^{2}$
$\displaystyle\stackrel{{{}_{1}}}{{\leq}}2\left[\mathrm{\mathbb{E}}_{p_{kt\zeta}}\left\|\nabla
U(x_{k,t})-\nabla U(x_{k})\right\|^{2}+\mathrm{\mathbb{E}}_{p_{kt\zeta}}\left\|\nabla
U(x_{k})-g(x_{k},\zeta)\right\|^{2}\right]$
$\displaystyle\stackrel{{{}_{2}}}{{\leq}}\frac{40N^{3}}{\gamma_{1}}L^{6}\eta^{2\alpha}H(p_{k}|\pi_{\mu})+D_{3}\eta^{\alpha}+8N^{2}L^{2}\mu^{2\alpha}d^{\frac{2\alpha}{p}}$
$\displaystyle\stackrel{{\scriptstyle}}{{\leq}}\frac{40N^{3}}{\gamma_{1}}L^{6}\eta^{2\alpha}H(p_{k}|\pi_{\mu})+D_{4}\eta^{\alpha},$
where step 1 follows from Young's inequality and Assumption 2, step $2$ comes
from equation (C.4), and the last step comes from $\eta\leq\frac{1}{L}$,
$\eta\leq 1$, and the definition of $D_{4}$. Therefore, from Lemma D.2, the
time derivative of the KL divergence along LMC is bounded by
$\displaystyle\frac{d}{dt}H\left(p_{k,t}|\pi_{\mu}\right)$
$\displaystyle\leq-\frac{3}{4}I(p_{k,t}|\pi_{\mu})+\frac{40N^{3}}{\gamma_{1}}L^{6}\eta^{2\alpha}H(p_{k}|\pi_{\mu})+D_{4}\eta^{\alpha}$
$\displaystyle\leq-\mathrm{\frac{3\gamma_{1}}{2}}H(p_{k,t}|\pi_{\mu})+\frac{40N^{3}}{\gamma_{1}}L^{6}\eta^{2\alpha}H(p_{k}|\pi_{\mu})+D_{4}\eta^{\alpha},$
(D.9)
where in the last inequality we have used the log-Sobolev inequality
(Definition A.1). Multiplying both sides by $e^{\frac{3\gamma_{1}}{2}t}$ and
integrating from $t=0$ to $t=\eta$, we obtain
$\displaystyle
e^{\frac{3\gamma_{1}}{2}\eta}H(p_{k+1}|\pi_{\mu})-H(p_{k}|\pi_{\mu})$
$\displaystyle\leq
2\left(\frac{e^{\frac{3\gamma_{1}}{2}\eta}-1}{3\gamma_{1}}\right)\left(\frac{40N^{3}}{\gamma_{1}}L^{6}\eta^{2\alpha}H(p_{k}|\pi_{\mu})+D_{4}\eta^{\alpha}\right)$
$\displaystyle\leq
2\eta\left(\frac{40N^{3}}{\gamma_{1}}L^{6}\eta^{2\alpha}H(p_{k}|\pi_{\mu})+D_{4}\eta^{\alpha}\right)$
(D.10)
where the last line holds by $e^{c}\leq 1+2c$ for
$0<c=\frac{3\gamma_{1}}{2}\eta<1$. Rearranging the terms of the above
inequality and using the facts that
$1+\eta^{1+2\alpha}\frac{80N^{3}}{\gamma_{1}}L^{6}\leq
1+\frac{\gamma_{1}\eta}{2}\leq e^{\frac{\gamma_{1}\eta}{2}}$ when
$\eta\leq\left(\frac{\gamma_{1}}{13N^{\frac{3}{2}}L^{3}}\right)^{\frac{1}{\alpha}}$
and $e^{-\frac{3\gamma_{1}}{2}\eta}\leq 1$ leads to
$\displaystyle H(p_{k+1}|\pi_{\mu})$ $\displaystyle\leq
e^{-\frac{3\gamma_{1}}{2}\eta}\left(1+\eta^{1+2\alpha}\frac{80N^{3}}{\gamma_{1}}L^{6}\right)H(p_{k}|\pi_{\mu})+2\eta^{\alpha+1}D_{4}$
$\displaystyle\leq
e^{-\gamma_{1}\eta}H(p_{k}|\pi_{\mu})+2\eta^{\alpha+1}D_{4}.$ (D.11)
Applying this inequality recursively, and using the inequality
$1-e^{-c}\geq\frac{3}{4}c$ for $0<c=\gamma_{1}\eta\leq\frac{1}{4}$ we obtain
$\displaystyle H(p_{k}|\pi_{\mu})$ $\displaystyle\leq\,e^{-\gamma_{1}\eta
k}H(p_{0}|\pi_{\mu})+\frac{2\eta^{\alpha+1}D_{4}}{1-e^{-\gamma_{1}\eta}}$
$\displaystyle\leq\,e^{-\gamma_{1}\eta
k}H(p_{0}|\pi_{\mu})+\frac{2\eta^{\alpha+1}D_{4}}{\frac{3}{4}\gamma_{1}\eta}$
$\displaystyle\leq\,e^{-\gamma_{1}\eta
k}H(p_{0}|\pi_{\mu})+\frac{8\eta^{\alpha}D_{4}}{3\gamma_{1}}.$ (D.12)
Note that the last inequality holds if we choose $\eta$ such that it satisfies
$\eta\leq\min\left\\{1,\frac{1}{4\gamma_{1}},\left(\frac{\gamma_{1}}{13N^{\frac{3}{2}}L^{3}}\right)^{\frac{1}{\alpha}}\right\\}.$
From Lemma 3.4, choosing $\mu=\sqrt{\eta}$ small enough gives
$W_{2}(\pi,\ \pi_{\mu})\leq
3\sqrt{NLE_{2}}\eta^{\frac{\alpha}{2}}d^{\frac{1}{p}}$. Since $\pi_{\mu}$
satisfies the log-Sobolev inequality, by Talagrand's inequality and the
triangle inequality we also get
$\displaystyle W_{2}(p_{k},\ \pi)$ $\displaystyle\leq W_{2}(p_{k},\
\pi_{\mu})+W_{2}(\pi,\ \pi_{\mu})$
$\displaystyle\leq\sqrt{\frac{2}{\gamma_{1}}H(p_{k}|\pi_{\mu})}+W_{2}(\pi,\
\pi_{\mu})$
$\displaystyle\leq\frac{1}{\sqrt{\gamma_{1}}}e^{-\frac{\gamma_{1}}{2}\eta
k}\sqrt{H(p_{0}|\pi_{\mu})}+\frac{2}{\gamma_{1}}\eta^{\frac{\alpha}{2}}\sqrt{D_{4}}+3\sqrt{NLE_{2}}\eta^{\frac{\alpha}{2}}d^{\frac{1}{p}}.$
Given $\epsilon>0$, if we further assume
$\eta\leq\left(\frac{\epsilon\gamma_{1}}{6\sqrt{D_{4}}}\right)^{\frac{2}{\alpha}}\wedge\left(\frac{\epsilon}{9\sqrt{NLE_{2}}d^{\frac{1}{p}}}\right)^{\frac{2}{\alpha}}$,
then the above inequality implies
$W_{2}(p_{k},\pi)\leq\frac{1}{\sqrt{\gamma_{1}}}e^{-\frac{\gamma_{1}}{2}\eta
k}\sqrt{H(p_{0}|\pi_{\mu})}+\frac{2\epsilon}{3}.$ This means for
$k\geq\frac{2}{\gamma_{1}\eta}\log\frac{3\sqrt{H\left(p_{0}|\pi_{\mu}\right)}}{\epsilon\sqrt{\gamma_{1}}},$
we have $W_{2}(p_{k},\pi)\leq\frac{\epsilon}{3}+\frac{2\epsilon}{3}=\epsilon$,
as desired. ∎
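To make the object of this analysis concrete, the following is a minimal runnable sketch, not part of the paper, of the smoothed ULA iteration with the coupling $\mu=\sqrt{\eta}$ used in the proof: $x_{k+1}=x_{k}-\eta\,\nabla U(x_{k}+\mu\zeta_{k})+\sqrt{2\eta}\,z_{k}$. The potential, step size, and iteration count below are stand-in assumptions; $\gamma_{1}$, $D_{4}$, and the admissible step-size range are not computed.

```python
# A minimal sketch -- not from the paper -- of ULA driven by the smoothed
# gradient estimator of Lemma D.1, with mu = sqrt(eta) as in Theorem D.1:
#   x_{k+1} = x_k - eta * grad U(x_k + mu*zeta_k) + sqrt(2*eta) * z_k.
# The potential and all constants here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
d, alpha, eta, n_iter = 2, 0.5, 1e-3, 50_000
mu = np.sqrt(eta)                        # smoothing radius tied to the step size

def grad_U(x):
    # U(x) = ||x||^{1+alpha}/(1+alpha) + ||x||^2/2: weakly smooth + quadratic tail
    r = np.maximum(np.linalg.norm(x), 1e-12)
    return r ** (alpha - 1.0) * x + x

x = rng.standard_normal(d)
samples = np.empty((n_iter, d))
for k in range(n_iter):
    zeta = rng.standard_normal(d)        # smoothing noise for g(x_k, zeta)
    z = rng.standard_normal(d)           # Langevin noise z_k ~ N(0, I)
    x = x - eta * grad_U(x + mu * zeta) + np.sqrt(2.0 * eta) * z
    samples[k] = x

# crude diagnostic: the second moment over the second half of the chain
print("empirical E||x||^2:", np.sum(samples[n_iter // 2:] ** 2, axis=1).mean())
```

The only change relative to plain ULA is that the gradient is queried at the randomly perturbed point $x_{k}+\mu\zeta_{k}$, which is exactly the estimator of Lemma D.1.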
### D.3 Proof of Lemma 3.4
###### Lemma D.3.
Assume that $\pi\propto\exp(-U)$ and $\pi_{\mu}\propto\exp(-U_{\mu})$ and that
$\pi$ has a bounded second moment, that is,
$\int\left\|x\right\|^{2}\pi(x)dx=E_{2}<\infty$. We deduce the following
bound for any $\mu\leq 0.05$:
$W_{2}^{2}(\pi,\ \pi_{\mu})\leq 8.24NL\mu^{1+\alpha}d^{\frac{2}{p}}E_{2}.$
###### Proof.
This proof adapts the technique of the proof of [11]’s Proposition 1. Without
loss of generality we may assume that
${\displaystyle\int_{\mathbb{R}^{d}}\exp(-U(x))dx=1}$. We first give upper and
lower bounds to the normalizing constant of $\pi_{\mu}$, that is
$\displaystyle c_{\mu}$
$\displaystyle\stackrel{{{}_{\triangle}}}{{=}}\int_{\mathbb{R}^{d}}\pi(x)e^{-\left(U_{\mu}(x)-U(x)\right)}dx=\mathbb{E}_{\pi}\left(e^{-\left(U_{\mu}(x)-U(x)\right)}\right).$
The constant $c_{\mu}$ is an expectation of
$e^{-\left(U_{\mu}(x)-U(x)\right)}$ with respect to the density $\pi$ so it
can be trivially upper bounded by $e^{M}$ and lower bounded by $e^{-M}$ where
$\left|U_{\mu}(x)-U(x)\right|\leq\sum_{i}L_{i}\mu^{1+\alpha_{i}}d^{\frac{2}{p}}=M$.
Now we control the distance between densities $\pi$ and $\pi_{\mu}$ at any
fixed $x\in\mathbb{R}^{d}$:
$\displaystyle\left|\pi(x)-\pi_{\mu}(x)\right|$
$\displaystyle=\pi(x)\left|1-\frac{e^{-\left(U_{\mu}(x)-U(x)\right)}}{c_{\mu}}\right|$
$\displaystyle\leq\pi(x)\left\\{\left(1-\frac{e^{-\left(U_{\mu}(x)-U(x)\right)}}{e^{M}}\right)+e^{-\left(U_{\mu}(x)-U(x)\right)}\left(\frac{1}{c_{\mu}}-\frac{1}{e^{M}}\right)\right\\}$
$\displaystyle\leq\pi(x)\left(1-e^{-2M}+e^{2M}-1\right)$
$\displaystyle\leq\pi(x)\left(2M+e^{2M}-1\right).$
The first inequality follows from the triangle inequality for the absolute
value, the second is immediate, and the last follows from $1-e^{-x}\leq x$
for any $x\geq 0$. To bound $W_{2}$, we use an inequality from [32] (Theorem
6.15, page 115):
$W_{2}^{2}(\pi,\ \pi_{\mu})\leq
2\int_{\mathbb{R}^{d}}\|x\|_{2}^{2}\left|\pi(x)-\pi_{\mu}(x)\right|dx.$
Combining this with the bound on $\left|\pi(x)-\pi_{\mu}(x)\right|$ shown
above, we have
$\displaystyle W_{2}^{2}(\pi,\ \pi_{\mu})$ $\displaystyle\leq
2\int_{\mathbb{R}^{d}}\|x\|_{2}^{2}\pi(x)\left(2M+e^{2M}-1\right)dx$
$\displaystyle\leq 2\left(2M+e^{2M}-1\right)E_{\pi}\left[\|x\|^{2}\right]$
$\displaystyle\leq 2\left(2M+e^{2M}-1\right)E_{2}$ $\displaystyle\leq
8.24\sum_{i}L_{i}\mu^{1+\alpha_{i}}d^{\frac{2}{p}}E_{2}$ $\displaystyle\leq
8.24NL\mu^{1+\alpha}d^{\frac{2}{p}}E_{2},$
where in the last inequality $M<0.05$ ensures that $e^{2M}-1\leq 2.12M$. This
gives the desired result.
∎
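The driving quantity in this proof is the perturbation bound $\left|U_{\mu}(x)-U(x)\right|\leq\sum_{i}L_{i}\mu^{1+\alpha_{i}}d^{\frac{2}{p}}=M$. The following Monte Carlo check is not part of the original proofs; the potential and constants are illustrative assumptions.

```python
# A quick Monte Carlo check -- not from the paper -- of the bound
# |U_mu(x) - U(x)| <= sum_i L_i mu^{1+alpha_i} d^{2/p} = M used in this proof,
# for the toy potential U(x) = ||x||^{1+alpha}/(1+alpha) (N = 1, L = 1,
# an assumption) with Gaussian smoothing (p = 2).
import numpy as np

rng = np.random.default_rng(2)
d, alpha, mu = 5, 0.5, 0.1

def U(x):
    return np.linalg.norm(x, axis=-1) ** (1.0 + alpha) / (1.0 + alpha)

x = rng.standard_normal(d)
zeta = rng.standard_normal((500_000, d))
U_mu = U(x + mu * zeta).mean()           # U_mu(x) = E_zeta[U(x + mu*zeta)]
M = mu ** (1.0 + alpha) * d              # the bound with L = N = 1, p = 2

print(f"|U_mu(x) - U(x)| = {abs(U_mu - U(x)):.4f} <= M = {M:.4f}")
```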
## Appendix E Convexification of non-convex domain
### E.1 Proof of Lemma 4.1
###### Lemma E.1.
For function $V$ defined as
$V(\ x)=\inf_{\begin{subarray}{c}\\{\ x_{i}\\}\subset\Omega,\\\
\left\\{\lambda_{i}\big{|}\sum_{i}\lambda_{i}=1\right\\}\\\
\text{s.t.},\sum_{i}\lambda_{i}\ x_{i}=\
x\end{subarray}}\left\\{\sum_{i=1}^{l}\lambda_{i}U(\ x_{i})\right\\},$ (E.1)
$\forall\ x\in\mathbb{B}(0,R)$, $\inf_{\left\|\bar{x}\right\|=R}U(\bar{x})\leq V(\
x)\leq\sup_{\left\|\bar{x}\right\|=R}U(\bar{x})$.
###### Proof.
First, by definition of $V$ inside $\mathbb{B}(0,R)$, we show that for any
linear combination of the form $\sum_{i}\lambda_{i}U(\ x_{i})$
where $\sum_{i}\lambda_{i}=1,$ we can find another representation
$\sum_{j}\lambda_{j}U(\ x_{j})$ where $\sum_{j}\lambda_{j}=1$ and
$\left\|x_{j}\right\|=R$ such that $\sum_{j}\lambda_{j}U(\
x_{j})\leq\sum_{i}\lambda_{i}U(\ x_{i})$. This follows from the argument
below.
For any $\ x_{j}\in\\{\ x_{i}\\}$ such that $\left\|x_{j}\right\|>R$,
there exists a new convex combination $\\{\
x_{i}\\}\bigcup\\{\bar{x}_{j}\\}\setminus\\{\ x_{j}\\}$ with
$\left\|\bar{x}_{j}\right\|=R$, such that $\sum_{i}\lambda_{i}U(\
x_{i})\geq\tilde{\lambda}_{j}U(\bar{x}_{j})+\sum_{i\neq
j}\tilde{\lambda}_{i}U(\ x_{i})$. In this case, we choose $\bar{x}_{j}$ where
$\left\|\bar{x}_{j}\right\|=R$, such that:
$\displaystyle\bar{x}_{j}$
$\displaystyle=\dfrac{1-\bar{\lambda}_{j}}{1-\lambda_{j}}\
x+\dfrac{\bar{\lambda}_{j}-\lambda_{j}}{1-\lambda_{j}}\
x_{j},\>\lambda_{j}<\bar{\lambda}_{j}<1,$ $\displaystyle=\bar{\lambda}_{j}\
x_{j}+\left(\dfrac{1-\bar{\lambda}_{j}}{1-\lambda_{j}}\right)\left(\sum_{i\neq
j}\lambda_{i}\ x_{i}\right).$ (E.2)
Since $U$ is convex on $\Omega$,
$U(\bar{x}_{j})\leq\bar{\lambda}_{j}U(\
x_{j})+\left(\dfrac{1-\bar{\lambda}_{j}}{1-\lambda_{j}}\right)\left(\sum_{i\neq
j}\lambda_{i}U(\ x_{i})\right).$ (E.3)
On the other hand, $x$ can be represented as a convex combination of $\\{\
x_{i}\\}\bigcup\\{\bar{x}_{j}\\}\setminus\\{\ x_{j}\\}$:
$\
x=\dfrac{\lambda_{j}}{\bar{\lambda}_{j}}\bar{x}_{j}+\left(1-\dfrac{\lambda_{j}}{\bar{\lambda}_{j}}\dfrac{1-\bar{\lambda}_{j}}{1-\lambda_{j}}\right)\left(\sum_{i\neq
j}\lambda_{i}\ x_{i}\right)=\tilde{\lambda}_{j}\bar{x}_{j}+\sum_{i\neq
j}\tilde{\lambda}_{i}\ x_{i},$ (E.4)
and that
$\displaystyle\sum_{i}\lambda_{i}U(\ x_{i})$
$\displaystyle\geq\dfrac{\lambda_{j}}{\bar{\lambda}_{j}}U(\bar{x}_{j})+\left(1-\dfrac{\lambda_{j}}{\bar{\lambda}_{j}}\dfrac{1-\bar{\lambda}_{j}}{1-\lambda_{j}}\right)\left(\sum_{i\neq
j}\lambda_{i}U(\ x_{i})\right)$
$\displaystyle=\tilde{\lambda}_{j}U(\bar{x}_{j})+\sum_{i\neq
j}\tilde{\lambda}_{i}U(\ x_{i}).$ (E.5)
As a result, $V(\ x)$ can be represented as
$V(\ x)=\inf_{\begin{subarray}{c}\\{\ x_{j}\\}\subset\Omega,\\\
\left\\{\lambda_{j}\big{|}\sum_{j}\lambda_{j}=1\right\\}\\\
\text{s.t.}\;\sum_{j}\lambda_{j}\ x_{j}=\
x,\,\left\|x_{j}\right\|=R\end{subarray}}\left\\{\sum_{j}\lambda_{j}U(\
x_{j})\right\\}.$ (E.6)
By the representation of $V$ inside $\mathbb{B}(0,R)$, we obtain
$\inf_{\left\|\bar{x}\right\|=R}U(\bar{x})\leq V(\
x)\leq\sup_{\left\|\bar{x}\right\|=R}U(\bar{x}).$ ∎
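A one-dimensional picture may be helpful here; the sketch below is not part of the proof, and the toy potential is an assumption. In $d=1$ the sphere $\left\|x\right\|=R$ is just $\{-R,R\}$, so inside $[-R,R]$ the infimum in (E.6) is attained by the unique convex combination of $-R$ and $R$: $V$ is the chord between $(-R,U(-R))$ and $(R,U(R))$, and the sandwich bound of the lemma can be checked directly.

```python
# A one-dimensional illustration of Lemma E.1 -- not from the paper.  In d = 1
# the sphere {||x|| = R} is {-R, R}, so inside [-R, R] the infimum in (E.6) is
# attained by the unique convex combination of -R and R: V is the chord
# between (-R, U(-R)) and (R, U(R)).  The toy potential U is an assumption.
import numpy as np

R = 1.0
U = lambda x: np.cos(3.0 * x) + x ** 2   # non-convex inside the ball

def V(x):
    lam = (R - x) / (2.0 * R)            # weight on -R: lam*(-R) + (1-lam)*R = x
    return lam * U(-R) + (1.0 - lam) * U(R)

xs = np.linspace(-R, R, 201)
lo, hi = min(U(-R), U(R)), max(U(-R), U(R))
assert np.all(lo - 1e-12 <= V(xs)) and np.all(V(xs) <= hi + 1e-12)
print(f"inf U on sphere = {lo:.3f} <= V(x) <= sup U on sphere = {hi:.3f}")
```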
### E.2 Proof of Lemma 4.2
###### Lemma E.2.
For $U$ satisfying $\alpha$-mixture weakly smooth and
$\left(\mu,\theta\right)$-degenerated convex outside the ball of radius $R$,
there exists $\hat{U}\in C^{1}(\mathbb{R}^{d})$ with a Hessian that exists
everywhere on $\mathbb{R}^{d}$, and $\hat{U}$ is
$\left(\left(1-\theta\right)\frac{\mu}{2},\theta\right)$-degenerated convex on
$\mathbb{R}^{d}$ (that is
$\nabla^{2}\hat{U}(x)\succeq\left(1-\theta\right)\frac{\mu}{2}\left(1+\left\|x\right\|^{2}\right)^{-\frac{\theta}{2}}I_{d}$),
such that
$\displaystyle\sup\left(\hat{U}(\ x)-U(\ x)\right)$
$\displaystyle-\inf\left(\hat{U}(\ x)-U(\
x)\right)\leq\sum_{i}L_{i}R^{1+\alpha_{i}}+\frac{4\mu}{\left(2-\theta\right)}\
R^{2-\theta}.$ (E.7)
###### Proof.
Following [24]’s approach closely, let $g(\
x)=\frac{\mu}{2\left(2-\theta\right)}\
\left(1+\left\|x\right\|^{2}\right)^{1-\frac{\theta}{2}}$ for $0\leq\theta<1$.
The gradient of $g\left(x\right)$ is $\nabla g(\
x)=\frac{\mu}{2}\left(1+\left\|x\right\|^{2}\right)^{-\frac{\theta}{2}}x$ and
the Hessian of $g\left(x\right)$ is
$\displaystyle\nabla^{2}g(\ x)$
$\displaystyle=\frac{\mu}{2}\left[\left(1+\left\|x\right\|^{2}\right)^{-\frac{\theta}{2}}I_{d}-\theta\left(1+\left\|x\right\|^{2}\right)^{-\frac{\theta}{2}-1}xx^{T}\right]$
$\displaystyle\preceq\frac{\mu}{2}\left(1+\left\|x\right\|^{2}\right)^{-\frac{\theta}{2}}I_{d}.$
(E.8)
On the other hand, we also have
$\displaystyle\nabla^{2}g(\ x)$
$\displaystyle=\frac{\mu}{2}\left[\left(1+\left\|x\right\|^{2}\right)^{-\frac{\theta}{2}}I_{d}-\theta\left(1+\left\|x\right\|^{2}\right)^{-\frac{\theta}{2}-1}xx^{T}\right]$
$\displaystyle=\frac{\mu}{2}\left(1+\left\|x\right\|^{2}\right)^{-\frac{\theta}{2}-1}\left[I_{d}+I_{d}\left\|x\right\|^{2}-\theta\left\|x\right\|^{2}\frac{xx^{T}}{\left\|x\right\|^{2}}\right]$
$\displaystyle=\frac{\mu}{2}\left(1+\left\|x\right\|^{2}\right)^{-\frac{\theta}{2}-1}\left[I_{d}+I_{d}\left(1-\theta\right)\left\|x\right\|^{2}+\theta\left\|x\right\|^{2}\left(I_{d}-\frac{xx^{T}}{\left\|x\right\|^{2}}\right)\right]$
$\displaystyle\succeq\frac{\mu}{2}\left(1+\left\|x\right\|^{2}\right)^{-\frac{\theta}{2}-1}\left(\left(1-\theta\right)\left\|x\right\|^{2}+1\right)I_{d}$
$\displaystyle\succeq\frac{\mu}{2}\left(1+\left\|x\right\|^{2}\right)^{-\frac{\theta}{2}-1}\left(\left(1-\theta\right)\left(\left\|x\right\|^{2}+1\right)\right)I_{d}$
$\displaystyle\succeq\left(1-\theta\right)\frac{\mu}{2}\left(1+\left\|x\right\|^{2}\right)^{-\frac{\theta}{2}}I_{d}.$
(E.9)
We adapt [35] by denoting
$\tilde{U}\left(x\right)=U\left(x\right)-g\left(x\right).$ Since
$U\left(x\right)$ is $\left(\mu,\theta\right)$-degenerated convex outside the
ball, we deduce for every $\left\|x\right\|\geq R,$
$\displaystyle\nabla^{2}\tilde{U}\left(x\right)$
$\displaystyle=\nabla^{2}U\left(x\right)-\nabla^{2}g\left(x\right)$
$\displaystyle\succeq\mu\left(1+\left\|x\right\|{}^{2}\right)^{-\frac{\theta}{2}}I_{d}-\frac{\mu}{2}\left(1+\left\|x\right\|^{2}\right)^{-\frac{\theta}{2}}I_{d}$
$\displaystyle\succeq\frac{\mu}{2}\left(1+\left\|x\right\|^{2}\right)^{-\frac{\theta}{2}}I_{d},$
(E.10)
which implies that $\tilde{U}\left(x\right)$ is
$\left(\frac{\mu}{2},\theta\right)$-degenerated convex outside the ball. Now,
we construct $\hat{U}(\ x)$ so that it is twice differentiable, degenerated
convex on all of $\mathbb{R}^{d}$, and differs from $U(\ x)$ by less than
$4LR^{1+\alpha}+4LR^{1+\ell+\alpha}+\frac{4\mu}{\left(2-\theta\right)}\
R^{2-\theta}$. Based on the same construction of [24], we first define the
function $V$ as the convex extension [35] of $\tilde{U}$ from domain
$\Omega=R^{d}\setminus\mathbb{B}\left(0,R\right)$ to its convex hull
$\Omega^{co}$, $V\left(x\right)=\inf\left\\{\sum_{i}\lambda_{i}\tilde{U}(\
x_{i})\right\\}$ for every $x\in\mathbb{R}^{d}.$ Since $\tilde{U}(\ x)$ is
convex in $\Omega$, $V(\ x)=\tilde{U}(\ x)$ for $\ x\in\Omega$. By Lemma 4.1,
$V(\ x)$ is convex on the entire domain $\mathbb{R}^{d}$ and $V(\ x)$ can be
represented as
$V(\ x)=\inf_{\begin{subarray}{c}\\{\ x_{j}\\}\subset\Omega,\\\
\left\\{\lambda_{j}\big{|}\sum_{j}\lambda_{j}=1\right\\}\\\
\text{s.t.}\;\sum_{j}\lambda_{j}\ x_{j}=\
x,\ \mathrm{and}\ \left\|x_{j}\right\|=R\end{subarray}}\left\\{\sum_{j}\lambda_{j}\tilde{U}(\
x_{j})\right\\}.$ (E.11)
Therefore, $\forall\ x\in\mathbb{B}(0,R)$,
$\inf_{\left\|\bar{x}\right\|=R}\tilde{U}(\bar{x})\leq V(\
x)\leq\sup_{\left\|\bar{x}\right\|=R}\tilde{U}(\bar{x})$. Next we construct
$\tilde{V}(\ x)$ to be a smoothing of $V$ on
$\mathbb{B}\left(0,R+\epsilon\right)$. Consider the function
$\varphi{\displaystyle(x)}$ of a variable $x$ in $\mathbb{R}^{d}$ defined by
${\displaystyle\varphi(x)=\begin{cases}Ce^{-1/(1-\left\|x\right\|^{2})}&\text{
if }\left\|x\right\|<1\\\ 0&\text{ if }\left\|x\right\|\geq 1\end{cases}}$
(E.12)
where the numerical constant $C$ ensures normalization. Let
${\displaystyle\varphi_{\delta}(x)=\delta^{-d}\varphi(\delta^{-1}x)}$ be a
smooth function supported on the ball $\mathbb{B}(0,\delta)$. Define
$\displaystyle\tilde{V}(\ x)$ $\displaystyle=\int V(\ y)\varphi_{\delta}(\
x-y)dy$ $\displaystyle=\int V(\ x-y)\varphi_{\delta}(y)dy$
$\displaystyle=E_{y}\left[V(x-y)\right].$ (E.13)
The third equality implies that for any $x$ and $z\in\mathbb{R}^{d}$,
$\displaystyle\left\langle\nabla\tilde{V}(\ x)-\nabla\tilde{V}(\
z),x-z\right\rangle$ $\displaystyle=\left\langle\nabla
E_{y}\left[V(x-y)\right]-\nabla E_{y}\left[V(z-y)\right],x-z\right\rangle$
$\displaystyle\stackrel{{{}_{1}}}{{=}}\left\langle E_{y}\left[\nabla
V(x-y)\right]-E_{y}\left[\nabla V(z-y)\right],x-z\right\rangle$
$\displaystyle=\left\langle E_{y}\left[\nabla V(x-y)-\nabla
V(z-y)\right],x-z\right\rangle$ $\displaystyle=E_{y}\left\langle\nabla
V(x-y)-\nabla V(z-y),x-z\right\rangle$ $\displaystyle\geq 0,$ (E.14)
where step $1$ follows from the interchangeability of gradient and integral,
and the last line follows from the convexity of $V$; this shows that
$\tilde{V}$ is a smooth and convex function on $\mathbb{R}^{d}.$ Also, note
that the definition of
$\tilde{V}$ implies that $\forall\left\|x\right\|<R+\epsilon$,
$\inf_{\left\|\bar{x}\right\|<R+\epsilon+\delta}V(\bar{x})\leq\tilde{V}(\
x)\leq\sup_{\left\|\bar{x}\right\|<R+\epsilon+\delta}V(\bar{x}).$ (E.15)
And by Lemma 4.1, for all $\left\|x\right\|<R+\epsilon$,
$\displaystyle\inf_{\bar{x}\in\mathbb{B}\left(0,R+\epsilon+\delta\right)\setminus\mathbb{B}(0,R)}\tilde{U}(\bar{x})\leq\tilde{V}(\
x)\leq\sup_{\bar{x}\in\mathbb{B}\left(0,R+\epsilon+\delta\right)\setminus\mathbb{B}(0,R)}\tilde{U}(\bar{x}).$
(E.16)
Finally, we construct the auxiliary function:
$\displaystyle\hat{U}(\
x)-g\left(x\right)=\left\\{\begin{array}[]{l}\tilde{U}(\ x),\
\left\|x\right\|\geq R+2\epsilon\\\ \alpha(\ x)\tilde{U}(\ x)+(1-\alpha(\
x))\tilde{V}(\ x),\ R+\epsilon<\left\|x\right\|<R+2\epsilon\\\ \tilde{V}(\
x),\ \left\|x\right\|\leq R+\epsilon\end{array}\right.$ (E.20)
where $\alpha(\
x)=\dfrac{1}{2}\cos\left(\pi\dfrac{\left\|x\right\|^{2}}{\epsilon\left(2R+3\epsilon\right)^{2}}-\frac{\left(R+\epsilon\right)^{2}}{\epsilon\left(2R+3\epsilon\right)^{2}}\pi\right)+\dfrac{1}{2}$.
Here we know that $\tilde{U}(\ x)$ is convex and smooth in
$\mathbb{R}^{d}\setminus\mathbb{B}\left(0,R\right)$; $\tilde{V}(\ x)$ is also
convex and smooth in
$\mathbb{R}^{d}\setminus\mathbb{B}\left(0,R+\epsilon\right)$. Hence for
$R+\epsilon<\left\|x\right\|<R+2\epsilon$,
$\displaystyle\nabla^{2}\left(\hat{U}(\ x)-g\left(x\right)\right)$
$\displaystyle=\nabla^{2}\tilde{U}(\ x)+\nabla^{2}\left((1-\alpha(\
x))(\tilde{V}(\ x)-\tilde{U}(\ x))\right)$ $\displaystyle=\alpha(\
x)\nabla^{2}\tilde{U}(\ x)+(1-\alpha(\ x))\nabla^{2}\tilde{V}(\ x)$
$\displaystyle-\nabla^{2}\alpha(\ x)\left(\tilde{V}(\ x)-\tilde{U}(\
x)\right)-2\nabla\alpha(\ x)\left(\nabla\tilde{V}(\ x)-\nabla\tilde{U}(\
x)\right)^{T}$ $\displaystyle\succeq-\nabla^{2}\alpha(\ x)\left(\tilde{V}(\
x)-\tilde{U}(\ x)\right)-2\nabla\alpha(\ x)\left(\nabla\tilde{V}(\
x)-\nabla\tilde{U}(\ x)\right)^{T}.$ (E.21)
Note that for $R+\epsilon<\left\|x\right\|<R+2\epsilon$, we have
$\displaystyle\left\|\nabla g(\ x)-\nabla g(\ x-y)\right\|$
$\displaystyle=\left\|\frac{\mu}{2}\left(1+\left\|x\right\|^{2}\right)^{-\frac{\theta}{2}}x-\frac{\mu}{2}\left(1+\left\|x-y\right\|^{2}\right)^{-\frac{\theta}{2}}\left(x-y\right)\right\|$
(E.22)
$\displaystyle\leq\left\|\frac{\mu}{2}\left(1+\left\|x\right\|^{2}\right)^{-\frac{\theta}{2}}x-\frac{\mu}{2}\left(1+\left\|x\right\|^{2}\right)^{-\frac{\theta}{2}}\left(x-y\right)\right\|$
$\displaystyle+\left\|\frac{\mu}{2}\left(1+\left\|x\right\|^{2}\right)^{-\frac{\theta}{2}}\left(x-y\right)-\frac{\mu}{2}\left(1+\left\|x-y\right\|^{2}\right)^{-\frac{\theta}{2}}\left(x-y\right)\right\|$
$\displaystyle\leq\frac{\mu}{2}\left(1+\left\|x\right\|^{2}\right)^{-\frac{\theta}{2}}\left\|y\right\|+\frac{\mu}{2}\left|\left(1+\left\|x\right\|^{2}\right)^{-\frac{\theta}{2}}-\left(1+\left\|x-y\right\|^{2}\right)^{-\frac{\theta}{2}}\right|\left\|x-y\right\|$
$\displaystyle\leq\frac{\mu}{2}\left(1+\left(R+\epsilon\right){}^{2}\right)^{-\frac{\theta}{2}}\delta+\frac{\mu}{2}\frac{\left|\left(1+\left\|x\right\|^{2}\right)^{\frac{\theta}{2}}-\left(1+\left\|x-y\right\|^{2}\right)^{\frac{\theta}{2}}\right|}{\left(1+\left\|x\right\|^{2}\right)^{\frac{\theta}{2}}\left(1+\left\|x-y\right\|^{2}\right)^{\frac{\theta}{2}}}\left\|\left(x-y\right)\right\|$
$\displaystyle\stackrel{{{}_{1}}}{{\leq}}\frac{\mu}{2}\left(1+\left(R+\epsilon\right){}^{2}\right)^{-\frac{\theta}{2}}\delta+\frac{\mu}{2}\frac{\left|\left(1+\left\|x\right\|^{2}\right)-\left(1+\left\|x-y\right\|^{2}\right)\right|}{\left(1+\left\|x\right\|^{2}\right)^{\frac{\theta}{2}}\left(1+\left\|x-y\right\|^{2}\right)^{\frac{\theta}{2}}}\left\|\left(x-y\right)\right\|$
$\displaystyle\leq\frac{\mu}{2}\left(1+\left(R+\epsilon\right){}^{2}\right)^{-\frac{\theta}{2}}\delta+\frac{\mu}{2}\frac{\left|\left(\left\|x\right\|-\left\|x-y\right\|\right)\left(\left\|x\right\|+\left\|x-y\right\|\right)\right|}{\left(1+\left\|x\right\|^{2}\right)^{\frac{\theta}{2}}\left(1+\left\|x-y\right\|^{2}\right)^{\frac{\theta}{2}}}\left\|\left(x-y\right)\right\|$
$\displaystyle\stackrel{{{}_{2}}}{{\leq}}\frac{\mu}{2}\left(1+\left(R+\epsilon\right){}^{2}\right)^{-\frac{\theta}{2}}\delta+\frac{\mu}{2}\frac{\left\|y\right\|\left(\left\|x\right\|+\left\|x-y\right\|\right)}{\left(1+\left\|x\right\|^{2}\right)^{\frac{\theta}{2}}\left(1+\left\|x-y\right\|^{2}\right)^{\frac{\theta}{2}}}\left\|\left(x-y\right)\right\|$
$\displaystyle\leq\frac{\mu}{2}\left(1+\left(R+\epsilon\right){}^{2}\right)^{-\frac{\theta}{2}}\delta+\frac{\mu}{2}\frac{2\left(R+2\epsilon+\delta\right)^{2}\delta}{\left(1+\left(R+\epsilon\right)^{2}\right)^{\frac{\theta}{2}}\left(1+\left(R+\epsilon-\delta\right)^{2}\right)^{\frac{\theta}{2}}},$
(E.23)
where $1$ follows from Lemma F.15, while $2$ is due to the triangle
inequality. As a result, we get
$\displaystyle\left\|\nabla\tilde{V}(\ x)-\nabla\tilde{U}(\ x)\right\|$
$\displaystyle\leq\int\left\|\nabla\tilde{U}(\ x-\ y)-\nabla\tilde{U}(\
x)\right\|\varphi_{\delta}(\ y)dy$
$\displaystyle\leq\sum_{i}L_{i}\delta^{\alpha_{i}}+\left\|\nabla g(\ x)-\nabla
g(\ x-y)\right\|$ $\displaystyle\leq
NL\delta^{\alpha}+\frac{\mu}{2}\left(1+\left(R+\epsilon\right){}^{2}\right)^{-\frac{\theta}{2}}\delta$
(E.24)
$\displaystyle+\frac{\mu}{2}\frac{2\left(R+\epsilon-\delta\right)^{2}\delta}{\left(1+\left(R+\epsilon\right)^{2}\right)^{\frac{\theta}{2}}\left(1+\left(R+\epsilon-\delta\right)^{2}\right)^{\frac{\theta}{2}}}.$
(E.25)
On the other hand, we also obtain
$\displaystyle|\tilde{U}(\mathrm{x})-\tilde{U}(x-\mathrm{y})|$
$\displaystyle\leq\mathrm{\max}\left\\{\left\langle\nabla
U(\mathrm{x-y}),\mathrm{y}\right\rangle,\left\langle\nabla
U(\mathrm{x}),\mathrm{-y}\right\rangle\right\\}$
$\displaystyle+\sum_{i}\frac{L}{1+\alpha_{i}}\|\mathrm{y}\|^{\alpha_{i}+1}+\left|g\left(x\right)-g\left(x-y\right)\right|$
$\displaystyle\mathrm{\leq\max}\left\\{\left\langle\nabla
U(\mathrm{x-y}),\mathrm{y}\right\rangle,\left\langle\nabla
U(\mathrm{x}),\mathrm{-y}\right\rangle\right\\}+\sum_{i}\frac{L}{1+\alpha_{i}}\|\mathrm{y}\|^{\alpha_{i}+1}$
$\displaystyle+\left|\frac{\mu}{2\left(2-\theta\right)}\
\left(1+\left\|x\right\|^{2}\right)^{1-\frac{\theta}{2}}-\frac{\mu}{2\left(2-\theta\right)}\
\left(1+\left\|x-y\right\|^{2}\right)^{1-\frac{\theta}{2}}\right|$
$\displaystyle\stackrel{{{}_{1}}}{{\leq}}\mathrm{\max}\left\\{\sum_{i}L_{i}\left\|\mathrm{x-y}\right\|^{\alpha_{i}}\left\|y\right\|,\sum_{i}L_{i}\left\|\mathrm{x}\right\|^{\alpha_{i}}\left\|y\right\|\right\\}$
$\displaystyle+\sum_{i}\frac{L}{1+\alpha_{i}}\|\mathrm{y}\|^{\alpha_{i}+1}+\frac{\mu}{2\left(2-\theta\right)}\left|\left(1+\left\|x\right\|^{2}\right)-\left(1+\left\|x-y\right\|^{2}\right)\right|$
(E.26) $\displaystyle\leq
L\left\|y\right\|\mathrm{\max}\left\\{\sum_{i}L_{i}\left\|\mathrm{x-y}\right\|^{\alpha_{i}},\sum_{i}L_{i}\left\|\mathrm{x}\right\|^{\alpha_{i}}\right\\}$
$\displaystyle+\sum_{i}\frac{L}{1+\alpha_{i}}\|\mathrm{y}\|^{\alpha_{i}+1}+\frac{\mu}{2\left(2-\theta\right)}\left|\left(\left\|x\right\|-\left\|x-y\right\|\right)\left(\left\|x\right\|+\left\|x-y\right\|\right)\right|$
(E.27) $\displaystyle\leq
L\left\|y\right\|\mathrm{\max}\left\\{\sum_{i}L_{i}\left\|\mathrm{x-y}\right\|^{\alpha_{i}},\sum_{i}L_{i}\left\|\mathrm{x}\right\|^{\alpha_{i}}\right\\}+$
(E.28)
$\displaystyle+\sum_{i}\frac{L}{1+\alpha_{i}}\|\mathrm{y}\|^{\alpha_{i}+1}+\frac{\mu}{2\left(2-\theta\right)}\left(\left\|x\right\|+\left\|x-y\right\|\right)\left\|y\right\|,$
(E.29)
where $1$ follows again from Lemma F.15 and the last inequality follows from
the triangle inequality. Hence for $R+\epsilon<\left\|x\right\|<R+2\epsilon$,
$\left\|y\right\|\leq\delta$,
$\displaystyle\tilde{V}(\ x)-\tilde{U}(\ x)$
$\displaystyle=\int\left(\tilde{U}(\ x-\ y)-\tilde{U}(\
x)\right)\varphi_{\delta}(\ y)d\ y$ $\displaystyle\leq
L\left\|y\right\|\mathrm{\max}\left\\{\sum_{i}L_{i}\left\|\mathrm{x-y}\right\|^{\alpha_{i}},\sum_{i}L_{i}\left\|\mathrm{x}\right\|^{\alpha_{i}}\right\\}+$
$\displaystyle+\sum_{i}\frac{L}{1+\alpha_{i}}\|\mathrm{y}\|^{\alpha_{i}+1}+\frac{\mu}{2\left(2-\theta\right)}\left(\left\|x\right\|+\left\|x-y\right\|\right)\left\|y\right\|$
$\displaystyle\leq
L\delta\left[\sum_{i}L_{i}\left(R+2\epsilon+\delta\right)^{\alpha_{i}}\right]+$
$\displaystyle+\sum_{i}\frac{L}{1+\alpha_{i}}\delta^{\alpha_{i}+1}+\frac{\mu}{\left(2-\theta\right)}\left(R+2\epsilon+\delta\right)\delta.$
Therefore, when $R+\epsilon<\left\|x\right\|<R+2\epsilon$,
$\displaystyle\nabla^{2}\left(\hat{U}(\ x)-g\left(x\right)\right)$
$\displaystyle\succeq-\frac{\left(R+\epsilon\right)^{2}\pi\left(L\delta\left[\sum_{i}L_{i}\left(R+2\epsilon+\delta\right)^{\alpha_{i}}\right]\right)}{\epsilon\left(2R+3\epsilon\right)}I_{d}$
$\displaystyle-\frac{\left(R+\epsilon\right)^{2}\pi\left(\sum_{i}\frac{L}{1+\alpha_{i}}\delta^{\alpha_{i}+1}+\frac{\mu}{\left(2-\theta\right)}\left(R+2\epsilon+\delta\right)\delta\right)}{\epsilon\left(2R+3\epsilon\right)}I_{d}$
$\displaystyle-\frac{\left(R+\epsilon\right)^{4}\pi^{2}\left(NL\delta^{\alpha}+\frac{\mu}{2}\left(1+\left(R+\epsilon\right){}^{2}\right)^{-\frac{\theta}{2}}\delta\right)}{\epsilon^{2}\left(2R+3\epsilon\right)}I_{d}$
$\displaystyle-\frac{\left(R+\epsilon\right)^{4}\pi^{2}\left(\frac{\mu}{2}\frac{2\left(R+\epsilon-\delta\right)^{2}\delta}{\left(1+\left(R+\epsilon\right)^{2}\right)^{\frac{\theta}{2}}\left(1+\left(R+\epsilon-\delta\right)^{2}\right)^{\frac{\theta}{2}}}\right)}{\epsilon^{2}\left(2R+3\epsilon\right)}I_{d}.$
(E.30)
Taking the limit as $\delta\rightarrow 0^{+}$, we obtain that for
$R+\epsilon<\left\|x\right\|<R+2\epsilon$, $\nabla^{2}\left(\hat{U}(\
x)-g\left(x\right)\right)$ is positive semi-definite; hence it is positive
semi-definite on the entire $\mathbb{R}^{d}$, or $\hat{U}(\ x)-g\left(x\right)$ is
convex on $\mathbb{R}^{d}$. From (E.16), we know that for
$R+\epsilon<\left\|x\right\|<R+2\epsilon$,
$\displaystyle\inf_{\bar{x}\in\mathbb{B}\left(0,R+2\epsilon\right)\setminus\mathbb{B}(0,R)}\tilde{U}(\bar{x})$
$\displaystyle\leq\hat{U}(\
x)-g\left(x\right)\leq\sup_{\bar{x}\in\mathbb{B}\left(0,R+2\epsilon\right)\setminus\mathbb{B}(0,R)}\tilde{U}(\bar{x}).$
(E.31)
Therefore,
$\displaystyle\sup\left(\hat{U}(\ x)-U(\ x)\right)-\inf\left(\hat{U}(\ x)-U(\
x)\right)$ $\displaystyle=\sup\left(\hat{U}(\ x)-g\left(x\right)-\tilde{U}(\
x)\right)-\inf\left(\hat{U}(\ x)-g\left(x\right)-\tilde{U}(\ x)\right)$ (E.32)
$\displaystyle\leq
2\left(\sup_{\bar{x}\in\mathbb{B}\left(0,R+2\epsilon\right)\setminus\mathbb{B}(0,R)}\tilde{U}(\bar{x})-\inf_{\bar{x}\in\mathbb{B}\left(0,R+2\epsilon\right)\setminus\mathbb{B}(0,R)}\tilde{U}(\bar{x})\right)$
$\displaystyle\leq
2\left(\sup_{\bar{x}\in\mathbb{B}\left(0,R+2\epsilon\right)}\tilde{U}(\bar{x})-\inf_{\bar{x}\in\mathbb{B}\left(0,R+2\epsilon\right)}\tilde{U}(\bar{x})\right).$
(E.33)
Since $U$ is $\left(\alpha,\ell\right)$-weakly smooth and $\nabla U(0)=0$, we
deduce
$\displaystyle\left|U(\ x)-U(0)\right|$ $\displaystyle=\left|U(\ x)-U(0)-\
\left\langle x,\nabla U(0)\right\rangle\right|$
$\displaystyle\leq\sum_{i}\frac{L_{i}}{1+\alpha_{i}}\left\|x\right\|^{1+\alpha_{i}}$
$\displaystyle\leq\sum_{i}\frac{L_{i}}{1+\alpha_{i}}\left(R+2\epsilon\right)^{1+\alpha_{i}}$
$\displaystyle\leq\sum_{i}L_{i}R^{1+\alpha_{i}}$ (E.34)
and
$\displaystyle\left|g(\ x)\right|$
$\displaystyle=\left|\frac{\mu}{2\left(2-\theta\right)}\
\left(1+\left\|x\right\|^{2}\right)^{1-\frac{\theta}{2}}\right|$
$\displaystyle\leq\frac{\mu}{2\left(2-\theta\right)}\
\left(1+\left(R+2\epsilon\right)^{2}\right)^{1-\frac{\theta}{2}}$
$\displaystyle\leq\frac{\mu}{\left(2-\theta\right)}\ R^{2-\theta}.$ (E.35)
So for all $\left\|x\right\|\leq R+2\epsilon$, with $\epsilon$
sufficiently small,
$\displaystyle\sup_{\bar{x}\in\mathbb{B}\left(0,R+2\epsilon\right)}\tilde{U}(\bar{x})-\inf_{\bar{x}\in\mathbb{B}\left(0,R+2\epsilon\right)}$
$\displaystyle\tilde{U}(\bar{x})\leq\sum_{i}L_{i}R^{1+\alpha_{i}}+\frac{2\mu}{\left(2-\theta\right)}\
R^{2-\theta}.$
As a result, we get
$\displaystyle\sup\left(\hat{U}(\ x)-U(\ x)\right)-\inf$
$\displaystyle\left(\hat{U}(\ x)-U(\ x)\right)\leq
2\sum_{i}L_{i}R^{1+\alpha_{i}}+\frac{4\mu}{\left(2-\theta\right)}\
R^{2-\theta}.$
∎
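The mollification step (E.12)-(E.13) at the heart of this construction can be made concrete in one dimension. The sketch below is not part of the proof; the non-smooth convex potential $V(x)=|x|$ and the grid resolutions are illustrative assumptions. It normalizes the bump function $\varphi$ numerically and evaluates $\tilde{V}=V\ast\varphi_{\delta}$ by a Riemann sum.

```python
# A one-dimensional numerical sketch of the mollification step (E.12)-(E.13)
# -- not from the paper.  The bump function phi is normalized numerically,
# phi_delta(x) = phi(x/delta)/delta, and the smoothing
# V_tilde(x) = int V(x - y) phi_delta(y) dy is computed by a Riemann sum.
# The non-smooth convex potential V(x) = |x| is an illustrative assumption.
import numpy as np

def phi(x):
    out = np.zeros_like(x)
    inside = np.abs(x) < 1.0
    out[inside] = np.exp(-1.0 / (1.0 - x[inside] ** 2))
    return out

ys = np.linspace(-1.0, 1.0, 2001)
dy = ys[1] - ys[0]
C = 1.0 / (phi(ys).sum() * dy)           # normalizing constant of (E.12)

V = lambda x: np.abs(x)                  # convex, non-smooth at 0

def V_tilde(x, delta=0.2):
    y = np.linspace(-delta, delta, 2001)
    w = C * phi(y / delta) / delta       # phi_delta, supported on [-delta, delta]
    return np.sum(V(x - y) * w) * (y[1] - y[0])

for x in (0.0, 0.1, 0.5):
    print(f"V({x}) = {V(x):.4f}, V_tilde({x}) = {V_tilde(x):.4f}")
```

For $|x|>\delta$ the potential is linear on the support of $\varphi_{\delta}$, so the convolution leaves $V$ unchanged there, while near $0$ it rounds the corner; this is the behavior the blending in (E.20) exploits.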
###### Remark E.1.
When $\theta=0,$ $\left(\mu,\theta\right)$-degenerated convexity outside the
ball is equivalent to $\mu$-strong convexity outside the ball, and we achieve
a result for this strongly convex setting similar to [24], but for
$\left(\alpha,\ell\right)$-weakly smooth potentials instead of smooth ones.
The constant could be improved by a factor of $2$ if we take $\epsilon$
arbitrarily small.
### E.3 Proof of Lemma 4.3
###### Lemma E.3.
For $U$ satisfying $\gamma-$Poincaré, $\alpha$-mixture weakly smooth with
$\alpha_{N}=1$ and $2-$dissipative, there exists $\breve{U}\in
C^{1}(\mathbb{R}^{d})$ with a Hessian that exists everywhere on
$\mathbb{R}^{d}$, and $\breve{U}$ is log-Sobolev on $\mathbb{R}^{d}$ such that
$\sup\left(\breve{U}(\ x)-U(\ x)\right)-\inf\left(\breve{U}(\ x)-U(\
x)\right)\leq 2\sum_{i}L_{i}R^{1+\alpha_{i}}+4L_{N}R^{2}+4LR^{1+\alpha}.$
(E.36)
###### Proof.
First, given $R>0,$ let
$\overline{U}(\mathrm{x}):=U(\mathrm{x})+\frac{L_{N}+\lambda_{0}}{2}\left\|x\right\|^{2}$
with $\lambda_{0}=\frac{2L}{R^{1-\alpha}}$. We obtain the following property:
$\displaystyle\left\langle\nabla\overline{U}(\mathrm{x})-\nabla\overline{U}(\mathrm{y}),x-y\right\rangle$
$\displaystyle=\left\langle\nabla\left(U(\mathrm{x})+\frac{L_{N}+\lambda_{0}}{2}\left\|x\right\|^{2}\right)-\nabla\left(U(\mathrm{y})+\frac{L_{N}+\lambda_{0}}{2}\left\|y\right\|^{2}\right),x-y\right\rangle$
(E.37) $\displaystyle=\left\langle\nabla U(\mathrm{x})-\nabla
U(\mathrm{y})+(L_{N}+\lambda_{0})\left(x-y\right),x-y\right\rangle$
$\displaystyle\stackrel{{\scriptstyle
i}}{{\geq}}-\sum_{i<N}L_{i}\left\|x-y\right\|^{1+\alpha}+\lambda_{0}\left\|x-y\right\|^{2}$
$\displaystyle\geq\frac{\lambda_{0}}{2}\left\|x-y\right\|^{2}\,for\,\left\|x-y\right\|\geq\left(\frac{NL}{\lambda_{0}}\right)^{\frac{1}{1-\alpha_{1}}}=R,$
(E.38)
where $(i)$ follows from Assumption 2.2. This implies that
$\overline{U}(\mathrm{x})$ is $\frac{\lambda_{0}}{2}$-strongly convex outside
the ball $B_{R}=\left\\{x:\left\|x\right\|\leq R\right\\}$. Though
$\overline{U}(\mathrm{x})$ does not satisfy the assumptions of Lemma 4.2
exactly, with some additional verification we can still apply Lemma 4.2 to
derive the result. We sketch the proof as follows. There exists $\hat{U}\in
C^{1}(\mathbb{R}^{d})$ with a Hessian that exists everywhere on $\mathbb{R}^{d}$,
$\displaystyle\hat{U}(\
x)-\frac{\lambda_{0}}{4}\left\|x\right\|^{2}=\left\\{\begin{array}[]{l}\tilde{\overline{U}}(\
x),\ \left\|x\right\|\geq R+2\epsilon\\\ \alpha(\ x)\tilde{\overline{U}}(\
x)+(1-\alpha(\ x))\tilde{V}(\ x),\ R+\epsilon<\left\|x\right\|<R+2\epsilon\\\
\tilde{V}(\ x),\ \left\|x\right\|\leq R+\epsilon\end{array}\right.$ (E.42)
where $\alpha(\ x)$ is defined as before. Both $\tilde{\overline{U}}(\ x)$ and
$\tilde{V}(\ x)$ are convex and smooth in
$\mathbb{R}^{d}\setminus\mathbb{B}\left(0,R\right)$ and for
$R+\epsilon<\left\|x\right\|<R+2\epsilon$, $\left\|y\right\|\leq\delta$,
$\displaystyle\nabla^{2}\left(\hat{U}(\
x)-\frac{\lambda_{0}}{4}\left\|x\right\|^{2}\right)$
$\displaystyle\succeq-\nabla^{2}\alpha(\ x)\left(\tilde{V}(\
x)-\tilde{\overline{U}}(\ x)\right)-2\nabla\alpha(\ x)\left(\nabla\tilde{V}(\
x)-\nabla\tilde{\overline{U}}(\ x)\right)^{T}.$ (E.43)
In this case, we have
$\displaystyle\left\|\nabla\tilde{V}(\ x)-\nabla\tilde{\overline{U}}(\
x)\right\|$ $\displaystyle=\left\|\nabla\int\left(\overline{U}(\ x-\
y)-\overline{U}(\ x)\right)\varphi_{\delta}(\ y)dy\right\|$
$\displaystyle\stackrel{{{}_{1}}}{{\leq}}\left\|\nabla\int\left(U(\ x-\ y)-U(\
x)\right)\varphi_{\delta}(\ y)dy\right\|$
$\displaystyle+\lambda_{0}\int\left\|y\right\|\varphi_{\delta}(\ y)dy$
$\displaystyle\leq\left\|\int\left(\nabla U(\ x-\ y)-\nabla U(\
x)\right)\varphi_{\delta}(\ y)dy\right\|+\lambda_{0}\delta$
$\displaystyle\leq\sum_{i}L_{i}\delta^{\alpha_{i}}+\lambda_{0}\delta,$ (E.44)
where $1$ holds by the triangle inequality and the last line follows from the
$\left(\alpha,\ell\right)$-weakly smooth assumption, while
$\displaystyle\left|\tilde{\overline{U}}(\mathrm{x})-\tilde{\overline{U}}(x-\mathrm{y})\right|$
$\displaystyle\stackrel{{{}_{1}}}{{\leq}}\left|\overline{U}(\mathrm{x})-\overline{U}(x-\mathrm{y})\right|+\left|\frac{L+\lambda_{0}}{2}\left\|x\right\|^{2}-\frac{L+\lambda_{0}}{2}\left\|x-y\right\|^{2}\right|$
$\displaystyle\stackrel{{{}_{2}}}{{\leq}}\left\\{\left\langle\nabla
U(\mathrm{x-y}),\mathrm{y}\right\rangle\vee\left\langle\nabla
U(\mathrm{x}),\mathrm{-y}\right\rangle\right\\}+\sum_{i}\frac{L_{i}}{1+\alpha_{i}}\left\|y\right\|^{\alpha_{i}+1}$
$\displaystyle+\frac{L_{N}+\lambda_{0}}{2}\left|\left\|x\right\|^{2}-\left\|x-y\right\|^{2}\right|$
(E.45) $\displaystyle\mathrm{\leq}\left\\{\left\langle\nabla
U(\mathrm{x-y}),\mathrm{y}\right\rangle\vee\left\langle\nabla
U(\mathrm{x}),\mathrm{-y}\right\rangle\right\\}+\sum_{i}\frac{L_{i}}{1+\alpha_{i}}\left\|y\right\|^{\alpha_{i}+1}$
$\displaystyle+\frac{L_{N}+\lambda_{0}}{2}\left(\left\|x\right\|-\left\|x-y\right\|\right)\left(\left\|x\right\|+\left\|x-y\right\|\right)$
(E.46)
$\displaystyle\leq\left\\{\left(\sum_{i}L_{i}\left\|\mathrm{x-y}\right\|^{\alpha_{i}}\right)\left\|y\right\|\vee\left(\sum_{i}L_{i}\left\|\mathrm{x}\right\|^{\alpha_{i}}\right)\left\|y\right\|\right\\}$
$\displaystyle+\sum_{i}\frac{L_{i}}{1+\alpha_{i}}\left\|y\right\|^{\alpha_{i}+1}+\frac{L_{N}+\lambda_{0}}{2}\left\|y\right\|\mathrm{\max}\left\\{\left\|\mathrm{x-y}\right\|,\left\|\mathrm{x}\right\|\right\\}$
(E.47)
$\displaystyle\leq\sum_{i}L_{i}\left(R+2\epsilon+\delta\right)^{\alpha_{i}}\delta$
(E.48)
$\displaystyle+\sum_{i}\frac{L_{i}}{1+\alpha_{i}}\delta^{\alpha_{i}+1}+\frac{L_{N}+\lambda_{0}}{2}\left(R+2\epsilon+\delta\right)\delta,$
(E.49)
where $1$ is due to the triangle inequality, $2$ follows from Assumption 1,
and the last line holds by plugging in the bounds on $\left\|x\right\|$ and
$\left\|y\right\|$. Taking the limit as
$\delta\rightarrow 0^{+},$ and for sufficiently small $\epsilon$, we obtain
that $\hat{U}(\ x)-\frac{\lambda_{0}}{4}\left\|x\right\|^{2}$ is convex on all
of $\mathbb{R}^{d}$, i.e. $\hat{U}(\ x)$ is $\frac{\lambda_{0}}{2}$-strongly
convex. By definition of $\overline{U}$, for
$R+\epsilon<\left\|x\right\|<R+2\epsilon$ we obtain
$\displaystyle\left|\overline{U}(\ x)-\overline{U}(0)\right|$
$\displaystyle\leq\left|U(\ x)-U(0)-\ \left\langle x,\nabla
U(0)\right\rangle\right|+\frac{L_{N}+\lambda_{0}}{2}\left\|x\right\|^{2}$
$\displaystyle\leq\sum_{i}\frac{L_{i}}{1+\alpha_{i}}\left\|x\right\|^{\alpha_{i}+1}+\frac{L_{N}+\lambda_{0}}{2}\left\|x\right\|^{2}$
$\displaystyle\leq\sum_{i}\frac{L_{i}}{1+\alpha_{i}}\left(R+2\epsilon+\delta\right)^{\alpha_{i}+1}+\frac{L_{N}+\lambda_{0}}{2}\left(R+2\epsilon+\delta\right)^{2}$
$\displaystyle\leq\sum_{i}L_{i}R^{1+\alpha_{i}}+\left(L_{N}+\lambda_{0}\right)R^{2}.$
(E.50)
As a result, from Lemma 4.2 we deduce
$\displaystyle\sup\left(\hat{U}(\ x)-\overline{U}(\ x)\right)$
$\displaystyle-\inf\left(\hat{U}(\ x)-\overline{U}(\ x)\right)\leq
2\sum_{i}L_{i}R^{1+\alpha_{i}}+2\left(L_{N}+\lambda_{0}\right)R^{2}.$ (E.51)
Let
$\breve{U}\left(x\right)=\hat{U}\left(x\right)-\left(\frac{L_{N}}{2}+\frac{\lambda_{0}}{4}\right)\left\|x\right\|^{2}$;
then for $\left\|x\right\|>R+2\epsilon+\delta$,
$\hat{U}\left(x\right)=\overline{U}\left(x\right)$, so
$\breve{U}\left(x\right)=U\left(x\right)$. For $\left\|x\right\|\leq
R+2\epsilon+\delta$, we have
$\displaystyle\sup\left(\breve{U}(x)-U(x)\right)-\inf\left(\breve{U}(\
x)-U(x)\right)$
$\displaystyle\leq\sup\left(\hat{U}(x)+\frac{L_{N}+\lambda_{0}}{2}\left\|x\right\|^{2}-\overline{U}(x)\right)-\inf\left(\hat{U}(x)+\frac{L_{N}+\lambda_{0}}{2}\left\|x\right\|^{2}-\overline{U}(x)\right)$
$\displaystyle\leq\sup\left(\hat{U}(x)-\overline{U}(x)\right)-\inf\left(\hat{U}(x)-\overline{U}(x)\right)+\left(L_{N}+\lambda_{0}\right)\left(R+2\epsilon+\delta\right)^{2}$
$\displaystyle\leq
2\sum_{i}L_{i}R^{1+\alpha_{i}}+2\left(L_{N}+\lambda_{0}\right)R^{2}+2\left(L_{N}+\lambda_{0}\right)R^{2}$
$\displaystyle\leq 2\sum_{i}L_{i}R^{1+\alpha_{i}}+4L_{N}R^{2}+4LR^{1+\alpha}.$
(E.52)
So for every $x\in\mathbb{R}^{d},$
$\sup\left(\breve{U}(x)-U(x)\right)-\inf\left(\breve{U}(\ x)-U(x)\right)\leq
2\sum_{i}L_{i}R^{1+\alpha_{i}}+4L_{N}R^{2}+4LR^{1+\alpha}.$
Since $U(x)$ is $PI(\gamma)$, using [21]’s Lemma 1.2 we have that $\breve{U}(\
x)$ is Poincaré with constant
$\gamma_{1}=\gamma
e^{-4\left(2\sum_{i}L_{i}R^{1+\alpha_{i}}+4L_{N}R^{2}+4LR^{1+\alpha}\right)}.$
On the other hand, since $\hat{U}\left(x\right)$ is
$\frac{\lambda_{0}}{2}$-strongly convex, we know that
$\nabla^{2}\breve{U}\left(x\right)=\nabla^{2}\hat{U}\left(x\right)-\left(L_{N}+\frac{\lambda_{0}}{2}\right)I\succeq-LI$,
that is, $\nabla^{2}\breve{U}\left(x\right)$ is lower
bounded by $-LI$. In addition, for $\left\|x\right\|>R+2\epsilon+\delta$ from
the $2-$dissipativity assumption, we have, for some $a,b>0$,
$\left\langle\nabla\breve{U}(\mathrm{x}),x\right\rangle\geq
a\left\|x\right\|^{2}-b$, while for $\left\|x\right\|\leq R+2\epsilon+\delta$
$\displaystyle\left\langle\nabla\breve{U}\left(x\right),x\right\rangle$
$\displaystyle\geq\left\langle-\nabla\left(\left(\frac{L_{N}}{2}+\frac{\lambda_{0}}{4}\right)\left\|x\right\|^{2}\right),x\right\rangle$
$\displaystyle\geq-\left(L_{N}+\frac{\lambda_{0}}{2}\right)\left\|x\right\|^{2}$
$\displaystyle\geq-\left(L_{N}+\frac{\lambda_{0}}{2}\right)R^{2}$
$\displaystyle\geq
a\left\|x\right\|^{2}-\left(L_{N}+\frac{\lambda_{0}}{2}\right)R^{2}-aR^{2},$
so for every $x\in\mathbb{R}^{d},$
$\left\langle\nabla\breve{U}(\mathrm{x}),x\right\rangle\geq
a\left\|x\right\|^{2}-\left(b+\left(L_{N}+\frac{\lambda_{0}}{2}\right)R^{2}+aR^{2}\right).$
We choose $W=e^{a_{1}\left\|x\right\|^{2}}$ and $V=a_{1}\left\|x\right\|^{2}$
with $0<a_{1}=\frac{a}{4}$. One sees that $W$ satisfies the Lyapunov
inequality
$\displaystyle\mathcal{L}W$
$\displaystyle=\left(2a_{1}d+4a_{1}^{2}\left\|x\right\|^{2}-2a_{1}\left\langle\nabla
\breve{U}(\mathrm{x}),x\right\rangle\right)W$
$\displaystyle\leq\left(2a_{1}d+4a_{1}^{2}\left\|x\right\|^{2}-2a_{1}a\left\|x\right\|^{2}+2a_{1}\left(b+\left(L_{N}+\frac{\lambda_{0}}{2}\right)R^{2}+aR^{2}\right)\right)W$
$\displaystyle\leq\left(-\frac{a^{2}}{2}\left\|x\right\|^{2}+\frac{a}{2}\left(b+\left(L_{N}+\frac{\lambda_{0}}{2}\right)R^{2}+aR^{2}+d\right)\right)W.$
(E.53)
By [5]’s Theorem 1.9, $\breve{U}\left(x\right)$ satisfies a defective
log-Sobolev inequality. In addition, by Rothaus’ lemma, a defective
log-Sobolev inequality together with $PI(\gamma_{1})$ implies the log-Sobolev
inequality with log-Sobolev constant
$\gamma_{2}=\frac{2}{A+(B+2)\frac{1}{\gamma_{1}}},$ where
$\displaystyle A$ $\displaystyle=(1-\frac{L}{2})\frac{8}{a^{2}}+\zeta,$ (E.54)
$\displaystyle B$
$\displaystyle=2\left[\frac{2\left(\left(b+4\left(L_{N}+\frac{\lambda_{0}}{4}\right)R^{2}+aR^{2}\right)+d\right)}{a}+M_{2}\right](1-\frac{L}{2}+\frac{1}{\zeta}).$
(E.55)
where $M_{2}=\int\left\|x\right\|^{2}e^{-\breve{U}(x)}dx$. But it is well
known from Lemma 10 that $M_{2}=O(d)$, so $\frac{1}{\gamma_{2}}$ is just
$O(d)$. This concludes the proof. ∎
### E.4 Proof of Theorem 4.1
###### Theorem E.1.
Suppose $\pi$ is $\gamma-$Poincaré, $\alpha$-mixture weakly smooth with
$\alpha_{N}=1$, and satisfies $2-$dissipativity (i.e. $\left\langle\nabla
U(x),x\right\rangle\geq a\left\|x\right\|^{2}-b$) for some $a,b>0$. Then for
any $x_{0}\sim p_{0}$ with $H(p_{0}|\pi)=C_{0}<\infty$, the iterates
$x_{k}\sim p_{k}$ of LMC with step size $\eta\leq
1\wedge\frac{1}{4\gamma_{3}}\wedge\left(\frac{\gamma_{3}}{16L^{1+\alpha}}\right)^{\frac{1}{\alpha}}$ satisfy
$\displaystyle H(p_{k}|\pi)\leq e^{-\gamma_{3}\eta
k}H(p_{0}|\pi)+\frac{8\eta^{\alpha}D_{3}}{3\gamma_{3}},$ (E.56)
where $D_{3}$ is defined as in equation (3.8) and
$\displaystyle M_{2}$
$\displaystyle=\int\left\|x\right\|^{2}e^{-\breve{U}(x)}dx=O(d)$ (E.57)
$\displaystyle\zeta$
$\displaystyle=\sqrt{2\left[\frac{2\left(b+\left(L+\frac{\lambda_{0}}{2}\right)R^{2}+aR^{2}+d\right)}{a}+M_{2}\right]\frac{e^{4\left(2\sum_{i}L_{i}R^{1+\alpha_{i}}+4L_{N}R^{2}+4LR^{1+\alpha}\right)}}{\gamma}}$
(E.58) $\displaystyle A$ $\displaystyle=(1-\frac{L}{2})\frac{8}{a^{2}}+\zeta,$
(E.59) $\displaystyle B$
$\displaystyle=2\left[\frac{2\left(\left(b+4\left(L+\frac{\lambda_{0}}{4}\right)R^{2}+aR^{2}\right)+d\right)}{a}+M_{2}\right](1-\frac{L}{2}+\frac{1}{\zeta}),$
(E.60) $\displaystyle\gamma_{3}$ $\displaystyle=\frac{2\gamma
e^{-\left(2\sum_{i}L_{i}R^{1+\alpha_{i}}+4L_{N}R^{2}+4LR^{1+\alpha}\right)}}{A\gamma+(B+2)e^{4\left(2\sum_{i}L_{i}R^{1+\alpha_{i}}+4L_{N}R^{2}+4LR^{1+\alpha}\right)}}.$
Then, for any $\epsilon>0$, to achieve $H(p_{k}|\pi)<\epsilon$, it suffices to
run ULA with step size $\eta\leq
1\wedge\frac{1}{4\gamma_{3}}\wedge\left(\frac{\gamma_{3}}{16L^{1+\alpha}}\right)^{\frac{1}{\alpha}}\wedge\left(\frac{3\epsilon\gamma_{3}}{16D_{3}}\right)^{\frac{1}{\alpha}}$
for
$k\geq\frac{1}{\gamma_{3}\eta}\log\frac{2H\left(p_{0}|\pi\right)}{\epsilon}$
iterations.
###### Proof.
From Lemma E.3, we can optimize over $\zeta$ and get
$\zeta=\sqrt{2\left[\frac{2\left(b+\left(L+\frac{\lambda_{0}}{2}\right)R^{2}+aR^{2}+d\right)}{a}+M_{2}\right]\frac{1}{\gamma_{1}}}.$
By the Holley-Stroock perturbation theorem [18], $U(x)$ is log-Sobolev on
$\mathbb{R}^{d}$ with constant
$\gamma_{3}=\frac{2\gamma
e^{-\left(2\sum_{i}L_{i}R^{1+\alpha_{i}}+4L_{N}R^{2}+4LR^{1+\alpha}\right)}}{A\gamma+(B+2)e^{4\left(2\sum_{i}L_{i}R^{1+\alpha_{i}}+4L_{N}R^{2}+4LR^{1+\alpha}\right)}}.$
Applying Theorem 3.1, we get the desired result. ∎
### E.5 Proof of Lemma 4.1
###### Lemma E.4.
If $U$ satisfies Assumption 2.4, then
$U(x)\geq\frac{a}{2\beta}\|x\|^{\beta}+U(0)-\sum_{i}\frac{L_{i}}{\alpha_{i}+1}R^{\alpha_{i}+1}-b.$
(E.61)
###### Proof.
Using the technique of [16], let
$R=\left(\frac{2b}{a}\right)^{\frac{1}{\beta}}$; we first lower bound
$U\left(x\right)$ when $\left\|x\right\|\leq R$:
$\displaystyle U(x)$ $\displaystyle=U(0)+\int_{0}^{1}\left\langle\nabla
U(tx),\ x\right\rangle dt$ $\displaystyle\geq U(0)-\int_{0}^{1}\left\|\nabla
U(tx)\right\|\left\|x\right\|dt$ $\displaystyle\geq
U(0)-\sum_{i}L_{i}\left\|x\right\|^{\alpha_{i}+1}\int_{0}^{1}t^{\alpha_{i}}dt$
$\displaystyle\geq
U(0)-\sum_{i}\frac{L_{i}}{\alpha_{i}+1}\left\|x\right\|^{\alpha_{i}+1}$
$\displaystyle\geq U(0)-\sum_{i}\frac{L_{i}}{\alpha_{i}+1}R^{\alpha_{i}+1}.$
(E.62)
For $\left\|x\right\|>R$, we can lower bound $U$ as follows.
$\displaystyle U(x)$
$\displaystyle=U(0)+\int_{0}^{\frac{R}{\|x\|}}\left\langle\nabla U(tx),\
x\right\rangle dt+\int_{\frac{R}{\|x\|}}^{1}\left\langle\nabla U(tx),\
x\right\rangle dt$ $\displaystyle\geq
U(0)-\int_{0}^{\frac{R}{\left\|x\right\|}}\left\|\nabla
U(tx)\right\|\left\|x\right\|dt+\int_{\frac{R}{\left\|x\right\|}}^{1}\frac{1}{t}\left\langle\nabla
U(tx),\ tx\right\rangle dt$ $\displaystyle\geq
U(0)-\left\|x\right\|\int_{0}^{\frac{R}{\left\|x\right\|}}\sum_{i}L_{i}\left\|tx\right\|^{\alpha_{i}}dt+\int_{\frac{R}{\left\|x\right\|}}^{1}\frac{1}{t}\left(a\left\|tx\right\|^{\beta}-b\right)dt$
$\displaystyle\stackrel{{{}_{1}}}{{\geq}}U(0)-\sum_{i}L_{i}\left\|x\right\|^{\alpha_{i}+1}\int_{0}^{\frac{R}{\left\|x\right\|}}t^{\alpha_{i}}dt\
+\frac{1}{2}\int_{\frac{R}{\left\|x\right\|}}^{1}\frac{1}{t}a\left\|tx\right\|^{\beta}dt$
$\displaystyle\stackrel{{{}_{2}}}{{\geq}}U(0)-\sum_{i}\frac{L_{i}}{\alpha_{i}+1}\left\|x\right\|^{\alpha_{i}+1}\frac{R^{\alpha_{i}+1}}{\left\|x\right\|^{\alpha_{i}+1}}+\frac{a}{2}\left\|x\right\|^{\beta}\int_{\frac{R}{\left\|x\right\|}}^{1}t^{\beta-1}dt$
$\displaystyle\geq
U(0)-\sum_{i}\frac{L_{i}}{\alpha_{i}+1}R^{\alpha_{i}+1}+\frac{a}{2\beta}\left\|x\right\|^{\beta}\left(1-\frac{R^{\beta}}{\left\|x\right\|^{\beta}}\right)$
$\displaystyle\geq\frac{a}{2\beta}\left\|x\right\|^{\beta}+U(0)-\sum_{i}\frac{L_{i}}{\alpha_{i}+1}R^{\alpha_{i}+1}-b,$
(E.63)
where $1$ follows from Assumption 2.4 and $2$ uses the fact that if
$t{\displaystyle\geq\frac{R}{\left\|x\right\|}}$ then ${\displaystyle
a\left\|tx\right\|^{\beta}-b\geq\frac{a}{2}\left\|tx\right\|^{\beta}}.$ Now,
since $\frac{a}{2\beta}\left\|x\right\|^{\beta}\leq b$ for
$\left\|x\right\|\leq R$, combining this with the inequality for
$\left\|x\right\|\leq R$ gives
$U(x)\geq\frac{a}{2\beta}\left\|x\right\|^{\beta}+U(0)-\sum_{i}\frac{L_{i}}{\alpha_{i}+1}R^{\alpha_{i}+1}-b.$
(E.64)
∎
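As a concrete check of (E.61), not part of the proof, consider $U(x)=\frac{1}{2}\left\|x\right\|^{2}+\cos(\left\|x\right\|)$: here $\left\langle\nabla U(x),x\right\rangle=\left\|x\right\|^{2}-\left\|x\right\|\sin(\left\|x\right\|)\geq\frac{1}{2}\left\|x\right\|^{2}-\frac{1}{2}$, so the dissipativity part of Assumption 2.4 holds with $a=b=\frac{1}{2}$ and $\beta=2$, and the gradient is Lipschitz ($N=1$, $\alpha_{1}=1$) with the conservative constant $L_{1}=3$. All of these choices are illustrative assumptions.

```python
# A quick numeric check of the lower bound (E.61) -- not from the paper -- for
# the toy potential U(x) = ||x||^2/2 + cos(||x||).  Here <grad U(x), x> =
# ||x||^2 - ||x|| sin(||x||) >= 0.5 ||x||^2 - 0.5, so Assumption 2.4 holds
# with a = b = 0.5, beta = 2; the gradient is Lipschitz (N = 1, alpha_1 = 1)
# and we take the conservative constant L_1 = 3 (all choices are assumptions).
import numpy as np

a, b, beta, L1, alpha1 = 0.5, 0.5, 2.0, 3.0, 1.0
R = (2.0 * b / a) ** (1.0 / beta)        # R = sqrt(2), as in the proof

U = lambda r: r ** 2 / 2.0 + np.cos(r)   # radial profile, r = ||x||
lower = lambda r: (a / (2.0 * beta)) * r ** beta + U(0.0) \
    - (L1 / (alpha1 + 1.0)) * R ** (alpha1 + 1.0) - b

r = np.linspace(0.0, 20.0, 2001)
assert np.all(U(r) >= lower(r))
print("E.61 holds on all sampled radii; slack at r = 0:", U(0.0) - lower(0.0))
```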
### E.6 Proof of Lemma 5
###### Lemma E.5.
Assume that $U$ satisfies Assumption 2.4. Then for $\pi=e^{-U}$ and any
distribution $\rho$, we have
$\frac{4\beta}{a}\left[\mathrm{H}(\rho|\pi)+\tilde{d}+\tilde{\mu}\right]\geq\mathrm{E}_{\rho}\left[\left\|x\right\|{}^{\beta}\right],$
(E.65)
where
$\displaystyle\tilde{\mu}$
$\displaystyle=\frac{1}{2}\log(\frac{2}{\beta})+\sum_{i}\frac{L_{i}}{\alpha_{i}+1}\left(\frac{2b}{a}\right)^{\frac{\alpha_{i}+1}{\beta}}+b+|U(0)|,$
(E.66) $\displaystyle\tilde{d}$
$\displaystyle=\frac{d}{\beta}\left[\frac{\beta}{2}\log\left(\pi\right)+\log\left(\frac{4\beta}{a}\right)+(1-\frac{\beta}{2})\log(\frac{d}{2e})\right].$
(E.67)
###### Proof.
Let $q(x)=e^{\frac{a}{4\beta}\left\|x\right\|{}^{\beta}-U(x)}$ and $C_{q}=\int
e^{\frac{a}{4\beta}\left\|x\right\|{}^{\beta}-U(x)}dx$. First, we need to
bound $\log C_{q}$. Using Lemma E.4, we have
$\displaystyle U(x)$
$\displaystyle\geq\frac{a}{2\beta}\left\|x\right\|^{\beta}+U(0)-\sum_{i}\frac{L_{i}}{\alpha_{i}+1}\left(\frac{2b}{a}\right)^{\frac{\alpha_{i}+1}{\beta}}-b.$
(E.68)
Regrouping the terms and integrating both sides gives
$\displaystyle\int e^{\frac{a}{4\beta}\left\|x\right\|{}^{\beta}-U(x)}dx\leq
e^{-U(0)+\sum_{i}\frac{L_{i}}{\alpha_{i}+1}\left(\frac{2b}{a}\right)^{\frac{\alpha_{i}+1}{\beta}}+b}\int
e^{-\frac{a}{4\beta}\left\|x\right\|{}^{\beta}}dx$
$\displaystyle=\frac{2\pi^{d/2}}{\beta}\left(\frac{4\beta}{a}\right)^{\frac{d}{\beta}}e^{-U(0)+\sum_{i}\frac{L_{i}}{\alpha_{i}+1}\left(\frac{2b}{a}\right)^{\frac{\alpha_{i}+1}{\beta}}+b}\frac{\Gamma\left(\frac{d}{\beta}\right)}{\Gamma\left(\frac{d}{2}\right)}$
$\displaystyle\leq\frac{2\pi^{d/2}}{\beta}\left(\frac{4\beta}{a}\right)^{\frac{d}{\beta}}\frac{\left(\frac{d}{\beta}\right)^{\frac{d}{\beta}-\frac{1}{2}}}{\left(\frac{d}{2}\right)^{\frac{d}{2}-\frac{1}{2}}}e^{\frac{d}{2}-\frac{d}{\beta}}e^{-U(0)+\sum_{i}\frac{L_{i}}{\alpha_{i}+1}\left(\frac{2b}{a}\right)^{\frac{\alpha_{i}+1}{\beta}}+b},$
(E.69)
where the equality on the second line comes from using polar coordinates and
the third line follows from an inequality for the ratio of Gamma functions
[19]. Plugging this back into the previous inequality and taking logs, we
deduce
$\displaystyle{\displaystyle\log(C_{q})}$
$\displaystyle={\displaystyle\log(\int
e^{\frac{a}{4\beta}\left\|x\right\|{}^{\beta}-U(x)}dx)}$
$\displaystyle\leq\frac{d}{2}\log(\pi)+\frac{d}{\beta}\log\left(\frac{4\beta}{a}\right)+(\frac{d}{\beta}-\frac{d}{2})\log(\frac{d}{2e})$
$\displaystyle+(\frac{d}{\beta}+\frac{1}{2})\log(\frac{2}{\beta})+\sum_{i}\frac{L_{i}}{\alpha_{i}+1}\left(\frac{2b}{a}\right)^{\frac{\alpha_{i}+1}{\beta}}+b+|U(0)|$
$\displaystyle\leq\frac{d}{\beta}\left[\frac{\beta}{2}\log\left(\pi\right)+\log\left(\frac{4\beta}{a}\right)+(1-\frac{\beta}{2})\log(\frac{d}{2e})\right]$
$\displaystyle+\frac{1}{2}\log(\frac{2}{\beta})+\sum_{i}\frac{L_{i}}{\alpha_{i}+1}\left(\frac{2b}{a}\right)^{\frac{\alpha_{i}+1}{\beta}}+b+|U(0)|$
$\displaystyle\leq\tilde{d}+\tilde{\mu},$ (E.70)
by the definitions of $\tilde{d}$ and $\tilde{\mu}$. Using this bound on $\log
C_{q}$ we get
$\displaystyle\mathrm{H}(\rho|\pi)$
$\displaystyle=\int\rho\log\frac{\rho}{q/C_{q}}+\int\rho\log\frac{q/C_{q}}{\pi}$
$\displaystyle=\mathrm{H}(\rho|q/C_{q})+\mathrm{E}_{\rho}\left[\log\frac{q/C_{q}}{e^{-U}}\right]$
$\displaystyle\stackrel{{{}_{\left(1\right)}}}{{\geq}}\frac{a}{4\beta}\mathrm{E}_{\rho}\left[\left\|x\right\|{}^{\beta}\right]-\log\left(C_{q}\right)$
(E.71)
$\displaystyle\geq\frac{a}{4\beta}\mathrm{E}_{\rho}\left[\left\|x\right\|{}^{\beta}\right]-\tilde{d}-\tilde{\mu},$
(E.72)
where $\left(1\right)$ follows from the definition of $C_{q}$ and the fact
that relative entropy is always non-negative. Rearranging the terms completes
the proof. ∎
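The simplest instance of this lemma is $\rho=\pi=N(0,I_{d})$, i.e. $U(x)=\frac{1}{2}\left\|x\right\|^{2}+\frac{d}{2}\log(2\pi)$, for which $\left\langle\nabla U(x),x\right\rangle=\left\|x\right\|^{2}$, so Assumption 2.4 holds with $a=1$, $b=0$, $\beta=2$ (hence $R=0$ and the $L_{i}$-term vanishes) and $\mathrm{H}(\rho|\pi)=0$; (E.65) then reduces to $d\leq\frac{4\beta}{a}(\tilde{d}+\tilde{\mu})$. A quick numeric check of this instance, not part of the proof:

```python
# A numeric check -- not from the paper -- of Lemma E.5 for rho = pi = N(0, I_d):
# U(x) = ||x||^2/2 + (d/2) log(2*pi) is 2-dissipative with a = 1, b = 0, so
# R = 0, the sum over L_i drops, H(rho|pi) = 0, and (E.65) reads
#   E_pi[||x||^beta] = d <= (4*beta/a) * (d_tilde + mu_tilde).
import numpy as np

d, a, b, beta = 10, 1.0, 0.0, 2.0
U0 = (d / 2.0) * np.log(2.0 * np.pi)      # |U(0)| for the normalized Gaussian

mu_tilde = 0.5 * np.log(2.0 / beta) + b + U0          # the R-dependent sum is 0
d_tilde = (d / beta) * ((beta / 2.0) * np.log(np.pi)
                        + np.log(4.0 * beta / a)
                        + (1.0 - beta / 2.0) * np.log(d / (2.0 * np.e)))

lhs = float(d)                            # E_{N(0,I)}[||x||^2] = d
rhs = (4.0 * beta / a) * (d_tilde + mu_tilde)
print(f"E[||x||^beta] = {lhs:.1f} <= (4 beta/a)(d_tilde + mu_tilde) = {rhs:.1f}")
```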
###### Theorem E.2.
Suppose $\pi\propto e^{-U}$ with $U$ convex (but possibly not strongly convex)
outside the ball $\mathbb{B}(0,R)$, $\alpha$-mixture weakly smooth with
$\alpha_{N}=1$, and satisfying $2-$dissipativity (i.e. $\left\langle\nabla
U(x),x\right\rangle\geq a\left\|x\right\|^{2}-b$) for some $a,b>0$. Then for
any $x_{0}\sim p_{0}$ with $H(p_{0}|\pi)=C_{0}<\infty$,
the iterates $x_{k}\sim p_{k}$ of LMC with step size $\eta\leq
1\wedge\frac{1}{4\gamma_{3}}\wedge\left(\frac{\gamma_{3}}{16L^{1+\alpha}}\right)^{\frac{1}{\alpha}}$ satisfy
$\displaystyle H(p_{k}|\pi)\leq e^{-\gamma_{3}\eta
k}H(p_{0}|\pi)+\frac{8\eta^{\alpha}D_{3}}{3\gamma_{3}},$ (E.73)
where $D_{3}$ is defined as in equation (3.8) and for some universal constant
$K$,
$\displaystyle M_{2}$
$\displaystyle=\int\left\|x\right\|^{2}e^{-\breve{U}(x)}dx=O(d)$ (E.74)
$\displaystyle\zeta$
$\displaystyle=K\sqrt{64d\left[\frac{2\left(b+\left(L+\frac{\lambda_{0}}{2}\right)R^{2}+aR^{2}+d\right)}{a}+M_{2}\right]\left(\frac{a+b+2aR^{2}+3}{ae^{-4\left(4L_{N}R^{2}+4LR^{1+\alpha}\right)}}\right)}$
(E.75) $\displaystyle A$ $\displaystyle=(1-\frac{L}{2})\frac{8}{a^{2}}+\zeta,$
(E.76) $\displaystyle B$
$\displaystyle=2\left[\frac{2\left(\left(b+4\left(L+\frac{\lambda_{0}}{4}\right)R^{2}+aR^{2}\right)+d\right)}{a}+M_{2}\right](1-\frac{L}{2}+\frac{1}{\zeta}),$
(E.77) $\displaystyle\gamma_{3}$
$\displaystyle=\frac{2e^{-\left(2\sum_{i}L_{i}R^{1+\alpha_{i}}+4L_{N}R^{2}+4LR^{1+\alpha}\right)}}{A+(B+2)32K^{2}d\left(\frac{a+b+2aR^{2}+3}{a}\right)e^{4\left(4L_{N}R^{2}+4LR^{1+\alpha}\right)}}=\frac{1}{O(d)}.$
Then, for any $\epsilon>0$, to achieve $H(p_{k}|\pi)<\epsilon$, it suffices to
run ULA with step size $\eta\leq
1\wedge\frac{1}{4\gamma_{3}}\wedge\left(\frac{\gamma_{3}}{16L^{1+\alpha}}\right)^{\frac{1}{\alpha}}\wedge\left(\frac{3\epsilon\gamma_{3}}{16D_{3}}\right)^{\frac{1}{\alpha}}$
for
$k\geq\frac{1}{\gamma_{3}\eta}\log\frac{2H\left(p_{0}|\pi\right)}{\epsilon}$
iterations.
###### Proof.
Using Lemma 2, there exists $\breve{U}\left(x\right)\in C^{1}(\mathbb{R}^{d})$
whose Hessian exists everywhere on $\mathbb{R}^{d}$, with $\breve{U}$ convex
on $\mathbb{R}^{d}$, such that
$\sup\left(\breve{U}(\ x)-U(\ x)\right)-\inf\left(\breve{U}(\ x)-U(\
x)\right)\leq 2\sum_{i}L_{i}R^{1+\alpha_{i}}.$ (E.78)
We can prove this by two different approaches.
First approach: Since $\breve{U}$ is convex, by Theorem 1.2 of [4],
$\breve{U}$ satisfies Poincaré inequality with constant
$\displaystyle\gamma$
$\displaystyle\geq\frac{1}{4K^{2}\int\left\|x-E_{\pi}(x)\right\|^{2}\pi\left(x\right)dx}$
$\displaystyle\stackrel{{{}_{1}}}{{\geq}}\frac{1}{8K^{2}\left(E_{\pi}\left(\left\|x\right\|^{2}\right)+\left\|E_{\pi}(x)\right\|^{2}\right)}$
$\displaystyle\stackrel{{\scriptstyle}}{{\geq}}\frac{1}{16K^{2}E_{\pi}\left(\left\|x\right\|^{2}\right)},$
where $K$ is a universal constant, step $1$ follows from Young's inequality,
and the last line is due to Jensen's inequality. In addition, for
$\left\|x\right\|>R+2\epsilon+\delta$, the $2-$dissipativity assumption gives,
for some $a,b>0$,
$\left\langle\nabla\breve{U}(x),x\right\rangle=\left\langle\nabla
U(x),x\right\rangle\geq a\left\|x\right\|^{2}-b$, while for
$\left\|x\right\|\leq R+2\epsilon+\delta$, by convexity of $\breve{U}$,
$\displaystyle\left\langle\nabla\breve{U}(x),x\right\rangle$
$\displaystyle\geq 0$ $\displaystyle\geq
a\left\|x\right\|^{2}-a\left(R+2\epsilon+\delta\right)^{2}$ $\displaystyle\geq
a\left\|x\right\|^{2}-2aR^{2},$
so for every $x\in\mathbb{R}^{d},$
$\left\langle\nabla\breve{U}(x),x\right\rangle\geq
a\left\|x\right\|^{2}-\left(b+2aR^{2}\right).$
Therefore, $\breve{U}(\mathrm{x})$ also satisfies the $2-$dissipativity
condition, which implies
$E_{\breve{\pi}}\left(\left\|x\right\|^{2}\right)\leq
2d\left(\frac{a+b+2aR^{2}+3}{a}\right),$
so the Poincaré constant satisfies
$\gamma\stackrel{{\scriptstyle}}{{\geq}}\frac{1}{32K^{2}d\left(\frac{a+b+2aR^{2}+3}{a}\right)}.$
From [21]’s Lemma 1.2, we have $U$ satisfies Poincaré inequality with constant
$\gamma\geq\frac{1}{32K^{2}d\left(\frac{a+b+2aR^{2}+3}{a}\right)}e^{-4\left(2\sum_{i}L_{i}R^{1+\alpha_{i}}\right)}.$
Now, applying previous section result, we derive the desired result.
Second approach: Combining Lemma F.16 with the $2$-dissipativity assumption,
we get
$\int e^{\frac{a}{8}\left\|x\right\|^{2}-U(x)}dx\leq e^{\left(\tilde{d}+\tilde{\mu}\right)},$ (E.79)
which in turn implies
$\int e^{\frac{a}{8}\left\|x\right\|^{2}-\breve{U}(x)}dx\leq e^{\left(\tilde{d}+\tilde{\mu}\right)+2\sum_{i}L_{i}R^{1+\alpha_{i}}}.$ (E.80)
Let $\mu_{1}=\frac{e^{-\frac{a}{16p}\left\|x\right\|^{2}-\breve{U}(x)}}{\int
e^{-\frac{a}{16p}\left\|x\right\|^{2}-\breve{U}(x)}dx}$ and define
$\mu_{2}=\frac{\mu_{1}e^{\frac{a}{16p}\left\|x\right\|^{2}}}{\int
e^{\frac{a}{16p}\left\|x\right\|^{2}}d\mu_{1}}$. The potential of $\mu_{1}$ is
$\frac{a}{8p}$-strongly convex, so $\mu_{1}$ is log-Sobolev with constant
$\frac{a}{8p}$, and by the Cauchy–Schwarz inequality, we have
$\displaystyle\left\|\frac{d\mu_{2}}{d\mu_{1}}\right\|_{L^{p}\left(\mu_{1}\right)}^{p}$
$\displaystyle=\frac{\int
e^{\frac{a}{16}\left\|x\right\|{}^{2}}d\mu_{1}}{\left(\int
e^{\frac{a}{16p}\left\|x\right\|{}^{2}}d\mu_{1}\right)^{p}}$
$\displaystyle\leq\left(\int
e^{\frac{a}{8}\left\|x\right\|{}^{2}}d\mu_{1}\right)^{\frac{1}{2}}\left(\int
e^{\frac{-a}{16p}\left\|x\right\|{}^{2}}d\mu_{1}\right)^{p}$
$\displaystyle=\left(\frac{\int
e^{\frac{a\left(2p-1\right)}{16p}\left\|x\right\|{}^{2}-\breve{U}(x)}dx}{\int
e^{\frac{-a}{16p}\left\|x\right\|{}^{2}-\breve{U}(x)}dx}\right)^{\frac{1}{2}}\left(\frac{\int
e^{\frac{-a}{8p}\left\|x\right\|{}^{2}-\breve{U}(x)}dx}{\int
e^{\frac{-a}{16p}\left\|x\right\|{}^{2}-\breve{U}(x)}dx}\right)^{p}$ (E.81)
Since
$\displaystyle\left|U(x)-U(0)\right|=\left|U(x)-U(0)-\left\langle x,\nabla U(0)\right\rangle\right|\leq\sum_{i<N}\frac{L_{i}}{1+\alpha_{i}}\left\|x\right\|^{1+\alpha_{i}}+\frac{L_{N}}{2}\left\|x\right\|^{2},$
(E.82)
we have $U(x)\leq\left|U(0)\right|+\sum_{i<N}\frac{L_{i}}{1+\alpha_{i}}\left\|x\right\|^{1+\alpha_{i}}+\frac{L_{N}}{2}\left\|x\right\|^{2}$,
which in turn gives
$\displaystyle\int e^{\frac{-a}{16p}\left\|x\right\|{}^{2}-\breve{U}(x)}dx$
$\displaystyle\geq\int
e^{\frac{-a}{16p}\left\|x\right\|{}^{2}-\left|U(0)\right|-\sum_{i<N}\frac{L_{i}}{1+\alpha_{i}}\left\|x\right\|^{1+\alpha_{i}}-\frac{L_{N}}{2}\left\|x\right\|^{2}-2\sum_{i}L_{i}R^{1+\alpha_{i}}}dx$
$\displaystyle\geq e^{-\left|U(0)\right|-2\sum_{i}L_{i}R^{1+\alpha_{i}}}\int
e^{\frac{-a}{16p}\left\|x\right\|{}^{2}-\sum_{i<N}\frac{L_{i}}{1+\alpha_{i}}\left\|x\right\|^{1+\alpha_{i}}-\frac{L_{N}}{2}\left\|x\right\|^{2}}dx$
$\displaystyle\geq
e^{-\left|U(0)\right|-2\sum_{i}L_{i}R^{1+\alpha_{i}}-\sum_{i<N}\frac{L_{i}}{1+\alpha_{i}}}\int
e^{-\left(\frac{a}{16p}+\sum_{i<N}\frac{L_{i}}{1+\alpha_{i}}+\frac{L_{N}}{2}\right)\left\|x\right\|{}^{2}}dx$
$\displaystyle\geq\frac{\pi^{\frac{d}{2}}}{\left(\frac{a}{16p}+\sum_{i<N}\frac{L_{i}}{1+\alpha_{i}}+\frac{L_{N}}{2}\right)^{\frac{d}{2}}}e^{-\left|U(0)\right|-2\sum_{i}L_{i}R^{1+\alpha_{i}}-\sum_{i<N}\frac{L_{i}}{1+\alpha_{i}}}.$
(E.83)
On the other hand,
$\displaystyle\int e^{\frac{-a}{8p}\left\|x\right\|{}^{2}-\breve{U}(x)}dx$
$\displaystyle\leq\int
e^{\frac{a\left(2p-1\right)}{16p}\left\|x\right\|{}^{2}-\breve{U}(x)}dx$
$\displaystyle\leq\int e^{\frac{a}{8p}\left\|x\right\|{}^{2}-\breve{U}(x)}dx$
$\displaystyle\leq
e^{\left(\tilde{d}+\tilde{\mu}\right)+2\sum_{i}L_{i}R^{1+\alpha_{i}}}.$ (E.84)
Combining this with the previous inequality, we obtain
$\displaystyle\left\|\frac{d\mu_{2}}{d\mu_{1}}\right\|_{L^{p}\left(\mu_{1}\right)}^{p}$
$\displaystyle\leq\left(\frac{e^{\left(\left(\tilde{d}+\tilde{\mu}\right)+2\sum_{i}L_{i}R^{1+\alpha_{i}}\right)}}{\frac{\pi^{\frac{d}{2}}}{\left(\frac{a}{16p}+\sum_{i<N}\frac{L_{i}}{1+\alpha_{i}}+\frac{L_{N}}{2}\right)^{\frac{d}{2}}}e^{-\left|U(0)\right|-2\sum_{i}L_{i}R^{1+\alpha_{i}}-\sum_{i<N}\frac{L_{i}}{1+\alpha_{i}}}}\right)^{p+\frac{1}{2}}$
$\displaystyle=\Lambda^{p}.$ (E.85)
Taking the logarithm of $\Lambda$, we get
$\displaystyle\log\Lambda$
$\displaystyle=\frac{\left(p+\frac{1}{2}\right)}{p}\log\left(\frac{e^{\left(\left(\tilde{d}+\tilde{\mu}\right)+2\sum_{i}L_{i}R^{1+\alpha_{i}}\right)}}{\frac{\pi^{\frac{d}{2}}}{\left(\frac{a}{16p}+\sum_{i<N}\frac{L_{i}}{1+\alpha_{i}}+\frac{L_{N}}{2}\right)^{\frac{d}{2}}}e^{-\left|U(0)\right|-2\sum_{i}L_{i}R^{1+\alpha_{i}}-\sum_{i<N}\frac{L_{i}}{1+\alpha_{i}}}}\right)$
$\displaystyle=\frac{\left(p+\frac{1}{2}\right)}{p}\left(\tilde{d}+\frac{d}{2}\log\left(\frac{a}{8p}+\frac{a}{16p}+\sum_{i<N}\frac{L_{i}}{1+\alpha_{i}}+\frac{L_{N}}{2}\right)-\frac{d}{2}\log\left(\pi\right)\right)$
$\displaystyle+\frac{\left(p+\frac{1}{2}\right)}{p}\left(\tilde{\mu}+2\sum_{i}L_{i}R^{1+\alpha_{i}}+\left|U(0)\right|+\sum_{i<N}\frac{L_{i}}{1+\alpha_{i}}\right)$
$\displaystyle=\tilde{O}\left(d\right).$ (E.86)
Since $\mu_{2}$ is log-concave, Lemma 9 gives, for some universal constant $C$
(not depending on $d$), that $\mu_{2}$ is log-Sobolev with constant
$\displaystyle C(\Lambda,p)=\frac{1}{C}\frac{a}{8p}\frac{p-1}{p}\frac{1}{1+\log\Lambda}=\frac{1}{C}\frac{a}{8p}\frac{p-1}{p}\frac{1}{1+\tilde{O}\left(d\right)}=\frac{1}{\tilde{O}\left(d\right)}.$ (E.87)
From this, the Holley–Stroock perturbation theorem shows that $U(x)$ is
log-Sobolev on $\mathbb{R}^{d}$ with constant
$\frac{1}{\tilde{O}\left(d\right)}e^{-2\sum_{i}L_{i}R^{1+\alpha_{i}}}$.
Applying Theorem 3.1 now yields the desired result.
∎
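To make the algorithm being analyzed concrete, the following is a minimal sketch of the LMC/ULA recursion $x_{k+1}=x_{k}-\eta\nabla U(x_{k})+\sqrt{2\eta}\,\xi_{k}$ with $\xi_{k}\sim N(0,I_{d})$; the toy potential, step size, and iteration count below are our own illustrative choices, not quantities from the theorem.

```python
import numpy as np

def ula(grad_U, x0, eta, n_iters, rng):
    """Run the unadjusted Langevin algorithm and return the final iterate."""
    x = np.array(x0, dtype=float)
    for _ in range(n_iters):
        # Langevin step: gradient descent on U plus Gaussian noise of scale sqrt(2*eta).
        x = x - eta * grad_U(x) + np.sqrt(2.0 * eta) * rng.standard_normal(x.shape)
    return x

# Toy weakly smooth, 2-dissipative potential U(x) = ||x||^(1+alpha)/(1+alpha) + ||x||^2/2,
# whose gradient is ||x||^(alpha-1) * x + x.
alpha = 0.5
grad_U = lambda x: np.linalg.norm(x) ** (alpha - 1.0) * x + x

rng = np.random.default_rng(0)
samples = np.array([ula(grad_U, np.ones(2), eta=1e-3, n_iters=5000, rng=rng)
                    for _ in range(200)])
print(samples.mean(axis=0))  # roughly 0: the target is symmetric about the origin
```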
## Appendix F Proofs of additional lemmas
###### Lemma F.13.
For any $0\leq\varpi\leq k\in\mathbb{N}^{+}$, we have
$\|x+y\|^{\varpi}\leq 2^{k-1}(\|x\|^{\varpi}+\|y\|^{\varpi}).$ (F.1)
###### Proof.
Consider the functions $f_{k}(u)=2^{k-1}(u^{\varpi}+1)-(1+u)^{\varpi}$. We
prove $f_{k}(u)\geq 0$ for every $u\geq 0$ by induction on $k$. For $k=1$,
since $0\leq\varpi\leq 1$, we have $f_{1}^{\prime}(u)=\varpi
u^{\varpi-1}-\varpi(1+u)^{\varpi-1}\geq 0$. This implies that $f_{1}$ is
increasing on $\left[0,\infty\right)$, and since $f_{1}(0)=0$, it follows that
$f_{1}(u)\geq 0$. Therefore, the statement is true for $k=1$.
Assume the statement is true for $k=n$; we show that it is also true for
$k=n+1$. Differentiating $f_{n+1}(u)$ gives
$\displaystyle f_{n+1}^{\prime}(u)=2^{n}\varpi u^{\varpi-1}-\varpi(1+u)^{\varpi-1}=\varpi\left(2^{n}u^{\varpi-1}-(1+u)^{\varpi-1}\right)\geq 0,$ (F.2)
for $1\leq\varpi\leq n+1$ by the induction assumption, while for
$0\leq\varpi\leq 1$, $2^{n}u^{\varpi-1}-(1+u)^{\varpi-1}\geq
u^{\varpi-1}-(1+u)^{\varpi-1}\geq 0$. Hence, $f_{n+1}$ is increasing on
$\left[0,\infty\right)$, and since $f_{n+1}(0)=2^{n}-1\geq 0$, this implies
$f_{n+1}\geq 0$.
Applying this to our case for $0\leq\varpi\leq k$ (assuming without loss of
generality $x\neq 0$),
$\displaystyle 2^{k-1}(\|x\|^{\varpi}+\|y\|^{\varpi})=\|x\|^{\varpi}2^{k-1}\left(1+\left(\frac{\left\|y\right\|}{\left\|x\right\|}\right)^{\varpi}\right)\geq\|x\|^{\varpi}\left(1+\frac{\left\|y\right\|}{\left\|x\right\|}\right)^{\varpi}=\left(\left\|x\right\|+\left\|y\right\|\right)^{\varpi}\geq\left\|x+y\right\|^{\varpi},$ (F.3)
which concludes the proof. ∎
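As a quick numerical sanity check of the inequality (not part of the proof), one can verify it on random draws; NumPy is assumed available.

```python
import numpy as np

rng = np.random.default_rng(0)
k = 3
for _ in range(10_000):
    varpi = rng.uniform(0.0, k)                    # any exponent with 0 <= varpi <= k
    x, y = rng.standard_normal(5), rng.standard_normal(5)
    lhs = np.linalg.norm(x + y) ** varpi
    rhs = 2 ** (k - 1) * (np.linalg.norm(x) ** varpi + np.linalg.norm(y) ** varpi)
    assert lhs <= rhs + 1e-9
print("Lemma F.13 held on all random trials")
```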
###### Lemma F.14.
For $\theta>0$,
$f\left(r\right)=m\left(r\right)r^{2}=\mu\left(1+r^{2}\right)^{-\frac{\theta}{2}}r^{2}\geq\frac{\mu}{2}r^{2-\theta}-\frac{\mu}{2}2^{\frac{2-\theta}{\theta}},$
and for $\theta=0$, $f\left(r\right)=\mu r^{2}$.
###### Proof.
For $\theta=0$, the claim is immediate. For $\theta>0$, by Lemma 2 above, for
$r\geq 2^{\frac{1}{\theta}}$,
$\displaystyle f\left(r\right)=\mu\left(1+r^{2}\right)^{-\frac{\theta}{2}}r^{2}\geq\mu\left(1+r^{\theta}\right)^{-1}r^{2}=\mu\left(r^{2\theta}-1\right)^{-1}r^{2}\left(r^{\theta}-1\right)\geq\mu r^{2-2\theta}\left(r^{\theta}-1\right)\geq\frac{\mu}{2}r^{2-\theta}.$ (F.4)
For $r<2^{\frac{1}{\theta}}$, $f\left(r\right)\geq
0\geq\frac{\mu}{2}r^{2-\theta}-\frac{\mu}{2}2^{\frac{2-\theta}{\theta}}$,
which concludes the proof. ∎
###### Lemma F.15.
$f\left(\theta\right)=\left|\left(1+\left\|x\right\|^{2}\right)^{\frac{\theta}{2}}-\left(1+\left\|x-y\right\|^{2}\right)^{\frac{\theta}{2}}\right|$ is
an increasing function of $\theta$.
###### Proof.
If $\left\|x\right\|\geq\left\|x-y\right\|$, we have
$f\left(\theta\right)=\left(1+\left\|x\right\|^{2}\right)^{\frac{\theta}{2}}-\left(1+\left\|x-y\right\|^{2}\right)^{\frac{\theta}{2}}$.
Differentiating $f$ with respect to $\theta$ gives
$\displaystyle f^{\prime}\left(\theta\right)=\frac{1}{2}\ln\left(1+\left\|x\right\|^{2}\right)\left(1+\left\|x\right\|^{2}\right)^{\frac{\theta}{2}}-\frac{1}{2}\ln\left(1+\left\|x-y\right\|^{2}\right)\left(1+\left\|x-y\right\|^{2}\right)^{\frac{\theta}{2}}\geq 0.$ (F.5)
Similarly, if $\left\|x\right\|\leq\left\|x-y\right\|$, we also obtain
$f^{\prime}\left(\theta\right)\geq 0$, which implies that $f$ is increasing,
as desired. ∎
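Monotonicity in $\theta$ is also easy to confirm numerically; the random $x,y$ below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)
x, y = rng.standard_normal(4), rng.standard_normal(4)
thetas = np.linspace(0.1, 5.0, 200)
f = np.abs((1 + x @ x) ** (thetas / 2) - (1 + (x - y) @ (x - y)) ** (thetas / 2))
assert np.all(np.diff(f) >= -1e-12)  # f should be non-decreasing in theta
print("Lemma F.15 monotonicity held on the grid")
```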
###### Lemma F.16.
If $\xi\sim N_{p}\left(0,I_{d}\right)$, then
$d^{\left\lfloor\frac{n}{p}\right\rfloor}\leq E(\left\|\xi\right\|_{p}^{n})\leq\left[d+\frac{n}{2}\right]^{\frac{n}{p}},$
where $\left\lfloor x\right\rfloor$ denotes the largest integer less than or
equal to $x$. If $n=kp$, then
$E(\left\|\xi\right\|_{p}^{n})=d(d+p)\cdots(d+(k-1)p)$.
###### Proof.
From [29], we have
$E(\left\|\xi\right\|_{p}^{n})=p^{\frac{n}{p}}\frac{\Gamma\left(\frac{d+n}{p}\right)}{\Gamma\left(\frac{d}{p}\right)}.$
Since $\Gamma$ is an increasing function,
$p^{\frac{n}{p}}\frac{\Gamma\left(\frac{d+n}{p}\right)}{\Gamma\left(\frac{d}{p}\right)}\geq
p^{\frac{n}{p}}\frac{\Gamma\left(\frac{d}{p}+\left\lfloor\frac{n}{p}\right\rfloor\right)}{\Gamma\left(\frac{d}{p}\right)}=p^{\frac{n}{p}}\frac{d}{p}\ldots\left(\frac{d}{p}+\left\lfloor\frac{n}{p}\right\rfloor-1\right)\geq
d^{\left\lfloor\frac{n}{p}\right\rfloor}.$
If $n=kp$ for $k\in\mathbb{N}$, then
$E(\left\|\xi\right\|_{p}^{n})=p^{\frac{n}{p}}\frac{d}{p}\ldots\left(\frac{d}{p}+k-1\right)=d(d+p)\cdots(d+(k-1)p).$
If $n\neq kp$, let $k=\left\lfloor\frac{n}{p}\right\rfloor$. Since $\Gamma$ is
log-convex, by Jensen's inequality, for any $p\geq 1$ we obtain
$\displaystyle\left(1-\frac{n}{p\left\lfloor\frac{n}{p}\right\rfloor+p}\right)\log\Gamma\left(\frac{d}{p}\right)+\frac{n}{p\left\lfloor\frac{n}{p}\right\rfloor+p}\log\Gamma\left(\frac{d}{p}+\left\lfloor\frac{n}{p}\right\rfloor+1\right)$
$\displaystyle\geq\log\Gamma\left(\left(1-\frac{n}{p\left\lfloor\frac{n}{p}\right\rfloor+p}\right)\frac{d}{p}+\frac{n}{p\left\lfloor\frac{n}{p}\right\rfloor+p}\left(\frac{d}{p}+\left\lfloor\frac{n}{p}\right\rfloor+1\right)\right)$
$\displaystyle\geq\log\Gamma\left(\frac{d+n}{p}\right)>0.$
Exponentiating both sides, we get
$\Gamma\left(\frac{d}{p}\right)^{\left(1-\frac{n}{p\left\lfloor\frac{n}{p}\right\rfloor+p}\right)}\Gamma\left(\frac{d}{p}+\left\lfloor\frac{n}{p}\right\rfloor+1\right)^{\frac{n}{p\left\lfloor\frac{n}{p}\right\rfloor+p}}\geq\Gamma\left(\frac{d+n}{p}\right),$
which implies that
$\left[\frac{\Gamma\left(\frac{d}{p}+\left\lfloor\frac{n}{p}\right\rfloor+1\right)}{\Gamma\left(\frac{d}{p}\right)}\right]^{\frac{n}{p\left\lfloor\frac{n}{p}\right\rfloor+p}}\geq\frac{\Gamma\left(\frac{d+n}{p}\right)}{\Gamma\left(\frac{d}{p}\right)},\qquad\left[\frac{d}{p}\ldots\left(\frac{d}{p}+\left\lfloor\frac{n}{p}\right\rfloor\right)\right]^{\frac{n}{p\left\lfloor\frac{n}{p}\right\rfloor+p}}\geq\frac{\Gamma\left(\frac{d+n}{p}\right)}{\Gamma\left(\frac{d}{p}\right)}.$
Combining with
$E(\left\|\xi\right\|_{p}^{n})=p^{\frac{n}{p}}\frac{\Gamma\left(\frac{d+n}{p}\right)}{\Gamma\left(\frac{d}{p}\right)}$
gives the conclusion. ∎
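The exact moment formula from [29] can also be verified by Monte Carlo. The sketch below assumes the standard convention that each coordinate of $N_{p}(0,I_{d})$ has density proportional to $e^{-|t|^{p}/p}$, so that $|t|=(pG)^{1/p}$ with $G\sim\mathrm{Gamma}(1/p,1)$; NumPy/SciPy and the sample size are our own choices.

```python
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(1)
d, p, n, m = 5, 3.0, 4.0, 500_000

# Sample m draws from N_p(0, I_d): magnitude via the Gamma trick, random sign.
G = rng.gamma(1.0 / p, 1.0, size=(m, d))
xi = rng.choice([-1.0, 1.0], size=(m, d)) * (p * G) ** (1.0 / p)

# Compare the empirical n-th moment of the p-norm with the closed form.
empirical = np.mean(np.sum(np.abs(xi) ** p, axis=1) ** (n / p))
exact = p ** (n / p) * np.exp(gammaln((d + n) / p) - gammaln(d / p))
print(empirical, exact)  # should agree to a few significant digits
```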
## Acknowledgements
This research was funded in part by a University of Mississippi summer grant.
## References
* [1] Arellano-Valle, Reinaldo B. and Richter, Wolf-Dieter (2012). On skewed continuous $l_{n,p}$-symmetric distributions. Chilean Journal of Statistics 3 193–212.
* [2] Bakry, Dominique and Émery, Michel (1985). Diffusions hypercontractives. In Séminaire de Probabilités XIX 1983/84 177–206. Springer.
* [3] Bernton, Espen (2018). Langevin Monte Carlo and JKO splitting. arXiv preprint arXiv:1802.08671.
* [4] Bobkov, Sergey G. (1999). Isoperimetric and analytic inequalities for log-concave probability measures. The Annals of Probability 27 1903–1921.
* [5] Cattiaux, Patrick, Guillin, Arnaud and Wu, Li-Ming (2010). A note on Talagrand’s transportation inequality and logarithmic Sobolev inequality. Probability Theory and Related Fields 148 285–304.
* [6] Chatterji, Niladri S., Diakonikolas, Jelena, Jordan, Michael I. and Bartlett, Peter L. (2019). Langevin Monte Carlo without smoothness. arXiv preprint arXiv:1905.13285.
* [7] Chen, Zongchen and Vempala, Santosh S. (2019). Optimal convergence rate of Hamiltonian Monte Carlo for strongly logconcave distributions. arXiv preprint arXiv:1905.02313.
* [8] Cheng, Xiang and Bartlett, Peter L. (2018). Convergence of Langevin MCMC in KL-divergence. PMLR 83 186–211.
* [9] Dalalyan, Arnak S. (2017). Further and stronger analogy between sampling and optimization: Langevin Monte Carlo and gradient descent. arXiv preprint arXiv:1704.04752.
* [10] Dalalyan, Arnak S. and Karagulyan, Avetik (2019). User-friendly guarantees for the Langevin Monte Carlo with inaccurate gradient. Stochastic Processes and their Applications 129 5278–5311.
* [11] Dalalyan, Arnak S., Riou-Durand, Lionel and Karagulyan, Avetik (2019). Bounding the error of discretized Langevin algorithms for non-strongly log-concave targets. arXiv preprint arXiv:1906.08530.
* [12] Durmus, Alain, Majewski, Szymon and Miasojedow, Blazej (2019). Analysis of Langevin Monte Carlo via convex optimization. Journal of Machine Learning Research 20 73–1.
* [13] Durmus, Alain, Moulines, Eric and Pereyra, Marcelo (2018). Efficient Bayesian computation by proximal Markov chain Monte Carlo: when Langevin meets Moreau. SIAM Journal on Imaging Sciences 11 473–506.
* [14] Durmus, Alain, Moulines, Eric and Saksman, Eero (2017). On the convergence of Hamiltonian Monte Carlo. arXiv preprint arXiv:1705.00166.
* [15] Dwivedi, Raaz, Chen, Yuansi, Wainwright, Martin J. and Yu, Bin (2019). Log-concave sampling: Metropolis–Hastings algorithms are fast. Journal of Machine Learning Research 20 1–42.
* [16] Erdogdu, Murat A. and Hosseinzadeh, Rasa (2020). On the convergence of Langevin Monte Carlo: The interplay between tail growth and smoothness. arXiv preprint arXiv:2005.13097.
* [17] Gross, Leonard (1975). Logarithmic Sobolev inequalities. American Journal of Mathematics 97 1061–1083.
* [18] Holley, Richard and Stroock, Daniel W. (1986). Logarithmic Sobolev inequalities and stochastic Ising models.
* [19] Kečkić, Jovan D. and Vasić, Petar M. (1971). Some inequalities for the gamma function. Publications de l’Institut Mathématique 11 107–114.
* [20] Kloeden, Peter E. and Platen, Eckhard (1992). Stochastic differential equations. In Numerical Solution of Stochastic Differential Equations 103–160. Springer.
* [21] Ledoux, Michel (2001). Logarithmic Sobolev inequalities for unbounded spin systems revisited. In Séminaire de Probabilités XXXV 167–194. Springer.
* [22] Lee, Yin Tat and Vempala, Santosh S. (2018). Convergence rate of Riemannian Hamiltonian Monte Carlo and faster polytope volume computation. In Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing 1115–1121.
* [23] Li, Xuechen, Wu, Yi, Mackey, Lester and Erdogdu, Murat A. (2019). Stochastic Runge–Kutta accelerates Langevin Monte Carlo and beyond. In Advances in Neural Information Processing Systems 7748–7760.
* [24] Ma, Yi-An, Chen, Yuansi, Jin, Chi, Flammarion, Nicolas and Jordan, Michael I. (2019). Sampling can be faster than optimization. Proceedings of the National Academy of Sciences 116 20881–20885.
* [25] Mangoubi, Oren and Smith, Aaron (2017). Rapid mixing of Hamiltonian Monte Carlo on strongly log-concave distributions. arXiv preprint arXiv:1708.07114.
* [26] Mangoubi, Oren and Vishnoi, Nisheeth (2018). Dimensionally tight bounds for second-order Hamiltonian Monte Carlo. In Advances in Neural Information Processing Systems 6027–6037.
* [27] Nesterov, Yurii and Spokoiny, Vladimir (2017). Random gradient-free minimization of convex functions. Foundations of Computational Mathematics 17 527–566.
* [28] Raginsky, Maxim, Rakhlin, Alexander and Telgarsky, Matus (2017). Non-convex learning via stochastic gradient Langevin dynamics: a nonasymptotic analysis. arXiv preprint arXiv:1702.03849.
* [29] Richter, Wolf-Dieter (2007). Generalized spherical and simplicial coordinates. Journal of Mathematical Analysis and Applications 336 1187–1202.
* [30] Robert, Christian and Casella, George (2013). Monte Carlo Statistical Methods. Springer Science & Business Media.
* [31] Vempala, Santosh and Wibisono, Andre (2019). Rapid convergence of the unadjusted Langevin algorithm: Isoperimetry suffices. In Advances in Neural Information Processing Systems 8094–8106.
* [32] Villani, Cédric (2008). Optimal Transport: Old and New 338. Springer Science & Business Media.
* [33] Welling, Max and Teh, Yee W. (2011). Bayesian learning via stochastic gradient Langevin dynamics. In Proceedings of the 28th International Conference on Machine Learning (ICML-11) 681–688.
* [34] Xu, Pan, Chen, Jinghui, Zou, Difan and Gu, Quanquan (2018). Global convergence of Langevin dynamics based algorithms for nonconvex optimization. In Advances in Neural Information Processing Systems 3122–3133.
* [35] Yan, Min (2012). Extension of convex function. arXiv preprint arXiv:1207.0944.